Plant_Mol_Biol-4-1-2295252
The microRNA regulated SBP-box genes SPL9 and SPL15 control shoot maturation in Arabidopsis
Throughout development the Arabidopsis shoot apical meristem successively undergoes several major phase transitions, such as the juvenile-to-adult and floral transitions, until, finally, it produces flowers instead of leaves and shoots. Members of the Arabidopsis SBP-box gene family of transcription factors have been implicated in promoting the floral transition in a miR156-dependent manner and, accordingly, transgenics constitutively over-expressing this microRNA are delayed in flowering. To clarify their roles in Arabidopsis shoot development, we analysed two of the 11 miR156-regulated Arabidopsis SBP-box genes, the likely paralogous genes SPL9 and SPL15. Single and double mutant phenotype analysis showed these genes to act redundantly in controlling the juvenile-to-adult phase transition. In addition, their loss-of-function results in a shortened plastochron during vegetative growth, altered inflorescence architecture and enhanced branching. In these respects, the double mutant partly phenocopies transgenic plants constitutively over-expressing MIR156b, strongly suggesting that repression of SPL9 and SPL15 contributes substantially to the phenotype of these transgenics.
Introduction
During maturation, plants pass through several developmentally distinct growth phases in which the shoot gradually gains reproductive competence (Poethig 1990). After the transition from embryonic to postembryonic growth, plants undergo at least two further phase transitions, the vegetative and the reproductive phase change. During vegetative growth, rosette leaves are initiated at the flanks of the shoot apical meristem (SAM) at a certain rate; the time interval between the initiation of successive leaves is referred to as the plastochron (Erickson and Michelini 1957). After going through the reproductive phase transition, also known as the floral transition, the SAM starts to initiate floral buds instead of leaves. In Arabidopsis, as in many other plants showing day-length-dependent flowering, the floral transition is preceded by a transition from juvenile to adult growth. This switch, known as the vegetative phase change, is physiologically defined as achieving competence to respond to photoperiodic induction of flowering (Poethig 1990). The transition from juvenile to adult growth is gradual and rather subtle but generally can be followed by several morphological markers. In Arabidopsis, for example, leaves produced in the juvenile phase have long petioles, are small, round and lack abaxial trichomes. In contrast, short petioles, an elliptical shape and the development of trichomes on the abaxial side represent adult traits (Telfer et al. 1997). Regulation of these developmental transitions is largely dependent on (changes in) environmental cues such as day length, light intensity and temperature, as well as on endogenous factors such as the plant hormone gibberellin (Telfer et al. 1997). Whereas the molecular genetic mechanisms underlying the floral transition have been worked out in increasing detail (Komeda 2004), only recently have we begun to understand the molecular genetic basis of the vegetative phase change. Most genes suggested to play a role in promoting the latter phase change have been identified by the analysis of mutants showing a precocious onset of adult traits and, intriguingly, they link the vegetative phase change to RNA silencing pathways. These genes include the Arabidopsis ortholog of exportin 5/MSN5, HASTY (HST; Telfer and Poethig 1998; Bollman et al. 2003),
the zinc-finger-domain protein encoding locus SERRATE (SE; Clarke et al. 1999) and ZIPPY (ZIP), an AGO-family member (Hunter et al. 2003). More recently, screens for mutations with zip-like phenotypes resulted in alleles of SUPPRESSOR OF GENE SILENCING3 (SGS3) and RNA-DEPENDENT RNA POLYMERASE6 (RDR6), both genes required for posttranscriptional gene silencing (PTGS) and acting in the same pathways as ZIP and HST (Peragine et al. 2004). Furthermore, a precocious vegetative phase change has also been found in dicer-like 4 (dcl4) mutants (Gasciolli et al. 2005; Xie et al. 2005; Yoshikawa et al. 2005). One explanation for the observed effects could be that target genes of this silencing pathway play a positive role in the vegetative phase change and that their down-regulation consequently promotes juvenility. Hence, mutations in genes involved in this silencing pathway, such as the ones described above, cause an accelerated vegetative development. In line with this idea, members of the plant-specific SBP-box gene family of transcription factors have been implicated in promoting vegetative and floral phase transitions. In particular, overexpression of the Arabidopsis SBP-box gene SPL3 leads to early flowering and a significantly earlier appearance of abaxial trichomes on the rosette leaves (Cardon et al. 1997; Wu and Poethig 2006). Interestingly, together with 10 of 16 other family members, SPL3 expression is post-transcriptionally controlled by miR156 and probably also by the very closely related miR157 (Rhoades et al. 2002; Schwab et al. 2005; Wu and Poethig 2006; Gandikota et al. 2007). Consistent with its role in down-regulating SPL3 and related SPL target genes, constitutive overexpression of miR156-encoding loci has been shown to cause the production of a significantly larger number of leaves with juvenile characteristics and a delay in flowering (Schwab et al. 2005; Wu and Poethig 2006). Although the available data clearly point to a regulatory role for the miRNA-regulated SPL genes in the temporal development of the Arabidopsis shoot, the contribution of the individual genes to the described phenotypes remains to be determined. Therefore, we identified and isolated mutant alleles for single SPL genes. In comparison to other miR156-targeted SPL genes, available expression data (AtGenExpress; Schmid et al. 2005) show SPL9 and SPL15 to be already quite active in the vegetative shoot apex. Accordingly, their mutant phenotypes were found to affect vegetative development. Here we report the mutant analysis of SPL9 and SPL15, two likely paralogous members of the SQUAMOSA PROMOTER BINDING PROTEIN-LIKE (SPL) transcription factor family (Cardon et al. 1999), and discuss their redundant regulatory role in the vegetative phase change and the temporal initiation of rosette leaves.
Materials and methods
Plant material and plant growth conditions
All of the genetic stocks described in this paper were in the Columbia background. The T-DNA insertion lines SALK_006573 (spl9-2), SALK_074426 (spl15-1) and SALK_138712 (spl15-2) were obtained from the Nottingham Arabidopsis Stock Centre (NASC). The T-DNA insertion lines GABI-Kat 544F04 (spl9-3) and WiscDsLox 457 (spl15-3) were obtained from GABI-Kat and the Arabidopsis Biological Resource Center (ABRC), respectively. Insertion mutant information for NASC or ABRC lines was obtained from the SIGnAL website at http://signal.salk.edu. Plants homozygous for the T-DNA insertions were identified by PCR using T-DNA left border- and gene-specific primers.
T-DNA specific left border primers for SALK, GABI-Kat and WiscDsLox T-DNAs were 5′-GCGTGGACCGCTTGCTGCAACT-3′, 5′-ATATTGACCATCATACTCATTGC-3′ and 5′-TGGCAGGATATATTGTGGTGTAAACA-3′, respectively. In combination with the respective left border primer we used the following gene-specific primers: 5′-GCTATGGCTTAAGCCTTAAGTTAAAAGG-3′ for SALK_006573, 5′-CGTAGCTGTCGTGGACTAGTGTCAATC-3′ for SALK_074426 and SALK_138712, 5′-AACCTCTGTTCGATACCAGCCACAG-3′ for GABI-Kat 544F04 and 5′-AGCCATTGTAACCTTATCGGAGAATGAG-3′ for WiscDsLox 457. The stable En-1 insertion mutant 5ABA33-H1 (spl9-1) was obtained from the ZIGIA population (Unte 2001). Plants homozygous for a four base pair insertion in the first exon of SPL9, caused by the excision of the En-1 transposon, were backcrossed with wild type twice to obtain plants exclusively containing the four base-pair insertion without any further transposon contamination. To identify plants containing the mutation we used the following primer combination: 5′-AGTAAGAGGAAACCACCATGGAGATGG-3′ (forward) and 5′-AACCTTCCACTTGGCACCTTGGTATA-3′ (reverse, recognises the insertion). All plants were grown in plastic trays or pots filled with a ready-to-use commercial, pre-fertilized soil mixture (Type ED73, Werkverband e.V., Sinntal-Jossa, Germany). For stratification, seeds were kept on moist paper at 4°C in the dark for 4–5 days before being transferred to soil (i.e. "sowing") in growth chambers at 22°C and 50% relative humidity. Germination and cultivation of the plants in long-day conditions (16 h light, 8 h dark) were either under approx. 70 μE/cm²/s (LD1) or 175 μE/cm²/s (LD2) light provided by fluorescent tubes (L58W/840 and L58W/25, Osram, Munich, Germany). Plants in short-day (SD) conditions (8 h light, 16 h dark) were cultivated under approx. 450 μE/cm²/s light. To determine sensitivity to photoperiodic induction of flowering, stratified seeds were germinated in a modified SD with a 9 h light period. The developing plants were kept in these conditions for 21 days before they were transferred to similar growth chambers with continuous light provided by Osram HQIT 400 W lamps. Batches of plants were returned to the modified SD conditions after 1, 3 and 5 days.
Phenotypic analysis
Flowering time was measured as the time between sowing and anthesis (opening of the first flower). Bolting time was recorded when the main inflorescence had reached a height of 0.5 cm. Inflorescence height was measured between the rosette and the first flower of the main inflorescence of plants whose first siliques had fully ripened. To determine the number of side shoots, all side shoots longer than 0.5 cm were scored. Abaxial trichomes were scored using a Leica MZFLIII stereomicroscope (Wetzlar, Germany). An estimate of the rosette leaf initiation rate (L/D, day−1) was obtained by dividing the number (L) of rosette leaves having reached at least 0.5 cm in length by the number of days (D) between sowing and determination. Note that this value reflects, but is not equal to, the (average) plastochron, as it would have to be corrected for the true start of initiation of the first leaf as well as for the time a newly initiated leaf needs to reach a length of 0.5 cm (see the sketch below).
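To make this estimate concrete, the following minimal sketch (Python; the helper function and script structure are ours, while the leaf counts are those reported in the Results for plants grown 41 days in SD) reproduces the rate and fold-change calculations:

```python
# Minimal sketch of the leaf initiation rate estimate (L/D): rosette
# leaves of >= 0.5 cm divided by the days between sowing and scoring.
# Leaf counts below are those reported in the Results (41 days in SD).
DAYS = 41
counts = {"Col-0": 24.5, "spl9 spl15": 33.8, "35S::MIR156b": 43.4}

def initiation_rate(leaves, days):
    """Leaves initiated per day; reflects, but does not equal, 1/plastochron."""
    return leaves / days

for genotype, leaves in counts.items():
    rate = initiation_rate(leaves, DAYS)
    fold = leaves / counts["Col-0"]
    print(f"{genotype}: {rate:.2f} leaves/day, {fold:.1f}-fold vs. wild type")
```

With these counts the sketch returns the 1.4- and 1.8-fold increases quoted in the Results.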
Histological analysis
Apical regions were isolated from plants grown in SD for 41 days by trimming with a razor blade. The tissue was fixed in 4% formaldehyde/0.1 M PO4 buffer, pH 7.0, for 48 h and embedded in paraffin using a Leica ASP300 tissue processor (Wetzlar, Germany). Embedded apices were cut into 8 μm cross sections using a Jung Autocut 2055 microtome and photographed using a Zeiss Axiophot microscope (Göttingen, Germany) equipped with a KY-F5U 28CCD camera (JVC, Yokohama, Japan). The first cross section in which the apex was visible plus the two successive sections were used to determine the diameter of the apical region; the average of these three measured values was taken for comparison. In addition, the cross sectional area and the circularity factor (= 4πA/P², where A is the area and P the perimeter) of the leaf primordia, outlined by hand on the photographs, were determined as well. The measurements were performed with the help of the program ImageJ 1.35s (Wayne Rasband, National Institutes of Health, USA).
GA3 treatments
Col-0, the spl9 spl15 double mutant and the 35S::MIR156b overexpressor were grown in LD1 conditions. Immediately after germination, half of the plants were treated by spraying with 100 μM GA3 in 0.02% Tween 20, and this was repeated twice per week until they started flowering. The other half of the plants was similarly treated with 0.02% Tween 20 alone.
Phylogenetic comparison
Multiple alignments of amino acid sequences were generated with the ClustalW program of the MacVector 7.2.2 software package (Accelrys Ltd., Cambridge, UK) using the BLOSUM 30 matrix with a gap-opening penalty of 10 and a gap-extension penalty of 0.05. Only the SBP-domain was used for the phylogenetic reconstruction. The tree was constructed using the neighbour-joining algorithm of the MacVector 7.2.2 software package.
Quantitative real-time PCR analysis
For quantitative RT-PCR (using the iQ5 real-time PCR detection system, Bio-Rad, Munich, Germany), apical regions (roots and as much of the leaves as possible removed using tweezers) were collected from plants cultivated for 5, 9, 13, 27 and 32 days after sowing in LD1 conditions. Total RNA was extracted using the RNeasy plant mini kit (Qiagen, Hilden, Germany), including an on-column DNase digestion. First-strand cDNA was synthesized using SuperScript III RNase H reverse transcriptase (Invitrogen) starting with 2 μg of total RNA primed with an oligo(dT)12–18 primer (Gibco BRL, Karlsruhe, Germany). SPL9-specific primers, 5′-AGAACATTGGATACAACAGTGATGAGG-3′ (forward) and 5′-GTTTGAGTCGCCAATTCCCTTGTAGC-3′ (reverse), as well as SPL15-specific primers, 5′-TTGGGAGATCCTACTGCGTGGTCAACC-3′ (forward) and 5′-AGCCATTGTAACCTTATCGGAGAATGAG-3′ (reverse), were designed to generate PCR products of 171 and 300 bp, respectively. Based on the analysis of Czechowski et al. (2005), PP2A expression was used as reference for transcript normalization with the primer pair 5′-TAACGTGGCCAAAATGATGC-3′ (forward) and 5′-GTTCTCCACAACCGCTTGGT-3′ (reverse). The PCR efficiencies for the SPL9, SPL15 and PP2A primers were determined to be 96, 95.5 and 96%, respectively (a sketch of the efficiency-corrected quantification is given below). Quantifications, in triplicate, were performed using the Brilliant SYBRGreen QPCR kit (Stratagene, La Jolla, CA, USA), according to the manufacturer's protocol, in a final volume of 25 μl. PCR was carried out in 250 μl optical reaction vials (Stratagene) heated for 10 min at 95°C to hot-start the Taq polymerase, followed by 40 cycles of denaturation (30 s at 95°C), annealing (30 s at 58°C) and extension (30 s at 72°C).
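How such measurements translate into the relative transcript levels of Fig. 1b can be illustrated with a minimal, efficiency-corrected quantification sketch (a Pfaffl-style ratio; only the primer efficiencies come from the text, and all Ct values below are hypothetical placeholders):

```python
# Efficiency-corrected relative quantification (Pfaffl-style). Only the
# primer efficiencies (SPL9 96%, SPL15 95.5%, PP2A 96%) come from the
# text; all Ct values below are hypothetical placeholders.
E_TARGET = {"SPL9": 1.96, "SPL15": 1.955}  # per-cycle amplification factors
E_REF = 1.96                               # PP2A reference primers

def relative_expression(gene, ct_gene, ct_ref, ct_gene_cal, ct_ref_cal):
    """Expression of `gene`, normalized to PP2A and expressed relative to
    the calibrator sample (here: SPL9 at 5 days after sowing = 1)."""
    target_ratio = E_TARGET[gene] ** (ct_gene_cal - ct_gene)
    ref_ratio = E_REF ** (ct_ref_cal - ct_ref)
    return target_ratio / ref_ratio

# Hypothetical example: SPL9 Ct drops by 2.6 cycles between 5 and 32 DAS
# while PP2A stays flat, giving roughly the six-fold increase of Fig. 1b.
print(relative_expression("SPL9", ct_gene=24.4, ct_ref=20.0,
                          ct_gene_cal=27.0, ct_ref_cal=20.0))  # ~5.8
```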
Semi-quantitative RT-PCR analysis
Total RNA was extracted from seedlings of the Col-0 wild-type, mutant and transgenic lines using the RNeasy plant mini kit (Qiagen). RT-PCR with equal amounts of RNA was performed using the one-step RT-PCR kit (Qiagen). SPL9 knockout lines were identified using the following primer pair: 5′-GGTCGGGTCAGTCGGGTCAGATACC-3′ (forward) and 5′-ACTGGCCGCCTCATCACTCTTGTATCC-3′ (reverse). SPL9 mRNA is expected to yield a 415 bp fragment, whereas genomic SPL9 DNA is expected to yield a 1,138 bp fragment. SPL15 knockout lines were identified using the following primer pair: 5′-AGAAGCAAGAACCGGGTCAATACC-3′ (forward) and 5′-AGCCATTGTAACCTTATCGGAGAATGAG-3′ (reverse). SPL15 mRNA is expected to yield a 666 bp fragment, whereas genomic SPL15 DNA is expected to yield a 1,004 bp fragment. RT-PCR of the loading control (RAN3; At5g55190) was performed with the primer pair 5′-ACCAGCAAACCGTGGATTACCCTAGC-3′ (forward) and 5′-ATTCCACAAAGTGAAGATTAGCGTCC-3′ (reverse) to yield a fragment of 531 bp when derived from RAN3 mRNA (genomic RAN3 is expected to yield a 1,314 bp fragment).
Statistics
Graphical representations of numerical data were generated with Microsoft Excel (Microsoft Germany, Munich) and statistical tests were performed using the Student's t-test within this program. P-values below 0.05 were considered to indicate statistically significant differences.
Remaining techniques and methods
Standard molecular biology techniques were performed as described by Sambrook et al. (1989). Graphical plots and digital photographic images were cropped and assembled using Adobe Photoshop (Adobe Systems, San Jose, CA, USA).
Results
Molecular characterization of the Arabidopsis SBP-box genes SPL9 and SPL15
Mutant alleles for the Arabidopsis SBP-box genes SPL9 (At2g42200) and SPL15 (At3g57920) were obtained by screening publicly available electronic databases and seed stock centres for transposon- or T-DNA-tagged SPL genes. For SPL9, we identified three insertion alleles, designated spl9-1 to -3, and confirmed the nature and position of their mutations (see "Materials and methods"; Fig. 1a). The first allele, spl9-1, was identified in the En-transposon mutagenised ZIGIA population (Baumann et al. 1998; Unte 2001) and most likely resulted from the excision of an inserted En-1 transposon leaving behind a 4-bp insertion footprint in the first exon. This results in a frameshift in the coding sequence and the generation of a stop codon 86 base pairs after the insertion site. Both spl9-2 and spl9-3 represent T-DNA insertion mutant alleles identified within, respectively, the SALK collection (Alonso et al. 2003) and the GABI-Kat collection (Li et al. 2007).
Fig. 1 Molecular characterization of SPL9 and SPL15. (a) Schematic representation of the genomic loci of SPL9 and SPL15. The positions of the identified mutations are indicated by open triangles, numbered according to the respective alleles. Boxes represent exons. The SBP-box sequences are depicted in black, the remaining coding sequences in grey, and the untranslated 5′ and 3′ regions are left blank. (b) Changes in transcript levels of SPL9 and SPL15 in the shoot apical region during plant development in LD1, as determined by qRT-PCR and normalized against PP2A. For comparison, relative transcript levels were arbitrarily set to one for SPL9 5 days after sowing. Error bars indicate standard deviation. (c) Absence of SPL9 and SPL15 transcripts in seedlings of the respective mutants as validated by RT-PCR. Presence of the respective transcripts in Col-0 wild type seedlings is shown for comparison, and the amplification of RAN3 transcript serves as quality control and reference for quantification.
Fragment lengths are indicated on the left in base pairs (bp).
Three independent T-DNA insertion lines for SPL15 were also obtained and confirmed (see "Materials and methods"; Fig. 1a). Two alleles, designated spl15-1 and spl15-2, were identified within the SALK collection and one, spl15-3, within the WiscDsLox T-DNA collection. According to data available from the AtGenExpress micro-array database (Schmid et al. 2005), both SPL9 and SPL15 transcript levels increase during development and are preferentially found in the shoot apical region and in young flowers. We confirmed this temporal expression pattern with the help of qRT-PCR (Fig. 1b). In LD1 growing conditions (see "Materials and methods"), SPL9 and SPL15 transcript levels remain comparable during the first 2–3 weeks. Thereafter, the expression level of SPL9 starts to increase, followed by that of SPL15. Around 32 days after sowing (DAS), at about the time Col-0 plants have undergone their reproductive phase transition, SPL9 transcript levels have become approximately two and a half times higher than those of SPL15 and six times higher than at day 5. Arabidopsis lines carrying, as a transgene, a genomic fragment encompassing the SPL15 locus with a GUS reporter gene inserted downstream of the ATG start codon confirmed the predominantly apical expression of SPL15 (Supplementary Fig. 1). RT-PCR performed on mRNA isolated from whole seedlings homozygous for any of the three SPL9 or SPL15 mutant alleles (see "Materials and methods") did not result in the detection of RNA derived from the respective genes (Fig. 1c). This strongly suggests that all mutant alleles isolated represent functional null alleles. Accordingly, plants homozygous for any of the three spl9 mutant alleles showed essentially identical phenotypes, as did all three homozygous spl15 mutants (see phenotypic analysis below; Supplementary Fig. 2). Allelism tests confirmed that the observed phenotypes are indeed due to mutation of SPL9 or SPL15, respectively (data not shown). With over 75% of their amino acid residues identical, SPL9 and SPL15 show high similarity at the protein level. A phylogenetic comparison based on the SBP-domain of all 17 SPL genes in Arabidopsis also revealed SPL9 and SPL15 as most closely related and most likely forming a pair of paralogous genes (Fig. 2). Based on this close relationship some degree of functional redundancy could be expected and, therefore, we created double mutant lines to uncover such redundancy. To ascertain that phenotypic changes in the mutant plants are solely due to the loss-of-function of SPL9 and SPL15, we generated two different homozygous double mutant lines with the allelic combinations spl9-1 spl15-1 and spl9-2 spl15-2, respectively. Both lines exhibit the same phenotype, as described in the next section. For further detailed analysis the spl9-1 spl15-1 line was chosen; in the following it is referred to as spl9 spl15 for simplicity.
Fig. 2 Phylogenetic relationship of the Arabidopsis SBP-box genes as based on the conserved SBP-domain. The orthologous sequence of Chlamydomonas CRR1 has been used as outgroup. The likely paralogous pair SPL9 and SPL15 is boxed in grey. MiR156/157-targeted SPL genes are marked with an asterisk. Only bootstrap values over 50% are shown.
Phenotypic analysis of spl9 and spl15 mutants
For the phenotypic analysis, we compared the spl9 and spl15 single mutants and the spl9 spl15 double mutant to Col-0 wild type as well as to a 35S::MIR156b transgenic line (kindly provided by D. Weigel and R. Schwab).
An interesting aspect of the MIR156b over-expressing plants, as already noted by Schwab and co-workers (2005), is an increased rate of rosette leaf initiation which, in combination with modestly delayed flowering, results in the obviously denser rosettes of fully developed plants (Fig. 3a). In addition, advanced 35S::MIR156b plants became very bushy (Fig. 3c). We found these phenotypic aspects also displayed by the spl9 spl15 double mutant, albeit less pronounced (Fig. 3a, b). To quantify the contribution of SPL9 and SPL15 to these phenomena, we compared the numbers of rosette and cauline leaves of the respective single and double mutants and of the MIR156b overexpressor to wild type (Table 1). Whereas in LD2 growing conditions the 35S::MIR156b line produced ca. eleven more rosette leaves than wild type, the single mutant lines produced, on average, only 1–2 rosette leaves more. Again, with ca. six extra rosette leaves, the spl9 spl15 double mutant differed more from wild type than the single mutants and showed a stronger tendency towards the phenotype of the MIR156b overexpressor. The number of cauline leaves remained very comparable among all mutants and wild type, although some reduction may be observed, particularly in the spl9 mutants.
Fig. 3 Phenotypic analysis of spl9 and spl15 mutants. (a) Flowering spl9, spl15 and spl9 spl15 double mutant plants shown next to Col-0 wild type and the MIR156b overexpressor. Plants shown next to each other are of the same age and were grown in parallel under LD2 conditions. (b, c) Col-0 wild type, spl9 spl15 double mutant (b) and MIR156b overexpressor (c) at a more advanced stage of development in comparison to the plants shown in a.
Table 1 Phenotypic evaluation of spl9 and spl15 mutant alleles in comparison to Col wt and a 35S::MIR156b transgene under LD conditions

Genotype          Rosette leaves   Cauline leaves   Bolting (DAS)   Anthesis (DAS)   Juvenile leaves(a)   Infloresc. height(b) (cm)
                  Mean     SD      Mean     SD      Mean     SD     Mean     SD      Mean     SD          Mean     SD
Col-0 wt          13.1     1.1     3.9      0.5     16.3     1.2    20.9     1.5     5.5      1.2         12.1     1.3
spl9-1            14.3     1.1     3.4(c)   0.7     15.9     1.4    19.6(c)  1.7     8.3      0.8         8.0      1.0
spl9-2            15.6     1.2     3.5(c)   0.5     16.9     1.2    20.8     1.4     9.2      0.9         8.2      0.8
spl9-3            15.7     1.3     3.3(c)   0.6     17.5(c)  1.5    21.3     1.5     9.6      0.7         8.6      0.8
spl15-1           15.6     1.1     3.3(c)   0.4     16.6     0.8    20.9     1.0     7.1      0.7         10.3     1.2
spl15-2           14.9     1.0     3.5      0.5     17.1     1.9    20.9     1.9     7.3      0.8         11.4(c)  1.1
spl15-3           16.1     1.0     3.7      0.7     17.2(c)  0.8    21.8     1.1     7.6      0.7         11.1(c)  1.3
spl9-1 spl15-1    19.5(d)  1.4     3.4      0.9     18.5(d)  1.4    22.3(c)  1.8     10.9(d)  0.8         6.9(d)   1.4
spl9-2 spl15-2    18.9(d)  1.3     3.3(c)   0.6     19.0(d)  1.5    22.8(c)  1.8     10.8(d)  0.4         6.8(d)   0.8
35S::MIR156b      24.4(e)  2.4     3.2(c)   0.8     19.3     1.8    22.4(c)  2.1     14.8(e)  1.1         2.3(e)   0.7

16 plants per genotype were used for determination. DAS, days after sowing; SD, standard deviation. Values significantly different from Col-0 wt at the 0.001 confidence level are shown in italics.
(a) Number of rosette leaves formed before the first leaf with abaxial trichomes
(b) Measured from rosette to first flower
(c) Significantly different from Col-0 wt at the 0.05 but not the 0.001 confidence level
(d) Significantly different from the single mutants at the 0.05 confidence level
(e) Significantly different from the double and single mutants at the 0.05 confidence level

Also with respect to the development of side shoots, the spl9 spl15 double mutant differed more from wild type than the single mutants. In fact, the spl9-1 and spl15-1 single mutants were found not to differ significantly from Col-0 plants, which had formed, on average, 0.9 ± 0.6 side shoots of at least 0.5 cm in length by the time that the first siliques ripened.
With an average of 2.1 ± 1.1 side shoots, the spl9 spl15 double mutant did differ significantly from wild type, as did the 35S::MIR156b transgenic line with, on average, 4.1 ± 0.8 side shoots. Taken together, the phenotypic data of the spl9 spl15 double mutant clearly suggest a redundant function of SPL9 and SPL15 in shoot development and in the maintenance of apical dominance. Besides SPL9 and SPL15, miR156 is assumed to target exclusively other SPL genes (Rhoades et al. 2002), and these too were shown to be down-regulated in MIR156b over-expressing plants (Schwab et al. 2005). As the MIR156b over-expressor displays an even more severely aberrant phenotype than the spl9 spl15 double mutant, it can also be deduced that, in addition to SPL9 and SPL15, other miR156-controlled SPL genes act redundantly to control shoot development and apical dominance. In addition to the number of leaves formed before the appearance of the first flowers, we also determined for the same plants the time they needed to bolt and to reach anthesis (Table 1). On average, the spl9 and spl15 single mutants behaved similarly to wild type but, as expected based on the data of Schwab et al. (2005), the 35S::MIR156b line bolted and flowered somewhat later. The spl9 spl15 double mutants showed an intermediate behaviour. Whereas for the single mutants the few extra leaves formed may be accounted for by the slight delay in the transition to flowering, this delay is unlikely to explain the increased rosette leaf number of the spl9 spl15 double mutant. In line with the observation of Schwab and co-workers (2005), who reported a leaf-initiation rate per day in SD of 2.2 vs. 1.4 for the MIR156b overexpressor and the wild type, respectively, this is probably best explained by assuming a shortened plastochron during vegetative growth. To uncover a possible cause or consequence of this increased rate of leaf initiation, we microscopically examined cross sections of the vegetative shoot apex to determine size and phyllotaxy of the spl9 spl15 double mutant and the MIR156b overexpressor and compared these to wild type. To this purpose, plants were grown for 41 days in SD conditions, whereafter the number of rosette leaves having reached at least 0.5 cm in length was recorded and their apices dissected, fixed and embedded in paraffin (see "Materials and methods"). At this age, Col-0 plants were found to have formed on average 24.5 leaves of 0.5 cm or more, the double mutant 33.8 and the MIR156b overexpressor plants already 43.4 (Fig. 4a). As the plants were of the same age, these differences most likely reflect differences in plastochron. Alternatively, one might assume large temporal differences per genotype concerning initiation of the first leaf and/or development of the last leaf recorded to have reached 0.5 cm in length. However, we obtained no indications for such discrepancies and noted increasing differences in rosette density during the entire vegetative growth phase of wild type and mutants. From these data, a relative 1.8-fold (43.4/24.5) increase in leaf initiation rate of the 35S::MIR156b transgenics over wild type can be deduced, a value that matches the observation of Schwab et al. (2005) quite well. The leaf initiation rate of the double mutant seems to be increased by a factor of 1.4 (33.8/24.5) in comparison to wild type.
Fig. 4 Leaf formation of the spl9 spl15 double mutant in comparison to wild type and the MIR156b overexpressor.
(a) Determination of the average number of rosette leaves of at least 5 mm in length formed by the primary shoot and (b) of the average diameter of the primary shoot apex of spl9 spl15 double mutant, Col-0 wild type and MIR156b overexpressor plants after having grown for 41 days in SD. (c, d) Average circularity (c) and cross sectional area (d) of leaf primordia as determined from cross sections through the primary shoot as shown in e–g. Values represent averages of 10 subsequent primordia as indicated in different shades of grey according to the legend shown in d. (e–g) Cross sections through primary shoot apices of a Col-0 wild type (e), a spl9 spl15 double mutant (f) and a MIR156b overexpressor plant (g) after having grown for 41 days in SD. Error bars in a–d indicate standard deviation (n = 6). Successive leaf primordia in e–g are sequentially numbered starting with the youngest (P1). The marginal meristem on one side of leaf number 17 (L17; counted from the centre outwards) is encircled in e and g. Scale bar in e–g represents 200 μm.
After sectioning the paraffin-embedded material, a small but not significant difference in the average SAM diameter of the spl9 spl15 double mutant and Col-0 wild type could be observed (Fig. 4b, e–f). However, with an average diameter of 104 μm, the MIR156b overexpressor showed a slight but significant (P < 0.05) decrease in SAM size compared to Col-0 (Fig. 4b, e, g). Furthermore, both the spl9 spl15 double mutant and the MIR156b overexpressor exhibited the same phyllotaxy as wild type, with rosette leaves initiated either clockwise or anticlockwise with an angle of divergence of about 137.5° between successive leaves and forming a spiral lattice with a (3,5) parastichy pair (Fig. 4e–g). From these observations, it is concluded that the observed shorter plastochron is neither the result nor the cause of an altered phyllotaxy in the spl9 spl15 double mutant or the MIR156b overexpressor. The shortened plastochron, however, seems to correlate with a reduced SAM size. As is obvious from the cross sections shown in Fig. 4e, g, the young leaves of the MIR156b overexpressor appear more roundish in shape in comparison to wild type leaves at similar positions. In particular, the vacuolated cells surrounding their midveins seem larger and the developing laminas reduced, i.e. represented by fewer small, cytoplasm-rich cells along their lateral margins. In addition, the stipules of the MIR156b overexpressor seem more prominent. In these aspects of leaf development, the spl9 spl15 double mutant seems to behave intermediately (Fig. 4f). To quantify the differences in shape and size of the leaf primordia, we determined their circularity and cross sectional area (see "Materials and methods"), starting from the first leaf cross section found to be separated from the apical meristem. To reduce effects due to imperfect cross sectioning, i.e. sections not absolutely perpendicular to the longitudinal axis, as well as errors in correlating sequentially numbered primordia between different sections, we averaged the values obtained over 10 successive primordia. As shown in Fig. 4c, the circularity of the youngest 10 leaf primordia was highly similar between the different genotypes. Circularity of subsequent older primordia decreased in all genotypes, but more rapidly in wild type, such that, on average, leaves 10–20 differed significantly between the genotypes. Interestingly, it cannot be excluded that this difference is a direct consequence of a shortened plastochron in the mutant lines,
in particular as the leaf initiation rate of the MIR156b overexpressor line lies roughly one and a half times above that of wild type. Accordingly, and with respect to absolute age, leaves 11–20 of the MIR156b overexpressor may be more comparable to leaves 6–15 of wild type, for which indeed no significant difference in circularity was found. In cross sectional area, however, these young leaves differed significantly: on average, 6.5 ± 2.4 × 10³ μm² for wild-type leaves 6–15 versus 26.7 ± 16.2 × 10³ μm² for leaves 11–20 of the MIR156b overexpressor. It is known that the shape and other characteristics of newly formed leaves progressively change in correlation with the vegetative phase transition (Telfer et al. 1997). Furthermore, likely due to a changed plastochron, the correlation between leaf number and flowering seems to differ between the mutants and wild type. Therefore, we further investigated the possibility that the observed differences correlate with a relative shift in the timing of the vegetative phase transition.
Functional analysis of SPL9 and SPL15 during the vegetative phase transition
In order to determine the timing of the vegetative phase change, we used the absence or presence of abaxial trichomes on rosette leaves as a morphological marker for leaves formed during the juvenile or adult growth phase, respectively (Telfer et al. 1997). On average, in our LD2 growing conditions, the first abaxial trichomes developed on rosette leaf number six of Col-0 wild-type plants (Table 1). The spl9 spl15 double mutant displayed its first abaxial trichomes on leaf number twelve and the 35S::MIR156b overexpressor on leaf number 16. Although to a lesser extent than the spl9 spl15 double mutant, the respective single mutants also developed significantly more juvenile leaves than wild type (Table 1). We distinguished the juvenile and adult growth phases based on a phase dimorphism, i.e. the presence of abaxial trichomes. However, on a plant-physiological level the juvenile phase in Arabidopsis is characterized as being incompetent to respond to photoperiodic induction of flowering (Poethig 1990). To determine if this competence was indeed affected, small populations of 20–22 plants of wild type, the spl9 and spl15 single and double mutants and the MIR156b overexpressor were germinated and cultivated for 3 weeks in non-inductive SD conditions (see "Materials and methods"). The plants were then transferred to continuous light and batchwise shifted back to SD after either 1, 3 or 5 days. Their flowering response was recorded within a 3-week period after this inductive treatment. Plants not flowering within this period also did not flower after 2 months, like the plants of a control group representing all genotypes that was kept continuously in SD. As shown in Table 2, the 5-day inductive treatment caused a flowering response in 100% of the plants of all genotypes. Three days also sufficed to induce all or almost all of the wild-type and single mutant plants, whereas the response of the double mutant and, in particular, of the MIR156b overexpressor already declined. One day of continuous light, still enough to induce half or more of the wild-type and single mutant plants, did not induce flowering in any of the MIR156b overexpressor plants and only in one-tenth of the double mutants. These results thus demonstrate that also according to physiological criteria, SPL9 and SPL15 redundantly promote the juvenile-to-adult phase transition.
In addition, other miR156-regulated SPL genes are expected to contribute as well, based on the behaviour of the MIR156b overexpressor.
Table 2 Photoperiodic floral induction in wild type and spl mutants

Genotype        Percentage of plants(a) induced after treatment with continuous light for
                1 D     3 D     5 D
Col-0 wt        60      100     100
spl9-1          59      100     100
spl15-1         50      95      100
spl9 spl15      9       86      100
35S::MIR156b    0       27      100

D, days
(a) Two populations of 10–11 plants per genotype were evaluated within a 3-week period following the inductive treatment

The role of gibberellin in the function of SPL9 and SPL15
The plant hormone gibberellin (GA) is known to promote flowering in many plants, and in Arabidopsis it is particularly required for flowering in SD (Wilson et al. 1992). Exogenous application of GA will induce abaxial trichomes on leaves where these are normally not present, although they will not appear earlier than on leaf three (Telfer et al. 1997). In order to test whether the spl9 spl15 double mutant and the MIR156b overexpressor are defective in gibberellin sensitivity or biosynthesis, we exogenously applied GA3 and compared the onset of abaxial trichome production to that of mock-treated and wild-type plants grown in LD1 (Fig. 5). As in wild type, the GA3 treatment strongly reduced the number of rosette leaves without abaxial trichomes, i.e. juvenile leaves, by about a factor of three. This result shows that the spl9 spl15 double mutant and the MIR156b overexpressor remained sensitive to GA3. However, in both the spl9 spl15 double mutant and the MIR156b overexpressor the amount of GA3 applied could not reduce the number of juvenile leaves to that obtained in GA3-treated wild type.
Fig. 5 Effect of GA on spl9 spl15 double mutant and MIR156b overexpressor plants in comparison to wild type. The number of juvenile rosette leaves formed is shown for plants that were either regularly sprayed with GA3 (100 μM) or mock treated. Error bars indicate standard deviation (n = 8).
Discussion
Transgenic plants constitutively over-expressing the plant-specific miR156 have been described for Arabidopsis (Schwab et al. 2005; Wu and Poethig 2006) and rice (Xie et al. 2006). Recently, overexpression of a miR156-encoding locus has also been shown to be the cause of the phenotype of the natural maize mutant Corngrass1 (Chuck et al. 2007). Interestingly, in all three species, overexpression of this well-conserved miRNA (Axtell and Bartel 2005; Arazi et al. 2005) causes a similar phenotype, suggesting an evolutionarily conserved role for miR156 and its SPL target genes. Generally, in comparison to the respective wild type, miR156 over-expressing plants are smaller, flower later, tend to lose apical dominance and initiate more leaves with a shorter plastochron. MiR156 targets eleven SBP-box genes in Arabidopsis, but the results presented here clearly show that simultaneous silencing of just the two likely paralogous target genes SPL9 and SPL15 already approximates the miR156 over-expression phenotype well with regard to the traits mentioned above. SPL9 and SPL15 thus act as important and functionally redundant transcription factors regulating diverse processes in shoot maturation, most likely in combination with other miR156-regulated SPL genes. In agreement with this latter statement is our observation that, in addition to spl9-1 and spl15-1, mutation of a third miR156-controlled gene, SPL2 (At5g43270; T-DNA insertion line SALK_022235), results in triple mutant plants showing an even better approximation to the MIR156b overexpressor phenotype (Supplementary Fig. 2;
despite the absence of detectable SPL2 transcript, single homozygous spl2-1 mutant plants lack an obvious mutant phenotype, data not shown). In greenhouse LD conditions we found the triple mutant to have produced on average 17.3 ± 2.1 (n = 16) rosette leaves, in comparison to 15.6 ± 2.6 for the spl9 spl15 double mutant. Col-0 wild-type and MIR156b overexpressor plants grown in parallel produced 12.9 ± 1.7 and 22.5 ± 3.5 rosette leaves before flowering, respectively.
SPL9 and SPL15 positively regulate the juvenile-to-adult growth phase transition
With the detailed analysis of Arabidopsis MIR156b overexpressors by Wu and Poethig (2006), as well as with the description of the Corngrass1 mutant in maize by Chuck et al. (2007), it became clear that one of the major phenotypic alterations in miR156 over-expressing plants is an extended juvenile growth phase. This suggests that one of the important functions of miR156-targeted SBP-box genes is to promote the vegetative phase change. In agreement with this observation, Wu and Poethig (2006) showed that overexpression of the miR156-regulated gene SPL3 and its likely paralogs leads to a greatly shortened juvenile phase in Arabidopsis. Based on morphological markers (abaxial trichomes) and physiological parameters (response to a photo-inductive stimulus), we found that the spl9 spl15 double mutant exhibits a delayed vegetative phase transition and, therefore, conclude that both genes are very likely involved in the positive regulation of this developmental process in a redundant fashion. Most likely because of this redundancy, photoperiodic induction of the single spl9 and spl15 mutants is not much affected. However, the effect on the appearance of abaxial trichomes as a marker for the juvenile-to-adult phase transition appears to be stronger in the spl9 mutant than in spl15. This may be due to the fact that in shoot apical development the expression of SPL9 starts to increase before that of SPL15 (Fig. 1b). As SPL9 and SPL15 promote the juvenile-to-adult growth phase transition, and thus competence to respond to photoperiodic induction of flowering, it is interesting to note that both SPL9 and SPL15 themselves are strongly upregulated in the shoot apex upon such induction (Schmid et al. 2003). An additional role for these genes in establishing inflorescence or floral meristem identity may thus be suggested.
SPL9 and SPL15 negatively regulate leaf initiation rate
Our data on the leaf initiation rate suggest that SPL9 and SPL15 act negatively on leaf initiation: loss of function of both genes leads to a shorter plastochron. Other miR156-regulated SPL genes may act similarly, as the plastochron is even further shortened in the MIR156b overexpressor plants. A few mutations are known to cause a shortened plastochron, and most of them simultaneously affect phyllotaxy. However, we found that shortening of the plastochron due to loss of SPL gene function is neither the cause nor the result of a changed spatial distribution of leaf primordia at the shoot apex. A shorter plastochron without an altered phyllotaxy has also been reported for two rice mutants, plastochron1 and -2 (pla1, -2; Itoh et al. 1998; Miyoshi et al. 2004; Kawakatsu et al. 2006). PLA1 encodes a cytochrome P450 protein, whereas PLA2 encodes a MEI2-like RNA binding protein. In both mutants the reduction in plastochron is accompanied by an increase in the size of the SAM and a higher rate of cell division.
However, although the SAM of pla2 is actually smaller than that of pla1, it has a shorter plastochron. Furthermore, higher cell division activity associated with constitutive overexpression of CyclinD shortened the plastochron in tobacco without altering SAM size (Cockcroft et al. 2000). These observations suggest, as already noted by Kawakatsu et al. (2006), that not SAM size but rather cell division rate is decisive for plastochron duration. Our results may also lend support to this hypothesis, as both the spl9 spl15 double mutant and the MIR156b overexpressor exhibit a clearly shorter plastochron than wild-type plants while their SAM sizes differ only marginally. Therefore, it will be of interest to determine if SPL9, SPL15 and other miR156-regulated SPL genes control the cell division rate in the SAM and, if so, in particular whether their role is mediated through the phytohormone cytokinin. Not only is cytokinin a major positive regulator of cell proliferation and division in plants (Werner et al. 2001), it is also, in mutual dependence with auxin, a major determinant of the outgrowth of lateral shoots (Sachs and Thimann 1967; Chatfield et al. 2000). This latter aspect may explain the reduced apical dominance observed for the spl9 spl15 double mutant and the MIR156b overexpressor. Finally, mutations disrupting cytokinin signalling are known to result in reduced leaf initiation rates in addition to a smaller SAM and other effects (Nishimura et al. 2004; Higuchi et al. 2004).
Do SPL9 and SPL15 negatively regulate leaf maturation rate?
Based on their observations of the pla mutants in correlation with the expression of the respective genes in leaf primordia but not in the SAM, Kawakatsu et al. (2006) proposed that the rate of leaf maturation plays a significant role in regulating the rate of leaf initiation. In addition, these authors postulated a model in which the inhibitory effect of pre-existing leaf primordia on the initiation of the next leaf is lost as they mature. Similarly, the shortened plastochron in the spl9 and spl15 mutants and the MIR156b overexpressor may also be due to precocious maturation of their leaves, as suggested by our comparison of cross sections through successive leaf primordia of wild type and the MIR156b overexpressor. Even if the shape, i.e. circularity, of the primordia may not differ significantly after correction for the altered plastochron by comparing primordia based on age rather than on serial sequence number, their cross sectional area seems to increase more rapidly in the MIR156b overexpressor. In particular, the cells surrounding the midvein in MIR156b overexpressor leaves appear to enlarge more rapidly.
SPL9 and SPL15 do not modulate the role of GA in the vegetative phase change
Exogenous application of GA3 has been found to accelerate abaxial trichome production in Arabidopsis, suggesting that gibberellins function to regulate the vegetative phase change (Telfer et al. 1997). These findings are also supported by mutant analysis. For example, in spindly (spy) mutants, which show a constitutive GA response, abaxial trichomes occur on leaves initiated significantly earlier than in wild type (Jakobsen and Olszewski 1993; Telfer et al. 1997). On the other hand, Telfer et al. (1997) found that mutants blocked in GA biosynthesis as well as GA-insensitive mutants are significantly delayed in the appearance of abaxial trichomes. Loss-of-function mutants of the SPL9 and SPL15 genes examined here clearly delay the appearance of abaxial trichomes.
Our treatment of spl9 spl15 double mutant and 35S::MIR156b transgenic plants with high doses of GA3 showed that, as in wild type, their number of juvenile leaves can be reduced, but not to numbers equal to those found for similarly treated wild type. In fact, the ratios of juvenile to adult leaves of wild-type and mutant plants remained highly comparable to those of untreated plants. From this we conclude that the role SPL9, SPL15 and other miR156-controlled SPL genes play in the vegetative phase change is unlikely to be GA-mediated, although a minor contribution to GA sensitivity cannot be excluded.
Outlook
MiR156-targeted members of the SBP-box family of transcription factors in both mono- and dicots appear to play an important role as positive regulators of shoot maturation and of the vegetative-to-reproductive phase transition in particular. Both genetic factors, i.e. miR156 and SBP-box genes, have also been suggested to be major determinants in the transition from undifferentiated to differentiated embryogenic calli of rice (Luo et al. 2006). As the interaction between SBP-box genes and miR156 is of ancient origin in land plants (Arazi et al. 2005; Riese et al. 2007), it will be interesting to learn to what extent their molecular interplay is of importance for developmental phase transitions in plants in general.
Electronic supplementary material
Below is the link to the electronic supplementary material.
Supplementary Fig. 1 Expression of SPL15 as detected with a GUS reporter gene construct. A pSPL15::GUS:SPL15 reporter was constructed by subcloning a 2,793 bp genomic fragment of SPL15, beginning 1,260 bp upstream of the SPL15 ATG start codon and extending to 132 bp downstream of the SPL15 stop codon, into the binary vector pGJ2148 (kindly provided by Guido Jach, MPIZ Cologne). A β-glucuronidase (GUS) reporter was subsequently cloned in frame shortly downstream of the SPL15 ATG start codon and the whole construct stably transformed into the Arabidopsis Col-0 background. (A–B) GUS staining of a mature embryo gently squeezed out of the imbibed seed (A) and of a ca. 10-day-old seedling grown in LD (B). Bars in A and B represent 200 μm and 2 mm, respectively. (TIF 26,178 KB)
Supplementary Fig. 2 Phenotypes of spl9 and spl15 related mutants. (A) Flowering plants homozygous for the spl9-1, -2 and -3 alleles in Col-0 background next to wild type. (B) Flowering plants homozygous for the spl15-1, -2 and -3 alleles in Col-0 background next to wild type. (C) Flowering plants, from left to right, of Col-0 wild type, the double mutants spl9-1 spl15-1 and spl9-2 spl15-2, the triple mutant spl2-1 spl9-1 spl15-1 and the MIR156b overexpressor. (TIF 32,914 KB)
[ "sbp-box genes", "shoot maturation", "arabidopsis", "phase change", "juvenile phase", "mirna" ]
[ "P", "P", "P", "P", "P", "P" ]
Virus_Res-2-1-2194287
Novel vectors expressing anti-apoptotic protein Bcl-2 to study cell death in Semliki Forest virus-infected cells
Semliki Forest virus (SFV, Alphavirus) induces rapid shutdown of host cell protein synthesis and apoptotic death of infected vertebrate cells. Data on alphavirus-induced apoptosis are controversial. In this study, the anti-apoptotic bcl-2 gene was placed under the control of a duplicated subgenomic promoter or of different internal ribosome entry sites (IRES) and expressed using novel bicistronic SFV vectors. The use of IRES-containing vectors resulted in high-level Bcl-2 synthesis during the early stages of infection. Nevertheless, in infected BHK-21 cells translational shutdown was almost complete by 6 h post-infection, similar to infection with the appropriate control vectors. These results indicate that very early and high-level bcl-2 expression did not have a protective effect against SFV-induced shutdown of host cell translation. No apoptotic cells were detected at those time points for any of the SFV vectors. Furthermore, Bcl-2 expression did not protect BHK-21 or AT3-neo cells at later time points, and infection of BHK-21 or AT3-neo cells with SFV replicon vectors or with wild-type SFV4 did not lead to release of cytochrome c from mitochondria. Taken together, our data suggest that SFV-induced death in BHK-21 or AT3-neo cells is not triggered by the intrinsic pathway of apoptosis.
1 Introduction
Semliki Forest virus (SFV) is a positive-stranded RNA virus in the Alphavirus genus (family Togaviridae), a widely distributed group of human and animal pathogens (Strauss and Strauss, 1994). SFV genomic RNA (so-called 42S RNA) is approximately 11.5 kb long and encodes four non-structural proteins, designated nsP1–4, which are involved in viral RNA synthesis. The remaining proteins form the virus capsid and envelope and are not essential for virus replication. After virus entry, the 42S RNA is translated into a large non-structural polyprotein, which is processed to form an early and subsequently a late replicase complex (Strauss and Strauss, 1994). The early replicase mediates synthesis of the negative-stranded RNA complementary to the genomic 42S RNA. Minus strands are used by the late replicase as templates for the synthesis of new positive-strand 42S RNA and for transcription of the subgenomic mRNA encoding the structural proteins. The structural genes of SFV are not required for replication and can be removed or replaced with a polylinker and/or with foreign gene sequences. This property forms the basis of the SFV-based replicon vector systems (Liljestrom and Garoff, 1991; Smerdou and Liljestrom, 1999). SFV-based replicon vectors mediate high-level expression of heterologous proteins. However, as with the virus, these vectors cause shutdown of cellular biosynthesis and induce apoptotic death (Glasgow et al., 1997, 1998; Scallan et al., 1997). This precludes long-term foreign gene expression, and several attempts to reduce the cytotoxicity of alphavirus vectors have been made (Fazakerley et al., 2002; Lundstrom et al., 2003, 2001; Perri et al., 2000). The anti-apoptotic gene bcl-2 is an antagonist of the intrinsic mitochondrial pathway of apoptosis (for reviews see Ashe and Berry, 2003; Cory and Adams, 2002; Tsujimoto and Shimizu, 2000). Bcl-2 can prevent the release of cytochrome c from mitochondria, thus precluding the apoptotic cascade (Kluck et al., 1997; Yang et al., 1997). Bcl-2 can block apoptosis induced by several viruses, including influenza virus and reovirus (Nencioni et al., 2003; Rodgers et al., 1997). Existing data on Bcl-2 in SFV- or Sindbis virus-induced apoptosis are contradictory.
On the one hand, it has been shown that alphavirus-induced apoptosis of baby hamster kidney (BHK) cells, Chinese hamster ovary cells, rat insulinoma cells and rat prostatic adenocarcinoma (AT3) cells can be prevented by over-expression of Bcl-2 (Levine et al., 1993; Lundstrom et al., 1997; Mastrangelo et al., 2000; Scallan et al., 1997). Similarly, a Sindbis virus expressing Bcl-2 produces reduced encephalitis in infected mice (Levine et al., 1996). That Bcl-2 expression can block apoptosis suggests involvement of the intrinsic pathway of apoptosis. In contrast, other studies using rat embryo fibroblasts and monocyte cell lines overexpressing Bcl-2 failed to detect a protective effect against alphavirus-induced apoptosis (Grandgirard et al., 1998; Murphy et al., 2001). The aim of this study was to determine whether expression of anti-apoptotic Bcl-2 directly from SFV-based replicon vectors in BHK-21 cells could be used to prolong co-expression of marker proteins from a bicistronic SFV replicon. Using the SFV1 vector system (Liljestrom and Garoff, 1991), the bcl-2 gene was placed either under the control of a duplicated SFV subgenomic promoter or an internal ribosome entry site (IRES). It is possible that expression of Bcl-2 from the subgenomic promoter occurs too late to prevent cell death; expression from an IRES element within the genomic RNA should be more rapid. We tested two different IRES elements, the Encephalomyocarditis virus IRES (EMCV-IRES) and the crucifer-infecting tobamovirus IRES (CR-IRES). The latter is a 148-nt element which precedes the CR coat protein gene and displays IRES activity across all kingdoms (Dorokhov et al., 2002). Using this novel approach we demonstrate that early Bcl-2 expression does not protect SFV-infected BHK-21 cells from alphavirus-induced translational shutdown or cell death. Moreover, our results indicate that SFV-induced cell death in BHK-21 cells does not involve the release of cytochrome c from mitochondria and most likely does not occur by the intrinsic apoptotic pathway.
2 Materials and methods
2.1 Plasmid construction
The BamHI-XmaI multicloning site of the pSFV1 replicon (Liljestrom and Garoff, 1991) was replaced with a BamHI, ApaI, ClaI, AvrII, NruI, NsiI and XmaI multicloning site; the resulting construct was designated pSFV-PL. The spliced sequences encoding the mouse Bcl-2 alpha protein (locus AAA37282), the EMCV-IRES (pIRES2-EGFP; BD Clontech) and the 148 bp CR-IRES (Ivanov et al., 1997) were amplified by PCR, cloned and verified by sequence analysis. Each IRES was fused to the Bcl-2 coding sequence and cloned into the NsiI-XmaI digested pSFV-PL vector; the resulting constructs were designated pSFV-EMCV-bcl2 and pSFV-CR-bcl2. To create constructs expressing the Bcl-2 protein from the duplicated subgenomic promoter, the IRES of pSFV-EMCV-bcl2 was replaced by an oligonucleotide duplex representing the minimal SFV subgenomic promoter (Hertz and Huang, 1992); the resulting construct was designated pSFV-PR-bcl2. The d1EGFP reporter gene (BD Clontech) was amplified by PCR, sequenced and cloned into the pSFV-PL, pSFV-EMCV-bcl2, pSFV-CR-bcl2 and pSFV-PR-bcl2 vectors treated with ClaI-NsiI. The resulting constructs were designated pSFV-PL-d1EGFP, pSFV-d1EGFP-EMCV-bcl2, pSFV-d1EGFP-CR-bcl2 and pSFV-d1EGFP-PR-bcl2, respectively (Fig. 1). Sequences and primers are available upon request.
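As an aid to keeping the construct nomenclature straight, the following minimal sketch (Python; a simplified, hypothetical representation, with the element order taken from the construct descriptions above and Fig. 1) enumerates the cassette layouts of the bicistronic replicons:

```python
# Schematic cassette layouts of the bicistronic SFV replicons described
# above (simplified; element order follows Fig. 1). "SG" denotes the
# native subgenomic promoter region driving the first cistron; "SG-dup"
# the duplicated minimal SFV subgenomic promoter.
SECOND_CISTRON_DRIVERS = {
    "EMCV": "EMCV-IRES",   # encephalomyocarditis virus IRES
    "CR":   "CR-IRES",     # 148-nt crucifer-infecting tobamovirus IRES
    "PR":   "SG-dup",      # duplicated SFV subgenomic promoter
}

def replicon_layout(first, driver, second):
    """Return the 5'->3' element order of a bicistronic pSFV construct."""
    return ["nsP1-4 replicase", "SG", first, SECOND_CISTRON_DRIVERS[driver], second]

for d in SECOND_CISTRON_DRIVERS:
    print(f"pSFV-d1EGFP-{d}-bcl2:", " - ".join(replicon_layout("d1EGFP", d, "bcl-2")))
```

The same layouts apply when bcl-2 is swapped for HcRed or when d1EGFP is swapped for Pac, as described next.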
To construct SFV replicons expressing the mutated chromoprotein HcRed (from the reef coral Heteractis crispa), HcRed was PCR-amplified (from pHcRed1-N1; BD Clontech), cloned and sequenced. The sequence encoding Bcl-2 in pSFV-d1EGFP-EMCV-bcl2, pSFV-d1EGFP-CR-bcl2 and pSFV-d1EGFP-PR-bcl2 was replaced with that of HcRed to give the constructs pSFV-d1EGFP-EMCV-HcRed, pSFV-d1EGFP-CR-HcRed and pSFV-d1EGFP-PR-HcRed (Fig. 1). To obtain the constructs used for viability analysis under puromycin selection, the sequence encoding d1EGFP in pSFV-PL-d1EGFP, pSFV-d1EGFP-EMCV-bcl2, pSFV-d1EGFP-CR-bcl2 and pSFV-d1EGFP-PR-bcl2 was replaced by that of puromycin acetyltransferase (Pac), and the constructs were designated pSFV-PL-Pac, pSFV-Pac-EMCV-bcl2, pSFV-Pac-CR-bcl2 and pSFV-Pac-PR-bcl2. To generate infectious RNA, the constructs were linearised by SpeI digestion and in vitro transcription was carried out as previously described (Karlsson and Liljestrom, 2003).
2.2 Cells and viruses
BHK-21 cells were grown in Glasgow's Minimal Essential Medium containing 5% foetal calf serum, 0.3% tryptose phosphate broth, 0.1 U/ml penicillin and 0.1 μg/ml streptomycin. AT3-neo and AT3-bcl2 cells were grown in Roswell Park Memorial Institute-1640 medium containing 10% foetal calf serum, 0.1 U/ml penicillin and 0.1 μg/ml streptomycin. All cells were grown at 37 °C in a 5% CO2 atmosphere. SFV4 was derived from the infectious cDNA clone pSP6-SFV4 (Liljestrom et al., 1991).
2.3 Transfection and collection of virus-like particles (VLPs)
BHK-21 cells were co-transfected with equal amounts of vector and helper RNA (Liljestrom and Garoff, 1991). The helper RNA encodes the structural proteins under the subgenomic promoter. Transfected cells were grown at 28 °C for 72 h and the VLPs collected, concentrated, purified and titrated as described by Karlsson and Liljestrom (2003). Although the replicase encoded by the replicon vector will amplify both RNAs, helper RNA is not packaged into VLPs because it lacks the packaging signal. To determine whether any replication-proficient viruses were formed due to recombination between replicon vector RNA and helper RNA, batches of VLPs were tested as described by Smerdou and Liljestrom (1999). All infections with VLPs were carried out in BHK-21 cells at a multiplicity of infection of 10 (moi = 10) for 1 h at 37 °C.
2.4 Metabolic labelling
BHK-21 cells in 35 mm diameter plates were infected with SFV VLPs as described above. Infected cells were washed twice with PBS and once with methionine- and cysteine-free Dulbecco's Modified Eagle Medium, followed by 30 min labelling with 50 μCi/ml of [35S]methionine and [35S]cysteine (RedivuePRO-MIX, Amersham Biosciences). After labelling, cells were washed with PBS, lysed in Laemmli buffer and analyzed by SDS-PAGE. Gels were dried under vacuum and exposed to film.
2.5 Immunoblot analysis
BHK-21 cells were infected and samples collected as described above. After SDS-PAGE, proteins were transferred to a nitrocellulose membrane, probed with rabbit polyclonal antisera against SFV nsP1, EGFP (in-house) or HcRed (Clontech), with a mouse monoclonal antibody against Bcl-2 (Santa Cruz Biotechnology, Inc.) or with a mouse monoclonal antibody against beta-actin (C4) (Santa Cruz Biotechnology, Inc.), and visualized with the ECL immunoblot detection kit (Amersham Life Science).
2.6 Analysis of bicistronic SFV vector cytotoxicity in BHK-21 cells
Cytotoxicity of the SFV vectors was analyzed as described by Garmashova et al. (2006). Replicon RNA was obtained from pSFV-PL-Pac, pSFV-Pac-EMCV-bcl2, pSFV-Pac-CR-bcl2 and pSFV-Pac-PR-bcl2.
2.4 Metabolic labelling

BHK-21 cells in 35 mm diameter plates were infected with SFV VLPs as described above. Infected cells were washed twice with PBS and once with methionine- and cysteine-free Dulbecco's Modified Eagle Medium, followed by 30 min of labelling with 50 μCi/ml of [35S]methionine and [35S]cysteine (RedivuePRO-MIX, Amersham Biosciences). After labelling, cells were washed with PBS, lysed in Laemmli buffer and analyzed by SDS-PAGE. Gels were dried under vacuum and exposed to film.

2.5 Immunoblot analysis

BHK-21 cells were infected and samples collected as described above. After SDS-PAGE, proteins were transferred to a nitrocellulose membrane, probed with rabbit polyclonal antisera against SFV nsP1, EGFP (in-house) or HcRed (Clontech), with a mouse monoclonal antibody against Bcl-2 (Santa Cruz Biotechnology, Inc.) or with a mouse monoclonal antibody against beta-actin (C4) (Santa Cruz Biotechnology, Inc.), and visualized with an ECL immunoblot detection kit (Amersham Life Science).

2.6 Analysis of bicistronic SFV vector cytotoxicity in BHK-21 cells

Cytotoxicity of the SFV vectors was analyzed as described by Garmashova et al. (2006). Replicon RNA was obtained from pSFV-PL-Pac, pSFV-Pac-EMCV-bcl2, pSFV-Pac-CR-bcl2 and pSFV-Pac-PR-bcl2. 1 × 10⁶ BHK-21 cells were electroporated with 5 μg of RNA. Cells were seeded into wells (growth area 2.0 cm² per well; Cellstar, Greiner bio-one plates) and selected with puromycin (10 μg/ml) from 6 h post-transfection. Viable adherent cells were counted at 6, 24, 48 and 72 h post-transfection using Trypan blue (Flow Laboratories). The viability of infected cells was also analyzed by the WST-1 assay (Roche), which is based on the reduction of WST-1 to a water-soluble formazan dye by viable cells. BHK-21 cells were seeded in 96-well plates (7 × 10³ cells/well), grown for 18 h and infected with VLPs containing the recombinant replicons at moi = 10. Control cells were mock-infected. Infected cells were analyzed 6, 24 or 48 h post-infection (p.i.) by adding 10 μl of WST-1 to each well, incubating the plate for 1 h and measuring the change in color intensity at 450 nm in a microplate reader.
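WST-1 readings of this kind are typically expressed as viability relative to the mock-infected control after subtracting the cell-free background; a minimal sketch of that normalization (the absorbance values are invented for illustration):

    def percent_viability(a450_infected, a450_mock, a450_blank):
        """WST-1 viability (%) relative to mock, background-subtracted."""
        return 100.0 * (a450_infected - a450_blank) / (a450_mock - a450_blank)

    # Illustrative A450 readings: blank 0.08, mock-infected 1.25, infected 0.55
    print(round(percent_viability(0.55, 1.25, 0.08), 1))  # -> 40.2 (% of mock)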
2.7 Immunofluorescence microscopy

Cells were grown on cover slips and infected for the selected times; mock-infected cells were used as controls. Cells were washed with PBS, fixed with 4% paraformaldehyde for 10 min at room temperature and permeabilized with cold methanol for 7 min at −20 °C. Cells were then washed with PBS, blocked in 3% BSA-PBS and incubated for 1 h with primary antibody (mouse anti-cytochrome c monoclonal antibody (BD Pharmingen) or rabbit polyclonal antibody against SFV nsP1). The cells were then washed again with PBS, incubated for 1 h with an AlexaFluor 568 (Invitrogen)- or Cy3-conjugated secondary antibody, washed three times with PBS and air-dried. Staurosporine (final concentration 0.5 μM) was added to mock-infected cells 1–2 h before fixing to induce release of cytochrome c from mitochondria. Samples were analyzed on an Olympus U-RFL-TX microscope or a Bio-Rad MRC-1024 confocal microscope.

2.8 Analysis of the viability of AT3 cells infected with SFV VLPs

3 × 10⁶ AT3-neo or AT3-bcl2 cells were infected with VLPs in serum-free RPMI-1640 medium supplemented with 0.2% BSA. The amount of VLPs used for the infections corresponded to moi = 20 for BHK-21 cells. At 12 h p.i., EGFP-positive cells were separated using a BD FACSAria cell sorter and seeded in 24-well plates. Viable cells were counted at 24 and 48 h p.i. using Trypan blue (Flow Laboratories).

3 Results

3.1 Expression of d1EGFP and Bcl-2 using bicistronic SFV vectors

To study the expression of foreign proteins from the bicistronic replicons, BHK-21 cells were infected with SFV-d1EGFP-CR-bcl2, SFV-d1EGFP-EMCV-bcl2 and SFV-d1EGFP-PR-bcl2 VLPs and, in addition, with monocistronic SFV-PL-d1EGFP VLPs. Samples were collected at 2, 4, 6, 8, 12 and 24 h p.i. and analyzed by immunoblotting. Expression of SFV nsP1 was generally detectable by 2 h p.i. and increased up to 8–12 h p.i. (Fig. 2a–e). The amount of nsP1 was approximately equal for all bicistronic vectors (Fig. 2b–d). Expression of d1EGFP was also detectable by 2–4 h post-infection (Fig. 2a–d) and was highest in SFV-PL-d1EGFP infected cells (Fig. 2a). Major differences were observed for Bcl-2 expression, which was detected on immunoblots as two bands (Fig. 2b–d). Bcl-2 expression was strongest and earliest (2 h p.i.) in SFV-d1EGFP-EMCV-bcl2 infected cells (Fig. 2c). In cells infected with SFV-d1EGFP-CR-bcl2 (Fig. 2b), expression of Bcl-2 was detected at 4 h p.i. Expression of Bcl-2 was also found in cells infected with SFV-d1EGFP-PR-bcl2 (Fig. 2d), although the levels were lower compared to the IRES-containing constructs. To assess the ability of the bicistronic vectors to express proteins in general, the Bcl-2 sequences were replaced with non-cytotoxic HcRed. The resulting replicons SFV-d1EGFP-EMCV-HcRed, SFV-d1EGFP-CR-HcRed and SFV-d1EGFP-PR-HcRed were used to infect BHK-21 cells. There was no significant difference in the expression of the nsP1 and d1EGFP proteins between bicistronic vectors expressing HcRed or Bcl-2 as the second protein (Fig. 2e for SFV-d1EGFP-EMCV-HcRed; data not shown for SFV-d1EGFP-CR-HcRed and SFV-d1EGFP-PR-HcRed). Expression of HcRed was strongest for SFV-d1EGFP-EMCV-HcRed (detectable by Western blotting by 4 h p.i.; Fig. 2e). This is in agreement with the observations above, suggesting that the highest expression levels of the second target protein are achieved with the EMCV-IRES vectors. The apparent delay in detection of protein expression (2 h p.i. for Bcl-2 versus 4 h p.i. for HcRed) is most likely due to the quality of the HcRed antibody (Clontech). Thus, vectors containing IRES elements expressed Bcl-2 earlier and to higher levels. The presence of a duplicated subgenomic promoter or IRES element followed by the Bcl-2 encoding sequence did not have any major effect on the time-course of infection; however, differences in the expression levels of the first marker protein, d1EGFP, were observed.

3.2 Effects of infections by recombinant SFV VLPs on host cell protein synthesis

Metabolic labelling was used to study the effects of infection by SFV VLPs on host cell protein synthesis. Infected BHK-21 cells were pulse-labelled at 2, 4, 6, 8, 12 and 24 h p.i. In contrast to mock-infected cells, where protein synthesis was ongoing (Fig. 3A), translation in infected cells was rapidly inhibited (Fig. 3B–E). Shutdown of host protein synthesis was detected as early as 4 h p.i. (Fig. 3B–E). High-level and continuous expression of Bcl-2 was visible in cells infected with SFV-d1EGFP-EMCV-bcl2 (Fig. 3D); in cells infected with the other bicistronic vectors, expression of Bcl-2 could only be detected by Western blot (Fig. 2b–d). These results indicate that high-level, continuous expression of Bcl-2 from a bicistronic replicon vector does not protect against shut-off of host cell protein synthesis.

3.3 Survival of cells infected with Bcl-2 expressing bicistronic replicons

It has been shown that, similar to the wild-type virus, alphavirus-based replicon vectors induce apoptotic death of infected vertebrate cells (Glasgow et al., 1997, 1998; Scallan et al., 1997) and that over-expression of Bcl-2 can protect cells against alphavirus-induced apoptosis (Lundstrom et al., 1997; Scallan et al., 1997). To study the effect of Bcl-2 expression from bicistronic vectors on cell death, BHK-21 cells were transfected with the SFV-PL-Pac, SFV-Pac-EMCV-bcl2, SFV-Pac-CR-bcl2 and SFV-Pac-PR-bcl2 replicons or mock-transfected, and puromycin selection was applied 6 h post-transfection. Almost all mock-transfected cells died within 24 h of adding puromycin (Fig. 4a). The transfected cells survived longer, and the percentage of viable adherent cells was similar for all replicons at the selected time points (Fig. 4a). Most of the transfected cells were dead by 3 days post-transfection regardless of the replicon. Thus, our results demonstrate that high-level expression of Bcl-2 does not protect BHK-21 cells against SFV-induced death. Similar results were also obtained when Annexin-V PE conjugate or propidium iodide was used to label apoptotic and necrotic cells (data not shown). The WST-1 assay, used to analyze cell viability, measures mitochondrial activity in cells.
Results obtained with this assay were similar to those of the cell survival experiments: expression of Bcl-2 from the bicistronic vectors (SFV-d1EGFP-CR-bcl2, SFV-d1EGFP-EMCV-bcl2 or SFV-d1EGFP-PR-bcl2) did not provide any protective effect (Fig. 4b). In fact, the viability of cells infected with bicistronic replicons at 48 h p.i. was reduced compared to cells infected with SFV-PL-d1EGFP (Fig. 4b). Infection with SFV-d1EGFP-EMCV-bcl2 caused the largest reduction in cell viability, in accordance with the cell survival data (Fig. 4a). The same tendency was seen when cells infected with SFV-d1EGFP-EMCV-HcRed, SFV-d1EGFP-CR-HcRed and SFV-d1EGFP-PR-HcRed VLPs were analyzed by the WST-1 assay (data not shown), indicating that it is not Bcl-2 expression but rather the second gene expression unit (especially the EMCV-IRES) that is responsible for this effect. Taken together, our results suggest that Bcl-2 expression does not have a detectable effect on the viability of SFV-infected BHK-21 cells.

3.4 SFV infection does not induce release of cytochrome c from mitochondria

The anti-apoptotic effect of Bcl-2 is connected to its ability to prevent the release of cytochrome c from mitochondria (Kluck et al., 1997; Yang et al., 1997). Apoptosis induced by death receptors or by ER stress bypasses this mitochondrial pathway and is, therefore, relatively insensitive to protection by Bcl-2 (Scaffidi et al., 1998). The finding that high levels of Bcl-2 expression did not have a protective effect against SFV-induced death suggests that apoptosis in SFV-infected BHK-21 cells may not involve the mitochondrial pathway. To determine whether the mitochondrial pathway is activated in SFV-infected BHK cells, the localization of cytochrome c was visualized by immunofluorescence in BHK-21 cells infected with SFV-PL-d1EGFP at 4 and 24 h p.i. The localization of cytochrome c in mock-infected BHK-21 cells was mitochondrial, as evidenced by its punctate staining (Fig. 5A). In cells treated with the non-selective protein kinase inhibitor staurosporine, cytochrome c staining was diffuse, demonstrating its release from mitochondria (Fig. 5B). In cells infected with SFV-PL-d1EGFP, the cytochrome c staining maintained a granular pattern at 4 h p.i. (Fig. 5C and D), indicating that cytochrome c had not been released from mitochondria at the time when shut-off of cellular gene expression starts. Furthermore, the cytochrome c localization pattern was unchanged even at 24 h p.i. (Fig. 5E and F). We conclude that induction of cell death in SFV replicon-infected BHK-21 cells does not involve the release of mitochondrial cytochrome c. This result is consistent with the absence of a protective effect of Bcl-2. SFV replicons lack the coding sequences for the structural proteins. To analyze the role of the structural proteins in the release of cytochrome c from mitochondria, BHK-21 cells were infected with SFV4 (moi = 1) and the localization of cytochrome c was determined at 4, 12 and 24 h p.i. At 4 h p.i., infected cells had not released cytochrome c (Fig. 6A and B). At 12 and 24 h p.i., detection of cytochrome c became more difficult due to extensive cytopathic effects, including cell rounding and re-localization of mitochondria into the perinuclear region (Fig. 6C–F); all these effects were more extensive than in SFV-PL-d1EGFP infected cells (Fig. 5). Nevertheless, the localization pattern of cytochrome c remained punctate, indicating that its mitochondrial localization was preserved (Fig. 6C–F).
Taken together, these results indicate that BHK-21 cells do not release cytochrome c from the mitochondria upon either replicon vector or SFV4 infection.

3.5 Bcl-2 expression does not protect SFV-infected AT3-neo cells against cell death

To analyze whether the absence of cytochrome c release and the lack of protection against SFV-induced cell death are specific to BHK-21 cells, two AT3 cell lines, AT3-neo (Cepko et al., 1984) and AT3-bcl2 (Levine et al., 1993), were also characterized. Firstly, AT3-neo cells were infected with SFV4 and the distribution of cytochrome c was determined. Infected AT3-neo cells showed extensive virus-induced cytopathic effects but, as with BHK-21 cells, no detectable release of cytochrome c (Fig. 7). Similarly, no detectable release of cytochrome c was observed in AT3-neo or AT3-bcl2 cells infected with SFV-d1EGFP-EMCV-bcl2 (Fig. 8A–D and I–L) or SFV-PL-d1EGFP VLPs (Fig. 8E–H and M–P). Therefore, we conclude that the inability of SFV to induce cytochrome c release from mitochondria is not a BHK-21 cell line-restricted phenomenon. Secondly, the viability of AT3-neo and AT3-bcl2 cells infected with SFV VLPs was analyzed. It should be noted that we were unable to obtain highly efficient infection of the AT3-neo and especially the AT3-bcl2 cells using SFV VLPs. Typically, up to 20% of AT3-neo and 10% of AT3-bcl2 cells were infected when an amount of VLPs corresponding to moi = 20 in BHK-21 cells was used for infection; using larger amounts of VLPs did not result in higher percentages of infected cells. Thus, AT3 cells are not as susceptible to SFV infection as BHK-21 cells, and sorting of EGFP-positive (i.e. infected) cells prior to the viability analysis was required. This revealed that, in the case of AT3-neo cells, expression of Bcl-2 by any SFV replicon vector did not provide any protective effect, and almost all infected cells were dead by 48 h p.i. (Fig. 9a). Thus, the results were highly similar to those obtained in BHK-21 cells. In contrast, we found that almost all AT3-bcl2 cells infected with SFV VLPs were viable at 24 h p.i., and over the next 24 h the numbers of viable cells rapidly increased in all samples, indicating efficient cell division (Fig. 9b). Since cell sorting ruled out the possibility of non-infected cells being present in the analyzed samples, we concluded that AT3-bcl2 cells infected with SFV replicons were either able to recover from the infection or, alternatively, established a persistent infection. Our observation that surviving AT3-bcl2 cells rapidly lost EGFP fluorescence favours the first option. Expression of Bcl-2 by the SFV replicons had little or no effect on the survival and subsequent division of infected AT3-bcl2 cells. Taking all results together, we conclude that it is not Bcl-2 expression as such but some other property of the AT3-bcl2 cells that is likely responsible for this phenomenon.

4 Discussion

Previously published data on the mechanisms of alphavirus-induced cell death are conflicting (Glasgow et al., 1997; Griffin, 2005; Li and Stollar, 2004). On the one hand, induction of apoptotic cell death via the death receptor pathway has been suggested (Li and Stollar, 2004; Nava et al., 1998); on the other hand, it has been shown that over-expression of the anti-apoptotic Bcl-2 can protect against alphavirus-induced apoptosis and pathogenesis (Levine et al., 1993; Lundstrom et al., 1997; Mastrangelo et al., 2000; Scallan et al., 1997).
However, these studies used different alphaviruses, strains, cell lines and experimental systems, and it is possible that the mechanisms by which cell death is induced differ between viruses, cell types and experimental conditions. In this study, shut-off of cellular translation and cell death in SFV-infected BHK-21 cells were studied using a novel bicistronic replicon vector system. This approach avoids transient or stable expression of Bcl-2 in the host cells. In transiently transfected cells, cell metabolism can be seriously altered by the transfection procedure (Lepik et al., 2003). Stable expression of Bcl-2, which is known to have oncogenic properties (Reed, 1994; Reed et al., 1991), may lead to adaptation of the cell, which in turn might affect virus replication and the outcome of infection. It is also important to note that both transient and stable cellular expression are mediated by cellular RNA polymerase II and are, therefore, subject to the alphavirus-induced transcriptional shutdown, which starts early in infection. In this study, Bcl-2 was expressed using novel vectors containing IRES elements. The use of IRESs has significant advantages over other systems. The first important advantage, as shown by our results (Fig. 2), is that IRES-mediated expression of Bcl-2 can take place directly from the incoming genomic RNA. In contrast, expression from the subgenomic promoter requires replication and is activated later in infection. Early expression of Bcl-2 may be crucial, since it is not known when and how cell death is induced, and expression of Bcl-2 from the subgenomic promoter may come too late to protect from cell death. Another important advantage of IRES-mediated expression is the level of expression, which is significantly higher than that from the minimal subgenomic promoter (Fig. 2). It is also important to mention that, at least for the EMCV-IRES, synthesis of Bcl-2 was resistant to the inhibition of cellular translation (Fig. 3D). Detection of Bcl-2 protein by immunoblotting in BHK-21 cells infected with VLPs of SFV1 vectors harboring bcl-2 revealed a lower molecular weight band, which increased in intensity in proportion to Bcl-2 expression. This is presumably protein expressed from an internal initiation site (Tsujimoto and Croce, 1986). The truncated protein could also be a product of Bcl-2 cleavage by endogenous caspase-3, resulting in truncated Bcl-2 with pro-apoptotic activity (Cheng et al., 1997; Kirsch et al., 1999). The latter localizes to mitochondria and causes the release of cytochrome c, thus promoting further caspase activation as part of a positive feedback loop (Kirsch et al., 1999). It has been shown that alphaviruses induce apoptosis in Bcl-2-overexpressing cells by caspase-mediated proteolytic inactivation of Bcl-2 (Grandgirard et al., 1998), and the viral capsid protein was suggested to trigger activation of the cell death machinery. However, in this study neither release of cytochrome c nor an increase in the proportion of the truncated product due to the action of such a feedback loop was observed. Moreover, because the replicons delivered by VLPs do not encode the structural proteins, these infections exclude the possibility that expression of the capsid protein alone triggers apoptosis. Simultaneous expression of two target proteins was achieved with all the bicistronic SFV expression vectors constructed in this study. The expression level of d1EGFP was slightly reduced for the bicistronic vectors compared to the monocistronic SFV-PL-d1EGFP, independent of the nature of the second gene (bcl-2 or HcRed) (Fig. 2).
Interference between the native subgenomic promoter and the duplicated promoter or IRES element may have influenced expression from the first promoter. It is also possible that the presence of the IRES elements had some effect on replication; however, the infectivity of the bicistronic vectors was not altered. Taken together, these results indicate that different IRES elements can be used to construct novel and efficient bicistronic SFV vectors. In this study, we show that early and high-level Bcl-2 expression, achieved with bicistronic SFV replicon vectors, did not have a detectable effect on host protein synthesis shut-off (Fig. 3) or cell death (Fig. 4). This cannot be attributed to delayed expression of Bcl-2, since high expression levels were already observed at early time points (Fig. 2). These findings are consistent with our data indicating that infection of BHK-21 cells with SFV replicons or with SFV4 does not cause the release of cytochrome c from mitochondria (Figs. 5 and 6). This effect was not restricted to BHK-21 cells, since similar results were also obtained for AT3-neo (Figs. 7 and 8) and AT3-bcl2 (Fig. 8) cells. Release of cytochrome c in SFV-infected BHK-21 or AT3-neo cells did not occur even at late time points, which explains the lack of a protective effect of Bcl-2 expression against cell death in infected cells. At the same time, the mechanism of cell death in infected cells remains unknown. We can rule out crucial roles for binding of virions to the cellular receptor and/or virus internalisation, since induction of cell death does not depend on whether the cells are infected with virus or transfected with infectious transcripts. It seems most likely that cell death is induced by some non-mitochondrial pathway. This is consistent with previous data suggesting that apoptosis in alphavirus-infected cells uses the death receptor pathway (Nava et al., 1998; Li and Stollar, 2004). However, our findings do not exclude the possibility that cell death in SFV-infected BHK-21 cells may be triggered by the ER pathway. A delay of cell death in an SFV-infected AT3 cell line expressing bcl-2 has been reported (Scallan et al., 1997). Our results confirm that the AT3-bcl2 cell line is remarkably resistant to infection by SFV VLPs; however, our data are more consistent with the hypothesis that these cells do not just delay cell death but are also capable of recovering from SFV infection. The finding that Bcl-2 expression by SFV replicons did not have any effect on the survival of AT3-neo cells suggests that the resistance of the AT3-bcl2 cells to SFV infection most likely represents an indirect effect of Bcl-2. The bcl-2 gene is a potent oncogene, and its constitutive expression significantly changes the cell cycle and gene expression; indeed, it is evident from the data presented in Fig. 9 that AT3-bcl2 cells grow much more rapidly than AT3-neo cells. It could be hypothesized that constitutive expression of Bcl-2 in AT3-bcl2 cells has resulted in changes that protect the cells from virus infection. The most likely candidates for such factors are components of the innate immune system and/or host factors such as the rat zinc-finger antiviral protein, which provides resistance against alphavirus infection (Bick et al., 2003).
[ "bcl-2", "cell death", "semliki forest virus", "alphaviruses", "at3" ]
[ "P", "P", "P", "P", "P" ]
Ann_Biomed_Eng-4-1-2239251
Biomechanical Analysis of Reducing Sacroiliac Joint Shear Load by Optimization of Pelvic Muscle and Ligament Forces
Effective stabilization of the sacroiliac joints (SIJ) is essential, since spinal loading is transferred via the SIJ to the coxal bones, and further to the legs. We performed a biomechanical analysis of SIJ stability in terms of reduced SIJ shear force in standing posture using a validated static 3-D simulation model. This model contained 100 muscle elements, 8 ligaments, and 8 joints in the trunk, pelvis, and upper legs. Initially, the model was set up to minimize the maximum muscle stress. In this situation, the trunk load was mainly balanced between the coxal bones by vertical SIJ shear force. An imposed reduction of the vertical SIJ shear by 20% resulted in a 70% increase of the SIJ compression force due to activation of hip flexors and counteracting hip extensors. Another 20% reduction of the vertical SIJ shear force resulted in a further increase of the SIJ compression force by 400%, due to activation of the transversely oriented M. transversus abdominis and the pelvic floor muscles. The M. transversus abdominis crosses the SIJ and clamps the sacrum between the coxal bones. Moreover, the pelvic floor muscles oppose lateral movement of the coxal bones, which stabilizes the position of the sacrum between the coxal bones (the pelvic arc). Our results suggest that training of the M. transversus abdominis and the pelvic floor muscles could help to relieve SI-joint related pelvic pain.

Introduction

The human body uses an ingenious 3-D framework of bones, joints, muscles, and ligaments for posture and movement. In upright posture, the trunk load passes through the sacroiliac joints (SIJ). The orientation of the SIJ surfaces, however, is more or less in line with the direction of loading, which induces high shear forces between the sacrum and the coxal bones [34]. The SIJ have a strong passive, viscoelastic ligamentous system for providing stability. These ligaments are vulnerable to creep under constant trunk load and need to be protected against high SIJ shear forces [19]. From a biomechanical point of view, an active muscle corset that increases the compression force between the coxal bones and the sacrum could protect the ligamentous system and support the transfer of trunk load to the legs and vice versa. Interlocking of the SIJ may be promoted by transversely oriented muscles, e.g., the M. transversus abdominis, M. piriformis, M. gluteus maximus, M. obliquus externus abdominis, and M. obliquus internus abdominis, which has been described as self-bracing [33–35]. However, due to the complex lines of action of (counteracting) muscles and ligaments in the pelvic region, it is difficult to demonstrate the contribution of transversely oriented muscles to SIJ stability, in vitro as well as in vivo. In the past, a number of biomechanical models of the lumbosacral region (spine and pelvis) have been developed to study the aetiology of low back pain (LBP) in relation to (over)loading of the lumbar spine [1, 9, 11, 36] and the pelvis [10, 27]. Most of these models dealt with mechanical stability in terms of muscle forces [3, 4, 18] and compression forces between (lumbar) vertebrae [2, 8, 12, 20, 22, 28]. A different approach is to relate LBP to overloading of the SIJ and nearby ligaments, for example the iliolumbar ligaments [24, 30]. The load transfer through the SIJ was studied using a static, 3-D biomechanical simulation model based on the musculoskeletal anatomy of the trunk, pelvis, and upper legs [15]. This simulation model calculates the forces in muscles, ligaments, and joints that are needed to counterbalance trunk weight and other external forces.
It was shown that this simulation model underestimated antagonistic muscle activity, but good agreement was found for agonist muscle activity. The number of passive structures in the model was small; for example, no joint capsules were incorporated. Therefore, the model was only valid for postures in which none of the joints was near an end position. The aim of the present study was to determine which muscles have to become active in the 3-D pelvic simulation model when there is an imposed reduction of the vertical SIJ shear force.

Materials and Methods

The 3-D Simulation Model

The present study was performed using the validated 3-D simulation model described by Hoek van Dijke et al. [15]. The model is based on the musculoskeletal anatomy of the trunk, pelvis, and upper legs, including muscle and ligament attachment sites, cross-sectional areas of muscles, and the directions of muscle, ligament, and joint reaction forces. The geometry of this model was based on structures extracted from MRI slices and on previously published data on lumbar spine [8] and upper leg [4, 17] geometry. Figure 1a illustrates, in frontal and median view, the bones on which the muscle and ligament forces act in the simulation model. These are the lowest thoracic vertebra, the five lumbar vertebrae, the sacrum, the left and right coxal bones, and the left and right femurs. The vertebrae are treated as a single structure. The arrangement of the bones depends on the static posture for which muscle forces are calculated, for example standing with or without trunk flexion. A description of the model equilibrium, its optimization scheme, and the validation of some of the parameters is presented in the Appendix. In the present study, we focus on the compression and shear forces in the SIJ. These forces are represented as perpendicular vectors. The normal vector of the SIJ surface has an oblique direction, (x, y, z) = (0.365, ±0.924, 0.114). The compression force is defined along this normal vector and can only vary in magnitude. One of the two components of the SIJ shear force was defined in the YZ-plane, along (x, y, z) = (0, ±0.123, 0.992); this force is denoted as the vertical SIJ shear force. The directions of the SIJ compression force and the vertical SIJ shear force in the YZ-plane are shown in Fig. 1a, left panel. Figure 1b shows the vectors representing the most important muscle and ligament forces in the pelvic region superimposed on the bones. Bone shapes are for illustration purposes only; they are not part of the simulation model. In total, the model contains 100 vectors for muscle forces, 8 vectors for ligament forces, and 22 vectors for joint forces; see Table 1 for a list of all the structures.
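Given the two unit vectors above, any SIJ reaction force can be split into a compression component along the joint normal and a vertical shear component, and the angle between the reaction force and the normal follows from the dot product. The following is a minimal numpy sketch for the right SIJ, taking the ± signs such that the two directions are orthogonal; the example force vector is invented, chosen only to roughly mimic upright standing:

    import numpy as np

    n_sij = np.array([0.365, -0.924, 0.114])   # SIJ surface normal, right side
    e_vshear = np.array([0.0, 0.123, 0.992])   # vertical shear direction (YZ-plane)

    def decompose_sij(force):
        """Split an SIJ reaction force (N) into compression, vertical shear,
        and the angle (deg) between the force and the joint normal."""
        f = np.asarray(force, dtype=float)
        compression = f @ n_sij
        vertical_shear = f @ e_vshear
        angle = np.degrees(np.arccos(compression / np.linalg.norm(f)))
        return compression, vertical_shear, angle

    # Invented reaction force that is mostly vertical shear, as in standing:
    print(decompose_sij([33.6, -15.8, 569.0]))  # ~ (92 N, 563 N, ~81 deg)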
Figure 1. Panel (a) shows the bones on which the muscle, ligament, and joint reaction forces act in the frontal plane (left) and the median plane (right): the lowest thoracic vertebra, the five lumbar vertebrae, the sacrum, the left and right coxal bones, and the left and right femurs. The coordinate system is defined with the origin halfway between the rotation centers of the hip joints. Axes: x posterior, y left, z vertical. Panel (b) shows, superimposed on the bones, the vectors representing the most important force components in the frontal plane (left) and the median plane (right); see also Table 1. The labels in this panel refer to a selection of the muscle structures listed in Table 1.

Table 1. List of muscles, ligaments, and joints and the number of force vectors used in the simulation model of trunk load transfer from the lumbar spine via the pelvis to the upper legs (unilateral), see also Fig. 1.

 1   M. adductor brevis              2   Upper and lower muscle
 2   M. adductor longus              1
 3   M. adductor magnus              3   Upper, middle, and lower muscle
 4   M. biceps femoris               1
 5   M. coccygeus                    1   Pelvic floor muscle
 6   M. iliococcygeus                1   Pelvic floor muscle
 7   M. pubococcygeus                1   Pelvic floor muscle
 8   M. gemellus inferior            1
 9   M. gemellus superior            1
10   M. gluteus maximus              2   Femur–sacrum and ilium muscle
11   M. gluteus maximus fascia       2   Ilium–femur and trunk muscle
12   M. gluteus medius               3   Upper, middle, and lower muscle
13   M. gluteus minimus              3   Upper, middle, and lower muscle
14   M. gracilis                     1
15   M. iliacus                      1
16   M. longissimus                  1
17   M. iliocostalis                 1
18   M. multifidus                   1
19   M. obliquus externus abdominis  2   Ventral and dorsal muscle
20   M. obliquus internus abdominis  2   Ventral and dorsal muscle
21   M. obturatorius externus        1
22   M. obturatorius internus        1
23   M. pectineus                    1
24   M. piriformis                   1
25   M. psoas                        2   Upper and lower muscle
26   M. quadratus femoris            1
27   M. quadratus lumborum           5   Sacrum–rib 12, L1, L2, L3, and L4 muscle
28   M. rectus abdominis             1
29   M. rectus femoris               1
30   M. sartorius                    1
31   M. semimembranosus              1
32   M. semitendinosus               1
33   M. tensor fasciae latae         1
34   M. transversus abdominis        1
 A   Iliolumbar ligament             1   Transversal plane
 B   Posterior sacroiliac ligament   1   Transversal plane
 C   Sacrospinal ligament            1
 D   Sacrotuberous ligament          1
 I   L5–S1 joint                     3   Shear (two directions) and compression
II   SI joint                        3   Shear (two directions) and compression
III  Hip joint                       3   Shear (two directions) and compression
IV   Knee joint                      3   Shear (two directions) and compression
 V   Pubic symphysis                 1   Compression

Simulations and Data Analyses

A first simulation, with the model in standing posture and a trunk weight of 500 N, showed that the vertical SIJ shear force was 563 N on each side of the sacrum. To find the muscles that promote sacroiliac joint stability, the maximum value for the vertical SIJ shear force was then decreased in steps of 30 N (∼5% of the initial vertical SIJ shear force). Theoretically, lowering the imposed vertical SIJ shear force to 0 N could induce a non-physiological equilibrium between muscle, ligament, and joint forces. In addition, when the model was set in 30° flexion, the force in the iliolumbar, the sacrotuberous, and the posterior sacroiliac ligaments was 250 N. This value was set as the maximum physiological ligament force in the simulation model in the upright position, to prevent overloading of the pelvic ligaments implemented in the model. The following criteria were defined to warrant a physiological solution for the muscle and ligament forces:
1. muscle tension must not exceed 240 kPa [16];
2. lowering of the maximum vertical SIJ shear force must result in a reduction of the total SIJ shear force (the combination of vertical and horizontal shear);
3. ligament force must not exceed 250 N.
A muscle was included for further analysis when it produced at least 15% of the maximum muscle stress during the simulation. For all muscles, the maximum muscle stress depended on the calculated minimum muscle stress (see optimization criterion 1 in the Appendix). Two muscle groups were analyzed separately: (1) the muscles whose force increased by at least 80% after the first simulation step, and (2) the muscles whose force increased at least 10-fold after completion of the simulation series.
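The stepwise procedure can be illustrated with a toy version of the optimization scheme: minimize the sum of squared muscle stresses subject to static equilibrium while the allowed vertical SIJ shear is capped and stepwise lowered. This is only a sketch under invented numbers — a planar two-muscle system rather than the full 130-vector 3-D model, with made-up cross-sectional areas and force directions:

    import numpy as np
    from scipy.optimize import minimize

    # Toy equilibrium: two muscle forces f1, f2 and the vertical SIJ shear s
    # balance a 500 N trunk load: a1*f1 + a2*f2 + s = 500.
    A = np.array([20e-4, 10e-4])   # invented muscle cross-sectional areas (m^2)
    a = np.array([0.4, 0.8])       # invented vertical components of muscle pull

    def solve(shear_cap):
        def cost(f):
            return np.sum((f / A) ** 2)          # sum of squared muscle stresses
        # equilibrium gives s = 500 - a @ f; require s <= shear_cap
        cons = [{"type": "ineq", "fun": lambda f: shear_cap - (500 - a @ f)}]
        bounds = [(0.0, 240e3 * Ai) for Ai in A]  # 0 <= f_i <= 240 kPa * A_i
        res = minimize(cost, x0=[10.0, 10.0], bounds=bounds, constraints=cons)
        return res.x, 500 - a @ res.x

    for cap in (563, 443, 323):    # mimic the stepwise lowering of the shear cap
        f, s = solve(cap)
        print(f"cap {cap} N: muscle forces {np.round(f, 1)} N, shear {s:.0f} N")

As in the full model, tightening the shear cap drives the muscle forces, and hence the muscle stresses, upward until the 240 kPa physiological limit is reached, which is where the simulation series ends.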
Results

Table 2 summarizes the muscle (de)activation pattern when the maximum vertical SIJ shear force was stepwise decreased. Initially, the vertical SIJ shear force was 563 N (on each side of the sacrum) at a trunk load of 500 N. The angle between the normal direction of the SIJ surface and the direction of the total SIJ reaction force was 81°, indicating that mainly vertical shear force acted through the SIJ, see Fig. 2a. Force equilibrium was mainly achieved by activation of the M. obliquus internus and externus abdominis, M. iliacus, M. psoas, M. rectus abdominis, M. rectus femoris, and M. tensor fasciae latae, and by loading of the sacrotuberous ligament.

Table 2. Summary of the structures that stabilize the sacroiliac joints in terms of lowered shear. Entries are forces (N) at each preset value of the maximum vertical SIJ shear; a dash indicates that the structure did not reach the inclusion threshold at that step.

Preset vertical SIJ shear (N)      563   533   503   473   443   413   383   353   323

M. adductor longus                   9    18    18    18    18     9     –     –     –
M. coccygeus                         –     1     1     1     1     2     4    10    20
M. iliococcygeus                     –     1     1     1     1     2     4    10    20
M. pubococcygeus                     –     –     –     –     –     –     1     2     6
M. gluteus medius (lower)            –     7    11    13    14    15    27    44    30
M. gluteus medius (middle)           –     5     8    11    10    10    10    29    41
M. gluteus medius (upper)            –     –     –     –     –     –     –    42    90
M. gluteus minimus (lower)           –     –     –     3     3     3     3     4     7
M. gluteus minimus (middle)          –     5    10    11    11    11     5    10    12
M. gluteus minimus (upper)           8    17    18    18    17     5    11    16    25
M. iliacus                          47    47    50    54    58    68    88    85   102
M. obliquus externus abdominis      21    18    17    14    12     –     –     –     –
M. obliquus internus abdominis      15    20    20    23    27    34    29     –     –
M. obturatorius externus             –     –     –    15     4     4     4     5     8
M. pectineus                         8    18    18    18    18     6     –     –     –
M. piriformis                        –    18    26    28    25    22     –     –     –
M. psoas (lower)                    64    50    46    27     –     –     –     –     –
M. rectus abdominis                 27    27    29    31    34    37    51    76    83
M. rectus femoris                   34    34    36    39    42    49    64    62    50
M. sartorius                        16    17    18    19    16    22    14     –     –
M. tensor fasciae latae             31    31    33    35    38    44    58    85    79
M. transversus abdominis             3     5     5     6     7    21    32    53    82
Iliolumbar ligament                  –     –     –     –     –     –    53   250   250
Posterior sacroiliac ligament        –    26    49    62    73   147   250   250   250
Sacrospinal ligament                 –    38   150   159   147   145   132   106    74
Sacrotuberous ligament             206    15    12     1     –     –     –     –     –
SIJ (compression)                   92   121   130   142   154   229   473   607   633
SIJ (horizontal shear)            −132  −141  −160  −154  −142  −154  −208  −226  −233
Total SIJ shear                    579   551   528   497   465   441   436   419   398
Angle of SIJ reaction force (°)     81    78    76    74    72    63    43    35    32
Maximum muscle tension (kPa)        37    37    39    42    45    53    69   125   247

Included are those muscles that produced at least 15% of the maximum muscle stress after each simulation. In the source table, the muscles that increased at least 80% in force after the first simulation step were printed in italic, and the muscles that increased at least 10-fold in force after completion of the simulation series were printed in bold.

Figure 2. Directions of the force in the frontal plane exerted by the right ilium through the SIJ on the sacrum as a reaction to the trunk load, F_trunk. Panel (a): initial loading condition without limitation of the vertical shear component (563 N, see under "initial" in Table 2). This condition led to loading of the sacrotuberous ligaments, F_sacrotuberous lig. (solid thick line). Panel (b): loading condition with the vertical shear component preset at a level 120 N lower than the initial value (see under 443 N in Table 2). This condition led to loading of the sacrospinal ligaments, F_sacrospinal lig. (solid thick line). Panel (c): loading condition with the vertical shear component preset at a level 240 N lower than the initial value (see under 323 N in Table 2). In this situation, the SIJ compression force increased by ∼400%, mainly through the M. transversus abdominis, F_transversus abdominis, and the pelvic floor, F_pelvic floor, muscle forces. The location of these muscles is schematically drawn by the thick solid lines, including the M. pubococcygeus, the M. iliococcygeus, and the M. coccygeus (as drawn from the mid to the lateral position). It also led to loading of the iliolumbar ligaments to the maximum allowed force, F_iliolumbar lig., of 250 N (solid thick line). 3-D images copyright of Primal Pictures Ltd. http://www.primalpictures.com
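The angle row of Table 2 can be checked against its compression and shear rows: since the compression acts along the joint normal and the total shear perpendicular to it, the angle of the total reaction force is arctan(total shear / compression). A quick check on the first, middle, and last columns:

    import math

    # (SIJ compression N, total SIJ shear N, reported angle deg) from Table 2
    for comp, shear, reported in [(92, 579, 81), (154, 465, 72), (633, 398, 32)]:
        angle = math.degrees(math.atan2(shear, comp))
        print(f"computed {angle:.1f} deg, reported {reported} deg")

The same geometry confirms the total-shear row itself, e.g. sqrt(563² + 132²) ≈ 579 N for the initial column.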
When the maximum vertical SIJ shear force was decreased from 563 to 443 N in steps of 30 N, the SIJ compression force increased by about 70%. Force equilibrium was obtained, amongst others, by activation of some of the muscles with a hip flexion component (M. adductor longus, M. iliacus, M. pectineus, M. sartorius, and M. rectus femoris) and some of the counteracting hip extensors (Mm. gluteus medius and minimus and M. piriformis). Most of these muscles became (more) active after we lowered the maximum vertical SIJ shear force by 30 N. This led to unloading of the sacrotuberous ligaments and loading of the sacrospinal ligaments. The angle between the normal direction of the SIJ surface and the direction of the total SIJ reaction force was reduced to 72°, indicating that a combination of reduced vertical SIJ shear and increased SIJ compression could balance the trunk load on the sacrum, see Fig. 2b. Further stepwise reduction of the vertical SIJ shear force resulted in a sharp rise of the maximum muscle stress. The simulation series ended when the maximum physiological muscle stress was exceeded, with the vertical SIJ shear force decreased to about 60% of its initial value. Surprisingly, the activation of some of the hip flexors and extensors had then decreased or even disappeared; this was not the case for the Mm. gluteus medius and minimus. In this simulation, force equilibrium was obtained by activation of the transversely oriented M. transversus abdominis (ventral to the SIJ) and the pelvic floor muscles, i.e., the M. coccygeus, the M. iliococcygeus, and the M. pubococcygeus (caudal to the SIJ). This resulted in a further reduction of the angle between the normal direction of the SIJ surface and the direction of the total SIJ reaction force to 35°, see Fig. 2c. This indicates that the SIJ compression force, which had increased by about 400%, and the reduced vertical SIJ shear force now clamped the sacrum between the coxal bones, see Fig. 2c. The Mm. gluteus medius and minimus contributed to some extent to this increased compression due to a distinct force component in the transverse direction. To maintain force equilibrium, the increased SIJ compression led to loading of the iliolumbar and posterior sacroiliac ligaments up to the preset maximum value of 250 N.

Discussion

In the present study, the simulation model predicted the muscle and ligament forces in the pelvic region when there was an imposed reduction of the vertical SIJ shear force. Initially, the forces acting through the SIJ were mainly vertical shear forces, see Fig. 2a. These forces were not only caused by the trunk load, but also by muscles that act in the longitudinal direction of the spine, for example the M. psoas and M. rectus abdominis. As a result of the forward bending moment, the sacrotuberous ligament was loaded. This large ligament protects the SIJ against excessive flexion of the sacrum relative to the coxal bones. The controlled reduction of the vertical SIJ shear force by 30 N forced some muscles that act as hip flexors and hip extensors to become active. Due to their transverse orientation, especially the Mm. gluteus medius and minimus and the M. piriformis contributed to the increased compression force between the coxal bones and the sacrum. However, these muscles did not contribute enough to self-bracing of the SIJ, because the total force through the SIJ still acted mainly in the vertical direction.
When the vertical SIJ shear was further reduced to about 60% of its initial value, the simulation model predicted that self-bracing mainly resulted from the transversely oriented muscles ventral (M. transversus abdominis) and caudal (pelvic floor) to the SIJ. In this situation, some of the hip flexors and extensors reduced in activity, for example the M. piriformis. Although the M. piriformis has a transverse orientation and crosses the SIJ, its contribution was minimized by the simulation program because this muscle also induces vertical SIJ shear force. The pelvic floor muscles, i.e., the M. coccygeus and the Mm. pubo- and iliococcygeus, contribute to stabilizing the position of the sacrum. It has been suggested that this stabilization by force closure has an analogy with a classical stone arc [33]. When sideways displacement of both ends of the arc is opposed, mechanical equilibrium of the stones is achieved by compression forces and not by shear forces. In the pelvis, the pelvic floor muscles may help the coxal bones to support the sacrum by compression forces, while the shear forces between the sacrum and the coxal bones are minimized, see Fig. 3. Note that the SIJ compression force is defined as the force acting perpendicular to the SIJ surface; therefore, decreasing or increasing this force will not alter the shear forces. The articular surfaces of the SIJ are irregular, which results in bony interdigitation in the SI joint space. The SIJ shear force calculated in the simulation model thus reflects the combination of real joint friction and friction due to this interlocking of the bones. The real joint friction forces may be very small considering the extremely low coefficients of friction between the articular surfaces; the majority of the shear force is effectuated as normal contact pressures due to the bony interdigitation in the SI joint space. It was not possible to calculate the percentage of shear attributable to joint friction force; this would require a more detailed description of the SIJ surfaces.

Figure 3. Analogy of the pelvic bones supporting the trunk with a classical stone arc. The M. transversus abdominis and the pelvic floor muscles caudal to the SIJ mainly oppose lateral movement of the coxal bones. Spinal loading is transferred mainly by compression forces through the SIJ to the coxal bones and further down to the legs. 3-D images copyright of Primal Pictures Ltd. http://www.primalpictures.com

The simulation model predicts that simultaneous contraction of the M. transversus abdominis and the pelvic floor muscles, i.e., the M. coccygeus, the M. iliococcygeus, and the M. pubococcygeus, contributes to lowering the vertical SIJ shear force, increasing the SIJ compression, and hence increasing SIJ stability. We emphasize that this simulation model was set up to estimate the forces acting in the pelvic region under static conditions and that the outcome of the simulations must be interpreted with caution [5]. Nevertheless, in a previous study, co-contraction of pelvic floor muscles and the M. transversus abdominis was shown [26]. This result and the prediction of our simulation model suggest that a protective mechanism against high SIJ shear forces may exist in humans. This mechanism has been investigated in vivo and in vitro.
The contribution of the M. transversus abdominis to SIJ stability was shown in an in vivo study in patients with LBP [25]. An in vitro study in embalmed human pelvises showed that simulated pelvic floor tension increased the stiffness of the pelvic ring in female pelvises [23]. It is worthwhile to further investigate the contribution of both muscle groups simultaneously, not only during stiffness measurements of the SIJ but also during lumbo-pelvic stability tests based on increased intra-abdominal pressure (IAP). It has been shown that the pelvic floor muscles, in combination with the abdominal muscles and the diaphragm, may control and/or sustain IAP to increase lumbar spine stability as well [7, 14]. In the present study, the ligament forces were not allowed to exceed 250 N. The distribution between muscle and ligament forces depended on the maximum muscle stress as formulated in the first optimization criterion presented in the Appendix. Increasing the maximum ligament forces might result in a lower maximum muscle stress, which could lead to a different muscle activation pattern to stabilize the SIJ. A small sensitivity test, however, showed that when the ligament forces were allowed to increase to 500 N, and in a next step to 750 N, the model calculated a similar muscle activation pattern. The outcome of the present study also depended on the choice of the optimization criteria and on the magnitudes of the cross-sectional areas of the muscles. The influence of different criteria was previously investigated for muscle forces in the leg [21]; various choices led to different calculated forces, but the obtained solutions were qualitatively similar, as was the case in our model. When we developed the model, other optimization criteria were also tested, for example minimization of the sum of the muscle forces; however, minimization of the sum of squared muscle stresses yielded the most plausible solutions. The model cannot account for anatomical variations or for detailed variation in muscle attachment sites. Direct in vivo force measurements in the SIJ are not available, so there are no data to confirm the model predictions of the present study. Nevertheless, EMG recordings of (superficial) abdominal and back muscles in various postures showed higher M. obliquus internus abdominis activity when standing upright than when resting on one leg with the pelvis tilted backwards [33]. This muscle is considered one of the self-bracing muscles of the SIJ. It was hypothesized that when standing on one leg, the shear load on the contralateral SIJ is diminished. Posterior tilt of the pelvis with less lumbar lordosis may then lead to less M. psoas major load on the spine, and hence less shear load on the SIJ. These observations indirectly support our finding that transversely oriented muscles reduce SIJ shear forces. We emphasize that the present model served as a tool to investigate the general relations between muscle and ligament forces in the pelvic region. The present simulation results may lead to the development of a new SIJ-stabilizing training program to reduce pain induced by high SIJ shear forces. The effectiveness of such a program, however, can only be tested in an intervention study. The simulation model predicted unloading of the sacrotuberous ligaments and loading of the iliolumbar and posterior sacroiliac ligaments when the vertical SIJ shear force was forced to decrease.
This loading of the dorsal ligaments resulted from the absence of transversely oriented muscles at the dorsal side of the SIJ to counterbalance the activation of the M. transversus abdominis at the ventral side of the SIJ. Loading of the iliolumbar ligament has been related to LBP [24]. It was shown that, in sitting position, the stepwise backward movement of an erect trunk (from the upright position into a slouch) resulted in forward flexion of the spine combined with backward tilt of the sacrum relative to the pelvis [32]. It was also shown that this movement into a sudden or sustained slouch might load the well-innervated iliolumbar ligaments to near failure load [31]. The co-contraction that exists between the deep abdominal M. transversus abdominis and the deep back extensor M. multifidus presumably maintains lumbo-pelvic stability [13]. In the future, we intend to extend the model with co-contraction between the M. transversus abdominis, the M. multifidus, and the pelvic floor muscles to study the prevention of (over)loading of the pelvic ligaments in different static postures.

Conclusions

Effective stabilization of the SIJ is essential in transferring spinal load via the SIJ to the coxal bones and the legs. A biomechanical analysis of the upright standing posture showed that activation of the transversely oriented abdominal M. transversus abdominis and the pelvic floor muscles, i.e., the M. coccygeus and the Mm. pubo- and iliococcygeus, would be an effective strategy to reduce the vertical SIJ shear force and thus to increase SIJ stability. The force equilibrium in this situation induced loading of the iliolumbar and posterior sacroiliac ligaments. The M. transversus abdominis crosses the SIJ and clamps the sacrum between the coxal bones. Moreover, the pelvic floor muscles oppose lateral movement of the coxal bones, which stabilizes the position of the sacrum (the pelvic arc).
[ "sacroiliac joints", "pelvis", "pelvic floor muscles", "static forces", "human posture" ]
[ "P", "P", "P", "R", "R" ]
Mod_Rheumatol-4-1-2275302
Gene therapy for arthritis
Arthritis is among the leading causes of disability in the developed world. There remains no cure for this disease, and the current treatments are only modestly effective at slowing the disease's progression and providing symptomatic relief. The clinical effectiveness of current treatment regimens has been limited by the short half-lives of the drugs and the requirement for repeated systemic administration. Utilizing gene transfer approaches for the treatment of arthritis may overcome some of the obstacles associated with current treatment strategies. The present review examines recent developments in gene therapy for arthritis. Delivery strategies, gene transfer vectors, candidate genes, and safety are also discussed.

Introduction

Rheumatoid arthritis (RA) is the most common inflammatory disorder, affecting approximately 0.5–1% of the North American adult population, and causes significant pathology and functional impairment in affected individuals. RA is a multifactorial disease whose main risk factors include genetic susceptibility, sex and age, smoking, infectious agents, hormones, diet, and socioeconomic and ethnic factors [1]. Those afflicted by RA report a decrease in quality-of-life measures, such as persistent pain, functional disability, fatigue, depression, and an inability to perform daily tasks [2]. RA is also a significant burden on the health care system, with direct and indirect costs ranging between $2,800 and $28,500 per patient per year in developed countries [3]. Although the pathogenesis of RA is not completely understood, specific HLA-DR genes, autoantibody and immune complex production, T cell antigen-specific responses, networks of cytokine production, and a hyperplastic synovium have all been shown to play a role. RA primarily affects the diarthrodial joints of the hands and feet. These joints are normally lined by a thin cell layer (1–3 cells) of both type I (macrophage-like) and type II (fibroblast-like) synoviocytes. In RA, the joint is inflamed and the synovium becomes hyperplastic, creating a pannus of synovial tissue, composed of CD4+ T cells, B cells, mast cells, dendritic cells, macrophages, and synoviocytes, that invades and destroys nearby cartilage and bone. Neutrophils accumulate in the synovial fluid and also contribute to the destructive processes [4, 5]. Macrophages and fibroblast-like synoviocytes (FLS) secrete inflammatory cytokines, such as TNFα, IL-1β, and IL-6, all of which contribute to cartilage and bone destruction. These cytokines have been implicated as major contributors to many aspects of RA, including inflammatory cell infiltration, fibrosis, and T cell proliferative responses [6, 7]. They are thought to act primarily through MAP kinase and nuclear factor κB (NFκB) signaling pathways to activate transcription factors that turn on genes for chemokines and cell adhesion molecules, along with extracellular matrix-degrading enzymes such as the matrix metalloproteinases (MMPs). Chemokines and chemokine receptors have also been linked to RA [6, 8, 9]. Initial treatment strategies for RA include drugs such as non-steroidal anti-inflammatory drugs (NSAIDs) and glucocorticoids. While providing pain relief and decreased joint swelling, these drugs are unable to stop the progression of RA [10]. This led to the use of small molecules such as methotrexate, which were demonstrated to slow disease progression. For many decades these disease-modifying anti-rheumatic drugs (DMARDs) were the best treatment option for RA.
In recent years, a further understanding of the disease process has led to the development of biologic DMARDs, primarily aimed at neutralizing the effects of pro-inflammatory cytokines. The most successful agents in this class are anti-TNF-α molecules and IL-1β blocking agents. Based on the improvement criteria set forth by the American College of Rheumatology (ACR), these drugs are more effective than treatment with methotrexate alone. Even so, fewer than half of the patients show improvement of at least 50% in their ACR scores. In addition, the half-life of these drugs is relatively short, and they require frequent systemic administration in order to be effective [11]. Gene transfer strategies have the potential to overcome some of these limitations, potentially leading to increased efficacy and a decreased frequency of administration.

Gene delivery strategies

Although the majority of the inflammation in RA is localized to the joints, there are systemic components to the disease. Therefore, when developing treatment strategies, one must consider whether the therapy should be delivered locally or systemically. Local administration is attractive because it has less potential for side effects and the treatment is delivered directly to the joint, the main site of inflammation. However, the systemic features of the disease would seem to be left untreated. Interestingly, several researchers have observed a "contralateral effect" in animal models of local gene delivery, where the delivered transgene is protective not only in the injected joint, but also in distal, untreated joints [12–14]. This effect appears to be independent of trafficking of modified immune cells to distal joints, because ex vivo modified fibroblasts alone are still able to confer a contralateral effect [15]. It is also independent of systemic circulating levels of the transgene product and of non-specific immunosuppression; instead, it was found to depend on a complex antigen-specific mechanism [16]. Systemic delivery, typically by intravenous administration, would be expected to have a broader therapeutic effect, but it is also associated with an increase in side effects and toxicity. Another consideration for gene transfer is whether to employ in vivo or ex vivo delivery strategies. In vivo strategies have the advantage of being relatively easy and less expensive; in addition, many more studies have been performed in animal models looking at in vivo gene delivery. Ex vivo strategies, although expensive and time-consuming, have the advantage of being able to treat and select very specific cells, avoiding the possibility that the gene transfer vector could genetically modify a stem cell population and result in oncogene activation. A phase I clinical trial has been performed using a retroviral vector to deliver the IL-1 receptor antagonist (IL-1ra) to cultured autologous synovial fibroblasts [17]. The cells were then injected into the RA patients' joints. After a scheduled arthroplasty 1 week later, significant expression of the transgene was seen in the injected joints. No adverse events were reported. This trial was initiated following previous experiments in which antagonists of both IL-1β and TNF-α were delivered to autologous cultured rabbit fibroblasts ex vivo and injected into arthritic rabbit knee joints, with significant therapeutic benefit [15, 18]. Most ex vivo studies use fibroblast-like synoviocytes (FLS) as the target cell.
These cells have been targeted specifically because they are thought to be directly responsible for cartilage destruction and to drive and perpetuate the inflammatory response and autoimmunity [19]. The disadvantages of using this cell type are that FLS have a low proliferation rate, lack highly specific surface markers, and are a non-homogeneous population. Future studies may examine other strategies, including gene delivery to T cells, dendritic cells, muscle cells, or mesenchymal stem cells [20].

Gene transfer vectors

Gene transfer vectors can be broadly categorized into two groups: viral and non-viral vectors. In general, viral vectors tend to provide longer-term gene expression but often come with additional safety concerns, ranging from the generation of replication-competent virus during vector production to random insertion of the transgene into the genome following treatment and the development of a harmful immune response.

Plasmid DNA

The most common non-viral vector used in arthritis studies is plasmid DNA. Plasmid DNA can be delivered by liposomes, by gene gun, or by direct injection of the plasmid. The use of plasmid DNA tends to be less toxic and less immunogenic than the use of viral vectors, and plasmid DNA is also easy and relatively inexpensive to produce. However, plasmid DNA often leads to low transfection efficiency and short-term expression of the transgene, lasting only 1–2 weeks [21–23]. These limitations make it unlikely that local delivery of plasmid DNA in the joint will be successful. The most success with the use of plasmid DNA in gene transfer for arthritis has been obtained through delivery of transgenes to skeletal muscle. Electrotransfer of soluble TNF-α receptor I variants to the tibial cranial muscle at the onset of collagen-induced arthritis (CIA) led to a decrease in the clinical and histological signs of disease for up to 5 weeks [24]. Similarly, plasmids encoding cDNAs for other anti-inflammatory molecules, such as IL-1ra and a soluble TNFR-Fc fusion protein, have been demonstrated to improve both macroscopic and microscopic scores of CIA when delivered intramuscularly [25, 26]. A plasmid encoding TGF-β delivered to skeletal muscle delayed the progression of streptococcal cell wall-induced arthritis when administered at the peak of the acute phase, and virtually eliminated subsequent inflammation and arthritis when given at the beginning of the chronic phase of the disease [27]. Intramuscular injections of plasmids encoding immunomodulatory molecules such as IL-4, IL-10, viral IL-10, and soluble complement receptor type I have given similar results [28–31]. Intramuscular delivery of a plasmid encoding TIMP-4, an inhibitor of matrix metalloproteinases, completely abolished the development of arthritis in a rat adjuvant-induced arthritis model [32]. Intravenous delivery of a plasmid encoding the heparin-binding domain of fibronectin inhibited leukocyte recruitment and decreased inflammation in CIA [33]. Intradermal injection of a plasmid encoding IL-10 and intraperitoneal injection of plasmid IL-10/liposome complexes have also been demonstrated to delay the onset and progression of CIA [34, 35]. The liposome-delivered DNA was able to maintain expression for only 10 days after injection, significantly less than in the intramuscular studies mentioned above. Recently, chitosan, a polycationic polysaccharide derived from crustacean shells, has been shown to act as an efficient gene carrier to rabbit knee joints both in vitro and in vivo [36].
Other non-viral vectors Other non-viral gene delivery systems that have potential in the treatment of RA are the artificial chromosome expression (ACE) system and the sleeping beauty (SB) transposon system. The ACE system is attractive because it is non-integrating and can provide stable, long-term expression of one or multiple genes. A feasibility study was recently performed in a Mycobacterium tuberculosis rat arthritis model, which demonstrated that rat skin fibroblasts could be modified ex vivo to express a reporter gene from an artificial chromosome. These cells, when subsequently injected into rat joints, demonstrated engraftment into the synovial tissue microarchitecture and detectable transgene expression. The ACE system did not induce the local inflammation at the injection site that is often associated with viral vector administration [37]. The sleeping beauty transposon system melds the advantages of both viral and non-viral vectors, allowing for both integration into the genome and long-term expression. No studies have yet been performed in arthritis models using the sleeping beauty transposon system, but success has been found in both cancer and hemophilia models, suggesting that it may have potential to treat arthritis successfully as well [38]. Viral vectors are by far the most widely used vectors for delivering transgenes in arthritic animal models [39]. There are several different viral vectors that have been examined for use in gene transfer for arthritis, including adenovirus, retrovirus, adeno-associated virus (AAV), and lentivirus, each with their respective advantages and disadvantages. Adenovirus Adenovirus is a non-enveloped double-stranded DNA virus that can infect non-dividing cells and can be produced at high titers. Many gene-therapy studies have been performed with this vector, but it has several limitations that may prevent it from being successful in the clinic. The high prevalence of neutralizing antibodies may prevent successful administration or re-administration. Injected adenovirus also causes a significant inflammatory immune response, which is a safety concern. In addition, adenoviral vectors typically allow for only 1–3 weeks of transgene expression, which would limit their long-term efficacy. Some improvements to adenoviral vectors have recently been made in an effort to improve delivery of transgenes to the synovium. FLS lack the coxsackie-adenovirus receptor (CAR) and are not efficiently transduced by adenovirus. By modifying the fiber knobs on the virus, adenoviral transgene delivery to synoviocytes and synovium was improved dramatically [40, 41]. Other recent improvements include the development of an adenoviral vector with an inflammation-inducible promoter [42]. This would allow expression of the transgene during active disease, but expression would turn off once inflammation was brought under control. Retrovirus Retroviruses, mostly derived from the Moloney murine leukemia virus, have a relatively simple genome and structure. They are enveloped viruses and contain two identical copies of their RNA genome. The key feature of the retroviral life cycle is the ability of the RNA genome to be reverse transcribed into double-stranded DNA, which can then randomly integrate into the genome. They have mostly been used in ex vivo studies and are desirable vectors for several reasons. They can provide long-term stable expression, and their integration into the genome makes it possible to permanently correct a genetic defect [43].
For arthritis in particular, the inflamed synovium appears to be more susceptible to uptake of the virus [18]. The drawbacks to retroviral vectors are that they infect only dividing cells and are produced at low titers. The fact that these vectors integrate into the genome randomly is also a concern. In fact, in a recent clinical trial in France using a retrovirus to correct an X-linked SCID disorder, 3 out of 10 children developed leukemia after the vector inserted in or near a known oncogene. As a result, similar trials in the U.S. for this disorder have been halted until more information can be gathered [44, 45]. Future improvements to these vectors, including the development of self-inactivating vectors, which contain no retroviral promoter or enhancer elements, and the use of vectors from non-oncogenic retroviruses, will hopefully make them safer for clinical use [43]. Lentivirus Lentiviral vectors are derived from retroviruses but have the advantage of infecting non-dividing cells. The most commonly studied lentiviral vectors are derived from either human immunodeficiency virus (HIV) or feline immunodeficiency virus (FIV), although equine infectious anemia virus and visna virus have also been examined [38]. The primary concern with HIV vectors is safety. In contrast, FIV is non-pathogenic in humans and does not cause seroconversion [46]. Using an FIV vector, TNF-α was transduced into primary human FLS with high efficiency. When injected into knees of SCID mice, these cells induced cell proliferation and caused bone and joint destruction [47]. A replication-defective HIV vector encoding endostatin was injected into joints of TNF-α transgenic mice and was shown to decrease synovial blood vessel density and decrease the overall arthritis index [48]. Similarly, intra-articular expression of angiostatin inhibited the progression of CIA in mice [49]. A study examining a VSV-G pseudo-typed HIV vector, which has increased host range and stability, demonstrated that a transgene could be efficiently delivered to the synovium of rat knee joints, with transgene expression lasting up to 6 weeks in immunocompromised animals [50]. AAV One of the most promising gene transfer vectors is AAV, a small, non-enveloped single-stranded DNA virus with broad tissue tropism. It belongs to the Parvoviridae family and has a 4.68 kb genome. AAV normally requires adenovirus or herpesvirus co-infection to produce an active infection. Several serotypes have been identified in primates, with AAV2 being the prototype for most gene transfer studies. Heparan sulfate proteoglycan has been identified as the primary attachment receptor for AAV, with fibroblast growth factor receptor 1 and integrin αvβ5 acting as co-receptors [51]. Although little is known about the details of AAV infection, some of the basic mechanisms have been described. The virus enters the cell by receptor-mediated endocytosis. Acidification of late endosomes leads to AAV release into the cytosol, with subsequent translocation of the virus to a perinuclear region [52–54]. The virus then enters the nucleus by an unknown mechanism that is independent of the nuclear pore complex [55]. Following uncoating, the single-stranded genome is converted to a double strand, and the wild-type viral DNA can integrate into chromosome 19 (the AAVS1 locus) in a site-specific manner [56, 57]. AAV is an attractive vector for gene transfer studies for several reasons.
It has been shown to deliver transgenes to a wide variety of tissues, has low immunogenicity, and mediates long-term gene expression [51]. Recombinant, replication-incompetent AAV vectors have been designed that lack the Rep genes, which are required for integration, so long-term expression with these vectors is thought to be mediated by episomal viral DNA [58]. In addition, AAV vectors have been designed that are able to package double-stranded viral genomes, bypassing a rate-limiting step of viral transduction (second-strand synthesis) and allowing rapid and highly efficient transduction both in vitro and in vivo [59]. Several studies have demonstrated the efficacy of AAV vectors in arthritis models. Primary and recurrent arthritis were suppressed following a single injection of AAV encoding IL-1ra into knee joints of rats with LPS-induced arthritis. Surprisingly, disease-regulated expression of the transgene was observed [60]. Our laboratory has recently observed a similar phenomenon in in vitro cultured FLS infected with AAV, in which inflammatory cytokines can increase transgene expression in these cells in a regulatable, PI3K-dependent manner. Proteasome inhibition has also been shown to enhance AAV transduction of human synoviocytes both in vitro and in vivo [61]. AAV encoding soluble TNF-α receptor type I decreased synovial cell hyperplasia and cartilage and bone destruction in human TNF-α transgenic mice injected intra-articularly with the virus [62]. Intra-articular or peri-articular delivery of AAV encoding IL-4 in CIA mice was also shown to decrease paw swelling, protect from cartilage destruction, and delay the onset of CIA [63, 64]. A vIL-10 transgene delivered by AAV under control of a tetracycline-inducible promoter decreased the incidence and severity of CIA on macroscopic, radiologic, and histologic levels [65]. More recently, angiostatin, an anti-angiogenic molecule, was demonstrated to efficiently decrease development of CIA in the treated joint when delivered by AAV [66]. AAV vectors are in clinical trials for the treatment of cystic fibrosis and hemophilia B, and preliminary results are promising [67]. Targeted Genetics Corporation (Seattle, WA) is currently conducting a phase I clinical trial (13G01; identifier NCT00126724) to assess the safety of using an AAV2 vector to deliver a soluble TNF receptor-Fc fusion gene in RA. Gene transfer strategies IL-1β inhibition Several gene transfer strategies have been aimed at neutralizing the effects of IL-1β. IL-1β communicates with many different cell types in the joint. Its action on these cells leads to recruitment of blood cells into the synovium, increased cartilage destruction, and increased production of other chemokines and pro-inflammatory mediators by macrophages, B cells, and T cells [7]. Neutralization of this key cytokine has proven beneficial in the treatment of RA. Several animal models support using gene transfer to block the effects of this cytokine. As noted above, primary and recurrent arthritis were suppressed following a single injection of AAV encoding IL-1ra into knee joints of rats with LPS-induced arthritis [60]. Adenoviral vectors encoding IL-1ra proved more effective than soluble type I TNF receptor-IgG fusion protein at reducing cartilage matrix degradation and decreasing leukocyte infiltration into the joint space of rabbits with antigen-induced arthritis [13].
Rabbit synovial fibroblasts modified ex vivo using a retrovirus encoding IL-1ra demonstrated a chondroprotective and mild anti-inflammatory effect when injected back into the joint after onset of antigen-induced arthritis [18]. This led to a phase I clinical trial using a retroviral vector to deliver IL-1ra to cultured human autologous synovial fibroblasts. The cells were then injected into RA patient joints. After a scheduled arthroplasty 1 week later, significant expression of the transgene was seen in the injected joints, and no adverse events were reported [17]. Intramuscular injection of plasmid DNA encoding IL-1ra has also been shown to decrease paw swelling and arthritis incidence in a CIA mouse model. Reduced synovitis and cartilage erosion were also seen [26, 68]. In a rat model of bacterial cell wall-induced arthritis, rat synoviocytes were modified ex vivo using a retroviral vector to express IL-1ra. When injected into ankle joints prior to reactivation of arthritis, decreased severity of arthritis and attenuated destruction of cartilage and bone were observed [69]. In a SCID mouse model, human RA FLS transduced with retrovirus encoding IL-1ra and co-implanted with normal human cartilage were able to prevent progressive cartilage degradation compared with controls [70]. 3T3 mouse fibroblasts transfected with plasmid encoding IL-1ra were able to prevent the onset of CIA and cartilage destruction when injected into knee joints of CIA mice [12]. Similar effects on arthritis were seen in a rabbit model of antigen-induced arthritis in which a retroviral vector carrying the IL-1ra gene was used to transduce rabbit fibroblasts ex vivo, with subsequent injection of the cells into the knee joint [15]. More recently, the soluble form of interleukin-1 receptor accessory protein (sIL-1RAcP) was delivered to CIA mice either using an adenoviral vector or by injection of plasmid-transfected 3T3 cells. In both instances, a profound prophylactic effect on the development of CIA was observed [71]. TNF-α inhibition TNF-α is another pro-inflammatory cytokine that plays a key role in the pathogenesis of RA. Many of its effects overlap those of IL-1β listed above. Currently, therapies aimed at neutralizing this cytokine represent the most successful treatment strategies for RA. Current regimens involve receiving injections (etanercept) or infusions (infliximab) every 2 or 8 weeks, respectively. Gene transfer strategies have the potential to provide longer-term control and may also be delivered locally rather than systemically, potentially minimizing treatment side effects. Several studies in animal models have been performed. After one injection at onset of CIA, intramuscular electrotransfer of plasmids encoding soluble TNF receptor I variants led to a decrease in clinical and histological signs of CIA [24]. Expression lasted up to 5 weeks and was at least as efficient as repeated injections of the recombinant protein etanercept in controlling the disease. A similar study using a retroviral vector to deliver the transgene peri-articularly produced similar results and also observed a decrease in systemic levels of IgG2a antibodies to collagen type II [72]. Electrotransfer of a plasmid encoding a soluble p75 TNF receptor:Fc fusion protein was also beneficial in a CIA model and was associated with a decrease in the levels of IL-1β and IL-12 in the paw [25].
Other studies using electrotransfer or intramuscular injection of a plasmid with a doxycycline-regulated promoter to control expression of a dimeric soluble TNF receptor II molecule saw a therapeutic effect on CIA only when doxycycline was administered [73, 74]. In TNF-α transgenic mice, intra-articular delivery of soluble TNF receptor I by adeno-associated virus led to a decrease in synovial cell hyperplasia and cartilage and bone destruction [62]. Similarly, AAV5 encoding sTNFRI-Ig was also able to decrease paw swelling in a rat AIA model when expression was under control of an inflammation-responsive promoter, but interestingly, not if expression was under control of the CMV promoter [75]. Splenocytes from arthritic DBA-1 mice can passively transfer collagen type II-induced arthritis when injected into SCID recipients. If these splenocytes were first modified ex vivo using retroviral vectors to express soluble p75 tumor necrosis factor receptor, the SCID recipients did not develop arthritis, bone erosion, or joint inflammation [76, 77]. Delivery of a rat TNF receptor:Fc fusion protein in a streptococcal cell wall-induced arthritis model, by either plasmid or local or systemic administration of AAV vectors encoding the molecule, led to decreased inflammation, pannus formation, bone and joint destruction, and mRNA expression of joint pro-inflammatory cytokines [21]. Adenoviral delivery of a soluble TNF receptor type I-IgG fusion protein directly to rabbit knees with antigen-induced arthritis reduced cartilage matrix degradation and decreased leukocyte infiltration into the joint space, especially when administered in conjunction with a soluble IL-1 type I receptor-IgG fusion protein [13]. The above animal model data have led to the initiation of a phase I clinical trial using AAV vectors to deliver a soluble TNF receptor:Fc fusion protein (Targeted Genetics Corporation). IL-18 inhibition IL-18 is a pro-inflammatory cytokine that is overexpressed in the synovium of RA patients and correlates with inflammation. Elevated levels of this cytokine are also observed in serum and synovial fluid. Overexpression of an IL-18-binding protein using an adenoviral vector was able to ameliorate arthritis in a CIA model, indicating that neutralization of IL-18 may be an effective strategy in the future treatment of RA [78]. Immune deviation Previously, an imbalance between Th1 and Th2 cytokines was thought to play a role in the pathogenesis of several inflammatory diseases, including RA. Th1 cells secrete cytokines like IFN-γ that promote a pro-inflammatory environment, while Th2 cells secrete cytokines like IL-4, IL-10, and IL-13 that down-regulate Th1 activity. Overproduction of Th1 cytokines was therefore felt to contribute significantly to the pathogenesis of RA. More recently, a new subset of T cells, termed Th17 cells, has been identified [79]. These cells produce, among other cytokines, IL-17, a pro-inflammatory cytokine previously implicated in the pathogenesis of CIA [80]. Dysregulation of Th17 cells and IL-17 overproduction have been implicated in the pathogenesis of inflammatory diseases and the development of severe autoimmunity. In fact, the pathogenesis of RA may be more directly related to an imbalance between Th17 cells and Foxp3-positive regulatory T cells than to an imbalance between Th1 and Th2 cells [81]. Regardless of the exact mechanism, several strategies aimed at immune deviation have been successful in animal models of arthritis and are outlined below.
IL-13 inhibits activated monocytes/macrophages from secreting a variety of pro-inflammatory molecules. A possible role for this cytokine in the pathogenesis of RA was observed when adenoviral delivery of IL-13 to RA synovial tissue explants led to a decrease in IL-1β, TNF-α, IL-8, MCP-1, ENA-78, PGE2, and MIP-1α when compared with controls [82]. Subsequent studies demonstrated that adenoviral delivery of IL-13 directly to ankle joints in a rat antigen-induced arthritis model significantly decreased paw size, bony destruction, vascularization, inflammatory cell infiltration, and inflammatory cytokine production [83]. IL-13 overexpression during immune-complex-mediated arthritis significantly decreased chondrocyte death and MMP-mediated cartilage destruction, despite the presence of enhanced inflammation [84]. Using IL-4 to skew the cytokine profile towards Th2, or perhaps to inhibit Th17 cell production, has also proven successful. Adenoviral delivery of IL-4 to RA synovial tissue explants decreased IL-1β, TNF-α, IL-8, MCP-1, and PGE2 in the culture medium [85]. Intra-articular delivery of adenoviral vectors encoding IL-4 to CIA mice led to enhanced inflammation at onset but less chondrocyte death and cartilage and bone erosion. Proteoglycan synthesis was enhanced and MMP activity was decreased [86]. IL-17, IL-12, cathepsin K and osteoprotegerin ligand mRNA levels were also reduced [86, 87]. Kim et al. observed similar effects on CIA upon local and systemic administration of adenoviral vectors encoding IL-4 [88]. AAV-mediated delivery of IL-4 has also proven beneficial in CIA models [63, 64]. Electrotransfer of an IL-4-encoding plasmid prior to CIA onset decreased synovitis and cartilage destruction, with an associated decrease in IL-1β in the paw and an increased TIMP2:MMP2 ratio [29]. Similar results were also seen using either gene gun delivery or intra-dermal administration of plasmid encoding IL-4 [89]. Both retroviral and adenoviral delivery of IL-4 in a rat antigen-induced arthritis model had beneficial effects [90–92]. Cell-based therapies that deliver IL-4 in arthritis models have also been successful. Injection of fibroblasts transfected with plasmid encoding IL-4 decreased histologic evidence of joint inflammation and destruction in a CIA model [93, 94]. Likewise, collagen type II-pulsed antigen-presenting cells engineered to secrete IL-4 down-regulated CIA [95]. IL-10 is an anti-inflammatory cytokine that has demonstrated benefit in several animal models of RA. Either intramuscular administration via electrotransfer or intra-dermal injection of plasmid encoding IL-10 had beneficial effects on CIA [31, 35]. Similarly, systemic administration of a plasmid IL-10/liposome mixture decreased signs of CIA after a single intra-peritoneal injection [34]. Viral IL-10 (vIL-10) is homologous to human and mouse IL-10; while retaining their immunosuppressive function, it lacks many of their immunostimulatory properties and may therefore be a superior treatment option. An adenoviral vector encoding vIL-10 was able to decrease CIA when delivered locally or systemically [14, 96, 97]. Electrotransfer of viral IL-10 decreased histologic evidence of arthritis in an arthrogen collagen-induced arthritis model and was associated with decreased TNF-α, IL-1β, and IL-6 transcripts in the joint [30]. A tet-inducible vIL-10 transgene delivered by AAV was able to decrease macroscopic, radiologic, and histologic signs of CIA only when doxycycline was administered [65].
TGF-β is a pleiotropic cytokine with many different effects on many different cell types and has been suggested to play a role in RA. While some of its effects, like immunosuppression, would appear to be beneficial for RA, it has also been associated with pro-inflammatory activity. Not surprisingly, data from animal models using gene transfer support both possibilities, making its true role in RA difficult to decipher. Splenocytes from CIA mice were isolated and infected ex vivo with a retroviral vector encoding TGF-β. These cells were then injected into the peritoneal cavity 5 days after arthritis onset. Without TGF-β expression, an exacerbation of arthritis is normally observed. However, TGF-β-expressing splenocytes were able to inhibit this exacerbation and also produced a decrease in MMP2 activity and a transient reduction in anti-collagen type II antibodies [98]. In a rat streptococcal cell wall-induced arthritis model, intramuscular delivery of plasmid encoding TGF-β produced significant decreases in inflammatory cell infiltration, pannus formation, bone and joint destruction, and inflammatory cytokine production [27]. In contrast to the above reports, another study found that injection of adenovirus encoding TGF-β into the knees of rabbits with antigen-induced arthritis resulted in significant pathology in the knee joint and surrounding tissue, suggesting that TGF-β therapy may not be suitable for treating arthritis in some models [99]. CTLA-4Ig fusion protein binds to the co-stimulatory molecules B7-1 and B7-2 present on antigen-presenting cells and blocks CD28/B7 interactions, resulting in decreased T cell activation. It has been shown to ameliorate several experimental autoimmune diseases, including CIA. A single intravenous injection of an adenovirus encoding CTLA-4Ig fusion protein suppressed established CIA at least as efficiently as repeated injections of monoclonal antibody to CTLA-4. Pathogenic cellular and humoral responses were also diminished in the adenoviral vector-treated group compared with the antibody-treated and control groups [100]. CIA could also be inhibited both histologically and clinically by intra-articular administration of a low dose of adenovirus encoding CTLA-4Ig fusion protein [101]. Promoting apoptosis One of the key features of RA is the increased cellularity of the synovial lining leading to pannus formation, which has been shown to contribute to cartilage invasiveness and bone destruction. Promoting synovial apoptosis has been suggested as a treatment strategy for RA. In a rabbit model of arthritis, intra-articular adenoviral delivery of TNF-related apoptosis-inducing ligand (TRAIL) was able to increase apoptosis in the synovial cell lining, decrease inflammatory cell infiltration, and promote new matrix deposition [102]. Mice injected with collagen type II-pulsed antigen-presenting cells engineered to express TRAIL under control of a doxycycline-inducible promoter showed a decreased incidence of CIA and reduced infiltration of T cells into the joint in the presence of doxycycline. In situ TUNEL staining demonstrated TRAIL-induced apoptosis of activated T cells in the spleen [103]. Modulation of TRAIL receptor expression on RA synoviocytes has also been suggested as a gene therapy strategy for the treatment of RA [104]. Injection of adenovirus expressing Fas ligand (FasL) into joints of CIA mice induced apoptosis and ameliorated CIA. IFN-γ production by collagen-specific T cells was also reduced [105].
Ex vivo-modified T cells engineered to express FasL were injected into human RA synovial tissue that had been implanted in SCID mice. Analysis of the tissue following treatment demonstrated that synoviocytes and mononuclear cells present in the tissue had been eliminated by apoptosis through a Fas/FasL interaction [106]. Similar results were observed when adenovirus encoding FasL was injected directly into the implanted tissue [107]. Dendritic cells modified by adenoviral vectors to express FasL were able to suppress CIA when systemically injected and also demonstrated decreased IFN-γ production from spleen-derived lymphocytes and decreased T-cell proliferation in response to collagen stimulation [108]. Fas-associated death domain protein (FADD) also plays a key role in Fas-mediated apoptosis of synovial cells. It was found that adenoviral vectors expressing FADD could induce apoptosis in synoviocytes both in vitro and in vivo, suggesting that this strategy may be effective in the treatment of RA [109]. Anti-angiogenesis Increased cellularity of the synovial lining is also associated with neo-vascularization in the local environment of the joint. This angiogenesis is necessary for the development and maintenance of the pannus and also provides nutrients required for the survival and proliferation of infiltrating inflammatory cells [110, 111]. A peptide targeted to the integrins present in the inflamed synovium and associated with angiogenesis was fused to a pro-apoptotic peptide. Systemic administration of this fusion peptide in a CIA model resulted in decreased clinical arthritis and increased apoptosis of synovial blood vessels [112]. 3T3 fibroblasts modified with retroviral vectors to express angiostatin were able to decrease pannus formation and cartilage erosion when injected into knee joints of mice with CIA. Arthritis-associated angiogenesis was also inhibited [113]. AAV- and HIV vector-mediated delivery of angiostatin to CIA knee joints was similarly beneficial [49, 66]. The potent anti-angiogenic factor endostatin was able to decrease arthritis and reduce blood vessel density in a human TNF-transgenic mouse model of arthritis when delivered to knee joints using a lentiviral vector [48]. Tie2 has also been demonstrated to play a role in angiogenesis in arthritis. Adenoviral delivery of a soluble receptor for Tie2 resulted in a decreased incidence and severity of CIA and inhibition of angiogenesis, and it was associated with decreased bone destruction that appeared to result from a decrease in RANKL [114]. VEGF is another angiogenic factor that promotes synovitis and bone destruction in arthritis. When soluble VEGF receptor I was administered via adenoviral vectors to CIA mice, disease activity was suppressed significantly [115]. Thrombospondins (TSP1 and 2) have also been shown to inhibit angiogenesis and to decrease pro-inflammatory cytokine production in animal models of arthritis [116, 117]. In addition, adenoviral gene transfer of a urokinase plasminogen inhibitor was also able to inhibit angiogenesis in a CIA model [118]. Targeting matrix-degrading enzymes Matrix metalloproteinases (MMPs) degrade extracellular matrix components and have been demonstrated to contribute to cartilage degradation in RA.
Ribozymes and an antisense construct targeting MMP-1, delivered to RA synovial fibroblasts via retroviral vectors, decreased MMP-1 production and reduced the invasiveness of RA synovial fibroblasts in a SCID mouse model of RA, suggesting that this may be an effective approach to inhibiting cartilage destruction in RA [119, 120]. Another strategy that has proven successful is to increase the ratio of tissue inhibitors of matrix metalloproteinases (TIMPs) to MMPs. In a rat antigen-induced arthritis model, intramuscular injection of naked DNA encoding TIMP-4 completely abolished the development of the disease [32]. Likewise, adenoviral delivery of TIMP-1 and TIMP-3 to RA synovial fibroblasts significantly decreased their invasiveness both in vitro and in an in vivo SCID mouse model of RA. These molecules were found to act by both decreasing MMP production and reducing cell proliferation [121]. Targeting NFκB The transcription factor NFκB plays a significant role in the activation of many cytokines that contribute to the pathogenesis of RA. Inhibition of this factor could lead to therapeutic benefit in RA. Injection of decoy oligodeoxynucleotides with high affinity for NFκB into ankle joints of CIA rats significantly decreased joint swelling and joint destruction. The levels of the pro-inflammatory cytokines TNF-α and IL-1β were also decreased in the treated joints [122]. Another group found that inhibiting NFκB in RA synovial fibroblasts using an adenovirus to deliver a dominant negative inhibitor of NFκB led to an increase in apoptosis upon stimulation with TNF-α. These cells are normally resistant to apoptosis when stimulated with TNF-α, suggesting that this strategy may be beneficial in the treatment of RA [123]. In adjuvant arthritis in rats, a dominant negative IκB kinase β (IKKβ) was used to inhibit the NFκB pathway. Delivering the molecule intra-articularly using an AAV5 vector resulted in significantly reduced paw swelling and decreased levels of IL-6 and TNF-α. Bone and cartilage destruction, as well as MMP-3 and TIMP-1 levels, were unaffected. The same vector was also able to efficiently transduce ex vivo-cultured biopsies from joints of human RA patients. TNF-α-induced IL-6 production was significantly decreased in the ex vivo cultures receiving the vector encoding IKKβ [124]. Other strategies Other molecules that have been demonstrated to play a role in arthritis using gene transfer in various in vitro or animal models are Csk, cathepsin L, fibronectin, galectin-1, p16INK4A, p21Cip1, SOCS3, soluble CR1, superoxide dismutase and catalase, Ras, and prothymosin α [28, 33, 125–136]. Vectors and genes used to successfully treat animal models of arthritis are summarized in Table 1.
Table 1 Summary of vectors and genes demonstrated to successfully treat arthritis in various animal models:
Adenovirus: IL-1ra, sTNFRI-Ig fusion protein, sIL-1RAcP, IL-18-binding protein, IL-13, IL-4, vIL-10, CTLA4-Ig, TRAIL, Csk, IFN-β, p16INK4A, p21Cip1, prothymosin-α, soluble VEGF receptor I, Tie2 soluble receptor, FADD, FasL, SOCS3, urokinase plasminogen inhibitor, TIMP-1, TIMP-3, thrombospondin-1, dominant negative NFκB inhibitor
AAV: IL-1ra, sTNFR-Ig, sTNFRI, sTNFR:Fc fusion protein, IL-4, IL-10, angiostatin, dominant negative IKKβ
Retrovirus: IL-1ra, sTNFRI variants, IL-4, TGF-β, angiostatin, soluble complement receptor I, superoxide dismutase, catalase
Lentivirus: angiostatin, endostatin
Plasmid DNA: IL-1ra, sTNFRI variants, sTNFR:Fc fusion protein, sTNFRII, TGF-β, IL-4, IL-10, vIL-10, soluble complement receptor I, TIMP-4, fibronectin peptide, sIL-1RAcP
Safety concerns As with most investigative therapies, one of the most important concerns with regard to gene transfer technologies is the issue of safety. While the FDA has outlined strict standards for clinical trial protocols in gene transfer studies, there have been several instances where the therapy has resulted in serious adverse events in enrolled patients. The most widely known is probably the September 1999 death of Jesse Gelsinger, who had a fatal systemic inflammatory response to adenoviral vector gene transfer [137]. Retroviral vectors have also resulted in insertional mutagenesis leading to a secondary malignancy [44]. Lentiviral vectors, which share a similar integration mechanism, may have this potential as well, although it has never been demonstrated in clinical studies. More recently, in July of 2007, Jolee Mohr, a patient enrolled in Targeted Genetics’ clinical trial for rheumatoid arthritis using AAV vectors, died several weeks following her second injection of the experimental treatment. The trial is currently suspended, and although the investigation into her death is still ongoing, initial results suggest she died of a systemic fungal infection that may be unrelated to the injected virus. This case and the final results of the investigation are likely to have a large impact on the future of gene therapy trials in the United States, as AAV has been heralded as one of the most promising vectors in the field precisely because, until now, it had failed to reveal significant safety concerns in previous studies. Future of gene therapy for arthritis Much progress has been made in the past several years in the use of gene therapy for the treatment of arthritis. However, many obstacles must be overcome for it to become a viable treatment option. Future studies will need to address improving targeted delivery of vectors, regulating transgene expression, obtaining long-term transgene expression, and improving the safety and efficacy of the vectors already in use. Even in light of the recent setback in clinical trials utilizing AAV vectors, the authors believe that the future of gene transfer for arthritis will rely heavily upon this vector. Once the safety issues have been clarified, these trials can hopefully move forward. Presently, AAV would seem to be the viral vector with the best profile in terms of safety, efficacy, and level and duration of transgene expression.
Alternatively, strategies based on siRNA technology or on preventing the dysregulation of Th17 cells are emerging as treatment approaches that may one day outperform the current standards of care for rheumatoid arthritis.
[ "gene therapy", "gene transfer", "rheumatoid arthritis", "inflammation", "cytokines" ]
[ "P", "P", "P", "P", "P" ]
Mechanisms of progression of chronic kidney disease
Chronic kidney disease (CKD) occurs in all age groups, including children. Regardless of the underlying cause, CKD is characterized by progressive scarring that ultimately affects all structures of the kidney. The relentless progression of CKD is postulated to result from a self-perpetuating vicious cycle of fibrosis activated after initial injury. We will review possible mechanisms of progressive renal damage, including systemic and glomerular hypertension and various cytokines and growth factors, with special emphasis on the renin–angiotensin–aldosterone system (RAAS), podocyte loss, dyslipidemia and proteinuria. We will also discuss possible specific mechanisms of tubulointerstitial fibrosis that are not dependent on glomerulosclerosis, and possible underlying predispositions for CKD, such as genetic factors and low nephron number. Introduction Chronic kidney disease (CKD) occurs in all age groups, with an incidence in children between 1.5 per million and 3.0 per million. Renal developmental abnormalities (congenital abnormalities of the kidney and urinary tract, CAKUT) are the most common causes of CKD in children. Other diseases commonly underlying CKD in children include focal segmental glomerulosclerosis (FSGS), hemolytic uremic syndrome (HUS), immune complex diseases, and hereditary nephropathies, such as Alport’s disease [1]. The incidence of diabetes, especially type 2, is increasing in children. Although CKD secondary to diabetes usually does not develop until adulthood, early structural lesions of diabetic nephropathy start in childhood [2]. CKD shares a common appearance of glomerulosclerosis, vascular sclerosis and tubulointerstitial fibrosis, suggesting a common final pathway of progressive injury [3]. Adaptive changes in nephrons after initial injury are postulated ultimately to become maladaptive, eventually causing scarring and further nephron loss, thus perpetuating a vicious cycle that results in the end-stage kidney. We will review possible mechanisms of progressive renal damage, which include, but are not limited to, hemodynamic factors, the renin–angiotensin–aldosterone system (RAAS), various cytokines and growth factors, podocyte loss, dyslipidemia, proteinuria, specific mechanisms of tubulointerstitial fibrosis, and possible underlying predispositions for CKD, such as genetic factors and low nephron number. Systemic and glomerular hypertension Systemic hypertension often accompanies renal disease and may both result from, and contribute to, CKD. Progression of CKD is accelerated by hypertension, and control of blood pressure is key in the treatment of CKD. In addition, the glomerulus has a unique structure, with both an afferent and an efferent arteriole, which permits modulation of glomerular perfusion and pressure without corresponding systemic blood pressure change. The remnant kidney model has been extensively studied to investigate CKD [4]. In this model, removal of one kidney together with infarction or removal of two-thirds of the remaining kidney (i.e. five-sixths nephrectomy) results in progressive hyperperfusion, hyperfiltration, hypertrophy and FSGS [4–6]. Additional models with initial podocyte injury, namely the puromycin aminonucleoside and adriamycin models of renal disease, show initial proteinuria and podocyte damage similar to human minimal-change disease, followed by progressive FSGS [7].
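The hemodynamic burden that the remnant kidney model places on surviving nephrons can be made concrete with simple arithmetic before turning to the micropuncture evidence. The following sketch is illustrative only; the nephron endowment and the assumed partial recovery of total GFR are stated assumptions, not measured values:

```python
TOTAL_NEPHRONS = 2_000_000   # assumed nephron endowment of a healthy kidney pair
BASELINE_GFR = 1.0           # total GFR in arbitrary units

def single_nephron_gfr(remaining_fraction: float, total_gfr: float) -> float:
    """Average filtration carried by each surviving nephron."""
    return total_gfr / (TOTAL_NEPHRONS * remaining_fraction)

baseline = single_nephron_gfr(1.0, BASELINE_GFR)
# After five-sixths nephrectomy, suppose total GFR partially recovers to 40%
# of baseline; the one-sixth of nephrons left must carry that whole load.
remnant = single_nephron_gfr(1.0 / 6.0, 0.4 * BASELINE_GFR)
print(f"per-nephron filtration rises {remnant / baseline:.1f}-fold")  # ~2.4-fold
```

Any compensatory recovery of whole-kidney GFR therefore translates directly into single-nephron hyperfiltration, the starting point of the vicious cycle described next.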
Direct micropuncture studies demonstrated that single-nephron function was increased after renal ablation, leading to the hypothesis that hyperfiltration caused sclerosis, setting in motion a vicious cycle of hyperfiltration and glomerulosclerosis [3, 8]. Maneuvers that decreased hyperfiltration, such as a low-protein diet, angiotensin I converting enzyme inhibitors (ACEIs), lipid-lowering agents, or heparin, were indeed effective in ameliorating glomerular sclerosis. However, in some studies, glomerular sclerosis was decreased without altering glomerular hyperfiltration [9], and glomerular sclerosis occurred in some settings even in the absence of intervening hyperperfusion [10]. Thus, focus shifted to glomerular hypertension as a key mediator of progressive sclerosis. Maneuvers that increase glomerular capillary pressure, such as therapy with erythropoietin, glucocorticoids, or a high-protein diet, accelerated glomerulosclerosis, while decreasing glomerular pressure ameliorated sclerosis. These beneficial effects were particularly apparent when agents such as ACEIs, which decrease glomerular pressure even more than systemic blood pressure, were compared with non-specific antihypertensive agents [11]. Renin–angiotensin–aldosterone system The RAAS has been the focus of investigation of progression in CKD because of the efficacy of inhibition of its components in CKD. ACEIs decrease glomerular capillary pressure by preferential dilation of the efferent arteriole [1], likely mediated both by inhibition of angiotensin II (AngII) and especially by the effect of ACEIs in augmenting bradykinin, which is degraded by angiotensin I converting enzyme (ACE) [12]. Indeed, angiotensin type 1 receptor blockers (ARBs), which do not have this bradykinin-augmenting activity, do not preferentially dilate the efferent arteriole or decrease glomerular pressures to the extent seen with ACEIs in most experimental studies. However, both ACEIs and ARBs have shown superior efficacy in slowing progressive CKD in experimental models and in human CKD [13–16]. ARBs leave the angiotensin type 2 (AT2) receptor active, and may in theory even lead to augmented AT2 effects by allowing unbound AngII to bind to this receptor. The AT2 receptor counteracts some of the classic AT1 receptor actions and thus is mildly vasodilating and mediates growth inhibition and apoptosis [17–20]. Apoptosis is often associated with decreased injury, as injured cells are quickly removed without activation of profibrotic cytokines and chemokines. Absence of AT2 receptor actions, whether by pharmacological inhibition or by genetic absence, indeed resulted in diminished apoptosis after injury, associated with increased fibrosis [21, 22]. Combined ACEI and AT1 receptor antagonist treatment could have a theoretical advantage, allowing further blockade of AngII actions while maintaining preferential local availability of AngII for the AT2 receptor [23]. In an experimental model, combined ACEI and ARB therapy did not result in added benefit on glomerulosclerosis when compared with single-drug therapy achieving similar blood pressure control [24, 25]. However, addition of AT2 receptor inhibition to ARB treatment prevented the beneficial effects of ARBs [26]. A beneficial effect of the AT2 receptor in renal injury was also demonstrated in transgenic mice overexpressing the AT2 receptor. These mice developed less severe injury than did the wild type after subtotal nephrectomy [27].
Results from small clinical studies of human CKD suggest that the combination of ARBs and ACEIs decreases proteinuria more than either agent alone, an effect not attributable to effects on systemic blood pressure [28, 29]. In a large study of hypertensive patients with diabetic nephropathy and microalbuminuria, combined therapy resulted in greater reduction of blood pressure and albuminuria than did therapy with either drug alone [30]. In a Japanese study, in addition to decreased proteinuria, the slope of decline of glomerular filtration rate (GFR) improved with combination ACEI and ARB versus monotherapy [31]. However, complete dose-range comparisons of combined therapy with monotherapy were not made in these clinical trials. A recent review of clinical trials of combination ACEI and ARB therapy in CKD patients supports the conclusion that such combination therapy further decreases proteinuria without significantly increasing adverse side effects [16]. Antifibrotic effects of combination therapy versus monotherapy could include augmented bradykinin and AT2 activity and also decreased urinary transforming growth factor (TGF)-β [32]. In addition, there may be greater suppression of the renin–angiotensin system (RAS) with combined therapy, decreasing both ligand generation, by inhibition of ACE, and binding of any remaining AngII to the AT1 receptor. However, even suprapharmacological doses of ACE inhibitors did not achieve complete suppression of the local RAS in experimental models [33]. Similarly, patients receiving ACEIs long term still have measurable ACE in their plasma. These data support the notion that non-ACE-dependent AngII generation, for example by a chymostatin-sensitive generating enzyme such as chymase, occurs in humans. New directions under investigation include the development of renin antagonists that could obviate these obstacles to optimal inhibition of the RAAS. Renin itself may have direct effects, independent of activation of the RAAS, with renin receptor activity detected on mesangial cells [34]. Many profibrotic actions of the RAAS are mediated directly by AngII. AngII promotes migration of endothelial and vascular smooth muscle cells, and hypertrophy and hyperplasia of smooth muscle cells and mesangial cells [35, 36]. All components of the RAS are present in macrophages, which may thus serve as yet another source of AngII and also respond to ACEIs and ARBs. AngII also induces other growth factors, including basic fibroblast growth factor (basic FGF), platelet-derived growth factor (PDGF) and TGF-β, as well as plasminogen activator inhibitor-1 (PAI-1), all of which may impact on fibrosis (see below) [37–39]. Importantly, new data indicate that aldosterone has both genomic and non-genomic actions to promote fibrosis, independent of its actions to increase blood pressure by mediating salt retention [40, 41]. Aldosterone enhances angiotensin induction of PAI-1 (see below) and also has direct actions on fibrosis [40]. Conversely, aldosterone receptor antagonism with spironolactone decreased injury [40]. PAI-1 deficiency prevented aldosterone-induced glomerular injury but, interestingly, did not alter cardiac or aortic injury in this mouse model, suggesting site-specific and perhaps species-specific mechanisms of aldosterone-PAI-1-mediated fibrosis [42]. In clinical trials, aldosterone antagonism has further decreased proteinuria when added to ACEI and ARB therapy [43, 44]. However, the potential risk of hyperkalemia may limit the ability to add aldosterone antagonism to angiotensin inhibition.
Whether these approaches also apply to children with CKD has not been investigated. Clearly, the RAAS has many non-hemodynamic actions, and thus doses beyond usual antihypertensive doses are potentially of additional benefit. Regression has even been achieved in experimental models with high-dose ACEI/ARB. A shift in the balance of synthesis and degradation of extracellular matrix (ECM) must occur to accomplish regression of sclerosis; endothelial cells must regenerate, mesangial cells must regrow, and finally, podocytes must be restored. New glomeruli cannot be generated after term birth in humans. However, remaining segments of non-sclerotic loops can give rise to more open capillary area by lengthening or branching of the remaining capillaries [45–48]. Recent experimental data show that regression can, indeed, be induced by high-dose ACEI or ARB or spironolactone, linked to decreased PAI-1, restored plasmin activity and capillary remodeling [25, 49–51]. Of note, regression was not associated with increased expression or activity of matrix metalloproteinases-2 or -9, decreased mRNA for TGF-β, or local decreases in TGF-β expression as assessed by in situ hybridization. However, lack of changes in mRNA does not rule out that local changes in TGF-β actions could occur, and clearly, in many systems, TGF-β has been shown to impact on ECM accumulation. Regression is also possible in human CKD, demonstrated in principle by regression of early diabetic sclerosis and tubulointerstitial fibrosis in patients over a 10-year period when the underlying diabetes was cured by pancreas transplantation [52]. Regression of existing lesions also occurred in IgA nephropathy in response to high-dose corticosteroids and tonsillectomy [53]. Specific cytokines/growth factors and progression of CKD Numerous cytokines/growth factors appear to modulate progression of glomerular and tubulointerstitial scarring. These factors and their roles may differ at the various stages of injury. Altered gene expression and/or pharmacologic manipulations in pathophysiological settings have implicated, for example, PDGF, TGF-β, AngII, basic FGF, endothelin, various chemokines, peroxisome proliferator-activated receptor-γ (PPAR-γ) and PAI-1, among others, in progressive renal scarring [10, 54–56]. Current state-of-the-art approaches with proteomic and array analysis of renal tissue in human CKD and in animal models can identify novel targets, markers, and even mediators of progression [57, 58]. Of these many potential molecules of interest, we will discuss only a few that have been investigated in depth. Increased PAI-1 is associated with increased cardiovascular disease and fibrotic kidney disease [59]. Conversely, PAI-1 could be decreased by inhibition of AngII and/or aldosterone, a decrease linked to prevention of sclerosis or even regression of existing kidney fibrosis [25, 38, 51, 60]. AngII and aldosterone can also induce PAI-1 expression and subsequent fibrosis independent of TGF-β activation [61]. Some of the effects of PAI-1 in promoting fibrosis are independent of its effects on proteolysis. PAI-1 also modulates cell migration, perhaps through its interaction with vitronectin [59]. Thus, PAI-1 may, in some inflammatory or interstitial disease settings, increase fibrosis primarily by enhancing cell migration and epithelial-mesenchymal transition (EMT). In contrast, in the glomerulus, the effects of PAI-1 in increasing sclerosis may be predominantly due to its ability to modulate ECM turnover [59].
These data support the view that mechanisms of fibrosis in the interstitium and glomerulus are not identical, and involve complex interactions of parenchymal and infiltrating cells and cytokines, with variable net effects on ECM accumulation. TGF-β promotes ECM synthesis and is a key promoter of fibrosis. The biological actions of TGF-β are complex and depend not only on cell state, but also on the presence of decorin and latency-associated peptide (LAP), both of which can bind TGF-β and modify its activity [37]. TGF-β also induces both PAI-1 and AngII [62]. Animals transgenic for TGF-β developed progressive renal disease [63]. Conversely, inhibition of either TGF-β or PDGF-B decreased mesangial matrix expansion in the anti-Thy1 model [64, 65]. Animals genetically deficient for TGF-β develop lymphoproliferative disease, thought to reflect a loss of the immune regulatory effect of TGF-β [66]. Interestingly, pharmacologic inhibition of TGF-β was more effective at lower doses, with higher doses of anti-TGF-β associated with more fibrosis and greater macrophage influx, perhaps also reflecting effects on TGF-β immune modulation [67]. TGF-β may promote a more fibroblastic phenotype of the podocyte, with loss of differentiation markers and de novo expression of alpha-smooth muscle actin [68]. Although TGF-β promotes growth arrest and differentiation of podocytes at low doses, at higher doses TGF-β causes podocyte apoptosis, mediated by Smad 7 signaling [69, 70]. Loss of podocytes (see below) is a key factor contributing to progressive kidney fibrosis. PPAR-γ modifies numerous cytokines and growth factors, including PAI-1 and TGF-β. PPAR-γ is a transcription factor and a member of the nuclear (steroid) receptor superfamily [71]. On activation, PPAR-γ binds the retinoid X receptor, translocates to the nucleus and binds to peroxisome proliferator activator response elements (PPREs) in selected target genes, modifying their expression. PPAR-γ agonists, such as the thiazolidinediones, are most commonly used to treat type 2 diabetes, owing to their beneficial effects in increasing insulin sensitivity and improving lipid metabolism, and they have correspondingly been shown to decrease diabetic injury in diabetic animal models [72]. Interestingly, PPAR-γ agonists also have antifibrotic effects in non-diabetic, non-hyperlipidemic experimental models of CKD. A PPAR-γ agonist ameliorated the development of sclerosis in these non-diabetic models, linked to decreased PAI-1 and TGF-β, decreased infiltrating macrophages and protection of podocytes against injury [56, 73]. Further study is necessary to determine the specific role each of the above factors plays at varying stages of renal fibrosis. Podocyte loss Podocytes are the primary target in many glomerular diseases, including FSGS and the experimental models of adriamycin- and puromycin aminonucleoside-induced nephropathies [74]. Podocytes are pivotal for maintenance of normal permselectivity, and are a source of matrix in both physiological and pathophysiological settings. The podocyte does not normally proliferate. Loss of podocytes after injury is postulated to be a key factor resulting in progressive sclerosis [74]. This principle was proven in experimental models in mice and rats in which podocyte-specific injury was produced by genetically engineering podocytes to be the only cells expressing a toxin receptor [75, 76]. Injection of toxin then resulted in podocyte loss, the degree of which depended on toxin dose. Animals subsequently developed progressive sclerosis.
Of interest, even though only podocytes were initially injured, subsequent injury rapidly developed in endothelial and mesangial cells as well, with resulting sclerosis. Even when chimeric mice were genetically engineered so that only a portion of their podocytes was susceptible to the toxin, all podocytes developed injury after toxin exposure [77]. These data show that injury can also spread from the initially injured podocyte to initially intact podocytes within a glomerulus, setting up a vicious cycle of progressive injury at the glomerular level [77]. The limited proliferation in the mature podocyte is accompanied by high expression of a cyclin-dependent kinase inhibitor, p27kip1, a rate-limiting step for the growth response of the podocyte [78]. Either too much or too little proliferation of the podocyte in response to genetic manipulation of p27kip1 is postulated to be detrimental [79]. Inadequate growth of the podocyte is postulated to give rise to areas of dehiscence and insudation of plasma proteins, which progress to adhesions and sclerosis [80]. Another cyclin-dependent kinase inhibitor, p21, appears to be necessary for development of injury after five-sixths nephrectomy in mice, pointing to the crucial importance of cell growth responses in determining the response to injury [81]. Podocytes normally produce an endogenous heparin-like substance that inhibits mesangial cell growth; thus, injury may decrease this growth-inhibitory effect and allow increased mesangial growth. Podocytes are also the main renal source of angiopoietin-1 and vascular endothelial growth factor (VEGF), an endothelial cell-specific mitogen that plays a key role in both physiologic and pathologic angiogenesis and vascular permeability [82]. Overexpression or partial loss of podocyte VEGF results in a collapsing lesion or a pre-eclampsia-like endotheliosis lesion, respectively [82]. Podocyte genes and CKD New studies of the molecular biology of the podocyte and identification of genes mutated in rare familial forms of FSGS and nephrotic syndrome, such as nephrin, WT-1, transient receptor potential cation channel-6 (TRPC-6), phospholipase C epsilon, α-actinin-4 and podocin, have given important new insights into mechanisms of progressive glomerulosclerosis. Nephrin (NPHS1), the gene mutated in congenital nephrotic syndrome, encodes a protein localized to the slit diaphragm of the podocyte that is tightly associated with CD2-associated protein (CD2AP) [83]. Nephrin functions as a zona occludens-type junction protein and, together with CD2AP, plays a crucial role in receptor patterning and cytoskeletal polarity and also provides signaling function at the slit diaphragm [84]. Mice with CD2AP knockout develop congenital nephrotic syndrome, similar to congenital nephrotic syndrome of the Finnish type [85]. Autosomal dominant FSGS with adult onset is caused by mutation in α-actinin 4 (ACTN4) [86]. This is hypothesized to cause altered actin–cytoskeleton interaction, causing FSGS through a gain-of-function mechanism, in contrast to the loss-of-function mechanism implicated for disease caused by nephrin mutation [85]. Patients with α-actinin 4 mutation progress to end-stage renal disease by age 30 years, with rare recurrence in a transplant. TRPC-6 encodes a cation channel that is present at several sites, including podocytes. TRPC-6 is mutated in some kindreds with familial FSGS with adult onset in an autosomal dominant pattern [87].
Podocin, another podocyte-specific gene (NPHS2), is mutated in autosomal recessive FSGS with childhood onset and rapid progression to end-stage kidney disease [88]. Podocin interacts with the CD2AP-nephrin complex, indicating that podocin could serve in the structural organization of the slit diaphragm. In some series of steroid-resistant pediatric patients with non-familial forms of FSGS, a surprisingly high proportion, up to 25%, had podocin mutations [89, 90]. However, not all patients with nephrotic syndrome caused by mutation are steroid resistant. Diffuse mesangial sclerosis in a large kindred was recently linked to a truncating mutation of phospholipase C epsilon (PLCE1), and two of those patients responded to steroid therapy [91]. However, in two patients with a missense mutation of this same gene, FSGS lesions developed, demonstrating that a spectrum of structural abnormalities may arise from varying mutations in the same gene. PLCE1 is expressed in the glomerulus, where it is postulated to play a key role in development, perhaps by interacting with other proteins that are crucial for the development and function of the slit diaphragm. WT-1 mutation, which may occur sporadically with only FSGS or be associated with Denys–Drash syndrome, was found in only 5% of steroid-resistant patients [92]. Interestingly, mutations of podocin or WT-1 were not found in relapsing or steroid-dependent pediatric patients [93]. Acquired disruption or polymorphisms of some of these complexly interacting molecules have been demonstrated in experimental models and in human proteinuric diseases. Thus, in puromycin aminonucleoside nephropathy, a model of FSGS, nephrin localization and organization were altered [94]. Similar decreases in nephrin were observed in hypertensive diabetic rat models with significant proteinuria [95]. TRPC-6, a calcium channel, was induced in various non-genetic human proteinuric diseases [96]. Conversely, treatments that ameliorated these experimental models preserved glomerular nephrin expression, for example, providing further support for a key causal role for the slit diaphragm and key podocyte molecules in proteinuria [97]. Whether polymorphisms, compound heterozygosity for mutations, or merely altered distribution and/or expression of any of these proteins contribute to proteinuria or progressive disease in the various causes of CKD in humans has not been determined. Dyslipidemia Patients with CKD frequently have dyslipidemia and greatly increased cardiovascular disease risk, even beyond that predicted by lipid abnormalities [98]. Abnormal lipids are important in modulating glomerular sclerosis in rats; however, analogous studies in humans are still evolving [99–102]. Glomerular injury was increased in experimental CKD when excess cholesterol was added to the diet. Glomerular disease has been reported in the rare familial disease lecithin cholesterol acyltransferase deficiency, and with excess apolipoprotein E. However, renal disease is not typical in the more common forms of primary hyperlipidemia. Patients with minimal-change disease or membranous glomerulonephritis, characterized by hyperlipidemia as part of their nephrotic syndrome, usually do not develop glomerular scarring. However, recent post hoc and meta-analyses of clinical trial data support an association between abnormal lipids and increased loss of GFR and suggest that treatment with statins may not only reduce cardiovascular disease risk, but also benefit progressive CKD.
A post hoc analysis suggests that statins may even slow progression in patients with stage 3 CKD [102]. These beneficial effects of statins appear to extend beyond their lipid-lowering effects [98, 101]. Proteinuria Proteinuria is a marker of renal injury, reflecting loss of normal permselectivity. Further, proteinuria itself has been proposed to contribute to progressive renal injury and inflammation [74, 103]. Increased proteinuria is associated with worse prognosis [104]. Whether proteinuria is merely a marker of injury or a contributor to progressive injury has been debated. In vitro, albumin can increase AngII in tubular cells and in turn upregulate TGF-β receptor expression [105]. However, in most settings, pure albumin per se is not directly injurious. Other filtered components of the urine in proteinuric states, such as oxidized proteins, appear to be more potent in inducing direct injury of tubular epithelial cells and activating proinflammatory and fibrotic chemokines and cytokines. Complement and various lipoproteins are also present in the urine in proteinuric disease states and can activate reactive oxygen species [101, 106]. Proteinuria may thus alter tubule cell function directly, potentially contributing to a more profibrotic phenotype, and also augment interstitial inflammation, in particular by macrophages. Proteinuria may activate many profibrotic pathways through its ability to increase NF-κB activity, and also by other pathways, including, for instance, complement synthesis by tubules [107]. Interventions that are particularly effective in decreasing proteinuria, such as the administration of ACEIs or ARBs, also decrease overall end-organ injury. Whether these beneficial effects are dependent on the reduction of proteinuria has not been proven, in that these interventions have multiple parallel effects that may all contribute to the decrease of fibrosis [107]. Mechanisms of tubulointerstitial fibrosis Tubulointerstitial fibrosis was classically thought merely to reflect glomerular injury and the resulting whole-nephron ischemia in most CKD. Interesting new data point to independent mechanisms of interstitial fibrosis and the importance of the tubulointerstitial lesion in progression. Decreased peritubular capillary density, possibly modulated by decreased VEGF or other angiogenic factors, has been proposed as a mechanism in various progressive renal diseases [108]. Future studies may demonstrate whether these interstitial microvascular lesions are causal or consequential in the development of interstitial injury. Increased numbers of macrophages are closely correlated with both glomerulosclerosis and tubulointerstitial fibrosis and are usually decreased by interventions that decrease fibrosis. These cells are potential sources of numerous cytokines and eicosanoids that affect the glomerulus [109]. Support for this hypothesis is seen with the protective effects of maneuvers that decrease macrophage influx. In a rat model of unilateral ureteral obstruction (UUO), administration of ACEI ameliorated interstitial monocyte/macrophage infiltration and decreased fibrosis [110]. Studies in β6 integrin-deficient mice revealed that infiltrating macrophages do not inevitably transduce fibrotic effects; in these mice local activation of TGF-β is impaired, and they are protected from fibrosis despite abundant macrophage infiltration [61]. Macrophages may even play a beneficial role in limiting scarring. 
The specific role of the macrophage AT1a receptor in renal fibrosis was examined in bone marrow transplantation studies in which wild-type mice with UUO were reconstituted with either wild-type macrophages or macrophages devoid of the AT1a receptor. There was more severe interstitial fibrosis in mice with the AT1a-deficient macrophages, even though fewer infiltrating macrophages were observed, suggesting that the macrophage AT1a receptor functions to protect the kidney from fibrogenesis [111]. In human diabetic nephropathy there is an early increase in total interstitial cell volume (which may represent increased cell size and/or number), preceding the accumulation of interstitial collagen [112]. This is in contrast to the diabetic glomerular lesion, where the expanded mesangial area is largely due to increased matrix accumulation rather than hypercellularity. These interstitial cells could possibly represent interstitial myofibroblasts, postulated to play a key role in interstitial fibrosis. These activated interstitial cells are a major source of collagen synthesis, and increased expression of α-smooth muscle actin (SMA), a marker of myofibroblasts, predicts progressive renal dysfunction in both human and experimental renal disease. The source of interstitial myofibroblasts is a topic of controversy. Bone marrow-derived or potential renal stem cells may give rise not only to interstitial cells but also to regenerating parenchymal cells [113]. Epithelial–mesenchymal transformation (EMT) is another possible mechanism for generation of interstitial myofibroblasts [114]. This seamless plasticity of cells changing from epithelial to mesenchymal phenotypes exists during early development. EMT may also occur in the adult after injury, contributing approximately half of the interstitial fibroblasts in experimental models [114]. Injured tubular epithelial cells can change phenotype both in vivo and in vitro, with de novo expression of a fibroblast-specific protein (FSP1), and possibly migrate into the interstitium as myofibroblasts. The surrounding matrix and basement membrane underlying the tubular epithelium is disrupted by local proteolysis, modulated by an array of cytokines and growth factors, including insulin-like growth factors I and II, integrin-linked kinases, EGF, FGF-2 and TGF-β [114]. Several key factors inhibit EMT, including hepatocyte growth factor and bone morphogenetic protein-7, and thus inhibit fibrosis in experimental CKD [114]. Anatomic and genetic risks for CKD: nephron number and gene polymorphisms Risk for development of CKD and its rate of progression varies in differing populations. CKD associated with hypertension and arterio-nephrosclerosis is particularly common in African Americans, and FSGS is more frequently the underlying cause of steroid-resistant nephrotic syndrome in African Americans and Hispanics than in Caucasians [115, 116]. These varying disease trends in differing ethnic populations could represent both genetic and environmental factors. Low birth weight is epidemiologically linked to increased risk for cardiovascular disease, hypertension and CKD in adulthood. The link is postulated to be due to the decreased nephron number that accompanies low birth weight at term, defined as less than 2,500 g [117, 118]. These fewer nephrons are postulated to be under greater hemodynamic stress, thus contributing to progressive sclerosis. Of interest, low birth weight is much more common in African Americans than in Caucasians and is not accounted for by socioeconomic status [119]. 
Further, glomerular size in normal African Americans is larger than in Caucasians and could possibly reflect a smaller nephron number [120]. In Australian Aborigines, a marked increase in the incidence of CKD is associated with larger but fewer glomeruli and low birth weight [121, 122]. Mechanisms other than hemodynamic stress that could underlie these differences in normal glomerular populations, and also relate to the increased incidence of end-stage renal disease, include functional polymorphisms of genes, such as those of the renin–angiotensin system, that are involved both in renal/glomerular development and in amplified scarring mechanisms [10]. African Americans also have increased severity of renal disease associated with several systemic conditions. The course of lupus nephritis in a prospective trial was more severe in African Americans than in Caucasians, with more extensive crescent formation and interstitial fibrosis and greater likelihood of end-stage renal disease [123]. Even the manifestations of HIV infection in the kidney differ markedly between African Americans and Caucasians: HIV-associated renal disease in African Americans is typically an aggressive collapsing type of FSGS, contrasting with the lower-grade immune-complex-mediated glomerulonephritides seen in Caucasians with HIV infection and renal disease [124]. Genetic background also modulates susceptibility in experimental models, both to podocyte injury (e.g. only the BALB/c mouse strain is susceptible to adriamycin) and to hypertensive injury (e.g. in the five-sixths nephrectomy model, C57BL mice are resistant and 129Sv/J mice are susceptible) and even to diabetic injury [125–127]. There is also accumulating evidence that specific genes in humans modulate the course and rate of organ damage. Polymorphisms in several genes within the renin–angiotensin–aldosterone system (RAAS), including ACE, angiotensinogen and the angiotensin type 1 receptor, have been linked with cardiovascular and renal disorders, including diabetic nephropathy, IgA nephropathy and uropathies [128–133]. The ACE DD genotype, associated with increased RAS activity, was more common in patients with IgA nephropathy who ultimately experienced progressive decline in renal function during follow-up than in those whose function remained stable over the same time [134]. Polymorphisms of TGF-β are also implicated in hypertension and progressive fibrosis. The Arg 25 polymorphism may be increased in African Americans, who may also have greater elevation of circulating TGF-β when they reach end-stage renal disease than do Caucasians [135]. These observations suggest that complex genetic traits can modulate the response of glomerular cells to pathogenic stimuli in experimental models. Whether ethnic differences in development of renal disease in humans reflect contributions of genetic and/or environmental influences remains to be definitively determined. QUESTIONS (Answers appear following the reference list) A 6-year-old African American boy presented with generalized edema, 24-h urine protein excretion of 1.5 g, normal complement levels, and serum creatinine of 0.7 mg/dl. His blood pressure was 110/70 mmHg. His nephrotic syndrome did not respond to an 8-week course of steroids, and a renal biopsy is planned. 
1. The most likely diagnosis in this patient is:
(a) FSGS due to mutation of podocin
(b) Minimal-change disease
(c) FSGS, usual type
(d) Diffuse mesangial sclerosis
(e) Collapsing glomerulopathy

2. For the same patient detailed in question 1, what additional treatment should be initiated at this time to decrease risk of CKD:
(a) Diuretics
(b) Spironolactone
(c) ACEIs
(d) Beta blockers
(e) ACEIs and ARBs

3. In the same patient detailed in the above questions, what parameters would be most important to follow and evaluate for adjustment of therapy:
(a) Edema
(b) White blood cell (WBC) count
(c) Blood pressure
(d) Proteinuria

4. A 14-year-old Caucasian girl was diagnosed with IgA nephropathy, which on biopsy showed fibrocellular crescents, with focal proliferative and secondary sclerosing lesions of glomeruli. Her urine protein excretion was 1.0 g in 24 h. Urinalysis showed frequent red blood cell casts, serum creatinine was 1.2 mg/dl and her blood pressure was 120/93 mmHg. Which of the following mechanisms are likely to contribute to progression of her CKD:
(a) Podocyte loss
(b) Proteinuria
(c) Glomerular hypertension
(d) Infiltrating macrophages
(e) All of the above

5. A 10-year-old Caucasian boy with a history of multiple episodes of steroid-dependent nephrotic syndrome since the age of 4 years now has proteinuria of 3.8 g in 24 h, with unremarkable urinalysis without red blood cell casts; his serum creatinine is 0.6 mg/dl, and his blood pressure is 98/64 mmHg. He has an increased cholesterol level of 480 mg/dl and triglyceride levels are 110 mg/dl. What mechanisms of renal injury are likely to be activated in this child:
(a) Podocyte loss
(b) Proteinuria
(c) Dyslipidemia
(d) Glomerular hypertension
(e) (b) and (c)
[ "angiotensin", "podocytes", "interstitial fibrosis", "glomerulosclerosis", "low birth weight", "angiotensin i converting enzyme inhibitors (acei)", "angiotensin receptors", "angiotensin receptor blockers", "transforming growth factor (tgf)-beta" ]
[ "P", "P", "P", "P", "P", "R", "R", "R", "M" ]
Crit_Care-8-2-420020
Clinical review: The implications of experimental and clinical studies of recruitment maneuvers in acute lung injury
Mechanical ventilation can cause and perpetuate lung injury if alveolar overdistension, cyclic collapse, and reopening of alveolar units occur. The use of low tidal volume and limited airway pressure has improved survival in patients with acute lung injury or acute respiratory distress syndrome. The use of recruitment maneuvers has been proposed as an adjunct to mechanical ventilation to re-expand collapsed lung tissue. Many investigators have studied the benefits of recruitment maneuvers in healthy anesthetized patients and in patients ventilated with low positive end-expiratory pressure. However, it is unclear whether recruitment maneuvers are useful when patients with acute lung injury or acute respiratory distress syndrome are ventilated with high positive end-expiratory pressure, and in the presence of lung fibrosis or a stiff chest wall. Moreover, it is unclear whether the use of high airway pressures during recruitment maneuvers can cause bacterial translocation. This article reviews the intrinsic mechanisms of mechanical stress, the controversy regarding clinical use of recruitment maneuvers, and the interactions between lung infection and application of high intrathoracic pressures. Introduction Mechanical ventilation (MV) is a supportive and life-saving therapy in patients with acute lung injury (ALI) and/or acute respiratory distress syndrome (ARDS). Despite advances in critical care, mortality in these patients remains over 40% [1]. During the past decade the possibility that MV can produce morphologic and physiologic alterations in the lung has been recognized [2]. On histopathologic examination, findings in ventilator-induced lung injury (VILI) do not differ from those in ARDS [2]. To minimize this damage, lung protective strategies to avoid overdistension and cyclic collapse and reopening of alveoli have been used successfully in patients with ARDS receiving MV [3,4]. Recruitment maneuvers (RMs), consisting of sustained inflation to open collapsed alveolar units, have been proposed as an adjunct to MV in anesthesia and ARDS [5,6]. In most ARDS patients, however, lung recruitment and overdistension occur simultaneously at higher intrathoracic pressure [7]. Whether RMs can initiate cellular mechanisms of injury in healthy parts of the lung is unknown. In the present review of the literature we describe the intrinsic mechanisms by which MV inflicts alveolar damage and the controversy regarding the use of RMs as an adjunct to MV. Finally, we discuss the interactions between lung infection and periodic application of high intrathoracic pressure, both in experimental models of ALI and in patients with ARDS. Method To identify the most relevant English-language publications, the Medline database was searched using the following keywords: mechanotransduction, acute lung injury, acute respiratory distress syndrome, mechanical ventilation, ventilator-induced lung injury, overdistension, recruitment maneuvers, and bacterial translocation. Many different methods of RM delivery have been proposed in the literature (Table 1). Several investigators have demonstrated that RMs can increase oxygenation and lung volume in collapse-prone lungs. However, the benefits of RMs in terms of oxygenation and lung recruitment in ARDS patients and in experimental models with alveolar flooding or consolidation are unclear. 
Intrinsic mechanism of ventilator-induced lung injury The mechanical stresses produced by MV at high pressures or volumes, and the forces generated by repeated opening and collapse, lead to upregulation of an inflammatory response, with release of cytokines and chemokines and activation of neutrophils and macrophages that produce lung damage [2]. Injurious MV can lead to end-organ dysfunction, and the inflammatory cascade also plays a pivotal role in the systemic inflammatory response syndrome and in multiple organ system failure [8-10]. Like all adherent cells, alveolar epithelial cells interact with the extracellular matrix through transmembrane adhesion receptors such as integrins. These receptors transmit forces from the surrounding matrix to the cytoskeleton via the focal adhesion complex [11]. When the basement membrane is strained, adherent epithelial cells must change shape and the ratio of their surface (plasma membrane) to their volume must increase. If the plasma membrane is disrupted, intracellular lipid stores are utilized to repair the cell surface. Most breaks are repaired within seconds, usually via a calcium-dependent response [12]. This dynamic remodeling process is the most important determinant of cell wounding [13]. Mechanotransduction is the conversion of mechanical stimuli, such as cell deformation, into biochemical and biomolecular alterations. How mechanical forces can be sensed by cells and converted into intracellular signals is still unclear, but in various experiments it was observed that mechanical stimuli activate nuclear factor-κB, a critical transcription factor that is required for maximal expression of many cytokines involved in the pathogenesis of VILI [14]. It is unknown whether a single stimulus such as RMs applied during MV can trigger the above-mentioned pathways of lung injury, and the long-term benefits and safety of RMs will depend on the extent of this effect. Experimental evidence on recruitment maneuvers In saline-lavaged rabbit lungs, Bond and coworkers [15] found an improvement in respiratory system compliance and oxygenation during high-frequency oscillatory ventilation after RMs. In a similar model, Rimensberger and coworkers [16] showed that a single RM resulted in better oxygenation, without augmenting histologic injury, at positive end-expiratory pressure (PEEP) below the lower inflection point of the respiratory system pressure–volume curve, as compared with the group with PEEP set above the lower inflection point without RM. Furthermore, those investigators showed that a single sustained inflation to 30 cmH2O shifted the ventilatory cycle onto the deflation limb of the pressure–volume curve (Fig. 1). In other words, a RM applied in a recruitable lung increases the amount of recruited tissue at end expiration, favoring tidal ventilation. Some data suggest that RMs have different effects depending on the type of lung insult and on the use of various combinations of tidal volume and PEEP. Whether RMs are necessary to prevent alveolar collapse when optimal PEEP is used remains controversial. Van der Kloot and colleagues [17] studied the effects of RMs on gas exchange and lung volumes in three experimental models of ALI: saline lavage, oleic acid, and pneumonia. After application of RMs, oxygenation improved only in the surfactant depletion group, and only when low PEEP was used. At high PEEP, RMs had no effect in any model. Similar effects were observed in the study conducted by Bond and coworkers [15]. 
Takeuchi and colleagues [18] highlighted the difficulties in maintaining tidal ventilation at high lung volumes. Those investigators showed that, after RMs, PEEP set at 2 cmH2O above the lower inflection point was more effective in maintaining gas exchange and minimizing inflammation and lung injury than was PEEP set at the maximum curvature of the deflation pressure–volume curve. When recruitment was achieved with posture, Cakar and coworkers [19] showed better oxygenation after RMs in the prone than in the supine position, and, importantly, the benefit was sustained at lower PEEP. In other studies, other adjuncts to MV were necessary to keep the lung open after RMs [20]. Lu and coworkers [21] demonstrated that RMs completely reversed the atelectasis, bronchoconstriction, and decrease in arterial oxygen saturation observed after endotracheal suctioning in an anesthetized sheep model. In summary, the beneficial effects of RMs have been demonstrated in animal models of alveolar collapse induced by surfactant depletion. However, the pathobiology of ARDS is more complex and includes altered vascular barrier function and alveolar flooding or consolidation. Indeed, in animal models other than those involving surfactant depletion, the effect of RMs on lung function is less evident. Role of recruitment maneuvers in anesthetized patients Formation of atelectasis and airway closure are mechanisms of impaired gas exchange in anesthetized patients with healthy lungs [22]. RMs have been used successfully to reverse collapsed dependent areas in these patients. Rothen and coworkers [22] found that a pressure of 40 cmH2O maintained for 7–8 s entirely re-expanded the collapsed lung tissue in anesthetized humans, although the net effect on gas exchange may be rather small if areas of low ventilation/perfusion ratio persist even as intrapulmonary shunt decreases. Long-term effects of RMs in anesthetized patients depend on gas composition. Re-expanded lung tissue remained inflated for at least 40 min at low oxygen concentration [5], whereas lung collapse reappeared within minutes with pure oxygen [23]. Finally, RMs followed by moderate PEEP may produce physiologic benefits in patients undergoing upper abdominal, thoracic, or laparoscopic surgery [24,25], and in patients prone to develop a moderate degree of lung injury after surgical procedures [26,27]. Recruitment maneuvers in patients with acute respiratory distress syndrome Since the publication of the reports from Amato and coworkers [3] and the Consensus Conference on ARDS [28], the application of periodic RMs in patients with ARDS has gained acceptance among clinicians, although controversy remains. Among the earliest reports providing evidence that RMs improve lung function was that from Pelosi and coworkers [29], who demonstrated that sighs at 45 cmH2O plateau pressure in patients ventilated with PEEP at 14 ± 2.2 cmH2O significantly improved oxygenation, intrapulmonary shunt, and lung mechanics. Foti and colleagues [6] observed that RMs were effective in improving oxygenation and alveolar recruitment only during MV at low PEEP, suggesting that high PEEP better stabilized alveoli and prevented loss of lung volume. Along the same lines, Lapinsky and coworkers [30] reported beneficial effects on oxygenation, but this effect was sustained only if PEEP was increased after RMs. 
Lim and colleagues [31] found an improvement in oxygenation that persisted for 1 hour after an 'extended sigh'; this effect was partially lost soon after ventilatory support returned to the baseline PEEP level. Other studies have shown a modest and variable effect of RMs on oxygenation when ARDS patients are ventilated with high PEEP. Richard and coworkers [32] demonstrated decreased oxygenation when tidal volume was switched from 10 to 6 ml/kg with PEEP set above the lower inflection point of the pressure–volume curve. However, increasing PEEP and RMs prevented alveolar derecruitment, and RMs performed in patients already ventilated with high PEEP had minimal effects on oxygenation support requirements. Similarly, Villagrá and colleagues [33], studying the effect of RMs superimposed on a lung protective strategy (tidal volume <8 ml/kg and PEEP 3–4 cmH2O higher than the lower inflection point on the pressure–volume curve), found no effect on oxygenation regardless of the stage of ARDS, and in some patients venous admixture increased during RMs (Fig. 2). This deleterious effect suggested that the RMs increased lung volume by overdistending the more compliant, already-opened and aerated alveolar units, favoring blood flow redistribution from overdistended to collapsed lung regions. Furthermore, a negative correlation was found between the lung volume recruited by PEEP before RMs and RM-induced changes in oxygenation, suggesting that RMs are less effective when the lungs have been near-optimally recruited by PEEP and tidal volume. Recently, Hubmayr [34] suggested that alveolar flooding is probably the main mechanism of end-expiratory loss of lung aeration in human ARDS. This may explain, at least in part, why RMs are less beneficial in patients with ARDS. Nevertheless, when sudden derecruitment occurs despite adequate PEEP ventilation, as with the loss in lung volume produced during secretion aspiration, Maggiore and coworkers [35] observed that suctioning-induced derecruitment can be prevented by performing RMs. The cause of ARDS may also influence the response to RMs. In a majority of trauma patients developing ARDS, Johannigman and coworkers [36] found an improvement in oxygenation after RMs in patients receiving MV with low tidal volume and high PEEP. However, when Bein and colleagues [37] analyzed the impact of RMs on intracranial pressure and cerebral metabolism in patients with acute cerebral injury and respiratory failure, they observed an increase in intracranial pressure at the end of RMs and a subsequent reduction in mean arterial pressure, resulting in a decrease in cerebral perfusion pressure. Both normalized 10 min after RMs. Grasso and coworkers [38] found that RMs significantly improved arterial oxygenation and lung volume in patients with early ARDS without impaired chest wall mechanics (i.e. with large recruitment potential). Nevertheless, in the group with low chest wall compliance, RM-induced lung overdistension reduced blood pressure and cardiac output, making RMs ineffective and potentially harmful (Fig. 3). RMs can also be applied during assisted breathing in non-sedated patients. Patroniti and coworkers [39] applied one sigh per minute to baseline pressure support ventilation in patients with early ARDS. They observed a significant improvement in arterial oxygenation associated with an increase in end-expiratory lung volume and respiratory system compliance during the sigh, suggesting that sighs promote alveolar recruitment. 
These changes returned to baseline after the sighs were discontinued. Other studies emphasize the importance of body posture (supine or prone) for the regional distribution of intrapulmonary ventilation and perfusion, and the beneficial effects of the prone position in limiting VILI in experimental animals [40]. Lim and coworkers [31] found that the benefit was significantly greater when patients were in the supine position as compared with those in the prone position, suggesting that patients in the prone position have less collapsed lung. These findings were recently confirmed by Pelosi and colleagues [41], who demonstrated that adding cyclical sighs during ventilation in the prone position provided optimal lung recruitment in the early stage of human ARDS. Finally, two randomized physiologic pilot studies of RMs superimposed on low tidal volume ventilation and moderate to high PEEP, conducted in approximately 100 patients with ALI, showed no clear benefits in terms of oxygenation [42,43]. Moreover, RMs were potentially harmful because some patients developed hemodynamic instability, ventilator dyssynchrony, and pneumothorax after RM. In summary, RMs can be useful in improving oxygenation in patients receiving MV with low PEEP and low tidal volume. However, in patients with ARDS receiving MV with high PEEP levels, the beneficial effects of RMs disappear. RMs may restore lung volume and oxygenation after endotracheal suctioning-induced lung derecruitment in mechanically ventilated patients diagnosed with ALI/ARDS. RMs should be avoided in patients with suspected or documented intracranial hypertension, in patients with a stiff chest wall, and in patients in the late stage of ARDS. Lung infection and mechanical ventilation Recent studies suggest that the detrimental effect of MV may be aggravated when lungs are infected or primed with endotoxin. In ex vivo rat lungs, Ricard and coworkers [44] showed that ventilation that severely injures lungs does not lead to release of significant amounts of inflammatory cytokines by the lung in the absence of lipopolysaccharide challenge. Likewise, in experimental studies other investigators have shown that MV predisposes to development of pneumonia [45] and that coexisting MV and infection have a strong impact on the lung because they appear to act synergistically in causing alveolar damage [46]. Finally, when bacteria were injected into animals with pre-existing severe ALI, MV produced a clinical picture closely resembling that of hyperdynamic sepsis in humans [47]. These experimental studies taken together suggest that, in the presence of lung infection, MV (cyclic positive intrathoracic pressure) predisposes to greater bacterial burden and bacterial translocation from the lung into the systemic circulation than would occur without MV. These effects are particularly important when using ventilatory strategies that apply large transpulmonary pressures (high tidal volume and/or high alveolar pressures without PEEP) [48] and are partially attenuated when protective ventilatory strategies are used [49]. RMs can be applied as sighs or as periodic sustained inflations that can damage or transiently alter the integrity of the alveolar–capillary barrier [50,51]. Whether such strategies to improve lung function can result in failure of the alveolar–capillary barrier and promote transient bacterial translocation in humans remains unknown. 
The amount of recruitable lung parenchyma in patients with ALI/ARDS receiving MV is a matter of debate, and controversy exists regarding the use of RMs in such patients for two main reasons. First, consolidation (non-recruitable lung parenchyma) and sticky atelectasis (potentially recruitable) coexist in different amounts in ALI/ARDS, and cannot be distinguished and quantified at the bedside to inform a decision regarding a recruitment strategy. Second, the amount of lung tissue that can be recruited in some ARDS patients is small [52]. Therefore, RMs can exert little effect on consolidated lung areas but can cause overdistension in some lung regions where bacteria are compartmentalized at the site of infection or colonization. Because spillover of lung cytokines into the systemic circulation is observed in lung inflammation and is potentiated by MV [53], a similar phenomenon is likely to occur when the concentration of bacteria in the lungs is high enough. In a recent study [54] it was found that high-pressure ventilation promoted early translocation of bacteria; however, intermittent RMs applied as a sustained inflation superimposed on low-pressure ventilation without PEEP did not cause translocation of intratracheally inoculated Pseudomonas aeruginosa in rats with previously healthy lungs. However, we do not yet know whether the lung injury model used is valid for human ARDS, or how reproducible short-term experimental findings are in patients receiving MV for days or weeks [55]. Conclusion On the basis of our review of the literature on experimental and clinical studies, considerable uncertainty remains regarding the use of RMs in humans with ARDS. RMs may have a role to play in patients with early ARDS and normal chest wall mechanics, because there is great potential for alveolar recruitment, and after disconnections from the ventilator, when sudden loss of lung volume promotes alveolar instability and derecruitment. Recommendations to use RMs as adjuncts during lung protective ventilatory strategies seem unnecessary because sustained improvements in lung function have not been found when the strategies are combined. The presence of lung infection must be considered a major limitation for aggressive RMs because translocation of bacteria and the occurrence of systemic sepsis have been demonstrated in animal models. Finally, large randomized studies do not support the use of RMs in patients with ARDS. In conclusion, the use of RMs cannot be recommended in the light of current knowledge; if RMs are used, they should be restricted to an individualized clinical decision or reserved as a last resort to improve oxygenation and lung mechanics in a severely hypoxemic ARDS patient. Competing interests None declared. Abbreviations ALI = acute lung injury; ARDS = acute respiratory distress syndrome; MV = mechanical ventilation; PEEP = positive end-expiratory pressure; RM = recruitment maneuver; VILI = ventilator-induced lung injury.
[ "acute lung injury", "mechanical ventilation", "mechanical stress", "lung infection", "lung collapse" ]
[ "P", "P", "P", "P", "P" ]
Eur_J_Pediatr-3-1-1914296
What is new in surgical treatment of vesicoureteric reflux?
In addition to conventional open surgery and endoscopic techniques, laparoscopic correction of vesicoureteric reflux, sometimes even robot-assisted, is becoming an alternative surgical treatment modality for this condition in a number of centres around the world. At least for a subgroup of patients, laparoscopists are trying to develop new techniques in an effort to combine the best of both worlds: the minimal invasiveness of the STING and the same lasting effectiveness as in open surgery. The efficacy and potential advantages or disadvantages of these techniques are still under investigation. The different laparoscopic techniques and available data are presented. Introduction When confronted with the title “What is new in surgical treatment of vesicoureteric reflux?” many readers will automatically think of endoscopic techniques with subureteric injection of bulking agents, also known as STING (Subureteral Teflon INjection). Over the years several substances have been advocated as bulking agents, but the original Teflon is no longer in use. (The most commonly used substance nowadays is Deflux®, a dextranomer/hyaluronic acid copolymer.) However, a technique that has been around for a quarter of a century can hardly be considered for a text on surgical novelties. Instead, this review concentrates on the use of laparoscopic techniques in this setting. General considerations about vesicoureteric reflux Vesicoureteric reflux (VUR) remains one of the most frequent conditions in paediatric urology, although the exact prevalence is largely unknown. VUR can be primary, secondary (e.g. to elevated bladder pressures in neurogenic bladders or dysfunctional voiding) and sometimes intermittent in nature, only disclosing itself when infection has possibly induced a degree of insufficiency of the ureterovesical junction. It is generally assumed that VUR predisposes to urinary tract infections and that surgical treatment of reflux and prophylactic antibiotics are equivalent in terms of preventing infections and renal scarring. The relative merit of these interventions in the natural course of these conditions remains to some extent controversial [32]. The importance of voiding dysfunction with detrusor overactivity, underactivity or dysfunctional elimination disorder in the aetiology of VUR should not be underestimated [4, 30], and this has its implications for the treatment offered to these children. Hence bladder training and minimally invasive techniques have acquired a prominent role over the years. Children with VUR and concomitant voiding dysfunction are likely to suffer more breakthrough infections and have lower spontaneous resolution rates and therefore represent a large proportion of the patients undergoing surgical intervention [31]. Antimicrobials form the mainstay in the treatment of VUR, in combination with other conservative measures, because VUR will spontaneously disappear in a majority of children and rarely gives rise to serious long-term complications [2]. Increasingly, however, the exact role of prophylaxis is being questioned, as well-designed prospective trials are rare [10]. Nevertheless, a small subgroup of patients does pose problems of breakthrough infections despite all conservative measures, and in fact some of them seem prone to renal scarring leading to hypertension and, exceptionally, even end-stage renal failure [16]. Traditional surgical techniques in the treatment of VUR Since the 1950s several surgical techniques have been developed for the correction of VUR. 
All techniques share the same basic principle of creating an anti-reflux mechanism by increasing the portion of the distal ureter lying in a submucosal tunnel between the detrusor muscle and the bladder mucosa. They offer comparable and very high success rates with few complications [12]. From a purely technical standpoint, these open techniques can basically be divided into two groups. There are those that involve mainly or entirely intravesical ureteral dissection (and hence a need for postoperative bladder drainage) and those that use a purely extravesical approach to the ureter without disconnecting it from the bladder. To the former group belong the techniques of Politano and Leadbetter (1958), Glenn and Anderson (1967), the psoas-hitch technique and the (most widely used) Cohen technique (1975) [6, 11, 13, 25]. In these techniques the ureter is disconnected from the bladder and reimplanted in a new and longer submucosal tunnel from the luminal side of the bladder. In the Cohen technique, a cross-trigonal tunnel is created bringing the ureter to the contralateral side; the other techniques result in a more natural course of the ureter, but are somewhat more prone to complications such as bowel injury or kinking of the ureter. The psoas-hitch technique is generally reserved for more complex situations, as in mega-ureters or re-do surgery, and is helpful in creating a longer tunnel. The conceptually different extravesical approach was popularized by Lich and Gregoir, reducing postoperative bladder irritation to insignificance, but predisposing to temporary bladder retention when performed bilaterally [14, 21]. The more recent and certainly minimally invasive STING technique, in which bulking agents are injected submucosally, has gained wide acceptance. Undoubtedly this is technically a very easy, relatively cheap and patient-friendly treatment modality, tempting many doctors into an increasingly pre-emptive approach to VUR, using it as first-line treatment in cases of (antenatally detected) high-grade reflux even in infants [27]. Success rates, even in low-grade reflux, are clearly lower than in open surgery, and a second injection of bulking agent is often necessary [8]. Moreover, prospective randomised trials and long-term results are still not available. The tendency to use this endoscopic technique as an alternative to medical treatment is underscored by the fact that since the Food and Drug Administration (FDA) approval of Deflux® the total number of procedures for reflux has increased, while open surgery rates have remained stable [20]. All these facts and tendencies suggest that, at least for the foreseeable future, there will remain a group of patients in whom STING is deemed, or proves to be, insufficient. Open surgery, on the other hand, has its drawbacks as well due to its invasiveness. In an ideal world physicians would be able to define very precisely, and at the earliest possible point in time, which group of patients with VUR is at increased risk for the complication of pyelonephritic scarring and which group is not. This would in turn allow a very tailored approach to each individual child, with pre-emptive surgical measures in the group at risk. 
Failing this knowledge, the next best thing to aim for is to combine the superior results of time-honoured open procedures like a Cohen reimplantation or Lich-Gregoir operation with the much-sought-after minimal invasiveness of laparoscopy, possibly with the added ultra-precise tissue handling and dexterity of robotic surgery. These considerations are the driving force of the developments described in this text. Conventional laparoscopic techniques Both intra- and extravesical laparoscopic treatments have been described in a great variety of techniques. Most series however remain small and follow-up is very limited. Ehrlich et al. and Janetschek et al. were the first to report, in 1994 and 1995, on two and six children undergoing laparoscopic Lich-Gregoir anti-reflux surgery for vesicoureteral reflux [7, 17]. The reflux was successfully corrected without morbidity, requiring only a short hospitalisation. Peri-operative ureteral stents were deemed unnecessary. One mild unilateral stenosis did develop later, requiring temporary stenting. Ehrlich et al. described decreased peri- and post-operative pain and improved cosmesis by comparison with open surgery. They suggested that this preliminary report deserved further study. Janetschek et al., on the other hand, concluded that the Lich-Gregoir anti-reflux procedure was a complicated one because of the difficult suturing and knot-tying, offering no clear advantage over the conventional procedure. Other teams were very reluctant to join in the efforts to develop this approach for several years to come. The choice of a Lich-Gregoir technique for the first attempts at correction of VUR can be explained by the fact that, at that time, experience with laparoscopy in cavities other than the abdomen was very limited. Five years later Lakshmanan and Fung reported technical modifications to further minimize invasiveness, basically by downsizing ports and instruments and limiting tissue dissection [19]. A more recent paper by Riquelme et al. again reported excellent outcomes in 15 children, even in cases of bilateral reflux and duplex ureters [28]. There was no postoperative voiding dysfunction. Laparoscopic ureteral reimplantation with extracorporeal tailoring and stenting of megaureters, combined with a Lich-Gregoir type of extravesical reimplantation, was recently reported by Ansari et al. in three children [1]. Although the Cohen procedure was the more widely used in the treatment of VUR, a laparoscopic version thereof was investigated later than the extravesical laparoscopic techniques. The obvious reasons are the anticipated difficulties with port placement and the limitations of the intravesical working space. Different approaches were used by Gill et al. and Yeung et al. [12, 33]. Gill et al. combined the use of two suprapubic ports with a transurethral resectoscope for unilateral cases, whereas Yeung et al. used three suprapubic ports, more closely copying the open Cohen procedure. A recent report by Kutikov et al. on either transvesical laparoscopic cross-trigonal ureteral reimplantation in patients with reflux or a Glenn-Anderson reimplantation in patients with a primary obstructing mega-ureter mentions operative success in 25 of 27 patients with VUR and 4 out of 5 patients with mega-ureters, results that are comparable to the ones obtained in open surgery [18]. Complications were postoperative urinary leak in four patients and ureteral stricture at the anastomosis in two. 
The authors noted that most complications occurred in the younger patients with small bladder capacities. For completeness, two papers on reimplantations in (young) adults can be mentioned. Chung et al. described successful laparoscopic nonrefluxing ureteral reimplantation with a psoas hitch, using a submucosal tunnelling technique after submucosal injection of saline under cystoscopy, in two adult female patients without postoperative complications [5]. Also in 2006, Puntambekar et al. described laparoscopic extravesical ureteroneocystostomy with psoas hitch in five gynaecologic patients, clearly minimizing the procedural morbidity [26]. Again no intraoperative or postoperative complications occurred. Gradually, more relevant series with larger numbers of patients and longer follow-up are being presented. At the 2007 European Society for Paediatric Urology (ESPU) annual meeting two groups will present their experience in about 80 patients each, with success rates above 90% (http://www.espu.org). Robot-assisted techniques Over the last 2 years a few authors reported robot-assisted laparoscopic techniques using the Da Vinci® (Intuitive Surgical, Mountain View, CA) system for the treatment of VUR, adding yet another approach to this rapidly expanding field [3, 23, 24]. They made good use of the experience gained with conventional laparoscopy, adding the advantages of robotics: enhanced dexterity of the instruments, absence of tremor and 3-D vision. The generally used term “robotic surgery” is to some extent misleading because it suggests completely autonomous function of the equipment. In reality it works as a master-slave system, merely transferring the movements of the surgeon’s hands to the tips of the instruments (Fig. 1). The evolution parallels the one seen in conventional laparoscopy, experience having started with the extravesical approach and later moving to intravesical procedures. The sequence of surgical steps of both techniques will briefly be discussed. As stated, they closely mirror the steps in conventional laparoscopy. Fig. 1 Outside view once the draped robotic arms are connected to the laparoscopic ports: the child seems completely “embraced” by the machine. Extravesical technique To start, a cystoscopic evaluation of the relevant anatomy is carried out. The camera port is then placed in the umbilicus and the two working ports in each lower abdominal quadrant. A small transverse peritoneal incision is made on the laterodorsal side of the bladder, where the ureter is retrieved. The ureter is then buried in a trough between the mucosa and detrusor to create the anti-reflux mechanism (Figs. 2 and 3). The bladder catheter is removed at the end of the procedure unless a significant perforation requiring suturing of the mucosa has been made. Fig. 2 Extravesical approach: very gently the detrusor muscle is incised and peeled away until the delicate bladder mucosa starts to bulge. Fig. 3 Extravesical approach: the completely freed ureter is hinged into the trough to create an anti-reflux valve mechanism. In our experience there were no bladder symptoms post surgery in any of the patients. All cases of reflux resolved, but there was one case of “de novo” contralateral low-grade reflux [3]. Later we successfully performed this operation in an adult male patient after a failed subureteral injection (unpublished data). Interestingly, Elmore et al. 
recently reported on the use of the open Lich-Gregoir technique as salvage in these patients as well [9]; one similar laparoscopic case had already been reported by Shu et al. in 2004 [29]. Both in open and laparoscopic surgery this was a novel approach, meant to avoid the sometimes difficult intravesical dissection due to foreign material after STING. Peters and Borer reported persisting low-grade reflux in 2 of their 24 patients [23]. Intravesical technique Olsen was the first to experiment with a Cohen cross-trigonal ureter reimplantation by laparoscopic access to the bladder in a pig model using the Da Vinci® system [22]. In all pigs the reflux disappeared after the procedure. The advantage of the robotic equipment seemed to be the better access for submucosal tunnelling of the ureter and the intravesical suturing of the anastomosis. Peters and Woo in 2005 and Callewaert in 2006 reported their experience with robot-assisted Cohen procedures in six and three paediatric patients respectively [3, 24]. Initial port placement and closure of the incisions at the end of the procedure were the crucial steps, the rest of the procedure being straightforward. Once inside the bladder, the mucosa is circumferentially incised around the ostium using the cautery hook. After both ureters are freed, a submucosal tunnel connecting the most proximal parts of the two mucosal incisions is created, using forceps and scissors (Fig. 4). Creation of the submucosal tunnel and reimplantation of the ureters is remarkably easy because of the three-dimensional visualisation and great dexterity inside the very small volume of a child’s bladder. The anatomical detail is such that dissection of the plane between the detrusor and mucosa is achieved with more precision than in open surgery. The bladder catheter is left indwelling for 24 to 48 h. Fig. 4 Intravesical approach: creation of the submucosal tunnel connecting the periureteral incisions. (The jaws of the forceps measure 5 mm in length.) We had one conversion to open surgery out of three cases in our early experience because of port-related problems in a small child [3]. Kutikov et al., using conventional laparoscopy, similarly found that the smaller children were more prone to complications and that these procedures were technically more demanding [18]. Peters and Woo, on the other hand, reported no conversions in a series of six children aged between 5 and 15 [24]. They did however have a case of port-site urinary leakage requiring prolonged bladder drainage. Peters and Callewaert each reported one case of persisting low-grade reflux in their initial experience. Unlike the situation in open surgery, bladder spasms remained completely absent and anticholinergics were unnecessary. This fact is highly suggestive of the minimal invasiveness and limited trauma incurred by the bladder wall. When comparing the robotically assisted intra- and extravesical operations, it is our impression that the Lich-Gregoir technique offers some advantages over the intravesical operation: no need for catheters, no haematuria and easier reproducibility. The drawback is that the abdominal cavity needs to be entered. The abdominal cavity, even in smaller children, is large enough to allow comfortable movement of the instruments, whereas intravesical operations in this patient group can be technically impossible due in part to the relative bulkiness of the robotic instruments. Conclusion Treatment modalities of reflux are evolving rapidly. 
Conventional or robot-assisted laparoscopic techniques must be considered a possible future alternative to the more traditional ways of treating this condition. There is no proven superiority at this time, and experience is limited to a few centres and relatively small numbers of patients. It is well established that with open surgery very high success rates can be achieved, that morbidity is relatively low and that hospitalisation nowadays can be kept short. The first impressions are that morbidity using laparoscopic techniques is lower still and that there is some cosmetic gain, but it is obvious that the most important issue will be whether the long-term success rates are at least comparable. Most surgeons agree that robotics certainly adds to the precision and ease of the individual surgical steps when compared to conventional laparoscopy, but the financial costs are very high. The intravesical approach using robotics is feasible, but technical difficulties must be taken into account in smaller children. (The same holds true for conventional laparoscopy.) The extravesical robotic approach clearly seems the more promising, possibly even after failed submucosal injection therapy. Nevertheless we feel that the intravesical approach deserves further pursuit because it may allow surgical correction of other malformations at the level of the bladder neck and ureterovesical junction in a minimally invasive and very precise way. It would be premature to promote laparoscopy as the golden mean between STING and open surgery for a subgroup of reflux patients at this point, as this would imply diverting a large number of patients to a few centres where either the technical laparoscopic expertise or a robotic system is available. However, we remain convinced that in the (near) future laparoscopy will find its place in the care of these patients.
[ "vesicoureteric reflux", "laparoscopy" ]
[ "P", "P" ]
Ann_Surg_Oncol-3-1-2077912
Population-Based Study of Islet Cell Carcinoma
Background We examine the epidemiology, natural history, and prognostic factors that affect the duration of survival for islet cell carcinoma by using population-based registries. Islet cell carcinomas are low- to intermediate-grade neuroendocrine carcinomas of the pancreas. Also known as pancreatic endocrine tumors or pancreatic carcinoid, they account for a minority of pancreatic neoplasms and are generally more indolent than pancreatic adenocarcinoma. Islet cell carcinomas, which arise from the islets of Langerhans, can produce insulin, glucagon, gastrin, and vasoactive intestinal peptide, causing the characteristic syndromes of insulinoma, glucagonoma, gastrinoma, and VIPoma. Pancreatic polypeptide is also frequently produced, yet it is not associated with a distinct clinically evident syndrome. Although the molecular biology of sporadic islet cell carcinoma is less well understood than that of other more common solid tumors, these tumors can arise in connection with several hereditary cancer syndromes. The best known of these, multiple endocrine neoplasia type 1 (MEN1), is an autosomal-dominant inherited disorder characterized by tumors of the parathyroids, pituitary, and pancreas.1 Less commonly, neuroendocrine (carcinoid) tumors of the duodenum (gastrinomas), lung, thymus, and stomach have also been described.2 Tuberous sclerosis and neurofibromatosis are two other hereditary cancer syndromes associated with the development of neuroendocrine tumors. The TSC1/2 complex inhibits mTOR and is normally expressed in neuroendocrine cells.3 Patients with a defect in the TSC2 gene have tuberous sclerosis and are known to develop islet cell carcinoma.4 Neurofibromatosis is associated with the development of carcinoid tumors of the ampulla of Vater, duodenum, and mediastinum.5,6 The gene responsible for neurofibromatosis 1 (NF1) regulates the activity of TSC2. The loss of NF1 in neurofibromatosis leads to constitutive mTOR activation.7 Finally, islet cell carcinomas also occur in approximately 12% of patients with von Hippel-Lindau disease (vHL).8 The vHL gene is located on chromosome 3p26–p25; inactivation of the vHL gene is thought to stimulate angiogenesis by promoting increased HIF-1α activity. Little is known about the epidemiology and natural history of islet cell carcinoma. Although several case series have been reported, there have been few population-based studies. This is in part due to the uncommonness of this disease as well as the complexity of its classification. Although pancreatic carcinoid based on ICD-O-3 histology classification (8240–8245) has been partially described in studies of carcinoid from the Surveillance, Epidemiology, and End Results (SEER) Program,9 these have been incomplete analyses because most islet cell carcinomas were coded differently in the ICD-O-3 system (8150–8155). Survival of patients with islet cell tumors was also described in a recent report on malignant digestive endocrine tumors that was based on data from England and Wales.10 In this population-based study, we have undertaken a comprehensive analysis of patients with islet cell carcinoma identified through the SEER Program database in the United States. METHODS The SEER Program was created as a result of the National Cancer Act of 1971. The goal of the SEER Program is to collect data useful in the prevention, diagnosis, and treatment of cancer. In this study, we used the SEER data based on the November 2005 submission. For incidence and prevalence analyses, registry data were linked to total U.S. 
population data from 1969 to 2003.11 Since 1973, the SEER Program has expanded several times to improve representative sampling of minority groups as well as to increase the total sampling of cases to allow for greater precision. The original SEER 9 registries included Atlanta, Connecticut, Detroit, Hawaii, Iowa, New Mexico, San Francisco–Oakland, Seattle–Puget Sound, and Utah. In 1992, four additional registries were added to form the SEER 13 registries, which included the SEER 9 registries, plus Los Angeles, San Jose–Monterey, rural Georgia, and the Alaska Native Tumor Registry. More recently, in 2000, data from greater California, Kentucky, Louisiana, and New Jersey were added to the SEER 13 Program to form the SEER 17 registries. The SEER 9, 13, and 17 registries cover approximately 9.5%, 13.8%, and 26.2% of the total U.S. population, respectively. The data set we use here contains information about a total of 4,539,680 tumors from 4,123,001 patients diagnosed from 1973 to 2003. Islet cell carcinomas were identified by searching for ICD-O-3 histology codes 8150–8155 and 8240–8245 with a pancreatic primary site (duodenal gastrinomas were excluded). The included histology codes correspond to the following clinical/histologic diagnoses: islet cell carcinoma, insulinoma, glucagonoma, gastrinoma, mixed islet cell/exocrine carcinoma, VIPoma, carcinoid, enterochromaffin cell carcinoid, and adenocarcinoid. The SEER registries include neuroendocrine neoplasms that are considered invasive and malignant (behavior code of 2 or 3 in the International Classification of Diseases for Oncology, 2nd edition [ICD-O-2]). Cases designated as poorly differentiated or anaplastic were excluded. A total of 1310 cases of islet cell carcinoma were included in this study. Cases identified at the time of autopsy or by death certificate only were excluded (36 cases) from survival analyses. Although a tumor-node-metastasis (TNM) classification system has recently been proposed,12 during the period of time that we studied there was no accepted staging system for islet cell carcinoma. Here, we use the SEER staging system. Tumors in the SEER registries were classified as localized, regional, or distant. In this system, a localized neoplasm was defined as an invasive malignant neoplasm confined entirely to the organ of origin. A regional neoplasm was defined as a neoplasm that (1) extended beyond the limits of the organ of origin directly into surrounding organs or tissue; (2) involved regional lymph nodes; or (3) both. A distant neoplasm was defined as a neoplasm that had spread to parts of the body remote from the primary tumor. Comparisons of patient characteristics, tumor characteristics, and disease extent were based on the χ2 test. One-way analysis of variance was used for the comparison of continuous variables between groups. Survival duration was measured by the Kaplan-Meier method and compared by the log-rank test. The statistical independence of prognostic variables was evaluated by multivariate analysis with the Cox proportional hazards model. SEER*Stat 6.2.4 (Surveillance Research Program, National Cancer Institute) was used for incidence and limited-duration prevalence analyses.13 All other statistical calculations were performed with SPSS 12.0 (SPSS Inc., Chicago, IL). Survival durations calculated with SPSS were also verified by parallel analyses with SEER*Stat. Comparative differences were considered statistically significant when the P value was <.05. 
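For readers who want to reproduce this style of analysis outside SEER*Stat and SPSS, the sketch below illustrates the case-selection and survival-analysis steps described above using Python with pandas and the open-source lifelines package. It is a minimal sketch only: the file name and column names (HISTOLOGY_ICDO3, PRIMARY_SITE, GRADE, STAGE, SURVIVAL_MONTHS, VITAL_STATUS, AGE_AT_DX, YEAR_OF_DX) are hypothetical stand-ins for fields in a SEER case-listing extract, not the actual SEER schema.

```python
# Minimal sketch of the case selection and survival analyses described above,
# using pandas and lifelines in place of SEER*Stat/SPSS. All column names are
# hypothetical stand-ins for a SEER case-listing extract; VITAL_STATUS is
# assumed to be 1 for death and 0 for censored observations.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

cases = pd.read_csv("seer_extract.csv")  # hypothetical case-level extract

# ICD-O-3 histology 8150-8155 and 8240-8245, pancreatic primary site,
# excluding poorly differentiated/anaplastic cases.
islet_codes = set(range(8150, 8156)) | set(range(8240, 8246))
islet = cases[
    cases["HISTOLOGY_ICDO3"].isin(islet_codes)
    & (cases["PRIMARY_SITE"] == "pancreas")
    & ~cases["GRADE"].isin(["poorly differentiated", "anaplastic"])
]

# Kaplan-Meier curves and median survival by SEER stage.
kmf = KaplanMeierFitter()
for stage, grp in islet.groupby("STAGE"):
    kmf.fit(grp["SURVIVAL_MONTHS"], event_observed=grp["VITAL_STATUS"], label=stage)
    print(stage, "median survival (months):", kmf.median_survival_time_)

# Log-rank comparison of two stage groups (e.g., localized vs. distant).
loc = islet[islet["STAGE"] == "localized"]
dist = islet[islet["STAGE"] == "distant"]
lr = logrank_test(loc["SURVIVAL_MONTHS"], dist["SURVIVAL_MONTHS"],
                  event_observed_A=loc["VITAL_STATUS"],
                  event_observed_B=dist["VITAL_STATUS"])
print("log-rank P =", lr.p_value)

# Multivariate Cox proportional hazards model on numeric covariates.
cph = CoxPHFitter()
cph.fit(islet[["SURVIVAL_MONTHS", "VITAL_STATUS", "AGE_AT_DX", "YEAR_OF_DX"]],
        duration_col="SURVIVAL_MONTHS", event_col="VITAL_STATUS")
cph.print_summary()
```

Categorical covariates such as stage would need to be dummy-coded before entering the Cox model; the numeric covariates shown here keep the sketch self-contained.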
RESULTS Frequency and Incidence Between 1973 and 2003, a total of 101,192 pancreatic neoplasms in 101,173 patients were identified in the SEER 17 registries. Among these, 101,046 neoplasms were classified as malignant and occurred in 101,029 patients. When we restricted the search to codes of neuroendocrine histology, a total of 1385 neoplasms in 1385 patients were identified. We removed 75 patients who were classified as having tumors of poorly differentiated or anaplastic grade. Thus, a total of 1310 patients had pancreatic islet cell carcinomas in the SEER registries. These represented 1.3% of all patients with pancreatic cancers. By using linked population files, we calculated the incidence of islet cell carcinoma as a rate per 100,000 per year, age-adjusted to the year 2000 U.S. standard population. Because the SEER 9, 13, and 17 registries were linked to different population data sets, we computed the age-adjusted incidence rates in three time periods. The age-adjusted incidence in the SEER 9 registries between 1973 and 1991 was .16 per 100,000. For the SEER 13 registries, from 1992 to 1999, an age-adjusted incidence of .14 per 100,000 was observed. Finally, for the period covered by the SEER 17 registries, from 2000 to 2003, the age-adjusted incidence rate was .12 per 100,000. This suggests that, on the basis of the current U.S. population estimate of 302 million, approximately 362 cases of malignant islet cell carcinoma will be diagnosed each year. The number of small benign islet cell tumors may be higher. Detailed incidence data by time period, sex, and race are included in Table 1.

TABLE 1. Age-adjusted incidence rate of islet cell carcinoma per 100,000 population(a)

Time period and race    All              Male             Female
SEER 9 (1973–1991)
  All races             .16 (.15–.18)    .19 (.17–.22)    .14 (.13–.16)
  White                 .16 (.14–.17)    .19 (.16–.21)    .14 (.12–.16)
  African American      .25 (.19–.32)    .29 (.19–.43)    .22 (.16–.32)
  Other                 .11 (.07–.18)    .16 (.09–.28)    .08 (.04–.16)
SEER 13 (1992–1999)
  All races             .14 (.13–.16)    .17 (.15–.20)    .12 (.10–.14)
  White                 .15 (.13–.17)    .18 (.15–.21)    .13 (.11–.15)
  African American      .11 (.07–.16)    .12 (.05–.25)    .10 (.06–.18)
  Other                 .13 (.09–.18)    .18 (.11–.28)    .09 (.05–.15)
SEER 17 (2000–2003)
  All races             .12 (.10–.13)    .12 (.11–.15)    .11 (.09–.13)
  White                 .12 (.11–.13)    .14 (.12–.16)    .11 (.09–.13)
  African American      .14 (.09–.19)    .10 (.05–.20)    .16 (.10–.24)
  Other                 .07 (.04–.12)    .03 (.01–.11)    .11 (.06–.18)

(a) Rates are per 100,000 population (95% confidence interval), age-adjusted to the year 2000 U.S. standard population. Cases were selected by ICD-O-3 histology codes 8150–8155, 8240–8245, and confirmed pancreatic primary site. Cases designated as poorly differentiated or anaplastic by grade were excluded. Cases of unknown race are excluded from the "other" category (but included in "all races").

Limited Duration Prevalence Among the population sampled by the SEER 9 registries, the 28-year limited-duration prevalence for islet cell carcinoma on January 1, 2003, was estimated by the counting method to be 227 cases (95% CI, 199–259). These data were then projected to the general U.S. population. Data were matched by sex, race, and age to the U.S. standard population. The estimated 28-year limited-duration prevalence of islet cell carcinomas on January 1, 2003, in the United States was 2705 cases. In comparison, the 28-year limited-duration prevalence for all pancreatic neoplasms regardless of histology was 27,201.14 Thus, although islet cell carcinoma represented 1.3% of pancreatic cancer by incidence, it represented 9.9% of cases in the 28-year limited-duration prevalence analyses.
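As a minimal sketch of the rate and prevalence calculations above: an age-adjusted rate is a weighted sum of age-specific rates, with weights taken from the year 2000 U.S. standard population. All stratum counts, person-years, and weights below are hypothetical placeholders; only the final projections use figures quoted in the text.

```python
# Direct age standardization: weight age-specific rates by the standard
# population. Four invented strata for illustration; SEER*Stat uses the
# 19 standard age groups.
age_strata   = ["0-39", "40-59", "60-79", "80+"]
stratum_cases  = [10, 60, 150, 40]        # hypothetical cases per stratum
person_years   = [9e7, 6e7, 3e7, 8e6]     # hypothetical person-years at risk
std_weights    = [0.55, 0.26, 0.16, 0.03] # year 2000 standard weights (sum to 1)

rate = sum(w * (c / py) * 1e5
           for w, c, py in zip(std_weights, stratum_cases, person_years))
print(f"age-adjusted incidence: {rate:.2f} per 100,000")

# Projections quoted in the text:
print(0.12 / 1e5 * 302e6)   # expected annual cases: ~362
print(227 / 0.095)          # crude scaling of the SEER 9 prevalence count
                            # to the U.S. (~2389); the published 2705
                            # reflects matching by sex, race, and age
```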
Patient Characteristics Of the 1310 patients with islet cell carcinoma identified in the SEER database, there were 619 women and 691 men. The majority (1095 cases) were white. African American and other racial groups accounted for 134 and 78 cases, respectively; the other racial groups included American Indian/Alaskan Natives and Asian/Pacific Islanders. In three cases, the race was unknown. Details of patient characteristics are included in Table 2. We plotted the number of cases by age group at diagnosis in Fig. 1; the peak age distribution was 65 to 69 years. However, the median age at diagnosis was 59 years.

TABLE 2. Characteristics of 1310 patients(a)

Characteristic                       Value
Sex, n (%)
  Male                               691 (53)
  Female                             619 (47)
Race, n (%)
  African American                   134 (10)
  White                              1095 (84)
  Other                              78 (6)
  Unknown                            3 (.2)
Median (SD) age at diagnosis (y)     59 (15)
ICD-O-3 groupings, n (%)
  Islet cell                         1117 (85)
  Insulinoma                         49 (4)
  Glucagonoma                        29 (2)
  Gastrinoma                         73 (6)
  VIPoma                             16 (1)
  Mixed histology                    26 (2)
SEER stage, n (%)
  Localized                          179 (14)
  Regional                           295 (23)
  Distant                            711 (54)
  Unknown                            125 (10)
Location of primary tumor, n (%)
  Head                               379 (29)
  Body                               103 (8)
  Tail                               278 (21)
  Overlapping                        108 (8)
  Unknown                            442 (34)
Year of diagnosis, n (%)
  1973–1988                          482 (37)
  1989–2003                          828 (63)

(a) Cases selected from the SEER 17 database by ICD-O-3 histology codes 8150–8155, 8240–8245, and pancreatic primary site. Cases designated as poorly differentiated or anaplastic by grade were excluded.

FIG. 1. Age at diagnosis of 1310 cases of islet cell carcinoma. The median and mean (SD) ages at diagnosis were 59 and 58 (15) years.

We next examined the effect of race and sex on age at diagnosis. The median age in years at diagnosis for white, African American, and other racial groups was 60 (mean 58, SD 14.9), 55 (mean 54.5, SD 16.5), and 56 (mean 57, SD 16.5), respectively (P = .02, Fig. 2). However, there was no difference in median age at diagnosis based on sex.

FIG. 2. Age at diagnosis by race. White patients were older at the time of diagnosis (P = .02). The median age at diagnosis for white, African American, and other racial groups was 60 (mean 58, SD 14.9), 55 (mean 54.5, SD 16.5), and 55.5 (mean 57, SD 16.5), respectively.
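The age-by-race comparison above (one-way analysis of variance, P = .02 per the Methods) corresponds to a test along these lines; the "cases" data frame and its columns are hypothetical, as in the earlier sketch.

```python
# One-way ANOVA comparing a continuous variable (age at diagnosis) across
# groups (race). Reuses the hypothetical "cases" frame defined earlier.
from scipy.stats import f_oneway

samples = [grp["age_at_dx"].to_numpy()
           for _, grp in cases.groupby("race")]
f_stat, p_value = f_oneway(*samples)
print(f"one-way ANOVA: F = {f_stat:.2f}, P = {p_value:.3f}")
```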
Tumor Location and Hormone Production The location of the primary tumor within the pancreas was described in 868 cases. In 442 cases, the detailed location was not known, partly because the current coding system allows the tumor location to be coded simply as islet of Langerhans, which gives no information about the position of the tumor within the pancreas. In 379 cases, the primary tumor was located in the head of the pancreas; the body, tail, and overlapping groups accounted for 103, 278, and 108 cases, respectively. By ICD-O-3 histology codes, 1117 cases (Table 2) were coded as islet cell or carcinoid (ICD-O-3 = 8150, 8240, 8241). Because these designations can include either serotonin-producing or nonfunctional tumors, it is not possible to determine the secretory status in these cases. Among the known functional tumors, gastrinoma was the most common with 73 cases (22 cases of duodenal gastrinoma not included); insulinoma, glucagonoma, and VIPoma accounted for 49, 29, and 16 cases, respectively. Finally, in 26 cases, the tumors were considered to have mixed endocrine/exocrine histology (ICD-O-3 = 8154, 8243–8245). Next, we compared the location of the primary tumor within the pancreas by histological classification. We found significant differences in the pattern of primary tumor localization (P = .029); nonfunctional or serotonin-producing tumors (coded as islet cell and carcinoid tumors) were more likely to be located in the head (44%) than in the body (12%), tail (31%), or overlapping locations (14%). Among the functional neoplasms, most insulinomas (57%), glucagonomas (53%), and VIPomas (64%) were located in the tail of the pancreas. Gastrinomas were much more likely to be located in the head of the pancreas (63%). Tumor Stage Of the 1310 cases, 125 (10%) were not staged (Table 2). For the remaining 1185 cases, 179 (14%) were localized, 295 (23%) were classified as regional, and 711 (54%) were classified as distant. When we compared stage by ICD-O-3 histology, we found that most carcinomas were metastatic at the time of diagnosis, including islet cell carcinomas (61%), insulinomas (61%), glucagonomas (56%), and gastrinomas (60%). A smaller percentage of VIPomas (47%) were metastatic (P < .001). This may be the result of the massive diarrhea experienced by VIPoma patients, which may bring them to medical attention earlier. The high rate of metastases among insulinoma patients observed in this study is likely because most small insulinomas are considered benign and are not reported to SEER. Next, we compared stage by tumor location and found that tumors located at the head of the pancreas trended toward a lower rate of distant disease (48% head vs. 57% body, 58% tail, and 60% overlapping) and a higher rate of regional disease (34% head vs. 27% body, 23% tail, 27% overlapping). However, the difference was not statistically significant (P = .063).
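The stage comparisons in this section (stage by histology, P < .001; stage by location, P = .063) are χ2 tests of independence on contingency tables. A sketch with an invented table, not the actual SEER tallies:

```python
# Chi-square test of independence on a stage-by-histology contingency table.
import numpy as np
from scipy.stats import chi2_contingency

# rows: histology groups; columns: localized, regional, distant
table = np.array([
    [150, 250, 600],   # islet cell / carcinoid
    [ 10,   8,  28],   # insulinoma
    [  5,   7,  15],   # glucagonoma
    [ 10,  18,  42],   # gastrinoma
    [  4,   4,   7],   # VIPoma
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, P = {p:.4f}")
```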
Survival For survival analyses, we excluded 36 cases that were identified at autopsy or on the basis of death certificates only. The median overall survival for the 1274 remaining cases was 38 months (95% CI, 34–43). When compared by the log rank test, SEER stage predicted patient outcome (P < .001). The median survival for patients with localized, regional, and distant islet cell carcinoma was 124 months, 70 months, and 23 months, respectively (Fig. 3). One-year, 3-year, 5-year, and 10-year survival rates are listed in Table 3. The median survival for the cases that were not staged was 50 months (95% CI, 34–66). Relative risk was calculated by Cox proportional hazards modeling. Compared with the group with localized disease, patients with regional and distant disease had a 1.56- and 3.50-fold increased risk of death, respectively, during the period (1973–2005) included in this study.

FIG. 3. Stage and survival of 1157 patients from the time of diagnosis. Median duration of survival of patients with localized (n = 167), regional (n = 289), and distant disease (n = 558) was 124, 70, and 23 months, respectively (P < .001).

TABLE 3. Survival by SEER stage

SEER stage   Median survival (mo) (95% CI)   1 y    5 y    10 y   HR (95% CI)        P
Local        124 (80–168)                    88%    71%    52%    1.00 (referent)    <.001
Regional     70 (54–86)                      82%    55%    38%    1.56 (1.17–2.07)
Distant      23 (20–26)                      65%    23%    9%     3.50 (2.71–4.54)

SEER, Surveillance, Epidemiology, and End Results Program; 95% CI, 95% confidence interval; HR, hazard ratio.

We then examined potential prognostic factors for survival duration on the basis of the data available from the SEER database. For these analyses, we stratified patients by stage. Patients with missing stage data were excluded from the analyses. When compared against the group designated as islet cell or carcinoid by ICD-O-3 histology codes, patients with gastrinoma (P = .001) and VIPoma (P = .044) had longer survival durations after adjusting for the effect of stage (Table 4), while the group with mixed histology experienced worse survival; this difference was not statistically significant, likely because of the small number of cases in this category.

TABLE 4. Cox proportional hazards analyses of potential prognostic factors, adjusted for stage(a)

Parameter                        HR (95% CI)        P value
ICD-O-3 group
  Islet cell                     1.00 (referent)
  Insulinoma                     .935 (.64–1.37)    .728
  Glucagonoma                    1.12 (.68–1.84)    .652
  Gastrinoma                     .57 (.41–.80)      .001
  VIPoma                         .44 (.20–.98)      .044
  Mixed histology                1.18 (.65–2.13)    .596
Location of primary tumor
  Head                           1.00 (referent)
  Body                           .74 (.55–.98)      .037
  Tail                           .82 (.67–1.01)     .056
  Overlapping                    1.32 (1.01–1.72)   .041
Age as a continuous variable     1.03 (1.03–1.04)   <.001
Age grouped by median
  0–59                           1.00 (referent)
  60+                            2.16 (1.87–2.49)   <.001
Sex
  Female                         1.00 (referent)
  Male                           1.07 (.93–1.23)    .367
Race
  African American               1.00 (referent)
  White                          1.18 (.93–1.50)    .176
  Other                          1.31 (.89–1.92)    .173
Year of diagnosis
  1973–1988                      1.00 (referent)
  1989–2003                      .79 (.68–.91)      .001

(a) All analyses have been adjusted for Surveillance, Epidemiology, and End Results Program stage.

The location of the primary tumor may have a marked effect on the time of presentation (head tumors may obstruct the bile duct and cause visible jaundice) and on resectability, as well as on surgical morbidity and mortality. Therefore, we next examined the prognostic role of the location of the primary tumor within the pancreas (Table 4). In our analyses, tumors located at the head of the pancreas were less likely to be associated with distant metastasis. The rates of distant metastasis for primary tumors located in the head, body, tail, and overlapping locations were 48%, 57%, 58%, and 60%, respectively. These differences were not statistically significant (P = .063). However, once adjusted for stage, primary tumor location in the pancreatic head was associated with a worse prognosis than location in the pancreatic body (P = .037). Similarly, tumors in the pancreatic head tended to be associated with worse survival than those in the pancreatic tail (P = .056); however, the difference was not statistically significant. Patients with primary tumors that were classified as overlapping had the worst prognosis (P = .041). In some cases, an overlapping lesion may indicate a larger primary tumor that covered a larger portion of the pancreas. Age has been found to be a predictor of outcome in a variety of malignancies. We examined the effect of age at diagnosis on overall survival stratified by stage (Table 4, Fig. 4). In the Kaplan-Meier analysis, we separated the patients into two groups on the basis of the median age. We observed decreasing survival with increasing age (P < .001). Similarly, we also examined the effect of age as a continuous variable in Cox proportional hazards modeling and found increasing age to be a predictor of poor outcome (P < .001).

FIG. 4. Age and survival. For the 0–59 age group, the median survival durations for localized, regional, and distant disease were 282 (95% CI, 204–360) months, 114 (95% CI, 83–145) months, and 35 (95% CI, 28–42) months. For the 60+ age group, the median survival durations for localized, regional, and distant disease were 46 (95% CI, 20–72) months, 38 (95% CI, 24–52) months, and 13 (95% CI, 10–16) months (P < .001).
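The stage-adjusted analyses in Table 4 can be approximated in lifelines by stratifying the Cox model on SEER stage, which adjusts for stage without estimating a stage coefficient. As before, the data frame and column names are hypothetical.

```python
# Cox proportional hazards model adjusted for stage via stratification,
# mirroring the Table 4 analyses. Reuses the hypothetical "cases" frame.
from lifelines import CoxPHFitter

df = cases[["survival_months", "died", "age_at_dx", "stage"]].dropna()
cph = CoxPHFitter()
cph.fit(df,
        duration_col="survival_months",
        event_col="died",
        strata=["stage"],        # adjust for SEER stage
        formula="age_at_dx")     # e.g. age as a continuous variable
cph.print_summary()              # hazard ratios, 95% CIs, P values
```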
Next, we examined the survival duration of patients with islet cell carcinoma by year of diagnosis. There has been an improvement in survival over time. Whether the year of diagnosis was analyzed as a continuous variable by Cox proportional hazards modeling or divided into discrete time periods, the observed difference was statistically significant (P = .001). Sex and race did not significantly affect survival. The details of these analyses are included in Table 4. Finally, we performed multivariate survival analyses by Cox proportional hazards modeling. SEER stage, primary tumor localization, ICD-O-3 histology group, and age at diagnosis were entered into the model. In the multivariate analysis, the ICD-O-3 histology group was not a statistically significant predictor of outcome. All other variables retained statistical significance; the most important predictor of outcome was stage. Compared with patients with localized disease, patients with regional (HR = 1.44) and distant (HR = 3.40) disease had decreased survival durations. Location of the primary tumor within the pancreas (P = .032) and age at diagnosis (P < .001) also remained significant predictors of outcome (Table 5).

TABLE 5. Multivariate survival analyses

Parameter                        HR (95% CI)        P value
Stage                                               <.001
  Local                          1.00 (referent)
  Regional                       1.44 (1.05–1.99)   .026
  Distant                        3.40 (2.53–4.67)   <.001
Age as a continuous variable     1.03 (1.02–1.04)   <.001
Location of primary tumor                           .032
  Head                           1.00 (referent)
  Body                           .81 (.60–1.08)     .146
  Tail                           .87 (.71–1.07)     .175
  Overlapping                    1.26 (.97–1.65)    .089
ICD-O-3 group                                       .186
  Islet cell                     1.00 (referent)
  Insulinoma                     .80 (.46–1.39)     .427
  Glucagonoma                    1.08 (.57–2.02)    .822
  Gastrinoma                     .70 (.47–1.04)     .077
  VIPoma                         .39 (.12–1.22)     .105
  Mixed histology                1.38 (.73–2.59)    .322

HR, hazard ratio; 95% CI, 95% confidence interval.

DISCUSSION In order to make advances in the diagnosis and management of patients with islet cell carcinoma, we must improve our understanding of the epidemiology, natural history, and prognostic factors of this relatively rare disease. Because of the rarity of neuroendocrine carcinoma and the lack of a staging system, much of the information previously published has been based on case series and anecdotal experience. In this study, we take advantage of the vast amount of data collected by the SEER Program to examine the largest series of islet cell carcinomas reported to date. To our knowledge, this study represents the only population-based study of islet cell carcinoma in the published literature. Prevalence is defined as the number of people alive on a certain date in a population who ever had a diagnosis of the disease. In this study, the counting method15 was used to estimate prevalence from the incidence and follow-up data obtained from the SEER 9 registries. Complete prevalence can be established by this method by using registries of very long duration. The SEER 9 registries have the longest follow-up duration and contain data suitable for prevalence analyses covering the past 28 years. Given the longer survival duration often experienced by patients with neuroendocrine carcinoma, we report only limited-duration prevalence data, which may somewhat underestimate the complete prevalence. By incidence, islet cell carcinomas account for only 1.3% of all pancreatic cancers. However, because of the better outcomes generally experienced by patients with islet cell carcinoma, they represent almost 10% of pancreatic cancers in prevalence analyses.
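Prevalence scales roughly with incidence multiplied by the time patients live with the disease, which is why a tumor comprising 1.3% of pancreatic cancer incidence can approach 10% of prevalent cases. A back-of-envelope sketch; only the 38-month figure comes from this study, while the 6-month survival figure for pancreatic cancer overall is an assumed placeholder for illustration.

```python
# Rough steady-state approximation: prevalent share ~ incident share
# weighted by survival time. The 6-month figure is an assumption, not a
# value reported in this study.
islet_incident_share = 0.013
islet_survival_mo    = 38    # median survival reported above
other_survival_mo    = 6     # assumed for non-islet pancreatic cancer

islet_pt = islet_incident_share * islet_survival_mo
other_pt = (1 - islet_incident_share) * other_survival_mo
print(f"approximate prevalent share: {islet_pt / (islet_pt + other_pt):.1%}")
# ~7.7%, the same order of magnitude as the 9.9% observed above
```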
We acknowledge that analyses from the SEER registries underestimate the total number of patients with islet cell tumors. All cases in the SEER database were denoted to be malignant. Thus, it is likely that small, benign-appearing tumors (for example, insulinomas and small nonfunctioning tumors) were not included in the SEER registries. Although histologic evidence of invasion of the basement membrane defines malignant behavior for most epithelial malignancies, the definition of malignant behavior for pancreatic neuroendocrine neoplasms is more complex. In the absence of malignant behavior such as direct invasion of adjacent organs or metastases to regional lymph nodes or distant sites, it may be difficult to classify an islet cell tumor as benign or malignant. Pancreatic endocrine tumors are classified as benign, of uncertain malignant potential, or malignant on the basis of size, the presence or absence of lymphovascular invasion, and the number of mitoses and Ki-67-positive cells by immunohistochemistry. However, there is considerable overlap in these histopathologic features between benign and malignant neuroendocrine tumors, and there is even heterogeneity within different areas of the same tumor. Many small islet cell tumors may therefore have been considered benign or of uncertain malignant potential and were excluded from the SEER registries. However, size is likely a function of when a tumor is diagnosed; left untreated, it is likely that most islet cell tumors will eventually grow locally into adjacent structures or soft tissues, and/or spread to distant organs. Therefore, outside of small insulinomas, all islet cell neoplasms should be considered potentially malignant. Thus, although the SEER registry data provide important information about malignant islet cell tumors, the extent to which they underestimate the frequency of smaller islet cell neoplasms is unknown. In our experience, even small islet cell tumors without clear evidence of invasion or metastasis at the time of initial surgical resection may recur and spread years later. We found a wide distribution of age at diagnosis, with a median of 59 years. Separated by race, white patients were older at the time of diagnosis. When we compared stage by histologic type, we found that VIPomas were less likely to be metastatic at the time of diagnosis. Several factors may have contributed to this observation. One possible explanation is that VIPomas are generally associated with profound watery diarrhea, which may cause patients to seek medical attention earlier than if they had nonfunctional tumors. Second, as previously mentioned, small insulinomas and gastrinomas may have been considered benign (in the absence of invasion of adjacent organs, lymph node metastases, or distant metastases) and therefore not captured for analysis, thereby enriching the study population with metastatic functioning tumors. In an earlier publication based on data from the SEER Program, Modlin et al.9 described the five-year overall survival of patients with carcinoid tumors to be 59.5% to 67.2%. The survival of patients with islet cell carcinoma seems less favorable. In the present study, we observed a median overall survival of 38 months. This is identical to the median survival observed in a large retrospective series of 163 cases from the University of Texas M. D. Anderson Cancer Center.16 The survival duration of patients with distant metastases was also similar (23 months in the current study vs. 25 months in the M. D. Anderson series). We did, however, observe improvements in the outcome of patients with islet cell carcinoma over time.
These improvements were observed among all SEER stage groups (data not shown) and are likely due in part to improvements in supportive care. By multivariate survival analysis, we found that stage of disease, primary tumor location, and age at diagnosis were important predictors of outcome. The difference in survival by primary tumor location can be attributed to several possible factors. In this study, patients with tumors classified as having an overlapping location had the worst outcome. This is likely because of the larger tumor size in this group. Tumors located at the head of the pancreas may be diagnosed earlier because of hyperbilirubinemia resulting from biliary obstruction. This is suggested by our finding of a (nonsignificant) trend toward a lower rate of metastatic disease at the time of diagnosis for tumors located in the pancreatic head. However, when adjusted for stage, patients with a tumor located in the pancreatic head were more likely to have a worse outcome than patients with tumors located in the body or the tail of the pancreas. One possible explanation is that tumors arising in the pancreatic head are of greater malignant potential. For example, most insulinomas (57%), glucagonomas (53%), and VIPomas (64%) were located in the tail of the pancreas, and such functioning tumors were also associated with improved survival compared with nonfunctioning tumors. In addition, the fact that tumors in the pancreatic head are more likely to cause biliary obstruction (with a risk for cholangitis), invade the duodenum (resulting in hemorrhage or obstruction), or involve the peripancreatic mesenteric vasculature (resulting in pain, mesenteric foreshortening, and malabsorption) may contribute to local tumor morbidity, which may in turn influence survival duration. There is also likely to be a general tendency on the part of physicians to avoid surgical resection of the pancreatic head (pancreaticoduodenectomy or Whipple procedure) when faced with a large tumor or low-volume metastatic disease. This is not the case with tumors in the body or tail of the pancreas, which can be surgically excised with a distal pancreatectomy. However, it is the pancreatic head tumor that carries the greatest risk for tumor-associated morbidity such as hemorrhage and biliary or gastric outlet obstruction.16 To what degree patients with pancreatic head tumors suffer morbidity and mortality from local disease progression rather than from distant metastases cannot be determined from this data set. Finally, increasing age at diagnosis was a predictor of poor outcome in our study. The differences in survival were seen across all SEER stage groupings. Certainly, pancreatic resection carries considerable operative risks, and the medical comorbidities associated with advancing age may have precluded some patients from resection while increasing the risk to those taken to surgery. For patients with advanced disease, systemic chemotherapy often includes drugs that are considered toxic.17,18 For example, doxorubicin is recognized to be cardiotoxic, and streptozocin may cause worsening of diabetes. Thus, heart disease and diabetes, which are more common among older patients, may have limited the use of these drugs. At present, surgery is the only curative treatment for islet cell carcinoma.
Surgery should be recommended for most patients in whom cross-sectional imaging suggests that complete resection is possible.19 Although islet cell carcinoma has a better prognosis than adenocarcinoma of the pancreas, the disease remains incurable once multifocal unresectable metastatic disease exists. Although survival beyond 10 years has been described in the literature for some patients with metastatic disease, the survival duration for most patients with advanced disease is far shorter. Although streptozocin-based chemotherapy can induce objective responses in 32% to 39% of patients,17,20 second-line treatment options are limited. Newer approaches, such as peptide receptor radiotherapy and systemic agents targeting vascular endothelial growth factor and mTOR, are under development. Optimal management of patients with islet cell carcinoma requires an understanding of the disease process and a multimodal approach. A better understanding of the molecular biology of this disease may lead to improved clinical models for predicting outcome and to novel treatment strategies for this relatively rare but complex disease. Until then, an understanding of the natural history of the disease as provided herein is necessary to allow physicians and patients to accurately assess the risks and potential benefits of treatment alternatives based on the extent of disease and the age and performance status of the patient.
[ "islet cell", "epidemiology", "survival", "pancreatic endocrine tumor", "neuroendocrine tumor" ]
[ "P", "P", "P", "P", "P" ]
Exp_Eye_Res-2-1-2394572
Pharmacological disruption of the outer limiting membrane leads to increased retinal integration of transplanted photoreceptor precursors
Retinal degeneration is the leading cause of untreatable blindness in the developed world. Cell transplantation strategies provide a novel therapeutic approach to repair the retina and restore sight. Previously, we have shown that photoreceptor precursor cells can integrate and form functional photoreceptors after transplantation into the subretinal space of the adult mouse. In a clinical setting, however, it is likely that far greater numbers of integrated photoreceptors would be required to restore visual function. We therefore sought to assess whether the outer limiting membrane (OLM), a natural barrier between the subretinal space and the outer nuclear layer (ONL), could be reversibly disrupted and if disruption of this barrier could lead to enhanced numbers of transplanted photoreceptors integrating into the ONL. Transient chemical disruption of the OLM was induced in adult mice using the glial toxin, dl-alpha-aminoadipic acid (AAA). Dissociated early post-natal neural retinal cells were transplanted via subretinal injection at various time-points after AAA administration. At 3 weeks post-injection, the number of integrated, differentiated photoreceptor cells was assessed and compared with those found in the PBS-treated contralateral eye. We demonstrate for the first time that the OLM can be reversibly disrupted in adult mice, using a specific dose of AAA administered by intravitreal injection. In this model, OLM disruption is maximal at 72 h, and recovers by 2 weeks. When combined with cell transplantation, disruption of the OLM leads to a significant increase in the number of photoreceptors integrated within the ONL compared with PBS-treated controls. This effect was only seen in animals in which AAA had been administered 72 h prior to transplantation, i.e. when precursor cells were delivered into the subretinal space at a time coincident with maximal OLM disruption. These findings suggest that the OLM presents a physical barrier to photoreceptor integration following transplantation into the subretinal space in the adult mouse. Reversible disruption of the OLM may provide a strategy for increasing cell integration in future therapeutic applications. 1 Introduction Retinal degeneration is the leading cause of untreatable blindness in the developed world. Current clinical treatments are limited, at best only slowing disease progression and very rarely restoring visual function. Cell transplantation offers a novel therapeutic approach, enabling the replacement of photoreceptor cells lost in the degenerative process. Photoreceptor transplantation may be more feasible than other types of neuronal transplantation, because photoreceptors are stimulated by light and their function is not, therefore, dependent on the reformation of complex afferent connections. Nevertheless, an efferent connection to host second order sensory neurons in the retina is essential for visual function and this is arguably best achieved if the transplanted photoreceptor is fully integrated into the host outer nuclear layer (ONL). Transplanted whole retinal sheets, derived from either embryonic or neonatal sources, can survive and differentiate, but frequently fail to integrate and make functional connections within the host neural retina (Ghosh and Ehinger, 2000; Royo and Quay, 1959; Seiler et al., 1990; Zhang et al., 2003b). 
Conversely, dissociated neural stem cells, such as those derived from the adult hippocampus, can migrate extensively within the retina when transplanted into an adult or developing recipient, but rarely differentiate into mature retinal phenotypes (Takahashi et al., 1998; Young et al., 2000). Progenitor cells isolated from dissociated embryonic retinae can differentiate into retinal neurons after transplantation and express photoreceptor-specific markers (Ahmad et al., 2000; Chacko et al., 2000; Coles et al., 2004; Klassen et al., 2004; Qiu et al., 2005; Yang et al., 2002), but migration and integration of these cells into the laminar structure of the host neural retina has remained limited. Better integration has been achieved by transplanting cells into an immature, developing retina, such as that of the neonatal Brazilian opossum, which provides a foetal-like host environment. The ability of transplanted cells to integrate within the host opossum retina declines with host maturation (Sakaguchi et al., 2003, 2004). This decline also coincides with the maturation of glial elements, such as Müller cells, which form anatomical barriers within the host retina, including the outer limiting membrane (OLM) (MacLaren, 1996). We have recently shown that a significant degree of integration of fully differentiated and functional photoreceptors can be achieved after transplantation into the adult retina, but only if the donor cells are post-mitotic photoreceptor precursors. When transplanted into the subretinal space, these cells can migrate into the recipient ONL, form synaptic connections with downstream targets and are light-sensitive (MacLaren et al., 2006). However, while the number of integrated photoreceptor cells is sufficient to restore the pupillary light reflex, higher levels of integration are needed to improve visual acuity. Given that immature neurons and progenitor cells are intrinsically capable of migrating and differentiating (Komuro and Rakic, 1998a,b; Nadarajah and Parnavelas, 2002; Parnavelas et al., 2002; Pearson et al., 2005), it is likely that natural physical barriers, such as the OLM, impede migration. Transient disruption of these barriers at the time of transplantation might therefore be one way of increasing the number of cells integrating into the host retina. The OLM comprises a series of zonula adherens junctional complexes, located between the plasma membranes of photoreceptor inner segments and the apical processes of Müller glia (Fig. 1a–c). These junctions seal off the light-sensitive photoreceptor inner and outer segments from the rest of the retina, limiting the diffusion of phototransduction cascade components. The OLM is first discernible by post-natal day 5 (P5) in the mouse (Uga and Smelser, 1973; Woodford and Blanks, 1989). Alpha-aminoadipic acid (AAA) is a glutamate analogue (Fig. 1d) that disrupts the OLM by inducing specific toxicity in Müller glial cells within the mammalian retina (Karlsen et al., 1982; Pedersen and Karlsen, 1979). Single intravitreal injections of AAA have been shown to disrupt the OLM irreversibly, to the extent that photoreceptors drop out of the ONL and come to reside amongst the outer segments in the subretinal space (Ishikawa and Mine, 1983). In this study, we show that, by using an appropriate dose and route of administration, AAA can instead produce a transient disruption of OLM integrity.
Furthermore, OLM disruption can facilitate movement of cells in the opposite direction, significantly enhancing the number of donor photoreceptors integrated into the recipient ONL after transplantation into the subretinal space. These findings suggest that the OLM represents at least one important barrier to cell integration. 2 Materials and methods 2.1 Animals C57Bl/6 and Nrl.gfp+/+ (Akimoto et al., 2006) mice were maintained in the animal facility at University College London. All experiments were conducted in accordance with the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research. Mice defined as "adult" were at least 6 weeks but not more than 3 months old. Nrl.gfp+/+ mice were used as donors to provide dissociated retinal progenitor cells for transplantation. Recipients were C57Bl/6 animals, unless otherwise stated. 2.2 α-Aminoadipic acid formulation and administration dl-α-Aminoadipic acid (AAA; Sigma) was prepared in phosphate-buffered saline (PBS), adjusted to pH 7.5 and sterile-filtered prior to administration. Adult mice were anaesthetized with a single intra-peritoneal injection of 0.15 ml of a mixture of Dormitor (1 mg/ml medetomidine hydrochloride; Pfizer Pharmaceuticals, Kent, UK), ketamine (100 mg/ml; Fort Dodge Animal Health, Southampton, UK) and sterile water for injections in the ratio 5:3:42. AAA was administered by intravitreal, subretinal or subcutaneous injection. For histological assessment, mice were sacrificed at various time points (3–4 mice per time-point) and the eyes were fixed in buffered formalin for 48 h at 4 °C. Retinal sections were prepared by overnight dehydration and paraffin embedding (Histocentre). Sections (5 μm thick) were cut, affixed to glass slides and stained using standard haematoxylin and eosin protocols. 2.3 Dissociation of retinal cells and transplantation To investigate the effect of AAA-induced disruption of the OLM on cell integration, C57Bl/6 mice received subretinal transplants of dissociated retinal precursor cells 72 h or 1 week (9 mice per time-point) after intravitreal injection (through the inferior pars plana) of AAA (test eye) or PBS (contralateral eye). Dissociated cells were prepared from P2–5 Nrl.gfp+/+ mice, as described previously (MacLaren et al., 2006). Cells were dissociated using a papain-based kit (Worthington Biochemical, Lorne Laboratories, UK) and diluted to a final concentration of ∼4 × 10⁵ cells/μl. Surgery was performed under direct ophthalmoscopy through an operating microscope, as previously described (MacLaren et al., 2006). Cell suspensions (1 μl) were injected slowly to produce a standard and reproducible retinal detachment in the superior hemisphere. Mice were sacrificed 21 days after transplantation and the eyes were fixed in 4% paraformaldehyde (PFA) in PBS for cell counts. Retinal sections were prepared by cryo-protecting fixed eyes in 20% sucrose before cryo-embedding in OCT (TissueTek) and sectioning at 18 μm. 2.4 Histology and immunohistochemistry Mice were sacrificed at various time points after AAA administration (3 mice per time-point) and the eyes were fixed in 1% PFA in PBS for immunohistochemistry. Retinal cryosections were permeabilized in chilled acetone for 5 min. Sections were pre-blocked in Tris-buffered saline (TBS) containing normal goat serum (5%), bovine serum albumin (1%) and 0.05% Triton X-100 for 2 h before being incubated with primary antibody overnight at 4 °C.
After rinsing with TBS, sections were incubated with secondary antibody for 2 h at room temperature (RT), rinsed and counter-stained with Hoechst 33342. Negative controls omitted the primary antibody. The following antibodies were used: rabbit anti-ZO-1 (kind gift of K. Matter) and rabbit anti-GFAP (Dako), with an anti-rabbit Alexa-546 tagged secondary antibody (Molecular Probes, Invitrogen). An apoptosis TdT DNA fragmentation kit (ApopTag Red Apoptosis Detection Kit, Chemicon, CA, USA), which stains apoptotic cells red, was used to perform an in situ TUNEL assay on the sections. 2.5 Electron microscopy Mice were sacrificed at various time points after AAA administration. The eyes were fixed, the cornea and lens removed, and the eye cups orientated and processed as previously described (Tschernutter et al., 2005). Ultrathin sections were collected on copper grids (100 mesh, Agar Scientific), contrast-stained with 1% uranyl acetate and lead citrate, and analysed using a JEOL 1010 transmission electron microscope (80 kV) fitted with a digital camera for image capture. 2.6 Confocal microscopy Retinal sections were viewed on a confocal microscope (Zeiss LSM510), as previously described (MacLaren et al., 2006). Unless otherwise stated, images show: (i) merged Nomarski and confocal fluorescence projection images of GFP (green) and the nuclear counter-stain Hoechst 33342 (blue), and (ii) the same region showing the GFP signal only. 2.7 Integrated cell counts Cells were considered to be integrated if the whole cell body was correctly located within the outer nuclear layer and at least one of the following was visible: spherule synapse, inner/outer processes and/or inner/outer segments, as previously defined (MacLaren et al., 2006). The number of integrated cells per eye was determined by counting all the integrated GFP-positive cells in alternate serial sections through each eye; this count was doubled to give an estimate of the total number of integrated cells per eye. A photoreceptor cell body is unlikely to span three sections, so counting alternate sections should not count any cell twice. However, given that AAA causes cell swelling in Müller glia (Ishikawa and Mine, 1983), it is possible that it may also affect other cell types, including photoreceptors. We therefore determined the total number of nuclei in the ONL in a volume of 2500 μm³ at the site of transplantation in both control and AAA-treated eyes (72 h and 1 week prior to transplantation). There was no significant difference in the number of photoreceptor cell bodies between any of the treatment groups (P = 0.35, N = 6; ANOVA). Thus, AAA treatment is highly unlikely to lead to double-counting of integrated cells. 2.8 Apoptotic cell counts The number of apoptotic cells was determined by counting all TUNEL-positive profiles in each layer of the retina in alternate serial sections. Only sections that encompassed the site of intravitreal injection were used; the counts are thus not representative of apoptosis across the whole eye. 2.9 Statistics All means are stated ±SEM (standard error of the mean), unless otherwise stated. The statistical test used was a two-tailed paired t-test with a significance threshold of P < 0.05. N, number of eyes; n, number of sections examined or cells counted, where appropriate.
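The paired design described above, in which each AAA-treated eye is compared with the contralateral PBS-treated eye of the same animal, corresponds to a two-tailed paired t-test. A sketch with invented counts, not the study data:

```python
# Two-tailed paired t-test on per-animal integrated cell counts. Each
# position pairs the two eyes of one mouse; the numbers are illustrative.
from scipy.stats import ttest_rel

aaa_eye = [900, 1350, 800, 1500, 1100, 950, 1300, 700, 1200]
pbs_eye = [400,  600, 350,  800,  500, 450,  700, 300,  600]

t_stat, p_value = ttest_rel(aaa_eye, pbs_eye)
print(f"paired t-test: t = {t_stat:.2f}, P = {p_value:.4f}")
```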
3 Results 3.1 Dosage and route of AAA administration We sought to determine whether it is possible to achieve a transient, reversible disruption of the OLM in the adult mouse by administration of the glial toxin, AAA. First, we assessed different routes of administration, including intravitreal (20 μg/μl, N = 4), subretinal (10 μg/μl, N = 4) and subcutaneous (0.7–2.7 mg/g body weight, N = 6) injection. The dose of AAA used for each route of administration was ascertained from previously published studies (Ishikawa and Mine, 1983; Pedersen and Karlsen, 1979; Rich et al., 1995). Retinae failed to recover normal histological morphology following subretinal injection, while subcutaneous injections resulted in variable morphological changes (data not shown). However, intravitreal administration caused modest and reversible morphological changes (see below). Therefore, AAA was administered via intravitreal injection, through the inferior pars plana towards the superior hemisphere of the eye, for the remainder of the experiments described. In order to establish the optimum dose of AAA required to induce a transient disruption of the OLM, several doses (20, 100 and 320 μg/μl; N = 3, N = 4 and N = 4, respectively, for each time point) were administered intravitreally, and the retinae were examined histologically at 6 h, 24 h or 3 weeks post-injection (Fig. 2) and compared with contralateral controls. At 6 h post-injection of all three doses of AAA, we observed vacuoles at the margin of the OLM that protruded into the inner/outer segment layer. These were possibly due to swelling of the Müller cell apical processes, and both the size and the number of the vacuoles present were dose dependent (Fig. 2a,d,g). Upon morphological examination at 24 h post-AAA injection, eyes receiving the low (20 μg/μl) dose appeared normal (Fig. 2b). Conversely, recovery was not evident at this time following application of either the 100 or the 320 μg/μl dose; vacuoles remained in both, and eyes receiving the highest dose displayed marked morphological abnormalities (Fig. 2e,h). Following administration of these higher doses, the OLM also appeared disrupted, as indicated by the presence of cell bodies in the segment layers (Fig. 2e,h). By 3 weeks post-administration, retinae receiving 20 μg/μl AAA appeared normal by morphological assessment, whilst those receiving 100 μg/μl AAA exhibited a small number of remaining vacuoles (Fig. 2c,f). In eyes that received the highest dose (320 μg/μl), retinal morphology remained severely disrupted at 3 weeks, with a loss of retinal lamination, sustained disruption of the outer limiting membrane and retinal thinning (Fig. 2i). An apparent disruption of the inner limiting membrane was also noted in these eyes, but was not observed with the lower doses of AAA. Thus, the optimum dose of AAA for transient OLM disruption by intravitreal injection was determined to be 100 μg/μl. This dose invariably resulted in OLM disruption, as determined by histological assessment, but with consistent recovery of OLM integrity and relatively normal retinal morphology by 3 weeks. The early morphological effects of AAA in the retina described here concur with the few published studies of subcutaneous and intravitreal injection in mice and rats (Pedersen and Karlsen, 1979; Rich et al., 1995). In addition, our findings demonstrate that these effects can be induced in a reversible manner. 3.2 Window of OLM disruption Having determined the optimal dose of AAA for reversible and consistent OLM disruption (100 μg/μl, administered intravitreally), we also wished to identify the time point at which this disruption was maximal.
We therefore examined retinal morphology, OLM integrity and apoptosis in treated retinae at 24, 48 and 72 h and at 1 and 2 weeks post AAA administration (N = 3 for each time point). The effect of AAA pre-treatment on host photoreceptor morphology was examined in the adult Nrl.gfp+/+ mouse, which expresses GFP in rod photoreceptors (Akimoto et al., 2006; Mears et al., 2001; Swain et al., 2001). Disruption of the ONL was clearly identifiable by the movement of GFP-labelled cells into the inner and outer segment layers of the retina (Fig. 3a). At 24 h post AAA injection, photoreceptor morphology was largely normal, with correctly orientated inner and outer segments and only occasional photoreceptor cell bodies in the subretinal space. By 48 h, the laminar organization of the retina was disrupted and more photoreceptors were displaced from the ONL. Some retinal folds were evident at 72 h post-injection and the organization of the inner and outer segments was significantly disrupted. Recovery of the retina was first seen at 1 week post AAA injection, as lamination returned and photoreceptor inner and outer segments regained nearly normal orientation. Photoreceptor organization appeared largely normal by 2 weeks post injection. The displacement of photoreceptor cell bodies into the subretinal space post AAA administration suggested that the OLM was disrupted. To confirm this, we used antibodies directed against ZO-1, an adherens junction protein located at the OLM (see Fig. 1b,c). Staining showed that the OLM was largely intact at 24 and 48 h post-AAA administration, with the exception of a few localized areas of disruption (Fig. 3b). OLM disruption peaked at approximately 72 h post-administration, as demonstrated by a substantial lack of ZO-1 staining at sites where photoreceptors had dropped out of the ONL. By 1 week, the OLM had largely reformed, with disturbed but continuous adherens junctions seen. Normal staining was seen by 2 weeks post AAA administration. While AAA is a glial-specific toxin (Karlsen et al., 1982; Pedersen and Karlsen, 1979), we wished to establish whether or not other cell types, particularly photoreceptors, might be affected by the changes induced by AAA (Fig. 3c). Retinal sections encompassing the area of AAA administration were stained with the ApopTag Red Apoptosis detection kit and the number of apoptotic nuclei per section was quantified. Apoptosis staining revealed that at 24 and 48 h post AAA administration, the vast majority of apoptotic nuclei were present in the inner nuclear layer (25 ± 2 cells per retinal section at 24 h and 31 ± 5 cells per retinal section at 48 h, n = 15 sections); these were most likely Müller cells. Apoptosis was absent from the ONL at these time-points. By 72 h, however, when morphological and OLM disruption peaked, some apoptotic cells were present in the ONL (61 ± 5 cells per retinal section, n = 15). Only 12 ± 2 apoptotic cells per retinal section remained in the ONL at 1 week, reducing to 2 ± 1 cells by 2 weeks post-administration (n = 12). Occasionally, apoptotic profiles were seen in the ganglion cell layer. However, these were most likely the result of the intravitreal injection procedure itself, as similar levels were seen in control PBS-injected eyes. Note that little apoptosis was observed in any layer of the retina in the rest of the eye, away from the site of AAA injection, at all time points examined.
Neuronal toxicity after dl-AAA treatment is thought to be a secondary effect resulting from the loss or inhibition of the supporting glial cells in the retina or brain (Tsai et al., 1996). This concurs with our results, since apoptosis in the ONL was observed only after apoptosis of cells in the INL. The impact of AAA administration on photoreceptor morphology and OLM integrity was further examined using electron microscopy. Retinae were examined 72 h and 1 week post AAA administration and compared with PBS-injected controls (Fig. 4). In PBS-injected control retinae, the OLM was intact and photoreceptor inner and outer segment morphology was normal at 72 h post-administration (Fig. 4a). Conversely, the integrity of the OLM was lost in many regions of the AAA-treated retinae and photoreceptor nuclei were mislocalized in the outer segment layer. The photoreceptor inner and outer segments were significantly disturbed, vacuoles were present and there was a loss of outer segments (Fig. 4b). By 1 week post AAA treatment, retinae showed significant recovery of OLM integrity, together with fewer vacuoles in the inner segment region and recovery of inner and outer segment organization (Fig. 4c). Together, these findings demonstrate that intravitreal administration of 100 μg/μl AAA in the mouse causes a transient, reversible disruption of the OLM that is maximal at approximately 72 h post injection. 3.3 Cell integration with OLM disruption We next sought to determine whether or not disruption of the OLM permits greater levels of integration of transplanted photoreceptor precursor cells. To test this, animals received a subretinal injection of early post-natal Nrl.gfp+/+ donor cells 72 h after intravitreal administration of AAA, when OLM disruption is maximal. To control for the intravitreal injection, the contralateral eye received a subretinal injection of donor cells 72 h after intravitreal administration of PBS. Three weeks post-transplantation, retinae were sectioned and the total number of Nrl.gfp cells integrated within the ONL was quantified. Nrl.gfp expression is restricted to rod photoreceptors (Akimoto et al., 2006; Mears et al., 2001; Swain et al., 2001) and provides genetic evidence that any transplanted integrated cells within the ONL are photoreceptors. We have previously demonstrated that these integrated donor cells are light-sensitive and form functional synaptic connections with downstream targets in the recipient retina. Transplanted non-GFP-positive cells remained in the subretinal space, forming a cell mass. Very few integrated cells that were not photoreceptors have previously been observed in transplants using donor tissue from C57Bl/6 GFP+/− mice (MacLaren et al., 2006). In all animals examined, the eyes that received pre-treatment with AAA showed a significantly higher number of integrated donor cells within the ONL compared with their contralateral PBS-treated counterparts (AAA-treated 1088 ± 172.28 cells vs. PBS control 523 ± 106.49 cells; N = 9, P = 0.009, paired t-test; Fig. 5a). The number of integrated photoreceptors was also increased compared with previous cell count data from wildtype mice (control 691 ± 209.50 cells; N = 5). To exclude variability in cell counts resulting from inter-animal variation, the ratio of the number of integrated cells in the AAA-treated eye to that in the contralateral control eye was also calculated for each individual mouse.
This revealed an average three-fold increase in the number of integrated transplanted photoreceptors per animal as a result of AAA treatment (3.0 ± 0.75; N = 9; Fig. 5b,c). To determine whether this statistically significant increase in the number of integrated cells coincided with OLM disruption, we also performed cell transplants at 1 week post-AAA administration, when the OLM had largely recovered. Transplantation at this stage showed a mean difference of less than 1.3-fold between the AAA-treated eye and the control PBS-treated eye (Fig. 5b), and the number of integrated cells was not significantly higher post-AAA administration (AAA-treated 363 ± 52.22 cells vs. PBS control 527.78 ± 94.89 cells; N = 9, P = 0.34, paired t-test; Fig. 5a). We also investigated whether AAA treatment enhanced reactive gliosis, as this may affect cell integration. Previous studies have shown that AAA affects astrocytes in both the brain and the eye (Ishikawa and Mine, 1983; Khurgel et al., 1996), and Rich et al. (1995) demonstrated up-regulation of GFAP in Müller cell processes after chronic systemic administration of AAA during development (P3–9). GFAP is a marker of reactive gliosis and astrocytes, and we therefore examined GFAP immunohistochemistry after AAA treatment and compared it with PBS-injected controls. Here, the doses of AAA used were much lower than those in the studies cited above and were administered as single intravitreal injections. We observed no difference in GFAP staining between the AAA- and PBS-treated eyes at either time point, indicating that AAA had little effect on Müller cell activation or astrocyte toxicity (Supplementary Fig. 1). Pre-treatment with either AAA or PBS had no identifiable effect on the morphology of integrated photoreceptors. In both the PBS control and the AAA-treated retinae, integrated photoreceptors appeared fully differentiated and morphologically indistinguishable from those we have described previously (MacLaren et al., 2006) (Fig. 5d). 4 Discussion Here we demonstrate that the OLM in the adult mouse retina can be transiently disrupted by the intravitreal administration of AAA. OLM disruption is maximal approximately 72 h post administration. When combined with precursor cell transplantation, this time point correlates with a significantly enhanced level of transplanted photoreceptor cell integration into the recipient ONL, compared with sham-injected controls. These findings suggest that the OLM represents a natural barrier to the successful integration of photoreceptor precursor cells transplanted into the subretinal space. Consideration of the OLM may therefore be important in any future clinical photoreceptor transplantation strategies directed towards retinal repair. 4.1 Effect of AAA on Müller cells and the OLM We show for the first time that the glutamate analogue AAA can be used to induce a transient disruption of both Müller glial morphology and OLM integrity in the adult mouse retina. AAA appears to disrupt the OLM by exerting a largely Müller glial-specific, transient toxicity (Karlsen et al., 1982; Olney, 1982; Pedersen and Karlsen, 1979). The Müller glia recover well, with only low levels of cell death observed in the first week after AAA administration. Death of Müller glia following exposure to AAA has previously been observed in the carp retina, following injection of much larger doses than those used in the present study (Sugawara et al., 1990).
In addition to Müller cells, astrocytes, the other glial cell type of the retina, also exhibit toxicity in response to AAA in the brain and eye (Ishikawa and Mine, 1983; Khurgel et al., 1996). The mechanism of action of AAA on Müller glia is uncertain, although the morphological damage includes swelling and nuclear changes (Pedersen and Karlsen, 1979). Suggested modes of action include: inhibition of glutamate uptake, resulting in possible neuroexcitotoxicity (Tsai et al., 1996); inhibition of the cystine/glutamate transporter expressed by Müller glia, leading to reduced levels of intracellular glutathione and oxidative stress (Kato et al., 1993); and uptake of AAA itself by Müller glia, with subsequent cytotoxicity via metabolic stress (Chang et al., 1997; McBean, 1994). The downstream effects of AAA occur in a time-specific manner, with early gliotoxicity followed by an apparent secondary neurotoxicity that is seen with increasing dose (Tsai et al., 1996). 4.2 OLM disruption and photoreceptor integration Cells transplanted into the subretinal space of adult mice are capable of correctly integrating within the recipient ONL and forming functional, synaptically connected photoreceptors, provided these cells are at the appropriate post-mitotic precursor stage of development (MacLaren et al., 2006). However, the numbers integrating are below the level likely to be required for a clinical therapy. Given that immature neurons and neural stem cells are intrinsically capable of migration (Hagg, 2005; Komuro and Rakic, 1998a; Nadarajah and Parnavelas, 2002; Parnavelas et al., 2002; Pearson et al., 2005), it is likely that barriers exist within the adult retina that prevent greater numbers of donor cells from migrating and integrating, as demonstrated by the failure of graft integration in recipients whose host photoreceptor layer is largely intact (Zhang et al., 2003a). Extensive migration into the neural retina is largely restricted either to severely degenerated retinae (Zhang et al., 2004) or, in wildtype animals, to areas where there is significant disruption of the ONL (Ghosh et al., 1999; Gouras et al., 1994; Zhang et al., 1999). Zhang and colleagues concluded that breaks in the OLM and/or loss of the photoreceptor component of the OLM were necessary for the formation of bridging fibres between graft and host tissues (Zhang et al., 2003a). Similarly, localized mechanical disruption of the retina may facilitate migration of dissociated cells from the subretinal space, as observed with neural stem cells (Nishida et al., 2000). The OLM consists of unique heterotypic (involving Müller cells and photoreceptors) or homotypic (between Müller cells) adherens junctions (Paffenholz et al., 1999; Williams et al., 1990). We have shown that AAA causes marked morphological changes within Müller glia, leading to the disruption of these adherens junctions. Furthermore, a number of photoreceptor nuclei became displaced within the segment layer, outside the OLM, which normally acts to retain them (Rich et al., 1995). This suggests that if cells can exit the ONL following disruption of the OLM, the converse is also likely: cells in the subretinal space can migrate more readily into the ONL when the OLM is disrupted. Accordingly, pre-treatment with AAA led to significantly greater numbers of donor cells integrating following transplantation, compared with controls. Importantly, this effect was only seen if cells were transplanted 72 h after AAA administration, i.e. only when OLM disruption was at its peak.
OLM disruption causes a significant increase in photoreceptor integration following transplantation. However, the increase (3-fold) is less than might be expected if the OLM were the only factor limiting donor cell migration, suggesting that manipulation of additional factors is likely to be required to optimize integration. A number of factors need consideration in relation to our findings. First, the donor cell population is heterogeneous; only a small proportion of cells in the early post-natal retina will be at the appropriate stage and specification for transplantation (i.e. photoreceptor precursors), so cell number may be a limiting factor. This could be augmented in the future by pre-selecting photoreceptor precursors, provided efficient cell-sorting methods can be established. Second, Müller glia may actually facilitate donor cell migration into the ONL of the recipient retina or play a role in supporting rod differentiation, and the transient toxic effects of AAA may impede or reduce these supportive functions. Thus, while AAA may aid integration by disrupting the integrity of the OLM, it may at the same time limit that enhancement by disturbing this supportive glial scaffold. Müller glia are also known to preferentially support rod process outgrowth (Kljavin and Reh, 1991). However, because all integrated photoreceptor cells in the AAA-treated retinae were morphologically identical to controls and to those previously described (MacLaren et al., 2006), Müller cells appear to be at least partially dispensable in this respect, which is consistent with the observation that AAA treatment does not affect normal photoreceptor development in early post-natal mice (Rich et al., 1995). Finally, Müller glia up-regulate the expression of GFAP and other intermediate filament proteins under stress (Bignami and Dahl, 1979; Bjorklund et al., 1985). It is possible that the physiological changes induced by AAA could trigger aspects of the glial scarring pathway, which may subsequently impede the migration of transplanted cells. Recent work investigating retinal cell transplantation by Kinouchi et al. (2003) has demonstrated that cell migration to the ganglion cell layer is enhanced in mice lacking GFAP and vimentin, two intermediate filament proteins found in reactive Müller glia and astrocytes. Müller cell morphology was reported to be normal in these mice, with the cells spanning the entire retina. The authors did not, however, observe increased cell integration into other retinal layers, including the outer nuclear layer (Kinouchi et al., 2003). 4.3 Therapeutic implications The results described here demonstrate a proof of concept; namely, that disruption of the OLM increases the integration of transplanted photoreceptor precursor cells. However, the compound used, AAA, and its l-enantiomer (l-AAA) have been shown to have other significant effects in the retina, albeit at higher doses than those used in this study. These include Müller cell necrosis, light insensitivity and suppression of the electroretinographic b-wave (Kato et al., 1990; Pedersen and Karlsen, 1979; Sugawara et al., 1990). Therefore, AAA itself is highly unlikely to be of therapeutic value. It will be of considerable interest to identify alternative reagents that can induce a specific, reversible disruption of OLM integrity without impacting on the function of the retina. It is important to note that cystoid macular oedema (CME) is a condition seen in the end stages of many diseases of the outer retina, such as retinitis pigmentosa and diabetic maculopathy.
Microscopic examination of pathological specimens has shown that CME represents an intra-cytoplasmic swelling (oedema) of Müller cells in the foveal region (Yanoff et al., 1984), which is similar to the effects of AAA described in this study. Based on the results of our experiments, it is therefore not inconceivable that the OLM disruption resulting from CME may make the diseased human fovea a particularly favourable site for future retinal cell transplantation strategies.
[ "outer limiting membrane", "photoreceptor", "mouse", "cell integration", "stem cells", "müller cell", "retinal transplantation" ]
[ "P", "P", "P", "P", "P", "P", "R" ]
Eur_Spine_J-2-2-1602185
In-vivo demonstration of the effectiveness of thoracoscopic anterior release using the fulcrum-bending radiograph: a report of five cases
Thoracoscopic anterior release of stiff scoliotic curves is favored because of its minimally invasive nature. Animal and human cadaveric studies have shown that it can effectively improve spinal flexibility in non-scoliotic spines; however, it has not been demonstrated to be effective in actual patients with scoliosis. The fulcrum-bending radiograph has been shown to accurately reflect the post-operative correction. To demonstrate that flexibility was increased after the anterior release, five patients with idiopathic thoracic scoliosis who underwent staged anterior thoracoscopic release and posterior spinal fusion were assessed using the fulcrum-bending radiograph. The average number of discs excised was four. Spinal flexibility, as revealed by the fulcrum-bending technique, was compared before and after the anterior release. The patients were followed for an average of 4 years (range 2.2–4.9 years). Fulcrum-bending flexibility was increased from 39% before the thoracoscopic anterior spinal release to 54% after the release (P<0.05). The average Cobb angle before the anterior release was 71° on the standing radiograph and 43° with the fulcrum-bending radiograph. This reduced to 33° on the fulcrum-bending radiograph after the release, and corresponded closely to the 30° measured on the post-operative standing radiograph and at the latest follow-up. Previous animal and cadaveric studies demonstrating the effectiveness of thoracoscopic anterior release were performed in spines that did not have scoliosis. We were able to demonstrate, in patients with adolescent idiopathic scoliosis, that thoracoscopic anterior spinal release effectively improves spinal flexibility. Introduction In the management of patients with stiff thoracic scoliosis and kyphosis, anterior spinal release is helpful in increasing spinal flexibility and, therefore, the degree of deformity correction. Excision of intervertebral discs through open thoracotomy was the preferred method in the past. However, cutting of the chest-wall muscles is associated with complications such as reduced ventilation, post-operative atelectasis, extensive and painful scars, blood loss, and prolonged hospital stay [7]. Video-assisted thoracoscopic surgery (VATS) for anterior release of the spine is becoming increasingly popular due to its minimally invasive nature [1, 19, 25]. However, opponents do not believe that VATS is effective in improving spinal flexibility, as it is difficult to perform a radical discectomy and, to a lesser extent, a rib-head excision by thoracoscopy [5, 23]. While VATS supporters have used animals and cadavers to demonstrate the effectiveness of thoracoscopic spinal release [6, 20], these studies were performed in spines that did not have scoliosis. The fulcrum-bending radiograph, obtained with the patient lying sideways hinging over a fulcrum, provides a simple and reproducible technique for the in-vivo assessment of spinal flexibility. It can accurately predict, before surgery, the amount of correction that can be achieved by modern segmental spinal instrumentations [4, 11, 16, 17]. Using this method, one can assess the effectiveness of an anterior release either directly, by comparing the spinal flexibility (as revealed by the fulcrum-bending radiograph) before and after an anterior release, or indirectly, by comparing the correction predicted by the pre-release fulcrum-bending radiograph with the actual correction achieved by the anterior release and posterior fusion.
Either way, differences in the measured Cobb angle would be attributable to the effect of the thoracoscopic release. In our institution, a number of thoracoscopic anterior release and posterior fusion surgeries for thoracic idiopathic scoliosis were staged. This gave the opportunity to perform the fulcrum-bending radiograph before and after the anterior release, thereby allowing direct assessments of flexibility changes as a result of the thoracoscopic procedure. Materials and methods Between 1997 and 1999, five patients with idiopathic scoliosis requiring anterior release for stiff thoracic curves were prospectively investigated. The authors define stiff curves as those with a Cobb angle of more than 40° on the fulcrum-bending radiograph; this criterion was arbitrarily chosen as we felt that residual curves of over 40° gave unacceptable cosmetic results, and therefore a flexibility-modifying procedure would be indicated. The mean age at the time of operation was 23.0 years (range 13.9–35.3). According to King's classification [10], there were one type I, two type II, and two type III curves. For the patient with the type I curve, both thoracic and lumbar curves were corrected and fused; for the type II and III curves, only the thoracic curves were instrumented. According to Lenke's classification [12], there were three type I, one type III, and one type V curve. All the patients underwent an anterior thoracoscopic release, followed by a second-stage posterior instrumented correction and fusion 1 week later. The instrumentations used for the posterior spinal fusion were Texas Scottish Rite Hospital (TSRH, n=2; Sofamor-Danek, Memphis, TN, USA), ISOLA (n=2; Acromed Corp., Cleveland, OH, USA), and CD-Horizon (CD-H: n=1; Sofamor-Danek, Memphis, TN, USA) systems. The mean number of thoracic discs resected was four, ranging from 3 to 5 (Table 1). Posterior fixation was done by a predominant hook construct using one upper and two intermediate hooks, and either a hook or a pedicle-screw at the lowest fusion level.

Table 1 General patient data
                           Case 1   Case 2   Case 3   Case 4   Case 5
King's type                II       I        III      III      II
Age (years) at operation   21.5     13.9     25.4     35.2     19.1
Levels released            T7-T10   T7-T11   T6-T10   T7-T12   T6-T10
Number of discs released   3        4        4        5        4
Instrumentation used       TSRH     TSRH     ISOLA    CD       ISOLA

The thoracoscopic anterior release was performed with the patient under general anesthesia. The technique involved a discectomy through a large annular window made over the convex side of the scoliosis; the whole nucleus and the cartilaginous end-plates were removed. Excision of the posterior annulus to the posterior longitudinal ligament was always attempted but was usually successful only at the apical discs. Rib-head excision was not performed. The effectiveness of the thoracoscopic anterior release in increasing spinal flexibility was investigated by two methods. First, by a direct comparison of the Cobb angle measured from the pre-release fulcrum-bending radiograph with that of the post-release fulcrum-bending radiograph. Second, by an indirect method, comparing the pre-release fulcrum-bending radiograph with the final post-operative correction.
As the pre-operative fulcrum-bending radiograph has been reported to be able to accurately predict the post-operative coronal deformity correction for posterior surgery [4, 11, 17], the difference in the Cobb angles between the pre-operative fulcrum-bending radiograph (which predicted the correction for the posterior surgery alone) and the post-operative standing X-ray (which shows the actual correction achieved by combined anterior release and posterior instrumentation) will indirectly demonstrate the in vivo effect of the anterior thoracoscopic spinal release. Statistical analyses were carried out using the Student's t-test, with a significance level of P<0.05. The "fulcrum flexibility", a measure of the spinal flexibility as revealed by the fulcrum-bending radiograph [11, 17], was calculated based on the following formula: fulcrum flexibility (%) = [(standing Cobb angle - FB Cobb angle) / standing Cobb angle] x 100, where FB stands for fulcrum bending. This relationship was used to directly assess the changes in flexibility as a result of the anterior release. Results The patients were followed for an average of 4 years (range 2.2–4.9 years). The mean fulcrum flexibility before the anterior release was 39%; it increased by 15 percentage points, to 54%, after the anterior thoracoscopic release (P<0.05). The mean pre-operative Cobb angle on the posteroanterior (PA) standing radiograph was 71°, the mean pre-operative fulcrum-bending angle was 43°, the mean post-release fulcrum-bending angle was 33°, and the actual mean Cobb angle after combined anterior thoracoscopic release and posterior surgery was 30°. The latter corresponded to the post-release fulcrum-bending result (P=0.09) and was significantly different (P<0.05) from the pre-release fulcrum-bending result, suggesting that the thoracoscopic anterior release can effectively improve the surgical correction of the coronal curve. On average, four discs were excised per patient; as the mean improvement in correction was 13°, this suggested that resection of one disc resulted in a mean improvement in correction of approximately 3° (Table 2, Fig. 1).

Table 2 Measured Cobb angles (degrees) for each case (for case 5, it was not possible to perform a post-release fulcrum-bending radiograph because of wound pain)
                                Case 1   Case 2   Case 3   Case 4   Case 5   Mean
Pre-operative AP standing       65       76       75       78       61       71
Pre-release fulcrum bending     43       41       45       45       40       43
Post-release fulcrum bending    35       28       40       30       N/A      33
AP standing, latest follow-up   32       28       30       28       26       29

Fig. 1 (a) Pre-operative anteroposterior standing radiograph of case 2, showing a 76° curve from T6 to T12. (b) Pre-release fulcrum-bending radiograph showing a correction to only 41°. (c) Fulcrum-bending radiograph taken 1 week after an anterior thoracoscopic release of four levels, showing an improvement in the flexibility to 28°. (d) Post-operative standing radiograph taken 1 week after the posterior correction, showing the curve corrected to 29°.

The immediate post-operative Cobb angle of 30° (data not shown) was well predicted by the post-release fulcrum-bending angle of 33°. There was no significant change in this correction at the latest follow-up (Table 2). Discussion Anterior spinal release by open thoracotomy has been extensively used in the past to help improve spinal flexibility in stiff thoracic scoliosis. Its role has been increasingly taken over by VATS, as the latter is associated with a lower morbidity [8, 21]. However, some surgeons feel that an anterior thoracoscopic release is not as effective as an open release.
This is because they believe that a successful anterior spinal release requires a rib-head resection and a radical discectomy, which cannot easily be performed through VATS [5, 23]. Although studies using animals and cadavers have demonstrated that a thoracoscopic release can improve spinal flexibility [6, 20], these were not performed on actual patients with scoliosis. Thus, there is no definitive proof in human patients with scoliosis that the thoracoscopic spinal release is effective. Although some clinical studies have demonstrated that combined VATS and posterior spine fusion resulted in good scoliosis correction [2, 21, 22], they did not assess the actual flexibility of the scoliosis and therefore were not able to directly demonstrate that the thoracoscopic release added to the spinal flexibility. The use of the fulcrum-bending radiograph, which reflects the spinal flexibility and accurately predicts the post-operative coronal deformity correction, provides an opportunity to assess the effectiveness of thoracoscopic release in vivo [24, 27, 28]. The five cases included in this study had stiff curves, with a pre-operative fulcrum-bending angle of more than 40°, a mean pre-operative fulcrum flexibility of only 39%, and a mean Cobb angle of 71°. As these cases had staged surgery, we were able to obtain a post-release fulcrum-bending radiograph for comparison with the pre-release and post-operative Cobb angles. After anterior thoracoscopic release, flexibility was increased from 39 to 54%, representing direct proof that the procedure can improve spinal flexibility, in our cases by an average of 15%. One patient (case 5) was unable to lie on the fulcrum for a post-release fulcrum-bending radiograph due to wound pain. Nevertheless, case 5 has been included in the analysis, as a comparison of the pre-release fulcrum-bending radiograph with the post-operative radiograph still provides indirect evidence of the success of a thoracoscopic release (Table 2). For those who were able to lie on the fulcrum, we found the technique reliable. In this limited series of four cases, the post-release fulcrum-bending radiograph correlated well with the post-operative result. It should be noted that the previous work on the fulcrum-bending radiograph is based on the use of hook and hybrid systems only [4, 15]. The predictability of this method with reference to the use of pedicle-screw fixation is not known. It is widely believed that pedicle-screw systems give a superior degree of correction when compared to hook systems, and may obviate the need to perform anterior releases. However, there are no published data directly comparing the two systems while taking into account the spinal flexibility. The authors are aware of two different ways to measure Cobb angles before and after surgery. One method is to determine the Cobb angles on the pre-operative standing radiograph and then to use the same levels throughout, although on the post-operative radiograph the measured levels may no longer be the most "tilted" vertebrae. The alternative method is to always measure from the most "tilted" levels, even though the levels may change between pre- and post-operative radiographs. While the authors prefer the former method, use of the latter method would not alter the results. Using the presented case as an example (Fig. 1),
the pre-operative Cobb angle from T6 to T12 was 76°, the pre-release Cobb angle from T7 to T10 was 50°, the post-release Cobb angle from T8 to T10 was 40°, and the final Cobb angle from T7 to T10 was 40°. The technique of anterior spinal release differs amongst surgeons: while some perform only a discectomy, others routinely remove the rib-heads and even disrupt the posterior longitudinal ligament. The use of VATS limits the number of structures that can be easily released, due to limitations in visualization and in access to all disc levels. This study does not compare open thoracotomy and release with thoracoscopic anterior release, because no data are available for the former. Moreover, an evaluation of open release versus thoracoscopic release using the present method is not possible, as patients undergoing the open release would still have a painful wound, which would prevent them from lying sideways over a fulcrum. One potential pitfall of this study is that three different types of implants were used, while the original study on the fulcrum-bending radiograph was based on TSRH [4]. It may be possible that different implants vary in their ability to correct scoliosis and therefore invalidate the indirect assessment. However, the authors feel that this is unlikely, because it has been demonstrated by the same group that there was no significant difference in the ability of different instrumentation systems to correct thoracic scoliosis [18]. Moreover, the direct comparison of fulcrum flexibility before and after an anterior release would still stand. This study was performed in the early part of our experience with thoracoscopic anterior release; hence relatively few discs were released and the surgeries were staged. However, it did serve the purpose of demonstrating that thoracoscopic anterior release does result in an improvement in spinal flexibility. With increasing experience, the authors tend to release five to six discs per patient and the posterior surgery is performed on the same day. With the advent of new techniques and instrumentation, the indications for a thoracoscopic release may change in time. In particular, pedicle-screw systems appear to provide a better correction of large-magnitude curves [9, 13, 26]. However, to date, there are no randomized studies comparing hook versus screw systems, nor any correlation of the correction with a spinal flexibility assessment, such as the fulcrum-bending correction index [17, 18]. Additionally, the use of anterior instrumentation systems may mean that such surgeries are carried out as a single anterior stage, avoiding the need for posterior surgery [3, 14]. In summary, this is the first study to provide direct in vivo evidence demonstrating the effectiveness of thoracoscopic anterior release in improving spinal flexibility in patients with scoliosis.
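As a worked check on the arithmetic above, the fulcrum flexibility formula from the Methods can be applied directly to the mean Cobb angles in Table 2. The short Python sketch below is our own illustration; the function name and data layout are ours, not part of the original paper:

```python
# Fulcrum flexibility as defined in the Methods:
# flexibility (%) = (standing Cobb - fulcrum-bending Cobb) / standing Cobb * 100

def fulcrum_flexibility(standing_cobb: float, fb_cobb: float) -> float:
    """Percentage flexibility revealed by the fulcrum-bending radiograph."""
    return (standing_cobb - fb_cobb) / standing_cobb * 100.0

# Mean Cobb angles reported in Table 2 (degrees)
standing = 71.0          # pre-operative PA standing radiograph
fb_pre_release = 43.0    # fulcrum bending before thoracoscopic release
fb_post_release = 33.0   # fulcrum bending after thoracoscopic release

print(f"pre-release flexibility:  {fulcrum_flexibility(standing, fb_pre_release):.0f}%")   # ~39%
print(f"post-release flexibility: {fulcrum_flexibility(standing, fb_post_release):.0f}%")  # ~54%
```

Run as is, this reproduces the 39% pre-release and 54% post-release flexibility figures quoted in the Results.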
[ "anterior release", "spinal flexibility", "adolescent idiopathic scoliosis", "video-assisted thoracoscopic surgery", "thoracoscopy" ]
[ "P", "P", "P", "P", "P" ]
Qual_Life_Res-3-1-2039790
Different perceptions of the burden of upper GI endoscopy: an empirical study in three patient groups
Background Few studies have evaluated patients' perceived burden of cancer surveillance tests. Cancer screening and surveillance, however, require a large number of patients to undergo potentially burdensome tests, with only some experiencing health gains from them. We investigated the determinants of patients' reported burden of upper gastrointestinal (GI) endoscopy by comparing data from three patient groups. Introduction Opportunities for screening and surveillance of premalignant conditions have increased and will increase in the future. However, such interventions can be burdensome, and, as in any screening situation, the number of subjects exposed to this burden is often much higher than the number of subjects experiencing the beneficial health effects of the screening [1]. Upper gastrointestinal (GI) endoscopy is commonly used to diagnose and treat patients with a range of conditions and symptoms. Complications related to upper GI endoscopy are rare, and it is considered to be a safe procedure [2, 3]. Patients with Barrett esophagus (BE), a premalignant condition mostly without physical symptoms but associated with an increased risk of developing esophageal adenocarcinoma of 0.5% per year, are recommended to undergo regular biennial endoscopic surveillance for early detection of esophageal cancer [4]. All patients participating in surveillance experience the pain and discomfort of biennial upper GI endoscopy, whereas progression to adenocarcinoma occurs only in a minority of BE patients [5–8], and indisputable evidence that surveillance prolongs survival is still lacking [9–12]. Hence, the patients' perceived burden of upper GI endoscopy testing needs to be taken into account in evaluating the health benefits of surveillance of subjects with BE. In some situations, there is a trade-off between the effectiveness of screening (or surveillance) and the test uptake. For example, colorectal cancer screening using sigmoidoscopy is more effective than faecal occult blood testing [13]. At present, this trade-off is not relevant for surveillance of BE because a less burdensome test than upper GI endoscopy is not available, but the recognition that upper GI endoscopy is burdensome may prompt a reconsideration of the frequency of surveillance. Ongoing studies aim to identify groups of BE patients at lower risk of developing esophageal cancer than others, so that offering less frequent surveillance may be warranted [14]. At the patient level, empirical data on the perceived burden of upper GI endoscopy can be used in the process of informing subjects with BE who consider participation in a surveillance programme. In a general sense, empirical data on patients' perceived burden of testing may contribute to subjects' informed decision-making on participation (or non-participation) in screening or surveillance and hence to the quality of health care [15]. Studying the determinants of patients' perceived burden of upper GI endoscopy, e.g. by comparing data from different patient groups, may allow for the identification of patient groups who are likely to experience more pain or discomfort than others. This information can be used in practice guidelines, e.g. on the provision of sedation to prevent pain and discomfort, or on other types of patient support. Studying determinants of patients' perceived burden is of additional interest from the perspective of evaluation research.
If patients' perception of the burden of endoscopy differs by context, e.g. surveillance versus diagnostic work-up, the generalisability of data from one context to another is limited. Our previous work [16] has shown that BE patients under regular surveillance perceive upper GI endoscopy as burdensome. They experienced anxiety and discomfort, but hardly reported pain or symptoms. We analysed potential determinants of the perceived burden of upper GI endoscopy by comparing BE patients with two additional patient groups, i.e., patients with non-specific upper GI symptoms (NS) and patients with a recent diagnosis of cancer of the upper GI tract (CA). Methods Ethics approval The Medical Ethical Review Board of Erasmus MC—University Medical Center Rotterdam, The Netherlands, approved the study (MEC 03.1064; October 9, 2003). Patients Patients undergoing upper GI endoscopy for surveillance of BE were participants of an ongoing trial (CYBAR), whose endoscopic burden was previously reported [16]. Inclusion criteria were: a BE segment of 2 cm or more confirmed by a histological diagnosis of intestinal metaplasia, absence of high-grade dysplasia and carcinoma, willingness to adhere to endoscopic surveillance, ability to read the Dutch language, and informed consent. Patients with non-specific upper GI symptoms (NS) were referred for endoscopy by their GPs because of these symptoms. They needed to be able to read the Dutch language, provide informed consent, not have "alarm symptoms" such as hematemesis, melena, or dysphagia, and not have been diagnosed with BE previously. Patients with a recent diagnosis of upper GI cancer (CA) were referred for upper GI endoscopy plus ultrasonography (EUS) to determine therapeutic options. Ability to read the Dutch language and to give informed consent was also required in these patients. Patients were recruited from one academic and two regional hospitals for BE, two regional hospitals for NS, and one academic hospital for CA. Endoscopic procedure BE and NS patients underwent endoscopy with adult endoscopes (Olympus GIF-Q160, Zoeterwoude, The Netherlands). In the group of cancer patients, a combined endoscopy and EUS was performed with an Olympus GF-UM160. More than 95% of patients received oral anaesthetics (Xylocain 10% spray, Astra Zeneca, Zoetermeer, The Netherlands) preceding the introduction of the endoscope. Additional sedation with 2.5–5 mg midazolam (Roche, Woerden, The Netherlands) intravenously was offered as a standard procedure to all cancer patients, but was only administered with explicit patient consent. In BE and NS patients this was not standard, but it was administered on a patient's request. Practice variations between and within countries in the use of sedation for upper GI endoscopy are common [17]. Hypotheses Perceived burden of endoscopy was operationalised as pain and discomfort during the procedure, symptoms afterwards, and psychological distress over time. We hypothesized that subjects who had had previous endoscopies may have got used to the procedure to some extent and hence report less burden. Demographic characteristics (age, sex, educational level, employment status, etc.) were considered as potential confounders. We expected that BE patients may get used to regular endoscopy to some extent, and that they adhere to surveillance expecting that the test result will be reassuring.
Therefore, we expected BE patients to report less discomfort and burden than the patients with non-specific GI symptoms, who had less endoscopy experience. We also expected the BE group to report less burden from the endoscopy than the cancer patients, due to the endoscopy itself (combined with EUS in the cancer patients) and the fact that cancer patients were aware of their generally poor prognosis. Table 1 shows the potential determinants of perceived burden of endoscopy between patient groups.

Table 1 Potential determinants of perceived burden of endoscopy for three patient groups
Indication for endoscopy. BE: regular endoscopic surveillance for early detection of cancer. NS: to find out if symptoms can be explained by e.g. a hiatal hernia or duodenal ulcer; a life-threatening diagnosis is not expected. CA: to determine whether intentionally curative therapy is possible or not.
Endoscopy experience. BE: yes, under endoscopic surveillance. NS: limited, ranging from none to some. CA: yes (by definition: at least one previous endoscopy needed to diagnose the cancer).
Generic health status. BE: generally healthy. NS: mild impairment (aspecific gastrointestinal symptoms). CA: seriously impaired (cancer diagnosis).
Age. BE: in-between. NS: youngest. CA: older.
Sex. BE: more males than females. NS: equal sex distribution. CA: predominantly male.
Endoscopy procedure. BE: normal endoscopy procedure, sedation only on patient request. NS: normal endoscopy procedure, sedation only on patient request. CA: endoscopy combined with ultrasonography; sedation routinely offered.

Questionnaires and measurements Patients were asked to complete questionnaires at different time points, i.e., one week before the endoscopy (baseline), at the day of endoscopy (just before undergoing it), and one week and one month after endoscopy [16]. In order to minimize the questionnaire load for CA patients, they received only two questionnaires: the first on the day of endoscopy and the other one week afterwards. Some baseline items had to be included in the 'endoscopy day' questionnaire in the CA group. The content of the questionnaires is described below. Pain and discomfort Separate items in the questionnaire one week after endoscopy were used to assess pain and discomfort, respectively, as experienced during the procedure, for four steps of the procedure: the introduction of the endoscope, the endoscopy itself, the removal of the endoscope, and the period directly after endoscopy. Subjects were offered three response options ('no', 'quite' and 'very' painful or discomforting, respectively). Additionally, patients rated the overall burden of undergoing the endoscopy (very, somewhat, not burdensome) [16]. Symptoms We compared the prevalence of 10 symptoms experienced in the week after endoscopy with the prevalence at baseline. For CA patients, the baseline questions were asked at the day of endoscopy. The presence of throat ache, heartburn, regurgitation, flatulence or feeling bloated, vomiting, hematemesis, dysphagia for solid foods or liquids, diarrhea, and constipation was assessed using four response options (not at all, one day, 2–3 days, 4 or more days) [16]. Psychological distress (BE and NS patients) We assessed general distress using the Hospital Anxiety and Depression scale (HAD) at all time points [18, 19]. Anxiety and depression scores of this scale range from 0 to 21, with scores of 11 or over indicating clinical, and scores between 8 and 10 indicating borderline, anxiety or depression [18, 19]. We analysed the pattern of scores across measurements, assuming scores to return to normal after endoscopy.
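As an aside, the HAD cut-offs just described amount to a simple classification rule. The following minimal Python sketch is our own illustration, not part of the study; the function name is ours:

```python
def classify_had(subscale_score: int) -> str:
    """Classify a HAD anxiety or depression subscale score (range 0-21).

    Cut-offs as used in the text: 11 or over indicates clinical,
    8-10 indicates borderline anxiety or depression.
    """
    if not 0 <= subscale_score <= 21:
        raise ValueError("HAD subscale scores range from 0 to 21")
    if subscale_score >= 11:
        return "clinical"
    if subscale_score >= 8:
        return "borderline"
    return "normal"

print(classify_had(9))   # borderline
print(classify_had(12))  # clinical
```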
Scores from a Dutch general population sample (n = 1901; mean age = 61 years; 51% female) were available for comparison [19]. At baseline and at one week we also measured specific distress with the Impact of Event Scale (IES) [20, 21]. At baseline we assessed intrusive and avoiding thoughts regarding the endoscopy itself, and at one week regarding the communication of the final test result. The total scale ranges between 0 and 75, with scores of 26 or over indicating a high risk of developing a stress disorder [22]. Psychological distress (CA patients) For CA patients we omitted the HAD and the IES measures regarding the endoscopy itself, because we expected that distress in these patients was already at the top of the scale, making any additional distress caused by the procedure itself indiscernible. The IES to assess specific distress regarding the endoscopy result was included in the questionnaire at the day of endoscopy, because these patients received the endoscopy results earlier than the next questionnaire. Demographics and other data Demographic data were collected at baseline (at the endoscopy day for CA patients). The EQ–5D self-classifier results in a patient's classification of their own health on five domains: mobility, self-care, usual activities, pain, and anxiety and depression (3 response options: no, some, severe/complete limitations), and a summary score [23–25]. We asked BE and NS patients whether this was their first, second or a later endoscopy. Whether sedation was used during endoscopy was recorded separately. Analyses Differences in demographic and treatment characteristics between patient groups were analysed by Chi-square tests for categorical variables or one-way analysis of variance (ANOVA) for continuous variables. The items for pain and discomfort were combined into summary scores, to enable adjustment for confounders and analysis of determinants, by adding up the item responses (0, 1, 2, respectively) of the 4 items (range of the pain and discomfort summary scores: 0 (no pain or discomfort) to 8) [16]. The response to the single-item rating of overall burden was also treated as a summary score, with a range from 0 (no burden) to 2 [16]. Because these summary scores had a limited number of possible values, and because the data were not distributed normally, we chose to analyse them with proportional odds models [26]. These models produce odds ratios (ORs) for cumulative probabilities of the outcome variables. Proportional odds models are a variant of simple logistic regression, in which ORs for dichotomies at all possible cut-off levels are estimated. E.g., for a variable with three possible outcomes 1, 2, and 3, ORs are estimated for (1 + 2) vs. 3 and for 1 vs. (2 + 3). The OR presented represents an overall OR that is assumed to be similar across cut-off levels. Because some of the outcome variables and determinants had 10–15% missing data, and there were no reasons to suspect selective missing data, we used multiple imputation (function AregImpute in Splus 6.0) [27] so that all available information in our dataset was used. In multivariate analysis of the determinants of patients' perceived burden with the proportional odds model, we first adjusted for confounders (age, sex and employment status). Subsequently, we evaluated the potential effects of the following determinants on discomfort, pain and overall burden, respectively:
- patient group (BE, NS or CA): this variable combines the differences in the endoscopy procedure (with or without EUS, sedation) and the indication to undergo the endoscopy. For BE and NS patients, this analysis was refined by an additional separate analysis of the effect of the number of previous endoscopies (continuous, truncated at ≥20);
- baseline generic health status (EQ–5D summary score);
- whether sedation was administered or not;
- baseline HAD anxiety score (not available for the CA patients).
The prevalence of symptoms before and after endoscopy was compared using a method analogous to the Wilcoxon test. Responses were ranked and ANOVA was applied to the differences in these ranks [16]. The continuous HAD and IES scores were compared over time in SAS version 8.2 with repeated-measures ANOVA, using 'Proc Mixed' with REML and a compound symmetry covariance structure. Models comprised main effects of time (the measurements), confounders, determinants, and interactions between determinants and time. Proportional odds models were estimated with Splus 6.0. All other analyses were conducted in SPSS version 11.0.1. Results Patients and response In total, 684 patients were eligible for inclusion: 192 BE, 365 NS and 127 CA patients. The overall response rate was 70%, with 476 patients completing at least one questionnaire. The response differed by patient group; it was 180/192 (94%) in BE patients, 214/365 (59%) in NS patients and 82/127 (65%) in CA patients. Most BE patients had no dysplasia (78%); 22% had low-grade dysplasia [16]. NS patients were diagnosed with hiatal hernia (45%), non-specific gastritis (25%), reflux esophagitis (20%) and some other diagnoses (e.g. ulcer, polyps; 10%). CA patients underwent endoscopy and EUS for staging of esophageal carcinoma (72%), gastric cancer (26%) or lymphoma (2%). Differences between groups in mean age, sex and employment status were statistically significant (P < 0.001) (Table 2). We therefore considered these variables as confounders and controlled for them in further analyses.

Table 2 Patient characteristics (a)
                          BE           NS           CA           Differ (b)  N
Group                     180          214          82           N.A.        476
Mean age (sd)             62 (12)      54 (16)      64 (10)      <0.001      474
Sex: male                 119 (66%)    101 (47%)    66 (80%)     <0.001      476
Employment                                                       <0.001      438
  Paid employment         59 (34%)     85 (44%)     25 (36%)
  Retired                 87 (50%)     65 (34%)     38 (54%)
  Unpaid/unemployed       29 (17%)     43 (22%)     7 (10%)
Civil status                                                     0.034       444
  Married/together        134 (77%)    137 (69%)    57 (80%)
  Never married/tog.      13 (7%)      26 (13%)     3 (4%)
  Divorced                10 (6%)      23 (12%)     4 (6%)
  Widowed                 18 (10%)     12 (6%)      7 (10%)
Education                                                        0.498       435
  Primary                 35 (20%)     37 (19%)     16 (23%)
  Secondary               95 (56%)     122 (63%)    43 (61%)
  Tertiary                40 (24%)     35 (18%)     12 (17%)
Hospital                                                         N.A.        476
  Academic center (1)     37 (21%)     0            82 (100%)
  Regional hospital (3)   143 (79%)    214 (100%)   0
Sedation: yes             43 (27%)     18 (9%)      56 (77%)     <0.001      419
Endoscopy number                                                 <0.001      338
  First                   1 (1%)       99 (59%)     Unknown
  Second                  26 (15%)     38 (23%)
  Third or later          144 (84%)    30 (18%)
EQ–5D summary score       0.85 (0.18)  0.73 (0.22)  0.77 (0.21)  <0.001      433
N.A., not assessed. (a) Data for the BE group were published previously [16]. (b) Chi-square test (categorical variables) or F-test (continuous variables) for differences between patient groups.

About 84% of the BE patients had had two or more previous endoscopies [16], compared with 18% of the NS patients (P < 0.001). Seventy-seven per cent of the CA patients received sedation during endoscopy, compared with 27% of the BE and 9% of the NS patients (P < 0.001). The differences in the mean EQ–5D summary score were in the expected direction (P < 0.001) (Table 2).
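Before turning to the pain and discomfort results, the proportional odds approach described in the statistical analyses section can be made concrete with a small sketch. The Python code below is our own illustration on synthetic data (the original analyses were run in S-Plus 6.0); it assumes the OrderedModel class from the statsmodels package, which fits this kind of cumulative-logit model:

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 400

# Synthetic stand-ins for the study variables (illustrative only)
df = pd.DataFrame({
    "age": rng.normal(60, 13, n),
    "sex_male": rng.integers(0, 2, n),
    "group_NS": rng.integers(0, 2, n),   # NS vs. BE (reference)
})
# Ordinal outcome, e.g. overall burden score 0/1/2
latent = 0.02 * df["age"] + 0.5 * df["group_NS"] + rng.logistic(size=n)
burden = pd.cut(latent, bins=[-np.inf, 1.0, 2.0, np.inf], labels=[0, 1, 2]).astype(int)

# Proportional odds (cumulative logit) model: one common OR across cut-offs
model = OrderedModel(burden, df[["age", "sex_male", "group_NS"]], distr="logit")
result = model.fit(method="bfgs", disp=False)

# exp(coefficient) gives the kind of overall odds ratio reported in Table 6
print(np.exp(result.params["group_NS"]))
```

The exponentiated coefficient plays the role of the overall odds ratio, assumed constant across the cut-off levels of the ordinal outcome, exactly as explained in the Methods.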
Pain and discomfort Tables 3–5 show that the patient groups differed significantly in reported discomfort, pain and overall burden of endoscopy. The p-values shown for the summary scores relate to univariate analysis of differences between the patient groups, before adjustment for confounders.

Table 3 Discomfort during upper GI endoscopy as reported by patients (a)
                             Not          Quite        Very        n     Differ (b)
Introducing the endoscope    141 (34%)    177 (42%)    99 (24%)    417   P < 0.001
  NS                         42 (24%)     76 (43%)     58 (33%)    176
  BE                         64 (37%)     81 (47%)     27 (16%)    172
  CA                         35 (51%)     20 (29%)     14 (20%)    69
Undergoing endoscopy         166 (40%)    162 (39%)    89 (21%)    417   P = 0.024
  NS                         62 (35%)     63 (36%)     51 (29%)    176
  BE                         75 (44%)     72 (42%)     25 (15%)    172
  CA                         29 (42%)     27 (39%)     13 (19%)    69
Removing the endoscope       290 (70%)    90 (22%)     35 (8%)     415   P < 0.001
  NS                         97 (55%)     52 (30%)     26 (15%)    175
  BE                         144 (84%)    24 (14%)     3 (2%)      171
  CA                         49 (71%)     14 (20%)     6 (9%)      69
Period immediately after     317 (78%)    71 (17%)     20 (5%)     408   P = 0.348
  NS                         132 (77%)    29 (17%)     11 (6%)     172
  BE                         136 (81%)    29 (17%)     4 (2%)      169
  CA                         49 (73%)     13 (19%)     5 (8%)      67
Discomfort summary score (range 0–8), mean (sd)
  All                        2.35 (2.10)                           406   P < 0.001
  NS                         2.92 (2.36)                           171
  BE                         1.88 (1.69)                           168
  CA                         2.07 (1.99)                           67
(a) Data for the BE group were published previously [16]. (b) Significance of differences between the three groups as determined by Chi-square test (categorical variables) or proportional odds models for ordinal response data (summary score). No correction for confounders.

Table 4 Pain during upper GI endoscopy as reported by patients (a)
                             Not          Quite        Very        n     Differ (b)
Introducing the endoscope    332 (80%)    68 (16%)     17 (4%)     417   P = 0.050
  NS                         135 (77%)    29 (17%)     12 (7%)     176
  BE                         145 (85%)    24 (14%)     2 (1%)      171
  CA                         52 (74%)     15 (21%)     3 (4%)      70
Undergoing endoscopy         320 (77%)    77 (19%)     17 (4%)     414   P < 0.001
  NS                         141 (81%)    23 (13%)     10 (6%)     174
  BE                         137 (81%)    28 (17%)     5 (3%)      170
  CA                         42 (60%)     26 (37%)     2 (3%)      70
Removing the endoscope       365 (88%)    39 (9%)      11 (3%)     415   P = 0.098
  NS                         152 (87%)    16 (9%)      6 (3%)      174
  BE                         157 (92%)    11 (6%)      3 (2%)      171
  CA                         56 (80%)     12 (17%)     2 (3%)      70
Period immediately after     350 (84%)    52 (13%)     15 (4%)     417   P = 0.454
  NS                         147 (84%)    23 (13%)     6 (3%)      176
  BE                         145 (85%)    22 (13%)     4 (2%)      171
  CA                         58 (83%)     7 (10%)      5 (7%)      70
Pain summary score (range 0–8), mean (sd)
  All                        0.86 (1.60)                           413   P = 0.02
  NS                         0.91 (1.81)                           173
  BE                         0.66 (1.35)                           170
  CA                         1.20 (1.60)                           70
(a) Data for the BE group were published previously [16]. (b) Significance of differences between the three groups as determined by Chi-square test (categorical variables) or proportional odds models for ordinal response data (summary score). No correction for confounders.

Table 5 Overall burden of upper GI endoscopy as reported by patients (a)
                             Not          Quite        Very        n     Differ (b)
Endoscopy in general         137 (34%)    204 (51%)    58 (15%)    399   P = 0.007
  NS                         48 (30%)     82 (50%)     32 (20%)    162
  BE                         68 (41%)     87 (52%)     12 (7%)     167
  CA                         21 (30%)     35 (50%)     14 (20%)    70
Overall burden summary score (range 0–2), mean (sd)
  All                        0.80 (0.67)                           399   P < 0.001
  NS                         0.90 (0.70)                           162
  BE                         0.67 (0.61)                           167
  CA                         0.90 (0.71)                           70
(a) Data for the BE group were published previously [16]. (b) Significance of differences between the three groups as determined by Chi-square test (categorical variable) or proportional odds models for ordinal response data (summary score). No correction for confounders.

Table 6 shows these adjusted differences in summary scores for pairwise comparisons between the groups, and how they are affected by the determinants. NS patients reported significantly more discomfort than BE patients, as demonstrated by the significant OR of 1.69.
After adjusting for differences in the number of previous endoscopies, the difference in reported discomfort between NS and BE patients was no longer significant. Similarly, the difference in reported discomfort between NS and BE patients could also be explained by differences regarding the administration of sedation. The differences between the NS and BE groups in the baseline EQ–5D summary score and in baseline anxiety scores did not explain the differences in reported discomfort: the ORs remained significant. Reported pain during upper GI endoscopy did not differ between the NS and BE groups. The difference in reported overall burden was significant (OR = 1.64, P = 0.03). This difference also became non-significant after adjustment for the number of previous endoscopies and for sedation.

Table 6 Differences in discomfort summary score, pain summary score and overall burden score, pairwise between patient groups (after correction for age, sex and employment status as confounders), and effects of determinants other than patient group on these differences. Cells give OR (a) (95% CI (b)) P-value.
                                    Discomfort score         Pain score               Overall burden
NS compared to BE (BE = reference group)
  Patient group: NS versus BE       1.69 (1.15–2.47) <0.01   1.09 (0.70–1.71) 0.70    1.64 (1.05–2.55) 0.03
  + Number of previous endoscopies  1.49 (0.98–2.27) 0.06    1.19 (0.73–1.94) 0.50    1.56 (0.96–2.53) 0.07
  + Baseline EQ–5D                  1.51 (1.02–2.23) 0.04    0.83 (0.53–1.32) 0.44    1.49 (0.96–2.33) 0.08
  + Baseline anxiety                1.63 (1.12–2.39) 0.01    1.04 (0.66–1.64) 0.86    1.59 (1.02–2.49) 0.04
  + Sedation                        1.42 (0.96–2.09) 0.08    1.08 (0.68–1.72) 0.73    1.40 (0.88–2.22) 0.15
CA compared to BE (BE = reference group) (c)
  Patient group: CA versus BE       1.22 (0.75–1.99) 0.42    2.69 (1.51–4.77) <0.01   2.37 (1.38–4.07) <0.01
  + Baseline EQ–5D                  1.07 (0.66–1.74) 0.79    2.32 (1.29–4.18) <0.01   2.10 (1.21–3.66) <0.01
  + Sedation                        2.06 (1.16–3.64) 0.01    2.71 (1.40–5.27) <0.01   3.10 (1.68–5.74) <0.01
(a) Odds ratios were calculated by proportional odds models for ordinal response data; they show the differences between the patient groups, corrected for confounders and the determinant mentioned. (b) 95% confidence interval. (c) Numbers of previous endoscopies and baseline anxiety were not available for CA patients.

CA patients reported significantly more pain (OR = 2.69, P < 0.01) and more overall burden than BE patients (OR = 2.37, P < 0.01; Table 6). The differences in reported pain could not be explained by differences in baseline EQ–5D summary scores or by whether sedation had been administered or not (all ORs remained significant, Table 6). CA and BE patients did not differ in reported discomfort (OR = 1.22, P = 0.42), but after taking differences in the provision of sedation into account, the difference in reported discomfort became significant (OR = 2.06, P = 0.01). Symptoms After endoscopy, throat ache was the only symptom that was reported more often than before the procedure (51 vs. 23%; P < 0.001). Other symptoms did not increase in frequency. Compared to BE patients, the increase in throat ache was smaller for NS patients and larger for CA patients (P < 0.001); 31% of NS patients reported throat ache before and 46% afterwards, compared with 12% and 47% of BE patients, and 12% and 70% of CA patients, respectively. Psychological distress Figure 1 shows unadjusted mean anxiety and depression scores (HAD; not available for CA patients) by patient group over time.
Fig. 1 Differences between BE and NS groups in Hospital Anxiety and Depression (HAD) scale scores for general distress before and after upper GI endoscopy (mean scores, no adjustment for confounders). After adjusting for confounders (repeated-measures ANOVA), anxiety levels were similar between the BE and NS groups across measurements, but the pattern differed significantly between them (interaction effect of 'group' with 'time', P = 0.01): BE patients reported lower anxiety levels at the start and slightly higher levels at the end. The determinants (number of previous endoscopies, baseline EQ–5D summary score, sedation) did not influence this pattern of anxiety over time (no significant interaction effects with 'time'). Anxiety scores of both NS and BE patients were significantly higher at all time points than those reported by a general population sample (score = 3.9; P < 0.001 for each group at each measurement) [19]. At all measurements, depression scores were lower in BE than in NS patients (P < 0.001). This difference was significantly larger before than after endoscopy (interaction effect of 'group' with 'time', P = 0.01). The number of previous endoscopies affected the pattern of depression over time (interaction effect of 'number of previous endoscopies' with 'time', P = 0.046), making the patterns of the two groups more similar. Depression scores differed from those reported by the general population sample: BE patients reported significantly lower levels at all measurements, while baseline NS scores were significantly higher (norm score = 3.7, P < 0.001 for each comparison). Specific distress (IES) scores regarding the endoscopy itself and its outcome were lower in BE patients than in NS patients (mean scores at the baseline and 1-week measurements were BE 5.5 (sd 9.5) versus NS 12.9 (sd 14.7), and BE 3.5 (sd 7.7) versus NS 9.4 (sd 14.3), respectively; P < 0.001). The determinants did not affect this difference. In both BE and NS patients, specific distress regarding the endoscopy (IES, baseline measurement) was higher than that regarding the test result (IES, one-week measurement) (P < 0.001). High IES distress scores regarding the endoscopy were seen in 51 patients (14%). CA patients (mean IES score 22.3 (sd 17.8)) had significantly higher distress levels (IES) regarding the test result than the other patient groups (P < 0.001). Discussion This study is the first to investigate determinants of patients' perceived burden of upper GI endoscopy. Patients undergoing endoscopy for different reasons reported a different burden from the procedure. BE patients, who underwent endoscopy as part of regular surveillance, reported the lowest discomfort, pain and overall burden, confirming our hypotheses in this respect. Patients with non-specific GI complaints reported more discomfort from the procedure, while those diagnosed with cancer experienced more pain, and both groups reported more overall burden than patients under surveillance for BE. These differences remained significant after adjustment for confounders (age, sex, employment status). Differences in baseline anxiety scores or in baseline general health (EQ–5D) did not explain the differences in reported discomfort or pain. Differences in the number of previous endoscopies, and in whether sedation was provided during endoscopy or not, explained part of the differences in reported discomfort between NS and BE patients. Whether sedation was provided or not did not explain the differences in reported pain and overall burden between BE and CA patients.
The study also confirms that upper GI endoscopy is burdensome for all groups of patients: two-thirds of the total group of patients reported discomfort and overall burden from the procedure, and patients were distressed beforehand. These results may, however, underestimate the actual burden, because this empirical study was limited to patients who actually underwent upper GI endoscopy, hence excluding patients who refrained from undergoing endoscopy because of past or anticipated adverse experiences. Another potential limitation of our study results from the differences in response rates between the groups. Differences between patient groups were also found for symptoms resulting from the endoscopy. Of all symptoms explored, only throat ache increased after upper GI endoscopy. CA patients reported a higher increase in throat ache than BE and NS patients. As upper GI endoscopy hardly caused any symptoms, we considered an investigation into the determinants of these differences to be of less interest and therefore omitted those analyses. Furthermore, BE and NS patients differed in the levels of generic (HAD) and specific (IES) distress they reported. Specific distress (IES) was significantly higher in NS patients than in BE patients, both regarding the endoscopy itself and regarding its result. General distress (HAD) also differed between groups: BE patients reported less depression across all measurements, and the pattern of anxiety and depression across measurements was different. However, general distress is not necessarily related to the endoscopy. The persistently higher depression scores across different time points suggest that NS patients have more depressive symptoms in general, but that this was not related to the endoscopy. The different pattern of anxiety levels before and after endoscopy, however, suggests that the patient groups also differed in endoscopy-related distress. This pattern corroborated the findings of the specific (IES) distress scores: NS patients were more distressed than BE patients before the endoscopy. The investigated determinants did not explain the differences between groups in specific distress or in the pattern of general distress, except for the number of previous endoscopies, which explained part of the difference in the depression scores. BE patients thus reported less distress and also less pain or discomfort than the other patient groups. This was not caused by differences in patient characteristics (age, sex, employment status, baseline anxiety, baseline general health). There are several potential reasons why the reported burden differs. Firstly, BE patients are under regular surveillance and may get used to or adapt to the procedure, decreasing its burden. As the number of previous endoscopies explained part of the lower distress, discomfort and overall burden reported in the BE group, we conclude that getting used, or adapting, to endoscopy plays a role. Secondly, patients who perceive a greater benefit of the test may weigh its burden differently and consequently report less burden. BE patients potentially have more to gain from early discovery of adenocarcinoma than NS patients, who are usually referred for endoscopy to detect potential explanations for their symptoms, and also more than CA patients, for whom endoscopy and ultrasonography are only part of the procedure to determine their treatment options and prognosis. As we did not measure the perceived expected benefit of the endoscopy, we are not able to determine whether this mechanism is part of the explanation.
Thirdly, the endoscopic procedure was slightly different for the different patient groups. CA patients received sedation more often. Incorporating this difference into the analysis did not explain the differences in pain and overall burden, whereas the difference in reported discomfort became significant after adjustment for sedation. These results suggest that differences in the proportions of patients receiving sedation during endoscopy did not explain the differences between the groups, and that sedation was provided to those patients who really needed it. The procedure for CA patients also differed in that they underwent upper GI endoscopy combined with ultrasonography, for which an endoscope with a slightly larger diameter is used. Our data did not allow us to test separately whether this affected perceived pain and overall burden. Finally, most CA patients had esophageal carcinoma, and this disease may make passing the endoscope through the esophagus more difficult and therefore more painful. We measured general psychological distress (HAD) at different time points, assuming that a pattern of higher distress levels before compared to after endoscopy indicates that the procedure causes distress. As discussed in a previous paper [16], this may be debated, because lower distress levels afterwards may also result from a reassurance effect in patients receiving a negative test result (no serious disease present). Nevertheless, the fact that the specific distress (IES) score relating to the endoscopy was higher than the IES score relating to the test outcome led us to conclude that the prospect of undergoing upper GI endoscopy does indeed increase distress levels. Even if upper GI endoscopy causes HAD anxiety and depression scores to be increased before the endoscopy, the relevance of these increased distress levels can be questioned. Anxiety may be a relevant problem, with 20% of patients having scores indicating clinical anxiety levels at baseline, while the depression scores are less worrisome (6%). Endoscopy-specific distress (IES) was high in 14% of patients and higher than the distress related to the outcome. Anxiety scores in our study were increased compared with general population scores at all time points. Especially the fact that NS patients remained at increased levels one month after endoscopy makes the comparability of our scores with the population scores questionable [19]. General population scores are not available for procedure-specific distress (IES), as this can only be measured in patients. Considering the cut-off values for clinical scores, the prospect of endoscopy causes moderate distress. The observation that patients under regular endoscopic surveillance may adapt to this invasive procedure should not result in an underestimation of the burden of regular endoscopic surveillance. The search for less invasive surveillance tests should continue, and the frequency of surveillance should preferably be established by evidence-based, individualized estimates of the risk of progression.
[ "barrett esophagus", "endoscopic surveillance", "discomfort", "anxiety", "distress", "upper gastrointestinal endoscopy", "perceived patient burden" ]
[ "P", "P", "P", "P", "P", "R", "R" ]
Clin_Rheumatol-3-1-2039777
A comparison of the measurement properties of the Juvenile Arthritis Functional Assessment Scale with the Childhood Health Assessment Questionnaire in daily practice
We compared the measurement properties of a performance test (Juvenile Arthritis Functional Assessment Scale; JAFAS) with those of a questionnaire-based instrument (Childhood Health Assessment Questionnaire; CHAQ) to measure functional ability in patients with juvenile idiopathic arthritis on the level of individual items. In 28 consecutive children visiting an outpatient paediatrics clinic, the JAFAS (range 0–20) and CHAQ (range 0–3) were applied, and measures of disease activity and joint range of motion (ROM) were determined. Twenty-eight children with a median age of 10 years and a median disease duration of 3.2 years were included. The median JAFAS score was 0, and the median CHAQ score was 0.125. Cronbach's alpha was 0.92 for the JAFAS and 0.96 for the CHAQ. The Spearman correlation coefficient between the JAFAS and the CHAQ was 0.55 (P < 0.01). For six out of ten items, the JAFAS classified the child as less disabled than the corresponding CHAQ activities did. Overall, associations with measures of disease activity and ROM were higher for the CHAQ than for the JAFAS. A performance test (JAFAS) does not appear to have an added benefit over the questionnaire-based assessment (CHAQ) of physical function in a cross-sectional study. Introduction In patients with juvenile idiopathic arthritis (JIA), functional disability can be evaluated both by means of questionnaires and by observed performance tests. In a previous study [1], the internal consistency, construct validity and responsiveness of a questionnaire-based instrument, the Childhood Health Assessment Questionnaire [2] (CHAQ), proved to be somewhat better than those of an observed performance test, the Juvenile Arthritis Functional Assessment Scale [3] (JAFAS). In the abovementioned study, both measures displayed a floor effect [1]. As minor average disability in patients with JIA is nowadays a reality, identifying those tasks that can discriminate among lower levels of disability becomes all the more important. In addition, as performance tests are time consuming and require specific equipment and trained assessors, it is relevant to know whether they have an added benefit over a questionnaire-based instrument. The aim of the present study was therefore to compare the measurement properties of the JAFAS and the CHAQ in an unselected population of children with JIA on the level of individual items. Materials and methods Study design and patient recruitment Between January 2001 and April 2002, 34 consecutive children with JIA were recruited according to the following criteria: age between 7 and 12 years, a diagnosis of JIA [4], and no other medical conditions interfering with functional ability. The patients were visiting the outpatient paediatric rheumatology clinic of the Leiden University Medical Centre. The clinic, which has two part-time working paediatric rheumatologists, is a tertiary referral centre for children with rheumatic diseases from the Leiden district and surrounding area (1 million inhabitants). The Medical Ethics Committee approved the study, and all patients and their parents gave written informed consent. Assessment methods The JAFAS (range 0–20) was developed as an objective measure of functional ability in children with rheumatic diseases between 7 and 18 years.
With the JAFAS, the observed time needed to perform ten activities is compared with a standard 'criterion' time. The JAFAS was administered by one well-trained paediatric physical therapist (Bekkering). The CHAQ, comprising 30 activities in eight different domains, with a total score ranging from 0 (no limitation) to 3 (maximal limitation), was completed by interviewing the children. Disease activity was measured by the erythrocyte sedimentation rate (ESR) and by joint counts on swollen (JC-swollen) or tender joints (JC-tender), concerning the 28 joints included in the Fuchs score [5] plus the ankles (range 0–30). The feeling of well-being and the presence of pain were determined by 15-cm Visual Analogue Scales (VAS), with anchors of 'no pain/no discomfort' on the left and 'very severe pain/severe discomfort' on the right (VAS-pain and VAS-well-being; final scores converted to a range from 0 to 3). A similar VAS was used for the physician's evaluation of disease activity (VAS-paediatrician; score range 0–3). Limitation in range of motion (ROM) was determined by a joint count on motion-restricted joints (JC-limitation; range 0–30) and the paediatric Escola Paulista de Medicina ROM scale [6] (pEPM-ROM; score range 0–6). Statistical analysis The ten JAFAS items were linked with nine corresponding CHAQ items (CHAQ-9). All ten items had a counterpart in the CHAQ; however, the JAFAS items 'get into bed' and 'get out of bed' matched only one item in the CHAQ (getting in and out of bed). Associations between the JAFAS, the CHAQ-total and CHAQ-9 scores were determined by means of Spearman correlation coefficients (rs). To test the concordance between individual JAFAS and CHAQ-9 items, the score on every item was dichotomised into not limited (0) or limited (≥1) and mutually compared with Cohen's kappa (a κ value greater than 0.80 is considered good) [7]. Internal reliability of the JAFAS, CHAQ-total and CHAQ-9 was determined by calculating Cronbach's α (an α value of 0.85 is considered good) [8] and item–total correlations. In addition, Spearman correlation coefficients of the JAFAS, CHAQ-total and CHAQ-9 scores with measures of disease activity and ROM were computed. Results Characteristics of the patients Of the 33 eligible children who visited the outpatient paediatric rheumatology clinic in the study period, two children refused to participate, and three did not fulfil the inclusion criteria (two children had serious mental retardation and one child had attention deficit hyperactivity disorder). Thus, 28 children, 12 boys and 16 girls, were included. Their median age was 10.0 years (range 7.3–12.8), and the median disease duration was 3.3 years (range 0.1–10.2). A majority of the children had a polyarticular (nine patients) or oligoarticular (11 patients) pattern of joint involvement. Systemic-onset JIA (three patients), arthritis and psoriasis (four patients) and enthesitis-type JIA (one patient) were less frequently seen. Twenty (71%) children used anti-rheumatic medication, of whom 17 (68%) used disease-modifying anti-rheumatic drugs and three (11%) used oral corticosteroids. The median scores of the ESR (7.7 mm/h, range 2–54), the JC-swollen (1.0, range 0–28), the JC-tender (0.8, range 0–8), the VAS-well-being (0.2, range 0–2.5), the VAS-pain (0.1, range 0–1.5) and the VAS-paediatrician (0.2, range 0–2.7) point to a relatively low level of disease activity. With respect to joint ROM, the median JC-limitation score was 1.0 (range 0–17), and the median EPM-ROM score was 0.5 (range 0–19.5).
The median scores of the JAFAS (0, range 0–13) and CHAQ (0.125, range 0–2.6) indicate, on average, the presence of no and very little functional disability, respectively. The frequency distributions (Fig. 1) of the JAFAS and the CHAQ show that according to the JAFAS, 18 out of 28 patients (64%) had no limitations, whereas according to the CHAQ, 13 out of 28 patients (46%) had no functional disability. Fig. 1 Frequencies of JAFAS and CHAQ scores. No disability: JAFAS = 0, CHAQ = 0. Mild disability: JAFAS = 1–3, CHAQ = 0–0.5. Moderate disability: JAFAS = 4–9, CHAQ = 0.6–1.5. Severe disability: JAFAS = 10–20, CHAQ = 1.6–3.0. Reliability and validity With respect to internal reliability, Cronbach's α was 0.91 for the JAFAS, 0.96 for the CHAQ-total score and 0.92 for the CHAQ-9. The item–total correlation was moderate (≥0.60; p < 0.01) for two out of ten JAFAS and six out of nine matching CHAQ tasks. Besides these six items (dressing, pull on socks, cutting meat, bend down, walk outdoors and climb stairs) selected in this study, the items reach for object, writing, turn door key or water tap, and running were frequently scored as difficult and showed good item–total correlations. Concerning the agreement between the two instruments, the JAFAS total score correlated moderately well with both the CHAQ-total score (r = 0.55, p < 0.01) and the computed score of the corresponding CHAQ-9 items (r = 0.56, p < 0.01). On the individual item level, there was excellent agreement [7] (κ > 0.80) with respect to tasks 2 (pull shirt or sweater over head) and 8 (from standing position sit on floor, then stand up). The results of the internal reliability statistics and the associations between the total scores and the corresponding item scores of the JAFAS and CHAQ are presented in Table 1.

Table 1 JAFAS and CHAQ scores and their associations (Spearman correlation coefficients) in 28 patients with JIA

Item | JAFAS item | Limited, n | Item–total r | CHAQ item | Limited, n | Item–total r | Concordant pairs | CHAQ/JAFAS(b) | Cohen's κ
1 | Button shirt/blouse | 9 | 0.85** | Dress, including tying shoelaces and doing buttons | 7 | 0.76** | 20 | 3/5 | 0.30 ns
2 | Pull shirt or sweater over head | 1 | 0.38* | Pull on sweater over head | 1 | 0.34 ns | 28 | 0/0 | 1.00**
3 | Pull on both socks | 2 | 0.49** | Pull on socks | 7 | 0.64** | 21 | 6/1 | 0.13 ns
4 | Cut food with knife and fork | 4 | 0.70** | Cutting meat | 7 | 0.79** | 21 | 5/2 | 0.23 ns
5 | Get into bed | 0 | 0.00 | Getting in and out of bed | 3 | 0.47* | 24 | 4/0 | –(a)
6 | Get out of bed | 0 | 0.00 | (same CHAQ item as item 5) | – | – | 24 | 4/0 | –(a)
7 | Pick something up off floor from standing position | 1 | 0.38* | Bend down to pick up clothing or a piece of paper | 5 | 0.62** | 24 | 4/0 | 0.29*
8 | From standing position sit on floor, then stand up | 2 | 0.52** | Stand up from a low chair or floor | 2 | 0.51** | 28 | 0/0 | 1.00**
9 | Walk 50 feet without assistance | 1 | 0.38* | Walk outdoors on flat ground | 5 | 0.67** | 24 | 4/0 | 0.29*
10 | Walk up flight of 5 steps | 1 | 0.38* | Climb up five steps | 7 | 0.69** | 22 | 6/0 | 0.20 ns

Total scores:
  10 JAFAS items: 14 patients limited; Cronbach's α 0.91
  9 CHAQ items: 18 patients limited; Cronbach's α 0.92; Spearman's r with JAFAS 0.56**
  Total CHAQ score: 18 patients limited; Cronbach's α 0.96; Spearman's r with JAFAS 0.55**

*p < 0.05, **p < 0.01
(a) No statistics are computed because of a constant factor
(b) Number of pairs with CHAQ ≥ 1 and JAFAS = 0 / number of pairs with JAFAS ≥ 1 and CHAQ = 0

Construct validity The relationships between the JAFAS, CHAQ, CHAQ-9 and measures of disease activity, pain, swelling and limited range of joint motion are shown in Table 2. Both the JAFAS and the CHAQ scores correlated moderately to well with the VAS-paediatrician, JC-swollen, JC-limited joints, pEPM-ROM and CHAQ-pain.
The JAFAS score was not significantly associated with the JC-tender, whereas the CHAQ score was (Table 2). The CHAQ also showed significant associations with the ESR and CHAQ well-being, whereas the JAFAS did not.

Table 2 Spearman correlation coefficients between JAFAS, CHAQ and other measures in 28 patients

Measure | JAFAS | CHAQ | CHAQ (9 items)
VAS-paediatrician | 0.41* | 0.56** | 0.34 ns
ESR | 0.37 ns | 0.62* | 0.75**
Joint count on swollen joints | 0.47* | 0.65** | 0.48*
Joint count on tender joints | 0.07 ns | 0.41* | 0.09 ns
Joint count on limited joints | 0.44* | 0.64** | 0.59**
Paediatric EPM-ROM | 0.50** | 0.73** | 0.88**
CHAQ-pain | 0.38* | 0.69** | 0.55**
CHAQ-well-being | 0.20 ns | 0.44* | 0.48*
*p < 0.05, **p < 0.01; ns not significant

Discussion In parallel with an earlier publication [1], this study demonstrated no advantages of a performance test (JAFAS) over a questionnaire (CHAQ) for measuring functional disability in children with JIA. We found modest correlations between the two instruments, and the floor effect was smaller with the CHAQ than with the JAFAS. Moreover, the CHAQ showed a better internal reliability and stronger associations with measures of disease activity and joint ROM than the JAFAS. Tennant et al. [1] reported similar results regarding the internal reliability and validity, as well as a lower responsiveness of the JAFAS. Possible explanations for the discrepancy between the JAFAS and CHAQ are that the JAFAS is concerned with performance at one specific time point and with the speed of performance, whereas the CHAQ refers to the last week and to the experience of difficulties, including the need for aids, appliances or other persons. Discordance between observed and reported functional disability has been reported earlier in both children with JIA [9] and adults with rheumatoid arthritis [10]. The relatively small number of patients and the lack of distribution of the scores over the full ranges of the various outcome measures could limit the external validity of this study. However, the observed low level of functional disability is consistent with the results of previous studies in paediatric rheumatology [11]. The children's version of the CHAQ was originally designed and validated with the questionnaire self-administered by the patients. In this study, the questions were read out, and the answers filled in, by the investigator to ensure appropriate completion. Although it is likely that this method yields the same results as self-administration, a possible influence on the final scoring cannot be totally ruled out. Given the large improvements in the medical treatment of JIA and the persisting need for valid and responsive measures in clinical trials, a further elaboration of the tasks included in measures of functional ability, reflecting relevant activities in daily life, is needed. Lam et al. [12] showed that by utilizing new response scales as well as adding more challenging questions than those posed by the original Health Assessment Questionnaire (HAQ), the floor effect could be reduced and the sensitivity enhanced. In addition, the excellent internal reliability of the CHAQ-9 score found in the present study, and the observation that some JAFAS and selected CHAQ items contributed little or not at all to the final scores, suggest that with both instruments there are opportunities for a reduction in the number of items, in parallel with the recently developed short version of the HAQ for adult rheumatoid arthritis patients [13].
Which items should be included has to be examined further, as the nine CHAQ items this study focuses on were selected only because they corresponded with the JAFAS items. Future research should take the form of large, prospective follow-up studies.
[ "questionnaire", "juvenile idiopathic arthritis", "activities of daily living", "disability evaluation" ]
[ "P", "P", "M", "R" ]
Matern_Child_Health_J-2-2-1592249
Preconception Healthcare: What Women Know and Believe
Objectives: The objectives of this study were to determine whether women realize the importance of optimizing their health prior to a pregnancy, whether the pregnancy is planned or not, and to evaluate their knowledge level and beliefs about preconception healthcare. Additionally, we sought to understand how and when women want to receive information on preconception health. Methods: A survey study was performed using consecutive patients presenting to primary care practices for an annual well-woman exam. Patients were recruited based on appointment type and willingness to complete the survey at the time of their appointment, but prior to being seen by the physician. Results: A total of 499 women completed the survey. Nearly all women (98.6%) realized the importance of optimizing their health prior to a pregnancy, and realized the best time to receive information about preconception health is before conception. The vast majority of patients surveyed (95.3%) preferred to receive information about preconception health from their physician. Only 39% of women could recall their physician ever discussing this topic. The population studied revealed some significant knowledge deficiencies about factors that may threaten the health of mother or fetus. Conclusions: A majority of women do understand the importance of optimizing their health prior to conception, and look to their primary care physician as their preferred source for such information. Study participants demonstrated deficiencies in their knowledge of risk factors that impact maternal and fetal health, suggesting that physicians are not addressing preconception healthcare during routine care. Introduction Preconception care is defined as the promotion of the health and well-being of a woman and her partner before pregnancy [1]. The goal of the preconception visit is to identify medical and social conditions that may put the mother or fetus at risk. The concepts of preconception care have been articulated for over a decade but unfortunately have not become part of routine practice. Although many studies [2–9] document the effectiveness of interventions targeted to increase awareness of preconception folic acid supplementation, little evidence links comprehensive preconception health promotion to improved pregnancy outcomes. Only one study, using data gathered more than 10 years ago, demonstrated a greater likelihood of pregnancy intendedness in a low-income cohort of women exposed to information on preconception health during routine family planning visits at a community health department [10]. Studies in the United Kingdom [11, 12] of knowledge and attitudes toward preconception care among primary healthcare teams show widespread consensus among healthcare workers on the importance of the topic. One of these studies included the attitudes of women of childbearing age and noted that the view of the importance of preconception care was less strongly held by the female population studied [11]. Other studies have focused on more specific topics, such as rubella immunity, folic acid supplementation and glycemic control in diabetics [13–15]. Approximately 2% to 3% of all pregnancies result in a neonate with a serious genetic disease or a birth defect that can cause disabilities, mental retardation, and in some cases early death [16].
We are unaware of additional studies assessing the general knowledge and beliefs that women possess about optimizing their health prior to conception, or their preferences for obtaining such information. Methods A survey study of consecutive patients presenting to primary care practices for an annual well-woman exam was performed in accord with prevailing ethical principles. The selected primary care practices, occupying the same building at the Mayo Clinic Arizona, represented a women's health general internal medicine practice (5 physicians) and a family medicine practice (9 faculty physicians and 18 family medicine residents). Women were recruited for the study by nursing staff working within the practices, with permission and informed consent completed prior to distributing the survey tool. The Mayo Clinic IRB approved the study. Patients were recruited based on appointment type (e.g., well-woman exam, annual Pap exam, and annual preventive medicine exam) and willingness to complete the survey questionnaire at the time of their appointment, but prior to being seen by the physician. The enrollment period was between August 2004 and July 2005. Women were considered eligible for the study if they were between the ages of 18 and 45 years, understood English, and gave their permission. The survey instrument was a four-page questionnaire and required approximately 10 minutes to complete. The survey included questions about demographics, pregnancy intendedness, knowledge and attitudes about preconception care, and personal preferences about sources of health information about preconception care. Data from each survey were entered into a database at the Research Survey Center, Mayo Clinic, Rochester, Minnesota, and the aggregate data were made available for analysis to the research team. Results A total of 570 women were invited to participate in the study; 58 declined and 13 did not meet eligibility criteria, leaving 499 women who completed surveys for data analysis. The demographic profile of the study population is shown in Table 1. In this study population, the majority (70.6%) of women were not currently attempting to conceive, as noted in Table 2. Approximately 5% were actively trying to conceive, with 13.5% and 8.4% considering a pregnancy in the next 1–2 or 3–5 years, respectively. Interestingly, of the women who had previously been pregnant, pregnancies had actually been planned in only 47.2% of instances. Nearly all women in the study (98.6%) realized the importance of optimizing their health prior to a pregnancy, again as noted in Table 2. However, only 39% could ever recall their physician discussing preconception health. The majority of the women in this study population who were interested in preconception health education preferred to receive the information prior to a pregnancy (74.8%) or at the time of their annual medical exam (11.9%), as displayed in Fig. 1.
Table 1 Study population demographics (n = 499)

Age (range 18 to 45 years; mean 33 years)
  18 to 25 years: 24%
  26 to 35 years: 30%
  36 to 45 years: 46%
Ethnicity
  White: 84.8%
  Asian: 3.6%
  African-American: 1.3%
  Native American: 1.4%
  Other: 9.0%
Education (highest level)
  11th grade or less: 0.4%
  Graduated high school: 7.5%
  Some college or technical school: 33.0%
  Graduated college: 38.5%
  Some graduate work: 6.7%
  Graduate degree: 14.0%
Household income
  Less than $25,000: 10.5%
  $26,000 to $50,000: 19.7%
  $51,000 to $75,000: 18.6%
  $76,000 to $99,000: 13.0%
  $100,000 to $125,000: 13.2%
  $126,000 to $150,000: 6.8%
  $151,000 to $200,000: 6.4%
  Greater than $200,000: 11.8%

Table 2 Conception history and planning

Plans about getting pregnant
  No plans at present time: 70.6%
  Currently trying: 4.6%
  Considering in next 1 to 2 years: 13.5%
  Considering in next 3 to 5 years: 8.4%
  Have tried, unable to get pregnant: 2.8%
Ever been pregnant?
  Yes: 50.7%; No: 49.3%
If ever pregnant, were previous pregnancies planned?
  Yes: 47.2%; No: 52.8%
Effect of optimizing the mother's health on a pregnancy?
  Has a good effect on the pregnancy: 98.6%
  Has no effect on the pregnancy: 0.8%
  Has a bad effect on the pregnancy: 0.6%
Has a doctor ever spoken to you about preconception health?
  Yes: 39.0%; No: 61.0%
Are you interested in receiving preconception health education?
  Very interested: 34.8%
  Somewhat interested: 21.6%
  Unsure: 10.1%
  Not at all interested: 33.5%
If interested or unsure about education, when would you prefer it?
  At the time I become pregnant: 7.6%
  Before I try to get pregnant: 74.8%
  During pregnancy: 0.7%
  Every time I get an annual medical exam: 11.9%
  Unsure: 5.0%

Fig. 1 Are you interested in preconception health education?

The women who were interested in preconception health education, or unsure of such interest, were asked their preferences for sources of such information (Table 3 and Fig. 1). The vast majority preferred their physician, either a primary care physician (51.3%) or an obstetrician/gynecologist (44.0%). Only a fraction would primarily seek their information from sources other than their physician. The survey included questions about patient awareness of key preconception risk factors that may influence the outcome of a future pregnancy. The study population demonstrated high awareness of certain risk factors, such as tobacco, alcohol, drug use, and domestic abuse (Table 4). There were, however, opportunities for improvement in basic understanding of the risk to maternal/fetal health as it relates to fish consumption, exposure to cat litter, and the impact of family and/or genetic history. Discussion Preconception care is the primary prevention of maternal and perinatal morbidity and mortality. It is an important issue in women's health that is easily overlooked by physicians and patients. The results of this study demonstrate that the vast majority of study participants understood the importance of preconception healthcare and realized that it should be obtained prior to conception. Interestingly, many of the women who expressed an interest in receiving information about preconception healthcare perceived the annual exam as the appropriate venue for exploring this topic. The study participants showed a strong preference for obtaining information about preconception healthcare from their personal physician, and the majority eschewed the use of technology such as the Internet as a source of such information.
The study revealed that a large percentage of women expected their primary care physician or OB/GYN physician to address preconception healthcare with them.

Table 3 Patient preferences for sources of preconception information (percentage ranking the choice as first preference)
  Primary care physician: 51.3%
  Obstetrician/gynecologist: 44.0%
  Family, friends: 0.3%
  Magazine, newspaper: 0.3%
  Internet/world wide web: 3.1%
  Other: 0.7%

These data demonstrate that the study population exhibited gaps in knowledge about specific preconception health topics, but confirmed previous findings that women do have an increased awareness of the importance of folic acid supplementation [2–9]. Additionally, the study population showed a high awareness of the risks associated with tobacco, alcohol and drug use, but was much less aware of the risks to fetal health associated with fish consumption and exposure to cat litter. Such findings point out the need to continue efforts at increasing public awareness of other modifiable risks to fetal health. Unfortunately, fewer than 40% of the women surveyed recalled discussing preconception healthcare with their physician at the time of their annual exam. Clearly, there is a gap between patient expectation and the delivery of healthcare services in this population of educated middle-class women. This study only surveyed women who had chosen to obtain their primary care in the Internal Medicine and Family Medicine practices of a private healthcare facility, suggesting that there may be obstacles to delivering preconception care even in an academic outpatient setting. Potential explanations for this gap in the delivery of preconception healthcare may include the age of the patients studied, time constraints in the outpatient setting, or insufficient training and/or content knowledge in preconception healthcare on the part of the physicians. This study demonstrates an opportunity to better understand the gap in care delivery and to evaluate possible solutions in private middle-class settings. More exposure of residents to a curriculum on preconception healthcare may provide an opportunity for physicians in these specialty fields to improve their skills and the delivery of this much-needed care to women.

Table 4 Preconception health knowledge and opinions (percentage aware(a) of risk factor potentially affecting a pregnancy)
  Consumption of certain fish: 54.2%
  Exposure to cat litter: 64.9%
  Folic acid use: 79.6%
  Impact of family and/or genetic history: 84.1%
  Infectious diseases (need to screen for): 89.3%
  Immunizations (up to date): 91.2%
  Alcohol use: 95.8%
  Abuse (verbal, sexual and/or physical): 97.2%
  Medication use (prescription and nonprescription): 97.4%
  Tobacco use: 98.2%
  Illicit drug use: 98.8%
(a) Agreed or strongly agreed on a five-point Likert scale

One of the limitations of our study was the homogeneity of our patient population. The majority of our study participants were middle class, Caucasian and had at least some college education. Our findings may not be broadly applicable to women of other socioeconomic backgrounds, ethnicities and educational levels. The lack of diversity of our population reflects the demographics of this community, and further study in other populations should be pursued. As all women of reproductive age and potential presenting for continuing care in the primary care setting are candidates for preconception care, the essential and critical role of primary care physicians and providers in the provision of preconception care is apparent.
The Centers for Disease Control (CDC) has published its "Recommendations for Improving Preconception Health and Health Care" [17]. These recommendations were developed through a collaborative effort during which the CDC successfully aligned the missions of a number of its external partners and internal programs to ultimately draft these national recommendations for preconception care. These national recommendations can now serve as a roadmap for both graduate medical education and continuing medical education curricula to improve the knowledge and skill of the physician workforce in the delivery of comprehensive preconception health care.
[ "preconception health", "primary care", "preconception care", "women’s health" ]
[ "P", "P", "P", "P" ]
J_Gastrointest_Surg-3-1-1852384
Factors Affecting Morbidity and Mortality of Roux-en-Y Gastric Bypass for Clinically Severe Obesity: An Analysis of 1,000 Consecutive Open Cases by a Single Surgeon
Determinants of perioperative risk for RYGB are not well defined. Introduction Obesity is currently the number one public health problem in the United States, affecting one-third of all Americans (http://www.surgeongeneral.gov/topics/obesity). Approximately 5 to 8% have clinically severe or morbid obesity and are candidates for bariatric surgery. This obesity epidemic has been accompanied by a geometric rise in the number of bariatric surgical procedures. The American Society for Bariatric Surgery (ASBS) estimates that the number of bariatric surgery procedures has increased from approximately 20,000 in 1996 to over 140,000 in 2004 (http://www.asbs.org). During this period, membership in the ASBS has also increased fivefold, suggesting that many more surgeons are performing these procedures. The most common bariatric procedure performed in the United States is Roux-en-Y gastric bypass (RYGB).1 The explosive growth of bariatric surgery has garnered much attention in the media, with much speculation about the risks, both short- and long-term, of RYGB. Unfortunately, to date, little information from large series is available concerning the operative mortality of RYGB and those factors that predict mortality. In 2002, Livingston et al.2 reported an operative mortality of 1.3% in 1,067 patients after open RYGB and found that only age over 55 years correlated with perioperative mortality. More recently, Fernandez et al.3 analyzed their patient cohort of 1,431 patients having open RYGB and found a 1.9% mortality, which was associated with age, weight, longer limb gastric bypass, and the occurrence of a leak or pulmonary embolism. This report is a multivariate analysis of perioperative mortality and morbidity in 1,000 consecutive open RYGBs performed over a 5-year period by a single surgeon (LF) at a single institution. Materials and Methods Bariatric Surgery Program The bariatric surgery program at St. Luke's–Roosevelt Hospital Center in New York City was initiated in April 1999 and the first operation performed in June 1999. The basis of this report consists of 1,000 consecutive open RYGBs (primary cases and revisions) performed by a single surgeon, LF, between June 1999 and June 2004. Clinical Protocol and Surgical Technique All patients were evaluated preoperatively and met generally accepted criteria outlined by the NIH at its Consensus Development Conference on Gastrointestinal Surgery for Severe Obesity.4 In addition, all patients were routinely evaluated by a registered dietitian experienced in the treatment of obesity. Mental health and other specialty consultations were only obtained if they were felt to be clinically indicated or were required by an insurance carrier. All patients were evaluated by an attending anesthesiologist before surgery. Routine preoperative studies included: electrocardiogram, chest x-ray, gallbladder ultrasonography, serum electrolytes, glucose, HbA1c, calcium, albumin, lipid profile, liver function tests, complete blood count, platelets, prothrombin time, partial thromboplastin time, INR, and urinalysis.
Over time, additional preoperative studies, including serum insulin, iron, ferritin, vitamin B12, 25-OH vitamin D, and thiamine (vitamin B1), were added. RYGB was performed in a standard fashion with the following common elements: (1) open technique; (2) 20–30 ml pouch, nondivided stomach, TA 90B (US Surgical, Norwalk, CT, USA) applied twice; (3) hand-sewn two-layer retrocolic, antegastric gastrojejunostomy, 12 mm in length, tested with methylene blue under pressure intraoperatively; (4) side-to-side, functional end-to-end jejunojejunostomy with disposable GIA (US Surgical) or LC (Ethicon Endosurgery, Somerville, NJ, USA) 75-mm staplers, and TA55 (US Surgical) or TX 60 (Ethicon Endosurgery) staplers, routinely oversewn with 3–0 silk Lembert sutures. The biliopancreatic limb length measured 75 cm along the antimesenteric border for patients with BMI less than 50 and 150 cm for patients with BMI greater than or equal to 50. The Roux or alimentary limb length was 150 cm in all patients. For the initial 14 cases, the stomach was divided using the GIA-100 stapler. This technique was abandoned in favor of the nondivided stomach after a stapler malfunction led to a leak. Closed suction drains were placed in all revisions, in patients with BMI > 55, or if clinically indicated (e.g., identification of an intraoperative leak, technically difficult anastomosis). Drains were removed as clinically indicated. Fascia was closed with a running looped #1 PDS (Ethicon, Somerville) and infiltrated with 0.25% bupivacaine. The subcutaneous space was drained with a #10 Jackson–Pratt drain and the skin was closed with a running subcuticular stitch. The gallbladder was removed if gallstones were documented by preoperative ultrasound. Incisional or umbilical hernias were primarily repaired if encountered, oftentimes through a separate periumbilical incision. Invasive monitoring was not used routinely, and Foley catheters were only placed if patients required invasive hemodynamic monitoring, in the case of revisions, or when patients had multiple prior abdominal operations. Nasogastric tubes were left in place as clinically indicated (intraoperative leak identified) or in patients with BMI > 55. All patients received perioperative antibiotics for 24 h, cefazolin (2 g intravenously every 8 h for three total doses) or clindamycin (900 mg intravenously every 6 h for four doses). Prophylaxis against deep vein thrombosis consisted of 5,000 units of unfractionated heparin administered subcutaneously every 8 h and pneumatic compression stockings until ambulatory. All patients were given a patient-controlled analgesia (PCA) pump for pain. Patients were routinely cared for on a regular surgical floor, equipped for severely obese patients. Patients with significant sleep apnea (SA) or documented coronary artery disease (CAD) were usually observed in the recovery room overnight, and only rarely admitted to the intensive care unit. On the morning of postoperative day 1, patients were routinely studied with Gastrografin upper-GI studies. If no leak was identified, they were given liquids for lunch, advanced to soft food (yogurt, apple sauce, and cottage cheese) for dinner, and switched to oral pain medications. Drains were usually removed before discharge.
Patients were discharged on POD #2 or #3, as indicated, with the following medications: a codeine derivative for pain, prenatal vitamins, iron polysaccharide, calcium citrate, and ursodeoxycholic acid if the gallbladder was present. After hospital discharge, patients were scheduled to be seen at 2 and 8 weeks; at 6, 12, 18, and 24 months; and yearly thereafter. All routine follow-up appointments included nutritional counseling, and those after 2 weeks included laboratory studies [serum electrolytes, glucose, HbA1c, calcium, albumin, lipid profile, liver function tests, complete blood count, platelets, serum insulin, iron, ferritin, vitamin B12, 25-OH vitamin D, and thiamine (vitamin B1)]. Follow-up was 74% at 1 year, 68% at 2 years, 59% at 3 years, 53% at 4 years, and 48% at 5 years. Clinical Data and Data Analysis Clinical and laboratory data were maintained prospectively in a PC-based database since the program's inception in 1999. Data collected included: age, sex, height, weight, BMI, race/ethnicity, payer status, obesity-related comorbidities, operative procedure, duration of stay, major complications, and death. Complications were classified as systemic (prolonged intubation, deep venous thrombosis, pulmonary embolism, and myocardial infarction/fatal arrhythmia) or technical (incisional hernia, intestinal obstruction, leak/perforation, dehiscence, GI bleeding, anastomotic stricture, and anastomotic ulcer). Deaths were analyzed with respect to BMI, demographics, comorbidities, and complications. Superficial wound infections were not included; the incidence of urinary tract infections was not tracked. Nutritional complications were not evaluated because all patients were routinely maintained on vitamins, iron, and calcium supplements postoperatively; it would be impossible to determine the true incidence of any of these nutritional deficiencies. Univariate analyses and logistic regression with SPSS 11.0 were used to determine significance. Results The population consisted of 854 women and 146 men. Their demographic characteristics are summarized in Table 1. The prevalence of obesity-related comorbid conditions is shown in Table 2.

Table 1 Demographic characteristics by sex and race

Characteristic | Women | Men | Total | p-value
Age (years) | 38 ± 11.0 (15–73) | 40 ± 11.9 (15–65) | 38 ± 11.17 (15–73) | 0.064
Weight (kg) | 134 ± 27 (84–263) | 170 ± 43 (81–345) | 139 ± 33 (82–345) | <0.01
BMI (kg/m2) | 51 ± 10 (35–100) | 55 ± 13 (24–116) | 51 ± 10 (24–116) | <0.01
Caucasian | (28%) 238/853 | (53%) 78/147 | 32% | <0.01
African–American | (30%) 253/853 | (20%) 29/147 | 28% | 0.01
Hispanic | (42%) 358/853 | (27%) 40/147 | 40% | <0.01
Other | (0.5%) 4/853 | (0%) 0/147 | 0.4% | 0.41

Table 2 Prevalence of obesity-related comorbid conditions by sex

Condition | Women (N = 853) | Men (N = 147) | Total (%) | p-value
Type II diabetes mellitus | 177 (21%) | 54 (37%) | 23 | <0.01
Hypertension | 310 (36%) | 79 (54%) | 39 | <0.01
CAD/CHF | 29 (3%) | 26 (18%) | 6 | <0.01
Dyslipidemia | 376 (44%) | 80 (54%) | 47 | 0.02
Sleep apnea | 172 (20%) | 63 (43%) | 24 | <0.01
Asthma | 135 (16%) | 16 (11%) | 15 | 0.12
Dyspnea on exertion | 811 (95%) | 127 (86%) | 94 | <0.01
GERD | 513 (60%) | 77 (52%) | 59 | 0.08
Osteoarthritis | 791 (93%) | 126 (87%) | 92 | <0.01
Urinary stress incontinence | 430 (50%) | 5 (3%) | 44 | <0.01
Irregular menses | 276 (32%) | NA | 32 | NA
CAD/CHF coronary artery disease/congestive heart failure; GERD gastroesophageal reflux disease

The most common comorbidities encountered were dyspnea on exertion (94%), joint pain/arthritis (92%), and gastroesophageal reflux disease (59%). The comorbidities typically associated with systemic disease included hypertension (HTN, 39%), obstructive sleep apnea (SA, 24%), dyslipidemia (46%), and asthma (15%).
Approximately 23% of the patient population suffered from type II diabetes mellitus (DM). At the time of initial evaluation, 13.0% of this diabetic patient subset had a prior history of insulin-dependent DM, 57.6% had noninsulin-dependent DM, and 23% had a previous diagnosis of DM intermittently controlled on diet or were newly diagnosed with DM during their preoperative evaluation. Six percent of the patient population had angiographically documented histories of coronary artery disease (CAD) but were deemed suitable risks by their respective specialists. Procedures and Duration of Hospital Stay There were 966 primary RYGBs and 34 revisions of failed bariatric procedures (21 VBG, 5 RYGB, 8 other; all performed at outside institutions) to RYGB. The median length of stay for primary procedures was 2.4 days, compared to 3.7 days for revisions. The average length of stay (LOS) for all patients having primary RYGB was 3.8 days, with 87% of the group leaving in 3 days or less. Complications Overall, 91% of the procedures were without systemic or technical complications. The incidence of complications in relation to BMI is summarized in Table 3. Overall, systemic complications occurred rarely and did not usually correlate with BMI. The most common technical complications were incisional hernia (3.5%), intestinal obstruction (1.9%), and leak/perforation (1.6%).

Table 3 Incidence of complications after RYGB

Complication | BMI < 50 (N = 481) | BMI ≥ 50 (N = 515) | Total (%) | p-value
Systemic complications
  Prolonged intubation | 3 (0.6%) | 8 (1.5%) | 1.1 | 0.16 (NS)
  Deep venous thrombosis | 0 (0%) | 2 (0.4%) | 0.2 | NS
  Pulmonary embolism | 0 (0%) | 3 (0.6%) | 0.3 | NS
  MI/fatal arrhythmia | 1 (0.2%) | 1 (0.2%) | 0.2 | NS
Technical complications
  Incisional hernia | 10 (2.1%) | 25 (4.8%) | 3.5 | 0.019
  Intestinal obstruction | 10 (2.1%) | 9 (1.7%) | 1.9 | NS
  Leak | 7 (1.5%) | 9 (1.9%) | 1.6 | NS
  Dehiscence | 2 (0.4%) | 2 (0.4%) | 0.4 | NS
  GI bleeding requiring transfusion | 3 (0.6%) | 6 (1.2%) | 0.9 | NS
  Anastomotic ulcer | 2 (0.4%) | 0 (0%) | 0.2 | NS
  Anastomotic stricture | 6 (1.2%) | 2 (0.4%) | 0.8 | NS
Death | 4 (0.8%) | 11 (2.1%) | 1.5 | 0.03

Thirty-one patients (3.1%) required reoperation within 30 days of the original procedure. The indications for reoperation within 30 days were leak/perforation (11), intestinal obstruction (9), bleeding (4), rule-out leak (2), dehiscence (4), and subphrenic abscess (1). Indications for late operations (>30 days postoperatively) included incisional hernia repair in 35 patients, intestinal obstruction in 10 patients, and repair of leaks/perforations in five patients who were experiencing ongoing postoperative complications. No patients required reoperation for refractory anastomotic stricture or ulcer. One patient developed a gastrogastric fistula after repair of an early leak, which was treated expectantly since it was clinically insignificant. No late gastrogastric fistulae were identified. Deaths Thirty-day mortality was 1.2%. Overall mortality attributable to surgery was 1.5%. Patients with late deaths due to unrelated events, such as motor vehicle accidents (N = 2) or drug overdoses (N = 1), were excluded and not classified as mortalities in the analysis. Mortality correlated with BMI: four (0.8%) patients with a BMI < 50 died, compared to 11 (2.1%) patients with a BMI ≥ 50 (p = 0.03) (Table 3). Causes of death after RYGB, along with their timing and relationship to BMI, are shown in Table 4. Two patients died of fatal arrhythmias on POD #3 and #4. Both were males, ages 48 and 54, with BMIs > 50, DM, CAD, HTN, and SA. Autopsies were performed in both instances and no other precipitating factors were identified.
One patient, a 43-year-old woman with a BMI of 59, died of a pulmonary embolism (identified at autopsy) at home after 2 weeks.

Table 4 Causes of early and late deaths related to BMI (early = death <30 days, N = 12; late = death >30 days, N = 3)

Cause of death | Early, BMI < 50 | Early, BMI ≥ 50 | Late, BMI < 50 | Late, BMI ≥ 50
MI/arrhythmia | 0 | 2 | 0 | 0
Pulmonary/PE | 0 | 1 | 0 | 0
MSOF secondary to leak | 1 | 2 | 1 | 2
MSOF secondary to bowel obstruction | 0 | 1 | 0 | 0
MSOF, cause unknown | 1 | 1 | 0 | 0
Bleeding complications | 1 | 2 | 0 | 0
PE pulmonary embolism; MSOF multisystem organ failure

Six patients died from MSOF after postoperative leaks, three from the gastrojejunostomy and three from perforations of the distal small bowel within the common channel. Of these, four occurred and were diagnosed within 48 h of the initial RYGB, and all were emergently explored. All four developed MSOF and expired between 14 and 211 days postoperatively. Of the remaining two deaths, one patient, a 46-year-old woman with a BMI of 79 and a history of chronic ventilator dependence due to a paralyzed left hemidiaphragm, on chronic steroid therapy, developed a late leak and multiple intestinal fistulae and eventually succumbed to MSOF. The other patient, a 43-year-old woman with a BMI of 43, had undergone a laparoscopic RYGB at another institution that was complicated by a strangulated internal hernia, massive intestinal gangrene, and short-bowel syndrome. She underwent a reversal of her RYGB with reconstruction of her GI tract, but developed multiple small bowel fistulae 2 weeks postoperatively and died of MSOF several months later. Three patients died from a severe systemic inflammatory response syndrome (SIRS) with MSOF accompanied by extremely high fevers, without any identifiable source. One was a 70-year-old woman with a BMI of 68 who underwent reexploration for an early postoperative small bowel obstruction. The others were a 54-year-old woman with a BMI of 47, DM, HTN, and SA, and a 37-year-old male with a BMI of 94 and severe SA and HTN. Each developed SIRS and MSOF with temperatures of 105–107°F and hyperdynamic circulations. No intraabdominal or other sources were identified despite numerous cultures and radiologic studies. Three patients died of bleeding complications. A 43-year-old woman with a BMI of 48 suffered progressive hypotension and tachycardia in the recovery room postoperatively. These symptoms were initially addressed with rehydration, allowing the patient's hematocrit to fall to a level at which hypovolemic shock and the resulting coagulopathy thwarted efforts to identify or surgically control a single bleeding source. A 31-year-old woman with a BMI of 50 was returned to the OR 4 h postoperatively for control of bleeding from the small bowel mesentery. After this second procedure, she developed SIRS, temperatures of 106°F, and MSOF, with no identifiable source of sepsis. A 54-year-old male with a BMI of 56 and a history of HTN, CAD, and chronic atrial fibrillation suffered a postoperative myocardial infarction and developed a coagulopathy complicated by a massive intrasplenic hematoma after restarting Coumadin, which rapidly progressed to anuria and MSOF. Attempts to reverse his anticoagulation and control the bleeding angiographically were unsuccessful. The necessity for reoperation within 30 days of the original procedure was particularly ominous. Overall, the incidence of death after a second operative procedure within 30 days was 9/31 (29%). Two of the 16 patients with a BMI < 50 who required reoperation within 30 days died (12.5%), compared to 7 of the 15 patients with a BMI ≥ 50 (47%; p = 0.03).
This parallels the mortality for the entire series, where 0.8% of patients with a BMI < 50 died compared to 2.1% of patients with a BMI ≥ 50 (p = 0.03). However, among the 15 patients classified as operative deaths for the entire series, 9 (60%) died after their second operative procedure. Logistic regression demonstrated that CAD [LR 7.5, p < 0.01 (95% CI 2.2 to 25.3)] and SA [LR 3.3, p = 0.03 (95% CI 1.1 to 10.1)], followed by age [LR 1.06, p = 0.042 (95% CI 1.00 to 1.12)], were risk factors for death in all patients (Table 5). Although the sample was small, 12.7% of patients with CAD died (7/55), and 29% of patients with BMI > 50 and both CAD and SA died.

Table 5 Logistic regression evaluation of patient comorbidities as predictors of mortality

Variable | p-value | Relative risk | 95% confidence interval
Age | 0.042 | 1.059 | 1.002–1.120
BMI | 0.130 | 1.025 | 0.993–1.058
Female | 0.612 | 0.729 | 0.215–2.474
DM | 0.852 | 0.889 | 0.259–3.054
HTN | 0.257 | 2.085 | 0.585–7.438
CAD | 0.001 | 7.446 | 2.195–25.258
Dyslipidemia | 0.099 | 0.359 | 0.106–1.211
Asthma | 0.070 | 3.065 | 0.913–10.293
SA | 0.033 | 3.342 | 1.104–10.115
SOB | 0.693 | 0.644 | 0.073–5.725

Although the average BMI of males was slightly greater than that of females (55.2 vs 51.2 kg/m2, p < 0.01), the two populations also differed in characteristics other than BMI. The male population had a significantly greater prevalence of DM, HTN, CAD, dyslipidemia, and SA. Females had a greater prevalence of pulmonary comorbidities, including asthma and dyspnea on exertion (Table 2). When logistic regression was performed separately by sex, males with angiographically demonstrated CAD were 30 times more likely to die [LR 30.1, p = 0.028 (95% CI 1.4 to 631.4)]. Logistic regression did not identify CAD as a significant predictor when the analysis was limited to women. Predictors of death for women included age [LR 1.07, p = 0.033 (95% CI 1.00 to 1.14)] and SA [LR 4.1, p = 0.040 (95% CI 1.07 to 16.2)]. Logistic regression was also repeated by race. When the analysis was limited to Caucasian patients, Caucasians with CAD were 58 times more likely to die [LR 58.8, p < 0.01 (95% CI 4.8 to 716.9)] than those without CAD. Increasing BMI was also significant among Caucasians [LR 1.08, p = 0.02 (95% CI 1.01 to 1.16)]. Evaluation of African–American patients demonstrated that only SA was significant [LR 19.1, p = 0.03 (95% CI 1.29 to 282.8)]. Regression in the Hispanic population did not identify a specific factor. One-way ANOVA did not demonstrate any significant differences in the prevalence of death or CAD between the three racial groups for the entire population or by sex. Hispanic patients had significantly less SA than either the Caucasian or African–American patients (18.1 vs 25.6 and 27.4%, respectively; p = 0.01). When examined specifically in relation to sex, there were no differences in the prevalence of SA among the males. African–American women, however, had the greatest prevalence of SA (26.4%), compared to Caucasian (18.9%) and Hispanic (15.7%) women (p < 0.01). Discussion The tremendous growth of bariatric surgery over the past several years has spawned much interest in its complications and mortality, first in the media, but most recently in the public health arena as well. Health and malpractice insurance carriers as well as governmental agencies and professional societies are evaluating the risks of bariatric surgery and the surgeons who perform it. In several states, insurance companies have stopped covering bariatric surgery at the same time that the Centers for Medicare and Medicaid Services have approved coverage for it.
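The regression reported in Table 5 is a standard binary logistic model of death on age, BMI, sex, and comorbidities, with the tabulated relative risks obtained by exponentiating the fitted coefficients. The following Python sketch, on synthetic data with hypothetical variable names, illustrates both this model and the 2×2 reoperation-mortality comparison quoted above; it is an illustration under stated assumptions, not the authors' SPSS analysis.

```python
# Sketch: logistic regression of death on comorbidities, reported as
# odds ratios with 95% CIs (cf. Table 5), plus a 2x2 comparison of
# reoperative mortality by BMI group. Data below are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import fisher_exact

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "age": rng.normal(38, 11, n),
    "bmi": rng.normal(51, 10, n),
    "cad": rng.binomial(1, 0.06, n),
    "sleep_apnea": rng.binomial(1, 0.24, n),
})
# Simulate deaths with assumed effects of age, CAD, and sleep apnea.
logit_p = -6 + 0.03 * df["age"] + 2.0 * df["cad"] + 1.2 * df["sleep_apnea"]
df["death"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df[["age", "bmi", "cad", "sleep_apnea"]])
fit = sm.Logit(df["death"], X).fit(disp=0)
odds_ratios = pd.DataFrame({
    "OR": np.exp(fit.params),           # exponentiated coefficients
    "CI_low": np.exp(fit.conf_int()[0]),
    "CI_high": np.exp(fit.conf_int()[1]),
    "p": fit.pvalues,
})
print(odds_ratios.round(3))

# 2x2 table for deaths after reoperation within 30 days, by BMI group,
# using the counts quoted in the text: 2/16 (BMI < 50) vs 7/15 (BMI >= 50).
table = [[2, 16 - 2], [7, 15 - 7]]
odds, p_val = fisher_exact(table)
print(f"Fisher exact p = {p_val:.3f}")
```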
Several malpractice carriers have stopped issuing policies for surgeons performing bariatric procedures, while others are categorizing bariatric surgery as a high-risk subspecialty area, similar to obstetrics and neurosurgery, and increasing premiums accordingly. In each of these instances, the overriding fear or consideration appears to be that the risks associated with bariatric surgery are excessively high. Much of the information being utilized in this regard has come from series utilizing pooled data from multiple smaller series or government databases.1,5–9 Buchwald et al.5 performed a metaanalysis of 16,944 patients, which included 7,074 patients who underwent gastric bypass (open and laparoscopic) in 44 studies, with a 30-day mortality rate of 0.5%. Several authors have reported that complication rates of bariatric surgery were inversely correlated with case loads, reporting mortality rates in the range of 0.1–0.5% for surgeons performing more than 100 or 150 cases per year.6–8 Flum et al.9 recently reported 30- and 90-day mortality rates of 2.0 and 2.8%, respectively, in Medicare beneficiaries having bariatric surgery, with men having higher rates than women and those over 65 years of age having higher rates than those younger than 65. Many of these series lack data concerning BMI and comorbidities, making risk assessment difficult or impossible. In one large series from a single center, Christou et al.10 reported a 30-day mortality of 0.4% in 1,035 patients undergoing RYGB, of whom 820 had open RYGB. No details about preexisting comorbidities or perioperative complications were given. The results of these pooled series differ from those in several large series of open gastric bypasses.2,3,11 Livingston et al.2 reported an operative mortality of 1.3% in 1,067 patients after open RYGB. In his series, the mean BMI was 53.6 kg/m2, the mean age 42.3 years, and the incidence of comorbidities was diabetes (23%), hypertension (48%), and sleep apnea (39%). Only male sex was predictive of severe life-threatening complications; mortality in patients over 55 years was significantly greater than in patients under 55 (3.5 vs 1.1%, p < 0.05). More recently, Fernandez et al.3 analyzed their patient cohort of 1,431 patients having open RYGB and found a 1.9% mortality, which was associated with age, weight, longer limb gastric bypass, and the occurrence of a leak or pulmonary embolism. In that series, the mean BMI was 53.3, the mean age 40.7 years, and the incidence of serious comorbidities was diabetes (19.5%), hypertension (51%), and sleep apnea (33%). In both of these series, the BMI was higher, the patients were older, and the incidence of serious comorbidities, such as diabetes, hypertension, dyslipidemia, and sleep apnea, was higher than in the pooled series. Pories et al.11 reported a 1.5% perioperative mortality rate, with 0.8% dying of sepsis, 0.5% dying of pulmonary embolism, and 0.2% of an unknown cause. Additional information regarding the incidence of leaks, bowel obstructions and other complications was not reported, as these were not the focus of the original paper. Similarly, in the present series, the 30-day mortality rate was 1.2% in 1,000 patients with a mean BMI of 52 kg/m2. The prevalence of preoperative comorbidities was comparable to the larger series (diabetes 23%, hypertension 39%, dyslipidemia 46%, coronary artery disease/congestive heart failure 5.5%, sleep apnea 23.5%, and asthma 15%) and generally higher than in the pooled series.
The incidence of leaks and postoperative small bowel obstruction in our series was comparable to the other series. The incidence of pulmonary embolism was lower than those reported, perhaps related to the use of both pneumatic compression stockings and subcutaneous heparin, and to early ambulation with a shorter length of stay. Although the incidence of incisional hernia in our series was low (3.5%), this may well be affected by our suboptimal follow-up. Multisystem organ failure accounted for 11 deaths in our series (73%). In six patients, this resulted from leaks or perforations; in one it followed a bowel obstruction; in two it followed postoperative hemorrhage; and in two patients the cause was never determined. In each of these instances, the complication was identified early, and appropriate treatment and supportive care were instituted. In four of the patients, the clinical course of SIRS and MSOF was characterized by extremely high temperatures (>105°F), with no apparent source ever identified. To our knowledge, this "syndrome" has not been reported, but it may reflect the enormous adipose tissue stores in these patients acting as a "metabolic sink", releasing cytokines and other mediators and perpetuating this extreme systemic inflammatory response. Two of the deaths were due to fatal arrhythmias, both in patients with known CAD who were extensively evaluated preoperatively. The death due to pulmonary embolism occurred after discharge, even though the patient had received prophylaxis with both heparin and pneumatic compression stockings in the hospital. The remaining death, due to exsanguination, was clearly preventable. Although males were significantly heavier and had higher BMIs than women, sex was not an independent predictor of mortality. However, the presence of angiographically documented coronary artery disease was particularly ominous in men: males with angiographically demonstrated CAD were 30 times more likely to die. In women, predictors of death included age and SA, but not CAD. With respect to race, Caucasian males with BMI > 50 and CAD were most likely to die, whereas SA was a predictor in African–American patients. There were no predictors in Hispanics. Despite the increased mortality in Caucasian males with CAD, these patients were not candidates for cardiac revascularization, and extreme weight loss was the only intervention thought to make a beneficial health impact. Based upon our data and those of others,2,3,11 it appears that the mortality risk of open RYGB is in the range of 1–2%. Risk appears to be adversely affected by increasing BMI and those factors with which it is often associated, namely male sex and coronary artery disease. How this compares with the perioperative mortality after laparoscopic RYGB is still unclear, because many series of patients having laparoscopic RYGB do not include patients with the highest BMIs, above 60 or 70 kg/m2, or patients having revisional surgery. A perioperative mortality rate of 1.2% after RYGB compares favorably with that after other common surgical procedures.
For example, perioperative mortality after elective surgery for abdominal aortic aneurysms has been reported at 3.1–4.7% overall: 1.0–2.7% in patients under 65 years of age and 3.5–5.2% in patients over 65 years of age.12 Using Medicare data adjusted for high-volume surgeons, Birkmeyer et al.13 reported perioperative mortality rates of 4.5% for colectomy, 8.6% for gastrectomy, 8.4% for esophagectomy, 3.8% for pancreatic resection, 4.0% for pulmonary lobectomy, and 10.7% for pneumonectomy. While it is true that all of these patients were over 65 years of age, the fact remains that these perioperative mortality rates are all substantially greater than that after RYGB (even in Medicare recipients, as recently reported9), and neither the public, the press, the insurance industry, nor the various state Departments of Health are appalled or alarmed, or calling for a moratorium on those procedures. This is not meant to suggest that every effort should not be made to lessen the risks of bariatric surgery and to improve operative mortality, but rather to inject some proportionality into the discussion. The importance of such careful analysis of bariatric surgical data, including its limitations, and the need to continue to offer bariatric surgery to those patients for whom it constitutes the best available treatment have recently been emphasized.14 In his 2004 PERSPECTIVE, Surgery for Severe Obesity, Steinbrook15 quotes Robert Brolin, MD, cautioning physicians and the public "...to reconcile the fact that the operation has a real mortality and it will continue to have a real mortality under the best of circumstances. Some of these patients are just profound operative risks for any kind of surgical intervention...The sickest ones are the ones who benefit the most, but they are also the highest risk". Conclusion RYGB can be performed with acceptable perioperative morbidity in patients over a wide range of BMIs. Patients with a BMI ≥ 50 have a higher mortality, both for initial operations and after reexploration. Age, coronary artery disease, and obstructive sleep apnea correlate with perioperative mortality. These three risk factors were more prevalent in patients with a BMI ≥ 50 and may contribute to this finding.
[ "morbidity", "mortality", "roux-en-y gastric bypass", "obesity", "morbid obesity" ]
[ "P", "P", "P", "P", "P" ]
Clin_Rheumatol-4-1-2367392
Tracheobronchomalacia due to amyloidosis in a patient with rheumatoid arthritis
In this case report, we describe a patient with longstanding rheumatoid arthritis who developed tracheobronchomalacia with a fatal outcome. Despite negative antemortem biopsies of abdominal fat and tongue, amyloid was found postmortem in the trachea and appeared to be associated with the tracheobronchomalacia. Introduction Amyloidosis is a well-known complication of longstanding, chronically active rheumatoid arthritis (RA) [1]. We present an RA patient with amyloidosis, albeit with a unique presenting symptom and unexpected organ involvement. Case report The patient was a 69-year-old female with seropositive erosive RA who had not been treated with disease-modifying antirheumatic drugs since 1984. In March 2005, she presented to another hospital because of progressive dyspnea of several hours' duration, due to bronchopneumonia. After 2 days, she developed a stridor. Otolaryngological examination showed diffuse swelling of the tongue, pharynx and neck, and an extensive tracheobronchomalacia. Treatment consisted of intubation, antibiotics, diuretic therapy, and corticosteroids. She was admitted to our Intensive Care Unit because of progressive swelling and respiratory insufficiency due to the tracheobronchomalacia. Physical examination revealed extensive swelling and a remarkable protrusion of the tongue. Typical rheumatoid joint deformities and rheumatoid nodules were present, but no active arthritis. Laboratory investigations revealed normocytic anemia and renal insufficiency. The measured creatinine clearance was 12 ml/min, with proteinuria of 0.35 g/24 h. Computed tomography of the thorax and neck provided no explanation for the tracheobronchomalacia. Clinical suspicion of amyloidosis was not confirmed by biopsies of abdominal fat tissue and of the tongue. Ultimately, the patient succumbed to multiple organ failure. Autopsy revealed AA amyloid in the trachea (Fig. 1), spleen, liver, and, perivascularly, in the kidneys. Fig. 1 Amyloidosis of the trachea (Congo red stain) Discussion Amyloidosis of the trachea is a rare but known disorder, even rarer in patients with systemic diseases (such as Sjögren syndrome), and it has never before been associated with tracheobronchomalacia [2–4]. We report the first case of an RA patient with tracheobronchomalacia due to AA amyloidosis of the trachea. The diagnosis was only confirmed postmortem, although several biopsies had been performed antemortem to confirm the clinical suspicion. We refrained from rectal and kidney biopsies because of the relatively high risk of complications associated with these procedures; the absence of diarrhea and the low level of proteinuria suggested a relatively low pretest probability. Therefore, the finding of renal amyloid was surprising. Recently, Uda et al. [5] described a prospective cohort study of patients with RA who had amyloidosis of the kidney without clinical signs of kidney disease. At follow-up, the group of patients with amyloid deposition confined to the perivascular region continued to have little proteinuria and a good renal prognosis, compared to those with amyloid deposition in the glomeruli, who developed rapidly deteriorating renal function. Our patient, however, shows that the disease course in RA patients with perivascular amyloid deposition in the kidney is not invariably benign and may be associated with fatal complications elsewhere. In conclusion, tracheobronchomalacia can be due to amyloid deposition in patients with RA.
Amyloidosis should be considered in RA patients with a small amount of proteinuria, and a renal biopsy should be performed, especially as accumulating evidence shows that patients with amyloidosis have a better prognosis when the ongoing inflammation is effectively suppressed [1].
[ "tracheobronchomalacia", "amyloidosis", "rheumatoid arthritis", "kidney" ]
[ "P", "P", "P", "P" ]
Pediatr_Radiol-3-1-1805044
2005 PRETEXT: a revised staging system for primary malignant liver tumours of childhood developed by the SIOPEL group
Over the last 15 years, various oncology groups throughout the world have used the PRETEXT system for staging malignant primary liver tumours of childhood. This paper, written by members of the radiology and surgery committees of the International Childhood Liver Tumor Strategy Group (SIOPEL), presents various clarifications and revisions to the original PRETEXT system. Introduction The PRETEXT system was designed by the International Childhood Liver Tumor Strategy Group (SIOPEL) for staging and risk stratification of liver tumours [1, 2]. PRETEXT is used to describe tumour extent before any therapy, thus allowing more effective comparison between studies conducted by different groups. The system has good interobserver reproducibility [3] and good prognostic value in children with hepatoblastoma [2–5], and is the basis of risk stratification in current SIOPEL hepatoblastoma studies. Most other study groups now use the PRETEXT system to describe imaging findings at diagnosis, even if this is not their main staging system. Certain limitations of the system have become obvious over the last 15 years. In addition, there have been significant advances in imaging during this period [6]. This paper is the report of a working party that met in June 2005 to update the PRETEXT system. PRETEXT staging is based on Couinaud's system of segmentation of the liver (Fig. 1) [7]. The liver segments are grouped into four sections as follows: segments 2 and 3 (left lateral section), segments 4a and 4b (left medial section), segments 5 and 8 (right anterior section) and segments 6 and 7 (right posterior section). The term section is used (where other authors use segment or sector) to avoid terminological confusion. Fig. 1 Schematic representations of the segmental anatomy of the liver. a Frontal view of the liver. The numerals label Couinaud's segments 2 to 8. b The hepatic veins (black) and the intrahepatic branches of the portal veins (grey) are shown. Segment 1 (equivalent to the caudate lobe) is seen to lie between the portal vein and the inferior vena cava. c Exploded frontal view of the segmental anatomy of the liver. The umbilical portion of the left portal vein (LPV) separates the left medial section from the left lateral section (LLS). Segment 1 is obscured in this view. Note that the term "section" has been used in preference to "segment" or "sector" (see text). d Transverse section of the liver shows the planes of the major venous structures used to determine the PRETEXT number. The hepatic (blue) and portal (purple) veins define the sections of the liver (2–8). This schematic diagram shows how the right hepatic (RHV) and middle hepatic (MHV) veins indicate the borders of the right anterior section (RAS) with the right posterior (RPS) and left medial (LMS) sections. Note that the left portal vein (LPV) actually lies caudal to the confluence of the hepatic veins and is not seen in the same transverse image. The left hepatic vein (LHV) runs between segments 2 and 3 and is not used in PRETEXT staging. In the original system, the caudate lobe (segment 1) was ignored. The PRETEXT number was derived by subtracting from four the highest number of contiguous liver sections that were not involved by tumour [1]. This number is, very roughly, an estimate of the difficulty of the expected surgical procedure (Table 1). Pedunculated tumours are considered to be confined to the liver and to occupy only the section(s) from which they originate.
Table 1 Definitions of PRETEXT number (see text for PRETEXT number of tumours involving the caudate lobe)

  I    One section is involved and three adjoining sections are free
  II   One or two sections are involved, but two adjoining sections are free
  III  Two or three sections are involved, and no two adjoining sections are free
  IV   All four sections are involved

In addition to describing the intrahepatic extent of the primary tumour(s), the PRETEXT system includes certain other criteria. These assess involvement of the inferior vena cava (IVC) or hepatic veins (designated V), involvement of the portal veins (P), extrahepatic abdominal disease (E) and distant metastases (M). The purpose of the 2005 revision was to improve the original definitions of the PRETEXT stages, to clarify the criteria for “extrahepatic” disease, and to add new criteria (Table 2). The term “extrahepatic” disease is confusing, and these categories will in future be called “additional criteria”. There is still much to be learned about prognostic factors in the primary malignant liver tumours of childhood. An important goal of these changes, therefore, is to improve our ability to identify prognostic imaging findings, and thereby refine risk stratification.

Table 2 2005 PRETEXT staging: additional criteria

  Caudate lobe involvement (C)
    C1  Tumour involving the caudate lobe (all C1 patients are at least PRETEXT II)
    C0  All other patients
  Extrahepatic abdominal disease (E)
    E0  No evidence of tumour spread in the abdomen (except M or N)
    E1  Direct extension of tumour into adjacent organs or diaphragm
    E2  Peritoneal nodules
    Add suffix “a” if ascites is present, e.g., E0a
  Tumour focality (F)
    F0  Patient with solitary tumour
    F1  Patient with two or more discrete tumours
  Tumour rupture or intraperitoneal haemorrhage (H)
    H1  Imaging and clinical findings of intraperitoneal haemorrhage
    H0  All other patients
  Distant metastases (M)
    M0  No metastases
    M1  Any metastasis (except E and N)
    Add suffix or suffixes to indicate location (see text)
  Lymph node metastases (N)
    N0  No nodal metastases
    N1  Abdominal lymph node metastases only
    N2  Extra-abdominal lymph node metastases (with or without abdominal lymph node metastases)
  Portal vein involvement (P)
    P0  No involvement of the portal vein or its left or right branches
    P1  Involvement of either the left or the right branch of the portal vein
    P2  Involvement of the main portal vein
    See text for definition of involvement. Add suffix “a” if intravascular tumour is present, e.g., P1a
  Involvement of the IVC and/or hepatic veins (V)
    V0  No involvement of the hepatic veins or inferior vena cava (IVC)
    V1  Involvement of one hepatic vein but not the IVC
    V2  Involvement of two hepatic veins but not the IVC
    V3  Involvement of all three hepatic veins and/or the IVC
    See text for definition of involvement. Add suffix “a” if intravascular tumour is present, e.g., V3a

Although the PRETEXT system is principally used for hepatoblastoma, the 2005 revision is intended to be applicable to all primary malignant liver tumours of childhood, including hepatocellular carcinoma and epithelioid haemangioendothelioma. The original SIOPEL risk stratification system for hepatoblastoma has already been modified in the protocols for current SIOPEL studies (Table 3). Firstly, tumour rupture or intraperitoneal haemorrhage at the time of diagnosis (H1, see below) is now a defining criterion of high risk. Secondly, children with alpha-fetoprotein levels of <100 μg/l are also considered to be high risk. The 2005 revision involves no further change in the SIOPEL risk stratification system for hepatoblastoma.
Table 3 Risk stratification in hepatoblastoma for current SIOPEL studies

  High risk: patients with any of the following
    Serum alpha-fetoprotein <100 μg/l
    PRETEXT IV
    Additional PRETEXT criteria: E1, E1a, E2, E2a; H1; M1 (any site); N1, N2; P2, P2a; V3, V3a
  Standard risk: all other patients

PRETEXT grouping

The traditional approach to radiological segmentation of the liver, based on the paths of the hepatic veins, is an oversimplification. This is partly due to the variability of hepatic venous anatomy [8–10]. The main problem, however, is the imperfect correlation with segments defined by the branching pattern of the portal veins [8, 11–13]. Although the plane of the right hepatic vein reliably separates the right posterior and anterior sections [9], the left hepatic vein runs to the left of the boundary between the left lateral and medial sections, which is best defined by the plane of the fissure of the ligamentum teres and the umbilical portion of the left portal vein (Fig. 1) [14].

PRETEXT I

This group includes only a small proportion of primary malignant liver tumours of childhood. From the definition of the PRETEXT number, it can be seen that only tumours localized to either the left lateral section or the right posterior section qualify as PRETEXT I (Fig. 2).

Fig. 2 PRETEXT I. a The left lateral section (segments 2 and 3) is involved. b The right posterior section (segments 6 and 7) is involved.

PRETEXT II

Most PRETEXT II tumours are limited to either the right lobe or the left lobe of the liver. Tumours of the left medial or right anterior sections are also PRETEXT II. Multifocal tumours involving only the left lateral and right posterior sections are classified as PRETEXT II; this pattern is very rare. Tumours limited to the caudate lobe were not classifiable under the original PRETEXT system [1]. In the 2005 PRETEXT system these tumours are classified as PRETEXT II (but see also C, below). This is the only change in the PRETEXT numbering system in this revision. There is no change in numbering for tumours involving the caudate lobe and any other part of the liver, which are classified as PRETEXT II (if two or three contiguous sections are free), III (if there are no two contiguous sections free) or IV (if all four sections are involved) (Fig. 3).

Fig. 3 PRETEXT II. a Tumour involving only the right lobe of the liver. b A transverse T1-weighted MR image of a child with hepatoblastoma shows that the middle hepatic vein (arrow) is displaced but not involved by the tumour. This is the most common type of PRETEXT II tumour. c Tumour involving only the left lobe of the liver. d Tumour involving only the left medial section. e Tumour involving only the right anterior section. f Multifocal tumours involving only the left lateral and right posterior sections. g The tumour is confined to the caudate lobe (PRETEXT II C1, see text; RPV right portal vein).

PRETEXT III

The unifocal tumours in this category spare only the left lateral or right posterior section. These tumours are relatively common. In children with hepatoblastoma, great care must be taken to distinguish between invasion and compression of the apparently uninvolved section of the liver, because risk stratification (and/or the need for liver transplantation) may depend on this point. Anterior central liver tumours involve segment 4 and either or both of segments 5 and 8.
Although recent advances in surgical technique permit resection of these tumours without trisectionectomy [15], classification as PRETEXT III reflects the difficulty of these operations (Fig. 4).

Fig. 4 PRETEXT III. a Extensive tumour sparing only the left lateral section. b Extensive tumour sparing only the right posterior section. c Anterior central liver tumour involving the left medial and right anterior sections. d Contrast-enhanced CT image shows a central liver tumour lying between the left portal vein (white arrow) and the right hepatic vein (black arrow). e Multifocal PRETEXT III tumour, sparing the right anterior section. f Multifocal tumours sparing only the left lateral and right anterior sections. g Multifocal tumours sparing only the left medial and right posterior sections.

Multifocal PRETEXT III tumours may also spare the right anterior or left medial sections, or two non-contiguous sections. These patterns are rare.

PRETEXT IV

PRETEXT IV tumours involve all sections of the liver. These tumours are often multifocal. Alternatively, a very large solitary tumour can involve all four sections (Fig. 5).

Fig. 5 PRETEXT IV. a Multifocal PRETEXT IV tumours involve all four sections. b Contrast-enhanced CT image of a patient with PRETEXT IV F1 (see text) hepatoblastoma. c Unifocal PRETEXT IV tumours often have a diffuse growth pattern. d Contrast-enhanced CT image of a patient with diffuse PRETEXT IV hepatoblastoma.

C: caudate lobe tumours

The caudate lobe and caudate process (segment 1 or segments 1 and 9, depending on the system of nomenclature) can be resected with either the left or right lobe of the liver [7]. For this reason, segment 1 was not considered in the PRETEXT classification in the original system [2]. Modern surgical techniques have made resection of segment 1 safer, but these operations remain difficult. Involvement of the caudate lobe is, therefore, a potential predictor of poor outcome. If any tumour is present in segment 1 on imaging at diagnosis (Fig. 3g), the patient will be coded as C1, irrespective of the PRETEXT group (see above). All other patients should be coded as C0.

E: extrahepatic abdominal disease

The assessment of extrahepatic abdominal disease was one of the most confusing aspects of the original PRETEXT system, and clearly needed revision. Originally, there was a requirement for all extrahepatic abdominal spread of tumour (E+) to be proved by biopsy. Modern imaging techniques are capable, in principle, of identifying extrahepatic abdominal tumour extension in many forms. The frequency and significance of these imaging findings differ between tumour types, and not all patterns are easily biopsied. In hepatoblastoma, for example, direct extension of tumour into other abdominal organs is unusual. Tumour extension through the diaphragm (Fig. 6) is uncommon, but can be shown quite convincingly by MRI or CT, and biopsy proof may be impractical. In the 2005 revision, patients with direct extension of tumour through the diaphragm or into other organs can be coded as E1 without biopsy proof.

Fig. 6 Extrahepatic abdominal tumour extension. This composite of contrast-enhanced CT images in a patient with hepatoblastoma shows growth of the primary tumour through the diaphragm into the thorax (E1). The 2005 PRETEXT system no longer requires biopsy proof for this form of tumour spread.

Pedunculated tumours are considered to be confined to the sections from which they arise, and are not extrahepatic disease.
Peritoneal tumour seeding was originally not included in this category [2]. It probably indicates more advanced abdominal disease than direct extension of the primary tumour. Imaging techniques, especially ultrasonography, can often show even small peritoneal nodules clearly, and the differential diagnosis is very limited. In the 2005 revision, peritoneal nodules will be assumed to be metastases, and will be coded as E2. All other patients should be coded as E0. Ascites is an unusual finding at presentation in hepatoblastoma, but is more common in hepatocellular carcinoma, where it may be an independent predictor of poor prognosis. For this reason, patients with ascites will be coded as E0a, E1a or E2a as appropriate. Abdominal lymph node metastases, which were previously recorded as E+, are now coded as N (see below).

F: tumour focality

In SIOPEL 1, multifocal tumours were identified at the time of diagnosis in 18% of the patients with hepatoblastoma where this information was available [4]. Univariate analysis showed that the 5-year event-free survival was significantly worse for patients with multifocal tumour (40%) than for those with unifocal tumour (72%) [4]. The independent significance of this finding is unclear, as there is clearly an association between multifocality and advanced PRETEXT number. The German Society of Pediatric Oncology and Hematology reported slightly different results [16]. In its HB89 study, 21% of patients had multiple well-defined tumours, and these children had a disease-free survival (DFS; 87%) similar to that of children with a single tumour (86%). However, in 20% of children the tumour exhibited a diffuse growth pattern (Fig. 5), and these had a significantly worse DFS (21%) [16]. Unfortunately, a diffuse growth pattern is difficult to define and, despite its promise as a potential risk factor, it was decided not to incorporate it in the 2005 PRETEXT revision. Patients with one hepatic tumour should be coded as F0. All those with more than one tumour nodule (Figs. 3, 4 and 5), regardless of nodule size or PRETEXT stage, should be coded as F1.

H: tumour rupture or intraperitoneal haemorrhage

It is not uncommon for hepatoblastoma and hepatocellular carcinoma to present with tumour rupture [17, 18]. Originally, these patients were not automatically included as high risk in SIOPEL studies, because of the requirement that extrahepatic disease (E) be proved by biopsy. Although the data to prove this are not currently available, it seems intuitively likely that tumour rupture (usually manifesting as intraperitoneal haemorrhage) is a risk factor, and these patients should be coded as H1. Laparotomy or aspiration of peritoneal blood is not necessary for diagnostic purposes if characteristic imaging and clinical findings (such as hypotension and low haematocrit or haemoglobin level) are present. The presence of peritoneal fluid on imaging alone does not imply tumour rupture (but see E above). Since the opening of the SIOPEL 4 study in September 2004, tumour rupture has become a defining feature of high-risk hepatoblastoma in SIOPEL studies. Patients with no evidence of tumour rupture or haemorrhage, and those with only subcapsular or biopsy-related intraperitoneal bleeding, are coded as H0.

M: distant metastases

Patients with distant metastases at diagnosis are coded as M1. In hepatoblastoma, these metastases are predominantly found in the lungs.
Although the best imaging modality for the identification of lung metastases is currently CT, the defining characteristics of lung metastases in this context have not been specifically studied. It is believed, however, that factors favouring a diagnosis of metastasis include multiple lesions, a rounded, well-defined contour and a subpleural location. In most parts of the world, a single rounded lung lesion with a diameter of >5 mm in a child with a primary liver tumour is very likely to be a metastasis. Patients with these findings on chest CT scans should be classified as M1. Biopsy is not required for staging purposes, because it is uncommon for other lesions to mimic metastases in this clinical context. The protocols of the SIOPEL studies recommend central radiological review if there is any doubt about the presence of lung metastases.

Other metastases are infrequently found at diagnosis in hepatoblastoma, but are more common in hepatocellular carcinoma. The imaging findings of brain metastases are usually characteristic, and biopsy is not required. Bone scintigraphy is recommended for staging in children with hepatocellular carcinoma, but not hepatoblastoma. Abnormal calcium metabolism is common in children with hepatoblastoma, and may cause abnormal uptake on bone scintigraphy, especially in the ribs [19], whereas bone metastases are rare [20]. Biopsy proof is therefore mandatory for suspected bone metastases in hepatoblastoma, unless the findings of cross-sectional imaging are characteristic and the patient is already in the high-risk category for some other reason, such as the presence of lung metastases. Bone marrow biopsy is not recommended in children with hepatoblastoma, because bone marrow spread is rare [20].

It is not known whether metastases at different sites have different prognostic implications. For statistical purposes, it is therefore recommended that one or more suffixes be added to M1 to indicate the major sites of metastasis: pulmonary (p), skeletal (s), central nervous system (c), bone marrow (m), and other sites (x). A child with lung, brain, and adrenal metastases would therefore be coded as M1cpx. Patients with no evidence of haematogenous metastatic spread of tumour should be coded as M0.

N: lymph node metastases

Because porta hepatis (and other abdominal) lymph node metastases are quite unusual in hepatoblastoma, SIOPEL trials have always required this form of tumour spread to be proved by biopsy. In fact, benign enlargement of lymph nodes is probably not uncommon, and the accuracy of positron emission tomography is not known in this context. Because biopsy of equivocal lymph nodes inevitably carries some risk, the SIOPEL committee actively discourages this. Biopsy may, however, be required if there is significant nodal enlargement (for example, short axis >15 mm) in a child with no other criteria for high-risk hepatoblastoma. Lymph node metastases are quite common in hepatocellular carcinoma and fibrolamellar carcinoma, and biopsy proof is not required if the imaging abnormality is unequivocal. An arbitrary threshold short-axis diameter of 15 mm is suggested for this purpose. Children with no lymph node metastases by these criteria are coded as N0, those with nodal metastases limited to the abdomen (i.e. caudal to the diaphragm and cranial to the inguinal ligament) as N1, and those with extra-abdominal nodal metastases as N2.
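Taken together, the PRETEXT number and the additional criteria form a compact annotation (the M1cpx example above). The paper prescribes no canonical string format, so the composer below is purely hypothetical, with argument names of our own choosing:

```python
# Hypothetical composer for a 2005 PRETEXT annotation string. The codes come
# from Table 2; the ordering and field names here are assumptions.
def pretext_annotation(number, c=0, e=0, ascites=False, f=0, h=0,
                       m_sites=(), n=0, p=0, p_tumour=False,
                       v=0, v_tumour=False):
    roman = {1: "I", 2: "II", 3: "III", 4: "IV"}[number]
    parts = [
        f"PRETEXT {roman}",
        f"C{c}",
        f"E{e}" + ("a" if ascites else ""),       # "a" suffix marks ascites
        f"F{f}",
        f"H{h}",
        "M1" + "".join(sorted(m_sites)) if m_sites else "M0",
        f"N{n}",
        f"P{p}" + ("a" if p_tumour else ""),      # "a" marks intravascular tumour
        f"V{v}" + ("a" if v_tumour else ""),
    ]
    return " ".join(parts)

# The child with lung, brain and adrenal metastases from the text:
print(pretext_annotation(3, f=1, m_sites=("p", "c", "x")))
# -> PRETEXT III C0 E0 F1 H0 M1cpx N0 P0 V0
```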
P: portal vein involvement

Involvement of the main portal vein and/or both major branches has been considered a risk factor in hepatoblastoma, because this has obvious implications for the resectability of the tumour. It is also possible that portal vein invasion detected by imaging is an independent risk factor for tumour recurrence [21]. The original PRETEXT criteria, however, did not specifically define the word “involvement”. It is well recognized that a tumour that abuts or displaces a major portal venous branch at imaging performed at diagnosis (Fig. 7) may shrink away from the vein following preoperative chemotherapy. Imaging evidence of complete obstruction or circumferential encasement (Fig. 7) is therefore required to qualify as portal vein involvement. Failure to identify the portal vein or one of its major branches in either its normal position or its expected displaced location, on good quality images, is strong evidence of obstruction. The other form of involvement, portal vein invasion, is not uncommon, and is often best detected by ultrasound (Fig. 7). Various US signs may be present [22, 23], and analogous findings can be seen on CT and MR imaging.

Fig. 7 Involvement of the portal and hepatic venous systems. a When the tumour (grey) approaches or abuts the vein (black), there is no venous involvement, even if the vein is partly encased. b Complete obstruction or encasement of the vein is one form of involvement. Obstruction of the inferior vena cava by extrinsic compression, however, does not count as involvement (see text). c Intravascular tumour growth in the portal and/or hepatic venous systems is not uncommon in children with hepatoblastoma or hepatocellular carcinoma. d Transverse ultrasound image of the right lobe of the liver in a patient with hepatoblastoma. The tumour (white circles) has grown into the right branch of the portal vein (P1a), disrupting the normal “white line” of the vein wall (arrows).

Patients with no imaging evidence of involvement of the main portal vein, its bifurcation, or either of its main branches will be coded as P0. Those who fulfil the original PRETEXT definition of P+ (involvement of the main portal vein, its bifurcation, or both of its main branches), as well as those with “cavernous transformation” of the portal vein, will be coded as P2. P2, however, represents very advanced disease. For this reason, the category P1 has been created for patients with evidence of involvement of one major branch of the portal vein. In addition, the detection of portal vein invasion should be marked by the suffix “a” (e.g., P2a).

V: involvement of the IVC and/or hepatic veins

The same definitions of involvement (venous obstruction, encasement and/or invasion) used for the portal veins apply to the hepatic veins (Fig. 7). A hepatic vein can be assumed to be involved if it cannot be identified at all and its expected course runs through a large tumour mass. It is important to look carefully for the hepatic veins, preferably with ultrasonography as well as CT and/or MRI, as they may be displaced from their expected position by the tumour. Complete obstruction of the IVC can occur with mass effect alone, without any tumour extension to the vein itself. Inability to visualize the IVC, and the presence of an enlarged azygos vein, are not, therefore, sufficient criteria for involvement. Patients with no imaging evidence of involvement of the hepatic veins or IVC will be coded as V0.
As for the portal vein, the original classification of involvement (V+) indicated a very advanced level of disease. Intermediate categories have therefore been created. V1 and V2 indicate involvement of one or two main hepatic veins, respectively. V3 indicates involvement of either the IVC or all three of the hepatic veins. In addition, the detection of hepatic vein or IVC invasion should be marked by the suffix “a” (e.g., V2a). The presence of tumour in the right atrium automatically makes a patient V3a.

SIOPEL risk stratification for patients with hepatoblastoma

The SIOPEL risk stratification for children with hepatoblastoma is essentially unchanged by this revision. Patients with any one or more of certain criteria (Table 3) are high risk; all other SIOPEL patients are standard risk (see the illustrative sketch at the end of this report).

Presurgical re-evaluation

Although the timing of surgery will depend on the treatment protocol and the patient’s response to therapy, preoperative reimaging is almost always necessary. All of the PRETEXT categories should be reassessed after preoperative chemotherapy, as near as possible to the time of surgery, and recorded as POSTEXT (post-treatment extent of disease). Comparison of surgical findings with POSTEXT will allow prospective assessment of the accuracy of imaging techniques.
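Because the stratification reduces to a short list of criteria, it can be expressed compactly. The following is a minimal sketch of the Table 3 logic, with hypothetical field names (this is not SIOPEL trial software):

```python
# Minimal sketch of the SIOPEL risk stratification in Table 3.
HIGH_RISK_CODES = {"E1", "E1a", "E2", "E2a", "H1",
                   "N1", "N2", "P2", "P2a", "V3", "V3a"}

def siopel_risk(pretext_number, afp_ug_per_l, criteria):
    """criteria: set of additional-criteria codes, e.g. {"E0a", "F1", "M1p"}."""
    if afp_ug_per_l < 100 or pretext_number == 4:
        return "high"
    if any(code.startswith("M1") for code in criteria):  # M1, any site suffix
        return "high"
    if criteria & HIGH_RISK_CODES:
        return "high"
    return "standard"

print(siopel_risk(2, 250, {"E0", "F1", "M1p"}))              # -> high (lung metastases)
print(siopel_risk(3, 250, {"E0", "F0", "M0", "P1a", "V2"}))  # -> standard
```

Note that P1/P1a and V1/V2 (even with the “a” suffix) do not by themselves confer high risk under Table 3; only the most advanced venous categories (P2, V3) do.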
[ "staging", "liver", "tumour", "children", "hepatoblastoma" ]
[ "P", "P", "P", "P", "P" ]
Trans_R_Soc_Trop_Med_Hyg-1-5-1950430
Associations between mild-to-moderate anaemia in pregnancy and helminth, malaria and HIV infection in Entebbe, Uganda
1 Introduction

Anaemia in pregnancy contributes to maternal deaths and may also contribute to adverse birth outcomes including intrauterine growth retardation and prematurity, and hence to perinatal morbidity and mortality (WHO, 1991). Hookworm is an important cause of anaemia in developing countries, but lack of consensus regarding the risks and benefits of treating helminths in pregnancy has, until recently, led to a tendency to exclude pregnant and even breastfeeding women from deworming programmes. In 1994, a call was made for research leading to improved estimates of hookworm infection in women of child-bearing age and for evaluation of interventions that might be beneficial (WHO, 1994) but, to our knowledge, only one placebo-controlled trial of the treatment of hookworm in pregnancy has been reported (Torlesse and Hodges, 2000, 2001). More recently, an Informal Consultation held by the WHO gave consideration to the possible effects of schistosomiasis in pregnancy and it was suggested that anaemia might be among them (Allen et al., 2002; WHO, 2002). Although praziquantel had been widely avoided in pregnant and lactating women, it was noted that there was no evidence from studies in animals, from case reports or from mass treatment campaigns in humans of any major adverse effects. Treatment of schistosomiasis during pregnancy was therefore advocated by the Consultation Committee.

We have undertaken a trial designed to examine the effects of maternal helminths and of deworming during pregnancy on the response to immunisation and susceptibility to infection and disease in infancy [ISRCTN32849447] (Elliott et al., 2007). Anaemia is among the secondary outcomes of the trial. The trial is ongoing. In view of current interest in the role of helminths in anaemia during pregnancy, we have examined associations between anaemia and helminths and other major infections (malaria and HIV) among pregnant women at enrolment into the trial.

2 Materials and methods

2.1 Study population and procedures

The study area comprises Entebbe Municipality and the adjacent subcounty of Katabi. This area supports semi-urban, rural and fishing communities residing on the Entebbe peninsula in Lake Victoria, Uganda. Women were recruited at the antenatal clinic at Entebbe Hospital between April 2003 and November 2005. Women were assessed for screening at their first antenatal visit, thus initial screening could take place in any trimester of pregnancy. They were eligible for screening if they were well, resident in the study area, planning to deliver their baby at the hospital, willing to participate and willing to know their HIV status. On the screening day, after giving written informed consent, eligible women were interviewed regarding sociodemographic characteristics and risk factors for helminth infection, malaria and HIV and were examined by a midwife. A blood sample was obtained for investigations including haemoglobin (Hb) estimation, examination for microfilariae (mf) and malaria parasites, syphilis and HIV serology. Screened women were asked to return for enrolment within 1 month with a stool sample. They were excluded from enrolment if they had a Hb level <8 g/dl, clinically apparent severe liver disease, diarrhoea with blood in the stool, an abnormal pregnancy, a history of adverse reaction to anthelminthic drugs or had already participated in the study during an earlier pregnancy.
Women were enrolled in the trial when they returned with a stool sample if they were in the second or third trimester and full eligibility was confirmed. All women received routine antenatal care including haematinics and intermittent presumptive treatment for malaria using sulfadoxine/pyrimethamine. Women were treated for syphilis and provided with nevirapine for prevention of mother-to-child HIV transmission, if indicated. Women who were excluded from the study on grounds of severe anaemia were treated with albendazole and haematinics and referred for transfusion if required.

2.2 Haemoglobin estimation and definition of anaemia

Hb was estimated at the antenatal clinic using a colorimetric haemoglobinometer (DHT haemoglobin meter; Developing Health Technology, Barton Mills, UK) with same-day results. The same sample was then sent to the Medical Research Council/Uganda Virus Research Institute (MRC/UVRI) laboratories for analysis by a Coulter analyser (Beckman Coulter AC-T 5 diff CP; Beckman Coulter, Nyon, Switzerland). Quality control for the Coulter analyser was provided through the United Kingdom National External Quality Assessment Schemes, with consistently good results. Initial evaluation suggested that the haemoglobinometer results reliably matched the Coulter analyser results (intraclass correlation coefficient (ICC) = 0.81, 95% CI 0.74–0.88) and the immediately available haemoglobinometer results were used to determine enrolment. Evaluation of results to March 2005 showed less reliability (ICC = 0.54, 95% CI 0.51–0.58) and it was noted that 37 women had been enrolled with haemoglobinometer results >8 g/dl but Coulter analyser values below this cut-off; thereafter, Coulter analyser results were used for enrolment. Coulter analyser results have been used in this analysis. In accordance with WHO criteria, anaemia was defined as Hb < 11.2 g/dl, i.e. 0.2 g/dl above the standard cut-off of 11 g/dl to allow for the altitude in Entebbe (1132 m a.s.l.) (http://www.sph.emory.edu/~cdckms/hbadj2.html) (WHO, 1999a).

2.3 Parasitology

Examination of stool samples was performed as described previously (Bukusuba et al., 2004) using the Kato–Katz method (Katz et al., 1972) and charcoal culture for Strongyloides (Friend, 1996). Two Kato–Katz slides were prepared from each sample, each examined within 30 min for hookworm or on the following day for other parasites. Blood was examined for Mansonella by a modified Knott’s method (Melrose et al., 2000). Intensity of infection was assessed by egg counts in stool and mf counts in blood. Intensities were categorised as follows: hookworm: light <1000 eggs per gram of stool (epg), moderate 1000–3999 epg, high ≥4000 epg (WHO, 1994); Schistosoma mansoni: light <100 epg, moderate 100–399 epg, high ≥400 epg (WHO, 1999b); Trichuris trichiura: light <1000 epg, moderate 1000–9999 epg, high ≥10 000 epg (WHO, 1999b). No standard categories are available for Mansonella intensity, therefore arbitrary categories were defined to obtain approximately equal numbers of participants in each category: light <30 mf/ml, moderate 30–99 mf/ml, high ≥100 mf/ml.

2.4 HIV serology

HIV serology was performed using a rapid test algorithm with same-day results. Testing kits, provided by the Ministry of Health, varied with availability. Most commonly, Determine (Abbott Laboratories, Abbott Japan Co. Ltd., Tokyo, Japan) was used for screening, with positive results confirmed by Unigold (Trinity Biotech plc, Bray, Ireland).
Samples with differing results were referred for analysis by non-rapid ELISA tests or were examined using a ‘tie-breaker’ rapid test, usually Statpack (Chembio Diagnostic Systems, Medford, NY, USA). A proportion of samples, including all those with differing results by rapid test, were re-examined at the MRC/UVRI laboratories for quality control, with high agreement of the results.

2.5 Demographic information and potential risk factors for anaemia

Socioeconomic data were summarised by developing two indices based on the variables that appeared to describe socioeconomic status most usefully. These were the ‘woman's socioeconomic index’, comprising education, personal income and occupation, and the ‘household socioeconomic index’, comprising building materials, number of rooms and items collectively owned. The relationships between potential confounding factors and anaemia, helminths, malaria and HIV were considered, and a diagram describing hypothesised relationships was developed (Figure 1). Age, socioeconomic status and tribe were considered to be possible confounders, with gravidity of potential importance, particularly for malaria, to which primigravidae are known to be particularly susceptible.

2.6 Data management and statistical analysis

Data were entered using Microsoft Access (Microsoft Corp., Redmond, WA, USA) and analysed using STATA version 8 (Stata Corp., College Station, TX, USA). The initial analysis was based on a binary variable for outcome (anaemic/not anaemic) and categorical variables for exposure, with binary variables for exposure to helminths, malaria and HIV infection (infected/not infected). Effects of helminth infection were then investigated in more detail by examining categories of infection intensity (none, light, moderate or heavy). For these analyses, cross-tabulations were made between selected risk factors and the presence of anaemia. Logistic regression was used to estimate unadjusted and adjusted odds ratios (OR). Likelihood ratio tests were used to determine P-values. Continuous variables for anaemia (Hb level) and infection intensity (egg or parasite count) were then used to examine effects of intensity in more detail among infected participants, using linear regression. Adjustment was made for the same potential confounders in the logistic and linear regression models.

3 Results

A total of 15 035 women registered at the antenatal clinic during the recruitment period, of whom 11 783 were assessed for inclusion in the study and 3163 were considered eligible and screened. The commonest reasons for ineligibility were residence outside the study area (6243), unwillingness to have an HIV test (1186), unwillingness to join the study (874) and enrolment during an earlier pregnancy (115). Of the 3163 screened, 2515 were enrolled; 8 of these were subsequently excluded because they had been enrolled during a previous pregnancy. Of the 648 women screened but not enrolled, the majority (596) failed to return for enrolment and only 15 brought a stool sample. Since intestinal helminth infection detected by stool analysis was a focus of interest, this analysis was confined to the 2507 women who were enrolled in the trial and for whom all or almost all relevant data were available.

3.1 Characteristics of the study women

Maternal age ranged from 14 years to 47 years (mean 23.6 years). The majority were Baganda (49.1%), the predominant tribe of the district. Most (83.8%) were married, with 13.4% single, 0.6% widows and 2.3% divorced or separated.
Education varied from none (3.9%) to primary (50.5%), secondary (37.3%) and tertiary (8.4%), and most women were poor (85.1% with a personal income of less than £10 per month). Primigravidae comprised 27.7% of the women studied.

3.2 Prevalence of infections and anaemia

Complete data were obtained for Kato–Katz assays from 2498 women, for Strongyloides assays from 2485 women, for Mansonella from 2499 women and for malaria from 2459 women. The prevalence of hookworm was 44.5%, Mansonella perstans 21.3%, S. mansoni 18.3%, Strongyloides stercoralis 12.3%, T. trichiura 9.1%, Ascaris lumbricoides 2.3%, Trichostrongylus sp. 1.0%, Hymenolepis nana 0.2% and Loa loa <0.1%. Ova of Fasciola hepatica and Dicrocoelium dendriticum were found in samples from one and five women, respectively, but infection was not confirmed in follow-up samples and the eggs probably originated from liver or offal in the women's diet; detection of ova of these two species was not considered to indicate infection. The prevalence of asymptomatic Plasmodium falciparum malaria parasitaemia was 10.9% and of HIV infection 11.9%. The prevalence of anaemia (Hb < 11.2 g/dl) was 39.7%. Six women were enrolled in error with Hb < 8 g/dl by both methods (Coulter analyser results 5.6–7.9 g/dl). These, as well as the 37 women enrolled prior to March 2005 with haemoglobinometer results above but Coulter analyser results below 8 g/dl, have been retained in this analysis.

3.3 Associations between characteristics of pregnant women and anaemia in pregnancy

Relationships between characteristics of the participating women and anaemia are shown in Table 1. The prevalence of anaemia declined with age. Anaemia was associated with tribe, with Basoga most likely to be anaemic. Anaemia showed no association with the women's socioeconomic index, but low household socioeconomic status was associated with anaemia. Primigravidae were more likely to be anaemic than multigravidae.

3.4 Relationship between infections and anaemia in pregnancy

Relationships between infections and anaemia are shown in Table 2. None of the helminths showed an association with anaemia when considered using categorical variables for the presence or absence of infection. Hookworm showed a weak positive association, but this was reduced in the adjusted model. Malaria parasitaemia and HIV infection were strongly associated with anaemia; for malaria the effect was unchanged and for HIV it increased after adjusting for potential confounding factors. Malaria was also associated with HIV infection (OR 1.81, 95% CI 1.30–2.53; P = 0.001); the association between HIV and anaemia was reduced slightly, but not explained, when malaria was added to the model (adjusted OR (AOR), adjusted for age, tribe, socioeconomic index, gravidity and malaria: 2.27, 95% CI 1.74–2.97; P < 0.001). The likelihood of being anaemic was particularly high among women with both HIV infection and malaria compared with women with neither infection (AOR 6.50, 95% CI 3.27–12.95; P < 0.001). Comparing those with HIV only to those with neither gave an AOR of 2.30 (95% CI 1.73–3.05; P < 0.001) and comparing those with malaria only to those with neither gave an AOR of 3.09 (95% CI 2.28–4.20; P < 0.001). Hookworm and Mansonella showed positive associations with malaria parasitaemia, and hookworm showed a negative association with HIV infection, but adjusting for these infections had minimal effect on the observed associations between helminths and anaemia (data not shown).
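A quick multiplicative check on these joint estimates (our inference, not a calculation reported in the study): if HIV and malaria acted independently on the odds scale, the expected combined adjusted odds ratio would be

$$\mathrm{AOR}_{\mathrm{HIV+malaria}} \approx \mathrm{AOR}_{\mathrm{HIV}} \times \mathrm{AOR}_{\mathrm{malaria}} = 2.30 \times 3.09 \approx 7.1,$$

which is reasonably close to the observed 6.50 (95% CI 3.27–12.95), consistent with little or no interaction between the two infections.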
Attributable fractions for anaemia were 3.1% for hookworm, 12.3% for malaria and 10.2% for HIV infection.

3.5 Relationship between anaemia and infection intensity

Associations between infections and anaemia were further explored by examining infection intensity (Table 3). The prevalence of anaemia increased with each category of hookworm infection intensity. However, this trend was greatly weakened, showing no evidence of association after adjusting for age, tribe, socioeconomic index, gravidity, malaria and HIV. On the other hand, there was a small negative association between log10 hookworm egg count and Hb in hookworm-infected women (adjusted regression coefficient −0.23, 95% CI −0.38 to −0.09; P = 0.002): women with the highest hookworm intensity (approximately 10 000 epg) had, on average, a Hb level 0.69 g/dl lower than those with the lowest detectable egg counts (12 epg), since the coefficient acts over a log10 range of about 4 − 1.08 ≈ 2.9 (0.23 × 2.9 ≈ 0.7 g/dl). Anaemia prevalence was higher in women with heavy S. mansoni infection than in those with lower intensity infection or no infection, but the number of such women was small and the effect was not statistically significant. Infections with Trichuris were light in all but six of the infected women, therefore associations between anaemia and moderate-to-heavy Trichuris infections could not be examined. There was no evidence of an association between Hb and egg count for women infected with S. mansoni or Trichuris. There was a negative association between log10 malaria parasite count (parasites per 200 white blood cells) and Hb, but this was reduced in the adjusted model (crude regression coefficient −0.27, 95% CI −0.53 to −0.01, P = 0.039; adjusted regression coefficient −0.17, 95% CI −0.45 to 0.11, P = 0.206). Hb was not significantly associated with log10 CD4+ T-cell count among HIV-positive women (adjusted regression coefficient 0.51, 95% CI −0.17 to 1.19; P = 0.120).

3.6 Anaemia in women excluded from enrolment

Hb < 8 g/dl was an exclusion criterion. Among the 648 women who were screened but never enrolled, Hb < 8 g/dl was the reason given in 17 cases, but the Coulter analyser Hb was <8 g/dl for a further 37 women excluded for other reasons. The prevalence of anaemia was 281/648 (43.4%) among those excluded compared with 996/2507 (39.7%) among those enrolled (P = 0.093), giving an overall prevalence of 40.5% among all women screened. Since stool results were only available for 15 of the excluded women, associations with intestinal helminths could not be analysed. Of the 54 women excluded with Coulter Hb < 8 g/dl, two had stool results: both had hookworm infections (one light and one moderate intensity) and neither had schistosomiasis. A sensitivity analysis was performed assuming that all women excluded with Coulter Hb < 8 g/dl had hookworm. As expected, a slightly stronger association was obtained, but this was again reduced after adjusting for confounding factors (crude OR 1.29, 95% CI 1.10–1.51, P = 0.001; AOR adjusted for age, tribe, socioeconomic indices and gravidity 1.19, 95% CI 1.00–1.41, P = 0.047). Associations with malaria and HIV were similar in the excluded group to those in the enrolled group, with crude ORs of 3.73 (95% CI 2.23–6.23; P < 0.001) and 1.57 (95% CI 1.06–2.34; P = 0.026), respectively.

4 Discussion

This study suggests that, among pregnant women in Entebbe, Uganda, malaria and HIV are more important infectious causes of anaemia than helminths.
No association was observed between mild-to-moderate anaemia and any species of helminth, and a weak association between anaemia and increasing intensity of hookworm infection was reduced after adjusting for confounding factors. Anaemia was slightly more common among women heavily infected with S. mansoni, but the number of such women was small. In keeping with the exclusion of women with Hb < 8 g/dl, excluded women were more likely to be anaemic than enrolled women, but this had minimal impact on the overall estimate of the prevalence of anaemia (40.5%). Given the recognised ability of hookworm to cause anaemia (Bondevik et al., 2000; Hotez et al., 2004; Shulman et al., 1996), we conducted a sensitivity analysis to estimate the effect of hookworm if all women with Hb < 8 g/dl and no stool result had hookworm. The result was in keeping with a possible effect of hookworm, but after adjusting for confounding factors the effect was small. Associations between anaemia and malaria and between anaemia and HIV infection were similar in enrolled and excluded women.

Our investigations for intestinal helminths used only one stool sample from each woman, meaning that a proportion of women with low-intensity infections will have been misclassified as uninfected and that estimates of intensity will have been imprecise (Hall, 1981; Utzinger et al., 2001). Recent comparable studies have the same limitation (Ajanga et al., 2006; Bondevik et al., 2000; Dreyfuss et al., 2000; Larocque et al., 2005). In our study this may, again, be important for hookworm, where there was a small increase in anaemia with infection intensity, but it may not explain the lack of association for schistosomiasis, where there was no suggestion of an effect of light-to-moderate infections. Our method of investigating Mansonella infection and its intensity does not suffer from this limitation: repeat examinations among 1971 women showed 96% agreement for the binary variable (infected/uninfected), with an ICC for mf/ml of 0.87 (95% CI 0.85–0.88) (unpublished data).

Not all pregnant women in Entebbe attend the district hospital antenatal clinic, but a community survey undertaken in the study area showed an increase in the proportion choosing this clinic during the recruitment period, to approximately 80%, and most of the personal and socioeconomic characteristics of women choosing, or not choosing, to attend the district hospital clinic were similar (unpublished data). Thus, our results are likely to be reasonably representative of pregnant women in this area.

Recent studies from different settings give similar results in relation to the effects of hookworm and S. mansoni on anaemia in pregnancy. In Peru (where hookworm and Trichuris are predominant), in Tanzania (S. mansoni and hookworm), in the Democratic Republic of Congo (Ascaris and hookworm) and in Java (Trichuris and hookworm), no association was observed between anaemia and infection with any single species (Ajanga et al., 2006; Kalenga et al., 2003; Larocque et al., 2005; Nurdia et al., 2001). In Peru there was an association between anaemia and higher hookworm intensities, and in Java there was a negative association between serum ferritin and hookworm, suggesting an effect of hookworm on iron status but not anaemia; in Tanzania, in an area of higher S. mansoni prevalence and intensity, a strong association between anaemia and heavy S. mansoni infection was observed. Infection intensity thus appears to be important, with light-to-moderate hookworm or S.
mansoni infections having relatively weak effects on Hb levels. However, a second important factor is the underlying nutritional status of the women. In two studies in Nepal, among populations perhaps poorer and less well nourished than ours (as indicated by items owned, anthropometry and vitamin A status (Dreyfuss et al., 2000) and by a traditionally vegetarian diet (Bondevik et al., 2000)), anaemia showed a significant association with hookworm infection (Bondevik et al., 2000; Dreyfuss et al., 2000). The effects of hookworm infection are partially mediated by iron deficiency (Bondevik et al., 2000; Olsen et al., 1998) and, in the trial conducted in Sierra Leone, iron-folate supplements had a greater benefit for anaemia in pregnancy than treatment with albendazole (Torlesse and Hodges, 2001).

We found no suggestion of an association between anaemia and any other helminth species that was common in our environment (Mansonella, Trichuris or Strongyloides). This again is in agreement with other studies (Dreyfuss et al., 2000; Larocque et al., 2005; Nurdia et al., 2001). Larocque et al. (2005) noted a stronger effect of moderate-to-heavy hookworm infection when combined with moderate-to-heavy Trichuris infection. Although there was broad overlap between the confidence intervals for these effects in their analysis, such an effect is plausible, given evidence in children that heavy Trichuris infections (>10 000 epg) can be associated with anaemia (Ramdath et al., 1995). In our study, as in that reported by Nurdia et al. (2001), no such heavy Trichuris infections were observed.

The strong effects of malaria and HIV contrast with the weak effect of hookworm and the lack of effects of other helminths in this study. Of note, women were screened for this study when they were apparently healthy, thus both malaria and HIV infection were largely asymptomatic. The effects of these infections are not mediated by iron deficiency and may override any benefit of good nutrition. The importance of malaria as a cause of anaemia in pregnancy is well established (Shulman and Dorman, 2003). HIV infection is also a recognised cause of anaemia (Belperio and Rhew, 2004), and anaemia may be one mechanism by which it causes adverse birth outcomes (Dairo et al., 2005; McIntyre, 2003).

There are compelling reasons for preventing and treating malaria and HIV during pregnancy (Shulman and Dorman, 2003; ter Kuile et al., 2004). Our results highlight that anaemia is among them. On the other hand, our results as well as recent literature suggest that associations between helminth infections and anaemia in pregnancy are weaker, with regional variations that may be based on nutrition and intensity of helminth infection. These findings are relevant when estimating the relative disease burden of helminths and other infections and the relative value of possible interventions in pregnancy. Globally, the majority of helminth infections are of low intensity so, in some settings, the benefit of routine deworming during pregnancy in relation to anaemia may be modest. The effects of deworming during pregnancy on other parameters, including birth outcome, birth weight and long-term effects on health in infancy and childhood, also need to be considered (Christian et al., 2004). The forthcoming results of our ongoing trial of deworming in pregnancy are expected to contribute further to this debate.
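For readers implementing these exposure definitions, the sketch below collects the cut-offs quoted in Sections 2.2 and 2.3 into a single hypothetical helper (names and structure are ours; the study itself used STATA, not Python):

```python
# Cut-offs quoted in Sections 2.2-2.3 of this report.
ANAEMIA_CUTOFF_G_DL = 11.2  # WHO 11 g/dl + 0.2 for Entebbe's altitude (1132 m)

# (light_max, moderate_max): light < light_max, moderate < moderate_max,
# high >= moderate_max. Units: epg for stool parasites, mf/ml for Mansonella.
INTENSITY_BANDS = {
    "hookworm":     (1000, 4000),
    "S. mansoni":   (100, 400),
    "T. trichiura": (1000, 10000),
    "Mansonella":   (30, 100),   # arbitrary study-specific bands
}

def is_anaemic(hb_g_dl):
    return hb_g_dl < ANAEMIA_CUTOFF_G_DL

def intensity(species, count):
    if count == 0:
        return "uninfected"
    light_max, moderate_max = INTENSITY_BANDS[species]
    if count < light_max:
        return "light"
    return "moderate" if count < moderate_max else "high"

print(intensity("hookworm", 12))     # -> light
print(intensity("S. mansoni", 400))  # -> high
```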
Authors’ contributions

AME designed the study; JN and CA carried out interviews and recruited participants; MO and HM carried out clinical assessments; NO and DK carried out laboratory assessments; LM, PW and LAM analysed and interpreted the data; LM, PW, LAM and AME drafted the manuscript. All authors reviewed and approved the final manuscript. LM and AME are guarantors of the paper.

Funding

Wellcome Trust Career Post fellowship held by Dr Elliott, grant number 064693; PMTCT programme, Ministry of Health, Uganda.

Conflicts of interest

None declared.

Ethical approval

The Science and Ethics Committee, Uganda Virus Research Institute; the Uganda National Council for Science & Technology; and the London School of Hygiene & Tropical Medicine.
[ "anaemia", "pregnancy", "helminth", "malaria", "hiv", "uganda" ]
[ "P", "P", "P", "P", "P", "P" ]
Osteoporos_Int-3-1-1766476
A multidisciplinary, multifactorial intervention program reduces postoperative falls and injuries after femoral neck fracture
This study evaluates whether a postoperative multidisciplinary intervention program, including systematic assessment and treatment of fall risk factors and active prevention, detection, and treatment of postoperative complications, could reduce inpatient falls and fall-related injuries after a femoral neck fracture.

Introduction

Nearly all hip fractures occur as a result of a fall [1] and many patients fall again soon after sustaining the fracture [2]. Osteoporosis with low bone mineral density (BMD) puts older people who fall at high risk of sustaining fractures [3, 4]. A first hip fracture is associated with a 2.5-fold increased risk of a subsequent fracture [5]. A population-based study among people aged 85 years or older showed that 21% of those with a hip fracture had suffered at least two hip fractures [6]. Previous research has identified several fall risk factors, such as comorbidity, functional disability, previous falls, and use of drugs [7–11], but also aging [12, 13] and, among the oldest old, male sex [14]. Delirium, which is very common after hip fracture surgery, especially among those with cognitive decline [15, 16], has been found to be one of the most important risk factors for falls among older people [10].

Multifactorial intervention strategies can prevent falls among community-living older people [17–20] and are now the recommended approach to fall prevention [21]. Recommended programs should include gait training, advice on the use of assistive devices, medication reviews, exercise programs including balance training, treatment of hypotension, environmental modification, and treatment of cardiovascular disorders; in long-term care, the recommendations also include staff education [21]. Most fall prevention studies have been performed in the community, but multidisciplinary and multifactorial interventions have also been shown to be beneficial in residential care facilities [22]. Few fall prevention studies have been carried out in hospitals; there have been a few studies with single interventions among older patients in rehabilitation units, without any significant effects [23–25]. Recently two studies, one using multiple interventions [26] and one using a multidisciplinary fall prevention approach [27], have demonstrated a reduction in falls. None of these fall prevention studies focused on hip fracture patients or tried to reduce postoperative complications as a fall prevention measure. Given the lack of fall prevention studies in hospitals, especially after recent hip fracture surgery, this is an area of interest for study.

The aim of this study was thus to evaluate whether a postoperative multidisciplinary, multifactorial intervention program could reduce inpatient falls and fall-related injuries in patients with femoral neck fractures.

Methods

Recruitment and randomization

This study included patients with femoral neck fracture aged ≥70 years, consecutively admitted to the orthopedic department at Umeå University Hospital, Sweden, between May 2000 and December 2002, and was designed according to the CONSORT guidelines [28]. In Sweden different surgical methods are used depending on the displacement of the femoral neck fracture. In the present study, patients with undisplaced fractures were operated on using internal fixation (IF) and patients with displaced fractures were operated on using hemiarthroplasty (HAP).
If patients had severe rheumatoid arthritis, severe hip osteoarthritis, or a pathological fracture they were excluded, by the surgeon on duty, because of the need for a different surgical method, such as total hip arthroplasty (THA). Patients with severe renal failure were excluded, by the anesthesiologist, because of their morbidity. Patients who were bedridden before the fracture occurred were also excluded. In the emergency room the patients were asked, both in writing and orally, if they were willing to participate in the study. The next of kin was always asked prior to inclusion in patients with cognitive impairment. The patients or their next of kin could decline participation at any time.

A total of 258 patients met the inclusion criteria; 11 patients declined to participate and 48 patients were not invited to participate because they had sustained the fracture in the hospital or the inclusion routines failed (Fig. 1). These 59 patients were more likely to be men (p = 0.033) and to be living in their own house/apartment (p = 0.009), but there was no difference in age (p = 0.354) compared with the participating patients. The remaining 199 patients (Table 1) consented to participate. All patients received the same preoperative treatment.

Patients were randomized to postoperative care in a geriatric ward with a special intervention program or to conventional care in an orthopedic ward, using opaque sealed envelopes containing sequentially numbered lots. All participants received their envelope while in the emergency room, but it was not opened until immediately before surgery, to ensure that all patients received similar preoperative treatment. Persons not involved in the study performed these procedures. The randomization was stratified according to the operation methods used in the study. Depending on the degree of dislocation, the patients were treated with IF using two hook-pins (Swemac Ortopedica, Linköping, Sweden) (n = 38 intervention vs n = 31 control) or with bipolar hemiarthroplasty (Link, Hamburg, Germany) (n = 57 vs 54). Basocervical fractures (n = 7 vs 10) were operated on using a dynamic hip screw (DHS, Stratec Medical, Oberdorf, Switzerland). One patient had a resection of the femoral head due to a deterioration in medical status and one died before surgery (both were in the control group).

Fig. 1 Flow chart for the randomized trial.

Table 1 Basic characteristics and assessments during hospitalization among participants in the intervention and control groups.
SD standard deviation, ADL activity of daily living. Values are intervention vs control; entries are numbers of patients unless otherwise stated.

Sociodemographic
  Age, mean ± SD: 82.3 ± 6.6 vs 82.0 ± 5.9 (p = 0.724)
  Females: 74 vs 74 (p = 0.546)
  Independent living before the fracture: 66 vs 60 (p = 0.677)
Health and medical problems
  Stroke (n = 102/93): 29 vs 20 (p = 0.265)
  Dementia: 28 vs 36 (p = 0.145)
  Previous hip fracture (n = 102/96) [a]: 16 vs 14 (p = 0.829)
  Depression (n = 102/95): 33 vs 45 (p = 0.031)
  Diabetes (n = 102/95): 23 vs 17 (p = 0.417)
  Cardiovascular disease (n = 101/93): 57 vs 53 (p = 0.938)
Medications on admission
  Number of drugs, mean ± SD: 5.8 ± 3.8 vs 5.9 ± 3.6 (p = 0.867)
  Antidepressants: 29 vs 45 (p = 0.009)
Sensory impairments
  Impaired hearing (n = 94/82): 42 vs 34 (p = 0.667)
  Impaired vision (n = 91/74): 37 vs 27 (p = 0.584)
Functional performance before fracture
  Use of roller walker (n = 101/93): 56 vs 52 (p = 0.948)
  Use of wheelchair (n = 101/93): 23 vs 16 (p = 0.334)
  Previous falls, last month (n = 99/90) [b]: 24 vs 25 (p = 0.580)
  Walking independently, at least indoors (n = 101/94): 85 vs 85 (p = 0.191)
  Staircase of ADL, median (Q1, Q3) (n = 92/88): 5 (1–7.75) vs 5 (0.25–7) (p = 0.859)
Assessments during hospitalization
  Mini Mental State Examination, mean ± SD (n = 93/90): 17.4 ± 8.2 vs 15.7 ± 9.1 (p = 0.191)
  Organic Brain Syndrome Scale, mean ± SD (n = 94/90): 10.1 ± 10.8 vs 12.5 ± 11.4 (p = 0.148)
  Geriatric Depression Scale, mean ± SD (n = 81/68): 5.2 ± 3.6 vs 4.5 ± 3.5 (p = 0.271)

[a] Except for the present hip fracture. [b] Except for the fall that caused the hip fracture.

Intervention

The intervention ward was a geriatric unit specializing in geriatric orthopedic patients. The staff worked in teams to apply comprehensive geriatric assessments, management, and rehabilitation [29, 30]. Active prevention, detection, and treatment of postoperative complications such as falls, delirium, pain, and decubitus ulcers was systematically implemented daily during the hospitalization (Table 2). The staffing at the intervention ward was 1.07 nurses/aides per bed.

Table 2 Main content of the postoperative program and differences between the two groups.

Ward layout
  Intervention: Single and double rooms; 24-bed ward, extra beds when needed.
  Control: Single, double, and four-bed rooms; 27-bed ward, extra beds when needed. The geriatric control ward was similar to the intervention ward.
Staffing
  Intervention: 1.07 nurses/aides per bed; two full-time physiotherapists; two full-time occupational therapists; 0.2 dietician.
  Control: 1.01 nurses per bed; two full-time physiotherapists; 0.5 occupational therapist; no dietician. The geriatric control ward had staffing similar to the intervention ward.
Staff education
  Intervention: A 4-day course in caring, rehabilitation, teamwork, and medical knowledge, including sessions about how to prevent, detect, and treat various postoperative complications such as postoperative delirium and falls.
  Control: No specific education before or during the project.
Teamwork
  Intervention: The team included registered nurses (RN), licensed practical nurses (LPN), physiotherapists (PT), occupational therapists (OT), a dietician, and geriatricians, with close cooperation between orthopedic surgeons and geriatricians in the medical care of the patients.
  Control: No corresponding teamwork at the orthopedic unit. The geriatric ward, where some of the control group patients were cared for, used teamwork similar to that in the intervention ward.
Individual care planning
  Intervention: All team members assessed each patient as soon as possible, usually within 24 h, to be able to start the individual care planning; the team planned the patients’ individual rehabilitation process and goals twice a week.
  Control: Individual care planning was used in the orthopedic unit but not routinely as in the intervention ward. At the geriatric rehabilitation unit there was weekly individual care planning.
Prevention and treatment of complications
far as possible regarding how and why they sustained the hip fracture, through analyzing external and internal fall risk factorsNo routine analysis of why the patients had fractured their hipsAn action to prevent new falls and fractures was implemented including global ratings of the patients’ fall risk every week during team meetingsNo attempt was made to systematically prevent further fallsCalcium and vitamin D and other pharmacological treatments for osteoporosis were used when indicatedNo routine prescription of calcium and vitamin DActive prevention, detection, and treatment of postoperative complications such as delirium, pain, and decubitus ulcers was systematicAssessments for postoperative complications were made with check-ups for, i.e., saturation, hemoglobin, nutrition, bladder and bowel function, home situation etc., but these check-ups were not carried out systematically as in the intervention groupOxygen-enriched air during the 1st postoperative day and longer if necessary until the measured oxygen saturation was stableUrinary tract infections and other infections were screened for and treatedIf a urinary catheter was used it should be discontinued within 24 h postoperativelyRegular screening for urinary retention, and prevention and treatment of constipationBlood transfusion was prescribed if B-hemoglobin  <100 g/l and  <110 for those at risk of delirium or those already deliriousIf the patient slept badly, the reason was investigated and the aim was then to treat the causeNutritionFood and liquid registration was systematically performed and protein-enriched meals were served to all patients during the first 4 postoperative days and longer if necessaryA dietician was not available at the orthopedic unitNutritional and protein drinks were served every dayNo routine nutrition registration or protein-enriched meals were available for the patientsRehabilitationMobilization within the first 24 h after surgeryMobilization usually within the first 24 hThe training included both specific exercise and other rehabilitation procedures delivered by a PT and OT, as well as basic daily ADL performance training, by caring staff. The patients should always do as much as they could by themselves before they were helpedThe PT on the ward mobilized the patients together with the caring staff. The PT aimed to meet the lucid patients every day. Functional retraining in ADL situations was not always given. The OT at the orthopedic unit only met the patients for consultationThe rehabilitation was based on functional retraining with special focus on fall risk factorsThe geriatric control ward had both specific exercise and other rehabilitation procedures delivered by a PT and OT, similar to the intervention ward but did not systematically focus on fall risk factorsHome visit by an OT and/or a PTNo home visits were made by staff from the orthopedic unit The control ward was a specialist orthopedic unit following the conventional postoperative routines. A geriatric unit, specializing in general geriatric patients, was used for those who needed longer rehabilitation (n = 40). The staffing at the orthopedic unit was 1.01 nurses/aides per bed and 1.07 for the geriatric control ward. The main content of both the intervention program and the conventional care is described in Table 2. The staffs on the intervention and control wards were not aware of the nature of the present study. Data collection Two registered nurses were employed and performed the assessments during hospitalization. 
Medical, social, and functional data were collected from the patients, relatives, staff, and medical records on admission. Complications during hospitalization, including falls, length of stay, morbidity, and mortality, were systematically registered in the medical and nursing records. Nurses are obliged by law to document any falls in the records [31]. A fall was defined as an incident in which the patient unintentionally came to rest on the floor, and included syncopal falls. The number of falls and the time lapse to the first fall after admission were calculated. The Abbreviated Injury Scale (AIS) [32] was used to classify the injuries resulting from a fall. The maximum injury (MAIS) connected with each incident was recorded. A few days after surgery, patients were assessed and interviewed regarding their cognitive status using the Mini Mental State Examination (MMSE) [33]. The modified Organic Brain Syndrome Scale (OBS Scale) [34] was used to assess cognitive, perceptual, emotional, and personality characteristics as well as fluctuations in clinical state. Mental state changes were also documented from the medical records. Depression during hospitalization was diagnosed on the basis of current treatment with antidepressants, screening with the Geriatric Depression Scale (GDS-15) [35], and depressive symptoms observed and registered with the OBS Scale. The patients' vision and hearing were assessed by their ability to read 3-mm block letters with or without glasses, and their ability to hear a normal speaking voice from a distance of 1 m. Activities of daily living (ADL) prior to the fracture were measured retrospectively using the Staircase of ADL [36]. After the study was finished, a geriatrician who was unaware of study group allocation analyzed all assessments and documentation to complete the final diagnoses according to the same criteria for all patients. The Ethics Committee of the Faculty of Medicine at Umeå University approved the study (§ 00-137). Statistical analysis The sample size was calculated to detect a 50% reduction in the number of fallers between the intervention and control groups at a significance level of 0.050, based on our previous multifactorial fall intervention study in institutional care [22]. Student's t-test, Pearson's χ2 test, and the Mann-Whitney U test were performed to analyze group differences regarding basic characteristics and postoperative complications. Outcomes were analyzed on an intention-to-treat basis. The incidence of falls in the intervention and control groups was compared in three ways. First, an unadjusted comparison of the number of patients who fell and of injuries was made using Pearson's χ2 test and Fisher's exact test. Second, the fall incidence rate was compared between the intervention and control groups by calculating the fall incidence rate ratio (IRR) using a negative binomial regression, with adjustment for observation time and for overdispersion. Negative binomial regression (Nbreg) is a generalization of the Poisson regression model and is recommended for evaluating the efficacy of fall prevention programs [37]. Third, a Cox regression was used to compare the time lapse to the first fall between the groups (hazard rate ratio, HRR). The difference in fall risk between the groups was further illustrated by a Kaplan-Meier graph.
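These three comparisons can be reproduced with standard statistical software. The following Python sketch is illustrative only — the authors used SPSS 11.0 and STATA 9, and the data layout and column names here are hypothetical assumptions:

```python
# Illustrative sketch of the fall analyses (hypothetical data layout).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

df = pd.read_csv("falls.csv")  # one row per patient: group (1 = intervention),
                               # n_falls, days (observation time),
                               # time_to_fall, fell (1 = fell at least once)

# Crude incidence rates per 1,000 in-hospital days (cf. Table 3):
# 1000 * 18 / 2860 ≈ 6.29 and 1000 * 60 / 3685 ≈ 16.28.
rates = 1000 * df.groupby("group")["n_falls"].sum() / \
        df.groupby("group")["days"].sum()

# Fall incidence rate ratio (IRR) from a negative binomial model with
# observation time entered as exposure (log offset). Note that the GLM
# family uses a fixed dispersion parameter, whereas STATA's nbreg estimates it.
nb = sm.GLM(df["n_falls"], sm.add_constant(df["group"]),
            family=sm.families.NegativeBinomial(),
            exposure=df["days"]).fit()
print(np.exp(nb.params["group"]))  # IRR for the intervention group; reported: 0.38

# Time lapse to first fall: Cox regression (hazard rate ratio; reported: 0.41).
cph = CoxPHFitter()
cph.fit(df[["time_to_fall", "fell", "group"]],
        duration_col="time_to_fall", event_col="fell")
cph.print_summary()
```

The Kaplan-Meier curve in Fig. 2 can be drawn from the same duration and event columns.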
Basic characteristics that differed between the intervention and the control groups at a p value <0.150 (depression, antidepressants, and dementia; Table 1) were considered as covariates in the negative binomial (Nbreg) and Cox regression models. However, the inclusion of these variables had only marginal effects on the log-likelihood values of the models as well as on the IRR and HRR values and standard errors for the group allocation variable (intervention or control). In addition, none of the variables showed significant effects on the dependent variable, and they were therefore not included in the Nbreg and Cox regression analyses. Pearson's χ2 test and Fisher's exact test were also used to analyze the associations between falls and days with delirium in the two groups. All calculations were carried out using SPSS v 11.0 and STATA 9 statistical software for Macintosh. A p value <0.050 was considered statistically significant. Results During hospitalization, 12 patients in the intervention group sustained 18 falls (range: 1–3 per faller), whereas in the control group 26 patients sustained 60 falls (30 falls in the orthopedic unit and 30 in the geriatric control unit; range: 1–11 per faller). Among patients with dementia, 1 patient sustained a single fall in the intervention group, whereas 11 patients sustained 34 falls in the control group (Table 3).
Table 3 Falls during hospitalization. CI confidence interval, IRR incidence rate ratio. Values are given as intervention (n = 102) | control (n = 97) | p value.
- Number of falls: 18 | 60
- Postoperative in-hospital days: 2,860 | 3,685
- Crude fall incidence rate (number of falls/1,000 days): 6.29 | 16.28
- IRR (95% CI): 0.38 (0.20–0.76)a | 1.00 (Ref.) | 0.006
- Number of fallers: 12 | 26 | 0.007
- Number of fallers with injuries due to falls: 3 | 15 | 0.002
- Number of fallers with fractures due to falls: 0 | 4 | 0.055
- Number of falls among people with dementia: 1 | 34
- IRR (95% CI) among people with dementia: 0.07 (0.01–0.57)a | 1.00 (Ref.) | 0.013
- Number of fallers among people with dementia (n = 28/36): 1 | 11 | 0.006
a Negative binomial regression analyses adjusted for overdispersion and controlled for dementia, depression, and use of antidepressants
The crude postoperative fall incidence rate was 6.29/1,000 days in the intervention group vs 16.28/1,000 days in the control group. Using negative binomial regression, the fall incidence was significantly lower in the intervention group, IRR 0.38 (95% CI: 0.20–0.76, p = 0.006), and among patients with dementia, IRR 0.07 (95% CI: 0.01–0.57, p = 0.013) (Table 3). In Fig. 2, a Kaplan-Meier survival analysis of the time lapse to the first fall illustrates the difference between the two groups, with a significantly reduced fall rate in the intervention group (log-rank p value 0.008). Fig. 2 Kaplan-Meier survival graph. The difference in fall risk, expressed as time lapse to first fall, was compared between the intervention and control groups with a Cox regression (HRR). Including all patients in the calculation, the fall risk was significantly lower in the intervention group, HRR 0.41 (95% CI: 0.20–0.82, p = 0.012). In total, according to the AIS there were 3 minor or moderate injuries (MAIS 1–2) in the intervention group compared with 15 in the control group. The serious injuries (MAIS 3) were new fractures, of which four occurred in the control group (two hip fractures, one rib fracture with pneumothorax, and one case of multiple skull fractures) and none in the intervention group (Fisher's exact test: p = 0.055).
Three of the patients who fell in the intervention group (25%) and 12 in the control group (46%) fell on a day when they were delirious (p = 0.294). Analyzing the number of falls revealed that 4 of 18 (22%) falls in the intervention group and 27 of 60 (45%) in the control group occurred on a day when the patient was delirious (p = 0.083). Apart from the falls, there were fewer other postoperative complications in the intervention group, with fewer patients with postoperative delirium (p = 0.003) and fewer delirious days (p ≤ 0.001), urinary tract infections (p = 0.005), sleeping disturbances (p = 0.009), nutritional problems (p = 0.038), and decubitus ulcers (p = 0.010). The postoperative in-hospital stay was shorter in the intervention group, 28.0 ± 17.9 days vs 38.0 ± 40.6 days, p = 0.028. Among the ten patients with the longest postoperative in-hospital stays in the control group, eight had fallen at least once and two had sustained new fractures. Discussion The present study shows that the number of falls and the time lapse to the first fall can be reduced during in-hospital rehabilitation after a femoral neck fracture. A multidisciplinary, multifactorial geriatric care program with systematic assessment and treatment of fall risk factors, as well as active prevention, detection, and treatment of other postoperative complications, resulted in fewer patients who fell, a lower total number of falls, and fewer injuries. To our knowledge this is the first fall intervention study in this group of patients, despite their high fall risk. In general, there are few fall prevention studies in hospital settings. Two studies [26, 27], with positive outcomes in other patient groups and on subacute wards, have recently been published. The first [26] reduced falls at three subacute rehabilitation wards, but the differences were most obvious after 45 days of observation; thus the results are not comparable with those from the present study, which included both the acute and the rehabilitation hospital stay. The other study [27] resulted in fewer fallers, falls, and injuries on a geriatric ward, but the differences disappeared when the results were adjusted for observation time. Those studies used a multidisciplinary approach in their fall interventions similar to that used in the present study, but in the present study we additionally focused on inpatient complications associated with falls, such as delirium and urinary tract infections. One of those studies [27] tried to manage delirious patients using bedrails, alarms, and changes to the furniture arrangements, but no mention was made of any prevention and treatment of the underlying causes of delirium. The use of physical restraints was not included in the intervention program in the present study. The studies above used fall risk assessment tools to recognize those with a high fall risk. In the present study, we used a rehabilitation and care program including assessment of risk factors for falls and global ratings for each patient during team meetings. A critique of fall risk assessment tools is that few have been tested for validity and reliability in a new, independent sample, and when fall prediction tools are used in different clinical settings their specificity decreases [38]. A limitation of the present study is that some falls could have been missed, but we presume that there were very few. For one thing, the nurses are obliged to document falls in the records.
For another, hip fracture surgery patients can hardly get up by themselves after a fall so soon after surgery and are therefore bound to be noticed; any falls that were nonetheless missed would probably not have differed between the groups. Another limitation is that the fall registration could not be blinded with regard to group allocation, but the staff on each ward were not aware of the comparison with another ward regarding falls and injuries. The study sample is also quite small, but the sample size was calculated according to the results of a previous study [22]. The method of concealment could have been improved, but one strength was that no one from the research team performed this procedure and the envelopes were not opened until the intervention was to begin. Other strengths were the intention-to-treat analyses, the few patients who refused to participate, and the absence of crossover effects due to staff changing wards during the study period. One may speculate that the successful reduction in the number of falls in the present study is a result of the active prevention, detection, and treatment of postoperative complications after surgery. During the period of hospitalization there were differences between the groups regarding some complications associated with falls among older people in residential care facilities and in hospitals, such as delirium and urinary tract infections. The reduction in postoperative delirium can probably explain much of the difference between the groups regarding the number of falls and the number of patients who fell. Studies have found that delirium is an important risk factor for falls [10]. Patients with dementia in particular are at high risk of developing delirium when treated for femoral neck fractures [15, 16], and in this study these patients seemed to have benefited most from the intervention program regarding prevention of postoperative falls. Our findings support an earlier non-randomized study in which fewer injurious falls occurred when the incidence and duration of delirium were reduced [39]. The investigation into why the patients had fractured their hips and why they fell may also have influenced the result, as may the investigation and rehabilitation concerning external fall risk factors such as the use of walking aids, safe transfers, balance, and mobility. It seems that teamwork and individual care planning alone do not have the same effect on falls, as half the falls in the control group occurred in the geriatric control ward, a ward specializing in geriatric patients where teamwork as well as individual care planning is applied. In the community and in residential care facilities, interdisciplinary and multifactorial fall prevention studies have shown positive effects on the reduction of falls and injuries [19, 22]. Among those with cognitive decline or dementia there is no evidence that such strategies prevent falls [40, 41], but the present study allows the conclusion that, at least during the in-hospital stay, this group of patients can benefit from such strategies. The reduced number of falls and injuries probably also contributed to the shorter hospitalization seen in the intervention group. The program seems easily applicable both in acute postoperative care and in post-acute rehabilitation settings, and apart from the staff education there were no increased costs.
Conclusion A team applying comprehensive geriatric assessment and rehabilitation, including prevention, detection, and treatment of fall risk factors, can successfully prevent inpatient falls and injuries, even in patients with dementia.
[ "intervention", "hip fracture", "in-hospital", "accidental falls", "elderly" ]
[ "P", "P", "P", "M", "U" ]
Anal_Bioanal_Chem-3-1-1592466
Capillary-assembled microchip as an on-line deproteinization device for capillary electrophoresis
A capillary-assembled microchip (CAs-CHIP), prepared by simply embedding square capillaries in a lattice polydimethylsiloxane (PDMS) channel plate with the same channel dimensions as the outer dimensions of the square capillaries, has been used as a diffusion-based pretreatment attachment for capillary electrophoresis (CE). Because the CAs-CHIP employs square-section channels, diffusion-based separation of small molecules from sample solutions containing proteins is possible by using the multilayer flow formed in the square-section channel. When a solution containing high-molecular-weight and low-molecular-weight species makes contact with a buffer solution, the low-molecular-weight species, which have larger diffusion coefficients than the high-molecular-weight species, can be collected in the buffer-solution phase. The collected solution containing the low-molecular-weight species is introduced into the separation capillary to be analyzed by CE. This type of system can be used for CE analysis in which pretreatment is required to remove proteins. In this work a fluorescently labeled protein and rhodamine-based molecules were chosen as model species and a feasibility study was performed. Introduction The on-line combination of sample pretreatment with electrophoretic separations, for example capillary electrophoresis (CE) and microchip-based capillary electrophoresis (MCE), has attracted the attention of many researchers because of the possibility of integrating complicated pretreatment procedures. During the past decade a variety of approaches have been used to combine pretreatment with CE separations. The “single capillary approach”, involving position-selective immobilization of functional molecules or polymers inside a capillary, has been used to demonstrate preconcentration or enzymatic reaction before CE separation [1, 2], and in-capillary preconcentration during the electrophoretic process [3–8] has provided a simple system enabling direct application of the capillary in commercial equipment. The “microchip-based approach”, recently reviewed by many authors [9–11], has enabled integration of a variety of complicated pretreatment processes with the separation channel on a single microchip. These approaches have different advantageous features for each purpose. From the standpoint of total system design, however, a “combination approach” involving the use of a flow system or a functionalized capillary together with a separation capillary provides a more flexible system that can be designed for a particular analyte. Ye et al. reported connection of a trypsin-immobilized capillary with a separation capillary, by means of a home-built chip prepared by lamination of Lexan and Parafilm, and demonstrated on-line digestion and CE separation [12]. Kuban et al. and Fang et al. reported “flow-injection capillary electrophoresis (FI-CE)”, comprising a flow-injection system and a separation capillary [13, 14]. This system enabled various pretreatment processes to be combined with CE separation and has since been expanded to the use of MCE to achieve rapid separations. These systems have been reviewed by Chen et al. [15]. Vizioli et al. prepared a monolithic polymer capillary for selection of histidine-containing peptides and connected it to a CE separation capillary [16]. Fan et al. recently reported combination of an FI-CE system with in-capillary preconcentration using a dynamic pH junction [17].
In all the examples mentioned above, the combination of a flow-reaction system or chemically functionalized conduits with a separation capillary played an important role, and such combinations can provide attractive integrated devices enabling flexible designs that combine pretreatment and CE separation. In contrast, we have recently reported a new concept for fabricating a chemically functionalized microchip, called a “capillary-assembled microchip (CAs-CHIP)” [18]. The microchip is fabricated simply by embedding chemically functionalized square capillaries in a lattice PDMS channel chip which has the same channel dimensions as the outer dimensions of the square capillaries. By using this technique, we have constructed a multi-ion sensing system based on the combination of several different ion-sensing capillaries, and a valving and sensing system based on thermo-responsive polymer-immobilized capillaries and an enzyme-immobilized capillary [19, 20]. Because the CAs-CHIP enables easy connection of different functional capillaries, we focused on this device as an on-line sample-pretreatment attachment for CE separation capillaries, in a manner similar to the previously reported FI-CE system. In this instance the CAs-CHIP, prepared from a PDMS chip, served both as the device for pretreatment of the sample solution, by use of a variety of chemical processes, and as the injection device. Interfacing a flow system and a separation capillary using a PDMS chip has already been reported by Bergstrom et al. [21, 22]. Because the CAs-CHIP employs a square-shaped channel structure, however, the characteristic multilayer flow formed in the square-section channel can also be exploited as an integrated pretreatment process. The CAs-CHIP is therefore expected to enable the integration of universal pretreatment processes by using different kinds of chemically functionalized capillary together with multilayer-flow-based chemical processes; this is usually difficult to achieve using other reported techniques. In this paper, we report preliminary results concerning the preparation of a CAs-CHIP as a deproteinization attachment for CE separation. Deproteinization was achieved by use of the multilayer flow obtained in the PDMS microchannel, and the small molecules separated from the mixed protein sample were injected into the separation capillary connected directly to the CAs-CHIP, to be analyzed by CE. In this work a fluorescently labeled protein and rhodamine-based molecules were chosen as model species, and a feasibility study was performed. Experimental Square capillaries and reagents Square capillaries of 300 μm outer width (flat-to-flat) and 50 μm inner width were purchased from Polymicro (Phoenix, AZ, USA). Before use, the polyimide coating of these capillaries was removed by heating. Sylgard 184 silicone elastomer was purchased from Dow Corning (Midland, MI, USA). Reagents of the highest grade commercially available were used to prepare the aqueous solutions. Rhodamine B (RB), sulforhodamine (SR), and bovine serum albumin fluorescein conjugate (F-BSA) were purchased from Sigma (St Louis, MO, USA). All reagents were used without further purification. Distilled and deionized water had a resistivity greater than 1.7 × 10⁷ Ω cm at 25°C. Fabrication of the CAs-CHIP-CE system by embedding square capillaries in a PDMS plate and bonding of a PDMS cover Figure 1 shows the design of the CAs-CHIP-CE system. The PDMS plate for pretreatment was connected to the inlet of the separation capillary (left side of Fig.
1) and another PDMS plate was used to fix the detection position (right side of Fig. 1). Square capillaries were cut into appropriate lengths and embedded in the lattice microchannel network fabricated on the PDMS plate. The general procedure for fabrication of a lattice microchannel on a PDMS plate has been reported elsewhere [18]. Briefly, a glass mold with a lattice structure was prepared by cutting grooves 300 μm deep at a 1-mm pitch, using a dicing saw with an edge 300 μm wide. The conventional PDMS molding process using the glass mold was then performed to prepare a PDMS mold. A second molding process using this PDMS mold gave the lattice microchannel network on the second PDMS plate. Plugged capillaries were prepared by introducing PDMS prepolymer into square capillaries (inner width 50 μm) and curing at 70°C for more than 5 h. These plugged capillaries were also cut and used to prepare the designed channel network. After embedding all the capillaries, a PDMS cover was bonded on top. For this, PDMS prepolymer spin-coated on an acrylic plate (ca. 2 mm thick) was used as the cover plate, to fill the voids formed between the cover plate and the capillary-embedded PDMS plate [18]. The PDMS prepolymer was spin-coated on the acrylic plate at 7000 rpm, and the plate was then attached to the capillary-embedded PDMS plate before curing. Bonding was carried out by curing at 60°C for 12 h. Figure 2 shows the final CAs-CHIP-CE system with a 12-cm separation capillary. Depending on the resolution required, 12-cm (effective length 10 cm) or 48-cm (effective length 33 cm) separation capillaries were used. To apply the high voltage, two plastic vials serving as solution reservoirs were connected at the outlets of the PDMS channel and the separation capillary. Fig. 1 General concept for fabricating a diffusion-based pretreatment–CE separation system using a capillary-assembled microchip (CAs-CHIP). The plugged capillaries indicated as gray parts are actually square capillaries with 50-μm square-shaped conduits blocked with PDMS; for simplicity, these conduits are not shown. Fig. 2 Photograph of a fabricated CAs-CHIP-CE device. Operating procedures Channels (or capillaries) of the CAs-CHIP-CE system were washed sequentially with methanol, 0.1 mol L−1 NaOH, 0.1 mol L−1 HCl, and water, then rinsed and preconditioned with 50 mmol L−1 phosphate-buffer solution (PBS) at pH 8.1. Sample solution (RB, SR: 0.5 mmol L−1; F-BSA: 4 mg mL−1) and the two buffer solutions for diffusion-based separation and sample injection were introduced by use of syringe pumps at appropriate flow rates (see figure captions). Sample injection was achieved by on–off switching of the syringe pump delivering buffer 2, shown in Fig. 1. Immediately after injection, the high voltage was applied manually by use of a standard high-voltage power supply (Matsusada Precision, Shiga, Japan). Capture of fluorescence images and laser-induced fluorescence measurement Fluorescence images of the microchannel were obtained by using an optical/fluorescence inverted microscope (Eclipse TS100-F, Nikon, Tokyo, Japan). Photographs were captured using a 3CCD color camera (HV-D28S, Hitachi Kokusai Electric, Tokyo, Japan) installed at the front port of the microscope. Fluorescence images were collected using a mercury lamp as the light source and a filter block (G-2A and FITC, Nikon, Tokyo, Japan). CE with laser-induced fluorescence (LIF) detection was performed on a home-built system based on an inverted fluorescence microscope (IX70, Olympus, Tokyo, Japan).
Light at 488 nm from an argon ion laser (Newport Spectra Physics Laser Division, Mountain View, CA, USA) was introduced into the microscope. The laser beam was filtered through a 460–490 nm band-pass filter, reflected by a 510 nm dichroic mirror, and then focused on the detection point by means of a 20× objective lens. Fluorescence was collected by use of the same objective lens, filtered through a 515 nm high-pass filter, and finally detected by use of a CCD camera (Model PMA-11, Hamamatsu Photonics, Shizuoka, Japan). To obtain electropherograms, fluorescence at 600 nm was used throughout. Results and discussion Optimization of diffusion-based deproteinization Diffusion-based separation of chemical species by use of multilayer flow was first reported by Brody et al. [23], who separated small molecules from a particle-containing sample, and proteins from a sample containing biological cells [23, 24]. Here we used this technique to separate small molecules from a mixed protein sample. When the sample solution containing small molecules and proteins makes contact with the buffer-solution flow, the small molecules, which have larger diffusion coefficients than the proteins, diffuse into the buffer solution. In this study, RB and SR were used as models for small molecules and F-BSA as the model for a large molecule. Figure 3 shows the fluorescence images at the confluence point of the two flows and at the injection point. Fluorescence images for F-BSA and the rhodamine molecules were captured using different optical filters. In this case, buffer 2 was not flowing. As can be seen, when the total flow rate of sample and buffer was very high, neither F-BSA nor the rhodamine molecules could reach the separation capillary (Fig. 3a). In contrast, selection of an appropriate total flow rate resulted in selective diffusion of the small rhodamine molecules with negligibly small protein diffusion (Fig. 3b). Figure 3b also shows the fluorescence intensity profiles obtained by analysis of the fluorescence images for the rhodamine derivatives. According to these, the fluorescence intensity of the rhodamine-based molecules at the injection point was approximately 40% of the initial fluorescence intensity at the confluence point (see the fluorescence intensity at position y at the injection point shown in the fluorescence intensity profile). Because complete diffusion gives 50% signal intensity for the rhodamine-based molecules, these results suggest that diffusion of the rhodamine-based molecules was almost complete. In our experiments, surface adsorption of the solutes onto the channel walls occurred because of the hydrophobicity of the PDMS surface. Because the samples were flowing continuously for deproteinization, however, surface adsorption did not seriously affect the deproteinization procedure when sample injection was performed after steady flow had been achieved. Fig. 3 Fluorescence images obtained at the confluence and injection points at different flow rates. (a) Total flow rate 10 μL min−1 (sample 5 μL min−1, buffer 1 5 μL min−1). (b) Total flow rate 1 μL min−1 (sample 0.5 μL min−1, buffer 1 0.5 μL min−1). In (b), fluorescence intensity profiles obtained by analysis of the fluorescence images obtained for the rhodamine derivatives (SR, RB) are also shown. F-BSA and rhodamine-based molecules were detected by use of different optical filters. The molecular weights of RB and SR are 443 and 581, respectively, and that of F-BSA is approximately 73,000 [25].
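These size differences translate directly into diffusion time scales, which is why the total flow rate matters so much in Fig. 3. As a rough order-of-magnitude check — a sketch under stated assumptions, not the authors' calculation — the characteristic time to diffuse a distance x is t ≈ x²/2D; taking the literature diffusion coefficients quoted below and assuming a 10-mm sample/buffer contact length (a hypothetical value):

```python
# Order-of-magnitude check of the diffusion-based deproteinization.
# Assumed: 10-mm contact length; sucrose- and HSA-like diffusion coefficients.
Q = 2e-9 / 60             # total flow rate: 2 uL/min converted to m^3/s
A = (300e-6) ** 2         # cross-section of the 300-um square PDMS channel, m^2
v = Q / A                 # mean linear velocity, ~3.7e-4 m/s (paper: ca. 0.36 mm/s)
t_res = 10e-3 / v         # residence time over the assumed contact length, ~27 s

x = 150e-6                # half the channel width, m
t_small = x**2 / (2 * 5.2e-10)    # ~22 s for a sucrose-sized molecule
t_protein = x**2 / (2 * 6.1e-11)  # ~180 s for an HSA-sized protein
print(f"v = {v*1e3:.2f} mm/s, t_res = {t_res:.0f} s")
print(f"t_small = {t_small:.0f} s, t_protein = {t_protein:.0f} s")
```

On these numbers the rhodamines have time to spread across the buffer stream while F-BSA remains largely confined to the sample stream; a tenfold higher total flow rate cuts the residence time to a few seconds, consistent with the flow-rate dependence observed in Fig. 3.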
According to the literature, the diffusion coefficient of a small molecule with a molecular weight of 342 (sucrose) is 5.2 × 10⁻⁶ cm² s⁻¹ and that of a large molecule with a molecular weight of 69,000 (HSA) is 6.1 × 10⁻⁷ cm² s⁻¹ [26, 27]. Because these molecular weights are very close to those of the molecules used in this study, the approximately tenfold difference between the diffusion coefficients of the small and large molecules enabled successful separation of small molecules from the mixed protein sample solution. Although the difference between diffusion coefficients is the primary principle of diffusion-based separation, it should be noted that the optimum conditions may change depending on the concentration ratio of the protein and the small molecules. Careful optimization experiments may therefore be required when the concentrations of the small molecules are very low. Sample injection and electrophoretic separation Sample injection was performed manually by on–off switching of the syringe pump delivering buffer 2. In the diffusion-based separation and CE separation modes, a three-layer flow composed of sample solution, collection buffer (buffer 1), and buffer 2 forms in the 300-μm square PDMS channel along the buffer 2 flow channel, as illustrated in Fig. 1. Because buffer 2 prevents introduction of sample solution into the separation capillary, on–off switching of the syringe pump (buffer 2) enables sample injection similar to the “gated injection” reported by Jacobson et al. [28]. Usually, stopping the syringe pump does not immediately stop the flow of buffer 2; in this work it therefore took several seconds for injection to start, which causes diffusion of molecules at the front and rear ends of the sample plug. When sequential sample injections with injection times of 15 s were performed, however, fairly good reproducibility of 6.5% RSD (peak height; n = 5) was obtained. Although this method of injection needs to be improved, the good reproducibility led us to use this simple method for further investigation. Figure 4 shows the detection signals obtained 10 cm downstream of the injection point with and without voltage application. When the sample plug was introduced without voltage application, ca. 160 s was required for detection, corresponding to a linear flow rate of 0.63 mm s−1. The approximate linear flow rate of the pressure-driven flow inside the separation capillary can be estimated from the total flow rate of the sample and buffer flows (in total 2 μL min−1) and the ratio of the cross-sectional areas of the PDMS channel (300 μm square) and the separation capillary (50 μm square). Under our experimental conditions the calculated linear flow rate was ca. 0.36 mm s−1. Although the calculated value was of the same order as the experimental value, the difference may be because of a slight difference between the back pressures of the two waste reservoirs. When a high voltage of 5 kV (417 V cm−1) was applied just after injection, the time required for detection was reduced to ca. 70 s and slight separation was observed. Because the net charge of SR was −1 and that of RB was 0, the migration time of SR was slightly longer than that of RB. These results confirmed that electroosmotic flow was induced and that separation by electrophoresis occurred on voltage application. Fig. 4 Detection of rhodamine-based molecules (RB and SR) downstream of the separation capillary with and without application of high voltage. Flow rates: sample, 1 μL min−1; buffer 2, 1 μL min−1.
Injection time: 20 s. Deproteinization and electrophoretic separation On the basis of these experiments, deproteinization and electrophoretic separation were performed. Figure 5 shows the electropherograms obtained downstream of the separation capillary with and without diffusion-based deproteinization. When the sample solution containing RB, SR, and F-BSA was introduced into the separation capillary without deproteinization, a broad signal arising from F-BSA was observed after the appearance of the RB and SR peaks (Fig. 5a). This broad signal may be because of the protein adsorption frequently observed in CE, or because of the multiple labeling of the F-BSA used in this work [29]. Commercially available F-BSA contains 7–12 labeling molecules (on average) per BSA molecule; the different electrophoretic mobilities of BSA species with different numbers of labels may therefore cause the broad signal shown in Fig. 5a. In contrast, the electropherogram obtained after deproteinization by use of the multilayer flow had a completely flat baseline after the appearance of the two peaks of the rhodamine molecules (Fig. 5b). Thus our system enabled successful diffusion-based deproteinization and CE separation. Although interference from the protein was successfully removed, the resolution of RB and SR was still not good in Fig. 5b (resolution: R = 1.4). When the longer, 48-cm separation capillary (effective length: 33 cm) was connected to the CAs-CHIP instead of the 12-cm capillary, however, baseline separation of these species after deproteinization was successfully achieved (resolution: R = 5.7). In addition, the relative standard deviations for five sequential measurements were 11.4 and 11.5% (peak height) and 1.5 and 1.4% (migration time) for RB and SR, respectively, indicating that reliable analysis with deproteinization pretreatment was successfully achieved by use of the longer separation capillary. Fig. 5 Electropherograms obtained downstream of the separation capillary with and without diffusion-based deproteinization. Flow rates: sample, 0.5 μL min−1; buffer 1, 0.5 μL min−1; buffer 2, 1 μL min−1. Injection time 15 s. Applied voltage 5 kV (ca. 417 V cm−1). Conclusions We have demonstrated deproteinization and CE separation on a single device using CAs-CHIP technology. Diffusion-based deproteinization in a CAs-CHIP was successfully achieved by choosing an appropriate flow rate, and subsequent CE separation of the small molecules was achieved by use of a separation capillary connected directly to the CAs-CHIP. Because diffusion-based separation, in general, dilutes the sample solution, preconcentration before CE separation will be required as a next step. Because the CAs-CHIP is prepared by simply embedding square capillaries, however, further integration of a preconcentration process using chemically functionalized capillaries, or the on-line preconcentration techniques reported to date, is also possible. These applications are currently under investigation.
[ "capillary-assembled microchip", "deproteinization", "capillary electrophoresis", "square capillary", "polydimethylsiloxane" ]
[ "P", "P", "P", "P", "P" ]
Bioinformation-1-1-1891626
AVATAR: A database for genome-wide alternative splicing event detection using large scale ESTs and mRNAs
In recent years, identification of alternative splicing (AS) variants has been gaining momentum. We developed AVATAR, a database for documenting AS using 5,469,433 human EST sequences and 26,159 human mRNA sequences. AVATAR contains 12,000 alternative splicing sites identified by mapping ESTs and mRNAs onto the whole human genome sequence. AVATAR also contains AS information for 6 eukaryotes. We mapped the EST alignment information onto a graph model in which exons and introns are represented by vertices and edges, respectively. AVATAR can be queried using, (1) gene names, (2) the number of identified AS events in a gene, (3) the minimal number of ESTs supporting a splicing site, etc. as search parameters. The system provides visualized AS information for queried genes. Background Alternative splicing (AS) is an important mechanism for functional diversity in eukaryotic cells. AS allows the processing of one pre-mRNA into different transcripts in a cell type, resulting in protein diversity, with each protein having a distinct function. [1–3] To detect AS variants we used EST (short, single-pass cDNA sequences generated from randomly selected library clones produced in a high-throughput manner from different tissues, individuals and conditions) and mRNA sequences. The detected variants (using 5,469,433 EST and 26,159 mRNA sequences) were stored in a database called AVATAR. Although AS databases are available in the public domain, not many contain AS information for multiple eukaryotes (a comparison is summarized on the AVATAR web site). It is therefore important to document AS information for multiple eukaryotes, and hence we developed AVATAR, containing AS information for six eukaryotes. Here, we describe AVATAR's development, content and utility. Methodology Dataset used The dbEST database (Jan 16, 2004) at NCBI contains nearly 5.4 million human EST sequences, and this dataset was used in the current analysis. [4] The human genome sequences (CONTIG build 3.4) in Genbank format were obtained from NCBI. [5] Gene information and mRNA sequences were downloaded from the NCBI RefSeq project. Identification of AS The identification of AS in AVATAR is performed in three steps (described below), as illustrated in Figure 1. Step 1: Alignment of ESTs and mRNAs with the genome sequence EST sequences were aligned to the whole genome sequence using Mugup, a sequence alignment program developed for the Windows platform. [6] This procedure identified splice sites in the ESTs (Figure 1, panels A and B). The matched regions and gaps correspond to exons and introns, respectively. EST and mRNA alignments with scores greater than 94% were used for further analysis. Step 2: Clustering ESTs and mRNAs ESTs and mRNAs were clustered according to their location in the genome (Figure 1, panel C). ESTs and mRNAs with overlapping regions were then assembled together. Step 3: Detection of AS sites Mapping the EST–genome alignments onto intron positions identifies skipped and included exons. Searching AVATAR AVATAR can be queried using keywords, including accession number, gene name, gene isoform, gene location, cytogenetic location, chromosome number and number of AS events. A database search produces AS visuals for the queried gene. Utility to the Biological Community AVATAR is a collection of AS information for 6 eukaryotic organisms and can be queried simultaneously across all 6. It can also be searched using gene names and a desired number of AS events.
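To make the graph model concrete, the sketch below is a hypothetical illustration (not AVATAR's actual implementation): each EST/mRNA alignment is a chain of exon blocks, the gaps between consecutive blocks are introns, and an exon is reported as skipped when another sequence's intron spans it. The per-intron support sets correspond to the "minimal number of ESTs supporting a splicing site" search parameter.

```python
# Hypothetical splice-graph sketch: exons as vertices, introns as edges.
from collections import defaultdict

# Each aligned sequence is a list of exon (start, end) blocks on the genome.
transcripts = {
    "EST1": [(100, 200), (300, 400), (500, 600)],  # includes the middle exon
    "EST2": [(100, 200), (500, 600)],              # skips it
}

exons = set()
introns = defaultdict(set)  # (donor, acceptor) -> supporting sequences
for name, blocks in transcripts.items():
    exons.update(blocks)
    for (_, e1), (s2, _) in zip(blocks, blocks[1:]):
        introns[(e1, s2)].add(name)  # gap between consecutive exons

# An exon-skipping event: some intron spans an exon entirely.
for exon in sorted(exons):
    for (donor, acceptor), support in introns.items():
        if donor <= exon[0] and exon[1] <= acceptor:
            print(f"exon {exon} skipped by intron ({donor}, {acceptor}), "
                  f"supported by {len(support)} sequence(s): {sorted(support)}")
```

Running this reports the middle exon (300, 400) as skipped, supported by one sequence (EST2); filtering on the support count then discards weakly supported, possibly aberrant events.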
EST sequences are error prone, which can result in the detection of aberrant transcripts; the frequency of EST alignments at a specific site therefore provides improved detection confidence in AVATAR. Caveats AS information on paralogous genes in eukaryotic genomes is not included in AVATAR because of the difficulty in identifying their corresponding chromosomal locations using EST sequences. Future developments New EST sequences are generated in laboratories every day, so keeping AS databases updated is time consuming as genome and mRNA sequence data grow. We are therefore developing a computer agent that can update AVATAR automatically. We also plan to include tumor-specific AS data.
[ "database", "alternative splicing", "est", "mrna", "human", "eukaryotes", "protein diversity", "sequence alignment" ]
[ "P", "P", "P", "P", "P", "P", "P", "R" ]
Eur_Spine_J-4-1-2226191
High failure rate of the interspinous distraction device (X-Stop) for the treatment of lumbar spinal stenosis caused by degenerative spondylolisthesis
The X-Stop interspinous distraction device has been shown to be an attractive alternative to conventional surgical procedures in the treatment of symptomatic degenerative lumbar spinal stenosis. However, the effectiveness of the X-Stop in symptomatic degenerative lumbar spinal stenosis caused by degenerative spondylolisthesis is not known. A cohort of 12 consecutive patients with symptomatic lumbar spinal stenosis caused by degenerative spondylolisthesis was treated with the X-Stop interspinous distraction device. All patients had low back pain, neurogenic claudication and radiculopathy. Pre-operative radiographs revealed an average slip of 19.6%. MRI of the lumbosacral spine showed severe stenosis. In ten patients the X-Stop was placed at the L4–5 level, whereas two patients were treated at both the L3–4 and L4–5 levels. The mean follow-up was 30.3 months. In eight patients complete relief of symptoms was observed post-operatively, whereas the remaining four patients experienced no relief of symptoms. Recurrence of pain, neurogenic claudication, and worsening of neurological symptoms was observed in three patients within 24 months. Post-operative radiographs and MRI did not show any changes in the percentage of slip or the spinal dimensions. Finally, secondary surgical treatment by decompression with posterolateral fusion was performed in seven patients (58%) within 24 months. In conclusion, the X-Stop interspinous distraction device showed an extremely high failure rate, defined as surgical re-intervention, after short-term follow-up in patients with spinal stenosis caused by degenerative spondylolisthesis. We do not recommend the X-Stop for the treatment of spinal stenosis complicating degenerative spondylolisthesis. Introduction Lumbar spinal stenosis complicating degenerative spondylolisthesis is a common cause of low back pain, neurogenic claudication, and radiculopathy in the elderly population. The majority of patients respond well to non-operative treatment modalities. However, in patients who fail to respond to conservative treatment, surgical decompression, with or without posterolateral fusion and instrumentation, may be considered [3, 12]. Unfortunately, these procedures have variable long-term outcomes and are frequently followed by complications, especially in elderly patients with high co-morbidity [2, 10]. Therefore, alternative therapies are being developed, among which the interspinous distraction device is rapidly gaining popularity [4, 9]. Of these, the X-Stop (X-Stop, St. Francis Medical Technologies, Inc®, Alameda, CA) has been introduced as a minimally invasive surgical procedure to treat symptomatic degenerative lumbar spinal stenosis [4, 9]. Initial results of the treatment of degenerative lumbar spinal stenosis with the X-Stop are promising [8, 13, 14]. Recently, encouraging results have also been reported for the treatment of patients with symptomatic lumbar spinal stenosis caused by degenerative spondylolisthesis [1]. However, we observed an alarmingly high failure rate, defined as surgical re-intervention, in a cohort of patients treated with the X-Stop for symptomatic lumbar spinal stenosis caused by degenerative spondylolisthesis. This prompted us to perform a retrospective chart review and analysis of the radiographs. Patients and methods We retrospectively reviewed 12 consecutive patients with symptomatic lumbar spinal stenosis caused by degenerative spondylolisthesis treated with the X-Stop interspinous distraction device.
The patients were treated between January 2003 and May 2005. There were 9 female and 3 male patients with a mean age at surgery of 67.5 years (50–83). All patients complained of progressive low back pain throughout the day, with neurogenic claudication, radiculopathy and a diminished walking distance. In all patients, neurological examination was judged normal or nonspecific. Anteroposterior, lateral and flexion/extension plain radiographs, and magnetic resonance imaging (MRI), were performed in all cases. The percentage of degenerative slip was measured on the lateral radiograph according to the method described by Anderson et al. [1]. The anteroposterior dural sac diameter was measured on the MRI in the axial and sagittal plane T2 sequences. A standardized walking and cycling test [6] was performed at the department of physical therapy. A walking distance limited to less than 1 km (0.62 miles), independent of the time needed, was considered positive. After walking, the patient had to sit, and the pain had to subside to the pre-walking level in less than 5 min; cycling had to be unlimited and without complaints. Initial treatment consisted of patient education, medication to control pain, and exercise and physical treatments to regain or maintain activities of daily living. Surgical treatment with the X-Stop was considered in patients not improving after more than 6 months of conservative care. All operations were performed under general anesthesia. The patients were placed in the prone position on a Wilson spinal surgery frame (Orthopaedic Systems, Inc., Union City, CA) with the lumbar spine in maximum flexion. Prophylactic antibiotics, cefazolin (cefalosporin, Kefzol®) 1,000 mg IV, were administered at the induction of anesthesia, and as second and third doses 8 and 16 h post-operatively, respectively. After radiographic identification of the surgical level, a mid-sagittal incision of approximately 4 cm was made over the spinous processes. The musculature was elevated to the level of the laminae and facets. The supraspinal ligament was kept intact. To pierce the interspinous ligament, a curved dilator was inserted at the anterior margin of the interspinous space. Subsequently, a sizing distractor was inserted to determine the appropriate implant size. The X-Stop was inserted into the interspinous space as close to the posterior aspect of the lamina as possible. An adjustable wing was attached to the implant and secured along the midline. Patients were mobilized immediately once they had recovered from the anesthetic effects. They were discharged from hospital within 48 h. Clinical follow-up took place at 6 and 12 weeks and at 12 and 24 months. All patients underwent clinical and radiographic examination of the lumbar spine in the standing position at each follow-up visit. The mean follow-up was 30.3 months (13–41). In patients with persistent or recurrent low back pain with neurogenic claudication and radiculopathy, a second MRI was obtained. The endpoint was secondary surgical intervention of the lumbar spine. Statistical analysis, comparing the pre- and post-operative MRI dimensions, was performed using Student's t-test. Results The pre-operative percentage of degenerative spondylolisthesis was less than 30% in all patients, with an average slip of 19.6 ± 6.2% (range 9.6–29.7%). In 9 of the 12 patients the slip was less than 25% (grade 1). Bending radiographs revealed mobility at the level of the spondylolisthesis in all patients.
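For reference, the slip percentage reported above is conventionally calculated as the anterior displacement of the upper vertebra divided by the anteroposterior width of the vertebral body below, multiplied by 100; the exact landmarks used by Anderson et al. [1] are described in the cited paper, so the sketch below is only a generic illustration:

```python
def slip_percentage(displacement_mm: float, ap_width_mm: float) -> float:
    """Percentage of degenerative slip on a lateral radiograph:
    anterior displacement of the upper vertebra relative to the lower one,
    as a fraction of the lower vertebral body's anteroposterior width."""
    return 100.0 * displacement_mm / ap_width_mm

# Example: a 7-mm displacement over a 36-mm vertebral body gives ~19.4%,
# close to the cohort's mean slip of 19.6%.
print(slip_percentage(7.0, 36.0))
```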
MRI showed nerve root compression and impingement of the thecal sac. The mean anteroposterior axial cross-sectional diameter was 7.33 mm (5.71–11.19) and the mean anteroposterior sagittal cross-sectional diameter was 7.32 mm (5.40–8.49). The operations were performed at L4–5 in ten of the patients and at both L3–4 and L4–5 in two patients. A 14-mm X-Stop was implanted at nine levels; at the remaining levels, a 12-mm implant was used three times, and 16-mm and 10-mm implants once each. No peri-operative complications were observed. Post-operative plain radiographs showed a correct position of the implants in all patients. No fractures of the spinous processes were observed. The post-operative percentage of spondylolisthesis, measured on plain radiographs post-operatively and at final follow-up, remained unchanged in all patients. Directly post-operatively, 8 of the 12 patients reported a significant improvement of pain, neurogenic claudication, and radiculopathy. However, four patients did not experience any relief of symptoms following surgery and showed no improvement at follow-up. At 12 weeks of follow-up, two patients who had initially experienced relief of symptoms suffered a recurrence of pain, neurogenic claudication, and radiculopathy. In addition, a third patient experienced a recurrence of symptoms at 24 months of follow-up. All patients with persistent or recurrent symptoms had a post-operative MRI. No statistically significant (P > 0.05) difference in spinal stenosis was seen at the affected levels in comparison with the pre-operative values. The mean post-operative anteroposterior axial cross-sectional diameter was 6.80 mm (5.24–7.65) and the mean sagittal cross-sectional diameter was 6.91 mm (5.12–7.70) (Figs. 1, 2, 3). The pre-operative axial and sagittal cross-sectional diameters in these seven patients (7 levels) were not significantly different (P > 0.05) from those of the five patients (7 levels) without persistent or recurrent symptoms. Finally, the seven patients with persistent or recurrent symptoms underwent surgical re-intervention. The mean degenerative spondylolisthesis of these seven patients was 17.8 ± 6.9%. Six of these patients had a pre-operative degenerative spondylolisthesis of less than 25%; one patient had a 27.6% degenerative spondylolisthesis. The X-Stop was removed, and decompression and posterolateral fusion with instrumentation were performed. Fig. 1 a Pre-operative lateral plain radiograph. b Post-operative lateral plain radiograph; X-Stop positioned at the L4–5 level. Fig. 2 Pre-operative T2-weighted a transversal and b sagittal MR image showing lumbar spinal stenosis due to discopathy, facet arthritis, ligamentum flavum hypertrophy and anterolisthesis. Fig. 3 Post-operative T2-weighted a transversal and b sagittal MR image. No change in canal cross-sectional area or mid-sagittal diameter is visible after insertion of the X-Stop at the L4–5 level. Discussion The X-Stop interspinous distraction device has been shown to be an attractive alternative to conventional surgical procedures in the treatment of symptomatic degenerative lumbar spinal stenosis [4, 9]. It may be questioned, however, whether the X-Stop is also effective in patients with lumbar spinal stenosis caused by degenerative spondylolisthesis. To the best of our knowledge, there is only one study that has investigated the clinical effects of the X-Stop in patients with lumbar spinal stenosis caused by degenerative spondylolisthesis [1].
In that study, 42 patients were treated with the X-Stop and compared with 33 patients receiving non-operative treatment. The indication for treatment was a percentage of slip of less than 25%. An overall clinical success rate of 63.4% was reported in the X-Stop-treated patients, compared with 12.9% in the non-operatively treated patients, after 2 years of follow-up. Secondary surgery was required in 5 (11.9%) of the patients in the X-Stop group compared with 4 (12.1%) in the control group. Unfortunately, we experienced an extremely high failure rate, defined as surgical re-intervention, in a cohort of patients with lumbar spinal stenosis caused by degenerative spondylolisthesis treated with the X-Stop interspinous distraction device. In our cohort, the average percentage of slip was less than 25%, though in 3 patients the percentage of slip was between 25 and 30%. Surgical re-intervention was required in 7 (58%) patients within 24 months. Of these, only 1 patient had a slip of more than 25% (27.6%); there was no relation between the severity of the slip and the failures in our cohort. Our indications for re-intervention were recurrent or persistent and unremitting low back pain and persistent or progressive neurogenic claudication with radiculopathy. Both clinical and radiological findings were considered together in diagnosing failure of treatment. Unfortunately, we did not include pre- and post-operative outcome measurements. However, since the surgical goals of the X-Stop include pain reduction, improvement of neurological symptoms, and improvement of quality of life, re-intervention was considered the endpoint for failure. In diagnosing spinal stenosis, thecal sac impingement and nerve root compression are seen on MRI. We observed no improvement of the axial or sagittal diameter of the central canal on MRI after insertion of the X-Stop. In addition, no relation was found between the severity of the pre-operative spinal stenosis measured on MRI and eventual secondary surgical intervention. Recently, in a study using positional MRI pre- and post-operatively following insertion of the X-Stop, improvement of the cross-sectional area of the dural sac was observed in 12 patients with symptomatic spinal stenosis [11]. That study, however, did not include patients with spinal stenosis caused by degenerative spondylolisthesis. Unfortunately, we did not have the opportunity to use positional MRI. It may be hypothesized that the spinal stenosis would appear more severe on a standing positional MRI, as a result of the instability in degenerative spondylolisthesis. A limitation of the present study is the lack of objective standards for measuring spinal stenosis on MRI. Nevertheless, all patients in our study showed severe pre-operative thecal sac impingement at the level of the degenerative spondylolisthesis. In addition, the spondylolisthesis, as measured on the lateral standing radiographs, showed neither progression nor improvement after surgery. From a biomechanical point of view, it may be questioned whether the X-Stop interspinous distraction device provides any stabilizing effect on the affected motion segment and whether it increases the spinal canal dimensions in degenerative spondylolisthesis. It has been shown that the facet joints in patients with degenerative spondylolisthesis demonstrate an increased sagittal orientation [5, 7]. When the facet joints are oriented in a more sagittal plane, the resistance to shear forces is decreased.
Obviously, the more sagittal orientation of the L4–L5 segment combined with an interspinous distraction device may result in a progressive forward slip of the superior vertebra, and a progressive narrowing of the spinal canal and lateral recesses. Thus, the presence of a degenerative spondylolisthesis in patients with lumbar spinal stenosis may be considered a contra-indication for the X-Stop. In conclusion, the X-Stop interspinous distraction device showed an extremely high failure rate, defined as surgical re-intervention, after short-term follow-up in patients with spinal stenosis caused by degenerative spondylolisthesis. We do not recommend the X-Stop for the treatment of lumbar spinal stenosis with degenerative spondylolisthesis, and we consider a degenerative spondylolisthesis a contra-indication for the X-Stop interspinous distraction device. Competing interests No benefits in any form have been received or will be received from a commercial party related directly or indirectly to the subject of this article.
[ "x-stop", "lumbar spinal stenosis", "degenerative spondylolisthesis" ]
[ "P", "P", "P" ]
Doc_Ophthalmol-3-1-1784540
Multifocal ERG findings in carriers of X-linked retinoschisis
Purpose To determine whether retinal dysfunction in obligate carriers of X-linked retinoschisis (XLRS) could be observed in local electroretinographic responses obtained with the multifocal electroretinogram (mfERG). Methods Nine obligate carriers of XLRS (mean age, 46.2 years) were examined for the study. Examination of each carrier included an ocular examination and mfERG testing. For the mfERG, we used a stimulus array of 103 scaled hexagons that subtended a retinal area of approximately 40° in diameter. The amplitudes and implicit times in each location for the mfERG were compared with the corresponding values determined for a group of 34 normally-sighted, age-similar control subjects. Results Mapping of 103 local electroretinographic response amplitudes and implicit times within a central 40° area with the mfERG showed regions of reduced mfERG amplitudes and delayed implicit times in two of nine carriers. Conclusions The mfERG demonstrated areas of retinal dysfunction in two carriers of XLRS. When present, retinal dysfunction was evident in the presence of a normal-appearing fundus. Multifocal ERG testing can be useful for identifying some carriers of XLRS. Introduction Juvenile X-linked retinoschisis (XLRS) is a hereditary vitreoretinal degeneration that was initially described in 1898 by Haas [1] and subsequently recognized to have an X-linked recessive inheritance in 1938 by Mann and Mac Rae [2]. It is characterized by decreased visual acuity within the first to second decade of life and cystic-appearing lesions within the fovea [3–5]. Approximately 50% of patients will also show peripheral retinoschisis [3–5]. A selective or predominant b-wave amplitude reduction on full-field electroretinogram (ERG) testing is a distinctive feature of the disease [6]. In other X-linked diseases, such as X-linked retinitis pigmentosa, choroideremia and X-linked ocular albinism, because of the principle of Lyonisation, which predicts a random inactivation of one X chromosome within each cell, female carriers can express some features of the diseases [7–9]. As a rule, female carriers of XLRS do not exhibit any clinically apparent fundus abnormalities [3, 4]. Isolated observations of foveal cystic-appearing lesions in female carriers from families with XLRS have been reported. Six such female carriers were noted out of a total of 13 examined in two different families with consanguineous marriages, as reported in two previous studies [10, 11]. Foveal changes of one eye, described as wrinkling of the internal limiting membrane, were reported in one female XLRS carrier from a non-consanguineous marriage by Wu et al. [12]. One study did find that 11 out of 11 obligate carriers showed an abnormal rod-cone interaction by psychophysical testing [13]. Nevertheless, carrier detection of XLRS is currently most reliably accomplished by identifying mutations in the gene that codes for the protein retinoschisin (RS1) [14, 15]. In the current study, we evaluated the possible role of multifocal ERG (mfERG) testing as a means of detecting functional abnormalities in obligate carriers of XLRS. Methods Nine obligate carriers (eight parents, one offspring) of XLRS patients were enrolled in the study. They were selected on the basis of their availability and willingness to participate in the study. Each had at least one male family member who was diagnosed as having characteristic XLRS findings by one of the authors (GAF).
Five of the nine XLRS patients were confirmed as having a mutation in the causative gene RS1. None of the carriers had any other medical or ocular conditions that might have affected their retinal function. After each carrier was counseled regarding the study, they signed a written informed consent approved by the Institutional Review Board at the University of Illinois at Chicago. The examinations were conducted in accordance with Health Insurance Portability and Accountability Act regulations. All carriers were examined at the Department of Ophthalmology, University of Illinois at Chicago by two of the authors (GAF, LSK). Best-corrected visual acuity (BCVA) was measured in each carrier with a Snellen projection chart. A dilated fundus examination and slit-lamp biomicroscopy of the lens and anterior segment of both eyes were performed on all carriers, as were mfERG (Electro-Diagnostic Imaging, San Mateo, CA) measurements. Multiple retinal areas were stimulated to record local cone responses using a stimulus array of 103 scaled hexagons subtending a retinal area of approximately 40° in diameter. Each hexagon was scaled with eccentricity to obtain approximately equal amplitudes of local ERG responses [16]. The luminance of each hexagon was modulated according to a binary m-sequence. Individual stimulus elements were modulated between black (0.45 cd/m2) and white (280 cd/m2) for a time-averaged luminance of 140.23 cd/m2 (approximately 3.8 log td). The surround luminance was set at 140 cd/m2. The stimulus was displayed on a black-and-white monitor driven at a frame rate of 75 Hz (Nortec, Plymouth, MN). Each subject’s vision was optimally corrected with a refractor/camera system for the fixed viewing distance of 40 cm. To ensure equal magnification of the stimulus array, the distance between a subject’s eye and the refractor/camera was adjusted for each subject to obtain a sharp image on a control monitor. Each carrier’s pupil was dilated with 2.5% phenylephrine and 1% tropicamide to obtain a pupil diameter of at least 7 mm. Multifocal ERGs were recorded only for the right eye of each of the XLRS carriers. A Burian-Allen bipolar contact lens electrode (Hansen Ophthalmic Laboratories, Iowa City, IA) was used and grounded to the ipsilateral ear. Before insertion of the contact lens electrode, a carrier’s cornea was anesthetized with 0.5% proparacaine, and the left eye was occluded. The total recording time was approximately 8 min, divided into 32 segments. All carriers were required to maintain fixation during each of the 14-s segments. Segments with large eye movements, losses of fixation, or blinks were discarded and re-recorded. The raw data were filtered at a bandpass of 10 to 300 Hz, amplified at a gain of 100,000 (Astro-Med/Grass model CP11 amplifier), and digitized at 1200 Hz. Each local response was isolated by a cross correlation between the m-sequence and response cycle according to the VERIS algorithm. The amplitude (a-scale) and implicit time (t-scale) of all local (first-order) mfERG responses were derived using the algorithm of Hood and Li [17]. For the current analysis, raw waveforms were exported from VERIS 3.0 for each of the 34 (15 male, 19 female) age-similar (30–79 years) normally sighted control subjects. Artifact rejection and averaging with neighbors were turned off. Template waveforms were constructed for each hexagon from the average of all the control subjects’ data.
These averaged templates were fitted using a least-squares fitting procedure to each control subject’s data by stretching horizontally (timing) and vertically (amplitude). For each hexagon, the a-scale values were then averaged across all normal subjects and standard deviations calculated. This was also done separately for the t-scale values. Each carrier’s data were then fit by the control templates and the resulting values for the a-scale and t-scale were plotted for each hexagon. For the a-scale, a number less than one indicates an amplitude lower than that of the average control subject, and for the t-scale, a number greater than one indicates a timing delayed compared to the average control subject. The statistical probability of each value was calculated using the control subjects’ mean and ±3 standard deviations (SD) for the a-scale and t-scale, respectively, at each hexagon. The grey hexagons indicate >3 SD above the normal mean. For the 99.7% confidence level (grey hexagons), five control subjects had one abnormal amplitude and two subjects had two abnormal grey hexagons on the a-scale. For implicit time, six control subjects had one abnormal grey hexagon and two had two abnormal grey hexagons. Therefore, for this analysis, any carrier exhibiting a total of three or more grey hexagons was considered abnormal. A black hexagon indicated that a measurable signal could not be detected from the noise level. None of the 34 control subjects had any hexagons that were depicted as black. We used a criterion of two or more black hexagons for a carrier as being highly likely to represent a local abnormality. Results The nine XLRS carriers ranged in age from 23–70 years, with a mean age of 46.2 years. All showed a normal examination of the anterior segment of the eye, including the lens. Visual acuity was 20/25 or better in each eye. None showed any fundus abnormalities. These findings are summarized in Table 1. Seven of the nine obligate carriers were observed to have varying degrees, from mild to moderate, of situs inversus of their temporal retinal vessels. In ocular situs inversus, the temporal vessels leave the optic disc directed towards the nasal retina before making a sharp temporal turn. The fundus photograph for Carrier #1 with situs inversus is shown in Fig. 1. In a study of fundus findings for XLRS patients, situs inversus was observed in 32% of patients with XLRS [18].
Table 1 The age, visual acuity for the right eye (OD) and left eye (OS), and fundus findings for each of the X-linked retinoschisis carriers
Carrier | Age (years) | VA OD | VA OS | Fundus findings
1 | 35 | 20/15 | 20/15 | Mild situs inversus OU
2 | 47 | 20/20 | 20/20 | Normal OU
3 | 23 | 20/20 | 20/20 −1 | Normal OU
4 | 53 | 20/20 | 20/20 | Mild situs inversus OU
5 | 48 | 20/25 | 20/20 | Situs inversus OU
6 | 58 | 20/20 | 20/20 | Situs inversus OU
7 | 39 | 20/20 | 20/20 | Mild situs inversus OU
8 | 43 | 20/20 −2 | 20/20 | Situs inversus inferior vein OS, anomalous branching OS > OD
9 | 70 | 20/20 −2 | 20/20 −2 | Anomalous branching OU, situs inversus superior artery OD, inferior artery OS
Fig. 1 Fundus photograph of XLRS carrier #1 demonstrating situs inversus of the retinal veins
Two of nine carriers demonstrated a mosaic pattern of statistically significant amplitude reductions (Figs. 2a, 3a) and implicit time delays (Figs. 2b, 3b). The actual mfERG tracings and hexagon analysis for these two carriers are shown in Figs. 2 and 3.
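To make the two-parameter template fit described above concrete, the following minimal sketch (Python/NumPy) fits an amplitude scale (a-scale) and a timing stretch (t-scale) of a mean control waveform to a recorded response by least squares. It is an illustration under stated assumptions only, not the VERIS or Hood-Li implementation; the function name, the grid of candidate stretches, and the closed-form amplitude step are our own choices, with only the 1200 Hz sampling rate taken from the Methods.

import numpy as np

def fit_template(template, response, dt=1.0 / 1200.0):
    """Fit a-scale (vertical) and t-scale (horizontal) so that
    a * template(t / t_scale) best matches the response (least squares)."""
    t = np.arange(len(template)) * dt
    best_err, best_a, best_t = np.inf, 1.0, 1.0
    for t_scale in np.linspace(0.8, 1.2, 81):           # candidate timing stretches
        stretched = np.interp(t, t * t_scale, template)  # template(t / t_scale)
        denom = np.dot(stretched, stretched)
        if denom == 0.0:
            continue
        a_scale = np.dot(stretched, response) / denom    # optimal amplitude, closed form
        err = np.sum((a_scale * stretched - response) ** 2)
        if err < best_err:
            best_err, best_a, best_t = err, a_scale, t_scale
    return best_a, best_t  # a-scale < 1: reduced amplitude; t-scale > 1: delayed timing

A carrier hexagon would then be flagged by comparing its fitted a-scale and t-scale against the control mean ±3 SD for that hexagon, as described above.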
Fig. 2 Multifocal electroretinogram (mfERG) results of X-linked retinoschisis (XLRS) carrier #8 exhibiting a mosaic pattern of retinal dysfunction. MfERG amplitudes (a) and implicit times (b) for the right eye. In this figure, as in Fig. 3, the white hexagons show regions falling within 2 standard deviations (SD) of the normal mean. The grey hexagons represent locations that are more than 3 SD from the normal mean, while the black hexagons indicate a non-measurable response. MfERG waveform traces are illustrated below (c)
Fig. 3 Multifocal electroretinogram (mfERG) results of XLRS carrier #3 exhibiting a mosaic pattern of retinal dysfunction. MfERG amplitudes (a) and implicit times (b) for the right eye. MfERG waveform traces are illustrated below (c)
Discussion Our findings support the conclusion that the presence of a functional abnormality on mfERG testing, in the absence of other causes for retinal dysfunction in a female at risk, should suggest the presence of a carrier state for XLRS, which could then be further confirmed by genetic testing. However, a normal mfERG result would not, by itself, rule out the carrier state. Piao and co-workers [19] showed that mfERG cone-mediated responses were more impaired centrally than in the more peripheral retina in their study of seven male patients with XLRS. They also observed a higher frequency of delayed implicit times than reduced amplitudes in their patients. We did not observe either of these features in our two carriers. The presence of situs inversus of the retinal vessels in a female from a family with XLRS is also suggestive of the carrier state. Our findings on mfERG testing demonstrate the effect of Lyonisation on retinal function in carriers of XLRS. A similar finding on mfERG testing was observed in carriers of X-linked retinitis pigmentosa [20]. Multifocal ERG testing may be a useful technique for the detection and monitoring of localized retinal dysfunction in some carriers of XLRS.
[ "carriers", "x-linked retinoschisis", "situs inversus", "multifocal electroretinography" ]
[ "P", "P", "P", "M" ]
Bioprocess_Biosyst_Eng-2-2-1705532
A next generation, pilot-scale continuous sterilization system for fermentation media
A new continuous sterilization system was designed, constructed, started up, and qualified for media sterilization for secondary metabolite cultivations, bioconversions, and enzyme production. An existing Honeywell Total Distributed Control 3000-based control system was extended using redundant High performance Process Manager controllers for 98 I/O (input/output) points. This new equipment was retrofitted into an industrial research fermentation pilot plant, designed and constructed in the early 1980s. Design strategies of this new continuous sterilizer system and the expanded control system are described, compared with the literature (including dairy and bio-waste inactivation applications), and weighed against the weaknesses of the prior installation to assess expected effectiveness. In addition, the reasoning behind selection of some of these improved features has been incorporated. Examples of enhancements adopted include sanitary heat exchanger (HEX) design, incorporation of a “flash” cooling HEX, on-line calculation of Fo and Ro, and use of field I/O modules located near the vessel to permit low-cost addition of new instrumentation. Sterilizer performance also was characterized over the expected range of operating conditions. Differences between design and observed temperature, pressure, and other profiles were quantified and investigated. Introduction Continuous sterilization also is known as high-temperature, short-time (HTST) sterilization. A continuous sterilizer heats non-sterile (raw) medium to the desired sterilization hold temperature (typically 135–150°C), maintains it at constant temperature in an adiabatic holding loop (consisting of a long length of insulated stacked piping connected with U-bends for compactness), then cools it to 35–60°C before transferring flow to a fermenter that has been previously sterilized empty or with a minimal amount of water. The residence time that medium is held at sterilization temperature, tR (min) [calculated from the adiabatic retention loop volume, Vs (L), divided by the system volumetric flowrate, Q (lpm)], is varied by adjusting flowrate and/or length of the holding loop. Energy is recovered by pre-heating incoming cold medium from 15°C (worst case) to 120°C with outgoing sterilized medium that is cooled from its sterilization temperature of 150 to 45°C prior to entering the process cooler, where it is cooled further to 35°C. Medium is recycled back to a circulation tank (also called a surge or recycle tank) or diverted to the sewer during start up or process upsets (such as a decrease in sterilization temperature or an increase in system flowrate). This circulation tank can be pressurized or non-pressurized, with the non-pressurized design approach requiring a second “flash” cooler before returning flow to the recycle tank to avoid flashing. Heating is accomplished indirectly using steam or hot water via a heat exchanger (HEX) or directly by mixing steam with incoming medium (steam injection). Cooling HEXs can use cooling tower/chilled water, but also may use vacuum to reduce temperature and draw off any accumulated water from direct steam injection. Continuous sterilization systems typically are pre-sterilized with steam by direct injection and/or with hot water. After attaining steady state with water flow, non-sterile medium feed is introduced. Various media components are sterilized in aliquots and sent to the receiving fermentation vessel with water flushed between them.
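Since the hold time is set entirely by the loop volume and flowrate, the relation tR = Vs/Q is easy to sanity-check numerically. The short Python sketch below (function name ours) uses the loop volumes and flowrates quoted later for this system and reproduces its residence time extremes.

def residence_time_min(loop_volume_l, flowrate_lpm):
    """Adiabatic retention loop hold time, t_R = V_s / Q (minutes)."""
    return loop_volume_l / flowrate_lpm

print(residence_time_min(540.0, 100.0))  # 5.4 min (smallest loop, highest flow)
print(residence_time_min(900.0, 40.0))   # 22.5 min (largest loop, lowest flow)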
A next generation, pilot-scale continuous sterilization system was designed, installed, started up, and validated. Demolition as well as retrofit was accomplished within an actively operating industrial pilot plant. Despite prior experience with a stick-built, internally designed system, a skid-mounted vendor design was selected, consisting of five skids (recovery and heating exchangers, hot water loop and exchanger, retention loop, process and “flash” cooling HEXs, and switching valve station). Sterilized medium, obtained from the system at 40–100 lpm, typically was aliquoted into 800–19,000 L scale fermenters, with lower flowrates being most appropriate for lower fermenter volumes. The design accommodated a range of different media types, including low solids levels below 5 wt.% and concentrated nutrient solutions. The design evaluated features from related industrial applications of continuous sterilization, including sanitary design advances in spiral HEX fabrication that were considered helpful. Valued design characteristics were flexibility, reliability, and straightforwardness in operation and maintenance. Although some general papers describing continuous sterilizer design are available [10, 35], there have been few, if any, publications linking design and operation, despite the considerable and varied industrial applications of HTST sterilization. This paper describes the design and testing of a next generation HTST continuous media sterilization system, along with the technical rationale behind its features and flexibility. Background Advantages Advantages of continuous sterilization have been outlined in several reports [3, 7, 10, 30, 60, 84]. By far the most noted benefit is energy conservation since continuous sterilization consumes 60–80% less steam and cooling water for large-scale fermenter media volumes. This economy lies at the high end of this range when continuous sterilization utilizes heat recovery via indirect heat exchange to pre-heat incoming cold medium with hot medium leaving the sterilization hold loop. It thus requires less energy (as well as generates a more uniform demand without peak draws [7]) than the alternative batch sterilization process involving sterilizing the fermenter vessel and its non-sterile contents together. Batch sterilization becomes less efficient with scale since the heating and cooling portions of the cycle are longer than the constant hold temperature portion [54, 83], and heat transfer coefficients decrease with scale up [34]. Prior to receiving continuously sterilized medium, the fermenter is sterilized empty or with a small volume of water covering the pH and dissolved oxygen probes. Heat up/cool down times are substantially shorter, decreasing overall turn-around time [96]. Continuous sterilization results in gentler treatment of medium compared to batch sterilization, which tends to overheat medium to ensure that vapor space vessel internals achieve sterilization temperatures [1]. Sterilization at higher temperatures for a shorter time generates less degradation of heat-sensitive medium components since spore destruction rates increase faster than nutrient destruction rates as temperature rises [15]. The activation energy for nutrient degradation ranges from 50 to 150 kJ/mol, which typically is smaller than activation energies for the thermal death of microorganisms, which range from 250 to 350 kJ/mol [67]. Consequently, Fo increases more than Ro as temperature rises [1, 52].
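The asymmetry between spore kill and nutrient loss follows directly from these activation energies. The hedged Arrhenius comparison below (Python; the midpoint Ea values are assumptions drawn from the cited ranges, not values from this paper) shows how much faster each rate grows between 121 and 150°C.

import math

R = 8.314  # universal gas constant, J/mol-K

def rate_ratio(ea_kj_per_mol, t1_c=121.0, t2_c=150.0):
    """k(T2)/k(T1) for an Arrhenius process with activation energy Ea."""
    t1_k, t2_k = t1_c + 273.15, t2_c + 273.15
    return math.exp(-ea_kj_per_mol * 1000.0 / R * (1.0 / t2_k - 1.0 / t1_k))

print(rate_ratio(300.0))  # spore kill, Ea ~300 kJ/mol: ~530-fold faster at 150 C
print(rate_ratio(100.0))  # nutrient loss, Ea ~100 kJ/mol: only ~8-fold faster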
Similarly, amino carbonyl or Maillard browning reactions, which form color and tastes objectionable to consumers in pasteurization and adversely affect medium quality (destroy growth factors) in fermentation, are minimized [12, 54]. In the case of polymerized poly(l-lactide) rod implants, smaller decreases in molecular weight also have been found for autoclave cycles at higher temperatures and shorter times [88]. Continuous sterilization results in more uniform heat treatment of medium than batch processes since the system operates at steady state [15]. Proteins and carbohydrates can be separately sterilized in multiple sections using several mix tanks with a sterile water flush between them [96]. Since there is no need to agitate unaerated (ungassed) large liquid volumes during batch sterilization, fermenter agitator design can be based on drawing full load during gassed conditions [96]. Some feel that continuous sterilization offers a lower contamination rate relative to batch sterilization since fermenter internals are more easily sterilized in an empty rather than a full fermenter [96]; others believe that batch sterilization has a lower risk since there is no need to transfer aseptic media [97]. HTST systems have a high degree of flexibility since a large range of time/temperature combinations can be selected within equipment design limits. Scale up is linear, with media flowrates of 10–100,000 L/h reported for HTST systems [36] and up to 30,000–50,000 L/h for pasteurizers [38]. Time/temperature exposure profiles accurately reflect sterilization conditions for the media of interest and can be readily modeled. Finally, the ability to design heat exchange equipment to minimize fouling reduces cleanability and maintenance concerns. Disadvantages of continuous sterilization are primarily that process control performance is critical since it is necessary to immediately divert the flow of any inadequately sterilized medium, halt any further medium sterilization, and resterilize the system. In contrast, for a batch sterilization system upset, often additional hold time can be readily added to the sterilization. Continuous sterilizer systems also use dedicated equipment that usually is not well suited for other purposes. Applications Several relevant background papers on applications of continuous heat treatment of liquids have been published in the food, biowaste, and fermentation fields. Applications of continuous sterilization to dairy and other food pasteurization are prevalent in the literature. A continuous pasteurization process with a hold temperature of 72°C and hold time of 15 s replaces a batch process with a lower hold temperature of 63°C for 30 min [65]. Ultra high temperature (UHT) treatment (120–136°C), using either direct steam injection or indirect heating, is used to obtain longer preservation (specifically greater log reduction) than pasteurization at 72°C [32]. Temperatures of 100–145°C produce extended shelf life milk with a product shelf life of 15–30 days at 7°C [16]. Direct steam injection for heating feed to its hold temperature for UHT treatment causes less destruction of other milk components owing to rapid heating using injected steam and rapid cooling using a vacuum [43]. For dairy applications, often the lethality achieved during the heat up and cool down periods is similar in magnitude to that achieved during the isothermal hold time [67], and thus needs to be considered in evaluating exposure.
The integrated pasteurization effect (PE) is calculated to convert the time, t (min), at different temperatures, T(t) (K), in various sections of the pasteurizer (specifically the heat up, holding, and cooling sections), to the equivalent time at a reference temperature, To, of 72°C (345 K) and a reference time, to, of 15 s (0.25 min) (Eq. 1):

$$\mathrm{PE} = \frac{1}{t_o}\int \exp\!\left[\frac{E_a}{R}\left(\frac{1}{T_o}-\frac{1}{T(t)}\right)\right]\mathrm{d}t \quad (1)$$

where Ea is the activation energy, cal/mol, and R is the universal gas constant of 1.987 cal/mol-K. A PE of one corresponds to complete pasteurization at 72°C for 15 s [65]. The effectiveness of heat treatment in the food industry is established indirectly since it is undesirable to introduce indicator organisms into production equipment. An indicator enzyme such as alkaline phosphatase is used to test proper milk pasteurization after first establishing its relation to pathogen load [28, 65] and pasteurization effect [69]. The behavior of indicator organisms also is examined and rigorously modeled since obtaining accurate kill kinetics at operational conditions can be problematic [72, 80, 87]. Continuous sterilization also is used for biowaste destruction and decontamination of spent broth, the major byproduct from biotechnology plants [41, 84]. Typical sterilization temperatures vary from as low as 80°C up to 140°C, for usually short hold times of 1–5 min. Less aggressive conditions are warranted since the organisms being sterilized are active cultures and not dormant spores. Batch systems involve heat up, sterilization, and cool down of waste, all in the same jacketed vessel, and often with direct steam sparging for heating and an external HEX used for cooling to shorten the time cycle [57]. Owing to its higher throughput, continuous sterilization has been applied to biowaste treatment [60], and the prediction that it eventually would be the preferred method of biowaste inactivation [84] has been realized for larger facilities. Operational concerns for biowaste treatment are opposite those for media sterilization. Although both processes require achievement of the desired log reduction of live organisms in the feed, biowaste treatment is concerned with live organism leakage into either previously sterilized effluent broth and/or uncontained cooling water. In contrast, media sterilization/pasteurization is concerned with live organisms leaking into sterilized media from non-sterile cooling water [98] and/or non-sterile incoming feed. For fermenter media sterilization applications, continuous sterilization complements continuous fermentation, which can be more productive for certain fermentation processes since it substantially reduces fermenter turn-around time between successive runs [102]. Systems are able to be maintained on-line and ready so that they can continuously sterilize and deliver mid-cycle medium additions directly into active fermentations. Typical sterilization temperatures range from 135 to 150°C with hold times of 4–12 min. Similarly to the PE value for pasteurization, the Fo value (min) is used to characterize sterilization effectiveness for fermentation medium [18]. It is the exposure time at the actual sterilization hold temperature expressed as the equivalent time in a saturated steam environment at 121°C, according to Eq. 2:

$$F_o = \int_{t_i}^{t_f} 10^{\,(T(t)-394.15)/Z}\,\mathrm{d}t \quad (2)$$

where T(t) is the sterilization hold temperature, K (394.15 K = 121°C), t is the incremental sterilization hold time, min, integrated over the start time, ti, to finish time, tf, and Z is the temperature difference (K or °C) for a one log change in DT (min), the time for a one log reduction in spore concentration.
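Both equivalent-time integrals lend themselves to simple numerical evaluation over a sampled temperature profile. The sketch below (Python/NumPy, trapezoidal rule) is illustrative only: the function names are ours, and the example profile, time step, and default Ea are assumptions rather than values from this system.

import numpy as np

R_CAL = 1.987  # universal gas constant, cal/mol-K

def fo_minutes(temps_c, dt_min, z_c=10.0):
    """Eq. 2: Fo = integral of 10**((T - 121)/Z) dt over the profile."""
    lethality = 10.0 ** ((np.asarray(temps_c) - 121.0) / z_c)
    return np.trapz(lethality, dx=dt_min)

def pasteurization_effect(temps_c, dt_min, ea_cal_per_mol=1.0e5,
                          t_ref_c=72.0, t_o_min=0.25):
    """Eq. 1: equivalent exposure relative to 72 C for 15 s (0.25 min)."""
    t_k = np.asarray(temps_c) + 273.15
    rel = np.exp(ea_cal_per_mol / R_CAL * (1.0 / (t_ref_c + 273.15) - 1.0 / t_k))
    return np.trapz(rel, dx=dt_min) / t_o_min

# e.g. an ideal isothermal 8 min hold at 150 C with Z = 10 C (9 samples, 1 min apart):
print(fo_minutes([150.0] * 9, 1.0))  # ~6,350 min of equivalent 121 C exposure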
A similar expression can be developed for the analogous impact on nutrient degradation, Ro (min) [17]. High temperature, short time continuous heat treatment also has been evaluated for the viral inactivation of mammalian cell culture medium (hold temperature of 102°C and hold time of 10 s) to minimize nutrient degradation [66] and of blood plasma (hold temperature of 77°C and hold time of 0.006 s) to maintain protein structure and activity [23]. Kill/degradation kinetics Relative to Escherichia coli, the heat resistance of bacterial spores is 3 million:1, of mold spores is 2–10:1, and of viruses and bacteriophages is 1–5:1 [34]. As a first pass, the kinetics of kill and degradation are based on the Arrhenius equation for the thermal death constant, k(t), min−1, as a function of the incremental sterilization holding time, t (Eq. 3):

$$k(t) = A\,\exp\!\left[-\frac{E_a}{R\,T(t)}\right] \quad (3)$$

where T(t), Ea, and R are defined as in Eq. 1 and A is the frequency factor of the reaction, min−1. Adherence to strict first order kinetics is not always the case [4, 33], and this model does not incorporate partial germination and/or heat activation of dormant spores prior to media sterilization reducing their heat resistance [90]. Nevertheless, this simple model is employed for the validation of media sterilization conditions in the fermentation industry. Using the Arrhenius model, the non-temperature-dependent activation energy can be calculated from the regressed slope of the log of the reaction rate constant versus the reciprocal of absolute temperature [67]. Typical values for Bacillus (Geobacillus) stearothermophilus, an indicator organism commonly used to evaluate heat treatment effectiveness, are 9.5×10^37 min−1 for A and 70,000 cal/mol for Ea [11]. The D value, DT (min), or decimal reduction time, is the time to decrease the population to one-tenth its original number at a specified temperature [15]. The Z value (K or °C) is the number of degrees of temperature rise that causes a tenfold increase in D value [15]. It is obtained by plotting the log of the D value versus the corresponding temperature and calculating the Z value from the reciprocal of the slope of the least squares regression line (Bigelow model) [67]. The QΔT value is the death rate increase for a specified change in temperature, ΔT [15], and it is calculated from the dependence of the D value (specifically k) on temperature. Both D and Z values are affected by the physicochemical and biochemical properties of the solution to be sterilized (e.g., composition, pH) [21, 51, 52, 85, 101]. The Z value for B. stearothermophilus in water is 10°C, vs. 56°C for vitamin B1 and 50°C for vitamin B2 (riboflavin) [15], both notably higher. Correspondingly, the Q10 value for B. stearothermophilus is 11.5, vs. 2.1 for vitamin B1, 2.3 for vitamin B2 (riboflavin), and 3.0 for the Maillard reaction [15], all notably lower. Either model (Arrhenius or Bigelow) can be used to extrapolate death rates to higher temperatures than those measured experimentally since it is difficult to measure kill and degradation kinetics at temperatures above 130°C with existing equipment [67]. Using the overall sterilization hold time or residence time, tR, the log reduction may be obtained according to Eq. 4:

$$\log_{10}\!\left(\frac{N_o}{N(t)}\right) = \frac{t}{D_T} \quad (4)$$

where N(t) is the number of spores surviving heat treatment at incremental sterilization time, t, No is the initial number of spores, and DT is defined below Eq. 2.
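The D/Z bookkeeping in Eq. 4 can be scripted in a few lines. In the sketch below (Python; function names ours), the reference values D121 = 3 min and Z = 10°C are those quoted just below for B. stearothermophilus. Note that the computed logs are the same order of magnitude as, though not identical to, the 1,800–7,500 range cited in the text, since the paper's exact assumptions are not stated.

def d_value_min(temp_c, d_ref_min=3.0, t_ref_c=121.0, z_c=10.0):
    """Decimal reduction time at temp_c, extrapolated on the Bigelow model."""
    return d_ref_min * 10.0 ** ((t_ref_c - temp_c) / z_c)

def log_reduction(hold_min, temp_c, **kwargs):
    """Eq. 4: log10(No/N) = t_R / D_T."""
    return hold_min / d_value_min(temp_c, **kwargs)

print(log_reduction(5.4, 150.0))   # ~1,430 logs at the shortest residence time
print(log_reduction(22.5, 150.0))  # ~5,960 logs at the longest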
Assuming D121 = 3 min and Z = 10°C (typical values for B. stearothermophilus spores in water [52]), log reductions for continuous sterilization at 150°C range from 1,800 to 7,500 for the system residence times of 5.4–22.5 min, substantially higher than what is obtainable via batch sterilization. Actual sterilization conditions are selected based on specific medium properties and fermentation process requirements. Retention loop flow behavior and its impact Flow through a pipe is characterized by the Reynolds number, NRe, given by DVρ/η, which is the ratio of inertial to viscous forces. Since the system flow tube diameter, D (cm), does not change, NRe varies with sterilization fluid velocity, V (controlled by system flowrate), fluid viscosity, η (cp), and density, ρ (g/cm3), which is fixed for the selected medium. As flow becomes more turbulent (higher NRe), flow behavior approaches ideal plug flow. There is considerable disparity in the literature regarding the Reynolds numbers associated with laminar and turbulent flow through a tube for various continuous flow sterilization applications. For flow through a tube, laminar flow was below 1,100 and turbulent flow was above 2,100 [95]. Laminar flow was below 2,100, and turbulent flow above 4,000, according to another study [81]. A minimum velocity giving turbulent flow is recommended, with a Reynolds number of about 3,000 [96] or at least 2,500 [30], but preferably above 20,000 [30]. For a retention loop in a dairy application, Reynolds numbers of 1,130–2,300 were considered laminar [67], and Reynolds numbers of 4,800–7,080 were considered transitional [74]. Substantially higher Reynolds numbers of 7,200–9,400 were considered transitional for the heating and cooling sections of tubular HEXs for a dairy application [67]. Differences in tube roundness and entrance effects may have an influence [68]. System design for NRe well above 10,000, particularly in the holding tube, minimizes the potential for inadvertent operation in the laminar flow regime. One potential design approach is to incorporate flow disturbances to induce turbulence at lower NRe. Confirmation of turbulent flow, based on the deviation between ideal and non-ideal plug flow behavior for specific operating conditions, can be determined experimentally. Continuous thermal treatment is most uniform when the flow through the retention loop is turbulent since the residence times of individual streamlines become less variable. The parabolic velocity profile associated with laminar flow leads to variable residence times [67]. Specifically, for laminar flow the mean velocity of a viscous fluid through a pipe is one-half of the maximum velocity along the axis, and for turbulent flow, the mean velocity is 82% of the maximum value [3]. Thus, there are concerns about laminar flow for viscous solutions in pasteurization (e.g., ice cream mix, egg nog, and liquid egg products) [81]. Non-ideal flow behavior is problematic since each fluid element spends a different length of time in the sterilizer hold phase and thus receives a different level of sterilization. Consequently, it is necessary to characterize the residence time distribution to accurately predict lethality [95], specifically the spectrum of times spent in the sterilizer hold tube by different fluid elements. The extent of the distribution dictates the degree of axial or Taylor dispersion, i.e., concentration gradients along the length of the retention loop. (Radial gradients are assumed negligible.)
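A quick Reynolds-number check of the kind discussed above can be done as follows (Python, CGS units). The 5 cm pipe ID and water-like properties are assumptions for illustration; the 40–100 lpm flows are the system's.

import math

def reynolds(flow_lpm, pipe_id_cm, density_g_cm3=1.0, viscosity_cp=1.0):
    """N_Re = D*V*rho/eta, with viscosity converted from cP (1 cP = 0.01 g/cm-s)."""
    area_cm2 = math.pi / 4.0 * pipe_id_cm ** 2
    v_cm_s = flow_lpm * 1000.0 / 60.0 / area_cm2   # lpm -> cm3/s, then velocity
    return pipe_id_cm * v_cm_s * density_g_cm3 / (viscosity_cp * 0.01)

print(reynolds(40.0, 5.0))   # ~1.7e4: already well into the turbulent regime
print(reynolds(100.0, 5.0))  # ~4.2e4 at the maximum design flowrate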
The holding efficiency, tmin/tm, is evaluated by comparing the minimum holding time, tmin (min), to the mean holding time, tm (min), and the extent of product overheating, tmax/tm, can be calculated using the ratio of the maximum, tmax (min), to mean holding times [55]. Stimulus-response measurement techniques and data analysis to determine the extent of non-ideal flow have been described comprehensively [61, 62, 95]. After a pulse or step change is introduced, tracer concentration is measured as a function of time by sampling effluent at the system outlet. The total area under the concentration versus time curve then is calculated and used to normalize concentration measurements so they can be readily compared for different tracer tests. The exit age distribution, E(t), delineates the fraction of fluid elements exiting the system having a particular hold time, t. It characterizes instantaneous pulse changes (delta function) in tracer concentration. This curve is normalized by dividing measured C(t) values by the area under the resulting concentration versus time curve to obtain E(t), for which the area under the E(t) curve is always one [61]. For E(t) curves, the mean time, tm, is the sum of the individual products of time, normalized tracer concentration, and time interval, Δt (min), which is assumed constant [46]. The distribution variance, σ2, is given by Eq. 5:

$$\sigma^2 = \sum_i \left(t_i - t_m\right)^2 E(t_i)\,\Delta t \quad (5)$$

where

$$t_m = \sum_i t_i\,E(t_i)\,\Delta t$$

The dimensionless variance, σ2/tm2, can be used to estimate the dispersion coefficient based on experimental data. Another concentration versus time curve, F(t), is the probability that a fluid element left the system within time, t, or the volume fraction of the outlet stream that has remained in the system for a time less than t [95]. It characterizes behavior resulting from step inputs of tracer with an initial entering concentration, Co. The ratio of C(t)/Co versus time produces a normalized F(t) curve for which the axes range from 0 to 1 [61]. These distributions are related mathematically according to Eqs. 6 and 7:

$$E(t) = \frac{\mathrm{d}F(t)}{\mathrm{d}t} \quad (6)$$

$$F(t) = \int_0^t E(t')\,\mathrm{d}t' \quad (7)$$

Integration of the E(t) curve to obtain the corresponding F(t) curve via Eq. 7 is accomplished graphically for various Δt [95]. Both the E(t) and F(t) curves also can be made dimensionless in time by normalizing by the mean hold time, tm. Normalization with respect to time is helpful to compare conditions at different residence times, and normalization with respect to concentration assists in comparing data from different tracer experiments. The sterilization efficiency for a given residence time distribution is given by Eq. 8:

$$\frac{N}{N_o} = \int_0^\infty E(t)\,10^{-t/D_T}\,\mathrm{d}t \quad (8)$$

This equation permits quantitative assessment of the sterilization impact from non-ideal flow patterns. The Bodenstein number (or Peclet number or Peclet-Bodenstein number [3, 54]), NBs, is given by VL/Dz, where Dz is the axial dispersion coefficient, m2/s, V is the flow velocity, m/s, and L is the retention loop length, m. This dimensionless group is the ratio of convective transport to axial dispersion [63, 94], and it is used to quantify the extent of axial dispersion. For NBs>>1, there is plug flow with minimal axial mixing and sterilization efficiency is the highest possible [62, 63]. For NBs<<1, axial dispersion is at its worst, with retention loop contents completely mixed along the tube length, and sterilization likely is incomplete. Actual operating conditions fall between these two extremes [63]. The flow system should be designed so that dispersion is minimized with high NBs and high NRe [54], preferably NRe > 2×10^4.
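The tracer bookkeeping above reduces to a few numerical integrals. The sketch below (Python/NumPy) normalizes a measured pulse-response C(t) to E(t), takes the moments of Eq. 5, and folds E(t) against decimal reduction to evaluate the Eq. 8 efficiency. The uniform time grid, the Gaussian example pulse, and the DT input are assumptions of the illustration.

import numpy as np

def rtd_analysis(times_min, conc, d_t_min):
    """Return (t_m, variance, surviving fraction) for a pulse tracer curve."""
    e = conc / np.trapz(conc, times_min)          # E(t): normalized to unit area
    t_m = np.trapz(times_min * e, times_min)      # mean residence time
    var = np.trapz((times_min - t_m) ** 2 * e, times_min)  # sigma^2 (Eq. 5)
    survive = np.trapz(e * 10.0 ** (-times_min / d_t_min), times_min)  # Eq. 8
    return t_m, var, survive

# e.g. a narrow Gaussian pulse centred at 9 min with sigma = 0.1 min:
t = np.linspace(8.0, 10.0, 401)
c = np.exp(-0.5 * ((t - 9.0) / 0.1) ** 2)
print(rtd_analysis(t, c, d_t_min=1.0))  # ~(9.0, 0.01, ~1e-9): about nine logs for D_T = 1 min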
NBs of 3–600 have been reported as typical for continuous sterilizers [30]. The current system’s NBs of about 1.8×10^4 is much higher than this range, most likely due to its longer retention loop. Experimental distribution data may be used to calculate σ2/tm2, NBs, and then Dz [61, 62] using Eqs. 9 and 10. For Dz/VL<<1 and a normal (Gaussian) distribution for the E(t) curve (Eq. 9):

$$\frac{\sigma^2}{t_m^2} = 2\left(\frac{D_z}{VL}\right) = \frac{2}{N_{Bs}} \quad (9)$$

Alternatively, the dispersion coefficient and residence time distribution may be inferred from correlations. For turbulent flow, Eq. 10 applies:

$$\frac{D_z}{VD} = 3.57\sqrt{f} \quad (10)$$

where the Fanning friction factor, f, for the retention loop pipe is obtained from correlations based on the ratio of surface roughness, ε, to pipe diameter, D [78]. An experimental correlation for water, where Dz/VD = 0.25 for NRe = 10^5 and Dz/VD = 0.33 for NRe = 10^4 [61], was used to assess the reasonableness of measured Dz/VL values and to compare NBs values obtained using Eqs. 10 and 11a. Friction factors for the retention loop pressure drop, applicable for both laminar and turbulent flow, were calculated using Eq. 11a (the Churchill correlation [26, 75]) and are shown in Table 9:

$$f = 2\left[\left(\frac{8}{N_{Re}}\right)^{12} + \frac{1}{(A+B)^{3/2}}\right]^{1/12} \quad (11a)$$

where

$$A = \left[2.457\,\ln\!\left(\frac{1}{(7/N_{Re})^{0.9} + 0.27\,\varepsilon/D}\right)\right]^{16}, \qquad B = \left(\frac{37530}{N_{Re}}\right)^{16}$$

Alternatively, these friction factors can be estimated using the Colebrook equation, Eq. 11b [29, 78], and solving iteratively, but this approach was not used:

$$\frac{1}{\sqrt{f}} = -4\,\log_{10}\!\left(\frac{\varepsilon/D}{3.7} + \frac{1.255}{N_{Re}\sqrt{f}}\right) \quad (11b)$$

Rule-of-thumb conditions generating a narrow residence distribution and low dispersion coefficient for flow through a pipe are L/D > 200 and NRe > 12,000 [55]. Influence of solid content of media For sterilizer feed medium that contains solids, ranging from small amounts to in excess of 10 vol.% [31], solids must be adequately wetted and dispersed without clumps. Particles flow at different velocities through the retention loop, and the temperature distribution within a particle is challenging to monitor. Although it is somewhat straightforward to determine residence time distributions, partial differential equations using finite differences are required to model convective–conductive heat transfer between the fluid and particles [20]. Thermal properties of the surrounding fluid are less critical for heat transfer to particles since the heat transfer coefficient, h (cal/s-cm²-°C), between the fluid and particle is limiting [59]. Its effectiveness is shown by the Nusselt number (ratio of total heat transfer to conductive heat transfer), NNu, given by hDp/K, where Dp (cm) is the particle diameter, and K (cal-cm/s-cm²-°C) is the thermal conductivity of the fluid at the processing temperature [22, 49]. If the predicted particle temperature profile is hotter than the actual one, it is possible to obtain incomplete inactivation [20]. The sterilization challenge of large diameter solids is to avoid selecting hold times/temperatures that sterilize solids but damage liquid medium components [89]. The time required for particles to attain sterilizing temperature is on the order of microseconds for particles several microns in size (i.e., media bioburden) and seconds for solids several millimeters in size (i.e., raw material particles), highlighting the need to clarify raw materials [3, 30]. Time-temperature integrators have been developed to quantify the heating impact on spores within the entire particle. These indicators are spores immobilized in alginate cubes or polymethylmethacrylate designed to have a mechanical resistance to flow through the system similar to that of actual particles [49, 73]. Other tracers have been found to mimic the flow behavior of microbial cells except when the flow is laminar [2].
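Given measured moments, Eq. 9 immediately yields the Bodenstein number. A minimal sketch follows (Python; the example moments are hypothetical, chosen only to land near the loop's reported NBs).

def bodenstein_from_moments(t_m_min, sigma2_min2):
    """Eq. 9 rearranged: N_Bs = V*L/D_z = 2 * t_m**2 / sigma^2 (small dispersion)."""
    return 2.0 * t_m_min ** 2 / sigma2_min2

print(bodenstein_from_moments(9.0, 0.1 ** 2))  # 1.62e4, near the ~1.8e4 reported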
As the solid content increased from 0 to 30 w/w%, the mean residence time of the liquid phase increased by 40% and the flow less resembled plug flow, which indicated that the presence of solids can significantly influence liquid phase flow patterns [76]. Consequently, a safe design approach for solids-containing medium sterilization uses the maximum rather than the average fluid velocity [8, 9]. Residence time distributions for solid particles also often have more than one peak, representing different groups of particles. Experimental methods to quantify axial dispersion Testing of dispersion has been done using a variety of tracers, most commonly salts and dyes. Experimental mean residence times calculated from salt tracer measurements in skim milk were close to the average holding time [74]. A salt tracer was found to be adequate for low viscosity and Newtonian fluids only; it overestimated thermal exposure in more viscous fluids [81]. Salt tracers can be saturated sodium chloride solutions, but chloride exposure is not desirable for stainless steel [50]; thus, other salts with high aqueous solubilities (sodium sulfate, sodium citrate, and magnesium sulfate) and/or sodium hydroxide can be substituted. Dye tracers include fast green FCF (Sigma; St. Louis, MO, USA) and basic fuchsin (no vendor given) [46]. Other tracers have been based on chemical reactions, specifically sucrose inversion (hydrolysis to glucose and fructose) when heated in an HCl solution at pH 0–2 or in sulfuric acid at a pH of 2.5 to avoid exposure of stainless steel to chlorides. Changes in optical rotation and freezing point were used to quantify reaction extent [1]. Another tracer used has been the pulse injection of 20 w/w% citric acid and subsequent pH measurement [76]. Finally, temperature spikes also have been effective tracers in scraped surface HEXs [44]. Overview of operation A comparison of major changes between the prior and next generation pilot-scale continuous sterilization systems, as well as expected benefits/risks or potential drawbacks, is summarized in Table 1. A schematic of major equipment components and their arrangement is shown in Fig. 1. System specifications and design criteria are listed in Table 2. After system design was completed for water, its effectiveness was evaluated for concentrated nutrients, typically sterilized separately from the base medium to avoid Maillard reactions or sterilized just prior to delivery to active fed-batch fermentations to avoid storage in a holding tank. The nutrients and concentrations selected were 55 wt.% cerelose (glucose monohydrate; CPC International, Argo, IL, USA) and 50 vol.% glycerol (Superol glycerine; Procter & Gamble Chemicals, Cincinnati, OH, USA). Physical properties for these test media at various temperatures were estimated for water from [37, 42] and DIPPR database tables (dippr.byu.edu). Physical properties for 50 vol.% glycerol and 50 wt.% cerelose were modeled using Aspen Plus (AspenTech, Cambridge, MA, USA) process simulation software with physical properties database information.
Table 1 Comparison of major changes in the large continuous sterilization system

Item | Prior design | New design | Expected benefit / Risk or potential drawback
Final heating method to attain sterilization temperature | Direct steam injection | Indirect heating loop | Less dilution, improved stability with respect to source steam fluctuations, no adulteration from plant steam additives / higher cost
Flowmeter | Magnetic | Coriolis | Ability to sense deionized water / position of flag critical
Number of different retention loop lengths | Able to increase/decrease by two tubes for 16–30 tubes (tR = 4.0–12.5 min), removable connections at both ends of all tubes | Five configurations from 18 to 30 tubes (tR = 5.4–22.5 min), removable jumpers at same end of selected tubes | Fewer fittings / limited visual inspection
Flowrate turn down | 60–100 lpm | 40–100 lpm | Avoids separate smaller unit / multiple ranges for tuning control
Pressure safety device | Safety valve only | Rupture disc and safety valve with tell-tale pressure gauge | Sanitary disc in process contact, evident when disc blown / added cost and maintenance
Second cooler of same size as process cooler (“flash” cooler) | Absent | Present | Ability to conduct water sterilization, installed marker for process cooler / extra expense
Booster pump with pressure control on recuperator outlet | Absent | Present (used with centrifugal feed pump only) | Sterile media at higher pressure than non-sterile media / increased complexity of tuning and operation
Retention loop insulation | Insulated box without packing, 4–6°C temperature drop | Insulated box with packing, <2°C temperature drop | More adiabatic and isothermal / modest additional expense
HEX plate thickness | 1/4″ | 3/16″ | Lower cost and higher surface area per unit volume / higher risk of breach
HEX aspect ratio | | | Higher velocities / increased effect of channeling due to gap and drain notches
  Recuperator | 2.7 | 3.46 |
  Coolers | 4.76 | 2.91 |
  Heater | 0.93 (horizontal cross flow for condensing service) | 1.31 |
HEX process side channel thickness | 0.25″ (coolers 0.375″) | 0.25″ (coolers 0.25″) | Higher surface area per unit volume / higher pressure drop
HEX utility side channel thickness (coolers) | 0.75″ | 0.5″ | Higher surface area per unit volume / higher pressure drop
Note: Aspect ratio is the HEX diameter divided by its width.

Fig. 1 Schematic of sterilizer system. a major components, b switching station

Table 2 System specifications and design criteria

Parameter | Design range (min–max)
Sterilization hold temperature (T) | 135–150°C
Retention loop hold up volume (Vs) | 540–900 L
Flowrate (to achieve design recuperator HEX heat recovery) (Q) | 40–100 lpm at 15–60°C (water); 40–100 lpm at 60°C (55 wt.% cerelose); 40–65 lpm at 25°C (55 wt.% cerelose); 40–91.5 lpm at 25–60°C (50 vol.% glycerol); 40–88 lpm at 15°C (50 vol.% glycerol). Flow rates <40 lpm may not achieve sufficient back-pressure for the selected sterilization temperature to avoid flashing
Feed temperature (Tin,cold,ext) | 15–60°C (water, 50 vol.% glycerol); 25–60°C (55 wt.% cerelose). A feed temperature of 15°C for 55 wt.% cerelose is insufficient to maintain a solution
Residence time (tR) | 5.4–22.5 min
Back-pressure (P) | 3.5–5 kgf/cm2 (typically 4.1 kgf/cm2). Sufficient pressures used to avoid flashing: >2.15 kgf/cm2 for 135°C; >3.93 kgf/cm2 for 150°C
Retention loop temperature drop (900 L volume and inlet temperature of 150°C), ΔT | 2.0°C for 40 lpm, 1.5°C for 60 lpm, 1.0°C for 80 lpm, and 1.0°C for 100 lpm
Heat recovery (HR) | >70–80% depending on inlet feed temperature, media type, and flowrate: 78.9% (100 lpm water, 60°C); 75.5% (100 lpm 55 wt.% cerelose, 60°C); 78.9% (91.5 lpm 50 vol.% glycerol, 60°C)

The sterilizer had several distinct phases that are depicted by Fig. 2 and described briefly below:
System start up included (1) leak tests of the cold system under pressure, (2) flowrate and totalizer accuracy checks versus a decrease in feed tank volume, and (3) a leak re-check after raising the system to sterilization temperature. Proper operation and reliability of instrumentation were assured by evaluating all pressure and temperature transmitter and gauge readings for consistency. Prior to system sterilization and before introducing steam or superheated water, draining of process water, supplemented by evacuating with 90 psig air, was necessary to prevent stress corrosion cracking of HEXs [93].
Fig. 2 Overview of sterilizer phases
The next phase was steam sterilization using four high point steam injection points, one of which was located at the outlet of the retention loop, and setting the hot water heating loop to 135°C, slightly above the corresponding steam sterilization temperature of 134°C for the 30 psig (2.1 kgf/cm2) steam supplied. After 2 h of steam sterilization, the system was transitioned from steam to water carefully (over a period of 1 h) to maintain sterility, or with less care if water sterilization was planned next. Steam sterilization could be conducted for the system up through the process cooler as well as up through the “flash” cooler (Fig. 1a). Use of the “flash” cooler was the preferred configuration since it provided an extra buffer during steam collapse. During this next phase of steam-to-water transition, the system flowrate was started with 15 lpm of water flowing to the sewer after the “flash” cooler using the Moyno pump. All steam injection points, such as the medium distribution system, were closed, and the hot water generated by the heater provided sufficient back-pressure. The system was set up for water sterilization (i.e., recuperator non-sterile side bypassed and the “flash” cooler used to cool sterilizing water so that the process cooler could be sterilized), and the hot water loop was set at 150°C in cascade (set point for water sterilization). As water entered the system, the temperature of the retention loop rose from 133 to 150°C, while the temperature of the process cooler fell from 148 to 122°C. (The temperature drop of the process cooler was not a sterility risk since the entering 15 lpm water flowrate, sterilized at 133°C, resulted in a sufficient Fo of 965 min to assure sterility of the retention loop effluent.) Loss of incoming water due to boiling while the system was at a lower backpressure was believed to be minimized by the nearly closed “flash” cooler backpressure valve (expected fill volume 1,215 L, actual volume 1,218 L). During water sterilization, incoming cold water was circulated for two passes (one pass if the system was already hot from steam sterilization) at 60 lpm using the centrifugal inlet feed pump, after it rose to the sterilization inlet hold temperature of 150°C. This required previously steamed-through system block/drain valves since users were not comfortable that conduction adequately sterilized through them when closed. The non-sterile side of the recuperator HEX was bypassed to ensure that the sterile side reached sterilization temperature. Cooling water was applied to the “flash” cooler to ensure that the process cooler attained sterilization temperature. For the target sterilization temperature of just below 150°C and a 60 lpm water flowrate, the temperature reached about 148.5°C at the sterile side of the recuperator and 146.5°C at the sterile side of the process cooler.
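The parenthetical Fo figure above is easy to corroborate with the isothermal form of Eq. 2. The check below (Python) assumes the full 900 L loop at 15 lpm and Z = 10°C, which lands within about 2% of the stated 965 min.

def fo_isothermal_min(hold_min, temp_c, z_c=10.0):
    """Isothermal Eq. 2: Fo = t * 10**((T - 121)/Z)."""
    return hold_min * 10.0 ** ((temp_c - 121.0) / z_c)

t_r = 900.0 / 15.0                    # ~60 min residence at 15 lpm
print(fo_isothermal_min(t_r, 133.0))  # ~951 min vs. the quoted 965 min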
Water sterilization was redone during medium sterilization if medium diversion was necessary owing to a system sterility upset. After taking immediate action to divert media away from the production vessel, water re-sterilization was conducted by (1) diverting flow through the “flash” cooler and enabling its pressure control loop, (2) fully opening the process cooler back-pressure valve, (3) conducting water re-sterilization, (4) enabling the process cooler pressure control loop, (5) fully opening the “flash” cooler back-pressure valve, and then (6) resuming medium sterilization. When switching from the “flash” to the process cooler, it was necessary to maintain sterile conditions. After the system was sterilized and running on water, typically in recirculation mode or emptying into the system sewer, the switching valve station was used to divert flow to distribution. Water now flowed to a waste vessel or the process sewer located near the eventual medium receiving vessel. After conditions stabilized, the sterilizer inlet feed was switched from water to medium. Again, after conditions stabilized, flow was switched to the receiving vessel. When the receiving vessel was filled sufficiently, sterilizer effluent was switched back to the waste tank, the sterilizer inlet feed was switched back to water, and then effluent was switched back to either the recirculation vessel or the system sewer. After media sterilization, a thorough water rinse was conducted at sterilization temperature and the system was cooled to 60°C for cleaning. Alkaline and/or acid cleaning solutions were used depending on the nature of the soil. After cleaning, the system was cooled and drained completely. Equipment design The system’s five skids were designed and fabricated at the vendor’s shop and delivered with only field installation of interconnecting piping required. To minimize design miscommunications, three-dimensional piping models were used for skid piping plans, which could be reviewed remotely by the customer. Ball valves were used instead of diaphragm valves for hot temperature service. Hazardous energy control was carefully considered, with locking devices installed and valve placement selected for facile equipment isolation and operability. Equipment was citric acid-passivated after installation. Each relief device on the process side consisted of a flanged rupture disc (RD) with a pressure indicator and telltale as well as a pressure safety valve (PSV) that reseated after the source of excessive pressure was removed. These devices were placed directly after the positive displacement Moyno (Robbins and Myers; West Chester, PA, USA) system inlet feed pump, the recuperator outlet, and the booster pump. Discharges were piped to return to the feed tank for safety as well as for medium recovery. Piping was designed such that no PSV devices were required on the process side to minimize the risk of system integrity disruption. Sample points were located on the inlet feed (pre-sterilization, prior to recuperator) and sterilized medium outlet (post-sterilization, after process cooler) lines. Heat exchangers The chief goals of HEX design are to optimize cost, heat transfer, size, and pressure drop [48]. The type of HEX selected was a spiral, which was preferred over alternative shell and tube, plate and frame, or concentric double pipe designs. A spiral HEX consists of two long, flat, preferably seamless sheets of metal plate, separated by spacers or studs, wrapped around a center core or mandrel, which forms two concentric spirals.
Alternate ends are welded (both by machine and manually) to create separated flow channels. Hot fluid enters the center (flows inside to outside) and cold fluid enters on the exterior (flows outside to inside) to achieve countercurrent flow. Details of spiral HEX design are described elsewhere [71]. Advantages of spirals are chiefly that they require less space per unit of heat transfer surface area [104]. Their continuously curved channel, unrestricted flow path, and presence of spacers increase turbulence due to secondary flow effects, which maintains solids in suspension [13, 103]. Fouling is lower than in shell and tube designs since cross-sectional velocities increase as channel size decreases, creating a scrubbing effect [13, 19]. (In contrast, as individual tubes of shell and tube exchangers plug, flow is diverted into unplugged tubes.) Spirals are particularly well suited for slurries and many viscous fluids [103]. Specifically, slurries can be processed at velocities as low as 2 ft/s (0.61 m/s) [71]. Periodic, thorough cleaning can be conducted by simply removing the cover to expose the spiral cavities and cleaning with a high pressure water source. Evaluation of thermal effectiveness can be done by calculating the number of thermal transfer units, NTU [25, 100], using Eq. 12:

$$NTU = \frac{U\,A_{HEX}}{Q_M\,C_p} \quad (12)$$

where AHEX is the heat transfer area, m2, QM is the mass flow rate (kg/s), U (W/m2-K) is the overall mean heat transfer coefficient between the fluid streams, and Cp (J/kg-K) is the specific heat capacity of the fluid at constant pressure. This quantity also can be obtained from individual stream temperatures, where Tin,hot,ctr is the temperature of the incoming hot stream entering in the center, Tin,cold,ext is the temperature of the incoming cold stream entering on the periphery, Tout,hot,ext is the temperature of the outgoing hot stream exiting on the periphery, and Tout,cold,ctr is the temperature of the outgoing cold stream exiting in the center. In this case, the temperature rise of the cold stream is divided by the log mean temperature difference (LMTD) for the HEX:

$$NTU = \frac{T_{out,cold,ctr} - T_{in,cold,ext}}{\Delta T_{ln}}$$

An NTU of 1.0 corresponds to a shell and tube HEX; NTU > 1 represents overlap of hot and cold side temperature ranges, indicative of spiral HEXs. Thermal effectiveness also can be evaluated using Eq. 13 to calculate the thermal effectiveness factor, TE [25, 100]:

$$TE = \frac{T_{out,cold,ctr} - T_{in,cold,ext}}{T_{in,hot,ctr} - T_{out,cold,ctr}} \quad (13)$$

Overall, Eq. 13 represents the change in recuperator cold side stream temperature divided by the temperature difference of the streams flowing through its center connections (i.e., incoming, sterilized, hot medium exiting the retention loop and outgoing, pre-heated, non-sterile medium). Finally, thermal effectiveness can be evaluated by calculating the heat recovery, HR, using Eq. 14:

$$HR = \frac{T_{out,cold,ctr} - T_{in,cold,ext}}{T - T_{in,cold,ext}} \quad (14)$$

which represents the heat gained by the incoming cold medium after passing through the recuperator divided by the total heat gained by the cold medium after passing through both the recuperator and heater HEXs, T being the sterilization hold temperature attained after the heater. Heat recoveries improve with clean HEXs (higher heat transfer coefficient), lower flowrates (permitting more time for heat transfer), and higher inlet feed temperatures (lower medium viscosity, which improves the heat transfer coefficient). When the temperature difference on both sides of the recuperator is the same, then NTU is replaced by TE, and Eq. 12 cannot be used since ΔTln = 0 [14]. For consistency in comparing design and observed performance over the entire sterilizer system, the value of Tin,hot,ctr (retention loop outlet temperature) used for calculating design values in Eqs. 12, 13, and 14 was assumed to be identical to the retention loop inlet temperature (T), i.e., the retention loop was assumed to be isothermal (adiabatic).
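Since all three recuperator metrics reduce to stream temperatures, they can be computed side by side. The sketch below (Python) follows Eqs. 12–14 as written above; the example temperatures are illustrative, chosen only to match the ~79% design heat recovery for 60°C water feed.

import math

def ntu(t_in_hot_ctr, t_in_cold_ext, t_out_hot_ext, t_out_cold_ctr):
    """Eq. 12, temperature form: cold-side rise divided by the LMTD."""
    dt1 = t_in_hot_ctr - t_out_cold_ctr
    dt2 = t_out_hot_ext - t_in_cold_ext
    lmtd = dt1 if abs(dt1 - dt2) < 1e-9 else (dt1 - dt2) / math.log(dt1 / dt2)
    return (t_out_cold_ctr - t_in_cold_ext) / lmtd

def te(t_in_hot_ctr, t_in_cold_ext, t_out_cold_ctr):
    """Eq. 13: cold-side rise over the center-connection temperature difference."""
    return (t_out_cold_ctr - t_in_cold_ext) / (t_in_hot_ctr - t_out_cold_ctr)

def hr(t_sterilization, t_in_cold_ext, t_out_cold_ctr):
    """Eq. 14: recuperator heat gain over total (recuperator + heater) gain."""
    return (t_out_cold_ctr - t_in_cold_ext) / (t_sterilization - t_in_cold_ext)

print(hr(150.0, 60.0, 131.0))  # ~0.79, i.e. ~79% heat recovery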
For consistency in comparing design and observed performance over the entire sterilizer system, the value of T_in,hot,ctr (retention loop outlet temperature) used for calculating design values in Eqs. 12, 13, and 14 was assumed to be identical to the retention loop inlet temperature, i.e., the retention loop was assumed to be adiabatic and therefore isothermal. The sensitivity of these three parameters to small temperature measurement errors of ±1°C in the expected temperature change of each HEX stream was estimated at ±5.7% for NTU, +3.5/−12.5% for TE, and ±3.0% for HR. Specific limiting performance case scenarios depended on media type and inlet feed temperature. A higher inlet feed temperature (60°C) was the worst case for the cooling HEX, since the recuperator removes less heat from sterile medium. A lower inlet feed temperature (15°C) was the worst case for the recuperator, since it represents the greatest challenge to HR.

Heat exchanger design and material selection were important to extending the unit’s lifetime. A pressure rating of 150 psig was selected to match piping specifications, specifically flange connections. Thicker gauge material permitted wider spacing of studs (directly affecting cost [58]) and minimized corrosion without significantly reducing heat transfer [13], but it decreased HEX surface area per unit volume (Table 1). All HEXs underwent hydrostatic as well as helium leak tests. Studs for spacing were welded only partially around their bases, so a small crevice existed at each stud. These crevices were considered unavoidable, and they were accepted since they were shallow enough to permit adequate contact with sterilizing and cleaning fluids. Heat exchanger diameter, width, and channel spacing were designed to minimize fouling and deposits by ensuring that channel velocities were sufficiently high during operation. Channel widths of ¼″ were used, except for the utility sides of cooler HEXs, where a channel width of ½″ was selected to minimize plugging due to cooling water deposits and to reduce pressure drop. Bulk velocities for each process fluid are shown in Table 10. A sanitary design was utilized, with a continuous sheet for coil formation, a tapered channel transition for the medium inlet and outlet, external bracing of shell connections, back-welding of the center pocket stiffener as much as possible, elimination of additional center stiffeners, and polishing/cleaning of all internal welds. Process connections were 150 psig bolted, milled, lap-joint flanges possessing a right angle rather than a bevelled edge, to line up directly with the gasket and avoid crevices. A solid 316 L stainless steel door eliminated the process side weld around the center nozzle that otherwise would be required to attach a stainless steel skin to a carbon steel door. Heat exchangers initially were designed to include a gap between the spiral face and cover to minimize gouging of the door due to distortion or telescoping. Spirals can “grow” during thermal cycling and cut into full-face gaskets, if present [71], or into door covers if annular gaskets are utilized without a sufficient gap. Telescoping also was minimized by not applying pressure to one HEX side without either the opposite door bolted closed or suitable bracing installed on the open side. An annular gasket initially was selected which, when placed in its groove, provided a 1/16″ gap between the exchanger spiral surface and the door. At first, this gap was considered small enough that any short-circuiting negligibly impacted performance, particularly the recuperator HR. Subsequently, the gap size was realized to be critical, especially for high aspect ratio (“pancake” type) HEXs like the recuperator.
The smoothness and straightness of the spiral and door faces (tolerance of ±1/32″) also assured a consistent gap, minimizing bypassing and maximizing heat transfer; HEXs were modified to be within this tolerance. Observed pressure drops during operation (150°C inlet retention loop temperature, 100 lpm water) approached design values for cases where this gap was minimized through installation of compressible gaskets. A full-face gasket installation ultimately was implemented using both an annular gasket (Gylon 3510; Garlock Sealing Technologies, Palmyra, NY, USA) and a full-face gasket (Gylon 3545), with the gasket material compressed so that it cut into the spiral face. Low point drains were installed to permit complete system drainage, with steam barriers applied to sterile process side drain valves to reduce sterility risk. These drains were fed by small “U” shaped notches in each wind of the spiral from the center down to the door drain. These notches were expected to impact performance negligibly relative to the gap. The extent to which gaps or notches remained when a pliable, full-face gasket was installed was estimated by qualitative assessment of HEX gravity draining rates, which corresponded to measured pressure drops closer to design values. For the HEX gaskets selected, the water remaining after gravity drain was 20–50% of the entire HEX hold up volume, suggesting only very small gaps. This water was removed by subsequent blow-down using a 90 psig air supply.

A second cooling HEX was required for water sterilization conducted without a pressurized recirculation vessel. This HEX was cooled with chilled/cooling water and not by flashing, but it nevertheless is referred to as a “flash” cooler throughout this paper. Since this pilot scale system was used intermittently, a sterilized system was not maintained continuously by recirculating sterile water between media runs, as often is done in production facilities. This “flash” cooling HEX also was sized to replace the process cooling HEX directly. Since process coolers experienced the highest extent of thermal cycling, they were more likely than the other HEXs to fail based on previous experience. In addition, it was desirable to avoid using chlorine treatments to reduce bioburden in chilled and tower cooling water, since this adversely affected stainless steel integrity [98]. Other causes of stress cracking and pitting corrosion during normal operations and cleanouts also existed [55], and their impact needed to be minimized. Chilled/cooling water flowrates to cooling HEXs were sized for reasonable exit temperatures to minimize load on the chiller/cooling tower and to reduce the deposits that formed at outlet cooling water temperatures above 50°C [106]. The peak design condition was water sterilization of the system, not production of sterile media. The observed rise in chilled water temperature was within design values when full cooling was applied, but not when the control valve restricted flow to attain the desired outlet temperature set point (Table 6). Less water was used for cooling when under control than was assumed in the design, since (1) the process and “flash” coolers were oversized and (2) media outlet temperature and chilled water flowrate cannot both be specified; fixing the outlet temperature determines the required cooling water flow through an energy balance, as sketched below.
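The following minimal Python sketch illustrates that trade-off with a simple energy balance. The temperatures are the observed Table 6 values for the process cooler at 100 lpm; the function name and the water-like heat capacity assumption are illustrative additions, not the paper's.

```python
def chilled_water_flow_lpm(q_process_lpm: float, t_hot_in: float, t_hot_out: float,
                           t_cw_in: float, t_cw_out: float) -> float:
    """Chilled water flow needed for a given cooling duty; assumes both streams
    are water-like so heat capacities and densities approximately cancel."""
    duty = q_process_lpm * (t_hot_in - t_hot_out)   # proportional to heat removed
    return duty / (t_cw_out - t_cw_in)

# Table 6, observed: 100 lpm of water cooled 46 -> 35 degC, with chilled water
# warming from 8.6 degC to an observed 41 degC outlet
print(f"{chilled_water_flow_lpm(100.0, 46.0, 35.0, 8.6, 41.0):.0f} lpm")  # ~34 lpm
```

Once the controller holds the process outlet at 35°C, the chilled water flow is no longer a free variable; it follows from the duty and the allowed chilled water temperature rise.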
Hot water heating loop

A hot water (tempered) heating loop involves indirect heating without direct steam contact; it utilizes a HEX, an expansion tank, and a circulation pump to heat water to above 150°C. It is more expensive than direct steam injection, since additional equipment is required, but hot water loops have some key advantages. Direct steam injection can be accomplished with a specialized steam/water mixing valve, for example a Pick heater [79]. Although direct injection is more energy efficient, since heat-up is almost instantaneous, its ability to provide accurate temperature control has been debated. It has a faster response time, can be used with solids-containing media, and is easier to clean and maintain [97], but it is sensitive to changes in source steam pressure and media composition. Limited theoretical design information is established for these mixers, although detailed photographic examination of injected steam characteristics in water as a function of flow Reynolds number is available [77]. The key drawback of direct steam injection is process stream dilution, which can be up to 20 vol.% [83]. Excess water must be removed by subsequent flashing elsewhere in the system, or the initial feed concentration must be adjusted. Also, since medium is exposed directly to steam, it may accumulate any additives or iron present in the steam [15]. Finally, there can be additional noise from direct steam injection into flowing liquid in some applications. The latent heat released by condensing steam (2,260 kJ/kg) is about 540-fold higher, on a per degree basis, than the sensible heat delivered by hot water (heat capacity, Cp, of 4.2 kJ/kg-K) [5]. In addition, injected steam heat transfer coefficients are 60-fold higher than indirect condensing steam heat transfer coefficients [77] and are not reduced by fouling as in a HEX [45]. Consequently, direct steam injection offers advantages due to its higher steam utilization efficiency [97] for high temperature sterilization of milk [82] and beer mash heating [5]. Nevertheless, based on the advantages of indirect heating for media sterilization, chiefly the avoidance of dilution and of steam-borne contaminants, a hot water loop was implemented. A shell and tube 316 L stainless steel HEX was selected for this application, since a spiral HEX was not found to be cost-effective for the size required. The hot water loop utility piping, originally carbon steel, exhibited substantial amounts of iron oxide corrosion due to its operation at higher temperatures. This build-up throughout the hot water loop subsequently was removed by a citric acid wash, and the piping was replaced with stainless steel. The hot water loop was designed for an operating temperature of up to 160°C and a pressure of 75 psig, using compressed air (>80 psig) applied to the expansion tank. A computer limit of 160°C for the loop temperature was necessary to avoid inadvertent system over-pressurization, since the steam control valve opened fully during initial loop heat-up. Retention loop inlet temperature, rather than the outlet temperature commonly used in pasteurization applications [82], was controlled, owing to the longer loop residence times of medium sterilization applications. Sterilization inlet temperature was controlled in either automatic or cascade mode. In automatic mode, a single loop modulated hot water temperature to control retention loop inlet temperature at the outlet of the heater. With this single loop feedback control, wider periodic fluctuations have been found, but response time is quicker [27]. In cascade mode, inlet temperature was used for primary control, and hot water temperature was used as the secondary control loop.
Using cascade control, the slave (inner or secondary) loop manipulated the steam control valve to control the water outlet temperature from the hot water HEX. The master (outer or primary) loop manipulated the secondary (slave) loop set point to control medium outlet temperature from the final heating HEX prior to medium entry into the retention loop. This control has been found to be smoother and more accurate [91], but it has an approximately twofold longer response time [27]. Steam valve signals were more stable under cascade control, with more constant steam flows instead of oscillation between high and low steam supply flow rates as in single loop control. An alternative feed forward control algorithm also has been used in other systems to anticipate process upsets due to load changes and to ensure tight control of product outlet temperature from the retention loop [56], but this strategy was not implemented in the current system. The hot water loop temperature controllers initially were tuned using the Ziegler–Nichols closed loop method [107] for both primary and secondary loops. Tuning constants for the secondary loop first were determined from the ultimate gain (the controller gain that causes continuous cycling) and the ultimate period (the cycle period). Next, the primary loop was tuned with the secondary loop placed in cascade using these constants. The speed of the slave (secondary) loop was slowed considerably and reset minimized [70] to gain more precise control (±0.1°C) of inlet temperature in cascade mode. Together with low heat loss over the insulated retention loop, this tuning strategy permitted operation at more uniform sterilizing temperatures. Thus, sterilization temperature effects on subsequent production media performance could be quantified more precisely, and operation with a safety factor of several degrees was avoided.

Retention loop

The retention loop (holding tube or box) was composed of thirty 2″ diameter tubes (schedule 10 pipe with an ID of 2.16″ and a wall thickness of 0.11″), each with only one weld over its 40 ft straight length, plus 29 connecting U-bends. These U-bends were fabricated from machine-bent 2″ pipe, resulting in a minimum thickness at the bend slightly less than that of normal schedule 10 pipe. The total length was 1,253 ft (382 m). The L/D was 9,318, suggesting that a narrow residence time distribution (and low axial dispersion) was achievable with sufficiently turbulent flow. Additional mixing at each U-bend owing to its curvature might further reduce axial concentration gradients, although this effect has not been mentioned specifically in the literature. The retention loop was designed to be drainable, and tubes were arranged in two banks in an “accordion-type” fashion on a 0.11 incline. Pipe supports were designed to permit expansion [96]. A high point vent valve was installed in the loop for draining and for bleeding of air during steaming/filling. (This vent valve did not appear to be required, since a negligible amount of air exited the system when it was opened.) Jumpers were configurable for variable retention loop volumes of 18 (540 L), 20 (600 L), 24 (720 L), 28 (840 L), or 30 (900 L) tubes. All of the jumpers were located on the same end of the retention loop with a removable insulation cover. Removable U-bends were attached using pipe-to-I-line ferrule fitting adaptors with minimal welds, maintaining the inner diameter so that flow was not constricted.
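Since the jumper-selected loop volume and the flowrate together set the sterilization residence time, a small Python sketch of that bookkeeping may be helpful; the dictionary and function names are illustrative only.

```python
# Jumper-configurable retention loop volumes from the text (tubes -> liters)
LOOP_VOLUMES_L = {18: 540, 20: 600, 24: 720, 28: 840, 30: 900}

def residence_time_min(n_tubes: int, flow_lpm: float) -> float:
    """Nominal plug flow residence time tR = V/Q for a configured loop."""
    return LOOP_VOLUMES_L[n_tubes] / flow_lpm

for n_tubes in (18, 30):
    for flow in (40.0, 100.0):
        print(f"{n_tubes} tubes at {flow:.0f} lpm: "
              f"tR = {residence_time_min(n_tubes, flow):4.1f} min")
# The full 30-tube (900 L) loop spans 9.0 min at 100 lpm to 22.5 min at 40 lpm,
# matching the 9-22.5 min range cited later for heat loss testing.
```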
The retention loop was required to operate as close to adiabatically, and hence isothermally, as possible. Improved insulation was installed by packing fiberglass blankets inside 2″ thick fiberglass board surrounding the faces between the frame and tubes. This method was preferred over insulating individual tubes, since the large insulation thickness required around each tube would have enlarged overall loop dimensions adversely. The temperature profile along the length of the retention loop was assumed to be linear [35]. With this improved insulation, observed retention loop temperature drops as a function of flowrate compared favorably with design values.

Flow and pressure control

Flow and pressure control was composed of five loops that, although not related by software linkages, were closely related operationally. Two flow control valves were installed, one located after the centrifugal inlet feed pump and one after the centrifugal booster pump (Fig. 1a). The system was designed to utilize either a positive displacement (Moyno) or a centrifugal inlet feed pump. When the positive displacement pump was used, both flow control valves were held fully open and flow was controlled using the Moyno pump’s variable speed drive. When the centrifugal feed pump was used with the booster pump, booster pump suction pressure was controlled just prior to the booster pump suction, and flow was controlled at the booster pump discharge only. (The flow control valve after the centrifugal feed pump was not used, since this starved the booster pump suction.) Three pressure control valves also were installed. One pressure control valve, located on the recuperator inlet piping on the booster pump suction side, was set to maintain a positive pressure differential to avoid leakage of non-sterile feed should a HEX breach develop [sterile side at higher average pressure (0.8–1.5 kgf/cm²) than the non-sterile side]. The second and third valves were located after the process and “flash” coolers, respectively (Fig. 1a), to maintain system pressure above the boiling point, which eliminated noise and potential damage from hammering [96]. To avoid leakage of non-sterile cooling fluids, the pressure on the utility side of the process cooler can be operated slightly below that of the sterile process side [96], by raising the system back-pressure or by reducing the chilled water supply pressure (i.e., by opening the supply to the nearby “flash” cooler). Piping and HEX pressure drops were designed to be low, so that sufficient back-pressures were attainable at the system outlet to provide adequate protection against flashing. There were limits to the range of suitable temperature and back-pressure combinations that avoided operating close to the fluid flashing point (Table 2). In addition, sufficient system flowrate (>40 lpm) was necessary to maintain back-pressure and flow consistency to avoid flashing. The tuning strategy for the system began with tuning each flow and pressure loop individually and then operating them together, slowing the response of the pressure loops as necessary to eliminate interactions. Loops were tuned using the Ziegler–Nichols method [107]; however, the loop response with these settings oscillated excessively even before approaching set point. For flow and liquid pressure loops, large proportional bands (i.e., small gain) and fast reset action (i.e., small reset/integral time) are recommended [6].
Proportional and integral constants only are recommended for most liquid flow control, with only integral constants (i.e., floating control) recommended for noisy loops [64]. Consequently, gain and reset time were reduced so that integral control was the dominant action, which reduced oscillations. In addition, booster pump suction pressure control deliberately was detuned to have a slow response, so as not to interact with the flow and system back-pressure controllers (which themselves did not interact with each other). Thus, both booster pump flow and suction pressure controllers could be used together with no instability. Table 3 shows the tuning constants selected; a sketch of the controller form these constants parameterize follows below.

Table 3 Optimized tuning constants for sterilization of test media

Parameter | Kp | T1 (min/repeat) | T2 (min)
Flow control valve after centrifugal feed pump (40–100 lpm) | 0.08 | 0.05 | 0
Pressure control valve on suction of booster pump, 40–60 lpm | 0.05 | 0.30 | 0
Pressure control valve on suction of booster pump, 80–100 lpm | 0.05 | 0.20 | 0
Flow control valve after centrifugal booster pump (40–100 lpm) | 0.045 | 0.05 | 0
Hot water temperature control of retention loop inlet, sterilization temperature of 135–150°C or cleaning temperature of 60–80°C (40–100 lpm), primary (outer) loop | 0.36 | 0.5 | 0.225
Hot water temperature control of retention loop inlet, secondary (inner) loop | 20.0 | 20.0 | 0.20
Pressure control after process cooler, Moyno or centrifugal inlet feed pump, 40–60 lpm | 0.05 | 0.10 | 0
Pressure control after process cooler, Moyno or centrifugal inlet feed pump, 80–100 lpm | 0.05 | 0.08 | 0
Pressure control after “flash” cooler, Moyno or centrifugal inlet feed pump, 40–60 lpm | 0.05 | 0.10 | 0
Pressure control after “flash” cooler, Moyno or centrifugal inlet feed pump, 80–100 lpm | 0.05 | 0.08 | 0
Temperature control of process cooler, cooling to 35°C | 1.5 | 1.0 | 0.25
Temperature control of “flash” cooler during sterilization, cooling to 35°C with 35°C inlet feed (40–100 lpm) | 1.5 | 1.0 | 0.25
Temperature control of “flash” cooler during cleaning, cooling to 60°C with 60°C inlet feed, 60 lpm | 0.48 | 1.35 | 0.5
Temperature control of “flash” cooler during cleaning, cooling to 60°C with 60°C inlet feed, 80–100 lpm | 1.5 | 1.0 | 0.25

(1) Zero T2 values were used for the faster (relative to temperature loops) pressure and flow control loops. (2) Slightly different tuning constants were required for (a) Moyno versus centrifugal feed pumps and (b) 40–60 versus 80–100 lpm flowrates, to remain within the desired ±0.1 kgf/cm² back-pressure variation; tuning constants for 40–60 lpm worked up to 80 lpm. (3) At 60 lpm, slower tuning was required for the “flash” cooler when cooling to 60°C for cleaning (inlet feed of 60°C) than when cooling to 35°C (inlet feed of 35°C) during sterilization. (4) The relatively high T2 value of the primary hot water loop compared with its T1 value minimized variations of the slower secondary temperature control loop.
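For illustration, a minimal Python sketch of an ideal-form (non-interacting) PID loop parameterized by these gain/reset/rate constants is shown below. The ideal textbook form is an assumption for demonstration; the vendor's actual algorithm may use an interacting form.

```python
from dataclasses import dataclass

@dataclass
class PIDLoop:
    """Ideal-form PID: out = Kp * (e + integral(e)/T1 + T2 * de/dt).
    Assumed textbook form; the Honeywell implementation may differ."""
    kp: float                 # proportional gain (unitless)
    t1: float                 # integral time, min per repeat
    t2: float                 # derivative time, min (0 disables derivative)
    dt: float = 0.25 / 60.0   # fast-mode update interval of 0.25 s, in min
    integral: float = 0.0
    prev_error: float = 0.0

    def update(self, setpoint: float, pv: float) -> float:
        error = setpoint - pv
        self.integral += error * self.dt
        # Derivative acts on error here, so set point steps "kick"; a real
        # loop typically filters or differentiates the PV instead
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * (error + self.integral / self.t1 + self.t2 * derivative)

# Primary (outer) hot water temperature loop constants from Table 3
outer = PIDLoop(kp=0.36, t1=0.5, t2=0.225)
for pv in (149.0, 149.4, 149.8):   # medium temperature approaching 150 degC
    print(f"output increment = {outer.update(150.0, pv):+.3f}")
```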
Switching valve station

The switching valve station was comprised of several diverter valves to direct the flow of steam, water, or medium to distribution, recycle, condensate trap, or system sewer as desired (Fig. 1b). The system switched according to the following valving arrangements (Fig. 1a, b): (1) system recycle to the circulation tank (after passing through the “flash” cooler; used for clean-in-place (CIP) and water sterilization), (2) transfer (feed) of sterilized medium to fermenters/waste tank, (3) system flow to sewer after the process cooler, (4) system flow to sewer after the “flash” cooler, (5) steam sterilization through the process cooler to its condensate trap, and (6) steam sterilization through the “flash” cooler to its condensate trap. Simultaneously with each pathway switch, a steam barrier was applied to the pathway not in use to maintain sterility. Isometric design of the switching station was challenging, since several automatic valves with actuators were located in close proximity to reduce dead legs. Actuator size was minimized by sizing appropriately, with little excess buffer relative to the facility instrument air pressure. Limit switches were avoided to save additional space as well as to streamline installation and maintenance costs.

Instrumentation

Sterilizer instrumentation is described in Table 4. In general, instrumentation mounting was important both for sanitary operation and for accurate instrument measurements. Remotely mounted transmitters were used where needed to extend the temperature range suitability of instrument sensors (e.g., flowmeters) and where helpful for space and access reasons. Wherever possible, locally indicating transmitters were selected, which permitted operation by a single person since the human/machine interface (HMI) was upstairs in the facility control room. Transmitters were mounted either in a panel (drawback of additional wiring, but able to be factory tested) or on the skid (drawback of crowding skid access, but avoiding the cost of a separate panel).

Table 4 Instrumentation

Parameter | Model | Features
Temperature | Rosemount 3144PD1A1NAM5C2QPX | 30–200°C (hot water loop), 0–160°C (all others)
Flow | Micromotion R100S128NBBAEZZZZ | 0–120 lpm
Pressure | Rosemount 3051CG4A22A1AS1B4M5QP | 0–10 kgf/cm² (feed and booster pumps), 0–6 kgf/cm² (post-cooler HEXs)
Conductivity | Rosemount 225-07-56-99LC/54EC-02-09 | 0–100 mS/cm, tri-clamp connection
Conductivity | Rosemount 403VP-12-21-36/54EC-02-09 | 0–100 μS/cm, tri-clamp connection
Temperature control | Fisher-Rosemount 1052-V200-3610J | Software limit of 160°C for hot water loop
Flow control | Fisher-Rosemount 1052-V200-3610J | Flow control valves usable with either transmitter
Pressure control | Fisher-Rosemount 1052-V200-3610J | Software adjustment to prevent full closing of system back-pressure valve
Steam control | Fisher-Rosemount 667-EZ-358 | 125 psig unregulated plant steam supply
I/P transducer | Marsh-Bellofram 966-710-101 | 3–15 psig, compact
Solenoid | Asco series 541, multifunction ISO 1, mono-stable | Spring-return piston actuators

Accurate measurement of temperature was critical to ensuring that adequate medium sterilization was achieved and to permitting reliable Fo and Ro calculations. Typical accuracies reported in HTST pasteurization equipment are ±0.5°C at 72°C between indicating and recording temperature devices and ±0.25°C at 72°C between test and indicating devices [92]. This compared favorably with the loop accuracy of ±0.21°C for this system, estimated based on stated vendor accuracy for matched sensors [86]. Pressure loop accuracy was ±0.02 kgf/cm². The system back-pressure valve was capable of controlling pressure for flowrates ranging from 40 to 100 lpm using a computer-controlled maximum output of 80% closure, to prevent unintentional shut off when operating with the positive displacement Moyno pump. In contrast, a closure of at least 95% was required when operating with the centrifugal pump, since the pump output pressure was lower. A pressure and temperature gauge or transmitter was installed on the inlet and outlet of both sides of all HEXs for assuring adequate heat transfer performance and determining when HEXs required cleaning. These instruments also were important for evaluating individual unit performance during trouble-shooting of systems of interrelated HEXs [39]. Accurate measurement of volumetric flow was critical to ensuring that medium was sterilized properly for the appropriate residence time. A back-up flowmeter was installed for confirmation. Coriolis meters (Micromotion; Rosemount, Chanhassen, MN, USA), with a meter accuracy of ±0.5% of flowrate (loop accuracy of ±0.6% of flowrate), were selected rather than magnetic meters.
Since flow measurements were based on fluid density, Coriolis meter readings were similar for both deionized and process (city) water (<±0.5 lpm at 60–100 lpm) and within expected variations. Coriolis meters also were insensitive to media composition changes, specifically the switch from media to water, assuming these changes negligibly affected fluid density and were not affected by the hydraulics of medium-to-water switches. However, volumetric flowrate readings were up to 5.5% higher after the recuperator than before it, owing to density decreases with temperature for specific medium types (mass flowrate readings in kg/min were similar); this effect is illustrated in the sketch below. Finally, since air bubble entrainment altered density readings and thus Coriolis flowmeter readings, a variable speed agitator (5:1 turndown) was installed on the larger non-sterile medium feed tanks. For soluble media, shutting off the agitation also minimized air entrainment. Proper Coriolis flowmeter installation was critical to performance. The preferred orientation was in a vertical upward-flow section of pipe, so that the flag filled and drained completely. Alternatives were not attractive: in the horizontal position pointing downwards, the flag does not drain; in the horizontal position pointing upwards, the flag fills incompletely due to air entrapment; and in the vertical position in a downward-flow section of pipe, the flag fills incompletely owing to gravity drainage. It also was necessary to secure the surrounding pipe to minimize interfering vibrations. Conductivity sensors were used on-line to measure changes in composition upon switching inlet feed stream contents and between inlet and outlet streams. One conductivity meter with a range of 0–100 μS/cm detected DIW, with typical conductivities of 2.1 μS/cm, during cleaning. Two other conductivity meters, each with a range of 0–100 mS/cm, were located on the feed (after the inlet feed pump) and outlet (after the process cooling HEX) of the sterilizer system (Fig. 1a). Similarly to the flowmeters, they were mounted in vertical flow sections to ensure no adverse effects on readings. Conductivity monitoring at the cooling water exit may enable instant detection of a cooling HEX breach [93], but such a leak might have to be fairly substantial to be detected, since cooling water and media conductivities are similar in magnitude.
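A short sketch of the volumetric-versus-mass reading effect follows, using approximate water densities (illustrative values; actual media differ, which is why the text quotes "up to 5.5%" for specific medium types).

```python
# Mass flow is conserved across the recuperator, so the indicated volumetric
# flow rises as density falls with temperature: Q_vol = Q_mass / rho(T).
RHO_15C, RHO_120C = 999.1, 943.1   # kg/m3, approximate values for water

q_mass = 100.0 * RHO_15C / 1000.0              # kg/min mass flow for 100 lpm at 15 degC
q_vol_downstream = q_mass / RHO_120C * 1000.0  # lpm indicated after pre-heating
print(f"{q_vol_downstream:.1f} lpm "
      f"({(q_vol_downstream / 100.0 - 1.0) * 100.0:.1f}% above the inlet reading)")
# ~105.9 lpm, i.e. ~6% for water heated 15 -> 120 degC; the mass reading is unchanged
```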
Control system

Strategy

The control system strategy utilized minimal sequencing, with manual operation preferred, both to reduce installation expense and to maximize flexibility. The system was composed of about 100 I/O (input/output) points, with about 55% analog input/output (AI/AO) and 45% digital input/output (DI/DO). The controls were interfaced to an existing Honeywell Total Distributed Control 2000/3000 hybrid system using newly-installed, dual (redundant) Honeywell High performance Process Manager controllers. Field-mounted (remote) I/O was installed inside a Nema 4× enclosure.

Calculated values by the control system

Several calculated values were displayed on the HMI to permit alarming and trending. These parameters included the temperature difference (inlet minus outlet) across the insulated retention loop, the flowrate difference across the recuperator (upstream minus downstream), the conductivity difference (inlet minus outlet), and the pressure difference across the recuperator (outlet of hot side minus inlet of cold side). In addition, the totalized volumetric flowrate was calculated based on flowmeter readings rather than using the flow transmitter totalizer signal, since implementation of the former was more straightforward. Values of Fo and Ro were obtained based on on-line calculation of system residence time; the flowrate measured after the recuperator HEX was used in the calculation. Activation energies, Ea, of 16,800–26,000 cal/mol used for Ro generally are lower than those of the various spore types, 67,700–82,100 cal/mol, used for Fo [34]. Here, an Ea of 67,700 cal/mol was used for Fo [105] and an Ea of 20,748 cal/mol was used for Ro [17]. An adjustable filtering function [53] could be applied to the final calculated Fo and Ro process variable values (PV) to smooth fluctuations caused by pulsations in flow and temperature readings. This function (Eq. 16) had only one user-adjustable parameter, the filter value (FV), and took the first-order form

PV_filtered = FV × PV_raw + (1 − FV) × PV_filtered,previous    (Eq. 16)

With FV set to 1.0 (i.e., no filtering), variations of Fo and Ro were less than 1%. In addition, a user-adjustable input permitted entry of the proper retention loop volume, Vs, to ensure accurate residence time, tR, calculations. Three methods were used to evaluate this on-line calculation for a simulated continuous sterilization run (Table 5): (a) calculation for 1 min residence time intervals along the loop length and summation of the values over the entire length of pipe, (b) use of the average of retention loop inlet and outlet temperatures in the calculation, and (c) averaging of two separate calculations using inlet and outlet temperatures. Although method a was the most accurate, method b was selected since the error was sufficiently small and implementation was more straightforward (a sketch of this calculation appears at the end of this section). In general, errors were smaller for Ro than for Fo. This approach was in contrast to the dairy industry, where a worst case lethality has been calculated using the outlet temperature of the insulated retention loop [91].

Table 5 Comparison of calculation methods for Fo and Ro for a simulated continuous sterilization run with a 2°C temperature drop across the insulated retention loop and tR = 10 min (error calculated relative to method a)

Method | Fo (min) | Error (%) | Ro (min) | Error (%)
(a) Integrate at 1 min residence time intervals | 439.93 | Basis | 31.84 | Basis
(b) Use average of inlet and outlet loop temperatures | 437.00 | 0.67 | 31.82 | 0.06
(c) Average separate calculations using inlet and outlet loop temperatures | 445.62 | 1.30 | 31.88 | 0.13

Tuning constants and control variation

For all control loops, proportional/integral/derivative (PID) control in the fast mode (PID calculation updated every 0.25 s) was utilized based on three parameters (definitions are specific to the Honeywell control system): the proportional gain, Kp, unitless (reciprocal of the proportional band); the integral constant, T1, min per repeat; and the derivative constant, T2, min. Control tuning constants were developed for water (Table 3), then tested and found to be acceptable for different media (55 wt.% cerelose, 50 vol.% glycerol). Slightly different values were optimal for flowrates of 40–60 lpm than for 80–100 lpm, with the lower flowrate range constants performing somewhat better than the higher flowrate range constants between 60 and 80 lpm. Variations in these control loops, characterized under various operating conditions, were found to be acceptable. Hot water loop control performance did not change significantly with media type, since system disturbances expected for media were likely to be dampened relative to water.
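A minimal Python sketch of the method (b) on-line calculation is given below. The Arrhenius reference temperature of 121.1°C is an assumption (it is the conventional basis for Fo); with it, the sketch reproduces the magnitudes reported later (Fo ≈ 3,000 min and Ro ≈ 50–55 min at 150°C and 100 lpm).

```python
import math

R_GAS = 1.987       # cal/(mol*K)
T_REF_C = 121.1     # assumed Arrhenius reference temperature for Fo and Ro
EA_FO = 67_700.0    # cal/mol, spore inactivation [105]
EA_RO = 20_748.0    # cal/mol, nutrient degradation [17]

def lethality_min(t_in_c: float, t_out_c: float, t_res_min: float, ea: float) -> float:
    """Method (b): evaluate the Arrhenius factor at the average of the retention
    loop inlet and outlet temperatures, then scale by residence time."""
    t_avg_k = 0.5 * (t_in_c + t_out_c) + 273.15
    t_ref_k = T_REF_C + 273.15
    return t_res_min * math.exp((ea / R_GAS) * (1.0 / t_ref_k - 1.0 / t_avg_k))

def filtered_pv(pv_raw: float, pv_prev: float, fv: float) -> float:
    """Eq. 16 first-order filter; FV = 1.0 passes the raw value through."""
    return fv * pv_raw + (1.0 - fv) * pv_prev

# 900 L loop at 100 lpm (tR = 9 min), 150 degC inlet with a ~1 degC loop drop
fo_raw = lethality_min(150.0, 149.0, 9.0, EA_FO)
print(f"Fo = {filtered_pv(fo_raw, fo_raw, fv=1.0):,.0f} min")       # ~3,000 min
print(f"Ro = {lethality_min(150.0, 149.0, 9.0, EA_RO):,.0f} min")   # ~53 min
```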
System performance

Water and media testing

The three types of media tested were water [both deionized (DIW) and process (city)], 55 wt.% cerelose, and 50 vol.% glycerol. Heating of non-sterile feed tanks by external jacket platecoils and recycling of cooled effluent back to the system inlet permitted system testing with feeds of differing temperatures over the range of 15–60°C. “Once-through” testing was used only for cerelose, to reduce Maillard reactions that were feared to soil the sterilizer internals. Water and glycerol were recycled by setting the “flash” cooler temperature to the desired inlet temperature, permitting testing of process cooler performance at or above inlet feed temperatures. The manner in which readings were taken affected assessment of their variability: readings were observed for a few seconds, then mentally averaged and evaluated to determine whether the bounce was within reasonable limits. Computer system historical trends, which recorded data every 1 min, also were used to assess variability. Pressure drops were calculated for each HEX for the various media, considering the temperature effect on the inlet feed stream density. The vendor’s proprietary software was used; it did not account for gaps between the spiral face and HEX door and assumed a tighter-than-actual stud spacing, so the design HEX pressure drop was likely overestimated. Re-calculation (data not shown) using published equations for the pressure drop across a spiral HEX [71], which also did not account for the gap impact, resulted in estimates somewhat closer to measured values. The retention loop pressure drop was calculated using a retention loop equivalent length of 1,525 ft (including elbows and pipe-to-tube adapters) and f values according to Table 9; an illustrative calculation is sketched below. Calculated pressure drops were compared to measured values (data not shown). For the 100 lpm flowrate, most measured values were about 30–40% lower than calculated values, with the exception of the heater’s cold side, which was 2.5- to 3.5-fold higher. For the 40 lpm flowrate, observed values were substantially higher (6.5- to 10-fold) than calculated values for the heater’s cold side. These results may indicate difficulty in predicting pressure drops for the lower aspect ratio heater HEX (Table 1), particularly at its higher operating temperatures relative to the other HEXs. Measured pressure drops for the various test media were reasonably similar.
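The retention loop contribution can be estimated from the Darcy–Weisbach equation. The sketch below uses the 1,525 ft equivalent length, a Table 9 friction factor, and an assumed water density at 150°C; the fixed f and density are simplifications, since both vary somewhat with flowrate and temperature.

```python
import math

def retention_loop_dp_psi(flow_lpm: float, f: float = 0.0204,
                          rho: float = 917.0, l_eq_m: float = 464.8,
                          d_m: float = 0.0549) -> float:
    """Darcy-Weisbach pressure drop: dP = f * (L/D) * rho * v^2 / 2.
    Uses the 1,525 ft (464.8 m) equivalent length and 2.16 in. ID."""
    area = math.pi * d_m ** 2 / 4.0          # pipe cross-section, m2
    v = (flow_lpm / 1000.0 / 60.0) / area    # bulk velocity at metered flow, m/s
    dp_pa = f * (l_eq_m / d_m) * rho * v ** 2 / 2.0
    return dp_pa / 6894.76                   # convert Pa to psi

for q in (40.0, 100.0):
    print(f"~{retention_loop_dp_psi(q):.1f} psi at {q:.0f} lpm")
# roughly 0.9 psi at 40 lpm and 5.7 psi at 100 lpm; f rises slightly at the
# lower flowrate (Table 9), which would raise the 40 lpm estimate marginally
```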
System temperatures were calculated for each media type and compared with observed values (Tables 6, 7, 8). Design HR, NTU, and TE were calculated assuming no temperature drop across the retention loop (i.e., retention loop inlet temperature equal to hot side recuperator inlet temperature). Negligible heat loss for the retention loop, HEXs, and piping also was assumed; design values would be higher if these losses were included.

Table 6 System performance using water at 100 lpm (900 L retention volume unless noted otherwise)

Parameter | 15°C feed, design | 15°C feed, observed (Moyno/centrifugal) | 25°C feed, design | 60°C feed, design | 60°C feed, observed (Moyno/centrifugal) | 60°C feed, observed (Moyno, 540 L)
Recuperator cold side inlet feed (°C) | 15.0 | 15/15 | 25.0 | 60.0 | 59/59 | 60
Recuperator cold side outlet/heater hot side inlet (°C) | 120.0 (126.0) | 120/120 | 122.5 (128.0) | 131.0 (135.0) | 131/132 | 131
Heater hot side outlet/retention loop inlet (°C) | 150.0 | 150.1/150.0 | 150.0 | 150.0 | 150.0/150.0 | 150.0
Retention loop outlet/recuperator hot side inlet (°C) | 150.0 | 149.2/149.1 | 150.0 | 150.0 | 149.2/149.1 | 149.0
Recuperator hot side outlet/process cooler hot side inlet (°C) | 45.7 (39.6) | 46/43 | 53.2 (47.6) | 79.5 (75.4) | 80/78 | 77
Process cooler hot side outlet (°C) | 35.0 | 35.0/35.0 | 35.0 | 35.0 | 35.0/35.1 | 34.8
Process cooler cold side inlet (°C) | 6.0 | 8.6/8.2 | 6.0 | 6.0 | 8.3/N/A | N/A
Process cooler cold side outlet (°C) | 12.0 (8.5) | 41/50 | 12.0 (11.6) | 12.0 (11.4) | 49/N/A | N/A
Recuperator heat recovery (HR, %) (Eq. 14) | 77.8 (82.2) | 77.7/77.8 | 78.0 (82.4) | 78.9 (83.3) | 79.1/80.2 | 78.9
Recuperator NTUs (Eq. 12) | 3.46 (4.57) | 3.49/3.68 | 3.50 (4.62) | 3.69 (4.93) | 3.68/4.05 | 4.06
Recuperator thermal effectiveness (TE) (Eq. 13) | 3.50 (4.63) | 3.60/3.61 | 3.55 (4.68) | 3.74 (5.00) | 3.96/4.27 | 3.94

(1) Water was not tested with a 25°C feed, but the design case is included for comparison. (2) The process cooler cold side inlet temperature reading was taken from the building chilled water supply temperature. (3) Design numbers were calculated based on the intermediate temperatures needed to reach 150°C at the retention loop inlet, assuming that most of the load was undertaken by the heating HEX based on a maximum heating loop temperature of 160°C; thus, less than the maximum area of the recuperator was utilized in some cases. (4) Design numbers in parentheses were calculated based on 100% utilization of the recuperator surface area, permitting the hot water loop to operate at values less than 160°C.

Table 7 System performance (900 L retention loop volume) using 55 wt.% cerelose at 65 lpm (25°C inlet temperature) and 100 lpm (60°C inlet temperature)

Parameter | 25°C feed, design (65 lpm) | 25°C feed, observed Moyno (65 lpm) | 25°C feed, observed (Moyno/centrifugal, 40 lpm) | 60°C feed, design (100 lpm) | 60°C feed, observed Moyno (100 lpm) | 60°C feed, observed Moyno (40 lpm)
Recuperator cold side inlet feed (°C) | 25.0 | 25 | 25/26 | 60.0 | 57 | 60
Recuperator cold side outlet/heater hot side inlet (°C) | 105.0 (110.5) | 120 | 126/125 | 128.0 (127.0) | 127 | 130
Heater hot side outlet/retention loop inlet (°C) | 150.0 | 150.0 | 150.0/150.0 | 150.0 | 150.0 | 150.0
Retention loop outlet/recuperator hot side inlet (°C) | 150.0 | 148.8 | 148.1/147.9 | 150.0 | 149.1 | 148.1
Recuperator hot side outlet/process cooler hot side inlet (°C) | 77.2 (64.5) | 54 | 47/51 | 85.2 (86.3) | 82 | 77
Process cooler hot side outlet (°C) | 35.0 | 35.7 | 35.1/34.3 | 35.0 | 35.0 | 35.8
Process cooler cold side inlet (°C) | 6.0 | 8.2 | 7.5/NA | 6.0 | 7.7 (est) | 7.7
Process cooler cold side outlet (°C) | 12.0 (13.3) | 40 | 42/NA | 12.0 (21.1) | 61.4 (est) | 45
Recuperator heat recovery (HR, %) (Eq. 14) | 64.0 (68.4) | 76.0 | 80.8/79.8 | 75.5 (74.4) | 75.3 | 77.8
Recuperator NTUs (Eq. 12) | 1.65 (2.17) | 3.29 | 4.58/4.13 | 2.89 (2.72) | 2.98 | 3.99
Recuperator thermal effectiveness (TE) (Eq. 13) | 1.78 (2.17) | 3.30 | 4.57/4.32 | 3.09 (2.91) | 3.17 | 3.86

(1) Est, estimated using data for the 40 lpm case as a basis. (2) Cerelose after sterilization was dark brown at 150°C and lighter brown at 135°C. (3) At 25°C, 55 wt.% cerelose forms a cloudy dispersion in the feed tank with entrained air (requiring several minutes to dissipate after agitation is stopped), whereas at 60°C the 55 wt.% cerelose solution in the feed tank is clear.
(4) Parenthesized design numbers (see note 4 of Table 6) for 65 lpm were interpolated from the 60 and 80 lpm cases.

Table 8 System performance (900 L retention loop volume) using 50 vol.% glycerol at 88 lpm, using the flowmeter after the inlet feed pump (15°C inlet temperature), and at 91.5 lpm (60°C inlet temperature)

Parameter | 15°C feed, design (88 lpm) | 15°C feed, observed Moyno (88 lpm) | 15°C feed, observed Moyno (40 lpm) | 60°C feed, design (91.5 lpm) | 60°C feed, observed (Moyno/centrifugal, 91.5 lpm) | 60°C feed, observed (Moyno/centrifugal, 40 lpm)
Recuperator cold side inlet feed (°C) | 15.0 | 15 | 15 | 60 | 60/60 | 60/60
Recuperator cold side outlet/heater hot side inlet (°C) | 125.8 (116.0) | 114 | 117 | 131.0 (128.3) | 128/128 | 130/130
Heater hot side outlet/retention loop inlet (°C) | 150.0 | 150.0 | 150.0 | 150.0 | 150.0/150.0 | 150.0/150.0
Retention loop outlet/recuperator hot side inlet (°C) | 150.0 | 149.1 | 147.9 | 150.0 | 149.1/148.9 | 147.9/148.1
Recuperator hot side outlet/process cooler hot side inlet (°C) | 45.0 (49.0) | 52 | 48 | 82 (81.7) | 82/80 | 76/76
Process cooler hot side outlet (°C) | 35.0 | 34.9 | 34.6 | 35.0 | 35.0 (est)/60.2 | 62.44/60.2
Process cooler cold side inlet (°C) | 6.0 | 8.1 | 7.66 | 6.0 | 7.8/7.89 | 7.6/7.83
Process cooler cold side outlet (°C) | 12.0 (17.6) | 38.5 | 43.0 | 12.0 (13.9) | 45.2 (est)/73 | 78/72
Recuperator heat recovery (HR, %) (Eq. 14) | 82.1 (74.8) | 73.3 | 75.6 | 78.9 (75.9) | 75.6/75.6 | 77.8/77.8
Recuperator NTUs (Eq. 12) | 4.10 (2.97) | 2.75 | 3.19 | 3.47 (3.16) | 3.16/3.32 | 4.14/4.11
Recuperator thermal effectiveness (TE) (Eq. 13) | 4.58 (2.97) | 2.82 | 3.30 | 3.74 (3.16) | 3.23/3.25 | 3.92/3.87

(1) Glycerol at a 60°C inlet feed temperature was cooled to a 60°C outlet temperature. (2) Est, estimated by back-calculating the heat transfer coefficient for cooling using tower water performance data for similar test media/conditions. Chilled water flowrate varies, which alters the heat transfer coefficient, which in turn changes the chilled water outlet temperature; chilled water flowrate and outlet temperature were solved iteratively by balancing heat transferred (U A ΔTln) against heat absorbed (QM Cp ΔT). (3) Parenthesized design numbers (see note 4 of Table 6) for 88 and 91.5 lpm were interpolated from the 80 and 100 lpm cases.

Observed temperature profiles, HRs, NTUs, and TEs generally met, or were somewhat lower than, design values, depending upon which design basis was utilized. The primary factor causing under-performance was believed to be the lack of allowance for the gap that likely was present even with pliable, full-face gaskets installed, owing to unavoidable variations in the flatness of the HEX spiral and door faces. The observed hot side recuperator inlet temperature from the non-adiabatic retention loop was lower than the isothermal design assumption and thus raised measured values compared with design HR, NTU, and TE. Viscosity decreases at higher temperatures resulted in improved performance.

System draining and hold up volume

The system’s hold up volume was established by running process water into a completely drained and air-blown system. Arrival of water at a given section was determined by opening the adjacent downstream drain valve. Measured hold up volumes agreed with calculated ones within reasonable limits, but may have been affected by the ability to fill the system completely at the lower flowrates used to obtain these measurements. Overall, however, the impact of these differences on design residence times was negligible.

Inlet feed stream and outlet distribution stream switching

Prior to testing, all instrument air connections to the switching skid were checked for leaks and proper venting.
When switching to the distribution manifold (sterilizer feed to fermenters) from either the sewer or recycle flow paths, transient flow and pressure spikes and their effects on inlet and outlet retention loop temperatures were observed and found to be negligible. When sterilizer distribution was switched from the receiving waste tank to the desired fermenter vessels, pressure spikes also had a negligible effect on temperature. Minimal disturbances were observed for an actual water-to-media switch over, when the system feed was changed from water to 55 wt.% cerelose and back again. In all of these instances, an acceptable temperature spike was considered to be one less than the variation observed during normal flow operation. Since these spikes were negligible, it was not necessary to flush the system appreciably after a switch to regain steady performance. In addition, sewer-to-recycle, waste-to-sewer, and fermenter-to-waste transitions were not potential sterility risks, since these typically occurred after sterilized medium transfer was completed.

Heat losses

For this system, the target retention loop temperature drops (Table 2) were met or exceeded for the test media at residence times of 9–22.5 min. In another study, for a retention loop of 50 mm (2″) diameter, the temperature drop was negligible for short residence times of 4 s (hold temperature of 85°C and room temperature of 20°C) regardless of whether the retention loop was insulated (0.005°C) or not insulated (0.04°C) [55]. For longer residence periods of 40 s, the temperature drop was 0.04°C for the insulated and 0.35°C for the non-insulated case. These temperature drops are expected to increase with higher residence times and higher holding temperatures. Extrapolating from these data assuming operation at 85°C, a change of 0.0583°C/min was expected, translating to an expected drop of 0.525°C for a residence time of 9 min. Since the actual operating temperature was substantially higher, at 150°C, the observed drop of 1°C appears consistent with these earlier studies. Infrared pictures (Inframetrics model PM250 Thermocam; Flir, Boston, MA, USA) of the retention loop and HEXs were taken to evaluate retention loop insulation effectiveness (particularly at the removable jumper end) and HEX heat losses due to lack of insulation. (Heat loss due to thermal radiation was considered negligible.) These pictures showed that the retention loop was adequately insulated. Typical HEX surface temperatures were consistent with the temperature profile of Table 6. Since cold fluids enter the HEX on the periphery and hot fluids enter at the center, there was minimal heat loss to the surroundings [100]. In addition, during water sterilization, temperature drops were measured for the recuperator’s hot media side (with the inlet cold side bypassed) and for the process side of the process cooler (without chilled/cooling water flow). These drops were <0.3°C for the recuperator (aspect ratio of 3.5) and about 2.0°C for the process cooler (aspect ratio of 2.9). Lower heat losses were expected for thicker, lower aspect ratio HEXs and for HEXs with more turns [99]. In addition, full-face gaskets minimized heat transfer to the HEX cover faces, which reduced heat loss.

System sensitivity

Variations in inlet retention loop temperature (±0.05°C), system flowrate (Moyno feed pump ±0.015 lpm; centrifugal feed pump ±1.0 lpm), and back-pressure (±0.05 kgf/cm²) generally were negligible across the operating range and for the various test media.
As expected, flowrate variations were somewhat greater for the centrifugal than for the positive displacement Moyno feed pump. The sensitivities of retention loop inlet and outlet temperatures to changes in system back-pressure were found to be negligible. This behavior was an improvement over the prior direct steam injection design, in which steam/water mixing was more volatile and pressure-sensitive.

Fo and Ro

For a hold temperature of 150°C, Fo magnitudes were acceptable, ranging from 3,000 min at 100 lpm to 6,500–7,000 min at 40 lpm. At 135°C, Fo was substantially lower at 150 min, but this still represented a 50-log reduction for spores with a D value of 3.0 min. For a hold temperature of 150°C, Ro magnitudes ranged from 50–55 min at 100 lpm to 125–130 min at 40 lpm; at 135°C, Ro was substantially lower at 20 min. The acceptability of these Ro values depends on the degree and impact of media degradation for the specific process. Reproducibility was high, at <1.5% for the same inlet feed pump. Fo and Ro differences were <10% between the two inlet feed pumps using the same medium type, residence time, and sterilization hold temperature; these differences were presumed to be due to small volumetric flowrate changes slightly altering residence times. Different flowmeters controlled flowrate depending on the inlet pump utilized, but the flowmeter after the recuperator was used for the calculation regardless.

Dimensionless groups and axial dispersion

For the retention loop, NRe and NBs were calculated for each type of media at various system flowrates and at a sterilization temperature of 150°C (Table 9), starting with velocity estimation. Process bulk velocities for the retention loop, as well as the HEXs, ranged from 0.62–1.33 m/s at 100 lpm and 0.24–0.47 m/s at 40 lpm (Table 10); the velocity at 40 lpm thus was 40% of the value at 100 lpm. Over the 55 wt.% cerelose feed temperature range of 25–60°C and the water/50 vol.% glycerol feed temperature range of 15–60°C, there was only a very slight change in calculated velocity (data not shown).

Table 9 Key calculated parameters for the retention loop for an inlet feed temperature of 15°C, sterilization hold temperature of 150°C, and process cooler set point of 35°C (L/D of 9,318 for the 900 L retention loop volume)

Parameter | Water (40–100 lpm) | 55 wt.% cerelose (40–65–100 lpm) | 50 vol.% glycerol (40–88–100 lpm)
NRe | 84,000–210,000 | 53,000–86,000–132,000 | 60,600–133,300–151,500
V (m/s) | 0.305–0.762 | 0.335–0.518–0.823 | 0.335–0.701–0.792
f (Eq. 11a) | 0.0222–0.0204 | 0.0232–0.0221–0.0213 | 0.0231–0.0212–0.0209
Dz/(V D) = 3.57 f^0.5 [61] (Eq. 10) | 0.532–0.510 | 0.544–0.531–0.521 | 0.543–0.520–0.516
Dz (m²/s) | 0.00890–0.0213 | 0.0100–0.0151–0.0235 | 0.00999–0.0200–0.0224
NBs (V L/Dz) | 17,550–18,300 | 17,150–17,550–17,900 | 17,150–17,950–18,100

For water, applying the experimental Dz/(V D) correlation of [61] directly gives Dz/(V D) = 0.33 (NRe = 10⁴) to 0.2 (NRe = 10⁵), Dz = 0.00550 m²/s (NRe = 10⁴) to 0.00837 m²/s (NRe = 10⁵), and NBs (V L/Dz) = 28,350 (NRe = 10⁴) to 46,600 (NRe = 10⁵).

(1) Surface roughness was assumed to be equivalent to commercial steel (ε = 4.57 × 10⁻⁵ m, D = 0.0549 m, ε/D = 0.00083) [78]. (2) Inlet temperature variation from 15 to 60°C only slightly affects NRe (<1%), based on the density change of the volumetric inlet flow rate.
(3) NBs for the 540 L retention loop volume is about 60% of that for the 900 L retention loop volume.

Table 10 Calculated bulk velocities of HEXs for various test media (based on 40–100 lpm process flowrate, sterilization temperature of 150°C, and process cooler set point of 35°C)

HEX | Process side velocity (m/s), water (15°C inlet feed) | Process side velocity (m/s), 55.0 wt.% cerelose (25°C inlet feed) | Process side velocity (m/s), 50 vol.% glycerol (15°C inlet feed) | Utility side velocity (m/s)
Interconnecting piping | 0.47–1.26 | 0.47–1.33 | 0.47–1.32 | N/A
Recuperator (both sides) | 0.35–0.94 | 0.34–0.94 | 0.34–0.98 | N/A
Heater | 0.24–0.62 | 0.25–0.66 | 0.25–0.67 | 2.0
Retention loop | 0.31–0.77 | 0.32–0.81 | 0.32–0.80 | N/A
Process and “flash” coolers | 0.35–0.87 | 0.35–0.89 | 0.35–0.88 | 3.3

Calculated system velocities were compared to the expected settling velocity for a solids-containing medium such as 5% cottonseed flour (Pharmamedia, Traders Protein; Memphis, TN, USA). According to the manufacturer, 91% of the particles are <74 μm (i.e., pass through a 200 mesh screen) and the particle density, ρp, is 1,485 kg/m³. The particle settling velocity at 150°C, based on a specific gravity of 1,013 kg/m³ for a 50 g/L solution at 25°C (measured using a Fisherbrand, Category No. 11-555G hydrometer), was estimated at 0.016 m/s for 200 μm particles, 0.0054 m/s for 100 μm particles, and 0.0016 m/s for 50 μm particles. These values were 15- to 100-fold lower (depending on the particle size assumed) than the lowest system velocities. Settling velocities were expected to be even lower as the temperature decreased, owing to the higher specific gravity of water. NRe ranged from 53,000 to 150,000 regardless of media type (Table 9). Equation 11a was used to obtain f values ranging from 0.0204 to 0.0232 regardless of media type or flowrate, resulting in Dz values (Eq. 10) ranging from 0.0089 to 0.0235 m²/s over the flowrate range from 40 to 100 lpm. NBs values ranged from 17,150 to 18,300 and were relatively insensitive to either medium type or flowrate (Table 9); a sketch of this calculation chain is given below. Using the experimental correlation for water [61], NBs values were somewhat higher than the calculated values (Table 9), but reasonable considering the data scatter of the correlation itself. Thus, NBs >> 1 for the expected operational ranges.
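The following Python sketch reproduces that chain for water at 100 lpm, assuming Eq. 11a is a Colebrook-type friction factor correlation; this assumed form matches the Table 9 f values for the stated ε/D of 0.00083.

```python
import math

def friction_factor(n_re: float, rel_rough: float = 0.00083) -> float:
    """Darcy friction factor via fixed-point iteration of the Colebrook
    equation (assumed form of Eq. 11a)."""
    f = 0.02   # initial guess
    for _ in range(50):
        f = (-2.0 * math.log10(rel_rough / 3.7
                               + 2.51 / (n_re * math.sqrt(f)))) ** -2.0
    return f

def bodenstein(n_re: float, l_over_d: float = 9318.0) -> float:
    """NBs = V*L/Dz with Dz = 3.57*sqrt(f)*V*D (Eq. 10), which reduces to
    NBs = (L/D) / (3.57*sqrt(f)), independent of velocity."""
    return l_over_d / (3.57 * math.sqrt(friction_factor(n_re)))

print(f"f   = {friction_factor(210_000):.4f}")   # ~0.0204 (Table 9, water, 100 lpm)
print(f"NBs = {bodenstein(210_000):,.0f}")       # ~18,300 >> 1
```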
Dips caused by disturbances were shallower at the retention loop outlet than at the inlet and were observed about tR minutes later. For example, at 100 lpm, an inlet temperature dip to 143°C (from 150°C) resulted in an outlet temperature dip to only 145.5°C. Based on the 1°C steady-state temperature loss observed, an outlet temperature dip to 142°C was expected; that it was not realized is most likely due to axial dispersion. Direct qualitative examination of axial dispersion was accomplished using step change and spike tests at flowrates of 40 and 100 lpm. For the step change test, water flowed through the sterilizer, and then the step change was performed quickly by switching from one feed tank at ambient temperature to a second tank at a higher temperature (60°C); the system was not heated and was operated without back-pressure. For the spike (delta or pulse) change, a temperature spike in the hot water loop was created by quickly opening the steam valve fully and then returning it to its original output setting, taking care to maintain the peak temperature below 150°C. For both types of tests, the non-sterile side of the recuperator was by-passed to avoid heat transfer, as well as to reduce the piping length/volume (relative to that of the retention loop) between the switching point and the retention loop entrance (about 50 ft excluding spiral loops vs. 1,253 ft; 65 L vs. 900 L). It was difficult operationally to perform step and spike changes with sufficient rapidity owing to this hold up. The dispersion results are shown in Fig. 3a and b. Clearly, the degree of dispersion increased at the lower flowrate of 40 lpm (versus 100 lpm) for both types of tests. However, the absolute value of the dispersion coefficient suggested by these data appears artificially higher than that predicted from the calculations.

[Fig. 3 Axial dispersion: (a) pulse, (b) step change]

Steam-in-place testing

The use of biological indicators (BIs) and thermocouples (TCs) was avoided in steam-in-place (SIP) testing since (1) fittings had to withstand pressures up to 10 kgf/cm² and (2) the system was physically large, precluding the use of TCs with attached wires [although Valprobe (GE Kaye Instruments Inc., North Billericia, MA, USA) wireless TCs were one alternative considered]. TCs commonly have been used in pasteurization applications, with a lower hold temperature of 72°C [82] and correspondingly lower pressure. BIs (in the form of feed inoculated with indicator organisms) were not considered desirable for contact with production equipment for pasteurization [47], although studies are underway to determine if non-indicator organisms can be used [82]. Simulations have been conducted (1) using laboratory or pilot scale test apparatuses with inoculated spore solutions, (2) by tracking an indicator enzyme [69], and/or (3) by utilizing rigorous temperature distribution monitoring on laboratory and production equipment, obtaining spore inactivation kinetic data in parallel, and then developing models to estimate lethality [47, 82, 92]. Installation of several temperature and pressure indicators for in-process monitoring, along with a functional sterility test, was the preferred approach to demonstrate proper operation. Operational testing was performed to test both the steam sterilization-to-water transition and water sterilization modes. Regardless of the sterilization method used, prior steam sterilization of the empty system and subsequent switching of back-pressure control from the “flash” to the process cooler were required. Each method had key points requiring extra care: the initial water introduction for the steam sterilization-to-water transition, and placing the empty non-sterile side of the recuperator on-line for the water sterilization. Thus, the time and attention required to execute either method was reasonably equivalent. Using the steam sterilization-to-water transition and a 9 min residence time, three sterility tests employing soluble medium were completed successfully at 150°C and one at 135°C, to test the Fo range of 150–3,000 min. One sterility test was completed successfully using the water sterilization method. Changes between non-sterilized and sterilized sterility medium were minimal for total dissolved solids (<±3–8%) and conductivity (<3% decrease), confirming minimal medium dilution since all heating was done indirectly. Glucose concentration (measured enzymatically by a YSI analyzer; Yellow Springs, OH, USA) decreased by 15–25%, likely reflecting glucose complexing with nitrogen during sterilization rather than medium dilution. Further tests for different media (50 vol.% glycerol, 50 wt.% cerelose, 5 wt.% Pharmamedia) are to be done as process development requirements dictate.

Clean-in-place (CIP) testing

Cleaning of HEXs to remove fouling and solid accumulations was required both to maintain heat transfer performance and to avoid system sterilization problems.
After medium was run through the sterilizer, the system was flushed with water at sterilization temperature for at least one system volume, and then flushing was continued as the system cooled. An alkaline cleaning agent (typically 1.5 vol.% Low Heat #3; Oakite Products, Bardonia, NY, USA) was added to dissolve medium components as well as any denatured proteins. Besides alkaline cleaning agents, acid cleaning agents (typically 1.2 wt.% sulfamic acid) were used to dissolve mineral deposits. A high pressure water stream (1,800–2,500 psi) also can be used to clean the spiral HEX channels [19, 71], although this required opening the HEX doors, which was costly on a routine basis. The system was heated to 60–80°C, rather than to sterilization temperatures of 135–150°C, which past experience indicated to be sufficient for cleaning. An additional set of tuning constants was implemented for these lower cleaning temperatures (Table 3), although overall control was not required to be as tight as during sterilization. Contact of the cleaning solution with all internal wetted sterilizer parts at a minimum required velocity of 1.5 m/s (2.0 m/s recommended) [24] was desirable. Although these high velocities were not achievable on the process side (Table 10), system cleanability was acceptable. There were significant concerns about cleaning HEXs with full-face gaskets installed, owing to the potential for accumulated residue where the spirals contacted the door, and underneath the gasket at the center nozzles, since the gasket hole was braced only by the spiral face. System cleanliness was evaluated by (1) examining inlet and outlet conductivity differences and comparing values to those for DIW, (2) analyzing rinse water samples for total solids, filtered solids, color, and ultraviolet absorbance, and (3) visually inspecting and swabbing internal surfaces, including the HEXs, for total organic carbon (TOC). There were no appreciable differences between source and rinse water in these measurements. Residue accumulation was negligible on the process sides of all spiral HEXs, and swab-testing results were less than 25 ppm TOC for each location examined. Cleaning approaches utilized in the dairy industry for continuous pasteurization applications are consistent with the above approach. After processing in certain milk pasteurization applications, systems are rinsed with cold water, flushed with 1.5–2 w/v% caustic detergent at 85°C for 30 min, cooled, drained, and then flushed with water a second time [40]. In other dairy applications, the nature of the soil was primarily protein, butterfat, and minerals [38]; in this latter case, an acid detergent first dissolved minerals and loosened burned-on accumulation, which increased soil solubility in the subsequent caustic detergent solution. Instead of draining and rinsing the system between detergent switches, caustic has been added directly to the acid solution to minimize the energy costs associated with a second heating of the cleaning solution [38]. However, this shortcut risks soil particles, already dissolved in the acid or caustic solution, re-depositing on system surfaces as the pH changes.

Conclusion

Improvements implemented for a next generation, pilot-scale continuous sterilization system, spanning the design, fabrication, and testing project phases, have been described. The advantages and disadvantages of various system features were evaluated based on analysis of the literature from fermentation as well as other related applications.
Successful realization of these requirements depended on the adoption of an effective project strategy. The selected system vendor had experience primarily with the food industry, since there were few new media sterilizers for manufacturing being constructed and even fewer for pilot-plant process development use. Thus, it was critical to devote sufficient time to comprehensively determining system requirements. Developing a detailed sequence of operation in parallel with the piping and instrumentation diagram (P&ID) ensured alignment of performance expectations. In addition, selection of a system (as well as HEX) vendor located nearby facilitated interim progress examinations prior to delivery. The "worst case" design scenarios were determined carefully, ensuring that they did not create unnecessary additional costs. Agreement on the design assumptions and performance requirements was critical, particularly for calculated quantities. Specifically, the entire system operation needed to be evaluated when developing the HEX performance requirements. Interim temperatures and pressures were estimated based on the system's flow connections, not simply by considering each HEX separately. Since the temperature rise in each HEX stream depended on actual flowrate, design calculations were done using expected flowrates and not solely the maximum flowrates that the HEX could support. Finally, a check of calculations for the various design cases ensured they were internally consistent. Performance testing was devised to quantify actual operation versus design expectations. Intermediate pressure and temperature measurements within the system were compared to design calculations to identify performance issues. Communicating acceptable variability to the control and instrumentation system designer up front ensured that proper test criteria were established and that steady-state variations were acceptable. Tests were performed and documented for all operational phases. These system tests were considered critical to effectively characterizing the system's capabilities prior to placing the equipment in service.
[ "continuous sterilizer", "pilot scale", "high temperature short time", "spiral heat exchanger", "start-up" ]
[ "P", "P", "R", "R", "U" ]
Virchows_Arch-4-1-2329733
Validation of tissue microarray technology in squamous cell carcinoma of the esophagus
Tissue microarray (TMA) technology has been developed to facilitate high-throughput immunohistochemical and in situ hybridization analysis of tissues by inserting small tissue biopsy cores into a single paraffin block. Several studies have revealed novel prognostic biomarkers in esophageal squamous cell carcinoma (ESCC) by means of TMA technology, although this technique has not yet been validated for these tumors. Because the representativeness of the donor tissue cores may be a disadvantage compared to full sections, the aim of this study was to assess whether TMA technology provides representative immunohistochemical results in ESCC. A TMA was constructed containing triplicate cores of 108 formalin-fixed, paraffin-embedded squamous cell carcinomas of the esophagus. The agreement in the differentiation grade and immunohistochemical staining scores of CK5/6, CK14, E-cadherin, Ki-67, and p53 between TMA cores and a subset of 64 randomly selected donor paraffin blocks was determined using kappa statistics. The concurrence between TMA cores and donor blocks was moderate for Ki-67 (κ = 0.42) and E-cadherin (κ = 0.47), substantial for differentiation grade (κ = 0.65) and CK14 (κ = 0.71), and almost perfect for p53 (κ = 0.86) and CK5/6 (κ = 0.93). TMA technology appears to be a valid method for immunohistochemical analysis of molecular markers in ESCC provided that the staining pattern in the tumor is homogeneous. Introduction Esophageal carcinoma is the eighth most common type of cancer in the world [13]. Although the recent rise in incidence of esophageal cancer has predominantly been caused by an increase in adenocarcinomas, the majority of esophageal cancer cases globally are squamous cell carcinomas [13]. For both histological types, radical en bloc esophagectomy with an extensive lymph node dissection offers the best chance for cure, leading to an overall 5-year survival rate of around 30% [1, 20]. Well-known histopathological factors for prognostication of esophageal cancer include the TNM stage, the number of positive lymph nodes, and the presence of extracapsular lymph node involvement [16, 24, 26, 32]. Recently, there has been a growing interest in the prognostic value of molecular markers in (esophageal) cancer [21]. The expression of such markers is often studied by immunohistochemistry on formalin-fixed, paraffin-embedded tumor slides. Tissue microarray (TMA) technology has been developed to enable high-throughput immunohistochemical analyses [14]. By inserting small (e.g., 0.6 mm diameter) donor tissue core biopsies into a single recipient paraffin block, this technique allows for rapid analysis of large numbers of tissues under standardized laboratory and evaluation conditions without significantly damaging the patient's tissue. In addition, TMA technology leads to a significant reduction of the amount of consumables used and the time needed for interpretation, increasing cost-effectiveness. A potential disadvantage compared to full tissue sections is that the donor cores may not be representative of the whole tumor, particularly in the case of heterogeneous tumors and heterogeneously expressed molecular markers. Hence, some validation studies have been performed in various cancers using different kinds of antibodies [2, 4, 6, 7, 9, 12, 23, 35]. Although several studies have revealed novel prognostic biomarkers in esophageal squamous cell cancer (ESCC) by means of TMA technology [38, 39, 41], this technique has not yet been validated for these tumors.
The aim of the present study was, therefore, to validate TMA technology in ESCC by assessing the concurrence of immunohistochemical staining scores of established molecular markers with various expression patterns between triplicate 0.6 mm core biopsies of the TMA and their whole tissue section counterparts. Materials and methods TMA construction Formalin-fixed, paraffin-embedded tissues from thoracic ESCCs of consecutive patients who had undergone esophagolymphadenectomy at the authors' institute between 1989 and 2006 were retrieved from the archives of the Department of Pathology. Patients who received neoadjuvant therapy were excluded from this study. The study was carried out in accordance with the ethical guidelines of our institution concerning informed consent about the use of patients' materials after surgical procedures. An experienced pathologist (FtK) marked three representative tumor regions on one selected hematoxylin and eosin (H&E)-stained section of each tumor, avoiding areas of necrosis. From these three tumor regions, a tissue cylinder with a diameter of 0.6 mm was punched out of the corresponding paraffin block ('donor block') and placed into the TMA paraffin block using a manual tissue arrayer (MTA-I, Beecher Instruments, Sun Prairie, USA), which was guided by the MTABooster® (Alphelys, Plaisir, France). The distribution and position of the cores were determined in advance with the TMA-designer software (Alphelys-TMA Designer®, Version 1.6.8, Plaisir, France). Cores of normal esophageal mucosa, lymph node, kidney, liver, spleen, and prostate were incorporated in the tissue array block as internal controls. Immunohistochemistry For each marker, a 4-μm slide of the TMA and one of every selected donor paraffin block were immunohistochemically stained. Table 1 shows the details of all antibodies, dilutions, incubation times and antigen retrieval methods applied in this study.
Table 1 Specification of antibodies used and details of tissue processing
Primary antibody | Staining pattern | Source(a) | Clone and code | Antigen retrieval | Dilution | Incubation time (min/room temperature) | Detection(b) | Positive control | Procedure
CK5/6 | Cytoplasmic | Chemicon | D5/16 B4 | EDTA pH 9.0 | 1:3,000 | 60 | Strept ABC | Breast | Autostainer
CK14 | Cytoplasmic | Neomarkers | LL002 | EDTA pH 9.0 | 1:400 | 60 | Powervision | Breast | Autostainer
E-cadherin | Membranous | Zymed | 4A2C7 | Citrate autoclave pH 6.0 | 1:200 | 60 | Powervision | Breast | Autostainer
MIB-1 (Ki-67) | Nuclear | Dako | M7240 | Citrate pH 6.0 | 1:100 | 60 | Strept ABC | Tonsil | Autostainer
p53 | Nuclear | Biogenex | BP53-12 | Citrate pH 6.0 | 1:200 | 60 | Strept ABC | Serous adenocarcinoma of the endometrium | Autostainer
(a) Biogenex, San Ramon, CA, USA; Chemicon, Chemicon International, Temecula, CA, USA; Dako, DakoCytomation, Glostrup, Denmark; Neomarkers, Fremont, USA; Zymed, Zymed Laboratories, San Francisco, CA, USA.
(b) Strept ABC is biotinylated horse–antimouse Vector BA-2000, diluted 1:500 in PBS, followed by streptavidin–biotin complex, diluted 1:1,000. Powervision is ready to use (Poly-HRP-antiMs/Rb/RtIgG biotin-free; ImmunoVision Technologies, Norwell, CA, USA).
For all stainings, sections were deparaffinized in xylene for 10 min followed by dehydration through graded alcohols. Endogenous peroxidase activity was blocked for 15 min in a buffer solution of pH 5.8 (containing 8.32 g citric acid, 21.52 g disodium hydrogen phosphate, and 2 g sodium azide in 1 l of water) with hydrogen peroxide (0.3%). After antigen retrieval for 20 min, a cooling-down period of 30 min was followed by incubation with the primary antibody.
Depending on the antibody, slides were incubated with the secondary antibody followed by the streptavidin–biotin complex, or directly with Powervision (details of both products are given in the legend of Table 1). Then, the peroxidase reactivity was developed with 3,3′-diaminobenzidine for 10 min and slides were counterstained with Mayer's hematoxylin. Between steps, slides were washed with phosphate-buffered saline (pH 7.4). Immunohistochemical scoring Two observers (FtK and JB) conjointly determined the degree of differentiation and the percentage of immunohistochemically stained tumor cells in all TMA cores and in the full sections of the selected donor blocks. Histologic grade was scored as well-differentiated (G1), moderately differentiated (G2), or poorly differentiated (G3) [37]. Staining of p53 and Ki-67 was scored as negative (<10% of tumor nuclei stained), weakly positive (10–50%), or strongly positive (≥50%) [10, 40]. Cytokeratin (CK)5/6 and CK14 staining were scored as negative (<10% of tumor cell cytoplasms stained), weakly positive (10–80%), or strongly positive (≥80%). E-cadherin expression was regarded negative when <50% of tumor cell membranes stained and positive when ≥50% stained [29, 30]. Cores were considered lost if <10% of cells contained tumor ('sampling error') or when <10% of tissue was present ('absent core'). Cases were excluded if two out of three cores were lost. When the scores between the cores of a particular case differed, the most frequent score determined the overall score. In case of three different scores in one case, the middle score was chosen. When only two cores were available, each with a different score, the case was excluded from further analysis [11]. Statistical analysis Statistical analyses were performed using SPSS software for Windows (Version 12.0, SPSS, Chicago, IL, USA). Sixty-four donor blocks (60% of the tumors incorporated in the TMA) were randomly chosen by means of a random selection function of SPSS. To determine the chance-corrected agreement between the immunohistochemical staining scores of TMA cores and large sections, Cohen's weighted kappa statistic was calculated. Chance-corrected agreement was considered poor if κ < 0.00, slight if κ = 0.00–0.20, fair if κ = 0.21–0.40, moderate if κ = 0.41–0.60, substantial if κ = 0.61–0.80, and almost perfect if κ = 0.81–1.00 [17]. The overall agreement was defined as the percentage of correct agreement between the TMA and the donor blocks from the total number of cases [15]. Results Of the 324 (3 × 108) tumor tissue cores that were transferred into the TMA paraffin block, a median of 295 (91%) was available for immunohistochemical scoring on the 6 TMA slides used in this study (Table 2). Of the 64 randomly selected cases, a median of 176 (92%) of 192 cores (3 × 64) was evaluable on the TMA slides.
Table 2 Overview of the number of cores that were evaluable, absent, or contained too little tumor in all 108 cases and in the 64 randomly selected cases on the TMA slides
Measure | H&E | CK5/6 | CK14 | E-cadherin | Ki-67 | p53 | Median
Total TMA cases (n = 108):
No. of evaluable cores | 293 | 309 | 294 | 306 | 293 | 295 | 295
Percentage | 90 | 95 | 91 | 94 | 90 | 91 | 91
No. of absent cores | 20 | 7 | 22 | 9 | 22 | 22 | 21
Percentage | 6 | 2 | 7 | 3 | 7 | 7 | 6
No. of cores without tumor | 11 | 8 | 8 | 9 | 9 | 7 | 9
Percentage | 4 | 3 | 3 | 3 | 3 | 2 | 3
Randomly selected TMA cases (n = 64):
No. of evaluable cores | 176 | 187 | 176 | 185 | 176 | 176 | 176
Percentage | 92 | 97 | 92 | 96 | 92 | 92 | 92
No. of absent cores | 13 | 3 | 13 | 4 | 13 | 13 | 13
Percentage | 7 | 2 | 7 | 2 | 7 | 7 | 7
No. of cores without tumor | 3 | 2 | 3 | 3 | 3 | 3 | 3
Percentage | 2 | 1 | 2 | 2 | 2 | 2 | 2
H&E: hematoxylin and eosin
On the H&E-stained TMA slide, 49 (76%) of the 64 randomly chosen cases were represented by 3 cores; 14 (22%) by 2 cores. One (1.6%) case was excluded from further analysis because only a single core was available. The agreement in the scores for the grade of differentiation between the TMA cores and the full sections is shown in Table 3. The weighted kappa score was 0.65.
Table 3 Agreement in the degree of differentiation between TMA cores and full sections
TMA \ Full section | G1 | G2 | G3 | Total
G1 | 2 | 3 | 0 | 5
G2 | 2 | 21 | 2 | 25
G3 | 0 | 7 | 26 | 33
Total | 4 | 31 | 28 | 63
κ = 0.65. G1: well-differentiated, G2: moderately differentiated, G3: poorly differentiated
Fifty-nine (92%) of the 64 randomly selected cases stained for CK5/6 were represented by 3 cores (Fig. 1); the 5 remaining cases by 2 cores. The immunohistochemical scores of the TMA and the donor blocks are shown in Table 4. Overall agreement in CK5/6 scores between the TMA and the donor blocks was 98%, with a kappa of 0.93. Fig. 1 Example of strong CK5/6 staining in TMA cores and the corresponding full section. a Three TMA cores representing one tumor; magnification ×20. b Enlargement of the middle TMA core depicted in a; magnification ×100. c Part of the slide of the donor block of the same tumor; magnification ×100
Table 4 Agreement in immunohistochemical scores between TMA cores and full slides stained for CK5/6 and CK14
CK5/6, TMA \ Full section | <10% | 10–80% | ≥80% | Total
<10% | 1 | 1 | 0 | 2
10–80% | 0 | 4 | 0 | 4
≥80% | 0 | 0 | 58 | 58
Total | 1 | 5 | 58 | 64
κ = 0.93
CK14, TMA \ Full section | <10% | 10–80% | ≥80% | Total
<10% | 5 | 2 | 1 | 8
10–80% | 0 | 11 | 2 | 13
≥80% | 0 | 4 | 34 | 38
Total | 5 | 17 | 37 | 59
κ = 0.71
For CK14, two cases were excluded because only one tumor core was left, and three cases were also excluded because the two available cores had discrepant immunohistochemical scores. Fifty (85%) of 59 cases had complete agreement (Table 4). Four cases were scored one class higher on TMA when compared with the full sections. Conversely, five other cases were classified lower on TMA, with one case two classes lower. The kappa score was 0.71. Regarding E-cadherin staining, three assessable cores were present in 89% of the cases; two cores in 11%. Overall agreement in E-cadherin staining scores was accomplished in 72% of cases (Table 5). In one case, a higher score was found on the TMA compared to the full section. In 17 cases, the expression of E-cadherin was scored lower on TMA than on the full sections. The observed kappa was 0.47.
Table 5 Agreement in immunohistochemical scores between TMA cores and full slides stained for E-cadherin
E-cadherin, TMA \ Full section | <50% | ≥50% | Total
<50% | 22 | 17 | 39
≥50% | 1 | 24 | 25
Total | 23 | 41 | 64
κ = 0.47
Three-core analysis of Ki-67 staining could be performed in 78% of selected cases and two-core analysis in 19%. Two cases were represented by a single tumor core and were, therefore, excluded from further analysis. Ki-67 staining was scored as "moderate" on both TMA and full sections in 42 (69%) of 61 selected cases. In 79% of cases, the Ki-67 scores of the TMA were similar to those of the full sections. Thirteen cases were discordant (Table 6); kappa was 0.42.
Table 6 Agreement in immunohistochemical scores between TMA cores and full slides stained for Ki-67 and p53
Ki-67, TMA \ Full section | <10% | 10–50% | ≥50% | Total
<10% | 2 | 1 | 0 | 3
10–50% | 3 | 42 | 3 | 48
≥50% | 0 | 6 | 4 | 10
Total | 5 | 49 | 7 | 61
κ = 0.42
p53, TMA \ Full section | <10% | 10–50% | ≥50% | Total
<10% | 19 | 3 | 0 | 22
10–50% | 0 | 1 | 2 | 3
≥50% | 0 | 3 | 35 | 38
Total | 19 | 7 | 37 | 63
κ = 0.86
With regard to p53 staining, two cores were present in 12 (19%) cases and 3 cores were available in 51 (80%) cases. One case was excluded as it was represented by only one TMA core. Complete agreement was achieved in 87% of the selected tumors (Table 6). In the eight nonconcordant cases, the difference was one class, resulting in a kappa of 0.86.
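For readers who want to reproduce this agreement analysis, the sketch below implements the triplicate-core aggregation rule described in Materials and methods and a weighted Cohen's kappa. The paper computed kappa in SPSS and does not state the weighting scheme; linear weights are assumed here (they reproduce the published p53 value of 0.86 from Table 6). Ordinal categories are coded 0, 1, 2.

```python
# Sketch of the agreement analysis, assuming linear kappa weights (the paper
# used SPSS and does not state the weighting scheme). Categories are coded
# 0, 1, 2, e.g. <10%, 10-50%, >=50% for p53 and Ki-67.
from collections import Counter
import numpy as np

def aggregate_cores(cores):
    """Case score from up to three core scores (None marks a lost core)."""
    scores = [c for c in cores if c is not None]
    if len(scores) < 2:
        return None                      # excluded: fewer than two cores left
    top, freq = Counter(scores).most_common(1)[0]
    if freq >= 2:
        return top                       # most frequent score decides
    if len(scores) == 3:
        return sorted(scores)[1]         # three different scores: middle one
    return None                          # two cores with two scores: excluded

def weighted_kappa(tma, full, n_cat=3):
    """Cohen's kappa with linear weights for two paired ordinal ratings."""
    obs = np.zeros((n_cat, n_cat))
    for a, b in zip(tma, full):
        obs[a, b] += 1
    obs /= obs.sum()
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    w = 1.0 - np.abs(np.subtract.outer(np.arange(n_cat), np.arange(n_cat))) / (n_cat - 1)
    return (np.sum(w * obs) - np.sum(w * exp)) / (1.0 - np.sum(w * exp))

# Rebuild the paired p53 scores from Table 6 and check the published kappa.
p53_cells = [(0, 0, 19), (0, 1, 3), (1, 1, 1), (1, 2, 2), (2, 1, 3), (2, 2, 35)]
tma = [a for a, b, n in p53_cells for _ in range(n)]
full = [b for a, b, n in p53_cells for _ in range(n)]
print(f"p53 weighted kappa = {weighted_kappa(tma, full):.2f}")  # ~0.86
```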
Discussion After its introduction in 1998, TMA technology has been applied in the immunohistochemical analysis of various malignancies, including squamous cell carcinomas and adenocarcinomas of the esophagus [3, 5, 18, 19, 25, 41]. Although it seems a very attractive method for high-throughput analysis of hundreds of tissues simultaneously, it may have limitations, as the evaluation of marker expression is reduced from full-section analysis to a few tissue cores of only 0.6 mm in diameter, especially for proteins that are heterogeneously expressed or that are cell cycle-dependent [36]. It is, therefore, essential to assess for each type of cancer individually, and for every molecular marker, whether TMA technology is feasible and valid [8, 33]. To our knowledge, this has not been done in esophageal cancer. In our TMA containing triplicate cores of 108 ESCCs, a median of 9% of cores was uninformative (6% lost during tissue processing and 3% containing too little tumor), which is comparable to the results reported in other studies [7, 8, 27]. Improper selection of representative tumor areas on the donor block's H&E slide by the pathologist, or incorrect punching of these representative areas out of the donor block, can yield tissue cores that contain too little tumor. Possible causes of absent cores are the size and fragility of the tumor tissue used and the aggressiveness of the tissue processing applied [31, 35, 42]. Moreover, the number of available cores on the TMA slide depends on the level at which the TMA paraffin block has been sectioned. The slides stained for H&E, CK14, Ki-67, and p53 were among the first slides cut from our TMA block, whereas sections stained for CK5/6 and E-cadherin were taken slightly deeper. On these latter sections, a lower number of absent cores was observed (Table 2), showing that not all cores were placed at exactly the same level in the TMA block during construction, mainly due to dissimilar thicknesses of the donor paraffin blocks used to construct the TMA [34]. The agreement in immunohistochemical results of the markers between our TMA and the full sections varied from moderate to almost perfect (κ = 0.42 to 0.93), which is consistent with the results reported in other TMA validation studies [4, 7, 8, 31, 35]. The observed variation in agreement could be due to tumor heterogeneity, topographical variation in the expression pattern of the molecular marker, or the scoring criteria used [31]. Regarding tumor heterogeneity, the optimal number of tissue cores incorporated in the TMA has been a matter of debate. Several validation studies have shown that three cores are highly representative of the full section [6, 11, 12, 28]. The addition of a fourth core did not add to the percentage of agreement in a colorectal cancer TMA [12]. Moreover, the more cores punched per case, the fewer cases can be placed into the TMA, reducing throughput. Adding a fourth core may nevertheless be worthwhile in tissues prone to uninformative cores due to small lesions such as dysplasias or carcinomas in situ [42]. In our TMA, the number of uninformative cores was low (5–10%), probably because ESCCs have a large diameter, thereby increasing the chance of obtaining a core containing tumor tissue. Taken together, we consider it justified to utilize three biopsy cores in ESCCs.
Nonetheless, using such a low number of cores requires careful selection of the tumor regions by an experienced pathologist to deal with the heterogeneity of the tumor in the TMA [33]. The agreement between TMA and full sections was substantial to almost perfect for CK5/6 and CK14. Because 91% of cases showed very strong expression of CK5/6 and only 1 of 64 cases showed negative staining, this molecular marker does not subdivide ESCCs and consequently will not be a prognostic marker for this malignancy. CK14 was more evenly distributed over the three scoring groups, but because one case was scored two classes lower on TMA when compared to the full section, the kappa was lower than for CK5/6. The relatively moderate concordance in the case of Ki-67 may be explained by the fact that almost 80% of cases were situated in one category (staining of 10–50% of tumor cells), with 13 discordant cases deviating from this category. E-cadherin also had a moderate concordance, mainly because the relatively faint staining intensity of this molecular marker made its assessment in our TMA very difficult (Fig. 2). Fig. 2 Representative example of E-cadherin staining in TMA cores and the corresponding full section. a Three TMA cores representing one tumor; magnification ×20. b Enlargement of the right TMA core depicted in a; magnification ×100. c Part of the slide of the donor block of the same tumor; magnification ×100 TMA technology was also found to be valid for determining the histologic grade of differentiation in ESCC. Complete agreement between TMA and full sections occurred in 78% (49 out of 63; κ = 0.65) of selected cases, which is high when compared to the 40% agreement achieved in a TMA of bladder cancer [22]. Due to its homogeneous staining pattern, p53 showed excellent concordance (κ = 0.86) in our microarray (Fig. 3). Fig. 3 Representative example of p53 staining in TMA cores and the corresponding full section. a Three TMA cores representing one tumor; magnification ×20. b Enlargement of the left TMA core depicted in a; magnification ×100. c Part of the slide of the donor block of the same tumor; magnification ×100 The concurrence between the TMA and the full sections is affected by the cut-off values of the immunohistochemical scoring system as well [7, 31]. The application of a two-class scoring system in an endometrial cancer TMA improved κ to 1.0, compared to 0.81 with a three-class system [7]. In our study, a two-class scoring system did not substantially affect the kappa (data not shown). Because E-cadherin expression had a very low intensity in our ESCCs, we chose to apply a two-class system for this marker. In addition, the cut-off values indicating strong immunohistochemical expression were set higher for the cytokeratins (80%) than for the other molecular markers (cut-off value 50%), because otherwise practically all tumors would be designated as having strong cytokeratin expression. Now that our esophageal cancer TMA has been validated, it will be used to correlate the expression of various molecular pathways with clinicopathologic data, aiming at detecting markers of prognostic significance and molecular targets for new therapies. Because the agreement between TMA slides and full sections depended on the molecular marker stained for, it should be considered to assess the expression pattern of a marker on a full section first, before staining a TMA slide.
When a focal or heterogeneous expression pattern is noticed, it might be more valuable to assess marker expression by means of full sections instead of TMA. On the other hand, when a marker shows a homogeneously diffuse expression pattern, staining a TMA slide does allow for high-throughput screening of tumors. When a prognostic molecular marker has been identified by means of TMA technology, it is recommended to verify the results by full-section analysis. In conclusion, this study has demonstrated TMA technology to be a valid method for immunohistochemical analysis in ESCC, with agreement between TMA cores and full sections for well-known molecular markers with differing staining patterns ranging from moderate to almost perfect.
[ "squamous cell carcinoma", "validation studies", "biological markers", "esophageal neoplasms", "protein microarray analysis" ]
[ "P", "P", "M", "M", "R" ]
Pediatr_Nephrol-3-1-1766478
Renal replacement therapy for acute renal failure in children: European Guidelines
Acute renal failure (ARF) is uncommon in childhood and there is little consensus on the appropriate treatment modality when renal replacement therapy is required. Members of the European Pediatric Peritoneal Dialysis Working Group have produced the following guidelines in collaboration with nursing staff. Good practice requires early discussion of patients with ARF with pediatric nephrology staff and transfer for investigation and management in those with rapidly deteriorating renal function. Patients with ARF as part of multi-organ failure will be cared for in pediatric intensive care units, where there should be access to pediatric nephrology support and advice. The choice of dialysis therapy will therefore depend upon the clinical circumstances, location of the patient, and expertise available. Peritoneal dialysis has generally been the preferred therapy for isolated failure of the kidney and is universally available. Intermittent hemodialysis is frequently used in renal units where nursing expertise is available, and hemofiltration is increasingly employed in the intensive care situation. Practical guidelines for and the complications of each therapy are discussed. Introduction Acute renal failure (ARF) is uncommon in childhood, but its incidence may be increasing and modalities of treatment changing, with an increasing number of children being treated in the intensive care unit (ICU) with multi-organ failure. Traditionally, children with ARF and isolated renal involvement were treated only with peritoneal dialysis, but extracorporeal techniques are being increasingly used in ICUs. Members of the European Pediatric Peritoneal Dialysis Working Group reviewed all modalities of renal replacement therapy for ARF in children and developed the following guidelines in collaboration with nursing staff during three meetings and extensive e-mail discussion. There are no randomized trials of renal replacement treatment in children with ARF. The guidelines are based upon published reports and consensus opinion to emphasize good practice. ARF is recognized when renal excretory function declines rapidly. Rising values of plasma urea and creatinine are usually accompanied by oliguria (<1 ml/kg per hour), but occasionally patients may be polyuric. The cause of ARF may be a pre-renal, intrinsic, or post-renal (obstructive) problem, and causes differ between neonates and older children [1, 2, 3]. The incidence of ARF in children is hard to define, as renal insufficiency in newborns and in ICUs is often managed conservatively by ICU staff. Outside the neonatal period, ARF is an uncommon condition, accounting for 8 referrals per million population per year to one regional pediatric nephrology unit in the United Kingdom [4]. ARF may occur as isolated failure of the kidneys alone, with other organ systems functioning normally, or in association with multiple organ failure. The mortality of the latter group is considerably higher, and this group has grown with the expansion of pediatric intensive care. For example, the mortality in neonates and infants is 51% after cardiac surgery for congenital heart defects [4], but only 3%–6% for children with intrinsic renal disease such as hemolytic uremic syndrome (HUS) in developed countries [5, 6]. The case mix in different units treating ARF, and hence mortality and morbidity rates, will therefore vary according to local clinical activity and resources [7, 8].
Many pediatric renal units will be close to pediatric ICUs (PICUs) in hospitals that may offer cardiac surgery, liver transplantation, and specialist treatment for metabolic disorders, oncology patients, etc. [6]. Other renal units may be in hospitals that do not have a PICU on site, and conversely there may be hospitals offering pediatric intensive care with no specialist pediatric nephrology service. Recommendations All children with ARF require discussion with a pediatric nephrologist. Early transfer for investigation and management is essential in those with rapidly deteriorating renal function or in those with hemodynamic or biochemical disturbances (good practice) [9]. All children with ARF as part of multi-organ failure require transfer to a designated regional pediatric ICU where there should be access to pediatric nephrology advice and support (good practice). Rationale Since there are few comprehensive regional pediatric nephrology centers, the distances that families may have to travel can be considerable. Children with acute renal impairment may be managed in local hospitals, but it is essential that early referral is made, especially if children have evidence of rapidly deteriorating renal function and require an urgent histological diagnosis to determine if immunosuppressive therapy or other treatment is required. Indications for referral include oligoanuria, especially if associated with fluid overload, hypertension, hyperkalemia, hyponatremia, acidosis, or the need for blood transfusion. Dialysis is often accompanied by early nutritional support, and pediatric nephrology units should be equipped to provide the necessary medical and nursing expertise, combined with dietetic and psychosocial support. The latter support is also important if the child is managed conservatively. Neonates and premature infants with ARF require transfer to a tertiary neonatal unit with pediatric nephrology team expertise. Patients with ARF and multi-organ failure require prompt transfer to a designated regional PICU. The choice of dialysis therapy for ARF depends upon the clinical circumstances, patient location, and expertise available. Peritoneal dialysis (PD) has generally been considered the preferred therapy if there is isolated failure of the kidneys, such as in HUS. It is regarded as a simpler technique that is universally available. However, hemofiltration (HF) and hemodiafiltration (HDF) are increasing in popularity in PICUs, where the facilities to perform hemodialysis (HD) may not be available. HD may be the preferred mode of treatment in more stable patients with adequate vascular access treated on renal units where specialist nurses are available. Although extracorporeal techniques such as continuous venovenous hemofiltration (CVVH) or continuous venovenous hemodiafiltration (CVVHDF) are used quite frequently in adult ICUs, there is still limited expertise in many PICUs. Such techniques are very dependent on technology and are more costly than PD [10]. They are also dependent upon the availability of appropriate nursing expertise [11]. Such expertise can be developed and maintained in units remote from the pediatric nephrology center by an outreach service using a renal critical care nurse educator [12]. Recommendation There is no evidence for the optimum level of renal function at which to start renal replacement therapy, nor for the optimum dialysis modality. Advantages and disadvantages are listed in Table 1.
Consideration should be given to establishing national and international databases to collect these data along with patient outcomes [6, 13].
Table 1 Advantages and disadvantages of various modalities of renal replacement therapy for acute renal failure (CVVH continuous venovenous hemofiltration, CVVHDF continuous venovenous hemodiafiltration)
Type | Complexity | Use in hypotension | Efficiency | Volume control | Anticoagulation
Peritoneal dialysis | Low | Yes | Moderate | Moderate | No
Intermittent hemodialysis | Moderate | No | High | Moderate | Yes
CVVH | Moderate | Yes | Moderate | Good | Yes
CVVHDF | High | Yes | High | Good | Yes
Choice of therapy Acute PD The main advantage of PD is that it is a continuous therapy that requires neither anticoagulation nor vascular access, and the technique can be used in hemodynamically unstable patients [14]. Acute PD can be performed in units with no HD expertise and is effective in children of all ages, including neonates [15, 16, 17, 18]. PD has been used in treating acute pancreatitis, tumor lysis syndrome, intoxications, metabolic diseases, and other pathological conditions in children [19, 20, 21, 22]. The choice of PD as therapy always has to be individualized, balancing advantages against disadvantages. Limitations in the use of PD Inborn errors of metabolism in the newborn period lead to acute accumulation of neurotoxic metabolites that can be better removed using techniques such as CVVHDF [23, 24]. The latter technique requires good vascular access, which can still be a major problem in small children [25]. Newborns with respiratory diseases, even if on ventilatory treatment, can be treated with PD provided that the fill and exchange volumes are adapted to the clinical situation. However, caution is necessary in neonates with necrotizing enterocolitis and older children with suspected bowel perforation [26]. Preparation for PD Dialysis is only possible if the access provides free flow in and out of the abdomen. The choice is between catheters inserted at the bedside under sedation or the placement of a chronic PD catheter by a pediatric surgeon in the operating theater, or exceptionally at the bedside in the ICU. The rigid Trocath catheter with a stylet has largely disappeared, and surgically placed Tenckhoff catheters are reported to have fewer complications [27, 28, 29]. However, small catheters for percutaneous placement using a Seldinger technique are invaluable in providing acute PD rapidly, especially in the neonatal PICU [13, 30]. Blockage by the omentum is always a risk with PD catheters. If the catheter is to be placed surgically, then consideration should be given to partial omentectomy [31]. In patients who are having a PD catheter inserted under general anesthetic, a cephalosporin antibiotic (20 mg/kg) should be given as a single intravenous dose up to 1 h prior to implantation of the catheter [32]. Any subsequent accidental contamination should result in the use of prophylactic antibiotics, e.g., cefuroxime 125 mg/l in the dialysate for 48 h. For catheters that are inserted percutaneously, prophylactic antibiotics, e.g., cefuroxime 125 mg/l, should be added to the dialysis fluid unless the patient is on systemic treatment. Heparin, 500 units/l, should be prescribed to prevent catheter blockage with fibrin. This is generally maintained for the first 48 h, and longer if the PD fluid remains slightly bloodstained [33, 34]. PD prescription This needs to be individualized according to patient size and condition.
Automated PD machines are the preferred method for delivering the individualized dialysis prescription and accurately measuring ultrafiltration [30]. Such machines are now available that can deliver dialysis volumes accurately down to 60 ml with 10-ml increments. Although such machines now have improved accuracy of ultrafiltration measurements, the dead space of the tubing can reduce dialysis efficiency. A manual PD set employing burettes that can accurately measure inflow and outflow can be used, with the PD fluid warmed appropriately [35]. With manual sets, attempts should be made to maintain a closed drainage system, which can help reduce the frequency of peritoneal contamination [36]. Such manual PD sets are commercially available for neonatal patients. Choice of dialysis solution The choice of dialysis solution will depend upon the weight, blood pressure, and hydration status of the child, bearing in mind the need to create nutritional space as part of the management strategy [37]. The general principle is to commence with the lowest concentration of glucose solution possible (1.36%), with stepwise increments. Care is needed if 3.86% glucose solution is required, as (1) rapid ultrafiltration can occur (especially in infants) and (2) hyperglycemia may develop (especially in septic and multi-organ failure patients), leading to hyperosmolarity and loss of effective ultrafiltration. Icodextrin solutions need a longer dwell time to obtain significant ultrafiltration and so are rarely indicated in ARF. Lactate-containing dialysis solutions are likely to be replaced by bicarbonate solutions, which are being evaluated in chronic PD. The routine use of bicarbonate solutions should be considered in neonates or in patients with reduced lactate metabolism or with lactic acidosis [38, 39]. Practical points Patients should be connected and automated PD or manual cycles started immediately after catheter implantation. Heparin (500 units/l) should be added to the dialysis fluid to prevent fibrin deposition and to improve peritoneal solute permeability [33, 34], but it can be absorbed, and care is needed in patients with coagulation disorders. Dialysis fill volumes of 10–20 ml/kg (300–600 ml/m2) should be used initially, depending on body size, and cycled in and out until the dialysate becomes clear. A PD program with 1-h dwells should be used during the first 24 h. Shorter cycles can be considered initially if hyperkalemia needs urgent treatment. The program should be adjusted with increasing dwell times and cycle fill volume (if no leakage problems) until the desired fill volume (800–1,200 ml/m2) is achieved, with adequate ultrafiltration and biochemical control [40]. High intraperitoneal pressure (IPP) can be a problem in the first 2–3 days after surgical catheter insertion. The measurement of IPP may limit the risk of leakage when the fill volumes are being increased and allow optimized pain management, but is not yet in routine use [41]. Inflow/outflow pain on PD usually diminishes with time. Tidal dialysis is an alternative [42], and bicarbonate dialysis should be considered [43]. The amount of ultrafiltration that is prescribed will partly depend upon the volume of oral, nasogastric, or total parenteral nutrition that is required, combined with fluid for drugs. Ultrafiltration may not be sufficient without the use of 2.27% or 3.86% glucose solutions. The clinical, biochemical, and nutritional status of the patient should be assessed regularly in conjunction with an experienced renal dietitian [44]. Optimal nutrition is necessary to avoid a catabolic state and the associated production of blood urea nitrogen and uremic products.
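As a worked example of the figures above, the sketch below turns the quoted fill-volume ranges into a starting prescription. Estimating body surface area with the Mosteller formula is an assumption made here for illustration; the guideline does not specify how BSA should be derived, and any real prescription must be individualized.

```python
# Minimal sketch of an initial acute PD prescription from the figures quoted
# above. BSA via the Mosteller formula is an assumption; outputs are a
# starting point only and must be adjusted to the individual child.
from math import sqrt

def mosteller_bsa_m2(weight_kg: float, height_cm: float) -> float:
    return sqrt(weight_kg * height_cm / 3600.0)

def initial_pd_prescription(weight_kg: float, height_cm: float) -> dict:
    bsa = mosteller_bsa_m2(weight_kg, height_cm)
    return {
        "initial_fill_ml": (round(300 * bsa), round(600 * bsa)),  # or 10-20 ml/kg
        "target_fill_ml": (round(800 * bsa), round(1200 * bsa)),
        "initial_dwell_min": 60,        # 1-h dwells during the first 24 h
        "heparin_units_per_l": 500,     # anti-fibrin, first 48 h or longer
        "starting_glucose_pct": 1.36,   # lowest glucose concentration first
    }

print(initial_pd_prescription(weight_kg=12, height_cm=88))
```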
Rationale Patients with ARF need constant assessment while on PD, and adequacy should be judged in terms of clinical status, ultrafiltration achieved, and biochemical parameters, particularly urea, creatinine, and bicarbonate levels [40]. Although a link between the dialysis dose and the outcome of adult patients in ARF has been established [45], there are no guidelines as to what constitutes adequate PD in a child with ARF. The aim is to deliver maximum clearance to compensate for the catabolic stress. Complications of acute PD Leaks can be a difficult problem and mostly occur around the catheter. The incidence can be reduced by proper surgical technique when using a Tenckhoff catheter [46] or by resuturing around a percutaneous catheter. Fibrin glue injected into the catheter tunnel is a technique under evaluation [47]. Poor drainage due to mechanical blockage or catheter migration is all too common. Flushing the catheter and preventing fibrin accumulation by increasing the heparin dosage and/or adding urokinase is suggested initially [48]. A plain abdominal X-ray is rarely justified, as repeated poor drainage will require catheter relocation. If available, a laparoscopic technique may be used to correct poor drainage or replace the malfunctioning catheter [49]. Hernias can be a problem in neonates and infants, particularly males. They do not usually require interruption of PD and can be repaired electively by laparoscopic or direct measures when the child's clinical condition has improved or stabilized. Peritonitis remains a constant threat, especially if there has been a lot of manipulation of the catheter. The standard features of cloudy PD fluid require urgent attention [50]. Continuous extracorporeal techniques Continuous arteriovenous hemofiltration (CAVH) has largely been replaced by pumped CVVH and CVVHDF, particularly in ICUs [51]. Such continuous renal replacement therapies (CRRT) have expanded the possible role of blood purification in the management of critically ill patients. However, there is a lack of randomized trials in patients with sepsis, and a recent analysis failed to show a benefit for hemofiltration [52]. Studies in adult ICU patients have shown a lower mortality in patients treated with CRRT compared with intermittent HD. However, a recent meta-analysis of studies before 1996 concluded that the evidence was insufficient to draw strong conclusions regarding the mode of renal replacement therapy for ARF in the critically ill [53]. A recent randomized trial in adult ICU patients showed a significant survival advantage when the intensity of ultrafiltration was increased [54]. Practical guidelines for prescription Since the concentration of solutes in the filtrate is the same as in the plasma, biochemistry is controlled by removing large volumes of filtrate and replacing them with electrolyte-containing fluid (HF replacement fluid). As most solutes are distributed within the extracellular and intracellular fluid compartments (total body water), the exchange volume of filtration necessary to control biochemistry relates to total body water. Clinical experience has shown that a turnover of approximately 50% of body weight in 24 h is usually adequate for CVVH. The extracorporeal circuit requires good central venous access, usually via a dual-lumen catheter, to allow the high blood flows necessary to prevent clotting in the hemofilter.
Suggested catheter sizes in French gauge (FG) are:
Patient size (kg) | Vascular access
2.5–10 | 6.5-FG dual-lumen (10 cm)
10–20 | 8-FG dual-lumen (15 cm)
>20 | 10.8-FG or larger dual-lumen (20 cm)
For neonates a 5-FG dual-lumen catheter may be adequate, and access can be obtained via the umbilical vein [55]. A single-lumen catheter using a "single needle" for CVVHD in very low birth weight infants has also been described [56], but this method may be compromised by high recirculation rates with most available systems. However, the smaller the access, the greater the problems [57]. It is possible to consider placing two small single-lumen catheters in different central veins. A low blood flow rate, high hematocrit, and high plasma protein concentration will limit the rate at which filtration can occur and solutes (particularly of higher molecular weight) are removed. For a given blood flow rate, pre-dilution results in higher clearance of solutes than does post-dilution [58], but at the expense of greater use of replacement fluid (approximately 20%–50% more). Pre-dilution has the potential for extending filter life. As with HD, the blood volume in the extracorporeal circuit should be less than 10% of the patient's circulatory volume. Blood flows of 6–9 ml/kg per min, or 8% of circulating blood volume, prevent excessive hemoconcentration in the filter. Automated machines with appropriate accuracy for children are recommended for delivering the CRRT prescription safely [59], and have replaced pump-assisted hemofiltration using volumetric pumps [60]. To achieve a 50% exchange of total body water in 24 h, an appropriate filter should be selected with a surface area of no more than the surface area of the patient. Suggested maximum filtration rates are:
Patient size (kg) | Maximum filtration rate (ml/h)
<8.5 | 250
8.5–20 | 500
>20 | 2,000
Under post-dilution conditions, the filtration rate should never exceed one-third of the blood flow. Several filter materials are now available. Synthetic membranes have replaced cellulose acetate, as they are more biocompatible, causing less complement activation and reducing anticoagulation needs. The synthetic polysulfone membranes are also thought to aid convective clearance of solutes through solute drag [61]. A variety of replacement fluids are available, such as lactate, bicarbonate, and buffer-free solutions. Bicarbonate or buffer-free solutions should be used in young infants and those intolerant of lactate. If freely available, a commercial bicarbonate solution would be the solution of choice. Careful monitoring of electrolytes, glucose, and phosphate is essential, as the constituents vary between the solutions.
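The figures above can be combined into a rough starting point for a post-dilution CVVH prescription, as sketched below. The function simply encodes the rules of thumb quoted in this section (blood flow 6–9 ml/kg per min, circuit volume below 10% of an assumed 80 ml/kg blood volume, roughly 50% of body weight exchanged per 24 h, the size-dependent filtration caps, and the one-third-of-blood-flow limit); it is an illustration, not a clinical calculator.

```python
# Minimal sketch: starting CVVH parameters from the rules of thumb quoted
# above (post-dilution). Purely illustrative; actual settings must be
# individualized and adjusted to the patient's clinical state.

def cvvh_starting_point(weight_kg: float) -> dict:
    qb_ml_min = (6 * weight_kg, 9 * weight_kg)         # blood flow range
    max_circuit_ml = 0.10 * 80.0 * weight_kg           # <10% of blood volume
    exchange_ml_h = 0.50 * weight_kg * 1000.0 / 24.0   # ~50% body weight/24 h
    cap_ml_h = 250 if weight_kg < 8.5 else (500 if weight_kg <= 20 else 2000)
    qf_ml_h = min(exchange_ml_h, cap_ml_h,
                  qb_ml_min[0] * 60.0 / 3.0)           # never > Qb/3 post-dilution
    return {
        "blood_flow_ml_min": qb_ml_min,
        "max_extracorporeal_ml": round(max_circuit_ml),
        "filtration_ml_h": round(qf_ml_h),
    }

print(cvvh_starting_point(weight_kg=10.0))
```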
Anticoagulation The goals of anticoagulation are to prevent clotting of the circuit and maintain adequate clearances with minimal risk to the patient. Heparin is the standard anticoagulant in Europe, but the choice of dosage will depend upon the patient's coagulation status, adequacy of blood flow, and blood viscosity. In most patients, heparin should be administered as an initial bolus (maximum 50 units/kg) at the time of connection to the extracorporeal circuit, followed by a continuous infusion of 0–30 units/kg per hour. The activated clotting time (ACT) or whole blood activated partial thromboplastin time (aPTT) is usually used to monitor treatment. The optimal ACT during hemofiltration is 120–180 s. The aPTT should be between 1.2 and 1.5 times the respective baseline value. Some patients can be treated without heparin in the circuit [6]. In those patients who are severely thrombocytopenic or where there is suspected heparin-induced thrombocytopenia, alternative treatment with prostaglandin infusions or recombinant hirudin [62], a direct thrombin inhibitor, can be considered [63]. Regional anticoagulation with citrate has been favored by some centers [64, 65]. Sodium citrate chelates the ionized calcium necessary for the coagulation cascade, and systemic anticoagulation is avoided by infusing calcium through a separate central line. The disadvantages include the possibility of various acid-base and electrolyte disturbances, including hypernatremia, hypocalcemia, and metabolic alkalosis. Adjustment of the prescription Any formula for the prescription of HF is at best an approximation or starting point, as the needs will be determined by many unmeasured variables, such as the rate of solute production, nutritional intake, and the actual volumes of the extracellular fluid and intracellular fluid compartments. If only fluid removal is required, then relatively low rates of filtration are needed, often referred to as slow continuous ultrafiltration (SCUF). There will be negligible solute removal under these circumstances. Correction of "uremia" and electrolyte disturbance requires the turnover of large volumes per kilogram of fluid, typically of the order of 50% of body weight per day for post-dilution and 75% for pre-dilution (approximately 20–30 ml/kg per hour). In catabolic patients, the clearances achieved with standard CVVH may not be sufficient. Solute removal may be increased by attempting "high-volume exchange," but in pediatric patients this may be limited by the practical problems of restricted vascular access and hemoconcentration in the filter. In these cases, small-solute clearances can be maximized by establishing diffusive mass transport via a dialysis circuit. This can be performed with (CVVHDF) or without (CVVHD) an additional major ultrafiltration component. The CVVHDF technique requires an additional pump to achieve separate control of the dialysate in- and outflow and of the replacement fluid flow. CVVH substitution fluid bags can be used as dialysis fluid. Dialysis fluid flow should be 2–3 times the blood flow if maximal efficacy is desired. This setting requires frequent manual bag exchanges and continuous supervision of the system. For practical purposes, the HD component can be added for several hours per day to a CVVH regimen. CVVHD has recently been recommended as the method of choice for the treatment of inborn errors of metabolism, since it supplies maximal clearance of ammonium and other neurotoxic metabolites. When CVVHD is unavailable, large-volume turnover of body water with CVVH will provide the next best therapy. Rates of up to 100 ml/kg per hour have been reported [66]. If possible, the blood pump speed also needs to be increased. When high turnover and blood flow rates are in use, patients should be carefully monitored for hypothermia, hypokalemia, and circulatory failure. Hypothermia may need to be treated with an external warming blanket, and hypokalemia will require replacement. Blood flow should not be increased if the patient develops cardiovascular instability. CVVH and extracorporeal membrane oxygenation In the authors' experience, the best results are achieved when pre-diluted, fully automated CVVH is used, attached to the venous (outflow from patient) side of the extracorporeal membrane oxygenation (ECMO) circuit.
This appears to reduce problems of shunting blood around the oxygenator and overcomes the problems of the increased hematocrit that may be associated with ECMO. It also reduces the complications of excessive fluid and solute clearances seen with free-flow systemic hemofilters placed in line with the ECMO circuit. When using CVVH in the suggested configuration, the "pigtails" provide access with very little resistance, causing the arterial and venous pressure alarms to activate and shut down the circuit. Therefore, three-way taps are used to create more resistance to flow into and out of the CVVH circuit. When treating neonatal patients, the ECMO circuit increases the extracorporeal blood volume very significantly. Therefore, the blood pump speed should be calculated taking into account the patient's blood volume and the priming volume of the ECMO circuit. Complications of continuous extracorporeal techniques Complications of continuous extracorporeal techniques are described in reference [67]. Hypotension Hemofiltration is most commonly used in sick septic children, many of whom will be on pressor therapy. Indeed, the need for pressor agents confers a poorer prognosis [6]. Care should be taken to minimize the amount of blood in the extracorporeal circuit, and blood priming of the HF circuit may be necessary at the outset. Fluid removal is obviously adjusted according to the patient's clinical state during the treatment. Clotting of the filter and lines This is one of the commonest complications and again is related to the patient's changing clinical status and problems with anticoagulation. This complication occurred in 24% of 89 patients treated with CVVH in a 2-year local audit (B. Harvey, unpublished observations). Other potential complications of bleeding, anticoagulation toxicity, and infections appear to be minimal. Air embolism is a rare but preventable complication of extracorporeal circuits and is greatly reduced with the proper use of automated machinery. Intermittent HD The advantages and limitations of intermittent HD are described in reference [68]. Advantages The main advantage of HD is the relatively rapid removal of uremic toxins and ultrafiltration of fluid. This makes the technique well suited for acute situations. Limitations HD is not a continuous therapy and, as with HF, it requires good vascular access. A purified water supply is also required, as well as anticoagulation, which should always be minimized. The technique may not be suitable for hemodynamically unstable patients. Often the major limiting factor is the availability of expert nursing staff [69], especially in the ICU [70]. Practical guidelines for prescription HD is only possible with good vascular access, provided either by a double-lumen HD catheter or a single-lumen catheter of sufficient diameter to achieve flows for single-needle dialysis. Catheter lengths vary from 5 cm for neonates to 20 cm for large adolescents. Bloodline choice depends on the priming (extracorporeal) volume, which traditionally has not exceeded 10% of the blood volume (approximately 80 ml/kg). Dialyzer choice depends on the priming volume and maximum flow rate, with a surface area that should not exceed the child's surface area and with a urea clearance between 3 and 5 ml/kg per min.
There is no evidence to guide dialyzer choice in pediatric practice, but meta-analysis in adult patients with ARF suggested that synthetic membranes conferred a significant survival advantage over cellulose-based membranes, although with no similar benefit for recovery of renal function [71]. Bloodline priming is usually performed with isotonic saline. Small babies, anemic patients, and those in an unstable cardiocirculatory condition require priming with albumin or blood. HD catheter care After the session the catheter should be flushed with isotonic saline and filled with undiluted heparin (1,000 IU/ml), with volumes according to the manufacturer's recommendations (usually marked on the catheter itself). HD prescription The first session should not exceed 2–3 h, but the standard time is usually 4 h. Longer sessions are advisable to avoid too-rapid ultrafiltration and disequilibrium syndrome. All children should be dialyzed using volume-controlled machines and with bicarbonate dialysate. The blood pump rate is usually 6–8 ml/kg per min, but depends upon the catheter and patient size [69]. The ultrafiltration target should not exceed 0.2 ml/kg per min for acute patients, who should be carefully monitored for hypovolemia and hypotension. Sodium profiling is rarely used in pediatric HD practice. Anticoagulation is usually with heparin (50–100 IU/kg per session, including the initial bolus). Reinfusion is usually performed with isotonic saline. Complications occurring during acute HD For hypotension, the ultrafiltration should be switched off and isotonic saline infused into the venous line until the blood pressure normalizes; additionally, 20% albumin (5 ml/kg) might be helpful. Hypertension is treated according to standard hypertension protocols available elsewhere [72]. Disequilibrium syndrome is now a rare event with adequate control of ultrafiltration and stepwise reduction of uremic toxins. Hypoglycemia should not occur with the use of glucose-containing dialysis fluid. In cases of anemia, transfusions are avoided unless the patient is symptomatic. Erythropoietin may be given intravenously at the end of dialysis (50–200 IU/kg) to maintain hemoglobin levels. Medications The clearance of drugs on HD or during CRRT needs to be considered. Reference should be made to standard texts [73, 74].
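The quoted HD session limits lend themselves to the same kind of back-of-the-envelope check, sketched below. Session lengths, pump rates, and heparin doses simply follow the figures given above; this is illustrative only and not a substitute for unit protocols.

```python
# Minimal sketch: acute HD session limits from the figures quoted above.
# Illustrative only; prescriptions must be individualized.

def acute_hd_limits(weight_kg: float, first_session: bool = False) -> dict:
    session_min = 150 if first_session else 240        # 2-3 h first, ~4 h later
    return {
        "session_min": session_min,
        "blood_pump_ml_min": (6 * weight_kg, 8 * weight_kg),
        "max_ultrafiltration_ml": round(0.2 * weight_kg * session_min),
        "max_priming_volume_ml": round(0.10 * 80.0 * weight_kg),  # <10% blood vol.
        "heparin_iu_per_session": (50 * weight_kg, 100 * weight_kg),
    }

print(acute_hd_limits(weight_kg=20.0, first_session=True))
```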
[ "acute renal failure", "guidelines", "peritoneal dialysis", "hemodialysis", "hemofiltration" ]
[ "P", "P", "P", "P", "P" ]
Eur_Radiol-3-1-1797072
MR imaging of therapy-induced changes of bone marrow
MR imaging of bone marrow infiltration by hematologic malignancies provides non-invasive assays of bone marrow cellularity and vascularity to supplement the information provided by bone marrow biopsies. This article will review the MR imaging findings of bone marrow infiltration by hematologic malignancies with special focus on treatment effects. MR imaging findings of the bone marrow after radiation therapy and chemotherapy will be described. In addition, changes in bone marrow microcirculation and metabolism after anti-angiogenesis treatment will be reviewed. Finally, new specific imaging techniques for the depiction of regulatory events that control blood vessel growth and cell proliferation will be discussed. Future developments are directed toward yielding comprehensive information about bone marrow structure, function and microenvironment. Introduction Bone marrow is the fourth largest organ of the human body. Its main function is hematopoiesis, i.e., it provides the body with erythrocytes, leukocytes and platelets in order to maintain oxygenation, immune function and auto-restoration of the body. MR imaging provides a non-invasive visualization of the bone marrow and may be used to define its cellular or fat content as well as its vascularity and metabolism. Treatment effects due to irradiation, chemotherapy or other new treatment regimens may change one or several of these components, thereby causing local or generalized changes in bone marrow signal intensity on MR images. This article will provide an overview of such treatment-related bone marrow signal changes on MR imaging. In order to recognize these treatment effects, one has to be familiar with the MR imaging features of the normal and non-treated pathologic marrow. Thus, this article will first provide a brief overview of the MR imaging features of the normal and non-treated pathologic marrow as a basis for subsequent descriptions of the bone marrow after conservative or new treatments. Indications to study treatment effects In patients with hematopoietic malignancies, the diagnosis of neoplastic bone marrow infiltration is crucial to determine prognosis and to identify suitable treatment protocols [1]. This diagnosis is usually obtained by iliac crest biopsy, which is mandatory for staging and histopathologic classification of the disease [2–4]. In non-Hodgkin's lymphoma (NHL), a neoplastic bone marrow infiltration indicates the highest stage (stage four) according to the Ann Arbor classification. In cases of myeloma, the extent of bone marrow infiltration may be associated with any stage of the Salmon and Durie classification [2]. In both instances, iliac crest biopsy may be false negative when the bone marrow infiltration is focal rather than diffuse [3, 4]. The role of MR imaging for the depiction of bone marrow infiltration by hematologic malignancies is controversial and depends on the type, stage and clinical course of the malignancy. For patients with multiple myeloma, MR has been established in some centers as an integrative technique for staging and treatment monitoring, since it proved to be highly sensitive for the detection of bone marrow infiltrates and provided important additional information to conventional bone surveys. In several studies, MR showed a higher sensitivity than conventional bone surveys and bone scintigraphy for the detection of focal bone marrow lesions [5, 6]. Thirty-three percent of patients with myeloma were "understaged" based on the skeletal survey when compared to MR [7].
Some MR findings have been shown to have direct prognostic relevance: in patients with stage I disease according to Salmon and Durie, a pathologic bone marrow MR combined with a normal bone survey was associated with a worse prognosis than a normal MR and a normal bone survey [8, 9]. In patients with stage III disease according to Salmon and Durie, a diffuse bone marrow infiltration, as shown on MR images, was associated with a worse prognosis than a multifocal bone marrow infiltration [10, 11]. For patients with Hodgkin’s lymphoma and high-grade NHL, FDG-PET has been established as the imaging modality of choice for staging and treatment monitoring, and MR imaging should only be considered in selected cases with a high risk of bone marrow involvement and equivocal findings on PET or extracompartmental tumor growth. Neither MR nor FDG-PET is meant to replace marrow biopsy, because the histopathologic subtype of lymphoma has to be defined and because minimally diffuse, microscopic marrow involvement can be missed by either imaging technique [12]. On the other hand, routinely performed iliac crest biopsies cover only a small portion of the entire bone marrow and may also yield false-negative findings [3, 4, 12]. Thus, in selected patients with a high risk of bone marrow involvement, MR imaging may contribute clinically significant information. In a study by Hoane et al. in 98 patients with malignant lymphoma, up to one-third of the patients evaluated with routine iliac crest biopsies had occult marrow tumor detectable with MRI [12]. In addition, patients with negative marrow biopsies but pathologic MR were found to have a worse prognosis than patients with negative marrow biopsies and normal MR [13]. Thus, since the diagnosis of bone marrow involvement affects treatment decisions and prognosis, these authors concluded that optimal bone marrow evaluation in such patients should include both biopsy and MR [12]. Another valuable contribution of MR is its capacity to detect paravertebral soft-tissue masses as well as epidural masses and neural foramen invasion, which may accompany vertebral disease. In patients with clinical symptoms such as unexplained back pain or new neurological symptoms, MR is the modality of choice to detect such paravertebral and epidural masses accompanying bone marrow infiltration of the spine. For patients with leukemia, there is currently no role for routine bone marrow evaluation with MR imaging, neither before nor after therapy. Particular questions that may be addressed by MR in these patients are the search for suitable biopsy sites when new or relapsed disease is suspected despite a negative bone marrow biopsy, and the diagnosis of treatment complications, such as treatment-induced bone marrow infarcts or avascular necrosis, when conventional radiographs are negative or equivocal [14]. In summary, dedicated MR imaging of the bone marrow in patients with hematologic malignancies is currently restricted to patients with multiple myeloma, selected patients with other malignant lymphomas with a high risk of bone marrow infiltration and/or extracompartmental tumor growth, and patients with potential treatment complications. Additional patients with hematologic malignancies may undergo MR imaging for evaluation of other pathologies outside the bone marrow.
Since the bone marrow is depicted on nearly every MR examination throughout the body, familiarity with the appearance of the normal and pathologic marrow of these patients before and after therapy is crucial for the radiologist in order to provide a comprehensive diagnosis. Such “secondary” evaluations of the bone marrow may well exceed the above-mentioned primary indications in frequency in this patient population.

Technique

Standard techniques to depict the bone marrow at 1 T and 1.5 T clinical scanners comprise plain T1-weighted spin-echo (SE) or fast-SE (FSE) sequences as well as short TI inversion recovery (STIR) sequences. The T1-weighted, non-fat-saturated SE or FSE sequences are best suited to define the cellular content of the bone marrow and should be included in any protocol for MR imaging of the bone marrow. STIR sequences are useful as a screening sequence to search for abnormalities in the bone marrow, most of which appear with a very high signal intensity on these sequences. There is some debate about the optimal inversion time (TI) for depiction of bone marrow abnormalities with STIR sequences [15–18]: some authors prefer a TI that causes a suppression of all normal structures and provides a maximal contrast between normal (no signal) and pathologic (high signal) tissues. Other authors prefer a slightly longer TI in order to provide some additional background signal of normal tissues (low signal) and, thus, improved anatomical information. The advantages of STIR sequences are that they provide a very high tissue contrast and that they are insensitive to magnetic field inhomogeneities. The disadvantages are that STIR sequences have a limited signal-to-noise ratio and that the fat suppression technique is non-specific: the signal from tissue or fluid with a T1 similar to that of fat will also be suppressed, for example, mucoid tissue, hemorrhage, proteinaceous fluid, and gadolinium [18]. Selective fat-suppressed T2-weighted images are an alternative to STIR images at high field MR scanners. Selective fat saturation is lipid specific, usually provides a higher signal-to-noise ratio than STIR sequences and does not suppress gadolinium-based contrast agents (i.e., can be added after contrast medium administration). However, selective fat saturation is susceptible to magnetic field inhomogeneities. To achieve reliable fat saturation, the frequency of the frequency-selective saturation pulse must equal the resonance frequency of lipid. Inhomogeneities of the static magnetic field will shift the resonance frequencies of both water and lipid; the resulting frequency offset leads to poor fat suppression or, even worse, saturation of the water signal instead of the lipid signal. Static field inhomogeneities inherent in magnet design are relatively small in modern magnets and can be reduced by decreasing the field of view, centering over the region of interest, and autoshimming. However, substantial inhomogeneities can be caused by local magnetic susceptibility differences such as those found at air-bone interfaces or around foreign bodies like metal or air collections [18]. Of note, T2-weighted FSE sequences without fat saturation are probably the worst sequences for evaluation of the bone marrow, since both lesions and normal fatty marrow appear hyperintense on these sequences.
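The choice of TI in the STIR debate above follows directly from inversion-recovery physics: after a 180° pulse, the longitudinal magnetization of a tissue crosses zero at TI = T1 · ln 2. The following minimal sketch, assuming mono-exponential recovery and an approximate fat T1 of 250 ms at 1.5 T (both assumed values, not taken from this article), illustrates why fat-nulling TIs in the range of roughly 150–170 ms are commonly used:

import math

# Mono-exponential inversion recovery: Mz(TI) = M0 * (1 - 2 * exp(-TI / T1)).
# The fat signal is nulled when 1 - 2 * exp(-TI / T1) = 0, i.e., TI = T1 * ln(2).
T1_FAT_MS = 250.0  # assumed approximate T1 of fat at 1.5 T

ti_null = T1_FAT_MS * math.log(2)
print(f"Fat-nulling TI at 1.5 T: {ti_null:.0f} ms")  # ~173 ms

Somewhat shorter effective TIs, such as the 150 ms used in the STIR sequence of Fig. 4, are consistent with this estimate, since a finite TR leaves the magnetization incompletely recovered before each inversion and thereby lowers the practical null point.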
Other MR imaging techniques have been developed to improve the detection and quantification of diffuse bone marrow involvement. These techniques include chemical-shift imaging, bulk T1 relaxation time measurement, and hydrogen 1 spectroscopy [19]. All of these methods aim to measure the fat content or the water/fat fraction more accurately. However, these measurements have so far not demonstrated clinical significance, and hence these techniques are currently not used for routine imaging. Diffusion-weighted MR imaging techniques have been reported to be useful for the differentiation of neoplastic marrow infiltration and pathologic vertebral fractures [20]. Recently, further advanced diffusion-weighted whole body scans have been described for treatment monitoring of patients with leukemia [21]. The technique relies on selective excitation of the water resonance and generation of image contrast that is dependent upon differential nuclear relaxation times and self-diffusion coefficients; with diffusion weighting, the signal attenuates approximately as S(b) = S0 · exp(−b · ADC), where b is the diffusion-weighting factor and ADC the apparent diffusion coefficient.

Contrast-agent enhanced scans

In most instances, the administration of Gd-based contrast agents is not necessary for evaluation of bone marrow disorders. Administration of Gd-DTPA can be helpful to differentiate cysts and tumors, to differentiate necrotic and viable tumor tissue before a biopsy, in suspected osteomyelitis or in equivocal cases of bone infarcts. If Gd-DTPA-enhanced scans are performed, fat saturated T1-weighted sequences should be used in order to suppress the fatty components of the bone marrow with intrinsic high signal intensity and, thus, to provide an optimal depiction of the Gd-enhancement. The diagnosis of lesion vascularization based on comparisons between plain non-fat-saturated and fat-saturated, Gd-enhanced T1-weighted sequences is straightforward if the investigated focal lesion shows a marked enhancement. However, based on such comparisons alone, it may be difficult to determine whether a certain lesion shows minimal or no contrast enhancement. In these cases, we recommend obtaining additional Gd-enhanced non-fat-saturated T1-weighted MR sequences with pulse sequence parameters identical to the plain sequences. A subtraction of the pre- and post-contrast sequences can then establish the presence or absence of contrast enhancement of the focal lesions listed above (a minimal example is sketched below). Additional dynamic, contrast enhanced scans after administration of Gd-DTPA have been applied by some investigators in order to generate estimates of the blood volume of the normal or pathologic bone marrow before and after treatment [22, 23]. Such dynamic contrast enhanced MR studies with gadolinium chelates should be performed with rapid acquisition techniques, producing interscan intervals as short as possible, to best measure the rapidly evolving distribution patterns of these small molecular probes (<600 daltons). Although investigated since 1993 [23], dynamic Gd-DTPA-enhanced MR studies have not shown clinical significance so far. In addition to bone marrow perfusion, the amount of contrast agent at any time within the bone marrow depends on several additional inter-related variables including microvessel permeability, endothelial surface area, hydrostatic pressure, osmotic pressure, diffusion, convective forces, interstitial pressure and clearance. Clearly, contrast media kinetics in the bone marrow is not a simple matter and shows a high interindividual and intraindividual variability with respect to the mentioned influencing factors [24–26]. This may be one reason why the technique has not found wide clinical application to date.
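As an illustration of the pre/post-contrast subtraction approach recommended above, the following minimal sketch assumes two co-registered T1-weighted volumes, acquired with identical pulse sequence parameters and already loaded as NumPy arrays (the array names and the lesion mask are hypothetical):

import numpy as np

def subtraction_image(pre, post):
    """Voxel-wise difference of co-registered pre- and post-contrast volumes.
    Residual signal in the difference image indicates true Gd enhancement,
    independent of the intrinsically bright T1 signal of fatty marrow."""
    return post.astype(np.float32) - pre.astype(np.float32)

# Hypothetical usage:
# diff = subtraction_image(pre_volume, post_volume)
# mean_lesion_difference = diff[lesion_mask].mean()  # clearly > 0 suggests enhancement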
New macromolecular contrast media (MMCM) may provide more specific information on bone marrow blood volume and sinus permeability, which may be more useful to study treatment effects. Small molecular Gd-chelates can only estimate the blood volume (immediate post-contrast scans) and extracellular space (later post-contrast scans) of the normal or abnormal bone marrow, because these small molecules permeate readily and non-selectively across normal and abnormal capillaries in the bone marrow. MMCM can also estimate the blood volume based on immediate post-contrast scans. In addition, MMCM are so large that their diffusion across microvessels is affected by the permeability of these vessels. This may be helpful for treatment monitoring of anti-angiogenesis drugs, which specifically and readily decrease microvascular permeability, but which have no immediate effect on the blood volume. Thus, MMCM may be able to predict a response to anti-angiogenic treatment before a subsequent arrest in tumor vessel growth (decrease in blood volume) and before clinical signs of response (laboratory parameters) are apparent [27]. At this time, none of the MMCMs are yet FDA approved, but several of these agents are currently in advanced stages of development for applications in patients (phase II and III clinical trials), such as MS-325/Vasovist (Epix and Schering, approved in Europe), Gadomer-17 (Schering), SHU555C (Schering), Sinerem/Combidex (Guerbet/Advanced Magnetics), B-22956/1 (Bracco) and Code 7228 (Advanced Magnetics). In order to acquire useful kinetic information from MMCM-enhanced studies, a series of images spanning at least 20–30 minutes after contrast medium administration is required [27]. On the other hand, since the transendothelial diffusion of MMCM is a rather slow process, it is usually sufficient to acquire MMCM-enhanced dynamic data at intervals of one to two minutes or, for specific questions, even to obtain just one delayed contrast enhanced scan. Applications of potential clinical interest will be mentioned in the specific sections below. One new class of contrast agents, particularly noteworthy with respect to bone marrow imaging, is the group of ultrasmall superparamagnetic iron oxide particles (USPIO). These particulate iron oxide contrast agents are phagocytosed by macrophages in the normal bone marrow, where they induce a T2-shortening effect. The principle is similar to MR imaging of the liver with superparamagnetic iron oxide particles (SPIO). However, SPIO used for liver imaging have a diameter of >50 nm, whereas USPIO used for bone marrow imaging have a diameter of <50 nm. USPIO particles are not taken up in neoplastic marrow infiltrates, which do not contain macrophages [28–30]. Thus, USPIO may be used to differentiate hypercellular normal and neoplastic marrow. After infusion of USPIO at a dose of 2.6 mg/kg body weight, the normal marrow shows a USPIO induced signal loss as opposed to focal neoplastic infiltrates, which do not show any signal loss and, thus, stand out as bright lesions [17, 29, 31]. STIR or T2-weighted fat saturated sequences are best suited for such USPIO-enhanced MR scans. Metz et al. found a significantly increased number of detected focal bone marrow lesions (<1 cm) in patients with lymphoproliferative disorders on these sequences after administration of USPIO compared to non-enhanced scans [17].
The USPIO Ferumoxtran-10 (Sinerem/Combidex, Guerbet/Advanced Magnetics) is expected to become approved for clinical applications in Europe in 2006 and has already shown its ability to differentiate normal and pathologic hypercellular marrow and to detect multifocal lesions within the bone marrow [17, 31]. Other USPIOs in different stages of preclinical and clinical trials are SHU555C/Resovist S (Schering), Code 7228 (Advanced Magnetics), VSOP (Ferropharm) and Clariscan (Amersham/GE). Future imaging developments are likely to generate combined techniques that will maximize the information to be extracted from the image. Tumor location, morphology and function will be integrated. For example, both dynamic contrast-enhanced MR imaging and MR spectroscopy can be acquired in a single diagnostic session to define bone marrow vascular and metabolic characteristics [32]. New developments for whole body MR imaging, such as parallel imaging techniques, dedicated coils (Angio-SURF), and the total imaging matrix (Siemens systems, Avanto), may provide a “screening” of the whole red bone marrow for tumor infiltration within a reasonable time. Additional development is being directed towards combined PET and MR imaging, either by retrospective spatial registration of data from PET and MR images obtained on separate PET and MR machines or by sequential data acquisitions on combined PET-MR scanners [33, 34]. PET-MR imaging has been described as superior to PET-CT, at least for some applications, because of the improved intrinsic soft tissue contrast and potential direct bone marrow depiction provided by MR.

Normal bone marrow

The normal bone marrow undergoes age-related changes of its cellular content. In adults, the normal bone marrow is characterized by a partial or complete fatty conversion and low cellularity, which leads to a relatively high signal intensity on plain T1-weighted images and low signal intensity on STIR or fat saturated T2-weighted MR images [35–38]. In children, the normal bone marrow is highly cellular, which leads to a low signal intensity on plain T1-weighted images and high signal intensity on STIR or fat saturated T2-weighted MR images. With increasing age, a conversion from this highly cellular marrow in children to fatty marrow in adults occurs, with a gradual increase of the bone marrow signal on T1-weighted MR images and a gradual decline of the bone marrow signal on STIR or fat saturated T2-weighted MR images over time. This conversion also follows a particular distribution pattern within the skeleton: it starts in the peripheral skeleton and progresses centrally [38]. Within long bones, it first involves the epiphyses, then the diaphyses and, finally, the metaphyses [38]. In the vertebrae, it starts in the center, around the venous plexus, and progresses peripherally [38]. In adults, the signal intensity of the normal bone marrow is typically hyperintense to surrounding muscle and intervertebral disks on T1-weighted MR images and hypointense to surrounding muscle on STIR or fat saturated T2-weighted MR images. Knowledge of this pattern of conversion is useful to differentiate normal cellular marrow from focal or diffuse neoplastic involvement and to recognize treatment effects and tumor recurrence. The enhancement of the normal bone marrow in healthy persons after administration of standard small molecular Gd-chelates can vary greatly (range 3–59%, mean 21%, SD 11%) and decreases with increasing age [26].
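The percentages above refer to the relative signal increase of the marrow between plain and Gd-enhanced T1-weighted images. A minimal sketch, assuming mean region-of-interest (ROI) signal intensities measured on pre- and post-contrast images acquired with identical pulse sequence parameters (the function name and example values are hypothetical):

def relative_enhancement(si_pre, si_post):
    """Percentage signal increase of an ROI after contrast administration."""
    return 100.0 * (si_post - si_pre) / si_pre

# Example: a mean ROI signal of 400 before and 520 after Gd yields 30%,
# below the ~40% rule-of-thumb threshold for adults older than 40 years
# discussed below.
print(relative_enhancement(400.0, 520.0))  # 30.0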
As a rule of thumb, a signal Gd-enhancement of less than 40% on T1-weighted MR images was reported to be normal in adults older than 40 years. However, this threshold is dependent on the applied field strength and pulse sequence parameters. The relative enhancement of both normal and abnormal marrow may be higher with new, more sensitive pulse sequences [25]. Thus, each investigator should establish the specific threshold for normal bone marrow enhancement for the technique used at their particular MR scanner and institution.

Pathologic bone marrow in hematologic malignancies

Neoplastic infiltration of the bone marrow replaces the fatty converted marrow with neoplastic cells, thereby increasing the cellular content of the bone marrow; this prolongs T1- and T2-relaxation times and correspondingly decreases the T1-signal and increases the T2-signal of the bone marrow on MR images. The detection of neoplastic bone marrow infiltrations with MR imaging depends on the quantity and distribution of cellular infiltration. The distribution of neoplastic bone marrow involvement in patients with hematologic malignancies may be focal, multifocal or diffuse. In patients with NHL, a focal or multifocal involvement is more common than the diffuse infiltration pattern. In patients with myeloma, an additional, typical “salt and pepper” or variegated distribution may be observed, most frequently in stage I disease according to Salmon and Durie. In patients with leukemia, the bone marrow is usually involved in a diffuse fashion. A multifocal involvement may be seen in a small proportion of patients, particularly those with AML. The detection of focal, multifocal and “salt and pepper” or variegated infiltrations of the bone marrow in patients with NHL is straightforward: the MR signal intensity of focal bone marrow lesions is typically iso- or hypointense to surrounding muscle and intervertebral disks on T1-weighted MR images and hyperintense to surrounding muscle on STIR or fat saturated T2-weighted MR images. Since these focal lesions are also associated with an increased angiogenesis, they show an increased signal enhancement compared to the surrounding bone marrow on fat saturated T1-weighted MR images. The detection of a diffuse bone marrow infiltration with MR imaging is limited. Using standard MR scanners and pulse sequences, an infiltration of the bone marrow with more than 30% of neoplastic cells can be readily detected by a diffusely decreased T1-signal and a diffusely increased T2-signal [39]. An infiltration with less than 20% neoplastic cells cannot be distinguished from normal marrow with standard MR pulse sequences. Several authors reported a normal bone marrow MR signal in patients with leukemia, in patients with early stages of bone marrow invasion by lymphoproliferative diseases and even in up to one-quarter of patients with stage III multiple myeloma [40]. The administration of Gd-DTPA is not necessary for staging of focal, multifocal and “salt and pepper” lesions, which can be readily detected on plain MR scans. Dynamic Gd-DTPA-enhanced MR scans have repeatedly documented an increased blood volume (i.e., increased enhancement) of neoplastic infiltrations compared to the normal bone marrow [22, 23, 41].
As one would expect, the Gd-enhancement was significantly higher in marked bone marrow infiltrations than in mild or absent infiltration (P<0.05), and the enhancement was higher in lesions with high vessel density than in lesions with low vessel density at histology (P=0.01). In addition, a higher enhancement was found in the presence of increased serum immunoglobulins [42]. However, the Gd-DTPA enhancement of focal neoplastic lesions has not shown additional clinically significant information for staging purposes so far. In some cases, Gd-administration may help to diagnose a diffuse bone marrow infiltration. According to studies from Staebler et al., a bone marrow signal intensity increase exceeding 40% indicates a diffuse infiltration in adult patients [43].

Irradiation

MR signal changes of the bone marrow during and after irradiation are time and dose dependent. In the acute phase (days 1–3 of irradiation), the bone marrow develops an edema, which appears hypointense on T1-weighted MR images and hyperintense on fat-saturated T2-weighted and STIR images. Contrast enhanced T1-weighted images show a transiently increased enhancement of the bone marrow during this phase. Subsequently (days 4–10), focal T1-hyperintense and T2/STIR-hypointense areas of hemorrhage may occur. The bone marrow ultimately undergoes a conversion to fatty marrow, which closely matches the irradiation field and which appears very bright on T1-weighted MR images (close to subcutaneous fat) and dark on fat suppressed images (Fig. 1). Depending on the applied dose, this fatty transformation of the bone marrow may be detected with MR imaging as early as 10–14 days after therapeutic irradiation [44]. In other cases, the edema may persist for weeks and a fatty conversion may only become apparent months after the irradiation. This fatty conversion is reversible after an irradiation of less than 30–40 Gy and irreversible after an irradiation with more than 40 Gy. The time frame of reconversion after low dose irradiation has not been specified so far and is most likely highly variable, depending on potential additional chemotherapy or GCSF treatment, location and extent of the affected anatomical area, and age of the patient. Contrast enhanced scans show a markedly decreased enhancement of the fatty converted bone marrow.

Fig. 1 T1-weighted non-enhanced (a) and fat saturated contrast enhanced (b) MR images after radiation therapy of a focal PNET tumor infiltration in L2. Note that the fatty conversion of the bone marrow in the irradiation field, L1 to L3, is only apparent on the non-fat saturated, non-enhanced MR image. The tumor in L2 shows a mild Gd-enhancement on contrast-enhanced scans after radiotherapy.

In addition to these bone marrow signal changes in the irradiation field, several authors also reported a small but measurable fatty conversion of the adjacent bone marrow outside of the irradiation field. The bone marrow outside the irradiation field showed similar MR signal intensity changes compared to the bone marrow within the irradiation field, but to a much lower degree. Interpretations of these findings differ; the most appealing explanation is a fatty conversion due to scatter irradiation [45, 46]. In children, irradiation may also cause impairment or arrest of skeletal maturation. Local irradiation may impair growth at the growth plate of the irradiated bone after doses as low as 1.3 Gy [47].
In parallel to the above described changes of the bone marrow in general, the metaphyses and growth plates of the affected bones may also show an edema on MR images initially and, later, a fatty conversion. The severity of impairment in bone growth is dependent on the age of the patient at the time of the irradiation and the administered dose. High doses may lead to marked epimetaphyseal deformities. The affected metaphyses of long bones may show horizontal or longitudinal areas of signal loss on T1- and T2-weighted MR images, which resemble the metaphyseal bands and striations seen on conventional radiographs [47]. Irradiation of the spine may cause a scoliosis, which is typically concave to the irradiation field if only one side of the spine was irradiated. In children, irradiation of the whole spine may also result in a scoliosis. There is usually an associated impairment of the growth of paraspinal muscles as well as an impaired vertical growth of the irradiated vertebrae. Depending on the applied dose, the bone marrow of the vertebrae may undergo a usually transient or (rarely) persistent fatty conversion, and the bone marrow in the peripheral skeleton may show a compensatory hypercellularity. Irradiation of the brain in children with acute leukemia may result in growth arrest due to a deficiency in growth hormone [47]. In these patients, the bone marrow usually appears normal (in case of remission of the leukemia) on MR images. In children, irradiation of the proximal femur may cause a slipped capital epiphysis [47]. This may be diagnosed early on MR images by an asymmetry and typical widening of the center or posteromedial region of the affected physis, which is best seen on plain T1-weighted MR images. A potential subsequent closure of the physis may be diagnosed by sequential MRI; closure typically progresses from the posterior portion anteriorly. In adults, a short-term complication of local irradiation may be the development of an “irradiation osteitis”, which presents as a T1-hypointense, T2/STIR-hyperintense, inhomogeneous edema on MR images in the irradiation field and which shows a narrow zone of transition to the adjacent, non-irradiated bone marrow. There is no associated extraosseous soft tissue mass. The irradiation osteitis may be associated with insufficiency fractures, which may be more apparent on MR in some cases and on conventional radiographs in others. Thus, imaging of a suspected irradiation osteitis should always also include conventional radiographs of the affected bone. Irradiation osteitis is rare in children [47]. Other short-term complications of irradiation are insufficiency fractures and avascular necroses (AVN) in the irradiated bone. Insufficiency fractures are caused by normal stress on weight-bearing bones with an irradiation-induced decreased elastic resistance. Depending on the acuteness of the fracture and the time interval after irradiation, MR may show a more predominant edema or a more predominant fatty conversion of the bone marrow. The fracture line may be seen as a bright line on T2-weighted and contrast enhanced T1-weighted MR images if it is surrounded by adjacent marrow edema (Fig. 2). AVN is caused by an irradiation induced arteritis with fibrosis and endothelial proliferation, blocked arterial inflow or venous outflow, rise in intramedullary pressure, compromised perfusion and, finally, anoxia and death of bone marrow cells. The MR imaging findings of AVN are described in detail below.

Fig. 2 Coronal fat saturated T2-weighted (3200/50 ms) (a), T1-weighted (600/20 ms) (b) and fat saturated T1-weighted (600/20 ms) contrast-enhanced (c) MR images of the knee of a 16-year-old patient who underwent radiation therapy of the lower femur and popliteal region. Note fatty bone marrow conversion and fracture lines at the distal femur and proximal tibia (arrows), consistent with irradiation induced insufficiency fractures.

As long-term complications, benign or malignant tumors may occur after irradiation of the bone and bone marrow. Osteochondromas are the most common tumors that may develop after total body irradiation (TBI). The irradiation induced development of osteochondromas is inversely related to the age of the patient at the time of the TBI. Osteochondromas developed in about 6–18% of children after TBI. The latency for the development of osteochondromas after TBI is highly variable, but generally shorter than the latency for the development of malignant tumors [48]. On MR imaging, a continuity of the bone marrow space with the lesion and a cartilage cap with a thickness of not more than 3 mm are criteria that indicate a benign osteochondroma and exclude a malignancy. Of note, sarcomatous degeneration of an irradiation induced osteochondroma is extremely rare, although occasional cases have been reported [49]. Other benign bone tumors that may develop after TBI are fibrous dysplasia and aneurysmal bone cysts [48]. Radiation-induced sarcoma is a rare late complication of irradiation, which develops after a latency period of about 10 years in the previous irradiation field. Irradiation induced sarcomas are not directly related to the local radiation dose. Osteosarcomas, fibrosarcomas, malignant fibrous histiocytomas, and other sarcomas may occur [50, 51].

Cortisone treatment

Ischemic (avascular) necrosis is a well recognized complication of high dose cortisone treatment, seen in 1–10% of patients in the initial treatment phase for leukemias or lymphomas (Fig. 3). In addition, AVN occurs in 10% of long-term survivors of bone marrow transplantation (Fig. 4) who received high doses of steroids for the prevention or treatment of graft-versus-host disease. AVN has also been described after chemotherapy or irradiation [47]. The apparently increasing prevalence of this complication may be due to increasing recognition based on the increasing use of MR imaging.

Fig. 3 Coronal fat saturated spin-echo (3200/46 ms) MR images of the knee joint of a 13-year-old boy with ALL after treatment with high dose cortisone. There are multiple bone infarctions in the distal femur and proximal tibia (arrows). MR studies were obtained 14 months (a) and 28 months (b) after onset of treatment. Note that the infarcts decrease in size.

Fig. 4 Conventional radiograph (a), sagittal T1-weighted (600/20 ms) (b) and sagittal STIR sequence (4000/70 ms, TI: 150 ms) (c) of the ankle in a patient with a history of chronic myeloid leukemia and bone marrow transplantation, showing multiple infarcts in the distal tibia, the talus and the calcaneus (arrows), which are not visualized on the radiograph.

AVN is caused by vascular insufficiency, compromised bone marrow perfusion and, finally, anoxia and death of bone marrow cells. Bones with an end-arterial vascular supply and poor collaterals, such as the femoral head, distal femur and proximal tibia, proximal humeri, tali, scaphoid and lunate bones, are particularly prone to develop an AVN. Usually, the cartilage is not affected because it is nourished by synovial fluid.
MRI is the most sensitive non-invasive method for the diagnosis of bone marrow infarction. Early forms of AVN are characterized by a diffuse marrow edema. These T1-hypointense and T2-hyperintense areas of edema may be extensive and are non-specific in their MR appearance. Potential causes of such edema, which may all be related to steroid administration, are transient osteoporosis, osteomyelitis, occult intraosseous fracture, and stress fracture. However, in patients under high dose steroid treatment, these areas of edema may represent an early stage of AVN and should be considered a marker for potential progression to advanced osteonecrosis. Therefore, careful examinations for osteonecrosis are necessary when bone marrow edema is seen in these patients [52]. The classic MR appearance of advanced bone marrow infarction is characterized by a segmental area of low signal intensity in the subchondral bone on plain T1-weighted pulse sequences, outlining a central area of marrow, which may have variable signal intensities [53]. This crescentic, ring-like, well defined band of low signal intensity on T1-weighted images is thought to represent the reactive interface between the necrotic and reparative zones and typically extends to the subchondral plate. On T2-weighted images, this peripheral band classically appears hypointense with an adjacent hyperintense line, the “double line sign”. The hyperintense inner zone represents hyperemic granulation tissue; the hypointense outer zone represents adjacent sclerotic bone. Though characteristic of AVN, this sign is uncommon with the use of fast spin-echo sequences with or without fat suppression and is not necessary for diagnosis of the disease. There is no need to perform conventional T2-weighted sequences to find this sign. On STIR images, the band-like signal alterations of the bone marrow (corresponding to the “inner zone”) usually appear hyperintense. Mitchell et al. described four stages of AVN based on MR imaging findings [54]. Knowledge of these stages may be helpful in the recognition of AVN. Class A lesions have signal intensity characteristics analogous to those of fat, i.e., high signal intensity on T1-weighted images and intermediate signal intensity on T2-weighted images. Class B lesions demonstrate signal intensity characteristics similar to those of blood, i.e., high signal intensity on both T1- and T2-weighted images. Class C lesions have signal intensity properties similar to those of fluid, i.e., low signal intensity on T1-weighted images and high signal intensity on T2-weighted images. Class D lesions have signal intensity properties similar to those of fibrous tissue, i.e., low signal intensity on both T1- and T2-weighted images. Class A signal intensity tends to reflect early disease, and class D signal intensity tends to reflect late disease. However, these stages may not occur in a chronological order and did not show prognostic significance [53, 54]. There is currently no established role for Gd-administration in non-traumatic AVN. Of note, in children, the proximal femoral epiphyses may show a residual, small subcortical rim of hematopoietic marrow, which typically appears brighter than adjacent muscles on plain T1-weighted MR images. This should not be confused with the “double line sign” in early AVN, which is characterized by a subcortical rim that is iso- or hypointense compared to surrounding muscle. The percentage of the affected weight-bearing surface occupied by the AVN is the most reliable factor for predicting outcome [52].
AVN lesions that are entirely circumscribed and do not extend cranially to the cortical subchondral margin have a good outcome, independent of the overall size of the AVN lesion. AVN lesions that do extend to the subchondral margin and are associated with epiphyseal collapse are at risk of resulting in permanent disability. AVN may present in an atypical fashion. For example, AVN of the spine typically involves a single vertebra. The MRI findings of a wedge-shaped lesion with fluid intensity (hyperintense signal, like that of cerebrospinal fluid on T2-weighted images) are characteristic of AVN. However, AVN may involve two contiguous vertebrae and the intervening disc and can then be confused with infective or neoplastic processes [55].

Chemotherapy

Chemotherapy in patients with leukemia results in typical signal changes of the bone marrow on MR images, which reflect the underlying changes in the cellular composition and vascularity of the bone marrow [56–58]. During the first week of treatment, the bone marrow sinuses become dilated and hyperpermeable, leading to an edema. The edematous bone marrow shows a low signal intensity on plain T1-weighted MR images and a high signal intensity on plain T2-weighted and STIR images. This bone marrow edema was reported to be more pronounced in patients with AML than in patients with ALL, probably reflecting the higher bone marrow toxicity of the chemotherapeutic drugs applied for the treatment of AML (e.g., cytosine arabinoside). Subsequently, a marked decrease in bone marrow cellularity and a fatty conversion of the bone marrow develop, characterized by an increase in T1- and T2-relaxation times, an increased signal intensity on plain T1-weighted MR images (Fig. 5) and a decreased signal intensity on fat saturated T2-weighted and STIR images. After successful therapy, a normalization of the MR signal occurs with regeneration of normal hematopoietic cells in the bone marrow. This regeneration often occurs in a multifocal pattern, i.e., within the fatty converted marrow, multiple small foci of decreased signal on T1-weighted images and increased signal on T2- or STIR images develop.

Fig. 5 Coronal T1-weighted spin-echo (600/20 ms) images of the pelvis before (a) and after (b) chemotherapy for a fibrosarcoma of the pelvis (arrow). Note a decrease of hematopoietic bone marrow and an increased fatty conversion in the pelvis and both proximal femurs, while the tumor shrinks.

Hodgkin’s and non-Hodgkin’s lymphomas, low- and high-grade lymphomas as well as distinct subgroups of NHL differ markedly in their response to treatment. In Hodgkin’s lymphoma, a bone marrow infiltration at diagnosis is rare and expected to resolve after treatment. Patients with residual active bone marrow lesions need additional, dedicated treatment. MR is non-specific in differentiating residual viable from non-viable disease; FDG-PET or (in equivocal cases) biopsy are the preferable diagnostic methods to answer these questions. There are about 30 subtypes of NHL. About 30% of patients with NHL develop a diffuse large B-cell lymphoma, an aggressive B-cell lymphoma. About 20% of patients with NHL develop a follicular lymphoma, an indolent B-cell lymphoma [59]. About 6% of patients with NHL develop a mantle cell lymphoma, an aggressive B-cell lymphoma that is often widespread at diagnosis. Low-grade lymphomas are considered incurable and are often managed by “watchful waiting”.
High-grade lymphomas are treated with aggressive therapy regimens: chemotherapy, irradiation, often a combination of both, and/or autologous bone marrow transplantation. Bone marrow lesions in these patients are expected to change under therapy on MR images. In general, MR images of patients with Hodgkin’s lymphoma and NHL, including myeloma, show a conversion from hypercellular, hypervascularized to normocellular and less vascularized marrow after chemotherapy. However, the “ideal” evolution from hypercellular to normocellular marrow may not occur in all patients with malignant lymphomas. Rahmouni et al. reported that the marrow often returned to normal after treatment when the pattern was diffuse or variegated before treatment [60]. However, residual signal alterations of the bone marrow have also been reported after the end of therapy, particularly in patients with a focal pattern before treatment (Fig. 6) [60]. A conversion of a diffuse to a focal MR imaging pattern of marrow involvement, a reduction, but not disappearance, in lesion size and number, and persistent peripheral lesion enhancement on contrast-enhanced MR images (Fig. 6) have been described in association with a response to standard chemotherapy [58, 60]. In addition, focal bone marrow lesions in patients with lymphoma may show a persistent abnormal signal and/or a cystic or fatty degeneration: a fatty or cystic conversion of focal lesions has been reported as an indication of a response to treatment (Fig. 7) [61]. Low signal intensity on post-therapeutic T2-weighted images is usually associated with fibrosis and rules out relapse. A persistent intermediate hyperintense signal of focal bone marrow lesions on T2-weighted MR images (non-cystic, non-fatty) has been described in association with treatment induced necrosis and inflammation and was found in both responding and non-responding patients [60].

Fig. 6 Sagittal T1-weighted spin-echo sequences (500–700/15–25 ms) with (a, c) and without (b, d) contrast before (a, b) and after (c, d) chemotherapy for lymphoma. Note the decrease in size of the soft tissue component after chemotherapy, yet still significant contrast enhancement in the bone marrow.

Fig. 7 Sagittal MR images of a patient with malignant lymphoma and multifocal bone marrow infiltration. Sagittal STIR (a) and contrast enhanced fat saturated T1-weighted fast SE sequences (700/15–25 ms) (b) were obtained before chemotherapy. Fat saturated T2-weighted fast SE (4000/60 ms) (c), contrast enhanced fat saturated T1-weighted fast SE sequences (d) and non-contrast enhanced T1-weighted SE sequences (600/20 ms) (e) were obtained after chemotherapy. After therapy, the areas of previous tumor infiltration show a decreased signal on T2-weighted images, a decreased contrast enhancement and fatty degeneration (arrows).

In patients with myeloma, new or progressive vertebral compression fractures may occur as a complication of treatment response in vertebrae that had extensive marrow disease before treatment [58]. With response to treatment, the tumor mass that had replaced the trabeculae may resolve, and the unsupported vertebrae can collapse. An increasing back pain in patients with myeloma after treatment may therefore be caused by such vertebral compression fractures or by tumor progression. Relapse and poor response to treatment are well evaluated with MR imaging. In patients with clinical relapse, new focal lesions, an increase in the size of previously identified focal lesions or a change from focal to diffuse infiltration may be seen.
Additional signs of tumor progression are increasing paraosseous soft tissue masses or an increased Gd-enhancement of focal lesions after chemotherapy. Progression from focal to diffuse bone marrow neoplasia may sometimes be more difficult to assess, since a diffuse bone marrow infiltration may be indistinguishable from reconverted hematopoietic marrow after, e.g., GCSF treatment. As specified above, a new Gd-enhancement of >40% or a lack of USPIO uptake in a hypercellular marrow may indicate tumor progression or recurrence in these cases. A rare form of progression in myeloma under therapy is leptomeningeal spread within the central nervous system, reported in 18 out of a series of 1856 patients [62]. MR findings of leptomeningeal enhancement in the brain or spine helped to establish the diagnosis, which was subsequently confirmed by cytologic analysis of cerebrospinal fluid. Myelofibrosis and amyloidosis can also develop as a consequence of treatment in patients with myeloma. Myelofibrosis can be suggested on MR studies as a conversion of the entire bone marrow to diffuse hypointensity on both T1-weighted and STIR images. Amyloidosis can be seen as focal areas of hypointensity on both T1-weighted and STIR images [63]. Perfusion studies using MR enhanced with standard small molecular Gd-based contrast agents provide functional information concerning the response of bone marrow neoplasias to chemotherapy. Conventional cytotoxic drugs have direct or indirect effects on angiogenesis and cause a decrease in bone marrow contrast medium uptake within weeks or months [22, 23, 41]. However, although these techniques have been clinically available for more than a decade, perfusion studies are used infrequently or not at all in clinical practice, since the obtained functional changes in bone marrow vascularity do not, or only slightly, precede the obvious and readily apparent clinical parameters for treatment response. Treatment effects of various chemotherapeutic regimens on the bone marrow in patients with leukemia and lymphomas could also be detected with 31P MR spectroscopy. A treatment induced change in tumor pH with an alkaline shift was related temporally to increases in the phosphodiester/beta-adenosine triphosphate ratio and occurred before alterations in tumor size were documented [64]. Interestingly, changes in the 31P MR spectral profile could not only be detected by direct investigations of the bone marrow itself, but also by MR spectroscopy of the serum of the patients: the 31P MR spectral profile of the serum of responding patients changed to resemble that of normal serum, with typical, higher peak intensities as compared to non-treated and non-responding patients [65].

Bone marrow reconversion after standard therapy

After successful cytotoxic therapy and/or irradiation, the normal bone marrow may undergo a reconversion from fatty to highly cellular hematopoietic marrow. This reconversion occurs in a reverse fashion compared to the conversion from hematopoietic to fatty marrow described above, i.e., it progresses from the central skeleton to the periphery. Within long bones, it involves first the metaphyses and then the diaphyses. The presence of cellular marrow within the epiphyses in an adult patient with hematologic malignancies is always suspicious for neoplastic infiltration, especially when the rest of the bone marrow has not undergone a complete reconversion.
A reconversion of marrow within the epiphyses is only rarely seen, usually in conjunction with an extensive reconversion of the hematopoietic marrow of the whole bone. In patients with lymphoma, the reconversion process may be enhanced by administration of granulocyte colony stimulating factor (GCSF), which activates the hematopoietic marrow and decreases the period of aplasia after chemotherapy [66]. The differentiation between this reconverted, highly cellular normal hematopoietic marrow and recurrent tumor after chemotherapy is not possible with conventional MR techniques, since the relaxation rates and MR signal characteristics of highly cellular hematopoietic and highly cellular neoplastic bone marrow are similar [67]. Various investigators have addressed this problem, but were not able to differentiate reconverted hypercellular hematopoietic marrow and tumor infiltration using a variety of pulse sequences, static post-contrast MR images, and MR spectroscopy (Fig. 8) [66, 68].

Fig. 8 A patient with myeloma at different stages of therapy. (a) After chemotherapy and irradiation, the bone marrow of the pelvis and proximal femurs shows a major fatty conversion with small areas of hypointense, cellular marrow of uncertain significance on plain T1-weighted MR images. (b) After high-dose chemotherapy and GCSF treatment, several areas of hypercellular, hypointense marrow are seen in the marrow of the pelvis and proximal femurs. The lower row of images shows that these areas appear bright on fat saturated T2-weighted images (b1), hypointense on plain T1-weighted images (b2) and show a marked enhancement on Gd-enhanced T1-weighted scans (b3). (c) A follow-up study, 3 months later, again shows a fatty conversion of the bone marrow. In retrospect, the lesions in (b) were most likely due to reconverted hematopoietic marrow.

Dynamic T1-weighted MR images after intravenous bolus injection of standard small molecular Gd-chelates may be helpful in the differentiation between normal hypercellular hematopoietic and neoplastic marrow. When compared with red marrow, the enhancement of neoplastic bone marrow infiltrations occurred earlier, was steeper and did not last as long [69–71]. However, in our experience, the Gd-enhancement of the GCSF-treated, markedly hypercellular marrow and of neoplastic marrow in patients with hematologic malignancies shows considerable overlap and is of limited clinical value for a definitive differentiation of these entities. USPIO contrast agents may be more useful for such a differentiation of reconverted marrow and tumor recurrence. The pathophysiologic basis for this is the distribution of reticuloendothelial system (RES) cells in the bone marrow and their ability to phagocytose exogenous iron oxides. After chemotherapy, and especially after GCSF treatment, the bone marrow reconversion causes an increased quantity of all hematopoietic cell lines in the bone marrow, including RES cells [72]. In bone marrow neoplasia, on the other hand, the hematopoietic marrow is replaced by tumor cells and the number of RES cells is substantially reduced [73]. Thus, USPIO-enhanced MRI can differentiate these entities by depicting iron oxide-targeted RES cells, which are present in the reconverted hematopoietic marrow, but absent or substantially reduced in focal or multifocal tumor deposits [17, 29, 31].
Before USPIO administration, both the hypercellular hematopoietic and the neoplastic marrow appear with low signal intensity on plain T1-weighted MR images and with high signal intensity on STIR and fat saturated T2-weighted MR images (Fig. 8). After USPIO administration, however, the normal marrow, which takes up the USPIO, shows an iron oxide induced signal loss, whereas focal neoplastic marrow infiltrates, which do not take up the USPIO, stand out as bright lesions on STIR and fat saturated T2-weighted MR images (Fig. 9) [17, 29, 31]. The technique can also be applied in case of a diffuse, marked hypercellularity of the bone marrow (Fig. 10). Diffuse or focal cell components of the normal, non-neoplastic bone marrow, which appear iso- or hypointense to intervertebral disks or skeletal muscle on plain T1-weighted MR images, show a substantial signal loss on USPIO-enhanced STIR images. If this signal loss is not observed, a malignant infiltration is present (Fig. 10). On the other hand, a USPIO administration is not meaningful in patients with a fatty marrow on T1-weighted MR images, because no cells are present that could take up these particulate contrast agents. Thus, USPIO may be applied in selected patients in whom a differentiation of hypercellular normal marrow and neoplastic infiltration is warranted.

Fig. 9 A 42-year-old patient after recurrent chemotherapy and GCSF treatment with reconverted, hyperplastic hematopoietic marrow and multifocal bone marrow infiltration by lymphoma. Both hematopoietic marrow and focal tumor infiltration show a low signal intensity on plain T1-weighted images (a), as well as an increased signal intensity on plain STIR images (b); however, STIR images after iron oxide administration (c) show a marked signal decrease of the hematopoietic marrow, whereas focal tumors (arrows) do not show any iron oxide uptake; thus, the tumor-to-bone marrow contrast increases substantially (figure from [31]).

Fig. 10 A 57-year-old patient with myeloma after chemotherapy and GCSF treatment. T1-weighted images (a) show a pathologic fracture of Th 9 (arrow) and a diffuse hypointense signal intensity of the bone marrow in all vertebrae, compatible with a high bone marrow cellularity. Unenhanced STIR images (b) show a diffuse hyperintense bone marrow, also compatible with high bone marrow cellularity. After iron oxide infusion, the hypercellular bone marrow shows only minimal changes in signal intensity on STIR images (c), indicative of a diffuse tumor infiltration. Iliac crest biopsy revealed 80% tumor cells in the bone marrow (figure from [31]).

New therapy regimens

Angiogenesis-inhibiting drugs, such as thalidomide, may be useful for treating hematologic malignancies that depend on neovascularization, and these agents were recently added to some treatment regimens for patients with advanced myeloma [74]. Anti-angiogenic therapy is intended to stop cancer progression by suppressing the tumor recruitment of new blood supply. As such, anti-angiogenic drugs are generally tumoristatic, not cytotoxic. Thus, a successful therapeutic inhibition of angiogenesis can be expected to slow or stop tumor growth, but not to cause tumor regression or disappearance. Accordingly, MR imaging may show a persistent high bone marrow cellularity under anti-angiogenic treatment.
Signs of a response to this treatment are rather a delayed, less steep and smaller bone marrow enhancement after intravenous administration of small molecular Gd-chelates compared to pretreatment studies. New macromolecular contrast media (MMCM) may provide an earlier diagnosis of response or failure of anti-angiogenic treatment than standard small molecular contrast agents. MMCM are more sensitive in the detection of changes in vascular permeability than small molecular contrast agents, and such changes in the permeability of the bone marrow sinuses occur earlier than changes in perfusion and blood volume, which are typically assessed with small molecular contrast agents. In animal models, MMCM-enhanced MRI was able to define anti-VEGF effects of certain angiogenesis inhibitors as early as one day after initiation of therapy [75, 76]. Future studies have to show whether these results can also be obtained in patients; if so, they would obviously be of high clinical significance for treatment monitoring and management. In clinical practice, anti-angiogenic therapy regimens will almost certainly combine anti-angiogenic drugs with cytotoxic drugs. The effects of such combined therapies on the tumor accumulation and therapeutic effect of the individual drugs are complex and are currently being investigated in several experimental and clinical trials. Again, imaging techniques are the only available tool to provide a non-invasive and serial assessment of such combined therapy regimens. Some angiogenesis inhibitors, such as anti-VEGF antibody, decrease microvessel permeability and thereby reduce the tumoral delivery of large molecular cytotoxic drugs, but not of small molecular cytotoxic drugs [76]. MRI assays of angiogenesis can monitor such anti-angiogenesis therapy induced changes in tumor microvascular structure and optimize the choice and timing of cytotoxic drug administration. Other inhibitors of angiogenesis, such as anti-angiogenic steroids (tetrahydrocortisol, cortisone acetate), cyclodextrin derivatives (cyclodextrin tetradecasulfate) and tetracycline derivatives (minocycline), may increase tumor microvascular permeability and thus potentiate certain cytotoxic therapies [77, 78]. In addition, inhibitors of angiogenesis have been shown to effectively potentiate tumor irradiation effects [79]. This possible synergy between anti-angiogenesis drugs and cytotoxic drugs or irradiation, as a function of apparent tumor microvascular hyperpermeability, can also be interrogated with MMCM-enhanced MR imaging.

Stem cell transplantation

The most commonly used transplantation therapy for patients with lymphoma is autologous stem cell transplantation. For this, the patient first receives a conditioning therapy; his or her own stem cells are then collected by leukapheresis; the patient subsequently receives high-dose chemotherapy or irradiation, and the previously collected stem cells are reinfused. With some types of lymphoma, an autologous transplant may not be possible in case of persistent malignant bone marrow infiltration. Even after purging (treatment of the stem cells in the laboratory to kill or remove lymphoma cells), reinfusion of some lymphoma cells with the stem cell transplant is possible. It would, therefore, be of high clinical significance to be able to differentiate patients with still viable lymphoma cells in their bone marrow at the time of leukapheresis from patients in true complete remission. In refractory diseases or in aplastic anemia, allogenic marrow transplantation or cord blood cell transplantation are also used.
This much more aggressive procedure is initiated by a conditioning high dose chemotherapy and/or total body irradiation, with subsequent reinfusion of allogenic donor cells, i.e., stem cells from a matched sibling or unrelated donor. Allogenic transplantation has limited applications because of the need for a matched donor. Another drawback is that the side effects of this treatment are too severe for most people over 55 years old. After the conditioning therapy for allogenic marrow transplantation, the patients reach complete aplasia, are usually isolated on a bone marrow transplantation unit and should only undergo MR imaging for vital indications. Only limited studies exist on the evaluation of bone marrow with MR imaging in the setting of bone marrow transplantation in patients with hematologic disorders. Based on these data, the MR imaging characteristics of the bone marrow after autologous or allogenic marrow transplantation are apparently quite similar [80]. After the conditioning therapy and before bone marrow transplantation, the bone marrow would be assumed to be depleted of tumor cells. Ideally, a conversion from focal or diffuse hypercellular marrow to normocellular or fatty marrow would be expected to occur after the induction high-dose chemotherapy. However, MR studies in patients before bone marrow transplantation showed that some patients have persistent focal lesions. Metz et al. described residual focal bone marrow lesions in patients with lymphomas right before leukapheresis [17]. Negendank and Soulen found that patients with residual lesions on MR before bone marrow transplantation had a significantly shorter median time until relapse compared to patients who did not show any residual bone marrow disease on MR images [81]. By contrast, Lecouvet et al. reported that patients with an apparently normal marrow and patients with a persistent pathologic bone marrow right before autologous or allogenic transplantation did not show differences in survival [80]. One reason for this discrepancy may be the limited ability of MR imaging to detect and differentiate viable and non-viable tumor cells: some patients with a “clean” bone marrow may have residual microscopic active tumor cells, while macroscopic residual lesions on MR imaging in other patients may or may not be viable. In our opinion, it would be highly significant to investigate MR imaging criteria (e.g., diffusion weighted MR, dynamic MR, new contrast agents, spectroscopy) for the differentiation of such persistent focal bone marrow lesions that may or may not develop into recurrent disease after stem cell transplantation. After bone marrow transplantation, the evolution of distinct MR signal patterns of the bone marrow has been described [82, 83]. During the first post-transplantation days, the bone marrow shows a decline in signal intensity on T1-weighted images and an increased signal intensity on T2-weighted images, probably due to a treatment-induced edema. Within 3 months from bone marrow transplantation, a characteristic band pattern appears on T1-weighted MR images of the spine. This band pattern consists of a peripheral T1-hypointense zone and a central T1-hyperintense zone. At histologic examination, the peripheral zone corresponds to repopulating hematopoietic marrow and the central zone to marrow fat. This “band pattern” may be seen for several months [82, 83].
Subsequently, the band pattern gradually evolves into a homogeneous appearance of the marrow after successful bone marrow transplantation. On late post-transplant MR studies, years after bone marrow transplantation, the bone marrow shows the fatty conversion of adult marrow. The signal intensity of the post-transplant bone marrow on T1-weighted MR images is usually increased compared to age-matched controls [84]. In some patients, residual marrow abnormalities may be observed on MR images after bone marrow transplantation and the administration of high-dose myeloablative chemotherapy, in the same way that they may occur after the conventional chemotherapy and conditioning chemotherapy regimens described above (Fig. 11). For example, sharply defined focal low signal intensity areas of bone marrow on T1-weighted images have been reported in patients who are in complete remission after transplantation [85]. Patients with these abnormalities did not have a poorer outcome than those with normal post-transplantation MR imaging findings [80]. These data show that there is clearly a need for the MR imaging technique to add functional information to the anatomical data, e.g., by adding spectroscopy, perfusion studies or markers for tumor cell proliferation. Fig. 11 Patient with malignant lymphoma after TBI and bone marrow transplantation: Plain T1-w MR image (center) shows residual hypointense bone marrow lesions in the pelvis and proximal femur. Some of these lesions may be residual marrow abnormalities of uncertain significance. Other lesions, such as the serpiginous lesions in both proximal femurs, represent therapy-induced bone infarcts. These lesions show a corresponding serpiginous hyperintense area on STIR images (left) and a minor, serpiginous enhancement on Gd-enhanced T1-w scans (right) Of note, iron overload commonly accompanies bone marrow transplantation and may result in a diffusely decreased signal intensity of the liver (reported in 77% of pediatric cases after bone marrow transplantation), spleen (46%) and bone marrow (38.5%) on T2-weighted and STIR MR images (Fig. 12). The degree of hepatic iron overload correlated significantly, and that of splenic iron overload correlated weakly, with the number of blood transfusions [86]. Fig. 12 A patient with myeloma after bone marrow transplantation: The bone marrow of the lumbar spine shows a diffusely hypointense signal on both plain T1-weighted (left) and fat-saturated T2-weighted (right) MR images TBI can cause irradiation-induced signal changes and complications, described above (Fig. 11). Long-term complications of TBI, applied as part of the conditioning regimen for a bone marrow transplantation, are the development of osteochondromas or sarcomas. In summary, MR imaging can be a useful tool to aid in the treatment monitoring of patients with hematologic malignancies by monitoring treatment response, detecting treatment complications, differentiating normal and neoplastic hypercellular marrow and diagnosing residual or recurrent tumor deposits. In order to achieve clinical significance and cost-effectiveness, the MR imaging technique should be clearly tailored to the specific patient and the specific questions described in detail above. New MR imaging techniques may serve to depict those molecular pathways and regulatory events that control blood vessel growth and proliferation.
Non-invasive monitoring of anti-angiogenic therapies has found success by defining tumor microvascular and metabolic changes, whereas treatment-related changes in bone marrow morphology tend to occur rather late and are non-specific. Future developments in the treatment monitoring of patients with malignant lymphomas will almost certainly include “fusion” or “hybrid” imaging methods such as PET-CT and PET-MR, which are already established in many institutions.
[ "mr imaging", "bone marrow", "treatment effects", "contrast agents" ]
[ "P", "P", "P", "P" ]
Intensive_Care_Med-4-1-2271079
Factors associated with posttraumatic stress symptoms in a prospective cohort of patients after abdominal sepsis: a nomogram
Objective To determine to what extent patients who have survived abdominal sepsis suffer from symptoms of posttraumatic stress disorder (PTSD) and depression, and to identify potential risk factors for PTSD symptoms. Introduction Posttraumatic stress disorder (PTSD) is the development of psychological and physical symptoms following exposure to one or more traumatic events [1, 2]. PTSD symptoms include intrusive recollections (re-experiencing the trauma in flashbacks, memories or nightmares); avoidant and numbing symptoms (including diminished emotions and avoidance of situations that are reminders of the traumatic event); and hyperarousal (including increased irritability, exaggerated startle reactions or difficulty sleeping or concentrating) [3]. PTSD symptoms have a major impact on life, illustrated by the fact that affected patients have a reduced quality of life [4] and frequently suffer from depression [5]. Events that typically trigger the development of PTSD include exposure to violent events such as rape, domestic violence, child abuse, war, accidents, natural disasters and political torture, all of which include a threat to life [6–8]. Increasingly, PTSD has also been found in patients who have survived a major, life-threatening disease and in patients who have spent a significant amount of time in an intensive care unit (ICU) [9–13]. Severe peritonitis (or abdominal sepsis) is such a disease, in which an episode of acute and severe illness [14, 15] is typically followed by a lengthy ICU stay and a long recovery period that often includes multiple surgical and non-surgical interventions [16–22]. This combination of factors could make this patient group particularly vulnerable to developing PTSD symptoms. To date, little is known about the presence and severity of PTSD and possible risk factors in patients recovering from severe peritonitis [15]. Therefore, our aims were to determine the presence and level of symptoms of PTSD in patients surviving abdominal sepsis. In addition, we searched for demographic and disease-related factors associated with higher levels of PTSD symptoms. Identification of such factors may be important to determine possible targets of intervention and to select patients for psychological assessment interviews. Methods Study design Our study was embedded in an ongoing randomized clinical trial (the RELAP Trial) evaluating two surgical treatment strategies for patients with secondary peritonitis after the initial emergency laparotomy. Patients were enrolled between December 2001 and February 2005 in two academic medical centers and seven regional teaching hospitals in The Netherlands. All patients were followed up for 12 months after the initial (index) laparotomy. The study was approved by the medical ethics committee of the Academic Medical Center, Amsterdam. All patients gave informed consent to participate in this study. Study population Patients were eligible for the RELAP trial if they had a clinical diagnosis of secondary peritonitis requiring emergency laparotomy and an Acute Physiology and Chronic Health Evaluation II (APACHE-II) score above 10. Further details of the study population can be found elsewhere [23]. Data collection All self-administered PTSD questionnaires were distributed by mail to patients who survived at least 12 months following the initial emergency laparotomy, with a reminder by phone within 2 weeks in the case of no response. After 1 month without a response, a new questionnaire including a reminder letter was sent.
Instruments assessing the level of PTSD symptoms We used two instruments with good psychometric characteristics [24, 25] for measuring PTSD symptoms in research settings: the Post-Traumatic Stress Syndrome Scale 10 [26] and the Impact of Event Scale–Revised [27, 28]. The Post-Traumatic Stress Syndrome Scale 10 (PTSS-10) was originally designed to diagnose PTSD according to the Diagnostic and Statistical Manual of Mental Disorders III (DSM-III) criteria in victims of natural disasters [14]. The PTSS-10 is now a widely used self-report questionnaire assessing symptoms related to PTSD, particularly in critically ill and ICU patients [4, 11, 12]. The PTSS-10 consists of 10 items, each of which is scored from 1 point (none) to 7 points (always). The total score ranges from 10 to 70, with higher scores indicating more symptoms; scores of 35 or above are considered indicative of PTSD [11, 29]. The Impact of Event Scale–Revised (IES-R) is one of the most commonly used self-report questionnaires for determining PTSD symptomatology following a trauma [27]. The IES-R consists of 22 items, each scored from 0 (no problems) to 4 (frequent problems), with the total score ranging from 0 to 66. Scores above 24 points are generally considered indicative of PTSD, with higher scores indicating more symptoms [28]. The IES-R was developed based on DSM-IV criteria and therefore has three distinct subscales: the avoidance subscale (eight questions), the intrusion subscale (eight questions) and the hyperarousal subscale (six questions) [28, 30]. It is frequently used both in the clinic and in PTSD research [27]. Potential risk factors Potential risk factors were selected from previous studies [31] examining factors for increased mortality and morbidity [17–22, 32, 33] in secondary peritonitis, supplemented with specific factors mentioned in the PTSD literature [6, 9–11, 14, 34, 35]. We divided these factors into three distinct categories. General patient characteristics included age, gender and the presence of major comorbidity (cardiovascular disease; COPD; renal failure; diabetes; malignancy). Disease characteristics and postoperative course included severity of disease measured at the time of the initial laparotomy using the APACHE-II score. As several components of the APACHE-II score are already considered in the univariate analysis (namely age and comorbidity), we chose to replace the APACHE-II score with the Acute Physiology Score (APS) as a potential predictor of PTSD [36]. The APS comprises only the acute components of the APACHE-II score, without age and comorbidity. Postoperative characteristics included administration of hydrocortisone during ICU stay [13, 37], development of acute respiratory distress syndrome (ARDS) [4, 12, 20], number of relaparotomies, duration of ICU and hospital stay, the development of a disease-related major morbidity during 6 months' follow-up [23] and an enterostomy present after 6 months' follow-up. Traumatic memories of ICU/hospital stay were assessed using the four-item adverse experiences questionnaire, which captures four types of traumatic memories of the stay in the ICU or hospital ward: nightmares, fear and panic, pain, and difficulty in breathing [13]. Patients scored the frequency of traumatic memories of their stay in the ICU or hospital ward on a four-point scale of 0 (never), 1 (sometimes), 2 (regularly) or 3 (often), administered concurrently with the PTSS-10 and IES-R questionnaires after at least 12 months' follow-up.
These were subsequently summed and classified into three graded categories of traumatic memories: 0 (no traumatic memories), 1–4 (some traumatic memories) and more than 4 (many traumatic memories). We also collected data on whether patients had experienced other traumas, or whether a close family member or friend had experienced a trauma, within the previous 3 years. We used questions 29 and 30 from the Life Stressor Checklist–Revised [38], administered at the same time as the PTSS-10, IES-R and Beck Depression Inventory II (BDI-II) questionnaires. Responses were given dichotomously as yes or no, and patients were subsequently asked to specify the event type [38]. These questions were asked to determine to what extent the PTSD symptomatology found in this patient group was due to the peritonitis or to other traumatic events. Data analysis We used two instruments aimed at measuring the presence and severity of PTSD symptoms in our population, each with its own cut-off value. Combining data from two instruments measuring the same construct (PTSD symptoms) may lead to a more robust classification of patients. To preserve the natural ordering of patients who scored below the cut-off value on both questionnaires (‘low-scoring patients’), patients scoring above the cut-off on only one of these questionnaires (‘moderate-scoring patients’) and patients scoring above the cut-off on both questionnaires (‘high-scoring patients’), we applied ordinal regression modeling. The proportion of patients in each of these three categories is presented with 95% confidence intervals (95% CI) using the method of Wilson [39]. Potential predictors of PTSD symptoms were analyzed using an ordinal logistic regression model. This ordinal regression model is an extension of the binary logistic model and is appropriate when a continuous trait is grouped into several categories by using cut-offs [40]. All potential predictors of PTSD symptoms were first examined in univariate ordinal regression models. Factors with a p-value of less than 0.1 were entered into a multivariate ordinal logistic regression model. If variables within a group of predictors were strongly correlated, only the factor with the strongest univariate relationship and/or the most relevant clinical interpretation was added to the model. Because the literature on PTSD and ICU studies shows them to be clinically relevant, age and gender were always included in the multivariate model regardless of the strength of their associations [34]. In addition, a factor comprising other non-related traumas that the patient had experienced within the previous 3 years was included in the final model to assess its potential confounding role. The fit and validity of the model were evaluated by checking the discriminatory properties (overlap in risk scores of patients with different outcomes), the proportional odds assumption (test for parallel lines) and calibration (closeness of expected and observed numbers of patients). Calibration was checked by comparing the expected and observed numbers of patients in each of the three outcome categories across deciles of expected risk and was tested for significance by using an extension of the Hosmer–Lemeshow goodness-of-fit statistic [41].
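As an illustration of the analysis strategy described above, the sketch below grades patients into the three ordered categories using the published cut-offs (PTSS-10 ≥ 35, IES-R > 24), computes a Wilson confidence interval for a category proportion, and fits a proportional-odds (ordinal logistic) model. The original analyses were performed in SAS; this Python version using statsmodels is only a minimal re-implementation sketch, and the patient data and variable names are hypothetical.

```python
# Minimal sketch of the ordinal analysis described above. Assumptions:
# published cut-offs (PTSS-10 >= 35, IES-R > 24); hypothetical toy data;
# the trial itself used SAS, so this statsmodels port is illustrative only.
import numpy as np
import pandas as pd
from statsmodels.stats.proportion import proportion_confint
from statsmodels.miscmodels.ordinal_model import OrderedModel

def grade_ptsd(ptss10_total: int, iesr_total: int) -> int:
    """0 = low, 1 = moderate, 2 = high: number of questionnaires above cut-off."""
    return int(ptss10_total >= 35) + int(iesr_total > 24)

df = pd.DataFrame({
    "ptss10":   [22, 41, 30, 55, 18, 36, 28, 47],
    "iesr":     [10, 30, 28, 40,  8, 20, 26, 35],
    "age":      [70, 58, 66, 49, 72, 61, 55, 63],
    "female":   [ 0,  1,  0,  1,  1,  0,  1,  0],
    "icu_days": [ 4, 12,  9, 20,  3, 15,  6, 10],
})
df["grade"] = [grade_ptsd(p, i) for p, i in zip(df.ptss10, df.iesr)]

# Wilson 95% CI, e.g. for the 11/107 'high-scoring' patients reported below.
lo, hi = proportion_confint(count=11, nobs=107, alpha=0.05, method="wilson")
print(f"high-scoring: 11/107, 95% CI {lo:.0%}-{hi:.0%}")   # roughly 6%-17%

# Proportional-odds model; ICU stay is log2-transformed as in the paper.
X = pd.DataFrame({
    "age_per_10y": df.age / 10.0,
    "female": df.female,
    "log2_icu": np.log2(df.icu_days),
})
res = OrderedModel(df["grade"], X, distr="logit").fit(method="bfgs", disp=False)
print(np.exp(res.params[: X.shape[1]]))   # odds ratios for the three predictors
```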
Nomogram A nomogram was developed to visualize the prognostic strength of the different factors from the multivariate model in a single diagram. A nomogram allows readers to calculate an expected distribution of PTSD symptomatology (‘low-scoring’, ‘moderate-scoring’ and ‘high-scoring’ patients) based on the specific profile of a patient. The number of points for each predictor was based on the original coefficient from the multivariate ordinal model. The total number of points derived by specifying values for all predictors was used to calculate the expected probabilities that a patient would be a ‘low-scoring patient’, a ‘moderate-scoring patient’ or a ‘high-scoring patient’. Analyses were performed using SAS software version 9.1 (SAS Institute Inc., Cary, NC, USA). Results Of the total of 132 patients eligible for this study, 108 (80%) responded to the questionnaire (Fig. 1). On average, the responses were provided approximately 12.5 months following the initial emergency laparotomy. There were no significant differences in any of the patient or disease characteristics between respondents and non-respondents. Fig. 1 Flowchart summarizing inclusion and response The median age of patients was 66.8 years and 54% were male. Patients were severely ill, with a median APACHE-II score of 14 and a median APS score of 6, and 53% had a major comorbidity (Table 1). Ninety-six patients (89%) were admitted to the ICU: their median ICU stay was 7 days, and these patients were mechanically ventilated for a median of 5 days. Patients were hospitalized for a median period of 28 days (IQR 19–55 days). Fifty-one percent of patients had also experienced another trauma in the 3 years prior to filling in the PTSD questionnaires.
Table 1 Association between severity of PTSD symptoms (three categories) and patient, disease, operative and postoperative characteristics: results from univariate ordinal regression models. PTSD symptoms a (n = 107); each row gives: overall; none to mild (n = 66); moderate (n = 30); high (n = 11); p-value from univariate ordinal regression b.
General patient characteristics
Median age (IQR): 66.8 (57–73); 70.2 (60–74); 58.7 (47–72); 57.8 (49–65); p = 0.004
Male gender (%): 54%; 53%; 53%; 64%; p = 0.847
Major comorbidity present (%) c: 53%; 55%; 50%; 55%; p = 0.670
Peritonitis and postoperative characteristics
Initial median APS score (IQR): 6 (4–8); 6 (4–8); 7 (5–9); 8 (3–8.5); p = 0.271
Hydrocortisone in first 14 days in ICU (median days): 2 (0–7); 1.5 (0–8); 1 (0–8); 5 (1–7); p = 0.749
ARDS: 6%; 3%; 10%; 9%; p = 0.192
One or more relaparotomies: 67%; 70%; 63%; 64%; p = 0.515
Admitted to ICU: 89%; 85%; 93%; 100%; p = 0.110
Median length of ICU stay (IQR): 7 (4–15); 7 (4–12); 7 (4–19); 9 (6–16); p = 0.042
Median ventilation time (IQR): 5 (1–8); 4 (1–7); 5 (1–10); 7 (4–13); p = 0.073
Median length of hospital stay (IQR): 28 (19–55); 26 (18–47); 31 (23–60); 56 (19–72); p = 0.102
Follow-up
Disease-related major morbidity at 6-month follow-up: 15%; 9%; 27%; 18%; p = 0.068
Enterostomy at 6-month follow-up: 51%; 47%; 55%; 70%; p = 0.183
IQR, interquartile range. a Three graded outcomes: none to mild, moderate and high; two patients' data are based on only one completed questionnaire. b All models were checked for parallel lines to see if an ordinal test for significance was appropriate. c Major comorbidity included cardiovascular disease, COPD, renal failure, diabetes and malignancy.
Prevalence of PTSD symptoms The proportion of ‘moderate-scoring’ PTSD patients was 28% (95% CI 20–37%), whilst 10% (95% CI 6–17%) of patients were ‘high-scoring’ patients (Table 1). Detailed information on depression and PTSD symptoms is presented in the electronic supplementary material (ESM). Predictive factors Results from the univariate analysis are presented in Tables 1 and 2, and descriptive details can be found in the ESM.
Table 2 Association between severity of PTSD symptoms (three categories) and other traumatic experiences following peritonitis (n = 105). Each row gives: none to mild (n = 64) a; moderate (n = 30) a; high (n = 11); p-value from univariate ordinal regression.
Traumatic memories of ICU or hospital stay
Nightmares: 39%; 61%; 82%; p = 0.002
Fear and panic: 24%; 61%; 100%; p < 0.001
Pain: 67%; 70%; 82%; p = 0.002
Difficulty breathing: 33%; 76%; 100%; p < 0.001
Graded traumatic memories
None (0): 41%; 50%; 9%; p < 0.001
Moderate (1–4): 7%; 47%; 47%
Severe (> 4): 0%; 18%; 82%
a Two patients were not included in the final analysis due to missing data on traumatic memories during ICU or hospital stay.
The final multivariate model included age, gender, length of ICU stay, disease-related morbidity during the 6-month follow-up, traumatic memories of the ICU or hospital stay and other traumatic factors within the previous 3 years (Table 3).
Table 3 Association between severity of PTSD symptoms and patient, disease, operative and postoperative characteristics and other traumatic experiences following peritonitis in a multivariate analysis. Final model (n = 105) a; each row gives: OR; 95% CI (lower–upper); p-value.
Ten years' increase in age: OR 0.74; 0.53–1.04; p = 0.084
Female: OR 0.9; 0.94–2.3; p = 0.822
Length of ICU stay (log2-transformed): OR 1.4; 1.1–1.7; p < 0.003
Major disease-related morbidity during 6-month follow-up (including index hospital admittance): OR 2.1; 0.61–7.11; p = 0.238
Traumatic memories of ICU or hospital stay, moderate (1–4): OR 4.9; 0.95–24.9; p = 0.058
Traumatic memories of ICU or hospital stay, severe (> 4): OR 55.5; 9.4–328.0; p < 0.001
Other trauma within previous 3 years: OR 2.4; 0.94–6.3; p = 0.085
a This multivariate ordinal analysis included a test for parallel lines (p = 0.694).
In our final model, increasing age was associated with a lower likelihood of PTSD symptomatology (OR = 0.74 per 10 years' increase in age, p = 0.084). Gender was not predictive of PTSD symptoms (OR = 0.90, p = 0.82). Disease-related morbidity at the 6-month follow-up (OR = 2.1, p = 0.24) was no longer independently predictive of PTSD symptoms. Memories of the ICU/hospital stay (patients who reported some memories: OR = 4.9, p = 0.058; patients who reported many memories: OR = 55.5, p < 0.001) were the most prominent independent risk factor for increased PTSD symptomatology. Length of ICU stay was also significantly predictive of the development of PTSD symptomatology in the multivariate model (OR = 1.4, p = 0.004). The relative strengths of these relationships are visualized in the nomogram (Fig. 2). With this nomogram, one can calculate for the individual patient, given his/her risk profile, the probability that he/she will show no to mild, moderate or high PTSD symptoms according to the PTSS-10 and IES-R. Fig. 2 Nomogram for prediction of severity of PTSD symptoms in patients with secondary peritonitis. Graded outcome categories are: none to mild (negative on both instruments), moderate (positive on one instrument) and severe (positive on both instruments) The proportional odds assumption was not violated, as indicated by a p-value of 0.694 for the test of parallel lines. Calibration of the model (closeness between predicted and observed probabilities) was good, with a p-value of 0.987 for the goodness-of-fit test for ordinal models. A graphical impression of the model's discriminative ability is shown in Fig. 3. This figure shows that the mean risk score is significantly different between all three PTSD symptom severity categories (p < 0.001), although there is substantial overlap in the risk scores between patients from different categories of PTSD symptom severity.
Fig. 3 Distribution of total points from the nomogram (risk score) for the prediction of the severity of PTSD symptoms with use of the risk factors taken from the multivariate ordinal model. PTSD categories are graded according to severity: none to mild (negative on both instruments), moderate (positive on one instrument), severe (positive on both instruments) Discussion The proportion of patients with ‘high-scoring’ PTSD symptomatology 12 months after peritonitis was 10%, and the number of ‘moderate-scoring’ patients (28%) was in line with earlier studies measuring PTSD symptoms in critically ill patients who had been admitted to the ICU [9–13, 15, 42]. The prevalence of PTSD recorded in the general population varies between 0.9% and 2.9% (the ESEMeD study) [7, 43, 44]. From our study, the following observations can be made. Firstly, the development of PTSD symptoms is not directly related to the severity of the disease at presentation. The APS score at baseline was not predictive of the development of PTSD symptoms. The APS measures severity of disease solely on the basis of the acute clinical features and does not incorporate age and comorbidity [36]. The development of PTSD symptoms was, however, predominantly related to a more complicated course of secondary peritonitis. Longer ICU and hospital stays and major disease-related morbidity during the 12-month follow-up were associated with more PTSD symptoms. In concordance with earlier studies [15, 34], the strongest predictor of having PTSD symptoms following abdominal sepsis was having traumatic memories and experiences during the initial hospital or ICU stay. These results suggest that the presence of traumatic memories is one of the most relevant aspects in the development of PTSD-related symptoms. Earlier studies also found that the subjective interpretation of the intensive care experience emerged as a consistent predictor of adverse emotional outcome in both the short and the long term [13, 34]. Age plays a critical role in the development of PTSD symptoms. Younger patients are much more likely to develop and report such symptoms. This finding confirms the results of an earlier, retrospective study of a different cohort of patients 4–10 years after hospital admission for severe peritonitis [15]. These findings suggest that older patients are better able to adapt to the limitations that are associated with experiencing such a major disease, most likely because they have already experienced co-morbid illness and health-related problems. In contrast to some other studies, gender did not play a role in the development of PTSD symptoms [34]. Patients with abdominal sepsis suffering from ARDS did not report more PTSD symptomatology than those without ARDS. In earlier ICU studies, ARDS patients reported considerable PTSD symptoms [12, 45]. In our cohort of abdominal sepsis patients we found different predictive factors for PTSD than those found in ARDS patients [10, 12, 15]. Secondary peritonitis in itself, with ICU admission and extended mechanical ventilation, may have been severe enough to cause PTSD symptoms, so that any added risk conferred by ARDS may be undetectable. Lack of power may also be a factor, because the proportion of patients developing ARDS in this study was modest. We did not find an association between hydrocortisone administration during ICU stay and PTSD symptoms within this peritonitis cohort, as has been demonstrated for other critically ill patient groups [13, 37, 46, 47].
Hydrocortisone did not protect against developing PTSD symptoms in our cohort, whereas other studies have found that administration of hydrocortisone during ICU stay can lead to a reduction in PTSD symptoms after discharge. In this study, only corticosteroid use during the first 14 days of ICU stay was included in our analyses; an effect of prolonged use of hydrocortisone, or of late-stage use during conditional adrenal insufficiency, cannot be excluded. Unfortunately, due to the acute and life-threatening nature of secondary peritonitis, it was not possible to collect baseline information on PTSD or data on earlier psychological disorders. However, as recommended in a recent review by Griffiths and colleagues [48], to account for possible earlier traumas we considered information pertaining to comorbid diseases. Furthermore, we collected data on other, non-disease-related traumatic events that had occurred within the previous 3 years. These non-disease-related events were indeed associated with having more PTSD symptoms, and altered the initial ORs of the other factors to the extent that we considered this factor a moderate confounder. Timing plays an important role in collecting data on PTSD symptoms in critically ill and ICU patients [48]. We set the period for the recording of PTSD symptoms at 12 months for a very specific reason: in this severely ill patient group, we did not want to record symptoms while patients were still in their physical recovery period. Past studies have shown that critically ill patients develop PTSD symptoms only after their physical recovery period has passed, hence with a delayed onset [9]. Although these self-report questionnaires are frequently used, the diagnostic value of such instruments in relation to a DSM-IV diagnosis obtained by a structured interview is still being researched and discussed [49]. Some studies have reviewed the diagnostic value of the questionnaires, but in general these studies were methodologically limited [11, 29, 50]. In this study we did not include a structured clinical interview for establishing a definite DSM-IV criteria diagnosis of PTSD, although this is highly recommended in clinical psychology. However, we feel that the use of questionnaires is more feasible in the ICU setting [48], and patients who report many symptoms on these self-report questionnaires [51] can subsequently be referred to an appropriate mental healthcare provider [50]. In this study we have tried to learn from two questionnaires: the PTSS-10, which is commonly used and validated in particular for critically ill patients, and the IES-R, one of the most frequently used screening instruments for PTSD. The prevalence of PTSD symptomatology in the present study was based on whether or not a patient scored above the cut-offs of the IES-R and the PTSS-10. We employed the two questionnaires as complementary tools for the detection of PTSD symptoms and not to compare results deduced from both questionnaires separately [49]. Combining the results of both questionnaires in our analysis was anticipated to lead to a more robust assessment of the factors associated with more PTSD symptoms. Although these two instruments aim to measure the presence of PTSD symptoms, their concordance in the classification of patients was not perfect, with 30 patients (28%) being positive on one questionnaire but not on the other. This demonstrates the difficulty of measuring PTSD symptoms by questionnaire, but also means that both questionnaires are informative in their own right.
Combining the two instruments may therefore lead to a more robust classification of patients based on their level of PTSD symptoms and may be a more useful tool in screening patients following ICU stay, while potentially reducing biases due to instrument variation [52]. We assessed traumatic experiences during the ICU/hospital stay based on the patients' recollections (after 1 year). The patients' perceived traumatic experience may well have contributed to the development of PTSD symptoms, but it is also possible that having PTSD symptoms influenced their perceptions. Future studies should aim to prospectively quantify traumatic experiences during or shortly after ICU stay to allow more causal conclusions, even though this might be difficult in patients with such a lengthy recovery period [9, 48, 53, 54]. In the clinical setting, there is a continuing debate on whether to intervene in the more acute peritraumatic psychological processes or in a later phase, when symptoms or prodromes of PTSD are observed. By improving our understanding of which factors play an important role in the development of PTSD, we can better prevent PTSD symptoms in high-risk patients and decide when best to intervene. Our predictive model is intended to be used by treating physicians to recognize patients at high risk of PTSD once the acute episode and phase of secondary peritonitis, in which survival and physical recovery are the main concerns, have passed. This relatively simple model can aid the surgeon, for instance during the first outpatient visit, in determining which patients are at higher risk for the development of PTSD symptoms. However, before this nomogram can be used to actually predict PTSD symptomatology in clinical practice, it must be externally validated in another cohort of patients with secondary peritonitis. In conclusion, 10% of peritonitis patients report ‘high’ PTSD symptomatology and another 28% ‘moderate’ PTSD symptoms. Factors that were related to more PTSD symptoms included younger age, traumatic memories of the period of hospitalization and length of ICU stay. Knowledge of these predictive factors is required to increase awareness and to develop tailored early treatment options for these high-risk patients; our nomogram may assist in identifying patients with PTSD symptoms. Electronic supplementary material Electronic Supplementary Material (DOC 21K) Electronic Supplementary Material (DOC 52K)
[ "sepsis", "posttraumatic stress disorder", "ptsd", "depression", "intensive care", "peritonitis", "ptss-10", "ies-r", "bdi-ii" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P" ]
Eur_J_Pediatr-3-1-1802727
The probiotic Escherichia coli strain Nissle 1917 (EcN) stops acute diarrhoea in infants and toddlers
In most cases, acute diarrhoea is self-limiting within the first few days after onset. For young children, however, health risks may develop when the disease lasts longer than 3 days. The purpose of the present trial was to determine whether the stool frequency of infants and toddlers suffering from acute diarrhoea could be normalised more quickly by administering the probiotic Escherichia coli Nissle 1917 (EcN) suspension than by administering a placebo. The safety of EcN was also assessed. A total of 113 children (aged 2–47 months) with acute diarrhoea (> three watery or loose stools in 24 h) were randomised to either a group receiving the probiotic EcN suspension (n = 55) or a group receiving the placebo suspension (n = 58) in a confirmative, double-blind clinical trial. Depending on the age of the patients, 1–3 ml per day of verum suspension (10^8 viable EcN cells per millilitre) or placebo was administered orally. The causes of the diarrhoea were more often viral than bacterial, but in most cases the underlying infection remained unspecified. The median onset of treatment response (reduction of the daily stool frequency to ≤ three watery or loose stools over at least 2 consecutive days) occurred more rapidly in the children receiving the EcN suspension (2.5 days) than in those receiving the placebo (4.8 days), a significant difference of 2.3 days (p = 0.0007). The number of patients showing a response was clearly higher (p < 0.0001) in the EcN group (52/55; 94.5%) than in the placebo group (39/58; 67.2%). EcN was found to be safe and well tolerated, and it showed a significant superiority compared to the placebo in the treatment of acute diarrhoea in infants and toddlers. Introduction Probiotics are non-pathogenic microorganisms – mostly of human origin – which confer health benefits to the host when administered in adequate amounts. They are considered to be a safe and effective part of the first-line therapy for acute diarrhoea in children and adults [12]. In addition, probiotics are able not only to prevent or improve gastrointestinal diseases such as inflammatory bowel disease, irritable bowel syndrome, infectious gastroenteritis or diverticular disease of the colon, but also to act in the prevention of allergic diseases. Various probiotics are commercially available in, for example, Europe, the USA and Japan, where they are marketed as functional foods or probiotic drugs. To date, lactobacilli, bifidobacteria and Saccharomyces boulardii are the most commonly marketed probiotic active substances. Certain strains of Escherichia coli are also available in some European countries, the best known example of which is E. coli strain Nissle 1917 (EcN). EcN is marketed as a probiotic drug in two galenic presentations for oral use: enteric-coated capsules and a suspension in which 1 ml contains 10^8 viable EcN cells. While capsules are mostly used in adults (e.g. Kruis et al. [8]), the suspension is the most reasonable form for neonates, infants and toddlers. The purpose of the present trial was to examine the efficacy and safety of an EcN suspension administered to infants and toddlers suffering from acute diarrhoea of different causes in terms of normalising the stool frequency. Materials and methods Infants and toddlers treated for acute diarrhoea in the paediatric outpatient wards of 11 centres between February and April 2005 were eligible for enrollment in this study.
This was a multicentre, prospective, confirmative, randomised, double-blind, placebo-controlled, parallel-group clinical trial of phase III. It was carried out in accordance with the requirements of Good Clinical Practice and the Revised Declaration of Helsinki. The study was approved by the Independent Ethics Committee (IEC) of the Federal Agency of Drugs Quality Control, Moscow, Russia, and by the IEC of the State Enterprise Centre of Immunobiological Medicines at the Ministry of Health of Ukraine. Acute diarrhoea was defined as more than three watery-to-loose stools per day from an acute episode of non-bloody diarrhoea which had not persisted longer than 3 successive days. For reasons of comparability, one of the exclusion criteria was a higher grade of dehydration (loss of body weight > 5%); hydration status was monitored, and rehydration was not implemented in low-grade dehydration. The most important inclusion and exclusion criteria are listed in Table 1. Participants were assessed until ascertainment of response, for a maximum of 10 days. An overview of the study design is presented in Fig. 1. A stool sample was taken at both the beginning and the end of the study and checked for the presence of the following pathogens: Salmonella, Campylobacter, Yersinia, E. coli (ETEC, EPEC, EIEC, EHEC), Shigella, Entamoeba histolytica, Cryptosporidium parvum and Rota-, Adeno- and Noroviruses.
Table 1 Inclusion and exclusion criteria.
Inclusion criteria: age < 4 years at the time of enrolment; more than three watery or loose non-bloody stools in a 24-h period that had not persisted for more than three consecutive days; signed informed consent by the parents.
Exclusion criteria: dehydration (> 5% loss of body weight); participation in another clinical trial; intake of EcN within the past 3 months prior to enrolment; intake of food supplements or drugs which contain living microorganisms or their metabolic products or components within 7 days prior to enrolment or during the trial; other antidiarrhoeal drugs; antibiotics; breast-feeding; premature birth; severe or chronic disease of the bowel or severe concomitant diseases.
Fig. 1 Study design. *Final visit; duration of treatment was until ascertainment of response, 10 days at maximum. The parents were asked to maintain a daily record (diary) containing information on the number of stools, stool consistency, admixtures of blood or mucus, frequency of vomiting, abdominal pain and cramps and fluid intake, as well as concomitant medication and general state of health. An assessment of general health was also documented during each control visit by the investigator and the parents. The randomisation schedule was generated by means of SAS, ver. 9.1 (SAS Institute, Cary, N.C.) based on seed values dependent on a random number generator. The method of randomly permuted blocks was used (block size: 4). Study medication The drug being studied (verum) is a commercially available suspension for oral use that contains the non-pathogenic E. coli strain Nissle 1917 (Mutaflor suspension; Ardeypharm, Herdecke, Germany; 10^8 viable microorganisms per millilitre). As placebo, we administered an identical preparation consisting of a suspension devoid of the active substance. In accordance with good clinical practice (GCP), identical containers were used in order to guarantee a concealed random allocation both to the parents and to the study personnel involved.
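To make the allocation procedure described above concrete, here is a minimal sketch of randomly permuted blocks of size 4 with a 1:1 allocation. The trial generated its schedule with SAS; this Python version, including its fixed seed, is purely illustrative.

```python
# Sketch of 1:1 blocked randomisation with randomly permuted blocks of
# size 4, as described for this trial; the real schedule came from SAS,
# so the generator and seed here are illustrative assumptions.
import random

def permuted_block_schedule(n_patients, block_size=4, seed=2005):
    assert block_size % 2 == 0, "1:1 allocation needs an even block size"
    rng = random.Random(seed)
    schedule = []
    while len(schedule) < n_patients:
        block = ["EcN"] * (block_size // 2) + ["placebo"] * (block_size // 2)
        rng.shuffle(block)              # permute treatments within each block
        schedule.extend(block)
    return schedule[:n_patients]

alloc = permuted_block_schedule(113)
print(alloc[:8])
print(alloc.count("EcN"), "EcN vs.", alloc.count("placebo"), "placebo")
```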
Depending on the age, the daily doses of the study medication (EcN or placebo) were:
Infants < 1 year: 1 ml once daily
Toddlers ≥ 1 to ≤ 3 years: 1 ml twice daily
Toddlers > 3 to < 4 years: 1 ml three times daily
The parents received a diary in which the intake of the trial medication was documented. The investigator checked the entries for completeness and plausibility. Compliance was also evaluated by comparing the amount of trial medication handed out with the amount returned. Outcome The primary effect criterion was the time to response. Treatment response was defined as a reduction in stool frequency to ≤ three watery or loose stools in 24 h over a period of at least 2 consecutive days. Secondary effect criteria included the response rate, stool consistency, abdominal pain and cramps, body temperature, frequency of vomiting, occurrence of adverse events and tolerance to the study medication. Statistical analysis The study was conducted according to a three-stage group sequential test design (O’Brien/Fleming type) with possible sample size adaptation after the two planned interim analyses [9]. A time-to-response analysis was performed (Kaplan-Meier method; log-rank test to test the superiority of EcN compared to placebo; overall type-I error rate α = 0.025; one-sided). The response rates were also computed and compared between treatment groups by means of Fisher’s exact test (one-sided; exploratory). The intention-to-treat (ITT) data set included all randomised patients who took at least one dose of study medication (primary analysis), whereas patients with major protocol violations were excluded from the per-protocol (PP) analysis. The analysis sets were defined in a blind review of the data. The sample size was estimated prospectively using ADDPLAN ver. 3.0. An independent data monitoring committee (IDMC) was responsible for reviewing the results of the interim analyses and giving recommendations. Two interim analyses were performed, resulting in continuation of the study with the pre-planned sample sizes. Results Baseline data A total of 113 infants and toddlers between 2 and 47 months of age with acute diarrhoea were admitted to the trial. All patients were Caucasian. The patients were randomly allocated to either the EcN group (55 patients) or the placebo group (58 patients) (Fig. 2). No relevant differences between the groups were observed in terms of gender, age, height, weight and BMI of the patients (Table 2). The vast majority of patients had an average body development and a good nutritional status, but reduced appetite was reported. There were also no differences in systolic and diastolic blood pressure, heart rate and body temperature between the two treatment groups at baseline. Fig. 2 Diagram of participant flow
Table 2 Baseline data for the two treatment groups. Each row gives: EcN (n = 55); placebo (n = 58).
Male gender: 32 (58.2%); 32 (55.2%)
Age (median): 21 months; 23 months
Height (median): 83 cm; 83 cm
Weight (median): 12.7 kg; 12.6 kg
BMI (median): 17.4 kg/m2; 17.2 kg/m2
Mean duration of diarrhoea: 1.4 days; 1.6 days
Mean stool frequency: 5.0 per day; 5.1 per day
Possible causes for the current acute diarrhoea episode
Previous antibiotic treatment: 2 (3.6%); 4 (6.9%)
Virus infections: 16 (29.1%); 19 (32.8%)
Bacterial infections: 9 (16.4%); 4 (6.8%)
Unspecified infections: 25 (45.5%); 29 (50.0%)
Other causes: 3 (5.5%); 2 (3.4%)
There was no difference in the duration of the current acute diarrhoea episode between patients in the EcN group and the placebo group (EcN: mean 1.4 ± 0.6 days, median 1.0 days; placebo: mean 1.6 ± 0.6 days, median 2.0 days).
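For orientation, the core of the time-to-response analysis specified in the statistical analysis section above (Kaplan-Meier estimates plus a log-rank test, with non-responders censored at day 10) can be sketched as follows. The group-sequential O'Brien/Fleming machinery is omitted and the twelve data rows are invented for illustration; the trial itself used SAS and ADDPLAN.

```python
# Sketch of the Kaplan-Meier time-to-response comparison with a log-rank
# test, using the lifelines package; hypothetical data, and the trial's
# one-sided group-sequential design is not reproduced here.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# days until response; observed = 0 marks non-responders censored at day 10
df = pd.DataFrame({
    "group":    ["EcN"] * 6 + ["placebo"] * 6,
    "days":     [2, 2, 3, 3, 4, 10,  3, 4, 5, 6, 10, 10],
    "observed": [1, 1, 1, 1, 1, 0,   1, 1, 1, 1, 0,  0],
})

kmf = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    kmf.fit(sub["days"], event_observed=sub["observed"], label=name)
    print(name, "median time to response:", kmf.median_survival_time_)

ecn, pla = df[df.group == "EcN"], df[df.group == "placebo"]
res = logrank_test(ecn.days, pla.days,
                   event_observed_A=ecn.observed,
                   event_observed_B=pla.observed)
print("log-rank p-value:", res.p_value)
```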
The number of infections during the past 12 months was ≤ five in 55/55 patients (100%) of the EcN group and 56/58 patients (96.6%) of the placebo group. Infections as a cause of the acute diarrhoea were more often viral than bacterial. However, unspecific infections were the most common (Table 2). The number and proportion of patients with pathogenic microorganisms at baseline were slightly higher in the EcN group (27/55 patients) than in the placebo group (21/58 patients). This difference was not statistically significant. Data analyses All efficacy analyses were originally designed to be performed on both the ITT and PP data sets. However, as the ITT and PP data sets were identical in this study (n = 113 patients), only the ITT data are evaluated here. All safety analyses were performed on the ITT data set. Primary objective The median time to response was 2.5 days in the EcN group and 4.8 days in the placebo group, i.e. treatment with EcN shortened the duration of diarrhoea by 2.3 days. Statistical testing revealed that the EcN treatment was significantly superior to the placebo in terms of time to response (p = 0.0007; overall p-value of the group sequential test design) (Fig. 3). Analysis by centre showed no difference in the number of responders between treatment groups. Fig. 3 Time-to-response curves: Kaplan-Meier analysis (ITT analysis) In total, diarrhoea was stopped in 52/55 patients (94.5%) in the EcN group and 39/58 patients (67.2%) in the placebo group within 10 days. Fourteen patients dropped out of the trial (EcN, n = 1; placebo, n = 13) because of unsuccessful therapy. The diarrhoea did not cease within 10 days of treatment in two patients of the EcN group and six patients of the placebo group. Secondary objectives An exploratory comparison showed a significant difference in the number of responders between the treatment groups (p < 0.0001; ratio of rates: 1.406; 95% CI: 1.162–1.701). A cumulative presentation of the number of responders on each study day showed a difference between EcN- and placebo-treated patients starting on day 3 [EcN 34/55 (61.8%) vs. placebo 24/58 (41.4%); ratio of rates: 1.494; 95% CI: 1.032–2.163] (Fig. 4). The difference increased until day 5 [EcN 45/55 (81.8%) vs. placebo 30/58 (51.7%); ratio of rates: 1.582; 95% CI: 1.198–2.089] and then decreased slightly from day 6 to the end of the study [EcN 52/55 (94.5%) vs. placebo 39/58 (67.2%); ratio of rates: 1.406; 95% CI: 1.162–1.701]. Fig. 4 Response rates among the patients receiving the EcN suspension (n = 55) and placebo (n = 58) during the course of the study Prior to treatment, almost no infant had a normal stool consistency. During the course of the study, the patients of the EcN group showed a more pronounced improvement than their counterparts in the placebo group. The same trend was observed for the disappearance of abdominal pain (28/30 EcN patients vs. 24/33 placebo patients) and abdominal cramps (17/18 EcN patients vs. 21/26 placebo patients) (Table 3).
Table 3 Improvement of symptoms after treatment. Each row gives: EcN (%); placebo (%).
Normal stool consistency: 78.4; 40.5
No abdominal pain: 93.3; 72.7
No abdominal cramps: 94.4; 80.8
In addition, the general state of health of the patients in the EcN group, as assessed by the investigator during clinical examinations or by the parents by means of the diary, improved more clearly than that of the patients in the placebo group (data not shown).
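The reported ratio of response rates (52/55 vs. 39/58; 1.406, 95% CI 1.162–1.701) is consistent with the standard log-scale confidence interval for a risk ratio, as the short check below shows. Fisher's exact test from SciPy stands in for the one-sided exploratory comparison; the group-sequential adjustment is not replicated.

```python
# Checking the reported ratio of response rates and its 95% CI on the log
# scale (52/55 EcN responders vs. 39/58 placebo responders), plus a
# one-sided Fisher's exact test; illustrative, not the trial's SAS code.
import math
from scipy.stats import fisher_exact

a, n1 = 52, 55                      # responders / total, EcN group
b, n2 = 39, 58                      # responders / total, placebo group

rr = (a / n1) / (b / n2)
se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)          # SE of log(rate ratio)
lo, hi = rr * math.exp(-1.96 * se), rr * math.exp(1.96 * se)
print(f"ratio of rates {rr:.3f}, 95% CI {lo:.3f}-{hi:.3f}")
# -> 1.406, 95% CI 1.162-1.701, matching the reported values

table = [[a, n1 - a], [b, n2 - b]]               # responders vs. non-responders
_, p = fisher_exact(table, alternative="greater")
print(f"one-sided Fisher's exact p = {p:.5f}")
```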
Body temperature showed an almost identical decrease over time in both treatment groups (EcN: −0.5 ± 0.4°C; placebo: −0.4 ± 0.5°C). The number of vomiting episodes was very small at baseline in all patients and decreased to zero in both groups. In general, body weight and hydration status did not show any changes from baseline to the end of the study in either treatment group. Only one patient in the placebo group experienced mild dehydration. From baseline to study termination, pathogenic microorganisms disappeared in a similar number of patients in both treatment groups (14/27 patients in the EcN group and 12/21 patients in the placebo group). In the patients who were free from infectious agents at baseline, pathogenic microorganisms were detected at the end of the study in 3/28 and 6/37 patients in the EcN and placebo groups, respectively. Tolerance to treatment The study medication was well tolerated. Only 2/55 patients (3.6%) in the EcN group and 2/58 patients (3.4%) in the placebo group experienced one adverse event (AE) each. These were rhinitis and abdominal pain in the EcN-treated patients and two cases of acute otitis media in the patients receiving the placebo. According to the European Medicines Agency (EMEA) classification, none of these AEs were rated as “serious” or “severe” (regulatory guidance CPMP/ICH/377/95). The two placebo-treated patients with otitis media were withdrawn from the study because of the adverse event. For the two AEs in patients receiving the EcN treatment, complete recovery was documented. According to the parents, tolerance to treatment was slightly better in the EcN group than in the placebo group, whereas no notable difference was observed by the investigators (Table 4).
Table 4 Tolerance to treatment in the two treatment groups. Each row gives the ratings as assessed by parents (EcN; placebo) and as assessed by investigators (EcN; placebo); percentages in parentheses.
Very good: parents 11/55 (20.0); 4/58 (6.9); investigators 5/55 (9.1); 4/58 (6.9)
Good: parents 44/55 (80.0); 53/58 (91.4); investigators 50/55 (90.9); 53/58 (91.4)
Poor: parents 0/55 (0.0); 1/58 (1.7); investigators 0/55 (0.0); 1/58 (1.7)
Discussion The aim of this multicentre, prospective, confirmative, randomised, double-blind, placebo-controlled clinical trial of phase III was to investigate the therapeutic efficacy and safety of orally administered EcN in treating acute diarrhoea in infants and toddlers. The results showed that EcN was superior to the placebo in terms of both time to response and response rate. The difference in median duration of diarrhoea – 2.3 days – was statistically significant and also clinically important. Acute diarrhoea in children is very often self-limiting within a few days. However, toddlers and young infants are in danger of developing dehydration and deteriorating general health. Therefore, a treatment that stops diarrhoea quickly would be beneficial. Several investigations have been carried out with probiotics for the treatment of acute gastroenteritis, and different meta-analyses and systematic reviews have been published in this field. All of these have demonstrated the efficacy of probiotics in treating or preventing diarrhoea. On average, the treatment of diarrhoea with lactobacilli, bifidobacteria and/or S. boulardii shortened the duration of diarrhoea by only 0.5–1.5 days [4, 12, 14, 18, 19]. Szajewska and Mrukowicz reviewed ten randomised, double-blind, placebo-controlled studies and concluded that the administration of probiotics led to a substantial reduction in the duration of acute diarrhoeal symptoms: an average of 20 h [18].
Moreover, a meta-analysis of nine clinical trials conducted by D’Souza et al. demonstrated that probiotics effectively prevented antibiotic-associated diarrhoea [4]. The work of van Niel et al. included nine randomised controlled studies of lactobacilli in acute infectious diarrhoea in children. In these studies, the duration of diarrhoea was significantly reduced by an average of 0.7 days, along with the stool frequency [19]. Most recently, McFarland et al. examined the efficacy of probiotics in paediatric diarrhoea by analysing 39 randomised, controlled and blinded clinical trials comprising a total of 41 probiotic treatment arms [12]. Of these, 32 (78%) reported efficacy. The latest meta-analysis of 39 trials, by Sazawal et al., showed that probiotics prevented acute diarrhoea, with a risk reduction among children of 57% (range: 35–71%) [14]. Diarrhoea is one of the best-studied indications for probiotics, and treatment with EcN has been found to stop acute diarrhoea more rapidly than other probiotics. The efficacy of EcN was confirmed by a second multicentre, prospective, randomised, double-blind, placebo-controlled phase III study conducted by our group [7]. In that study, children with prolonged diarrhoea treated with EcN showed a more rapid onset of response to treatment than those treated with placebo (median: 2.4 vs. 5.7 days; p < 0.0001). There was also a remarkable difference in the response rates, as determined on days 14 (EcN: 93.3%; placebo: 65.8%) and 21 (EcN: 98.7%; placebo: 71.1%), thus showing a statistically significant superiority of EcN on both days (p = 0.0017 and p < 0.001, respectively). In the present trial, the high initial response rates in both groups reflect the spontaneous healing known for acute gastroenteritis. The superiority of the EcN treatment became increasingly noticeable from day 3 onwards. The healing process was markedly faster in the EcN-treated patients than in the patients receiving placebo, a result which underlines the high efficacy of this probiotic. The relatively high number of children with unspecific diarrhoea corresponds quite well to the frequent failure to detect the responsible pathogen in routine analyses. This is the reason why the results of this study are not helpful in answering the question whether EcN is more efficient in bacterial or viral diarrhoea; this question should be addressed in future studies. In the present study, EcN was safe and well tolerated. There was no difference between the EcN and placebo treatments in terms of AEs, body weight, stool examinations and the assessment of tolerance. This result is in accordance with experience from clinical trials in premature and full-term newborns, where EcN was not only very safe but also improved the microbial intestinal milieu of the treated infants and reduced the risk of acquiring pathogens early in life [3, 10, 11]. It has also been shown that prolonged colonisation with EcN, established by administration during the first 5 days after birth, protected infants at an age of 6–12 months from flatulence, diarrhoea or constipation [16]. Our understanding of the effects of probiotics and their numerous modes of action has grown substantially in recent years. With regard to gastroenteritis, probiotics may improve symptoms by several mechanisms, including:
competition with pathogens (for adherence to the intestinal epithelium, and for growth and survival in the gut) and inhibition of pathogen overgrowth;
secretion of bacteriostatic/bactericidal peptides (e.g. colicins, microcins);
reinforcement of the intestinal barrier function and reduction of microbial translocation;
modulation of immune responses (local and/or systemic, e.g. stimulation of the secretion of IgA by lymphocytes and of defensins by enterocytes).
All of these mechanisms of action have been shown for E. coli strain Nissle 1917. The antagonistic activity of EcN against pathogens has been demonstrated in vitro, in animal models and in humans [1, 10, 13, 17]. In a pig model of intestinal infection, EcN was able to prevent acute secretory diarrhoea [15]. Among many other strain-specific characteristics [2, 5, 6], EcN exerts an intense immunomodulatory effect in children [3, 11]: EcN was found to stimulate antibody production by mucosa-associated B lymphocytes and the systemic production of antibodies (IgM, IgA) in premature and full-term children. Conclusion In summary, EcN showed a significant superiority to placebo in the treatment of acute diarrhoea in infants and toddlers. EcN treatment also improved the general state of health, and its administration was safe and well tolerated. Electronic supplementary material Below is the link to the electronic supplementary material Statistical analysis (DOC 31 kb)
[ "probiotic", "ecn", "acute diarrhoea", "infants", "toddlers", "escherichia coli nissle 1917" ]
[ "P", "P", "P", "P", "P", "P" ]
Eur_Spine_J-2-2-1602181
Os odontoideum with bipartite atlas and segmental instability: a case report
We report on the case of a 15-year-old adolescent who presented with transient paraplegia and hyposensibility of the upper extremities after sustaining a minor hyperflexion trauma to the cervical spine. Neuroimaging studies revealed atlantoaxial dislocation and ventral compression of the rostral spinal cord, with an increased cord signal at the C1/C2 level, caused by an os odontoideum, as well as anterior and posterior arch defects of the atlas. The patient underwent closed reduction and posterior atlantoaxial fusion. We describe the association of an acquired instability secondary to an os odontoideum with an anteroposterior spondyloschisis of the atlas and its functional result after 12 months. The rare coincidence of both lesions indicates a multiple malformation of the upper cervical spine and supports the theory of an embryologic genesis of os odontoideum. Introduction The craniovertebral junction is a common site for malformations [10]. Clefts of the anterior and posterior arch of the atlas are rare, but well-documented congenital anomalies [1, 2]. Several reports have attributed the aetiology of os odontoideum to either an embryologic, traumatic or vascular basis [3–5, 8]. We describe the unusual case of a combined midline cleft of the anterior and posterior arch of the atlas associated with an os odontoideum, leading to atlantoaxial instability with acute myelopathy after a minor trauma. We presume an embryologic genesis of our findings. Case report A 15-year-old male patient injured his cervical spine in a hyperflexion trauma when performing a somersault on a trampoline. He presented with transient numbness and weakness of both arms. Initial X-rays and computed tomography demonstrated a displaced os odontoideum, which reduced on extension, and a rostral compression of the cervical spinal cord due to a subluxation of C1 over C2 with narrowing of the spinal canal to 50%. Additionally, midline clefts of the anterior and posterior arch of the atlas became evident (Figs. 1, 2). MRI scans revealed increased cord signals at the C1/C2 level on T2-weighted images, and a persistent subdental synchondrosis was visualized on T2-weighted turbo spin echo sequences (Fig. 3). The patient underwent closed reduction and posterior atlantoaxial fusion by sublaminar tension band wiring with autologous bone grafting and transarticular lag screw fixation. Postoperatively, all symptoms improved significantly. Radiographs taken 1 year after the trauma showed a stable fusion of C1/C2 (Fig. 4). The patient presented with a range of motion of 30° of extension, 40° of flexion and 50–0–40° of rotation. He was free of symptoms and had returned to his pre-injury status regarding work and leisure activities. Fig. 1 X-rays of the cervical spine in neutral position (a), flexion (b) and extension (c). Ventral subluxation of C1 over C2 in flexion (b), which reduces on extension (c) Fig. 2 CT scans reveal midline clefts of the anterior and posterior arch of the atlas (a). Displaced os odontoideum and subluxation of C1 over C2 with narrowing of the spinal canal (b) Fig. 3 T2-weighted MRI images with increased cord signals at the C1/C2 level and a persistent subdental synchondrosis Fig. 4 Follow-up X-rays 12 months after the trauma. Stable atlantoaxial fusion after sublaminar wiring and transarticular screw fixation in AP (a) and lateral (b) views Discussion The odontoid process and the atlas originate from the first cervical sclerotome. The body of the axis, the lateral masses and the posterior arch arise entirely from the second cervical sclerotome.
The atlas is formed from three primary ossification centres, which develop during the seventh week of gestation. Two centres at the lateral masses extend posteromedially to form the posterior arch, usually by the fourth year. Ossification of the anterior arch involves one or two ossification centres, which extend posterolaterally to fuse with the lateral masses around the seventh year. The odontoid process separates from the atlas between the sixth and seventh week of intrauterine life and moves caudally to join the body of the axis (Fig. 5). Fig. 5 Scheme of the embryologic development of the atlas and axis. Os odontoideum is an oval- or round-shaped ossicle of variable size with a smooth cortical border. Several reports attributed its aetiology to either an embryological, traumatic or vascular basis. Failure of fusion of ossification centres in the odontoid process has been considered to be the main aetiology [8]. Considerable evidence on cases of acquired os odontoideum indicates that occult fractures with subsequent avascular necrosis might result in a similar pathology [5]. O’Rahilly et al. [9] studied the cervical spine of 8-week-old human embryos and observed that the axis consists morphologically of three parts. Two rostral parts form the odontoid process, and a caudal part gives rise to the axis body, separated from the middle part by the subdental synchondrosis. A segmentation anomaly of the two rostral parts, which was never observed in normal individuals, may result in a bipartite dens [3]. Currarino [3] reported 11 cases with a complete or partial segmentation defect in the mid-odontoid, suggesting an embryological anomaly characterized by a complete segmentation of the two rostral parts of the axis, which may explain congenital os odontoideum. Malformations of the atlas are very rare and include both clefts and aplasias [1, 2]. Galindo and Francis [6] reported the incidence of anteroposterior spondyloschisis of the atlas in normal individuals as 0.3%. Atasoy et al. [1] reported the first case of bipartite atlas with os odontoideum causing spinal stenosis. Garg et al. [7] presented a report of bipartite atlas with anterior arch aplasia associated with an os odontoideum. They found a small projection on the anterior surface of the dens and concluded that the ossification centre of the anterior arch of the atlas may fail to separate from the future dens, resulting in anterior arch aplasia with a small tubercle attached to the anterior surface of the dens. To our knowledge, no other case of bipartite atlas with os odontoideum has been reported previously in the English literature. In our case, there was a malformation of the anterior and posterior arch of the atlas and a persistent subdental synchondrosis, both likely to be the result of a congenital failure of fusion of ossification centres. The associated os odontoideum was clinically silent until traumatic instability occurred, which resulted in acute myelopathy. Our findings support the theory of a congenital aetiology of os odontoideum. Both the combined anterior and posterior clefts of the atlas and os odontoideum are either asymptomatic or, if cervical instability arises, may give rise to neurological symptoms. Conclusion We described a rare association of an anterior and posterior midline cleft of the atlas with an os odontoideum in an adolescent. An embryologic genesis is likely. Minor trauma is commonly the cause of the onset of symptoms, which may occur immediately after the injury, be transitory and experienced repeatedly, or follow a delayed progressive course.
If the lesion is reducible, atlantoaxial fusion is recommended.
[ "os odontoideum", "atlantoaxial instability", "c1/c2 fusion", "midline cleft of atlas" ]
[ "P", "P", "R", "R" ]
Purinergic_Signal-2-2-2254478
The E-NTPDase family of ectonucleotidases: Structure-function relationships and pathophysiological significance
Ectonucleotidases are ectoenzymes that hydrolyze extracellular nucleotides to the respective nucleosides. Within the past decade, ectonucleotidases belonging to several enzyme families have been discovered, cloned and characterized. In this article, we specifically address the cell surface-located members of the ecto-nucleoside triphosphate diphosphohydrolase (E-NTPDase/CD39) family (NTPDase1, 2, 3, and 8). The molecular identification of individual NTPDase subtypes, genetic engineering, mutational analyses, and the generation of subtype-specific antibodies have resulted in considerable insights into enzyme structure and function. These advances also allow definition of the physiological and pathophysiological implications of NTPDases in a considerable variety of tissues. Biological actions of NTPDases are a consequence, at least in part, of their regulated phosphohydrolytic activity on extracellular nucleotides and the consequent effects on P2-receptor signaling. It further appears that the spatial and temporal expression of NTPDases by various cell types within the vasculature, the nervous tissues and other tissues impacts on several pathophysiological processes. Examples include acute effects on cellular metabolism, adhesion, activation and migration, as well as more protracted impacts upon developmental responses, including cellular proliferation, differentiation and apoptosis, as seen with atherosclerosis, degenerative neurological diseases and immune rejection of transplanted organs and cells. Future clinical applications are expected to involve the development of new therapeutic strategies for transplantation and various inflammatory cardiovascular, gastrointestinal and neurological diseases. Introduction Extracellular nucleotides modulate a multiplicity of tissue functions including development, blood flow, secretion, inflammation and immune reactions. Indeed, signaling via extracellular nucleotides has been recognized for over a decade as one of the most ubiquitous intercellular signaling mechanisms [1, 2]. Essentially every cell in a mammalian organism leaks or releases these mediators and carries receptors for nucleotides, of which seven ionotropic (P2X) and at least eight metabotropic (P2Y) receptor subtypes have been identified and characterized to date. Whereas P2X receptors respond to ATP, P2Y receptors can be activated by ATP, ADP, UTP, UDP, ITP, and nucleotide sugars, although agonist specificity varies between subtypes and across animal species [3]. Depending on the P2 receptor subtype and the signaling pathways involved, these receptors trigger and mediate short-term (acute) processes that affect cellular metabolism, adhesion, activation or migration. In addition, purinergic signaling also has profound impacts upon other, more protracted responses, including cell proliferation, differentiation and apoptosis, such as seen in atherosclerosis, degenerative neurological diseases and several inflammatory conditions [2, 4, 5]. The effects of extracellular nucleotides appear to overlap, at least in part, with those of vascular growth factors, inflammatory cytokines, adhesion molecules and nitric oxide (NO). Nucleotide-mediated activation may also be synergistic with polypeptide growth factors (PDGF, bFGF) and insulin, the signaling being mediated via phospholipase C and D, diacylglycerol, protein kinase C, ERKs, phosphatidylinositol 3-kinases (PI3K), MAP kinases (MAPK) and Rho [6–8].
The situation concerning extracellular nucleotide signaling can be suitably contrasted with the unique specificity of peptide hormones or vasoactive factors for often single, defined receptors [9, 10]. Within purinergic/pyrimidinergic signaling events, specificity is dictated by three essential modulatory components: 1) the derivation or source of the extracellular nucleotides [1, 11, 12]; 2) the expression of specific receptors for these molecular transmitters (and for the nucleotide and nucleoside derivatives) [13–16] (see also the Molecular Recognition Section of the National Institutes of Health, http://mgddk1.niddk.nih.gov/, as well as http://www.ensembl.org/index.html and http://www.geocities.com/bioinformaticsweb/speciesspecificdatabases.htm); and 3) select ectonucleotidases that dictate the cellular responses by the stepwise degradation of extracellular nucleotides to nucleosides [17–20]. Ensembles of ectonucleotidases, associated receptors and signaling molecules Within the past decade, ectonucleotidases belonging to several enzyme families have been discovered, cloned and functionally characterized by pharmacological means. Specifically, we refer here to members of the ecto-nucleoside triphosphate diphosphohydrolase (E-NTPDase) family (EC 3.6.1.5) as ectoenzymes that hydrolyze extracellular nucleoside tri- and diphosphates and have a defined pharmacological profile. Most notably, in many tissues and cells, NTPDases comprise dominant parts of a complex cell surface-located nucleotide-hydrolyzing and -interconverting machinery. This ensemble includes the ecto-nucleotide pyrophosphatase/phosphodiesterases (E-NPPs), NAD-glycohydrolases, CD38/NADase, alkaline phosphatases, dinucleoside polyphosphate hydrolases, adenylate kinase, nucleoside diphosphate kinase, and potentially ecto-F1-Fo ATP synthases [21–25] that may interact in various tissues and cellular systems. The ectonucleotidase chain or cascade, as initiated by NTPDases, can be terminated by ecto-5′-nucleotidase (CD73; EC 3.1.3.5) with the hydrolysis of nucleoside monophosphates [26]. Together, ecto-5′-nucleotidase and adenosine deaminase (ADA; EC 3.5.4.4), another ectoenzyme that is involved in purine salvage pathways and converts adenosine to inosine, closely regulate local extracellular (pericellular) and plasma concentrations of adenosine [10, 27]. Several of these ectonucleotidase families and additional functions of NTPDases [28–30] are addressed in detail elsewhere in this issue. This review focuses on the surface-located mammalian members of the E-NTPDase protein family. It starts with a brief introduction to molecular structure and functional properties, followed by an analysis of the physiological and pathophysiological roles at various sites, with an emphasis on the vasculature and neural tissues. Molecular identities unraveled The literature on the molecular and functional characterization of the E-NTPDase family has been intensively reviewed [18–22, 31–36] and will not be repeated here in detail. Our intent is to summarize principal properties of the enzymes that will be of use for the reader new to this field. Eight different ENTPD genes (Table 1 and Fig. 1) encode members of the NTPDase protein family. Four of the NTPDases are typical cell surface-located enzymes with an extracellularly facing catalytic site (NTPDase1, 2, 3, 8). NTPDases 5 and 6 exhibit intracellular localization and undergo secretion after heterologous expression. NTPDases 4 and 7 are entirely intracellularly located, facing the lumen of cytoplasmic organelles (Fig. 1).
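To make the division of labor in this degradation chain explicit, the following minimal Python sketch walks ATP down the cascade step by step. The enzyme-to-step assignments are taken from the text above; the data structure and function names are our own illustrative simplification (real hydrolysis is concurrent and kinetically graded, not a strict pipeline).

    # Toy walk through the extracellular hydrolysis cascade described above.
    # Enzyme assignments follow the text; the structure is an illustration only.
    CASCADE = [
        ("ATP", "ADP", "NTPDases (e.g., NTPDase1, 2, 3, 8)"),
        ("ADP", "AMP", "NTPDases (e.g., NTPDase1, 3, 8; NTPDase2 only slowly)"),
        ("AMP", "adenosine", "ecto-5'-nucleotidase (CD73)"),
        ("adenosine", "inosine", "adenosine deaminase (ADA)"),
    ]

    def degrade(metabolite):
        """Print each degradation step starting from the given metabolite."""
        for substrate, product, enzyme in CASCADE:
            if substrate == metabolite:
                print(f"{metabolite} -> {product}  [{enzyme}]")
                metabolite = product

    degrade("ATP")
    # ATP -> ADP  [NTPDases (e.g., NTPDase1, 2, 3, 8)]
    # ADP -> AMP  [NTPDases (e.g., NTPDase1, 3, 8; NTPDase2 only slowly)]
    # AMP -> adenosine  [ecto-5'-nucleotidase (CD73)]
    # adenosine -> inosine  [adenosine deaminase (ADA)]

Calling degrade("ADP") would enter the chain at the second step, mirroring the fact that each downstream enzyme acts on whatever intermediate is presented to it.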
The molecular identification of individual NTPDase subtypes, genetic engineering, mutational analyses, and the generation of subtype-specific antibodies have led to considerable insight into enzyme structure and function. These advances have also defined physiological and pathophysiological functions of NTPDases in a considerable variety of tissues. Table 1 Nomenclature of mammalian members of the E-NTPDase family and chromosomal localization
Protein name | Additional names | Gene name (human, mouse) | Chromosome location (human, mouse) | Accession number (human, mouse)
NTPDase1 | CD39, ATPDase, ecto-apyrase [43, 44] | ENTPD1, Entpd1 | 10q24, 19C3 | U87967, NM_009848
NTPDase2 | CD39L1, ecto-ATPase [49, 109, 252] | ENTPD2, Entpd2 | 9q34, 2A3 | AF144748, AY376711
NTPDase3 | CD39L3, HB6 [50, 177] | ENTPD3, Entpd3 | 3p21.3, 9F4 | AF034840, AY376710
NTPDase4 | UDPase, LALP70 [253, 254] | ENTPD4, Entpd4 | 8p21, 14D1 | AF016032, NM_026174
NTPDase5 | CD39L4, ER-UDPase, PCPH [137, 255, 256] | ENTPD5, Entpd5 | 14q24, 12E (12D1)a | AF039918, AJ238636
NTPDase6 | CD39L2 [257–259] | ENTPD6, Entpd6 | 20p11.2, 2G3 | AY327581, NM_172117
NTPDase7 | LALP1 [260] | ENTPD7, Entpd7 | 10q24, 19D1 (19C3)a | AF269255, AF288221
NTPDase8 | liver canalicular ecto-ATPase, hATPDase [52, 174] | ENTPD8, Entpd8 | 9q34, 2A3 | AY430414, AY364442
Information is provided for the human genome from GenBank (http://www.ncbi.nlm.nih.gov) and from Mouse Genome Informatics (MGI) for the mouse genome (http://www.informatics.jax.org/). Since the mouse genome represents a composite assembly that continues to undergo updates and changes from build to build, the computed map locations may be corrected in the future.
a For mouse Entpd5 and Entpd7, the BLAST analysis displayed in Map Viewer indicates a different map location (in brackets) when compared with the mapping data reported in MGI records using cytoband information based on experimental evidence.
Fig. 1 Hypothetical phylogenetic tree derived for 22 selected members of the E-NTPDase family (NTPDase1 to NTPDase8) from rat (r), human (h) and mouse (m), following alignment of amino acid sequences. The length of the lines indicates the differences between amino acid sequences. The graph depicts a clear separation between surface-located (top) and intracellular (bottom) NTPDases. In addition, the major substrate preferences of individual subtypes and the predicted membrane topography for each group of enzymes are given (one or two transmembrane domains, indicated by barrels). Modified from [59]. The presence of ATP- and/or ADP-hydrolyzing activity at the surface of many cell types had been recognized for several decades [17, 37–40]. However, the molecular identity of the first member of the E-NTPDase family (NTPDase1) was not determined until the mid-1990s. The prototypic member of the enzyme family had first been cloned and sequenced as a lymphocyte cell activation (CD39) antigen of undetermined function [41]. Final success came from three independent approaches. Handa and Guidotti [42] purified and cloned a soluble ATP diphosphohydrolase (apyrase) from potato tubers and noted that this protein was related not only to similar enzymes of some protozoans, plants and yeast but also to human CD39. They also recognized conserved sequence domains and the relation to members of the actin-hsp70-hexokinase superfamily. This was then followed by the functional expression of human CD39 and the demonstration that this protein was in fact an ecto-apyrase [43]. In parallel, ectonucleotidases (termed ATP diphosphohydrolases) from porcine pancreas and bovine aorta were purified.
The partial amino acid sequences for both ATP diphosphohydrolases revealed identity with the cloned cDNA sequence of CD39 [44]. The cDNA was isolated from human endothelial cells, and functional, thromboregulatory studies confirmed that the dominant vascular ectonucleotidase (ATP diphosphohydrolase) activity was identical to the previously described and cloned human CD39 [44]. Several internal peptide sequences obtained from the purified human placental ATP diphosphohydrolase [45] revealed in retrospect that this protein was also identical to CD39. It was originally thought that there existed a single ectonucleotidase of the NTPDase type with potential post-translational modifications [46]. However, a close molecular relative was soon cloned that revealed functional properties of an ecto-ATPase (now NTPDase2) rather than of an ecto-ATP diphosphohydrolase [47, 48]. Further human genomic analysis of expressed sequence tags (ESTs) allowed the identification of additional members of the gene family [49–51]. These genes were originally named CD39L(ike)1 to CD39L4. There then followed the identification, cloning and functional expression of all members of the E-NTPDase family, the last to date being NTPDase8 [52]. Potential splice variants have been isolated for the surface-located NTPDase1 and NTPDase2 [for references see 34, 53]. It should be further noted that heterologous expression of potential splice variants does not necessarily result in the formation of a functional protein [54]. The initially proposed nomenclature [50] has been somewhat confusing, as it did not meet generally accepted norms for human cell differentiation molecules [55]. While CD39 (now NTPDase1) indeed belongs to the cluster of differentiation antigens, CD39L1 (NTPDase2), CD39L3 (NTPDase3), CD39L4 (NTPDase5) and CD39L2 (NTPDase6) do not. Scientists at the Second International Workshop on Ecto-ATPases proposed that all E-NTPDase family members be termed NTPDase proteins and classified in order of discovery and characterization [34, 56]. The CD39 nomenclature should fall away for all but the prototypic member NTPDase1, which already has a long history of use in the immunology and oncology fields. Further revisions are, however, inevitable. Catalytic properties The individual NTPDase subtypes differ in cellular location and functional properties. The four cell surface-located forms (NTPDase1, 2, 3, and 8) can be differentiated according to substrate preference, divalent cation usage and product formation. All surface-located NTPDases require Ca2+ or Mg2+ ions in the millimolar range for maximal activity and are inactive in their absence [34, 57]. They all hydrolyze nucleoside triphosphates, including the physiologically active ATP and UTP. Notably, the hydrolysis rates for nucleoside diphosphates vary considerably between subtypes (Figs. 1 and 2). Whereas NTPDase1 hydrolyzes ATP and ADP about equally well, NTPDase3 and NTPDase8 reveal a preference for ATP over ADP as substrate. NTPDase2 stands out for its high preference for nucleoside triphosphates and has therefore previously also been classified as an ecto-ATPase [34, 57]. In contrast to NTPDase1 and NTPDase2, murine NTPDase3 and NTPDase8 are preferentially activated by Ca2+ over Mg2+ [52, 58, 59]. Presumably, differences in sequence but also in secondary, tertiary and quaternary structure account for the differences between subtypes in catalytic properties [60, 61].
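To make the contrasting product profiles of the subtypes concrete, here is a minimal kinetic toy model in Python (standard library only). The rate constants and the channel parameter, which stands in for the near-direct ATP-to-AMP conversion of NTPDase1, are illustrative assumptions chosen only to reproduce the qualitative behaviour described in the text, not measured kinetic data.

    # Toy model: sequential hydrolysis ATP -> ADP -> AMP by a surface NTPDase.
    # `channel` is the assumed fraction of ATP hydrolysis events that yield AMP
    # without releasing free ADP (mimicking NTPDase1); rate constants are
    # arbitrary illustrative values in reciprocal time units.
    def hydrolysis_profile(k_atp, k_adp, channel=0.0, atp0=100.0, dt=0.01, t_end=10.0):
        """Euler integration of the toy scheme; returns (t, ATP, ADP, AMP) tuples."""
        atp, adp, amp = atp0, 0.0, 0.0
        profile, t = [], 0.0
        while t <= t_end:
            profile.append((t, atp, adp, amp))
            d_atp = k_atp * atp * dt          # ATP consumed in this step
            d_adp_out = k_adp * adp * dt      # ADP dephosphorylated in this step
            atp -= d_atp
            adp += (1.0 - channel) * d_atp - d_adp_out
            amp += channel * d_atp + d_adp_out
            t += dt
        return profile

    peak_adp = lambda prof: max(adp for _, _, adp, _ in prof)

    # NTPDase1-like: ATP converted almost directly to AMP, little free ADP.
    print(peak_adp(hydrolysis_profile(k_atp=1.0, k_adp=1.0, channel=0.9)))
    # NTPDase2-like: ADP accumulates and is only slowly dephosphorylated.
    print(peak_adp(hydrolysis_profile(k_atp=1.0, k_adp=0.05, channel=0.0)))

Running the sketch yields a small transient ADP peak for the NTPDase1-like parameterization and a large, slowly decaying ADP pool for the NTPDase2-like one; setting channel to zero and k_adp to an intermediate value approximates the intermediate behaviour the text ascribes to NTPDase3 and NTPDase8.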
Fig. 2 Cell surface-located catabolism of extracellular nucleotides and potential activation of receptors for nucleotides (P2 receptors) and adenosine (P1 receptors). The figure depicts the principal catalytic properties of members of the E-NTPDase family and of ecto-5′-nucleotidase. NTPDases sequentially convert ATP to ADP + Pi and ADP to AMP + Pi. NTPDase1 is distinct among these enzymes, as it dephosphorylates ATP directly to AMP without the release of significant amounts of ADP. Hydrolysis of the nucleoside monophosphate to the nucleoside is catalyzed by ecto-5′-nucleotidase. NTPDases, NPPs and alkaline phosphatase sometimes co-exist, and it seems likely that they can act in concert to metabolize extracellular nucleotides. ATP can activate both P2X receptors and subtypes of P2Y receptors, whereas UTP activates subtypes of P2Y receptors only. After degradation, ADP or UDP may activate additional subtypes of P2Y receptors. The adenosine formed can potentially act on four different types of P1 receptors and is either deaminated to inosine or directly recycled via nucleoside transporters. Bottom: Profiles of nucleotide hydrolysis and product formation by plasma membrane-located NTPDases. The figure compares the catalytic properties of human and murine NTPDase1, 2, 3 and 8, following expression in COS-7 cells. The principal catalytic properties of the respective human and murine enzymes are similar. ATP (•), ADP (▪), AMP (▴). Modified from [57]. Membrane-bound NTPDase1 hydrolyzes ATP almost directly to AMP with the transient production of only minor amounts of free ADP (Fig. 2). This functional property largely circumvents activation of P2Y receptors for nucleoside diphosphates. Interestingly, significant amounts of UDP accumulate when UTP is hydrolyzed by NTPDase1 [57]. In contrast, ADP is released upon ATP hydrolysis by NTPDase2, then accumulates and is slowly dephosphorylated to AMP. On the one hand, this results in the removal of agonists for nucleoside triphosphate-sensitive P2Y receptors (Fig. 2). On the other hand, it generates agonists for nucleoside diphosphate-sensitive receptors such as the platelet P2Y1 and P2Y12 receptors [62]. The actions of NTPDase3 and NTPDase8 result in intermediate patterns of product formation, leading to a transient accumulation of nucleoside diphosphates in the simultaneous presence of nucleoside triphosphates. Principal structural features The hallmarks of all NTPDases are the five highly conserved sequence domains known as ‘apyrase conserved regions’ (ACR1 to ACR5) [42, 63, 64], which are involved in the catalytic cycle. This notion is supported by a considerable variety of deletion and mutation experiments [for reviews see 30, 34, 64–68]. NTPDases share two common sequence motifs with members of the actin/HSP70/sugar kinase superfamily, the actin-HSP70-hexokinase β- and γ-phosphate binding motif [(I/L/V)X(I/L/V/C)DXG(T/S/G)(T/S/G)XX(R/K/C)] [42, 47, 69, 70], with the DXG sequence strictly conserved. These motifs are found in ACR1 and ACR4 (a minimal sketch for scanning a sequence for this motif is given below). Furthermore, there are striking similarities in secondary structure with members of the actin/HSP70/sugar kinase superfamily [30, 59, 71]. These proteins are soluble, have ATP phosphotransferase or hydrolase activity, depend on divalent metal ions and tend to form oligomeric structures. In spite of negligible global sequence identity, they share the principal structure of two major domains (I and II, possibly resulting from gene duplication) of similar folds on either side of a large cleft.
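As an aside, the bracketed phosphate-binding motif translates directly into a regular expression. The following minimal Python sketch scans a protein sequence for it; the example sequence is an invented placeholder containing one embedded motif instance, not a real NTPDase fragment.

    import re

    # Actin-HSP70-hexokinase beta-/gamma-phosphate binding motif from the text:
    # (I/L/V) X (I/L/V/C) D X G (T/S/G) (T/S/G) X X (R/K/C), where X = any residue.
    MOTIF = re.compile(r"[ILV].[ILVC]D.G[TSG][TSG]..[RKC]")

    def find_phosphate_binding_motifs(sequence):
        """Return (start position, matched substring) for each motif hit."""
        return [(m.start(), m.group()) for m in MOTIF.finditer(sequence.upper())]

    toy_sequence = "MKTAYIAK" + "LAVDAGTTAAR" + "QRQLIEG"  # invented placeholder
    print(find_phosphate_binding_motifs(toy_sequence))      # [(8, 'LAVDAGTTAAR')]

Applied to real NTPDase sequences, such a scan would be expected to report hits within ACR1 and ACR4, in line with the assignment above.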
These superfamily proteins reveal a similar conserved secondary structure topology (β1β2β3α1β4α2β5α3), repeated in each domain, and fold into a pocket for substrate binding at the bottom [59]. Presumably, NTPDases share not only secondary structure but also major elements of tertiary structure with members of the actin/HSP70/sugar kinase superfamily (Fig. 3). Homology modeling of the NTPDase3 sequence reveals a high degree of structural fold similarity with a bacterial exopolyphosphatase (PDB 1T6C), which further refines structural predictions for members of the E-NTPDase family [30, 72]. Fig. 3 Hypothetical membrane topology of a surface-located NTPDase with two transmembrane domains. A comparison of the conserved secondary structure reveals duplicate conservation of two major domains related to subdomains Ia and IIa of actin and other members of the actin/HSP70/sugar kinase superfamily [59]. In contrast to the other members of the superfamily, surface-located NTPDases are anchored to the plasma membrane by terminal hydrophobic domains. The figure takes into account the close distance of the N- and C-termini of actin at domain I and the binding of ATP (red) in the cleft between domains I and II [80]. These two domains are expected to undergo conformational changes involving movement relative to each other. NTPDases readily form homo-oligomeric assemblies. NTPDase1 to NTPDase3 were found as dimers to tetramers [29, 64, 73–78]. In contrast to the P2X receptors, which share a similar membrane topography, hetero-oligomeric complexes between NTPDases have not been reported to date. Oligomeric forms reveal increased catalytic activity [73, 75, 76], and the state of oligomerization can affect catalytic properties [77, 78]. NTPDase1, 2, 3, and 8 are firmly anchored to the membrane via two transmembrane domains, which, in the case of NTPDase1, are important for maintaining catalytic activity and substrate specificity [29, 64, 79]. The two transmembrane domains interact both within and between monomers. They may also undergo coordinated motions during the process of nucleotide binding and hydrolysis [29, 61]. This could in turn induce conformational changes [80] involving movement of the two major domains (I and II) relative to each other (Fig. 3). Alterations in quaternary structure and subunit interactions may thus affect the impact or interaction of ACRs involved in substrate binding and hydrolysis. Whether post-translational modifications such as protein phosphorylation contribute to this dynamic behavior remains to be investigated. Functional modifications Biologically active NTPDase1 is subject to differential forms of surface modification under conditions of oxidative stress that inhibit enzymatic activity, as influenced by unsaturated fatty acids [81, 82]. It also undergoes limited proteolysis that increases enzyme activity, and differential glycosylation reactions that appear to be required for membrane expression [64]. Since the surface-located ATP-hydrolyzing members of the NTPDase family pass through the endoplasmic reticulum and Golgi apparatus, the associated catalytic activity might abrogate ATP-dependent luminal functional processes. NTPDase1 becomes catalytically active on reaching the cell surface, and glycosylation reactions appear crucial in this respect [83]. The N-terminal intracytoplasmic domain of NTPDase1 is palmitoylated. Truncated forms of NTPDase1 lacking the N-terminal intracytoplasmic region and the associated Cys13 residue are not subject to palmitoylation.
This post-translational modification appears to be constitutive and to contribute to the integral membrane association of this ectonucleotidase in lipid rafts [84–86]. This raises the possibility that NTPDase1 may be recycled to and from cell membranes via the sequential actions of putative palmitoyltransferases and palmitoyl-protein thioesterases [87], in order to fine-tune and modulate purinergic signaling responses. In contrast to NTPDase1 and NTPDase3, NTPDase2 does not have the required intracytoplasmic Cys to undergo this post-translational modification. The potential multimerization of NTPDase1 [35] may be facilitated by acylation, with intermolecular interactions within the cholesterol- and sphingolipid-rich microdomains of the plasma membrane [88]. Experiments using endothelial cells from caveolin-1-deficient mice suggest that caveolae are not essential for the enzymatic activity of NTPDase1 or for its targeting to the plasma membrane. However, cholesterol depletion results in a strong inhibition of the enzyme [86]. The targeting of palmitoylated NTPDase1 to lipid rafts could influence defined G-protein-coupled receptors within this plasmalemmal microenvironment and thus regulate cellular signal transduction pathways. Furthermore, the caveolar co-localization of ecto-5′-nucleotidase, P2 receptors, and NTPDase1 could serve to modulate signaling via both ATP and adenosine at the cell surface and possibly also within endosomal compartments [20]. Transcriptional regulation of expression Members of the E-NTPDase family are constitutively expressed in many tissues. To date, there is only scattered evidence on the promoters and the factors controlling NTPDase expression [22]. The transcription of NTPDase1/CD39 is constitutive in venous, arterial and certain non-fenestrated microvascular endothelium and in certain immune cells, e.g., B cells, dendritic cells and defined T-cell subsets [20, 89]. The modulated expression of NTPDase1 has been closely associated with inflammatory cytokines, oxidative stress and hypoxia in vitro and in vivo [19, 90]. Expression of NTPDase1 is increased in differentiating melanomas, followed by a gradual decrease with tumor progression [91], and the enhanced NTPDase1 activity of stimulated endothelial and mesangial cells is downregulated by glucocorticosteroids [92]. Activity of ‘ecto-ATP diphosphohydrolase’ in human endothelial cells in vitro is increased by aspirin [93], and glomerular ‘ecto-ATP diphosphohydrolase’ immunoreactivity might well be modulated by estradiol [94]. Transcription of NTPDase2 in mouse hepatoma cells is inducible by 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) [95]. These cells contain both constitutive and TCDD-inducible NTPDase2 activity. The NTPDase2 core promoter reveals constitutive activity that is independent of TCDD [96]. TCDD does not increase expression of NTPDase1, NTPDase3 or other ectonucleotidases and apparently fails to induce NTPDase2 in a variety of other cell lines derived from varying species [97]. In rat Sertoli cells, NTPDase2 is upregulated by follicle-stimulating hormone and cAMP [98], and it is selectively downregulated in biliary cirrhosis [99]. Human epidermoid carcinoma cells upregulate the cascade for extracellular nucleotide hydrolysis when periodically treated with extracellular ATP, suggesting that the substrate itself may affect the expression of its own hydrolysis chain [100]. Inhibitors A considerable number of compounds alter and inhibit extracellular nucleotide hydrolysis by NTPDases.
These include non-hydrolysable nucleotide analogues and inhibitors of P2 receptors. Ideally, NTPDase inhibitors should not be P2 receptor agonists or antagonists and should not be subject to dephosphorylation by the ectoenzyme [22, 101, 102]. The only commercially available compound reported to effectively inhibit hydrolysis of ATP in a variety of tissues without significantly acting on purinoceptors is the structural analogue of ATP, ARL 67156 (FPL 67156) (6-N,N-diethyl-D-β,γ-dibromomethylene ATP) [103–105]. Other potential inhibitors include 8-thiobutyladenosine 5′-triphosphate (8-BuS-ATP) [106] and 1-naphthol-3,6-disulfonic acid (BG0136) [101]. Periodate-oxidized ATP inhibits ecto-ATPase activity in 1321N1 human astrocytoma cells [107], and gadolinium ions have been found to effectively inhibit the ecto-nucleoside triphosphate diphosphohydrolase from the Torpedo electric organ as well as potato apyrase [108]. It is noteworthy that the potency of inhibitors can vary considerably between individual members of the E-NTPDase family [109–111]. This necessitates a functional evaluation of each inhibitor for the enzyme investigated in a given tissue or cell type. The failure to develop specific inhibitors remains a major impediment to ongoing discoveries. Principal functional contexts Cell surface-located NTPDases are considered to be of major importance for controlling the availability of extracellular nucleotide agonists at P2 receptors. They also contribute to the recycling of nucleosides derived from extracellular nucleoside phosphates through metabolic salvage pathways. The number of studies that define a functional impact of individual NTPDases on purinergic signaling in situ is limited and has to date been dependent on global genetic modifications of mice and swine to delete or upregulate the NTPDase or P2 gene of interest [20]. Subtype-specific inhibitors, siRNA approaches, and animals in which the encoding gene can be inactivated or selectively induced in specific tissues will be of major importance. There is increasing experimental evidence that ectonucleotidases compete with P2 receptors for a limited pool of endogenously released nucleotide [112, 113] and, by hydrolyzing released nucleotide, terminate or modulate the function of P2 receptors [114–116]. Portal fibroblasts regulate P2Y receptor-mediated bile duct epithelial proliferation via expression of NTPDase2 [117] (see the liver section, below). NTPDases functionally interact with P2Y receptors [112] and may also co-localize with these G-protein-coupled receptors (GPCRs) in lipid rafts and possibly caveolae [118–121]. The modulatory effects of NTPDases are complex, as the enzymes differentially regulate agonist availability in a process that is dependent upon the P2 receptor subtype, by either degrading ATP/UTP or generating ADP/UDP (Fig. 2). Recent experiments suggest that plasma membrane-bound NTPDases may have functions distinct from their catalytic properties alone. In a yeast two-hybrid system using techniques developed by Zhong for yeast apyrases [122], the N-terminus of human NTPDase1 (used as bait protein) has been shown to interact with truncated Ran Binding Protein M (RanBPM, otherwise known as RanBP9, NM_005493) in the human library screened [122a]. RanBPM contains conserved SPRY (repeats in splA and RyR) domains, which appear to be crucial for the interaction with NTPDase1, and is preferentially distributed in human heart tissues [123]. RanBPM is known to interact with Sos and regulate ERK/Ras signaling.
NTPDase1 interacts with RanBPM to directly modulate Ras activation and cellular proliferation in liver regeneration following partial hepatectomy [124]. The N-termini of NTPDases also have consensus sequences for protein phosphorylation by protein kinase C [47] that could have additional functional impacts. Furthermore, the C-terminal sequence of NTPDase1 contains a putative PDZ-binding motif (-K-D-M-V). This may have utility in determining interactions with select P2Y receptors, e.g., the purinergic P2Y1 and P2Y2 receptors, which terminate in -D-T-S-L and -D-I-R-L, respectively [125]. PDZ domains are most often found in combination with other protein interaction domains (for instance, SH3, PTB, WW), participating in complexes that facilitate signaling or determine the localization of receptors [126–128]. Finally, the general membrane topography and oligomeric assembly of NTPDase1 resemble the morphology of channel-forming proteins such as P2X nucleotide receptors and members of the epithelial Na+ channel/degenerin gene superfamily [129]. This raises the question of whether, in addition to their catalytic activity, NTPDases could function as channels. Release of ATP from Xenopus oocytes induced by hyperpolarizing pulses requires functional ecto-ATPase activity [130]. To what extent this functional property is shared by the structurally related NTPDase2, NTPDase3 and NTPDase8 has not been investigated. Vasculature The normal vascular endothelium provides a barrier that separates blood cells and plasma factors from highly reactive elements of the deeper layers of the vessel wall. The vessel wall maintains blood fluidity and promotes flow by inhibiting coagulation and platelet activation and by promoting fibrinolysis [131]. These properties are governed by important thromboregulatory mechanisms; key biological activities of the vasculature have already been identified and shown to involve ecto-nucleotide catalysts that generate the respective nucleosides by phosphohydrolysis [19, 82]. The dominant ectonucleotidases of the vasculature have now been more fully characterized as NTPDases. This important biological property expressed by the endothelium and associated cells is responsible for the regulation of extracellular and plasma levels of nucleotides [20, 44, 82, 132, 133]. Over the past decade, extracellular nucleotides have been recognized as important mediators of a variety of processes including vascular inflammation and thrombosis, with varying impacts in different systems [19]. Adenosine- and ATP-mediated effects or mechanisms can be implicated in the local control of vessel tone as well as in individual vascular cell migration, proliferation and differentiation. As an example, ATP may be released from sympathetic nerves (see later sections) and results in constriction of vascular smooth muscle through effects mediated by P2X receptors. In contrast, ATP released from endothelial cells during changes in flow (shear stress) or following exposure to hypoxic conditions activates P2Y receptors in a paracrine manner to release NO, resulting in vessel relaxation. Any nucleotide released will ultimately be hydrolyzed to adenosine and will result in vasodilatation via the effects of smooth muscle P1 receptors. P2X receptors also appear on vascular cells and are associated with changes in cell adhesion and permeability [2]. These cellular processes and nucleotide-triggered events are modulated during angiogenesis (Fig. 4) and influence the development of atherosclerosis and restenosis following angioplasty [2, 113, 134–136].
Fig. 4 Angiogenesis with expression of NTPDase1 in the vasculature of syngeneic islet transplants. Mouse islets were prepared from wild-type and Entpd1-null mice, as described by T. Maki et al., and transplanted under the renal capsule [261]. Islets were harvested at four weeks (n = 4 per group) and stained for NTPDase1 immunoactivity and other markers of EC. Substantially diminished levels of CD31-staining vascular elements were also present in null-to-null grafts, indicating a defect in new vessel growth (not depicted here). A) Wild type to wild type, showing grafted islet vasculature staining for NTPDase1 with an adjacent normal renal vascular pattern. B) Wild type to null mouse, showing that the intrinsic vasculature of the islet has persisted within the graft and even entered the NTPDase1-null renal parenchyma. C) Null to wild-type grafts, showing infiltrating macrophages and NTPDase1-positive endothelium migrating from the recipient (confirmed by other stains; not shown). NTPDase1 is the major ectonucleotidase in the vasculature [112]. Other NTPDases associated with the vasculature are the cell-associated NTPDase2 and the soluble, monocyte-expressed NTPDase5 [32, 50, 137]. The phosphohydrolytic reaction of NTPDase1 limits the platelet activation response that is dependent upon the paracrine release of ADP and the activation of specific purinergic receptors [81, 132, 138]. In contrast, NTPDase2, a preferential nucleoside triphosphatase, activates platelets by converting ATP, the competitive antagonist of platelet ADP receptors, to ADP, the specific agonist of the P2Y1 and P2Y12 receptors. In keeping with these biochemical properties, NTPDase1 is dominantly expressed by endothelial cells and the associated vascular smooth muscle, where it serves as a thromboregulatory factor. In contrast, NTPDase2 is associated with the adventitial surfaces of muscularized vessels, with microvascular pericytes of some tissues and organs, such as the heart, and with stromal cells, and would potentially serve as a hemostatic factor [62]. Extracellular nucleotide stimulation of P2 receptors represents a component of platelet, endothelial cell and leukocyte activation that culminates in vascular thrombosis and inflammation in vivo [19]. In these inflammatory settings, with oxidant endothelial injury, NTPDase1 biochemical function is substantially, albeit temporarily, decreased because of post-translational changes; reconstitution of vascular NTPDase activity occurs following transcriptional upregulation of CD39 in the endothelium [82, 139]. This functional change may relate, at least in part, to alterations in acylation and associated membrane lipid interactions, with consequent disruption of multimer structure. Interestingly, palmitate supplementation may protect against loss of NTPDase activity following cellular activation in vitro [81]. These observations may provide several avenues of research to augment NTPDase activity within the vasculature at sites of injury [134]. Mechanisms of endothelial cell activation by nucleotides ATP and UTP increase intracellular calcium levels, result in cytoskeletal rearrangements and stimulate phosphorylation of several proteins in human endothelial cells (EC) that are also associated with integrin signaling [140–142]. These include the focal adhesion kinase (FAK) and paxillin, proline-rich tyrosine kinase 2 (Pyk2) (also named related adhesion focal tyrosine kinase, RAFTK) and p38 MAP kinase. Further, UTP preferentially increases EC migration in a PI3-kinase- and ERK-dependent manner.
Moreover, extracellular nucleotide-mediated EC activation involves cytoskeletal rearrangements and increases in cell motility, comparable to those seen with ligation of integrins by extracellular matrix proteins [143]. These phenotypic changes (seen in both nucleotide- and matrix-mediated activation) are associated with tyrosine phosphorylation of FAK, paxillin and p130 Crk-associated substrate (p130cas) and downstream activation of p38 MAP kinases. FAK has been implicated as playing an important role in integrin-mediated signal transduction pathways [144], suggesting that P2 receptors are involved in ‘inside-out’ integrin signaling in EC, as well as in platelets [20, 112]. Therapeutic considerations To test how extracellular nucleotide-mediated signaling influences pathophysiological events, several techniques have been developed and validated to manipulate NTPDase1 expression in the vasculature and to study conditions of inflammatory stress. The first mutant mouse derived and studied concerned the global deletion of the gene encoding the dominant ectonucleotidase NTPDase1 (Entpd1, cd39). The mutant mice exhibit major perturbations of P2 receptor-mediated signaling in the vascular and immune systems [19, 89, 145]. These phenomena manifest as hemostatic defects, thromboregulatory disturbances and heightened acute inflammatory responses with a failure to generate cellular immune responses, all associated with vascular endothelium, monocyte, dendritic cell and platelet integrin dysfunction [20, 112, 134]. The therapeutic potential of NTPDase1 to regulate P2 receptor function in the vasculature and to mitigate thrombotic/inflammatory stress has been further established by the generation of NTPDase1 transgenic mice and swine [20, 146], the use of adenoviral vectors to upregulate NTPDase1 in cardiac grafts [147] and the use of soluble derivatives of NTPDase1 and apyrases [133, 148]. The beneficial effects of administered NTPDases have been determined in several animal models of vascular inflammation [148, 149]. Exogenous infusions of soluble NTPDases are able to rescue Entpd1-deficient mice from systemic toxicity induced by ischemia-reperfusion injury and after stroke induction [145, 150]. Angiogenesis requires the dynamic interaction of endothelial cell proliferation and differentiation with orchestrated interactions between the extracellular matrix and surrounding cells (such as vascular smooth muscle and/or pericytes) [151–153]. NTPDase1 appears crucial in the coordination of angiogenic responses in inflammation, organ remodeling and transplantation [20, 134]. For example, in syngeneic pancreatic islet transplantation, the maintenance and revascularization of grafted islets appear dependent upon expression of NTPDase1 by the developing vasculature within the islet (Fig. 4). In summary, multiple experimental studies largely reveal beneficial effects of over-expression of NTPDases within the vasculature or of their pharmacological administration [20, 133]. Clinical studies of these soluble thromboregulatory factors are being planned [20, 154, 155]. Immune system There are multiple P2X and P2Y receptor subtypes expressed by monocytes and dendritic cells, whereas lymphocytes express only P2Y receptors [2]. NTPDase1/CD39 was first described as a B lymphocyte activation marker and was also shown to be expressed on activated T cells [156, 157] and dendritic cells [89]. The CD39 enzymatic function on dendritic cells is involved in the recruitment, activation and polarization of naive T cells.
ATP is released by CD4+ and CD8+ T cells upon stimulation with Con A or anti-CD3 mAb, while CD39 functions as an additional recognition structure on haptenated target immunocytes for HLA-A1-restricted, hapten-specific cytotoxic T cells [156, 157]. In cd39 null mice, there are major defects in dendritic cell function, antigen presentation and T-cell responses to haptens (type IV hypersensitivity reactions) [19, 89]. Immunocyte-associated CD39 may play an immunoregulatory role by hydrolyzing ATP (and perhaps ADP) released by T cells during antigen presentation and thereby generating adenosine [19, 89, 158]. Ectoenzymes, including ectonucleotidases, are known to play an important role in leukocyte trafficking (for an excellent review on this topic, see [159]). Recent work has indicated that CD4+CD25+ regulatory T cells (Treg cells) play important roles in the maintenance of immunological reactivity and tolerance [160]. The selective expression of CD39 by Treg cells, and the question of whether this ectonucleotidase and/or extracellular nucleotides influence the function of these interesting cells, is a focus of current work. Digestive and renal systems Released nucleotides are polar molecules and do not re-enter cells. They have to be transformed into the corresponding nucleosides, which enter cells via specific transporters, to rebuild nucleoside pools. If this did not occur, they would be lost from the metabolic pool. The same may pertain to the dietary ingestion of nucleotides, where NTPDases are potential participants in the digestion of exogenous nucleotides and in intestinal function. In addition, extracellular nucleotide and adenosine receptors are highly expressed in the digestive and renal systems, so these molecules are likely to have homeostatic functions [2]. An important nucleotide-mediated mechanism that seems common to various epithelia, as well as to hepatocytes, involves the autocrine regulation of cell volume by ATP via P2 receptors [161, 162]. As P2 receptors are expressed by epithelia in a polarized manner and can be linked to several digestive and homeostatic functions [163, 164], NTPDases in the immediate environment may serve as regulatory switches. Liver In the liver, extracellular nucleotides are potentially involved in several functional contexts [161]. There is evidence that extracellular nucleotides regulate glycogenolysis through activation of glycogen phosphorylase and inactivation of glycogen synthase, by inhibition of the glucagon effect on cAMP and by the activation of phospholipase D [165, 166]. In addition, nucleotides may be involved in the regulation of canalicular contraction and bile flow [167–169]. Concentrations of canalicular adenine nucleotides in bile samples and effluents from hepatic cell lines are estimated to be around 0.1 to 5 µM [161, 168]. Hepatocytes and bile duct cells have been shown to interact and communicate via local ATP release in vitro [170]. Extracellular ATP acts as a hepatic mitogen and activates JNK signaling and hepatocyte proliferation both in vitro and in vivo [171]. Several ectonucleotidases are expressed in the liver. Of the nucleotide pyrophosphatase/phosphodiesterases, NPP1 (PC-1) is expressed on the basolateral membrane of hepatocytes, while the closely related NPP3 (B10) has a predominantly canalicular distribution [172, 173]. NTPDase1 is highly expressed on larger vessels and more weakly on sinusoids, as well as in Kupffer cells [174].
In the quiescent liver, NTPDase2 is expressed by cells of the subendothelium of veins and by adventitial cells of arteries, but not in sinusoids. In addition, NTPDase2 is expressed by portal fibroblasts near the basolateral membranes of bile duct epithelia [175]. Activated, but not quiescent, hepatic stellate cells express NTPDase2 at the protein level [176]. Only low expression of NTPDase3 could be demonstrated at the mRNA level in the liver [50, 177]. NTPDase2 expression in portal fibroblasts, the primary fibroblastic cell type of the portal area, suggests a role in the regulation of bile ductular signaling and secretion [161, 175]. Jhandier et al. tested the hypothesis that portal fibroblast NTPDase2 regulates epithelial cell proliferation. In co-cultures of cholangiocytes (Mz-ChA-1 human cholangiocarcinoma cells) and primary portal fibroblasts from rat liver, increased NTPDase2 expression decreased cell proliferation, and knockdown of NTPDase2 by siRNA increased proliferation. P2 receptor blockade also attenuated Mz-ChA-1 proliferation [117]. These experiments defined a novel cross-talk signaling pathway between bile duct epithelial cells and the underlying portal fibroblasts, regulated by NTPDase2. Because they are the chief fibrogenic cells of the liver, hepatic stellate cells and portal fibroblasts are important targets of liver disease therapy. Loss of NTPDase2 expression has been observed in human biliary cirrhosis, as well as in models of bile duct ligation in the rat. NTPDase2 expression also shifts from the portal area to bridging fibrous bands in cirrhosis with hepatitis C [99]. Functional ATPases were previously shown by histochemical techniques to be associated with bile canalicular plasma membranes [178]; the corresponding enzyme was subsequently incorrectly identified as cCAM105 [179–181]. More recent studies revealed that the canalicular ecto-ATPase corresponds to NTPDase8 [52], also referred to as hepatic ATP diphosphohydrolase (ATPDase) [174, 182]. NTPDase8 is the mammalian orthologue of the chicken ecto-ATPDase cloned from oviduct and liver [183, 184]. In tandem with ecto-5′-nucleotidase, NTPDase8 has the potential to regulate the concentration of nucleotides in the hepatic canaliculus. The ultimate generation of extracellular adenosine from dephosphorylated ATP not only activates adenosine receptors but also produces the key molecule for purine salvage and the consequent replenishment of ATP stores within many cell types [17, 185]. Adenosine transporters are of major importance to organs and cells incapable of de novo nucleotide synthesis, such as brain, muscle, intestinal mucosa and bone marrow [167, 186]. As the liver appears to be a major source of purines for these tissues, curtailment of nucleotide loss into the bile may be important to maintain appropriate nucleotide/nucleoside concentrations within hepatocytes [185]. Thus, dephosphorylation of nucleotides by ectonucleotidases may be critical for appropriate systemic purine homeostasis [167]. The presence of NTPDase8, ecto-5′-nucleotidase and nucleoside transporters in the canalicular domain of hepatocytes would be consistent with an important role of NTPDase8 in purine salvage. The exocrine pancreas The exocrine pancreas secretes digestive enzymes and an HCO3−-rich fluid. Acini release ATP, and the excurrent ducts express several types of P2 receptors [187, 188]. Thus, ATP may function as a paracrine mediator between pancreatic acini and ducts.
Ectonucleotidase activity in pancreatic tissues was first detected in the rat in the 1960s [189, 190], followed by analyses in the pig [191, 192]. Cytochemical and biochemical observations have corroborated the association of ATPase activity with zymogen granules [193]. In other studies of small intercalated/interlobular ducts, NTPDase1 immunofluorescence can be localized to the luminal membranes, while in larger ducts it is localized to the basolateral membranes [194]. Upon stimulation with cholecystokinin octapeptide (CCK-8), acinar NTPDase1 relocalizes in clusters towards the lumen and is secreted into the pancreatic juice as an active form associated with particulate fractions [188, 195]. As revealed by electron microscopy, NTPDase2 is located on epithelial cells, myoepithelial cells and the basolateral membrane of acini. Interestingly, NTPDase2 could also be detected at the basolateral surface of endothelial cells [194]. Salivary glands There are only a few studies on the localization of NTPDases in salivary glands. NTPDases might play a role in the transport of electrolytes by modulating the extracellular ATP concentration in the salivary gland ducts. NTPDase1 expression is mainly vascular. NTPDase2 was immunodetected on myoepithelial cells and in nerves [194, 196]. The immunolocalization of NTPDases 3 and 8 in salivary glands has not yet been determined. Kidney The kidney reveals a complex cellular expression profile for P1 and P2 receptors as well as for ectonucleotidases. Both ATP and adenosine have been invoked in the regulation of tubuloglomerular feedback [197, 198]. This feedback system links the salt concentration in the tubular fluid at the macula densa to the vascular tone of the afferent arteriole of the same nephron. As suggested by their localization, NTPDases may participate in the regulation of several biological functions of the kidney, including vascular perfusion. In mouse, rat and porcine kidneys, NTPDase1 can be detected in vascular structures, including the blood vessels of glomerular and peritubular capillaries [174, 199, 200]. NTPDase2 is detected on the Bowman’s capsule of mouse and rat [199] and NTPDase8 on the luminal side of porcine renal tubules [174]. More recently, an immunohistochemical analysis of various ectonucleotidases of the rat nephron revealed expression of both NTPDase2 and NTPDase3 in the thick ascending limb, the distal tubule and the inner medullary collecting ducts. In addition, NTPDase3 is located in the cortical and outer medullary collecting ducts [201]. The nervous system All cell types of the nervous system express nucleotide receptors [2]. It is increasingly apparent that NTPDases are distributed in the nervous system as ubiquitously as are P2 receptors and that these ectoenzymes are directly involved in the control of P2 receptor function in nervous tissues [22, 31, 36]. Signaling via nucleotides is widespread in both the peripheral and the central nervous system. Major nucleotide receptor-mediated functions in the central nervous system include the modulation of synaptic signal transmission [202], the propagation of Ca2+ waves between glial cells [203], and the control and activation of astrocytes and microglia [204, 205]. In addition, ATP can contribute to synaptic signal transmission [36].
In the sympathetic nervous system, ATP acts as a fast neurotransmitter together with catecholamines [206]; it is an important mediator of central and peripheral chemosensory transduction, including pain [207]; and it is involved in the control of myelin formation in peripheral axons [208]. Central nervous system ATP can be rapidly hydrolyzed to adenosine at brain synapses, which in turn activates pre- or postsynaptic receptors, thereby modulating neuronal transmission. Adenine nucleotides undergo conversion to adenosine within a few hundred milliseconds in the extracellular (synaptic) space of rat brain slices [209, 210]. Complex synaptic interactions in the central nervous system may thus be modulated both by the activation of P2 receptors and (after hydrolysis of the nucleotide) of P1 receptors that may be located at identical or different cellular targets [202, 211]. Based on immunoblotting and in situ hybridization, NTPDase1, 2 and 3 are expressed in the mammalian brain [47, 57, 59, 116, 177, 212]. NTPDase1 and 2 have been purified from porcine brain [213, 214], but the exact cellular allocation of individual subtypes is still a challenge. There is ample evidence from early enzyme histochemical investigations that surface-located catalytic activity for the hydrolysis of nucleoside tri- and diphosphates can be allocated to all cell types of the nervous system [for reviews see 22, 31, 36, 215]. This catalytic activity can be localized to synapses, including the synaptic cleft, at the surface of neurosecretory nerve terminals in the pituitary or at peripheral nerve terminals. These data imply a wide distribution of cell surface-located ATP-hydrolyzing activity in the CNS. Neurons Ecto-ATPase activity has been observed in synaptosomal fractions isolated from various sources, implying endogenous ectonucleotidase activity of nerve cells. Biochemical studies on isolated synaptosomes permit the determination of the ratios of ATP to ADP hydrolysis rates as well as the analysis of product formation. Total synaptosome fractions isolated from rat brain cortex and immunopurified cholinergic striatal synaptosomes revealed ratios of 3.4:1 and 2.1:1, respectively [216]. ADP was found to transiently accumulate after the addition of ATP and was subsequently metabolized to AMP and adenosine. Similar results were obtained with hippocampal synaptosomes [217]. This strongly argues against a major contribution by NTPDase1 and NTPDase2 and would rather be compatible with a neuronal expression of NTPDase3 (compare Fig. 2). A recent immunocytochemical study allocates NTPDase3 to neurons, including axon-like structures, of various brain regions [218]. Astrocytes, oligodendrocytes, and microglia The ratio of ATP to ADP hydrolysis is clearly different in cultured astrocytes. Astrocytes cultured from cortex or hippocampus display a ratio of 8:1 [219]. Furthermore, cultured rat cortical astrocytes accumulate ADP from ATP, which is only very slowly further degraded to AMP [220]. This would be largely compatible with NTPDase2 as the major ectonucleotidase of cultured astrocytes. Immunocytological investigations of adult rat and mouse brain sections assign NTPDase2 solely to the astrocyte-like stem cells in the subventricular zone of the lateral ventricles and the dentate gyrus of the hippocampus and to astrocytes in a few distinct additional brain regions [221, 222]. Thus, cultured astrocytes may reveal functional properties that differ from the in situ situation, as they tend to rapidly alter their protein expression profile [223]. This kind of ratio-based reasoning is summarized in a short interpretive sketch below.
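Purely as an illustration of the reasoning above, the following Python sketch maps a measured ATP:ADP hydrolysis-rate ratio onto the subtype(s) it is most compatible with. The numeric boundaries are our own assumptions, chosen loosely from the ratios quoted in the text (about 1:1 for NTPDase1, intermediate for NTPDase3/8, strongly ATP-preferring for NTPDase2); they are not validated cut-offs.

    # Hypothetical heuristic, not a validated classifier: interpret a measured
    # ATP:ADP hydrolysis-rate ratio in the light of the subtype profiles above.
    def compatible_subtypes(atp_to_adp_ratio):
        if atp_to_adp_ratio < 1.5:       # assumed boundary
            return "compatible with NTPDase1 (ATP and ADP hydrolyzed about equally)"
        if atp_to_adp_ratio < 5.0:       # assumed boundary
            return "compatible with NTPDase3/NTPDase8 (intermediate ATP preference)"
        return "compatible with NTPDase2 (strong preference for ATP)"

    for source, ratio in [("rat cortex synaptosomes", 3.4),
                          ("cholinergic striatal synaptosomes", 2.1),
                          ("cultured astrocytes", 8.0)]:
        print(source, "->", compatible_subtypes(ratio))

In real tissue, several isoforms are usually co-expressed, so such a ratio can at best suggest the predominating enzyme, as the PC12 example discussed below illustrates.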
Enzyme histochemistry assigns ecto-ATPase activity to both central and peripheral myelin [31], but corresponding immunocytochemical data are lacking. Enzyme histochemical staining for surface-located nucleoside diphosphatase activity has long been used to identify microglia in tissue sections of the adult and developing brain [224]. The major microglial ectonucleotidase has been identified as NTPDase1 [225]. Stem cells in the adult mammalian brain In the adult rodent brain, neurogenesis persists in two restricted regions, the subventricular zone (SVZ) of the lateral ventricles and the dentate gyrus of the hippocampus. These regions contain stem cells that give rise to neurons throughout the life span of the animal. Interestingly, these cells share astrocytic properties [226]. They generate highly proliferating intermediate cell types and, finally, mature neurons. NTPDase2 is highly and selectively expressed by the stem cells (type B cells) of the SVZ [221] (Fig. 5) as well as by the progenitor cells (residual radial glia) of the dentate gyrus [222]. In the presence of epidermal growth factor (EGF) and fibroblast growth factor-2 (FGF-2), SVZ-derived stem cells can be cultured as free-floating cellular aggregates (neurospheres). Cultured stem cells express NTPDase2 and functional P2 receptors. Agonists of P2Y1 and P2Y2 receptors augment cell proliferation, whereas inhibition of the receptors attenuates cell proliferation in spite of the presence of mitogenic growth factors [227]. These data suggest that NTPDase2 and nucleotides, together with other signaling pathways, contribute to the control of neurogenesis in the adult mammalian brain. Fig. 5 Detail of the arrangement of neuronal stem cells and neuroblasts at the lateral lining of the mouse subventricular zone (SVZ) (triple labeling). A) DAPI staining of all nuclei. Arrowheads mark the ependymal lining. B) Stem cells (type B cells) immunopositive for NTPDase2 form tube-like sheaths around clusters of migrating immature neurons (type A cells) that immunostain for the microtubule-associated protein doublecortin (DCX) (C). The spaces covered by type A cells remain dark in (B) and are indicated with stars. D) Merge of B) and C). E) Merge of A), B) and C). Bar = 10 µm. (By courtesy of David Langer, Frankfurt am Main). Apparently, individual enzyme isoforms govern cell surface-located nucleotide hydrolysis in the various cell types of the central nervous system. This does not, however, exclude the possibility that individual cell types express more than one isoform, with one of the enzymes predominating. For example, PC12 cells express mRNA for NTPDase1–3, but the ATP/ADP hydrolysis ratio, the pattern of product formation and the immunocytochemical surface staining suggest that NTPDase3 is the major functional isoform [59, 228]. Similarly, cultured normal and immortalized pituitary and hypothalamic cells express NTPDase1–3 [116]. The planned use of transgenic mice expressing a fluorescent protein under the promoter of the respective NTPDase isoform will greatly facilitate the identification of the expression patterns of individual enzyme isoforms in the developing and adult nervous system. Peripheral nervous system Noradrenaline and ATP are co-released from sympathetic nerve terminals of the guinea pig heart, whereby ATP enhances noradrenaline release by a mechanism controlled by ectonucleotidases, possibly NTPDase1 [229].
Interestingly, stimulated sympathetic nerves of the guinea pig vas deferens release not only ATP and noradrenaline but also enzyme activity that degrades ATP to adenosine. The latter exhibits similarities to NTPDases and ecto-5′-nucleotidase, but its molecular identity has not been defined [230]. NTPDase2 associates with immature and non-myelinating Schwann cells of peripheral nerves, whereas NTPDase1 immunoreactivity is absent [231]. NTPDase2 is also expressed by the satellite glial cells in dorsal root ganglia and sympathetic ganglia and by the enteric glia surrounding cell bodies of ganglionic neurons of the myenteric and submucous plexus [231]. Sensory systems The most comprehensive investigation of the expression of NTPDases within sensory systems concerns the inner ear. Ectonucleotidase activity is associated with the tissues lining the perilymphatic compartment of the cochlea [232, 233]. Immunohistochemical analysis of the murine cochlea has assigned NTPDase1 to the cochlear vasculature and primary auditory neurons in the spiral ganglion, whereas NTPDase2 is associated with synaptic regions of the sensory inner and outer hair cells, supporting cells of the organ of Corti and additional tissue elements [234, 235]. Interestingly, noise exposure induces upregulation of NTPDase1 and NTPDase2 in the rat cochlea [236]. Taste buds transduce chemical signals in the mouth into neural messages. Taste cells and nerve fibers express P2X2 and P2X3 receptors [237] and various P2Y receptors [238, 239]. Genetic elimination of P2X2 and P2X3 receptors revealed that ATP is a key neurotransmitter in this system [240]. NTPDase2 is expressed at the mRNA level in mouse taste papillae [241]. Immunohistochemistry and enzyme histochemical staining allocate NTPDase2 to type I ‘glial-like’ cells in the tongue, palate and larynx. Furthermore, NTPDase2 immunostaining is associated with nearby nerves, suggestive of Schwann cells, implying that NTPDase2 may help regulate taste transmission. Pathological implications Cerebral ischemia The interruption of blood flow, accompanied by an interrupted supply of oxygen and glucose, initiates a sequence of events resulting in structural and functional damage of the nervous tissue, comparable to that seen at other sites of vascular injury [20]. Transient global cerebral ischemia of the rat results in a long-term increase in extracellular nucleotide hydrolysis [242, 243]. Preconditioning delays the postischemic increase in ATP diphosphohydrolase activity [243]. During the days following transient forebrain ischemia, mRNA for NTPDase1 (but not of NTPDase2) and ecto-5′-nucleotidase becomes upregulated in the hippocampus [242], corresponding to the upregulation of the entire ectonucleotidase chain for the hydrolysis of ATP to adenosine. The data suggest that the increased expression of ectonucleotidases in the regions of damaged nerve cells is associated with activated glia, mainly microglia [224]. The upregulation of the ectonucleotidase chain is suggestive of an ischemia-induced increased and sustained cellular release of nucleotides. This could have several functional implications. Since microglial cells express the cytolytic P2X7 receptor [244, 245], these cells may be particularly endangered by increased levels of extracellular ATP. Enhanced activity of NTPDase1 may prevent activated microglia from overstimulation by ATP released from the injured tissue.
Alternatively, microglial expression of NTPDase1 might contribute to preventing receptor desensitization on prolonged exposure to elevated ATP levels. The parallel increase in activity of ecto-5′-nucleotidase would facilitate the formation of the final hydrolysis product adenosine, which exerts neuromodulatory and immunomodulatory actions and contributes to the protection of neurons. Alterations following plastic changes in the nervous system Additional experiments analyzing synaptosome fractions suggest that changes in neural plasticity can be paralleled by changes in ecto-ATPase activity. Enzyme activity is reduced following avoidance learning [246] and status epilepticus [247, 248]. It is altered in two rat models of temporal lobe epilepsy [249], and on pentylenetetrazol kindling [250]. Changes in synaptosomal ectonucleotidase activity have also been reported after a broad variety of additional treatments, including acute caffeine treatment [251]. Taken together, these experiments suggest that the expression of ectonucleotidases can be altered following a variety of physiological or pathological stimuli, possibly together with that of purine receptors. Further work needs to define the enzyme subtypes involved and the mechanisms underlying the regulation of ectonucleotidase expression. Conclusions This review summarizes components of extracellular nucleotide-mediated signaling pathways that are impacted upon largely by the E-NTPDase family of ectonucleotidases. Modulated and distinct NTPDase expression appears to regulate nucleotide-mediated signaling in essentially every tissue, including the vasculature and the immune and nervous systems. For example, extracellular nucleotide-mediated stimulation of vascular endothelial and accessory cells might have important consequences for platelet activation, thrombogenesis, angiogenesis, vascular remodeling and the metabolic milieu of the vasculature in response to inflammatory stress and/or immune reactions. Nucleotides are also of significant relevance for the communication between nerve cells and glial cells and for the reciprocal signaling between these cell types. These purinergic mechanisms might also dictate pathological processes of the nervous system, or processes following vascular injury, thromboregulatory disturbances, and defective angiogenesis with associated perturbations in tissue remodeling and regeneration. There is a wide field for future investigations of the role of nucleotides and ectonucleotidases in other tissues. Increasing interest in this field may open up new avenues for investigation and the development of new treatment modalities for a large variety of diseases, including neurological pathological states and vascular thrombotic disorders such as stroke, atherosclerosis and the vascular inflammation seen in transplant-graft failure.
[ "ntpdase", "cd39", "vasculature", "nervous tissue", "apyrase", "ecto-atpase", "immunology", "platelet", "liver", "ischemia", "brain", "kidney" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P" ]
Eur_J_Pediatr-3-1-2151776
What is new in pertussis?
Despite high vaccination coverage, over the last fifteen years there has been a worldwide resurgence of B. pertussis infection. While classical pertussis in the prevaccine era was primarily a childhood disease, today, with widespread vaccination, there has been a shift in the incidence of disease to adolescents and adults. Centers for Disease Control and Prevention (CDC) data from 2004 reveal a nearly 19-fold increase in the number of cases in individuals 10–19 years and a 16-fold increase in persons over 20 years. Indeed, adolescents and adults play a significant role in the transmission of pertussis to neonates and infants, who are vulnerable to substantial morbidity and mortality from pertussis infection. Several explanations have been proposed for the increasing incidence of disease, with waning immunity after natural infection or immunization being widely cited as a significant factor. Improved molecular diagnostic techniques, namely PCR assays, also account for the increasing laboratory diagnosis of pertussis. Expanding vaccination strategies, including universal immunization of adolescents and targeted immunization of adults, in particular healthcare workers, childcare providers and parents of newborns, will likely improve pertussis control. With pertussis continuing to pose a serious threat to infants, and greatly affecting adolescents and adults, there remains a need to: (a) increase the awareness of physicians as to the growing pertussis problem, (b) standardize diagnostic techniques, and (c) implement various new vaccine strategies to enhance its control. Introduction Pertussis, an acute infectious illness of the respiratory tract, remains endemic in developed nations despite high vaccination coverage [7, 8, 16]. While the early use of whole-cell vaccine was highly effective in reducing the incidence of reported pertussis in the United States in the 1970s, there has been a resurgence of reported pertussis over the last 15 years [8, 16, 31]. Worldwide, there are an estimated 50 million cases occurring annually (90% of which are in developing countries), and there are as many as 400,000 pertussis-related deaths [16, 31]. There is general consensus, moreover, that the reported incidence of pertussis is considerably lower than its actual incidence [8, 31]. Though in the prevaccine era pertussis was regarded as a childhood disease affecting primarily young children, pertussis epidemiology in the postvaccine era is different [8, 17]. Infants are the most vulnerable group with the highest rates of complications and mortality, yet adolescents and adults now comprise a significant percentage of cases and a conduit of infection for infants [8, 13, 17]. PCR, culture, and serology are the mainstay of the laboratory diagnosis of pertussis, with various factors affecting the sensitivity and specificity of each modality [8, 24, 36]. However, in recent years, PCR has become an increasingly popular tool and has significantly contributed to the increasing laboratory diagnosis of pertussis [8, 27, 36]. Advances have also been made with regard to prevention and disease control, with experts from 17 countries recently establishing the Global Pertussis Initiative (GPI) with the aim of analyzing the status of pertussis and enhancing existing immunization strategies [8, 15]. Epidemiology of pertussis Before the introduction of the whole-cell pertussis vaccine in the 1940s, there were approximately 200,000 cases reported annually in the US [37].
Immunizations reduced disease rates, and in 1976 pertussis incidence reached a nadir of 1,010 reported cases [13, 37]. However, since that time, there has been a substantial increase in the number of cases reported [8, 13, 31]. Indeed, over the past 15 years, there has been a marked increase in the incidence of pertussis, with reported disease in the US reaching a rate of 8.9 per 100,000 in 2004 with nearly 19,000 provisional reported cases [7, 17]. It is also well established that, despite high vaccination coverage for primary immunization in infants and children, pertussis continues to be a global concern with increased incidence in many countries including Argentina, Australia, Canada, Italy, Japan, the Netherlands, Switzerland and the US [31]. It is also widely noted that in recent years there has been a general shift in the age distribution of pertussis, with adults and adolescents an underrecognized but significant source of infection for neonates and infants [8, 13, 15–17]. Data from the EUVAC-NET project, a network for the epidemiologic surveillance and control of communicable diseases in the European community, demonstrate that between 1998 and 2002 the incidence remained stably high among children less than 1 year old. Nevertheless, these data indicate that the incidence rate among adults doubled in 5 years [4]. Similarly, Centers for Disease Control and Prevention (CDC) surveillance data from 1990–2003 demonstrate that the reported incidence of pertussis among adolescents has substantially increased, with a nearly ten-fold rise [17]. Moreover, when compared with pertussis disease rates in 1990–1993, recent CDC data from 2004 reveal a nearly 19-fold increase in the number of cases in persons aged 10–19 years and a 16-fold increase in persons over 20 years [13]. Several factors have been proposed as underlying the increasing incidence of pertussis disease, including waning immunity with subsequent atypical disease manifestations, increasing awareness by public health personnel with subsequent enhanced surveillance, and improved laboratory diagnostics [8, 17, 31]. Waning of both vaccine-induced immunity and infection-acquired immunity is widely cited as an important reason for recent epidemiologic trends [7, 8, 36, 37]. While the assessment of the duration of immunity afforded after either natural infection or vaccination is complex, individuals are clearly susceptible to infection or reinfection after vaccination or previous pertussis illness, respectively. Studies vary in their estimates of protection against disease, with immunity after natural infection waning 7–20 years after illness and immunity after vaccination waning at approximately 4–12 years in children [36]. Yet, regardless of the precise interval, when individuals do contract pertussis after the waning of their immunity, their disease manifestations are frequently atypical [8, 17, 27]. As such, their illness is often underdiagnosed. Such underdiagnosis poses a potentially serious public-health concern in that untreated persons with protracted cough continue to unknowingly transmit the disease to others. Finally, it has been proposed that the increased incidence rates may also be a function of enhanced surveillance as well as improved and more sensitive diagnostic lab techniques (e.g., PCR), in that such techniques allow for the diagnosis of cases that would probably have been missed in the past [8, 17, 27, 35].
Nevertheless, it is important to note that the current estimates are likely to be, if anything, an underrepresentation of the true incidence of disease [8, 31]. First, the clinical diagnosis of pertussis is complicated by underconsulting, particularly among adolescents and adults [8]. Second, with prolonged cough often being their only clinical feature, by the time these adolescents and adults finally do seek medical attention, it is often too late to culture or detect the organism by PCR, thus potentially resulting in a missed diagnosis [8, 17, 31, 35]. Moreover, the wide heterogeneity in disease expression, modification of disease by immunization, mixed infection, inconsistent definitions, and insensitive, nonstandardized, poorly performed, or unavailable laboratory tests further complicate physician diagnosis [8]. While classic or “typical” pertussis may be easily recognized, it has been seen less often since general immunization began. Instead, atypical pertussis, usually characterized by the absence of whoop and often a somewhat shorter duration of cough, is more common than classical pertussis among adolescents and adults [8, 17]. And finally, it should be noted that immunized young children that are PCR positive for B. pertussis can be asymptomatic [29, 31]. Regardless of whether an individual displays classical pertussis signs and symptoms or a more protracted, atypical cough, pertussis may not be suspected because of the misconception among many physicians that pertussis is a childhood disease [8, 17]. Co-occurrence of other infections like influenza A or B, adenovirus, and RSV may also complicate the clinical diagnosis [8]. And, finally, even when diagnosed, pertussis is often underreported [8]. Indeed, Sutter and Cochi report that in the US, only an estimated 11.6% of pertussis cases were actually reported [17, 30]. Thus, multiple institutional, clinical, and laboratory factors diminish the true assessment of pertussis incidence, and the current data clearly are an underestimation of the true burden of disease. Laboratory diagnosis of pertussis Because an accurate diagnosis of pertussis cannot be made by clinical signs and symptoms alone, there is a need for improved laboratory diagnosis of pertussis [17]. While several laboratory techniques exist for the identification of B. pertussis, namely culture, serology and PCR, several practical factors may adversely affect the sensitivity of its laboratory diagnosis. Delayed specimen collection, poor specimen collection techniques, specimen transport problems, and lab media contamination are but a few of the practical constraints often influencing the outcomes of the laboratory diagnosis of pertussis. Moreover, previous exposure to the organism, the patient’s age, the stage of disease, previous antibiotic administration, and immunization are other factors that may have a substantial impact on the sensitivity of the tests. Finally, limited access to diagnostic or laboratory methods, in both developed and developing countries, undoubtedly affects B. pertussis laboratory confirmation [8, 24]. Culture B. pertussis is a fastidious gram-negative coccobacillus, and its isolation from nasopharyngeal secretions remains the gold standard for diagnosis. Culture requires collection of a posterior nasopharyngeal specimen with a Dacron or calcium alginate swab.
To increase the yield of positive cultures, specimens should be immediately plated onto selective Regan-Lowe agar or Bordet-Gengou medium, selective media that are seldom readily available in physicians' offices because of their cost and short shelf-life [17, 24]. The main reasons for failure of bacterial growth in culture, from correctly collected and transported specimens, stem from bacterial and fungal contamination and the lack of fresh media [24]. Generally, 7–10 days are required to grow, isolate, and identify the organism, an obvious limitation of the culture method. The timing of obtaining specimens for culture is also of paramount importance and greatly affects its yield. The proportion of patients testing positive for pertussis by culture is highest when the initial specimens are obtained early in the course of illness, i.e., during the early catarrhal phase of the illness when the organism is present in the nasopharynx in sufficient quantity. However, adults and adolescents often present late in the course of their illness, thereby greatly reducing the likelihood of culturing the organism [17, 24]. Studies also demonstrate that proportions of positive cultures decline in patients who have been previously immunized and undoubtedly in those in whom antibiotics have been started. Thus, given the limited “window of opportunity” for positive culture, it is important to stress that a negative culture does not exclude pertussis [16]. Finally, it is important to emphasize that, despite its low yield, culture should be attempted, as the bacterial isolates are needed for genotypic and phenotypic analysis. PCR The use of PCR for the diagnosis of pertussis is rapidly evolving, as it provides a sensitive, rapid means of laboratory diagnosis in circumstances in which the probability of a positive culture is low [8, 17, 32, 35]. Notably, the CDC and World Health Organization (WHO) now include a positive PCR in their laboratory definition of pertussis [17]. While the sensitivity of PCR also decreases somewhat with the duration of cough and among previously immunized individuals, it is nevertheless a significantly more robust tool for diagnosis in those in the later stages of the disease or in those who have already received antibiotics [17, 35]. Specifically, in their 2005 consensus paper, the European Research Programme for Improved Pertussis Strain Characterization and Surveillance (EUpertstrain) states that real-time PCR is more sensitive than culture for the detection of B. pertussis, especially after the first 3–4 weeks of coughing and after antibiotic therapy has been initiated [16, 27]. In a prospective study in which nasopharyngeal samples were obtained simultaneously for both PCR and culture, the identification of B. pertussis infections was nearly four-fold higher with PCR [8, 28]. Finally, PCR is an invaluable tool for the diagnosis of pertussis among young infants, since the yield of culture is low and serology is problematic in this age group [1, 17]. As with culture, important factors for the successful application of PCR in the diagnosis of infection by Bordetella species include proper sample collection and preparation. For example, a Dacron swab with a fine flexible wire shaft, and not calcium alginate, is the recommended swab. After obtaining the nasopharyngeal sample, the swab should be shaken vigorously in saline solution, the swab discarded and the vial sealed for further processing [24].
Appropriate primer selection, amplification conditions, and controls are also essential for effective PCR testing. Primers have been derived from four chromosomal regions, and common primers employed in PCR detection systems include IS481, IS1001, PTp1, and PTp2 [8, 24]. A corollary of PCR's high sensitivity is that false-positive results are a well-recognized problem associated with the PCR diagnosis of pertussis and other respiratory illnesses. While at the present time PCR is not routinely available and its methods need more standardization, optimization, and quality control, in the future an internationally accepted standardized kit might become available, which would facilitate the expanded use of PCR for pertussis diagnosis [16, 35]. Serology Natural infection with B. pertussis is followed by an increase in serum levels of IgA, IgM, and IgG antibodies to specific pertussis antigens, whereas the primary immunization of children induces mainly IgM and IgG antibodies. During the past 15 years, ELISAs have constituted the mainstay of serologic diagnosis using specific B. pertussis proteins as antigens, and the serologic diagnosis of pertussis is suspected with increases in IgA or IgG antibody titers to pertussis toxin (PT), filamentous hemagglutinin (FHA), pertactin, fimbriae or sonicated whole organisms in two serum samples collected 2–4 weeks apart [24]. Notably, antibody responses to FHA are not specific to B. pertussis but also occur following infection with other Bordetella species; moreover, these antibodies may cross-react with epitopes of other bacteria including H. influenzae and M. pneumoniae. Thus, the greatest sensitivity and specificity for the serological diagnosis of B. pertussis infection is achieved by ELISA measurement of IgG and IgA antibodies to PT demonstrating at least a two-fold rise in titer between acute- and convalescent-phase sera [24]. Still, the main problem in the serologic diagnosis of B. pertussis by ELISA is the frequent delay in obtaining the acute-phase specimen. In individuals with re-infections, there is a rapid increase in titer, such that if a “delayed” acute-phase sample is obtained, the titer is likely to have already peaked, thereby hampering the detection of a significant titer increase between the acute- and convalescent-phase serum samples [24]. Notably, for those individuals not recently immunized, a single-serum-sample ELISA may circumvent the problem, as ill patients will have significantly higher ELISA titers than the geometric mean titers (GMT) of healthy controls [23–25]. With this in mind, although a rise in PT IgA is more suggestive of a recent antibody response, it is less consistent than a PT IgG rise; hence, in adolescents and adults, a single high value of IgG or IgA antibodies to PT suggests pertussis infection [17, 24, 35]. Indeed, de Melker et al. demonstrated that an IgG concentration to PT of at least 100 units/mL in a single serum sample was diagnostic of either a recent or an active pertussis infection [10]. The serological diagnosis of pertussis among infants also has notable limitations. Some culture-positive patients, particularly infants younger than 3 months, do not develop measurable antibodies, a finding that calls into question the utility of even obtaining a serum specimen for serology in young infants [8]. In summary, despite the shortcomings of serology, a single-sample serology test can be a useful tool, particularly among older patients presenting late in the course of their illness when culture and PCR testing are negative.
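The serologic decision rules described above lend themselves to a compact summary. The Python sketch below encodes the paired-sera two-fold-rise criterion and the single-sample cutoff of de Melker et al. [10]; the function names and the handling of recent immunization are illustrative assumptions, and this is not a validated diagnostic algorithm.

```python
def paired_sera_positive(acute_igg_pt: float, convalescent_igg_pt: float) -> bool:
    """Paired-sample ELISA rule: at least a two-fold rise in IgG anti-PT
    titer between acute- and convalescent-phase sera (2-4 weeks apart)."""
    return convalescent_igg_pt >= 2 * acute_igg_pt

def single_serum_suggestive(igg_pt_units_per_ml: float,
                            recently_immunized: bool) -> bool:
    """Single-sample rule after de Melker et al. [10]: IgG anti-PT of at
    least 100 units/mL suggests recent or active infection, provided the
    patient was not recently immunized (vaccination also raises anti-PT
    titers; this guard is an assumption for illustration)."""
    return (not recently_immunized) and igg_pt_units_per_ml >= 100.0

# An adult in week 5 of coughing, no recent booster, single titer 140 U/mL:
print(single_serum_suggestive(140.0, recently_immunized=False))  # True
# Paired sera rising only from 60 to 90 U/mL (a 1.5-fold rise):
print(paired_sera_positive(acute_igg_pt=60.0, convalescent_igg_pt=90.0))  # False
```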
Use of antibiotics in the treatment and prevention of pertussis Antimicrobial agents administered early in the course of disease, i.e., during the catarrhal stage, may ameliorate the disease; after the cough is established, however, antibiotics do not have a discernible effect on the course of the illness but rather are recommended to limit the spread of organisms to other individuals [9]. Erythromycin, clarithromycin or azithromycin are now considered first-line agents for treatment (and prophylaxis) of pertussis in individuals 6 months of age or older (Table 1). The antibiotic choice for infants younger than 6 months of age, however, requires special attention. The FDA has not yet approved azithromycin or clarithromycin for use in infants younger than 6 months; however, the AAP-endorsed 2006 Red Book lists azithromycin as the preferred macrolide for this age group because of the risk of idiopathic hypertrophic pyloric stenosis associated with erythromycin [9]. Notably, however, there was a recent report of infantile hypertrophic pyloric stenosis among two young infants treated with azithromycin for pertussis [26].
Table 1 Recommended antimicrobial therapy and postexposure prophylaxis for pertussis in infants, children, adolescents, and adults [9] (recommended drugs: azithromycin, erythromycin, clarithromycin; alternative: TMP-SMX)
– <1 mo: Azithromycin 10 mg/kg per day as a single dose for 5 days (preferred macrolide for this age because of the risk of idiopathic hypertrophic pyloric stenosis associated with erythromycin); Erythromycin 40–50 mg/kg per day in 4 divided doses for 14 days; Clarithromycin not recommended; TMP-SMX contraindicated at <2 mo of age.
– 1–5 mo: Azithromycin and Erythromycin as above; Clarithromycin 15 mg/kg per day in 2 divided doses for 7 days; TMP-SMX (≥2 mo of age): TMP 8 mg/kg per day, SMX 40 mg/kg per day in 2 doses for 14 days.
– ≥6 mo and children: Azithromycin 10 mg/kg as a single dose on day 1 (maximum 500 mg), then 5 mg/kg per day as a single dose on days 2–5 (maximum 250 mg/day); Erythromycin as above (maximum 2 g/day); Clarithromycin as above (maximum 1 g/day); TMP-SMX as above.
– Adolescents and adults: Azithromycin 500 mg as a single dose on day 1, then 250 mg as a single dose on days 2–5; Erythromycin 2 g/day in 4 divided doses for 14 days; Clarithromycin 1 g/day in 2 divided doses for 7 days; TMP-SMX: TMP 300 mg/day, SMX 1,600 mg/day in 2 divided doses for 14 days.
Used with permission of the American Academy of Pediatrics. Red Book: 2006 Report of the Committee on Infectious Diseases, American Academy of Pediatrics, 2006. TMP trimethoprim, SMX sulfamethoxazole.
Postexposure prophylaxis The American Academy of Pediatrics’ 2006 Red Book recommends that chemoprophylaxis be administered to all household contacts and other close contacts, regardless of age and immunization status. The rationale behind this recommendation is that administration of chemoprophylaxis to asymptomatic contacts within 21 days of onset of cough in the index patient can limit secondary transmission [9]. Other countries like the United Kingdom limit the use of prophylaxis to the protection of only those at greatest risk from pertussis, namely young infants [11, 33]. Notably, an evidence-based review of the literature on the use of erythromycin in preventing secondary transmission of pertussis to close contacts concluded that in countries where effective pertussis vaccines are in use, chemoprophylaxis should be limited to those most susceptible to the complications of pertussis (i.e., unimmunized or partially immunized infants) and to those individuals who come in close contact with the latter [11, 12]. Regardless of the policy, the agents, dose, and duration of prophylaxis are the same as for treatment of pertussis.
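As an arithmetic illustration of the weight-based rows of Table 1 above, the following Python sketch computes an azithromycin course. It transcribes only the azithromycin column, the function name is hypothetical, and it is in no way a substitute for the table or for clinical judgment.

```python
def azithromycin_course_mg(age_months: float, weight_kg: float) -> list:
    """Daily azithromycin doses in mg, per the azithromycin column of
    Table 1 [9]. Weight-based pediatric rows only; illustrative sketch,
    not a prescribing tool."""
    if age_months < 6:
        # <6 mo: 10 mg/kg per day as a single dose for 5 days
        return [10 * weight_kg] * 5
    # >=6 mo and children: 10 mg/kg on day 1 (max 500 mg),
    # then 5 mg/kg per day on days 2-5 (max 250 mg/day)
    return [min(10 * weight_kg, 500)] + [min(5 * weight_kg, 250)] * 4

# A 4-kg, 2-month-old infant: 40 mg daily for 5 days.
print(azithromycin_course_mg(age_months=2, weight_kg=4))
# A 30-kg child: 300 mg on day 1, then 150 mg on days 2-5.
print(azithromycin_course_mg(age_months=96, weight_kg=30))
```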
Prevention of pertussis: vaccination strategies Pertussis vaccines licensed for use in infants, children, and adults vary across countries. These vaccines differ both in terms of their active ingredients and in terms of the other diseases for which coverage is provided (e.g., polio, diphtheria). For example, Repevax (Sanofi Pasteur) contains diphtheria, tetanus and pertussis (acellular, component) antigens as well as inactivated polio, whereas ADACEL contains only tetanus toxoid, reduced diphtheria toxoid, and acellular pertussis. Vaccination strategies similarly vary from country to country. Over the last several years, many potential immunization strategies have been proposed to improve pertussis control (Table 2). Universal immunization of adolescents and adults, selective perinatal immunization of women who recently gave birth, and immunization of close contacts of newborns are but a few of the strategies that were discussed by the Global Pertussis Initiative (GPI), which first convened in 2001. Specified immunization goals also included improvement of current infant and toddler vaccination programs [15]. The GPI convened for a second time in 2005 and reiterated several intervention strategies to address the ongoing severe pertussis disease among neonates and infants [16].
Table 2 Immunization strategies assessed by GPI participants (entries represent the consensus of opinion of the GPI participants; objectives are listed with primary objectives first, followed by secondary objectives, as in the original table; see [15], Table 1, pg. S70)
1. Universal adult immunization: reduce morbidity in adults; reduce transmission to young infants; develop herd immunity; reduce morbidity in older children.
2. Selective immunization of new mothers, family, and close contacts of newborns: reduce transmission to infants; reduce morbidity in adults, particularly young adults.
3. Selective immunization of health care workers: reduce transmission to patients; reduce morbidity in health care workers.
4. Selective immunization of child care workers: reduce transmission to infants; reduce morbidity in child care workers.
5. Universal adolescent immunization: reduce morbidity in adolescents and young adults; reduce transmission to infants; develop herd immunity.
6. Preschool booster at 4 years of age: reduce morbidity in 4- to 6-year-olds; reduce transmission to infants; develop herd immunity.
7. Reinforce and/or improve the current infant and toddler immunization strategy: reduce morbidity and mortality in infants, toddlers, and children; reduce overall circulation of pertussis.
Used with permission from Lippincott Williams & Wilkins.
Immunization of adolescents As previously noted, the incidence of pertussis among adolescents is increasing, and these individuals then serve as a reservoir of infection for unvaccinated or incompletely vaccinated infants [13]. Two Tdap vaccines (Boostrix and ADACEL) are licensed for use in the US. Recently, the CDC's Advisory Committee on Immunization Practices (ACIP) recommended routine Tdap for adolescents aged 11–18 years [3, 13, 16]. Several other countries including Canada, Austria, Australia, France, and Germany have also introduced the universal immunization of adolescents [16]. In Germany, for example, the current immunization schedule recommends DTaP at 2, 3, 4, and 11–14 months, and dTaP at 5–6 years and at 9–17 years [14]. For a complete overview of pertussis vaccination in other European countries, please access the EUVAC.NET website [14]. Future studies will be needed to evaluate the duration of protection afforded and the potential need for an adult booster.
Immunization of adults The Adult Pertussis Trial (APERT), a study sponsored by the United States National Institutes of Health (NIH), has recently demonstrated the efficacy of acellular pertussis vaccines in preventing pertussis disease in adults (and adolescents) [16, 20, 21, 34]. To date, only ADACEL is licensed for use in adults, and the recommended adult immunization schedule in the US (October 2006–September 2007) now recommends that Tdap replace a single dose of Td for adults <65 years who have not previously received a dose of Tdap (either in the primary series, as a booster or for wound management) [6]. Given the increased public awareness of adolescent and adult pertussis, in conjunction with perhaps an increased awareness of vaccines in general (e.g., HPV and influenza), the general public may be more receptive to universal adult vaccination against pertussis [16]. The expected benefits of such programs would be to build up herd immunity and reduce disease. Alternatively, the selective vaccination of only those adults at highest risk of transmitting B. pertussis to vulnerable infants is likely to decrease both the incidence and the impact of pertussis on young infants. Regardless of the approach used, successful adult vaccination programs must include education and public awareness. Cocoon strategy The vaccination of household members, including parents and siblings of newborn infants, has recently been coined the cocoon strategy [16]. Recent studies have demonstrated that parents are frequently the source of pertussis infection in their infants [2, 13, 16, 19, 22]. While implementation of this strategy is expected to lead to only modest reductions in typical adult cases, it should have a strong indirect effect on infants and young children. In countries where universal immunization of adults is not yet feasible, many experts consider such targeted immunization as “worthy of implementation” [16]. Presently, the cocoon strategy is recommended in several countries, including Australia, France, Germany, and Austria [16]. Maternal vaccination Although there is efficient placental transfer of pertussis antibodies, low maternal levels and rapid decay in newborns render infants vulnerable to life-threatening pertussis [16, 18]. Maternal immunization during pregnancy might afford some degree of protection to mother and infant during a vulnerable period, and the use of Tdap during pregnancy is currently under consideration. Neonatal vaccination Given the resurgence of reported pertussis in infant populations noted in multiple countries, and the high morbidity and mortality in this age group, newborn pertussis immunization is a potentially attractive strategy [5, 16, 19]. It is still unclear, however, whether such a strategy will induce sufficient and timely immunity in this targeted group. Future trials are needed to address these concerns. Conclusion Despite the increasing awareness of B. pertussis, it continues to affect millions of people worldwide. While classical pertussis was once regarded as a “child’s disease”, today pertussis poses a serious threat to infants and greatly affects adolescents and adults, who now function as reservoirs of infection. While advances in molecular biology have undoubtedly increased the capacity to diagnose pertussis, work is still needed to standardize laboratory techniques.
The increased awareness of the pertussis problem among experts and the lay public will hopefully pave the way for the implementation of various vaccine strategies to enhance its control.
[ "immunization strategies", "review", "bordetella pertussis", "polymerase chain reaction (pcr)" ]
[ "P", "P", "R", "M" ]
Intensive_Care_Med-4-1-2249616
Surviving Sepsis Campaign: International guidelines for management of severe sepsis and septic shock: 2008
Objective To provide an update to the original Surviving Sepsis Campaign clinical management guidelines, “Surviving Sepsis Campaign guidelines for management of severe sepsis and septic shock,” published in 2004. Introduction Severe sepsis (acute organ dysfunction secondary to infection) and septic shock (severe sepsis plus hypotension not reversed with fluid resuscitation) are major healthcare problems, affecting millions of individuals around the world each year, killing one in four (and often more), and increasing in incidence [1–5]. Similar to polytrauma, acute myocardial infarction, or stroke, the speed and appropriateness of therapy administered in the initial hours after severe sepsis develops are likely to influence outcome. In 2004, an international group of experts in the diagnosis and management of infection and sepsis, representing 11 organizations, published the first internationally accepted guidelines that the bedside clinician could use to improve outcomes in severe sepsis and septic shock [6, 7]. These guidelines represented Phase II of the Surviving Sepsis Campaign (SSC), an international effort to increase awareness and improve outcomes in severe sepsis. Joined by additional organizations, the group met again in 2006 and 2007 to update the guidelines document using a new evidence-based methodology system for assessing quality of evidence and strength of recommendations [8–11]. These recommendations are intended to provide guidance for the clinician caring for a patient with severe sepsis or septic shock. Recommendations from these guidelines cannot replace the clinician's decision-making capability when he or she is provided with a patient's unique set of clinical variables. Most of these recommendations are appropriate for the severe sepsis patient in both the intensive care unit (ICU) and non-ICU settings. In fact, the committee believes that, currently, the greatest outcome improvement can be made through education and process change for those caring for severe sepsis patients in the non-ICU setting and across the spectrum of acute care. It should also be noted that resource limitations in some institutions and countries may prevent physicians from accomplishing particular recommendations. Methods Sepsis is defined as infection plus systemic manifestations of infection (Table 1) [12]. Severe sepsis is defined as sepsis plus sepsis-induced organ dysfunction or tissue hypoperfusion. The threshold for this dysfunction has varied somewhat from one severe sepsis research study to another. An example of typical thresholds for the identification of severe sepsis is shown in Table 2 [13]. Sepsis-induced hypotension is defined as a systolic blood pressure (SBP) of < 90 mm Hg, or a mean arterial pressure < 70 mm Hg, or an SBP decrease > 40 mm Hg or < 2 SD below normal for age, in the absence of other causes of hypotension. Septic shock is defined as sepsis-induced hypotension persisting despite adequate fluid resuscitation. Sepsis-induced tissue hypoperfusion is defined as either septic shock, an elevated lactate, or oliguria.
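Since these definitions are stated as explicit numeric criteria, they can be summarized in a short sketch. The Python below encodes only the thresholds quoted above; the age-adjusted (< 2 SD) criterion is omitted, and the assessments of "other causes" and "adequate fluid resuscitation" are reduced to boolean inputs, which are simplifying assumptions for illustration.

```python
def sepsis_induced_hypotension(sbp: float, map_mmhg: float,
                               sbp_decrease: float,
                               other_cause: bool = False) -> bool:
    """Numeric criteria quoted above: SBP < 90 mm Hg, or MAP < 70 mm Hg,
    or an SBP decrease > 40 mm Hg, in the absence of other causes of
    hypotension (the age-adjusted < 2 SD criterion is not modeled)."""
    if other_cause:
        return False
    return sbp < 90 or map_mmhg < 70 or sbp_decrease > 40

def septic_shock(hypotensive: bool, adequately_fluid_resuscitated: bool) -> bool:
    """Septic shock: sepsis-induced hypotension persisting despite
    adequate fluid resuscitation."""
    return hypotensive and adequately_fluid_resuscitated

# A septic patient with SBP 85 mm Hg persisting after adequate fluids:
hypo = sepsis_induced_hypotension(sbp=85, map_mmhg=60, sbp_decrease=30)
print(septic_shock(hypo, adequately_fluid_resuscitated=True))  # True
```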
Table 1 Determination of the quality of evidence
• Underlying methodology:
A. RCT
B. Downgraded RCT or upgraded observational studies
C. Well-done observational studies
D. Case series or expert opinion
• Factors that may decrease the strength of evidence:
1. Poor quality of planning and implementation of available RCTs, suggesting high likelihood of bias
2. Inconsistency of results (including problems with subgroup analyses)
3. Indirectness of evidence (differing population, intervention, control, outcomes, comparison)
4. Imprecision of results
5. High likelihood of reporting bias
• Main factors that may increase the strength of evidence:
1. Large magnitude of effect (direct evidence, relative risk (RR) > 2 with no plausible confounders)
2. Very large magnitude of effect with RR > 5 and no threats to validity (by two levels)
3. Dose-response gradient
RCT, randomized controlled trial; RR, relative risk
Table 2 Factors determining strong vs. weak recommendation (what should be considered, and the recommended process)
• Quality of evidence: the lower the quality of evidence, the less likely a strong recommendation
• Relative importance of the outcomes: if values and preferences vary widely, a strong recommendation becomes less likely
• Baseline risks of outcomes: the higher the risk, the greater the magnitude of benefit
• Magnitude of relative risk, including benefits, harms, and burden: larger relative risk reductions or larger increases in relative risk of harm make a strong recommendation more or less likely, respectively
• Absolute magnitude of the effect: the larger the absolute benefits and harms, the greater or lesser likelihood, respectively, of a strong recommendation
• Precision of the estimates of the effects: the greater the precision, the more likely a strong recommendation
• Costs: the higher the cost of treatment, the less likely a strong recommendation
The current clinical practice guidelines build on the first and second editions from 2001 (see below) and 2004 [6, 7, 14]. The 2001 publication incorporated a MEDLINE search for clinical trials in the preceding 10 years, supplemented by a manual search of other relevant journals [14]. The 2004 publication incorporated the evidence available through the end of 2003. The current publication is based on an updated search into 2007 (see methods and rules below). The 2001 guidelines were coordinated by the International Sepsis Forum (ISF); the 2004 guidelines were funded by unrestricted educational grants from industry and administered through the Society of Critical Care Medicine (SCCM), the European Society of Intensive Care Medicine (ESICM), and the ISF. Two of the SSC administering organizations receive unrestricted industry funding to support SSC activities (ESICM and SCCM), but none of this funding was used to support the 2006–2007 committee meetings. It is important to distinguish between the process of guidelines revision and the Surviving Sepsis Campaign. The Surviving Sepsis Campaign (SSC) is partially funded by unrestricted educational industry grants, including those from Edwards LifeSciences, Eli Lilly and Company, and Philips Medical Systems. SSC also received funding from the Coalition for Critical Care Excellence of the Society of Critical Care Medicine. The great majority of industry funding has come from Eli Lilly and Company. Current industry funding for the Surviving Sepsis Campaign is directed to the performance improvement initiative. No industry funding was used in the guidelines revision process.
For both the 2004 and the 2006/2007 efforts there were no members of the committee from industry, no industry input into guidelines development, and no industry presence at any of the meetings. Industry awareness or comment on the recommendations was not allowed. No member of the guideline committee received any honoraria for any role in the 2004 or 2006/2007 guidelines process. The committee considered the issue of recusal of individual committee members during deliberation and decision making in areas where committee members had either financial or academic competing interests; however, consensus as to a threshold for exclusion could not be reached. Alternatively, the committee agreed to ensure full disclosure and transparency of all committee members' potential conflicts at the time of publication (see disclosures at the end of this document). The guidelines process included a modified Delphi method, a consensus conference, several subsequent meetings of subgroups and key individuals, teleconferences and electronically based discussions among subgroups and members of the entire committee, and two follow-up nominal group meetings in 2007. Subgroups were formed, each charged with updating recommendations in specific areas, including corticosteroids, blood products, activated protein C, renal replacement therapy, antibiotics, source control, and glucose control. Each subgroup was responsible for updating the evidence (into 2007, with major additional elements of information incorporated into the evolving manuscript throughout 2006 and 2007). A separate search was performed for each clearly defined question. The committee chair worked with subgroup heads to identify pertinent search terms that always included, at a minimum, sepsis, severe sepsis, septic shock and sepsis syndrome, crossed against the general topic area of the subgroup as well as pertinent key words of the specific question posed. All questions of the previous guidelines publications were searched, as were pertinent new questions generated by general topic-related searches or recent trials. Quality of evidence was judged by pre-defined Grades of Recommendation, Assessment, Development and Evaluation (GRADE) criteria (see below). Committee members received substantial education on the GRADE approach via email prior to the first committee meeting and again at that meeting. Rules were distributed concerning assessment of the body of evidence, and GRADE experts were available for questions throughout the process. Subgroups agreed electronically on draft proposals that were presented to committee meetings for general discussion. In January 2006, the entire group met during the 35th SCCM Critical Care Congress in San Francisco, California, USA. The results of that discussion were incorporated into the next version of the recommendations and again discussed using electronic mail. Recommendations were finalized during nominal group meetings (composed of a subset of the committee members) at the 2007 SCCM (Orlando) and 2007 International Symposium on Intensive Care and Emergency Medicine (Brussels) meetings, with recirculation of deliberations and decisions to the entire group for comment or approval. At the discretion of the chair and following adequate discussion, competing proposals for the wording of recommendations or the assignment of strength of evidence were resolved by formal voting. On occasion, voting was performed to give the committee a sense of the distribution of opinions in order to facilitate additional discussion.
The manuscript was edited for style and form by the writing committee, with final approval by section leads for their respective group assignments and then by the entire committee. The development of guidelines and grading of recommendations for the 2004 guideline development process were based on a system proposed by Sackett in 1989, during one of the first American College of Chest Physicians (ACCP) conferences on the use of antithrombotic therapies [15]. The revised guidelines recommendations are based on the Grades of Recommendation, Assessment, Development and Evaluation (GRADE) system, a structured system for rating quality of evidence and grading strength of recommendation in clinical practice [8–11]. The SSC Steering Committee and individual authors collaborated with GRADE representatives to apply the GRADE system to the SSC guidelines revision process. The members of the GRADE group were directly involved, either in person or via e-mail, in all discussions and deliberations amongst the guidelines committee members as to grading decisions. Subsequently, the SSC authors used written material prepared by the GRADE group and conferred with GRADE group members who were available at the first committee meeting and subsequent nominal group meetings. GRADE representatives also served as a resource throughout subgroup deliberation. The GRADE system is based on a sequential assessment of the quality of evidence, followed by an assessment of the balance between benefits and risks, burden, and cost and, based on the above, the development and grading of a management recommendation [9–11]. Keeping the rating of quality of evidence and strength of recommendation explicitly separate constitutes a crucial and defining feature of the GRADE approach. This system classifies quality of evidence as high (Grade A), moderate (Grade B), low (Grade C), or very low (Grade D). Randomized trials begin as high-quality evidence but may be downgraded due to limitations in implementation, inconsistency or imprecision of the results, indirectness of the evidence, and possible reporting bias (see Table 1). Examples of indirectness of the evidence include the population studied, the interventions used, the outcomes measured, and how these relate to the question of interest. Observational (non-randomized) studies begin as low-quality evidence, but the quality level may be upgraded on the basis of a large magnitude of effect. An example of this is the quality of evidence for early administration of antibiotics. The GRADE system classifies recommendations as strong (Grade 1) or weak (Grade 2). The grade of strong or weak is considered of greater clinical importance than the difference in letter level of quality of evidence. The committee assessed whether the desirable effects of adherence will outweigh the undesirable effects, and the strength of a recommendation reflects the group's degree of confidence in that assessment. A strong recommendation in favor of an intervention reflects that the desirable effects of adherence to a recommendation (beneficial health outcomes, less burden on staff and patients, and cost savings) will clearly outweigh the undesirable effects (harms, more burden and greater costs).
A weak recommendation in favor of an intervention indicates that the desirable effects of adherence to a recommendation probably will outweigh the undesirable effects, but the panel is not confident about these tradeoffs, either because some of the evidence is low-quality (and thus there remains uncertainty regarding the benefits and risks) or because the benefits and downsides are closely balanced. While the degree of confidence is a continuum and there is no precise threshold between a strong and a weak recommendation, the presence of important concerns about one or more of the above factors makes a weak recommendation more likely. A “strong” recommendation is worded as “we recommend” and a weak recommendation as “we suggest.” The implications of calling a recommendation “strong” are that most well-informed patients would accept that intervention and that most clinicians should use it in most situations. There may be circumstances in which a “strong” recommendation cannot or should not be followed for an individual patient because of that patient's preferences or clinical characteristics, which make the recommendation less applicable. It should be noted that a “strong” recommendation does not automatically imply standard of care. For example, the strong recommendation for administering antibiotics within one hour of the diagnosis of severe sepsis, although desirable, is not currently standard of care, as verified by current practice (personal communication, Mitchell Levy, from the first 8,000 patients entered internationally into the SSC performance improvement database). The implication of a “weak” recommendation is that a majority of well-informed patients would accept it (but a substantial proportion would not) and that clinicians should consider its use according to the particular circumstances. Differences of opinion among committee members about the interpretation of evidence, wording of proposals, or strength of recommendations were resolved using a specifically developed set of rules. We will describe this process in detail in a separate publication. In summary, the main approach for converting diverse opinions into a recommendation was: (1) to give a recommendation a direction (for or against the given action), a majority of votes had to be in favor of that direction, with no more than 20% preferring the opposite direction (a neutral vote was allowed as well); (2) to call a given recommendation “strong” rather than “weak,” at least 70% “strong” votes were required; (3) if fewer than 70% of votes indicated a “strong” preference, the recommendation was assigned a “weak” category of strength. We used a combination of modified Delphi Process and Nominal (Expert) Group techniques to ensure both depth and breadth of review. The entire review group (together with their parent organizations as required) participated in the larger, iterative, modified Delphi process. The smaller working group meetings, which took place in person, functioned as the Nominal Groups. If a clear consensus could not be obtained by polling within the Nominal Group meetings, the larger group was specifically asked to use the polling process. This was only required for corticosteroids and glycemic control. The larger group had the opportunity to review all outputs. In this way the entire review combined intense focused discussion (Nominal Group) with broader review and monitoring using the Delphi process.
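The three voting rules above can be expressed directly in code. The Python sketch below assumes that the 70% "strong" threshold is taken over all votes cast, which the text does not specify; that detail, and the function shape, are assumptions for illustration only.

```python
def grade_recommendation(votes_for: int, votes_against: int,
                         votes_neutral: int, strong_votes: int):
    """Sketch of the committee's voting rules. Returns (direction, strength),
    or None when no direction satisfies rule 1. Assumes the 70% 'strong'
    threshold is computed over all votes cast (not specified in the text)."""
    total = votes_for + votes_against + votes_neutral
    # Rule 1: a majority in one direction, with no more than 20%
    # preferring the opposite direction (neutral votes are allowed).
    if votes_for > total / 2 and votes_against <= 0.20 * total:
        direction = "for"
    elif votes_against > total / 2 and votes_for <= 0.20 * total:
        direction = "against"
    else:
        return None
    # Rules 2 and 3: at least 70% "strong" votes for a strong grade,
    # otherwise the recommendation is assigned a weak grade.
    strength = "strong" if strong_votes >= 0.70 * total else "weak"
    return direction, strength

# 12 for, 2 against, 1 neutral, 9 of 15 voting "strong":
print(grade_recommendation(12, 2, 1, 9))  # ('for', 'weak'): 9/15 = 60% < 70%
```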
Note: Refer to Tables 3, 4, and 5 for condensed adult recommendations.
Table 3 Initial resuscitation and infection issues
Initial resuscitation (first 6 hours)
Strength of recommendation and quality of evidence have been assessed using the GRADE criteria, presented in brackets after each guideline. For added clarity: • indicates a strong recommendation or “we recommend”; ○ indicates a weak recommendation or “we suggest”.
• Begin resuscitation immediately in patients with hypotension or elevated serum lactate > 4 mmol/l; do not delay pending ICU admission. (1C)
• Resuscitation goals: (1C)
– Central venous pressure (CVP) 8–12 mm Hg*
– Mean arterial pressure ≥ 65 mm Hg
– Urine output ≥ 0.5 mL.kg-1.hr-1
– Central venous (superior vena cava) oxygen saturation ≥ 70%, or mixed venous ≥ 65%
○ If the venous O2 saturation target is not achieved: (2C)
– consider further fluid
– transfuse packed red blood cells if required to a hematocrit of ≥ 30% and/or
– dobutamine infusion, max 20 μg.kg-1.min-1
* A higher target CVP of 12–15 mm Hg is recommended in the presence of mechanical ventilation or pre-existing decreased ventricular compliance.
Diagnosis
• Obtain appropriate cultures before starting antibiotics provided this does not significantly delay antimicrobial administration. (1C)
– Obtain two or more blood cultures (BCs)
– One or more BCs should be percutaneous
– One BC from each vascular access device in place > 48 h
– Culture other sites as clinically indicated
• Perform imaging studies promptly in order to confirm and sample any source of infection, if safe to do so. (1C)
Antibiotic therapy
• Begin intravenous antibiotics as early as possible, and always within the first hour of recognizing severe sepsis (1D) and septic shock (1B).
• Broad-spectrum: one or more agents active against likely bacterial/fungal pathogens and with good penetration into the presumed source. (1B)
• Reassess the antimicrobial regimen daily to optimise efficacy, prevent resistance, avoid toxicity and minimise costs. (1C)
○ Consider combination therapy in Pseudomonas infections. (2D)
○ Consider combination empiric therapy in neutropenic patients. (2D)
○ Combination therapy no more than 3–5 days, with de-escalation following susceptibilities. (2D)
• Duration of therapy typically limited to 7–10 days; longer if response is slow, there are undrainable foci of infection, or immunologic deficiencies. (1D)
• Stop antimicrobial therapy if the cause is found to be non-infectious. (1D)
Source identification and control
• A specific anatomic site of infection should be established as rapidly as possible (1C) and within the first 6 hrs of presentation (1D).
• Formally evaluate the patient for a focus of infection amenable to source control measures (e.g., abscess drainage, tissue debridement). (1C)
• Implement source control measures as soon as possible following successful initial resuscitation. (1C) Exception: infected pancreatic necrosis, where surgical intervention is best delayed. (2B)
• Choose the source control measure with maximum efficacy and minimal physiologic upset. (1D)
• Remove intravascular access devices if potentially infected. (1C)
Table 4 Hemodynamic support and adjunctive therapy
Fluid therapy
Strength of recommendation and quality of evidence have been assessed using the GRADE criteria, presented in brackets after each guideline.
For added clarity: • indicates a strong recommendation or “we recommend”; ○ indicates a weak recommendation or “we suggest”.
• Fluid-resuscitate using crystalloids or colloids. (1B)
• Target a CVP of ≥ 8 mm Hg (≥ 12 mm Hg if mechanically ventilated). (1C)
• Use a fluid challenge technique while it is associated with a hemodynamic improvement. (1D)
• Give fluid challenges of 1000 ml of crystalloids or 300–500 ml of colloids over 30 min. More rapid and larger volumes may be required in sepsis-induced tissue hypoperfusion. (1D)
• The rate of fluid administration should be reduced if cardiac filling pressures increase without concurrent hemodynamic improvement. (1D)
Vasopressors
• Maintain MAP ≥ 65 mm Hg. (1C)
• Norepinephrine or dopamine, centrally administered, are the initial vasopressors of choice. (1C)
○ Epinephrine, phenylephrine or vasopressin should not be administered as the initial vasopressor in septic shock. (2C)
– Vasopressin 0.03 units/min may be subsequently added to norepinephrine with the anticipation of an effect equivalent to norepinephrine alone.
○ Use epinephrine as the first alternative agent in septic shock when blood pressure is poorly responsive to norepinephrine or dopamine. (2B)
• Do not use low-dose dopamine for renal protection. (1A)
• In patients requiring vasopressors, insert an arterial catheter as soon as practical. (1D)
Inotropic therapy
• Use dobutamine in patients with myocardial dysfunction, as supported by elevated cardiac filling pressures and low cardiac output. (1C)
• Do not increase cardiac index to predetermined supranormal levels. (1B)
Steroids
○ Consider intravenous hydrocortisone for adult septic shock when hypotension remains poorly responsive to adequate fluid resuscitation and vasopressors. (2C)
○ An ACTH stimulation test is not recommended to identify the subset of adults with septic shock who should receive hydrocortisone. (2B)
○ Hydrocortisone is preferred to dexamethasone. (2B)
○ Fludrocortisone (50 μg orally once a day) may be included if an alternative to hydrocortisone is being used which lacks significant mineralocorticoid activity. Fludrocortisone is optional if hydrocortisone is used. (2C)
○ Steroid therapy may be weaned once vasopressors are no longer required. (2D)
• The hydrocortisone dose should be < 300 mg/day. (1A)
• Do not use corticosteroids to treat sepsis in the absence of shock unless the patient's endocrine or corticosteroid history warrants it. (1D)
Recombinant human activated protein C (rhAPC)
○ Consider rhAPC in adult patients with sepsis-induced organ dysfunction and a clinical assessment of high risk of death (typically APACHE II ≥ 25 or multiple organ failure) if there are no contraindications. (2B, 2C for post-operative patients)
• Adult patients with severe sepsis and low risk of death (e.g., APACHE II < 20 or one organ failure) should not receive rhAPC. (1A)
Table 5 Other supportive therapy of severe sepsis
Blood product administration
Strength of recommendation and quality of evidence have been assessed using the GRADE criteria, presented in brackets after each guideline. For added clarity: • indicates a strong recommendation or “we recommend”; ○ indicates a weak recommendation or “we suggest”.
• Give red blood cells when hemoglobin decreases to < 7.0 g/dl (< 70 g/L) to target a hemoglobin of 7.0–9.0 g/dl in adults. (1B)
– A higher hemoglobin level may be required in special circumstances (e.g., myocardial ischemia, severe hypoxemia, acute hemorrhage, cyanotic heart disease or lactic acidosis)
• Do not use erythropoietin to treat sepsis-related anemia.
Erythropoietin may be used for other accepted reasons. (1B)
○ Do not use fresh frozen plasma to correct laboratory clotting abnormalities unless there is bleeding or planned invasive procedures. (2D)
• Do not use antithrombin therapy. (1B)
○ Administer platelets when: (2D)
  – counts are < 5000/mm3 (5 × 10⁹/L), regardless of bleeding
  – counts are 5000–30,000/mm3 (5–30 × 10⁹/L) and there is significant bleeding risk
  – higher platelet counts (≥ 50,000/mm3 (50 × 10⁹/L)) are required for surgery or invasive procedures

Mechanical ventilation of sepsis-induced acute lung injury (ALI)/ARDS
• Target a tidal volume of 6 mL/kg (predicted) body weight in patients with ALI/ARDS. (1B)
• Target an initial upper limit plateau pressure < 30 cm H2O. Consider chest wall compliance when assessing plateau pressure. (1C)
• Allow PaCO2 to increase above normal, if needed, to minimize plateau pressures and tidal volumes. (1C)
• Set positive end-expiratory pressure (PEEP) to avoid extensive lung collapse at end-expiration. (1C)
○ Consider using the prone position in ARDS patients requiring potentially injurious levels of FiO2 or plateau pressure, provided they are not put at risk by positional changes. (2C)
• Maintain mechanically ventilated patients in a semi-recumbent position unless contraindicated (1B); elevation of the head of the bed to 30–45° is suggested (2C).
○ Non-invasive ventilation may be considered in the minority of ALI/ARDS patients with mild-to-moderate hypoxemic respiratory failure. Patients need to be hemodynamically stable, comfortable, easily arousable, able to protect/clear their airway, and expected to recover rapidly. (2B)
• Use a weaning protocol and a spontaneous breathing trial (SBT) regularly to evaluate the potential for discontinuing mechanical ventilation. (1A)
  – SBT options include a low level of pressure support with continuous positive airway pressure 5 cm H2O, or a T-piece.
  – Before the SBT, patients should: be arousable; be hemodynamically stable without vasopressors; have no new potentially serious conditions; have low ventilatory and end-expiratory pressure requirements; and require FiO2 levels that can be safely delivered with a face mask or nasal cannula.
• Do not use a pulmonary artery catheter for the routine monitoring of patients with ALI/ARDS. (1A)
• Use a conservative fluid strategy for patients with established ALI who do not have evidence of tissue hypoperfusion. (1C)

Sedation, analgesia, and neuromuscular blockade in sepsis
• Use sedation protocols with a sedation goal for critically ill mechanically ventilated patients. (1B)
• Use either intermittent bolus sedation or continuous infusion sedation to predetermined end points (sedation scales), with daily interruption/lightening to produce awakening. Re-titrate if necessary. (1B)
• Avoid neuromuscular blockers (NMBs) where possible.
Monitor the depth of block with train-of-four when using continuous infusions. (1B)

Glucose control
• Use IV insulin to control hyperglycemia in patients with severe sepsis following stabilization in the ICU. (1B)
• Aim to keep blood glucose < 150 mg/dL (8.3 mmol/L) using a validated protocol for insulin dose adjustment. (2C)
• Provide a glucose calorie source and monitor blood glucose values every 1–2 hrs (4 hrs when stable) in patients receiving intravenous insulin. (1C)
• Interpret low glucose levels obtained with point-of-care testing with caution, as these techniques may overestimate arterial blood or plasma glucose values. (1B)

Renal replacement
○ Intermittent hemodialysis and continuous veno-venous hemofiltration (CVVH) are considered equivalent. (2B)
○ CVVH offers easier management in hemodynamically unstable patients. (2D)

Bicarbonate therapy
• Do not use bicarbonate therapy for the purpose of improving hemodynamics or reducing vasopressor requirements when treating hypoperfusion-induced lactic acidemia with pH ≥ 7.15. (1B)

Deep vein thrombosis (DVT) prophylaxis
• Use either low-dose unfractionated heparin (UFH) or low-molecular-weight heparin (LMWH), unless contraindicated. (1A)
• Use a mechanical prophylactic device, such as compression stockings or an intermittent compression device, when heparin is contraindicated. (1A)
○ Use a combination of pharmacologic and mechanical therapy for patients who are at very high risk for DVT. (2C)
○ In patients at very high risk, LMWH should be used rather than UFH. (2C)

Stress ulcer prophylaxis
• Provide stress ulcer prophylaxis using an H2 blocker (1A) or proton pump inhibitor (1B). The benefit of preventing upper GI bleeding must be weighed against the potential for development of ventilator-associated pneumonia.

Consideration for limitation of support
• Discuss advance care planning with patients and families. Describe likely outcomes and set realistic expectations. (1D)

I. Management of Severe Sepsis

A. Initial Resuscitation

We recommend the protocolized resuscitation of a patient with sepsis-induced shock, defined as tissue hypoperfusion (hypotension persisting after an initial fluid challenge or a blood lactate concentration ≥ 4 mmol/L). This protocol should be initiated as soon as hypoperfusion is recognized and should not be delayed pending ICU admission. During the first 6 hrs of resuscitation, the goals of initial resuscitation of sepsis-induced hypoperfusion should include all of the following as one part of a treatment protocol:

– Central venous pressure (CVP): 8–12 mm Hg
– Mean arterial pressure (MAP) ≥ 65 mm Hg
– Urine output ≥ 0.5 mL.kg−1.hr−1
– Central venous (superior vena cava) or mixed venous oxygen saturation ≥ 70% or ≥ 65%, respectively (Grade 1C)

Rationale. Early goal-directed resuscitation has been shown to improve survival for emergency department patients presenting with septic shock in a randomized, controlled, single-center study [16]. Resuscitation directed toward the above goals for the initial 6-hr period was able to reduce the 28-day mortality rate. The consensus panel judged use of central venous and mixed venous oxygen saturation targets to be equivalent; either intermittent or continuous measurement of oxygen saturation was judged acceptable. Although blood lactate concentration may lack precision as a measure of tissue metabolic status, elevated levels in sepsis support aggressive resuscitation.
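Because the initial resuscitation goals are simple threshold checks, they lend themselves to a compact illustration. The following is a minimal, purely illustrative Python sketch, not a clinical tool: only the numeric targets are taken from the guideline, while the function name and data structure are hypothetical.

```python
# Minimal sketch of the first-6-hour resuscitation goals (Grade 1C).
# Thresholds come from the guideline text; everything else is illustrative.

def resuscitation_goals_met(cvp_mmhg, map_mmhg, urine_ml_kg_hr,
                            scvo2_pct=None, svo2_pct=None,
                            mechanically_ventilated=False):
    """Return a dict of goal -> bool for the initial resuscitation targets."""
    # The CVP target rises to 12-15 mm Hg with mechanical ventilation or
    # pre-existing decreased ventricular compliance.
    cvp_lo = 12 if mechanically_ventilated else 8
    goals = {
        "CVP": cvp_mmhg >= cvp_lo,
        "MAP": map_mmhg >= 65,
        "urine_output": urine_ml_kg_hr >= 0.5,
    }
    # Central venous (ScvO2 >= 70%) and mixed venous (SvO2 >= 65%)
    # saturation targets were judged equivalent by the panel.
    if scvo2_pct is not None:
        goals["venous_O2_sat"] = scvo2_pct >= 70
    elif svo2_pct is not None:
        goals["venous_O2_sat"] = svo2_pct >= 65
    return goals

print(resuscitation_goals_met(cvp_mmhg=10, map_mmhg=70,
                              urine_ml_kg_hr=0.6, scvo2_pct=72))
```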
In mechanically ventilated patients or patients with known pre-existing decreased ventricular compliance, a higher target CVP of 12–15 mm Hg is recommended to account for the impediment to filling [17]. Similar consideration may be warranted in circumstances of increased abdominal pressure or diastolic dysfunction [18]. Elevated central venous pressures may also be seen with pre-existing, clinically significant pulmonary artery hypertension. Although the cause of tachycardia in septic patients may be multifactorial, a decrease in an elevated pulse rate with fluid resuscitation is often a useful marker of improving intravascular filling. Recently published observational studies have demonstrated an association between good clinical outcome in septic shock and MAP ≥ 65 mm Hg as well as a central venous oxygen saturation (ScvO2, measured in the superior vena cava, either intermittently or continuously) of ≥ 70% [19]. Many recent studies support the value of early protocolized resuscitation in severe sepsis and sepsis-induced tissue hypoperfusion [20–25]. Studies of patients with shock indicate that SvO2 runs 5–7% lower than ScvO2 [26] and that an early goal-directed resuscitation protocol can be established in a non-research general practice venue [27]. There are recognized limitations to ventricular filling pressure estimates as surrogates for fluid resuscitation [28, 29]; however, measurement of CVP is currently the most readily obtainable target for fluid resuscitation. There may be advantages to targeting fluid resuscitation to flow, to volumetric indices, and perhaps even to microcirculatory changes [30–33]. Technologies now exist that allow measurement of flow at the bedside [34, 35] and are already available for early ICU resuscitation; future goals should be to make these technologies more accessible during the critical early resuscitation period and to validate their utility through research.

We suggest that, during the first 6 hrs of resuscitation of severe sepsis or septic shock, if the ScvO2 or SvO2 target of 70% or 65%, respectively, is not achieved with fluid resuscitation to the CVP target, then transfusion of packed red blood cells to achieve a hematocrit of ≥ 30% and/or administration of a dobutamine infusion (up to a maximum of 20 μg.kg−1.min−1) be utilized to achieve this goal (Grade 2C).

Rationale. The protocol used in the study cited previously targeted an increase in ScvO2 to ≥ 70% [16]. This was achieved by sequential institution of initial fluid resuscitation, then packed red blood cells, and then dobutamine. This protocol was associated with an improvement in survival. Based on bedside clinical assessment and personal preference, when fluid resuscitation is believed to be already adequate, a clinician may deem either blood transfusion (if the hematocrit is less than 30%) or dobutamine the best initial choice to increase oxygen delivery and thereby elevate ScvO2. The design of the aforementioned trial did not allow assessment of the relative contribution of these two components (i.e., increasing O2 content or increasing cardiac output) to the achievement of improved outcome.

B. Diagnosis

We recommend obtaining appropriate cultures before antimicrobial therapy is initiated if such cultures do not cause significant delay in antibiotic administration.
To optimize identification of causative organisms, we recommend that at least two blood cultures be obtained prior to antibiotics, with at least one drawn percutaneously and one drawn through each vascular access device, unless the device was recently (< 48 h) inserted. Cultures of other sites (preferably quantitative where appropriate), such as urine, cerebrospinal fluid, wounds, respiratory secretions, or other body fluids that may be the source of infection, should also be obtained before antibiotic therapy if not associated with significant delay in antibiotic administration (Grade 1C).

Rationale. Although sampling should not delay timely administration of antibiotics in patients with severe sepsis (e.g., lumbar puncture in suspected meningitis), obtaining appropriate cultures prior to their administration is essential to confirm infection and the responsible pathogen(s), and to allow de-escalation of antibiotic therapy after receipt of the susceptibility profile. Samples can be refrigerated or frozen if processing cannot be performed immediately; otherwise, immediate transport to a microbiology laboratory is necessary. Because rapid sterilization of blood cultures can occur within a few hours after the first antibiotic dose, obtaining those cultures before starting therapy is essential if the causative organism is to be identified. Two or more blood cultures are recommended [36]. In patients with indwelling catheters (for > 48 h), at least one blood culture should be drawn through each lumen of each vascular access device. Obtaining blood cultures peripherally and through a vascular access device is an important strategy: if the same organism is recovered from both cultures, the likelihood that the organism is causing the severe sepsis is enhanced. In addition, if the culture drawn through the vascular access device becomes positive much earlier than the peripheral blood culture (i.e., > 2 hrs earlier), the data support the concept that the vascular access device is the source of the infection [37]. Quantitative cultures of catheter and peripheral blood are also useful for determining whether the catheter is the source of infection. The volume of blood drawn with the culture tube should be at least 10 mL [38]. Quantitative (or semi-quantitative) cultures of respiratory tract secretions are recommended for the diagnosis of ventilator-associated pneumonia [39]. Gram stain can be useful, in particular for respiratory tract specimens, to help define the microorganisms to be targeted. The potential role of biomarkers in the diagnosis of infection in patients presenting with severe sepsis remains undefined at present. The procalcitonin level, although often useful, is problematic in patients with an acute inflammatory pattern from other causes (e.g., post-operative state, shock) [40]. In the near future, rapid diagnostic methods (polymerase chain reaction, micro-arrays) might prove extremely helpful for quicker identification of pathogens and major antimicrobial resistance determinants [41].

We recommend that imaging studies be performed promptly in an attempt to confirm a potential source of infection. Sampling of potential sources of infection should occur as they are identified; however, some patients may be too unstable to warrant certain invasive procedures or transport outside of the ICU. Bedside studies, such as ultrasound, are useful in these circumstances (Grade 1C).

Rationale.
Diagnostic studies may identify a source of infection that requires removal of a foreign body or drainage to maximize the likelihood of a satisfactory response to therapy. However, even in the most organized and well-staffed healthcare facilities, transport of patients can be dangerous, as can placing patients in outside-unit imaging devices that are difficult to access and monitor. Balancing risk and benefit is therefore mandatory in those settings.

C. Antibiotic Therapy

We recommend that intravenous antibiotic therapy be started as early as possible, and within the first hour of recognition of septic shock (1B) and severe sepsis without septic shock (1D). Appropriate cultures should be obtained before initiating antibiotic therapy but should not prevent prompt administration of antimicrobial therapy (Grade 1D).

Rationale. Establishing vascular access and initiating aggressive fluid resuscitation are the first priorities when managing patients with severe sepsis or septic shock. However, prompt infusion of antimicrobial agents should also be a priority and may require additional vascular access ports [42, 43]. In the presence of septic shock, each hour of delay in achieving administration of effective antibiotics is associated with a measurable increase in mortality [42]. If antimicrobial agents cannot be mixed and delivered promptly from the pharmacy, establishing a supply of premixed antibiotics for such urgent situations is an appropriate strategy for ensuring prompt administration. In choosing the antimicrobial regimen, clinicians should be aware that some antimicrobial agents have the advantage of bolus administration, while others require a lengthy infusion. Thus, if vascular access is limited and many different agents must be infused, bolus drugs may offer an advantage.

We recommend that initial empirical anti-infective therapy include one or more drugs that have activity against all likely pathogens (bacterial and/or fungal) and that penetrate in adequate concentrations into the presumed source of sepsis (Grade 1B).

Rationale. The choice of empirical antibiotics depends on complex issues related to the patient's history, including drug intolerances, underlying disease, the clinical syndrome, and the susceptibility patterns of pathogens in the community, in the hospital, and of pathogens previously documented to colonize or infect the patient. There is an especially wide range of potential pathogens for neutropenic patients. Recently used antibiotics should generally be avoided. Clinicians should be cognizant of the virulence and growing prevalence of oxacillin- (methicillin-) resistant Staphylococcus aureus (ORSA or MRSA) in some communities and healthcare-associated settings (especially in the United States) when choosing empiric therapy. If the prevalence is significant, and in consideration of the virulence of this organism, empiric therapy adequate for this pathogen is warranted. Clinicians should also consider whether Candida is a likely pathogen when choosing initial therapy. When deemed warranted, the selection of empiric antifungal therapy (e.g., fluconazole, amphotericin B, or an echinocandin) should be tailored to the local pattern of the most prevalent Candida species and any prior administration of azole drugs [44]. Risk factors for candidemia should also be considered when choosing initial therapy.
Because patients with severe sepsis or septic shock have little margin for error in the choice of therapy, the initial selection of antimicrobial therapy should be broad enough to cover all likely pathogens. There is ample evidence that failure to initiate appropriate therapy (i.e., therapy with activity against the pathogen that is subsequently identified as the causative agent) correlates with increased morbidity and mortality [45–48]. Patients with severe sepsis or septic shock warrant broad-spectrum therapy until the causative organism and its antibiotic susceptibilities are defined. Restriction of antibiotics as a strategy to reduce the development of antimicrobial resistance or to reduce cost is not an appropriate initial strategy in this patient population. All patients should receive a full loading dose of each antimicrobial. However, patients with sepsis or septic shock often have abnormal renal or hepatic function and may have abnormal volumes of distribution due to aggressive fluid resuscitation. Drug serum concentration monitoring can be useful in an ICU setting for those drugs that can be measured promptly. An experienced physician or clinical pharmacist should be consulted to ensure that serum concentrations are attained that maximize efficacy and minimize toxicity [49–52].

We recommend that the antimicrobial regimen be reassessed daily to optimize activity, to prevent the development of resistance, to reduce toxicity, and to reduce costs (Grade 1C).

Rationale. Although restriction of antibiotics as a strategy to reduce the development of antimicrobial resistance or to reduce cost is not an appropriate initial strategy in this patient population, once the causative pathogen has been identified, it may become apparent that none of the empiric drugs offers optimal therapy; i.e., there may be another drug proven to produce a superior clinical outcome, which should therefore replace the empiric agents. Narrowing the spectrum of antibiotic coverage and reducing the duration of antibiotic therapy will reduce the likelihood that the patient will develop superinfection with pathogenic or resistant organisms such as Candida species, Clostridium difficile, or vancomycin-resistant Enterococcus faecium. However, the desire to minimize superinfections and other complications should not take precedence over the need to give the patient an adequate course of therapy to cure the infection that caused the severe sepsis or septic shock.

We suggest combination therapy for patients with known or suspected Pseudomonas infections as a cause of severe sepsis (Grade 2D).

We suggest combination empiric therapy for neutropenic patients with severe sepsis (Grade 2D).

When used empirically in patients with severe sepsis, we suggest that combination therapy not be administered for more than 3 to 5 days. De-escalation to the most appropriate single therapy should be performed as soon as the susceptibility profile is known (Grade 2D).

Rationale. Although no study or meta-analysis has convincingly demonstrated that combination therapy produces a superior clinical outcome for individual pathogens in a particular patient group, combination therapies do produce in vitro synergy against pathogens in some models (although such synergy is difficult to define and predict). In some clinical scenarios, such as the two above, combination therapies are biologically plausible and are likely clinically useful even if evidence has not demonstrated improved clinical outcome [53–56].
Combination therapy for suspected or known Pseudomonas pending sensitivities increases the likelihood that at least one drug is effective against that strain and positively affects outcome [57].

We recommend that the duration of therapy typically be 7–10 days; longer courses may be appropriate in patients who have a slow clinical response, undrainable foci of infection, or immunologic deficiencies, including neutropenia (Grade 1D).

If the presenting clinical syndrome is determined to be due to a noninfectious cause, we recommend antimicrobial therapy be stopped promptly to minimize the likelihood that the patient will become infected with an antibiotic-resistant pathogen or will develop a drug-related adverse effect (Grade 1D).

Rationale. Clinicians should be cognizant that blood cultures will be negative in more than 50% of cases of severe sepsis or septic shock, yet many of these cases are very likely caused by bacteria or fungi. Thus, the decisions to continue, narrow, or stop antimicrobial therapy must be made on the basis of clinician judgment and clinical information.

D. Source Control

We recommend that a specific anatomic diagnosis of infection requiring consideration for emergent source control (for example, necrotizing fasciitis, diffuse peritonitis, cholangitis, or intestinal infarction) be sought and diagnosed or excluded as rapidly as possible (Grade 1C) and within the first 6 hours following presentation (Grade 1D).

We further recommend that all patients presenting with severe sepsis be evaluated for the presence of a focus of infection amenable to source control measures, specifically the drainage of an abscess or local focus of infection, the debridement of infected necrotic tissue, the removal of a potentially infected device, or the definitive control of a source of ongoing microbial contamination (Grade 1C) (see Appendix A for examples of potential sites needing source control).

We suggest that when infected peripancreatic necrosis is identified as a potential source of infection, definitive intervention is best delayed until adequate demarcation of viable and non-viable tissues has occurred (Grade 2B).

We recommend that when source control is required, the effective intervention associated with the least physiologic insult be employed (e.g., percutaneous rather than surgical drainage of an abscess) (Grade 1D).

We recommend that when intravascular access devices are a possible source of severe sepsis or septic shock, they be promptly removed after other vascular access has been established (Grade 1C).

Rationale. The principles of source control in the management of sepsis include rapid diagnosis of the specific site of infection and identification of a focus of infection amenable to source control measures (specifically the drainage of an abscess, the debridement of infected necrotic tissue, the removal of a potentially infected device, and the definitive control of a source of ongoing microbial contamination) [58]. Foci of infection readily amenable to source control measures include an intra-abdominal abscess or gastrointestinal perforation, cholangitis or pyelonephritis, intestinal ischemia or necrotizing soft tissue infection, and other deep space infections such as an empyema or septic arthritis. Such infectious foci should be controlled as soon as possible following successful initial resuscitation [59], accomplishing the source control objective with the least physiologic upset possible (e.g., percutaneous rather than surgical drainage of an abscess [60], or endoscopic rather than surgical drainage of the biliary tree), and removing intravascular access devices that are potentially the source of severe sepsis or septic shock promptly after establishing other vascular access [61, 62]. A randomized, controlled trial comparing early vs. delayed surgical intervention for peripancreatic necrosis showed better outcomes with a delayed approach [63]. However, areas of uncertainty exist, such as definitive documentation of infection and the appropriate length of delay. The selection of optimal source control methods must weigh the benefits and risks of the specific intervention as well as the risks of transfer [64]. Source control interventions may cause further complications such as bleeding, fistulas, or inadvertent organ injury. Surgical intervention should be considered when lesser interventional approaches are inadequate or when diagnostic uncertainty persists despite radiological evaluation. Specific clinical situations require consideration of the available choices, the patient's preferences, and the clinician's expertise.

E. Fluid Therapy

We recommend fluid resuscitation with either natural/artificial colloids or crystalloids. There is no evidence-based support for one type of fluid over another (Grade 1B).

Rationale. The SAFE study indicated that albumin administration was safe and as effective as crystalloid [65]. There was a nonsignificant decrease in mortality rates with the use of colloid in a subset analysis of septic patients (p = 0.09). Previous meta-analyses of small studies of ICU patients had demonstrated no difference between crystalloid and colloid fluid resuscitation [66–68]. Although administration of hydroxyethyl starch may increase the risk of acute renal failure in patients with sepsis, variable findings preclude definitive recommendations [69, 70]. As the volume of distribution is much larger for crystalloids than for colloids, resuscitation with crystalloids requires more fluid to achieve the same end points and results in more edema. Crystalloids are less expensive.

We recommend that fluid resuscitation initially target a CVP of at least 8 mm Hg (12 mm Hg in mechanically ventilated patients). Further fluid therapy is often required (Grade 1C).

We recommend that a fluid challenge technique be applied, wherein fluid administration is continued as long as the hemodynamic improvement (e.g., arterial pressure, heart rate, urine output) continues (Grade 1D).

We recommend that a fluid challenge in patients with suspected hypovolemia be started with at least 1000 mL of crystalloids or 300–500 mL of colloids over 30 min. More rapid administration and greater amounts of fluid may be needed in patients with sepsis-induced tissue hypoperfusion (see initial resuscitation recommendations) (Grade 1D).

We recommend that the rate of fluid administration be reduced substantially when cardiac filling pressures (CVP or pulmonary artery balloon-occluded pressure) increase without concurrent hemodynamic improvement (Grade 1D).

Rationale. A fluid challenge must be clearly distinguished from simple fluid administration; it is a technique in which large amounts of fluid are administered over a limited period of time under close monitoring to evaluate the patient's response and avoid the development of pulmonary edema.
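The fluid challenge technique just described amounts to a simple observe-and-repeat loop. Below is a minimal, illustrative Python sketch under stated assumptions: the bolus sizes come from the guideline, but the stopping rule (a crude MAP-based proxy for "hemodynamic improvement") and all names are simplified assumptions, not a bedside algorithm.

```python
# Illustrative sketch of the fluid-challenge logic (Grade 1D recommendations).
# Bolus sizes are the guideline's; the stopping rule is a simplification.

def fluid_challenge(readings, fluid="crystalloid"):
    """readings: iterable of (cvp_mmhg, map_mmhg) observed after each bolus.
    Returns total volume given before the stopping rule triggers."""
    bolus_ml = 1000 if fluid == "crystalloid" else 400   # 300-500 mL for colloids
    total_ml = 0
    prev_cvp, prev_map = None, None
    for cvp, map_ in readings:
        if prev_cvp is not None:
            improved = map_ > prev_map      # crude proxy for hemodynamic response
            filling_up = cvp > prev_cvp
            # Reduce/stop when filling pressures rise without improvement.
            if filling_up and not improved:
                break
        total_ml += bolus_ml
        prev_cvp, prev_map = cvp, map_
    return total_ml

# Example: MAP improves for two boluses, then CVP climbs with no MAP gain.
print(fluid_challenge([(6, 58), (8, 63), (11, 67), (14, 67)]))  # -> 3000
```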
The degree of intravascular volume deficit in patients with severe sepsis varies. With venodilation and ongoing capillary leak, most patients require continuing aggressive fluid resuscitation during the first 24 hrs of management. Input is typically much greater than output, and the input/output ratio is of no utility in judging fluid resuscitation needs during this period.

F. Vasopressors

We recommend that mean arterial pressure (MAP) be maintained ≥ 65 mm Hg (Grade 1C).

Rationale. Vasopressor therapy is required to sustain life and maintain perfusion in the face of life-threatening hypotension, even when hypovolemia has not yet been resolved. Below a certain mean arterial pressure, autoregulation in various vascular beds can be lost, and perfusion can become linearly dependent on pressure. Thus, some patients may require vasopressor therapy to achieve a minimal perfusion pressure and maintain adequate flow [71, 72]. The titration of norepinephrine to a MAP as low as 65 mm Hg has been shown to preserve tissue perfusion [72]. In addition, pre-existing comorbidities should be considered when selecting the most appropriate MAP target. For example, a MAP of 65 mm Hg might be too low in a patient with severe uncontrolled hypertension, while in a young, previously normotensive patient, a lower MAP might be adequate. Supplementing end points such as blood pressure with assessments of regional and global perfusion, such as blood lactate concentrations and urine output, is important. Adequate fluid resuscitation is a fundamental aspect of the hemodynamic management of patients with septic shock and should ideally be achieved before vasopressors and inotropes are used, but using vasopressors early as an emergency measure in patients with severe shock is frequently necessary. When that occurs, great effort should be directed to weaning vasopressors with continuing fluid resuscitation.

We recommend either norepinephrine or dopamine as the first-choice vasopressor agent to correct hypotension in septic shock (administered through a central catheter as soon as one is available) (Grade 1C).

We suggest that epinephrine, phenylephrine, or vasopressin not be administered as the initial vasopressor in septic shock (Grade 2C). Vasopressin 0.03 units/min may be subsequently added to norepinephrine with the anticipation of an effect equivalent to norepinephrine alone.

We suggest that epinephrine be the first alternative agent in septic shock that is poorly responsive to norepinephrine or dopamine (Grade 2B).

Rationale. There is no high-quality primary evidence to recommend one catecholamine over another. Much literature exists contrasting the physiologic effects of vasopressors and combined inotrope/vasopressors in septic shock [73–85]. Human and animal studies suggest some advantages of norepinephrine and dopamine over epinephrine (the latter with the potential for tachycardia as well as disadvantageous effects on the splanchnic circulation and hyperlactatemia) and phenylephrine (decrease in stroke volume). There is, however, no clinical evidence that epinephrine results in worse outcomes, and it should be the first chosen alternative to dopamine or norepinephrine. Phenylephrine is the adrenergic agent least likely to produce tachycardia, but as a pure vasopressor it would be expected to decrease stroke volume. Dopamine increases mean arterial pressure and cardiac output, primarily due to an increase in stroke volume and heart rate. Norepinephrine increases mean arterial pressure through its vasoconstrictive effects, with little change in heart rate and less increase in stroke volume compared with dopamine.
Either may be used as a first-line agent to correct hypotension in sepsis. Norepinephrine is more potent than dopamine and may be more effective at reversing hypotension in patients with septic shock. Dopamine may be particularly useful in patients with compromised systolic function but causes more tachycardia and may be more arrhythmogenic [86]. It may also influence the endocrine response via the hypothalamic-pituitary axis and have immunosuppressive effects. Vasopressin levels in septic shock have been reported to be lower than anticipated for a shock state [87]. Low doses of vasopressin may be effective in raising blood pressure in patients refractory to other vasopressors and may have other potential physiologic benefits [88–93]. Terlipressin has similar effects but is longer lasting [94]. Studies show that vasopressin concentrations are elevated in early septic shock, but with continued shock the concentration decreases to the normal range in the majority of patients between 24 and 48 hrs [95]. This has been called “relative vasopressin deficiency” because, in the presence of hypotension, vasopressin would be expected to be elevated. The significance of this finding is unknown. The recent VASST trial, a randomized, controlled trial comparing norepinephrine alone to norepinephrine plus vasopressin at 0.03 units/min, showed no difference in outcome in the intent-to-treat population. An a priori defined subgroup analysis showed that the survival of patients receiving less than 15 μg/min norepinephrine at the time of randomization was better with vasopressin. It should be noted, however, that the pre-trial rationale for this stratification was based on exploring potential benefit in the population requiring 15 μg/min or more of norepinephrine. Higher doses of vasopressin have been associated with cardiac, digital, and splanchnic ischemia and should be reserved for situations in which alternative vasopressors have failed [96]. Cardiac output measurement to allow maintenance of normal or elevated flow is desirable when these pure vasopressors are instituted.

We recommend that low-dose dopamine not be used for renal protection (Grade 1A).

Rationale. A large randomized trial and a meta-analysis comparing low-dose dopamine to placebo found no difference in either primary outcomes (peak serum creatinine, need for renal replacement, urine output, time to recovery of normal renal function) or secondary outcomes (survival to either ICU or hospital discharge, ICU stay, hospital stay, arrhythmias) [97, 98]. Thus, the available data do not support administration of low doses of dopamine solely to maintain renal function.

We recommend that all patients requiring vasopressors have an arterial line placed as soon as practical if resources are available (Grade 1D).

Rationale. In shock states, estimation of blood pressure using a cuff is commonly inaccurate; use of an arterial cannula provides a more appropriate and reproducible measurement of arterial pressure. These catheters also allow continuous analysis so that decisions regarding therapy can be based on immediate and reproducible blood pressure information.

G. Inotropic Therapy

We recommend that a dobutamine infusion be administered in the presence of myocardial dysfunction, as suggested by elevated cardiac filling pressures and low cardiac output (Grade 1C).

We recommend against the use of a strategy to increase cardiac index to predetermined supranormal levels (Grade 1B).

Rationale.
Dobutamine is the first-choice inotrope for patients with measured or suspected low cardiac output in the presence of adequate left ventricular filling pressure (or clinical assessment of adequate fluid resuscitation) and adequate mean arterial pressure. Septic patients who remain hypotensive after fluid resuscitation may have low, normal, or increased cardiac outputs. Therefore, treatment with a combined inotrope/vasopressor such as norepinephrine or dopamine is recommended if cardiac output is not measured. When the capability exists for monitoring cardiac output in addition to blood pressure, a vasopressor such as norepinephrine may be used separately to target specific levels of mean arterial pressure and cardiac output. Two large prospective clinical trials that included critically ill ICU patients with severe sepsis failed to demonstrate benefit from increasing oxygen delivery to supranormal targets by use of dobutamine [99, 100]. These studies did not specifically target patients with severe sepsis and did not target the first 6 hrs of resuscitation. The first 6 hrs of resuscitation of sepsis-induced hypoperfusion need to be treated separately from the later stages of severe sepsis (see initial resuscitation recommendations).

H. Corticosteroids

We suggest that intravenous hydrocortisone be given only to adult septic shock patients after blood pressure has been identified to be poorly responsive to fluid resuscitation and vasopressor therapy (Grade 2C).

Rationale. One French multicenter, randomized, controlled trial (RCT) of patients in vasopressor-unresponsive septic shock (hypotension despite fluid resuscitation and vasopressors) showed significant shock reversal and reduction of the mortality rate in patients with relative adrenal insufficiency (defined as a post-adrenocorticotropic hormone (ACTH) cortisol increase of 9 μg/dL or less) [101]. Two additional smaller RCTs also showed significant effects on shock reversal with steroid therapy [102, 103]. However, a recent large European multicenter trial (CORTICUS), which has been presented in abstract form but not yet published, failed to show a mortality benefit with steroid therapy of septic shock [104]. CORTICUS did show faster resolution of septic shock in patients who received steroids; the ACTH test (responders versus nonresponders) did not predict this faster resolution. Importantly, unlike the French trial, which only enrolled shock patients with blood pressure unresponsive to vasopressor therapy, the CORTICUS study included patients with septic shock regardless of how the blood pressure responded to vasopressors. Although corticosteroids do appear to promote shock reversal, the lack of a clear improvement in mortality, coupled with known side effects of steroids such as an increased risk of infection and myopathy, generally tempered enthusiasm for their broad use. Thus, there was broad agreement that the recommendation should be downgraded from the previous guidelines (Appendix B). There was considerable discussion and consideration by the committee of the option of encouraging use in those patients whose blood pressure was unresponsive to fluids and vasopressors while strongly discouraging use in subjects whose shock responded well to fluids and pressors. However, this more complex set of recommendations was rejected in favor of the above single recommendation (see Appendix B).
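Taken together, the corticosteroid recommendations in this section reduce to a small amount of conditional logic. The following is a minimal, illustrative Python sketch only, with hypothetical names; the conditions and dose ceiling are taken from the graded recommendations, and it is in no sense a treatment algorithm.

```python
# Illustrative-only sketch of the corticosteroid recommendations
# (2C suggestion and related grades); all names are hypothetical.

def consider_hydrocortisone(septic_shock, bp_responsive_to_fluids_and_pressors,
                            vasopressors_still_required):
    if not septic_shock or bp_responsive_to_fluids_and_pressors:
        # No steroids for sepsis without shock, or when shock responds to
        # fluids and vasopressors (absent endocrine/steroid-history indications).
        return "no hydrocortisone"
    if not vasopressors_still_required:
        return "wean steroid therapy"            # 2D suggestion
    # Hydrocortisone preferred to dexamethasone; keep dose < 300 mg/day (1A).
    return "hydrocortisone, < 300 mg/day"

print(consider_hydrocortisone(septic_shock=True,
                              bp_responsive_to_fluids_and_pressors=False,
                              vasopressors_still_required=True))
```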
We suggest that the ACTH stimulation test not be used to identify the subset of adults with septic shock who should receive hydrocortisone (Grade 2B).

Rationale. Although one study suggested that those who did not respond to ACTH with a brisk surge in cortisol (failure to achieve a > 9 μg/dL increase in cortisol 30–60 mins post-ACTH administration) were more likely to benefit from steroids than those who did respond, the overall trial population appeared to benefit regardless of the ACTH result, and the observation of a potential interaction between steroid use and the ACTH test was not statistically significant [101]. Furthermore, there was no evidence of this distinction between responders and nonresponders in a recent multicenter trial [104]. Commonly used cortisol immunoassays measure total cortisol (protein-bound and free), while free cortisol is the pertinent measurement. The relationship between free and total cortisol varies with serum protein concentration. When compared with a reference method (mass spectrometry), cortisol immunoassays may over- or underestimate the actual cortisol level, affecting the assignment of patients as responders or nonresponders [105]. Although the clinical significance is not clear, it is now recognized that etomidate, when used for induction for intubation, will suppress the HPA axis [106].

We suggest that patients with septic shock not receive dexamethasone if hydrocortisone is available (Grade 2B).

Rationale. Although often proposed for use until an ACTH stimulation test can be administered, we no longer suggest an ACTH test in this clinical situation (see above). Furthermore, dexamethasone can lead to immediate and prolonged suppression of the HPA axis after administration [107].

We suggest the daily addition of oral fludrocortisone (50 μg) if hydrocortisone is not available and the steroid that is substituted has no significant mineralocorticoid activity. Fludrocortisone is considered optional if hydrocortisone is used (Grade 2C).

Rationale. One study added 50 μg of fludrocortisone orally [101]. Since hydrocortisone has intrinsic mineralocorticoid activity, there is controversy as to whether fludrocortisone should be added.

We suggest clinicians wean the patient from steroid therapy when vasopressors are no longer required (Grade 2D).

Rationale. There has been no comparative study between fixed-duration and clinically guided regimens, or between tapering and abrupt cessation of steroids. Three RCTs used a fixed-duration protocol for treatment [101, 103, 104], and in two RCTs therapy was decreased after shock resolution [102, 108]. In four RCTs steroids were tapered over several days [102–104, 108], and in two RCTs [101, 109] steroids were withdrawn abruptly. One cross-over study showed hemodynamic and immunologic rebound effects after abrupt cessation of corticosteroids [110]. It remains uncertain whether outcome is affected by tapering of steroids.

We recommend that doses of corticosteroids comparable to > 300 mg hydrocortisone daily not be used in severe sepsis or septic shock for the purpose of treating septic shock (Grade 1A).

Rationale. Two randomized, prospective clinical trials and a meta-analysis concluded that, for therapy of severe sepsis or septic shock, high-dose corticosteroid therapy is ineffective or harmful [111–113]. Reasons to maintain higher doses of corticosteroid for medical conditions other than septic shock may exist.

We recommend that corticosteroids not be administered for the treatment of sepsis in the absence of shock.
There is, however, no contraindication to continuing maintenance steroid therapy or to using stress-dose steroids if the patient's endocrine or corticosteroid administration history warrants it (Grade 1D).

Rationale. No studies exist that specifically target severe sepsis in the absence of shock and offer support for the use of stress doses of steroids in this patient population. Steroids may be indicated in the presence of a prior history of steroid therapy or adrenal dysfunction. A recent preliminary study of stress-dose steroids in community-acquired pneumonia is encouraging but needs confirmation [114].

I. Recombinant Human Activated Protein C (rhAPC)

We suggest that adult patients with sepsis-induced organ dysfunction associated with a clinical assessment of high risk of death, most of whom will have APACHE II ≥ 25 or multiple organ failure, receive rhAPC if there are no contraindications (Grade 2B, except for patients within 30 days of surgery, where it is Grade 2C). Relative contraindications should also be considered in decision making.

We recommend that adult patients with severe sepsis and low risk of death, most of whom will have APACHE II < 20 or one organ failure, not receive rhAPC (Grade 1A).

Rationale. The evidence concerning the use of rhAPC in adults is primarily based on two randomized controlled trials (RCTs): PROWESS (1,690 adult patients, stopped early for efficacy) [115] and ADDRESS (stopped early for futility) [116]. Additional safety information comes from an open-label observational study, ENHANCE [117]. The ENHANCE trial also suggested that early administration of rhAPC was associated with better outcomes. PROWESS documented a 6.1% absolute reduction in total mortality, corresponding to a relative risk reduction (RRR) of 19.4% (95% CI 6.6–30.5%) and a number needed to treat (NNT) of 16 (i.e., 1/0.061) [115]. Controversy associated with the results focused on a number of subgroup analyses. Subgroup analyses have the potential to mislead owing to the absence of intent-to-treat conditions, sampling bias, and selection error [118]. The analyses suggested increasing absolute and relative risk reduction with greater risk of death, using both higher APACHE II scores and a greater number of organ failures [119]. This led to drug approval in Europe for patients with a high risk of death (such as APACHE II ≥ 25) and more than one organ failure. The ADDRESS trial involved 2,613 patients judged to have a low risk of death at the time of enrollment; 28-day mortality from all causes was 17% on placebo vs. 18.5% on APC (relative risk (RR) 1.08, 95% CI 0.92–1.28) [116]. Again, debate focused on subgroup analyses: analyses restricted to the small subgroups of patients with APACHE II scores over 25, or with more than one organ failure, failed to show benefit; however, these patient groups also had lower mortality than in PROWESS. The relative risk reduction of death was numerically lower in the subgroup of patients with recent surgery (n = 502) in the PROWESS trial (30.7% placebo vs. 27.8% APC) [119] when compared with the overall study population (30.8% placebo vs. 24.7% APC) [115]. In the ADDRESS trial, patients with recent surgery and single organ dysfunction who received APC had significantly higher 28-day mortality rates (20.7% vs. 14.1%, p = 0.03, n = 635) [116]. Serious adverse events did not differ between the studies [115–117], with the exception of serious bleeding, which occurred more often in the patients treated with APC: 2% vs. 3.5% (PROWESS; p = 0.06) [115]; 2.2% vs.
3.9% (ADDRESS; p < 0.01) [116]; 6.5% (ENHANCE, open label) [117]. The pediatric trial and its implications are discussed in the pediatric considerations section of this manuscript (see Appendix C for absolute contraindications to the use of rhAPC and prescribing information for relative contraindications). Intracranial hemorrhage (ICH) occurred in the PROWESS trial in 0.1% (placebo) and 0.2% (APC) of patients (n.s.) [115], in the ADDRESS trial in 0.4% (placebo) vs. 0.5% (APC) (n.s.) [116], and in ENHANCE in 1.5% [117]. Registry studies of rhAPC report higher bleeding rates than the randomized controlled trials, suggesting that the risk of bleeding in actual practice may be greater than reported in PROWESS and ADDRESS [120, 121]. The two RCTs in adult patients were methodologically strong, precise, and provide direct evidence regarding death rates. The conclusions are limited, however, by an inconsistency that is not adequately resolved by subgroup analyses (thus the designation of moderate-quality evidence). The results, however, consistently fail to show benefit for the subgroup of patients at lower risk of death and consistently show increases in serious bleeding. The RCT in pediatric severe sepsis failed to show benefit and has no important limitations; thus, for low-risk and pediatric patients, we rate the evidence as high quality. For adult use there is a probable mortality reduction in patients with a clinical assessment of high risk of death, most of whom will have APACHE II ≥ 25 or multiple organ failure. There is likely no benefit in patients with a low risk of death, most of whom will have APACHE II < 20 or single organ dysfunction. The effects in patients with more than one organ failure but APACHE II < 25 are unclear, and in that circumstance one may use clinical assessment of the risk of death and the number of organ failures to support the decision. There is an increased risk of bleeding with administration of rhAPC, which may be higher in surgical patients and in the context of invasive procedures. The decision on utilization depends upon assessing the likelihood of mortality reduction versus increases in bleeding risk and cost (see Appendix D for the nominal committee vote on the recommendation for rhAPC). A European regulatory-mandated randomized controlled trial of rhAPC vs. placebo in patients with septic shock is now ongoing [122].

J. Blood Product Administration

Once tissue hypoperfusion has resolved, and in the absence of extenuating circumstances such as myocardial ischemia, severe hypoxemia, acute hemorrhage, cyanotic heart disease, or lactic acidosis (see recommendations for initial resuscitation), we recommend that red blood cell transfusion occur when hemoglobin decreases to < 7.0 g/dL (< 70 g/L) to target a hemoglobin of 7.0–9.0 g/dL (70–90 g/L) in adults (Grade 1B).

Rationale. Although the optimum hemoglobin for patients with severe sepsis has not been specifically investigated, the Transfusion Requirements in Critical Care trial suggested that a hemoglobin of 7–9 g/dL (70–90 g/L), when compared with 10–12 g/dL (100–120 g/L), was not associated with an increased mortality rate in adults [123]. Red blood cell transfusion in septic patients increases oxygen delivery but does not usually increase oxygen consumption [124–126]. This transfusion threshold of 7 g/dL (70 g/L) contrasts with the early goal-directed resuscitation protocol that uses a target hematocrit of 30% in patients with low ScvO2 (measured in the superior vena cava) during the first 6 hrs of resuscitation of septic shock.
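The contrast drawn above between the restrictive transfusion threshold and the early goal-directed hematocrit target can be made concrete with a short, illustrative Python sketch. The numeric thresholds are the guideline's (Hb < 7.0 g/dL, i.e., < 70 g/L, outside the first 6 hrs; hematocrit ≥ 30% with low ScvO2 during early resuscitation); the function and its simplifications are assumptions for illustration only.

```python
# Sketch of the two transfusion contexts discussed above (Grade 1B and the
# early goal-directed protocol). Thresholds are from the text; the function
# itself is illustrative, not a clinical rule.

def transfuse_rbc(hb_g_dl, early_resuscitation=False,
                  scvo2_pct=None, hematocrit_pct=None):
    """Return True if red-cell transfusion is indicated by the stated rules."""
    if early_resuscitation and scvo2_pct is not None and hematocrit_pct is not None:
        # First 6 hrs of septic shock with low ScvO2: target hematocrit >= 30%.
        return scvo2_pct < 70 and hematocrit_pct < 30
    # Thereafter: transfuse when Hb < 7.0 g/dL (< 70 g/L; g/L = 10 x g/dL),
    # targeting 7.0-9.0 g/dL.
    return hb_g_dl < 7.0

print(transfuse_rbc(6.4))                              # True: below 7.0 g/dL
print(transfuse_rbc(8.0, early_resuscitation=True,
                    scvo2_pct=65, hematocrit_pct=27))  # True: low ScvO2, Hct < 30%
```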
We recommend that erythropoietin not be used as a specific treatment of anemia associated with severe sepsis, but it may be used when septic patients have other accepted reasons for administration of erythropoietin, such as renal failure-induced compromise of red blood cell production (Grade 1B).

Rationale. No specific information regarding erythropoietin use in septic patients is available, but clinical trials in critically ill patients show some decrease in red cell transfusion requirements with no effect on clinical outcome [127, 128]. The effect of erythropoietin in severe sepsis and septic shock would not be expected to be more beneficial than in other critical conditions. Patients with severe sepsis and septic shock may have coexisting conditions that do warrant the use of erythropoietin.

We suggest that fresh frozen plasma not be used to correct laboratory clotting abnormalities in the absence of bleeding or planned invasive procedures (Grade 2D).

Rationale. Although clinical studies have not assessed the impact of transfusion of fresh frozen plasma on outcomes in critically ill patients, professional organizations have recommended fresh frozen plasma for coagulopathy when there is a documented deficiency of coagulation factors (increased prothrombin time, international normalized ratio, or partial thromboplastin time) and the presence of active bleeding or before surgical or invasive procedures [129–131]. In addition, transfusion of fresh frozen plasma in nonbleeding patients with mild abnormalities of prothrombin time usually fails to correct the prothrombin time [132]. There are no studies to suggest that correction of more severe coagulation abnormalities benefits patients who are not bleeding.

We recommend against antithrombin administration for the treatment of severe sepsis and septic shock (Grade 1B).

Rationale. A phase III clinical trial of high-dose antithrombin did not demonstrate any beneficial effect on 28-day all-cause mortality in adults with severe sepsis and septic shock. High-dose antithrombin was associated with an increased risk of bleeding when administered with heparin [133]. Although a post hoc subgroup analysis of patients with severe sepsis and high risk of death showed better survival in patients receiving antithrombin, antithrombin cannot be recommended at this time until further clinical trials are performed [134].

In patients with severe sepsis, we suggest that platelets be administered when counts are < 5000/mm3 (5 × 10⁹/L), regardless of apparent bleeding. Platelet transfusion may be considered when counts are 5000–30,000/mm3 (5–30 × 10⁹/L) and there is a significant risk of bleeding. Higher platelet counts (≥ 50,000/mm3 (50 × 10⁹/L)) are typically required for surgery or invasive procedures (Grade 2D).

Rationale. Guidelines for transfusion of platelets are derived from consensus opinion and experience in patients undergoing chemotherapy. Recommendations take into account the etiology of thrombocytopenia, platelet dysfunction, risk of bleeding, and the presence of concomitant disorders [129, 131].

II. Supportive Therapy of Severe Sepsis

A. Mechanical Ventilation of Sepsis-Induced Acute Lung Injury (ALI)/Acute Respiratory Distress Syndrome (ARDS)

We recommend that clinicians target a tidal volume of 6 mL/kg (predicted) body weight in patients with ALI/ARDS (Grade 1B).

We recommend that plateau pressures be measured in patients with ALI/ARDS and that the initial upper limit goal for plateau pressures in a passively inflated patient be ≤ 30 cm H2O.
Chest wall compliance should be considered in the assessment of plateau pressure (Grade 1C).

Rationale. Over the past 10 yrs, several multicenter randomized trials have been performed to evaluate the effects of limiting inspiratory pressure through moderation of tidal volume [135–139]. These studies showed differing results, which may have been caused by differences between airway pressures in the treatment and control groups [135, 140]. The largest trial of a volume- and pressure-limited strategy showed a 9% decrease in all-cause mortality in patients with ALI or ARDS ventilated with tidal volumes of 6 mL/kg of predicted body weight (PBW), as opposed to 12 mL/kg, while aiming for a plateau pressure ≤ 30 cm H2O [135]. The use of lung-protective strategies for patients with ALI is supported by clinical trials and has been widely accepted, but the precise choice of tidal volume for an individual patient with ALI may require adjustment for such factors as the plateau pressure achieved, the level of PEEP chosen, the compliance of the thoracoabdominal compartment, and the vigor of the patient's breathing effort. Some clinicians believe it may be safe to ventilate with tidal volumes higher than 6 mL/kg PBW as long as the plateau pressure can be maintained ≤ 30 cm H2O [141, 142]. The validity of this ceiling value will depend on breathing effort, as those who are actively inspiring generate higher trans-alveolar pressures for a given plateau pressure than those who are passively inflated. Conversely, patients with very stiff chest walls may require plateau pressures higher than 30 cm H2O to meet vital clinical objectives. One retrospective study suggested that tidal volumes should be lowered even with plateau pressures ≤ 30 cm H2O [143]. An additional observational study suggested that knowledge of the plateau pressures was associated with lower plateau pressures; however, in this trial plateau pressure was not independently associated with mortality rates across a wide range of plateau pressures that bracketed 30 cm H2O [144]. The largest clinical trial to demonstrate a mortality benefit employed a lung-protective strategy that coupled pressure limitation with limited tidal volumes [135]. High tidal volumes coupled with high plateau pressures should be avoided in ALI/ARDS. Clinicians should use as a starting point the objective of reducing the tidal volume over 1–2 hrs from its initial value toward the goal of a “low” tidal volume (≈ 6 mL per kilogram of predicted body weight) achieved in conjunction with an end-inspiratory plateau pressure ≤ 30 cm H2O. If the plateau pressure remains > 30 cm H2O after reduction of the tidal volume to 6 mL/kg PBW, the tidal volume should be reduced further, to as low as 4 mL/kg PBW (see Appendix E for ARDSnet ventilator management and the formula used to calculate predicted body weight). No single mode of ventilation (pressure control, volume control, airway pressure release ventilation, high-frequency ventilation, etc.) has been consistently shown to be advantageous when compared with any other that respects the same principles of lung protection.
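A worked sketch of the tidal-volume arithmetic above may be helpful. It uses the widely published ARDSnet predicted body weight formula (the formula itself is referenced in Appendix E); the Python function names are hypothetical, and the sketch is illustrative only.

```python
# Worked sketch of the 6 mL/kg PBW tidal-volume target, using the
# ARDSnet predicted body weight (PBW) formula referenced in Appendix E.

def predicted_body_weight_kg(height_cm, male=True):
    # ARDSnet: PBW = 50 (male) or 45.5 (female) + 0.91 * (height_cm - 152.4)
    base = 50.0 if male else 45.5
    return base + 0.91 * (height_cm - 152.4)

def target_tidal_volume_ml(height_cm, male=True, ml_per_kg=6):
    # Start at 6 mL/kg PBW; reduce toward 4 mL/kg if the plateau pressure
    # remains > 30 cm H2O (see the rationale above).
    return ml_per_kg * predicted_body_weight_kg(height_cm, male)

# Example: a 175 cm male -> PBW ~70.6 kg -> ~423 mL at 6 mL/kg.
print(round(target_tidal_volume_ml(175)))
```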
We recommend that hypercapnia (allowing PaCO2 to increase above its pre-morbid baseline, so-called permissive hypercapnia) be allowed in patients with ALI/ARDS if needed to minimize plateau pressures and tidal volumes (Grade 1C).

Rationale. An acutely elevated PaCO2 may have physiologic consequences that include vasodilation as well as increased heart rate, blood pressure, and cardiac output. Allowing modest hypercapnia in conjunction with limiting tidal volume and minute ventilation has been demonstrated to be safe in small, nonrandomized series [145, 146]. Patients treated in larger trials with the goal of limiting tidal volumes and airway pressures have demonstrated improved outcomes, but permissive hypercapnia was not a primary treatment goal in these studies [135]. The use of hypercapnia is limited in patients with preexisting metabolic acidosis and is contraindicated in patients with increased intracranial pressure. Sodium bicarbonate or tromethamine (THAM®) infusion may be considered in selected patients to facilitate the use of permissive hypercapnia [147, 148].

We recommend that positive end-expiratory pressure (PEEP) be set so as to avoid extensive lung collapse at end-expiration (Grade 1C).

Rationale. Raising PEEP in ALI/ARDS keeps lung units open to participate in gas exchange, increasing PaO2 when PEEP is applied through either an endotracheal tube or a face mask [149–151]. In animal experiments, avoidance of end-expiratory alveolar collapse helps minimize ventilator-induced lung injury (VILI) when relatively high plateau pressures are in use. One large multicenter trial of the protocol-driven use of higher PEEP in conjunction with low tidal volumes showed neither benefit nor harm when compared with lower PEEP levels [152]. Neither the control nor the experimental group in that study, however, was clearly exposed to hazardous plateau pressures. A recent multicenter Spanish trial compared a high-PEEP, low-to-moderate tidal volume approach with one that used conventional tidal volumes and the least PEEP achieving adequate oxygenation; a marked survival advantage favored the former approach in high-acuity patients with ARDS [153]. Two options are recommended for PEEP titration. One option is to titrate PEEP (and tidal volume) according to bedside measurements of thoracopulmonary compliance, with the objective of obtaining the best compliance, reflecting a favorable balance of lung recruitment and overdistention [154]. The second option is to titrate PEEP based on the severity of the oxygenation deficit, guided by the FIO2 required to maintain adequate oxygenation [135] (see Appendix D). Whichever indicator is used (compliance or oxygenation), recruiting maneuvers are reasonable to employ in the process of PEEP selection. Blood pressure and oxygenation should be monitored, and recruitment discontinued if deterioration in these parameters is observed. A PEEP > 5 cm H2O is usually required to avoid lung collapse [155].

We suggest prone positioning in ARDS patients requiring potentially injurious levels of FIO2 or plateau pressure who are not at high risk for adverse consequences of positional changes, in facilities that have experience with such practices (Grade 2C).

Rationale. Several smaller studies and one larger study have shown that a majority of patients with ALI/ARDS respond to the prone position with improved oxygenation [156–159]. One large multicenter trial of prone positioning for approximately 7 hrs/day did not show improvement in mortality rates in patients with ALI/ARDS; however, a post hoc analysis suggested improvement in those patients with the most severe hypoxemia by PaO2/FIO2 ratio, in those exposed to high tidal volumes, and in those whose CO2 exchange improved as a result of proning [159].
We suggest prone positioning in ARDS patients requiring potentially injurious levels of FIO2 or plateau pressure who are not at high risk for adverse consequences of positional changes, in facilities that have experience with such practices (Grade 2C).

Rationale. Several smaller studies and one larger study have shown that a majority of patients with ALI/ARDS respond to the prone position with improved oxygenation [156–159]. One large multi-center trial of prone positioning for approximately 7 hrs/day did not show improvement in mortality rates in patients with ALI/ARDS; however, a post hoc analysis suggested improvement in those patients with the most severe hypoxemia by PaO2/FIO2 ratio, in those exposed to high tidal volumes, and in those who improved CO2 exchange as a result of proning [159]. A second large trial of prone positioning, conducted for an average of approximately 8 hours per day for 4 days in adults with hypoxemic respiratory failure of low-moderate acuity, confirmed improvement in oxygenation but also failed to show a survival advantage [160]. However, a randomized study that extended the length of time for proning each day to a mean of 17 hours for a mean of 10 days supported the benefit of proning, with randomization to the supine position an independent risk factor for mortality by multivariate analysis [161]. Prone positioning may be associated with potentially life-threatening complications, including accidental dislodgment of the endotracheal tube and central venous catheters, but these complications can usually be avoided with proper precautions.

A) Unless contraindicated, we recommend mechanically ventilated patients be maintained with the head of the bed elevated to limit aspiration risk and to prevent the development of ventilator-associated pneumonia (Grade 1B). B) We suggest that the head of the bed be elevated approximately 30–45 degrees (Grade 2C).

Rationale. The semirecumbent position has been demonstrated to decrease the incidence of ventilator-associated pneumonia (VAP) [164]. Enteral feeding increased the risk of developing VAP; 50% of the patients who were fed enterally in the supine position developed VAP [162]. However, the bed position was only monitored once a day, and patients who did not achieve the desired bed elevation were not included in the analysis [162]. A recent study did not show a difference in the incidence of VAP between patients maintained in supine and semirecumbent positions [163]. In this study, patients in the semirecumbent position did not consistently achieve the desired head-of-bed elevation, and the head-of-bed elevation in the supine group approached that of the semirecumbent group by day 7 [163]. When necessary, patients may be laid flat for procedures, hemodynamic measurements, and during episodes of hypotension. Patients should not be fed enterally with the head of the bed at 0°.

We suggest that noninvasive mask ventilation (NIV) only be considered in that minority of ALI/ARDS patients with mild-moderate hypoxemic respiratory failure (responsive to relatively low levels of pressure support and PEEP) with stable hemodynamics, who can be made comfortable and easily arousable, who are able to protect the airway and spontaneously clear the airway of secretions, and who are anticipated to recover rapidly from the precipitating insult. A low threshold for airway intubation should be maintained (Grade 2B).

Rationale. Obviating the need for airway intubation confers multiple advantages: better communication, a lower incidence of infection, and reduced requirements for sedation. Two RCTs demonstrate improved outcome with the use of NIV when it can be employed successfully [164, 165]. Unfortunately, only a small percentage of patients with life-threatening hypoxemia can be managed in this way.

We recommend that a weaning protocol be in place, and that mechanically ventilated patients with severe sepsis undergo spontaneous breathing trials on a regular basis to evaluate the ability to discontinue mechanical ventilation when they satisfy the following criteria: a) arousable; b) hemodynamically stable (without vasopressor agents); c) no new potentially serious conditions; d) low ventilatory and end-expiratory pressure requirements; and e) FIO2 requirements that could be safely delivered with a face mask or nasal cannula.
If the spontaneous breathing trial is successful, consideration should be given to extubation (see Appendix E). Spontaneous breathing trial options include a low level of pressure support, continuous positive airway pressure (≈ 5 cm H2O), or a T-piece (Grade 1A).

Rationale. Recent studies demonstrate that daily spontaneous breathing trials in appropriately selected patients reduce the duration of mechanical ventilation [166–169]. Successful completion of spontaneous breathing trials leads to a high likelihood of successful discontinuation of mechanical ventilation.

We recommend against the routine use of the pulmonary artery catheter for patients with ALI/ARDS (Grade 1A).

Rationale. While insertion of a pulmonary artery catheter may provide useful information on a patient's volume status and cardiac function, potential benefits of such information may be confounded by differences in interpretation of results [170–172], lack of correlation of pulmonary artery occlusion pressures with clinical response [173], and the absence of a proven strategy to use catheter results to improve patient outcomes [174]. Two multi-center randomized trials, one in patients with shock or acute lung injury [175] and one in patients with acute lung injury [176], failed to show benefit with the routine use of pulmonary artery catheters in patients with acute lung injury. In addition, other studies in different types of critically ill patients have failed to show definitive benefit with routine use of the pulmonary artery catheter [177–179]. Well-selected patients remain appropriate candidates for pulmonary artery catheter insertion when the answers to important management decisions depend on information only obtainable from direct measurements made within the pulmonary artery.

To decrease days of mechanical ventilation and ICU length of stay, we recommend a conservative fluid strategy for patients with established acute lung injury who do not have evidence of tissue hypoperfusion (Grade 1C).

Rationale. Mechanisms for the development of pulmonary edema in patients with acute lung injury include increased capillary permeability, increased hydrostatic pressure, and decreased oncotic pressure [180, 181]. Small prospective studies in patients with critical illness and acute lung injury have suggested that less weight gain is associated with improved oxygenation [182] and fewer days of mechanical ventilation [183, 184]. Use of a fluid-conservative strategy directed at minimizing fluid infusion and weight gain in patients with acute lung injury, based on either a central venous catheter or a pulmonary artery catheter along with clinical parameters to guide treatment, led to fewer days of mechanical ventilation and reduced length of ICU stay without altering the incidence of renal failure or mortality rates [185]. Of note, this strategy was only used in patients with established acute lung injury, some of whom had shock present. Active attempts to reduce fluid volume were conducted only during periods free of shock.

B. Sedation, Analgesia, and Neuromuscular Blockade in Sepsis

We recommend sedation protocols with a sedation goal when sedation of critically ill mechanically ventilated patients with sepsis is required (Grade 1B).

Rationale. A growing body of evidence indicates that the use of protocols for sedation of critically ill ventilated patients can reduce the duration of mechanical ventilation and ICU and hospital length of stay [186–188].
A randomized, controlled clinical trial found that protocol use resulted in reduced duration of mechanical ventilation, reduced lengths of stay, and reduced tracheostomy rates [186]. A report describing the implementation of protocols, including sedation and analgesia, using a short-cycle improvement methodology in the management of critically ill patients demonstrated a decrease in the cost per patient day and a decrease in ICU length of stay [187]. Furthermore, a prospective before-and-after study on the implementation of a sedation protocol demonstrated enhanced quality of sedation with reduced drug costs. Although this protocol also may have contributed to a longer duration of mechanical ventilation, ICU discharge was not delayed [188]. Despite the lack of evidence regarding the use of subjective methods of evaluation of sedation in septic patients, the use of a sedation goal has been shown to decrease the duration of mechanical ventilation in critically ill patients [186]. Several subjective sedation scales have been described in the medical literature. Currently, however, there is not a clearly superior sedation evaluation methodology against which these sedation scales can be evaluated [189]. The benefits of sedation protocols appear to outweigh the risks.

We recommend intermittent bolus sedation or continuous infusion sedation to predetermined end points (e.g., sedation scales), with daily interruption/lightening of continuous infusion sedation with awakening and retitration if necessary, for sedation administration to septic mechanically ventilated patients (Grade 1B).

Rationale. Although not specifically studied in patients with sepsis, the administration of intermittent sedation, daily interruption, and retitration or systematic titration to a predefined end point have been demonstrated to decrease the duration of mechanical ventilation [186, 189, 190]. Patients receiving neuromuscular blocking agents (NMBAs) must be individually assessed regarding discontinuation of sedative drugs, because neuromuscular blocking drugs must also be discontinued in that situation. The use of intermittent vs. continuous methods for the delivery of sedation in critically ill patients has been examined. An observational study of mechanically ventilated patients showed that patients receiving continuous sedation had significantly longer durations of mechanical ventilation and ICU and hospital length of stay [191]. Similarly, a prospective, controlled study in 128 mechanically ventilated adults receiving continuous intravenous sedation demonstrated that a daily interruption in the "continuous" sedative infusion until the patient was awake decreased the duration of mechanical ventilation and ICU length of stay [192]. Although the patients did receive continuous sedative infusions in this study, the daily interruption and awakening allowed for titration of sedation, in effect making the dosing intermittent. Systematic (protocolized) titration to a predefined end point has also been shown to alter outcome [186]. Additionally, a randomized, prospective, blinded, observational study demonstrated that although myocardial ischemia is common in critically ill ventilated patients, daily sedative interruption is not associated with an increased occurrence of myocardial ischemia [193]. Thus, the benefits of daily interruption of sedation appear to outweigh the risks. These benefits include potentially shorter duration of mechanical ventilation and ICU stay, better assessment of neurologic function, and reduced costs.
We recommend that NMBAs be avoided if possible in the septic patient due to the risk of prolonged neuromuscular blockade following discontinuation. If NMBAs must be maintained, either intermittent bolus as required or continuous infusion with train-of-four monitoring of the depth of blockade should be used (Grade 1B).

Rationale. Although NMBAs are often administered to critically ill patients, their role in the ICU setting is not well defined. No evidence exists that maintaining neuromuscular blockade in this patient population reduces mortality or major morbidity. In addition, no studies have been published that specifically address the use of NMBAs in septic patients. The most common indication for NMBA use in the ICU is to facilitate mechanical ventilation [194]. When appropriately utilized, NMBAs may improve chest wall compliance, prevent respiratory dyssynchrony, and reduce peak airway pressures [195]. Muscle paralysis may also reduce oxygen consumption by decreasing the work of breathing and respiratory muscle blood flow [196]. However, a randomized, placebo-controlled clinical trial in patients with severe sepsis demonstrated that oxygen delivery, oxygen consumption, and gastric intramucosal pH were not improved during profound neuromuscular blockade [197]. An association between NMBA use and myopathies and neuropathies has been suggested by case studies and prospective observational studies in the critical care population [195, 198–201]. The mechanisms by which NMBAs produce or contribute to myopathies and neuropathies in critically ill patients are presently unknown. There appears to be an added association with the concurrent use of NMBAs and steroids. Although no studies exist specific to the septic patient population, it seems clinically prudent, based on existing knowledge, that NMBAs not be administered unless there is a clear indication for neuromuscular blockade that cannot be safely achieved with appropriate sedation and analgesia [195]. Only one prospective, randomized clinical trial has evaluated peripheral nerve stimulation vs. standard clinical assessment in ICU patients. Rudis et al. [202] randomized 77 critically ill patients requiring neuromuscular blockade in the ICU to receive dosing of vecuronium based on train-of-four stimulation or clinical assessment (control). The peripheral nerve stimulation group received less drug and recovered neuromuscular function and spontaneous ventilation faster than the control group. Nonrandomized observational studies have suggested that peripheral nerve monitoring reduces, or has no effect on, clinical recovery from NMBAs in the ICU setting [203, 204]. Benefits of neuromuscular monitoring, including faster recovery of neuromuscular function and shorter intubation times, appear to exist. A potential for cost savings (reduced total dose of NMBAs and shorter intubation times) also may exist, although this has not been studied formally.
C. Glucose Control

We recommend that, following initial stabilization, patients with severe sepsis and hyperglycemia who are admitted to the ICU receive IV insulin therapy to reduce blood glucose levels (Grade 1B). We suggest use of a validated protocol for insulin dose adjustments and targeting glucose levels to the < 150 mg/dL range (Grade 2C). We recommend that all patients receiving intravenous insulin receive a glucose calorie source, and that blood glucose values be monitored every 1–2 hours until glucose values and insulin infusion rates are stable, and then every 4 hours thereafter (Grade 1C). We recommend that low glucose levels obtained with point-of-care testing of capillary blood be interpreted with caution, as such measurements may overestimate arterial blood or plasma glucose values (Grade 1B).

Rationale. The consensus on glucose control in severe sepsis was achieved at the first committee meeting and subsequently approved by the entire committee (see Appendix G for committee vote). One large randomized single-center trial in a predominantly cardiac surgical ICU demonstrated a reduction in ICU mortality with intensive IV insulin (Leuven protocol) targeting blood glucose to 80–110 mg/dL (a 43% relative and 3.4% absolute mortality reduction for all patients, and a 48% relative and 9.6% absolute mortality reduction for those with an ICU length of stay (LOS) > 5 days) [205]. A reduction in organ dysfunction and ICU LOS (from a median of 15 to 12 days) was also observed in the subset with ICU LOS > 5 days. A second randomized trial of intensive insulin therapy using the Leuven protocol enrolled medical ICU patients with an anticipated ICU LOS of > 3 days in three MICUs [206]. Overall mortality was not reduced, but ICU and hospital LOS were reduced, associated with earlier weaning from mechanical ventilation and less acute kidney injury. In patients with a medical ICU LOS > 3 days, hospital mortality was reduced with intensive insulin therapy (43% versus 52.5%; p = 0.009). However, investigators were unsuccessful in predicting ICU LOS, and 433 patients (36%) had an ICU LOS of < 3 days. Furthermore, use of the Leuven protocol in the medical ICU resulted in a nearly three-fold higher rate of hypoglycemia than in the original experience (18% versus 6.2% of patients) [205, 206]. One large before-and-after observational trial showed a 29% relative and 6.1% absolute reduction in mortality and a 10.8% reduction in median ICU LOS [207]. In a subgroup of 53 patients with septic shock, there was an absolute mortality reduction of 27% and a relative reduction of 45% (p = 0.02). Two additional observational studies report an association of mean glucose levels with reductions in mortality, polyneuropathy, acute renal failure, nosocomial bacteremia, and number of transfusions, and suggest that a glucose threshold for improved mortality lies somewhere between 145 and 180 mg/dL [208, 209]. However, a large observational study (n = 7,049) suggested that both a lower mean glucose and less variation of blood glucose may be important [210]. A meta-analysis of 35 trials of insulin therapy in critically ill patients, including 12 randomized trials, demonstrated a 15% reduction in short-term mortality (RR 0.85, 95% confidence interval 0.75–0.97) but did not include any studies of insulin therapy in medical ICUs [211].
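As an aside, two of the figures in this rationale can be sanity-checked with simple arithmetic: the standard mg/dL-to-mmol/L conversion for glucose (divide by ~18, from glucose's molar mass of ~180 g/mol, consistent with the paired NICE-SUGAR values quoted below), and the relationship between relative and absolute mortality reductions. This is an illustrative sketch, not part of the guideline.

```python
# Illustrative arithmetic only.

# Glucose unit conversion: mmol/L = (mg/dL) / 18.016 (glucose molar mass ~180.16 g/mol).
def glucose_mg_dl_to_mmol_l(mg_dl):
    return mg_dl / 18.016

print(round(glucose_mg_dl_to_mmol_l(110), 1))  # ~6.1, matching 80-110 mg/dL = 4.5-6.0 mmol/L

# Relative vs. absolute reduction: a 43% relative with a 3.4% absolute reduction
# implies a control-group ICU mortality of roughly 3.4 / 0.43, i.e. about 8%.
absolute_pct, relative = 3.4, 0.43
print(round(absolute_pct / relative, 1))  # implied baseline mortality in percent (~7.9)
```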
Two additional multicenter RCTs of intensive insulin therapy, one focusing on patients with severe sepsis (VISEP) and the second on medical and surgical ICU patients, failed to demonstrate improvement in mortality but are not yet published [212, 213]. Both stopped earlier than planned because of high rates of hypoglycemia and adverse events in the intensive insulin groups. A large ongoing RCT (Normoglycemia in Intensive Care Evaluation and Survival Using Glucose Algorithm Regulation, or NICE-SUGAR) plans to compare targeting 80–110 mg/dL (4.5–6.0 mmol/L) versus 140–180 mg/dL (8–10 mmol/L) and to recruit more than 6,000 patients [214]. Several factors may affect the accuracy and reproducibility of point-of-care testing of capillary blood glucose, including the type and model of the device used, user expertise, and patient factors including hematocrit (false elevation with anemia), PaO2, and drugs [215]. One report showed overestimation of arterial plasma glucose values by capillary point-of-care testing sufficient to result in different protocol-specified insulin dose titration; the disagreement between protocol-recommended insulin doses was largest when glucose values were low [216]. A recent review of 12 published insulin infusion protocols for critically ill patients showed wide variability in insulin dose recommendations and variable glucose control during simulation [217]. This lack of consensus about optimal dosing of IV insulin may reflect variability in patient factors (severity of illness, surgical vs. medical settings, etc.) or practice patterns (e.g., approaches to feeding, IV dextrose) in the environments in which these protocols were developed and tested. Alternatively, some protocols may be more effective than others. This conclusion is supported by the wide variability in hypoglycemia rates reported with protocols [205–207, 212, 213]. Thus, the use of a validated and safe intensive insulin protocol is important not only for clinical care but also for the conduct of clinical trials, to avoid hypoglycemia, adverse events, and premature termination of these trials before the efficacy signal, if any, can be determined. The finding of reduced morbidity and mortality within the longer ICU length-of-stay subsets, along with acceptable cost, weighed heavily on our recommendation to attempt glucose control after initial stabilization of the patient with hyperglycemia and severe sepsis. However, the mortality benefit and safety of intensive insulin therapy (with the goal of normalizing blood glucose) have been questioned by two recent trials, and we recommend maintaining glucose levels < 150 mg/dL until recent and ongoing trials are published or completed. Further study of protocols that have been validated to be safe and effective for controlling blood glucose concentrations and blood glucose variation in the severe sepsis population is needed.

D. Renal Replacement

We suggest that continuous renal replacement therapies and intermittent hemodialysis are equivalent in patients with severe sepsis and acute renal failure (Grade 2B). We suggest the use of continuous therapies to facilitate management of fluid balance in hemodynamically unstable septic patients (Grade 2D).
Rationale. Although numerous nonrandomized studies have reported a nonsignificant trend toward improved survival using continuous methods [218–225], two meta-analyses [226, 227] report the absence of a significant difference in hospital mortality between patients who receive continuous and intermittent renal replacement therapies. This absence of apparent benefit of one modality over the other persists even when the analysis is restricted to randomized studies only [227]. To date, five prospective randomized studies have been published [228–232]. Four of them found no significant difference in mortality [229–232]. One study found significantly higher mortality in the continuous treatment group [228], but imbalanced randomization had led to a higher baseline severity of illness in this group. When a multivariable model was used to adjust for severity of illness, no difference in mortality was apparent between the groups [228]. It is important to note that most studies comparing modes of renal replacement in the critically ill have included small numbers of patients and have had some major weaknesses (randomization failure, modifications of the therapeutic protocol during the study period, combination of different types of continuous renal replacement therapies, and small numbers of heterogeneous patients enrolled). The most recent and largest randomized study [232] enrolled 360 patients and found no significant difference in survival between the two groups. Moreover, there is no current evidence to support the use of continuous therapies in sepsis independent of renal replacement needs. Concerning the hemodynamic tolerance of each method, no current evidence exists to support a better tolerance with continuous treatments. Only two prospective studies [230, 233] have reported better hemodynamic tolerance with continuous treatment, with no improvement in regional perfusion [233] and no survival benefit [230]. Four other prospective studies did not find any significant difference in mean arterial pressure or drop in systolic pressure between the two methods [229, 231, 232, 234]. Concerning fluid balance management, two studies report a significant improvement in goal achievement with continuous methods [228, 230]. In summary, current evidence is insufficient to draw strong conclusions regarding the mode of replacement therapy for acute renal failure in septic patients. Four randomized, controlled trials have addressed whether the dose of continuous renal replacement affects outcomes in patients with acute renal failure [235–238]. Three found improved mortality in patients receiving higher doses of renal replacement [235, 237, 238], while one [236] did not. None of these trials was conducted specifically in patients with sepsis. Although the weight of current evidence suggests that higher doses of renal replacement may be associated with improved outcomes, these results may not be easily generalizable. The results of two very large multicenter randomized trials comparing the dose of renal replacement (ATN in the United States and RENAL in Australia and New Zealand) will be available in 2008 and will greatly inform practice.

E. Bicarbonate Therapy

We recommend against the use of sodium bicarbonate therapy for the purpose of improving hemodynamics or reducing vasopressor requirements in patients with hypoperfusion-induced lactic acidemia with pH ≥ 7.15 (Grade 1B).

Rationale. No evidence supports the use of bicarbonate therapy in the treatment of hypoperfusion-induced lactic acidemia associated with sepsis.
Two randomized, blinded, crossover studies that compared equimolar saline and bicarbonate in patients with lactic acidosis failed to reveal any difference in hemodynamic variables or vasopressor requirements [239, 240]. The number of patients with pH < 7.15 in these studies was small. Bicarbonate administration has been associated with sodium and fluid overload, an increase in lactate and pCO2, and a decrease in serum ionized calcium, but the relevance of these parameters to outcome is uncertain. The effect of bicarbonate administration on hemodynamics and vasopressor requirements at lower pH, as well as the effect on clinical outcomes at any pH, is unknown. No studies have examined the effect of bicarbonate administration on outcomes.

F. Deep Vein Thrombosis Prophylaxis

We recommend that severe sepsis patients receive deep vein thrombosis (DVT) prophylaxis with either (a) low-dose unfractionated heparin (UFH) administered b.i.d. or t.i.d., or (b) daily low-molecular weight heparin (LMWH), unless there are contraindications (i.e., thrombocytopenia, severe coagulopathy, active bleeding, recent intracerebral hemorrhage) (Grade 1A). We recommend that septic patients who have a contraindication for heparin use receive a mechanical prophylactic device, such as graduated compression stockings (GCS) or intermittent compression devices (ICD), unless contraindicated (Grade 1A). We suggest that in very high-risk patients, such as those who have severe sepsis and a history of DVT, trauma, or orthopedic surgery, a combination of pharmacologic and mechanical therapy be used unless contraindicated or not practical (Grade 2C). We suggest that in patients at very high risk, LMWH be used rather than UFH, as LMWH is proven superior in other high-risk patients (Grade 2C).

Rationale. ICU patients are at risk for DVT [241]. Significant evidence exists for the benefit of DVT prophylaxis in ICU patients in general. No reasons suggest that severe sepsis patients would be different from the general patient population. Nine randomized placebo-controlled clinical trials of DVT prophylaxis in general populations of acutely ill patients exist [242–250]. All nine trials showed a reduction in DVT or PE. The prevalence of infection/sepsis was 17% in all studies in which this was ascertainable, with a 52% prevalence of infection/sepsis patients in the study that included ICU patients only. Benefit of DVT prophylaxis is also supported by meta-analyses [251, 252]. With that in mind, DVT prophylaxis would appear to have a high grade for quality of evidence (A). As the risk of administration to the patient is small, the gravity of the potential result of not administering is great, and the cost is low, the grading of the strength of the recommendation is strong. The evidence supports equivalency of LMWH and UFH in general medical populations. A recent meta-analysis comparing b.i.d. and t.i.d. UFH demonstrated that t.i.d. UFH produced better efficacy and b.i.d. UFH less bleeding [253]. Practitioners should use the underlying risk for VTE and bleeding to individualize the choice of b.i.d. versus t.i.d. dosing. The cost of LMWH is greater and the frequency of injection is less. UFH is preferred over LMWH in patients with moderate to severe renal dysfunction. Mechanical methods (ICD and GCS) are recommended when anticoagulation is contraindicated, or as an adjunct to anticoagulation in very high-risk patients [254–256]. In very high-risk patients, LMWH is preferred over UFH [257–259].
Patients receiving heparin should be monitored for the development of heparin-induced thrombocytopenia (HIT).

G. Stress Ulcer Prophylaxis (SUP)

We recommend that stress ulcer prophylaxis using an H2 blocker (Grade 1A) or a proton pump inhibitor (PPI) (Grade 1B) be given to patients with severe sepsis to prevent upper GI bleeding. The benefit of prevention of upper GI bleeding must be weighed against the potential effect of an increased stomach pH on the development of ventilator-associated pneumonia.

Rationale. Although no study has been performed specifically in patients with severe sepsis, trials confirming the benefit of stress ulcer prophylaxis in reducing upper GI bleeding have been performed in general ICU populations, and 20–25% of patients enrolled in these types of trials have sepsis [260–263]. This benefit should be applicable to patients with severe sepsis and septic shock. In addition, the conditions shown to benefit from stress ulcer prophylaxis (coagulopathy, mechanical ventilation, hypotension) are frequently present in patients with severe sepsis and septic shock [264, 265]. Although there are individual trials that have not shown benefit from SUP, numerous trials and a meta-analysis show a reduction in clinically significant upper GI bleeding, which we consider significant even in the absence of proven mortality benefit [266–269]. The benefit of prevention of upper GI bleeding must be weighed against the potential effect of increased stomach pH on a greater incidence of ventilator-associated pneumonia [270]. Those severe sepsis patients with the greatest risk of upper GI bleeding are likely to benefit most from stress ulcer prophylaxis. The rationale for the preference for suppression of acid production over sucralfate was based on the study of 1,200 patients by Cook et al. comparing H2 blockers and sucralfate, and on a meta-analysis [271, 272]. Two studies support equivalency between H2 blockers and PPIs: one was in very ill ICU patients, and the second, larger study demonstrated non-inferiority of omeprazole suspension for clinically significant stress ulcer bleeding [273, 274]. No data relating to the utility of enteral feeding in stress ulcer prophylaxis exist. Patients should be periodically evaluated for the continued need for prophylaxis.

H. Selective Digestive Tract Decontamination (SDD)

The guidelines group was evenly split on the issue of SDD, with equal numbers weakly in favor of and against recommending the use of SDD (see Appendix H). The committee therefore chose not to make a recommendation for the use of SDD specifically in severe sepsis at this time. The final consensus on the use of SDD in severe sepsis was achieved at the last nominal committee meeting and subsequently approved by the entire committee (see Appendix H for committee vote).

Rationale. The cumulative conclusion from the literature demonstrates that prophylactic use of SDD (enteral non-absorbable antimicrobials and short-course intravenous antibiotics) reduces infections, mainly pneumonia, and mortality in the general population of critically ill and trauma patients [275–286] without promoting the emergence of resistant Gram-negative bacteria. Post hoc subgroup analyses [287, 288] of two prospective blinded studies [289, 290] suggest that SDD reduces nosocomial (secondary) infections in ICU patients admitted with primary infections [268] and may reduce mortality [288]. No studies of SDD specifically focused on patients with severe sepsis or septic shock. The use of SDD in severe sepsis patients would be targeted toward preventing secondary infection.
As the main effect of SDD is in preventing ventilator-associated pneumonia (VAP), studies comparing SDD with non-antimicrobial interventions, such as ventilator bundles for reducing VAP, are needed. Further investigation is required to determine the comparative efficacy of these two interventions, separately or in combination. Although studies incorporating enteral vancomycin in the regimen appear to be safe [291–293], concerns persist about the potential for the emergence of resistant Gram-positive infections.

I. Consideration for Limitation of Support

We recommend that advance care planning, including the communication of likely outcomes and realistic goals of treatment, be discussed with patients and families (Grade 1D).

Rationale. Decisions for less aggressive support or withdrawal of support may be in the patient's best interest [294–296]. Too frequently, inadequate physician/family communication characterizes end-of-life care in the ICU. The level of life support given to ICU patients may not be consistent with their wishes. Early and frequent caregiver discussions with patients who face death in the ICU, and with their loved ones, may facilitate appropriate application and withdrawal of life-sustaining therapies. A recent RCT demonstrated a reduction of anxiety and depression in family members when end-of-life meetings were carefully planned, conducted, included advance care planning, and provided relevant information about diagnosis, prognosis, and treatment [297].

III. Pediatric Considerations in Severe Sepsis

While sepsis in children is a major cause of mortality, the overall mortality from severe sepsis in children is much lower than that in adults, estimated at about 10% [298]. The definitions for severe sepsis and septic shock in children are similar but not identical to the definitions in adults [299]. In addition to age-appropriate differences in vital signs, the definition of systemic inflammatory response syndrome requires the presence of either temperature or leukocyte abnormalities. The presence of severe sepsis requires sepsis plus cardiovascular dysfunction, or ARDS, or two or more other organ dysfunctions [299].

A. Antibiotics

We recommend antibiotics be administered within one hour of the identification of severe sepsis, after appropriate cultures have been obtained (Grade 1D). Early antibiotic therapy is as critical for children with severe sepsis as it is for adults.

B. Mechanical Ventilation

No graded recommendations. Due to low functional residual capacity, young infants and neonates with severe sepsis may require early intubation [300]. Drugs used for intubation have important side effects in these patients; for example, concerns have been raised about the safety of using etomidate in children with meningococcal sepsis because of its adrenal suppression effect [301]. The principles of lung-protective strategies are applied to children as they are to adults.

C. Fluid Resuscitation

We suggest initial resuscitation begin with infusion of crystalloids with boluses of 20 mL/kg over 5–10 minutes, titrated to clinical monitors of cardiac output, including heart rate, urine output, capillary refill, and level of consciousness (Grade 2C).
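As a side note, the bolus arithmetic implied by this suggestion scales linearly with weight; the minimal sketch below (with a hypothetical patient weight) makes the numbers concrete, including the cumulative 40–60 mL/kg initial totals discussed just below. It is illustrative only, not a dosing tool.

```python
# Bolus arithmetic for the 20 mL/kg suggestion above; the weight is hypothetical.

def bolus_ml(weight_kg, dose_ml_per_kg=20.0):
    """Volume of a single crystalloid bolus in mL."""
    return dose_ml_per_kg * weight_kg

def boluses_for_total(total_ml_per_kg, dose_ml_per_kg=20.0):
    """Number of 20 mL/kg boluses needed to reach a cumulative mL/kg total."""
    return total_ml_per_kg / dose_ml_per_kg

weight = 15.0  # kg, hypothetical child
print(bolus_ml(weight))          # 300.0 mL per bolus
print(boluses_for_total(60.0))   # 3.0 boluses to reach the upper end of 40-60 mL/kg
```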
Intravenous access for fluid resuscitation and inotrope/vasopressor infusion is more difficult to attain in children than in adults. The American Heart Association, along with the American Academy of Pediatrics, has developed pediatric advanced life support guidelines for the emergency establishment of intravascular support, encouraging early intraosseous access [302]. On the basis of a number of studies, it is accepted that aggressive fluid resuscitation with crystalloids or colloids is of fundamental importance to survival of septic shock in children [303–308]. Three randomized, controlled trials compared the use of colloid to crystalloid resuscitation in children with dengue shock [303, 307, 308]; no difference in mortality between colloid and crystalloid resuscitation was shown. Children normally have a lower blood pressure than adults, and a fall in blood pressure can be prevented by vasoconstriction and an increased heart rate. Therefore, blood pressure by itself is not a reliable end point for assessing the adequacy of resuscitation. However, once hypotension occurs, cardiovascular collapse may soon follow. Hepatomegaly occurs in children who are fluid overloaded and can be a helpful sign of the adequacy of fluid resuscitation. Large fluid deficits typically exist, and initial volume resuscitation usually requires 40–60 mL/kg but can be much higher [304–308]. However, the rate of fluid administration should be reduced substantially when there are (clinical) signs of adequate cardiac filling without hemodynamic improvement.

D. Vasopressors/Inotropes (to be used in volume-loaded patients with fluid-refractory shock)

We suggest dopamine as the first choice of support for the pediatric patient with hypotension refractory to fluid resuscitation (Grade 2C).

In the initial resuscitation phase, vasopressor therapy may be required to sustain perfusion pressure, even when hypovolemia has not yet been resolved. Children with severe sepsis can present with low cardiac output and high systemic vascular resistance, high cardiac output and low systemic vascular resistance, or low cardiac output and low systemic vascular resistance shock. At various stages of sepsis or the treatment thereof, a child may move from one hemodynamic state to another. Vasopressor or inotrope therapy should be used according to the clinical state of the child. Dopamine-refractory shock may reverse with epinephrine or norepinephrine infusion [309].

We suggest that patients with low cardiac output and elevated systemic vascular resistance states (cool extremities, prolonged capillary refill, decreased urine output, but normal blood pressure following fluid resuscitation) be given dobutamine (Grade 2C).

The choice of vasoactive agent is determined by the clinical examination. For the child with a persistent low cardiac output state with high systemic vascular resistance despite fluid resuscitation and inotropic support, vasodilator therapy may reverse shock [310]. When pediatric patients remain in a normotensive low cardiac output and high vascular resistance state despite epinephrine and vasodilator therapy, the use of a phosphodiesterase inhibitor may be considered [311–313]. In the case of extremely low systemic vascular resistance despite the use of norepinephrine, vasopressin use has been described in a number of case reports. Thus far there is no clear evidence for the use of vasopressin in pediatric sepsis [314, 315].
E. Therapeutic End Points

We suggest that the therapeutic end points of resuscitation of septic shock be normalization of the heart rate, capillary refill of < 2 seconds, normal pulses with no differential between peripheral and central pulses, warm extremities, urine output > 1 mL·kg−1·hr−1, and normal mental status [290] (Grade 2C).

Capillary refill may be less reliable in a cold environment. Other end points that have been widely used in adults and may logically apply to children include decreased lactate and improved base deficit, ScvO2 ≥ 70% or SvO2 ≥ 65%, CVP of 8–12 mm Hg, or other methods to analyze cardiac filling. Optimizing preload optimizes cardiac index. When using measurements to assist in identifying acceptable cardiac output in children with systemic arterial hypoxemia, such as cyanotic congenital heart disease or severe pulmonary disease, the arterial-venous oxygen content difference is a better marker than mixed venous hemoglobin saturation with oxygen. As noted previously, blood pressure by itself is not a reliable end point for resuscitation. If a thermodilution catheter is used, therapeutic end points are a cardiac index > 3.3 and < 6.0 L·min−1·m−2 with a normal coronary perfusion pressure (mean arterial pressure − central venous pressure) for age [290]. Using clinical end points such as reversal of hypotension and restoration of capillary refill for initial resuscitation at the community hospital level, before transfer to a tertiary center, was associated with significantly improved survival rates in children with septic shock [305]. Development of a transport system, including publicizing to local hospitals and transport with mobile intensive care services, significantly decreased the case fatality rate from meningococcal disease in the United Kingdom [316].

F. Approach to Pediatric Septic Shock

Figure 1 shows a flow diagram summarizing an approach to pediatric septic shock [317]. (Fig. 1: Approach to Pediatric Shock.)

G. Steroids

We suggest that hydrocortisone therapy be reserved for use in children with catecholamine resistance and suspected or proven adrenal insufficiency (Grade 2C).

Patients at risk for adrenal insufficiency include children with severe septic shock and purpura [318, 319], children who have previously received steroid therapies for chronic illness, and children with pituitary or adrenal abnormalities. Children who have clear risk factors for adrenal insufficiency should be treated with stress-dose steroids (hydrocortisone 50 mg/m2/24 hr). Adrenal insufficiency in pediatric severe sepsis is associated with a poor prognosis [320]. No strict definitions exist, but absolute adrenal insufficiency in the case of catecholamine-resistant septic shock is assumed at a random total cortisol concentration < 18 μg/dL (496 nmol/L). An increase in cortisol of ≤ 9 μg/dL (248 nmol/L) 30 or 60 min after ACTH stimulation has been used to define relative adrenal insufficiency. The treatment of relative adrenal insufficiency in children with septic shock is controversial. A retrospective study from a large administrative database recently reported that the use of any corticosteroids in children with severe sepsis was associated with increased mortality (OR 1.9; 95% CI 1.7–2.2) [321]. While steroids may have been given preferentially to more severely ill children, the use of steroids was an independent predictor of mortality in multivariable analysis [321].
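Before moving on, the numeric relationships quoted in sections E and G above can be checked in a few lines. The cortisol conversion factor (~27.6 nmol/L per μg/dL, from cortisol's molar mass of ~362.5 g/mol) reproduces the paired values in the text; the blood pressure values in the example are hypothetical. This is an illustrative sketch only.

```python
# Checks of numeric relationships quoted in the end points and steroids sections.

def coronary_perfusion_pressure(map_mm_hg, cvp_mm_hg):
    """Coronary perfusion pressure = mean arterial pressure - central venous pressure."""
    return map_mm_hg - cvp_mm_hg

def cortisol_ug_dl_to_nmol_l(ug_dl):
    # ~27.59 nmol/L per ug/dL (cortisol molar mass ~362.5 g/mol).
    return ug_dl * 27.59

def cardiac_index_in_target(ci_l_min_m2):
    """Thermodilution end point quoted above: 3.3 < CI < 6.0 L/min/m2."""
    return 3.3 < ci_l_min_m2 < 6.0

print(coronary_perfusion_pressure(65, 12))   # 53 mm Hg for hypothetical MAP/CVP values
print(round(cortisol_ug_dl_to_nmol_l(18)))   # ~497, matching "< 18 ug/dL (496 nmol/L)"
print(round(cortisol_ug_dl_to_nmol_l(9)))    # ~248, matching the ACTH stimulation cutoff
print(cardiac_index_in_target(4.0))          # True
```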
Given the lack of data in children and the potential risk, steroids should not be used in children who do not meet minimal criteria for adrenal insufficiency. A randomized, controlled trial in children with septic shock is very much needed.

H. Protein C and Activated Protein C

We recommend against the use of rhAPC in children (Grade 1B).

Protein C concentrations in children reach adult values at the age of 3 yrs. This might indicate that the importance of protein C supplementation, either as protein C concentrate or as rhAPC, is even greater in young children than in adults [322]. There has been one dose-finding, randomized, placebo-controlled study performed using protein C concentrate. This study was not powered to show an effect on mortality rate, but it did show a positive effect on sepsis-induced coagulation disturbances [323]. An RCT of rhAPC in pediatric severe sepsis patients was stopped by recommendation of the Data Monitoring Committee for futility after enrollment of 399 patients: 28-day all-cause mortality was 18% in the placebo group vs. 17% in the APC group, and major amputations occurred in 3% of the placebo group vs. 2% in the APC group [324]. Due to the increased risk of bleeding (7% vs. 6% in the pediatric trial) and the lack of proof of efficacy, rhAPC is not recommended for use in children.

I. DVT Prophylaxis

We suggest the use of DVT prophylaxis in post-pubertal children with severe sepsis (Grade 2C).

Most DVTs in young children are associated with central venous catheters. Femoral venous catheters are commonly used in children, and central venous catheter-associated DVTs occur in approximately 25% of children with a femoral central venous catheter. Heparin-bonded catheters may decrease the risk of catheter-associated DVT and should be considered for use in children with severe sepsis [325, 326]. No data on the efficacy of unfractionated or low-molecular weight heparin prophylaxis to prevent catheter-related DVT in children in the ICU exist.

J. Stress Ulcer Prophylaxis

No graded recommendations. Studies have shown that clinically important gastrointestinal bleeding in children occurs at rates similar to those in adults [327, 328]. As in adults, coagulopathy and mechanical ventilation are risk factors for clinically important gastrointestinal bleeding. Stress ulcer prophylaxis is commonly used in mechanically ventilated children, usually with H2 blockers; its effect is not known.

K. Renal Replacement Therapy

No graded recommendations. Continuous veno-venous hemofiltration (CVVH) may be clinically useful in children with anuria/severe oliguria and fluid overload, but no large RCTs have been performed comparing CVVH with intermittent dialysis. A retrospective study of 113 critically ill children reported that children with less fluid overload before CVVH had better survival, especially those children with dysfunction of 3 or more organs [329]. CVVH or other renal replacement therapy should be instituted in children with anuria/severe oliguria before significant fluid overload occurs.

L. Glycemic Control

No graded recommendations. In general, infants are at risk for developing hypoglycemia when they depend on intravenous fluids. This means that a glucose intake of 4–6 mg·kg−1·min−1, or maintenance fluid intake with a 10% glucose/NaCl-containing solution, is advised. Associations have been reported between hyperglycemia and an increased risk of death and longer length of stay [330].
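The glucose intake just quoted converts into an infusion rate of 10% glucose as follows (10% glucose = 100 mg per mL); the infant weight is hypothetical and the sketch is illustrative, not a dosing tool.

```python
# Converting 4-6 mg/kg/min of glucose into mL/hr of a 10% glucose solution.
# 10% glucose = 100 mg glucose per mL; the weight below is hypothetical.

def d10_rate_ml_per_hr(weight_kg, dose_mg_kg_min):
    mg_per_hr = dose_mg_kg_min * weight_kg * 60.0
    return mg_per_hr / 100.0

weight = 4.0  # kg, hypothetical infant
for dose in (4.0, 6.0):
    print(f"{dose} mg/kg/min -> {d10_rate_ml_per_hr(weight, dose):.1f} mL/hr")
# A 4-kg infant would need roughly 9.6-14.4 mL/hr across the 4-6 mg/kg/min range.
```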
A recent retrospective PICU study reported associations of hyperglycemia, hypoglycemia, and glucose variability with length of stay and mortality rates [331]. No studies in pediatric patients (without diabetes mellitus) analyzing the effect of strict glycemic control using insulin exist. In adults, the recommendation is to maintain a serum glucose below 150 mg/dL. Insulin therapy to avoid long periods of hyperglycemia seems sensible in children as well, but the optimal goal glucose is not known. However, continuous insulin therapy should only be used with frequent glucose monitoring, in view of the risks of hypoglycemia.

M. Sedation/Analgesia

We recommend sedation protocols with a sedation goal when sedation of critically ill mechanically ventilated patients with sepsis is required (Grade 1D).

Appropriate sedation and analgesia are the standard of care for children who are mechanically ventilated. Although there are no data supporting any particular drugs or regimens, it should be noted that propofol should not be used for long-term sedation in children because of the reported association with fatal metabolic acidosis [332, 333].

N. Blood Products

No graded recommendations. The optimal hemoglobin for a critically ill child with severe sepsis is not known. A recent multicenter trial reported similar outcomes in stable critically ill children managed with a transfusion threshold of 7 g/dL compared to those managed with a transfusion threshold of 9.5 g/dL [334]. Whether a lower transfusion trigger is safe or appropriate in the initial resuscitation of septic shock has not been determined.

O. Intravenous Immunoglobulin

We suggest that immunoglobulin may be considered in children with severe sepsis (Grade 2C).

Administration of polyclonal intravenous immunoglobulin has been reported to reduce mortality rate and is a promising adjuvant in the treatment of sepsis and septic shock in neonates. A recent randomized controlled study of polyclonal immunoglobulin in pediatric sepsis syndrome patients (n = 100) showed a significant reduction in mortality and LOS, and less progression to complications, especially DIC [335].

P. Extracorporeal Membrane Oxygenation (ECMO)

We suggest that the use of ECMO be limited to refractory pediatric septic shock and/or respiratory failure that cannot be supported by conventional therapies (Grade 2C).

ECMO has been used in septic shock in children, but its impact is not clear. Survival from refractory shock or respiratory failure associated with sepsis is 80% in neonates and 50% in children. In one study analyzing 12 patients with meningococcal sepsis on ECMO, eight of the 12 patients survived, with six leading functionally normal lives at a median of 1 yr (range, 4 months to 4 yrs) of follow-up. Children with sepsis on ECMO do not perform worse than children without sepsis at long-term follow-up [336, 337]. Although the pediatric considerations section of this manuscript offers important information to the practicing pediatric clinician for the management of critically ill children with sepsis, the reader is referred to the references at the end of the document for more in-depth descriptions of the appropriate management of pediatric septic patients.

Summary and Future Directions

The reader is reminded that although this document is static, the optimum treatment of severe sepsis and septic shock is a dynamic and evolving process. New interventions will be proven, and established interventions, as stated in the current recommendations, may need modification.
This publication represents an ongoing process. The Surviving Sepsis Campaign and the consensus committee members are committed to updating the guidelines on a regular basis as new interventions are tested and published in the literature. Although evidence-based recommendations have been frequently published in the medical literature, documentation of their impact on patient outcome is limited [338]. There is, however, growing evidence that protocol implementation associated with education and performance feedback does change clinician behavior and may improve outcomes and reduce costs in severe sepsis [20, 24, 25]. Phase III of the Surviving Sepsis Campaign targets the implementation of a core set of the previous recommendations in hospital environments where change in behavior and clinical impact are being measured. The sepsis bundles were developed in collaboration with the Institute for Healthcare Improvement [339]. Concurrent or retrospective chart review will identify and track changes in practice and clinical outcome. Software and software support are available at no cost in 7 languages, allowing bedside data entry and the creation of regular reports for performance feedback. The Campaign also offers significant program support and educational materials at no cost to the user (www.survivingsepsis.org). Engendering evidence-based change in clinical practice through multi-faceted strategies, while auditing practice and providing feedback to healthcare practitioners, is the key to improving outcomes in severe sepsis. Nowhere is this more evident than in the worldwide enthusiasm for Phase III of the Campaign, a performance improvement program using SSC guideline-based sepsis bundles. Using the guidelines as the basis, the bundles have established a global best practice for the management of critically ill patients with severe sepsis. As of November 2007, over 12,000 patients have been entered into the SSC central database, representing the efforts of 239 hospitals in 17 countries. Change in practice and potential effect on survival are being measured.
[ "surviving sepsis campaign", "sepsis", "guidelines", "severe sepsis", "septic shock", "infection", "sepsis syndrome", "grade", "sepsis bundles", "evidence-based medicine" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "R" ]
Ann_Gen_Hosp_Psychiatry-2-_-162166
Contribution of psychoacoustics and neuroaudiology in revealing correlation of mental disorders with central auditory processing disorders
Background Psychoacoustics is a fascinating, developing field concerned with the evaluation of the hearing sensation as an outcome of a sound or speech stimulus. Neuroaudiology, with electrophysiologic testing, records the electrical activity of the auditory pathways, extending from the 8th cranial nerve up to the cortical auditory centers, as a result of external auditory stimuli. Central auditory processing disorders may co-exist with mental disorders and complicate diagnosis and outcome.

Background Evaluation of the central auditory nervous system (CANS) is essential in order to obtain information on its anatomical and functional integrity. Both children and adults may suffer from central auditory processing disorders (CAPD). This fact has been underestimated, but as research in this field progresses, it shows that specific mental disorders may be the outcome of a CAPD or that CAPD can co-exist with a neurological or mental disorder [1]. Assessment of the CANS began in the mid-1950s with the confirmation by Bocca and his colleagues [2] that CANS disorders do exist and that there are sensitive tests to reveal them. However, at that time, acceptance of the new diagnostic methods by the audiologists, who were the first to be interested in this field, was limited. This can be attributed to the slow acceptance of each new method before it is fully validated. Better understanding of the anatomy and physiology of the CANS was gained by advances concerning the presence and physiology of neurotransmitters and the accumulation of data on psychoacoustic and electrophysiologic tests [3]. As a result, audiologists started applying the new diagnostic tests more often and appreciated their contribution. Other medical specialties became aware of and interested in the disorders of the CANS, mainly psychiatry and neurology. The assessment of the CANS is also of great value to neuropsychology and special education [4-6].

Anatomy and physiology of the CANS Clinical evaluation of central auditory function requires understanding of the anatomy and physiology of the CANS and appreciation of its complexity. The CANS extends from the anterior and posterior cochlear nuclei, which are situated on the surface of the inferior cerebellar peduncle, to the auditory cortex. In between, important structures through which nerve fibers pass are the trapezoid body, the lateral lemniscus, the inferior colliculus, the medial geniculate body and the acoustic radiation of the internal capsule. The auditory cortex includes the gyrus of Heschl on the upper surface of the superior temporal gyrus, the planum temporale and the Sylvian fissure. It is essential to point out that nerve impulses from each ear proceed along auditory pathways on both sides of the brainstem. Both ipsilateral and contralateral pathways are important in ensuring the interchange of auditory information. The contralateral pathway exhibits dominance as opposed to the ipsilateral one [7]. Thirty thousand afferent auditory nerve fibers with different ranges of frequency response are responsible for conveying auditory information to the cortex [8]. Many components of the stimulus are analyzed separately, and there is an increasing complexity of the whole process in the auditory cortex. One should keep in mind that understanding of the exact way auditory information is processed at the level of the auditory cortex is still incomplete.
It is in this understanding that psychoacoustics helps, as it is the science concerned with the evaluation of the sensation of hearing as an outcome of a sound or speech stimulus.

Components of central auditory processing Central auditory processing occurs prior to language comprehension [9]. It consists firstly of auditory discrimination, which is responsible for the ability to group sounds according to how similarly or differently they are heard. Auditory memory is the component responsible for storing and recalling auditory information. Auditory perception concerns the reception and understanding of sounds and words. It plays a significant part in reading skills, managing verbal information, communication and social relationships. Auditory-vocal association consists of the interaction between what is heard and the verbal response. Auditory synthesis is responsible for combining sounds or syllables to formulate comprehensible patterns (words) and de-combining words into separate sounds. Auditory-vocal automaticity is the ability to predict how future linguistic events will be heard by utilizing past experience. Auditory figure-ground plays a role in diminishing sounds that are not important while focusing on others [10]. It is due to this component that someone can listen to another person talking in a railway station, where a lot of environmental noise exists.

Material and methods The Medline search revealed 564 papers when using the keywords 'auditory deficits' and 'mental disorders'. Seventy-nine papers referred specifically to CAPD in connection with mental disorders, as this is a newer term for auditory deficits and one mostly used by audiologists; auditory deficit is a more general term used mostly by psychiatrists. Both terms refer to the same disorder. It is essential to point out that 25 of the 79 papers were published between 2000 and 2003. Schizophrenia was found related to CAPD in 175 papers, 49 of which were published between 2000 and 2003, showing the research focus of the last three years. Learning disabilities were found related to CAPD in 126 papers. Parkinson's disease was related to CAPD in 29 papers. Dyslexia was related to CAPD in 88 papers, 37 of which were published between 2000 and 2003. Alzheimer's disease and auditory deficits are connected in 39 papers. The remaining articles are on depression, alcoholism, anorexia and childhood mental retardation, all being related to some extent to CAPD. Assessment of the CANS is carried out through a great variety of tests that fall into two main categories: psychoacoustic and electrophysiologic testing. Psychoacoustic tests are considered more subjective; electrophysiologic ones are more objective, with the exception of the P300 component.

Results Psychoacoustic tests Learning disabilities, attention deficit disorders and dyslexia are assessed through a great variety of psychoacoustic tests. Age limitations have to be considered [11], and specially designed tests are used for different age groups. When evaluating children who are less than 12 years old, an important step is the Pediatric Speech Intelligibility (PSI) Test. This consists of single words and sentences presented with a competing message at varying levels of difficulty [12]. In this test it is essential that performance be adjusted for language age according to previously determined normative data [13]. Evaluation of this test may reveal the cause of learning disabilities, including dyslexia [14,15].
Children older than 12 years are assessed through a more complex test battery that contains several tests. These tests are based on the stimulation of the auditory system with tones, numbers, syllables, words, and sentences, and evaluation is made according to the different components of auditory processing. One widely used test is that of the dichotic digits, which consists of different pairs of numbers presented simultaneously to each ear [16]. The person under examination has to repeat all four numbers regardless of order. This test is well suited to detecting the auditory deficit of dyslexia, particularly since it does not contain language and phonological parameters [17]. The Staggered Spondaic Word Test (SSW) consists of two-syllable spondaic words that are presented simultaneously to each ear [18]. It is used in the diagnosis of auditory deficits in attention disorders, autism, learning disabilities, and chronic alcoholism [19,20]. A series of experiments was planned by Nielzen and Olsson on the basis of psychoacoustic handling of auditory stimulation. The results of these psychoacoustic experiments show significant differences between a group of schizophrenic patients and a group of reference subjects, thus indicating central auditory processing disorders even in a phase of illness remission or during treatment with neuroleptics [21].

electrophysiologic tests
In all mental disorders assessed with the suspicion of CAPD, an objective measure of the peripheral auditory system is mandatory. The Auditory Brainstem Responses (ABR) measure the electrophysiologic activity from the 8th cranial nerve to the medial geniculate body of the brainstem [22]. A very important element of ABR evaluation is the morphology and synchronization of the waveform; one should always begin the evaluation by observing waveform changes in real time [23]. The Auditory Middle Latency Responses (AMLRs) provide an electrophysiologic measure of primary auditory cortex function [24]. The AMLRs can diagnose central auditory processing disorders in children with learning disabilities [25], patients with Alzheimer's disease [26], adult autistic subjects [27,28], and patients with schizophrenia [29]. The Auditory P300 Response measures hippocampal and auditory cortex function, again from an electrophysiologic point of view [30]. The P300 response has been considered an endogenous event-related potential; endogenous responses depend both on the context within which the auditory stimuli are presented and on the psychologic condition and attention of the subject. P300 has been used in diagnosing CAPD in patients with dementia of the Alzheimer type [31], in monitoring long-term effects of donepezil in patients with Alzheimer's disease [32], in anorexic patients [33], in children with mental retardation during a selective attention task to auditory stimuli [34], and in first-episode and chronic schizophrenia [35]. The Mismatch Negativity Response (MMN) is an event-related evoked potential that measures the electrophysiologic activity of auditory cortex function [36]. The MMN is elicited within 100–250 ms of stimulus change onset. Its applications include detecting CAPD in alcoholism [37], in schizophrenia [38-43], in attention deficit, and in developmental dyslexia [44].
psychoacoustic and electrophysiologic testing according to type of lesion
In the selection of tests for the evaluation of brainstem lesions, the examiner should keep in mind that all psychoacoustic tests have been reported to aid in the diagnosis. According to the studies of Katz [45], the Staggered Spondaic Word Test may help differentiate brainstem from cortical lesions and upper from lower brainstem lesions. Musiek et al [46] concluded that Auditory Brainstem Responses in combination with either Masking Level Differences or the Dichotic Digits Test may be as sensitive in evaluating a group of patients suffering from multiple sclerosis as a seven-test battery. Jerger et al [47] reported that for patients suffering from multiple sclerosis the best test battery was a combination of stapedial reflex measures and speech audiometry. The usual finding in central auditory tests regarding cortical lesions is a deficit or impairment in the ear contralateral to the side of the lesion. Psychoacoustic tests such as Dichotic Digits and SSW in patients with well-documented cortical and hemispheric lesions demonstrate primarily contralateral ear deficits and impairments [48]. Two exceptions that the examiner should always keep in mind are when frequency and duration tests are applied and when compromise of auditory fibers of the corpus callosum has occurred [49]. Regarding interhemispheric dysfunction, test results may be difficult to evaluate. Representation of auditory information at the cortical level is mostly contralateral, as is clearly depicted in dichotic listening situations. When speech responses are required of the subject, auditory information from the right ear is projected to the left hemisphere without the participation of the opposite hemisphere in the production of a speech response. In contrast, auditory stimuli from the left ear must cross the midline through the corpus callosum for the production of a speech response. Accordingly, patients with split-brain disorders subjected to dichotic testing have demonstrated decreased scores for the left ear and enhanced scores for the right ear [50,51]. Considerable evidence has been reported that indicates a relation between various learning disabilities, including dyslexia and attention deficit hyperactivity disorder, and poor performance scores on central auditory tests. Learning disabilities in children might be the expression of various underlying central auditory disorders, whether maturational, developmental, or neurological, as depicted by abnormal CAPD test results [52].

Conclusions
CANS assessment represents a fascinating field. Cooperation of professionals in psychiatry, neurology, neuropsychology, and pediatric psychology with the otolaryngologist-audiologist is a prerequisite. Central auditory processing disorders may co-exist with various mental disorders such as learning disabilities, attention deficit hyperactivity disorder, dyslexia, autism, chronic alcoholism, Alzheimer's disease, adult autistic disorder, schizophrenia, anorexia, and mental retardation. Assessing these disorders is difficult due to the complex anatomy and physiology of the CANS. This explains the great variety of existing methods of testing, which fall into two main categories: those of psychoacoustic methodology and those based on electrophysiologic measures. The physiology of the CANS is still not completely understood, and further research is needed on the development of new tests and the validation of their clinical applicability.

Conflict of interest: none declared
[ "psychoacoustics", "mental disorders", "central auditory processing disorders" ]
[ "P", "P", "P" ]
Rheumatol_Int-3-1-2134974
The potential utility of B cell-directed biologic therapy in autoimmune diseases
Increasing awareness of the importance of aberrant B cell regulation in autoimmunity has driven the clinical development of novel B cell-directed biologic therapies with the potential to treat a range of autoimmune disorders. The first of these drugs—rituximab, a chimeric monoclonal antibody against the B cell-specific surface marker CD20—was recently approved for treating rheumatoid arthritis in patients with an inadequate response to other biologic therapies. The aim of this review is to discuss the potential use of rituximab in the management of other autoimmune disorders. Results from early phase clinical trials indicate that rituximab may provide clinical benefit in systemic lupus erythematosus, Sjögren’s syndrome, vasculitis, and thrombocytopenic purpura. Numerous case reports and several small pilot studies have also been published reporting the use of rituximab in conditions such as myositis, antiphospholipid syndrome, Still’s disease, and multiple sclerosis. In general, the results from these preliminary studies encourage further testing of rituximab therapy in formalized clinical trials. Based on results published to date, it is concluded that rituximab, together with other B cell-directed therapies currently under clinical development, is likely to provide an important new treatment option for a number of these difficult-to-treat autoimmune disorders.

Background
Autoimmunity is widely believed to be fundamental to the development and progression of many rheumatic diseases—rheumatoid arthritis (RA) and systemic lupus erythematosus (SLE) being the best-known examples. The function of B cells in autoimmunity is still not fully understood, although evidence is mounting that they play an essential role in the process. In addition to their well-known function in synthesizing antibodies, B cells act in antigen presentation and as critical regulators of the development and function of T cells [21]. B cells are also the source of rheumatoid factor, levels of which are strongly correlated with disease severity in RA [103]. These and other lines of evidence provided the rationale for testing whether B cell depletion would be an effective strategy for treating rheumatic diseases. The availability of rituximab (RITUXAN®; Genentech/Biogen-IDEC, South San Francisco, CA, USA), a genetically engineered monoclonal antibody directed against the B cell-specific antigen CD20 [60], enabled this hypothesis to be tested. The first results, demonstrating sustained clinical responses coupled with B cell depletion in 5 RA patients treated with rituximab [30], ignited intense interest in the wider potential of B cell depletion therapy in autoimmune diseases. A full-scale clinical trial program led to the approval of rituximab in 2006 for the treatment of RA in patients with an inadequate response to anti-tumour necrosis factor (TNF) therapy. A number of other B cell-directed agents are currently in clinical development. Among the most advanced is epratuzumab, a humanized monoclonal antibody directed against CD22, another B cell-specific marker [90]. Epratuzumab has been tested in patients with Sjögren’s syndrome (SS) [89] and results were published recently of an open-label clinical trial involving patients with SLE [29]. Another strategy under investigation is the neutralization of B cell survival factors.
BAFF (also known as B lymphocyte stimulator, BLyS) is essential for the survival of B cells and is involved in many other aspects of B cell biology, including germinal center maintenance, isotype switching, and regulation of B cell-specific markers [48]. Belimumab is an anti-BAFF monoclonal antibody that has reached Phase II trials in SLE and RA [27], while atacicept (previously known as TACI-Ig), a recombinant fusion protein that neutralizes both BAFF and APRIL (a related B cell survival factor) [41], has undergone Phase I evaluation in SLE. A more in-depth review of current B cell-targeted approaches being developed to treat autoimmune disorders was published recently [31]. The aim of this review is to discuss the potential utility of B cell-directed therapy in the management of autoimmune disorders. As the first of these agents to be approved for clinical use, rituximab will be the focus of this article. In addition, since several excellent reviews have been published recently covering the use of rituximab in RA [25, 31, 58], this review will discuss results from the clinical testing of rituximab in autoimmune disorders other than RA. Information from case reports, clinical trials, and other studies was gathered from a search of the Medline database up to and including June 2007.

Clinical use of rituximab
Rituximab has been tested in a wide range of autoimmune conditions, with clinical trials being most advanced in SLE and SS. A summary of the published clinical data in these and other autoimmune disorders is presented in Table 1.

Table 1 Summary of published data from clinical studies of rituximab in autoimmune disorders other than RA

Systemic lupus erythematosus
- Phase I/II dose escalation [59]; 18 pts. Regimen: single RTX infusion of 100 mg/m2 (low dose, n = 6) or 375 mg/m2 (intermediate dose, n = 6), or 4 weekly RTX infusions of 375 mg/m2 (high dose, n = 6). Results: improved SLAM score at 12 months in 11/17 (65%) evaluable pts.
- Open-label pilot [84]; 10 pts (a). Regimen: RTX (375 mg/m2) once weekly for 4 weeks + prednisolone (0.5 mg/kg/day for 10 weeks, tapered by 4 mg every 2 weeks thereafter). Results: partial remission (improvement in renal parameters) in 8/10 pts within a median (range) of 2 (1–4) months; of these, 5 pts had complete remission at 3 months (median), sustained for ≥12 months in 4 pts.
- Open-label pilot [55]; 24 pts. Regimen: RTX (1,000 mg) + CyP (750 mg), two infusions 2 weeks apart. Results: improvements in global and all 8 individual BILAG scores at 6 months in 23/24 pts (96%).
- Open-label pilot [85]; 11 pts. Regimen: RTX (375 mg/m2) once weekly for 4 weeks + CyP (500 mg) co-administered at first infusion; immunosuppressive therapy at baseline had been unchanged for ≥3 months prior to the study and was continued until Month 6, following which dose reduction was allowed. Results: 6 complete and 5 partial responses (follow-up through 2 yrs), with an overall significant reduction in median BILAG scores.

Sjögren's syndrome
- Single-centre, open-label Phase II [73]; 15 pts. Regimen: RTX (375 mg/m2) once weekly for 4 weeks. Results: improvements in subjective and objective parameters of disease activity (salivary and lacrimal gland function) in all 14 pts who completed the study; of the 7 pts with MALT-type lymphoma, 3 had complete remission, while disease was stable in 3 pts and progressive in 1 pt.
- Retrospective [83]; 16 pts. Regimen: RTX (375 mg/m2) once weekly for 4 weeks (6 weeks in 1 pt with lymphoma); 1 pt with systemic manifestations received RTX 2 × 1,000 mg; all pts received methylprednisolone (100 mg) and either oral cetirizine (20 mg) or dexchlorpheniramine (5 mg) before the RTX infusion. Results: efficacy observed in 9/11 pts with systemic manifestations (improvement in systemic symptoms) and in 4/5 pts with lymphomas (disease remission).
- Open-label pilot [26]; 16 pts. Regimen: RTX (375 mg/m2) once weekly for 2 weeks. Results: significant improvement in mean VAS scores for fatigue and dryness, tender point count, and quality of life (at Week 12), and for all 4 VAS scores, tender joint count, tender point count, and quality of life (at Week 36).

Vasculitis
- Case series [34]; 9 pts (b). Regimen: RTX (500 mg [375 mg/m2 in 1 patient]) once weekly for 2 weeks (n = 3) or 4 weeks (n = 6). Results: remission (BVAS = 0) in 8 pts and partial remission (BVAS = 1) in 1 pt at 6 months.
- Case series [50]; 11 pts (c). Regimen: RTX (375 mg/m2) once weekly for 4 weeks + prednisone (≤1 mg/kg/day, tapering once disease activity improved). Results: remission (BVAS/WG = 0) in all 11 pts (10 pts within 6 months); tapering of prednisone dose (median = 0; range 0–1.5 mg/kg/day) in all pts.
- Open-label pilot [51]; 10 pts (d). Regimen: RTX (375 mg/m2) once weekly for 4 weeks + prednisone (≤1 mg/kg/day, tapering once disease activity improved). Results: remission (BVAS/WG = 0) in all pts within 3 months; tapering of prednisone dose to 0 in all pts by 6 months.
- Case series [88]; 10 pts (e). Regimen: RTX (375 mg/m2) once weekly for 4 weeks + prednisone (≤2 mg/kg/day, tapering once disease activity improved). Results: complete response (BVAS/WG = 0) in 9 pts and partial response (BVAS/WG = 1) in 1 pt at 6 months; at follow-up (median 34 months; range 26–45 months), 3 pts had relapsed but had new sustained responses following re-treatment.
- Open-label [85]; 11 pts (c). Regimen: RTX (375 mg/m2) once weekly for 4 weeks + CyP (500 mg) co-administered at first infusion; immunosuppressive therapy at baseline had been unchanged for ≥3 months prior to the study and was continued until Month 6, following which dose reduction was allowed. Results: remission in 9/11 pts (BVAS = 0) and partial remission in 1 pt (BVAS = 2); 6/10 pts subsequently relapsed but had new sustained responses following re-treatment with RTX (2 × 1,000 mg, 2 weeks apart).
- Case series [8]; 8 pts (d). Regimen: RTX (375 mg/m2) once every 4 weeks for 4 cycles + standard treatment (CyP 2 mg/kg once daily or 15–20 mg/kg every 18–21 days, or methotrexate 0.3 mg/kg once weekly). Results: remission (BVAS = 0) in 2 pts, partial remission in 1 pt, unchanged disease activity in 3 pts, and progression in 2 pts 1 month after the final cycle.

Myositis
- Open-label pilot [56]; 7 pts (f). Regimen: RTX (375 mg/m2) once weekly for 4 weeks + standard treatment (included azathioprine, corticosteroids, CyP, and intravenous immunoglobulin). Results: clinical improvement (increased muscle strength relative to baseline, assessed using dynamometry) in all 6 evaluable pts.

Idiopathic thrombocytopenic purpura
- Open-label pilot [87]; 25 pts. Regimen: RTX (375 mg/m2) once weekly for 4 weeks. Results: clinical response (rise in platelet counts) at end of therapy without need for further treatment in 13/25 (52%) pts; responses were sustained for ≥6 months in 7 pts.
- Pooled data from two pilot trials [22]; 57 pts. Regimen: RTX (375 mg/m2) once weekly for 4 weeks; 17 pts received prednisone (60 mg with Infusion 1 and 20 mg with Infusion 2). Results: clinical response (rise in platelet counts) at end of therapy without need for further treatment in 31/57 (54%) pts; 29/31 responses occurred within 8 weeks of initiating RTX therapy; 15/16 pts with complete clinical response (rise in platelet counts to normal levels) maintained the response for ≥12 months.
- Retrospective national multicenter [13]; 35 pts. Regimen: RTX (375 mg/m2) once weekly for 4 weeks + prednisone; 6 pts received a fixed dose of 500 mg supplemented by 100 mg methylprednisolone or 50–100 mg prednisone + antihistamine prior to the RTX infusion. Results: clinical response (rise in platelet counts) within 3–8 weeks for 17/39 (44%) treatments (4 pts received 2 cycles); pts with complete or partial responses had been in remission for a median of 47 weeks.
- Retrospective national multicenter (g) [71]; 89 pts. Regimen: RTX (375 mg/m2) once weekly for 4 weeks (n = 77) or for 1–6 weeks (n = 12); 31 pts received RTX with other therapies (corticosteroids [n = 20], IVIG [n = 2], corticosteroids + IVIG [n = 3], others [n = 6]). Results: clinical response (rise in platelet counts) in 49/89 (55%) pts; 31 pts maintained the response for a median (range) of 9 (2–42) months, 12 pts for >12 months.

Thrombotic thrombocytopenic purpura
- Open-label prospective multicenter [35]; 11 pts. Regimen: RTX (375 mg/m2) once weekly for 4 weeks + premedication with IV steroids (30 mg), IV dexchlorpheniramine (10 mg), and IV paracetamol (1 g); patients with acute TTP (n = 5) continued plasma infusions for ≥3 weeks, followed by tapering at the onset of remission. Results: clinical remission (regression of visceral ischemic signs and normalization of blood parameters) in all patients with acute TTP; continued remission in patients with disease remission at enrolment (6–11 months' follow-up); biologic remission (≥10% recovery of ADAMTS-13 activity and disappearance of anti-ADAMTS-13 antibodies) in all pts.
- Open-label prospective multicenter [82]; 25 pts. Regimen: RTX (375 mg/m2) once weekly for 4 weeks + premedication with IV hydrocortisone (100 mg), IV dexchlorpheniramine (10 mg), and oral paracetamol (1 g) immediately following PEX; PEX was continued until clinical remission was achieved. Results: all patients achieved clinical remission (sustained normal platelet count, absence of clinical manifestations of TTP, and cessation of PEX) in a median of 11 days after initiating rituximab; ADAMTS-13 activity returned to normal levels in 21/25 pts; anti-ADAMTS-13 antibodies disappeared in 23/25 pts.
- Retrospective comparative 2-center [45]; 15 pts. Regimen: RTX (375 mg/m2) once weekly for 1–8 weeks + standard therapy (PEX + corticosteroids + various agents added as second-line therapy, if needed) (n = 8), or standard therapy alone (n = 7). Results: clinical remission (absence of clinical manifestations of TTP and normalization of blood parameters) in 100% (RTX group) vs. 66% (standard therapy group) (p = 0.0025).

Mixed cryoglobulinemia
- Open-label prospective [80]; 20 pts (h). Regimen: RTX (375 mg/m2) once weekly for 4 weeks. Results: complete response (improvement of clinical signs and decline in cryocrit) in 16/20 (80%) pts; the response was maintained for ≥12 months in 12/16 responders.
- Case series [101]; 15 pts (i). Regimen: RTX (375 mg/m2) once weekly for 4 weeks + prednisone (<0.5 mg/kg/day), if already administered at recruitment. Results: improved clinical symptoms (including cutaneous manifestations, lymphoma features, and neuropathic symptoms) in all 15 pts.

Cold agglutinin disease
- Open-label Phase II [11]; 27 pts. Regimen: RTX (375 mg/m2) once weekly for 4 weeks; re-treatment (if required): RTX (same regimen) plus interferon-α (5 million units three-times weekly for 20 weeks). Results: clinical response (improvement in anaemia, clinical symptoms, and histopathology) in 14/27 (52%) pts after the first treatment and in 6/10 pts after re-treatment; median (range) time to response was 1.5 (0.5–4) months.
- Phase II multicenter [81]; 20 pts (j). Regimen: RTX (375 mg/m2) once weekly for 4 weeks. Results: 1 pt showed a complete response (normalization of hemoglobin levels, absence of signs of hemolysis, and loss of clinical symptoms) and 8 pts had a partial response (increase in hemoglobin levels ≥1.0 g/dl for ≥1 month, no need for erythrocyte transfusions, improvement in clinical symptoms); of the 9 responders, 8 relapsed and 1 remained in remission at 48 weeks.

Footnotes: (a) proliferative lupus nephritis; (b) ANCA-positive microscopic polyangiitis (n = 2) and ANCA-positive Wegener's granulomatosis (n = 7); (c) ANCA-associated vasculitis; (d) ANCA-positive refractory Wegener's granulomatosis; (e) ANCA-positive microscopic polyangiitis (n = 2) and ANCA-positive Wegener's granulomatosis (n = 8); (f) dermatomyositis; (g) clinical results were obtained from physicians via a questionnaire (original patient data were not analyzed); (h) HCV-positive type II or type III mixed cryoglobulinemia; (i) type II mixed cryoglobulinemia (HCV-positive [n = 12]; associated with SS [n = 1]; "essential" disease [n = 2]); (j) idiopathic CAD (n = 13) and CAD associated with malignant B-cell lymphoproliferative disease (n = 7).

Abbreviations: ADAMTS-13, a disintegrin-like and metalloproteinase with thrombospondin-like type I motif 13; ANCA, anti-neutrophil cytoplasmic antibody; BILAG, British Isles Lupus Assessment Group; BVAS, Birmingham vasculitis activity score; BVAS/WG, BVAS modified for Wegener's granulomatosis; CAD, cold agglutinin disease; CyP, cyclophosphamide; HCV, hepatitis C virus; IVIG, intravenous immunoglobulin; MALT, mucosa-associated lymphoid tissue; PEX, plasma exchange; pts, patients; RTX, rituximab; SLAM, systemic lupus activity measure; VAS, visual analog scale.

Systemic lupus erythematosus
Traditional treatments for SLE include nonsteroidal anti-inflammatory drugs, antimalarials, corticosteroids, methotrexate, mycophenolate, and cytotoxic drugs such as cyclophosphamide (often in combination). However, these therapies are associated with many potential side effects and are usually only partially effective in the long term [46]. The wide body of evidence indicating that B cells play a central role in the etiopathology of SLE has focused attention on the potential benefits of rituximab and other B cell-targeted therapies in the disease [33, 57, 78]. Individual case reports and case series, together with encouraging results from early phase clinical trials, indicate that rituximab is likely to provide significant clinical benefit for at least a subset of SLE patients. For example, in a dose-escalation study involving 17 evaluable patients, significant improvements in the systemic lupus activity measure (SLAM) score were observed in those patients (11/17) who achieved concomitant profound B cell depletion; efficacy persisted for 12 months and no significant adverse events were reported [59]. Analysis of some of the patients in this trial revealed that clinical response to rituximab correlated closely with the FcγRIIIa genotype of individual patients [6], as observed previously in studies of the rituximab responses of patients with follicular lymphoma [96].
In another open-label study, 23/24 patients achieved depletion of B cells following treatment with rituximab (two 1,000 mg infusions separated by 2 weeks); depletion lasted for 3–8 months—except in 1 individual, who remained depleted after 4 years [55]. Clinical improvements observed in this study occurred in each of the 8 organs/systems assessed using the British Isles Lupus Assessment Group (BILAG) system. A recent update from the same group—covering a total of 41 patients with a mean (range) follow-up period of 37 (6–79) months—reported that one-third of patients remained well following B cell depletion, without the need for immunosuppressive agents [64]. Thirteen patients had been re-treated with rituximab. Three serious adverse events (1 pneumococcal sepsis, 1 severe serum sickness-like reaction, and 1 seizure related to hyponatremia) and 2 deaths (1 involving varicella pneumonitis and the other involving pancarditis) had occurred in this cohort over the 7-year observation period. In another trial involving patients with active or refractory SLE, with a follow-up period of 2 years, all 11 patients in the study responded to a single course of rituximab, with 6 achieving a full response and 5 a partial response; although relapse was common (64%), re-treatment was rapidly effective [85]. In a recently reported case series of six patients with aggressive refractory SLE, rituximab therapy (doses of rituximab and use of combination drugs varied between patients) resulted in partial clinical improvements in five cases [40]. Rituximab has also shown effectiveness in pilot studies involving patients with the common severe complication lupus nephritis [42, 84, 95] and in patients with refractory SLE involving the central nervous system [92]. Although most studies to date indicate that B cell depletion therapy is likely to be useful in SLE, the variability of responses observed in the published SLE trials remains to be explained. Ongoing Phase II/III randomized controlled trials should provide some insight into this question. In addition, although the overall tolerability of rituximab in SLE appears to be good, the Food and Drug Administration recently issued an alert concerning two spontaneously reported fatal cases of progressive multifocal leukoencephalopathy (PML), due to JC polyomavirus reactivation, in patients with SLE who had received rituximab therapy [38]. It is unclear whether these cases were related to rituximab treatment, since only two cases have been reported and PML has also been reported in >20 SLE patients not treated with rituximab.

Sjögren's syndrome
Sjögren's syndrome is a chronic autoimmune disorder of the exocrine glands affecting approximately 1% of the adult US population. The syndrome often occurs in the presence of another autoimmune disorder such as RA or SLE [37]. The etiopathology of SS is not fully understood; however, disturbances in B cell biology are considered to play an important role [43]. A number of case reports and pilot studies have been published that describe the successful treatment of SS with rituximab [2, 73, 76, 93]. In a recent trial involving 16 female patients with systemic complications of primary SS, rituximab therapy led to B cell depletion and decreased levels of various B cell markers; with a median follow-up period of 14.5 months, clinical efficacy was observed in 4/5 patients with lymphomas and in 9/11 patients with other systemic manifestations [83].
Another recent study investigated the effects of rituximab (two infusions of 375 mg/m2 separated by 1 week) in 16 patients with primary SS [26]. Rituximab therapy, which was administered using a slow initial rate of infusion without steroid premedication, was well tolerated, and overall improvements were observed in subjective parameters of disease activity and in quality of life after both 12 and 36 weeks' follow-up. Results were presented recently from the first double-blind, randomized, controlled study of rituximab in SS [24]. In this 20-patient pilot study, subjects received either rituximab (two 1,000 mg infusions separated by 2 weeks) or placebo. Although patient responses were highly variable and there was a marked placebo effect, a higher proportion of patients in the rituximab group achieved improvement in fatigue (the primary efficacy endpoint) than in the placebo group (48 vs. 20%); this difference was not statistically significant. Significantly greater improvements with rituximab over placebo in the social functioning aspect of the quality of life assessment were also noted.

Vasculitis
Vasculitis refers to a collection of rare inflammatory diseases that involve the blood vessel walls and surrounding interstitium. A subset of these diseases, including Wegener's granulomatosis (WG), microscopic polyangiitis, and Churg–Strauss syndrome, is characterized by the presence of anti-neutrophil cytoplasmic antibodies (ANCAs) [97]. The mainstay of current therapy in vasculitis involves glucocorticoids, cyclophosphamide, and—more recently—methotrexate and azathioprine [54]. However, these approaches are not always effective and are often limited by significant toxicity. B cells have been implicated in the pathogenesis of ANCA-associated vasculitis [20], indicating that rituximab may be an effective treatment option. In addition to a number of individual case reports, results have been published recently from several small open-label trials of rituximab in vasculitis. In a series of nine cases of ANCA-positive vasculitis resistant to conventional therapy, rituximab therapy produced complete responses in eight patients and a partial response in the ninth [34]. Keogh and colleagues have conducted small, prospective, open-label trials in both ANCA-associated vasculitis and WG. The vasculitis trial involved 11 patients whose disease was either refractory to cyclophosphamide or in whom cyclophosphamide was contraindicated [50]. Following infusions with rituximab, circulating B cells became undetectable in all patients and ANCA titers decreased significantly. Clinical remission was achieved in all patients and was maintained while B cells were undetectable. In ten patients with refractory WG treated with prednisone (1 mg/kg/day) plus rituximab (four consecutive weekly infusions of 375 mg/m2) [51], therapy was well tolerated and—after 3 months—all patients had achieved clinical remission (reduction in disease activity score to 0); in addition, all patients were able to stop glucocorticoids by 6 months. Following the recurrence of raised ANCA titers, five patients in the trial were successfully re-treated with rituximab. Results were also recently published of long-term follow-up of ten patients with ANCA-associated vasculitis who had been treated with rituximab [88]: patients had received four consecutive weekly doses of rituximab (375 mg/m2) and all experienced rapid clinical improvement at 6 months.
Although three patients subsequently relapsed, re-treatment was effective. In addition, ANCA titers decreased significantly in all patients. Of 11 patients with refractory ANCA-associated vasculitis who were treated with rituximab in another recently published pilot study, 10 showed either complete or partial responses to a course of rituximab together with a single dose of cyclophosphamide [85]. In contrast to the above findings, one recent study found that rituximab was less effective in a cohort of eight patients with refractory WG [8]. In this trial, rituximab was given every fourth week. Interestingly, all patients in this study had particular granulomatous manifestations, consisting of retro-orbital granulomata (n = 5), nodules of the lungs (n = 1), and subglottic stenosis (n = 2). Although three patients experienced some clinical improvement, ANCA titers were not affected by rituximab therapy (except in a single patient). A smaller Norwegian study had also previously found only temporary responses to rituximab in three patients with WG, two of whom had granulomatous masses [68]. In a recent review of published studies in this area, it was concluded that rituximab may be an effective treatment in patients with refractory ANCA-associated vasculitis (with the probable exception of WG patients with retro-orbital granulomas, who tended to be less responsive to rituximab therapy) [98]. Since then, however, case reports have appeared describing the successful use of rituximab in patients with granulomatous involvement [79, 91]. In addition, results from a recent case series of eight WG patients indicated that, while vasculitis symptoms tended to disappear relatively quickly, granulomatous manifestations usually regressed more slowly (sometimes over several months) [14]. With regard to other forms of ANCA-associated vasculitis, two individual case reports have been published recently detailing the successful treatment of Churg–Strauss syndrome with rituximab [49, 52].

Thrombocytopenic purpura and other hematologic disorders
A number of autoimmune disorders of hemostasis, most notably idiopathic thrombocytopenic purpura (ITP) and thrombotic thrombocytopenic purpura (TTP), have been examined for their potential responsiveness to rituximab in several small trials. In a study involving a cohort of 25 patients with chronic ITP that had proved resistant to conventional therapies [87], patients received weekly rituximab at a dose of 375 mg/m2 for 4 weeks. The overall response rate (comprising those with complete, partial, and minor responses) was 52%; responses were sustained for at least 6 months in 7 patients. Complete and partial responses were associated with rapid normalization of platelet concentrations. A similar initial response rate (54%) was reported from a larger follow-up trial involving 57 patients; sustained responses were observed in 32% of the study participants [22]. Other reports include a multicenter trial in 35 adults with refractory ITP conducted in Denmark, which resulted in a 44% overall success rate based on predefined rises in platelet concentrations [13]. An indirect retrospective survey of findings from 89 ITP patients treated at multiple centers in Spain indicated that rituximab therapy led to sustained responses in 35% of patients with a median follow-up of 9 months (range 2–42 months) [71]. A review was published recently of the clinical outcomes of patients with chronic ITP who were re-treated with rituximab following an initial response to therapy [72].
All 9 second responses recorded in this report were classified as complete. An interesting additional finding was the higher female:male ratio among the nine re-treated patients compared with the population of patients originally treated across the published studies identified, suggesting that female ITP patients are more likely than male patients to respond to rituximab therapy. In a recently published letter, early administration of rituximab was reported to be associated with a higher response rate in chronic ITP [102]. The efficacy and safety of rituximab in adults with ITP were the subject of a recently published systematic review [9]. Based on 19 reports (313 patients) deemed eligible for the analysis up to April 2006, rituximab therapy was associated with mean complete response (platelet count >150 × 10^9 cells/l) and overall response (platelet count >50 × 10^9 cells/l) rates of 44 and 63%, respectively. Significant toxicities, including death, occurred in 3% of included cases, although the deaths were not necessarily attributable to rituximab therapy. The authors noted the lack of randomized controlled studies of rituximab therapy in ITP. A number of studies have also been conducted in patients with refractory or relapsing TTP. In addition to several case reports and small case series [3, 17, 69, 70, 74, 99], results from the first prospective trial have been published [35]. This study recruited 11 patients (6 enrolled during an acute refractory phase and 5 during a remission phase); following rituximab therapy (375 mg/m2 once weekly for 4 weeks), clinical remission was observed in all 6 acute cases, while all 5 patients enrolled during remission remained in clinical remission during the 6–11 month follow-up period. Biologic remission (disappearance of anti-ADAMTS-13 [a disintegrin and metalloproteinase with thrombospondin motif 13] antibodies, which occur in the great majority of patients with acquired TTP [75]) was achieved in all patients 7–24 weeks after the final rituximab infusion. Another more recent study involved 25 patients with acute refractory/relapsing idiopathic TTP, who were given rituximab in conjunction with plasma exchange (PEX) because of progressive clinical disease [82]. It was reported that all 25 patients in this trial achieved complete clinical and laboratory remission (sustained normal platelet count, absence of clinical manifestations of TTP, and cessation of PEX) in a median of 11 days following the initiation of rituximab therapy. Restoration of ADAMTS-13 activity and disappearance of anti-ADAMTS-13 antibodies occurred in the vast majority of cases. At the time of publication, it was stated that none of the patients had clinically relapsed, with a median (range) follow-up of 10 (1–33) months. In another recent retrospective study, the clinical outcome of patients who had received rituximab (375 mg/m2 once weekly for a maximum of 8 weeks) together with standard therapy (PEX + corticosteroids) was compared with that of patients who had received standard therapy alone [45]. The remission rate in the rituximab group was significantly greater than that observed in the standard therapy group (100 vs. 66%; P = 0.0025). Interestingly, all three of the TTP studies described above reported good tolerability to rituximab therapy.
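Nearly all of the regimens discussed above dose rituximab at 375 mg/m2 of body surface area (BSA) per infusion. Purely as an illustration (the cited studies do not report how BSA was estimated), the sketch below converts that per-m2 dose into an absolute dose using the Mosteller formula, one common BSA estimator; the height and weight in the example are hypothetical.

```python
import math

# Illustrative only: converting the 375 mg/m^2 rituximab dose used in most
# of the studies above into an absolute per-infusion dose. The Mosteller
# BSA formula is an assumption of this sketch; the trials do not state
# which BSA estimator they used.

def mosteller_bsa(height_cm: float, weight_kg: float) -> float:
    """Body surface area in m^2 by the Mosteller formula."""
    return math.sqrt(height_cm * weight_kg / 3600.0)

def infusion_dose_mg(height_cm: float, weight_kg: float,
                     dose_per_m2: float = 375.0) -> float:
    return dose_per_m2 * mosteller_bsa(height_cm, weight_kg)

# Hypothetical 170 cm, 70 kg adult: BSA ~1.82 m^2 -> ~682 mg per infusion.
print(round(infusion_dose_mg(170, 70)))
```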
There have also been sporadic case reports describing the successful use of rituximab in a number of other rare hematologic disorders, including Evans' syndrome [61, 65], mixed type II cryoglobulinemia [15, 80, 101], and cold agglutinin disease [11, 81]. In addition, we have recently reported on the successful use of rituximab in RA patients with life-threatening hemorrhage due to the presence of Factor VIII inhibitor [67]. By contrast, a case report describing the failure of rituximab therapy in a hemophiliac patient with Factor VIII inhibitor has also been published [18]. A recent analysis of published case reports of patients with acquired antibodies to Factor VIII indicated that rituximab therapy was associated with a similar rate of clinical remission (approximately 80%) compared with the standard treatment modality (cyclophosphamide + prednisolone) [86].

Myositis
Myositis comprises a group of inflammatory myopathies, of which polymyositis, dermatomyositis, and inclusion body myositis are the best defined clinically. The etiopathologies of this group of diseases are currently poorly understood, although autoimmunity is thought to play an important role [19]. The aim of a recent open-label pilot study involving seven patients with dermatomyositis was to test the hypothesis that B cells play a critical role in this disease [56]. The results of this trial, in which patients received four infusions of rituximab (375 mg/m2) at weekly intervals, showed that rituximab therapy was well tolerated and led to significant clinical improvements in the six evaluable patients who completed 1 year of follow-up. A number of other small pilot studies and case reports have also appeared recently detailing the generally successful use of rituximab in patients with dermatomyositis or polymyositis [7, 12, 16, 28, 53, 62, 66].

Antiphospholipid syndrome
Antiphospholipid syndrome (APS), a rare disorder mostly affecting young adults, is defined by the presence of autoantibodies against phospholipids; the main clinical manifestations are venous or arterial thrombosis and obstetric complications, although the link between antiphospholipid antibodies and these clinical features has not been firmly established [39]. The traditional approach to treatment mainly involves the use of anticoagulation therapies. However, data indicating a link between raised circulating CD5+ B cells and high levels of antiphospholipid antibodies in APS patients [100] suggest that APS may be amenable to B cell-directed therapies. To date, only a small number of case reports have been published which detail attempts to manage APS with rituximab. Three of these studies [5, 77, 94] reported successful clinical outcomes following rituximab therapy, while the other [4] reported only a limited effect of rituximab on thrombocytopenia and anticardiolipin antibodies in a patient with primary APS. Although the data are currently limited, the striking clinical successes seen in some patients suggest that pilot studies with rituximab in APS should be conducted in the near future.

Still's disease
Adult-onset Still's disease (AOSD) is a systemic inflammatory disorder of unknown etiology. Traditional therapies include NSAIDs, corticosteroids, and—more recently—disease-modifying anti-rheumatic drugs [47]. A number of trials have also been conducted with biologic agents (including TNF inhibitors), with some promising results [32]. One case report was published recently that described the successful use of rituximab in a patient with AOSD [1].
This report, together with the author's unpublished observations of two patients with AOSD refractory to cytotoxic agents who benefited from repeated rituximab infusion therapy, suggests that rituximab may be a future treatment option for this disease.

Neurologic disorders
As reviewed recently by Finsterer [36], rituximab has been tested in a number of immune-mediated peripheral neuropathies with promising results. Its potential clinical utility in neurological diseases of the central nervous system such as multiple sclerosis (MS) remains to be explored. Encouragingly, pilot studies have shown that rituximab therapy results in partial depletion of B cells from the cerebrospinal fluid of patients with progressive MS [23, 63]. The results of the first Phase I and II trials of rituximab in MS were presented recently [10, 44]. In the placebo-controlled Phase II study [44], involving 104 patients with relapsing-remitting MS, a single course of rituximab (two infusions of 1,000 mg given 2 weeks apart) resulted in significantly fewer inflammatory brain lesions and relapses over the 6-month observation period compared with placebo. Rituximab treatment was reported to have been well tolerated.

Conclusions
Recent advances in our understanding of autoimmunity have opened up new avenues for exploring novel targeted therapies in a wide range of diseases. The role of B cells in many autoimmune disorders is now widely accepted, in many cases through the demonstration that B cell depletion using rituximab can often be very effective clinically. The potential utility of rituximab and other B cell-directed therapies is currently being studied in several of these diseases, including SLE, SS, and vasculitis. Although most of the findings to date have been encouraging, a significant proportion of the information derives from case reports and small case series; this, together with the lack of randomized controlled trials in most of the diseases discussed in this review, makes a degree of positive reporting bias likely. Therefore, until large-scale clinical trial data are available, it would be prudent to proceed with caution regarding the use of rituximab outside its approved indications. Although rituximab tolerability was generally reported as favorable in most of the studies covered in this review, the true incidence of associated adverse events (e.g., serious infections, serum sickness-like reactions, and PML) will only become clear when larger numbers of patients have been treated in each disease entity. Important questions also remain regarding the optimal rituximab dosing modalities for each disease (for example, the dose and frequency of treatment, when re-treatment should be considered, and whether to use combination therapies). Nevertheless, based on the information published to date, it seems likely that B cell depletion therapy, using rituximab and—in the future—agents currently under development, will offer an effective new approach for the management of many of these burdensome and difficult-to-treat conditions.
[ "biologic therapies", "rituximab", "cd20", "lupus", "sjögren’s syndrome", "vasculitis", "thrombocytopenic purpura", "b-lymphocytes" ]
[ "P", "P", "P", "P", "P", "P", "P", "U" ]
Pediatr_Nephrol-3-1-2064944
Chronic kidney disease in children: the global perspective
In contrast to the increasing availability of information pertaining to the care of children with chronic kidney disease (CKD) from large-scale observational and interventional studies, epidemiological information on the incidence and prevalence of pediatric CKD is currently limited, imprecise, and flawed by methodological differences between the various data sources. There are distinct geographic differences in the reported causes of CKD in children, in part due to environmental, racial, genetic, and cultural (consanguinity) differences. However, a substantial percentage of children develop CKD early in life, with congenital renal disorders such as obstructive uropathy and aplasia/hypoplasia/dysplasia being responsible for almost one half of all cases. The most favored end-stage renal disease (ESRD) treatment modality in children is renal transplantation, but a lack of health care resources and high patient mortality in the developing world limits the global provision of renal replacement therapy (RRT) and influences patient prevalence. Additional efforts to define the epidemiology of pediatric CKD worldwide are necessary if a better understanding of the full extent of the problem, areas for study, and the potential impact of intervention is desired.

Introduction
Most epidemiological information on chronic kidney disease (CKD) originates from data available on end-stage renal disease (ESRD), the terminal stage of CKD when treatment with renal replacement therapy (dialysis or transplant) becomes necessary to sustain life. Little information is available on the prevalence of earlier stages of CKD, as patients are often asymptomatic. The epidemiological studies that have been performed provide evidence that ESRD represents the “tip of the iceberg” of CKD and suggest that patients with earlier stages of disease are likely to exceed those reaching ESRD by as much as 50 times [1]. Worldwide, the number of patients with CKD is rising markedly, especially in adults, and CKD is now being recognized as a major public health problem that is threatening to reach epidemic proportions over the next decade [2]. In North America, up to 11% of the population (19 million) may have CKD [1], and surveys in Australia, Europe, and Japan describe the prevalence of CKD to be 6–16% of their respective populations [3, 4]. In North America alone, more than 100,000 individuals entered ESRD programs in 2003 (adjusted incidence rate: 341 new cases per million population), with a prevalence count of more than 450,000 as of December 2003 (prevalence rate: 1,509 per million population) [5]. Not surprisingly, the cost of treating patients with ESRD is substantial and poses a great financial challenge. The economic cost of North American ESRD programs reached $25.2 billion in 2002, an 11.5% increase over the previous year, and is expected to reach $29 billion by 2010 [2]. Two factors, aging and the global epidemic of type-II diabetes mellitus, are primarily responsible for the increasing incidence of CKD in adults. In contrast, pediatric ESRD patients (<20 years of age) constitute a very small proportion of the total ESRD population. However, they pose unique challenges to providers and to the health care system, which must address not only the primary renal disorder but also the many extrarenal manifestations that affect growth and development.
In North America, children younger than 20 years of age account for less than 2% of the total ESRD patient population, and the prevalence of patients aged 0–19 years has grown a modest 32% since 1990. This is in contrast to the 126% growth experienced by the entire ESRD population over the same time period [5]. Nonetheless, CKD in children is a devastating illness, and the mortality rate for children with ESRD receiving dialysis therapy is between 30 and 150 times that of the general pediatric population [6, 7]. In fact, the expected remaining lifetime for a child 0–14 years of age and on dialysis is only 20 years [6]. Therefore, the diagnostic and therapeutic approach to CKD must emphasize primary prevention, early detection, and aggressive management. Knowledge of the epidemiology of CKD and its associated clinical manifestations is a crucial component of this effort by helping to target key patient populations at risk, by quantifying the extent of the problem, and by facilitating an assessment of the impact of intervention.

Classification of CKD
There is limited information on the epidemiology of CKD in the pediatric population. This is especially true for less advanced stages of renal impairment that are potentially more susceptible to therapeutic interventions aimed at changing the course of the disease and avoiding ESRD. As CKD is often asymptomatic in its early stages, it is both underdiagnosed and, as expected, underreported. This is in part the result of the historical absence of a common definition of CKD and a well-defined classification of its severity. The current CKD classification system described by the National Kidney Foundation’s Kidney Disease Outcomes Quality Initiative (NKF-K/DOQI) has helped remedy the situation. According to the K/DOQI scheme, CKD is characterized by stage 1 (mild disease) through stage 5 (ESRD) (Table 1) [8]. By establishing a common nomenclature, staging has been helpful for patients, general health care providers, and nephrologists when discussing CKD and anticipating comorbidities and treatment plans. The classification system has, however, been subject to debate, as it is argued that stages 1 and 2 would be better defined by the associated abnormalities (e.g. proteinuria, hematuria, structural anomalies) rather than being classified as CKD, whereas more advanced stages (3 and 4) should be characterized by the severity of the impaired renal solute clearance [9]. Furthermore, and with particular reference to children, the normal level of glomerular filtration rate (GFR) varies with age, gender, and body size and increases with maturation from infancy, approaching adult mean values at approximately 2 years of age (Table 2). In turn, GFR ranges that define the five CKD stages apply only to children 2 years of age and older. Finally, although the threshold of GFR reduction where chronic renal failure (CRF) and chronic renal insufficiency (CRI) begins is a matter of opinion, many registries have operationally defined this as a GFR below 75 mL/min per 1.73 m2 [10]. Hence, populations with CRI or CRF are now categorized as those that comprise CKD stages 2–4.
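To make the staging arithmetic concrete, here is a minimal, purely illustrative sketch that estimates a child's creatinine clearance with the original Schwartz formula (the estimate used by the NAPRTCS registry, mentioned below) and maps the result onto the K/DOQI stages of Table 1. The k constants are the values commonly quoted for the original formula and are an assumption of this sketch, not data from the registries reviewed here.

```python
# Illustrative sketch, not registry code: original Schwartz estimate of
# creatinine clearance plus K/DOQI staging (Table 1). The k constants are
# commonly quoted values for the original Schwartz formula (an assumption
# of this sketch): ~0.45 for term infants <1 year, ~0.55 for children and
# adolescent girls, ~0.70 for adolescent boys.

def schwartz_ccr(height_cm: float, serum_creatinine_mg_dl: float,
                 k: float = 0.55) -> float:
    """Estimated creatinine clearance in mL/min per 1.73 m^2."""
    return k * height_cm / serum_creatinine_mg_dl

def kdoqi_stage(gfr: float, kidney_damage: bool) -> int:
    """Map a GFR estimate onto K/DOQI stages 1-5 (meaningful only at age >= 2 years).

    Stages 1 and 2 additionally require evidence of kidney damage
    (e.g. proteinuria, hematuria, structural anomalies); a preserved GFR
    without such damage is returned as 0 (no CKD under this scheme).
    """
    if gfr >= 90:
        return 1 if kidney_damage else 0
    if gfr >= 60:
        return 2 if kidney_damage else 0
    if gfr >= 30:
        return 3
    if gfr >= 15:
        return 4
    return 5

# Hypothetical example: a 110 cm child with a serum creatinine of 1.4 mg/dL.
ccr = schwartz_ccr(110, 1.4)   # ~43 mL/min per 1.73 m^2
print(round(ccr), kdoqi_stage(ccr, kidney_damage=True))  # -> 43 3 (stage 3)
```

An estimate below 75 mL/min per 1.73 m2, as in this hypothetical example, would also meet the operational CRF/CRI threshold used by several of the registries discussed below.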
Table 1 National Kidney Foundation's Kidney Disease Outcomes Quality Initiative (NKF-K/DOQI) stages of chronic kidney disease [8]

Stage | Description | GFR (mL/min/1.73 m2)
1 | Kidney damage with normal or increased GFR | >90
2 | Kidney damage with mild decrease in GFR | 60–89
3 | Moderate decrease in GFR | 30–59
4 | Severe decrease in GFR | 15–29
5 | Kidney failure | <15 or dialysis

GFR, glomerular filtration rate

Table 2 Normal glomerular filtration rate (GFR) in children and adolescents [8]

Age | Mean GFR ± SD (mL/min/1.73 m2)
1 week (males and females) | 41 ± 15
2–8 weeks (males and females) | 66 ± 25
>8 weeks (males and females) | 96 ± 22
2–12 years (males and females) | 133 ± 27
13–21 years (males) | 140 ± 30
13–21 years (females) | 126 ± 22

Sources of pediatric data
Most of the existing data on the epidemiology of CKD during childhood concentrate on the late and more severe stages of renal impairment [11, 12] and are not population based in nature [13]. In addition, some methodologically well-designed childhood CKD registries are limited by being restricted to small reference populations [14–16]. Finally, direct comparisons of the incidence and prevalence rates of childhood CKD in different geographical areas around the world are difficult due to methodological differences in study age group, characterization of the degree of renal insufficiency, and disease classification. In the United States, data is primarily available from two sources: the registry of the North American Pediatric Renal Trials and Collaborative Studies (NAPRTCS) organization [10] and the United States Renal Data System (USRDS). NAPRTCS was established as a transplant registry in 1987 with a goal of gathering data from the majority of pediatric renal transplant centers in the United States, Canada, Mexico, and Costa Rica. Its registry was expanded in 1992 to include data from patients receiving maintenance dialysis, and in 1994, data was first collected from patients with CRI characterized by a Schwartz estimated creatinine clearance of ≤75 mL/min per 1.73 m2 [17]. Participation in this registry is voluntary and mandates the involvement of a pediatric nephrologist in the provision of care to those patients entered into the registry. As of December 2005, information had been collected on more than 6,400 patients who entered the registry with a diagnosis of CRI [10]. In contrast to the NAPRTCS, which only receives data voluntarily submitted by pediatric nephrology centers, the USRDS is a national data system that collects, analyzes, and distributes information about all patients with ESRD in the United States. Thus, USRDS data includes information on both adults and children with stage 5 CKD, which is published as an Annual Data Report (ADR) [5, 6]. This source of information is particularly important from an epidemiological perspective, as approximately one third of children and adolescents with ESRD requiring dialysis or transplantation in the United States are cared for in facilities that primarily serve adults, and thus, they are not included in the NAPRTCS database [18]. The recently published data from the ItalKid Project is by far the most comprehensive on the epidemiology of CKD in children. The ItalKid Project is a prospective, population-based registry that was started in 1990 and includes all incident and prevalent cases of CRF (CCr < 75 mL/min per 1.73 m2) in children (<20 years) from throughout Italy (total population base: 16.8 million children) [19].
The European Dialysis and Transplant Association (EDTA) was established in 1964 to record demographic data and treatment details of patients receiving renal replacement therapy (RRT), including dialysis and renal transplantation. Historically, the EDTA registry gathered data on RRT in children from individual renal units by means of center and patient questionnaires, a process that was subject to underreporting. At the turn of the century, the EDTA office moved to Amsterdam and began collecting data on RRT entirely through national and regional registries and recently reported data on RRT in children from 12 registries located in Europe (vide infra) [20]. Other regional societies, such as the Japanese Society for Pediatric Nephrology (JSPN), have also provided useful epidemiological information. In Japan, children are screened annually by urinalysis in a nationwide program, an approach that has provided invaluable epidemiological information and the opportunity for establishing clinical trials focusing on early detection and intervention. Epidemiological data is also available from Australia and New Zealand [21]. In contrast, epidemiological information from Asia, home to 57% of the world's population and a region characterized by a very high proportion of children, is very scant and is primarily based on patients referred to tertiary medical centers [22, 23]. The situation in central and southern Africa or in the Arab countries of North Africa and the Middle East is even more unfortunate, as there are no regional pediatric nephrology societies in place to collect and publish any valid epidemiological data.

Incidence and prevalence of CKD in childhood
Large population-based studies, such as the Third National Health and Nutrition Examination Survey (NHANES III), have made it possible to estimate the incidence and prevalence of CKD in the adult population [1]. According to this report, the prevalence of patients with early stages of CKD (stages 1–4; 10.8%) is approximately 50 times greater than the prevalence of ESRD (stage 5; 0.2%). There is no comparable information available in the United States on the prevalence of the earlier stages of CKD in children and its relationship to ESRD. This is, in large part, due to differences in disease etiology between children and adults. Furthermore, the relationship between the prevalence of earlier stages of CKD and the subsequent development of more severe CKD/ESRD is determined in part by factors unrelated to disease etiology, as was recently shown in a comparison between adult patients in Norway and the United States [4]. Data that do exist on the epidemiology of CKD in children come from a variety of sources. Population-based data from Italy (ItalKid Project) has reported a mean incidence of preterminal CKD (CCr < 75 mL/min per 1.73 m2) of 12.1 cases per year per million of the age-related population (MARP), with a point prevalence of 74.7 per MARP in children younger than 20 years of age [19]. The national survey performed in Sweden from 1986 until 1994 included children (ages 6 months to 16 years) with more severe preterminal CKD (CCr < 30 mL/min per 1.73 m2) and reported a median annual incidence and prevalence of 7.7 and 21 per MARP, respectively [16]. Similarly, the incidence rate of severe preterminal CKD in Lorraine (France) has been estimated as 7.5 per MARP in children younger than 16 years; the prevalence rate ranged from 29.4 to 54 per MARP [15].
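Since all of these figures are quoted per million of the age-related population (MARP), converting a rate back into an expected annual case count is simple arithmetic; the following back-of-envelope sketch uses the ItalKid figures quoted above.

```python
# Back-of-envelope arithmetic only (not registry code): a rate quoted
# "per million of the age-related population" (MARP) converts to an
# expected case count by multiplying by that population in millions.
incidence_per_marp = 12.1    # ItalKid: new preterminal CKD cases/year per MARP
population_millions = 16.8   # ItalKid population base, children <20 years
print(round(incidence_per_marp * population_millions))  # ~203 new cases/year
```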
In Latin America, the Chilean survey from 1996 reported incidence and prevalence rates of 5.7 and 42.5 per MARP, respectively, in children younger than 18 years of age with CCr < 30 mL/min/1.73 m2, including patients with ESRD [12]. As alluded to above, there are 81.2 million children in the United States younger than 20 years of age [5], but no data on the incidence or prevalence of preterminal CKD is available. Due to a lack of national registries, any semblance of incidence and prevalence data from developing countries primarily originates as reports from major tertiary care referral centers [22–27]. The nature of the data depends on local referral practices and accessibility to hospital care. The Jordan University Hospital has estimated the annual incidence and prevalence of severe CKD (CCr < 30 mL/min per 1.73 m2) to be 10.7 and 51 per MARP, respectively, based on their hospital admission rate [26]. A 15-year review of admissions from a university teaching hospital in Nigeria estimated the median annual incidence of severe CKD (CCr < 30 mL/min per 1.73 m2) to be 3.0 per MARP, with a prevalence of 15 patients per million children [27]. In a recent report, data from a major tertiary hospital in India revealed that approximately 12% of patients (n = 305) seen by the pediatric nephrology service over a 7-year period had moderate to severe CKD (CCr < 50 mL/min per 1.73 m2), and one quarter of these patients had already developed ESRD, highlighting the late diagnosis and referral pattern [23]. Similar data was reported from another tertiary hospital in India where 50% of 48 patients presenting with CRF over a 1-year period had ESRD [22]. Finally, data from a major Iranian hospital collected over 7 years (1991–1998) reported that 11% of pediatric nephrology admissions (n = 298) were due to severe CKD (CCr < 30 mL/min per 1.73 m2), and one half of the patients advanced to ESRD [25]. The incidence rate of ESRD, adjusted for race and gender, is much higher among adults than among children. Data from the USRDS revealed that in pediatric patients younger than 20 years of age, the annual incidence of ESRD increased marginally from 13 per MARP in the 1988 cohort to 15 per MARP in the 2003 cohort [5]. This is in contrast to the adult incidence rate of 119 per MARP for patients 20–44 years of age and 518 per MARP for those 45–64 years old in the 2003 cohort [5]. As in adults, a higher incidence rate with older patients was also found across the 5-year age groups within the pediatric cohort. The incidence rate was nearly twice as high among children 15–19 years of age (28 per MARP) compared with children 10–14 years of age (14 per MARP), and nearly three times higher than the rate for children 0–4 years of age (9 per MARP). The point prevalence for pediatric patients (adjusted for age, race, and gender) was 82 per million population during 2002–2003 [5]. The EDTA registry recently reported its cumulative data on 3,184 patients (<20 years of age) with ESRD who initiated RRT between 1980 and 2000 in 12 European countries [20]. With a total of 18.8 million children between 0–19 years in the countries surveyed, data revealed that the incidence of ESRD rose modestly from 7.1 per MARP in the 1980–1984 cohort to 9.9 per MARP over the next 15 years. In contrast, the prevalence of patients receiving RRT increased from 22.9 per MARP in 1980 to 62.1 per MARP in 2000, providing evidence of improved long-term survival. 
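Since nearly every figure in this section is expressed per million of the age-related population (MARP), a one-line conversion may help; the example below simply restates the USRDS figures quoted above and introduces no new data.

```python
def per_marp(cases_per_year: float, age_related_population: float) -> float:
    """Annual rate per million of the age-related population (MARP)."""
    return cases_per_year / (age_related_population / 1e6)

# With roughly 81.2 million US children <20 years of age (see above),
# ~1,200 incident ESRD cases/year corresponds to the reported 15 per MARP.
print(round(per_marp(1218, 81.2e6), 1))  # 15.0
```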
As in the United States, the incidence of ESRD was highest in the 15–19 year age group, with the exception of the 0- to 4-year age group in Finland, which experienced a high incidence of ESRD (15.5 per MARP) secondary to the large number of infants in that country with congenital nephrotic syndrome. The incidence of ESRD in children (<20 years of age) from Australia and New Zealand has remained fairly constant at around 8–10 per million population over the past 25 years, whereas the prevalence of treated ESRD has steadily increased since 1980, from approximately 25 to 50 patients per million population [21]. The 1998 Japanese National Registry data reported comparatively lower ESRD incidence and prevalence rates of 4 and 22 per MARP, respectively, for children 0–19 years of age [28] for reasons that are as yet unexplained. However, as in other countries, the prevalence rate of treated ESRD among patients aged 15–19 years was not only high (34 per million), but seven times higher than that of patients 0–4 years of age (5 per million). In the 2005 ADR from the USRDS, data regarding the incidence and prevalence of ESRD in children was simultaneously published from 37 countries to corroborate the information above and to facilitate international comparisons [5]. The highest incidence rates for children were reported from the United States, New Zealand, and Austria, at 14.8, 13.6, and 12.4 per million population, respectively. As mentioned earlier (vide supra), Japan's rate for pediatric patients was, in contrast, one of the lowest, even though Japan ranks fourth highest in the world for the incidence of ESRD in adults. The prevalence rate for pediatric ESRD patients was reported to be highest in Italy, at 258 patients per million population; however, this may be partially related to the addition of data from patients ages 20–24 to the prevalent group. The second highest prevalence rate for children was reported from Finland, with a rate only 40% of that in Italy but greater than the rates from the United States and Hungary, where they were reported to be 82 and 81 patients per million population, respectively (Fig. 1) [5].

Fig. 1 Incidence (left) and prevalence of end-stage renal disease (ESRD) around the world in the 0–19 age group in 2003 [5]

A number of factors influence incidence and prevalence rate variability of childhood ESRD. Factors such as racial and ethnic distribution, type of prevalent renal disease, and quality of medical care available for preterminal CKD patients have a significant impact on patient outcome. As the vast majority of treated ESRD patients come from more-developed countries, which can afford the cost of renal replacement therapy, the huge disparity in the prevalence of ESRD between the more- and less-developed countries probably stems, in large part, from the inadequacy of health-care resource allocation to programs providing renal replacement therapy in underdeveloped countries [29, 30]. Finally, characterization of the patient population with CKD (both preterminal CRF and treated ESRD) reveals that the incidence and prevalence rates are universally greater for boys than for girls [10, 16, 19, 22, 23, 25–27]. Two thirds of patients in the NAPRTCS CRI registry and in the database of the ItalKid Project are males. This gender distribution reflects the higher incidence of congenital disorders, including obstructive uropathy, renal dysplasia, and prune belly syndrome, in boys versus girls.
In fact, in the ItalKid Project, males continue to predominate (male:female ratio 1.72) even after excluding patients with posterior urethral valves [19]. As for race, the incidence rate for ESRD in black children in North America is two to three times higher than for white children, irrespective of gender [31]. Likewise, the incidence rate of ESRD for the indigenous people of Australia (Aborigines) and New Zealand (Maoris) is disproportionately higher than that experienced by the remainder of the population [32].

Etiology of CKD

Unlike adults, in whom diabetes and hypertension are responsible for the majority of CKD, congenital causes are responsible for the greatest percentage of all cases of CKD seen in children. However, whereas this is the most commonly reported etiology from developed countries, where CKD is diagnosed in its earlier stages, infectious or acquired causes predominate in developing countries, where patients are referred in the later stages of CKD. These generalizations apart, certain disorders giving rise to CKD are, indeed, more common in some countries than in others. In the CRI registry arm of NAPRTCS, almost one half of the cases are accounted for by patients with the diagnoses of obstructive uropathy (22%), aplasia/hypoplasia/dysplasia (18%), and reflux nephropathy (8%) (Table 3). Whereas structural causes predominate in the younger patients, the incidence of glomerulonephritis (GN) increases in those older than 12 years. Among the individual glomerular causes, only focal segmental glomerulosclerosis (FSGS) accounts for a significant percentage of patients (8.7%), whereas all other glomerulonephritides combined contribute less than 10% of the causes of childhood CKD. For reasons that are as yet not clear, FSGS is three times more common in blacks than in whites (18% vs. 6%) and is particularly common among black adolescents with CKD [10].

Table 3 Diagnosis distribution of North American Pediatric Renal Trials and Collaborative Studies (NAPRTCS) chronic renal insufficiency (CRI) patients [10]

Primary diagnosis                         Number   % Male   % White   % Black   % Other
Total                                     6,405    64       61        19        20
Obstructive uropathy                      1,385    86       61        21        17
Aplastic/hypoplastic/dysplastic kidney    1,125    62       62        17        21
Other                                     913      58       63        16        21
FSGS                                      557      57       40        39        21
Reflux nephropathy                        536      53       74        6         20
Polycystic disease                        257      55       74        11        15
Prune belly                               185      97       62        23        15
Renal infarct                             155      53       66        13        21
Unknown                                   168      52       47        20        32
HUS                                       134      58       81        7         11
SLE nephritis                             96       25       27        41        32
Cystinosis                                97       48       92        3         5
Familial nephritis                        99       86       61        12        27
Pyelo/interstitial nephritis              87       39       64        20        16
Medullary cystic disease                  82       50       84        9         7
Chronic GN                                76       50       43        29        28
MPGN-type I                               67       61       48        19        33
Berger's (IgA) nephritis                  64       63       64        16        20
Congenital nephrotic syndrome             68       46       46        12        43
Idiopathic crescentic GN                  46       48       52        24        24
Henoch-Schönlein nephritis                40       65       78        3         20
MPGN-type II                              29       72       79        3         17
Membranous nephropathy                    33       48       30        39        30
Other systemic immunologic disease        25       32       40        32        28
Wilms tumor                               28       54       57        21        21
Wegener's granulomatosis                  17       76       94        0         6
Sickle cell nephropathy                   13       62       0         92        8
Diabetic GN                               11       50       36        45        18
Oxalosis                                  6        67       83        0         17
Drash syndrome                            6        100      67        0         33

FSGS focal segmental glomerulosclerosis, HUS hemolytic uremic syndrome, SLE systemic lupus erythematosus, GN glomerulonephritis, MPGN membranoproliferative GN, IgA immunoglobulin A

Data from the ItalKid Project revealed that hypoplasia with or without urological malformations accounts for as many as 57.6% of all cases of CKD in Italy, whereas glomerular diseases account for as few as 6.8% of cases of CKD in children [19].
Interestingly, when the analysis was restricted to the patient population that had reached ESRD, the relative percentage of glomerular diseases increased from 6.8% to 15.2%, whereas that of hypoplasia decreased from 57.6% to 39.5%, underscoring the discrepancy between the rates of progression of these two entities. Observations from this study have also prompted questions regarding the commonly accepted cause–effect relationship between vesicoureteral reflux (VUR) and kidney disease (reflux nephropathy) and support the hypothesis that both hypoplasia and VUR may be related to similar developmental factors causing congenital disorders of the kidney and urinary tract [33]. In the ESRD population reported by the EDTA registry, hypoplasia/dysplasia and hereditary diseases were the most common causes for ESRD in the 0- to 4-year age group, whereas GN and pyelonephritis became progressively more common with increasing age in the majority of reporting countries [20]. The exception is Finland, where congenital nephrosis (Finnish type) remains the most common cause of ESRD in children younger than 15 years of age [34]. Somewhat different is the data reported by the Japanese National Registry, which reflects a very high proportion (34%) of cases secondary to GN [FSGS 60% and immunoglobulin A (IgA) nephropathy 17%] in their pediatric ESRD population [28]. Similarly, the Australia and New Zealand Dialysis and Transplant (ANZDATA) registry reported GN to be the most common cause of ESRD in children and adolescents from Australia and New Zealand (42%) [21]. Comprehensive information on the etiology of ESRD from many less-developed countries is unavailable owing to poor data collection and the absence of renal registries. In addition and in contrast to the experience within developed countries, many of these countries continue to suffer from the burden of infectious diseases such as hepatitis C, malaria, schistosomiasis, and tuberculosis, with resultant infection-related GN. One such example is Nigeria, from which a publication on pediatric CKD reported various glomerulopathies as the cause of renal failure in one half of their patients, a third of whom also had nephrotic syndrome [27]. Human-immunodeficiency-virus (HIV)-associated nephropathy in children is another entity that is underreported, and it is a disorder that is likely to increase along with the increasing incidence of HIV in Africa and Asia. Familial Mediterranean fever leading to amyloidosis has been found to be responsible for up to 10% of cases of CKD in Turkish children (n = 459) [24]. Hereditary disorders are more prevalent in countries where consanguinity is common. One third of Jordanian children with CKD have been diagnosed with hereditary renal disorders such as polycystic kidney disease, primary hyperoxaluria, and congenital nephrotic syndrome [26]. Similarly, one fifth of Iranian children with CKD have been reported to have hereditary disorders such as cystinosis, cystic kidney disease, Alport syndrome, and primary hyperoxaluria [25]. Progression of CKD Although the stages of CKD are now reasonably well defined, the natural history of the early stages is variable and often unpredictable. However, most available data demonstrates a slower progression toward ESRD in patients with congenital renal disorders compared with patients with glomerular disease. For this reason, and as alluded to previously, the relative proportion of glomerular diseases increases in groups of patients with more advanced stages of CKD. 
The progression of established CKD is also influenced by a variety of risk factors, some of which (e.g., obesity, hypertension, and proteinuria) may be modifiable [35–37], whereas others, including genetics, race, age, and gender, are not. Obesity is associated with hypertension, albuminuria, and dyslipidemia, all of which can potentially influence the progression of CKD. The incidence of certain glomerulonephritides, such as FSGS, is higher in obese than in lean individuals [38, 39]. Hypertension together with proteinuria has been shown to be an important risk factor for progression of primary renal disease in children and adults [40, 41], and the renoprotective efficacy of renin angiotensin system (RAS) antagonists, which is in part independent of blood pressure, has been clearly demonstrated in animal models and adults with acquired nephropathies [42–46]. Whereas both angiotensin-converting enzyme (ACE) inhibitors and angiotensin receptor blockers have been shown to reduce proteinuria in children with CKD, the renoprotective efficacy of these medications in children and their potential impact on the epidemiology of CKD still need to be better delineated, as is currently being addressed by the Effect of Strict Blood Pressure Control and ACE Inhibition on the Progression of Chronic Renal Failure in Pediatric Patients (ESCAPE) trial [47, 48]. The clustering of CKD in families is strongly suggestive of a genetic or familial predisposition in some cases [49]. Studies have suggested the presence of links between CKD and various alterations or polymorphisms of candidate genes encoding putative mediators, including the renin–angiotensin system. Additionally, racial factors may play a role in susceptibility to CKD, as there is a strong concordance of renal disease in the families of African Americans with hypertensive ESRD [49]. Not only may there be an increased susceptibility to disease, but there is evidence that the rate of progression of CKD is faster among African American males [50]. Low birth weight in some ethnic communities might be associated with a reduction in the number of nephrons and a subsequent predisposition to hypertension and renal disease in later life [51]. Irrespective of the underlying kidney disease or presence of additional risk factors, it is clear that the risk of progression to ESRD in childhood is inversely proportional to the baseline creatinine clearance [10, 19]. Additionally, regardless of the initial level of renal insufficiency, puberty seems to be a critical stage for patients with renal impairment, as a steep decline in renal function often occurs during puberty and the early postpuberty period [19]. Whereas the specific reasons are yet to be determined, it is speculated that this pattern of progression may be attributable to an adolescent-specific pathophysiological mechanism, possibly related to sex hormones and/or the imbalance between residual nephron mass and the rapidly growing body size. Data collected by NAPRTCS has also revealed that patients whose baseline serum albumin was below 4 g/dl, inorganic phosphorus above 5.5 mg/dl, calcium below 9.5 mg/dl, blood urea nitrogen (BUN) above 20 mg/dl, or hematocrit below 33% had a significantly higher risk of reaching ESRD (p < 0.001) [10].
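Purely as a hypothetical illustration (the NAPRTCS analysis reports associations, not a validated prognostic score), the baseline laboratory cut-offs just listed could be encoded as simple flags:

```python
# Hypothetical flagging of the NAPRTCS baseline laboratory values that were
# associated with a higher risk of reaching ESRD (p < 0.001); illustrative
# only, not a validated risk score.
THRESHOLDS = {
    "albumin_g_dl":     lambda v: v < 4.0,
    "phosphorus_mg_dl": lambda v: v > 5.5,
    "calcium_mg_dl":    lambda v: v < 9.5,
    "bun_mg_dl":        lambda v: v > 20.0,
    "hematocrit_pct":   lambda v: v < 33.0,
}

def risk_flags(labs: dict) -> list:
    """Return the names of the reported risk-associated abnormalities present."""
    return [name for name, is_abnormal in THRESHOLDS.items()
            if name in labs and is_abnormal(labs[name])]

print(risk_flags({"albumin_g_dl": 3.6, "hematocrit_pct": 30.0, "calcium_mg_dl": 9.8}))
# ['albumin_g_dl', 'hematocrit_pct']
```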
Data pertaining to a variety of risk factors potentially associated with the progression of CKD, including those noted above, is being collected by the Chronic Kidney Disease in Children Study (CKiD), a prospective, multicenter initiative funded by the National Institutes of Health designed to follow the course of 540 children with CKD for 2–4 years [52].

Outcome for children with CKD

The outcome of children with severe CKD is highly dependent upon the economy and availability of health care resources. Approximately 90% of treated ESRD patients come from developed countries that can afford the cost of RRT [29]. Despite comparable incidence rates, high mortality in countries that lack resources for RRT results in a low prevalence of CKD patients in those countries. In one of the tertiary care hospitals in India, for example, up to 40% of the ESRD patients opted out of further therapy because of a lack of financial resources [22], and of the 91 patients with ESRD in another hospital, only 15 underwent renal transplantation, 63 received hemodialysis, and the remainder opted out of dialysis or transplantation care secondary to financial constraints [23]. Similar results were recently published from South Africa, where only 62% of children (<20 years of age) with ESRD were accepted by an "Assessment Committee" for RRT as part of a rationing program [30]. In countries where RRT is readily available, the most favored renal replacement modality is transplantation in all pediatric age groups. Sixteen percent of children newly diagnosed with ESRD in North America receive a preemptive transplant, and three fourths of children receive a transplant within 3 years of RRT initiation [5]. Similar figures are reported by the ANZDATA registry [21]. Among Western countries, Spain/Catalonia has the highest pediatric transplant rate, reaching 15 patients per million population, followed by a rate of 12 patients per million population in the United States and Finland (Fig. 2) [5]. In the United States, white pediatric patients are more likely to receive a renal transplant than are patients from other racial groups.

Fig. 2 Percent distribution of prevalent dialysis modality (left) and transplant rates in the 0–19 age group in 2003 [5]

The distribution of dialysis modalities varies among countries (Fig. 2) [5]. The use of peritoneal dialysis (PD) in children is highest in Finland, New Zealand, and Scotland, accounting for 73%, 71%, and 67% of prevalent dialysis patients, respectively. Whereas PD is still the favored mode of dialysis in young children, there has been an increase in hemodialysis (HD) utilization since the early 1990s, and HD is now the most common form of dialysis overall for prevalent patients <19 years of age (Fig. 2) [5]. In the United States, PD is the most frequently used dialysis modality (60% of dialysis patients) according to the NAPRTCS registry [10], whereas HD is more common according to data collected by the USRDS [5]. Once again, this discrepancy reflects in part the fact that many adolescent patients are cared for in adult dialysis units, where there is often a preference for HD [18]. Whereas automated PD (APD) is the most frequently used PD modality in children [53], continuous ambulatory PD (CAPD) is commonly used in countries that lack finances and technical support, as reflected in the recent report of the Turkish Pediatric Peritoneal Dialysis (TUPEPD) registry [54]. Mortality rates remain significantly lower in pediatric patients with ESRD compared with their adult counterparts.
Nevertheless, an assessment of the causes of death reflects the excess risk of cardiac and vascular disease and the high prevalence of left ventricular hypertrophy and dyslipidemia among children treated with RRT [55–57]. Pediatric patients with glomerulonephritis or those with cystic/hereditary/congenital disease have the greatest probability of surviving 5 years, in contrast to patients who have developed ESRD as a result of secondary GN or vasculitis [5]. Infants on dialysis have a higher mortality rate than do older children, which is likely, at least in part, to be a result of coexisting morbidities [58]. Although substantial improvement has occurred in the long-term survival of children and adolescents with ESRD over the past 40 years, the overall (dialysis and transplantation) 10-year survival remains at only 80%, and the age-specific mortality rate is still 30–150 times higher than among children without ESRD [6, 7]. It is noteworthy that dialysis is associated with an appreciably higher risk of death compared with renal transplantation; therefore, patients who experience a longer wait for transplantation are more likely to have a worse overall outcome. Not only is the benefit of transplantation evident when one compares transplant recipients to patients deemed "medically unsuitable" for transplantation, it has also been substantiated in a recent longitudinal study of 5,961 patients ≤18 years of age, all of whom were placed on the kidney transplant waiting list in the United States [59]. In that study, transplanted children had a lower estimated mortality rate (13.1 deaths/1,000 patient years) compared with patients on the waiting list (17.6 deaths/1,000 patient years). Similarly, the 2005 ADR reported that approximately 92% of children initiating therapy with a transplant survive 5 years, compared with 81% of those receiving HD or PD [5]. Finally, the expected remaining lifetime for children 0–14 years of age and on dialysis is only 18.3 years, whereas the prevalent transplant population of the same age has an expected remaining lifetime of 50 years [5].

Conclusion

Children with CKD comprise a very small but important portion of the total CKD population. Whereas the disorders associated with its development are well delineated, the availability of valid and widespread information regarding the epidemiology of CKD in children requires additional efforts, such as the ItalKid Project, in which early identification and longitudinal follow-up are key practices. This information will, in turn, serve as the basis upon which to judge the impact that observational trials such as CKiD and interventional trials such as ESCAPE have on the evolution of CKD during childhood [48].
[ "chronic kidney disease", "children", "epidemiology", "end-stage renal disease", "renal replacement therapy" ]
[ "P", "P", "P", "P", "P" ]
Surg_Endosc-4-1-2358937
Perioperative outcome of laparoscopic left lateral liver resection is improved by using a bioabsorbable staple line reinforcement material in a porcine model
Hypothesis

Laparoscopic liver surgery is significantly limited by the technical difficulty encountered during transection of substantial liver parenchyma, with intraoperative bleeding and bile leaks. This study tested whether the use of a bioabsorbable staple line reinforcement material would improve outcome during stapled laparoscopic left lateral liver resection in a porcine model. The success of laparoscopic cholecystectomy has driven the application of minimally invasive techniques to other disease processes, and led to the use of laparoscopy in solid organ surgery [1–9]. Since the report of the first laparoscopic liver resection for a 6-cm focal nodular hyperplasia, laparoscopic surgery for the treatment of liver diseases has become more popular [28–30]. In 1995, excision of a segment 4 hepatic tumor was reported. The first successful laparoscopic left lateral hepatectomy (segments 2 and 3) in a patient with a benign adenoma was performed in 1996 [8]. Technological advances and the development of new equipment have facilitated the growing trend of laparoscopic liver resections [2]. However, laparoscopic liver surgery remains technically challenging. Compared to open surgery, the laparoscopic surgeon is limited by the lack of an effective means to divide substantial liver parenchyma. Transection of the liver can be performed laparoscopically in several different ways; however, bile leakage rates of 1–16% and bleeding complication rates of 6–55% have been reported [9, 30–32]. We hypothesized that the addition of a bioabsorbable membrane to reinforce standard laparoscopic stapling devices would reduce the bleeding and biliary complications of stapled laparoscopic liver resections. We tested this hypothesis in a prospective survival study of 20 pigs undergoing laparoscopic left lateral liver resections.

Methods

Study design

A total of 20 female pigs each weighing approximately 40 kg were used for the study. All animals were maintained in accordance with the recommendations of the institutional animal care and use committee at the Mount Sinai School of Medicine. Animals were randomly assigned at the time of surgery to either group A (n = 10), in which the stapling devices used for laparoscopic left lateral segmentectomy were reinforced with a bioabsorbable membrane, or group B (n = 10), in which standard stapling devices were used. Animals were followed prospectively for a 6-week period after which they were sacrificed.

Surgery

All animals were premedicated with ketamine (22 mg/kg) and atropine (0.04 mg/kg) and anesthetized with thiopental (15 mg/kg) prior to intubation. Animals were then mechanically ventilated with an initial tidal volume of 10 ml/kg and a respiratory rate of 15 breaths per minute. The tidal volume was adjusted to maintain arterial PaCO2 of 35–40 mmHg during the experiment. Anesthesia was maintained with inhaled isofluorane (1.5%). The pig was placed in a supine position. A standard open technique was used to gain access to the abdomen and a total of five trocars (Karl Storz Endoscopy America, Culver City, CA, USA) were used for the procedure (Fig. 1).

Fig. 1 Schematic illustration of trocar placement in the porcine model

Pneumoperitoneum was established to a pressure of 15 mmHg. After inspection of the abdominal cavity with a 10-mm 30° laparoscope, the remaining trocars were placed under direct vision. Two 10-mm trocars were used in the left and right upper quadrants. One 12-mm trocar was positioned in the left anterior axillary line for liver retraction.
The liver capsule and parenchyma were inspected superiorly and inferiorly, and the line of transection was visualized from the liver hilum along the surface of the left liver lobe towards the diaphragm. Subsequently, the left lateral segmentectomy (segments 2 and 3 of the liver) was demarcated within 1 cm along the falciform ligament. If necessary, the diaphragmatic and minor omental attachments to the liver were divided to mobilize the left lobe. Once mobilized, attention was turned to transection of the liver parenchyma. This was accomplished with sequential firings of a 3.5-mm Endo GIA stapler (US Surgical, Norwalk, CT). In group A, the stapler cartridges were buttressed with a bioabsorbable reinforcement material (Seamguard®, W.L. Gore, Flagstaff, Arizona) (Fig. 2). In group B the standard stapler cartridges were used. As the liver parenchyma was divided, no attempt was made to isolate the major feeding vessels (major branches of the left hepatic artery and vein) to segments 2 and 3; rather, they were divided en masse with the liver parenchyma. Upon completion of the transection, hemostasis was achieved with the ultrasonic scalpel (Ethicon Endo-Surgery, Cincinnati, Ohio, USA), which works by means of a vibrating blade or scissors and can effectively seal small vessels and bile ducts, and/or electrocautery. The resected specimen was placed into an endobag and retrieved through the umbilical trocar site, which was extended approximately 3.5–5.0 cm to accommodate the specimen. The abdomen was irrigated and inspected prior to closure of the trocar sites. No intra-abdominal drains were used.

Fig. 2 An absorbable polymer membrane (Bioabsorbable Seamguard, W.L. Gore, Flagstaff, Arizona, USA) is constructed as a buttress mat integrated into the stapler system

Data collection

Operative data recorded at the time of surgery included operative time, specimen size, and blood loss as estimated by the amount of fluid collected in a suction container minus the irrigation fluid used. Animals were prospectively followed with regard to clinical outcome for 6 weeks. Clinical status (temperature, blood pressure, heart rate) was examined routinely on a daily basis unless clinical condition mandated additional assessment. Routine blood tests (bilirubin, ALT, AST, alkaline phosphatase, GGT, and complete blood count) were obtained 2 days and 6 weeks postoperatively. Animals were sacrificed at 6 weeks and examined for intra-abdominal abnormalities such as abscess or bile leaks. Standard methylene blue was injected into the biliary tree to examine for active biliary leaks. A standardized pressure-controlled injector using a green-sized syringe injected 5–10 ml methylene blue into the ligated common bile duct. Subsequently, the cut edge of the liver was examined both macroscopically and microscopically for evidence of biliary leak. The cut edge of the liver was then sent for histopathological analysis.

Statistical analysis

A power analysis indicated that a series of 20 consecutive animals would provide 80% power to detect a 50% reduction in adverse postoperative outcomes after laparoscopic liver resection at the 5% significance level (95% confidence). Data were compared between groups with Student's t-test and chi-square tests.

Results

All animals tolerated the procedure well and were healthy during the entire follow-up. Clinical status (temperature, blood pressure, heart rate), examined routinely on a daily basis, did not differ between groups.
All animals resumed a normal diet within 1 day and had their first bowel movement within 2–3 days. Routine blood tests (bilirubin, ALT, AST, alkaline phosphatase, GGT, white blood cell, hemoglobin, hematocrit) obtained 2 days and 6 weeks postoperatively showed no abnormalities in either group. Mean operative time was not different between groups (group A 64 ± 11 min versus group B 68 ± 9 min, p = ns). Intraoperative blood loss was significantly higher in group B (group A 25 ± 5 mL versus group B 185 ± 9 mL, p < 0.05) (Fig. 3). An average of six 60-mm stapler cartridge firings per animal was used in each group to transect the left lateral liver lobe (p = ns). Resected specimen size was similar in both groups (average size 10 × 8 × 4 cm; average weight 0.41 kg). There was no morbidity or mortality in either group. However, two animals in group B were found to have bile collections in the previous operating field at the time of necropsy. When the livers were subjected to methylene blue injection via the common bile duct (CBD), two additional animals in group B were found to have evidence of biliary leak (a total of four of ten animals in group B versus zero of ten in group A, p < 0.05).

Fig. 3 Intraoperative blood loss in groups A and B

Histopathological examination of the cut surface of the liver in group A revealed no evidence of the staple line reinforcement material, indicating that it had been totally reabsorbed. There was no bile duct damage seen, and only mild mononuclear inflammation was seen in the portal areas. Subcapsular sinusoid dilation and congestion were seen, with sharp demarcation between areas of dilation and absence of dilation in a pattern suggestive of fixation artifact, although ischemic or anoxic damage cannot be ruled out. In group B, marked fibrotic changes and damaged vascular and biliary endothelium were seen (Figs. 4 and 5).

Fig. 4 Macroscopic illustration of bile duct damage at the liver's transection site in group B, in which the conventional stapling technique was performed

Fig. 5 Microscopic examination (20×) of the resection line of group A (above) and group B (below)

Discussion

While new technology is rapidly expanding the realm of laparoscopic surgery to include major hepatic resections, the majority of laparoscopic hepatic operations currently performed are for diagnostic purposes [3–5]. One of the major limitations to laparoscopic liver resections is the difficulty encountered in division of substantial liver parenchyma. Commonly employed devices for standard open liver resections, including the Cavitron ultrasonic surgical aspirator (CUSA) (Valleylab, Boulder, CO) and the Argon Beam coagulator (Valleylab, Boulder, CO), are available for use in laparoscopy [34–36]. Laparoscopic transection of the liver can be performed in several ways (ranging from gasless procedures to hand-assisted or purely laparoscopic liver resections), each technique with its own risk of bleeding and bile leaks. Over time, the rates of these complications have not changed considerably, remaining between 10% and 50% [9, 31]. Commonly used laparoscopic devices such as the ultrasonic scalpel or the Ligasure (Valleylab, Boulder, CO) are limited in their ability to divide liver parenchyma because of the small size of the active heating area. The addition of various buttressing materials to stapling devices has been tried as a means to improve results and reduce complication rates in both pulmonary and gastrointestinal surgery [17–23].
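For readers who want to reproduce the flavour of the group comparisons above from the published summary statistics (Student's t-test and chi-square, as stated under Statistical analysis), a SciPy sketch follows; P-values computed from the raw data may differ slightly, and the uncorrected chi-square is used here because an exact test would be more conservative at these small counts.

```python
from scipy import stats

# Blood loss (mean +/- SD, n = 10 per group), from the Results above.
t, p = stats.ttest_ind_from_stats(mean1=25, std1=5, nobs1=10,
                                  mean2=185, std2=9, nobs2=10)
print(f"t = {t:.1f}, p = {p:.2g}")  # strongly significant difference

# Bile leaks: 0/10 (group A) versus 4/10 (group B), uncorrected chi-square.
chi2, p_leak, _, _ = stats.chi2_contingency([[0, 10], [4, 6]], correction=False)
print(f"chi2 = {chi2:.1f}, p = {p_leak:.3f}")  # chi2 = 5.0, p ~ 0.025
```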
We hypothesized that the addition of a buttressing material to standard laparoscopic stapling devices would provide a novel technique for liver transection. One buttressing material (Seamguard®, W.L. Gore, Flagstaff, Arizona) is a porous fibrous structure composed solely of synthetic bioabsorbable poly (glycolide:trimethylene carbonate) copolymer. Degraded via a combination of hydrolytic and enzymatic pathways, the copolymer has been found to be both biocompatible and nonantigenic, with a history of use as bioabsorbable sutures, membranes, and other implantable devices. When used for staple line reinforcement, it can be expected to retain mechanical strength for 4–5 weeks and be completely absorbed by the end of 6 months [11]. The fact that this material is completely bioabsorbable should reduce concerns over possible long-term complications such as migration, erosion, calcification, and infection. In addition, this synthetic copolymer does not carry the risk of animal source contamination. The histopathological examination of the resection line of both groups showed fibrotic changes in group B in contrast to group A. This may be caused by excess mechanical stress, which was avoided in group A due to the addition of the staple line reinforcement. Our results show that transection using conventional stapling devices is inadequate, but that the addition of a bioabsorbable staple line reinforcement material to standard laparoscopic stapling devices can reduce intraoperative blood loss during transection of liver parenchyma. In addition, there was a reduction in postoperative bile leaks, although in this study these leaks were not clinically significant. Bioabsorbable reinforcement reduces complications and is a viable addition to laparoscopic liver resection.
[ "liver resections", "staple line reinforcement", "laparoscopic surgery", "complications", "bile duct leak", "hemorrhage." ]
[ "P", "P", "P", "P", "R", "M" ]
Eur_J_Epidemiol-3-1-2071962
Sodium and potassium intake and risk of cardiovascular events and all-cause mortality: the Rotterdam Study
Background

Dietary electrolytes influence blood pressure, but their effect on clinical outcomes remains to be established. We examined sodium and potassium intake in relation to cardiovascular disease (CVD) and mortality in an unselected older population.

Methods

A case–cohort analysis was performed in the Rotterdam Study among subjects aged 55 years and over, who were followed for 5 years. Baseline urinary samples were analyzed for sodium and potassium in 795 subjects who died, 206 with an incident myocardial infarction and 181 subjects with an incident stroke, and in 1,448 randomly selected subjects. For potassium, dietary data were additionally obtained by food-frequency questionnaire for 78% of the cohort.

Results

There was no consistent association of urinary sodium, potassium, or sodium/potassium ratio with CVD and all-cause mortality over the range of intakes observed in this population. Dietary potassium estimated by food frequency questionnaire, however, was associated with a lower risk of all-cause mortality in subjects initially free of CVD and hypertension (RR = 0.71 per standard deviation increase; 95% confidence interval: 0.51–1.00). We observed a significant positive association between urinary sodium/potassium ratio and all-cause mortality, but only in overweight subjects who were initially free of CVD and hypertension (RR = 1.19 (1.02–1.39) per unit).

Conclusion

The effect of sodium and potassium intake on CVD morbidity and mortality in Western societies remains to be established.

Introduction

Observational and experimental data support an independent, positive relationship between sodium intake and blood pressure, most clearly in hypertensive populations [1–3]. Potassium intake, on the other hand, has been inversely related to blood pressure [3, 4]. Since hypertension is a strong predictor of cardiovascular disease (CVD), especially stroke, inadequate intake of sodium and potassium is likely to be associated with increased cardiovascular morbidity and mortality [1]. Only recently, population-based studies on dietary salt intake in relation to CVD and non-cardiovascular events have received priority [5]. Alderman et al. were among the first to report an increased risk of myocardial infarction with low urinary sodium in treated hypertensive men [6]. In a subsequent analysis of NHANES I data, an inverse association of sodium intake with all-cause and cardiovascular mortality was found [7]. Estimation of salt intake by 24-h dietary recall and other methodological aspects of this analysis, however, have been criticized [8–10]. Salt intake was not significantly related to coronary or all-cause mortality in the large cohorts of the Scottish Heart Health Study [11] and the MRFIT trial [12]. A recent systematic review of 11 randomized trials showed no effect of long-term sodium reduction on overall mortality, but this meta-analysis included only 17 fatal events and should be interpreted with caution [13]. He et al. showed that high sodium intake was a strong risk factor for congestive heart failure in overweight participants of the NHANES I follow-up study [14], and also predictive for CVD and all-cause mortality in this group [15]. Similarly, in a Finnish cohort, 24-h urinary sodium excretion predicted mortality and risk of coronary heart disease only in the presence of overweight [16]. With regard to incidence of stroke, the Finnish study showed no association with urinary sodium [16]. Nor was stroke mortality predicted by dietary sodium intake in MRFIT [12].
In the WHO Cardiovascular Diseases and Alimentary Comparison (CARDIAC) Study in 24 countries, however, sodium intake appeared to be a risk factor for stroke in men [17]. As a consequence of these inconsistent findings, there is currently no consensus as to the cardiovascular risks of salt intake. Tobian et al. demonstrated a lower risk of hemorrhagic stroke and mortality in hypertensive rats that had been given potassium supplements, an effect that was not mediated by blood pressure reduction [18]. Khaw and Barrett-Connor confirmed this independent protective effect of dietary potassium against stroke in humans [19]. Also in the CARDIAC study [17], the Cardiovascular Health Study [20] and the Nurses Health Study [21] the intake of potassium was inversely related to risk of stroke. Data on dietary potassium in relation to coronary and all-cause mortality in humans are scanty. We examined the relationship of sodium and potassium intake with cardiovascular events and all-cause mortality in the older cohort of the population-based Rotterdam Study. Methods The Rotterdam Study This case–cohort analysis formed part of the Rotterdam Study, a population-based prospective study among 7,983 men and women aged 55 years and older in the Netherlands [22]. The Medical Ethics Committee of the Erasmus Medical Centre Rotterdam approved the study, and written informed consent was obtained from all participants. From August 1990 until June 1993, a trained research assistant collected data on health, medication use, lifestyle, and risk indicators for chronic diseases during a home interview. Subjects were subsequently invited at the study centre for clinical examination and assessment of diet. Assessment of diet Subjects were interviewed at the study centre by a trained dietician, who used a validated, semi-quantitative food frequency questionnaire [23]. The intake of total energy, alcohol, macronutrients, and a large number of micronutrients was computed using Dutch food composition tables [24]. No information on salt use was obtained and therefore data on dietary sodium were considered unreliable for this analysis. Clinical examination Height and body weight were measured with the subject wearing indoor clothing without shoes. The body mass index was computed as weight divided by height squared. A trained research assistant measured sitting systolic and diastolic blood pressure twice with a random-zero sphygmomanometer after a 5-min rest, and values were averaged. Hypertension was defined as a systolic blood pressure ≥160 mmHg or diastolic blood pressure ≥95 mmHg or use of antihypertensive medication. Diabetes mellitus was considered present when the subject reported antidiabetic treatment, or when random or post-load plasma glucose levels were 11.1 mmol/l or higher. CVD was considered present in case of a verified history of myocardial infarction, stroke, coronary bypass grafting, or percutaneous transluminal coronary angioplasty. Serum total and HDL cholesterol level (mmol/l) were determined by standard laboratory methods [25]. Assessment of sodium and potassium excretion Participants collected an overnight urine sample before visiting the research centre and recorded collection times on the jar. They were not aware that samples would be used for estimation of electrolyte intake. At the research centre, volumes were recorded, urines were swirled and 100 ml samples were taken. Samples were stored in plastic tubes at −20°C for future laboratory determinations. 
Urinary sodium, potassium and creatinine determinations were performed by the Vitros® 250 (formerly Ektachem 250) Chemistry System (Johnson & Johnson, Ortho-Clinical Diagnostics Inc., Rochester, New York). Determinations of electrolytes and creatinine were based on potentiometry and enzymatic conversion, respectively. Urinary sodium and potassium concentrations (mmol/l) were standardized to 24-h values using recorded collection times and urinary volumes (ml). In addition, urinary sodium/potassium ratio was computed.

Follow-up procedures

The present analysis is based on follow-up data collected from baseline (1990–1993) until 1 January 1998. Informed consent for collection of follow-up data was obtained from 7,802 participants (98%). Information on vital status was obtained at regular intervals from municipal population registries. General practitioners (GPs) used a computerized information system to record fatal and non-fatal events in the research area (covering 85% of the cohort). In the Netherlands, the GP forms the link to all specialized medical care and clinical events are unlikely to be missed by this follow-up procedure. Research physicians verified all information on incident events using GP records and hospital discharge letters. Events were coded independently by two physicians according to the International Classification of Diseases, 10th revision (ICD-10) [26]. Coded events were reviewed by a medical expert in the field, whose judgment was considered definite in case of discrepancies. Myocardial infarction comprised ICD-10 code I21 and stroke comprised ICD-10 codes I60-I67. Both fatal and non-fatal incident events were recorded. For the present study, only first events were considered. Events followed by death within 28 days were classified as fatal. CVD mortality comprised fatal myocardial infarction, fatal stroke, sudden cardiac death and other forms of fatal CVD (ICD-10 codes I20-I25, I46, I49, I50, I60-I67, I70-I74, and R96).

Study population

Of 7,129 subjects who visited the research centre, 6,605 adequately performed a timed overnight urine collection for which collection times were recorded and volumes exceeded 150 ml. Of those, 5,531 had blood pressure readings and these subjects were eligible for the present analysis. We followed a case–cohort approach for efficiency reasons. Assessment of urinary sodium, potassium and creatinine excretion was performed in all subjects who died (n = 795, including 217 cardiovascular deaths), and in those who experienced a myocardial infarction (n = 206) or stroke (n = 181) during follow-up. A random sample of 1,500 control subjects was taken from the eligible cohort for assessment of electrolyte excretions. Urine samples could not be retrieved for 52 of these subjects, and data on urinary sodium, potassium and creatinine were thus obtained in 1,448 subjects. Dietary data were available for 1,205 subjects (83%) of the random sample, 518 subjects (65%) who died during follow-up, 157 subjects (72%) who died from CVD, 170 subjects (83%) with an incident myocardial infarction and 147 subjects (81%) with an incident stroke. Reasons for missing dietary data were participation in the pilot phase of the Rotterdam Study, low cognitive function, and logistic reasons, as described in more detail elsewhere [23]. Of the random sub-cohort (n = 1,448), 783 subjects (54%) were free of CVD and hypertension at baseline.
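The standardization step described above is not spelled out in detail in the paper; a minimal sketch, assuming simple linear extrapolation of the overnight collection to 24 h, could look like this (all values below are invented for illustration):

```python
def to_24h(concentration_mmol_l: float, volume_ml: float, collection_h: float) -> float:
    """Scale an overnight urine collection to an estimated 24-h excretion (mmol).

    Assumes constant excretion over the day (simple linear extrapolation);
    the paper's exact standardization formula is not published.
    """
    overnight_mmol = concentration_mmol_l * volume_ml / 1000.0
    return overnight_mmol * 24.0 / collection_h

na_24h = to_24h(concentration_mmol_l=80, volume_ml=500, collection_h=8)
k_24h = to_24h(concentration_mmol_l=30, volume_ml=500, collection_h=8)
print(na_24h, k_24h, round(na_24h / k_24h, 1))  # 120.0 45.0 2.7, close to the cohort means
```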
Data analysis

Pearson correlations were computed to examine inter-relationships between urinary and dietary measures of electrolyte intake and associations with total energy intake. The association of urinary and dietary electrolytes with incident myocardial infarction, incident stroke, cardiovascular mortality and all-cause mortality was evaluated in a case–cohort design with standard Cox proportional-hazards models with modification of the standard errors based on robust variance estimates [27, 28]. We used the method according to Barlow, in which the random cohort is weighted by the inverse of the sampling fraction from the source population. Members of the random cohort are included from baseline until failure or censoring, whereas cases outside the cohort are included at the time of their event. For the Cox models we used Proc MI and Proc MIanalyze, in conjunction with Proc Phreg (SAS 8.2). Relative risks (RR) with 95% confidence intervals (95%-CI) were computed per 1 standard deviation increase in urinary sodium (mmol/24 h), urinary potassium (mmol/24 h) and dietary potassium intake (mg/day), and per 1 unit increase in urinary sodium/potassium ratio. Two-sided P-values below 0.05 were considered statistically significant. Adjustment was made for age, sex and, in urinary analyses, for 24-h urinary creatinine excretion (model 1). In a second analysis (model 2), additional adjustment was made for body mass index (kg/m2), smoking status (current, past, or never), diabetes mellitus (yes/no), use of diuretics (yes/no), and highest completed education (three categories). In a third analysis (model 3), dietary confounders were additionally adjusted for, i.e. daily intake of total energy (kJ), alcohol (g), calcium (g), and saturated fat (g). In the analysis for urinary sodium we additionally included urinary potassium in this model, and vice versa. Analyses were repeated after exclusion of subjects with a history of CVD or hypertension to avoid biased risk estimates due to intentional dietary changes. Within this sub-cohort, a predefined stratified analysis of urinary sodium and urinary sodium/potassium ratio with cardiovascular and all-cause mortality was performed in subjects with a high body mass index (i.e., ≥25 kg/m2), using model 3. Also in the sub-cohort free of CVD and hypertension, the distribution of 24-h urinary sodium excretion was divided into quartiles to be able to examine the relationship with all-cause mortality at extreme intakes. Quartiles of urinary sodium (cut-off levels: 66, 105 and 151 mmol/24 h) were entered categorically into the fully adjusted model (model 3), using the lower quartile as the reference.

Results

The study had a median follow-up of 5.5 years. Baseline characteristics of the study population are shown in Table 1. Randomly selected controls (n = 1,448) were, as expected, healthier at baseline than cases, as indicated by a lower prevalence of hypertension, diabetes, and CVD.
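As a rough Python analogue of the weighted case-cohort fit just described (the original analysis used SAS Proc Phreg with robust variance; the function, column names, and simplified weighting here are illustrative assumptions, not the authors' code):

```python
# Simplified sketch of a Barlow-type weighted case-cohort Cox fit using
# lifelines; a full implementation would also delay entry of cases outside
# the subcohort until just before their event time.
import pandas as pd
from lifelines import CoxPHFitter

def barlow_weights(df: pd.DataFrame, sampling_fraction: float) -> pd.DataFrame:
    """Weight random-subcohort non-cases by the inverse sampling fraction;
    cases keep weight 1."""
    df = df.copy()
    df["w"] = 1.0
    df.loc[df["in_subcohort"] & ~df["event"].astype(bool), "w"] = 1.0 / sampling_fraction
    return df

# Expected columns: followup_years, event (0/1), in_subcohort (bool), and
# urinary_na_sd (sodium standardized to mean 0 / SD 1, so the hazard ratio
# is per 1-SD increase). Usage, given such a DataFrame `raw`:
#
# df = barlow_weights(raw, sampling_fraction=1500 / 5531)  # fractions from the text
# cph = CoxPHFitter()
# cph.fit(df[["followup_years", "event", "urinary_na_sd", "w"]],
#         duration_col="followup_years", event_col="event",
#         weights_col="w", robust=True)  # robust (sandwich) standard errors
# cph.print_summary()
```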
Table 1 Baseline characteristics of the study population

                             Random        Cases
                             sample        Incident MI   Incident stroke   CVD mortality   All-cause mortality
No. of subjects              1,448         206           181               217             795
In random sample (%)         –             31            31                28              29
Age (year)                   69.2 (8.7)    71.0 (8.0)    74.0 (8.5)        76.8 (8.4)      76.9 (8.9)
Men (%)                      41            62            45                51              49
Body mass index (kg/m2)      26.4 (3.8)    26.3 (3.4)    26.0 (3.3)        26.2 (3.8)      25.7 (3.8)
Smoking status (%)a
  Current                    23            29            28                23              26
  Former                     41            48            42                47              40
  Never                      36            23            29                29              35
Alcohol use (%)              81            74            80                71              73
Educational level (%)a,b
  Low                        58            61            60                65              66
  Intermediate               32            31            34                30              28
  High                       10            8             6                 5               6
Serum cholesterol (mmol/l)
  Total                      6.6 (1.2)     6.3 (1.3)     6.5 (1.2)         6.6 (1.4)       6.3 (1.3)
  HDL                        1.4 (0.4)     1.3 (0.4)     1.3 (0.4)         1.2 (0.4)       1.3 (0.4)
Blood pressure (mmHg)
  Systolic                   140 (22)      145 (23)      149 (24)          146 (25)        145 (25)
  Diastolic                  74 (11)       74 (12)       75 (13)           73 (13)         73 (14)
Hypertension (%)c            37            44            53                55              47
Diabetes mellitus (%)d       10            21            22                26              21
History of CVD (%)e          17            35            17                39              28

Values are means with standard deviations, or percentages; CVD, cardiovascular disease; MI, myocardial infarction
a Values do not always add up to 100% due to rounding
b Highest achieved level of education; low, primary education or less; intermediate, secondary general or vocational education; high, higher vocational education, university
c Systolic blood pressure ≥160 mmHg or diastolic blood pressure ≥95 mmHg or use of antihypertensive medication
d Plasma glucose ≥11.1 mmol/l or treated with oral antidiabetes medication or insulin
e Verified history of cardiovascular disease, i.e. myocardial infarction, stroke, coronary bypass-grafting, or percutaneous transluminal coronary angioplasty

Baseline urinary excretions and dietary intakes are presented in Table 2. In the random sample, 24-h urinary sodium excretion estimated from overnight urine collection was 117 mmol (i.e., 2.7 g/day, which corresponds to a NaCl intake of 6.8 g/day). Urinary potassium excretion was 45 mmol/24 h (1.8 g/day), which was half the amount estimated by food frequency questionnaire (3.6 g/day). The correlation between urinary and dietary potassium was 0.21 (P < 0.001).

Table 2 Baseline urinary excretions and dietary intakes of Dutch men and women aged 55 years and over: the Rotterdam Study

                            Random subcohort   Incident MI   Incident stroke   CVD mortality   All-cause mortality
Urinary excretiona
  Volume (l/24 h)           1.4 (0.6)          1.4 (0.6)     1.4 (0.6)         1.3 (0.6)       1.3 (0.6)
  Sodium (mmol/24 h)        117 (69)           124 (68)      115 (72)          99 (61)         107 (66)
  Potassium (mmol/24 h)     45 (22)            47 (22)       45 (23)           44 (24)         44 (22)
  Sodium/potassium          2.8 (1.5)          2.7 (1.3)     2.7 (1.3)         2.5 (1.4)       2.6 (1.6)
  Creatinine (mmol/24 h)    9.2 (4.9)          9.8 (4.7)     8.4 (4.4)         8.1 (4.7)       8.1 (4.4)
  Sodium/creatinine         13.8 (6.6)         13.6 (6.1)    14.6 (7.1)        14.0 (8.0)      14.8 (7.9)
  Potassium/creatinine      5.4 (2.2)          5.3 (2.1)     5.8 (2.1)         6.1 (2.6)       6.1 (2.5)
Dietary intakeb
  Total energy (mJ/day)     8.3 (2.1)          8.6 (2.2)     8.4 (2.2)         8.3 (2.0)       8.5 (2.2)
  Saturated fat (g/day)     32 (12)            34 (13)       34 (13)           33 (13)         34 (12)
  Calcium (g/day)           1.1 (0.4)          1.1 (0.4)     1.1 (0.4)         1.1 (0.5)       1.1 (0.4)
  Sodium (g/day)c           2.2 (0.7)          2.3 (0.6)     2.2 (0.6)         2.2 (0.7)       2.2 (0.7)
  Potassium (g/day)         3.6 (0.8)          3.7 (0.8)     3.6 (0.8)         3.6 (0.9)       3.6 (0.9)

Values are means with standard deviations; CVD, cardiovascular disease; MI, myocardial infarction
a Based on one timed overnight urine sample
b Dietary data were available for 1,205 subjects of the random sample (83%), 170 MI cases (83%), 147 stroke cases (81%), 157 CVD deaths (72%), and 518 deaths from any cause (65%)
c Only from foods, discretionary sources not included

RR for cardiovascular events and all-cause mortality per 1-SD increase in 24-h urinary sodium are presented in Table 3. Urinary sodium was not significantly associated with incident myocardial infarction, incident stroke, or overall mortality.
For CVD mortality, however, a borderline significant inverse association was observed (RR = 0.77 (0.60–1.01) per 1-SD, model 3), but the relationship was attenuated after excluding subjects with a history of CVD or hypertension (RR = 0.83 (0.47–1.44) per 1-SD, model 3). In subjects initially free of CVD, the risk of all-cause mortality was also examined across quartiles of 24-h urinary sodium (median values: 45, 87, 125 and 190 mmol, respectively). RR in consecutive quartiles, using the lower quartile as the reference, were 0.80 (0.43–1.49), 0.66 (0.34–1.27) and 0.98 (0.54–1.78), respectively (model 3). In a subgroup analysis of CVD-free subjects with a body mass index ≥25 kg/m2, the association of urinary sodium with CVD mortality or all-cause mortality was not statistically significant either (RR = 0.91 (0.44–1.89) and RR = 1.19 (0.86–1.66) per 1-SD, respectively; model 3).

Table 3 Relative risk of urinary sodium with cardiovascular events and all-cause mortality in Dutch men and women aged 55 years and over

                       All subjectsa         Subjects initially free of CVD and hypertensiona
Incident MI
  RR, model 1b         1.13 (0.95–1.34)      1.04 (0.75–1.43)
  RR, model 2c         1.16 (0.98–1.39)      1.07 (0.77–1.50)
  RR, model 3d         1.19 (0.97–1.46)      1.14 (0.77–1.69)
Incident stroke
  RR, model 1          1.09 (0.89–1.33)      1.16 (0.84–1.61)
  RR, model 2          1.09 (0.87–1.35)      1.15 (0.81–1.62)
  RR, model 3          1.08 (0.80–1.46)      1.02 (0.66–1.58)
CVD mortalitye
  RR, model 1          0.74 (0.60–0.91)      0.84 (0.59–1.22)
  RR, model 2          0.83 (0.68–1.02)      0.95 (0.66–1.39)
  RR, model 3          0.77 (0.60–1.01)      0.83 (0.47–1.44)
All-cause mortality
  RR, model 1          0.90 (0.81–1.02)      1.00 (0.83–1.20)
  RR, model 2          0.96 (0.84–1.09)      1.10 (0.91–1.34)
  RR, model 3          0.95 (0.81–1.12)      1.12 (0.86–1.46)

RR, relative risk with 95% confidence interval per standard deviation increase in urinary sodium (mmol/24 h), obtained by Cox proportional hazard analysis
a Number of cases and subjects in random sample given in Table 1
b Adjusted for age, sex and (for urinary sodium) 24-h urinary creatinine excretion
c As model 1, with additional adjustment for body mass index, smoking status, diabetes, use of diuretics, highest completed education
d As model 2, with additional adjustment for daily intake of total energy, alcohol, calcium, saturated fat and 24-h urinary potassium excretion
e Cardiovascular mortality comprises fatal myocardial infarction, fatal stroke, sudden cardiac death and other forms of fatal CVD

Findings for potassium are presented in Table 4. Urinary potassium tended to be positively associated with incident CVD events or mortality, especially in subjects who were initially free of CVD and hypertension. After full adjustment for confounders (model 3), however, none of these associations were statistically significant. Nor did urinary potassium predict all-cause mortality. For dietary potassium, similar results were obtained, except for the risk of all-cause mortality, which was significantly reduced both in the entire cohort (RR = 0.78 (0.65–0.94) per 1-SD) and in subjects initially free of CVD and hypertension (RR = 0.71 (0.51–1.00); model 3).
Table 4 Relationship of urinary and dietary potassium with cardiovascular events and all-cause mortality in Dutch men and women aged 55 years and over

                       All subjectsa                            Subjects initially free of CVD and hypertensiona
                       Urinary (mmol/24 h)  Dietary (mg/day)    Urinary (mmol/24 h)  Dietary (mg/day)
Incident MI
  RR, model 1b         1.10 (0.89–1.35)     0.98 (0.85–1.13)    1.15 (0.84–1.59)     1.14 (0.85–1.54)
  RR, model 2c         1.16 (0.94–1.43)     0.94 (0.81–1.09)    1.25 (0.94–1.74)     1.07 (0.78–1.46)
  RR, model 3d         1.11 (0.87–1.43)     0.90 (0.65–1.24)    1.22 (0.79–1.87)     1.32 (0.65–2.67)
Incident stroke
  RR, model 1          1.09 (0.87–1.36)     0.99 (0.84–1.17)    1.12 (0.79–1.60)     1.07 (0.79–1.43)
  RR, model 2          1.12 (0.89–1.42)     0.99 (0.84–1.16)    1.15 (0.77–1.71)     1.20 (0.86–1.68)
  RR, model 3          1.17 (0.86–1.58)     1.02 (0.71–1.46)    1.11 (0.61–2.04)     1.06 (0.50–2.29)
CVD mortalitye
  RR, model 1          1.13 (0.90–1.41)     0.97 (0.82–1.14)    1.63 (1.14–2.33)     1.23 (0.83–1.84)
  RR, model 2          1.14 (0.92–1.42)     0.95 (0.81–1.12)    1.66 (1.08–2.56)     1.19 (0.78–1.83)
  RR, model 3          1.23 (0.94–1.60)     0.97 (0.72–1.31)    1.45 (0.84–2.54)     1.43 (0.67–3.03)
All-cause mortality
  RR, model 1          1.04 (0.91–1.18)     0.91 (0.82–1.01)    1.06 (0.88–1.28)     0.95 (0.78–1.17)
  RR, model 2          1.06 (0.86–1.31)     0.89 (0.80–0.99)    1.06 (0.86–1.31)     0.90 (0.73–1.12)
  RR, model 3          1.08 (0.91–1.28)     0.78 (0.65–0.94)    0.95 (0.71–1.26)     0.71 (0.51–1.00)

RR, relative risk with 95% confidence interval per standard deviation increase in urinary or dietary potassium, obtained by Cox proportional hazard analysis
a Number of cases and subjects in random sample given in Table 1
b Adjusted for age, sex and (for urinary potassium) 24-h urinary creatinine excretion
c As model 1, with additional adjustment for body mass index, smoking status, diabetes, use of diuretics and highest completed education
d As model 2, with additional adjustment for daily intake of total energy, alcohol, calcium, saturated fat and 24-h urinary sodium excretion
e Cardiovascular mortality comprises fatal myocardial infarction, fatal stroke, sudden cardiac death and other forms of fatal CVD

Data for urinary sodium/potassium ratio (Table 5) showed no relationship with CVD events and mortality. When restricting this analysis to CVD-free subjects with a body mass index ≥25 kg/m2, urinary sodium/potassium ratio was significantly associated with all-cause mortality (RR = 1.19 (1.02–1.39) per unit, model 3), but not with CVD mortality (RR = 0.86 (0.60–1.25)).
Table 5  Relationship of the urinary sodium/potassium ratio with cardiovascular events and all-cause mortality in Dutch men and women aged 55 years and over

Endpoint             Model   All subjects(a)     Subjects initially free of CVD and hypertension(a)
Incident MI          1(b)    1.03 (0.93–1.14)    0.92 (0.76–1.13)
                     2(c)    1.02 (0.92–1.13)    0.90 (0.73–1.10)
                     3(d)    1.04 (0.93–1.17)    0.91 (0.72–1.16)
Incident stroke      1       1.01 (0.89–1.13)    1.01 (0.83–1.23)
                     2       0.99 (0.86–1.13)    0.99 (0.77–1.20)
                     3       0.99 (0.83–1.18)    0.90 (0.66–1.22)
CVD mortality(e)     1       0.88 (0.77–1.01)    0.85 (0.65–1.11)
                     2       0.93 (0.81–1.06)    0.86 (0.66–1.13)
                     3       0.92 (0.80–1.07)    0.91 (0.65–1.27)
All-cause mortality  1       0.99 (0.91–1.06)    1.04 (0.91–1.18)
                     2       0.99 (0.92–1.08)    1.06 (0.93–1.22)
                     3       1.01 (0.91–1.12)    1.13 (0.93–1.36)

RR, relative risk with 95% confidence interval per 1 unit increase in the urinary sodium/potassium ratio, obtained by Cox proportional hazards analysis. (a) Number of cases and subjects in random sample given in Table 1. (b) Adjusted for age, sex and 24-h urinary creatinine excretion. (c) As model 1, with additional adjustment for body mass index, smoking status, diabetes, use of diuretics and highest completed education. (d) As model 2, with additional adjustment for daily intake of total energy, alcohol, calcium, and saturated fat. (e) Cardiovascular mortality comprises fatal myocardial infarction, fatal stroke, sudden cardiac death and other forms of fatal CVD.

Discussion In an unselected population of older Dutch subjects, we found no consistent association of urinary sodium and potassium with CVD events or mortality. Dietary potassium estimated by food frequency questionnaire, however, was associated with a lower risk of all-cause mortality. The urinary sodium/potassium ratio was positively associated with mortality risk, but only in overweight subjects without CVD and hypertension at baseline. Electrolyte intake was assessed from one overnight urine collection, which provides a crude estimate of short-term intake [29, 30]. Luft et al. examined the utility of nocturnal sodium excretion under controlled intake conditions in which daily sodium intake was randomly varied [31]. In that study, both 24-h and nocturnal sodium excretion on a randomly selected day estimated the daily intake reasonably well. Nevertheless, it is likely that misclassification has attenuated the relationships with CVD events and mortality in our study. Incomplete urine collection was partly adjusted for by adding urinary creatinine excretion to the multivariate models. In addition, we examined the urinary sodium/potassium ratio, which is less influenced by incomplete urine collection. To exclude bias due to dietary changes, we repeated all analyses in a subgroup without CVD or hypertension at baseline. Salt intake was not consistently related to CVD or mortality in our study. An explanation for the absence of a positive relationship, apart from regression dilution bias, may be the relatively narrow range of salt intake in the Netherlands and the resulting lack of contrast in exposure within a single population. An increased risk of mortality was observed for high salt intake in overweight Finnish subjects with 24-h urinary excretions close to 200 mmol (RR = 1.56 per 100 mmol) [16]. However, this could not be confirmed in our analysis of quartiles of sodium intake in relation to overall mortality (RR = 0.98 in CVD-free subjects with a median sodium excretion of 190 mmol/24 h).
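The quartile analysis referred to here can be sketched in the same framework. Again this is a hedged illustration with hypothetical column names, not the code used in the study, and only the model 1 covariates are shown.

```python
# Minimal sketch (not the authors' code): RR of all-cause mortality across
# quartiles of 24-h urinary sodium, with the lowest quartile as reference.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")  # hypothetical dataset; sex coded 0/1

df["na_q"] = pd.qcut(df["urinary_na"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
dummies = pd.get_dummies(df["na_q"], drop_first=True).astype(float)  # Q1 = reference

data = pd.concat(
    [df[["follow_up_years", "death", "age", "sex", "creatinine"]], dummies], axis=1
)
cph = CoxPHFitter()
cph.fit(data, duration_col="follow_up_years", event_col="death")
# The exp(coef) values for Q2-Q4 play the role of the RRs 0.80, 0.66 and 0.98
# reported above (the published model 3 additionally adjusts for dietary factors).
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```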
The absence of a relationship between salt intake and mortality in our study corroborates the findings of the large Scottish Heart Health Study among almost 12,000 middle-aged subjects with 24-h urine samples [11]. Nor did follow-up data from the MRFIT trial show a relationship between dietary sodium intake estimated by 24-h recall and cardiovascular events or mortality [12]. However, other prospective epidemiological studies do suggest that sodium intake is related to morbidity and mortality [6, 7, 15, 16], although this may be confined to specific subgroups with overweight, hypertension or high salt intake. In overweight subjects, we did find a positive relationship between the urinary sodium/potassium ratio and overall mortality (19% increase in risk per unit increase in the sodium/potassium ratio). A protective effect of potassium intake against stroke, as previously reported [19–21], could not be confirmed by our data. Nor did we observe an association between potassium intake and coronary events. Long-term rather than short-term intake may be relevant, and we therefore estimated habitual potassium intake during the preceding year by food frequency questionnaire. Mortality risk was reduced by 29% per 1-SD increase in dietary potassium, although only in subjects initially free of CVD and hypertension. Apart from misclassification, which is less likely for potassium than for sodium, we have no explanation for the absence of a relationship between potassium intake and CVD events. In the Scottish Heart Health Study, 24-h urinary potassium excretion was inversely related to all-cause mortality and coronary events [11]. Data on potassium intake in relation to mortality, however, are sparse, and more prospective population-based studies are needed before conclusions can be drawn. Preferably, the effect of dietary potassium on CVD should be examined in a randomized trial. Prolonged differences in blood pressure of 5 mmHg may result in a one-third reduction in stroke and a one-fifth reduction in coronary events [32]. A meta-analysis of randomized controlled trials showed that a sodium reduction of around 2 g per day could lower blood pressure by 2–3 mmHg, the effect being twice as large in hypertensives [33]. The World Health Organization recommends that people consume less than 5 g of salt (i.e. 2 g of sodium) per day in order to prevent CVD [34]. From this and other epidemiological studies we conclude that the effect of dietary salt on clinical cardiovascular endpoints and overall mortality, within the range of intake commonly observed in Western countries, has not yet been established. More research is needed to settle the discussion regarding this major public health issue.
[ "sodium", "potassium", "mortality", "cardiovascular disease", "myocardial infarction", "stroke", "population-based", "salt" ]
[ "P", "P", "P", "P", "P", "P", "P", "P" ]
Int_J_Hematol-4-1-2276241
Diagnosis of acute myeloid leukemia according to the WHO classification in the Japan Adult Leukemia Study Group AML-97 protocol
We reviewed and morphologically categorized 638 of the 809 patients registered in the Japan Adult Leukemia Study Group acute myeloid leukemia (AML)-97 protocol. Patients with the M3 subtype were excluded from the study group. According to the WHO classification, 171 patients (26.8%) had AML with recurrent genetic abnormalities, 133 (20.8%) had AML with multilineage dysplasia (MLD), 331 (51.9%) had AML not otherwise categorized, and 3 (0.5%) had acute leukemia of ambiguous lineage. The platelet count was higher and the rate of myeloperoxidase (MPO)-positive blasts was lower in AML with MLD than in the other WHO categories. The outcome was significantly better in patients with high (≥50%) than with low (<50%) ratios of MPO-positive blasts (P < 0.01). The 5-year survival rates for patients with favorable, intermediate, and adverse karyotypes were 63.4, 39.1, and 0.0%, respectively, and 35.5% for those with 11q23 abnormalities (P < 0.0001). Overall survival (OS) did not significantly differ between nine patients with t(9;11) and 23 with other 11q23 abnormalities (P = 0.22). Our results confirmed that the cytogenetic profile, MLD phenotype, and MPO positivity of blasts are associated with survival in patients with AML, and showed that each category of the WHO classification has distinct characteristics in terms of incidence, clinical features, and OS. Introduction The French-American-British (FAB) classification of acute myeloid leukemia (AML), based on morphological and cytochemical findings, was established in 1976 and has since become the standard classification [1, 2]. However, specific chromosomal and genetic abnormalities that have emerged from analyses of prognostic factors for AML are recognized as important in selecting treatment strategies and are reflected in the AML classification as factors required to establish the disease entity [3]. The 1999 World Health Organization (WHO) classification incorporates morphological, immunological, cytogenetic, genetic, and clinical features [4–6]. The WHO and FAB classifications differ in several aspects. The blast threshold required for a diagnosis of AML was reduced from 30 to 20%, and new AML categories have been added for recurrent cytogenetic abnormalities, the presence of multilineage dysplasia (MLD), and a history of chemotherapy, as well as subtypes for acute basophilic leukemia, acute panmyelosis with myelofibrosis, and myeloid sarcoma. The WHO classification thus comprises more subtypes and is more comprehensive than the FAB classification. Cytogenetic features are important prognostic factors in AML [3, 7–12]. However, the cytogenetic risk category of 11q23 abnormalities has not yet been established. Over 30 partner genes involved in 11q23 abnormalities have been described, and some reports indicate that patients with t(9;11) have a relatively more favorable prognosis than those with other partner chromosomes/partner genes [13–16]. In the present study, we reviewed stained smears of blood and bone marrow from patients who were registered in the Japan Adult Leukemia Study Group (JALSG) AML-97 trial, and classified them into FAB subtypes and WHO categories. We also evaluated their survival on the basis of the WHO classification, the MPO positivity of blasts, and cytogenetic findings including 11q23 abnormalities. Patients and methods Patients Between December 1997 and July 2001, 809 patients aged from 15 to 66 years with untreated AML (excluding M3) were registered from 103 institutions in the AML-97 trial of the JALSG.
The patients were diagnosed with AML according to the FAB criteria at each institution. Patients with a history of MDS, hematological abnormalities before the diagnosis of AML, or a history of chemotherapy were not eligible for the AML-97 trial. Treatment strategies Details of the JALSG AML-97 treatment protocol are described elsewhere [17]. In brief, all patients underwent induction therapy consisting of idarubicin (3 days) and Ara-C (7 days). Patients who achieved complete remission were randomized into one of two arms of consolidation chemotherapy alone or in combination with maintenance chemotherapy. Patients who were placed into intermediate/poor risk groups according to the JALSG scoring system [17] and who had an HLA-identical sibling (≤50 years old) were simultaneously assigned to receive allogeneic hematopoietic stem cell transplantation during their first remission. Morphologic and cytochemical analyses Peripheral blood and bone marrow smears from registered patients were sent to Nagasaki University for staining with May-Giemsa, MPO, and esterase, and the diagnosis was then reevaluated by the Central Review Committee for Morphological Diagnosis. Patients were subsequently categorized according to the FAB and WHO classifications. Dyserythropoietic features were defined as >50% dysplastic features in at least 25 erythroblasts, and dysgranulopoietic features as ≥3 neutrophils with hyposegmented nuclei (pseudo-Pelger–Huët anomaly) or hypogranular or agranular neutrophils (>50% of ≥10 neutrophils). Dysmegakaryopoietic features were defined as ≥3 megakaryocytes that were micronuclear, multiseparate nuclear, or large mononuclear [18]. We assessed the ratios (%) of MPO-positive blasts on MPO-stained bone marrow smears using the diaminobenzidine method [19]. Cytogenetic analysis Cytogenetic analysis was performed either in laboratories of the participating hospitals or in authorized commercial laboratories. The karyotypes of leukemic cells were collected through the JALSG AML-97 case report forms and reviewed by the Central Review Committee for Karyotyping. The patients were classified into favorable, intermediate, or adverse risk groups based on karyotypes according to the results of the Medical Research Council (MRC) AML 10 trial [3]. The favorable risk group included patients with t(8;21) and inv(16), whether alone or in combination with other abnormalities. The intermediate risk group included those with a normal karyotype and other abnormalities that were not classified as either favorable or adverse. The adverse risk group included patients with a complex karyotype with four or more numerical or structural aberrations, −5, deletion (5q), and −7, whether alone or in combination with intermediate risk or other adverse risk abnormalities. Statistical analysis The overall survival (OS) for all patients was defined as the interval from the date of diagnosis to that of death. We applied the Kaplan–Meier method to estimate OS and 5-year survival. We compared survival rates between groups using the log-rank test (Stat View J 5.0). Differences were examined by the Chi-square test using Excel software. All P-values are two-sided, and values <0.05 were considered significant.
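The survival analyses here were run in Stat View J 5.0 and Excel. Purely as an illustration, the same Kaplan–Meier and log-rank computations can be reproduced with the open-source lifelines library; the file and column names (os_years, died, mpo_positive_pct) are hypothetical, using the high- versus low-MPO comparison reported later in the Results as the example.

```python
# Minimal sketch (not the authors' code): Kaplan-Meier OS estimate and a
# log-rank comparison between two groups, e.g., high- vs low-MPO patients.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("aml97.csv")  # hypothetical dataset
high = df[df["mpo_positive_pct"] >= 50]
low = df[df["mpo_positive_pct"] < 50]

kmf = KaplanMeierFitter()
kmf.fit(high["os_years"], event_observed=high["died"], label="high MPO")
print(kmf.survival_function_at_times(5.0))  # 5-year OS estimate

res = logrank_test(high["os_years"], low["os_years"],
                   event_observed_A=high["died"], event_observed_B=low["died"])
print(res.p_value)  # two-sided log-rank P-value
```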
Results Patient characteristics Of the 809 registered patients, 638 could be categorized according to the WHO classification. Data were incomplete for 10 of the 638 patients. Table 1 lists the characteristics of the patients. The median age of all 638 patients (390 males and 248 females) was 45 years (range 15–66 years). The median values of WBC, hemoglobin (Hb), platelets, and the ratio of blasts in the bone marrow were 13.7 × 10⁹/l, 8.3 g/dl, 52.0 × 10⁹/l, and 56.0%, respectively.

Table 1  Patient characteristics

Age (years)                 45 (15–66)
Male/female                 390/248
WBC count (×10⁹/l)          13.7 (0.4–709)
Hemoglobin (g/dl)           8.3 (3.8–17.2)
Platelet count (×10⁹/l)     52 (0–890)
Bone marrow blasts (%)      56 (6–99)

Values are presented as the median (range). WBC white blood cell

FAB classification Table 2 shows the FAB classification of the 638 patients. Most were classified as M2 (n = 261; 40.9%), followed by M4 (n = 148; 23.2%) and M1 (n = 109; 17.1%), with M0, M4Eo, M5a, M5b, M6, M7, and acute leukemia of ambiguous lineage comprising the remainder, in that order.

Table 2  Number of patients according to the FAB classification

Subtype   Description                               No. of patients   %
M0        Minimally differentiated AML              30                4.7
M1        AML without maturation                    109               17.1
M2        AML with maturation                       261               40.9
M4        Acute myelomonocytic leukemia (AMMoL)     148               23.2
M4Eo      AMMoL with eosinophils                    23                3.6
M5a       Acute monoblastic leukemia                19                3.0
M5b       Acute monocytic leukemia                  24                3.8
M6        Acute erythroleukemia                     16                2.5
M7        Acute megakaryoblastic leukemia           5                 0.8
          Acute leukemia of ambiguous lineage       3                 0.5
Total                                               638               100

WHO classification and clinical characteristics Table 3 shows the patients categorized according to the WHO classification. The first category, AML with recurrent genetic abnormalities, accounted for 171 patients (26.8%); 133 (20.8%) were in the second category, AML with MLD; 331 (51.9%) were in the fourth category, AML not otherwise categorized; and 3 (0.5%) were categorized as having acute leukemia of ambiguous lineage. Most patients in the second category corresponded to those with a de novo MLD phenotype. We found that the 144 patients diagnosed with the MLD phenotype comprised 133 (92.4%) in the second category, 10 (7.0%) with 11q23 abnormalities, and 1 (0.7%) with acute leukemia of ambiguous lineage. Figure 1 shows the OS of each category. The 5-year survival rates of the first, second, and fourth categories were 58.2, 22.5, and 40.9%, respectively (P < 0.0001).

Table 3  Number of patients according to the WHO classification

Category and subtype                                        No. of patients   %
I. AML with recurrent genetic abnormalities                 171               26.8
   t(8;21)(q22;q22);(AML1/ETO)                              113               17.7
   inv(16)(p13;q22) or t(16;16)(p13;q22);(CBFβ/MYH11)       26                4.1
   t(15;17)(q22;q12)(PML/RARα)                              –                 –
   11q23 (MLL) abnormalities                                32                5.0
II. AML with multilineage dysplasia                         133               20.8
   Following MDS                                            –                 –
   Without antecedent MDS                                   133               20.8
III. AML and MDS, therapy-related                           –                 –
   Alkylating agent-related                                 –                 –
   Topoisomerase type II inhibitor-related                  –                 –
   Other types                                              –                 –
IV. AML not otherwise categorized                           331               51.9
   AML, minimally differentiated                            25                3.9
   AML without maturation                                   99                15.5
   AML with maturation                                      108               16.9
   Acute myelomonocytic leukemia (AMMoL)                    63                9.9
   AMMoL with eosinophilia                                  5                 0.8
   Acute monoblastic leukemia                               8                 1.3
   Acute monocytic leukemia                                 16                2.5
   Acute erythroid leukemia                                 6                 0.9
   Acute megakaryoblastic leukemia                          1                 0.2
Acute leukemia of ambiguous lineage                         3                 0.5
Total                                                       638               100

Fig. 1  Overall survival of patients categorized according to the WHO classification

Table 4 compares the clinical features among the WHO categories. The mean values of platelets, WBC, and Hb, and the ratios (%) of blasts in bone marrow and of MPO-positive blasts differed significantly, whereas age did not. Patients in the second category had a higher platelet count (111.0 × 10⁹/l), whereas those with 11q23 abnormalities had a lower count (38.3 × 10⁹/l), compared with the other subtypes.
Table 4  Comparison of clinical findings of patients diagnosed according to the WHO classification

Category     Platelets (×10⁹/l ± SE)   WBC (×10⁹/l ± SE)   Hb (g/dl ± SE)    Age (years ± SE)   Blasts in bone marrow (% ± SE)   MPO positivity of blasts (% ± SE)
I, t(8;21)   76.7 ± 56.43 (113)(a)     1.4 ± 0.6 (113)     7.8 ± 0.2 (113)   41.6 ± 1.3 (113)   49.9 ± 2.0 (113)                 93.3 ± 3.3 (108)
I, inv(16)   57.8 ± 52.03 (26)         6.6 ± 1.2 (26)      9.2 ± 0.5 (26)    44.5 ± 2.6 (26)    50.5 ± 4.1 (26)                  66.9 ± 6.7 (26)
I, 11q23     38.3 ± 30.8 (32)          4.3 ± 1.1 (32)      8.9 ± 0.4 (32)    41.6 ± 2.4 (32)    56.3 ± 3.7 (32)                  43.6 ± 6.1 (32)
II           111.0 ± 121.5 (133)       3.0 ± 0.5 (133)     8.3 ± 0.2 (133)   44.2 ± 1.2 (133)   48.0 ± 1.8 (133)                 34.0 ± 3.1 (126)
IV           72.8 ± 91.7 (330)         5.1 ± 0.3 (331)     8.8 ± 0.1 (330)   43.8 ± 0.7 (331)   65.7 ± 1.2 (328)                 53.7 ± 1.9 (312)
P value      <0.0001                   <0.0001             0.0004            0.4077             <0.0001                          <0.0001

SE standard error, WBC white blood cell, MPO myeloperoxidase, Hb hemoglobin. (a) Number of patients

The WBC count of patients with t(8;21) was 1.4 × 10⁹/l, lower than in the other subtypes. The MPO-positive rate of blasts was higher among patients with t(8;21) (93.3%) and lower among patients in the second category (34.0%) than in the other subtypes. All patients were grouped as high- or low-MPO according to whether ≥50% or <50% of blasts were MPO-positive, respectively. A total of 339 patients (53.1%) were classified as high-MPO and 268 (42.0%) as low-MPO; the MPO status of blasts could not be assessed in 31 (4.9%). Figure 2 shows the OS of patients with high or low MPO. The 5-year survival rates for patients with high and low MPO were 50.7 and 29.6%, respectively (P < 0.0001).

Fig. 2  Overall survival of patients with high or low MPO-positive blasts

Cytogenetics All 638 patients were classified into favorable (n = 139; 21.8%), intermediate (n = 413; 64.7%), and adverse (n = 54; 8.5%) cytogenetic risk groups (Table 5). Figure 3 shows the OS according to this stratification. The 5-year survival rates were 63.4, 39.3, and 0.0% in the favorable, intermediate (except for those with 11q23 abnormalities), and adverse risk groups, respectively, and 35.5% in the group with 11q23 abnormalities (P < 0.0001).

Table 5  Distribution of patients classified by cytogenetic risk

Cytogenetic risk group     No. of patients   %
Favorable                  139               21.8
   t(8;21)                 113               17.7
   inv(16)                 26                4.1
Intermediate               413               64.7
   Normal karyotype        267               41.8
   11q23                   32                5.0
   Ph(+)                   7                 1.1
   t(7;11)(p15;p15)        4                 0.6
   t(6;9)                  4                 0.6
   Other                   131               20.5
Adverse                    54                8.5
   Complex                 41                6.4
   −7                      2                 0.3
   abn(3q)                 5                 0.8
   del(5q)                 2                 0.3
   −5                      1                 0.2
   Other                   3                 0.5
Total                      638               100.0

Fig. 3  Overall survival of patients stratified according to cytogenetic risk groups. Significant differences were observed between patients with favorable, intermediate (except 11q23), and adverse karyotypes (P < 0.0001)

The numbers of patients with or without MLD and with high or low MPO in each cytogenetic risk group are listed in Table 6. None of the patients with the MLD phenotype were classified into the favorable risk group, while 129 (89.6%) and 15 (10.4%) of the 144 patients with MLD were classified into the intermediate and adverse risk groups, respectively. Only 15 patients (4.4%) in the high-MPO group were classified as having an adverse risk, while 11 (4.1%) in the low-MPO group were included in the favorable risk group.
Table 6  Relationship between cytogenetic risk groups and MLD phenotype or MPO-positive rates of blasts

                Favorable (n = 139)   Intermediate (n = 445)   Adverse (n = 54)   Total
MLD
   +            0                     129 (89.5%)              15 (10.4%)         144
   −            138 (28.2%)           292 (59.6%)              38 (7.8%)          490
   Unknown      1                     2                        1                  4
MPO
   High         123 (36.3%)           201 (59.3%)              15 (4.4%)          339
   Low          11 (4.1%)             221 (82.5%)              36 (13.4%)         268
   Unknown      5                     23                       3                  31

High- and low-MPO indicate percentages of myeloperoxidase-positive blasts ≥50 and <50%, respectively. MLD multilineage dysplasia

The 32 patients with 11q23 abnormalities comprised 11 (34.4%) with t(11;19), 9 (28.1%) with t(9;11), 5 (15.6%) with del(11)(q23), 4 (12.5%) with t(6;11), and 3 (9.4%) with t(11;17). Figure 4 shows the OS of the intermediate risk group. The 5-year survival rate was 44.0% in patients with a normal karyotype, 35.5% in those with 11q23 abnormalities, and 30.6% in the remaining patients, including those with t(7;11), t(6;9), and Ph(+) abnormalities (P = 0.033).

Fig. 4  Overall survival of patients with subtypes of intermediate cytogenetic risk. Significant differences were observed between patients with a normal karyotype and those with 11q23 abnormalities (P = 0.033)

Table 7 shows the relationship between t(9;11) (n = 9) and other 11q23 abnormalities (n = 23). Patients with low MPO, without MLD, or with the FAB M5 subtype were found more frequently in the group with t(9;11) than in the group with other 11q23 abnormalities. The survival rates of the two groups did not significantly differ (P = 0.22, data not shown).

Table 7  Comparison of t(9;11) and other 11q23 abnormalities

              n    Auer+  Auer−  MPO high*  MPO low*  MLD+*  MLD−*  M1  M2  M4  M4Eo  M5a**  M5b  Median age (years)  Median survival (days)
t(9;11)       9    0      9      1          8         0      9      0   0   3   0     6      0    39                  1031.0
Other 11q23   23   5      18     13         10        10     13     1   3   13  1     2      3    48                  520.0
Total         32   5      27     14         18        10     22     1   3   16  1     8      3    44.5                531.5

High- and low-MPO indicate percentages of myeloperoxidase-positive blasts ≥50 and <50%, respectively. MLD multilineage dysplasia. * P < 0.05, ** P < 0.01
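The categorical contrasts flagged in Table 7 were assessed with the chi-square test named in the Methods. A minimal sketch with scipy follows, using the MPO cell counts from Table 7; with cells this small, Fisher's exact test is a common, more conservative alternative (shown for comparison only; it is not the test reported in the paper).

```python
# Minimal sketch (not the authors' code): chi-square test for high- vs low-MPO
# counts in t(9;11) versus other 11q23 cases (cell counts from Table 7).
from scipy.stats import chi2_contingency, fisher_exact

#                 high-MPO  low-MPO
table = [[ 1,  8],   # t(9;11)
         [13, 10]]   # other 11q23
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square P = {p:.3f}")  # consistent with the * (P < 0.05) flag in Table 7

odds, p_fisher = fisher_exact(table)
print(f"Fisher exact P = {p_fisher:.3f}")  # small-cell alternative
```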
Discussion We attempted to classify, according to the WHO system, those patients who had been reviewed morphologically and for whom chromosomal data were available. However, our series had some limitations in terms of analysis and patient selection. Although we obtained chromosomal data, genetic data were not available. Patients who were diagnosed with AML M3 or who had t(15;17), a history of MDS or preceding hematological abnormalities, or who had previously undergone chemotherapy were not eligible for the present study. However, multicenter trials might have some advantages in diagnosing AML according to the WHO classification, because morphological diagnoses and karyotypes are reviewed by the corresponding institutional committees. The incidence of each category of the WHO classification was similar to those in several reports when patients with t(15;17) and therapy-related AML were excluded [20–22]. We and several others have shown that approximately 30% of patients have recurrent genetic abnormalities. Multiplex reverse transcriptase-polymerase chain reaction (RT-PCR) assays have recently been applied to analyze cytogenetic abnormalities [21, 23, 24]. This method might increase the apparent frequency of the first WHO category. Thus, the multiplex RT-PCR assay might have to be incorporated into the WHO system. The JALSG has started a cohort study in which all AML patients in participating hospitals are registered and analyzed according to the WHO classification. That study should clarify the true proportions of the AML subtypes in the WHO classification. Few reports have included clinical data with the WHO classification. We found that the platelet count was higher among patients in the second category than in the other categories. This supports our previous finding that the platelet count is higher in patients with AML accompanied by the MLD phenotype [25]. Among patients with MLD, none were in the favorable risk group, whereas 89.6 and 10.4% of these patients fell into the intermediate and adverse risk groups, respectively. These differences might underlie the finding that OS was better among patients without than with MLD (P = 0.0002, data not shown). Previous studies have also associated the MLD phenotype with a poorer outcome, although MLD is not significantly prognostic on multivariate analysis [18, 26], and a German group showed that dysplastic features correlate with adverse karyotypes [26]. Furthermore, patients in the second category had a lower MPO-positive rate of blasts, whereas those with t(8;21) had a higher rate. Patients with high and low MPO were more frequently observed in the favorable and adverse risk groups, respectively. Multivariate analysis has shown that MPO is a significant factor affecting OS [19]. We did not assess prognostic factors by multivariate analysis here, because the main aim of this study was to categorize patients according to the WHO classification, and we have already examined these factors in a previous series [18, 19]. Several studies have demonstrated the impact of specific cytogenetic abnormalities on survival in AML [3, 7–12, 20–22]. The cytogenetic risk groups stratified the AML patients in the present study according to the MRC system, as in these reports [3]. We therefore confirmed the clinical usefulness of cytogenetics as the first category of the WHO classification. We found that 32 patients had 11q23 abnormalities. The MRC system revealed that de novo and secondary AML patients with 11q23 abnormalities had an intermediate outcome, with an OS rate of 45% at 5 years (n = 60; median age, 17 years) in a younger cohort [3] and an OS rate of 0% at 5 years (n = 11; median age, 64 years) in an elderly cohort [7]. In contrast, SWOG/ECOG trials including adult de novo AML patients (age, 16–55 years) assigned those with 11q abnormalities to the unfavorable cytogenetic subgroup [8]. Our data showed that patients with 11q23 abnormalities have an intermediate rather than an adverse outcome. The prognostic effect of 11q23 abnormalities might depend on the partner gene. Several studies have shown that 11q23 abnormalities with t(6;11) and t(10;11) are associated with a poor prognosis, whereas t(9;11) is associated with a superior OS, and such patients might respond well to intensive treatment, especially when the chemotherapy regimen includes high-dose cytarabine [15, 27–30]. The CALGB study showed that the median OS of 13.2 months among 23 patients with t(9;11) was significantly longer than the 7.7 months among 24 patients with other 11q23 rearrangements (P = 0.009) [30]. In a recent CALGB series of 54 patients with 11q23 abnormalities, 27 patients with t(9;11) had an intermediate outcome and a median OS of 13.2 months, whereas those with t(6;11) or t(11;19) had a poor outcome, with median OS of 7.2 or 8.4 months, respectively [15]. Conversely, Schoch et al. showed that 14 patients with t(9;11) had a median OS of 10.0 months compared with 12.8 months for 26 patients with other MLL rearrangements, and that the two cytogenetic groups did not significantly differ [13]. Our data showed that the nine patients with t(9;11) were more frequently classified as M5.
The MPO and MLD features significantly differed between patients with t(9;11) and those with other 11q23 abnormalities. However, the CALGB study found no significant differences in myelodysplastic features between the two cytogenetic groups [30]. In terms of OS, our results showed no significant difference between patients with t(9;11) and those with other 11q23 abnormalities (P = 0.22). Some problems are associated with the analyses of 11q23 abnormalities. We had few patients with these abnormalities, particularly with individual translocations, and genetic analysis was not performed. Thus, the prognostic risk of 11q23 abnormalities cannot be concluded from the present study. Nonetheless, these abnormalities were never associated with a favorable risk. To classify 11q23 abnormalities into prognostic risk groups, further investigations and genetic analyses of a large number of patients with 11q23 abnormalities are required. The fourth WHO category, AML not otherwise categorized, accounted for 52% of the patients in the present study. Most of them were classified into the intermediate risk group, and no useful prognostic subdivision was possible. Using cytogenetic features as a prognostic factor has limitations in patients with a normal karyotype, who accounted for 64.6% of the intermediate risk group (data not shown). Additional factors are required to stratify these patients. We and several others have suggested that such a stratification could be based on molecular genetic analysis [22, 31–35]. For example, FLT3 mutations are important biomarkers in patients with a normal karyotype and might be valuable for stratifying the intermediate risk group. Further follow-up studies might also shed light on the roles of FLT3 ITD mutations in the development of AML and support their use as targets for novel molecular agents against AML [22, 32]. Bienz et al. identified CEBPA mutations, FLT3-ITD, and differing levels of BAALC expression as having independent prognostic significance in patients with a normal karyotype [33]. If the clinical significance of these genetic markers can be confirmed, genetic analyses will probably be incorporated into the WHO classification. In summary, our results confirmed those of previous studies showing the prognostic significance of cytogenetics, MLD, and MPO positivity of blasts in AML. Furthermore, we categorized patients with de novo AML according to the WHO classification and described the clinical characteristics and OS of each category.
[ "who classification", "aml", "myeloperoxidase", "11q23 abnormalities", "multilineage dyplasia" ]
[ "P", "P", "P", "P", "M" ]
Purinergic_Signal-3-1-2096770
P2 receptors in atherosclerosis and postangioplasty restenosis
Atherosclerosis is an immunoinflammatory process that involves complex interactions between the vessel wall and blood components and is thought to be initiated by endothelial dysfunction [Ross (Nature 362:801–809, 1993); Fuster et al. (N Engl J Med 326:242–250, 1992); Davies and Woolf (Br Heart J 69:S3–S11, 1993)]. Extracellular nucleotides that are released from a variety of arterial and blood cells [Di Virgilio and Solini (Br J Pharmacol 135:831–842, 2002)] can bind to P2 receptors and modulate proliferation and migration of smooth muscle cells (SMC), which are known to be involved in the intimal hyperplasia that accompanies atherosclerosis and postangioplasty restenosis [Lafont et al. (Circ Res 76:996–1002, 1995)]. In addition, P2 receptors mediate many other functions, including platelet aggregation, leukocyte adherence, and arterial vasomotricity. A direct pathological role of P2 receptors is reinforced by recent evidence showing that upregulation and activation of P2Y2 receptors in rabbit arteries mediates intimal hyperplasia [Seye et al. (Circulation 106:2720–2726, 2002)]. In addition, upregulation of functional P2Y receptors also has been demonstrated in the basilar artery of the rat double-hemorrhage model [Carpenter et al. (Stroke 32:516–522, 2001)] and in the coronary artery of diabetic dyslipidemic pigs [Hill et al. (J Vasc Res 38:432–443, 2001)]. It has been proposed that upregulation of P2Y receptors may be a potential diagnostic indicator for the early stages of atherosclerosis [Elmaleh et al. (Proc Natl Acad Sci U S A 95:691–695, 1998)]. Therefore, particular effort must be made to understand the consequences of nucleotide release from cells in the cardiovascular system and the subsequent effects of P2 nucleotide receptor activation in blood vessels, which may reveal novel therapeutic strategies for atherosclerosis and restenosis after angioplasty. Introduction Atherosclerosis is a pathological phenomenon primarily affecting the large conduit arteries, for example the aorta and the coronary, carotid, iliac, and femoral arteries. The development of atherosclerotic lesions in arteries involves the intimal recruitment of smooth muscle cells (SMC) within the blood vessel wall, and also the infiltration of blood-derived cells [1]. This process necessitates the proliferation and migration of SMC from the underlying media and the endothelial adhesion of leukocytes and their infiltration into the subendothelium. A similar intimal accumulation of SMC also takes place during the postangioplasty restenotic process. Although the factors involved in intimal cell recruitment have not been clearly identified, it is becoming evident that endothelial dysfunction is a key factor in the development of vascular disease. Experimental evidence suggests that an intact endothelium plays a central role in maintaining a low proliferative state of SMC under normal conditions [10]. In arterial injury, endothelial cells, SMC, and various blood cells can release chemotactic factors and mitogens, including ATP and other nucleotides (see [4] for review). Activation of P2 nucleotide receptors has been shown to induce not only the proliferation and migration of vascular SMC but also apoptosis, a process involved in the evolution of the atherosclerotic plaque [11]. In addition, P2 receptors mediate both vasorelaxation and vasoconstriction of arteries, which may be involved in the vascular remodeling accompanying atherosclerosis and postangioplasty restenosis [5].
A better understanding of the causative agents and mechanisms of proliferation and migration of vascular SMC, as well as of the recruitment of blood-derived cells by the endothelium, could lead to prevention, attenuation, or even reversal of intimal thickening, which may dramatically reduce morbidity and mortality from vascular diseases such as atherosclerosis and restenosis after angioplasty. In this respect, a better understanding of the physiological role of P2 receptors in both normal and pathological blood vessels could potentially lead to a breakthrough in the fight against vascular disease. P2 receptors in the cardiovascular system Extracellular nucleotides bind to cell surface receptors known as P2 receptors, which are present in many tissues. To date, these receptors have been classified into two main families: the P2X receptors, which are ligand-gated ion channels composed of homo- or hetero-oligomers [12], and the P2Y receptors, which are seven-transmembrane receptors coupled via G proteins (Gq/11 or Gi/o) to phospholipase C (PLC) and/or adenylate cyclase [12–14]. In turn, PLC activation generates inositol 1,4,5-trisphosphate (IP3), a mediator of Ca2+ release from intracellular stores, and diacylglycerol, an activator of protein kinase C (PKC), whereas adenylate cyclase generates cyclic adenosine monophosphate (cAMP), an activator of protein kinase A (PKA). The cloning of seven P2X (P2X1, P2X2, P2X3, P2X4, P2X5, P2X6, P2X7) and eight P2Y (P2Y1, P2Y2, P2Y4, P2Y6, P2Y11, P2Y12, P2Y13, P2Y14) receptor subtypes has made it possible to use molecular and pharmacological approaches to study the distribution and functional properties of specific P2 receptor subtypes at the tissue and cellular level. P2 receptors in vascular cells The normal arterial wall consists of three layers: the intima, media, and adventitia. The single layer of endothelial cells facing the vessel lumen is a very important component of the vascular wall in that it releases both vasodilators, such as nitric oxide (NO) and prostacyclin (PGI2), and vasoconstrictors, such as thromboxane A2 and endothelin. The principal P2Y receptor subtypes that have been functionally characterized in endothelial cells are P2Y1 and P2Y2, but mRNAs for P2Y4 and P2Y6 receptors have also been detected [4]. Endothelium-dependent vasorelaxation has been attributed to the release of NO and PGI2 after binding of nucleotides to P2Y1 and P2Y2 receptors in endothelial cells [15], whereas vasoconstrictor effects in SMC result from the action of nucleotides on P2Y2 and P2X receptors [16, 17]. In most blood vessels, P2Y1 receptors for ADP are present on the endothelium and regulate vasodilatation by Ca2+-dependent (PLC-mediated) activation of nitric oxide synthase (NOS) and generation of endothelium-derived relaxing factor (EDRF) (see [4] for review). Endothelial prostacyclin production is also stimulated by P2Y1 and P2Y2 receptors, but this seems to play a minimal role in vasodilatation, at least under physiological conditions (see [18] for review). Recent studies have indicated that in the aorta of P2Y2-null mice the endothelium-dependent relaxation by ATP and ATPγS was inhibited, demonstrating the role of the P2Y2 receptors, but that relaxation by UTP and UDP was maintained, suggesting the additional involvement of P2Y6 receptors [19]. The majority of the cells in intact blood vessels are SMC, which occupy most of the media and are involved in vasoconstriction and vasorelaxation of the vessel.
P2Y2Rs in SMC mediate the induction of immediate-early and delayed-early cell cycle-dependent genes, consistent with a role for P2Y2Rs in the proliferation of vascular SMC [20, 21]. A recent study demonstrated that P2Y2 is the predominant functional receptor that responds to ATP and UTP in rat aortic SMC [22]. In human cerebral arteries, P2Y6 seems to be the predominant subtype and induces vasoconstriction when activated by UDP/UTP [23, 24]. This is consistent with findings from rat pulmonary and mesenteric arteries [25, 26]. In addition, a recent study on P2X1 knockout mice further supports the prominent contractile effect of the P2Y6 subtype in mesenteric arterial trees [27]. Taken together, these findings suggest that the principal receptor mediating UTP/UDP-induced contractile responses in blood vessels might be the P2Y6 subtype. Other studies have reported the presence of both P2Y4 and P2Y6 receptors in rat aortic SMC [21, 28, 29]. P2Y1 receptors are expressed in SMC of a number of blood vessel types and, like their endothelial cell counterparts, mediate vasodilatation, most likely through the activation of K+ channels (see [18] for review). The presence of several P2X receptor subtypes also has been reported in human saphenous vein SMC, including P2X1, P2X2, P2X4, and P2X7 receptors [30]. The outermost layer of the blood vessel consists of connective tissue and fibroblasts, whose contribution to the regulation of vascular tone has received little attention. A recent study showed that fibroblasts can migrate into the neointima, suggesting their possible involvement in the development of vascular diseases such as atherosclerosis and restenosis after angioplasty [31]. Since human and rat fibroblasts are known to express P2Y1, P2Y2, P2Y4, P2Y6, P2X3, P2X4, and P2X7 receptors [32], further investigation is needed to establish the role of fibroblast P2 receptors under either physiological or pathophysiological conditions. P2 receptors in blood cells G protein-coupled P2Y receptors whose activation leads to intracellular calcium mobilization have been observed in neutrophils [4, 33, 34], and turkey erythrocytes express both P2Y1 and P2X7 receptors [35, 36]. ATP and UTP act as secretagogues by binding to P2Y2Rs to enhance exocytosis of primary granules in neutrophils [37]. In macrophages, ATP activates a P2X7 receptor that differs from other ligand-gated ion channel P2X receptors by its ability to generate plasma membrane pores when activated [38]. Human T lymphocytes also have been shown to express P2X7 receptors [39]. Human monocytes and macrophages coexpress P2X1, P2X4, and P2X7 receptors, whereas granulocytes only express P2X7 receptors [40]. P2Y1, P2Y2, and P2Y6 receptors also are expressed in monocytes, B lymphocytes, and polymorphonuclear granulocytes [41]. Human platelets express P2Y1, P2Y12, and P2X1 receptors [42–44]. Thus, the diversity of P2 receptor expression in blood cells, as well as in endothelial cells, SMC, and fibroblasts, suggests that this receptor superfamily plays a significant role in the regulation of cardiovascular functions. P2 receptors regulate nucleotide-induced vascular smooth muscle cell proliferation Proliferation of SMC is a hallmark of vascular diseases such as atherosclerosis and restenosis following angioplasty. ATP and other nucleotides are released by aggregating platelets and damaged vascular cells, such as endothelial cells and SMC, under pathological or stress conditions [45]. Extracellular nucleotides can act via P2X and P2Y receptors to induce acute responses such as the regulation of vascular tone [46].
However, it is generally believed that the ionotropic P2X receptors do not mediate the chronic responses to nucleotides, such as cell proliferation. The mitogenic effect of extracellular nucleotides on vascular SMC (VSMC) has been known for years [47]. However, a potent antiproliferative effect of UTP on VSMC also has been reported in human arterial and venous SMC [48]. In either case, the P2 receptor subtype(s) responsible for these effects on the proliferation of VSMC has not been determined. Earlier studies have shown that ATP or UTP increased DNA and protein synthesis in subcultured rat aortic VSMC [49, 50]. In the same cell culture model, however, Malam-Souley et al. [51] were unable to detect increases in DNA synthesis after ATP/UTP stimulation, although ATP or UTP upregulated the expression of mRNAs for several cell cycle progression-related genes. Because P2X receptor agonists were essentially inactive, it was concluded that a P2U-like receptor (now termed P2Y2) was responsible for the mitogenic effects of ATP/UTP. However, the role of a P2Y4 receptor cannot be excluded, because the nucleotide agonist profiles of rat P2Y2 and P2Y4 receptors are essentially indistinguishable [52]. Indeed, Harper et al. [53] suggested that the P2Y4 receptor mediated ATP/UTP-induced proliferation of rat aortic VSMC. Recent studies indicated that ATP, UTP, and ITP, three agonists of the cloned porcine P2Y2 receptor, caused a concentration-dependent increase in DNA and protein synthesis and in cell number in pig coronary artery SMC, whereas UDP caused only a small increase in protein synthesis [54]. Intriguingly, ATP was much more potent and efficacious than UTP, ITP, and UDP in increasing DNA synthesis and expression of PCNA, a protein marker of cell proliferation, suggesting that another receptor may contribute to the proliferative response [54]. In vivo experiments have shown that intimal thickening of collared rabbit carotid arteries was greatly enhanced by in situ UTP application and was closely associated with osteopontin expression in medial SMC [6]. Osteopontin is chemotactic for SMC and is associated with arterial smooth muscle cell proliferation [55]. Moreover, both UTP and ATP increased osteopontin expression in cultured SMC, whereas ADP, UDP, and 2-MeSATP were ineffective, which suggests a role for the P2Y2R, at which ATP and UTP are equipotent. Direct evidence for involvement of the P2Y2R was provided by the inhibition of UTP-induced osteopontin expression in cultured SMC by P2Y2 antisense oligonucleotides [6]. P2Y6 receptors also have been shown to mediate proliferation of SMC in rat aorta [56]. Role of P2 receptors in the migration of vascular SMC Recent studies indicate that the extracellular nucleotides ATP, ADP, UTP, and UDP serve as directional cues for the migration of rat aortic SMC [57]. At identical concentrations, the most powerful migratory response induced by these nucleotides was elicited by UTP. Nucleotide-induced migration of SMC is the consequence of both chemotaxis and chemokinesis and may result either from the activation of one particular P2 nucleotide receptor subtype or from the activation of several P2 receptor subtypes.
The ability of UTP at submicromolar levels to stimulate migration of SMC supports the hypothesis that this response could have physiological consequences; it is essentially mediated by P2Y2 receptor activation, without excluding the participation of other P2Y receptor subtypes. The difference in the capacity of UTP and ATP to elicit migration of SMC could be due to the inhibition of nucleotide-induced cell migration by adenosine generated from ATP catabolism by cell surface ectonucleotidases. Indeed, ATP and UTP were equally effective in causing migration of SMC when ATP was prevented from degradation by addition of the ectonucleotidase inhibitor α,β-methylene-ATP [57]. Several P2Y receptor subtypes could be involved in nucleotide-induced migration of SMC. It has been shown in rat aortic SMC that the P2Y2 receptor is the predominant P2Y receptor subtype [28, 58], whereas lower levels of P2Y4 and P2Y1 receptor mRNA were detected [58]. The very low level of P2Y1 receptor mRNA expression was consistent with the absence of ADP-induced migration of cultured rat aortic SMC, demonstrating that P2Y1 is not involved in this process [58]. In addition, the same study showed that a commercially available solution of hexokinase-treated UDP (UTP-free) induced cell migration as effectively as untreated UDP, thereby demonstrating that UDP is chemotactic for aortic SMC through activation of the P2Y6 receptor. Conversely, migration of rat aortic SMC induced by UTP occurred even when UTP degradation by nucleoside diphosphate kinase was inhibited, demonstrating the involvement of P2Y2 and/or P2Y4 receptor(s) [58]. The increased migration of SMC in response to extracellular nucleotides could be related to increases in extracellular matrix (ECM) protein expression. Indeed, previous studies have shown that UTP induces osteopontin expression in rat and rabbit aortic SMC [6, 51]. Increased expression of osteopontin, an RGD-containing ECM protein, is associated with the activation of rat arterial SMC in vitro and in vivo [57]. The increase in osteopontin expression plays a key role in UTP-induced migration of rat aortic SMC, since a monoclonal antibody against osteopontin fully abolished UTP-induced migration [57], whereas an antibody against vitronectin, another ECM protein also involved in migration of human SMC [59], had no effect on the migration of rat aortic SMC [57]. UTP increases osteopontin mRNA expression by increasing both osteopontin mRNA stabilization and osteopontin promoter activity [60]. Recent studies have shown that activation of an AP-1 binding site located 76 bp upstream of the transcription start site in the rat osteopontin promoter is involved in UTP-induced osteopontin expression. Using a luciferase promoter deletion assay, Renault et al. [61] identified a new region of the rat osteopontin promoter (-1837 to -1757) that is responsive to UTP. This region contains an NFκB site located at -1800 and an E-box located at -1768. Supershift electrophoretic mobility shift and chromatin immunoprecipitation assays identified NFκB and USF-1/USF-2 as DNA binding proteins induced by UTP. Using dominant-negative mutants of IκB kinase and USF transcription factors, it was confirmed that NFκB and USF-1/USF-2 are involved in the UTP-induced expression of osteopontin.
This ability of nucleotides to act as chemoattractants for rat arterial SMC in a concentration range potentially found in pathological vessels [62], together with previous findings demonstrating the mitogenic activity of extracellular nucleotides for these cells, suggests that nucleotides released from mechanically stretched vascular cells or damaged cells during the angioplasty process may participate in arterial wall remodeling. Role of P2 receptors in nucleotide-induced vascular inflammation In addition to their mitogenic effects, extracellular nucleotides may also cause cell recruitment by inducing lymphocyte and macrophage adhesion to human pulmonary artery endothelial cells, as demonstrated in vitro [63]. Nucleotides can also modulate rat aortic smooth muscle cell adhesion and migration by increasing the expression of osteopontin [51, 57], a protein involved in both processes. Moreover, extracellular nucleotides may play a role in the intra-arterial attraction of monocytes by inducing increased expression of monocyte chemoattractant protein-1 by arterial SMC [21]. Stimulation of P2 receptors is coupled to the release of the proinflammatory cytokines interleukin (IL)-1β, IL-1α, IL-8, and tumor necrosis factor (TNF)-α (see [4] for review), which are of obvious relevance to inflammation in atherosclerosis. Activation of P2X7 receptors on monocytes/macrophages enhances the release of proinflammatory cytokines that modulate NO production and expression of inducible nitric oxide synthase (iNOS) (see [64] for review), mediators of the immune cell activation that is an early step in atherosclerotic lesion development. Monocyte recruitment into the vessel wall is a complex process that includes cell rolling, firm attachment, and directed migration. It is now becoming evident that adhesion molecules such as VCAM-1 play an important role in leukocyte adherence to vascular endothelial cells [65, 66]. VCAM-1 expression is induced or upregulated by proinflammatory cytokines such as TNF-α and IL-1β in cellular components of the arterial wall, including endothelial cells, smooth muscle cells, and fibroblasts [67–69]. ATP and UTP have been shown to induce cell-cell adhesion in a human monocyte/macrophage lineage and neutrophil adherence to human endothelial cell monolayers [63, 70]. Recent studies have shown that local UTP delivery via an osmotic pump to collared rabbit carotid arteries induced intimal accumulation of macrophages, similar to oxidized low-density lipoprotein (LDL), a response that was mediated by activation of P2Y2 receptors [6]. Leukocyte migration depends on the activities of adhesion proteins (e.g., selectins and integrins) on leukocytes and vascular endothelial cells. We have demonstrated that activation of P2Y2 receptors in endothelial cells causes the expression of VCAM-1, which mediates the adherence of monocytes to vascular endothelium [71], leading to their penetration into the vessel wall to promote the arterial inflammation associated with atherosclerosis. Recent studies have revealed that a Src homology-3 (SH3) binding domain in the C-terminal tail of the P2Y2 receptor promotes the nucleotide-induced association of Src with the P2Y2 receptor, leading to the transactivation of growth factor receptors, such as the epidermal growth factor (EGF) and vascular endothelial growth factor (VEGF) receptors, and to nucleotide-induced upregulation of VCAM-1 [72, 73].
Since leukocyte infiltration and migration are key processes involved in atherosclerosis, these findings suggest that P2Y2 receptors represent a novel target for reducing arterial inflammation associated with cardiovascular disease. P2 receptors in vascular apoptosis Apoptosis (programmed cell death) has been reported to occur in various vascular diseases such as atherosclerosis, restenosis, and hypertension [74, 75]. The major cell types undergoing apoptosis in human atherosclerotic lesions are arterial SMC [11, 76–78] and macrophages [79]. In restenosis following balloon angioplasty, there is a peak in the proliferation and apoptosis of rat vascular SMC 14 days postangioplasty [76]. Furthermore, apoptosis of arterial SMC has been described in animal models of intimal thickenings [80] and probably takes part in the normal process involved in the control of hyperplasia. In contrast, apoptosis of SMC in advanced human atherosclerotic plaques may destabilize the fibrous lesion to promote plaque rupture and its clinical consequences. As a mediator of cell-to-cell communication, ATP can trigger a variety of biological responses after being rapidly released in large amounts from various sources, including activated platelets, endothelial cells, nerve terminals, antigen-stimulated T cells, and other cell types following hypoxia, stress, and tissue damage. For example, in human umbilical cord vein endothelial cells (HUVEC), a substantial release of ATP (and UTP) is induced by shear stress [81], which may lead to alterations in the balance between proliferation and apoptosis regulated by P1 adenosine and P2 (particularly P2X7) nucleotide receptors [82]. P2X7 and P1 receptors have been previously linked to apoptosis in other cell types, including immune cells, astrocytes, and thymocytes [83–85]. The P2X7R also has been shown to mediate ATP-induced cell death in human embryonic kidney cells [86], human cervical epithelial cells [87], and primary rat cortical neurons [88]. In human arterial SMC, adenosine-induced apoptosis is essentially mediated via the A2b-adenosine receptor subtype and involves a cAMP-dependent pathway [89]. As an important constituent of atherosclerotic plaques, fibroblasts share several features with smooth muscle cells. In human fibroblasts, P2X7 was identified as the main nucleotide receptor involved in the high glucose concentration-dependent responses modulated by ATP, including morphological changes, enhanced apoptosis, caspase-3 activation, and IL-6 release [90]. In the immune system, ATP also plays important roles through nucleotide receptors in leukocyte functions. P2X7 receptor-mediated apoptosis has been demonstrated in various types of leukocytes, including a lymphocytic cell line, murine thymocytes, murine peritoneal macrophages, human macrophages, mesangial cells, dendritic cells, and microglial cells [83, 91–96]. Extracellular ATP acting via the P2X7 receptor activates the transcription factor NFκB by selectively targeting NFκB p65 (Rel A) in the N9 mouse microglial cell line [97]. It also has been reported that the P2X7 receptor modulates macrophage production of TNF-α, IL-1β, and nitric oxide (NO) following lipopolysaccharide (LPS) exposure [98], consistent with a role for the P2X7 receptor in inflammation. In HUVEC, TNF-α markedly increases apoptotic cell death via the activation of caspase-3 [74]. Recent reports indicate that ATP/ADP activate NFκB and induce apoptosis probably through P2X7 receptors in porcine aortic endothelial cells [99]. 
These studies have provided compelling evidence suggesting a role for P2 and P1 receptor-mediated apoptosis in vascular diseases; however, further studies are needed to determine the precise pathways involved and to accumulate direct evidence that these pathways contribute significantly to the development of atherosclerosis, hypertension, and restenosis. Modulation of P2 receptors in vascular injury Experimental arterial intimal hyperplasia can be induced by balloon angioplasty or by the perivascular placement of a silicone collar around an artery. An influx of leukocytes precedes the migration and proliferation of vascular SMC into the intima in both these models [100]. In normal adult rat aorta, P2Y2 mRNA was found in the endothelial cell lining, while sustained expression of P2Y2 mRNA was detected in only a few medial SMC [20]. In contrast, P2Y2 mRNA was detected in all medial SMC of rat fetal aortas and in most of the aortic SMC of intimal lesions after balloon angioplasty, with overexpression in cells lining the lumen both 1 and 3 weeks after injury. In the collar model, neointimal formation appears to be triphasic [100]. The first phase is characterized by vascular infiltration of leukocytes beginning 2 h after collar placement around a rabbit carotid artery. The second phase begins within 12 h of collar placement and is characterized by medial replication of SMC. The third phase is characterized by the appearance, beginning at day 3 after collar placement, of subendothelial SMC. In situ hybridization with sham-operated rabbit carotid arteries indicated that P2Y2 mRNA expression was localized to CD31-positive endothelial cells and not medial SMC [6]. High levels of P2Y2 mRNA were detected in medial SMC 3 days after collar placement, before the appearance of the neointima. At day 14, all intimal and medial SMC were P2Y2 positive. Fura-2 digital imaging of single SMC, used to measure changes in myoplasmic calcium concentration in response to P2Y receptor agonists, confirmed an increase in P2Y2 receptor activity. However, the same study showed that P2Y4 mRNA was equivalently expressed in sham-operated and collared arteries and in cultured rabbit carotid SMC, whereas P2Y6 mRNA was not detected in carotid arteries or cultured SMC. In a more recent study, it was shown that P2Y2 receptor upregulation occurs in stented porcine coronary artery, a clinically relevant model of arterial injury [54]. P2Y2 receptor mRNA levels were significantly increased in coronary SMC dispersed from stented segments of coronary arteries 3 weeks after stent angioplasty compared with SMC from unstented segments. There was no significant difference in the levels of P2Y6 mRNA between the stented and unstented artery segments, whereas P2Y4 receptor mRNA was undetectable. Upregulation of functional P2Y receptors also occurs in the basilar artery of the rat double-hemorrhage model [7], in the coronary artery of diabetic dyslipidemic pigs [8], and in human atherosclerotic lesions (Seye and Desgranges, unpublished data). It has been proposed that upregulation of P2Y receptors could be a potential diagnostic indicator for the early stages of atherosclerosis [9]. Interestingly, a more recent study showed that high shear stress, associated with vascular diseases, can selectively upregulate P2Y2 and P2Y6 receptors in perfused arterial SMC [101]. P2X1 and P2X4 receptors have been shown to be upregulated in rabbit intimal thickenings [102].
Taken together, these findings strongly suggest that at least some P2 receptor subtypes (most notably P2Y2) are implicated in the development of vascular disease. Pathophysiological significance of P2 receptor modulation in vascular injury Migrating and proliferating SMC in the arterial media, together with infiltrating macrophages and T lymphocytes, are the main cell types that comprise atherosclerotic and restenotic lesions [1]. In the early stages of intimal hyperplasia, SMC are modified from a differentiated, contractile phenotype to an immature, synthetic phenotype, and this enables them to migrate into the intima, to proliferate, and to secrete extracellular matrix components. In many respects, this shift in phenotype is a reversal of the normal differentiation pattern of vascular SMC during fetal and early postnatal life [103, 104]. However, there is still a lack of knowledge concerning the phenotypic regulation of SMC during vasculogenesis and vascular disease. A high level of P2 receptor expression could be related to the altered phenotype of SMC in intimal thickenings. Partially dedifferentiated SMC are found in rat arterial intimal lesions after balloon angioplasty [105, 106], and their phenotype has been compared with that of newborn rat aortic SMC [107]. Interestingly, it has been reported that medial SMC of rat embryonic aorta also exhibit high P2Y2 expression, similar to intimal thickenings [20]. Other studies have shown that P2Y1 and P2Y2 receptor transcripts are strongly upregulated with phenotypic changes in rat SMC, whereas P2X1 mRNA is completely downregulated, and P2Y4 and P2Y6 mRNA levels are unchanged [58, 28]. Taken together, these and other results suggest that P2Y2 receptor expression is upregulated in the entire cell population of intimal thickenings and is closely associated with a poorly differentiated phenotype of SMC. The dramatic increase in P2Y2 mRNA expression observed in balloon angioplasty-induced intimal lesions would suggest increased activity of extracellular nucleotides with consequent enhancement of cell proliferation and vasoreactivity. Indeed, extracellular nucleotides, particularly ATP and UTP, have been shown to induce cell cycle progression and proliferation of cultured arterial SMC [21, 49, 50, 51] and vasoconstriction in the absence of endothelial cells [17, 108, 109]. Since both neointimal hyperplasia and vasoconstrictive remodeling have been found to be involved in postangioplasty restenosis [1, 5, 107, 110], these findings suggest that extracellular nucleotides may play a significant role in this process, at least as long as functional endothelial cells, which regulate intimal thickening [111, 112] and nucleotide-induced vasorelaxation [113, 114], are not regenerated. Increased P2Y2 receptor expression in the neointima may by itself be sufficient to enhance the local effects of extracellular nucleotides on the proliferation of SMC. Although the expression of other P2 receptors has been described in arterial SMC [21, 29, 115], the P2Y2 receptor seems to be more specifically involved in the response of SMC to ATP and UTP [116], particularly in the potentiation of proliferation [21, 50]. The effects of extracellular nucleotides are not only dependent on the nature and number of P2 receptors present on target cells, but also on the local concentrations of nucleotide agonists. 
Although in vivo concentrations of extracellular nucleotides are difficult to measure, various in vitro experiments suggest that extracellular nucleotides are released from blood and vascular cells exposed to various physicochemical conditions, e.g., stress, hypoxia, and other factors [117, 118, 119] associated with the angioplasty process. P2Y2 receptors in SMC are involved in nucleotide-induced constriction of normal arteries [17, 108, 109]. Long lasting alterations in vasomotricity after endothelial cell denudation, resulting in increased sensitivity to vasoconstrictive substances, have previously been demonstrated [113, 114]. It appears that like other receptors for vasoconstrictive factors such as angiotensin II [120], endothelin [121], and platelet-derived growth factor (PDGF) [122], which are overexpressed in neointima, P2Y2 receptors may play an important role in controlling the vasoactive properties of pathological arteries, particularly in chronic constriction at the lesion site that is postulated to be one of the processes leading to postangioplasty restenosis [5, 110]. Conclusion P2 receptor subtypes, including P2Y2, P2X1, and P2X4, appear to play a role in responses to endothelial injury that are thought to be key events in the initiation of atherosclerosis and restenosis after angioplasty. P2Y2R upregulation and activation in endothelial cells and SMC promote leukocyte transmigration and intimal thickening in arteries of animal models of vascular injury, suggesting a possible regulatory role for this receptor in the mechanisms leading to neointimal hyperplasia after angioplasty. Although there are no selective P2Y2 receptor antagonists yet available, recent progress in siRNA technology has made it possible to design small RNA interference molecules that can selectively inhibit P2 receptor subtype expression. Such molecules can be efficiently delivered into the vessel wall using adenoviral vectors. In addition, P2 receptor transgenic mice (i.e., mice in which the relevant receptor subtype has been deleted or overexpressed) will be valuable tools to substantiate the role that nucleotides and P2 receptors play in the etiology of cardiovascular disease. Further delineation of the signaling pathways involved in these P2 receptor-mediated processes may help limit or prevent vascular diseases such as atherosclerosis and restenosis after angioplasty.
[ "atherosclerosis", "restenosis", "inflammation", "proliferation", "migration", "smooth muscle cell", "nucleotide receptors" ]
[ "P", "P", "P", "P", "P", "P", "P" ]
J_Gastrointest_Surg-3-1-1852385
Controversies in the Surgical Management of Sigmoid Diverticulitis
The timing and appropriateness of surgical treatment of sigmoid diverticular disease remain a topic of controversy. We have reviewed the current literature on this topic, focusing on issues related to the indications and types of surgery. Current evidence would suggest that elective surgery for diverticulitis can be avoided in patients with uncomplicated disease, regardless of the number of recurrent episodes. Furthermore, the need for elective surgery should not be influenced by the age of the patient. Operation should be undertaken in patients with severe attacks, as determined by their clinical and radiological evaluation. Magnitude of the Problem Diverticular disease, either diverticulosis or diverticulitis, was regarded as a surgical curiosity in the 19th century, but over the past 100 years, its prevalence in Western countries has increased dramatically. In the US, an individual’s risk for developing diverticular disease approaches 50% by age 60.1 Diverticulitis, defined as inflammation and infection related to diverticula, occurs in 20 to 30% of patients with diverticulosis and is one of the most common indications for gastrointestinal tract-related hospitalizations. One in four patients presenting with diverticulitis will require an emergency operation because of perforation, peritonitis, or systemic complications. At present, diverticulitis is the associated diagnosis for one third of all colostomies and/or colon resections.2 As such, diverticulitis is one of the five most costly gastrointestinal disorders affecting the US population.3 Etiology Colonic diverticula tend to develop in areas of weakness in the colonic wall, most frequently at the sites of penetration of the wall by blood vessels.4 These outpouchings of mucosa and peritoneum are of the pulsion type and are thought to be caused by an increase in intraluminal pressure in affected individuals (Fig. 1). Figure 1 Endoscopic images of diverticula. Colonoscopy can be rather difficult when several diverticula are encountered because of increased colonic tortuosity and lack of distensibility. It is thought that a low intake of dietary fiber and the resultant decrease in stool bulk predispose those in Western societies to an elevation in colonic pressure. Some authors attribute the high rate of diverticular disease to the development of roller mills during the last half of the 19th century, which crushed grains so effectively as to nearly eliminate all of the cellulose from the Western diet.5 Despite significant supporting evidence for fiber and its role in the development of diverticulosis, no study to date has demonstrated that a high fiber diet can reverse this process or reduce the incidence of complications in cases of established diverticulosis.6 In addition to dietary intake, other factors have been implicated in the development of diverticular disease.
Most studies report that diverticular disease is more common in the elderly, especially elderly women, and in patients who smoke cigarettes or drink alcohol.7 Sigmoid colon specimens from patients with diverticulosis have been found to have increased in vitro sensitivity to acetylcholine, as well as reduced smooth muscle choline acetyltransferase activity and upregulation of smooth muscle muscarinic M3 receptors.8 The significance of these biochemical characteristics still needs to be elucidated, but the differences suggest that there are underlying physiological abnormalities that may predispose to the development and progression of diverticular disease. Clinical Presentation and Evaluation The clinical presentations of diverticular disease range from asymptomatic diverticulosis, through diverticulosis with periodic spasmodic abdominal pain and bloating and diverticulosis with hemorrhage, to diverticulitis. Although diverticula can occur in any portion of the colon, this review will focus only on sigmoid diverticulitis, by far the most common site for this disease process. Most patients with diverticulitis present with symptoms of left lower quadrant abdominal pain, fever, and leukocytosis (Table 1). Additional symptoms of acute sigmoid diverticulitis may include nausea, vomiting, change in bowel habits, urinary frequency, and/or dysuria.1 In cases of clear-cut diverticulitis based upon the clinical picture, one can manage the patient without any imaging studies. In many cases, especially in those with severe symptoms and potential complicated diverticulitis, computed tomography (CT) scanning should probably be performed. The value of CT scanning is the ability to confirm the diagnosis and confidently stratify the severity of the disease process, differentiating mild, localized inflammation from advanced inflammation with abscess formation and/or distant extension.

Table 1 Clinical Symptoms of Diverticulitis
Symptom                     Frequency (%)
Left lower quadrant pain    93–100
Leukocytosis                69–83
Fever                       57–100
Nausea                      10–30
Vomiting                    15–25
Constipation                10–30
Diarrhea                    5–15
Dysuria                     5–20
Urinary frequency           6–25

Before the advent of CT, the contrast enema was the primary tool in the evaluation of colonic diverticular disease. However, CT scans have largely replaced barium enema as the preferred imaging modality to evaluate patients with suspected diverticulitis. The use of CT scanning has been justified by several studies from the radiological literature, demonstrating a high sensitivity (97%) and specificity (100%) for diverticulitis (Fig. 2). Contrast enema, on the other hand, has a sensitivity of only 82% and a specificity of 81% for diverticulitis.9 Figure 2 (a) Computed tomography scan images of a patient who presented with uncomplicated diverticulitis that was subsequently treated successfully with antibiotics. Note the thickening of the sigmoid colon, yet the lack of any extraluminal fluid or air. (b) Computed tomography scan images of a patient who presented with complicated diverticulitis and an extraluminal fluid collection that did not resolve with attempted CT-guided drainage and required an eventual sigmoid colectomy. Classification There are two commonly utilized classifications of diverticulitis. The European Association for Endoscopic Surgeons developed a classification scheme based upon the severity of its clinical presentation.10 In this system, diverticulitis is divided into symptomatic uncomplicated disease, recurrent symptomatic disease, and complicated disease (Table 2).
Table 2 Clinical Classification of Diverticulitis (adapted from Kohler et al.10)
Grade I     Symptomatic uncomplicated disease: fever, crampy abdominal pain, CT evidence of diverticulitis
Grade II    Recurrent symptomatic disease: recurrence of the above
Grade III   Complicated disease: hemorrhage, abscess, phlegmon, perforation, purulent or fecal peritonitis, stricture, fistula, obstruction

Another classification system was developed by Hinchey et al.11 and is used to describe the stages of complicated diverticulitis (Table 3). This scheme allows for good communication among surgeons when it comes to describing the various degrees of diverticular perforation, ranging from a localized perforation with a small abscess to generalized fecal peritonitis. Clearly, the proper surgical approach will vary depending upon the Hinchey stage.

Table 3 Hinchey Classification of Complicated Diverticulitis (adapted from Hinchey et al.11)
Stage I     Pericolic or mesenteric abscess
Stage II    Walled-off pelvic abscess
Stage III   Generalized purulent peritonitis
Stage IV    Generalized fecal peritonitis

We will refer to both of these classifications when discussing the appropriate management of this disease. Yet another classification, developed by Ambrosetti et al.12, is based upon the CT findings. The criteria of Ambrosetti et al. are being increasingly utilized to stratify patients into optimal pathways for management (Table 4). Thus, patients with mild disease are likely to be successfully managed with intravenous antibiotics, whereas percutaneous drainage and surgery are generally indicated for cases of complicated diverticulitis.

Table 4 Ambrosetti’s CT Staging of Diverticulitis (adapted from Ambrosetti et al.12)
Mild diverticulitis      Localized sigmoid wall thickening (>5 mm); inflammation of pericolic fat
Severe diverticulitis    Abscess; extraluminal air; extraluminal contrast

Management of Complicated Diverticulitis Surgical intervention is rarely indicated in cases of acute diverticulitis because most of these cases will resolve with appropriate antibiotic management. Operations are reserved for cases of complicated diverticulitis, i.e., patients with perforation and peritonitis, abscess formation, fistula, or obstruction. Although this may seem clear-cut, decisions regarding if and when to operate on patients with diverticulitis remain a topic of significant debate. Operation is clearly indicated when the patient presents with perforation and diffuse peritonitis, whether it is purulent or feculent (Hinchey stages III and IV). However, the ideal surgical procedure in such cases of perforation remains a matter of debate. The possible operations advocated range from a simple washout of the abdomen with drainage, as described in a few case reports from Scotland and France, to primary resection with a Hartmann pouch, primary resection with anastomosis and diverting ileostomy, and finally, primary resection with anastomosis and no temporary stoma.1,13,14 Of these, American surgeons are most likely to perform the Hartmann procedure, which has been advocated as the standard of care for perforated diverticulitis.1 Hartmann’s resection has proven to be a safe and effective approach, and is based upon the idea that an anastomosis in the setting of acute infection/inflammation is dangerous and associated with a high rate of suture line breakdown. A simple washout without resection would not be considered an appropriate approach because ongoing infection/inflammation of the involved bowel is likely to occur.
There is a paucity of data to support a minimalist, simple washout approach; there are only 18 case reports in the literature describing the technique and its results.13,14 On the other hand, the practice of routine stomas in operations for acute diverticulitis may not be justified. Belmonte et al.15 looked at 277 consecutive patients treated for acute diverticular disease at the University of Minnesota, both urgently and electively. Of these, 88% had a primary anastomosis, most of them without diversion. They found that primary anastomosis was quite safe, with an overall 4% leak rate. Interestingly, none of these leaks were in their subset of patients with Hinchey stage IV diverticulitis, a group that comprised 9% of their total study population. A systematic literature review of 50 studies comparing a Hartmann’s procedure to a primary resection with anastomosis for perforated diverticulitis found 569 reported cases of primary anastomoses. The reported mortality and morbidity in the patients with an anastomosis was the same as in the patients who underwent the Hartmann’s procedure.16 These data suggest that in a select group of patients undergoing surgery in the acute stage of diverticulitis, an anastomosis is probably safe, even in the milieu of feculent peritonitis. These data are intriguing, but must be viewed with caution, especially in the case of a very sick or toxic patient with multiorgan system failure and/or shock. In the absence of randomized controlled studies, we still recommend the Hartmann’s procedure in patients with significant purulent or feculent peritonitis, and in those patients with any instability related to the systemic effects of sepsis. However, in a patient who is clinically stable, a primary anastomosis at the first operation can be performed even in the setting of perforation (Fig. 3). Figure 3 Gross specimen of the sigmoid colon that was resected from a patient who presented with freely perforated diverticulitis (Hinchey III). The proximal margin extends to the area where the diverticula end, and the distal margin is at the rectum. Mention should be made of the meticulous surgical technique that must be used in this situation. The splenic flexure of the colon may need to be mobilized to ensure a tension-free anastomosis. One should imagine the rectum collapsing back into the pelvis with the patient standing upright when deciding whether the bowel ends are truly free of tension. The margins of resection must be clearly viable with regard to vascularity. Finally, it may be best to avoid the crossed staple lines inherent to the double-stapled technique. Either a double pursestring technique with a stapled end-to-end anastomosis or a standard handsewn anastomosis is preferred when operating in an inflamed milieu. Preventive Surgery The question of when to recommend elective, preventive surgery for patients with diverticulitis remains very controversial. Current American Society of Colon and Rectal Surgeons (ASCRS) guidelines suggest preemptive surgery for any patient who has had two attacks of acute diverticulitis, with the intention of preventing another attack that could present with perforation and would necessitate a stoma.1 This recommendation for surgery after the second episode of diverticulitis is based on the data published in 1969 by Parks17 showing that the mortality rate for each subsequent attack of diverticulitis increases from 4.7% during the first admission to 7.8% during each subsequent admission.
Parks is also widely quoted for stating that each subsequent episode of diverticulitis is less likely to respond to medical therapy, with a 70% response after the first episode vs 6% response after the third episode.1 However, there are few data to support this concept of poor response to medical treatment in subsequent attacks of diverticulitis. Furthermore, the advent of CT scanning and better antibiotics has improved nonoperative management of these patients. In a modern series of 673 patients with diverticular disease, only 3% of patients required emergency operations during a follow-up of 10 years.18 Another 10-year study of 366 patients showed that recurrence was not associated with an increased rate of either complications or less successful medical management.19 Looking at the issue from another angle, Somasekar et al.20 reviewed 108 patients admitted with complicated diverticulitis. Almost all of them (104) required emergency surgery. Interestingly, only 26% of these patients had previously been diagnosed with diverticular disease, and only three patients had been admitted in the past with a prior episode of acute diverticulitis. In other words, only 2.7% of patients in this group would have benefited from an elective resection. Complications would still have occurred in 92.6% of patients in whom these attacks happened de novo. Thus, it appears that elective resection might have little impact on the incidence of patients requiring emergency procedures because most of these occur with the first attack of diverticulitis. Subsequent attacks of diverticulitis in the same patient seem to be akin to their previous ones, suggesting that specific patients are predisposed to a set pattern of diverticulitis, and once settled into this pattern they stay within it. Threatening a patient who has been successfully managed medically during two previous attacks with the prospect of a colostomy bag may therefore be unwarranted and misleading. In addition, it is important to recognize that elective surgery for diverticulitis is not without complications. Bookey et al.21 demonstrated that elective diverticular disease resection is associated with higher rates of morbidity and mortality than elective colorectal carcinoma resection, with the mortality rate increasing from 0 to 15% with advancing age. Furthermore, colectomy is not a guaranteed cure for diverticulitis, with recurrence rates varying from 3 to 13%. These rates have improved, however, with the recognition that the chances of recurrence are fourfold higher if a colosigmoid anastomosis is performed, emphasizing the importance of resecting the entire sigmoid colon in an operation for diverticulitis.22 With these conflicting data in mind, we maintain that patients with uncomplicated diverticulitis can be managed nonoperatively regardless of the number of recurrent episodes. Patients who develop complications, such as fistulas, obstruction, or nonresolving smoldering disease, are best managed with surgical resection. Elective surgery may also be offered to patients who have had two or more episodes of severe diverticulitis, as determined by their clinical presentation and CT grade. In addition, elective surgery may be justified in patients with limited access to medical care or in those who are concerned about the negative impact of repeated illnesses with regard to work productivity and/or psychosocial issues.
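To make the arithmetic behind the Somasekar et al. argument explicit, the following is a minimal sketch using the counts quoted above (108 admissions for complicated diverticulitis, only three of which followed a documented prior attack); the function and variable names are ours, and the paper's published figure of 2.7% presumably reflects rounding or a slightly different denominator.

```python
# Sketch: the ceiling on emergencies preventable by elective resection,
# using the Somasekar et al. counts quoted in the text. Names are ours.

def preventable_fraction(total_complicated: int, with_prior_episode: int) -> float:
    """Fraction of complicated presentations preceded by a known prior attack,
    i.e. the most that elective resection after an attack could prevent."""
    return with_prior_episode / total_complicated

total_complicated = 108  # patients admitted with complicated diverticulitis
with_prior_episode = 3   # had a previous admission for acute diverticulitis

frac = preventable_fraction(total_complicated, with_prior_episode)
print(f"At most {frac:.1%} of emergencies were potentially preventable")  # ~2.8%
print(f"The remaining {1 - frac:.1%} arose de novo, beyond the reach of "
      f"a policy of elective resection after recurrent attacks")
```

However the denominator is chosen, the point of the passage stands: elective resection triggered by prior attacks can address only a small minority of emergency presentations.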
In elective or semielective circumstances, both open and laparoscopic sigmoid resection with a primary anastomosis have been considered acceptable methods of treatment.23 Laparoscopy has been shown to be associated with an approximately 10% rate of conversion to open surgery. Interestingly, no direct relationship has been found between the number of attacks of diverticulitis or the timing after an acute attack with regard to complications or conversion rates with laparoscopic colectomy.24 Diverticulitis in Young Men Many authors believe that diverticulitis is a more virulent disease in younger patients. As such, it has been argued that all patients younger than 50 should undergo elective colon resection after an initial attack of diverticulitis.1,25 This argument arose from studies in the pre-CT era, which were replete with data indicating a high risk of surgical intervention in young patients eventually diagnosed with diverticulitis. Subsequent authors have argued that these earlier studies were flawed because of a significant rate of unnecessary laparotomies in the younger patients because of erroneous preoperative diagnoses of appendicitis.26 Vignati et al.27 were among the first to challenge the concept that diverticulitis in the young is a more virulent disease. These authors surveyed 40 patients under the age of 50 who were treated with intravenous antibiotics and bowel rest and found that at a 5- to 9-year follow-up, none of these patients required colostomies. One third of them did undergo surgery, but most of these procedures were either elective or, if urgent, still conducive to a successful primary anastomosis. Guzzo et al.26 performed a retrospective review of 762 patients with diverticulitis treated at their institution from 1990 to 2000 and found that 76% of the patients under age 50 improved with antibiotics and did not require surgery during their first attack. These rates did not differ from the rates of surgery in the elderly patients. Of the patients treated nonoperatively, only four patients had a recurrence requiring surgery at a later time and only one needed a colostomy. Thirty-eight additional patients underwent preemptive elective surgery based upon their surgeon’s recommendation. One hundred fifty-five patients, 60% of the entire group, did not require surgery at all.26 A prospective study from Switzerland followed 118 patients who had their first attack of diverticulitis and found that recurrence rates in the younger patients were similar to those seen in the older patients, once stratified by their CT severity.28 Based upon these studies, we believe that young patients should generally be treated using the same criteria as older patients, and that there is no justification for the routine recommendation of surgery after a single attack of diverticulitis in young patients. Elective preemptive surgery should be reserved for those who have had at least two episodes of severe diverticulitis, and this decision should be supported by CT scan documentation of prior complicated disease. Fistulas As we succeed with the nonoperative treatment of acute diverticulitis, the incidence of fistulas appears to be increasing, now reported to occur in approximately 12% of patients.29 Colovesical fistulas account for two thirds of the cases, followed by colovaginal, colocutaneous, and enterocolic cases.30 These patients can present a significant challenge to the surgeon. Some fistulas will close spontaneously as the inflammatory process resolves.
Therefore, a selective approach should be used, in which operation is offered to those patients whose symptoms persist 5–6 months after an acute attack. The most commonly reported symptoms in this group of patients include abdominal pain (43%), pneumaturia (43%), cystitis (40%), fecaluria (38%), diarrhea (15%), and hematuria (5%).31 In the operating room, the surgeon should expect a significant desmoplastic reaction and a contained abscess cavity in the area of fistulization. It may be prudent to place ureteral stents before the procedure, although most fistulas to the bladder will be at the dome and away from the trigone region, allowing relatively safe access for identification, dissection, and closure. Most of these cases should be amenable to resection with primary anastomosis, avoiding the need for a temporary stoma.32 In expert hands, a colectomy can be accomplished by either an open or laparoscopic approach.33 Some authors suggest that these procedures are best performed by surgeons whose main interest focuses on colon and rectal surgery. A study from McGill University comparing outcomes of surgery for diverticulitis-induced fistulas found that colorectal surgeons performed fewer diverting Hartmann procedures and colostomies (5 vs 27%), and had a lower rate of complications, including wound infections and anastomotic leaks.31 It is not clear, however, whether the data from this small study of 121 patients are applicable to all surgeons in all centers. Diverticulitis in the Immunocompromised Patient Diverticulitis in immunocompromised patients can be virulent because there is an increased likelihood of free perforation and fecal peritonitis. In addition, the clinical presentation of these patients often underestimates the severity of their disease.34 Marked differences have also been noted in the response of these patients to medical treatment. In the nonimmunocompromised group, one should expect that 75% of patients will respond to antibiotics. In contrast, a very large percentage of immunocompromised patients will fail standard, nonoperative treatment.35 As such, most of these patients require urgent surgical intervention, and this is associated with a significantly higher mortality rate: 39 vs 2% in noncompromised patients.35 Given these data, most authors and the ASCRS recommend elective sigmoid resection after the first episode of diverticulitis in immunocompromised patients.1,34–36 Conclusion The management of patients with sigmoid diverticulitis is still evolving. We should continually reassess the surgical dogma regarding the appropriate treatment of this common disease entity. Clearly, a randomized controlled study comparing the Hartmann’s procedure to primary anastomosis in the setting of perforated diverticulitis would be worthwhile. It is becoming increasingly clear that mandatory operations may not be warranted in young patients or in those with two episodes of diverticulitis. As in other areas of clinical surgery, we must tailor our treatment to the specific situation of each individual patient.
[ "surgical management", "sigmoid diverticulitis", "elective surgery", "diagnosis" ]
[ "P", "P", "P", "P" ]
Health_Care_Anal-4-1-2244696
Social Patterning of Screening Uptake and the Impact of Facilitating Informed Choices: Psychological and Ethical Analyses
Screening for unsuspected disease has both possible benefits and harms for those who participate. Historically, the benefits of participation have been emphasized to maximize uptake, reflecting a public health approach to policy; currently, policy is moving towards an informed choice approach that involves giving information about both the benefits and harms of participation. However, no research has been conducted to evaluate the impact on health of an informed choice policy. Using psychological models, the first aim of this study was to describe an explanatory framework for variation in screening uptake and to apply this framework to assess the impact of informed choices in screening. The second aim was to evaluate that impact ethically. Data from a general population survey (n = 300) of beliefs and attitudes towards participation in diabetes screening indicated that greater orientation to the present is associated with greater social deprivation and lower expectation of participation in screening. The results inform an explanatory framework of social patterning of screening in which greater orientation to the present focuses attention on the disadvantages of screening, which tend to be immediate, thereby reducing participation. This framework suggests that an informed choice policy, by increasing the salience of possible harms of screening, might reduce uptake of screening more in those who are more deprived and orientated to the present. This possibility gives rise to an apparent dilemma in which an ethical decision must be made between greater choice and avoiding health inequality. Philosophical perspectives on choice and inequality are used to point to some of the complexities in assessing whether there really is such a dilemma and, if so, how it should be resolved. The paper concludes with a discussion of the ethics of paternalism. Introduction Approaches to Screening A major component of current public health strategy is the provision of screening programmes to allow prevention and early treatment of serious disease. A characteristic of screening is that it involves the possibility of immediate harm in return for the possibility of future benefit. For example, breast screening with mammography has a number of substantial possible immediate harms [26, 38, 50]. In 2001, five percent of mammography screens in the US gave a false positive result leading to further testing [53]. Even when further tests result in a diagnosis of breast cancer, about 20 percent of the cases identified will be of ductal carcinoma in situ, which has an uncertain course without treatment [50]. Many such cancers are not life threatening but, once detected, are usually treated as such [53]. Such treatments are unpleasant and damaging to health, and women diagnosed with ductal carcinoma in situ have similar anxiety levels to those with early invasive breast cancer [39]. In contrast, only a small number of people, relative to the total screened, have their lives saved, even over a considerable period of time. Estimates suggest that 2451 women aged between 50 and 59 need to be screened as recommended over a five-year period to save one life [41]. Because only a few cases of disease will be detected within a healthy population, large numbers need to be screened in order to have an impact on overall population health. Thus, high uptake of screening has been encouraged by emphasizing the benefits of participation, reflecting a public health approach to screening [27].
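The trade-off described in this section can be made concrete with some simple arithmetic. Below is a minimal sketch combining the two figures quoted above, a number needed to screen (NNS) of 2451 over five years and a 5% false-positive rate per screen; the assumption that each woman receives one screen per round, and all variable names, are ours rather than the cited sources'.

```python
# Sketch: benefit vs immediate harm per life saved, from the figures in the text.
# Assumption (ours): one screen per woman per screening round.

nns = 2451                  # women screened over five years to save one life [41]
false_positive_rate = 0.05  # share of screens giving a false positive [53]

# Absolute risk reduction implied by the number needed to screen:
arr = 1 / nns
print(f"Absolute risk reduction over five years: {arr:.5f} ({arr:.3%})")

# Expected false positives, per screening round, in the cohort screened to save one life:
fp_per_round = nns * false_positive_rate
print(f"~{fp_per_round:.0f} false positives per round for each life saved")
```

On these figures, roughly 123 women receive a false-positive result in a single screening round for every life saved over five years, which is precisely the asymmetry between immediate harm and future benefit that the rest of the paper turns on.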
Recently, in the UK, there has been a policy change towards promoting informed choices, particularly in screening: “There is a responsibility to ensure that those who accept an invitation [to screening] do so on the basis of an informed choice, and appreciate that in accepting an invitation or participating in a programme to reduce their risk of a disease there is a risk of an adverse outcome” [32]. One of the factors influencing the move to informed choices has been a concern about patient autonomy [16]. The issue of autonomy has become more salient with the rise of ‘patient-centred medicine’ in reaction to concerns with traditional medical practice and its emphasis on the role of the health professional in medical decision-making [37]. Informed Choice in Screening Informed choices have been defined as those based on relevant knowledge, consistent with the decision-maker’s values and behaviourally implemented [28]. To make an informed choice, participants need information about their personal risks of developing the condition, what having the screening test will be like, the accuracy of the test, and what will happen if the screening test is positive [15]. While the place of this information in facilitating informed choices is obvious, the role of values may be less so. People will, however, attach different values to the possible outcomes of screening. For example, those who are more and less socially deprived may place a different value on early diagnosis. The consequences of being diagnosed with a serious condition can have very different implications for those who have material resources and those who do not, even within a universal healthcare system. While those who have a high level of material resources can use those resources to ameliorate the impact of a diagnosis of serious illness, for those who are more socially deprived such a diagnosis may bring the prospect of increasing poverty and uncertainty, and they might thus prefer to delay knowledge of an illness for as long as possible [24]. There has been no research to describe or evaluate the impact of an informed choice policy in screening. One possible impact is that it will reduce uptake of the screening programme, with a greater reduction in some groups, for example those who are more socially deprived and already have lower participation in screening. Such a decline in the uptake of screening could be evaluated negatively, as contributing to a decline in the overall health of the population, or positively, as reflecting an increase in the autonomy of the individual. This paper seeks to describe and evaluate the possible impact of an informed choice policy in screening. Explaining Screening Uptake Uptake of screening is lowest in those who are most socially deprived. This is a consistent finding across different screening programmes and healthcare systems [20, 29, 48]. Lower uptake among the most socially deprived is, at least in part, a consequence of a lack of material resources, such as transport costs to the screening centre. But when the lack of such resources is controlled for in statistical analyses, differences in uptake remain [23]. Psychological characteristics, among other factors, contribute to explaining those differences [48, 52]. One such psychological characteristic might be time orientation. Psychologists suggest that people use information about the timeframe in which an event occurs to process information about the event and to make decisions.
However, people’s responses to the specific timeframe in which an event might occur vary. Individuals have preferences for certain timeframes, which influence their information processing and their evaluation of actions and the possible outcomes of those actions. These preferences are called time orientation. While a number of different time orientations have been identified [55], evidence for their existence is strongest for two of these, future and present orientation. Those who have high future orientation think more about the future and have an awareness of the effects of current actions on future outcomes compared to those with lower future orientation [42]. Future orientation is associated with the practice of health-related behaviours, such as physical activity and healthy eating, that might involve immediate cost for possible future gain [22, 25, 34, 42]. Those who have high present orientation think more about immediate outcomes of their behaviour than those who are less present orientated [42]. High present orientation is also associated with a limited sense of control and fatalism about life events [42] and with unhealthy behaviours, such as substance abuse, which result in immediate rather than future rewards [18, 42]. Future and present orientation are largely independent of one another, with each being associated with different patterns of thought and behaviour [18, 56]. Because the possible harms of screening are immediate while the possible benefits occur in the future, time orientation may contribute to explaining screening uptake. Increasing information about the possible harms and limited benefits of screening may therefore have a differential impact on those with different time orientations. Given that time orientation and social deprivation are associated [33, 47], this effect may vary by social deprivation. Aims The first aim of this study is to describe the possible impact of an informed choice policy on screening uptake by exploring the relationships between social deprivation, present orientation and expectations of participation in screening. The second aim is to evaluate that impact from different ethical perspectives. Method Design A questionnaire-based descriptive survey. Sample A total of 300 participants was recruited. The sample was structured to reflect the English population in terms of age and sex, with one third of the sample being drawn from each of the North, South and Midlands of England. Procedure Home-based interviews were conducted by a research agency. Questionnaires were completed by interviewers on behalf of participants. Materials In the first part of the interview, information about diabetes screening, based on that used in previous research [34], was presented to participants (Appendix 1), after which the following measures were completed: Expectations of participation in screening: the mean response to three items measuring intention to participate in screening was assessed on five-point response scales [34, 35], giving a measure with good reliability (Cronbach’s alpha: .85). Time orientation: a brief, nine-item version of the Stanford Time Perspective Inventory (Crockett et al. in submission). The Stanford Time Perspective Inventory (STPI) has been extensively validated and used in the study of health behaviour [9, 21, 42, 56].
This brief STPI consists of nine items, measured on five-point rating scales, comprising two subscales with adequate reliability: five items measuring future orientation (Cronbach’s alpha: .67) and four items measuring present orientation (Cronbach’s alpha: .62). Social deprivation: a brief three-item measure asking participants to indicate possession of educational qualifications and home ownership (including having a mortgage). Those who neither owned their homes nor had any educational qualifications were considered to have the greatest social deprivation (scored as 2); those who either owned their homes or had educational qualifications were considered to have intermediate levels of social deprivation (scored as 1); and those who both owned their homes and had educational qualifications were considered to have the least social deprivation (scored as 0). This measure was derived from our previous research, which indicated that individual level measures of social deprivation showed greater associations with psychological characteristics than did neighbourhood measures such as the Indices of Multiple Deprivation (Crockett et al. in preparation). Analysis Statistical analyses were conducted using SPSS version 12. Associations between social deprivation, future and present time orientation and expectations of participation in diabetes screening were examined using Spearman’s rank correlations. Results The associations between social deprivation, time orientation and expectations of participation in diabetes screening are shown in Table 1.

Table 1 Associations between expectations of participation in screening, social deprivation and present orientation (Spearman’s rho correlation)
                        Present orientation   Future orientation   Expectations of diabetes screening
Social deprivation(a)   −.245***              .067                 .218***
Present orientation                           −.301***             −.265***
Future orientation                                                 .054
*** P < 0.001
(a) Most deprived scored as 0, intermediate group scored as 1, least deprived group scored as 2

Future orientation was not significantly associated with social deprivation (r = −.067, n = 300, P = .247) or with expectations of participation in screening (r = .054, n = 300, P = .349). However, present orientation was significantly associated with both social deprivation (r = .245, n = 300, P < 0.001) and expectations of participation in screening (r = −.265, n = 300, P < 0.001). Because only present orientation showed associations with social deprivation and expectations of participation in screening, further findings are presented for present orientation only. To illustrate the association between social deprivation, uptake of screening and present orientation, two figures were plotted. Figure 1(a) shows the mean expectation of participation in diabetes screening at each level of deprivation. As social deprivation increases, expectations of participation decrease. A one-point difference between those most and those least deprived indicates a substantial effect of social deprivation on expectations of participation in screening. Figure 1(b) shows the mean expectation of participation between those with high and low levels of present orientation, indicating that those with low present orientation express higher expectations of participating in screening. Figure 1(b) does not indicate such large effects as seen in Fig. 1(a), suggesting that other variables must contribute to the association between social deprivation and expectations of participation in screening.
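To illustrate the reliability and association analyses reported above (the study itself used SPSS version 12), here is a minimal sketch in Python on simulated data; the data-generating choices, thresholds and variable names are all ours and are purely illustrative, not the study's.

```python
# Illustrative reimplementation of the analyses described in the Method:
# Cronbach's alpha for scale reliability and Spearman's rho for associations.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 300  # matching the survey's sample size

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response array."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Simulate four present-orientation items (five-point scales) driven by one
# latent trait, so that the items intercorrelate as real scale items would.
latent = rng.normal(size=n)
items = np.clip(np.rint(3 + latent[:, None] + rng.normal(0, 0.9, (n, 4))), 1, 5)
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")

# Simulate a 0-2 deprivation score loosely tied to the same trait, then test
# its association with the scale mean using Spearman's rank correlation.
deprivation = np.digitize(latent + rng.normal(0, 1.5, size=n), [-0.7, 0.7])
rho, p = spearmanr(deprivation, items.mean(axis=1))
print(f"Spearman's rho: {rho:.3f} (P = {p:.4f})")
```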
Figure 1 Expectations of participation in screening by (a) social deprivation and (b) present orientation; high and low present orientation calculated by means of a mean split. Additional analyses using another part of the data set presented here indicate that present orientation explains part of the association between social deprivation and uptake (Crockett et al. in submission). Summary The results of this study suggest that present orientation is associated with both social deprivation and with uptake of screening, such that present orientation partially accounts for the relationship between greater social deprivation and lower expectations of participation in screening. The results indicate that psychological factors can contribute to an explanatory framework of screening uptake. This framework suggests that there is an association between social deprivation and present orientation and that decisions about uptake of screening are influenced by the time orientation of those who are invited. This framework further suggests that making the more immediate possible harms of screening more salient, as would happen in an informed choice policy, could reduce uptake of screening in those who are more socially deprived. This framework can be used to identify and evaluate the possible impact of an informed choice policy in screening. Evaluation The Possible Impact of Informed Choices on Inequality The study results suggest that the implementation of an informed choice policy, which makes salient the possible immediate harms of participation, might lead to decreases in uptake of screening among those who are more socially deprived. This decrease is unlikely to be matched by a similar decrease among those who are less deprived, not only because they are less present orientated but also because evidence suggests that those who are more educated, and typically less deprived, are more aware of the limited benefits of screening programmes [12]. A differential decline in uptake of screening is an issue of particular concern. Those who are more socially deprived already have poorer health [7, 17] and are more likely to develop diabetes and its complications [3, 8]. A reduction in uptake of screening among those who are most deprived might widen the existing gap in physical health between those who are more and less socially deprived, running counter to UK government policy of reducing such inequalities [10, 11]. If an informed choice policy does reduce the rate of screening of the most deprived, then there appears to be an ethical dilemma. On the one hand, there can be greater choice but at the ethical cost of increased inequality; on the other hand, greater inequality can be avoided, but at the ethical cost of less informed choice. In the rest of this paper, we aim to describe this dilemma more fully and offer some thoughts about how, in the face of it, an informed choice might be evaluated. We should say at the start that our aims are limited and that we do not try, nor think it possible, to evaluate an informed choice policy fully in the space available. Our discussion is aimed at those who feel an initial ethical pull both toward reducing inequality and toward informed choice. Among those who feel this pull are the UK government, whose policies are explicitly designed to try to achieve both. Thus we do not aim to persuade those libertarians or elitists who oppose taxation-funded screening programmes altogether. Nor do we intend the ethical claims we make to be applicable in all times and places.
In aiming our discussion at those who are attracted both to equality and choice, we do not suggest that the nature of these values is at all obvious. Indeed, one of our major points will be that any evaluation of an informed choice depends on further specification of these values. It might well turn out, as these values are specified, that the dilemma between choice and equality is merely apparent and that a decision between them in the context of screening does not have to be made. After the ethical characterization of the choice and equality dilemma, we critique a public health and paternalistic approach to screening that is sceptical about the value of choice. Is There a Dilemma? As mentioned, it appears that there is a dilemma for those who value both choice and equality if an informed choice policy reduces the rate of screening of the most deprived without significantly reducing the rates for everybody else. It appears that the health of the most deprived would decline relative to others and thus inequality of health would increase. Whether there is actually a dilemma depends partly on what happens to screening rates in practice, but it also partly depends on the characterization of the ethical values, as we shall now show. In the first place, it is not obvious, even if there are differential impacts on screening rates, that choice would produce greater inequality of health. While maximizing informed choices might result in decreased physical health, by promoting personal autonomy they might increase psychological well-being. Personal autonomy has been related to two specific psychological constructs [51]: perceived control, which has been linked to positive outcomes including coping, personal adjustment and success or failure in a variety of areas of life [46]; and self-efficacy, which is the sense of having mastery over the action needed to achieve a particular end [4]. Self-efficacy is associated with a number of health-related behaviours, including uptake of screening [13, 14, 19, 25]. Those who are more socially deprived are not only disadvantaged in terms of physical health but commonly also have lower levels of positive psychological characteristics, such as self-efficacy, which contribute to an individual’s overall well-being and are related to the ability to make autonomous choices in a variety of health and life domains [5]. The point here can be made in two different ways. One is to say that health, as a philosophical concept, is about more than narrowly defined physical health and that the psychological benefits of greater choice for the most deprived are gains in health to them. This leaves it moot whether a reduction in their rates of screening is worse for them in terms of their health, and thus moot whether choice increases inequalities in health. A second way, which avoids the complicated conceptual debates about what health is, puts the point as one about the determinants of health. Even on a relatively narrow construal of health as physical health, the psychological gains of choice, such as an increased sense of control [51], might improve the physical health of the most deprived [1], again leaving it moot whether choice increases inequalities in health. Even if there is a conflict between choice and equality of health, it does not follow that there is a conflict between choice and the value of equality. Again, whether there is a conflict depends on further specification of the value.
There is a large debate in political philosophy called the ‘equality of what?’ debate [44]. Fundamentally, however health is characterized and however equality is characterized, health would only be one item in the metric of inequality. Suppose that an informed choice policy causes the health of the most deprived to be lower than it would otherwise have been. The policy also causes their choice to go up. Whether the result is to make them worse off than they would otherwise have been is a complicated question that depends on the relative value of health and choice. Possibly the result should be counted as a gain in equality, since it increases the choice of the most deprived, although possibly not. The point here is that, from the viewpoint of equality, concentrating on inequalities in health is concentrating on only part of the picture. Not only is the evaluation of choice complicated by debates about health and the metric of equality, it is also complicated by debates over how to understand the value of equality. It is possible to give only an incomplete and sketchy account of this here. Consider three views: utilitarianism, prioritarianism, and egalitarianism. Utilitarianism recommends policies that maximize aggregate welfare [45]. It would recommend reducing inequalities of health if this would indeed maximize welfare. Many utilitarians believe that greater equality would maximize welfare [6]. As for the policy of informed choice, utilitarians would recommend it if it led to greater welfare than the alternative and oppose it if it reduced welfare. In the next section, on paternalism, we make some points about the effects of choice on welfare. Prioritarianism recommends policies that improve the position of the worst off. Prioritarians disagree among themselves about the extent of the priority that should be given to the worst off, for instance whether small gains to the worst off outweigh large gains to everyone else [30, 31]. However prioritarians decide to evaluate gains to the worst off, there is no necessary connection between their view and favouring equality. Improving the position of the worst off might reduce inequalities in health, but it might also increase them [36, 40]. Egalitarians, by contrast, believe that inequality is in itself bad. For them, there is some value in reducing inequality even if it is bad for some and good for no one [49]. This is not to say that egalitarians would, all-things-considered, want greater equality if it produced no gain in anyone’s welfare. Their attitude to inequalities would depend on the other values they hold [36]. In the face of this sketch of various positions, it is clear that evaluating any inequality produced by an informed choice policy requires more detail about those effects. That inequality increases does not on its own justify opposing the policy. Suppose inequality increases because the most deprived stay the same, on whatever is the right metric for measuring people’s positions, but the position of others improves. Utilitarians and prioritarians should endorse the policy. Egalitarians might oppose it but also might not, depending on the other values they hold. Suppose the position of the most deprived goes down and others’ positions improve. Egalitarians and prioritarians would probably oppose it, although that might depend on how much the most deprived lose and how much others gain. Utilitarians will weigh up the gains and losses to see what maximizes welfare.
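To fix ideas, the contrast among these three positions can be put in toy formal terms. The following sketch is entirely our own construction, with invented numbers: it scores a status quo and a hypothetical informed-choice outcome with a utilitarian sum, a prioritarian concave-weighted sum, and the simple gap measure an egalitarian might care about.

```python
# Toy social evaluations of two hypothetical outcomes. All numbers invented.
import math

# Well-being levels (arbitrary units) for (most deprived, everyone else):
status_quo = (40, 70)
informed_choice = (40, 75)  # most deprived unchanged, others gain

def utilitarian(groups):
    return sum(groups)  # maximize aggregate welfare

def prioritarian(groups):
    # A concave transform gives extra weight to gains for the worst off.
    return sum(math.sqrt(g) for g in groups)

def gap(groups):
    return max(groups) - min(groups)  # the inequality an egalitarian counts as bad

for name, outcome in [("status quo", status_quo), ("informed choice", informed_choice)]:
    print(f"{name:16s} utilitarian={utilitarian(outcome):5.1f}  "
          f"prioritarian={prioritarian(outcome):5.2f}  gap={gap(outcome)}")

# Both the utilitarian and the prioritarian scores rise (someone gains, no one
# loses), while the gap widens, mirroring the first supposition in the text.
```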
In this section we have pointed out some of the complexities of evaluating the policy of informed choice even if we assume some initial ethical commitment to both choice and equality. In particular, we have shown that the initial statement of the dilemma between choice and equality is too crude, and that there might be no dilemma in the end. In describing the complexities we do not want to suggest that a rationally defensible overall evaluation is impossible, only that it is perhaps more difficult than one might think. In the next section, we continue the theme of complexity by pointing to the difficulties facing a brisk paternalistic public health approach that says that if choice reduces health it must be bad. Informed Choice, Public Health, and Paternalism If an informed choice policy really does reduce the screening rates of the most deprived (or, indeed, of the general population), then the argument might be made that the policy is ethically wrong because it would increase morbidity and mortality. The argument might be put in terms of public health, that the disease burden of society would increase, or more directly in paternalistic terms, that individuals would be net worse off for having and making the choice not to be screened, and that either way, an informed choice policy should not be implemented. While we cannot show decisively that this argument is mistaken, this section points to some of the serious difficulties that face it. We should state at the outset that we understand paternalism in the sense commonly used in political philosophy, where an action is paternalistic only if it is both motivated by concern for the target’s interests and in some way bypasses the target’s own decision-making, for instance by coercing the target or failing to disclose relevant information [54]. Firstly, at the population level, any loss of life resulting from an informed choice policy has to be set alongside the negative impacts of a public health approach to screening, including death as a consequence of that approach. Emphasizing the benefits of screening results in limited understanding of the harms of screening, including the possibility of false negatives, and in turn this leads to an overestimation of the reliability of screening results by both the public and health professionals [38]. This may lead to symptoms of disease being ignored following a negative screening result, delaying diagnosis and treatment [2, 38]. Essentially the same points can be made if the argument is put instead in paternalistic terms, that individuals choose against their best interests when they choose against screening. From the point of view of any given individual facing the choice of whether to undergo screening, not being screened carries a certain risk of morbidity and mortality, and so does being screened. Given the other costs and benefits of screening, it might be quite rationally self-interested for some individuals to decide to undergo screening while others choose against it. It is likely that in many cases those who are more deprived would indeed value increased life expectancy, and it would therefore be appropriate to give those from lower socioeconomic groups information about their higher risk of dying from the disease to inform their personal evaluation of the benefits and harms of participation.
However, it is also possible that those who are more socially deprived may value increased life expectancy differently from those who are less socially deprived, resulting in a preference to delay diagnosis for as long as possible, even if, in the long term, this results in a shortened life expectancy [24]. Thus it is much harder than might be thought to show that those who choose against screening must be acting against their best interests. Moreover, as we pointed out in the previous subsection, the value of autonomous choice can be intertwined with people's interests, in that having choices can have psychological, and health-promoting, benefits, given that perceived control and self-efficacy have been linked to good health [1, 5]. This is a further reason to doubt that restricting individuals' informed choice would promote their overall welfare. The argument from the benefits of screening against informed choice, in either its public health or its paternalistic form, arguably also undervalues autonomy. For many political philosophers, and people more generally, being able to make choices for oneself is good in itself. The value of informed choice does not lie simply in the extent to which it allows us to choose what is really in our interests. In other dimensions of life, such as the choice of career or partner, many people want to make their own choices even if they are less good at selecting than some disinterested observer [43]. Perhaps these reasons to value informed choice carry over to the choice of screening even if people do sometimes get it wrong. A persuasive paternalistic argument must show that, given the possibility of an informed choice, people would tend to choose against their interests and that this would justify denying them the choice. We have not shown that no such argument could be made, but we have tried to show the severity of the difficulties that face it. Conclusions This paper has shown how psychological research can contribute to assessing the possible impact of an informed choice policy on screening uptake. It suggests that an informed choice policy could lead to a decrease in uptake of screening amongst those who are most socially deprived, resulting in decreases in the physical health of this group. From a public health perspective any decrease in physical health is a matter of concern, particularly if that decrease is greater in those who are more socially deprived. From an informed choice perspective such a decrease in uptake could be interpreted as indicating that people are making autonomous choices based not only on good knowledge, but also in line with their own values. Those who are more socially deprived are more present-orientated. They therefore value actions that have positive outcomes immediately and thus, once they understand that the benefits of screening are not immediate, may be less likely to participate in the screening programme. These results have been evaluated from different philosophical perspectives on health inequality and on choice. This evaluation has not attempted to provide a definitive assessment of whether the introduction of an informed choice policy in screening can be justified in the light of the likely impact on physical health inequalities across the population. Rather, the evaluation has sought to describe the way in which philosophical approaches to choice and to health inequality can be used to inform further discussions about choosing an informed choice approach to screening over a public health approach.
[ "screening", "informed choice", "public health approach", "health inequality", "paternalism", "autonomy", "utilitarianism", "prioritarianism", "egalitarianism" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P" ]
Oecologia-3-1-2039789
Costs and benefits of induced resistance in a clonal plant network
Plant defense theory suggests that inducible resistance has evolved to reduce the costs of constitutive defense expression. To assess the functional and potentially adaptive value of induced resistance it is necessary to quantify the costs and benefits associated with this plastic response. The ecological and evolutionary viability of induced defenses ultimately depends on the long-term balance between advantageous and disadvantageous consequences of defense induction. Stoloniferous plants can use their inter-ramet connections to share resources and signals and to systemically activate defense expression after local herbivory. This network-specific early-warning system may confer clonal plants with potentially high benefits. However, systemic defense induction can also be costly if local herbivory is not followed by a subsequent attack on connected ramets. We found significant costs and benefits of systemic induced resistance by comparing growth and performance of induced and control plants of the stoloniferous herb Trifolium repens in the presence and absence of herbivores. Introduction Plants can allocate a limited pool of resources to the three main functions of growth, reproduction and defense, suggesting that increased investments in one function may compromise the others. Empirical studies have shown that constitutive defense can be costly (e.g., tannins, Sagers and Coley 1995; glandular trichomes, Elle et al. 1999). Plant defense theory postulates that inducible defense mechanisms have evolved to reduce these costs by optimizing the temporal match between resource investment into defense and herbivory threats (Herms and Mattson 1992; Agrawal et al. 1999). In recent decades numerous studies have attempted to find costs of induced defense (reviewed in Bergelson and Purrington 1996 and Strauss et al. 2002), but evidence for costs of inducible plant defense remains scarce and inconclusive. More recently, empirical evidence has emerged supporting the allocation cost theory with the help of improved experimental designs, genetic engineering, and biochemical methodology (Baldwin 1998; Van Dam and Baldwin 1998, 2001; Heil and Baldwin 2002). Inducible resistance is a form of phenotypic plasticity as it allows plants to express an adequate phenotype in response to temporally and spatially variable herbivore damage. Herbivore-induced changes in the phenotypes of plants often relate to trait alterations which reduce the palatability and digestibility of consumed tissue by producing toxic metabolites and/or by up-regulating the production of a variety of defensive compounds. The ecological viability of induced resistance as an efficient defense strategy depends on the balance of costs and benefits associated with plastic defense induction. Assessing the benefits of induced defense in conjunction with possible costs is a prerequisite for estimating the advantages and disadvantages of plastic defense induction as a potentially adaptive form of phenotypic plasticity (Dudley and Schmitt 1996; Schmitt et al. 1999) and hence for understanding potential selection pressures leading to the evolution of induced plant defenses (Agrawal 2000). Costs of defense have traditionally been measured in terms of decreased plant fitness. Allocation costs refer to a direct fitness decrease as a consequence of resource-mediated trade-offs between defense investment and other plant functions.
Recent empirical and conceptual work has provided convincing arguments for the notion that defense induction can also affect fitness in an indirect manner, via a multitude of potentially complex ecological interactions (Van Dam and Baldwin 1998, 2001; Heil and Baldwin 2002; Strauss et al. 2002). These costs are commonly referred to as ecological costs. Allocation theory suggests that costs of plastic defense induction should be more apparent in low-resource environments than under optimal growth conditions (Herms and Mattson 1992; Bergelson 1994; Bergelson and Purrington 1996; but see van Dam and Baldwin 2001), as the diversion of resources to defense cannot easily be compensated for by enhanced resource acquisition. In addition, experiments to detect costs of defense conducted under quasi-optimal conditions are unlikely to reflect realistic situations and therefore tend to underestimate plasticity costs. To overcome this problem, several studies have used competitive and/or low-resource environments to quantify costs of induced defense (Siemens et al. 2003 and studies quoted therein). Additionally, previous studies have shown that controlling the genetic background of plants can substantially enhance the chances to detect costs, by removing confounding effects due to genetic variation in the induced response (Bergelson and Purrington 1996; Strauss et al. 2002). Stoloniferous plants consist of multiple, genetically identical individuals (ramets) that are interconnected by aboveground horizontal stems (stolons). Resource transport within clonal plant networks has been extensively described in the literature (Pitelka and Ashmun 1985; Marshall 1990; Alpert 1996; Alpert and Stuefer 1997). Nevertheless, the importance of stolon connections for the transport of defense agents is a novel aspect (Stuefer et al. 2004) that has only recently been demonstrated (Gómez and Stuefer 2006). Ramets of the stoloniferous herb Trifolium repens are able to systemically induce other ramets after local herbivore damage. On the one hand, this form of physiological integration may confer clonal plant networks with considerable benefits by allowing for a fast, specific and efficient early-warning system among interconnected ramets. On the other hand, the potentially large spatial scale of clonal plant networks may also lead to substantial costs if network members become induced without being threatened by herbivores (Gómez and Stuefer 2006). These costs are due to a potential mismatch between the spatio-temporal scale of plastic defense expression and the dynamics and patterns of herbivore attacks. To assess the potentially adaptive nature of plastic responses, "it is necessary to demonstrate that the phenotype induced in each relevant environment confers higher fitness in that environment relative to alternative phenotypes" (Schmitt et al. 1999). This is analogous to stating that the induced phenotype should incur costs in herbivore-free environments, while defense induction should lead to benefits in herbivore-exposed environments. To quantify costs and benefits we measured traits related to plant fitness and performance of induced and uninduced T. repens plants in the absence and presence of herbivores. Growing induced and uninduced plants in the absence of herbivores allows for a quantification of possible costs of induced resistance, simulating localized damage (e.g., by small herbivores with low mobility) and the activation of defense in ramets beyond the feeding range of the herbivore.
Benefits of induction, however, can only be assessed in the presence of herbivores after an initial attack, thereby simulating a scenario with mobile herbivores showing active foraging behavior beyond the first place of attack. In this study we tested the following specific hypotheses: In the absence of herbivores, systemically induced ramets of clonal plants perform worse than uninduced ramets of the same genotype. This is due to costs of defense induction when defense is not needed. In the presence of herbivores, induced ramets of clonal plants perform better than uninduced plants, due to an enhanced protection through induced defense. To test these hypotheses we grew induced and uninduced (control) plants of the stoloniferous herb T. repens together to expose them to mutual competitive interactions, resembling sub-optimal growing conditions in a sward. To quantify costs and benefits of induced resistance we grew plants in herbivore-free and herbivore-exposed environments, respectively. Materials and methods Study organisms Five genotypes of the stoloniferous herb T. repens L. were vegetatively propagated in a greenhouse at a mean temperature of 21°C/19°C (day/night), and at a 16 h/8 h (light/dark) photoperiod. The genotypes originated from natural riverine grassland populations situated along the river Waal, The Netherlands. They had been collected 4 years prior to the start of this experiment and were grown under common garden conditions, eliminating possible maternal and environmental carry-over effects. The beet armyworm (Spodoptera exigua Hübner) used in this study is a generalist caterpillar with a broad host range. The caterpillar colony was maintained at a constant temperature of 24°C and 16 h/8 h (light/dark) photoperiod. The larvae were reared on an artificial diet described in Biere et al. (2004). Pre-growth of plant material We started the experiment with 64 cuttings of each of the five genotypes. The cuttings were planted in pairs in plastic trays (16 cm × 12 cm × 5 cm) using sterilized clay grains as a substrate (Seramis; Masterfoods, Germany). Each tray was fertilized weekly with 50 ml full-strength Hoagland solution before the start of the experiment. At the beginning of the experiment, all cuttings consisted of a main stolon with at least eight fully developed ramets. If present, side branches were removed immediately before starting the experiment. Experimental design The experimental set-up (Fig. 1) to measure costs and benefits of systemic induced resistance (SIR) consisted of four peripheral trays placed around a central tray, which we will refer to as the “competition tray”. All trays were of similar dimensions (16 cm × 12 cm × 5 cm). Each of the peripheral trays contained two cuttings with at least eight ramets each. The cuttings in two of those trays received a treatment to induce defense during the entire duration of the experiment (for details see below), while the cuttings in the other two trays remained uninduced (control). Trays receiving the same treatment were placed diagonally opposite each other. The competition tray was placed inside a metal frame (20 cm × 15 cm × 20 cm) covered by mosquito netting (mesh gauge 0.2 cm2) with four small openings on both longitudinal sides. The two youngest ramets of each cutting were inserted through the mesh openings and allowed to grow (proliferate and root) in the competition tray for 19 days. We used five T. 
repens genotypes, each of which was replicated 4 times to measure costs and 4 times to measure benefits of defense induction. All induced and control plants grown together in the same experimental set-up (as described above) belonged to the same genotype. The experimental systems were randomly distributed on greenhouse benches. Fig. 1 Schematic representation of the experimental set-up to measure costs and benefits of systemic induced resistance (SIR) in a clonal plant network. Control (white) and defense-induced (gray) plants grew from four peripheral trays into a common, central competition tray. The circles represent petri dishes used for a continued controlled herbivore attack (defense induction treatment). To measure costs of SIR, plants grew together in the absence of herbivores in the competition tray (upper drawing). To measure benefits, ten caterpillars (wavy black lines) were added to the competition tray (lower drawing). See Materials and methods for more details Systemic induction of resistance Systemic induction of resistance was achieved through a controlled herbivore attack. One S. exigua larva was confined with two leaves in one petri dish mounted on the plants (Gómez and Stuefer 2006). The corresponding ramets of uninduced control plants were similarly enclosed in modified petri dishes but without adding any larvae. The controlled herbivore attack was maintained throughout the course of the experiment, starting on the ramet at the eighth position (counting from the tip of the stolon) of each cutting. When the two ramets inside the petri dish had lost at least 50% of their leaf tissue, the petri dish was moved forward on the stolon and the adjacent, younger ramet was inserted into the petri dish. Whenever the induction treatment was moved forward on the induced cuttings, a comparable leaf area was removed with scissors from one ramet of each cutting in the control trays. This was done to compensate for the leaf area loss due to caterpillar feeding in the induced plants. Cutting the leaves with scissors does not induce resistance in T. repens (S. Gómez, unpublished data). The induction treatment started 1 day after the cuttings were placed into the competition tray. If the caterpillar inside the petri dish died, it was replaced by a new one to maintain defense induction. In order to enhance plant interactions, induced and control plants were grown together in the competition tray. Since all plants growing together belonged to the same genotype, induction effects cannot be confounded with genetic differences in plant traits, including competitive ability, between induced and control plants. All measurements described below were performed on ramets growing in the competition trays. Costs of SIR Costs of defense induction were measured as a reduction in plant performance. Costs can be measured after initial herbivore damage (and consequent defense induction) in the absence of subsequent herbivore attacks. To quantify costs of defense induction we measured the following traits, which are known to be closely related to plant performance and fitness: total biomass production; relative biomass allocation to leaves, petioles, stolons and roots; number and length of the main and side stolons; and number of ramets on the main and side stolons. We also measured the petiole length, petiole dry mass, leaf area and leaf dry mass of the fourth and fifth youngest ramets of each cutting.
Benefits of SIR To quantify benefits of SIR we exposed the plants in the competition tray to a second, controlled herbivory attack (referred to as the "herbivory treatment"). We released five fourth-instar caterpillars on day 16 in the competition tray and then added two and three more on days 17 and 18, respectively, to achieve substantial levels of herbivore damage. The plants were harvested 19 days after the start of the experiment. We quantified benefits of induced resistance by scoring herbivory damage in the induced and in the control plants. At the time of harvesting, each ramet on the main stolon was classified according to the leaf area consumed. We visually estimated the damage and assigned each ramet a damage category ranging from 0 to 3. The values corresponded to the following amounts of damage: 0 = no damage, 1 = 1–33%, 2 = 33–66% and 3 = 66–100% of leaf area consumed. We also recorded the position of each damaged ramet on the stolon to investigate possible intra-clonal variation in the damage pattern according to ramet age. In addition to the degree of damage, we measured the dry mass of leaves, petioles, stolons and roots in induced and control plants. Herbivore preference test One day before releasing the caterpillars (herbivory treatment) we performed two dual choice tests per competition tray to check whether plants assigned to the defense-induction treatment were systemically induced. For each competition tray we cut off two control and two induced ramets of a similar developmental stage (third-youngest fully expanded leaf). Each control ramet was paired with an induced one and placed together on a moist filter paper in a petri dish to perform a dual choice test. A fourth-instar S. exigua caterpillar was placed in the middle and allowed to feed until more than 30% of one of the leaves was consumed or for 48 h. By means of visual estimates, the leaf with the largest area consumed was recorded for each choice test. In 78% of the cases more of the control leaf was consumed than of the induced one (sign test M = 23, P < 0.0001; n = 77), confirming that plants in the competition trays that had received local herbivore damage (defense induction treatment) were induced before the herbivory treatment started. Statistical analysis Central competition trays were considered the units of replication in all statistical analyses. To avoid pseudo-replication and a consequent inflation of df (Hurlbert 1984), all traits measured on plants (cuttings) in the competition trays were pooled per treatment (by averaging the four control cuttings and the four defense-induced cuttings, respectively) prior to data analysis. Consequently, our experiment had 20 replicates for measuring costs and 20 replicates for assessing benefits. Competing plants cannot be considered independent from each other as, by definition, they change each other's environment, growth and development. To take this dependence into account we used a repeated measures design to analyze differences between competing plants that belonged to different treatment groups. Repeated measures analysis explicitly considers intrinsic relationships between treatment groups (Potvin et al. 1990).
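To make the tray-level analysis concrete, the following is a minimal sketch in Python (the authors used SAS, so this is not their code); it pools cuttings per treatment within each competition tray and fits a linear mixed model with tray as a random intercept, a common stand-in for repeated measures ANOVA. The file and column names are hypothetical, not from the study.

```python
# Sketch only: pool the four cuttings per treatment within each
# competition tray, then treat the tray as the unit of replication.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("costs_experiment.csv")  # hypothetical per-cutting data

# Average the four control and the four induced cuttings per tray.
tray_means = (df.groupby(["tray", "genotype", "induction"], as_index=False)
                ["ramets_main_stolon"].mean())

# Induction is a within-tray (repeated) factor; a random intercept per
# tray accounts for the non-independence of plants sharing a tray.
model = smf.mixedlm("ramets_main_stolon ~ induction + genotype",
                    data=tray_means, groups=tray_means["tray"])
print(model.fit().summary())
```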
Costs of SIR Repeated measures ANOVA was performed to test for costs of defense induction in the number and length of the main and side stolons, the number of ramets on the main and side stolons, the relative biomass allocation to roots, stolons, petioles and leaves, and the petiole length, petiole dry mass, leaf area and leaf dry mass of the fourth and fifth youngest ramets. Defense induction (induced vs. control) was considered a within-subjects effect, and genotype was treated as a between-subjects effect. Absolute dry masses of roots, stolons, petioles and leaves were analyzed using two-way repeated measures ANOVA (within-subjects effect: defense induction; between-subjects effects: plant genotype and herbivory). Benefits of SIR The amount of damage in the herbivory treatment was assessed with doubly repeated measures ANOVA using ramet age and defense induction as repeated factors and genotype as a main effect. The analysis included a profile analysis (SAS procedure GLM; profile statement) to test for differences in the degree of damage between adjacent ramets on the stolons. To correct for differences in the developmental stage of different cuttings we used only the six youngest ramets of each cutting in the damage analysis. All analyses were conducted with SAS 9.1 (SAS Institute, Cary, N.C.). Results Costs of SIR Total dry mass did not differ between control and induced plants (Table 1). However, defense induction caused a significant reduction in petiole dry mass. Additionally, defense induction resulted in a shift in biomass allocation to the different plant parts. Relative biomass allocation to leaves increased significantly after defense induction (Table 2; P = 0.01). The percentage of biomass allocated to roots, stolons and petioles did not significantly differ between control and induced plants (Table 3).
Table 1 Repeated measures ANOVA for effects of genotype, herbivory and defense induction on root, stolon, petiole, leaf and total dry mass (values are MS, F per trait)
Source | df | Root | Stolon | Petioles | Leaves | Total
Between-subject effects
Genotype (Gen) | 4 | 981, 7.20*** | 3,710, 4.12*** | 3,741, 10.97*** | 11,002, 10.85*** | 60,377, 8.99***
Herbivory (Herb) | 1 | 28.1, 0.21 | 78.6, 0.09 | 1,646, 4.83* | 4,234, 4.18* | 11,926, 1.78
Gen × Herb | 4 | 49.3, 0.36 | 124, 0.14 | 105, 0.31 | 292, 0.29 | 1,171, 0.17
Error | 30 | 136 | 899 | 342 | 1,014 | 6,714
Within-subject effects
Induction (Ind) | 1 | 79.8, 0.81 | 574, 1.97 | 416, 5.11* | 98.6, 0.37 | 1,879, 0.99
Ind × Gen | 4 | 53.9, 0.54 | 83.1, 0.28 | 50.7, 0.62 | 367, 1.37 | 1,054, 0.55
Ind × Herb | 1 | 106, 1.07 | 57.1, 0.20 | 11.1, 0.14 | 160, 0.60 | 1,143, 0.60
Ind × Gen × Herb | 4 | 139, 1.40 | 542, 1.86 | 159, 1.95 | 265, 0.99 | 3,525, 1.85
Error | 30 | 99.0 | 292 | 81.3 | 268 | 1,901
*0.01 < P < 0.05, ***P < 0.0001
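Each F value in Table 1 is the effect mean square divided by the matching error mean square, and the significance stars follow from the F distribution. As a quick illustration using only the numbers printed above (a sketch, not part of the original analysis):

```python
# Reproduce the genotype F test on total dry mass from Table 1.
from scipy.stats import f

ms_genotype, df_genotype = 60377, 4  # between-subject effect (Table 1)
ms_error, df_error = 6714, 30        # between-subject error (Table 1)

F = ms_genotype / ms_error           # ~8.99, matching the table
p = f.sf(F, df_genotype, df_error)   # upper-tail p-value
print(f"F({df_genotype},{df_error}) = {F:.2f}, p = {p:.2g}")  # p < 0.0001
```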
Table 2 Costs of systemic induced resistance (SIR). Repeated measures ANOVA for effects of genotype and defense induction on relative dry mass allocation to roots, stolons, petioles and leaves on plants without a herbivory treatment in the competition tray (values are MS, F per trait)
Source | df | Roots | Stolons | Petioles | Leaves
Between-subjects effects
Genotype (Gen) | 4 | 29.8, 6.61** | 496, 14.57*** | 22.7, 3.54* | 35.7, 7.3**
Error | 15 | 4.5 | 127 | 6.4 | 4.9
Within-subject effects
Induction (Ind) | 1 | 12.0, 2.06 | 0.04, 0.01 | 5.3, 1.92 | 35.8, 7.32*
Ind × Gen | 4 | 2.2, 0.38 | 8.2, 1.16 | 1.8, 0.64 | 10.9, 2.24
Error | 15 | 5.8 | 7.1 | 2.8 | 4.9
*0.01 < P < 0.05, **0.001 < P < 0.01, ***P < 0.0001
Table 3 Average (±SE) absolute and relative dry mass allocated to roots, stolons, petioles and leaves of uninduced and induced plants in the absence (Costs) and in the presence (Benefits) of a subsequent herbivory treatment in the competition tray
Treatment | Root (mg) | Stolons (mg) | Petioles (mg) | Leaves (mg) | Total (mg)
Costs: Uninduced | 15.6 ± 3.0 (4.5 ± 0.7%) | 103.2 ± 6.7 (34.9 ± 1.0%) | 72.6 ± 4.8 (24.9 ± 0.6%) | 107.5 ± 8.3 (35.7 ± 0.7%) | 298.9 ± 21.1
Costs: Induced | 11.3 ± 2.1 (3.4 ± 0.6%) | 96.1 ± 6.0 (34.8 ± 1.0%) | 67.3 ± 4.5 (24.2 ± 0.5%) | 106.8 ± 8.3 (37.6 ± 0.6%) | 281.6 ± 19.1
Benefits: Uninduced | 14.5 ± 2.2 (5.1 ± 0.8%) | 99.5 ± 4.8 (37.6 ± 0.8%) | 62.8 ± 3.7 (23.8 ± 0.8%) | 90.1 ± 5.6 (33.5 ± 0.8%) | 266.9 ± 13.2
Benefits: Induced | 14.8 ± 3.7 (4.8 ± 0.9%) | 95.8 ± 6.1 (36.8 ± 1.1%) | 59.0 ± 4.5 (22.6 ± 0.6%) | 95.1 ± 7.6 (35.8 ± 0.6%) | 264.7 ± 19.7
The number of ramets produced on the main stolon was 7% lower in induced as compared to control plants (Table 4; induction effect P = 0.003). The number and length of side stolons and the number of ramets formed on them did not change after defense induction.
Table 4 Costs of SIR. Repeated measures ANOVA for effects of genotype and defense induction on plant fitness- and performance-related traits in the absence of herbivores (values are MS, F per trait)
Source | df | Ramet no. main stolon | Length main stolon | Ramet no. side stolons | Length side stolons | Side stolon number | Fourth ramet petiole length | Fourth ramet area
Between-subjects effects
Genotype (Gen) | 4 | 2.38, 3.41* | 23.5, 5.4*** | 71.9, 6.72** | 30.8, 1.63 | 16.0, 12.97*** | 18.0, 14.93*** | 8.57, 15.36***
Error | 15 | 0.69 | 4.31 | 10.7 | 18.9 | 1.23 | 1.20 | 0.55
Within-subject effects
Induction (Ind) | 1 | 1.80, 12.13** | 7.57, 2.45 | 1.25, 0.25 | 0.78, 0.23 | 0.15, 0.31 | 4.38, 5.22* | 0.10, 0.23
Ind × Gen | 4 | 0.17, 1.19 | 3.92, 1.27 | 1.83, 0.37 | 4.10, 1.22 | 0.12, 0.24 | 0.35, 0.42 | 0.09, 0.21
Error | 15 | 0.14 | 3.08 | 4.95 | 3.37 | 0.51 | 0.84 | 0.46
*0.01 < P < 0.05, **0.001 < P < 0.01, ***P < 0.0001
The fourth and fifth youngest ramets on the main stolon produced petioles 5% shorter in the induced plants (Table 4; fourth ramet P = 0.03, fifth ramet P = 0.07). Leaf area, leaf dry mass and petiole dry mass measured on those ramets were not significantly affected by the induction treatment. Benefits of SIR Defense induction had a very strong effect on the amount of damage inflicted by S. exigua larvae on the plants (Table 5; induction effect P < 0.0001; Fig. 2). The number of ramets that were partially or fully consumed during the herbivore attack was consistently higher in control than in induced plants. Most of the damaged ramets lost only a small part of their leaf area (1–5%). This was consistent for both control and induced plants (Fig. 2). In induced plants up to 44% of the ramets on the main stolon were not damaged, whereas in control plants only 22% of ramets on the main stolon were undamaged.
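For illustration only, a toy Python sketch of the damage scoring described in the Methods (visually estimated leaf area loss binned into categories 0-3) and its per-treatment, per-age summary; the data here are invented, not the study's measurements.

```python
import pandas as pd

def damage_category(pct_consumed: float) -> int:
    """Bin visually estimated % leaf area consumed into categories 0-3."""
    if pct_consumed == 0:
        return 0
    if pct_consumed <= 33:
        return 1
    if pct_consumed <= 66:
        return 2
    return 3

ramets = pd.DataFrame({
    "treatment": ["control", "control", "induced", "induced"],
    "ramet_age": [1, 2, 1, 2],                # 1 = youngest ramet
    "pct_consumed": [80.0, 20.0, 40.0, 0.0],  # invented values
})
ramets["damage"] = ramets["pct_consumed"].map(damage_category)
# Average damage per treatment and ramet age, as plotted in Fig. 2.
print(ramets.groupby(["treatment", "ramet_age"])["damage"].mean())
```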
Table 5 Benefits of SIR. Doubly repeated measures ANOVA for effects of genotype, defense induction and ramet age on leaf area loss due to herbivory
Source | df | MS | F
Between-subjects effects
Genotype (Gen) | 4 | 1,102 | 2.39†
Error | 15 | 461 |
Within-subjects effects
Induction (Ind) | 1 | 6,847 | 63.92***
Ind × Gen | 4 | 244 | 2.28
Error (induction) | 15 | 107 |
Age | 5 | 18,988 | 133.0***
Age × Gen | 20 | 630 | 4.41***
Error (age) | 75 | 142 |
Ind × Age | 5 | 125 | 1.22
Ind × Age × Gen | 20 | 172 | 1.67†
Error (Ind × Age) | 75 | 103 |
† 0.1 > P > 0.05, ***P < 0.0001
Fig. 2 Average damage (±1 SE) inflicted on ramets of the main stolon (the 1st ramet being the youngest and the 6th the oldest) of control and induced plants in the competition tray after carrying out a controlled herbivore attack (herbivory treatment). Damage categories: no damage (0), 1–33% (1), 33–66% (2), 66–100% (3). The asterisks above the bars indicate the statistical significance of the result of a profile analysis (SAS procedure GLM; profile statement) to test for differences in the degree of damage between ramets of successive age classes. The amount of damage was significantly higher for control than for induced plants in all age classes. ***P < 0.001, ns not significant
The herbivory treatment significantly reduced the biomass of leaves and petioles (Table 1; P = 0.049 and P = 0.036, respectively; Table 3) in both induced and uninduced plants. In the presence of herbivores, induced and uninduced plants had a comparable total biomass. However, induced plants showed a larger percentage of biomass in their leaves (Table 3; repeated measures ANOVA; F = 17.44, P = 0.0008), suggesting that the induced plants benefited from increased relative biomass in those organs under attack. Ramet age, regardless of the induction state, had a very strong effect on herbivore preference (Table 5; age effect P < 0.0001). Younger ramets, especially the first and second youngest ones, were heavily preferred over older ones (profile analysis; Fig. 2). The first ramet exhibited particularly severe damage in both induced and control plants (average leaf area consumed > 65%; Fig. 2). Defense induction had a significant effect on leaf area loss due to herbivory in all ramet age classes (Fig. 2). The degree to which systemic defense induction reduced herbivory damage was similar for ramets of all age classes (Table 5; no age × induction effect). There was a marginally significant genotype effect on the feeding of the caterpillars (Table 5; genotype effect P = 0.09). Discussion Our study provides empirical evidence of significant costs and benefits of SIR in a clonal plant network. In agreement with our hypotheses, induced and control plants showed clear differences in performance and fitness-related traits when grown in the absence and presence of herbivores. In environments without herbivores, induced plants produced fewer ramets and shorter petioles and exhibited a shift in biomass allocation patterns. In environments with herbivores, control plants suffered consistently higher degrees of leaf damage than induced plants. Even though defense induction resulted in changes in plant growth, and significantly affected the amount of damage caused by the herbivores, total plant biomass did not respond as expected under the adaptive plasticity hypothesis (Dudley and Schmitt 1996; Schmitt et al. 1999), as we could not find a significant induction × herbivory interaction effect.
However, we propose that the differences observed in our study (e.g., reduced ramet production rates and shorter petioles in the cost experiment, a decreased amount of leaf damage in the benefits experiment) are likely to translate into substantial differences in plant productivity, and hence biomass, in the longer term. Costs of SIR Biomass production and allocation Total plant biomass production did not change as a consequence of defense induction, implying that defense induction did not incur direct and immediate productivity costs. After induction, however, biomass allocation shifted significantly towards the leaves. We suggest that this allocation shift may enable plants to better cope with current and future herbivory by reducing resource allocation to those organs that are not currently impacted by herbivore damage. While potentially beneficial in the short term, this response might result in longer-term indirect costs due to reduced performance under certain environmental conditions, such as drought, root herbivory and severe root competition. A similar shift in the biomass allocation pattern was observed in Lepidium virginicum plants after defense induction. Induced plants grown at a high density showed a reduction in root biomass and an increase in aboveground biomass (Agrawal 2005). In agreement with our findings, total biomass production was not significantly altered by defense induction in that study. A reduction in belowground biomass was also reported for induced wild parsnip plants. In this case, however, the aboveground biomass did not change significantly after defense induction (Zangerl et al. 1997). Further studies are necessary to assess the generality, functional significance (including costs and benefits) and mechanistic basis of changes in root–shoot allocation in response to induced resistance to herbivory. Reduction in developmental growth rate Defense induction negatively affected plant fitness by reducing the number of ramets produced. This delay in developmental growth was expressed as a reduction in the number of ramets produced on the main stolon during the experiment (7.4 ramets in control and 7.0 in induced plants). In the shorter run (i.e., the time span of this experiment) this effect is unlikely to translate into biomass differences. In the longer run, however, subtle changes in the developmental growth rate are known to result in major divergences in performance, structure and clonal fitness of stoloniferous plants (Birch and Hutchings 1992a; Birch and Hutchings 1992b; Huber and Stuefer 1997; Stuefer and Huber 1998). Reduction in petiole length Defense induction had significant negative effects on petiole lengths. This effect can have severe performance and fitness consequences for a stoloniferous plant like T. repens, which often grows in dense herbaceous canopies, and which relies on petiole elongation for shade avoidance (Huber 1997). Petiole length largely determines the ability of stoloniferous plants to place their leaves higher up in the canopy (Huber and Wiggerman 1997; Weijschedé et al. 2006). Even a small reduction in petiole length could have serious performance costs since differences in the relative position of leaves in herbaceous canopies are likely to be amplified by asymmetric competition for light (Weiner 1990; Pierik et al. 2003). Defense induction may also cause physiological trade-offs which impede the simultaneous expression of plasticity to herbivores and to shading by competitors (Cipollini 2004).
A decrease in petiole length as a result of defense induction can hence compromise the competitive ability of plants and result in an enhanced risk of induced plants being over-shaded by neighbors. A recent study by Kurashige and Agrawal (2005) supports this notion by showing that Chenopodium album plants, which had previously been damaged by herbivores, were able to elongate stems to a similar proportional degree as undamaged plants when grown in competition for light. However, the damaged plants were smaller due to the expression of induced resistance, thereby incurring potential opportunity costs due to asymmetric competition. Benefits of SIR Reduced damage Our results provide direct evidence for short-term benefits of having an early-warning system in clonal plant networks. In the presence of herbivores, induced plants suffered considerably less damage than control plants. As many as 50% fewer ramets were attacked in induced plants as compared to controls. Localized damage (defense-induction treatment) resulted in a greater degree of protection against herbivores for ramets further along the main stolon and its side branches. The reduced damage did not translate into a significant effect of defense induction on biomass production, due to the fact that the youngest, usually not fully developed leaves were heavily preferred by the herbivores. The biomass loss due to young leaf consumption is very likely to strongly underestimate the negative effects of herbivory and defense induction on future plant growth and performance. Coleman and Leonard (1995) demonstrated how leaf area consumption, and its consequences for plant performance, can be severely underestimated if the developmental stage of leaves is not taken into account. They showed that a certain amount of damage inflicted on young expanding Nicotiana tabacum leaves is more detrimental than the same amount received by mature, fully developed leaves. As the leaf tissue expanded, the area of the holes increased almost fourfold and the final area of the leaf decreased by approximately 40%. In addition, they observed a 35% decrease in the number and mass of fruits on the plants that received the damage to expanding young tissues. Therefore, an initially small amount of damage inflicted on young developing leaves may have dramatic consequences for plant performance and fitness over time. Similarly, the differences found in our experiment can be expected to result in considerable performance differences between induced and uninduced plants, as increased damage and loss of young leaves in uninduced plants will compromise plant productivity by reducing the number of future source ramets. Our results show that ramet age largely determines herbivore damage. The first and second ramets were heavily attacked as compared to the rest. This damage, although still large, was significantly reduced in induced plants. The reduction in leaf area loss in induced young ramets likely increases their chance of survival and establishment. Young ramets in clonal plants constitute the most valuable tissue since they represent the future reproductive potential of the plant (Huber and During 2000) and their protection is critical since they are responsible for a high proportion of the future biomass production (Beinhart 1963). We present evidence supporting the hypothesis that an early-warning system after herbivory in a clonal plant network grants vulnerable young offspring ramets with parental support (Stuefer et al.
2004) that non-clonal plants are unable to confer on their offspring at the moment of attack (but see Agrawal et al. 1999). Our study provides evidence for significant costs and benefits of systemic defense induction in T. repens. The experimental approach used in this study, however, does not allow for balancing costs and benefits in terms of plant fitness and overall plant performance, because the positive and negative effects of induction reported here, although likely to have significant longer-term effects on productivity and ultimately on fitness, did not affect biomass at the short time scale over which the experiment took place. While our results indicate clear advantages and disadvantages of network induction in the subsequent presence and absence of herbivores, respectively, an accurate and reliable quantification of the cost–benefit ratio should make use of long-term experiments. In conclusion, the present study shows that in the short term, the activation of early-warning responses in clonal plant networks has both costs and benefits. In the absence of herbivores, the performance of the induced phenotype was compromised as compared to the uninduced phenotype in terms of potential competitive ability. In the presence of herbivores, the induced phenotype was favored by suffering considerably less herbivore damage, suggesting potential advantages for the phenotype correctly matching its environment. Whether this represents an adaptive value of the induced responses remains to be demonstrated in longer-term studies in which the initially small changes observed in our study can be measured directly in terms of fitness. The long-term balance of costs and benefits of induced resistance in clonal plant networks is likely to be strongly context dependent and a function of the match between the spatio-temporal aspects of systemic defense expression and the feeding behavior of herbivores.
[ "plant defense", "trifolium repens", "physiological integration", "adaptive plasticity hypothesis", "plant communication" ]
[ "P", "P", "P", "P", "M" ]
Mol_Hum_Reprod-1-1-2408934
In utero exposure to low doses of environmental pollutants disrupts fetal ovarian development in sheep
Epidemiological studies of the impact of environmental chemicals on reproductive health demonstrate consequences of exposure, but establishing causative links requires animal models using 'real life' in utero exposures. We aimed to determine whether prolonged, low-dose exposure of pregnant sheep to a mixture of environmental chemicals affects fetal ovarian development. Exposure of treated ewes (n = 7) to pollutants was maximized by surface application of processed sewage sludge to pasture. Control ewes (n = 10) were reared on pasture treated with inorganic fertilizer. Ovaries and blood were collected from fetuses (n = 15 control and n = 8 treated) on Day 110 of gestation for investigation of fetal endocrinology, ovarian follicle/oocyte numbers and the ovarian proteome. Treated fetuses were 14% lighter than controls but fetal ovary weights were unchanged. Prolactin (48% lower) was the only measured hormone significantly affected by treatment. Treatment reduced the numbers of growth differentiation factor 9 (GDF9) and induced myeloid leukaemia cell differentiation protein (MCL1) positive oocytes by 25–26% and increased pro-apoptotic BAX by 65%; 42% of protein spots in the treated ovarian proteome were differentially expressed compared with controls. Nineteen spots were identified and included proteins involved in gene expression/transcription, protein synthesis, phosphorylation and receptor activity. Fetal exposure to environmental chemicals, via the mother, significantly perturbs fetal ovarian development. If such effects are replicated in humans, premature menopause could be an outcome. Introduction Exposure to environmental compounds (ECs) may alter female reproductive tissues and thus affect the ability of human couples to conceive and maintain a healthy pregnancy (Hruska et al., 2000). ECs include endocrine disrupting compounds [EDCs, e.g. dioxins, polychlorinated biphenyls (PCBs) and organochlorine pesticides] that have been, or are, used extensively in manufacturing and agriculture. ECs are found ubiquitously within the environment, together with heavy metal pollutants. Human exposure occurs through a variety of routes including the consumption of meat and dairy products, ingestion of water, absorption through the skin and inhalation. The concentrations of ECs within human tissues and the mechanisms through which they elicit effects on reproductive tissues are not well understood. Studies conducted in a variety of animal models demonstrate that the female fetus is particularly sensitive to small endocrine changes (Lovekamp-Swan and Davis, 2003; Henley and Korach, 2006) that have detrimental effects on reproductive function (Miller et al., 2004; Uzumcu and Zachow, 2007). There is uncertainty as to whether women's fertility and reproductive health have deteriorated or whether the link between exposure to ECs and the increased incidence of breast cancer, precocious puberty, premature menopause and decreased fertility is valid (Sharara et al., 1998; Hruska et al., 2000). Nevertheless, the importance of developmental disturbance in disease aetiology is increasingly evident (Heindel, 2006). Determining the incidence of reproductive problems in women (15–20% of couples have difficulty conceiving) is confounded by diagnostic, social (e.g. increasing age of women first trying to conceive) and obesity issues (Evers, 2002; Sharpe and Franks, 2002), which complicates linking changes in reproductive function with EC exposure (Akkina et al., 2004; Cesario and Hughes, 2007).
Similarly, although the incidence of breast cancer is clearly rising and has strong links with environmental factors and in utero exposures (Kortenkamp, 2006), there is no general increase in ovarian cancer (Bray et al., 2005). Two conclusions arise from the literature: first, that the development of the female fetus or embryo can be perturbed by exposure to ECs and, second, that the evidence for population trends in humans is mixed and often inconclusive. Human tissues, including the maternal-fetal unit and maternal tissues during gestation, contain levels of ECs that are associated with many in utero effects (e.g. Ikezuki et al., 2002; Younglai et al., 2002; Tsutsumi, 2005; Barr et al., 2007; Chao et al., 2007; Huang et al., 2007; Thundiyil et al., 2007). However, the lack of robust human exposure-phenotype linkage data and multi-factorial effects on reproductive trends hamper our ability to test the hypothesis that in utero exposure to environmental chemicals damages female reproductive development (Foster, 2003). Many ECs have the potential to perturb development (Tabb and Blumberg, 2006; Watson et al., 2007) and epidemiological studies (Toft et al., 2004) suggest that there is a need to study the effects of EC exposure in relevant animal models at real-world rates of exposure, focusing on likely 'sensitive' components and mechanisms of relevant reproductive circuits. Most studies into the effects of ECs have focused on rodent models, using single compounds, administered for short periods, often at pharmacological doses. Although essential to understanding of the mechanisms through which such compounds act, these studies are of limited value in the assessment of risks to human developmental health because the patterns of experimental exposure are not representative of normal human exposure. Thus an animal model using a prolonged, 'real-life' (defined as exposure through natural routes, not via specific application such as injection), in utero pattern of EC exposure will be better able to indicate likely effects on humans. An ovine model in which pregnant ewes are exposed to the complex mix of ECs and heavy metals contained in human sewage sludge, following its application to pasture as a fertilizer, is characterized by defects in fetal development (Paul et al., 2005). To determine which chemical classes might be involved in these effects, we measured levels of seven heavy metals, di(2-ethylhexyl) phthalate (DEHP), nonyl phenol and seven polychlorinated biphenyl (PCB) congeners in tissues of exposed fetuses/offspring and found concentrations of Cu, Pb, Zn and DEHP to be significantly altered (Rhind et al., 2002, 2005a,b, 2007a,b). Sewage sludge is a relevant source of a chemical cocktail to which humans are exposed as it, in itself, broadly reflects human exposure (Rhind et al., 2002, 2005b; Abad et al., 2005; Oleszczuk, 2006; Martinez et al., 2007) to a complex mixture of ECs (Groten et al., 2004; Jonker et al., 2004; Robinson and MacDonell, 2004; Koppe et al., 2006). In addition, sheep, like humans, are long-lived and have a relatively long gestation period (145 days), during which time ovine fetuses show a similar timing and sequence of ovarian development to that observed in the human (Pryse-Davies and Dewhurst, 1971; Juengel et al., 2002; Sawyer et al., 2002; De Felici et al., 2005). Unlike the rodent, the ruminant fetal ovary synthesizes estrogens, which are important for germ cell development (Pannetier et al., 2006).
Therefore, notwithstanding the metabolic and dietary differences between sheep and humans, the long-term, real-life, low-dose exposure of pregnant sheep to a complex mix of relevant ECs is a good model for the human, given that we know that many ECs are gaining access to the materno-fetal unit in pregnant women. In this study, we aimed to determine whether prolonged, low-dose exposure of the developing fetus, in utero, to the maternal dietary EC load adversely affects the developing fetal ovary. Materials and Methods Animals, blood and tissue collection The study was conducted at the Macaulay Institute research station (Hartwood, Lanarkshire, Scotland, UK) using Texel ewes. Animals were maintained on pasture at conventional stocking rates, adjusted according to pasture height, as described previously (Paul et al., 2005), so that animals from the respective treatments were maintained in comparable nutritional states. Neither group received supplementary feeding during their breeding lives. The animals were inspected by a qualified shepherd on a daily basis and routine animal care and vaccination procedures were conducted, as prescribed by best practice protocols. Digested sewage sludge was applied twice annually, to each of three replicate 9 ha plots, at a rate of 2.25 metric tonnes of dry matter per hectare. For the first five applications, the sludge was applied in liquid form as described previously (Rhind et al., 2002). Thereafter, owing to changes in sludge production practices by the UK water authorities, thermally dried sludge pellets were used and applied at similar rates (about 2.25 tonnes dry matter/ha/year). The composition of the sludge, on a dry matter basis, did not differ between the two methods of application. The application of sludge to the surface of the pastures was not designed to conform to the UK recommendations for good practice (SEDE and Arthur Anderson, 2001). According to recognized codes of practice, sludge can only be applied to grazed grassland when it is deep injected into the soil or, if it is applied to the surface, there can be no grazing of that land within the season of application. However, this study was designed to maximize the rate of contamination of the pasture and topsoil and thus to maximize the risk of exposure of grazing animals to the chemical constituents of sewage sludge through their food. Animals were not allowed to graze the pasture until a minimum of 3 weeks after sludge application, as prescribed by relevant legislation (Parliament, 1989). Control ewes were maintained on similar pasture to which 225 kg of nitrogen/ha/year was applied in the form of conventional inorganic fertilizers. The relatively harsh environmental conditions in central Scotland, where the farm is located, mean that there was no growth of clover or other estrogenic plant species in any of the pastures. The treated and control groups from which the study animals were drawn each comprised 3 replicate subgroups of 5 breeding ewes in each of four age categories (total flock size = 120 ewes). For the present study, subgroups of ewes that were 6 years of age and had been maintained on the respective treatments throughout their breeding lives were drawn from all replicates of each treatment. Ewes from both groups underwent estrous cycle synchronization, using progestagen sponges (Chronolone, 30 mg; Intervet, Cambridge, UK), before being mated to rams of the same genotype and from the same source, to remove the effect of genotypic differences.
Estimation of gestational age was based on the knowledge that conception should occur within 48 h of sponge removal when estrous cycle synchronization is used. Animals were euthanized at ∼110 days of gestation (GD110, equivalent to 27 weeks in the human) according to Schedule 1 protocols as defined by the UK Animals (Scientific Procedures) Act, 1986. Twenty-three female and 24 male fetuses were collected from the two sets of ewes (control, n = 10; treatment, n = 7) and all of the resulting female fetal lambs (n = 15 in the control group, n = 8 in the treated group) were used for the ovary studies detailed below. Maternal and fetal body weight and fetal ovary weight were recorded at slaughter; fetal blood samples were also collected and the serum, isolated by centrifugation, was stored at −20°C until required for assay. One ovary from each animal was fixed for 5.5 h in Bouin's fixative and then transferred to 70% ethanol until analysis. The other ovary from each animal was snap-frozen in liquid N2 and stored at −80°C until analysis. GD110 was selected as a representative developmental stage since, by this point, the primordial follicle pool has been established and expression of many developmentally important genes, such as growth differentiation factor 9 (GDF9), is maximal (Mandon-Pepin et al., 2003). Immunohistochemistry Sagittal 5 µm sections of each fetal ovary were cut, floated onto slides and dried at 50°C overnight. Slides were de-waxed in xylene, hydrated gradually through graded alcohols and washed in water. For all ovaries, slides were prepared containing two randomly selected, non-consecutive sections. At least one slide from each ovary was H&E stained for morphological assessment. Further slides were immunostained for induced myeloid leukaemia cell differentiation protein (MCL1) (pro-proliferative), GDF9 (development of primordial follicles) and phosphorylated histone H3 (pH3), the latter to assess active proliferation and establish a mitotic index (Brenner et al., 2003). Following routine de-waxing, antigen retrieval procedures were used for the MCL1 and GDF9 epitopes [microwaving in 0.01 M citrate buffer (pH 6.0) on full power for 3 × 5 min]. Immunostaining for MCL1 and GDF9 was carried out using standard peroxidase-based immunostaining protocols (MCL1: ChemMate, Dako; GDF9: Vector). Slides for MCL1 were placed in a Dako autostainer and incubated with anti-MCL1 antibody (Serotec, 1/50) for 30 min. Antibody binding was visualized using the ChemMate peroxidase/DAB detection system (DakoCytomation Ltd., Ely, Cambridgeshire, UK). Slides for GDF9 were incubated overnight in a humid chamber with anti-GDF9 antibody (R & D Systems, 1/15) and antibody binding was visualized using the Vector peroxidase/DAB detection system for 30 min (Vector Laboratories, Peterborough, UK). A rabbit anti-phospho-histone H3 antibody (#06-570; 1:500; Upstate Cell Signalling Solutions, Hampshire, UK) was used after antigen retrieval and visualized using a goat biotinylated anti-rabbit antibody, followed by AB complex with horseradish peroxidase (HRP) and DAB detection (all DAKO, Cambridge, UK). Ovarian sections were analysed for oocyte numbers and follicle size classes using an established follicle classification system (Lundy et al., 1999) by two independent observers (N.J.D. and H.M.) using six fields of view per section. Total numbers of pH3-positive granulosa, stromal, endothelial and surface epithelial cells were counted at ×40 and identified at ×400 by M.A.
Because of the relatively low incidence of pH3-positive cells, cell numbers were expressed per mm2 of ×40 fields of view. In all analyses, sections from 15 control and 8 sewage sludge-exposed fetuses were used. Only oocytes with the nucleus clearly visible were included in the quantification of oocyte and follicle densities, which was performed on six separate slides containing a total of 12 sections for each fetus (H&E, MCL1 and GDF9). Protein extraction The frozen fetal ovaries were processed for one-dimensional and two-dimensional gel electrophoresis, as described in detail previously (Fowler et al., 2007a,b). Briefly, the frozen ovaries (15 control, 8 treated) were blotted on filter paper and combined with lysis buffer (1 mg wet tissue weight: 5 µl lysis buffer) containing 0.01 M Tris–HCl, pH 7.4, 1 mM EDTA, 8 M urea, 0.05 M DTT, 10% (v/v) glycerol, 5% (v/v) NP40, 6% (w/v) pH 3–10 Resolyte (Merck Eurolab Ltd, Poole, Dorset, UK) and protease inhibitor cocktail (Roche Diagnostics, Lewes, UK). The tissues were minced, sonicated in iced water for four 10 min bursts, with 2 min between each sonication, and centrifuged at 50 000g for 20 min at 4°C. Once the protein content of the final supernatant containing the soluble cellular proteins had been determined (RC-DC assay, Bio-Rad Laboratories Ltd, Hemel Hempstead, UK), the ovary extracts were stored at −80°C until required for further analysis. One-dimensional gel electrophoresis and western blot Individual ovary lysates (15 control and 8 treated) were electrophoresed (30 µg protein/lane) on 26-lane one-dimensional 4–12% Bis–Tris gels (Invitrogen Ltd, Paisley, UK) under reducing conditions and transferred to Immobilon-FL membrane [Millipore (UK) Ltd, Watford, UK] as described previously (Lea et al., 2005). SeeBlue Plus2 molecular weight markers (Invitrogen) were electrophoresed in three lanes of every gel. The membranes were blocked (overnight, 4°C) with Odyssey Blocking Buffer (927-4000; LI-COR Biosciences UK Ltd, Cambridge, UK) and were incubated with primary antibodies (in blocking buffer) at 4°C overnight: (i) BAX (1:200; Santa Cruz Biotechnology Inc., CA, USA, sc-493), (ii) BCL2 (1:200; Santa Cruz, sc-783), (iii) SCF (1:200; Santa Cruz, sc-113126), (iv) CYP17A1 (1:2500; Fowler et al., 2007a), (v) SOD2 (1:2500; Abnova Corp, Taipei, Taiwan, H00006648), all combined with an anti-β-actin load control of a differing species (mouse 1:5000, AB6276; rabbit 1:10 000, AB8227; both AbCam Ltd, Cambridge, UK). The protein bands were visualized using an Odyssey infrared fluorescence imager (LI-COR) and the resulting electronic images were analysed using Phoretix-1D Advanced software (Nonlinear Dynamics Ltd, Newcastle upon Tyne, UK) in order to determine the band volumes and molecular weights. This software calculates band volumes, based on constant lane width and automatic band selection, from raw data of pixel area and intensity that are independent of operator-altered contrast or brightness. The band volumes of β-actin were compared between groups to check the validity of this load control for ovaries from the control and treated groups. Proteomics In the present study, we selected proteomic methods as an exploratory tool to uncover novel effects on the ovary; we have previously demonstrated the utility of this approach for heterogeneous tissues (Fowler et al., 2007a,b) and others have used it to characterize responses to endocrine disruption (Alm et al., 2006).
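As a rough sketch of the load-control normalization described above for the western blots, where each target band volume is divided by the β-actin band volume from the same lane before groups are compared, the following uses hypothetical file and column names (it is not the Phoretix workflow itself):

```python
# Sketch only: normalize target band volumes to the per-lane
# beta-actin band volume, then compare groups with one-way ANOVA.
import pandas as pd
from scipy.stats import f_oneway

bands = pd.read_csv("western_band_volumes.csv")  # hypothetical export
bands["bax_norm"] = bands["bax_volume"] / bands["actin_volume"]

control = bands.loc[bands["group"] == "control", "bax_norm"]
treated = bands.loc[bands["group"] == "sludge", "bax_norm"]
print(f_oneway(control, treated))  # with two groups this matches a t-test
```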
Two-dimensional gel electrophoresis Soluble proteins were analysed by two-dimensional gel electrophoresis using a small-format gel system described previously (Cash and Kroll, 2003; Uwins et al., 2006), using pooled (control versus sewage sludge) lysates comprising equal quantities of protein from each ovary (15 ovaries in the control pool, 8 ovaries in the treated pool). Briefly, in order to ensure the clearest electrophoresis of the proteins by two-dimensional gel electrophoresis, sample clean-up was performed using ReadyPrep 2-D Cleanup Kits (Bio-Rad Laboratories Ltd, Hemel Hempstead, Herts, UK) according to the manufacturer's instructions. One hundred micrograms of total protein from the pooled lysates were separated in the first dimension on 7 cm pH 4–7 immobilized pH gradient gels (GE Healthcare, Uppsala, Sweden). For the second dimension, the proteins were resolved on 10–15% gradient polyacrylamide slab gels and detected using Colloidal Coomassie Blue G250. Four replicate gels were electrophoresed from each lysate pool. Stained gels were scanned using a Molecular Dynamics Personal Densitometer (GE Healthcare) at 50 µm resolution to generate 12-bit images, which were transferred to Phoretix 2D Analytical software, V 6.01 (Nonlinear Dynamics, Newcastle, UK). The semi-automated routines available in this software were used to detect and quantify protein spots as well as to match the profiles across a gel series. Individual spot volumes are expressed as normalized volumes relative to the total detected spot volume, separately for each gel, minimizing potential analytical artefacts from variations in protein loading and migration. Mass spectroscopic protein identification Proteins in the gel pieces were digested with trypsin (sequencing grade, modified; Promega UK, Southampton, UK) using an Investigator ProGest robotic workstation (Genomic Solutions Ltd., Huntingdon, UK). Briefly, proteins were reduced with DTT (60°C, 20 min), S-alkylated with iodoacetamide (25°C, 10 min) and then digested with trypsin (37°C, 8 h). The resulting tryptic peptide extract was dried by rotary evaporation (SC110 Speedvac; Savant Instruments, Holbrook, NY, USA) and dissolved in 0.1% formic acid for LC-MS/MS analysis. Peptide solutions were analysed using an HCTultra PTM Discovery System (Bruker Daltonics Ltd., Coventry, UK) coupled to an UltiMate 3000 LC System [Dionex (UK) Ltd., Camberley, Surrey, UK]. Peptides were separated on a Monolithic Capillary Column (200 µm i.d. × 5 cm; Dionex part no. 161409). Eluent A was 3% acetonitrile in water containing 0.05% formic acid; Eluent B was 80% acetonitrile in water containing 0.04% formic acid, with a gradient of 3–45% B in 12 min at a flow rate of 2.5 µl/min. Peptide fragment mass spectra were acquired in data-dependent AutoMS(2) mode with a scan range of 300–1500 m/z, three averages, and up to three precursor ions selected from the MS scan (100–2200 m/z). Precursors were actively excluded within a 1.0 min window, and all singly charged ions were excluded. Peptide peaks were detected and deconvoluted automatically using Data Analysis software (Bruker). Mass lists in the form of Mascot Generic Files were created automatically and used as the input for Mascot MS/MS Ions searches of the NCBInr database using the Matrix Science web server (www.matrixscience.com). The default search parameters used were: enzyme = Trypsin; Max.
Missed cleavages = 1; Fixed modifications = Carbamidomethyl (C); Variable modifications = Oxidation (M); Peptide tolerance ± 1.5 Da; MS/MS tolerance ± 0.5 Da; Peptide charge = 2+ and 3+; Instrument = ESI-TRAP. Only proteins showing good agreement with mass and pI on the two-dimensional gels, statistically significant MOWSE scores and good sequence coverage were considered to be positive identifications. Hormone assays Fetal serum levels of follicle-stimulating hormone (FSH), estradiol and prolactin (PRL) were measured by radioimmunoassays that have been described and validated previously for sheep (McNeilly et al., 1986; Mann and Lamming, 1995; Lincoln et al., 2003; Crawford et al., 2004). The assay standards used, assay sensitivities and intra-assay coefficients of variation were as follows: FSH: NIDDK-FSH-RP2 and NIH-LS18, 0.1 ng/ml, <10%; estradiol: MAIA estradiol kit (Serono Diagnostics, Fleet, Hants, UK), 0.2 pg/ml, <12%; PRL: NIDDK-PRL-RP3, 0.5 ng/ml, 10%. All fetal blood samples were measured in single assays. Statistical analysis Analyses were performed using JMP (5.1, Thomson Learning, London, UK). Normality of data distribution was tested with the Shapiro–Wilk test and non-normally distributed data were log-transformed prior to analysis. Morphological and endocrine data, the one-dimensional gel electrophoresis western blot band volumes (normalized relative to β-actin expression separately for each lane) and normalized spot volumes (% of total spot volume for each gel separately) were compared as control versus sewage sludge-exposed using one-way ANOVA. For the proteomics, virtual ‘average’ gels were prepared so that only spots present in three out of four gels for each group were included and compared to determine differences in spot expression. Spots demonstrating statistically significant differences (ANOVA) in normalized volume were investigated further. Unless stated otherwise, data are presented as mean ± SEM. Results Maternal and fetal body weights and fetal ovary weights and endocrinology There was no difference in the body weights, at the time of slaughter, of the ewes pastured on control and sewage sludge-treated fields (83 ± 2 versus 84 ± 1 kg, respectively). In contrast, the female fetuses from the ewes maintained on the sewage-treated pastures were significantly (P = 0.013) lighter (by 14%; Table I) than the controls [and similar to the male fetuses (Paul et al., 2005)]. Despite the difference in body weight, there was no significant (P > 0.05) treatment effect on the ovary weights. When ovary weights were normalized to body weight (mg/kg), the difference between treated and control animals remained non-significant (treated 22.7 ± 3.0 versus control 18.8 ± 1.1 mg/kg, P > 0.05). Of the three hormones measured, only PRL was significantly affected by exposure to sewage sludge (reduced by 49% compared with controls, P = 0.011), although estradiol concentrations also tended to be lower (by 22%) than in the controls.
Table I. Effects of sewage sludge on morphological and endocrine characteristics of Day 110 fetal ewes.
Characteristic | Control (n = 15) | Sludge-exposed (n = 8) | Fold-change following sludge-exposure | ANOVA
Body weight (g) | 1593 ± 67 | 1365 ± 67 | −1.17 | P = 0.013
Ovary weight (mg) | 29.4 ± 2.1 | 31.0 ± 4.2 | +1.05 | NS
FSH (ng/ml) | 2.1 ± 0.3 | 1.4 ± 0.2 | −1.50 | NS
Estradiol (pmol/l) | 128 ± 54 | 100 ± 27 | −1.28 | NS
PRL (ng/ml) | 3.7 ± 0.7 | 1.9 ± 0.3 | −1.95 | P = 0.011
Lack of statistical significance (NS) is indicated where P > 0.05.
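As an illustration of the workflow described under Statistical analysis above (Shapiro–Wilk normality check, log-transformation of skewed variables, then one-way ANOVA), the following is a minimal Python sketch using scipy in place of JMP; the data arrays are hypothetical stand-ins, not the study's measurements.

```python
# Minimal sketch (hypothetical data): test normality, log-transform if either
# group is skewed, then compare control vs sludge-exposed by one-way ANOVA.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.lognormal(mean=3.0, sigma=0.4, size=15)  # e.g. a hormone level
treated = rng.lognormal(mean=2.6, sigma=0.4, size=8)

def compare_groups(a, b, alpha=0.05):
    _, p_a = stats.shapiro(a)   # Shapiro-Wilk on each group
    _, p_b = stats.shapiro(b)
    if min(p_a, p_b) < alpha:   # non-normal -> log-transform both groups
        a, b = np.log(a), np.log(b)
    return stats.f_oneway(a, b)

f, p = compare_groups(control, treated)
print(f"one-way ANOVA: F = {f:.2f}, P = {p:.4f}")
```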
Effects of treatment on oocyte and follicle numbers and mitotic index The densities of oocytes and follicles were similar whether quantified by assessing H&E-stained slides or MCL1- and GDF9-positive oocytes (Fig. 1a and b). Total oocyte densities were significantly reduced in the treated ovaries assessed by H&E staining (reduced by 19%: 18.8 ± 1.3 in treated versus 23.1 ± 1.6 in control, P = 0.042), MCL1 immunostaining (reduced by 26%: 14.6 ± 2.1 in treated versus 19.6 ± 1.5 in control, P = 0.039, Fig. 1) and GDF9 immunostaining (reduced by 28%: 13.9 ± 1.9 in treated versus 19.4 ± 1.4 in control, P = 0.034). The ratio of different follicle classes and isolated oocytes was also significantly skewed (P = 0.003 by two-way ANOVA combining treatment with follicle size class) by treatment (Fig. 1e), with a slightly, but significantly, greater proportion of the follicles being further developed following EC exposure (combining classes 1a, 2 and 3: treated 16.1 ± 1.8 versus control 11.2 ± 0.9, P = 0.040). There were no statistically significant effects of sewage sludge exposure (P > 0.05) on the mitotic index of granulosa cells (0.6 ± 0.3 in treated versus 1.0 ± 0.2 in control), stromal cells (14.1 ± 1.3 in treated versus 13.3 ± 1.0 in control) or surface epithelial cells (0.3 ± 0.2 in treated versus 0.5 ± 0.1 in control), expressed as pH3-positive cells/mm². In contrast, sewage sludge-exposed ovaries had significantly (P = 0.014) more pH3-positive endothelial cells/mm² (1.2 ± 0.3 treated versus 0.3 ± 0.2 control). Figure 1: Effect of sewage sludge exposure on fetal ovarian morphology. Immunolocalization of (a) MCL1, (b) GDF9 and (c) phosphorylated histone H3 expression in representative Day 110 fetal ovary sections. In (a) and (b), the arrow highlights immunopositive oocytes, whereas in (c), the arrow shows a pH3-positive granulosa cell. GDF9- and MCL1-immunopositive oocytes were present in all classes of follicles observed. Quantification (n = 15 control versus 8 treated ovaries) of the MCL1 immunohistochemistry is shown in (d), demonstrating the significant decrease in oocyte density in sewage sludge-exposed fetuses, and in (e), demonstrating the small but significant increase in the proportion of more advanced follicles in sewage sludge-exposed fetuses. Since the findings were similar between H&E staining and MCL1- and GDF9-positive oocytes, only the MCL1 data are shown to illustrate these results. Effects of treatment on markers of apoptosis, developmental signalling, steroidogenesis and response to oxidative stress There were no significant differences in β-actin band volumes between treated and control groups, demonstrating the validity of this load control. In utero exposure to sewage sludge-treated pasture altered the balance between pro- and anti-apoptotic processes in the fetal ovaries. Whereas BAX was significantly higher in treated ovaries relative to the controls (by 65%, P = 0.002), BCL2 tended to be lower, but this difference did not reach statistical significance (by 54%, P > 0.05; Fig. 2a and b). Expression of the key ovarian developmental signalling protein, SCF, was not different between controls and sludge-exposed ovaries (Fig. 2c). CYP17 expression tended to be lower (by 34%) in the treated compared with the control ovaries, but again this difference did not reach statistical significance (Fig. 2d). SOD2 was detected as both the mature 15 kDa form (weakly) and the 25 kDa precursor form (Fig. 2e and f).
The precursor form tended to be lower (by 22%, P > 0.05), but the mature molecule was significantly lower (by 42%) in the treated compared with the control ovaries (P = 0.031). Figure 2: Quantification of levels of Day 110 fetal ovarian proteins (n = 15 control versus 8 treated ovaries) involved in (a) pro-apoptosis, BAX, (b) anti-apoptosis, BCL2, (c) developmental cell signalling, stem cell factor (SCF), (d) steroidogenesis, CYP17 and (e and f) oxidative stress response, manganese superoxide dismutase (SOD2). P-values denote significant differences between control and sewage sludge-exposed fetuses; the load control was β-actin at 42 kDa. Effects of treatment on the fetal ovarian proteome There was little difference in the overall number of robust protein spots between the control (349 ± 9) and sewage sludge (354 ± 5) groups (P > 0.05); however, 147 protein spots showed significant (P < 0.05) differences in normalized volume, or in presence/absence, in the sewage sludge group compared with controls, with a minimum change of ≥1.2-fold required for inclusion. The differentially expressed protein spots covered a wide range of normalized spot volumes. In the sewage sludge ovarian proteome, a total of 52 spots were up-regulated (1.2–4.6-fold increased, P < 0.05), 46 were down-regulated (1.2–5.2-fold decreased, P < 0.05), 25 were absent (only present in the control group) and 24 were unique (only present in the sewage sludge group). Nineteen protein spots showing consistent detection, reproducible quantification between replicate gels and statistically significant differences, or very clear uniqueness to one experimental group, were identified by tandem mass spectrometry (Table II and Fig. 3). Six of these proteins were blood proteins, which showed highly variable alterations in expression (−5.2-fold to +3.5-fold, absent to unique) between sewage sludge and control groups, preventing any simplistic links to potential differences in vascularization. The remaining 13 proteins fell into four broad functional categories: (i) cytoskeleton and its regulation, (ii) gene expression, transcription and processing, (iii) protein synthesis and (iv) protein phosphorylation and receptor activity. Pathway Architect software (Stratagene Europe, Amsterdam, The Netherlands) was used to explore the functional links between the proteins that exhibited altered expression between the two groups. Six of these proteins were directly linked with translation (Fig. 4a), and SOD2 and BAX are directly associated with DNA fragmentation during apoptosis (Fig. 4b). Graphical representation of an interactions network demonstrates that CALM1 dominates the relationships between many of the differentially expressed proteins (Fig. 5). Figure 3: Two-dimensional gel electrophoresis (2-DE)-based proteomic analysis of Day 110 fetal ovaries (n = 15 control versus 8 treated ovaries) pooled into treated and control groups. (a) Representative 2-DE 7 cm gel using a pH 4–7 gradient. The numbered arrows show the location of the 19 protein spots identified in Table II. Zoom boxes show representative protein spots with (b) down-regulation and (c) up-regulation in fetuses exposed to sewage sludge. The histograms are based on averaged (n = 4 gels/group) spot volume (normalized relative to total spot volumes separately for each gel). Significant differences are based on ANOVA following log-transformation of spot volumes.
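The partitioning into up-regulated, down-regulated, absent and unique spots reported above follows a simple decision rule; the sketch below illustrates it under stated assumptions (spot volumes already normalized to total spot volume per gel, averaged across replicates, and set to None when a spot was not detected in the required three of four gels for a group). The function name and inputs are hypothetical, not part of the Phoretix software.

```python
# Minimal sketch (hypothetical inputs): classify matched 2-DE spots relative
# to controls using the >=1.2-fold and P < 0.05 criteria described above.
def classify_spot(control_vol, treated_vol, p_value,
                  min_fold=1.2, alpha=0.05):
    """control_vol / treated_vol: mean normalized spot volumes, or None if
    the spot was absent from that group's replicate gels."""
    if control_vol is not None and treated_vol is None:
        return "absent"    # only present in the control group
    if control_vol is None and treated_vol is not None:
        return "unique"    # only present in the sewage sludge group
    if p_value >= alpha:
        return "unchanged"
    fold = treated_vol / control_vol
    if fold >= min_fold:
        return "up-regulated"
    if fold <= 1.0 / min_fold:
        return "down-regulated"
    return "unchanged"

# Example: a spot halved in normalized volume in the treated pool.
print(classify_spot(0.40, 0.20, p_value=0.01))  # -> down-regulated
```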
Figure 4: Biological processes network of some of the differentially expressed proteins from the treated ovaries, demonstrating key interactions between CALM1 and HNRPH in regulating (a) translation, and antagonism between SOD2 and BAX in (b) DNA fragmentation during apoptosis. Output from Pathway Architect. Figure 5: Biological interactions network of some of the differentially expressed proteins from the treated ovaries. CALM1 sits at the centre of the network, interacting with most of the differentially regulated proteins via one or two steps. Ovals with a green background denote small molecules. Output from Pathway Architect software (Stratagene Europe, Amsterdam, the Netherlands).
Table II. Positively identified proteins showing differential expression between ovaries at GD110 from 15 control fetuses and 8 fetuses exposed to sewage sludge in utero.
Spot # | Protein name | Gene symbol | Protein function and tissue/cellular location | MW (kDa) | pI | MOWSE score | Accession number (NCBI) | Fold change (P, ANOVA)
Cytoskeleton and its regulation:
1 | Gelsolin [actin depolymerising factor (ADF), brevin] | GSN | Promotes nucleation and severs formed filaments. Expression lost in ovarian carcinoma. Cytoplasm | 80.9 | 5.58 | 429 | 2833344 | −2.8 (P = 0.020)
2 | Vinculin | VCL | Involved in cell adhesion and focal complex assembly, with a role in actin microfilament attachment to plasma membranes. Cytoskeleton, cell–cell junctions | 117.2 | 5.83 | 1050 | 4507877 | −3.4 (P = 0.043)
9 | Tubulin α-chain (alpha-tubulin 1) | TUBA1 | Major constituent of microtubules, binds two GTP molecules. Cytoskeleton | 50.6 | 4.97 | 383 | 3502919 | +3.1 (P < 0.001)
Blood proteins:
5 | Serum albumin precursor | ALB | Secreted protein, typically into plasma; binding molecule, regulates colloidal osmotic pressure | 71.1 | 5.82 | 251 | 162648 | −5.2 (P = 0.003)
10 | Serum albumin precursor | ALB | – | – | – | 97 | – | +3.0 (P < 0.001)
12 | Serum albumin precursor | ALB | – | – | – | 189 | – | Absent
8 | Albumin precursor | ALB | – | 71.1 | 5.80 | 483 | 1387 | +3.5 (P = 0.010)
16 | Albumin precursor | ALB | – | – | – | 646 | – | Unique
14 | Serotransferrin precursor (transferrin, siderophilin) | TF | Iron-binding transport protein; stimulation of cell proliferation | 79.9 | 6.75 | 243 | 29135265 | −3.1 (P = 0.003)
Gene expression, transcription and processing:
3 | DNA-binding pur alpha | PURA | Interacts with RNA and DNA and recruits regulatory proteins to specific nucleic acid sequences, stimulating transcription. Nucleus | 34.8 | 5.88 | 61 | 9652255 | −5.0 (P < 0.001)
7 | Heterogeneous nuclear ribonucleoprotein K (HNRPK) | HNRPK | Binds RNA, interacts with pur alpha to mediate repression of the CD43 promoter. Ribonucleoprotein complex | 51.3 | 5.14 | 598 | 74354615 | +3.1 (P = 0.023)
11 | Heterogeneous nuclear ribonucleoprotein H (HNRNPH) | HNRPH1 | Part of complex providing substrates for processing of pre-mRNA. Nucleus | 49.5 | 5.89 | 475 | 10946928 | Absent
Protein synthesis:
4 | Endoplasmic reticulum protein ERp29 (ERp31 or ERp28) | ERP29 | Molecular chaperone, participates in the folding of secretory proteins. Endoplasmic reticulum | 28.9 | 5.63 | 270 | 109658363 | −3.2 (P = 0.002)
6 | Calmodulin (CaM) | CALM1 | Mediates the control of enzymes and proteins (e.g. protein kinases and phosphatases) by Ca; role in calcium signalling. Cytoplasm, cell membrane | 16.8 | 4.09 | 224 | 115509 | −3.1 (P = 0.033)
17 | Tu translation elongation factor (EF-Tu) | TUFM | Role in protein synthesis, promoting GTP-dependent binding of tRNA to ribosomes. Mitochondria | 49.7 | 6.72 | 235 | 111304949 | Unique
18 | Tumor rejection antigen 1 (GP96) (HSP90 family) | TRA1 (GRP94, HSP90B1) | Molecular chaperone during processing, folding and transport of secreted proteins in the endoplasmic reticulum. Endoplasmic reticulum | 92.7 | 4.76 | 727 | 27807263 | Unique
Protein phosphorylation and receptor activity:
13 | Guanine nucleotide-binding protein G, beta 1 (transducin beta 1 chain) | GNB1 | Part of G-protein heterotrimer; transduces transmembrane signalling systems. Cell membrane, cytoplasm | 38.2 | 5.60 | 384 | 6680045 | Absent
15 | Protein phosphatase 1D | PPM1D (WIP1) | Required for relief of the p53-dependent checkpoint-mediated cell cycle arrest; negatively regulates cell proliferation. Nucleus | 38.0 | 5.84 | 351 | 227436 | +4.6 (P = 0.010)
19 | S100 calcium-binding protein A11 (similar to) | S100A11 | Proposed function is calcium-ion binding and signal transduction; negative regulation of cell proliferation. Cytoplasm | 11.5 | 6.72 | 202 | 29135265 | Unique
Discussion The aim of this study was to establish whether long-term exposure of pregnant ewes to a cocktail of ECs disrupts female fetal ovarian development. Our results clearly show that long-term dietary exposure to such a cocktail, delivered via application of sewage sludge fertilizer to pasture, significantly disturbs both the fetal ovary and fetal endocrinology. These ovarian alterations were associated with a mild but significant growth restriction of the female fetuses. In utero exposure of the female fetus to ECs results in a very wide range of morphological changes, varying according to species and the nature of the chemicals (reviewed by Miller et al., 2004). Although 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) does not produce the reduction in rodent oocyte and follicle numbers that we find (Flaws et al., 1997), PCBs and polycyclic aromatic hydrocarbons (PAHs) do (Flaws et al., 1997; Matikainen et al., 2002). The actions of other chemicals in the sewage sludge with effects on reproduction, such as cadmium (Henson and Chedrese, 2004), must also be taken into account. However, it is unlikely that our results were due to elevated phytoestrogens, since studies of phytoestrogens (usually in rodents), such as genistein (e.g. Jefferson et al., 2006), show decreased germ cell apoptosis and reduced breakdown of oocyte nests, quite different from our findings of reduced oocyte density in fetuses exposed to sewage sludge chemicals in utero. This is extremely interesting given that ovarian BAX expression was significantly higher in sludge-exposed fetuses. In the mouse, BAX is a key regulator of follicle numbers, and the BCL family is heavily implicated in the mechanisms underlying detrimental effects of selected environmental chemicals on the ovary (Borgeest et al., 2004; Miller et al., 2005; Greenfeld et al., 2007a,b). It is most probable that similar mechanisms act to determine the follicle pool in all mammals. That the anti-apoptotic BCL2 tended to be reduced, although not significantly so, in the sludge-exposed fetuses suggests a shift of the pro-/anti-apoptotic balance towards apoptosis. These findings agree with our observation that there were no differences in granulosa cell mitotic index and lead to the conclusion that the reduction in follicle numbers that we observed was due to an early loss of germ cells. Should these effects be reproduced in women exposed in utero to ECs as fetuses, they would be likely to result in reduced fertility and earlier onset of menopause. SCF was not affected by exposure to an EC cocktail in sewage sludge, demonstrating the subtlety of the effects of a cocktail of chemicals.
Clearly, while oocyte density is lower, the pre-granulosa or granulosa cells were producing the same amount of SCF, which is important in early follicle development (Parrott and Skinner, 2000; Jin et al., 2005). Sewage sludge exposure did not significantly affect the mitotic index (density of pH3-positive cells) of any ovarian cell type studied, other than endothelial cells. This suggests an effect on ovarian angiogenesis and supports our conclusion that chemicals in the sewage sludge are acting on the fetus, since EDCs are known to have both stimulatory and inhibitory effects on angiogenesis (Tavolari et al., 2006). Although neither was statistically significant, our findings of reduced levels of circulating estradiol and lowered ovarian expression of CYP17 in sludge-exposed fetuses were consistent with each other and suggest a mild impairment of steroidogenesis in these animals, possibly related to reduced germ cell proliferation (Pannetier et al., 2006) in sewage sludge-exposed fetuses. We quantified SOD2 because reactive oxygen species (ROS) induced by some ECs can damage developing organs (Ahmed et al., 2005). Further, methoxychlor and heavy metals decrease the expression of protective enzymes such as SOD1 and SOD2 in ovarian cells (Gupta et al., 2006; Nampoothiri et al., 2007), which would have developmental and steroidogenic consequences. Our animals, exposed to a real-life complex cocktail of ECs, revealed a similar trend, since the levels of SOD2 were lower than those in the controls; thus, their ovarian cells may be more susceptible to oxidative damage than those of controls. The reduction in SOD2 would also imply reduced inhibition of DNA fragmentation during apoptosis at a time when increased BAX will elevate apoptosis rates (Fig. 4b; Southgate et al., 2006). The fact that phytoestrogens, such as daidzein, reduce ROS-induced toxicity (Tang et al., 2006) contrasts sharply with the effects of exposure to sewage sludge chemicals, emphasizing that the observed effects are not due to phytoestrogens. Exposure to sewage sludge chemicals altered the expression patterns of 42% of the protein spots in the exposed compared with control ovaries. Although this by no means reflects the entire ovarian proteome, it indicates that chemicals in sewage sludge can cause widespread changes in the ovary. Some of these changes will reflect altered cell numbers, but some will reflect changes in pathways involved in ovarian function and development. In addition, proteins may be present in many spots and spots may contain several proteins, so changes in spot volumes reflect complex and sometimes very subtle alterations in the protein profile. Proteomics allows us to investigate global effects of exposure to environmental chemicals without preconceptions about pathways and mechanisms, complementing the targeted study of ovarian morphology and proteins of known importance in reproductive development. We targeted the most consistently and statistically significantly altered proteins for identification; this addresses gel-to-gel variation and reduces potential methodological error. In addition, we limited ourselves to proteins showing a ≥2-fold change in expression. Liquid chromatography–mass spectrometry enabled positive identification of all 19 selected protein spots that fitted the selection criteria. Overall, the proteins showing disturbed expression would be consistent with altered transcription and translation, apoptosis/proliferation and protein production/actions within the developing ovary.
The disturbed proteins could also result in dysregulated receptor and calcium-dependent functions. The most obvious consequence of the altered proteins is the probable disturbance in translation, since 6 of the 19 differentially expressed proteins that we identified have interactions with, and effects on, translational processes (Fig. 4a). This would have consequences for the expression of gene products during development. The reduction of CALM1 protein in sludge-exposed fetal ovaries was unexpected, because up-regulation of CALM1 is characteristic of genomic estrogenic activation (Wang et al., 2004). This, therefore, is evidence that exposure to sewage sludge was not necessarily activating conventional genomic estrogen pathways. Indeed, it is now well known that environmental chemicals operate through a wide range of mechanisms and pathways in addition to genomic estrogen action (Henley and Korach, 2006; Tabb and Blumberg, 2006). The 5-fold reduction of DNA-binding pur-alpha (PURA) in ovaries from fetuses exposed to sewage sludge chemicals is interesting on two counts. First, PURA, which recruits regulatory proteins to RNA and DNA sequences, is involved in oncogenic inhibitory pathways (Johnson et al., 2003), and, second, CALM1 stimulates PURA (Kuo et al., 1999), so a reduction in both proteins suggests a causative link. The location of CALM1 at the node of a biological interactions network that links most of the proteins we observed to be affected by exposure to sewage sludge chemicals (Fig. 5) demonstrates how changes in protein expression could propagate along numerous pathways, from transcription through to post-translational modification and receptor activity. Relevantly, calmodulin-dependent kinases are implicated in maintaining ovarian steroidogenesis (Seals et al., 2004) and calmodulin binding protein is expressed in fetal germ cells during development (Luers et al., 2002), whereas defects in the caldesmon/calmodulin system impair cell cycling and migration (Li et al., 2004). Gelsolin (GSN), 2.8-fold reduced in treated ovaries, was of interest since its expression is reduced in ovarian cancer (Noske et al., 2005). GSN, which modulates cellular motile activities, is widely expressed in the developing embryo (Arai and Kwiatkowski, 1999) and adult ovary (Teubner et al., 1994), where it is important in follicle growth. The role of vinculin (VCL) in a variety of processes, such as granulosa cell differentiation (Kranen et al., 1993), suggests that its down-regulation in our study may have consequences for ovarian development. The HNRNP family of proteins is important in the formation of mRNA, and both the H and K forms are highly expressed in the primary oocyte (Kamma et al., 1995). HNRNPK, up-regulated in sludge-exposed ovaries, is important for cell spreading (Yoo et al., 2006) and transcriptional responses to DNA damage (Moumen et al., 2005). HNRNPK and PURA act together in transcriptional regulation of leukocytes (Da Silva et al., 2002) and, should both proteins act together on ovarian cells, disruption of transcription would be expected. Pertinently, the HNRNPH homologue, glorund, represses ovarian expression of Drosophila nanos (Kalifa et al., 2006), which is highly conserved and important for germ cell migration in mouse embryos (Tsuda et al., 2003). This contrasts with our study, where HNRNPH was greatly reduced, which would instead suggest increased germ cell numbers.
Disturbance of transcription would have multiple effects in many cell types, although redundancy of function could greatly reduce the severity of phenotypic effects. The endoplasmic reticulum is important in protein secretion and many cellular activities, such as the response to oxidative stress, so it is interesting that two endoplasmic reticulum chaperone proteins, ERP29 and TRA1 (also called GRP94), were reduced and increased, respectively, by exposure to sewage sludge chemicals. Although not well understood, endoplasmic reticulum chaperone proteins are important in developmental and disease processes, including programmed cell death (Rao et al., 2006; Ni and Lee, 2007). TRA1 is abundant in the adult oolemma (Calvert et al., 2003) and regulates the secretion of a key player in ovarian function: insulin-like growth factor (Knight and Glister, 2006; Wanderling et al., 2007). TRA1 is increased in endoplasmic reticulum stress (Hagg et al., 2004), underlining the relevance of this finding. Although ERP29 is redox-inactive (Mkrtchian and Sandalova, 2006), it may be a target for reactive metabolites of toxic chemicals like bromobenzene (Koen and Hanzlik, 2002). The fact that pituitary levels of ERP29 are increased by exposure to estrogen (Blake et al., 2005), but were decreased in our sewage sludge-exposed ovaries, may indicate either non-estrogenic effects or differential responses in the ovary. The increase in Tu translation elongation factor (TUFM), which has chaperone activities during protein synthesis (Suzuki et al., 2007), is interesting since mutations in elongation factors cause developmental pathologies (Smeitink et al., 2006). Increased PPM1D and decreased GNB1 in the sludge-exposed ovaries imply effects on phosphorylation and receptor activity. PPM1D over-expression amplifies tumorigenesis by suppressing p53 activation (Bulavin et al., 2002) and enhances progesterone receptor activation (Proia et al., 2006). Importantly, progesterone receptor activation reduces primordial follicle assembly (Kezele and Skinner, 2003). This may be part of the mechanism by which sewage sludge-exposed ovaries exhibit reduced density of follicles at Day 110 of gestation. The sludge-exposed female fetuses were 14% lighter than controls, which raises the issue of what caused this difference and what effect this apparently very mild growth restriction might have on fetal ovarian development. There were no differences in maternal body weight in our study, which argues against any conclusion that sludge-exposed fetuses were nutritionally challenged. The literature (reviewed by Rhind et al., 2003) suggests that such a mild body weight difference is highly unlikely, in itself, to cause the ovarian developmental perturbation that we report here. For instance, a 50% maintenance diet during gestation resulted in a 32% decrease in lamb body weight and a 23% reduction in fetal ovary weight, but no loss of germ cells (Murdoch et al., 2003). More detailed studies of maternal undernutrition during pregnancy in sheep (e.g. Rae et al., 2001) found that cutting energy provision during pregnancy to 50% induced a trend towards smaller ovaries and significantly fewer more advanced follicles (types 2, 3 and 4) without changing germ cell numbers. This contrasts with our study, in which the proportion of more advanced follicles increased, possibly contributing to our finding that ovary weight was not different between controls and sludge-exposed fetuses.
When growth-restricted fetuses are followed into adulthood, they recover body mass and have apparently normal hypothalamo-pituitary-ovarian axes, although ovulation rates were reduced by 20% (Rae et al., 2002; Borwick et al., 2003). In the human, girls born small for gestational age (i.e. growth-restricted) may have small ovaries and reduced ovulation rates at puberty (Ibanez et al., 2000, 2002), an effect that is at least partly related to insulin resistance (Ibanez and de Zegher, 2006), but probably not due to a reduction in the fetal follicle pool (de Bruin et al., 2001). The ovaries from our sludge-exposed fetuses are very different: no reduction in ovarian weight, but reduced germ cell and follicle densities and a small but significant increase in the proportion of more advanced follicles. Thus, we can conclude that growth restriction was not a major factor and, therefore, the effects of exposure to sewage sludge are much more likely to be due to chemicals in the sludge. That both the sewage sludge exposure in our study and growth restriction of the sheep fetus (Phillips et al., 2001) have some similar reproductive consequences suggests some commonality in regulatory pathways between the two forms of fetal insult. It is not clear which components of sewage sludge are reaching the maternal and fetal tissues and causing the disturbed fetal development we observed. It is currently impossible to accurately measure many chemicals in the fetal gonad because of the limited tissue quantity available. Therefore, we cannot be confident of actual fetal ovarian exposure to different chemicals in our study. Some tissue EC levels do differ significantly between control and sludge-exposed fetuses (see Introduction), but it is most likely that subtle and complex alterations in the balance between a very wide range of chemicals, probably affecting a variety of body systems and organs, are the key factor in causing the disruption of fetal ovarian development. This is supported by our finding that, although maternal smoking during gestation does not lead to any major changes in fetal liver PAH concentrations, fetuses were nonetheless correctly allocated to smoking and non-smoking groups by discriminant analysis of liver PAH concentrations (Fowler et al., 2008). The sheep in our study were pastured on fields treated with sewage sludge in a way that may have raised exposure above the levels to which sheep are normally exposed under current UK/EC legislation and agricultural guidelines; the treatments were designed to maximize the risk of exposure. It is important to note that our studies are designed to elucidate the effects of prolonged exposure to a real-life cocktail of chemicals and are not designed to determine whether the use of processed human sewage sludge as a fertilizer poses human health risks (under the current best operating practices). With this proviso, we can strongly conclude from our study that long-term exposure to low doses of a complex ‘real-life’ cocktail of environmental chemicals affects the developing fetal ovary, probably starting very early in fetal life, in such a way as to suggest that similar human fetal exposures could reduce the fertility of the resulting women and increase their chances of premature menopause. Future studies should address the genetic and endocrine mechanisms involved in these effects and also establish the critical windows of exposure, as well as determining the fertility of the resulting adults.
Funding The Wellcome Trust (080388) to P.A.F.; NHS Research & Development (1664) to P.A.F.; the Scottish Executive Environment & Rural Affairs Department (302132) to S.M.R.
[ "environmental chemicals", "sewage sludge", "oocyte", "fetal development", "granulosa cell" ]
[ "P", "P", "P", "P", "P" ]
Diabetologia-3-1-1914278
Perturbation of hyaluronan metabolism predisposes patients with type 1 diabetes mellitus to atherosclerosis
Aims/hypothesis Cardiovascular disease contributes to mortality in type 1 diabetes mellitus, but the specific pathophysiological mechanisms remain to be established. We recently showed that the endothelial glycocalyx, a protective layer of proteoglycans covering the endothelium, is severely perturbed in type 1 diabetes, with concomitantly increased plasma levels of hyaluronan and hyaluronidase. In the present study, we evaluated the relationship of hyaluronan and hyaluronidase with carotid intima-media thickness (cIMT), an established surrogate marker for cardiovascular disease. Introduction Macro- and microvascular complications are a major cause of morbidity and mortality in patients with diabetes mellitus. While the macrovascular complications in patients with type 2 diabetes mellitus can partly be attributed to the increased prevalence of classic cardiovascular risk factors such as dyslipidaemia, these risk factors cannot explain the increased prevalence of atherosclerosis in type 1 diabetes mellitus [1–3]. Recent data demonstrated that hyperglycaemia itself may play a causative role; indeed, improved metabolic control is associated with a decreased macrovascular event rate [1–3]. However, the pathophysiology of glucose-associated atherogenesis remains to be elucidated [4, 5]. In recent years, the glycocalyx has emerged as a potential orchestrator of vascular homeostasis, which closely determines the anti-adhesive and barrier properties of the vessel wall [6]. In line with this, we found that the endothelial glycocalyx is adversely affected by both acute and chronic hyperglycaemia in volunteers and type 1 diabetes patients, respectively [7, 8]. Hyaluronan is a principal constituent of the glycocalyx, and removal of the glycocalyx with hyaluronidase has been associated with increased vascular vulnerability towards atherogenic insults [9–11]. In animal models of type 1 diabetes, hyaluronidase activity has been shown to be increased, and this correlated with increased cIMT [12–14]. In line with this, increased accumulation of hyaluronan within the arterial wall in type 1 diabetes patients correlated with vascular changes [15]. We recently found an acute increase in plasma hyaluronan coinciding with glycocalyx perturbation during a normo-insulinaemic–hyperglycaemic clamp in healthy volunteers [7]. Moreover, we observed an inverse correlation of both plasma hyaluronan and plasma hyaluronidase with glycocalyx volume in patients with type 1 diabetes [8]. In this concept, hyperglycaemia-induced perturbation of hyaluronan metabolism, characterised by increased hyaluronidase activity with subsequently increased plasma hyaluronan levels, may indicate increased vascular vulnerability. In the present study, we set out to evaluate the potential relationship between structural changes of the carotid artery and hyaluronan metabolism in patients with uncomplicated type 1 diabetes. Subjects and methods We enrolled non-smoking Europid patients with type 1 diabetes, all without clinical signs of micro- or macrovascular disease. The patients were recruited from the Internal Medicine outpatient clinics of the Academic Medical Center and Onze Lieve Vrouwe Gasthuis in Amsterdam, the Netherlands. The presence of macrovascular disease, defined as ECG abnormalities or a history of cardiac, cerebral or peripheral vascular events, was an exclusion criterion for the study. Moreover, subjects with retinopathy, neuropathy, (micro)albuminuria or hypertension were excluded from participation.
All patients were on multiple daily injections of insulin, with no other concomitant medication use. Matched non-smoking controls (selected specifically for this study) were unrelated volunteers of similar age and sex. Investigations of the two study groups were performed in random order during the study period. Approval for the study was obtained from the Internal Review Board of the Academic Medical Center Amsterdam and all subjects gave written informed consent. The study was carried out in accordance with the principles of the Declaration of Helsinki. All measurements were performed after an overnight fast and in a quiet, air-conditioned room. Blood pressure was measured in triplicate and the last two measurements were averaged to obtain heart rate and systolic and diastolic blood pressure. The latter were averaged to calculate mean blood pressure. At baseline, blood samples were collected for determination of lipids, high-sensitivity C-reactive protein (hsCRP), HbA1c, hyaluronan and hyaluronidase. The markers of hepatic function ASAT and ALAT (aspartate aminotransferase and alanine aminotransferase, respectively) were determined, since chronic liver disease is known to be associated with increased plasma hyaluronan levels [16]. After centrifugation (within 1 h after collection), aliquots were snap-frozen in liquid nitrogen and stored at −80°C. Clinical chemistry Total cholesterol, HDL-cholesterol and triacylglycerol were measured by enzymatic methods (Roche Diagnostics, Basel, Switzerland). LDL-cholesterol was calculated using the Friedewald formula. ALAT and ASAT were measured by a pyridoxal-phosphate activation assay (Roche Diagnostics). HbA1c was measured by HPLC (reagents from Bio-Rad Laboratories BV, the Netherlands) on a Variant II system (Bio-Rad Laboratories). Total plasma hyaluronan and hsCRP levels were determined in duplicate by commercial ELISA (Echelon Biosciences, Salt Lake City, UT, USA and Roche, Bern, Switzerland, respectively). Plasma hyaluronidase levels were determined with a previously described assay [8, 17]. Ultrasound B-mode protocol for cIMT measurement B-mode ultrasound imaging was used to visualise three carotid arterial wall segments, comprising the common carotid, the bulb and the internal carotid of the left and right carotid arteries, according to a previously published protocol [18, 19]. Subjects were scanned in the reclined position following a predetermined, standardised protocol. An Acuson 128 XP/10 v (Siemens, Erlangen, Germany) equipped with an L7 linear array transducer and extended frequency software was used. B-mode images were stored as 4:1 compressed JPEG files on a digital still recorder (SONY DKR-700 P). All scans were performed by the same sonographer. To investigate intra-sonographer reproducibility, ten study subjects were scanned in duplicate. These investigations enabled us to provide robust arterial wall thickness measurements (SD of the means of the paired cIMT measurements 0.05 mm; CV = 9.0%). One image analyst performed the analyses off-line with semi-automated quantitative and qualitative video image analysis software. Both the sonographer and the image analyst were blinded to the clinical status of the subjects. These images provided the cIMT data. Mean cIMT was defined as the mean cIMT of the right and left common carotid, carotid bulb and internal carotid far-wall segments. For a given segment, cIMT was defined as the average of the right and left cIMT measurements.
The per-patient mean of the cIMT values across segments was used for the primary analysis. Statistical analysis Mean values of continuous variables in type 1 diabetes patients and controls were compared using Student’s t test for independent samples. In the case of a skewed distribution the t test was performed on log-transformed values, while medians and interquartile ranges are presented. Chi-square tests were applied for comparison of the distribution of dichotomous data. The correlation between hyaluronan and hyaluronidase was calculated by Spearman’s rank correlation coefficient (two-tailed). In this study our main interest was to find predictors of type 1 diabetes-associated atherosclerosis and vascular dysfunction. The relationship between the dependent variable cIMT on the one hand and other parameters (e.g. plasma hyaluronan) on the other was first explored univariately using linear regression analysis. For clinical variables and variables which revealed statistically significant correlations in the univariate analysis, estimates of cIMT adjusted for confounding were calculated with SPSS version 11.5 (SPSS, Chicago, IL, USA). In addition, several multivariate models were built to explore the effects of age, sex and the statistically significant variables on cIMT. Throughout, a two-tailed p value <0.05 was considered statistically significant. Results Clinical characteristics of the 99 type 1 diabetes subjects and 99 matched controls are listed in Table 1. There was no significant difference between type 1 diabetes subjects and controls with regard to age, sex, BMI, systolic and diastolic blood pressure, liver function tests and cholesterol profile. However, we did observe increased values for HbA1c, heart rate, plasma hsCRP, hyaluronan and hyaluronidase in type 1 diabetes patients. A significant correlation was found between plasma hyaluronan and hyaluronidase activity in type 1 diabetes (r = 0.3, p < 0.05, see Fig. 1a). After exclusion of the highest hyaluronidase activity levels (levels >750 U/ml), the correlation between hyaluronan and hyaluronidase levels was still present. Liver function tests (ASAT and ALAT) were not significantly associated with plasma hyaluronan levels in type 1 diabetes.
Table 1 Demographic and baseline parameters of the study cohort
Parameter | Type 1 diabetes patients | Controls
Number of participants | 99 | 99
Sex (male/female) | 44/55 | 44/55
Age (years) | 32.8 ± 14.8 | 34.9 ± 16.4
Duration of diabetes (years) | 16.4 ± 11.9 | –
Daily insulin dose (IU) | 52.9 ± 20.3 | –
Smoking (yes/no) | 0/99 | 0/99
BMI (kg/m²) | 23.4 ± 3.5 | 23.3 ± 3.6
Systolic blood pressure (mmHg) | 123 ± 17 | 125 ± 20
Diastolic blood pressure (mmHg) | 72 ± 9 | 73 ± 12
Heart rate (beats/min) | 71 ± 11** | 60 ± 14
Total cholesterol (mmol/l) | 4.9 ± 0.9 | 4.9 ± 1.0
LDL-cholesterol (mmol/l) | 2.8 ± 0.7 | 2.9 ± 0.8
HDL-cholesterol (mmol/l) | 1.6 ± 0.5 | 1.5 ± 0.4
Triacylglycerol (mmol/l) | 0.9 (0.5–1.1) | 1.0 (0.5–1.2)
ASAT (U/l) | 24 (21–27) | 25 (21–29)
ALAT (U/l) | 22 (15–28) | 20 (14–24)
HbA1c (%) | 8.3 ± 1.6** | 5.1 ± 0.3
Hyaluronan (ng/ml) | 78 ± 43* | 60 ± 18
Hyaluronidase (U/ml) | 362 ± 23** | 242 ± 13
hsCRP (mg/l) | 2.6 (0.4–2.9)* | 1.1 (0.2–2.0)
cIMT (mm) | 0.61 ± 0.15** | 0.53 ± 0.12
Data are means ± SD, except for triacylglycerol, ASAT, ALAT and hsCRP, which are expressed as median (interquartile range). *p < 0.05, **p < 0.01 type 1 diabetes patients vs controls.
Fig. 1 a Relationship between plasma hyaluronan levels and plasma hyaluronidase activity in type 1 diabetes patients.
b Relationship between plasma hyaluronan levels and cIMT in type 1 diabetes patients. Mean cIMT was increased in the type 1 diabetes group compared with controls (0.61 ± 0.15 vs 0.53 ± 0.12 mm, p < 0.001). In type 1 diabetes subjects, plasma hyaluronan levels (Fig. 1b), age, male sex, duration of diabetes and mean blood pressure were positively correlated with mean cIMT in univariate analysis. No dose-dependent relationship between insulin dose or HbA1c levels and mean cIMT was found in these patients. However, upon multivariate linear regression analysis only age and sex remained significantly associated with cIMT (Table 2).
Table 2 Univariate and multivariate associations of cIMT with various risk factors in patients with type 1 diabetes
Parameter | Univariate β coefficient | p value | Multivariate β coefficient | p value
Female sex | −0.86* | 0.005 | −0.048* | 0.029
Age | 0.007* | 0.001 | 0.07* | 0.0001
Duration of diabetes | 0.008* | 0.001 | −0.110 | 0.340
Daily insulin dose | −0.001 | 0.109 | – | –
BMI | 0.004 | 0.433 | – | –
Mean blood pressure | 0.004* | 0.004 | −0.027 | 0.726
Heart rate | 0.0001 | 0.994 | – | –
Total cholesterol | 0.016 | 0.366 | – | –
LDL-cholesterol | 0.021 | 0.321 | – | –
HDL-cholesterol | 0.026 | 0.416 | – | –
Triacylglycerol | −0.054 | 0.076 | – | –
ASAT | 0.037 | 0.615 | – | –
ALAT | −0.019 | 0.589 | – | –
HbA1c | −0.008 | 0.441 | – | –
Hyaluronan | 0.126* | 0.001 | 0.116 | 0.130
Hyaluronidase | 0.036 | 0.731 | – | –
hsCRP | −0.013 | 0.306 | – | –
*p < 0.05
Discussion In line with expectation, type 1 diabetes patients were characterised by structural changes of the arterial wall. In addition, we observed significant elevations of plasma hyaluronan and hyaluronidase activity levels in type 1 diabetes patients, and hyaluronan correlated with cIMT. The present data imply that disturbances of hyaluronan metabolism may be associated with vascular damage in type 1 diabetes patients. Intima-media thickness in type 1 diabetes We observed a significant increase in cIMT in type 1 diabetes patients without micro- or macrovascular complications compared with controls and confirmed that male sex is associated with an increased cIMT [3]. Despite the cross-sectional design of our study, this finding is compatible with data from the longitudinal DCCT study [3]. LDL-cholesterol was not associated with cIMT progression in the DCCT cohort, underscoring a potential role for hyperglycaemia in diabetic atherogenesis [1, 20, 21]. In contrast to the DCCT findings, we did not find a correlation between cIMT and glycaemic status. This apparent discrepancy may have several explanations. First, a single determination of HbA1c in our study may not be a reliable reflection of glycaemic excursions over the last months to years. In the DCCT, HbA1c was evaluated repeatedly in a prospective cohort of type 1 diabetes patients, all receiving an intensive treatment regimen [1, 3]. Under these circumstances, HbA1c was predictive of cIMT progression over the next 4 years. Second, the older patients in our cohort may have been exposed to longer periods of poorly controlled diabetes, followed by more intensive treatment regimens only in the last few years. As a consequence, their present HbA1c levels may have underestimated long-term glycaemic exposure. Hyaluronan metabolism and type 1 diabetes Hyaluronan is a principal constituent of the endothelial glycocalyx as well as of the extracellular matrix [7–10, 14, 15, 22]. Since the glycocalyx is a principal determinant of vascular permeability for macromolecules (e.g. lipoproteins), glycocalyx loss may contribute to the increased transvascular leakage of lipoproteins in patients with type 1 diabetes [20].
Indeed, exposure of the carotid artery to atherogenic challenges in mice resulted in increased plasma hyaluronan shedding, concomitant loss of endothelial glycocalyx and increased cIMT [23, 24]. In line with this, we observed that patients with uncomplicated type 1 diabetes have a profound reduction of the endothelial glycocalyx, which was associated with increased plasma hyaluronan and hyaluronidase levels [8]. In fact, in the present study we find a positive correlation between plasma hyaluronan and cIMT. Upon multivariate analysis this correlation was lost, most likely due to the fact that age and sex are very closely associated with cIMT, thus attenuating the predictive value of hyaluronan [19]. We did not find a relationship between chronic inflammation, as measured by hsCRP, and plasma hyaluronan in type 1 diabetes. Therefore, further studies are needed to evaluate the effect of other inflammatory markers (e.g. leucocyte count) on hyaluronan metabolism in these patients. Collectively, the present findings imply that hyperglycaemia may elicit alterations in hyaluronan metabolism, which are likely to reflect glycocalyx perturbation. The latter facilitates a wide array of pro-atherogenic effects, including vascular dysfunction and increased permeability of the vessel wall for, for example, lipoproteins. In line with this, aggregation of calcium with hyaluronan (and other subendothelial glycosaminoglycans) could enhance ectopic calcification of the arterial wall, whose intima-media thickness is increased in type 1 diabetes patients [25, 26]. It would be interesting to investigate whether therapeutic intervention aimed at this disturbed hyaluronan metabolism in type 1 diabetes has the potential to reverse the pro-atherogenic state in these patients [27, 28]. Study limitations Several methodological aspects of our study merit caution. We did not determine fasting plasma glucose levels in our type 1 diabetes subjects; therefore, we were only able to relate a marker of chronic hyperglycaemia (HbA1c) to vascular dysfunction. Moreover, the observational nature of our study, combined with the use of surrogate markers of future vascular disease, can be considered a weakness. However, solid evidence exists that changes in arterial wall function are predictive of cardiovascular outcome [29, 30]. In addition, reproducibility of the measurements was excellent, since a single ultrasound machine was used, one experienced sonographer performed all ultrasonography and images were analysed by a single reader. To reduce variability further, each measurement was analysed automatically by the image analysis software. Finally, the endothelial glycocalyx also comprises other components, such as the proteoglycan syndecan and the glycosaminoglycans chondroitin sulphate and heparan sulphate, which all have their own functions [6, 31]. Since hyaluronan and heparan sulphate are important in shear stress-mediated nitric oxide production, which is inextricably entangled with atherosclerosis, further research is needed to explore the role of these other endothelial glycocalyx components in diabetes mellitus-associated vascular dysfunction.
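To illustrate the univariate-then-multivariate regression workflow summarized in Table 2 above, here is a minimal Python sketch using statsmodels in place of SPSS; the DataFrame columns and the simulated relationship are hypothetical stand-ins for the study variables.

```python
# Minimal sketch (hypothetical data): univariate screening followed by a
# multivariate linear model for cIMT, mirroring the Table 2 workflow.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 99
df = pd.DataFrame({
    "age": rng.normal(33, 15, n),
    "female": rng.integers(0, 2, n),          # 1 = female, 0 = male
    "hyaluronan": rng.normal(78, 43, n),      # ng/ml
})
# Hypothetical outcome: cIMT (mm) driven mainly by age, plus noise.
df["cimt"] = 0.4 + 0.005 * df["age"] + rng.normal(0, 0.05, n)

# Univariate screening: regress cIMT on each candidate predictor alone.
for var in ["age", "female", "hyaluronan"]:
    fit = sm.OLS(df["cimt"], sm.add_constant(df[var])).fit()
    print(f"{var}: beta = {fit.params[var]:.4f}, p = {fit.pvalues[var]:.4f}")

# Multivariate model: age, sex and the remaining predictor together.
X = sm.add_constant(df[["age", "female", "hyaluronan"]])
print(sm.OLS(df["cimt"], X).fit().summary())
```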
[ "hyaluronan", "type 1 diabetes mellitus", "hyaluronidase", "intima-media thickness" ]
[ "P", "P", "P", "P" ]
Anal_Bioanal_Chem-3-1-1839866
Biochemical applications of surface-enhanced infrared absorption spectroscopy
An overview is presented of the application of surface-enhanced infrared absorption (SEIRA) spectroscopy to biochemical problems. Use of SEIRA results in high surface sensitivity by enhancing the signal of the adsorbed molecule by approximately two orders of magnitude and has the potential to enable new studies, from fundamental aspects to applied sciences. This report surveys studies of DNA and nucleic acid adsorption to gold surfaces, development of immunoassays, electron transfer between metal electrodes and proteins, and protein–protein interactions. Because signal enhancement in SEIRA relies on surface properties of the nano-structured metal, the biomaterial must be tethered to the metal without hampering its functionality. Because many biochemical reactions proceed vectorially, their functionality depends on proper orientation of the biomaterial. Thus, surface-modification techniques are addressed that enable control of the proper orientation of proteins on the metal surface. Introduction The seminal discovery of surface-enhanced Raman scattering (SERS) in the early 1970s opened the field of surface-enhanced spectroscopy [1, 2]. The phenomenon has subsequently also been observed at longer wavelengths and, ultimately, led to the realization of surface-enhanced infrared absorption spectroscopy (SEIRAS) [3]. Several reports appeared in the 1990s on both practical and theoretical aspects of the phenomenon [4–6]. SERS and SEIRAS have lately received attention in the field of biochemistry and biophysics, because of growing interest in bio-nanotechnology [7]. A typical approach of bio-nanotechnology is the construction of hybrid devices in which biomolecules, e.g. DNA or proteins, are combined with a solid sensing and/or actuating substrate, for example an electrode. With this architecture, the whole bandwidth of biological functions can be addressed by exchange of signals with the sensor/actuator. The concept of the hybrid bio-device is key to the development of biosensors for DNA or proteins, for immunoassays on a chip, etc. This concept is, moreover, valuable not only for technological progress but also for fundamental studies on proteins and other biologically active materials: triggering reactions of the adsorbate via the substrate enables functional studies of biomolecules. A critical issue in the design of such a device is assessment of the interface between biomaterial and substrate, in which the essential signal relay between the two different materials occurs. This signal relay involves only small amounts of molecules (a monolayer at most) at the interface, which are difficult to detect by conventional Raman and IR techniques and difficult to distinguish spectroscopically from the strong background of the bulk. This obstacle is overcome by exploiting the “optical near-field effect” of surface-enhanced spectroscopy, in which the signal enhancement is restricted to the interface. As vibrational techniques, SERS and SEIRA provide a wealth of molecular information at the level of a single chemical bond. SERS and SEIRA are complementary techniques, and each has its own advantages and disadvantages. SERS takes advantage of its enormous enhancement factor (of the order of 10⁶–10¹²). The strongest enhancement occurs as a result of the resonance condition if the biomolecule carries a chromophoric co-factor. The fluorescence which often accompanies this may render detection of the Raman spectrum difficult, however.
Although the latter is not a problem with SEIRA, the surface enhancement is only modest (∼10¹–10³). SEIRAS probes almost all bands of the adsorbed species as long as the vibrational mode includes a dipole component perpendicular to the surface (surface-selection rule) [5]. Although the enhancement factor of SEIRAS is smaller than that of SERS, the cross-section for IR absorption is several orders of magnitude higher than the corresponding Raman cross-section. Thus, the modest enhancement of SEIRAS may be sufficient for many applications. For studying the functionality of proteins by IR spectroscopy, the difference technique has provided an unprecedented amount of molecular information [8–10]. The IR spectrum of a protein is recorded in one state (often the resting state) and subtracted from the IR spectrum of another state (an active state or reaction intermediate). The difference spectrum then contains only the vibrational bands associated with the transition from one state to the other. All of the other vibrational bands are cancelled, which drastically simplifies interpretation of the vibrational changes. As a consequence, the amplitudes of the difference bands are much smaller than the absorption bands of the entire protein (difference bands may be smaller by a factor of 10⁻⁴, depending on the size of the protein). Resolving the small difference bands requires high spectroscopic sensitivity, in particular when surface-enhanced infrared difference spectroscopy (SEIDAS) is performed on a protein monolayer. In this report, we review applications of SEIRA in which biochemical processes were studied. Because SEIRAS was introduced to the field of biomolecules only recently, we consider it worthwhile to start with practical aspects of the method. Experimental considerations Preparation of thin metal-film substrates Preparation of the thin metal film is the critical part of a successful SEIRA experiment. Enhancement by SEIRA is very dependent on the size, shape, and particle density of the selected metal-island film. These properties are easily affected by the experimental conditions during film fabrication, e.g. the rate of film deposition, the type of substrate, the substrate temperature, etc. [11]. SEIRA-active metal islands are usually prepared by high-vacuum evaporation of the metal on to a supporting substrate. Metals such as Au, Ag, Cu, and Pt are vapor-deposited by Ar sputtering, electron-beam heating, or resistive thermal heating of a tungsten basket. The thickness and the rate of deposition are monitored by use of a quartz crystal microbalance (QCM). Control of the deposition rate is essential for optimum enhancement. Slow deposition (0.1 nm s⁻¹ or less for deposition of Au or Ag on Si or CaF2) generally results in greater enhancement [11]. This condition also depends on the type of metal, the type of substrate, and the metal film thickness, however, and must therefore be optimized for the system being used. As the morphology of the metal film affects the extent of surface enhancement, templates such as a periodic particle-array film prepared by nanosphere lithography have been used [12, 13]. This approach not only increases the enhancement factor but also improves the reproducibility of signal enhancement, which will enable quantitative SEIRAS. Although vacuum evaporation is routinely used, the equipment is costly and not readily available. An alternative means of forming a metal thin film is by chemical (electroless) deposition.
Stable SEIRA-active thin films of Au, Pt, Cu, and Ag have been reported on Si or Ge substrates [14–17]. The procedure for preparing a thin Au film on a silicon surface is described below:
1. The surface of the Si is covered with 40% w/v NH4F for a few minutes (typically 1–3 min) to remove the oxide layer and to terminate the surface with hydrogen.
2. After rinsing with water, a freshly prepared 1:1:1 mixture of (a) 0.03 mol L⁻¹ NaAuCl4, (b) 0.3 mol L⁻¹ Na2SO3 + 0.1 mol L⁻¹ Na2S2O3 + 0.1 mol L⁻¹ NH4Cl, and (c) 2% w/v HF is put on the Si surface for 60–90 s.
3. Although a shiny Au film is formed, the Au surface may still be contaminated with thio compounds from the plating chemicals. These are removed by electrochemical cycling of the potential between 0.1 and 1.4 V in 0.1 mol L⁻¹ H2SO4 until the cyclic voltammogram of polycrystalline gold appears (broad oxidation peak above 1.1 V and a sharp reduction peak at 0.9 V relative to the SCE). One can, instead, apply a dc voltage of +1.5 V between the Au film and a counter electrode (e.g. a Pt wire) for ca. 1 min.
Atomic force microscope (AFM) images of the chemically deposited Au film reveal an island structure similar to that of the vacuum-evaporated Au films, albeit with a somewhat larger average diameter of the metal islands [16, 17]. Another advantage of the chemical method is the stronger adhesion of the deposited metal layer to the substrate. This property helps significantly when long-term stability of the metal film is required, as is typical for the preparation of biomimetic devices (vide infra). An interesting option is the use of colloidal gold nanoparticles [18, 19]. Colloidal gold is prepared by reducing tetrachloroauric(III) acid with sodium citrate. It is also commercially available in different particle sizes; typically, 10–50 nm colloidal gold is chosen. A major difference from the metal thin-film method is that the sample is attached to the colloidal gold in suspension before measurement. The colloidal gold is then collected on an optical substrate by filtration or by centrifugation. The sample/colloidal gold is measured either in the transmission configuration (with an IR card) or dried on an ATR prism. Removal of the metal film from the supporting substrate The metal film prepared by vacuum evaporation is usually poorly adhesive; it is easily wiped off with ethanol or acetone. A metal film prepared by the chemical deposition method adheres much more strongly, and hence cannot be removed by wiping or even by polishing with aluminium powder. Such metal films can be dissolved by immersion in a boiling 1:1:1 mixture of HCl (32%), H2O2 (30%), and H2O. Geometry and optical configuration The most widespread optical arrangement employs the so-called metal underlayer configuration (sample/metal film/supporting substrate; Fig. 1a). IR-transparent materials are usually used as supporting substrates. Highly refractive materials, for example Si, Ge, and ZnSe, are suitable for internal reflection optical geometry (Fig. 1a(ii)), whereas CaF2 and BaF2 are more suitable for transmission geometry (Fig. 1a(i)). For the former, relatively thick metal films (in the range of ten to several hundred nanometers) can be used, whereas for the latter geometry the thickness of the metal film should be kept to less than 10 nm, because the island structure starts to merge at greater thicknesses. As a result, the metal film is no longer transmissive [11, 20]. Fig. 1 Schematic diagram of the optical configuration of SEIRAS. (a) Metal underlayer configuration.
(b) Metal overlayer configuration. The arrows denote the optical pathway of the IR beam in (i) transmission, (ii) attenuated total reflection, and (iii) external reflection geometry. (c) Spectro-electrochemical cell for SEIRA spectroscopy For internal reflection geometry (Kretschmann attenuated total-reflection geometry), half-cylindrical, half-spherical, or trapezoidal prisms are commonly used. The former two types are advantageous for optical reasons, because the position of the focus on the metal film is not affected by changes in the angle of incidence of the IR beam. Si is commonly used as reflection element, because of its high chemical stability. One disadvantage is that the available spectral range is then limited to >1000 cm⁻¹. Although use of Ge enables use of a wider spectral window (>700 cm⁻¹), it is not suitable for electrochemical experiments at potentials above 0.0 V (relative to the SCE) in acidic solution (pH < 4.0), in which it starts to dissolve. Although Ge matches well with Ag or Cu, combination with Au may cause a decrease in the enhancement factor, because of the formation of a Ge–Au alloy at the boundary which strongly affects the morphology of the island structure of the film. Non-IR-transparent materials, for example glass, polymers, carbon, and metal, can be used as substrates only with external reflection, which uses the metal overlayer configuration (metal particle/sample/supporting substrate, Fig. 1b). In this configuration the thin metal layer (ca. 4–7 nm) is directly evaporated on to the sample/support surface. It should be noted that poorly reflective material is optically favorable, because of reduced distortion of band shape in the corresponding SEIRA spectrum [21, 22]. This configuration is advantageous for detection in trace analysis of a sample adsorbed on glass or a polymer [22]. Note, however, that external reflection IR spectroscopy is not very useful for functional studies of biomaterial, because of the required presence of bulk water, which strongly absorbs the IR probe beam. Chemical modification of the metal film surface It is crucial to chemically modify the bare metal if direct contact with the metal surface disturbs the structure and function of the biomaterial. Strong interaction forces may even induce denaturation of proteins, because their secondary structure is very dependent on weak hydrogen-bonds between amino acids. The use of a chemically modified electrode (CME) is an approach used to tackle these obstacles [23]. The surface of the metal electrode is modified by heterobifunctional crosslinkers that comprise (i) a thiol group that is spontaneously adsorbed by the metal electrode, (ii) a spacer group with different lengths of alkyl chain, and (iii) a functional headgroup pointing toward the bulk solution. The CME surface can be conveniently prepared by the so-called “SAM (self-assembled monolayer) method” in which the SEIRA-active metal substrate is immersed in a solution containing the crosslinker molecule (normally for between 10 min and 2 h, depending on the size of the molecule). The SAM of the crosslinker is formed spontaneously by quasi-covalent bonding between the sulfur and the metal surface. The substrate is then thoroughly rinsed with the solvent so that a SAM of the crosslinker remains on the metal substrate. Adsorption of the protein by the chemically modified electrode is a suitable model for protein–protein interaction.
By varying the properties of the functional headgroup it is possible to mimic the specific conditions of the substrate during the protein–protein or protein–ligand interaction process. For example, use of a carboxyl headgroup mimics the side-chain terminus of Asp or Glu. The negatively charged carboxyl headgroup can specifically interact with positively charged amino acids, for example Lys, His, and Arg residues, on the protein surface. Another approach is to exploit the specific interaction of Ni2+-chelated nitrilotriacetic acid (NTA) with a sequence of histidine residues genetically introduced into the target protein (His-tag) [24, 25]. Details of this approach are given below. Applications Studies of nucleic acids and DNA SEIRAS studies on nucleic acid bases have provided information about the adsorption and orientation of thymine on silver island film [26], and of cytosine [27] and uracil [28] on gold electrodes. These experiments have been expanded to studies of the hydrogen-bonding interaction between complementary bases (adenine (A)–thymine (T) and guanine (G)–cytosine (C)) as model systems of DNA and RNA. Sato et al. used SEIRAS to study base-pairing between thymidine and 6-amino-8-purinethiol, a thio-derivatized adenine, immobilized on a gold electrode surface [29]. They observed a characteristic band at 1571 cm⁻¹ of thymidine adsorbed on the adenine-modified surface by hydrogen-bonding. This base-pairing occurs at potentials higher than 0.1 V (relative to the SCE), at which the N1 of adenine is deprotonated and orientated toward the solution. This result implies that hydrogen-bonding between the base pairs can be controlled by the applied electrode potential. As an interesting extension of the method, SEIRAS has been used as a diagnostic tool in cancer research [30, 31]. Although the reported enhancement factor of 3–5 is only modest [31], the enhanced bands are impossible to observe with conventional IR spectroscopy but are crucial for determining the properties of nucleic acids of tumor tissues and their interaction with anticancer drugs. Immunoassays based on SEIRAS There is substantial interest in the development of biosensors in which the biological component is associated with a transducer to provide a means of detecting changes in the biological component. Most current biosensors consist of an enzyme or an antibody which interacts with the substrate or the respective antigen. The enzymes or antibodies are usually immobilized on a platform and the interaction between the enzyme and the substrate (or the antibody and the antigen) is monitored. Biosensors based on surface plasmon resonance (SPR) have received considerable attention in recent years. In this method, the biomaterial is immobilized on the surface of a metal-film-covered optical prism (ATR–Kretschmann configuration), i.e. the same optical principle is applied as with SEIRAS. The advantage of SEIRAS compared with SPR is that the chemical information derived from the vibrational spectrum identifies the nature of the adsorbed species, yet the sensitivity of SEIRAS is comparable with that of SPR. The sensitivity of SPR is poor when the refractive index of the adsorbed species is close to that of the solvent, as it is for small organic molecules dissolved in an organic solvent. SEIRAS detects the individual vibrational modes of functional groups of the adsorbate, and the S/N ratio does not depend on the type of solvent if the vibrational bands of the solvent do not overlap with those of the adsorbate.
SEIRAS can, moreover, be used to detect even minute structural changes that occur in the adsorbed layer during the adsorption process. Brown et al. have applied SEIRAS to biosensor analysis for determination of antibody–antigen interactions [32]. In this experiment, antibodies to Salmonella (anti-SAL) were immobilized on a gold surface deposited on a silicon wafer. After taking the spectrum of anti-SAL, this “dipstick” sensor was immersed in a solution containing the antigen (SAL). The SEIRA spectrum of anti-SAL without SAL has characteristic antibody peaks at 1085 and 990 cm⁻¹. Addition of SAL results in an additional band at 1045 cm⁻¹, which was assigned to the P–O stretching vibration of the phospholipid in the cell wall and which indicates the presence of SAL in the solution. Initially, the analyte adsorbed by the “dipstick” sensor was probed by external reflection SEIRA spectroscopy. Later, the authors developed a new method for SEIRAS in which they used a colloidal gold surface as adsorption platform [19]. The colloidal gold–antibody–antigen complexes are left to assemble in solution, taking advantage of the tight binding of the antibodies to colloidal gold, which stabilizes the colloid solution. When the sample has been assembled the complexes are collected, either by filtration on a porous polyethylene membrane [19] or by centrifugation on an ATR substrate [18], and are then readily mounted in the FTIR spectrometer. Both the “dipstick” and the “colloidal gold” approaches furnish similar spectral quality, although the enhancement of the latter approach is not mentioned. The colloidal gold system is a more convenient and less expensive means of preparation than coating by sputtering. Filtration of the colloidal particles on disposable IR cards is also a rapid, easy, and cost-effective means of producing a SEIRA substrate which operates in transmission mode and does not require complicated optical arrangements. SEIRAS to probe protein functionality The metal used for surface enhancement can also be used as an electrode. The enzymatic reactions of many biological systems, especially those of membrane proteins, are driven by an electrochemical gradient across the cell membrane. Such a system can be artificially reproduced on an electrode surface to mimic the physiological properties of a biological membrane. Electrochemically induced oxidation and reduction of a monolayer of cytochrome c (Cc), a protein that mediates single-electron transfer between the integral membrane protein complexes of the respiratory chain, is the model system for electron transfer within and between proteins [33]. Electrons are directly injected into and/or withdrawn from Cc after proper contact has been established with the electrode by means of a suitable surface modifier. Proteins can be attached to such a chemically modified electrode (CME) by electrostatic attraction or covalent interaction. Figure 2 depicts potential-induced difference spectra of Cc that has been electrostatically adsorbed by different types of CME. The positive and negative peaks are indicative of conformational changes associated with the transition from the fully oxidized to the fully reduced state. Details of band assignment have been published [14]. The difference spectra reveal subtle changes of the secondary structure induced by rearrangement of the hydrogen-bonded network among the internal amino acid side-chains surrounding the heme chromophore.
Note that the peak positions of the observed Cc bands do not depend on the type of CME layer, whereas the relative intensities are strongly dependent on the CME. The former suggests that the internal conformational changes of Cc are not affected by interaction with the modifiers. The latter observation is attributed to the different surface structure, i.e. the orientation or position of the Cc relative to the CME underlayer [14]. Fig. 2 Potential-induced redox difference spectra of a cytochrome c monolayer adsorbed on a variety of chemically modified electrodes: (a) mercaptopropionic acid, (b) mercaptoethanol, (c) cysteine, (d) dithiodipyridine. Minor spectral contributions from the surface modifier have been subtracted to selectively reveal the vibrational spectra of the adsorbed cytochrome c [14] Studies of molecular and protein recognition The high sensitivity of SEIRAS is particularly useful when the technique is applied to studies of membrane proteins. Handling of membrane proteins usually requires great care, because of their tendency to denature as soon as they are separated from the native lipid bilayer. The large size of the proteins also makes it difficult to control their orientation. A successful strategy for immobilization of membrane proteins is to attach the purified protein, by using the selective affinity of a genetically introduced histidine tag (His-tag), to a nickel-chelating nitrilotriacetic acid (Ni-NTA) monolayer (SAM) self-assembled on a chemically modified gold surface. The lipid environment of the surface-anchored membrane protein is subsequently restored by in-situ dialysis of the detergent around the amphiphilic membrane protein. This approach results in orientated immobilization and reconstitution of the native matrix, which enhances the stability of the membrane protein and restores full functionality. The success of the method has been demonstrated by use of cytochrome c oxidase (CcO) [24, 25]. Recombinant CcO (solubilized by detergent) from Rhodobacter sphaeroides was immobilized on the stepwise-formed Ni-NTA SAM bound to the gold surface. It is advantageous to follow each surface-modification step by in-situ SEIRAS, which ensures qualitative and quantitative control of each reaction step. The protein is subsequently embedded in the lipid layer by removing the detergent by addition of Bio-Beads. SEIRA spectra reveal the increase of several lipid bands during the reconstitution without any dissociation of protein from the surface. Atomic-force microscopy (AFM) provides direct evidence of the formation of the lipid bilayer. Spots of size ∼7 nm appear in the AFM image; this is in accordance with the diameter of a single CcO molecule [34]. Notably, the same approach can be used to bind CcO to a rough Ag surface. Integrity of the native structures of the heme cofactors (heme a and heme a3) was shown by surface-enhanced resonance Raman spectroscopy (SERRS) [35, 36]. Full functionality of the lipid-reconstituted CcO electrode is demonstrated by the fact that a catalytic current resulting from reduction of oxygen, the native property of CcO, is observed when an electron is donated by the adsorbed Cc. In such an experiment the orientation of the CcO is controlled by the position of the His-tag on either side of the membrane surface of CcO. With molecular genetic techniques, the His-tag is introduced either in the C-terminal tail of subunit I or in the C-terminus of subunit II.
By binding CcO through the respective His-tag, the former orients CcO such that the binding site of Cc is exposed to the bulk solution. The latter orientation obstructs the binding site for Cc, because of the barrier imposed by the lipid bilayer. Indeed, SEIRAS, in combination with electrochemistry, reveals that Cc binds and initiates the catalytic reaction of CcO only when the Cc binding site faces the bulk aqueous phase and that Cc does not interact with the oppositely oriented CcO [24, 25]. Interestingly, the electrochemically induced redox difference surface-enhanced infrared difference absorption (SEIDA) spectrum of the Cc–CcO complex is remarkably similar to that of Cc adsorbed on a carboxy-terminated SAM (mercaptoundecanoic acid, MUA) (Fig. 3c and d). Note that SEIDA spectra of Cc are sensitive to the properties of the terminal headgroup of the SAM with which it interacts [14]. The similarity of the spectra of Cc–CcO and Cc–MUA suggests a preponderance of carboxylate residues at the physiological docking site of Cc, i.e., those of the side-chains of aspartate or glutamate units. Ferguson-Miller et al. have, indeed, identified residues Glu148, Glu157, Asp195, and Asp214 (all in subunit II of CcO from R. sphaeroides) as the major interaction partners of Cc [37]. Fig. 3 Potential-induced IR difference spectra of: (a) cytochrome c bound to a monolayer of cytochrome c oxidase tethered to a gold surface via His-tag/Ni-NTA interaction; (b) tethered monolayer of cytochrome c oxidase alone; (c) difference between (a) and (b), which recovers the vibrational contribution from cytochrome c only; (d) cytochrome c adsorbed to a monolayer of mercaptoundecanoic acid The same strategy has recently been applied to immobilization of photosystem 2 (PS2) on an Au surface with the purpose of developing a semi-artificial device for production of hydrogen by photosynthetic oxidation of water [38]. In this report, SEIRA was used to determine the adsorption kinetics of PS2 on the Ni-NTA-modified Au surface. On illumination of the surface-bound PS2 a photocurrent was generated. The action spectrum corresponds to the absorption spectrum of PS2, indicating that the observed photocurrent is caused by the photoreaction of PS2. This methodology is a general approach for immobilization of proteins, because the introduction of affinity tags is routine with modern genetic techniques. Other tags beyond the His-tag may also be an option. Thus, orientational control of protein adsorption on a solid surface can be conveniently achieved. An oriented sample is mandatory when the vectorial function of membrane proteins is addressed. Many membrane proteins are asymmetric in their functionality, because they translocate ions or solutes preferentially in one direction or because their stimulant, e.g. ligand, binding partner protein, membrane potential, etc., affects the protein from one side only. Summary and outlook Despite the accomplishments reported in this article, the study of biomaterials by SEIRA is still in its infancy. Potential-induced difference spectroscopy using SEIRA is a promising possibility for study of the functionality of proteins. The stimulus in such studies is not limited to the electric trigger, as reviewed here, but can also be light illumination, temperature jump, or chemical induction. Many protein functions can be addressed by these means and the mechanism of action of these molecular machines may be resolved down to the level of a single bond.
However, the sensitivity of SEIDAS may not yet be sufficient for functional difference spectroscopy on every protein. As the enhancement factor for SEIRA is modest compared with that for SERS, optimization of the enhancement by proper design of the metal surface is crucial. Although enhancement factors are usually in the range 10–100, factors as high as 1000 have sometimes been reported [16]. To obtain the latter enhancement reproducibly, however, the development of methods for preparing homogeneous metal-island films under strict topological control will be mandatory.
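For illustration, the data handling behind the difference technique discussed in this review is simple enough to sketch in a few lines. The following Python fragment is a minimal sketch only: the file names, the shared wavenumber axis and the normalization on a band assumed to be invariant are our own assumptions, not part of any published protocol.

```python
import numpy as np

# Difference technique: subtract the spectrum of the resting (e.g. oxidized)
# state from that of the activated (e.g. reduced) state, so that only bands
# of groups that change during the transition remain.
wavenumber, a_rest = np.loadtxt("oxidized_state.csv", delimiter=",", unpack=True)
_, a_active = np.loadtxt("reduced_state.csv", delimiter=",", unpack=True)

# Normalize on a region assumed not to change (here: around the amide I
# maximum) to compensate for drifts in the probed amount of protein.
band = (wavenumber > 1640) & (wavenumber < 1660)
a_active *= a_rest[band].max() / a_active[band].max()

difference = a_active - a_rest   # positive: band appears; negative: disappears
np.savetxt("difference_spectrum.csv",
           np.column_stack([wavenumber, difference]), delimiter=",")
```

Because the difference bands may be four orders of magnitude smaller than the absolute protein absorbance, in practice many such spectra are averaged before the subtraction becomes meaningful.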
[ "electron transfer", "protein", "self-assembled monolayer", "membrane", "ftir" ]
[ "P", "P", "P", "P", "P" ]
Diabetologia-3-1-1871609
The role of physical activity in the management of impaired glucose tolerance: a systematic review
Although physical activity is widely reported to reduce the risk of type 2 diabetes in individuals with prediabetes, few studies have examined this issue independently of other lifestyle modifications. The aim of this review is to conduct a systematic review of controlled trials to determine the independent effect of exercise on glucose levels and risk of type 2 diabetes in people with prediabetes (IGT and/or IFG). A detailed search of MEDLINE (1966–2006) and EMBASE (1980–2006) found 279 potentially relevant studies, eight of which met the inclusion criteria for this review. All eight studies were controlled trials in individuals with impaired glucose tolerance. Seven studies used a multi-component lifestyle intervention that included exercise, diet and weight loss goals and one used a structured exercise training intervention. Four studies used the incidence of diabetes over the course of the study as an outcome variable and four relied on 2-h plasma glucose as an outcome measure. In the four studies that measured the incidence of diabetes as an outcome, the risk of diabetes was reduced by approximately 50% (range 42–63%); as these studies reported only small changes in physical activity levels, the reduced risk of diabetes is likely to be attributable to factors other than physical activity. In the remaining four studies, only one reported significant improvements in 2-h plasma glucose even though all but one reported small to moderate increases in maximal oxygen uptake. These results indicate that the contribution of physical activity independent of dietary or weight loss changes to the prevention of type 2 diabetes in people with prediabetes is equivocal. Introduction Given the growing prevalence of diabetes and the high economic cost of treating the condition and its comorbidities, it is important to find effective ways of targeting those who are most at risk of developing the disease [1]. Prediabetes is the collective term for people with IGT and/or IFG [2]. Prediabetes is associated with an increased risk of development of type 2 diabetes [3] and cardiovascular disease [4–6]. There is good evidence from cross-sectional and longitudinal studies for a link between levels of physical activity and the risk of type 2 diabetes [7–9]. However, evidence from intervention studies in high-risk populations is limited, making it difficult to quantify the effectiveness of physical activity in reducing the risk of type 2 diabetes in individuals with prediabetes. Lifestyle intervention studies that have encouraged weight loss through a combination of dietary change and increased physical activity have reduced the risk of type 2 diabetes in individuals with IGT [10–14]. However, because physical activity was not usually analysed independently of other variables, such as weight loss, it is difficult to determine the effectiveness of physical activity at protecting against the risk of diabetes in individuals with prediabetes. Therefore, the aim of this systematic review is to establish the effectiveness of physical activity independent of other variables at reducing the risk of diabetes or improving glucose parameters in people with prediabetes. Materials and methods Search strategy MEDLINE (1966 to February week 4, 2006) and EMBASE (1980 to week 8, 2006) were searched for articles examining the effect of an exercise or lifestyle intervention on individuals with prediabetes. The search was carried out using Medical Subject Headings (MeSH) and by searching titles and abstracts for relevant words.
For example, studies including individuals with prediabetes were found by using the MeSH ‘prediabetic state,’ ‘insulin resistance,’ ‘glucose intolerance’ and ‘diabetes mellitus’ (subheading ‘prevention and control’), and by searching titles and abstracts for ‘prediabetes,’ ‘impaired glucose tolerance,’ ‘IGT,’ ‘impaired fasting glucose’ and ‘IFG.’ Studies that included an exercise intervention were found by using the MeSH ‘lifestyle,’ ‘sports,’ ‘exercise therapy’ and ‘physical fitness,’ and by searching titles and abstracts for ‘exercise,’ ‘physical activity,’ ‘physical fitness,’ ‘resistance training,’ ‘strength training,’ ‘circuit training,’ ‘endurance training’ and ‘aerobic training.’ In addition, the reference lists of relevant published original articles and reviews were hand-searched. One reviewer (T. Yates) performed the electronic and hand-searches and reviewed the results. Studies that clearly did not meet the inclusion criteria were rejected during the initial review. Where uncertainty existed, the full text of the article was obtained and reviewed. Two reviewers (T. Yates and K. Khunti) independently assessed all potentially relevant studies and performed data extraction. Disagreement was resolved by discussion and, where necessary, third party adjudication. Subjects Participants were adults (age ≥18 years) diagnosed with prediabetes. Prediabetes was defined as IGT and/or IFG using one of the sets of criteria previously recommended by the WHO [15, 16] or the American Diabetes Association (ADA) [17, 18]. Studies that defined IGT or IFG using other criteria were included if the mean value of the participants’ plasma glucose fell within the range of IGT or IFG as defined by the WHO or ADA criteria (2-h plasma glucose ≥7.8 mmol/l and <11.1 mmol/l, and fasting glucose <7.8 mmol/l for IGT; 2-h plasma glucose <7.8 mmol/l, and fasting glucose ≥5.6 mmol/l and <7.0 mmol/l for IFG). Interventions Interventions that included an exercise programme were included. ‘Exercise programme’ was taken to mean any intervention that actively promoted and supported physical activity or a structured exercise training regimen. Studies that only provided individuals with brief written or verbal physical activity advice were excluded. Studies investigating the effect of a single or acute episode of exercise were also excluded. Outcome measures Only studies with an outcome measure of physical activity and a relevant clinical measure were included. A relevant clinical measure was defined as progression to diabetes or a suitable measure of plasma glucose (2-h plasma glucose for IGT, or fasting glucose for IFG). Type of study Randomised and non-randomised controlled trials were included. Analysis As the heterogeneity of the type of exercise interventions and outcome measures did not lend itself to quantitative methods of analysis, a systematic narrative review was undertaken. Baseline and follow-up exercise, body mass and glucose parameters were reported using mean±SEM, or median (interquartile range). Results reporting the SD or the 95% CI were converted to SEM using the formulas SEM = SD/√n and SEM = (upper limit − lower limit)/(2t), respectively (where n is the sample size and t is the t distribution value for a 95% CI). Where the SEM, SD or 95% CI for the change from baseline to follow-up was not reported, only the mean value is reported because of the potential error involved in calculating SEM for this figure. When available, the relative risk of diabetes in the intervention group compared with the control group was also reported.
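As a worked illustration of the conversions just described, the short Python sketch below implements SEM = SD/√n and the back-calculation of SEM from a 95% CI; the function names and the example figures are ours and are not taken from any included study.

```python
from math import sqrt
from scipy.stats import t as t_dist

def sem_from_sd(sd: float, n: int) -> float:
    """SEM = SD / sqrt(n)."""
    return sd / sqrt(n)

def sem_from_ci(lower: float, upper: float, n: int) -> float:
    """SEM = (upper - lower) / (2 * t), with t the two-sided 95% t value."""
    t_val = t_dist.ppf(0.975, df=n - 1)
    return (upper - lower) / (2 * t_val)

# Hypothetical group of n = 30 with SD 1.2 mmol/l:
print(sem_from_sd(1.2, 30))        # ~0.22
# The same group reporting a 95% CI of (8.4, 9.4) mmol/l:
print(sem_from_ci(8.4, 9.4, 30))   # ~0.24
```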
Results The search produced 307 hits, from which 279 potential studies were identified; of these, eight trials met the criteria for inclusion (see Fig. 1). Study details for the eight included studies are shown in Table 1; the main outcomes are presented in Table 2. Fig. 1 Flow diagram of the literature search. Duplicates: where several studies reported on the same trial and cohort, only one published study for each trial was included for the purposes of this review; the included studies were those that reported on the cohort as a whole, had relevant follow-up measures and included the most recently published data. Where a relevant study was identified in more than one publication, the study was included only once.
Table 1 Characteristics of included studies
- Lindström et al. 2003 [21]. Location/name: Finland, Finnish Diabetes Prevention Study. Design: RCT, 3 years. Subjects (men/women): 522 (172/350). Inclusion: IGT (WHO criteria, 1985), age 40–64 years, BMI ≥25 kg/m2. Intervention: exercise and diet. Diet: weight reduction through a healthy diet. Exercise: participants individually encouraged to increase their overall level of physical activity; circuit-type exercise sessions also offered. Physical activity measurement: self-report (Kuopio 12-month leisure time physical activity questionnaire).
- Knowler et al. 2002 [12]. Location/name: USA, Diabetes Prevention Research Group. Design: RCT, average follow-up 2.8 (range 1.8–4.6) years. Subjects (men/women): 2,161 (680/1481)a. Inclusion: IGT (ADA criteria, 1997), age ≥25 years, BMI ≥24 kg/m2 (≥22 kg/m2 if Asian), fasting plasma glucose ≥5.3 mmol/l. Intervention: exercise and diet. Diet: weight reduction through a healthy, low-energy, low-fat diet. Exercise: participants individually encouraged to accumulate at least 150 min/week of moderate intensity exercise. Physical activity measurement: self-report (Modified Activity Questionnaire and activity log).
- Pan et al. 1997 [10]. Location/name: China, The Da Qing IGT and Diabetes Study. Design: RCT, 6 years. Subjects (men/women): 530 (283/247). Inclusion: IGT (WHO criteria, 1985), age ≥25 years. Intervention: 1. exercise; 2. exercise and diet. Diet: weight maintenance for those with a BMI <25 kg/m2 through a healthy energy-balanced diet; weight reduction for those with a BMI ≥25 kg/m2 through reduced energy intake. Exercise: participants encouraged to increase their physical activity to 1 unit per dayb; those aged <50 years who were able to were encouraged to accumulate 2 units per day. Physical activity measurement: self-report (type not reported).
- Eriksson and Lindgärde 1991 [24]. Location/name: Sweden, The 6-year Malmö feasibility study. Design: non-randomised controlled trial, 5 years. Subjects (men/women): 260 (260/0). Inclusion: IGT (2-h post-challenge glucose 7–11 mmol/l, and fasting plasma glucose <7.8 mmol/l). Intervention: exercise and diet. Diet: healthy dietary advice given. Exercise: participants encouraged to increase their physical activity levels; option of training in organised groups for a 6-month period in the first year. Physical activity measurement: VO2max (heart rate response to submaximal workload using bicycle ergometer).
- Oldroyd et al. 2006 [19]. Location: England. Design: RCT, 2 years. Subjects (men/women): 69 (39/30). Inclusion: IGT (WHO criteria, 1985). Intervention: exercise and diet. Diet: weight reduction through a healthy, low-energy, low-fat diet. Exercise: participants encouraged to undertake 20–30 min of aerobic activity 2–3 days/week; all participants given a discount at local gyms. Physical activity measurement: cardiovascular fitness (shuttle test) and self-report (type not reported).
- Mensink et al. 2003 [22]. Location/name: Netherlands, Study on lifestyle-intervention and impaired glucose tolerance Maastricht. Design: RCT, 2 years. Subjects (men/women): 114 (64/50). Inclusion: IGT (2-h post-challenge glucose 7.8–12.5 mmol/l, and fasting plasma glucose <7.8 mmol/l), age >40 years, BMI ≥25.0 kg/m2. Intervention: exercise and diet. Diet: weight reduction through a healthy, low-energy, low-fat diet. Exercise: participants encouraged through goal setting to undertake 30 min of moderate intensity exercise per day and given use of free exercise classes. Physical activity measurement: VO2max (incremental exhaustive test using bicycle ergometer).
- Lindahl et al. 1999 [23]. Location: Sweden. Design: RCT, 1 year. Subjects (men/women): 186 (69/117). Inclusion: IGT (WHO criteria, 1985), age 30–60 years, BMI ≥27 kg/m2. Intervention: exercise and diet. Diet: weight reduction through a healthy, low-energy, low-fat diet. Exercise: participants encouraged to increase their physical activity; supervised exercise sessions available in the first month. Physical activity measurement: VO2max (heart rate response to submaximal and maximal workload using bicycle ergometer).
- Carr et al. 2005 [20]. Location: USA. Design: RCT, 2 years. Subjects (men/women): 62 (29/33). Inclusion: IGT (WHO criteria, 1998). Intervention: structured exercise and diet. Diet: participants encouraged to follow the energy-balanced American Heart Foundation Step 2 diet. Exercise: walking/jogging at >70% of heart rate reserve for 1 h on 3 days/week. Physical activity measurement: VO2max (modified branching treadmill protocol).
VO2max, maximal oxygen consumption. a Data for lifestyle and control groups only. b 1 unit = 30 min of mild exercise, or 20 min of moderate exercise, or 10 min of strenuous exercise, or 5 min of very strenuous exercise.
Table 2 Baseline values and change in main outcomes (L, lifestyle group; E, exercise group; D&E, diet and exercise group; C, control group)
- Lindström et al. 2003 [21]. Self-reported physical activity (min/week): baseline L 156 (62 to 288), C 169 (65 to 352); change L 61 (−33 to 168)b, C 6 (−91 to 104). Body mass (kg): baseline L 86.7 ± 0.8, C 85.5 ± 0.9; change at 3 years L −3.5 ± 0.3b, C −0.9 ± 0.4. 2-h plasma glucose (mmol/l): baseline L 8.9 ± 0.1, C 8.9 ± 0.1; change at 1 year L −0.9 ± 0.1b, C −0.3 ± 0.1; change at 3 years L −0.5 ± 0.2, C −0.1 ± 0.2. Fasting glucose (mmol/l): baseline L 6.1 ± 0.05, C 6.2 ± 0.04; change at 1 year L −0.2 ± 0.04, C 0.0 ± 0.04; change at 3 years L 0.0 ± 0.05, C 0.1 ± 0.05. Relative risk of diabetes vs control: 0.4 (0.3–0.7).
- Knowler et al. 2002 [12]. Self-reported physical activity (MET-h/week): baseline L 15.5 ± 0.7, C 17.0 ± 0.9; change L 6b,c, C 1c. Body mass (kg): baseline L 94.1 ± 0.6, C 94.3 ± 0.6; change L −5.6b, C −0.1. 2-h plasma glucose (mmol/l): baseline L 9.1 ± 0.03, C 9.1 ± 0.03; change NR. Fasting glucose (mmol/l): baseline L 5.9 ± 0.01, C 5.9 ± 0.02; change NR. Relative risk of diabetes vs control: 0.4 (0.3–0.5).
- Pan et al. 1997 [10]. Self-reported physical activity (unitsd): baseline E 3.4 ± 0.2, D&E 3.1 ± 0.2, C 2.4 ± 0.2; change E 0.6, D&E 0.8, C 0.1. Body mass (kg): baseline NR; change E −0.4, D&E −2.5, C −1.0. 2-h plasma glucose (mmol/l): baseline E 8.8 ± 0.1, D&E 9.1 ± 0.1, C 9.0 ± 0.1; change E 1.7, D&E 1.7, C 4.0. Fasting glucose (mmol/l): baseline E 5.6 ± 0.1, D&E 5.7 ± 0.1, C 5.5 ± 0.1; change E 1.3, D&E 1.5, C 2.1. Relative risk of diabetes vs control: E 0.5 (0.2–0.9), D&E 0.6 (0.3–0.9).
- Eriksson and Lindgärde 1991 [24]. VO2max (l/min): baseline L 2.46 ± 0.04, C 2.29 ± 0.1; change L 0.20b, C −0.05. Body mass (kg): baseline L 82c, C 83c; change L −3.3b, C 0.2. 2-h plasma glucose (mmol/l): baseline L 8.2 ± 0.1, C 8.3 ± 0.1; change L −1.1b, C 0.1c. Fasting glucose: NR. Relative risk of diabetes vs control: 0.4 (0.2–0.7).
- Oldroyd et al. 2006 [19]. Self-reported physical activity: NR. Body mass (kg): baseline L 85.3 ± 2.9, C 85.5 ± 2.5; change L −1.8 ± 1.1b, C 1.5 ± 0.5. 2-h plasma glucose (mmol/l): baseline L 9.2 ± 0.1, C 9.2 ± 0.2; change at 1 year L −0.6 ± 0.3, C 0.2 ± 0.3; change at 2 years L 0.2 ± 0.3, C −0.5 ± 0.4. Fasting glucose (mmol/l): baseline L 6.1 ± 0.1, C 6.2 ± 0.2; change at 1 year L 0.0 ± 0.1, C 0.1 ± 0.2; change at 2 years L 0.3 ± 0.1, C 0.1 ± 0.2. Relative risk: N/A.
- Mensink et al. 2003 [22]. VO2max (l/min): baseline L 2.15 ± 0.1, C 2.13 ± 0.1; change L 0.09 ± 1.90b, C −0.03 ± 2.77. Body mass (kg): baseline L 86 ± 1.9, C 83.7 ± 1.5; change L −2.4 ± 0.7b, C 0.1 ± 0.5. 2-h plasma glucose (mmol/l): baseline L 8.9 ± 0.3, C 8.6 ± 0.2; change at 1 year L −0.9 ± 0.3b, C 0.3 ± 0.3; change at 2 years L −0.6 ± 0.3b, C 0.8 ± 0.4. Fasting glucose (mmol/l): baseline L 5.9 ± 0.1, C 5.8 ± 0.1; change at 1 year L −0.1 ± 0.1, C 0.1 ± 0.1; change at 2 years L 0.2 ± 0.1, C 0.5 ± 0.1. Relative risk: N/A.
- Lindahl et al. 1999 [23]. VO2max (l/min): baseline L 2.12 ± 0.1e, C 1.89 ± 0.1e; change L 0.21 ± 0.1b,e, C −0.02 ± 0.1e. Body mass (kg): baseline L 86.4 ± 1.1, C 83.6 ± 1.1; change L −5.4 ± 0.5b, C −0.5 ± 0.3. 2-h plasma glucose (mmol/l): baseline L 7.5, C 8.0; change L −0.7 ± 0.2, C −0.3 ± 0.3. Fasting glucose (mmol/l): baseline L 5.4, C 6.1; change L −0.5 ± 0.1, C −0.3 ± 0.1. Relative risk: N/A.
- Carr et al. 2005 [20]. VO2max (l/min): baseline E 1.93 ± 0.1, C 2.02 ± 0.1; change E 0.16 ± 0.04b, C −0.04 ± 0.03. Body mass (kg): baseline E 66.5 ± 2.9, C 69.7 ± 2.6; change E −1.8 ± 0.5b, C 0.6 ± 0.5. 2-h plasma glucose (mmol/l): baseline E 9.2 ± 0.2, C 9.1 ± 0.2; change at 6 months E −0.7, C −0.1; change at 2 years E −0.6, C 0.0. Fasting glucose (mmol/l): baseline E 5.3 ± 0.1, C 5.4 ± 0.1; change at 6 months E 0.0, C 0.1; change at 2 years E 0.0, C 0.1. Relative risk: N/A.
Data are presented as means ± SEM or medians (interquartile range). Change from baseline values reflect results at final follow-up, unless stated otherwise. b p < 0.05 vs control. c Value estimated from graph. d 1 unit = 20 min of moderate exercise or 10 min of strenuous exercise. e Fitness measurement taken in a randomly selected subgroup, n = 45. MET-h/week, metabolic equivalent hours per week; NR, not reported; N/A, not applicable.
Study design Seven of the eight trials involved randomisation of subjects to a treatment group or control group [10, 12, 19–23]. The non-randomised trial identified control participants by using individuals who, for various (unstated) reasons, were not enrolled in the intervention programme [24]. Sample size Sample size ranged from 62 to 2,161. Two studies reported a power calculation based on the expected difference in the incidence of diabetes between groups [12, 21], and one reported a power calculation based on the expected difference between groups in the proportion of individuals with IGT at the end of the study [19]. In the latter study, Oldroyd et al. calculated that a total of 100 participants were required to detect a 0.6 mmol/l difference in fasting glucose and a 20% difference in the number of individuals with IGT, allowing 90% power at a significance level of 0.05. Three studies had sample sizes of fewer than 100 participants at follow-up [19, 20, 22]. Inclusion criteria All studies examined in this review included individuals with IGT and excluded those with isolated IFG [10, 12, 19–24]. Sex Except for one trial that involved only men (n = 188) [24], all trials included both men and women. In the included studies a total of 40% of participants were men. Intervention conditions Seven of the eight included studies used a multi-component lifestyle intervention [10, 12, 19, 21–24], and one used a structured gym-based exercise training intervention [20]. Six of the lifestyle intervention studies were based on encouraging individuals to increase their physical activity to approximately 150 min of exercise of moderate to vigorous intensity per week whilst also encouraging weight loss through a healthy energy-restricted diet [10, 12, 19, 21, 22, 24]. Participants in all six studies received regular encouragement and counselling from a trained dietician at least once every 3 months throughout the duration of the intervention. Two of the six studies also provided participants with the option of attending supervised exercise classes for some or all of the study duration [21, 24] and one provided discounted access to local gyms [19].
One study determined the effect of diet and exercise separately and in combination [10]. One lifestyle intervention included an initial 1-month stay at a wellness centre where individuals were provided with healthy dietary options and encouraged to take part in 2.5 h/day of light to moderate intensity exercise using the leisure facilities provided [23]. After the stay at the wellness centre, participants were encouraged to make plans about how they could incorporate healthier habits into everyday life and then received no further contact until follow-up. The structured exercise intervention study used a training protocol of 180 min per week of aerobic exercise at 70% of heart rate reserve [20]. Exercise training was supervised for the first 6 months and both groups were encouraged to eat a healthy energy-balanced diet, with those in the exercise training group also being encouraged to eat a diet with a high percentage of energy from carbohydrate [20]. Outcomes Four studies included the incidence of diabetes as the main outcome [10, 12, 21, 24], and four used 2-h plasma glucose levels as a direct measure of glucose control [19, 20, 22, 23]. All the studies using the incidence of diabetes as their main outcome were based on a multi-component lifestyle intervention (see intervention conditions). Incidence of diabetes and physical activity All four of the intervention studies that measured the incidence of diabetes as their primary outcome found a significant reduction in the incidence of type 2 diabetes in the intervention group. Diabetes incidence was reduced by 42–63% in this group compared with the control group (see Table 2). The study that investigated the effect of diet and physical activity both separately and in combination found a greater reduction in the incidence of diabetes (46% reduced risk) in the physical activity-only group than in either the combined physical activity and diet group (42% reduced risk) or the diet-only group (13% reduced risk), although the difference between groups was not statistically significant [10]. Three of these four studies relied on self-reported measures of physical activity [10, 12, 21], and of these, only the Diabetes Prevention Program (DPP) [12] and the Finnish Diabetes Prevention Study (FDPS) [21] reported using a validated physical activity questionnaire. All three of the studies relying on self-reported physical activity levels reported non-significant to small changes in physical activity levels in the intervention group. For example, the DPP reported a mean increase in energy expenditure due to leisure time physical activity of around six metabolic equivalent hours per week [12], which is approximately equivalent to walking at a moderate pace for 15 min/day [25]. The FDPS reported no significant change in total physical activity levels compared with the control group and an increase of 9 min/day in moderate to vigorous physical activity [21], and the Da Qing IGT and Diabetes Study reported no significant change in physical activity levels compared with the control group [10]. The Malmö Feasibility Study, which used an objective outcome measure (cardiovascular fitness), reported an 8% increase in maximal oxygen uptake [24]. 2-h post-challenge plasma glucose and physical activity Three of the studies that used the incidence of diabetes as their primary outcome measure also measured 2-h plasma glucose before and after the intervention [10, 21, 24].
The FDPS reported a 0.9 mmol/l decrease in 2-h plasma glucose after 1 year, but no significant change after 3 years [21]; the Da Qing IGT and Diabetes Study found that 2-h plasma glucose increased in all groups, but the increase in the control group was over twice that in either of the intervention groups [10]; and the Malmö Feasibility Study reported a 1.1 mmol/l reduction in 2-h plasma glucose in the intervention group [24]. Of the three lifestyle intervention studies that used 2-h plasma glucose levels rather than the incidence of diabetes as the primary indicator of improved glucose tolerance [19, 22, 23], only one reported a significant difference between the groups in terms of 2-h plasma glucose at follow-up [22]. Two of the studies used a measure of cardiovascular fitness as an indicator of physical activity levels [22, 23], and one [19] used distance walked in a shuttle test [26] as a measure of physical activity. Two studies found a small to moderate increase in cardiovascular fitness (<10% increase compared with baseline value) [22, 23], and the study using the shuttle test reported no change in the distance walked during the test [19]. Similarly, the moderate increases in cardiovascular fitness observed in the structured exercise training study were not associated with significant improvements in 2-h plasma glucose compared with the control group [20]. Fasting glucose None of the included studies reported a significant change in fasting glucose in the intervention group compared with the control group at follow-up. One study did not report fasting glucose values [24]. Discussion Eight controlled trials in individuals with IGT were included in this review. Four studies measured the incidence of diabetes as a primary outcome measure, and found that the risk of diabetes was reduced by approximately 50% (range 42–63%) in individuals who were encouraged to reduce their body mass through changes in diet and physical activity [10, 12, 21, 24]. Although the promotion of physical activity was an important component of these studies, the effect of exercise independent of other factors on the risk of diabetes in individuals with IGT is still unclear. All but one [10] of the studies included in this review reported significant weight loss among participants. Given that weight loss is known to improve many of the factors associated with IGT, including insulin sensitivity and glycaemic control [27], and considering that only modest increases in physical activity were found in these studies, the success of these interventions is likely to be largely explained by weight loss. The apparent success of the exercise-only intervention in the Da Qing IGT and Diabetes Study [10] is likely to be attributable, at least in part, to the significantly higher levels of physical activity at baseline in the exercise intervention group compared with the control group. The separation of physical activity and weight loss may seem artificial, given that increased physical activity may encourage weight loss through increased energy expenditure; however, several meta-analyses of controlled trials investigating the effect of physical activity on glycaemic control in individuals with diabetes found that exercise training was not associated with weight loss [28, 29]. Furthermore, it is increasingly recognised that at least 60 min/day of moderate intensity exercise should be undertaken for the effective management of body mass [30], an amount that none of the interventions included in this review achieved.
Three of the four studies that investigated the effect of a lifestyle intervention in individuals with IGT on the incidence of type 2 diabetes [10, 12, 21] relied on self-reported measures of physical activity. Given the limitations of subjective measures of physical activity, particularly when measuring non-structured forms of moderate physical activity such as walking activity [31], these lifestyle intervention studies provide uncertain information about the effect of physical activity in individuals with IGT. Results from the lifestyle intervention studies that relied on changes in 2-h plasma glucose rather than the incidence of diabetes as the primary measure of glucose control were inconclusive [19, 20, 23]. Two of the three studies were unsuccessful at improving glucose tolerance [19, 23]. Similarly, the one study that used an aerobic exercise training protocol found no improvements in glucose tolerance as measured by 2-h plasma glucose [20]. However, it did find a significant improvement in insulin sensitivity at both 6 and 24 months. This suggests that, although the intervention goal of 3 h/week of moderate intensity exercise was enough to improve insulin sensitivity, it was not of sufficient duration and/or intensity to elicit the magnitude of change in insulin sensitivity necessary for this to be translated into a significant reduction in 2-h plasma glucose. Overall, non-significant results were seen in all but two of the studies that measured 2-h plasma glucose before and after the intervention. However, despite the link between 2-h plasma glucose and diabetes risk, it does not follow that the risk of diabetes was unchanged in these studies, as demonstrated by the FDPS, which reported a non-significant change in 2-h plasma glucose over the course of the intervention but a >50% reduction in the risk of diabetes [21]. One reason for this discrepancy is likely to be the poor repeatability of 2-h plasma glucose values [32], and given the relatively small sample sizes in most of these studies, it is possible that improvements in glucose tolerance were not detected using 2-h plasma glucose. The glucose AUC has been identified as a more reliable measure of glucose tolerance than 2-h plasma glucose [33], and is therefore likely to be a more sensitive outcome measure. One study included in this review measured both 2-h plasma glucose and glucose AUC at baseline and follow-up [20]. It reported that, although 2-h plasma glucose did not change significantly at any of the follow-up time points, there was a significant reduction in the glucose AUC at 6 months, and a trend towards significance at 24 months. Given the failure of the lifestyle interventions to substantially increase physical activity levels, and the inconclusive result of the structured exercise training study, the role of physical activity independent of other lifestyle changes in the treatment of prediabetes remains equivocal. However, statistical analysis of the independent effects of physical activity, which has been carried out on some of the lifestyle intervention studies included in this review, shows interesting results.
For example, the conclusion of the Malmö Feasibility Study that cardiovascular fitness and weight loss were correlated equally with improved glucose tolerance is supported by data from the Study on Lifestyle Intervention and Impaired Glucose Tolerance Maastricht (SLIM) [34], and a recent analysis of data from the FDPS found a 49% difference in the risk of diabetes, after adjustment for changes to body mass and diet, when comparing those in the highest and lowest tertiles of moderate to vigorous leisure time physical activity change [35]. Thus, although the overall evidence for the independent effect of physical activity in the management of prediabetes is equivocal, encouraging evidence is starting to emerge in support of the importance of exercise. Given the limitations of the studies included in this review, it is not possible to make any recommendations as to the intensity and duration of exercise needed to improve glucose tolerance and/or reduce the risk of diabetes in individuals with IGT, independently of other lifestyle changes. The equivocal nature of the evidence is reflected in the advice given by the ADA, which recommends that individuals with IGT should include 150 min/week of moderate to vigorous intensity exercise as part of a weight management programme [36]. However, the aforementioned analysis of the change in physical activity in the FDPS found that a difference of 246 min/week in median values between those in the lowest and the highest tertiles of moderate to vigorous physical activity change was associated with a significant reduction in the risk of diabetes, after adjusting for changes in diet and body mass. In contrast, the difference of 120 min/week in median values between the lowest and middle tertiles was not associated with a reduced risk of diabetes [35]. Although this result was obtained by analysing the pooled cohort, and therefore provides little information about the effectiveness of the intervention itself, it does suggest that 150 min/week of moderate to vigorous intensity exercise is unlikely to be enough to significantly reduce the risk of type 2 diabetes in individuals with IGT, independently of other lifestyle changes. However, given that this analysis relied on self-reported physical activity levels, further rigorous studies are needed to confirm this. All studies included in this review selected individuals using IGT as an inclusion criterion. Therefore, any conclusions from this review can only be applied to individuals with IGT and it is impossible to determine whether or not exercise may be effective in treating individuals with isolated IFG. However, as individuals with isolated IFG account for a minority of individuals with prediabetes [37], conclusions about the effect of exercise on IGT drawn from this review will apply to the majority of individuals with prediabetes. In summary, the majority of studies identified for this review used interventions that encouraged dietary change and physical activity to initiate and maintain weight loss in individuals with IGT. Analysis of these studies found that the independent effect of physical activity in reducing the risk of type 2 diabetes in individuals with prediabetes is equivocal. Furthermore, given the limited evidence, no definite conclusion can be drawn either as to the amount of physical activity needed to reduce the risk of diabetes in individuals with prediabetes or the effectiveness of a single-component physical activity intervention compared with more conventional multi-component interventions.
Thus, more evidence from rigorously designed randomised controlled trials with objective measures of physical activity is needed. As the majority of studies promoting lifestyle changes included in this review failed to substantially increase physical activity levels, strategies for effecting increased physical activity in this population also need to be researched thoroughly. Further investigation is also needed into whether exercise is equally effective in treating the different phenotypes of IGT and IFG.
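For readers who wish to reproduce relative-risk figures of the kind reported in Table 2 (e.g. 0.4 (0.3–0.7)), the sketch below shows the standard calculation from a 2 × 2 table with a 95% CI computed on the log scale; the counts are invented for illustration and do not come from any of the included trials.

```python
import math

def relative_risk(a: int, n1: int, c: int, n2: int):
    """Relative risk of diabetes with a/n1 incident cases in the
    intervention arm and c/n2 in the control arm, plus a 95% CI
    computed on the log scale."""
    rr = (a / n1) / (c / n2)
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    lower = math.exp(math.log(rr) - 1.96 * se_log)
    upper = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lower, upper

# Invented example: 30/265 cases vs 75/257 controls
print(relative_risk(30, 265, 75, 257))  # ~ (0.39, 0.26, 0.57)
```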
[ "physical activity", "impaired glucose tolerance", "type 2 diabetes", "prediabetes", "exercise", "igt", "ifg", "prevention", "impaired fasting glucose" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P" ]
Arch_Orthop_Trauma_Surg-4-1-2324129
Closed reduction and percutaneus Kirschner wire fixation for the treatment of dislocated calcaneal fractures: surgical technique, complications, clinical and radiological results after 2–10 years
Introduction To reduce complications, a minimally invasive technique for the treatment of dislocated intraarticular fractures of the calcaneus was used. For this purpose, previously described closed reduction and internal fixation techniques were combined and modified. Introduction A fracture of the calcaneus allowed to heal in improper anatomical position leads to static and dynamic malfunctions of the whole foot with consequent limited load-bearing capacity and walking ability [4, 29]. The associated pain leads to a significant impairment in quality of life. The goal of therapy for calcaneal fractures is the elimination of pain and restoration of walking ability for patients with normal foot shape and the ability to wear normal footwear. At present, there are multiple operative procedures for the treatment of calcaneal fractures. The procedures can be differentiated by approach, implant type and whether the treatment is performed in one or two stages. Recently, open procedures using internal fixation have been favored for surgical therapy of the calcaneus [2, 5, 9, 15, 17, 23, 24, 26]. A possible complication of an open procedure is the disturbance of wound healing with skin and soft tissue necrosis [2], which may necessitate coverage with cutaneous flaps [5, 8, 9, 15, 17]. In addition to posttraumatic arthritis in the lower ankle and adjoining joints, there are reports of osteitis of the calcaneus [2, 5, 17]. Advanced osteitis of the calcaneus can require a calcanectomy [15] or amputation of the lower leg [9, 17]. In an effort to reduce the complications that can occur with an open procedure, we have combined and modified previously described closed reduction and internal fixation techniques [14, 27, 28] to create a minimally invasive technique. The aim of this study was to evaluate the clinical and radiographic results of our minimally invasive surgical treatment of intraarticular fractures of the calcaneus at 2–10 years postoperatively. We then compared our results to the results of both open and minimally invasive surgical techniques found in the literature. Material and methods Patients A total of 88 patients with 92 closed, dislocated and intraarticular fractures of the calcaneus were consecutively treated with a minimally invasive technique developed at our institution by modifying and combining the procedures of Westhues [27], Poigenfürst and Buch [14], and Wondrák [28]. All the surgeries were performed without the use of bone graft. The average age at time of calcaneal fracture was 46.1 years (range 18–82 years) and most patients were male (71.6%). The cause of fracture was a fall from varying heights in 75 (85.2%) of cases and a motor vehicle accident in 13 (14.8%) of cases. From this group of patients, 63 (71.6%) patients with 67 fractures were available for retrospective examination with an average follow-up time of 5.7 years (range 2–10 years). Twenty-five patients were unavailable for follow-up examination: 12 patients who were satisfied with the result of the surgery did not wish to participate in the study due to age or unwillingness to travel to the hospital, 7 patients were at an unknown address and 6 patients had died. Examination All the medical records, radiographs, pre- and post-operative computed tomography scans were available for the entire study group. Traumatic soft tissue damage was determined using the classification method of Tscherne and Oestern [25].
In addition to radiographic evaluation using the calcaneus lateral, calcaneus axial and Brodén views at 20° and 40° [3], all the patients obtained a bilateral calcaneal preoperative CT scan for fracture classification and surgical planning. Fractures were evaluated by the classification scale of Sanders et al. [17, 18]. The duration of surgery, as well as the time the X-ray image intensifier was used, were noted as operative data. For postoperative evaluation of the reduction, radiographs of the calcaneus in lateral, axial, Brodén 20° and 40° [3] views were obtained. At the last follow-up evaluation, a CT scan of both calcanei was obtained to examine the geometry of the calcaneus and to evaluate arthritic changes in the lower ankle joint. For a reconstruction of the calcaneal posterior facet to be considered satisfactory, a joint deviation of ≤2 mm had to be seen on the postoperative radiographs. For the evaluation of the clinical results, the Zwipp score was used [30]. Statistical analysis For statistical analysis the Fisher's exact test and the Chi-square test were used. Perioperative treatment and surgical technique After a calcaneal fracture was diagnosed, the lower leg was evaluated. If the soft tissue was in good condition, primary surgery was performed. In cases of severe swelling with potential soft tissue damage, NSAIDs, local cryotherapy and active movement exercises determined the course of therapy. Once the soft tissue was in good condition, surgery was performed. The patient was placed prone on the traction table under general anaesthesia or spinal anaesthesia without arrest of blood supply. After application of the calcaneus wire for traction in the dorso-cranial plane, the varus or valgus malalignment of the back foot was corrected with the wire positioned orthogonally to the longitudinal axis of the calcaneus. To achieve correct placement, the surgeon pulled at the traction bow with the knee bent along the longitudinal axis of the calcaneus. The subsequent traction in the plantar plane was performed with the leg stretched. Once placed in the proper orientation, the traction bow was connected to the retaining jig of the traction table (traction of 20 kg). The surgeon then held the heel of the patient with both hands and applied compression both medially and laterally with the thenar muscles to reduce the main medial and lateral fragments. With the use of the X-ray image intensifier, the restoration of Böhler's angle and the reduction of the posterior facet were verified. A Steinmann pin was mounted with the pointed end in a universal chuck and introduced with the other end through a stab incision into the dorso-lateral calcaneus beneath the posterior facet (Fig. 1a). For reduction of the posterior facet, the traction wire was used as a hypomochlion (Fig. 1b). The anatomical restoration of the posterior facet was verified with use of the Brodén radiographic views (20° and 40°). The reduction of the calcaneus was verified by examining the lateral radiographic view (Fig. 2a). The fragments were fixed with percutaneous Kirschner wires, which were arranged conically into the talus and/or cuboid (Fig. 2b). All of the main fragments to be reduced were held in place by the Kirschner wires and the arch of the foot was held until the bone healed [28]. The wires were bent above the skin level, the wire traction removed and the stab incision closed with sutures. Fig. 1 The Steinmann pin is placed into the dorso-lateral calcaneus above the traction wire (left side).
For the reduction of the posterior facet, the traction wire is used as a hypomochlion (right side). The hematoma is drained through the stab incisionFig. 2Intraoperative lateral radiograph of a dislocated calcaneal fracture before reduction (left side). After reduction the fragments are fixed with percutaneous Kirschner wires (right side) Postoperatively, a dorsal lower leg splint was applied for immobilization and the lower leg on the operative side was elevated. On the first postoperative day, the patient was mobilized on crutches with no weight bearing on the operative side. Depending on pain levels, the patient began with active dorsal extension and plantar flexion in the upper ankle joint. Radiographic controls (upper ankle joint anterior and lateral, calcaneus axial, and Brodén views [3]) were performed immediately postoperatively, and again 2 and 8 weeks postoperatively. After the fracture had healed, the Kirschner wires were removed. Partial weight bearing began after the eighth postoperative week at 20 kg and was increased up to full weight bearing by the twelfth postoperative week (Fig. 3). Fig. 3Clinical example. a Preoperative lateral radiograph of an intraarticular, displaced calcaneal fracture (left side). The preoperative CT scan shows a Sanders Type II AC fracture on the coronal reconstruction (right side). b Lateral radiograph taken 2.5 years postoperatively. The patient had a Zwipp score of 166 points (left side). Anatomical reconstruction of the posterior facet was achieved (right side). This coronal CT image shows the anatomical reconstruction of the posterior facet with no joint deviation Results Fifty-five of the 88 (62.5%) patients suffered exclusively from calcaneal fractures. Thirty-three of the 88 (37.5%) patients had multiple injuries. Twenty of the 33 (61%) patients with multiple injuries had local co-injuries such as fractures of the upper ankle joint, tarsus and forefoot. In 83 of the 92 calcaneal fractures, the soft tissue injury was graded as 1° or 2°; in 9 fractures, the soft tissue injury was graded as 3°. In eight cases, including three patients with multiple injuries, surgery was performed immediately on the day of the trauma. Length of surgery averaged 61 min (range 20–175 min) and screening time averaged 115 s (range 20–454 s). To obtain proper retention of the fracture, between 4 and 7 Kirschner wires were used. The Kirschner wires were removed, with or without local anaesthesia, at an average of 10 weeks (range 7–15 weeks) as an outpatient procedure. Full weight bearing was achieved at an average of 15 weeks postoperatively. Sanders classification Sanders type II, III and IV fractures were diagnosed (Table 1). All patients had an initial joint deviation of at least 2 mm. Patients with type I non-dislocated fractures were treated conservatively and were therefore not included in this study. Reconstruction of the posterior facet was verified radiographically in 13 of 15 (86.7%) type II fractures, in 47 of 52 (90.4%) type III fractures and in 16 of 25 (64%) type IV fractures immediately postoperatively (Table 1).
Table 1 Patients with immediate postoperative reconstruction of the posterior facet, by Sanders fracture classification (N = 92)

  Classification   Fractures     Radiological reconstruction of posterior facet achieved
  Type II          15 (16.3%)    13 (86.7%)
  Type III         52 (56.5%)    47 (90.4%)
  Type IV          25 (27.2%)    16 (64.0%)

The Zwipp score of 67 patients at the last follow-up evaluation The Zwipp score of the 67 patients available for follow-up averaged 130/200 points (range 48–186 points), which is considered a good result [30]. Regardless of fracture type, 41 (61.2%) of the 67 patients had a very good or good result, 24 (35.8%) patients had a satisfactory result and 2 (3%) patients had a bad result. The worst clinical results occurred with type IV fractures (Table 2). Thirty (44.8%) patients considered their treatment outcome very good or good, 28 (41.8%) as satisfactory and sufficient and 9 (13.4%) as insufficient.

Table 2 Clinical and radiological results at last follow-up evaluation, by Sanders fracture classification

  Classification   n            Very good or good result   Arthritis in lower ankle joint   Normal Böhler angle achieved
  Overall          67           41 (61.2%)                 33 (49.3%)                       47 (70.1%)
  Type II          6 (8.9%)     4 (66.7%)                  1 (16.7%)                        5 (83.3%)
  Type III         39 (58.2%)   29 (74.4%)                 12 (30.8%)                       31 (79.5%)
  Type IV          22 (32.8%)   8 (36.4%)                  20 (90.9%)                       11 (50.0%)

Thirty-seven (55.2%) of the 67 patients had no pain under full weight bearing or could walk at least 4 h without pain at the last follow-up evaluation. Nine (13.4%) patients had constant pain. Thirty-three (49.3%) patients had a range of motion in the upper ankle joint identical to that of the non-affected side. Thirty-four (50.7%) patients had a restricted range of motion of up to 15°, and more than half (58.2%) of the patients had achieved more than 75% of their total range of motion in the lower ankle joint. Forty-three (64.2%) of 67 patients were able to wear normal shoes, while 5 (7.4%) used shoes with an unroll aid. Nineteen (28.4%) patients had obtained orthopaedic shoes on their own by the last follow-up. Two (3%) of 67 patients had arthritis of the upper ankle joint at the last follow-up. Independent of fracture type, 33 (49.3%) patients had arthritic changes in the lower ankle joint and 14 (20.9%) cases had arthritic changes in the calcaneocuboidal joint (Table 2). Böhler's angle was restored in 70.1% (47 of 67) of the cases (Table 2). In 85.1% (57 of 67) of cases, the reduction in calcaneal height was within 10% of the non-operated side. Additionally, the reduction in the length of the calcaneus was within 10% in 94% (63 of 67) of cases when compared to the non-operated side. Thirty-eight (56.7%) of the 67 patients had a widening of the calcaneus of more than 10% compared to the opposite side. Complications Of the 92 surgically treated calcaneus fractures, 76 (82.6%) healed without complications. In nine (9.8%) cases, superficial skin infections, perforations of the Kirschner wires and bone dystrophy occurred and healed without any further complications. Significant complications occurred in six (6.5%) cases: three cases had osteitis of the calcaneus, one case had dislocation of the fracture requiring revision surgery and two cases had peroneal tendon impingement. The three cases of osteitis healed with conservative therapy (oral antibiotics), and the two patients with impingement of the peroneal tendon refused any operative intervention. Using our method, disturbance of wound healing with skin and soft tissue necrosis requiring operative intervention was not observed.
Additionally, no lower leg amputations and no total or partial calcanectomies had to be performed. Statistical analysis Using the Chi-square test, a statistically significant correlation was found between fracture type and the incidence of subtalar arthritis (p = 0.0268). Using the same test, no statistically significant correlation was found between fracture type and quality of reduction (p = 0.6522), between fracture type and clinical result (p = 0.3204), or between fracture type and Böhler's angle (p = 0.5488). However, a statistically significant correlation was found between the clinical result and subtalar arthritis (p = 0.0013) and between the incidence of subtalar arthritis and the quality of reduction when using Fisher's exact test (p = 0.0003). Discussion To correct deformations of the calcaneus, spare the soft tissue and lower the complication rate, indirect and less invasive reduction and fixation techniques for treating calcaneal fractures have been developed [7, 10, 12–14, 16, 19, 21, 22]. Besides techniques with percutaneous reduction and internal K-wire fixation [7, 14, 16, 21], percutaneous reduction techniques with external fixation have been described in the literature [12, 13, 19, 22]. In our department, we have combined and modified previously described closed reduction and internal fixation techniques [14, 27, 28] to create a minimally invasive technique. This method uses the lower joint surface of the talus as a guide for remodeling the posterior facet and reconstructing the calcaneus. The multitude of different evaluation scores makes a comparison of our clinical results with published results difficult [11, 17, 20, 30]. Using the Zwipp score, 61.2% of the cases (41 of 67) treated by our modified method had good or very good results regardless of fracture type. In 38.8% (26 of 67) of cases, a satisfactory or bad result was achieved. After open reduction and internal fixation with a plate, Rammelt et al. [15] and Boack et al. [2] reported good and very good results in 65 and 66% of cases, respectively, with the use of the 200-point Zwipp score. Boack et al. [2] presented satisfactory or bad clinical results in 34% of cases, also using the Zwipp score. Sanders et al. [18] showed that with a lateral approach followed by internal fixation with a plate, the worst clinical results occurred with type IV fractures. In this study, the worst clinical results were likewise achieved in the treatment of type IV fractures, although within our cases no statistically significant correlation between fracture type and clinical result could be demonstrated (Table 2). Patient satisfaction is an essential criterion for the successful treatment of calcaneus fractures. With a comparable follow-up period of 5.4 years, Thermann et al. [23] reported that 48.3% of patients viewed their treatment outcome as good or very good after open reduction of their calcaneus fractures, 37.3% of patients judged their treatment outcome as satisfactory and sufficient and 14.4% judged their outcome as insufficient. Our treatment method had comparable results, with 44.8% of patients considering their treatment outcome very good or good, 41.8% satisfactory and sufficient and 13.4% insufficient. At the last follow-up evaluation, 64.2% of patients in this study were able to wear normal shoes while 28.4% required orthopaedic shoes. Comparably, in the study by Thermann et al. [23], 68.7% of the patients wore normal shoes and 16.8% required orthopaedic shoes.
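For readers who wish to reproduce this kind of analysis, the following is a minimal sketch in Python (SciPy) of the contingency-table tests named in the Statistical analysis section above. The arthritis-by-fracture-type counts are taken from Table 2; the 2×2 cross-tabulation of reduction quality against subtalar arthritis is hypothetical (only its marginals, 55 satisfactory reductions and 33 cases of arthritis, match the reported totals).

```python
# Minimal sketch of the contingency-table tests reported above.
# Requires SciPy; counts follow Table 2 unless marked hypothetical.
from scipy.stats import chi2_contingency, fisher_exact

# Rows: Sanders type II, III, IV; columns: subtalar arthritis yes / no.
fracture_vs_arthritis = [
    [1, 5],    # type II:  1 of 6 cases with arthritis
    [12, 27],  # type III: 12 of 39
    [20, 2],   # type IV:  20 of 22
]
chi2, p, dof, _ = chi2_contingency(fracture_vs_arthritis)
print(f"fracture type vs subtalar arthritis: chi2={chi2:.2f}, dof={dof}, p={p:.4f}")

# Fisher's exact test needs a 2x2 table. Rows: reduction satisfactory yes/no;
# columns: arthritis yes/no. The cell split below is hypothetical.
odds_ratio, p_exact = fisher_exact([[22, 33],
                                    [11, 1]])
print(f"reduction quality vs subtalar arthritis: OR={odds_ratio:.3f}, p={p_exact:.4f}")
```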
Overall, satisfactory mobility of the joints adjoining the calcaneus was achieved with our surgical technique. In approximately half of our patients, complete range of motion in the upper ankle joint was achieved, and in the lower ankle joint approximately 60% of patients had no or only minor restriction of movement. Similar mobility in the upper and lower ankle joints has been reported in the literature after open reduction and osteosynthesis with a plate [5, 23]. However, Buch [6] reported a worse range of motion in the upper and lower ankle joints after performing percutaneous wire osteosynthesis in 100 calcaneal fractures, with a varus or valgus deviation of the back foot occurring in half of the cases. In this study, malpositions of the back foot of more than 5° were diagnosed in 6 (9%) of the 67 cases. Clinically relevant changes in the axis of the forefoot were not observed. The radiological evaluation of the joints adjoining the calcaneus showed arthritic changes in the upper ankle joint in two of our cases. Half of our cases showed arthritic changes in the lower ankle joint, and in approximately a fifth of the evaluated cases arthritic changes were found in the calcaneocuboidal joint. In our study, 66.7% of type-II fractures and 98% of type-III/IV fractures showed arthritic changes in the lower ankle joint. These results are similar to those presented by Thermann et al. [23], who showed arthritic changes in the lower ankle joint in 65.2% of type-II fractures and in 81.7% of type-III/IV fractures. Regardless of fracture type, anatomical restoration of the joint surface or a joint deviation of up to 2 mm of the posterior facet was achieved postoperatively in 82.1% of cases. A total of 86.7% of type-II fractures and 72.8% of type-III/IV fractures achieved anatomical restoration of the posterior facet. These results are comparable to those presented in the literature for open reduction and internal fixation with plates or screws [2, 17, 23]. In this study, the average length of surgery was 61 min. This is considerably less than the average lengths of 139 and 168 min reported for surgeries utilizing open reduction and internal fixation [2, 20]. In 9.8% of cases, complications such as superficial skin infections, perforation of the Kirschner wires and bone dystrophy healed with conservative therapy or after removal of the Kirschner wires. This comparatively low rate of minor complications is similar to those described in the literature after operative treatment of calcaneal fractures utilizing a minimally invasive technique [7, 16, 21]. The rate of significant complications in this study was 6.5%. In two cases, an impingement of the peroneal tendon occurred. One case had a dislocation of the fragments requiring revision surgery. Three cases had infections of the calcaneus related to the traumatic soft tissue injury; these healed with conservative therapy. With open reduction, skin and soft tissue necrosis [2], sometimes requiring cutaneous flaps, has been reported in scattered cases [5, 8, 9, 15, 17]. Folk et al. [9] reported wound complications in 25% of cases after open reduction of calcaneus fractures; 84% of these required an additional surgery. Abidi et al. [1] reported wound healing disturbances in 33% of cases following open reduction and internal fixation of calcaneal fractures. There have also been reports in the literature of partial or total calcanectomies [15], and even amputations, as a result of calcaneal osteitis after open reduction [9, 17]. Sanders et al.
[17] considered a lower leg amputation if a patient suffered from persistent osteitis. In the current study, no disturbances of wound healing and no skin or soft tissue necrosis requiring microsurgical intervention were observed. In this study, no statistically significant correlation could be found between fracture type and the ability to restore Böhler's angle. However, regardless of fracture type, Stulik et al. [21] showed similar results using a comparable minimally invasive technique, while Thermann et al. [23] had worse results following open reduction with internal fixation. In our study, there was a slight reduction in the height and length of the operated calcaneus when compared to the non-operated side. With our closed reduction and internal fixation technique, more than half of the cases had a widening of the calcaneus of more than 10% when compared to the opposite side, which is an unsatisfactory result. At the same time, only two patients had an impingement of the peroneal tendon, which they considered tolerable. In summary, we presented a minimally invasive technique for the treatment of intraarticular, dislocated calcaneus fractures and were able to produce results comparable to open techniques with a lower rate of serious complications. In the majority of cases, an almost identical Böhler angle and geometry of the calcaneus compared to the opposite side were achieved at the time of last follow-up. Simple removal of the Kirschner wires and a shorter surgery time decrease patient stress and must be recognized as advantages of this minimally invasive technique. Thus, we feel that our minimally invasive technique is a viable alternative for the treatment of intraarticular, dislocated calcaneal fractures.
[ "dislocated calcaneal fracture", "minimal-invasive" ]
[ "P", "U" ]
Purinergic_Signal-3-4-2072916
Guanosine reduces apoptosis and inflammation associated with restoration of function in rats with acute spinal cord injury
Spinal cord injury results in progressive waves of secondary injuries, cascades of noxious pathological mechanisms that substantially exacerbate the primary injury and the resultant permanent functional deficits. Secondary injuries are associated with inflammation, excessive cytokine release, and cell apoptosis. The purine nucleoside guanosine has significant trophic effects, is neuroprotective and antiapoptotic in vitro, and stimulates nerve regeneration. Therefore, we determined whether systemic administration of guanosine could protect rats from some of the secondary effects of spinal cord injury, thereby reducing neurological deficits. Systemic administration of guanosine (8 mg/kg per day, i.p.) for 14 consecutive days, starting 4 h after moderate spinal cord injury in rats, significantly improved not only motor and sensory functions, but also recovery of bladder function. These improvements were associated with a reduction in the inflammatory response to injury, reduction of apoptotic cell death, increased sparing of axons, and preservation of myelin. Our data indicate that the therapeutic action of guanosine probably results from reducing inflammation, thereby protecting axons, oligodendrocytes, and neurons, and from inhibiting apoptotic cell death. These data raise the intriguing possibility that guanosine may also be able to reduce secondary pathological events and thus improve functional outcome after traumatic spinal cord injury in humans. Introduction Spinal cord injury (SCI) occurs in an instant, but its devastating effects last a lifetime at huge personal and economic cost [1]. The spinal cord conveys both afferent sensory and efferent motor information, so disruption of spinal cord function results not only in motor paralysis but also in sensory and autonomic impairment distal to the injury [2, 3]. Sensory dysfunction contributes to the generation of pressure sores that, like bladder impairment, are a major source of morbidity and even mortality in those with spinal cord injury [4]. Restoration of function in longstanding spinal cord injuries is poor, although some limited functional restoration has been reported [5]. Therefore, there is much interest in reducing the extent of the initial damage from traumatic spinal cord injury in the acute phase. Trauma often results in primary damage from mechanical disruption of the nerve axons in the spinal cord that is not amenable to neuroprotective therapy. However, secondary pathological changes involving cascades of biochemical, molecular, and cellular events can produce even more extensive damage, and these changes are potentially susceptible to therapeutic intervention with neuroprotective agents [3, 6]. Thus, pathological changes occur from the moment of injury and continue for years afterwards; they have been divided into three phases: an acute phase, a phase of secondary tissue loss, and a chronic phase [3, 6]. In the acute phase, which starts at the moment of injury and extends over the first few days, numerous pathological processes begin. Mechanical injury induces an immediate change in neuronal tracts at the moment of impact, and blood flow is reduced, creating substantial ischemic necrosis [7, 8]. A cascade of pathophysiological processes rapidly follows the mechanical trauma to the spinal cord, resulting in secondary neuronal damage that can significantly exacerbate the original injury [9].
Traumatic injury to the spinal cord also leads to a strong inflammatory response, with the recruitment of peripherally derived inflammatory cells, including macrophages [10]. Damage to the spinal cord also results in extensive cell proliferation in and around the epicenter; many of the proliferating cells are microglia and macrophages. This acute inflammatory response at the site of the initial lesion is at least partly responsible for secondary spinal cord pathology [11–13]. The inflammatory cells (particularly macrophages/microglia) can mediate tissue damage by producing a variety of cytotoxic factors including interleukins [14], tumor necrosis factor-alpha (TNF-alpha) [15], and reactive nitrogen species [13, 16–18]. Neuronal and oligodendroglial cell loss is apparent in the lesion epicenter, and rostral and caudal to it, within 4 h of injury [19]. From days to years after the initial trauma, apoptotic cell death continues, and scarring and demyelination accompany Wallerian degeneration. All these processes contribute to motor and sensory functional deficits [20, 21]. Many pharmacological agents have been reported to reduce secondary injury and to be neuroprotective in a variety of animal models; these include anti-inflammatory [22–24] and antiapoptotic agents [24, 25] and agents that elevate cyclic adenosine monophosphate (cAMP) [26, 27]. None has yet proved effective in ameliorating the effects of acute spinal cord injury in clinical trials in humans. There is increasing evidence that the non-adenine-based purine guanosine acts as an intercellular signaling molecule. It is released from cells and has several diverse effects on cells in vivo and in vitro, particularly trophic effects modulating cellular growth, differentiation, and survival [28, 29]. Guanosine has a number of effects on various cell types that make it a good candidate to test as a neuroprotective agent in acute spinal cord injury, since it might potentially interact with several steps of the biochemical and cellular cascade. It is neuroprotective [28–32] and stimulates nerve regeneration [5, 33]. It also protects several cell types against apoptosis induced by a variety of agents such as staurosporine [34] and beta-amyloid [35], and has been reported to increase intracellular cAMP [36, 37]. Therefore, in the present study, we assessed whether guanosine might ameliorate tissue damage and enhance functional outcome after acute spinal cord injury. Materials and methods Animals Adult female Wistar rats (280–300 g body weight, Charles River) were maintained in a temperature-controlled vivarium on a 12:12-h light-dark cycle with food and tap water freely available. Rats were handled daily for 2 weeks before surgery. Spinal cord injury induction and experimental design Spinal cords were surgically exposed and compressed with modified coverslip forceps to produce a moderate traumatic spinal cord injury [5, 38]. Before surgery, rats were given buprenorphine (0.03 mg/kg body weight, subcutaneously) for pain relief. They were then anesthetized with isoflurane (3–5%) in O2 (1 l/min), and a laminectomy was performed at T11/T12 to expose the spinal cord, which was then crushed with modified coverslip forceps [5, 38–40]. The forceps were closed slowly (over 2 s), compressing a 5-mm length of the spinal cord to a thickness of 1.4 mm for 15 s. The wound was closed by suturing the muscles and fat pad, and by clipping the skin with stainless steel clips. Postoperatively, the rats were kept quiet and warm.
To evaluate the effect of guanosine on acute-phase SCI pathophysiology, we studied two groups of rats (n = 24) with SCI. After surgery and prior to treatment, behavioral tests were performed on each rat. Two of the 24 animals were excluded from the study because of an incorrect injury (a BBB score on the day of surgery, before treatment, below 4 or above 7; in our experience, about 91% of rats with this degree of crush were within the range of 5 to 6 on the BBB scale on the first day after injury [38]). The remaining 22 rats were then randomly divided into two groups. Starting 4 h after surgery, rats received either daily intraperitoneal (i.p.) injections of 8 mg/kg guanosine or the same volume of saline containing 0.001 N NaOH [5] for 2 weeks. On day 7 after injury, one animal in the saline group was euthanized because of a severe bladder infection. Motor and sensory functional recovery assessment All rats were handled daily for 2 weeks preoperatively to acclimatize them to the handling and behavioral testing. After spinal cord compression, the locomotor behavior, segmental reflexes, and spinothalamic sensation of the rats were assessed immediately prior to treatment (day 0) and on days 1, 3, 5, 7, 14, 21, and 28 after the injury. Five tests were used: an open field walking task [5, 41, 42], the hind limb placing response [38, 40], the foot orienting response [38, 40, 43], an inclined plane test [38, 44, 45], and a hot plate test [46]. Behavioral analyses were performed by individuals who were blinded with respect to treatment groups. An open field walking test (OFWT) was used to assess the locomotor functional recovery of the hind limbs. It was conducted in a child's circular plastic swimming pool (1.3 m in diameter [5, 38]). Cagemates (two animals) were placed in the center of the open field. They were observed for 5-min periods and scored for general locomotor ability using the standard BBB scale. The rats were rated on a scale of 0 to 21, 0 being no function and 21 being normal. If an animal stopped moving for a minute, it was placed again in the center of the open field; otherwise it was left undisturbed for the duration of the 5-min test period. Reflexes in the hind limbs were assessed with the hind limb placing response (HLPR) and the foot orienting response (FOR). They were each scored on a scale of 0 to 2, 0 indicating no function and 2 indicating full function [38, 40, 43]. Half-scores were assigned if the behavioral response appeared intermediate. Hind limbs were scored separately for each measure. To assess the HLPR, we grasped the hind foot between the thumb and forefinger, pulled backwards, and then released the foot. The placement of the foot on the table surface was then scored [38, 40]. The FOR followed Gruner's [40] protocol, modified from previous descriptions of this reflex [43]. When a rat is raised and lowered by the tail, it shows a characteristic behavior of the hind legs. A normal rat spreads the toes of its hind legs wide apart and generally holds them apart for several seconds. After spinal cord injury, this response is sometimes lost completely or reduced in magnitude. The inclined plane test (IPT) measured the ability of the rats to maintain their position for 5 s on an inclined plane covered by a rubber mat containing horizontal ridges (1 mm deep, spaced 3 mm apart, self-made) [44]. The rats were observed as the angle of the surface was increased from 5 to 90° in 5° intervals. The angle at which the rat could no longer maintain its position was the outcome measure.
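As an illustration of the data handling implied above, here is a minimal sketch in Python; the rat identifiers and BBB scores are hypothetical. It encodes the day-0 inclusion window (scores below 4 or above 7 excluded) and the mean ± SEM group summary used in the figures.

```python
# Minimal sketch of the behavioral-data workflow described above:
# apply the day-0 BBB inclusion window, then summarize a group as
# mean +/- SEM per test day. All scores below are hypothetical.
import numpy as np

def included(bbb_day0, low=4, high=7):
    """Inclusion rule: the day-0 BBB score must lie within [low, high]."""
    return low <= bbb_day0 <= high

# Hypothetical BBB scores per rat on days 0, 1, 3, 5.
scores = {
    "rat1": [5, 6, 8, 9],
    "rat2": [6, 7, 9, 11],
    "rat3": [2, 3, 4, 5],   # excluded: day-0 score below 4
}
kept = {rid: s for rid, s in scores.items() if included(s[0])}

data = np.array(list(kept.values()), dtype=float)   # shape: rats x days
means = data.mean(axis=0)
sems = data.std(axis=0, ddof=1) / np.sqrt(data.shape[0])
for day, m, e in zip([0, 1, 3, 5], means, sems):
    print(f"day {day}: BBB = {m:.1f} +/- {e:.1f}")
```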
Hind limb sensory function was tested by the ability to perceive a standardized, controlled stimulus applied below the level of the lesion, using a standard hot plate testing apparatus (IITC Life Science Inc., Woodland Hills, CA, USA) [46]. The time taken for the rat to withdraw its hind paws from contact with the hot plate was noted. A cutoff time of about 20 s was used, after which the animals were removed to prevent thermal injury [46]. Recovery of lower urinary tract function Normal lower urinary tract (LUT) function involves both spinal and supraspinal circuitry that controls urine storage and release [47]. Incomplete SCI results in an initial loss and later partial or complete recovery of LUT function, depending on the severity of injury [48, 49]. After SCI, the rats were initially not capable of spontaneous micturition, and their bladders were manually expressed twice daily. The volume of expressed urine was measured each time, and the data were used to estimate the initiation of LUT function after SCI. Histological analysis Tissue processing and immunohistochemical staining Twenty-eight days after SCI, animals were killed for histological and immunohistochemical analysis. Rats were deeply anesthetized with sodium pentobarbital (50–60 mg/kg b/w, i.p.) and perfused transcardially, first with 100 ml 0.05 M phosphate-buffered saline (PBS) containing 0.1% heparin, followed by 300–500 ml of 4% paraformaldehyde (PFA). The T9 to L1 segments of the spinal cords were removed and incubated in the same fixative solution overnight at 4°C and then cryoprotected in PBS solution containing 30% sucrose. A segment of each cord, extending from 5 mm rostral to 5 mm caudal to the lesion site, was embedded in medium (Tissue-Tek® O.C.T. compound, Sakura Finetek USA, Inc., Torrance, CA, USA). Serial sections were cut at 20-μm intervals on a cryostat and mounted onto slides (ColorFrost/Plus; Fisher, Pittsburgh, PA, USA) for histochemical staining. Some cords were transverse-sectioned for immunohistochemical analysis using the antibodies described below. Longitudinal sections were cut for staining with Luxol fast blue, a lipophilic dye commonly used to stain myelin [38]. Details of these immunohistochemical procedures have been described previously [5, 38]. Briefly, the cryostat sections were thawed, air-dried, and then incubated in hydrogen peroxide to reduce endogenous peroxidase activity, before being rinsed in PBS. The sections were then incubated in 1% sodium borohydride for 15 min. After thorough washing with PBS, the sections were treated with PBS/5% normal goat serum with 0.3% Triton X-100 at room temperature for 30 min. Overnight incubation with primary antibody was performed in humidified boxes at 4°C. Macrophages and reactive microglia were detected with a mouse monoclonal antibody against ED-1 (MCA 341R, 1:500; Serotec, Hornby, ON, Canada), and reactive astrocytes (astrogliosis) were detected with rabbit anti-glial fibrillary acidic protein (GFAP) polyclonal antibodies (Zymed® Lab-SA System Kit, diluted 1:600, Invitrogen Canada Inc., Burlington, ON, Canada). The following day, sections were rinsed with PBS and incubated with either rhodamine-conjugated or fluorescein isothiocyanate (FITC)-conjugated secondary antibodies.
Sections were then rinsed, coverslipped, and examined under a confocal microscope. To determine spared axons and myelin, double fluorescent immunolabeling was performed, combining mouse monoclonal antibodies against neurofilament (RT-97, 1:10; Developmental Hybridoma Bank) and rabbit anti-myelin basic protein (MBP) polyclonal antibodies (1:50; Chemicon Int., Temecula, CA, USA). For double immunolabeling, sections were developed for 2 h using a mixture of FITC-conjugated goat anti-rabbit IgG and rhodamine-conjugated goat anti-mouse IgG (1:200; Invitrogen Canada Inc., Burlington, ON, Canada) in 1% normal goat serum and 0.25% Triton X-100. Sections were then rinsed, coverslipped, and examined under a confocal microscope. An investigator who was blinded to the treatment groups conducted the histological analysis. Terminal deoxynucleotidyl transferase (TdT)-mediated dUTP-biotin nick end labeling assays and quantification For detection of apoptotic cells, terminal deoxynucleotidyl transferase (TdT)-mediated dUTP nick end labeling (TUNEL) staining was performed using the "In situ Cell Death Detection Kit-Fluorescein" (Roche Molecular Biochemicals, Chemicon Int., Temecula, CA, USA), according to the manufacturer's instructions. After fixation, tissue sections were incubated in TUNEL reaction mixture (TdT and biotinylated dUTP in TdT buffer) in a humid atmosphere at 37°C for 90 min and then washed with PBS. The sections were then incubated at 37°C for 30 min with a fluorochrome-conjugated antibody. The results were analyzed using confocal fluorescence microscopy. TUNEL-positive cells in the lesion site in the spinal cord were quantified by counting positively stained cells. Sections (5–7 per animal) taken from the penumbra of the lesion and spaced about 100 μm apart were analyzed for each animal (n = 3–5 animals per group). Apoptotic cell death was determined by counting the total number of TUNEL-positive nuclei through entire cross sections. Low-power sections were digitized and manually outlined using an image analysis system. Any cavities present in the sections were excluded from analysis. Data are expressed as cells per section [5]. Quantification For cell counting, 5–7 sections from each animal (n = 5 for each group) at the lesion site (every third section, 100 μm apart) were analyzed. OX-42-positive microglia and TUNEL-positive nuclei were counted through the entire cross section. Data are expressed as the number of immunostained cells per section (mean ± SEM). Statistical analysis All data are presented as mean ± SEM. The statistical significance of behavioral scores was analyzed by Kruskal-Wallis nonparametric analysis of variance followed by Mann-Whitney U tests. Histological data were evaluated by Student's t-tests. Results Guanosine improves neurological function Over the course of 4 weeks after spinal cord injury, control rats (which received saline treatment) recovered occasional weight-supported plantar steps, but no fore-hind limb coordination, and had a mean BBB locomotor score of 9.3 ± 0.6 (Fig. 1). In contrast, the guanosine-treated rats recovered to a BBB score of 14 ± 0.5 (Fig. 1); they exhibited consistent weight support with consistent fore-hind limb coordination. Hind limb locomotor function in guanosine-treated animals was significantly better than in rats that received saline treatment (P < 0.01). Fig.
1Open field walking test (OFWT) scores from day 0 (the same day as surgery, prior to treatment) to 28 days after spinal cord injury for groups of saline- and guanosine-treated animals (means ± SEM). Animals with normal spinal cord function score 21, whereas a score of 0 represents total paralysis. Hind limb locomotor function in guanosine-treated animals (n = 11) was significantly better than in rats that received saline treatment (n = 10; P < 0.01) from day 1 to 28 after injury Uninjured rats have normal HLPR scores of 2 [38, 40]; they always place an extended hind limb briskly beneath the body in a proprioceptive placing response. Injured rats place their hind limb either partially, or unreliably, or not at all, depending on the time since the injury and the treatment. In the present study, 4 weeks after injury, control rats with saline injection attained an HLPR score of 1.0 ± 0.1 (Fig. 2), characterized by little or no attempt to place the foot, or by leaving the foot extended with its dorsal surface down. In contrast, rats treated with guanosine reached a score of 1.7 ± 0.1 (Fig. 2), with the toes of their hind legs spread wide apart, which was significantly better than in rats that received saline treatment (P < 0.01). Uninjured rats have a normal FOR score of 2. Saline-treated animals had a FOR score of 1.0 ± 0.1 (Fig. 3). These rats extended their hind legs laterally with toe spread but turned their feet outward. When these rats were lowered, they did not orient their feet toward the surface. In contrast, the guanosine-treated rats recovered to a score of 1.7 ± 0.1 (Fig. 3), which is significantly different (P < 0.01) from the saline-treated rats. Fig. 2Hind limb placing response (HLPR) scores from day 0 (the same day as surgery, prior to treatment) to 28 days after spinal cord injury for groups of saline- and guanosine-treated animals (means ± SEM). Animals with normal spinal cord function score 2, whereas a score of 0 represents total paralysis. Compared with saline-treated control animals (n = 10), guanosine-treated animals (n = 11) had a significantly better improvement of their hind limb placing responses (P < 0.01) from day 1 to 28 after injuryFig. 3Foot orienting response (FOR) scores from day 0 (the same day as surgery, prior to treatment) to 28 days after spinal cord injury for groups of saline- and guanosine-treated animals (means ± SEM). Animals with normal spinal cord function score 2, whereas a score of 0 represents total paralysis. Compared with saline-treated control animals (n = 10), guanosine-treated animals (n = 11) had a significantly better improvement of their foot orienting responses (P < 0.01) from day 1 to 28 after injury Uninjured rats maintain their position on the inclined plane even at an angle of 90°. In the present study, saline-treated control rats recovered their ability to maintain their position at 69° ± 2 by 4 weeks after injury, whereas guanosine-treated rats were able to maintain their position at a mean incline of 79° ± 1 (Fig. 4) by 4 weeks (P < 0.05). Fig. 4Inclined plane test (IPT) scores from day 0 (the same day as surgery, prior to treatment) to 28 days after spinal cord injury for groups of saline- and guanosine-treated animals (means ± SEM).
Compared with saline-treated control animals (n = 10), guanosine-treated animals (n = 11) had a significantly better score in their inclined plane test (P < 0.05) from day 1 to 28 after injury Both saline- and guanosine-treated rats were insensitive to the thermal stimulus of the hot plate test during the first 2 days after injury. There was a gradual recovery of sensory function thereafter, with recovery accelerated in guanosine-treated animals compared to controls (P < 0.05) (Fig. 5). Fig. 5Post-lesional sensitivity of hind limbs touched by a hot plate from day 0 (the same day as surgery, prior to treatment) to 28 days after spinal cord injury for groups of saline- and guanosine-treated animals. Values are the means ± SEM of the average time of withdrawal of left and right hind paws during contact with a hot plate. Compared with saline-treated control animals (n = 10), guanosine-treated animals (n = 11) had a significantly better improvement of their sensory function (*P < 0.05) from day 2 to 14 after injury Guanosine accelerated LUT functional recovery following SCI In saline-treated rats, the volume of manually expressed urine increased over the first 7 days after SCI and then decreased as spontaneous micturition was reestablished. On the basis of a previous study [48], the increase in the volume of expressed urine during the first week after injury was interpreted as resulting from increased bladder size in the absence of spontaneous micturition. The subsequent decrease in manually expressed urine indicates the initiation of spontaneous micturition (Fig. 6). In guanosine-treated rats, there was a significantly lower residual urine volume compared to the saline-treated animals (Fig. 6; P < 0.001). More importantly, the rats that received guanosine recovered their LUT function completely by 7 days after injury (Fig. 6). Fig. 6Time course of recovery of lower urinary tract function (spontaneous voiding). Urinary bladders were expressed every 12 h, and the collected urine volume was measured. Means ± SEM of the volume for each group. Compared with saline-treated control animals (n = 10), guanosine-treated animals (n = 11) had significantly less urine collected with time after SCI (*P < 0.05; **P < 0.001). The rats that received guanosine had empty bladders by day 7 after injury Guanosine attenuates macrophage and microglia activation, but not astrogliosis After SCI, early inflammatory reactions consisting of neutrophil and macrophage invasion as well as activation of microglia and astrocytes have been reported in earlier studies [50, 51]. In the present study, activated macrophages and microglia were labeled by the ED-1 immunoreaction. Guanosine treatment decreased the number of ED-1-immunopositive cells compared with saline treatment (Fig. 7a, b, e; P < 0.01). However, no differences were found between the saline and guanosine groups in the number of activated astroglia (astrogliosis), as indicated by GFAP immunostaining (Fig. 7c, d). Fig. 7Fluorescent immunostaining using antibodies against a marker (ED-1) for macrophages and activated microglia in cross sections of cords from saline-treated (a) and guanosine-treated (b) animals at the lesion site 4 weeks after injury. There were fewer ED-1-immunolabeled cells in the cords of guanosine-treated rats (b, e) compared to the cords of vehicle-treated rats (a, e; P < 0.01).
GFAP-immunofluorescent staining of cross sections of spinal cords from vehicle-treated (c) and guanosine-treated (d) animals at the lesion site 4 weeks after injury showed no difference in immunostaining of GFAP between the cords of guanosine-treated rats (d) and vehicle-treated rats (c). Scale bar = 50 μm for all Guanosine attenuates apoptotic cell death in the lesion site of the spinal cord In the present study, apoptotic cell death induced by traumatic spinal cord injury was determined with TUNEL staining. TUNEL-positive nuclei were not uniformly distributed throughout the cross section of the spinal cord but were more numerous in the vicinity of the injury center. Quantification of TUNEL-positive nuclei showed that there were significantly fewer TUNEL-positive nuclei in the spinal cords of guanosine-treated rats than in those of saline-treated control animals (Fig. 8; P < 0.001). Fig. 8TUNEL-positive apoptotic cells in spinal cord lesions were quantified by counting the total number of TUNEL-positive nuclei through entire cross sections. Compared with the cords from saline-treated animals (a, c), guanosine-treated cords had significantly fewer TUNEL-positive cells (b, c; P < 0.001). Scale bar = 50 μm for both a and b Guanosine increases axon and myelin sparing Axons were labeled with antibodies against RT-97 (a neuronal cytoskeletal protein; Fig. 9a, b) and myelin was labeled with antibodies against MBP (a specific marker for myelin; Fig. 9c, d). The immunohistochemistry showed that there were more axons (red, Fig. 9b) and myelin (green, Fig. 9d), as well as more myelinated axons (yellow, Fig. 9f), in the spinal cords from guanosine-treated animals than in the cords from saline-treated control rats (Fig. 9a, c, e), indicating that systemic administration of guanosine is associated with preservation of tissue including neuronal and glial elements. Fig. 9Immunostaining with antibodies against RT-97 for labeling axons (a, b) and against myelin basic protein (MBP) for central myelin (c, d) at the lesion site demonstrated the spared tissue. Cross sections from saline-treated (a) and guanosine-treated cords (b) demonstrate neurofilament (NF) immunoreactivity surrounding the lesion site. c, d Cross sections from saline-treated (c) and guanosine-treated (d) cords reveal myelin at the lesion site. e, f Merging of the two images demonstrates NF (red) and MBP (green) double fluorescent immunolabeling. There were more spared axons and more myelin around the lesion site in the cords from guanosine-treated animals compared to the saline-treated group. Scale bar = 50 μm for all Discussion These data are the first to demonstrate the ability of guanosine to act as a neuroprotective agent in vivo. These findings have potential clinical relevance. Previous studies have shown that after central nervous system injuries, the concentrations of guanosine are elevated around the injury and in CSF, sometimes for prolonged periods [32, 52]. Given the various trophic and antiapoptotic effects of guanosine in vitro [28, 34, 35], it seemed likely that increasing the concentration of guanosine in the central nervous system after injury in vivo might also be neuroprotective. We found that within 7 min of systemic administration, guanosine enters and progressively accumulates in the central nervous system (unpublished data), where it is converted to guanine. It remains to be determined whether guanosine, guanine, or both are the active neuroprotective agent.
However, guanosine is more readily deliverable than guanine as a potential neuroprotectant. The model of incomplete spinal cord injury that we have used is clinically relevant, since about 50% of patients with spinal trauma have incomplete injury [53]. After incomplete spinal cord injury, both reflex and voluntary motor functions below the level of the injury are initially lost; partial recovery occurs over time [41, 54, 55]. The recovery of functions mediated by supraspinally controlled reflexes is slow and incomplete, since these require the function of long tracts, many of which are irreversibly damaged by the injury [56, 57]. Recovery of locomotion and limb placement depends on ascending and descending spinal cord tracts, including cortico-, rubro-, reticulo-, vestibulo-, and raphespinal tracts [58]. The functional loss after spinal cord injury in rats involves interruption of descending serotonergic [59], reticulospinal, and other descending spinal tracts that facilitate segmental reflexes [40, 60]. The ascending spinothalamic tracts mediate the perception of pain and temperature below the level of the lesion. Since all of these motor and sensory functions were better in guanosine-treated animals than in control animals, it appears that guanosine preserved the function of multiple long tracts. This was associated with increased axonal survival and myelin preservation. Normal micturition requires coordinated activation of the bladder smooth muscle (detrusor) and the striated muscle of the external urethral sphincter, controlled by spinal and supraspinal circuitry [47]. Thus, after incomplete spinal cord injury, bladder function is initially lost but later partially recovers, the extent of recovery depending on the degree of preservation of white matter (and hence of the long tracts) at the injury site [48, 49]. Our data revealed that in guanosine-treated rats, not only was residual urine volume much less than in controls from the first day after the spinal cord injury, but by 7 days after spinal cord injury micturition had returned to normal. Because guanosine began to exert a neuroprotective effect on both motor and bladder function soon after it was first administered, its neuroprotective effect on each may involve similar mechanisms. The inherent complexity of the biological system, coupled with the many potential trophic actions of guanosine, makes it difficult to determine the mechanisms by which guanosine produces its effects. Nevertheless, neuroprotective effects were observed by 24 h after spinal cord injury. As guanosine was administered 4 h post-injury, it must have affected processes that were important between 4 and 24 h post-injury. Within this time frame after traumatic injury of the spinal cord, both inflammatory responses and apoptosis are prominent. Activation and proliferation of microglia/macrophages play an important role in the secondary damage following spinal cord injury. These cells, at the center of the injury cascade, are thus potential targets for neuroprotective treatments of acute SCI [21, 61]. As guanosine attenuated the activation and proliferation of microglia/macrophages following SCI, this may at least in part explain its neuroprotective effect. Guanosine has antiapoptotic effects in vitro [34, 35], and this is another potential mechanism through which guanosine might exert beneficial effects after spinal cord injury. Apoptosis after SCI has been described by many investigators [15, 62–67].
In these reports, early apoptosis of neural cells, including neurons, is followed by a delayed wave of predominantly oligodendroglial programmed cell death in degenerating white matter tracts [63–67]. Studies of apoptosis in white matter after injury raise the possibility that glial apoptosis occurs, at least in part, as a consequence of axonal degeneration [68, 69]. However, the presence of activated microglia in contact with apoptotic oligodendrocytes after SCI indicates that this interaction may also activate cell death programs in the oligodendrocyte [70]. Secondary axonal degeneration may then follow [71, 72]. In the present study, guanosine significantly suppressed apoptosis in the injured spinal cords when it was systemically administered daily for 2 weeks beginning 4 h after injury. It seems reasonable to postulate that decreasing apoptosis may be an important mechanism through which guanosine improves neurological outcome after SCI. Although the intracellular pathways through which guanosine suppresses apoptotic cell death following spinal cord injury are not known, a number of intracellular pathways which protect cells against apoptosis have been identified. These include the phosphatidylinositol 3-kinase (PI-3-K)/Akt/protein kinase B (PKB) pathway [73, 74] and the mitogen-activated protein (MAP) kinase pathway [75]. Our previous studies in vitro have shown that the antiapoptotic effect of guanosine is mediated by the activation of both the PI-3-K/Akt/PKB and MAPK pathways [34, 35]. Guanosine can also increase intracellular cAMP in various cell types [36, 37], and increases in cAMP after spinal cord injury have been shown to enhance recovery [26, 27]. However, it appears unlikely that increases in intracellular cAMP are responsible for the effects of guanosine, since the principal effect of cAMP in enhancing recovery from spinal cord injury appears to be due to its ability to promote axonal regeneration. In contrast, the neuroprotective effects we observed early after administration of guanosine are unlikely to be explained on the basis of outgrowth of damaged axons; a much longer time frame would be required for this to occur. A further possible mechanism by which guanosine might exert its neuroprotective effects is by stimulating cells to release and increase synthesis of various trophic factors, such as basic fibroblast growth factor (bFGF), nerve growth factor (NGF), and neurotrophin-3 (NT-3) [28]. Such trophic factors can contribute to tissue preservation after trauma [67]. The ability of guanosine to stimulate production and release of trophic factors may not only have early effects, but may also contribute to the reduced number of apoptotic cells observed 3 weeks after the injury. This study is important not only because it is the first to demonstrate the neuroprotective effect of systemically administered guanosine in acute spinal cord injury, but also because it indicates some of the potential mechanisms whereby guanosine may exert its neuroprotective effects in vivo. This work provides a basis for further exploration of the purinergic mechanisms underlying the neuroprotective effects of exogenous guanosine. Furthermore, and of potential clinical importance, guanosine was effective when it was administered 4 h after the injury—a realistic time frame in which to initiate treatment after spinal cord injury in humans.
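To make the statistical procedure named in Materials and methods concrete (Kruskal-Wallis nonparametric analysis of variance followed by Mann-Whitney U tests), here is a minimal sketch in Python using SciPy. The day-28 BBB scores are hypothetical, chosen only to resemble the reported group means; with two groups, the Kruskal-Wallis test serves as an omnibus check before the pairwise comparison.

```python
# Minimal sketch of the nonparametric tests applied to behavioral scores:
# Kruskal-Wallis across groups, then a Mann-Whitney U pairwise test.
# All scores are hypothetical, not the study data.
from scipy.stats import kruskal, mannwhitneyu

saline = [8, 9, 9, 10, 8, 9, 10, 9, 8, 10]                # n = 10, hypothetical
guanosine = [13, 14, 15, 14, 13, 14, 15, 14, 13, 14, 15]  # n = 11, hypothetical

h_stat, p_kw = kruskal(saline, guanosine)
u_stat, p_mw = mannwhitneyu(saline, guanosine, alternative="two-sided")
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.4f}")
print(f"Mann-Whitney U: U = {u_stat:.1f}, p = {p_mw:.4f}")
```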
[ "guanosine", "apoptosis", "inflammation", "spinal cord injury", "cell death", "myelin", "immunohistochemistry", "glia", "locomotor and sensory function" ]
[ "P", "P", "P", "P", "P", "P", "P", "U", "R" ]
Behav_Genet-4-1-2226020
Genetic and Environmental Influences on the Relation Between Attention Problems and Attention Deficit Hyperactivity Disorder
Objective The assessment of symptoms of ADHD in children is usually based on a clinical interview or a behavior checklist. The aim of the present study is to investigate the extent to which these instruments measure an underlying construct and to estimate the genetic and environmental influences on individual differences in ADHD. Methods Maternal ratings were collected on 10,916 twins from 5,458 families. Child Behavior Checklist (CBCL) ratings were available for 10,018, 6,565, and 5,780 twins at ages 7, 10, and 12, respectively. The Conners Rating Scale (4,887 twins) and the DSM interview (1,006 twins) were completed at age 12. The magnitude of genetic and environmental influences on the variance of, and the covariance among, the three measures of ADHD was estimated. Results Phenotypic correlations range between .45 and .77. The variances and covariances of the measurements were explained mainly by genetic influences. The model that provided the best account of the data included an independent pathway for additive and dominant genetic effects. The genetic correlations among the measures collected at age 12 varied between .63 and 1.00. Conclusions The genetic overlap between questionnaire ratings and the DSM-IV diagnosis of ADHD is high. Clinical and research implications of these findings are presented. Introduction As is the case for all psychiatric disorders, the diagnosis of attention deficit hyperactivity disorder (ADHD) is not based on a specific pathological agent, such as a microbe, a toxin, or a genetic mutation, but rather on the collection of signs and symptoms and evidence of impairment that occur together more frequently than expected by chance (Todd et al. 2005). The presence of these symptoms is usually established by direct observation, or by the completion of a clinical interview or questionnaire by the parent or teacher of a child. Instruments vary with respect to the included symptoms, the exact manner of data collection (checklist or interview), and the response format (e.g., yes/no versus Likert scale). In the present paper, we investigated whether the (co)variance of the scores on different instruments can be explained by a common underlying construct and to what extent this common factor is influenced by genetic and environmental factors. The focus is on three widely used instruments: the Child Behavior Checklist (CBCL; Achenbach 1991), the Conners Parent Rating Scale-Revised:Short version (CPRS-R:S; Conners 2001), and the Diagnostic and Statistical Manual of Mental Disorders-4th edition (DSM-IV; American Psychiatric Association 1994). The CBCL-Attention Problem scale (CBCL-AP) was developed by means of factor analyses and includes eleven items. The psychometric properties and methods to establish the reliability of the syndrome are discussed in detail elsewhere (Achenbach 1991). Despite its name, the scale assesses problems related both to attention and to hyperactivity. The CBCL has sex- and age-specific norms, which are useful in assessing a child's risk for ADHD. The CPRS-R:S ADHD-index comprises the 12 items that best distinguish children with ADHD from children without ADHD as assessed by the DSM (Conners 2001). As with the CBCL, sex- and age-specific norm scores are available, allowing the clinician to determine whether a given child is at risk for ADHD. DSM-IV ADHD is assessed on the basis of 18 symptoms; nine relate to inattention, and nine relate to hyperactivity/impulsivity.
In the DSM framework, ADHD is viewed as a categorical trait; i.e., children either do or do not meet criteria for ADHD. The norms for clinical diagnosis do not vary as a function of the sex or age of the child. Table 1 contains the symptoms included in the CBCL-AP scale, the CPRS-R:S ADHD-index and DSM-IV ADHD.

Table 1 An overview of the Child Behaviour Checklist (CBCL), Conners Parent Rating Scale-Revised:Short version (CPRS-R:S), and Diagnostic and Statistical Manual of Mental Disorders-4th edition (DSM-IV) symptoms

CBCL Attention Problems:
- Acts too young for his/her age
- Can't concentrate, can't pay attention for long
- Can't sit still, restless, or hyperactive
- Confused or seems to be in a fog
- Daydreams or gets lost in his/her thoughts
- Impulsive or acts without thinking
- Nervous, high-strung, or tense
- Nervous movements or twitching
- Poor school work
- Poorly coordinated or clumsy
- Stares blankly

CPRS-R:S ADHD-index:
- Inattentive, easily distracted
- Short attention span
- Fidgets with hands or feet or squirms in seat
- Messy or disorganized at home or school
- Only attends if it is something he/she is very interested in
- Distractibility or attention span a problem
- Avoids, expresses reluctance about, or has difficulties engaging in tasks that require sustained mental effort (such as schoolwork or homework)
- Gets distracted when given instructions to do something
- Has trouble concentrating in class
- Leaves seat in classroom or in other situations in which remaining seated is expected
- Does not follow through on instructions and fails to finish schoolwork, chores, or duties in the workplace (not due to oppositional behavior or failure to understand directions)
- Easily frustrated in efforts

DSM-IV ADHD, Inattention:
- Often fails to give close attention to details or makes careless mistakes in schoolwork, work, or other activities
- Often has difficulty sustaining attention in tasks or play activities
- Often does not seem to listen when spoken to directly
- Often does not follow through on instructions and fails to finish schoolwork, chores, or duties in the workplace (not due to oppositional behavior or failure to understand instructions)
- Often has difficulty organizing tasks and activities
- Often avoids, dislikes, or is reluctant to engage in tasks that require sustained mental effort (such as schoolwork or homework)
- Often loses things necessary for tasks or activities (e.g., toys, school assignments, pencils, books, or tools)
- Is often easily distracted by extraneous stimuli
- Is often forgetful in daily activities

DSM-IV ADHD, Hyperactivity:
- Often fidgets with hands or feet or squirms in seat
- Often leaves seat in classroom or in other situations in which remaining seated is expected
- Often runs about or climbs excessively in situations in which it is inappropriate
- Often has difficulty playing or engaging in leisure activities quietly
- Is often "on the go" or often acts as if "driven by a motor"
- Often talks excessively
- Often blurts out answers before questions have been completed
- Often has difficulty awaiting turn
- Often interrupts or intrudes on others (e.g., butts into conversations or games)

Although the CBCL, DSM, and CPRS-R:S focus on different symptoms and are based on distinct assumptions, the scores on these instruments are strongly related. CBCL-AP scores predict the presence of ADHD (Gould et al. 1993; Chen et al. 1994; Eiraldi et al. 2000; Lengua et al. 2001; Sprafkin et al. 2002; Hudziak et al. 2004).
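The contrast between the dimensional (CBCL, CPRS-R:S) and categorical (DSM) frameworks can be made concrete with a small sketch in Python. The six-symptom threshold is the conventional DSM-IV cutoff and is assumed here for illustration; the remaining DSM criteria (age of onset, impairment, cross-situational presence) are omitted.

```python
# Minimal sketch contrasting dimensional checklist scoring with a
# DSM-style categorical call. The >= 6 symptom threshold is assumed
# (conventional DSM-IV cutoff); the other DSM criteria are omitted.

def cbcl_ap_score(item_ratings):
    """Dimensional score: sum of 0/1/2 ratings on the 11 CBCL-AP items."""
    return sum(item_ratings)

def dsm_iv_positive(inattention, hyperactivity, threshold=6):
    """Categorical call: at least `threshold` symptoms endorsed (True per
    symptom) in either the inattention or the hyperactivity domain."""
    return sum(inattention) >= threshold or sum(hyperactivity) >= threshold

# Hypothetical child: 11 CBCL items rated 0-2; 9 + 9 DSM symptoms as booleans.
print(cbcl_ap_score([2, 1, 2, 0, 1, 2, 0, 1, 2, 1, 1]))                     # 13
print(dsm_iv_positive([True] * 7 + [False] * 2, [True] * 3 + [False] * 6))  # True
```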
In a non-referred sample enriched for ADHD, about 50% of the children with a high CBCL-AP score were diagnosed with ADHD, compared to 3% of the children with a low CBCL-AP score (Derks et al. 2006). Although these results imply a good convergence between the CBCL and a DSM-IV interview, the relation is clearly less than perfect. The CPRS-R:S ADHD-I was developed for assessing children at risk for ADHD based on a DSM-IV diagnosis (Conners 2001). Conners (2001) showed that the CPRS-R:S ADHD-I is a good screening instrument for DSM-IV ADHD, with a sensitivity of 100%, a specificity of 92.5%, and an overall correct classification rate of 96.3%. As far as we know, the relation between CBCL-AP and the CPRS-R:S ADHD-I has not been studied, but given that both are related to DSM-IV ADHD, the two are likely to be correlated. Genetic studies of psychiatric disorders are complicated by the lack of clear diagnostic tests (Hudziak 2001). Heritability estimates in epidemiological genetic studies and the results of gene-finding studies may depend on the exact instrument that is used to assess ADHD. Although a number of papers have established the convergence between CBCL-AP and DSM-IV ADHD, the causal factors underpinning this relationship remain unclear. Is it the result of genetic overlap, environmental overlap, or both? This is an important question, which may determine progress in gene-finding studies. If variance in alternative measures of ADHD is explained by different genes, we would expect disagreement in the results of studies using different instruments. If the same genes explain variance in these measures, the data from studies using different instruments may be combined in order to increase statistical power (Boomsma 1996; Boomsma and Dolan 1998). Assuming that the convergence between different instruments will be less than perfect, part of the variance will be attributable to instrument-specific factors. It is important to investigate the nature of such factors. If the divergence among instruments is merely a matter of measurement error, we would expect no genetic influences on the instrument-specific factors. Genetic influences on the instrument-specific factors, on the other hand, would suggest that the instruments tap partly unique aspects of children’s behavior. Genetic and environmental influences on individual differences in behavior can be studied in genetically informative designs, such as the classical twin design. Such studies have shown that genetic influences explain between 55 and 89% of the variance in clinical diagnoses of ADHD (Eaves et al. 1997; Sherman et al. 1997). Shared environmental influences were nearly always absent. Likewise, about 70–80% of the variance in CBCL-AP scores is explained by genetic influences; the remaining variance is explained by non-shared environmental influences (Rietveld et al. 2003; Hudziak et al. 2000; Gjone et al. 1996). Kuntsi and Stevenson (2001) used the Conners Rating Scale to assess symptoms of ADHD and reported a heritability of 72%. A review of genetic studies on AP, HI and ADHD suggested the absence of qualitative and quantitative sex differences in the genetic etiology of parent ratings of ADHD (Derks et al. in press). Interestingly, in parent ratings, but not in teacher ratings, the DZ twin concordances and correlations are lower than would be expected under a purely additive genetic model. For example, in maternal structured interview reports, the concordance rate is .67 in MZ twins, but .00 in DZ twins (Sherman et al. 1997).
Similarly, in CBCL ratings, the DZ twin correlations are less than half the MZ correlations (Rietveld et al. 2003). In the literature, two explanations are offered for these low DZ correlations. Firstly, the DZ correlation can be less than half the MZ correlation due to the presence of non-additive genetic effects (i.e., genetic dominance) (Lynch and Walsh 1998). Secondly, the low DZ correlation may be explained by social interaction effects, which may be the result of interaction among siblings (i.e., the behavior of a twin influences the behavior of the other twin) or rater bias (i.e., the behavior of a twin is compared to the behavior of the other twin) (Eaves 1976; Carey 1986; Boomsma 2005). In previous studies, support was found both for the presence of genetic dominance (Rietveld et al. 2003; Martin et al. 2002) and sibling interaction (Simonoff et al. 1998; Kuntsi and Stevenson 2001; Vierikko et al. 2004; Eaves et al. 1997). A high heritability of attention problems and ADHD has been reported, irrespective of the instrument that is used. However, based on the findings of univariate studies, we cannot conclude that CBCL, Conners Rating Scale, and DSM ratings measure the same construct, or that they are influenced by the same set of genes. To address this question, multivariate analyses are needed. Although a number of studies have focused on the genetic and environmental influences on either AP or ADHD, only the study of Nadder and Silberg (2001) included multivariate analyses. Nadder and Silberg (2001) analyzed data obtained in a sample of 735 male and 819 female same-sex twin pairs, aged 8–16 years. They modelled the genetic influences on nine measures of ADHD symptomatology, including maternal and paternal DSM-III-R interview data (three dimensions: hyperactivity, inattention and impulsivity), maternal questionnaire data (the Rutter Parental Scale and the CBCL), and a questionnaire completed by the twin’s teacher. The aim of this study was to determine whether overactivity, inattention, and impulsivity reflect the same underlying genetic liability, while taking method (i.e., instrument-specific) variance into account. In males, 23.7–70.1% of the genetic variance was explained by a common factor that loaded on all nine indicators. A second and third factor loaded on the three dimensions of the maternal and paternal interview data, respectively. The remaining variance (0.0–65.7%) was explained by factors that were specific to each measure. In females, there was also one factor common to all indicators (explaining 16.2–60.2% of the variance), and a second and third factor that loaded on the three dimensions of the interview data. In contrast to the males, a fourth factor loaded on the three behavioral questionnaires. This factor explained 12.3–46.2% of the genetic variance. In total, measurement-specific factors explained 0.0–73.0% of the genetic variance. The purpose of the present paper is to investigate the construct validity of CBCL-AP, CPRS-R:S ADHD-I, and DSM-IV ADHD. Three questions are addressed. First, what are the phenotypic correlations between the three instruments? Second, do the instruments reflect a common underlying factor? Third, what are the genetic and environmental influences on the common and the instrument-specific factors?

Methods

Subjects

This study is part of an ongoing longitudinal twin study in the Netherlands. The subjects were all registered at birth with the Netherlands Twin Register (Boomsma et al. 2002, 2006; Bartels et al. 2007).
Mothers of the registered twin pairs receive the CBCL and the CPRS at the ages 7, 10, and 12 years. A subsample of the twins was selected based on their longitudinal CBCL scores, and the mothers of these pairs completed a diagnostic interview. The twins, with an age range of 10–13 years (mean age = 11.71; SD = .77) at the time of the interview, were born between 1989 and 1994. The mean time-span between the completion of the interview and the questionnaires was 4.42 (SD = .75), 1.82 (SD = .73), and −.84 (SD = .63) years for the questionnaires completed at age 7, age 10, and age 12, respectively. Questionnaires were sent at the ages 7, 10, and 12 years to all families that agreed to participate in the research of the Netherlands Twin Register when the children were born (N = 7,828 families; birth cohorts 1989–1994). At least one measurement is available for 10,916 twins from 5,458 families, a response rate of 70%. CBCL ratings were available in 10,018 twins at age 7, 6,565 twins at age 10, and 5,780 twins at age 12. CPRS-R:S ratings were available for 4,887 twins at age 12, and DSM-IV interviews were available for 1,006 twins. Complete data were available in 740 twins. The number of CPRS-R:S ratings is lower than the number of CBCL ratings because the CPRS-R:S was not included for children born before 1991. The number of available questionnaires decreases over time as a result of the longitudinal character of the study (i.e., a number of children in the study had yet to reach the age of 12). Zygosity diagnosis was based on DNA in 674 same-sex twin pairs. In the remaining same-sex pairs, zygosity was assessed using a 10-item questionnaire. Zygosity determination using this questionnaire is almost 95% accurate (Rietveld et al. 2000). Of the 5,458 twin pairs, there were 898 monozygotic male (MZM) pairs, 888 dizygotic male (DZM) pairs, 1,005 monozygotic female (MZF) pairs, 844 dizygotic female (DZF) pairs, and 1,823 dizygotic opposite-sex (DOS) pairs.

Selection for the diagnostic interview

For the diagnostic interview, subjects were selected on the basis of their standardized maternal CBCL ratings (T-scores; mean = 50, SD = 10) at the ages 7, 10, and 12 years (Derks et al. 2006). Subjects were excluded if maternal ratings were available at only one time-point, or if they suffered from a severe handicap that disrupted daily functioning. Twin pairs were selected if at least one of the twins scored high on AP (affected pairs), or if both twins scored low on AP (control pairs). A high score was defined as a T-score above 60 at all available time-points (age 7, 10, and 12 years) and a T-score above 65 at least once. A low score was defined as a T-score below 55 at all available time-points. The control pairs were matched with the affected pairs on the basis of sex, cohort, maternal age, and socioeconomic status (SES). T-scores were computed in boys and girls separately. In other words, girls were selected if they scored low or high compared to other girls, and boys were selected if they scored low or high compared to other boys. This procedure resulted in the selection of similar numbers of boys (N = 499) and girls (N = 507).

Measures

The Child Behavior Checklist (CBCL) (Achenbach 1991) is a standardized questionnaire designed for parents to report the frequency and intensity of their children’s behavioral and emotional problems as exhibited in the past 6 months. It consists of 120 items that measure problem behavior.
The items are rated on a 3-point scale ranging from “not true = 0”, “somewhat or sometimes true = 1”, to “very true or often true = 2”. The Attention Problem scale contains 11 items. The 2-week test–retest correlation and the internal consistency of this scale are .83 and .67, respectively (Verhulst et al. 1996). In the statistical analyses, we included the CBCL ratings at the ages 7, 10, and 12 years in order to correct for the selection, as explained below. The Conners’ Parent Rating Scale-Revised is a widely used instrument to assess behavior problems in the past month (CPRS-R; Conners 2001; Conners et al. 1998). The short version contains 28 items. The items are rated on a 4-point scale ranging from “not true at all = 0” to “very much true = 3”. The CPRS-R:S ADHD-I, which was used in the present study, comprises the best 12 items for distinguishing children with ADHD from children without ADHD as assessed by the DSM-IV (American Psychiatric Association 1994; Conners 2001). The internal consistency of this scale at age 12–14 years is .94 in boys and .91 in girls. The 6–8 weeks test–retest correlation is .72. The Diagnostic Interview Schedule for Children (DISC) (Shaffer et al. 1993) is a structured diagnostic interview. It can be used to assess the presence of DSM-IV diagnoses, including ADHD. The Dutch translation is by Ferdinand and van der Ende (1998). The mothers of the twins were interviewed by ten experienced research assistants to determine which symptoms of ADHD were displayed by the twins during the last year. Maternal ratings of DISC symptoms in their children were assessed by the same interviewer for each twin in a given pair. We analyzed the total number of symptoms.

Statistical analyses

Transformation to categorical data

The distributions of the CBCL, CPRS-R:S, and DSM symptom data are characterized by excessive skewness and kurtosis. Derks et al. (2004) showed that bias in parameter estimates due to non-normality of the data may be avoided by using categorical data analysis. In this approach, a liability threshold model is applied to the ordinal scores (Lynch and Walsh 1998). It is assumed that a person is “unaffected” if his or her liability is below a certain threshold, and “affected” if it is above this threshold. In the present paper, the scores were recoded in such a way that three thresholds divide the latent liability distribution into four categories of about equal size. The liability threshold model was identified by constraining the variance of the observed variables at 1. The CBCL-AP score was calculated by summing the responses on the 11 items, which resulted in a sum score with a possible maximum of 22. The four categories consisted of scores of 0, 1–2, 3–5, and 6 or higher, respectively. The CPRS-R:S ADHD-I score was calculated by summing the responses on the 12 items, which resulted in a sum score with a possible maximum of 36. The four categories consisted of scores of 0–1, 2–5, 6–11, and 12 or higher, respectively. The DISC sum score, with a range of 0 to 18, was transformed into an ordinal variable with four categories: (i) not affected (0 symptoms); (ii) mildly affected (1–2 symptoms); (iii) moderately affected (3–5 symptoms); and (iv) highly affected (6 or more symptoms). The use of this four-category variable provides greater resolution, and thus better statistical power, than the use of a dichotomous variable (ADHD absent versus ADHD present).
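Because the cut-points above fully specify the recoding, the step is easy to make concrete. The following is a minimal illustrative sketch in R (not the authors' code; the scores are hypothetical) of how raw sum scores map onto the four ordinal categories:

```r
# Recoding raw sum scores into four ordinal liability categories,
# following the cut-points described in the text. Scores are hypothetical.
cbcl_ap   <- c(0, 2, 4, 9)   # CBCL-AP: 11 items scored 0-2, maximum 22
cprs_adhd <- c(1, 3, 8, 20)  # CPRS-R:S ADHD-I: 12 items scored 0-3, maximum 36
disc_sym  <- c(0, 2, 5, 11)  # DISC: number of DSM-IV ADHD symptoms, 0-18

# cut() assigns each score to one of four categories (labeled 0-3);
# intervals are closed on the right, so e.g. (2, 5] captures scores 3-5
cbcl_cat <- cut(cbcl_ap,   breaks = c(-1, 0, 2, 5, 22),  labels = 0:3)
cprs_cat <- cut(cprs_adhd, breaks = c(-1, 1, 5, 11, 36), labels = 0:3)
disc_cat <- cut(disc_sym,  breaks = c(-1, 0, 2, 5, 18),  labels = 0:3)

table(cbcl_cat)  # in the full sample the four categories are roughly equal in size
```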
Correcting for the selection

Diagnostic interview data were collected only in a subsample of the twins. The probability of selection for the interview depends on a measured variable, namely the twin’s CBCL scores at ages 7, 10, and 12. The data of the complete sample may be partitioned into the observed (selected) and missing (unselected) parts. The data are missing at random (MAR) if the probability of missingness depends only on the observed part of the data, and not on the missing part (Little and Rubin 2002). Given that the data are MAR, unbiased parameter estimates can be obtained by full information (i.e., raw data) maximum likelihood estimation of the parameters in a statistical model that includes the variables that were used for selection. It is essential to include all variables that were used for selection, because otherwise the probability of missingness would depend on the missing part of the data, in which case the data would be missing not at random (MNAR) and parameter estimates would be biased. We therefore included the CBCL ratings obtained at ages 7, 10, and 12 years in the statistical analyses. All twin pairs in which at least one measure is available are included in the analyses.

Prevalences

To investigate whether the prevalences of AP and ADHD depend on the twin’s sex or zygosity, we performed χ²-tests with the five ordinal measures as dependent variables and sex and zygosity as independent variables.

Genetic modeling

Genetic and environmental influences on variance in ADHD scores were estimated using structural equation modeling. All model fitting was performed on raw data with Mx (Neale et al. 2003), a statistical software package well suited for conducting genetic analyses. The relative contributions of genetic and environmental factors to individual differences in ADHD can be inferred from the differences in correlations across MZ and DZ twin pairs, as MZ and DZ twins differ in their genetic relatedness (Plomin et al. 2001). Using the twin method, phenotypic variance may be attributed to additive genetic effects (A), dominant genetic effects (D) or shared environmental effects (C), and non-shared environmental (E) effects. The genetic effects (A and D) correlate 1 in MZ twins, as they are genetically identical. In DZ twins, A correlates .5 and D correlates .25. C correlates 1 in both MZ and DZ twins. E, or non-shared environmental effects, are by definition uncorrelated. Uncorrelated measurement error, if present, is absorbed in the E term. Note that estimating C and D at the same time is not possible in a design using only data from MZ and DZ twins reared together. If the correlations of DZ twins are less than half the correlations of MZ twins, which is the case for maternal ratings of attention problems and ADHD, D is included in the genetic model. The proportion of the variance accounted for by genetic or environmental influences is obtained as the ratio of the variance due to A, D, or E to the total phenotypic variance. For instance, let a, d, and e denote the regression coefficients in the regression of the phenotype on the standardized latent variables A, D, and E, respectively. The variance due to A is then a², and the (narrow-sense) heritability is calculated as a²/(a² + d² + e²). Social interactions may be an additional source of variance. Social interaction effects lead to differences in variances between MZ and DZ twins in continuous data (Carey 1986).
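The ADE decomposition and the expected twin correlations just described can be made concrete in a few lines of R. This is an illustrative sketch with hypothetical path coefficients, not the authors' Mx script:

```r
# ADE variance decomposition for a single phenotype; a, d, e are
# hypothetical path coefficients of the latent A, D, and E factors
a <- 0.64; d <- 0.60; e <- 0.48

VA <- a^2; VD <- d^2; VE <- e^2   # variance components
VP <- VA + VD + VE                # total phenotypic variance

h2_narrow <- VA / VP              # narrow-sense heritability: a^2/(a^2 + d^2 + e^2)
h2_broad  <- (VA + VD) / VP       # broad-sense: additive plus dominance

# Expected twin correlations under ADE: MZ twins share all of A and D,
# DZ twins share 1/2 of A and 1/4 of D
rMZ <- (1.00 * VA + 1.00 * VD) / VP
rDZ <- (0.50 * VA + 0.25 * VD) / VP

round(c(rMZ = rMZ, rDZ = rDZ, half_rMZ = rMZ / 2), 2)
```

With these values the sketch gives rMZ = .77 and rDZ = .29, i.e., rDZ below half of rMZ: whenever d > 0 the DZ correlation drops below half the MZ correlation, which is exactly the pattern reported for maternal ratings of AP and ADHD and the reason D rather than C is included in the model.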
With ordinal data, the presence of an interaction component can be tested by comparing the prevalences of AP/ADHD between MZ and DZ twins; the absence of significant prevalence differences suggests that sibling interaction and rater bias are implausible. Three multivariate models were tested: a triangular (Cholesky) decomposition, an independent pathway model, and a common pathway model (Neale and Cardon 1992). The triangular decomposition is the least restrictive model, as no specific hypotheses regarding the covariance matrices of A, D, and E are tested; these matrices are merely assumed to be positive (semi)definite. This is a saturated model that can be used to obtain (otherwise unconstrained) genetic and environmental correlations among traits. In the independent pathway model, common and specific genetic and environmental factors are included. In our analyses of the five variables, we specified a common factor and five instrument-specific factors for each of A, D, and E; the common factors are denoted Ac, Dc, and Ec. An independent pathway model provides a good fit to the data if the covariance between the five variables is due to the common factors Ac, Dc, and Ec. Finally, in the common pathway model, a model that is nested under the independent pathway model, it is assumed that genes and environment explain variance in a latent phenotype. This latent factor, whose variance is constrained at 1, explains variance in the five variables. In addition, the variance of the five variables is allowed to be influenced by instrument-specific influences of A, D, and E. In other words, the common pathway model provides a good fit to the data if the covariance between the five variables can be explained by a single latent construct. Because the number of twins for whom interview data are available is relatively small, and sex differences in heritability are usually not found, the data from male and female twins were combined in the analyses. To allow for prevalence differences between boys and girls, sex was included as a covariate on the thresholds. The type-I error rate of all statistical tests was set at .05.

Results

Descriptives

The prevalences of the five measures were compared between MZ and DZ twins and between boys and girls. The first model that was fitted to the data was a fully saturated model. In this model, 90 correlations were estimated, 45 in MZ twins and 45 in DZ twins. In addition, the model included 30 thresholds in each of the following groups: MZ boys, DZ boys, MZ girls, and DZ girls, resulting in a total of 120 estimated thresholds. Next, a model was fitted that included a number of constraints on the thresholds. This model included 30 thresholds, 1 sex effect on the thresholds, and 5 zygosity effects on the thresholds (one for each of the five measurements). As this model fitted the data well, it was used as the reference model to test for prevalence differences as a function of zygosity for each of the five measurements. The results of these analyses are summarized in Table 3. Zygosity did not affect the prevalences of the CBCL, CPRS-R:S, and DSM scores. In view of the absence of prevalence differences between MZ and DZ twins, social interaction effects were not included in the genetic model. The model that was used as the reference model to test for sex differences included as free parameters 30 thresholds, 1 zygosity effect on the thresholds, and five sex effects on the thresholds, one for each measurement.
The results showed that boys have significantly more problems than girls on all five measurements; therefore, sex was included as a covariate on the thresholds. Because of the use of categorical scores in the present paper, we do not report means and standard deviations of the CBCL, CPRS-R:S and DSM scores; these descriptives can be requested from the corresponding author by interested readers.

Twin correlations

The polychoric correlations between the five measurements are shown in Table 2 for MZ and DZ twins. The MZ (DZ) twin correlations are reported above (below) the diagonal. As expected, the phenotypic correlations (i.e., the correlations between traits within the same individual) are similar in first- and second-born twins and in MZ and DZ twins. The correlations range from .45 to .77, with slightly lower correlations between different assessment methods (e.g., CBCL questionnaire versus clinical interview) than between similar assessment methods (e.g., CBCL questionnaire versus CPRS-R:S questionnaire). Equating the correlations of first- and second-born twins at age 12, the phenotypic correlation between CBCL-AP and CPRS-R:S was .75, while the correlations between CBCL-AP and DSM, and between CPRS-R:S and DSM, were .62. The fact that the cross-twin and the cross-trait cross-twin correlations are higher in MZ than in DZ twins indicates that genetic influences contribute to the variance of the three measures and to the covariance between them.

Table 2 Polychoric correlations in monozygotic (above diagonal) and dizygotic (below diagonal) twins

                       First-born twin                     Second-born twin
               CBCL7  CBCL10  CBCL12  CPRS   DSM    CBCL7  CBCL10  CBCL12  CPRS   DSM
First-born
CBCL age 7       1     .66     .62    .51    .59     .76    .54     .49    .45    .45
CBCL age 10     .70     1      .69    .61    .59     .56    .77     .58    .53    .48
CBCL age 12     .63    .74      1     .71    .57     .48    .54     .75    .58    .53
CPRS-R:S        .56    .68     .77     1     .60     .46    .55     .62    .84    .51
DSM             .51    .55     .59    .68     1      .34    .41     .46    .46    .64
Second-born
CBCL age 7      .31    .22     .18    .15    .04      1     .66     .63    .52    .46
CBCL age 10     .22    .35     .22    .21    .01     .66     1      .71    .64    .59
CBCL age 12     .21    .28     .34    .24    .13     .60    .72      1     .75    .58
CPRS-R:S        .22    .27     .28    .38    .08     .49    .64     .74     1     .60
DSM             .11    .16     .11    .07    .13     .45    .63     .67    .58     1

Note: CBCL = Child Behavior Checklist; CPRS = Conners Parent Rating Scale-Revised:Short version ADHD-index; DSM = Diagnostic Statistical Manual of Mental Disorders

Genetic analyses

A Cholesky decomposition that included additive genetic influences (A), dominant genetic influences (D), and non-shared environmental influences (E) was fitted to the data. The full ADE Cholesky decomposition fitted the data well (χ²(50) = 59.03, P = .180); see Table 3 for an overview of the model fitting results. Next, an independent pathway model was fitted to the data. Imposition of the independent pathway model for A, D, and E resulted in a significant deterioration in fit compared to the Cholesky decomposition (χ²(15) = 42.42, P < .001). Additional analyses showed that the influences of A and D were consistent with the independent pathway model, whereas the influence of E was not. A model that incorporated an independent pathway model for A and D and a Cholesky decomposition for E fitted well compared to the full Cholesky decomposition (χ²(10) = 16.45, P = .087). The fit of the common pathway model was poor (χ²(23) = 259.12, P < .001). Next, we tested whether the instrument-specific influences of A and D could be constrained at zero. The instrument-specific additive genetic factors could not be dropped from the model (χ²(5) = 91.80, P < .001). In contrast, the dominant genetic variance could be explained by one common factor (χ²(5) = 1.06, P = .96).
In other words, the covariance structure of D did not include specific variances. This means that this covariance matrix has rank one, and that the correlations (obtained by standardizing the covariance matrix of D) were all one. Figure 1 provides a graphical representation of the genetic part of the best-fitting model and includes the unstandardized factor loadings of the additive genetic and dominant genetic factors.

Table 3 Multivariate model fitting of maternal CBCL, CPRS-R:S and DSM-IV ratings of attention problems and ADHD

Model | −2 log LL | N par | With model | d.f. | χ² | P
1. Fully saturated | 63020.52 | 210 | – | – | – | –
2. Thresholds MZ/DZ free, thresholds boys/girls equated | 63123.54 | 126 | 1 | 84 | 103.02 | .08
2a. Thresholds CBCL age 7 equated in MZ/DZ | 63124.66 | 125 | 2 | 1 | 1.11 | .29
2b. Thresholds CBCL age 10 equated in MZ/DZ | 63123.98 | 125 | 2 | 1 | .43 | .51
2c. Thresholds CBCL age 12 equated in MZ/DZ | 63123.68 | 125 | 2 | 1 | .14 | .71
2d. Thresholds Conners age 12 equated in MZ/DZ | 63123.60 | 125 | 2 | 1 | .06 | .81
2e. Thresholds DSM age 12 equated in MZ/DZ | 63126.34 | 125 | 2 | 1 | 2.80 | .09
3. Thresholds boys/girls free, thresholds MZ/DZ equated | 63108.18 | 126 | 1 | 84 | 87.66 | .37
3a. Thresholds CBCL age 7 equated | 63423.19 | 125 | 3 | 1 | 315.01 | <.001
3b. Thresholds CBCL age 10 equated | 63395.18 | 125 | 3 | 1 | 287.00 | <.001
3c. Thresholds CBCL age 12 equated | 63321.32 | 125 | 3 | 1 | 213.14 | <.001
3d. Thresholds Conners age 12 equated | 63388.24 | 125 | 3 | 1 | 280.06 | <.001
3e. Thresholds DSM age 12 equated | 63137.03 | 125 | 3 | 1 | 28.85 | <.001
4. Cholesky decomposition ADE | 63147.41 | 102 | 1 | 50 | 59.03 | .18
4a. Independent pathway model D; Cholesky decomposition AE | 63149.40 | 97 | 4 | 5 | 1.99 | .85
4b. Independent pathway model A; Cholesky decomposition DE | 63151.88 | 97 | 4 | 5 | 4.47 | .48
4c. Independent pathway model E; Cholesky decomposition AD | 63170.06 | 97 | 4 | 5 | 22.65 | <.001
4d. Independent pathway AD; Cholesky decomposition E | 63163.86 | 92 | 4 | 10 | 16.45 | .09
4e. Independent pathway model ADE | 63189.83 | 87 | 4 | 15 | 42.42 | <.001
4f. Independent pathway AD; Cholesky decomposition E, instrument-specific A factors dropped | 63255.66 | 87 | 4d | 5 | 91.80 | <.001
4g. Independent pathway AD; Cholesky decomposition E, instrument-specific D factors dropped | 63164.92 | 87 | 4d | 5 | 1.06 | .96
5. Common pathway model | 63406.53 | 79 | 4 | 23 | 259.12 | <.001

Fig. 1 A graphical representation of the unstandardized additive genetic (A) and dominant genetic (D) effects on five measurements of attention problems and ADHD. The figure shows the best-fitting model and the estimated factor loadings for one individual twin. Additive genetic effects correlate 1 in MZ twins and .5 in DZ twins; dominant genetic effects correlate 1 in MZ twins and .25 in DZ twins. To identify the model, the variances of the five categorical measurements are constrained at 1. CBCL7 = Child Behavior Checklist at age 7; CBCL10 = Child Behavior Checklist at age 10; CBCL12 = Child Behavior Checklist at age 12; CPRS-R:S = Conners Parent Rating Scale-Revised:Short version at age 12; DSM = DISC-IV ADHD at a mean age of 12 years

Although the influence of the non-shared environment is not included in Fig. 1, the fact that the total variances of the five measurements are constrained at 1 in order to identify the model allows a calculation of the additive and dominant genetic variance based on the unstandardized factor loadings. For example, 41% (i.e., .44² + .46²) of the variance in the CBCL rating at age 7 is attributable to additive genetic effects, and 36% (.60²) is attributable to dominant genetic effects. The remaining variance is explained by non-shared environmental effects.
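The arithmetic linking the loadings in Fig. 1 to these variance components can be spelled out in a few lines of R. The sketch below is illustrative only, using the rounded loadings quoted above for the CBCL at age 7:

```r
# Variance decomposition under the independent pathway structure for A and D,
# using the rounded loadings reported for the CBCL rating at age 7
a_common   <- 0.44   # loading on the common additive genetic factor
a_specific <- 0.46   # loading on the instrument-specific additive factor
d_common   <- 0.60   # loading on the single common dominance factor

VA <- a_common^2 + a_specific^2  # about .41 (the text reports 19% + 22%,
                                 # computed from unrounded loadings)
VD <- d_common^2                 # .36
VE <- 1 - VA - VD                # remainder is non-shared environment,
                                 # since the total variance is fixed at 1

# Because D has no instrument-specific loadings, the dominance covariance
# matrix has rank one and all dominance correlations equal 1. For A, the
# genetic correlation between measures i and j is
#   rA = (a_common_i * a_common_j) / sqrt(VA_i * VA_j),
# which stays below 1 whenever the specific loadings are non-zero.
round(c(VA = VA, VD = VD, VE = VE), 2)
```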
The additive genetic variance of the five measurements can be decomposed into variance due to the common factor and variance due to instrument-specific factors. For the CBCL rating at age 7, 19% (.44²) of the total variance is attributable to common additive genetic effects, and 22% (.46²) is attributable to instrument-specific genetic effects. Common additive genetic effects account for 36%, 55%, 56%, and 32% of the total variance for the CBCL at age 10, the CBCL at age 12, the CPRS-R:S, and the DSM, respectively. Likewise, instrument-specific effects account for 17, 13, 23, and 24% of the variance, respectively. Table 4 shows an overview of the standardized influences of A, D, and E on the variances and covariances of the five measurements. The diagonals of the five-by-five matrices for A, D, and E contain the standardized variance components. The results indicate a high heritability, irrespective of measurement instrument or age. The off-diagonal elements in Table 4 give the standardized influences of A, D, and E on the covariance between the measurements. For example, the covariance between CBCL7 and DSM is explained for 51% by A, 25% by D, and 24% by E. To obtain the unstandardized amounts of variance explained, the standardized influences should be multiplied by the phenotypic covariance between the measures, which is .51 for CBCL7 and DSM. The most interesting comparison is between the data that were collected at approximately the same time. The covariance between the CBCL at age 12 and the DSM is explained largely by genetic effects (68% A, 9% D, and 23% E). Similar results were found for the covariance between the CPRS-R:S and the DSM (67% A and 7% D) and for the covariance between the CBCL at age 12 and the CPRS-R:S (74% A and 8% D).

Table 4 Standardized genetic and environmental influences on the variances and covariances of five ratings of ADHD and attention problems

A (additive genetic effects):
               CBCL7  CBCL10  CBCL12  CPRS   DSM
CBCL age 7      .41
CBCL age 10     .39    .53
CBCL age 12     .53    .62     .68
CPRS-R:S        .62    .69     .74    .79
DSM             .51    .57     .68    .67    .56

D (dominant genetic effects):
               CBCL7  CBCL10  CBCL12  CPRS   DSM
CBCL age 7      .36
CBCL age 10     .45    .25
CBCL age 12     .26    .19     .07
CPRS-R:S        .26    .17     .08    .05
DSM             .25    .17     .09    .07    .04

E (non-shared environmental effects):
               CBCL7  CBCL10  CBCL12  CPRS   DSM
CBCL age 7      .23
CBCL age 10     .16    .22
CBCL age 12     .21    .19     .25
CPRS-R:S        .13    .14     .18    .16
DSM             .24    .26     .23    .25    .40

Note: CBCL = Child Behavior Checklist; CPRS-R:S = Conners Parent Rating Scale-Revised:Short version ADHD-index; DSM = Diagnostic Statistical Manual of Mental Disorders; A = additive genetic effects; D = dominant genetic effects; E = non-shared environmental effects

Table 5 includes the genetic and environmental correlation matrices in the best-fitting model, which represent the overlap between the genetic and environmental influences on the five measurement instruments. The additive genetic correlations range between .52 and .76. All dominant genetic correlations are 1, which is a result of the absence of specifics in the one-factor model used to model the dominant genetic covariance structure. The non-shared environmental correlations range from .34 to .68.
Table 5 Genetic and environmental correlations of five ratings of ADHD and attention problems

A (additive genetic correlations):
               CBCL7  CBCL10  CBCL12  CPRS   DSM
CBCL age 7      1.0
CBCL age 10     .57    1.0
CBCL age 12     .62    .74     1.0
CPRS-R:S        .58    .69     .76    1.0
DSM             .52    .62     .68    .63    1.0

D (dominant genetic correlations):
               CBCL7  CBCL10  CBCL12  CPRS   DSM
CBCL age 7      1.0
CBCL age 10     1.0    1.0
CBCL age 12     1.0    1.0     1.0
CPRS-R:S        1.0    1.0     1.0    1.0
DSM             1.0    1.0     1.0    1.0    1.0

E (non-shared environmental correlations):
               CBCL7  CBCL10  CBCL12  CPRS   DSM
CBCL age 7      1.0
CBCL age 10     .50    1.0
CBCL age 12     .56    .60     1.0
CPRS-R:S        .34    .49     .68    1.0
DSM             .39    .52     .45    .62    1.0

Note: CBCL = Child Behavior Checklist; CPRS-R:S = Conners Parent Rating Scale-Revised:Short version ADHD-index; DSM = Diagnostic Statistical Manual of Mental Disorders; A = additive genetic effects; D = dominant genetic effects; E = non-shared environmental effects

Discussion

The aim of this study was to determine the extent to which three different instruments, which are commonly used to assess ADHD, attention problems, and hyperactivity, measure a common construct. The instruments considered are two scales based on items from questionnaires (CBCL-AP and CPRS-R:S ADHD-I) and a DSM-IV ADHD interview. First, we considered the phenotypic correlations. Second, we tested if the variance in the different instruments reflects one common underlying factor. Third, we estimated the genetic and environmental influences on individual differences in ADHD. This is the first study that includes multivariate genetic analyses of behavior rating scales and DSM-IV interview data collected in a large sample of twins of approximately the same age. The CBCL scores collected at ages 7 and 10 years were included only to correct for the selection. In the discussion, we focus mainly on the CBCL, CPRS-R:S and DSM interview data, which were collected at a mean age of 12 years. The phenotypic correlation between CBCL-AP and the CPRS-R:S ADHD-I was high (r = .75). The correlations between the CBCL and the DSM and between the CPRS-R:S and the DSM were slightly lower (r = .62). These lower correlations can be the result of the different time-points at which the behavior checklists and the DSM interview data were collected (the mean time-span between measurement occasions was 10 months), of the differences in the time frame for the assessment of the items (e.g., 1 month for the CPRS-R:S, 6 months for the CBCL, and 1 year for the DSM), and of instrument or method variance (e.g., interview versus behavior checklists). The genetic analyses show that 82% of the covariance between the CBCL and the CPRS, and 75% of the covariance between the CBCL and the DSM, is explained by genetic factors. The higher phenotypic correlation between the CBCL and the CPRS is therefore not caused by a relatively higher genetic covariance. As noted, the items of the CBCL-AP scale relate to both inattention and hyperactivity/impulsivity. The fact that the correlation between the CPRS-R:S ADHD-I and DSM-IV ADHD is identical to the correlation between CBCL-AP and DSM-IV ADHD implies that the CPRS-R:S and the CBCL measure ADHD equally well. The description of the eleven-item CBCL scale as an inattention scale seems too limited, because both the item content and the current results suggest that the CBCL also signals problems related to hyperactivity/impulsivity. Although the phenotypic correlations provide an interesting insight regarding the similarities and dissimilarities of the quantitative and qualitative approaches towards child psychopathology, an important question concerns the etiological influences on the variances and covariances. In agreement with previous studies (Eaves et al. 1997; Sherman et al. 1997; Rietveld et al. 2003; Hudziak et al.
2000), individual differences in AP and ADHD are mainly explained by genetic factors. An independent pathway model provided a better fit than a common factor model. A common factor model implies a similar structure for the additive genetic, dominant genetic and non-shared environmental influences, so the poor fit is probably due to the fact that there are instrument-specific additive genetic factors while these are absent for the dominant genetic factors. As referees of earlier drafts of this paper noted, alternative models might be fitted to our data; a model including three common factors (one loading on all ratings, a second loading on CBCL ratings, and a third loading on age-12 ratings) might offer a good solution, but because the structures of A, D, and E differ (with no rating-specific influences for D), we did not fit this model to our data. An independent pathway model allows for the inclusion of common and instrument-specific genetic and environmental factors. The model that provided the best fit to the data included common additive and dominant genetic effects, instrument-specific additive genetic effects, and non-shared environmental effects. The relative influence of common and instrument-specific genetic effects varies by rating. Two-thirds of the additive genetic variance of the CBCL ratings at ages 10 and 12 and of the CPRS-R:S rating at age 12 was explained by common effects. In contrast, instrument-specific effects played a more important role in the CBCL ratings at age 7 and in the DSM ratings; for these ratings, the ratio of common to instrument-specific effects was about 50:50. Apparently, the overlapping genes explain less of the variance in these ratings compared to the other ratings, probably as a result of developmental changes in behavior and of method variance (i.e., questionnaire versus interview). The dominant genetic effects overlapped completely between ratings, as the instrument-specific effects could be dropped from the model. Our results show some agreement with the findings of Nadder and Silberg (2001), who fitted an independent pathway model to ADHD symptomatology based on maternal and paternal questionnaire and interview data, and on teacher reports. Although their best-fitting model included contrast effects instead of genetic dominance, our finding of both common and specific genetic influences on the questionnaire and interview data on ADHD is supported by their results. The poor fit of the common factor model suggests that the construct validity of the instruments is not perfect. However, it is interesting to consider the implications of the overlap between the sets of genes that explained variance in the three instruments. High genetic correlations imply that the detection of the specific genes that play a role in ADHD does not depend much on the instrument that is used. At age 12, the additive genetic correlations of the CBCL, CPRS-R:S, and DSM varied between .63 and .76, while the dominant genetic correlations could be constrained at 1. The non-shared environmental correlations are also quite high, and vary between .45 and .68. The dominant genetic correlations of 1 suggest that there is a subset of genes whose effect is not instrument or age dependent. In contrast, the correlations of the additive genetic effects are high but less than perfect.
This suggests that the influence of most genes with an additive effect is not sensitive to the particular instrument that is used, although there are some genes that explain variance only in a particular measurement (e.g., CBCL), but not in another (e.g., DSM). What are the implications of the present findings for gene-finding studies? Thus far, five groups have conducted genome-wide linkage scans in an attempt to find genomic regions that are involved in ADHD, and a number of regions that may be of interest have been identified. Linkage peaks with a LOD score above 2 (P < ∼.002) were reported at chromosomes 16p13 and 17p11 (Ogdie et al. 2003), chromosomes 7p and 15q (Bakker et al. 2003), chromosomes 4q, 8q, and 11q (Arcos-Burgos et al. 2004), chromosomes 5p and 12q (Hebebrand et al. 2006), and chromosomes 14q32 and 20q11 (Gayan et al. 2005). All these studies based diagnosis on DSM-IV (Ogdie et al. 2003; Bakker et al. 2003; Arcos-Burgos et al. 2004; Hebebrand et al. 2006) or DSM-III (Gayan et al. 2005) criteria. The discrepancy in the results of these five studies could be due to low statistical power. The present study showed that the genetic overlap between behavior checklist scores and the DSM-IV diagnosis of ADHD is high. This implies that the detection of genes that play a role in ADHD can be based on questionnaire scores rather than on diagnostic interviews. This will reduce the costs of collecting phenotypic data. Resources may then be reallocated to the collection of genotypic data: an increased number of subjects can be genotyped, and the statistical power to detect a QTL will be increased.

Limitations

The results of this study should be interpreted bearing in mind the following limitations. First, further study is required to investigate whether the results of the current study, which was based on a Dutch population sample, generalize to population samples outside the Netherlands. Second, clinical diagnoses were based on structured diagnostic interviews with the mother. The results may be different when the assessment of ADHD is based on expert clinical diagnoses. Third, no distinction was made between problems related to inattention and problems related to hyperactivity. Since the CBCL does not distinguish between inattention and hyperactivity (and probably the number of items is too small to reliably measure these two factors), we did not distinguish between the subscales. Fourth, we did not allow for sex differences in the genetic and environmental influences, based on the results of univariate studies. Because of the increased statistical power in the multivariate model, it is possible that sex differences do exist. However, due to the categorical nature of the data, and the fact that some of the cells in the contingency tables contain few individuals even in a two-group analysis, statistical problems will arise in a four-group analysis. Fifth, as a result of the categorical nature of the data, computational limitations prohibited the inclusion of confidence intervals.

Clinical implications

Two general approaches towards the measurement of ADHD can be distinguished. In the DSM-IV framework, ADHD is viewed as a categorical trait. Using behavior checklists, children can show variation on a continuum from not affected at all to severely affected. The current study shows that variance in DSM-IV symptoms, the CBCL-AP scale, and the CPRS-R:S ADHD-I is explained mostly by genetic effects.
The correlations between the genetic influences on variance in these three measurements of ADHD are high. This implies that different measurements tap the same genetic liability.
[ "genetics", "attention problems", "adhd", "measurement", "twins", "multivariate analysis" ]
[ "P", "P", "P", "P", "P", "R" ]
Breast_Cancer_Res_Treat-3-1-2096637
MRI compared to conventional diagnostic work-up in the detection and evaluation of invasive lobular carcinoma of the breast: a review of existing literature
Purpose The clinical diagnosis and management of invasive lobular carcinoma (ILC) of the breast presents difficulties. Magnetic resonance imaging (MRI) has been proposed as the imaging modality of choice for the evaluation of ILC. Small studies addressing different aspects of MRI in ILC have been presented, but no large series to date. To address the usefulness of MRI in the work-up of ILC, we performed a review of the currently published literature.

Introduction

Invasive lobular carcinoma (ILC) is the second most common histologic type of breast carcinoma after invasive ductal carcinoma (IDC). In most series ILC constitutes between 5 and 15% of all breast cancers, whereas IDC constitutes between 70 and 90% of all breast cancers [1–5]. Probably due to the use of combined hormone replacement therapy, the proportion of lobular breast cancer has continuously increased over the past decade, from 9.5% in 1987 to 15.6% in 1999 [3]. Patients are, according to most series, somewhat older than patients presenting with IDC; in particular, the fraction of patients presenting with ILC younger than 40 is smaller [1, 5, 6]. Furthermore, the mean tumor size of ILC is slightly larger than in patients with IDC, and presentation with a tumor larger than 5 cm occurs more often in cases of ILC [1, 5, 7]. Histopathologically, ILCs are clearly defined: they are constituted of small, relatively uniform cells, very similar to normal epithelial cells. Characteristically, these cells are only loosely cohesive and infiltrate the stroma in single-file strands along the ductuli. This growth pattern, present in 30–77% of cases [8], is also known as “Indian filing.” It is probably caused by the characteristic loss of the adhesion molecule E-cadherin. Often there is very little desmoplastic stromal reaction [8, 9]. The biological characteristics of ILC are usually less alarming than those of IDC: more tumors contain estrogen and progesterone receptors, expression of Her2/Neu and p53 is more often normal, and axillary lymph nodes are not more often positive, even though ILC are overall larger in size than IDC [1, 7]. Probably due to the diffuse infiltrative growth pattern, ILC is frequently missed on mammography [5]. Detection is also compromised because ILC often has a density less than or equal to that of normal fibroglandular breast tissue on mammography [5, 10]. For correct treatment of ILC, adequate staging is important. Both mammography and ultrasound tend to underestimate lesion size and are therefore not optimal for staging purposes [5, 11]. This may in part be the reason that higher failure rates of breast-conserving therapy (BCT) are reported in ILC than in IDC [2, 11, 12]. Various authors therefore propose magnetic resonance imaging (MRI) as the modality of choice for the evaluation of ILC. Several small studies addressing the different aspects of the use of MRI in ILC have been presented, but no large series to date.
Therefore many questions regarding the use of MRI in ILC remain unanswered:
- The sensitivity of MRI for breast lesions is approximately 95–98%; whether this holds true for ILC as well is, however, not clear [13].
- The morphologic aspects of ILC on MRI are not yet well defined, nor is the dynamic behavior of contrast agents in these tumors clearly documented.
- Whether the MRI findings match the pathologic findings and can thus be used for accurate staging still needs to be established.
- Finally, the impact of MRI on the surgical treatment of ILC should be evaluated.
To answer these questions we performed a thorough review of the existing literature regarding the use of MRI in case of ILC and performed meta-analysis whenever possible. We subsequently reviewed the literature on other imaging modalities for this indication in order to evaluate the use of MRI from a clinical perspective.

Materials and methods

Search strategy

We performed a literature search for articles that specifically dealt with the use of MRI in patients with histologic proof of ILC published before 1 April 2006. The Cochrane Library, MEDLINE and the in-progress citations as provided by PubMed were searched using the query: “lobular AND (MR OR MRI OR MRT OR magnetic).” These databases were further searched using the “Related Articles” function in PubMed. The same query was used to browse the web using scholar.google.com. Furthermore, the references of all retrieved articles were manually searched for relevant cross-references. Articles in all languages were accepted. All retrieved articles were then compared, and from overlapping series of patients only the most recent publication was accepted. Many different search terms were used for the literature review of other imaging modalities; however, only PubMed was used as the search engine.

Endpoints

The study was undertaken to answer the following four questions:
1. What is the sensitivity of MRI for ILC?
2. What are the visual characteristics of ILC on MRI?
3. Are the findings on MRI equal to the findings at pathology?
4. What is the impact of MRI on the surgical management of ILC?
Whenever studies allowed direct comparison between MRI and other imaging modalities, these modalities were also analyzed. Sensitivity was defined as the number of lesions visible on MRI divided by the total number of ILC detected at pathology (see the sketch after this section). We regarded morphology, dynamic curve analysis of contrast behavior, and quantitative dynamic analysis of contrast behavior as three different aspects of tumor appearance, and these were thus analyzed separately. A principal distinction between mass-like and non-mass-like lesions was made in the analysis of morphology. Based on the BI-RADS lexicon [14], we defined architectural distortion; regional, segmental, ductal, multifocal, or diffuse enhancement; and multiple enhancing foci as descriptors of non-mass lesions. Nodular or focal enhancement; well-defined, round, irregular or spiculated masses; and dominant masses with small enhancing foci were defined as descriptors of mass-like lesions. Correlation between the findings on MRI and pathology was evaluated for relative tumor size (unifocal versus multifocal disease and single quadrant versus multicentric disease) and absolute tumor size. The impact on surgical management was derived from all changes implemented based solely on MRI findings. The numbers of correct and incorrect changes were tabulated.
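As a simple illustration of the sensitivity endpoint defined above, the following R sketch uses hypothetical counts (not data from the review):

```r
# Sensitivity: lesions visible on MRI divided by all ILC confirmed at pathology
visible_on_mri <- 26   # hypothetical count of ILC lesions seen on MRI
total_ilc_path <- 28   # hypothetical count of ILC confirmed at pathology
sensitivity <- visible_on_mri / total_ilc_path   # about 0.93

# An exact (Clopper-Pearson) binomial 95% confidence interval for the estimate
binom.test(visible_on_mri, total_ilc_path)$conf.int
```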
Eligibility criteria

All studies that presented a series of at least ten patients with histologic proof of pure ILC, with or without concurrent DCIS and/or LCIS, were considered eligible. A quality analysis of the study had to be possible; otherwise, abstracts were not accepted. Patients with mixed carcinomas of ILC and IDC were excluded. Studies that presented data on both ILC and mixed carcinomas had to allow extraction of the relevant data for ILC only. Every study considered eligible according to these criteria was then evaluated for all the study endpoints. Specific eligibility criteria for the various endpoints are described below.
- Detection: Studies had to be based on a pathology database, and all subsequent patients with ILC who underwent MRI had to be included. The total number of ILC confirmed at pathology had to be clearly stated, as well as the number of lesions found with MRI.
- Morphology: Studies describing the appearance of ILC visible on MRI were eligible. Separation between mass and non-mass-like lesions had to be possible.
- Dynamic curve analysis of contrast behavior: Studies that described the enhancement versus time curve were eligible. However, as time to peak and the shape of the final phase of the enhancement curve were our main endpoints, these had to be described.
- Quantitative analysis of contrast behavior: Studies performing quantitative analysis of the contrast-enhancement parameters were eligible.
- Relative correlation with pathology: Studies presenting data on the unifocal versus multifocal correlation or on single quadrant versus multicentric involvement were eligible.
- Absolute correlation with pathology: Studies comparing sizes measured on MRI with those measured at pathology and presenting a correlation coefficient, or sufficient raw data to calculate such a value, were eligible.
- Detection of additional lesions: Any study describing additional lesions apart from the index lesion detected by MRI only, with subsequent acquisition of histologic proof of malignancy, was considered eligible. Lesions in the ipsilateral breast and the contralateral breast were evaluated separately.
- Impact on surgical treatment: Studies mentioning all changes in surgical strategy based on MRI findings were eligible.

Statistics

The quality of all included studies was assessed using the QUADAS tool [15]. The latter is a list of 14 items created for quality assessment of studies of diagnostic accuracy. Although not all the included studies specifically evaluate diagnostic accuracy, this tool was judged to be the most appropriate available. Data of all the studies were collected according to the inclusion and exclusion criteria. When at least five studies presented the same type of data, or when at least 100 patients were included in a smaller series of studies with similar data, we considered meta-analysis, and heterogeneity analysis was performed. Dichotomous data with a binomial distribution (e.g., sensitivity) were transformed to the log odds scale, because this scale is approximately normally distributed and provides a good approximation to the exact binomial distribution. A disadvantage of this transformation, however, is that the confidence intervals are a little wider and that values in the middle of the distribution (e.g., sensitivity closer to 50%) are more heavily weighted in meta-analysis than values close to the upper or lower level. Pearson’s correlation coefficient was transformed to Fisher’s Z for the same reason [16].
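The transformations described here, together with the heterogeneity statistics and random-effects pooling introduced in the next paragraph, can be sketched in a few lines of base R. This is an illustration with hypothetical study counts, not the actual analysis (which used the meta package):

```r
# Per-study counts (hypothetical): lesions detected by MRI and total ILC
x <- c(19, 21, 20, 33)
n <- c(20, 22, 22, 36)

# Sensitivity on the log odds scale, with its approximate standard error
logit <- log(x / (n - x))
se    <- sqrt(1 / x + 1 / (n - x))

# Fixed-effect weights, Cochran's Q, and the I2-statistic
w  <- 1 / se^2
mu <- sum(w * logit) / sum(w)
Q  <- sum(w * (logit - mu)^2)
df <- length(x) - 1
I2 <- max(0, (Q - df) / Q) * 100   # percent of variation due to heterogeneity

# DerSimonian-Laird random-effects pooling
tau2  <- max(0, (Q - df) / (sum(w) - sum(w^2) / sum(w)))
w_re  <- 1 / (se^2 + tau2)
mu_re <- sum(w_re * logit) / sum(w_re)

# Back-transform the pooled log odds and its 95% CI to the sensitivity scale
ci <- mu_re + c(-1.96, 1.96) / sqrt(sum(w_re))
plogis(c(estimate = mu_re, lower = ci[1], upper = ci[2]))

# Fisher's Z works analogously for correlations: z = atanh(r) is pooled
# on the z scale and back-transformed with tanh()
```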
We calculated Cochran’s Q coefficient and the I²-statistic to assess heterogeneity. Cochran’s Q is a form of the χ²-test and provides information about the applicability of pooling the data. The I²-statistic provides a quantitative measure of the amount of heterogeneity and has an upper limit of 100%. Values of the I²-statistic of 25, 50, and 75% can be interpreted as low, moderate, and high heterogeneity, respectively [17]. Meta-analysis of the data using a random effects model was performed when the Q-coefficient showed no significant heterogeneity (p > 0.05). In cases where meta-analysis was feasible, the pooled estimate and its 95% confidence interval are reported. When meta-analysis was not feasible due to severe heterogeneity, only the range of values found in the different studies is mentioned. All calculations were performed using R version 2.3.1 (The R Project for Statistical Computing, www.r-project.org) and the meta package (G. Schwarzer, cran.r-project.org).

Results

Studies

We identified 21 separate studies that dealt with MRI and ILC [18–38]. We further identified four studies that did not deal specifically with ILC and MRI, but that presented their data in such a fashion that relevant information for ILC only could be extracted for at least ten patients [39–42]. Four studies were case-reports and were dropped from the cohort [20, 21, 29, 37]. The study by Bazzocchi et al. [18] was excluded because only eight patients underwent MRI. Leung et al. [27] and Newstead et al. [42] only published their findings in abstract form and were consequently excluded. Table 1 gives an overview of the included studies and their characteristics, including the QUADAS score.

Table 1 Characteristics of the included studies

Authors | Pub. year(a) | Study type(b) | N(c) | Age mean(d) | Age min.(e) | Age max.(f) | Field(g) | Scan seq.(h) | Uni/bilat(i) | Compression(j) | Mean size(k) | QUADAS score(l)
Rodenko et al. [32] | 1996 | 1 | 20 | 60 | 38 | 84 | 2 | 1 | 1 | 0 | X | 11
Sittek et al. [34] | 1998 | 1 | 23 | X | X | X | 1 | 2 | 2 | 0 | X | 11
Weinstein et al. [36] | 2001 | 1 | 17 | 53 | 32 | 69 | 2 | 2 | 1 | 1 | 1.7 | 12
Kim et al. [41] | 2001 | 1 | 12 | 54(m) | 24(m) | 88(m) | 2 | 2 | 1 | 0 | 2.1(m) | 12
Trecate et al. [35] | 2001 | 1 | 28 | X | 32 | 81 | 2 | 2 | 2 | 0 | X | 9
Francis et al. [24] | 2001 | 2 | 22 | X | X | X | 2 | 2 | 2 | 0 | 3.7 | 12
Qayyum et al. [30] | 2002 | 1 | 13 | 55 | 46 | 84 | 2 | 1 | 1 | 0 | X | 11
Munot et al. [28] | 2002 | 1 | 20 | 61 | 39 | 78 | 2 | 3 | 2 | 0 | X | 11
Yeh et al. [38] | 2003 | 1 | 19 | 59 | 42 | 79 | 2 | 2 | 2 | 0 | 4.1 | 11
Kneeshaw et al. [26] | 2003 | 1 | 21 | 57 | 43 | 72 | 2 | 2 | 1 | 0 | X | 11
Quan et al. [31] | 2003 | 1 | 62 | 53 | X | X | 2 | 2 | 3 | 1 | X | 10
Bedrosian et al. [39] | 2003 | 1 | 24 | 53(m) | X | X | 2 | 0 | 0 | 1 | X | 10
Schelfout et al. [33] | 2004 | 1 | 26 | 57 | 41 | 74 | 3 | 2 | 2 | 0 | X | 11
Diekmann et al. [22] | 2004 | 1 | 17 | X | X | X | 0 | 0 | 0 | 0 | X | 10
Boetes et al. [19] | 2004 | 1 | 34 | 55 | 35 | 78 | 2 | 2 | 2 | 0 | 4.9 | 10
Berg et al. [40] | 2004 | 2 | 29 | X | X | X | 3 | 2 | 2 | 0 | X | 13
Kepple et al. [25] | 2005 | 1 | 29 | 62 | 51 | 67 | 2 | 1 | 3 | 0 | X | 9
Fabre Demard et al. [23] | 2005 | 1 | 34 | X | X | X | 2 | 2 | 2 | 0 | X | 11

(a) Year of publication of the original article
(b) 1 denotes retrospective cohort study, 2 denotes prospective cohort study
(c) Number of patients included
(d) Mean age of all included patients; X denotes not mentioned
(e) Age of the youngest patient included in the study
(f) Age of the eldest patient included in the study
(g) Strength of magnetic field: 0 denotes unknown, 1 denotes 1 T, 2 denotes 1.5 T, 3 denotes both 1 T and 1.5 T
(h) Type of scan sequence used: 0 denotes unknown, 1 denotes RODEO, 2 denotes FLASH 3D, 3 denotes other
(i) Unilateral or bilateral imaging of the breast: 0 denotes unknown, 1 denotes unilateral, 2 denotes bilateral, 3 denotes both unilateral and bilateral depending on the patient
(j) Compression applied to the breast: 0 denotes no, 1 denotes yes
(k) Mean size of the lesions in centimeters; X denotes not mentioned
(l) Number of items valid on the QUADAS scoring list
(m) Valid for the whole study population only, not for the subpopulation of patients with ILC

The applied scan protocols in the included studies are diverse.
In general, most studies presented herein used a 1.5-T MRI scanner, although some authors had at least some of their included patients scanned using 1.0-T machines [33, 34, 40]. Most protocols were based on T1-weighted images made with either a normal FLASH 3D sequence or a FLASH 3D sequence with fat-suppression [19, 20, 23, 24, 26, 31, 33–36, 38, 40, 41], or a RODEO sequence with water-selective excitation [25, 30, 32]. A number of authors also used T2-weighted sequences [22, 23, 31, 38, 40, 41]. Other differences in scan protocols involve the voxel sizes and temporal resolution. Some authors emphasize high spatial resolution [32, 39], while others prefer high temporal resolution [26], and yet others performed both types of sequences in succession [30, 38]. Furthermore, single breast coils [26, 30, 32, 36, 41, 43] and double breast coils (all others) were used, and sometimes compression was applied to the imaged breast [31, 36, 39]. In most reported studies the scanning protocols evolved over time and are thus not identical for all imaged patients.

Lesion detection

Eight studies provided sufficient data to calculate the sensitivity of MRI for ILC [19, 23, 24, 26, 28, 33, 34, 40]. Sensitivity ranged from 83 to 100%. Cochran’s Q was 6.48 (p = 0.49) and I² was 0%, indicating homogeneous studies; hence data pooling could be performed. Mean sensitivity was 93.3% (95% CI 88–96%). Only the studies by Francis et al. [24] and Berg et al. [40] provided prospective data and are therefore able to show sensitivity in clinical practice. They showed a sensitivity of 95 and 97%, respectively, and did not differ statistically from the retrospective studies (two-sided t-test, p = 0.78). Seven of these studies also provided data on mammography [Q 31.79 (p < 0.001), I² = 81%], six on ultrasound [Q 10.92 (p = 0.05), I² = 54%], and five on clinical examination [Q 29.63 (p < 0.001), I² = 87%]. The sensitivity of ultrasound could also be computed through meta-analysis and was 83% (95% CI 71–91%), although moderate heterogeneity was present. The provided data for mammography and clinical examination were too heterogeneous for meta-analysis and ranged from 34 to 91% and from 28 to 94%, respectively. Figure 1 shows the results of each independent study and the overall results.

Fig. 1 Forest plot of the sensitivity of the respective modalities for ILC (MMG mammography, US ultrasound, CE clinical examination); the horizontal lines represent 95% confidence intervals. Modalities listed to the right of an author’s name were not tested in that study. The diamonds at the bottom represent the pooled estimates and their 95% confidence intervals for MRI and US, respectively. Because mammography and clinical examination were too heterogeneous for meta-analysis, no pooled estimate is presented for these modalities

Morphology

Seven studies described lesion morphology on static MRI images [23, 30, 32, 33, 36, 38, 41]. However, Kim et al. [41] studied the morphologic appearance of masses only and therefore did not include non-mass-like lesions; the information provided by their study is therefore only used to evaluate the appearance of masses and not for the principal distinction between mass and non-mass lesions. The terminology used in the literature to describe the lesions is highly variable. Only Yeh et al. [38] consistently used the terminology of the BI-RADS lexicon [14]. The six eligible studies that presented data on morphologic appearance described a total of 133 tumors, but their results differ considerably.
Morphology

Seven studies described lesion morphology on static MRI images [23, 30, 32, 33, 36, 38, 41]. However, Kim et al. [41] studied the morphologic appearances of masses only and therefore did not include non-mass-like lesions. Information provided by their study is therefore only used to evaluate the appearance of masses and not for the principal distinction between mass and non-mass lesions. The terminology used in the literature to describe the lesions is highly variable. Only Yeh et al. [38] consistently used the terminology of the BI-RADS lexicon [14]. The six eligible studies that presented data on morphologic appearance described a total of 133 tumors. However, the results are highly variable. The incidence of a mass-like lesion ranged from 31 to 95% [Q 16.44 (p < 0.01), I² = 70%]. Table 2 shows the appearance of ILC on MRI for all individual studies.

Table 2. Morphologic appearance of ILC on MRI

| Authors | Number of tumors | Non-mass-like | Mass-like |
| Rodenko et al. [32] | 20 | 1 (5) | 19 (95) |
| Weinstein et al. [36] | 18 | 8 (44) | 10 (56) |
| Qayyum et al. [30] | 13 | 9 (69) | 4 (31) |
| Yeh et al. [38] | 20 | 11 (55) | 9 (45) |
| Schelfout et al. [33] | 27 | 6 (22) | 21 (78) |
| Fabre Demard et al. [23] | 35 | 11 (31) | 24 (69) |

Numbers between parentheses represent percentages.

Fabre Demard et al. [23] did not specify the lesions beyond the description “mass-like.” Other authors used many different terms to further describe lesions. In the study presented by Rodenko et al. [32], five pre-defined shapes were used, but they described all 19 mass-like lesions as spicular enhancing masses. In the other studies most lesions are described as spiculated masses as well. Schelfout et al. [33] recognized a dominant mass with multiple enhancing foci in eight cases, and Yeh et al. [38] even described a round focal mass. In the 12 mass-like cases described by Kim et al. [41], 10 had an irregular shape and 8 were spiculated. Therefore, among the 76 masses, a total of 65 tumors were described as an irregular or spiculated mass. This appears to be the most common type of mass-like presentation in ILC.

Kinetics

Only two studies reported on the dynamic curve appearance of ILC [34, 35]. The most apparent similarity between findings was that maximum enhancement is often delayed and wash-out is present in only a minority of lesions. Sittek et al. [34] reported that maximum enhancement was not reached before 2 min after contrast administration. Trecate et al. [35] noted that a classic pattern of rapid signal increase was only present in 4 of 12 pure ILC, whereas a delayed pattern was observed in the other 8 cases. Two other studies reported on quantitative contrast behavior analysis in ILC [30, 38]. Qayyum et al. [30] reported on a parameter called K21, analogous to the Ktrans parameter as described by Tofts et al. [44]. Yeh et al. [38] evaluated the extraction flow product (EFP), which is a similar analogue but respects the possibility that contrast leakage from the vessels is limited by flow instead of by the permeability surface area product. Neither study, however, included sufficient patients to produce meaningful results, other than a high variability in the values of these parameters and the presence in some tumors of enhancement very much like that of normal breast tissue. It was noted that K21 values appeared to be an order of magnitude lower in ILC than in IDC lesions.
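For orientation, the Tofts formulation referenced above relates the tissue contrast-agent concentration C_t(t) to the arterial plasma concentration C_p(t). This is a textbook restatement of the model in [44], not an equation reproduced from the included studies:

```latex
C_t(t) = K^{\mathrm{trans}} \int_0^t C_p(\tau)\, e^{-k_{ep}(t-\tau)}\,\mathrm{d}\tau ,
\qquad k_{ep} = \frac{K^{\mathrm{trans}}}{v_e} ,
```

where K^trans is the volume transfer constant, k_ep the rate constant and v_e the extravascular extracellular volume fraction. K21 and the extraction flow product are analogues of K^trans, so the order-of-magnitude difference noted above reflects slower contrast exchange in ILC.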
Correlation

Several authors evaluated the correlation of MRI findings with pathology [19, 24–26, 28, 32, 33]. Three studies compared unifocality and multifocality between MRI and pathology [26, 32, 33] (Table 3). Overall, 5 of 67 cases (7%) were regarded as multifocal on MRI whereas they appeared unifocal at pathology and, vice versa, 2 cases (3%) appeared unifocal at MRI but were multifocal according to pathology.

Table 3. Relative correlation of unifocality versus multifocality for MRI versus pathology

| Authors | Number of patients | UF MRI | UF PATH | MF MRI | MF PATH | Overestimated (a) | Underestimated (b) |
| Rodenko et al. [32] | 20 | 9 | 11 | 11 | 9 | 2 | 1 |
| Kneeshaw et al. [26] | 21 | 9 | 10 | 12 | 11 | 1 | 0 |
| Schelfout et al. [33] | 26 | 14 | 17 | 12 | 10 | 2 | 1 |
| Total | 67 | | | | | 5 | 2 |

UF = unifocal, MF = multifocal, PATH = pathology. (a) Disease was classified as multifocal on MRI, but was unifocal on pathology. (b) Disease was classified as unifocal on MRI, but was multifocal on pathology.

Overestimation of multifocality based on mammography in 63 patients from these studies occurred in 2 patients (3%), whereas underestimation occurred 25 times (40%) and the lesion was not visible on mammography in another 4 patients (6%). Two of these studies further analyzed single-quadrant versus multicentric involvement of the affected breast [32, 33] (Table 4). In the study by Rodenko et al. [32], two cases of single-quadrant disease were erroneously classified as multicentric on MRI.

Table 4. Relative correlation of single-quadrant versus multicentric involvement for MRI versus pathology

| Authors | Number of patients | SQ MRI | SQ PATH | MC MRI | MC PATH | Overestimated (a) | Underestimated (b) |
| Rodenko et al. [32] | 20 | 9 | 11 | 11 | 9 | 2 | 0 |
| Schelfout et al. [33] | 26 | 21 | 21 | 5 | 5 | 0 | 0 |
| Total | 46 | | | | | 2 | 0 |

SQ = single quadrant, MC = multicentric, PATH = pathology. (a) Multicentric involvement was seen on MRI, but involvement of only one quadrant was shown on pathology. (b) Involvement of only one quadrant was seen on MRI, but on pathology multicentric involvement was shown.

Mammography in 42 of these patients resulted in overestimation of disease extent in 1 patient and underestimation in 15. Again, no lesion was visible in four patients. Berg et al. [40] further showed a series of 12 patients that underwent MRI. Correct size estimation was achieved in seven patients. In one patient an additional focus was missed and in four patients overestimation occurred due to foci of LCIS. Absolute correlation of MRI and pathologic size measurement was performed by six authors [19, 24–26, 28, 32]. Rodenko et al. [32] found a Kappa coefficient of 0.77, which represents substantial agreement. The other authors presented Pearson’s correlation coefficients ranging from 0.81 to 0.97 [Q 10.90 (p = 0.03), I² = 63%]. Correlation coefficients for the other modalities were substantially more variable. The correlation coefficients presented in Table 5 are optimized by excluding from the calculations those cases where no abnormalities were seen.

Table 5. Correlation of tumor size measured by various modalities compared to pathology

| Authors | MRI | MMG | US | CE |
| Rodenko et al. [32] | N 20, Κ 0.77 | N 15, Κ −0.08 | — | — |
| Munot et al. [28] | N 20, PCC 0.97 | N 10, PCC 0.66 | N 14, PCC 0.67 | — |
| Kneeshaw et al. [26] | N 21, PCC 0.86 | N 21, PCC 0.93 (a) | N 21, PCC 0.93 (a) | N 21, PCC 0.47 |
| Francis et al. [24] | N 22, PCC 0.87 | N 16, PCC 0.79 | N 20, PCC 0.56 | N 19, PCC 0.89 |
| Boetes et al. [19] | N 36, PCC 0.81 | N 36, PCC 0.34 | N 36, PCC 0.24 | — |
| Kepple et al. [25] | N 33, PCC 0.88 | — | N 9, PCC 0.71 | — |

MMG = mammography, US = ultrasound, CE = clinical examination, N = number of lesions visible on the appropriate modality, PCC = Pearson’s correlation coefficient, Κ = Kappa value. (a) Kneeshaw et al. did not provide a correlation coefficient for either MMG or US, but only one for the combined modalities.

Boetes et al. [19] applied a correctness measure of 1.0 cm to their data and found that MRI underestimated disease extent in 5 of 36 tumors and overestimated extent in 4 cases by more than 1.0 cm. The data provided by Francis et al. [24] allow a similar calculation. Underestimation occurred in 6 of 22 cases and overestimation occurred in 1.
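For clarity, the coefficients in Table 5 are plain Pearson correlations between imaging-based and pathologic size measurements; a minimal sketch of that computation follows. The paired size values are hypothetical, since the studies report only the coefficients themselves.

```python
import math

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical MRI and pathology sizes in cm (not data from the included studies)
mri_size = [1.8, 2.5, 3.1, 4.0, 2.2, 5.6]
path_size = [2.0, 2.4, 3.5, 4.2, 2.1, 6.0]
print(round(pearson_r(mri_size, path_size), 2))  # close agreement gives r near 1
```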
Additional lesions

Five studies focused on the detection of concurrent additional lesions in the affected breast, apart from the index lesion, only visible by MRI [22, 23, 31, 33, 36]. In 44 of 146 patients, additional malignant lesions were found [Q 7.20 (p = 0.13), I² = 44%]. Additional malignant findings only visible on MRI were present in 32% of cases (95% CI 22–44%). The results of the individual studies are presented in Table 6.

Table 6. Additional malignant findings in the ipsilateral breast by MRI

| Authors | Number of patients | Number of additional findings |
| Weinstein et al. [36] | 18 | 7 |
| Quan et al. [31] | 51 | 11 |
| Schelfout et al. [33] | 26 | 9 |
| Diekmann et al. [22] | 17 | 9 |
| Fabre Demard et al. [23] | 34 | 8 |
| Total | 146 | 44 |
| Meta-analysis (%) | 100 | 32 |

Eight studies, presented in Table 7, reported on findings in the contralateral breast [19, 22–25, 28, 31, 40]. In 12 of 206 patients, unexpected contralateral cancer was discovered exclusively by MRI [Q 2.28 (p = 0.94), I² = 0%]. Cases where contralateral cancer was also visible on mammography and/or ultrasound are excluded. Contralateral carcinoma only visible by MRI was present in 7% of patients (95% CI 4–12%).

Table 7. Additional findings in the contralateral breast by MRI

| Authors | Number of patients | Number of contralateral findings |
| Francis et al. [24] | 22 | 0 |
| Munot et al. [28] | 20 | 2 |
| Quan et al. [31] | 53 | 5 |
| Diekmann et al. [22] | 17 | 1 |
| Boetes et al. [19] | 34 | 2 |
| Berg et al. [40] | 15 | 0 |
| Kepple et al. [25] | 14 | 0 |
| Fabre Demard et al. [23] | 34 | 2 |
| Total | 206 | 12 |
| Meta-analysis (%) | 100 | 7 |

Effect on surgical treatment

Six studies explicitly stated the effect of MRI on the surgical treatment of their patients [23, 26, 28, 31, 32, 39]. In 160 patients with ILC, a total of 44 changes in surgical management occurred [Q 7.90 (p = 0.16), I² = 37%]. Overall, MRI changed the surgical management in 28.3% of cases (95% CI 20–39%). In 24 cases BCT was changed to mastectomy. In nine cases a wider local excision was performed. In the remaining 11 cases the type of change was not further described. Forty-one of the 44 changes in surgical management were retrospectively judged necessary based on pathologic findings [Q 1.24 (p = 0.94), I² = 0%]. Therefore, 88% of all changes were correct (95% CI 75–95%). In three cases the change in management was retrospectively judged unnecessary based on pathology. The data of the individual studies are presented in Table 8.

Table 8. Changes in surgical management based solely on MRI findings

| Authors | Number of patients | Number of changes | Correct changes | Incorrect changes | Correct wider excision | Incorrect wider excision | Correct mastectomy | Incorrect mastectomy |
| Rodenko et al. [32] | 20 | 8 | 7 | 1 | | | 7 | 1 |
| Munot et al. [28] | 20 | 3 | 3 | | | | 3 | |
| Kneeshaw et al. [26] | 21 | 5 | 5 | | 1 | | 4 | |
| Quan et al. [31] | 51 | 11 | 11 | | 5 | | 6 | |
| Bedrosian et al. [39] | 24 | 11 | 9 | 2 | NA | NA | NA | NA |
| Fabre Demard et al. [23] | 24 | 6 | 6 | | 3 | | 3 | |
| Total | 160 | 44 | 41 | 3 | 9 | | 23 | 1 |

The percentages of changes (28.3%) and of correct changes (88%) are the results of meta-analyses. NA = not available.

Rodenko et al. [32] and Kneeshaw et al. [26] both reported one further unnecessary mastectomy based on MRI outcomes. However, these mastectomies would also have been performed based on the mammography findings and are therefore not only due to the MRI. Berg et al. [40] also reported that findings on MRI in 12 patients with ILC would have resulted in two unnecessary mastectomies. However, mastectomies were also indicated according to the ultrasound report. Nonetheless they based their treatment on the mammograms only and therefore these mastectomies were not performed.

Discussion

Studies and quality analysis

We included 18 studies, but the highest number of studies that could be used to answer a specific endpoint was 8 (sensitivity and contralateral findings). Strong evidence is therefore lacking and this review is thus a clear call for more substantial research in this area. The overall quality of the included studies is, according to the QUADAS score, reasonably high (lowest score = 9/14).
However, this tool does not take study size into account, and the included studies were generally small. The tool places a strong emphasis on the relation of the test to the reference standard (typical for observational studies). In all studies, the reference standard was pathology, which is always acceptable as the gold standard. However, the test results (in this case the MRI reports) were never shielded from the pathologist who performed the pathologic evaluation. In studies that were performed to evaluate the visual characteristics of ILC on MRI, a thorough description of the pathological examination was, understandably, not included [23, 30, 32, 33, 36, 38, 41]. These studies thus scored a little lower. There are some other drawbacks that must be considered and that are not included in the QUADAS score. Firstly, all but 2 of the 18 included studies were retrospective in nature, and secondly, the applied MRI protocols were largely heterogeneous (see Table 1). However, the presented data were extracted from studies that made use of the various standards in MRI of the breast of the last decade and therefore give a reasonable overview of the overall capability of MRI in ILC imaging in this period.

Sensitivity

The sensitivity of physical examination and conventional imaging for ILC of the breast is not optimal. The sensitivity of physical examination for ILC ranges between 65 and 98% [10, 45–47], with usually over 50% of patients presenting with palpable abnormalities. The sensitivity of mammography for ILC (BI-RADS 3 or higher) ranges between 81 and 92% in the literature [10, 45–51]. In a recent study that evaluated intra- and interobserver variability, sensitivity even ranged from 88 to 98% [52], which could be regarded as sufficient. However, ILC often does not appear as a malignant lesion on mammography; approximately 30% of cases are classified as equivocal, and sensitivity is then approximately 57–59% [51]. The overall sensitivity of mammography in the current analysis appears lower than that reported in the literature on mammography in ILC. However, equivocal findings may have been classified as undetected lesions in some studies, resulting in the overall lower results. Nevertheless, the sensitivities of only 34% found by Berg et al. [40] and 50% found by Munot et al. [28] are on the lower end of the spectrum. Munot et al. [28] did not state which views constituted their mammograms, while Berg et al. [40] made craniocaudal, mediolateral and spot-compression views on a standard mammography machine, which we regard as common practice. A possible explanation for the poor results in the study by Berg et al. [40] may be that they defined an ILC as a focus of tumor, thereby allowing several tumors to be present in one breast, whereas other authors defined such cases as multifocal or multicentric tumors and regarded them as detected when at least one lesion was visible on mammography. In the literature, the reported sensitivity of ultrasound for ILC ranges between 68 and 98% [47, 53–58]. As this range is comparable to the range found in the present evaluation, we are of the opinion that an overall sensitivity of 83% is accurate. However, the application of newer high-frequency ultrasound transducers may improve sensitivity. Initial series using 7.5-MHz transducers show sensitivities of 68% [47] and 78% [56], whereas series that used 10–13-MHz transducers report sensitivities up to 98% [57, 59]. Contrast-enhanced MRI is nowadays widely accepted as the most sensitive modality for the detection of malignancy of the breast.
Early reports on the overall sensitivity of MRI for breast lesions range from 93 to 100% [13, 60–63]. Thus, the sensitivity of MRI found for ILC in the studies presented herein and the overall sensitivity of 93.3% calculated from these studies are not different from those known for malignancy in the breast in general. The relatively low heterogeneity of all studies describing lesion detection, as well as detection of additional lesions in the ipsi- and contralateral breast, shows that the applied MRI technique has only a minor impact on the ability of MRI to detect lesions. The overall sensitivity could even be increased to 96% (95% CI 92–98%) if an early study is excluded from the analysis [34]. This study reported a sensitivity for ILC of only 83%, a discrepancy that may well be explained by the fact that the slice thickness in this study was 4.2 mm, thicker than in any of the other presented studies, which could have had a negative impact on sensitivity. Moreover, 15 of 23 patients in their series were scanned with a FLASH 3D sequence with TR 8.4/TE 3.0, resulting in image acquisition with a phase-shift of water and fat, which might have further decreased their sensitivity, although this was not apparent from their data. It must be taken into account that the sensitivity in all studies was achieved in cases where prior knowledge of the existence of ILC was present, mostly because of the retrospective nature of the presented studies, but also because the two prospective studies both included their patients on the basis of histological proof of invasive (lobular) carcinoma by core biopsy. It is therefore not possible to formulate conclusions on the sensitivity of MRI for ILC prior to biopsy. In a large multicenter trial by Bluemke et al. [64], the overall sensitivity for invasive cancer prior to biopsy was 91%; thus it might be expected that the sensitivity for ILC prior to biopsy is also slightly lower. However, in most cases the indication for MRI is assessment of disease extent because of inconclusive findings at mammography or ultrasound. In conclusion, the sensitivity of MRI for ILC is higher than that achieved by any other modality, in direct comparison and validated by the literature, and is equal to the overall sensitivity of MRI for malignant lesions of the breast. Only modern ultrasound examinations seem to have the ability to approach the performance of MRI in the detection of ILC [57].
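The phase-shift argument invoked above for the protocol of Sittek et al. can be made concrete with standard 1.5-T numbers; this is our own back-of-the-envelope calculation, not figures taken from [34]. Fat and water protons resonate about 3.5 ppm apart, so at a proton frequency of roughly 63.9 MHz

```latex
\Delta f \approx 3.5\,\mathrm{ppm} \times 63.9\,\mathrm{MHz} \approx 224\,\mathrm{Hz},
\qquad T_{\mathrm{cycle}} = \frac{1}{\Delta f} \approx 4.5\,\mathrm{ms},
```

meaning fat and water are opposed in phase at TE ≈ 2.2 ms and back in phase at TE ≈ 4.5 ms. A TE of 3.0 ms therefore acquires the echo with the two components partially opposed, and voxels containing both fat and water lose signal.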
Morphology

The morphologic appearance of ILC on MRI ranged from 69% non-mass-like lesions to 95% mass-like lesions, thereby raising questions concerning the amount of heterogeneity in the description of lesion morphology by radiologists. In fact, the general agreement on the description of lesion type according to the BI-RADS lexicon is only moderate [14, 65]. In the current analysis, this is even further complicated because most authors did not specifically use the BI-RADS lexicon. Additionally, differences in scan techniques may have further affected the appearance of the lesions. However, in keeping with the above, the classification of lesion type is also highly variable on mammography, where the incidence of mass lesions ranges from 32 to 78% [10, 45, 46, 48, 50, 51, 55]. The vast majority of the mass-like lesions described on MRI are irregular or spicular lesions. The eight patients with a dominant mass surrounded by multiple enhancing foci, as described by Schelfout et al. [33], may represent noncontiguous foci of disease without visible spiculae due to the absence of a desmoplastic reaction, which is a well-known histopathological presentation [8]. In all series only one round mass was described [38], suggesting this to be a very rare presentation of ILC. This is consistent with findings in mammography by Le Gal et al. [10], who described a round mass in only 2% of all patients in whom a mass was present (4/174), while the remainder was classified as either a spicular mass (54%) or a poorly defined mass (44%). Mammographic findings would therefore appear to correlate well with MRI findings. However, only one study allows direct comparison [33]: of all lesions visible in this study on both mammography and MRI, 78% (18/23) were classified as mass-like by MRI, while only 48% (11/23) were classified as mass-like by mammography. Six masses on MRI were visible as architectural distortion on mammography and two as asymmetric density. In one case a lesion described as a spicular mass on mammography was visible on MRI as multiple enhancing foci with interconnecting enhancing strands. Non-mass-like ILC in mammography are typically described as architectural distortion or asymmetric density. In some cases microcalcifications are present, although these are often related to concurrent surrounding DCIS, sclerosing adenosis or fibrotic changes and might thus not be related to the presence of ILC [45, 51, 55]. The descriptors currently used for non-mass-like lesions on MRI are diverse and include various types of abnormal enhancement, such as regional, ductal, segmental, and diffuse enhancement. According to Qayyum et al. [30], the morphologic description of ILC on MRI correlates well with histopathologic findings. The non-mass-like presentation might specifically occur in cases where ILC grows in the classic pattern, with cells arranged in a linear fashion along the ductuli. It may thus be concluded that the appearance of most ILC on MRI and mammography is similar: most ILC are mass lesions that have clearly malignant properties. However, the more diffusely growing tumors are characterized by areas of unexpected enhancement and are more difficult to recognize. In a number of cases where no clear mass is visible on mammography, a mass-like lesion may be found on MRI [33].

Kinetics

The relatively late contrast enhancement of ILC, apparent in all studies presented here and mirrored by the relatively low values of K21 and EFP in the studies by Qayyum et al. [30] and Yeh et al. [38], must be taken into account when evaluating ILC. Standard subtraction images, generated from the pre-contrast and the first or second post-contrast acquisitions, may be inconclusive, as maximum enhancement is not yet achieved at this point in time and the lesion is thus not yet clearly visible. In fact, false-negative MRI in cases of ILC is usually attributed to inadequate enhancement of the tumor [26, 35, 66]. The diffuse and often slow tumor growth, not requiring extensive neovascularization, may partly cause this difficult visualization [1, 67, 68]. This is also suggested by the relatively lower amount of vascular endothelial growth factor found in tumors with a lobular histology, which might indicate a different signaling pathway in the formation of neovascular vessels in ILC. The result would be more mature and thus less leaky capillaries [69], with consequently diminished or absent contrast enhancement.
Correlation

In the studies presented herein, overestimation of lesion extent by mammography is rare, yet underestimation is more the rule than the exception. This is also confirmed by studies that specifically deal with mammography in cases of ILC. Yeatman et al. [5] showed that mammography underestimated ILC by a mean of 12 mm. Uchiyama et al. [51] reported that 56% of all ILC visible on mammography were underestimated, and Veltman et al. [52] showed 35–37% of all ILC to be mammographically understaged. Ultrasound also tends to underestimate tumor size in the studies presented here. This finding is underlined by Tresserra et al. [70] and more recently by Watermann et al. [71], who documented a structural underestimation of 5.4 ± 12.2 mm in cases of ILC versus 1.4 ± 12.0 mm for cases of IDC. This might be partly due to the observation that US tends to underestimate larger tumors more than smaller tumors and low-grade tumors more than high-grade tumors [70], consistent with the finding that ILC usually presents with slightly larger and less aggressive tumors [1, 5, 67, 72]. The current analysis shows that there is good correlation between tumor size measured on MRI and pathology. The various studies presented only moderately heterogeneous results. In most cases MRI outperforms mammography and ultrasound in the assessment of disease extent. Most tumors are correctly classified as uni- or multifocal, and multicentric disease is only seldom overestimated [19, 32].

Additional lesions and effect on surgical treatment

Especially important in this analysis is the detection of additional lesions apart from the index lesion in patients with ILC. The rate of co-existing invasive malignant lesions in the ipsilateral breast visualized only by MRI, 32% of patients, is high. Moreover, the detection of contralateral cancer in another 7% of patients by MRI only seems to make MRI indispensable. These findings are confirmed by the rate of change in treatment of the ipsilateral breast based on MRI. The fact that the change in treatment was considered correct, as verified by pathologic findings in the specimen, in 88% of cases shows that ILC is often more extensive than appreciated on conventional imaging. However, various authors have shown that there is no significant difference in disease-free survival (DFS) or overall survival (OS) after BCT or mastectomy in patients with breast cancer. Although some authors report more local recurrence in patients with ILC after BCT [2, 73], most authors showed that there is no difference in DFS or OS after BCT in ILC versus IDC [74, 75]. On the other hand, Yeatman et al. [5] reported a higher rate of conversion from lumpectomy to mastectomy in ILC compared to IDC (17.5% versus 6.9%). More recently, Molland et al. [68] reported similar findings (37.2% versus 22.4%). Hussien et al. [2] even reported failure of BCT in 63% (34/54) of patients with ILC, resulting in conversion to mastectomy in 76% of failures (26/34). However, a very recent study by Morrow et al. [76] showed that BCT did not fail more often in patients with ILC when corrected for age and tumor size, although they still observed a trend toward more excisions in patients with ILC [OR 1.58 (0.89–2.79), p = 0.12]. To date, there is no evidence suggesting an increase in survival for patients with ILC due to the performance of MRI. What, then, is the added value of MRI?
The rate of recurrence 10 years after BCT followed by radiotherapy is between 7 and 18% and is not significantly different from the rate of recurrence in cases of IDC [77, 78]. However, in view of the MRI findings (additional malignant lesions in 32% of patients), we can only conclude that in a large number of patients with ILC, surgery is not curative but merely debulking. As recurrence rates are fortunately much lower, we must assume that curative treatment is to be expected from adjuvant therapy. Unfortunately, because it is not possible to determine which additional findings will respond to adjuvant therapy, the detection of additional lesions on MRI currently still requires a change of treatment when malignancy has been proven by core biopsy. This may further reduce the rate of recurrence in patients with ILC and may even improve survival. However, this requires confirmation in future studies.

Conclusion

Magnetic resonance imaging has a high sensitivity for ILC, not achieved by other imaging modalities. Therefore MRI is helpful in cases where conventional imaging is inconclusive. Morphology is often mass-like, and a typical ILC presents as an irregular or spiculated mass. However, asymmetric enhancement that can be ductal, segmental, regional, or diffuse in nature may be the only sign of tumor. MRI measures disease extent with high reliability. Although underestimation and overestimation of lesion size by MRI still occur, MRI is more accurate in size determination than the other modalities, and it often indicates a more extensive tumor burden than expected. The underestimation by other imaging modalities results in more failures of BCT, more re-excisions and more conversions to mastectomy in series where MRI is not used. MRI has a clear effect on surgical management: when used to assess disease extent, it changed surgical management in 28.3% of cases, of which 88% were judged necessary based on pathology. Larger series of patients are required to confirm the findings of this review; especially evaluation of tumor morphology and dynamic profile in such series seems feasible.
[ "invasive lobular carcinoma of the breast", "magnetic resonance imaging", "sensitivity", "morphology", "additional findings", "impact on treatment" ]
[ "P", "P", "P", "P", "P", "R" ]
Arch_Dermatol_Res-3-1-1950585
Preservation of skin DNA for oligonucleotide array CGH studies: a feasibility study
Array-based comparative genomic hybridization (a-CGH) is a promising tool for clinical genomic studies. However, pre-analytical sample preparation methods have not been fully evaluated for this purpose. Parallel sections of normal male human skin biopsy samples were collected and immediately immersed in saline, formalin and a molecular fixative for 8, 12 and 24 h. Genomic DNA was isolated from the samples and subjected to amplification and labeling. Labeled samples were then co-hybridized with normal reference female DNA to Agilent oligonucleotide-based a-CGH 44k slides. Pre-analytic parameters such as DNA yield, quality of genomic DNA and labeling efficacy were evaluated. Microarray analytical variables, including the feature signal intensity, data distribution dynamic range, signal-to-noise ratio and background intensity levels, were also assessed for data quality. DNA yield and quality of genomic DNA, as evaluated by spectrophotometry and gel electrophoresis, were similar for fresh and molecular-fixative-exposed samples. In addition, the labeling efficacy of dye incorporation was not drastically different. There was no difference between fresh and molecular-fixative material in scan parameters or in stem plot analysis of the a-CGH results. Formalin-fixed samples, on the other hand, showed various errors such as oversaturation, non-uniformity in replicates, and a decreased signal-to-noise ratio. Overall, the a-CGH results of formalin samples were not interpretable. DNA extracted from formalin-fixed tissue samples is not suitable for oligonucleotide-based a-CGH studies. The molecular fixative, by contrast, preserves tissue DNA similar to its fresh state, with no discernible analytical differences.

Introduction

Applications of new technologies have resulted in major advancements in laboratory medicine [1]. Bringing these advances into clinical practice, however, requires careful evaluation and validation. Array-based comparative genomic hybridization (a-CGH) is an extremely powerful tool that can generate high-resolution mapping of chromosomal abnormalities. Advances in microarray technology and bioinformatics have now made a-CGH easily available and affordable [14, 15]. With a-CGH’s potential for clinical application, it is important that guidelines for proper sample preparation and quality control are developed. Since conventional methods of clinical tissue preparation commonly employ formalin fixation, we studied the suitability of formalin for array CGH studies and compared the results to those of clinical samples that were preserved in a newly developed, molecular-friendly fixative. The study was approved by the University of Miami Institutional Review Board. Three separate normal skin biopsies from one healthy male volunteer were each immediately sliced into three parts of 0.1 × 0.2 × 0.2 cm. One part was immersed in normal saline solution, one part in a methanol-based molecular tissue fixative, UMFIX (Universal Molecular Fixative, marketed as Tissue-Tek® Xpress™ Molecular Fixative, Sakura Finetek, Torrance, CA), and the third slice was fixed in 10% neutral buffered formalin. The volume of fixative/preservative was 150 ml and incubation was performed at room temperature. After 8 (set 1), 12 (set 2) and 24 h (set 3), genomic DNA was extracted from the samples using the Puregene DNA Purification System tissue kit (Gentra, Minneapolis, MN). One microliter of extracted DNA solution was diluted in Tris–EDTA (TE) buffer and evaluated on an ND-1000 spectrophotometer (NanoDrop Technologies, Rockland, DE).
Gel electrophoresis (0.8% agarose gel) was performed to evaluate the quality of the genomic DNA. DNA yield and quality of genomic DNA, as evaluated by spectrophotometry, were similar between the samples (Table 1), although UMFIX samples appeared to be of better quality. The A260/A280 ratio indicates the presence of contaminating proteins, and a ratio of more than 1.8 is generally considered to be indicative of a high-quality sample. All UMFIX samples consistently had a ratio of more than 1.8. We further evaluated the quality of the DNA by agarose gel electrophoresis, which showed the presence of a high molecular weight (HMW) genomic DNA band in all samples (Fig. 1a). While the formalin-fixed samples showed more degradation and a lower intensity of genomic DNA, the UMFIX-exposed samples did not appear degraded.

Table 1. Spectrophotometric results for total DNA yield, DNA purity, and labeling efficiency of fresh, formalin-fixed and UMFIX-exposed samples

| Sample | Total DNA (μg) | A260/A280 | A260/A230 | Sample Cy5 labeling efficiency (pmol/μg) | Control Cy3 labeling efficiency (pmol/μg) |
| Fresh 1 | 3.89 | 1.77 | 1.43 | 70.8 | 111 |
| Fresh 2 | 2.94 | 1.86 | 1.79 | 42.8 | 89.6 |
| Fresh 3 | 2.93 | 1.9 | 2 | 52.8 | 97 |
| Mean ± SD | 3.25 ± 0.55 | 1.84 ± 0.07 | 1.74 ± 0.29 | 55.47 ± 14.19 | 99.20 ± 10.87 |
| UMFIX 1 | 6.416 | 1.85 | 2 | 78.6 | 110 |
| UMFIX 2 | 2.7138 | 1.86 | 1.84 | 49.6 | 88 |
| UMFIX 3 | 4.012 | 1.86 | 1.8 | 48.2 | 81.8 |
| Mean ± SD | 4.38 ± 1.88 | 1.86 ± 0.01 | 1.88 ± 0.11 | 58.80 ± 17.16 | 93.27 ± 14.82 |
| Formalin 1 | 5.366 | 1.58 | 0.84 | 59.8 | 142.4 |
| Formalin 2 | 4.0424 | 1.67 | 1.11 | 41.2 | 73.4 |
| Formalin 3 | 4.427 | 1.93 | 2.08 | 36.2 | 77.4 |
| Mean ± SD | 4.61 ± 0.68 | 1.73 ± 0.18 | 1.34 ± 0.65 | 45.73 ± 12.44 | 97.73 ± 38.73 |

1 = 8 h, 2 = 12 h, 3 = 24 h.

Fig. 1. Results of gel electrophoresis for genomic DNA. (a) HMW genomic DNA is visible in all samples; there is more DNA degradation in formalin-fixed samples. (b) Formalin-fixed samples do not show a uniform strong smear after linear amplification. (c) PCR for GAPDH shows a 450-bp amplicon in all samples, with lower intensity in formalin-fixed samples. L = ladder (numbers indicate bp), S = fresh control in saline, U = UMFIX, F = formalin, P = positive control, N = negative control.

Genomic DNA from the skin samples and the control female DNA (Promega, Madison, WI) were then subjected to amplification according to Agilent’s (Palo Alto, CA) protocol for oligonucleotide array-based CGH for genomic DNA (version 2.0, August 2005). Amplification of both male genomic DNA and female control DNA (100 ng each) was performed using the Qiagen REPLI-g Amplification Kit. Amplified DNA was digested during a 2-h incubation at 37°C with the Alu I and Rsa I restriction enzymes (10 U/μl; 5 μl/reaction; Promega, Madison, WI). Purification of the digested DNA was performed with the QIAprep Spin Miniprep Kit (Qiagen, Valencia, CA). Digested DNA was subjected to electrophoresis to evaluate the quality of the amplified DNA by visual inspection of its uniformity and range. The amplification efficiency of DNA from formalin-fixed samples was lower than that of fresh and UMFIX samples, as evidenced by the size and fluorescent intensity of the bands (Fig. 1b). We also used PCR with GAPDH primers to gauge the quality of the extracted DNA. PCR was performed using glyceraldehyde-3-phosphate dehydrogenase primers (GAPDH, Clonetech, Palo Alto, CA) with 0.5 μg of RNase-treated isolated DNA and Qiagen TaqPCR Mastermix (Qiagen, Valencia, CA). The conditions for DNA PCR were as follows: 95°C, 15 min; 35 cycles at 94°C, 45 s; 60°C, 45 s; 72°C, 2 min. As seen in Fig. 1c, a 450-bp band was detected in all samples, although the intensity was considerably lower in formalin-fixed material. Amplified genomic DNA was then labeled using the BioPrime Array CGH Genomic Labeling kit (Invitrogen, Carlsbad, CA). Quality analysis and quantitation were performed with an ND-1000 spectrophotometer (NanoDrop Technologies, Wilmington, DE) to calculate the labeling efficiency.
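The labeling-efficiency values in Table 1 (pmol of dye per μg of DNA) follow directly from the spectrophotometric absorbances. A minimal sketch of that computation is shown below, assuming a 1-cm path length and commonly cited extinction coefficients for Cy3 and Cy5; this is an illustration, not the NanoDrop software’s exact formula.

```python
# Approximate molar extinction coefficients (M^-1 cm^-1) for the cyanine dyes.
EPSILON = {"cy3": 150_000, "cy5": 250_000}

def labeling_efficiency(a260, a_dye, dye):
    """pmol of incorporated dye per microgram of DNA, from absorbances.

    a260  -- absorbance at 260 nm (1 A260 unit ~ 50 ng/ul double-stranded DNA)
    a_dye -- absorbance at the dye maximum (~550 nm for Cy3, ~650 nm for Cy5)
    """
    dna_ug_per_ul = a260 * 50.0 / 1000.0   # ng/ul -> ug/ul
    # Beer-Lambert with a 1 cm path: c [mol/L] = A / epsilon; 1 umol/L = 1 pmol/ul
    dye_pmol_per_ul = a_dye / EPSILON[dye] * 1e6
    return dye_pmol_per_ul / dna_ug_per_ul

# Hypothetical absorbances giving ~59 pmol/ug, within the range of Table 1:
print(round(labeling_efficiency(a260=1.0, a_dye=0.74, dye="cy5"), 1))
```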
Labeled and purified samples were combined with hybridization master mix and applied to Agilent Human Genome CGH Microarray 44B slides for 40 h at 65°C. To minimize the impact of environmental oxidants on signal intensities, slides were scanned immediately using an Agilent microarray scanner 4800B. Array images were analyzed using Agilent feature extraction software (v8.1) and CGH Explorer (v2.51) [9]. Microarray analytical variables, including the feature signal intensity, data distribution dynamic range, signal-to-noise ratio and background intensity levels, and the number of saturated and undetected features, were used to assess microarray quality. One of the steps in microarray quality control is labeling efficacy, or dye incorporation. Dye incorporation was not drastically different between fresh and UMFIX-exposed samples, but formalin-fixed samples showed less dye incorporation (Table 1). There were no differences between fresh and UMFIX-exposed material with regard to scan parameters (Table 2). Formalin-fixed samples showed various errors such as oversaturation, non-uniformity in replicates and a decreased signal-to-noise ratio. The numbers of non-uniform features were at least tenfold higher in formalin-fixed samples. The signal-to-noise ratio of replicated probes can be used to evaluate the reproducibility of signals. Formalin-fixed samples showed a higher median %CV value, indicating lower reproducibility of the signal across the microarray and a lower signal-to-noise ratio. The number of features that were saturated in the scanned image was also significantly higher in formalin-fixed samples.

Table 2. Representative data of microarray scan quality measurements

| Sample | Non-uniform features (red) | Non-uniform features (green) | Replicate median %CV (red) | Replicate median %CV (green) | Saturated features (red) | Saturated features (green) |
| Fresh 1 | 113 | 379 | 7 | 8 | 0 | 1 |
| Fresh 2 | 51 | 139 | 4 | 4 | 0 | 0 |
| Fresh 3 | 38 | 135 | 5 | 6 | 0 | 0 |
| Mean ± SD | 67 ± 40 | 218 ± 140 | 5 ± 1 | 6 ± 2 | 0 | 0 |
| UMFIX 1 | 352 | 560 | 13 | 13 | 0 | 0 |
| UMFIX 2 | 85 | 179 | 7 | 8 | 0 | 0 |
| UMFIX 3 | 36 | 136 | 6 | 7 | 0 | 0 |
| Mean ± SD | 158 ± 170 | 292 ± 233 | 8 ± 4 | 9 ± 3 | 0 | 0 |
| Formalin 1 | 1,402 | 302 | 77 | 6 | 285 | 1 |
| Formalin 2 | 1,987 | 157 | 37 | 12 | 361 | 0 |
| Formalin 3 | 3,135 | 170 | 94 | 8 | 326 | 0 |
| Mean ± SD | 2,175 ± 882 | 210 ± 80 | 69 ± 30 | 9 ± 3 | 324 ± 38 | 0 |

Red = Cy5-labeled samples, green = Cy3-labeled controls; %CV = reproducibility of non-control replicated probes; 1 = 8 h, 2 = 12 h, 3 = 24 h.

These findings may be attributed to erratic and random fragmentation of the DNA in formalin-fixed samples. The fragmentation itself results in increased background noise and aberrant signal intensity due to random hybridization. We further analyzed the array scan data with the CGH Explorer software using stem plot analysis of moving averages. Fresh and UMFIX-exposed samples showed a readable plot with the expected differences in the X and Y chromosome regions, as expected for male test samples hybridized against a female control. They also showed reproducibly amplified and deleted regions in our test sample DNA. The a-CGH results from formalin-fixed samples were not interpretable (Fig. 2).

Fig. 2. Array CGH stem plot of 8-h preserved skin samples. Chromosomes are displayed on the x-axis and relative ratios on the y-axis. Green: control female (XX), Cy3-labeled; red: test male (XY), Cy5-labeled. The X and Y chromosomes are marked.
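The reproducibility figures in Table 2 are median coefficients of variation across replicated probes. The computation can be sketched as follows; the intensities are hypothetical and merely mimic the qualitative difference between the fixation methods.

```python
from statistics import mean, median, stdev

def median_percent_cv(replicate_signals):
    """Median %CV across probes spotted in replicate on the array.

    replicate_signals -- one list of background-corrected intensities per
    replicated probe.
    """
    cvs = [stdev(sig) / mean(sig) * 100.0
           for sig in replicate_signals if len(sig) > 1]
    return median(cvs)

# Hypothetical replicate intensities: tight replicates give a low median %CV,
# as for fresh/UMFIX samples; scattered replicates mimic formalin-fixed arrays.
tight = [[980, 1010, 1000], [500, 520, 510], [210, 200, 205]]
scattered = [[980, 1500, 600], [500, 900, 300], [210, 400, 120]]
print(round(median_percent_cv(tight), 1), round(median_percent_cv(scattered), 1))
```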
Array CGH studies have great potential for clinical application. Since the technique does not utilize live cells, it is considerably more advantageous when compared to conventional karyotyping techniques. Array CGH has its own limitations, such as failure to detect translocations; nevertheless it offers unprecedented spatial resolution [6, 11]. Recently, it has been shown that low-level gene copy number change is associated with changes in the expression level of its transcripts [12]. Therefore, results of transcriptomics studies can be used to study DNA markers, which are more stable and easier to study than RNA. Array CGH was originally based on BACs (bacterial artificial chromosomes), but more recently it utilizes oligonucleotides, which provide whole-genome-covering resolution. Synthetic oligonucleotides obviously have the advantage that their exact sequence and length are known for each element on the array. Oligonucleotide array CGH (oa-CGH) has the benefit of overcoming the difficulties inherent to BAC arrays, such as the amount of available DNA, clone management, probe identity compromised by PCR contaminations, and mapping inaccuracies. Using oa-CGH, it is also possible to eliminate another source of error, the batch-to-batch variability of the Cot-1 DNA used to block repetitive DNA sequences, since the oligonucleotide probes are designed to be repeat-free [20]. The samples used in nearly all a-CGH studies have been fresh or fresh-frozen tissue. Such material, although useful in research settings, is impractical and cumbersome to use in clinical practice. Furthermore, because diagnostic biopsy samples are relatively small, the amount of residual tissue for additional ancillary testing may be inadequate. Hence there is a great need to develop test strategies that require minimal amounts of tissue and are robust enough to withstand pre-analytical sample preparation. Simplified schemes for sample preservation that allow reliable histomorphology along with preservation of high-quality macromolecules are desirable. We have previously described a novel tissue fixation and processing technique that, besides providing adequate histomorphology, also preserves high-quality HMW RNA, akin to fresh samples [19]. This fixative also protects HMW tissue DNA for use in PCR studies [18]. Here, we further evaluated the suitability of skin tissue DNA for array CGH studies using the same methods applied routinely to fresh samples. This study demonstrates that using the novel fixative it is possible to preserve and extract high molecular weight genomic DNA, supported by high labeling efficiency, comparable to that from fresh tissue. Furthermore, no artifacts were seen using this DNA in microarray scanning or analysis for array CGH. Array CGH studies have been performed on formalin-fixed tissues with variable and irreproducible results. Most of the prior studies were based on low-resolution BAC arrays without a detailed description of DNA quality or array metrics [10, 21]. Besides, there was often inadequate documentation of the fixation time, the volumetric ratio of fixative to tissue, or the buffering status. Only a few studies have adequately addressed the analytical aspects of fixation and processing on array CGH. Ghazani et al. [4] studied the effect of formalin fixation on the MCF-7 cell line and on one breast cancer tissue sample. They used two different BAC clone arrays: 1.7k for the cell line and 19k for the breast cancer sample [4]. There was no mention of fixative volume or of the quality of the genomic DNA. They showed that long-term (1 week) fixation results in extensive loss of HMW DNA. In cell lines, the concordance between fresh and formalin-fixed samples was around 85%.
When the genomic DNA was amplified, the concordance decreased to 75%. Johnson et al. [5] also studied the effect of formalin fixation on a-CGH using BAC arrays. Our results support their conclusion that analysis of DNA samples on agarose gels does not offer any advantage in predicting the suitability of formalin-fixed tissue samples for a-CGH. They also showed that the quality of a-CGH depends on the integrity of the DNA samples, with the requirement that the extracted DNA support the PCR amplification of an amplicon of 300 bp or longer. We showed that it is possible to detect amplicons of up to 450 bp in formalin-fixed samples, albeit with lower intensity when compared to fresh or UMFIX-exposed samples. The studies by Johnson and others show the possibility of obtaining acceptable results with DNA from formalin-fixed samples using BAC arrays, but there are no data that support its suitability for oa-CGH. This may be due to the larger probe sequences present on BAC arrays. Other investigators have shown that alternative approaches to DNA extraction, quantification, amplification or labeling may produce improved results. However, these authors agree that procedures for improving DNA quality are no substitute for high-quality DNA, and they could not obtain consistent results from formalin-fixed samples [7, 10]. This is mostly due to the complex chemical effects of formalin on tissue, which are still poorly understood. Formalin has a tissue penetration rate of 2.4 mm in 24 h, and adequate fixation requires at least a ten-to-one volumetric ratio of fixative to tissue [3]. Therefore, many clinical specimens are only partially fixed by formalin before processing. Furthermore, during processing they are exposed to formalin and alcohol, introducing formalin- and alcohol-related tissue artifacts [13]. Formaldehyde is a dipolar molecule and can react with the amino or imino groups of the anionic forms of amino acids. This reaction is time and temperature dependent [2, 9, 16, 17, 18]. More recent findings show that the deleterious effect of formalin fixation might also result from the cumulative action of other reagents and processing conditions. Conversely, no DNA or nucleoside reactions have been reported with ethanol, and none would be expected under physiological conditions [8]. By changing the three-dimensional structure of proteins, alcohols abolish protein function. Therefore, rapid fixation of samples prevents alteration and degradation of biomolecules and preserves them in their native form [11]. In summary, we show that by using a new molecular fixative it is possible to preserve skin tissue DNA that is suitable for array CGH studies and behaves identically to DNA from fresh tissue. This can be achieved using the same methods and protocols used for fresh samples.
[ "genomics", "fixatives", "array comparative genomic hybridization", "tissue preservation" ]
[ "P", "P", "R", "R" ]
J_Gastrointest_Surg-3-1-1852375
Intestinal Perforations in Behçet’s Disease
Behçet’s disease accompanied by intestinal involvement is called intestinal Behçet’s disease. The intestinal ulcers of Behçet’s disease are usually multiple and scattered and tend to perforate easily, so that many patients require emergency operation. The aim of this study is to determine the extent of surgical resection necessary to prevent reperforation and to point out the findings of concurrent oral and genital ulcers and multiple intestinal perforations in all patients of our series. During a 25-year study period, information on 125 Behçet’s disease cases was gathered. Among the 82 patients who were diagnosed with intestinal Behçet’s disease, 22 cases had intestinal perforations needing emergency laparotomy. We investigated and analyzed these cases according to the patients’ demographic characteristics, clinical presentations, laboratory data, and surgical outcome. There were 14 men and 8 women ranging from 22 to 65 years of age. Nine cases were diagnosed preoperatively, and the diagnoses were confirmed in all 22 cases during the surgical intervention. Surgical resection was performed in every patient, with right hemicolectomy and ileocecal resection in 11 cases, partial ileum resection in 8 cases with two reperforations, and ileocecal resection in 3 cases with one reperforation.

Introduction

Behçet’s syndrome is a systemic process affecting multiple organ systems1,2. Surgeons need to be aware of the lethal complication of Behçet’s disease with intestinal ulcers, which tend to perforate at multiple sites3,4. A review of the literature reveals that involvement of the gastrointestinal tract is not infrequent. Most cases reported in the literature are from the eastern Mediterranean countries and Japan5–7. We report here a series of 22 cases of intestinal Behçet’s disease with multiple perforations, treated by emergency surgical resections.

Materials and Methods

During the 25 years from July 1979 to June 2004, 125 patients with Behçet’s disease were encountered at the Cardinal Tien Hospital and Tri-Service General Hospital, Taipei, Taiwan. Eighty-two patients were diagnosed as having intestinal Behçet’s disease, based on the Mason–Barnes criteria (Table 1)1,2. Among these patients, 22 had intestinal perforations (see Table 2 for the details of these 22 cases).

Table 1. The Mason–Barnes Criteria

| Major Symptoms | Minor Symptoms |
| Buccal ulcerations | Gastrointestinal lesions |
| Genital ulcerations | Thrombophlebitis |
| Ocular lesions | Cardiovascular lesions |
| Skin lesions | Arthritis |
| | Neurologic lesions |
| | Family history |

Three major or two major and two minor criteria are required to establish the diagnosis of Behçet’s disease.

Table 2. Intestinal Perforation in Behçet’s Disease Encountered at CTH and TSGH (from 1979 to 2004, n = 22)

| Case No. | Age (years) | Sex | Oral Ulcer | Genital Ulcer | GI S & S | Ocular Signs | Skin Lesion | Pathergic Reaction | Arthritis or Arthralgia |
| 1 | 38 | M | + | + | + | − | − | + | + |
| 2 | 45 | M | + | + | + | + | + | | + |
| 3 | 26 | F | + | − | + | + | − | | − |
| 4 | 47 | M | + | + | + | + | + | | − |
| 5 | 28 | F | + | − | + | + | − | + | + |
| 6 | 36 | F | + | + | + | + | − | | − |
| 7 (a) | 22 | M | + | + | + | − | − | | + |
| 8 | 42 | M | + | + | + | + | − | | |
| 9 | 22 | M | + | + | + | − | + | + | + |
| 10 | 28 | F | + | + | + | + | − | | − |
| 11 | 65 | M | + | + | + | − | + | | − |
| 12 (a) | 23 | M | + | + | + | − | + | − | − |
| 13 | 32 | F | + | − | + | + | + | | − |
| 14 | 24 | M | + | + | + | + | − | + | + |
| 15 | 34 | M | + | + | + | − | − | | |
| 16 | 41 | F | + | + | + | − | + | | |
| 17 (b) | 38 | M | + | + | + | + | + | + | − |
| 18 | 33 | M | + | + | + | − | + | − | − |
| 19 | 25 | M | + | + | + | − | + | + | + |
| 20 | 48 | F | + | + | + | + | − | | |
| 21 | 29 | M | + | + | + | − | + | − | − |
| 22 | 50 | F | + | + | + | + | − | | + |

Plus signs mean that the feature is present; minus signs mean that the feature is not present. CTH = Cardinal Tien Hospital, TSGH = Tri-Service General Hospital, S & S = symptoms and signs. (a) Reperforations at ileum after partial resection of ileum. (b) Reperforation at ileum after ileocecal resection.
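The decision rule of the Mason–Barnes criteria in Table 1 can be stated as a simple predicate. The sketch below is only a schematic encoding of the published rule for illustration, not a clinical decision tool:

```python
MAJOR = {"buccal ulcerations", "genital ulcerations",
         "ocular lesions", "skin lesions"}
MINOR = {"gastrointestinal lesions", "thrombophlebitis",
         "cardiovascular lesions", "arthritis",
         "neurologic lesions", "family history"}

def meets_mason_barnes(findings):
    """Three major criteria, or two major plus two minor, establish the diagnosis."""
    majors = len(MAJOR & findings)
    minors = len(MINOR & findings)
    return majors >= 3 or (majors >= 2 and minors >= 2)

# Example: oral and genital ulcers plus GI lesions and arthritis qualify.
print(meets_mason_barnes({"buccal ulcerations", "genital ulcerations",
                          "gastrointestinal lesions", "arthritis"}))  # True
```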
In 13 of these 22 cases, the diagnosis was confirmed at surgical resection for multiple perforations. Nine of the 22 cases had Behçet’s disease with intestinal involvement that was confirmed preoperatively: six were confirmed by endoscopic examination, two by radiological examination, and one patient had gastrointestinal symptoms of intermittent abdominal pain, diarrhea, and nausea.

Results

Patient Characteristics

There were 14 men and 8 women among the 22 cases investigated. The ages of the patients with perforated intestinal Behçet’s disease ranged from 22 to 65 years, with a mean age of 35.3 years. The age at onset of symptoms of Behçet’s disease varied from 18 to 64 years, with a mean age of 33.1 years. As shown in Table 2, oral ulcers with gastrointestinal symptoms and signs were found concurrently in all 22 cases, genital ulcers in 19 cases, ocular lesions in 12 cases, and skin lesions in 11 cases. The painful oral ulcers (Fig. 1) occurred on the oral mucosa, on the lips and in the larynx. They varied from 2 to 8 mm in size and invariably healed without scarring. The genital ulcers (Fig. 2) resembled the oral ulcers in appearance and course, except that the vaginal ulcers were painless. Four patients had anterior uveitis and eight had a mild relapsing conjunctivitis as their sole ocular lesion. The nodular cutaneous lesions resembled those of erythema nodosum and were chronic and multiple. Most lesions occurred on the chest wall, back (Fig. 3), and legs. Biopsy of dermal subcutaneous lesions was performed in 10 cases. In each of them, a nonspecific vasculitis of subcutaneous capillaries and venules was present (Fig. 4). The pathergic reaction was positive in 7 of the 10 patients tested.

Figure 1. Buccal ulcer.
Figure 2. Penile ulcer.
Figure 3. Nodular cutaneous lesion on the back.
Figure 4. Vasculitis characterized by lymphocytic and plasmacytic infiltration of perivascular tissue (hematoxylin and eosin; 10 × 40).

There were no specific immunologic abnormalities in any of the 16 patients tested (Table 3). The levels of immunoglobulin were variable. IgG was increased in 3 of 16 patients, IgA in 5 patients, and IgM in 3 patients. There was a significant decrease of IgG in two patients and of IgA in one patient. The total hemolytic complement was normal in all 16 serum samples. Alpha-2 globulin was increased in 9 of 16 patients, and gamma globulin was increased in seven patients.

Table 3. Laboratory Data

| Case No. | IgG (mg/dl) | IgA (mg/dl) | IgM (mg/dl) | C′3 (mg/dl) | C′4 (mg/dl) | Alpha-2 globulin (%) | Gamma globulin (%) |
| 1 | 1,976 | 375 | 250 | 145 | 38 | 13.8 | 23.8 |
| 2 | 1,726 | 245 | 174 | 92 | 40 | 12.0 | 10.8 |
| 4 | 2,150 | 400 | 240 | 110 | 45 | 14.2 | 24.6 |
| 5 | 1,500 | 590 | 300 | 38 | 25 | 10.5 | 18.0 |
| 7 (a) | 740 | 185 | 60 | 90 | 38 | 7.8 | 14.3 |
| 8 | 1,180 | 195 | 140 | 59 | 32 | 6.6 | 12.2 |
| 9 | 2,270 | 464 | 262 | 127 | 46 | 14.0 | 16.2 |
| 11 | 1,850 | 380 | 250 | 190 | 50 | 12.5 | 23.5 |
| 12 (a) | 1,300 | 320 | 235 | 88 | 39 | 9.6 | 15.0 |
| 14 | 2,350 | 490 | 295 | 180 | 48 | 13.3 | 25.0 |
| 16 | 680 | 98 | 56 | 150 | 35 | 13.0 | 21.8 |
| 17 (b) | 1,650 | 475 | 280 | 76 | 34 | 9.4 | 12.5 |
| 18 | 1,800 | 290 | 150 | 105 | 45 | 13.8 | 23.2 |
| 19 | 2,418 | 581 | 209 | 166 | 40 | 14.4 | 28.0 |
| 21 | 1,880 | 330 | 250 | 180 | 35 | 10.5 | 20.0 |
| 22 | 1,985 | 386 | 228 | 168 | 38 | 13.8 | 24.2 |
| Normal range | 950–2,110 | 170–410 | 54–262 | 47–191 | 27–52 | 4.8–12.1 | 8.8–22.8 |

(a) Reperforations at ileum after partial resection of ileum. (b) Reperforation at ileum after ileocecal resection.

Multiple concurrent penetrating ulcers (Fig. 5) were found in all 22 cases, with multiple perforation sites identified from the terminal ileum to the ascending colon (Table 4). The size and number of the perforated ulcers were variable, ranging from 0.2 to 6 cm in size and from 4 to 16 in number. The perforations were found at the ileocecal region and ascending colon in 10 cases, at the terminal ileum in 8 cases, and at the cecum and ascending colon in 4 cases.
Figure 5. Surgical specimen of the ileocecal region showing multiple penetrating ulcers.

Table 4. Operative Findings and Operations Performed in 22 Perforated Intestinal Behçet’s Disease Patients

| Case No. | Location of Perforated Ulcers | No. of Perforations | Oral/genital Ulcer | Operation Performed |
| 1 | Terminal ileum | 4 | +/+ | Partial resection of the ileum |
| 2 | Terminal ileum | 6 | +/+ | Partial resection of the ileum |
| 3 | Ileocecal region and ascending colon | 10 | +/− | Right hemicolectomy and ileocecal resection |
| 4 | Ileocecal region and ascending colon | 16 | +/+ | Right hemicolectomy and ileocecal resection |
| 5 | Cecum and ascending colon | 5 | +/− | Ileocecal resection |
| 6 | Terminal ileum | 5 | +/+ | Partial resection of the ileum |
| 7 (a) | Terminal ileum | 4 | +/+ | Partial resection of the ileum |
| 8 | Cecum and ascending colon | 9 | +/+ | Right hemicolectomy and ileocecal resection |
| 9 | Terminal ileum | 8 | +/+ | Partial resection of the ileum |
| 10 | Ileocecal region and ascending colon | 11 | +/+ | Right hemicolectomy and ileocecal resection |
| 11 | Ileocecal region and ascending colon | 10 | +/+ | Right hemicolectomy and ileocecal resection |
| 12 (a) | Terminal ileum | 5 | +/+ | Partial resection of the ileum |
| 13 | Terminal ileum | 7 | +/− | Partial resection of the ileum |
| 14 | Ileocecal region and ascending colon | 11 | +/+ | Right hemicolectomy and ileocecal resection |
| 15 | Ileocecal region and ascending colon | 5 | +/+ | Right hemicolectomy and ileocecal resection |
| 16 | Ileocecal region and ascending colon | 13 | +/+ | Right hemicolectomy and ileocecal resection |
| 17 (b) | Cecum and ascending colon | 4 | +/+ | Ileocecal resection |
| 18 | Ileocecal region and ascending colon | 7 | +/+ | Right hemicolectomy and ileocecal resection |
| 19 | Ileocecal region and ascending colon | 9 | +/+ | Right hemicolectomy and ileocecal resection |
| 20 | Cecum and ascending colon | 6 | +/+ | Ileocecal resection |
| 21 | Terminal ileum | 4 | +/+ | Partial resection of the ileum |
| 22 | Ileocecal region and ascending colon | 12 | +/+ | Right hemicolectomy and ileocecal resection |

(a) Reperforations at ileum after partial resection of ileum. (b) Reperforation at ileum after ileocecal resection.

Operative Treatment and Outcome

All 22 perforated intestinal Behçet’s disease cases were confirmed at operation, with nine of them correctly diagnosed preoperatively. Surgical resection of the perforated intestinal ulcers was done in all cases, with right hemicolectomy and ileocecal resection in 11 cases, partial ileum resection in 8 cases, and ileocecal resection in 3 cases. No reperforation occurred in the group of patients who underwent right hemicolectomy and ileocecal resection. However, two reperforations occurred in patients who underwent partial ileum resection alone and one in the ileocecal resection group. The pathologic study of the resected specimens showed nonspecific inflammatory reactions, with the infiltration of lymphocytes and plasma cells as the predominant finding (Fig. 6). Histological sections from the ulcer walls showed changes consistent with a nonspecific ulcerative inflammatory process and an infiltrate containing both plasma cells and chronic inflammatory cells.

Figure 6. Chronic inflammatory response and perivascular infiltration (hematoxylin and eosin; 10 × 10).

After operation on these 22 patients with Behçet’s disease and intestinal perforation, four patients died during the postoperative course due to septic shock, which was present prior to the surgical intervention; three died from complications of hypertension and diabetes mellitus; and three were lost to follow-up. Thus, only 12 patients are still under observation, without evidence of gastrointestinal complications to date. The remaining 60 cases of intestinal Behçet’s disease, without perforations, are still under surveillance.
Discussion

In 1937, Behçet described a chronic relapsing triple-symptom complex of oral ulceration, genital ulceration, and ocular inflammation5. Over the years, it has become apparent that the process is a systemic recurrent inflammatory disease affecting a number of organs consecutively6. In 1940, Bechgaard first described intestinal involvement in Behçet’s disease. Tsukada et al. proposed the term “intestinal Behçet’s disease” in 19642,3. Baba et al.4 agreed to this proposal and cited 49 cases of the disease treated from 1975. Since then, the number of operations reported has increased rapidly3, but perforated intestinal Behçet’s disease is still rarely reported. In a large review series, Oshima and colleagues reported that 40% of patients with Behçet’s disease had gastrointestinal complaints, such as nausea, vomiting, and abdominal pain2–4,8,9. The age at onset of these symptoms ranges from 16 to 67 years, and the male-to-female ratio ranges from 1.5:1 to 2:12,5. Our cases were in accordance with this reported age range and sex ratio. The third decade is the most commonly reported age of onset for Behçet’s disease6,8,10,30 and the fourth decade for intestinal Behçet’s disease3. In our study, intestinal Behçet’s disease occurred at a mean age of 33.1 years. However, Behçet’s disease and intestinal involvement were diagnosed simultaneously in some of these patients, most of whom had already experienced systemic manifestations. The exact cause of this disease still remains an enigma. Current hypotheses include allergic vasculitis of small vessels, autoimmune disease, and immunologic deficiency2,4,11,12. The deposition of immune complexes in the walls of small blood vessels was found in the laboratory results of three of our cases, and this process has been proposed as one of the underlying pathologic mechanisms in intestinal Behçet’s disease12. Since no clinicopathologic findings are pathognomonic in this disease, the diagnosis is made on the basis of combinations of various clinical symptoms and signs13. Mason and Barnes constructed an elaborate set of major and minor criteria for diagnosis1. They designated buccal ulceration, genital ulceration, eye lesions and skin lesions as major symptoms. The minor symptoms included gastrointestinal lesions, arthritis, thrombophlebitis, cardiovascular lesions, neurologic lesions, and family history. Three major criteria, or two major criteria and two minor criteria, are necessary for diagnosis. These various symptoms are not usually present at the same time. If we hold the original triple-symptom complex as a prerequisite for the diagnosis, cases may be missed. In 1990, the International Study Group for Behçet’s Disease14 introduced diagnostic criteria requiring the presence of oral ulcerations plus any two of the following: genital ulcerations, typical eye lesions, typical skin lesions, or positive results on a pathergy test. However, some reports have shown that almost 20% of patients with Behçet’s disease presented without oral lesions initially15,16. Furthermore, 2–5% of patients did not show any oral lesions at all16,17. In our series, all patients had manifestations of concurrent oral ulceration, and all perforated cases presented oral or genital ulcerations at the same time. Because patients with intestinal Behçet’s disease may thus present with abdominal pain together with concurrent oral or genital ulcerations, the possibility of intestinal perforation should always be kept in mind.
A phenomenon of pathergy was first described by Blobner in 1937 and was further elaborated by Katzenellenbogen in 1968. It consists of an intradermal test applied to Behçet's disease patients, in which a sharp needle prick causes skin hypersensitivity characterized by the formation of a sterile pustule 24 to 48 h after the trauma. A biopsy of the intradermal puncture site is taken 48 h later for histopathologic evaluation. In a study conducted by Tuzum et al., this reaction was found to be positive in 84% of 58 patients with the disease, as compared to 3% of 90 healthy controls [1]. A positive pathergy reaction should make us aware of the possibility of the disease in the presence of any of the accepted symptoms of this process. However, recent results and interpretations of pathergy tests have varied widely according to the technical aspects of the tests [18,19] and ethnic differences among patients. The histological lesions in Behçet's disease are rather uncharacteristic. Nonspecific perivascular infiltrations of plasma cells and lymphocytes are usually found in the cutaneous and mucosal lesions [5,20]. The intestinal ulcers in Behçet's disease are characterized not only by the absence of the granulomatous formation of Crohn's disease, but also by deeper penetration of the ulcers, to areas nearer the serosal membrane than the ulcers of ulcerative colitis [3,4,21]. The ulcers tend to be undermined, and the submucosal connective tissues are usually destroyed. The bases of the ulcers are avascular, with edema-like swelling and crater-shaped formation around the ulcer margin [2,22–24]. These ulcers are usually found in the terminal ileum and the cecum, but they may be present at any site throughout the digestive system and tend to perforate at multiple sites [25–29]. The gross pathologic characteristics of our intestinal Behçet's disease cases included perforations at multiple sites concurrently, in variable sizes and configurations, extending from the ileocecal region to the ascending colon, in accordance with the reported literature [3,4,8,30,31]. The medical treatment of intestinal Behçet's disease remains unsettled. The beneficial effect of steroid therapy has not been convincing in most series [2,7,30]. It may control the disease initially, but recurrences are common. Topical application of corticosteroids decreases the ocular inflammation and is also useful in relieving the pain of oral ulcers. Haim and Sherf reported a favorable response to fresh blood and plasma in cases of Behçet's disease, but the nature of the useful component in hematotherapy is unknown [5]. In two of our patients with perforations, steroid therapy was given for 2 weeks after surgery with favorable outcomes. Resection of the ileocecal region or the right half of the colon is the usual operation in the treatment of gastrointestinal complications [3,4]. In our series, perforations at multiple sites were found in all cases; right hemicolectomy and ileocecal resection were performed in 11 cases without reperforation; ileocecal resection in 3 cases, with one reperforation; and partial resection of the ileum in 8 cases, with two reperforations. Conclusion Because concurrent oral and genital ulcers were found in all patients in our series, the presentation of this seemingly innocuous clinical manifestation along with gastrointestinal symptoms should raise the level of suspicion that intestinal involvement and complications of perforation may have already occurred.
The other constant finding among our 22 patients is that all the intestinal perforations were located between the terminal ileum and the ascending colon. Therefore, to prevent reperforations, wide excision of the terminal ileum with right hemicolectomy is recommended for perforated intestinal Behçet's disease. We found that the resected bowel specimens of the 19 nonreperforated patients all included more than 60 cm of terminal ileum, whereas those of the three reperforated cases included less than 60 cm. Furthermore, the perforation sites were all 10 to 12 cm proximal to the anastomosis. This is the main reason we recommend the resection of up to 80 cm of ileum from the ileocecal valve at the time of right hemicolectomy [4,31].
[ "intestinal perforations", "behçet’s disease", "intestinal ulcers" ]
[ "P", "P", "P" ]
Domest_Anim_Endocrinol-2-1-2428105
Endometrial expression of the insulin-like growth factor system during uterine involution in the postpartum dairy cow☆
Rapid uterine involution in the postpartum period of dairy cows is important to achieve a short interval to conception. Expression patterns for members of the insulin-like growth factor (IGF) family were determined by in situ hybridisation at day 14 ± 0.4 postpartum (n = 12 cows) to investigate a potential role for IGFs in modulating uterine involution. Expression in each uterine tissue region was measured in optical density units and data were analysed according to region and horn. IGF-I mRNA was localized to the sub-epithelial stroma (SES) of inter-caruncular and caruncular endometrium. Both IGF-II and IGF-1R expression was detected in the deep endometrial stroma (DES), the caruncular stroma and myometrium. IGFBP-2, IGFBP-4 and IGFBP-6 mRNAs were all localised to the SES of inter-caruncular and caruncular uterine tissue, and in the DES and caruncular stroma, with IGFBP-4 mRNA additionally expressed in myometrium. IGFBP-3 mRNA was only detectable in luminal epithelium. IGFBP-5 mRNA was found in myometrium, inter-caruncular and caruncular SES and caruncular stroma. These data support a role for IGF-I and IGF-II in the extensive tissue remodelling and repair which the postpartum uterus undergoes to return to its non-pregnant state. The differential expression of binding proteins between tissues (IGFBP-3 in epithelium; IGFBP-2, -4, -5 and -6 in stroma; and IGFBP-4 and -5 in myometrium) suggests tight control of IGF activity within each compartment. Differential expression of many members of the IGF family between the significantly larger previously gravid horn and the previously non-gravid horn may relate to differences in their rate of tissue remodelling. 1 Introduction In dairy cows, the peri-partum period is critical to future milk production and fertility. Uterine involution involves extensive restructuring of the extracellular matrix alongside mitogenesis and apoptosis [1–3]. Initially, degenerating placental cotyledons and maternal caruncles accumulate as tissue debris in the uterine lumen, forming a lochial discharge [4]. Contractions of the myometrium aid expulsion of lochia, and also restore uterine size, shape and tone to those of a non-pregnant animal [5,6]. Whilst most of these changes have occurred within 2–3 weeks postpartum, involution is not considered complete until about 40–50 days postpartum [1]. The previously non-gravid uterine horn returns to a non-pregnant state 10–15 days earlier than the previously gravid uterine horn [7]. Histological repair of the endometrium lags physical involution by 10–20 days [8], completing when the caruncles regenerate epithelium [4]. Microbial contamination of the postpartum uterus is almost universal during the first week postpartum [9]. When pathogenic bacteria are not cleared, the uterus becomes infected and inflamed and uterine involution is delayed [1,10]. Clinical endometritis is characterised by the continued presence of a purulent discharge beyond 21 days after calving [1]. Many processes involved in uterine repair are common to those of wound healing in other tissues (for a review see [11]). Potential mediators of tissue turnover and remodelling in the uterus include cytokines, matrix-degrading enzymes and growth factors [11,12]. The insulin-like growth factors (IGF-I and IGF-II) function in such tissue repair processes. In healing-impaired wounds, the mRNA for IGF-I, IGF-1R, and IGFBP-3 is significantly reduced [13].
The administration of IGF-I to these wounds corrects defective tissue repair [14] and, in combination with other growth factors, it increases connective tissue regeneration and epithelialisation [15]. Components of the IGF system have been described in the uteri of a variety of species (e.g. humans [16], rodents [17], pigs [18], cattle [19], and sheep [20]). The proliferative and differentiating effects of IGFs on uterine cells are thought to support the growth and regression of uterine tissue throughout the estrous cycle and also the regenerative processes in women following menstruation [16,21]. IGFBP-2 has also been shown to stimulate endometrial cell mitogenesis directly [22]. An increased rate of uterine involution is associated with earlier resumption of ovarian activity [23], which is in turn important for increasing pregnancy rate to first service [24]. Conversely, endometrial damage associated with sub-clinical endometritis leads to prolonged intervals to conception, with many cows failing to conceive at all [25]. The mechanisms that regulate uterine involution are not completely understood and, to the best of our knowledge, no previous studies have investigated the uterine IGF system during involution in lactating dairy cows. We postulated that changes in IGF bioavailability may be implicated in the rate of postpartum uterine recovery and thus influence the calving to conception interval and reproductive efficiency. The objective of the study was to determine patterns of mRNA expression for the IGF system within the previously gravid (PG) and previously non-gravid (PNG) uterine horns during the early postpartum period. Samples were obtained at approximately 2 weeks after calving as we hypothesised that this represents a time by which a delay in the normal recovery process may predispose cows to the subsequent development of endometritis. 2 Materials and methods 2.1 Animals and tissue samples All procedures were carried out under license in accordance with European Community Directive 86/609/EEC. Uteri were collected from 12 multiparous Holstein-Friesian dairy cows (mean parity 4.7) following slaughter at day 14 ± 0.4 postpartum. The diameters of both horns were measured approximately 5 cm anterior to the bifurcation of the uterus. Samples of inter-caruncular and caruncular tissue were dissected from the previously gravid and non-gravid uterine horns approximately 1 cm anterior to the bifurcation of the uterus. A 5 cm square region was harvested, wrapped in aluminium foil, and frozen in liquid nitrogen-tempered isopentane. Samples were stored at −80 °C until sectioning. 2.2 In situ hybridisation The in situ hybridisation procedure was performed as described previously [26]. All chemicals were purchased from Sigma–Aldrich Company Ltd. (Poole, Dorset, UK) or VWR International Ltd. (Poole, Dorset, UK) unless otherwise specified. Briefly, sections of 10 μm were cut from each uterine tissue sample and thaw-mounted onto SuperFrost® Plus or POLYSINE™ microscope slides, fixed in 4% (w/v) paraformaldehyde in 0.01 M PBS, washed in PBS, and sequentially dehydrated in 70% and 95% ethanol. The oligonucleotide probes for the IGF system were end-labelled with [35S]dATP (Amersham Biosciences UK Ltd., Buckinghamshire, England) using terminal deoxynucleotidyl transferase (Promega UK Ltd., Southampton, England). Tissue sections were subsequently treated with 100 000 cpm (100 μl)−1 hybridisation buffer and hybridised overnight at either 42, 45, or 52 °C (Table 1).
Following incubation, slides were washed in a solution of 1 × SSC, 2 g l−1 sodium thiosulphate at room temperature for 30 min followed by fresh 1 × SSC, 2 g l−1 sodium thiosulphate at 60 °C for 60 min. Slides were then rinsed in solutions of 1 × SSC, 0.1 × SSC, 75% ethanol and 95% ethanol and air-dried before exposure to β-max hyperfilm (Kodak BioMax MR Film) for either 4 or 5 days. All uterine sections treated with a particular probe were hybridized in the same batch. Sense probes, which were identical in sequence to the respective mRNA targets, were always included as negative controls and any signal from these was regarded as non-specific. Each batch also contained an appropriate positive control tissue, based on previous studies. These were cross-sections of uterus from an estrous ewe for IGF-I and the type 1 IGF receptor [20], IGFBP-1 [27] and IGFBP-6 [28]; ovine placentome for IGF-II and IGFBPs-2, -3 and -4 [29] and ovine intercotyledonary tissue for IGFBP-5 [30]. 2.3 Photographic emulsions To aid cellular localisation of hybridised probes, slides previously subject to autoradiography were coated with photographic emulsion LM1 (Amersham Biosciences UK Ltd., Buckinghamshire, England) according to the manufacturer's instructions and stored for 28, 30 or 42 days at 4 °C in the dark (Table 1). The slides were developed in 20% phenisol (ILFORD Imaging UK Ltd., Cheshire, England), fixed in 1.9 M sodium thiosulphate and counterstained with haematoxylin and eosin. All other slides were also stained with haematoxylin and eosin to aid identification of tissue region. 2.4 Optical density measurements Readings were obtained from at least two sections per tissue for each of the antisense (AS) and sense (S) probes. Autoradiographs were scanned into a computer and optical density (OD) measurements were recorded from digital images. The relative expression of mRNA for components of the uterine IGF system was quantified from the autoradiographs using the public domain NIH ImageJ program (available through the NIH website—http://www.nih.gov), which calculated the average optical density (OD) over the selected area of film based on a linear grey scale of 0.01–2.71. The following tissue layers were each assessed separately: luminal epithelium, sub-epithelial stroma (a layer of dense connective tissue underlying the luminal epithelium), caruncular stroma (the dense connective tissue forming the caruncles), deep endometrial stroma (loose connective tissue between the sub-epithelial stroma and the myometrium) and myometrium. The latter two tissue types were only present in samples collected from the inter-caruncular region. Each tissue type was measured separately on each section. The background OD, from a blank area of film, was also measured and subtracted from both AS and S OD measurements. Finally, the S values were subtracted from AS values to give an average OD value for specific hybridisation [31]. The detection limit was taken as an OD value of 0.01. 2.5 Statistical analysis Statistical analyses were performed using Statistical Package for the Social Sciences (SPSS for Windows, V13.0). Data for uterine diameter measurements at the time of tissue collection were analysed using Student's t-test. OD measurements were obtained from four samples per cow, taken from each of the caruncular and inter-caruncular regions of the previously gravid and non-gravid horns. The effects of uterine horn and tissue region on the level of mRNA expression for each probe were analysed by general linear model analysis. Cow was entered as a random effect. For this purpose, data from uteri in which a particular probe showed no detectable specific hybridisation (OD of <0.01) were given an OD of 0.01, which equated to the lower limit of detection. Results were considered statistically significant when P < 0.05.
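As a minimal sketch of the computations described in Sections 2.4 and 2.5, the following illustrates the OD arithmetic and the horn × region model with cow as a random effect. Column names and data are hypothetical, and statsmodels' mixed linear model is a stand-in for the SPSS general linear model actually used.

```python
# Synthetic example of the OD arithmetic (Section 2.4) and the horn x region
# model with cow as a random effect (Section 2.5).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame([
    {"cow": c, "horn": h, "region": r,
     "od_as": rng.uniform(0.2, 0.8),   # antisense reading
     "od_s": rng.uniform(0.02, 0.06),  # sense (negative control) reading
     "od_bg": 0.02}                    # film background
    for c in range(12) for h in ("PG", "PNG") for r in ("SES", "DES")
])

DETECTION_LIMIT = 0.01
# Background-correct both readings, subtract sense from antisense, and censor
# values below the detection limit at 0.01.
df["od"] = ((df["od_as"] - df["od_bg"]) - (df["od_s"] - df["od_bg"])).clip(lower=DETECTION_LIMIT)

# Horn, region and their interaction as fixed effects; cow as a random effect.
fit = smf.mixedlm("od ~ horn * region", data=df, groups=df["cow"]).fit()
print(fit.summary())
```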
3 Results At the time of tissue collection, the diameter of the previously gravid uterine horn was larger than that of the previously non-gravid uterine horn (56 ± 6.9 and 31 ± 3.1 mm, respectively, mean ± S.E.M., P = 0.005). The spatial distribution of mRNA encoding components of the uterine IGF system is shown in Figs. 1 and 2. The concentrations of mRNA in OD units are summarised in Table 2 according to uterine horn and tissue region and their two-way interactions are illustrated in Figs. 3 and 4. The method used provided a semi-quantitative measure of the intensity of mRNA expression in specific cell types. 3.1 Expression of the IGFs and IGF type 1 receptor IGF-I mRNA was localized to the sub-epithelial stroma (SES) of inter-caruncular and caruncular endometrium in both uterine horns (Figs. 1(A) and 2A). Both IGF-II and IGF-1R expression was detected in the deep endometrial stroma (DES), the caruncular stroma (not shown) and myometrium (Figs. 1(C), (E) and 2C, E). Overall expression of IGF-I mRNA was higher in the inter-caruncular than caruncular SES (P = 0.001, Table 2). There was a significant horn × region interaction (P = 0.032), with lower levels of IGF-I transcript in the inter-caruncular SES of the PG compared with the PNG horn (Fig. 3(A)). IGF-II expression was higher in the DES than in the caruncular stroma and myometrium (P ≤ 0.001, Table 2). When data from tissue regions were pooled the concentration of IGF-II mRNA did not vary between the PNG and PG horns, but there was a significant horn × region interaction (P ≤ 0.001). Expression of IGF-II in the DES and caruncular stroma was lower in the PG than PNG horn, whereas within myometrium expression was higher in the PG than PNG horn (Fig. 3(B)). For the IGF-1R, expression was highest in myometrium and similar between DES and caruncular stroma (Table 2 and Fig. 3(C)). Overall, the level of IGF-1R transcript was higher (P = 0.030) in the PNG than PG horn (Table 2). The horn × region interaction was not significant for uterine IGF-1R mRNA expression. 3.2 Expression of IGFBPs IGFBP-1 mRNA could not be detected in any uteri examined, despite expression being observed in the ovine estrous uterus which was used as positive control tissue (data not shown). IGFBP-2, IGFBP-4 and IGFBP-6 mRNAs were all localised to the SES of inter-caruncular and caruncular uterine tissue, and in the DES and caruncular stroma (Figs. 1(G), (K), (O) and 2G, K, O). IGFBP-4 mRNA was additionally expressed in myometrium. In contrast, IGFBP-3 mRNA expression was only detected in the luminal epithelium (LE) of both inter-caruncular and caruncular samples (Figs. 1(I) and 2I). IGFBP-5 mRNA was found in myometrium, inter-caruncular and caruncular SES and caruncular stroma (Figs. 1(M) and 2M). IGFBP-2 mRNA expression in inter-caruncular and caruncular SES was higher than in DES and caruncular stroma (P ≤ 0.001, Table 2). There was no main effect of horn, but there was a horn × region interaction (P = 0.034). Within caruncular stroma only, the concentration of IGFBP-2 mRNA was higher in the PG than the PNG uterine horn (Fig. 4(A)). For IGFBP-3 mRNA the main effects of uterine horn and tissue region were not significant but there was an interaction (P ≤ 0.001).
Expression in the inter-caruncular LE was higher in the PNG than PG horn, whereas in the caruncular LE expression was higher in the PG uterine horn (Fig. 4(B)). Expression levels of IGFBP-4 mRNA varied between tissue regions, with higher expression in the caruncular than inter-caruncular SES, lowest expression in myometrium, and intermediate signal intensity in the DES and caruncular stroma (Table 2). There was no difference in transcript levels between the PNG and PG uterine horns when regional data were combined (Table 2). Levels of IGFBP-4 mRNA expression were, however, affected by an interaction between uterine horn and tissue region (P = 0.024): within DES expression was lower in the PG than PNG uterine horn (Fig. 4(C)). Expression of IGFBP-5 mRNA was highest in myometrium, intermediate in the inter-caruncular and caruncular SES and lowest in caruncular stroma (P ≤ 0.001, Table 2). When regional expression data were pooled, the PNG uterine horn expressed higher concentrations of IGFBP-5 mRNA than the PG horn (P ≤ 0.001, Table 2). There was also a significant effect of the interaction between uterine horn and tissue region (P ≤ 0.001). Expression in both the inter-caruncular SES and the caruncular stroma was lower in the PG than PNG uterine horn whereas for the caruncular SES the reverse was true (Fig. 4(D)). IGFBP-6 mRNA was expressed at higher concentrations in the inter-caruncular and caruncular SES than in DES and caruncular stroma (P ≤ 0.001, Table 2). Transcript levels were higher (P ≤ 0.001) in the PNG than PG uterine horn when regional expression data were pooled (Table 2). The interaction between uterine horn and tissue region was significant (P = 0.045). Expression in each of the inter-caruncular SES, caruncular SES, DES and caruncular stroma was lower in the PG than PNG horn (Fig. 4(E)). 4 Discussion The rate of uterine involution is an important factor influencing the subsequent fertility of dairy cows [24]. In this study we have investigated for the first time a possible role for the IGF family of proteins in this event in lactating dairy cows. The timing of tissue collection at approximately day 14 postpartum occurred when the PG uterine horn in our group of multiparous cows was larger than the PNG horn. At this stage caruncular tissue is expected to have undergone degeneration and sloughing, but not to have completed re-epithelialisation [2]. In contrast, the inter-caruncular area does not lose its epithelial layer [32] and recovers from pregnancy more quickly [2]. The ongoing process of uterine involution at the time of tissue collection was thus expected to involve tissue regeneration alongside size recovery. An adequate recovery process may be crucial in preventing the uterus, which is heavily contaminated with bacteria following calving [1], from developing endometritis. Samples were analysed using in situ hybridisation. Whilst this approach is considered only semi-quantitative, we have found the technique described here to be highly repeatable. Furthermore, it enables measurement of mRNA concentrations in individual cellular types. This is not feasible in a complex organ such as the uterus using alternative techniques such as RT-PCR, as it is not readily possible to separate different populations of epithelial and stromal cells for RNA extraction. IGF-I mRNA was localised to the SES, confirming earlier observations in the cow [33] and sheep [20,34]. 
Normal wound healing involves a sequence of inflammation, proliferation, and maturation or remodelling [12], and local IGF-I production increases as wound healing progresses [35]. Since IGF-I increases during the late proliferative phase of the human menstrual cycle [36], and is known to stimulate cell proliferation and collagen synthesis during tissue regeneration [14,35], we propose that IGF-I produced by SES may act in an autocrine and/or paracrine manner to stimulate the proliferation of uterine stroma and epithelium [37,38] during uterine involution. In early pregnancy the bovine endometrium synthesises IGF-II primarily within caruncular stroma [33]. The present study localised IGF-II mRNA at similar concentrations in both the caruncular stroma and myometrium. The strongest expression of IGF-II mRNA was, however, in the DES. Similar results were found in human endometrium [36]. IGF-1R was similarly localised to the DES, caruncular stroma and myometrium, confirming earlier observations in the bovine uterus [33]. Since the effects of IGF-II are probably mediated by the IGF-1R (for a review see [39]), the co-localisation of IGF-II and IGF-1R transcripts supports a local action for IGF-II in uterine repair and regeneration within both endometrial and caruncular stroma [16,21]. Stromal IGF-II may also act in a paracrine manner to stimulate epithelial cell proliferation [22]. The interaction between IGFs and their receptors in muscle growth and regeneration has been comprehensively reviewed by Florini et al. [40]. In myometrium, IGF-II may stimulate muscle growth and regeneration [40], and potentially increase muscle strength [41]. These actions would support the myometrial contractions that return the uterus to its non-pregnant size, shape and tone [5]. Since the PG horn has to contract from a larger size at parturition, the higher concentration of IGF-II in the PG myometrium supports the proposal that IGF-II assists postpartum uterine size recovery. In rat myometrium the IGF-1R is up-regulated in the early postpartum period [42]. This study failed to detect IGFBP-1 mRNA in any postpartum uteri. During the estrous cycle IGFBP-1 expression is low at estrus and relatively higher during the luteal phase [33], concurrent with progesterone production. Since the cows in this study had yet to establish ovulatory cycles, the uterus would not have been recently exposed to progesterone stimulation [43]. In ruminants IGFBP-1 appears to be involved in pregnancy recognition [33,44] rather than, as this study shows, postpartum uterine events. In agreement with previous studies [33,44], the expression of uterine IGFBP-2 mRNA was localised to the SES and at relatively lower levels in the DES and caruncular stroma. In the cyclic cow IGFBP-2 mRNA levels increased during the luteal phase, concomitant with the highest levels of plasma progesterone [19]. Other studies have shown that human endometrial cells constitutively synthesise and secrete IGFBP-2 in vitro [45] and in response to estradiol [46]. In the bovine mammary gland [47], IGF-I may stimulate IGFBP-2 expression and protein secretion, whereas in fetal visceral glomerular epithelial cells isolated from human kidneys IGFBP-2 production was stimulated by IGF-II [48]. The exact mechanisms regulating postpartum uterine IGFBP-2 mRNA expression thus require further investigation.
IGFBP-2 is presumably modulating uterine involution indirectly by regulating the bioavailability of IGF-I and IGF-II and the interaction of these ligands with their receptors [39]. The precise action of IGFBP-2 also remains uncertain. Both stimulatory [22] and inhibitory [48] actions of IGFBP-2 on IGF-stimulated epithelial cell proliferation have been suggested. Alternatively or additionally, IGFBP-2 could modulate uterine cell growth directly [22]. In contrast to all the other binding proteins investigated, the expression of IGFBP-3 mRNA was confined to the luminal epithelium, again agreeing with earlier work in the cyclic animal [33]. Epithelial IGFBP-3 may regulate local IGF bioavailability [39] or transport IGFs across this cell layer [49,50] for secretion into the uterine lumen. Removing excess IGF from the endometrium would prevent the IGF-1R from being down-regulated [51]. Alternatively, since IGFBP-3 associates with cell surfaces, it may store IGFs [52] in the uterus and further promote IGF-stimulated tissue repair [37,39], as proposed for other physiological systems [53]. IGFBP-4 mRNA was detected in multiple uterine tissue compartments. The localisation in SES and caruncular stroma has also been found in the pregnant ewe [54], and synthesis in the DES and myometrium agrees with studies in the human and pregnant bovine uterus [44,55]. IGFBP-4 is generally considered inhibitory to IGF actions [56]. IGFBP-4 does not appear to bind to the cell surface or extracellular matrix, but can cross the endothelium [39], indicating that IGFBP-4 may clear endometrial IGFs. IGFBP-5 mRNA was localised to caruncular stroma and myometrium, in agreement with previous studies in the cow [33] and sheep [34], and transcript was also detected in the SES. With a lack of detectable IGFBP-2, IGFBP-3, and IGFBP-6 mRNA alongside relatively low levels of IGFBP-4 mRNA in the myometrium, the abundance of IGFBP-5 in this tissue suggests that this binding protein is the primary regulator of local IGF bioavailability [57]. In the rat myometrium, IGFBP-5 mRNA is significantly up-regulated after parturition, which is suggested to support tissue remodelling during involution [42]. It is also possible that within the myometrium, IGFBP-5 is directly stimulating muscle cell survival during myogenesis [57,58]. In stromal fibroblasts, IGFBP-5 can adhere to the extracellular matrix, which decreases its affinity for IGF and can potentiate IGF-stimulated DNA synthesis [39,59]. Furthermore, IGFBP-5 may stimulate local tissue growth independently of IGF [39,59]. IGFBP-6 was localised to the SES and at lower levels in the DES and caruncular stroma, similar to the non-pregnant ovine uterus [28]. This expression pattern parallels that of IGFBP-2 mRNA and may suggest these two binding proteins are co-regulated [60]. IGFBP-6 has a markedly higher affinity for IGF-II [61], and so the major function of IGFBP-6 is probably to regulate IGF-II actions [62,63]. Furthermore, IGFBP-6 is generally considered to inhibit the effects of IGF-II, including cell proliferation and differentiation [59]. The importance of controlling uterine IGF bioactivity has been demonstrated in the human uterus, where low levels of IGFBP-6 and higher levels of IGF-II are associated with uterine leiomyomas (fibroids) compared with normal endometrium [64]. Many members of the IGF family showed differential expression between the two uterine horns.
Expression of IGF-I, the IGF-1R and IGFBP-6 was at an overall lower level in the previously gravid horn, whereas IGFBP-4 mRNA expression was lower in the DES only. IGF-II, IGFBP-3 and IGFBP-5 expression showed horn by region interactions, with mRNA concentrations reduced for some regions but increased for others. These differences may reflect the temporal misalignment of the horns in their rate of tissue remodelling, with the PG horn lagging behind the PNG horn by up to 15 days [7]. In conclusion, the IGF system is significant to uterine function and the synthesis of these growth factors in the postpartum uterus indicates a role in uterine involution. This study has shown that in the postpartum bovine uterus IGF-I synthesis was localised to sub-epithelial stroma, whilst maximum concentrations of IGF-II and IGF-1R mRNA were in the endometrial stroma and myometrium, respectively. The uterine tissue compartments expressed different profiles of IGF-binding proteins, indicating that IGF bioavailability and bioactivity are differentially regulated throughout the regenerating endometrium. IGFs are presumably supporting the tissue repair that follows parturition, similar to that of normal wound healing [12]. We propose that myometrial IGF-II synthesis stimulates tissue recovery in an autocrine manner, which may assist the uterus in returning to its non-pregnant shape and size. Although IGFs may be key physiological mediators of endometrial repair, other growth factors and cytokines are undoubtedly also important in this process [11,65]. Conflicts of interest The authors declare that there is no conflict of interest that would prejudice the impartiality of this scientific work.
[ "involution", "igf", "igfbp", "uterus", "bovine" ]
[ "P", "P", "P", "P", "P" ]
Evid_Based_Complement_Alternat_Med-3-4-1697739
The Use of Herbal Medicine in Alzheimer's Disease—A Systematic Review
The treatments of choice in Alzheimer's disease (AD) are cholinesterase inhibitors and NMDA-receptor antagonists, although doubts remain about the therapeutic effectiveness of these drugs. Herbal medicine products have been used in the treatment of Behavioral and Psychological Symptoms of Dementia (BPSD), but with various responses. The objective of this article was to review evidence from controlled studies in order to determine whether herbs can be useful in the treatment of cognitive disorders in the elderly. Randomized controlled studies assessing AD in individuals older than 65 years were identified through searches of MEDLINE, LILACS, Cochrane Library, Dissertation Abstracts (USA), ADEAR (Alzheimer's Disease Clinical Trials Database), National Research Register, Current Controlled Trials, CenterWatch Trials Database and PsychINFO Journal Articles. The search combined the terms Alzheimer disease, dementia, cognition disorders, Herbal, Phytotherapy. The crossover results were evaluated by Jadad's measurement scale. The systematic review identified two herbs and two herbal formulations with therapeutic effects for the treatment of AD: Melissa officinalis and Salvia officinalis, and Yi-Gan San and BDW (Ba Wei Di Huang Wan). Ginkgo biloba was identified in a meta-analysis study. All five herbs are useful for the cognitive impairment of AD. M. officinalis and Yi-Gan San are also useful in agitation, for they have sedative effects. These herbs and formulations have demonstrated good therapeutic effectiveness, but these results need to be compared with those of traditional drugs. Further large multicenter studies should be conducted in order to test the cost-effectiveness of these herbs for AD and their impact on the control of cognitive deterioration. Introduction Alzheimer's disease (AD) is characterized as a progressive neurodegenerative disorder and considered a prominent cause of dementia in the elderly. The main characteristics of this disease are difficulties in handling household routines and cognitive and emotional disturbance in the elderly. The treatment of AD is a clinical challenge. With the development of cholinesterase inhibitors and an N-methyl-d-aspartate antagonist (memantine), good perspectives have emerged in controlling the symptoms of AD. Therapeutic decisions have to be guided by clinical studies and should consider the physiopathogenesis and epidemiology of the disease. The main objective of these clinical trials is to reduce the Behavioral and Psychological Symptoms of Dementia (BPSD) and to improve cognition and the functional activity status, thus reducing the impairment of instrumental activities of daily living (IADLs) and lowering the institutionalization rates (nursing home placement). Unfortunately, only a limited number of trials have dealt with this topic, and these had follow-up periods shorter than two years. In spite of the absence of sufficient therapeutic effectiveness in mild and moderate AD, these drugs are still considered the first line of treatment for AD (1). Studies of cost-effectiveness suggest that memantine (2,3) and donepezil (4) are useful in the reduction of institutionalized care and/or cognitive impairment in patients with AD. Recently, two clinical trials showed no improvement of the cognitive deficit (5,6) or reduction in the institutionalization rate (6). Searching for alternatives, many herbal products have been tested and employed in the treatment of AD, but with different clinical responses (7).
The assessment of these drugs through randomized controlled trials should be useful to identify effective products in the treatment of AD. Methods Searches were performed in MEDLINE (during April 2006, PubMed); LILACS (Latin American and Caribbean Health Science Literature: 40th edition, May 2001; the last search was performed in April 2006); Cochrane Library (issue 1, 2006); Dissertation Abstracts (USA, during April 2006); ADEAR (Alzheimer's Disease Clinical Trials Database, until April 2006); National Research Register (1/2006); Current Controlled Trials (the last search was performed in October 2005); PsychINFO Journal Articles (during the year of 2006); relevant web sites; and by scanning the reference lists of relevant articles. There were no language or publication restrictions. A search for keywords in MeSH (Medical Subject Headings) with the words ‘Alzheimer disease, dementia, cognition disorders’ was performed first. In the second part, the keywords were ‘Herbal’ and ‘Phytotherapy’. The crossover results of the two searches were evaluated by Jadad's measurement scale (8). Inclusion criteria: Three investigators independently reviewed all of the articles found. The articles were selected using the criteria listed below: The studies should be randomized, double-blind and controlled (with a control group and a treatment group). Studies should establish methodological procedures in the crossover or be conducted at the same time. In the case of a crossover, a washout period of at least 7 days was required. Patients included in the studies had their diagnosis rated into three degrees as follows: mild, moderate and severe forms of AD, according to the criteria from the National Institute of Neurological and Communicative Disorders and Stroke—AD and Related Disorders Association (NINCDS-ADRDA) (9). The models used were as follows: Mini-Mental baseline values between 10 and 26 (initial and mild group) or <10 (initial group). Clinical trials should last for at least 1 month (4 weeks). Detailed description of the herbal product used. Progression of neuropsychiatric symptoms should be measured with a numerical score using the Assessment Scale (ADAS-noncog; score range 0–70), NPI (Neuropsychiatric Inventory; score range 0–120), Clinical Global Impression of Change or Behavioral Rating Scale for Geriatric Patients. The final score should be quantified using a combination of ADL and IADL methodological procedures. Exclusion criteria: The herbal product has already been the target of a quantified systematic review study. In this case, only the results of the studies will be considered. Jadad's measurement scale: Methodological quality was assessed using a scale developed and validated by Jadad et al. (8). This scale assesses the completeness of reporting using three items with a five-point maximum score. If the allocation into groups is explicitly randomized, item 1 is scored. A bonus point is given if an adequate method to generate the random sequence is described. If there is an explicit statement that the study is double-blind, item 2 is scored. A bonus point is given if the method is described and adequate. Item 3 is scored if there is either an explicit statement that all patients included were also analyzed or if the number of and reasons for dropouts in all groups are given separately. To be classified as adequately reported, a trial should score at least three of five points, a cut-off point recommended by the author of the scale (10).
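As a minimal sketch of the scoring just described (a hypothetical helper, not taken from the review itself):

```python
# Sketch of the five-point Jadad score as described above.
def jadad_score(randomized: bool, randomization_adequate: bool,
                double_blind: bool, blinding_adequate: bool,
                withdrawals_described: bool) -> int:
    score = 0
    if randomized:
        score += 1
        if randomization_adequate:
            score += 1  # bonus: adequate random-sequence generation described
    if double_blind:
        score += 1
        if blinding_adequate:
            score += 1  # bonus: blinding method described and adequate
    if withdrawals_described:
        score += 1      # all patients analyzed, or dropouts reported per group
    return score

def adequately_reported(score: int, cutoff: int = 3) -> bool:
    """Trials scoring at least 3 of 5 points are classed as adequately reported."""
    return score >= cutoff

s = jadad_score(True, True, True, False, True)
print(s, adequately_reported(s))  # 4 True
```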
All extraction and quality assessments were performed by at least two independent reviewers using standard forms developed for each review. Disagreements were documented and discussed, with final decisions made by the principal reviewer. Results Two herbs and two herbal formulations were identified as having effectiveness in the treatment of the cognitive disturbance of AD in the systematic review: Salvia officinalis (11), Melissa officinalis (12), Yi-Gan San (13) and Ba Wei Di Huang Wan (BDW) (14). The main characteristics of the studies are described in Table 1. Ginkgo biloba was previously identified in one meta-analysis (15), and only the conclusions of that study will be considered. Another study will be conducted with huperzine A, a product derived from the Chinese herb Huperzia serrata, to evaluate its safety and efficacy in the treatment of AD in a multicenter randomized controlled trial of its effect on cognitive function (16). The studies of Salvia (11), Melissa (12), Yi-Gan San (13) and BDW (14) reached a Jadad's measurement scale score of ≥3. The studies had follow-ups of 1 month (Yi-Gan San) (13), 2 months (BDW) (14) and 4 months (Salvia and Melissa) (11,12). All samples studied were composed of patients with initial, mild symptoms judged as AD. Two studies compared herbal medicines and control samples using intention to treat [Salvia (11) and Melissa (12)]. None of the studies evaluated the institutionalization rate or compared the active principle with the current therapies with acetylcholinesterase inhibitors or memantine. Discussion The results of this systematic review identified four studies with methodological quality assessing S. officinalis (11), M. officinalis (12), Yi-Gan San (13) and BDW (14). The last two are composed of formulations with different phytoactive agents. These herbs and formulations presented efficiency in reducing the mild and moderate symptoms of AD. Ginkgo biloba presented statistically significant mild effectiveness in the treatment of the cognitive deficit in AD. The Cochrane meta-analysis study (15) concluded that additional controlled studies would be necessary in order to recognize cognitive improvement with the use of ginkgo. There is still a need for a prospective study with an appropriate duration and representative sample to identify whether G. biloba reduces the development of AD (17). Another plant with a large application perspective is H. serrata, pending confirmation from the multicenter trial underway (16). Melissa and Yi-Gan San showed reduction in the cognitive deficits and a good sedative effect in patients with AD (12,13). Previous clinical studies showed that the extract of M. officinalis reduces laboratory-induced stress (18) and might have benefits in mood improvement (19–21). The use of these herbs and formulations should be well tolerated (22), and adverse effects have not yet been reported (23). Further studies should be conducted to compare the current therapies for AD and the use of these herbal remedies in controlling the symptoms of AD. The action mechanisms of these herbs and formulations are not well known. It has been suggested that the chemical components of the essential oil of the Melissa and Salvia leaf extracts are monoterpene aldehydes, polyphenol flavonoids (including rosmarinic acid) (24) and monoterpene glycosides (25). All of these components have many observable effects in vitro, which include powerful anti-oxidative activity (26,27) and an affinity for nicotinic and muscarinic receptors in the human cerebral cortex (28).
This last mechanism is of special interest, as modulation of cholinergic systems should play a role in improving cognitive function, especially in AD. Yi-Gan San and BDW are mixes of many herbal ingredients. Yi-Gan San is a mixture of 7 different dried plants, several of them (Uncariae sinensis and Angelicae root) with possible actions on the serotoninergic and GABA systems. BDW consists of 8 herbs: Rehmannia glutinosa Libosh. var purpurea Makino, Cornus officinalis Sieb et Zucc (Cornaceae), Dioscorea batatas Decne root (Dioscoreaceae), Alisma orientale Juzep rhizome (Alismataceae), Poria cocos Wolf, Paeonia suffruticosa Andr. (Paeoniaceae), Cinnamomum cassia Blume (Lauraceae) and Aconitum carmichaeli Debx. (Ranunculaceae). Studies have suggested that BDW enhances choline acetyltransferase activity and increases the acetylcholine content of the frontal cortex in a murine model (29,30). Major methodological limitations of the four studies are the small sample sizes and short duration. There is no description of the chemical composition and/or possible active principles of the different products employed in the studies involving the Yi-Gan San (13) and BDW (14) formulations (Table 1). In the study with BDW (14), Sepia sp. and face powder were used as placebo; however, the way that the Sepia formulation was prepared was not described. In the control group of the study using the Yi-Gan San (13) formulation, 25 mg per day of tiapride hydrochloride was introduced. This drug is a substituted benzamide derivative with selective dopamine D2-receptor antagonist properties. This intervention (contamination) occurred in 44% of the individuals and was responsible for symptoms, among them dizziness. Generally, crude herbal drugs are natural products and their chemical composition depends on several factors, such as the geographic source of the plant material, the climate in which it was grown, and the time of harvest. Commercially available herbal medicinal products also vary in their content and concentration of chemical constituents from batch to batch and when products containing the same herbal ingredient are compared between manufacturers. Even when herbal products are standardized for content of known active or marker compounds to achieve more consistent pharmaceutical quality, variations in the concentrations of other constituents can be observed. The use of a protocol such as the Consolidated Standards of Reporting Trials (CONSORT), composed of 22 items, will probably minimize the limitations of RCTs with phytotherapeutic agents (31). The use of herbal medicines in the treatment of AD should be compared with the pharmacological treatment currently in use. Such studies should include the identification of the active principle in order to improve the validation of the clinical trial. Further large-scale, multicenter studies are necessary to determine the effectiveness of these substances in the cognitive deterioration of AD. Until then, this review provides some evidence of the benefit of Melissa, Salvia, Yi-Gan San and BDW in the treatment of AD.
[ "herbs", "alzheimer's disease", "systematic review", "dementia", "elderly", "cognitive impairment", "randomized clinical trial" ]
[ "P", "P", "P", "P", "P", "P", "R" ]
Bioinformation-1-10-1896055
Integrative analysis of the mouse embryonic transcriptome
Monitoring global gene expression provides insight into how genes and regulatory signals work together to guide embryo development. The fields of developmental biology and teratology are now confronted with the need for automated access to a reference library of gene-expression signatures that benchmark programmed (genetic) and adaptive (environmental) regulation of the embryonic transcriptome. Such a library must be constructed from highly-distributed microarray data. Birth Defects Systems Manager (BDSM), an open access knowledge management system, provides custom software to mine public microarray data focused on developmental health and disease. The present study describes tools for seamless data integration in the BDSM library (MetaSample, MetaChip, CIAeasy) using the QueryBDSM module. A field test of the prototype was run using published microarray data series derived from a variety of laboratories, experiments, microarray platforms, organ systems, and developmental stages. The datasets focused on several developing systems in the mouse embryo, including preimplantation stages, heart and nerve development, testis and ovary development, and craniofacial development. Using BDSM data integration tools, a gene-expression signature of 346 genes was resolved that accurately classified samples by organ system and developmental sequence. The module creates the potential for the BDSM approach to decipher a large number of developmental processes through comparative bioinformatics analysis of embryological systems at risk for specific defects, using multiple scenarios to define the range of probabilities leading from molecular phenotype to clinical phenotype. We conclude that an integrative analysis of global gene expression of the developing embryo can form the foundation for constructing a reference library of signaling pathways and networks for normal and abnormal regulation of the embryonic transcriptome. These tools are available free of charge from the web site http://systemsanalysis.louisville.edu, requiring only a short registration process. Background Animal development is fashioned by conserved signaling pathways that orchestrate morphogenesis, pattern formation, and cell differentiation - complex processes operating jointly in different parts of an embryo and in stages associated with sequential gene activation. Monitoring local and temporal changes in gene expression can provide insight into how genes and regulatory signals work together to guide development. [1] This knowledge is important for understanding the pathogenesis of birth defects and to the central problems of defining precursor target cell susceptibility and the causal mechanisms of abnormal development triggered by diverse environmental and genetic perturbations to the maternal-fetal unit. [2] Profiling gene expression on a global scale has become an important source of information for biological knowledge discovery. Despite well-known challenges confronting technology development, the analysis of global gene expression data can reveal themes in the biologically robust response patterns in gene activity. [3] Gene Expression Omnibus (GEO) is one of the main national repositories for high-information content transcript data from microarray analysis and serial analysis of gene expression (SAGE). [4] GEO has grown from 18,235 records in June 2004 to 115,415 records in December 2006, reflecting an average growth of over 100 new entries per day.
A subset of this information describes the embryo proper and can be mined for major biological themes in developmental health and disease. Using keyword searches and trend analysis to mine the PubMed and Medline literature databases, the publicly available embryo-based microarrays currently number 500-600 (564), mostly studies on mouse embryos and differentiating human cell lines. The increasing volume of gene expression data on local and temporal states confronts the developmental biologist with the need for reference libraries and information management systems to handle optimal-scale gene-expression signatures and facilitate biological knowledge discovery. For example, Lamb et al. [5] created a prototype reference collection of gene-expression signatures from cultured human cells exposed to bioactive molecules, which serves as a platform for pattern-matching software to establish a ‘connectivity map’ between drugs, genes, and diseases. Another example is the integrative analysis of multi-study tumor profiles. [6,7] The emerging database model for tumor classification based on molecular abundance profiles has implied a 67-gene core ‘common transcriptional program’ in multiple cancers. [8] In development-teratogenesis, a preliminary meta-analysis across microarray studies in the mouse embryo returned a gene-expression signature of 512 developmentally regulated genes, of which 16% (~82 genes) changed during exposure to teratogenic agents. [2] Given the promises and pitfalls of computational methods for solving gene expression problems, automated access to a reference collection of gene-expression signatures to benchmark the programmed (genetic) and adaptive (environmental) regulation of the embryonic transcriptome is scientifically needed. ‘Birth Defects Systems Manager’ (BDSM) is a knowledge management system that provides custom software to mine public microarray data for interesting patterns across developmental stages, organ systems, and disease phenotypes. [2,9] This open resource enables: consolidation of communal data and metadata relevant to developmental health and disease; interactions with current builds of national databases and data repositories; efficient algorithms for cross-species annotation of symbolic gene annotations using the NCBI sequence homology-based annotations for corresponding homologues and orthologues; specific queries across experiments to facilitate secondary analysis; and data formats interoperable with analysis software for phenetic clustering, chromosomal mapping, gene ontology classification, pathway evaluation, and network identification. Since a comprehensive reference collection of gene-expression signatures for developing structures must be constructed from highly-distributed data, the present study was designed to empower BDSM with tools for seamless data integration: MetaSample, MetaChip, and CIAeasy. These tools are accessible at http://systemsanalysis.louisville.edu, requiring only a short registration process for BDSM. A field test of the prototype run with published microarray data illustrates proof of concept for integrative analysis of the mouse embryonic transcriptome. Methodology Dataset collections A search of PubMed using the keywords ‘embryo’ and ‘microarray’ returned 495 records, of which 193 actually used the technology to study developing animal systems.
GEO data sets (GDS) narrowed the list to 47 nonredundant microarray datasets, and including the keyword ‘teratogen’ added a few more datasets, for a total of 564 public microarrays on the embryo. Raw and/or processed microarray sample-data files and associated metadata were parsed onto the server using LoadBDSM. [9] BDSM currently holds 25 developmental series containing 537 samples that are derived from the public domain and 3 series containing 43 samples which are private. These data represent 15 developing organ systems, 6 chemical exposures, and 5 drug interventions across 42 developmental stages. Tracking provenance For the present study, we restricted the analysis to 160 arrays in the BDSM library based on well-annotated experiments published on normal mouse embryogenesis using an Affymetrix technology platform. These conditions are identified by GEO Series Accession number (GSE) and/or literature citation as follows: preimplantation mouse embryo (GSE1749) [10]; heart (GSE1479) GD10 - GD18 [11]; nerve (GSE972) GD9.5 - birth [12]; ovary (GSE1359) and testis (GSE1358) between GD11.5 - birth [13]; orofacial region (GSE1624) [14] and secondary palate [15] between GD13-15. The platforms for these series included: MG-U74Av2 (12488 probes), MG-U74Bv2 (12478 probes), MG430Av2 (45104 probes), MOE430Av2 (22690 probes), and MOE430Bv2 (22576 probes). Internal annotation of Affymetrix probe identifiers was performed to standardize gene labels across samples and improve cross-platform interoperability, as discussed previously. [2] Data integration Individual microarray sample-data files from the aforementioned developmental series were integrated using QueryBDSM, a module for merging pre-processed, normalized samples from the BDSM library. Three meta-analysis tools written in PHP were designed to compare and analyze expression data across multiple chips and platforms: MetaSample, MetaChip, and CIAeasy. The workflow schema is diagrammed in Figure 1. Individual sample-data files are selected from the BDSM library and added to a queue for integration. The formatted input files are tab-delimited expression ratios of probes (rows) x samples (columns). QueryBDSM determines the number of distinct microarray platforms in the sample queue and merges the data as follows: if all samples come from the same microarray platform (number = 1), then QueryBDSM automatically runs MetaSample to create a merged table having ‘columns’ of normalized expression data (samples) in ‘rows’ derived from the platform, with unique probe identifiers (ProbeID) expanded to include GenBank accession (GeneID) and symbolic gene name (Gene Symbol). If multiple platforms are represented by samples in the queue (number > 1), then QueryBDSM automatically runs MetaChip. MetaChip merges data when probe identifiers are different but represent the same annotation, such as across microarray platforms or phylogenetic species. The probe identifiers from each platform are converted to UniGene ID and then merged accordingly, with associated expression data, for those genes common across the datasets. The probes are annotated based on reverse-engineering the sequence homology-based annotations from GenBank. [2] For this purpose the system uses data flat files downloaded monthly from the HomoloGene and UniGene databases of NCBI. [16] In contrast to MetaSample and MetaChip, which merge samples automatically, CIAeasy must be specified explicitly to compare datasets for the same samples on different platforms.
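The dispatch and merge logic just described can be sketched as follows. This is a minimal pandas illustration under the stated rules, not the actual implementation (the real tools are written in PHP against an Oracle database); each sample here is a hypothetical DataFrame of normalized expression ratios indexed by ProbeID, with sample identifiers as column names, and the probe-to-UniGene map is a hypothetical input.

```python
# Sketch of QueryBDSM's dispatch: MetaSample for one platform, MetaChip for
# several. Columns of each sample table are assumed to be unique sample IDs.
import pandas as pd
from functools import reduce

def meta_sample(samples):
    """Same platform: inner-join samples on their shared ProbeID index."""
    return reduce(lambda left, right: left.join(right, how="inner"), samples)

def meta_chip(samples, probe_to_unigene):
    """Multiple platforms: re-key each table by UniGene ID, keep common genes."""
    rekeyed = []
    for sample in samples:
        table = sample.copy()
        table.index = table.index.map(probe_to_unigene)  # ProbeID -> UniGene ID
        rekeyed.append(table[table.index.notna()])       # drop unmapped probes
    return reduce(lambda left, right: left.join(right, how="inner"), rekeyed)

def query_bdsm(samples, platforms, probe_to_unigene):
    """Merge queued samples with MetaSample or MetaChip, as described above."""
    if len(set(platforms)) == 1:
        return meta_sample(samples)
    return meta_chip(samples, probe_to_unigene)
```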
CIAeasy was created from the ADE-4 package for the R statistical computing environment [17] and is adapted from ‘co-inertia analysis’ (CIA) for microarrays. [18] With CIAeasy, users can perform CIA from the BDSM web site without detailed knowledge of R language programming. Since samples are aligned on a common space, CIA extracts information about the joint trends in expression patterns of genes independent of probe or sequence annotation. [18] CIAeasy automatically computes successive orthogonal axes with correspondence analysis and returns the percentage of total variance explained by each eigenvector to find the strongest trends in the co-structured datasets. Data analysis One of the problems confronting meta-analysis is the normalcy and spread of expression data. In order to derive expression ratios from the Affymetrix data, we computed a reference denominator, averaged for each gene at the earliest stage in a developmental series. These ratios were transformed to logarithm base 2 (log2) to produce a continuous spectrum of values without biasing between up- and down-regulated genes and making the spread more normal. Each microarray set was centered to a median of 0.00 and standardized by scaling to an average standard deviation of 0.5. [2] The merged data were imported to GeneSpring v 7.2 using the UniGene cluster ID as the unique gene identifier. The data were clustered using Pearson correlation for the gene tree and two-sided Spearman Confidence for the developmental conditions. Functional annotation used the NIH/NIAID Database for Annotation, Visualization and Integrated Discovery (DAVID). [19] The highest-ranking biological themes were stratified by Gene Ontology (GO) terms. Discussion Implementation of QueryBDSM For proof of concept we examined samples from GEO data source GSE1391, a series describing global gene-expression profiles of the mouse embryo during preimplantation stages. [10] Samples included in this series represent development of the oocyte through fertilization (1-cell embryo), activation of the zygotic genome (2-cell embryo) and first differentiation (8-cell embryo) leading to divergent embryonal (inner cell mass) and trophectodermal (placental) lineages of the blastocyst. Biological replicates (3-4) arrayed at each stage used three different Affymetrix platforms: MOE430Av2 (22690 probes), MOE430Bv2 (22576 probes), and MG-U74Av2 (12488 probes). Gene-expression profiles were normalized to the ‘oocyte’ in each platform as the earliest stage in the series. Derived data are log2-scale expression values computed from the ratio of signals to the oocyte reference. Using QueryBDSM, we merged datasets from the different samples to create three distinct datasets for the MOE430Av2, MOE430Bv2, and MG-U74Av2 platforms. Statistical (ANOVA) analysis, run at high stringency with Benjamini-Hochberg correction applied, returned 4417 probes (alpha = 0.0001), 1614 probes (alpha = 0.0001), and 2400 probes (alpha = 0.001) that were differentially regulated. Aside from 34 probes that overlapped between the first two datasets, different genes were detected across these diverse platforms. Hierarchical clustering revealed two basic trajectories of gene expression in all three platforms (Figure 2). One expression cluster contained genes that increased from the 2-cell stage to the blastocyst stage, and the other cluster contained genes that decreased over these stages (not shown).
These probes were mapped to the 307 reference pathways in the KEGG: Kyoto Encyclopedia of Genes and Genomes (http://www.genome.ad.jp/kegg/) library to identify metabolic themes. The top significant KEGG pathways showed concordance between platforms MOE430Av2 and MG-U74Av2 (Table 1), whereas MOE430Bv2 detected different pathways. Some pathways had marginal P-values in the individual analyses but became significant in datasets joined by MetaChip (e.g., the Adherens Junction and Tight Junction pathways). This illustrates the strength of the meta-analysis approach. We next used QueryBDSM to merge samples from platforms MOE430Av2 and MG-U74Av2 to illustrate the MetaChip and CIAeasy tools. These chips contained 13022 and 9562 nonredundant UniGene cluster identifiers, respectively, of which 7278 were common between the two platforms when passed through MetaChip. Statistical (ANOVA) analysis with the same parameters as before returned 3324 genes that were differentially regulated in this data subset. The top significant KEGG pathways for the combined 430A-U74A MetaChip are concordant with the individual analyses (Table 1). CIAeasy was used to compare the joint trend between the 4417, 1614, and 2400 probes from MetaSample, revealing a high level of similarity between these three platforms (Figure 2). Of the 6812 nonredundant probes between MOE430Av2 and MG-U74Av2 (4417 + 2400, less the overlap), only five probes were common. Annotating the probes which were not in common gave 4551 unique DAVID identifiers. Again, meta-analysis picked up most of the significant KEGG pathways identified in either of the singular analyses performed earlier, plus a few additional metabolic pathways (Table 1). Comparative bioinformatics analysis across developing systems An obvious limitation in combining data from several different platforms is that, as more platforms are included, fewer representative genes are found to be common amongst all platforms. This problem increases when considering less comprehensive arrays, older arrays with outdated probe annotations, or arrays across animal species. Staging the entry of data from the most versatile arrays first can lessen the problem of losing information when data are combined across platforms; however, in some data-mining efforts the discriminating power gained by increasing samples and conditions might outweigh the loss of information. For example, multi-platform datasets have been found to discriminate tumor classification by expression profile with as few as 25 genes. [6] For this reason it may be possible to benchmark developmental stages using a limited number of genes across many diverse platforms. To illustrate this point we used BDSM-derived data to compare expression profiles across six unrelated studies and developing systems. We constructed a virtual meta-chip for probes common to all five technology platforms represented in these studies, yielding 346 genes. Unsupervised clustering and Pearson correlation of the gene-expression profiles correctly ordered the samples first by organ system and then by developmental sequence within each system (Figure 3). Within this hierarchy, commonalities and differences across organ systems were evident in the patterns of expression of subsets of genes. Unfortunately, the number of genes returned from the 346 by statistical (ANOVA) analysis of each individual system, or by K-means clustering of the entire matrix, was too small for insightful functional annotation.
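The released CIAeasy wraps correspondence analysis from ADE-4 in R; the simplified, covariance-based sketch below conveys the same idea (joint axes that maximize the co-structure between two tables measured on the same samples) without claiming to reproduce the R implementation, and it omits the correspondence-analysis weighting the real tool uses.

```python
# Simplified co-inertia sketch (covariance-based); CIAeasy itself uses
# correspondence-analysis weights, which this toy version omits.
import numpy as np

def coinertia(X, Y, n_axes=2):
    """X, Y: samples x genes arrays for the SAME samples on two platforms."""
    Xc = X - X.mean(axis=0)                  # column-center each table
    Yc = Y - Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc.T @ Yc, full_matrices=False)
    explained = s**2 / np.sum(s**2)          # share of total co-inertia per axis
    scores_x = Xc @ U[:, :n_axes]            # sample scores from table X
    scores_y = Yc @ Vt[:n_axes].T            # matched sample scores from table Y
    Sx, Sy = Xc @ Xc.T, Yc @ Yc.T            # RV coefficient: global similarity (0..1)
    rv = np.trace(Sx @ Sy) / np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy))
    return explained, scores_x, scores_y, rv
```

Plotting the matched sample scores from the two tables, as in Figure 2, shows how closely the platforms agree on the joint developmental trend.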
QueryBDSM is a simple and efficient solution that can be used to construct a self-evolving reference collection of gene-expression profiles from highly distributed data on mouse development. Although robust mapping of biological themes and pathways that are expressed at particular developmental stages is straightforward when the same technology platform is considered [2], fuzzy-clustering methods will be needed when multiple platforms are considered. The MetaSample and MetaChip components of QueryBDSM are also available as standalone tools under the MetaBDSM module. In this way, the user can upload data from outside the BDSM library. The user supplies details of the technology platform, organism, and information about the file format. Once all the required fields are entered and submitted, the files are checked for unique headers. Columns can be dropped by selecting the appropriate checkboxes. Clicking the Continue button combines the datasets, retaining expression data and unique identifiers only for genes common between the datasets. These tools have been tested on Internet Explorer 6.0 or greater and Mozilla-based browsers, such as Netscape 6.0 and Firefox. Other tools are available to assemble data when the platform is the same, such as Microarray data assembler. [20] This Excel-based program inherits Excel's limitations on file size and number of samples (256 columns and roughly 65,000 rows), whereas the MetaSample and MetaChip tools do not have this limitation. These tools create temporary tables in the Oracle database and join them using the functionality of Oracle before writing the result to a text file, reducing restrictions on the number of samples and the size of files. Although users can theoretically combine 100 files at a single time, it is not recommended to load more than 25 files at a time. Conclusion The representation of experimental samples as developmentally contiguous groups is expected to yield a novel mosaic view of gene-expression signatures and genetic dependencies. Although sufficient data exist for data-mining efforts to begin, the ultimate goal of an unabridged reference collection must be viewed as a long-term effort. Regarding the embryo, a search of OVID MedLine using keywords ‘embryo’ OR ‘teratogen’ (136,146 records) AND ‘microarray’ OR ‘SAGE’ (16,906 records) returned 343 total records. At the current rate of 564 microarrays per 343 publication records (factor = 1.64), the trajectory of embryo-based microarray publications projects GEO to hold in excess of 1,476 microarrays relevant to embryogenesis or teratogenesis by the year 2010. As studies unravel gene-expression signatures, the key principles in teratology – namely, chemical effects on biological mechanisms, dose-response relationships, factors underlying genetic susceptibility, stage-dependent responses, and maternal influences – can be framed in a systems biology context to build an ‘experience database’ for ranking pathways and networks by strength of association with anatomical landmarks and developmental abnormalities. [2] The BDSM resource would parallel efforts toward molecular diagnostics in cancer biology (http://www.oncomine.org/), which includes data sets profiling human tumor samples. [21]
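The projection above is simple enough to check by hand; in the sketch below the publication count assumed for 2010 is our own placeholder, since the source does not state it.

```python
records, arrays = 343, 564
factor = arrays / records                  # ~1.64 microarrays per publication record
assumed_records_2010 = 900                 # placeholder assumption, not from the source
print(round(factor, 2), round(assumed_records_2010 * factor))  # 1.64 1480
```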
Since interpreting gene-expression signatures in birth defects will be predicated on prior knowledge about developmental health and disease, an important payoff from this bioinformatics effort is to recognize and characterize how these biological states emerge from adaptation or adverse regulation of the embryonic transcriptome.
[ "integrative analysis", "mouse", "embryo", "transcriptome", "expression", "birth defects" ]
[ "P", "P", "P", "P", "P", "P" ]
Pediatr_Nephrol-3-1-1989763
Nutrition in children with CRF and on dialysis
The objectives of this study are: (1) to understand the importance of nutrition in normal growth; (2) to review the methods of assessing nutritional status; (3) to review the dietary requirements of normal children throughout childhood, including protein, energy, vitamins and minerals; (4) to review recommendations for the nutritional requirements of children with chronic renal failure (CRF) and on dialysis; (5) to review reports of spontaneous nutritional intake in children with CRF and on dialysis; (6) to review the epidemiology of nutritional disturbances in renal disease, including height, weight and body composition; (7) to review the pathological mechanisms underlying poor appetite, abnormal metabolic rate and endocrine disturbances in renal disease; (8) to review the evidence for the benefit of dietetic input, dietary supplementation, nasogastric and gastrostomy feeds and intradialytic nutrition; (9) to review the effect of dialysis adequacy on nutrition; (10) to review the effect of nutrition on outcome. The importance of nutrition in normal growth Normal growth can be divided into four important phases: prenatal, infantile, childhood and pubertal. Nutrition is important at all phases of growth, but particularly so during the infantile phase, because the rate of growth is higher than at any other time of life (other than prenatally) and is less dependent on growth hormone (GH) than during other phases. The rate of growth gradually decreases from >25 cm/year at birth to an average of 18 cm/year at age 1 year and 10 cm/year by the age of 2. Half of adult height is achieved by the age of 2 years, so that irrecoverable loss of growth potential can occur during this phase. At birth, 170 kcal/day are stored in new tissue, falling to 50–60 at 6 months, 30–40 by 1 year and 20–30 by the age of 2 years. During the childhood phase, growth becomes more dependent on the GH/insulin-like growth factor-1 (IGF-1) axis; growth rate decelerates continuously until the pubertal phase. The pubertal phase results from the coordination of GH and sex steroid production. Together they have an anabolic effect on muscle mass, bone mineralization and body proportions. It is another phase of rapid growth, so that nutrition can again modify the genetic growth potential [1]. Methods of assessing nutritional status Normal nutrition can be defined as maintenance of normal growth and body composition. Although it is agreed that nutritional assessment is important in CRF, there is no single or easy definition or measure of inadequate nutritional status: measurements of nutritional parameters are complicated in CRF because of salt and water imbalances and the potential inappropriateness of using age-matched controls in a population that is short and may be delayed in puberty; it has been suggested, therefore, that it is more appropriate to express measures relative to height age (the age at which the child’s height would be on the 50th centile) and/or pubertal stage. The reader is referred to extensive and excellent reviews on this subject [2, 3]. Anthropometric measures The most commonly used assessment of nutrition is height and weight, along with head circumference in younger children, plotted on percentile charts. Anthropometric and nutritional measures are usefully expressed as a score of the number of standard deviations (SDs) from the mean for a normal population of the same age [e.g. height or weight SD score (HtSDS, WtSDS), also called z scores].
This allows comparison with the normal population and helps follow progress in the individual patient. However, although a normal rate of growth can be considered to represent adequate nutrition, weight loss and alterations in body composition occur before height velocity is affected [2], and poor growth can occur for reasons other than nutrition (see below). Another way of expressing relative weight and height is the body mass index (BMI, Wt/Ht2), which is important because extremes are associated with increased morbidity and mortality [4]. Because BMI varies considerably throughout childhood, reaching a trough at 4–6 years of age, it has been suggested that it should be calculated according to height age [5]. It must be borne in mind that BMI does not distinguish between fat mass and fat-free mass (FFM), and an appropriate BMI for age (whether height age or chronological age) does not necessarily indicate ideal body composition; weight gain may be due to the laying down of excess fat rather than a balance of fat and lean tissue. Skinfold thickness is a measure of subcutaneous fat and mid-arm circumference (MAC) is a reflection of muscle mass; they may therefore be more useful in determining body composition than the calculation of BMI alone. Decreased values have been found in children with CRF [6–9]. However, both are rather unreliable tools, because consistent measurements are difficult, values may not be representative of visceral fat or of fat or muscle mass, oedema will influence values, regional fat and muscle distribution may be different in CRF, and values vary according to age in normal children. Dietary assessment The paediatric renal dietician is crucial to the successful management of nutrition in children with renal disease. Monthly review has been recommended for under-twos on dialysis, and three- to four-monthly in those over that age [10]; and six-monthly or one- to three-monthly in children with moderate and severe CRF, respectively [3]. The purpose is to prevent the development of malnutrition. Children entirely dependent on enteral feeds may need to be seen more often, particularly in infancy, when feed adjustments may be necessary as often as weekly. Assessment of intake in children taking an oral diet can depend on prospective dietary diaries, usually over 3 days, or on retrospective recall. It has been estimated that 5.9 contacts per patient (in clinic or by phone) per month in children <5 years of age, and 3.1 in children >5 years of age, are necessary to successfully support families of children on peritoneal dialysis (PD). This intensive input resulted in improvement of HtSDS and WtSDS from −1.2 and −1.32 to −1.14 and −0.73, respectively, over a 3-year period [11]. Protein intake can also be calculated using well-established formulae (protein catabolic rate, nPCR [2]). Serum albumin Serum albumin has been identified as a surrogate marker for nutritional status and morbidity/mortality in patients with end-stage renal failure (ESRF). Patients <18 years of age initiating dialysis with hypoalbuminaemia are at a higher risk of death: in 1,723 children, each fall in serum albumin of 1 g/dl at the start of dialysis was associated with a 54% higher risk of death. This was independent of other potential confounding variables [12]. Although serum albumin may be a reflection of nutrition, low levels may be due to haemodilution, nephrotic syndrome or chronic infection/inflammation [13].
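Stepping back to the anthropometric scores defined above, a minimal sketch of the two calculations follows; the reference mean and SD would come from national growth charts, and the numbers used here are placeholders rather than real chart data.

```python
def sd_score(value, ref_mean, ref_sd):
    """SD score (z score): SDs from the reference mean for age (or height age)."""
    return (value - ref_mean) / ref_sd

def bmi(weight_kg, height_m):
    """Body mass index = weight / height^2 (kg/m^2)."""
    return weight_kg / height_m ** 2

# Placeholder example: a child of height 104 cm against a 112 cm / 4.8 SD reference.
print(round(sd_score(104.0, 112.0, 4.8), 2))   # -1.67 (an HtSDS)
print(round(bmi(18.0, 1.04), 1))               # 16.6 kg/m^2
```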
Therefore, to remove the effect of fluid overload on albumin levels, it has been suggested that, if practical, levels should be checked post dialysis [14]. Chronic inflammation in itself will lead to malnutrition [15]. Low serum albumin is more common in children on PD: low levels were present in 35.9% of assays in 39 children on PD over a 2-year period, compared with none in 32 children on haemodialysis (HD), even though protein intake (estimated by nPCR) was similar, averaging 1.1 g/kg per day. Thus, children maintained on PD are at greater risk of protein malnutrition compared with peers treated with HD [16]. This may be due in part to losses in PD fluid: average losses of free amino acids (AA) vary with transporter status from 0.02 to 0.03 g/kg/day [17]. There is an inverse correlation between body weight and surface area and peritoneal protein loss, such that infants have nearly twofold greater peritoneal protein losses per square metre of body surface area than those weighing more than 50 kg. Such protein losses in infants impair normal growth and may contribute to permanent loss of growth potential [18]. Dual-energy X-ray absorptiometry (DEXA) and other methods A whole-body DEXA scan can estimate fat mass, lean mass and bone mineral density (BMD), but is affected by body water content. Other methods include bioelectrical impedance analysis (BIA) [19], total body potassium, densitometry and in vivo neutron activation analysis, but these are predominantly used as research tools [2]. Nutritional requirements for normal children Recommended daily amounts (RDA) and recommended intakes (RI) for energy, protein and nutrients vary between countries. Regardless of the national dietary recommendations that are used, it is important to consider that these are estimates of requirements for normal healthy populations and are not recommendations for absolute intakes for individuals. They serve as a guide to the energy and nutrients that an individual may require for normal growth, maintenance, development and activity. Requirements for any particular nutrient will differ between individuals. The United Kingdom dietary reference values (DRV) [20] are for infants fed artificial formulas and for older infants, children and adults consuming food. DRVs are not set for breast-fed babies, as it is considered that human milk provides the necessary amounts of nutrients. In some cases the DRVs for infants aged up to 3 months who are formula-fed are in excess of those which would be expected to derive from breast milk; this is because of the different bioavailability of some nutrients from breast and artificial milk. The DRV for energy intake is assumed to be normally distributed and is expressed as the estimated average requirement (EAR). For protein and other nutrients, requirements are expressed as reference nutrient intakes (RNI), set at 2 SDs above the average. Therefore, intakes of protein and nutrients above this amount will almost certainly be adequate for all individuals in a population. For some nutrients where there is insufficient data to establish DRVs with great confidence, safe intakes are set: a level or range of intake at which there is no risk of deficiency. Daily DRVs for energy, protein and some nutrients are given in Table 1. If a diet for a normal child is adequate in calcium, iron and vitamin C and the child is receiving adequate energy from a mixed diet, then most other nutrients are likely to be taken in adequate amounts.
Table 1 UK dietary reference values for normal populations of children [20] (mo = months, yr = years)

Age             Energy EAR (kcal)   Protein RNI (g)   Calcium RNI (mmol)   Iron RNI (mg)   Vitamin C RNI (mg)
0–3 mo          115–100/kg          2.1/kg            13.1                 1.7             25
4–6 mo          95/kg               1.6/kg            13.1                 4.3             25
7–9 mo          95/kg               1.5/kg            13.1                 7.8             25
10–12 mo        95/kg               1.5/kg            13.1                 7.8             25
1–3 yr          95/kg               1.1/kg            8.8                  6.9             30
4–6 yr          90/kg               1.1/kg            11.3                 6.1             30
7–10 yr         1,970/day           28.3/day          13.8                 8.7             30
Boy 11–14 yr    2,220/day           42.1/day          25.0                 11.3            35
Girl 11–14 yr   1,845/day           41.2/day          20.0                 14.8            35
Boy 15–18 yr    2,755/day           55.2/day          25.0                 11.3            40
Girl 15–18 yr   2,110/day           45.0/day          20.0                 14.8            40

Nutritional requirements for children with CRF, on dialysis and post transplant Energy It is unlikely that energy requirements for children with CRF differ from those of normal children, and energy intakes below the EAR will contribute to growth failure. Restoration of energy intake to 100% of the EAR allows for catch-up growth in children under 2 years and shows some benefit in older children (see section on Dietary supplements). If recurrent vomiting resulting from abnormal gastric motility and delayed gastric emptying is not treated, energy intake may need to be increased by up to 30% daily to replace lost feed and food in order to preserve growth; once vomiting is controlled, growth is maintained on normal energy requirements [21]. To ensure an adequate energy intake it is advisable to use the EAR for height age if the child is below the 2nd centile for height. There is no evidence that intakes for children on dialysis should exceed those for normal children [10], though dietary energy intake may need to be reduced for children on PD to compensate for the energy derived from dialysate glucose, estimated at 8–12 kcal/kg/day, if there is excessive weight gain [7, 22]. Post-transplant energy requirements should match those of normal children, though care needs to be taken with the increased appetite that follows steroid administration. Up to 13% of transplanted children become obese [23, 24]. A reduction in energy intake is indicated where there is excessive weight gain and will help correct any dyslipidaemia. Protein Protein intakes in CRF must provide at least 100% of the RNI if protein is not to become the limiting factor for growth. Inadequate protein will impact on body composition, with a preponderance of fat rather than lean tissue. Adequate energy must be given to promote deposition of protein [25]. To ensure an adequate protein intake, it is advisable to use the RNI for height age if the child is below the 2nd centile for height. In children undergoing PD, protein intake must provide at least 100% of the RNI plus an allowance for replacement of both transperitoneal losses and daily nitrogen losses in order to achieve positive nitrogen balance [7, 18, 22]. There are few studies describing the optimal amount of protein for children on PD, and the existing data do not include all age groups. Taking this into account, and the wide variability in transperitoneal losses of protein, recommended protein intakes for populations of children on PD may be considered generous. Routine assessment of growth, albumin and urea levels will determine the required intake for the individual child. For chronic haemodialysis, the Kidney Disease Outcomes Quality Initiative (K/DOQI) recommends the RDA for age plus an increment of 0.4 g/kg/day to achieve positive nitrogen balance [10]. This recommendation is based on work done in adults on HD who failed to maintain nitrogen balance on 1.1 g protein/kg/day [26].
Published recommended intakes for the US and the UK, shown in Tables 2 and 3, are for populations and may need adjustment for the individual.

Table 2 US recommended dietary protein for children on maintenance dialysis [10]

Age (yr)          RDA (g/kg/day)   Protein intake for HD (g/kg/day)   Protein intake for PD (g/kg/day)
Infants 0–0.5     2.2              2.6                                2.9–3.0
Infants 0.6–1.0   1.6              2.0                                2.3–2.4
Children 1–6      1.2              1.6                                1.9–2.0
Children 7–10     1.0              1.4                                1.7–1.8
Children 11–14    1.0              1.4                                1.7–1.8
Males 15–18       0.9              1.3                                1.4–1.5
Females 15–18     0.8              1.2                                1.4–1.5

Table 3 UK guidelines on dietary protein for children on maintenance dialysis [27]

Age                     RNI (g/kg/day)   Protein intake for HD (g/kg/day)   Protein intake for PD (g/kg/day)
Infants 0–3 mo          2.1              2.5                                2.8–2.9
Infants 4–12 mo         1.5–1.6          1.9                                2.2–2.3
Children 1–3 yr         1.1              1.5                                1.8–1.9
Children 4 yr–puberty   1.0              1.4–1.5                            1.7–1.9
Pubertal                1.0              1.3–1.4                            1.6–1.8
Post-pubertal           0.9              1.2–1.3                            1.4–1.5

Post-transplant protein requirements should match those of normal children. Vitamins and minerals Little is known about the requirements of children with CRF. It would be reasonable to give the RNI for vitamins, minerals and micronutrients as for normal children, with the exception of calcium, phosphate, magnesium, sodium and potassium, which may be deranged and must be determined for the individual child. UK RNIs are shown in Table 4.

Table 4 Micronutrient guidelines for children with CRF [28]

                   Infants    Children
Thiamin (mg)       0.2–0.3    0.5–1.0
Riboflavin (mg)    0.4        0.6–1.3
Niacin (mg)        3.8        8–18
Vitamin B6 (mg)    0.2–0.7    0.7–2.0
Vitamin B12 (μg)   0.3–0.5    0.5–1.5
Folic acid (μg)*   50–500     70–1000
Vitamin C (mg)     25         25–40
Vitamin A (μg)*    350        350–700
Vitamin D (μg)*    7–8.5      –
Zinc (mg)          4.0–5.0    5.0–9.5
Copper (μg)        0.2–0.3    0.3–1.0

*Vitamin A: care should be taken not to give excessive amounts of vitamin A, as the resultant high serum levels can lead to hypercalcaemia, anaemia and hyperlipidaemia [29]. A common practice guideline is not to exceed 200% of the RNI from the diet and/or supplements.
*Vitamin D: there is no need to achieve the RNI for vitamin D, as it cannot be converted to the activated form (as which it is usually supplemented). Recent recommendations for adults are to assess 25- and 1,25-vitamin D levels and replace as needed, but there are no equivalent recommendations for children.
*Folic acid: hyperhomocysteinaemia is an independent risk factor for cardiovascular disease [30, 31]. Additional folic acid may be given to effectively lower plasma homocysteine levels. A common practice guideline is to supplement when the glomerular filtration rate (GFR) is <40 ml/min/1.73 m2, but the doses used are arbitrary:
Infants: 250 μg/kg to a maximum of 2.5 mg daily
Children 1–5 years: 2.5 mg daily
Children >5 years: 5 mg daily
In adults on PD, blood concentrations of some water-soluble vitamins (C, B6 and folic acid) are reported to be low. This is due to a combination of inadequate intake, increased transperitoneal losses and increased needs. In children, supplements of these vitamins have been given with the result that blood concentrations have met or exceeded normal values [32, 33]. Accordingly, based on these authors’ recommendations and the RNIs above, the following intakes are suggested [27]:
Vitamin C: 15 mg (infants) to 60 mg (children) daily
Vitamin B6: 0.2 mg (infants) to 1.5 mg (children) daily
Folate: 60 μg (infants) to 400 μg (children) daily
These amounts may well be met from food, feeds and nutritional supplements, so it is important to assess the dietary contribution before routinely giving medicinal supplements.
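Returning briefly to the protein tables above, a worked illustration of how the guideline values translate into a prescription follows. The sketch encodes the UK values from Table 3; the age-band keys and function are ours, for illustration only, and any real prescription would be individualised as the text stresses.

```python
# UK protein targets from Table 3 [27], in g/kg/day; ranges kept as tuples.
UK_PROTEIN = {
    "0-3 mo":        {"RNI": 2.1,        "HD": 2.5,        "PD": (2.8, 2.9)},
    "4-12 mo":       {"RNI": (1.5, 1.6), "HD": 1.9,        "PD": (2.2, 2.3)},
    "1-3 yr":        {"RNI": 1.1,        "HD": 1.5,        "PD": (1.8, 1.9)},
    "4 yr-puberty":  {"RNI": 1.0,        "HD": (1.4, 1.5), "PD": (1.7, 1.9)},
    "pubertal":      {"RNI": 1.0,        "HD": (1.3, 1.4), "PD": (1.6, 1.8)},
    "post-pubertal": {"RNI": 0.9,        "HD": (1.2, 1.3), "PD": (1.4, 1.5)},
}

def protein_target(age_band, modality="RNI"):
    """Look up the guideline protein intake for an age band and dialysis modality."""
    return UK_PROTEIN[age_band][modality]

# A 2-month-old on PD: 2.8-2.9 g/kg/day (the RNI plus an allowance for dialysis losses).
print(protein_target("0-3 mo", "PD"))
```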
Whilst adequate vitamin C needs to be given to offset dialysate losses, excessive intakes should be avoided, as the resulting elevated oxalate levels may lead to cardiovascular complications [34]. If folic acid is given to lower plasma homocysteine, this will more than compensate for dialysate losses of folate. The K/DOQI recommends that supplementation should be considered if dietary intake alone does not meet or exceed the dietary reference intake, if blood vitamin levels are below normal values, or if there is clinical evidence of deficiency [10]. There are no reported specific micronutrient requirements for children on HD and post transplant, and 100% of the RNIs can be considered the goal for these children. Calcium and phosphate Control of phosphate, calcium and parathyroid hormone (PTH) levels is necessary to prevent renal bone disease. Dietary phosphate may need to be restricted when the GFR falls below the normal range, and almost always when below 50 ml/min/1.73 m2. The following common practice guidelines will help maintain serum phosphate within acceptable reference ranges, although phosphate binders may also be necessary:
Infants <10 kg: <400 mg daily
Children 10–20 kg: <600 mg daily
Children 20–40 kg: <800 mg daily
Children >40 kg: <1,000 mg daily
Iron, copper and zinc Anaemia can be prevented by the prescription of iron supplements and erythropoietin (rhuEPO). Advice should be given to increase dietary haem iron and to reduce the inhibition of non-haem iron absorption by phytates (in wholegrains and legumes), polyphenols (in tea, coffee and cocoa), calcium (in dairy products) and the simultaneous administration of phosphate binders and antacids. It may be necessary to give intravenous iron if stores are low. Low dietary intakes of copper and zinc are reported [35] in children on PD. The K/DOQI recommends that the intake of these be monitored every 4–6 months and supplements given if necessary [10]. Reports of spontaneous nutritional intake in children with CRF Several studies have demonstrated decreased spontaneous intake in children with CRF. Four-day weighed dietary records from 50 children with a GFR <65 ml/min/1.73 m2 and 93 healthy children showed an energy intake of 76–88% of the RI in CRF patients and 90–93% in controls. Protein intake was 2.1–3.1 g/kg per day in controls and 1.6–2.7 g/kg per day in CRF patients, so that overall the energy intake was 10% and the protein intake 33% lower in CRF patients than in healthy children [36]. Other studies have shown low energy intake (87% of the RNI) but a high protein/energy ratio, with intakes of protein (223%), carbohydrate (73%), fat (110%), polyunsaturated (55%), monounsaturated (129%) and saturated fatty acids (111%), and a relative distribution of calories of 15% from proteins, 48% from carbohydrates and 37% from lipids, in 15 children with moderate CRF [14]; and RIs in 82 children with CRF were <86% for energy and >161% for protein [12]. Although energy intake is low, it may be proportionate to body weight: 4-day food records from 120 children with CRF found an energy intake of 80% of the RI for age, decreasing with increasing age. However, this was in the normal range when factored by body weight. The protein intake was 153% of the RDA [37]. Intake deteriorates with severity of CRF: energy intakes correlated negatively with GFR in 95 children with CRF, and fell to 85% of the EAR when the GFR was <25 ml/min/1.73 m2 [38]. Nitrogen balance studies have been performed in 19 children on PD.
Protein intake was 1.64 g/kg/day (126% of the RDA), and calorie intake reached 75% of the RDA. Nitrogen losses were 0.205 g/kg/day, and nitrogen balance was positive in three-quarters of the studies, correlating with nitrogen and calorie intake [39]. Intake also decreases with time: 3-day semi-quantitative dietary diaries in 51 children with a GFR <75 ml/min/1.73 m2, assessed over 2 years, showed protein intake to decrease by 0.4 g/kg per day and calcium intake by 20% of the RNI [40]. Energy and protein intake (energy more than protein), all anthropometric measurements, and plasma proteins and AAs were low in 24 children on continuous ambulatory PD (CAPD), particularly in those <10 years of age [13]. Low intakes of calcium [35–38], zinc [35, 37, 40, 41] and vitamins [37, 40, 42] are also reported. The epidemiology of nutritional disturbances in renal disease, including height, weight and body composition Height and weight It is during the infantile phase of growth that the most significant loss of height potential can occur, but it is also the phase during which there is the greatest potential for catch-up with nutritional intervention. Many infants with CRF are already growth-retarded by the time they are first seen in a paediatric nephrology service: a loss of HtSDS from birth of −1.68 SD (up to −5 SD/year) has been reported [43]. Approximately one-third of the reduction in height occurs during foetal life and one-third during the first 3 months, accompanied by a similar decline in head circumference [44–46]. Mean HtSDS below the lower limit of normal has been reported in most studies [43–46]. However, catch-up can occur, even on dialysis [47–49]. Poor nutritional status is associated with starting PD at a younger age [50]. Interestingly, infants who grew well continued with catch-up in early childhood [43, 47]. During the childhood phase, growth in CRF usually parallels the centiles but without catch-up [43, 46]. The North American Pediatric Renal Transplant Cooperative Study (NAPRTCS) CRF database (<20 years of age, GFR <75 ml/min/1.73 m2) has shown a mean HtSDS that has changed little since 1996, when it was −1.5 in 1,725 patients, to −1.4 (one third >−1.88) in 3,863 patients in 1998 and −1.4 in 4,666 patients in 2001 [51–53]. European study group data on 321 children aged 1–10 years with congenital CRF reported a mean HtSDS of −2.37. Increasing severity of CRF adversely affects growth: when the patients were divided into those with a GFR greater or less than 25 ml/min/1.73 m2, the HtSDS was −1.65 and −2.79, respectively [44]. Reports of growth on dialysis vary from improvement [54], to no change [55], to declining HtSDS [56], with a worsening of nutritional status in children dialysed for more than a year [50]. The pubertal phase of rapid growth is another period when loss of height potential can occur. Early studies demonstrated that puberty was delayed, with an irreversible decline in height SDS, particularly in patients on dialysis [57, 58]. However, others have reported normal pubertal progression and growth [43, 46, 59]. Body composition Children with CRF and short stature are significantly protein-depleted for age, although not for height: 17 patients, mean age 12.9 years, had total body nitrogen (TBN) and TBN/height of 54% and 63%, respectively, when predicted from age, but 100% when predicted from height. Energy and protein intakes were 65% and 172% of the RDA, respectively.
This suggests that chronic energy deficiency may contribute to impaired protein deposition which, in turn, may be important in the pathogenesis of growth failure in CRF [25]. It has been suggested that the ratio of the length of trunk to limb is low in CRF, suggesting a disproportionately greater effect of disease and/or treatment on spinal growth [60], although not all have found this [61]. BIA in children starting PD showed an improvement of hydration and nutrition after 6 months, although levels remained below normal, suggesting that introduction of dialysis should not be left until malnutrition has developed [62, 63]. DEXA studies of 20 PD patients receiving the RDA for energy and a daily protein intake of 144.3% and 129.9% of the RI at months 1 and 6 showed an increase in BMD, bone mineral content (BMC) and FFM. However, the daily protein intake showed a negative correlation with these parameters and also with plasma bicarbonate, suggesting that a high protein intake may negatively affect bone mineralization and FFM through its effect on acid-base status [64]. Disturbances in plasma and intracellular AAs have been found in CRF. Muscle isoleucine and valine levels and the valine/glycine ratio were low in children with CRF who were short but had no other signs of malnutrition (normal skinfold thickness, MAC and serum proteins) [65]. Levels have been studied in ten children on CAPD. Although plasma levels of essential AAs (EAAs) were low, only muscle intracellular leucine and valine were low, whereas both plasma and intracellular levels of some non-essential AAs were high. No correlations were found between plasma and muscle AAs and indicators of nutritional status, except muscle branched-chain AA levels with BMI [66]. The pathological mechanisms underlying poor appetite, abnormal metabolic rate and endocrine disturbances in renal disease Poor appetite is common in children with CRF and may in part be due to abnormal taste sensation [67]. Appetite is also affected by cytokines, and their roles in CRF, along with their effect on metabolic rate, have been extensively and excellently reviewed [68, 69]. Malnutrition may be an inappropriate term in CRF because it implies that dietary replacement would be curative (which is not always the case in CRF); the term cachexia, which implies replacement of muscle with fat and declining plasma proteins, may be more appropriate [68, 69]. Cachexia may result not only from anorexia but also from acidosis and inflammation (which are common in CRF), which cause elevated levels of cytokines such as leptin, TNF-α, IL-1 and IL-6. These act through the hypothalamus to affect appetite and metabolic rate and maintain the constancy of fat stores. Leptin is produced by adipocytes and is probably the most important cytokine involved in this process. When body fat falls, levels of leptin decline and the brain responds by increasing appetite and metabolic efficiency; in contrast, when leptin levels rise, food intake is decreased and metabolic rate increases. However, leptin is excreted by the kidney and not cleared by dialysis, so levels can be paradoxically high in malnourished patients, contributing further to decreased food intake and increased metabolic rate. Serum leptin levels were >95th percentile in 45% of 134 children with varying severity of CRF, and correlated positively with their percentage body fat and GFR and negatively with spontaneous energy intake [70]. Leptin levels also correlate with CRP, a marker of inflammation, and with insulin resistance [67, 69].
Leptin signalling in the brain occurs through the hypothalamic melanocortin receptors, which may offer a new area for therapeutic intervention in the cachexia of CRF. The part played by the short-term regulators of satiety, such as ghrelin (which stimulates appetite), is not yet fully understood [68, 69]. Abnormalities of the GH/IGF-1 axis occur in CRF. Whether they are primarily due to CRF itself or secondary to malnutrition is controversial [71, 72]. GH levels are normal to high and IGF-1 levels are low in both CRF and malnutrition [73]. IGF-1 decreases according to nutritional status in children on HD [71]. Leptin levels correlate positively with IGF-1 and negatively with GH in children with starvation [73]. However, in CRF this relationship is disrupted: in 17 children on HD with energy and protein intakes of 40–70 kcal/kg per day and 1–1.54 g/kg per day, respectively, and reduced anthropometric measurements, IGF-1 levels were low but leptin levels were high [74]. Evidence for the benefit of dietetic input, dietary supplementation, nasogastric and gastrostomy feeds and intradialytic feeding There are very few controlled trials, so recommendations are based on the evidence available. Some, but not all, reports have been able to establish a relationship between nutritional intake and growth. Growth velocity correlated positively with energy intake in 17 children with CRF [75], and with energy but not protein intake in 15 children on CAPD [76]. Growth velocity was inversely correlated with dietary protein intake and positively correlated with caloric intake both before the initiation of rhGH therapy and after the first year of treatment in 31 children on dialysis [77]. The advantage of input from a dietitian has been specifically demonstrated in two studies [11, 40], but it is likely that in all studies dietetic input was necessary. Dietary supplements Because of the importance of nutrition in the first two years of life, it might be expected that enteral nutrition would be most effective at this age. Most studies have therefore concentrated on this age group, and there is evidence to show that nutritional supplementation is of benefit. Eight studies have demonstrated an improvement in growth [21, 47, 78–83], three showed an initial decline followed by stabilisation [84–86], three showed no effect on growth [11, 87, 88] and one showed a decline in HtSDS [89]. All but one study [86] used feeds administered by nasogastric or gastrostomy tubes and aimed for at least the EAR for energy and the RNI for protein, with a protein supplement on dialysis. An early study of nasogastric feeding in 14 children weighing <10 kg demonstrated a benefit in HtSDS and WtSDS in 11 [78]. Twenty-six children aged <2 years with a GFR <26 ml/min/1.73 m2 were treated with a whey-based infant formula (supplemented with fat and/or carbohydrate) which provided 100% of the RNI for protein for height age and 100% of the EAR for energy for chronological age. HtSDS increased from −2.9 to −2.1 over 2 years [21]. A similar feed resulted in improvement in HtSDS from −2.34 at 6 months of age to −1.93 at 2 years in 24 infants with a GFR <20 ml/min/1.73 m2, and from −2.17 to −1.24 in 13 infants on dialysis over a similar time-frame, although a protein supplement for dialysis was included [47]. A further study from the same centre showed an increase in HtSDS from −1.8 to −0.8 at 2 years in 20 infants on PD [79].
Three studies, each reporting the results of nasogastric feeding in three young children with CRF, have shown benefit in eight out of the nine children [80–82]. It is important not to restrict the salt and water content of the diet in the salt-wasting polyuric infant: a feed providing just over the RDA for calories and protein, diluted to 0.3–0.5 kcal/ml with an additional 2–4 mEq of sodium/100 ml, resulted in improvement of HtSDS by 1.37 SD at 1 year and 1.82 SD at 2 years in 24 infants [83]. Three studies have shown an initial decline followed by normalisation of growth. Twelve infants with CRF dropped to an HtSDS of −2 by 12 months of age and then stabilised at that level [84]. Decline in HtSDS was arrested in eight children <2.5 years of age after starting gastrostomy feeds [85] and stabilised after 3 months in the only study using supplementation without nasogastric or gastrostomy feeds [86]. In three studies, enteral feeds made no impact on growth: there was no change in infants with CRF [87], on HD [88] or on PD [11]. In one study, a decline in HtSDS occurred in 82 children <2 years of age at the start of dialysis [89]. Whether supplemental feeds benefit children over 2 years of age has been challenged [90]. Six studies have included older children [21, 80, 81, 85, 89, 91]. Of these, there was an improvement in growth in three [80, 81, 91]. Dietary advice and supplements of glucose polymer and vitamins (as Ketovite) were given to 65 children aged 2–16 years with a GFR <75 ml/min/1.73 m2 if their intakes fell below 80% of the EAR and RNI, respectively, as assessed by annual 3-day dietary diaries. Mean HtSDS was maintained in those with a GFR of 25–75 ml/min/1.73 m2 and significantly increased in children with a GFR <25 ml/min/1.73 m2. There was an increase in HtSDS and/or BMI SDS in all the patients on supplements, and change in energy intake correlated with change in HtSDS in those with a GFR <25 ml/min/1.73 m2 [91]. Three children with a GFR of 20–25 ml/min/1.73 m2, fed overnight by nasogastric tube for 11–16 months with increasing amounts until weight gain occurred, improved their HtSDS [81], as did three children with CRF over the course of a year [80]. Two studies have shown no change in HtSDS. Nine children aged 2–5 years treated with a whole-protein enteral feed supplemented with fat and/or carbohydrate showed no change in HtSDS (−2.3 to −2.0) [21], as did seven children given gastrostomy feeds [85]. One study has demonstrated an ongoing decline in HtSDS in children aged 2–5 years at the start of dialysis, 14 of whom were given supplements and 20 not [89]. Complications of gastrostomy are uncommon, but include gastro-colic fistulae, paraoesophageal herniae and, in children on PD, post-surgical peritonitis and an increased risk of exit-site infection and dialysis catheter removal from infection, a risk that might be reduced if open rather than percutaneous surgery is used [92, 93]. After removal, the track usually closes spontaneously [94]. There have been concerns that enteral tube feeding precludes the development of normal feeding behaviour [95]. However, other studies have shown that, despite long-term nasogastric or gastrostomy feeding, oral feeding is resumed in the majority of children after successful transplantation [21, 96]. Positive reinforcement at feeding times using behavioural therapy techniques allowed five infants who had PD and nasogastric tube feeding initiated in the first month of life, and who showed persistent food refusal, to convert to oral feeding [97].
Our impression is that spontaneous oral intake increases with long-term tube feeding; indeed, over a period of 31 months energy intake from the feed did not increase, implying that oral intake had improved over this time to support the demonstrated growth [21]. Reports of the use of Nissen fundoplication are principally from one group [21, 47, 54, 79, 88, 92], making assessment of its effect difficult, although results of growth from this centre are good. Essential amino acid (EAA) supplements Serum EAAs, carnitine and total protein levels have been demonstrated to be low in CRF, particularly in patients on PD [98]. It has been suggested, therefore, that a low-protein diet supplemented with EAAs might benefit growth by ensuring adequate AA intake without protein toxicity. However, results have been inconclusive. No improvement in growth was seen in seven children with severe CRF who were given half the protein RDA for height age as EAAs for 6–8 months [99]. Ten children with CRF managed for 3 years on a strict low-protein diet supplemented with a mixture of the keto and amino forms of the EAAs and histidine showed a significant increase in height and weight velocity [100]. HtSDS improved from −1.93 to −1.37 over 30 months in 20 patients with a GFR <50 ml/min/1.73 m2 on a diet of 0.6 g/kg of protein supplemented with ketoacids [101]. Ten children on HD given AA supplementation (0.25 g/kg body weight i.v.) with and without carnitine (25 mg/kg body weight i.v.) had no overall improvement in AA levels [102]. Amino acid-containing peritoneal dialysis solutions Excessive glucose absorption and dialysate AA and protein losses contribute to malnutrition in children on PD. It has been suggested, therefore, that using an AA dialysate might both decrease the glucose load and replace AA losses. AAs are absorbed in proportion to the concentration difference between dialysate and plasma; after a 1% AA exchange in seven children on CAPD, the rise in plasma levels of AAs correlated with the ratio of the amount of AA in the bag to the basal plasma concentration. The amount of AA absorbed was 66% after 1 h, and 86% after 4 h and 6 h [103, 104]. However, there is no evidence for any long-term nutritional benefit: eight children on CAPD who had a first morning exchange of 1% AA dialysate instead of dextrose for 12–18 months had no improvement in any plasma or anthropometric parameter of nutrition; plasma urea increased. Plasma EAAs, which had been low, improved, but the intracellular pool of free AAs, measured in polymorphonuclear leucocytes, did not improve [105]. Two randomised prospective cross-over studies comparing 3 months of AA dialysate with 3 months of dextrose dialysate have been performed, each in seven growth-retarded children, one on CAPD [106] and one on continuous cycling PD (CCPD) [107]. In the children on CAPD there was no nutritional benefit from the AA dialysis [106]. The children on CCPD received dextrose dialysate overnight, plus a single daytime dwell of either AA dialysate or dextrose dialysate. Appetite, calorie and protein intake improved and total body nitrogen increased in half the children during AA dialysis. However, total plasma protein and albumin did not change and fasting AAs after 3 months of AA dialysis were comparable to baseline; plasma urea concentrations were higher [107]. The high plasma urea may be due to inadequate protein synthesis in the absence of glucose. Ten children underwent overnight CCPD using a 3:1 ratio of glucose to AA solutions simultaneously during the night.
Glucose absorption was 33.7% and AA absorption 55.2% of the infused amount, and although plasma AA levels were high for the entire ambulatory PD (APD) treatment, the plasma urea levels did not increase, suggesting that the AAs were being used for protein synthesis with this regimen [108]. Disadvantages compared with glucose include cost, and reports of fluid removal are variable. However, equal amounts of urea and creatinine are removed, normoglycaemia is maintained and there are no reported adverse clinical or biochemical effects, other than a slight increase in plasma urea [106, 107, 109]. Intradialytic parenteral nutrition (IDPN) There are only four studies of IDPN during HD in children, so it is difficult to draw conclusions about its effectiveness. Losses of AAs occur into the dialysate during HD and depend on their plasma concentrations and molecular weights. AAs were added to the dialysate of three children in increasing concentrations. Plasma non-essential AAs were not affected, but EAAs improved [110]. Four malnourished children on HD were given IDPN as AAs (8.5% solution), glucose (10% to 15% dextrose) and 20% fat emulsion at every dialysis session (three times a week) for 7–12 weeks. Oral intake improved and, although weight did not improve during treatment, it did so subsequently. Albumin did not change [111]. The weights of three children improved after 6 weeks of IDPN; again, albumin did not improve [112]. Nine patients on HD who had a >10% weight loss and were below the 90th percentile of ideal body weight received thrice-weekly IDPN. In six, BMI increased in the first 5 months and PCR increased, whereas serum albumin did not change; those who did not gain weight were considered to have psychosocial causes for their malnutrition [113]. Nutritional causes of poor growth not related to energy and protein Sodium Requirements for sodium vary according to the type of renal disease. Congenital structural abnormalities often result in an obligatory urinary loss of salt and water. Such children may become chronically salt- and water-depleted and need sodium supplementation and free access to water, as salt wasting impairs growth [83]. On the other hand, children with glomerular disease need to restrict their sodium intake. Young children on PD may need sodium supplementation, as considerable sodium losses can occur in dialysate. Acidosis Acidosis is associated with a catabolic state, suggesting that acidosis-related protein wasting could contribute to growth retardation [114, 115]. Correction of acidosis improves serum albumin, catabolic rate and growth [83, 116, 117]. Anaemia Anaemia is a well-recognised cause of poor appetite. However, studies that include blinded, placebo-controlled trials have found that, despite subjective increases in appetite, there were no consistent improvements in dietary intake or anthropometric measures during rhuEPO treatment [118–122]. Vitamin D The dose of calcitriol prescribed to control hyperparathyroidism must be balanced against its potential to depress the activity of chondrocytes, causing adynamic bone disease. Large doses impair growth, even if intermittent [123, 124], but the frequency of administration does not affect growth if small doses are used [125, 126]. Growth hormone There may be cases when, despite at least 6 months of adequate nutrition, growth continues to be poor. GH may be offered in these circumstances.
The effect of dialysis adequacy on nutrition Several studies have examined whether increasing the dialysis dose benefits appetite, protein intake, nutrition and growth. Twenty-one children on CCPD showed an improvement in HtSDS when aiming for a Kt/V of >2 and a creatinine clearance of >60 l/week/1.73 m2 compared with 1.7 and 40 l/week/1.73 m2 [127]. However, in some, but not all, studies there would appear to be a ceiling above which no further benefit occurs. The nPCR and serum albumin were assessed according to Kt/V in 15 patients on HD. Serum albumin levels were normal. The nPCR was lowest in patients with a Kt/V <1.3, but increasing the Kt/V above 1.6 did not improve nPCR further, suggesting that although adequate dialysis needs to be achieved in order to ensure good protein intake, high dialysis doses are of no further benefit [128]. However, in 12 children taking 90.6% and 155.9% of their requirements for energy and protein, respectively, and receiving HD with a Kt/V of 2.00 and a urea reduction ratio of 84.7%, there was an improvement in HtSDS of +0.31 SD/year and pubertal growth was normal, suggesting that for HD, increasing the dialysis dose does improve growth [129]. In PD, but not HD, there was an inverse relationship between albumin level and Kt/V, suggesting that increasing the PD dose may reach a point of no further benefit due to increasing albumin losses in PD fluid [130]. PD causes an influx of glucose, which can contribute to obesity. High transporter status was associated negatively with HtSDS and positively with BMI SDS in 51 children on PD. Large dialysate volumes also affected BMI SDS [131]. Improving the dialysis dose can be achieved by the addition of an icodextrin daytime dwell to overnight PD. A cross-over study in eight children of overnight PD with or without the addition of a daytime dwell with 1,100 ml/m2 icodextrin for a week showed an improvement in weekly dialysis creatinine clearance from 35 to 65 l/week/1.73 m2 and in Kt/V from 1.99 to 2.54. However, protein and calorie intake did not improve, and peritoneal albumin loss and serum albumin did not change; there was increased loss of AAs, although plasma AA levels did not change [132]. Residual renal function (RRF) has an important positive effect on clearance and growth. Mean HtSDS improved from −1.78 to −1.64 over a year of PD in 12 patients with RRF, but declined from −1.37 to −1.90 in 12 patients without. Weekly Kt/V was not different, and only the native kidney Kt/V and creatinine clearance correlated with growth, suggesting that clearance obtained by PD cannot be equated with that obtained by native kidneys [133]. Eleven of 20 patients on PD with a minimum total Kt/V of 2.1, a daily protein intake of 3.25 g/kg/day and an HtSDS of −2.3 improved their HtSDS by 0.55, while in nine it declined by 0.50. The variables affecting growth were nitrogen balance and residual Kt/V [134]. The role of nutrition in the outcome of children with CRF Malnutrition is associated with increased mortality. The association with plasma albumin levels has already been discussed [12]. Of 2,306 children, those with an HtSDS less than −2.5 at the start of dialysis had a twofold higher risk of death [135]. In 1,949 children with ESRF, each decrease in height by 1 SD was associated with a 14% increase in the risk of death, and there was a U-shaped association between BMI and death [4]. Part of this may be due to an increased risk of infection in malnourished patients [136].
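The adequacy indices quoted throughout this section (Kt/V and the urea reduction ratio) can be estimated from pre- and post-dialysis urea values. The sketch below uses the widely cited second-generation Daugirdas formula for single-pool Kt/V; it is an illustration with placeholder numbers, not a prescriptive calculation for any patient.

```python
import math

def urr(pre_urea, post_urea):
    """Urea reduction ratio, %."""
    return 100.0 * (1.0 - post_urea / pre_urea)

def sp_ktv(pre_urea, post_urea, hours, uf_litres, post_weight_kg):
    """Single-pool Kt/V, second-generation Daugirdas estimate."""
    r = post_urea / pre_urea
    return -math.log(r - 0.008 * hours) + (4 - 3.5 * r) * uf_litres / post_weight_kg

# Placeholder session: urea 20 -> 4 mmol/l over 4 h, 1.5 l ultrafiltered, 30 kg child.
print(round(urr(20, 4), 1), round(sp_ktv(20, 4, 4, 1.5, 30), 2))  # 80.0 1.95
```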
In conclusion, commencement of careful nutritional support early in the course of disease may improve not only growth but also survival in children of all ages with CRF. Multiple choice questions (Answers appear following the reference list) For each question answer true or false:
During the phases of growth:
1. Rate of growth is highest during prenatal life
2. 50% of final height is achieved by the age of 2 years
3. The infantile phase of growth is principally dependent on growth hormone
4. Growth rate stays the same throughout the childhood phase of growth
5. A pubertal growth spurt can occur without the development of secondary sexual characteristics
Assessment of nutritional status in CRF and on dialysis:
1. It may be more appropriate to express measures of growth according to height age rather than chronological age
2. The height standard deviation score (HtSDS) is the number of standard deviations from the mean for a normal population of the same age and sex
3. The BMI is the Ht/Wt2 and indicates the proportion of fat mass to fat-free mass
4. Low serum albumin is always an indication of malnutrition
5. Peritoneal protein losses in dialysate are twofold greater in relation to body surface area in infants than in those >50 kg in weight
Nutritional requirements:
1. The estimated protein requirement for a normal healthy 30-week-old girl weighing 6.0 kg (0.4th centile) and 59.0 cm in length (<0.4th centile) is >2.1 g/kg/day
2. Her estimated energy requirement is 150 cal/kg/day
3. The prescribed dietary protein intake for the same child on PD would be 2.8–3.0 g/kg/day
4. Children with CRF or on dialysis need a calorie intake that exceeds the EAR for height age
5. Supplements of vitamin A are necessary in children on dialysis
Appetite and metabolic rate:
1. Abnormal taste sensation can occur in CRF and on dialysis
2. Leptin is produced by adipocytes and levels are high in CRF and on dialysis
3. High leptin levels cause an increase in food intake and metabolic rate
4. GH levels are normal or high and IGF-1 levels are low in CRF
5. GH levels are normal or high and IGF-1 levels are low in malnutrition
Dietary supplementation:
1. Salt restriction is important in all children with CRF
2. Nutritional supplementation has not been shown to benefit children over two years of age
3. Increasing dialysis dose in PD may increase peritoneal dialysate protein losses and contribute to obesity
4. Sodium supplementation may be necessary in children on PD
5. Gastrostomy placement is preferable before PD commences
[ "nutrition", "dialysis", "growth", "chronic renal failure" ]
[ "P", "P", "P", "P" ]
Pediatr_Radiol-3-1-1891641
Doppler waveforms of the ureteric jet: an overview and implications for the presence of a functional sphincter at the vesicoureteric junction
This paper is a comprehensive review of the Doppler waveform appearance of ureteric jets. Six jet waveform patterns have been identified: monophasic, biphasic, triphasic, polyphasic, square and continuous. Details of the physical properties of jet patterns and their changes under various physiological conditions are illustrated. The immature monophasic ureteric jet pattern is common in infancy and early childhood up to around 4 years of age. This pattern also has a high incidence in older children with urinary tract infection/vesicoureteric reflux and nocturnal enuresis, and in other special physiological conditions such as in children undergoing general anaesthesia, in women during pregnancy, and in patients who have had ureteric transplantation. A hypothesis of dual myogenic and neurogenic components is proposed to explain the mode of action of the vesicoureteric junction (VUJ). The implication of this hypothesis is that it alters the scientific basis of the understanding of the VUJ. Furthermore, the application of colour Doppler US to ureteric jets may provide a non-invasive technique to study the physiology or pathophysiology of the VUJ in humans. This might shed light on novel approaches to the monitoring and treatment of diseases related to VUJ function. Introduction When the bolus of urine being transmitted through the ureter reaches the terminal portion, it is ejected forcefully into the bladder through the vesicoureteric junction (VUJ). This creates a jet of urine that can be seen within the urinary bladder during cystoscopy and grey-scale ultrasonography (US). Ureteric jets are also occasionally visible during intravenous urography (IVU) and voiding cystourethrography (VCU). In 2-D grey-scale real-time US imaging, the jet can be visualized as a stream or burst of low-intensity echoes emerging from the ureteric orifice. Each ureteric jet usually lasts for a few seconds and is fast enough to produce a frequency shift; therefore the ureteric jet can be demonstrated by colour Doppler US. Colour Doppler US is in fact the easiest method for demonstrating the jet. It is also amenable to further characterization using a pulsed-wave Doppler waveform. The US appearance of the jet has been documented in a number of studies, and the jet can be consistently demonstrated in both humans [1, 2] and animals such as the dog [3]. This review is based on a number of previous studies that involved US scanning of a total of 2,128 subjects, including a normal population of 1,341 subjects. The characteristics of ureteric jets in this normal population are described, and the effects of age, gender and bladder filling (based on a subgroup of 102 normal adult females) are discussed. A summary of jet patterns seen in children with urinary tract infection/vesicoureteric reflux (VUR) (n = 98) [4] and nocturnal enuresis (n = 511) [5] is also presented. Special groups, including pregnant women (n = 107) [6], anaesthetized children (n = 16) [7] and subjects with ureteric transplantation (n = 55) [8], are also discussed, to examine changes in ureteric jet patterns under specific physiological conditions. Overall, the observations suggest a possible correlation of different jet patterns with functional sphincteric action of the VUJ. A hypothesis of dual myogenic and neurogenic components is proposed to explain the mode of action of the VUJ. Discussion Basic ureteric jet patterns on Doppler US Jequier et al. [9] were the first to describe the ureteric jet.
In their study of children, they demonstrated that ureteric jets had both crescendo and decrescendo forms. The jet waveforms ranged from a single to as many as four “humps”. Cox et al. [2] noted that the number of peaks (the “humps” of Jequier et al.) in the ureteric jet varied from one to four, while Wu et al. [10] found only two or three peaks. Both of the latter studies involved only adults. To the best of our knowledge, ours is the only group that has studied ureteric jet patterns in detail in a cohort of subjects with a wide age range, including a large paediatric population. Six basic patterns have been identified according to the number of peaks within a single ureteric jet: monophasic, biphasic, triphasic, polyphasic, square and continuous [11] (Fig. 1). Among these, the square and continuous waveforms represent modified waveforms under the state of forced diuresis and they were deliberately excluded from most of the study analyses. The biphasic, triphasic and polyphasic waveforms are grouped under the category of mature complex jets, while the monophasic jet is classified as the immature jet. We have shown that the majority of the population have a complex mature waveform of the ureteric jet, while the immature monophasic waveform has a significantly higher incidence in young children. This is discussed in more detail below.

Fig. 1 Six patterns of the ureteric jet: (a) monophasic, (b) biphasic, (c) triphasic, (d) polyphasic, (e) square, and (f) continuous

The initial slope, duration and maximum velocity (peak velocity) of the strongest jet are measured on every Doppler waveform for quantification purposes. Significant differences have been found in these physical parameters among the four jet patterns: monophasic, biphasic, triphasic and polyphasic. The monophasic jet has the shortest duration, lowest velocity and smallest initial slope [11].
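In computational terms, this classification amounts to counting velocity peaks in the spectral trace and reading off a few simple quantities. The following Python sketch illustrates the idea; the function, thresholds and sampling assumptions are illustrative only and do not describe the analysis software used in the studies reviewed here.

```python
# Illustrative sketch only: classify a ureteric jet Doppler trace by peak
# count and derive the jet parameters discussed above (duration, peak
# velocity, initial slope). The prominence threshold and noise floor are
# arbitrary assumptions, not values from the reviewed studies.
import numpy as np
from scipy.signal import find_peaks

def classify_jet(velocity, dt, prominence=10.0):
    """velocity: 1-D array of jet velocity (cm/s), sampled every dt seconds."""
    peaks, _ = find_peaks(velocity, prominence=prominence)
    if len(peaks) == 0:
        raise ValueError("no jet detected")
    names = {1: "monophasic", 2: "biphasic", 3: "triphasic"}
    pattern = names.get(len(peaks), "polyphasic")

    # Jet duration: span over which velocity exceeds a small noise floor.
    active = np.flatnonzero(velocity > 0.05 * velocity.max())
    duration = (active[-1] - active[0]) * dt          # s

    peak_velocity = float(velocity.max())             # cm/s

    # Initial slope: rate of rise from jet onset to the first peak.
    rise = velocity[peaks[0]] - velocity[active[0]]
    initial_slope = rise / (max(peaks[0] - active[0], 1) * dt)  # cm/s^2

    return pattern, duration, peak_velocity, initial_slope
```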
Occasional modification features of the ureteric jet Some interesting features have been observed in the jet patterns of some normal subjects. These features include (a) the presence of breaks, (b) a multispike pattern, and (c) a change of angle of the jet between the beginning and the end. The number of cases with the above features is too small for statistical analysis; however, we describe them in detail for completeness of the whole spectrum of ureteric jet patterns. These features also provide indirect supportive evidence for the hypothesis of functional sphincteric action at the VUJ and are discussed further in the final section. Breaks The presence of a break is defined as the total absence of Doppler signal between peaks within a single ureteric jet waveform (Fig. 2). Breaks were identified in 5.7% (149/2,629 ureters) of the study population. Comparing different groups of subjects, most breaks were found to occur in adult females when the bladder is at maximum capacity, i.e. when the subjects experience an urgent desire to micturate (33.3% of adult females, P<0.05, chi-squared test).

Fig. 2 Break within a ureteric jet

Multispike pattern A multispike pattern is defined as pulsations noted within a single jet waveform resulting from pulsation transmitted from adjacent arteries (Fig. 3). The incidence of a multispike pattern was 1.9% (50/2,629) in the normal population; it is more commonly observed when the bladder is extremely full (forced diuresis).

Fig. 3 A multispike pattern of a ureteric jet

Change in angle of the jet A change in the angle within a single jet waveform is illustrated in Fig. 4. The incidence of change in angle of the ureteric jet in the whole normal population was 4.3% (113/2,629).

Fig. 4 Change in angle of the ureteric jet (a) at the beginning and (b) at the end of the ureteric waveform

Ureteric jet pattern and physical properties of jets in normal subjects General properties Direction of flow Dubbins et al. [1] and Elejalde and de Elejalde [12] have found on both US and MRI that the ureteric jet is usually directed anteriorly or anteromedially (with or without crossing of the jets), while others have found ureteric jets directed in a more vertical direction or perpendicular to the bladder base [2, 9, 13–16]. Our US findings are in agreement with the above findings on the direction of flow of the ureteric jet. Mean jet velocity In our studies, the mean velocity of ureteric jets in children was found to be 34.03 cm/s for the monophasic pattern and 61.82 cm/s for the complex pattern (Tables 1 and 2). These values are higher than those reported previously. The mean jet velocity previously reported in children aged from 26 days to 17 years varies between 18 and 31.6 cm/s [9, 17, 18]. The discrepancy between the findings of our study and those of previous studies can be explained by the different proportions of children of different ages. The mean velocity in adults in our cohort was found to be 57.65 cm/s for the monophasic pattern and 78.89 cm/s for the complex pattern. These values are similar to those reported previously [2, 18, 19].

Table 1 Mean values of jet parameters in children and adults with the monophasic pattern

                          Right side                       Left side
                          Children   Adults    P value     Children   Adults    P value
Number of ureteric jets   83         18        –           80         18        –
Initial slope (cm s−2)    211.82     195.54    0.60        256.55     281.10    0.87
Velocity (cm s−1)         34.03      57.65     <0.01       38.66      63.93     <0.01
Duration (s)              1.17       1.91      <0.01       1.17       1.90      <0.01

Table 2 Mean values of jet parameters in children and adults with the complex pattern

                          Right side                       Left side
                          Children   Adults    P value     Children   Adults    P value
Number of ureteric jets   293        910       –           296        892       –
Initial slope (cm s−2)    293.32     271.21    0.09        264.48     309.13    0.13
Velocity (cm s−1)         61.82      79.89     <0.01       61.97      73.83     <0.01
Duration (s)              5.26       6.92      <0.01       5.15       7.03      <0.01

Mean jet duration In our studies, the mean jet duration in children was 1.17 s for the monophasic pattern and 5.26 s for the complex pattern. In adults, the mean jet duration was 1.91 s for the monophasic pattern and 6.9 s for the complex pattern (Tables 1 and 2). These values are similar to those reported previously. In adults, previously reported jet durations range from 3.5 to 15 s [2, 16, 18, 20], while in children previously reported mean jet durations in two different series were 2.77±1.53 s [9] and 1.8±0.2 s [18]. Laterality difference in ureteric jets In general, there are no significant differences in waveform pattern, initial slope, velocity and duration of ureteric jets between the right and left sides in either children or adults. This is in agreement with the study of Matsuda and Saitoh [18]. There were two exceptions in our cohort, which might not have clinical significance. Boys were found to have a higher incidence of the monophasic waveform in jets on the left side compared with girls (P<0.01, chi-squared test; Table 3), while adult males have a lower velocity on the left than the right (P<0.01, paired sample t-test; Table 4).
Table 3 Waveform patterns in children and adults in relation to sex (total of 2,629 ureteric jets in 1,341 subjects)

                          Children                                        Adults
                          Female                Male                      Female                  Male
                          Right      Left       Right      Left           Right       Left        Right       Left
Number of ureteric jets   166        164        211        215            567         560         373         373
Monophasic                28 (16.9%) 19 (11.6%) 55 (26.1%) 61 (28.4%)     14 (2.5%)   12 (2.1%)   4 (1.1%)    6 (1.6%)
Biphasic                  54 (32.5%) 53 (32.3%) 59 (28.0%) 63 (30.2%)     219 (38.6%) 224 (40.0%) 110 (29.5%) 89 (23.9%)
Triphasic                 54 (32.5%) 53 (32.3%) 57 (27.0%) 54 (25.1%)     207 (36.5%) 183 (32.7%) 134 (35.9%) 122 (32.7%)
Polyphasic                29 (17.5%) 37 (22.6%) 40 (19.0%) 34 (15.8%)     120 (21.2%) 126 (22.5%) 120 (32.2%) 148 (39.7%)
Square                    1 (0.6%)   1 (0.6%)   0 (0%)     0 (0%)         5 (0.9%)    13 (2.3%)   2 (0.5%)    1 (0.3%)
Continuous                0 (0%)     1 (0.6%)   0 (0%)     1 (0.5%)       2 (0.4%)    2 (0.4%)    3 (0.8%)    7 (1.9%)

Table 4 Mean values of jet parameters in children and adults in relation to sex

                          Children                           Adults
                          Female           Male              Female           Male
                          Right    Left    Right    Left     Right    Left    Right    Left
Number of ureteric jets   165      162     211      214      560      545     368      365
Initial slope (cm s−2)    290.73   261.06  264.22   264.31   268.12   274.72  271.93   234.07
Velocity (cm s−1)         56.31    57.02   55.72    57.60    69.94    68.84   94.01    80.82
Duration (s)              4.64     4.56    4.21     4.20     6.37     6.43    7.52     7.69

Effect of age on ureteric jets The distribution of the four basic patterns (monophasic, biphasic, triphasic, polyphasic) among children and adults has been previously described by our group. We have found a strikingly larger proportion of monophasic waveforms in children than in adults in a population of 1,010 subjects [11]. In this review, the above finding is substantiated after expanding the study population to 1,341 normal subjects. Children are found to have a higher incidence of the immature pattern than adults. The incidences of the monophasic waveform in children and adults are 22% and 1.9% on the right side, and 21.1% and 1.9% on the left side, respectively (P<0.01, chi-squared test; Table 5). This immature pattern is consistently present in the first 6 months of life [4]. In our previously reported cohort, the immature monophasic pattern in children changed to a mature complex pattern at a mean age of 4.54 years, which probably reflects the mean age of VUJ maturity in general. There is no significant gender difference for the mean age of VUJ maturity in children: boys show VUJ maturity at a mean age of 4.88 years and girls at a mean age of 4.34 years (P>0.05, simple Z test) [21].

Table 5 Incidence of the monophasic jet in children and adults (total of 2,629 ureteric jets in 1,341 subjects). Note that the total number of ureteric jets is less than double the number of subjects as jets in some subjects could not be satisfactorily demonstrated on both sides by Doppler US

                          Right side                        Left side
                          Children   Adults     P value     Children    Adults     P value
Number of ureteric jets   377        940        –           379         933        –
Monophasic pattern        83 (22%)   18 (1.9%)  0.01        80 (21.1%)  18 (1.9%)  0.01

Adults have a higher velocity (P<0.01, independent sample t test) and longer duration of the ureteric jet (P<0.01, independent sample t test) than children in both the monophasic and complex patterns. This finding has also been reported by Matsuda and Saitoh [18], suggesting that the stroke volume of urine in children is less than that in adults. The initial slope of the ureteric jet, however, shows no significant difference between children and adults (all P>0.05, independent sample t test; Tables 1 and 2). Effect of gender on ureteric jets In the adult population, males have a higher incidence of the polyphasic waveform than females (P<0.01 for both right and left sides, chi-squared test; Table 3). Male subjects also have a higher velocity (P<0.01 for both sides, independent sample t test) and longer duration of the ureteric jet than females (P<0.01 for both sides, independent sample t test; Table 4). In children, however, no significant differences in velocity, duration, initial slope or number of peaks within a single jet were observed between boys and girls (all P>0.05, chi-squared and independent sample t test; Tables 4 and 5).
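The age comparison in Table 5 can be reproduced from the tabulated counts with a standard chi-squared test, as in the short sketch below (right side shown; scipy is assumed to be available).

```python
# Reproduce the children-vs-adults comparison of monophasic incidence
# from Table 5 (right side). Counts come straight from the table;
# scipy is an assumed dependency.
from scipy.stats import chi2_contingency

children = [83, 377 - 83]   # [monophasic, complex/other]
adults = [18, 940 - 18]
chi2, p, dof, expected = chi2_contingency([children, adults])
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.1e}")  # p well below 0.01
```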
Effect of bladder filling status on ureteric jets In a subgroup of 102 adult females we demonstrated how jet patterns are affected by bladder filling status. Jet characteristics were compared between two different time intervals: (1) when bladder volume was small and the rate of diuresis was low, and (2) when bladder volume was large and the rate of diuresis more active. We found that 42.2% of the subjects showed no change in the number of peaks within a single jet waveform, 28.9% showed a decrease, 26.5% showed an increase, and 2.4% had square and continuous jet patterns when the bladder was very full [11]. Among the group with changes in the number of peaks, 3.4% showed a change from a monophasic to a complex pattern and 3.9% showed a change from a complex to a monophasic pattern. In all subjects, the initial slope, velocity and duration of the jet were not affected by different stages of bladder filling (all P>0.05, paired sample t test; Table 6). Even though bladder filling was shown to have little effect on whether the subjects had an immature or complex pattern, we standardized the protocol of bladder filling in all our subsequent Doppler US studies of ureteric jets. Scanning was started in most subjects 20 min after water intake because the jet frequency is reasonably high at that time while the bladder is not yet so distended as to make the subject feel uncomfortable.

Table 6 Mean values of jet parameters in 102 subjects under different stages of bladder filling. Only the jet on the right side is shown for comparison as there was no significant difference between the two sides

                          Bladder not full   Bladder maximally full   P value
Initial slope (cm s−2)    245.27             209.44                   0.39
Velocity (cm s−1)         62.81              58.30                    0.37
Duration (s)              6.24               6.23                     0.81

Characteristics of the ureteric jet under specific physiological conditions Pregnancy (physiological) A total of 107 pregnant women and 375 non-pregnant women were investigated. The occurrence of the monophasic waveform was significantly higher in the pregnant women than in the non-pregnant women (18.7% at 20 weeks' gestation and 41.1% at 32 weeks' gestation, vs. 1.9% in the non-pregnant women; by 3 months postpartum the incidence had fallen to 1.6%) [6]. General anaesthesia (pharmacological) Our previous study has documented loss of the complex jet pattern after general anaesthesia. A total of 16 children undergoing surgery under general anaesthesia were recruited. Before anaesthesia, 14 of them showed a complex pattern and two showed a monophasic pattern. After anaesthesia, all showed a monophasic waveform [7]. Ureteric transplantation following renal transplantation Our previous study has shown that transplanted ureters do not have the normal regulatory function at the VUJ, but inherent peristalsis is retained [8]. From a comparison of 55 transplant patients and 817 healthy subjects, we found that the Doppler waveforms of transplanted ureters are distinctly different from those of healthy adult ureters.
Basically, only two patterns were identified from transplanted ureters: more commonly a short monophasic waveform (66.1% vs. 2.6% in healthy ureters), and less commonly a longer multiphasic pattern that does not resemble the patterns of the healthy ureter. Ureteric jet characteristics in paediatric conditions Literature review Previous studies have attempted to relate the ureteric jet seen during IVU to UTI, VUR and bladder neck obstruction. The studies of Kalmon et al. [22] and Nevin et al. [23] suggested that identification of ureteric jets in IVU studies does not exclude VUR, but Kuhns et al. [24] found an association between the jet sign and absence of VUR. They postulated that the jet sign is produced by peristalsis through the ureteric sphincter. There might be an abnormal increase in intravesical volume and pressure in the presence of reflux, thus preventing ureteric peristalsis so that no jet can be seen. Eklöf and Johanson [25], however, disagreed with this hypothesis, finding a lower incidence of visible jets in subjects without radiological proof of VUR (5.7% compared with 32% in the study of Kuhns et al.). Although infrequent, significant ipsilateral VUR is observed even when a ureteric jet is detected (10.7% [25] vs. 5.3% [24]). Eklöf and Johanson [25] concluded, therefore, that the low rate of bilateral jets detected on IVU restricts the potential clinical value of jet detection on IVU as an indicator of the absence of gross VUR and that voiding cystourethrography is necessary in the radiological work-up of children with UTI. Marshall et al. [17] agreed with the view of Kuhns et al. [24] that visualization of a ureteric jet on IVU should not exclude VUR. In addition, they found a strong correlation between a relatively lateral position of the orifice and the presence of VUR. The mean velocity of the ureteric jet is not related to VUR. A midline-to-orifice distance of >7 mm has been proposed as the cut-off for predicting VUR: the more laterally positioned the ureteric orifice, the more likely it is that reflux will occur [17]. Subsequent studies largely agree with the concept that the ureteric jet is just a normal physiological phenomenon. Gothlin [26] found that the ureteric jet can be identified in subjects with and without UTI, and that neither sex nor age has an effect on jet visualization. He therefore concluded that the jet is simply a physiological phenomenon and a normal roentgenographic finding. There are a number of previous studies evaluating characteristics of the ureteric jet based on Doppler US. Jequier et al. [9] found that the Doppler waveform parameters of jet direction, duration, frequency, velocity and shape do not help in predicting VUR. A lateral ureteric orifice is not seen in normal patients, but is identified in subjects with VUR and other urinary tract disorders. Gudinchet et al. [27] found no difference between refluxing and non-refluxing ureters with regard to ureteric jet length, angle, and midline-to-orifice distance. They concluded that these parameters cannot be used to predict recurrence of reflux in children after endoscopic subureteric collagen injection (SCIN) for the treatment of VUR. In two recent studies carried out by our group, we found a high correlation between the immature monophasic jet pattern and specific urinary disease entities in children. The details are discussed in the following sections. Children with VUR and UTI Ureteric jets of 241 healthy children and 98 children with UTI were studied. The incidence of the monophasic jet (immature pattern) was 29% in healthy children overall, but varied greatly according to age.
The immature pattern was universal in the first 6 months of life, but was markedly reduced, to below 15%, in late childhood. This immature pattern was more commonly seen in children with UTI (73.5%) and VUR (90.5%) than in healthy controls of the same age [4]. Children with nocturnal enuresis A comparison was made between 511 children presenting with primary nocturnal enuresis and 266 age-matched normal controls. The incidence of the immature monophasic jet was significantly greater in enuretic children (19.2% on both sides) than in normal children (6.4% on the right side and 8.3% on the left side). Furthermore, the immature waveform was more commonly seen in the enuretic group with a markedly thickened bladder wall and multiple urodynamic abnormalities [5]. Taking all the observations together, we postulate a hypothesis concerning the mode of action of the VUJ, as set out in the final section. Hypothesis of an active sphincter at the VUJ The anatomy and function of the VUJ have long been controversial. There are three schools of thought regarding the antireflux mechanism at the VUJ. In the first theory the VUJ is thought to be governed by a passive valve mechanism dependent on the length and obliquity of the intravesical ureter [28, 29]. In the second theory the VUJ is considered to possess mixed active and passive valvular action: in addition to the anatomical factors, the distal ureter also shows antireflux ureteric peristaltic activity, so that contraction of the ureter can prevent retrograde leakage of the intraluminal contents [30–32]. In the third theory the VUJ is considered to be able to act as a sphincter. Noordzij and Dabhoiwala [33] have proposed a sphincteric function for the VUJ which might be activated by the intrinsic muscular meshwork of the trigonal region of the bladder, complementing a purely passive antireflux mechanism. Taking into consideration all the features observed in previous Doppler US studies of the ureteric jet, we postulate that the human VUJ can act as a functional sphincter. Only a monophasic ureteric peristaltic wave is demonstrated by M-mode study of the ureter [11], whereas the waveform of the ureteric jet emanating from the VUJ is more complex in pattern. Because the waveform of the ureteric jet is modified, an active sphincter mechanism is probably present at the VUJ [11], and because the VUJ shows sphincteric activity, different patterns of ureteric jet can be identified under different physiological and pathological conditions, with occasional modifications. We hypothesize that the active functional sphincter has dual components: the first component is “myogenic” (primary or “immature”) and the second is “neural” (secondary or “mature”). The distal ureteric muscle and possibly part of the detrusor muscle may contribute to the functional sphincteric action at the VUJ. We postulate that the monophasic jet pattern is the result of contraction caused by the myogenic component of the VUJ, while the complex pattern is the result of modulation of the myogenic component of the jet by the neural component in response to the distal intraureteric pressure. The mode of the functional sphincteric action of the VUJ, and the subsequent ureteric jet waveform, vary depending upon whether or not the neural component is active. In normal adults and in children who have reached a certain age of maturity, the neural component modulates the myogenic component and complex patterns can thus be seen.
When the neural component is absent, for example in a small immature child, under general anaesthesia or in certain pathological conditions, only the myogenic component is functioning, and the jet pattern thus reverts to the monophasic pattern. In renal transplantation patients with ureteric reimplantation, the normal VUJ mechanism in the transplanted ureter is completely lost and the ureteric jet patterns observed are completely different from the patterns obtained from the native VUJ in normal subjects [8]. These distinct patterns could be explained by loss of both the normal myogenic and the neural components of the sphincter action. Although the neural component governing the ureteric jet pattern is either present (resulting in a complex pattern) or absent (resulting in a monophasic pattern), the characteristics (initial slope, velocity, and duration) of the monophasic and complex patterns within the same age group remain distinct. There is a trend towards a longer duration and higher peak velocity of the ureteric jet with increasing age, which could be explained by a larger bolus of urine in each jet in adults than in smaller children. The neural component governing the sphincteric action of the VUJ appears to be affected under various physiological and pathological conditions. We have observed a marked increase in the incidence of a monophasic jet pattern in pregnant women. However, for those pregnant women who retain the complex jet pattern, the characteristics of the complex waveform remain similar to those seen in non-pregnant women [6]. This observation suggests that the neural mechanism of the VUJ is lost in some pregnant women, but the underlying reason remains obscure. In subjects undergoing general anaesthesia, the ureteric jet waveforms revert to the monophasic pattern with no difference in initial slope, velocity, and duration of the waveform when compared with the normal population [7]. This suggests that the ureteric jet in anaesthetized subjects is solely under myogenic influence, while the neurogenic influence is temporarily suspended. Histochemical study of the VUJ [34] has shown the muscular components of the VUJ to be innervated by both adrenergic and cholinergic nerves. However, because several drugs were administered simultaneously during general anaesthesia in our small cohort, the exact pharmacological action of these drugs on the neurogenic pathway could not be clearly determined. The presence of VUR is highly correlated with the immature monophasic waveform [4]. This observation suggests that the complex jet pattern is associated with a more efficient antireflux mechanism than the simple monophasic waveform, which is more primitive or immature. In those children with UTI, but without VUR, a high correlation with the immature monophasic pattern is still observed. It is not clear whether the monophasic pattern is a risk factor for or a consequence of UTI. However, the association of the monophasic jet waveform with VUR and UTI might partially explain why pregnant women, with their higher incidence of the monophasic jet pattern, are also more prone to VUR and UTI [4]. A higher incidence of the immature jet pattern is also found in children with nocturnal enuresis [5]. This suggests that there is a lower level of maturity of the VUJ in a proportion of enuretic children. This group of children also has more deranged parameters on urodynamic studies.
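The strength of this enuresis association can be checked from the published proportions with a simple two-proportion z test; in the sketch below the counts are back-calculated from the quoted percentages and are therefore approximate.

```python
# Two-proportion z test for the enuresis comparison quoted above
# (right side: ~19.2% of 511 enuretic children vs. ~6.4% of 266
# controls). Counts are back-calculated from the published
# percentages, so they are approximations.
from math import sqrt
from scipy.stats import norm

x1, n1 = 98, 511    # enuretic children with a monophasic jet (approx.)
x2, n2 = 17, 266    # age-matched normal controls (approx.)
p1, p2 = x1 / n1, x2 / n2
pooled = (x1 + x2) / (n1 + n2)
z = (p1 - p2) / sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
print(f"z = {z:.2f}, two-sided p = {2 * norm.sf(abs(z)):.1e}")
```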
The hypothesis of dual components in the sphincteric action of the VUJ might also help explain some of the jet phenomena that we have described previously. The multispike pattern in the ureteric jet, resulting from pulsations transmitted from adjacent arteries, could probably be explained by premature relaxation of the VUJ preceding the ureteric jet proper, so that the transmitted arterial pulse becomes dominant. Premature relaxation of the VUJ is also likely to be governed by the neural mechanism, as seen in forced diuresis. The modification involving breaks in the jet is predominantly observed when the bladder is maximally full. Under these circumstances the intravesical pressure would be very high, which might impose a countering effect on the pressure wave of the ureteric jet emitted from the VUJ. Breaks might therefore appear within the jet waveform when the jet velocity drops to zero on entering the bladder. In summary, based on all the above observations, we postulate that the human VUJ can act as a functional sphincter with two possible components: (1) a myogenic component which has a simple “open and close” action that gives rise to the monophasic jet pattern, and (2) a neural component that modulates the monophasic waveform into a more complex pattern. Further anatomical study to determine the exact nature of the sphincteric muscle governing VUJ function is warranted. The major implication of this overview of ureteric jet patterns is a change of concept for the human VUJ: rather than being a passive valve, the VUJ functions as an active sphincter. This might lead to a novel approach to the management of VUR, UTI and enuresis in children which could replace traditional treatment. Conclusion This review has provided a comprehensive understanding of the physiological pattern of ureteric jets and contributes to our knowledge of the pathophysiology of urinary dysfunction in disease entities such as UTI, VUR and primary enuresis. The application of this technique in future studies might lead to novel approaches to the monitoring and prognosis of these conditions and more evidence-based treatment of related diseases.
[ "doppler", "ureteric jet", "vesicoureteric junction", "children", "ultrasound" ]
[ "P", "P", "P", "P", "U" ]
Biochem_Pharmacol-1-5-1920586
Inhibition of the HERG potassium channel by the tricyclic antidepressant doxepin
HERG (human ether-à-go-go-related gene) encodes channels responsible for the cardiac rapid delayed rectifier potassium current, IKr. This study investigated the effects on HERG channels of doxepin, a tricyclic antidepressant linked to QT interval prolongation and cardiac arrhythmia. Whole-cell patch-clamp recordings of recombinant HERG channel current (IHERG), and of native IKr ‘tails’ from rabbit ventricular myocytes, were made at 37 °C. Doxepin inhibited IHERG with an IC50 value of 6.5 ± 1.4 μM and native IKr with an IC50 of 4.4 ± 0.6 μM. The inhibitory effect on IHERG developed rapidly upon membrane depolarization, but with no significant dependence on voltage and with little alteration to the voltage-dependent kinetics of IHERG. Neither the S631A nor the N588K inactivation-attenuating mutation (of residues located in the channel pore and external S5-Pore linker, respectively) significantly reduced the potency of inhibition. The S6 point mutation Y652A increased the IC50 for IHERG blockade by ∼4.2-fold; the F656A mutant also attenuated doxepin's action at some concentrations. HERG channel blockade is likely to underpin reported cases of QT interval prolongation with doxepin. Notably, this study also establishes doxepin as an effective inhibitor of mutant (N588K) HERG channels responsible for variant 1 of the short QT syndrome. 1 Introduction Diverse cardiac and non-cardiac drugs are associated with prolongation of the rate-corrected QT (QTc) interval of the electrocardiogram and with a risk of the potentially fatal arrhythmia Torsade de Pointes (TdP) [1–4]. The majority of such agents exert a common action of inhibiting the cardiac rapidly activating delayed rectifier K+ current (IKr). IKr is a major determinant of ventricular action potential repolarization and, thereby, of the QTc interval [2,4,5]. The pore-forming subunit of IKr channels is encoded by HERG (human ether-à-go-go-related gene [6,7]). HERG channels appear to have a larger pore cavity than other (Kv) six-transmembrane-domain K+ channels and possess particular aromatic amino-acid residues in the S6 region of the channel [5,8–10]. These features combine to confer a high susceptibility to pharmacological blockade upon the HERG channel. Indeed, the association between drug-induced QTc interval prolongation and pharmacological blockade of HERG channels is sufficiently strong that drug screening against recombinant HERG channels is now an important component of cardiac safety pharmacology during drug development [11–13]. Doxepin is a tricyclic antidepressant (TCA), structurally related to amitriptyline and imipramine, that combines antidepressant and sedative actions [14]. Initially, doxepin was suggested to have better cardiac safety than other TCA drugs [15]. However, a subsequent review of the clinical and animal data [16] and a study in depressed patients [17] found no evidence that doxepin had fewer cardiovascular effects than other TCAs. Moreover, similar to other TCAs, doxepin can be cardiotoxic in overdose [14]. Adverse cardiac effects associated with doxepin include premature ventricular complexes (PVCs) and wide QRS complexes on the electrocardiogram. Furthermore, there are a number of documented cases of QT interval prolongation with doxepin [18–20]. For example, in overdose doxepin has been associated with a greatly prolonged QTc interval (reaching 580 ms) and TdP [19], whilst QT interval prolongation and syncope have been reported for doxepin in combination with methadone and β-blocker use [20].
At present there is no information on the basis for QTc prolongation with doxepin. However, other TCAs including imipramine and amitriptyline have been demonstrated to inhibit recombinant HERG channels [21–23] and we hypothesised that doxepin is also likely to act as an inhibitor of HERG channel current (IHERG). The present study was conducted to test this hypothesis and to characterise the nature of any observed IHERG blockade. 2 Methods 2.1 Maintenance of mammalian cell lines stably expressing wild-type and mutant HERG channels Experiments on wild-type HERG were performed on a cell line (Human Embryonic Kidney; HEK 293) stably expressing HERG (donated by Dr Craig January, University of Wisconsin [24]), except for those in Fig. 8; these utilised a cell line stably expressing lower levels of HERG developed in this laboratory, for use with high external [K+] (for comparison with the F656A mutant, see Section 2.2 below). Cell lines stably expressing HERG and its mutants, F656A and Y652A, were maintained as described previously [25]. Cell lines stably expressing the S631A [26] and N588K [27] mutants were made from appropriately mutated HERG sequences using previously described methods [25]. Cells were passaged using a non-enzymatic dissociating agent (Splitase, AutogenBioclear) and plated out onto small sterilised glass coverslips in 30 mm petri dishes containing a modification of Dulbecco's modified Eagle's medium with Glutamax-1 (DMEM; Gibco, Gibco/Invitrogen, Paisley, UK), supplemented with 10% fetal bovine serum (Gibco), 400 μg ml−1 gentamycin (Gibco) and 400 μg ml−1 geneticin (G418; Gibco). Treatment of the mutant cell lines was identical to that of the wild-type cell line except that cultures were maintained with 800 μg ml−1 of hygromycin. The cells were incubated at 37 °C for a minimum of two days prior to any electrophysiological study. 2.2 Experimental solutions Whole-cell patch-clamp measurements of wild-type (WT) and mutant IHERG were made at 37 ± 1 °C. Once in the experimental chamber, cells were superfused with a standard extracellular Tyrode's solution containing (in mM): 140 NaCl, 4 KCl, 2.5 CaCl2, 1 MgCl2, 10 glucose, 5 HEPES (titrated to pH 7.45 with NaOH). Similar to other studies from our laboratory (e.g. [28,29]), for experiments employing the S6 mutant F656A (which shows comparatively low levels of channel expression [8,25]) and its WT control, the external solution contained 94 mM KCl (the NaCl concentration was correspondingly reduced). Experimental solutions were applied using a home-built, warmed solution delivery system that exchanged the solution surrounding a cell in <1 s. Doxepin powder (Sequoia Research Products and Sigma-Aldrich) was dissolved in Tyrode's solution to produce initial stock solutions of either 10 or 50 mM, which were serially diluted to produce working solutions ranging from 0.1 μM to 1 mM. The pipette dialysis solution for IHERG measurement contained (in mM): 130 KCl, 1 MgCl2, 5 EGTA, 5 MgATP, 10 HEPES (titrated to pH 7.2 with KOH) [28,29]. Patch-pipettes were heat-polished to 2.5–4 MΩ. No correction was made for the ‘pipette-to-bath’ liquid junction potential, which was measured to be −3.2 mV. 2.3 Experiments on rabbit isolated ventricular myocytes One series of experiments was performed to investigate blockade by doxepin of native IKr from adult ventricular myocytes (Results, Fig. 2). For these, male New Zealand white rabbits (2–3 kg) were killed humanely in accordance with UK Home Office legislation.
Ventricular myocytes were then isolated by a combination of mechanical and enzymatic dispersion, using previously described methods [30,31]. Pipette and external solutions for IKr measurement were identical to those described above for IHERG measurement. 2.4 Electrophysiological recording and analysis Whole-cell patch-clamp recordings were made using Axopatch 200 or 200B amplifiers (Axon Instruments) and a CV201 head-stage. Between 75 and 80% of the pipette series resistance was compensated. Voltage-clamp commands were generated using ‘WinWCP’ (John Dempster, Strathclyde University), Clampex 8 (Axon Instruments), or ‘Pulse’ software (HEKA Electronik). Data were recorded either via a Digidata 1200B interface (Axon Instruments) or an Instrutech VR-10B interface and stored on the hard-disk of a personal computer. The voltage-protocols employed for specific experiments are described either in the relevant ‘results’ text, or are shown diagrammatically on the relevant figures; unless otherwise stated in the text, the holding membrane potential between experimental sweeps was −80 mV. Data are presented as mean ± S.E.M. Statistical comparisons were made using, as appropriate, paired and unpaired t-tests or one-way analysis of variance (Anova) (Prism 3 or Instat, Graphpad Inc.). P values of less than 0.05 were taken as significant; ns = no statistically significant difference. The following equations were used for numerical analysis and graphical fits to data. The extent of IHERG inhibition by differing concentrations of doxepin was determined using the equation:

$$\text{Fractional block} = 1 - \frac{I_{\mathrm{HERG\text{-}DOXEPIN}}}{I_{\mathrm{HERG\text{-}CONTROL}}} \tag{1}$$

where ‘Fractional block’ refers to the degree of inhibition of IHERG by a given concentration of doxepin; IHERG–DOXEPIN and IHERG–CONTROL represent current amplitudes in the presence and absence of doxepin. Concentration–response data were fitted by a standard Hill equation of the form:

$$\text{Fractional block} = \frac{1}{1 + \left(\mathrm{IC}_{50}/[\mathrm{DOXEPIN}]\right)^{h}} \tag{2}$$

where IC50 is the [DOXEPIN] producing half-maximal inhibition of the IHERG tail and h is the Hill coefficient for the fit. Half-maximal voltage values for IHERG activation were obtained by fitting IHERG tail-voltage (I–V) relations with a Boltzmann distribution equation of the form:

$$\frac{I}{I_{\max}} = \frac{1}{1 + \exp\!\left(\frac{V_{0.5} - V_{m}}{k}\right)} \tag{3}$$

where I is the IHERG tail amplitude following test potential Vm, Imax the maximal IHERG tail observed during the protocol, V0.5 the potential at which IHERG was half-maximally activated, and k is the slope factor describing IHERG activation. Data from each individual experiment were fitted by this equation to derive V0.5 and k values in ‘control’ and with doxepin. The resultant mean V0.5 and k values obtained from pooling values from each experiment were then used to calculate the mean activation relations plotted in Fig. 3. Parameters describing voltage-dependent inactivation of IHERG were derived from fits to voltage-dependent availability plots with the equation:

$$\text{Inactivation parameter} = \frac{1}{1 + \exp\!\left(\frac{V_{0.5} - V_{m}}{k}\right)} \tag{4}$$

where the ‘inactivation parameter’ at any test potential, Vm, lies within the range 1–0, V0.5 is the voltage at which IHERG was half-maximally inactivated and k describes the slope factor for the relationship.
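As an illustration of how Eqs. (1)–(3) were applied, the following minimal Python sketch fits the Hill equation to concentration–response points with scipy; the numerical arrays are placeholders standing in for measured fractional-block values, not data from this study.

```python
# Minimal sketch of the fits described above (Eqs. (2) and (3)); scipy is
# an assumed dependency. `conc` and `tail_block` stand in for measured
# values obtained via Eq. (1) -- they are placeholders, not study data.
import numpy as np
from scipy.optimize import curve_fit

def hill(c, ic50, h):  # Eq. (2)
    return 1.0 / (1.0 + (ic50 / c) ** h)

def boltzmann(vm, v_half, k):  # Eq. (3); fitted analogously to tail I-V data
    return 1.0 / (1.0 + np.exp((v_half - vm) / k))

conc = np.array([1.0, 3.0, 10.0, 30.0])          # uM (placeholder values)
tail_block = np.array([0.15, 0.32, 0.60, 0.82])  # fractional block (placeholder)

(ic50, h), _ = curve_fit(hill, conc, tail_block, p0=(5.0, 1.0))
print(f"IC50 = {ic50:.1f} uM, Hill coefficient h = {h:.2f}")
```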
3 Results 3.1 Doxepin produces concentration-dependent inhibition of WT IHERG IHERG was elicited by the protocol shown in the inset of Fig. 1A, which is a standard protocol used to study IHERG pharmacology in this laboratory (e.g. [28,29,32]). Membrane potential was stepped from −80 to +20 mV for 2 s followed by a 4-s step to −40 mV to elicit IHERG tails. A brief (50 ms) pulse to −40 mV preceded the step to +20 mV in order to monitor the instantaneous current without activation of IHERG (peak outward IHERG tails on repolarization to −40 mV were compared with the instantaneous current during the 50 ms pulse to −40 mV, in order to measure IHERG tail amplitude). This voltage protocol was applied repeatedly (at 20 s intervals) prior to and during the application of doxepin. Fig. 1A shows a representative recording of IHERG in control, 5 min after the addition of 10 μM doxepin and 5 min following wash-out of the drug. Doxepin produced a substantial inhibition of IHERG; this was largely reversible upon washout (to 73 ± 9% of control in six cells to which 10 μM doxepin was applied). Fig. 1B shows records for the same cell before and at increasing times during doxepin exposure, indicating that blockade was maximal within 4–5 min of drug application. Fig. 1C shows a plot of mean ± S.E.M. fractional block of IHERG tails by four different concentrations of doxepin, fitted by Eq. (2). The IC50 for doxepin inhibition of IHERG with this protocol was 6.5 ± 1.4 μM and the Hill coefficient for the fit was 1.0 ± 0.2. IHERG blockade by doxepin was not accompanied by statistically significant changes to the time-course of deactivation of the time-dependent component of IHERG tails on repolarization to −40 mV: bi-exponential fitting of the time-dependent tail current decline yielded fast time-constants of deactivation (τfast) of 203 ± 15 and 273 ± 21 ms in control solution and 10 μM doxepin, respectively (n = 6; p > 0.05, paired t-test), and slow time-constants of deactivation (τslow) of 1164 ± 141 and 1499 ± 131 ms (p > 0.1). Doxepin did not significantly influence the relative proportions of deactivating current described by the fast and slow time-constants of deactivation (the proportion of deactivating current fitted by τfast was ∼0.6 in both control and doxepin). It has been suggested that pharmacological inhibition of IHERG by some drugs may vary between different stimulus protocols [33]. Therefore, we also investigated IHERG blockade by doxepin using an action potential (AP) voltage waveform [32,34]. The voltage-command used (shown in the lower panel of Fig. 2A) was a previously acquired, digitised AP from a rabbit ventricular myocyte. This was applied repeatedly (at 4 s intervals) from a holding potential of −80 mV [32]. The upper panel of Fig. 2A shows traces in the absence and presence of doxepin. Peak outward IHERG during the repolarizing phase of the AP was inhibited by 60 ± 12% (n = 5) by 10 μM doxepin, which does not differ significantly from the extent of IHERG tail current blockade by this concentration obtained using the protocol shown in Fig. 1 (62 ± 4%; n = 6; p > 0.8; unpaired t-test). In order to determine the effects of doxepin on native IKr, three drug concentrations (1, 10 and 100 μM) were applied to ventricular myocytes under whole-cell patch clamp. The command protocol for these experiments (similar to [32]) is shown in Fig. 2B and IKr tails were monitored on repolarization from +20 to −40 mV. Under our conditions, the outward tail currents observed on repolarization to −40 mV were almost completely abolished by 1 μM dofetilide (91 ± 3% blockade; n = 7), verifying that these were carried by IKr with little or no contamination from overlapping currents. Fig. 2C shows representative IKr tails on repolarization from +20 to −40 mV in the absence and presence of 100 μM doxepin; at this concentration the current was largely abolished.
Fig. 2D contains mean ± S.E.M. fractional block data for the tested doxepin concentrations, fitted by Eq. (2). The derived IC50 for doxepin inhibition of native IKr tails was 4.4 ± 0.6 μM, in good agreement with the observed inhibitory potency of doxepin on IHERG. 3.2 Voltage dependence of IHERG blockade by doxepin Voltage dependence of IHERG blockade by doxepin was determined by the application, in control and doxepin-containing solutions, of a series of 2 s duration depolarizing commands to a range of test potentials up to +40 mV [32]. Successive command pulses were applied at 20 s intervals. Representative control currents at selected command voltages are shown in Fig. 3Ai; currents from the same cell following equilibration in 10 μM doxepin are shown in Fig. 3Aii (upper traces; lower traces in each panel show the corresponding voltage commands). At all test potentials, IHERG was inhibited by doxepin. For each of five similar experiments, fractional inhibition of IHERG tails following each voltage command was calculated using Eq. (1); the mean ± S.E.M. fractional block of IHERG tails is plotted against command voltage in Fig. 3B. Also shown in Fig. 3B are voltage-dependent activation relations for IHERG in control solution and in the presence of doxepin (see Section 2). The derived mean V0.5 and k values were: control V0.5 = −21.0 ± 3.1 mV; doxepin V0.5 = −24.0 ± 2.2 mV, p > 0.1; control k = 5.9 ± 0.2 mV; doxepin k = 7.5 ± 1.7 mV, p > 0.3, with the IHERG activation relations in control and doxepin closely overlying one another (Fig. 3B). Fractional block of IHERG showed no statistically significant dependence on voltage over the range from −40 to +40 mV (p > 0.2; Anova). The voltage dependence of IHERG availability/inactivation was assessed using a 3-step protocol similar to those used in previous IHERG investigations from our laboratory ([32,35], and shown schematically as an inset to Fig. 3C). Mean data from five experiments were corrected for deactivation [35] and the resulting values were plotted to give availability plots in the absence and presence of doxepin (Fig. 3C); the data-sets were fitted with Eq. (4) (see Section 2). In control and doxepin the inactivation V0.5 values were, respectively, −37.9 ± 2.4 and −43.5 ± 4.1 mV (ns, paired t-test), with corresponding k values of −16.2 ± 2.4 and −17.3 ± 8.8 mV (ns, paired t-test). Thus, doxepin did not alter the voltage-dependence of IHERG inactivation. Although there was a trend towards an acceleration in the time-course of IHERG inactivation, this did not attain statistical significance (inactivation time-constant of 3.3 ± 1.2 ms in control versus 1.4 ± 0.6 ms in doxepin, p > 0.1; obtained from exponential fits to the inactivating phase of the current elicited during the third step of the protocol, following a brief hyperpolarizing step to −80 mV). 3.3 Time dependence of IHERG inhibition by doxepin Gating-dependence of IHERG inhibition by doxepin was investigated further by the use of two protocols to examine the time-dependence of development of IHERG blockade. The first protocol used a sustained (10 s) depolarizing step to 0 mV from a holding potential of −80 mV (shown in Fig. 4A). This protocol was applied first in the absence of doxepin to elicit control IHERG. It was then discontinued whilst cells were equilibrated in 10 μM doxepin (for 7 min), after which it was re-applied. The first current trace recorded on resumption of stimulation was used to determine the development of fractional blockade throughout the applied depolarization. Fig.
4A shows representative traces in control and doxepin; these traces diverged rapidly following depolarization, suggesting that IHERG blockade developed rapidly with time. Fig. 4B shows plots of mean ± S.E.M. fractional block of IHERG at various time intervals throughout the applied depolarization (main panel; n = 8) and on an expanded time-scale to show development of blockade over the first 0.5 s (inset). These plots show clearly that IHERG blockade developed rapidly on depolarization, with little change in blockade after 200–300 ms following the onset of the voltage-command. The time-course of development of blockade was well-described by a mono-exponential fit to the data, with a rate constant (K) for the fit of 16.33 s−1 (equivalent to a time-constant (1/K) of 61 ms). Although the protocol used in Fig. 4 is well suited for examining the development of IHERG inhibition over a period of seconds following membrane depolarization, it is less well suited for accurate assessment of blockade of IHERG immediately following membrane depolarization than are protocols based on tail current measurements [29,32]. Therefore, a second protocol was also used [29]. Membrane potential was held at –100 mV and, from this, 10 and 200 ms duration depolarizations to +40 mV were applied, each followed by a period at −40 mV to monitor IHERG tails [29]. Fig. 5A shows representative currents from the same cell activated by 10 ms (Fig. 5Ai) and 200 ms (Fig. 5Aii) commands in the absence and presence of doxepin. Some IHERG tail inhibition was evident following the 10 ms duration command, with further blockade evident after the 200 ms command. Fig. 5B shows mean fractional block of IHERG tails for five similar experiments. Whilst blockade for 200 ms commands was significantly greater than that for 10 ms commands (p < 0.02), the occurrence of some blockade with only a very brief duration depolarizing command is concordant with either a very rapidly developing gating-dependent blockade on depolarization or with a contribution of closed-channel block to the overall effect of doxepin [25,29,36]. 3.4 Effect of inactivation-attenuating mutants on IHERG inhibition by doxepin In order to investigate further the gating-dependence of IHERG blockade by doxepin, experiments were performed using two inactivation-attenuating HERG mutants: S631A and N588K. Residue S631 is located towards the outer mouth of the HERG channel pore, and the S631A (serine → alanine) mutation has been reported to shift IHERG inactivation by ∼+100 mV [26]. Residue N588 is located in the external S5-Pore linker of the channel, and the N588K (asparagine → lysine) mutation, which is responsible for one form of the recently identified genetic short QT syndrome [27], has been reported to shift IHERG inactivation by ∼+60 to +100 mV [35,37]. The effect of each mutation on the potency of IHERG blockade by doxepin was determined using the same protocol as was used to establish concentration-dependence of WT IHERG blockade (Fig. 1). Fig. 6Ai shows representative currents in control and 10 μM doxepin for WT–HERG, whilst Fig. 6Aii and Aiii show similar records for S631A–HERG and N588K–HERG, respectively. In contrast to WT–HERG, for both S631A–HERG and N588K–HERG the IHERG tail magnitude was substantially smaller than the maximal current during the voltage-command (Fig. 6Aii and Aiii, respectively) reflecting the greatly attenuated IHERG inactivation of these HERG mutants [26,35,37]. Both mutant channels retained the ability to be inhibited by doxepin. 
Exponential fits to currents activated on depolarization to +20 mV for both mutants showed similar time-courses of current activation in control solution and following equilibration with 10 μM doxepin (S631A control: 60.6 ± 5.5 ms; doxepin: 64.5 ± 8.0 ms, p > 0.5; N588K control: 49.8 ± 6.2 ms; doxepin: 41.7 ± 9.9 ms, p > 0.5; n = 5 for both). Three doxepin concentrations (1, 10 and 100 μM) were tested to obtain concentration–response data. Fig. 6B shows concentration–response relations (determined using Eq. (2)) for IHERG tail inhibition for both S631A–HERG and N588K–HERG, with corresponding data for WT–HERG plotted for comparison. Although the S631A and N588K mutations produced small and modest increases, respectively, in the IC50 derived from the fits to the concentration–response relations (from 6.6 μM for WT to 8.6 μM for S631A–HERG and 12.6 μM for N588K–HERG), these differences did not attain statistical significance (p > 0.05; Anova). These observations indicate that IHERG blockade by doxepin was not highly sensitive to attenuation of HERG channel inactivation. 3.5 Sensitivity of IHERG inhibition by doxepin to the S6 mutations Y652A and F656A Two aromatic amino-acid residues, Y652 and F656, have been shown to be important components of the drug-binding site for a variety of HERG channel blockers [5,9,38]. Accordingly, we investigated whether or not doxepin inhibition of IHERG was sensitive to mutation of either residue, adopting a similar approach and protocols to those used in other recent IHERG pharmacology studies [29,32,39–41]. Fig. 7 shows the effects of the mutation Y652A (tyrosine → alanine) on doxepin inhibition of IHERG. The experimental voltage protocol used (Fig. 7Aiii) was identical to that used for WT IHERG in Fig. 1 and to investigate the S631A and N588K mutants in Fig. 6. Representative traces showing the effect of 10 μM doxepin on WT IHERG and Y652A IHERG are shown in Fig. 7Ai and Aii. WT IHERG was inhibited substantially by this doxepin concentration (Fig. 7Ai). In contrast, inhibition of Y652A IHERG was noticeably reduced (Fig. 7Aii; 28 ± 3% peak tail current inhibition compared with 62 ± 4% for WT IHERG; p < 0.001). Four other doxepin concentrations (30, 100, 300 μM and 1 mM) were also tested in order to construct a concentration–response relation for inhibition of Y652A IHERG tails (Fig. 7B). A fit to the data with Eq. (2) (dashed line) yielded an IC50 for inhibition of Y652A IHERG of 27.8 ± 8.8 μM (with a Hill coefficient of 0.9 ± 0.3), which represented a modest, though significant (p < 0.05), ∼4.2-fold increase in IC50 over that for WT IHERG. IHERG carried by F656A–HERG channels was studied using the protocol shown in the inset of Fig. 8Aiii, and inward tails were measured at −120 mV [25,28,29]. Initial experiments employed a doxepin concentration of 100 μM (shown in Fig. 1 to produce extensive inhibition of WT IHERG with a standard [K+]e). Representative traces showing the effects of this concentration on WT and F656A IHERG are shown in Fig. 8Ai and Aii. The F656A mutation produced a modest attenuation of IHERG inhibition by this concentration of doxepin. However, as observed previously for some other drugs (e.g. [28]), the extent of inhibition of WT IHERG by 100 μM doxepin under conditions of high [K+]e (see Section 2) was significantly smaller (56 ± 5%) than that observed with the same concentration in the experiments with a standard [K+]e in Fig. 1 (93 ± 4%; p < 0.001).
Therefore, the effects of two further concentrations of doxepin (500 μM and 1 mM) were also examined. Fig. 8B shows bar-chart plots of the mean levels of inhibition of WT and F656A IHERG by the three doxepin concentrations. These concentrations resulted in progressive increases in the level of blockade of WT IHERG. In contrast, although Anova comparison of the data with the three drug concentrations confirmed that doxepin inhibition of F656A IHERG was concentration-dependent (p < 0.001), the observed concentration-dependence was unusual: there was no statistically significant difference in the observed level of blockade of F656A IHERG between 100 and 500 μM, whilst at 1 mM the observed level of inhibition of F656A IHERG was markedly increased (p < 0.001 compared to each of 100 and 500 μM, Bonferroni post-test) and approached that of WT IHERG. Taken together, the data with the three doxepin concentrations suggest that the F656A mutation exerted some influence on the ability of doxepin to inhibit IHERG, though the F656A data did not appear to follow a conventional monotonic concentration dependence. 4 Discussion Despite a strong association between TCA use and QT interval lengthening [42], and although doxepin itself has been linked with both QT interval prolongation and TdP [18–20], the effects of this drug on HERG K+ channels have not hitherto been reported. Moreover, whilst IHERG blockade has been investigated previously for the TCAs imipramine and amitriptyline [21–23], to the best of our knowledge the present study is the first in which molecular determinants of IHERG inhibition have been investigated for any member of the TCA family. 4.1 Characteristics of IHERG blockade by doxepin Previously, imipramine has been reported to inhibit IHERG recorded in experiments using a mammalian expression system with an IC50 of 3.4 μM [23], whilst an IC50 of 10 μM was reported for amitriptyline [21]. A different study of amitriptyline, using the Xenopus oocyte expression system, reported IC50 values of ∼3.3–4.8 μM (depending on [K+]e) [22]. Thus, the potency of doxepin as an IHERG inhibitor found here (IC50 of 6.5 μM) is broadly comparable to that seen previously for these other two TCAs. However, in terms of the characteristics of the observed IHERG blockade, doxepin appears to be closer to imipramine than to amitriptyline: amitriptyline block of IHERG has been reported to show significant voltage-dependence [22] whereas imipramine showed only weak voltage-dependence [23]. Moreover, no significant effects of doxepin on the voltage-dependence of activation or inactivation were seen in this study. Unfortunately, comparative data for the TCAs imipramine and amitriptyline are lacking [21–23] and, therefore, a direct comparison between doxepin and these agents cannot be made in this regard. However, the atypical tetracyclic antidepressant maprotiline has recently been reported to show no alteration to the voltage-dependence of activation and inactivation [43], though another study contradicted this with regard to activation [44]. The lack of a significant leftward shift in the voltage-dependence of inactivation with doxepin in our study suggests that this agent does not act to stabilise IHERG inactivation, and the time-course of WT IHERG inactivation was not significantly accelerated by the drug.
Moreover, the results with inactivation-attenuating mutations to two residues from distinct parts of the channel (S631A: outer mouth of the channel pore; N588K: in the S5-Pore linker) provide evidence that IHERG inactivation does not play an obligatory role in doxepin's binding to the channel. This is further supported by unaltered levels of WT IHERG blockade with progressive depolarization over voltages at which IHERG was maximally activated, but over which inactivation increased (Fig. 3C and D). Doxepin is quite distinct in this regard from a number of other drugs, including the archetypal high-affinity HERG-blocking methanesulphonanilide drugs E-4031 and dofetilide, for which IHERG inactivation exerts a strong influence on blocking potency, either as a direct consequence of inactivation-state dependent block, or due to conformational changes during inactivation facilitating optimal orientation of S6 helical residues to which drugs bind [10,45–48]. Blockade of IHERG by imipramine has been reported to develop rapidly during a sustained depolarization, with a component of inhibition visible even for comparatively brief depolarizing voltage commands [23]. These features of imipramine's action correspond well with those seen for doxepin in the present study. They are also similar to those reported for the serotonin-selective reuptake inhibitors (SSRIs) fluvoxamine and citalopram [25,49], but differ significantly from the methanesulphonanilides, for which little blockade is observed immediately upon depolarization, with blockade then increasing progressively during the maintained depolarization [24,49,50]. The lack of a strong dependence of HERG channel blockade by doxepin on IHERG inactivation suggests that gating-dependent blockade by this drug is likely to arise predominantly from block of activated/open channels. Accordingly, the observed time-dependence of IHERG inhibition by doxepin in this study is consistent with either a mixed state-dependence of blockade (with components of both closed- and open-channel blockade) or with the presence of a very rapidly developing component of activation-dependent inhibition immediately on depolarization. For the majority of drugs that have been studied, one or both of the Y652 and F656 aromatic amino-acid residues in the S6 helices of the HERG channel comprise key elements of the drug-binding site [5,9,38,51]. For example, the Y652A and F656A mutations increased the IC50 for HERG blockade by the methanesulphonanilide MK-499 by 94-fold and 650-fold, respectively, and that for terfenadine by ∼100-fold [8]. In comparison, the ∼4-fold increase in IC50 for doxepin produced by the Y652A mutation in this study is rather modest, suggesting that this residue is less influential for binding of doxepin than for either of these high-affinity blockers. The unusual concentration dependence seen with the F656A mutation indicated that at a high doxepin concentration of 1 mM IHERG blockade was little affected by mutation of this residue, whilst at concentrations of 100 and 500 μM significant attenuation of blockade occurred, with no significant increase in block at 500 μM compared with 100 μM. It was not possible to obtain an adequate fit of these data with Eq. (2). A cautious interpretation of the lack of a conventional concentration-dependence of F656A–HERG inhibition by doxepin is that blockade may depend partly, but incompletely, on this residue.
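To put the ∼4-fold IC50 shift in perspective, the fitted Hill parameters quoted above can be used to predict fractional block at different concentrations; extrapolating the fits to 1 mM is an assumption made purely for illustration.

```python
# Predicted fractional block from the fitted Hill parameters quoted
# above (WT: IC50 = 6.5 uM, h = 1.0; Y652A: IC50 = 27.8 uM, h = 0.9).
# Extrapolation to 1 mM is illustrative only.
def hill_block(conc_um, ic50_um, h):
    return 1.0 / (1.0 + (ic50_um / conc_um) ** h)

for conc in (10.0, 100.0, 1000.0):  # uM
    wt = hill_block(conc, 6.5, 1.0)
    y652a = hill_block(conc, 27.8, 0.9)
    print(f"{conc:6.0f} uM: WT ~{wt:.0%}, Y652A ~{y652a:.0%}")
# 10 uM: WT ~61% vs. Y652A ~28%; 100 uM: ~94% vs. ~76%; 1 mM: ~99% vs. ~96%
```

On these numbers, concentrations producing >90% block of WT IHERG are still predicted to inhibit Y652A channels substantially, in keeping with the observations discussed below.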
Neither the Y652A nor F656A mutations attenuated blockade by doxepin concentrations producing very high levels (>90%) of blockade of WT IHERG, suggesting that neither residue is absolutely obligatory for doxepin binding to HERG channels to occur. Whilst unusual, this is not unprecedented; IHERG block by both the SSRI fluvoxamine and the antiarrhythmic agent dronedarone has been reported to be only partially attenuated by the Y652A and F656A mutations [25,28]. Comparative data for other TCAs are lacking, though recently these residues have been implicated in IHERG inhibition by the tetracyclic drug maprotiline [43,44]; one of these two studies obtained IC50 values for the Y652A and F656T mutants, with respective (modest) 3-fold and 7-fold increases in IC50 [43]. Whilst obligatory molecular determinants of doxepin binding to HERG remain to be identified, the fact that IHERG inhibition by doxepin developed progressively over several minutes following rapid external solution exchange is consistent with the drug crossing the cell membrane to reach its site of action. The reduced inhibition of WT IHERG by doxepin in the presence of raised [K+]e may also be of significance. Since IHERG inhibition by doxepin appears not to be critically dependent on channel inactivation, decreased blockade with high [K+]e is unlikely to result from any effect of [K+]e on IHERG inactivation. Rather, reduced inhibition in high [K+]e may be accounted for by an interference with drug-binding due to an electrostatic repulsion or “knock-off” process [28,52], consistent with drug binding to the channel at a site close to the ion conduction pathway. It remains to be determined whether this site would need to reside within the channel pore. However, in a limited series of experiments using the D540K mutant [53], we did not find evidence that doxepin can readily unbind on hyperpolarization-induced channel opening (data not shown); though not all drugs that bind within the pore exhibit marked ‘untrapping’ [54]. Nevertheless, given the presence of a significant component of IHERG blockade with brief depolarization and incomplete attenuation of inhibition by the Y652A and F656A mutations, we cannot exclude the possibilities that a proportion of the observed blockade with doxepin involves binding outside of the channel pore, or binding to closed HERG channels.
Although we did not co-express HERG with MiRP1, a putative β subunit suggested to be necessary to recapitulate native IKr [55], it has recently been suggested that MiRP1 is unlikely to interact with HERG outside of the cardiac conduction system [51] and, additionally, the pharmacological sensitivity of HERG channels expressed in mammalian cells without MiRP1 co-expression has been found to be similar to that of native IKr [56]. This notion is reinforced by the close concordance of inhibitory potency of doxepin on IHERG and native IKr in our experiments (IC50 values of 6.5 and 4.4 μM, respectively). A question therefore arises as to the relationship between the potency of IHERG/IKr blockade seen in this study and plasma concentrations of doxepin in patients. As a class, the TCAs are lipophilic and are known to become concentrated in some tissues, including the myocardium [57]. In the case of doxepin, one experimental study has reported doxepin concentration in cardiac muscle to be 41-fold higher than plasma levels [58]. This makes it difficult to extrapolate with accuracy from known plasma concentrations to likely levels of IHERG/IKr blockade by doxepin in vivo. The therapeutic plasma level of doxepin is thought to be between 50 and 250 ng/ml (0.16–0.8 μM), although a wide variety of recommendations from university psychiatric departments and laboratories (up to 1000 ng/ml; 3.2 μM) have been reported [59]. Whilst IHERG (or IKr) blockade at the lower end of this range might be anticipated to be modest, inhibition at higher concentrations would be significant and, also taking into account potential cardiac accumulation, the observed potency of IHERG inhibition by doxepin in this study is likely to be clinically relevant, particularly in overdose. Such an effect may be exacerbated in individuals exhibiting pre-existing QT interval prolongation (congenital or acquired), electrolyte abnormalities or impaired drug metabolism. Thus, as for other IHERG-blocking medications, caution is warranted in the use of doxepin in patients with pre-existing QT interval prolongation or with risk factors likely to exacerbate the effects of such drugs. The findings of this study have further clinical relevance in a second, perhaps less expected, respect. The attenuated-inactivation N588K–HERG mutant used in this study has been shown recently to underlie the SQT1 familial form of the genetic ‘Short QT syndrome’, which carries a risk of cardiac arrhythmia and sudden death [27,60]. Pharmacological approaches to correcting the QT-interval of SQT1 patients are currently very limited. These patients are comparatively insensitive to Class III IKr/HERG blocking drugs [27,61] and the N588K–HERG blocking potencies of the IKr/HERG blockers E-4031 and D-sotalol are reduced ∼12–20-fold compared to their effects on WT–HERG [62,63], presumably due to a role (direct or otherwise) of channel inactivation in facilitating drug binding to the HERG channel. To date, only the Class Ia antiarrhythmic drug quinidine has been found both to inhibit N588K–HERG effectively and to correct the QT interval in such patients [27,62,63]; however, very recently, another Class Ia antiarrhythmic, disopyramide, has been shown to be effective against N588K–HERG in vitro [63]. The present study identifies doxepin as both an IHERG blocker for which channel inactivation does not play a major role in drug binding and as an additional drug that is an effective inhibitor of N588K–HERG.
Whilst the sedative effects of doxepin may make it unsuitable as a corrective treatment for SQT1 patients, our findings prompt the question as to whether chemical structures related to doxepin might offer viable IHERG-blocking agents in SQT1.
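As numerical context for the therapeutic plasma levels quoted in Section 4.2 above, the sketch below converts ng/ml to micromolar and applies a simple Hill (n = 1) block model at the WT IC50 of 6.5 μM reported here. The molecular mass used (≈315.8 g/mol, i.e. the hydrochloride salt) is an assumption chosen because it reproduces the 0.16–0.8 μM range quoted in the text; the predicted block percentages are model extrapolations, not measurements.

```python
MW_DOXEPIN_HCL = 315.8        # g/mol; assumed salt form (see lead-in)
IC50_UM = 6.5                 # WT IHERG IC50 from this study, uM

def ng_per_ml_to_uM(ng_ml, mw_g_mol):
    # ng/ml equals ug/l, and ug/l divided by g/mol gives umol/l
    return ng_ml / mw_g_mol

for level_ng_ml in (50.0, 250.0, 1000.0):
    c = ng_per_ml_to_uM(level_ng_ml, MW_DOXEPIN_HCL)
    predicted_block = c / (c + IC50_UM)   # simple Hill model, n = 1
    print(f"{level_ng_ml:6.0f} ng/ml ~ {c:.2f} uM -> predicted block {predicted_block:.0%}")
```

On this naive model, block is only a few per cent at the lower therapeutic bound but approaches a third at 1000 ng/ml, before any allowance for the reported 41-fold myocardial accumulation.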
[ "herg", "potassium channel", "antidepressant", "doxepin", "rapid delayed rectifier", "qt interval", "arrhythmia", "short qt syndrome", "torsade de pointes", "i kr", "long qt syndrome", "qt-prolongation" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "M", "U" ]
Diabetologia-4-1-2292427
What is the mechanism of microalbuminuria in diabetes: a role for the glomerular endothelium?
Microalbuminuria is an important risk factor for cardiovascular disease and progressive renal impairment. This holds true in the general population and particularly in those with diabetes, in whom it is common and marks out those likely to develop macrovascular disease and progressive renal impairment. Understanding the pathophysiological mechanisms through which microalbuminuria occurs holds the key to designing therapies to arrest its development and prevent these later manifestations. Introduction The associations between microalbuminuria, cardiovascular disease and progressive renal impairment are well described, but how these are linked mechanistically is something of a conundrum [1]. Here we focus on the pathogenesis of microalbuminuria in patients with diabetes, in whom it occurs commonly and has particular significance. In type 1 diabetes the prevalence gradually increases from onset of disease (6% after 1–3 years), reaching over 50% after 20 years [2]. In type 2 diabetes the prevalence is 20–25% in both newly diagnosed and established diabetes [3]. However, it is also instructive to review the general epidemiology of microalbuminuria, including those conditions with which it is associated and those for which it is a risk factor. Such an analysis reveals generalised endothelial dysfunction as a common denominator in microalbuminuria in both the general and diabetic populations. In 1989, this observation led to the hypothesis that a common process underlies both microalbuminuria and generalised endothelial dysfunction in diabetes. This process was suggested to be the dysregulation of enzymes involved in metabolism of extracellular matrix, the ‘Steno hypothesis’ [4]. Nearly 20 years on from the Steno hypothesis, the determinants of selective glomerular permeability to proteins at the cellular and molecular level are much better understood. In particular, the importance of podocyte-specific proteins in the regulation of selective permeability has been recognised. Similarly, much is now known about the biochemical derangements important in the pathogenesis of diabetic complications. We draw together these elements to consider the pathophysiological mechanisms through which diabetes exerts its effects on glomerular permeability in the initiating stages of diabetic nephropathy, i.e. at or before the appearance of microalbuminuria. These early changes establish the milieu in which the more advanced changes of overt diabetic nephropathy develop. Defining the mechanistic links from biochemical derangements to the appearance of increased urinary albumin highlights key elements in the pathophysiological pathway of the development of both diabetic nephropathy and micro- and macrovascular disease elsewhere. We hold with the established view that increased transglomerular passage of albumin is the major source of microalbuminuria [5]. While other hypotheses have been advanced, for example, failure of tubular reuptake of albumin, none are sufficiently robust to seriously challenge this position. In both general and diabetic populations, conditions associated with endothelial damage predispose to microalbuminuria In the general (non-diabetic) population, hypertension is the major risk factor for microalbuminuria, and the prevalence of microalbuminuria in essential hypertension is around 25%. 
Individuals with essential hypertension who develop microalbuminuria have a higher incidence of biochemical disturbances, implying that these additional derangements, rather than hypertension per se, may be the cause of microalbuminuria [6]. Microalbuminuria is strongly associated with vascular disease in hypertensive patients, suggesting that it is a marker of vascular and/or endothelial damage in this condition [7]. The insulin resistance syndrome describes a clustering of disorders the underlying pathology of which is thought to be related to insulin resistance and/or endothelial dysfunction [8]. Microalbuminuria is associated with several of the disturbances found in the insulin resistance syndrome, including endothelial dysfunction and obesity, in addition to type 2 diabetes. Proinflammatory cytokines produced by visceral adipocytes (adipokines) have recently emerged as important mediators of the increased cardiovascular risk associated with the insulin resistance syndrome. These adipokines represent a possible link from insulin resistance and obesity to microalbuminuria in the non-diabetic population. Microalbuminuria can be detected in patients undergoing major surgery, particularly when complicated by sepsis [9], and is associated with other inflammatory states, including rheumatoid arthritis and inflammatory bowel disease [10]. Microalbuminuria can also be detected in a significant proportion of the normal non-diabetic, normotensive population (6.6% in one study [11]), where it also associates with cardiovascular disease. Male sex [11] and hormone replacement therapy in women [12] seem to increase susceptibility to microalbuminuria, and although the basis for this is not clear, the fact that men have a higher incidence of vascular disease in general implies a common aetiology. Hypertension is approximately twice as frequent in individuals with diabetes as in those without, and hypertensive individuals are predisposed to the development of diabetes [13]. Hypertension is certainly a major determinant of microangiopathy in diabetes, but the relationship between hypertension and microalbuminuria in diabetes is complex. Hypertension and microalbuminuria often coexist in diabetic patients, and reducing blood pressure reduces microalbuminuria in type 1 diabetes [14]. However, it is unclear whether hypertension contributes to the development of microalbuminuria in diabetes. At least in type 1 diabetes, hypertension and microalbuminuria appear to develop together: in longitudinal studies there is no evidence that hypertension develops before microalbuminuria [15]. This is more difficult to demonstrate in type 2 diabetes, perhaps because of the heterogeneity of the disease. One-third of patients with type 1 diabetes develop advanced nephropathy, and the renal status of probands alters the nephropathy risk of their diabetic relatives by nearly 50% [16]. A family history of hypertension also predisposes to microalbuminuria in diabetes [17]. These observations suggest a genetic predisposition in individuals with type 1 diabetes who develop diabetic nephropathy, but as yet no gene has been identified. Perhaps most significantly, this and other evidence suggests that the susceptibility of type 1 diabetic patients to diabetic nephropathy is increased in response to increased genetic risk of insulin resistance [18]. Consequently, it is the combination of this genetic risk with hyperglycaemia and its sequelae that leads to microalbuminuria and, eventually, diabetic nephropathy.
Thus, in the diabetic as well as in the general population the risk factors for the development of microalbuminuria can be grouped into those associated with vascular disease (including endothelial dysfunction), inflammation and insulin resistance. This implies that microalbuminuria may also, at least in these situations, result from endothelial dysfunction. This does not preclude significant damage to other components of the vessel wall, including basement membrane, pericytes, matrix components (collagen, elastin) and vascular smooth muscle cells. However, as the principal regulators of vascular permeability, endothelial cells are likely to be involved in systemic increases in permeability. Microalbuminuria is a risk factor for macro- and microangiopathy, including advanced diabetic nephropathy Microalbuminuria is a significant risk factor for cardiovascular mortality in type 1 [19] and type 2 [20] diabetes, as well as in the non-diabetic population [21]. In type 1 diabetes, microalbuminuria is closely associated with microangiopathy elsewhere (e.g. retinopathy). Although this association is also present in type 2 diabetes, in a proportion of these patients microalbuminuria does not appear to reflect generalised microvascular damage [22]. Microalbuminuria predicts the development of overt diabetic nephropathy in type 1 and 2 diabetes; however, the relationship in type 2 diabetes is less clear because of the greater heterogeneity of this condition and the presence of other risk factors for microalbuminuria in these, usually elderly, patients [23]. Microalbuminuria invariably precedes overt diabetic nephropathy, and although microalbuminuria may regress spontaneously in a proportion of cases, it remains the best documented predictor for high risk of development of diabetic nephropathy in both type 1 and type 2 diabetes [24]. The relationship between microalbuminuria and vascular disease suggests a common causality, as established risk factors explain at most a small part of these associations [25]. Unifying mechanisms, such as generalised endothelial dysfunction or inflammation, are therefore implicated [8]. Indeed, albumin excretion rate (AER) is correlated with endothelial dysfunction in type 1 and type 2 diabetes, and in the non-diabetic population [26]. Markers of chronic low-grade inflammation, including C-reactive protein (CRP), are also correlated with microalbuminuria in type 1 and type 2 diabetes [27], and increasing evidence from the non-diabetic population indicates the importance of inflammation in the pathogenesis of cardiovascular disease [28]. Generalised and glomerular endothelial dysfunction Markers of endothelial dysfunction, including elevated serum von Willebrand factor (vWF) and increased transcapillary albumin escape rate, are present before the onset of microalbuminuria in type 1 diabetes and worsen in association with it [27, 29]. Type 2 diabetes is often complicated by the presence of other risk factors for vascular disease, and discerning the contribution of hyperglycaemia and its sequelae to endothelial dysfunction in this condition is more difficult. While in some type 2 diabetic patients microalbuminuria may occur in the absence of evidence of endothelial dysfunction, in others, vWF levels predict its development [30]. The close association between endothelial dysfunction and microalbuminuria in type 1 diabetes may underlie the predictability of development of diabetic nephropathy and the greater susceptibility to micro- and macrovascular disease in other organs.
As endothelial dysfunction is an important antecedent of microalbuminuria in both types of diabetes (albeit with a less predictable relationship in type 2), it provides an attractive explanation for the association between microalbuminuria and vascular disease in diabetes (Fig. 1). But to what extent can endothelial dysfunction be said to cause microalbuminuria? As the glomerular endothelium is exposed to the same diabetic milieu as other endothelia, it is highly likely that it is also dysfunctional. This raises the question of how glomerular endothelial dysfunction could lead to microalbuminuria. To address this we now turn to consider the structure and function of the glomerular filtration barrier (GFB). Fig. 1The relationship between hyperglycaemia, insulin resistance, endothelial dysfunction, macrovascular disease and microalbuminuria in type 1 and type 2 diabetes. Proposed major pathways are represented by red arrows; those of less certain significance by black arrows. The diagram illustrates, for example, a possible mechanism for the increased risk of microalbuminuria in patients with type 1 diabetes and a susceptibility to insulin resistance. Particularly in type 2 diabetes, other pathways, not directly involving endothelial dysfunction, are likely in the pathogenesis of macrovascular disease and may also contribute to microalbuminuria (broken arrows) The glomerular filtration barrier is a complex biological sieve Unlike other capillaries, glomerular capillaries have a high permeability to water (hydraulic conductivity) yet, like other capillaries, are relatively impermeable to macromolecules. These fundamental permeability properties depend on the unique, three-layer structure of the GFB: the endothelium with its glycocalyx, the glomerular basement membrane (GBM) and podocytes (glomerular epithelial cells; Fig. 2) [31]. Fig. 2Representation of a cross-section through the GFB showing the three-layer structure consisting of glomerular endothelium and glycocalyx, glomerular basement membrane (GBM) and podocyte foot processes. Albumin, represented by orange ellipses, does not pass through the normal GFB in significant amounts Glomerular endothelial cells Glomerular endothelial cells are highly specialised cells with regions of attenuated cytoplasm punctuated by numerous fenestrae, circular transcellular pores 60–80 nm in diameter [31, 32]. Initially these fenestrations were thought to be empty and therefore to provide little barrier to the passage of proteins. Standard fixation protocols for electron microscopy do not preserve the glycocalyx, but newer fixation techniques have allowed the demonstration of a glomerular endothelial glycocalyx of 200–400 nm in thickness [33, 34]. This glycocalyx covers both fenestral and inter-fenestral domains of the glomerular endothelial cell luminal surface. Studies of the systemic endothelial glycocalyx are instructive here. The glycocalyx is a dynamic, hydrated layer largely composed of glycoproteins and proteoglycans with adsorbed plasma proteins. Heparan sulphate proteoglycans (HSPGs) are largely responsible for the negative charge characteristics of the glycocalyx. Removal of the glycocalyx increases vascular protein permeability, providing evidence that it hinders the passage of macromolecules [35, 36]. Capillaries with fenestrations are much more permeable to water and small solutes than those without, but are not correspondingly more permeable to proteins.
These characteristics can only be explained by the glycocalyx [37]. Therefore, the presence of a significant glomerular endothelial glycocalyx implies that the glomerular endothelium significantly contributes to the barrier to macromolecules [31, 32, 38]. Experimental data support this view: in mice treated with glycocalyx-degrading enzymes the distance between luminal lipid droplets and the glomerular endothelium was decreased, and this was accompanied by an increase in AER [39]. In rats, under normal perfusion conditions albumin is confined to the glomerular capillary lumen and endothelial fenestrae [40]. Endothelial glycocalyx has the correct anatomical distribution (on the surface of endothelial cells, including in fenestral openings) to explain this distribution. Reactive oxygen species (ROS), which are known to disrupt the glycocalyx [41], cause torrential proteinuria without any identifiable structural changes in the GFB using standard electron microscopy techniques [42]. Furthermore, the ability of human glomerular endothelial cell glycocalyx to form a permeability barrier to macromolecules can be directly demonstrated in vitro [43]. The GBM The GBM is a basal lamina specialised for the structural requirements of the glomerular capillary wall and its filtration function. It is a hydrated meshwork of collagens and laminins to which negatively charged HSPGs are attached. Traditional concepts have therefore characterised the GBM as a charge-selective barrier. However, more recent analyses indicate that it only makes a small direct contribution to the barrier to protein passage [31]. Podocytes Podocytes, or, more specifically, their interdigitating foot processes, form the outer layer of the GFB (Fig. 2). The gaps between adjacent foot processes, the ‘filtration slits’ (25–60 nm), are spanned by the slit diaphragm. This is a molecular structure thought to form the most restrictive barrier to the passage of water and macromolecules. The effect of mutations in podocyte-specific proteins (e.g. nephrin mutations result in congenital nephrotic syndrome) indicates the importance of podocytes in resisting the passage of protein [44]. However, exactly how podocytes and their foot processes contribute to selective permeability is not yet clear. Electron microscopy studies have suggested that the slit diaphragm has a porous structure with a pore half-width of 2 nm. However, this figure is too small to account for the experimentally observed passage of solutes of various radii, emphasising that our understanding of slit diaphragm function is incomplete [45]. Podocyte biology has been reviewed in detail elsewhere [44]. Passage of albumin across the normal GFB Filtration Water filters across the GFB via gaps through or between cells rather than through the cell cytoplasm. Therefore, the high hydraulic conductivity of the GFB depends on the presence of fenestrae and filtration slits. Fifty per cent of the hydraulic resistance of the GFB is afforded by the two cell layers (endothelium and podocyte foot processes), and 50% by the GBM [46]. Solute flux Solute flux (including albumin) from the glomerular capillary occurs by a combination of convection (i.e. being swept along by the filtration of water) and diffusion [31]. The rate of convection of a solute depends on the filtration rate and on its reflection coefficient. Reflection coefficients are related to solute size, larger solutes having higher reflection coefficients. A value of 1 indicates that a molecule is totally excluded.
The rate of diffusion of a solute depends on the concentration gradient and its diffusivity (solute permeability). The diffusivity of a solute decreases with increasing size. A measure of overall flux of a solute is given by the sieving coefficient: the filtrate to plasma concentration ratio. The sieving coefficient for albumin across the normal GFB is <0.001 [47]. The ratio of the contribution of convective to diffusive flux (the Peclet number) increases with increasing molecular size. That is, for small molecules, diffusion dominates; for larger molecules, convection dominates. The Peclet number for albumin across the normal GFB is close to unity, so both diffusion and convection contribute [48]. Hence, an increase in GFR will result in an increase in the convective flux of albumin. However, despite this, the best estimates indicate that even a 50% increase in filtration rate would increase urinary albumin only within the sub-microalbuminuric range, as the majority of the excess albumin is reabsorbed by the tubules [47]. Therefore, for albumin flux to increase sufficiently to produce microalbuminuria (assuming normal tubular reuptake), the GFB must be physically altered in such a way as to increase the sieving coefficient of albumin across it. Models have been developed to correlate the macromolecular sieving properties of the GFB with the structure and properties of its components [31]. These indicate that the GFB functions as a whole, with each layer having an important contribution to selective permeability. Whereas hydraulic resistances are essentially additive, sieving coefficients are multiplicative. This relationship means that a change in any one element affects the overall protein permeability to the same degree: a 10% change in the permeability of any one layer will produce a 10% change in overall GFB permeability. Importantly, this means that, regardless of which layer is most restrictive, a change in the permeability of any layer of the GFB could potentially account for microalbuminuria. Structural alterations in the GFB associated with microalbuminuria in diabetes Glomerular structural changes typical of diabetic nephropathy are commonly established by the time microalbuminuria becomes apparent [49, 50]. However, the changes seen are heterogeneous, and all may be found in normoalbuminuric diabetic patients [51]. Early changes described include an increase in glomerular size, GBM thickening, mesangial expansion and broadening of podocyte foot processes [49–51]. The increase in glomerular size is due both to mesangial expansion and to enlargement in glomerular capillaries. The latter occurs at least in part through angiogenesis, an endothelium-dependent process [52, 53]. Glomerular structural changes are less marked in type 2 diabetes, with only a third conforming to the classical pattern observed in type 1 diabetes [22]. Specific assessment of structural changes in glomerular endothelial cells and associated glycocalyx in diabetes has not been performed. Evidence is emerging, however, that total systemic glycocalyx volume is reduced by acute hyperglycaemia in humans [54]. Furthermore, type 1 diabetic patients have decreased systemic glycocalyx volume, and this correlates with the presence of microalbuminuria [55]. GBM thickening alone, without change in composition, does not significantly affect its protein permeability characteristics.
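Returning to the sieving arithmetic above, a small numeric sketch shows how multiplicative layer sieving coefficients propagate to the whole barrier. The per-layer values are invented for illustration, constrained only so that their product stays below the quoted overall albumin sieving coefficient of 0.001.

```python
# Illustrative per-layer sieving coefficients (invented; see lead-in)
endothelium_glycocalyx = 0.05
gbm = 0.10
podocyte_slit = 0.10

# Sieving coefficients multiply across the three layers of the GFB
overall = endothelium_glycocalyx * gbm * podocyte_slit
print(f"overall albumin sieving coefficient ~ {overall:.4f}")   # 0.0005, i.e. <0.001

# A 10% loosening of any single layer raises the overall coefficient by 10%,
# regardless of which layer is the most restrictive one
loosened = (1.10 * endothelium_glycocalyx) * gbm * podocyte_slit
print(f"after a 10% change in one layer: {loosened / overall:.2f}x the original")
```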
Broadening of podocyte foot processes (effacement) is an indicator of podocyte injury but generally correlates poorly with degree of proteinuria [56]. Indeed, proteinuria may occur in the complete absence of structural changes to podocytes [57]. This is the case with diabetic microalbuminuria, which can occur in the absence of such changes, at least in type 2 diabetes [58]. Some have suggested that podocyte loss occurs in early diabetes and that this would contribute to the filtration barrier defect [51]. Others have argued that significant early podocyte loss does not occur [59]. A likely reconciliation is that the lower proportion of podocytes seen in early disease is due to a relative increase in mesangial and endothelial cells, while podocyte loss occurs at a later stage. Functional alterations in GFB selective permeability associated with microalbuminuria in diabetes Increased flux of albumin across the GFB in diabetic microalbuminuria can be confirmed in experimental models using isolated glomeruli [60] and by inhibition of tubular reabsorption of albumin [61]. Analysis of the permeability of the GFB to molecules of varying size and charge can be used to estimate whether the increased flux of albumin is due to loss of size or charge selectivity of the GFB. In animal models of diabetes the defect is primarily in charge selectivity [61]. In clinically healthy non-diabetic individuals with microalbuminuria, loss of both size and charge selectivity of the GFB can be demonstrated [62]. However, in type 1 diabetes, defects in charge selectivity occur earlier than loss of size selectivity [63]. Similarly, in Pima Indians with type 2 diabetes, microalbuminuria is associated with loss of charge selectivity. Loss of size selectivity is seen only in those developing macroalbuminuria [58]. We have already noted that physical alteration of the GFB is necessary to increase its permeability to albumin. Therefore, this confirmation of changes in GFB selective permeability in diabetic microalbuminuria in the absence of clearly identified structural correlates suggests that the key changes are yet to be elucidated. These considerations and the predominance of defects in charge selectivity point to alterations in the negatively charged glomerular endothelial glycocalyx as the missing link. We now move to consider what aspects of the diabetic milieu are responsible for these GFB changes and how they might cause endothelial, including glycocalyx, dysfunction. Metabolic pathways and effectors from hyperglycaemia to microalbuminuria There is now overwhelming evidence that hyperglycaemia is the major initiating factor in the pathogenesis of diabetic complications, including microalbuminuria. However, most adverse effects of glucose are mediated indirectly through diverse metabolic pathways. Four major hypotheses have highlighted the roles of advanced glycation end-products (AGEs), increased activity of the polyol pathway, activation of protein kinase C and increased flux through the hexosamine pathway (Fig. 3). Activation of these pathways in turn causes dysregulation of a number of effector molecules which cause cellular damage and dysfunction. The roles of these pathways and effectors have been studied in detail in overt diabetic nephropathy, but the importance of these elements individually in its initiation and the appearance of microalbuminuria is less clear. Fig. 3Pathways to microalbuminuria in diabetes. Hyperglycaemia, through increased mitochondrial superoxide production, dysregulates key intracellular metabolic pathways.
These in turn lead to the production of effectors that directly cause glomerular endothelial cell (GEnC) dysfunction (particularly of the glycocalyx) and disturb podocyte–endothelial cell communication. This results in microalbuminuria. Progression of these lesions and development of other glomerular changes, including podocyte damage, lead to overt diabetic nephropathy For example, TGFβ is an effector molecule with a clear role in the progression of established diabetic nephropathy, but there is no evidence that it is important in the mechanism of proteinuria per se. It is a key pro-fibrotic mediator, and in type 1 and type 2 diabetes, TGFβ is elevated in serum, urine and glomeruli from an early stage of disease. Its levels correlate with the degree of mesangial expansion, interstitial fibrosis and renal insufficiency, but not with microalbuminuria. Similarly, in animal models, blockade of TGFβ signalling inhibits the development of these pathological features, apart from the microalbuminuria [64, 65]. Here we focus on selected intermediaries and effectors that have an identifiable role in microalbuminuria. In large part, dissection of these pathways and elucidation of their role in glomerular disease has necessarily relied on tissue culture or animal models. These are described where human studies are lacking or where they provide additional significant insights. ROS Brownlee has proposed oxidative stress as a unifying mechanism whereby the above-mentioned four pathways are inter-linked in the pathogenesis of diabetic complications (Fig. 3) [66]. Hyperglycaemia increases oxidative stress through overproduction of superoxide and other ROS by the mitochondrial electron transport chain. ROS have direct cellular effects (e.g. they increase activity of nuclear factor κB [NFκB], a key inflammatory regulator), increase oxidative stress and interact with the above four pathways. Normalisation of mitochondrial superoxide production blocks all four of these pathways implicated in hyperglycaemic damage and corrects a variety of hyperglycaemia-induced phenotypes in target cells of diabetic complications. Endogenous superoxide dismutase normally neutralises excess superoxide but is overwhelmed in the diabetic state. As well as forming a nexus of metabolic pathways dysregulated by hyperglycaemia, ROS can also be considered as effectors through direct cellular actions. ROS decrease glomerular HSPG production [67], directly disrupt the endothelial glycocalyx [68], interfere with nitric oxide bioavailability and activate NFκB. Glomerular ROS production is increased in experimental diabetes [69] and transgenic overexpression of superoxide dismutase attenuates renal injury, including increases in AER [70]. A superoxide dismutase mimetic also ameliorates increases in glomerular permeability both in vivo and ex vivo in non-diabetic models [71]. Little work has yet been done on the use of antioxidants in human diabetic nephropathy, but in vitro work confirms the importance of ROS in cells of the GFB. Podocytes produce ROS in response to high glucose [72], and genetic overexpression of superoxide dismutase prevents inhibition of endothelial nitric oxide synthase (eNOS) activity by hyperglycaemia in endothelial cells [73]. Vascular endothelial growth factor Vascular endothelial growth factor (VEGF) is a key regulator of vascular permeability and angiogenesis and is implicated in the pathogenesis of diabetic retinal neovascularisation.
In the glomerulus it is produced in large amounts by podocytes and is thought to be important in maintaining glomerular endothelial cell fenestrations [74]. In rat models of type 1 diabetes, VEGF is upregulated in podocytes throughout the course of disease [75]. VEGF inhibition attenuates glomerular hypertrophy and albumin excretion and prevents upregulation of eNOS production in glomerular endothelial cells [76]. Similar effects are observed in some, but not all, models of type 2 diabetes [77]. In human type 1 diabetes, serum VEGF concentrations vary according to glycaemic control, and higher levels are associated with microvascular complications, including microalbuminuria [78]. In type 2 diabetes, VEGF is upregulated early in the course of disease and urinary VEGF levels are correlated with microalbuminuria [79]. Further studies have confirmed initial upregulation of VEGF signalling in type 2 diabetes followed by a downregulation as podocyte loss and sclerosis develops [80]. VEGF has the potential to induce the new vessel growth seen in early diabetic nephropathy [53] and to alter the permeability characteristics of the endothelium. Indeed, there is increasing evidence of the importance of precise control of glomerular VEGF for normal GFB function: both transgenic podocyte-specific under- or overexpression result in glomerular abnormalities, including glomerular endothelial changes and proteinuria [81]. Retardation of albuminuria in experimental diabetes by angiogenesis inhibitors further implies the importance of angiogenesis and/or endothelial-related processes in the development of diabetic microalbuminuria [52, 82]. The growth hormone/IGF system Dysregulation of the growth hormone/IGF system can be detected early in experimental diabetes and is associated with both glomerular hypertrophy and microalbuminuria [83]. All components of the growth hormone/IGF system can be detected at the mRNA level in the normal kidney. However, as detailed histochemical studies have not been performed, the exact location of system components within the glomerulus and the likely role of particular cell types cannot be defined. The rapid renal growth in experimental diabetes is preceded by a rise in the renal concentration of IGF-1 [84]. Somatostatin analogues ameliorate the increase in IGF-1 and renal hypertrophy and reduce AER [85]. In human type 1 diabetes, serum growth hormone levels and urinary IGF-1 levels are elevated and correlate with microalbuminuria, while an IGF-1 gene polymorphism modifies the risk of development of microalbuminuria [86]. The somatostatin analogue octreotide reduces macroalbuminuria and endothelial dysfunction in type 2 diabetes [87]. Taken together, this evidence points to a role for the growth hormone/IGF system early in human diabetic nephropathy. However, there are few clues to its contribution to the pathogenesis of microalbuminuria at the structural level. IGF-1 activates intracellular intermediates, including NFκB, and an IGF-1 receptor inhibitor suppresses VEGF production, suggesting that VEGF is a downstream mediator of the effects of IGF-1 [88]. Proinflammatory cytokines and adipokines TNFα is a proinflammatory cytokine with diverse actions, including increased production of endothelial cell adhesion molecules and IL-6, which in turn regulates CRP. TNFα also directly increases endothelial permeability and disrupts the glycocalyx [41, 89].
In experimental diabetes, TNFα levels rise in urine and the renal interstitium prior to the onset of albuminuria and correlate with it [90], as they do in human type 2 diabetes [91]. IL-6 levels also correlate with albuminuria in type 1 and 2 diabetes [91, 92]. CRP, often thought of as a downstream marker of inflammation, is elevated in both type 1 and 2 diabetes [27, 93] and correlates strongly with cardiovascular disease. Adipokines, which include leptin and adiponectin (and, arguably, TNFα and IL-6), also have the potential to contribute to the development of microalbuminuria in both non-diabetic and diabetic populations. Enlargement of fat cells in obesity is associated with a generalised proinflammatory state, including increased levels of TNFα and IL-6, and hypersecretion of adipokines, with the exception of adiponectin, which is downregulated [94]. Leptin induces vascular permeability and synergistically stimulates angiogenesis with VEGF [95]. Its serum levels correlate strongly with nephropathy in type 2 diabetes [96]. Adiponectin, unlike other adipokines, appears to have protective effects in the vasculature in general by reducing endothelial cell activation and inflammation. What do the changes in regulation and expression of these mediators tell us about the mechanism of microalbuminuria? It is clear that ROS and oxidative stress have a central role, given their importance in various metabolic pathways and direct cellular effects, including in the disruption of endothelial glycocalyx, which may be relevant in the pathogenesis of microalbuminuria. There is strong evidence for a role of VEGF early in the course of diabetic nephropathy, probably through upregulation of production by podocytes and actions on glomerular endothelial cells disrupting their contribution to the GFB. The importance of dysregulation of angiogenic factors is emphasised by the protective effects of angiogenesis inhibitors. The growth hormone/IGF system is also dysregulated early in diabetes, and it appears that IGF-1 is an important contributor to microalbuminuria through VEGF. Involvement of inflammatory mediators again points to a role for glomerular endothelial cells through endothelial activation, and hence an increase in permeability through disruption of the glycocalyx. Conclusions In summary, the various avenues of study of diabetic microalbuminuria reviewed converge on the glomerular endothelium. Our analysis therefore leads to the conclusion that this is the site of the initial damage that leads to the development of microalbuminuria in diabetes. The most important aspect of this damage is disruption of the endothelial glycocalyx through actions of mediators dysregulated by the diabetic milieu. Key players include ROS, VEGF and proinflammatory cytokines (Fig. 4). Disturbance of endothelial cell–podocyte communication contributes to and amplifies the endothelial lesion. Progression of microalbuminuria to overt nephropathy is accompanied by predictable structural changes in the glomerulus, including podocyte damage and loss. This is the result of the ongoing diabetic milieu and disturbed cell–cell communication, but is also secondary to increased penetration of the GFB by serum proteins [97]. Fig. 4Proposed mechanism of glomerular filtration barrier damage leading to diabetic microalbuminuria. High glucose causes dysregulation of mediators including TNFα and enhanced production of ROS, which directly damage the glomerular endothelial glycocalyx leading to microalbuminuria.
Increased levels of pro-angiogenic molecules, including VEGF and inflammatory mediators, induce an activated and more permeable glomerular endothelial cell phenotype Thus recent evidence remains broadly supportive of the Steno hypothesis [4] but it points specifically to disturbance of the endothelial glycocalyx as the common process underlying both microalbuminuria and generalised endothelial dysfunction. While it would be overly simplistic, particularly in type 2 diabetes, to suggest that this is the only factor involved (Fig. 4), it follows nevertheless that microalbuminuria is an indicator of generalised endothelial dysfunction. Most interestingly, these conclusions imply that glycocalyx dysfunction is involved in the pathogenesis of other vascular disease, both microvascular and macrovascular. Therefore, therapies aimed at protecting or repairing endothelial cells and their glycocalyx would be expected to retard these diseases. Reduction or reversal of microalbuminuria implies resolution of generalised endothelial dysfunction and has potential to be a useful indicator of successful reduction of overall cardiovascular risk [98].
[ "microalbuminuria", "diabetes", "podocyte", "glomerular filtration barrier", "glycocalyx", "glomerular endothelial cell" ]
[ "P", "P", "P", "P", "P", "P" ]
Environ_Manage-4-1-2242854
Headwater Influences on Downstream Water Quality
We investigated the influence of riparian and whole watershed land use as a function of stream size on surface water chemistry and assessed regional variation in these relationships. Sixty-eight watersheds in four level III U.S. EPA ecoregions in eastern Kansas were selected as study sites. Riparian land cover and watershed land use were quantified for the entire watershed, and by Strahler order. Multiple regression analyses using riparian land cover classifications as independent variables explained among-site variation in water chemistry parameters, particularly total nitrogen (41%), nitrate (61%), and total phosphorus (63%) concentrations. Whole watershed land use explained slightly less variance, but riparian and whole watershed land use were so tightly correlated that it was difficult to separate their effects. Water chemistry parameters sampled in downstream reaches were most closely correlated with riparian land cover adjacent to the smallest (first-order) streams of watersheds or land use in the entire watershed, with riparian zones immediately upstream of sampling sites offering less explanatory power as stream size increased. Interestingly, headwater effects were evident even at times when these small streams were unlikely to be flowing. Relationships were similar among ecoregions, indicating that land use characteristics were most responsible for water quality variation among watersheds. These findings suggest that nonpoint pollution control strategies should consider the influence of small upland streams and that protection of downstream riparian zones alone is not sufficient to protect water quality. Introduction Nonpoint source pollution is a serious problem that degrades surface waters and aquatic ecosystems. Loading of nutrients, sediment, and other pollutants from the landscape may compromise the integrity of freshwaters (Hunsaker and Levine 1995). In particular, excessive inputs of nitrogen and phosphorus result in eutrophication and fundamental changes in trophic state of lakes and streams (Carpenter and others 1998; Dodds and others 2002; Dodds 2006) and the impairment of surface waters for uses such as drinking, recreation, and support of aquatic life (Dodds and Welch 2000). These problems are pervasive; almost 40% of classified stream miles in the United States may be impaired, with diffuse pollutants responsible for a large percentage of impairments (U.S. Environmental Protection Agency [EPA] 2000). In response to these problems, research has focused on identifying and testing practices that reduce excessive pollutant loading and help restore the health of aquatic ecosystems. The development of remote sensing and geographic information systems (GIS) technologies has facilitated quantitative assessment of landscape influences on aquatic ecosystems and watershed-scale approaches to the study of water quality (Johnson and Gage 1997). Watershed land cover is strongly correlated with water chemistry parameters, especially nutrient concentrations (e.g., Hunsaker and Levine 1995; Johnson and others 1997; Jones and others 2001; Osborne and Wiley 1988; Sliva and Williams 2001). Riparian land use may be particularly influential and, in some cases, a better predictor of in-stream water quality than land cover in the entire catchment (Johnson and others 1997; Osborne and Wiley 1988). Intact riparian zones provide water quality benefits and help preserve the biological integrity of watersheds (Gregory and others 1991).
In areas such as the Midwestern United States, large-scale land use conversion has resulted in some of the worst water pollution in the United States (U.S. EPA 2000) and imperilment of many native aquatic species (Fausch and Bestgen 1997). Establishing or protecting riparian zones or large watershed areas that mitigate impacts of human land use on water quality may be costly or politically difficult, particularly in areas where much of the land is privately owned. In such instances, it is essential that scientists and managers identify areas within watersheds where protection would produce the most substantial water quality benefits, and prioritize these areas for protection. Geographic information systems are ideally suited to provide such identification because landscape analyses encompass the full range of spatial scales across which stream processes are regulated (Allan and others 1997) and allow for multiscale examinations of riparian (e.g., Johnson and Gage 1997) or headwater impacts on water quality. We examined relationships between riparian and whole watershed land cover and water chemistry metrics in streams in Kansas at spatial scales ranging from several kilometers to the entire watershed, with the objective of identifying areas where land use may strongly affect water quality in downstream reaches of the watershed (herein referred to as “downstream water quality”). We hypothesized that land use adjacent to small headwater streams would have a disproportionately large impact on water quality, because these streams provide the predominant hydrologic contributions to the watershed (Lowrance and others 1997), and substantial in-stream nutrient processing and retention in upland streams and rivers can regulate downstream water quality (Alexander and others 2000; Peterson and others 2001). Natural geological and topographic features also influence surface water quality at landscape scales, in addition to anthropogenic factors such as land use conversion (Johnson and others 1997; Sliva and Williams 2001). To assess regional differences related to these features, we compared riparian-water chemistry relationships among four U.S. EPA level III ecoregions. Ecoregions denote general similarities in ecosystem types, serve as a spatial framework for research, assessment, and management of ecosystems (Omernik 1995), and can correspond well with principal factors that may influence surface water quality (e.g., Brown and Brown 1994; Rohm and others 2002). We assessed the degree to which relationships between surface water quality and land cover were affected by landscape heterogeneity (as indicated by ecoregions) by evaluating regional variation in riparian-water chemistry relationships. To our knowledge, no previous studies have examined the importance of headwater riparian zones, compared to other riparian areas within watersheds, at these scales of analysis across multiple watersheds. Methods Sixty-eight small watersheds (mean watershed area, 280 km2; range, 19–1400 km2) were identified in four level III U.S. EPA ecoregions (U.S. EPA 1998a) across eastern Kansas (Fig. 1). These ecoregions also represent 4 of the 14 regions developed for the National Nutrient Strategy (U.S. EPA 1998b), which were classified by both anthropogenic and natural characteristics (i.e., geology, geomorphology, land use, soils, vegetation) associated with nutrient concentrations in streams. Sites were selected across the four ecoregions so that results would not be tied to within-ecoregion characteristics.
Sites were chosen from those regularly sampled by the Kansas Department of Health and Environment within the ecoregions such that the watersheds did not cross ecoregion boundaries and none of the sites were nested. Fig. 1Location of study watersheds in Kansas, grouped by level III U.S. EPA ecoregion, and example of land cover classification scheme, in which riparian and catchment land cover was quantified for the subcatchment of each stream segment in the watersheds Twenty-four watersheds were located in the Flint Hills (FH) ecoregion, characterized by rolling hills, coarse soils, and relatively intact tracts of tallgrass prairie predominantly used as cattle pasture. Because of topography and geology, little of this region has been converted to cropland agriculture. Eighteen watersheds were located in the Central Irregular Plains (CIP), characterized by irregular topography, loam soils, and a variety of land use types, including cropland agriculture, tallgrass prairie, and oak-hickory forests. Fourteen watersheds were located in the Western Corn Belt Plains (WCBP), a region that was historically covered with tall and mixed-grass prairie but has now been almost entirely converted to cropland agriculture. Finally, 12 watersheds were located in the eastern part of the Central Great Plains (CGP) ecoregion, characterized by reduced topography, mixed-grass prairie, and large tracts of cropland agriculture. Criteria for inclusion in the study were as follows: (1) watersheds were sampled for water chemistry parameters a minimum of 12 times, and (2) watersheds were entirely contained within one U.S. EPA level III ecoregion. Watersheds were located across a precipitation gradient, with average rainfall ranging from 610 to 1016 mm/year. No watersheds were chosen that had very large livestock feeding operations or municipal point sources. The few smaller feeding operations (∼1000 animals) included were in all cases at least 0.1 km upstream of the stream chemistry site, and the total area of these operations was included in the analysis (see section Statistical Analyses, below). Relationships between riparian land cover and water chemistry parameters were assessed at four spatial scales (Fig. 2). Riparian land cover throughout entire watersheds was quantified to examine cumulative impacts on water quality. Because small streams exert a large influence on downstream water quality (Alexander and others 2000; Peterson and others 2001), we examined correlations between riparian land cover adjacent to only the smallest (first-order) streams and water chemistry parameters sampled in downstream reaches of these watersheds. In addition, we examined localized riparian impacts on water quality by quantifying riparian land cover both 2 and 4 km upstream of the sampling site. The results of the above analyses were compared to correlations between water chemistry parameters and catchment-scale land cover at both the watershed and the first-order streams scales. In this way, we assessed the relative impact of riparian land cover on water chemistry parameters, compared to catchment land cover. Temporal variation was explored by partitioning water chemistry data seasonally, which allowed for examination of riparian-water chemistry relationships during both high and base flow conditions. Fig. 
2Riparian land cover assessed at four spatial scales: (A) land cover in the whole watershed, (B) land cover adjacent to the first-order streams of watersheds, and (C) land cover 2 and 4 km upstream of the water chemistry sampling point We examined a subset of 39 study watersheds where water chemistry measurements were taken on a fourth-order reach of stream to directly compare the influence of riparian land cover on streams of similar sizes within watersheds. Riparian land cover was quantified by stream order (Strahler 1957) and correlated with downstream water chemistry values separately, so comparisons could be made between stream sizes. In addition, we analyzed riparian land cover-water chemistry relationships among ecoregions to determine if differences existed, or if these relationships held constant across ecosystem types. These analyses also help to show that watershed size and natural factors captured by ecoregions (geology, precipitation, elevation, gradient, etc.) did not confound the interpretations of land use effects. Water Chemistry Data Water chemistry data were collected and analyzed by the Kansas Department of Health and Environment (KDHE) as part of their stream chemistry monitoring network (KDHE 2000). Total nitrogen (TN), nitrate (NO3−), ammonium (NH4+), total phosphorus (TP), total suspended solids (TSSs), atrazine (AT), fecal coliform bacteria (FC), and dissolved oxygen (DO) data were used to assess the impact of riparian land cover on water chemistry. Samples are collected every 2 months between 0900 and 1700 hr at each site on a rotational schedule. Extreme weather (river icing, very high floods) occasionally precludes sampling. Water chemistry samples were collected from the thalweg of each stream, frozen, and stored in acid-washed bottles in the dark, prior to analysis. All TN, NH4+, and TP samples were analyzed within 28 days of collection, NO3− samples were analyzed within 48 hr of collection, TSS and AT samples were analyzed within 7 days of collection, FC samples were analyzed within 24 hr of collection, and DO measurements were taken in the field using a membrane electrode probe. Total nitrogen and phosphorus were analyzed by a colorimetric automated phenate method, following digestion by metal-catalyzed acid and persulfate techniques, respectively (U.S. EPA 1983). Nitrate was analyzed by ion chromatography; NH4+, by semiautomated colorimetry; TSS, by a nonfilterable residue method; and AT, by gas chromatography (U.S. EPA 1983). Fecal coliform bacteria samples were analyzed by a membrane filter procedure (APHA 1992). Field duplicate samples and internal spikes were used to assess the reliability and recovery efficiencies of the assays. Water chemistry data for NO3−, NH4+, TP, TSS, AT, FC, and DO were collected from 1990 to 2001 for all study watersheds. Total nitrogen data were collected from January 2000 to May 2003 for 57 of the 68 study watersheds. Collection of TN data began in 2000 to assist establishment of nutrient criteria for Kansas’ surface waters. For all analyses, mean concentrations of TN, NO3−, NH4+, TP, TSS, AT, and FC were taken for each watershed across sampling dates. Minimum and maximum DO concentrations were quantified by averaging minimum and maximum concentrations by year for all years in which at least five samples were taken, then taking a mean of these concentrations across years.
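A minimal pandas sketch of the DO summary rule just described (per-year minima and maxima, retained only for years with at least five samples, then averaged across years). The column names are assumed for illustration and are not KDHE's actual schema.

```python
import pandas as pd

def do_min_max_by_site(df: pd.DataFrame) -> pd.DataFrame:
    # df columns assumed: 'site', 'date', 'do_mg_l' (illustrative schema)
    df = df.assign(year=pd.to_datetime(df["date"]).dt.year)
    per_year = df.groupby(["site", "year"])["do_mg_l"].agg(["min", "max", "count"])
    per_year = per_year[per_year["count"] >= 5]      # keep years with >= 5 samples
    # Average the yearly minima and maxima across qualifying years
    return per_year.groupby("site")[["min", "max"]].mean()
```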
To examine temporal variability in riparian-water chemistry relationships, we first classified seasons using mean monthly discharge measurements (1990–2001) from 30 USGS gauging stations across the study region. Seasons were classified as the month or months in which 0%–25%, 26%–50%, 51%–75%, and 76%–100% of the annual water volume across the region was discharged. Mean water chemistry concentrations of NO3−, NH4+, TP, TSS, AT, and FC were taken for each of the four seasons. Insufficient data prevented analysis of total nitrogen and DO for temporal differences.

Digital and Land Cover Data

Digital stream networks were derived for each watershed using 30-m digital elevation models, ARCGIS (Arcview version 8.2, 2002), and ArcHydro (Maidment, 2002) software. This method accounts for permanent streams and all but the smallest intermittent streams. Catchment area above each KDHE monitoring site was delineated using catchment-processing tools in ArcHydro software. Using the same processing tools, a subcatchment was delineated for each stream segment of the watersheds. A stream segment was defined as a section of stream from its upstream confluence to its downstream confluence with other tributaries. By overlaying catchment and subcatchment layers with digitized riparian and catchment land cover data, we quantified land cover for each watershed and watershed subcatchment (Fig. 1). Riparian land cover was classified from the Kansas Riparian Areas Inventory dataset (NRCS 2001). The riparian ecotone in this dataset was defined as the 33 m adjacent to the stream and was digitized at a 1:24,000 scale from USGS Digital Orthophotograph Quarter Quadrangles that reflected land cover conditions in 1991. Land cover was thus identified from the beginning of the period of water chemistry sampling; large socioeconomic changes did not occur in Kansas over this time period (e.g., only a ∼10% population increase). This dataset contained 11 land cover classes (animal production area (holding pens or feeding areas), barren land, cropland, crop/tree mix, forest, grassland, grass/tree mix, shrub/scrub land, urban land, urban/tree mix, water), and riparian areas were classified by the land cover type occupying ≥51% of the 33-m ecotone. Of the 11 land cover classes, 3 (shrub/scrub land, barren land, and animal production area) did not account for more than 1% of the riparian land cover in any watershed and were not included in the analyses. The remaining eight classifications were aggregated into five categories (cropland, forest, grassland, urban land, and water) following the level I classification scheme developed by Anderson et al. (1976). Water was not included as a land cover type in analyses. While this scheme can create problems with collinearity, the primary goal of this paper was to determine the best-fit model at different spatial scales within the watershed. Collinearity influences the ability to ascribe causation to individual categories of land use (e.g., cropland, urban, forest, or grassland), but this was not the primary goal of our analysis. Catchment land cover was classified from the Kansas Land Cover dataset (KARS 1993). This dataset was digitized at a 1:100,000 scale from Landsat Thematic Mapper imagery and also contained 11 land cover classes that reflected conditions in 1991. Land cover classes were reclassified in the same way as the riparian dataset.
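The seasonal classification above amounts to binning months by the quartile of cumulative annual water volume in which they fall. The sketch below illustrates the rule with invented monthly discharge values (the study used mean monthly records from 30 USGS gauges); as a simplification, a month is assigned to the quartile containing its end-of-month cumulative total.

```python
import numpy as np

# Mean monthly discharge, January..December; illustrative numbers only.
monthly_q = np.array([3.0, 4.0, 6.0, 9.0, 14.0, 12.0, 8.0, 4.0,
                      3.0, 3.0, 3.0, 3.0])

# Cumulative fraction of annual water volume discharged by each month's end.
cum_frac = np.cumsum(monthly_q) / monthly_q.sum()

# Assign each month to the quartile of annual volume in which it falls.
quartile = np.searchsorted([0.25, 0.50, 0.75], cum_frac, side="left") + 1
for month, (f, q) in enumerate(zip(cum_frac, quartile), start=1):
    print(f"month {month:2d}: cumulative fraction {f:.2f} -> season Q{q}")
```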
Comparison of the riparian dataset to a 33-m “buffer” clipped from the catchment dataset showed highly significant correlations (average Kendall τ correlation = 0.93, p < 0.01) between the two datasets for all land cover types. Information on permitted point sources and confined livestock feeding operations within watersheds was obtained from KDHE and incorporated into GIS to ensure that point sources were not in close proximity to sampling sites.

Statistical Analyses

Forward stepwise linear regression models were used to predict water chemistry parameters from land cover data (the aggregated classifications cropland, forest, grassland, and urban land) at four spatial scales (watershed, first-order streams, 2 km upstream, 4 km upstream). Separate regressions were done at each scale. F-values of 1 and 0 were used as thresholds to include and exclude land cover classifications from regression models. We investigated the predictive ability of riparian land cover independent of catchment effects by examining partial correlations (r) between water chemistry parameters and the riparian land cover classifications that were significant predictors in regression models, controlling for the catchment land cover classifications that were significant predictors. Analysis of variance (ANOVA) was used to test for differences among ecoregions. Since ecoregions were correlated with land use, slopes of relationships were compared among ecoregions at all four spatial scales using general linear model (GLM) analysis of covariance (ANCOVA) to assess whether riparian-water chemistry relationships held constant across ecoregions. Results of comparisons of intercepts on these data were presented in a prior publication (Dodds and Oakes 2004). Least-squares means were used to compare slopes of regression lines. Slopes represent the fundamental response to anthropogenic effects (most relevant to this paper), whereas intercepts indicate the baseline nutrient or pollutant level. Response data appeared normally distributed and were not transformed prior to analyses. All relationships among the data were plotted and no clear outliers or leveraged relationships were observed.

Results

Riparian-Water Chemistry Relationships

Strahler ordering showed that the smallest (first-order) digitized streams on average comprised >60% of the stream miles within study watersheds, with larger streams accounting for sequentially smaller percentages of stream miles. Across all studied watersheds, riparian land cover was a significant predictor of among-site variation in water chemistry concentrations at the watershed and first-order streams scales, particularly for nutrients (Table 1). Less variance was explained at local scales, represented as riparian cover 2 or 4 km upstream from the sampling site (Fig. 3). Total nitrogen, TP, and NO3− were the parameters with the greatest R2 values related to riparian land cover, and all three had slightly greater R2 values using land cover adjacent to first-order streams of watersheds than using riparian land cover across the whole watershed.
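The partial correlations used here to separate riparian from catchment signals can be obtained by regressing both the water chemistry parameter and the riparian cover class on the controlling catchment classes and then correlating the residuals. The analyses reported were run in SAS/Stata; the NumPy sketch below, with simulated data and hypothetical variable names, is only an illustration of the method.

```python
import numpy as np

def residualize(y, X):
    """Residuals of y after ordinary least-squares regression on X
    (a column of ones is appended for the intercept)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return y - X1 @ beta

def partial_corr(chem, riparian, catchment):
    """Correlation between a water chemistry parameter and one riparian
    land cover class, controlling for catchment land cover classes."""
    return np.corrcoef(residualize(chem, catchment),
                       residualize(riparian, catchment))[0, 1]

# Illustrative call: TP vs % riparian urban land, controlling for
# % catchment cropland and % catchment urban land (simulated arrays).
rng = np.random.default_rng(0)
catch = rng.uniform(0, 100, size=(68, 2))          # 68 watersheds, 2 classes
urban_rip = 0.8 * catch[:, 1] + rng.normal(0, 5, 68)
tp = 0.01 * urban_rip + 0.002 * catch[:, 0] + rng.normal(0, 0.05, 68)
print(round(partial_corr(tp, urban_rip, catch), 2))
```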
Table 1 Multiple regression models showing correlations between water chemistry parameters and riparian land cover in both the whole watersheds and the first-order streams of watersheds

Watershed scale:
  TN: Grassland −0.440, Urban 0.260; intercept 1.932; R2 = 0.355
  NO3-N: Crop 0.623, Urban 0.490; intercept −0.500; R2 = 0.525
  NH4-N: Forest −0.466, Grassland −0.662; intercept 0.203; R2 = 0.327
  TP: Crop 0.264, Urban 0.712; intercept 0.095; R2 = 0.507
  AT: coefficient 0.428; intercept 0.558; R2 = 0.171
  FC: coefficient 0.378; intercept 1621.570; R2 = 0.199
  DO (max): coefficient 0.508; intercept 12.085; R2 = 0.247
First-order scale:
  TN: Crop 0.388, Urban 0.576; intercept 0.551; R2 = 0.406
  NO3-N: Crop 0.650, Urban 0.538; intercept −0.033; R2 = 0.606
  NH4-N: Forest −0.445, Grassland −0.683; intercept 0.195; R2 = 0.304
  TP: Crop 0.320, Urban 0.780; intercept 0.087; R2 = 0.634
  AT: coefficient 0.413; intercept 0.605; R2 = 0.158
  FC: coefficient 0.458; intercept 798.832; R2 = 0.198
  DO (max): coefficient 0.522; intercept 12.113; R2 = 0.261
Note. Significant regression coefficients are presented, illustrating the magnitude and direction of importance of land cover classes in models. TN analyses based on 57 watersheds; all other analyses based on 68 watersheds. Nutrient parameters and dissolved oxygen expressed as milligrams per liter, atrazine (AT) expressed as micrograms per liter, microbiological parameters expressed as colony forming units/100 ml, and land cover classifications expressed as percentages. All values reported were significant at p < 0.05.

Fig. 3 Variance in water chemistry variables (R2 values) accounted for by (A) land cover in the riparian ecotone (33 m) at multiple scales and (B) catchment land cover at two scales, using multiple linear regression analyses. TN analyses based on 57 watersheds; all other analyses based on 68 watersheds. Bars for R2 values were not plotted when there was not a significant relationship (p > 0.05)

Riparian land cover 2 and 4 km upstream explained no significant variance in TP concentrations, and riparian land cover 2 km upstream of the sampling point explained no significant variance in AT concentrations. Total suspended solids and minimum DO concentrations did not have significant relationships with riparian cover in any analyses and are not discussed further in this section. Catchment land cover showed relationships to water chemistry parameters similar to those of riparian land cover (Fig. 3). In all comparisons between catchment and riparian land cover, the magnitude of differences was small. Partial correlations indicated that riparian land cover classifications were still significantly correlated with some water chemistry parameters after controlling for variance explained by catchment land cover classifications that were significant predictors in regression models (Table 2). Removal of the effect of catchment land use by using partial correlations can actually remove riparian effects from the overall correlation, so these data should not be interpreted to suggest that riparian cover explains only a small portion of the variance in water quality.

Table 2 Partial correlations among nutrient concentrations and riparian land cover classifications

Watershed scale:
  TN (catchment predictors: grass, forest; riparian predictors: grass, urban): Grass r = −0.06, p = 0.687; Urban r = 0.20, p = 0.134
  NO3− (catchment: crop, urban; riparian: crop, urban): Crop r = 0.22, p = 0.071; Urban r = 0.48, p = 0.000
  NH4+ (catchment: grass, forest; riparian: grass, forest): Grass r = −0.03, p = 0.803; Forest r = −0.11, p = 0.370
  TP (catchment: crop, urban, forest; riparian: crop, urban): Crop r = 0.04, p = 0.779; Urban r = 0.58, p = 0.000
First-order scale:
  TN (catchment: crop, grass; riparian: crop, urban): Crop r = 0.25, p = 0.068; Urban r = 0.33, p = 0.013
  NO3− (catchment: crop, urban; riparian: crop, urban): Crop r = 0.26, p = 0.033; Urban r = 0.50, p = 0.000
  NH4+ (catchment: grass, forest; riparian: grass, forest): Grass r = −0.00, p = 0.994; Forest r = −0.08, p = 0.543
  TP (catchment: crop, urban; riparian: crop, urban): Crop r = 0.04, p = 0.724; Urban r = 0.68, p = 0.000
Note.
Correlations controlled for catchment land cover classifications that were significant predictors in regression models and were used to partition additional variance explained by riparian land cover from variance explained by catchment land cover. Partial correlations (r) for which riparian cropland (crop), forest, grassland (grass), and urban land (urban) explained >30% of the variation in water chemistry parameters among sites (see Table 1 and Fig. 3) are presented.

Temporal Variation

Examination of regional discharge patterns revealed that 25% of annual water volume was discharged from January to April, 50% by June, 75% by August, and the remainder in the August–December time period. Thus, the periods of January–April, May, June–July, and August–December were designated as seasons in temporal analyses. Seasons in which a quarter of annual water volume was discharged in 1 or 2 months (i.e., May, June–July) represented periods of high flow and high connectivity across the landscape, while seasons encompassing more than 2 months (January–April, August–December) represented predominantly base flow conditions (with most of the upper reaches of the first-order streams dry). Most water chemistry parameters exhibited temporal changes in the degree to which they were statistically related to riparian land cover. Total P and NH4+ were significantly correlated with riparian land cover in all seasons except May (Fig. 4); in particular, riparian land cover at both the watershed and the first-order streams scales explained more variance in TP concentrations in January–April than in other seasons. Conversely, AT and FC concentrations were best explained during the high flow period of May, and did not have significant relationships with riparian land cover during some base flow seasons. Nitrate exhibited comparatively less temporal variation; riparian land cover at the watershed scale explained a minimum of 30%, and at the first-order streams scale a minimum of 45%, of among-site variance in NO3− concentrations across seasons.

Fig. 4 Temporal variation (R2 values) in relationships between water chemistry parameters and (A) total riparian land cover in watersheds and (B) riparian land cover adjacent to the first-order streams of watersheds. Seasons were designated from quartiles of annual discharge occurring across the study region. Total nitrogen and DO were not analyzed for temporal differences (see Methods). Bars for R2 values were not plotted when there was not a significant relationship (p > 0.05)

A particularly interesting aspect of these data is that even when first-order streams are not very likely to flow (August–December), the riparian land cover around them yielded somewhat greater R2 values than did the whole-watershed riparian cover for TP and NO3−.

Impact of Stream Size

The analyses to this point used sampling sites on streams of different sizes. To control for this, a subset of sites was chosen at which data were collected only on fourth-order streams. Total N and NO3− were most closely correlated with first-order riparian land cover (Fig. 5). In general, the most variance was explained by riparian land cover adjacent to first-order streams, and less variance was explained by riparian cover near larger-order streams closer to sampling sites. Atrazine and maximum DO concentrations were not significantly correlated with riparian land cover near streams of any size in this subset of watersheds.
Fig. 5 Variance in water chemistry variables (R2 values) explained by riparian land cover adjacent to different sized streams within watersheds. Analyses were performed with a subset of 39 fourth-order watersheds; TN analyses were performed with 38 fourth-order watersheds. Comparisons between riparian land cover and AT and maximum DO concentrations were not significant and are not presented

Ecoregion Effects

ANOVA indicated some variation in TN, NO3−, NH4+, TP, FC, and maximum DO concentrations among ecoregions. Comparison of least-squares means showed TN and NO3− concentrations were significantly different (p < 0.05) among all ecoregions except the CGP and CIP (Fig. 6). The Western Corn Belt Plains was the only ecoregion that exhibited significantly different NH4+, FC, and maximum DO concentrations, which were all higher than mean concentrations in other ecoregions.

Fig. 6 Mean values for selected water chemistry parameters and riparian cropland, grouped by ecoregion (WCBP, Western Corn Belt Plains; CGP, Central Great Plains; FH, Flint Hills; CIP, Central Irregular Plains). TN data for WCBP were available for only 3 of 12 study watersheds. Significant differences are labeled with different letters; error bars represent 1 SE

The Flint Hills was the only ecoregion that exhibited significantly different TP concentrations, which were lower than those of other ecoregions. Atrazine concentrations did not differ significantly among ecoregions. The percentage of riparian land in agricultural production also varied by ecoregion and closely mirrored nutrient concentrations. Least-squares means comparing slopes of regression lines between water chemistry parameters and significant predictor land cover classifications among the four ecoregions showed that slopes were generally similar across all water chemistry parameters (e.g., Fig. 7), and the differences that did exist most often occurred when comparing the Flint Hills to other ecoregions (Table 3).

Fig. 7 Example of typically observed relationships between riparian land cover and water chemistry parameters among the four ecoregions analyzed (WCBP, Western Corn Belt Plains; CGP, Central Great Plains; FH, Flint Hills; CIP, Central Irregular Plains). The percentage of riparian cropland in the watersheds is plotted versus in-stream NO3− concentrations. Slopes of regression lines fitted through each of the four ecoregions were not significantly different

Table 3 Comparisons of least-squares means using general linear model analyses to assess differences in slopes of riparian-water chemistry relationships at four spatial scales, across level III U.S. EPA ecoregions

Spatial scale | Response variable | Ecoregions with different slopes | p-value
Watershed | TP | FH & CGP | 0.010
First order | TN | FH & CGP | 0.019
First order | TP | FH & CGP | 0.007
First order | TP | CIP & CGP | 0.007
First order | TP | CIP & WCBP | 0.041
2 km upstream | TN | FH & CGP | 0.022
2 km upstream | NO3− | FH & CIP | 0.014
2 km upstream | NO3− | FH & WCBP | 0.002
4 km upstream | NO3− | FH & WCBP | 0.001
4 km upstream | FC | FH & WCBP | 0.023
Note. Significantly different slope comparisons between Central Great Plains (CGP), Central Irregular Plains (CIP), Flint Hills (FH), and Western Corn Belt Plains (WCBP) ecoregions are listed. All other comparisons were not significantly different at p < 0.05.

Discussion

Land Cover-Water Chemistry Relationships

Riparian and whole watershed land cover was significantly correlated with water quality metrics, particularly nutrient concentrations.
Land cover explained greater variance at landscape scales (watershed and first-order streams) than riparian cover at local scales (2 and 4 km upstream of sampling), which is consistent with the idea that nutrient loading and retention occur at larger spatial scales (Allan and others 1997). Given that NO3− uptake lengths are often less than 2 km in this region (O’Brien and others 2007), it is possible that local riparian cover would influence NO3− concentrations, but any such effect was small. Differences in correlations between nitrogen species (NO3− and NH4+) may have occurred because NO3− inputs from the watershed are often greater than NH4+ inputs (Peterson and others 2001) and because NH4+ is a preferred nitrogen source for aquatic organisms that can use inorganic N and cycles more quickly than NO3− (Dodds and others 2000). Seasonal differences in relationships between riparian land cover and both NH4+ and TP may be attributable to their strong relationship to particulate dynamics (Johnson and others 1997). Phosphate and NH4+ both adsorb readily to sediments and are primarily transported into streams via surface runoff (Novotny and Olem 1994).

We wanted to remove the potential problem that the proportion of stream length in first-order channels would vary with the order of the stream at the sampling site. However, if we had used only our fourth-order sites, we would have had about half the total number of sites and our statistical power would have decreased. Thus we analyzed the subset of fourth-order stream sites (Fig. 5) to be certain that our results were not an artifact of sampling sites occurring on streams of different orders. Since our results were similar with this subset, all other analyses used the full dataset.

Variation in nutrient concentrations during high flow may have resulted from “pulses” of sediment-bound nutrients entering from the landscape that were not effectively captured by our method of analyzing mean seasonal concentrations. This could explain the lack of correlation between riparian land cover and NH4+ and TP in May compared to other seasons. Conversely, the primary mode of NO3− transport to surface water is generally via subsurface flow (Hill 1996), and this consistent connectivity to the landscape may explain the comparatively low temporal variability seen in riparian-NO3− relationships. Discrepancies in numbers of sampling dates and sites made it difficult to directly compare riparian-TN relationships with those of other parameters. However, we felt it was important to include TN in these analyses because of its importance in establishing nutrient criteria (Dodds and Welch 2000) and because other available parameters, such as dissolved inorganic nitrogen, can be unsuitable substitutes (Dodds 2003). A disproportionate number of the watersheds for which TN data were not available were primarily agricultural and contained some of the highest observed concentrations of both NO3− and TP; the absence of these sites in TN analyses may explain why TN was not as strongly correlated with riparian land cover as NO3− or TP. Nonnutrient water chemistry parameters had weaker correlations with riparian land cover. Although AT, FC, and maximum DO concentrations were significantly correlated with riparian land cover, the relationships were weak across all spatial and temporal scales and preclude conjecture about the mechanisms underlying the correlations.
The lack of correlation between TSS and riparian land cover contrasts with results of previous studies (Johnson and others 1997; Sliva and Williams 2001) and, as with the relationships observed for sediment-bound nutrients, may be a function of averaging TSS concentrations into one measurement. Although permitted livestock operations and other point sources were not substantial in any watershed, point sources falling below Kansas’ permitting regulations (e.g., confined livestock operations under 300 animals) were likely present in some watersheds and may have accounted for unexplained variance in the observed relationships. Our results were consistent with previous studies (Johnson and others 1997; Jones and others 2001; Osborne and Wiley 1988; Sliva and Williams 2001) suggesting that agricultural and/or urban lands were the most important predictors of water quality variability. Maintaining buffers or other passive land uses along headwater streams may effectively reduce diffuse pollution downstream. The importance of these streams and their riparian zones is due in part to their sheer numbers; small streams often comprise the majority of stream miles within a drainage network (Horton 1945; Leopold and others 1964), and in this study the smallest (first-order) streams on average comprised more than 60% of the stream miles in the study watersheds. Riparian land cover near the first-order streams of watersheds explained greater variance in TN, NO3−, and TP concentrations than did riparian land cover immediately upstream from sampling sites. First-order riparian land cover was statistically related to most water quality measures, even when all potential correlation with whole-watershed land cover was controlled for. Our results suggest that headwater riparian areas could have an important impact on downstream water quality.

Our study was correlative in nature and does not unequivocally confirm causation; however, a correlative approach is required at the spatial scales of our study. Previous work suggests several possible causes for our observed associations. First, lower-order streams have the greatest potential for interactions between water and the adjacent landscape (Lowrance and others 1997). Second, the large benthic surface area-to-volume ratio of small streams favors rapid in-stream uptake, processing, and retention of nitrogen (Alexander and others 2000; Dodds and others 2000; Peterson and others 2001), a capacity that declines in larger streams in proportion to depth (Alexander and others 2000) or discharge (Wollheim and others 2001). Because high nitrogen inputs may overwhelm this ability (O’Brien and others 2007; Wollheim and others 2001), riparian zones adjacent to small streams may be particularly important in regulating nutrient inputs and allowing natural in-stream processes to significantly impact nutrient concentrations. Several studies have addressed the relative importance of riparian versus whole-catchment land use in regulating water quality. Reports in the literature have been mixed: some researchers (Hunsaker and Levine 1995; Sliva and Williams 2001) found that catchment land cover was better correlated with water quality, while others (Osborne and Wiley 1988; Johnson and others 1997) reported that land cover in the riparian ecotone was more influential. In our study, although partial correlations indicated riparian land cover classifications were significantly related to TN, NO3−, and TP even after accounting for catchment effects, this did not hold true for all water chemistry parameters.
Overall, it is difficult to separate the effects of land cover in the riparian ecotone from those of land cover in the catchment because the two are highly correlated, and in many altered landscapes riparian land cover may simply reflect the dominant catchment land cover types. Significant partial correlations between riparian land cover and TN, NO3−, and TP concentrations correspond with previous work (e.g., Karr and Schlosser 1978; Lowrance and others 1997) identifying riparian zones as key regulators of nutrient inputs to surface waters. These results, in addition to strong relationships among water quality metrics and riparian land use that have been previously reported at both field (e.g., Karr and Schlosser 1978; Peterjohn and Correll 1984) and landscape (e.g., Johnson and others 1997; Osborne and Wiley 1988) scales, suggest that intact riparian zones could influence landscape impacts on surface water quality.

Ecoregion Effects

The finding that slopes of the relationships were not significantly different in most ecoregion comparisons may be attributable to several factors. Highly variable relationships could limit the statistical power to detect differences. It is possible that the study regions were not sufficiently distinct to allow detection of differences in riparian interactions, although this is unlikely given their previous classification as both separate ecoregions (U.S. EPA 1998a) and nutrient regions (U.S. EPA 1998b). Because U.S. EPA ecoregion designations encompass human impacts such as land use in addition to natural geological, climatic, and soil characteristics (Omernik 1995), observed intraregion differences in riparian land cover classification and nutrient concentrations were expected. However, since land use is often a dominant factor regulating surface water quality (Hunsaker and Levine 1995; Johnson and others 1997; Osborne and Wiley 1988), riparian-water chemistry relationships would be expected to remain relatively constant across ecoregions if designations were partially dependent on land use, as was the case in this study.

Conclusions

The data suggest that riparian cover near sampling sites is generally less well correlated with water quality parameters than riparian cover or land use along first-order streams. Because watershed cover and riparian land use were correlated, it is difficult to determine how strongly first-order riparian cover in itself is related to water quality. Our results suggest a statistically significant effect of the riparian cover of first-order streams on water quality, because riparian land cover classifications remained significant predictors even when the catchment land cover classifications that were significant predictors in regression models were controlled for. We take the conservative approach in our interpretation, but it is possible that riparian cover has much stronger effects than whole-watershed land cover and that most of the correlation is driven by riparian effects. The effect of first-order land cover may not be too surprising, as first-order streams make up the majority of stream length in watersheds. Our approach shows that a correlation with land uses along small headwater streams does hold, and holds even in seasons when many of the first-order stream channels are not flowing.
[ "water quality", "riparian zones", "nonpoint source pollution", "geographic information systems", "headwater streams", "watershed management" ]
[ "P", "P", "P", "P", "P", "R" ]
Arch_Dermatol_Res-3-1-1839867
The relevance of the IgG subclass of autoantibodies for blister induction in autoimmune bullous skin diseases
Autoimmune bullous skin diseases are characterized by autoantibodies and T cells specific to structural proteins maintaining cell–cell and cell–matrix adhesion in the skin. Existing clinical and experimental evidence generally supports a pathogenic role of autoantibodies for blister formation. These autoantibodies belong to several IgG subclasses, which associate with different functional properties and may thus determine the pathogenic potential of IgG antibodies. In pemphigus diseases, binding of IgG to keratinocytes is sufficient to cause intraepidermal blisters without engaging innate immune effectors and IgG4 autoantibodies seem to mainly mediate acantholysis. In contrast, in most subepidermal autoimmune blistering diseases, complement activation and recruitment and activation of leukocytes by autoantibodies are required for blister induction. In these conditions, tissue damage is thought to be mainly mediated by IgG1, but not IgG4 autoantibodies. This review summarizes the current knowledge on the pathogenic relevance of the IgG subclass of autoantibodies for blister formation. Characterization of the pathogenically relevant subclass(es) of autoantibodies not only provides mechanistic insights, but should greatly facilitate the development of improved therapeutic modalities of autoimmune blistering diseases.

Introduction

Autoimmune blistering diseases are associated with an autoimmune response directed to structural proteins mediating cell–cell and cell–matrix adhesion in the skin [62, 66]. Both autoantibodies and autoreactive T cells have been found in patients with these organ-specific autoimmune diseases. However, blister induction is mainly mediated by autoantibodies. Autoimmune blistering diseases are classified based on the ultrastructural site of deposition of immunoreactants and on the molecular target of autoantibodies. Diseases of the pemphigus group are associated with autoantibodies to epidermal components mediating cell–cell adhesion and are characterized by acantholytic blisters within the epidermis [39, 71]. Tissue-bound and circulating autoantibodies to the dermal–epidermal junction are characteristic immunopathological features of subepidermal autoimmune bullous diseases [62, 85]. Target antigens of autoantibodies have been identified for the majority of autoimmune blistering diseases. In most of these diseases, the pathogenicity of autoantibodies is supported by clinical observations and extensive experimental evidence [62]. Antibodies are effector molecules of the adaptive immune system secreted by plasmablasts and long-lived plasma cells. Antibody responses are physiologically mounted following an infection or vaccination and protect against various pathogens. Occasionally, in the setting of an autoimmune disease, antibodies to autologous structures may develop and cause different forms of tissue damage. The immunopathology induced by autoantibodies, similar to the immunity mediated by antibodies to pathogens, relies on several mechanisms of action of antibodies, including direct mechanisms, which are mediated by the antibody’s variable regions (e.g., by steric hindrance and signal transduction), and indirect mechanisms, which are triggered by the constant regions of antibodies. For the latter, (auto)antibodies typically interact through their Fc portions with other factors of the innate immune system, including the complement system and inflammatory cells [62].
Antibodies of the IgG isotype predominate in the systemic immune response, as reflected in serum immunoglobulin concentration, and activate a wide range of effector functions. Four subclasses of IgG are defined, originally on the basis of the antigenic uniqueness of their heavy chains, which are products of distinct genes [20, 27, 77]. The subclasses are designated as IgG1, IgG2, IgG3 and IgG4 in order of their serum concentrations (∼60, 25, 10 and 5%, respectively). Although the heavy chains show >95% sequence homology, each IgG subclass expresses a unique profile of effector activities [35, 56, 59, 76, 80, 82]. Protein antigens characteristically provoke IgG1 and IgG3 responses, and these isotypes are able to activate all types of Fc receptors and the C1 component of complement. The IgG4 subclass may be characteristic of chronic antigen stimulation, as in autoimmune disease; it has restricted Fc receptor activating abilities and does not activate C1q. The IgG2 subclass often predominates in responses to carbohydrate antigens; it has restricted Fc receptor and C1 activating abilities [35, 56, 80, 82]. The pathogenic potential of autoantibodies is determined not only by their specificity and affinity, but also by their isotype. Autoantibodies against cutaneous proteins in autoimmune blistering diseases belong to different IgG subclasses. This paper summarizes the current knowledge on the relevance of IgG subclasses for tissue injury in autoimmune bullous diseases.

Pemphigus diseases

Pemphigus designates a group of life-threatening autoimmune blistering diseases characterized by intraepithelial blister formation caused by loss of cell–cell adhesion [39, 54, 71]. IgG autoantibodies in patients with pemphigus seem to mediate their pathogenic functions independently of their Fc portions [62]. Patients’ IgG autoantibodies are pathogenic in C5-deficient mice, and F(ab’)2, Fab, and scFv fragments of autoantibodies induce acantholysis by passive transfer in wild-type mice, showing that complement activation or other Fc-mediated effects are not required for pathogenicity [4, 21, 45, 55]. Numerous studies clearly demonstrated that tissue-bound and circulating autoantibodies in pemphigus patients mainly belong to the IgG1 and IgG4 subclasses [2, 5, 8, 9, 16–18, 23, 28, 36, 38, 40, 49, 61, 79]. The IgG subclass distribution of autoantibodies in a representative pemphigus patient is shown in Fig. 1. While the subclass distribution of IgG autoantibodies is generally agreed upon, the relevance of autoantibodies of different IgG isotypes for acantholytic blistering in pemphigus is still a matter of debate. IgG4 autoantibodies, known to have poor complement- and leucocyte-activating properties, predominate in pemphigus vulgaris and foliaceus. While several studies suggest a pathogenic role of IgG4 in pemphigus, the capacity of IgG1 autoantibodies to induce acantholysis has not yet been ruled out. In patients with active pemphigus vulgaris, IgG4 autoantibodies against desmogleins were found to predominate [5, 17, 23, 40]. The transplacental transfer of these autoantibodies in mothers with pemphigus induces acantholytic skin disease in neonates [52]. In endemic pemphigus foliaceus, the early antibody response in normal subjects living in the endemic area and in patients before the onset of clinical disease is mainly IgG1. Acquisition of an IgG4 response seems to be a key step in the development of clinical disease [78].
Clinical and experimental data suggest that IgG1 autoantibodies differ from IgG4 autoantibodies in terms of both their epitope specificity and pathogenic potential [9, 42]. A monoclonal IgG4 antibody against desmoglein 3 generated from a patient with active pemphigus vulgaris induces acantholysis in cultured skin and when injected into neonatal mice [86]. The fact that IgG4 antibodies purified from patients with fogo selvagem, an endemic form of pemphigus foliaceus, are pathogenic in mice further supports the notion that IgG4 is pathogenic and complement activation is not required for blister formation [57], but does not exclude a pathogenic potential of IgG autoantibodies belonging to other subclasses. Indeed, in several pemphigus foliaceus patients, only IgG1 autoantibodies were found, and these caused intraepidermal blisters by passive transfer into neonatal mice [29]. This observation clearly shows that autoantibodies of different subclasses may display blister-inducing activity. In addition, in paraneoplastic pemphigus, autoantibodies against desmoglein 3 mainly belong to the IgG1 and IgG2 subclasses, suggesting that the IgG subclass per se is not a direct determinant of the antibodies’ pathogenic potential in pemphigus [24].

Fig. 1 IgG subclass distribution of circulating pemphigus autoantibodies. A 1:10 dilution of serum from a patient with pemphigus foliaceus was incubated with 6 μm-thick cryostat sections of normal human skin for 30 min at room temperature. Bound antibodies, visualized using an FITC-labeled antibody specific to human IgG, were of the (a) IgG1 and (d) IgG4 subclasses. In contrast, no binding of (b) IgG2 and (c) IgG3 autoantibodies was evident

Experimental evidence using a murine model of pemphigus vulgaris shows that, similar to human pemphigus, the autoimmune response in mice is biased toward non-complement-fixing autoantibodies. In this model, in immunodeficient mice infused with splenocytes from desmoglein-deficient mice immunized against this antigen, IgG autoantibodies are produced by homeostatically expanded antigen-specific B cells under T cell control, and the mice develop a phenotype reminiscent of pemphigus vulgaris. These IgG autoantibodies predominantly belong to the IgG1 subclass, which is a non-complement-fixing antibody in the mouse [3, 50]. These results demonstrate that non-complement-fixing autoantibodies can induce acantholysis and suggest a similar mechanism in patients, but they do not exclude a pathogenic potential of patients’ IgG1 autoantibodies in pemphigus. In conclusion, in pemphigus diseases the autoantibodies mainly belong to the IgG4 and IgG1 subclasses. Extensive experimental evidence demonstrates the blister-inducing potential of IgG4 autoantibodies. The pathogenic activity of autoantibodies of other subclasses seems likely, but needs further investigation.

Subepidermal autoimmune blistering diseases

Bullous pemphigoid and pemphigoid gestationis

Bullous pemphigoid is an autoimmune blistering disease characterized by subepidermal blisters and associated with linear deposits of C3 and IgG at the epidermal basement membrane zone. Autoantibodies in bullous pemphigoid are directed against two hemidesmosomal antigens, BP230 and BP180/type XVII collagen [87]. Pemphigoid gestationis, also referred to as herpes gestationis, is a subepidermal blistering disease associated with pregnancy and characterized by linear deposition of C3 and, to a lesser extent, of IgG at the dermal–epidermal junction, as detected by immunofluorescence microscopy [62, 66].
The autoimmune response in bullous pemphigoid and pemphigoid gestationis is mainly directed against epitopes clustered within the immunodominant 16th non-collagenous A (NC16A) region of type XVII collagen [48, 65, 88]. Experimental evidence generally supports the pathogenic role of autoantibodies against type XVII collagen for blister formation. Data from passive transfer animal models strongly suggest that antibodies to type XVII collagen are directly involved in the pathogenesis of bullous pemphigoid [44, 84]. In addition, in an ex vivo model utilizing cryosections of human skin, it has been demonstrated that binding of autoantibodies to the immunodominant NC16A domain of type XVII collagen is the first critical step in subepidermal blister formation [31, 64]. Analysis of the subclass distribution of IgG autoantibodies in the skin of patients with bullous pemphigoid by immunofluorescence microscopy revealed IgG4 as the predominant subclass of autoantibodies in bullous pemphigoid, followed by IgG1 autoantibodies, while IgG2 and IgG3 autoantibodies were found only occasionally [1, 10–12, 22, 61, 83]. In addition, serum autoantibodies binding to the dermal–epidermal junction by immunofluorescence microscopy also mainly belong to the IgG4 and IgG1 subclasses [10, 11, 61, 83]. Subsequent molecular analysis of IgG autoantibodies by immunoblotting and ELISA generally confirmed the predominance of IgG1 and IgG4 autoantibodies reactive with type XVII collagen and BP230 (Fig. 2) [6, 19, 32, 41, 69].

Fig. 2 IgG1 and IgG4 autoantibodies mainly target type XVII collagen in bullous pemphigoid. Immunoblot analysis of serum from a bullous pemphigoid patient with recombinant type XVII collagen revealed that reactivity against its immunodominant domain consists mainly of IgG1 (lane 1) and IgG4 (lane 4), and, to a lesser extent, IgG2 (lane 2) and IgG3 (lane 3) autoantibodies

In contrast to bullous pemphigoid, in pemphigoid gestationis tissue-bound and circulating autoantibodies seem to mainly belong to the IgG1 and IgG3 subclasses [14, 37]. However, a recent study challenged these reports, revealing IgG4 as the predominant IgG subclass of tissue-bound autoantibodies in pemphigoid gestationis patients [53], a pattern similar to the one found in bullous pemphigoid. Further studies should resolve this contradiction. Data from several studies in patients suggested a pathogenic role of IgG1 autoantibodies for blister formation (briefly reviewed in [43]). In a recent study, ELISA analysis showed that autoantibodies against the N-terminus of the extracellular domain of type XVII collagen predominantly belong to the IgG1 subclass. More importantly, an NC16A-specific IgG1 response was predominant in the acute phase of bullous pemphigoid, while IgG4 was predominantly detected in bullous pemphigoid patients in remission [32]. Using immunoaffinity purified IgG subclasses, it has been shown that IgG1, but not IgG4, autoantibodies from bullous pemphigoid patients activate the complement system in vitro (Fig. 3) [46, 72]. This observation is in line with the currently accepted view that IgG4 is unable to activate the classical pathway of complement. However, until recently it was unclear which IgG subclass is actually pathogenic in bullous pemphigoid. Using the ex vivo cryosection model, we demonstrated that, in addition to IgG1, IgG4 autoantibodies are also able to activate leukocytes and to induce leukocyte-dependent tissue damage (Fig. 4) [46].
Our results are in line with recent studies demonstrating that both polyclonal human IgG1 and IgG4 from patients with Wegener’s granulomatosis and chronic urticaria can activate leukocytes [33, 70]. Although the pathogenic potential of IgG4 autoantibodies was significantly lower than that of IgG1, IgG4 autoantibodies, which generally predominate, may activate the inflammatory cells already recruited into the upper dermis by complement-fixing IgG1 autoantibodies and thus amplify the recruitment of additional leukocytes and the extent of blister formation. Therefore, when associated with IgG1 and/or IgG3 autoantibodies, IgG4 may significantly contribute to the pathology induced by autoantibodies in antibody-induced granulocyte-mediated autoimmune blistering diseases [46].

Fig. 3 IgG4 autoantibodies, in contrast to IgG1, do not fix complement to the dermal–epidermal junction in bullous pemphigoid. Cryosections of normal human skin were incubated with serum and immunoaffinity purified IgG1 and IgG4 antibody preparations from a bullous pemphigoid patient and, subsequently, treated with normal human serum as a source of complement. Both (a) serum and (b) purified IgG1 autoantibodies fixed complement C3 at the dermal–epidermal junction in a linear fashion. (c) In contrast, incubation of cryosections with IgG4 specific for the dermal–epidermal junction does not result in C3 deposition (all magnifications, ×200)

Fig. 4 IgG4 autoantibodies from bullous pemphigoid patients induce dermal–epidermal separation in sections of human skin. Dermal–epidermal separation in sections of normal human skin is induced by (a) IgG1 and (b) IgG4 autoantibodies from a bullous pemphigoid patient. (c) IgG antibodies from a healthy control (NHS) do not induce subepidermal splits (all magnifications, ×200)

Several reports suggested that binding of bullous pemphigoid antibodies to keratinocytes triggers signal transduction [58, 73–75]. Bullous pemphigoid autoantibodies trigger a signal-transducing event that leads to expression and secretion of interleukin-6 and interleukin-8 from human cultured keratinocytes [58]. A series of studies from another group demonstrated that IgG1 autoantibodies from bullous pemphigoid patients and rabbit IgG against type XVII collagen induce Ca2+ release from intracellular storage sites [73–75]. Interestingly, complement activation by these IgG1 autoantibodies did not result in lysis of keratinocytes [73]. While the relevance of these findings is not yet fully understood, patients’ IgG2 and IgG4 autoantibodies were found to inhibit the transient increase of intracellular Ca2+ induced by bullous pemphigoid IgG1 antibody [74].

Mucous membrane pemphigoid

Mucous membrane pemphigoid is a heterogeneous disease with regard to the clinical phenotype and the target antigens. Different target antigens have been identified in mucous membrane pemphigoid, including BP180, laminins 5 (epiligrin) and 6, and β4 integrin [62, 66]. In general, autoantibodies in mucous membrane pemphigoid mainly belong to the IgG4 and IgG1 subclasses [7, 34]. Interestingly, in anti-epiligrin cicatricial pemphigoid, autoantibodies against laminin 5 almost exclusively belong to the IgG4 subclass [34]. Consistent with these findings, sera from patients with anti-laminin 5 IgG autoantibodies do not fix C3 to the epidermal basement membranes and do not induce leukocyte-dependent dermal–epidermal separation in vitro [34, 60].
These data suggest that complement activation does not play a major role in this disease and that subepidermal blisters in these patients may develop via a direct effect of anti-laminin 5 IgG itself [34, 60].

Diseases associated with autoimmunity against type VII collagen

Epidermolysis bullosa acquisita is a chronic subepidermal blistering disease characterized by circulating and tissue-bound antibodies targeting the non-collagenous domain 1 (NC1) of type VII collagen. The pathogenic relevance of antibodies against type VII collagen is supported by compelling evidence: (1) autoantibodies from patients with epidermolysis bullosa acquisita were shown to recruit and activate leukocytes ex vivo, resulting in dermal–epidermal separation in cryosections of human skin [60, 63]; (2) antibodies against type VII collagen induce subepidermal blisters when passively transferred into mice [67, 81]; and (3) immunization with recombinant autologous type VII collagen induces an autoimmune response to this protein, resulting in a blistering phenotype closely resembling human epidermolysis bullosa acquisita [68]. Tissue-bound and circulating antibodies in epidermolysis bullosa acquisita patients mainly belong to the IgG1 and IgG4 subclasses [7, 15, 26, 47]. A similar distribution of IgG subclasses of autoantibodies is also found in SJL mice immunized against murine type VII collagen [68]. In these mice, while both non-complement-fixing IgG1 and complement-fixing IgG2a and IgG2b autoantibodies are produced after immunization, IgG2a/b autoantibodies seem to induce blistering [68]. Systemic lupus erythematosus and inflammatory bowel diseases may also be associated with autoantibodies against type VII collagen [13, 25, 30, 51]. However, in contrast to epidermolysis bullosa acquisita, autoantibodies from patients with bullous systemic lupus erythematosus and inflammatory bowel diseases mainly belong to IgG2 and IgG3, respectively [30, 51]. The pathogenic relevance of autoantibodies against type VII collagen in inflammatory bowel diseases has not yet been addressed [51]. IgG autoantibodies from patients with bullous systemic lupus erythematosus were shown to induce leukocyte-dependent dermal–epidermal separation in cryosections of human skin ex vivo [30]. These findings suggest that the presence of complement-fixing autoantibodies is not a strict requirement for blistering in patients.

Conclusion and perspectives

The polyclonal antibody response against structural skin proteins in autoimmune bullous diseases is heterogeneous, but shows a skewing in subclass distribution of autoantibodies. The strong bias toward production of IgG4 autoantibodies in these organ-specific autoimmune diseases suggests chronic antigenic stimulation. In pemphigus, the IgG4 autoantibodies that dominate the autoimmune response are clearly pathogenic. However, IgG1 autoantibodies also likely possess blister-inducing potential, which requires further investigation. In subepidermal autoimmune blistering diseases, the effector functions of autoantibodies are important for blistering. Thus, in bullous pemphigoid and epidermolysis bullosa acquisita, complement-fixing IgG1 autoantibodies may show a significantly higher pathogenic potential when compared with IgG4 autoantibodies.
Characterization of the blister-inducing capacity of different subclasses of autoantibodies in autoimmune bullous diseases will not only provide relevant mechanistic insights, but should also greatly facilitate the development of improved therapeutic modalities for autoimmune blistering diseases. Detailed knowledge of the pathogenic IgG isotype(s) will serve as a basis for the development of IgG subclass-specific immunoapheresis, the skewing of autoantibody production toward non-pathogenic subclasses by immunotherapy, or the blocking of complement or leukocyte activation by targeting specific IgG subclasses. A promising approach is represented by interventions aimed at inhibiting the production of autoantibodies in general or at skewing the production of autoantibodies toward non-pathogenic subclasses. The molecular targets of these approaches may include different cytokines (e.g., IL-12 and IL-17), and their activity could be modulated using inhibitory antibodies, small peptide inhibitors or peptidomimetics, as well as by immunization with the autoantigen together with adjuvants known to induce a Th2 immune response.
[ "igg subclasses", "complement", "autoimmune bullous diseases" ]
[ "P", "P", "P" ]
Diabetologia-3-1-1794135
Adiponectin receptor genes: mutation screening in syndromes of insulin resistance and association studies for type 2 diabetes and metabolic traits in UK populations
Aims/hypothesis Adiponectin is an adipokine with insulin-sensitising and anti-atherogenic properties. Several reports suggest that genetic variants in the adiponectin gene are associated with circulating levels of adiponectin, insulin sensitivity and type 2 diabetes risk. Recently, two receptors for adiponectin have been cloned. Genetic studies have yielded conflicting results on the role of these genes in type 2 diabetes predisposition. In this study we aimed to evaluate the potential role of genetic variation in these genes in syndromes of severe insulin resistance, in type 2 diabetes and in related metabolic traits in UK Europid populations.

Introduction

Adiponectin, encoded by the gene ADIPOQ (also known as 30-kDa adipocyte complement-related protein, Acrp30, APM-1, APM1, ACDC, and gelatin-binding protein-28 or GBP28), is an adipokine with insulin-sensitising [1, 2] and anti-atherogenic actions [3]. Its levels correlate strongly with insulin sensitivity in humans and animal models, and increasing levels of plasma adiponectin produce a sensitising effect on the biological action of insulin [4]. Several genetic reports have detected association between adiponectin gene variants and obesity, insulin resistance, type 2 diabetes, and adiponectin levels [5–9]. Recently, two adiponectin receptors were identified: adiponectin receptor 1 (ADIPOR1), cloned from a human skeletal muscle expression library, and adiponectin receptor 2 (ADIPOR2), identified using computational tools by Yamauchi et al. [10]. In mice, Adipor1 is expressed ubiquitously, with higher levels in skeletal muscle, and has a higher affinity for the globular form of adiponectin. Adipor2, on the other hand, is most abundant in the liver and preferentially binds the full-length form of adiponectin [10]. In contrast, both human ADIPOR1 (375aa) and ADIPOR2 (311aa) were predominantly expressed in skeletal muscle [10, 11]. In Mexican Americans, glucose-tolerant individuals with a family history of type 2 diabetes were reported to exhibit significantly lower levels of mRNA for ADIPOR1 and ADIPOR2 in skeletal muscle than subjects without a family history of diabetes. mRNA levels of both receptors were also reported to correlate positively with glucose disposal [11]. It is possible therefore that lower expression or altered function of the receptors would predispose to increased insulin resistance and type 2 diabetes. In fact, common variants in the ADIPOR1 gene were recently tested for association in a case–control study with white and African American individuals, but no association was reported [12]. Two additional studies have evaluated the role of adiponectin receptor variants in the risk of type 2 diabetes. In the Old Order Amish population, two intronic variants in ADIPOR1 were reported to associate with risk of type 2 diabetes, while in ADIPOR2 an extended haplotype block was associated with increased risk of disease [13]. In contrast, in a Japanese population no associations between polymorphisms in adiponectin receptor genes and risk of type 2 diabetes were detected [14]. More recently, studies in French and Finnish populations reported no association between ADIPOR1 single nucleotide polymorphisms (SNPs) and type 2 diabetes [15, 16], although evidence for association between rs767870 in ADIPOR2 and type 2 diabetes in a French population has been suggested [15].
In light of these studies, and the potential role of these receptors in insulin action and diabetes, we sought to identify and investigate the effects of genetic variants in these genes in UK populations.

Subjects and methods

Participants

Severe insulin resistance cohort

A cohort of human patients with severe insulin resistance (SIR cohort) was collected at the University of Cambridge, UK. The inclusion criteria for this cohort were: (1) fasting insulin >150 pmol/l or exogenous insulin requirement >200 U/day; (2) acanthosis nigricans; and (3) BMI <33 kg/m2. In the present study, 129 patients from this cohort were screened for mutations in exons and splice junctions of ADIPOR1 and ADIPOR2 genes. Cambridge Local Research Ethics Committee approval was obtained, and informed consent was received from all individuals before participation.

Cambridgeshire Case–Control Study

The Cambridgeshire Case–Control Study has been described previously [17]. Briefly, this population-based case–control study consists of 552 type 2 diabetes patients and matched control subjects. DNA was available from 516 cases and control subjects for this study. The cases were a random sample of Europid men and women with type 2 diabetes, aged 47 to 75 years, from a population-based diabetes register in a geographically defined region in Cambridgeshire, UK. The presence of type 2 diabetes in these participants was defined as onset of diabetes after the age of 30 years without use of insulin therapy in the first year after diagnosis. The control participants were individually matched to each of the diabetic subjects by age, sex and geographical location, but not by BMI. Potential control subjects with HbA1c levels greater than 6% were excluded, as this subgroup could have contained a higher proportion of individuals with previously undiagnosed diabetes. Ethical approval for the study was granted by the Cambridge Local Research Ethics Committee.

EPIC-Norfolk participants

This is a nested case–control study within the EPIC-Norfolk prospective cohort study; both the case–control and full cohort study [18, 19] have been described in detail previously. Briefly, the case–control study consists of 417 incident type 2 diabetes cases and two sets of 417 control subjects, matched on age, sex, time in study and family physician, with the second set additionally matched for BMI. A case was defined by a physician’s diagnosis of type 2 diabetes, with no insulin prescribed within the first year after diagnosis, and/or HbA1c >7% at baseline or the follow-up health check. Controls were selected from those in the cohort who had not reported diabetes, cancer, stroke or myocardial infarction at baseline, and who had not developed diabetes by the time of selection. Potential control subjects with measured HbA1c levels >6% were excluded. DNA was available for this analysis from 354 cases and 741 control subjects. Ethical approval for the study was granted by the Norwich Local Research Ethics Committee.

Ely Study

This is a population-based cohort study of the aetiology and pathogenesis of type 2 diabetes and related metabolic disorders in the UK [20]. It uses an ethnically homogeneous Europid population, in which phenotypic data were recorded at the outset and after 4.5 years. The cohort was recruited from a population sampling frame with a high response rate (74%), making it representative of the general population for this area in Eastern England.
This analysis included 1,721 men and women, aged 35–79 years and without diagnosed diabetes, who attended the study clinic for a health check between 2000 and 2004. Of these, 1,005 were attending a follow-up health check, while the remaining 716 were newly recruited in 2000 from the original population sampling frame. Participants attending the health check underwent standard anthropometric measurements and a 75-g oral glucose tolerance test. Plasma glucose was measured using the hexokinase method. Plasma insulin was measured by two-site immunometric assays with either 125I or alkaline phosphatase labels. Cross-reactivity with intact proinsulin was less than 0.2% and interassay CVs were less than 7%. Ethical approval for the study was granted by the Cambridge Local Research Ethics Committee.

PCR and sequencing

Genomic DNA from patients was randomly preamplified in a GenomiPhi reaction (GE Healthcare UK, Chalfont St. Giles, UK) prior to amplification with gene-specific primers. Primers were designed using Primer3 software [21] to cover all coding exons and splice junctions. PCR primers and expected product sizes are described in Electronic supplementary material (ESM) Table 1. Following PCR, performed using standard conditions, products were purified using exonuclease I and shrimp alkaline phosphatase (USB Corporation, Cleveland, OH, USA), and bi-directional sequencing was performed using a DNA sequencing kit (Big Dye Terminator 3.1; Applied Biosystems, Foster City, CA, USA). Sequencing reactions were run on ABI3700 capillary machines (Applied Biosystems) and sequences were analysed using Mutation Surveyor version 2.20 (SoftGenetics LLC, State College, PA, USA).

Genotyping

SNP selection

All SNPs with a minor allele frequency greater than 2% in our SIR cohort were selected for genotyping. To increase coverage in areas not re-sequenced, a number of dbSNPs were selected in an attempt to eliminate gaps between genotyped SNPs of greater than 2.5–3 kb on average (additional SNPs were selected prior to the HapMap phase I data release). SNP choice was based on the following criteria: (1) all putative non-synonymous SNPs in dbSNP were selected, irrespective of whether or not there was frequency or validation information for the SNP; (2) SNPs with frequency information were selected if their minor allele frequency was greater than or equal to 5%; (3) for SNPs with no frequency information, the choice was based on whether the SNP was a double-hit SNP or had been validated by-cluster or by-submitter; and finally (4) some SNPs with no validation information were included to try to eliminate gaps of greater than 2.5–3 kb between SNPs selected for genotyping.

Genotyping and quality control

Samples were arrayed on 96-well plates with three replicates and one water control per plate. For the case–control populations, case and control samples were randomly distributed across each 96-well plate, with approximately the same number of cases and controls per plate. Genotyping of samples was performed in 384-well plates at the Wellcome Trust Sanger Institute, Cambridge, using an adaptation of the homogeneous MassExtend protocol for the MassArray system (Sequenom, San Diego, CA, USA) [22]. Assay results for the case–control populations are described in ESM Table 2.
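The gap-elimination rule in the SNP selection above is essentially a greedy fill: wherever two already-selected SNPs sit more than about 2.5–3 kb apart, a candidate dbSNP between them is added. A minimal sketch of that rule follows; the positions are invented, and the real choices also weighed frequency and validation status as listed in criteria (1)–(4).

```python
GAP_LIMIT = 2750  # bp, midpoint of the 2.5-3 kb target described above

def fill_gaps(selected, candidates, gap_limit=GAP_LIMIT):
    """selected, candidates: SNP positions (bp). A single pass is shown
    for brevity; in practice the rule would be applied until no fillable
    gap larger than the limit remained."""
    chosen = sorted(selected)
    added = []
    for left, right in zip(chosen, chosen[1:]):
        if right - left > gap_limit:
            mid = (left + right) // 2
            inside = [p for p in candidates if left < p < right]
            if inside:  # pick the candidate nearest the gap midpoint
                added.append(min(inside, key=lambda p: abs(p - mid)))
    return sorted(chosen + added)

print(fill_gaps([1000, 9000, 10500], candidates=[3000, 5200, 9800]))
# -> [1000, 5200, 9000, 10500]: the 8 kb gap is broken up; the 1.5 kb one is not
```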
The following criteria were used to pass assays resulting from genotyping: (1) call rates had to be greater than or equal to 90% (in one case a call rate of 88% was accepted); (2) concordance rates between duplicate samples had to be greater than or equal to 98%; (3) minor allele frequency had to be greater than or equal to 1% in the genotyped populations; (4) agreement with Hardy–Weinberg equilibrium was tested separately in cases and controls using a χ2 goodness-of-fit test (sketched below), and if p < 0.01 in controls the assay was failed, while if p < 0.01 in cases the assay was flagged but included in the primary analysis. Assays that failed quality control were excluded from further analysis. In total we analysed results from nine SNPs in ADIPOR1 and 15 SNPs in ADIPOR2. Statistical analysis All analyses used SAS 8.02 (SAS Institute, Cary, NC, USA) or Stata 7.0 (Stata Corporation, College Station, TX, USA) statistical programs, unless otherwise stated. All genotypes used were in Hardy–Weinberg equilibrium. The pair-wise linkage disequilibrium (LD) coefficient (r2) was calculated for the genotyped SNPs in the controls and is represented in Fig. 1. For each SNP, two primary models were used to assess association with diabetes and quantitative traits: the linear trend (additive) model on 1 df and the general model on 2 df. Since the results from these analyses were not materially different, we present only the results from the linear trend test. Tests for association were performed by logistic regression, combined across the two case–control populations and adjusted for age, sex and population. Between-study heterogeneity was tested by log-likelihood ratio tests. Quantitative trait analysis was undertaken in the Ely Study population. Association between fasting plasma insulin, fasting and 2-h post-load plasma glucose, 30-min insulin increment (30-min insulin minus fasting insulin, divided by 30-min glucose in an OGTT), BMI and genotype was tested in a multiple regression model, adjusted for age and sex. Fig. 1 Genomic structure and pair-wise marker LD in ADIPOR1 (a) and ADIPOR2 (b). The location of SNPs identified in this study and/or genotyped is represented along the gene (bold type: SNPs selected for genotyping, and failed or monomorphic SNPs). Exons are represented as boxes (black for coding and open for untranslated). Introns and flanking sequences appear as lines. The pair-wise marker LD measured by the r2 statistic is shown below the genomic structures and indicated by the shade of the grey blocks (white to black) and the r2 value. a, b: isoforms a and b of ADIPOR2 Results To evaluate whether genetic variation in ADIPOR1 and ADIPOR2 contributed to severe insulin resistance in humans, we sequenced exons and splice junctions of both genes in 129 individuals from our SIR cohort. These individuals were unrelated and had a variety of syndromes of severe insulin resistance [23]. We identified 13 and 29 polymorphisms in ADIPOR1 and ADIPOR2 respectively, none of which altered the protein sequence of either gene (ESM Table 3); thus, no functional coding mutations were identified in these genes. We next tested whether common variants at these genes impacted on type 2 diabetes predisposition or related quantitative traits in UK Europid populations. We selected all variants with a minor allele frequency greater than 2% in the SIR cohort and supplemented our SNP selection with additional variants from the dbSNP database, plus SNPs with significant association results in other published studies.
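The Hardy–Weinberg criterion used in the assay QC above amounts to a one-degree-of-freedom χ2 goodness-of-fit test on the three genotype counts. A minimal sketch, assuming SciPy is available (the analyses themselves used SAS and Stata):

```python
from scipy.stats import chi2

def hwe_chi2(n11, n12, n22):
    """1-df chi-square goodness-of-fit test of Hardy-Weinberg equilibrium
    on genotype counts (hom. allele 1, heterozygous, hom. allele 2)."""
    n = n11 + n12 + n22
    p = (2 * n11 + n12) / (2 * n)                    # allele-1 frequency
    expected = (n * p * p, 2 * n * p * (1 - p), n * (1 - p) ** 2)
    stat = sum((o - e) ** 2 / e for o, e in zip((n11, n12, n22), expected))
    return stat, chi2.sf(stat, df=1)                 # statistic, p value

# QC rule from the text: fail the assay if the p value in controls is < 0.01;
# flag (but keep) the assay if the p value in cases is < 0.01.
```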
Thirteen polymorphisms from ADIPOR1 and 23 polymorphisms from ADIPOR2 were selected for genotyping in two type 2 diabetes case–control studies (n=2,127) and one metabolic quantitative trait study (n=1,721) (Fig. 1). Those SNPs that passed our genotyping quality control criteria (described in Methods) were used to investigate the degree of LD in control individuals across ADIPOR1 and ADIPOR2. In total, nine SNPs in ADIPOR1 and 15 SNPs in ADIPOR2 were included in the analysis. LD was measured by the r2 statistic and is depicted in Fig. 1. Under both general and linear trend models, no evidence was found for statistically significant associations between SNPs and disease risk (Table 1). In ADIPOR1, SNPs rs2275738, rs2275735 and rs10581 were removed from the quantitative trait analysis as they were out of Hardy–Weinberg equilibrium (p<0.01). For the remaining SNPs there was also no evidence for association of the SNPs tested with BMI, fasting and 2-h glucose levels, fasting insulin or 30-min insulin incremental response (ESM Table 4). In ADIPOR2, a few SNPs showed nominally significant association with BMI and 2-h glucose levels. However, these results are likely to be chance findings given the number of statistical tests performed (ESM Table 4).
Table 1 ADIPOR1 and ADIPOR2 SNP association results with type 2 diabetes. For each SNP (allele 1/allele 2), genotype counts with frequency (%) are given as 11 / 12 / 22, where 11 = homozygous for allele 1, 12 = heterozygous, 22 = homozygous for allele 2. Odds ratios are per allele 2, with significance from the linear trend test (p trend); homogeneity of the association results between the EPIC and Cambridgeshire case–control populations was also tested (p homogeneity). *p ≤ 0.05
ADIPOR1
rs6666089 (G/A): cases 378 (47.6) / 340 (42.8) / 76 (9.6); controls 536 (46.2) / 525 (45.2) / 100 (8.6); OR 1.00 (0.86, 1.15); p trend 0.9601; p homogeneity 0.750
rs1539355 (A/G): cases 374 (46.6) / 346 (43.1) / 82 (10.2); controls 558 (45.9) / 547 (45.0) / 111 (9.1); OR 1.02 (0.89, 1.18); p trend 0.7350; p homogeneity 0.649
rs2275738 (C/T): cases 277 (34.4) / 379 (47.1) / 149 (18.5); controls 395 (34.0) / 551 (47.4) / 216 (18.6); OR 0.97 (0.85, 1.1); p trend 0.6109; p homogeneity 0.496
IVS3+85 (C/G): cases 707 (92.1) / 61 (7.9) / 0 (0); controls 1046 (92.6) / 82 (7.3) / 2 (0.2); OR 1.03 (0.73, 1.45); p trend 0.8721; p homogeneity 0.654
rs2275735 (C/T): cases 763 (92.0) / 66 (8.0) / 0 (0); controls 1152 (93.5) / 79 (6.4) / 1 (0.1); OR 1.24 (0.88, 1.75); p trend 0.2239; p homogeneity 0.228
rs1342387 (C/T): cases 249 (30.4) / 414 (50.5) / 156 (19.0); controls 362 (29.5) / 634 (51.7) / 231 (18.8); OR 0.96 (0.85, 1.1); p trend 0.5909; p homogeneity 0.603
rs10581 (G/A): cases 741 (93.8) / 46 (5.8) / 3 (0.4); controls 1079 (95.7) / 46 (4.1) / 3 (0.3); OR 1.35 (0.92, 1.97); p trend 0.1281; p homogeneity 0.466
rs7539542 (C/G): cases 373 (47.4) / 336 (42.7) / 78 (9.9); controls 554 (46.9) / 510 (43.2) / 117 (9.9); OR 1.00 (0.87, 1.15); p trend 0.9566; p homogeneity 0.193
rs2185781 (C/T): cases 532 (63.6) / 256 (30.6) / 49 (5.9); controls 789 (63.8) / 398 (32.2) / 50 (4.0); OR 1.08 (0.92, 1.26); p trend 0.3425; p homogeneity 0.495
ADIPOR2
rs1029629 (A/C): cases 353 (44.6) / 372 (47.0) / 67 (8.5); controls 537 (44.9) / 528 (44.1) / 132 (11.0); OR 0.94 (0.81, 1.08); p trend 0.3497; p homogeneity 0.265
rs11061971 (A/T): cases 228 (28.1) / 409 (50.4) / 174 (21.5); controls 349 (29.5) / 589 (49.7) / 246 (20.8); OR 1.05 (0.93, 1.20); p trend 0.4394; p homogeneity 0.092
rs4766415 (A/T): cases 207 (25.6) / 411 (50.7) / 192 (23.7); controls 323 (27.1) / 586 (49.2) / 282 (23.7); OR 1.04 (0.92, 1.19); p trend 0.5159; p homogeneity 0.074
rs767870 (A/G): cases 575 (71.3) / 214 (26.5) / 18 (2.2); controls 857 (72.3) / 310 (26.1) / 19 (1.6); OR 1.08 (0.9, 1.3); p trend 0.3843; p homogeneity 0.095
rs2286384 (G/C): cases 214 (25.7) / 429 (51.5) / 190 (22.8); controls 329 (26.8) / 618 (50.3) / 282 (22.9); OR 1.03 (0.9, 1.17); p trend 0.6837; p homogeneity 0.037*
rs2286383 (C/T): cases 237 (28.6) / 418 (50.4) / 174 (21.0); controls 368 (30.1) / 602 (49.2) / 253 (20.7); OR 1.05 (0.92, 1.19); p trend 0.4598; p homogeneity 0.068
I290 (C/A): cases 618 (75.2) / 193 (23.5) / 11 (1.3); controls 940 (77.9) / 252 (21.0) / 14 (1.2); OR 1.18 (0.97, 1.44); p trend 0.0985; p homogeneity 0.118
rs9805042 (C/T): cases 603 (75.8) / 180 (22.6) / 12 (1.5); controls 900 (79.0) / 225 (19.8) / 14 (1.2); OR 1.17 (0.96, 1.43); p trend 0.1282; p homogeneity 0.051
rs2286382 (G/A): cases 791 (94.5) / 45 (5.4) / 1 (0.1); controls 1159 (94.2) / 71 (5.8) / 0 (0); OR 0.98 (0.67, 1.43); p trend 0.9054; p homogeneity 0.664
rs12342 (C/T): cases 370 (44.6) / 376 (45.3) / 84 (10.1); controls 543 (44.3) / 540 (44.1) / 142 (11.6); OR 0.96 (0.84, 1.1); p trend 0.5461; p homogeneity 0.398
rs1044471 (C/T): cases 227 (27.8) / 420 (51.5) / 169 (20.7); controls 334 (27.6) / 592 (49.0) / 282 (23.3); OR 0.93 (0.82, 1.05); p trend 0.2500; p homogeneity 0.115
rs2286380 (A/T): cases 621 (77.0) / 176 (21.8) / 10 (1.2); controls 947 (79.0) / 238 (19.8) / 15 (1.2); OR 1.12 (0.91, 1.37); p trend 0.2762; p homogeneity 0.155
rs13219 (T/C): cases 251 (30.2) / 403 (48.4) / 178 (21.4); controls 369 (30.1) / 618 (50.4) / 238 (19.4); OR 1.04 (0.91, 1.18); p trend 0.5610; p homogeneity 0.276
rs2286379 (T/C): cases 259 (31.0) / 399 (47.7) / 178 (21.3); controls 376 (30.6) / 614 (50.0) / 239 (19.4); OR 1.03 (0.91, 1.17); p trend 0.6427; p homogeneity 0.269
rs3815325 (G/A): cases 490 (60.6) / 273 (33.7) / 46 (5.7); controls 712 (58.7) / 437 (36.0) / 64 (5.3); OR 0.96 (0.82, 1.11); p trend 0.5785; p homogeneity 0.263
Discussion The current study, including sequencing of 129 patients with syndromes of severe insulin resistance and genotyping of both population-based type 2 diabetes case–control studies (n=2,127) and a metabolic quantitative trait study (n=1,721), suggests that ADIPOR1 and ADIPOR2 genetic variants are unlikely to be major risk factors for type 2 diabetes and insulin resistance in UK Europid populations. Although sequencing of the ADIPOR1 and ADIPOR2 genes in a cohort of patients with syndromes of severe insulin resistance (n=129) led to the identification of 42 polymorphisms, including 21 novel rare variants, none altered the protein sequence. Given that this group of patients comprises a heterogeneous cohort representative of a variety of syndromes of extreme insulin resistance, the lack of variants affecting the protein sequence suggests that functional mutations in ADIPOR1 and ADIPOR2 are not major causes of extreme insulin resistance in humans. For ADIPOR1, our data in case–control studies are consistent with previous reports showing that SNPs in this gene are not associated with type 2 diabetes risk in Europid [12, 15, 16], African [12] or Japanese populations [14]. This is in contrast to evidence from the Old Order Amish, where an association of rs2275738 (and rs2275737, which is in perfect LD with it) and rs1342387 with type 2 diabetes risk has been reported [13]. Since the Amish represent an isolated population, it is possible that variants in ADIPOR1 play a role in type 2 diabetes predisposition among them which is not apparent in more heterogeneous populations. However, given that both reported associated SNPs are present in all populations, it is unlikely that either is the true causal variant, although they could be detecting, through LD, the effect of an untested SNP. Alternatively, given the relatively small sample sizes used and the lack of adjustment for multiple testing, the authors may have reported a false-positive association. For ADIPOR2 the data are less consistent. While no evidence of association between ADIPOR2 SNPs and type 2 diabetes was present in a Japanese population [14], an association has been suggested between SNP rs767870 and type 2 diabetes risk in French populations [15]. Our data, and those from the Old Order Amish [13], do not support this finding. Notably, in the French population, meta-analysis of rs767870, including 1,380 individuals with type 2 diabetes and 1,496 controls, demonstrated allelic association of nominal significance only (p=0.02), while the most significant result was under a recessive model (p=0.0018). Our study has only 23% power to detect such small recessive effects (odds ratio 1.3) at an allele frequency of 0.15, and this could explain our discrepant result. Further large-scale studies of this SNP in additional populations will be required to elucidate its role in conferring risk of disease.
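The power figure quoted above can be approximated with a standard two-proportion calculation on the frequency of minor-allele homozygotes under a recessive model. The following is an illustrative sketch only; the rounded sample sizes and the normal-approximation shortcut are our assumptions, not the method used to obtain the 23% figure.

```python
from math import sqrt
from scipy.stats import norm

def recessive_power(q, odds_ratio, n_cases, n_controls, alpha=0.05):
    """Approximate power of a two-proportion z-test comparing the
    frequency of the minor-allele homozygote (recessive model) between
    cases and controls; q is the minor allele frequency."""
    p0 = q ** 2                                         # genotype freq., controls
    p1 = odds_ratio * p0 / (1 - p0 + odds_ratio * p0)   # genotype freq., cases
    p_bar = (n_cases * p1 + n_controls * p0) / (n_cases + n_controls)
    se0 = sqrt(p_bar * (1 - p_bar) * (1 / n_cases + 1 / n_controls))
    se1 = sqrt(p1 * (1 - p1) / n_cases + p0 * (1 - p0) / n_controls)
    z = (abs(p1 - p0) - norm.ppf(1 - alpha / 2) * se0) / se1
    return norm.cdf(z)

# With q = 0.15, OR = 1.3 and roughly 820 cases vs. 1,230 controls (the
# approximate combined genotyped totals here), this gives power in the
# region of 15-20%, of the same order as the ~23% quoted in the text.
```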
A haplotype was also reported to increase risk of diabetes in the Old Order Amish [13]. However, when we performed haplotype analyses in both ADIPOR1 and ADIPOR2, we were unable to detect any significant associations with diabetes risk (data not shown); this discrepancy might also be accounted for by differences in the populations studied. With regard to quantitative metabolic traits, to date two studies have reported nominally significant results with insulin sensitivity and body size [16, 24]. The first study showed nominally significant associations between two tightly linked SNPs (rs6666089 and an intronic –1927 SNP) in ADIPOR1 and decreased insulin sensitivity and increased HbA1c levels [24]. We did not test association with HbA1c, but have not replicated any association with insulin sensitivity (including with SNP rs6666089) as assessed by fasting insulin measurements. Recently, Kantartzis and colleagues reported that the association between rs6666089 and insulin sensitivity is observed only in more obese, but not in lean, individuals [25]. This dependence on the degree of adiposity could explain some of the discrepant results observed for this SNP if there are substantial differences in mean BMI between the populations tested. To explore this hypothesis, we performed SNP×BMI interaction tests (using BMI both as a continuous trait and splitting the population above and below the median) on all ADIPOR1 SNPs tested. Our data provided no evidence for such an interaction (data not shown). We also specifically tested for association between rs6666089 and measures of insulin and glucose in participants with BMI above and below 25. Again we found no statistically significant difference between the two groups (p>0.3). Therefore, it is unlikely that BMI differences between populations are at the root of our discrepant results. Of note, our data cannot exclude possible small effects of this SNP on insulin sensitivity, in particular if this effect is only observed in subjects with higher BMI. The second study suggested there was evidence of association between two markers (rs10920534 and rs2275738) and BMI, but this evidence came from men only. Furthermore, three other markers (rs10920534, rs12045862 and rs7539542) were reported to associate with fasting and 2-h insulin levels, particularly in men at baseline [16]. We did not test rs10920534, but did test rs6666089 (D′=r2=1 with rs10920534) and found no evidence for SNP×sex interaction on any of the quantitative traits we analysed. Our data also did not replicate the sex effect of SNP rs7539542 on insulin measurements. Although we did not test SNP rs12045862, previously published data on it were conflicting: while the C allele was suggested to be associated with higher 2-h insulin levels in men (p=0.027), in women the T allele was associated with higher levels (p=0.029) [16]. Given that neither we, nor others [24], have found evidence for sex×SNP interaction effects on measures of insulin sensitivity, we suggest that further confirmatory studies are required to test this hypothesis. Until recently, remaining uncertainty regarding the identity of the true physiological receptors for adiponectin [26] had hampered interpretation of the functional relevance of polymorphisms in ADIPOR1 and ADIPOR2 with respect to adiponectin's insulin-sensitising effects. However, a recent yeast two-hybrid screen identified an ADIPOR1-interacting molecule, APPL1, thought to mediate many of the effects of adiponectin [27].
APPL1 was shown to interact with both ADIPOR1 and ADIPOR2 in an adiponectin-sensitive manner and to mediate many of adiponectin's insulin-sensitising effects. This suggests that ADIPOR1 and ADIPOR2 could be therapeutic targets for drug development and should renew interest in association studies, such as those we present here, testing polymorphisms in ADIPOR1 and ADIPOR2 for effects on type 2 diabetes risk and metabolic traits. In summary, sequencing of the ADIPOR1 and ADIPOR2 genes in a cohort of patients with syndromes of severe insulin resistance (n=129) suggests that functional mutations in these genes are not a major cause of extreme insulin resistance in humans. Furthermore, testing of common genetic variants (n=24) found no evidence for association of these genes with type 2 diabetes risk (n=2,127) or with five additional quantitative metabolic traits (n=1,721). These data suggest that ADIPOR1 and ADIPOR2 variants are unlikely to be major risk factors for type 2 diabetes and insulin resistance in UK Europid populations, although more detailed analyses of gene variants may be required to exclude a potential minor role of these genes in insulin resistance and glucose homeostasis. Electronic supplementary material Below are the links to the electronic supplementary material. Table 1 Primer sequences and PCR product sizes used for sequencing ADIPOR1 and ADIPOR2 (DOC 46 kb) Table 2 Genotyping assay results for case–control studies (DOC 61 kb) Table 3 SNPs detected by sequencing ADIPOR1 and ADIPOR2 in severely insulin-resistant patients (DOC 78 kb) Table 4 ADIPOR1 and ADIPOR2 SNP association results with quantitative traits (DOC 215 kb)
[ "insulin resistance", "association studies", "type 2 diabetes", "adipor1", "adipor2", "polymorphisms" ]
[ "P", "P", "P", "P", "P", "P" ]
Oecologia-3-1-2039837
Geographic and seasonal patterns and limits on the adaptive response to temperature of European Mytilus spp. and Macoma balthica populations
Seasonal variations in seawater temperature require extensive metabolic acclimatization in cold-blooded organisms inhabiting the coastal waters of Europe. Given the energetic costs of acclimatization, differences in adaptive capacity to climatic conditions are to be expected among distinct populations of species that are distributed over a wide geographic range. We studied seasonal variations in the metabolic adjustments of two very common bivalve taxa at the European scale. To this end we sampled 16 populations of Mytilus spp. and 10 Macoma balthica populations distributed from 39° to 69°N. The results from this large-scale comprehensive comparison demonstrated seasonal cycles in metabolic rates, which were maximized during winter and springtime and often reduced in summer and autumn. Studying the sensitivity of metabolic rates to thermal variations, we found that a broad range of Q10 values occurred under relatively cold conditions. As habitat temperatures increased, the range of Q10 narrowed, reaching a bottleneck in southern marginal populations during summer. For Mytilus spp., genetic-group-specific clines and limits on Q10 values were observed at temperatures corresponding to the maximum climatic conditions these geographic populations presently experience. Such specific limitations indicate differential thermal adaptation among these divergent groups. They may explain currently observed shifts in mussel distributions and invasions. Our results provide a practical framework for the thermal ecophysiology of bivalves, the assessment of environmental changes due to climate change, and its impact on (and consequences for) aquaculture. Introduction A fundamental step forward in predicting the ecological and economic consequences of climate change would be to identify the mechanistic link between the physiology of species and climatic variations. How and to what extent climatic variations cause stress in eurythermal bivalve species is still not fully understood and is expected to differ among taxa. In general, for organisms that maintain their body temperature by absorbing heat from the environment (ectotherms), temperatures that are too high can act as a stressor in two ways. On the one hand, they may cause the denaturing of sensitive proteins. This damage can be minimized by the actions of heat-shock proteins, which increase the thermostability of proteins and chaperone cellular processes (Feder and Hofmann 1999; Lyons et al. 2003). On the other hand, excessive temperatures may cause oxygen limitation due to a limited respiratory capacity, resulting in a maximum respiration rate at a specific temperature beyond which anaerobic metabolic pathways are utilized and respiration rates usually drop drastically (Pörtner 2001, 2002). The temperature that corresponds to this respiratory maximum is referred to as the breakpoint temperature. Since breakpoint temperatures tend to correlate with the maximum habitat temperatures of several marine ectotherms (Somero 2002), climate-change-induced shifts in the distributions of these species may be due to their respiratory limitations. The metabolic rate of an ectotherm is proportional to its respiration rate. To remain energy-efficient, and for protection against oxygen shortage during the warmer seasons, organisms need to adjust their metabolic energy requirements to their maximum food uptake and oxygen consumption rate. Seasonal variation in the respiratory response to temperature reveals how organisms adjust throughout the annual cycle.
While several terrestrial and aquatic mollusks apply the strategy of metabolic down-regulation during the summer (Buchanan et al. 1988; McMahon 1973; McMahon and Wilson 1981; Storey and Storey 1990; Wilson and Davis 1984), others do not (McMahon et al. 1995). The level of metabolic down-regulation is also reflected by the temperature quotient (Q10) of the metabolic rate, i.e., the sensitivity of the organism's metabolism to changes in body temperature. Experimental work performed by Widdows (1976) has demonstrated that when thermal fluctuations approach and exceed breakpoint temperatures, this sensitivity usually decreases. Such reduced sensitivities have been found for field populations of M. edulis (Newell 1969) and M. balthica (Wilson and Elkaim 1991) sampled from high-shore habitats in summertime. Taken together, the thermal sensitivity of the metabolic rate (Q10) is expected to decrease towards the warm end of the species distribution range. Lately, invasions (Geller 1999; Wonham 2004) and northward introgression (Luttikhuizen et al. 2002) have been reported for Mytilus spp., and a range contraction for M. balthica (Hummel et al. 2000). Aiming to bridge the gap between the observed migrations of these species and the changing climate in Europe, we studied seasonal adjustments to temperature in bivalve metabolism at the European scale, from 39° to 69°N. The analysis of patterns in the extent of metabolic acclimatization across widely distributed populations will reveal how core populations differ from marginal populations, presenting species-specific responses to cold winters and hot summers near the upper and lower edges of temperature-induced distribution ranges. Such latitudinal gradients may provide a powerful tool that can be used to understand the temperature-dependent distributions of species and to predict their adaptive tolerance to climate change. In macrophysiological studies, the possibility of differential adaptation to regional climates among distinct populations should be taken into account. Based on neutral genetic variation, both European mussels and clams can be subdivided into three main genetic groups (Hummel 2006; Daguin et al. 2001; Luttikhuizen et al. 2003; Skibinski 1985). Uncertainty exists about the nomenclature of European Mytilus species. Traditionally, mussels from the Mediterranean Sea and the coast of the Iberian Peninsula are referred to as Mytilus galloprovincialis (Lamarck). Mussels from the English Channel, the North Sea coast, and the Atlantic coasts of Norway and Iceland are named Mytilus edulis (L.), and the mussels from the central Baltic Sea are called Mytilus trossulus (Gould), based upon their genetic resemblance to Mytilus trossulus from the Atlantic coast of Canada (Varvio et al. 1988). However, the morphological characteristics of the different holotypes do not diagnostically separate the three genetic groups found in European mussels. In addition, broad hybridization zones (Daguin et al. 2001) and deep introgressions (Luttikhuizen et al. 2003) have been reported; these are genetic characteristics of a single species with separate clades. To avoid confusion, we will not use these species names in this study, but rather refer to them as Mytilus spp., with reference to their geographic distributions, i.e., a Baltic Sea group, a North Sea group, and a Mediterranean Sea and Bay of Biscay group. Also, different genetic groups have been distinguished for M. balthica (Hummel 2006; Luttikhuizen et al.
2003) that have never been described as different species. The coupling of phylogenetic and ecophysiological analyses is urgently needed to understand and predict current and future migrations of these bivalve taxa and their clades. Methods Fieldwork To study the marginal and core populations of both taxa, including the different genetic groups, we defined 21 research sites of interest along the European coastline and sampled 11 M. balthica populations and 16 Mytilus spp. populations. All sampling stations are numbered (1–21) in Fig. 1. Whenever a station name is mentioned in this text, its number is given between parentheses. The sampling stations were located in the coastal areas of the different sea basins that represent much of the European coastline (including the Mediterranean Sea, the Bay of Biscay, the North Sea and the Baltic Sea), and in a variety of microhabitats. The sampling stations in the Mediterranean Sea are characterized by a high and stable salinity (38–40 PSU). In the Bay of Biscay and North Sea estuaries, the ambient salinity at the sampling stations fluctuated, generally varying between ∼35 and ∼25 PSU. Due to elevated river runoff, oligohaline conditions may have occurred occasionally at these stations. The Baltic Sea stations were distributed along the Baltic salinity gradient. While the ambient salinity is still ∼15 PSU in the Mecklenburg Bight (7), it is around 6–7 in the Gulf of Gdansk (6) and Askö (5), and has decreased to 3 PSU in Umeå (3). Although high and low peak temperatures occur at the intertidal sampling stations in the Bay of Biscay and the North Sea, water temperatures there were intermediate compared to the warmer Mediterranean Sea and the colder Baltic Sea sites (Fig. 2). In summer, the Baltic Sea warms rapidly, reaching temperatures that are comparable to North Sea conditions. In the tidal estuaries of the Bay of Biscay and the North Sea, sampling was carried out at mid-shore level. Lacking significant tidal movements, Baltic Sea populations of Mytilus spp. and M. balthica, and Mediterranean Sea populations of Mytilus spp., were sampled at a water depth of 0.5–1.0 m. During the period July 2003–May 2005, 16 populations (i.e., 10 Mytilus spp. and 6 M. balthica) were visited seasonally and the others only once or twice. On each sampling occasion, mussels were sampled from hard substrate and clams were sieved from the sediment. For Mytilus spp. populations, mean shell lengths ranged from 28 mm (SD: 2.0) for specimens sampled in the Gulf of Gdansk (6) to 33 mm (SD: 3.5) for mussels sampled from the Santa Giusta Lagoon (21). Mean shell lengths of mussels collected from the other populations were in the same range, and standard deviations were <3.5 mm. About 95% of all sampled mussels fell within a size range of 25–35 mm. For M. balthica, mean shell lengths ranged from 12 mm (SD: 1.5) for clams sampled from the Mecklenburg Bight (7) to 16 mm (SD: 1.8) for clams sampled at Point d’Aiguillon (15). Mean shell lengths of all other clam populations were within this range, standard deviations did not exceed 2.8 mm, and 95% of all sampled clams fell within the size range of 10–19 mm. After sampling, the collected animals were stored in foam boxes (clams were offered sediment from the field to bury in) and transported to a nearby laboratory, where they were kept in constantly aerated aquaria under ambient field conditions (±3 °C). Measurements were carried out within 24 h after sampling. Fig. 1 Map of research area with 21 research sites.
The coastal area is subdivided into three parts representing the approximate distributions of the three genetic groups found in Mytilus spp. and M. balthica. Light gray indicates the Baltic Sea group, medium gray the North Sea group and dark gray the Mediterranean Sea and Bay of Biscay group (hybridization and introgression zones are not indicated). The white and the gray circles represent mussel and clam populations, respectively (see legend). Site no./name: 1, Reykjavik; 2/3, Umeå a/b; 4, Fallvikshamn; 5, Askö; 6, Gulf of Gdansk; 7, Lomma; 8, Mecklenburg Bight; 9, Grevelingenmeer; 10, Westerschelde estuary; 11, Granville; 12, Le Vevier; 13, Brest; 14, Loire estuary; 15, Point d’Aiguillon; 16, Bidasoa estuary; 17, Mundaka estuary; 18, Vias plage; 19, Marseille; 20, Gulfo di Oristano; 21, Santa Giusta lagoon. Fig. 2 Annual sea surface temperature (SST) regimes for the sampling stations in the Baltic Sea, the North Sea, the Bay of Biscay, and the Mediterranean Sea. Lines and error bars represent monthly averages and standard deviations, based on measurements taken by satellite twice daily, at 9 a.m. and 2 p.m., during the period 2003–2005. These data were taken from the NASA JPL website (NASA Jet Propulsion Laboratory 2005). Temperature profiles Field temperature profiles were obtained between April 2004 and May 2005 with temperature loggers (HOBO Water Pro®, Onset Computers, Bourne, MA, USA) at research sites 5, 7, 9, 10, 15, 16 and 18, at a resolution of one measurement per 30 min. The loggers were positioned in the direct vicinity of the animals. Logger output was compared with the sea surface temperature (SST) profiles obtained by satellite (NASA Jet Propulsion Laboratory 2005). Mean habitat temperatures of the shallow-water and intertidal habitats showed a constant relation with SST. In summer, SST was about two degrees lower than the mean values calculated from the logger data at all sites, while in winter the SST was slightly lower for the Atlantic and Mediterranean sites. For the Baltic Sea sites, winter SSTs were similar to the logger data. Using the relation between SST and logger data, we estimated the acclimatization temperature of each mussel and clam population for each sampling occasion. This acclimatization temperature is defined in this study as the mean water temperature over the 30 days before sampling. We assume that this is a proper indication of the temperature to which the animals should be well adjusted. Respiration rates Within 24 h after sampling, groups of 3–6 mussels and 7–15 clams were gently removed from their aquaria and, without further acclimation, transferred to respiration chambers of 264 and 154 ml volume, respectively. The chambers were positioned in a thermostated tank to maintain a constant temperature during the incubations (±0.3 °C). The chambers were filled with filtered habitat water, previously aerated to 100% oxygen saturation. Chamber lids contained Clark-type electrodes to record the change in oxygen tension in the water. In this way, 2–6 replicate measurements were taken per population and temperature after each sampling occasion. Control measurements were carried out using the same experimental setup, without animals. The total number of M. balthica specimens used from the populations that were sampled seasonally ranged from 475 from the Westerschelde estuary (10) to 685 from the Mecklenburg Bight (8). For Mytilus spp. the number of experimental animals ranged from 104 from Point d’Aiguillon (15) to 322 from Askö (5).
To avoid light-induced stress in M. balthica, the chambers were made out of tanned Plexiglas. Respiration rates were measured at 3, 10, 17, 24 and 31 °C. Measurements continued until the oxygen tension in the chambers had decreased by 20–30%. After each measurement, experimental animals were frozen at −20 °C and subsequently lyophilized for 72 h to a stable weight. From the dried specimens, valves were removed and the soft-tissue dry weights determined to the nearest mg, after which the mass-specific respiration rates and the temperature quotients were calculated. In addition, we estimated the respiration rates and Q10 values at the acclimatization temperature, per population and per season. Both respiration rates and Q10 values were based on the rates that corresponded to the seven-degree temperature interval (for instance 10–17 °C) nearest to the acclimatization temperature. The respiration rates were assessed by linear interpolation, and the Q10 values were calculated with the following equation: Q10 = (k2/k1)^(10/(t2 − t1)). Here k2 and k1 are the respiration rates measured at the higher and the lower temperatures, t2 and t1, respectively. Statistical analysis The following working hypothesis was formulated: H0: "Season" or "genetic group" has no effect on the metabolic temperature dependence of M. balthica or Mytilus spp. Because temperature is not the only factor limiting ectotherm metabolism, commonly used statistical methods such as correlations and regressions are not very well suited for testing this relationship (Blackburn et al. 1992; Cade et al. 1999; Thompson 1996). Estimating the function along the upper edge of this distribution would describe the evolutionary relation between temperature and metabolism in our species. Data points scattered below this "slope of upper bounds" (Blackburn et al. 1992) are responses induced by other limiting factors. To estimate this slope of maximum respiration rate at a given temperature, we carried out a main axis regression analysis (after Thompson et al. 1996). The first step in this is a general regression through all data points. Subsequently, the data are divided into points that fall below and above the line of least squares. All data points that were found above are then used to fit a second regression that again divides the data into two subsets, and so on. This was repeated three times. The regression lines were forced through the (x, y) point (−2, 0), assuming that at the approximate freezing point of seawater bivalve aerobic metabolism is near to zero. The final regression line for the M. balthica populations was based on 12 data points, with an r2 of 0.95. For the Mytilus spp. populations, the final regression was based on seven data points with an r2 of 0.99. The distance between measured respiration rates and this upper slope describes the extent of metabolic down-regulation in the populations. These rate deviations (distances) were estimated for all populations and used for statistical comparison. ANOVAs based on rate deviations were carried out with "season" or "genetic group" as independent variables. For the "genetic group" analysis, only data points from the temperature range at which all three groups were sampled were included, i.e., the temperature range for the "genetic group" comparison was 9–18 °C for M. balthica and 9–15 °C for Mytilus spp. Bonferroni's multiple comparison test was used to test for specific differences among seasons or genetic groups.
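For concreteness, the temperature quotient and the iterative "slope of upper bounds" described above can be sketched as follows. This is a schematic re-implementation from the description given here (assuming NumPy), not the authors' original code; the treatment of the iteration and of the forced origin at (−2, 0) follows our reading of the text.

```python
import numpy as np

def q10(k1, k2, t1, t2):
    """Temperature quotient of the metabolic rate: respiration rates k1
    and k2 measured at the lower and higher temperatures t1, t2 (deg C)."""
    return (k2 / k1) ** (10.0 / (t2 - t1))

def upper_edge_slope(temp, rate, iterations=3, origin=(-2.0, 0.0)):
    """Approach the 'slope of upper bounds' (Blackburn et al. 1992):
    fit a no-intercept least-squares line through `origin`, keep only
    the points above it, refit, and repeat `iterations` times."""
    x = np.asarray(temp, dtype=float) - origin[0]
    y = np.asarray(rate, dtype=float) - origin[1]
    slope = (x @ y) / (x @ x)          # regression forced through the origin
    for _ in range(iterations):
        above = y > slope * x          # points lying above the current line
        if above.sum() < 2:            # too few points left to refit
            break
        x, y = x[above], y[above]
        slope = (x @ y) / (x @ x)      # refit on the upper subset
    return slope

# e.g. q10(0.4, 0.9, 10, 17) gives the quotient for the 10-17 deg C
# interval nearest to a population's acclimatization temperature.
```

In this sketch, a population's rate deviation at temperature t is then the gap between slope × (t + 2) and the measured rate, which is the quantity entered into the ANOVAs.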
Results Respiration rates at experimental temperatures For each population sampled seasonally, mean respiration rates at experimental temperatures are given in Fig. 3 for Mytilus spp. and in Fig. 4 for M. balthica. Standard deviations among replicate measurements were usually less than 20% of mean values at experimental temperatures near ambient conditions. At the highest experimental temperature (31 °C), variation among replicate measurements could be higher, with standard deviations occasionally exceeding 50% of the mean values. This was caused by the fact that some groups of specimens exhibited high respiration rates, while other groups consumed almost no oxygen. Fig. 3 Seasonal variation in the respiratory response to an experimental temperature range for ten European Mytilus spp. populations. Station names and degrees north are given in the graphs. Numbers in square boxes refer to the station names presented in Fig. 1. See legend for explanation of the symbols. Fig. 4 Seasonal variation in the respiratory response to an experimental temperature range for six European M. balthica populations. Station names and degrees north are given in the graphs. Numbers in square boxes refer to the station names presented in Fig. 1. See legend for explanation of the symbols. The respiration rate of mussels usually declined between 24 and 31 °C (Fig. 3). Exceptions were some cold-acclimatized populations that showed maximum rates between 17 and 24 °C [e.g., the Askö (5), Gulf of Gdansk (6), and Grevelingenmeer (9) populations sampled in January]. Peaks in respiration rates between 1.5 and 2.5 mg O2/g (dry weight)/h at 24 °C were frequently observed in springtime (April–May) and occasionally in other seasons. Respiration rates never exceeded 1.5 mg O2/g (dry weight)/h in mussels from the Mecklenburg Bight (7) and the Gulf of Gdansk (6). The highest respiration rates for the M. balthica specimens were mainly observed at 31 °C, the maximum temperature applied in this study (Fig. 4). Especially in the populations from the Westerschelde estuary (10) and Point d’Aiguillon (15), respiration rates never decreased at high experimental temperatures. In the Baltic populations, however, maximum rates outside the summer period were observed at a lower experimental temperature. In Umeå (3), respiration rates measured in January and April declined at experimental temperatures exceeding 10–17 °C. Respiration rates at acclimatization temperatures The respiration rates of both taxa increased with acclimatization temperature (Fig. 5). In winter and springtime, deviations from the upper-edge slope were small for both Mytilus spp. and M. balthica. At acclimatization temperatures of >12 °C, an increasing number of data points were scattered at increasing distance from the upper-edge slope, and the seasonal comparisons (ANOVA + Bonferroni) revealed that rate deviations were mainly restricted to measurements taken in summer and autumn (see column bar plots, superimposed in Fig. 5A,B). These data also revealed genetic-group-specific differences between the respiration rate and the acclimatization temperature for Mytilus spp. (P < 0.05) (Fig. 5C). Maximum acclimatized respiration rates occurred between 8 and 14 °C in the Baltic Sea group, 15–19 °C in the North Sea group and 20–24 °C in the Mediterranean Sea and Bay of Biscay group. The rate deviations found for the Mediterranean Sea and Bay of Biscay group differed significantly from those found for the Baltic Sea group.
Several data points related to the North Sea group were found to fit the reaction norm of the Mediterranean Sea and Bay of Biscay group between 15 and 20 °C. Thus, the rate deviations of these two groups could not be distinguished (P > 0.05). No genetic-group-specific differentiation was observed for M. balthica (Fig. 5D). Fig. 5A–D Respiration rates as a function of the acclimatization temperature. The black lines are the upper-edge slopes. These graphs include measurements from all research sites presented in Fig. 1. In A and B, different labels represent different sampling seasons (see legend in A). In C and D, the respiratory response to temperature is presented per genetic group. The gray diamonds, white triangles and black circles represent the Baltic Sea group, the North Sea group and the Mediterranean Sea and Bay of Biscay group, respectively. Superimposed are the rate deviations calculated per season and per genetic group, in which BS indicates the Baltic Sea group, NS the North Sea group, MS the Mediterranean Sea group and BC the Bay of Biscay group. *P < 0.05 and **P < 0.01 refer to the statistical differences for a given season compared to "January" in A and B and for a given group compared to "BS" in C and D. Thermal sensitivity of the metabolic rate Seasonal variation in the thermal sensitivity of the metabolic rate shows that Q10 values greater than four were exclusively found in winter and spring (Fig. 6). For M. balthica populations, such high Q10 values were found at low (January) and high (April) latitudes. In April, Q10 values decreased significantly with latitude. Maximum Q10 did not exceed 3.0 in July and October. July values were about one for clams from the Westerschelde estuary (10) and Point d’Aiguillon (15). The clam population at Askö (5) also exhibited relatively low values (<2). For mussels, Q10 values greater than four were only observed in January and April (Fig. 6). A significant decrease in Q10 values with latitude was observed in January (P = 0.0083), with the lowest values (Q10 = 1.2) observed for the population from Gulfo di Oristano (20), and the highest values for the population from the Gulf of Gdansk (6). In April, this trend vanished, mainly due to the reduced Q10 values of mussels from the northernmost sampling stations. During the following seasons, July and October, mean Q10 values decreased further and latitudinal variation disappeared. Fig. 6 Seasonal and latitudinal variations in mean Q10 values of M. balthica (circles) and Mytilus spp. (diamonds) populations. Trend lines represent linear regressions. r2 and P values are given in each graph. As a function of the acclimatization temperature, both low and high Q10 values were found at low ambient temperatures (Fig. 7A,B). The maximum Q10 values observed at a given temperature decreased with increasing acclimatization temperature for both taxa. To illustrate this, we fitted lines of maximum Q10 values as a function of temperature. These lines crossed the Q10 = 1 line at ∼23 °C for M. balthica and at ∼24 °C for Mytilus spp. (Fig. 7A,B, line a). In addition to the maximum Q10 values for all Mytilus spp. populations, group-specific lines could be drawn (lines a–c in Fig. 7A). This resulted in different intercepts with the Q10 = 1 line: at ∼17 °C for the Baltic Sea group, at ∼19 °C for the North Sea group and at ∼24 °C for the Bay of Biscay and Mediterranean Sea group. No differentiation of the maximum thermal sensitivity of the metabolic rate was observed for the M. balthica populations (Fig.
7B). Fig. 7A–B Q10 values as a function of the acclimatization temperature, presented per genetic group. The diamonds, triangles and circles represent the Baltic Sea group, the North Sea group and the Mediterranean Sea and/or Bay of Biscay group, respectively. These graphs include measurements taken from all research sites. Lines of maximum Q10 values are fitted for M. balthica (B; a) and for Mytilus spp. (A) per genetic group: a, the Mediterranean Sea and Bay of Biscay group; b, the North Sea group; and c, the Baltic Sea group. Discussion Rates at high experimental temperatures When exposed to experimental temperatures of 24 or 31 °C, respiration rates in both Mytilus spp. and M. balthica may exceed 2.0 mg O2/g (dry weight)/h. Such high rates were mostly observed in winter and spring, probably due to the low activation energy required for enzyme-catalyzed reactions at this time of the year (Hochachka and Somero 2002). The population from the Mecklenburg Bight (7) exhibited exceptionally high respiration rates for Baltic Sea mussels in the summer. In general, these high respiration rates measured at high experimental temperatures are not expected to occur in the field. When the summer started, and peaks in habitat temperature (24 °C+) occurred in the field, the amplitude of the respiratory response at these higher temperatures declined. We propose that this modulation of metabolic thermal sensitivity is a protection mechanism that prevents excessive metabolic rates at high ambient temperatures. At the northern Baltic Sea station (3), the high thermal sensitivity of the metabolic rate did not result in high respiration rates in wintertime (which includes April in this area). Rates declined above 10 °C, indicating that the respiratory capacity was low and that acclimatization to increasing temperatures in springtime requires physiological changes in these clams. Comparing the two taxa, their respiratory responses to the experimental temperature range were rather different. The most striking difference is the regular occurrence of a breakpoint temperature between 24 and 31 °C in Mytilus spp., which was mostly absent for M. balthica (Fig. 8A,B; in these graphs we summarize all respiration rates assessed for all populations during all seasons between 2003 and 2005). This breakpoint temperature may reflect the thermal tolerance limits of marine ectotherms. It has been suggested that species with lower breakpoint temperatures are less tolerant of high temperatures (Hochachka and Somero 2002; Pörtner 2002). Interestingly, despite its apparent breakpoint temperature between 24 and 31 °C, Mytilus spp. seem to tolerate higher environmental temperatures than M. balthica, given the geographical distribution of mussels, which reaches as far south as northern Africa (Comesana et al. 1998). Nonetheless, the lethal temperatures of both taxa under submerged conditions are comparable. The LT50 values for both species were 30–31 °C in a 24-h experiment using mussels and clams that were previously acclimatized to 20–25 °C (Kennedy and Mihursky 1971; Wallis 1975). Fig. 8A–B Mean respiratory responses to temperature of Mytilus spp. (B; diamonds) and M. balthica (A; circles). Data are presented as the averages of all measurements taken (all populations and all seasons, 2003–2005).
The numbers next to the labels are the numbers of replicate measurements taken, and the dashed lines indicate the standard deviations. Rates at acclimatization temperatures The relations between the mean acclimatized respiration rates of the studied taxa and temperature are based on specimens from a great variety of microhabitats. Therefore, we expect these data to give a representative overview of metabolic temperature dependence in acclimatized mussels and clams. A tight relation between temperature and acclimatized respiration rates in winter and springtime suggests that variations in the metabolic rate during this time of the year were directly dependent on temperature, and not limited by other physiological or environmental variables. A temperature-limited metabolic rate was also reflected by the high thermal sensitivity of the metabolic rate of some of the populations during those seasons. Only a few observations confirmed that the constant increase in respiration rates with acclimatization temperature can continue in summer and autumn, when most data points were found to be scattered below the upper-edge slope. We suggest that the upper-edge slopes represent the "metabolic scopes" of these taxa. However, since measurements are based on groups of specimens, respiration rates of single individuals can be higher. The relatively high acclimatized respiration rates (1.0–1.5 mg O2/g/h) most probably result from metabolic up-regulation related to elevated rates of digestion and protein synthesis. The present study did not find any acclimatized routine respiration rates that exceeded 1.5 mg O2/g/h in mussels or clams. In general, respiration rates exceeding 1.5 mg O2/g (dry weight)/h can be considered high and will rarely occur under ambient conditions. Also, in other studies no acclimatized routine respiration rates higher than 1.5 mg O2/g (dry weight)/h have been reported for M. balthica (McMahon and Wilson 1981; Wilson and Elkaim 1991; Hummel et al. 2000) or Mytilus spp. (Arifin and Bendel-Young 2001; Bayne and Widdows 1978; Tedengren et al. 1999; Thompson 1984). Towards tolerance limits, the arising breakpoint temperature will force down the "metabolic scope," as indicated by the mussel populations from the Santa Giusta lagoon (21) and Gulfo di Oristano (20) in July, and by the clams from Point d’Aiguillon (15) in July and October. Physiological rates that are acclimatized to (near) breakpoint temperatures are rarely described for ectotherms. The relatively low respiration rates at acclimatization temperatures >12 °C are interpreted as metabolic down-regulation resulting from both intrinsic and extrinsic factors (Brokordt et al. 2000; Burkey 1971; Velasco and Navarro 2003). The extremely low respiration rate of mussels from the Bidasoa estuary (16) in January was an exception, presumably caused by a temporary drop in ambient salinity (from 34 to 9 PSU). Such hypo-osmotic conditions are known to induce metabolic depression in Mytilus spp. (Newell 1969; Storey and Storey 1990). In general, Baltic Sea populations exhibited relatively low respiration rates, intermediate respiration rates were found in the North Sea populations, and relatively high metabolic rates in the Bay of Biscay and Mediterranean Sea populations. These observations coincide with the low growth rates in bivalves from the Baltic Sea, intermediate growth rates in North Sea populations, and the highest growth rates found in the Bay of Biscay and in some Mediterranean Sea populations (Bachelet 1980; Fuentes et al.
1998, 2000; Gangnery et al. 2004; Hummel et al. 1998; Peteiro et al. 2006; Westerborn et al. 2002). This indicates that mean routine respiration rates, measured at ambient temperatures, do indeed reflect the metabolic rate and ultimately the physiological performance of bivalves. Separate curves describing the respiratory response to acclimatization temperatures were found for the three genetic groups of Mytilus populations. This resulted in group-specific respiration rates at a given temperature, which were significantly lower for the Baltic Sea group; this group exhibited reduced metabolic rates at intermediate temperatures (12–15 °C), where populations from the other genetic groups mostly exhibited optimal respiration rates (see Fig. 5c). In line with our earlier conclusion that the observed suboptimal respiration rates represent metabolic adjustment to limiting extrinsic factors (including energy–substrate availability), we expect that these group-specific response curves mainly reflect the food conditions in the respective sea basins during the warmer months of the year. This assumption is supported by the seasonal variation in chlorophyll concentrations: the chlorophyll concentrations in the Baltic Sea decline (Heiskanen and Leppänen 1995) under thermal conditions that correspond to the maximum annual chlorophyll concentrations in the North Sea and the Bay of Biscay (Colebrook 1979) and their adjacent estuaries (Rybarczyk et al. 1993; van Bergeijk et al. 2006). It also explains the exceptionally high respiration rates of some populations from the North Sea group at high habitat temperatures, e.g., specific dynamic action facilitated by high ambient food availability. Still, differential genetic adaptation to temperature may add to the observed physiological differences and cannot be excluded as an explanation. The relation between the respiration rate and the acclimatization temperature shows great overlap when comparing Mytilus spp. with M. balthica. Further comparison with other studies demonstrates that the seasonal variation in acclimatized respiration rates obtained from two Mytilus populations from England (Bayne and Widdows 1978) fits the relation described in this study. The respiratory performances of other bivalve species, such as Cerastoderma edule (Newell and Bayne 1980), Ostrea edulis (Beiras et al. 1994), or Dreissena polymorpha (Sprung 1995), were highly comparable as well. This great similarity in respiratory performance among European bivalve species suggests that they share a comparable evolutionary relation with temperature. Sensitivity of the metabolic rate to temperature changes We hypothesized that the sensitivity of the metabolic rate to temperature would decrease towards more southern localities, especially in the warmer seasons. Such latitudinal clines were only observed in January for Mytilus spp., and in April for M. balthica populations. In October no specific pattern was observed for Mytilus populations, since the Q10 values were low in all populations. This may be related to limiting food conditions in autumn. The absence of significant latitudinal clines in July was caused by cline interruption, which corresponded to the geographic transition from one genetic group to the next. These genetic-group-specific clines became especially apparent in mussels when the Q10 was presented as a function of the acclimatization temperature (Fig. 7). The obtained Q10 values decreased with increasing acclimatization temperature for three reasons.
First, metabolic down-regulation for energetic balancing results in a reduced Q10. Second, Q10 will decrease near breakpoint temperatures. Third, during thermal fluctuations that involve high peaks in habitat temperature, ectotherms will minimize the sensitivity of their metabolic rate, avoiding excessive rates when exposed to elevated temperatures (Peck et al. 2002; Widdows 1976; Wilson and Elkaim 1991). Thus, the increasing abundance of relatively nutrient-poor and thermally dynamic habitats causes a gradual shrinking of the ecological niche of these bivalve taxa towards the warm end of their distribution range. The group-specific clines discussed in the preceding paragraph indicate differential adaptation to temperature among Mytilus spp. populations. Since these bivalve species have great dispersal capacities, strongly coupled to hydrodynamic circulation (Gilg and Hilbish 2002), genetic divergence within these species requires geographic isolation. During isolation in different climatic regions, selective genetic variation may have evolved at the same spatial scale as the observed neutral genetic variation. This may explain why the biogeography of these genetic groups is associated with European temperature gradients. Physiological studies have revealed that mussels from different genetic groups exhibit different growth rates when hatched under similar conditions (Beaumont et al. 2004; Hilbish et al. 1994). Differential thermal adaptation has recently been demonstrated for different mussel species from the west coast of North America as well (Fields et al. 2006). The differential adaptation to temperature indicated by our results, combined with the strong spatial variability of the coastal climate in the Bay of Biscay (Puillat et al. 2004), may explain the broad and mosaic-like transition from one mussel group present in this area to the next (Bierne et al. 2003). Q10 values lower than one only occur beyond the breakpoint temperature, where the metabolic rate decreases with increasing temperature. Since thermal conditions beyond breakpoint temperatures are not beneficial for bivalve performance and are most probably lethal when extended, it is most interesting to observe that the southernmost Mytilus spp. populations in our research area exhibited Q10 values of less than one at acclimatization temperatures in July 2004. In the Santa Giusta Lagoon (21), the monthly mean water temperature reached 27 °C, exceeding the breakpoint temperature of mussels. Very few bivalves survived the summer of 2004 in the Santa Giusta lagoon (21), indicating that thermal tolerance limits were indeed crossed under field conditions. Survival of temperature-induced stress depends on its duration and on the physiological status and condition of the organisms. Widdows and Bayne (1971) found that mussels can cope with relatively high temperatures so long as they can regularly recover in water at a suitably low temperature. This is an important strategy that allows survival of temporary heat exposure, e.g., when exposed to the air during low tide in summer. The range and limits of this relation for mussels and clams have not yet been studied in depth. In conclusion, we demonstrated that the respiratory responses to temperature of two European bivalve taxa are greatly dependent on seasonal variations in temperature. These responses, obtained throughout all seasons and at a large geographic scale, fit together in a framework when presented as a function of the acclimatization temperature.
This framework is useful in both the fundamental and the applied sciences, facilitating the interpretation of respiration rates measured under ambient conditions and the further development of ecophysiological theory. We observed that the maximum thermal sensitivity of the metabolic rate decreases with increasing acclimatization temperature, crossing a threshold (Q10 = 1) in Mytilus spp. at the maximum acclimatization temperatures observed in the field. Whether the temperature quotient will become less than one in M. balthica populations when the acclimatization temperature exceeds 23 °C cannot be answered with any certainty, since no breakpoint temperature was observed in the experimental southern populations under or near ambient conditions. (Onto)genetic adaptation to regionally different climates implies that climate change will affect not only marginal populations via their metabolic rates; rather, all genetic groups are expected to shift northward with increasing temperatures. Although the dispersal capacity of mussel larvae and the active transport of juvenile and adult mussels for aquaculture purposes support range shifts at the speed of climate change, other ecological and physiological variables and their impact on the environment need to be investigated to predict the fate of mussels and other bivalve populations under changing climatic conditions. For M. balthica populations, no differential adaptation to temperature was observed among divergent groups, which leads us to expect that the direct impact of climate-induced temperature changes will be restricted to the southernmost populations. Electronic supplementary material Below is the link to the electronic supplementary material. (DOC 57 kb)
[ "metabolic rate", "climate change", "respiration rate", "distribution range", "thermal tolerance" ]
[ "P", "P", "P", "P", "P" ]
Int_J_Cardiovasc_Imaging-3-1-2048827
Assessment of normal tricuspid valve anatomy in adults by real-time three-dimensional echocardiography
Background The tricuspid valve (TV) is a complex structure. Unlike the aortic and mitral valves, it is not possible to visualize all TV leaflets simultaneously in one cross-sectional view by standard two-dimensional echocardiography (2DE), either transthoracic or transesophageal, due to the position of the TV in the far field. Introduction The tricuspid valve (TV) is a multi-component complex structure [1]. In classic anatomic studies the anterior, septal and posterior TV cusps were described [2, 3]. Unlike the aortic and mitral valves, it is not possible to visualize all TV cusps simultaneously in one cross-sectional view by standard transthoracic two-dimensional echocardiography (2DE) [4]. During transesophageal 2DE, small changes in transducer angle, probe position and rotation may bring to light some additional TV details [5, 6]. However, because of the position of the TV in the far field in relation to the probe, transesophageal 2DE can still only provide limited information and also cannot visualize all TV cusps simultaneously. In three-dimensional (3D) transesophageal image reconstruction and intracardiac echocardiography studies this goal could be achieved, but at the cost of some procedural risks and an increase in procedural duration [7, 8]. Real-time three-dimensional echocardiography (RT3DE) can visualize the atrio-ventricular valves from both the ventricular and atrial sides in detail without these limitations [9]. This study aimed to apply RT3DE for quantitative and qualitative assessment of normal TV anatomy. Subjects and methods In one hundred patients (mean age 30 ± 9 years, 65% males) the TV was examined by transthoracic RT3DE after informed consent. All patients had sinus rhythm and a normal right-sided heart (normal right ventricular dimensions and function, normal right atrial dimension, trivial or absent tricuspid regurgitation and normal tricuspid valve function). Only patients with good 2DE image quality were included. RT3DE was done with a commercially available ultrasound system (Philips Sonos 7500, Best, The Netherlands) attached to an X4 matrix-array transducer capable of providing real-time B-mode images. The 3D data set was collected within approximately 5–10 s of breath holding in full-volume mode from an apical window and transferred for off-line analysis with TomTec software (Unterschleissheim, Munich, Germany). Data analysis of the 3D images was based on a two-dimensional approach relying on images obtained initially from the apical 4-chamber view. The images were adjusted to put the TV in the center of interest. To exclude non-relevant tissue, the TV was sliced between the two narrowest lines within which all parts of the TV leaflets were still contained. The TomTec software allows in this way visualization of the short-axis TV view in a 3D display (see Fig. 1). RT3DE gain and brightness were adjusted to improve delineation of anatomic structures. The following points were checked for visualization: (1) tricuspid annulus diameter and area, (2) TV leaflets (number, mobility, thickness and relation to each other), (3) TV area, and (4) TV commissures (antero-septal, antero-posterior, and postero-septal), including the position of their closure lines. All these structures were classified according to a subjective 4-point scale for image quality (1 = not visualized, 2 = inadequate, 3 = sufficient and 4 = good). Fig. 1 TomTec quad screen display of the tricuspid valve.
The upper two images represent two-dimensional views created from the 3D data set (4-chamber, left, and orthogonal view, right). The lower left image represents a two-dimensional short-axis view and the lower right image represents the 3D image. For quantitative assessment of the TV, the following RT3DE data were obtained: 1) TV annulus diameter, defined as the widest diameter that could be measured from an end-diastolic still frame; 2) maximal TV annulus area, obtained from an end-diastolic still frame and measured by manual planimetry; 3) TV area, defined as the narrowest part of the TV at the time of maximal opening and measured by manual planimetry; and 4) TV commissural width, obtained from a late-diastolic still frame using the zoom function to avoid underestimation. The images were optimized for each commissure along its plane to measure the maximal width of the angle formed by the two adjacent TV leaflets. To identify the TV leaflets visualized in the standard 2DE images, the TomTec quad screen display was used. As seen in Fig. 1, this screen contains four images; the upper two images are 2DE images perpendicular to each other, and the lower two images are a short-axis 2DE image and a RT3DE image. From the properly chosen two-dimensional image a mid-diastolic frame was selected to visualize the TV leaflets just separated from each other. Each leaflet was defined by a marker, after which the marker position was compared with the RT3DE image to determine which leaflet was shown in the 2DE images. Analysis of images was done independently by two experienced echocardiographers (AMA, JSM). Each dealt with the full-volume image as acquired from the echo machine, and the selection of cut plane, angulation and gain setting depended on his experience. Statistical analysis All data obtained by RT3DE are presented as mean ± SD. Interobserver and intraobserver agreement for the visualization score was estimated using kappa values for each morphologic feature and classified as poor (kappa < 0.4), moderate (kappa 0.4 to 0.7), or good (kappa > 0.7). Interobserver and intraobserver variability for RT3DE measurements was assessed according to the Bland and Altman method in a randomly selected group of 50 patients [10]; a short worked sketch of both statistics is given below. Visualization scores are summarized in Table 1.

Table 1 Scores for real-time three-dimensional echocardiography visualization of TV structures

Score              | TV annulus | TV leaflets | TV area   | TV commissures
Good (4)           | 60%        | 80%         | 55%       | 50%
Sufficient (3)     | 30%        | 10%         | 30%       | 20%
Inadequate (2)     | 10%        | 10%         | 15%       | 20%
Not visualized (1) | 0%         | 0%          | 0%        | 10%
Mean score         | 3.5 ± 0.7  | 3.7 ± 0.6   | 3.4 ± 0.7 | 3.1 ± 1.0
Median score       | 3.0        | 3.0         | 3.0       | 2.5

Abbreviations: TV = tricuspid valve

Results Acquisition and analysis of the RT3DE data were performed in approximately 10 min per patient. The TV could be visualized en face in 90% of patients, from both the ventricular and atrial aspects, in relation to adjacent cardiac structures. In these 90 patients detailed analysis of the TV was performed, including tricuspid annulus shape and size; TV leaflet shape, size and mobility; and commissural width. Tricuspid annulus Tricuspid annulus visualization was good in 54 patients (60%), sufficient in 27 patients (30%), and inadequate in 9 patients (10%). As seen in Fig. 2, the tricuspid annulus appeared oval rather than circular. Tricuspid annulus diameter and area could be measured in 63 patients (70%); normal values were 4.0 ± 0.7 cm and 10.0 ± 2.9 cm2, respectively.
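The two agreement statistics named in the Statistical analysis section, kappa for the categorical visualization scores and the Bland–Altman method for the continuous RT3DE measurements, are simple to compute. The following is a minimal illustrative sketch, not the authors' code: the observer scores and annulus measurements in it are hypothetical, and kappa is computed with scikit-learn's implementation.

```python
# Minimal sketch of the two agreement statistics used in this study.
# All observer data below are hypothetical.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Cohen's kappa for the 4-point visualization score
# (study classification: <0.4 poor, 0.4-0.7 moderate, >0.7 good).
obs1 = np.array([4, 3, 4, 2, 3, 4, 1, 3])   # observer 1 scores
obs2 = np.array([4, 3, 4, 2, 2, 4, 1, 3])   # observer 2 scores
kappa = cohen_kappa_score(obs1, obs2)

# Bland-Altman limits of agreement for a continuous measurement:
# mean difference +/- 1.96 SD of the pairwise differences.
def bland_altman(a, b):
    diff = a - b
    md = diff.mean()
    sd = diff.std(ddof=1)
    return md, md + 1.96 * sd, md - 1.96 * sd

annulus_obs1 = np.array([40.1, 38.5, 42.0, 36.7])   # mm, hypothetical
annulus_obs2 = np.array([40.5, 38.0, 41.6, 37.2])
md, upper, lower = bland_altman(annulus_obs1, annulus_obs2)
print(f"kappa = {kappa:.2f}; mean diff = {md:.2f} mm; "
      f"limits of agreement = ({lower:.2f}, {upper:.2f}) mm")
```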
Fig. 2 Oval-shaped tricuspid annulus (the line represents the tricuspid annulus diameter; the dots demarcate the area). Tricuspid valve leaflets Visualization of the three TV leaflets (in motion) was good in 72 patients (80%), sufficient in 9 patients (10%), and inadequate in another 9 patients (10%). The anterior leaflet was the largest and most mobile of the three leaflets and had a nearly semicircular shape. The septal leaflet was the least mobile and had a semi-oval shape; its position was parallel to the interventricular septum. The posterior leaflet was the smallest and had a variable shape. It was clearly separated from the septal leaflet in all patients, but in 10% of patients it was hard to discriminate the posterior leaflet from the anterior leaflet even during maximal TV opening. From the RT3DE data set all standard two-dimensional TV cross-sections (apical 4-chamber, parasternal short-axis and parasternal long-axis right ventricular inflow) were simulated. As seen in Fig. 3, in the apical 4-chamber view the septal leaflet was seen adjacent to the septum and the anterior leaflet adjacent to the right ventricular free wall in all patients. In the parasternal short-axis view, the posterior leaflet was seen adjacent to the right ventricular free wall in 92% of patients; in the remaining 8% no leaflet could be identified in this position, although moving the cut plane downward could identify this leaflet. In this view the leaflet adjacent to the aorta was the anterior leaflet in 52% and the septal leaflet in 48%. In the parasternal right ventricular inflow view the leaflets seen were identical to those in the apical 4-chamber view, with the septal leaflet adjacent to the septum and the anterior leaflet adjacent to the right ventricular free wall in all patients. Fig. 3 Identification of the tricuspid valve leaflets seen on two-dimensional imaging. Below the 2D images, the percentage of leaflet identification in each standard view based on the RT3DE images. Tricuspid valve area Visualization of the triangular TV area was good in 50 patients (55%), sufficient in 27 patients (30%), and inadequate in 13 patients (15%). As seen in Fig. 4, the anterior and septal leaflets formed the angle of the TV area and the small posterior leaflet formed its base. TV area could be measured in 77 patients (86%); mean TV area in these patients was 4.8 ± 1.6 cm2. Fig. 4 Triangular-shaped TV area and commissural views. Tricuspid valve commissures As seen in Fig. 4, the three TV leaflets were separated from each other by three commissures. The commissures and the direction of their closure lines were well visualized in 45 patients (50%), sufficiently visualized in 18 patients (20%), inadequately visualized in 18 patients (20%), and not visualized in 9 patients (10%). TV commissural width could be obtained in 63 patients (70%); mean commissural width in these patients was 5.4 ± 1.5 mm for the antero-septal commissure, 5.2 ± 1.5 mm for the postero-septal commissure, and 5.1 ± 1.1 mm for the antero-posterior commissure. Visualization and measurement of the commissures was relatively easy for the antero-septal commissure and most difficult for the antero-posterior commissure. All measurements are listed in Table 2 as absolute values and indexed to body surface area.
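Table 2 follows below. As a small illustration of the indexing arithmetic used there, each absolute measurement is divided by the patient's body surface area (BSA). The paper does not state which BSA formula was used, so the Mosteller formula in this sketch is an assumption, and the height and weight are hypothetical.

```python
# Sketch of indexing an absolute TV measurement to body surface area.
# The Mosteller BSA formula is an assumption; height/weight are hypothetical.
from math import sqrt

def bsa_mosteller(height_cm: float, weight_kg: float) -> float:
    # BSA in m^2: sqrt(height[cm] * weight[kg] / 3600)
    return sqrt(height_cm * weight_kg / 3600.0)

bsa = bsa_mosteller(175, 70)            # ~1.84 m^2
annulus_diameter_cm = 4.0               # mean absolute value from Table 2
indexed = annulus_diameter_cm / bsa     # cm/m^2; cf. 2.2 +/- 0.4 in Table 2
print(f"BSA = {bsa:.2f} m^2; indexed diameter = {indexed:.2f} cm/m^2")
```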
Table 2 Normal (absolute and indexed) values of the tricuspid valve annulus (diameter and area), tricuspid valve area and the width of the 3 commissures

Parameter                   | Absolute value    | Indexed value
Tricuspid annulus diameter  | 4.0 ± 0.7 cm      | 2.2 ± 0.4 cm/m2
Tricuspid annulus area      | 10.0 ± 2.9 cm2    | 5.5 ± 1.6 cm2/m2
Tricuspid valve area        | 4.8 ± 1.6 cm2     | 2.7 ± 0.9 cm2/m2
Antero-septal commissure    | 5.4 ± 1.5 mm      | 2.9 ± 0.8 mm/m2
Postero-septal commissure   | 5.2 ± 1.5 mm      | 2.9 ± 0.7 mm/m2
Antero-posterior commissure | 5.1 ± 1.1 mm      | 2.8 ± 0.6 mm/m2

Interobserver variability The visualization score between the two observers was in good agreement for the TV annulus (kappa value 0.91) and TV leaflets (kappa value 0.71) and in moderate agreement for the TV commissures (kappa value 0.59). As seen in Fig. 5, good interobserver correlations were found for measurement of the TV annulus (r = 0.98, P < 0.0001) and TV area (r = 0.95, P < 0.0001), and fair correlation was found for TV commissural width (r = 0.51, P < 0.001). In the same figure, the interobserver agreement for TV annulus diameter (mean difference −0.28 ± 1.20 mm; limits of agreement 2.12, −2.68), TV area (mean difference 0.17 ± 0.52 cm2; limits of agreement 1.21, −0.87) and mean TV commissural width (mean difference 0.01 ± 0.62 mm; limits of agreement 1.25, −1.24) is displayed. Fig. 5 Interobserver correlations (top) and Bland–Altman analysis (bottom) of TV annulus, leaflets, and commissures. Intraobserver variability The visualization score of the first observer at 2 separate sessions was in good agreement for the TV annulus (kappa value 0.92) and TV leaflets (kappa value 0.73) and in moderate agreement for the TV commissures (kappa value 0.58). Intraobserver agreement was −0.26 ± 1.15 mm (limits of agreement 2.14, −2.56) for TV annulus diameter, 0.15 ± 0.50 cm2 (limits of agreement 1.15, −0.85) for TV area, and 0.02 ± 0.60 mm (limits of agreement 1.22, −1.18) for mean TV commissural width. Discussion Two-dimensional echocardiography is a valuable imaging modality for the functional assessment of the TV [11–13]. However, with 2DE it is not possible to visualize all TV cusps simultaneously in one cross-sectional view, nor can detailed anatomical information on the TV annulus, leaflets, and commissures be provided. Previous studies and case reports have described visualization of the TV by RT3DE [9, 14] in abnormal states, whereas this study applied RT3DE to the morphological assessment of normal TV anatomy. RT3DE allowed analysis of the TV annulus, leaflets and commissures in the majority of patients. Besides this morphologic description, quantitative assessment could be obtained. However, it should be noted that only patients with good 2DE image quality underwent RT3DE. In our experience these patients represent over 50% of the total number of patients referred to our echocardiographic laboratory. Nevertheless, RT3DE allowed TV analysis to a level quite comparable to that recently reported by others for the mitral valve leaflets [15]. One of the salient findings of our study was the identification of the TV leaflets as seen in the routine 2DE views, which is still a matter of controversy in echocardiographic textbooks. In one well-known echocardiographic textbook [16], the leaflet seen in the apical 4-chamber view adjacent to the right ventricular free wall is described as being the anterior or posterior leaflet depending on the exact rotation and angulation of the image plane. In our study, however, this leaflet was consistently found to be the anterior leaflet (see Fig. 6 for explanation), as described in another textbook [17].
Also, both these echocardiographic textbooks [16, 17] describe the leaflet adjacent to the right ventricular free wall in the parasternal short-axis view as the anterior one. However, as shown in Fig. 3 and explained in Fig. 6, in all patients in our study in whom a leaflet could be identified in this position it was the posterior one. Fig. 6 Surgical view of the heart valves demonstrating the range of the two-dimensional echocardiographic 4-chamber and short-axis planes. In our study the tricuspid annulus diameter (and area) could be reliably obtained with RT3DE. Tricuspid annulus measurement is of critical importance in the TV surgical decision-making process when a patient is operated on for mitral valve disease and has concomitant tricuspid regurgitation [18, 19]. In addition, TV area could be reliably obtained, which may have important implications for the diagnosis of tricuspid stenosis [20, 21]. Visualization of the commissures and measurement of their width were difficult, in particular for the antero-posterior commissure. Commissural width also showed weak interobserver correlation. This may be due to differences in the commissural levels and to tissue dropout. For proper assessment of the three commissures, more cut planes with different angles are needed. Nevertheless, assessment of commissural width may be a valuable tool for the diagnosis, follow-up, and selection of therapeutic strategy in tricuspid stenosis. All our RT3DE measurements were consistent with the measurements described in anatomical studies [2, 3]. Our data may take RT3DE a step further into clinical routine (providing accurate TV measurements) and may enhance the understanding of TV morphology during the cardiac cycle (Fig. 7). The detailed assessment offered by RT3DE may also affect therapeutic decisions in various TV abnormalities and thus expand the abilities of non-invasive cardiology [22]. Fig. 7 Visualization of the 3 TV leaflets during valve closure (A), at early diastole (B), and at late diastole (C). Limitations of the study The main limitation of this study is that the RT3DE data were not compared with a “gold standard” such as magnetic resonance imaging, autopsy or surgical findings. Also, RT3DE image quality depends critically on 2DE image quality, and images could be obtained only in patients in sinus rhythm during breath holding, which limits the general applicability of the technique. Furthermore, the study included patients within a narrow age range (21–39 years); the normal findings of this study are therefore confined to this age group and cannot be extrapolated to younger (<21 years) or older (>39 years) patients. Conclusion Three-dimensional imaging of the TV is feasible in a large number of patients. RT3DE may add to functional 2DE data in the description of TV anatomy, providing highly reproducible anatomic and functional measurements.
[ "normal tricuspid valve", "real-time three-dimensional echocardiography", "tricuspid valve anatomical structure" ]
[ "P", "P", "R" ]
Diabetologia-4-1-2170456
Duration of breast-feeding and the incidence of type 2 diabetes mellitus in the Shanghai Women’s Health Study
Aims/hypothesis The aim of this study was to examine the association between lifetime breast-feeding and the incidence of type 2 diabetes mellitus in a large population-based cohort study of middle-aged women. Introduction The prevalence of type 2 diabetes mellitus has been increasing rapidly worldwide [1], making knowledge of risk factors and protective factors associated with type 2 diabetes mellitus essential for the development of prevention strategies. Results from animal and human studies suggest an improvement in glucose metabolism and insulin sensitivity during lactation [2–5]. Data from two large cohorts in the USA also indicate that a longer duration of breast-feeding may reduce the risk of type 2 diabetes mellitus by improving glucose homeostasis [6]. We examined the association between lifetime breast-feeding and the incidence of type 2 diabetes mellitus in a large population-based cohort study of middle-aged women, the Shanghai Women’s Health Study (SWHS). Methods Study population The SWHS is a population-based prospective cohort study of middle-aged women (40–70 years old) conducted in seven urban communities of Shanghai, China. Details of the SWHS survey have been reported elsewhere [7]. Of a total of 81,170 women who were invited to participate, 75,221 were recruited (92.7% participation rate). Reasons for non-participation were refusal (3.0%), absence during the enrolment period (2.6%) and other reasons (health, hearing or speaking problems; 1.6%). After exclusion of women younger than 40 years or older than 70 years at the time of interview (n = 278), 74,942 women remained for the study. Participants completed a detailed survey as part of an in-person interview, which included assessment of dietary intake, physical activity and other lifestyle factors, as well as anthropometric measurements. The original questionnaires, in Chinese, are available from the corresponding author upon request. Protocols for the SWHS were approved by the Institutional Review Boards of all institutes involved in the study, and written informed consent was obtained prior to interview. Biannual in-person follow-up of all living cohort members was conducted by in-home visits from 2000 to 2002 and from 2002 to 2004, with response rates of 99.8 and 98.7%, respectively; only 934 participants were lost to follow-up. Outcome ascertainment Incident type 2 diabetes mellitus was identified through the follow-up surveys. A total of 1,561 parous women reported a type 2 diabetes mellitus diagnosis since the baseline survey. For the current study we considered a case of type 2 diabetes mellitus to be confirmed if the participant reported having been diagnosed with type 2 diabetes mellitus and met at least one of the following criteria: a fasting glucose level ≥7 mmol/l on at least two separate occasions, an oral glucose tolerance test with a value ≥11.1 mmol/l, and/or use of hypoglycaemic medication (i.e. insulin or oral hypoglycaemic drugs). All tests were performed as part of the patients’ primary care. Of the self-reported cases, a total of 869 participants met the study outcome criteria and are referred to herein as confirmed cases of type 2 diabetes mellitus. We performed analyses restricted to confirmed cases as well as analyses including all cases of type 2 diabetes mellitus. Breast-feeding duration assessment Information on each pregnancy was obtained during the in-person interview. This included the date and outcome of each pregnancy and whether and for how long the participant breastfed each child.
Based on this information, we calculated the duration of breast-feeding (months and years) and the duration of breast-feeding per live birth (years). Measurement of potential confounders Anthropometric measurements were taken at baseline recruitment according to a standard protocol by trained interviewers who were retired medical professionals [8]. Self-reported body weight history was obtained for ages 20 and 40 years and at baseline recruitment. We calculated BMI (weight in kg divided by the square of height in m), WHR (waist circumference divided by hip circumference) and standardised weight change (the difference between measured weight at baseline and weight at 20 years, divided by the interval between study recruitment and age 20 years; kg/year). A structured questionnaire was used at the baseline survey to collect information on socio-demographic factors such as age, level of education (none, elementary school, middle/high school, college), family income in yuan per year (<10,000, 10,000–19,999, 20,000–29,999, >30,000), occupation (professional, clerical, manual labour/other, housewife/retired), smoking (smoked at least one cigarette per day for more than 6 months continuously) and alcohol consumption (had ever drunk beer, wine or spirits at least three times per week). History of diseases such as diabetes, cancer, cardiovascular disease and high blood pressure was also collected. Information on physical activity was obtained using a validated questionnaire [9]. The questionnaire evaluated exercise and sport participation, daily activity and the daily commuting round trip to work. We calculated the metabolic equivalents (METs) for each activity using a compendium of physical activity values [10], and derived a quantitative estimate of overall non-occupational activity (MET-h per day). Dietary intake was assessed through an in-person interview using a validated food frequency questionnaire at the baseline recruitment survey and at the first follow-up survey [11]. The Chinese food composition tables [12] were used to estimate energy intake (kJ/day). Statistical analysis Person-years for each participant were calculated as the interval from baseline recruitment to the first of the following: diagnosis of type 2 diabetes mellitus, death (censored) or completion of the second follow-up survey. The Cox proportional hazards model was used to assess the effect of breast-feeding on the incidence of type 2 diabetes mellitus. Tests for trend were performed by entering the categorical variables as continuous parameters in the models. In all models, we adjusted for the following potential confounding variables: age, BMI, WHR, total energy, physical activity and number of live births (entered as continuous variables), and income level, education level, occupation, smoking status, alcohol consumption status and presence of hypertension at baseline (as categorical variables). We also derived a propensity score by regressing breast-feeding (yes/no) on risk factors for type 2 diabetes mellitus (age, daily energy intake, BMI, WHR, smoking, alcohol consumption, physical activity, hypertension, income, education level, occupation status, oral contraceptive use, vegetable intake, legume intake, meat intake and staples intake). We then examined the association between breast-feeding (yes/no) and type 2 diabetes mellitus by including the propensity score in the Cox proportional hazards model.
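The modelling steps just described translate directly into code. The sketch below is illustrative only: the study's analyses were performed in SAS (see the next paragraph), and every file and column name here is hypothetical. It fits a covariate-adjusted Cox proportional hazards model with the lifelines library and then the propensity-score variant with a logistic regression.

```python
# Illustrative sketch of the two analysis steps described above.
# Not the authors' code; data file and column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("swhs_analysis_file.csv")   # hypothetical analysis data set

continuous = ["age", "bmi", "whr", "energy_kj", "met_h_per_day", "live_births"]
categorical = ["income", "education", "occupation", "smoker", "drinker",
               "hypertension"]

# Step 1: fully adjusted Cox model; categorical confounders become dummies.
model_df = pd.get_dummies(
    df[["person_years", "t2dm", "bf_years"] + continuous + categorical],
    columns=categorical, drop_first=True,
)
cph = CoxPHFitter()
cph.fit(model_df, duration_col="person_years", event_col="t2dm")
cph.print_summary()   # hazard ratios correspond to the reported RRs

# Step 2: propensity score for ever breast-feeding (yes/no), then a Cox
# model adjusting for the score instead of each covariate individually.
ps = LogisticRegression(max_iter=1000).fit(df[continuous], df["ever_bf"])
df["pscore"] = ps.predict_proba(df[continuous])[:, 1]
cph_ps = CoxPHFitter()
cph_ps.fit(df[["person_years", "t2dm", "ever_bf", "pscore"]],
           duration_col="person_years", event_col="t2dm")
```

Entering the breast-feeding category as the single numeric variable bf_years mirrors the trend tests described above; indicator variables per category would instead reproduce the category-specific RRs.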
All analyses were performed using SAS (version 9.1, SAS Institute, Cary, NC, USA), and all tests of statistical significance were based on two-sided probability. A p value of less than 0.05 was considered statistically significant. Results The average age of the entire SWHS cohort (n = 74,942) was 52.1 years (SD = 9.1). The average age at first pregnancy was 25.5 years (median 25.7 years) and the median number of live births was 1.0. A total of 3.3% of the women had no children, 54.4% had only one child, 21.2% had two children, 10.5% had three children, 6.2% had four children and 4.3% had five or more children. The incidence of diabetes mellitus was 6.0 cases per 1,000 person-years at risk. There were 62,095 parous women in this study with no prior history of type 2 diabetes mellitus, cancer or cardiovascular disease at study recruitment; of these, 50,700 (81.65%) reported having breastfed their children. The average number of live births was 1.7, the average duration of breast-feeding was 14.6 months, and the average duration of breast-feeding per child was 0.6 years. The number of live births was associated with months of breast-feeding (Spearman correlation coefficient 0.70; p < 0.001). Age-standardised characteristics of the study population by duration of breast-feeding are shown in Table 1. Women with a longer duration of breast-feeding were older, had less education and lower income, and were less likely to be employed at the time of the survey or to hold a professional job. Duration of breast-feeding was also associated with smoking and higher physical activity in this population. Participation in exercise and the prevalence of hypertension at baseline were not associated with duration of breast-feeding (p > 0.05).

Table 1 Age-standardised characteristics of parous women in the Shanghai Women’s Health Study by duration of breast-feeding

Duration of breast-feeding (months): 0 | >0 to 6 | >6 to 11 | >11 to 35 | ≥36 | p value for trend(a)
Women (n): 11,395 | 10,463 | 14,958 | 18,144 | 7,135
Cases of diabetes (n): 225 | 163 | 253 | 585 | 335
Person-years (n): 53,038 | 48,770 | 69,741 | 83,870 | 32,535
<10 years since last pregnancy (%): 26.9 | 21.9 | 28.9 | 18.6 | 3.7
Age, median (Q25–Q75): 45 (42–49) | 45 (42–49) | 45 (42–49) | 54 (49–61) | 64 (62–67) | <0.001
Energy intake, mean kJ/day (SE): 6,875 (14) | 6,927 (15) | 6,920 (13) | 6,985 (11) | 6,738 (21) | <0.001
Live births, median (Q25–Q75): 1 (1–1) | 1 (1–1) | 1 (1–1) | 2 (1–2) | 4 (3–5) | <0.001
BMI, mean kg/m2 (SE): 23.4 (0.03) | 23.2 (0.03) | 23.6 (0.03) | 24.2 (0.02) | 24.7 (0.04) | <0.001
WHR, mean (SE): 0.80 (0.0005) | 0.8 (0.0005) | 0.80 (0.0005) | 0.81 (0.004) | 0.82 (0.007) | <0.001
Weight gain, mean kg/year (SE): 0.34 (0.003) | 0.31 (0.003) | 0.33 (0.003) | 0.36 (0.003) | 0.37 (0.005) | <0.001
Weight at age 20 years, median kg (Q25–Q75): 48 (45–53) | 49 (45–53) | 50 (45–54) | 50 (45–54) | 50 (45–55) | <0.001
Smoker (%): 1.7 (1.5–1.9) | 1.4 (1.2–1.9) | 1.5 (1.3–1.7) | 2.4 (2.3–2.5) | 3.6 (3.2–4.0) | <0.001
Ever drinker (%): 1.9 (1.7–2.1) | 2.1 (1.9–2.3) | 2.1 (1.9–2.3) | 2.4 (2.3–2.5) | 1.9 (1.7–2.1) | <0.001
Exercise (%): 30.5 (29.5–31.5) | 32.9 (31.9–33.9) | 34.3 (33.3–35.3) | 33.7 (32.7–34.7) | 31.4 (30.4–32.4) | 0.18
High physical activity(b) (%): 22.8 (22.0–23.6) | 22.1 (21.3–22.9) | 24.4 (21.6–25.2) | 27.3 (26.9–27.7) | 41.2 (40.2–42.2) | <0.001
Education (%), p for trend <0.001
  None: 11.0 (12.4–11.6) | 8.5 (7.9–9.1) | 9.6 (9.0–10.2) | 17.8 (17.4–18.2) | 41.2 (40.2–42.2)
  Elementary: 38.2 (37.2–39.2) | 34.4 (33.4–35.4) | 39.2 (38.2–40.2) | 44.3 (43.8–44.8) | 46.9 (47.9–48.9)
  Middle/high school: 32.5 (31.5–33.5) | 34.3 (33.3–35.3) | 32.4 (31.4–33.4) | 27.1 (26.7–27.5) | 8.6 (8.0–9.2)
  College: 18.3 (17.5–19.1) | 22.7 (21.9–23.5) | 18.8 (18.0–19.6) | 10.8 (10.5–11.1) | 3.3 (2.9–3.7)
Household income (%), p for trend <0.001
  <10,000 yuan: 14.0 (13.4–14.6) | 11.6 (11.0–12.2) | 12.6 (12.0–13.2) | 15.6 (15.3–15.9) | 22.1 (21.3–22.9)
  10,000–19,999 yuan: 38.8 (37.8–39.8) | 35.8 (34.8–36.8) | 37.1 (36.1–38.1) | 38.0 (37.5–38.5) | 38.7 (37.7–39.7)
  20,000–29,999 yuan: 29.4 (28.5–30.3) | 31.3 (30.3–32.3) | 30.4 (29.4–31.4) | 28.6 (28.2–29.0) | 27.8 (27.0–28.6)
  >30,000 yuan: 17.8 (17.0–18.6) | 21.4 (20.6–22.4) | 19.8 (19.0–20.6) | 17.8 (17.4–18.2) | 11.4 (10.8–12.0)
Occupation (%), p for trend <0.001
  Professional: 21.6 (20.8–22.4) | 25.9 (25.1–26.7) | 22.2 (21.4–23.0) | 16.4 (15.6–17.2) | 6.8 (6.2–7.4)
  Clerical: 12.8 (12.2–13.4) | 12.4 (11.6–12.8) | 13.2 (12.6–13.8) | 13.7 (13.4–14.0) | 19.5 (18.7–20.3)
  Manual labour/others: 21.8 (21.0–22.6) | 21.0 (20.2–21.8) | 23.1 (22.3–23.9) | 23.6 (23.2–24.0) | 20.0 (19.2–20.8)
  Housewife/retired: 43.8 (42.8–44.8) | 40.7 (39.7–41.7) | 41.6 (40.6–42.6) | 46.3 (45.8–46.8) | 53.6 (52.6–54.6)
Hypertension (%): 18.4 (17.6–19.2) | 18.3 (17.5–19.1) | 18.8 (18.0–19.6) | 19.8 (19.4–20.2) | 16.6 (15.8–17.4) | 0.12

Data are presented as n, percent, median (Q25–Q75, interquartile range) and mean (SE), and are directly standardised to the age distribution of the population. Means of energy intake, BMI, WHR and standardised weight gain are adjusted for age. Weight at 20 years and number of live births were not normally distributed and could not be adjusted for age.
(a) p values for trend were calculated by proportional odds model for prevalence of population characteristics; ANOVA for daily energy intake, BMI, WHR and standardised weight gain; and Kruskal–Wallis test for age, number of live births and weight at 20 years.
(b) High physical activity: participants in the upper quartile of total METs.

Women who had breastfed tended to have a lower risk of type 2 diabetes mellitus [relative risk (RR) = 0.88; 95% CI, 0.76–1.02; p = 0.08] than women who had never breastfed, in analyses adjusted for age, daily energy intake, BMI, WHR, number of live births, occupation, income level, education, smoking, alcohol consumption, physical activity and presence of hypertension (Table 2). When we adjusted the analysis using a propensity score based on predictors of type 2 diabetes mellitus, the RR was 0.81 (95% CI, 0.70–0.94; p < 0.01).

Table 2 Associations between type 2 diabetes mellitus and duration of breast-feeding for all participants, Shanghai Women’s Health Study

Category: % | cases/person-years | RR | 95% CI
Breast-feeding
  No: 18.35 | 225/53,038 | 1.00
  Yes: 81.65 | 1,336/234,916 | 0.88 | 0.76–1.02
Duration of breast-feeding, years (p for trend 0.01)
  0: 18.35 | 225/53,038 | 1.00
  >0–0.99: 47.99 | 416/118,511 | 0.88 | 0.75–1.04
  >0.99–1.99: 17.31 | 343/57,418 | 0.89 | 0.75–1.06
  >1.99–2.99: 7.57 | 242/26,451 | 0.88 | 0.71–1.07
  >2.99–3.99: 4.15 | 148/14,929 | 0.75 | 0.59–0.96
  ≥4: 4.63 | 187/17,606 | 0.68 | 0.52–0.90
Years per child (p for trend 0.11)
  0: 18.35 | 225/53,038 | 1.00
  >0–0.49: 15.10 | 200/43,615 | 0.91 | 0.75–1.10
  >0.49–0.99: 44.20 | 659/127,453 | 0.87 | 0.74–1.02
  ≥1: 22.35 | 477/63,848 | 0.87 | 0.78–1.03

Values are adjusted for age, daily energy intake, BMI, WHR, number of live births, smoking, alcohol consumption, physical activity, education, income, occupation and hypertension.

The duration of breast-feeding and the duration of breast-feeding per child were associated with a lower risk of type 2 diabetes mellitus (Table 2). The fully adjusted RRs for 0, >0 to 0.99, >0.99 to 1.99, >1.99 to 2.99, >2.99 to 3.99 and ≥4 years of breast-feeding were 1.00, 0.88, 0.89, 0.88, 0.75 and 0.68. The fully adjusted RRs for 0, >0 to 0.49, >0.49 to 0.99 and ≥1 years of breast-feeding per child were 1.00, 0.91, 0.87 and 0.87 (p = 0.11 for trend).
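The cases/person-years columns of Table 2 also allow the crude rate arithmetic to be verified directly; a minimal sketch using the two breast-feeding rows:

```python
# Crude incidence rates implied by the cases and person-years in Table 2
# (per 1,000 person-years).
rows = {"never breastfed": (225, 53_038), "ever breastfed": (1_336, 234_916)}
for label, (cases, person_years) in rows.items():
    rate = 1000 * cases / person_years
    print(f"{label}: {rate:.1f} per 1,000 person-years")
# never breastfed: ~4.2; ever breastfed: ~5.7. The crude rate is higher in
# the breast-feeding group (which is older and of higher parity), which is
# why the covariate-adjusted RRs in Table 2, not these crude ratios, are
# the reported estimates.
```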
Associations between months of breast-feeding and the incidence of type 2 diabetes mellitus are presented in Table 3. BMI and WHR were the two major confounding factors for the association between the duration of breast-feeding and the risk of type 2 diabetes mellitus. Both BMI and WHR were weakly correlated with breast-feeding (r = 0.22 for each), and the correlation between BMI and WHR was 0.46. The association between the duration of breast-feeding and diabetes mellitus was accentuated, and the trend became marginally significant, when we adjusted the model for BMI. Additional adjustment for WHR made the trend statistically significant (p = 0.04). The association between breast-feeding and type 2 diabetes mellitus did not change much after adjustment for the number of live births. When we restricted the analysis of breast-feeding and incidence of type 2 diabetes to confirmed cases of type 2 diabetes mellitus, we found similar patterns of association (Table 4).

Table 3 Incidence of type 2 diabetes by duration of breast-feeding, Shanghai Women’s Health Study; RR (95% CI) by duration category

Model: 0 | >0 to 6 months | >6 to 11 months | >11 to 35 months | ≥36 months | p value for trend
Model 1: 1.00 | 0.81 (0.66–0.99) | 0.88 (0.74–1.07) | 1.08 (0.92–1.27) | 1.03 (0.84–1.25) | 0.20
Model 2: 1.00 | 0.84 (0.69–1.03) | 0.89 (0.74–1.07) | 0.99 (0.85–1.18) | 0.88 (0.71–1.08) | 0.75
Model 3: 1.00 | 0.84 (0.69–1.03) | 0.88 (0.73–1.05) | 0.90 (0.76–1.05) | 0.76 (0.62–0.93) | 0.06
Model 4: 1.00 | 0.87 (0.71–1.06) | 0.91 (0.76–1.09) | 0.98 (0.83–1.15) | 0.84 (0.68–1.03) | 0.39
Model 5: 1.00 | 0.89 (0.71–1.06) | 0.89 (0.74–1.07) | 0.89 (0.76–1.05) | 0.75 (0.61–0.92) | 0.04
Model 6: 1.00 | 0.87 (0.71–1.06) | 0.89 (0.75–1.07) | 0.89 (0.75–1.05) | 0.73 (0.58–0.91) | 0.05

Model 1: age-adjusted only; Model 2: age plus other confounders, but not BMI, WHR or number of live births; Model 3: model 2 plus BMI; Model 4: model 2 plus WHR; Model 5: model 2 plus BMI and WHR; Model 6: model 5 plus number of live births.

Table 4 Associations between type 2 diabetes mellitus and duration of breast-feeding, Shanghai Women’s Health Study (confirmed cases of diabetes only)

Category: % | cases/person-years | RR | 95% CI
Breast-feeding (p = 0.17)
  No: 18.40 | 128/52,848 | 1.00
  Yes: 81.60 | 741/233,836 | 0.87 | 0.72–1.06
Duration of breast-feeding, months (p for trend 0.17)
  0: 18.40 | 128/52,848 | 1.00
  >0–6: 16.91 | 86/48,617 | 0.80 | 0.61–1.05
  >6–11: 24.21 | 159/69,558 | 0.98 | 0.77–1.24
  >11–35: 29.11 | 314/83,398 | 0.84 | 0.68–1.05
  >36: 11.37 | 182/32,263 | 0.74 | 0.55–1.01
Duration of breast-feeding, years (p for trend 0.01)
  0: 18.40 | 128/52,848 | 1.00
  >0–0.99: 48.20 | 297/11,817 | 0.91 | 0.73–1.13
  >0.99–1.99: 17.25 | 200/57,156 | 0.90 | 0.72–1.13
  >1.99–2.99: 7.48 | 99/26,242 | 0.72 | 0.54–0.95
  >2.99–3.99: 4.10 | 65/14,809 | 0.72 | 0.52–1.01
  >4: 4.57 | 78/17,454 | 0.67 | 0.46–0.96
Years per child (p for trend 0.23)
  0: 18.40 | 128/52,848 | 1.00
  >0–0.49: 15.13 | 116/43,443 | 0.94 | 0.73–1.21
  >0.49–0.99: 44.21 | 359/126,916 | 0.84 | 0.69–1.04
  >1: 22.25 | 266/63,447 | 0.88 | 0.70–1.11

Values are adjusted for age, daily energy intake, BMI, WHR, number of live births, smoking, alcohol consumption, physical activity, education, income, occupation and hypertension.

In analyses restricted to women who reported having been pregnant within the last 10 years (Table 5), the RRs for type 2 diabetes mellitus by duration of breast-feeding per live birth were 1.00, 1.01, 0.74 and 0.65 (p = 0.03 for trend) for 0, >0 to 0.49, >0.49 to 0.99 and ≥1 years of breast-feeding per child. Among women who had been pregnant within the last 10 years, the RRs of type 2 diabetes mellitus for 0, >0 to 0.99, >0.99 to 1.99, >1.99 to 2.99, >2.99 to 3.99 and ≥4 years of breast-feeding were 1.00, 0.80, 0.78, 0.58, 0.47 and 0.45 (p = 0.04 for trend).
The RRs of type 2 diabetes mellitus for 0, >0 to 0.99, >0.99 to 1.99, >1.99 to 2.99, >2.99 to 3.99 and ≥4 years of breast-feeding among women who had been pregnant within the last 5 years were 1.00, 0.87, 0.68, 0.44, 0.69 and 0.49 (p = 0.02 for trend; data not shown in tables). In addition, we found that the RRs for diabetes mellitus associated with ≥12 months of breast-feeding were 0.66 (95% CI, 0.45–0.95), 0.63 (95% CI, 0.40–0.98), 0.49 (95% CI, 0.28–0.85) and 0.46 (95% CI, 0.26–0.86), respectively, when analyses were conducted with time since last pregnancy defined as ≤13, ≤10, ≤5 and ≤2 years (data not shown). In analyses restricted to women who had not been pregnant in the last 10 years, we still found inverse associations between breast-feeding and risk of type 2 diabetes mellitus, although the associations for months of breast-feeding and for years of breast-feeding per child were not significant. The association between years of breast-feeding and type 2 diabetes mellitus was of marginal significance: the RRs of type 2 diabetes mellitus for 0, >0 to 0.99, >0.99 to 1.99, >1.99 to 2.99, >2.99 to 3.99 and ≥4 years of breast-feeding were 1.00, 0.90, 0.94, 0.94, 0.79 and 0.72 (p = 0.06 for trend). Similar trends were found when the analysis was restricted to confirmed cases of type 2 diabetes mellitus. Finally, the beneficial effect associated with the duration of breast-feeding was more evident among women over 60 years of age (data not shown).

Table 5 Duration of breast-feeding and risk of type 2 diabetes mellitus, Shanghai Women’s Health Study; RR (95% CI) by time since last pregnancy: ≤10 years | >10 years

All participants
 Months (p for trend 0.05 | 0.20)
  0: 1.00 | 1.00
  >0–6: 0.90 (0.57–1.42) | 0.88 (0.70–1.10)
  >6–11: 0.83 (0.55–1.25) | 0.91 (0.74–1.12)
  >11–35: 0.63 (0.40–0.98) | 0.94 (0.79–1.13)
  ≥36: 0.66 (0.30–1.48) | 0.76 (0.60–0.97)
 Years (p for trend 0.04 | 0.06)
  0: 1.00 | 1.00
  >0–0.99: 0.87 (0.60–1.24) | 0.90 (0.75–1.08)
  >0.99–1.99: 0.68 (0.43–1.09) | 0.94 (0.77–1.13)
  >1.99–2.99: 0.44 (0.21–0.92) | 0.94 (0.76–1.17)
  >2.99–3.99: 0.69 (0.29–1.62) | 0.79 (0.61–1.02)
  ≥4: 0.49 (0.18–1.35) | 0.72 (0.54–0.96)
 Years per child (p for trend 0.03 | 0.45)
  0: 1.00 | 1.00
  >0–0.49: 1.00 (0.64–1.59) | 0.91 (0.74–1.13)
  >0.49–0.99: 0.74 (0.51–1.07) | 0.90 (0.76–1.07)
  ≥1: 0.65 (0.41–1.05) | 0.92 (0.76–1.10)
Confirmed diabetes only
 Months (p for trend 0.26 | 0.34)
  0: 1.00 | 1.00
  >0–6: 0.80 (0.41–1.53) | 0.81 (0.60–1.09)
  >6–11: 0.96 (0.55–1.65) | 0.98 (0.75–1.27)
  >11–35: 0.69 (0.39–1.22) | 0.87 (0.69–1.11)
  >36: 0.60 (0.23–1.57) | 0.78 (0.56–1.08)
 Years (p for trend 0.10 | 0.06)
  0: 1.00 | 1.00
  >0–0.99: 0.90 (0.55–1.48) | 0.91 (0.72–1.16)
  >0.99–1.99: 0.80 (0.44–1.44) | 0.93 (0.72–1.19)
  >1.99–2.99: 0.44 (0.18–1.09) | 0.76 (0.56–1.03)
  >2.99–3.99: 0.53 (0.17–1.59) | 0.77 (0.54–1.09)
  >4: 0.53 (0.16–1.73) | 0.71 (0.48–1.04)
 Years per child (p for trend 0.25 | 0.44)
  0: 1.00 | 1.00
  >0–0.49: 0.97 (0.52–1.82) | 0.94 (0.71–1.25)
  >0.49–0.99: 0.81 (0.49–1.33) | 0.86 (0.68–1.08)
  >1: 0.73 (0.40–1.34) | 0.92 (0.71–1.17)

Values are adjusted for age, daily energy intake, BMI, WHR, number of live births, smoking, alcohol consumption, physical activity, education, income, occupation and hypertension.

Discussion In this large, prospective population-based study of middle-aged Chinese women, breast-feeding was associated with a reduced risk of type 2 diabetes mellitus [13], independently of known risk factors for type 2 diabetes mellitus. Our study adds to the limited data on the association between breast-feeding and the risk of type 2 diabetes mellitus in mothers. Longer duration of breast-feeding was associated with a reduced incidence of type 2 diabetes mellitus in two large US cohorts of young and middle-aged women, the Nurses’ Health Studies I and II [6].
Similar to our results, the RRs in these studies for parous women with 0, >0 to 3 months, >3 to 6 months, >6 to 11 months, >11 to 23 months and ≥23 months of breast-feeding were 1.00, 0.98, 1.03, 0.96, 0.92 and 0.88 (p = 0.02 for trend) for middle-aged women and 1.00, 1.04, 0.91, 0.87, 0.88 and 0.67 (p < 0.01 for trend) for younger women. The RR for women who had breastfed compared with women who had never breastfed was 0.97 (95% CI, 0.91–1.02) for middle-aged women and 0.90 (95% CI, 0.77–1.04) for younger women. In agreement with our findings, these studies also found that the protection conferred by breast-feeding appeared to wane with time since last birth and that longer duration of breast-feeding per pregnancy was associated with a greater benefit. However, among women with gestational diabetes, breast-feeding was not associated with lower risk of type 2 diabetes mellitus later in life [6]. To our knowledge, no other studies have examined the long-term association between breast-feeding and subsequent development of type 2 diabetes mellitus. In the Nurses’ Health Studies I and II, analyses were adjusted for participants’ birthweight and BMI at 18 years, as the investigators found that duration of breast-feeding was inversely related to BMI at 18 years [6]. In our study, we only had information on birthweight for a subset of participants. The correlation coefficients between duration of breast-feeding and both birthweight of participants and BMI at 20 years were 0.02 and 0.12, respectively. We chose to adjust for current BMI and WHR in our analysis, rather than BMI at 20 years or weight at 20 years, as BMI and WHR were strongly associated with type 2 diabetes mellitus. In addition, measurements used to calculate BMI and WHR in our study were taken by trained professionals at the time of the interview, while weight at 20 years was self-reported. In analyses adjusted for BMI at 20 years, the fully adjusted RRs for 0, >0 to 0.99, >0.99 to 1.99, >1.99 to 2.99, >2.99 to 3.99 and ≥4 years of breast-feeding were 1.00, 0.91, 1.02, 1.09, 0.77 and 0.73 (p = 0.30 for trend). We conducted additional analyses adjusting for standardised weight gain (kg per year) since age 20 years and weight at age 20 years. The fully adjusted RRs for 0, >0 to 0.99, >0.99 to 1.99, >1.99 to 2.99, >2.99 to 3.99 and ≥4 years of breast-feeding were 1.00, 0.90, 0.96, 0.99, 0.76 and 0.68 (p = 0.08 for trend). It has been suggested that breast-feeding may protect against type 2 diabetes mellitus by facilitating weight loss, although the association between breast-feeding and weight loss remains inconclusive [14–20]. In our population, the age-adjusted means of standard weight gain since age 20 years by months of breast-feeding were 0.34, 0.31, 0.33, 0.36 and 0.37 kg per year for 0, >0 to 6 months, >6 to 11 months, >11 to 35 months and ≥36 months of breast-feeding, respectively (p < 0.01). Thus, weight loss is unlikely to be the reason for the inverse association between breast-feeding and type 2 diabetes mellitus observed in our study. Another possible mechanism is that breast-feeding may improve insulin sensitivity and glucose intolerance. In a study of both breast-feeding and non-breast-feeding non-diabetic women, insulin levels and insulin/glucose ratios were lower, while carbohydrate use and total energy expenditure were higher in the breast-feeding group [21]. Data from studies of women with gestational diabetes suggest that breast-feeding affects insulin and glucose homeostasis. 
In a study of 809 Latina women, breast-feeding was associated with improved glucose tolerance, fasting glucose and total area under the glucose tolerance curve [2], while in another study of 26 white women (14 breast-feeding, 12 non-breast-feeding), the breast-feeding group had a higher disposition index, indicating more efficient pancreatic beta cell function [3]. Data from animal studies also suggest that in the post-partum period breast-feeding is associated with a decrease in insulin resistance. Blood glucose levels were reduced by 20% and insulin levels by 35% in lactating rats compared with non-lactating rats [4], while in another study a 12-fold increase in insulin uptake was observed in the mammary glands of lactating rats, as well as a marked decrease in the plasma half-life of insulin [5]. Breast-feeding may also influence pituitary hormones [22] and may induce long-term changes in the hypothalamic-pituitary axis [23]. Our study has several strengths. Our population is representative of urban Shanghai. The high follow-up rates minimised the possibility of selection bias. In addition, the prevalence of breast-feeding in this population was very high (81.65%) and most women were parous (96.4% of the 74,942 women in our study at baseline). Furthermore, the extensive information on potential confounders and the large study size allowed us to examine the effect of duration of breast-feeding on the development of type 2 diabetes mellitus in detail. The major limitation of the study is its reliance on self-reported diabetes, which is an important factor to consider when interpreting these results. A recent report suggested that diabetes is under-diagnosed in Shanghai [13]. We are not aware of any programme for systematic screening for diabetes in our study area. At baseline recruitment, we conducted a urinary glucose test for all cohort members who donated a urine sample (88.2% of participants). We found that 1% of participants who reported never having been diagnosed by a physician as having diabetes had a positive urinary glucose test. These participants were excluded from the current analysis. However, it is possible that some other type 2 diabetes cases remained undiagnosed in our study. Similarly, self-reported diabetes may also include some false positive cases. Misclassification of diabetes could weaken the association between duration of breast-feeding and the risk of type 2 diabetes mellitus. To address the possibility of surveillance bias, we conducted analyses restricted to women with confirmed diabetes and found similar results. In our study, duration of breast-feeding was correlated with parity. In a recent study from the UK, having more children was associated with a higher risk of diabetes in women [24]. However, the association was attenuated after adjustment for BMI and socioeconomic factors. Similar to the UK study, we found parity to be associated with lower income and education levels, higher BMI, WHR and standardised weight gain, and with smoking and alcohol consumption in this population. In age-adjusted analyses the incidence of diabetes was positively associated with the number of children: the RRs of diabetes were 1.00, 1.38, 1.32, 1.31 and 1.62 for one, two, three, four and five or more children (p < 0.01). When we adjusted the analysis for BMI and WHR, parity was no longer associated with the incidence of type 2 diabetes.
We found that the association between breast-feeding and type 2 diabetes did not change much after adjustment for number of live births. Before and after inclusion of the number of live births in the analysis, the RRs of type 2 diabetes mellitus for 0, >0 to 6 months, >6 to 11 months, >11 to 35 months and ≥36 months of breast-feeding were 1.00, 0.87, 0.89, 0.89 and 0.75 (p = 0.04 for trend) and 1.00, 0.87, 0.89, 0.89 and 0.73 (p = 0.05 for trend), respectively. Breast-feeding could also reflect differences in other risk factors in our population that might not have been completely controlled for or measured in our study. We applied a propensity score adjustment, including a wide array of lifestyle and dietary intake variables, and found little change in the pattern of associations between breast-feeding and type 2 diabetes mellitus (data not shown). Nevertheless, residual confounding from unmeasured or less than perfectly measured confounders, such as socioeconomic status at the time of breast-feeding, is still a concern. Thus, our results should be interpreted with caution, particularly given that the association was relatively weak and that the reduced risk was limited to participants with a longer duration of breast-feeding (≥36 months). In addition, over-adjustment due to the inclusion of possible mediating variables such as BMI and WHR in the model may also be a concern. In our study population, breast-feeding was weakly associated with BMI and WHR (r = 0.22 and r = 0.22, respectively). Further studies conducted in other populations and focused on the underlying biological mechanisms are warranted. In summary, we found that breast-feeding duration was inversely associated with the risk of type 2 diabetes mellitus in this population. Together with results from two other large US cohorts and some clinical evidence of improved glucose homeostasis in breast-feeding women, these data suggest that breast-feeding may reduce the risk of type 2 diabetes mellitus in middle-aged women.
[ "type 2 diabetes mellitus", "middle-aged women", "parous women", "lifelong breast-feeding" ]
[ "P", "P", "P", "M" ]
Virchows_Arch-3-1-2039817
Primary retroperitoneal mucinous cystadenoma with sarcoma-like mural nodule
Primary retroperitoneal cystadenomas are extremely rare. This is the first report in the literature to describe a primary retroperitoneal cystadenoma with a sarcoma-like mural nodule. A 45-year-old woman complained of a left-sided abdominal mass. A computed tomography scan revealed a cystic mass with a mural nodule that seemed to originate from the tail of the pancreas. At laparotomy the cyst was not adherent to the pancreas but was localized retroperitoneally. Histologic examination showed a mucinous cystadenoma with only foci of borderline malignancy and a mural “sarcoma-like” nodule. In view of the surgical and histopathological findings, the mucinous cystadenoma was regarded as primary retroperitoneal. This case demonstrates that, in the era of radiological preoperative refinement, pathological diagnosis remains of utmost importance, especially in rare cases. Introduction Mucinous cystadenomas of the ovary are clinically and histopathologically well-established and common tumors. Primary retroperitoneal mucinous cystadenomas are extremely rare. Such tumors are histologically similar to ovarian mucinous cystadenomas, and their histogenesis is still unclear. We report a case of primary retroperitoneal mucinous cystadenoma with foci of borderline malignancy containing a mural “sarcoma-like” nodule. Case report Clinical history A 45-year-old, para 2, woman presented at the emergency room with a 3-week history of left-sided abdominal pain. She had felt a mass in the left lower quadrant 2 days before. Her clinical history included endometriosis and a car accident. The mass was progressive but not painful. Apart from the palpable mass of 15 cm in the left lower abdomen, physical examination was unremarkable. Ultrasonography demonstrated a 15-cm cystic mass with a 3.8-cm nodule in its wall. The uterus was normal in size, and internal ultrasonography showed small ovaries. Carcinoembryonic antigen, cancer antigen (CA) 125 and CA 19-9 levels were within normal limits. The next day, a contrast-enhanced computed tomography scan of the abdomen revealed a 15-cm left-sided cystic mass that seemed to originate from the tail of the pancreas (Fig. 1a). The cystic mass showed a 4-cm nodule in its wall (Fig. 1b) and was suspicious for a cystic papillary adenocarcinoma. At laparotomy, the cyst was not adherent to the pancreas and could easily be separated from its location near the tail without opening the pancreatic capsule. Vascularization appeared to arise from the mesentery of the left colon. The cyst was localized in the retroperitoneal space, extending caudally from the spleen to the lower abdomen with medial displacement of the left colon. Total resection of the cyst was performed, and the specimen was sent for histopathological examination. Further inspection showed two normal ovaries. The postoperative recovery was uneventful. One year after surgery, the patient was without signs of recurrence or metastasis. Fig. 1 Contrast-enhanced CT scan showing a cystic mass (arrowheads) seemingly originating from the tail of the pancreas (a) and a mural nodule within the wall of the cystic mass (b). Materials and methods The specimen was fixed in 4% buffered formalin. Representative samples were routinely processed and embedded in paraffin blocks. Four-micrometer-thick sections were stained with hematoxylin and eosin, with parallel routine immunohistochemical procedures.
The antigens tested by immunohistochemistry were: pan-keratin, keratin Cam 5.2, cytokeratin 7, cytokeratin 10, cytokeratin 18, cytokeratin 20, epithelial membrane antigen, vimentin, desmin, actin, myosin, CD34, CD68, CD99, CD117, S-100 protein, and bcl-2. Pathological findings The specimen consisted of a unilocular cyst measuring 20 × 11 cm with a smooth surface. The content was watery mucinous material. The wall was thin, with a smooth gray-white inner surface, and contained a circumscribed, bean-shaped, solid mural nodule of 3.5 × 3.5 × 2.5 cm with a brown-yellow, focally hemorrhagic cut surface. Microscopically, most of the cyst was lined by a single layer of tall columnar cells with abundant clear cytoplasm and small, basally located nuclei (Fig. 2a). Fig. 2 Cyst wall with typical tall columnar mucus-secreting epithelium and fibrous wall (a). Low cellular proliferation with some stratification of the cells and slight nuclear atypia (b). Heterogeneous proliferation of pleomorphic cells, spindle cells, osteoclast-like giant cells, and some mononuclear cells with some pigment (c). Detail of the sarcoma-like nodule (d). Occasionally (over less than 1% of the surface), the epithelium showed slightly atypical proliferation with glandular budding, tufting of the epithelium, decreased cytoplasmic mucin, some stratification of slightly irregular nuclei, and occasional mitoses (Fig. 2b). There was no infiltrative growth, but there were foci of borderline malignancy. The nodule was well circumscribed, without vascular invasion, and consisted of a heterogeneous population of spindle-shaped cells, pleomorphic cells with bizarre nuclei, mixed mononuclear inflammatory cells, benign osteoclast-like giant cells, and foci of hemorrhage. There were mitotic figures, including some atypical forms (Fig. 2c and d). The sarcoma-like cells proved to be keratin negative (Table 1).

Table 1 Immunohistochemical results of the sarcoma-like cells

Antigen                     | Result
Pan-keratin                 | −
Keratin Cam 5.2             | −
Cytokeratin 7               | −
Cytokeratin 10              | −
Cytokeratin 18              | −
Cytokeratin 20              | −
Epithelial membrane antigen | −
Vimentin                    | +
Desmin                      | −
Actin                       | +/− (some)
Myosin                      | −
CD34                        | −
CD68                        | +
CD99                        | −
CD117                       | −
S-100 protein               | −
bcl-2                       | +/− (some)

Discussion Mucinous cystadenomas can be located in the ovaries, the pancreas, and the retroperitoneum. The mucinous cystadenoma presented here was localized retroperitoneally near the pancreas but was clearly not adherent to it. Because normal-appearing ovaries were found, the cystadenoma was considered primary retroperitoneal. According to the literature, symptoms are nonspecific, and most patients complain of abdominal distension or a mass with or without pain [9]. Reported mucinous cystadenomas were relatively large, varying from 10 to greater than 20 cm in diameter, which is large enough to cause symptoms such as abdominal fullness [9]. Preoperative diagnosis is very difficult, not only because the tumors are often overlooked in the differential diagnosis but also because no sensitive methods or reliable markers are available [2]. As retroperitoneal mucinous cystadenomas are histologically similar to mucinous cystadenomas of the ovary, the ultrasonographic image pattern is in general of no help in distinguishing between ovarian and retroperitoneal origin. In our case, the diagnosis of retroperitoneal mucinous cystadenoma could not be established preoperatively by ultrasonography or computed tomography. Although ultrasonography, computed tomography, or magnetic resonance imaging can detect retroperitoneal cysts, the diagnosis of mucinous cystadenoma is seldom made preoperatively.
The usual preoperative differential diagnosis consists of ovarian cyst, cystic mesothelioma, cystic lymphangioma, nonpancreatic pseudocyst and renal cyst [4, 9, 18]. Although aspiration is a good method for delineating the nature of the cyst, cytologic analysis of the aspirated fluid frequently fails to reveal the type of epithelial cells lining the cyst. Therefore, exploratory laparotomy with complete excision of the cyst is usually indicated, both for diagnosis and for treatment [2]. Retroperitoneal mucinous cystadenomas are histologically similar to mucinous cystadenomas of the ovary. The histogenesis of these tumors is still unclear, and four main hypotheses have been proposed [2, 14]. According to the first three hypotheses, the tumor arises from ectopic ovarian tissue (although ovarian tissue has only rarely been found [14]), from a teratoma in which the mucinous epithelium has overgrown all other components, or from urogenital remnants. The most widely accepted theory proposes coelomic metaplasia, whereby the tumor arises from invagination of the peritoneal mesothelial layer, which undergoes mucinous metaplasia with cyst formation [3, 9]. Such an origin, rather than one from ectopic ovarian tissue, is supported by the occurrence of these tumors in male patients [5, 8, 10, 17]. Primary mucinous tumors of the retroperitoneum are very uncommon. They can be classified into three clinicopathologic types: mucinous cystadenoma, mucinous cystic tumor of borderline malignancy, and mucinous cystadenocarcinoma. Our case was diagnosed as a primary retroperitoneal mucinous cystadenoma with only foci of borderline malignancy and a mural “sarcoma-like” nodule. Mural nodules have been described in ovarian and pancreatic mucinous cystic tumors [7, 16]. Mural nodules may be malignant, representing either anaplastic carcinoma, with a predominant population of cytokeratin-positive cells with high-grade malignant nuclei, or a genuine soft-tissue-type sarcoma [7, 15, 16]. Benign pseudosarcomatous mural nodules are composed of a heterogeneous cell population of epulis-type giant cells, atypical spindle cells with bizarre nuclei and mitotic figures, and mixed inflammatory cells, with signs of hemorrhage and necrosis. In these cases, immunohistochemical staining shows weak or focal cytokeratin positivity in the pseudosarcomatous cells. We performed a literature review using Embase and Medline starting in 1966 and identified approximately 45 cases of retroperitoneal mucinous cystadenoma and 25 cases of mucinous cystadenocarcinoma. Only eight cases of mucinous cystadenoma with borderline malignancy have been reported (Table 2); ours is thus the ninth case of a retroperitoneal mucinous cystadenoma of borderline malignancy. To our knowledge, however, the combination with a mural “sarcoma-like” nodule has not been described in the literature before. The patient should be followed; however, in this case long-term follow-up seems not warranted, given the only focal (<1% of the surface) borderline malignancy of the cyst and the benign, reactive nature of the mural nodule.

Table 2 Cases of primary retroperitoneal mucinous cystadenomas of borderline malignancy. Each entry lists: study (year); age/sex; symptom; size and imaging; tumor marker; preoperative diagnosis; history; pathology; extracystic extension; therapy; outcome.

Nagata et al. [11] (1987): 41 F; abdominal swelling, pain; 12 × 10 × 9 cm; UD; UD; UD; MCAbor; no; TR; UD
Banerjee and Gough [1] (1988): 47 F; abdominal mass; 10 cm (US), next to spl.; ND; lt adrenal tumor; App, Hyst; MCAbor; no; TR, resection of spl. + lt adrenal; NED
Motoyama et al. [10] (1994): 63 M; abdominal pain; 6 cm (US), under rt kidney; high CEA in cystic fluid; rt renal cyst; NR; MCAbor; no; ND; NED
Pearl et al. [13] (1996): 33 F; abdominal swelling, pain; large, unilocular, lt (CT); ND; ND; NR; MCAbor; no; LR; NED, 10 months
Papadogiannakis et al. [12] (1997): 33 F; abdominal mass; 13 × 9 cm (US + CT); ND; mesenteric cyst; NR; MCAbor; no; TR; NED, 12 months
Chen et al. [2] (1998): 48 F; abdominal fullness; 15 × 13 × 9 cm (CT); ND; mesenteric cyst; NR; MCAbor; no; LR; NED, 8 months
Gutsu et al. [6] (2003): 41 F; flank pain, abdominal distension; 21 × 16 cm, rt (CT); ND; retroperitoneal cyst; NR; MCAbor; no; TR; NED, 18 months
Matsubara et al. [9] (2005): 36 F; abdominal distension; 12 × 8 cm, rt (CT); CA 125: 51, CA 19-9: 55; ovarian cyst; NR; MCAbor; no; TR, App, Myo; NED, 6 months
Present case (2007): 45 F; abdominal pain; 15 cm (US + CT); CEA, CA 125, CA 19-9 normal; cystic papillary adenocarcinoma or mucinous cystic neoplasm; Endom; MCAbor; no; TR; NED, 12 months

F female; M male; UD unknown data; NR not remarkable; MCAbor mucinous cystadenoma of borderline malignancy; TR tumor resection; LR laparoscopic resection; NED no evidence of disease; US ultrasonography; spl. spleen; App appendectomy; Hyst hysterectomy; Myo myomectomy; Endom endometriosis; rt right; lt left
[ "mucinous cystadenoma", "mural nodule", "retroperitoneum" ]
[ "P", "P", "P" ]
Environ_Health_Perspect-115-3-1849945
Cumulative Lead Dose and Cognitive Function in Adults: A Review of Studies That Measured Both Blood Lead and Bone Lead
Objective We review empirical evidence on the relations of recent and cumulative lead dose with cognitive function in adults. In the development of the adult lead management guidelines (see Kosnett et al. 2007), a number of health outcomes adversely affected by lead exposure were discussed. Cognitive function was an important consideration because of the growing number of studies in this area and increasing concern that cognitive function in adulthood may be affected by relatively low lead doses. In this article, we systematically review recent evidence concerning recent and cumulative lead dose and adult cognitive function. Measurement of lead dose In reviewing studies of the health effects of lead, it is critical to understand the available lead biomarkers in terms of how they represent external exposure (timing, duration, magnitude, and accumulation); how they are influenced by metabolic factors (organ distribution, compartmental dynamics, and physiologic factors); and how the combination of these considerations affects inferences regarding the health effects of lead (Hu et al. 2007). We conclude from these important methodologic issues that the most informative recent epidemiologic studies of lead’s impact on health are those that were able to derive estimates of both recent and cumulative lead exposure for each study participant. To achieve this end with the greatest precision and accuracy, such studies have incorporated measurements of lead in both blood (whole blood, using standard chemical assays such as graphite furnace atomic absorption spectroscopy) and bone [using noninvasive in vivo K-shell X-ray fluorescence (KXRF) instruments]. Blood lead levels measured in epidemiologic studies with valid instruments and standardized calibration and quality control procedures have been reported in the literature for > 35 years. In vivo KXRF measurement of bone lead began in some research laboratories in the 1980s, but it was not until the mid-1990s that reports began to emerge of KXRF-measured bone lead levels in relation to potential health indicators from epidemiologic studies with sufficient sample sizes (for example, ≥ 100 subjects) to have substantial statistical power. Thus, in this review we summarize all studies to date that measured cognitive function and both blood and bone lead levels (or an acceptable surrogate for cumulative lead dose). Published reviews of relevance to this review We begin with a discussion of three other reviews on the topic of lead dose and cognitive function (Balbus-Kornfeld et al. 1995; Goodman et al. 2002; Meyer-Baron and Seeber 2000). Balbus-Kornfeld et al. (1995) reviewed the evidence on cumulative lead exposure and cognitive function from studies published from 1976 to 1991. Among the 21 unique studies identified at the time of that review, none used a biomarker of cumulative dose. All four longitudinal studies were small (mean analytic sample size of 47 lead-exposed subjects), with relatively low follow-up rates and relatively short durations of follow-up. The authors thus concluded that the available literature provided inadequate evidence on whether cumulative exposure to or absorption of lead adversely affects cognitive function in adults. Goodman et al. (2002) and Meyer-Baron and Seeber (2000) are reviewed here because they reached generally opposite conclusions, which led to considerable controversy and discussion (Goodman et al. 2001; Schwartz et al.
2002; Seeber and Meyer-Baron 2003; Seeber et al. 2002). The Goodman et al. (2002) article was funded by the German Battery Association, apparently in anticipation of consideration in Germany of lowering the blood lead standard in lead workers (Seeber and Meyer-Baron 2003). Goodman et al. (2002) reviewed 22 studies published between 1974 and 1999 with the expressed aim of evaluating associations between moderate blood lead levels and neurobehavioral test scores after occupational exposure to lead. Studies were included if the central tendency for blood lead levels was < 70 μg/dL, the numbers of exposed and unexposed were reported, and test score arithmetic means and measures of variability were reported for exposed and unexposed workers (Goodman et al. 2002). The authors concluded that none of the individual studies were conclusive or adequate in providing information on the effects of lead on cognitive function and called for prospective studies that would evaluate cognitive function before and after exposure. There was no discussion about whether examining relations of blood lead levels with cognitive function was the most relevant question if the hypothesis was that cumulative lead dose was most important to cognitive function. There was little explicit discussion of whether lead may have acute effects as a function of recent dose, and chronic effects as a function of cumulative dose, or how this could be assessed by review of epidemiologic studies. Meyer-Baron and Seeber (2000) performed a meta-analysis of 12 studies using selection criteria similar to those of Goodman et al. (2002) but with the additional requirement that means and standard deviations of dependent variables be reported. They concluded that there were obvious neurobehavioral deficits at current blood lead levels < 40 μg/dL. Again, the focus was on associations with blood lead levels, and there was little formal discussion about which lead biomarker was most relevant to hypotheses about how cumulative lead dose may influence cognitive function. Thus, this is the first review to evaluate epidemiologic studies that distinguish the acute effects of recent dose from the chronic effects of cumulative dose. Methods Methodologic considerations for relations of lead dose and cognitive function Many methodologic issues of relevance to the epidemiologic investigation of lead and cognitive function have been addressed elsewhere in this minimonograph (Hu et al. 2007). When evaluating the associations of cumulative lead dose with cognitive function, it is important to acknowledge that nonoccupational sources of lead exposure were present for all members of the general population, including lead workers, throughout the early part of the 20th century until public health interventions progressively removed lead from gasoline and many consumer products during the 1970s and 1980s (Agency for Toxic Substances and Disease Registry 1999; Annest et al. 1983; Pirkle et al. 1998). Lead remains a low-level and ubiquitous neurotoxicant in the environment and is found in measurable levels in all individuals (Hoppin et al. 1995). Thus, current tibia lead levels represent a mix of occupational and environmental exposures. This review does not try to determine whether the main source of lead was occupational or environmental but rather focuses on whether lead in blood or bone is associated with adverse cognitive outcomes in adults. 
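Because the distinction between recent dose (blood lead) and cumulative dose (bone lead, or an integrated blood lead surrogate, discussed below) runs through this whole review, a toy simulation may help fix the idea. The sketch below is purely illustrative: the two-compartment structure, the rate constants, and the intake series are assumptions chosen for demonstration, not validated lead pharmacokinetics.

```python
import numpy as np

# Illustrative two-compartment sketch of why blood lead tracks recent dose
# while bone lead (and area-under-the-curve surrogates) track cumulative dose.
# All rate constants and units are assumptions for demonstration only.

months = np.arange(240)                      # 20 years in monthly steps
intake = np.where(months < 120, 10.0, 1.0)   # heavy exposure for 10 y, then low

blood = np.zeros(months.size)
bone = np.zeros(months.size)
k_blood = np.log(2) / 1.0     # blood lead half-life on the order of a month
k_bone = np.log(2) / 240.0    # cortical-bone half-life on the order of decades
to_bone = 0.05                # fraction of blood lead deposited in bone monthly

for t in range(1, months.size):
    blood[t] = blood[t-1] + intake[t] - (k_blood + to_bone) * blood[t-1]
    bone[t] = bone[t-1] + to_bone * blood[t-1] - k_bone * bone[t-1]

# Integrated blood lead (IBL): the area under the blood lead curve over time,
# used as a surrogate for cumulative dose when bone measurements are missing.
ibl = np.trapz(blood, months)

print(f"current blood lead index: {blood[-1]:6.1f}")  # low: reflects recent intake
print(f"current bone lead index:  {bone[-1]:6.1f}")   # high: integrates past dose
print(f"IBL (area under curve):   {ibl:6.0f}")
```

Run as written, the blood compartment falls back within months of the drop in exposure while the bone compartment stays elevated, which is the intuition behind using tibia or patella lead, or IBL, to index cumulative dose.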
Identification of studies We conducted a systematic literature review of the association between blood and bone lead biomarkers and cognitive functioning in adults. Our aim was to select studies that compared markers of both recent and cumulative lead dose in their relations with cognitive function. Both occupationally and environmentally exposed adult populations were included. We searched the PubMed (National Library of Medicine 2006) and PsycINFO databases (American Psychological Association 2006) for epidemiologic studies using keywords such as blood, bone, lead, cumulative, cognitive, and neurobehavior. There were no date or language restrictions. From the identified publications and relevant review articles, we examined reference lists to locate additional studies that measured both recent and cumulative lead dose. Acceptable measures included blood lead levels, bone lead levels, or a surrogate measure of cumulative lead dose such as integrated blood lead (IBL), the area under the curve of blood lead levels over time, or the product of blood lead level and employment time. Studies were not considered for the review if they a) contained no original research, b) were conducted on nonhuman subjects, c) were case reports, d) contained no standardized neurocognitive assessment outcomes, or e) lacked measures of both recent and cumulative lead dose. Data abstraction We abstracted data from articles meeting the selection criteria. Study quality was assessed with the following criteria: a) exposure was assessed at an individual level; b) exposure was assessed with a biomarker; c) cognitive outcomes were objective, standardized tests; d) statistical adjustment was made for potential confounders including age, sex (in studies with both men and women), and education; e) data collection was similar in exposed and nonexposed participants; f) the time period of study was the same in exposed and nonexposed participants; and g) there was a detailed description of the approach to data analysis. We decided not to try to derive a pooled estimate across studies of the associations of lead dose biomarkers with cognitive function because of differences in methods for subject selection, blood and bone lead measurements, neurobehavioral outcomes, approach to regression modeling, and presentation of results across studies. Pooled estimates from meta-analysis also can be highly influenced by decisions regarding how and whether to pool certain results. We thus decided to present details for each study and discuss them in turn. Results Overview of evidence We identified three main types of studies that reported cross-sectional or longitudinal associations of blood and bone lead levels with cognitive function. These comprised a) environmentally exposed individuals in the general population, b) workers with current occupational exposure, and c) former lead workers without current occupational exposure to lead. We summarize these studies in Table 1, provide details in Table 2, and discuss them in order below. Studies of adults without occupational lead exposure We identified six articles from three studies [i.e., residents near a lead smelter, the Normative Aging Study (NAS), and the Baltimore Memory Study] that evaluated subjects with mainly environmental exposure to lead (Tables 1 and 2). One study of young adults 19–29 years of age compared 257 individuals whose high childhood blood lead levels resulted from exposure to a lead smelter 20 years earlier with 276 age- and sex-matched controls. 
This study found impairment on many cognitive tests among the highly exposed group, but minimal association on most tests with tibia lead levels measured during young adulthood (Stokes et al. 1998). Four articles from the NAS reported associations of blood and bone lead levels in a cohort of older men. One of these articles (Payton et al. 1998) was a first report that examined scores on a large battery of cognitive tests in a small sample (n = 141) of NAS participants. This was subsequently followed up with a report on a much larger number of NAS participants (n = 1,089 with blood lead levels and n = 760 with bone lead levels, 412–515 of whom took different tests twice approximately 3.5 years apart) (Weisskopf et al. 2007). Cross-sectional analyses in the original report found that increased blood lead levels across a relatively low range of levels (mean ± SD = 5.5 ± 3.5 μg/dL) were a stronger predictor, compared with tibia or patella lead levels, of poorer performance on tests of speed, verbal memory, vocabulary, and spatial copying skills. However, this was not confirmed in the larger, cross-sectional analysis, except possibly for scores on a vocabulary test (Weisskopf et al. 2007). Conversely, in longitudinal analyses, the larger study found more decline over time on almost all cognitive tests associated with both higher patella and higher tibia bone lead levels, with the associations reaching statistical significance for pattern comparison and spatial copying skills. An earlier, similar longitudinal analysis by Weisskopf et al. (2004) in this same population reported that patella lead levels were significantly associated with a decline in Mini-Mental State Examination (MMSE; Folstein et al. 1975) score over time. A slightly smaller association was observed with tibia lead levels, whereas no association was observed with blood lead levels. In cross-sectional analyses of the same population, higher blood lead levels were a stronger predictor of poorer performance on the MMSE, as were higher patella and tibia bone lead levels (Payton et al. 1998; Wright et al. 2003). In a study of almost 1,000 persons 50–70 years of age randomly selected from the general population in the Baltimore Memory Study (BMS), a cross-sectional analysis showed that relatively low current blood lead levels were not associated with cognitive domain scores. However, moderate tibia lead levels (mean ~ 19 μg/g) were significantly associated with worse performance in all seven cognitive domains (Shih et al. 2006). Thus, in the environmental studies of older adults, the most consistent findings across studies are associations between bone lead levels and cognitive function. The associations in the BMS were cross-sectional, whereas the predominant associations in the NAS were with change in cognitive function over time, although a significant cross-sectional association with MMSE score was also observed in this sample. Taken together, these data suggest that at environmental exposure levels, the effects of cumulative exposure are more pronounced than those of recent exposure. The absence of associations in the Stokes et al. (1998) study may be due to the younger age of the studied subjects, the very low current blood and tibia lead levels, or the inadequacy of tibia lead in the third decade of life to estimate early life dose (Hoppin et al. 2000). Studies of occupationally exposed workers Fifteen articles were identified concerning workers with current or past occupational exposure to lead. 
Eight of these studies used a surrogate measure of cumulative lead dose (i.e., IBL) rather than a direct measure of lead in bone. Among these studies, which compared blood lead and IBL, most found an association between increasing blood lead values and worse cognitive function when the lead exposure was primarily current (e.g., relatively high blood lead levels) (Barth et al. 2002; Bleecker et al. 1997; Lucchini et al. 2000). However, studies in which the exposure was primarily in the past demonstrated that surrogate measures of cumulative dose were a stronger predictor of worse cognitive function compared with blood lead levels (Bleecker et al. 2005; Chia et al. 1997; Lindgren et al. 1996). Studies that used bone lead levels as a direct indicator of retained cumulative lead dose are summarized below. One study of currently exposed lead workers in South Korea (n = 803) found strong and consistent associations of blood lead levels with worse cognitive function after adjustment for covariates, but tibia lead levels were not as consistently associated (Schwartz et al. 2001). The same null findings for bone lead levels were observed in two smaller studies, one with male smelter workers (n = 57) in whom finger bone (mixed trabecular and cortical tissue) lead levels were measured (Osterberg et al. 1997). The second studied a sample of 54 storage battery workers in whom tibia and calcaneus lead levels were measured (Hanninen et al. 1998). The latter is the only published study to date reporting an association between IBL and cognitive outcomes in the absence of an association with bone lead levels. Both of these studies used early XRF techniques (e.g., KXRF with cobalt-57) with higher limits of detection; these techniques have not been commonly used since, which makes the findings more difficult to interpret. Bleecker et al. (1997), in a study similar to the one by Schwartz et al. (2001), reported stronger and more consistent associations of blood lead measures with neurobehavioral test performance than of tibia lead levels. In the South Korean lead workers with current occupational exposure, a longitudinal analysis was performed to separate recent lead dose (measured as blood lead levels) from cumulative lead dose (measured as tibia lead levels), and acute effects from chronic effects, in 575 subjects with complete data across the three study visits (Schwartz et al. 2005). The authors reported significant cross-sectional associations of blood lead levels with lower executive ability and manual dexterity test scores, with some evidence also for a longitudinal association of changes in blood lead levels with neurobehavioral decline. Tibia lead levels were more consistently associated with longitudinal declines in manual dexterity, executive abilities, neuropsychiatric symptoms, and peripheral sensory functioning than were changes in blood lead levels. The authors concluded that lead was associated with worse cognitive function in two ways: an acute effect of recent dose and a chronic effect of cumulative dose. 
The authors also noted that the contrasting associations with blood and tibia lead levels could be due to the following: a) tibia and blood lead levels are biologically related, and blood lead is in equilibrium with bone lead stores; b) the error in measurement of tibia lead levels is larger than that for blood lead; c) controlling for cross-sectional associations could obscure longitudinal ones; and d) lead in blood reflects recent external exposure and is in equilibrium with bone lead stores, possibly taking away explained variance from bone lead associations via this correlation in cross-sectional analyses. Results of a cross-sectional analysis of former organolead workers showed that higher peak tibia lead levels (range, –2.2 to 105.9 μg/g) were related to poorer functioning on a number of cognitive tests, including those assessing manual dexterity, executive ability, verbal intelligence, and verbal memory (Stewart et al. 1999). In a longitudinal analysis in this same population, among 535 lead workers exposed a mean of 16 years before, higher peak tibia lead levels (mean ± SD = 22.6 ± 16.5 μg/g), but not blood lead levels, predicted declines over time in these same domains in addition to visual memory (Schwartz et al. 2000). This finding indicates that even many years after high lead exposure, and in the absence of high current lead exposure, cumulative lead dose may exert progressive effects on cognitive functioning (Links et al. 2001). Lead exposure and psychiatric symptoms Several lines of evidence suggest that increased blood lead levels are associated with psychiatric symptoms in adults, such as depression, anxiety, irritability, and anger. For example, a cross-sectional analysis of 107 occupationally exposed individuals showed increased rates of depression, confusion, anger, fatigue, and tension as measured by the Profile of Mood States (POMS; McNair et al. 1971) among those with blood lead levels > 40 μg/dL (Baker et al. 1983). Maizlish et al. (1995) found that current and cumulative measures of blood lead levels in currently exposed lead workers were associated with tension, anxiety, hostility, and depression measured by the POMS questionnaire. Lindgren et al. (1996) examined the POMS’ factor structure in retired lead smelter workers and showed that the resulting “general distress” factor was significantly related to IBL but not to current blood lead level. In occupationally exposed South Korean lead workers, tibia lead levels were significantly associated with more depressive symptoms measured by the Center for Epidemiologic Studies Depression scale (CES-D; Radloff 1977) after adjusting for age, sex, education, job duration, and blood lead level (Schwartz et al. 2001). However, only one recent study has examined a direct measure of cumulative dose with bone measurements in a community sample (Rhodes et al. 2003). These authors used the Brief Symptom Inventory (BSI; Derogatis and Melisaratos 1983) to show that patella bone lead levels were associated with an increased risk of elevated anxiety and depression subscale scores. The logistic regression estimate for the phobic anxiety subscale was statistically significant (p < 0.05), as was that for the combined measure of all three BSI subscales (anxiety, depression, and phobic anxiety). Psychiatric symptoms, specifically symptoms of depression, potentially share the same neural substrates with components of cognition, and thus may be important to late-life cognitive functioning. 
Compared with nondepressed elderly individuals, depressed elderly individuals perform more poorly on tests involving attention, memory encoding, and retrieval. However, intelligence tests are more resistant to these effects of depression (Arnett et al. 1999; Naismith et al. 2003; Weingartner et al. 1981). Depressive symptoms (as measured by the CES-D) are positively associated with both the risk of Alzheimer disease and a steeper rate of cognitive decline (Wilson et al. 2002). Because late-life symptoms of depression are closely associated with dementia, investigators have put forth a number of hypotheses that suggest that depression a) may be a risk factor for cognitive decline, b) has risk factors in common with dementia, c) is an early reaction to declining cognition, and d) influences the threshold at which dementia emerges [for review see Jorm (2000)]. The exact temporal and mechanistic relation remains unclear. Regardless of the exact relation between depressive symptoms and cognitive function, however, the assessment of the impact of lead exposure on these outcomes is not compromised. Whatever the associations with these outcomes, they would still be attributed to lead; that is, even if depressive symptoms lead to worse cognitive performance, and lead leads to symptoms of depression, the cognitive impairment as a result of that depression could still be considered part of the total effect of lead. Lead–gene interactions In the former organolead worker studies discussed above, possessing at least one apolipoprotein E (APOE) ε4 allele magnified the negative cross-sectional association of tibia lead levels with performance on the cognitive domains of executive ability, manual dexterity, and psychomotor skills (Stewart et al. 2002). No direct effects of the APOE ε4 allele on cognitive function were observed in this study, presumably because of the sample’s younger age (range, 41–73 years). Other studies have found that APOE ε4 modifies dementia outcome in individuals with previous traumatic head injury, suggesting that APOE ε4 plays a role in recovery from brain insults (Mayeux et al. 1995), which may be extended to include insult from lead exposure. Discussion Summary of evidence for a causal relationship The literature on associations of recent and cumulative dose biomarkers with cognitive function has grown impressively since the 1995 review (Balbus-Kornfeld et al. 1995). We believe sufficient evidence exists to conclude that there is an association between lead dose and decrements in cognitive function in adults. Overall, while the association between blood lead levels and cognitive function is more pronounced in occupational groups with high current lead exposures, associations between bone lead levels and cognitive function are more evident in studies of older subjects with lower current blood lead levels, particularly in longitudinal studies of cognitive decline. Consistency of associations Following is a summary of the findings from each of the three types of populations. First, cross-sectional studies of currently exposed lead workers showed that associations of blood lead levels with cognitive function were clearer than the associations for tibia, patella, or calcaneus lead levels, perhaps because the acute effects of recent dose in an occupational setting masked the chronic effects of cumulative lead dose. 
Second, previously exposed occupational populations demonstrated stronger associations of cumulative lead dose, measured in tibia bone, with cognitive deficits than of blood lead levels. The two studies that deviated from these otherwise consistent findings may not have had sufficient power to detect any associations (n < 60). Last, studies of environmentally exposed adults who had notably higher exposures in the past suggest that bone lead level is more consistently associated with performance on cognitive tests than is blood lead level. The domains associated with lead dose do not differ in general by lead biomarker (blood, tibia, patella). The cognitive domains consistently associated with each biomarker in both environmental and occupational studies on adults include verbal and visual memory, visuospatial ability, motor and psychomotor speed, manual dexterity, attention, executive functioning, and peripheral motor strength. Comparisons of lead and psychiatric symptom associations in previously and currently exposed samples lend credence to the conclusion, although perhaps at higher thresholds than for cognitive outcomes, that neurobehavioral functioning is consistently associated with blood lead when exposure is currently high (e.g., occupational) and with bone lead when exposure stems primarily from past chronic exposure. These associations exist in multiple settings, including both occupational and non-occupational, in men and women, and in populations with diversity by socioeconomic status and race/ethnicity. This reduces the likelihood that the associations arise from statistical chance or unmeasured confounding. However, this consistency cannot completely rule out the possibility of uncontrolled confounding or effect modification (Martin et al. 2006; Shih et al. 2006). In addition, in studies of general populations with diversity by socioeconomic status and race/ethnicity, the ability to disentangle social, cultural, and biological factors from the “independent” influence of lead dose may be a futile exercise (Weiss and Bellinger 2006). Strength of association The associations between lead dose and cognitive function are strong and can be compared to the influence of age on cognitive function. The comparative magnitude of these effects has been reported in several studies. In currently exposed lead workers, cross-sectional associations showed that a 5-μg/dL increase in blood lead was equivalent to an increase of 1.05 years in age (Schwartz et al. 2001). In the BMS, a comparison of the direct effects of age and of tibia lead levels on cognitive outcomes demonstrated that the magnitude of the cross-sectional association with tibia lead levels was moderate to large, equivalent to 22–60% of the magnitude of the age effect in its relations with cognitive domain scores. Specifically, an interquartile range increase in tibia lead levels was equivalent to 2–6 more years of age at baseline across all seven domains (Shih et al. 2006). Longitudinal analyses in the NAS observed that an interquartile range higher patella lead level was approximately equivalent to aging 5 years in relation to the baseline MMSE score (Weisskopf et al. 2004), and an interquartile range higher bone (patella or tibia, depending on the specific cognitive outcome) lead level was approximately equivalent to aging 1 year in relation to the baseline test scores on a battery of cognitive tests (Weisskopf et al. 2007). 
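The age-equivalence figures quoted in this subsection come from dividing a biomarker's regression coefficient, scaled to a convenient increment such as an interquartile range, by the coefficient for one year of age from the same fitted model. A minimal sketch of the arithmetic follows, using hypothetical coefficients rather than values from the cited studies:

```python
# Express a lead-biomarker effect as "years of aging": scale the biomarker's
# regression coefficient to an interquartile-range (IQR) increase and divide
# by the coefficient for one year of age from the same fitted model.
# The coefficient values below are hypothetical placeholders, not study results.

def age_equivalent_years(beta_biomarker: float, increment: float,
                         beta_age: float) -> float:
    """Years of aging with the same expected change in test score."""
    return (beta_biomarker * increment) / beta_age

# e.g., tibia lead: -0.02 score units per ug/g with an IQR of 15 ug/g,
# against an age coefficient of -0.10 score units per year of age
print(age_equivalent_years(beta_biomarker=-0.02, increment=15.0,
                           beta_age=-0.10))  # -> 3.0 "years of aging"
```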
Specificity Lead has adverse effects on many other health outcomes in addition to cognitive function. This is not surprising given lead’s numerous biologic effects, including calcium agonism and antagonism (Ferguson et al. 2000), binding to sulfhydryl and carboxyl groups on proteins, and activation of nuclear transcription factors (Ramesh et al. 2001), for example. Lead’s toxicity is thus not specific to the brain, and we do not believe this lack of target organ specificity diminishes the inference for a causal relationship between lead and cognitive dysfunction. Temporal relationship Associations between lead biomarkers and cognitive outcomes have been demonstrated in both cross-sectional and longitudinal studies. In several of the longitudinal studies, change in cognitive function was explicitly modeled in relation to preceding lead dose or in relation to change in lead dose. In either case, the temporality condition is met. In addition, as bone lead is a measure that ascertains prior dose, even in cross-sectional analyses, analysis of bone lead with cognitive test scores evaluates lead dose that preceded current cognitive performance; thus, while the cognitive assessment is cross-sectional, the dose assessment is retrospective and cumulative. This again would minimize concerns about incorrect temporal relations. Biological gradient (dose–effect relations) Nearly all reviewed studies found a dose–effect relation for blood lead, bone lead, or both. Existing studies do not allow determination of a threshold dose for either blood lead or bone lead, or of the shape of the dose–effect relationship at low dose levels. Associations have been observed in populations with mean blood lead levels as low as 4.5 μg/dL (Wright et al. 2003) and mean tibia lead levels as low as 18.7 μg/g (Shih et al. 2006). Biologic plausibility and experimental data Lead adversely affects the brain in a variety of ways. Lead is thought to increase oxidative stress, induce neural apoptosis, influence neurotransmitter storage and release, and damage mitochondria. The ability of lead to substitute for calcium allows it to affect calcium-mediated processes and pass through the blood–brain barrier. It may also interfere with zinc-dependent transcription factors, altering the regulation of genetic transcription (Zawia et al. 2000). Animal studies indicate that the accumulation of lead in the brain is generally uniform (Widzowski and Cory-Slechta 1994), although the hippocampus and limbic system, prefrontal cerebral cortex, and cerebellum are clearly principal sites of the effects of lead (Finkelstein et al. 1998). Low lead levels in rats produce structural changes in the hippocampus (Cory-Slechta 1995), a brain region critical for learning and memory (Eichenbaum 2001), which is consistent with the finding of learning and memory deficits in lead-exposed individuals. Blood lead level is a measure of the current biologically active lead burden and is therefore a better marker of the acute effects of recent lead dose. These are likely to be effects on neurotransmission and calcium- and enzyme-dependent processes such as synaptic plasticity. This could lead to circulating blood lead impairing, for example, information storage and retrieval mechanisms or processing speed, which in turn have been suggested to impair performance on cognitive tests (Salthouse 1996a, 1996b). 
Lead levels in bone are a measure of cumulative dose over decades as well as a source of lead in the body that is available for mobilization into blood, especially during periods of increased bone turnover (e.g., pregnancy, puberty). Although lead stored in bone is not directly harmful to the brain, the cumulative effects of chronic lead exposure are likely to be related to oxidative stress and neuronal death and could impair cognitive function, for example, by reducing the capacity of specific regions to process information, or by impairing diffuse ascending projection systems such as the midbrain cholinergic and dopaminergic cells. Lead may also influence cognitive function indirectly through its effects on blood pressure, hypertension, or homocysteine levels. Increased homocysteine levels, a well-known risk factor for cardiovascular disease, have also been associated with risk for poorer cognitive functioning (Dufouil et al. 2003; Schafer et al. 2005a) and risk for dementia (Hogervorst et al. 2002; McCaddon et al. 2003; Selley 2003). Homocysteine is neurotoxic to the central nervous system, influencing neurotransmitter synthesis and causing excitotoxicity and cell death (McCaddon and Kelly 1992; Parnetti et al. 1997). Blood lead levels have also been associated with homocysteine levels, although the direction of causality has yet to be determined (Guallar et al. 2006; Schafer et al. 2005b). Both blood and bone lead levels have been linked with blood pressure and hypertension in community-based samples of older adults (Martin et al. 2006; Nash et al. 2003) and occupationally exposed populations (Glenn et al. 2003, 2006). Hypertension has also been identified as a potential risk factor for dementia (Birkenhager and Staessen 2006; Hayden et al. 2006; Skoog and Gustafson 2006). Thus, lead may indirectly play a role in cognitive declines by way of poor vascular health. We believe the effect modification by APOE genotype offers strong biologic plausibility to the inference that lead causes cognitive dysfunction (Stewart et al. 2002). The APOE ε4 allele is a risk factor for late-onset Alzheimer disease (Corder et al. 1993; Meyer et al. 1998; Saunders et al. 1993), hippocampal atrophy (Moffat et al. 2000), and senile plaques (Zubenko et al. 1994). It appears that the APOE ε4 allele lowers the age of onset of the disease and accelerates age-related cognitive decline (Meyer et al. 1998). Mechanistically, APOE is involved in the recovery response of injured nerve tissue (Poirier and Sevigny 1998), with the APOE ε4 allele having reduced ability to promote growth and reduced antioxidant properties (Miyata and Smith 1996; Teter et al. 1999; Yankner 1996). The interaction of APOE genotype with tibia lead level may be related to an impaired ability to counteract injury from lead exposure among APOE ε4 carriers. Another recent study also offers biologic plausibility. In the former organolead workers, tibia lead level was associated with the prevalence and severity of white matter lesions on brain MRI, using the Cardiovascular Health Study white matter grading system (Stewart et al. 2006). Tibia lead level was also associated with smaller volumes in several regions of interest ranging from large (e.g., total brain volume, lobar gray and white matter volumes) to small (e.g., cingulate gyrus, insula, corpus callosum). 
As volume can decline because of changes in cell number, synaptic number or density, or other changes in cellular architecture, these findings reinforce evidence that lead may cause a persistent change in the brain that is associated with progressive declines in cognitive function. Public health implications The removal of lead from gasoline, paint, and most other commercial products has succeeded in dramatically reducing environmental sources of lead exposure, and this has been reflected by the parallel declines in mean blood lead levels in Americans over the same time frame. However, lead has accumulated in the bones of older individuals, especially those of lead workers exposed at the continued higher levels encountered in lead-using workplaces. Thus, past use of lead will continue to cause adverse health effects even when current exposures to lead are much lower than in the past. Lead in bone is not directly harmful to the central nervous system, and most of the structural and neurochemical damage is likely to have occurred decades ago. Nevertheless, lead in bone might serve as a source from which lead can be mobilized into blood and potentially cross the blood–brain barrier. The chronic effects of lead may account for a proportion of cognitive aging; future research will be able to determine whether the chronic effects of cumulative lead dose alter the trajectory of normal cognitive aging. Research efforts should be directed to the development of preventive interventions both for lead-associated cognitive decline with aging from past exposures and for the mobilization of current bone lead stores into the circulatory system, which may lead to new health effects. Cognitive aging occurs in conjunction with the normal biological aging process. It remains to be determined whether lead affects cognitive aging in adults by permanently reducing brain circuitry capacity, thereby lowering baseline cognitive functioning, or by inducing steeper declines in cognitive functioning, leading to abnormal cognitive aging. It may be that lead influences cognitive health through its relationship with depressive symptoms, hypertension, or homocysteine levels, all of which influence cognitive impairment and risk of dementia. Future investigations should explicitly account for these complex causal pathways, and also determine whether the chronic effects of cumulative lead dose increase the risk for such clinically relevant syndromes as mild cognitive impairment (Petersen et al. 1999).
[ "lead", "cognitive function", "adults", "blood", "bone", "neurobehavior" ]
[ "P", "P", "P", "P", "P", "P" ]
Eur_J_Pediatr-3-1-1820762
ADAMTS13 phenotype in plasma from normal individuals and patients with thrombotic thrombocytopenic purpura
The activity of ADAMTS13, the von Willebrand factor cleaving protease, is deficient in patients with thrombotic thrombocytopenic purpura (TTP). In the present study, the phenotype of ADAMTS13 in TTP and in normal plasma was demonstrated by immunoblotting. Normal plasma (n = 20) revealed a single band at 190 kD under reducing conditions using a polyclonal antibody, and a single band at 150 kD under non-reducing conditions using a monoclonal antibody. ADAMTS13 was not detected in the plasma from patients with congenital TTP (n = 5) by either antibody, whereas patients with acquired TTP (n = 2) presented the normal phenotype. Following immunoadsorption of immunoglobulins, the ADAMTS13 band was removed from the plasma of the patients with acquired TTP, but not from that of normal individuals. This indicates that ADAMTS13 is complexed with immunoglobulin in these patients. The lack of ADAMTS13 expression in the plasma from patients with hereditary TTP may indicate defective synthesis, impaired cellular secretion, or enhanced degradation in the circulation. This study differentiated between normal and TTP plasma, as well as between congenital and acquired TTP. This method may, therefore, be used as a complement in the diagnosis of TTP. Introduction Von Willebrand factor (VWF) is a glycoprotein that plays a key role in the primary hemostatic process by inducing platelet adhesion and aggregation at sites of vascular injury under conditions of high shear stress. The main source of circulating VWF is the endothelium, from which it is secreted in the form of ultra-large multimers (ULVWF) [53]. ULVWF multimers are biologically very active [2, 31] and, upon release, undergo processing into smaller multimers in normal individuals. This occurs on the surface of endothelial cells [10]. VWF defects may potentially lead to both bleeding and thrombotic disorders: defective VWF secretion, intravascular clearance, multimer assembly, or increased proteolytic degradation may lead to different types of von Willebrand disease. On the other hand, dysfunctional VWF proteolysis may lead to the thrombotic disorder thrombotic thrombocytopenic purpura (TTP) [40]. TTP is a thrombotic microangiopathy (TMA) characterized by microangiopathic hemolytic anemia, thrombocytopenia, fever, neurological and renal manifestations. Chronic recurrent TTP has been associated with the presence of ULVWF in the plasma [30]. ULVWF multimers are capable of inducing increased platelet retention in children with TTP [21]. These observations, along with the finding of VWF and platelet-rich (but fibrin-poor) thrombi in the microcirculation of the heart, brain, kidneys, liver, spleen, and adrenals in TTP patients [3], led to the conclusion that ULVWF multimers are responsible for the disseminated platelet thrombi occurring in TTP and that their degradation to smaller VWF multimers is impaired due to the deficiency of a VWF-cleaving protease [15]. Recently, the VWF-cleaving protease was purified [12, 13, 16, 50] and the encoding gene sequenced, linking the protease to the ADAMTS (a disintegrin-like and metalloprotease with thrombospondin-type-1 motif) family of metalloproteases [27]. The protease, named ADAMTS13, cleaves VWF at the 1605Tyr-1606Met peptide bond in the A2 domain, yielding the 140-kD and 176-kD VWF fragments present in normal plasma [13, 50]. Cleavage is made possible by a conformational change in VWF due to shear stress in the circulation, which exposes the cleavage site, making it susceptible to proteolysis [55]. 
ADAMTS13 activity is severely deficient (<5% of normal plasma activity) in TTP patients [6], either due to a mutation in the ADAMTS13 gene in the congenital form of TTP or due to auto-antibodies in the acquired form [14, 27, 52]. Autosomal recessive hereditary TTP (also termed the Upshaw-Schulman syndrome) typically presents during the neonatal period or early childhood (<10 years of age), but may also manifest during adolescence and adulthood. Recurrent TTP episodes may occur as often as every third week. TTP recurrences are associated with cerebral vascular accidents in approximately 30% of cases, and these episodes may lead to neurological complications. Renal manifestations may be mild or may result in acute renal failure due to hemoglobinuria and TMA. About 20% of patients progress to end-stage renal failure [28]. Hemolytic uremic syndrome (HUS) is a similar microangiopathic disorder characterized by microangiopathic hemolytic anemia, thrombocytopenia, and acute renal failure [5]. Two forms of HUS have been described: D+ or typical (diarrhea-associated) HUS and D- or atypical (non-diarrhea-associated) HUS. D+ HUS occurs after infection with Shiga-like toxin producing bacteria, typically, enterohemorrhagic Escherichia coli. The patients are usually children presenting with abrupt onset of diarrhea, followed by the development of HUS 2–10 days later. A prothrombotic state precedes the acute renal failure [8], but the pathogenetic mechanism is, as yet, unclear. It is assumed that bacterial virulence factors gain access to the circulation, circulate on blood cells, activate platelets, and reach the kidney, where the endothelium is injured [36, 47]. D- HUS is associated with mutations in certain complement regulatory factors, such as factor H, factor I, and membrane co-factor protein (CD46). The mutations lead to activation of the complement system on host endothelial cells [29, 58]. The resulting vascular damage may lead to the formation of thrombotic lesions in the kidneys. Although HUS patients are, typically, young children with a history of diarrhea and acute renal failure, the clinical manifestations of HUS and TTP often overlap, making differentiation between the two syndromes based solely on clinical presentation difficult. ADAMTS13 antigen levels can differentiate between HUS and TTP, as they are severely deficient in patients with congenital TTP and normal to moderately reduced in HUS [51]. Assays for ADAMTS13 activity can, therefore, differentiate between TTP (congenital and acquired) and HUS [26, 48]. Several ADAMTS13 assays are available today based on antigen detection and activity [15, 17, 25, 38], showing the presence of the protease (by enzyme-linked immunosorbent assay, ELISA) and its bioactivity in normal plasma and the lack of protease and activity in the plasma from patients with congenital TTP. These assays have also shown that patients with acquired TTP have auto-antibodies that neutralize the activity of ADAMTS13. The present study utilized a different method, immunoblotting, and two anti-ADAMTS13 antibodies against specific domains, to investigate the presence of ADAMTS13 antigen in normal plasma, TTP plasma (congenital and acquired), and in heterozygous carriers of ADAMTS13 mutations, demonstrating the presence of ADAMTS13 and its size in normal plasma, the lack thereof in congenital TTP, and auto-antibody-bound protease in acquired TTP. Materials and methods Subjects Citrated plasma was available from patients with congenital (n = 5) and acquired (n = 2) TTP. 
The patient data are presented in Table 1. The ADAMTS13 activity level was assayed as previously described [15, 17].
Table 1 Clinical and laboratory data regarding thrombotic thrombocytopenic purpura (TTP) patients
Patient 1 (a): male; age at debut: 2 d; age at sampling: 16 y; current age: 19 y; symptoms during episodes: jaundice, hemolytic anemia, thrombocytopenia, macroscopic hematuria, pathological urinalysis, fever, neurological symptoms, elevated serum creatinine; no. of episodes: >5; ADAMTS13 activity level: <5%; ADAMTS13 mutation: 4143insA (b, c); ADAMTS13 inhibitor: none; references [4, 20, 43]
Patient 2 (a): male; age at debut: 5.3 y; age at sampling: 17 y; current age: 18 y; symptoms: jaundice, hemolytic anemia, thrombocytopenia, fever, neurological symptoms, pathological urinalysis; episodes: >5; activity: <5%; mutation: 4143insA (b, c); inhibitor: none; references [4, 20, 43]
Patient 3: female; age at debut: 20 m; age at sampling: 15 y; current age: 23 y; symptoms: hemolytic anemia, thrombocytopenia, hematuria, epileptic attacks, slightly elevated serum creatinine; episodes: >5; activity: <5%; mutations: P353L (d), P457L (e); inhibitor: none; reference [4]
Patient 4: male; age at debut: 3 y; age at sampling: 7 y; current age: 9 y; symptoms: hemolytic anemia, thrombocytopenia, purpura, pathological urinalysis; episodes: 4; activity: <5%; mutations: P671L (f), 4143insA; inhibitor: none; reference [43]
Patient 5: male; age at debut: 2 d; age at sampling: 39 y; current age: 39 y; symptoms: jaundice, hemolytic anemia, petechiae, thrombocytopenia, transitory neurological deficits and aphasia, elevated creatinine; episodes: >5; activity: <5%; mutation: 4143insA (b); inhibitor: none
Patient 6: female; age at debut: 54 y; age at sampling: 70 y; current age: 75 y; symptoms: recurrent hemolytic anemia and thrombocytopenia, reduced consciousness, pathological urinalysis; episodes: >5; activity: <5%; mutation: NA; inhibitor: 0.5 U/ml
Patient 7: female; age at debut: 25 y; age at sampling: 42 y; current age: 44 y; symptoms: thrombocytopenia, hemolytic anemia, elevated creatinine during viral infection and pregnancy; episodes: 2; activity: <5%; mutation: NA; inhibitor: 0.2 U/ml
(a) Patients 1 and 2 are siblings. (b) Patients 1, 2, and 5 are homozygous for the ADAMTS13 mutation. (c) 4143insA leads to a mutation in the second CUB domain. (d) P353L is a mutation in the disintegrin-like domain. (e) P457L is a mutation in the cysteine-rich domain. (f) P671L is a mutation in the spacer domain. NA: not assayed.
The study also included the parents of patients 1–4. The parents of patients 1 and 2 are both heterozygous for the 4143insA mutation, and have protease activity levels of 20% (mother) and 50% (father), as assayed by the VWF multimeric structure analysis [15]. The parents of patient 3 are heterozygous for the P353L (mother) and P457L (father) mutations, and both have 50% ADAMTS13 activity. The parents of patient 4 are heterozygous for P671L (mother) and 4143insA (father), and have 50% ADAMTS13 activity. All parents are clinically unaffected. Plasma samples from 20 healthy adult volunteers were used as controls. The study was conducted with the approval of the ethics committee of Lund University and the plasma samples were collected with the informed consent of the patients, their parents, and the controls. Plasma samples At the time of sampling, the patients were treated regularly with fresh frozen plasma or Octaplas (Octapharma, Stockholm, Sweden; patients 1–5). Patient 6 was treated with plasma infusions every sixth week. Patient 7 did not receive any plasma treatment at sampling. All blood samples were obtained at least three weeks (patients 1–4 and 6) or one week (patient 5) after the last treatment. Venous blood from patients and controls was collected, and the plasma obtained as previously described [20]. Anti-ADAMTS13 antibodies A polyclonal anti-peptide antibody was raised in New Zealand white rabbits against a unique sequence in the second CUB domain (AA1413-1427) and affinity-purified against the peptide. Antibody specificity was tested by ELISA (plates coated with the peptide) and by immunoblotting with purified plasma ADAMTS13 [50] under reducing (SDS-PAGE) and non-reducing (dot blot) conditions. The monoclonal antibody A10 [56], directed against the disintegrin-like domain, was used to confirm the results of the polyclonal antibody. 
The polyclonal antibody reacted with ADAMTS13 under reducing (reduced by the addition of 2-mercaptoethanol to disrupt disulfide bonds in the protease) and non-reducing conditions, whereas the monoclonal antibody reacted with ADAMTS13 only under non-reducing conditions. Immunoblot analysis for the detection of ADAMTS13 in plasma The plasma samples (1:20) were subjected to SDS-PAGE under reducing (for blotting with the polyclonal antibody) and non-reducing conditions (for blotting with the monoclonal antibody) [19, 58]. Purified plasma ADAMTS13 (1:100) was used as the control for the polyclonal antibody. Immunoblotting was performed with rabbit anti-ADAMTS13 IgG 1.6 μg/ml followed by goat anti-rabbit IgG HRP (DakoCytomation, Carpinteria, CA) 1:2000, or with mouse anti-ADAMTS13 IgG 0.6 μg/ml followed by goat anti-mouse IgG HRP (DakoCytomation, Carpinteria, CA) 1:2000. The signal was detected by chemiluminescence. The specificity of the signal obtained with the polyclonal antibody was tested by preincubation with a 50-fold molar surplus of blocking peptide followed by immunoblotting with the blocked antibody. The specificity of the secondary antibodies was tested by omission of the primary antibodies. Immunoblotting with the polyclonal antibody revealed, in addition to ADAMTS13, two nonspecific bands at 130 kD and 170 kD, which were identified as C3 and alpha-2-macroglobulin. These proteins were removed by incubating the plasma samples with protein A-sepharose-coupled rabbit anti-C3 IgG and rabbit anti-alpha-2-macroglobulin IgG. The results obtained using the polyclonal antibody show samples from which these proteins have been removed. In order to investigate the presence of ADAMTS13-autoantibody complexes in the plasma from patients with acquired TTP, samples were passed over protein G-sepharose (Amersham Biosciences, Buckinghamshire, UK) prior to immunoblotting. Normal plasma (n = 2) was used for comparison. Results Detection of ADAMTS13 in plasma samples from normal individuals Normal plasma under reducing conditions revealed a single immunoreactive band at 190 kD when blotted against the polyclonal antibody (Fig. 1a). Purified plasma ADAMTS13 showed a similar band under the same conditions (Fig. 1a). The monoclonal antibody detected an immunoreactive band at 150 kD in normal plasma under non-reducing conditions (Fig. 1b). Preincubation of the polyclonal antibody with the blocking peptide abolished the ADAMTS13 band in normal plasma (data not shown). Immunoblots in which the primary antibodies had been omitted showed no bands (data not shown). Fig. 1a–g Detection of ADAMTS13 in normal and thrombotic thrombocytopenic purpura (TTP) plasma. a The polyclonal antibody detected a single band at 190 kD in normal plasma (NP) under reducing conditions. Purified plasma ADAMTS13 in the right lane showed a similar band. b Normal plasma under non-reducing conditions revealed an immunoreactive band at 150 kD when blotted against the monoclonal antibody. c Immunoblotting with the polyclonal antibody revealed that patients 1–5 with congenital TTP all lacked the ADAMTS13 band, whereas patients 6 and 7 with acquired TTP presented a normal expression pattern. Normal plasma (NP) was run on the same gel for comparison. d These results were confirmed by the monoclonal anti-ADAMTS13 antibody. e Immunoblot using the polyclonal antibody. 
Immunoadsorption of immunoglobulins from the plasma samples of patients 6 and 7 with acquired TTP led to the simultaneous removal of the ADAMTS13 band, indicating that ADAMTS13 is complexed with the anti-ADAMTS13 auto-antibodies. In contrast, the ADAMTS13 band remained visible in normal plasma (NP) treated similarly. f A schematic presentation of the mechanism by which the removal of immunoglobulins leads to the removal of the ADAMTS13 antigen from the plasma of patients with acquired TTP. The plasma sample contains ADAMTS13 (filled circles), auto-antibodies to ADAMTS13 (Y-shaped symbols), and various other plasma proteins (open circles). Immunoblotting of the plasma sample prior to the removal of immunoglobulins detects the presence of ADAMTS13 antigen. Passage of the plasma sample through a protein G-sepharose column leads to the binding and removal of all immunoglobulins from the sample. Since ADAMTS13 is bound to the anti-ADAMTS13 auto-antibodies, it is removed along with them. Immunoblotting of the flow-through shows no ADAMTS13 band. g Immunoblot with the monoclonal antibody showing a normal ADAMTS13 band in the plasma of the parents, who are all heterozygous for one mutated allele and are clinically unaffected. Lane 1: the mother of patients 1 and 2, 4143insA; lane 2: the mother of patient 3, P353L; lane 3: the father of patient 3, P457L; lane 4: the mother of patient 4, P671L; lane 5: the father of patient 4, 4143insA. Normal plasma (NP) was run on the same gel for comparison Detection of ADAMTS13 in plasma samples from TTP patients The plasma samples from all patients with congenital TTP, regardless of the mutation, lacked the ADAMTS13 band (Fig. 1c). Patients 6 and 7 with acquired TTP revealed the same ADAMTS13 protein expression pattern as the controls (Fig. 1c). These results were obtained using both the polyclonal (Fig. 1c) and the monoclonal antibody (Fig. 1d). When immunoadsorption of plasma immunoglobulins was carried out prior to immunoblotting, the ADAMTS13 band remained visible in normal plasma, but was completely removed from the samples of the patients with acquired TTP (Fig. 1e), probably due to its association with the immunoglobulin inhibitor. Detection of ADAMTS13 in the parents of the TTP patients The parents of the TTP patients are all carriers of one ADAMTS13 mutation and one normal allele. All parents presented a normal ADAMTS13 phenotype using both the polyclonal (data not shown) and the monoclonal antibody (Fig. 1g). Discussion In the present study, we detected ADAMTS13 in plasma using a polyclonal and a monoclonal antibody. This assay was capable of distinguishing TTP patients from normal individuals, as well as differentiating between congenital and acquired TTP. Plasma from the patients with congenital TTP lacked the ADAMTS13 antigen. In contrast, the plasma of patients with acquired TTP expressed a normal ADAMTS13 phenotype. Previous studies describing the ADAMTS13 phenotype in normal plasma by immunoblotting with other specific anti-ADAMTS13 antibodies have shown immunoreactive bands of the same molecular weight using similar conditions [33, 46]. The ADAMTS13 antigen in patients with congenital and acquired TTP has recently been shown by ELISA, demonstrating low to undetectable ADAMTS13 levels in patients with congenital TTP [11, 38] and decreased, but mostly detectable, levels in patients with acquired TTP [11, 38, 45]. 
In the present study, the ADAMTS13 phenotype in TTP patients is described by immunoblotting, confirming the lack of ADAMTS13 antigen in the plasma of patients with congenital TTP and the presence of circulating complexes in acquired TTP. Furthermore, we showed that heterozygous carriers of the ADAMTS13-related mutations who, thus, have reduced ADAMTS13 bioactivity have a normal phenotype. The plasma of the patients with congenital TTP did not present the ADAMTS13 band. This may be due to altered synthesis, secretion or antigenicity, or due to increased breakdown of the protease in plasma. The fact that two antibodies directed to two different domains in ADAMTS13 were unable to detect the protease band makes altered antigenicity less likely to be the cause for the lack of the ADAMTS13 band in these patients. Previous studies have shown impaired secretion of the 4143insA (patients 1, 2, 4, and 5) [35, 42, 44] and P353L (patient 3) [42] mutants from cells, thus indicating that the protease may accumulate intracellularly, at least in some patients with congenital TTP. This may be due to a missing cell sorting signal, as in the case of 4143insA [44], or due to conformational changes in the protein, impairing its secretion. Similar findings regarding two other ADAMTS13 mutations, V88M and G239V, have been recently reported [34]. A total lack of ADAMTS13 activity in the plasma is thought to be incompatible with life [27]; thus, the patients may have very low amounts of ADAMTS13 activity in their plasma, which we were unable to detect with this method. The patients with acquired TTP presented with a normal ADAMTS13 band, which is consistent with the fact that ADAMTS13 protease genotype and expression are normal in these patients, but their activity is lower due to auto-antibodies [11, 14, 41, 45, 52]. The finding that the immunoadsorption of immunoglobulins from the plasma of the patients with acquired TTP also led to the removal of the ADAMTS13 antigen from their samples indicates that the protease is complexed with the auto-antibodies in the circulation of these patients, and that this occurs even during clinical remission. The majority of the ADAMTS13 assays available today detect ADAMTS13 activity levels in plasma (Table 2). This is performed either by detecting the VWF products resulting from ADAMTS13 cleavage (assays 1–5) or by measuring the residual VWF activity (assays 6–7). The VWF substrate utilized in these assays may be high-molecular-weight VWF (plasma-derived or recombinant; assays 1–4) or VWF domains or short synthetic peptides (assay 5). Two multicenter studies evaluating methods 1, 3–5, and 6–7 showed that all assays were able to detect severely ADAMTS13-deficient plasma samples and indicated that methods 1, 6, and 7 were the most consistent and reliable methods [48, 49]. A recent smaller study evaluating the FRETS-VWF73 method (assay 5) showed that this is a reliable assay which provides results in good accordance with other methods [26].
Table 2 ADAMTS13 assays (assay: principle [reference])
Detection of ADAMTS13 activity, via VWF cleavage products:
1. VWF multimer structure analysis: detection of the breakdown of high-molecular-weight VWF [15, 23]
2. Immunoblotting of VWF: detection of cleavage products of native VWF or recombinant VWF domains [37, 50]
3. IRMA: detection of VWF cleavage products [32]
4. Flow assay: detection of the breakdown of ULVWF-platelet strings attached to endothelial cells [1]
5. Various methods using VWF domains or short synthetic VWF peptides as the substrate, such as the FRETS-VWF73 assay: detection of cleavage products of the VWF domains or VWF peptides [9, 18, 22, 25, 59–61]
Detection of ADAMTS13 activity, via residual VWF activity:
6. Collagen binding: detection of VWF binding to collagen; binding correlates to VWF multimer size [17]
7. Ristocetin cofactor activity: detection of platelet aggregates; the ability of VWF to induce platelet aggregates in the presence of ristocetin correlates to multimer size [7]
Detection of ADAMTS13 antigen and auto-antibodies:
8. ELISA: detection of ADAMTS13 antigen or anti-ADAMTS13 auto-antibodies [11, 38, 39, 41, 54]
9. Immunoblotting: detection of anti-ADAMTS13 auto-antibodies [57]
10. Present assay (immunoblotting): detection of ADAMTS13 antigen and size and, indirectly, of auto-antibodies
Mutation analysis:
11. PCR: detection of mutations in the ADAMTS13 gene [27]
IRMA: immunoradiometric assay; FRET: fluorescence resonance energy transfer; FRETS-VWF73: a 73-amino-acid-long synthetic peptide which provides a minimal substrate for ADAMTS13 [24] and has been made fluorogenic; PCR: polymerase chain reaction
A few recent studies have shown that ADAMTS13 antigen is detectable in plasma by ELISA (assay 8 in Table 2). ELISA assays are a valuable complement to the ADAMTS13 activity assays, and offer a fast analysis of the ADAMTS13 antigen levels in plasma samples using antibodies directed to ADAMTS13. These assays are able to distinguish ADAMTS13-deficient TTP plasma from normal plasma samples. Other methods are capable of detecting anti-ADAMTS13 auto-antibodies (Table 2, assays 8–9). Mutational analysis can be carried out by polymerase chain reaction (PCR) in order to detect mutations in the ADAMTS13 gene (assay 11 in Table 2). In the present study, we developed a qualitative, semi-quantitative assay capable of detecting ADAMTS13 antigen and anti-ADAMTS13 auto-antibodies in plasma and, thus, distinguishing between TTP and normal plasma, as well as distinguishing between congenital and acquired TTP. All patients with congenital TTP, regardless of their ADAMTS13 mutations, presented with undetectable levels of ADAMTS13 antigen, whereas patients with acquired TTP presented a normal phenotype. Heterozygous carriers of ADAMTS13 mutations also revealed a normal ADAMTS13 antigen band. Although not a quantitative method like the above-mentioned ELISA assays, the present assay offers the advantage of demonstrating not only the presence or absence of ADAMTS13 antigen, but also the molecular size of the ADAMTS13 present in plasma samples. Furthermore, this assay was able to show the presence of anti-ADAMTS13 antibodies indirectly by performing immunoadsorption of plasma immunoglobulins prior to immunoblotting. In conclusion, this assay is able to distinguish between normal and TTP plasma, and also between congenital and acquired TTP. It is easy to perform and can be used at any hospital laboratory routinely using SDS-PAGE electrophoresis and immunoblotting, and offers a complement to existing ADAMTS13 methods in the diagnosis of TTP.
[ "adamts13", "plasma", "thrombotic thrombocytopenic purpura", "von willebrand factor", "von willebrand factor cleaving protease", "immunoblotting" ]
[ "P", "P", "P", "P", "P", "P" ]
Pediatr_Nephrol-3-1-1805051
Iron therapy for renal anemia: how much needed, how much harmful?
Iron deficiency is the most common cause of hyporesponsiveness to erythropoiesis-stimulating agents (ESAs) in end-stage renal disease (ESRD) patients. Iron deficiency can easily be corrected by intravenous iron administration, which is more effective than oral iron supplementation, at least in adult patients with chronic kidney disease (CKD). Iron status can be monitored by different parameters such as ferritin, transferrin saturation, percentage of hypochromic red blood cells, and/or the reticulocyte hemoglobin content, but an increased erythropoietic response to iron supplementation is the most widely accepted reference standard of iron-deficient erythropoiesis. Parenteral iron therapy is not without acute and chronic adverse events. While provocative animal and in vitro studies suggest induction of inflammation, oxidative stress, and kidney damage by available parenteral iron preparations, several recent clinical studies showed the opposite effects as long as intravenous iron was adequately dosed. Thus, within the recommended international guidelines, parenteral iron administration is safe. Intravenous iron therapy should be withheld during acute infection but not during inflammation. The integration of ESA and intravenous iron therapy into anemia management allowed attainment of target hemoglobin values in the majority of pediatric and adult CKD and ESRD patients. Introduction Iron-restricted erythropoiesis is a common clinical condition in patients with chronic kidney disease (CKD). The causes underlying this pathology and the subsequent contribution of absolute or functional iron deficiency to renal anemia include:
- Inadequate intake of dietary iron
- Blood loss during the extracorporeal procedure in hemodialysis patients
- Blood loss from the gastrointestinal tract (bleeding)
- (Too) frequent diagnostic blood tests
- Inadequate intestinal iron absorption and inhibition of iron release from macrophages (anemia of chronic disease)
- Increased iron requirements during therapy with erythropoiesis-stimulating agents (ESAs)
In iron deficiency without anemia, the reduction in iron stores is not large enough to decrease the hemoglobin level. In CKD patients with absolute iron-deficient anemia, however, the iron deficit is so severe that it aggravates renal anemia. Iron supplementation is mandatory in the majority of patients with end-stage renal disease (ESRD), particularly in those receiving ESA therapy. Evaluation of iron status In patients with normal kidney function, absolute iron deficiency is characterized by a low serum ferritin concentration (<30 μg/l). The ferritin cut-off level for absolute iron deficiency in CKD patients is set at 100 μg/l [1], based on the observation that chronic inflammation increases serum ferritin levels approximately three-fold. The Kidney Disease Outcomes Quality Initiative (K/DOQI) guidelines recommend serum ferritin levels >200 μg/l for the adult hemodialysis patient population [2]. The European Best Practice Guidelines define the optimal range for serum ferritin as 200–500 μg/l in adult patients with ESRD [1]. A normal ferritin level (≥100 μg/l) cannot exclude iron deficiency in uremic children [3, 4], but a serum ferritin <60 μg/l is a specific predictor of its presence [5]. An upper ferritin level of 500 μg/l is recommended for adults and children with CKD [2]. Serum ferritin is an indicator of storage iron.
Iron deficiency is accompanied by reductions in serum iron concentration and transferrin saturation (TSAT) and by elevations in red cell distribution width, free erythrocyte protoporphyrin concentration, total serum iron binding capacity (TIBC), and circulating transferrin receptor [6]. Serum soluble transferrin receptor, however, reflects ongoing erythropoiesis but not iron availability in ESA-treated chronic dialysis patients [7]. Typically, TSAT (the ratio of serum iron to TIBC) is 15% or less (normal 16–40%) with iron deficiency, but TSAT also decreases in the presence of acute and chronic inflammation (functional iron deficiency). TSAT is raised with bone marrow dysfunction due to alcohol, cancer chemotherapy, or a megaloblastic process. TSAT is also affected by diurnal variations, being higher in the morning and lower in the evening [8]. Even a TSAT > 20% or a serum ferritin level > 200 μg/l does not exclude iron deficiency in ESRD patients. In a study by Chuang et al. [9], 17% of iron-deficient hemodialysis patients had serum ferritin levels greater than 300 μg/l. Clinically, functional iron deficiency is confirmed by the erythropoietic response to a course of parenteral iron and is excluded by the failure of an erythroid response to intravenous iron administration [10]. Erythrocyte and reticulocyte indices, such as the percentage of hypochromic red blood cells and the reticulocyte hemoglobin content (CHr), provide direct insight into bone marrow iron supply and utilization. Determination of the percentage of hypochromic red blood cells, i.e., those with a cellular hemoglobin concentration <28 g/dl, provides important information on functional iron deficiency in ESA-treated dialysis patients [11]. Tessitore et al. [12] found that hypochromic red blood cells >6% are the best marker to identify adult ESRD patients who will respond best to intravenous iron. CHr has been proposed as a surrogate marker of iron status and as an early predictor of the response to iron therapy in adult dialysis patients [13, 14]. Combined use of CHr and the high-fluorescence reticulocyte count predicts the response to intravenous iron in adult dialysis patients with very high sensitivity and specificity [8]. There are, however, only a few studies in the pediatric renal literature on the use of CHr [15, 16]. In children with ESRD, an increase from baseline CHr levels was observed in response to oral and intravenous iron, but cut-off values for the use of CHr in the pediatric CKD population are not clear, although this measure has proven to be of value in adult ESRD patients. Detection of both absolute and functional iron deficiency is important because iron deficiency is the most common cause of hyporesponsiveness to ESAs. In clinical practice, an increased erythropoietic response to iron supplementation is the most widely accepted reference standard of iron-deficient erythropoiesis. For pharmacological therapy of iron deficiency, both oral and parenteral iron preparations are available. Intravenous iron is more effective than oral iron supplementation, at least in CKD patients. Iron is not only a prerequisite for effective erythropoiesis but also an essential element in all living cells. Elemental iron serves as a component of oxygen-carrying molecules and as a cofactor for enzymatic processes. Its redox potential, however, limits the quantity of iron that can safely be harbored within an individual.
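The cut-offs discussed above lend themselves to a simple illustration. The following minimal Python sketch computes TSAT as the ratio of serum iron to TIBC and flags the thresholds quoted in the text (ferritin <100 μg/l, TSAT ≤15%, hypochromic red blood cells >6%, upper ferritin limit 500 μg/l); the helper functions are hypothetical and not a validated clinical tool.

```python
# Minimal sketch of the iron-status cut-offs discussed above; the
# threshold values come from the text, but the helpers themselves are
# illustrative assumptions, not a validated clinical tool.

def transferrin_saturation(serum_iron_umol_l: float, tibc_umol_l: float) -> float:
    """TSAT (%) = serum iron / total iron binding capacity * 100."""
    return 100.0 * serum_iron_umol_l / tibc_umol_l

def iron_status_flags(ferritin_ug_l: float, tsat_percent: float,
                      hypochromic_rbc_percent: float) -> dict:
    """Flag possible absolute/functional iron deficiency in CKD patients."""
    return {
        "absolute_deficiency_possible": ferritin_ug_l < 100,   # CKD cut-off [1]
        "tsat_low": tsat_percent <= 15,                        # normal 16-40%
        "functional_deficiency_possible": hypochromic_rbc_percent > 6,  # [12]
        "above_recommended_upper_ferritin": ferritin_ug_l > 500,        # [2]
    }

# Example: TSAT = 10/60 -> 16.7%; ferritin 80 ug/l sets the absolute-deficiency flag
tsat = transferrin_saturation(10.0, 60.0)
print(round(tsat, 1), iron_status_flags(80.0, tsat, 4.0))
```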
Oral iron therapy Oral iron is best absorbed if given without food. Side-effects of oral iron therapy include constipation, diarrhea, nausea, and abdominal pain. In the treatment of iron deficiency with ferrous sulphate, the usual adult dose is one 300 mg tablet (containing 60 mg elemental iron) three to four times daily. The pediatric dose is 2–6 mg/kg per day of elemental iron in 2–3 divided doses [17, 18]. Intestinal iron absorption is enhanced in patients with iron deficiency and declines with the correction of iron deficiency and the reaccumulation of iron stores. If side-effects limit compliance, the medication can be administered with food, or the dose can be reduced. One 500-mg ferrous sulphate dose nightly at bedtime may be an effective therapy in adults [19]. Uremia is a chronic inflammatory state [20, 21]. Even in the absence of overt infection or inflammation, many ESRD patients show increased levels of acute-phase proteins, such as C-reactive protein (CRP), ferritin, fibrinogen, and/or interleukin-6 (IL-6), associated with low serum albumin levels [22]. The interaction of proinflammatory cytokines with hepcidin in mediating functional iron deficiency may explain why CKD patients have high ferritin levels, poor intestinal iron absorption, and disturbed iron release from the reticuloendothelial system [23]. In the duodenum and proximal jejunum, nonheme dietary Fe3+ is reduced to Fe2+ by the cytochrome b-like ferrireductase Dcytb. Fe2+ is gathered from the lumen of the intestine and crosses the apical enterocyte brush border membrane through the divalent metal transporter-1 (DMT1). The expression of both Dcytb and DMT1 is strongly affected by the iron concentration within the enterocyte. Circulating levels of hepcidin negatively regulate intestinal iron absorption via the enterocyte DMT1. Hypoxia, anemia, iron deficiency, and/or stimulated erythropoiesis strongly down-regulate hepatic hepcidin release, allowing intestinal iron absorption, while iron overload or inflammation/infection stimulates hepcidin production, resulting in inhibition of intestinal iron absorption. Hepcidin controls the whole-body iron content. It also inhibits the release of iron by the iron exporter ferroportin (iron-regulated transporter-1), located along the entire basolateral membrane of enterocytes and also in the intracellular vesicular compartment of tissue macrophages. Hepcidin is primarily produced in the liver in response to acute-phase reactions. Any further expression depends on the degree of hepatic iron storage (for review, see [24]). Thus, the inflammatory state associated with uremia, rather than uremia per se, is predominantly responsible for poor intestinal iron absorption in ESRD patients. In iron-depleted peritoneal dialysis patients with normal CRP values, high-dose oral iron is well absorbed [25]. The European Pediatric Peritoneal Dialysis Working Group recommended that anemia treatment should aim for a target hemoglobin concentration of at least 11 g/dl, accomplished by administration of ESA and iron, and that oral iron should be preferred in pediatric peritoneal dialysis patients [18]. The majority of pediatric hemodialysis patients are also supplemented with oral iron. The 2001 North American Pediatric Renal Transplant Cooperative Study (NAPRTCS) annual report showed that 84% of pediatric peritoneal dialysis and 72% of pediatric hemodialysis patients at 12 months of dialysis were receiving oral iron therapy [26].
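As a worked example of the oral dosing figures quoted above (2–6 mg/kg per day of elemental iron in 2–3 divided doses; a 300 mg ferrous sulphate tablet containing 60 mg elemental iron, i.e., roughly 20% elemental iron), the following sketch translates a per-kg daily dose into per-dose amounts. The function and its defaults are illustrative assumptions, not prescribing advice.

```python
# Hypothetical helper based on the oral dosing figures quoted in the text;
# illustrative only, not prescribing advice.

ELEMENTAL_FRACTION_FERROUS_SULPHATE = 60.0 / 300.0  # 60 mg elemental per 300 mg tablet

def pediatric_oral_iron_per_dose(weight_kg: float,
                                 mg_per_kg_per_day: float = 3.0,
                                 doses_per_day: int = 3) -> dict:
    daily_elemental_mg = weight_kg * mg_per_kg_per_day
    per_dose_elemental_mg = daily_elemental_mg / doses_per_day
    per_dose_ferrous_sulphate_mg = (per_dose_elemental_mg /
                                    ELEMENTAL_FRACTION_FERROUS_SULPHATE)
    return {
        "daily_elemental_mg": daily_elemental_mg,
        "per_dose_elemental_mg": per_dose_elemental_mg,
        "per_dose_ferrous_sulphate_mg": per_dose_ferrous_sulphate_mg,
    }

# A 20 kg child at 3 mg/kg/day in 3 doses -> 20 mg elemental (100 mg salt) per dose
print(pediatric_oral_iron_per_dose(20.0))
```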
Intravenous iron therapy Since oral iron therapy is often not sufficient in ESRD patients, parenteral administration of iron is necessary to optimally care for these patients. Intravenous iron can be given safely to CKD patients [27–34] as long as the therapy is performed according to international recommendations and guidelines [1, 2]. This therapy is unequivocally superior to oral iron supplementation [35]. All forms of intravenous iron may be associated with acute adverse events [1, 2]. Potential risks associated with intravenous iron therapy include acute allergic reactions such as rash, dyspnoea, wheezing, or even anaphylaxis, as well as long-term complications caused by the generation of powerful oxidant species, initiation and propagation of lipid peroxidation, endothelial dysfunction, propagation of vascular smooth muscle cell proliferation, and/or inhibition of cellular host defense. Allergy is believed to relate to the dextran moiety. Iron dextran therapy is associated with a higher risk for serious type I reactions compared with newer intravenous iron products. Iron sucrose carries the lowest risk for hypersensitivity reactions [36]. In our clinical experience with more than 100,000 intravenous injections of iron sucrose and ferric gluconate within the last 15 years, we detected no significant differences in efficacy or adverse events between the two intravenous iron preparations. Serious reactions to iron dextran are unpredictable and possibly life threatening. Labile- or free-iron reactions are more frequent with nondextran forms [37]. Recommended doses of iron sucrose or ferric gluconate appear safe, at least in adult CKD patients [32, 34, 38]. Parenteral therapy with iron sucrose or ferric gluconate is also safe and effective in the management of anemia in adult hemodialysis patients sensitive to iron dextran [29, 39]. Iron sucrose safety data are sparse in the pediatric CKD literature [2]. It should be considered that the iron load administered intravenously to CKD patients according to international recommendations is less than one-tenth of the iron load delivered by repeated blood transfusions in the era when no ESA therapy was available for ESRD patients. Iron deficiency in CKD patients develops primarily during the correction of renal anemia by ESA treatment. Approximately 150 mg of iron is necessary for an increase of 1 g/dl in hemoglobin level. In adult hemodialysis patients, annual blood losses of up to 4 l of blood, equivalent to 2 g of iron, should be considered [40]. Thus, intravenous iron prevents iron-restricted erythropoiesis during ESA therapy. Parenteral treatment strategies depend on the availability of iron products in the respective countries. Hemodialysis patients should receive at least one dose of intravenous iron every 2 weeks [1]. Careful monitoring of iron status is mandatory in order to avoid iron overload. In patients with anemia of chronic disease (and inflammation), a major part of intravenously administered iron is transported into the reticuloendothelial system, where it is not readily available for erythropoiesis [41]. Intravenous iron therapy is underused in pediatric ESRD patients.
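The iron arithmetic above (approximately 150 mg of iron per 1 g/dl rise in hemoglobin, plus roughly 2 g of iron per year to offset blood losses in adult hemodialysis patients) can be put into a back-of-the-envelope estimate; the function below is a hypothetical illustration only.

```python
# Back-of-the-envelope sketch of the iron requirement figures quoted in
# the text; the function name and structure are illustrative assumptions.

MG_IRON_PER_G_DL_HB = 150.0       # ~150 mg iron per 1 g/dl hemoglobin rise
ANNUAL_LOSS_MG_ADULT_HD = 2000.0  # ~4 l blood/year corresponds to ~2 g iron

def estimated_iron_need_mg(hb_now_g_dl: float, hb_target_g_dl: float,
                           include_annual_losses: bool = True) -> float:
    correction = max(0.0, hb_target_g_dl - hb_now_g_dl) * MG_IRON_PER_G_DL_HB
    losses = ANNUAL_LOSS_MG_ADULT_HD if include_annual_losses else 0.0
    return correction + losses

# Raising Hb from 9 to 11 g/dl: 2 * 150 = 300 mg, plus ~2,000 mg/year losses
print(estimated_iron_need_mg(9.0, 11.0))  # -> 2300.0
```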
Chavers et al. [42] compared anemia prevalence in US Medicare pediatric and adult dialysis patients treated with ESAs from 1996 to 2000. Anemia (defined as hemoglobin values less than 11 g/dl) was present in pediatric and adult hemodialysis patients during 54.1% versus 39.8% of patient years, and in pediatric and adult peritoneal dialysis patients during 69.5% versus 55.1% of patient years, respectively. The percentage of patient years with intravenous iron was low, especially for pediatric peritoneal dialysis patients: 33.9% (age group 0–4 years) and 71% (age group 5–19 years) among pediatric hemodialysis patients versus 0.3% and 19.4% among pediatric peritoneal dialysis patients in these age categories, respectively. Among pediatric hemodialysis and peritoneal dialysis patients, intravenous iron was not administered in 34% and 85% of patient years, respectively [42]. Data obtained from the US Centers for Medicare and Medicaid Services on hemodialysis patients aged between 12 and <18 years indicate that 37% of these patients are anemic, defined as hemoglobin <11 g/dl. Dialysis for <6 months, a low serum albumin, and a mean TSAT <20% were identified as predictors of anemia in these children. Despite the prescription of iron supplements in almost all pediatric patients, there was evidence for low TSAT and/or low ferritin in many children. In this study, approximately 60% of all children received intravenous iron therapy [43]. An international multicenter study investigated the safety and efficacy of two dosing regimens (1.5 mg/kg or 3 mg/kg) of ferric gluconate during eight consecutive hemodialysis sessions in iron-deficient pediatric hemodialysis patients receiving concomitant ESA therapy. Efficacy and safety profiles were comparable, with no unexpected adverse events at either dose [16]. The initial recommended ferric gluconate therapy is 1.5 mg/kg for eight doses in iron-deficient pediatric hemodialysis patients and 1 mg/kg per week in iron-replete pediatric hemodialysis patients, with subsequent dose adjustments made according to TSAT and/or ferritin levels [16, 44]. In children, iron sucrose in a high dose (5 mg/kg) should be given over 90 min and in a low dose (up to 2 mg/kg) over 3 min [45]. Another recommendation for pediatric patients is to inject 6 mg iron/kg per month during iron deficiency, with subsequent dose adjustments according to serum ferritin [46]. Nonrandomized intravenous iron (1–4 mg/kg per week) trials in children on hemodialysis and in nondialyzed or transplanted children showed an increase in hemoglobin or hematocrit and a decrease in ESA requirements of between 5% and 62% per week or per dose of ESA [3, 4, 47, 48]. De Palo et al. [49] reported an excessive increase in hemoglobin with severe hypertension in children on maintenance hemodialysis after the first month of darbepoetin alpha therapy combined with intravenous ferric gluconate at a very high dose of 10–20 mg/kg per week. The patients had already been on erythropoietin therapy for at least 6 months with adequate iron status (serum ferritin 220 ± 105 μg/l; TSAT 24.2 ± 11.5%). The complications observed in this study are not surprising, as such high-dose intravenous iron therapy is not justified in either adult or pediatric dialysis patients with “adequate iron status”. The current practice of intravenous iron therapy in pediatric hemodialysis patients is often extrapolated from adult data rather than based on data obtained from prospective multicenter trials performed in children [50]. Even if exposure to intravenous iron may lead to oxidative stress, renal injury, infection, and/or cardiovascular disease, the magnitude of these complications is not really clear.
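The pediatric ferric gluconate regimen quoted above (1.5 mg/kg for eight doses when iron deficient, 1 mg/kg per week when iron replete [16, 44]) can be sketched as follows; the helper is hypothetical and omits the subsequent adjustments to TSAT and ferritin.

```python
# Hypothetical sketch of the pediatric ferric gluconate regimen quoted in
# the text [16, 44]; subsequent TSAT/ferritin-guided adjustments omitted.

def ferric_gluconate_course_mg(weight_kg: float, iron_deficient: bool) -> list:
    if iron_deficient:
        return [round(1.5 * weight_kg, 1)] * 8  # eight consecutive HD sessions
    return [round(1.0 * weight_kg, 1)]          # single weekly maintenance dose

course = ferric_gluconate_course_mg(30.0, iron_deficient=True)
print(course, "total:", sum(course), "mg")  # 8 x 45 mg = 360 mg
```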
The overall risk–benefit ratio favors the use of intravenous iron in CKD patients in order to optimize erythropoiesis and prevent iron deficiency [51]. Intravenous iron therapy is still underutilized in the adult hemodialysis population [52], but its use increased from 1997 to 2002, and ferric gluconate and iron sucrose have become the predominant forms of intravenous iron therapy [53]. Iron and inflammation/infection Intravenous iron therapy may adversely impact CKD patients via a potentiation of systemic inflammation. In animals, even a single ultra-high-dose intravenous injection of the available iron preparations (2 mg iron for mice with a body weight of 25–35 g, corresponding to 5,000 mg iron for a 75-kg adult patient) does not independently raise plasma levels of tumour necrosis factor-α (TNF-α). Systemic inflammation experimentally induced by intraperitoneal endotoxin injection (2 or 10 mg/kg in mice) resulted in a dramatic increase in plasma TNF-α levels. Interestingly, 2 h following concomitant injection of endotoxin and ferric gluconate (2 mg) or iron dextran (2 mg), a decrease in plasma TNF-α levels was observed. In contrast, combined endotoxin and iron sucrose injection resulted in a further increase in plasma TNF-α compared with endotoxin alone [54]. However, a 75-kg CKD patient will never receive 5,000–25,000 mg of endotoxin intraperitoneally together with a concomitant intravenous injection of 5,000 mg iron sucrose; only under such artificial conditions are TNF-α mRNA and TNF-α release stimulated. It is therefore of particular importance that relevant clinical studies demonstrated that intravenous iron sucrose therapy within recommended doses may even display anti-inflammatory effects [55, 56]. Intravenous iron sucrose therapy positively affects circulating cytokine levels in hemodialysis patients: IL-4 levels increase, while TNF-α levels decrease [55]. There is a direct correlation between IL-4 and TSAT but an inverse correlation between TNF-α and TSAT. Hemoglobin levels increase with an increase of IL-4 and a decrease of TNF-α, while ESA dose decreases with an increase of IL-4 and a decrease of TNF-α [55]. In other words, adequately dosed intravenous iron therapy in hemodialysis patients results in down-regulation of proinflammatory immune effector pathways and stimulation of the expression of the anti-inflammatory cytokine IL-4. By these mechanisms, in addition to its well-known stimulatory effects on erythropoiesis, iron therapy contributes to an increase in hemoglobin levels and to a decrease in the need for ESAs. The anti-inflammatory properties of intravenous iron therapy have also been demonstrated in patients with rheumatoid arthritis [56]. In contrast, iron-mediated weakening of Th-1 immune effector function (estimated by lowered TNF-α production) with a subsequent strengthening of Th-2-mediated immune effector function (estimated by increased IL-4 production) is an unfavorable condition for ESRD patients in the case of an acute infection or malignant disease [55]. Moreover, intravenous administration of iron increases the availability of this essential nutrient for microorganisms [57], which has been associated with an increased incidence of infectious complications in ESRD patients. Teehan et al. [58] followed 132 hemodialysis patients for up to 1 year after the initiation of intravenous iron therapy for the outcome of bacteremia.
Iron-replete patients (those with a TSAT value ≥ 20% and a ferritin level ≥ 100 ng/ml) had a significantly higher risk of bacteremia (hazard ratio 2.3 in the univariate analysis and 2.5 in the multivariate analysis) compared with adult hemodialysis patients who were not iron replete [58]. Inhibition of the intracellular killing of bacteria by polymorphonuclear leukocytes (PMNL) due to iron sucrose therapy in high-ferritin hemodialysis patients has been reported [59]. Peritoneal dialysis patients receiving high-dose intravenous iron sucrose also displayed short-term inhibition of bacterial killing by PMNL [60]. Finally, iron sucrose as well as ferric gluconate inhibit the in vitro migration of PMNL through endothelial cells [61]. All these data suggest a risk for infectious complications, at least in patients overtreated with iron. However, clinical studies on intravenous iron therapy in ESRD patients have reported controversial results [62–64]. In a published cohort study of 32,566 hemodialysis patients, there was no association between iron administration and mortality. This study by Feldman and coworkers [64] supports the intravenous administration of iron ≥1,000 mg over 6 months if needed to maintain target hemoglobin levels. This is, however, an adult and not a pediatric recommendation. Intravenous iron therapy should be withheld in the presence of acute infection until the infection has been successfully treated and resolved [65]. During acute infection, intravenous iron is ineffective and may increase the virulence of bacterial and viral pathogens. On the other hand, ESRD patients with chronic infectious complications may develop absolute iron deficiency if iron supplementation is withheld over months. In such a situation, iron should be administered intravenously as soon as ferritin levels drop below 100 μg/l (personal opinion). ESA-stimulated erythropoiesis in chronically infected adult ESRD patients may benefit from low-dose intravenous iron supplementation (10–20 mg iron sucrose or ferric gluconate per hemodialysis session), even if serum ferritin is normal or slightly elevated. The level of serum ferritin at which ESRD patients are considered to be in an iron overload state is still not defined. Inflammatory states should not in general be considered indications to withhold the benefits of intravenous iron therapy [65]. However, a clinical problem is the diagnosis of chronic anemia associated with inflammation and true iron deficiency, as serum ferritin concentration then increases rather than decreases. Iron and kidney function Intravenous administration of 100 mg iron sucrose in CKD patients caused transient proteinuria and tubular damage [37], but ferric gluconate did not (125 mg infused over 1 h or 250 mg infused over 2 h) [66]. Induction of passive Heymann nephritis in rats resulted in a marked increase in the nonheme iron content of kidney cortex and tubules, while an iron-deficient diet caused a significant reduction of the nonheme iron level in glomeruli and also a significant reduction of proteinuria in these animals [67]. Pediatric thalassemia patients have a high prevalence of renal tubular abnormalities, probably caused by the anemia and the increased oxidative stress induced by excess iron deposits. Significantly higher levels of urinary N-acetyl-beta-d-glucosaminidase, malondialdehyde, and beta-2-microglobulin were found in these children compared with normal children [68].
Under artificial experimental conditions (intravenous injection of 2 mg iron sucrose or ferric gluconate into mice with a body weight of 25–35 g), induction of monocyte chemoattractant protein-1 (MCP-1) in renal and extrarenal tissues has been observed. Since MCP-1 has profibrotic properties, implications for CKD progression in the case of intravenous iron therapy have been suggested [69]. However, in a recent article, Mircescu et al. [70] reported that intravenous iron sucrose therapy (200 mg elemental iron per month for 12 months) resulted in an increase in hemoglobin from 9.7 ± 1.1 to 11.3 ± 2.5 g/dl in nondiabetic patients with CKD and a mean glomerular filtration rate (GFR) of 36.2 ± 5.2 ml/min per 1.73 m2, estimated by the formula of Cockcroft and Gault. The majority of these CKD patients had preexisting iron deficiency (mean ferritin 98.0 μg/l, range 24.8–139.0 μg/l). An important finding of this study was that the GFR (final value at the end of the study 37.2 ± 0.9 ml/min per 1.73 m2) remained completely stable over a period of 12 months despite 2,400 mg of intravenous iron sucrose administration. The CKD patients had relatively high blood pressure (140 ± 32/82 ± 20 mmHg at baseline), which did not change throughout the investigation [70]. Agarwal [71] found that a single dose of 100 mg iron sucrose results in a transient increase of MCP-1 in plasma and urine of CKD patients. Those who believe that 100 mg iron sucrose administered to CKD patients may negatively affect kidney function should simply administer a lower intravenous iron dose, e.g., 50 mg iron sucrose intravenously.
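The GFR values in the Mircescu study above were estimated with the formula of Cockcroft and Gault. For reference, a minimal sketch of that classic estimate (strictly a creatinine clearance in ml/min, assuming serum creatinine is given in mg/dl):

```python
# Minimal sketch of the Cockcroft-Gault creatinine clearance estimate
# referenced above; the 0.85 correction for women is part of the
# published formula. Illustrative only.

def cockcroft_gault_crcl(age_years: float, weight_kg: float,
                         serum_creatinine_mg_dl: float,
                         female: bool = False) -> float:
    crcl = ((140.0 - age_years) * weight_kg) / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# Example: 60-year-old, 75 kg man with creatinine 2.0 mg/dl -> ~41.7 ml/min
print(round(cockcroft_gault_crcl(60, 75, 2.0), 1))
```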
Iron and oxidative stress Zager et al. [72] compared in vitro parenteral iron toxicity induced by three commercially available iron preparations (iron dextran, ferric gluconate, iron sucrose) using renal tubular cells and renal cortical homogenates. Each test agent induced massive and similar degrees of lipid peroxidation. Under the in vitro conditions used, iron sucrose caused markedly higher cell death than ferric gluconate, and ferric gluconate caused higher cell death than iron dextran. This relative toxicity profile was also observed in cultured aortic endothelial cells. Again, it should be stressed that the study of Mircescu et al. [70] demonstrated that intravenous iron sucrose therapy administered within international recommendations (none of the 58 CKD patients exceeded a serum ferritin of 500 μg/l) does not cause a decline in kidney function in CKD patients over a period of 1 year. Intravenous iron therapy may enhance symptoms of oxidative stress [73–75]. Drüeke et al. [75] demonstrated that advanced oxidation protein products (AOPPs) correlated with iron exposure and carotid artery intima thickness in dialysis patients. In hemodialysis patients, oxidative stress as a result of intravenous iron therapy caused serum albumin oxidation [76]. Ferric gluconate modifies fibrinogen and β2-microglobulin as markers of oxidative stress in adult hemodialysis patients [77, 78]. Intravenous administration of 100 mg iron sucrose in CKD patients increased malondialdehyde as a marker of lipid peroxidation [37]. Hemodialysis patients with ferritin levels above 650 μg/l showed an enhanced oxidative burst in PMNL [59]. However, not all studies found evidence for enhanced oxidative stress caused by parenteral iron therapy in ESRD patients. Hemodialysis therapy per se was found to cause a significant increase in peroxide concentration. Interestingly, this rise in plasma total peroxides was not additionally influenced by concomitant intravenous injection of 100 mg iron sucrose [79]. These data confirm the increased oxidative stress associated with hemodialysis [80]. Whether intravenous iron therapy results in an additional oxidative stress reaction needs to be evaluated further. Increased blood levels of non-transferrin-bound iron (NTBI) and/or its redox-active fraction have been reported in adult ESRD patients receiving intravenous iron therapy [79, 81, 82]. Intravenous infusion of 300 mg iron sucrose in ESRD patients also caused peripheral vasodilation, which was confirmed by increased forearm blood flow. NTBI and redox-active iron were considered to be, at least in part, responsible for the endothelial dysfunction observed in ESRD patients. However, an increase in NTBI and redox-active iron caused by intravenous iron sucrose infusion did not influence vascular reactivity to intra-arterial acetylcholine, glycerol-trinitrate, or L-N-mono-methyl-arginine (L-NMMA) [82]. Vitamin C and iron ESRD patients undergoing regular hemodialysis or hemodiafiltration may develop vitamin C deficiency [83]. Vitamin C deficiency may cause oxidative stress and vascular complications as well as impairment of intestinal iron absorption and of iron mobilization from iron stores. Moretti et al. [84] measured iron absorption in young women from test meals fortified with isotopically labeled ferric pyrophosphate and ferrous sulfate. The addition of ascorbic acid at a molar ratio of 4:1 to iron increased iron absorption from ferric pyrophosphate to 5.8% and that from ferrous sulfate to 14.8%. In the fasting state, ferrous ascorbate is better absorbed than ferric hydroxide-polymaltose complex [85]. High-dose oral vitamin C may increase intestinal aluminium absorption [86]. Oxidative stress can cause hyporesponsiveness to ESA therapy in ESRD patients. Vitamin C may improve erythropoiesis through its antioxidative properties [87]. Intravenous ascorbic acid therapy facilitates iron release from inert deposits, resulting in a decrease of the soluble transferrin receptor and an increase of TSAT [88]. In contrast, oral vitamin C supplementation (250 mg three times per week for 2 months) did not influence oxidative/antioxidative stress and inflammation markers in adult hemodialysis patients [89]. In adult hemodialysis patients on maintenance intravenous iron sucrose therapy, intravenous administration of 500 mg ascorbic acid three times a week for 6 months resulted in an increase of TSAT and hemoglobin in approximately 65% of the patients [90]. In contrast, neither oral nor intravenous ascorbic acid changed TSAT or hemoglobin levels in a study performed by Chan et al. [91]. Ascorbic acid increases the intracellular labile iron pool and iron mobilization to transferrin in human hepatoma HepG2 cells only in the presence of iron sucrose, but not in the presence of iron dextran or ferric gluconate [92]. Several studies have reported an increase in hemoglobin and/or a decrease in ESA dose in adult ESRD patients receiving adjuvant ascorbic acid therapy three times per week [93–99]. Measurements of plasma oxalate concentration are needed in ESRD patients supplemented with ascorbic acid [100]. Studies on vitamin C and iron in children with ESRD are not available so far.
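The 4:1 ascorbic acid to iron molar ratio used by Moretti et al. [84] translates into mass terms as follows; the molar masses are standard values, and the helper itself is illustrative.

```python
# Translating the 4:1 ascorbic acid:iron molar ratio into milligrams;
# molar masses are standard values, the helper is illustrative only.

M_ASCORBIC_ACID = 176.12  # g/mol
M_IRON = 55.85            # g/mol

def ascorbic_acid_mg_for_iron(iron_mg: float, molar_ratio: float = 4.0) -> float:
    mol_iron = iron_mg / M_IRON
    return mol_iron * molar_ratio * M_ASCORBIC_ACID

# 10 mg elemental iron at a 4:1 molar ratio needs ~126 mg ascorbic acid
print(round(ascorbic_acid_mg_for_iron(10.0), 1))
```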
Iron therapy after kidney transplantation Anemia is observed in 21–39.7% of adult renal transplant patients [101–104]. The prevalence may be even higher in pediatric transplant recipients: 84.3% of children were anemic in the first month after kidney transplantation, and the prevalence of anemia did not fall below 64.2% between 6 months and 6 years after transplantation. Iron deficiency was identified in 27–56% of children between 1 and 60 months posttransplantation [105]. Fourteen pediatric and young adult renal transplant recipients received single ferric gluconate infusions ranging from 1.9 to 6.4 mg/kg. The mean hemoglobin level increased significantly from 10.1 ± 1.6 to 11.4 ± 2.1 g/dl following ferric gluconate therapy. Adverse events were observed in three children [106]. A recent study by Morii and coworkers showed that oral coadministration of ferrous sulphate markedly decreased the absorption of mycophenolate mofetil in healthy Japanese subjects [107]. However, a randomized crossover trial failed to confirm this observation in European transplant patients receiving long-term mycophenolate mofetil therapy [108]. In line with this, an in vitro study showed that iron ions did not interact with mycophenolate mofetil [109]. Conclusions The integration of ESA and intravenous or oral iron therapy into standard anemia management resulted in target hemoglobin levels (as established by international guidelines) in the vast majority of ESRD patients [110]. Correction of renal anemia has reduced morbidity, mortality, and hospitalization in ESRD patients. It has also improved quality of life, cognitive function, and physical activity. Using a balanced approach to iron supplementation within international recommendations has allowed the attainment of the benefits of intravenous iron therapy at storage iron levels far below those generally seen with transfusions in the pre-ESA era [110].
[ "erythropoiesis-stimulating agents", "iron supplementation", "iron status", "inflammation", "infection" ]
[ "P", "P", "P", "P", "P" ]
Diabetologia-3-1-1914292
Advanced glycation end products cause increased CCN family and extracellular matrix gene expression in the diabetic rodent retina
Aims/hypothesis Referred to as CCN, the family of growth factors consisting of cysteine-rich protein 61 (CYR61, also known as CCN1), connective tissue growth factor (CTGF, also known as CCN2), nephroblastoma overexpressed gene (NOV, also known as CCN3) and WNT1-inducible signalling pathway proteins 1, 2 and 3 (WISP1, −2 and −3; also known as CCN4, −5 and −6) affects cellular growth, differentiation, adhesion and locomotion in wound repair, fibrotic disorders, inflammation and angiogenesis. AGEs formed in the diabetic milieu affect the same processes, leading to diabetic complications including diabetic retinopathy. We hypothesised that pathological effects of AGEs in the diabetic retina are a consequence of AGE-induced alterations in CCN family expression. Introduction Diabetic retinopathy is a major complication of diabetes and a leading cause of blindness [1, 2]. Despite recent progress in understanding the pathogenesis of diabetic retinopathy, further research is warranted, as the disease remains neither preventable nor curable. Diabetic retinopathy is preceded by an asymptomatic preclinical phase, in which a microangiopathy develops that is characterised by diffusely increased vascular permeability and capillary basement membrane thickening, resulting from excess accumulation of extracellular matrix components [3–5]. In later stages of preclinical diabetic retinopathy, endothelial cell and pericyte loss, i.e. vascular cell death, leads to the development of acellular capillaries. Experimental prevention of basement membrane thickening has been shown to ameliorate these retinal vascular changes [6, 7]. Thus, in galactose-fed rats, a model of diabetes, downregulation of fibronectin synthesis partly prevented retinal basement membrane thickening and also reduced pericyte and endothelial cell loss [6]. Combined downregulation of mRNA levels of the extracellular matrix components fibronectin, collagen type IV alpha 3 (Col4a3) and laminin beta 1 (Lamb1) not only prevented increases in their protein levels, but also reduced vascular leakage in the retina of rats with streptozotocin-induced diabetes [7]. These findings suggest that basement membrane thickening is not just an epiphenomenon of the diabetic state, but may be instrumental in the progression of sight-threatening diabetic retinopathy. Modulation of basement membrane thickening may therefore have a preventive effect on the development of diabetic retinopathy. However, the mechanisms leading to diabetes-induced basement membrane thickening remain largely unknown. One of the postulated mechanisms is the formation of AGEs in the diabetic milieu. Inhibition of AGE formation by aminoguanidine has been shown to protect against retinal capillary basement membrane thickening [8]. AGEs have also been shown to induce extracellular matrix synthesis in the diabetic rat kidney. A similar induction of extracellular matrix synthesis was also shown to be mediated by connective tissue growth factor (CTGF, also known as CCN2) [9, 10], a member of the family of proteins referred to as CCN (for cysteine-rich protein 61 [CYR61, also known as CCN1], CTGF and nephroblastoma overexpressed gene [NOV, also known as CCN3]). CTGF leads to accumulation of extracellular matrix by induction of collagen, fibronectin and laminin synthesis, as well as by decreased proteolysis of extracellular matrix components as a result of increased production of tissue inhibitors of metalloproteases (TIMP) [9, 11–18].
Recently, we observed that Ctgf+/− mice (lacking one functional allele for Ctgf) are protected from diabetes-induced basement membrane thickening of retinal and kidney glomerular capillaries ([19]; P. Roestenberg, F. A. Van Nieuwenhoven, R. Verheul et al., unpublished results). We hypothesised in the present study that AGE-induced basement membrane thickening observed in the retina is at least partly mediated by CTGF. To establish the role of CTGF in the AGE-induced production of vascular basement membrane components and their mediators, we investigated the effects of aminoguanidine on the levels of CTGF, the other CCN family members (CYR61, NOV and WNT1-inducible signalling pathway protein 1, 2 and 3 [WISP1, −2, and −3, also known as CCN4, −5 and −6]) [20–22] and vascular basement membrane-related molecules in the retina of rats with streptozotocin-induced diabetes, as well as in the retina of mice infused with AGE. Materials and methods Animals All animal studies were carried out in accordance with the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. All experiments involving rats were reviewed and approved by the ethics committee for animal care and use of the Free University Medical Centre, Amsterdam, the Netherlands. All experiments involving mice were carried out in accordance with British Home Office regulations. Streptozotocin-induced diabetic rat model Adult Wistar rats (Charles River, Maastricht, the Netherlands), weighing approximately 250 g, were randomly divided into three experimental groups: a control group (n = 14), a diabetic group (n = 16) and a diabetic group treated with aminoguanidine hydrogen carbonate (n = 14; Fluka, Buchs, Switzerland). Diabetes was induced by a single i.p. injection of 60 mg/kg streptozotocin (Sigma, St Louis, MO, USA). Immediately prior to use, streptozotocin was dissolved in cold 0.1 mol/l citrate buffer, pH 4.5. Control rats received a single i.p. injection of 0.1 mol/l citrate buffer only. Aminoguanidine was administered from day 1 at a dose of 1 g/l in drinking water. Serum glucose levels and body weight were monitored at the start and end of the experiment. Diabetes was verified by a serum glucose level >13.9 mmol/l. At 6 weeks, half of the rats were randomly selected from the three experimental groups and killed with a lethal dose of pentobarbital sodium (i.p.). At 12 weeks the remaining rats were killed. Eyes from each rat were rapidly enucleated, one being snap-frozen in liquid nitrogen and stored at −80°C, while the contralateral eye was fixed in 4% paraformaldehyde. Additionally, blood samples were collected and plasma levels of Nɛ-(carboxymethyl)lysine (CML) were measured by stable-isotope dilution tandem mass spectrometry [23]. In vivo administration of exogenous AGE Female C57BL/6 mice (10–12 weeks old) were randomly assigned to two groups of equal size and injected i.p. with either native mouse serum albumin (MSA) or glycoaldehyde-modified MSA (10 mg/kg) daily for seven consecutive days. At 3–4 h after the final injections, mice were killed, eyes enucleated and retinas dissected freshly, before being snap-frozen in liquid nitrogen. Preparation of AGE-modified albumin The glycoaldehyde-modified MSA preparation was made according to Nagai et al. [24]. Following dialysis against PBS, endotoxin was removed using an endotoxin-removing column (Pierce, Rockford, IL, USA).
Glycoaldehyde-MSA and native MSA were passed three times through separate columns to ensure that all contaminating endotoxin was removed. Analysis of the CML content of glycoaldehyde-MSA and native MSA was performed using gas chromatography–mass spectrometry. The lysine content of the samples was analysed by cation exchange chromatography and the values for CML were corrected for lysine loss and expressed as mmol CML/mol lysine as previously reported [25]. RNA isolation and mRNA quantification Snap-frozen rat and mouse eyes were allowed to thaw in ice-cold RNAlater (Ambion, Austin, TX, USA). The anterior chambers of the eyes were removed and the retinas were carefully dissected. Total RNA was extracted with TRIzol reagent (Invitrogen, Carlsbad, CA, USA). The amount of total retinal RNA isolated was approximately 12 μg per retina (spectrophotometric measurements at 260 nm), with no significant differences between the experimental groups. The integrity of the RNA samples was verified using an automated electrophoresis system (Experion; Bio-Rad, Hercules, CA, USA). All samples had sharp ribosomal RNA bands with no sign of degradation. A 2-μg aliquot of total RNA was treated with DNAse I (amplification grade; Invitrogen) and reverse-transcribed into first strand cDNA with Superscript II and oligo(dT)12–18 (Invitrogen). Details of the primers are given in the Electronic supplementary material (ESM) (ESM Table 1). Specificity of the primers was confirmed by a nucleotide–nucleotide BLAST (http://www.ncbi.nlm.nih.gov/blast/index.shtml) search. The presence of a single PCR product was verified by both the presence of a single melting temperature peak and detection of a single band of the expected size on a 3% agarose gel. Real-time quantitative PCR (qPCR) was performed in a sequence detection system (ABI Prism 5700; Applied Biosystems, Foster City, CA, USA). For each primer set a mastermix was prepared consisting of 1× SYBR Green PCR buffer (Eurogentec, Seraing, Belgium), 3 mmol/l MgCl2, 200 μmol/l each of dATP, dGTP and dCTP, 400 μmol/l dUTP, 0.5 U AmpliTaq Gold (Eurogentec) and 2 pmol primers. All cDNA samples were diluted 1:10 and amplified using the following PCR protocol: 10 min at 95°C, followed by 40 cycles of 15 s at 95°C and 60 s at 60°C and a melting program (60–95°C). Relative gene expression was calculated using the equation R = E^(−Ct), where E is the mean efficiency of all samples for the gene being evaluated and Ct is the cycle threshold for the gene as determined during real-time PCR. All qPCR experiments were performed at least twice. Real-time qPCR data from the mouse experiments were normalised using 18S rRNA, which was determined to be stably expressed in all experimental groups. For the rat experiments, no suitable housekeeping genes that were not regulated by the diabetic background could be found. Therefore, the rat data were normalised using the relative starting amounts of cDNA, which were determined using a novel technique recently developed in our laboratory (J. M. Hughes, I. Klaassen, W. Kamphuis, C. J. F. Van Noorden and R. O. Schlingemann, unpublished results). In brief, reverse transcription reactions were carried out in duplicate, with one set of reactions containing the normal dNTP mix and the parallel set containing a dNTP mix with α-32P-labelled dCTP. From each sample, 4 μl of the α-32P-labelled dCTP-incorporated cDNA were pipetted onto separate nitrocellulose filters, which were allowed to air-dry. After washing with 0.1 mol/l phosphate buffer, the radioactivity of the filters was measured using a scintillation counter (Beckman Coulter, Fullerton, CA, USA).
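A minimal sketch of the relative-expression calculation described above, R = E^(−Ct), together with normalisation to a stably expressed reference (18S rRNA in the mouse experiments); the efficiencies, Ct values and sample layout below are invented for illustration.

```python
# Sketch of the R = E^(-Ct) relative-expression calculation described
# above; all numeric values are invented for illustration.

def relative_expression(efficiency: float, ct: float) -> float:
    """R = E^(-Ct), with E the mean amplification efficiency
    (close to 2 if the product doubles every cycle)."""
    return efficiency ** (-ct)

def normalised_expression(e_target: float, ct_target: float,
                          e_ref: float, ct_ref: float) -> float:
    """Target expression divided by reference (e.g., 18S rRNA) expression."""
    return (relative_expression(e_target, ct_target) /
            relative_expression(e_ref, ct_ref))

# Target detected 2 cycles earlier in treated vs control, reference unchanged:
control = normalised_expression(1.9, 26.0, 1.95, 12.0)
treated = normalised_expression(1.9, 24.0, 1.95, 12.0)
print(round(treated / control, 2))  # fold change ~ 1.9^2 = 3.61
```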
Western blotting Protein was isolated from paraformaldehyde-fixed retinal tissue as described by Shi et al. [26]. In brief, retinas were dissected from the 4% paraformaldehyde-fixed rat eyes and pooled in 1.5 ml Eppendorf vials in antigen-retrieval buffer (20 mmol/l Tris, 2% SDS, pH 7). The pooled samples were then dissociated using a pestle and incubated at 100°C for 20 min followed by 2 h at 60°C. Supernatant fractions were collected after centrifugation at 4°C for 15 min at 10,000g. Protein concentrations were determined with a bicinchoninic acid protein assay kit (Perbio, Etten-Leur, the Netherlands) and adjusted to 2.5 μg/μl. For SDS-PAGE and western blots, proteins were separated using 13% mini gels under reducing conditions. Following gel electrophoresis, proteins were transferred to a nitrocellulose filter (Whatman Schleicher & Schuell, Brentford, Middlesex, UK) using a semi-dry transfer cell (Bio-Rad). At the end of the transfer, the filter was blocked in blocking buffer (1% non-fat skimmed milk powder, 1% BSA, 1 mmol/l NaN3 in Tris-buffered saline and 0.05% Tween) overnight at 4°C while being gently rocked. The filter was incubated in blocking solution for 2 h at room temperature with the relevant antibodies as defined below. Following three washes in TBS/0.05% Tween-20, the blots were incubated in blocking solution with horseradish peroxidase-conjugated goat–anti-rabbit or goat–anti-mouse antibodies for 1 h at room temperature. After extensive washing, blots were developed using a chemiluminescent kit (SuperSignal West Pico; Perbio). Filters were exposed to X-ray film (Kodak-Biomax, Herts, UK). Primary antibodies (ESM Table 2) were diluted with 0.3% skimmed milk powder in TBS/Tween, and horseradish peroxidase-conjugated goat–anti-rabbit or goat–anti-mouse (Perbio) was diluted 1:20,000. The intensity of bands was quantified by densitometry using AlphaEase software (AlphaInnotech, San Leandro, CA, USA). Immunohistochemistry Cryostat sections (10-μm thick) were stained using an indirect immunoperoxidase procedure as previously described [27]. Primary antibodies are listed in ESM Table 2. Primary antibody was omitted for negative controls. Indirect immunoperoxidase staining was performed using histostaining reagents (Powervision; ImmunoVision, Daly City, CA, USA) for all sections except those incubated with the TIMP1 antibody. The TIMP1 sections were indirectly stained using horseradish peroxidase-labelled rabbit–anti-goat antibody (P0160; Dako, Glostrup, Denmark). Statistics CML data were log10 transformed to obtain a normal distribution. Significant differences (p < 0.05) in glucose and CML plasma levels, and in gene expression levels among groups, were calculated with one-way ANOVA. The Bonferroni post hoc test was used to perform pairwise comparisons of groups. Results Glucose and CML levels in control and diabetic rats Induction of diabetes and the degree of hyperglycaemia in streptozotocin-treated rats were established by serum glucose levels (Fig. 1a). Streptozotocin treatment resulted in a three- to fourfold increase in serum glucose concentration after 6 and 12 weeks, irrespective of aminoguanidine treatment.
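The statistical procedure described under Statistics above (log10 transformation, one-way ANOVA across the three groups, Bonferroni-corrected pairwise comparisons) can be sketched as follows; the data are invented and scipy is assumed to be available.

```python
# Sketch of the statistical procedure described above; the measurements
# are invented placeholder data, and scipy is an assumed dependency.

import itertools
import numpy as np
from scipy import stats

groups = {
    "control":  np.array([1.1, 1.3, 1.2, 1.4, 1.2]),
    "diabetic": np.array([2.3, 2.6, 2.4, 2.8, 2.5]),
    "diab_AG":  np.array([1.9, 2.0, 1.8, 2.2, 2.1]),
}
# log10 transform to approximate a normal distribution
logged = {k: np.log10(v) for k, v in groups.items()}

f_stat, p_anova = stats.f_oneway(*logged.values())
print(f"one-way ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")

# Bonferroni-corrected pairwise t tests
pairs = list(itertools.combinations(logged, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(logged[a], logged[b])
    p_bonf = min(1.0, p * len(pairs))  # multiply raw p by number of comparisons
    print(f"{a} vs {b}: corrected p={p_bonf:.4f}")
```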
Fig. 1 Plasma glucose (a) and CML (b) levels in control and diabetic rats at 6 and 12 weeks of streptozotocin-induced diabetes. White bars, control rats; black bars, diabetic rats; cross-hatched bars, diabetic rats treated with aminoguanidine. *p < 0.05 and ***p < 0.001 for difference between experimental and control group. The aminoguanidine group was only significantly different from the groups with diabetes at 12 weeks (†p < 0.05). The error bars show the standard deviation for each group. Plasma levels of CML were measured to determine the efficacy of aminoguanidine treatment. CML plasma levels were elevated twofold at 6 and 12 weeks after streptozotocin treatment (Fig. 1b). Aminoguanidine treatment had no effect on CML levels at 6 weeks, but at 12 weeks the CML levels were decreased by approximately 25%. CCN family gene expression in control and diabetic rats After 6 weeks of diabetes, Cyr61 mRNA levels in the diabetic retina were increased threefold compared with control retina. Treatment with aminoguanidine reduced Cyr61 expression to levels that were not significantly different from control levels (Fig. 2). CYR61 protein was mainly localised in the ganglion cell layer (Fig. 3). No differences in staining patterns were found between experimental groups. Fig. 2 Gene expression of CCN family members. Fold change, compared with control values, in retinal mRNA levels of CCN family members in streptozotocin-induced diabetic rats at 6 (white bars) and 12 weeks (cross-hatched bars) after streptozotocin-induction and in aminoguanidine-treated, streptozotocin-induced diabetic rats at 6 (black bars) and 12 weeks (grey bars) of diabetes. *p < 0.05 for difference between experimental group and control group; †p < 0.05 for difference between aminoguanidine-treated diabetic group and diabetes-only group. Fig. 3 Immunohistochemical staining patterns of a CYR61, b CTGF and c TIMP1 in control rat retinas. Intense staining of CYR61 and CTGF was present in large cell bodies of the ganglion cell layer (GCL) and weak staining in the inner plexiform layer (IPL). Intense uniform immunostaining of TIMP1 was found in the GCL and weak staining in the IPL. INL inner nuclear layer, OPL outer plexiform layer, ONL outer nuclear layer, RCL rod and cones layer, RPE retinal pigment epithelium. Magnification: ×150. Ctgf mRNA levels were elevated twofold at 12 weeks of streptozotocin-induced diabetes. Aminoguanidine treatment almost completely prevented this increase (Fig. 2). Western blotting showed a 1.8-fold increase in CTGF protein levels in the retina of diabetic rats at 12 weeks, whereas aminoguanidine treatment also prevented this effect (Fig. 4). CTGF immunostaining was mainly found in the ganglion cell layer and was more diffuse throughout the outer plexiform layer, inner nuclear layer and inner plexiform layer (Fig. 3). Differences in staining between experimental groups were not observed. Fig. 4 CTGF protein levels in retina of control and diabetic rats. a Western blots of CTGF and GAPDH as loading control. Samples were pooled for each group. A prominent band of CTGF protein is present in the 12-week diabetic group (12D; n = 8), whereas protein bands were similar in all other groups (control rats at 6 [6C; n = 6] and 12 [12C; n = 8] weeks, diabetic rats at 6 [6D; n = 8] weeks and diabetic rats treated with aminoguanidine at 6 [6AG; n = 7] and 12 [12AG; n = 7] weeks). b The blots were quantified by densitometry and expressed as a ratio of CTGF:GAPDH. Wisp1 and Wisp3 mRNA levels were low in the retinas of all groups of rats. The levels never differed more than 1.4-fold between experimental and control groups.
Due to small standard deviations, significant differences were found (Fig. 2), but it is doubtful whether these small differences are biologically meaningful. Nov and Wisp2 mRNA expression levels were too low to be detected in all groups of rats. Expression of transforming growth factor beta 1 and 2 in control and diabetic rats To determine whether: (1) transforming growth factor beta (TGFB), an upstream regulator of CTGF production, is induced by diabetes; and (2) this induction is prevented by aminoguanidine in rat retina, we examined the mRNA levels of its two most common isoforms, Tgfb1 and Tgfb2. Tgfb1 mRNA expression was decreased by approximately 30% in retina of rats with diabetes for 6 weeks (Fig. 5). Whether this difference is biologically relevant remains to be determined. In all other experimental groups, Tgfb1 and Tgfb2 mRNA levels were similar to control levels (Fig. 5). Immunohistochemical analysis of TGFB1 revealed a vascular pattern of staining in all experimental groups (Fig. 6). Fig. 5 Fold change of Tgfb1 and Tgfb2 expression in diabetic rats at 6 (white bars) and 12 weeks (cross-hatched bars) after streptozotocin-induction and in aminoguanidine-treated diabetic rats at 6 (black bars) and 12 weeks (grey bars) after streptozotocin-induction. Tgfb1 was significantly decreased in the diabetic rats after 6 weeks (*p < 0.05 vs control). This difference was not observed in the aminoguanidine-treated group at 6 weeks. Tgfb2 expression was not significantly altered at either time point. Fig. 6 Immunohistochemical staining patterns of TGFB1 (a, d), laminin (b, e) and fibronectin (c, f) in retina of control rats (a–c) and rats 12 weeks after streptozotocin-induced diabetes (STZ) (d–f). Immunostaining of TGFB1, laminin and fibronectin was confined to the retinal microvasculature and did not notably differ between control and streptozotocin sections. Magnification: ×150. Expression of extracellular matrix molecules in control and diabetic rats CTGF and CYR61 are known modifiers of the extracellular matrix. Therefore, we investigated expression patterns of various extracellular matrix components. Col4a3 mRNA levels in the rat retina were elevated by threefold after 6 weeks of diabetes. Aminoguanidine treatment inhibited this induction of Col4a3 mRNA levels by 30% (Fig. 7). Lamb1 mRNA levels showed a 1.5-fold increase in 12-week diabetic rats, which was virtually unaffected by aminoguanidine treatment. Fibronectin mRNA levels were not affected by streptozotocin-induced diabetes. Timp2 mRNA levels were not affected either, but Timp1 mRNA levels were elevated by 2.5-fold in retina of 12-week diabetic rats, this increase being completely prevented by aminoguanidine treatment (Fig. 7). Laminin and fibronectin were localised immunohistochemically in microvessels in rat retina (Fig. 6). This staining pattern was similar in all groups. TIMP1 immunostaining was restricted to the ganglion cell layer in all groups of rats (Fig. 3). Fig. 7 Gene expression of extracellular matrix components. Fold change of extracellular matrix gene expression as indicated in diabetic rats at 6 (white bars) and 12 weeks (cross-hatched bars) after streptozotocin-induction and in aminoguanidine-treated diabetic rats at 6 (black bars) and 12 weeks (grey bars) after streptozotocin-induction.
*p < 0.05 for difference between experimental and control group; †p < 0.05 for difference between aminoguanidine-treated diabetic group and diabetes-only group. CCN family mRNA expression in control and AGE-treated mice Infusion of mice with AGE-modified MSA increased retinal expression of Cyr61 and Ctgf mRNA 3.7-fold and twofold, respectively, compared with control mice (Fig. 8). Wisp1 and Wisp3 mRNA expression was not affected by AGE-modified MSA infusion (data not shown). Fig. 8 Relative mRNA levels of Cyr61 and Ctgf in retinas of control (white bars) and AGE-treated (black bars) mice, depicted as fold change in comparison with control mice. *p < 0.05 for effect of AGE treatment on Cyr61 and Ctgf mRNA levels. Summary of results In the retina of rats with streptozotocin-induced diabetes, mRNA levels of the CCN family members Cyr61 and Ctgf were increased threefold at 6 weeks and twofold at 12 weeks, respectively, whereas expression of all other CCN family members was not notably affected. CTGF protein levels in the retina were also elevated twofold at 12 weeks of diabetes. In the aminoguanidine-treated diabetic rats, these increases were partly counteracted by the AGE inhibitor. In line with these findings, treatment of mice with exogenous AGE induced elevated retinal mRNA levels of Cyr61 and Ctgf, but not of the other CCN family members. In parallel, mRNA levels of some extracellular matrix components were also increased in the retina of diabetic rats, an effect also prevented by aminoguanidine treatment. Discussion We present here a comprehensive expression analysis of the CCN family of fibrosis-inducing cytokines in the retina of rats with streptozotocin-induced diabetes. Messenger RNA and protein levels of CYR61 and CTGF, both known to be capable of modulating the extracellular matrix, were increased in diabetic rats, whilst the AGE inhibitor aminoguanidine attenuated these effects of diabetes. We also found that exogenously administered AGEs are capable of inducing Cyr61 and Ctgf expression in the adult mouse retina in vivo. Taken together, these data present evidence that AGEs are both necessary and sufficient to cause increased levels of CYR61 and CTGF in the diabetic retina. Expression of CTGF at the mRNA or protein level has previously been demonstrated in vivo in normal and diabetic rat [28] and human retina [29], as well as in cultured retinal microvascular cells [30] and astrocytes [31]. Our data on CTGF are in agreement with a previously reported twofold increase in Ctgf expression in the diabetic rat retina [28]. In our study, immunostaining with a polyclonal anti-CTGF antibody showed staining in the ganglion cell layer [28] and diffusely throughout the larger part of the inner rat retina. In contrast, in a previous study of the human diabetic retina employing a monoclonal anti-CTGF antibody, CTGF was detected in microglia and pericytes in the microvasculature of the inner retina [29]. These varying patterns of CTGF protein distribution may be species-related or due to differences in the specificity of the antibodies used. Gene expression analysis was also performed on the other five known members of the CCN family. We found Cyr61 expression in the normal adult rat retina and its upregulation in the diabetic retina. This suggests that, in addition to CTGF, CYR61 may play a role in the development of diabetes-related retinal sequelae. The lack of detectable Nov and Wisp2 expression argues against a role for these two proteins in the normal or diabetic retina.
This is in agreement with a previous study, which demonstrated that Wisp2 mRNA was not present in tissues of the adult rat [32]. Expression of Wisp1, which is much less studied but known to suppress cancer cell growth in vivo [33], was slightly decreased in the diabetic retina, whereas expression of Wisp3, a cell growth suppressor and inhibitor of angiogenesis [34], was increased. The significance of these findings remains unclear, as the functions of the CCN family members are complex. Still, such opposing actions may reflect a regulatory balance, or may indicate that some CCN family members are redundant, although redundancy is currently not considered to be the case in other tissues [35]. The role of CTGF in diabetic renal pathology has been clearly established. CTGF is responsible for mesangial expansion [36] and increased extracellular matrix deposition [37, 38] as observed in early stages of diabetic nephropathy. CTGF and/or CYR61 may therefore have a similar role in the diabetic retina and be responsible for the thickening of microvascular basement membranes observed in early stages of diabetic retinopathy. We therefore also examined expression patterns of the genes encoding several extracellular matrix-related molecules. Expression of the basement membrane component Col4a3 was found to be increased concomitantly with Cyr61 expression after 6 weeks of diabetes, an increase significantly reduced in aminoguanidine-treated diabetic rats. At 12 weeks of diabetes, Timp1 and Lamb1 expression increased concomitantly with Ctgf; this increase was significantly attenuated in the aminoguanidine-treated group for Timp1, whereas the attenuation was not statistically significant for Lamb1. These findings suggest a causal role of AGEs in the diabetes-induced production of CTGF, CYR61, COL4A3 and TIMP1 in the retina. Whether increased Lamb1 expression is also mediated by AGEs remains to be determined. However, this possibility is supported by a previous study showing that protein and mRNA levels of the ribosomal protein SA (previously known as laminin receptor 1 [67 kD, ribosomal protein SA]) are upregulated by AGEs in cultured retinal microvascular endothelial cells [39]. Plasma levels of the AGE CML, used in our study as a marker of AGE formation in the rat diabetes model, were not altered at 6 weeks in diabetic rats, making an effect of AGEs at this time point questionable. However, it should be noted that CML is merely one of many types of AGE known to be generated under hyperglycaemic conditions [40] and that aminoguanidine has been shown to decrease serum AGE levels in diabetic rats as early as 6 weeks after diabetes induction [41]. In previous studies in the diabetic rat kidney, Ctgf and fibronectin gene expression both increased after 32 weeks of streptozotocin-induced diabetes [10]. These changes were prevented by aminoguanidine treatment. As AGE accumulation in the diabetic kidney was prevented by aminoguanidine, those authors surmised that the anti-fibrotic effects of aminoguanidine could be at least partially mediated by a decrease in CTGF expression [10]. Our study may have been too short to observe an increase of fibronectin in the diabetic retina, but otherwise our results are in line with these findings in the kidney. Additional studies will be necessary to further elucidate the ability of CYR61 and CTGF to directly modulate these extracellular matrix molecules in retinal vascular cells. TGFB, considered to be the most important fibrotic factor, has been shown to upregulate CTGF in many cell types in vitro [42–44] and in vivo [45, 46].
Although our findings indicate that TGFB production is not increased in the diabetic retina, a role for TGFB in the observed upregulation of CTGF cannot be ruled out, as the regulation of TGFB bioavailability is complicated and not solely dependent on the level of TGFB production [47]. We have recently demonstrated that vascular endothelial growth factor (VEGF) increases expression of Ctgf and Cyr61 in the rat retina in vivo, as well as in retinal vascular endothelial cells in vitro (E. J. Kuiper, J. M. Hughes, I. M. C. Vogels et al., unpublished results). As AGEs are known inducers of VEGF in retinal cells [48, 49], it is possible that the increases in CTGF and CYR61 observed in our animal models are the result of AGE-induced VEGF. A major finding of our study was the ability of aminoguanidine to attenuate the increase in Ctgf expression observed in the diabetic rat retina. Tikellis et al. [28] have reported that perindopril, an ACE inhibitor, prevents increased Ctgf expression in the diabetic rat retina. This suggests that both interventions may affect a common molecular mechanism leading to the upregulation of CTGF. As ACE inhibition has been shown to prevent AGE accumulation in diabetic tissues [50], it is plausible that inhibition of AGE formation is a common molecular mechanism allowing both ACE inhibitors and aminoguanidine to inhibit the increase of retinal CTGF. In summary, this study provides the first evidence that, in addition to CTGF, the CCN family molecules CYR61, WISP1 and WISP3 may play roles in the development of the early stages of experimental diabetic retinopathy. At the very least, these results warrant further study of the functional aspects of these molecules in the eye, and of how these aspects pertain to the development of diabetic retinopathy. Additionally, we demonstrate for the first time that AGEs directly upregulate both CTGF and CYR61 levels in the retina in vivo and that aminoguanidine inhibits these diabetes-induced increases. This provides the first evidence that CTGF and CYR61 are downstream effectors of AGEs in the diabetic retina and implicates them as possible targets for future intervention strategies.
[ "advanced glycation end products", "extracellular matrix", "cystein-rich protein 61", "connective tissue growth factor", "diabetic retinopathy", "basement membrane", "experimental", "aminoguanidine", "diabetes mellitus", "gene expression regulation" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "M", "R" ]
Histochem_Cell_Biol-4-1-2386530
Structure and function of mammalian cilia
In the past half century, beginning with electron microscopic studies of 9 + 2 motile and 9 + 0 primary cilia, novel insights have been obtained regarding the structure and function of mammalian cilia. All cilia can now be viewed as sensory cellular antennae that coordinate a large number of cellular signaling pathways, sometimes coupling the signaling to ciliary motility or alternatively to cell division and differentiation. This view has had unanticipated consequences for our understanding of developmental processes and human disease. Introduction Although Purkinje and Valentin (1835) published the first treatise to include studies of mammalian cilia, the history of what may be considered modern research on mammalian cilia does not extend back more than a few years longer than the history of this journal. Motile mammalian cilia attracted the attention of microscopists before the advent of electron microscopy, because they moved and were obviously similar in this motion to protozoan cilia. Nevertheless, several light microscope structures whose later fine-structural appearance (and molecular biology) had little in common with cilia were labeled cilia, including the stereocilia of the hair cells, which are in fact modified microvilli. In 1954, the electron microscope observations of Fawcett and Porter (1954) definitively characterized the 9 + 2 pattern of the cytoskeleton, the axoneme, of motile mammalian and other cilia. They showed that this structure was enclosed by an extension of the cell membrane, the ciliary membrane. The axoneme of true cilia was always an extension of a basal body and related to the nine-fold construction of the centriole. By 1956, Porter (1957) and De Robertis (1956) realized that cilia were found as sensory structures in, for example, mammalian photoreceptors, where the central pair of microtubules in the axoneme was absent, and the cilia were nonmotile, hence 9 + 0. Finally, in the early 1960s, a number of electron microscopy studies showed that solitary 9 + 0 cilia, now termed primary cilia, are present on many differentiated cells of tissues of the mammalian and vertebrate body, including kidney cells, fibroblasts, neurons and even Schwann cells (Barnes 1961; Sorokin 1962; Grillo and Palay 1963). Although specific antibodies against detyrosinated and acetylated tubulin became available to detect primary cilia in immunofluorescence microscopy (Piperno et al. 1987; Cambray-Deakin and Burgoyne 1987), primary cilia were generally neglected as interesting cell organelles. The biggest problem was that primary cilia had no demonstrable function, even though considerable circumstantial evidence suggested that they were some form of chemical or mechanical sensor. On this basis, Poole et al. (1985) proposed that primary cilia of connective tissue cells functioned as a cellular cybernetic probe, or in current terminology, as a cellular global positioning system. Roth et al. (1988) then demonstrated that primary cilia on cultured kidney epithelial cells could be bent by flow, and Wheatley (1995) suggested that if primary cilia in renal epithelia did not function properly in sensing flow, there would be pathophysiological consequences. Nevertheless, many cell biologists dismissed the primary cilium as vestigial. While early studies of mammalian cilia focused on showing that the mechanism and biochemistry of motility were virtually identical to those discovered in other cilia, more recent studies have examined the way that cilia are built.
These studies have led to novel insights for both motile and nonmotile, including primary, cilia. All cilia can now be viewed as sensory cellular antennae that coordinate a plethora of cellular signaling pathways, and this view has had unanticipated consequences for our understanding of developmental processes and disease. A recent review of mammalian cilia structure and function covering many of the topics presented here, in further detail, is found in Satir and Christensen (2007). The mechanism of ciliary motility and primary ciliary dyskinesia The basic mechanism of ciliary motility is now reasonably well understood, although many details of how waveform is generated and propagated remain obscure. While much of the information about this mechanism was derived from protistan and invertebrate organisms (Satir 1985), the basic principles clearly apply to mammalian cilia of the respiratory and reproductive tracts, and to brain ependymal cilia. An electron micrograph of cilia of the mouse oviduct is shown in Fig. 1. Fig. 1 Classic transmission electron micrograph of mouse oviduct cilia. Cross-sections show the 9 + 2 axoneme of motile cilia (asterisk). The axoneme grows from a basal body, with a basal foot (arrowhead) pointing in the direction of the effective stroke. The transition zone between basal body and axoneme contains the ciliary necklace (arrow). (From Dirksen and Satir 1972, unpublished, with permission) Ciliary motility is caused by the relative sliding of the nine outer axonemal doublets, operating as two opposing sets, one (doublets 1–4) producing the effective stroke (or principal bend in flagella) and one (doublets 6–9) producing the recovery stroke (or reverse bend), powered by ATP and a set of molecular motors, the axonemal dyneins. The axonemal dyneins are arranged in two sets of arms: the outer dynein arms (ODAs), which in mammalian cilia consist of two heavy-chain dynein isoforms packaged with intermediate and light chains, with four identical ODAs aligned along each 96-nm repeat of the doublet microtubule (Nicastro et al. 2006), and the more complex inner dynein arms (IDAs). The IDAs, more centrally located, consist of at least seven heterodimeric and monomeric heavy-chain dynein isoforms per 96 nm. Generally, the ODAs and IDAs function together, but the ODAs principally regulate beat frequency, in part by cAMP-dependent phosphorylation of an ODA regulatory light chain, while the IDAs control beat form. Both frequency and form are altered by changing the rate of sliding and the switching of activity between the doublet sets. Sliding is converted into propagated bending most efficiently by control of IDA activity via phosphorylation and by mechanical interaction involving the radial spoke–central pair complex. In effect, the 9 + 2 axoneme is a nanomachine composed of perhaps over 600 proteins in molecular complexes, many of which also function independently as nanomachines. The enzymes that control axonemal response, for example protein kinase A, are structurally part of the complexes. Strong support for this mechanism of ciliary motility comes from studies of paralyzed and other mutants of protistan cilia, where a structural defect and a corresponding genetic mutation can be directly correlated with a change in ciliary beat.
Similarly, strong support for the applicability of the mechanism to human cilia comes from studies, mainly of ciliary structural defects and genetic mutations, associated with impaired mucociliary function in the respiratory system, which produces sinusitis and bronchiectasis. These genetic diseases are now called primary ciliary dyskinesia (PCD). Clinical features of PCD, which include chronic rhinitis/sinusitis, otitis media, male infertility and an increased incidence of hydrocephalus, point to physiological processes where ciliary motility is essential (Afzelius 2004). As might be anticipated, prominent causes of PCD are mutations that affect the assembly or function of the dynein arms—most commonly a mutation in DNAH5, a gene encoding a dynein heavy chain of the ODAs (Olbrich et al. 2002). Any mutation affecting the ciliary beat mechanism will potentially produce PCD, depending on the severity of beat impairment. An instructive example is hydin, a component of the central pair projection complex that interacts with the radial spokes in the switching mechanism between doublet sets. Hydin-deficient flagella of Chlamydomonas become intermittently paralyzed, stopping for long periods at the end of the effective or recovery strokes, where the direction of beat is reversed (Lechtreck and Witman 2007). Mice defective in Hydin develop hydrocephalus and die shortly after birth. Lechtreck et al. (2008) analyzed ciliary structure and motility in these mice. The central pair projection assigned to the hydin complex was missing. Mutant cilia were unable to bend properly and frequently stalled, implying that, because central pair structure was defective, interactions between the central pair and the radial spokes that affect dynein arm switching were abnormal, and consequently cilia-generated flow was impaired. Motility of tracheal cilia was also impaired. One would predict that human mutations in HYDIN are also likely to result in hydrocephalus or a form of PCD. Many people diagnosed with PCD have situs inversus totalis, that is, reversal of the left–right asymmetry of body organs such as the heart. This is now known to occur because fluid flow produced by motile cilia at the embryonic node is defective (Hirokawa et al. 2006). The usual flow pattern produced by the cilia is unidirectional right-to-left, and this activates a left-side-only gene cascade, probably by impinging on primary cilia at the left side of the node (McGrath et al. 2003). The nodal cilia mostly lack the central pair of axonemal microtubules and hence, although motile, are often 9 + 0, but with some unique ODA dynein isoforms (Supp et al. 1999). It is not known whether the response of the primary cilia at the node is to mechanical displacement (flow sensing) or whether it involves morphogens produced in membrane parcels in the node (Tanaka et al. 2005). Morphogen-containing multimembrane vesicles are a feature of the nodal response that could also apply to respiratory and other ciliated epithelia, where flow occurs. While mouse mutants or knockouts that are missing cilia develop situs inversus, hydin mutants do not show this phenotype, presumably because their motility is sufficient to generate appropriate nodal flow. In a similar way, by examining the PCD phenotype of mutations in various ciliary proteins identified by proteomics, it should be possible to dissect details of the function of each protein in the mechanism of beat generation (Avidor-Reiss et al. 2004; Badano et al. 2006; Blacque et al. 2005).
Principles of ciliogenesis A major breakthrough in understanding the importance of cilia in the body has come from the study of Chlamydomonas ciliogenesis. In that cell, the cilia are traditionally called ‘flagella’. Ciliogenesis involves transport of materials into, along and out of the cilium, directly visualized as ‘intraflagellar transport’ or IFT (reviewed by Rosenbaum and Witman 2002). IFT requires molecular motors, several kinesins for anterograde transport and a specialized cytoplasmic dynein for retrograde transport. Coupling the transported cargo to the molecular motors that move along the outer doublets of the axoneme are two protein complexes (Cole 2003) comprising about 19 IFT proteins. Remarkably, this machinery is conserved almost universally wherever cilia are built, and orthologs of the motors and IFT proteins are found in sensory cilia and their derivatives, such as the mammalian photoreceptor (Baker et al. 2003), and in primary cilia, as well as in protistan motile cilia. However, motile cilia of mammalian tissues often assemble en masse (Gaillard et al. 1989; Dirksen 1991). Centriologenesis of hundreds of basal bodies occurs from a fibrogranular mass in the cytoplasm, and cilia sprout from the cell surface. It is unclear whether classical IFT is the major mechanism of ciliogenesis in respiratory epithelium, or what modifications of the process occur. Wheatley (1969), Fonte et al. (1971) and Archer and Wheatley (1971) studied ciliogenesis of primary cilia in established cell lines in vitro. Primary cilia grow when the cells become confluent and reach stationary phase. Cells resorb their cilia shortly before entering mitosis. Postmitotic cells reassemble primary cilia in G1 and maintain the cilium as the cells enter growth arrest (G0) and undergo differentiation. In contrast to motile cilia, the primary cilium emanates uniquely from the distal end of the existing mother centriole of the centrosome, which migrates to the cell surface during growth arrest. Ciliogenesis by IFT is initiated while the centrosome is positioned at the Golgi apparatus near the nucleus, whereas extensive IFT and elongation of the cilium take place after docking of the nascent cilium at the cell surface, where doublet microtubules are quickly assembled to form the mature axoneme. Two recent and extensive reviews of the complex interactions between IFT particles, molecular motors, centrosomal proteins and other microtubule-associated proteins in ciliogenesis are presented in Pedersen et al. (2008) and Blacque et al. (2008). Ciliopathies The first human illness that could be linked to primary cilia in epithelial cells was polycystic kidney disease (PKD). The mammalian ortholog of IFT88 is the protein polaris, which is mutated in the transgenic mouse Tg737orpk, a model for the study of PKD. Chlamydomonas IFT88 mutants are defective in ciliogenesis, and Pazour et al. (2000) demonstrated that, similarly, cilia of the mutant mouse kidney were abnormally short or missing, which suggested that PKD might be a ciliary disease. This conclusion was strengthened by two succeeding discoveries: mechanical deflection of kidney tubule epithelial cells induces an increase in intracellular calcium (Praetorius and Spring 2001), and the PKD proteins polycystin 1 and 2, which function in Ca2+ signaling, localize to the ciliary membrane (Pazour et al. 2002; Yoder et al. 2002). Many other proteins whose functions are disrupted in cystic diseases, e.g.
nephrocystins, have now been localized to the cilium or to the ciliary basal body, supporting the conclusion that the primary cilium in kidney tubules functions partly as a mechanosensor that, upon bending, activates a series of signaling pathways in the cilium to control development and homeostasis of the tissue (for reviews, see Yoder et al. 2007; Hildebrandt and Zhou 2007), as originally envisaged by Roth et al. (1988) and Wheatley (1995). Thus, even if primary cilia are present, pathology develops unless these proteins are targeted appropriately to the cilia or the ciliary basal body. It is now clear that the ciliary membrane is a privileged and specialized compartment for receptor signaling and that the polycystins are only one of many types of membrane protein that must be targeted to the primary cilium to function properly. Other carefully documented examples include the receptor tyrosine kinase PDGFRα (Schneider et al. 2005) and the patched receptors for hedgehog signaling (Rohatgi et al. 2007), which control cell growth and differentiation processes. Further, essential downstream components of PDGFRα and hedgehog signaling, as well as of signaling in neurotransmission, Wnt pathways, extracellular matrix interaction and osmolyte transport, localize uniquely to the cilium and/or to the ciliary centrosome (Gerdes et al. 2007; Corbit et al. 2008; for reviews, see Singla and Reiter 2006; Michaud and Yoder 2006; Christensen et al. 2007; Christensen and Ott 2007). In many of these signaling systems, the cellular response is initiated by receptor activation in the primary cilium. Deficiencies in the placement of receptors and their immediate downstream signaling components in the cilium or at the ciliary base result in ciliopathies, including cystic kidney, pancreatic and liver diseases, retinitis pigmentosa, cancer, defective neurogenesis, Bardet-Biedl syndrome, polydactyly, anosmia and other developmental defects. In the Tg737 mouse, these defects lead to early perinatal death. As shown in Fig. 2, primary cilia with hedgehog receptor pathway signaling are found on human embryonic stem cells (Kiprilov et al. 2008), which suggests that ciliary signaling is involved in differentiation from the beginning of embryogenesis. In embryogenesis, signaling through the primary cilium is necessary for normal development, probably because such signaling regulates the balance between cell division, polarity, migration, differentiation and apoptosis in many tissues. Fig. 2 Primary cilia of human embryonic stem cells. Immunofluorescence microscopy using acetylated α tubulin antibody (tb) reveals the presence of primary cilia (arrows) on human embryonic stem cells. In the absence of stimulation, the hedgehog receptor ‘patched’ (Ptc) colocalizes with the acetylated α tubulin all along the ciliary membrane. Red and green channels are displaced in the images to define colocalization more clearly. Nuclei are stained with DAPI (blue). Upon stimulation, as part of the signaling cascade, Ptc leaves the cilium and the smoothened receptor (Smo) enters to activate the hedgehog signaling cascade. Asterisk marks the ciliary base. (From Kiprilov et al. 2008, with permission, courtesy of The Journal of Cell Biology) Primary cilia in adult tissues Primary cilia persist in many differentiated cells, including kidney tubule epithelial cells, fibroblasts and neurons, after organogenesis is complete and cell division rates fall.
Presumably, as semipermanent structures, the cilia function as mechano- or chemosensors and as a cellular global positioning system to detect changes in the surrounding environment, for example, to initiate cellular replacement after damage. To test this hypothesis, techniques are being used to knock out primary cilia or ciliary proteins in specific tissues of adult organisms. Davenport et al. (2007) have completed one of the first of these studies, using an inducible system in adult mice to disrupt IFT in several different ways, causing loss of primary cilia. Respiratory motile cilia are probably more stable and relatively unaffected. Surprisingly, when primary cilia are lost from all adult tissues, the devastating abnormalities and lethality seen after embryonic loss of cilia are not observed. PKD eventually develops a year after induction. The same delay has been reported by Piontek et al. (2007) after adult-specific knockout of polycystin-1. This delay correlates well with a greatly reduced rate of cell division and a different pattern of gene expression in the mature kidney, and could explain the increase in human PKD with age. Such changes in cell proliferation rate may relegate the primary cilium of adult tissues to a less immediate role, involving long-term homeostasis of the tissue. Because signaling through primary cilia is coupled to cell cycle events, for example in PDGFRαα signaling in fibroblasts, long-term disruption of ciliary signaling could be a factor in oncogenesis. One adult tissue responds immediately to ciliary knockout, however: nervous tissue, most specifically neurons in the hypothalamus. Knockout of all adult primary cilia in the mouse, or specifically only of primary cilia on POMC neurons, leads to hyperphagia (compulsive and excessive eating) and consequently to obesity. Obesity then causes numerous secondary defects resembling type II diabetes. These defects do not occur if the knockout mice are kept on a restricted diet. Eating behavior is regulated by the hormone leptin. These findings suggest that the leptin receptor might be located in the membrane of the primary cilia of the POMC neurons. The hypothalamic hormone somatostatin is a negative regulator of leptin. While leptin leads to reduced eating, somatostatin increases eating behavior. The somatostatin receptor sst3 is localized to primary cilia in hypothalamic neurons (Stepanyan et al. 2007). The results of Davenport et al. (2007) suggest that, much as the patched and smoothened receptors work in hedgehog signaling, the leptin receptor and a somatostatin receptor could work in a Yin-Yang relationship within the POMC primary cilium (Satir 2007). Whatever the precise cell biological explanation of the relationship between ciliary knockout and hyperphagia, the effect of knockout is behavioral. We might expect that mutations in other proteins of neuronal primary cilia could lead to other behavioral responses. There are reports that the rages that fueled the famous Hatfield-McCoy feud of Appalachia were probably driven by a family suffering from von Hippel-Lindau (VHL) syndrome. VHL protein controls ciliogenesis and is localized to cilia (Schermer et al. 2006). Perhaps a more ciliocentric view of neuronal activity affecting recurrent or addictive actions is warranted. Conclusions Beginning with important electron microscopic studies and culminating in immunolocalization combined with molecular genetic technology, much has been learned about motile, sensory and primary cilia in mammals.
Defects in building the primary cilium or mutations in ciliary membrane or axonemal proteins lead to ciliopathies, important human diseases. The cilium has moved to a prominent place in studies of embryogenesis and tissue differentiation and maintenance. There are hints that the fundamental cell biology of cilia will also be important in oncogenesis, aging diseases and human behavioral disorders. The strides of the past half century in understanding this organelle have been impressive, and the promise of discovery in the next half century is compelling.
[ "motility", "primary cilia", "signaling", "motile cilia", "ciliopathies", "sensory organelles" ]
[ "P", "P", "P", "P", "P", "R" ]
Diabetologia-4-1-2270365
Joint effects of HLA, INS, PTPN22 and CTLA4 genes on the risk of type 1 diabetes
Background/hypothesis HLA, INS, PTPN22 and CTLA4 are considered to be confirmed type 1 diabetes susceptibility genes. HLA, PTPN22 and CTLA4 are known to be involved in immune regulation. Few studies have systematically investigated the joint effect of multiple genetic variants. We evaluated joint effects of the four established genes on the risk of childhood-onset type 1 diabetes. Introduction The risk of complex diseases such as type 1 diabetes is generally thought to be influenced by multiple genetic and non-genetic factors, and it has been hypothesised that interactions between genes, or epistasis, are very common for such diseases [1]. The presence of interactions could be one of the reasons why searching for susceptibility loci for many diseases has been less successful than expected [2]. When moving from monogenic diseases to complex diseases, it seems reasonable to assess more than one locus at a time, although models become increasingly complex as the number of loci increases [3]. Whereas few or no common genetic variants have been firmly established for most common diseases [4], there are now at least four genetic loci that are established as causally involved in the aetiology of type 1 diabetes. They give us a unique opportunity to evaluate gene–gene interactions among established susceptibility genes. Specific allelic combinations of DRB1, DQA1 and DQB1 in the human leucocyte antigen (HLA) complex, and variants in the insulin gene (INS), the cytotoxic T lymphocyte antigen-4 gene (CTLA4) and the protein tyrosine phosphatase, non-receptor type 22 gene (PTPN22), have repeatedly been associated with type 1 diabetes susceptibility [5–8] using different approaches. All established loci are thought to be involved somehow in immune regulation, but the details of the mechanisms relating the polymorphisms to risk of type 1 diabetes are in most cases poorly understood. Evaluating the joint effects of genes contributes important information for risk prediction, and is also thought to provide information about biological interactions, although the latter is controversial and more complex than commonly thought [2, 3, 9]. Previous studies have assessed interaction between HLA and INS and reported divergent results [10–18]. The reported findings on the joint effect of HLA and INS are confusing not only because they diverge but also because the definitions and terminology of interactions are not consistent [19]. The interpretation of statistical interaction depends on the choice of scale used to measure the effects [2]. Although additivity of risks is often taken as independence [20], multiplicativity of risks is sometimes also taken as independence [21]; for example, if two loci individually confer relative risks of 3 and 4, a joint relative risk of 6 (= 3 + 4 − 1) would indicate independence on the additive scale, whereas a joint relative risk of 12 (= 3 × 4) would indicate independence on the multiplicative scale. The joint action of HLA and INS has variously been described as being multiplicative [14, 15], additive [13], providing evidence of interaction [15], and non-interacting [13, 14, 17]. Few studies have investigated the more recently established susceptibility loci PTPN22 and CTLA4 in the context of joint effects on the risk of type 1 diabetes. The studies that have been done have mainly concluded that there is no interaction [22, 23], but here also the results have diverged [24–27]. The aim of our study was to assess the joint effects of the four established susceptibility loci HLA, INS, CTLA4 and PTPN22 in type 1 diabetes, using a consistent approach with both population-based case-control and family trio designs and with large sample sizes. Methods Participants We analysed two independent type 1 diabetes data sets.
One data set consisted of 421 nuclear family trios, each comprising the mother, the father and one child diagnosed in Norway with type 1 diabetes before age 15 years (225 [53.4%] of the affected children were boys). The families were collected between 1993 and 1997. In families with more than one affected sibling, only the proband was included in the analyses. The case-control data set consisted of 1,331 type 1 diabetes patients (51.9% boys and 48.1% girls) and 1,625 control participants (51.4% boys and 48.6% girls) aged <15 years. In analyses involving age of disease onset, we divided the data sets according to the age of disease onset of the affected child into three groups (0–4.9, 5–9.9 and ≥10 years). The controls were randomly selected from the official population registry among children born between 1985 and 1999 and were recruited in 2001, as previously described [28]. The patients in the case-control material were from the Norwegian Childhood Diabetes Registry, consecutively recruited between 1997 and 2000 [29] and between 2002 and 2005. The type 1 diabetes patients and their family members were recruited by the Norwegian Childhood Diabetes Study Group, including all paediatric departments in Norway. All type 1 diabetes patients were diagnosed according to EURODIAB criteria [30]. The study was approved by the local ethics committee, and informed consent was obtained from all participants or their parents. Genomic DNA extraction and genotyping In the majority of the type 1 diabetes case-control samples we used DNA extracted from buccal cells [31]. In all remaining samples, DNA was extracted from peripheral whole blood using a salting-out protocol. Genotyping of HLA-DRB1, -DQA1 and -DQB1 was performed using PCR-SSOP (sequence-specific oligonucleotide probing), mainly following published methods [32], or PCR-SSP (sequence-specific primers) [33, 34], or using time-resolved fluorescence technology in the Delfia assay (Perkin-Elmer Life Sciences, Turku, Finland). HLA genotypes were grouped into four risk categories based on DQB1, DQA1 and DRB1 genotypes, including DRB1*04 subtyping. The majority of DRB1*04-DQA1*0301-DQB1*0302 haplotypes in Norway are DRB1*0401 or -0404 (together constituting >94% of these haplotypes) [35]. In the present study the other, rare subtypes are referred to as DRB1*04XX. Because of the almost complete linkage disequilibrium between the INS-VNTR allele classes and the −23 HphI polymorphism, we genotyped −23 HphI (rs689) as a marker for the INS-VNTR. The −23 HphI A allele corresponds to VNTR class I and the −23 HphI T allele corresponds to VNTR class III. In PTPN22 we genotyped the single nucleotide polymorphism (SNP) Arg620Trp (rs2476601). The SNP JO27_1 (rs11571297) was genotyped in CTLA4. SNP genotyping was performed by TaqMan allelic discrimination assays on an ABI 7900HT DNA Analyzer (Applied Biosystems, Foster City, CA, USA). Primer and probe sequences are shown in Electronic Supplementary Material (ESM) Table 1. The PCR conditions are available on request. Data analysis The HLA genotypes were grouped as high risk, intermediate risk, neutral risk and low risk according to the following criteria: high-risk category, DRB1*0401/04XX-DQA1*03-DQB1*0302/DRB1*03-DQA1*05-DQB1*0201 (DR4-DQ8/DR3-DQ2); low-risk category, all genotypes with at least one DQB1*0602 allele; intermediate-risk category, DRB1*0404-DQ8/DR3-DQ2, DR3-DQ2/DR3-DQ2, DR4-DQ8/DR4-DQ8 (with the exception of DRB1*0404-DQ8 homozygotes, which were grouped as neutral), and DRB1*0401 or 04XX-DQ8/X (X≠DQB1*0602 or DR3-DQ2).
The remaining genotypes were grouped in the neutral-risk category. For assessment of two-locus joint effects, we pooled genotypes of INS, PTPN22 and CTLA4 as follows: INS class I/I genotypes were compared with I/III and III/III genotypes combined; PTPN22 TT and CT genotypes were compared with CC; and CTLA4 (JO27_1) TT genotypes were compared with TC and CC genotypes combined. Not all individuals were genotyped for the non-HLA polymorphisms because of lack of DNA. To prevent the loss of important information in the joint-effect analyses, we did not exclude individuals with missing genotypes at some loci. The numbers of individuals available for each analysis are shown in the tables. Data were presented using stratified 2 × 2 tables and analysed using logistic regression models including interaction (product) terms, in SPSS for Windows (version 14.0; SPSS, Chicago, IL, USA). In addition to formal analyses treating HLA categories as categorical in the logistic regression, we also tested for interactions treating HLA category as a continuous variable coded 1, 2, 3, 4, thus maximising the power under alternative models in which the effect of a non-HLA locus (as measured by the odds ratio [OR]) was assumed to decrease (or increase) (logit-)linearly over the four HLA risk categories (test for interaction with one degree of freedom). Case-only analyses were used to estimate interaction parameters and to test for deviation from multiplicative effects using logistic regression [36]. Case-only analyses gain power both by utilising all cases (from the case-control and trio materials) simultaneously and by making the implicit assumption that there is no association between the two loci in the population, i.e. that the OR for their association is 1.0 in the population. Thus, under reasonable assumptions the case-only analysis makes the most efficient use of the data to assess deviation from multiplicative models. In the case-control analysis, likelihood ratio tests comparing nested logistic regression models were used as global tests for interaction. The transmission disequilibrium test [37] was performed using the UNPHASED software, version 2.4 [38]. For the trio data, 95% confidence intervals for the relative risk were estimated using conditional logistic regression in UNPHASED. Receiver operating characteristic (ROC) curves and confidence bounds for the area under the curve were estimated assuming a non-parametric distribution, and these analyses were done using SPSS version 14.0. Genotypes were added sequentially in order of likelihood ratio (or, equivalently, by the absolute risk conferred by a given genotype combination, estimated using Bayes' formula). Two four-locus genotype combinations were absent among cases in our material, and a very low value for the estimated absolute risk was imputed for these to allow inclusion in the ROC curve estimation with all four loci simultaneously. A p value <0.05 was considered to be statistically significant. Results The single-locus main effects are shown in ESM Table 2 (case-control data) and ESM Table 3 (trios). Compared with the neutral HLA risk category, the high-risk category showed a strong association with type 1 diabetes, with OR 20.6; for the intermediate-risk category the OR was 5.7 and for the low-risk category it was 0.09. INS, PTPN22 and CTLA4 also showed an association with type 1 diabetes, as expected.
The transmission of the risk allele in the nuclear families confirmed the associations of INS, PTPN22 and CTLA4 (JO27_1), although with borderline significance for JO27_1 (ESM Table 3). Joint effect of HLA and PTPN22 The ORs for the effect of PTPN22 varied across the HLA risk categories (Table 1) and were significant in some of the subgroups. The ORs were smaller for the risk-conferring HLA genotypes, indicating negative deviation from a multiplicative model. A global test of interaction (with 3 df) between HLA and PTPN22 in the logistic regression model confirmed a significant interaction (p = 0.024). In the trio data, the relative risk conferred by the PTPN22 T allele was similar in the strata defined by HLA group, with no evidence for deviation from a multiplicative model (ESM Table 4). A case-only analysis among all cases from the case-control and family materials (ESM Table 5) supported a significant negative deviation from multiplicative effects, with weaker ORs conferred by PTPN22 in the higher-risk HLA categories (3 df test for interaction; p = 0.028). When treating HLA-encoded risk as a continuous variable in the analysis (1 df), the interaction was even more statistically significant (p = 0.003). There was no association between HLA and PTPN22 among the controls (3 df test; p = 0.19). We tried to fit the case-control data to an additive odds model using generalised linear models in STATA (version 9), as described by Skrondal [39]. However, convergence was not obtained, suggesting that the data did not fit an additive model well. Joint effect of HLA and INS The 3 df test for interaction between INS and HLA was not statistically significant (p = 0.67). There was also no statistically significant deviation from a multiplicative model in the trio data (test for interaction, p = 0.5) (ESM Table 4) or in the case-only analysis (3 df test; p = 0.49); even when treating HLA-encoded risk as a continuous variable, the test was not significant (1 df test, p = 0.12). There was also no association between INS and HLA among controls, as expected (3 df test; p = 0.41). Joint effect of HLA and CTLA4 The ORs for CTLA4 in the different HLA categories (Table 1) indicated no deviation from a two-locus multiplicative model (3 df test; p = 0.53). This was also the case in the trio data set (ESM Table 4) and was supported by the case-only analysis (ESM Table 5; 3 df test; p = 0.57). Again, there was no association between the two loci among controls (3 df test; p = 0.21).
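The interaction tests reported above, and summarised in Table 1 below, can be illustrated with a small simulation. The following sketch is not the authors' SPSS/STATA/UNPHASED code; it is a minimal, hypothetical Python illustration (assuming numpy, pandas, scipy and statsmodels are available) of the likelihood ratio tests for deviation from a multiplicative two-locus model described in Methods, with all variable names and effect sizes invented for the example.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 3000
    df = pd.DataFrame({
        "hla": rng.integers(0, 4, size=n),     # 0=low ... 3=high HLA risk category
        "ptpn22": rng.integers(0, 2, size=n),  # 1 = TT or CT (pooled), 0 = CC
    })
    # Simulate case-control status under a purely multiplicative (no-interaction) model.
    logit_p = -3.0 + 0.9 * df["hla"] + 0.7 * df["ptpn22"]
    df["case"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(int)

    # Null model: main effects only, i.e. joint effects multiplicative on the OR scale.
    m0 = smf.logit("case ~ C(hla) + ptpn22", data=df).fit(disp=0)
    # Alternative: the PTPN22 OR is allowed to differ across HLA categories (3 extra df).
    m1 = smf.logit("case ~ C(hla) * ptpn22", data=df).fit(disp=0)
    lr = 2.0 * (m1.llf - m0.llf)
    print("3 df LRT for interaction: p =", stats.chi2.sf(lr, 3))

    # 1 df variant: HLA risk entered as a continuous score 1-4 in the product term only.
    m2 = smf.logit("case ~ C(hla) + ptpn22 + ptpn22:hla_score",
                   data=df.assign(hla_score=df["hla"] + 1)).fit(disp=0)
    print("1 df LRT for interaction: p =", stats.chi2.sf(2.0 * (m2.llf - m0.llf), 1))

A case-only test of the multiplicative null could be sketched analogously by regressing one genotype on the other among cases only, under the additional assumption, stated in Methods, that the two loci are not associated in the general population.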
Table 1 Interaction between HLA-INS (−23 HphI), HLA-PTPN22 (Arg620Trp) and HLA-CTLA4 (JO27_1) in the case-control data set using logistic regression. Columns: HLA category (a); non-HLA genotype; cases, n (%); controls, n (%); OR; 95% CI. The second genotype in each stratum is the reference (OR = 1).

INS (b); test for interaction: p = 0.67
High risk          I-I     218 (25.2)   20 (2.8)     1.56   0.76–3.21
High risk          III+    98 (28.6)    14 (2.5)     1
Intermediate risk  I-I     412 (47.6)   122 (17.1)   2.10   1.52–2.89
Intermediate risk  III+    160 (46.8)   99 (17.4)    1
Neutral risk       I-I     221 (25.5)   346 (48.5)   2.34   1.74–3.15
Neutral risk       III+    81 (23.7)    299 (52.6)   1
Low risk           I-I     14 (1.6)     225 (31.6)   3.26   0.92–11.54
Low risk           III+    3 (0.9)      156 (27.5)   1
Total                      1207         1281

PTPN22; test for interaction: p = 0.024
High risk          TT+TC   89 (23.2)    8 (3.1)      1.26   0.55–2.88
High risk          CC      237 (27.8)   27 (2.6)     1
Intermediate risk  TT+TC   177 (46.1)   55 (21.2)    1.31   0.91–1.86
Intermediate risk  CC      406 (47.7)   164 (15.6)   1
Neutral risk       TT+TC   111 (28.9)   121 (46.7)   2.44   1.80–3.31
Neutral risk       CC      199 (23.4)   536 (51.0)   1
Low risk           TT+TC   7 (1.8)      75 (29.0)    3.27   1.18–9.06
Low risk           CC      9 (1.1)      323 (30.8)   1
Total                      1235         1309

CTLA4; test for interaction: p = 0.53
High risk          TT      122 (24.9)   14 (3.6)     0.90   0.45–1.84
High risk          TC-CC   210 (27.9)   22 (2.5)     1
Intermediate risk  TT      241 (49.2)   65 (16.5)    1.59   1.14–2.23
Intermediate risk  TC-CC   346 (46.0)   148 (16.7)   1
Neutral risk       TT      121 (24.7)   212 (53.8)   1.34   1.01–1.77
Neutral risk       TC-CC   187 (24.9)   439 (49.7)   1
Low risk           TT      6 (1.2)      103 (26.1)   1.80   0.63–5.18
Low risk           TC-CC   9 (1.2)      275 (31.1)   1
Total                      1242         1278

n, number of cases/controls. (a) HLA risk categories: high risk, DQA1*03-DQB1*0302/DQA1*05-DQB1*0201 (DQ8/DQ2), where DRB1≠0404; low risk, at least one DQB1*0602 allele independent of the genotype on the other allele; intermediate risk, DRB1*0404-DQ8/DR3-DQ2, DR3-DQ2/DR3-DQ2, DR4-DQ8/DR4-DQ8 (excluding homozygous DRB1*0404), DRB1*0401 or 04XX-DQ8/X (X≠DQB1*0602 or DR3); the remaining genotypes were grouped in the neutral-risk category (see Methods). (b) III+ represents genotypes III/III and I/III.

Joint effects of non-HLA loci There was also no indication of deviation from multiplicative two-locus joint effects of PTPN22-INS, INS-CTLA4 or PTPN22-CTLA4 in the case-control data (Table 2) or in the case-only analysis (ESM Table 5) (all p > 0.39). For the trios, the test for interaction between PTPN22 and CTLA4 gave p = 0.046 (ESM Table 6). Taken together with the null results in the case-control and case-only analyses, however, the evidence still weighs against any deviation from a multiplicative two-locus joint effect of CTLA4 and PTPN22.

Table 2 Interaction between INS-PTPN22, INS-CTLA4 and PTPN22-CTLA4 in the case-control data set using logistic regression. Columns: genotype at the first locus; genotype at the second locus; cases, n (%); controls, n (%); OR; 95% CI. The low-risk genotypes (CC, TC-CC, TC-CC) were used as reference.

INS-PTPN22; test for interaction (a): p = 0.67
I-I    TT+TC   274 (72.9)   156 (58.4)   1.74   1.38–2.18
I-I    CC      594 (71.6)   587 (54.8)   1
III+   TT+TC   102 (27.1)   111 (41.6)   1.89   1.38–2.58
III+   CC      236 (28.4)   485 (45.2)   1
Total          1,206        1,339

INS-CTLA4; test for interaction: p = 0.42
I-I    TT      354 (73.1)   222 (53.9)   1.51   1.23–1.86
I-I    TC-CC   520 (71.0)   492 (54.8)   1
III+   TT      130 (26.9)   190 (46.1)   1.31   0.99–1.73
III+   TC-CC   212 (29.0)   405 (45.2)   1
Total          1,216        1,309

PTPN22-CTLA4; test for interaction: p = 0.78
TT+TC  TT      143 (29.7)   75 (17.7)    1.46   1.04–2.05
TT+TC  TC-CC   239 (31.7)   183 (20.1)   1
CC     TT      339 (70.3)   348 (82.3)   1.40   1.15–1.67
CC     TC-CC   514 (68.3)   729 (79.9)   1
Total          1,235        1,335

n, number of cases/controls. (a) Likelihood ratio tests of whether the OR conferred by one locus differs significantly over strata defined by genotypes at the other locus.

Joint effects of more than two susceptibility loci We also tested models with all three-way and four-way interactions involving the four susceptibility loci using logistic regression (categorising all loci into two groups: increased-risk genotypes or not), but none of the multi-way interactions were statistically significant (all p > 0.29).
The simultaneous distribution of risk genotypes at all four loci among cases and controls is shown in ESM Table 7. The results show that the more risk loci an individual carries, the higher the relative risk, but the presence or absence of HLA risk genotypes influences the relative risk much more than the other loci, as expected. For instance, carrying risk genotypes at all three non-HLA loci but not at HLA is associated with a much lower risk than carrying HLA risk genotypes together with low-risk genotypes at all three other loci. The relative risk (OR) conferred by simultaneously carrying high- or moderate-risk HLA and risk genotypes at all three other loci, compared with non-risk-associated genotypes at all four loci, was 61. The expected relative risk under a strict multiplicative model involving all four loci was 123 (obtained by multiplying the four single-locus effects by each other). Owing to the relatively small number of individuals simultaneously carrying all risk genotypes, the observed negative deviation from a four-way multiplicative model was not statistically significant, in accordance with the formal test cited above. ROC curve Another way to assess the predictive utility of combinations of genetic risk markers is the ROC curve [40]. This utilises the genotypes of all included individuals and assesses the combination of sensitivity and specificity of different combinations of genotypes. ROC curves for HLA alone, pairwise combinations of HLA and non-HLA loci, and multiple genotypes (Fig. 1) showed an area under the curve of 0.82 for HLA alone, which was only marginally increased by adding non-HLA loci. Fig. 1 ROC curve for HLA genotypes in four categories and for combinations of genotypes defined by HLA and non-HLA susceptibility loci. The area under the curve (95% confidence interval) was 0.820 (0.803–0.836) for HLA (dark blue line), 0.828 (0.811–0.844) for HLA+CTLA4 (purple line), 0.835 (0.819–0.851) for HLA+PTPN22 (grey line), 0.840 (0.824–0.855) for HLA+INS (green line), 0.848 (0.833–0.863) for HLA+INS+PTPN22 (yellow line) and 0.852 (0.837–0.867) for HLA+INS+PTPN22+CTLA4 (red line). Turquoise dashed line, reference line. Age of disease onset and sex We found no significant deviation from a multiplicative model concerning age–locus and sex–locus interactions for any of the genes. This was confirmed in the trio families and in the case-only analysis (ESM Tables 8 and 9). Discussion The present study is a comprehensive evaluation of the joint effects of the four most well established type 1 diabetes susceptibility genes in both a large case-control series and a family material. The relative risk conferred by PTPN22 was stronger in the lower-risk HLA categories than in the high-risk HLA category, while all other two-locus combinations (HLA-INS, HLA-CTLA4, INS-CTLA4, INS-PTPN22 and PTPN22-CTLA4) were consistent with multiplicative models. Although model-free methods have been developed for gene–gene interaction studies, such as multifactor dimensionality reduction (see [1] and references therein), these methods are designed for the detection of novel susceptibility loci, which was not the goal of our investigation. Two of the three previous studies of the joint effect of PTPN22 and HLA were in accordance with our results [25, 27], while the other study found no deviation from a multiplicative model [22]. It should be noted that the interaction between PTPN22 and HLA found in the case-control material and case-only analysis was not replicated in our trio data.
One of the reasons for this could be the lower statistical power of the trio data. Using the Quanto program ([41]; http://hydra.usc.edu/gxe), we found that we had more than 80% power to detect a significant two-way gene–gene interaction if the true interaction parameter was 0.5. For the trio design we would need as many trios as we had cases in the case-control study to obtain similar power. Our number of trios was only about a third of the number of cases in the case-control study, with consequently lower power. The case-only design is known to be the most efficient for detecting interaction under certain assumptions. For instance, we had >99% power to detect interaction if the true interaction parameter was 0.5. The few studies concerning two-locus interaction effects between HLA and CTLA4 and among non-HLA genes [22] have generally indicated multiplicative effects, which is in accordance with our results. Some previous studies have found that the relative risk conferred by INS was similar in subgroups defined by HLA susceptibility genes [11, 12], while three studies have indicated that the effect was stronger in the low-risk HLA categories [16–18], a finding that was only partially supported by our data. On the other hand, one relatively small study found that the effect of INS was confined to the high-risk HLA-DR4 group [10]. The reason for the diverging results on the joint effect of established type 1 diabetes susceptibility genes in the literature could be that the studies have been performed with varying sample sizes and with different study designs. Linkage studies [14, 15], case-control association studies [10, 12, 13, 16] and family trio designs [16, 18, 22, 25] have all been used in these studies. Smaller studies may simply lack the power to reveal significant interactions. Different criteria for categorising the HLA risk groups could potentially influence the results of analyses of the joint effect of HLA and other type 1 diabetes susceptibility genes. However, our conclusions were not affected by alternative classifications of HLA risk groups, such as into DR4-DQ8 vs DR3-DQ2 carriers (data not shown). Studies in different populations and ethnic groups have indicated some heterogeneity in HLA-associated risk of type 1 diabetes, and it is also possible that gene–gene interactions vary across populations. However, despite the observed variations in population risk of type 1 diabetes and in HLA haplotype frequencies across populations, the relative predisposing effects of HLA haplotypes seem to be consistent across populations [42]. In our study all the patients were diagnosed before 15 years of age. The fact that the relative risks associated with both risk genotypes and low-risk genotypes seem to diminish with age above 15 years [43] raises the question of whether gene–gene interactions may also differ between age groups. Although no preventive intervention is available for type 1 diabetes today, prediction of disease is an important part of strategies for prevention, both for recruitment of participants to research studies and for identification of target populations for future preventive interventions. Understanding the joint effects of the established type 1 diabetes susceptibility genes will improve such prediction. In a multiplicative model, the relative risk (RR) for a person carrying a high-risk genotype at both of two loci, compared with a person with low-risk genotypes at both, is RR(locus 1) × RR(locus 2).
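As a worked numerical illustration of this multiplicative prediction, and of the absolute-risk estimate discussed in the next paragraph, consider the following sketch. The high-risk HLA OR and the population cumulative incidence are taken from the text; the remaining numbers are invented purely for illustration.

    # Joint relative risk under a strictly multiplicative two-locus model.
    rr_hla = 20.6          # OR, high-risk HLA category vs neutral (ESM Table 2)
    rr_ins = 2.3           # assumed, illustrative OR for the INS high-risk genotype
    rr_joint = rr_hla * rr_ins                       # = 47.4 under multiplicativity

    # Absolute risk via the frequency-ratio method cited in the text [18]:
    # risk ~= population cumulative incidence
    #         x (genotype frequency in cases / genotype frequency in controls)
    cum_incidence = 0.0042                           # 0.42% up to age 15 in Norway [44]
    freq_cases, freq_controls = 0.27, 0.028          # invented illustrative frequencies
    abs_risk = cum_incidence * freq_cases / freq_controls
    print(f"joint RR = {rr_joint:.1f}; absolute risk = {100 * abs_risk:.1f}%")

The design point of such a calculation is that a large relative risk can still correspond to a modest absolute risk when the baseline cumulative incidence is low, which is exactly the situation discussed next.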
In terms of absolute risk differences, a doubling or tripling of risk due to INS or PTPN22 would be greater for a person with a high-risk HLA genotype than for someone with a low-risk HLA genotype. The absolute risk for persons with a given genotype can be estimated by multiplying the average cumulative incidence in the population (0.42% cumulative risk up to age 15 years in Norway [44]) by the ratio of the genotype frequency in patients to the genotype frequency in controls [18], or by using Bayes' formula. For instance, for a person with a low-risk HLA genotype, a high- or low-risk INS genotype will define whether the estimated absolute risk is approximately 0.028% or 0.01% (absolute risk difference, approximately 0.018%), whereas for those with the high-risk HLA genotype INS will define an estimated risk of approximately 4.7% or 3.0% (absolute risk difference 1.7%). As discussed in a general setting by Janssens et al. [40], increasing the number of susceptibility loci considered simultaneously generally increases the predictive value for disease. The downside is that the proportion of the population simultaneously carrying multiple risk alleles becomes minute even with a moderate number of susceptibility polymorphisms, and that, even with relatively large data sets such as ours, the absolute risk estimate becomes imprecise. The high-risk HLA genotype is carried by fewer than 3% of population controls, but confers a very high risk of disease. Several practical and scientific aspects of prediction should be considered when evaluating the utility of different prediction regimes. The ROC curve analysis confirms that, despite the higher absolute risk for the few individuals carrying combinations of several risk markers, adding non-HLA genetic markers only marginally increases the utility of the prediction over that of HLA alone. While up to six susceptibility loci in addition to those studied here have recently been established in type 1 diabetes [45], the magnitude of the effect of each additional locus is very much smaller than that of HLA, and smaller even than those of INS and PTPN22, suggesting that they are likely to add only marginally to the prediction of disease in individuals. Furthermore, an informal assessment of the number of individuals who would need to be genetically screened to obtain a cohort of high-risk individuals giving rise to a given number of cases of type 1 diabetes, together with the costs of genotyping, also suggests limited cost-effectiveness of adding non-HLA genetic markers to the prediction regime (data not shown). In conclusion, in this comprehensive study of interactions among established type 1 diabetes susceptibility genes, we found that the joint effect of HLA and PTPN22 was significantly less than multiplicative in the case-control material, while a multiplicative model could not be rejected for HLA-INS, HLA-CTLA4, PTPN22-INS, INS-CTLA4 and PTPN22-CTLA4. Despite near-multiplicative effects for most loci, and the fact that groups with very high relative risk of type 1 diabetes can be identified by testing for multiple susceptibility genes, only a small proportion of the population (and of cases with type 1 diabetes) simultaneously carry HLA and multiple non-HLA susceptibility genotypes. Electronic supplementary material Below is the link to the electronic supplementary material.
ESM Table 1 (PDF 14.3 kb) ESM Table 2 (PDF 110 kb) ESM Table 3 (PDF 69.2 kb) ESM Table 4 (PDF 51.2 kb) ESM Table 5 (PDF 31.9 kb) ESM Table 6 (PDF 48.8 kb) ESM Table 7 (PDF 25.9 kb) ESM Table 8 (PDF 68.0 kb) ESM Table 9 (PDF 25.1 kb)
[ "ins", "hla", "ptpn22", "ctla4", "genes", "type 1 diabetes", "interaction" ]
[ "P", "P", "P", "P", "P", "P", "P" ]
J_Comp_Physiol_A_Neuroethol_Sens_Neural_Behav_Physiol-4-1-2323032
Mechanics of the exceptional anuran ear
The anuran ear is frequently used for studying fundamental properties of vertebrate auditory systems. This is due to its unique anatomical features, most prominently the lack of a basilar membrane and the presence of two dedicated acoustic end organs, the basilar papilla and the amphibian papilla. Our current anatomical and functional knowledge implies that three distinct regions can be identified within these two organs. The basilar papilla functions as a single auditory filter. The low-frequency portion of the amphibian papilla is an electrically tuned, tonotopically organized auditory end organ. The high-frequency portion of the amphibian papilla is mechanically tuned and tonotopically organized, and it emits spontaneous otoacoustic emissions. This high-frequency portion of the amphibian papilla shows a remarkable functional resemblance to the mammalian cochlea. Introduction The anatomy and physiology of the amphibian ear show both remarkable resemblances and striking differences when compared to the mammalian auditory system. The differences between the human and the amphibian auditory system are too significant to warrant direct generalizations of results from the animal model to the human situation. However, studying hearing across species helps to understand the relation between the structure and function of the auditory organs (Fay and Popper 1999). Thus, we hope and expect that the knowledge gained about the amphibian auditory system fits into our understanding of auditory systems in general. Over the course of evolutionary history, a number of diverse amphibian species developed. Currently only three orders remain: anurans, urodeles and caecilians. Their evolutionary relationship, as well as the evolutionary path of the individual orders, is still under debate. However, they are generally grouped into a single subclass, Lissamphibia, of the class Amphibia (Wever 1985). The ancestral lineage of amphibians separated from the mammalian lineage approximately 350 million years ago, in the Paleozoic era. Many of the important developments in auditory systems emerged after the ancestral paths separated (Manley and Clack 2003). This implies that shared features, like the tympanic middle ear, developed independently in different vertebrate lineages. The anurans (frogs and toads) form the most diverse order of amphibians. The living species are classified into two suborders, Archaeobatrachia and Neobatrachia (Wever 1985). Both within and between these suborders, there is large variation in the anatomy and physiology of auditory systems. The most thoroughly studied species belong to the family Ranidae, as is reflected in the work referenced in this paper. The hearing organs of anurans are often wrongly assumed to be more primitive than those of mammals, crocodiles and birds. The relatively simple structure and functioning of the amphibian ear offer an excellent opportunity to study hearing mechanisms (e.g., Ronken 1990; Meenderink 2005). On the other hand, the sensitivity of the frog inner ear, which appears to be able to detect (sub-)angstrom oscillations (Lewis et al. 1985), shows that the frog ear functions as a sophisticated sensor. While the ears of most vertebrate species contain one dedicated acoustic end organ, the frog ear has two: the amphibian papilla and the basilar papilla. As in other vertebrates, these organs contain hair cells for the transduction of mechanical waves into electrical (neural) signals. In mammals, birds and lizards, the hair cells are set on a basilar membrane.
The frog inner ear lacks such a flexible substrate for its sensory cells. The hair bundles of the frog’s auditory organs are covered by a tectorial membrane, as they are in all terrestrial vertebrates except for some lizard species (Manley 2006). In mammals, the mechanical tuning of the basilar membrane is the primary basis for frequency selectivity. In the absence of the basilar membrane, the frog’s auditory organs must rely solely on the tectorial membrane and on the hair cells themselves for frequency selectivity. Simmons et al. (2007) and Lewis and Narins (1999) have published reviews of the anatomy and physiology of the frog ear. In the current paper, we focus on the mechanics of the inner ear, specifically on the mechanics of the tectorial membrane. Only one publication exists on direct mechanical/acoustical measurements of structures in the frog inner ear (Purgue and Narins 2000a). Therefore, many of our inferences will result from indirect manifestations of inner ear mechanics, as observed in anatomical, electro-physiological and otoacoustic-emission studies. Nevertheless, these studies provide a consistent view of the mechanics of the anuran inner ear. Anatomy Middle ear The ears of most terrestrial vertebrates can be divided into three principal parts: the outer ear, the middle ear and the inner ear. In mammals, the outer ear consists of a pinna and an ear canal, which terminates at the tympanic membrane. In most frog species the outer ear is absent, and the tympanic membrane is found in a bony ring, the tympanic annulus, in the side of the skull. The tympanic membrane defines the distal boundary of the middle ear cavity. This air-filled cavity is spanned by the ossicular chain, which serves to transfer vibrations of the tympanic membrane to the oval window of the inner ear. In the frog, the ossicular chain consists of two structures, the extra-columella and the columella (Jørgensen and Kanneworff 1998; Mason and Narins 2002a). The cartilaginous extra-columella is loosely connected to the center of the tympanic membrane. Medially, it flexibly connects to the partially ossified columella. The columella widens to form a footplate at its medial end, where it attaches to the oval window of the inner ear. Acoustic stimuli primarily enter the inner ear through the oval window. The middle ear’s primary function is to compensate for the impedance mismatch between the air and the fluid-filled inner ear. There are two contributions to this compensation (Jaslow et al. 1988; Werner 2003). The first contribution results from the small area of the oval window relative to the area of the tympanic membrane. This causes a concentration of the external force exerted on the tympanic membrane. The second contribution involves a lever action of the columella footplate. The footplate attaches to the otic capsule along its ventral edge. This attachment is suggested to be the location of the hinge point of the middle ear lever in the frog (Jørgensen and Kanneworff 1998; Mason and Narins 2002b). The lever action serves as a force amplification mechanism and contributes to the impedance matching between the outside air and the fluids in the inner ear. Both effects result in pressure amplification between the tympanic membrane and the columella footplate, thus overcoming the impedance mismatch between air and the inner-ear fluids (a schematic version of this calculation is sketched below). An additional bony disk, the operculum, is flexibly attached to the oval window in amphibians. The presence of an operculum in anurans is unique among vertebrates.
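Schematically, the two middle-ear contributions described above combine multiplicatively. A rough numerical sketch follows; all values are hypothetical, chosen only to illustrate the form of the calculation (the papers cited above describe the mechanism, not these numbers):

\[ \frac{p_{\mathrm{fp}}}{p_{\mathrm{tm}}} \approx \frac{A_{\mathrm{tm}}}{A_{\mathrm{fp}}} \times \frac{l_{1}}{l_{2}} \approx 20 \times 2 = 40, \qquad 20\log_{10}(40) \approx 32\ \mathrm{dB}, \]

where \(p_{\mathrm{tm}}\) and \(p_{\mathrm{fp}}\) are the sound pressures at the tympanic membrane and the footplate, \(A_{\mathrm{tm}}/A_{\mathrm{fp}}\) is the ratio of their areas, and \(l_{1}/l_{2}\) is the effective lever-arm ratio of the ossicular chain.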
The operculum’s position in the oval window can be modulated through the m. opercularis, which also connects it to the shoulder girdle. The function of the operculum is not entirely clear. Possibly, it serves to transfer substrate vibrations to the inner ear (Lewis and Narins 1999; Mason and Narins 2002b). The putative path for these vibrations includes the front limbs, the shoulder girdle and the m. opercularis (Hetherington 1988; Wever 1985). Alternatively, the operculum-columella system is proposed to protect the inner ear’s sensory organs from excessive stimuli. This protection hypothesis takes two different forms. Wever (1985) suggests that the operculum and the columella footplate can be locked together through muscle action. In this manner, the flexibility of the connection to the oval window decreases and the input impedance increases, which in turn decreases the amplitude of the pressure wave entering the inner ear. It has also been suggested that the action of the m. opercularis could uncouple the operculum and the footplate (Mason and Narins 2002b). This would allow the operculum to move out of phase with the footplate. The out-of-phase motion could absorb part of the inner ear fluid displacement caused by the motion of the footplate. Effectively, this creates an energetic bypass and decreases the amplitude in the inner ear. A tympanic middle ear, as described above, is considered to be the typical situation (Jaslow et al. 1988), and can be found in the family Ranidae. However, a wide range of variations in middle ear structures is found across species. In some species, for example Xenopus laevis, a bony disk occupies the tympanic annulus rather than a membrane (Wever 1985), and there are a number of "earless" frogs, in which both the tympanic membrane and the tympanic annulus are absent. A functioning inner ear and a partial middle ear usually exist, although the middle ear cavity may be filled with connective tissue (e.g., Telmatobius exsul, Jaslow et al. 1988), or may not exist at all (e.g., species of the genus Bombina, Hetherington and Lindquist 1999; Wever 1985). Remarkably, some of these "earless" frogs have a mating call and exhibit neurophysiological responses (Bombina bombina, Walkowiak 1988; Atelopus, Lindquist et al. 1998) at typically auditory frequencies, which implies that they have another path for the transfer of airborne sound to the inner ear (Jaslow et al. 1988), for example, through the lungs (Narins et al. 1988; Lindquist et al. 1998; Hetherington and Lindquist 1999). Inner ear The inner ear of the frog has two membranous windows: the oval window and the round window. As mentioned above, acoustic energy primarily enters the inner ear through the oval window. The round window is the main release point of this energy (Purgue and Narins 2000a). A similar layout can be found in other terrestrial vertebrates. However, the round window of the frog does not open into the middle ear as it does in mammals. Rather, it is found in the roof of the mouth cavity, under a lining of muscle tissue. Within the inner ear, there are two intertwined membranous compartments: the perilymphatic and the endolymphatic labyrinths (see Fig. 1). The perilymphatic labyrinth connects to both the oval window and the round window. Starting at the oval window and going medially, it passes through a narrow foramen and widens into the otic cavity, forming the periotic cistern. Continuing medially, it narrows again into the periotic canal.
This canal connects the periotic cistern with the perilymphatic space at the round window (Purgue and Narins 2000b). Fig. 1 Schematic drawing of a transverse section through the frog ear (adapted from Wever 1985). The division into the middle and inner ear is indicated above the image; a selection of features is indicated in the image. The colored arrows indicate the paths of vibrational energy: green arrows represent the columellar path, red arrows the putative opercular path, and blue arrows the path through the inner ear after combination of the columellar and opercular paths. The grey areas represent endolymphatic fluid, the dark yellow areas perilymphatic fluid. The green areas indicate the tectorial membranes in the papillae. (Color figure is available in the online version) Between the lateral perilymphatic cistern and the round window, part of the endolymphatic space can be found. The endolymphatic space also includes the semi-circular canals, located dorsally from the otic system. It contains the sensory organs of hearing and balance. In the frog inner ear, there are eight sensory epithelia (Lewis and Narins 1999; Lewis et al. 1985), located as follows: three cristae in the semi-circular canals, which are sensitive to rotational acceleration of the head, and one each in:
- the utricle, which detects linear acceleration,
- the lagena, which detects both linear acceleration and non-acoustic vibrations (Caston et al. 1977),
- the sacculus, which is sensitive to substrate vibrations up to approximately 100 Hz and also detects high-level low-frequency airborne sound (Narins 1990; Yu et al. 1991),
- the amphibian papilla, which detects low-frequency acoustic stimuli (Feng et al. 1975), and
- the basilar papilla, which is sensitive to high-frequency airborne stimuli (Feng et al. 1975).
Hair cells are the sensory cells in all of these organs. Like all hair cells, these cells have a stereovillar bundle on their apical surface. Deflection of the bundle, as a result of an acoustical vibration or a mechanical acceleration, initiates an ionic transduction current into the cell. This initial current causes a cascade of ionic currents, eventually resulting in the release of neurotransmitter at the basal surface of the cell. The released neurotransmitter triggers neural activity in the nerve fiber dendrites that innervate the basal portion of the hair cell (Pickles 1988; Yost 2000; Keen and Hudspeth 2006). As in most vertebrates, a tectorial membrane covers the sensory cells of the auditory end organ. This membrane is a polyelectrolyte gel, which lies on the stereovilli (Freeman et al. 2003). The function of the tectorial membrane is not well understood, and may vary between classes. However, since the stereovilli in most vertebrate ears connect to this membrane, it obviously plays an important role in the conduction of acoustic vibrations to the hair cells. Basilar papilla The basilar papilla is found in a recess that opens into the saccular space at one end, and is limited by a thin contact membrane at the other. The contact membrane separates the endolymphatic fluid in the papillar recess from the perilymphatic fluid at the round window (Lewis and Narins 1999; Wever 1985). The recess perimeter is roughly oval in shape; in the bullfrog, Rana catesbeiana, its major axis is approximately 200 μm long, while the minor axis measures approximately 150 μm (Van Bergeijk 1957). In the leopard frog, Rana pipiens pipiens, it is of similar size (personal observation, RLMS & JMS).
The oval perimeter of the lumen is formed from limbic tissue, a substance unique to the inner ear and similar to cartilage (Wever 1985). The sensory epithelium is approximately 100 μm long. It occupies a curved area that is symmetrical in the major axis of the elliptical lumen. It contains approximately 60 hair cells (measured in Rana catesbeiana), from which the stereovilli protrude into the lumen and connect to the tectorial membrane (Frishkopf and Flock 1974). In Ranidae, the orientation of the hair cells, as defined by the direction to which the v-shape of the stereovillar bundle points (Lewis et al. 1985), is typically away from the sacculus. The tectorial membrane spans the lumen of the papillar recess. It occludes about half the lumen, and consequently takes an approximately semi-circular shape when viewed from the saccular side (Frishkopf and Flock 1974; Wever 1985). The membrane has pores at the surface closest to the epithelium, into which the tips of the hair bundles project (Lewis and Narins 1999). Amphibian papilla The amphibian papilla can be found in a recess that extends medially from the saccular space and, in frogs with derived ears, bends caudally to end at a contact membrane. Like the basilar papilla’s contact membrane, this membrane separates the endolymphatic fluid in the papillar recess from the perilymphatic fluid at the round window. The sensory epithelium is set on the dorsal surface of this recess (Lewis and Narins 1999). The epithelium itself has a complex shape; it consists of a triangular patch at the rostral end, and an s-shaped caudal extension towards the contact membrane (see Fig. 2). The exact shape and length of the caudal extension vary across species, with the most elaborate extensions occurring in species of the family Ranidae (Lewis 1984), while some species lack the s-shaped extension altogether (Lewis 1981). Fig. 2 Schematic drawing of the amphibian papilla of the bullfrog, Rana catesbeiana (adapted from Lewis et al. 1982; rotated to match the orientation of Fig. 1). TM tectorial membrane, AP amphibian papilla. a General overview of the AP; the dashed outline indicates the location of the sensory epithelium. b Hair cell orientation in the sensory epithelium; the dashed line indicates the position of the tectorial curtain. The numbers along the perimeter indicate the characteristic frequency of the auditory nerve fibers connecting to that site (in Hz). In the epithelium, the hair cell orientation follows a complicated pattern (see Fig. 2b). In the rostral patch the cells are oriented towards the sacculus. On the rostral half of the s-shaped extension, they are oriented along the s-shape. However, on the caudal half, the orientation rotates 90° to become perpendicular to the s-shape (Lewis 1981). An elaborate tectorial membrane is found on the hair bundles. A bulky structure covers the rostral patch, while the membrane gets thinner along the caudal extension (Lewis et al. 1982). A tectorial curtain spans the papilla recess approximately halfway between the sacculus and the contact membrane (Shofner and Feng 1983; Wever 1985). The curtain, also called the sensing membrane (Yano et al. 1990), spans the entire cross-section of the lumen. A small slit in the tectorial curtain may function as a shunt for static fluid pressure differences (Lewis et al. 1982).
Response of the auditory end organs As mentioned in the section “Anatomy”, the oval window serves as the primary entry point of acoustic energy into the inner ear; the round window presumably serves as the primary release point. After the energy passes through the oval window, it enters the periotic cistern. Between this relatively large perilymphatic space and the round window there are two possible routes: through the endolymphatic space, or through the periotic canal, bypassing the endolymphatic space and the sensory organs altogether (Purgue and Narins 2000a). The bypass presumably serves to protect the sensory organs against low-frequency overstimulation (Purgue and Narins 2000b). The vibrational energy that ultimately stimulates the auditory end organs may enter the endolymphatic space predominantly through a patch of thin membrane in its cranial wall near the sacculus. This entry point was identified by Purgue and Narins (2000b) by mechanically probing the perimeter of the endolymphatic space. After entering the endolymphatic space, the energy may pass either through the basilar papilla’s or through the amphibian papilla’s lumen to the round window. Measurements of the motion of the respective contact membranes show that there is a frequency-dependent separation of the vibrational energy between paths through the amphibian and the basilar papilla (Purgue and Narins 2000a; see Fig. 3c). The accompanying dynamic model of the energy flow through the bullfrog’s inner ear (Purgue and Narins 2000b) indicates that this separation may occur based on the acoustic impedances of the paths. Fig. 3 Overview of measurements of the frog inner ear; comparison between Rana (left) and Hyla (right). The dashed lines indicate the separation between the amphibian papilla and the basilar papilla. a, b Distributions of characteristic frequencies of auditory nerve fibers in Rana pipiens pipiens and Hyla cinerea. c Example of the response of the contact membrane in R. catesbeiana; the black line represents the amphibian papilla, open markers the basilar papilla. d, e Distributions of spontaneous otoacoustic emissions in ranid species (combined data from R. pipiens pipiens and R. esculenta) and hylid species (combined data from H. cinerea, H. chrysoscelis, and H. versicolor). f Example of stimulus frequency otoacoustic emissions in R. pipiens pipiens at indicated stimulus levels. g, h Examples of DP-grams measured in Rana pipiens pipiens and Hyla cinerea. a, b, d, e, g and h are taken from Van Dijk and Meenderink (2006). There they were reproduced from Ronken (1990), Capranica and Moffat (1983), Van Dijk et al. (1989, 1996), Meenderink and Van Dijk (2004), and Van Dijk and Manley (2001), respectively. c is taken from Purgue and Narins (2000a), and f is an adapted presentation of data from Meenderink and Narins (2006) (graph created with data provided by Dr. Meenderink). The perilymphatic path through the periotic canal may serve as a shunt for acoustical energy to the round window. As its impedance increases exponentially with frequency, low-frequency vibrations will most effectively utilize this path. The endolymphatic path, on the other hand, presumably has a relatively constant impedance throughout the frog’s auditory range. The respective lumina of the amphibian and basilar papilla have a frequency-dependent impedance of their own. According to the model mentioned above, these impedances are dominated by the characteristics of the contact membranes (Purgue and Narins 2000b).
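To make the role of these path impedances concrete, the following deliberately minimal sketch treats the two routes as a parallel pair: a periotic-canal shunt whose impedance grows exponentially with frequency, and an endolymphatic path with constant impedance. All parameter values are hypothetical illustrations, not fits to the Purgue and Narins model; two parallel paths driven by the same pressure divide the volume flow in inverse proportion to their impedances.

    import numpy as np

    def endolymphatic_fraction(f_hz, z_endo=1.0, z_canal_0=0.1, f_scale=300.0):
        """Fraction of the acoustic volume flow taking the endolymphatic route."""
        z_canal = z_canal_0 * np.exp(f_hz / f_scale)  # shunt impedance rises exponentially with frequency
        return z_canal / (z_canal + z_endo)           # simple "current divider" rule for parallel paths

    for f in (100.0, 500.0, 1000.0, 2000.0):
        print(f"{f:6.0f} Hz -> {endolymphatic_fraction(f):.2f} of the flow reaches the papillae")

With these illustrative numbers, low-frequency energy is largely shunted through the periotic canal, while higher-frequency energy is forced through the papillar lumina, reproducing the qualitative behavior described above.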
The respective peak displacements of the contact membranes correspond to the detected frequencies in the associated organs (Purgue and Narins 2000a). Basilar papilla The basilar papilla’s tectorial membrane is presumably driven by a vibrating pressure gradient between the sacculus and the basilar papilla’s contact membrane. No reports have been published on direct measurements of the mechanical response of the tectorial membrane, or on the basilar papilla’s hair bundle mechanics. However, the hair cell orientation in the basilar papilla implies that the tectorial membrane’s primary mode of motion is to and from the sacculus. Auditory nerve fiber recordings from the frog basilar papilla show a frequency-selective response (see Fig. 4 for examples of tuning curves). The range of characteristic frequencies in nerve fibers from the basilar papilla is species dependent. In the leopard frog, they are approximately between 1,200 and 2,000 Hz (Ronken 1990); in the bullfrog they are slightly lower, between 1,000 and 1,500 Hz (Shofner and Feng 1981; Ronken 1991). In hylid frogs, the characteristic frequencies appear to be significantly higher; in Hyla cinerea, the green treefrog, they range from 2.8 to 3.9 kHz (Ehret and Capranica 1980; Capranica and Moffat 1983), and in Hyla regilla roughly from 2 to 3 kHz (Stiebler and Narins 1990; Ronken 1991). Where studied in other species, the characteristic frequencies of the basilar papilla’s nerve fibers fall roughly within the bounds defined by the bullfrog at the low end and the green treefrog at the high end (Scaphiopus couchi: approximately 1–1.5 kHz, Capranica and Moffat 1975; Ronken 1991; Eleutherodactylus coqui: approximately 2–4 kHz, Narins and Capranica 1976, 1980; Ronken 1991; Physalaemus pustulosus group: around 2.2 kHz, Wilczynski et al. 2001). Fig. 4 Tuning curves measured in the auditory nerve in R. catesbeiana (unpublished measurements by JMS & PvD, 1992; various specimens). The numbers in the graph indicate values. In each individual frog, the tuning curves of the auditory nerve fibers appear to have a nearly identical shape and characteristic frequency (Ronken 1990; Van Dijk and Meenderink 2006). This suggests that the entire basilar papilla is tuned to the same frequency. Because of this collective tuning, characterized by one characteristic frequency and a single tuning-curve shape throughout the organ, the basilar papilla may be referred to as a "single auditory filter". In comparison, the mammalian cochlea and the anuran amphibian papilla (see below) consist of a combination of a large number of auditory filters (Pickles 1988). The quality factor, Q10dB (e.g., Narins and Capranica 1976; Shofner and Feng 1981), is lower than that of other vertebrate hearing organs in the same frequency range (Evans 1975; see Fig. 5), and ranges from approximately 1 to 2 in both the leopard frog and the bullfrog (Ronken 1991; see Fig. 5). For other anuran species, the ranges are somewhat different, with the lowest minimum values (approximately 0.5) reported for Hyla regilla, and the highest maximum values (approximately 2.8) in Scaphiopus couchi. Thus, the basilar papilla’s frequency selectivity is relatively poor. Fig. 5 Comparison of the filter quality factor versus the characteristic frequency (CF, in kHz) of nerve fibers from the cat cochlea (adapted from Evans 1975) and the leopard frog (adapted from Ronken 1991).
In the leopard frog graph, the triangular symbols correspond to nerve fibers from the amphibian papilla, the circles to fibers from the basilar papilla. The black line indicates the upper limit of the amphibian papilla’s frequency domain. The grey area in the upper (cat) graph corresponds to the area of the lower (frog) graph. The loops indicate the approximate perimeters of the fiber populations of the amphibian papilla and the basilar papilla in the lower graph. As illustrated in Fig. 3, there is no correspondence between the range of characteristic frequencies in the basilar papilla and the range of spontaneous otoacoustic emission frequencies (Van Dijk and Manley 2001; Van Dijk and Meenderink 2006; Van Dijk et al. 2003; Meenderink and Van Dijk 2004, 2005, 2006; Meenderink and Narins 2007). Since it is generally assumed that otoacoustic emissions of a particular frequency are generated at the detection site for this frequency, this suggests that the basilar papilla does not generate spontaneous emissions. However, it does emit distortion product otoacoustic emissions (Van Dijk and Manley 2001) and stimulus frequency otoacoustic emissions (Palmer and Wilson 1982; Meenderink and Narins 2006). The peak amplitudes of the distortion product otoacoustic emissions match the characteristic frequencies of the auditory nerve fibers innervating the basilar papilla (Meenderink et al. 2005). The amplitude and phase characteristics of the distortion product otoacoustic emissions can be qualitatively modeled by assuming the basilar papilla to be a single passive non-linear auditory filter (Meenderink et al. 2005). Thus, nerve fiber recordings, otoacoustic emission measurements and a model based on these measurements show that the basilar papilla functions as a single frequency band auditory receptor. This frequency band is relatively broad, and the center frequency may depend on species and individual animals. The hypothesis that considers the basilar papilla as a single resonator was originally put forward by Van Bergeijk (1957). He investigated the mechanical response of the tectorial membrane in a scale model consisting of a thin rubber tectorium spanning a lumen in a stiff wall. A number of different vibration modes existed in this model. Although Van Bergeijk’s model is vastly oversimplified, the basic idea that the mechanical tuning of the tectorial membrane may be the basis of the basilar papilla’s frequency selectivity is still viable. Amphibian papilla As in the basilar papilla, the tectorial membrane in the amphibian papilla is presumably driven by a vibrating pressure difference between the sacculus and the round window. Due to the more elaborate tectorial membrane and the more complex pattern of hair cell orientations, the motion of the membrane may be expected to be more complex than that of the basilar papilla’s tectorial membrane. The tectorial curtain is in the sound path through the papilla, and presumably plays a role in conveying vibrations to the tectorial membrane and the hair bundles. Electrophysiological recordings from, and subsequent dye-filling of, single fibers of the auditory nerve show that the amphibian papilla has a tonotopic organization (Lewis et al. 1982). The fibers innervating the triangular patch have low characteristic frequencies, down to approximately 100 Hz. The frequencies increase gradually along the caudal extension. In the bullfrog, the upper frequency is about 1,000 Hz; an overview of the tonotopic organization is given in Fig. 2b.
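The frequency selectivity quoted throughout this review is quantified by the quality factor Q10dB: the characteristic frequency (CF) divided by the bandwidth of the tuning curve measured 10 dB above the threshold at CF. A worked example (the bandwidth is a hypothetical value, chosen to land within the 1–2 range quoted above for ranid basilar papilla fibers):

\[ Q_{10\,\mathrm{dB}} = \frac{\mathrm{CF}}{\mathrm{BW}_{10\,\mathrm{dB}}} = \frac{1500\ \mathrm{Hz}}{1000\ \mathrm{Hz}} = 1.5 . \]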
The frequency selectivity of the amphibian papilla’s nerve fibers is similar to that of mammalian auditory nerve fibers with the same characteristic frequency. This is in contrast to the significantly poorer frequency selectivity of the basilar papilla’s nerve fibers (Ronken 1990; Evans 1975; see also Fig. 5). In the low-frequency, rostral part of the papilla, the hair cells are electrically tuned (Pitchford and Ashmore 1987; Smotherman and Narins 1999). This tuning stems from the electrical properties of the cell membrane’s ion channels. The hair cell tuning characteristics parallel the tonotopy of the single nerve recordings. Therefore, frequency selectivity in the rostral part of the amphibian papilla appears to be primarily determined by the electrical characteristics of the hair cells. However, there is a fundamental discrepancy between the tuning characteristics of the hair cells and the auditory nerve fibers. While hair cells exhibit a second-order resonance (Pitchford and Ashmore 1987), auditory neurons display a higher-order filter characteristic (Lewis 1984). Nevertheless, due to the parallels in the tonotopic organization, the assumption that the frequency selectivity is determined by the electrical tuning seems viable for the rostral part of the amphibian papilla. The higher-order responses in the neural signal may result from coupling between hair cells, which may be mechanical, for instance through the tectorial membrane. Neurons innervating the rostral portion of the amphibian papilla display non-linear two-tone suppression similar to that in other vertebrates (Capranica and Moffat 1980; Benedix et al. 1994). Another manifestation of non-linear behavior can be found in the response to noise: second-order Wiener kernels of low-frequency neurons show off-diagonal components, which are an indication of non-linearity (Van Dijk et al. 1994, 1997). The spectro-temporal receptive fields constructed from these Wiener kernels exhibit suppressive side bands in addition to the main characteristic-frequency band of the fiber (Lewis and Van Dijk 2004). Hair cells caudal to the tectorial curtain do not display electrical resonance (Smotherman and Narins 2000). Therefore, the tuning of this high-frequency, caudal region of the papilla must result from the mechanical properties of the tectorial membrane and the hair cells. Based on the hair cell orientation displayed in Fig. 2b, the tectorial membrane motion in the amphibian papilla is expected to be far more complex than in the basilar papilla. Assuming that the hair bundles are oriented in such a way that they are maximally deflected by the connected tectorial membrane, the rostral patch of the membrane should move to and from the sacculus when the appropriate stimuli are presented. The rostral part of the s-shaped extension should move along its major axis, whereas the extension caudal to the tectorial curtain should move in a transverse direction. The amphibian papilla appears to be the only source of spontaneous otoacoustic emissions in the frog inner ear (Van Dijk et al. 1989, 1996; Long et al. 1996; Van Dijk and Manley 2001; Fig. 3d-e). The frequency distribution of these emissions corresponds to the range of best frequencies of the neurons projecting to the portion of the amphibian papilla caudal to the tectorial curtain. It is generally assumed that an otoacoustic emission of a specific frequency is generated at the location in the inner ear where that frequency is detected.
Under this assumption, the presence of spontaneous otoacoustic emissions indicates that the caudal portion of the amphibian papilla exhibits spontaneous activity. Presumably, this activity is related to active amplification of input signals in this area. The caudal region of the amphibian papilla is also involved in the generation of distortion product otoacoustic emissions (Van Dijk and Manley 2001; Meenderink and Van Dijk 2004) and stimulus frequency otoacoustic emissions (Meenderink and Narins 2006). The distortion product otoacoustic emissions from the amphibian papilla are more vulnerable to metabolic injuries than those from the basilar papilla (Van Dijk et al. 2003). Also, both the spontaneous (Van Dijk et al. 1996) and distortion product (Meenderink and Van Dijk 2006) otoacoustic emissions display a clear dependence on body temperature. These results combine to indicate that the s-shaped extension of the amphibian papilla caudal to the tectorial curtain functions as an active hearing organ. Discussion Our aim in this review is to outline what is known about the mechanical response properties of the amphibian and basilar papilla. Only one published report exists of direct mechanical measurements of structures associated with these papillae (Purgue and Narins 2000a). The measurements show that the response of the contact membrane is frequency dependent for each papilla. The movement of the contact membrane may be assumed to reflect the fluid motion within the respective papilla. The contact membrane of the amphibian papilla shows a maximum response when the ear is stimulated with relatively low acoustic frequencies, while the basilar papilla contact membrane exhibits a maximum response to higher frequencies. The amphibian and the basilar papilla are the only hearing organs found in terrestrial vertebrates in which the hair cells are not set on a flexible basilar membrane. Instead, the hair cells are embedded in a relatively stiff cartilaginous support structure. Any frequency-selective response, therefore, most likely originates from the mechanical or electrical properties of the hair cells, or the mechanical properties of the tectorial membrane, or a combination of these factors. Since there are no direct mechanical measurements of either the hair cells in the papillae or the tectorial membranes, we cannot come to any definite conclusions regarding their properties. However, the available morphological and functional data allow for some hypotheses. The most conspicuous functional characteristic of the amphibian papilla is its tonotopic organization (Lewis et al. 1982). Rostral to the tectorial curtain, the hair-cell orientation is essentially parallel to the tonotopic axis. In this low-frequency region of the amphibian papilla, the tectorial membrane apparently moves in a rostro-caudal direction. In contrast, the hair-bundle orientation suggests that the tectorial-membrane motion is perpendicular to the tonotopic axis in the high-frequency, caudal region of the papilla. The tectorial membrane’s caudal end, therefore, appears to vibrate in a markedly different direction than its rostral end. In the low-frequency region of the amphibian papilla, the hair cells display electrical tuning. The tuning properties of the hair cells parallel the tonotopic organization measured from the afferent nerve fibers (Pitchford and Ashmore 1987). This strongly suggests that the tuning characteristics of the nerve fibers are primarily determined by the electrical hair-cell resonances.
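The electrical tuning just described behaves, to a first approximation, as a second-order resonance. The following minimal sketch shows what such a resonance implies for tuning sharpness; the resonance frequency and damping are hypothetical, not measured hair-cell values.

    import numpy as np

    def resonance_gain(f_hz, f0=500.0, q=5.0):
        """Magnitude response of a second-order resonator (hypothetical f0 and Q)."""
        r = f_hz / f0
        return 1.0 / np.sqrt((1.0 - r**2) ** 2 + (r / q) ** 2)

    f = np.linspace(10.0, 2000.0, 20000)
    gain_db = 20.0 * np.log10(resonance_gain(f))
    cf = f[np.argmax(gain_db)]                 # frequency of maximal gain
    band = f[gain_db >= gain_db.max() - 10.0]  # frequencies within 10 dB of the peak
    print(f"CF ~ {cf:.0f} Hz, Q10dB ~ {cf / (band[-1] - band[0]):.2f}")  # ~495 Hz, ~1.6 here

A single second-order filter of this kind rolls off at only 12 dB per octave; the steeper, higher-order tuning seen in the nerve fibers (next paragraph) therefore requires additional filtering, for example through coupling between neighboring hair cells.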
The auditory nerve-fiber recordings reflect the presence of high-order filtering (Lewis 1984), whereas hair cells essentially function as second-order resonances. It is, therefore, likely that coupling between the hair cells shapes the frequency responses in the nerve fibers. Such coupling may be mechanical, for example through the tectorial membrane, or electrical, or a combination of mechanical and electrical. Hair cells in the high-frequency, caudal region do not display any electrical resonance (Smotherman and Narins 1999). This implies that the frequency selectivity must be based on mechanical tuning, probably by the tectorial membrane. The caudal region of the amphibian papilla shares some notable characteristics with the mammalian cochlea (see also Lewis 1981):
- the papilla is elongated, and it exhibits a tonotopic gradient along the long axis;
- the orientation of the hair cells is perpendicular to the tonotopic axis, indicating that the hair cells are stimulated most efficiently by a deflection perpendicular to the tonotopic axis;
- frequency selectivity very probably relies on mechanical tuning;
- frequency selectivity is similar, with Q10dB values ranging from 0.8 to 2.2; and
- both spontaneous and distortion product otoacoustic emissions are generated. These emissions are physiologically vulnerable.
The presence of spontaneous otoacoustic emissions shows that at least part of the amphibian papilla’s caudal extension functions as an active hearing organ. In this respect it is similar to the mammalian cochlea and other vertebrate hearing organs (Lewis and Narins 1999). One active mechanism in the mammalian cochlea is the prestin-mediated active somatic length change of the outer hair cells (Brownell et al. 1985; Yost 2000; Zheng et al. 2000; Liberman et al. 2002; Dallos 2003). However, this mechanism is probably exclusively present in mammalian outer hair cells. Active hair bundle movements have been reported as an alternative active mechanism in anuran saccular hair cells (Martin and Hudspeth 1999; Martin et al. 2003; Bozovic and Hudspeth 2003); this mechanism may be present in the auditory organs as well. Although the fundamental active mechanism may differ between species, the functional result seems to be very similar across vertebrates: high auditory sensitivity and good frequency selectivity (Manley 2000). The basilar papilla seems to function in a much simpler manner. Both neural recordings and otoacoustic emission measurements suggest that it functions as a single auditory filter. Since the hair cells in the basilar papilla are unlikely to be electrically tuned, its frequency selectivity most likely results from mechanical tuning, probably via the tectorial membrane. The basilar papilla is remarkable in that no spontaneous otoacoustic emissions have been recorded in its frequency range. The absence of such emissions can be caused either by the fact that they are not generated within the papilla, or by the fact that the transmission of such emissions to the tympanic membrane is inhibited. However, distortion product otoacoustic emissions can be recorded in this range (e.g., Van Dijk and Manley 2001). This implies that the outward transmission is not inhibited, and therefore that spontaneous emissions are most likely not generated within the basilar papilla. Furthermore, the amplitude of the basilar papilla’s distortion product otoacoustic emissions depends less on temperature than that of the amphibian papilla’s (Meenderink and Van Dijk 2006).
Also, emissions from the basilar papilla are less sensitive to the disruption of the oxygen supply (Van Dijk et al. 2003). Apparently, emissions from the basilar papilla are relatively independent of the metabolic rate, and it has therefore been suggested that the basilar papilla is not an active hearing organ (Vassilakis et al. 2004; Van Dijk and Meenderink 2006). In conclusion, the frog inner ear occupies an exceptional place among the hearing organs of terrestrial vertebrates. It includes two auditory end organs, which both lack the basilar membrane present in every other terrestrial vertebrate species. Instead, the hair cells are embedded in a relatively stiff structure. They are stimulated by the motion of the tectorial membrane. Although the basilar and amphibian papilla are similar in this respect, they appear to function by different mechanisms. In fact, even within the amphibian papilla two distinctly different functional regions can be identified. The low-frequency portion, rostral to the tectorial curtain, contains hair cells that exhibit electrical tuning. These hair cells are most sensitive to deflection along the tonotopic axis; thus, this is presumably the tectorial membrane’s direction of vibration. By contrast, the region caudal to the tectorial curtain shows more similarities to, for example, the mammalian cochlea: the hair cell orientation is perpendicular to the tonotopic axis, and the presence of spontaneous otoacoustic emissions suggests that it functions as an active hearing organ. Finally, the basilar papilla is different again: it appears to function as a single passive auditory filter. Thus the frog inner ear includes two auditory end organs with three functional regions.
[ "anuran", "auditory system", "amphibian", "frog", "inner ear mechanics" ]
[ "P", "P", "P", "P", "P" ]
Eur_J_Appl_Physiol-4-1-2267484
Modulation in voluntary neural drive in relation to muscle soreness
The aim of this study was to investigate whether (1) spinal modulation would change after non-exhausting eccentric exercise of the plantar flexor muscles that produced muscle soreness, and (2) central modulation of the motor command would be linked to the development of muscle soreness. Ten healthy subjects volunteered to perform a single bout of backward downhill walking exercise (duration 30 min, velocity 1 ms−1, negative grade −25%, load 12% of body weight). Neuromuscular test sessions [H-reflex, M-wave, maximal voluntary torque (MVT)] were performed before, immediately after, as well as 1–3 days after the exercise bout. Immediately after exercise there was a 15% decrease in MVT of the plantar flexors, partly attributable to an alteration in contractile properties (−23% in the electrically evoked mechanical twitch). However, MVT failed to recover before the third day, whereas the contractile properties had significantly recovered within the first day. This delayed recovery of MVT was likely related to a decrement in voluntary muscle drive. The decrease in voluntary activation occurred in the absence of any variation in spinal modulation estimated from the H-reflex. Our findings suggest the development of a supraspinal modulation, perhaps linked to the presence of muscle soreness. Introduction In an exercise inducing muscle damage, inadequate neural drive can be an attempt of the neuromuscular system to protect the muscle-tendon unit from additional damage (Strojnik and Komi 2000; Nicol et al. 2006). This inadequate neural drive could result from a combination of three factors: the conscious and unconscious will of the subject to reduce the exercise intensity; an inability of the motor cortex to generate sufficient output to maximally activate the muscle; and/or a decreased transmission of the supraspinal input to the muscle by the spinal motor axons. A decrement in H-reflex amplitude has been observed immediately after an exhausting voluntary contraction of a single muscle group (i.e., Duchateau et al. 2002; Garland and McComas 1990; Kuchinad et al. 2004). It has been proposed that this decline in the transmission of the action potentials from the Ia afferents to the α-motoneurons may be a consequence of a presynaptic inhibition mediated by group III and IV afferents (Bigland-Ritchie et al. 1986; Duchateau et al. 2002; Garland and McComas 1990; Garland 1991; Woods et al. 1987) induced by muscle damage (Avela et al. 2006). Furthermore, increased group III and IV muscle afferent input could induce H-reflex depression as muscle soreness progresses, since muscle pain is believed to reflect activity in group III and IV muscle afferents (O’Connor and Cook 1999). However, only a few studies have observed a decrease in H-reflex amplitude after exercise of multiple muscle groups, as occurs for example in running (Avela et al. 1999; Bulbulian and Bowles 1992; Racinais et al. 2007b), raising the question of whether spinal modulation occurs after whole-body exercise. Recently it has been shown that walking backward induces muscle soreness in the muscles of the lower limb (Nottle and Nosaka 2005). Accordingly, this exercise model allows the study of the effects of muscle soreness on alterations in neural drive. Thus, the goal of this study was to determine whether the impaired exercise performance of muscles with delayed onset muscle soreness (DOMS) is due to an alteration in neural drive related to spinal modulation.
Methods Subjects Ten healthy subjects (eight males and two females; age 27 ± 1 years; weight 68 ± 2 kg; height 174 ± 2 cm; data in mean ± SEM) gave informed, written consent to participate in this study. The procedures complied with the Helsinki declaration for human experimentation and were approved by the local Ethics Committee. None of the subjects suffered from muscle soreness or ankle injuries. Subjects were asked to avoid caffeine intake within the 8 h preceding the test and to avoid all vigorous activity during the 24 h preceding the test. Subjects were also asked to refrain from analgesic intake throughout the protocol, as analgesics could have disturbed DOMS perception. Experimental procedures Subjects visited our laboratory on four consecutive days (Fig. 1). The first day consisted of a neuromuscular test session (described subsequently), followed by a backward walking exercise (described below), followed by further neuromuscular testing. On the second, third and fourth days, subjects returned to the laboratory at the same hour of day at which they had finished the walking exercise and performed the neuromuscular testing. Fig. 1 Experimental design. Thin arrow indicates stimulation at Hmax intensity, single thick arrow indicates stimulation at Mmax intensity, double thick arrow indicates a doublet at Mmax intensity. Neuromuscular tests The neuromuscular tests are described in Fig. 1. All the neuromuscular tests began with the determination of the stimulation intensity required to elicit a maximal H-reflex (Hmax). Afterwards, three Hmax (separated by 20 s) and three maximal M-wave amplitudes (Mmax; separated by 8 s) were elicited from the relaxed muscle. The amplitudes of the three twitches evoked at Hmax and Mmax intensities were averaged for subsequent analysis in both the soleus and gastrocnemius medialis. Thereafter, subjects were instructed to perform three maximal voluntary torque (MVT) contractions of the plantar flexor muscles, each for 5 s. Subjects were verbally encouraged to perform maximally. A superimposed stimulus (Hmax intensity) was evoked during each MVT plateau to obtain the H-reflex during contraction (Hsup). Then, another superimposed stimulus (Mmax intensity) was evoked in order to obtain a superimposed M-wave (Msup) during voluntary contraction. Finally, a doublet (two electrically evoked twitches, 10 ms apart, Mmax intensity) was evoked during each plateau (superimposed twitch) and another doublet was evoked 4 s after each MVT (potentiated twitch). The ratio of the amplitude of the superimposed twitch torque over the amplitude of the twitch evoked at rest 4 s after the MVT was used to assess the level of voluntary activation (VA) (Allen et al. 1995). According to the twitch interpolation method (Allen et al. 1995), the percentage of VA was calculated as follows: VA (%) = (1 − Superimposed Twitch/Potentiated Twitch) × 100. Muscle soreness assessment A subjective evaluation of the extent of DOMS in the plantar flexor muscles was performed before each neuromuscular test by completing two subjective scales for the evaluation of DOMS. The first was a 9-cm visual analogue scale without any graduation (a horizontal line ranging from no pain at the left to extreme pain at the right). The second was a Likert scale with seven items (from 0: no pain, to 6: severe pain limiting movement; Vickers 2001). Backward downhill walking exercise Subjects exercised by walking on a motorized treadmill (S2500, HEF Techmachine, France) for 30 min at a constant velocity of 1 ms−1 with a negative grade of −25%.
To increase the eccentric loading on the plantar flexor muscles, the walk was performed backward (Nottle and Nosaka 2005) whilst wearing a vest loaded with an additional weight equivalent to 12% of body weight. Measurement and calculations Torque measurement The MVT of the plantar flexor muscles was recorded by a dynamometric pedal (Captels, St Mathieu de Treviers, France). Subject position was standardized with hip, knee and ankle angulations of 90°, and the foot securely strapped to the pedal. Evoked potentials The tibial nerve was stimulated with a cathode electrode with a diameter of 9 mm placed in the popliteal cavity (Contrôle Graphique Medical, Brie-Comte-Robert, France). Subjects were in a standardized position with motionless head (Zehr 2002) and in a standardized environment (i.e., same time-of-day, silent room, constant lighting). Furthermore, a constant pressure was applied to the electrode with the use of a strap. This was controlled by an air pressure-recorder (Kikuhime, TT MediTrade, Soro, Denmark) located under the strap. The anode (10 cm × 5 cm, Medicompex, Ecublens, Switzerland) was positioned distal to the patella. Electrical stimulations (400 V, rectangular pulse of 0.2 ms) were delivered by a high-voltage stimulator (Digitimer DS7AH, Digitimer, Hertfordshire, England). The amperage was adjusted for each subject during the familiarization session. During this first session, the amperage was increased progressively (in 10 mA increments) until a plateau in the twitch mechanical response [peak twitch (Pt)] and in Mmax was observed. With increasing stimulation intensity, the H-reflex response initially increased progressively before decreasing and then disappearing. Thereafter, the intensity needed to obtain Hmax was fine-tuned in 1 mA steps. The stimulation intensity needed to obtain Hmax was determined before each test session, but with a simplified procedure based on knowledge of the intensity used during the first test session. This adjustment seemed necessary for the H-reflex in view of the large variation in Hmax that can occur for a small variation in stimulation conditions (e.g., intensity, localization of the cathode). Recordings The evoked Pt torque was recorded in the relaxed muscle by the same ergometer as the MVT. Both MVT and Pt were measured with the knee at 90° to reflect the changes occurring in the soleus. Reflex waves were recorded for both the soleus, which provides the highest responses due to the activation of the slow-twitch fibres by the H-reflex (Buchthal and Schmalbruch 1970), and the gastrocnemius medialis, which is particularly likely to be affected by the walking exercise, using 9 mm diameter bipolar Ag/AgCl electrodes (Contrôle Graphique Medical, Brie-Comte-Robert, France) with an inter-electrode distance of 25 mm. The reference electrode was placed on the wrist. Low impedance between the two electrodes (<5 kΩ) was obtained by abrading the skin with emery paper and cleaning it with alcohol. Signals were amplified and filtered (band pass 30–500 Hz, gain = 1,000), and recorded at high frequency (2,000 Hz). The compound muscle action potentials were recorded using MP30 hardware (Biopac Systems Inc., Santa Barbara, CA, USA) and dedicated software (BSL Pro Version 3.6.7, Biopac Systems Inc., Santa Barbara, CA, USA). The same equipment was also used to drive the stimulator. Calculation The Pt may be considered as an index of the contractile properties, and the Mmax amplitude represents an index of sarcolemmal excitability.
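As a minimal computational sketch of the twitch-interpolation calculation defined in the neuromuscular test description above (the two torque amplitudes are hypothetical example values, not data from this study):

    # VA (%) = (1 - superimposed twitch / potentiated twitch) x 100
    superimposed_twitch = 2.1   # hypothetical extra torque (N·m) evoked by the doublet during the MVT plateau
    potentiated_twitch = 14.8   # hypothetical torque (N·m) of the doublet evoked at rest, 4 s after the MVT

    va_percent = (1.0 - superimposed_twitch / potentiated_twitch) * 100.0
    print(f"Voluntary activation: {va_percent:.1f}%")  # 85.8% for these example values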
Because no evidence exists to show that data recorded at rest reflect the responses of the neuromuscular system during contraction, the Hsup amplitude was recorded during MVT to complement the Hmax amplitude at rest. To ensure that any changes in the evoked Hmax and Hsup amplitudes were not due to changes at the muscle fiber membrane or neuromuscular junction (Cupido et al. 1996), we normalized these recordings to the M-wave amplitude recorded under the same testing conditions, that is, the Hmax/Mmax and Hsup/Msup ratios. These ratios may be considered as a global index of the spinal modulation produced by presynaptic inhibition, motoneuron excitability, collision in antidromically activated axons and Renshaw cell inhibition, acting individually or in concert. Statistical analysis Each variable was tested for normality using the skewness and kurtosis tests, with acceptable Z values not exceeding ±1.5. With the assumption of normality confirmed, parametric tests could be performed. The effect of the walking exercise was analyzed for each variable by a one-way analysis of variance (ANOVA) for repeated measures (five test sessions). The contrast method was applied as a post hoc test to further investigate the effect of both the exercise and the recovery. Because the VA level failed to display a normal distribution, a Friedman test was used instead of the ANOVA. Statistical analyses were performed with Systat software (Systat, Evanston, IL, USA). Data are reported as mean ± SEM and the level of statistical significance was set at P < 0.05. Results Maximal voluntary torque, voluntary activation and contractile properties The MVT changed significantly across the 4 days following the walking test (F4,36 = 3.8, P < 0.02, Fig. 2a). Post hoc analysis showed a significant decrease in MVT after the walking exercise, which persisted during the next two days (F1,9 = 12.33, P < 0.01). A significant recovery in MVT was observed on the third day (48-h versus 72-h after: F1,9 = 9.64, P < 0.02). Fig. 2 Evolution of voluntary torque (a), voluntary activation (b), electrically evoked peak twitch (c) and subjective delayed onset muscle soreness (d; black rectangles: visual analogue scale, white rectangles: Likert scale) across the experimental sessions. Data in mean ± SEM. Asterisk indicates a value or group of values significantly different from the other values of the graph (P < 0.05). In line with the evolution observed in MVT, post hoc analysis showed a significant decrease in the VA level after the walking exercise (pre versus post-exercise: P < 0.02, Fig. 2b), which failed to recover by 48-h (post-exercise versus 24-h and 48-h after exercise, NS). However, there was a significant recovery by 72-h (post-exercise versus after 72-h, P < 0.005). The electrically evoked Pt also displayed a significant variation following the walking exercise (F4,36 = 14.07, P < 0.001, Fig. 2c). Post hoc analysis revealed a significant decrease in Pt after the exercise (F1,9 = 58.34, P < 0.001) followed by a significant recovery thereafter (F1,9 = 24.76, P < 0.001). Subjective DOMS The subjects’ feeling of DOMS increased significantly in the days following the walking exercise (F4,27 > 28, P < 0.001 for both scales used, Fig. 2d). Post hoc analysis displayed significantly higher subjective DOMS for the 3 days following exercise compared to the termination of exercise (F1,9 > 35, P < 0.001, for both scales). Muscle soreness reached a maximum 48-h after exercise and began to recover by 72-h (48-h versus 72-h after exercise: F1,9 > 12, P < 0.01 for both scales).
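The F values reported in this section derive from the repeated-measures ANOVA with post hoc contrasts described under "Statistical analysis". A minimal sketch of such an analysis, using the statsmodels AnovaRM class and an entirely synthetic data set shaped like this study's design (10 subjects × 5 sessions; all numbers hypothetical):

    import numpy as np
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    rng = np.random.default_rng(0)
    sessions = ["pre", "post", "24h", "48h", "72h"]
    effects = [0.0, -15.0, -12.0, -10.0, -3.0]      # hypothetical torque change pattern (%)

    rows = []
    for subject in range(10):
        baseline = rng.normal(100.0, 10.0)          # hypothetical baseline torque (N·m)
        for session, effect in zip(sessions, effects):
            rows.append({"subject": subject, "session": session,
                         "torque": baseline * (1.0 + effect / 100.0) + rng.normal(0.0, 3.0)})

    df = pd.DataFrame(rows)
    print(AnovaRM(df, depvar="torque", subject="subject", within=["session"]).fit())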
Evoked potentials An example of the evoked potentials recorded in a representative subject is displayed in Fig. 3, and the mean values are displayed in Table 1. The walking exercise failed to induce significant changes in the evoked potentials, both at rest (Mmax, F4,36 = 1.60 and 0.60 for soleus and gastrocnemius medialis, respectively, NS) and during the voluntary contraction (Msup, F4,36 = 0.46 and 2.20 for soleus and gastrocnemius medialis, respectively, NS). Furthermore, the reflex waves calculated both at rest (Hmax/Mmax ratio) and during contraction (Hsup/Msup ratio) also did not change significantly (all F4,36 < 0.98, NS, Table 1). Fig. 3 Example of evoked potentials recorded in a representative subject. Each drawing represents the average of three recordings obtained on a relaxed muscle.

Table 1 Evolution of the maximal compound action potential electrically evoked at rest (Mmax) and during MVT (Msup), and of the normalized H-reflex at rest (Hmax/Mmax) and during MVT (Hsup/Msup); data in mean ± SEM

Soleus
- Mmax (mV): before 7.35 ± 0.53; after 6.80 ± 0.49; 24-h 7.83 ± 0.8; 48-h 7.74 ± 0.58; 72-h 7.18 ± 0.75; NS
- Msup (mV): before 8.87 ± 1.14; after 8.13 ± 0.76; 24-h 9.05 ± 1.13; 48-h 8.91 ± 0.97; 72-h 8.75 ± 1.29; NS
- Hmax/Mmax ratio: before 0.36 ± 0.10; after 0.37 ± 0.09; 24-h 0.38 ± 0.10; 48-h 0.37 ± 0.10; 72-h 0.36 ± 0.09; NS
- Hsup/Msup ratio: before 0.39 ± 0.10; after 0.45 ± 0.10; 24-h 0.39 ± 0.08; 48-h 0.34 ± 0.08; 72-h 0.34 ± 0.06; NS

Gastrocnemius medialis
- Mmax (mV): before 6.61 ± 1.07; after 6.10 ± 0.76; 24-h 6.42 ± 0.64; 48-h 6.83 ± 1.0; 72-h 6.50 ± 0.92; NS
- Msup (mV): before 9.40 ± 1.29; after 7.04 ± 1.06; 24-h 8.24 ± 0.81; 48-h 7.95 ± 0.89; 72-h 8.14 ± 2.26; NS
- Hmax/Mmax ratio: before 0.18 ± 0.05; after 0.19 ± 0.05; 24-h 0.22 ± 0.05; 48-h 0.19 ± 0.05; 72-h 0.19 ± 0.05; NS

Discussion The downhill walking exercise induced a significant decrease in the MVT of the plantar flexor muscles. Immediately after the walking exercise, the torque decrement of −15% (Fig. 2a) appeared to be caused partly by an alteration in muscle contractile properties (i.e., −23% for Pt, Fig. 2c). This alteration is typically referred to as “peripheral fatigue” [for a review, see Millet and Lepers 2004]. Furthermore, this peripheral fatigue was also associated with a decrease in VA (i.e., −5.3%, Fig. 2b), suggesting the concomitant existence of a “central modulation” [for a review, see Gandevia 2001]. The first finding of this study is that the maximal voluntary torque failed to recover before the third day (i.e., −12% and −10% after 24-h and 48-h of recovery, respectively, Fig. 2a), whereas the measures of the (peripheral) contractile properties had recovered significantly within the first 24 h after exercise (P < 0.01, Fig. 2c). This delayed recovery in MVT appeared to be mainly associated with a decrease in voluntary muscle activation (Fig. 2b). The time course of the change in VA presents similarities with the time course of the torque changes. A significant decrease in the VA reaching the muscle has previously been observed after prolonged (Millet et al. 2002, 2003) and short-duration (Racinais et al. 2007a) fatiguing exercise, but our data showed that a backward downhill walking exercise produced a decrease in muscle activation persisting for a few days after the exercise. It has been suggested that this central component could also explain the reduction in force production by the respiratory muscles after heavy exercise (Verin et al. 2004), and that it represents a protective mechanism ensuring that peripheral muscle fatigue does not exceed a critical threshold (Amann et al. 2006). This central protection of the muscle from further peripheral fatigue and damage comes at the expense of a truly maximal performance in which all the motor units are activated (Gandevia et al. 1996).
It has recently been suggested that a “central governor” in the brain regulates the extent of skeletal muscle recruitment during exercise (Noakes et al. 2005). According to this theory, the sensation of fatigue is the conscious interpretation of these homoeostatic control mechanisms during prolonged exercise (Noakes et al. 2005). Our results could partly support this theory, but with the proviso that the modulation of central activation in this experiment appears to be related not only to fatigue immediately after exercise but also to the development of DOMS in the days following the exercise. However, regulation of muscle recruitment by the brain (i.e., supraspinal regulation) is not the only system that could explain the observed decrease in skeletal muscle activation, since reflex pathways also need to be considered (i.e., spinal modulation). Previous results showed an impairment in VA after eccentric exercise when VA was estimated by nerve stimulation but not by cortical stimulation (Prasartwuth et al. 2005). This suggests that the VA deficit lies between these two sites of stimulation, i.e., in the motor cortex or at the spinal level (Prasartwuth et al. 2005), rather than in a governor upstream of the motor cortex. In the present study, we used the H-reflex amplitude as a tool to evaluate spinal modulation (Aagaard et al. 2002; Schieppati 1987) in order to differentiate between these possible regulatory mechanisms. The DOMS produced by our experimental protocol would have induced an increased discharge of group III and IV muscle afferents (Avela et al. 1999) and thus a pre-synaptic inhibition of the transmission from the Ia afferents to the α-motoneurons (Avela et al. 2006; Bigland-Ritchie et al. 1986; Duchateau et al. 2002; Garland and McComas 1990; Garland 1991; Woods et al. 1987). The extent to which this inhibition occurs will depend on whether the input producing the inhibition is ongoing or has ceased. In theory, however, increased group III and IV muscle afferent input induced by the DOMS should produce H-reflex depression as the soreness progresses. Indeed, muscle pain is believed to reflect activity in group III and IV muscle afferents (O’Connor and Cook 1999). However, our results failed to show significant variations in the evoked reflex-wave amplitudes throughout the experiment (i.e., Hmax/Mmax and Hsup/Msup ratios), suggesting that the motoneuron pool excitability was well preserved. A number of previous studies have demonstrated a significant H-reflex decrease in the exercised muscle group after an exhausting voluntary contraction of that muscle group (i.e., Duchateau et al. 2002; Garland and McComas 1990; Kuchinad et al. 2004). But only a few studies have observed this decrease after more generalized muscular exercise such as running (Avela et al. 1999; Bulbulian and Bowles 1992; Racinais et al. 2007b). Our results support these findings by showing that after a 30-min downhill walking exercise, electrically evoked reflex-wave activity was not decreased. Thus we conclude that a non-exhausting walking exercise, sufficient to induce significant DOMS, does not seem to induce an alteration of the spinal modulation. Since the persistence of a decrease in VA over several days in this study could not be explained by spinal modulation, it seems likely that a supraspinal component must have played a part. Accordingly, the changes in VA that occurred at the same time as the subjective symptoms of DOMS suggest a supraspinal regulation of muscle recruitment.
Indeed, it has been known for some time that exercise performance is regulated at least in part by supraspinal factors. For example, Bigland-Ritchie (1981) showed that central fatigue during repeated isometric contractions is minimized by exhorting the subject to produce a “super” effort at the end of each voluntary contraction. Our results add to that interpretation by suggesting that supraspinal modulation can also occur after locomotory activity such as walking, even without exhaustion. As we have already argued, the observed increase in DOMS could represent the conscious interpretation of an increased discharge of group III and IV muscle afferents (O’Connor and Cook 1999). Even though we failed to observe a significant alteration of spinal modulation at the time of increased muscle soreness, this does not prove that the discharge of these afferents had not increased, since it is still unclear whether group III and IV muscle afferents induce a post-exercise decrease in motoneuron excitability in healthy humans. Indeed, previous studies have shown that maintained firing of ischaemically sensitive group III and IV muscle afferents does not influence the altered muscle responses to cortical or corticospinal stimulation observed after fatiguing exercise (Andersen et al. 2003; Gandevia et al. 1996; Taylor et al. 2000). All these findings might suggest that, after exercise, increased output from group III and IV muscle afferents may not directly inhibit the motoneurons but may act upstream of the motor cortex to impair voluntary descending motor drive (Taylor et al. 2006). Accordingly, a novel contribution of this study is that VA recovered significantly on the third day, when DOMS also displayed a significant decrement (F1,9 > 12, P < 0.01, for both scales). Although this temporal relationship does not prove causality, this finding could suggest a relationship between the persistence of the decrease in VA and the subjective symptoms of DOMS in these subjects. This observation is consistent with the data of Le Pera et al. (2001), showing that muscle pain can induce a long-lasting depression in motor activation. Their data suggest that this inhibition of motor system excitability could be linked to a decreased excitability of the motor cortex as well as to spinal modulation (Le Pera et al. 2001). However, Prasartwuth et al. (2005) observed different time courses for muscle soreness and for changes in VA after eccentric exercise, leading these authors to suggest that muscle pain did not directly cause the change in voluntary drive. These divergent data emphasise the complexity of the relation between muscle soreness and voluntary muscle drive. It has recently been observed that motoneuron excitability in elbow flexors, but not extensors, was able to recover when ischaemia was maintained after fatiguing contractions (Martin et al. 2006), a finding that suggests differential influences of group III and IV muscle afferents on different motoneuron pools (Martin et al. 2006). In this study concerning the plantar flexors, we observed some statistical similarities between DOMS and VA (i.e., the lowest value of VA at 48-h, when DOMS was highest, and significant recovery of both at 72-h). However, from a functional point of view, VA returned to the control level at 72-h whereas DOMS at 72-h was not lower than DOMS at 24-h, and VA decreased immediately after exercise when DOMS was still weak.
This suggests that subjective DOMS of the plantar flexors cannot be considered an objective indicator of VA capability. Conclusion It was recently reported that a 90-min bout of flat running exercise produced a modification in spinal modulation (Racinais et al. 2007b). The present study showed that a 30-min downhill walking exercise failed to induce the same modification. However, there was a significant decrease in VA during maximal voluntary contractions performed in the days after the eccentric exercise that produced DOMS. This suggests the occurrence of a supraspinal modulation of muscle activation during this period, when muscle contractile properties had fully recovered following the eccentric exercise. Furthermore, the persistence of the decrement for several days suggests that this modulation was not caused by an acute exercise-related physiological or biochemical alteration in the motor cortex (so-called central fatigue), but more likely represents a modulation that may be partly linked to the muscle soreness.
[ "eccentric", "exercise", "neuromuscular", "doms", "central fatigue" ]
[ "P", "P", "P", "P", "P" ]
Ann_Surg_Oncol-3-1-2039838
A Comparison Between Radioimmunotherapy and Hyperthermic Intraperitoneal Chemotherapy for the Treatment of Peritoneal Carcinomatosis of Colonic Origin in Rats
Background Cytoreductive surgery (CS) followed by heated intraperitoneal chemotherapy (HIPEC) is considered the standard of care for the treatment of patients with peritoneal carcinomatosis (PC) of colorectal cancer (CRC). These surgical procedures result in a median survival of 2 years, at the cost of considerable morbidity and mortality. In preclinical studies, radioimmunotherapy (RIT) improved survival after CS in a model of induced PC of colonic origin. In the present studies we aimed to compare the efficacy and toxicity of CS followed by adjuvant RIT in experimental PC with the standard of care, HIPEC. Peritoneal carcinomatosis (PC) frequently is an end stage of colorectal cancer (CRC), occurring either synchronously or metachronously in 5–50% of patients.1 If untreated, patients suffering from PC have a median survival of only 6 months.2 Survival is significantly improved by radical surgical debulking procedures (cytoreduction) combined with intraperitoneal chemotherapy, applied under either normothermic or hyperthermic (HIPEC) conditions.3–7 The median survival after cytoreductive surgery and HIPEC is 13–34 months,4,8 and the 5-year survival rate is 19–27%, at the cost of considerable morbidity and mortality rates of up to 23% and 4%, respectively.6,9 In the latest reported clinical trial on adjuvant RIT in the setting of colon cancer, Liersch et al. reported results of a Phase II trial with the 131I-labeled anti-CEA antibody labetuzumab administered to patients after complete resection of colorectal liver metastases. This study, in which RIT was applied as an adjuvant to complete resection, resulted in a promising 5-year survival rate of 51.5%.10 Radioimmunotherapy using radiolabeled monoclonal antibodies directed against tumor-associated antigens may therefore be an attractive anticancer therapy in patients with small-volume disease. We therefore have studied the application of RIT as adjuvant therapy following cytoreductive surgery (CS) in the setting of PC. In previous studies of PC of CRC in a rat model, we showed that RIT could be an effective adjuvant treatment after CS. The efficacy of adjuvant RIT in combination with CS was investigated and compared with no treatment, CS only, and RIT only. The results of this study showed a significantly improved survival of animals treated with CS followed by RIT (median 88 days) compared with those treated with CS only (median 51 days) or RIT only (median 61.1 days).11 Encouraged by the increase in survival achieved with low-dose RIT and its concomitant low toxicity, we now aimed to compare the efficacy of this treatment with that of today’s standard of care, HIPEC,12,13 in a preclinical setting. MATERIALS AND METHODS Animal Model of Peritoneal Carcinomatosis WAG/Rij rats (10–12 weeks old, body weight 240–290 g, Harlan Horst, The Netherlands) were housed under nonsterile standard conditions (temperature, 20–24°C; relative humidity, 50–60%; 12-h light/dark cycle) in filter-topped cages (two rats per cage), with free access to food (Ssniff, Bio Services Uden, The Netherlands) and water. Rats were accustomed to laboratory conditions for at least 1 week before experimental use.
Peritoneal carcinomatosis was induced by intraperitoneal inoculation of 2.0 × 106 CC-531 colon cancer cells, as described previously.14 All experiments were approved by the local Animal Welfare Committee of the Radboud University Nijmegen and were carried out in accordance with the Dutch Animal Welfare Act of 1997. Operative Procedure Prior to the laparotomy, all rats were given 10 mL of saline to prevent hypovolemia. Surgical procedures were performed under general anaesthesia using isoflurane 3% in O2/N2O (1:1). Thirty minutes prior to surgery, and once daily until the third postoperative day, rats were given buprenorphine (5 μg, 0.1 mL/rat/day) for analgesia. During the operation, rats were placed on a warmed mattress to limit body heat loss. All rats underwent a midline laparotomy. After opening the abdomen, the extent of intraperitoneal tumor growth was scored semiquantitatively in all four quadrants of the abdomen: 0 (no macroscopic tumor), 1 (little; located at 1–2 sites, diameter 1–2 mm), 2 (moderate; located at 1–2 sites, diameter 2–5 mm), or 3 (abundant; located at multiple sites and/or diameter >5 mm). The sum of the tumor scores of all sites represented the peritoneal cancer index (PCI).11 Subsequently, CS, including a routine omentectomy, was performed in all rats. Irresectable tumor deposits were cauterized using an electrocoagulation device. After completion of the surgical cytoreduction, the abdominal wall was closed in two layers, using continuous Vicryl 3/0 sutures for the muscular component and iron wound clips for the skin, in animals treated with CS only or CS + RIT. Monoclonal Antibody, Radiolabeling, and RIT The murine MG1 monoclonal antibody (MAb), an anti-CC531 IgG2a antibody that recognizes an 80 kDa cell surface antigen and localizes preferentially in tumors when injected into rats bearing CC-531 tumors,15 was purchased from Antibodies for Research Applications BV (Gouda, The Netherlands). Labeling of the antibody with 177Lu was carried out as previously described.11 In brief, the MAb was conjugated with 2-(4-isothiocyanatobenzyl)-diethylenetriaminepenta-acetic acid (ITC-DTPA) (Macrocyclics, Dallas, TX), subsequently labeled with 177Lu (IDB Holland, Baarle-Nassau, The Netherlands), and purified by gel filtration on a PD10 column (Amersham, Pharmacia Biotech, Maarsen, The Netherlands). The purified 177Lu-MG1 preparation was diluted in PBS with 0.5% BSA for injection; the specific activity of the administered 177Lu-MG1 preparation was 0.4 MBq/μg. The labeling procedure using 177Lu was performed under strict metal-free conditions. RIT (185 μg MG1 per rat, radiolabeled with 74 MBq 177Lu in 3.0 mL) was injected intraperitoneally immediately after surgery, as this had been determined to be the optimal time for adjuvant administration.16 Mitomycin-C Mitomycin-C (MMC) was obtained from Nycomed Christiaens BV (Breda, The Netherlands) as a powder in a glass vial (40 mg/vial). Immediately before use, MMC was dissolved in 0.9% sodium chloride to the appropriate concentrations. HIPEC Procedure Following CS, while the abdomen was still exposed, two multiperforated catheters (Argyle, Sherwood Medical, Ireland) were inserted laterally through the abdominal wall and subsequently fixed in the abdominal cavity. The inflow drain was placed in the right paracolic gutter, the outflow drain in the left subdiaphragmatic space.
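The scoring and dose arithmetic described above lend themselves to a short worked sketch. The site labels below follow the sites reported in Table 1, the scores are illustrative, and the final line anticipates the perfusate composition described in the next section; none of this is the authors' code.

```python
# Worked sketch of the semiquantitative PCI scoring and dose arithmetic.
tumor_scores = {
    "greater_omentum": 2, "liver_hilum": 1, "perisplenic": 0,
    "mesentery": 1, "gonadal_fatpads": 0, "diaphragm": 0,
    "parietal_peritoneum": 1,
}
pci = sum(tumor_scores.values())      # PCI = sum of per-site scores
print(f"PCI = {pci}")                 # 5, matching the median reported below

# 185 ug MG1 at a specific activity of 0.4 MBq/ug gives the administered
# activity of 74 MBq per rat, as stated in the text.
print(f"activity = {185 * 0.4:.0f} MBq")

# Perfusate used later for HIPEC: 4 mg MMC dissolved in 250 mL saline.
print(f"MMC concentration = {4 / 0.250:.0f} mg/L")  # 16 mg/L (Fig. 1 legend)
```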
The intraperitoneal temperature was monitored with an intra-abdominal thermometer (PTFE-insulated thermocouple, VWR International, Amsterdam, The Netherlands) at the site that generally carried the highest tumor load (the omentum). In addition, a thermometer was placed inside the rectum. After placement of the catheters, the abdominal wall was closed using a continuous suture (Ethilon 3.0, Johnson & Johnson, Ethicon) (Fig. 1). During the HIPEC procedure, rats were removed from the warmed mattress to prevent general hyperthermia. The perfusion system was filled with 250 mL saline containing 4 mg MMC (Mitomycin-C Kyowa, Christiaens). The perfusate was heated in a tube coil using a thermostatically regulated water bath set to 48°C and infused into the peritoneal cavity by a roller pump (Ismatec IPS-8, Ismatec SA, Glattbrugg, Switzerland) for 60 minutes at 10 mL/min. The abdominal inflow temperature was set at 44°C. In order to achieve a uniform heat distribution, gentle massage of the abdomen was applied throughout the HIPEC procedure. After completion of the perfusion, the abdominal cavity was flushed with warmed (37°C) saline for 10 minutes. The abdomen was opened again to remove the catheters and was subsequently closed as described above. FIG. 1. HIPEC perfusion system; MMC, Mitomycin-C Kyowa, 16 mg/L perfusate. Adapted from Ref. 19; reproduced with permission. Intraperitoneal Distribution of MMC and Dose Determination Prior to the therapy experiment with HIPEC, we investigated the intraperitoneal distribution of the perfusion fluid using a methylene blue-stained perfusate. The perfusate was administered in the same fashion as in the therapeutic experiment. After completion, the abdominal cavity was inspected for the presence of blue dye in all quadrants, on both the parietal and the visceral peritoneum of the intra-abdominal organs and the diaphragm. Subsequently, a study to determine the dose of MMC resulting in acceptable toxicity was performed in nine animals (three animals per group). Animals underwent a laparotomy, including an omentectomy and complete bowel inspection, followed by heated perfusion of the abdominal cavity with MMC at 4 mg/L or 16 mg/L. Control rats underwent laparotomy and omentectomy only. Body weight and physical condition were monitored for 6 days following the procedure to assess treatment-related toxicity. Treatment Efficacy Seven days after intraperitoneal tumor induction with 2.0 × 106 CC-531 tumor cells, 45 rats (15 per treatment group) were randomly assigned to undergo CS only, CS + RIT, or CS + HIPEC. The operative procedures and the adjuvant therapies were performed as described above. Toxicity of the treatment was determined clinically and by weighing the rats. Body weight was expressed relative to the body weight on the day of surgery. Survival was scored, and at autopsy the extent of tumor growth was determined. Follow-Up The primary endpoint was 16-week survival. During the immediate postoperative period, the general condition was monitored and the body weight was measured daily for the first 2 weeks. When the humane endpoint (HEP) was reached (signs of massive hemorrhagic ascites, physical inactivity, or signs of intra-abdominal tumor growth with invalidating consequences), rats were killed by O2/CO2 administration and immediately dissected.
The HEP was determined by an experienced biotechnician who was blinded to the therapeutic regimen. At the time of the HEP, rats were generally lethargic and showed signs of advanced PC, such as the presence of ascites. At dissection, the intraperitoneal tumor growth was scored as described above. At 16 weeks postoperatively, the study was terminated and the remaining rats were euthanized and dissected. If macroscopic tumor was absent, all relevant organs, including the greater omentum, the mesentery, and the diaphragm, were removed for histopathological staining to determine tumor presence microscopically. Sections were stained using hematoxylin & eosin (H&E) and/or immunohistochemical staining using the murine MG1 antibody in combination with an HRP-conjugated horse-anti-mouse IgG antibody (Vector Laboratories Inc., Burlingame, CA, USA). Statistical Analysis Statistical analysis was performed using SPSS (Chicago, IL) software and GraphPad Prism (GraphPad Software Inc., San Diego, CA). Comparison of dichotomous values was done using the chi-square or Fisher’s exact test. Continuous data were compared using two-way ANOVA. Survival was analyzed using Kaplan–Meier curves and compared by means of the log-rank test. Bonferroni post-testing was applied to correct for multiple groups. All tests were two-sided; the level of statistical significance was set at a P value of <.05. RESULTS Intraperitoneal Distribution of HIPEC and Dose Determination The perfusate, administered according to the procedure described above, showed a distribution pattern covering all quadrants, including the diaphragm bilaterally and the mesenterial root (Fig. 2). FIG. 2. Intraperitoneal distribution of methylene blue-stained perfusate. The applied dose of 16 mg MMC/L resulted in a maximum mean weight loss of 13.7 ± 2.9% at 4 days postoperatively. In addition, during the first 3 days following the heated perfusion the animals were lethargic, and they suffered from diarrhea from day 2 until day 4 postoperatively. In contrast, the maximum weight loss was 8.3 ± 2.9% at day 3 in the 4 mg/L group and 7.5 ± 2.3% at day 3 in the control group (Fig. 3). None of the animals died during the immediate postoperative period. Based on these observations, MMC at 16 mg/L, administered for 60 minutes at the given temperature, was considered the maximal tolerable dose for the HIPEC procedure. FIG. 3. The relative body weight of Wag/Rij rats after exploratory laparotomy (control) and heated intraperitoneal chemotherapy (HIPEC) given immediately postoperatively at different doses. Data represent means ± standard error of the mean (SEM). Operative Procedure Preoperative body weight did not differ between groups, P = .52 (Table 1). At laparotomy, tumor nodules (1–3 mm diameter) were present in the omentum, liver hilum, mesentery, and gonadal fatpads. The median PCI score at the time of surgery was 5 (range 4–8) and was similar in all experimental groups. After surgical cytoreduction, residual disease remained in situ after cauterization in 7 rats, equally distributed among the groups (P = .84). The surgical procedures without adjuvant therapy took 20–30 minutes per animal.
TABLE 1. Treatment group characteristics (peritoneal cancer index, PCI) found during laparotomy before the administration of the adjuvant therapy

                                      CS             CS + HIPEC     CS + RIT
Preoperative body weight (g)          266 (251–287)  264 (245–285)  262 (244–276)
Tumor score per site
  Greater omentum                     2 (2–3)        2 (1–2)        2 (1–2)
  Liver hilum                         1 (0–1)        1 (0–1)        1 (0–1)
  Perisplenic                         0 (0–1)        0 (0)          0 (0)
  Mesentery                           1 (0–2)        1 (0–2)        1 (0–2)
  Gonadal fatpads                     0 (0–2)        0 (0–2)        1 (0–2)
  Diaphragm                           0 (0–1)        0 (1)          0 (0–1)
  Parietal peritoneum                 1 (0–1)        1 (0–1)        1 (0–1)
  Total (PCI)                         5 (4–8)        5 (4–6)        5 (4–8)
Resection macroscopically complete
  Yes                                 12             13             13
  No                                  3              2              2

CS, cytoreductive surgery; HIPEC, heated intraperitoneal chemotherapy; RIT, radioimmunotherapy. PCI and tumor scores are expressed as median (range).

There was no intraoperative mortality. However, one rat in the CS + HIPEC group and one rat in the CS + RIT group were euthanized after 2 and 9 days, respectively. The animal in the CS + HIPEC group showed massive weight loss as a result of bowel necrosis and subsequent perforation; the cause of death of the animal in the CS + RIT group remained unclear. The median intra-abdominal temperature during the HIPEC procedure, measured at the site where the greater omentum had been removed, was 41.0°C (range 40.4–41.6°C). In contrast, the median rectal temperature was 34.6°C (range 34.1–34.8°C) (Fig. 4). FIG. 4. The recorded intra-abdominal and rectal temperature during the HIPEC procedure. Data represent means ± standard error of the mean (SEM). CS and CS + RIT were well tolerated, whereas animals in the CS + HIPEC group showed signs of physical discomfort; animals in the latter group were generally lethargic and showed piloerection two days following the procedure. In addition, these animals all suffered from diarrhea up to 4 days after the HIPEC procedure. The relative body weight of the various treatment groups is depicted in Fig. 5. Maximum body weight loss after CS or CS + RIT was similar (7.3 ± 2.6% vs 9.3 ± 1.8% at 4 days postoperatively, P > .05). Rats that received adjuvant HIPEC had a maximum body weight loss of 12.3 ± 1.7%, which was significantly higher than that after CS alone (P < .001) or CS + RIT (P < .001). Rats generally gained weight from the fifth postoperative day onward. In the HIPEC group, however, the postoperative mean body weight remained significantly lower than that of the animals in the CS group until 5 weeks postoperatively. FIG. 5. The relative body weight of Wag/Rij rats with small peritoneal CC-531 tumors in the first 14 days after cytoreductive surgery (CS) only, CS + radioimmunotherapy given immediately postoperatively (RIT), or CS + heated intraperitoneal chemotherapy given immediately postoperatively (HIPEC). Data represent means ± standard error of the mean (SEM). Treatment Efficacy During the experiment, 29 animals were euthanized because of massive amounts of ascites resulting from intraperitoneal tumor growth. The mean amount of ascites at the humane endpoint was 31 ± 22.6 mL, 26.5 ± 23.8 mL, and 26.4 ± 22.6 mL in the CS, RIT, and HIPEC groups, respectively (P = .82). At the time of death, the mean PCI in the CS, CS + HIPEC, and CS + RIT groups was 18 (range 9–22), 12 (range 5–15), and 18 (range 16–19), respectively, with significant differences between the CS + HIPEC group and both other treatment groups (P < .001 for both comparisons). The survival curves of the various treatment groups are depicted in Fig. 6. Median survival of the rats treated with CS only was 57 days (range 36–112).
Adjuvant HIPEC resulted in a median survival of 76 days (range 33–112; P = .17 compared with CS only). Median survival of the rats treated with CS followed by adjuvant RIT was improved to 97 days (range 49–112; P < .001 compared with CS only). When compared with CS followed by adjuvant HIPEC, the adjuvant administration of RIT did not result in a significantly improved survival (P = .33). FIG. 6. Kaplan–Meier survival curves for Wag/Rij rats with small peritoneal CC-531 tumors after cytoreductive surgery (CS), CS + RIT (RIT), or CS + HIPEC (HIPEC). At the endpoint of the study, 16 weeks after CS, 14 animals (two in the CS group, five in the CS + HIPEC group, and seven in the CS + RIT group) were still alive, without any physical signs of intraperitoneal tumor growth. Of these 14 animals, one animal in the CS + HIPEC group and three animals in the CS + RIT group showed macroscopic evidence of tumor at dissection. In the remaining 10 animals (two in the CS group, four in the HIPEC group, and four in the RIT group) no tumor was found, even microscopically. DISCUSSION In the present study, adjuvant radioimmunotherapy after cytoreductive surgery for peritoneal carcinomatosis of colorectal origin in rats significantly improved survival, whereas HIPEC did not. In addition, the application of HIPEC was associated with considerably more toxicity than RIT. The treatment of peritoneal carcinomatosis was studied with CC-531 syngeneic tumors growing intraperitoneally in Wag/Rij rats. This model is highly reproducible, and the growth and distribution pattern throughout the abdominal cavity are similar to those of human PC.14 Cytoreduction performed 7 days after tumor induction resulted in minimal residual disease (<1 mm). In this setting, both HIPEC and RIT yield maximum therapeutic efficacy.12,13,17 The MG1 MAb preferentially localizes in CC-531 tumors,11 with only minor localization in thymus, lymph node, salivary gland tissue, and skin.15 177Lu was selected as the radionuclide for RIT of minimal residual disease because of its high tumor uptake and adequate physical properties, including a medium-energy β-emission with a maximum penetration range in tissue of 2.5 mm. In our previous studies we used the 177Lu-MG1 radionuclide-antibody combination, which proved highly effective in improving survival in the PC model described above.11,16 Moreover, in previous experiments we have shown that radioimmunotherapy with a radiolabeled irrelevant antibody is far less effective than therapy with a radiolabeled specific antibody.18 HIPEC has been studied in only a few preclinical studies.19–22 These studies showed that its use was associated with a decreased tumor load compared with control groups.19 However, in these studies HIPEC was also associated with considerable toxicity, indicated by lethargy, marked body weight loss, and bacterial translocation.23 These toxicity results corroborate the findings of our study and mimic the clinical effects of HIPEC.
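The survival comparisons reported above (Kaplan–Meier estimation, log-rank testing, Bonferroni correction for the three-group design) can be sketched in a few lines. The lifelines package is an assumption about tooling (the authors used SPSS and GraphPad Prism), and the simulated survival times are illustrative, not the study data.

```python
# Minimal sketch of the survival analysis: Kaplan-Meier + log-rank with a
# Bonferroni correction for three pairwise group comparisons.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
cs = rng.integers(36, 113, 15).astype(float)    # days, CS-only group
rit = rng.integers(49, 113, 15).astype(float)   # days, CS + RIT group
ev_cs, ev_rit = np.ones(15), np.ones(15)        # 1 = humane endpoint reached

kmf = KaplanMeierFitter()
kmf.fit(cs, event_observed=ev_cs, label="CS")
print("median survival (CS):", kmf.median_survival_time_)

res = logrank_test(cs, rit, event_observed_A=ev_cs, event_observed_B=ev_rit)
print("Bonferroni-corrected P:", min(1.0, res.p_value * 3))
```

Animals still alive at the 16-week endpoint would enter such an analysis as censored observations (event indicator 0) rather than events.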
Clinical studies with postoperative intraperitoneal chemotherapy are associated with high mortality and morbidity.12 Of 16 published reports on the use of HIPEC in the clinical setting, 13 described the administration of MMC.12 In vitro, MMC has also been shown to inhibit growth of CC531 cells at a concentration a thousand-fold lower than that used in the present experiment.24 The applied dose of 16 mg/L MMC in our study is within the range of doses applied in clinical practice (5–20 mg/L25) and is higher than the doses described in other preclinical studies (2.25 mg/L19 and 4 mg/L22). In addition, in the present study HIPEC was applied as an adjuvant treatment to cytoreduction, whereas Pelz et al.19 applied HIPEC as monotherapy, with subsequent killing of the animals after only 10 days. The intraperitoneal distribution of MMC during the perfusion was studied before the start of the actual experiment and showed equal distribution among all quadrants. The effect of the perfusion technique on distribution differences, and any associated differences in survival, has never been studied clinically. Glehen and colleagues performed a large clinical study in 506 patients treated with either the open or the closed abdomen perfusion technique and reported no differences in survival between the two techniques.6 This observation was confirmed by the study of Sugarbaker and colleagues.26 RIT with 74 MBq of the 177Lu-MG1 radionuclide-antibody in 3 mL has previously been shown to be an effective treatment for experimentally induced PC of colonic origin when administered intraperitoneally.11,16 The biodistribution of intraperitoneally injected 111In-labeled MG1 was studied by Koppe et al. and showed preferential uptake of the radiolabeled antibody in the tumor.11 The extraperitoneal temperature of 48°C, necessary to obtain inflow temperatures of 44°C, would not have had a negative influence on the cytotoxicity of MMC, since Ahrar et al.27 showed that only temperatures exceeding 60°C decreased its cytotoxic effect. On the other hand, one has to bear in mind that the additive effect of hyperthermia in HIPEC in the clinical setting has not yet been proven in a randomized trial. Elias et al. and Glehen et al. both reported on the use of HIPEC and of early postoperative intraperitoneal chemotherapy (EPIC), given without hyperthermia from day 1 to day 5 after surgery. Elias et al. found no significant difference in survival. Similarly, in the multicenter study of 506 patients, of whom 53.3% and 24.3% underwent HIPEC and EPIC alone, respectively, no significant difference in survival was found between the two treatment groups.6 The duration of the heated perfusion, 60 min in this experiment, is in concordance with the recent consensus statement regarding Cytoreductive Surgery and Hyperthermic Intraperitoneal Chemotherapy in the Management of Peritoneal Surface Malignancies of Colonic Origin, which states that the perfusion should last 60–120 minutes.13 From these data on the dose of MMC, the perfusion time, and the temperature used, together with our results showing an antitumor effect in the model of induced PC from CC531 cells (a significantly lower PCI at the HEP in favor of CS + HIPEC), we conclude that the HIPEC regimen used in our study was able to induce regression of PC of colonic origin.
As discussed in the introduction, the Phase II trial of Liersch et al., in which adjuvant RIT with the 131I-labeled anti-CEA antibody labetuzumab was given after complete resection of colorectal liver metastases, resulted in a promising 5-year survival rate of 51.5%.10 These results are in accordance with the conclusion of a recent review on the use of RIT to treat colon cancer.17 In this review, the authors state that the time may have come for clinical trials in which RIT is added to standard regimens, to establish the place of this treatment modality as an adjuvant treatment after CS. To our knowledge, the present study is the first to compare the use of RIT and HIPEC as adjuvants to CS for the treatment of PC of colorectal cancer. Our preclinical studies indicate that the application of RIT immediately following CS can improve survival in rats with PC of CRC. Moreover, from the present study we conclude that adjuvant RIT is an effective treatment with low toxicity. When compared with today’s standard of care, HIPEC, RIT was at least as effective. RIT consisted of an activity dose of 74 MBq of 177Lu-labeled MG1 per rat, resulting in only minor toxicity, whereas the theoretical MTD of 177Lu-labeled antibodies in 250 g rats could be approximately 150 MBq.11 There are, however, some challenges to the clinical application of adjuvant RIT. For example, after cytoreductive surgery, patients are transferred to the intensive care unit, and optimal patient care has to be balanced against radiation safety issues for the medical staff. Previously, we reported on the optimal time interval between CS and RIT.16 In that study, we showed that RIT should be administered as soon as possible after CS, with a window of opportunity for RIT administration of up to 4 days after surgery. It can therefore be envisioned that, for radiation safety reasons, the therapy should not be given before the patient has been discharged from the ICU and the abdominal drains have been removed. Our study therefore justifies the consideration of intraperitoneal radioimmunotherapy after cytoreductive surgery in cases of peritoneal carcinomatosis of colorectal cancer. In clinical studies, this approach should be compared with HIPEC. CONCLUSION This study showed that RIT adjuvant to CS significantly improved survival compared with CS alone in a rat model of PC of CRC, whereas the contemporary gold standard, HIPEC, did not produce a significant improvement in survival. Moreover, the survival benefit of RIT was achieved with less treatment-related toxicity than HIPEC. Adjuvant radioimmunotherapy might therefore be an alternative adjuvant treatment after cytoreductive surgery for PC of colorectal origin, to be evaluated in a clinical trial setting.
[ "radioimmunotherapy", "peritoneal carcinomatosis", "cytoreductive surgery", "heated intraperitoneal chemotherapy", "adjuvant", "colon cancer" ]
[ "P", "P", "P", "P", "P", "P" ]
Int_J_Colorectal_Dis-4-1-2386750
Surgical and pathological outcomes of laparoscopic surgery for transverse colon cancer
Purpose Several multi-institutional prospective randomized trials have demonstrated short-term benefits of laparoscopy, and the laparoscopic approach is now accepted as an alternative to open surgery for colon cancer. However, the transverse colon was excluded from prior trials, so it has not been determined whether laparoscopy can be used in the setting of transverse colon cancer. This study evaluated the peri-operative clinical outcomes and the oncological quality, as assessed by pathologic outcomes, of laparoscopic surgery for transverse colon cancer. Introduction Since its first report [1], laparoscopic colon surgery has been controversial with regard to its use for colorectal cancer. Several prospective randomized trials, including the COLOR and CLASSIC studies, have demonstrated that laparoscopic-assisted surgery for colorectal cancer resulted in a shorter hospital stay, reduced analgesic use, and earlier recovery of bowel movement [2–6]. Moreover, the COST study established the long-term oncological safety of laparoscopic-assisted surgery for colon cancer, and currently the laparoscopic approach is accepted as an alternative to open surgery for colon cancer [7]. However, transverse colon cancer was excluded from prior randomized controlled trials. The reasons for its exclusion include the difficulty in deciding the appropriate operative procedure and the extent of lymph node dissection, as well as technical difficulties with the laparoscopic identification, ligation, and lymph node dissection around the middle colic vessels. Therefore, there continues to be debate on whether to use laparoscopic surgery for transverse colon cancer. Materials and methods Between August 2004 and November 2007, the medical records of all patients who underwent laparoscopic surgery for colorectal cancer were reviewed; laparoscopic colorectal cancer resection was started in August 2004 in our clinic. Pathologic confirmation, colonoscopy, barium enema, computed tomography (CT), ultrasonography, and posteroanterior chest radiography were performed preoperatively for diagnosis in all patients. All patients with colorectal adenocarcinoma admitted to our clinic were considered for laparoscopic surgery. Exclusion criteria for laparoscopic surgery were as follows: (1) patients with colorectal cancer obstruction and failure of stent insertion, (2) patients with colorectal cancer perforation requiring emergency surgery, (3) patients with a T4 colorectal cancer lesion that could not be resected laparoscopically, and (4) patients with compromised cardio-pulmonary function in whom pneumoperitoneum under general anesthesia was contraindicated. In this study, transverse colon cancer was defined as a lesion between the hepatic flexure and the splenic flexure of the colon, requiring ligation of the middle colic vessels at their origin. CT or barium enema was performed preoperatively in all patients with colon cancer for localization of the tumor. If radiological localization was unclear, preoperative colonoscopic Indian ink tattooing or endoscopic clipping was performed. The procedure used for transverse colon cancer was chosen based on the location of the tumor, as encoded in the sketch below. A tumor located at the hepatic flexure or within 10 cm distal to the hepatic flexure was treated by an extended right hemicolectomy, and a tumor located at the splenic flexure or within 10 cm proximal to the splenic flexure was treated by an extended left hemicolectomy. A tumor located between these two regions was treated by a transverse colectomy.
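The location rule just described can be written as a tiny decision function. The sketch below is illustrative only; the inputs (distance from the hepatic flexure and total transverse colon length, in cm) are assumed variables introduced here, not measurements defined by the authors.

```python
# Sketch of the operative-procedure rule: tumor position along the
# transverse colon determines the extent of resection.
def choose_procedure(distance_from_hepatic_cm: float,
                     transverse_length_cm: float) -> str:
    if distance_from_hepatic_cm <= 10:
        return "extended right hemicolectomy"
    if transverse_length_cm - distance_from_hepatic_cm <= 10:
        return "extended left hemicolectomy"
    return "transverse colectomy"

print(choose_procedure(5, 40))    # near the hepatic flexure
print(choose_procedure(20, 40))   # mid transverse colon
print(choose_procedure(35, 40))   # near the splenic flexure
```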
An extended right hemicolectomy was defined as lymphadenectomy with simultaneous ligation of the ileocolic, right colic, and middle colic vessels at their origins; an extended left hemicolectomy was defined as lymphadenectomy with simultaneous ligation of the left colic and middle colic vessels at their origins. A transverse colectomy was defined as lymphadenectomy with simultaneous ligation of the middle colic vessels at their origins. Extra-corporeal anastomosis was performed in all cases of laparoscopic surgery for transverse colon cancer. All patients started a diet after passing flatus. To evaluate the postoperative surgical outcomes and the oncological quality of laparoscopic surgery for transverse colon cancer, we compared age, gender, body mass index (BMI), operating time, blood loss, time to passing flatus, time to start of diet, hospital stay, surgical morbidity, surgical mortality, conversion to open surgery, tumor size, distal resection margin, proximal resection margin, radial margin, and number of harvested lymph nodes between the transverse colon cancer group (TCC) and the other-site colon cancer group (OSCC). In this study, a single surgeon (YS Lee) performed all operations. Comparisons between the two groups were made by applying the independent-samples t test and the χ2 test. Differences were considered significant at P < 0.05. Results Postoperative clinical outcomes Three hundred and twelve patients underwent colorectal cancer resection in our clinic during this period. Among them, 44 patients underwent conventional open surgery according to the exclusion criteria (five cases of stent failure with colon obstruction, five cases of emergency operation, 12 cases of far-advanced tumor, 12 cases of operation for recurrent cancer, seven cases of patient refusal, and three cases of old age with high risk for pneumoperitoneum). There was no transverse colon cancer in the conventional open surgery group. A total of 268 patients underwent laparoscopic resection for colorectal cancer. Of these, 140 patients who underwent laparoscopic resection for rectal cancer were excluded from this study. Finally, 128 patients who underwent laparoscopic resection for colon cancer were enrolled. Of the 128 patients, 34 had transverse colon cancer, accounting for 10.8% of the total colorectal cancers, and 94 had colon cancer at other sites. In the TCC group, extended right hemicolectomy was performed in 18 cases, transverse colectomy in eight cases, and extended left hemicolectomy in eight cases. In the OSCC group, right hemicolectomy was performed in 38 cases, left hemicolectomy in five cases, and anterior resection in 51 cases. There were no statistical differences in age, gender distribution, BMI, operating time, intraoperative blood loss, time to flatus, time to start of diet, or hospital stay between patients with TCC and OSCC (Table 1). Four patients in the OSCC group had a major complication: one had a colon injury and three had anastomosis leaks. A simple laparoscopic closure was performed in the case of colon injury, and two of the patients with anastomosis leaks underwent re-operation. Minor complications, which occurred in four cases in the OSCC group (one case of ileus, two cases of minor port-site infection, and one case of urinary retention) and two cases in the TCC group (one case of ileus and one case of atelectasis), were all treated successfully by conservative means.
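The two-group testing strategy described above maps directly onto standard library calls. A minimal sketch using scipy (an assumption about tooling; the paper does not name its statistics software) with simulated values rather than the study data:

```python
# Independent-samples t test for continuous outcomes and chi-square for
# categorical ones, as in the comparisons between TCC and OSCC groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
op_time_tcc = rng.normal(211.1, 52.2, 34)    # operating time, TCC (min)
op_time_oscc = rng.normal(220.4, 94.3, 94)   # operating time, OSCC (min)
t, p = stats.ttest_ind(op_time_tcc, op_time_oscc)
print(f"t test: P = {p:.2f}")                # NS expected (P >= 0.05)

# Chi-square on a 2x2 table, e.g. sex distribution (M, F) by group,
# using the counts reported in Table 1.
table = np.array([[15, 19],    # TCC:  M, F
                  [46, 48]])   # OSCC: M, F
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square: P = {p:.2f}")
```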
One case in the TCC group and three cases in the OSCC group were converted to open surgery; all conversions were due to tumor-related factors (a T4 lesion or a huge tumor). There was no surgical mortality in this study.

Table 1 Clinical characteristics of patients

                              TCC (N = 34)              OSCC (N = 94)             P value
Age (years)                   64.1 ± 11.3               62.5 ± 12.1               NS
Sex (M:F)                     15:19                     46:48                     NS
Body mass index (kg/m2)       23.5 ± 3.0                24.0 ± 3.1                NS
Operating time (min)          211.1 ± 52.2 (140–360)a   220.4 ± 94.3 (60–620)a    NS
Blood loss (ml)               100.0 ± 83.8 (0–400)a     114.2 ± 145.2 (0–700)a    NS
Time to pass flatus (days)    2.8 ± 0.8                 2.6 ± 1.0                 NS
Diet start (days)             4.2 ± 1.8                 4.2 ± 2.8                 NS
Hospital stay (days)          11.4 ± 4.1                11.2 ± 6.0                NS

TCC transverse colon cancer, OSCC other site colon cancer, NS not significant
a Values in parentheses are ranges.

Oncological quality by pathologic outcomes There were no statistical differences in tumor size, proximal resection margin, distal resection margin, radial margin, or number of harvested lymph nodes between patients with TCC and OSCC (Table 2).

Table 2 Pathological outcomes of patients

                              TCC (N = 34)              OSCC (N = 94)             P value
Tumor size (cm)               5.2 ± 2.5 (0.7–11)a       4.6 ± 2.4 (0.1–14)a       NS
PRM (cm)                      19.5 ± 10.2 (10.0–25.0)a  15.3 ± 11.0 (4.2–25.0)a   NS
DRM (cm)                      13.9 ± 6.9 (6.0–25.0)a    11.6 ± 5.3 (3.0–32.0)a    NS
No. of lymph nodes            24.4 ± 11.7 (3–59)a       21.1 ± 8.4 (3–59)a        NS
Radial margin (cm)            0.9 ± 0.8 (0–3.0)a        1.0 ± 0.9 (0–3.5)a        NS

TCC transverse colon cancer, OSCC other site colon cancer, PRM proximal resection margin, DRM distal resection margin, NS not significant
a Values in parentheses are ranges.

Discussion Since laparoscopic colon resection was first reported in 1991 [1], laparoscopic surgery has been widely employed for various benign colorectal diseases, such as benign mass, diverticular disease, inflammatory bowel disease, and rectal prolapse, and now increasingly for colorectal malignant disease. Evidence of the safety and efficacy of laparoscopic surgery for colorectal cancer has been reported from several prospective randomized controlled studies and meta-analyses of several trials, which favored laparoscopic surgery for colorectal cancer over conventional open surgery because of its many short-term benefits, such as shorter hospital stay, reduced use of analgesics, and earlier recovery of bowel movements [2–6, 8–13]. Eventually, the long-term oncological safety of laparoscopic colon cancer resection was established by the COST study, and the laparoscopic approach was accepted as an alternative to open surgery for colon cancer [7]. Transverse colon cancer occurs in about 10% of cases of colorectal cancer, and it often presents a challenge for the choice of the best surgical procedure based on the location of the tumor and the extent of lymph node dissection. There can also be technical difficulties with laparoscopic identification, ligation, and lymph node dissection around the middle colic vessels, depending on the surgeon’s experience. For these reasons, transverse colon cancer was excluded from almost every prior prospective randomized trial, and there is continued debate over the validity of laparoscopic surgery for transverse colon cancer. The major controversy about laparoscopic surgery for transverse colon cancer lies in whether a sufficient extent of lymph node dissection around the middle colic artery is feasible laparoscopically.
As experience with laparoscopic surgery accumulates and surgical techniques and instruments develop, we consider that the extent of laparoscopic lymph node dissection for transverse colon cancer is no less than that of conventional lymph node dissection for transverse colon cancer. Some of the difficulties with laparoscopic surgery for transverse colon cancer that need to be resolved include the following. First, precise localization of carcinoma of the transverse colon is important, because the extent of resection and lymph node dissection depends on the location of the tumor in the transverse colon. Diminished tactile guidance can make localization of a small tumor in the transverse colon more difficult. There are a number of methods used to localize the tumor. Preoperative barium enema is useful for localization of large and advanced tumors, but radiological localization is inconclusive and difficult in early cancer. In such cases, colonoscopic tattooing with Indian ink or placement of endoscopic clips is very effective. When endoscopic clips are placed at the time of preoperative colonoscopy and fluoroscopy is used for localization, clip migration can be the main problem, and using fluoroscopy in the operating theater is troublesome and time consuming. However, in the case of hepatic flexure or splenic flexure colon cancer, by checking an X-ray just after placing the endoscopic clip, we could easily and precisely localize tumors at both flexures. Intraoperative colonoscopy can also be used for localization of the tumor, but it too is time consuming and, moreover, it can cause colonic insufflation that makes laparoscopic surgery difficult [14]. Properly placed tattoos are long lasting and can be placed at the time of diagnostic colonoscopy [15]. In the event of tattoo failure, one can easily use intraoperative colonoscopy for localization. In this study, we performed preoperative colonoscopic tattooing in three cases to localize mid-transverse colon cancer, colonoscopic clipping in three cases of hepatic flexure colon cancer, and barium enema in the other cases. There were no cases of failed localization. We think that colonoscopic tattooing is effective for small mid-transverse colon cancer, and endoscopic clipping is effective for small hepatic or splenic flexure colon cancer. Second is the laparoscopic identification, ligation, and lymph node dissection around the middle colic vessels. Fujita et al. reported laparoscopic techniques that can be used for identification of the middle colic vessels. They suggested that the ventral aspect of the caudal portion of the superior mesenteric vein be exposed and the exposed vessel traced cephalad toward the caudal portion of the pancreas, with identification of the middle colic vessels [16]. Baca et al. introduced the ‘window technique’ for lymphadenectomy with simultaneous resection of the vascular stem [17]. Ichihara et al. introduced the technique of rotating the mesocolon to identify the middle colic artery [18]. The authors have identified the middle colic vessels using the surgical technique introduced by Fujita in most cases, and in slender patients we could directly identify the pulsating middle colic artery by stretching the transverse mesocolon through traction on both ends of the transverse colon by the first assistant. When identifying the middle colic vessels, excessive traction on the transverse mesocolon by the first assistant may cause tearing of the vein and bleeding.
Moreover, the length of the gastrocolic trunk of Henle is relatively short, and attempts to control bleeding from the gastrocolic trunk of Henle can injure the superior mesenteric vein [19]. It is especially important for a laparoscopic colorectal surgeon to recognize the variable drainage of the right colic and middle colic veins into the gastrocolic trunk of Henle and to have precise knowledge of the superior right colic vein anatomy. Third is the low incidence of transverse colon cancer. Consecutive cases are needed to overcome the learning curve; therefore, it might take surgeons a longer time to become experienced in the techniques used for laparoscopic transverse colon cancer resection. In this study, there were no significant differences between patients with TCC and OSCC in terms of operating time, blood loss, resection margins, and number of lymph nodes. Schlachta et al. reported that the operating time was longer in the TCC group than in the OSCC group, and that the number of harvested lymph nodes in the TCC group was greater than in the OSCC group, because of the extended resection and larger specimens as well as the need to attend to the middle colic vessels in patients with TCC [20]. However, in this study there was no statistical difference in operating time or number of harvested lymph nodes, nor in the surgical outcomes, between the two groups. We think that the operating time can be similar in the TCC and OSCC groups in experienced hands; in fact, the extent of lymph node dissection in transverse colon cancer is not much wider than that for colon cancer at other sites, which may explain why there were no statistical differences in pathological outcomes between the two groups in this study. There was one case of colon injury proximal to the anastomosis, caused by electrocautery during the operation, in the OSCC group; this was treated by laparoscopic simple closure on postoperative day 3. Three cases of anastomosis leak occurred in the OSCC group: one after right hemicolectomy and two after anterior resection. Anastomosis leak was diagnosed by clinical signs and symptoms: fecal or purulent discharge from the drain, fever, leukocytosis, and signs of peritoneal irritation. Radiologic study using a water-soluble dye was not performed in this study. Of the three cases, one patient with a leak after anterior resection was successfully treated conservatively, and the other two underwent laparoscopic re-operation. All other minor complications were successfully treated conservatively. Conversion to open surgery occurred in one case in the TCC group and in three cases in the OSCC group; this difference was not significant. Tumor-related factors were the cause of conversion to open surgery. The case in the TCC group was converted because of a T4 transverse colon lesion that invaded the anterior wall of the stomach. In the OSCC group, one case with a large tumor that was hard to handle laparoscopically and two cases of sigmoid colon cancer invading the uterus were converted to open surgery. However, T4 colon cancer is not a contraindication to laparoscopy if en bloc resection can be performed laparoscopically; in the cases converted to open surgery, laparoscopic en bloc resection was impossible. This study has some weak points. First, the number of patients with transverse colon cancer is small, making up 10% of the total colorectal cancer resections and 12% of the total laparoscopic colorectal resections in this study.
The second is that, although the data in this study were collected prospectively, they were not derived from a large case series, and the study was not a randomized controlled trial. The third is that the mean follow-up period was too short to evaluate the oncological outcomes (15.9 months; range 1–40 months). We think that large-scale prospective controlled trials and long-term analyses are mandatory to overcome these limitations and to confirm the oncological safety of laparoscopic transverse colon cancer surgery. We intend to report a long-term analysis after longer follow-up of a prospectively enlarged series. Conclusions The results of this study show no significant differences in surgical outcomes or in oncological quality, as assessed by pathologic outcomes, between the OSCC and TCC groups. Further large-scale prospective studies and long-term analyses of laparoscopic surgery for transverse colon cancer are mandatory to establish its oncological safety.
[ "transverse colon", "colon cancer", "laparoscopy" ]
[ "P", "P", "P" ]
Biochem_Biophys_Res_Commun-1-5-1899526
An outwardly rectifying anionic background current in atrial myocytes from the human heart
This report describes a hitherto unreported anionic background current from human atrial cardiomyocytes. Under whole-cell patch-clamp with anion-selective conditions, an outwardly rectifying anion current (IANION) was observed, which was larger with iodide than nitrate, and with nitrate than chloride, as charge carrier. In contrast with a previously identified background anionic current from small mammal cardiomyocytes, IANION was not augmented by the pyrethroid tefluthrin (10 μM); neither was it inhibited by hyperosmolar external solution nor by DIDS (200 μM); thus IANION was not due to basal activity of volume-sensitive anion channels. IANION was partially inhibited by the Cl− channel blockers NPPB (50 μM) and Gly H-101 (30 μM). Incorporation of IANION into a human atrial action potential (AP) simulation led to depression of the AP plateau, accompanied by alterations to plateau inward calcium current, and to AP shortening at 50% but not 90% of complete repolarization, demonstrating that IANION can influence the human atrial AP profile. The electrophysiological behaviour of cardiac myocytes from mammalian hearts is determined by the combined activity of a range of different cation and anion channel types. The reversal potential for chloride (Cl−) ions in the heart (ECl) lies between ∼−60 and −40 mV [1]. Negative to ECl, outward Cl− movement generates depolarizing ionic current, whilst positive to ECl, inward Cl− movement generates repolarizing ionic current. Therefore, the activation of Cl− channels can influence both the resting membrane potential and the duration of cardiac action potentials (APs) ([1–3] for reviews). Several different anion channel types have been identified that may contribute to cardiac physiology and pathophysiology [1–3]. Of the cardiac anion channel currents thus far identified, the three major types are: (i) a cystic fibrosis transmembrane conductance regulator (CFTR) current activated through cAMP-dependent phosphorylation (ICl,cAMP; e.g. [4–6]); (ii) a stretch- or swelling-activated Cl− current (ICl,Swell; e.g. [7–9]) and (iii) a Ca2+-activated Cl− current (ICl,Ca; e.g. [10–12]). Recently, an outwardly rectifying anionic background current (IAB) has been identified in cardiac myocytes from two commonly studied model species (rat and guinea-pig) using whole-cell patch-clamp measurements [13,14]. IAB differs from previously identified Cl− currents in its permeability sequence and in being insensitive to the stilbene disulphonate Cl− channel inhibitor DIDS, to cell swelling, and to intracellular Ca2+ and cAMP [13,14]. IAB can also be differentiated from other major cardiac anion currents in that it can be activated by the pyrethroid agent tefluthrin [14]. Anion substitution experiments have provided evidence that IAB can influence AP duration (APD) [13]. There is some disagreement as to whether or not a basally active anionic current exists in human atrium [15,16], and there is no information as to whether humans exhibit an IAB with the characteristics of that seen in small mammal hearts. Therefore, the present study was undertaken to determine whether or not IAB exists in adult human cardiac myocytes. The resulting findings indicate the presence in human atrial myocytes of an outwardly rectifying anionic background current (IANION).
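The ECl range quoted above follows directly from the Nernst equation, and a minimal worked example may help. The sketch below evaluates E_Cl = −(RT/F)·ln([Cl]o/[Cl]i) for a monovalent anion (z = −1) near room temperature; the concentrations are assumed illustrative values in the cardiac range, not those of the solutions in Table 1.

```python
# Nernst potential for Cl- with assumed concentrations.
import math

R, T, F = 8.314, 294.15, 96485.0     # J/(mol K), ~21 C, C/mol
cl_out, cl_in = 140.0, 30.0          # mM, assumed external / internal Cl-
e_cl = -(R * T / F) * math.log(cl_out / cl_in) * 1000.0   # mV
print(f"E_Cl = {e_cl:.1f} mV")       # ~ -39 mV, within the -60 to -40 range
```

Because physiological ECl sits well positive of the resting potential, even a small background anion conductance can shift both resting potential and repolarization, which is the premise developed in the rest of the paper.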
Notably, the IANION observed in this study has the potential to contribute to human atrial electrophysiology, but is distinct from both the IAB recorded previously from myocytes from small mammal hearts [13,14] and the outwardly rectifying stilbene disulphonate-sensitive anionic currents recorded previously from human atrium [17]. Methods Atrial myocyte isolation The study was approved by the local Central and South Bristol Research Ethics Committee and was conducted in accordance with the principles of the Declaration of Helsinki. Human right atrial appendages were obtained, with consent, from 32 patients (27 males, 5 females, average age 69.7 ± 1.7 years) undergoing coronary artery bypass surgery. Single human atrial myocytes were isolated from right atrial appendages by mechanical and enzymatic dispersion. Tissue samples were quickly immersed in cardioplegic solution (see Table 1; solution G, 100% O2, ice cold). The samples were chopped into small chunks and washed with an EGTA-containing solution (see Table 1; solution H) gassed with 100% O2 for 15 min at 37 °C. The chunks were then incubated in the same solution, from which EGTA was omitted and to which protease type XXIV (3 U/ml, Sigma) and collagenase type V (250 U/ml, Sigma) were added. The medium was continuously gassed with 100% O2 at 37 °C. After 15 min, the incubation medium was replaced with the same solution containing collagenase only. Myocytes were progressively released from the chunks into the supernatant and their yield was monitored under a microscope. The suspension was washed in enzyme-free solution and the myocytes were stored at room temperature until use (within ∼8 h of cell isolation). Electrophysiology Solutions used. Experimental solutions for the investigation of anionic current were similar to those used previously to study IAB [14]; the composition of all solutions used is given in Table 1. Osmolarity values given for each of the solutions listed in Table 1 were measured using a micro-osmometer employing a freezing-point method (Advanced Instruments, Norwood, MA, USA). Myocytes used in whole-cell voltage-clamp experiments were superfused (20–25 °C) with a standard Hepes-buffered Tyrode’s solution (see Table 1; solution A) until the whole-cell recording configuration had been obtained. For isolation of background anion current, sodium-free Tyrode’s solutions were used (solutions B–E), in which Na+ was replaced by N-methyl-d-glucamine (NMDG), with one of several possible dominant anions: solution B, chloride; solution C, aspartate; solution D, iodide; solution E, nitrate. All drugs used were added to solution E from stock solutions made in dimethyl sulfoxide (DMSO), with the exception of N-(2-naphthalenyl)-((3,5-dibromo-2,4-dihydroxyphenyl)methylene)glycine hydrazide (Gly H-101), which was dissolved in distilled water. The hyperosmotic external solution (solution F) was prepared by adding 70 mM sucrose to solution E. A Cs-based pipette solution (solution I) was used for all experiments. Solution I was sodium-free to prevent contamination of chloride currents by the sodium–calcium exchanger current. Drugs. Diisothiocyanostilbene-2,2′-disulfonic acid (DIDS, final concentration 200 μM), 5-nitro-2-(3-phenylpropylamino)benzoic acid (NPPB, final concentration 50 μM) and tefluthrin (TEF, final concentration 10 μM) were purchased from Sigma Chemical Co. (Poole, UK).
N-(2-naphthalenyl)-((3,5-dibromo-2,4-dihydroxyphenyl)methylene)glycine hydrazide (Gly H-101, final concentrations of 10 and 30 μM) was purchased from Merck (Frankfurt, Germany). All drug-containing solutions were protected from light throughout. Electrophysiological recording In electrophysiological experiments, junction potential changes were minimized by immersing the reference Ag/AgCl electrode in a 3 M KCl solution with a continuous agar bridge (4% agar in 3 M KCl). Borosilicate glass pipettes (Harvard Apparatus, UK) were pulled using a vertical two-step Narishige PP-830 microelectrode puller (Narishige, Japan) and had a tip resistance of 5–7 MΩ when filled with the pipette solution. During anion substitution experiments, background anion current was elicited from voltage-clamped myocytes (superfused with solutions B, C, D and E in the whole-cell configuration) by depolarizing ramps from −90 to +70 mV from a holding potential of −50 mV (ramp rate of 0.32 V s−1; sweep duration 1.03 s). A holding potential of −50 mV was used to inactivate the Na+ current and T-type Ca2+ current. Recordings were made using an Axopatch 200A amplifier, and data were recorded on computer using pClamp v. 9.0 software (Axon Instruments, Forster City, CA). Data were analyzed using the Clampfit program of pClamp v. 9.0. Mean values of averaged original signals over five command pulses were used for statistical analysis. Hyperpolarizing voltage steps of −20 mV and 5 ms duration were applied at 20 Hz to record the capacitance transients required for direct integration and the calculation of cell capacitance. The statistical significance of differences between control and drug periods, or between different anions, was determined by the paired Student’s t-test using either Microsoft Excel or GraphPad Prism v. 4.0. The statistical significance of differences between the normal and hyperosmotic solutions was calculated with a two-way ANOVA using GraphPad Prism v. 4.0. Statistical significance was taken at the 95% level of confidence (p < 0.05). Human atrial action potential simulations The Courtemanche et al. human atrial action potential (AP) model [18] was modified to incorporate a formulation for IANION based on the experimental data obtained with NO3− and Cl− in Figs. 1 and 2. Readers are referred to [18] for the general equations required to set up the model. Anionic background current (IANION) was simulated with an equation (Eq. (1)) in which EANION represents the current reversal potential and gANION the conductance of IANION. By fitting Eq. (1) to the experimental data shown in Fig. 1B, scaled to the mean data shown in Fig. 2A, we obtained gANION = 0.37 pS/pF, EANION = −45.64 mV, c = 0.87, d = 8.4 × 10−4 mV−1 for the NO3−-sensitive IANION, and gANION = 0.19 pS/pF, c = 0.94, d = 2.5 × 10−4 mV−1 for the Cl−-sensitive IANION. Results and discussion The voltage protocol used for these experiments was similar to that used previously to study the IAB present in cardiomyocytes from small mammal hearts [14] and is shown as an inset to Fig. 1A. From a holding potential of −50 mV, ascending voltage ramps were applied between −90 and +70 mV. This protocol was applied to cells superfused first with aspartate (Asp)-containing solution (solution C) and then with different superfusates containing more permeant anions. Fig. 1A shows an example of the net current traces obtained from a cell superfused serially with solutions containing Asp−, Cl−, NO3− and I−.
Both inward and, particularly, outward current components were greater with Cl−, NO3− and I− than with Asp− in the external superfusate. Fig. 1B shows current traces for the same cell, obtained by subtracting the current in Asp− from that with each of the more permeant anions. With each of Cl−, NO3− and I−, the Asp−-sensitive difference current showed marked outward rectification. Fig. 1C compares the mean outward current amplitude at +60 mV (normalized to membrane capacitance) for the three anions. Compared to NO3−, the observed current was significantly greater with I− and smaller with Cl− as charge carrier. These observations support the presence in human atrial cells of a basally active, anionic current (IANION); however, the relative current amplitudes with the three permeant anions differ from those observed previously for the IAB in myocytes from guinea-pig and rat hearts, where IAB was largest with NO3− [13,14]. Fig. 2A shows the mean IANION–voltage relation for 19 atrial cells, with NO3− as the major external anion (with IANION measured as the NO3− − Asp− difference current). The mean current–voltage relation for the resulting current showed clear outward rectification, with an observed reversal potential (Erev) for IANION in these experiments of −45.7 ± 2.2 mV (obtained by pooling Erev values from individual experiments). Previous studies provide evidence that human atrial cells exhibit ICl,Swell (e.g. [15,17,19–21]). Therefore, in order to determine whether or not IANION could be attributed to basal activity of channels mediating ICl,Swell, Asp− to NO3− substitutions were also made using hyperosmolar external solution [14]. The mean data from eight such experiments are shown in Fig. 2B. There was no statistically significant difference between the plotted densities of IANION from the I–V relation obtained in hyperosmolar solution and that shown in Fig. 2A, suggesting that IANION is distinct from ICl,Swell. To characterize IANION from human atrial myocytes further, the sensitivity of the current to a range of pharmacological interventions was tested. Fig. 2C summarises the effects of the various interventions (expressed as % changes in NO3−-sensitive current at +60 mV). The stilbene disulphonate DIDS failed to inhibit IANION at a concentration (200 μM) that would be anticipated to inhibit ICl,Swell [19,21]. On the other hand, tefluthrin (10 μM), which we have previously reported to activate the IAB seen in myocytes from guinea-pig hearts [14], failed to alter significantly the amplitude of IANION from human atrial myocytes. Together with the relative IANION amplitudes in I−, Cl− and NO3−, the lack of effect of tefluthrin indicates that IANION is distinct from the previously reported rat/guinea-pig IAB [13,14]. Moreover, the lack of significant inhibition of the current by DIDS or hyperosmolar solution makes the IANION observed in the present study distinct from ICl,Swell [1] and from the osmolarity- and stilbene-sensitive outwardly rectifying chloride current recently reported by Demion and colleagues [17]. NPPB (50 μM) produced a partial, statistically significant (p < 0.05) inhibition of IANION. The glycine hydrazide Cl− channel inhibitor Gly H-101 failed to produce a significant blockade of IANION at 10 μM (∼7-fold greater than the reported IC50 for CFTR channel inhibition at +60 mV [22]), but produced partial attenuation of the current at 30 μM (∼20-fold the reported IC50 for CFTR [22]).
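For context on the anion-substitution comparisons above: relative anion permeabilities of this kind are conventionally estimated from the shift in reversal potential when the dominant external anion is exchanged. A standard bi-ionic (Goldman) relation for monovalent anions, given here as general background rather than as the analysis stated to have been used in this study, is:

$$ \Delta E_{rev} = E_{rev,X} - E_{rev,Cl} = -\frac{RT}{F}\,\ln\!\left(\frac{P_X\,[X]_o}{P_{Cl}\,[Cl]_o}\right) $$

so that, for equimolar substitution, a negative (hyperpolarizing) shift in Erev implies PX > PCl.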
Evidence for the presence of CFTR (ICa,cAMP) in human atrial cells is mixed [1,15,19–21,23], with a number of studies failing to observe the current in response to β-adrenergic stimulation, forskolin or cAMP (e.g. [15,19–21]). Previous work has failed to find evidence for ICl,Ca in human atrial myocytes [24] and, moreover, the presence of EGTA in the pipette dialysate (Table 1, solution I) and external [Ca2+] replacement in our experiments would have inhibited any [Ca2+]i-activated conductances on membrane depolarization. Therefore, the IANION seen here appears to differ not only from guinea-pig and rat IAB [13,14] but also from the three major reported cardiac anion conductances in: (i) being basally active and (ii) its overall pharmacological profile and sensitivity to anion substitution. In order to gain insight into the physiological role of IANION, the current was incorporated into human atrial AP simulations as outlined in the ‘Methods’. Fig. 3A shows the simulated APs (at an AP frequency of 1 Hz) from the Courtemanche et al. model [18] both without (Control) and with inclusion of IANION, whilst Fig. 3B shows the corresponding current profiles during the time-course of the AP. With either and Cl− as charge carrier, incorporation of IANION into the model produced shortening of AP duration at 50% repolarization (APD50; the measured APD50 values were 184, 165 and 159 ms for Control, Cl-sensitive IANION and NO3-sensitive IANION, respectively). In contrast, APD90 was comparatively unaffected (the measured APD90 was ∼305 ms under each condition) and resting potential also changed relatively little (with resting potential values of −81, −79 and −78 mV, respectively, for Control, IANION with Cl− and IANION with ). The more marked effect of IANION inclusion at less negative potentials during AP repolarization is concordant with the outwardly rectifying nature of the current. An additional observation made from the AP simulations is that incorporation of IANION also influenced the profile of L-type calcium current (ICa,L) during the AP plateau: the initial rapid component of ICa,L was unaffected by IANION incorporation, but the sustained component during the AP plateau showed a modest reduction. Thus, both an increase in repolarizing current carried by IANION and the consequent decrease in the sustained component of ICa,L combined to lead to AP plateau depression and abbreviation of APD50. The results shown in Fig. 3 demonstrate clearly that IANION is able to influence human atrial AP repolarization. Further work is now warranted to determine both the extent to which the incorporation/omission of IANION influences the susceptibility of human atrial cells and tissue to arrhythmia and to pursue the underlying identity and regulation of this novel background conductance.
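As a concrete illustration of the APD50/APD90 measurements quoted above, the sketch below shows one common way to extract action potential duration from a simulated voltage trace. It is a generic post-processing routine under our own conventions (upstroke taken at maximum dV/dt, resting potential taken from the first sample), not code from the Courtemanche et al. model itself:

```python
import numpy as np

def apd(t, v, frac=0.5):
    """APD at a given repolarization fraction (frac=0.5 -> APD50, 0.9 -> APD90).

    t : time points (ms); v : membrane potential (mV) for a single AP.
    Duration is measured from the upstroke (max dV/dt) to the point where
    v first falls below V_peak - frac * (V_peak - V_rest) after the peak."""
    i_up = int(np.argmax(np.gradient(v, t)))       # upstroke index (max dV/dt)
    v_rest, v_peak = v[0], float(v.max())
    v_cross = v_peak - frac * (v_peak - v_rest)    # repolarization threshold
    i_peak = int(np.argmax(v))
    below = np.where(v[i_peak:] <= v_cross)[0]     # first crossing after the peak
    return float('nan') if below.size == 0 else t[i_peak + below[0]] - t[i_up]
```

Applied to control and IANION-containing simulations, the difference in apd(t, v, 0.5) recovers the kind of APD50 abbreviation reported above, while apd(t, v, 0.9) reflects the comparative insensitivity of APD90.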
[ "anion", "anion", "background current", "atrial myocyte", "heart", "patch-clamp", "action potential", "cardiac", "atrium", "anionic", "computer modelling" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R" ]
Psychopharmacologia-4-1-2270918
Acute neuropsychological effects of MDMA and ethanol (co-)administration in healthy volunteers
Rationale In Western societies, a considerable percentage of young people expose themselves to 3,4-methylenedioxymethamphetamine (MDMA or “ecstasy”). Commonly, ecstasy is used in combination with other substances, in particular alcohol (ethanol). MDMA induces both arousing as well as hallucinogenic effects, whereas ethanol is a general central nervous system depressant. Introduction In Western societies, a considerable proportion of young people expose themselves to 3,4-methylenedioxymethamphetamine (MDMA or ‘ecstasy’; Gross 2002; Parrott 2001; Tancer and Johanson 2007). Ecstasy has gained widespread use in the ‘club’ scene, typically all-night parties with loud music and intense lights (Winstock et al. 2001). The average dose of ecstasy used recreationally is reported to be around 80–90 mg of MDMA with considerable individual variation (Tanner-Smith 2006). Ecstasy users are generally multidrug users who have experience with various recreational drugs and use these in combination with ecstasy (Gouzoulis-Mayfrank and Daumann 2006b). Probably due to its availability, alcohol remains one of the most co-used substances (Barrett et al. 2005). As the use of alcohol is known to impair cognitive function and to decrease awareness of this impairment, it can lead to dangerous behaviour like driving under the influence (Lamers and Ramaekers 2001; Riley et al. 2001). MDMA acts primarily by releasing serotonin (5-hydroxytryptamine, 5-HT) from pre-synaptic 5-HT terminals. It reverses the direction of the reuptake transporter and increases 5-HT levels at the post-synaptic receptors (Liechti and Vollenweider 2000; Mlinar and Corradetti 2003; Pifl et al. 1995). MDMA is also a potent releaser of dopamine and (nor)adrenaline (Colado et al. 2004; Liechti and Vollenweider 2001). MDMA is rapidly absorbed following oral administration. Within 30 min, MDMA is detectable in the blood. Plasma levels peak at 1–2 h after drug administration, and maximum behavioural and subjective effects occur around 1–2 h and have declined by 4 h in spite of persisting plasma levels (de la Torre et al. 2004; Green et al. 2003). Increasing the dose does not result in a proportional rise in plasma concentrations, which is indicative of non-linear pharmacokinetics (de la Torre et al. 2000). The behavioural effects of MDMA resemble but are not restricted to those of psychostimulants (e.g. amphetamines or ‘speed’) as well as hallucinogenics (e.g. lysergic acid diethylamide or ‘LSD’), although MDMA’s most characteristic effects are described as an increase in empathy and friendliness (Vollenweider et al. 2002). This led to MDMA being categorized as an ‘entactogen,’ as coined by Nichols and Oberlender (1990). Most research into the cognitive effects of MDMA in humans has focused on the long-term effects, where only memory was consistently found to be impaired (Verbaten 2003; Verkes et al. 2001). Our review of the acute effects of MDMA in humans showed that cognitive effects were assessed in only a limited number of studies, using diverse tests and generally addressing only certain aspects of neuropsychological function. As such, no consensus on MDMA’s cognitive effects could be reached (Dumont and Verkes 2006). Since then, reports on the effects of MDMA have generally confirmed previous findings (Kuypers et al. 2006; Kuypers et al. 2007; Ramaekers et al. 2006; Tancer and Johanson 2007). Interestingly, two studies reported effects of MDMA on memory, which had not been assessed previously.
These reports showed acute impairment of immediate and delayed recall of words as well as spatial memory by MDMA (Kuypers and Ramaekers 2005, 2007). Drinks containing ethanol, commonly referred to as alcohol, are widely available and regularly used in Western society. Ethanol is chiefly a central nervous system (CNS) depressant. It inhibits both excitatory and inhibitory post-synaptic potentials by potentiating the action of gamma-aminobutyric acid at its receptor (Suzdak et al. 1988). Reports of the cognitive effects of combined use of MDMA and ethanol in humans have been sparse in the literature. Studies that were performed assessed psychomotor function, attentional performance and subjective effects (Hernandez-Lopez et al. 2002; Kuypers et al. 2006; Ramaekers et al. 2006). In general, MDMA and ethanol had no or opposite effects on effect measures, and as such co-administration did not exacerbate single-drug effects. In the current study, we employed a series of tests sensitive to changes in all common neuropsychological domains induced by several pharmacological compounds, including amphetamines (Wezenberg et al. 2004). It is generally acknowledged that the combined use of alcohol with other CNS-depressant drugs may enhance the effects of ethanol or of the other drugs. MDMA, however, has stimulant effects while ethanol is a sedative agent, suggesting that the effects of co-administration are diminished rather than augmented compared to the effects following single administration. This hypothesis was investigated during acute co-administration of MDMA and ethanol in healthy volunteers. Materials and methods Study design This study utilised a four-way, double-blind, randomised, crossover, placebo-controlled design. Sixteen volunteers were randomly assigned to one of four treatment sequences. In each treatment period, separated by a washout of 7 days, every volunteer received a capsule containing either 100 mg MDMA or placebo, together with either an ethanol infusion (target blood alcohol concentration (BAC) of 0.6‰) or a placebo infusion. Study outline Subjects arrived in the morning and were admitted to the study after a negative urine drug screen (opiates, cocaine, benzodiazepines, amphetamines, methamphetamines and delta-9-tetrahydrocannabinol), as well as a negative alcohol breath test and recording of signs and symptoms of possible health problems. A light breakfast was offered. Drug administration was scheduled at 1030 hours and the alcohol infusion was started at 1100 hours for a duration of 3 h. At 1130 hours, subjects performed the psychological test battery as described below. Specific test times are reported in Table 1. Subjects received lunch at 1400 hours and were sent home at 1700 hours after a medical check. Adverse events were recorded throughout the study day. Vital signs were monitored using a Datascope® Accutorr Plus™ cardiovascular monitor and Braun® type 6021 ThermoScan during the study day. The data presented in this report are a subset of a larger data set, which will be reported elsewhere.
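The randomisation scheme behind the four treatment sequences is not specified in the text. Purely as an illustration, one standard way to build such a four-period crossover is a 4 × 4 Latin square with subjects shuffled over sequences; the condition labels and seed below are hypothetical:

```python
import random

conditions = ["placebo+placebo", "MDMA+placebo", "placebo+ethanol", "MDMA+ethanol"]

# 4 x 4 Latin square: each condition occurs exactly once per sequence (row)
# and exactly once per treatment period (column).
sequences = [[conditions[(i + j) % 4] for j in range(4)] for i in range(4)]

random.seed(1)                        # fixed seed only for reproducibility here
subjects = list(range(1, 17))
random.shuffle(subjects)
assignment = {s: sequences[k % 4] for k, s in enumerate(subjects)}  # 4 per sequence
print(assignment[subjects[0]])
```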
Table 1 Timeline. Times are relative to drug administration.

Neuropsychological test | Description | Time (h:m)
Drug administration | - | 0:00
18-word list immediate recall | Immediate recall of 18-word list | 1:00
SDST | Translate symbols to digits with key present in 90 s | 1:05
SDRT | Translate symbols to digits from memory | 1:08
Pursuit task | Keep dot within moving circle | 1:10
Tangles task | Tangled line leads to which target? | 1:13
Switch task | Follow, possibly conflicting, instructions (choice between left or right) | 1:17
18-word list delayed recall | Delayed recall of 18-word list | 1:22
18-word list delayed recognition | Recognise words of 18-word list memorised earlier among 18 distracters | 1:23
Point task | Keep pen steady in air, measures tremor | 1:25
Visual analogue scales | 16 100-mm scales for subjective experiences | 1:30

Subjects Sixteen healthy volunteers (nine male, seven female), regular users of ecstasy and alcohol, aged 18–29 years and within 80–130% of their ideal bodyweight, were recruited through advertisement on the internet and at local drug testing services. They were all in good physical and mental health as determined by assessment of medical history, a medical examination, an electrocardiogramme, and clinical, haematological and chemical blood examinations. Previous drug use was assessed using a structured interview. Fifteen volunteers were right handed and one was left handed. The study was approved by the local Medical Ethics Committee. All subjects gave their written informed consent before participating in the study and were compensated for their participation. Subject demographics and drug history are reported in Table 2.

Table 2 Volunteer demographics–drug history

Variable | Mean | SD | Min | Max
Age (years) | 22.1 | 2.9 | 18.0 | 29.0
Education (years) | 16.5 | 1.6 | 12 | 18
Height (cm) | 174.7 | 12.3 | 147.0 | 189.1
Weight (kg) | 67.5 | 12.4 | 45.7 | 88.4
Opiates | 0.1 | 0.3 | 0 | 1
LSD | 2.5 | 6.6 | 0 | 25
Amphetamines | 37.3 | 81.1 | 0 | 250
Ecstasy | 94.6 | 138.4 | 14 | 431
Cannabis | 1,174.3 | 1,665.5 | 20 | 5,840
Cocaine | 33.7 | 105.7 | 0 | 400
Alcohol | 2,367.9 | 1,981.6 | 50 | 5,200
Solvents | 3.6 | 13.3 | 0 | 50
Barbiturates | 0 | 0 | 0 | 0
Benzodiazepines | 18.6 | 57.3 | 0 | 216
Psilocybin | 6.9 | 10.4 | 0 | 30

Drug quantities mentioned are lifetime drug exposures, not further specified. One subject had a mild adverse reaction (local vascular reaction) to the alcohol infusion and one subject did not refrain from drug use; both (one male, one female) were excluded from further participation and results obtained were not included in the data analysis. Drugs and dosages MDMA (or matched placebo) was given as a capsule in a single dose of 100 mg via oral administration (dose range: 1.1–2.2 mg/kg). MDMA was obtained from Lipomed AG, Arlesheim, Switzerland and encapsulated according to Good Manufacturing Practice by the Department of Clinical Pharmacy, UMC St Radboud, Nijmegen, the Netherlands. MDMA 100 mg orally is a relevant dose in the range of normal single recreational dosages. Previous experiments in humans used doses up to 150 mg without serious adverse events. Ethanol (or matched placebo) was administered continuously by intravenous infusion of 10% ethanol in glucose solution, resulting in an ethanol blood concentration of 0.6‰, for a duration of 3 h as described below. Alcohol clamping To standardise alcohol delivery and maintain a constant alcohol blood concentration over time, an intravenous ethanol clamp was used. Ethanol was administered by infusion of a 10% ethanol in glucose solution for a duration of 3 h. The infusion rate was calculated using frequent breath alcohol concentration measurements, according to a previously designed algorithm (Amatsaleh et al. 2006a).
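The published clamp algorithm (Amatsaleh et al. 2006a) is not reproduced in this report. Purely to illustrate the closed-loop idea just described, repeated breath measurements driving the infusion rate toward a fixed target, a minimal proportional-control sketch follows; the gain, maintenance rate and function name are hypothetical and are not the study's actual spreadsheet algorithm:

```python
def update_infusion_rate(brac, target=0.6, maintenance_ml_h=60.0,
                         kp_ml_h_per_promille=400.0):
    """Return the next infusion rate (mL/h of 10% ethanol) from the latest
    breath alcohol concentration (brac, in per-mille).

    maintenance_ml_h approximates replacement of eliminated ethanol;
    the proportional term corrects the deviation from the target BAC."""
    error = target - brac
    rate = maintenance_ml_h + kp_ml_h_per_promille * error
    return max(0.0, rate)   # never request a negative infusion rate

# e.g. a reading of 0.55 per-mille yields a modestly increased rate:
print(update_infusion_rate(0.55))  # 80.0
```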
Breath alcohol concentration was assessed using a HONAC AlcoSensor IV® Intoximetre. An intravenous administration route was chosen, ensuring standardisation of the rate and bioequivalence of ethanol administration. This is an important pre-requisite for predictable pharmacokinetics of ethanol. The process was semi-automated using a computer spreadsheet programme, which uses measured breath alcohol concentrations to calculate the infusion rate needed to maintain the ethanol level at 0.6 mg/mL. This is a relevant dose, equivalent to peak levels after approximately two to three units of alcoholic beverages. In many European countries, driving is prohibited at a BAC above 0.5‰. This limit has been confirmed by a report showing that at an average BAC of 0.6‰ psychomotor performance is significantly impaired (Amatsaleh et al. 2006b). A BAC of 0.6‰ is equivalent to approximately two to three alcoholic beverages as commonly used in social settings in Western society, which is considered to be a safe and relatively moderate dose, despite its significant CNS effects. MDMA blood analysis For the assessment of serum levels of MDMA, blood samples were collected 90 min after drug administration from each subject on each study day. Venous blood samples (10 ml) were collected into heparinised tubes and centrifuged immediately at 4°C for 15 min. Plasma was split into aliquots of 2 mL (to avoid repeated freeze–thaw cycles), frozen rapidly using liquid nitrogen and stored at −80°C. Samples were analysed for MDMA and MDA concentrations by the Toxicology unit of the Leyenburg hospital, The Hague, the Netherlands. Neuropsychological tests, apparatus and procedure The performance on all neuropsychological tests was recorded by means of a digitising tablet (WACOM UD-1218-RE), a laptop computer, a pressure-sensitive pen (which could also be used as a cursor) and test forms. The x and y coordinates of the pen tip on and up to 5 mm above the digitiser were sampled with a frequency of 200 Hz and a spatial accuracy of 0.2 mm. The time schedule of the tests is summarised in Table 1. To familiarise the subjects with the tests and procedures, they were invited to the hospital to perform a practice session within 1 week before the actual study days. All tests had five equivalent versions for the four test days and one practice day; test versions were counterbalanced over test days. Executive function Switch task This test is a reaction time task measuring simple as well as complex reaction time, assessing executive performance (Baker and Letz 1986). After a random period of 0.75 to 1.75 s, two rectangular fields appeared on both sides of a circle in the centre of the screen. Only one of the two fields provided the subjects with information, either a colour, an arrow or both. The other, non-informative field always had a neutral grey colour. Five conditions were subsequently presented to subjects. If only green fields appeared, subjects had to move as fast as possible into the green field. If green and red fields appeared, subjects had to move into the green field and away from the red field as soon as they appeared. If green fields with a left or right arrow were presented, subjects were to move in the direction of the arrow. Green and red fields with a left or right arrow indicated that subjects were to follow the direction of the arrows in the green field, but the opposite direction of the arrows in the red field. Finally, the first condition was repeated.
All conditions contained 20 trials except condition four, in which there were 40 trials (total = 120 trials). The outcome measures were the mean reaction times per condition. The last condition is a repetition of the first, to check for possible changes in attention. Memory Eighteen-word list A verbal memory test based on the classic Auditory Verbal Learning Test (Vakil and Blachstein 1993) was used. A variant was made consisting of a list of 18 words. The classic test uses 15 words. A longer wordlist was chosen, however, to prevent ceiling effects. The list was presented verbally three times. Under normal circumstances, subjects are supposed to remember an increasing number of words after each trial. Directly after each presentation, and after an interval of 20 min, subjects were asked to recall as many words as possible. After the delayed recall trial, a list of 36 words was presented from which subjects were asked to recognise the 18 words previously presented. The incorrect words were distracters and resembled the correct words in a semantic or phonologic manner. Responses were either correct positive (when a word that was recognised was indeed part of the list presented during immediate recall) or false positive (when a word was recognised but was not part of the list presented during immediate recall, i.e. the word was a distracter). The outcome measure was the number of correctly recalled or recognised words for the average of the three immediate recall trials, the delayed recall trial and the delayed recognition trial. Symbol digit recall test The symbol digit recall test (SDRT) followed directly after the Symbol Digit Substitution test (SDST), which is discussed in the last paragraph of this section. After subjects had finished the SDST, they were shown the symbols of the SDST without the translation key, one at a time, and asked to produce the corresponding numbers. This test is based on an extended procedure of the SDST to measure incidental learning (Kaplan et al. 1991). The outcome measure was the number of correctly translated symbols. Psychomotor function Pursuit task To measure implicit procedural learning, a computerised version of the rotary pursuit task was used. This test is based on the classic rotary pursuit task (Ammons 1951). It is a continuous motor task. Subjects had to follow the movement of a large target stimulus on the computer screen with a cursor by moving the pen over the XY tablet. The speed of the target gradually increased when the cursor was contained within the target but decreased considerably when it was not. The target followed a spatially predictable circular path over the screen. The outcome measure for this test was the total number of rotations within 2 min. Point task The point task, a measure for tremor, required subjects to try to keep the cursor inside a very small circle for 1 min, while avoiding contact between the pen and the test form. The outcome measure for this test was the deviation from the target. Visuospatial and visuomotor function Tangle task The tangle task required the subject to visually track a particular line winding through two to four other lines. On subsequent trials, the tangles increased in complexity; they got longer and made more 90° turns. The paper form had a start area and five target areas, numbered 1 to 5, reflecting the maximum number of target areas on the screen; trials started with only three target areas.
This test is modelled after the visualisation test from the ‘kit for factor referenced cognitive tests.’ It was selected by the US Navy to study environmental and other time-course effects and has good task stability and reliability (Bittner et al. 1986). The outcome measures are the reaction time per trial and the number of correct trials in 2 min. Attention Symbol Digit Substitution test This test is a version of the subtest from the Wechsler Adult Intelligence Scale (Wechsler 1981). Subjects had to substitute the nine symbols for the digits 1–9 on the basis of a given translation key. The outcome measure was the total number of digits completed in 90 s. According to Hege et al. (1997), this task measures many cognitive components, e.g. visuospatial scanning, intermediate memory, perceptual motor speed and speed of cognitive processing. Therefore, subsequent analyses were performed in an attempt to disentangle these cognitive processes. Based on pen pressure, movement trajectories were defined as either pen-up periods or pen-down periods. This allowed for subsequent analysis of matching times and movement (writing) times in the Symbol Digit Substitution test. For the motor component, the mean writing times were computed. For the more cognitive component, the mean matching times were computed. These analyses have been performed previously (Sabbe et al. 1999; Wezenberg et al. 2005). Subjective Subjective effects were recorded using the Bond and Lader (Visual Analogue) Mood Rating Scale (BLMRS). This inventory was completed at the end of each neuropsychological test battery on each study day. The BLMRS scale consisted of 16 lines, each 10 cm in length, with opposite terms at each end of the line (alert–drowsy, calm–excited, strong–feeble, muzzy–clear-headed, well coordinated–clumsy, lethargic–energetic, contented–discontented, troubled–tranquil, mentally slow–quick witted, tense–relaxed, attentive–dreamy, incompetent–proficient, happy–sad, antagonistic–amicable, interested–bored, withdrawn–gregarious). Subjects were asked to indicate which item was more appropriate by marking the line. The outcome measure was the distance to the marker on each scale. These scale scores were then aggregated to scores for ‘calmness,’ ‘alertness’ and ‘contentedness’ as described by Bond et al. (1974). Statistical analyses Statistical evaluation (using SPSS 11.5 for Windows) was performed with general linear model repeated-measures analysis of variance. Main and interaction effects were tested using a two-factor (‘ethanol’ and ‘MDMA’), two-level (absent versus present) multivariate model. The analysis of the data was based on Maxwell and Delaney (2004) and Kirk (1995). First, the presence of interaction (non-additivity) was tested with alpha = 0.05. When the interaction was not statistically significant, we proceeded by testing the main effects, each at alpha = 0.05. In the case of a significant interaction, we proceeded by testing simple main effects of each drug, i.e. MDMA vs. placebo and ethanol vs. placebo. Results Subject demographics are summarised in Table 2. Out of 16 subjects, 14 completed the study procedure. One subject had a mild adverse reaction (a local vascular reaction, which subsided when the infusion was stopped) to the alcohol infusion and one subject did not refrain from drug use; both were discontinued from study participation and data already obtained were not included in the statistical analysis. Only significant results are mentioned in this section, unless stated otherwise.
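The two-factor repeated-measures model described under ‘Statistical analyses’ above can be sketched with statsmodels’ AnovaRM; the synthetic scores below are placeholders rather than study data, and the SPSS procedure actually used may differ in detail:

```python
import itertools
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
# One row per subject x drug condition (14 completers, 2 x 2 within factors).
rows = [dict(subject=s, ethanol=e, mdma=m,
             score=20 - 2 * (e == "present") - 2 * (m == "present") + rng.normal())
        for s, e, m in itertools.product(range(14), ["absent", "present"],
                                         ["absent", "present"])]
df = pd.DataFrame(rows)

# Interaction is inspected first; if non-significant, the two main effects are
# interpreted, otherwise simple main effects (each drug vs. placebo) follow.
print(AnovaRM(df, depvar="score", subject="subject",
              within=["ethanol", "mdma"]).fit())
```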
MDMA blood concentration 90 min after administration did not differ between MDMA single administration and MDMA and ethanol co-administration and was on average 196 μg/L (SD = 83 μg/L). Blood alcohol concentration was maintained at an average of 0.54‰ (SD = 0.07‰). Executive function Executive function (switch task) did not show any significant main or interaction effects. Memory function Memory function was assessed by the 18-word list (outcome measures were ‘immediate recall,’ ‘delayed recall’ and ‘recognition,’ see Fig. 1) as well as the SDRT. Immediate recall was impaired only by ethanol (F(1, 12) = 8.71, p = 0.011). Fig. 1 Memory effects (18-word list). Immediate: immediate recall, average score of three trials of correctly recalled verbally presented words; Delayed: correctly recalled verbally presented words 20 min after presentation; Recognition: correctly recognised verbally presented words among 18 distracters, 20 min after presentation (mean and SEM). Immediate recall was impaired only by ethanol (F(1, 12) = 8.71, p = 0.011). Delayed recall was impaired by MDMA (F(1, 12) = 10.447, p = 0.007) as well as by ethanol (F(1, 12) = 16.031, p = 0.002); recognition was not affected by any drug condition. Delayed recall as assessed by the 18-word list was impaired by MDMA (F(1, 12) = 10.447, p = 0.007) as well as by ethanol (F(1, 12) = 16.031, p = 0.002). The SDRT, also a test for delayed recall, showed a similar pattern of impairment by MDMA (F(1, 12) = 5.300, p = 0.038) as well as by ethanol (F(1, 12) = 7.654, p = 0.016). Psychomotor function Psychomotor function was assessed with tests for tremor (point task), accuracy (pursuit task) and speed (SDST motor time, see Fig. 2); other SDST results are reported in the section “Attention.” Ethanol impaired psychomotor speed as reflected in the increase in SDST motor time (F(1, 12) = 9.295, p = 0.009). Fig. 2 Psychomotor effects: SDST writing time (mean, SEM). Ethanol increased writing times (F(1, 12) = 9.295, p = 0.009). Visuospatial and visuomotor function Visuospatial and visuomotor function were measured with the tangle task, subdivided into ‘total number correctly solved’ and ‘reaction time,’ and did not show any significant effects, although a trend towards impairment by MDMA (F(1, 12) = 3.966, p = 0.068) was observed. Attention Attention was assessed with the SDST task; the outcome measures were ‘motor time’ (see “Psychomotor function”), ‘matching time’ (Fig. 3) and ‘total number correctly substituted.’ The time required to match symbols to the corresponding numbers showed a significant MDMA and ethanol interaction (F(1, 12) = 6.214, p = 0.027). Tests for simple main effects revealed that both single-drug conditions reduced attention compared to placebo (ethanol F(1, 13) = 6.248, p = 0.027; MDMA F(1, 13) = 6.822, p = 0.022; see Fig. 3). Fig. 3 Attention effects: SDST matching time, i.e. time needed for translation (mean and SEM). A significant MDMA by ethanol interaction was found (F(1, 12) = 6.214, p = 0.027). Subjective effects Subjective effects are depicted in Fig. 4. Feelings of ‘contentedness’ were increased significantly by MDMA only (F(1, 12) = 4.710, p = 0.049). Fig. 4 Subjective effects (aggregated Bond and Lader scores, mean and SEM).
Feelings of ‘Contentedness’ were increased significantly by MDMA only (F(1, 12) = 4.710, p = 0.049). A significant interaction effect (F(1, 12) = 7.358, p = 0.018) was found for feelings of ‘Alertness.’ Feelings of ‘Calmness’ were reduced only by MDMA (F(1, 12) = 20.259, p = 0.001). A significant interaction effect (F(1, 12) = 7.358, p = 0.018) was found for feelings of ‘alertness.’ Tests for simple main effects revealed that ethanol but not MDMA significantly decreased feelings of alertness compared to placebo (F(1, 13) = 50.613, p < 0.001). Feelings of ‘calmness’ were reduced only by MDMA (F(1, 12) = 20.259, p = 0.001). Discussion This study demonstrates that the effects of 100-mg MDMA, commonly known as ecstasy, on cognitive function are no greater than the effects of a relatively low dose of ethanol. This is remarkable, as these results suggest that the effects of 100-mg MDMA are comparable to the peak effects of two to three alcoholic beverages. Co-administration of these compounds did not result in any significant cognitive impairments beyond those observed after administration of ethanol alone. The use of moderate amounts of alcohol is common in Western societies and, although impairing cognitive function, socially accepted, while ecstasy use remains very controversial. Of course, our findings only relate to the acute neuropsychological implications of ecstasy use and not to the physiological and long-term effects, which rightfully remain topics of discussion (Gouzoulis-Mayfrank and Daumann 2006a; Nutt 2006; Parrott 2007). Drug effects observed in this placebo-controlled crossover study were moderate. Co-administration was well tolerated, as indicated by the subjective scores, which were comparable to those found after single administration of MDMA. An interaction of MDMA and ethanol was found for subjective alertness scores. Ethanol, as expected, reduced subjective alertness, while MDMA co-administration reversed the reduction of subjective alertness by ethanol. In the present study, MDMA by itself did not significantly affect subjective alertness, although this effect has been consistently reported in other studies and is a well-known effect of amphetamines. However, MDMA did significantly reduce subjective calmness, i.e. subjects felt more excited after MDMA use. Probably, the Bond and Lader Mood Rating Scale is not well suited for the assessment of the subjective effects of psychoactive drugs, and future studies should employ more appropriate subjective drug effect measures such as the Profile Of Mood States (de Wit et al. 2002). When considering the results for each neuropsychological domain, executive function was not affected by any drug condition. A previous study showed impairment of executive function by ethanol but not MDMA, although ethanol impaired performance in only one out of three tests of executive function (Lamers et al. 2003). The BAC in that study was 0.3‰ at the time of testing, compared with 0.56‰ in our current study, suggesting a lack of sensitivity of the test employed in the current study. The above-mentioned previous study also reported visuospatial and visuomotor impairment by MDMA but not by ethanol. Although not significant, our current results show a similar pattern, where MDMA showed a trend towards impairment of visuospatial and visuomotor function, whereas ethanol did not. Psychomotor function was impaired only after ethanol administration (SDST motor time, see Fig. 2).
The majority of studies addressed in our review of acute effects of MDMA in humans (Dumont and Verkes 2006) did not report any change in psychomotor function after MDMA either. However, increased psychomotor function after MDMA has also been found (Lamers et al. 2003; Ramaekers et al. 2006). These studies administered 75 mg instead of 100 mg. Possibly, the effects of MDMA are biphasic, with a low dose of MDMA exhibiting more amphetamine-like effects, e.g. arousal, increasing performance, whereas higher doses may elicit more hallucinogenic effects and impair performance (Liechti et al. 2001; Solowij et al. 1992). As mentioned above, MDMA co-administration reversed the ethanol-induced feelings of sedation, although MDMA was unable to reverse the psychomotor impairment induced by ethanol. This dissociation between subjective and objective sedation confirms previous findings by Hernandez-Lopez et al. (2002). Several studies assessed MDMA’s effect on attention using the Digit Symbol Substitution Task (DSST), although no significant effects were found (Cami et al. 2000; Farre et al. 2004; Kuypers and Ramaekers 2005). One study reported decreased DSST performance after ethanol as well as after ethanol and MDMA co-administration, but no effect of MDMA (Hernandez-Lopez et al. 2002). Our findings largely confirm these results. We found no main effects of MDMA or ethanol on attention, although an interaction of ethanol and MDMA for ‘matching time’ (time required to match the number to the corresponding symbol) was found. Co-administration of MDMA and ethanol increased ‘matching time’ comparably to the increase observed after both MDMA and ethanol single administration, consistent with our hypothesis of competitive mechanisms of action of both drugs (see Fig. 3). Studies investigating the long-term effects of MDMA consistently found memory to be affected (Verbaten 2003). In the present report, almost all memory measures showed quantitatively comparable impairment for each drug condition (see Fig. 1), although the effect of MDMA on immediate recall did not reach statistical significance. Only delayed recognition was not impaired in any drug condition. These findings suggest a deficit in the retrieval of verbal information encoded in memory, rather than impairment in the storage of information. Our findings are similar to the results of a previous study on MDMA-induced effects on memory (Kuypers and Ramaekers 2005). In that study, no memory impairment was observed after methylphenidate administration, a pronounced dopamine and norepinephrine releaser, suggesting the involvement of serotonin in memory impairment. Several other studies have also shown serotonin-mediated modulation of memory function through interaction with the cholinergic neurotransmitter system, although the details of this complicated interaction remain elusive (Cassel and Jeltsch 1995; Garcia-Alloza et al. 2006; Meneses 2007). Generally, subjects stated that they were well aware of their impaired memory after MDMA. BAC was on average 0.56‰. At this level, driving is prohibited by law in many European countries because of its interference with normal functioning. Although the effects were moderate, ethanol impaired cognitive performance in various tests. Similar moderate effects were observed with MDMA 100 mg, considered to be slightly above the average recreational dose (Tanner-Smith 2006). This might be considered surprising for a drug with reported robust subjective stimulating and hallucinogenic properties.
However, because the effects caused by a single dose of 100-mg MDMA were comparable to the effects of a BAC of 0.56‰, this dose should by inference be considered unacceptable in motorised traffic. Arguably, the moderate drug effects found in this study could be explained by ‘missing’ the time of the maximal drug effects. Although the average MDMA blood concentration reported here (196 μg/L) is comparable to the Cmax of 100-mg MDMA (199.8 μg/L) as reported by de la Torre et al. (2000), MDMA concentration was assessed at the end of the testing procedure. However, Hernandez-Lopez et al. (2002) found significant effects at 60 min as well as 90 min after drug administration, arguing against the suggestion of ‘missing’ peak drug effects. The circumstances in which these substances are normally used cannot be fully recreated in the laboratory, and this may have suppressed the effects of both substances. It is not unlikely that these substances show enhanced effects when tested under typical circumstances and surroundings. Recently, Parrott et al. (2006) concluded that the increase in physical activity and body temperature typically experienced when using MDMA enhances MDMA's effects. Ball et al. (2006) demonstrated that a familiar surrounding increased MDMA-induced locomotor response as well as single-neuron activity in rats, compared to unfamiliar surroundings. Therefore, the psychosocial context in which MDMA is used, along with the different expectations and behaviour, probably influences its effects (Sumnall et al. 2006). It is unlikely, however, that this affects the quality of the interactions of MDMA and ethanol. In conclusion, co-administration of MDMA and ethanol did not impair cognitive function significantly more than MDMA or ethanol administration alone. The most prominent effect of (co-)administration of MDMA and ethanol was an impairment of memory. Ethanol also impaired psychomotor function. Although the impairment of performance by each drug condition was relatively moderate, this significant impairment of cognitive function should be considered intolerable in motorised traffic and other cognitively demanding situations, as confirmed by previous research and as defined by law. However, the effects of these drugs at the concentrations used in the present study on established neuropsychological tests appear to be smaller than one would assume based on their reputation.
[ "acute", "neuropsychologic", "effects", "mdma", "ethanol", "healthy volunteers", "ecstasy", "alcohol", "interaction" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P" ]
Eur_J_Clin_Pharmacol-4-1-2426926
Efficacy and safety of disodium ascorbyl phytostanol phosphates in men with moderate dyslipidemia
Objective This study investigated the efficacy, safety, tolerability, and pharmacokinetics of a novel cholesterol absorption inhibitor, FM-VP4, comprising disodium ascorbyl sitostanol phosphate (DASP) and disodium ascorbyl campestanol phosphate (DACP). Introduction Worldwide, cardiovascular disease (CVD) is the most common cause of death, with atherosclerotic vascular disease as the underlying cause. It has been well established that elevated plasma concentrations of low-density lipoprotein cholesterol (LDL-C) lead to an increased risk of atherosclerosis and coronary heart disease [1]. Dietary modification can improve the lipid profile and the potential risk of CVD, but for a significant portion of the population, treatment with lipid-lowering agents is necessary to reduce blood cholesterol effectively. Statins are the primary class of drugs used for managing LDL-C and are the most potent agents in lowering LDL-C. In fact, the lowering of LDL-C by statin therapy has been shown to decrease the number of fatal and nonfatal myocardial events in both primary and secondary prevention studies [2–6]. However, statins have not provided the final answer in terms of CVD prevention, and their use is associated with side effects, albeit rare, at higher doses. Therefore, cholesterol-lowering agents that act via other mechanisms are of great interest. Plant sterols, or phytosterols, are naturally occurring compounds in vegetable oil, seeds, and nuts that are structurally related to cholesterol. Daily intake of 2,000 mg of plant sterols or their saturated counterparts, plant stanols, or phytostanols, has been shown to decrease LDL-C by 9–14% without affecting high-density lipoprotein cholesterol (HDL-C) concentrations [7, 8]. Plant sterols and stanols are thought to decrease plasma cholesterol concentrations by inhibiting cholesterol absorption in the intestine [9]. When cholesterol absorption is decreased, the hepatic cholesterol pool is reduced, resulting in enhanced cholesterol synthesis by the liver. At the same time, LDL receptors are upregulated, with ensuing lower LDL-C concentrations in plasma [10]. A new water-soluble plant stanol derivative, disodium ascorbyl phytostanol phosphate (FM-VP4, Fig. 1), has been developed as a cholesterol-absorption inhibitor. FM-VP4 consists of a mixture of sitostanol and campestanol to which ascorbate is covalently bound via a phosphodiester linkage. It has been shown to inhibit the in vitro uptake of micellar [3H]-cholesterol by approximately 50% in human and rat enterocyte cell lines [11, 12]. In vivo, FM-VP4 led to a dose-related inhibition of [3H]-cholesterol absorption in rats, as shown by an up to 80% reduction in the area under the concentration–time curve (AUC) of orally administered micellar [3H]-cholesterol [13]. Furthermore, the LDL-lowering activity of FM-VP4 was observed in a broad range of LDL-sensitive species, including gerbils [14, 15], hamsters [16], and apolipoprotein E (ApoE)-deficient transgenic mice [17]. In hamsters, FM-VP4 was a more potent LDL-lowering agent than unesterified plant stanols [16], and in ApoE-deficient mice, the LDL-lowering effect of FM-VP4 was correlated with retardation of atherosclerotic plaque development [17]. Other effects of FM-VP4 included a decrease of plasma triglyceride levels and a reduction of abdominal fat or body-weight gain in gerbils [14, 15] and mice [18]. In all of these cases, FM-VP4 showed no side effects in the animals tested. Fig.
1 Chemical structure of FM-VP4, composed of disodium ascorbyl campestanol phosphate (R=CH3) and disodium ascorbyl sitostanol phosphate (R=C2H5). Preclinical data suggested that FM-VP4 is a potent cholesterol-lowering agent with no significant toxic effects. Here, we report the first human study designed to assess the efficacy, safety, tolerability, and pharmacokinetics of single and multiple doses of this water-soluble plant stanol analogue. Subjects and methods Subjects Subjects were recruited via advertisements in local newspapers. The study protocol was carefully explained before subjects were asked to give their written informed consent. The study protocol was approved by the Institutional Review Board of the Academic Medical Centre. Subjects were eligible if they were male, 18–75 years old, healthy as reviewed by medical history and physical examination, had LDL-C concentrations ≥3.0 mmol/L during one of the screening visits or at baseline and a triglyceride (TG) concentration ≤4.5 mmol/L at the first visit, had a body mass index (BMI) <35 kg/m2, and did not use any steroids, β-blockers, corticosteroids, thiazide diuretics, or antiepileptics. Subjects with a history of hypertension, arterial diseases, diabetes mellitus type I or II, hypothyroidism, obstructive biliary disorders, pancreatitis, collagen disorders, or autoimmune disease were excluded, as were subjects with a history of malignancy during the previous 3 years, significant hepatic, renal, cardiac, or cerebral disease, or plasma levels of hepatic transaminases higher than two times the upper limit of normal (ULN). Subjects of phase 1 were not allowed to use any lipid-lowering drugs at inclusion, and the use of plant sterol- or stanol-containing products had to be discontinued at the inclusion visit. Subjects of phase 2 had to discontinue the use of plant sterol or stanol products or fish oils at the inclusion visit and statin treatment 6 weeks (40 days) before the start of study treatment. Thirty men participated in the phase 1 trial and 100 men in phase 2. Drugs Two types of tablets were used: 100 mg oval FM-VP4 tablets or a matching placebo. Both investigational products were supplied through Forbes Medi-Tech Inc. (Vancouver, Canada). FM-VP4 is a semisynthetic esterified plant stanol derivative produced as a disodium salt. The two major components of FM-VP4 are disodium ascorbyl sitostanol phosphate (DASP) and disodium ascorbyl campestanol phosphate (DACP) (Fig. 1), which are present in a proportion of approximately 2:1, respectively. Design This single-center, double-blind, placebo-controlled, dose-escalation trial comprised two parts: In phase 1, 30 men received a single dose of FM-VP4. Five subjects were assigned to each dose group (100, 200, 400, 800, 1,600, or 2,000 mg), and within each group, one subject was randomly assigned to placebo. Once a complete cohort of five subjects was treated and the safety parameters had been reviewed, the following dose was initiated. In the subsequent phase 2 trial, 100 men were treated for 28 days (4 weeks). Twenty-five subjects were assigned to each dose group (100, 200, 400, or 800 mg/day), and within each group, five subjects were randomly assigned to placebo. The first five subjects in each cohort were hospitalized for 5 days. Escalation to the next dosing level was only allowed once these five subjects completed treatment and all results and safety data were evaluated. Phase 1 A week before treatment, subjects visited the hospital for screening.
Informed consent was obtained, and subjects underwent a physical examination. A fasting blood sample was taken to measure lipids, biochemistry, and hematology. Within 3 days before treatment, subjects visited the hospital for another blood sample, and urinalysis and electrocardiogram (EKG) were performed. If subjects met all inclusion criteria, they were hospitalized for 24 h on the day of treatment. In the morning, subjects’ weight, supine blood pressure (BP), and heart rate were measured, and a predose fasting blood sample was taken for baseline safety parameters and pharmacokinetics. Subsequently, subjects were administered one to 20 tablets containing 100 mg FM-VP4 each or placebo. Tablets were swallowed with 250 mL of water. Breakfast followed 30 min later. Weight, supine BP, and heart rate were recorded at 3, 6, 9, and 12 h after dosing. A blood sample for pharmacokinetics was taken 6 and 12 h after dosing. Any spontaneous complaints were recorded as adverse events and closely monitored. Subjects were detained overnight under observation, and 24 h after dosing, another blood sample was taken for safety parameters and pharmacokinetics. Subjects’ weight, supine BP, and heart rate were measured, and urinalysis and an EKG were performed. Once the EKG, biochemistry, and hematology of the 24-h postdosing sample had been reviewed, subjects were discharged from the hospital. They returned to hospital 48, 96, and 144 or 168 h (6 or 7 days) after treatment for weight, supine BP, and heart rate measurements and for a blood sample to measure safety parameters and pharmacokinetics. At the last visit, a final physical examination was performed. Phase 2 Four to 8 weeks before treatment, subjects visited the hospital for screening. Informed consent was obtained, and subjects underwent a physical examination. A fasting blood sample was taken to measure lipids, biochemistry, and hematology. Subjects were instructed to follow a diet adapted from the National Cholesterol Education Program (NCEP) Step 1 diet during the entire study, including the 4- to 8-week run-in period. Consumption of plant sterol- or stanol-containing products and the use of fish oils had to be discontinued at the screening visit, and statin treatment had to be discontinued 6 weeks before treatment with FM-VP4. If they met all inclusion criteria, subjects visited the hospital halfway through the run-in period and within 3 days before study treatment for baseline blood lipids, biochemistry, and hematology measurements. At the latter visit, urinalysis was performed and an EKG recorded. The first five subjects per dose cohort, of which four were on active treatment and one on placebo, were hospitalized for 5 days. Each morning, subjects’ weight, supine BP, and heart rate were measured, and a predose fasting blood sample was taken for safety parameters and pharmacokinetics. Subsequently, subjects were administered one to four tablets containing 100 mg FM-VP4 each, depending on the dose cohort, or placebo. Tablets were swallowed whole with up to 100 mL of water. Breakfast followed 30 min later. All doses were divided and given twice per day; another one to four tablets were administered 30 min before dinner. The morning and evening doses were packaged in separate bottles. Subjects in the 100-mg group received one FM-VP4 tablet and one placebo tablet. Supine BP and heart rate were also recorded daily after lunch and dinner. Any spontaneous complaints were recorded as adverse events and closely monitored.
Subjects were observed overnight and discharged 5 days after the first dosing. All subjects visited the hospital for efficacy and safety measurements once per week during and 14 days after treatment. At the last visit, a final physical examination and an EKG were performed. Compliance was calculated based on the number of tablets supplied to the patient minus the number returned. Plasma analyses Complete blood count, fibrinogen, and the biochemical profile [alanine aminotransferase (ALT) and aspartate aminotransferase (AST), bilirubin, creatine kinase (CK), creatinine, glucose, and C-reactive protein (CRP)] were assessed using the local hospital laboratory. Changes in laboratory parameters were considered abnormal in case of ALT or AST levels greater than three times ULN, a CK level greater than five times ULN, a creatinine increase of ≥40 μmol/L compared with baseline, a creatinine level >177 μmol/L, white blood cell count <3 × 109/L, and a decrease in hemoglobin of at least 1.5 g/dL compared with baseline. Thyroid-stimulating hormone (TSH) was measured at the first screening visit. Urinalysis for blood, glucose, protein, pH, and specific gravity was performed by dipstick within 3 days before and 1 day after dosing in phase 1 and within 3 days before, after 4 weeks of treatment, and at the last visit of phase 2. Urinalysis by wet microscopic slide was performed if the dipstick analysis was abnormal. Lipid analyses were performed by an external laboratory (MRL International, Zaventem, Belgium) using standardized procedures, and LDL-C was calculated using the Friedewald equation [19]. The results were kept blinded to the investigators. Plasma was stored at −80°C for further analyses. ApoB, ApoAI, and lipoprotein(a) [Lp(a)] as well as vitamin E and vitamin A were analyzed in one run after the study had been completed. ApoAI, ApoB, and Lp(a) levels were determined by nephelometry with a Beckman Array (Mijdrecht, the Netherlands) according to the manufacturer’s instructions, and vitamin E and vitamin A were measured by high-performance liquid chromatography (HPLC) with fluorescence detection using a Chromsep Glass, 100*3 mm, inertsil 5, ODS-3 column (Varian-Chrompack, Middelburg, the Netherlands). All DASP and DACP plasma concentrations were assayed by a validated liquid chromatography–tandem mass spectrometry (LC/MS/MS) method based on a previously described method [20]. The lower limit of quantification (LLOQ) was 30.8 ng/mL. Data analysis was carried out using Microsoft Excel 97, and pharmacokinetic data were analyzed using Pharsight WinNonlin software, version 3.1 build 168 (Pharsight WinNonlin, Mountain View, CA, USA) based on noncompartmental kinetics analysis. For phase 1, concentrations of DASP and DACP in plasma were measured at baseline and 6, 12, 24, 48, 96, and 144 or 168 h after treatment. DASP and DACP concentrations were plotted against time, and the half-lives of the compounds were estimated by the method of residuals. The area under the DACP and DASP concentration–time curve (AUC0–t) was estimated by the trapezoidal rule [21]. In phase 2, trough concentrations of DASP and DACP were measured at baseline, after 8 and 28 days of treatment, and 14 days after treatment (day 42). In subjects who were hospitalized for the first 5 days of treatment, trough DASP and DACP concentrations were also measured each morning of hospitalization (day 1 through day 5).
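The noncompartmental quantities mentioned here (AUC0–t by the trapezoidal rule; half-life estimation) are straightforward to reproduce. The sketch below uses hypothetical DASP concentrations at the phase 1 sampling times, not measured study data, and applies a simple terminal log-linear fit rather than the full method of residuals:

```python
import numpy as np

t = np.array([0., 6., 12., 24., 48., 96., 168.])     # h, phase 1 sampling times
c = np.array([0., 850., 640., 390., 150., 31., 5.])  # ng/mL, hypothetical profile

auc_0_t = np.trapz(c, t)                              # linear trapezoidal AUC(0-t)

# Terminal elimination: log-linear regression over the last three samples.
slope, _ = np.polyfit(t[-3:], np.log(c[-3:]), 1)
t_half = np.log(2) / -slope

print(f"AUC(0-t) = {auc_0_t:.0f} ng*h/mL, terminal t1/2 = {t_half:.1f} h")
```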
Plant sterols and stanols β-sitosterol, campesterol, sitostanol, and campestanol were assessed by selective ion monitoring gas chromatography mass spectrometry (SIM-GC-MS). High-purity solvents were purchased from Merck, Germany. Bis-(Trimethylsilyl)trifluoroacetamide (BSTFA) was obtained from Sigma (Steinheim, Germany) and pyridine from Pierce (Rockford, IL, USA). Beta-sitosterol (24β-ethylcholesterol), β-sitostanol (24α-ethyl-5α-cholestan-3β-ol), and campesterol (24α-methyl-5-cholesten-3β-ol) were purchased from Sigma, and stigmasterol from Lacoclau AB, Sweden. For sterol extraction, 500 μl of plasma was mixed with 100 μl 0.01 mg/ml stigmasterol and saponified for 60 min at 60°C in 1 ml of 4% (w/v) KOH in 90% ethanol. After saponification, the samples were mixed with 1 ml of water and extracted two times with 2 ml of hexane. The pooled hexane extracts were dried under nitrogen and derivatized with 50 μl BSTFA/pyridine (v/v 5:1) at 60°C for 60 min. For SIM-GC-MS, 2 μl of derivative mixture were delivered by automatic injection to an HP-5890 gas chromatograph split-injection port (1:20) leading to a 0.2 mm × 25 m Chrompack CP-sil 19 CB (WCOT Fused Silica) capillary column. The injection port contained a glass wool liner. The carrier gas was helium at a flow rate of 1 ml/min. The oven temperature started at 120°C and was raised to 260°C at 20°C/min, then to 280°C at 2°C/min, and finally to 300°C at 40°C/min and held for 5 min. An HP-5989B mass spectrometer was used as detector. Measurements were done in the electron impact mode at 70 eV with an ion source temperature of 250°C. The quadrupole temperature was 150°C. Mass spectrometric data were collected in the selected ion mode at m/z=396 and 357 for β-sitosterol, m/z=488 and 373 for β-sitostanol, m/z=382 and 343 for campesterol, m/z=369 and 384 for campestanol, and m/z=255 and 394 for stigmasterol. Calibration curves were constructed by mixing 100 μl of 0.01 mg/ml stigmasterol with a series of 0- to 500-μl samples of a standard solution containing 10 μmol/l β-sitosterol, 2 μmol/l sitostanol, and 20 μmol/l campesterol. Statistical analyses The primary efficacy variable was the percent change in LDL-C after 4 weeks of treatment compared with baseline levels in phase 2. Differences in percentage LDL-C changes between the five treatment groups were calculated using analyses of variance (ANOVA) in SAS. P < 0.05 was considered statistically significant. If there was a significant difference between dose groups by ANOVA, each active dose was compared with placebo by a one-sided t test, with p < 0.025 being statistically significant. This procedure was also performed for percentage changes in total cholesterol (TC), HDL-C, and TG. For ApoB, ApoAI, Lp(a), vitamin A, vitamin E, and plant sterols, only ANOVA was performed to test differences in percentage changes between the five treatment groups. Safety data are presented descriptively. Results In phase 1, 30 male volunteers completed the study in accordance with the protocol. There were no withdrawals. The majority of subjects (93%) were of Caucasian descent. Mean subject age was 50.4 (range 24–63) years. In phase 2, 101 male subjects were enrolled, but one subject withdrew consent prior to receiving study medication due to personal reasons and therefore contributed no safety or efficacy data. The remaining 100 subjects completed the study. The majority of subjects (89%) were Caucasian, with the remainder being Asian or of another race. Mean subject age was 53.7 (range 23–75) years.
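Stepping back to the SIM-GC-MS method above: the calibration procedure amounts to a linear fit of analyte/internal-standard response ratios against known concentrations. A minimal sketch with made-up area ratios (the real calibration data are not given in the text) is:

```python
import numpy as np

conc = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 10.0])        # micromol/L beta-sitosterol standards
ratio = np.array([0.00, 0.11, 0.21, 0.43, 0.62, 1.05])  # analyte/stigmasterol peak-area ratios

slope, intercept = np.polyfit(conc, ratio, 1)            # linear calibration curve

def quantify(sample_ratio):
    """Interpolate an unknown sample's concentration from its area ratio."""
    return (sample_ratio - intercept) / slope

print(f"{quantify(0.33):.2f} micromol/L")  # ~3.1 for a ratio of 0.33
```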
As calculated from returned tablets, the mean percentage of received tablets was 97%. One subject in the 800-mg group had a compliance <80%. All other patients received >80% of the planned number of tablets during the treatment period. Five subjects received statin treatment within 40 days before the first dose of study treatment (29–35 days). Mean weight ranged between 83.9 ± 13.1 and 84.6 ± 13.0 kg during the entire study.

Adverse events

In phase 1, 23 treatment-emergent adverse events were reported by 16 subjects. Of these events, five (21.7%) occurred in the 100-mg group, three (13.0%) in each of the 200- and 400-mg groups, four (17.4%) in the 800-mg group, two (8.7%) in the 1,600-mg group, and three (13.0%) in each of the 2,000-mg and placebo groups. All reported events were mild. The most common events were dizziness, headache, and fatigue. Other adverse events were loose stools, vasovagal attack, influenza, upper respiratory tract infection, elevated bilirubin, elevated blood pressure (BP), arthralgia, difficult micturition, polyuria, and pharyngolaryngeal pain. Three events were considered to be possibly related to the study drug. One was reported by a subject receiving the placebo treatment, whose bilirubin concentration increased from 12 μmol/L on the morning of treatment (baseline) to 25 μmol/L 24 h after treatment but decreased to normal 7 days after treatment. The other two possibly related events were reported in the 800- and 1,600-mg groups, and both consisted of one episode of loose stools on the day of treatment. In phase 2, 67 subjects reported one or more treatment-emergent adverse events: 12 in the 100-mg group, 14 in the 200-mg group, 11 in the 400-mg group, 15 in the 800-mg group, and 15 in the placebo group. Most events were mild, and four subjects reported a moderate event. The most frequent event was headache, which was reported by a total of 19 subjects and by two (400-mg group) to five subjects (100-mg and 200-mg groups) in each of the five groups. The four moderate adverse events comprised headache in two subjects in the 800-mg group, an elevated CK level in one subject in the 400-mg group, and epilepsy in one subject in the placebo group. No subjects discontinued study treatment due to a treatment-emergent adverse event. One subject in the 800-mg group did not take the study medication for 3.5 days due to nausea and diarrhea, which were not considered to have been related to the study drug. Once the subject recommenced treatment, no further treatment-emergent adverse events were reported. A total of 24 subjects reported events that were considered to be drug-related: eight in the 100-mg group, one in the 200-mg group, two in the 400-mg group, six in the 800-mg group, and seven in the placebo group. The most commonly reported event was flatulence, reported by three subjects in the 100-mg group, one in the 400-mg group, two in the 800-mg group, and one in the placebo group. There were no differences in the incidence of treatment-emergent adverse events between active and placebo groups.

Blood pressure, heart rate, and EKG analysis

There was no effect of FM-VP4 on BP or heart rate during phase 1. In phase 2, the mean systolic BP was slightly decreased after 4 weeks of treatment (a maximum average of 4% in the 100-mg group; data not shown), but there was no relationship between this decrease and the dose of FM-VP4. Diastolic BP did not change. All pre- and postdose EKGs were normal in both phases of the trial.
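The compliance figure reported above follows directly from the tablet counts. A minimal sketch of the calculation, using a hypothetical subject; the 80% cut-off is the threshold referred to in the text.

```python
def percent_compliance(supplied, returned, planned):
    # Compliance = (tablets supplied - tablets returned) / tablets planned
    return 100.0 * (supplied - returned) / planned

# Hypothetical subject: 120 tablets supplied, 10 returned, 112 planned
pc = percent_compliance(supplied=120, returned=10, planned=112)
noncompliant = pc < 80.0   # flag used to exclude subjects per protocol
print(round(pc, 1), noncompliant)
```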
Laboratory analyses

In phase 1, one subject in the placebo group showed a bilirubin concentration increase from 12 μmol/L on the morning of treatment (baseline) to 25 μmol/L 24 h after treatment, as described in "Adverse events." Seven days after treatment, the bilirubin concentration had decreased to 13 μmol/L. There were no other clinically significant changes in hematology, biochemistry, plant sterols, or urinalysis measurements, and there were no differences between treatment groups. One subject who received 100 mg of FM-VP4 showed a decrease in white blood cell (WBC) count from 6.2 × 10⁹/L on the morning of treatment (baseline) to 2.9 × 10⁹/L 24 h after dosing. Seven days after dosing, the WBC count had recovered to 7.3 × 10⁹/L. This event was not considered to be clinically significant. Nine subjects had abnormal laboratory values during phase 2. Four laboratory abnormalities were considered clinically significant and were reported as adverse events. Two subjects showed a decrease in WBC count. One subject in the 800-mg group had a level of 2.2 × 10⁹/L at screening, but levels increased to 5.8 × 10⁹/L at the following visit and remained normal during the rest of the trial; this event was not considered clinically significant. A subject in the 100-mg group had a WBC count of 3 × 10⁹/L at screening, and levels fluctuated between 2.8 × 10⁹/L and 4.6 × 10⁹/L during the study. Two subjects showed an increase in CK levels that was greater than five times the ULN. In one of these patients, who was in the 400-mg group, the event was recorded as an adverse event of moderate severity. In the other subject, who was in the placebo group, the event was not recorded as an adverse event, as the patient had performed vigorous exercise the previous day. Both subjects continued with medication, and both had CK levels within normal limits at subsequent measurements. Five subjects showed a decrease in hemoglobin of at least 1.5 g/dL (0.93 mmol/L): one each in the 100-mg, 200-mg, and 400-mg groups, and two in the placebo group. However, those decreases occurred only once, and levels were restored to normal at the subsequent visit. Therefore, those changes were not considered clinically significant. There were no differences between treatment groups in the changes of vitamin A and vitamin E concentrations at week 4 compared with baseline (p = 0.32 and p = 0.38, respectively; data not shown). Overall, there were few abnormal laboratory values, and no trends were observed over time. There appeared to be no relationship between laboratory parameters and the dose of FM-VP4 administered.

Efficacy

In phase 1, changes in lipids and lipoproteins were not statistically compared between the seven treatment groups, as only a single dose of FM-VP4 was administered. In the phase 2 run-in period, mean LDL-C levels were 3.94 ± 0.71 mmol/L at 4–8 weeks before baseline, 4.19 ± 0.78 mmol/L halfway through the run-in period, and 4.16 ± 0.76 mmol/L at baseline. The absolute lipoprotein levels at baseline and after 4 weeks of treatment per dose group and the mean percentage change in LDL-C levels are presented in Table 1. The percent changes of LDL-C by visit are also depicted in Fig. 2. ANOVA showed a borderline statistically significant difference in the percentage change in LDL-C and HDL-C, and pairwise comparisons between each active dose and placebo revealed that 400 mg of FM-VP4 reduced LDL-C significantly (p = 0.02). In the placebo group, LDL-C increased by 2.7%, and in the 400-mg/day group LDL-C was reduced by 6.5%.
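The two-step testing scheme behind these comparisons (omnibus ANOVA, then one-sided t tests against placebo only when the ANOVA is significant, at p < 0.025) can be sketched as follows. The group data here are hypothetical draws centered on the reported percent changes, purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical percent changes in LDL-C per group (n = 20 each)
placebo = rng.normal(2.7, 8.0, 20)
doses = {100: rng.normal(2.9, 8.0, 20), 200: rng.normal(-4.2, 8.0, 20),
         400: rng.normal(-6.5, 8.0, 20), 800: rng.normal(-4.6, 8.0, 20)}

# Step 1: omnibus one-way ANOVA across the five treatment groups
f_stat, p_anova = stats.f_oneway(placebo, *doses.values())

# Step 2: only if the ANOVA is significant (p < 0.05), compare each active
# dose with placebo by a one-sided t test for a reduction, alpha = 0.025
if p_anova < 0.05:
    for dose, grp in doses.items():
        t_stat, p = stats.ttest_ind(grp, placebo, alternative="less")
        print(dose, round(p, 4), p < 0.025)
```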
There were no statistically significant differences in the percentage change of LDL-C between the other active doses and placebo, and there were no statistically significant differences in the percentage change of HDL-C between any of the active doses and placebo (Table 1). When statistical analysis was carried out on a per-protocol basis, excluding subjects who were noncompliant (n = 1) or who received statins within 40 days before study treatment (n = 5), a dose–response in percentage change from baseline in LDL-C was observed: changes in the placebo group and in the 100-, 200-, 400-, and 800-mg/day groups were 3.7%, 0.9%, −4.1%, −6.9%, and −6.2%, respectively. When using these per-protocol data, the decreases in the 400- and 800-mg/day groups were significantly different from placebo (p = 0.007 and p = 0.01, respectively). The percentage change in ApoB at week 4 compared with baseline differed significantly between treatment groups (p = 0.007). Absolute changes in ApoB were 0.017, 0.063, −0.058, −0.047, and −0.038 g/L in the placebo, 100-, 200-, 400-, and 800-mg/day groups, respectively. There were no differences between treatment groups in ApoAI and Lp(a) changes at week 4 compared with baseline (data not shown).

Table 1 Plasma lipid and lipoprotein concentrations (mmol/L) before and after 4 weeks of treatment with placebo or with 100, 200, 400, or 800 mg/day of FM-VP4 (disodium ascorbyl campestanol phosphate and disodium ascorbyl sitostanol phosphate) in phase 2 (n = 20 per dose)

|                   | Placebo     | 100 mg      | 200 mg      | 400 mg      | 800 mg      | P value^a |
|-------------------|-------------|-------------|-------------|-------------|-------------|-----------|
| Total cholesterol |             |             |             |             |             |           |
|   Baseline        | 6.36 ± 0.65 | 6.40 ± 0.87 | 6.40 ± 0.88 | 6.04 ± 1.05 | 6.02 ± 0.90 |           |
|   Day 28          | 6.50 ± 0.72 | 6.49 ± 1.12 | 6.26 ± 0.78 | 5.81 ± 1.05 | 5.90 ± 0.90 |           |
|   % Change        | 2.6%        | 1.2%        | −1.7%       | −3.8%       | −1.7%       | 0.09      |
| LDL               |             |             |             |             |             |           |
|   Baseline        | 4.29 ± 0.63 | 4.17 ± 0.58 | 4.40 ± 0.77 | 4.02 ± 0.93 | 3.94 ± 0.83 |           |
|   Day 28          | 4.38 ± 0.79 | 4.28 ± 0.78 | 4.18 ± 0.63 | 3.75 ± 0.88 | 3.73 ± 0.81 |           |
|   % Change        | 2.7%        | 2.9%        | −4.2%       | −6.5%^b     | −4.6%       | 0.05      |
| HDL               |             |             |             |             |             |           |
|   Baseline        | 1.37 ± 0.32 | 1.34 ± 0.38 | 1.20 ± 0.25 | 1.24 ± 0.30 | 1.21 ± 0.27 |           |
|   Day 28          | 1.37 ± 0.37 | 1.29 ± 0.34 | 1.28 ± 0.31 | 1.22 ± 0.27 | 1.26 ± 0.37 |           |
|   % Change        | −0.1%       | −3.6%       | 6.7%        | −1.2%       | 4.1%        | 0.04      |
| TG                |             |             |             |             |             |           |
|   Baseline        | 1.56 ± 0.77 | 1.86 ± 1.30 | 1.77 ± 0.72 | 1.73 ± 0.72 | 1.92 ± 0.68 |           |
|   Day 28          | 1.65 ± 0.86 | 1.95 ± 1.16 | 1.76 ± 1.08 | 1.86 ± 0.95 | 1.97 ± 1.00 |           |
|   % Change        | 8.2%        | 16.3%       | 0.7%        | 7.5%        | 9.7%        | 0.8       |

All values are mean ± standard deviation. LDL low-density lipoprotein, HDL high-density lipoprotein, TG triglycerides.
^a Differences between all treatment groups were analyzed by analysis of variance; if the difference between treatments was statistically significant, each active treatment group was individually compared with placebo.
^b P < 0.025 (one-sided) as compared with the change in the placebo group.

Fig. 2 Mean low-density lipoprotein cholesterol levels during 28 days of treatment with placebo or 100, 200, 400, and 800 mg/day FM-VP4 (disodium ascorbyl campestanol phosphate and disodium ascorbyl sitostanol phosphate) and after 14 days of follow-up

Pharmacokinetics

In the 100- and 200-mg cohorts of phase 1, the majority of plasma samples had DASP and DACP concentrations below the detection level. Therefore, the AUC0→t and plasma elimination half-life (t1/2) values were not evaluable, and for that reason those data are not presented. For the groups of 400 mg or higher, the peak DACP level was reached 6–24 h after FM-VP4 administration (tmax), and DASP tmax was reached 12–24 h postdose, with the exception of one subject in the 1,600-mg group who had a tmax of 49 h (Table 2). Mean t1/2 of all eligible subjects was 57 h (2–3 days), ranging from 16 to 134 h for DACP and from 29 to 90 h for DASP.
Table 2 Pharmacokinetic parameters of disodium ascorbyl campestanol phosphate (DACP) and disodium ascorbyl sitostanol phosphate (DASP) in 24 subjects after a single dose of 400, 800, 1,600, or 2,000 mg FM-VP4 (DACP and DASP) in phase 1

| Dose level (mg) | n | t1/2 (h)    | Cmax (ng/mL)  | tmax (h)    | AUC0→∞ (h·ng/mL) | AUC0→t (h·ng/mL) |
|-----------------|---|-------------|---------------|-------------|------------------|------------------|
| DACP            |   |             |               |             |                  |                  |
|   400           | 4 | 79.6 ± 42.3 | 100.6 ± 25.5  | 10.7 ± 3.0  | 8,734 ± 3,864    | 5,942 ± 2,646    |
|   800           | 4 | 46.2 ± 21.8 | 144.5 ± 43.6  | 12.1 ± 0.1  | 7,672 ± 2,479    | 5,787 ± 2,045    |
|   1,600         | 4 | 35.9 ± 11.1 | 175.6 ± 106.5 | 7.9 ± 3.2   | 11,736 ± 11,455  | 9,314 ± 10,910   |
|   2,000         | 4 | 64.0 ± 28.3 | 190.9 ± 47.2  | 18.3 ± 6.4  | 17,179 ± 5,685   | 11,414 ± 4,318   |
| DASP            |   |             |               |             |                  |                  |
|   400           | 4 | 77.3 ± 11.6 | 247.9 ± 55.1  | 12.1 ± 0.1  | 28,142 ± 7,153   | 21,115 ± 5,255   |
|   800           | 4 | 52.2 ± 19.0 | 339.7 ± 86.9  | 12.1 ± 0.1  | 28,872 ± 8,183   | 22,907 ± 9,382   |
|   1,600         | 4 | 53.0 ± 14.9 | 344.3 ± 225.8 | 24.9 ± 21.6 | 39,029 ± 38,428  | 30,243 ± 30,624  |
|   2,000         | 4 | 43.6 ± 21.8 | 495.9 ± 89.0  | 18.5 ± 6.6  | 35,642 ± 16,584  | 29,204 ± 14,549  |

Mean ± standard deviation. t1/2 half-life in plasma, Cmax peak concentration, tmax time to reach peak concentration, AUC0→t area under the concentration–time curve, where t is the last time of blood sampling at 168 h (7 days) after treatment.

AUC0→t strongly correlated with the dose (400–2,000 mg) of FM-VP4 for DACP (R² = 0.93) and DASP (R² = 0.90). However, AUC increased in a less than dose-proportional manner. Between 400 mg and 2,000 mg, DACP and DASP AUC increased approximately 2-fold and 1.3-fold, respectively, for a fivefold increase in dose (Table 2). In phase 2, trough DACP and DASP concentrations were at or near steady-state levels by day 8 in the subset of subjects per active dose group who were hospitalized and sampled during the first week of dosing (Fig. 3). The mean concentrations for all subjects in each cohort on day 8 were about the same as the mean concentrations on day 28, at the end of the treatment period. At the end of the follow-up period, 14 days after the final dose, DASP was present in most or all of the plasma samples in the 200-, 400-, and 800-mg groups. DACP was mainly present in plasma samples of subjects in the 800-mg group. The majority of plasma samples for DACP in the 100- and 200-mg groups and for DASP in the 100-mg group were below the detection limit 14 days after treatment.

Fig. 3 Mean trough concentrations of disodium ascorbyl sitostanol phosphate (DASP) and disodium ascorbyl campestanol phosphate (DACP) during 28 days of dosing and 14 days of postdosing follow-up. The dots of days 2–5 represent the 16 hospitalized subjects on FM-VP4 (DACP and DASP), whereas the dots of days 8, 28, and 42 represent all 80 subjects on FM-VP4. DASP and DACP were not present in plasma at day 1 (baseline), and DACP concentrations were mostly below the detection limit during the first 5 days of intake in the 100-mg group. The mean levels of DASP in the 100-mg group and DACP in the 200-mg group were below the lower limit of quantification (LLOQ) because some of the subjects had levels below the LLOQ, which were regarded as 0 ng/ml.

DASP and DACP concentrations increased in a less than dose-proportional manner on days 8 and 28, as well as on day 42, 14 days after treatment.

Plant sterol and stanol concentrations

In phase 1, concentrations of campestanol and sitostanol did not change within 24 h and 7 days after a single dose of FM-VP4. Also, concentrations of campesterol and β-sitosterol were not affected after a single dose of FM-VP4 (data not shown).
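A standard way to quantify the "less than dose-proportional" increase noted above is the power model AUC = a · dose^b, where b = 1 indicates exact proportionality. The sketch below fits that model to the mean AUC0→t values from Table 2; it is an illustrative re-analysis, not a calculation reported in the study.

```python
import numpy as np

dose = np.array([400, 800, 1600, 2000], float)
auc_dacp = np.array([5942, 5787, 9314, 11414], float)    # mean AUC0->t, Table 2
auc_dasp = np.array([21115, 22907, 30243, 29204], float)

for name, auc in [("DACP", auc_dacp), ("DASP", auc_dasp)]:
    # Fit log(AUC) = log(a) + b*log(dose); the slope b is the
    # dose-proportionality exponent (b < 1 => less than proportional)
    b, log_a = np.polyfit(np.log(dose), np.log(auc), 1)
    print(name, "exponent b =", round(b, 2))
```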
In phase 2, sitostanol and campestanol apparently rose in a dose-dependent fashion over the 28-day period, with the maximal percent change from baseline being 87% for campestanol and 178% for sitostanol in the 800-mg group (Table 3). Campesterol and sitosterol did not change from baseline to day 28 (Table 3).

Table 3 Concentrations (ng/mL) of plant stanols and sterols in 100 subjects after 4 weeks of treatment with 100, 200, 400, or 800 mg/day FM-VP4 (disodium ascorbyl campestanol phosphate and disodium ascorbyl sitostanol phosphate) or placebo in phase 2

| Plant stanol or sterol | Placebo (n = 20)  | 100 (n = 20)      | 200 (n = 20)       | 400 (n = 20)      | 800 (n = 20)      | P value |
|------------------------|-------------------|-------------------|--------------------|-------------------|-------------------|---------|
| Sitostanol             |                   |                   |                    |                   |                   |         |
|   Baseline             | 77.9 ± 33.3       | 79.6 ± 30.8       | 90.4 ± 32.9        | 75.0 ± 28.3       | 63.8 ± 19.2       |         |
|   Day 28               | 102.5 ± 39.2      | 104.2 ± 38.3      | 132.5 ± 56.3       | 178.3 ± 63.8      | 168.3 ± 44.6      |         |
|   % Change             | 41 ± 58           | 37 ± 46           | 48 ± 37            | 159 ± 99          | 178 ± 83          | <0.0001 |
| Campestanol            |                   |                   |                    |                   |                   |         |
|   Baseline             | 116.0 ± 84.6      | 91.0 ± 36.6       | 104.3 ± 81.7       | 100.7 ± 57.2      | 83.4 ± 18.9       |         |
|   Day 28               | 140.9 ± 77.3      | 107.1 ± 36.6      | 134.1 ± 78.9       | 156.2 ± 72.5      | 150.2 ± 37.9      |         |
|   % Change             | 39 ± 70           | 21 ± 22           | 43 ± 50            | 70 ± 61           | 87 ± 57           | 0.002   |
| Sitosterol             |                   |                   |                    |                   |                   |         |
|   Baseline             | 2,998.3 ± 1,842.5 | 3,579.3 ± 2,084.3 | 2,715.0 ± 10,725.8 | 2,704.3 ± 1,071.2 | 2,276.3 ± 993.2   |         |
|   Day 28               | 3,036.0 ± 1,164.9 | 3,782.1 ± 2,377.9 | 2,744.9 ± 1,256.1  | 3,138.4 ± 1,632.7 | 2,316.9 ± 734.4   |         |
|   % Change             | 35 ± 147          | 5 ± 20            | 2 ± 23             | 17 ± 40           | 10 ± 38           | 0.61    |
| Campesterol            |                   |                   |                    |                   |                   |         |
|   Baseline             | 3,275.2 ± 2,017.2 | 3,699.1 ± 1,914.8 | 2,850.4 ± 1,289.0  | 2,791.1 ± 1,403.2 | 2,370.8 ± 1,036.6 |         |
|   Day 28               | 3,212.3 ± 1,375.9 | 3,870.2 ± 2,252.6 | 3,002.7 ± 1,439.2  | 3,031.5 ± 1,500.1 | 2,416.9 ± 917.6   |         |
|   % Change             | 16 ± 80           | 4 ± 22            | 6 ± 19             | 12 ± 35           | 8 ± 35            | 0.91    |

Mean ± standard deviation. Differences between all treatment groups were analyzed by analysis of variance.

Discussion

In this study, we showed that a single dose of 100–2,000 mg, as well as 4-week treatment at doses of 100–800 mg/day, of FM-VP4 administered to moderately dyslipidemic men was well tolerated and safe. Furthermore, 4-week treatment with FM-VP4 reduced LDL-C levels by 6–7% compared with baseline, or by 9–11% compared with placebo. The main treatment-emergent adverse events were dizziness, headache, and fatigue after single-dose administration and headache after multiple-dose administration of the drug. All symptoms resolved spontaneously, and subjects receiving placebo also reported these symptoms. As there was no difference in incidence between active and placebo groups, it is unlikely that the treatment-emergent adverse events were due to FM-VP4. No drug-related abnormalities could be identified by laboratory safety tests. In phase 1, one subject receiving 100 mg of FM-VP4 showed a decrease in WBC count 24 h after treatment, but this was considered unrelated to the study drug and returned to normal 7 days after treatment. In phase 2, one subject in the 100-mg group and one in the 800-mg group also showed low WBC counts, but these abnormalities were already present at baseline. One subject in the placebo group, but also one in the 400-mg group, showed increased CK levels at one visit. However, we think this is no reason for concern. First, the patient in the 400-mg group had no complaints. Second, the levels had returned to normal by the next visit without discontinuation of medication. Because no adverse events have been reported for plant sterols in general [8], we think this single CK increase was likely a chance finding. In addition, five subjects from various dose groups showed a hemoglobin decrease at one of the visits. These decreases were nonpersistent, and levels had returned to normal by subsequent visits.
No EKG or hemodynamic abnormalities were detected after administration of FM-VP4. Also, no liver or kidney abnormalities were identified. Thus, in terms of safety tests, there was no evidence that treatment with FM-VP4 caused any acute or delayed toxicity. In phase 1, DACP and DASP were absorbed slowly into plasma, with a tmax of approximately 12 h for both components. Clearance was also rather slow, with an average elimination t1/2 of 57 h. Furthermore, the t1/2 from phase 1 can be used to estimate the time to reach steady-state concentrations by calculating three times t1/2, which is 3 × 57 = 171 h, or 7.1 days. This estimated time to reach steady state corresponds with the phase 2 data, which showed a steady state after approximately 8 days, as the mean trough concentration at day 8 was similar to the mean trough concentration after 4 weeks of treatment. The low plasma concentrations of DACP and DASP suggest low bioavailability of FM-VP4 in humans. This is supported by findings with other plant sterol structures. The bioavailability from 600 mg unesterified soy plant stanols was only 0.04% for sitostanol and 0.15% for campestanol [22], and in another study, the absorption of campestanol from a margarine spread containing 540 mg campestanol as fatty acid esters was 5.5% in healthy subjects [23]. In both studies, absorption of plant sterols was measured by intravenous injection and oral administration of isotopically labeled sterols in the fasting state [22, 23]. As we did not administer labeled FM-VP4 intravenously, we were not able to estimate the bioavailability of FM-VP4 in humans or to investigate whether the esterified ascorbate group affects its absorption. Future studies are needed to assess the bioavailability of FM-VP4 in humans. Repeated doses of FM-VP4 in phase 2 appeared to increase sitostanol levels in plasma by up to 178% and campestanol by 87%, whereas the corresponding sterol components did not change. This rise in campestanol and sitostanol appeared to increase with dose, but in a less than proportional manner. Furthermore, the difference in increase from baseline between sitostanol and campestanol was consistent with the approximately 2:1 difference in plasma concentrations of DASP and DACP, which was most clear in the 400- and 800-mg groups. We speculate that conversion of DASP and DACP to free stanols may have occurred in vivo or that FM-VP4 alters the metabolism of plant stanols. Nevertheless, the absolute increase in total stanol levels remains relatively small, as the stanol contribution is <5% of total levels (Table 3). Therefore, the clinical relevance of such increases is probably limited. Based on our data, we may speculate on the mechanism by which FM-VP4 exerts its LDL-C-lowering effect. Plant sterols and stanols are thought to compete with cholesterol for incorporation into mixed micelles, the vehicles that transport sterols to the enterocyte. In the case of FM-VP4, the solubility of the plant stanols in mixed micelles has been improved by the addition of a hydrophilic ascorbyl residue to the hydrophobic campestanol and sitostanol tail through a phosphodiester linkage. In fact, this new structure allows self-assembly into micelle structures in aqueous media in the absence of bile salts [24].
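The 3 × t1/2 rule of thumb used above for the time to steady state follows from first-order accumulation kinetics: with repeated dosing, the approach to steady state is 1 − 2^(−t/t1/2), so three half-lives correspond to about 87.5% of steady state. A minimal sketch using the phase 1 mean half-life:

```python
def fraction_of_steady_state(t_h, t_half_h):
    # First-order (linear) kinetics: accumulation toward steady state
    # follows 1 - 2**(-t / t_half) under repeated dosing
    return 1.0 - 2.0 ** (-t_h / t_half_h)

t_half = 57.0   # mean elimination t1/2 from phase 1 (h)
print(fraction_of_steady_state(3 * t_half, t_half))   # ~0.875 after 3 x t1/2
print(fraction_of_steady_state(8 * 24.0, t_half))     # day 8: ~0.90, consistent
                                                      # with the phase 2 troughs
```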
Although the assembly of micelles by FM-VP4 has never been directly compared with that of other types of plant sterols, the incorporation of plant stanols and sterols, as well as cholesterol, into mixed micelles normally requires the addition of bile salts in vitro and in vivo. Furthermore, in animals, FM-VP4 was shown to be more potent than unesterified plant stanols in a direct comparison when administered in the diet [16]. The results of the current clinical trial in humans showed that FM-VP4 lowered LDL-C by 6–7% compared with baseline, whereas the control group showed a 3–4% increase in plasma LDL-C levels. This means that, compared with placebo, FM-VP4 showed LDL-C decreases of 9–11% when administered at 400–800 mg/day over 4 weeks. This effect has also been reached with plant sterol and stanol esters, but at doses two to three times higher than the doses of FM-VP4 used here [8]. Thus, the higher solubility of FM-VP4 in mixed micelles may lead to more efficacious competition with cholesterol for incorporation into the micelles, and as a consequence, a lower dose of FM-VP4 is required to realize an LDL-C-lowering effect equal to that of plant sterol and stanol esters. However, the optimal treatment dose, duration, schedule, and formulation for FM-VP4 have yet to be determined. On the other hand, it has also been suggested that, in addition to competing with cholesterol for incorporation into the micelles, plant sterol and stanol esters may activate liver X receptor (LXR) target genes within the enterocyte [25]. It is unknown whether DACP and DASP activate such genes within the enterocyte or whether there may be systemic effects or additional mechanisms that decrease cholesterol absorption. Further research is needed to elucidate the mechanism by which the various types of plant sterol and stanol esters lower LDL-C levels. In conclusion, this study demonstrated that single and multiple doses of FM-VP4 for 4 weeks are safe and well tolerated by moderately hypercholesterolemic subjects. Furthermore, the higher doses of FM-VP4 significantly reduced LDL-C levels, by 6–7% compared with baseline or by 9–11% compared with placebo. The pharmacokinetics showed that DACP and DASP are absorbed and cleared slowly but that the absolute quantity of drug absorbed is low, as suggested by the low plasma concentrations of DACP and DASP. This study suggests that FM-VP4 merits further investigation as an alternative treatment for hyperlipidemia.
[ "safety", "pharmacokinetics", "fm-vp4", "plant sterols", "hypercholesterolemia" ]
[ "P", "P", "P", "P", "U" ]
Invert_Neurosci-4-1-2413115
Crystal structures of Lymnaea stagnalis AChBP in complex with neonicotinoid insecticides imidacloprid and clothianidin
Neonicotinoid insecticides, which act on nicotinic acetylcholine receptors (nAChRs) in a variety of ways, have extremely low mammalian toxicity, yet the molecular basis of such actions is poorly understood. To elucidate the molecular basis for nAChR–neonicotinoid interactions, a surrogate protein, the acetylcholine binding protein from Lymnaea stagnalis (Ls-AChBP), was crystallized in complex with the neonicotinoid insecticides imidacloprid (IMI) or clothianidin (CTD). The crystal structures suggested that the guanidine moiety of IMI and CTD stacks with Tyr185, while the nitro group of IMI, but not of CTD, makes a hydrogen bond with Gln55. IMI showed higher binding affinity for Ls-AChBP than CTD, consistent with weaker CH–π interactions in the Ls-AChBP–CTD complex than in the Ls-AChBP–IMI complex and the lack of the nitro group–Gln55 hydrogen bond in CTD. Yet the NH at position 1 of CTD makes a hydrogen bond with the backbone carbonyl of Trp143, offering an explanation for the diverse actions of neonicotinoids on nAChRs.

Introduction

Nicotinic acetylcholine receptors (nAChRs) are ligand-gated ion channels that mediate fast-acting excitatory cholinergic neurotransmission in vertebrates and invertebrates. Each nAChR molecule consists of five subunits; acetylcholine (ACh) binds at the extracellular ligand-binding domain (LBD) at the subunit interfaces, and a central, cation-permeable ion channel opens transiently in response to ACh and agonists (Karlin 2002; Sine and Engel 2006). Each subunit possesses an N-terminal extracellular domain with a conserved di-cysteine loop, four transmembrane regions (TM1–TM4, of which TM2 provides most of the channel-lining residues), and a large intracellular loop between TM3 and TM4 (Karlin 2002). Usually, nAChRs are made up of α and non-α subunits (Fig. 1a), the α subunits being defined by adjacent cysteines in loop C of the LBD. The ACh binding site is located at the LBD interfaces and is formed by loops A, B and C of the α subunit and loops D, E and F, which are normally located in a non-α subunit (Corringer et al. 2000; Karlin 2002; Lindstrom 2003). However, α7, α8 and α9 subunits can form functional homopentamers (Couturier et al. 1990; Elgoyhen et al. 1994; Gerzanich et al. 1994) (Fig. 1a), whereas α9 and α10 subunits form heteropentamers (Elgoyhen et al. 2001). When only α subunits are present, they donate not only loops A–C, but also loops D–F.

Fig. 1 Nicotinic acetylcholine receptors (nAChRs) and the acetylcholine binding protein (AChBP). a Schematic representation of neuronal nicotinic acetylcholine receptors and the acetylcholine binding protein. b Chemical structures of natural nicotinic ligands and neonicotinoids [imidacloprid (IMI) and clothianidin (CTD)] used in this study. c Multiple sequence alignment of AChBP with the ligand-binding domain of nAChR subunits. Lymnaea stagnalis AChBP (Ls-AChBP) and N-terminal LBD sequences of insect (species names colored magenta) and vertebrate (species names colored black) nAChRs were aligned using the ClustalW2 program (Larkin et al. 2007) with default parameters, and the details were manually adjusted. Secondary structure elements from the Ls-AChBP–IMI complex are indicated above the alignment. Amino acids involved in the interactions with neonicotinoids are highlighted in bold with yellow and cyan backgrounds on the principal (+)- and complementary (-)-sides, respectively.
Amino acid residue numbering from the initiator methionine of each protein is indicated at the top of each sequence.

In mammals, nAChRs play central roles in neuromuscular and inter-neuronal cholinergic neurotransmission. Mutations in muscle nAChR subunits account for many cases of congenital myasthenia syndrome (Engel et al. 2003) and also multiple pterygium (Morgan et al. 2006), whereas mutations in neuronal nAChR subunits (α4, β2) can produce epilepsies (Aridon et al. 2006; Steinlein 2004). It has also been shown that cholinergic neurotransmission is reduced in Alzheimer's (Marutle et al. 1999) and Parkinson's diseases as well as in schizophrenia (Woodruff-Pak and Gould 2002). Further, in some cases of Alzheimer's disease, loss of nAChRs has been reported (Court et al. 2001). Thus, there is growing interest in nAChRs as potential drug targets (Arneric et al. 2007; Dani and Bertrand 2007). In the insect nervous system, where ACh is the primary excitatory neurotransmitter, nAChRs are present at densities comparable to those in the electric organ of the electric fish, Electrophorus electricus (Sattelle 1980), and several classes of insecticides (cartap, bensultap, thiocyclam, spinosad and neonicotinoids) target insect nAChRs (Narahashi 2000; Raymond-Delpech et al. 2005; Millar and Denholm 2007). Of these, neonicotinoids (Fig. 1b) are increasingly used not only for controlling crop pests worldwide, but also in animal health applications such as flea and louse control. Imidacloprid (IMI; Fig. 1b), the first insecticide to reach sales of a billion dollars per annum, is a partial agonist of native (Deglise et al. 2002; Ihara et al. 2006; Tan et al. 2007) as well as recombinant (Ihara et al. 2003) nAChRs expressed in Xenopus laevis oocytes. On the other hand, clothianidin (CTD; Fig. 1b) and its analog evoke supra-maximal responses (with reference to ACh responses) not only in Drosophila Dα2/chicken β2 hybrid nAChRs (Ihara et al. 2004) expressed in Xenopus oocytes, but also in native Drosophila central cholinergic neurons (Brown et al. 2006). Single-channel recording has shown that a CTD analog induces opening of the Drosophila nAChR channels at the largest conductance state more frequently than ACh, offering a possible explanation for its super-agonist action (Brown et al. 2006). Contrasting with these actions, some neonicotinoids (Ihara et al. 2006; Salgado and Saar 2004), and bis-neonicotinoids containing two neonicotinoid units joined by an alkyl chain (Ihara et al. 2007a), antagonize the acetylcholine-induced responses of native insect neurons. Neonicotinoids show higher affinity for insect nAChRs, accounting, at least in part, for their selective toxicity to insects over vertebrates (Matsuda et al. 2001, 2005; Tomizawa and Casida 2005). Our studies using site-directed mutagenesis combined with two-electrode voltage-clamp electrophysiology have shown that the X residue in the YXCC motif of loop C (Shimomura et al. 2004), the region upstream of loop B (Shimomura et al. 2005) and basic residues in loop D (Shimomura et al. 2002, 2003, 2006) contribute to the high neonicotinoid sensitivity of insect nAChRs (see Fig. 1c for loop positions). The acetylcholine binding protein (AChBP) from the snail Lymnaea stagnalis (Ls-AChBP) was discovered in glial cells as a water-soluble protein modulating synaptic ACh concentration (Smit et al. 2001), and the X-ray crystal structure of the protein was described at the same time (Brejc et al. 2001).
Subsequently, X-ray crystal structures of Ls-AChBP homologs from Aplysia californica (Ac-AChBP) (Bourne et al. 2005) and Bulinus truncatus (Bt-AChBP) (Celie et al. 2005b) were reported. Both Ls- and Ac-AChBPs show similarities to the extracellular domain of neuronal α7 nAChRs and form homopentamers in solution (Smit et al. 2001). Notably, all six regions (loops A–F) that make up the ligand binding site are conserved in AChBPs. Thus, AChBPs are considered surrogates of the LBD of nAChRs and have therefore been employed to elucidate the mechanisms of interaction with low-molecular-weight ligands as well as peptide toxin antagonists (Bourne et al. 2005; Celie et al. 2004, 2005; Dutertre et al. 2007; Hansen et al. 2005; Hansen and Taylor 2007; Ulens et al. 2006). Employing the crystal structures of AChBPs, we have previously modeled LBDs in complex with IMI to show that basic residues in loop D may interact electrostatically with the nitro group of neonicotinoids (Shimomura et al. 2006). Yet, details of neonicotinoid–nAChR interactions remain poorly understood. Therefore, we have crystallized the Ls-AChBP in complex with two commercial neonicotinoids, IMI and CTD.

Materials and methods

Materials

Lymnaea stagnalis derived from the stocks of the Vrije Universiteit (Amsterdam) was provided by Professor Sakakibara at Tokai University. IMI and CTD were gifts from Bayer CropScience Co. in Japan. 1-(6-Bromopyridylmethyl)-2-nitroiminoimidazolidine (bromine-substituted imidacloprid; Br-IMI) and N-(2-bromo-thiazol-5-ylmethyl)-N′-methyl-N′′-nitroguanidine (bromine-substituted clothianidin; Br-CTD) were synthesized according to published methods (Maienfisch et al. 2000; Nishiwaki et al. 2000).

Protein preparation

The cDNA of Ls-AChBP was amplified by RT-PCR from L. stagnalis and cloned into the pPICZα B vector (Invitrogen) at the Pst I site, with an L1A mutation introduced to facilitate cloning. The Ls-AChBPs were expressed in the yeast Pichia pastoris X-33 using the EasySelect Pichia Expression Kit (Invitrogen). Secreted proteins were purified with a Q-Sepharose column (GE Healthcare) immediately following concentration and buffer exchange using Vivaflow200 (Sartorius), and then treated with 50 U/mg protein of peptide-N-glycosidase F (Wako Pure Chemical Industries) at 37°C for 24 h to remove the glycosyl chain at Asn67. The protein samples were subsequently purified with Mono Q and Superdex 200 columns (GE Healthcare). Purified proteins were confirmed to be the samples of interest by N-terminal sequencing and mass spectrometry, and were buffer-exchanged into 20 mM Tris–HCl buffer with 0.02% NaN3 (pH 8.0).

Evaluation of equilibrium binding of neonicotinoids

The equilibrium binding of neonicotinoids to Ls-AChBP was determined by quenching of intrinsic tryptophan fluorescence. Ls-AChBP at 600 nM binding-site concentration in 20 mM Tris–HCl buffer with 0.02% NaN3 (pH 8.0) was incubated with each ligand at various concentrations on a 96-well plate. Ls-AChBP was excited at 280 nm, and emission intensity was recorded at 342 nm using a Varioskan microplate reader (Thermo Fisher Scientific) at room temperature. Data were fitted by non-linear regression using Prism software version 4.03 (GraphPad Software) according to the following single-site binding equation, which accounts for ligand depletion:

ΔF/F0 = (ΔFmax/F0) × {(Kd + [Protein] + X) − √((Kd + [Protein] + X)² − 4[Protein]X)} / (2[Protein])

where ΔF and ΔFmax represent the quenching of fluorescence at a ligand concentration X and the maximum quenching of fluorescence at saturation, respectively.
F0 and [Protein] are the fluorescence measured in the absence of ligand and the concentration of Ls-AChBP, respectively.

Crystallization and X-ray data collection

Purified Ls-AChBPs (6.0 mg/mL) were incubated with 0.5 mM neonicotinoids at 4°C for 1 h prior to crystallization. Ls-AChBP–neonicotinoid complex crystals were obtained by the vapor diffusion method at 20°C with a 1:1 ratio of protein to reservoir solution containing 0.2 M Na citrate pH 5.7, 15–22% PEG3350 and about 0.5 mM of either IMI, CTD, Br-IMI or Br-CTD. The crystals were flash-cooled in liquid nitrogen after soaking in cryoprotectant solutions containing 0.2 M Na citrate pH 5.7, 25% PEG3350, 20% glycerol and about 0.5 mM of each neonicotinoid. X-ray diffraction data sets were collected at 90 K using either a Bruker AXS DIP6040 detector at BL44XU (Yoshimura et al. 2007) or an ADSC QUANTUM 210 detector at BL44B2 (Adachi et al. 2001) beamlines at SPring-8, and were processed with Mosflm (Leslie 1992) or HKL2000 (Otwinowski and Minor 1997). In order to identify the positions of the sulfur atom of the thiazole ring of CTD as well as the bromine atoms of Br-IMI and Br-CTD, anomalous data from crystals complexed with these neonicotinoids were collected at wavelengths of 1.75 and 0.919 Å, respectively (Table 1).

Table 1 X-ray diffraction data collection and refinement statistics for Ls-AChBP–neonicotinoid complexes (see text for abbreviations of the neonicotinoid names)

|                                  | IMI         | CTD         | Br-IMI      | Br-CTD      | CTD (S ano)^a |
|----------------------------------|-------------|-------------|-------------|-------------|---------------|
| Data collection                  |             |             |             |             |               |
|   Beamline^b                     | BL44XU      | BL44B2      | BL44B2      | BL44B2      | BL44B2        |
|   Wavelength (Å)                 | 0.900       | 0.919       | 0.919       | 0.919       | 1.75          |
|   Space group                    | P65         | P65         | P65         | P65         | P65           |
|   Cell dimensions a, c (Å)       | 75.0, 351.0 | 74.6, 351.0 | 74.8, 350.9 | 74.5, 351.2 | 74.6, 351.3   |
|   Resolution (Å)                 | 15.8–2.56   | 50.0–2.70   | 50.0–2.70   | 50.0–2.90   | 50.0–2.90     |
|   Rsym (%)^c,d                   | 8.0 (41.3)  | 8.5 (38.9)  | 9.7 (34.1)  | 11.8 (34.1) | 8.7 (39.5)    |
|   I/σ^c                          | 8.1 (1.7)   | 20.7 (4.0)  | 47.0 (10.7) | 45.5 (10.9) | 60.8 (9.32)   |
|   Completeness (%)^c             | 94.8 (96.7) | 100 (100)   | 100 (100)   | 100 (100)   | 99.5 (96.2)   |
|   Redundancy^c                   | 2.6 (2.6)   | 5.7 (5.7)   | 23.0 (23.3) | 22.5 (23.2) | 21.6 (20.5)   |
| Refinement                       |             |             |             |             |               |
|   Resolution (Å)                 | 15.8–2.58   | 50.0–2.70   |             |             |               |
|   No. of reflections             | 33031       | 30121       |             |             |               |
|   R/Rfree (%)^e                  | 20.3/27.7   | 20.2/27.0   |             |             |               |
|   Bond lengths (Å)/angles (deg)^f | 0.008/1.3  | 0.008/1.3   |             |             |               |
|   Average B factor               | 41.8        | 41.6        |             |             |               |
|   PDB accession code^g           | 2ZJU        | 2ZJV        |             |             |               |

^a Data set for detection of anomalous peaks at sulfur atoms. ^b All data sets were collected at SPring-8. ^c Values in parentheses refer to data in the highest-resolution shells. ^d Rsym = Σ|I − ⟨I⟩|/ΣI, where I is the observed intensity and ⟨I⟩ is the average intensity from multiple observations of symmetry-related reflections. ^e R = Σ||Fo| − |Fc||/Σ|Fo|; Rfree is the R factor of the CNS refinement evaluated for the 5% of reflections excluded from refinement. ^f RMS deviations from ideal values. ^g The atomic coordinates and structure factors have been deposited in the Protein Data Bank.

Structure determination and refinement

The structure of the IMI complex was solved by molecular replacement with Phaser (McCoy 2007) using the coordinates of Ls-AChBP in complex with nicotine (PDB entry 1UW6) (Celie et al. 2004) as a search model. Subsequently, the refined coordinates of the IMI complex were used to solve the structure of the CTD complex. Refinement and manual model building were performed with CNS version 1.2 (Brünger et al. 1998) and Coot (Emsley and Cowtan 2004), respectively. Non-crystallographic-symmetry restraints between the subunits were applied initially during refinement and removed at later steps. Details of the data collection and refinement statistics are provided in Table 1.
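Returning to the equilibrium binding assay described above, the quadratic (ligand-depletion) model can be fitted by non-linear least squares outside Prism as well. A minimal sketch with scipy follows; the titration data are hypothetical values chosen to resemble an IMI-like curve, and the normalization by F0 is folded into the fitted amplitude for brevity.

```python
import numpy as np
from scipy.optimize import curve_fit

P = 0.6   # binding-site concentration (uM), i.e., 600 nM as in the assay

def quenching(X, dFmax, Kd):
    # Single-site binding with ligand depletion (quadratic form): predicted
    # fluorescence quenching at total ligand concentration X
    b = Kd + P + X
    return dFmax * (b - np.sqrt(b * b - 4.0 * P * X)) / (2.0 * P)

# Hypothetical titration: ligand concentrations (uM) and observed quenching
X = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
dF = np.array([0.05, 0.13, 0.30, 0.52, 0.68, 0.74])

popt, pcov = curve_fit(quenching, X, dF, p0=[1.0, 1.0])
dFmax_fit, Kd_fit = popt
print(round(Kd_fit, 2))   # compare with the reported Kd of 1.57 uM for IMI
```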
Figures were generated with PyMOL (DeLano Scientific LLC), Bobscript (Esnouf 1999) and Raster3D (Merritt and Bacon 1997).

Results and discussion

Structure analysis

In this study, we have crystallized the Ls-AChBP in complex with IMI and CTD to elucidate the mechanism underpinning their selective and diverse actions on nAChRs. Both complexes crystallized in the same space group with one pentamer in the asymmetric unit. Ligand-omit and anomalous electron density maps clearly indicated that the AChBP in the crystals bound neonicotinoids (see Fig. 3a, b). The positions and orientations of the 2-chloropyridine ring of IMI and the 2-chloro-1,3-thiazole ring of CTD were modeled to satisfy the observed Br- and S-anomalous peak positions. The X-ray crystal structures of the Ls-AChBP complexed with IMI and CTD were refined at 2.58 and 2.70 Å resolution, respectively (Table 1). The neonicotinoid-bound Ls-AChBPs form the same homopentameric structure (Fig. 2a, b) as those determined for the 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES)-bound apo form (Brejc et al. 2001) and the other ligand-bound forms (Celie et al. 2004, 2005; Hansen et al. 2005; Ulens et al. 2006). Typically, each subunit has a secondary structure composed of an N-terminal 12-residue α-helix followed by β-strand-rich parts, as shown previously for other Ls-AChBPs. Root mean square differences (rmsd) of Cα atoms between the IMI or CTD complex and the nicotine complex (Celie et al. 2004) were 0.975 and 0.758 Å, respectively. In the two structures determined in the present study, neonicotinoids occupied all five ligand-binding sites, which are formed between two adjacent subunits (Fig. 2a, b). The neonicotinoids occupy broadly the same position as HEPES and (-)-nicotine within the pocket formed by the loop A–F regions. Although loops A, B, D and E are mainly composed of β-strands and only loops C and F have no canonical secondary structure, we use these designations to facilitate comparison with previous studies on nAChRs (Corringer et al. 2000; Matsuda et al. 2001, 2005). The principal side [(+)-chain] of the ligand-binding site contributes residues from loops A, B and C, which are located in α subunits, whereas the complementary side [(-)-chain] contains loops D, E and F, which are donated by non-α subunits in α/non-α heteropentamers. Overall, the structures of the IMI and CTD complexes were very similar, with a Cα rmsd of 0.597 Å. However, the loop C region has a specific conformation in each complex. For example, compared with the IMI complex, loop C of the CTD complex adopts a "closed conformation", with an approximately 4 Å shift in the positions of the Cα atoms (Fig. 2c). Variations in loop C conformation were often observed between different chains within the same Ls-AChBP pentamer in complex with neonicotinoids (e.g. subunits A and C of the CTD complex, data not shown). Conformational change in loop C is evident when comparing these structures with the apo form of Ls-AChBP complexed with HEPES (PDB entry code 1UX2) (Fig. 2c). The Thr155–Asp160 loop region of the (-)-chain upstream of loop F also shows a conformational change in the ligand-binding site. These results suggest that induced-fit movement of the loop regions is essential for recognition of ligands, including neonicotinoids. The detailed binding interactions of neonicotinoids will be considered in relation to their key components, the aromatic ring and the guanidine/related moieties.
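The Cα rmsd values quoted above are computed from matched coordinate sets after superposition. A minimal sketch of the calculation, with hypothetical coordinates standing in for the two structures:

```python
import numpy as np

def ca_rmsd(a, b):
    # RMSD between two matched N x 3 Calpha coordinate arrays; assumes the
    # structures have already been superposed (e.g., by least-squares fitting)
    diff = a - b
    return np.sqrt((diff ** 2).sum() / len(a))

# Hypothetical coordinates: 200 Calpha atoms, second set perturbed slightly
rng = np.random.default_rng(0)
xyz_a = rng.random((200, 3)) * 50.0
xyz_b = xyz_a + rng.normal(0.0, 0.35, xyz_a.shape)
print(round(ca_rmsd(xyz_a, xyz_b), 3))   # ~0.6 A for this perturbation
```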
Fig. 2 Overall view of the crystal structure of Ls-AChBP complexed with imidacloprid (IMI). The pentameric structure of Ls-AChBP is viewed from the top (a), corresponding to the extracellular surface of nAChRs, and from the side (b). IMI molecules bound to Ls-AChBP are shown in space-filling representation, with carbons, nitrogens and oxygens colored green, blue and red, respectively. Two subunits, the A and B chains, are colored pink and blue, respectively. c The structure of the A–B domain boundary of the Ls-AChBP complexed with IMI (red) is compared with that of the Ls-AChBP complexed with CTD (blue) and the apo form (light brown) complexed with 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES) (PDB entry code 1UX2) (Celie et al. 2004). Ligands are presented as stick models.

Interactions with the aromatic rings of the insecticides

When complexed with Ls-AChBP, the pyridine ring of IMI was buried deep in the binding site, interacting with loop E segments from the (-)-chain (Fig. 3c). Recently, Tomizawa et al. (2008) reported that two azido-neonicotinoid probes containing the photolabile group in the pyridine ring labeled Tyr164 in loop F of the (-)-chain and Tyr192 in loop C of the (+)-chain of the Ls-AChBP. However, these two distinct binding modes were not observed in our models, neither in the IMI complex nor in the CTD complex of the Ls-AChBP. The nitrogen atom of the pyridine ring forms a hydrogen bond with the amide group of Met114 and the carbonyl group of Leu102 in loop E via a water molecule, resembling the observations for both the Ls-AChBP–nicotine complex (Celie et al. 2004) and the Ac-AChBP–epibatidine complex (Hansen et al. 2005).

Fig. 3 Electron density maps of bound ligands and their interactions with the loop E region. The annealed Fo–Fc omit maps (blue) for imidacloprid (IMI) (a) and clothianidin (CTD) (b) bound to the Ls-AChBP are contoured at 4σ and drawn with the final models of the native and non-Br neonicotinoids. In addition, the anomalous maps of bromine (purple) from the Br-neonicotinoid complexes and of sulfur atoms (yellow) from the native-neonicotinoid complexes, contoured at 5 and 1.5σ, respectively, are overlaid. Interactions of the loop E region with the pyridine moiety of IMI (c) and the thiazole moiety of CTD (d) are shown in the binding site of the Ls-AChBP. In each ligand and amino acid, chlorine, nitrogen, oxygen and sulfur atoms are colored purple, blue, red and yellow, respectively. Hydrogen bonds are shown as orange broken lines.

The chlorine atom was located in the vicinity of a hydrogen bond bridge between the backbone C=O of Leu112 and the backbone amide NH of Arg104. As a result, it makes van der Waals contacts with the peptide backbones of these two amino acids. The aromatic ring of CTD also contacts the loop E segment in a manner similar to IMI (Fig. 3d). The nitrogen atom of the thiazole ring of CTD, like the nitrogen atom of the pyridine ring of IMI, formed a hydrogen bond with the backbone carbonyl of Leu102 and the backbone amide group of Met114 via a water molecule (Fig. 3d). Nicotine, which is active not only on insect but also on vertebrate nAChRs, has also been shown to interact with the backbone of loop E in a similar manner to the neonicotinoids (Celie et al. 2004). Therefore, interactions with the backbone are insufficient to explain the selective actions of neonicotinoids. On the other hand, Met114 in loop E was located in the vicinity of the nitroimine group
(Fig. 3c, d), which plays a key role in selective insect nAChR–neonicotinoid interactions (Ihara et al. 2003). Some insect α subunits possess either basic or hydrogen-bondable residues favorable for interactions with the nitro group at positions such as that corresponding to Met114 (Fig. 1c). Hence, if two adjacent α subunits form a ligand-binding site, such residues may contribute to the neonicotinoid sensitivity of insect nAChRs.

Interactions with the guanidine and related moieties

The crystal structures of the Ls-AChBP bound with nicotine or carbamylcholine (Celie et al. 2004) and of Ac-AChBP in the presence of epibatidine (Hansen et al. 2005), as well as quantum chemical calculations (Cashin et al. 2005), suggested that the protonated nitrogen atom of nicotinic agonists undergoes cation–π interactions with Trp143 in loop B. In line with this notion, it was earlier demonstrated that the electron-deficient guanidine moiety of neonicotinoids is likely to contact Trp143 (Matsuda et al. 2005; Tomizawa and Casida 2005). However, the guanidine moiety of IMI and CTD was found to stack with the aromatic ring of Tyr185 in the crystal structures of the CTD– as well as the IMI–Ls-AChBP complexes (Fig. 4a, b). Since Tyr185 is conserved throughout vertebrate and invertebrate nAChR α subunits (Fig. 1c), this residue itself is not involved directly in determining the selective interactions with neonicotinoids; yet the interactions with Tyr185 contribute to the affinity for neonicotinoids. The stacking appears to be weaker for CTD than for IMI (Fig. 4a, b). In addition, the stacking interaction resulted in CH–π contacts between the methylene (CH2–CH2) bridge of IMI and Trp143, which were not seen for CTD, accounting, at least in part, for the higher binding potency of IMI (Kd = 1.57 ± 0.21 μM, n = 3) compared to that of CTD (Kd = 7.26 ± 1.13 μM, n = 3) on the Ls-AChBP, as evaluated by quenching of the protein fluorescence.

Fig. 4 Imidacloprid (IMI) and clothianidin (CTD) binding to Lymnaea stagnalis AChBP (Ls-AChBP). a IMI–Ls-AChBP complex. b CTD–Ls-AChBP complex. The binding site is located at the interface between two adjacent subunits, drawn in green for the (+)-chain and in cyan for the (-)-chain (see text). The EA and CD subunit interfaces are shown for the binding sites of IMI (a) and CTD (b), respectively. Orange broken lines depict hydrogen bonds. In each ligand and amino acid, chlorine, nitrogen, oxygen and sulfur atoms are colored purple, blue, red and yellow, respectively.

Another difference between CTD and IMI is that the former exposes a hydrogen bond donor NH at position 1, whereas the latter lacks it. We suggested earlier that this NH may form a hydrogen bond with the backbone carbonyl of Trp143 in loop B (Ihara et al. 2007b). As suggested, the NH of CTD made a hydrogen bond with the backbone carbonyl of Trp143 of the (+)-chain in the crystal structures of the Ls-AChBP (Fig. 4b), although it was observed at only 2 of 5 interfaces. Such hydrogen bond formation may assist CTD in keeping contact with nAChRs when large conformational changes occur in the LBD to open the central channel (Miyazawa et al. 2003; Unwin 2005), accounting, at least in part, for the higher agonist efficacy of CTD compared with IMI and ACh at certain native and recombinant nAChRs (Brown et al. 2006; Ihara et al. 2004).

Interactions with loop D and the selectivity of neonicotinoids

We have previously shown that substitution of a glutamate for the glutamine corresponding to Gln55 of Ls-AChBP
(Fig. 1c) markedly reduced the maximum response of the chicken α7 nAChR to IMI and nitenpyram, whereas mutation of the same glutamine to arginine enhanced it, slightly shifting the concentration–response curve to the left (lower concentrations). In complete contrast, the same mutation reduced the (-)-nicotine sensitivity of the α7 receptor, whereas substitution with a glutamate enhanced it (Shimomura et al. 2002). Therefore, it was postulated that this glutamine residue is located in close proximity to the nitro group of neonicotinoids, and that the marked changes in the α7 response to neonicotinoids as well as to (-)-nicotine result from electrostatic interactions between the introduced residues and the nitro group. Consistent with this hypothesis, Gln55 faced the nitro groups of IMI and CTD in the crystal structures of the Ls-AChBP, within hydrogen-bonding distance in the AChBP–IMI complex (<3.3 Å at 2 of 5 interfaces in the crystal structure of the Ls-AChBP) (Fig. 4). For CTD, however, the distance between Gln55 and the nitro group was longer than 3.5 Å, excluding or minimizing a contribution of hydrogen bonding to the binding potency of CTD. All vertebrate nAChRs, perhaps apart from those containing either the human or the rat β4 subunit, are likely to show reduced sensitivity to neonicotinoids due to the lack of basic residues in loop D. The human β4 subunit (Fig. 1c), as well as the rat β4 subunit (sequence not shown), possesses a lysine at the position corresponding to Gln55 of Ls-AChBP. However, the vertebrate β4 subunits possess a glutamate at the position corresponding to Thr57 of Ls-AChBP, which is located in the vicinity of Gln55 (Fig. 5). This glutamate residue is postulated to interfere electrostatically with the neighboring basic residue–neonicotinoid interactions, thereby reducing the neonicotinoid sensitivity of nAChRs containing β4 subunits.

Fig. 5 Interactions between loops C, D and F. The IMI-binding site of Ls-AChBP is presented with particular focus on loops C, D and F. The binding site at the interface of subunits C and D is drawn in green for the (+)-chain and in cyan for the (-)-chain (see text). In each ligand and amino acid, chlorine, nitrogen, oxygen and sulfur atoms are colored purple, blue, red and yellow, respectively. Orange broken lines depict hydrogen bonds.

Role of loops C and F in the interactions with neonicotinoids

We have found that replacement by glutamate of the proline corresponding to Ser186 in the YXCC motif in loop C of the Drosophila Dα2 subunit reduces the IMI sensitivity of the Dα2β2 hybrid nAChR, whereas the reverse mutation in the α4β2 nAChR enhanced sensitivity (Shimomura et al. 2004), suggesting a contribution of this loop C residue to the neonicotinoid sensitivity of nAChRs. However, another view has been proposed in which the serine or threonine residue in loop C of insect nAChRs corresponding to Ser186 of the (+)-chain of Ls-AChBP (see Fig. 1c) forms a hydrogen bond with the nitro group of neonicotinoids, thereby enhancing their binding potency (Tomizawa et al. 2007). In the crystal structures of Ls-AChBP, Ser186 made hydrogen bonds with Glu163 and Tyr164 in loop F of the (-)-chain, but not with the nitro group of IMI (Fig. 5). This hydrogen bond network, as well as the basic residues in loop D, seems to play an important role in determining the neonicotinoid sensitivity of nAChRs.
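Hydrogen-bond assignments like those above rest on donor–acceptor distance criteria applied to the deposited coordinates. The sketch below screens interatomic distances against the ~3.3 Å cut-off mentioned in the text, using Biopython; it assumes the coordinates for the IMI complex (PDB 2ZJU, deposited per Table 1) have been downloaded locally, and the chain identifiers and residue picks are hypothetical illustrations rather than the assignments made in the paper.

```python
import numpy as np
from Bio.PDB import PDBParser

# Assumes the Ls-AChBP-IMI coordinates (PDB 2ZJU) are saved as "2zju.pdb"
structure = PDBParser(QUIET=True).get_structure("2zju", "2zju.pdb")
model = structure[0]

def min_distance(res_a, res_b):
    # Smallest interatomic distance (A) between two residues
    return min(np.linalg.norm(a.coord - b.coord)
               for a in res_a for b in res_b)

gln55 = model["A"][55]            # hypothetical chain/residue numbering
for res in model["B"]:
    if res.id[0] != " ":          # HETATM records, e.g., a bound ligand
        d = min_distance(gln55, res)
        # Separations below ~3.3 A are taken as compatible with a hydrogen
        # bond, as in the criterion used in the text
        print(res.resname, round(d, 2), d < 3.3)
```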
In most heteromeric vertebrate nAChRs, the α subunits possess acidic residues at the position corresponding to Ser186 in loop C, resulting in electrostatic repulsion with the acidic residues in loop F of β subunits corresponding to Glu163 of Ls-AChBP (Fig. 1c). On the other hand, insect nAChR α subunits do not possess such an acidic residue in loop C (Fig. 1c). Hence, it is conceivable that amino acids not only from α subunits, but also from non-α subunits of insect nAChRs, contact neonicotinoids more effectively than those of vertebrate nAChRs. In this context, Tyr164 may also contribute indirectly to the neonicotinoid sensitivity of insect nAChRs, because vertebrate non-α subunits possess at this position a phenylalanine residue incapable of assisting the formation of the hydrogen-bond network. In conclusion, we have for the first time elucidated the crystal structures of the Ls-AChBP in complex with two commercial neonicotinoids, IMI and CTD. Met114 in loop E and Gln55 in loop D were the residues closest to the nitro group of the neonicotinoids, suggesting that the corresponding residues in insect nAChRs may play important roles in determining neonicotinoid sensitivity. Ser186 in loop C was found to form hydrogen bonds with Glu163 as well as Tyr164 in loop F, suggesting that these hydrogen-bondable residues can also indirectly influence the selective interactions with neonicotinoids. CTD forms a hydrogen bond with the backbone carbonyl of Trp143 in loop B and shows weaker hydrophobic contacts with Trp143 compared with the other neonicotinoid tested, which may altogether lead to its unique actions on nAChRs. Although there are sequence and structural similarities between AChBPs and nAChRs, there are also important differences. Future structural studies on AChBP mutants engineered to resemble more closely insect or human nAChRs could help to enhance even further our understanding of neonicotinoid actions and selectivity. Nevertheless, we have provided new insights into the molecular basis of the differential actions of neonicotinoid molecules. These results may be of value in the design of even safer new crop protection agents.
[ "crystal structures", "neonicotinoids", "nicotinic acetylcholine receptors", "ion channels", "acetylcholine binding protein (lymnaea stagnalis)" ]
[ "P", "P", "P", "P", "R" ]
Environ_Manage-2-2-1705480
Multiscale Drivers of Water Chemistry of Boreal Lakes and Streams
The variability in surface water chemistry within and between aquatic ecosystems is regulated by many factors operating at several spatial and temporal scales. The importance of geographic, regional-scale, and local-scale factors as drivers of the natural variability of three water chemistry variables, representing buffering capacity and the importance of weathering (acid neutralizing capacity, ANC), nutrient concentration (total phosphorus, TP), and the importance of allochthonous inputs (total organic carbon, TOC), was studied in boreal streams and lakes using a method of variance decomposition. Partial redundancy analysis (pRDA) of ANC, TP, and TOC and 38 environmental variables in 361 lakes and 390 streams showed the importance of the interaction between geographic position and regional-scale variables. Geographic position and regional-scale factors combined explained 15.3% (streams) and 10.6% (lakes) of the variation in ANC, TP, and TOC. The unique variance explained by geographic, regional-, and local-scale variables alone was <10%. The largest amount of variance was explained by the pure effect of regional-scale variables (9.9% for streams and 7.8% for lakes), followed by local-scale variables (2.9% and 5.8%) and geographic position (1.8% and 3.7%). The combined effect of geographic position and regional- and local-scale variables accounted for between 30.3% (lakes) and 39.9% (streams) of the variance in surface water chemistry. These findings lend support to the conjecture that lakes and streams are intimately linked to their catchments and have important implications for conservation and restoration (management) endeavors.

Introduction

Surface water chemistry is regulated by a complex suite of processes and mechanisms operating at varying spatial and temporal scales. Early work by lake ecologists focused on the importance of geographic position as a strong predictor of lake water chemistry. For instance, in the early 1900s, Thienemann (1925) and Naumann (1932) developed lake trophic classification schemes that basically recognized differences between lowland, nutrient-rich (eutrophic) and alpine, nutrient-poor (oligotrophic) ecosystems. Although lake ecologists were early to appreciate the importance of adjacent land type on lake-water chemistry, stream ecologists have addressed the terrestrial–aquatic linkage concept more formally, with streams being regarded as "open systems that are intimately linked with their surrounding landscapes" (e.g., Hynes 1975). However, lake ecologists have recently revisited the landscape position hypothesis and formalized paradigms that recognize more explicitly the importance of landscape position and its significance for describing among-lake variance (e.g., Kratz and others 1997; Soranno and others 1999; Riera and others 2000). The surrounding landscape (catchment), with its distinct geology, hydrology, and climate, clearly influences the physico-chemical features of a specific water body (e.g., Omernik and others 1981; Osborne and Wiley 1988; Allan 1995; Kratz and others 1997; Soranno and others 1999; Riera and others 2000), and several studies have highlighted the links between surface water chemistry and catchment characteristics, particularly in relation to sensitivity to nutrient enrichment and acidification (Vollenweider 1975; Sverdrup and others 1992; Hornung and others 1995). Indeed, water chemistry, both within and among lakes or streams, is considered to be driven by factors acting on both regional and local scales.
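The variance decomposition referred to above partitions explained variance into fractions unique to each predictor set and a shared fraction. pRDA does this for multivariate responses; the sketch below illustrates the same decomposition logic with its univariate analog, ordinary least-squares R², using hypothetical regional and local predictor matrices.

```python
import numpy as np

def r2(X, y):
    # Proportion of variance in y explained by an OLS fit on X (with intercept)
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def partition(X_regional, X_local, y):
    # Two-set variance partitioning: unique fractions [a], [c], shared [b],
    # and the unexplained remainder, from separate and combined models
    r_reg, r_loc = r2(X_regional, y), r2(X_local, y)
    r_both = r2(np.column_stack([X_regional, X_local]), y)
    a = r_both - r_loc            # unique to regional-scale predictors
    c = r_both - r_reg            # unique to local-scale predictors
    b = r_reg + r_loc - r_both    # shared (joint) fraction
    return a, b, c, 1.0 - r_both

# Hypothetical data: 100 sites, 3 regional and 2 local predictors, response TOC
rng = np.random.default_rng(1)
Xr, Xl = rng.normal(size=(100, 3)), rng.normal(size=(100, 2))
toc = Xr @ [0.5, 0.2, 0.1] + Xl @ [0.3, 0.2] + rng.normal(0, 1, 100)
print(partition(Xr, Xl, toc))
```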
Regional factors such as climate, geology, and weathering are interrelated with other factors such as soil type and land cover/use, whereas local factors, like the input and retention of organic matter, are related to vegetation type and topographical relief. Hence, a priori, a close linkage is expected between regional- and local-scale factors. Geographic proximity alone is, however, often not sufficient to predict the physical and chemical characteristics of individual streams or lakes, as differences in external processes (e.g., stream hydrology, lake morphometry, and water retention time), internal processes (e.g., nutrient cycling), and the strength of interactions with the surrounding landscape may, singly or in concert, confound the importance of regional-scale factors. Although a number of studies have addressed the importance of land use/type on surface water chemistry, few studies have simultaneously focused on the importance of local and regional factors as determinants of surface water chemistry, and fewer still have addressed the similarities and differences of lake and stream ecosystems. To our knowledge, only one study (Essington and Carpenter 2000) has simultaneously studied the response of stream and lake ecosystems. These authors showed that streams and lakes were surprisingly similar in nutrient cycling, in particular when adjustments were made for water residence time. By concurrently studying stream and lake ecosystems, we hope to improve our understanding of the processes and mechanisms that drive surface water chemistry in these different, but certainly not ecologically isolated, ecosystems. We hypothesize that both streams and lakes are strongly linked to the surrounding landscape, and that spatial variation in surface water chemistry is regulated by non-mutually exclusive factors acting on various hierarchical scales depending on landscape type and/or geographic position. Here, we study the effect of regional- and local-scale factors on three commonly measured water chemistry variables. Acid neutralizing capacity (ANC) was selected to indicate the effect that catchment geology and weathering might have on buffering capacity. Total phosphorus (TP) was selected for its key role in driving ecosystem productivity and because it is biologically active (e.g., TP is expected to decrease along lake chains). Finally, total organic carbon (TOC) was used as a surrogate measure of the importance of allochthonous input from the boreal catchments. The sites used in this study are often natural brown-water systems, with high concentrations of humic substances. We attempted to (1) identify and quantify possible sources of variation in the surface water chemistry of boreal streams and lakes, (2) determine which environmental factors and which spatial scales are most important in determining the surface water chemistry of boreal streams and lakes, and (3) determine similarities/differences in the factors driving stream/lake water chemistry. Methods Study Site The data set used in this study consists of 390 streams and 361 lakes sampled as part of the Swedish national stream and lake survey in autumn 2000 (Johnson and Goedkoop 2000; Wilander and others 2003) (Fig. 1). A number of factors suggested that this dataset was sufficiently robust for examining among-site similarities/dissimilarities in the surface water chemistry of boreal streams and lakes.
First, streams and lakes were selected randomly; thus, the samples should be representative of the population of streams and lakes sampled. In selecting lakes, only lakes with surface areas >4 ha were included, and two size classes were used for stratifying stream sites (catchment area classes of 15 to 50 and 50 to 250 km2). Because we were interested in obtaining a depth-integrated measure of surface water chemistry, lakes were sampled during autumn turnover. Hence, sampling started in the northernmost parts of the country and progressed southwards. A more detailed description of stream and lake selection is given in Wilander and others (2003). In this study, we were interested in understanding the effects of local- and regional-scale variables on the expected natural variability of selected water chemistry variables. Thus, sites deemed to be affected by liming, acidification (lakes: critical load exceedance of S and N > 0; Rapp and others 2002), or agriculture/silviculture (catchments with more than 25% defined as arable and catchments affected by clear-cutting, respectively) were not included in this dataset. Fig. 1 Location of the 361 lakes and 390 streams used to assess the influence of geographic position and regional- and local-scale factors on surface water chemistry. The streams and lakes can be classified as relatively small (mean stream width = 5 m; mean lake area = 3.27 km2) and nutrient poor, ranging from clear to brown-water ecosystems (mean absorbance at 420 nm = 0.188 for streams and 0.149 for lakes). The streams and lakes were distributed fairly evenly across the country. Streams were generally situated at somewhat lower altitude than lakes (mean altitude = 201 m a.s.l. for streams and 331 m a.s.l. for lakes). The catchment area of streams was also smaller (mean catchment area = 64 km2) than that of lakes (257 km2), because streams with catchments >250 km2 were not included in the national stream survey. Water Chemistry A single, midstream or midlake (approximately 0.5 m depth) water sample was collected in autumn 2000. All water chemistry analyses were done by the SWEDAC (Swedish Board for Accreditation and Conformity Assessment) certified laboratory at the Department of Environmental Assessment, Swedish University of Agricultural Sciences, following international (ISO) or European (EN) standards when available. ANC is a measure of the buffering ability of lakes and streams against strong acid inputs. This metric was chosen because it includes humic substances and compensates for their natural variation, i.e., the effect of acid deposition is more pronounced than in other acidification indicators such as pH or sulfate concentration. Independent Variables During sampling, sites were classified according to (aquatic) substratum particle size and vegetation; six substrate classes (ranging from silt/clay to block), two classes of detritus (coarse and fine), and 10 classes of riparian land use and vegetation were recorded using four coverage categories: 0%, <5%, 5–50%, and >50% (Table 1). For streams, 50-m reaches (sampling sites) of relatively homogeneous substratum were chosen, and riparian vegetation was classified as above within a 5-m-wide zone on both sides of the sampling site. For lakes, 10-m-long and 5-m-wide littoral areas of relatively homogeneous substratum were chosen, and riparian vegetation, designated within a 50-m-long and 5-m-wide shoreline zone, was classified as above.
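The four-category coverage coding used for the substrate and riparian classes (and referenced again in Table 1) maps percent coverage onto ordinal classes 0–3. A minimal sketch of that coding, with a hypothetical function name:

```python
def coverage_class(pct):
    """Convert percent coverage to the survey's ordinal classes:
    0 = no coverage (0%), 1 = very low (<5%),
    2 = medium (5-50%), 3 = high (>50%)."""
    if pct == 0:
        return 0
    if pct < 5:
        return 1
    if pct <= 50:
        return 2
    return 3
```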
Table 1 Dependent and independent variables used in RDA

a) Dependent (water chemistry, n = 3; mean values with 10th and 90th percentiles in parentheses)

  Variable                           Unit      Lakes (N = 361)    Streams (N = 391)
  Acid neutralizing capacity (ANC)   meq l−1   3.36 (0.09–0.74)   0.51 (0.15–0.99)
  Total phosphorus (TP)              μg l−1    13.17 (2–28)       27.42 (2–67)
  Total organic carbon (TOC)         mg l−1    9.13 (2.02–16.6)   10.57 (2.2–21.08)

b) Independent (environmental variables, n = 38), divided into three subsets. For the first five variables that best explained the variability in ANC, TP, and TOC using RDA with stepwise forward selection, the explained variability (%) is given with the order of selection in parentheses.

Geographic position
  Latitude (decimal degrees)
  Altitude (m a.s.l.): lakes 18.5% (2); streams 2.7% (3)
  Ecoregions* (dummy variables): arctic/alpine, northern boreal, southern boreal, boreonemoral, nemoral

Regional factors
  Mean annual discharge, Q (m3 s−1): streams 1.3% (5)
  Wet and dry NHx deposition
  Wet and dry non-seasalt Mg deposition
  Catchment land use/cover (%): urban areas; forested areas; alpine treeless land cover [lakes 55.4% (1); streams 17.3% (2)]; glacier; open freshwater bodies; marsh/mires; arable land [lakes 4.6% (3); streams 68% (1)]; pasture; alpine forested areas

Local factors
  Physical properties of the sample site: stream width (m) [streams 2.7% (4)]; lake area (km2) [lakes 1.5% (5)]; water temperature (°C)
  Aquatic substrate** (classified 0–3): boulder (>250 mm); block (200–250 mm); cobble (60–200 mm); pebble (20–60 mm); silt/clay (0.02 mm); coarse detritus; floating leaved vegetation [lakes 4.6% (4)]; fine leaved submerged vegetation; periphyton; fine dead wood
  Riparian land use/cover (classified 0–3): deciduous forest; heath; arable land; alpine; pasture; mire; canopy cover

Note: the middle boreal ecoregion was not significant in the Monte Carlo permutation test and was excluded from the analysis.
*Six major ecoregions according to the Nordic Council of Ministers (1984)
**Classified as percent coverage: 0 = no coverage (0%), 1 = very low (<5%), 2 = medium (5–50%), 3 = high (>50%)

Catchments were classified as percentage land use/vegetation cover according to the same land use categories used for riparian zones. Hence, catchment land use/cover ranged from 0% (all classes) to 100%. The maximum extent of urban areas in catchments was 10.2% (lakes) and 26.3% (streams); forested areas covered up to 99.8% of both lake and stream catchments; and alpine treeless cover reached 99.7% (lakes) and 99.9% (streams). Glaciers comprised at most 2.3% of lake catchment areas but up to 26.6% of stream catchments; other open freshwater bodies in the catchment comprised up to 19.4% of lake and 28.9% of stream catchments. Maximum marsh or mire land cover was 82.9% for lake and 67.4% for stream catchments, whereas pasture comprised at most 18.1% (lakes) and 14.2% (streams). Maximum alpine forested cover was higher in lake (98.7%) than in stream catchments (65.6%), and arable land covered at most 24.4% of lake and 24.6% of stream catchments. Ecoregion delineation of Sweden was obtained from the Nordic Council of Ministers (1984). The ecoregions range from the nemoral region in the south to the arctic/alpine complex in the north.
The nemoral region is characterized by deciduous forest, mean annual temperatures >6°C, and a relatively long growth period (180–210 d). In contrast, the arctic/alpine complex in the north is characterized by relatively low mean annual temperatures (<2°C) and short growth periods (<140 d). Geographic position descriptors (longitude, latitude, altitude), ecoregion delineation, discharge, deposition variables, land use/vegetation cover descriptors, physical properties (stream width, lake area), and aquatic substrate descriptors together yielded a dataset of 60 environmental (independent) variables. Statistical Analyses First, detrended correspondence analyses were conducted to obtain the gradient lengths of the stream and lake chemistry data. Because the gradient lengths were in both cases ≤1.5 SD, the linear method, redundancy analysis (RDA), was used to study the effects of environmental variables representing geographic position and regional- and local-scale factors on stream and lake water chemistry. Moreover, preliminary analyses of water chemistry (total phosphorus concentration) and catchment land use (% agriculture) did not reveal any step changes between the northern and southern regions. RDA, the constrained (direct gradient) counterpart of principal components analysis, was performed on a correlation matrix. In a first step, the entire set of 60 environmental variables was tested to determine the significance of individual variables using a Monte Carlo permutation test (999 unrestricted permutations). Variables that were not significantly correlated with the three water chemistry variables or that were found to co-vary with other environmental variables (variance inflation factors >100) were removed (n = 22) from the data set. Variance Partitioning The remaining 38 explanatory variables were grouped into three subsets to yield ecologically interpretable variance components: (1) variables describing the geographic position (G) of the water body, (2) regional-scale (R) variables, and (3) local-scale (L) variables (Fig. 2, Table 2). The variation partitioning technique used has been described previously by Borcard and others (1992) and hence is not detailed here. In brief, the procedure allows the variance explained by the environmental data to be partitioned into different components through the use of covariables (i.e., variables whose influence is partialled out of the analysis). Initially, this technique was used to partition variation in ecological data sets into environmental and spatial components (e.g., Økland and Eilertsen 1994), and it has since been extended to incorporate three sets of explanatory variables (e.g., Anderson and Gribble 1998).

Fig. 2 Venn diagram (hypothetical model) showing the unique variation, the partial common variation, and the common variation of the three subsets G, R, and L representing the environmental data

Table 2 Variation partitioning of water chemistry (n = 3) in streams (n = 390) and lakes (n = 361) explained by three sets of environmental variables, geographic (G), regional (R), and local (L), in partial redundancy analysis (pRDA)

  Run   Environmental variables   Covariables   λstreams   λlakes
  1     G, R, L                   none          0.751      0.651
  2     G                         R & L         0.018      0.037
  3     R & L                     none          0.733      0.614
  4     R & L                     G             0.173      0.184
  5     G                         none          0.578      0.467
  6     R                         G & L         0.099      0.078
  7     G & L                     none          0.652      0.573
  8     G & L                     R             0.055      0.116
  9     R                         none          0.696      0.535
  10    L                         G & R         0.029      0.058
  11    G & R                     none          0.721      0.593
  12    G & R                     L             0.270      0.221
  13    L                         none          0.480      0.430

λ = computed eigenvalue in RDA.
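The λ values in Table 2 are fractions of the total standardized variance in (ANC, TP, TOC) captured by each (partial) RDA run. The analyses were done in CANOCO; purely as an illustration of the computations, the sketch below estimates the explained fraction of a (partial) RDA by least squares and then reproduces the stream column of the component calculations (Table 3, following) from the run eigenvalues. Array and variable names are hypothetical, and the regression recipe is the textbook pRDA construction, not the CANOCO implementation.

```python
import numpy as np

def rda_explained(Y, X, W=None):
    """Fraction of the total variance in Y (sites as rows; e.g., ANC, TP,
    TOC as columns) captured by a (partial) RDA on predictors X, with
    covariables W partialled out first."""
    Y = (Y - Y.mean(axis=0)) / Y.std(axis=0, ddof=1)        # correlation-matrix RDA
    total = (Y ** 2).sum()
    if W is not None:
        W1 = np.column_stack([np.ones(len(W)), W])
        Y = Y - W1 @ np.linalg.lstsq(W1, Y, rcond=None)[0]  # residualize Y on covariables
        X = X - W1 @ np.linalg.lstsq(W1, X, rcond=None)[0]  # residualize X on covariables
    X1 = np.column_stack([np.ones(len(X)), X])
    Yhat = X1 @ np.linalg.lstsq(X1, Y, rcond=None)[0]       # constrained (fitted) values
    return (Yhat ** 2).sum() / total                        # sum of canonical eigenvalues

# Reproducing the stream column of Table 3 from the run eigenvalues of Table 2:
lam = {1: 0.751, 2: 0.018, 4: 0.173, 6: 0.099, 7: 0.652, 8: 0.055,
       10: 0.029, 12: 0.270}
G, R, L = lam[2], lam[6], lam[10]    # unique fractions (runs 2, 6, 10)
GR = lam[12] - R - G                 # shared geographic-regional: 0.153
GL = lam[8] - G - L                  # shared geographic-local:    0.008
RL = lam[4] - R - L                  # shared regional-local:      0.045
GRL = lam[7] - lam[8] - GR - RL      # shared by all three:        0.399
unexplained = 1.0 - lam[1]           # 0.249
assert abs(G + R + L + GR + GL + RL + GRL - lam[1]) < 1e-9
```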
These numbers are used to calculate the explanatory power of each component (see Table 3).

Table 3 Calculation of the explanatory power of each component in the variance partitioning model

  Variation explained by factors   Abbreviation (Figs. 2 & 3)   Calculation (run nos., Table 2)   λstreams   λlakes
  Geographic                       G                            2                                  0.018      0.037
  Regional                         R                            6                                  0.099      0.078
  Local                            L                            10                                 0.029      0.058
  Geographic & regional            GR                           12 − 6 − 2                         0.153      0.106
  Geographic & local               GL                           8 − 2 − 10                         0.008      0.021
  Regional & local                 RL                           4 − 6 − 10                         0.045      0.048
  Geographic, regional & local     GRL                          7 − 8 − (12−6−2) − (4−6−10)        0.399      0.303
  Total explained                  TotX                         1                                  0.751      0.651
  Unexplained                      UX                           TotV − TotX                        0.249      0.349
  Total variance                   TotV                                                            1.0        1.0

Abbreviations refer to the legend in Fig. 2; the figures in the calculation column refer to the run numbers in Table 2.

Fig. 3 Sources of variation in lake and stream water chemistry, respectively. Column labels indicate the variation (%) in acid neutralizing capacity, total phosphorus, and total organic carbon accounted for by each subset and their combinations

The total variance explained and the unique contributions of each subset and their joint effects were obtained as follows: (1) RDA was run with all three subsets as environmental variables and no covariables to obtain a measure of the total explained variance; (2) RDA was run with one of the three subsets as environmental variables and no covariables; and (3) partial RDA was run with one of the three subsets as environmental variables and the remaining two subsets as covariables, and the reverse (the two subsets as environmental variables with the single subset as covariable). The third step was repeated so that each subset in turn was treated as environmental variables constrained by the remaining subsets as covariables. This procedure resulted in four runs of RDA for each subset combination, or a total of 13 runs for the full set of analyses for each ecosystem (Table 2). With three subsets of environmental data, the total variation of water chemistry was then partitioned into seven components, including covariance terms (Fig. 2, Table 3). The variation explained by these subsets is subtracted from the total variation (1.0 in the case of RDA) to obtain the unexplained variation. Stepwise RDA Stepwise RDA with forward selection was performed with all 38 environmental variables as independent variables and the three water chemistry variables (ANC, TP, and TOC) as dependent variables to determine the best predictors (high R2 values). In this procedure, variables already selected are run as covariables, and each subsequent variable (step 2 and onward) must explain a significant amount of the residual variance (tested by Monte Carlo permutation). Redundancy analyses and partial RDA were done using CANOCO for Windows version 4.5 (Ter Braak and Smilauer 1997–1998). Prior to all statistical analyses (RDA), chemical and deposition variables, stream width, lake area, and altitude were log-transformed, and proportional catchment land use/vegetation cover variables were arcsine square-root transformed to achieve approximately normal distributions (SAS). Results Variance decomposition using redundancy analysis showed that all independent variables combined explained more than 65% of the total variation in stream and lake surface water chemistry (Table 3). The amount of variation explained was somewhat higher for streams (λstreams = 0.751) than for lakes (λlakes = 0.651). The largest proportion of variance was explained by the interaction between all three scale factors (Fig. 3). The surface water chemistry of both streams and lakes was more strongly influenced by regional-scale factors than by either geographic position or local-scale factors.
However, the unique variance explained by geographic position and regional- or local-scale variables was low (<10%) (Fig. 3). For streams, the unique variance explained by regional-scale variables (9.9%) was substantially higher than that explained by local-scale variables (2.9%) or geographic position (1.8%). Similarly, for lakes the unique variance explained by regional-scale variables (7.8%) was higher than that explained by local-scale variables (5.8%) and that explained by geographic position (3.7%). Geographic position and regional-scale factors (G&R) were better predictors of surface water chemistry than regional and local (R&L) or geographic position and local (G&L) factors. The strongest interaction was found between geographic position and regional-scale variables. For streams, the interaction between geographic position and regional-scale variables (G&R) explained 15.3% of the variance in stream chemistry. For lakes, the G&R interaction explained 10.6% of the variance in lake chemistry. The relation between geographic position and local-scale variables was much weaker, in particular for streams. The G&L interaction explained 0.8% of the variance in stream and 2.1% of the variance in lake chemistry. The amount of variance explained by the interaction between regional- and local-scale variables was 4.5% for streams and 4.8% for lakes. Ordination of stream chemistry and environmental variables showed that the primary RDA axis represented a latitudinal gradient (Fig. 4a). Eigenvalues for the first and second RDA axes were 0.685 and 0.056, respectively. Streams situated in alpine forested or alpine treeless catchments were placed on the right side of the ordination, whereas lowland streams situated in pasture and arable landscape in the south (e.g., boreonemoral ecoregion, eco5) with high wet and dry deposition of NHx (WDNHx) were placed to the left. ANC and TP were strongly associated with pasture and arable land use and high WDNHx. TOC concentration was positively correlated with forested catchments and habitats with high amounts of coarse detrital matter, negatively correlated with mean annual discharge (Q), and, like lake TOC, unrelated to arctic/mountainous characteristics. The second RDA axis was related to glacial land cover and whether the stream was located in the southern boreal ecoregion (eco4). Fig. 4 RDA biplot of environmental factors and ANC, TP, and TOC of (A) streams and (B) lakes. 1 = riparian pasture cover; 2 = floating leaved vegetation; 3 = riparian deciduous forest cover; 4 = riparian alpine cover; 5 = riparian heath cover; 6 = boulder; 7 = block; 8 = pebble; 9 = periphyton; 10 = cobble; 11 = fine leaved submerged vegetation; 12 = water temperature; 13 = wet & dry non-seasalt Mg deposition; 14 = riparian arable cover (streams), alpine forest (lakes); eco1 = arctic/alpine ecoregion; eco2 = northern boreal ecoregion; eco4 = southern boreal ecoregion; eco5 = boreonemoral ecoregion; eco6 = nemoral ecoregion; WDNHx = wet & dry NHx deposition; c_detritus = coarse detritus; f_detritus = fine detritus; FWD = fine woody debris (substrate); Q = annual mean discharge All three lake chemistry variables were negatively correlated with the first RDA axis (Fig. 4b). Eigenvalues for the first and second RDA axes were 0.599 and 0.056, respectively. The first RDA axis represented gradients in latitude and catchment/ecoregion.
Lakes situated in alpine, treeless catchments at high latitude and altitude were placed to the right, whereas sites in forested catchments or catchments with pasture or arable land use were placed to the left in the ordination. Both ANC and TP were positively correlated with the amount of catchment classified as pasture and arable. Moreover, many of these lakes were situated in the boreonemoral ecoregion (eco5), with high wet and dry deposition of NHx (WDNHx). In contrast, lake water total organic carbon (TOC) was associated with forested catchments with high amounts of coarse detrital matter (c_detritus). The second RDA axis represented gradients in the amount of catchment classified as mire (or bog), in particular the importance of local factors such as substrate type, water temperature, and riparian mire and fine woody debris (FWD). Stepwise RDA with stream and lake chemistry as dependent variables and the individual geographic position, regional, and local environmental variables as predictors showed that all variables together accounted for 65% (lakes) and 75% (streams) of the total variance. The amount of alpine treeless areas in the catchment was the single most important predictor of lake water chemistry (explaining 55.4% of the explained variance). The second variable selected was altitude (18.5%, i.e., the amount of residual variance explained after running the first variable selected, “alpine treeless areas in the catchment,” as a covariable), followed by the amount of arable land in the catchment (4.6%), percent coverage of floating leaved vegetation in the littoral (4.6%), and lake surface area (1.5%). For streams, the five best single predictors of water chemistry were the amount of arable land in the catchment (68%), followed by the amount of alpine treeless areas in the catchment (17.3%), altitude (2.7%), stream width (2.7%), and mean annual discharge Q (1.3%). Discussion Lakes and streams are often perceived as structurally and functionally different ecosystems, and indeed major dissimilarities do exist, particularly regarding water movement. For example, streams are characterized by unidirectional, turbulent flow and high flushing rates, whereas lake chemistry is more affected by the timing and frequency of turnover events (e.g., polymictic to dimictic mixing in boreal lakes). Furthermore, obvious differences in nutrient cycling and recycling are expected due to the relative importance of benthic vs. pelagic productivity (Essington and Carpenter 2000). The surface water chemistry of streams is considered to be tightly linked to catchment characteristics, with geomorphology determining soil type and the availability of ions through weathering (e.g., Allan 1995). Lakes, on the other hand, have until recently been perceived as separate entities, more isolated than streams from the surrounding landscape (e.g., Kratz and others 1997; Soranno and others 1999; Riera and others 2000; Quinlan and others 2003). Clearly, terrestrial–aquatic linkages are important predictors of surface water chemistry for both streams and lakes, but the strength of this interaction should vary with geologic and hydrologic settings. Thus, the major difference between the River Continuum Concept (Vannote and others 1980) and the concept of lake landscape position (Kratz and others 1997) probably lies in large differences in water residence times between streams and lakes.
Given the differences in water movement, in particular flushing rates, one might expect that streams and lakes differ in the external drivers that affect water chemistry. Surprisingly, our findings do not support this conjecture; the major part of the variation in water chemistry in both streams and lakes was explained by all components (i.e., geographic position as well as regional- and local-scale variables), followed by the combination (or interaction) of geographic position and regional-scale factors. These findings support the premise that variability in surface water chemistry is driven by interactions between geographic position and regional factors. Our finding that regional factors alone accounted for a large part of the variation in ANC, TP, and TOC, however, indicates the pivotal role that catchment land use/cover plays in determining surface water chemistry. The finding that the surface water chemistry of streams and lakes could be partly predicted by regional-scale variables, in particular catchment land use (e.g., arable), agrees with the findings of several earlier studies (e.g., Schonter and Novotny 1993; Allan and others 1997). Johnson and others (1997) showed, for example, that urban land use and rowcrop agriculture were important factors in explaining variability in stream water chemistry. Similarly, Hunsaker and Levine (1995) were able to explain more than 40% of the variance in total nitrogen using landscape metrics. In our study, we were interested in analyzing “natural” variability, so we removed sites judged to be affected by agriculture (i.e., sites with >25% of their catchments classified as arable were not included). Hence, our finding that the amount of arable land in a catchment explained nearly 70% of the variability in stream water chemistry was not expected with these data, and it implies that even small-scale agricultural land use within a catchment may affect phosphorus concentrations. The riparian zone has been proposed to be less important in explaining among-site differences in heavily managed catchments (Omernik and others 1981). These authors suggested that the total amounts of agriculture and forest in a catchment are more important predictors of water chemistry than the vegetation composition of the riparian zone. Our finding that less than 6% of the variation in surface water chemistry was explained by local factors alone (such as the presence of a riparian zone) supports this conclusion. Furthermore, in contrast to regional-scale factors, only a small amount of the variation was “hidden” in joint effects or interaction terms between regional and local (<8%) and between geographic and local (<3%) variables. Hence, other factors not examined here, such as where land use is located within the catchment relative to the water body, presumably need to be considered. Indeed, small-scale or local factors have been shown to be important in modifying larger-scale effects; for example, several studies have shown the ameliorative influence of a vegetated riparian zone (e.g., Cooper 1990; Osborne and Kovacic 1993). Redundancy analysis showed that the variability in both stream and lake water chemistry was explained by similar regional- and local-scale variables. For example, as discussed above, the proportion of arable land use in the catchments was a strong predictor of stream water chemistry (68%), followed by alpine, treeless land cover (17.3%).
For lake water chemistry, the amount of alpine, treeless land cover was a good predictor (55.4%), followed by altitude (18.5%) and catchment arable land use (4.6%). Clearly, several of the variables in the different “local” and “regional” components covary. For instance, the amount of alpine treeless land cover in the region/catchment and stream width are presumably correlated with altitude. However, as demonstrated here, regional factors were better predictors of stream and lake water chemistry and thus contribute largely to the explanatory power of the covariation components. All three water chemistry variables were strongly correlated with variables representing a latitudinal gradient; for example, sites in the south are better buffered and more nutrient rich than sites in the north. This distinct north–south gradient in water chemistry was not unexpected and can be explained by major landscape-level differences between the northern and southern parts of the country. For instance, the legacy of historical processes on present-day distribution patterns of vegetation is clearly visible in Sweden. At approximately 60°N latitude, a marked difference in vegetation occurs, and this ecotone (limes norrlandicus) basically delineates the transition from broad-leaved (e.g., English oak and elm) and coniferous mixed (e.g., Scots pine and spruce) forests in the south to the boreal pine and spruce forests in the north (Nordic Council of Ministers 1984). In addition, the limes norrlandicus ecotone coincides with the highest postglacial coastline, i.e., the highest level the sea reached after the last ice age, below which fluvial sediments have been deposited. Hence, these two landscape-scale discontinuities in vegetation and soil type can have profound importance for surface water chemistry. Finally, broad-scale climatic differences also exist between the northern and southern parts of the country, which are manifested in differences in discharge regimes. For example, streams in the south are dominated by autumn and winter rains, whereas streams in the north are dominated by snowmelt-driven peaks in runoff during spring (Anonymous 1979). Given the profound differences in climate, geomorphology, and vegetation between the northern and southern parts of the country, we anticipated discernible differences in the factors driving surface water chemistry. Indeed, the finding that landscape position is important in explaining variability in surface water chemistry has been shown in previous studies (e.g., Johnson 1999) and supports the use of ecoregions to partition natural variability. Ecoregions have been suggested as appropriate ecological units for classification because they are generally perceived as being relatively homogeneous, having similar climate, geology, and other environmental characteristics (Wright and others 1998), and hence are considered relatively good predictors of spatial patterns of surface water chemistry (e.g., Landers and others 1988; Larsen and others 1988). However, to be an appropriate classification tool, an ecoregion should minimize within-region and maximize among-region variability; ideally, knowledge of how both natural and human-induced variability affect ecosystem processes is needed to fully assess the adequacy of ecoregions for partitioning natural variability. For example, it is well known that catchment management practices can have profound effects on surface water quality.
For instance, whether a catchment is forested (which promotes infiltration and high transpiration and reduces runoff), clear-cut (which results in lower infiltration and transpiration and increased runoff), or reforested may singly or in concert affect the water chemistry of aquatic ecosystems (Foster and others 2003). The results of this study showed the importance of interactions between variables acting on multiple spatial scales for among-lake and among-stream variability in water chemistry. Somewhat surprising was the finding that the major drivers were similar between lakes and streams, despite the obvious differences between these ecosystem types. For instance, in streams nutrients spiral downstream, whereas in lakes nutrient retention is relatively high, depending on lake size and morphometry. Obviously, the chemical composition of a surface water body is a product of a series of mechanisms and processes acting along a scale continuum, i.e., from broad (geographic) to small (local) scales. Moreover, the environmental characteristics of a specific habitat are not random, but are considered to be controlled by macro-scale geomorphic patterns (Frissell and others 1986). Building on this premise, a conceptual framework has been developed in which the aquatic (stream) organism assemblage at a site can be seen as the product of a series of filters (e.g., from continental to microhabitat scales), with each species occurring at a site having passed through these filters (e.g., Tonn and others 1990; Poff 1997). Similarly, the surface water chemistry of a particular site is also constrained to some extent by a number of environmental filters. Small-scale systems develop within the constraints set by the broad-scale systems of which they are part, and likewise local-scale processes and conditions are generated by broad-scale geographic patterns and conditions. The idiosyncrasies of both ecosystems might be suppressed by the effects of large-scale factors. Geographic position functions as a template determining both regional and local factors. At the catchment level, geology controls soil type, weathering determines ion concentrations (and buffering capacity), and climate determines vegetation type (and land use). However, changes in land use (e.g., conversion of arable land to urban areas) and/or vegetation cover (e.g., deforestation or afforestation of arable land) are sources of catchment variation that might generate high amounts of variability or noise, making it difficult to tease apart components of natural variation from the effects of anthropogenic impact on surface water chemistry. This inherent catchment variation is probably responsible for the large amount of variation explained by regional factors, and it may mask the effects of individual features of lakes and streams, making the two ecosystem types appear similar in their response to environmental factors and influences. However, another caveat in addressing issues of “scale effects” is that the spatial resolution at which observations are made can confound interpretation of scale-related processes (e.g., Minshall 1988; Manel and others 2000). For example, environmental variables such as nutrient concentrations and hydrology are more influenced by regional-scale processes, whereas other variables such as in-stream vegetation cover are more influenced by local control mechanisms (e.g., Allan and others 1997). Our findings on the importance of interactions between geographic position and regional- and local-scale variables support this conclusion.
[ "anc", "total phosphorus", "toc", "geographic position", "spatial scale", "variation partitioning", "partial rda", "lentic", "lotic" ]
[ "P", "P", "P", "P", "P", "P", "P", "U", "U" ]
Knee_Surg_Sports_Traumatol_Arthrosc-4-1-2226194
Does ligament balancing technique affect kinematics in rotating platform, PCL retaining knee arthroplasties?
The goal of this prospective, randomized, blinded trial was to determine if ligament balancing techniques for rotating platform TKA affect postoperative knee kinematics. Sixteen patients with unilateral rotating platform TKA consented to participate in this institutional review board approved study. Eight patients were randomly selected to receive ligament balancing with an instrumented joint spreader device and eight patients received ligament balancing using fixed thickness spacer blocks. A single plane shape matching technique was used for kinematic analysis of static deep knee flexion and dynamic stair activities. There were no differences in knee kinematics between groups during static deep flexion activities. The spreader group demonstrated kinematics more similar to the normal knee during the ascending phase of the dynamic stair activity. Knee kinematics in static knee flexion were unaffected by ligament balancing technique, while knees balanced with the spreader demonstrated a medial pivot motion pattern during stair ascent. This medial pivot motion pattern may improve long-term results by more closely replicating normal knee kinematics. Introduction Rotating-platform total knee arthroplasty (TKA) has become increasingly popular because this type of design provides good tibiofemoral conformity and low contact stresses without imposing rotational constraint [28]. These designs have been used for well over 20 years with excellent survivorship [10]. Recent kinematic studies of rotating platform knee arthroplasties have shown excellent stability in extension, but frequent anterior translation of the femur with respect to the tibia in flexed postures [6]. These anterior femoral translations may reduce maximum weightbearing flexion [4] and implant longevity [9], and therefore merit further study. Tibiofemoral translations are influenced by ligament balance [4, 16, 21, 30], muscle and external forces, and implant design. Ligament balance is thought to play a particularly important role in the function of rotating-platform knee arthroplasties, and numerous balancing techniques have been reported [8, 12, 13, 18]. However, no well designed clinical studies of ligament balance and knee kinematics have been reported. By performing a prospective, randomized, blinded trial of two ligament balancing techniques for rotating platform TKA, we sought to determine if ligament balancing technique affected postoperative knee kinematics. We hypothesized that ligament balancing with a calibrated spreader/balancer would provide better controlled knee kinematics, specifically reduced anterior femoral translations with flexion, than ligament balancing with fixed thickness spacer blocks. Materials and methods Sixteen patients with unilateral osteoarthritis of the knee and with no history of knee injuries or trauma consented to participate in this prospective, randomized, blinded, and institutional review board approved study. All subjects received the same rotating platform, PCL-retaining total knee prosthesis (TC-PLUS SB Solution, Plus Orthopedics AG, Rotkreuz, Switzerland, Fig. 1). The subjects were randomly assigned to two groups preoperatively: eight knees received the prosthesis using a ligament balancing technique employing fixed thickness spacer blocks (control group), while the other eight knees received the same prosthesis employing a calibrated spreader/balancer device to equalize the joint gaps and ligament balance in flexion and extension (spreader group) (Fig. 2). Fig. 
1 All patients received a rotating platform total knee arthroplasty (TC-PLUS SB Solution, Plus Orthopedics, Rotkreuz, Switzerland). Fig. 2 One group of knees was treated using fixed thickness spacer blocks for ligament balancing (control group, left), and the other group was treated using a calibrated tensioning device (spreader group, right). All surgeries were performed by the senior surgeon (FK) at South–West London Elective Orthopaedic Centre, Epsom, United Kingdom. All study subjects were operated on in the supine position under spinal anesthesia and sedation, and each was administered a prophylactic antibiotic prior to inflation of the tourniquet. Standard extramedullary and intramedullary instrumentation was used in all knees for preparation of the tibia and femur, respectively. Standard sequential soft-tissue releasing techniques [22, 29] were utilized in both the control and spreader groups, which included resection or release of (1) the anterior fibres of the PCL, (2) the medial and posteromedial capsule, (3) medial osteophytes, and (4) the superficial MCL. In the control group, spacer blocks were used in extension and at 90 degrees of flexion to guide soft tissue releases to create balanced and equal flexion–extension gaps. In the spreader group, a balancer device (laminar spreader, Plus Orthopedics AG) (Fig. 2) was used in extension and at 90 degrees of flexion for the same purpose. Soft-tissue balance was assessed at 0 and 90 degrees of flexion with the patella equally subluxed during measurements in both the spreader and spacer block groups, in order to accommodate the appropriate measuring device in the joint space. A standard force of 20 N was applied to the medial and lateral jaws of the balancing device during this technique [23]. The posterior cruciate ligament (PCL) was retained in all knees with a bone block on the proximal tibia, recessing anterior fibers when necessary to achieve suitable balance. Study subjects were assessed with preoperative plain anteroposterior and lateral weightbearing radiographs of the knee and immediately postoperative non-weightbearing anteroposterior and lateral radiographs. Patients were assessed postoperatively over an average follow-up time of 11 ± 3 months (range: 7–15 months). The Knee Society Score [14] was employed as the scoring instrument. There were no differences between the control and spreader groups in height, weight, age, sex distribution, preoperative deformity, or preoperative clinical scores (Table 1).

Table 1 Patient demographics and clinical assessments (mean ± 1 SD)

                              Control        Spreader       P value
  Age at operation (years)    71.0 ± 8.4     72.2 ± 6.6     0.96
  Height (cm)                 167 ± 7.6      165 ± 7.8      0.65
  Weight (kg)                 75.0 ± 22.6    70.3 ± 12.9    0.72
  Sex (M/F)                   3/5            4/4            1.0*
  Varus/valgus distribution   8/0            7/1            1.0*
  Pre-op knee score           42.1 ± 10.3    50.0 ± 11.7    0.13
  Pre-op function score       50.0 ± 18.9    55.0 ± 20.4    0.50
  Post-op knee score          90.5 ± 5.9     93.5 ± 1.8     0.51
  Post-op function score      81.3 ± 23.4    88.1 ± 15.1    0.72
  Follow-up (months)          10.3 ± 3.1     11.3 ± 2.3     0.44

*Fisher’s exact test

Follow-up consisted of clinical and fluoroscopic assessment performed at Mayday University Hospital, Croydon, United Kingdom. Fluoroscopic imaging (Siemens Polystar TOP, Siemens AG, Munich, Germany) consisted of (1) a weightbearing maximum flexion lunge activity, (2) kneeling on a padded bench to maximum comfortable flexion, and (3) four cycles of step-up/down on a 25 cm step.
For the stair activity, the subjects faced the same direction throughout the cycle; therefore, the step-down was a backward motion that reversed the step ascent motion. Patients were instructed on the study activities prior to recording and were given an opportunity to practice until comfortable. Lateral fluoroscopic views of the knee were recorded in the maximally flexed positions for the lunge and kneeling activities, as were four repeat trials of step-up/down on the stair. The fluoroscopic images were recorded at 15 frames per second onto an S-VHS VCR. Views of calibration targets also were acquired for distortion correction and optical calibration. The three-dimensional (3D) positions and orientations of the implant components were determined using model-based shape matching techniques [3, 5], including previously reported techniques, manual matching, and image space optimization routines (Fig. 3). The fluoroscopic images were digitized and corrected for static optical distortion. The optical geometry of the fluoroscopy system (principal distance, principal point) was determined from images of calibration targets [3, 5]. The implant surface model was projected onto the geometry-corrected image, and its 3D pose was iteratively adjusted to match its silhouette with the silhouette of the subject’s TKA components. The results of this shape matching process have standard errors of approximately 0.5° to 1.0° for rotations and 0.5–1.0 mm for translations in the sagittal plane [3, 5]. Fig. 3 Model-based shape matching techniques are used to determine the three-dimensional pose of the arthroplasty components from fluoroscopic images. The fluoroscopic image shows the outlines, in red, of the implant surface models superimposed in their registered positions. The images along the right margin show medial, lateral, coronal, and transverse views of the implant components’ relative orientations. The relative motions of the femoral and tibial components were determined from the 3D pose of each TKA component using the convention of Tupling and Pierrynowski [26]. The locations of condylar contact were estimated as the lowest point on each femoral condyle relative to the transverse plane of the tibial baseplate. Anteroposterior translations of the condyles were computed with respect to the anteroposterior midpoint of the tibial baseplate. Motion of the mobile bearing was not analyzed, since the mobile bearing insert was not visible in the X-ray images and could not be tracked without the addition of metallic markers. Researchers were unblinded to subject group membership only after all kinematic data had been produced. Statistical comparisons of the kinematic measurements were performed (SPSS ver 13, SPSS Inc., Chicago, US) using two-way repeated measures ANOVA with post hoc pair-wise comparisons (Tukey/Kramer) at a 0.05 level of significance. All other parameters were evaluated using non-parametric tests. Results Both the Knee Score and the Function Score were slightly lower for the control group (Table 1). This tendency existed preoperatively but was not statistically significant. For the maximum kneeling activity, no significant differences were found in knee angles or translations (Table 2). Maximum implant flexion for the control and spreader groups averaged 102° ± 13° and 108° ± 10° (P = 0.34), respectively. Tibial component valgus for the control and spreader groups averaged 0° ± 2° and −1° ± 2° (P = 0.56), respectively.
Tibial external rotation for the control and spreader groups averaged −5° ± 7° and −5° ± 6° (P = 0.87), respectively. Medial tibial contact was located 2.7 ± 12.2 and 1.8 ± 8.2 mm (P = 0.87) posterior to the midline of the tibial plateau for the control and spreader groups, respectively. Lateral tibial contact was located 10.5 ± 11.4 and 11.1 ± 11.8 mm (P = 0.93) posterior to the midline of the tibial plateau for the control and spreader groups, respectively.

Table 2 Knee pose during maximum flexion kneeling (mean ± 1 SD)

  Group      Flexion (°)    Valgus (°)   Tibial ext. rot. (°)   Medial AP (mm)   Lateral AP (mm)
  Control    102.0 ± 12.8   0.1 ± 2.1    −4.7 ± 7.4             −2.7 ± 12.2      −10.5 ± 11.4
  Spreader   107.9 ± 10.1   −0.5 ± 1.8   −5.3 ± 6.3             −1.8 ± 8.2       −11.1 ± 11.8
  P value    0.34           0.56         0.87                   0.87             0.93

For the maximum lunge activity, no significant differences were found in knee angles or translations (Table 3). Knee flexion for the control and spreader groups averaged 95° ± 15° and 102° ± 11° (P = 0.36), respectively. Tibial component valgus for the control and spreader groups averaged 0° ± 1° and −1° ± 2° (P = 0.62), respectively. Tibial external rotation for the control and spreader groups averaged −9° ± 6° and −6° ± 7° (P = 0.29), respectively. Medial tibial contact was located 0.3 ± 8.2 mm and 6.7 ± 7.7 mm (P = 0.15) posterior to the midline of the tibial plateau for the control and spreader groups, respectively. Lateral tibial contact was located 16.5 ± 8.7 mm and 16.8 ± 9.9 mm (P = 0.96) posterior to the midline of the tibial plateau for the control and spreader groups, respectively.

Table 3 Knee pose during maximum flexion lunge (mean ± 1 SD)

  Group      Flexion (°)    Valgus (°)   Tibial ext. rot. (°)   Medial AP (mm)   Lateral AP (mm)
  Control    95.3 ± 15.1    −0.1 ± 1.4   −9.6 ± 5.9             −0.3 ± 8.2       −16.5 ± 8.7
  Spreader   101.6 ± 10.6   −0.6 ± 2.1   −5.7 ± 7.3             −6.7 ± 7.7       −16.8 ± 9.9
  P value    0.36           0.62         0.29                   0.15             0.96

For the stair activity, knees in the spreader group exhibited more posterior medial (P = 0.04, RM-ANOVA) and lateral (P < 0.005, RM-ANOVA) condylar contact than the control knees. There was no difference in average tibial rotation between the two groups, and no pair-wise comparisons at specific flexion ranges resulted in significant differences (Fig. 4). On average, both groups of knees had approximately 2° of tibial internal rotation at 0° flexion and rotated to 7° of tibial internal rotation at 80° flexion. Medial contact remained approximately 2 mm posterior to the AP midpoint from 0° to 50° flexion, then moved anteriorly from 50° to 80° flexion. The control group showed greater anterior translation of medial contact from 50° to 80° flexion than did the spreader group. Lateral contact was more posterior in the spreader group throughout the stair activity. Both groups showed posterior translation of lateral contact of 2–3 mm from 0° to 30° flexion, with very little net translation from 30° to 80° flexion. Fig. 4 Knee motions during the stair activity differed between the control and spreader groups. Condylar positions were significantly more posterior in the spreader group. There were no significant differences in tibial rotation, nor were there significant pair-wise differences for rotations or translations. Tibiofemoral kinematics during the step activity also were compared using average centers of rotation (COR) for femoral motion with respect to the tibial baseplate (Fig. 5) [6]. The COR provides a concise measure of femoral AP translation: if the COR is central (close to 0%), the femur rotates about the center of the tibia with little AP translation.
A medial COR (between 0% and +50%) indicates that the femur translates posteriorly with external rotation during flexion. A lateral COR (between −50% and 0%) indicates that the femur translates anteriorly with external rotation during flexion. For the entire step-up/down cycle, the centers of rotation were at 0% (central) and 13% (medial) (P = 0.058) for the control and spreader groups, respectively. When step-up kinematics were compared, the spreader group showed a COR located more medially (28%) than the control group (0%, P < 0.05, Table 4). There was no difference in COR for step-down kinematics (Table 4). Both groups of knees showed tibial internal rotation with knee flexion, 8.5° and 7.6° for the control and spreader groups, respectively. These differences were not statistically significant. Fig. 5 Average centers of rotation for the entire stair activity were in the center of the tibial plateau (0%) for the control group (left) and on the medial side (13%) for the spreader group (right). This difference was not significant (P = 0.058).

Table 4 Medial/lateral center of rotation (COR) during the stair activity (mean ± 1 SD)

  Group      Extension phase (%)   Flexion phase (%)
  Control    −3 ± 19               −4 ± 2
  Spreader   28 ± 41               −3 ± 3
  P value    <0.05                 >0.05

None of the subjects demonstrated valgus or varus angles larger than 2 degrees during motion; consequently, there was no obvious evidence of condylar lift-off. Discussion One goal of TKA is to reproduce normal knee kinematics. Ligament and soft-tissue balance are thought to play critical roles in obtaining optimal kinematic behavior. The theoretical merits of many balancing techniques and instruments have been discussed [13, 25, 27, 30]. This prospective, randomized, and blinded study evaluated two ligament balancing techniques with a posterior cruciate retaining, rotating platform total knee arthroplasty to determine if balancing technique affected knee kinematics. Randomizing patients for surgical treatment and blinding the investigators to group membership until after all data had been processed reduced the potential for selection, measurement, and interpretation bias to affect the study findings. All subjects demonstrated satisfactory knee function based on clinical scores, there were no clinical complications in any knee, and no evidence of condylar lift-off was found during dynamic activity. The two knee groups exhibited no significant differences in knee kinematics for the weightbearing lunge and passive kneeling activities. This similarity is not unexpected, given that the posterior cruciate was retained in all knees. It is interesting to note that the tibiofemoral AP position in these knees appears to differ from previous reports of mobile bearing knee arthroplasties. Banks et al. [4] reported lunge kinematics for a mixed group of rotating platform and rotating-and-translating arthroplasties during the same lunge activity, and observed 102° average flexion, 7.7° average tibial internal rotation, and 2.2 mm posterior femoral position with respect to the tibial AP midpoint. The control and spreader groups exhibited approximately the same knee angles, but 8.4 and 11.8 mm posterior femoral position, respectively. Greater posterior femoral translation with flexion is suggestive of more physiologic posterior cruciate ligament function and knee mechanics, although there is insufficient information to attribute those translations specifically to surgical technique, implant design, or a combination of factors.
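The COR convention described above can be made concrete: over a flexion interval, the AP displacement of a rigid femur varies linearly across the mediolateral width of the baseplate, and the COR is the mediolateral position where that displacement is zero. The following sketch expresses the result on the ±50% scale used in the text; the function name and the assumed contact-track geometry (symmetric tracks at ±half_width from the baseplate center) are illustrative, not the authors' code.

```python
import numpy as np

def center_of_rotation(d_med, d_lat, half_width=25.0):
    """Medial/lateral location of the tibiofemoral center of rotation over
    one flexion interval, as a percentage of baseplate width
    (+50 = medial edge, 0 = center, -50 = lateral edge).

    d_med, d_lat : AP displacement (mm, anterior positive) of the medial
                   and lateral condylar contact points over the interval.
    half_width   : assumed ML distance (mm) from the baseplate center to
                   each condylar contact track.
    """
    if np.isclose(d_med, d_lat):
        return np.nan  # pure AP translation: no finite pivot point
    # AP displacement of a rigid body is linear in ML position; the COR is
    # where that line crosses zero.
    x0 = -half_width * (d_med + d_lat) / (d_med - d_lat)
    return 100.0 * x0 / (2.0 * half_width)

# center_of_rotation(0.0, -3.0) -> 50.0: the lateral condyle rolls back
# while the medial condyle stays put, i.e., a pure medial-pivot pattern.
```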
The amount of tibial rotation observed in the flexed postures is similar to previous reports for knee arthroplasties [6, 15] but is much smaller than the amount of tibial rotation observed in healthy knees in similar postures [1, 20]. Ligament balancing technique did affect knee kinematics during the dynamic stair activity. Condylar contact locations remained more posterior on the tibia and had a more medial center of rotation during step-up in the spreader group. These findings suggest that the spreader balancing technique provided more normal balance or stability to the medial compartment of the knee, resulting in less medial contact translation during the stair activity. Medial contact in the control group moved anteriorly with flexion during the stair activity, indicating greater functional laxity in that compartment. Simple comparisons of knee kinematics across groups are possible using the center of rotation characterization. A medial center of rotation has been described in the healthy normal knee [2, 17]. The spreader group showed a medial center of rotation during stair ascent, indicating that medial contact did not move significantly while lateral contact moved anteriorly with knee extension and femoral internal rotation. The control group showed a center of rotation close to the middle of the tibia for the stair activity, indicating that the femur rotated internally during knee extension with little AP translation (medial contact moved posteriorly and lateral contact moved anteriorly with extension). The center of rotation in the spreader group knees differed between the ascending (medial COR) and descending (central COR) phases of the step-up/down activity (Table 4). This suggests that the spreader balancing technique provided greater anterior medial stability than the technique employing fixed thickness spacer blocks, whereas posterior medial stability was equivalent between the two balancing techniques. Banks and Hodge [7] reported on a mixed group of 44 rotating platform and rotating-and-translating mobile bearing knee arthroplasties during the same stair activity, and found average tibial rotations of 9° and average centers of rotation at −19% (lateral). These motions were associated with anterior femoral translation with flexion, which has been observed in numerous knee arthroplasty designs [11, 19, 24]. The knees in the present study showed similar amounts of tibial rotation, but both groups showed centers of rotation that were more medially located. Thus, the knees in this study exhibited less anterior femoral translation with flexion than the knees in the previous report, suggesting that both balancing techniques provided beneficial tibiofemoral stability compared to the group average of well-functioning mobile-bearing knee arthroplasties. This double-blinded, prospective, randomized study used fluoroscopic kinematic measurements to determine if two ligament balancing techniques would affect knee motions in several activities. Kinematics in flexion were similar, with both groups showing a more posterior femoral position than previously reported for similar implant designs. Knees operated on with a spreader/balancer device showed a more medial center of rotation during stair ascent, and both groups showed average centers of rotation that were more medial than previously reported for similar implant designs. Kinematics closer to the normal knee may yield improved knee performance and implant longevity.
However, these kinematic differences are clinically insignificant at short-term follow-up, and their long-term significance remains to be studied.
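The kinematic measurements above rest on the single-plane, model-based registration described in the Methods. As a schematic illustration of that idea (and not the authors' actual routines [3, 5]), the sketch below projects implant surface points through a calibrated pinhole model and minimizes the mean distance between the projection and the implant silhouette segmented from the fluoroscopic image. The function names, the use of all surface points in place of the true occluding contour, and the choice of a Nelder-Mead optimizer are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree

def project(pts, pose, pd, pp):
    """Pinhole projection of 3-D implant model points (mm) to the image
    plane, given a 6-DOF pose: three Euler angles (rad) and three
    translations (mm). pd = principal distance, pp = principal point."""
    ax, ay, az, tx, ty, tz = pose
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    p = pts @ (Rz @ Ry @ Rx).T + np.array([tx, ty, tz])
    return pd * p[:, :2] / p[:, 2:3] + pp  # assumes all points at positive depth

def silhouette_cost(pose, model_pts, contour, pd, pp):
    """Mean distance from the projected model to the segmented implant
    silhouette (contour: N x 2 array of image points)."""
    return cKDTree(contour).query(project(model_pts, pose, pd, pp))[0].mean()

# result = minimize(silhouette_cost, x0=initial_pose,
#                   args=(model_pts, contour, pd, pp), method="Nelder-Mead")
```

Cost functions of this kind have many local minima, which is consistent with the published workflow combining manual initial matching with image-space optimization, as noted in the Methods.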
[ "rotating platform", "tka", "knee kinematics", "spreader balancing device", "soft tissue balancing", "randomized control trial" ]
[ "P", "P", "P", "R", "R", "R" ]
Dig_Dis_Sci-3-1-2140097
Twenty-Four Hour Tonometry in Patients Suspected of Chronic Gastrointestinal Ischemia
Background and aims Gastrointestinal tonometry is currently the only clinical diagnostic test that enables identification of symptomatic chronic gastrointestinal ischemia. Gastric exercise tonometry has proven its value for the detection of ischemia in this patient group, but has its disadvantages. Earlier studies with postprandial tonometry gave unreliable results. In this study we challenged (again) the use of postprandial tonometry in patients suspected of gastrointestinal ischemia. Introduction In patients presenting with postprandial pain, especially when associated with weight loss and a positive history of cardiovascular disease, chronic gastrointestinal ischemia (CGI) should be included in the differential diagnosis [1–3]. Vascular anatomic abnormalities can be demonstrated with duplex ultrasound or angiography. However, a stenosis does not necessarily imply ischemia, due to the abundant collateral circulation. We have recently demonstrated that gastric exercise tonometry (GET) allows differentiation between patients with and without gastrointestinal ischemia. GET showed an accuracy of 87% in the detection of gastrointestinal ischemia, and the patients selected for treatment using GET are likely to benefit from revascularization techniques [4–9]. Gastric (exercise) tonometry is not widely accepted as a diagnostic technique for gastrointestinal ischemia due to the lack of familiarity with this approach and its time-consuming nature. Twenty-four hour gastrointestinal tonometry with meals is more familiar (resembling 24-h pH measurement) and easier to perform. Over 90% of CGI patients report postprandial complaints, whereas only 60% report exercise-related complaints. Ischemic pain after meals in CGI is caused by an insufficient increase of postprandial blood flow to balance the increased metabolic demand of the gastrointestinal tract [10]. This indicates that tonometry directly after meals would be the most physiologic approach to measuring ischemia in these patients. However, earlier studies showed unreliable results using tonometry after meals, related to insufficient suppression of gastric acid secretion and dilution effects [11–13]. We therefore started by testing standard meals in vitro and performed a study in healthy subjects using these meals and high-dose proton pump inhibition (PPI) as an optimal gastric acid suppressant [14]. In this study we retrospectively evaluated the additional value of prolonged gastrointestinal tonometry in a group of patients suspected of CGI. Methods Patients with unexplained chronic abdominal symptoms who were referred for suspected CGI were included in this study. More common causes of chronic abdominal symptoms had been excluded previously by appropriate diagnostic evaluation. All patients had imaging of the splanchnic arteries [intra-arterial digital subtraction multiplane abdominal angiography (DSA) and duplex ultrasound scanning] and GET. Along with this standard diagnostic work-up, patients had twenty-four hour (24-h) tonometry testing directly following GET. Gastric exercise tonometry (GET) GET was performed using a standardized protocol, before, during, and after 10 min of submaximal exercise, as described previously, with both gastric and jejunal catheters [15]. A maximal gradient between stomach and arterial PCO2 was calculated.
The criteria for a positive GET (all three required), established in healthy volunteers and a patient cohort, were: (1) a gradient of >0.8 kPa in the stomach after exercise, (2) an increase in gastric PCO2, and (3) an arterial lactate <8 mmol/l [8,15]. Twenty-four hour tonometry testing A gastric and a jejunal tonometer catheter (8 French, Datex Ohmeda, Helsinki, Finland) and a gastric pH meter (pHersaflex™, internal reference, Medical Measurement Systems, Enschede, the Netherlands) were inserted nasogastrically using fluoroscopy. Intravenous infusion of omeprazole was started with a bolus of 80 mg in 30 min, followed by 8 mg/h, using an infusion pump (Perfusor compact®, B Braun Melsungen AG, Melsungen, Germany). The catheters were connected, respectively, to the Tonocap (Datex Ohmeda, Helsinki, Finland) and the pH recording device (Medical Measurement Systems, Enschede, the Netherlands). The Tonocaps were connected to a computer on which a data-collection program automatically registered the gastric and jejunal PCO2 levels every 10 min. The gastric pH was automatically recorded and stored in a datalogger (Medical Measurement Systems, Enschede, the Netherlands), which also allows real-time reading of the gastric pH. As soon as the gastric pH was >4.0 for ≥30 min, the first meal was started (t = 0 min). All patients had meals at standard times: breakfast I (08:00), dinner (12:00), liquid compound meal I (15:00), bread meal (18:00), liquid compound meal II (21:00), and breakfast II (08:00 the next day). The breakfast, bread, and dinner meals were standardized. The liquid compound meal consisted of two packages of 200 ml each (Nutridrink®, Nutricia, The Netherlands). The contents and caloric density of each meal used are presented in Table 1. The patients were instructed to eat their meals within 15 min. The consumption of small amounts of (noncarbonated) liquids was allowed and noted; consumption of alcohol-, acid-, and CO2-containing beverages was strictly prohibited. Due to the limited length of the catheters, the subjects were only capable of performing very minor exercise and were allowed to lie down in the supine position from 22:00.

Table 1. Composition characteristics of the various standard meals

Meal              | Composition                                    | kcal/g
Breakfast         | Fat (16%), proteins (22%), carbohydrates (62%) | 1.7
Dinner            | Fat (16%), proteins (47%), carbohydrates (37%) | 2.2
Bread meal        | Fat (10%), proteins (19%), carbohydrates (71%) | 1.8
Compound solution | Fat (35%), proteins (16%), carbohydrates (49%) | 1.5

Percentages of delivered energy (En%); kcal = kilocalories; g = gram.

Diagnosis and treatment The results of all diagnostic procedures were discussed in a multidisciplinary team. In this team a gastroenterologist, a vascular surgeon, and an interventional radiologist discussed the symptoms, medical history, physical examination, and all diagnostic evaluations, with the exception of the results of the 24-h tonometry. The latter therefore did not influence the consensus diagnosis. The multidisciplinary team assigned every patient to one of three categories: (1) no splanchnic stenosis, (2) splanchnic stenosis and no ischemia, or (3) splanchnic stenosis and ischemia (i.e. chronic gastrointestinal ischemia, CGI). The gold standard for the diagnosis of chronic gastrointestinal ischemia was a positive outcome after successful revascularization at (long-term) follow-up. The outcome of GET and the consensus diagnosis of the multidisciplinary team were compared to the results of the 24-h tonometry testing.
Definition of a positive (abnormal) 24-h tonometry The cut-off values established in the previous healthy subjects study were used to define the criteria for the results on 24-h tonometry [9]. These cut-off values were, for the stomach: 12.1, 11.4, and 11.3 kPa for the breakfast (or bread meal), dinner, and compound solution meals, respectively; in the jejunum these threshold values were, respectively, 12.0, 13.6, and 10.6 kPa. The criteria for a positive finding (abnormal result) on 24-h tonometry were: (1) pathologic responses after three or more (standard) meals, or (2) a combination of one or two pathologic responses after (standard) meals combined with a median PCO2 > 8.0 kPa measured in between meals. Statistics Data were expressed as mean (standard deviation) or median (range) as appropriate. The data of the ischemic and non-ischemic patients were compared using Student’s t-test or χ2 testing. Sensitivity, specificity as well as positive and negative predictive values of 24-h tonometry were calculated with the consensus diagnosis as the gold standard. Results Patient characteristics Over a period of three years (2002–2005), 24-h tonometry was performed along with the standard work-up in 37 patients referred for suspected CGI. Of these, 33 (89%) patients had a complete work-up and were included in this study. Mean age was 54 (22–82) years, with eight males and 25 females. Significant splanchnic stenoses were found in 23/33 (69%) patients. A significant single-vessel splanchnic stenosis was found in 14/33 (42%) patients [13 celiac artery (CA) and one superior mesenteric artery (SMA)]. A significant stenosis of two splanchnic arteries was found in 9/33 (27%) patients (all CA and SMA stenoses). All 33 patients had chronic abdominal pain for a mean of 35 months (range 3–120), 24/33 (73%) patients had pain following meals, 11/33 (33%) patients reported pain during or after exercise, 9/33 (27%) patients reported both pain following meals and during, or after, exercise, and 23/33 (70%) patients reported weight loss. The mean weight loss was 11 kg (range 3–28 kg) in 17 months (range 2–120 months); see Table 2. Gastric exercise tonometry In 18/31 (58%) patients a gradient of >0.8 kPa was found by GET. In 14/18 (78%) patients this increased gradient was defined as abnormal using the three criteria as previously defined. In four patients a gradient >0.8 kPa was not caused by ischemia: three patients had persistent acid production, and one performed excessive exercise (leading to false positive findings) [8]. Consensus diagnosis of the multidisciplinary team and results after treatment According to the team, no stenosis (and no ischemia), stenosis but no ischemia, and stenosis with ischemia (CGI) were diagnosed in, respectively, 12 (36%), four (12%) and 17 (52%) patients. Fifteen patients diagnosed with CGI had treatment: 10 patients surgical and five patients stent-placement therapy. Three patients had no treatment: two patients preferred a conservative approach and one patient proved inoperable due to comorbidity. After a mean follow-up of 55 months (49–85), 12 out of 15 (80%) patients were free of complaints, one patient died immediately after surgical revascularization (multiple organ failure), one patient had partial improvement, and one patient had persistent complaints. The latter patient had a celiac artery release, and was diagnosed as having no CGI after follow-up; this patient had no abnormalities on 24-h tonometry testing; see Table 2.
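Before turning to the patient-level results in Table 2, the two positivity rules used in this work-up can be restated in compact form. The sketch below is an illustrative reconstruction, not the authors' software: the function names, the data layout (one peak PCO2 value per meal plus a median interprandial PCO2), and the discounting of peaks coinciding with gastric pH < 4 (mentioned later in the Results) are assumptions made for the example.

```python
# Hedged sketch of the positivity rules for GET and 24-h tonometry.
# Names and data layout are illustrative assumptions, not the study's code.

# Meal-specific peak PCO2 cut-offs (kPa) from the healthy-subjects study.
CUTOFFS = {
    "stomach": {"breakfast": 12.1, "bread": 12.1, "dinner": 11.4, "compound": 11.3},
    "jejunum": {"breakfast": 12.0, "bread": 12.0, "dinner": 13.6, "compound": 10.6},
}

def positive_get(gradient_kpa, gastric_pco2_increased, arterial_lactate_mmol_l):
    """A positive GET requires all three criteria."""
    return (gradient_kpa > 0.8
            and gastric_pco2_increased
            and arterial_lactate_mmol_l < 8.0)

def positive_24h(site, meal_peaks_kpa, median_interprandial_pco2_kpa,
                 ph_below_4_during_peak=()):
    """meal_peaks_kpa: mapping of meal name -> peak PCO2 (kPa) after that meal."""
    pathologic = [meal for meal, peak in meal_peaks_kpa.items()
                  if peak > CUTOFFS[site][meal]
                  and meal not in ph_below_4_during_peak]  # pH < 4 peaks discounted
    if len(pathologic) >= 3:          # criterion (1)
        return True
    return (len(pathologic) >= 1      # criterion (2)
            and median_interprandial_pco2_kpa > 8.0)

# Example: two pathologic gastric meal responses plus a raised interprandial median.
print(positive_24h("stomach",
                   {"breakfast": 12.5, "dinner": 11.9, "compound": 10.8},
                   median_interprandial_pco2_kpa=8.4))  # -> True
```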
Table 2. Patient characteristics, results of diagnostic tests and conclusion

Nr | Age | Sex | Stenosis | PP pain | PE pain | GET result (kPa) | Consensus diagnosis | Treatment | Outcome complaints | Final conclusion | 24-h tono
1  | 61 | F | None     | + | − | 0.5     | No stenosis, no ischemia | –            |                | No stenosis, no ischemia | Normal
2  | 36 | M | CA       | + | − | 0.4     | No ischemia              | –            |                | No ischemia              | Normal
3  | 55 | F | CA       | + | − | 0.7 (a) | CGI                      | Surgery      | Free           | CGI                      | Abnormal
4  | 76 | M | None     | − | − | 0.6     | No stenosis, no ischemia | –            |                | No stenosis, no ischemia | Normal
5  | 47 | M | None     | − | + | 1.6 (b) | No stenosis, no ischemia | –            |                | No stenosis, no ischemia | Normal
6  | 42 | F | CA + SMA | − | − | 1.5     | CGI                      | Surgery      | Free           | CGI                      | Abnormal
7  | 65 | F | CA + SMA | − | − | 2.0     | CGI                      | Conservative |                | CGI                      | Abnormal
8  | 77 | F | CA + SMA | + | − | 2.2     | CGI                      | Surgery      | Died post-op.  | CGI                      | Abnormal
9  | 72 | F | SMA      | + | + | 1.8     | CGI                      | Stent        | Free           | CGI                      | Abnormal
10 | 41 | M | None     | + | − | 2.8 (c) | No stenosis, no ischemia | –            |                | No stenosis, no ischemia | Normal
11 | 72 | F | CA       | + | − | 1.4     | CGI                      | Stent        | Partial relief | CGI                      | Abnormal
12 | 67 | F | CA       | − | − | 0.9 (c) | No stenosis, no ischemia | –            |                | No ischemia              | Normal
13 | 40 | F | CA       | − | + | 1.1 (c) | CGI                      | Surgery      | Unchanged      | No ischemia              | Normal
14 | 82 | M | CA + SMA | + | + | 1.0     | CGI                      | Conservative |                | CGI                      | Normal
15 | 54 | F | CA       | + | + | 0.9     | CGI                      | Conservative |                | CGI                      | Normal
16 | 26 | M | CA       | + | + | 1.0     | CGI                      | Surgery      | Free           | CGI                      | Abnormal
17 | 58 | M | None     | + | + | 2.2 (c) | No stenosis, no ischemia | –            |                | No stenosis, no ischemia | Normal
18 | 22 | F | CA       | + | − | 0.7     | No ischemia              | –            |                | No ischemia              | Normal
19 | 42 | F | None     | + | − | 1.1 (d) | No stenosis, no ischemia | –            |                | No stenosis, no ischemia | Normal
20 | 48 | F | CA       | + | − | 1.7     | CGI                      | Surgery      | Free           | CGI                      | Normal
21 | 51 | F | CA       | + | − | 0.7     | No ischemia              | –            |                | No ischemia              | Normal
22 | 43 | F | CA + SMA | + | + | 0.5 (e) | CGI                      | Surgery      | Free           | CGI                      | Abnormal
23 | 54 | F | CA + SMA | − | − | 1.5     | CGI                      | Surgery      | Free           | CGI                      | Normal
24 | 76 | F | CA + SMA | + | − | 0.6 (f) | CGI                      | Stent        | Free           | CGI                      | Abnormal
25 | 53 | M | None     | + | − | 2.0 (c) | No stenosis, no ischemia | –            |                | No stenosis, no ischemia | Normal
26 | 53 | F | CA       | + | + | 1.1     | CGI                      | Stent        | Free           | CGI                      | Abnormal
27 | 50 | F | CA       | − | − | 1.3     | CGI                      | Surgery      | Free           | CGI                      | Abnormal
28 | 61 | F | CA + SMA | + | + | 1.7     | CGI                      | Stent        | Free           | CGI                      | Abnormal
29 | 63 | F | None     | + | − | 0.8     | No stenosis, no ischemia | –            |                | No stenosis, no ischemia | Normal
30 | 24 | F | CA       | + | + | 1.3     | CGI                      | Surgery      | Free           | CGI                      | Abnormal
31 | 74 | F | CA + SMA | − | − | 0.8     | No ischemia              | –            |                | No ischemia              | Normal
32 | 41 | F | None     | + | − | 1.2 (d) | No stenosis, no ischemia | –            |                | No stenosis, no ischemia | Normal
33 | 63 | F | None     | + | − | 0.0     | No stenosis, no ischemia | –            |                | No stenosis, no ischemia | Abnormal

PP = postprandial, PE = post-exercise, M = male, F = female; CA = celiac artery, SMA = superior mesenteric artery; GET = gastric exercise tonometry, result presented as gradient (in kPa); 24-h tono = twenty-four hour tonometry; CGI = chronic gastrointestinal ischemia. (a) false negative GET; (b) acid production during GET; (c) false positive GET; (d) no CO2 rise during GET; (e) abnormal jejunal gradient during GET; (f) minor exercise during GET.

Twenty-four hour tonometry The 24-h tonometry was well tolerated in all patients; no medical or technical problems occurred. In 28/33 (85%) patients tonometric measurements were performed in both stomach and jejunum; in 5/33 (15%) patients only stomach measurements were performed (in all five patients placement of the jejunal tonometry catheter failed); see Fig. 1. In 8/33 (24%) patients a dose reduction of the compound solution was necessary, due to the patient’s inability to consume the normal dosage. The overall gastric acid suppression was good, with a gastric pH > 4 during 94.8% (range 71–100%) of the time. Pathologic peaks during 24-h tonometry that coexisted with periods of pH < 4 were defined as non-pathologic peaks. Fig. 1 Individual curves of results of 24-h tonometry in a non-ischemic (A) and an ischemic patient (B).
Individual curves of a non-ischemic patient (A) and a chronic gastrointestinal ischemia patient (B); on the horizontal axis the time from 0 to 24 h, on the vertical axis PCO2 from 0 to 20 kilopascal (kPa); curves: PCO2 values measured every 10 min in stomach (□), jejunum (×) and four meals spread over the 24-h period (*). The fasting baseline of stomach and jejunal PCO2 measurements was significantly higher in the ischemic patients than in the non-ischemic patient group. The jejunal PCO2 peaks after breakfast and dinner were significantly higher in the ischemic patients compared to the non-ischemic patients; see Tables 3 and 4.

Table 3. Results of 24-h tonometry in ischemic and non-ischemic patients

             CGI patients                              Non-ischemic patients
             Peak          Δ-peak      Mean PCO2       Peak        Δ-peak     Mean PCO2
Stomach
  B          10.6 (3.9)    4.0 (3.4)   –               8.5 (2.7)   2.6 (2.0)  –
  D          9.9 (1.9)     3.7 (1.5)   –               8.5 (2.3)   3.3 (2.5)  –
  CS         10.4 (3.0)    3.3 (2.2)   –               8.1 (2.6)   1.8 (1.2)  –
  Fasting    –             –           7.7 (1.4) a     –           –          6.8 (0.7)
    Day      –             –           6.9 (1.1)       –           –          6.5 (0.7)
    Night    –             –           8.2 (1.8)       –           –          6.9 (0.8)
Jejunum
  B          11.6 (3.2) b  3.2 (1.5)   –               8.8 (1.4)   2.1 (0.8)  –
  D          12.2 (3.4) c  3.7 (2.0)   –               9.0 (1.7)   2.2 (0.7)  –
  CS         10.6 (2.2)    2.5 (1.6)   –               9.0 (1.9)   1.5 (1.0)  –
  Fasting    –             –           8.9 (1.6) d     –           –          7.4 (0.7)
    Day      –             –           8.8 (1.3)       –           –          7.5 (0.9)
    Night    –             –           8.9 (1.9)       –           –          7.5 (0.8)

CGI = chronic gastrointestinal ischemia; PCO2 = carbon dioxide in kilopascal; B = breakfast, D = dinner, CS = compound solution meal; a P = 0.02, b P = 0.005, c P = 0.04, d P = 0.03.

Table 4. Results of the different tests compared to the final diagnosis

Final diagnosis | Patients | GET      | 24-h tonometry | Combination GET–24-h tonometry
CGI             | 17       | 14 (82%) | 13 (76%)       | 17 (100%)
No ischemia     | 16       | 11 (69%) | 15 (94%)       | 16 (100%)

Data presented as number of patients, with positive predictive value (PPV) and negative predictive value (NPV); GET = gastric exercise tonometry; CGI = chronic gastrointestinal ischemia.

Using the previously defined criteria, 13/17 patients with CGI and 15/16 patients without ischemia were correctly identified with 24-h tonometry. The calculated test properties show a sensitivity of 76% and a specificity of 94%, a positive predictive value (PPV) of 76% and a negative predictive value (NPV) of 94% for detection of ischemia by 24-h tonometry alone. Combining the results of GET and 24-h tonometry, 17/17 patients with CGI and 16/16 patients without ischemia could be correctly identified (sensitivity of 100% and specificity of 100%). Comparing patients with single- and multi-vessel ischemia, or patients with or without postprandial and/or exercise-related complaints, no significant differences in diagnostic accuracy were found. Discussion The results of this retrospective study indicate that 24-h gastrojejunal tonometry is feasible and may be clinically useful in diagnosing chronic gastrointestinal ischemia. The measurements were easy to perform, generally well tolerated, and no complications occurred. The fasting baseline PCO2 in both stomach and jejunum was significantly higher in the ischemic patient group compared to the non-ischemic patients. This difference might be explained by the continuously compromised arterial blood flow of the mucosa of the stomach (and/or jejunum) in the ischemic patient group. The significantly higher maximum peaks after the breakfast and dinner and the borderline significant higher peak after the compound solution meal (P = 0.07 and 0.052 for stomach and jejunum, respectively) support the theory that, with a maximal metabolic oxygen demand, mucosal ischemia is apparent and detectable using tonometry.
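As a quick cross-check on the arithmetic behind Table 4, the reported sensitivity and specificity follow directly from the 2×2 counts given above (13 of 17 CGI patients abnormal, 15 of 16 non-ischemic patients normal on 24-h tonometry); the variable names below are ours, chosen for illustration.

```python
# Illustrative recalculation of the 24-h tonometry test properties.
tp, fn = 13, 4   # 17 consensus CGI patients: 13 abnormal, 4 normal on 24-h tonometry
tn, fp = 15, 1   # 16 patients without ischemia: 15 normal, 1 abnormal

sensitivity = tp / (tp + fn)   # 13/17 ≈ 0.76
specificity = tn / (tn + fp)   # 15/16 ≈ 0.94
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
```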
Using the cut-off values predicted by the previously performed healthy subjects study, the positive predictive value of 24-h tonometry seems very promising [14]. Comparing the results of detection of ischemia for 24-h tonometry and GET, no significant differences were found. Theoretically it might be expected that 24-h tonometry is more accurate in patients with postprandial complaints, and GET in patients with exercise-predominant complaints, but this was not found in this small patient series. In five patients the results of GET were incorrect (four false-positive and one false-negative result), whereas the 24-h tonometry (retrospectively) correctly predicted the presence (one patient) or absence (four patients) of gastrointestinal ischemia in these patients. One of the major advantages of 24-h tonometry over GET is that all patients are suitable for 24-h tonometry, in contrast to GET, in which patients have to perform submaximal exercise; this is not always possible due to age, concomitant disease, and/or the compromised general condition of the patient. Another advantage of the 24-h tonometry is the familiarity with 24-h pH measurement, which is a widely accepted diagnostic tool in gastroenterology. Moreover, 24-h tonometry testing is more easily standardized than GET, especially regarding the level of exercise, which is difficult to manage and may cause false positive findings during or after excessive exercise [15]. In this study, the 24-h tonometry was not repeated in a standard fashion after treatment. In two patients who had successful, anatomical and clinical, revascularization, 24-h tonometry was repeated and showed normalization in both patients. In theory, the compound solution meal (low volume, high calorie content) should be the ideal test meal for provocation of gastrointestinal ischemia. The use of standard meals with a large metabolic demand, like the compound solution meal, called for a dose reduction in several patients. These patients already had severely impaired food intake and could hardly tolerate (larger) meals. This dose reduction might have influenced the outcome of the 24-h tonometry; the borderline significant differences between CGI and non-ischemic patients after compound solution meals might be explained by this effect. The results of tonometry after meals have to be interpreted with care. Unsuccessful suppression of acid production and meal-related production of CO2 may influence the results of tonometry. Using optimal acid suppression medication and standardized meals, these effects can be minimized, but not completely ruled out. In this study gastric pH measurement was used to control acid suppression. Furthermore, duodenogastric reflux of jejunal contents and/or pancreatic juices is quite common and might theoretically influence intragastric and intrajejunal PCO2 levels in a major way, leading to false positive tonometry findings [16]. In conclusion, this retrospective study shows that 24-h tonometry is feasible, safe, and has a very promising diagnostic accuracy for the detection of gastrointestinal ischemia. Using high-dose PPI acid suppression, standard meals, and previously established normal values, 24-h tonometry identifies gastrointestinal ischemia with an acceptable accuracy. The definitive role of 24-h tonometry in the diagnosis of chronic gastrointestinal ischemia has to be established in (future) prospective studies.
[ "splanchnic stenosis", "carbon dioxide", "chronic splanchnic syndrome", "chronic mesenteric ischemia", "gastric tonometry", "small bowel tonometry" ]
[ "P", "P", "M", "R", "R", "M" ]
Neurosurg_Rev-4-1-2279160
Pure endoscopic endonasal odontoidectomy: anatomical study
Different disorders may produce irreducible atlanto-axial dislocation with compression of the ventral spinal cord. Among the surgical approaches available for such a condition, transoral resection of the odontoid process is the most often used. The aim of this anatomical study is to demonstrate the possibility of an anterior cervico-medullary decompression through an endoscopic endonasal approach. Three fresh cadaver heads were used. A modified endonasal endoscopic approach was performed in all cases. Endoscopic dissections were performed using a rigid endoscope, 4 mm in diameter, 18 cm in length, with 0-degree lenses. Access to the cranio-vertebral junction was possible using a lower trajectory, when compared to that necessary for the sellar region. The choana is entered and the mucosa of the rhinopharynx is dissected and transposed into the oral cavity in order to expose the cranio-vertebral junction and to obtain a mucosal flap useful for the closure. The anterior arch of the atlas and the odontoid process of C2 are removed, thus exposing the dura mater. The endoscopic endonasal approach could be a valid alternative to the transoral approach for anterior odontoidectomy. Introduction Removal of the odontoid process is a procedure often required for the treatment of basilar impression with compression of the brain stem or cervical spinal cord due to irreducible atlanto-axial translocation. Different disorders may produce atlanto-axial dislocation, such as congenital malformations, chronic inflammation, metabolic disorders and trauma. The transoral approach is the most favoured approach to the odontoid process and is largely used for the surgical treatment of extradural, and also intradural, disorders of the cranio-vertebral junction [6–9, 11–13, 15, 16, 22–26]. Despite the fact that such an approach provides a direct route to the odontoid process, it presents several disadvantages, such as the depth of the surgical corridor, the sometimes required splitting of the soft palate, the risk of tongue and teeth damage and, in the case of dural opening, the increased risk of post-operative CSF leakage and meningitis. Based on the experience of endoscopic endonasal pituitary surgery [3, 17], some recent works have reported anatomical studies and clinical applications of a modified endoscopic endonasal approach for the removal of the dens. These studies show the potential applications of the endoscopic endonasal approach for the surgical management of suprasellar, parasellar and retroclival pathologies [1, 2, 4, 5, 10, 14, 18–21]. This anatomic study describes the extended endoscopic endonasal approach to the cranio-vertebral junction, with particular attention to the reconstruction of the surgical route. Material and methods For this anatomic study, three fresh cadaver heads were dissected; an extended endoscopic endonasal approach to the cranio-vertebral junction was performed in all cases. Endoscopic dissections were performed using a rigid endoscope (Karl Storz GmbH, Tuttlingen, Germany), 4 mm in diameter, 18 cm in length, with 0-degree lenses. The endoscope was connected to a light source through a fiberoptic cable and to a camera fitted with 3CCD sensors. The video-camera was connected to a 21” monitor supporting the high resolution of the 3CCD technology. Results The procedure started with the introduction of the endoscope into a nasal vestibule through a lower trajectory as compared to the one employed for reaching the sellar region.
Along this trajectory, the first structures to be visualized were the nasal septum medially, and the inferior turbinate and the middle turbinate laterally. The inferior margin of the middle turbinate led to the choana, which represented the main landmark of the approach. By advancing the endoscope through the choana it was possible to identify the ostium of the Eustachian tube laterally, the rhinopharynx posteriorly, the soft palate inferiorly and the inferior wall of the sphenoid sinus superiorly, the latter representing the superior limit of the surgical approach. Angling the endoscope toward the contralateral nasal cavity, it was possible to visualize the ostium of the contralateral Eustachian tube. The ostia of the two Eustachian tubes represented the lateral limits of this approach (see Fig. 1a,b). Fig. 1a, b Entering the choana, the rhinopharynx and the Eustachian tube have been bilaterally visualized (iwsphs inferior wall of sphenoid sinus, ET Eustachian tube, Rphx rhinopharynx) In order to expose the cranio-vertebral junction, the mucosa of the rhinopharynx was incised along its lateral limits at the edge with the ostia of the Eustachian tube, and along the inferior wall of the sphenoid sinus superiorly (see Fig. 2a). The mucosa and the longus capitis and longus colli muscles were gently dissected downward as a single layer, thus creating a muscle-mucosal flap (see Figs. 2b and 3). Proceeding from the inferior wall of the sphenoid sinus to the soft palate, the lower clivus, the atlanto-occipital membrane, the anterior arch of C1 and the body of C2 were visualized. Fig. 2a The mucosa of the rhinopharynx has been incised in order to create a mucosal flap. b The muscles longus capitis and colli have been dissected together with the mucosa in order to expose the cranio-vertebral junction. (ET Eustachian tube, Rphx rhinopharynx, NS nasal septum, iwsphs inferior wall of sphenoid sinus, C clivus, aom atlanto-occipital membrane, C1 atlas, mmf muscle-mucosal flap) Fig. 3 Schematic drawing showing the muscle-mucosal flap and the lower structures (iwsphs inferior wall of sphenoid sinus, ET Eustachian tube, C1 atlas, mmf muscle-mucosal flap, C2 axis, D dens) Introducing the endoscope into the oral cavity, it was possible to reach the dissected mucosa of the rhinopharynx and to transpose the muscle-mucosal flap into the oral cavity (see Fig. 4a,b). Fig. 4a The muscle-mucosal flap is transposed into the oropharynx. b Schematic drawing showing the exposure of the cranio-vertebral junction after replacing the flap. (Ophx oropharynx, mmf muscle-mucosal flap, T tongue) This manoeuvre permitted an adequate endonasal exposure of the cranio-vertebral junction without removing the mucosa of the rhinopharynx, which provides useful autologous material for closure of the surgical field. Reintroducing the endoscope into the nasal cavity, the anterior arch of the atlas was removed and the dens with the apical and alar ligaments was exposed (see Fig. 5a,b). The dens was then thinned with a microdrill, separated from the alar and apical ligaments, and finally removed (see Fig. 6a,b). Fig. 5a The anterior arch of the atlas has been removed. b The dens has been exposed. (C clivus, aom atlanto-occipital membrane, C1 atlas, dm dura mater, al alar ligament, D dens) Fig. 6a The dens has been drilled. b After removal of the odontoid process the transverse ligament and the underlying tectorial membrane are visible.
(iwsphs inferior wall of sphenoid sinus, ET Eustachian tube, dm dura mater, D dens) At this point the transverse ligament was identified; it was removed together with the tectorial membrane, a double-layer membrane positioned behind the transverse ligament, in order to expose the dura mater. At the end of the procedure the muscle-mucosal flap was replaced into the nasal cavity, thus closing the surgical field (see Fig. 7). Fig. 7 The muscle-mucosal flap is replaced in the rhinopharynx (iwsphs inferior wall of sphenoid sinus, ET Eustachian tube, mmf muscle-mucosal flap) In this study, the endoscopic endonasal approach to the cranio-vertebral junction was performed using both the one-nostril and the two-nostril technique, without removal of the inferior and/or middle turbinate, the nasal septum or other nasal structures. Although the procedure can be performed through only one nostril, the binostril technique provides, without any additional surgical trauma, better manoeuverability of the surgical tools and the possibility to work with “three hands”. As a matter of fact, this technique permits free-hand use of the endoscope in one nostril, held by the assistant, and the use of the other nostril or both nostrils for the insertion of the surgical instruments. Furthermore, in the case of a narrow nasal cavity, it can be valuable to perform a unilateral middle turbinectomy and removal of the posterior third of the nasal septum to enlarge the surgical corridor. Discussion Different pathological disorders may produce atlanto-axial translocation with ventral compression of the brain stem or spinal cord. The most common are congenital malformations, such as Arnold–Chiari malformation type II; chronic inflammation, such as rheumatoid arthritis; genetic disorders, such as Down’s syndrome; and trauma, such as type II odontoid fracture. Some of these patients are candidates for resection of the odontoid process for anterior decompression. The indication for odontoid resection is irreducible atlanto-axial subluxation associated with severe spinal cord compression causing progressive myelopathy. The most favoured approach to the odontoid process is the transoral approach. This approach provides a direct route to the surgical field, without any neurovascular manipulation, passing through the oropharynx without injuring major neurovascular structures. The main limitation of this approach is the difficulty of dural closure and the consequent higher risk of CSF leak and meningitis. For this reason the transoral approach is mainly used for extradural lesions [7, 8, 11–13, 15, 22–24], although some studies have reported its application for the surgical treatment of intradural pathology of the lower clivus and ventral cranio-cervical junction [6, 9, 16, 25, 26]. Other, minor disadvantages are, however, related to this approach: splitting of the soft palate, and even of the hard palate, is often performed when rostral extension of the approach is required; tongue swelling may occur after prolonged compression; there is a risk of damaging the teeth with retractors; velopharyngeal insufficiency may develop; and nasal feeding is necessary during the postoperative stay. Recently, the increased diffusion of the endoscope in transsphenoidal pituitary surgery [3, 17] has led some studies to explore the possibility of applying the endoscopic endonasal approach to the surgical treatment of skull base lesions other than pituitary tumors.
In recent years some works have reported anatomical studies and surgical experience with the endoscopic endonasal approach to different areas of the midline skull base, from the olfactory groove to the cranio-vertebral junction [1, 2, 4, 5, 10, 14, 18–21]. Thanks to the properties of the endoscope itself, the endonasal approach provides a wider view of the surgical field and a close-up vision, when compared with the transoral microscopic approach. Furthermore, the minimal invasiveness of the endoscopic endonasal route may reduce some of the morbidities related to the transoral approach. In fact, it is no longer necessary to use mouth retractors, prolonged compression of the tongue or splitting of the soft palate, and even considering the necessity of a middle turbinectomy or removal of the posterior portion of the nasal septum to enlarge the surgical corridor, these adjunctive manoeuvres do not usually produce morbidity for the patient. These manoeuvres are often performed in the endonasal extended approaches to the area around the sella in live patients and do not cause any respiratory problems. The possibility of performing an odontoidectomy through the nose is strictly related to the level of the C1–C2 junction. In fact, in the case of a low junction, below the level of the hard palate, it is virtually impossible to remove the odontoid process with an endonasal approach. On the contrary, in the case of a high position of the atlanto-axial junction, the dens is more easily reached and removed through the nasal cavities. Odontoidectomy may be considered one of the most complicated manoeuvres for the transoral approach, in which splitting of the soft, and even the hard, palate is often necessary. Thus, this approach could be evaluated for those cases in which transoral removal is considered more difficult. However, this kind of approach still presents some of the main problems of the transoral approach. The first problem concerns the risk of CSF leak and subsequent meningitis. Although the endoscope, thanks to its close-up and multi-angled vision, has a greater chance of detecting an occasional CSF leak, it is quite hard to suture the dura and the nasopharyngeal mucosa with conventional suturing tools through the nose. For this reason, in our anatomical study, we have created a muscle-mucosal flap comprising the entire muscular and mucosal tissue covering the ventral cranio-vertebral junction. This flap, as shown, is transposed into the oral cavity during the bone removal and replaced in its original site at the end of the procedure. Owing to the difficulty of anchoring the flap with sutures, it is simply laid over the defect and its borders are apposed to the corresponding lines of incision. The mucosa of the inferior wall of the sphenoid sinus could be stripped to favour the adherence of the flap, and fibrin glue could be used to seal the edges. The creation of a pedunculated muscle-mucosal flap permits a more physiological reconstruction of the surgical corridor; furthermore, the vascularization of the flap, which is directly continuous with that of the oropharynx, facilitates rapid healing. An endoscopic control should be performed one month after surgery to check the healing and integrity of the rhinopharyngeal mucosa. The second problem of the transoral approach concerns the stability of the cranio-vertebral junction. The removal of the odontoid process with its ligaments can destabilize the cranio-vertebral junction.
The third problem concerns haemostasis, which is often difficult in the extended endonasal approaches. Bleeding control may become difficult with bipolar coagulation because the endonasal approach presents a long and narrow corridor, with a limited working space between the tips of the bipolar forceps. Nevertheless, specific bipolar forceps (TAKE-APART bipolar forceps; Karl Storz GmbH, Tuttlingen, Germany) have been used to work through the nose in a safe and effective way. Conclusions This cadaver study has been performed to demonstrate the possibility of an anterior decompression of the upper cervical cord through an endoscopic endonasal approach. Similar to the transoral approach, the endoscopic endonasal approach provides a direct route to the surgical target, but it seems to be associated with less morbidity. For clinical applications of this approach, the most important surgical problems are the risk of CSF leak and meningitis, the instability of the cervico-medullary junction and bleeding control. The creation of a muscle-mucosal flap may represent a valid modality for closure of the surgical field. For application in live surgery, dedicated surgical instruments, such as endonasal bipolar forceps, a high-speed low-profile drill and surgical guidance systems, are needed. In selected cases this approach could be considered a valid alternative to the transoral microscopic approach for the resection of the odontoid process of C2. Obviously, it should be performed only by surgeons highly skilled in endoscopic endonasal surgery and in endoscopic cadaver dissections, and cooperation in a team with ENT surgeons is recommended.
[ "odontoid process", "cranio-vertebral junction", "endoscopy" ]
[ "P", "P", "U" ]
Exp_Neurol-2-1-2288636
Effects of low-frequency stimulation of the subthalamic nucleus on movement in Parkinson's disease
Excessive synchronization of basal ganglia neural activity at low frequencies is considered a hallmark of Parkinson's disease (PD). However, few studies have unambiguously linked this activity to movement impairment through direct stimulation of basal ganglia targets at low frequency. Furthermore, these studies have varied in their methodology and findings, so it remains unclear whether stimulation at any or all frequencies ≤ 20 Hz impairs movement and, if so, whether effects are identical across this broad frequency band. To address these issues, 18 PD patients chronically implanted with deep brain stimulation (DBS) electrodes in both subthalamic nuclei were stimulated bilaterally at 5, 10 and 20 Hz after overnight withdrawal of their medication, and the effects of the DBS on a finger tapping task were compared to performance without DBS (0 Hz). Tapping rate decreased at 5 and 20 Hz compared to 0 Hz (by 11.8 ± 4.9%, p = 0.022 and 7.4 ± 2.6%, p = 0.009, respectively) on those sides with relatively preserved baseline task performance. Moreover, the coefficient of variation of tap intervals increased at 5 and 10 Hz compared to 0 Hz (by 70.4 ± 35.8%, p = 0.038 and 81.5 ± 48.2%, p = 0.043, respectively). These data suggest that the susceptibility of basal ganglia networks to the effects of excessive synchronization may be elevated across a broad low-frequency band in parkinsonian patients, although the nature of the consequent motor impairment may depend on the precise frequencies at which synchronization occurs. Introduction There is extensive evidence that neuronal activity is abnormally synchronized at low frequencies in Parkinson's disease (PD) and in animal models of parkinsonism (reviewed in Gatev et al., 2006; Hammond et al., 2007; Uhlhaas and Singer, 2006). However, this does not, by itself, prove that pathological synchrony is mechanistically important in parkinsonism. More persuasive evidence would be the impairment of voluntary movement by the artificial synchronization of neural activity in the basal ganglia. Such synchronization can be achieved by stimulating deep brain electrodes implanted for the treatment of PD at low frequencies, rather than at the frequencies above 100 Hz used for therapeutic benefit. Electrical stimulation of surgical targets like the subthalamic nucleus (STN) simultaneously activates neural elements in the vicinity of the electrode and this synchronous activity is then propagated onwards, as evinced by evoked pallidal (Brown et al., 2004; Hashimoto et al., 2003), cortical (MacKinnon et al., 2005) and muscular activity (Ashby et al., 1999, 2001). So far there have been several reports of the impairment of movement by stimulation of the STN at frequencies ≤ 20 Hz in patients with PD. Moro et al. (2002) and Chen et al. (2007) studied finger tapping during DBS at 5 Hz and 20 Hz, respectively, and found this to be slowed. However, Timmermann et al. (2004), using the motor Unified Parkinson's Disease Rating Scale (UPDRS), failed to confirm a worsening during DBS at 5 and 20 Hz, but did find increased bradykinesia with stimulation at 10 Hz. Another study evaluated tapping performance over several frequencies within the same patients, but only found weak effects that involved relative rather than absolute impairments in motor performance, superimposed upon an overall tendency for movement to improve with increasing stimulation frequency (Fogelson et al., 2005).
Accordingly, it is unclear whether stimulation at any or all frequencies ≤ 20 Hz impairs movement and, if so, whether effects are identical across this broad frequency band. The issue is an important one because, although spontaneous synchrony tends to occur at frequencies centered around 20 Hz in PD (Hammond et al., 2007), it occurs at rather lower frequencies in the 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) primate model of PD (Goldberg et al., 2004; Raz et al., 1996, 2000). Here we contrast the effects of STN stimulation, and thereby extrinsically imposed synchronization, at a number of frequencies ≤ 20 Hz, to establish whether all such frequencies impair movement and, if so, whether they impair movement in the same way. To this end we studied performance in a simple finger tapping task, as this is objective and correlates with motor impairment (Giovannoni et al., 1999; Rabey et al., 2002), and considered changes in task execution according to baseline performance (Chen et al., 2006a). Materials and methods Patients and surgery Twenty patients participated with informed consent and the permission of the local ethics committees (5 females, mean age 59.5 ± 1.4 years; mean disease duration 13.5 ± 1.0 years). Their clinical details are summarized in Table 1. Fourteen of these patients had also been recorded at least 6 months previously in a different paradigm involving stimulation at 20 Hz, 50 Hz and 130 Hz (Chen et al., 2007). Implantation of bilateral STN DBS electrodes had been performed in all subjects for the treatment of Parkinson's disease at least 6 months prior to the study (mean 34.7 ± 5.9 months). The DBS electrode used was model 3389 (Medtronic Neurological Division, Minneapolis, USA) with four platinum–iridium cylindrical surfaces (1.27 mm diameter and 1.5 mm length) and a centre-to-centre separation of 2 mm. Contact 0 was the most caudal and contact 3 was the most rostral. The intended coordinates at the tip of contact 0 were 10–12 mm from the midline, 0–2 mm behind the midcommissural point and 3–5 mm below the anterior commissural–posterior commissural line. Adjustments to the intended coordinates were made in accordance with the direct visualization of the STN on individual stereotactic MRI (Hariz et al., 2003) and, in the patients operated on in Taiwan (n = 7), the results of microelectrode recordings. Correct placement of the DBS electrodes in the region of the STN was further supported by: (1) effective intra-operative macrostimulation; (2) post-operative T2-weighted MRI compatible with the placement of at least one electrode contact in the STN region; (3) significant improvement in UPDRS motor score during chronic DBS off medication (22.7 ± 3.0) compared to UPDRS off medication with the stimulator switched off (52.6 ± 4.8; p < 0.00001, paired t-test). One patient was excluded due to the absence of significant improvement in UPDRS motor score during chronic DBS and another due to missing clinical data. Protocol All patients were assessed after overnight withdrawal of antiparkinsonian medication, although the long duration of action of many of the drugs used to treat PD means that patients may still have been partially treated when assessed. They were studied with the stimulator switched off and during bilateral STN stimulation at 5 Hz, 10 Hz and 20 Hz. The stimulation types were assessed in pseudo-randomized order across patients, as was the presentation order of trials within a stimulation type.
Stimulation contacts, amplitude and pulse duration were the same as those utilized for therapeutic high-frequency stimulation in each subject (see Table 1). There was no evidence of capsular spread during stimulation, as determined by clinical examination. Patients were not informed of the stimulation type. We did not stimulate one side at a time, to avoid possible functional compensation by the non-stimulated side. We waited 20 min after changing between conditions before testing. This is sufficient time to elicit about 75% of DBS effects (Temperli et al., 2003). Task The task was repetitive depression of a keyboard key as fast as possible by rapid alternating flexion and extension of the index finger at the level of the metacarpophalangeal joint (Chen et al., 2006a, 2007). Tapping was performed in two runs of 30 s, separated by ∼30 s of rest, and each hand was tested separately (giving four runs per condition). Data from one side were rejected as these were collected contralateral to a previous unilateral pallidotomy (case 18 in Table 1). The number of taps made with the index finger in 30 s was recorded and the run from each pair with the best performance was selected for analysis, as this was less likely to be affected by fatigue or the effects of impaired arousal/concentration. Statistics The results of the tapping task in patients were analyzed according to their baseline performance (i.e. without stimulation). The lower limit of normal baseline performance was obtained by testing ten healthy age-matched control subjects (20 sides, 4 males, mean age 57 years, range 52–64 years) using the same tapping task. The mean tapping rate in this control group was 162 taps/30 s. The lower limit of the normal range (i.e. mean − [2 × standard deviation]) in this control group was 127 taps per 30 s. The 35 tapping sides studied in the 18 patients were accordingly divided into those with baseline performance within normal limits (n = 17; the mean tapping performance across this group, 157 taps/30 s, was still lower than the mean tapping performance in healthy subjects) and those with baseline tapping rates lower than normal limits (n = 18; mean tapping performance 58 taps/30 s). The rationale behind this approach was to select those sides (with baseline performance within normal limits) in which any deleterious effects of DBS would not be overshadowed by the beneficial effects of DBS-induced suppression of spontaneous pathological activity or limited by floor effects due to major baseline impairment (Chen et al., 2006a, 2007). Four subjects had sides distributed across the two groups of differing baseline tapping performance. Tapping rates and coefficients of variation were normally distributed (one-sample Kolmogorov–Smirnov tests, p > 0.05). Repeated-measures ANOVAs with within-subjects simple contrasts (comparing different frequencies of stimulation to no stimulation) were performed in SPSS (SPSS for Windows version 11, SPSS Inc., Chicago, IL, USA). Mauchly's test was used to determine the sphericity of the data entered in the ANOVAs, and where data were non-spherical, Greenhouse–Geisser corrections were applied. Means ± standard error of the mean are presented in the text. Results Low-frequency stimulation had no reliable clinical effect and did not consistently induce tremor, mobile dyskinesia, or dystonic posturing. Four patients experienced dystonic postures during the experiment and one had increased tremor. However, these effects were seen at overlapping stimulation frequencies.
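Before the group analyses, it may help to make the outcome measures and analysis pipeline of the Materials and methods concrete. The sketch below computes the tapping rate, the coefficient of variation (CV) of inter-tap intervals and the mean − 2SD normal limit from raw tap timestamps, together with a repeated-measures ANOVA analogous to the SPSS analysis. The statsmodels call and the column names are our assumptions; this is an illustrative analogue, not the authors' code.

```python
import numpy as np
from statsmodels.stats.anova import AnovaRM  # repeated-measures ANOVA

def tapping_metrics(tap_times_s):
    """Taps per 30-s run and CV of the intervals between successive taps."""
    intervals = np.diff(np.asarray(tap_times_s, dtype=float))
    rate = len(tap_times_s)                        # taps recorded in the run
    cv = intervals.std(ddof=1) / intervals.mean()  # CV = SD / mean of intervals
    return rate, cv

def normal_lower_limit(control_rates):
    """Lower limit of normal: mean - 2 x standard deviation (see Statistics)."""
    r = np.asarray(control_rates, dtype=float)
    return r.mean() - 2 * r.std(ddof=1)

def rm_anova(df):
    """df: pandas DataFrame with one row per side x stimulation frequency and
    columns 'side', 'freq' (0/5/10/20 Hz), 'taps' (column names are illustrative)."""
    return AnovaRM(data=df, depvar="taps", subject="side", within=["freq"]).fit()

# Example on a fabricated 30-s run with ~0.2-s inter-tap intervals.
taps = np.cumsum(np.random.default_rng(0).normal(0.2, 0.02, size=150))
print(tapping_metrics(taps))  # -> (150, ~0.1)
ctrl = np.random.default_rng(1).normal(162, 17.5, size=20)  # fabricated controls
print(normal_lower_limit(ctrl))  # lands near the reported limit of 127 taps/30 s
```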
We divided the tapping sides into two groups according to whether or not tapping performance off DBS was within normal limits established on 20 sides in 10 healthy age-matched subjects (see Materials and methods). ANOVA of tapping scores with factors FREQUENCY (four levels: 0, 5, 10 and 20 Hz) and BASELINE TAPPING PERFORMANCE (two levels: within normal limits and less than normal limits) demonstrated a within-subjects interaction between FREQUENCY and BASELINE TAPPING PERFORMANCE (F[3,48] = 4.224, p = 0.01). Data were further analyzed with separate ANOVAs in each baseline tapping performance group. In those patients with baseline tapping performance within normal limits, ANOVA with the factor FREQUENCY confirmed that the latter was a significant main effect (F[3,48] = 3.777, p = 0.016). Within-subjects contrasts indicated that tapping during 5 and 20 Hz stimulation was worse than during no stimulation (F[1,16] = 6.385, p = 0.022 and F[1,16] = 8.793, p = 0.009, respectively). The average deterioration in tapping rate during 5 and 20 Hz stimulation compared to no stimulation (0 Hz) in this group was 11.8 ± 4.9% and 7.4 ± 2.6%, respectively (Fig. 1). There was a trend towards a decreased tapping performance at 10 Hz compared to 0 Hz (F[1,16] = 3.578, p = 0.077). There was no effect of FREQUENCY in patients with baseline tapping performance below normal limits (ANOVA, F[1.9,32.8] = 2.202, p = 0.128). We also analyzed the variability in tapping as measured by the coefficient of variation (CV) of the time intervals between successive taps on those sides with baseline tapping performance within normal limits. ANOVA with the factor FREQUENCY (four levels: 0, 5, 10 and 20 Hz) revealed a significant effect of FREQUENCY (F[3,48] = 3.408, p = 0.025). Within-subjects contrasts indicated that the CV increased during 5 and 10 Hz stimulation compared with no stimulation (F[1,16] = 5.144, p = 0.038 and F[1,16] = 4.852, p = 0.043, respectively). The average increase of the CV during 5 and 10 Hz stimulation compared to 0 Hz in this group was 70.4 ± 35.8% and 81.5 ± 48.2%, respectively (Fig. 2). There was no difference between the CV at 20 Hz compared to 0 Hz (F[1,16] = 0.871, p = 0.365). There was, however, a trend for the CV with 5 Hz stimulation to exceed that with 20 Hz stimulation (t-test; p = 0.059). Discussion We have shown that STN DBS at a variety of low frequencies can slow distal upper limb movements in PD patients with relatively preserved baseline tapping function at the time of study. The effect was present with DBS at 5 Hz and 20 Hz in line with previous studies (Chen et al., 2007; Fogelson et al., 2005; Moro et al., 2002), and there was a trend towards a similar effect with stimulation at 10 Hz (Timmermann et al., 2004). These effects were apparent when tapping sides were separately analyzed according to whether the level of baseline performance was within or outside of normal limits, in line with previous studies suggesting that deleterious effects of DBS are more evident on those sides with relatively preserved baseline performance (Chen et al., 2006a, 2007). The effect was not apparent during stimulation on those sides with impaired baseline performance, either because of confounding, albeit mild, suppressive effects of low-frequency DBS on spontaneous pathological oscillations or because of floor effects (Chen et al., 2007; Fogelson et al., 2005).
In principle, then, the susceptibility of basal ganglia–cortical loops to the effects of excessive synchronization may be elevated across a broad low-frequency band in parkinsonian patients. Accordingly, the relatively different frequency ranges of pathological synchronization in patients and MPTP-treated primates (Hammond et al., 2007) may be more indicative of the resonance properties of basal ganglia networks in the different situations, rather than any fundamental differences in the mechanism of bradykinesia. However, it must be stressed that this is a generalization, and although synchronization at different frequencies may conspire to disturb movement, there may still be subtle differences in the way movement is impaired. This is brought out by the differential effects of low-frequency stimulation on the variation in tapping intervals, evident in differences in the coefficient of variation and hence independent of any differences in tapping rate. Only DBS at 5 and 10 Hz increased temporal variability, whereas DBS at 20 Hz selectively decreased tapping rates without changing tapping variability. The implication is that basal ganglia networks are involved in processing related to the temporal patterning and regularity of movement and that these circuits may be particularly susceptible to disruption by pathological synchronization at frequencies ≤ 10 Hz. In support of basal ganglia involvement in the temporal patterning of movement, PD patients have increased temporal variability in finger tapping (Giovannoni et al., 1999; Shimoyama et al., 1990), and temporal variability in motor performance is a very early feature of Huntington's disease (Hinton et al., 2007). Indeed, Flowers considered increased variability of movement in both time and space to be one of the core components of motor dysfunction in PD, along with a basic slowness of movement and a difficulty in initiating and maintaining movement (Sheridan et al., 1987). This variability in motor performance may also relate to the phenomenon of freezing. No overt freezing episodes were observed during tapping in our patients, but an increased variability of stride has been shown in PD patients experiencing freezing of gait independent of frank freezing episodes (Hausdorff et al., 2003). However, a primary disturbance of temporal patterning is not the only potential interpretation for the increased variability seen during stimulation at 5 Hz and 10 Hz. Tremor was not seen during low-frequency stimulation (except in one patient), in agreement with Timmermann et al. (2004), nor were there any obvious and consistent dyskinesias. Nevertheless, it is possible that synchronization at frequencies ≤ 10 Hz induced subtle hyperkinesias that led to increased temporal variability across taps. A previous case report describes dyskinetic movements induced by STN DBS at 5 Hz (Liu et al., 2002a), and there is increasing evidence that excessive synchronization over 4–10 Hz within basal ganglia circuits may be related to both levodopa-induced dyskinesias in PD (Alonso-Frech et al., 2006) and mobile elements of dystonia (Chen et al., 2006b; Liu et al., 2002b; Silberstein et al., 2003). Relevant in this regard, a recent study demonstrated an increased variability of speech rate in patients treated with l-DOPA and suggested that this effect was related to dyskinesia (De Letter et al., 2006). Variability of swing movement was also observed in the gait of children with dyskinetic cerebral palsy (Abel et al., 2003).
In summary, our results provide further evidence that DBS of the STN over a relatively broad band of low frequencies can impair movement, in line with other more circumstantial evidence of an association between low-frequency synchrony in basal ganglia–cortical loops and altered movement (see recent reviews by Gatev et al., 2006; Uhlhaas and Singer, 2006; Hammond et al., 2007). The present results also raise the important possibility that the detailed profile of motor abnormalities evident in extrapyramidal diseases depends to some extent on the precise frequencies at which pathological synchronization occurs. Indeed, some differences in the details of the effects of pathological synchrony at different frequencies might be anticipated, given the evidence for selective tuning of basal ganglia–cortical sub-circuits (Fogelson et al., 2006).
[ "parkinson's disease", "synchronization", "basal ganglia", "dbs" ]
[ "P", "P", "P", "P" ]
Arthritis_Res_Ther-5-3-165040
Degeneration of the intervertebral disc
The intervertebral disc is a cartilaginous structure that resembles articular cartilage in its biochemistry, but morphologically it is clearly different. It shows degenerative and ageing changes earlier than does any other connective tissue in the body. It is believed to be important clinically because there is an association of disc degeneration with back pain. Current treatments are predominantly conservative or, less commonly, surgical; in many cases there is no clear diagnosis and therapy is considered inadequate. New developments, such as genetic and biological approaches, may allow better diagnosis and treatments in the future. Introduction Back pain is a major public health problem in Western industrialized societies. It causes suffering and distress to patients and their families, and affects a large number of people; the point prevalence rates in a number of studies ranged from 12% to 35% [1], with around 10% of sufferers becoming chronically disabled. It also places an enormous economic burden on society; its total cost, including direct medical costs, insurance, lost production and disability benefits, is estimated at €12 billion per annum in the UK and 1.7% of the gross national product in The Netherlands [1,2]. Back pain is strongly associated with degeneration of the intervertebral disc [3]. Disc degeneration, although in many cases asymptomatic [4], is also associated with sciatica and disc herniation or prolapse. It alters disc height and the mechanics of the rest of the spinal column, possibly adversely affecting the behaviour of other spinal structures such as muscles and ligaments. In the long term it can lead to spinal stenosis, a major cause of pain and disability in the elderly; its incidence is rising exponentially with current demographic changes and an increasingly aged population. Discs degenerate far earlier than do other musculoskeletal tissues; the first unequivocal findings of degeneration in the lumbar discs are seen in the age group 11–16 years [5]. About 20% of people in their teens have discs with mild signs of degeneration; degeneration increases steeply with age, particularly in males, so that around 10% of 50-year-old discs and 60% of 70-year-old discs are severely degenerate [6]. In this short review we outline the morphology and biochemistry of normal discs and the changes that arise during degeneration. We review recent advances in our understanding of the aetiology of this disorder and discuss new approaches to treatment. Disc morphology The normal disc The intervertebral discs lie between the vertebral bodies, linking them together (Fig. 1). They are the main joints of the spinal column and occupy one-third of its height. Their major role is mechanical, as they constantly transmit loads arising from body weight and muscle activity through the spinal column. They also provide flexibility to the spinal column, allowing bending, flexion and torsion. They are approximately 7–10 mm thick and 4 cm in diameter (anterior–posterior plane) in the lumbar region of the spine [7,8]. The intervertebral discs are complex structures that consist of a thick outer ring of fibrous cartilage termed the annulus fibrosus, which surrounds a more gelatinous core known as the nucleus pulposus; the nucleus pulposus is sandwiched inferiorly and superiorly by cartilage end-plates.
The central nucleus pulposus contains collagen fibres, which are organised randomly [9], and elastin fibres (sometimes up to 150 μm in length), which are arranged radially [10]; these fibres are embedded in a highly hydrated, aggrecan-containing gel. Interspersed at a low density (approximately 5000/mm3 [11]) are chondrocyte-like cells, sometimes sitting in a capsule within the matrix. Outside the nucleus is the annulus fibrosus, with the boundary between the two regions being very distinct in the young individual (<10 years). The annulus is made up of a series of 15–25 concentric rings, or lamellae [12], with the collagen fibres lying parallel within each lamella. The fibres are orientated at approximately 60° to the vertical axis, alternating to the left and right of it in adjacent lamellae. Elastin fibres lie between the lamellae, possibly helping the disc to return to its original arrangement following bending, whether it be flexion or extension. They may also bind the lamellae together, as elastin fibres pass radially from one lamella to the next [10]. The cells of the annulus, particularly in the outer region, tend to be fibroblast-like, elongated, thin and aligned parallel to the collagen fibres. Toward the inner annulus the cells can be more oval. Cells of the disc, both in the annulus and nucleus, can have several long, thin cytoplasmic projections, which may be more than 30 μm long [13,14] (WEB Johnson, personal communication). Such features are not seen in cells of articular cartilage [13]. Their function in the disc is unknown, but it has been suggested that they may act as sensors and communicators of mechanical strain within the tissue [13]. The third morphologically distinct region is the cartilage end-plate, a thin horizontal layer, usually less than 1 mm thick, of hyaline cartilage; it forms the interface between the disc and the vertebral body. The collagen fibres within it run horizontally, parallel to the vertebral bodies, with the fibres continuing into the disc [8]. The healthy adult disc has few (if any) blood vessels, but it has some nerves, mainly restricted to the outer lamellae, some of which terminate in proprioceptors [15]. The cartilaginous end-plate, like other hyaline cartilages, is normally totally avascular and aneural in the healthy adult. Blood vessels present in the longitudinal ligaments adjacent to the disc and in young cartilage end-plates (less than about 12 months old) are branches of the spinal artery [16]. Nerves in the disc have been demonstrated, often accompanying these vessels, but they can also occur independently, being branches of the sinuvertebral nerve or derived from the ventral rami or grey rami communicantes. Some of the nerves in discs also have glial support cells, or Schwann cells, alongside them [17]. Degenerated discs During growth and skeletal maturation the boundary between annulus and nucleus becomes less obvious, and with increasing age the nucleus generally becomes more fibrotic and less gel-like [18]. With increasing age and degeneration the disc changes in morphology, becoming more and more disorganized (Fig. 2). Often the annular lamellae become irregular, bifurcating and interdigitating, and the collagen and elastin networks also appear to become more disorganised (J Yu, personal communication). There is frequently cleft formation, with fissures forming within the disc, particularly in the nucleus. Nerves and blood vessels are increasingly found with degeneration [15].
Cell proliferation occurs, leading to cluster formation, particularly in the nucleus [19,20]. Cell death also occurs, with the presence of cells with necrotic and apoptotic appearance [21,22]. These mechanisms are apparently very common; it has been reported that more than 50% of cells in adult discs are necrotic [21]. The morphological changes associated with disc degeneration were comprehensively reviewed recently by Boos et al. [5], who demonstrated an age-associated change in morphology, with discs from individuals as young as 2 years of age having some very mild cleft formation and granular changes to the nucleus. With increasing age comes an increased incidence of degenerative changes, including cell death, cell proliferation, mucous degeneration, granular change and concentric tears. It is difficult to differentiate changes that occur solely due to ageing from those that might be considered 'pathological'. Biochemistry Normal discs The mechanical functions of the disc are served by the extracellular matrix; its composition and organization govern the disc's mechanical responses. The main mechanical role is provided by the two major macromolecular components. The collagen network, formed mostly of type I and type II collagen fibrils and making up approximately 70% and 20% of the dry weight of the annulus and nucleus, respectively [23], provides tensile strength to the disc and anchors the tissue to the bone. Aggrecan, the major proteoglycan of the disc [24], is responsible for maintaining tissue hydration through the osmotic pressure provided by its constituent chondroitin and keratan sulphate chains [25]. The proteoglycan and water content of the nucleus (around 50% and 80% of the wet weight, respectively) is greater than in the annulus (approximately 20% and 70% of the wet weight, respectively). In addition, there are many other minor components, such as collagen types III, V, VI, IX, X, XI, XII and XIV; small proteoglycans such as lumican, biglycan, decorin and fibromodulin; and other glycoproteins such as fibronectin and amyloid [26,27]. The functional role of many of these additional matrix proteins and glycoproteins is not yet clear. Collagen IX, however, is thought to be involved in forming cross-links between collagen fibrils and is thus important in maintaining network integrity [28]. The matrix is a dynamic structure. Its molecules are continually being broken down by proteinases such as the matrix metalloproteinases (MMPs) and aggrecanases, which are also synthesized by disc cells [29-31]. The balance between synthesis, breakdown and accumulation of matrix macromolecules determines the quality and integrity of the matrix, and thus the mechanical behaviour of the disc itself. The integrity of the matrix is also important for maintaining the relatively avascular and aneural nature of the healthy disc. The intervertebral disc is often likened to articular cartilage, and indeed it does resemble it in many ways, particularly in the biochemical components present. However, there are significant differences between the two tissues, one of these being the composition and structure of aggrecan. Disc aggrecan is more highly substituted with keratan sulphate than that found in the deep zone of articular cartilage. In addition, the aggrecan molecules are less aggregated (30%) and more heterogeneous, with smaller, more degraded fragments in the disc than in articular cartilage (80% aggregated) from the same individual [32]. 
Disc proteoglycans become increasingly difficult to extract from the matrix with increasing age [24]; this may be due to extensive cross-linking, which appears to occur more within the disc matrix than in other connective tissues. Changes in disc biochemistry with degeneration The most significant biochemical change to occur in disc degeneration is loss of proteoglycan [33]. The aggrecan molecules become degraded, with smaller fragments being able to leach from the tissue more readily than larger portions. This results in loss of glycosaminoglycans; this loss is responsible for a fall in the osmotic pressure of the disc matrix and so a loss of hydration. Even in degenerate discs, however, the disc cells can retain the ability to synthesize large aggrecan molecules, with intact hyaluronan-binding regions, which have the potential to form aggregates [24]. Less is known of how the small proteoglycan population changes with disc degeneration, although there is some evidence that the amount of decorin, and more particularly biglycan, is elevated in degenerate human discs as compared with normal ones [34]. Although the collagen population of the disc also changes with degeneration of the matrix, the changes are not as obvious as those of the proteoglycans. The absolute quantity of collagen changes little but the types and distribution of collagens can alter. For example, there may be a shift in proportions of types of collagens found and in their apparent distribution within the matrix. In addition, the fibrillar collagens, such as type II collagen, become more denatured, apparently because of enzymic activity. As with proteoglycans, the triple helices of the collagens are more denatured and ruptured than are those found in articular cartilage from the same individual; the amount of denatured type II collagen increases with degeneration [35,36]. However, collagen cross-link studies indicate that, as with proteoglycans, new collagen molecules may be synthesized, at least early in disc degeneration, possibly in an attempt at repair [37]. Other components can change in disc degeneration and disease in either quantity or distribution. For example, fibronectin content increases with increasing degeneration and it becomes more fragmented [38]. These elevated levels of fibronectin could reflect the response of the cell to an altered environment. Whatever the cause, the formation of fibronectin fragments can then feed into the degenerative cascade because they have been shown to downregulate aggrecan synthesis but to upregulate the production of some MMPs in in vitro systems. The biochemistry of disc degeneration indicates that enzymatic activity contributes to this disorder, with increased fragmentation of the collagen, proteoglycan and fibronectin populations. Several families of enzymes are capable of breaking down the various matrix molecules of disc, including cathepsins, MMPs and aggrecanases. Cathepsins have maximal activity in acid conditions (e.g. cathepsin D is inactive above pH 7.2). In contrast, MMPs and aggrecanases have an optimal pH that is approximately neutral. All of these enzymes have been identified in disc, with higher levels of, for example, MMPs in more degenerate discs [39]. Cathepsins D and L and several types of MMPs (MMP-1, -2, -3, -7, -8, -9 and -13) occur in human discs; they may be produced by the cells of the disc themselves as well as by the cells of the invading blood vessels. 
Aggrecanases have also been shown to occur in human disc but their activity is apparently less obvious, at least in more advanced disc degeneration [29,30,40]. Effect of degenerative changes on disc function and pathology The loss of proteoglycan in degenerate discs [33] has a major effect on the disc's load-bearing behaviour. With loss of proteoglycan, the osmotic pressure of the disc falls [41] and the disc is less able to maintain hydration under load; degenerate discs have a lower water content than do normal age-matched discs [33], and when loaded they lose height [42] and fluid more rapidly, and the discs tend to bulge. Loss of proteoglycan and matrix disorganization have other important mechanical effects; because of the subsequent loss of hydration, degenerated discs no longer behave hydrostatically under load [43]. Loading may thus lead to inappropriate stress concentrations along the end-plate or in the annulus; the stress concentrations seen in degenerate discs have also been associated with discogenic pain produced during discography [44]. Such major changes in disc behaviour have a strong influence on other spinal structures, and may affect their function and predispose them to injury. For instance, as a result of the rapid loss of disc height under load in degenerate discs, apophyseal joints adjacent to such discs (Fig. 1) may be subject to abnormal loads [45] and eventually develop osteoarthritic changes. Loss of disc height can also affect other structures. It reduces the tensional forces on the ligamentum flavum and hence may cause remodelling and thickening. With consequent loss of elasticity [46], the ligament will tend to bulge into the spinal canal, leading to spinal stenosis – an increasing problem as the population ages. Loss of proteoglycans also influences the movement of molecules into and out of the disc. Aggrecan, because of its high concentration and charge in the normal disc, prevents movement of large uncharged molecules such as serum proteins and cytokines into and through the matrix [47]. The fall in concentration of aggrecan in degeneration could thus facilitate loss of small, but osmotically active, aggrecan fragments from the disc, possibly accelerating a degenerative cascade. In addition, loss of aggrecan would allow increased penetration of large molecules such as growth factor complexes and cytokines into the disc, affecting cellular behaviour and possibly the progression of degeneration. The increased vascular and neural ingrowth seen in degenerate discs and associated with chronic back pain [48] is also probably associated with proteoglycan loss because disc aggrecan has been shown to inhibit neural ingrowth [49,50]. Disc herniation The most common disc disorder presenting to spinal surgeons is herniated or prolapsed intervertebral disc. In these cases the discs bulge or rupture (either partially or totally) posteriorly or posterolaterally, and press on the nerve roots in the spinal canal (Fig. 1). Although herniation is often thought to be the result of a mechanically induced rupture, it can only be induced in vitro in healthy discs by mechanical forces larger than those that are ever normally encountered; in most experimental tests, the vertebral body fails rather than the disc [51]. 
Some degenerative changes seem necessary before the disc can herniate; indeed, examination of autopsy or surgical specimens suggests that sequestration or herniation results from the migration of isolated, degenerate fragments of nucleus pulposus through pre-existing tears in the annulus fibrosus [52]. It is now clear that herniation-induced pressure on the nerve root cannot alone be the cause of pain because more than 70% of 'normal', asymptomatic people have disc prolapses compressing the nerve roots but no pain [4,53]. A long-standing and still current hypothesis is that, in symptomatic individuals, the nerves are somehow sensitized to the pressure [54], possibly by molecules arising from an inflammatory cascade from arachidonic acid through to prostaglandin E2, thromboxane, phospholipase A2, tumour necrosis factor-α, the interleukins and MMPs. These molecules can be produced by cells of herniated discs [55], and because of the close physical contact between the nerve root and disc following herniation they may be able to sensitize the nerve root [56,57]. The exact sequence of events and specific molecules that are involved have not been identified, but a pilot study of patients with sciatica treated with tumour necrosis factor-α antagonists is encouraging and supports this proposed mechanism [58,59]. However, care must be exercised in interrupting the inflammatory cascade, which can also have beneficial effects. Molecules such as MMPs, which are produced extensively in prolapsed discs [30], almost certainly play a major role in the natural history of resorbing the offending herniation. Aetiology of disc degeneration Disc degeneration has proved a difficult entity to study; its definition is vague, with diffuse parameters that are not always easy to quantify. In addition, there is a lack of a good animal model. There are significant anatomical differences between humans and the laboratory animals that are traditionally used as models of other disorders. In particular, the nucleus differs; in rodents as well as many other mammals, the nucleus is populated by notochordal cells throughout adulthood, whereas these cells disappear from the human nucleus after infancy [60]. In addition, although the cartilage end-plate in humans acts as a growth plate for the vertebral body, in most animals the vertebrae have two growth plates within the vertebral body itself, and the cartilage end-plate is a much thinner layer than that found in humans. Thus, although the study of animals that develop degeneration spontaneously [61,62] and of injury models of degeneration [63,64] has provided some insight into the degenerative processes, most information on the aetiology of disc degeneration to date has come from human studies. Nutritional pathways to disc degeneration One of the primary causes of disc degeneration is thought to be failure of the nutrient supply to the disc cells [65]. Like all cell types, the cells of the disc require nutrients such as glucose and oxygen to remain alive and active. In vitro, the activity of disc cells is very sensitive to extracellular oxygen and pH, with matrix synthesis rates falling steeply at acidic pH and at low oxygen concentrations [66,67], and the cells do not survive prolonged exposure to low pH or glucose concentrations [68]. 
A fall in nutrient supply that leads to a lowering of oxygen tension or of pH (arising from raised lactic acid concentrations) could thus affect the ability of disc cells to synthesize and maintain the disc's extracellular matrix and could ultimately lead to disc degeneration. The disc is large and avascular and the cells depend on blood vessels at their margins to supply nutrients and remove metabolic waste [69]. The pathway from the blood supply to the nucleus cells is precarious because these cells are supplied virtually entirely by capillaries that originate in the vertebral bodies, penetrating the subchondral plate and terminating just above the cartilaginous end-plate [16,70]. Nutrients must then diffuse from the capillaries through the cartilaginous end-plate and the dense extracellular matrix of the nucleus to the cells, which may be as far as 8 mm from the capillary bed. The nutrient supply to the nucleus cells can be disturbed at several points. Factors that affect the blood supply to the vertebral body, such as atherosclerosis [71,72], sickle cell anaemia, Caisson disease and Gaucher's disease [73], all appear to lead to a significant increase in disc degeneration. Long-term exercise or lack of it appears to have an effect on movement of nutrients into the disc, and thus on their concentration in the tissue [74,75]. The mechanism is not known but it has been suggested that exercise affects the architecture of the capillary bed at the disc–bone interface. Finally, even if the blood supply remains undisturbed, nutrients may not reach the disc cells if the cartilaginous end-plate calcifies [65,76]; intense calcification of the end-plate is seen in scoliotic discs [77], for instance. Disturbances in nutrient supply have been shown to affect transport of oxygen and lactic acid into and out of the disc experimentally [78] and in patients [79]. Although little information is available to relate nutrient supply to disc properties in patients, a relationship has been found between loss of cell viability and a fall in nutrient transport in scoliotic discs [80,81]. There is also some evidence that nutrient transport is affected in disc degeneration in vivo [82], and the transport of solutes from bone to disc measured in vitro was significantly lower in degenerate than in normal discs [65]. Thus, although there is as yet little direct evidence, it seems probable that a fall in nutrient supply will ultimately lead to degeneration of the disc. Mechanical load and injury Abnormal mechanical loads are also thought to provide a pathway to disc degeneration. For many decades it was suggested that a major cause of back problems is injury, often work-related, which causes structural damage. It is believed that such an injury initiates a pathway that leads to disc degeneration and finally to clinical symptoms and back pain [83]. Animal models have supported this view. Although intense exercise does not appear to affect discs adversely [84] and discs are reported to respond to some long-term loading regimens by increasing proteoglycan content [85], experimental overloading [86] or injury to the disc [63,87] can induce degenerative changes. Further support for the role of abnormal mechanical forces in disc degeneration comes from findings that disc levels adjacent to a fused segment degenerate rapidly (for review [88]). 
This injury model is also supported by many epidemiological studies that have found associations between environmental factors and development of disc degeneration and herniation, with heavy physical work, lifting, truck-driving, obesity and smoking found to be the major risk factors for back pain and degeneration [89-91]. As a result of these studies, there have been many ergonomic interventions in the workplace [91]. However, the incidence of disc degeneration-related disorders has continued to rise despite these interventions. Over the past decade, as magnetic resonance imaging has refined classifications of disc degeneration [5,92], it has become evident that, although factors such as occupation, psychosocial factors, benefit payments and environment are linked to disabling back pain [93,94], contrary to previous assumptions these factors have little influence on the pattern of disc degeneration itself [95,96]. This illustrates the tenuous relationship between degeneration and clinical symptoms. Genetic factors in disc degeneration More recent work has suggested that the factors that lead to disc degeneration may have important genetic components. Several studies have reported a strong familial predisposition for disc degeneration and herniation [97-99]. Findings from two different twin studies conducted during the past decade showed heritability exceeding 60% [100,101]. Magnetic resonance images in identical twins, who were discordant for major risk factors such as smoking or heavy work, were very similar with respect to the spinal columns and the patterns of disc degeneration (Fig. 3) [102]. Genetic predisposition has been confirmed by recent findings of associations between disc degeneration and gene polymorphisms of matrix macromolecules. The approach to date has been via searching for candidate genes, with the main focus being extracellular matrix genes. Although there is a lack of association between disc degeneration and polymorphisms of the major collagens in the disc, collagen types I and II [103], mutations of two collagen type IX genes, namely COL9A2 and COL9A3, have been found to be strongly associated with lumbar disc degeneration and sciatica in a Finnish population [104,105]. The COL9A2 polymorphism is found only in a small percentage of the Finnish population, but all individuals with this allele had disc degenerative disorders, suggesting that it is associated with a dominantly inherited disease. In both these mutations, tryptophan (the most hydrophobic amino acid, which is not normally found in any collagenous domain) substituted for other amino acids, potentially affecting matrix properties [103]. Other genes associated with disc degeneration have also been identified. Individuals with a polymorphism in the aggrecan gene were found to be at risk for early disc degeneration in a Japanese study [106]. This mutation leads to aggrecan core proteins of different lengths, with an over-representation of core proteins able to bind only a low number of chondroitin sulphate chains among those with severe disc degeneration. Presumably these individuals have a lower chondroitin sulphate content than normal, and their discs will behave similarly to degenerate discs that have lost proteoglycan by other mechanisms. Studies of transgenic mice have also demonstrated that mutations in structural matrix molecules such as aggrecan [107], collagen II [108] and collagen IX [109] can lead to disc degeneration. 
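For readers unfamiliar with how such twin-based heritability figures are derived, a common first approximation is Falconer's formula, which doubles the difference between the trait correlations of monozygotic and dizygotic twin pairs. The numbers below are purely illustrative and are not taken from the cited studies [100,101], which fitted formal variance-component models:

\[
h^{2} \approx 2\,(r_{\mathrm{MZ}} - r_{\mathrm{DZ}}), \qquad \text{e.g. } h^{2} \approx 2\,(0.80 - 0.48) = 0.64,
\]

that is, in the range of the 'heritability exceeding 60%' reported for disc degeneration.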
Mutations in genes other than those of structural matrix macromolecules have also been associated with disc degeneration. A polymorphism in the promoter region of the MMP-3 gene was associated with rapid degeneration in elderly Japanese subjects [110]. In addition, two polymorphisms of the vitamin D receptor gene were the first mutations shown to be associated with disc degeneration [111-114]. The mechanism of vitamin D receptor gene polymorphism involvement in disc degeneration is unknown, but at present it does not appear to be related to differences in bone density [111,112,114]. All of the genetic mutations associated with disc degeneration to date have been found using a candidate gene approach and all, apart from the vitamin D receptor polymorphism, are concerned with molecules that determine the integrity and function of the extracellular matrix. However, mutations in other systems such as signalling or metabolic pathways could lead to changes in cellular activity that may ultimately result in disc degeneration [115]. Different approaches may be necessary to identify such polymorphisms. Genetic mapping, for instance, has identified a susceptibility locus for disc herniation, but the gene involved has not yet been identified [116]. In summary, the findings from these genetic and epidemiological studies point to the multifactorial nature of disc degeneration. It is evident now that mutations in several different classes of genes may cause the changes in matrix morphology, disc biochemistry and disc function typifying disc degeneration. Identification of the genes involved may lead to improved diagnostic criteria; for example, it is already apparent that the presence of specific polymorphisms increases the risk of disc bulge, annular tears, or osteophytes [112,117]. However, because of the evidence for gene–environment interactions [97,114,118], genetic studies in isolation are unlikely to delineate the various pathways of disc degeneration. New therapies Current treatments attempt to reduce pain rather than repair the degenerated disc. The treatments used presently are mainly conservative and palliative, and are aimed at returning patients to work. They range from bedrest (no longer recommended) to analgesia, the use of muscle relaxants or injection of corticosteroids, or local anaesthetic and manipulation therapies. Various interventions (e.g. intradiscal electrothermal therapy) are also used, but, despite anecdotal reports of success, trials thus far have found their use to be of little direct benefit [119]. Disc degeneration-related pain is also treated surgically either by discectomy or by immobilization of the affected vertebrae, but surgery is offered in only one of every 2000 back pain episodes in the UK; the incidence of surgical treatment is five times higher in the USA [93]. The success rates of all these procedures are generally similar. Although a recent study indicated that surgery improves the rate of recovery in well selected patients [120], 70–80% of patients with obvious surgical indications for back pain or disc herniation eventually recover, whether surgery is carried out or not [121,122]. Because disc degeneration is thought to lead to degeneration of adjacent tissues and to be a risk factor in the development of spinal stenosis in the long term, new treatments are in development that are aimed at restoring disc height and biomechanical function. Some of the proposed biological therapies are outlined below. 
Cell-based therapies The aim of these therapies is to achieve cellular repair of the degenerated disc matrix. One approach has been to stimulate the disc cells to produce more matrix. Growth factors can increase rates of matrix synthesis by up to fivefold [123,124]. In contrast, cytokines lead to matrix loss because they inhibit matrix synthesis while stimulating production of agents that are involved in tissue breakdown [125]. These proteins have thus provided targets for genetic engineering. Direct injection of growth factors or cytokine inhibitors has proved unsuccessful because their effectiveness in the disc is short-lived. Hence gene therapy is now under investigation; it has the potential to maintain high levels of the relevant growth factor or inhibitor in the tissue. In gene therapy, the gene of interest (e.g. one responsible for producing a growth factor such as transforming growth factor-β or inhibiting interleukin-1) is introduced into target cells, which then continue to produce the relevant protein (for review [126]). This approach has been shown to be technically feasible in the disc, with gene transfer increasing transforming growth factor-β production by disc cells in a rabbit nearly sixfold [127]. However, this therapy is still far from clinical use. Apart from the technical problems of delivery of the genes into human disc cells, the correct choice of therapeutic genes requires an improved understanding of the pathogenesis of degeneration. In addition, the cell density in normal human discs is low, and many of the cells in degenerate discs are dead [21]; stimulation of the remaining cells may be insufficient to repair the matrix. Cell implantation alone or in conjunction with gene therapy is an approach that may overcome the paucity of cells in a degenerate disc. Here, the cells of the degenerate disc are supplemented by adding new cells either on their own or together with an appropriate scaffold. This technique has been used successfully for articular cartilage [128,129] and has been attempted with some success in animal discs [130]. However, at present, no obvious source of clinically useful cells exists for the human disc, particularly for the nucleus, the region of most interest [131]. Moreover, conditions in degenerate discs, particularly if the nutritional pathway has been compromised [65], may not be favourable for survival of implanted cells. Nevertheless, autologous disc cell transfer has been used clinically in small groups of patients [132], with initial results reported to be promising, although few details of the patients or outcome measures are available. At present, although experimental work demonstrates the potential of these cell-based therapies, several barriers prevent the use of these treatments clinically. Moreover, these treatments are unlikely to be appropriate for all patients; some method of selecting appropriate patients will be required if success with these therapies is to be realized. Conclusion Disorders associated with degeneration of the intervertebral disc impose an economic burden similar to that of coronary heart disease and greater than that of other major health problems such as diabetes, Alzheimer's disease and kidney diseases [1,133]. New imaging technologies and advances in cell biology and genetics promise improved understanding of the aetiology, more specific diagnoses and targeted treatments for these costly and disabling conditions. 
However, the intervertebral disc is poorly researched, even in comparison with other musculoskeletal systems (Table 1). Moreover, the research effort in, for instance, the kidney in comparison with that in the disc is entirely disproportionate to the relative costs of the disorders associated with each organ and the number of people affected. Unless more research attention is attracted to intervertebral disc biology, little will come from these new technologies, and back pain will remain as it is at present – a poorly diagnosed and poorly treated syndrome that reduces the quality of life of a significant proportion of the population. Competing interests None declared. Abbreviations MMP = matrix metalloproteinase.
[ "back pain", "genetics", "epidemiology" ]
[ "P", "P", "P" ]
Eur_J_Pediatr-3-1-2042511
What’s new in using platelet research? To unravel thrombopathies and other human disorders
This review on platelet research focuses on defects of adhesion, cytoskeletal organisation, signal transduction and secretion. Platelet defects can be studied by different laboratory platelet functional assays and morphological studies. Easy bruising or a suspected platelet-based bleeding disorder is of course the most obvious reason to test the platelet function in a patient. However, nowadays platelet research also contributes to our understanding of human pathology in other disciplines such as neurology, nephrology, endocrinology and metabolic diseases. Apart from a discussion on classical thrombopathies, this review will also deal with the less commonly known relation between platelet research and disorders with a broader clinical phenotype. Classical thrombopathies involve disorders of platelet adhesion such as Glanzmann thrombasthenia and Bernard-Soulier syndrome, defective G protein signalling diseases with impaired phospholipase C activation, and abnormal platelet granule secretion disorders such as gray platelet disorder and delta-storage pool disease. Other clinical symptoms besides a bleeding tendency have been described in MYH9-related disorders and Duchenne muscular dystrophy due to adhesion defects, and also in disorders of impaired Gs signalling, in Hermansky-Pudlak disease and Chediak-Higashi disease with abnormal secretion. Finally, platelet research can also be used to unravel novel mechanisms involved in many neurological disorders such as depression and autism, in which there is only a subclinical platelet defect. Introduction Normal hemostasis prevents spontaneous bleeding and traumatic hemorrhage by a coordinated sequence of cellular and biochemical reactions leading to the ultimate formation of a stable platelet-fibrin aggregate [20]. Platelets, under normal circumstances, circulate in close contact with the endothelial cell lining of the vessel wall, and respond to vascular damage by adhering to subendothelial structures. Platelet adhesion is the first step in the hemostatic plug formation [11]. The major platelet receptors involved in this process are the von Willebrand factor (vWF) receptor GP(glycoprotein)Ib/IX/V, the collagen integrin receptor α2β1, and the fibrinogen integrin receptor αIIbβ3 (Fig. 1a). Subsequent platelet spreading is mediated by cytoskeletal proteins including the structural subunit of the microtubules, the αβ-tubulin heterodimer, filamin and actin (Fig. 1a). The cytoskeleton is responsible for the shape of the resting platelet and carries out contractile events such as the secretion of granules and retraction of clots by activated cells. Fig. 1a Schematic model of the main components involved in platelet adhesion and the cytoskeleton proteins. Platelet adhesion and its subsequent activation by calcium release is mainly regulated by the platelet receptor αIIbβ3 after binding to fibrinogen or the RGD domain of vWF, the main vWF receptor GPIb/IX/V and the collagen receptor α2β1. Microtubules together with the cytoplasmic, actin-rich cytoskeleton are responsible for the platelet structure. Different actin binding proteins have been identified in platelets such as filamin A, myosin and dystrophin. b Schematic model of G protein signal transduction in platelets regulated by Gq for platelet activation by the ultimate step of calcium release. Gi and Gs further influence the platelet activation by respectively inhibiting and stimulating the intracellular cAMP formation. c Schematic model of platelet secretion. 
The second amplification step in platelet activation is the release of alpha and dense granules in platelets, guaranteeing irreversible platelet activation. Platelet adhesion also initiates multiple intracellular G protein-coupled signalling pathways (Fig. 1b). Stimulation of Gq by different ligands such as adenosine diphosphate (ADP) and thromboxane A2 (TXA2) results in platelet activation by stimulating phospholipase C and a release of calcium from the intracellular stores. This platelet activation process is enhanced when Gi is activated and inhibited when Gs is activated, both by modulating the intracellular cAMP level. Platelet adhesion and activation eventually result in secretion from platelet organelles (Fig. 1c) [37]. Resting platelets circulate as discoid anuclear cells and consist of a lipid bilayer and an internal dense tubular system, where calcium is sequestered. The platelet cytoplasm contains mitochondria, glycogen particles, lysosomes, and the platelet-specific storage granules: the α-granules and dense granules (Fig. 2). The α-granules contain proteins such as platelet factor 4, β-thromboglobulin, platelet derived growth factor, fibrinogen, fibronectin, thrombospondin, plasminogen activator inhibitor I and vWF. Dense bodies are rich in serotonin, ATP, ADP and calcium. Fig. 2 Electron microscopy (original magnification ×22,500) of platelets showing the dense tubular system (DTS), microtubules (MT), open canalicular system (OCS), alpha granules (G), glycogen (Gly) and the dense bodies (DB) The study of platelet adhesion, G protein signalling and secretion is particularly useful for our understanding of several clinical disorders (Table 1). The most obvious reason is of course the study of platelet defects in patients with an isolated platelet disorder leading to bleeding or thrombosis, in order to gain additional information on the different pathways involved in platelet function. A second reason is the analysis of human disorders in which the defective platelet phenotype is just one part of the clinical spectrum, in order to unravel novel biological and genetic mechanisms involved in the disease. Finally, platelet functional and morphological studies can be used as a tool to find novel pathways involved in more complex disorders, usually caused by more than one gene defect. Recently, it became obvious that the molecular pathways involved in more complex human disorders such as diabetes type 2 or neurological disorders such as schizophrenia, migraine, bipolar disorder, and depression can also be better understood by studying platelet signalling and secretion. Due to space limitations, it is impossible to give a full overview of all human disorders studied today by means of platelet research. This review describes some well-known as well as some less common disorders to illustrate how platelet research contributes to the understanding of thrombopathies [13, 35] as well as the broader future of this research outside its classical field of thrombosis and hemostasis (Table 1). 
Table 1 Syndromic and non-syndromic platelet defects and the implicated genes, according to the defective platelet pathway

Adhesion and cytoskeletal defects
- Isolated platelet disorders: Glanzmann thrombasthenia (prolonged bleeding time; no other clinical problems; genes: ITGB3 or ITGA2B); Bernard-Soulier syndrome (macrothrombocytopenia, prolonged bleeding time; no other clinical problems; genes: GPIbα, GPIbβ or GPIX)
- Disorders including a platelet defect: May-Hegglin anomaly, Fechtner syndrome, Epstein syndrome and Sebastian syndrome (all: macrothrombocytopenia, prolonged bleeding time and leucocyte inclusions; Epstein and Fechtner in addition: nephritis, deafness, cataracts; gene: MYH9); Duchenne muscular dystrophy (prolonged bleeding after surgery; muscle degeneration; gene: DMD)
- Disorders studied by functional platelet assays: neurological disorders such as bipolar disorder, schizophrenia, depression and autism

G protein signalling defects
- Isolated platelet disorders: ADP P2Y12 receptor defect (prolonged bleeding time; no other clinical problems; gene: P2Y12); thromboxane TXA2 receptor defect (prolonged bleeding time; no other clinical problems; gene: TXA2R)
- Disorders including a platelet defect: inducible Gsα hyperfunction syndrome (prolonged bleeding time after trauma; brachydactyly, increased alkaline phosphatase and neurological or growth retardation; gene: XLαs); PACAP overexpression (prolonged bleeding time; mental retardation and hypogonadism; gene: PACAP)
- Disorders studied by functional platelet assays: subclinical platelet defect

Secretion defects
- Isolated platelet disorders: gray platelet disorder (prolonged bleeding time; no other clinical problems; gene: unknown); delta-storage pool disease (prolonged bleeding time; no other clinical problems; gene: unknown)
- Disorders including a platelet defect: Hermansky-Pudlak disease (prolonged bleeding time; albinism, lysosomal defect; genes: HPS1–8); Chediak-Higashi disease (prolonged bleeding time; albinism, lethal immunological defect; gene: LYST)
- Disorders studied by functional platelet assays: neurological defect?

The implicated genes are: ITGB3 integrin beta3; ITGA2B integrin alphaIIb; GPIbα glycoprotein Ib alpha; GPIbβ glycoprotein Ib beta; GPIX glycoprotein IX; P2Y12 purinergic receptor P2Y12; TXA2R thromboxane A2 receptor; MYH9 nonmuscle myosin heavy chain 9; DMD dystrophin; XLαs extra-large stimulatory G protein alpha subunit; PACAP pituitary adenylate cyclase-activating peptide; HPS1–8 Hermansky-Pudlak syndrome genes 1 through 8; LYST lysosomal trafficking regulator. When to consider which type of platelet tests A clinical platelet-based bleeding problem is of course the main reason to investigate platelet function and morphology. The diagnostic approach to easy bruising or a suspected platelet-based bleeding disorder includes a careful history and physical examination of the patient as well as different laboratory investigations such as the Ivy bleeding time, platelet aggregation tests, ATP secretion, platelet adhesion by the platelet function analyzer (PFA100) and platelet morphology by electron microscopy [13, 40]. Reviewing the medical history can already establish whether the disorder is hereditary or acquired. The specific clinical findings useful in the differential diagnosis of coagulation versus platelet-based disorders are summarized in Table 2. Mucocutaneous bleeding frequently characterizes abnormalities of platelet function. In contrast, hemorrhage into synovial joints and deep muscular hemorrhage are signs of severe hereditary coagulation disorders and very rare events in disorders of platelets, vessels or acquired coagulation disorders. Inherited disorders of platelet function are further subdivided based on the functions or responses that are abnormal and therefore can belong to different subgroups including abnormal platelet adhesion, signalling and secretion [36]. 
Platelet-based bleeding disorders are usually classified according to abnormalities of platelet function, platelet number (thrombocytopenia) or both [36].

Table 2 Clinical presentation of coagulation and platelet-based bleeding disorders

Clinical symptoms | Disorders of coagulation | Disorders of platelets
Petechiae, epistaxis | Rare | Characteristic
Superficial ecchymoses | Common: large and solitary | Characteristic: small and multiple
Bleeding from superficial cuts and bruises | Minimal | Persistent: often profuse
Delayed bleeding | Common | Rare
Deep dissecting hematomas | Characteristic | Rare
Hemarthrosis | Characteristic | Rare

Functional and morphological platelet studies in patients with mainly a neurological, metabolic or another clinical problem but no obvious bleeding problem are usually not performed for diagnostic purposes but rather for research aims. In such patients, novel insights are expected to result from the platelet research studies, which are still preliminary today but will hopefully help to better define when to ask for which type of platelet test in a given patient. Thrombopathies Glanzmann thrombasthenia and the Bernard-Soulier syndrome (BSS) are two rare inherited disorders of platelet adhesion. Glanzmann thrombasthenia (MIM 273800) is an autosomal recessive disorder, characterized by prolonged bleeding time and abnormal clot retraction [30, 31, 35]. The hallmark of this disease is severely reduced or absent platelet aggregation in response to various physiological platelet agonists such as ADP, thrombin and collagen. The defect is caused by mutations in one of the integrin genes, ITGA2B or ITGB3, encoding the αIIbβ3 receptor complex. Lack of expression or qualitative defects in αIIbβ3 results in a disturbed interaction between activated platelets and adhesive glycoproteins (fibrinogen at low shear and vWF at high shear) that bridge adjacent platelets during platelet aggregation (Fig. 1a). The Bernard-Soulier syndrome (MIM 231200) is caused by abnormalities in the GP Ib/IX/V receptor complex due to mutations in the genes for GPIbα, GPIbβ or GPIX (but there are no reports of BSS affecting the GPV gene) [22]. It is an autosomal recessive disorder with moderate to severe macrothrombocytopenia, decreased platelet survival and often a spontaneous bleeding tendency. The bleeding events can be severe but are usually controlled by platelet transfusion. Most heterozygotes, with a few exceptions, do not have a bleeding diathesis. BSS platelets aggregate normally in response to physiological agonists (ADP and collagen), have a weak response towards low concentrations of thrombin and do not agglutinate when platelet rich plasma is stirred with ristocetin or botrocetin [22]. Defects in G protein signaling resulting in an isolated platelet defect [36] are expected to be caused by a mutant G protein-coupled receptor (GPCR) since these can be cell-specific while the G proteins and their downstream effectors in this pathway are ubiquitously expressed. A dominantly inherited mutation (Arg60Leu) in the Gq-coupled TXA2 receptor was described in patients with a mild bleeding disorder characterized by defective platelet aggregation responses to TXA2 and its analogues (MIM 188070) [15]. In cultured cells, the Arg60Leu mutant was shown to impair phospholipase C (PLC) activation. 
Patients can be heterozygous (with some PLC activation left) or homozygous (without PLC activation) for this mutation; all have a life-long history of mucosal bleeding and easy bruising but no episodes of major bleeding such as hematuria, gastrointestinal bleeding or hemarthrosis [14]. The Gi-coupled ADP receptor P2Y12 (Fig. 1b) is responsible for the sustained, full aggregation response to ADP. P2Y12 deficiency (MIM 609821) is an autosomal recessive bleeding disorder characterized by excessive bleeding, prolonged bleeding time and abnormalities that are very similar to those observed in patients with secretion defects (reversible aggregation in response to weak agonists and impaired aggregation towards low concentrations of collagen and thrombin), except for the severely impaired response to ADP [2]. Study of the heterozygous P2Y12 defect revealed platelets that undergo a normal first wave of ADP-induced aggregation but abnormal ATP secretion with different agonists [3, 42]. Defective platelet secretion is described for patients with absent alpha granules (gray platelet syndrome) or abnormal dense granules (delta-storage pool disease, δ-SPD) [37]. Gray platelet syndrome or α-SPD (MIM 139090) owes its name to the fact that the typically enlarged platelets, devoid of α-granule staining, present with a gray color in a Wright-stained blood smear [32]. Most cases are sporadic, though some family studies suggest an autosomal dominant inheritance. Affected members have a life-long history of mucocutaneous bleeding, which may vary from mild to moderate in severity, prolonged bleeding time, mild thrombocytopenia, abnormally large platelets and an isolated reduction of the platelet α-granule content. The molecular defect(s) in α-SPD have not yet been defined and further insights into the molecular mechanisms responsible for platelet exocytosis (such as the SNARE proteins) will help in the search for causes of human platelet secretory disorders. δ-SPD (MIM 185050) may present as an isolated platelet function defect or can be associated with a variety of other congenital defects (see below). δ-SPD is characterized by a bleeding diathesis of variable degree, mildly to moderately prolonged skin bleeding time (fully related to the amount of ADP or serotonin contained in the granule), abnormal platelet secretion induced by several agonists and a reduced platelet aggregation. The δ-SPD platelets have decreased levels of the dense granule contents: ATP and ADP, serotonin, calcium and pyrophosphate (Fig. 1c). It was estimated that 10–18% of patients with a congenital abnormality of the platelet function have δ-SPD [12]. The inheritance pattern is autosomal recessive in some families while autosomal dominant in others but the molecular players responsible for δ-SPD are still unknown. Human disorders comprising a platelet defect Defects in platelet adhesion and subsequent platelet activation can also be due to an alteration in the platelet cytoskeletal organization, which consists of the microtubules and F-actin coupled to myosin, filamin and dystrophin [29]. Mutations in these widely expressed proteins result in a broader clinical phenotype. May-Hegglin anomaly (MIM 155100), Fechtner syndrome (MIM 153640), Epstein syndrome (MIM 153650), and Sebastian syndrome (MIM 605249) are characterized by macrothrombocytopenia, with or without different types of leukocyte inclusions, which can only be differentiated by an accurate ultrastructural examination [38]. 
In addition, patients with Epstein or Fechtner syndrome also suffer from nephritis, deafness, and congenital cataracts. Recently it became obvious that Sebastian platelet syndrome, May-Hegglin anomaly, Fechtner and Epstein syndrome are caused by mutations in the same gene, MYH9, encoding the 224-kD nonmuscle myosin heavy chain 9 polypeptide [26]. This gene is expressed in platelets, monocytes, granulocytes, the kidney and the auditory system, but also in many other tissues. MYH9 deficiency results in an alteration of the composition and agonist-induced reorganization of the platelet cytoskeleton [1, 6]. The cytoskeletal defect could also explain the abnormal platelet formation from megakaryocytes, resulting in thrombocytopenia and giant platelets in MYH9 deficiency. Why patients with May-Hegglin anomaly, Fechtner syndrome, Epstein syndrome and Sebastian platelet syndrome have different signs and symptoms in other tissues than their common defect in platelets still remains to be elucidated. Duchenne muscular dystrophy (DMD) is an X-linked recessive disease (MIM 310200) characterized by progressive degeneration of muscle resulting in early death from respiratory or cardiac failure. DMD is caused by mutations in the gene encoding dystrophin, a 427-kDa membrane-associated cytoskeletal protein. Evidence for a role of dystrophin in platelets started with the observation that DMD patients tend to bleed more during spinal surgery for scoliosis than do patients with other underlying conditions undergoing the same surgery [7]. Other C-terminal isoforms of dystrophin due to differential promoter usage and/or alternative splicing at the 3′-end of the gene have been identified in platelets (Dp71), the retina (Dp260) and in the peripheral (Dp116) and central nervous systems (Dp140). It is well established that platelets contain a complex membrane cytoskeleton that resembles, at least in part, the cytoskeleton found in muscle, but a role for dystrophin during platelet activation still remains to be clarified [18, 27]. Recent studies showed a role for dystrophin in normal controls during platelet spreading and adhesion by regulating the α2β1 receptor, but this was not studied in DMD patients [4, 5]. Another study describes a normal platelet function in DMD patients and attributes the selective defect of primary hemostasis in DMD to impaired vessel reactivity [43]. Patients with an abnormal signal transduction are a heterogeneous group comprising defects in platelet G protein-coupled receptors (GPCRs), the G proteins, and their effectors. Due to the extremely complex regulation among these key components (Fig. 1b), the incidence of this class of defects is most likely underestimated and the underlying molecular defects for the signaling problems are still largely unknown. Platelet Gs activity is easily determined using the platelet aggregation-inhibition test, which gives a value for the Gs activity based on the inhibition of platelet aggregation by the rapid generation of cAMP after incubation with different Gs agonists such as prostacyclin or prostaglandin (Fig. 1b). A congenital Gs hyperfunction syndrome was described in three patients of two unrelated families due to a paternally inherited functional polymorphism in the extra-large stimulatory G-protein gene (XLαs) and its overlapping cofactor ALEX [8]. This XLαs variant is associated with Gs hyperfunction in platelets, leading to an increased trauma-related bleeding tendency, but is also accompanied by neurological problems and brachydactyly (MIM 139320). 
A subsequent study revealed eight additional patients who paternally inherited the same XLαs polymorphism presenting with platelet Gs hyperfunction, brachydactyly, increased alkaline phosphatase and neurological problems or growth deficiency [9, 17]. Megakaryocytes and platelets express the Gs-coupled VPAC1 receptor, for which both the pituitary adenylyl cyclase-activating polypeptide (PACAP) and the vasoactive intestinal peptide (VIP) are specific agonists. Studies in two related patients with a partial trisomy 18p revealed three copies of the PACAP gene and elevated PACAP concentrations in plasma. The patients suffer from multiple neurological (epilepsy, hypotonia, convulsions, mental retardation, tremor, psychotic and hyperactive behavior), gastro-intestinal (diarrhea, vomiting) and endocrinological (hypoplasia of the pituitary gland, hypogonadotropic hypogonadism) problems and have a pronounced bleeding tendency (MIM 102980) [10]. The basal cAMP level in the patients' platelets was strongly elevated, providing a basis for the strongly reduced platelet aggregation. The VPAC1 signalling pathway also mediates megakaryocyte maturation and platelet formation (unpublished results). Patients with PACAP overexpression have a mild thrombocytopenia, a normal platelet survival, relatively small platelets and their bone marrow examination reveals almost no mature megakaryocytes. There exist two rare syndromic forms of δ-SPD: the Hermansky-Pudlak syndrome (HPS) and the Chediak-Higashi syndrome (CHS). HPS (MIM 203300) consists of several genetically different autosomal recessive disorders, which share the clinical manifestations of oculocutaneous albinism, bleeding, and lysosomal ceroid storage resulting from defects of multiple cytoplasmic organelles: melanosomes, platelet dense granules, and lysosomes [25, 36]. HPS can arise from mutations in at least eight different genes known to date (HPS1 to HPS8), all coding for proteins involved in the formation, trafficking or fusion of intracellular vesicles of the lysosomal lineage [44]. CHS (MIM 214500) is also an autosomal recessive disorder, characterized by variable degrees of oculocutaneous albinism, large peroxidase-positive cytoplasmic granules in hematopoietic and non-hematopoietic cells, δ-SPD, recurrent infections, neutropenia, and an accelerated chronic lymphohistiocytic infiltration phase. The only known CHS-causing gene, LYST, codes for a large protein of unknown function but it seems that CHS is a disease of vesicle trafficking [16]. Most CHS patients present in early childhood and die before the age of 7 years unless treated by bone marrow transplantation [21]. About 10–15% of patients exhibit a much milder clinical phenotype and survive to adulthood but develop progressive and often fatal neurological dysfunction. Human disorders examined by functional and morphological platelet assays Some mainly polygenic disorders can also be studied by using platelets, although the patients only present with a subtle subclinical platelet phenotype. It is not easy to define disorders such as diabetes type 2 or some neurological diseases according to defective adhesion, G protein signalling or secretion, since the platelet defect is not yet well studied and usually overlaps different pathways. This part of the review will only briefly focus on the use of platelet research in our understanding of neurological disorders. 
It has been known for years that certain cellular functions are very similar in platelets and in neurosecretory cells [33], but the link between functional platelet studies and neurological defects is novel. Platelets and neurons both contain mitochondria and dense-core vesicles in which transmitters (such as serotonin) are stored. Platelets release serotonin upon activation (Fig. 1c) and, as at the neuronal membrane, this release is facilitated by a calcium-dependent excitation–secretion coupling mechanism. In addition, both platelets and neurons contain functional neurotransmitter and neuromodulator receptor sites on their outer membrane such as adrenoceptors, serotonin receptors and serotonin transporters. More recently it was shown that platelets also express GABA and glutamate receptors [24, 34]. Serotonin uptake and release by platelets and serotonin plasma levels have been quantified and found to be altered in patients with bipolar disorder, schizophrenia, depression, aggression, autism, migraine, etc. [19, 23, 28]. Many epidemiological studies try to link these changes in activity of the serotonin transporter, or changes in the density and responsiveness of the serotonin 2A receptor and the alpha2 adrenoceptor on the platelet membrane of these patients, with genetic polymorphisms in the corresponding genes. As in other epidemiological studies, solid evidence for any linkage is not obvious, but it is irrefutable that platelet studies have been invaluable in providing insight into the role of serotonin in a number of psychiatric and neurological diseases [39, 41]. Major advances are expected from platelet research in this field in the near future, since it is now obvious that, besides the serotonin pathway, which was only the tip of the iceberg, many other pathways are shared between platelets and neurons, as are many of the gene products responsible for the regulation of granule formation, transport, secretion and endocytosis. Conclusions Platelet research is an expanding field that originally studied isolated thrombopathies caused by the imbalance between thrombosis and hemostasis but more recently has also brought novel insights into our understanding of human pathology in other clinical disciplines such as neurology, endocrinology and metabolic diseases. Platelets are easily accessible cells, and different techniques are possible to study platelet function and morphology under basal and activated conditions. Defects in platelet adhesion, G protein signalling and secretion can result from mutations in platelet-specific genes leading to isolated thrombopathies, or from mutations in widely expressed genes leading to a broader clinical phenotype including a platelet defect. In addition to using platelet research for diagnostic purposes, these platelet functional and morphological studies can also be used for research aims. From the close collaboration between clinicians of different disciplines, geneticists and the functional platelet research unit, novel insights into the pathogenesis of different human disorders are to be expected in the near future.
[ "platelets", "adhesion", "secretion", "g protein signalling", "cytoskeleton" ]
[ "P", "P", "P", "P", "P" ]
Int_Arch_Occup_Environ_Health-4-1-2175021
Coping and sickness absence
Objectives The aim of this study is to examine the role of coping styles in sickness absence. In line with findings that, in contrast to reactive–passive strategies, problem-solving strategies are generally associated with positive results in terms of well-being and overall health outcomes, our hypothesis is that such strategies are associated with a low frequency of sickness absence and with short lengths (total number of days absent) and durations (mean duration per spell). Introduction A strong association exists between ill health and sickness absence, particularly for long absence spells (Marmot et al. 1995; Hensing et al. 1997). However, the decision of an employee to go on sick leave or to stay at work is not just the result of his or her (ill) health status alone (Aronsson et al. 2000; Rosvold and Bjertness 2001; Sandanger et al. 2000; Whitaker 2001; Anonymous 1979; Johansson and Lundberg 2004) but depends also on a number of demographic, social, and economic determinants (Johansson and Lundberg 2004; Voss et al. 2001; Eshoj et al. 2001). For instance, age (Sandanger et al. 2000), gender (Evans and Steptoe 2002), marriage (Mastekaasa 2000), level of education (Eshoj et al. 2001), salary (Chevalier et al. 1987), and sickness absence history (Landstad et al. 2001) are known to be associated with sickness absence behaviour. In addition, the way the individual deals with stressful situations (at work) is likely to affect his or her decision to report ill. In this article we focus on the role of this so-called employee coping behaviour. The relationship between coping and illness behaviour has been a major research focus over the past two decades (Somerfield and McCrae 2000). A variety of conceptual coping frameworks have been proposed and numerous measures have been developed to assess ways of coping (McWilliams et al. 2003). Pioneering work in the field of coping has been carried out by Folkman and Lazarus (1980) who define coping as “the cognitive and behavioral efforts made to master, tolerate, or reduce external and internal demands and conflicts among them”. In their opinion, coping has to be considered as a behaviour that is primarily determined by environmental demands, that is, coping is an individual response to a stressful environment. In contrast, other scholars (Holahan et al. 1996; Moos and Holahan 2003) consider coping primarily as a trait or as a resource. The former refers to a relatively stable personal characteristic: that is, similar coping strategies are used across a wide variety of situations (Parker and Endler 1992; Carver and Scheier 1994). The latter refers to the use of particular social and personal characteristics: that is, personal resources on which the individual may draw when dealing with stressful situations (Pearlin and Schooler 1978). This trait or dispositional approach to coping implies a stable coping style or a regularly used coping resource. As early as four decades ago, Kahn et al. (1964) distinguished between two general coping strategies: problem-solving strategies and reactive–passive strategies. Their idea of two general coping strategies has been worked out by Lazarus and Folkman (1984) in what nowadays is probably the most popular and widely accepted conceptualization of coping behaviour. 
Problem-solving coping refers to active strategies that are directly targeted at solving the problem at hand, whereas reactive–passive coping refers to those strategies that reduce the negative emotions evoked by the stressful situation (Elfering et al. 2005). Much research on coping strategies reveals that both reactive–passive strategies and avoidance strategies result in psychological and physical symptoms (Terry et al. 1996; Pisarski et al. 1998; Penley et al. 2002), whereas active, problem-solving coping generally has a positive impact on well-being and overall health outcomes (Penley et al. 2002). However, in their recent review, Austenfeld and Stanton (2004) criticized this popular and almost generally accepted conclusion. They identified over a hundred articles examining the relationship between reactive–passive coping and adjustment (Stanton et al. 2002b) and found that hardly any of the coping instruments contained the same set of coping strategies, which made it practically impossible to aggregate the findings. Furthermore, the association between reactive–passive strategies and psychological and physical symptoms appeared to be related to the way these strategies had been operationalized (Stanton et al. 2002a). It appeared that corruption of the original coping items, as well as the use of item formulations that include the expression of emotional distress or self-deprecation, resulted in spurious correlations. Studies on coping and sickness absence are scarce. Kristensen (1991) was among the first to investigate this relationship and he asserted that sickness absence itself should be regarded as coping behaviour reflecting the individual’s perception of health or illness. Sickness absence itself, in his opinion, is a functional coping strategy, used by employees to reduce work-related strain by avoiding the workplace and thus creating for themselves the opportunity for recuperation. Kristensen was one of the first not to focus primarily on determinants of sickness absence, but rather tried to understand sickness absence from a coping perspective. By doing so, he went beyond existing concepts of coping by considering sickness absence “a type of coping behaviour” (Kristensen 1991). As he stated: “sickness absence can well be a rational coping behaviour seen in the light of a person’s wish to maintain his/her health and working capacity: as such it is the opposite of withdrawal behaviour”. Clearly, this approach differs from considering coping as a personality trait or resource. In the present study, coping is conceptualized and measured as a trait or disposition, i.e. it is assumed that individuals tend to use rather similar coping strategies across a wide variety of situations. The Utrecht coping list (UCL) (Schreurs et al. 1993) was selected to assess the employees’ coping style. This well-validated self-report questionnaire is the most widely used coping inventory in the Netherlands, both in research and in practice (Schreurs et al. 1993; Schaufeli and Van Dierendonck 1992; Norberg et al. 2005; Buitenhuis et al. 2003). Like the COPE questionnaire of Carver et al. (1989), the UCL asks individuals how they deal with stressful situations; that is, how often they engage in various efforts when encountering problems or unpleasant occurrences. The UCL distinguishes between five coping styles that can be grouped into two higher-order coping styles: an active, problem-solving style and a reactive–passive style (Schaufeli and Van Dierendonck 1992). 
Hence, the UCL offers the possibility to investigate employees’ coping styles at a more detailed level, while at the same time taking into account the conceptual distinction between problem-solving and reactive–passive coping. Sickness absence has been measured in terms of frequency, (total) length of sickness absence, and (mean) duration of sickness absence spells, as well as by the sickness absence free interval. These sickness absence measures are defined in accordance with the recommendations of Hensing et al. (1998), who pleaded for a more standardized international description of sickness absence measures. In their literature review, Hensing et al. pointed out the multi-interpretability of sick leave indicators and recommended basic measures to encompass the full spectrum of the sickness absence phenomenon and to make studies more accessible for international comparisons. Recently, a study by Landstad et al. (2001) confirmed this line of reasoning by concluding that different forms of absenteeism need to be studied simultaneously in order to correctly distinguish changes in the sickness absence pattern. In summary then, the aim of the study is to examine the role of coping styles in sickness absence. Based on the fact that, contrary to reactive–passive strategies, problem-solving strategies are generally associated with positive results in terms of well-being and overall health outcomes, our hypothesis is that such strategies are related to a low frequency of sickness absence and to short lengths and durations. Reactive–passive strategies, on the other hand, are not expected to be related to sickness absence. Subjects and methods Study population and participants Participants were employees of a large Dutch telecom company. An occupational health survey, including an assessment of coping strategies, was sent to all 7,522 employees (response rate 51%; N = 3,852). Sickness absence of the participants was followed up for 1 year after the survey. Due to missing sickness absence data, the sample was reduced to 3,628 employees [3,302 men (mean age 44.7 years, SD = 7.5) and 311 women (mean age 39.7 years, SD = 8.7)]. A description of the sample is shown in Table 1. During the first quarter after the start of the study, 64% of the participants had not been absent because of sickness, whereas 7% of the participants had been absent for more than 14 days (length).
Table 1 Demographics and absenteeism of participants
Male: 91%
Age, mean (SD) (min–max): 44.2 (7.7) (22–63) years
Marital status: married or cohabiting 79%; single 17%; divorced or separated 4%
Educational level: lower vocational education 27%; intermediate vocational education 50%; higher vocational education and university 21%; missing/something else 2%
Working years in present job: ≤1 year 30%; >1–5 years 43%; >5–10 years 14%; >10 years 14%
Sickness absence in first quarter: 0 days 64%; 1–7 days 22%; 8–14 days 6%; >14 days 7%
Function: blue collar (executive) 41%; office workers (administrative) 30%; supervisors 6%; consultants 16%; managerial staff 7%
Compared to non-participants, participants were predominantly male, older, better paid, and less often absent because of sickness (see Table 2). 
Table 2 Demographics and absenteeism of participants (n = 3,628) versus non-participants (n = 3,670); p values from t tests or χ2 tests
Gender (% women): 8.6 vs. 14.1 (p = 0.000)
Age, mean (SD), years: 44.2 (7.7) vs. 40.7 (9.3) (p = 0.000)
Salary (%) (p = 0.000): low 40.6 vs. 53.2; medium 42.9 vs. 33.3; high 16.5 vs. 13.5
Absenteeism, length, mean (SD) days: 14.9 (39.9) vs. 22.9 (59.3) (p = 0.000)
Absenteeism, frequency, mean (SD): 1.20 (1.31) vs. 1.31 (1.46) (p = 0.000)
Measures Coping style We assessed the coping strategy of the participants using the shortened 19-item version of the original 30-item Utrecht Coping List (UCL) (Schreurs et al. 1993). This questionnaire was designed to measure the coping strategies people use in stressful situations, either life events or daily hassles. Each item is rated on a four-point Likert scale ranging from one (never) to four (very often). The UCL includes five dimensions: (1) active problem-focusing (five items, e.g. thinking of different possibilities to solve a problem), (2) seeking social support (five items, e.g. seeking comfort and sympathy), (3) palliative reaction pattern (four items, e.g. looking for distraction), (4) avoidance behavior (three items, e.g. complying to avoid problematic situations) and (5) expression of emotions (two items, e.g. showing frustrations). The first three coping styles were found to cluster into a second-order active problem-solving factor, whereas the final two styles clustered into a reactive–passive factor (Schaufeli and Van Dierendonck 1992). According to the test manual, the internal consistencies as well as the test–retest reliability are satisfactory (Schreurs et al. 1993). In order to assess the factorial validity of the shortened UCL in our employee sample, a confirmatory factor analysis was carried out. Sickness absence Sickness absence data were taken from the sickness absence records of the employees filed in the database of ArboNed, an occupational health service (OHS) serving the telecom company. All spells of absence for medical reasons were centrally reported and registered by the executive manager of the company. Absence spells longer than 2 weeks were verified by an occupational physician, who invited the employee on sick leave for an interview. Therefore, the validity of the absence data is assumed to be high. The measures used are (1) the (total) length of sickness absence in current and new spells during the study period (1 year) per sick-listed person (i.e. the total number of days absent), (2) the frequency of sickness absence (new sick-leave spells during the 1-year study period) and (3) the (mean) duration of sickness absence (sick-leave days in new spells during the 1-year study period, per spell). The duration of sickness absence is classified into more or less than 7 days. In our sample, short-term sickness absence (less than 7 days) accounts for 75% of the absences and mainly represents minor ailments. Finally, we assessed the median time before the onset of a new sick leave period after the occupational health survey. Statistical analysis Confirmatory factor analysis (CFA), using the AMOS 5 software program (Arbuckle 2003), was used to test the fit of two competing models: M1, which assumes that all 19 items load on one general coping factor, and M2, which assumes that the items load on the five hypothesized correlated factors. Maximum likelihood estimation methods were used and the input for each analysis was the covariance matrix of the items. The goodness-of-fit of both models was evaluated using the χ2 goodness-of-fit statistic and the root mean square error of approximation (RMSEA). 
However, χ2 is sensitive to sample size, so that the probability of rejecting a hypothesized model increases when sample size increases, even if the difference between the fitted model and the “true” underlying model is very small. To overcome this problem, the computation of relative goodness-of-fit indices is strongly recommended (Bentler 1990). Three relative goodness-of-fit indices were computed: the normed fit index (NFI), the non-normed fit index (NNFI) and the comparative fit index (CFI). The latter is particularly recommended for model comparison purposes (Goffin 1993). For all relative fit indices, as a rule of thumb, values greater than 0.90 are considered to indicate a good fit (Byrne, 2001, pp. 79–88), whereas values smaller than 0.08 for the RMSEA indicate acceptable fit (Cudeck and Browne 1993). Next, Cronbach alphas were calculated for the UCL subscales. In the next step, scale scores for the different coping strategies were calculated and transformed into scale scores ranging from 0 to 100. Finally, tertiles of the distribution of the 0–100 scale scores were used to distinguish between low, medium and high levels of the coping strategies. To examine the relationship between coping and sickness absence, odds ratios and corresponding 95% confidence intervals were calculated using logistic regression analysis. Stepwise multiple logistic regression analysis was used to study the (confounding) influence of sociodemographic factors and other determinants on the relationship between coping and sickness absence. The magnitude of the (confounding) effects was assessed by calculating the proportion of the excess risk (OR minus 1.0) explained when fitting these terms in the model. Finally, the period between the health surveillance and the onset of a new period of absenteeism was evaluated using survival analysis. Since we wish to estimate the probability of absenteeism at a designated time interval (conditional probability), the Kaplan–Meier methodology (Kaplan and Meier 1958) has been applied. With this statistical technique, means, medians and confidence intervals of the ‘survival’ (in this study: the onset of absenteeism) are calculated without making assumptions about the survival distribution. Results UCL factor structure As can be seen in Table 3, confirmatory factor analysis (CFA) corroborated the underlying five-factor structure of the short form of the UCL. More particularly, all fit indices of M2 (the hypothesized model with five correlated factors) met their respective criteria, except the NNFI, which approached its criterion of 0.90. The mean correlation between the five factors was 0.24, ranging from −0.04 to 0.45. Moreover, the fit of M2 was superior to that of M1, which assumed that all items load on one undifferentiated coping factor (Δχ2 = 846.22; df = 10; P < 0.001). Hence the factorial validity of the UCL-19 was demonstrated.
Table 3 Fit indices of the one-factor (M1) and five-factor (M2) models of coping (UCL-19)
Model | χ2 | df | GFI | AGFI | RMSEA | NFI | NNFI | CFI
M1 | 1030.29 | 152 | 0.69 | 0.61 | 0.14 | 0.43 | 0.36 | 0.43
M2 | 184.07 | 142 | 0.95 | 0.93 | 0.06 | 0.90 | 0.88 | 0.90
Null model | 17976.60 | 171 | 0.54 | 0.49 | 0.17 | – | – | –
GFI goodness of fit index, AGFI adjusted goodness of fit index, RMSEA root mean square error of approximation, NFI normed fit index, NNFI non-normed fit index, CFI comparative fit index; all χ2, P < 0.001
The Cronbach alphas for the subscales avoidance behaviour, expression of emotions, seeking social support, active problem-focusing and palliative reaction in this study were 0.67, 0.65, 0.76, 0.81 and 0.68, respectively. 
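Internal consistencies of this kind follow directly from the item-level variances. As a rough illustration (not the authors' code; the data and names below are entirely hypothetical), a minimal Python sketch of Cronbach's alpha for a respondents-by-items matrix of 1–4 Likert scores such as the UCL produces:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the scale sum
    return (k / (k - 1.0)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical data: 100 respondents answering the five 'active
# problem-focusing' items on the 1-4 Likert scale used by the UCL.
# Random, uncorrelated answers, so alpha will be near zero here.
rng = np.random.default_rng(42)
scores = rng.integers(1, 5, size=(100, 5))
print(f"alpha = {cronbach_alpha(scores):.2f}")
```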
Although some values are slightly below 0.70, which is recommended for established scales, all values are well above 0.60, which is deemed satisfactory for newly developed scales (Nunnally and Bernstein 1994). Sickness absence and demographics As can be seen from Table 4, the length (total number of days absent) and duration (mean duration per spell) of sickness absence are associated with gender (i.e. women), being divorced or single, having an intermediate or lower education, a shorter period working in the present (current) job, lower salary, higher age, and a history of sickness absence, both for length and frequency. Likewise, a higher frequency of sickness absence was associated with gender (i.e. women), being divorced, an intermediate salary and a history of sickness absence, both for length and frequency. In our sample there is no association between absence frequency and level of education, the period working in the current job, or age.
Table 4 Associations of demographics and sickness absence; OR (95% CI) given for length >14 days | duration >7 days | frequency >2x
Gender: woman 1.00 | 1.00 | 1.00; man 0.49 (0.38–0.62) | 0.66 (0.51–0.85) | 0.42 (0.32–0.55)
Marital status: married 1.00 | 1.00 | 1.00; single 0.83 (0.66–1.03) | 1.25 (1.03–1.52) | 1.05 (0.82–1.36); divorced 1.73 (1.24–2.41) | 2.18 (0.54–8.81) | 1.54 (1.04–2.28)
Education: university 1.00 | 1.00 | 1.00; higher vocational education 0.88 (0.59–1.33) | 1.13 (0.72–1.78) | 1.00 (0.63–1.59); intermediate vocational education 1.42 (1.00–2.00) | 1.93 (1.30–2.86) | 1.16 (0.78–1.72); lower vocational education 2.07 (1.45–2.96) | 2.84 (1.90–4.24) | 1.30 (0.86–1.96)
Present (current) job: >10 years 1.00 | 1.00 | 1.00; 5–10 years 0.98 (0.74–1.28) | 1.02 (0.78–1.34) | 1.04 (0.74–1.46); <5 years 0.67 (0.54–0.83) | 0.61 (0.49–0.76) | 0.91 (0.69–1.19)
Salary: low (4–6) 1.00 | 1.00 | 1.00; intermediate (7–9) 0.50 (0.43–0.60) | 0.49 (0.41–0.59) | 1.71 (1.28–2.28); high (>9) 0.38 (0.30–0.49) | 0.33 (0.25–0.44) | 1.07 (0.79–1.44)
Age: <35 years 1.00 | 1.00 | 1.00; 35–45 years 1.38 (1.07–1.78) | 1.54 (1.17–2.02) | 1.20 (0.90–1.60); >45 years 1.48 (1.16–1.88) | 1.79 (1.38–2.32) | 0.93 (0.70–1.22)
History of sickness absence in days (length, 1 year before): 0 1.00 | 1.00 | 1.00; 1–7 1.57 (1.24–2.01) | 1.19 (0.95–1.50) | 3.07 (2.18–4.31); 8–14 3.68 (2.85–4.74) | 2.65 (2.08–3.39) | 6.30 (4.44–8.95); >14 9.72 (7.75–12.2) | 4.26 (3.43–5.29) | 12.9 (9.40–17.8)
History of sickness absence frequency (1 year before): 0x 1.00 | 1.00 | 1.00; 1–2x 2.77 (2.26–3.41) | 1.91 (1.58–2.32) | 3.96 (2.91–5.40); >2x 8.66 (6.81–11.02) | 3.83 (3.04–4.84) | 17.47 (12.6–24.2)
n ranges between 3,575 and 3,606 due to missing values
Sickness absence and ways of coping As displayed in Table 5, a greater length (total number of days) of sickness absence is predicted by low or medium active problem-focusing, by avoidance behaviour and by a medium or high palliative reaction. The frequency and the duration of sickness absence show similar associations, although the latter is related to low seeking of social support rather than to a palliative reaction. The summary part of Table 5 lists the significant associations between the various sickness absence measures and the ways of coping. It can be seen from this summary that the crude ORs of the active and avoidant coping styles show the most consistent patterns of associations across all sickness absence measures. 
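Crude odds ratios like those in Tables 4 and 5 are simply the exponentiated coefficients of a logistic regression on tertile dummies. Before turning to the tables, here is a minimal sketch of that computation, assuming the statsmodels package and entirely hypothetical data (the paper does not name the software used for these models):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: a 0-100 coping scale score and a binary outcome
# (e.g. sickness absence length > 14 days in the follow-up year).
rng = np.random.default_rng(7)
n = 3575
score = rng.uniform(0, 100, n)
absent = rng.integers(0, 2, n)

# Tertiles of the 0-100 score, as in the paper: low / medium / high,
# with 'low' as the reference category.
tertile = pd.qcut(score, 3, labels=["low", "medium", "high"])
dummies = pd.get_dummies(tertile, drop_first=True).astype(float)

X = sm.add_constant(dummies)
fit = sm.Logit(absent, X).fit(disp=False)

# OR = exp(beta); the 95% CI is the exponentiated Wald interval.
ors = np.exp(fit.params).rename("OR")
ci = np.exp(fit.conf_int())
ci.columns = ["2.5%", "97.5%"]
print(pd.concat([ors, ci], axis=1).round(2))
```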
Table 5 Associations of coping and sickness absence; OR (95% CI) given for length >14 days | duration >7 days | frequency >2x
Problem-solving
Active problem-focusing: low 1.00 | 1.00 | 1.00; medium 0.84 (0.70–0.99) | 0.83 (0.69–0.99) | 0.84 (0.68–1.04); high 0.61 (0.49–0.75) | 0.69 (0.56–0.86) | 0.78 (0.53–0.87)
Seeking social support: low 1.00 | 1.00 | 1.00; medium 0.98 (0.82–1.17) | 0.97 (0.80–1.16) | 1.01 (0.80–1.26); high 0.92 (0.75–1.12) | 0.81 (0.66–0.99) | 1.15 (0.90–1.45)
Palliative reaction: low 1.00 | 1.00 | 1.00; medium 1.22 (1.01–1.49) | 1.15 (0.94–1.41) | 1.43 (1.13–1.81); high 1.33 (1.11–1.59) | 1.19 (0.99–1.43) | 1.40 (1.12–1.74)
Reactive–passive
Avoidance behaviour: low 1.00 | 1.00 | 1.00; medium 1.11 (0.92–1.32) | 1.14 (0.94–1.37) | 1.22 (0.98–1.51); high 1.35 (1.10–1.65) | 1.32 (1.07–1.63) | 1.39 (1.09–1.67)
Expression of emotions: low 1.00 | 1.00 | 1.00; medium 1.05 (0.87–1.28) | 1.21 (0.99–1.48) | 0.91 (0.72–1.14); high 1.19 (0.95–1.50) | 1.13 (0.89–1.44) | 1.29 (0.99–1.69)
Summary of Table 5 (X = significant association with length | duration | frequency): active problem-focusing X | X | X; seeking social support – | X | –; palliative reaction X | – | X; avoidance behaviour X | X | X; expression of emotions – | – | –
Sickness absence, demographics and ways of coping Of course, the question arises whether or not the association between coping and sickness absence could be explained by previous sickness absence and by demographics. Therefore, Table 6 displays the ORs for the three sickness absence measures with the coping strategies after adjustment for previous sickness absence and for the demographics mentioned in Table 4.
Table 6 Odds ratios (95% CI) for sickness absence (length >14 days | duration >7 days | frequency >2x) associated with different coping styles measured at the start of a 1-year follow-up study (n = 3,575); adjustments are cumulative per row
Problem-solving
Active problem-focusing: no adjustments (crude OR) 0.63 (0.51–0.77) | 0.71 (0.57–0.80) | 0.69 (0.54–0.89); + history of sickness absence length 0.71 (0.56–0.88) | 0.78 (0.63–0.97) | 0.71 (0.55–0.93); + gender (female) 0.72 (0.58–0.90) | 0.79 (0.63–0.98) | 0.73 (0.55–0.95); + salary (high) 0.77 (0.61–0.97) | 0.86 (0.69–1.07) | 0.74 (0.57–0.98); + education (high) 0.79 (0.63–0.99) | 0.88 (0.71–1.10) | 0.74 (0.57–0.98); + marital status (married) 0.79 (0.62–0.99) | 0.88 (0.70–1.10) | 0.74 (0.57–0.98)
Seeking social support: crude 0.90 (0.73–1.09) | 0.80 (0.65–0.98) | 1.14 (0.90–1.45); + history 0.95 (0.78–1.16) | 0.81 (0.66–1.00) | 1.06 (0.83–1.37); + gender 0.83 (0.67–1.03) | 0.78 (0.63–0.97) | 1.01 (0.78–1.30); + salary 0.87 (0.70–1.08) | 0.83 (0.67–1.02) | 1.03 (0.80–1.33); + education 0.88 (0.71–1.10) | 0.84 (0.68–1.04) | 1.03 (0.80–1.34); + marital status 0.88 (0.71–1.10) | 0.84 (0.68–1.04) | 1.03 (0.80–1.34)
Palliative reaction: crude 1.32 (1.10–1.58) | 1.19 (0.99–1.43) | 1.37 (1.10–1.72); + history 1.24 (1.02–1.50) | 1.16 (0.96–1.41) | 1.18 (0.93–1.50); + gender 1.20 (0.99–1.45) | 1.14 (0.94–1.38) | 1.15 (0.91–1.46); + salary 1.20 (0.99–1.46) | 1.14 (0.94–1.38) | 1.15 (0.91–1.45); + education 1.21 (1.00–1.47) | 1.15 (0.95–1.40) | 1.15 (0.91–1.46); + marital status 1.22 (1.00–1.48) | 1.16 (0.96–1.41) | 1.15 (0.91–1.46)
Reactive–passive
Avoidance behaviour: crude 1.36 (1.11–1.66) | 1.33 (1.07–1.64) | 1.37 (1.07–1.75); + history 1.24 (0.99–1.54) | 1.29 (1.04–1.60) | 1.37 (1.05–1.78); + gender 1.23 (0.99–1.53) | 1.28 (1.03–1.59) | 1.36 (1.04–1.77); + salary 1.22 (0.98–1.52) | 1.27 (1.02–1.57) | 1.35 (1.04–1.75); + education 1.21 (0.97–1.51) | 1.26 (1.02–1.57) | 1.35 (1.03–1.75); + marital status 1.22 (0.97–1.52) | 1.27 (1.02–1.58) | 1.35 (1.03–1.75)
Expression of emotions: crude 1.17 (0.97–1.40) | 1.01 (0.83–1.22) | 1.41 (1.13–1.75); + history 1.07 (0.88–1.31) | 0.98 (0.80–1.19) | 1.34 (1.06–1.68); + gender 1.06 (0.87–1.29) | 0.97 (0.79–1.18) | 1.32 (1.05–1.67); + salary 1.07 (0.87–1.30) | 0.98 (0.80–1.19) | 1.33 (1.05–1.67); + education 1.08 (0.88–1.31) | 0.99 (0.81–1.20) | 1.33 (1.05–1.67); + marital status 1.08 (0.88–1.31) | 0.99 (0.81–1.20) | 1.33 (1.05–1.67)
All odds ratios are based each time on the same 3,575 employees without missing values on any variable in the model
Length Adjustment for sickness absence history increases the excess risk of being absent for more than 14 days in one year by 22% for active problem-focusing (thus, sickness absence history reduces the effect of active coping), while reducing it by 25 and 33% for palliative reaction and avoidance coping, respectively. After additional adjustment for gender, the excess risk for length associated with palliative reaction decreases by a further 17%. The excess risk for length associated with active problem-focusing increases by 18% when salary is adjusted for in addition to sickness absence history and gender. In summary, adjusted for several confounding variables, the length of sickness absence is effectively influenced by active problem-focusing and palliative reaction. Frequency Adjustment for sickness absence history barely reduces the risk for frequency associated with active problem-focusing and avoidance behaviour. For palliative coping, the reduction of the excess risk amounts to 51%. When adjusted for gender, in addition to sickness absence history, the risk of high frequency in association with palliative reaction is reduced by another 16%. In sum, adjusted for several confounding variables, the frequency of sickness absence is effectively influenced by active problem-focusing, avoidance behaviour and expression of emotions. Duration Adjustment for sickness absence history reduces the excess risk of active problem-focusing by 24%, of seeking social support and palliative reaction by 16%, and of avoidance behaviour by 12%. Adjustment for gender changes the excess risk of duration in association with seeking social support by 16%. In summary, adjusted for several confounding variables, the duration of sickness absence is effectively influenced by active problem-focusing, avoidance behaviour and seeking social support. Effects on the onset of a new period of absenteeism During the first year, the median time before the onset of a new episode of absenteeism is significantly shorter for those low in active problem-focusing, high in avoidance, and high in a palliative response. 
For the two remaining coping styles, no significant results were found (Table 7). This means that employees who are used to actively solving problems, instead of avoiding problems or engaging in alternative behaviours, enter sick leave later the next time.
Table 7 Kaplan–Meier: the relation between different coping styles and the onset of absenteeism in the year after the coping assessment; median days (SE, 95% CI) per tertile, and log-rank statistic (df, significance)
Problem-solving
Active problem-focusing: low 152 (7, 137–167); medium 170 (13, 145–195); high 176 (14, 149–203); log rank 9.44 (2, 0.01)
Seeking social support: low 168 (10, 148–188); medium 165 (8, 148–182); high 155 (17, 122–188); log rank 0.45 (2, 0.80)
Palliative reaction: low 182 (14, 155–209); medium 155 (12, 132–178); high 146 (7, 131–161); log rank 13.65 (2, 0.00)
Reactive–passive
Avoidance behaviour: low 182 (11, 160–204); medium 151 (11, 130–172); high 144 (10, 125–163); log rank 14.6 (2, 0.00)
Expression of emotions: low 165 (10, 146–184); medium 167 (11, 145–189); high 156 (12, 133–179); log rank 0.98 (2, 0.61)
Discussion In accordance with our hypothesis, and after adjustment for potential confounders, employees with an active problem-solving coping strategy are less likely to drop out because of sickness absence in terms of the frequency, length (total number of days absent, more than 14 days) and duration (mean duration per spell, more than 7 days) of sickness absence. This positive effect is observed in the case of ‘seeking social support’ only for the duration of sickness absence, and in the case of ‘palliative reaction’ only for the length and frequency of sickness absence. In contrast, an avoidant coping style, representing a reactive–passive strategy, significantly increases the likelihood of frequent absences, as well as the duration of sickness absence. Expression of emotions, representing another reactive–passive strategy, has no effect on sickness absence. The median time before the onset of a new episode of absenteeism, finally, is significantly extended for active problem-solving and reduced for avoidance and for a palliative response. In summary, we conclude that, in accordance with our hypothesis, a problem-solving coping strategy, in contrast to a reactive–passive coping strategy, significantly reduces sickness absence. This result seems to corroborate other research findings showing that problem-solving coping is associated with well-being and overall health outcomes (Kohn 1996). On the other hand, our results are at odds with research findings that document a positive relationship between reactive–passive coping and health (Austenfeld and Stanton 2004; Coyne and Racioppo 2000). Austenfeld and Stanton (2004) have argued that the negative effect of reactive–passive coping on health may partly be attributed to the operationalization of this construct, and therefore recommended a clear description of the reactive–passive coping items used. The idea is that reactive–passive coping can be separated into two factors, namely emotional expression and emotional processing (Lazarus 1993). The latter is an active attempt to acknowledge, explore the meanings of, or come to an understanding of one's emotions; items measuring emotional processing accordingly focus on the acknowledgement of emotions, the validity and importance of feelings, and the delving into those feelings. Especially emotional processing has a positive association with health, although how this influence occurs is still unclear. The items that tap reactive–passive coping in the UCL refer to the expression of emotions and not to their processing. 
This probably explains the neutral and negative effects on sickness absence of ‘emotional expression’ and ‘avoidance behaviour’, respectively. A second possible explanation may be that reactive–passive strategies are positively related to health, but not necessarily to sickness absence. Our study partly refutes the assumption of Kristensen (1991) that sickness absence is a coping strategy by itself. Kristensen claimed that employees who use sickness absence as a coping strategy would experience less work-related strain, especially in jobs with poor decision latitude. Accordingly, because they are no longer exposed to their stressful jobs, employees would recuperate during sickness absence, especially in the case of psychosomatic symptoms. In our study, sickness absence history, which can be considered a proxy of the coping strategy of sickness absence, had only a minor impact on sickness absence given a general coping style, and although the effect is less strong, the coping strategies measured by the UCL still have an effect on sickness absence. The favourable outcome of problem-solving coping in relation to sickness absence can be attributed to being engaged in active transactions between person and environment with the aim of alleviating stress-inducing situations (Lazarus 1993; Huizink et al. 2002; Roesch and Weiner 2001). Efforts to remove the stressor, gathering information, and finding possible solutions to the problems are a few examples. In general, these strategies are associated with self-confidence and perceived control, and are observed in individuals who are persistent and assertive, self-efficacious, and less anxious and depressed (Heppner 1988; Heppner and Baker 1997). Two factors in the evaluation of problem-solving coping should be commented upon. Men are believed to be more likely to confront a problem with active coping, whereas women are believed to exhibit a more reactive–passive response (Pearlin and Schooler 1978; Hamilton and Fagot 1998). For instance, a meta-analysis by Tamres et al. (2002) showed that, compared to men, women are more likely to use indirect strategies that involve verbal expression or to seek emotional support. Huizink et al. (2002), however, argue that the presumed effectiveness of problem-solving strategies is based on the assumption that male gender-role behaviour is superior. They suggest that studies, as a result of gender bias, have failed to identify other styles of coping as potentially effective. In our study, however, considering several styles, the adjustment for gender barely affects the influence of coping on the sickness absence measures. Another complicating factor in the evaluation of the effectiveness of problem-solving coping may be that reviewers group several distinct coping behaviours under one single coping category in an effort to simplify the findings (Tamres et al. 2002). For instance, problem-solving coping may be composed of different behaviours. This is underscored by our finding that different problem-solving strategies have different effects on sickness absence. Seeking social support, for example, affects only duration (marginally), whereas active problem-focusing affects length, duration and frequency. The difference in outcome across sickness absence measures in the case of seeking social support may be clarified by Stansfeld et al. (1997), who argue that social support may influence absence-related behaviour and encourage a person to take absence at a time of illness. 
Conversely, one may postulate that social support also shortens sickness absence. Together, both postulations may result in the absence of a substantial effect. To the authors' knowledge, this is the first study relating four sick leave outcome measures to coping, thereby revealing a more comprehensive picture of changes in the sick leave pattern. In line with Isacsson et al. (1992), we can conclude that “adding more measures gives a more comprehensive picture of sickness absenteeism and of differences between groups”. For instance, the present study demonstrates a relation between a palliative coping reaction and the length of sickness absence, but not the duration of sick leave. Without this differentiated pattern of sickness absence measures, the differential effects of the several coping strategies would remain invisible. Another, and perhaps even more important, argument for using different measures of sickness absence is the accessibility of this study for international comparisons in future research. Finally, the multi-factorial aetiology of sickness absence requires discussion. Alexanderson (1998) pointed out that different disciplines and scientific traditions deal in different ways with absenteeism. In medical science, for instance, the focus of research is on occurrence, etiology and intervention, whereas the focus in medical sociology is on interacting factors within a pre-circumscribed model. She and other authors (Whitaker 2001; Alexanderson 1998) therefore categorized the many factors of sickness absence into three levels: the macro/national level (Alexanderson 1995) (e.g. insurance systems), the organizational level (Jeurissen and Nyklicek 2001; Vahtera et al. 1996) (e.g. job demands, resources) and the individual level (e.g. gender, education). Recognizing this phenomenon, our analyses were adjusted for several known risk factors at the level of the individual. Since the present study was conducted in one Dutch company, the influence of organizational and socioeconomic factors was equally present in all groups and in this sense controlled for. A strong point of our study is the detailed way in which sickness absence was assessed, using objective archival data. Thus far, relatively little attention has been paid to the implications of different quantitative measures of sickness absence. Moreover, a prospective design was used that allowed for the prediction of future sickness absenteeism. A limitation of the study is the non-recurring measurement of coping. Therefore, we cannot rule out the possibility that sickness absence might influence the way employees cope with stressful situations. Although coping styles, as measured with the UCL, have proven to be relatively stable in time (Norberg et al. 2005), reversed causation cannot be ruled out. A second limitation could be the Cronbach's alphas of some subscales of the UCL (slightly below 0.70). However, the criterion of 0.70 is an arbitrary value that is not universally accepted as the minimum level of acceptability. As an example of the arbitrariness of this criterion, Nunnally (1967) mentioned that αs ranging from 0.50 to 0.60 would be acceptable, but in the second edition of that book he suggests that 0.70 is the minimally acceptable value, without further justification (Nunnally 1978). Moreover, the minimally required degree of reliability is a function of the research purpose; for individual-level, diagnostic research α should be much higher than for the basic, group-level research reported in our study (Peterson 1994). 
Hence we used a minimum threshold for coefficient α of 0.65 as was recently proposed by De Vellis (2003). In spite of these limitations, the results of the present study support the notion that problem-solving coping and reactive–passive strategies are inextricably connected with frequency, duration, length and onset of sickness absence. Especially ‘active problem-focusing’ decreases the chance of future sickness absence.
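As an illustration of the survival analysis behind Table 7, here is a minimal sketch using the lifelines package; the package choice is an assumption for illustration only (the paper does not state which software produced the Kaplan–Meier estimates), and all data below are hypothetical:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

rng = np.random.default_rng(11)
n = 3575
days = rng.integers(1, 366, n)                       # days to first new spell
observed = rng.integers(0, 2, n)                     # 0 = censored at year end
coping = rng.choice(["low", "medium", "high"], n)    # coping-style tertile

# Per-tertile Kaplan-Meier fit and median time to a new absence spell.
kmf = KaplanMeierFitter()
for level in ["low", "medium", "high"]:
    mask = coping == level
    kmf.fit(days[mask], event_observed=observed[mask], label=level)
    print(level, "median days to absence:", kmf.median_survival_time_)

# Log-rank test across the three tertiles, as reported in Table 7.
result = multivariate_logrank_test(days, coping, observed)
print("log-rank stat:", result.test_statistic, "p:", result.p_value)
```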
[ "coping", "sickness absence", "frequency", "length", "duration", "ucl" ]
[ "P", "P", "P", "P", "P", "P" ]
J_Autism_Dev_Disord-4-1-2226079
Brief Report: Normal Intestinal Permeability at Elevated Platelet Serotonin Levels in a Subgroup of Children with Pervasive Developmental Disorders in Curaçao (The Netherlands Antilles)
This study investigated the relationship between platelet (PLT) serotonin (5-HT) and intestinal permeability in children with pervasive developmental disorders (PDD). Differential sugar absorption and PLT 5-HT were determined in 23 children with PDD. PLT 5-HT (2.0–7.1 nmol/10^9 PLT) was elevated in 4/23 patients. None exhibited elevated intestinal permeability (lactulose/mannitol ratio: 0.008–0.035 mol/mol). PLT 5-HT did not correlate with intestinal permeability or GI tract complaints. PLT 5-HT correlated with 24 h urinary 5-hydroxyindoleacetic acid (5-HIAA; p = .034). Also urinary 5-HIAA and urinary 5-HT were interrelated (p = .005). A link between hyperserotonemia and increased intestinal permeability remained unsupported. Increased PLT 5-HT in PDD is likely to derive from increased PLT exposure to 5-HT. Longitudinal studies, showing the (in)consistency of abnormal intestinal permeability and PLT 5-HT, may resolve present discrepancies in the literature. Introduction Autism has been linked to gastrointestinal (GI) disturbances (White, 2003). It is, however, questionable whether GI anomalies in children with autism are specific (Erickson et al., 2005). An increase of chronic diarrhea, constipation, abdominal bloating and food regurgitation has been found in some studies, but could not be confirmed in more recent studies (reviewed by Erickson et al., 2005). Increased GI permeability, as established through the differential sugar absorption test (SAT), was demonstrated in 9/21 (43%) (D’Eufemia et al., 1996) and 19/26 (76%) (Horvath & Perman, 2002) of patients diagnosed with autism. The SAT measures the integrity of the intestine through the ingestion of two indigestible saccharides that, after GI uptake, are fully excreted in urine. One of these (usually lactulose) passes the intestinal wall through paracellular transport (‘leakage’), while the other (usually mannitol) passes by paracellular and transcellular transport. The urinary lactulose/mannitol ratio is used as a measure of intestinal integrity and permeability (van Elburg et al., 1995). In addition, a recent study found a high prevalence of congenital GI anomalies (adjusted odds ratio 5.1, 95% confidence interval 1.8–14.1), notably pyloric stenosis, in autism, which may be linked to the high rate of GI dysfunction reported by the children's parents (Wier, Yoshida, Odouli, Grether, & Croen, 2006). The implication of the GI tract in autistic pathophysiology warrants more detailed investigation of the gut-brain axis (Erickson et al., 2005). Especially the role of serotonin (5-hydroxytryptamine; 5-HT) as a messenger within this axis (Gershon, 2005) deserves attention. Many different aspects of the 5-HT system in autism have already been studied (Burgess, Sweeten, McMahon, & Fujinami, 2006; Croonenberghs, Verkerk, Scharpe, Deboutte, & Maes, 2005; Janusonis, 2005; Mulder et al., 2004). A recent report (Mulder et al., 2004) on platelet (PLT) 5-HT in PDD showed PLT hyperserotonemia in approximately 36% of patients with autism and in 58% of patients with PDD not otherwise specified (NOS). Using mixture-modeling analysis, Mulder et al. (2004) derived an empirical cut-off value that enabled dichotomization of patients with PDD into normo- and hyperserotonemic groups. Extensive behavioral assessments did not, however, show significant correlates of PLT 5-HT or hyperserotonemic status. 
A common (developmental) factor, causing both an autistic brain and, years after birth, deregulated 5-HT release from the GI tract, may be involved in the etiology of PDD (Janusonis, 2005). The primary site of the hyperserotonemia in autism is likely to be located in the GI tract. Serotonin is a biogenic amine that derives from the essential amino acid tryptophan (Tryp) by hydroxylation and subsequent decarboxylation. The GI tract contains about 80% of bodily 5-HT, which is unevenly distributed among the enterochromaffin cells (90–95%) and neurons (5–10%) (Houghton, Atkinson, Whitaker, Whorwell, & Rimmer, 2003). The main functions of 5-HT are in smooth muscle contraction, blood pressure regulation, and peripheral and central neurotransmission. Serotonin localized in the basolateral stores of enterochromaffin tissue is released upon neuronal, chemical or mechanical stimulation. Several 5-HT receptors control GI motility, sensation and secretion (Gershon, 2005). Following its release, 5-HT is removed from the interstitial space by 5-HT selective reuptake transporters (Gershon, 2005). However, part of the 5-HT enters the portal blood and systemic circulation, where it is either rapidly taken up and accumulated by PLT, or metabolized by the liver, lung and kidneys into its major metabolite 5-hydroxyindoleacetic acid (5-HIAA) (Houghton et al., 2003). Platelets store and transport the majority (99%) of circulating 5-HT (Ortiz, Artigas, & Gelpi, 1988). Elevated PLT 5-HT levels observed in subgroups of patients with PDD may be related to increased GI motility. This notion is supported by higher PLT 5-HT in patients with diarrhea-predominant irritable bowel syndrome (d-IBS), as compared with healthy controls (Houghton et al., 2003), although this was not consistently found (Atkinson, Lockhart, Whorwell, Keevil, & Houghton, 2006). Patients with d-IBS have augmented GI motility, which is likely to cause increased exposure of their circulating PLT to 5-HT (Atkinson et al., 2006; Gershon, 2005; Houghton et al., 2003). Increased PLT 5-HT is also observed in patients with carcinoid tumors (Kema et al., 2001). Carcinoid tumors derive from enterochromaffin cells and are characterized by high 5-HT production, with diarrhea as a frequent symptom (Modlin, Kidd, Latich, Zikusoka, & Shapiro, 2005). Consequently, measurement of PLT 5-HT is used as a sensitive marker for the early diagnosis and the subsequent follow-up of patients with carcinoid tumors (Kema et al., 2001). The aim of the present study was to investigate whether the subgroup of children with PDD having increased PLT 5-HT levels is the same as the one exhibiting increased intestinal permeability, as established by a SAT. Methods Patients Parents of patients with PDD (n = 31) according to the DSM-IV TR (American Psychiatric Association, 1994) were asked for the participation of their affected children via the local patient society and pediatricians. Oral and written informed consent were obtained. Information regarding comorbidity, medication, nutritional supplements and the prevalence of GI-related complaints was obtained from medical records and with the aid of assisted questionnaires. The study was performed in Curaçao (The Netherlands Antilles) in the summer of 2004. All collected urine and blood samples were transported on dry ice to the Netherlands for further analyses at the University Medical Center Groningen (UMCG). The study was approved by the Medical Ethical Committee of the St. Elisabeth Hospital in Curaçao. 
Sugar Absorption Test The SAT was performed according to van Elburg et al. (1995). Briefly, after an overnight fast the patients ingested a test fluid containing the sugars lactulose, mannitol and sucrose. All urine voidings during the following 5 h were collected and pooled. Urinary sugars were analyzed by gas chromatography as previously described (Jansen, Muskiet, Schierbeek, Berger, & van der Slik, 1986). A urinary lactulose/mannitol (L/M) ratio above 0.090 was considered indicative of abnormal GI integrity/increased intestinal permeability (van Elburg et al., 1995). Serotonin Assays For estimation of 5-HT turnover and of the exposure of PLT to 5-HT, we examined the 24 h urinary excretion of 5-HIAA and total 5-HT (Kema et al., 2001). For this, parents were asked to withhold all 5-HT-containing foods (e.g. banana, pineapple, kiwi, walnuts) from their child during collection and during the preceding 12 h. The volumes of the urine samples were measured before storage at −20°C. Urinary 5-HIAA and total 5-HT concentrations were determined as previously reported (Kema et al., 2001). Urinary 5-HIAA values were evaluated with the use of age-dependent reference values (American Association for Clinical Chemistry, 2005). Non-fasting venous blood (for serum antibodies) and EDTA-anticoagulated blood (all other assays) were collected from the children with PDD. EDTA-anticoagulated blood was placed on melting ice. Hematological indices were measured immediately after sampling. Within 1 h after collection, a 1:1 mixture of K2EDTA and Na2S2O5 was added to the PLT-rich plasma (PRP) to prevent oxidation of indoles. Plasma and serum were stored at −80°C. Simultaneous analysis of indoles [Tryp, 5-hydroxytryptophan (5-HTP), 5-HT and 5-HIAA] in PRP was performed as previously described (Kema et al., 2001). PLT 5-HT data were compared with both a local cut-off value of 5.4 nmol/10^9 PLT (Meijer, Kema, Volmer, Willemse, & de Vries, 2000) and an empiric cut-off value of 4.55 nmol/10^9 PLT (Mulder et al., 2004). The local cut-off value represents the 97.5th percentile of a reference group of healthy adults and is employed in our laboratory for the diagnosis of carcinoid tumors. The empirical cut-off value represents the bottom of the valley of the bimodal PLT 5-HT distribution exhibited by patients with PDD; this value allows optimal classification into normoserotonemic and hyperserotonemic patients. Platelet-rich-plasma Tryp data were evaluated with the use of age-dependent reference values (American Association for Clinical Chemistry, 2005). Exclusion of Celiac Disease Serum IgA anti-endomysium titers and HLA genotype were assessed to rule out celiac disease, which is an established cause of increased GI permeability (van Elburg, Uil, Mulder, & Heymans, 1993). Statistics All data were analyzed using the Statistical Product and Service Solutions package, version 11.5 (SPSS Inc., Chicago). Data were tested for normality using the Shapiro–Wilk W test. Group comparisons (normo- vs. hyperserotonemic) were performed with the non-parametric Mann–Whitney U test. Non-parametric Spearman tests were used to evaluate correlations at α = 0.05, to minimize type-II errors. Results Patients We enrolled the first 24 (77%) of the 31 patients with PDD whose parents agreed to participate. Patient characteristics are reported in Table 1. The parents reported 13/23 (57%) of their affected children to have one or more GI symptoms. 
Table 1 Characteristics of patients with pervasive developmental disorders in Curaçao tested for platelet serotonin and intestinal permeability
Gender (male/female): 18 (75%) / 6 (25%)
Age: 9.9 (±3.9) years
DSM-IV TR diagnoses: 299.00 (autistic disorder) 8 (33%); 299.80 (PDD-NOS) 16 (67%)
Ethnicity: Caucasian 8 (33%); African–American 13 (54%); other 3 (13%)
Comorbidity: epilepsy 5 (22%); allergy 2 (9%); asthma 1 (4%); intestinal yeast infection 1 (4%)
Medication for comorbidity: 9 (38%)
Nutritional supplements (vitamins/ω3-oils): 9 (38%) / 2 (4%)
Diet (gluten and casein free) (a): 2 (9%)
Physical complaints related to the GI tract (a): 13 (57%), comprising nausea 0 (0%); vomiting 1 (4%); diarrhea 4 (17%); constipation 4 (17%); bloating and gaseousness 8 (35%)
(a) n = 23. PDD-NOS, pervasive developmental disorder not otherwise specified. Data represent number (percentage) or mean (±SD) for 24 patients, unless otherwise specified
Because of cleanliness problems, no urine samples were obtained from one patient, while from another we received urine for the SAT only. Blood sampling from yet another patient was problematic. Consequently, our study comprised urine for the SAT from 23/24 patients, 24 h urine from 22/24 patients and blood samples from 23/24 patients. Table 2 shows the indices related to 5-HT turnover and intestinal permeability of these patients.
Table 2 Indices of serotonin (5-HT) turnover and intestinal permeability in patients with pervasive developmental disorders in Curaçao; entries give the observed values and the number (percentage) of patients below (<RV) or at/above (≥RV) the reference values
PLT 5-HT (nmol/10^9 PLT) (a): 3.4 (2.0–7.1); ≥ empiric cut-off of 4.55 (b): 6 (26%); ≥ local cut-off of 5.4 (c): 4 (17%)
Tryp (μmol/l) (a): 50.0 (±10.9); RV 2–18 y: 0–79 (d): 0 (0%) <RV, 0 (0%) ≥RV; RV 2 y: 35–73, 6 y: 37–76, 16 y: 54–93 (d): 3 (13%) <RV, 0 (0%) ≥RV
5-HIAA (μmol/24 h) (e): 8.4 (3.9–36.4); RV 3–8 y: 2.1–29.3, 9–12 y: 5.2–32.9, 13–17 y: 4.7–34.0, >18 y: 5.2–36.6 (d): 1 (5%) <RV, 1 (5%) ≥RV
5-HT (nmol/24 h) (e): 305 (±92)
Lactulose (mmol/5 h) (f): 0.024 (±0.11)
Mannitol (mmol/5 h) (f): 1.32 (±0.59)
L/M ratio (mol/mol) (f): 0.019 (±0.007); RV <0.090 (g): 23 (100%) <RV, 0 (0%) ≥RV
(a) Platelet-rich plasma, n = 23. (b) Empiric cut-off derived from Mulder et al. (2004). (c) Upper reference value derived from apparently healthy adults (Meijer et al., 2000). (d) From American Association for Clinical Chemistry (2005). (e) 24 h urine, n = 22. (f) 5 h urine from the sugar absorption test, n = 23. (g) From van Elburg et al. (1995). Data represent number (percentage), median (range) or mean (±SD). RV, reference value; PLT, platelet; Tryp, tryptophan; 5-HIAA, 5-hydroxyindoleacetic acid; L/M, lactulose/mannitol
Sugar Absorption Test Intestinal permeability, reflected by the L/M ratio [median (range): 0.017 mol/mol (0.008–0.035)], indicated that none of the patients had increased intestinal permeability (i.e. an L/M ratio ≥ 0.090). Serotonin Assays PLT 5-HT [median (range): 3.4 (2.0–7.1) nmol/10^9 PLT] was elevated in 4 patients (range: 5.7–7.1 nmol/10^9 PLT) when compared with the local cut-off value of 5.4 nmol/10^9 PLT (Meijer et al., 2000) and in 6 patients (range: 4.6–7.1 nmol/10^9 PLT) when compared with the empirical cut-off value of 4.55 nmol/10^9 PLT (Mulder et al., 2004). The sole patient exhibiting detectable levels of plasma 5-HIAA (26.0 μmol/l) also exhibited increased PLT 5-HT (6.4 nmol/10^9 PLT). However, this patient did not exhibit increased urinary total 5-HT or 5-HIAA excretion. Urinary excretion of 5-HIAA was within the normal range. Exclusion of Celiac Disease None of the patients was positive for serum IgA anti-endomysium antibodies, and 8/23 (35%) patients had a genotype positive for either HLA-DQ2 (n = 5) or HLA-DQ8 (n = 3). 
Based on the results of serology, none of the patients seemed to have celiac disease, although we did not perform further tests to exclude this. Statistics There was no correlation between PLT 5-HT and the L/M ratio (p = .663; r = −.098). Patients exhibiting GI tract complaints did not have higher PLT 5-HT, higher L/M ratios, or higher 24 h urinary excretions of 5-HIAA and total 5-HT (p > .4). Platelet 5-HT correlated with the 24 h urinary 5-HIAA excretion (p = .034; r = .465). Also, the 24 h excretion rates of 5-HIAA and total 5-HT showed a positive correlation (p = .005; r = .580). Discussion In this study of children with PDD we did not observe a relation between PLT 5-HT and intestinal permeability, as derived from the urinary L/M ratio. The number of children with PDD exhibiting increased PLT 5-HT was lower than in reports by others. For instance, the recent study of Mulder et al. (2004) showed 23/81 (28%) of Dutch children with PDD to exhibit increased PLT 5-HT, using the same analytical method and a cut-off value of 5.4 nmol/10^9 PLT. Also our data on intestinal permeability contrast with previous reports (D’Eufemia et al., 1996; Horvath & Perman, 2002) showing that 43–76% of children with autism have increased intestinal permeability, as established by a SAT. A weakness of the current study is its small size and the lack of a local age- and gender-matched control group. We have, on the other hand, no indications for deviant reference values for PLT 5-HT or L/M ratios in Curaçao, as compared with The Netherlands. Age and gender do not appear to affect PLT 5-HT (Mulder et al., 2004), but it must be noted that PLT 5-HT reaches its highest levels during childhood and gradually decreases during adulthood (Flachaire et al., 1990). Dependent on the cut-off values employed, we nevertheless found 4 and 6 patients with increased PLT 5-HT. None of these patients had abnormal intestinal permeability or L/M ratios residing in the upper range of normality. However, the positive correlation between PLT 5-HT levels and 24 h urinary 5-HIAA, and also the relation between urinary 5-HIAA and urinary 5-HT, suggest that, also in PDD, exposure of PLT to 5-HT determines PLT 5-HT and its consistently found increase in a subgroup. It is possible that increased PLT 5-HT and GI permeability are not consistent features of children with PDD over time. Long-term, e.g. monthly, monitoring of a well-defined patient and control group may shed more light on this potential source of variance as a cause of the conflicting results found by several investigators. Differences in the activity of the 5-HT transporter based on genetic polymorphisms are an unlikely explanation, since these seem to have minor effects, if any, on PLT 5-HT levels (Mulder, 2006). In conclusion, the finding of a subgroup of children with PDD exhibiting hyperserotonemia was replicated. None of the children exhibited increased intestinal permeability, while PLT 5-HT was unrelated to both intestinal permeability and GI symptoms. Additional studies are needed to elucidate the etiology of increased PLT 5-HT in PDD and to establish its relation with intestinal pathology, if any.
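To make the study's two key read-outs concrete, here is a small sketch (hypothetical arrays, not the study data) of the L/M permeability classification and the non-parametric correlation used above; scipy is assumed:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-patient values, in the units used in the paper.
lactulose = np.array([0.020, 0.031, 0.018, 0.025])   # mmol per 5 h urine
mannitol = np.array([1.10, 1.45, 0.95, 1.60])        # mmol per 5 h urine
plt_5ht = np.array([3.1, 6.4, 2.8, 5.9])             # nmol per 10^9 platelets

# L/M ratio; the 0.090 cut-off is the one from van Elburg et al. (1995).
lm_ratio = lactulose / mannitol
increased_permeability = lm_ratio >= 0.090
print(lm_ratio.round(3), increased_permeability)

# Spearman (non-parametric) correlation between PLT 5-HT and the L/M ratio.
rho, p = spearmanr(plt_5ht, lm_ratio)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```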
[ "permeability", "platelets", "serotonin", "pervasive", "gastrointestinal", "child development disorders" ]
[ "P", "P", "P", "P", "P", "M" ]
Appl_Microbiol_Biotechnol-3-1-2043089
Assessment of technological options and economical feasibility for cyanophycin biopolymer and high-value amino acid production
Major transitions can be expected within the next few decades, aimed at the reduction of pollution and global warming and at energy saving measures. For these purposes, new sustainable biorefinery concepts will be needed to replace the traditional mineral oil-based synthesis of specialty and bulk chemicals. An important group of these chemicals are those that comprise N-functionalities. Many plant components contained in biomass residual or waste stream fractions carry these N-functionalities in proteins and free amino acids that can be used as starting materials for the synthesis of biopolymers and chemicals. This paper describes the economic and technological feasibility of cyanophycin production, by fermentation of the potato waste stream Protamylasse™ or directly in plants, and its subsequent conversion to a number of N-containing bulk chemicals. Introduction Plants have the ability to use the incident sunlight for the biosynthesis of a tremendous variety of compounds that may contain a number of functionalized atoms or groups. An important group of these functionalized compounds are proteins and especially the individual amino acids, which contain one or more nitrogen atoms. When starting from crude oil or naphtha, the incorporation of functionalities (e.g., −NH2) into derived bulk chemicals (such as 1,2-ethanediamine and 1,4-butanediamine) requires considerable amounts of energy and catalysts. However, some amino acids appear to be very suitable starting materials for highly functionalized bulk chemicals (Scott et al. 2007). The biorefinery concept is a rapidly emerging field of research and commercial activity aiming at the integral use of all components of agricultural crops. In addition to the main product, such as starch or oil, other side stream fractions, including protein, free amino acid, and fiber fractions, have high potential for valorization. An example of such a waste stream fraction is Protamylasse™, which remains after starch and protein extraction from potato; its possible application as a substrate for a microbial fermentation and production process for cyanophycin (Elbahloul et al. 2005a, b, 2006) will be described in detail in the current paper. Cyanophycin (multiarginyl-poly[l-aspartic acid]; CGP, cyanophycin granule peptide) is a non-ribosomal protein-like polymer which consists of equimolar amounts of aspartic acid and arginine, arranged as a poly-aspartic acid backbone to which arginine residues are linked at the β-carboxyl group of each aspartate via their α-amino groups. In nature, cyanophycin is produced by most, but not all, cyanobacteria as a temporary nitrogen reserve material during the transition of cells from the exponential phase to the stationary phase. The polymerization reaction is catalyzed by only one enzyme, which is referred to as cyanophycin synthetase (CphA). Because of the low polymer content and the slow growth of cyanobacteria, resulting in only low cell densities, cyanobacteria are not suitable for large-scale production of cyanophycin. Therefore, the cphA genes from a number of cyanobacteria have been expressed in several bacteria and, more recently, also in plants. Furthermore, the polymer isolated from recombinant strains contained lysine as an additional amino acid constituent. 
Now that cyanophycin can be produced in sufficient amounts by pilot-scale fermentations for studying its material properties, it appears of biotechnological interest, because purified cyanophycin can be chemically converted into a polymer with a reduced arginine content, which might be used like poly-aspartic acid as a biodegradable substitute for synthetic polyacrylate in various technical processes. In addition, cyanophycin might also be of interest for other applications once the hitherto unknown physical and material properties of this polymer are revealed. On the other hand, cyanophycin is a convenient source of its constituent amino acids, which may be regarded as nitrogen-functionalized precursor chemicals. In the current paper, conditions will be discussed for the technological and economic feasibility of cyanophycin production by microbial fermentation and of cyanophycin production directly in plants. The conditions for fermentative cyanophycin production will be based upon the use of cheap substrates derived from agricultural waste streams and the possible production of cyanophycin simultaneously with other fermentation products like ethanol. This aspect is denoted process integration. Biorefinery and its place in the production of chemicals The depletion of fossil feedstocks, increasing oil prices and the ecological problems associated with CO2 emissions are forcing the development of alternative resources for energy, transport fuels, and chemicals: the replacement of fossil resources with CO2-neutral biomass. Potentially, biomass may be used to replace fossil raw materials in several major applications: heat, electricity, transport fuels, chemicals, and other industrial uses. Each of these groups represents about 20% of the total fossil consumption in the industrialized countries (Oil Market Report of the International Energy Agency 2004). Large variations in the cost of these products at the wholesale level, based on their energy content, are evident (Table 1). When one considers the contribution to costs by the raw materials (expressed per GJ end product), large differences are also seen. Heat can be produced from coal for around 3 €/GJ owing to the use of inexpensive feedstocks with high conversion efficiency (about 100%), while the raw material costs for electricity are double (6 €/GJ) due to a conversion yield of about 50%. Most notable are the high raw material costs for chemicals. Here, expensive raw materials (oil) are used with low(er) conversion yields (Sanders et al. 2005, 2007).
Table 1 Different applications and contributions of biomass
Application | integral cost price (€/GJ end product) | raw material cost, fossil (€/GJ) | percentage of total energy in the Netherlands (3,000 PJ) consumed per application
Heat | 4 | 3 (coal) | ±20%
Electricity | 22 | 6 (coal) | ±20%
Transport fuel | 10 | 8 (oil) | ±20%
Average bulk chemicals | 75 | 30 (oil) | ±20%
Rest of industry | – | – | ±20%
To obtain a good net income for biomass, an effective biorefinery system is required for the separation of the harvested crop into fractions for use in (several of) these applications. These may be used directly as the desired product or undergo conversion by chemical, enzymatic, and/or microbial means to obtain other products. Biorefinery systems are well established for a number of crops. For example, soybeans are the raw materials for large biorefineries producing oil (for biofuels), proteins and valuable nutraceuticals. Less well explored is the use of biomass to make industrial chemicals. 
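As a back-of-the-envelope check on the raw material figures in Table 1, the cost per GJ of end product is the feedstock cost per GJ of feedstock energy divided by the conversion efficiency. A small illustrative sketch of that arithmetic, using the approximate efficiency values quoted in the text above (not additional data from the paper):

```python
# Raw material cost per GJ end product = feedstock cost (euro/GJ feedstock)
# divided by conversion efficiency; values approximate, taken from the text.
applications = {
    # application: (feedstock cost in euro/GJ, conversion efficiency)
    "heat from coal": (3.0, 1.00),         # ~100% efficient -> ~3 euro/GJ
    "electricity from coal": (3.0, 0.50),  # ~50% yield -> ~6 euro/GJ
}

for name, (cost, efficiency) in applications.items():
    print(f"{name}: {cost / efficiency:.1f} euro/GJ end product")
```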
Efforts to produce chemicals of constant quality and performance (such as lactic acid) have been made, but they have mainly focused on the use of carbohydrates as raw materials and on biotechnology for conversion. However, for effective biorefinery approaches, other biomass fractions should also be considered for the production of chemicals. It is anticipated that the substitution of petrochemical transportation fuels with biofuels will rise significantly in the coming years. This means that a rise in the production of biodiesel will lead to large volumes of glycerol as a residual stream, and indeed, some companies are already investigating the use of glycerol to produce chemicals. While the awareness of large volumes of glycerol from biofuel production is apparent, one should not overlook other waste streams. Indeed, biofuel production will also generate an immense concomitant waste stream of protein. Sources of proteins and amino acids are not limited to those generated from biofuel production, but also arise from other industries such as potato starch production. For example, during AVEBE’s processing of potatoes for starch extraction, the main waste stream is Protamylasse™, which mainly contains sugars, organic acids, proteins, and free amino acids and currently has no major market use. Some of the amino acids present in such sources could be very suitable raw materials for preparing (highly) functionalized chemicals traditionally prepared by the petrochemical industry (Sanders et al. 2007). Generally, the conversion of crude oil products utilizes primary products (ethylene, etc.), and their conversion to materials or (functional) chemicals makes use of co-reagents such as ammonia and various process steps to introduce functionalities such as −NH2. Conversely, many products found in biomass, such as proteins and amino acids, already contain these functionalities. Therefore, it is attractive to exploit this in order to bypass the use, and preparation, of co-reagents as well as to eliminate various process steps by utilizing suitable biomass-based precursors for the production of chemicals. Thus, the production of chemicals from biomass takes advantage of the biomass structure in a more efficient way than the production of fuels or electricity alone and can potentially save more fossil energy than producing energy alone (Scott et al. 2007). When used in combination with environmentally sound production and processing techniques across the whole biomass production chain, i.e., from cultivation and harvest through (pre)treatment to conversion into products, the use of biomass is considered a sustainable alternative to conventional feedstocks, which is reflected by sound economic advantages in both raw material and investment costs. General introduction on NRPs and especially cyanophycin Cyanophycin (also referred to as CGP, cyanophycin granule polypeptide), which together with poly-γ-glutamic acid and poly-ɛ-lysine belongs to the family of bacterial poly-amino acids, was discovered in 1887 by Borzi (1887) during microscopic studies of cyanobacteria and was later found in all groups of cyanobacteria (Oppermann-Sanio and Steinbüchel 2002).
The structure of the cyanophycin molecule is related to that of poly(aspartic acid)s but, unlike synthetic poly-aspartic acid, it is a comb-like polymer with α-amino-α-carboxy-linked l-aspartic acid residues representing the poly(α-l-aspartic acid) backbone and l-arginine residues bound to the β-carboxylic groups of the aspartic acids (Simon and Weathers 1976; for a recent review, see Obst and Steinbüchel 2004; Fig. 1). Cyanophycin is synthesized by most, but not all, cyanobacteria as a temporary nitrogen reserve material during the transition of cells from the exponential phase to the stationary phase (Mackerras et al. 1990). At neutral pH and physiological ionic strength, cyanophycin is insoluble and deposited in the cytoplasm as membraneless granules (Lawry and Simon 1982). Fig. 1 Chemical structure of the cyanophycin monomer Cyanophycin isolated from cyanobacteria is highly polydisperse and shows a molecular weight range of 25–100 kDa, as estimated by sodium dodecylsulphate polyacrylamide gel electrophoresis, corresponding to a degree of polymerization of 90–400 (Simon 1971; Simon and Weathers 1976). Cyanophycin is a transiently accumulated storage compound that is synthesized under conditions of low temperature or low light intensity. Its accumulation can be artificially enhanced by the addition of chloramphenicol as an inhibitor of ribosomal protein biosynthesis (Simon 1973). Cyanophycin plays an important role in the conservation of nitrogen, carbon, and energy and, as indicated by its biosynthesis in the presence of chloramphenicol, is non-ribosomally synthesized by CphA. Cyanophycin accumulates in the cytoplasm of cyanobacteria as membraneless granules (Allen and Weathers 1980) in the early stationary growth phase (Mackerras et al. 1990; Liotenberg et al. 1996). When growth is resumed, for example due to a change in cultivation conditions, cyanophycin is reutilized by the cells (Mackerras et al. 1990). Krehenbrink et al. (2002) and Ziegler et al. (2002) showed that cyanophycin occurs even in heterotrophic bacteria like Acinetobacter sp. and Desulfitobacterium hafniense and thereby confirmed the wide distribution of this biopolymer and its function in nature as a general storage compound. Cyanophycin is of biotechnological interest because the purified polymer can be chemically converted into a polymer with reduced arginine content (Joentgen et al. 1998), which might be used like poly-aspartic acid as a biodegradable substitute for synthetic polyacrylate in various technical processes. In addition, cyanophycin might also be of interest for other applications once the unknown physical and material properties of this polymer are revealed. Because of the low polymer content and the slow growth of cyanobacteria, resulting in only low cell densities, cyanobacteria are not suitable for large-scale production of cyanophycin (Schwamborn 1998), and sufficient amounts of cyanophycin were hitherto not available. The polymerization reaction is catalyzed by only one enzyme, referred to as CphA (Ziegler et al. 1998). The cphA genes from Anabaena variabilis ATCC 29413, Anabaena sp. strain PCC7120, Synechocystis sp. strain PCC6803, Synechocystis sp. strain PCC6308, Synechococcus elongatus, Synechococcus sp. strain MA19, and others were cloned and expressed in Escherichia coli (Aboulmagd et al. 2000; Berg et al. 2000; Hai et al. 1999; Oppermann-Sanio et al. 1999; Ziegler et al. 1998).
More recently, heterologous expression of cphA was also demonstrated at a small scale in recombinant strains of Ralstonia eutropha, Corynebacterium glutamicum, and Pseudomonas putida (Aboulmagd et al. 2001). Whereas in cyanobacteria the molecular mass of the polymer strands ranged from 25 to 100 kDa (Simon 1976), the polymer from recombinant strains harboring cphA, as well as in vitro-synthesized polymer, exhibited a much lower mass range (25 to 30 kDa) and polydispersity. Furthermore, it was found that the polymer isolated from recombinant strains contained lysine as an additional amino acid constituent (Aboulmagd et al. 2001; Ziegler et al. 1998). Recently, the results of a detailed in silico analysis of the occurrence of enzymes involved in cyanophycin metabolism were published (Füser and Steinbüchel 2007). The earlier postulated instability of recombinant E. coli strains employed for cyanophycin production was also recently confirmed in both DH1 and DH5α. This instability may be caused by loss of the plasmid during fermentation. However, as cyanophycin production continues in cultures that rapidly appear to lose the ampicillin resistance employed for selection and plasmid maintenance, other explanations are also under consideration, such as competition for Arg and Asp between cyanophycin and the ampicillin resistance protein, and the theoretical possibility that the ampicillin resistance protein could be trapped in the cyanophycin granule or at least be made inaccessible to the ampicillin. Due to the extensive knowledge of its metabolism and the available genetic tools, E. coli is one of the most commonly used bacterial hosts for the production of recombinant proteins (Lee 1996). Several expression systems have been developed for technical-scale production of recombinant proteins in E. coli based on the regulated trp, lac, or lambda PL promoters (Hannig and Makrides 1998). The cultivation of recombinant E. coli strains harboring cphA from Synechocystis sp. strain PCC6803 at the 500-l scale for the production of cyanophycin has been described (Frey et al. 2002). As the previously described method for the purification of cyanophycin (Simon and Weathers 1976) is not applicable at a large scale, a simplified method for isolation of the polymer at the technical scale was elaborated. Biosynthesis of cyanophycin was extensively studied in the 1970s by Simon and coworkers (Simon 1971, 1976; Simon and Weathers 1976). Later, this led to the identification of cyanophycin synthetase enzymes and the encoding genes (cphA) in various organisms (Ziegler et al. 1998; Aboulmagd et al. 2000; Berg et al. 2000; Hai et al. 2002). Subsequently, the enzymes involved in the degradation of cyanophycin, namely the intracellular cyanophycinases of cyanobacteria (cphB) and extracellular depolymerases such as the hydrolase (cphE) and cyanophycinase (cphI) gene products, were identified (Obst et al. 2002, 2004; Obst and Steinbüchel 2004). Elbahloul et al. (2005a, b) found that inactivation of the cyanophycinase gene in Acinetobacter resulted in significantly less cyanophycin accumulation than in the wild type, presumably due to a shortage of cyanophycin primer molecules. In contrast, cyanophycin is highly resistant to hydrolytic cleavage by proteases such as trypsin, pronase, pepsin, carboxypeptidase B, carboxypeptidase C, and leucine aminopeptidase, and it is also resistant to arginases (Simon and Weathers 1976).
Because the cyanophycin synthetase genes (cphA) of many cyanobacteria, and recently also of other microorganisms, have been identified, cloned, and heterologously expressed in other bacteria (Aboulmagd et al. 2001), conferring the ability to produce comparably large amounts of cyanophycin (up to 50% of CDW) in a much shorter period of time (1–2 days) than cyanobacteria (about 4 weeks), and because cyanophycin production has been demonstrated at the 30- to 500-l scale (Aboulmagd et al. 2001; Frey et al. 2002; Voß and Steinbüchel 2006), interest in cyanophycin as a potential raw material has constantly increased over the last few years. Improved fermentation conditions and feeding regimes, and the possibility of producing cyanophycin with many genetically engineered bacteria of industrial relevance, like R. eutropha, C. glutamicum, or P. putida (Aboulmagd et al. 2001), in complex as well as defined media, make it likely that further improvement of cyanophycin production in bacteria will be achieved in future studies. These studies may include elementary mode analyses (Diniz et al. 2006) or more conventional approaches involving experimental design using fermenter arrays and principal component analyses. In conclusion, large-scale fermentation processes for cyanophycin production and downstream processing are available for a number of different microorganisms able to grow on different substrates, including the potato waste stream Protamylasse™; in addition, low cyanophycin yields have been reported in plants. Production and economic aspects of fermentative cyanophycin production An important contribution to sustainability can be made by the use of a considerable plant waste stream for the production of renewable, biodegradable, and biocompatible polymers and/or valuable chemicals that are now produced on a large scale from petroleum. Some of the polymer classes to be developed may be expected to replace existing mineral petroleum-based polymers as soon as competitive production prices can be obtained and/or supporting measures are taken to promote the use of renewable resources. On the other hand, completely novel types of biopolymers may be developed for completely novel applications. AVEBE, located in the northern part of The Netherlands, is the largest potato-starch-producing company in the world, involved in the extraction, processing, and sales of starch and starch-derived products. During the processing of potatoes for starch extraction, the main waste stream is Protamylasse™. Annually, AVEBE produces about 120,000 m3 of Protamylasse™ containing about 70,000 tons of dry matter, mainly consisting of sugars (14,000 tons), organic acids (13,300 tons), and proteins and free amino acids (18,000 tons). The amino acids arginine and aspartic acid amount to about 1,000 tons each, and lysine to about 700 tons. Currently, there is no proper outlet for this Protamylasse™ other than low-value land spreading (épandage, e.g., by Bos Agra-Service, NL, in which the salts present, such as potassium, are used as fertilizer), whereas a rough calculation shows that the total intrinsic gross value of the valuable components in the Protamylasse™ is about 45 million euros. A number of research projects are now in progress to add substantial value to the entire potato starch production chain (and thus to its economic feasibility). Protamylasse™ may be considered a model for other agricultural waste streams, such as grass juice and beet residue.
It is not certain whether the concentrations of all medium components in the Protamylasse™ will be optimal to sustain microbial growth and cyanophycin production, and it is anticipated that additional medium components will need to be identified and/or tailor-made production strains constructed. This will require a detailed analysis of the Protamylasse™ components before and after the cyanophycin production phase. To calculate the economic feasibility of cyanophycin production using Protamylasse™, several aspects need to be included, as follows:

- Necessary steps from potato starch and protein extraction, yielding Protamylasse™, to the final purified product cyanophycin (which is considered an intermediate product for derived polymer types and N-containing bulk chemicals): shipment to a fermentation plant; dilution to 5–6% (v/v) using tap water; removal of potato particles by filtration; disposal (or alternative use) of the particle fraction (60% DM); loading of step-up fermenters [to allow a microbial inoculum concentration of 10% (v/v) per step, e.g., 10→100→1,000→10,000→100,000 l]; sterilization; cooling; addition of ampicillin; inoculation; (batch) fermentation; addition of acid and/or base for pH control; cell harvesting and concentration; disposal or recycling of spent Protamylasse™; cell disruption (not for E. coli); cyanophycin extraction at pH 2; neutralization; cyanophycin crystallization, precipitation, purification, and storage.
- Yearly Protamylasse™ supply: 120,000 m3; 70,000 tons (60% DM).
- Protamylasse™ dry matter composition: amino acids (18,000 tons), sugars (14,000 tons), organic acids (13,300 tons), ash (22,200 tons). For further details on the composition, see Elbahloul et al. (2005a, b).
- Small-scale fermentation process data: Protamylasse™ concentration, 5–6% (v/v); fermenter volume, 25 l; strain, E. coli DH1 (pMa/c5-914::cphA); temperature, 37°C; maximal OD and cyanophycin yield reached after 15 h; optimal pH, 7.5–8.0; biomass yield, 5–10 g/l (CDW); cyanophycin content, 25% (w/w); cyanophycin composition, Asp 50%, Arg 45%, Lys 5%; E. coli occasionally (5–10%) incorporates lysine instead of arginine; poly(Asp–Arg) is non-soluble in water at neutral pH; poly(Asp–Arg–Lys) is soluble in water.

Assuming that most of the solid particles will be removed from the Protamylasse™ by filtration, a yearly amount of 48,000 m3 of Protamylasse™ liquid juice will become available, which is used at a 5% dilution during fermentation, thus providing a yearly amount of 960,000 m3 of diluted fermentation broth. This volume would be enough to run 9,600 × 100 m3 fermenter volumes. Further assuming a 1-week run time (including cleaning, sterilization, fermentation, and harvest), a park of, e.g., 185 × 100 m3 or, preferably, 20 × 1,000 m3 fermenter units (step-up units included) could be operated continuously. With the current E. coli biomass yield of 5 g/l (CDM) and a cyanophycin content of 25% (w/w DM), this would yield a yearly amount of 1,200 tons of purified cyanophycin. At an estimated market price of €1,000 per ton of cyanophycin, the Protamylasse™ juice fraction (40% v/v) would thus yield a yearly income of only 1.2 million euros. However, merely by increasing the amount of microbial biomass from the actual 5 g/l (CDM, E. coli) to a realistic value of 100 g/l (CDM, S. cerevisiae) with the same cyanophycin content of 25% (w/w DM), this yearly income could be raised to 24 million euros.
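The yield and income chain just described can be checked with a minimal sketch (Python). All figures (juice volume, dilution, biomass yields, cyanophycin content, market price) are taken from the text above; only the arithmetic is added here.

```python
# Back-of-the-envelope check of the fermentative cyanophycin economics above.

juice_m3_per_year = 48_000   # Protamylasse juice after particle filtration (m3/year)
dilution = 0.05              # juice used at 5% (v/v) in the fermentation broth

broth_m3 = juice_m3_per_year / dilution   # 960,000 m3 of diluted broth per year
broth_l = broth_m3 * 1_000

runs_100m3 = broth_m3 / 100               # 9,600 fills of a 100-m3 fermenter
fermenters = runs_100m3 / 52              # ~185 units at one 1-week run each

def yearly_income(biomass_g_per_l, cgp_content=0.25, price_eur_per_ton=1_000):
    """Tons of cyanophycin per year and the resulting income in euros."""
    cgp_tons = broth_l * biomass_g_per_l * cgp_content / 1e6
    return cgp_tons, cgp_tons * price_eur_per_ton

print(fermenters)            # ~184.6 -> the "185 x 100 m3" park in the text
print(yearly_income(5))      # E. coli, 5 g/l CDM -> (1200.0 t, EUR 1.2 million)
print(yearly_income(100))    # yeast, 100 g/l CDM -> (24000.0 t, EUR 24 million)
```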
If, within the same fermentation run, ethanol could also be produced in addition to cyanophycin via so-called process integration, using semi-aerobic fermentation with S. cerevisiae (yielding about 48,000 m3 of ethanol from 960,000 m3 at 5% (v/v) and an additional yearly income of 27.4 million euros at $2.8 per US gallon, i.e., €0.57 per liter), this yearly income could be further raised to about 50 million euros. The Protamylasse™ particle fraction can, in principle, also be used for cyanophycin and ethanol production. Preliminary calculations using the model of Golden Grain Energy, LLC (http://sec.edgar-online.com/2004/06/14/0001104659-04-016859/Section7.asp) suggest that the break-even point could roughly be reached within about 3–5 years. Further process improvements may be obtainable and are necessary to make this process economically feasible. These figures provide estimates of the investments and costs that will be necessary to start a cyanophycin production fermentation facility. However, before fermentative cyanophycin production on an industrial scale can be started, a number of bottlenecks must be overcome. Table 2 lists the current bottlenecks and the proposed measures for optimal and economically feasible cyanophycin production.

Table 2 Economic and technological bottlenecks and proposed measures

Bottleneck: Investments, including costs for fermentation and downstream processing equipment.
Proposed measure(s): The calculation provided here suggests that these may be acceptable.

Bottleneck: Costs for the production of cyanophycin and cyanophycin-derived products and for downstream processing of biomass.
Proposed measure(s): Construction of a sufficiently productive microbial strain to convert or simply utilize constituents of plant waste streams like Protamylasse™ and to incorporate these compounds, presumably amino acids, into the cyanophycin polymer chain during cyanophycin biosynthesis.

Bottleneck: Phenotypic instability of the E. coli production strains used until now, DH1 and DH5α, containing plasmid pMa/c5-914::cphA6803.
Proposed measure(s): Construction of stable strains with integrated copies of the cyanophycin synthesis genes.

Bottleneck: Low biomass yields of the E. coli strains used.
Proposed measure(s): Since not all components present in the current source of Protamylasse™ may have the proper concentration for the current laboratory strain(s), optimization may require the addition of substrates other than Protamylasse™, for example other plant waste streams. Sufficient provision of amino acids like arginine should be ensured during the production phase.

Bottleneck: Optimization of microbial biomass formation.
Proposed measure(s): By using yeasts as alternative production organisms, biomass yields could be increased to 100 g/l CDM for S. cerevisiae (factor 20×) or 150 g/l CDM for Pichia pastoris (factor 30×), provided the same yields can be obtained in Protamylasse™ as in dedicated growth media.

Bottleneck: Sub-optimal fermentation processes.
Proposed measure(s): Fermentation technology and feeding regimes have to be developed for optimum amino acid utilization or biosynthesis from Protamylasse™ or other plant waste streams.

Bottleneck: Generation of a valuable side stream particle fraction of Protamylasse™.
Proposed measure(s): Alternative use of the side stream particle fraction of Protamylasse™, e.g., by using cyanophycin-producing filamentous fungi.

Bottleneck: Co-production with, e.g., ethanol.
Proposed measure(s): When using S. cerevisiae as the production organism and (semi-)anaerobic fermentation, both cyanophycin and ethanol could possibly be produced during the same run.

Bottleneck: Costs for cyanophycin extraction.
Proposed measure(s): Development of alternative cheap cyanophycin extraction methods using, e.g., hydro-cyclone equipment for the non-soluble fraction.

Bottleneck: Cost-efficient production of cyanophycin in plants.
Proposed measure(s): The transfer of the bacterial cyanophycin synthetase gene (cphA) into eukaryotic hosts, mostly plants, and its effective expression in suitable organs or cell compartments is a major step (see below).

Bottleneck: Efficacy of downstream processing.
Proposed measure(s): Downstream processing has to be adapted and optimized for biomass containing cyanophycin or cyanophycin derivatives, which will be either bacterial cells or eukaryotic (mostly plant) cells or tissues.

Bottleneck: Lack of insight into possible modifications of cyanophycin, their impact on cyanophycin properties, and market potential.
Proposed measure(s): The diverse possibilities to modify the cyanophycin molecule chemically or enzymatically have to be exhaustively explored to identify all potential key applications for cyanophycin-derived products and to find the most suitable products with regard to market potential and the possibility of their commercialization.

Bottleneck: Lack of knowledge concerning the properties of known cyanophycin synthetases and their genetic engineering.
Proposed measure(s): The possibility of modifying the active sites of the cyanophycin synthetases in order to change their substrate specificity and to allow the production of cyanophycin derivatives has to be determined.

Bottleneck: Insufficient insight into all possible applications of cyanophycin as a polymer or as a starting material for chemical syntheses.
Proposed measure(s): The exploitation of cyanophycins and cyanophycin-derived molecules as substitutes for well-established industrial products or as renewable raw materials has to be determined precisely.

Cyanophycin production in plants Transgenic plants can be utilized to produce renewable resources for industrial purposes in a CO2-neutral, environmentally acceptable, and competitive way. Poly-3-hydroxybutyrate (PHB) was the first plastic-like compound produced in plants (Poirier et al. 1992), followed by, e.g., poly-3-hydroxyalkanoate (PHA; Poirier 2002) and medium chain-length PHA in potato (Romano et al. 2003, 2005), demonstrating the feasibility of producing biopolymers in plants. Recently, it has been shown that it is also possible to produce cyanophycin in plants (Neumann et al. 2005). For this, the Thermosynechococcus elongatus BP-1 cyanophycin synthetase gene was expressed constitutively under the 35S promoter in tobacco and potato plants. Cyanophycin accumulated to approximately 1.14 and 0.24% of dry weight in the cytosol of tobacco and potato leaves, respectively. The size (35 kDa), amino acid composition (Asp/Arg/Lys = 1:1.05:0.1), and structure of the plant-produced polymer were similar to those of the polymer produced in transgenic E. coli expressing the same gene; however, the amount and molecular weight of the cyanophycin produced in plants were much lower than those observed in bacteria (up to 50% of dry weight and 125 kDa in bacteria). These experiments have provided proof of concept for the potential of producing cyanophycin in plants. Production of the cyanophycin biopolymer in potato is of high interest to the potato starch industry. Production in this plant does not require any additional infrastructure. After processing of the potatoes, cyanophycin can be isolated from the Protamylasse™.
However, for commercial application, the efficiency of cyanophycin accumulation in potato has to be significantly improved. Neumann et al. (2005) already indicated that directing the cyanophycin synthetase into other compartments, such as the chloroplasts, could lead to increased accumulation of cyanophycin. However, the chloroplasts in cyanophycin-producing cells differ morphologically from wild-type chloroplasts: there are fewer and smaller grana stacks, and the growth rate is slower. One of the possible explanations for these properties is depletion of amino acid resources as a result of cyanophycin production (Neumann et al. 2005). The pioneering experiments by Neumann et al. (2005) have opened up a new field for the production of cyanophycin in agricultural crops and show that more research is needed before this approach can be introduced into agricultural production (Conrad 2005). Additional strategies Priming cyanophycin elongation In vitro studies have shown that cyanophycin synthetase works more efficiently with a (β-Asp/Arg)3/Arg primer. In planta production of such a primer may enhance cyanophycin biosynthesis. Cyanophycinases encoded by the cphB and cphE genes can degrade cyanophycin into aspartic acid–arginine dipeptides (Asp–Arg), which cannot be used as primers for cyanophycin biosynthesis. The cphI gene encodes a plant-type asparaginase able to hydrolyze β-Asp/Arg bonds and may thus be responsible for the last step in cyanophycin degradation. Bacterial studies have indicated that cphI expression contributes to a higher cyanophycin level. It might be possible to use a poly-Asp backbone as a primer for cyanophycin biosynthesis. This peptide can be produced by ribosomal protein biosynthesis. The gene should be placed under the control of a low-level promoter to prevent the production of too many primer peptides, and thus the production of many low molecular weight polymers. Optimization of amino acid biosynthesis It has been shown that cyanophycin synthetase uses arginine and aspartic acid as its major amino acid substrates but that it can also incorporate lysine into the cyanophycin polymer (Berg et al. 2000). It is unclear how this affects the properties of the polymer. Based on the chemical composition of Protamylasse™ and the affinity of the enzyme for arginine and lysine, it can be estimated that lysine accounts for 1.5% of the total cyanophycin. To reduce this amount, three strategies are possible, i.e., improving the biosynthesis of arginine, reducing the level of lysine, and/or transforming lysine into arginine. It is possible that the availability of the substrates (Asp and Arg) in plants is limiting or off-balance. Therefore, it is important to identify the organs that have the highest concentrations of available substrates and to investigate whether the substrate supply can be enhanced by the introduction of genes involved in substrate production. Comparison of economics of cyanophycin production by fermentation or in plants As for some other commodities, depending on production price, market volume, and final product price (Fig. 2), fermentation is the preferred production method for cyanophycin specialty product applications (roughly below 20,000 tons annually), whereas for bulk quantities of cyanophycin (above 20,000 tons), production directly in plants is preferred.
Therefore, for the production of cyanophycin-derived bulk quantities of nitrogen-containing chemicals, plants are considered the best production organisms, whereas for specialty polymers, fermentation may be the preferred production technology. Fig. 2 Cyanophycin production in planta or by fermentation. Gray square: raw material costs, filled square: fermentation costs, open square: recovery and purification costs Assume that a typical fermentative production of a bulk product, such as lysine, citric acid, or glutamic acid, costs about €1,500 per ton and that these costs consist of €500 for the raw materials, €500 for the fermentation process, and €500 for recovery and purification. The advantage of producing in plants is that both the raw material costs and the fermentation costs can almost be neglected. On the other hand, recovery costs could be much higher. For the sake of the argument, it is assumed that recovery costs for cyanophycin production in plants will be the same as in the case of fermentative production, i.e., €500 per ton. In the case of a fermentation process, a typical production volume for a company would be in the order of magnitude of 100,000 tons/year. The turnover would then be 150 million euros per year (i.e., €1,500 per ton × 100,000 tons/year). In the case of production in plants, as raw material and fermentation costs are negligible, the cost savings would be 100 million euros per year. This would be the maximum advantage, as this calculation does not include any costs for the production of the crop, nor any additional costs for biorefining of the crop and treatment of side products. A similar reasoning would yield the maximum advantage for a product with a typical company production volume of 20,000 tons per year, such as a medium-sized monomer. In this case, the cost savings per ton would be higher, but as the volume is much smaller, the maximum advantage could be around 70 million euros. For specialty products with a volume of 300 tons per year (enzymes), the maximum advantage is estimated to be in the order of 5–10 million euros. For pharmaceutical production with a volume of 10 tons per year, the advantage would be around 0.5–1.5 million euros, as the volume would be small and the production costs would mainly be ascribed to recovery. Other plant side streams In addition to the possible exploitation of Protamylasse™, a large number of other plant side streams may be used for cyanophycin production, including grass juice, which remains after protein extraction, or beet or cane molasses remaining after sugar extraction. It is foreseen that with the production of biodiesel and bioethanol, large volumes of side streams will become available, all containing major protein quantities. Such streams will include dried distillers grain and solubles (DDGS) from corn and wheat, and press cake from palm oil and rape seed oil production. These streams all have in common a very low cost price, which implies that, when they are used as a microbial growth medium, a major contribution to the fermentation process costs (about 30%) can be eliminated. However, it is important to note that no waste stream under consideration has exactly the same chemical or elemental composition as the production organism and its product(s), including, e.g., CO2. This implies that in each fermentation process to be developed using plant waste streams as substrates, limiting components should be supplemented.
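Returning to the fermentation-versus-plant cost comparison above, it can be summarized in a minimal sketch (Python). The €500/€500/€500 cost split and the 100,000 tons/year volume are the assumptions stated in the text; for the smaller, higher-priced products the per-ton savings differ, as the text notes, so only the bulk case is computed here.

```python
# Maximum advantage of in-planta production, using the cost split assumed above.

RAW, FERMENTATION, RECOVERY = 500, 500, 500     # EUR per ton
PRICE = RAW + FERMENTATION + RECOVERY           # EUR 1,500 per ton

def max_advantage(volume_tons_per_year: int) -> int:
    """Savings if raw material and fermentation costs are taken as negligible
    in plants, while recovery stays at EUR 500/ton."""
    return (RAW + FERMENTATION) * volume_tons_per_year

print(PRICE * 100_000)         # turnover of a typical bulk fermentation: EUR 150 million
print(max_advantage(100_000))  # bulk product: EUR 100 million per year, as in the text
```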
For the identification of such limiting components, fermentation test runs should be performed, preferably supported by statistical analyses such as elementary mode analysis (Diniz et al. 2006). Only after performing such exercises can reliable estimates of the integral project costs be obtained. Cyanophycin-derived bulk chemicals Cyanophycin can be hydrolyzed to its constituent amino acids, aspartic acid and arginine. These amino acids may be utilized directly in food and pharmaceutical applications. However, based on the chemical structure of these amino acids and the presence of functionalized nitrogen-containing groups, their conversion to a number of industrial chemicals can be anticipated, including the following:

- Arginine may be converted to 1,4-butanediamine. 1,4-Diaminobutane, derived from petrochemistry, is currently used as a co-monomer in the production of nylon-4,6. The production volume is not known but is estimated to be in the range of tens of thousands of tons per year, with a value of >€1,600 per ton.
- Conversion of aspartic acid to acrylonitrile is also envisaged. Using current petrochemical technology, the current worldwide market volume of acrylonitrile is 2.7–5 million tons per year, at a price of about €800–1,000 per ton.

Other chemicals that could be obtained from cyanophycin but are currently prepared from fossil resources include, e.g., 1,4-butanediol and urea. The production of cyanophycin by plants will drastically reduce its cost price, potentially to below €1,000 per ton. This will enable the production of functionalized bulk chemicals such as 1,4-diaminobutane and possibly also acrylonitrile. The transition from mineral oil-based to plant-based precursor production has a considerable impact. The use of ammonia for the incorporation of nitrogen into chemicals is very important but also very energy-intensive. Therefore, if the incorporation of nitrogen can be realized in systems based on plant (rest) streams in the form of protein or amino acid precursors, this will yield considerable energy savings. Cyanophycin-based biopolymers Poly-aspartic acid is derived from cyanophycin after the hydrolytic removal of arginine. This polymer has properties that are very similar to those of poly-acrylic acid. The cost price of this polymer can be set at €1,000/ton, in line with the cost calculations for arginine above. As the volumes of these products will be similar, only 1,000 tons/year would be manufactured. Higher market prices might be obtained for special applications in food and/or pharmaceutical applications. This might change the cost structure of arginine in such a way that market volumes might double or triple. Cyanophycin as such might have applications as a polymer. Furthermore, derivatives obtained by enzymatic/genetic and/or chemical modifications might have valuable properties. Without thorough investigations, we cannot anticipate the value of these polymers. Utilization in this area can be expected 10 to 12 years after the beginning of the proposed research and development activities. Possible cyanophycin modifications, applications for bulk chemicals and for polymers By incorporating other amino acids, different types of polymers can be made. So far, several cyanophycins have been produced in recombinant strains of E. coli at up to 50% of dry weight. However, especially for health care, medical, and food packaging applications, E. coli may not be the best commercial production organism.
Therefore, the development of alternative food-grade production organisms is also one of the objectives of the current activities. One suitable candidate may be the baker's yeast Saccharomyces cerevisiae (and others; see above). In addition, being stable at a pH between 3 and 9, cyanophycin can also be hydrolyzed in concentrated volumes into its pure components, arginine and aspartic acid. This would make the whole process a novel biological extraction procedure for the selected amino acids. Outlook Given the anticipated cost development for fossil energy carriers and environmental regulations, the chemical industry is facing increasing financial pressure and is thus looking for possibilities to tap new resources as a basis for polymer production. Important considerations in this search are lowering energy costs and raw material prices and developing cheaper and more sustainable production processes. Unlike poly-γ-glutamic acid and poly-ɛ-lysine, cyanophycin has not been commercialized yet. Cyanophycin can be broken down into its individual amino acids, which can be used as building blocks in various industrial processes. Because of its homogeneous structure and composition, the cyanophycin polymer and its derivatives also appear to be good candidates as starting materials for the production of nitrogen-rich commodity products that are based on nitrogen-rich chemicals, such as nylons. For example, for poly(aspartic acid), which is the polymer backbone of cyanophycin, various applications have been developed, ranging from water-softening or detergent applications to applications in the paper, building material, petroleum, or leather industries, in cosmetics, as well as many dispersant applications. Cyanophycin can be chemically converted into a polymer with a reduced arginine content, which might be used like poly-aspartic acid as a biodegradable substitute for synthetic polyacrylate in various technical processes (Schwamborn 1998). Thus, cyanophycin may find applications in cyanophycin-derived bulk chemicals and in cyanophycin-based biopolymers. It can be expected that economic activities can be developed in areas such as the fermentation industry; biopolymer production, processing, modification, and product development (also for medical technology); the packaging industry; the food and feed supplementation industries; and, last but not least, state-of-the-art technology (which, in turn, will attract additional financial sources and economic activities). It should be emphasized, however, that the mentioned applications are still uncertain and are so far only potential applications. On the one hand, this development will lead to the substitution of chemicals that are now produced at the cost of fossil raw materials, such as oil. As oil may be depleted in about 50 years, and as there seems to be a correlation between the use of fossil raw materials and climate change, it is essential to develop alternatives. The anticipated alternatives can be produced by fermentation and, in principle, by plant production systems, thus giving a new economic and knowledge-intensive value to the fermentation industry and/or to agriculture. On the other hand, novel types of polymers will be developed that do not simply replace existing applications but will enter novel product markets. Elements of the contents of this paper are included in a dedicated patent application (Elbahloul et al. 2006).
[ "cyanophycin", "biorefinery", "bulk chemicals", "n-functionality", "protamylasse", "non-ribosomal", "plant waste, rest stream" ]
[ "P", "P", "P", "P", "P", "P", "R" ]
Knee_Surg_Sports_Traumatol_Arthrosc-3-1-2042026
Analysis of Oxford medial unicompartmental knee replacement using the minimally invasive technique in patients aged 60 and above: an independent prospective series
We present the outcome of an independent prospective series of phase-3 Oxford medial mobile-bearing unicompartmental knee replacement surgery. Eight surgeons performed the 154 procedures in a community-based hospital between 1998 and 2003 in patients aged 60 and above. Seventeen knees were revised: in 14 cases a total knee replacement was performed, and in 3 cases a component of the unicompartmental knee prosthesis was revised, resulting in a survival rate of 89% over this 2- to 7-year follow-up interval. This study shows that mobile-bearing unicompartmental knee replacement using a minimally invasive technique is a demanding procedure. The study emphasises the importance of routine in surgical management and of strict adherence to the indications and operation technique used in order to reduce outcome failure. Introduction Modifications over the past 15 years have improved unicompartmental knee replacement surgery, as indicated in recent reports on the procedure [1–4]. The designers [5] (the originators) of the Oxford unicompartmental knee prosthesis (Biomet, Warsaw, IN) reported in 1998 a 97.7% cumulative survival rate at 10 years. An independent series with a 15-year survival analysis claimed a 94% cumulative survival rate [6]. The outcome was dependent on proper patient selection, surgical technique, and implant design [4, 7], and the results have been attributed to improvements in these factors. The procedure is now performed through a short incision from the medial pole of the patella to the tibial tuberosity. Using this approach, there is little damage to the extensor mechanism, the patella is not dislocated, and the suprapatellar synovial pouch remains intact. As a result, patients recover more quickly. Patients achieve knee flexion, straight-leg raising, and independent stair-climbing three times as fast as after total knee replacement (TKR) and twice as fast as after open unicompartmental knee replacement surgery [8]. The minimally invasive procedure has been shown to be reliable and effective [9]. Because of the favourable published clinical results, surgeons at the Martini Hospital in Groningen, the Netherlands, began using the Oxford knee prosthesis in 1998. The goal of this independent prospective study in patients 60 years of age and above was to evaluate the clinical midterm results of the Oxford phase-3 unicompartmental knee replacement using the minimally invasive technique in a community hospital and to compare them with published series. Materials and methods Between December 1998 and 2003, 154 successive Oxford unicompartmental knee replacements were performed in patients 60 years of age and above (Table 1). Of these, 132 patients underwent unilateral surgery, 10 patients underwent bilateral surgery on separate occasions, and 1 patient underwent concomitant bilateral surgery in the same OR session. There were 86 women; the average patient age was 69.2 years (range 60–93 years). All patients gave informed consent before their inclusion in this prospective study. Five patients had secondary osteoarthritis because of previous trauma. The remaining patients had primary osteoarthritis.
Table 1 Oxford phase-3 unicompartmental knee replacement

Number of patients: 132
Number of knees: 154
Left/right knee (%): 53.8/46.2
Age (mean/range, in years): 69.2 (60–93)
Gender (M/W): 57 (40%)/86 (60%)
BMI: 30.7 ± 4.9
Follow-up range: 2–7 years

Standardised anteroposterior radiographs were obtained with the patient in a weight-bearing position (standing), and lateral radiographs were obtained with the patient in a non-weight-bearing position (the patient lying horizontally). The radiographs were examined for loosening or radiolucency around the femoral and tibial components, and the anatomical axis of the limb was measured. The radiographic criterion for no increased risk of loosening was a radiolucent line <2 mm thick [10]. The presence of osteoarthritic changes in the non-replaced compartment was graded according to the Ahlback classification of osteoarthritis (Table 2) [11]. These procedures were performed by eight senior staff surgeons over the study period. Mean preoperative range of motion was 122.9 ± 8.9° of flexion and −0.7 ± 4.5° of extension.

Table 2 The Ahlback radiological scoring system for estimating the severity of OA

Grade 0: Normal
Grade 1: Joint narrowing
Grade 2: Joint obliteration
Grade 3: Bone destruction <5 mm
Grade 4: Bone destruction >5 mm
Grade 5: Subluxation

The results (preoperative, intraoperative, and follow-ups at 3 months, 6 months, and 1 year) were prospectively recorded with a historical record, procedure record, Knee Society score, SF-36 questionnaire (short form consisting of 36 questions), and the Western Ontario McMaster (WOMAC) score. Knee Society score ratings of excellent (90–100 points) and good (80–89 points) indicated success. The preoperative scores of the patients are presented in Table 3.

Table 3 Scoring results of the non-revised patients (SD, standard deviation)

Knee Society score
  Knee score: preoperative 39.2 (SD 18.2); postoperative 89.4 (SD 14.0)
  Function: preoperative 55.8 (SD 14.3); postoperative 77.1 (SD 24.7)
  Total score: preoperative 47.6 (SD 12.3); postoperative 83.4 (SD 16.8)
WOMAC score
  Pain: preoperative 50.3 (SD 18.7); postoperative 78.6 (SD 21.5)
  Stiffness: preoperative 51.2 (SD 22.6); postoperative 71.2 (SD 20.8)
  Function: preoperative 50.6 (SD 20.7); postoperative 76.2 (SD 20.4)
SF-36 questionnaire
  Function: preoperative 35.7 (SD 17.6); postoperative 56.1 (SD 24.5)
  Physical: preoperative 28.2 (SD 37.2); postoperative 57.2 (SD 44.3)
  Pain: preoperative 32.7 (SD 19.2); postoperative 59.8 (SD 26.5)
  Health: preoperative 63.7 (SD 22.2); postoperative 61.4 (SD 21.7)
  Social function: preoperative 52.6 (SD 17.1); postoperative 64.5 (SD 17.6)
  Emotional: preoperative 64.5 (SD 44.6); postoperative 70.5 (SD 40.7)
  Mental health: preoperative 73.7 (SD 17.9); postoperative 75.1 (SD 18.8)

Preoperative weight-bearing radiographs showed that the knees had an average femorotibial alignment of 2.4° of valgus (range 8° of valgus to 3° of varus). Thirty-seven knees had grade-1 Ahlback osteoarthritis [11] in the lateral compartment on the preoperative radiographs, and one had grade-2 Ahlback osteoarthritis. The preoperative skyline view of the patellofemoral joint showed no bone loss with eburnation and longitudinal grooving in all cases. All medial compartment arthroplasties were performed using the minimally invasive technique and under tourniquet control. The discharge criteria were control of immediate postoperative pain and the ability to flex the operated knee to a minimum of 90° with no lack of extension.
All complications and revisions were reported, and a revision was defined as any surgical procedure resulting in the removal or exchange of any of the prosthetic components. Results At the time of follow-up, two patients who had no known revisions were lost to follow-up. The remaining 130 patients were available for follow-up. At the final follow-up, in June 2006, revision TKR had been performed in 14 knees and a prosthetic component had been exchanged in three knees. An overview of the revisions is given in Table 4.

Table 4 Revisions of Oxford phase-3 knee replacement surgery (incidence)

Revision of a component of UKA: 3
  Revision of the mobile bearing: 1
  Revision of the femoral component and the bearing: 1
  Revision of the tibial component and the bearing: 1
Conversion to a TKR: 14
Reason for revision to a TKA
  Inappropriate indication: 1
  Misalignment and loosening: 5
  Infection: 1
  Progression of osteoarthritis in the lateral compartment: 4
  Persisting anteromedial pain >1 year: 3

One bearing was replaced because of luxation after a hyperflexion trauma. A new bearing of the same size was inserted, and no recurrence of luxation was seen at follow-up. In another case of luxation of the bearing, the femoral component and the bearing were changed 9 months after the primary surgery. The fixation of the femoral component in this case was insufficient: the multiple small drill holes had not been made, and there was no cement in the large drill hole. With flexion, the loose femoral component moved distally, causing luxation of the bearing. The tibial component and bearing revision was performed 7 months after the primary surgery because of misalignment of this tibial component. With flexion, there was impingement of the bearing on the tibial component, causing a clicking sensation and rotation of the bearing. In one case, there was grade-2 Ahlback osteoarthritis [11] in the lateral compartment on the preoperative radiograph. This patient had no relief of preoperative pain, and the knee underwent TKR 18 months after the primary surgery. In five cases, loosening of the components occurred; misalignment of the components probably caused impingement of the bearing. One patient had a deep Staphylococcus aureus infection, and a two-stage procedure was performed, leading to a TKR. In four cases of revision, progression of osteoarthritis was seen in the lateral compartment, with reported pain on the lateral side. These patients had a mean postoperative anatomical axis (femorotibial alignment) of 18.6°. This overcorrection causes overloading of the lateral compartment with progression of arthritis in that compartment. Three patients with persisting anteromedial pain underwent revision. In two cases, no cause was found, and in both the pain persisted after TKR. In the third case, a synovial biopsy showed synovitis villonodularis pigmentosa, and after the TKR this patient was pain-free. Except for the two patients with persisting anteromedial pain, all patients with a conversion to TKA were pain-free. No special augmentations or revision prosthetic components were necessary in these procedures; there were no bone defects that required the use of particulate autograft or allograft, and a primary cruciate-retaining TKA was used in the revisions. Postoperative complications occurred after the primary unicompartmental knee replacements. One patient had a traumatic medial tibial plateau fracture 4 weeks postoperatively, which was treated conservatively.
Another patient developed hemarthrosis that required extended hospitalisation; this was resolved with conservative treatment. There was one deep infection, and no deep venous thrombosis was reported. At the time of the most recent follow-up, average flexion was 125.8 ± 13.8°, with two patients achieving <90° of flexion. The average flexion deformity/extension was 0.3 ± 2.2°. The postoperative scores of those patients who did not undergo revision (140 knees) at the latest follow-up are presented in Table 3. The Knee Society total score was 83.4. All three WOMAC scores improved. For the SF-36, the function, physical, and pain scores showed an improvement in outcome; the other scores remained approximately the same. The final follow-up radiographs showed an average anatomical axis (femorotibial alignment) of 8.8° of valgus (range 4°–22° of valgus). The knees were corrected by an average of 6.4° (range 2°–14°). This relative overcorrection places increased stress on the lateral compartment. Signs of osteoarthritis progression in the uninvolved tibiofemoral compartment on the radiograph at the last follow-up were noted in 43 knees (grade-1 Ahlback osteoarthritis in 39 knees and grade-2 Ahlback osteoarthritis in four knees). No grade-3 or grade-4 changes were noted. At the final radiographic evaluation, no component showed evidence of loosening. No knees had >2 mm of tibial cement-bone radiolucency. There were no radiolucent lines at the posterior aspect of the femoral components. Seventeen knees were revised, resulting in a survival rate of 89% over this 2- to 7-year follow-up interval. Discussion The purpose of this prospective study was to evaluate the midterm durability of Oxford unicompartmental knee replacement surgery in patients 60 years of age and older. We acknowledge that the present study has the limitations of a midterm follow-up. However, a longer follow-up for this phase-3 version with the minimally invasive technique is not possible, because the current version has been available only since 1998 [7]. Moreover, most technical failures occur within the first 2 years [12]. In this 2- to 7-year follow-up interval, 11% of the unicompartmental knee arthroplasties needed revision, corresponding to a survival rate of 89%. These results are considerably worse than those of the designer series [5] or the independent series [6]. The primary need for revision surgery could be attributed to indication and technical failures. Thirteen of the 17 revisions were probably related to human error; the remaining four comprised one case of hyperflexion trauma with luxation of the bearing, one case of deep infection, and two cases of unexplained persisting anteromedial pain. Misalignment of the components was the primary cause of technical failure. With the minimally invasive technique, the visual field is restricted, making mobile-bearing unicompartmental knee replacement surgery a demanding procedure. The introduction of the minimally invasive option makes the issues of surgical technique and its pitfalls topical again. For the remaining 113 patients (140 knees) who did not undergo revision, the Knee Society score, WOMAC, and SF-36 questionnaires showed an improvement in outcome. All three scores indicated less pain and improved function, as confirmed by an average flexion of 126° at the latest follow-up. The Knee Society total score of 83.4 indicates a successful outcome.
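As a trivial arithmetic check, a minimal sketch (Python) reproduces the crude survival rate and the Knee Society rating bands quoted in the Methods; all numbers are those reported in the text.

```python
# Crude (non-actuarial) survival: revised knees out of all knees implanted.
knees_total, knees_revised = 154, 17
print(f"{(knees_total - knees_revised) / knees_total:.1%}")   # 89.0%

def kss_rating(total_score: float) -> str:
    """Knee Society score bands used in this study (success = excellent or good)."""
    if total_score >= 90:
        return "excellent"
    if total_score >= 80:
        return "good"
    return "below success threshold"

print(kss_rating(83.4))   # 'good' -> consistent with the reported successful outcome
```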
Over the 7-year period of our study, eight senior surgeons performed the operation, with an average of <10 procedures a year per surgeon. All surgeons attended the instructional course organized by the designer group. There is no evidence for a learning curve in our study; rather, the outcome should be attributed to the low number of operations performed per surgeon. As a result of the relatively low survival rate in this study, the number of senior surgeons performing the procedure in this hospital has now been reduced to two. Conclusion Careful patient selection, surgeon experience, and proper instrumentation and surgical technique are important factors in mobile-bearing unicompartmental knee replacement surgery [13, 14]. For unicompartmental replacement surgery, long-term results are related to the number of procedures performed by the unit [14]. The surgeon should be well versed in the routine, indications, and technique of this procedure to minimise failure rates.
[ "unicompartmental", "knee", "replacement", "mobile bearing" ]
[ "P", "P", "P", "R" ]
Purinergic_Signal-3-4-2072925
New insights into purinergic receptor signaling in neuronal differentiation, neuroprotection, and brain disorders
Ionotropic P2X and metabotropic P2Y purinergic receptors are expressed in the central nervous system and participate in the synaptic process, particularly in association with acetylcholine, GABA, and glutamate neurotransmission. Upon activation, the P2 receptors promote the elevation of the free intracellular calcium concentration as their main signaling pathway. Purinergic signaling is present in early stages of embryogenesis and is involved in processes of cell proliferation, migration, and differentiation. The use of new techniques such as knockout animals, in vitro models of neuronal differentiation, antisense oligonucleotides to induce downregulation of purinergic receptor gene expression, and the development of selective inhibitors for purinergic receptor subtypes contributes to the comprehension of the role of purinergic signaling during neurogenesis. In this review, we shall discuss the participation of purinergic receptors in developmental processes and in brain physiology, including neuron-glia interactions, and in pathophysiology. Introduction During the last two decades, evidence for the participation of ATP as a neurotransmitter in neuronal signaling was collected by Drs. Surprenant [1] and Silinsky [2]. Purine-sensitive receptors were first classified as P1 G protein-coupled receptors, which are activated by adenosine, and P2 receptors, responding to stimulation by ATP [3]. Based on receptor cloning and studies of receptor-induced signal transduction, P2 receptors were divided into P2X receptors, as ATP-gated ion channels, and P2Y G protein-coupled receptors [4]. The expression of purinergic receptors has been identified during development and differentiation processes [5–10]. Nucleotides exert a synergistic effect on cell proliferation in association with growth factors, chemokines, or cytokines in early stages of development [11–13], by parallel activation of the MAP kinase pathway and/or by transactivation of growth factor receptors [14, 15]. The complete role of ATP action in developmental processes still needs to be elucidated. It is known that ATP activates purinergic receptors, resulting in many cases in increases of the intracellular free calcium concentration [Ca2+]i. Changes in [Ca2+]i are involved in several events of the differentiation and embryogenesis process [16, 17]. Spitzer et al. [18] showed that naturally occurring patterns of Ca2+ transients encode neuronal differentiation. Distinct frequency patterns of [Ca2+]i elevations were sufficient to promote neuronal differentiation, including physiological neurotransmitter receptor expression [19]. ATP and UTP are the main purinergic agonists activating P2X or P2Y receptors. These nucleotides can be rapidly degraded in the extracellular space by ectoenzymes to ADP or UDP, subsequently activating distinct P2Y receptors, or be finally degraded to adenosine, which is known to induce physiological responses via activation of P1 G protein-coupled receptors [20] (Fig. 1). Fig. 1 Purine-induced signaling pathways involve the activation of P1 adenosine and P2 purinergic receptors and purine hydrolysis by ectonucleotidases.
The scheme illustrates purinergic receptor activity present in glia-glia, neuron-glia, and neuron-neuron interactions during neurogenesis as well as in the metabolism of the adult brain In this review article, we shall discuss the roles of purinergic signaling in neurogenesis, such as cell cycle control during neural progenitor proliferation and differentiation, as well as in maintaining the physiology of neurons and glial cells, and the involvement of purinergic receptors in pathophysiology. In addition, we shall outline state-of-the-art approaches used in the investigation of P2 receptor function in physiological processes, such as the use of antisense oligonucleotides, the generation of knockout animals, and the identification of new purinergic receptor subtype-selective drugs. Study of purinergic receptor function during in vitro differentiation During the development of the mammalian nervous system, neural stem cells and their derivative progenitor cells generate neurons by asymmetric and symmetric divisions [21]. P2 receptors were shown to be among the first functionally active membrane receptors in chick embryo cells during gastrulation, where ATP caused rapid accumulation of inositol triphosphate and Ca2+ mobilization in a similar way as acetylcholine (ACh) did via activation of muscarinic acetylcholine receptors, whereas other endocrine-acting substances such as insulin and noradrenaline (NA) induced much weaker effects in terms of intracellular calcium signaling [22, 23]. The induction of transient fluctuations in [Ca2+]i, also denominated calcium wave signaling, allows for a coupling of spatial and temporal information. Thus, calcium waves have been proposed to play a role in the mapping of neuronal networks [24] and to modulate neurogenesis during embryonic cortical development [25]. Neurotransmitters are prominent candidates for transcellular signals that could influence the development of embryonic neurons, as they surround neural cells throughout brain development [26–29]. In addition, functional ligand-gated ion channel receptors have been identified in neural progenitor cells prior to the establishment of cortical and subcortical synapses [30, 31]. In this context, the extracellular signaling mechanisms controlling the various transition steps involved in adult neurogenesis are still poorly understood. One approach used to identify the function of P2 receptors during development and differentiation is the use of in vitro models of neuronal and glial differentiation, such as embryonic and adult neural progenitor cells (NPC), also known as neural stem cells (NSC), embryonic stem (ES) cells, and embryonal carcinoma (EC) cells. ES cells are obtained from the inner cell mass of the blastocyst. The differentiation of these cells closely resembles the in vivo process and, therefore, provides stable models for embryonic growth and development [32, 33]. ATP promotes cell proliferation acting through P2X3, P2X4, P2Y1, and P2Y2 receptors in murine ES cells [34]. Tissue-nonspecific alkaline phosphatase (TNAP) was also detected in these cells and used as a marker for their undifferentiated stage [35]. The neuronal differentiation of EC cells, which originate from irradiated embryo cells [36], also resembles early neuronal development in vivo. P19 mouse EC cells express stem cell-specific marker proteins, and their phenotypic changes at specific differentiation stages are similar to those of stem cells [37].
Recently, our laboratory [38] has determined gene and protein expression of P2 receptor subtypes throughout in vitro neuronal differentiation of P19 cells as well as in the undifferentiated cell stage, suggesting the participation of purinergic signaling in initiating and directing differentiation. Differential expression and activity of the P2Y1, P2Y2, P2Y4, and P2X2 subtypes and of P2X6 subunits were reported during neuronal maturation of P19 cells [38, 39]. As direct evidence for the participation of purinergic receptors in neuronal differentiation, the presence of the antagonists pyridoxalphosphate-6-azophenyl-2′,4′-disulfonic acid (PPADS), reactive blue 2, or suramin during differentiation of P19 neural progenitor cells (NPC) to P19 neurons resulted in reduced activity of cholinergic and glutamate NMDA receptors in differentiated P19 cells, pointing to the participation of P2Y1, P2Y2, and P2X2 receptors. Other in vitro neuronal and glial differentiation models used to understand purinergic signaling are neural stem cells or progenitor cells, which are isolated from the subventricular zone (SVZ) along the lateral ventricles (type B cells), from the subgranular region of the dentate gyrus of the hippocampus (residual radial glia), or even from the subcortical parenchyma of the cerebral cortex of the embryonic and adult brain [40–42]. These regions in the adult brain act as neural stem cell reservoirs. These cells are already advanced in their differentiation stage when compared to ES or EC cells. Since NSC and NPC are capable of differentiating into both functional neurons and glial cells, they possess, like ES cells, potential therapeutic applications in regeneration therapy following neuronal loss. These NPC differentiate into olfactory, cerebellar, and retinal neurons [40] in the presence of growth factors, neurotransmitters, and vasoactive peptides in vivo [43], and of growth factors such as epidermal growth factor (EGF), fibroblast growth factor 2 (FGF-2), and leukemia inhibitory factor (LIF) in vitro. When exposed to a high concentration of FGF-2 in suspension, proliferating NPC form three-dimensional cell aggregates termed neurospheres, which following induction of differentiation express neuronal marker proteins such as β-III-tubulin, microtubule-associated protein-2 (MAP-2), and synaptophysin [44], and express P2X3 and P2X7 receptors, which may contribute to early [Ca2+]i transients as prerequisites for further differentiation [41]. Shukla et al. [45] identified functional P2 receptors in adult mouse hippocampal progenitors in situ, and nucleoside triphosphate-hydrolyzing ectoenzyme (NTPDase) activity was detected in type B cells of the SVZ [46] and in hippocampal progenitor cells. In adult murine NPC of the SVZ, P2Y1 receptor activity mainly contributes to [Ca2+]i transients, with some participation of P2Y2 receptors. The presence of the specific P2Y1 receptor antagonist MRS 2179 resulted in diminished cell proliferation in neurospheres due to reductions of [Ca2+]i transients. Similar results were obtained with NPC from the SVZ of P2Y1 receptor knockout mice [47]. P2Y1 receptor-deficient mice are viable; however, they have deficits in platelet aggregation [48]. It is suggested that purinergic signaling acts through autocrine or paracrine mechanisms and that P2Y1 and P2Y2 receptors are important for NPC differentiation [47]. These models are useful tools to study the roles of P2 receptor signaling in early stages of development and differentiation. 
The importance of ATP release and purinergic signaling has been demonstrated not only in developmental progenitor cell expansion and neurogenesis, but also in persistent progenitor cells of the adult brain [49]. Expression of purinergic receptors during development of the central nervous system Purinergic signaling pathways are also involved in embryonic neurogenesis, in much the same way as already discussed for in vitro differentiation models. ATP mediates elevation of [Ca2+]i and proliferation of immortalized human stem cells from the embryonic telencephalon and of mouse embryonic neurospheres [50, 51]. Ca2+ waves through radial glial cells in slices of the embryonic rat ventricular zone are mediated by P2Y1 receptors. Disrupting Ca2+ waves between embryonic NPC reduced ventricular zone cell proliferation during the peak of embryonic neurogenesis [25]. ATP directly contributes to modulating network-driven giant depolarizing potentials in the rat hippocampus during early stages of postnatal development [52]. In the developing hippocampal system, a trophic role of ATP and the involvement of P2 receptor subtypes in shaping interneuronal connections during neuronal differentiation have been suggested [53]. Alterations of the regulation of embryonic growth by purinergic receptors might be involved in the onset of morphological malformations [54]. During rat postnatal development, ectonucleotidase activity in the cerebral cortex steadily increases, reaching maximum values at 21 days of age [55]. Several P2Y and P2X receptors were shown to be dynamically expressed in the pre- and postnatal central and peripheral nervous system [56–59]. ATP inhibited motor axon outgrowth during early embryonic neurogenesis, most likely through the P2X3 receptor, and it was speculated that P2X7 receptors might be involved in programmed cell death during embryogenesis [58]. Of all the studied P2X receptors, homomeric P2X2 receptors were the first to be expressed in the rat central nervous system (CNS), on embryonic day 14 (E14) [56]. On E14, heteromeric receptors were formed by P2X2/3 receptor subunits. P2X3 receptor immunoreactivity was detected in cranial motor neurons as early as on E11, when neurons exit the cell cycle and start axon outgrowth, as well as postnatally on days 7 and 14 (P7 and P14) [56, 60]. Moreover, the expression of P2X3-containing heteromeric receptors and other subunits was developmentally regulated in nucleus ambiguus motoneurons [61]. From E14 onwards, P2X7 receptors were also expressed in the embryonic brain. For instance, in primary cultures of human fetal astrocytes, basal levels of P2X7 receptor mRNA transcription and protein expression were detected [62]. Sperlágh et al. [63] have demonstrated that ATP regulates glutamate release via activation of P2X7 receptors. P2X7 receptor-induced excessive glutamate release alters Ca2+ homeostasis, subsequently resulting in activation of the apoptosis-related caspase cascade [64]. P2X receptor expression was downregulated in Purkinje cells and deep cerebellar nuclei at the rat postnatal stages P21 and P66, with the exception of P2X5 receptors, whose immunoreactivity in granular cells was increased [65]. Evidence was collected for the participation of P2X receptors in different developmental processes such as neurite outgrowth (involving P2X3 receptors), postnatal neurogenesis (related to P2X4 and P2X5 receptor expression), and cell death (possibly involving P2X7 receptors). However, P2X1 and P2X6 receptor subunits may not play a role in neuronal development [58]. 
Neocortical neurons from 2-week-old rats possess a quite elaborate purine-triggered signaling system which includes both P2Y and P2X receptor activation [66]. Weissman et al. [25] showed that [Ca2+]i waves and subsequent ATP release, with consequent P2Y1 receptor activation, accompanied radial glial cell-derived neurogenesis in cultured slices of the developing rat forebrain, as mentioned above. Moreover, the importance of calcium signaling for the differentiation of NPC has been studied [67, 68], and direct evidence for the participation of P2Y1 receptor-activated pathways in early development has been provided by Scemes et al. [69]. P2Y receptors (particularly the P2Y1 subtype) were widely expressed in the embryonic rat brain as early as on E11 [57]. There was a marked decrease in the concentration of mRNA coding for P2Y1 receptors and an upregulation of mRNA transcription coding for P2Y2 receptors in freshly isolated astrocytes of the developing rat hippocampus [57]. Functional interactions between neurons and glia: a physiological overview An increasing amount of evidence, initiated by the neuron-glia unit idea proposed by Hyden [70], indicates that glial cells, once regarded as mere supporting elements in the CNS, are now considered indispensable functional partners of neurons [71], both in physiological and pathological conditions. However, many questions remain unanswered: (1) how does glia detect and interact with neural function; (2) does neuron-glia signaling play a significant role in synaptic transmission and plasticity; and (3) how do glial cells communicate with other glial cells? Another important subject related to the interaction between glia and neurons emerges in neurogenesis. There is now general agreement that the neural stem cells of the adult mammalian nervous system possess many characteristics of astrocytes. The importance of glia in neuronal development was confirmed in a recent study showing that the number of GFAP (glial fibrillary acidic protein)-containing cells was reduced following transgenic targeting of the adult mouse subependymal and subgranular zones, resulting in an almost complete loss of neurogenesis [72, 73]. In addition to assisting the migration of neurons to their correct position and guiding neurite outgrowth to their final communication targets [74, 75], glial cells have become an essential key to understanding neuronal differentiation, by promoting initial stem cell proliferation and instructing undifferentiated cells to adopt a neuronal fate [76, 77]. In the mature brain, the proximity of astrocytes to neuronal synapses or to the blood-brain barrier makes these cells well suited to control water diffusion and ion concentrations in extracellular spaces [71, 78]. In particular, astrocytes regulate the homeostatic environment and neurotransmitter levels via a functional syncytium, in which gap junctions and specific membrane carriers play an important role [79–81]. In addition, glial cells produce and release a vast number of neurotrophic factors, including fibroblast growth factor, nerve growth factor, and transforming growth factor, which directly influence neuronal physiology and coordinate developmental processes [71, 82–85]. ATP release and degradation, connecting adenosinergic and purinergic systems As already mentioned, it is well documented that glial cells may directly alter neuronal activity by releasing neurotrophins and consequently modulating neurotransmitter release at the synapse [86, 87]. 
One of the main mechanisms connecting the neuron-glia system is believed to be mediated by the release of glutamate from glial cells [88, 89]. In this context, growing evidence indicates that purinergic receptor ligands are widely involved in cell-cell signaling mechanisms, acting as neurotransmitters or neuromodulators released by glial cells to control synaptic transmission in the CNS, as part of the multiple functions of astrocytes [22, 90, 91] (Fig. 1). ATP is an ideal molecule for cell signaling due to its intrinsic properties, such as its small size, high diffusion rate, instability and low concentration in the extracellular environment, and inability to cross the plasma membrane [92, 93]. These properties imply the presence of particular pathways for ATP release that could be associated with cellular excitation/response and cell-cell signaling [94, 95]. First, ATP may be stored in synaptic vesicles alone or together with other neurotransmitters and then released, as a classic synaptic mechanism in the peripheral or central nervous system [96, 97]. Second, a nonvesicular mechanism of ATP release can be observed through gap junction hemichannels, ATP-binding cassette proteins, P2X7 receptor pores in glial cells, and chloride channels [98–101]. Third, ATP can be released due to cytolysis or cell damage. While this is not a physiological mechanism, it takes place following biological trauma and contributes to pathological conditions [102]. Subsequent to these mechanisms, the metabolism of the released ATP is regulated by a vast number of different families of ectonucleotidases in the synaptic cleft, including the ectonucleoside triphosphate diphosphohydrolases (E-NTPDases) and the ectonucleotide pyrophosphatases/phosphodiesterases (E-NPPs), which catalyze the degradation of ATP to ADP or AMP. The degradation to adenosine is mediated by ecto-5′-nucleotidase (E-5′-NT) and alkaline phosphatase [91, 103] (Fig. 1). Consequently, the reaction products resulting from ATP hydrolysis may bind to P2 receptors, in the case of ADP, or to P1 receptors, in the case of adenosine [104]. The adenosinergic receptor ligand adenosine is recognized as an important regulator of cellular homeostasis in the CNS and may be involved in the prevention or induction of apoptosis [105]. The reduction of ectonucleotidase activity in certain pathological conditions provides additional evidence for the accumulation of ATP in the extracellular environment [20]. Thus, the complexity of communication between neural and nonneural cells is expanded by the interaction of purinergic receptors with a variety of other neurotransmitter systems. ATP-mediated neuron-glia signaling Studies in the purinergic field began to converge with glial research as it became more widely accepted that ATP is released from synaptic vesicles and is thus accessible to perisynaptic glial cells, allowing them to detect neuronal activity. In particular, glial cells are responsive to ATP, as all types of glia, such as astrocytes, oligodendrocytes, microglia, and Schwann cells, express purinergic receptors [91]. In Schwann cells and oligodendrocytes, ATP-mediated signaling predominantly occurs through P2Y receptors, which in turn trigger intracellular Ca2+ release [106, 107]. 
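To make the sequential hydrolysis chain described above more concrete, the following minimal simulation treats each ectonucleotidase step (ATP → ADP → AMP → adenosine) as a first-order reaction. The rate constants are arbitrary placeholders, not measured enzyme kinetics; the point is only the qualitative behavior, namely transient ADP and AMP peaks followed by adenosine accumulation.

```python
# Minimal kinetic sketch of the extracellular hydrolysis chain
# ATP -> ADP -> AMP -> adenosine, modeled as sequential first-order steps.
# k1..k3 are arbitrary placeholders, not measured ectonucleotidase kinetics.
def hydrolysis_cascade(atp0=100.0, k1=0.8, k2=0.5, k3=0.3,
                       dt=0.01, t_end=20.0):
    atp, adp, amp, ado = atp0, 0.0, 0.0, 0.0
    t, trace = 0.0, []
    while t <= t_end:
        trace.append((t, atp, adp, amp, ado))
        d_atp = -k1 * atp               # E-NTPDase / E-NPP step
        d_adp = k1 * atp - k2 * adp     # further NTPDase hydrolysis
        d_amp = k2 * adp - k3 * amp     # ecto-5'-nucleotidase step
        d_ado = k3 * amp                # adenosine accumulates (no uptake term)
        atp += d_atp * dt; adp += d_adp * dt
        amp += d_amp * dt; ado += d_ado * dt
        t += dt
    return trace

# ADP and AMP peak transiently, which is why ADP-sensitive P2Y and
# adenosine-sensitive P1 receptors are engaged with a delay after ATP release.
t, atp, adp, amp, ado = hydrolysis_cascade()[-1]
print(f"t={t:.1f}: ATP={atp:.2f} ADP={adp:.2f} AMP={amp:.2f} ADO={ado:.2f}")
```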
In astrocytes, by contrast, the function of P2X1–6 receptors remains unclear, although P2X-mediated currents can be detected in cultured astrocytes, and P2X7 receptors are widespread in these cells, with a possible contribution to pathological conditions [108, 109]. Glial cells express many types of neurotransmitter receptors and conventionally have been considered nonexcitable [110, 111]. However, a surprising observation was reported by Dani et al. [112]: synaptic transmission may propagate to glial cells as calcium waves, inducing membrane depolarization and regulating neurotransmitter release. These properties of glial cells suggest possible rapid communication between neurons and glia during synaptic transmission. This glial communication mechanism allows released ATP to act on adjacent astrocytes and neurons, thus supporting the propagation of Ca2+ waves in the glial syncytium [113]. For example, in neuronal-glial cocultures prepared from hippocampus, ATP secreted by astrocytes was shown to inhibit glutamatergic synapses through activation of P2Y receptors [114]. The glial communication mechanism based on Ca2+ wave propagation can be inhibited by P2 receptor blockers or by enzymes that rapidly hydrolyze extracellular ATP [115]. Stimulation of a single astrocyte in cocultures of rat forebrain astrocytes and associated neurons caused an elevation of [Ca2+]i and induced Ca2+ wave propagation, in the dorsal spinal cord through P2Y1 receptor activation [116] and glutamate release [117]. This finding provided a parallel mechanism of intercellular communication that could allow astrocytes to detect synaptic function, propagate the information through neighboring glial cells, and then influence synaptic function in a distant part of the nervous system. Purinergic receptor-mediated calcium signaling in glial cells plays important roles during CNS development. P2X1, P2X4, and P2X7 receptors were expressed in microglia at rat embryonic stage E16 [59]. Moreover, changes in P2X4 receptor expression in microglial cells during postnatal development of the rat cerebellum have been reported. P2X5 receptor immunoreactivity was also upregulated in microglia and granular cells. Both P1 and P2 receptors contribute to the modulation of oligodendrocyte progenitor (OP) development, since they have been shown to exert similar effects on OP proliferation and differentiation [118]. The majority of studies of ATP action have been concerned with the short-term P2 receptor signaling that occurs in neurotransmission and in secretion [119]. Furthermore, there is increasing evidence that purines and pyrimidines can have trophic roles in neuritogenesis [120, 121], regeneration [122], and proliferation [123]. However, some purines by themselves have limited trophic effects in a few cell types; they appear to be much more effective as neuritogenic agents when combined with other trophic factors, such as NGF. For instance, inosine and 5′-AMP alone do not elicit neurite extension in PC12 cells [124]. Heine et al. [53] demonstrated that P2 receptor activation induced fiber outgrowth in organotypic cocultures of rat hippocampus. Fiber outgrowth was inhibited in the presence of the purinergic antagonist PPADS, suggesting the involvement of P2 receptors. In another study, a synergistic interaction between bFGF and ATP on DNA synthesis was reported in primary cultures of rat cortical astrocytes. 
ATP and bFGF alone induced a twofold and a tenfold increase in [3H]thymidine incorporation into astrocytes, respectively, but when ATP and bFGF were added at the same time, a 50-fold increase in [3H]thymidine incorporation was observed [12]. Neuroprotection ATP can activate P2X7 receptors in astrocytes to release glutamate, GABA, and also ATP, which might regulate the excitability of neurons in certain pathological conditions [125]. It has been suggested that astrocytes can sense the severity of damage in the CNS by the amount of ATP released from damaged cells and that the extracellular ATP concentration and the corresponding subtype of activated astrocytic P2 receptor modulate the tumor necrosis factor-α (TNF-α)-mediated inflammatory response [126]. After mechanical brain injury, the administration of PPADS facilitated the recovery of pathologically changed electroencephalograms [127]. These results suggest that interference with ATP-induced excitatory responses could provide neuroprotection, with possible therapeutic consequences. Evidence for a neuroprotective role was also found for the adenosine A1 receptor in the hippocampus. This cerebral region is highly sensitive to hypoxia and ischemia. The study of the action of hypoxia on synaptic transmission in hippocampal slices has suggested that substances released during hypoxia, such as GABA, ACh, and even glutamate, may also play neuroprotective roles. However, the actions of these neurotransmitters become evident only when activation of P1 receptors is impaired, suggesting a critical role for this receptor during hypoxic events. These substances can operate in a redundant or even overprotective manner, substituting for some adenosine actions when the nucleoside is not operative [128]. Neuroimmune interactions Microglia, the immune cells of the CNS, can be activated by purines and pyrimidines to release inflammatory cytokines such as IL-1, IL-6, and TNF-α. However, hyperstimulation of the immune reaction in the brain may accelerate neuronal damage. The P2X7 receptor is considered to have a potentially pivotal role in the regulation of various inflammatory conditions. ATP selectively suppresses the synthesis of the inflammatory protein microglial response factor through calcium influx via P2X7 receptors in microglia [129], which also leads to enhancement of interferon-γ (IFN-γ)-induced type II nitric oxide synthase (NOS) activity [130, 131]. P2X7 receptor activity also participated in ATP-induced IL-1 release from macrophages and microglia that had been primed with substances such as bacterial endotoxin [132] and was shown to stimulate nuclear factor κB, TNF-α [133], the stress-activated protein kinase (SAPK)/JNK pathway [134], and the production of 2-arachidonoylglycerol, which is also involved in the induction of inflammation by microglial cells. P2Y rather than P2X7 receptors seem to have a major role in IL-6 production by microglial cells [135]. ATP also evoked the release of plasminogen [136] and IL-6 [135]. Stimulation of microglia by either ATP or BzATP revealed neurotoxic properties, and involvement of the P2X7 receptor in excitotoxic/necrotic and apoptotic degeneration has been reported [109]. Neurological disorders Epilepsy Several anti-epileptic agents reduce the ability of astrocytes to transmit Ca2+ waves, raising the possibility that blockade of ATP-induced [Ca2+]i transients in astrocytes by purinergic receptor antagonists could offer new treatments for epileptic disorders. 
Antiepileptic effects of adenosine are mostly due to the well-known inhibitory actions of P1 receptors on synaptic transmission in the hippocampus. However, as recently pointed out, adenosine actions are not limited to presynaptic effects on glutamate release [137]. The intraventricular injection of high doses of ATP in rats evoked severe clonic-tonic convulsions, whereas lower doses of ATP or adenosine elicited an akinetic state with muscle weakness [138]. P2X2 and P2X4 receptor expression in the hippocampus of seizure-prone gerbils was significantly reduced compared with that of normal gerbils [139]. GABAA receptors mediated the modulation of expression of both P2X2 and P2X4 receptors, which may play an important role in the regulation of seizure activity in the gerbil hippocampus [139]. P2X7 receptors are thought to play a definite, but not yet well-defined, role in epilepsy. Treatment with the GABAB receptor agonist baclofen and the antagonist phaclofen resulted in increased and decreased P2X7 receptor expression in the hippocampus, respectively [140]. These purinergic receptor responses were interpreted as compensatory responses to the modulation of GABAB receptor function [140]. It is noteworthy that a similar positive relationship between P2X and GABAA receptors was also reported for the spinal cord [141] and dorsal root ganglia (DRG) [142]. In these populations of neurons, ATP-mediated P2X receptor function may participate in neuronal transmission accompanied by GABA-mediated actions [139]. Pain The heteromeric channel composed of P2X2 and P2X3 subunits is expressed almost exclusively in a subset of primary afferents implicated in nociception [143–145]. Mechanical allodynia is reduced in mice with deleted P2X3 receptor genes [146, 147], in agreement with data obtained in rats that had been treated with intrathecal antisense oligonucleotides reducing the expression of P2X3 receptors [148] or with A-317491, a selective antagonist of P2X3 and P2X2/3 receptors [148, 149]. P2X3 receptor knockout mice showed additional defects in afferent pathways. The P2X4 receptor is also implicated in pain sensation. The activation of dorsal horn microglia and the tactile allodynia that develops several days after ligation of a spinal nerve were greatly reduced when gene expression of the P2X4 receptor in the dorsal horn had been inhibited by intrathecal antisense oligonucleotides [150]. Conversely, intrathecal administration of cultured brain microglia produced tactile allodynia in naive rats, but only when the cells had been pretreated with ATP to induce P2X4 receptor expression and activity [150]. The inhibition of P2X4 receptor activity in microglia might thus be a new therapeutic strategy for pain induced by nerve injury. Alzheimer’s disease Alzheimer’s disease (AD) is characterized by the extracellular deposition of amyloid β-peptide, which can damage neurons, leading to their dysfunction and death [151]. ATP and, in particular, aluminum-ATP promoted the formation of thioflavin T-reactive fibrils of β-amyloid and of an unrelated amyloidogenic peptide, which could be blocked by suramin [152]. Microglial cells are believed to contribute to the progression of AD and are known to release proinflammatory neurotoxic substances. 
Extracellular ATP, acting through the P2X7 receptor, can alter β-amyloid peptide-induced cytokine secretion from human macrophages and microglia and thus may be an important modulator of neuroinflammation in AD [153]. P2X7 receptors mediate superoxide production in primary microglia, and the expression of this receptor subtype was specifically upregulated around β-amyloid plaques in a transgenic mouse model of AD [154]. In contrast to the control human brain, the P2Y1 receptor was colocalized with a number of characteristic AD structures such as neurofibrillary tangles, neuritic plaques, and neuropil threads in the hippocampus and cortex [155]. In general, control brain tissue exhibited greater and more abundant P2Y1 receptor immunostaining than AD tissue, probably due to severe neuronal degeneration in most AD brains. The intense P2Y1 receptor staining observed over pathological AD structures might imply that this receptor is involved either directly or indirectly in signaling events mediating the neurodegeneration of pyramidal cells. Alternatively, P2Y1 receptors might have other diverse signaling roles, possibly being involved in the production of intracellular tau deposits, or might even serve to stabilize these tangle structures in some way [156]. Ischemia/hypoxia Under pathological conditions of hypoxia or ischemia, purine nucleotides leak from damaged cells and thereby may reach high concentrations in the extracellular space [157]. A direct participation of extracellular ATP and P2 receptors in ischemic stress has been reported in various cellular systems [157–160]. For example, P2X2 and P2X4 receptor expression in neurons and microglia, respectively, in the hippocampus of gerbils was upregulated following transient global ischemia [161]. Increased P2X7 receptor expression in astrocytes, microglia, and neurons appears to contribute to the mechanisms of cell death caused by in vivo and in vitro ischemia [162, 163]. Following the induction of ischemia, P2X7 receptor mRNA transcription and protein expression were elevated in cultured cerebellar granule neurons and organotypic hippocampal cultures [163]. Hence, the P2X7 receptor is apparently an important element in the mechanisms of cellular damage induced by hypoxia/ischemia. In many cell types, activation of the P2X7 receptor leads to rapid cytoskeletal rearrangements, such as membrane blebbing and cell lysis [164]. P2Y1 receptors are intensely expressed in Purkinje cells, in deep layers of the cerebral cortex, and in ischemia-sensitive areas of the hippocampus [165]. In conclusion, extensive evidence demonstrates a postischemic time- and region-dependent upregulation of P2X2, P2X4, P2X7, and P2Y1 receptor subtypes in neurons and glial cells and suggests a direct role of P2 receptors in the pathophysiology of cerebral ischemia in vitro and in vivo. Trauma and axotomy P2 receptors are suggested to be involved in neuronal reactions after axotomy. The colocalization and temporal coactivation of purinergic and nitrergic markers support this idea, indicating possible interactions between these two systems [166]. Following peripheral nerve lesions, P2X3 receptor expression in DRG neurons was changed [167]. The increased expression of P2X3 receptor mRNA in intact neurons indicates a role of this subtype in the post-injury pathomechanism in primary sensory neurons [167]. After spinal cord injury, large regions of the peritraumatic zone were characterized by a sustained process of pathologically high ATP release [168]. 
Spinal cord neurons express P2X7 receptors, and exposure to ATP led to high-frequency spiking, irreversible increases in [Ca2+]i, and cell death. The administration of the P2 receptor antagonists PPADS and oxATP after acute impact injury significantly improved functional recovery and diminished cell death in the peritraumatic zone [168]. The involvement of P2X1 and P2X2 receptors in neuronal reactions after hemicerebellectomy has also been described [169]. Furthermore, neuronal NOS and P2 receptors were colocalized and showed temporal coactivation after cerebellar lesions, indicating a close relationship between these two systems [166]. In addition, in this mixed model of differentiation and axotomy, the colocalization of ataxin-2 (ax-2, a protein involved in resistance to degeneration phenomena, a function which may be lost after mutation)-immunopositive cells and P2X2 receptors was demonstrated in neurons, and post-lesional induction of P2X1 receptor and ax-2 immunoreactivity was reported as well [170]. In vivo treatment of P2Y2 receptor-expressing sciatic nerves with ATP-γS increased expression levels of the growth-associated protein 43 (GAP-43), a marker for axonal growth, in wild-type but not in P2Y2−/− mice [171]. Possible therapeutic manipulations to modulate astrocytic proliferation and to diminish glial scar formation in the adult brain and during development include the use of drugs known to interfere with nucleotide synthesis. Pekovic et al. [172] showed that treatment with the purine nucleoside analogue ribavirin (Virazole; 1-β-D-ribofuranosyl-1,2,4-triazole-3-carboxamide) downregulates the process of reactive gliosis after sensorimotor cortex lesion of the adult brain and facilitates the re-establishment of synaptic connections with the denervated cells at the lesion site. This may be a useful approach for improving neurological recovery from brain damage. The antiproliferative effect of ribavirin is due to the inhibition of de novo nucleic acid synthesis after depletion of the GTP and dGTP pools, with consequent impairment of specific transduction pathways. ATP-induced effects on cell cycle progression There is evidence showing that extracellular ATP enhances the expression of cell cycle-regulating proteins [173, 174]. Progression through the cell cycle is highly controlled. Cyclins are synthesized and degraded in a synchronous way due to changing transcription or proteolysis rates, thereby directing the phases of the cell cycle. Cyclins interact with cyclin-dependent kinases (cdks), resulting in activation of their kinase activity; the cyclin-cdk complexes phosphorylate their targets and themselves and regulate the specific progression of the cell cycle through checkpoints [175]. Proliferation rates in mammalian cells are largely determined during the G1 phase of the cell cycle. The relevant proteins include three D-type cyclins (D1, D2, and D3) that, in different combinations, bind to and allosterically regulate one of two cdk subunits, cdk4 and cdk6, as well as the E-type cyclins (E1 and E2), which govern the activity of a single catalytic subunit, cdk2 [176]. Various combinations of D-type cyclins are expressed in different cell types, whereas cyclin E-cdk2 complexes are ubiquitously expressed [177]. 
Two families of cdk inhibitors regulate the activity of G1-type cyclin-cdk complexes: the Ink4 family (p16, p15, p18, and p19), which blocks the activity of cyclin D-cdk4/6 complexes, and the Cip/Kip family (p21, p27, and p57), which preferentially inhibits cyclin E-cdk2 complexes and also acts as a scaffold for the catalytically active cyclin D-cdk4/6 complexes. In addition to cyclins and cdks, mitogen-activated protein kinase (MAPK) is also believed to have a role in the induction of cell proliferation. Cyclin D-dependent kinases may thus play a role in controlling the cell cycle of embryonic and possibly neural progenitor cells. Extracellular ATP induces Ca2+-dependent MAPK activation via stimulation of P2 receptors in neonatal rat astrocytes [178]. On the other hand, cell proliferation is associated with the activation of diverse proteins. Positive regulators include cyclins and their catalytically active partners (cdks), which are essential for the progression of cells through each phase of the cell cycle and through various cell cycle checkpoints [179, 180]. The regulation of cyclin D1 expression is also mediated by the Ras/ERK signaling pathway [181, 182]. The Raf/MEK/ERK and PI3-K/Akt signaling pathways can act in synergy to promote G1-S phase cell cycle progression in both normal and cancer cells [183, 184]. The promoter of cyclin D1 contains an AP-1 site, and the ectopic expression of either c-fos or c-jun induces cyclin D1 mRNA expression [185, 186]. In many cell types, phosphatidylinositol (PI) 3-kinase-dependent signaling pathways also regulate cyclin D1 expression [187]. It was also reported that the control of the cell cycle regulatory proteins was dependent on the PI3-kinase and p44/42 MAPK pathways, indicating that extracellular ATP alone is sufficient to induce cell cycle progression beyond the G1 phase. These findings also suggest that, once P2 receptors are activated, protein kinase C (PKC) transmits signals to the nucleus through one or more of the MAPK cascades, which may include Raf-1, MEK, and ERK, and stimulates transcription factors such as myc, max, fos, and jun. Moreover, MAPKs are upstream regulators of cdk2 and cdk4 expression. It has been reported that p44/42 MAPK phosphorylation is essential and sufficient for the increase in cdk2 [188, 189] and the decrease in p27Kip1 expression [190, 191]. However, Delmas et al. [192] provided evidence that p44/42 MAPK activation triggers p27Kip1 degradation independently of cdk2/cyclin E in NIH 3T3 cells. ATP regulation of the MAPK pathway and the cdk-cyclin complex has not been elucidated in other cell types [193]. It is documented in the literature that purinergic receptor inhibitors interfere with the S phase of the cell cycle. Neurospheres treated with the purinergic receptor antagonists reactive blue 2 or suramin showed a reduced fraction of cells in S phase (5.7 ± 0.3% and 8.4 ± 2.3%, respectively) when compared to untreated control neurospheres, with 16.4 ± 1.8% of the cells being in S phase. Moreover, neurosphere cultures treated with suramin or reactive blue 2 showed an increase in the expression of the tumor suppressor p27, a strong negative regulator of cell division [49]. 
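For illustration, the S-phase percentages quoted above can be compared from summary statistics alone. The group sizes below (n = 3 cultures per condition) are assumed, since they are not given here, and the ± values are taken as standard deviations; both are labeled assumptions.

```python
# Comparing the quoted S-phase fractions from summary statistics.
# ASSUMPTIONS (not given in the text): n = 3 cultures per condition and
# the +/- values are standard deviations.
from scipy.stats import ttest_ind_from_stats

control = (16.4, 1.8, 3)            # untreated neurospheres: mean, SD, n
treated = {"reactive blue 2": (5.7, 0.3, 3),
           "suramin": (8.4, 2.3, 3)}

for name, (m, s, n) in treated.items():
    t, p = ttest_ind_from_stats(control[0], control[1], control[2],
                                m, s, n, equal_var=False)  # Welch's t-test
    print(f"{name}: t = {t:.2f}, p = {p:.4f}")
```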
The findings discussed above led to the suggestion that extracellular ATP plays an important physiological role during mammalian embryonic development by stimulating the proliferation of ES cells, and that P2 receptor agonists and antagonists might therefore provide novel and powerful tools for modulating embryonic cell functions. In conclusion, P2X and P2Y purinergic receptors can promote the proliferation of ES cells as well as of progenitor cell types by a mechanism in which ATP induces increases in [Ca2+]i, leading to activation of PKC, PI3-kinase/Akt, p38, and p44/42 MAPK, followed by alterations in the cdk-cyclin complexes and their regulators p21 and p27, which together control cell proliferation. Pharmacological approaches Most purinergic receptors do not have specific inhibitors. Therefore, P2 receptor agonists and antagonists acting on most of the purinergic receptor subtypes are widely used in experimental approaches to study the biological functions of these receptors. Such approaches are feasible, since these compounds mostly have higher affinities for some P2 receptor subtypes than for others. As an example, we have used suramin, PPADS, and reactive blue 2 to study the participation of P2Y1, P2Y2, and P2X2 receptors in the neuronal differentiation of P19 EC cells [38]. One possible approach towards a subtype-specific inhibitor would be based on results from P2 receptor structure determination. Using site-directed mutagenesis, it has been possible to understand which amino acids are involved in ATP binding and to identify allosteric sites in purinergic receptors. The knowledge obtained on the location and structural features of ligand and inhibitor binding sites is used in rational, structure-based design of selective purinergic subtype antagonists. Alternatively, combinatorial libraries formed by vast numbers of possible ligands can be employed for the discovery of subtype-specific inhibitors. A-317491 was identified as a specific inhibitor of P2X2/3 and P2X3 receptors. In the presence of A-317491, both thermal hyperalgesia and mechanical allodynia were attenuated after chronic nerve constriction injury, in which P2X3 homomeric and P2X2/3 heteromeric receptor activities are involved. Although active in chronic pain models, A-317491 was ineffective in reducing nociception in animal models of acute postoperative pain and visceral pain, indicating that P2X3 and P2X2/3 receptor activation may not be a major mediator of acute postoperative or visceral pain [149]. MRS 2179 (2′-deoxy-N6-methyladenosine 3′,5′-bisphosphate) was discovered as a specific inhibitor of P2Y1 receptor activity [194]. This compound has an efficient antithrombotic action, in which P2Y1 receptors are involved [195]. Based on structure-guided design or combinatorial library approaches, specific agonists or antagonists may be discovered for other purinergic receptor subtypes. For instance, the SELEX (systematic evolution of ligands by exponential enrichment) technique provides a particularly promising approach for the discovery of such compounds. This technique is based on the reiterative presentation of a partially random RNA or single-stranded DNA library to a protein preparation containing a particular purinergic receptor subtype. RNA or DNA molecules bound to a target site on the receptor are displaced from the receptor and eluted by the addition of an excess concentration of an unspecific purinergic receptor antagonist and amplified by reverse transcription polymerase chain reaction (RT-PCR) or PCR to restore the library used for the next in vitro selection cycle. 
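The exponential enrichment that gives SELEX its name can be illustrated with a toy simulation in which a few high-affinity sequences are hidden in a large random pool. The affinities, pool size, and round count below are arbitrary; this sketches only the selection logic, not any real aptamer chemistry.

```python
# Toy simulation of SELEX enrichment: members are retained with probability
# proportional to a hidden binding affinity, then re-amplified to constant
# pool size. Affinities, pool size, and round count are arbitrary.
import random
random.seed(0)

pool = [0.5] * 10 + [0.01] * 9990   # 10 strong binders among 10,000 sequences

def selex_round(pool):
    bound = [a for a in pool if random.random() < a]          # partition/elute
    return [random.choice(bound) for _ in range(len(pool))]   # "PCR" resampling

for rnd in range(1, 6):
    pool = selex_round(pool)
    frac = sum(1 for a in pool if a == 0.5) / len(pool)
    print(f"round {rnd}: strong binders = {frac:.1%}")
```

Within a handful of rounds the strong binders dominate the pool, which is the behavior the reiterative protocol exploits.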
Using this approach, it was possible to identify inhibitors specific for isoforms of a target protein [196]. Our group prepared membrane protein fractions of 1321N1 cells stably transfected with rat P2X2 receptors and coupled them onto an immobilized artificial membrane (IAM) as a matrix for affinity chromatography. Equilibrium binding to the receptor and competition between ATP and the purinergic antagonists suramin and 2′,3′-O-(2,4,6-trinitrophenyl) adenosine 5′-triphosphate (TNP-ATP) were analyzed by a chromatographic assay using [α-32P]-ATP as a radioligand. Our data indicate that suramin does not compete with ATP for the ligand binding site and that TNP-ATP is a competitive antagonist, as already shown by Trujillo et al. [197]. Moreover, this chromatographic assay can be used in in vitro selection procedures for RNA aptamers binding to P2X2 receptors from a combinatorial SELEX RNA library [198]. The development of a subtype-specific P2X receptor antagonist by the SELEX technique or another combinatorial library-based approach shall serve as proof of principle and encourage further work to obtain such specific antagonists for all P2 receptor subtypes, as tools for elucidating their biological functions and for possible therapeutic applications. Conclusion P2 receptor function is involved in most physiological processes and participates in neurotransmission in the CNS. Results obtained with mouse ES and P19 EC cells and neural progenitor cells suggest an important role of purinergic signaling in early embryogenesis, especially in cell proliferation, migration, and differentiation, with different receptor subtypes participating in these processes. Our understanding of the biological functions of specific P2 receptor subtypes during CNS development and in the adult brain has increased due to the availability of knockout animals and the specific inhibition of gene expression or activity of purinergic receptor subtypes. The importance of P2 receptor signaling in neuroprotection, neuroimmunity, and the guidance of neuronal differentiation, especially in glial and microglial cells, has been related to purinergic receptor expression. Most importantly, specific agonists and antagonists for individual P2 receptor subtypes are needed for studying their involvement in biological processes. The discovery of such selective compounds will help elucidate yet unknown biological functions of P2 receptor subtypes as well as open new avenues for therapeutic approaches to disease states in which purinergic receptor activity is involved.
[ "knockout animal", "atp", "neurotransmitter", "neural stem cells", "p19 embryonal carcinoma cells" ]
[ "P", "P", "P", "P", "R" ]
Ann_Hematol-3-1-1914243
CD34-related coexpression of MDR1 and BCRP indicates a clinically resistant phenotype in patients with acute myeloid leukemia (AML) of older age
Clinical resistance to chemotherapy in acute myeloid leukemia (AML) is associated with the expression of the multidrug resistance (MDR) proteins P-glycoprotein, encoded by the MDR1/ABCB1 gene, the multidrug resistance-related protein (MRP1/ABCC1), the lung resistance-related protein (LRP), or major vault protein (MVP), and the breast cancer resistance protein (BCRP/ABCG2). The clinical value of MDR1, MRP1, LRP/MVP, and BCRP messenger RNA (mRNA) expression was prospectively studied in 154 newly diagnosed AML patients ≥60 years who were treated in a multicenter, randomized phase 3 trial. Expression of MDR1 and BCRP showed a negative, whereas MRP1 and LRP showed a positive, correlation with a high white blood cell count (respectively, p < 0.05, p < 0.001, p < 0.001, and p < 0.001). Higher BCRP mRNA was associated with secondary AML (p < 0.05). MDR1 and BCRP mRNA expression were highly significantly associated (p < 0.001), as were MRP1 and LRP mRNA expression (p < 0.001). Univariate regression analyses revealed that CD34 expression, increasing MDR1 mRNA, as well as MDR1/BCRP coexpression were associated with a lower complete response (CR) rate and with worse event-free survival and overall survival. When adjusted for other prognostic factors, only CD34-related MDR1/BCRP coexpression remained significantly associated with a lower CR rate (p = 0.03), thereby identifying a clinically resistant subgroup of elderly AML patients. Introduction Clinical resistance to chemotherapy in acute myeloid leukemia (AML) is often associated with the expression of (membrane) transport-associated multidrug resistance (MDR) proteins. Expression of P-glycoprotein (P-gp), encoded by the MDR1 gene, is an independent adverse prognostic factor for response and survival in de novo AML [1–4]. Moreover, it has been shown that, besides P-gp, the MDR-related protein (MRP1/ABCC1) and the lung resistance-related protein (LRP), also designated the major vault protein (MVP), are expressed in AML. However, the prognostic significance of the latter resistance proteins has not been settled [3, 5–7]. Some years ago, a new drug resistance protein, i.e., the breast cancer resistance protein (BCRP/ABCG2), which is the equivalent of the mitoxantrone (MXT) resistance protein and the placental ABC transporter (ABCP), was found to be expressed in AML [8–13]. The precise role of these resistance proteins in poor-risk AML, such as in patients of older age, has not been established. This study prospectively investigated the relevance of MDR1, MRP1, LRP, and BCRP messenger RNA (mRNA) expression, in combination with known prognostic characteristics like CD34 expression, white blood cell (WBC) count, and secondary AML, as possible denominators of response and survival in patients with AML aged 60+ who were treated in the same clinical trial. Patients and methods Patients A group of 154 patients with AML aged 60 years or older were included in the present study. All patients were enrolled between May 1997 and February 1999 in an international, multigroup, randomized phase 3 trial performed under the auspices of the Dutch–Belgian Hemato-Oncology Cooperative Group and the UK Medical Research Council [14]. 
In that trial, 419 eligible white patients ≥60 years with previously untreated de novo and secondary AML (M0–M2 and M4–M7 according to the French–American–British [FAB] classification [15]) were randomized to receive two cycles of induction chemotherapy consisting of daunorubicin (DNR) and cytarabine (ara-C) with or without the P-gp inhibitor PSC-833 (Valspodar, Amdray®; Novartis Pharma, Basle, Switzerland). Patients in both arms who were in complete remission after these two cycles were to receive one consolidation cycle consisting of ara-C, MXT, and etoposide. Inclusion criteria, clinical characteristics, treatment, and outcome of the phase 3 trial have been reported previously [14]. Bone marrow (BM) aspirates had been collected at diagnosis for the analysis of P-gp function and expression, as described previously [14]. Selection of patients for our study was based on the availability of sufficient purified AML blast samples in our tissue bank, which was the case for 154 patients. This study was approved by the ethics committees of the participating institutions and was conducted in accordance with the Declaration of Helsinki. Written informed consent was obtained from all patients before randomization. Methods BM aspirates were obtained in heparinized tubes. Mononuclear BM cells were collected by Ficoll Hypaque density gradient centrifugation (density 1.077 g/ml; Pharmacia, Uppsala, Sweden). To obtain purified samples with more than 85% blasts, T-cell depletion and adherence depletion were performed as previously described [16]. Cells were cryopreserved in Dulbecco modified Eagle medium (DMEM; Gibco, Paisley, UK) supplemented with 10% dimethyl sulfoxide (Merck, Darmstadt, Germany) and 20% fetal calf serum (FCS; Gibco) and stored in liquid nitrogen. On the day of the experiments, BM cells were thawed. Cells were washed and resuspended in DMEM supplemented with 10% FCS. Before RNA and DNA isolation, cells were washed with phosphate-buffered saline (Gibco). MDR1, MRP1, LRP, and BCRP mRNA analysis The drug resistance genes were analyzed using the methods that we reported previously [11]. In brief, total RNA was isolated using TRISOLV™ extraction as described by the manufacturer (Biotecx, Houston, TX). RNA was aliquoted and stored at −80°C. RNA samples were analyzed for RNA integrity by gel electrophoresis. cDNA was synthesized using the TaqMan Reverse Transcription Reagents (Applied Biosystems, Foster City, CA), diluted, aliquoted, and stored at −80°C. Quantitative RT-PCR was used to measure the mRNA expression levels of MDR1, MRP1, LRP, and BCRP by TaqMan chemistry on an ABI PRISM 7700 sequence detector (Applied Biosystems), using two endogenous reference genes, i.e., glyceraldehyde-3-phosphate dehydrogenase and porphobilinogen deaminase. Definition of endpoints The clinical endpoints have been defined previously [14]. In brief, complete response (CR) was defined as a normocellular BM with <5% blasts, no Auer rods, and no evidence of extramedullary involvement. Because data on peripheral blood recovery within 60 days were not always available, they were not considered a criterion for CR. Patients who relapsed or died within 28 days after CR were considered as not having achieved a CR. Event-free survival (EFS) was calculated from the date of randomization until no CR on induction therapy, relapse after CR, or death in CR, whichever came first. Patients who did not reach CR were considered failures for EFS at 1 day after randomization. 
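As an aside to the mRNA analysis described above, relative quantification against endogenous reference genes is commonly done via delta-Ct normalization. The paper does not spell out its exact calculation, so the following sketch, with an assumed amplification efficiency of 2.0 and invented Ct values, is illustrative only.

```python
# Illustrative delta-Ct normalization against two reference genes.
# The exact calculation used in the study is not specified here; an
# amplification efficiency of 2.0 is assumed and the Ct values are invented.
from statistics import mean

def relative_expression(ct_target, ct_refs, efficiency=2.0):
    """Target expression relative to the mean of the reference genes."""
    return efficiency ** -(ct_target - mean(ct_refs))

ct = {"MDR1": 27.5, "GAPDH": 18.2, "PBGD": 24.9}   # hypothetical values
print(relative_expression(ct["MDR1"], [ct["GAPDH"], ct["PBGD"]]))
```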
Disease-free survival (DFS) was determined for all patients who achieved CR on induction therapy and was calculated from the date of CR until relapse or death, whichever came first. Overall survival (OS) was measured from randomization until death from any cause. Patients who were still alive at the date of last contact were censored at that date. Statistical analysis The original phase 3 trial had been designed to detect, with a power of 80%, an increase in 2-year EFS from 9.5% in the control arm (without PSC-833) to 18% in the PSC-833 arm (two-sided significance level α = 0.05) and included 419 eligible patients. mRNA data were obtained from a subset of 154 patients for whom sufficient BM samples were available for analysis in our tissue bank. Baseline parameters of interest were MDR1, MRP1, LRP, and BCRP mRNA expression. Clinical endpoints were CR rate, EFS, DFS, and OS. Baseline characteristics of patients with or without mRNA expression data available were compared using the Fisher exact test or the Pearson χ2 test in the case of discrete variables, whichever was appropriate, or the Wilcoxon rank-sum test in the case of continuous variables. The association between patient baseline characteristics and mRNA expression levels was analyzed using the Pearson χ2 test or the Spearman rank correlation test, whichever was appropriate. The prognostic value of mRNA levels with respect to CR rate was determined using logistic regression [17], whereas the impact of MDR1, MRP1, LRP, and BCRP on EFS, DFS, and OS was analyzed with Cox regression analysis [18]. For this purpose, the natural logarithms of the mRNA expression levels of the four resistance genes were included in the analyses because of the very skewed distribution of the original mRNA levels. In addition, the outcome of patients with coexpression of MDR1 and BCRP was evaluated to confirm the poor prognosis of AML with MDR1/BCRP coexpression reported by Benderra et al. [19] in relatively younger adult patients (median age 45 years). These patients were defined as having mRNA levels of both drug resistance genes equal to or higher than the median. Their outcome was compared to that of the other patients, with at least one of the MDR1 and BCRP mRNA expression levels below the median. Logistic regression and Cox regression analyses were performed unadjusted, as well as adjusted for other prognostic factors, i.e., secondary AML, the natural logarithm of the WBC count, the square root of the percentage of CD34+ cells, and cytogenetic risk (favorable/intermediate versus unfavorable versus unknown), as well as for treatment arm in the phase 3 trial, as about half of the patients had been randomized to receive PSC-833 in addition to their chemotherapy. Kaplan–Meier curves [20] were generated to illustrate survival and were compared using the log-rank test [21]. All reported p values are two-sided and, in view of the exploratory nature of these analyses, were calculated without adjustment for multiple testing. p values ≤ 0.05 were considered statistically significant. Results In the phase 3 trial, a total of 419 untreated patients with AML aged 60 years and older were randomized to receive two induction cycles with or without PSC-833. As reported, no difference was found between the two treatment arms with regard to CR rate (54% in the PSC-833 arm versus 48% in the control arm, p = 0.22), 5-year EFS (7 versus 8%, p = 0.53), DFS (13 versus 17%, p = 0.06), or OS (10% in both arms, p = 0.52) [14]. We previously reported the role of functional MDR1 expression with respect to clinical outcome in these patients. 
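A code sketch may help make this analysis strategy concrete: log-transformed mRNA levels entered into a logistic model for CR and a Cox model for OS, plus the median-based MDR1/BCRP coexpression flag. The data frame below is randomly generated and the column names are placeholders; nothing here reproduces the study data.

```python
# Sketch of the analysis strategy: log-transformed mRNA levels in a logistic
# model (CR) and a Cox model (OS), plus the median-based coexpression flag.
# The data are randomly generated placeholders, not the study data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 154
df = pd.DataFrame({
    "MDR1": rng.lognormal(0.0, 1.0, n),
    "BCRP": rng.lognormal(0.0, 1.0, n),
    "CR": rng.integers(0, 2, n),            # complete response (0/1)
    "os_months": rng.exponential(12.0, n),  # follow-up time
    "death": rng.integers(0, 2, n),         # event indicator
})
df["log_MDR1"] = np.log(df["MDR1"])         # skewed levels -> natural log
df["coexpr"] = ((df["MDR1"] >= df["MDR1"].median()) &
                (df["BCRP"] >= df["BCRP"].median())).astype(int)

# Logistic regression for CR; exp(coefficient) gives the odds ratio.
logit = sm.Logit(df["CR"],
                 sm.add_constant(df[["log_MDR1", "coexpr"]])).fit(disp=0)
print(np.exp(logit.params))

# Cox proportional hazards for OS; exp(coefficient) gives the hazard ratio.
cph = CoxPHFitter().fit(df[["os_months", "death", "log_MDR1", "coexpr"]],
                        duration_col="os_months", event_col="death")
cph.print_summary()
```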
In 154 of the 419 patients, sufficient BM cells were available in our tissue bank to investigate the mRNA expression levels of the drug resistance genes MDR1, MRP1, LRP, and BCRP. This subgroup was representative with regard to age, gender, CD34 expression, cytogenetics, and FAB classification (Table 1). In this test group, a higher WBC count at diagnosis was observed than in the other 265 patients, and relatively more patients had been randomized to the PSC-833 arm (57 versus 45%, p = 0.02). There was no significant difference in the levels of MDR1, MRP1, LRP, or BCRP mRNA expression between the two treatment arms (data not shown). The CR rate and survival endpoints were also similar in both patient groups (Table 1). However, patients with mRNA data in the PSC-833 arm had a higher CR rate (61 versus 40%, p = 0.02), whereas this was 54 versus 48% (p = 0.22) in all 419 patients.
Table 1 Comparison between patients with or without data available for expression of the drug resistance genes
Drug resistance genes evaluated | Yes N (%) | No N (%) | Total N (%) | p
Number of patients | 154 | 265 | 419 |
Patient characteristics
Median age (range) | 67 (60–85) | 68 (58–85) | 67 (58–85) | 0.52
Sex | | | | 0.26
 Male | 86 (56) | 163 (62) | 249 (59) |
 Female | 68 (44) | 102 (38) | 170 (41) |
Secondary AML | 31 (20) | 73 (28) | 104 (25) | 0.09
Median WBC count (×10^9/l; range) | 19.1 (0.1–389) | 5.6 (0.5–300) | 8.9 (0.1–389) | 0.001
 N | 146 | 243 | 389 |
Median % CD34+ (range) | 32.5 (0.1–97.9) | 29.7 (0.1–93.7) | 30.3 (0.1–97.9) | 0.50
 N | 152 | 157 | 309 |
Cytogenetic risk classification^a | | | | 0.12
 Favorable | 3 (3) | 2 (1) | 5 (2) |
 Intermediate | 90 (80) | 132 (73) | 222 (76) |
 Unfavorable | 19 (17) | 47 (26) | 66 (23) |
 No data | 42 (n.i.) | 84 (n.i.) | 126 (n.i.) |
Treatment arm randomized | | | | 0.02
 DNR/ara-C | 66 (43) | 145 (55) | 211 (50) |
 DNR/ara-C + PSC-833 | 88 (57) | 120 (45) | 208 (50) |
Treatment outcome
CR rate, % (95% CI) | 52 (44–60) | 50 (44–56) | 51 (46–56) | 0.73
EFS, % (95% CI) | | | | 0.72
 1 year | 23 (17–30) | 23 (18–28) | 23 (19–27) |
 5 years | 9 (5–14) | 7 (4–11) | 8 (5–11) |
DFS, % (95% CI) | | | | 0.81
 1 year | 38 (27–48) | 39 (31–48) | 39 (32–45) |
 5 years | 17 (10–26) | 14 (9–21) | 15 (11–21) |
OS, % (95% CI) | | | | 0.31
 1 year | 42 (34–50) | 41 (35–46) | 41 (36–46) |
 5 years | 14 (9–20) | 8 (5–12) | 10 (7–14) |
The results indicate that, apart from WBC count, there are no differences between the two subgroups. N = number of patients with data (if not available for all patients); n.i. = not included when calculating percentages. ^a Classification of cytogenetic abnormalities only for the 293 patients with successful cytogenetics. Favorable risk: t(8;21), inv(16) or t(16;16). Unfavorable risk: the presence of monosomies or deletions of chromosomes 5 or 7, abnormalities of the long arm of chromosome 3 (q21;q26), t(6;9), abnormalities involving the long arm of chromosome 11 (11q23), or complex cytogenetic abnormalities (defined as at least three unrelated cytogenetic abnormalities in one clone). Patients who did not meet the criteria for favorable or unfavorable risk were classified as intermediate risk [14].
The mRNA expression levels of the resistance genes were not significantly associated with the age of the patients (Table 2). MRP1 and LRP expression showed a strong positive correlation with WBC count. Negative associations of MDR1 and BCRP with WBC count were observed. A significant positive association was found between CD34 and MDR1 and also with BCRP mRNA expression. No significant correlation was found between MRP1 or LRP and CD34 expression (Table 2). Interestingly, secondary AML cases had a significantly higher expression of BCRP (p < 0.05) and lower MRP1 and LRP levels (both p < 0.01, Table 2). 
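The Table 2 associations that follow are Spearman rank correlations computed per pair of variables on the patients with both values available. A minimal sketch of such a computation, on invented data with placeholder column names, is:

```python
# Pairwise Spearman rank correlation, as in Table 2 (invented data,
# placeholder variable names; the real analyses used pairwise-complete cases).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
wbc = rng.lognormal(2.0, 1.0, 146)
mdr1 = rng.lognormal(0.0, 1.0, 146)

rho, p = spearmanr(wbc, mdr1)
stars = "***" if p < 0.001 else "**" if p < 0.01 else "*" if p < 0.05 else ""
print(f"rho = {rho:.2f}{stars} (n = {len(wbc)})")
```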
In the vast majority of our patients, P-gp efflux and expression data were also available. Efflux and expression data and MDR1 mRNA expression levels were highly correlated (p < 0.001), as published recently [22].
Table 2 Association between clinical patient characteristics and the mRNA expression of the four drug resistance genes and MDR1/BCRP coexpression
Characteristic | MDR1 | MRP1 | LRP | BCRP | MDR1/BCRP coexpression
Age | 0.15 (153) | −0.01 (153) | −0.09 (153) | 0.09 (137) | 0.07 (147)
Secondary AML | 0.06 (153) | −0.22** (153) | −0.21** (153) | 0.19* (137) | 0.12 (147)
WBC count | −0.17* (145) | 0.28*** (145) | 0.36*** (145) | −0.36*** (131) | −0.35*** (139)
CD34+ | 0.54*** (151) | 0.14 (151) | −0.08 (151) | 0.17* (135) | 0.27** (145)
Unfavorable cytogenetic risk | 0.11 (111) | −0.05 (111) | −0.23* (111) | 0.13 (98) | 0.10 (106)
Unfavorable cytogenetic risk was defined by the presence of monosomies or deletions of chromosomes 5 or 7, abnormalities of the long arm of chromosome 3 (q21;q26), t(6;9), abnormalities involving the long arm of chromosome 11 (11q23), or complex cytogenetic abnormalities (defined as at least three unrelated cytogenetic abnormalities in one clone). Each cell displays the Spearman rank correlation coefficient between two variables and, between brackets, the number of patients with both variables available. *p < 0.05; **p < 0.01; ***p < 0.001
In this cohort of patients of higher age with AML, MDR1 and BCRP were highly associated (p < 0.001), just as were MRP1 and LRP mRNA (p < 0.001; Fig. 1). A negative association was found between BCRP and MRP1 and between BCRP and LRP (both p < 0.001; Fig. 1). The 40 patients with coexpression of BCRP and MDR1 had significantly higher CD34 expression (median 39.5% [range 0.1–97.7%] versus 25.9% [range 0.1–97.9%]; p = 0.001) and a lower WBC count (median 4.5 [range 0.8–300] × 10^9/l versus 28.1 [range 0.1–389] × 10^9/l; p < 0.001). No significant correlation of MDR1, BCRP, or coexpression of MDR1 and BCRP was found with unfavorable cytogenetics (p = 0.4; Table 2). Fig. 1 Association between MDR1, MRP1, LRP, and BCRP mRNA expression levels. Each dot represents the expression of two drug resistance genes in one patient. The Spearman rank correlation coefficient has been calculated, along with the corresponding p value. Both the x- and y-axis have a logarithmic scale. trim(X)* indicates that the 2.5% smallest and largest values of X have been shrunk; r, Spearman rank correlation coefficient; p, p value. The results show a significant positive correlation between MDR1 and BCRP mRNA expression, as illustrated by the p value and correlation coefficient. In addition, MRP1 and LRP are highly associated. BCRP shows a negative correlation with MRP1 and LRP. To assess the clinical relevance of the four resistance genes, their expression was evaluated with regard to CR rate and survival data, respectively. The median follow-up of the 25 patients still alive was 58 months (range, 1–80 months). Univariate logistic regression analysis showed that higher MDR1 mRNA expression predicted a lower CR rate (log[MDR1]: odds ratio [OR] = 0.75, 95% confidence interval [CI] 0.61–0.93, p = 0.009), whereas MRP1, LRP, and BCRP mRNA were not associated with CR (Table 3). MDR1 expression was also associated with a worse EFS (log[MDR1]: hazard ratio [HR] = 1.14, 95% CI 1.03–1.27, p = 0.01) and OS (log[MDR1]: HR = 1.16, 95% CI 1.05–1.29, p = 0.004). Similar results were also obtained for MDR1/BCRP coexpression (Table 3; Fig. 2). 
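As a worked example of how regression coefficients translate into the effect sizes reported here: an OR or HR is exp(beta), and its 95% CI is exp(beta ± 1.96·SE). Below, beta and SE are back-calculated for illustration from the univariate MDR1/BCRP EFS entry; slight rounding differences against the published values are expected.

```python
# OR/HR = exp(beta); 95% CI = exp(beta +/- 1.96 * SE). beta and SE are
# back-calculated for illustration from the univariate MDR1/BCRP EFS entry
# (HR = 1.63, 95% CI 1.11-2.37); small rounding differences are expected.
import math

beta = math.log(1.63)
se = (math.log(2.37) - math.log(1.11)) / (2 * 1.96)

hr = math.exp(beta)
lo, hi = math.exp(beta - 1.96 * se), math.exp(beta + 1.96 * se)
print(f"HR = {hr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```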
When the analyses were performed with adjustment for other prognostic factors, as described in the “Statistical analysis” section, only MDR1/BCRP mRNA coexpression remained significantly associated with a lower CR rate (OR = 0.37, 95% CI 0.15–0.91, p = 0.03), whereas a trend was observed for worse EFS (Table 3). On the other hand, higher CD34 expression was significantly associated with a lower CR rate (square root[CD34]: OR = 0.86, 95% CI 0.76–0.98, p = 0.02) and with worse EFS (HR = 1.12, 95% CI 1.06–1.19, p < 0.001), DFS (HR = 1.19, 95% CI 1.09–1.30, p < 0.001), and OS (HR = 1.17, 95% CI 1.10–1.25, p < 0.001).

Table 3 Prognostic value of drug resistance gene expression with respect to CR rate, EFS, DFS from CR, and OS. Results of logistic (for CR rate) and Cox regression (for survival) analyses, either univariate (= unadjusted) or adjusted for treatment arm, secondary AML, WBC count (natural logarithm), % CD34+ (square root), and cytogenetic risk (favorable/intermediate versus unfavorable versus unknown), are shown for each of the four drug resistance genes MDR1, MRP1, LRP, and BCRP (natural logarithm of mRNA expression levels) and for MDR1/BCRP co-expression.

                       | CR rate: OR (95% CI), p | EFS: HR (95% CI), p    | DFS: HR (95% CI), p    | OS: HR (95% CI), p
MDR1, univariate       | 0.75 (0.61–0.93), 0.009 | 1.14 (1.03–1.27), 0.01 | 1.13 (0.97–1.30), 0.11 | 1.16 (1.05–1.29), 0.004
MDR1, adjusted         | 0.77 (0.58–1.03), 0.08  | 1.05 (0.91–1.21), 0.48 | 0.95 (0.77–1.18), 0.67 | 1.00 (0.87–1.16), 0.97
MRP1, univariate       | 1.06 (0.83–1.35), 0.63  | 1.02 (0.90–1.15), 0.79 | 1.07 (0.89–1.29), 0.47 | 1.11 (0.97–1.26), 0.12
MRP1, adjusted         | 1.22 (0.90–1.66), 0.20  | 1.00 (0.87–1.15), 0.98 | 1.12 (0.92–1.37), 0.26 | 1.05 (0.91–1.21), 0.54
LRP, univariate        | 1.16 (0.94–1.43), 0.16  | 0.95 (0.86–1.06), 0.36 | 0.98 (0.84–1.14), 0.79 | 0.97 (0.87–1.08), 0.60
LRP, adjusted          | 1.22 (0.93–1.61), 0.15  | 0.99 (0.87–1.12), 0.83 | 1.06 (0.89–1.27), 0.52 | 0.98 (0.86–1.12), 0.78
BCRP, univariate       | 0.84 (0.66–1.06), 0.14  | 1.04 (0.91–1.18), 0.58 | 0.95 (0.77–1.16), 0.60 | 0.96 (0.84–1.10), 0.58
BCRP, adjusted         | 0.79 (0.59–1.06), 0.12  | 0.99 (0.86–1.14), 0.92 | 0.84 (0.66–1.06), 0.14 | 0.90 (0.77–1.05), 0.19
MDR1/BCRP, univariate  | 0.38 (0.18–0.80), 0.01  | 1.63 (1.11–2.37), 0.01 | 1.65 (0.90–3.01), 0.11 | 1.47 (1.00–2.16), 0.05
MDR1/BCRP, adjusted    | 0.37 (0.15–0.92), 0.03  | 1.53 (0.98–2.38), 0.06 | 1.37 (0.67–2.82), 0.39 | 1.16 (0.74–1.83), 0.51

Fig. 2 Survival of elderly AML patients with and without coexpression of MDR1 and BCRP mRNA. a Event-free survival, b disease-free survival, c overall survival. “pos” indicates patients with coexpression of MDR1 and BCRP; “other”, patients without coexpression.

Discussion

This is the first comprehensive analysis of the effect of the major classical MDR genes in a cohort of elderly patients with AML homogeneously treated in a prospective clinical trial [14]. A wide range of expression of the various resistance genes was observed, consistent with previous studies and with comparable median values [9–11, 23]. Our results show that MRP1, LRP, and BCRP are not associated with CR rate or survival endpoints in patients with AML aged 60 years or older, indicating that the clinical relevance of the expression of these genes is limited in this patient population. This study confirms previous reports showing a unique prognostic role of MDR1 expression—albeit highly correlated with CD34 expression—in drug resistance in elderly AML (Table 3). In contrast, reports on the prognostic value of MRP1 expression in AML have been conflicting, and LRP is currently no longer thought to be important for clinical drug resistance [4, 5, 7, 24–27]. Recently, two studies in, respectively, 40 and 31 adult AML patients showed no effect of BCRP gene expression on CR rate, whereas OS was lower in patients with the highest BCRP expression [10, 23]. Damiani et al.
[28] showed that BCRP expression did not influence achievement of complete remission in AML patients with a median age of 53 years and a normal karyotype; however, BCRP expression was associated with a higher relapse rate. In 59 children with de novo AML, higher BCRP expression was observed in patients who did not reach CR, but this did not translate into poorer survival [29]. Benderra et al. [19] indicated that BCRP gene expression was an adverse prognostic factor for CR in a group of 149 relatively younger adult AML patients, but only in patients treated with DNR and MXT and not with idarubicin. In our cohort of elderly AML patients, who were all treated with DNR, whereas MXT was given as consolidation therapy after reaching CR, a significant correlation of BCRP mRNA expression with a lower CR rate could not be shown. Our study confirms that BCRP and MDR1 are coexpressed in AML patients of higher age, as has been suggested previously from studies in smaller groups of relatively younger AML patients [9–11, 28]. Until now, only two studies have evaluated the clinical value of coexpression of MDR1 and BCRP in a sufficiently large number of patients, although these concerned relatively younger adult AML patients [19, 28]. Benderra et al. [19] showed that the CR rate was only 45% in patients with coexpression of BCRP and MDR1 (+/+) in comparison with 66% in the MDR1/BCRP −/+ and +/− groups and 90% in the MDR1/BCRP −/− group (p = 0.003). Moreover, a significantly lower DFS and OS were found in the MDR1/BCRP +/+ group. Damiani et al. [28] found a trend towards a higher relapse rate in the small group of BCRP+/MDR1+ patients, indicating that this represents a robust resistant AML phenotype, consistent with our findings in elderly AML. The recent finding, based on gene expression profiling, that BCRP and MDR1 expression was mainly found in the most resistant group of AML underscores the role of these drug resistance genes in AML [30]. However, this study shows that the prominent prognostic role of CD34 expression in elderly AML should be emphasized, as higher CD34 expression was adversely associated with all clinical endpoints. MDR1 and BCRP, but not MRP1 and LRP, mRNA expression were found to be associated with high CD34 expression in these elderly AML patients, which may explain why MDR1 was no longer significant for CR rate, EFS, and OS when adjusted for other prognostic variables including CD34. In the past, MDR1 expression has been linked to the CD34-positive hematopoietic stem cell compartment of the leukemia subtype. In two other studies in younger AML patients, no overexpression (at the mRNA or protein level) of BCRP was found in the CD34-positive blast population of clinical AML samples [13, 19]. In contrast, earlier studies in mice demonstrated high levels of BCRP and MDR1 expression in normal hematopoietic stem cells [31–34]. Previously, BCRP expression in subsets of stem cells has been reported, indicating that high BCRP expression may exist in CD34+/CD38− cells or in CD34+/CD33− cells [12, 35]. The differential expression of BCRP and MDR1 in specific subsets of hematopoietic stem cells is consistent with the side population phenotype as proposed by Goodell et al. [36], who claimed that cells expressing BCRP can be separated from those expressing the other ABC proteins. This would suggest that BCRP is expressed in even less differentiated hematopoietic stem cells than MDR1 [19].
In our study in AML, these immature subsets could not be investigated separately; however, the unique BCRP/MDR1 +/+ subgroup of patients reflects an immature leukemic cell type with a very resistant phenotype in vivo, illustrated by a low CR rate and poor outcome (Table 3; Fig. 2). This is the first study in which a correlation was found between secondary AML and high expression of BCRP mRNA but not of the other resistance proteins. In addition to our previous report that BCRP is frequently upregulated in patients with AML at relapse, we now demonstrate that expression of BCRP is representative of secondary AML, which is especially observed in elderly patients [11, 29]. Recently, Ross [37] suggested that MDR modifiers may be of benefit for patients with multiple dysplastic features. This may suggest that BCRP is upregulated in diseases in which exposure to xenobiotics during life plays an etiologic role. We conclude that CD34-related coexpression of MDR1 and BCRP reflects a clinically resistant subgroup of elderly AML. In this age group, only BCRP is correlated with secondary AML. As such, the development of new treatment strategies for elderly AML patients may focus on modulation of drug resistance targeting both BCRP and MDR1.
[ "mdr1", "bcrp", "genes", "lrp", "mrp1", "elderly aml" ]
[ "P", "P", "P", "P", "P", "P" ]
Qual_Life_Res-4-1-2238788
Effectiveness of health-related quality-of-life measurement in clinical practice: a prospective, randomized controlled trial in patients with chronic liver disease and their physicians
Background

This study assessed the effectiveness of computerized measurement and feedback of health-related quality of life (HRQoL) in daily clinical practice in patients with chronic liver disease.

Introduction

Health-related quality of life (HRQoL), or psychological, social, and physical functioning [1], has become an important outcome measure in medical care. Standardized assessment of HRQoL preceding each consultation may potentially provide physicians with valuable information. Several studies have shown that physicians vary in their ability to elicit psychosocial information or that they underestimate patients’ HRQoL [2–5]. Furthermore, various studies have shown that when communication with the physician encompasses both physical and psychosocial issues, patients have better treatment compliance, are more satisfied with the consultation, and report fewer symptoms [6–8]. Nevertheless, relatively few studies have assessed the value of HRQoL measurement in clinical practice. Some have shown positive results with regard to acceptance by patients and physicians or a significant increase in the identification and/or discussion of HRQoL issues [9–14]. Less consistent and favorable results have been obtained with regard to the effectiveness of standardized HRQoL measurement in actually improving HRQoL or psychosocial outcomes. Even though decreased depression [15], improved overall and emotional functioning [10], improved mental health [16], and a decrease in disease-specific debilitating symptoms of patients undergoing chemotherapy [13] have been associated with HRQoL measurement in clinical practice, several other studies found no significant improvement in HRQoL or psychosocial outcomes [9, 17–20]. A possible explanation might be that the majority of existing studies assessing the effectiveness of HRQoL measurement in clinical practice with regard to patients’ psychosocial functioning or HRQoL have included oncological patients or patients from general practice. Oncological patients can be considered a special group due to the life-threatening nature of their disease. Patients from general practice, on the other hand, may be too diverse and often present with generally minor complaints, which may hamper the discovery of beneficial effects. Both groups impede generalization of results to other chronic patient populations. Two important studies [9, 10] used designs in which physicians were part of both the control and the experimental group, either by using a crossover design (physicians were first assigned to one group, then crossed over to the other group halfway through the study) [9] or by assigning patients rather than physicians to the different groups [10]. This may have caused bias. Two systematic reviews have stressed the need for further research evaluating the effectiveness of repeated measurements of HRQoL in clinical practice [18, 20] and the need for further research to help health care professionals identify patients who would benefit most from such interventions [20]. The study reported here differs from previous studies by including a patient population with chronic liver disease (CLD) in order to study the effects of HRQoL use in clinical practice in a population that is more representative of other patients with a chronic disease. CLD is one of the most prevalent diseases in the world.
The most common causes of CLD, hepatitis B virus (HBV) and hepatitis C virus (HCV), have been estimated to affect 360 million and 200 million people worldwide, respectively (http://www.epidemic.org, 4-12-2006). Alcohol is another main cause of end-stage liver disease worldwide and the second most common reason for liver transplantation in the United States [21]. CLD is a serious disease that is associated with significant physical and psychological symptoms such as impaired cognition, hepatic coma, fluid in the abdomen, abdominal pain, joint pain, fatigue, depression, and anxiety [22–28]. Not surprisingly, HRQoL in patients with CLD has been shown to be impaired [29, 30]. CLD is an appropriate example of a typical chronic disease, with patients experiencing substantial comorbidity and possibly mortality, as is the case in other chronic diseases such as kidney disease and chronic obstructive pulmonary disease. Our study also differs from previous studies by assessing the benefits of HRQoL measurement for patients with different demographic characteristics (e.g., men and women, young and old), which is essential for determining which patients are most likely to benefit from HRQoL measurement in clinical practice, a point recently reiterated in a systematic review on this topic [20]. In addition, in our study, physicians rather than patients were assigned to the control or the experimental group. Assigning physicians to only one group prevents the bias of physicians being focused on discussing HRQoL when seeing patients in the control group. The aims of the study were twofold. The first was to assess, by means of a randomized trial with repeated measurements, the effectiveness of real-time computerized measurement of HRQoL in patients with CLD, with presentation of the results to physicians before the consultation, in terms of improvement in patient HRQoL, patient management, and patient satisfaction with the consultation. The second aim was to assess hepatologists’ experiences with the availability of real-time HRQoL patient data and to measure the possible effect(s) it had on their consultations.

Patients and methods

Patient recruitment

This study was performed at the Department of Gastroenterology and Hepatology of the Erasmus Medical Centre, Rotterdam, where HRQoL measurement on a regular basis was implemented for the duration of 1 year. All patients older than 17 years of age with CLD visiting the department between September 2004 and January 2005 were invited to participate. Written information about the study was sent to the patients 3 days before their consultation at the outpatient department. Patients interested in participating informed their physician, who then directed them to the researcher for further explanation of the study and to sign informed consent. For this effectiveness study, we included all patients with two or more measurement moments. All physicians working at the Department of Hepatology participated. The protocol was in accordance with the ethical guidelines of the modified 1975 Declaration of Helsinki and approved by the Medical Ethics Committee of the Erasmus MC.

Study objectives

The primary aim of this study was to assess the effectiveness of computerized measurement of HRQoL in clinical practice. The primary outcome measures were patients’ generic HRQoL (physical and mental component scores separately) and disease-specific HRQoL. Secondary outcome measures were patient satisfaction with the consultation and patient management.
The secondary aim of this study was to assess hepatologists’ experiences with the availability of real-time HRQoL patient data.

Study design and intervention

Physicians

Physicians were randomly assigned to either the experimental or the control group by means of a restricted randomization procedure called blocking. To keep the groups balanced, six physicians were included in the experimental group and five in the control group. We used a random sequence table to assign physicians to one of the conditions. Due to the nature of the intervention, it was impossible to blind physicians to group assignment. Physicians in the experimental group were able to obtain an instant computerized graphical output of HRQoL patient data, which also included data from previous measurement moments so that changes in patients’ HRQoL could be monitored (Fig. 1). Prior to the study, physicians received instructions from a psychologist with expertise in the field of HRQoL measurement on how to interpret this output. First, physicians were shown the questionnaires in order to familiarize them with their content. Second, they were informed that the red line in the graph was the average score of patients with CLD on the Short Form-36 (SF-36) measuring generic HRQoL and that scores under this line were to be considered low. They were also told that the average score of healthy people on this questionnaire was 50. The physicians were instructed to interpret the disease-specific Liver Disease Symptom Index 2.0 (LDSI 2.0) at item level, with scores ranging from 1 (not at all) to 5 (to a large extent). The physicians were asked to use the HRQoL data in all consultations for 1 year. No recommendations for specific responses were given. Instead, they were instructed to use their clinical experience to choose an appropriate treatment. After seeing a participating patient, physicians in both groups completed a checklist about the content of the consultation. Physicians in the control group conducted their consultations as usual.

Fig. 1 Example of the graphical output of patients’ health-related quality of life as presented to physicians in the intervention group. A score of 50 is the average score of a healthy norm population. The dashed line represents the mean score for patients with chronic liver disease.

Patients

Through the random assignment of physicians, patients were indirectly allocated to either group. Patients were initially blinded to the group assignment. All patients participating in the study completed a computerized generic and disease-specific HRQoL questionnaire and the first part of a pen-and-paper questionnaire on patient satisfaction with the consultation before each consultation at the outpatient Department of Hepatology for 1 year. They also completed the second part of the satisfaction questionnaire after the consultation. More specific information on the content of the questionnaires is provided in “Study measures”. To ensure proper questionnaire completion, a researcher was always available to answer questions about the computer and/or questionnaires at the patient’s request.

Study measures

HRQoL

Disease-specific HRQoL: This was assessed by means of the LDSI 2.0, which measures the severity of, and hindrance from, nine symptoms: itch, joint pain, pain in the right upper abdomen, decreased appetite, jaundice, fatigue, depressed mood, worries about the family situation, and fear of complications [24].
Because of time constraints, only the items measuring symptom severity were included in this study (n = 9). The physicians were instructed to interpret the questionnaire at item level, with scores ranging from 1 (not at all) to 5 (to a large extent). For data analysis, a total score, ranging from 9 to 45, was computed by summing the scores of the items. The reliability of the LDSI 2.0 is good (internal consistency α > 0.79), as is its construct validity [30].

Generic HRQoL: This was assessed by means of the Short Form-12 version 1 (SF-12). The SF-12 produces a Physical Component Summary (PCS) and a Mental Component Summary (MCS), representing physical and emotional functioning, respectively. The mean score of the PCS and MCS in the general population is 50 [standard deviation (SD) 10], with higher scores representing better HRQoL. Mean scores and SDs of the PCS and MCS of CLD patients were calculated from a large database (n = 1,175) [29, 31] (PCS: mean 43.2, SD 10.7; MCS: mean 44.4, SD 12.8). These means were used as a reference point (red line) in the graphical representation for physicians so they could easily identify patients scoring below average within the CLD group. The SF-12 has been shown to be reliable between test and retest (MCS r = 0.76, PCS r = 0.89), and median relative validity estimates of 0.67 and 0.97 for the PCS and MCS, respectively, have been found [32].

Patient satisfaction with the consultation

Patients’ satisfaction with the consultation was measured with the QUOTE-Liver, a newly developed questionnaire consisting of 20 items that assesses the discrepancy between patients’ needs/expectations (importance: measured before the consultation) and the actual care that they receive (performance: measured after the consultation). The internal consistency of the overall QUOTE-Liver was excellent (α = 0.90), as was the face validity: all patients (n = 152) in the validation study and three psychologists and a hepatologist agreed that the items of the QUOTE-Liver adequately reflected the most important aspects of care for CLD patients. Construct validity, as measured by the correlation between a visual analog scale (VAS) measuring overall satisfaction and the total score on the QUOTE-Liver, was good (r = 0.69; P < 0.01). Content validity was also good: none of the 152 patients in the validation study suggested new items to be included (Gutteling et al. 2006, unpublished). A reduced version consisting of the nine items ranked by patients as most important and the two liver-disease-specific items was used in our study. Using a formula applied for all QUOTE instruments (10 − importance × performance), a total satisfaction score can be computed ranging from 0 to 10, with 0 meaning not satisfied at all and 10 meaning completely satisfied [33].

Patient management

The effect of the intervention on patient management was measured by means of a checklist that physicians completed after each consultation with a study participant, including the question: “Have you changed your treatment in any way?” and a subquestion: “If so, what have you done?” followed by several options: “Prescription of antidepressants,” “Referral to psychosocial care,” “Altering the frequency of consultations,” and “Other.”

Physicians’ experiences

Experiences of physicians with the experimental condition were assessed through the checklists that they completed after each consultation with a study participant, asking the question: “Did you find the HRQoL information useful?
Why?” with the answering options: “Yes, it provided new information,” “Yes, it saved time,” “Yes...,” “No, the patient is doing well,” “No, I know this patient well enough,” “No, the patient tells me a lot,” and “No....” Also, a semistructured interview was conducted 6 months into the study and at the end of the study. The interview included questions about the effort needed to request HRQoL information, the usefulness of the information, whether the availability of HRQoL information increased the duration of the consultation, and whether participating patients addressed HRQoL issues more often than patients who did not participate. Physicians were also asked whether there were certain subgroups of patients whose HRQoL information they found particularly useful. Opinions of physicians in the control group toward possible future availability of HRQoL information during the consultation were assessed by means of the same semistructured interview at 6 months only.

Statistical analysis

Sample size

A nonclustered power analysis based on a medium effect size (Cohen’s D = 0.50) with a 5% significance level and 80% power indicated that at least 64 patients were needed in each group to detect a statistically significant difference.

Data selection

For patients who were included in both groups because they had consultations with physicians from the control group as well as physicians from the experimental group during the year of the study, data from the condition in which they had most often been seen were included (n = 33). For patients who had been in both conditions equally often (n = 19), all data were excluded. The first measurement moment of all patients (T1) was considered a baseline measure, as no HRQoL data had yet been presented to the physicians.

Data analysis

Differences in the variables gender, diagnosis, disease severity, and age between participants and nonparticipants were assessed by means of χ2 tests or t tests. The same was done for assessing differences between patients in the control group and the intervention group. Scores of participating patients at the measurement moments (T2−Ti) were summarized into one overall score per variable in the study. Univariate analyses of variance were performed in SPSS 11.0. Fixed factors were age, gender, disease severity, presentation of HRQoL data to the clinicians (feedback), and interactions between these variables. Differences in diagnoses between patients in both groups were controlled for by entering a single propensity score for the variable diagnosis as a covariate in the analyses. Propensity scores are specifically designed for situations in which study participants cannot be randomly assigned to groups, so that their characteristics are not balanced among the groups. A propensity score is defined as the conditional probability of assignment to a certain treatment group given a set of observed pretreatment characteristics and is usually estimated by means of a logistic regression analysis [34]. Thereby, the background characteristic(s), in this case diagnosis, is reduced to one single score, the propensity score. We calculated the propensity score by entering the different diagnoses (HBV, HCV, cholestatic liver disease, pretransplantation, posttransplantation, autoimmune hepatitis, and other) as M − 1 dummy variables in a logistic regression analysis. The unstandardized logistic regression weights were then multiplied by the corresponding dummy variables and summed, together with the constant.
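A minimal sketch of this propensity-score construction follows, assuming a DataFrame `df` with a categorical `diagnosis` column and a binary `experimental_group` indicator (both hypothetical names); the resulting score is then entered as a covariate in the analyses of variance described above.

```python
# Hypothetical sketch of the propensity-score step: the diagnosis
# categories enter a logistic regression of group assignment as M - 1
# dummies, and the linear predictor (weights x dummies + constant) is
# kept as a single covariate.
import pandas as pd
import statsmodels.api as sm

def diagnosis_propensity(df: pd.DataFrame) -> pd.Series:
    dummies = pd.get_dummies(df["diagnosis"], drop_first=True, dtype=float)
    x = sm.add_constant(dummies)
    fit = sm.Logit(df["experimental_group"], x).fit(disp=0)
    # Unstandardized weights times the dummy variables, summed together
    # with the constant, exactly as described in the text.
    return x.mul(fit.params, axis=1).sum(axis=1)

# The score then enters each ANOVA/ANCOVA as a covariate, e.g. with
# statsmodels.formula.api.ols: "ldsi_mean ~ propensity + gender*feedback
# + severity*feedback + age_group*feedback".
```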
This score was used in the univariate analysis to adjust for baseline confounding. Univariate analyses of variance were performed for each outcome variable (disease-specific HRQoL and generic HRQoL MCS and PCS) separately. A forward technique was used in which the main effects of the fixed factors were assessed in the first block, and the interactions between feedback of HRQoL data and each of the other fixed factors (age, gender, severity of the disease) were explored in the second block. Differences between the two groups on patient management variables and satisfaction with the consultation were assessed by means of Mann–Whitney tests. Hepatologists’ experiences with the availability of real-time patient HRQoL data were assessed by means of semistructured interviews and checklists. These data were of a descriptive nature and are presented as such.

Results

Characteristics of patients and physicians in the study

Of the 587 patients who agreed to participate in the study, 181 completed the questionnaires more than once. Of these, 19 were included in the experimental and control conditions equally often and were therefore excluded from the analyses. One hundred and sixty-two patients (control group n = 80, experimental group n = 82) were included (Fig. 2). Differences in age, gender, diagnosis, and disease severity between patients in the study and nonrespondents are presented in Table 1. Demographic characteristics of the 162 patients are presented in Table 2. Patients in the control and experimental groups were comparable, except for the variables diagnosis and disease severity (Table 2). These differences between conditions were controlled for in the analyses. All physicians working at the Department of Hepatology (n = 11, ten men) agreed to participate. Their mean age was 39 (range 27–55) years, and their average working experience was 8.7 (range 0–27) years.

Fig. 2 Patients in the study

Table 1 Differences in age, gender, diagnosis, and disease severity between patients in the study and nonrespondents

                            | In the analyses (n = 162) | Excluded from the analyses (n = 165) | P value | Excluded from the study (n = 260) | P value
Age, mean (range)           | 47.5 (20–75)              | 48.6 (20–81)                         | 0.52    | 47.6 (18–80)                      | 0.92
Gender, n (%)               |                           |                                      | 0.24    |                                   | 0.21
  Male                      | 96 (59)                   | 87 (53)                              |         | 136 (52)                          |
  Female                    | 66 (41)                   | 78 (47)                              |         | 124 (48)                          |
Diagnosis, n (%)            |                           |                                      | 0.04    |                                   | 0.00
  Hepatitis B               | 22 (13)                   | 25 (15)                              |         | 49 (19)                           |
  Hepatitis C               | 23 (14)                   | 24 (15)                              |         | 56 (22)                           |
  Cholestatic liver disease | 11 (7)                    | 22 (13)                              |         | 32 (12)                           |
  Pretransplantation        | 11 (7)                    | 7 (4)                                |         | 1 (0)                             |
  Posttransplantation       | 62 (38)                   | 48 (29)                              |         | 55 (21)                           |
  Autoimmune hepatitis      | 12 (8)                    | 11 (7)                               |         | 18 (7)                            |
  Other                     | 21 (13)                   | 28 (17)                              |         | 49 (19)                           |
Disease severity, n (%)     |                           |                                      | 0.43    |                                   | 0.96
  No cirrhosis              | 101 (62)                  | 105 (64)                             |         | 159 (61)                          |
  Compensated cirrhosis     | 42 (26)                   | 45 (27)                              |         | 69 (27)                           |
  Decompensated cirrhosis   | 19 (12)                   | 15 (9)                               |         | 32 (12)                           |

Differences were assessed by means of χ2 tests (except for age: t test).
The reference group for both P values is the group of patients included in the analyses.

Table 2 Characteristics of patients included in the data analysis. Differences were assessed by means of χ2 tests (except for age: t test).

                            | Control group (n = 80) | Experimental group (n = 82) | P value
Gender, n (%)               |                        |                             | 0.08
  Women                     | 38 (48)                | 28 (34)                     |
  Men                       | 42 (52)                | 54 (66)                     |
Age, mean (range)           | 47.5 (21–74)           | 47.6 (20–74)                | 0.98
Diagnosis, n (%)            |                        |                             | 0.00
  Hepatitis B               | 1 (1)                  | 20 (25)                     |
  Hepatitis C               | 7 (9)                  | 16 (19)                     |
  Cholestatic liver disease | 4 (5)                  | 6 (7)                       |
  Pretransplantation        | 5 (6)                  | 3 (4)                       |
  Posttransplantation       | 43 (54)                | 23 (28)                     |
  Autoimmune hepatitis      | 6 (7)                  | 6 (7)                       |
  Other                     | 14 (18)                | 8 (10)                      |
Disease severity, n (%)     |                        |                             | 0.01
  No cirrhosis              | 44 (55)                | 56 (68)                     |
  Compensated cirrhosis     | 16 (20)                | 22 (27)                     |
  Decompensated cirrhosis   | 20 (25)                | 4 (5)                       |

Descriptives

The number of times that patients in the control and experimental groups completed the questionnaires varied between two and 11 (Table 3). Mean scores of patients at T1 and T2−Ti on the outcome variables generic HRQoL and disease-specific HRQoL are presented in Table 4.

Table 3 Questionnaire completion rate of patients in the control and experimental groups

Number of completions | 2  | 3  | 4  | 5 | 6 | 8 | 9 | 11 | Total (n)
Control (n)           | 22 | 29 | 11 | 7 | 7 | 1 | 2 | 1  | 80
Experimental (n)      | 45 | 18 | 9  | 5 | 2 | 2 | 1 | 0  | 82

Table 4 Patients’ adjusted means and 95% confidence intervals at T1 and T2−Ti. The means were obtained from the univariate analyses of variance with fixed factors age, gender, severity of the disease, study group (control or experimental), and interactions between these variables; differences in diagnoses between patients in both groups were controlled for. The significance level reflects the group for which the largest difference on the variable was found. SF-12, Short Form-12; PCS, Physical Component Summary; MCS, Mental Component Summary; LDSI 2.0, Liver Disease Symptom Index 2.0.

                 | T1 Control       | T1 Experimental  | P    | T2−Ti Control    | T2−Ti Experimental | P
Overall          |                  |                  |      |                  |                    |
  SF-12 PCS      | 41.5 (39.0–43.9) | 45.6 (42.0–49.3) | 0.06 | 42.0 (39.6–44.4) | 44.8 (41.4–48.3)   | 0.19
  SF-12 MCS      | 43.4 (40.3–46.5) | 46.0 (41.4–50.6) | 0.35 | 43.8 (41.0–46.5) | 44.8 (40.8–48.8)   | 0.69
  LDSI 2.0       | 21.2 (19.0–23.4) | 18.9 (15.7–22.2) | 0.27 | 20.4 (18.6–22.2) | 18.8 (16.1–21.4)   | 0.31
Male patients    |                  |                  |      |                  |                    |
  SF-12 PCS      | 40.2 (37.1–43.3) | 47.0 (42.9–51.2) | 0.10 | 41.3 (38.2–44.2) | 45.7 (41.7–49.7)   | 0.29
  SF-12 MCS      | 41.6 (37.7–45.4) | 45.6 (40.4–50.8) | 0.49 | 41.2 (37.8–44.6) | 46.7 (42.1–51.2)   | 0.02
  LDSI 2.0       | 22.8 (20.0–25.5) | 18.1 (14.4–21.8) | 0.10 | 21.4 (19.2–23.6) | 18.0 (15.0–21.0)   | 0.14
Female patients  |                  |                  |      |                  |                    |
  SF-12 PCS      | 42.7 (39.2–46.3) | 44.2 (39.8–48.7) |      | 42.8 (39.4–46.2) | 44.0 (39.7–48.2)   |
  SF-12 MCS      | 45.2 (40.7–49.6) | 46.4 (40.8–52.0) |      | 46.3 (42.4–50.2) | 42.9 (37.9–47.8)   |
  LDSI 2.0       | 19.6 (16.4–22.8) | 19.8 (15.8–23.8) |      | 19.4 (16.9–22.0) | 19.5 (16.3–22.7)   |
Older patients   |                  |                  |      |                  |                    |
  SF-12 PCS      | 41.5 (38.4–44.6) | 44.6 (40.7–48.6) | 0.49 | 40.4 (37.4–43.3) | 43.4 (39.9–47.5)   | 0.72
  SF-12 MCS      | 41.5 (37.6–45.4) | 46.3 (41.4–51.3) | 0.26 | 41.2 (37.8–44.7) | 45.9 (41.6–50.3)   | 0.03
  LDSI 2.0       | 22.8 (20.0–25.5) | 19.1 (15.6–22.7) | 0.31 | 22.1 (19.9–24.3) | 18.1 (15.3–21.0)   | 0.04
Younger patients |                  |                  |      |                  |                    |
  SF-12 PCS      | 41.4 (37.9–44.9) | 46.7 (42.2–48.6) |      | 43.6 (40.3–47.0) | 45.9 (41.6–50.3)   |
  SF-12 MCS      | 45.3 (40.9–49.7) | 45.7 (40.0–51.3) |      | 46.3 (42.5–50.2) | 43.6 (38.7–48.6)   |
  LDSI 2.0       | 19.6 (16.5–22.7) | 18.7 (14.7–22.8) |      | 18.8 (16.2–21.3) | 19.4 (16.1–22.6)   |

Effects of the experimental condition on patients’ HRQoL and satisfaction with the consultation

Disease-specific HRQoL

There was no main effect of the experimental condition on disease-specific HRQoL.
There was a statistically significant interaction effect of the variables age and feedback of HRQoL data on the outcome variable disease-specific HRQoL (Table 5): older patients (>48 years of age, as determined by the median split) in the experimental group had significantly lower total scores on the LDSI 2.0 (meanAdj = 18.1, 95% CI: 15.3–21.0) (F = 4.18; P < 0.05), indicating better disease-specific HRQoL, than other patients, especially older patients in the control group (meanAdj = 22.1, 95% CI: 19.9–24.3). This difference between older patients in the experimental and control groups on disease-specific HRQoL is equivalent to a Cohen’s D of 0.51, reflecting a medium-sized difference [35].

Table 5 Interaction effects between age, gender, disease severity, and feedback on the outcome variable disease-specific HRQoL, controlled for diagnosis. Dependent variable: mean total score of the Liver Disease Symptom Index 2.0 (disease-specific HRQoL) for the measurement moments T2...Ti.

Source                       | F value | df | P value | R2
Corrected model              | 2.11    | 10 | 0.03    |
Intercept                    | 599.83  | 1  | 0.00    |
Diagnosis (propensity score) | 1.80    | 1  | 0.18    | 0.08
Gender                       | 0.04    | 1  | 0.85    |
Disease severity             | 3.39    | 2  | 0.04    |
Age                          | 0.84    | 1  | 0.36    |
Feedback                     | 1.05    | 1  | 0.31    |
Gender × Feedback            | 2.17    | 1  | 0.14    | 0.12
Severity × Feedback          | 0.15    | 2  | 0.86    |
Age × Feedback               | 4.18    | 1  | 0.04    |

Generic HRQoL: Mental Component Summary score

No main effect of the experimental condition on mental HRQoL was found. However, a significant interaction effect of the variables age and feedback of HRQoL data was found. Older patients in the experimental group had higher scores on the SF-12 MCS (meanAdj = 45.9, 95% CI: 41.6–50.3) (F = 4.62; P < 0.05), reflecting better HRQoL, than other patients, especially older patients in the control group (meanAdj = 41.2, 95% CI: 37.8–44.7) (Table 6). Furthermore, a significant interaction effect was found for the variables gender and feedback of HRQoL data, with male patients in the experimental group showing higher scores on the SF-12 MCS (meanAdj = 46.7, 95% CI: 42.1–51.2) (F = 6.10; P < 0.05) than other patients, especially male patients in the control group (meanAdj = 41.2, 95% CI: 37.8–44.6) (Table 6).

Table 6 Univariate analysis of variance with the variables age, gender, disease severity, and feedback on the outcome variable mental generic HRQoL, controlled for diagnosis. Dependent variable: mean total score of the SF-12 Mental Component Summary (generic mental HRQoL) for the measurement moments T2...Ti.

Source                       | F value | df | P value | R2
Corrected model              | 1.65    | 10 | 0.10    |
Intercept                    | 1337.05 | 1  | 0.00    |
Diagnosis (propensity score) | 1.34    | 1  | 0.25    | 0.03
Gender                       | 0.14    | 1  | 0.71    |
Disease severity             | 0.40    | 2  | 0.67    |
Age                          | 0.65    | 1  | 0.42    |
Feedback                     | 0.16    | 1  | 0.69    |
Gender × Feedback            | 6.10    | 1  | 0.02    | 0.10
Severity × Feedback          | 0.13    | 2  | 0.88    |
Age × Feedback               | 4.62    | 1  | 0.03    |

Physical Component Summary score

No significant main effect or interaction effects were found for the variables feedback of HRQoL data and age, gender, and disease severity on the SF-12 PCS.

Patients’ satisfaction with the consultation

The scores on patient satisfaction did not differ significantly between the experimental and control groups (z = −1.20, P = 0.23). Also, no interaction effects of age, gender, and/or disease severity were found on this outcome variable.

Effects of the experimental condition on the consultation and on patient management

Physicians in the experimental group requested the HRQoL information of their patients in 92% of consultations and discussed it with their patients in 58% of consultations.
They indicated finding the HRQoL information useful in 45% of consultations, which is broadly in accordance with the percentage of patients in the experimental group scoring below average on the MCS (39%) and PCS (42%). They mostly found the HRQoL information not useful when a patient was doing well. Physicians in the experimental group indicated significantly more often than physicians in the control group that they spent more time than usual discussing psychosocial issues (30.7% vs. 6.6% of consultations, z = −6.65; P < 0.001). Treatment policy was altered significantly more often in the experimental group (11% of consultations vs. 1% of consultations in the control group; z = −3.73, P < 0.001). Most commonly, the frequency of consultations was increased (n = 5). Other alterations concerned prescription of medication (n = 3), increased attention to physical complaints (n = 4), referral to psychosocial care (n = 1) or to an occupational health physician (n = 1), and increased attention to explanations/reassurance (n = 2).

Physicians’ experiences with the availability of HRQoL information in clinical practice

Experiences of physicians in the experimental group at 6 months and at the end of the study did not differ. All physicians in the experimental condition found the HRQoL information useful, except for one older physician who claimed to know his patients very well. They indicated being better able to understand some of their patients through the extra information provided by the questionnaires. These physicians did not perceive requesting the information as an extra effort on their part. Furthermore, they did not think that using the information lengthened their consultations. All physicians in the experimental group indicated that they wanted to continue using the HRQoL information in the future. Physicians in the control group were similarly positive toward the possible availability of HRQoL information during their consultations in the future, on the condition that it would not be time consuming. The HRQoL information was considered particularly useful for patients awaiting liver transplantation, patients with hepatitis C, and nonnative speakers (mostly patients with hepatitis B).

Discussion

Computerized, real-time measurement of HRQoL at our busy outpatient Department of Hepatology and presentation of the results to physicians before each consultation did not show a main effect on patients’ overall HRQoL. However, secondary analyses showed that the HRQoL measurements positively affected the disease-specific HRQoL and generic mental HRQoL of older patients (>48 years of age) with CLD and also the generic mental HRQoL of male CLD patients. The results of our study are among the first to show a beneficial effect of presenting HRQoL data to physicians in clinical practice. Most other studies have failed to show evidence of actual improvement in HRQoL or psychosocial outcomes [9, 17–20]. Of the studies that did find a beneficial effect, one showed a decrease in disease-specific debilitating symptoms [13], and another showed improved emotional functioning [10], which is in line with the findings of our study. It should be noted that, due to the cross-sectional data analyses, a causal relationship between intervention and HRQoL could not be demonstrated. Future studies should address this in further detail. Our study found no differences between patients in the experimental and control groups with regard to satisfaction with the consultation, which is in line with findings from previous studies [9, 36, 37].
The lack of observed differences between the study groups may have been due to high levels of satisfaction, resulting in a ceiling effect. This study was among the first to show a significant difference in patient management between experimental and control groups, with physicians in the experimental group mostly reporting an increase in the frequency of consultations. Our findings were statistically significant, in accordance with the findings of a systematic review [20], and underscore the increasingly acknowledged importance of using HRQoL information to improve physician consultations [38]. However, it should be noted that even though the differences in patient management between the control and experimental groups were statistically significant, the absolute numbers were small. Therefore, the results should be interpreted cautiously, and further studies using more elaborate methods of data collection—for instance, monitoring patients’ medical records or administering more detailed checklists—are recommended. Physicians’ experiences with using HRQoL information during the consultation were generally positive; requesting the information was not considered an extra effort on their part, and they found the information especially useful for certain groups of patients, such as those awaiting liver transplantation, those with hepatitis C, and nonnative speakers. All physicians but one found the information useful for at least some (45%) of their patients. Physicians indicated finding the information least useful when patients were doing well in terms of HRQoL or when they knew the patient well. These generally positive experiences are in accordance with findings from previous studies [9–14], which assessed oncologists’ attitudes toward using HRQoL information in clinical practice. The confirmation of these results in hepatologists suggests that HRQoL information may also be well accepted by physicians treating patients with other chronic conditions. Another result of our study was that when HRQoL information was available, more time was spent discussing psychosocial issues and more treatments were altered. Interview and checklist data were contradictory regarding the duration of consultations when HRQoL information was available. In a previous study in which the duration of consultations was timed, no increase in consultation time was found [14]. Future studies should shed more light on whether the availability of HRQoL information increases the length of consultations in hepatology. The strength of our study lies in the analyses performed, in which benefits for specific groups of liver patients were explored by entering interactions between gender, age, disease severity, and feedback of HRQoL data, rather than solely investigating main effects between the intervention and control groups. Also, this study included patients with CLD rather than patients with cancer or patients from general practice, making it especially relevant to a more general population of patients with a chronic illness. We are aware of several limitations of this study. First, physicians rather than patients were randomly assigned to either the intervention or control group. Randomization is a complicated issue in these kinds of implementation studies, and both methods are subject to limitations. An important advantage of the randomization of physicians is that the control group was not biased toward mentioning HRQoL topics more often than usual.
Future studies using the same design but including more physicians are needed to further explore possible main effects of HRQoL measurement on patients’ overall HRQoL. A second limitation was the high number of nonparticipants. Part of the explanation may lie in the fact that patients were responsible for contacting their physician if they were interested in participating in the study. In addition, the number of non-Dutch-speaking patients visiting the department is relatively large (hepatitis B, for example, is most common among people from North Africa); these patients were also invited to participate but were less likely to respond. The relatively large number of patients who completed the questionnaires only once may be explained by the small window of opportunity to complete the questionnaires before each consultation. In addition, for such implementation endeavors, the cooperation of all staff members is essential, and future research should explore this further. A last limitation of this study was that the checklists used to assess consultation content were not very detailed. This was done on purpose, as longer inventories would have compromised physician participation. However, considering the positive outcomes of this study, it is advisable that future studies consider ways to obtain a more detailed view of how the HRQoL information affects consultation content, for example, by recording consultations. In conclusion, although a main effect of the intervention was not found, this study showed a beneficial effect of the implementation of HRQoL measurement in clinical practice on the HRQoL of older and male patients with CLD and on patient management. Nevertheless, the study had several shortcomings, and further studies are needed to substantiate these findings. Physicians’ experiences with the availability of HRQoL information were positive, especially for patients awaiting liver transplantation, patients with hepatitis C, and nonnative speakers, and they expressed an interest in continued use of HRQoL information. These results advocate continued measurement of HRQoL in clinical hepatology practice. Efforts should be made to include older and male patients, who have been shown to benefit most from such a procedure.
[ "liver", "quality of life", "hepatology", "implementation" ]
[ "P", "P", "P", "P" ]
Purinergic_Signal-3-4-2072922
Adenosine A1 receptor: Functional receptor-receptor interactions in the brain
Over the past decade, many lines of investigation have shown that receptor-mediated signaling exhibits greater diversity than previously appreciated. Signal diversity arises from numerous factors, which include the formation of receptor dimers and interplay between different receptors. Using adenosine A1 receptors as a paradigm of G protein-coupled receptors, this review focuses on how receptor-receptor interactions may contribute to regulation of synaptic transmission within the central nervous system. The interactions with metabotropic dopamine, adenosine A2A, A3, neuropeptide Y, and purinergic P2Y1 receptors will be described in the first part. The second part deals with interactions between A1Rs and ionotropic receptors, especially GABAA, NMDA, and P2X receptors as well as ATP-sensitive K+ channels. Finally, the review will discuss new approaches towards treating neurological disorders.

Introduction

The vertebrate central nervous system (CNS) is characterized by a dynamic interplay between signal transduction molecules and their cellular targets. Modulation of synaptic transmission by metabotropic or ionotropic receptors is an important source of control and dynamic adjustment of synaptic activity. Recent studies have provided new insights into the role of ligand-gated ion channels in modifying synaptic transmission. Along with a growing list of different types of pre- and postsynaptic ionotropic receptors and the cell types that express them, there have also been advances in characterizing the modulatory mechanisms linked to receptor activation. This is important due to the convergence of data from biochemical, molecular, and electrophysiological studies implicating ionotropic receptors in the effects of psychoactive and addictive drugs. G protein-coupled receptors (GPCRs) make up the largest and most diverse family of membrane receptors in the human genome, relaying information on the presence of diverse extracellular stimuli to the cell interior. An estimated 1% of the mammalian genome encodes GPCRs, and about 450 of the approximately 950 predicted human GPCRs are thought to be receptors for endogenous ligands [1]. The manipulation of transmembrane signaling by GPCRs may constitute the most important therapeutic target in medicine; nearly 40% of all current therapeutic drugs target GPCRs [2]. All known GPCRs share a common architecture of seven membrane-spanning helices connected by intracellular and extracellular loops. Drugs acting on GPCRs have been classified as agonists, partial agonists, or antagonists based on a “two-state model of receptor function.” Since experimental evidence indicated that the operation of many GPCRs cannot be explained without considering dimers as their minimal structure, the “two-state dimer receptor model” was developed, based on the communication between the two subunits of the receptor dimer [1, 3, 4]. This model is an extension of the “two-state model of receptor function” but considers dimeric structures able to bind one ligand molecule at the orthosteric center of each monomer. GPCR signaling is subject to extensive negative regulation through receptor desensitization, sequestration, and downregulation, termination of G protein activation by GTPase-activating proteins, and enzymatic degradation of second messengers. Additional protein-protein interactions positively modulate GPCR signaling by influencing ligand binding and specificity. Multiprotein complexes mediate most cellular functions.
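For orientation, the classical two-state scheme that the dimer models mentioned above extend can be made explicit. The following is a textbook-style sketch in our own notation (an illustration under standard assumptions, not an equation taken from the cited papers): the receptor isomerizes between an inactive state R and an active state R*, and a ligand A binds the two states with different affinities.

```latex
% Minimal two-state receptor scheme (requires amsmath/amssymb).
% L = [R*]/[R] is the isomerization constant; A binds R with association
% constant K and R* with alpha*K (alpha > 1 for agonists, alpha < 1 for
% inverse agonists).
\[
  \mathrm{A} + \mathrm{R} \overset{K}{\rightleftharpoons} \mathrm{AR},
  \qquad
  \mathrm{R} \overset{L}{\rightleftharpoons} \mathrm{R}^{*},
  \qquad
  \mathrm{A} + \mathrm{R}^{*} \overset{\alpha K}{\rightleftharpoons} \mathrm{AR}^{*}
\]
\[
  f_{\text{active}}
  = \frac{[\mathrm{R}^{*}] + [\mathrm{AR}^{*}]}{[\mathrm{R}] + [\mathrm{AR}] + [\mathrm{R}^{*}] + [\mathrm{AR}^{*}]}
  = \frac{L\,(1 + \alpha K[\mathrm{A}])}{1 + K[\mathrm{A}] + L\,(1 + \alpha K[\mathrm{A}])}
\]
```

The two-state dimer receptor model generalizes this scheme by letting each protomer of the dimer bind one ligand at its orthosteric center, with occupancy of one site changing the effective constants at the other; this cross-communication between the subunits is what the model uses to account for cooperative binding.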
In neurons, such multiprotein complexes are directly involved in neuronal transmission, which underlies learning, memory, and development. The first publication in this direction came from Hökfelt’s group in 1983, describing how substance P may modulate the high-affinity serotonin (5-HT) binding site in a spinal cord membrane preparation [5]. Over the past decade, the number of receptor-receptor interactions described, and the range of their outcomes, has increased continuously [6]. Recent studies have demonstrated close physical interactions in which activation of one receptor affects the function of the other. Adenosine is an endogenous purine nucleoside that has evolved to modulate many physiological processes. Extracellular adenosine mostly originates from release of intracellular adenosine and from release and extracellular breakdown of cAMP and ATP by ecto-5′-nucleotidase and phosphodiesterase [7]. Cellular signaling by adenosine occurs through four known adenosine receptor subtypes (A1Rs, A2ARs, A2BRs, and A3Rs), all of which are seven-transmembrane-spanning GPCRs. Of the four known adenosine receptors, A1Rs and A2ARs are primarily responsible for the central effects of adenosine, especially in modulating synaptic transmission [8]. Adenosine can act on A1Rs to depress transmitter release and neuronal sensitivity to the transmitter [9, 10]. As a result, A1Rs are important in the regulation of synaptic plasticity, playing a role in determining the amplitude of long-term potentiation or long-term depression [11]. There are numerous reviews that describe the regulation of brain adenosine levels, adenosine receptors, their cellular and subcellular localization, signaling pathways, and function in the brain under physiological and pathophysiological conditions, as well as selective receptor agonists and antagonists. Using A1Rs as a paradigm of GPCRs, this review focuses on how receptor-receptor interactions contribute to regulatory processes within the central nervous system. Considering the various types of receptors, one may expect to find three principal paths of receptor interaction: (1) interactions between ionotropic receptors, (2) interactions between a metabotropic receptor and an ionotropic receptor, and (3) interactions between metabotropic receptors. The examples mentioned below stem from the second and third types of interaction. Interactions with metabotropic dopamine receptors as well as A2A, A3, NPY, and P2Y1 receptors will be described in the first part. The second part deals with interactions between A1Rs and ionotropic receptors, especially the GABAA, NMDA, and P2X receptors as well as ATP-sensitive K+ channels. Finally, new approaches for neurological disorders will be discussed.

Functional interactions with metabotropic receptors

Two forms of GPCR classification exist. There is the historical division into three main families: (1) the rhodopsin-like family, which includes adenosine receptors, (2) the secretin-like family, and (3) the metabotropic glutamate receptor-like family. The families share some basic similarities—the seven membrane-spanning domains, the intracellularly located C terminus, and the extracellularly residing N terminus. Differences between the families arise in the length of the intracellular and extracellular termini and amino acid sequences, disulfide bridge linking, and conserved domains. Alternatively, five different groups can be classified by applying phylogenetic analyses: the GRAFS system distinguishes between glutamate, rhodopsin, adhesion, frizzled/taste, and secretin-like GPCRs [12].
Agonist binding to the receptor results in coupling to heterotrimeric G proteins and regulates a variety of cell responses. In brief, G protein-bound GDP is exchanged for GTP, and the heterotrimer dissociates into the α subunit and the βγ dimer. The resulting products activate or inhibit effectors independently of each other. Currently, 16 different genes encode G protein α subunits, five genes encode β subunits, and 14 genes encode γ subunits [13]. The α subunits can be categorized into four basic groups: the stimulatory Gαs family couples to adenylate cyclase and increases cAMP levels, whereas the inhibitory Gαi/o family acts in the opposite way; the Gαq/11 family activates phospholipase Cβ (PLCβ); and, lastly, the Gα12/13 family regulates Rho proteins. Gβγ dimers are capable of triggering effects on inward rectifier K+ channels (GIRK1–4), voltage-dependent Ca2+ channels (VDCC), phospholipase A2 (PLA2), PLCβ, and the Na+/H+ exchanger (NHE1). Thus, it is not surprising that GPCRs, with their remarkable potential to affect signaling events, are such interesting candidates in current drug research. A single GPCR possesses the potential to activate more than one signaling pathway [12]; for example, A1R activation includes coupling to PLC and an increase in IP3 levels [14, 15]. Furthermore, homodimerization and heterodimerization are common modes of interaction and have been described several times for A1Rs and A2ARs [16–18]. In addition, functional interactions at the A1R without receptor assembly have already been revealed [19, 20], or are currently being elucidated. The next paragraphs will deal with a few selected examples of this almost limitless topic.

Relationship between A1Rs and A3Rs

The A3R was the latest receptor subtype of the adenosine receptor family to be identified [21], and its functional role is still controversially discussed. Several findings indicate neuroprotective as well as neurotoxic actions, depending on the experimental approach [22–29]. A3Rs couple to inhibition of adenylyl cyclase as well as to activation of PLC and elevation of inositol triphosphate levels [30, 31]. Furthermore, an increase in intracellular Ca2+ levels due to release from intracellular stores and Ca2+ influx has been described [32, 33]. One interesting example of the A3Rs’ functional role is their involvement in acute neurotoxic situations and their interplay with A1Rs. Dunwiddie et al. [34] reported on the potential of A3Rs to modify responses via A1Rs in the hippocampus. The activation of hippocampal A3Rs induced a desensitization of A1Rs upon combined superfusion of Cl-IB-MECA and adenosine. This phenomenon was thought to reduce the protective effects of endogenous adenosine, owing to the resulting loss of A1R sensitivity. Further investigations on pyramidal cells of the rat cingulate cortex did not confirm Dunwiddie et al.’s assumption [35]. In this brain area, A1Rs and A3Rs did not show any interaction; the receptor subtypes were unable to affect each other. The discrepancy was attributed to a genetic phenomenon, such as alternative splicing of the rat A3R transcript causing distinct pharmacological and functional properties in the brain. Furthermore, Hentschel et al. [36] demonstrated the involvement of A3Rs in the inhibition of excitatory neurotransmission during hypoxic conditions, indicating a neuroprotective action of endogenously released adenosine on A3Rs in addition to A1Rs. Lastly, Lopes et al.
[37] attempted to define the possible role of A3Rs in the rat hippocampus using experiments similar to those of Dunwiddie et al., in non-stressful and stressful situations, with particular attention to whether A3Rs control A1Rs. These data suggested that no interaction between the two receptor subtypes exists and confirmed that A3Rs do not affect synaptic transmission upon superfusion with the A3R agonist Cl-IB-MECA or the A3R antagonist MRS 1191. The authors pointed out that Cl-IB-MECA binds to A1Rs even at low nanomolar concentrations. Thus, settling the question of an interaction between A1Rs and A3Rs will have to await more reliable ligands.

Antagonistic interaction between A1Rs and A2ARs

A2ARs are widely distributed in the CNS, but local and subcellular differences in allocation exist. They show high levels in all subregions of the striatum and in the globus pallidus. A2ARs are also expressed in neurons of the neocortex and limbic cortex, but at a density a twentieth of that found in the basal ganglia [38]. Colocalization of A1Rs and A2ARs was confirmed for glutamatergic nerve terminals in the hippocampus [39]. In the striatum, A1R/A2AR heteromers were found at synapses with spines of medium spiny neurons, integrated in the presynaptic membrane of glutamatergic terminals that represent the cortical-limbic-thalamic input [18]. A1Rs and A2ARs modulate excitatory synaptic transmission, albeit in an opposite manner: A1R activation inhibited glutamatergic synaptic transmission mainly through presynaptic inhibition of glutamate release, while A2ARs have been shown to facilitate glutamatergic synaptic transmission [40–42]. At first sight, stimulating A1Rs and inhibiting A2ARs may have a neuroprotective influence on the mature CNS. However, problems arise due to long-term desensitization of A1Rs. A2ARs do not upregulate after antagonist administration, but have a low abundance in hippocampal and cortical areas compared with A1Rs [40, 43, 44]. A1Rs and A2ARs cannot be regarded in isolation from one another, since cross talk between the subtypes has been described several times [16, 17, 45–47]. A2AR activation by agonists caused A1R desensitization, resulting in a decreased binding affinity for CPA in the hippocampus of young adult rats. This control of A1Rs by A2ARs was mediated by protein kinase C in a cAMP-independent manner. A2AR activation was seen to play a role in fine-tuning A1Rs by attenuating the tonic effect of presynaptic A1Rs located on glutamatergic nerve terminals [46, 47]. In the striatal system, A1R/A2AR heteromers became prominent for showing an antagonistic reciprocal interaction [18]. As in the hippocampus, A2AR stimulation decreased the affinity of A1Rs for agonists. The A1R/A2AR heteromer allows adenosine to perform a detailed modulation of glutamate release [16, 48]. Regarding A1Rs and A2ARs, basal conditions generate a low tone of endogenous adenosine and cause A1R activation, in contrast to situations of increased adenosine, in which A2AR activation becomes dominant. When adenosine concentrations rise, as during anoxia, the duration of stimulation also appears to be important in regulating A2AR activity, meaning that A2ARs become “active” under prolonged stimulation [49]. Finally, activation of the A1R/A2AR heteromer contributes to A2AR signaling when adenosine levels are elevated and may provide a mechanism to facilitate plastic changes in the excitatory synapse [18].
Interactions between adenosine and the dopaminergic system
Dopamine is an important transmitter in the basal ganglia, is noted for influencing motor activity, and plays an important role in Parkinson's disease. Adenosine-dopamine interactions are complex and cannot be limited to functional considerations of A1Rs. Intramembrane heteromeric receptor-receptor interactions and the involvement of A2ARs in influencing dopaminergic signaling have to be mentioned because of their implications for the treatment of Parkinson's disease. Ginés et al. [50] described the formation of functionally interacting heteromeric complexes between dopamine D1 receptors (D1Rs) and A1Rs in mouse fibroblast Ltk− cells cotransfected with the respective cDNAs. Coaggregation occurred when cells were pretreated with the A1R agonist R-PIA, but was decreased by combined pretreatment with R-PIA and SKF-38393, a D1R agonist. Furthermore, the D1R agonist-induced cAMP accumulation was reduced by combined pretreatment with the D1R and A1R agonists, but remained unaffected when either was given alone. The results confirmed an antagonistic interaction between A1Rs and D1Rs that had already been observed by Ferré et al. [51] in behavioral studies using reserpinized mice and rabbits. In vivo and in vitro data on adenosine-dopamine interactions were mostly obtained from investigations in the basal ganglia and limbic regions [52, 53], owing to the high abundance of A1Rs, A2ARs, D1Rs, and D2Rs in these areas and their involvement in the pathology of Parkinson's disease. The antagonistic interaction upon combined receptor activation seems to distinguish between adenosine and dopamine receptor subtypes. While A1Rs communicate mainly with the D1R subtype in strionigral-strioentopeduncular neurons, A2AR and D2R interaction occurs in striopallidal neurons. Studies on mice and monkeys pretreated with MPTP suggest that some degree of dopaminergic activity is needed to obtain the motor activation induced by adenosine antagonists. Furthermore, blockade of dopaminergic neurotransmission counteracts this antagonist-induced effect [54]. Sufficient endogenous adenosine is present interstitially in the substantia nigra pars reticulata to control dopaminergic effects, and the effects of adenosine are absent when the dopaminergic influence is suppressed [53]. Thus, it seems that monotherapy with A2AR antagonists may be useful only in the early stages of Parkinson's disease, but could support therapeutic treatment with dopamine agonists in advanced stages. A promising approach using these therapeutic strategies can be seen in istradefylline, an A2AR antagonist that has since successfully passed clinical trials [55]. However, A1R blockade may also contribute to an increased dopamine release, although this effect seems to be without clinical relevance.
Relationship between A1Rs and NPY
Neuropeptide Y (NPY) is one of the most abundant neuropeptides and exerts various functions through at least six GPCR subtypes (Y1Rs–Y5Rs, y6Rs). Immunohistochemical investigations revealed the presence of the Y1R and Y5R subtypes in the rat frontal cortex [56–58]. Activation of NPY receptors results in an inhibition of excitatory synaptic transmission, and a presynaptic site of action on cortical neurons has been postulated [59]. NPY receptors couple to pertussis toxin-sensitive G proteins, which inhibit adenylyl cyclase and decrease cAMP levels. Inhibitory and facilitating effects on K+ and Ca2+ mobilization have also been observed [60].
Receptor-receptor interactions involving Y1Rs have already been described, such as the antagonistic interaction with galanin receptors in the hypothalamus of the rat and its functional relevance for food intake. In contrast, a facilitatory interaction between the two receptors exists in the amygdala, which may be of relevance for fear-related behavior [61]. In the CNS, A1Rs and NPY receptors share some similarities in distribution. Both A1Rs and Y1Rs are located on neurons of the prefrontal cortex, and their activation inhibits glutamatergic neurotransmission [62]. This is evidence for a potential interaction between A1Rs and Y1Rs that may modulate long-term desensitization of A1Rs during pathophysiological situations. To investigate possible functional interactions, postsynaptic potentials (PSPs) were generated by electrical field stimulation on pyramidal neurons of layer V in the rat cingulate cortex, as described by Brand et al. [35] and Hentschel et al. [36]. The Y1R agonist [F7,P34]pNPY inhibited the amplitude of PSPs. The inhibitory effect was reversible and reproducible, indicating that no desensitization occurred (Fig. 1a). An additional decrease in PSP amplitude was observed when NPY was superfused in combination with the A1R agonist CPA (Fig. 1b).
Fig. 1 Effect of the selective Y1 agonist [F7,P34]pNPY as well as neuropeptide Y (NPY), alone and in combination with the selective A1R agonist N6-cyclopentyladenosine (CPA), on the amplitude of postsynaptic potentials (PSPs) evoked by electrical field stimulation (0.2 Hz, 2 ms) in layer I of the rat cingulate cortex (Sichardt et al., unpublished results). Intracellular recordings were performed in rat brain slices using glass microelectrodes placed in pyramidal cells of layer V. a [F7,P34]pNPY superfused for 5 min reversibly inhibits the PSPs by 34.6 ± 8%. The Y1 antagonist BIBP3226 itself reduces the PSP by 23 ± 10%, whereas [F7,P34]pNPY has no inhibitory effect in the presence of the antagonist. b NPY depresses PSPs by 28.6 ± 0.7%. The combined superfusion of NPY and CPA resulted in an additional depression of the PSPs by 48.1 ± 5%. The depressant effects of the two agonists were reversible during washout. Data are expressed as mean ± SEM from n = 3 independent experiments. *p < 0.05 significant vs control; #p < 0.05 significant vs NPY alone
The additional inhibition induced by CPA was in the same range as that found with CPA alone (48.1 ± 5% vs 55 ± 3%). NPY still inhibited PSPs after the CPA-mediated inhibitory effects had been blocked by DPCPX; no significant differences existed before and after blockade of A1Rs (Fig. 2). The results suggest that no interaction between A1Rs and Y1Rs exists. Each neuromodulator contributes to the inhibitory regulation of excitatory neurotransmission. Given the desensitization of A1Rs but not of Y1Rs, this may be important under pathophysiological conditions with increased adenosine concentrations in the synaptic cleft.
Fig. 2 Effect of neuropeptide Y (NPY) on the PSPs, alone and in combination with the selective A1R agonist N6-cyclopentyladenosine (CPA), after preincubation with the selective A1R antagonist 1,3-dipropyl-8-cyclopentylxanthine (DPCPX) (Sichardt et al., unpublished results). The experimental procedure was similar to that shown in Fig. 1. In the presence of NPY, PSPs were decreased by 35.4 ± 7%. Combined superfusion of NPY and CPA after preincubation with DPCPX decreased PSPs by 32.7 ± 8%. The depression was reversible during washout. Data are expressed as mean ± SEM from n = 5 independent experiments.
*p < 0.05 significant vs control; #p < 0.05 significant vs superfusion of CPA and DPCPX
Interaction of A1Rs and P2Y1Rs
P2Y1Rs have been cloned and characterized in several species, including human and rat, and their mRNA has been detected in various regions of the brain. The receptor subtype can be activated by ATP, but ADP, a degradation product of ATP, is a more potent endogenous agonist. Cellular signaling differs between A1Rs and P2Y1Rs, since A1Rs couple to Gi/o and P2Y1Rs to Gq/G11. P2Y1Rs can therefore be assumed to exert stimulatory effects in cells. P2Y1R signaling occurs in non-neuronal and non-muscular cell types, as well as on neurons in the CNS [63], where a colocalization of A1Rs and P2Y1Rs was demonstrated immunohistochemically in rat brain cortex, hippocampus, and cerebellum [64]. In 1996, Ikeuchi et al. reported the activation of an undefined P2YR by adenosine in patch clamp and calcium imaging experiments on hippocampal neurons [65]. Furthermore, extensive heteromerization experiments have been conducted on cotransfected HEK293 cells using immunoprecipitation, Western blotting, and bioluminescence resonance energy transfer (BRET). Receptor binding experiments in combination with cAMP assays have also been described [64, 66–70]. These studies confirmed heteromerization associated with changes in agonist binding and signaling compared to the properties of the monomers. Binding of selective A1R agonists was decreased, while A1R antagonist binding remained unaffected. Interestingly, ADP binding was blocked by DPCPX but not by the P2Y1R antagonist, suggesting an altered binding pocket in the A1R/P2Y1R complex. The G protein coupling was sensitive to pertussis toxin, revealing a Gi/o status for the heteromer. Although colocalization of A1Rs and P2Y1Rs in several brain areas has been demonstrated, there is still a lack of functional investigations. Nevertheless, the physiological relevance of this interaction has been postulated as follows: costorage and release of ATP with neurotransmitters, such as glutamate or noradrenaline, occurs in the CNS [71–73]. ADP, the degradation product of ATP, acts as an agonist at the A1R/P2Y1R complex and thereby contributes to the inhibitory modulation of excitatory synaptic transmission otherwise mediated by adenosine acting on A1Rs. This interaction can be regarded as an additional mechanism for influencing and fine-tuning synaptic neurotransmission.
Functional interaction with ionotropic receptors
Neuronal excitability is regulated by voltage- and ligand-gated ion channels. Ionotropic receptors, also referred to as ligand-gated ion channels (LGICs), are a group of intrinsic transmembrane ion channels that open and close in response to the binding of a chemical messenger, as opposed to voltage-gated or stretch-activated ion channels. These channels are regulated by a ligand and are usually highly selective for one or more ions such as Na+, K+, Ca2+, or Cl−. Located at synapses, these receptors convert the chemical signal of presynaptically released neurotransmitter directly and very quickly into a postsynaptic electrical signal. Many LGICs are additionally modulated by allosteric ligands, channel blockers, ions, or the membrane potential. The nicotinic acetylcholine receptor serves as the prototypical LGIC [74]; it consists of a pentamer of protein subunits with two binding sites, which, when bound, alter the receptor's configuration and cause an internal pore to open.
This pore, permeable to Na+ ions, allows them to flow down their electrochemical gradient into the cell. With a sufficient number of channels opening at once, the intracellular Na+ concentration rises to the point at which the positive charge within the cell is sufficient to depolarize the membrane, and an action potential is initiated [75]. Many important ion channels are ligand-gated, and they show a high degree of homology at the genetic level. The LGICs are classified into three superfamilies. The first, the Cys-loop receptor family, is subdivided into the anionic GABAA and glycine receptors on the one hand, and the cationic 5-HT3 serotonin and nicotinic acetylcholine receptors on the other. The second group, the ionotropic glutamate receptors, consists of NMDA, kainate, and AMPA receptors. The third group covers the ATP-gated channels, the P2X receptors [76]. Adenosine is known to inhibit glutamatergic neurotransmission by activation of presynaptic A1Rs [35]. This is probably due to a reduction of calcium influx, possibly by modulation of both P/Q- and N-type presynaptic voltage-dependent calcium channels, which in turn controls transmitter release [77]. Furthermore, A1Rs have long been known to mediate neuroprotection by reducing excitatory effects at the postsynaptic level [10, 78, 79]. In addition to its direct presynaptic and postsynaptic actions on neurons, A1R interaction with NMDA [80–82, 84], GABAA [85–88], and P2X receptors [80, 89–91] contributes to fine-tuning neuromodulation via adenosine.
Interaction between A1Rs and NMDA receptors
Glutamate is the major excitatory neurotransmitter in the mammalian central nervous system. In most brain areas, glutamate mediates fast synaptic transmission by activating ionotropic receptors of the AMPA, kainate, and NMDA type. Additionally, NMDA receptors play a critical role in synaptic plasticity, synaptic development, and neurotoxicity. Recent studies suggest that some NMDA-mediated actions are altered or mediated by adenosine. Synaptic currents mediated by glutamate in rat substantia nigra pars reticulata neurons were reduced by adenosine acting via A1Rs. The inhibitory action was not mediated at a postsynaptic site, since adenosine did not block currents evoked by local application of glutamate [86]. NMDA is known to increase the extracellular level of adenosine via bidirectional adenosine transporters or from released adenine nucleotides degraded by a chain of ectonucleotidases [92, 93]. On the other hand, endogenous adenosine present in the extracellular fluid of hippocampal slices tonically inhibits NMDA receptor-mediated dendritic spikes as well as AMPA/kainate receptor-mediated synchronized EPSPs by activation of A1Rs in CA1 pyramidal cells [81]. In line with these results, it has been shown that tonic activation of A1Rs by ambient adenosine depressed field potentials in the striatum. The effect of adenosine in the striatum [84] or hippocampus [94] has not been found in A1R knockout mice, which clearly demonstrates the involvement of A1Rs. The involvement of A1Rs was also supported by experiments using the selective receptor ligand 2-CA. In isolated rat hippocampal pyramidal cells [95] and in bipolar cells of the retina [96], 2-CA decreased inward currents induced by iontophoretic application of NMDA. Another interesting interaction concerns NMDA preconditioning to protect against glutamate neurotoxicity.
The A1R antagonist 8-CPT has been shown to prevent the neuroprotection evoked by NMDA preconditioning against glutamate-induced cellular damage in cerebellar granule cells [83]. In this study, the functionality of A1Rs was not affected by NMDA preconditioning, but this treatment promoted A2AR desensitization in concert with A1R activation [83]. These results are in line with other studies indicating that adenosine downregulates excitatory and inhibitory synaptic transmission in several brain areas through activation of A1Rs and A2ARs [97, 98]. Furthermore, activation of A1Rs mediates the reversal of long-term potentiation (LTP) produced by brief application of NMDA in hippocampal CA1 neurons [99]. Taken together, there are several ways in which adenosine may interact with NMDA-induced cellular events. Adenosine can affect glutamatergic transmission via both presynaptic and postsynaptic mechanisms by activating A1Rs [78]. NMDA receptors and A1Rs interact to downregulate glutamate release presynaptically in pyramidal cells of the cingulate cortex [35], neurons of the hippocampus [100], and striatal neurons [84]. Another putative mechanism is related to postsynaptic A1Rs: adenosine elevates the threshold for opening NMDA receptor-operated channels by antagonizing membrane depolarization [101].
Interaction between A1Rs and GABAA receptors
Fast synaptic inhibition in the brain and spinal cord is largely mediated by GABAA receptors, which are also targeted by drugs such as benzodiazepines, barbiturates, neurosteroids, and some anesthetics. Modulation of their function has important consequences for neuronal excitation [102]. One accepted means of modifying their efficacy is a functional interaction with adenosine. Adenosine may act either on presynaptic GABA release from interneurons or on postsynaptic GABAA receptors in projection neurons. The site of action may be studied electrophysiologically by inducing fast inhibitory postsynaptic potentials (IPSPs) or by applying GABA directly onto the cell. Adenosine and the selective A1R agonist CHA reduced the amplitude of the fast IPSP in lateral amygdala slice preparations. The effect of CHA was blocked by DPCPX, indicating the involvement of A1Rs. Additionally, adenosine did not block currents evoked by local application of GABA [85]. Thus, the modulatory effect of adenosine on GABAergic neurotransmission appears to take place at a presynaptic site, by inhibiting GABA release from nerve terminals [85, 86]. The assumption that activation of A1Rs can presynaptically modulate inhibitory postsynaptic responses agrees with findings in several brain areas, such as the thalamus [87], the suprachiasmatic and arcuate nuclei [88], and the substantia nigra pars compacta [86]. There is some evidence that activation of A1Rs is also involved in GABAA receptor downregulation, implying a facilitation of neurotransmission at a postsynaptic site. GABA, but not adenosine, evoked an inward current in rat sacral dorsal commissural neurons (SDCN). The GABA-induced current was significantly reduced by adenosine. CHA and DPCPX, but not selective ligands for A2ARs, mimicked or blocked the inhibitory effect of adenosine, respectively [103]. Adenosine and muscimol induced a concentration-dependent reduction in the amplitude of population potentials in hippocampal slices. Additionally, adenosine potentiated the ability of muscimol to inhibit evoked potentials, and these effects were blocked by the A1R-selective antagonist 8-CPT.
The effects of adenosine as well as muscimol were reduced by the chloride channel blocker DIDS, indicating the ability of adenosine to regulate the GABAA chloride channel by activation of A1Rs [104]. Sebastiao's group studied the mechanisms by which GABA modulates adenosine-mediated effects and found that endogenous GABA, acting through GABAA receptors, exerts an inhibitory effect on the predominant adenosine-mediated action in the hippocampus, i.e., the A1R-mediated inhibition of synaptic transmission [19]. Furthermore, this study showed that blockade of GABAergic inhibition induced the release of NO, which was able to potentiate the inhibitory action of adenosine. The authors therefore suggested that the modulation of the A1R-mediated response by activation of GABAA receptors occurs indirectly via NO [19]. Activation of GABAA receptors is effective in limiting neuronal ischemic damage [105], and endogenous adenosine that arises during hypoxia acts neuroprotectively, partly by activating A1Rs [36]. Therefore, the contributions and potential interactions of GABA and adenosine as modulators of synaptic transmission during hypoxia have been investigated. Activation of A1Rs inhibits the release of GABA from the ischemic cerebral cortex in vivo [106]. In contrast, administration of an A1R agonist in the hippocampus failed to affect the release of GABA during ischemia [107]. In the light of these controversial results, the role of the two neuromodulators during hypoxia was investigated in the CA1 area of rat hippocampal slices using selective A1R antagonists [108]. Indeed, activation of A1Rs and GABAA receptors is partly involved in the inhibition of synaptic transmission during hypoxia, and the action of GABA becomes evident when A1Rs are blocked. Given the desensitization of A1Rs during hypoxia [109, 110], it may be assumed that GABAA-mediated inhibition of synaptic transmission becomes evident when the A1R is desensitized or downregulated [108]. Comodulation by A1Rs and GABAA receptors was also suggested in acute cerebellar ethanol-induced ataxia. Using GABAA receptor and A1R agonists and antagonists, respectively, a functional similarity between GABAA receptors and A1Rs has been shown, even though the two receptor types are known to couple to different signaling systems [111]. This provides conclusive evidence that A1Rs and GABAA receptors both play a comodulatory role in ethanol-induced cerebellar ataxia without any direct interaction.
Functional interaction between A1Rs and P2X receptors
P2X receptors are ligand-gated ion channel receptors; seven subunits (P2X1–P2X7) have been identified [63]. The P2X receptor subunits show many differences in localization, pharmacology, kinetics, and signaling pathways [112, 113]. The P2X1 to P2X6 subunits comprise 379–472 amino acids, with a predicted tertiary structure of two transmembrane segments, a large extracellular loop, and intracellular C and N termini. The P2X2, P2X4, and P2X4/P2X6 receptors appear to be the predominant neuronal types [91]. These subunits may occur as homooligomers or as heterooligomeric assemblies of more than one subunit. The P2X7 receptor has a similar structure, but with a much larger intracellular C terminus, which contrasts strikingly with all other known ligand-gated ionotropic receptors [114]. P2X7 subunits do not form heterooligomeric assemblies, but are involved in mediating apoptosis and necrosis in glial cells and possibly neurons.
Interactions between adenosine receptor-mediated and P2 receptor-mediated effects have been shown to occur in neuronal and non-neuronal cells [80]. Both adenosine and ATP induce astroglial cell proliferation and the formation of reactive astrocytes [89]. In the hippocampus, adenosine and ATP are released upon stimulation and are potent inhibitors of neuronal transmission [115, 116]. It should be pointed out that the interpretation of effects induced by either is difficult, since ATP is degraded enzymatically to adenosine [38]. Adenosine is formed by extracellular catabolism of released ATP via the ectonucleotidase pathway [90, 117]. The role of the ectonucleotidases in forming adenosine is difficult to study, since this system is extremely efficient and blocking an enzyme system is difficult. The experimental paradigm used by Cunha et al. [118] demonstrates that ATP has to be converted extracellularly into adenosine to exert its inhibitory effects on hippocampal synaptic transmission. The inhibitory effect of ATP was not modified by the P2 receptor antagonist suramin, but was attenuated by an ecto-5′-nucleotidase inhibitor and was nearly prevented by the adenosine A1R antagonist DPCPX, whereas dipyridamole, an inhibitor of adenosine uptake, potentiated the inhibitory effect of ATP [118]. These results offer evidence for localized catabolism of adenine nucleotides followed by substrate channeling to A1Rs. This localized catabolism may mask the adenosine-mediated ATP effect [119]. Recently, it was demonstrated that exogenous application of ATP or ATPγS reduced hippocampal neurotransmission. The inhibitory effect was blocked by the selective A1R antagonist DPCPX and was potentiated by different ecto-ATPase inhibitors [120]. These results suggest that the synaptic inhibition may include an inhibitory purinergic component of ATP itself, in addition to that arising from its degradation to adenosine.
Interaction with neuronal ATP-sensitive K+ channels
ATP-sensitive K+ channels (KATP) are widely expressed in the cytoplasmic membrane of neurons and couple cell metabolism to excitability [121]. These channels are regulated by the intracellular ATP/ADP ratio [122] and modulated by many endogenous mediators, including adenosine acting via A1Rs. Activation of A1Rs inhibited the activity of inspiratory neurons in the brainstem of neonatal mice by opening KATP channels [123]. A1R stimulation promotes KATP activity in principal dopamine neurons of the substantia nigra pars compacta [124] and in the hippocampus [125]. In contrast, one recent study demonstrated that adenosine induces internalization of KATP channels, resulting in a decrease in the KATP-mediated response in the hippocampus [126]. This discrepancy might be due to the additional activation of A2ARs by adenosine, these receptors being present in the hippocampus but not in the substantia nigra pars compacta [127]. In addition to the inhibitory effect at the presynaptic site, activation of A1Rs acts as an inhibitory modulator of electrical activity at the postsynaptic site, an effect that has been attributed to enhancement of KATP activity. The modulating effect on the membrane potential may differ between brain regions, as neuronal KATP channels are heterogeneous across neuron types.
A1R interactions—new approaches for neurological disorders
By activating its receptors, adenosine regulates many pathophysiological processes, particularly in excitable tissues of the brain (Fig. 3).
Its widespread functions in the body include regulation of seizure susceptibility [128, 129], neuroprotection [40], regulation of pain perception [130], sleep induction [131], and involvement in Parkinson's disease [132]. There is increasing evidence that the functional interaction of A1Rs with other neuronal receptors contributes to the fine-tuning of synaptic transmission, and A1R agonists may represent a useful therapeutic approach for the treatment of some neurological disorders by regulating homeostasis in transmitter systems. However, the use of A1R agonists has not proved clinically useful, mainly because of cardiovascular side effects as well as low brain permeability. Pioneering experimental approaches have been evaluated using focal drug delivery in epilepsy models. One experimental study used intraventricular implantation of an adenosine-releasing synthetic polymer [128]. In a later study, Guttinger et al. [133] used encapsulated C2C12 myoblasts that had been engineered to release adenosine by disruption of their adenosine kinase gene. The local delivery of adenosine by implanted cells appears to be a promising strategy not only for controlling seizure activity but also for other neurodegenerative diseases with dysregulated synaptic neurotransmission.
Concluding remarks
As a consequence of its ubiquitous distribution, and because of its linkage to the cellular energy pool, adenosine has evolved into an important messenger in extracellular signaling. Modifications in extracellular adenosine levels, with subsequent alterations in the activation of its receptors, interfere with the action of other receptor systems. Figure 4 summarizes possible interactions of A1Rs with metabotropic receptors, Fig. 5 shows the interactions of A1Rs with ionotropic receptors as well as with KATP channels, and Fig. 3 shows the neurological disorders in which A1R interactions may play a role.
Fig. 3 Neurological disorders where A1R interactions may play a role
Fig. 4 Schematic representation of possible interactions of A1Rs with metabotropic receptors. Heteromerization between presynaptically located A1Rs and A2ARs, D1Rs, and P2Y1Rs changes the way adenosine influences glutamate release. Cross talk between A1Rs and P2Y1Rs contributes mainly to triggering fast attenuation of transmitter release, with ADP acting as a ligand at the heteromer. During elevated adenosine levels, A2AR signaling becomes dominant in the A1R/A2AR complex, providing enhancement of glutamate release. The A1R/D1R heteromer requires both adenosine and dopamine for activation and inhibits transmitter release
Fig. 5 Schematic representation of possible interactions of A1Rs with ionotropic receptors contributing to the fine-tuning of neurotransmission. Adenosine acting via presynaptic A1Rs may attenuate the influx of Ca2+ through voltage-dependent calcium channels and thus decrease the release of glutamate and GABA, which inhibits or facilitates the activation of postsynaptically located NMDA receptors, respectively. Adenosine acting through postsynaptic A1Rs may activate KATP channels, which leads to hyperpolarization of postsynaptic neurons and directly inhibits the activity of NMDA and GABAA receptors
There is evidence that various regulatory mechanisms exist, as well as multiple mechanisms that act independently of each other on the same cell, depending on the brain region and cell type. This overlap and redundancy ensure transmitter homeostasis under pathophysiological conditions within a specific time window.
The function of adenosine receptors in the regulation of synaptic transmission is complex. The key receptor in the regulation of neuronal transmission may be the A2AR, whereas the interaction of the A1R with metabotropic and ionotropic receptors, as well as with KATP channels, serves as fine-tuning to inhibit synaptic transmission, as suggested by Sebastiao and Ribeiro [80]. A1Rs may play a nonessential role in normal physiology, as demonstrated in mice lacking A1Rs [134]. However, they play an important protective role under pathophysiological conditions, especially during hypoxia. Their activation initiates a fast inhibition of glutamatergic neurotransmission, and the receptor interactions described here may contribute to its maintenance or may support the A1R-mediated effects. Most of our knowledge of receptor-receptor interactions involving the A1R comes from experiments on cell cultures and slice preparations or, to a lesser extent, from in vivo experiments, in which regulation can be studied in principle or new drug targets can be characterized. These findings may contribute to a better understanding of disturbances in transmitter homeostasis. As our understanding of the complexity of receptor signaling and interaction develops, we may well gain new perspectives for drug development. The clinical relevance of the testing models has often been questioned, however. Discordance between studies on cells and studies in animals and humans may be due to bias or to the failure of the models to mimic clinical disease to an adequate degree. New techniques such as neuroimaging, nanotechnology, siPCR, and new selective receptor ligands will help to overcome some of these limitations in the near future.
[ "adenosine", "receptor interactions", "g protein-coupled receptors", "ionotropic receptors", "adenosine receptors", "neurotransmission" ]
[ "P", "P", "P", "P", "P", "P" ]
Biotechnol_Lett-3-1-1914260
Expression of alternansucrase in potato plants
Alternan, which consists of alternating α-(1→3)/α-(1→6)-linked glucosyl residues, was produced in potato tubers by expressing a mature alternansucrase (Asr) gene from Leuconostoc mesenteroides NRRL B-1355 in potato. Alternan was detected by enzyme-linked immunosorbent assay in tuber juices, revealing concentrations between 0.3 and 1.2 mg g−1 fresh wt. The Asr transcript levels correlated well with alternan accumulation in tuber juices. The expression of sucrose-regulated starch-synthesizing genes (ADP-glucose pyrophosphorylase subunit S and granule-bound starch synthase I) appeared to be down-regulated. Despite this, the physicochemical properties of the transgenic starches were unaltered. These results are compared to those obtained with other transgenic potato plants producing mutan [α-(1→3)-linked glucosyl residues] and dextran [α-(1→6)-linked glucosyl residues].
Introduction
Production of novel polymers in plants by genetic modification is a great opportunity to obtain plants with unique properties that cannot be generated by conventional breeding (Kok-Jacon et al. 2003). In addition, modification of native polymers in planta could also generate crops with added nutritional, environmental or commercial value. For instance, production of biodegradable plastics in crops such as flax offers new perspectives for the replacement of oil-derived plastics (Wróbel et al. 2004). Another example is the production of a freeze-thaw-stable potato starch exhibiting novel physicochemical properties, thereby increasing the number of industrial applications (Jobling et al. 2002). Alternan is a unique polymer produced by three Leuconostoc mesenteroides strains: NRRL B-1355, NRRL B-1498 and NRRL B-1501 (Jeanes et al. 1954). Alternan synthesis in L. mesenteroides NRRL B-1355 is mediated by the alternansucrase ASR (EC 2.4.1.140), a large glucansucrase of 2,057 amino acids (Argüello-Morales et al. 2000). Its C-terminal domain (also referred to as the glucan-binding domain or GBD) exhibits short repeats specific for ASR, which could contribute to its distinct features (Janeček et al. 2000). The resulting polymer has a unique structure with alternating α-(1→3)/α-(1→6)-linked glucose residues, present at 46% and 54%, respectively. Owing to this structure, alternan is a highly soluble polymer of low viscosity that is resistant to microbial and mammalian enzymes, making it suitable for the production of ingredients for functional foods such as prebiotics (Côté 1992). Also, novel industrial applications have been investigated by hydrolyzing native alternan polymers with isolates of Penicillium strains, creating potential replacements for commercial gum arabic (Leathers et al. 2002, 2003). Furthermore, ASR is an attractive enzyme because of its efficiency in bond formation, which is higher than that of the dextransucrase (DSRS) (Richard et al. 2003). In addition, mutated ASR enzymes showed a high efficiency in glucosylating acceptor molecules (cellobiose, α-alkylglucosides) in comparison to the native ASR and DSRS enzymes, which might enable novel industrial applications (Argüello-Morales et al. 2001; Richard et al. 2003; Luz Sanz et al. 2006). In this work, we describe the production of alternan in potato tubers by expressing ASR. Modification of starch structure was also envisaged with ASR, because of its high acceptor-reaction efficiency.
The effect of ASR on starch biosynthesis was studied at the microscopic, molecular and biochemical level, and compared to the effects of the dextransucrase (DSRS) and mutansucrase (GTFI), which produce less soluble polymers, i.e. dextran and mutan, mainly composed of α-(1→6)- and α-(1→3)-linked glucose residues, respectively (Kok-Jacon et al. 2005a, b).
Materials and methods
Construction of a binary plant expression vector containing the Asr gene
An expression cassette containing the patatin promoter (Wenzler et al. 1989) and the chloroplastic ferredoxin signal peptide (FD) from Silene pratensis (Pilon et al. 1995) fused to the NOS terminator was cloned into the pBluescript SK (pBS SK) plasmid, resulting in pPF, which was used as the starting material for cloning the alternansucrase (Asr) gene. A mature Asr gene from L. mesenteroides NRRL B-1355 (Argüello-Morales et al. 2000; AJ250173) was ligated in frame between the signal peptide FD and the NOS terminator. The mature Asr gene was amplified by PCR, with a forward primer containing a SmaI restriction site (5′-CATCAGGGCCCCGGGGATACAAAT-3′) and a reverse primer containing a NruI restriction site (5′-CTCCTTTCGCGAATCCTTCCCTTA-3′), using the proofreading Pfu Turbo DNA polymerase (2.5 units/μl; Stratagene, UK), and cloned into the SmaI/NruI restriction sites of pPF, resulting in pPFAsr. FD and the fused Asr gene were completely sequenced in one direction by Baseclear (The Netherlands) to verify the correctness of the construct. pPFAsr was digested with SacI and SalI and subsequently ligated into the pBIN20 binary vector (Hennegan and Danna 1998), resulting in pPFA (Fig. 1).
Fig. 1 Schematic representation of the pPFA binary vector used for potato plant transformation
Transformation and regeneration of potato plants
pPFA was transformed into Agrobacterium tumefaciens strain LBA 4404 using electroporation (Takken et al. 2000). Internodal stem segments from the tetraploid potato genotype cv. Kardal (KD) were used for Agrobacterium-mediated transformation, which was performed as described by Kok-Jacon et al. (2005a).
Starch isolation
Potato tubers were peeled and homogenized in a Sanamat Rotor (Spangenberg, The Netherlands). The resulting homogenate was allowed to settle overnight at 4°C, after which the potato juice was decanted and stored at −20°C for characterization of soluble alternan. The starch pellet was washed three times with water, air-dried at room temperature for at least three days and stored at room temperature.
Immunological detection of alternan in tuber juices and gelatinized starches
The presence of alternan was investigated by enzyme-linked immunosorbent assay (ELISA), as described by Kok-Jacon et al. (2005a), using monoclonal anti-α-(1→6) dextran antibodies (45.21.1 (groove-type; IgA/Kappa) and 16.4.12EBI (cavity-type; IgA/Kappa)) (Wang et al. 2002) with tuber juices and gelatinized starches. The monoclonal anti-α-(1→6) dextran antibodies detect structures containing both internal and terminal epitopes of α-(1→6) dextran, which makes them applicable to the detection of the α-(1→6)-linked glucose residues present in alternan (Sharon et al. 1982; Dr Denong Wang, personal communication).
Expression analysis of Asr and genes involved in starch biosynthesis using semi-quantitative and real-time quantitative RT-PCR
RNA was isolated from 3 g (fresh weight) of potato tuber material from selected transgenic lines according to Kuipers et al. (1994). Semi-quantitative and real-time quantitative RT-PCRs were performed as described by Kok-Jacon et al. (2005a).
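As a quick computational cross-check of the cloning primers described under "Construction of a binary plant expression vector", one can verify that the stated recognition sequences (SmaI, CCCGGG; NruI, TCGCGA; both standard for these enzymes) are indeed present in the primer sequences given in the text. A minimal Python sketch, using the primer strings verbatim:

# Primer sequences as given in the cloning description (5'->3')
forward_primer = "CATCAGGGCCCCGGGGATACAAAT"   # carries the engineered SmaI site
reverse_primer = "CTCCTTTCGCGAATCCTTCCCTTA"   # carries the engineered NruI site

# Standard recognition sequences of the two blunt-cutting enzymes
sites = {"SmaI": "CCCGGG", "NruI": "TCGCGA"}

for enzyme, site in sites.items():
    for label, primer in [("forward", forward_primer), ("reverse", reverse_primer)]:
        pos = primer.find(site)
        if pos != -1:
            print(f"{enzyme} site found in {label} primer at position {pos}")

Running this confirms the SmaI site in the forward primer and the NruI site in the reverse primer, consistent with the cloning strategy described above.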
AsrRT primers 5′-ACCGGTTCCATCAACTAATAAT-3′ and 5′-GACATCTCGGAAGGATCCC-3′ (Tm = 55°C, 35 cycles) were based on the Asr gene sequence (Argüello-Morales et al. 2000). An RNA sample from Karnico potato tubers expressing a sense/antisense GBSSI cDNA inverted-repeat construct, referred to as RVT34-77 (Heilersig 2005), was used as a positive control, because its GBSSI expression level was completely down-regulated.
Determination of morphological and physicochemical starch properties
Analysis of starch granule morphology was performed by light microscopy and scanning electron microscopy (SEM) as described by Kok-Jacon et al. (2005a). Median values of the granule size distribution (d50), gelatinization analysis, amylose content, starch content and chain length distributions (HPSEC, HPAEC) were determined as described by Kok-Jacon et al. (2005a).
Results
Detection of alternan in transgenic potato juices
To enable plastidic protein targeting, the mature Asr gene was fused to the ferredoxin (FD) signal peptide (Gerrits et al. 2001). The resulting gene fusion was inserted between the patatin promoter (Fig. 1), allowing high tuber expression (Wenzler et al. 1989), and the NOS terminator sequence. At the FD▲Asr fusion, two mutations were present because a SmaI restriction site was engineered at this position (VTAM↓ATYKVTLITK▲ADT became VTAM↓ATYKVTLITP▲GDT, in which ↓ represents the splice site for amyloplast entry and ▲ the gene fusion). Furthermore, differences from the published ASR sequence (Argüello-Morales et al. 2000) were found at three positions (Y208H, D221G and G1092S), but these did not affect conserved residues. After Agrobacterium-mediated plant transformation, thirty independent transgenic potato clones were obtained in the Kardal (KD) genotype. Five plants of each transgenic clone were grown in the greenhouse, and the tubers were pooled for further characterization. KDAxx refers to the transformed potato plant series, in which A represents the Asr gene and xx the clone number. The untransformed genotype is referred to as KD-UT. Detection of alternan was performed by analyzing tuber juices of the transformants with ELISA using anti-dextran antibodies (Wang et al. 2002). Alternan was detected in 4 out of 29 tubers (about 14%), at concentrations ranging from 0.3 to 1.2 mg g−1 fresh wt (Fig. 2), in the transformants KDA16, KDA19, KDA27 and KDA13. As expected, no alternan was found in KD-UT plants. According to the tuber juice results, the KDA transformants were divided into three classes: (−), (+) and (++), representing no, intermediate (≤1 mg g−1 FW) and high (>1 mg g−1 FW) levels of alternan, respectively. All the transformants containing alternan and two from the (−) class were selected for further characterization: KDA13 (++), KDA16 (+), KDA19 (+), KDA27 (+), KDA1 (−) and KDA24 (−). RNA was isolated from potato tubers and subjected to RT-PCR analysis. The expression levels were determined for the Asr and Ubi3 genes, the latter serving as a control because of its constitutive expression (Garbarino and Belknap 1994) (Fig. 3). Heterologous Asr gene expression was detected in the expressers KDA13, KDA16, KDA19 and KDA27. No Asr mRNA was detected in the (−) class transformants or in the KD-UT plants. The Asr expression levels correlated well with the ELISA results described above.
Fig. 2 Detection of alternan accumulated in potato juices by ELISA using anti-dextran antibodies.
Based on the alternan concentration [in mg g−1 fresh wt (FW)], three categories of transformants were defined, where (−), (+) and (++) represent no, intermediate and high alternan accumulation, respectively. Transgenic clones indicated with grey bars were selected for further characterization
Fig. 3 RT-PCR analysis of the selected KDA transformants and KD-UT tuber RNA. The upper panel shows the PCR products obtained with the primers designed on the Asr sequence. The lower panel shows the PCR products obtained with the primers designed on the Ubi3 sequence, which served as an internal control. pPFAsr plasmid: positive control
Alternan accumulation does not interfere with plant, tuber and starch morphologies
Asr-expressing plants (green parts and tubers) did not exhibit any morphological changes in comparison to KD-UT plants (data not shown). In addition, the starch morphology of Asr-expressing plants was quite similar to that of KD-UT. With SEM, a rough surface was observed on some of the (++) class transformant granules (Fig. 4B, F), but this was considered not significant when compared to dextran- (Fig. 4C, G) and mutan- (Fig. 4D, H) accumulating plants. In general, starch granules from the (+) and (−) class transformants were similar to those of KD-UT (data not shown). Starch granules comparable to those illustrated in Fig. 4F were scored by analyzing a population of 100 granules in triplicate for each selected transformant (data not shown). KDA13, belonging to the (++) class, exhibited 12% (±1.0) altered starch granules, followed by the (+) class transformants KDA19 (9.3% ± 0.6) and KDA27 (8.3% ± 0.6). For the (−) class transformants and KD-UT, the frequency of altered granules was lower, at around 7%.
Fig. 4 SEM analysis of starch granules (×350: upper panel; ×1,000: lower panel) from KD-UT (A, E) compared to those of selected transformants producing foreign polymers of decreasing water-solubility: KDA13, producing alternan (B and F; ++: highly soluble (S)); KDD30, producing dextran (C and G; +: soluble (L)); and KDIC15, producing mutan (D and H; −: insoluble (I)). Degrees of polymer solubility were defined according to Robyt (1996), in which class S = more soluble, referring to glucans precipitated by 40–44% (v/v) ethanol; L = less soluble, referring to glucans precipitated by 34–37% ethanol; and I = water-insoluble
The physicochemical properties and starch content of KDA transformants remain unchanged
Median granule size (d50), gelatinization characteristics (T0 and ΔH), and amylose and starch content measurements were performed on the selected transformants (Table 1). From these results, it can be seen that no consistent changes were detected for the different classes of transformants. Furthermore, chain length distribution experiments (HPSEC and HPAEC) were also carried out, particularly because ASR exhibits a high acceptor reaction efficiency. After complete debranching of starch with isoamylase, no consistent changes were found by HPSEC or HPAEC in comparison to KD-UT starches (data not shown). In addition, debranched starches, which were further treated with α-amylase, were analyzed by HPAEC in order to detect the presence of novel structural elements on starch molecules, such as alternating α-(1→3)/α-(1→6) linkages. Again, no consistent changes were detected by HPAEC in comparison to KD-UT starches (data not shown).
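The (−)/(+)/(++) class labels used throughout these results follow directly from the ELISA thresholds stated earlier (no detectable alternan; ≤1 mg g−1 FW; >1 mg g−1 FW). A small Python sketch of that classification rule; the per-clone concentrations below are illustrative placeholders consistent with the reported classes, not exact values read from Fig. 2:

def alternan_class(conc_mg_per_g_fw):
    # No detectable alternan -> (-); <=1 mg/g FW -> (+); >1 mg/g FW -> (++)
    if conc_mg_per_g_fw <= 0.0:
        return "(-)"
    return "(++)" if conc_mg_per_g_fw > 1.0 else "(+)"

juices = {"KDA1": 0.0, "KDA24": 0.0, "KDA16": 0.4,
          "KDA19": 0.6, "KDA27": 0.9, "KDA13": 1.2}
for clone, conc in juices.items():
    print(clone, alternan_class(conc))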
Table 1 Summary of granule size (d50), gelatinization characteristics (T0, ΔH), and amylose and starch content measurements of starches from the selected transformants and KD-UT. Data (±SD) are the average of two or three independent measurements
Transformants | d50 (μm)* | T0 (°C)† | ΔH (kJ/g)‡ | Amylose content (%) | Starch content (mg/g FW)
KD-UT | 26.5 (±0.3) | 67.9 (±0.1) | 14.5 (±0.1) | 22.3 (±0.2) | 214.8 (±117.5)
KDA1 (−) | 24.4 (±0.2) | 68.1 (±0.1) | 17.0 (±0.1) | 22.2 (±0.2) | 103.4 (±66.3)
KDA24 (−) | 25.0 (±0.2) | 68.0 (±0.1) | 16.3 (±1.2) | 21.3 (±0.4) | 86.7 (±41.9)
KDA16 (+) | 24.9 (±0.3) | 67.9 (±0.2) | 16.4 (±1.3) | 22.2 (±0.1) | 140.0 (±88.2)
KDA19 (+) | 27.9 (±0.2) | 67.7 (±0.0) | 15.2 (±0.1) | 23.0 (±0.2) | 137.1 (±38.2)
KDA27 (+) | 22.8 (±0.7) | 67.7 (±0.2) | 16.2 (±0.5) | 22.2 (±0.4) | 289.3 (±39.7)
KDA13 (++) | 24.0 (±0.1) | 67.8 (±0.1) | 16.0 (±0.7) | 22.2 (±0.5) | 107.2 (±49.4)
* Median value of the granule size distribution; † temperature of onset of starch gelatinization; ‡ enthalpy released
Expression levels of the AGPase and GBSSI genes are down-regulated in the (+) and (++) KDA classes
The expression levels of key genes involved in starch biosynthesis, namely sucrose synthase (SuSy), ADP-glucose pyrophosphorylase subunit S (AGPase), starch synthase III (SSIII), starch branching enzyme I (SBEI) and granule-bound starch synthase I (GBSSI), were monitored by real-time quantitative RT-PCR (Fig. 5). All these genes appeared to be down-regulated, particularly AGPase and GBSSI. In most cases, the extent of AGPase and GBSSI down-regulation corresponded well with the amount of alternan that accumulated in the potato tubers. However, AGPase down-regulation did not correlate with a reduction in starch content for the (++) transformant (107.2 ± 49.4 mg g−1 FW) when compared to KD-UT (214.8 ± 117.5 mg g−1 FW). Concerning GBSSI, the down-regulation was about 20 times weaker than in the transformant RVT34-77, in which GBSSI is completely inhibited. Typically, no reduction in amylose content was observed for the KDA transformants (Table 1), irrespective of their GBSSI messenger RNA level. Thus, the observed reductions in GBSSI expression for the (+) and (++) KDA classes were significant within the selected transformants, but relatively small with respect to the RVT34-77 transformant.
Fig. 5 Real-time quantitative RT-PCR analysis of KDA24 (−), KDA27 (+) and KDA13 (++) transformants and KD-UT tuber RNA using specific primers for: SuSy, sucrose synthase; AGPase, ADP-glucose pyrophosphorylase subunit S; SSIII, starch synthase III; SBEI, starch branching enzyme I; GBSSI, granule-bound starch synthase I. RNA levels for each gene were expressed relative to the amount of Ubi3 RNA, as described in Materials and methods. An RNA sample from Karnico potato tubers expressing a sense/antisense GBSSI cDNA construct with complete GBSSI down-regulation (RVT34-77) was used as a positive control
Discussion
This report is the first study on the production of alternan in potato tubers. Its presence in potato juices was demonstrated by ELISA using anti-dextran antibodies. Expression of ASR did not interfere with plant growth and development, and no tuber or starch yield penalties were observed. These results are similar to those obtained with dextransucrase (DSRS) expression (Kok-Jacon et al. 2005a), but not to those obtained with mutansucrase (GTFI) expression, in which the tuber phenotype was significantly affected (Kok-Jacon et al. 2005b). The amount of alternan accumulated in potato tubers (1.2 mg g−1 fresh wt) was lower than that of dextran (1.7 mg g−1 fresh wt) (Kok-Jacon et al. 2005a).
It is possible that the large size of the mature ASR (2,057 amino acids (a.a.), compared to DSRS with only 1,527 a.a.) reduces the efficiency with which the enzyme is transported through the amyloplast membrane. However, such an explanation needs to be approached with caution, because the presence of alternansucrase in the amyloplast was not directly evidenced, as no ASR antibodies were available to us. Interestingly, it has been shown that the size of ASR can be reduced (by removal of 82% (632/767 a.a.) of the C-terminal GBD) without compromising its activity (Joucla et al. 2006). If the size of the protein is indeed a critical factor, then this truncated variant may be a useful tool to enhance alternan synthesis in the amyloplast. Such an approach has already been employed successfully for the Streptococcus downei mutansucrase GTFI (Kok-Jacon et al. 2005b). We directed a mature and a GBD-truncated GTFI protein to potato amyloplasts, and found that the truncated form synthesized a larger amount of mutan, with a much more pronounced effect on starch granule morphology. Although ASR is known to be efficient in catalyzing acceptor reactions (Richard et al. 2003; Côté and Sheng 2006), no evidence was found for the covalent attachment of novel, alternan-based structural elements to starch molecules. With dextransucrase and mutansucrase, too, we have not been able to introduce different glycosyl linkage patterns into starch (Kok-Jacon et al. 2005a, b). As yet, acceptor reactions of glucansucrases with starch or maltodextrins have not been studied in much detail. It has been observed that the efficiency of the acceptor reaction decreases with increasing length of the maltodextrins (reviewed in Kok-Jacon et al. 2003). We had therefore anticipated that the nascent starch polymers themselves would be poor acceptors for the glucansucrases. However, during starch biosynthesis, potential acceptors (small maltodextrins) are thought to be generated through the action of, for instance, debranching enzymes (or isoamylases). If such a small acceptor is mutanylated, alternanylated or dextranylated at the non-reducing end, these novel structures might be incorporated into starch polymers through the action of certain transferases such as, for instance, branching enzyme. Apparently, this does not happen, or only at a very low (undetectable) frequency, and the reason for this is unclear. Starch morphology in the ASR transformants was not significantly altered, in contrast to that of the dextran- and mutan-accumulating plants (Fig. 4). This might be related to the fact that alternan is more water-soluble than dextran and mutan. An indication of the water-solubility of the three polysaccharides is given in Fig. 4; the more ethanol required for precipitation, the higher the water-solubility. The water-solubility decreases in the order alternan, dextran, mutan. We hypothesize that the co-synthesis of water-insoluble mutan and starch leads to co-crystallization of the two polymers, as a result of which the granule is packed in a less orderly fashion. This comparison should be approached with caution: for alternan and dextran, the observed differences in starch morphology may also be related to the fact that more dextran than alternan accumulated in the potato tubers, and for mutan, we have not been able to quantify the amount accumulated in the tubers. Therefore, it cannot be excluded that the observed effects are related to the amount of foreign polymer produced.
Interestingly, co-synthesis of levan, a water-soluble fructosyl-based polymer, and starch resulted in a dramatically altered starch granule morphology (Gerrits 2000). However, it should be noted that much higher levels of levan, estimated at 66 mg g−1 fresh wt (Gerrits et al. 2001; Cairns 2003), were produced in comparison with alternan (1.2 mg g−1 fresh wt) or dextran (1.7 mg g−1 fresh wt), and that the starch granules contained approximately 5% levan. This contrasts with the alternan- and dextran-accumulating plants, in which the foreign polymers were found only in the stroma. Taking together the results of potato transformants expressing glucan- or levansucrases in amyloplasts, it seems that the site of accumulation of the foreign polymer (granule or stroma), the solubility of the foreign polymer, and the amount of foreign polymer actually produced are all important factors in determining starch granule morphology.
[ "alternan", "transgenic potato", "glucansucrase", "polymer solubility" ]
[ "P", "P", "P", "P" ]
Qual_Life_Res-3-1-2039822
Reliability and validity of functional health status and health-related quality of life questionnaires in children with recurrent acute otitis media
In this study, the reliability and validity of generic and disease-specific questionnaires were assessed, with a focus on responsiveness, as part of a study on the effects of recurrent acute otitis media (rAOM) on functional health status (FHS) and health-related quality of life (HRQoL) in 383 children with rAOM participating in a randomized clinical trial. The following generic questionnaires were studied: 1. the RAND general health rating index, 2. the Functional Status Questionnaire (FSQ Generic and FSQ Specific), and 3. the TNO-AZL Infant Quality of Life (TAIQOL); and the following disease-specific questionnaires: 1. the Otitis Media-6 (OM-6), 2. numerical rating scales (NRS) for child and caregiver (NRS Child and NRS Caregiver), and 3. a new Family Functioning Questionnaire (FFQ). Reliability was good to excellent (Cronbach's α range 0.80–0.90; intraclass correlation coefficient range 0.76–0.93). Moderate to strong correlations were found between the questionnaires, as well as between the questionnaires and relevant clinical indicators (r = 0.29–0.49), demonstrating construct validity. Discriminant validity for children with few versus frequent episodes of acute otitis media per year was good for most questionnaires (P < 0.004), but poor for the otitis media-related subscales of the TAIQOL (P = 0.10–0.97) and both NRS (P = 0.22 and 0.48). Except for the TAIQOL subscales, change scores were significant (P < 0.003) for both generic and disease-specific questionnaires. Effect sizes were somewhat higher for disease-specific than for generic questionnaires (0.55–0.95 versus 0.32–0.60), except for the TAIQOL subscales, which showed very poor sensitivity to change. Anchor-based methods resulted in a somewhat larger range of estimates of the minimal clinically important difference (MCID) than distribution-based methods. Combining distribution-based and anchor-based methods resulted in similar MCID ranges for generic and disease-specific questionnaires: 2–15 points on a 0–100 scale. Apart from the generic TAIQOL subscales, both the generic and the disease-specific questionnaires used in this study showed good psychometric qualities and responsiveness for use in clinical studies of children with rAOM.
Introduction
Acute otitis media (AOM) is a common childhood infection, with a peak incidence between 6 and 12 months of age. Five to fifteen percent of all children, depending on their age, suffer from recurrent acute infections of the middle ear (4 or more episodes per year) [1–4]. Repeated episodes of pain, fever and general illness during acute ear infections [5–8], as well as worries about potential long-term sequelae such as hearing loss and disturbed language development [9–13], may all compromise the quality of life of the child and its family [14–16]. Although several questionnaires have been used to assess the effects of recurrent acute otitis media (rAOM) in children, the lack of true health-related quality of life (HRQoL) questionnaires, together with incomplete data on their reliability and validity, means that our current knowledge on the subject is limited for both research and clinical practice [17]. Assessment of functional health status (FHS) and HRQoL, as defined in Table 1 [18–26], has become increasingly important in clinical trials on the effectiveness of treatment of paediatric chronic conditions. The validation of FHS and HRQoL questionnaires, however, has so far mainly focused on reliability and construct validity.
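The internal-consistency figures quoted above (Cronbach's α of 0.80–0.90) follow the standard formula α = k/(k−1) · (1 − Σσᵢ²/σₜ²), where k is the number of items, σᵢ² the variance of item i and σₜ² the variance of the total score. A minimal Python sketch of this computation on a toy item-score matrix; the data are invented purely for illustration:

import numpy as np

def cronbach_alpha(items):
    # items: respondents x items matrix of scores
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_vars / total_var)

# Toy data: 5 respondents answering a 4-item Likert questionnaire
scores = np.array([[4, 5, 4, 5],
                   [2, 2, 3, 2],
                   [3, 3, 3, 4],
                   [5, 4, 5, 5],
                   [1, 2, 1, 2]])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")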
Responsiveness has been assessed for only a few paediatric HRQoL questionnaires, and then for conditions other than otitis media [27–31]. In order to evaluate treatment effects on FHS and HRQoL meaningfully, questionnaires are needed that are not only reliable and valid but also responsive to changes in FHS and HRQoL. In adult studies, various strategies have been used to assess responsiveness, which is defined as the ability to detect clinically important change over time and therefore involves both the assessment of sensitivity to change and the assignment of meaning to that change [32, 33]. Since none of these strategies is without limitations, we assess the responsiveness of the FHS and HRQoL questionnaires using multiple strategies, categorized into distribution-based and anchor-based methods.
Table 1 Definitions of health-related quality of life and functional health status
Health-related quality of life: the level of satisfaction a person imputes to those aspects of his or her life that are affected by the effects of illness and its treatment [18–20]. Incorporation of a person's valuation of his life distinguishes HRQoL from other measures of well-being [21, 22].
Functional health status: a reflection of the (severity of) signs and symptoms and the adequacy of daily functioning across various life domains in an individual with a certain health condition [23–26].
Distribution-based methods express the amount of change relative to the amount of random variance of a questionnaire [34, 35], whereas anchor-based methods enhance the interpretability of changes in questionnaire scores by linking meaning and clinical relevance to change scores [34, 36]. Both generic and disease-specific questionnaires have been used in studies of paediatric FHS or HRQoL. Generic questionnaires span a wide spectrum of quality of life components, bridging various health states and populations. Disease-specific questionnaires, on the other hand, assess health-related issues specific to particular conditions and may be able to detect changes that are small but clinically important; they provide a more detailed assessment of HRQoL, but cannot be used for comparisons across health conditions [37–39]. The two types of questionnaire are often combined in order to profit from the merits of both. However, there have been few head-to-head comparisons between generic and disease-specific HRQoL questionnaires in the setting of randomized controlled trials (RCTs) [40]. The current RCT on the effectiveness of pneumococcal vaccination in children with rAOM addresses both the issue of generic versus disease-specific questionnaires and that of responsiveness in evaluating treatment effects on HRQoL in RCTs. The results will lead to recommendations regarding the applicability of these questionnaires in clinical studies in children with rAOM.
Methods
Setting and procedure
FHS and HRQoL were assessed in 383 children with rAOM participating in a double-blind, randomized, placebo-controlled trial on the effectiveness of pneumococcal conjugate vaccination versus control hepatitis vaccination. The study was conducted at the paediatric outpatient departments of a general hospital (Spaarne Hospital Haarlem) and a tertiary care hospital (University Medical Center Utrecht). Children were recruited for this trial through referral by general practitioners, paediatricians or otolaryngologists, or were enrolled on the caregivers' own initiative, from April 1998 to February 2001.
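In practice, the distribution-based methods defined above reduce to a handful of simple statistics: the effect size (mean change divided by the baseline standard deviation), the standard error of measurement (SEM = SD · √(1 − reliability), often used as a 1-SEM estimate of the MCID), and the common 0.5-SD benchmark. A brief Python sketch under invented numbers; none of these values come from the trial data:

import math

def effect_size(mean_change, sd_baseline):
    # Sensitivity to change relative to baseline variability
    return mean_change / sd_baseline

def sem(sd_baseline, reliability):
    # Standard error of measurement, a distribution-based MCID estimate
    return sd_baseline * math.sqrt(1.0 - reliability)

sd0, change, rel = 18.0, 10.0, 0.85   # assumed baseline SD, mean change, reliability
print(f"effect size = {effect_size(change, sd0):.2f}")
print(f"1-SEM MCID  = {sem(sd0, rel):.1f} points")
print(f"0.5-SD MCID = {0.5 * sd0:.1f} points")

With these assumed inputs, the 1-SEM and 0.5-SD estimates land at roughly 7 and 9 points on a 0–100 scale, which is within the 2–15 point MCID range reported for the questionnaires in this study.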
Study population Inclusion criteria: children were aged between 12 and 84 months and were suffering from rAOM at study entry, defined in this study as having had at least 2 episodes of physician-diagnosed AOM in the year prior to study entry. Exclusion criteria were conditions with a known increased risk for AOM, such as known immunodeficiency (other than IgA or IgG2 subclass deficiency), cystic fibrosis, immotile cilia syndrome, cleft palate, chromosomal abnormalities (like Down syndrome), or severe adverse events upon vaccination in the past. At each scheduled visit, two research physicians (C.N.M.B. and R.H.V.) collected data regarding the number of episodes of AOM (based on parental report at baseline and on physician report during follow-up), upper respiratory tract infections, and pneumonia. Information about medical treatment and ear, nose, and throat surgery in the preceding 6 months was also collected. The primary caregivers completed questionnaires assessing the FHS and HRQoL of their child and family during the clinic visits at baseline and at 7, 14, and 26 months of follow-up. Caregivers were requested to have the same person complete the questionnaires each time and to rate their child’s FHS and HRQoL with regard to the recurrent episodes of acute otitis media. Informed consent was obtained from the caregivers of all children before study entry. The medical ethics committees of both participating hospitals approved the study protocol. Questionnaires Four generic questionnaires (RAND, FSQ Generic, FSQ Specific, TAIQOL) and one disease-specific questionnaire (OM-6) were used to assess FHS and HRQoL of the children in the study. Additionally, two disease-specific one-item numerical rating scales (NRS Child and NRS Caregiver) were used to obtain a global rating of the HRQoL of the child and of the caregiver, respectively, related to rAOM. The impact of rAOM on family functioning was assessed with a newly composed disease-specific questionnaire, the Family Functioning Questionnaire (FFQ). Table 2 summarises the characteristics of the questionnaires [14, 41–57].
Table 2 Characteristics of FHS and HRQoL questionnaires used in this study
Questionnaire | Type; number of items; scale | Construct(s) measured | Application in other studies
Generic:
RAND | FHS; 7; Likert | General health: current health; previous health; resistance to illness | Low-birth-weight children; survivors of childhood cancer; asthmatic children [42, 44, 47, 48]
FSQ Generic | FHS; 14; Likert | Age-appropriate functioning and emotional behaviour | Low-birth-weight children; survivors of childhood cancer; asthmatic children [41, 45–47, 49–51]
FSQ Specific | (idem to FSQ Generic, measuring the general impact of illness on functioning and behaviour)
TAIQOL | HRQoL; 35/46*; Likert | Sleeping, appetite, lung problems, stomach problems, skin problems, motor functioning, problem behaviour, social functioning, communication, positive mood, anxiety, liveliness | Low-birth-weight children, children with chronic illness, children with chronic OME [51, 54]
Disease-specific:
OM-6 | FHS; 6; Likert | Physical suffering; hearing loss; speech impairment; emotional distress; activity limitations; caregiver concerns | Children with recurrent AOM; children with chronic OME [14, 55–57]
NRS Child | HRQoL; 1; index 0–100 | Global well-being of child related to AOM episodes | Children with recurrent AOM or chronic OME [14]
NRS Caregiver | HRQoL; 1; index 0–100 | Global well-being of parent related to child’s AOM episodes | None
Family Functioning Questionnaire (FFQ) | FHS; 7; Likert | Parents: sleep deprivation; change of daily or social activities; emotional distress. Family: cancelling family plans or trips. Siblings: feeling neglected; demanding extra attention | None
* 46 items when age > 15 months
Generic questionnaires The RAND general health-rating index (RAND) and the Functional Status Questionnaire (FSQ) had already been translated and validated for Dutch children by Post et al. [41, 42] (Table 2). The RAND assesses general health perceptions of caregivers regarding their child [43]. The FSQ consists of two parts: one measuring functional limitations in general, not necessarily related to illness (FSQ Generic), and the other (paradoxically named FSQ Specific) measuring functional limitations that are attributable to any illness [43]. Functional limitations in both versions of the FSQ are mainly expressed as behavioural problems. During the course of the study, a new Dutch questionnaire on generic HRQoL became available: the TNO-AZL Infant Quality of Life (TAIQOL) questionnaire [51, 53]. For this reason, from July 1999 the TAIQOL was added to the previously selected set of questionnaires. Although the full, original version of the TAIQOL was administered during the study, only those subscales are discussed that, based on their content, were assumed to be sensitive to the consequences of AOM. The following subscales tap functional items that are often affected by AOM (OM-related): ‘Sleeping’, ‘Appetite’, ‘Liveliness’, ‘Problem behaviour’, ‘Positive mood’, and ‘Communication’ (items about speech and language capacity); these are 6 of the 12 subscales of the TAIQOL. Although the TAIQOL has been developed for children aged up to 5 years, we also used the questionnaire in children aged 6–7 years, as no appropriate alternative was available during the study. Disease-specific questionnaires To measure disease-specific FHS, the Otitis Media-6 (OM-6) [14, 55] was translated into Dutch according to principles of backward–forward translation [58–61]. This six-item questionnaire covers both the acute and the long-term functional effects of otitis media in children on FHS.
A new questionnaire, the FFQ, was developed to assess the impact of rAOM in children on their caregivers and siblings. The content of the FFQ was based on previous work by Asmussen et al. [15, 62] on the impact of rAOM on family well-being. A panel of paediatric otorhinolaryngologists and paediatricians from our study sites selected the items most relevant according to their clinical experience. The FFQ is composed of six questions covering effects of the child’s rAOM on caregiver and family activities and two questions assessing these effects on the emotional behaviour of the siblings. A Likert scale was used as response format, analogous to that of the RAND and OM-6 in our study, with scores ranging from 1 to 4. Furthermore, two numerical rating scales (NRS) (0–100) were used, the NRS Child and the NRS Caregiver (see Table 2). The NRS Child [14] was translated into Dutch using the same principles of backward–forward translation that were applied to the translation of the OM-6. The NRS Caregiver was newly created in this study, modelled upon the NRS Child of Rosenfeld et al. [14], and was added to the previously selected set of questionnaires from July 1999. Finally, the Dutch version of the OM Functional Status Questionnaire specific (OM-FSQ [52]) was included as an anchor for responsiveness (instrument description in the section on responsiveness). Questionnaire application Questionnaires were completed in a randomly selected, but fixed, order during the follow-up assessments to prevent possible order effects [63, 64]: RAND, FSQ Generic and Specific, OM-6, NRS Child, FFQ, TAIQOL, OM-FSQ, NRS Caregiver. For all questionnaires, higher scores indicate better HRQoL or FHS. To allow comparisons between scores on the questionnaires, all scores were linearly transformed onto 0–100 scales. For each questionnaire, the evaluation period was the 6 weeks before completion. Statistical analyses Floor and ceiling effects Floor and ceiling effects were estimated for the baseline assessment of each questionnaire by calculating the percentages of respondents with minimum and maximum scores, respectively. Questionnaires should exhibit minimal floor and ceiling effects to be optimally able to detect difference and change. Reliability First, internal consistency was assessed by calculating Cronbach’s alpha, which should be above 0.70 for each questionnaire or subscale [65]. Inter-item correlations of the questionnaires were assessed to reveal item redundancy or ‘hidden’ subscales that may erroneously yield a high overall Cronbach’s alpha. For the assessment of test–retest reliability, a subset of caregivers attending the outpatient ward from February 2000 to June 2001 (n = 160) was given a second set of the same questionnaires (retest) to complete at home, within 2 weeks after the first set of questionnaires was filled out during the outpatient visit at 14 months (first test). Children with AOM at the first test were excluded, since differences in their scores could be due to real change and would interfere with the assessment of reliability. For the assessment of test–retest reliability, a time interval of 2–14 days is often considered long enough to prevent recall bias and too short for relevant change to occur in chronic disease [66]. Test–retest reliability was computed as the intraclass correlation coefficient (ICC) between the two sets of questionnaires. An ICC of 0.80 was considered the required minimum for good reliability [65, 67].
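The reliability indices above reduce to short computations. The following is a minimal Python sketch, not the authors' analysis code (the study used SPSS); the variable names and data layout are illustrative assumptions: `items` is one questionnaire's item-score matrix (children × items), and `test`/`retest` are paired total scores from the two administrations.

```python
# Minimal sketch of the scoring and reliability indices described above
# (illustrative only; not the study's SPSS analysis).
import numpy as np

def to_0_100(raw: np.ndarray, lo: float, hi: float) -> np.ndarray:
    """Linear transformation of raw scores onto a 0-100 scale."""
    return 100.0 * (raw - lo) / (hi - lo)

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

def icc_oneway(test: np.ndarray, retest: np.ndarray) -> float:
    """One-way random-effects ICC(1,1) for two measurements per subject
    (a simple variant; the paper does not state which ICC form was used)."""
    scores = np.column_stack([test, retest])              # n subjects x 2 ratings
    n, k = scores.shape
    subj_means = scores.mean(axis=1)
    msb = k * ((subj_means - scores.mean()) ** 2).sum() / (n - 1)       # between subjects
    msw = ((scores - subj_means[:, None]) ** 2).sum() / (n * (k - 1))   # within subjects
    return (msb - msw) / (msb + (k - 1) * msw)
```

A questionnaire or subscale would then be retained if `cronbach_alpha(...) > 0.70` and `icc_oneway(...) >= 0.80`, mirroring the thresholds stated above.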
Construct and discriminant validity In order to demonstrate construct validity, hypotheses were formulated about the strength of the correlations between questionnaires; a higher percentage of correct predictions indicates stronger support for construct validity. A correlation of 0.10–0.30 was defined as weak, 0.30–0.50 as moderate, and >0.50 as strong [68]. The correlation between FSQ Generic and NRS Caregiver was predicted to be weak, since they were expected to assess two different constructs. Moderate to strong correlations (r > 0.40) were predicted between RAND and NRS Caregiver. Moderate to strong correlations were also expected between OM-6 and FSQ Specific, NRS Child, NRS Caregiver and FFQ, as all assess otitis media-related HRQoL or FHS. The correlation between FSQ Generic and FSQ Specific was expected to be strong (r > 0.50). The remaining correlations among the questionnaires were expected to be moderate (Table 5). Additionally, correlations between questionnaire scores and the frequency of physician visits for upper respiratory tract infections, as well as the frequency of AOM episodes in the preceding 6 months, were calculated. Since the distributions of questionnaire scores were skewed, correlations were assessed using Spearman’s rho. Discriminant validity was assessed by dichotomizing the study participants into children with 2–3 versus 4 or more episodes of otitis media per year. Based on clinical and immunological data, children with 4 or more AOM episodes per year are considered ‘otitis prone’ [2, 69–71], reflecting a sub-group with an increased rate of upper respiratory tract infections, related medical interventions and compromised child functioning [72, 73]. It was assumed that this group would score significantly worse than children with 2–3 otitis media episodes per year on all questionnaires, which was assessed by independent-sample Mann–Whitney tests. Responsiveness Since pneumococcal conjugate vaccination showed no clinical effectiveness when compared to the control vaccine [74], the intervention could not be used as an external criterion of change. Instead, data of both vaccine groups were pooled for the assessment of responsiveness to spontaneous remission. The clinical experience of a panel of 5 experts in the field of otitis media formed the basis for defining a reduction of 2 or more episodes of AOM per child per year as the external criterion for change, while a reduction of 1 episode or less identified no change. Responsiveness was evaluated for two intervals: from 0 to 7 months and from 7 to 14 months of follow-up. The observed change in episodes over each 7-month interval was multiplied by 12/7 (1.714) to obtain the estimated change per year. The first step in the assessment of responsiveness was to explore the ability of the questionnaires to detect change at all, i.e., their sensitivity to change. Secondly, the meaning and clinical relevance of the change scores were determined in accordance with recent recommendations, using both distribution- and anchor-based methods [36, 75–77]. Distribution-based methods express the amount of change relative to the amount of random variance of a questionnaire [34, 35]. Some ratios of change to random variance have, often empirically, been found to represent a minimally clinical important difference. Anchor-based methods enhance the interpretability of changes in questionnaire scores by linking meaningful and clinically relevant indicators to change scores [34, 36].
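As a concrete illustration of the steps just described (the annualisation of episode counts, the Spearman correlations and the Mann–Whitney comparison), the hypothetical sketch below uses scipy in place of the SPSS procedures; all numbers are invented for illustration and are not trial data.

```python
# Hypothetical illustration of the validity and change-criterion computations.
from scipy.stats import mannwhitneyu, spearmanr

# Annualising a change observed over a 7-month interval:
observed_change = -3                        # 3 fewer AOM episodes in 7 months
change_per_year = observed_change * 12 / 7  # about -5.1/year -> 'changed' (reduction >= 2)

# Discriminant validity: 0-100 scores, children with 2-3 vs >= 4 AOM episodes/year.
scores_moderate = [22.0, 21.5, 20.8, 19.9, 23.1]  # 2-3 episodes/year (hypothetical)
scores_prone = [19.1, 18.7, 17.2, 16.8, 18.0]     # >= 4 episodes/year (hypothetical)
u_stat, p_value = mannwhitneyu(scores_moderate, scores_prone, alternative="two-sided")

# Construct validity: Spearman's rho between scores and AOM frequency.
aom_frequency = [2, 3, 4, 6, 8]
questionnaire_scores = [22.0, 20.1, 18.6, 17.9, 16.5]
rho, p_rho = spearmanr(questionnaire_scores, aom_frequency)
```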
The assessment of responsiveness will be described in further detail below. Sensitivity to change Sensitivity to change was assessed by calculating both the statistical significance of change scores, using a paired t-test or a Wilcoxon matched-pairs test (for skewed distributions), and effect sizes (ES), using Guyatt’s responsiveness statistic [78] for changed subjects. In this statistic, the observed change that occurred in changed subjects is related to the observed random change, or random error, in unchanged subjects. The parametric effect size was computed as: mean change score (changed group) / SD of change scores (unchanged group); the nonparametric effect size was computed as: median change score (changed group) / interquartile range of change scores (unchanged group). According to the benchmarks of Cohen [79], an effect size of 0.2 represents a small change, 0.5 a moderate change and 0.8 or higher a large change. Clinical relevance of change scores The interpretation of change is often assessed by calculating the minimally clinical important difference (MCID), which is the smallest difference in a questionnaire total or domain score that patients perceive as beneficial [80]. The MCID can be computed by both distribution-based and anchor-based methods. Several estimates of the MCID from both methods are reported, to assess the likely range of the MCID for each questionnaire. Interpretation of change—distribution-based methods (ES-MCID and SEM-MCID) The main distribution-based methods for assessing the MCID are the effect size and the standard error of measurement. A change in questionnaire scores corresponding to an effect size of Guyatt’s responsiveness statistic with values of 0.3–0.5 has been found to be consistent with other (empirical) estimates of the MCID [36, 81–83]. In this study, the change in questionnaire scores corresponding with an effect size of 0.3 is used as the benchmark of the MCID (ES-MCID). A change of one standard error of measurement (1-SEM) has empirically been found to correspond with the MCID of a questionnaire [77, 84–86]. The 1-SEM of a questionnaire links the reliability of an instrument to the variance of scores in a population, as reflected in its formula: 1-SEM = SD(change scores, unchanged subjects) × √(1 − ICC). It is an estimate of what part of the observed change may be due to random measurement error, as it includes the distribution of scores (SD) and the instrument reliability (ICC); change larger than the SEM is therefore considered ‘real’ change. The SEM is here used as an estimate of the MCID (SEM-MCID). The ES-MCID and SEM-MCID support the interpretation of measured change, as they reflect the smallest change that is substantially larger than the random variability in the study population, which is based on the standard deviation of the unchanged subjects.
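The distribution-based benchmarks defined above amount to a few lines of arithmetic. The sketch below renders the formulas in Python; the example numbers are chosen only to show the order of magnitude (they resemble, but are not, the reported RAND values).

```python
# Distribution-based indices, following the formulas given above.
import numpy as np

def guyatt_es(change_changed: np.ndarray, change_unchanged: np.ndarray) -> float:
    """Guyatt's responsiveness statistic: mean change score of the changed
    group divided by the SD of change scores of the unchanged group."""
    return change_changed.mean() / change_unchanged.std(ddof=1)

def es_mcid(change_unchanged: np.ndarray, benchmark: float = 0.3) -> float:
    """Score change corresponding to an effect size of 0.3 (ES-MCID)."""
    return benchmark * change_unchanged.std(ddof=1)

def sem_mcid(change_unchanged: np.ndarray, icc: float) -> float:
    """1-SEM benchmark: SD(change scores, unchanged subjects) * sqrt(1 - ICC)."""
    return change_unchanged.std(ddof=1) * np.sqrt(1.0 - icc)

# With a hypothetical SD of change of 15 points and ICC = 0.89:
# ES-MCID = 0.3 * 15 = 4.5 and SEM-MCID = 15 * sqrt(0.11) = 5.0 points,
# the same order of magnitude as the estimates later reported in Table 9.
```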
Interpretation of change—anchor-based methods Anchor-based methods require an independent standard, the anchor, that is in itself easily interpretable and that is at least moderately correlated (>0.3) with the questionnaire being assessed. Changes in questionnaire scores were compared with changes in two clinically relevant anchors: the AOM frequency (incidence of AOM episodes per child) and the AOM severity, assessed with the Dutch version of the OM Functional Status Questionnaire specific (OM-FSQ) [52]. The OM-FSQ consists of three questions assessing clinical AOM severity: earache, sleeping problems, and other signs and symptoms (irritability, fussiness, fever) that may indicate the presence of an ear infection. In our population, the OM-FSQ demonstrated high internal consistency (Cronbach’s α = 0.88) and good test–retest reliability (ICC = 0.94). The OM-FSQ correlated weakly with the NRS Child (Spearman’s rho = 0.18), but moderately with the RAND (0.36), FSQ Generic (0.37), and NRS Caregiver (0.34), and strongly with the FSQ Specific (0.52), OM-6 (0.73) and FFQ (0.61). In relation to the AOM frequency, an expert panel in the field of otitis media considered a reduction of 2 episodes per year as a small or minimal clinically important change, whereas a change of 3 to 4 episodes per year was considered moderate to large. In the study of Alsarraf et al. [52], the OM-FSQ total score was about 62 on a scale of 0–100 during an episode of AOM, increasing to 92 at 6 weeks and to 90 at 12 weeks after an episode of AOM, with higher scores reflecting less severe ear-related symptoms. Therefore, a score change of 10–20 on the 0–100 scale of the OM-FSQ in the current population was considered a small clinically relevant change in AOM severity, and a score change of 30–50 a moderate to large one. Anchor-based estimates of the MCID were computed as the change in questionnaire scores associated with small changes in AOM frequency and OM-FSQ score.
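A hedged sketch of this anchor-based estimate follows: the MCID is read off as the mean questionnaire change among subjects whose anchor changed by a 'small' clinically relevant amount. The helper and the data below are hypothetical; only the thresholds follow the panel definitions above.

```python
# Anchor-based MCID: mean questionnaire change in subjects with a 'small'
# change on the anchor (illustrative data, not from the trial).
import numpy as np

def anchor_mcid(q_change: np.ndarray, anchor_change: np.ndarray,
                lo: float, hi: float) -> float:
    """Mean questionnaire change for subjects with lo <= anchor change <= hi."""
    small = (anchor_change >= lo) & (anchor_change <= hi)
    return float(q_change[small].mean())

q_change = np.array([12.0, 5.0, 8.0, 3.0, 15.0])       # 0-100 scale change scores
aom_reduction = np.array([2.0, 1.0, 2.0, 0.0, 4.0])    # episodes/year reduction
omfsq_change = np.array([15.0, 5.0, 12.0, 2.0, 40.0])  # OM-FSQ improvement

# Frequency anchor: a reduction of 2 episodes/year is the 'small' change.
mcid_freq = anchor_mcid(q_change, aom_reduction, 2, 2)    # mean of 12.0 and 8.0 = 10.0
# Severity anchor: a 10-20 point OM-FSQ improvement is the 'small' change.
mcid_sev = anchor_mcid(q_change, omfsq_change, 10, 20)    # mean of 12.0 and 8.0 = 10.0
```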
For all analyses the Statistical Package for the Social Sciences (SPSS) version 10.1 was used. Results Population The population characteristics summarized in Table 3 show that the majority of children suffered from 4 or more AOM episodes per year, and that half of them suffered from chronic airway problems or atopic symptoms. Most children had undergone one or more ENT surgeries. Overall they seemed to suffer from more severe disease than the average child with 2–3 middle ear infections per year, as stated earlier.
Table 3 Characteristics of study population* (n = 383)
Characteristic | Mean or % | SD or 95% CI
Age (months) | 34 | (19.7)
Male gender | 62% | (57–67)
In the year prior to inclusion:
Number of AOM episodes/year | 5.0 | (2.7)
    2–3 | 37% | (32–42)
    4–5 | 31% | (26–36)
    6 or more | 32% | (27–37)
Impaired hearing** | 35% | (30–40)
Language or speech problems** | 22% | (18–26)
History of:
Chronic airway problems or atopic symptoms*** | 51% | (46–56)
Adenoidectomy | 47% | (42–52)
Tympanostomy tubes | 51% | (46–56)
Other ear-, nose-, and throat surgeries | 2% | (0.6–3)
Antibiotic prophylaxis | 15% | (11–19)
Ever had speech therapy | 9% | (6–12)
* at inclusion in the study; ** reported by the caregiver; *** asthma, wheezing, hay fever, or eczema
Floor and ceiling effects Generally, the questionnaires demonstrated no floor effects. However, Table 4 shows that some questionnaires (FSQ Specific and FFQ) and most TAIQOL subscales showed moderate to large ceiling effects, which indicates that improvement may go undetected even when it is actually present.
Table 4 Floor and ceiling effects*, internal consistency and test–retest reliability of the questionnaires
Questionnaire | Minimum score (%) | Maximum score (%) | Internal consistency, Cronbach’s α (n = 383**) | Test–retest reliability, ICC*** (n = 106)
Generic:
RAND | 0 | 0 | 0.81 | 0.89
FSQ Generic | 0 | 2 | 0.80 | 0.92
FSQ Specific | 0 | 21 | 0.86 | 0.89
TAIQOL | N.A. | N.A. | 0.72–0.90 | 0.76–0.90
    Sleeping | 2 | 12 | 0.90 | 0.83
    Appetite | 0 | 22 | 0.86 | 0.82
    Positive mood | 0 | 80 | 0.90 | 0.81
    Liveliness | 0.6 | 81 | 0.88 | 0.76
    Problem behaviour | 1 | 4 | 0.86 | 0.85
    Communication | 0.4 | 53 | 0.88 | 0.82
Disease-specific:
OM-6 | 0 | 14 | 0.85 | 0.89
NRS Child | 2 | 3 | N.A. | 0.83
FFQ | 0.5 | 27 | 0.90 | 0.93
NRS Caregiver | 0 | 0 | N.A. | 0.81
* percentage of respondents with minimum (floor effect) and maximum (ceiling effect) scores; ** n = 169 for the TAIQOL subscales and NRS Caregiver; *** intraclass correlation coefficient
Reliability Cronbach’s alpha coefficients were adequate to high (range 0.72–0.90) for the TAIQOL subscales and high (range 0.80–0.90) for all other questionnaires. The calculation of inter-item correlations revealed no ‘hidden’ subscales or item redundancy (i.e., individual inter-item correlations that are too high, with possible loss of content validity) (Table 4). In order to assess test–retest reliability, 126 (79%) of the 160 approached caregivers completed a second set of questionnaires, of which 113 (71%) were completed within 2 weeks. Seven children with AOM at the time of the outpatient visit (test 1) were excluded, resulting in 106 sets for analysis (Table 4). ICCs were moderate to high for all questionnaires (range 0.81–0.93) and most TAIQOL subscales (range 0.76–0.90), but in the borderline range for the TAIQOL subscale ‘Liveliness’ (0.76). Construct and discriminant validity Table 5 shows the calculated correlations between the questionnaires, which ranged from moderate to strong for the RAND, FSQ Generic, FSQ Specific, OM-6, and FFQ. These outcomes show that 14 (67%) of the hypothesized correlations were correct. False predictions were mainly made for the NRS Child and NRS Caregiver, as their correlations with the other questionnaires were generally expected to be at least moderate, but were found to be weak. The disease-specific questionnaires (OM-6, NRS Child, FFQ and NRS Caregiver) showed moderate correlations (Spearman’s rho 0.39–0.49) with the frequency of AOM episodes in the preceding 6 months. Moderate correlations (Spearman’s rho 0.29–0.48) were also found between global FHS (RAND) and the disease-specific questionnaires on the one hand, and the number of physician visits for all upper respiratory tract infections (URTIs), a more global indicator of illness, on the other (Table 6).
Table 5 Construct validity: calculated correlations* between the questionnaires**
 | RAND | FSQ Generic | FSQ Specific | OM-6 | NRS Child | FFQ | NRS Caregiver
RAND | 1.00 | 0.52 | 0.49 | 0.34 | 0.33 | 0.43 | 0.49
FSQ Generic | | 1.00 | 0.80 | 0.37 | 0.25 | 0.43 | 0.24
FSQ Specific | | | 1.00 | 0.49 | 0.26 | 0.52 | 0.24
OM-6 | | | | 1.00 | 0.23 | 0.74 | 0.28
NRS Child | | | | | 1.00 | 0.22 | 0.47
FFQ | | | | | | 1.00 | 0.39
NRS Caregiver | | | | | | | 1.00
* Spearman correlation coefficients were calculated; ** correctly a priori predicted correlations were bold-printed in the original table
Table 6 Construct validity: correlations* between questionnaire scores and frequency of physician visits for URTI** and of AOM** episodes
Questionnaire | Frequency of physician visits for URTI | Frequency of AOM episodes***
Generic:
RAND | −0.48 | −0.31
FSQ Generic | −0.20 | −0.07#
FSQ Specific | −0.27 | −0.12##
Disease-specific:
OM-6 | −0.32 | −0.41
NRS Child | −0.41 | −0.49
FFQ | −0.29 | −0.39
NRS Caregiver | −0.41 | −0.40
* Spearman’s rho correlation coefficients were calculated; ** URTI: upper respiratory tract infection; AOM: acute otitis media; *** all correlations P < 0.001, except for # (P = 0.16) and ## (P = 0.02)
The RAND, FSQ Generic, FSQ Specific, OM-6 and FFQ were able to discriminate between children with moderately recurrent AOM (2–3 episodes per year) and “otitis-prone” children with severe, recurrent AOM (4 or more episodes per year) (Table 7). However, neither the two numerical rating scales (NRS Child and NRS Caregiver) nor the otitis media-related subscales of the TAIQOL discriminated between these two groups.
Table 7 Discriminant validity: scores of children with 2–3 vs. 4 or more AOM episodes in the preceding year*
Questionnaire | 2–3 AOM episodes | ≥4 AOM episodes | Mann–Whitney P-value
Generic:
RAND | 21.1 | 19.6 | 0.004
FSQ Generic | 76.5 | 72.2 | 0.002
FSQ Specific | 83.9 | 78.4 | 0.001
TAIQOL:
    Sleeping | 66.2 | 60.7 | 0.10
    Appetite | 74.7 | 73.2 | 0.44
    Liveliness | 93.2 | 91.3 | 0.81
    Positive mood | 92.0 | 92.5 | 0.97
    Problem behaviour | 64.8 | 60.9 | 0.24
    Communication | 83.8 | 84.5 | 0.69
Disease-specific:
OM-6 | 18.9 | 17.0 | <0.001
NRS Child | 5.2 | 5.4 | 0.48
FFQ | 84.9 | 78.5 | <0.001
NRS Caregiver | 6.6 | 6.2 | 0.22
Calculated by Mann–Whitney test; * 2–3 episodes indicates moderate and ≥4 episodes serious AOM
Responsiveness According to our external criterion of change (a reduction of 2 or more episodes of AOM per year), 270 (70%) of the 383 children were classified as ‘changed’ for the first interval (0–7 months) and 126 children (33%) for the second interval (7–14 months). The two intervals differed considerably regarding the reduction of AOM incidence; during the 0–7 months of follow-up the mean incidence per child decreased by 1.8 AOM episodes, whereas during the 7–14 months of follow-up the mean decrease was 0.35 episodes [74]. Sensitivity to change Sensitivity to change, expressed as significant mean change and effect size, is presented in Table 8. Except for most TAIQOL subscales, generic as well as disease-specific questionnaires yielded significant change scores during both follow-up periods, ranging from 4.9 to 28.3 on a 0–100 scale. Absolute change scores for the first follow-up period were generally larger (range 0.4–28.3) than for the second period (range −2.8–14.2).
Table 8 Sensitivity to change: mean change scores* and effect sizes** for changed subjects
Questionnaire | Mean change score 0–7 months# (n = 270***), P-value | Mean change score 7–14 months (n = 126****), P-value | Effect size (GRS) 0–7 months (n = 270***) | Effect size (GRS) 7–14 months (n = 126****)
Generic:
RAND | 10.2, <0.001 | 7.7, <0.001 | 0.60 | 0.54
FSQ Generic | 7.0, <0.001 | 4.9, 0.001 | 0.37 | 0.29
FSQ Specific | 9.1, <0.001 | 6.0, <0.001 | 0.37 | 0.32
TAIQOL:
    Sleeping | 9.9, <0.001 | 7.1, 0.03 | 0.37 | 0.36
    Appetite | 6.8, 0.001 | 0.0, 1.0 | 0.28 | 0.00
    Problem behaviour | 0.4, 0.80 | −2.8, 0.33 | 0.02 | 0.13
    Positive mood | 1.5, 0.30 | 3.9, 0.11 | 0.06 | 0.25
    Liveliness | 2.3, 0.19 | 1.6, 0.51 | 0.22 | 0.11
    Communication | 2.9, 0.12 | 1.7, 0.32 | 0.16 | 0.11
Disease-specific:
OM-6 | 16.6, <0.001 | 11.5, <0.001 | 0.60 | 0.73
NRS Child | 28.3, <0.001 | 14.2, <0.001 | 0.91 | 0.64
FFQ | 13.6, <0.001 | 8.0, <0.001 | 0.55 | 0.60
NRS Caregiver | 19.2, 0.003 | 9.1, 0.003 | 0.95 | 0.57
* calculated with paired t-test; ** calculated with Guyatt’s responsiveness statistic (GRS); *** n = 114 for TAIQOL subscales and NRS Caregiver; **** n = 51 for TAIQOL subscales and NRS Caregiver; # follow-up interval
The effect sizes for the generic FHS questionnaires ranged from small to moderate (0.29–0.60). For the generic TAIQOL subscales, however, the effect sizes were lower, ranging from almost zero for the subscales ‘Appetite’ (0.00), ‘Problem behaviour’ (0.02) and ‘Positive mood’ (0.06) to small for ‘Sleeping’ (0.37) and ‘Liveliness’ (0.22). Effect sizes for the disease-specific questionnaires were moderate to large (0.55–0.95). For all questionnaires the effect sizes were quite similar for the first (0–7 months) and second (7–14 months) intervals, whereas the absolute change scores were smaller for the second interval. The TAIQOL was excluded from further analyses on the interpretation of change, due to its poor sensitivity to change. Interpretation of change—distribution-based methods Minimally clinical important differences (MCIDs) calculated with distribution-based methods are presented in Table 9. During the first interval, the ES-MCIDs, using an effect size of 0.3 as benchmark, were somewhat smaller for the generic questionnaires, ranging from 5.0 to 7.4 on a 0–100 scale, than those for the disease-specific questionnaires, ranging from 6.1 to 9.4. During the second interval, however, the ES-MCIDs for generic and disease-specific questionnaires were comparable (range 4.0–6.7), indicating that for both types of questionnaires similar change scores are needed in order to be clinically relevant.
Table 9 Responsiveness: distribution-based indices for the minimally clinical important difference (MCID), using a 0.3 effect size (ES) and one standard error of measurement (SEM)
Questionnaire | ES-MCID* 0–7 months# | ES-MCID* 7–14 months | SEM-MCID** 0–7 months | SEM-MCID** 7–14 months
Generic:
RAND | 5.0 | 4.3 | 5.3 | 4.5
FSQ Generic | 5.7 | 5.1 | 5.4 | 4.8
FSQ Specific | 7.4 | 5.6 | 7.8 | 5.9
Disease-specific:
OM-6 | 8.3 | 4.7 | 8.8 | 5.0
NRS Child | 9.4 | 6.7 | 12.5 | 8.9
FFQ | 7.4 | 4.0 | 6.1 | 3.3
NRS Caregiver | 6.1 | 4.8 | 8.3 | 6.6
* MCID using a 0.3 effect size as benchmark; ** MCID using one SEM as benchmark; # follow-up interval
Except for the NRS Child and NRS Caregiver, the SEM-MCIDs were quite comparable with the ES-MCIDs for both generic and disease-specific questionnaires. Assuming that the MCIDs estimated using either an effect size of 0.3 or one SEM as benchmark are correct, our results suggest that the range of the distribution-based MCID for generic as well as disease-specific questionnaires corresponds with a change of 3–9 points on a 0–100 scale (see Table 9). Interpretation of change—anchor-based methods Changes in AOM frequency (AOM incidence per child per year) were compared to the magnitude of the change scores on the FHS and HRQoL questionnaires.
A small change in AOM frequency (a reduction of 2 AOM episodes, which is considered a MCID) corresponded with 3–10 points of change on a 0–100 scale for the generic questionnaires (Graph 1a), and with 5–15 points of change for the disease-specific questionnaires, except for the NRS Child during the 0–7 months interval, with 29 points of change. Graph 1 Responsiveness: change scores per questionnaire corresponding with an anchor-based responsiveness index: (a) AOM frequency; (b) AOM severity (OM-FSQ score). Likewise, a small improvement in AOM severity corresponded with change scores ranging from 2 to 10 points on a 0–100 scale for the generic questionnaires and with change scores from 4 to 8 points for the disease-specific questionnaires, except again for the NRS Child, with 16 and 17 points of change (Graph 1b). Change scores corresponding with moderate to large changes in AOM frequency and severity are also presented in Graphs 1a, b. Comparing small change with moderate to large change shows that, overall, the larger the change in AOM severity or frequency, the larger the magnitude of the change score on the questionnaires. However, this trend did not hold for the FSQ Generic and the disease-specific NRS Child (e.g., a small change in AOM severity equalled a change score of 17 on the NRS Child, whereas a moderate to large change equalled a change score of 13). Comparison of anchor- and distribution-based methods Comparing the results of the anchor-based methods with those of the distribution-based methods (Graph 2) showed that the generic questionnaires (RAND, FSQ Generic, and FSQ Specific), the disease-specific questionnaires (OM-6 and FFQ) and the NRS Caregiver yielded quite similar estimates of the MCID for both methods (3–9 points on a 0–100 scale for distribution-based and 2–15 points for anchor-based methods) as well as for both follow-up periods (4–15 points for the 0–7 months interval, 2–8 points for the 7–14 months interval). Averaging these distribution-based and anchor-based estimates of the MCID yields a point estimate of the MCID for generic questionnaires of 6.0 (range 2–10) and for disease-specific questionnaires of 7.3 (range 3–15) on a 0–100 scale (excluding the NRS Child, as it had much larger estimates of the MCID). Graph 2 Minimally clinical important difference (MCID) per questionnaire according to distribution-based (ES-MCID and SEM-MCID) and anchor-based (AOM frequency and AOM severity) methods. Discussion In this study, the reliability and validity of generic as well as disease-specific FHS and HRQoL questionnaires have been assessed in the setting of a RCT concerning children with recurrent AOM. Most generic (RAND, FSQ Generic and FSQ Specific) and disease-specific (OM-6 and FFQ) questionnaires showed similar, good to excellent reliability and adequate construct and discriminant validity. Construct validity was poor for the numerical rating scales (NRS Child and NRS Caregiver), and discriminant validity was low to moderate for both NRS and for the subscales of the TAIQOL considered to be otitis media-related (Tables 4, 5, 6 and 7). Generic as well as disease-specific questionnaires proved to be sensitive to change in the incidence of AOM (Table 8). The effect sizes ranged from small to moderate for the generic questionnaires and from moderate to large for the disease-specific questionnaires (Table 8). The MCIDs for generic and disease-specific questionnaires were quite similar (Table 9 and Graphs 1 and 2). However, most otitis media-related subscales of the TAIQOL, the only true HRQoL questionnaire, proved insensitive to change.
Reliability and validity The results on internal consistency and test–retest reliability of the RAND, FSQ Generic, FSQ Specific, TAIQOL and OM-6 found in this study were comparable with those of previous studies using these questionnaires [14, 41, 42, 51, 52]. The consistency of results across different paediatric populations supports the reliability of these questionnaires. Similar to the poor discriminant validity of the otitis media-related TAIQOL subscales in this study, Fekkes et al. [51] found that the TAIQOL subscales ‘Problem behaviour’, ‘Positive mood’, and ‘Liveliness’ discriminated neither between healthy and preterm children nor between healthy and chronically ill children. The ability of the RAND, FSQ Generic and FSQ Specific to discriminate between children who differed in AOM frequency, on the other hand, supports the discriminant validity previously found in children with asthma and healthy children [41, 42]. However, the heterogeneity of the methods used limits the comparability of the validity results of this study with those from previous studies. The FFQ and NRS Caregiver are newly composed questionnaires to assess the influence of recurrent AOM on the caregiver and family. The FFQ demonstrated excellent reliability and validity, meeting the minimally required reliability coefficient of 0.90 for individual assessment [65, 87]. Its strong correlation with the OM-6 supports its complementary usefulness in FHS and HRQoL assessment in children with rAOM. Results for the NRS Caregiver, however, were as poor as those observed for the NRS Child, which needs further exploration. Their global, single-item assessment of HRQoL may be too crude to reflect subtle differences in HRQoL [88, 89]. On the other hand, comments of the caregivers indicated that some of them may have misunderstood the NRS test instructions. This is supported by the fact that the construct validity improved during the follow-up assessments, presumably due to learning effects after reading the instructions a second time. Responsiveness So far, little attention has been given to the responsiveness of the questionnaires used in our study. Only Rosenfeld et al. [55] assessed effect sizes for the OM-6 (using a standardized response mean), which were much larger (1.1–1.7) than the ones found in this study. This may be explained by the use of different identifiers of change. Rosenfeld et al. [55] used an intervention with expected clinical effectiveness, for which proxies were not blinded, as the indicator of change. Since pneumococcal vaccination proved to be clinically ineffective [74], treatment could not be used as an external criterion for change in our study. Instead, a change of 2 or more AOM episodes per year was used as the criterion to identify changed subjects. In addition, social desirability and expectancy bias may have influenced the outcome of the study of Rosenfeld et al. [55]. Although clinical criteria such as a change in the incidence of AOM episodes have been suggested as adequate alternative criteria to identify change [34], the choice of any external criterion for change remains somewhat arbitrary. It is a surrogate measure that often reflects only one aspect of the QoL construct. The poor responsiveness of the TAIQOL subscales ‘Problem behaviour’, ‘Positive mood’ and ‘Liveliness’, for example, may indicate that our clinical indicator is less suitable as an external criterion for change in emotional and behavioural functioning.
However, considering the overall poor responsiveness of all twelve TAIQOL subscales (results not shown), it seems more likely that the poor responsiveness is an intrinsic property of these three subscales as well. Several studies have supported the empirically found link between one SEM and the MCID of HRQoL questionnaires [75, 81, 85, 86]. In this study, the MCIDs based on the value of one SEM largely corresponded with the MCIDs estimated using a 0.3 ES as benchmark, which further supports the one-SEM as an indicator of the MCID (Table 9). However, it should be realized that the SEM as well as the ES are both only statistical indicators, which relate change to random (error) variance. Interestingly, the anchor-based methods yielded similar estimates of the MCIDs (Graphs 1a, b, 2), which is in agreement with recent observations that one SEM equals the anchor-based MCID in patients with moderately severe illness [90]. By applying and comparing multiple methods as well as two evaluation periods, we have been able not only to demonstrate consistency in responsiveness but also to give ranges for minimally clinical important changes instead of point estimates. As there is no ‘gold standard’ for the assessment of responsiveness in FHS and HRQoL measurement, a range of scores gives a more realistic reflection of responsiveness than a point estimate. Point estimates can be misapplied by users who are unaware either of the limited precision of the data used for estimating the MCID or of the intrinsic limitations of dichotomising what is actually a continuum. Generic versus disease-specific questionnaires Although generic questionnaires are generally expected to be less sensitive to differences in FHS or HRQoL than disease-specific questionnaires [19, 37, 91, 92], in this study most disease-specific questionnaires performed only marginally better than the generic questionnaires on the discriminant validity test. Likewise, the responsiveness of generic questionnaires, and their usefulness as measures of outcome in randomized trials, has been questioned [21]. Although in some studies generic measures were indeed found to be less responsive to treatment effects than specific measures [93–96], other studies did find comparable responsiveness [97–99]. In this study, only the smaller effect sizes of the FSQ Generic and FSQ Specific may indicate that the sensitivity to change of generic questionnaires is somewhat poorer than that of disease-specific questionnaires. Possibly, the higher sensitivity of the disease-specific questionnaires at the start of the study reflects the higher incidence of symptoms and functional limitations specific to AOM at that stage, whereas during the study the AOM incidence decreased and consequently AOM symptoms became less prominent compared to other health problems. Overall, the generic questionnaires appeared to be as sensitive to clinical change as the disease-specific questionnaires, except for the TAIQOL. For the FSQ Generic and FSQ Specific, but not for the RAND, which assesses general health perceptions, sensitivity to differences and change in FHS could be explained by their content, as they include many physical and emotional behaviour items that may be affected by rAOM. The more relevant a questionnaire is to a particular condition, the more sensitive it is likely to be.
The sensitivity of the RAND, assessing general health and resistance to illness, may indicate that it captures the perception of caregivers of children with rAOM that the overall health of their child is worse compared with that of other children. It may also reflect the significant co-morbidity, such as chronic airway problems and atopic symptoms, in the study population (Table 3). The reasons for the poor performance of the TAIQOL with regard to both discriminant validity and sensitivity to change are not obvious. Possibly each of the subscale scores represents an aspect of HRQoL that is too limited to be sensitive to differences or change; combining the subscales into more comprehensive constructs may then improve sensitivity. In addition, each item of the TAIQOL consists of two questions: a question about FHS is followed by the request to rate the child’s well-being in relation to this health status. Response shift bias may have modified the caregivers’ expectations about how their child feels in line with the child’s changing health; that is, caregivers may rate their child’s well-being as better than it actually is as they adapt to the situation. Studies are needed on factors that, besides the type of questionnaire (generic versus disease-specific), may influence sensitivity to change or responsiveness, such as questionnaire structure and content, disease severity, co-morbidity and other population characteristics. Bias and generalisability There are several issues that need to be considered when interpreting the current results. First, the frequency of AOM episodes at enrolment was based on proxy report, whereas during the trial only physician-diagnosed episodes were counted. The number of AOM episodes in the year prior to inclusion is likely to be overestimated by proxies [100], resulting in an underestimation of the HRQoL change scores, because caregivers may have evaluated the situation as worse than it objectively was in the first place. However, if such a recall bias regarding AOM frequency was in fact present, it may also have influenced the caregivers’ reflection on subjective measures such as FHS and HRQoL, which would result in realistic or even overestimated change scores. Moreover, estimating responsiveness for the interval of 7–14 months, in which the AOM frequency was not affected by recall bias since all episodes were physician-diagnosed, yielded similar results. This indicates that recall bias does not appear to have influenced responsiveness substantially. Secondly, in assessing test–retest reliability, two different modes of questionnaire administration were used: completion at the clinic versus completion at home. The possible intention to give more socially desirable answers at the clinic, as well as other effects such as being more distracted when filling in the questionnaires at home, may have caused differences in questionnaire scores between the first (test) and second (retest) assessment. Although this impact may be larger for single-item questionnaires such as the NRSs than for multiple-item questionnaires, and might explain their somewhat smaller ICCs, the impact on the ICCs appears to be small. Thirdly, during the trial, 8 children (4.2%) in the pneumococcal vaccine group and 13 (6.7%) in the control vaccine group were lost to follow-up. One child switched from the control to the pneumococcal vaccine group. It is unlikely that these small numbers of dropouts and crossovers influenced the trial results.
Furthermore, indices of validity and reliability are not fixed characteristics of FHS and HRQoL questionnaires but are influenced by the study design, the intervention, and the study population in particular. Our study population had relatively severe ear disease with frequent episodes and was older than the average child with AOM. Assessment of the reliability and validity of the questionnaires in populations with less severe disease may reveal more ceiling effects and a lack of discriminant validity. Therefore, the results of this study should only be generalized to paediatric populations with moderately to seriously severe recurrent acute ear infections at an older age (approximately 14–54 months). Finally, of all the questionnaires in this study, only the FFQ demonstrated a reliability that meets the minimally required reliability coefficients for the individual assessment of HRQoL. Although some authors suggest using FHS and HRQoL questionnaires for individual assessment in clinical practice as well [31], we do not support this approach. It has been suggested that routine use of these questionnaires would facilitate the detection and discussion of psychological issues and help guide decisions regarding, for example, referral. However, considering the complexity and many pitfalls of reproducibility and responsiveness assessment, the use of HRQoL and FHS questionnaires in the follow-up of individual patients is neither reliable nor valid. Recommendations for clinical use In conclusion, the generic (RAND, FSQ Generic and FSQ Specific) as well as the disease-specific (OM-6, FFQ, and, to a lesser extent, NRS Caregiver) questionnaires demonstrated similarly high reliability and adequate construct and discriminant validity, as well as sufficient responsiveness, to justify their use in clinical studies of children with rAOM. However, numerical rating scales as used in this study may be less adequate for the assessment of HRQoL in this population. The TAIQOL, the only true generic HRQoL questionnaire, unfortunately showed poor discriminant validity and sensitivity to change, and needs extensive revision before further use in clinical outcome studies in children with otitis media. Using both a generic questionnaire (RAND or FSQ) and the OM-6 in clinical studies regarding FHS in children with rAOM is recommended, as this combines the merits of generalisability and sensitivity in outcome assessment and facilitates head-to-head comparisons of their performance in various paediatric populations with OM. More studies are needed that assess the responsiveness of paediatric QoL questionnaires by multiple, distribution-based as well as anchor-based, methods, to increase our appreciation of minimal clinically important changes in various paediatric conditions. Further studies on factors that, besides the type of questionnaire (generic versus disease-specific), may influence sensitivity to change or responsiveness, such as questionnaire structure and content, disease severity, co-morbidity and other population characteristics, may increase our appreciation of the complex dynamics in HRQoL and FHS assessment.
[ "reliability", "validity", "functional health status", "quality of life", "acute otitis media", "responsiveness", "childhood infection" ]
[ "P", "P", "P", "P", "P", "P", "P" ]
Purinergic_Signal-3-4-2072921
Mapping P2X and P2Y receptor proteins in striatum and substantia nigra: An immunohistological study
Our work aimed to provide a topographical analysis of all known ionotropic P2X1–7 and metabotropic P2Y1,2,4,6,11–14 receptors that are present in vivo at the protein level in the basal ganglia nuclei, and particularly in rat brain slices from striatum and substantia nigra. By confocal immunohistochemistry and Western blotting techniques, we show that, with the exception of the P2Y11,13 receptors, all other subtypes are specifically expressed in these areas in different amounts, with ratings of low (P2X5,6 and P2Y1,6,14 in striatum), medium (P2X3 in striatum and substantia nigra, P2X6,7 and P2Y1 in substantia nigra) and high (all remaining subtypes). Moreover, we describe that P2 receptors are localized on neurons (colocalizing with neurofilament light, medium and heavy chains) with features that are either dopaminergic (colocalizing with tyrosine hydroxylase) or GABAergic (colocalizing with parvalbumin and calbindin), and that they are also present on astrocytes (P2Y2,4, colocalizing with glial fibrillary acidic protein). In addition, we aimed to investigate the expression of P2 receptors after dopamine denervation, obtained by unilateral injection of 6-hydroxydopamine as an animal model of Parkinson’s disease. This generates a rearrangement of P2 proteins: most P2X and P2Y receptors are decreased on GABAergic and dopaminergic neurons in the lesioned striatum and substantia nigra, respectively, as a consequence of dopaminergic denervation and/or neuronal degeneration. Conversely, P2X1,3,4,6 on GABAergic neurons and P2Y4 on astrocytes augment their expression exclusively in the lesioned substantia nigra reticulata, probably as a compensatory reaction to dopamine shortage. These results disclose the presence of P2 receptors in the normal and lesioned nigro-striatal circuit, and suggest their potential participation in the mechanisms of Parkinson’s disease. Introduction It is now well established that the arrangement of ionotropic P2X and metabotropic P2Y receptors [1, 2] on a cell membrane is a very dynamic process, often related to developmental or physiopathological conditions. Moreover, it is common knowledge that multiple P2 proteins are simultaneously recruited on a cell membrane for triggering biological functions. As a consequence, P2 receptors are rightly considered more than the sum of their single entities and must therefore be regarded as a complex network of cooperating receptors. Under this perspective, a numerical model was also introduced, the combinatorial receptor web model, which explains the biological efficacy of combining an assorted array of different P2 proteins on a given cell, in order to integrate, upgrade, guarantee and optimize specific receptor-dependent functions [3]. This trend of course applies to the central nervous system (CNS) as well, where in situ hybridization of P2 mRNA subtypes and immunohistochemistry of P2 proteins shows, for instance, a wide but heterogeneous simultaneous distribution of both the P2X [4–11] and P2Y [12–15] classes of receptors. In particular, the P2X2,4,6 and P2Y1 subtypes are abundant and widespread in approximately the entire brain, while P2X1 protein is enriched in the cerebellum, P2X3 in the brain stem, and P2X7 is largely prejunctional. The hippocampus concurrently expresses all P2X and, moreover, the P2Y1,2,4,6,12 receptor subtypes. Particularly in the basal ganglia (BG), neostriatal medium-spiny neurons and cholinergic interneurons highly express P2X2 and P2Y1 receptors, but it appears that these become functional only under certain, as yet unknown, conditions [16].
Moreover, P2X2 receptor protein was described in the substantia nigra pars compacta (SNC) [17], whereas both protein and mRNA were described in SNC and striatum [18]. Finally, only very low levels of P2X4,6 mRNAs were detected in substantia nigra (SN) and striatum [19]. By functional analysis, ATP release was demonstrated from cultured embryonic neostriatal neurons [20], and ATP-evoked potassium currents in rat striatal neurons were shown to be mediated by P2 receptors [21]. ATP was also proved to increase extracellular dopamine levels in rat striatum through stimulation of P2Y subtypes [22], although it was claimed to inhibit dopamine release in the neostriatum [23]. Extracellular ATP acting via P2 receptors was finally reported to induce neurotoxicity in the striatum in vitro [24] and in vivo [25]. Besides P2 receptors on neurons, in the BG there is also evidence of P2 receptors on, and release of ATP from, glial cells. The P2Y12 subtype is present, for instance, on oligodendrocytes in striatum and SN [26], and the P2X7 receptor is upregulated on microglia in striatum after middle cerebral artery occlusion [27]. In spite of these results, there is a general paucity of studies addressing the cellular distribution of all P2 receptor proteins in the BG. Our work thus aimed to provide a complete topographical analysis of the known P2X and P2Y subtypes that are present in rat striatum and SN in vivo, and to investigate the dynamic presence of P2 proteins after the induction of experimental parkinsonism by dopamine denervation, achieved using the unilateral 6-hydroxydopamine (6-OHDA) rat model. By upgrading the current map of P2 receptors expressed in the brain, our study discloses the potential impact of these receptors in the normal and lesioned nigro-striatal circuit. Materials and methods Histological procedures Wistar rats (Harlan, Udine, Italy) were anesthetized by i.p. injections of sodium pentobarbital (60 mg/kg) and transcardially perfused with saline (0.9% NaCl) followed by 4% paraformaldehyde in phosphate buffer (PB, 0.1 M, pH 7.4). Each brain was immediately removed, post-fixed in the same fixative for 2 h, and then transferred to 30% sucrose in PB at 4°C until it sank. The experimental protocol used in this study was approved by the Italian Ministry of Health and was in agreement with the guidelines of the European Communities Council Directive of November 24, 1986 (86/609/EEC) for the care and use of laboratory animals. All efforts were made to minimize the number of animals used and their suffering. Double immunofluorescence Transverse sections (40 μm thick) were cut on a freezing microtome and processed for double immunofluorescence studies. Non-specific binding sites were blocked with 10% normal donkey serum in 0.3% Triton X-100 in phosphate-buffered saline (PBS) for 30 min at room temperature. The sections were incubated in a mixture of primary antisera for 24–48 h in 0.3% Triton X-100 in PBS.
Rabbit anti-P2r (1:300, Alomone, Jerusalem, Israel) was used in combination with either mouse anti-calbindin-D-28K (1:200, Sigma, Mi, Italy), mouse anti-tyrosine hydroxylase (TH, 1:500, Sigma), mouse anti-parvalbumin (1:200, Chemicon International, Temecula, CA, USA), mouse anti-glial fibrillary acidic protein (GFAP) (1:400, Sigma), mouse anti-myelin basic protein (MBP, 1:200, Chemicon International), mouse anti-neurofilament H non-phosphorylated (SMI 32, 1:500, Sternberger Monoclonals, Lutherville, MD, USA), mouse anti-neurofilament H and M non-phosphorylated (SMI 33, 1:500, Sternberger Monoclonals), mouse anti-neurofilament 160 (NF160, 1:500, Sigma) or goat anti-neurofilament-L protein (NF-L, 1:100, Santa Cruz, Mi, Italy). The secondary antibodies used for double labeling were Cy3-conjugated donkey anti-rabbit IgG (1:100, red immunofluorescence, Jackson Immunoresearch, West Baltimore Pike, PA, USA), Cy2-conjugated donkey anti-mouse IgG (1:100, green immunofluorescence, Jackson Immunoresearch) or Cy2-conjugated donkey anti-goat IgG (1:100, green immunofluorescence, Jackson Immunoresearch). The sections were washed in PBS three times for 5 min each, and then incubated for 3 h in a solution containing a mixture of the secondary antibodies in 1% normal donkey serum in PBS. After rinsing, the sections were mounted on slide glasses, allowed to air dry and coverslipped with gel/mount anti-fading medium (Biomeda, Foster City, CA, USA). Confocal microscopy Double- or triple-label immunofluorescence was analyzed by means of a confocal laser scanning microscope (CLSM) (LSM 510, Zeiss, Arese, Mi, Italy) equipped with an argon laser emitting at 488 nm, a helium/neon laser emitting at 543 nm, and a helium/neon laser emitting at 633 nm. Specificity of the antibodies was confirmed by performing confocal analysis in the absence of the primary antibodies, but in the presence of either the anti-rabbit or anti-mouse secondary antibodies. Specificity was further confirmed for the P2r antisera by performing immunoreactions in the simultaneous presence of the P2r neutralizing immunogenic peptides. Isolation of cerebral areas and protein extraction Wistar rats were anesthetized by i.p. injections of sodium pentobarbital (60 mg/kg) and, after decapitation, the brains were removed. Each brain was transversally cut on a vibratome (300 μm). The specific cerebral areas were isolated with the aid of a dissection microscope and homogenized in RIPA buffer (1% Nonidet P-40, 0.5% sodium deoxycholate, 0.1% SDS in PBS containing protease inhibitors). After short sonication, the homogenates were incubated on ice for 1 h and centrifuged at 14,000 r.p.m. for 10 min at 4°C. Protein quantification was performed on the supernatants by Bradford colorimetric assay (Biorad, Milan, Italy). Western blot analysis Equal amounts of cell lysate (20–30 μg of protein from each cerebral area) were separated by electrophoresis on 10–12% SDS-PAGE and transferred to Hybond-C extra nitrocellulose membranes (Amersham Biosciences, Cologno Monzese, Italy). The filters were pre-wetted in 5% non-fat milk in TBS-T (10 mM Tris pH 8, 150 mM NaCl, 0.1% Tween 20) and hybridized overnight with P2X1,2,4 antisera (Alomone, 1:500), P2X5 and P2Y4/14 (1:200), P2Y6 (1:300) or P2Y2 (1:400). The antisera were immunodetected with an anti-rabbit HRP-conjugated antibody (1:5,000) and developed by ECL chemiluminescence (Amersham Biosciences), using a Kodak Image Station (KDS IS440CF).
Anti-P2r specificity The polyclonal P2r antisera used in this study were raised against highly purified P2r peptides (identity confirmed by mass spectrography and amino acid analysis, as indicated in the certificate of analysis provided by the manufacturer), corresponding to specific epitopes not present in any other known protein. The specificity of the P2r signals was moreover assessed by incubating Western blots either in the absence of the primary antiserum, or in the presence of the primary antiserum together with the neutralizing P2r immunogenic peptides (μg protein ratio 1:1 between peptide and antiserum). 6-OHDA lesion and Nissl staining Deeply anesthetized rats (45 days old, about 150 g body weight) were injected with 8 μg/4 μl 6-OHDA in saline with 0.1% ascorbic acid in the medial forebrain bundle (stereotaxic coordinates ap = −4.4; l = +1.2; vd = −7.8, see also Paxinos et al. [28]) at a rate of 0.38 μl/min. Fifteen days later, the lesioned rats were tested with 0.05 mg/kg s.c. of the D1/D2 dopamine agonist apomorphine, in order to verify the efficacy of the 6-OHDA lesion, and turns contralateral to the lesion were counted for 40 min. Only those rats that made at least 200 contralateral turns were used for the study. It has previously been demonstrated that rats meeting this screening criterion have greater than 95% depletion of striatal dopamine [29]. At 1.5 months after the 6-OHDA lesion, the rats were used for immunohistological experiments (n = 3). In order to evaluate cell damage, 40-μm rat brain sections were mounted onto gelatinized slides. They were dehydrated through alcohols, then rehydrated and stained in 2% cresyl violet for 45 min. Following deionized water rinses, the slides were dehydrated in a standard alcohol series, cleared in xylene, and coverslipped. Results P2X and P2Y receptor proteins in rat striatum We describe in this work the cellular and subcellular in vivo distribution of P2X and P2Y receptors in transverse sections of adult rat striatum, showing by double immunofluorescence confocal analysis that the various P2 subtype proteins are distinguished by different degrees of expression and are not uniformly distributed throughout the entire tissue (Fig. 1). Fig. 1 P2X and P2Y receptor proteins in rat striatum. Transverse sections through the striatum of adult rats were processed for double immunofluorescence studies. Rabbit polyclonal antisera against P2X1,2,4 and P2Y4 receptors (red Cy3 immunofluorescence) were used in combination with antibodies against neuronal or glial markers (green Cy2 immunofluorescence). Panel A P2X1: confocal images illustrate clear colocalization of the P2X1 receptor with neurofilament-light protein (NF-L). The merged field of inset a shows absence of colocalization between the neuronal GABAergic marker calbindin (green) (a calcium-binding protein expressed mainly in medium spiny neurons of the striatum) and the P2X1 receptor (red). Inset b shows the merged field of P2X1 (red) and MBP (green) overlapping immunoreactivities at higher magnification. Panel B P2X2: double immunofluorescence demonstrates that P2X2 receptor immunoreactivity (red) colocalizes with calbindin protein (green). Insets e and f show colocalization with the neuronal GABAergic marker parvalbumin (green) (a calcium-binding protein that is expressed in interneurons of the striatum).
Panel C, P2X4: red immunofluorescence for P2X4 protein merges with the green signals of the three types of neurofilament proteins, NF-L (inset g, merged field), NF160 (inset i, merged field) and SMI 32, and, moreover, with parvalbumin (inset h, merged field) and calbindin (inset l, merged field). Panel D, P2Y4: red P2Y4 immunoreactivity is present on calbindin-positive neurons (green), on parvalbumin-positive neurons (green; insets n–q), and on GFAP-positive astrocytes (inset o, merged field). Western blot analysis also confirms the presence of the receptor proteins P2X1,2,4 (insets c in panel A, d in panel B and m in panel C, respectively) and P2Y4 (inset r in panel D) in striatum. Specificity of the P2 receptor signals was assessed by incubating the primary antisera with the corresponding neutralizing immunogenic peptides (μg protein ratio 1:1 between peptide and antiserum). Scale bars are 10 μm in A; 100 μm in inset a; 2 μm in inset b; 50 μm in B and in insets e and f; 20 μm in C; 10 μm in insets h, i, l; 5 μm in inset g; 50 μm in D; 5 μm in inset n; and 20 μm in inset o. Similar results were obtained in at least four independent experiments.

In particular, strong P2X1 receptor immunoreactivity (red) confers a patchy appearance to the striatum, being localized mainly in white matter while sparing the projecting calbindin-positive GABAergic neurons that are highly enriched in gray matter (Fig. 1A, inset a, green). Moreover, P2X1 immunofluorescence is present on NF-L-positive, transversally oriented neuronal fibers, although the merged field shows only partial colocalization between the two signals (Fig. 1A). In addition, high-magnification analysis (Fig. 1A, inset b) of the P2X1 (red) and MBP (green) immunoreactive signals shows that the P2X1 receptor is surrounded by MBP, demonstrating the presence of P2X1 protein on myelinated fibers. Owing to the close vicinity of the two signals, overlapping yellow immunofluorescence is also observed. Finally, the P2X1 receptor in striatum is recognized by Western blot analysis as a single protein band of 60–65 kDa, which is abolished in the presence of the P2X1 receptor–neutralizing immunogenic peptide (Fig. 1A, inset c). Conversely, abundant P2X2 receptor immunoreactivity (red) is found in the gray matter of striatum (Fig. 1B), sparing the bundles of white matter. Specific receptor immunolabeling is present not only on the highly abundant calbindin-positive projecting GABAergic neurons, but also on the fewer parvalbumin-positive GABAergic interneurons (Fig. 1B, insets e, f). By Western blot analysis, we show that the P2X2 receptor is present in striatum as two isoforms of about 45 and 32 kDa, both abolished in the presence of the P2X2 receptor–neutralizing immunogenic peptide (Fig. 1B, inset d). P2X3 receptor immunostaining in striatum is of medium intensity (Table 1) and localizes mainly on GABAergic neurons of gray matter (data not shown).

Table 1 Map of P2 receptor proteins in striatum and substantia nigra (striatum / substantia nigra)
P2X1: +++ / +++
P2X2: +++ / +++
P2X3: ++ / ++
P2X4: +++ / +++
P2X5: + / +++
P2X6: + / ++
P2X7: + / ++
P2Y1: + / ++
P2Y2: +++ / +++
P2Y4: +++ / +++
P2Y6: + / +++
P2Y11: – / –
P2Y12: +++ / +++
P2Y13: – / –
P2Y14: – / +++
Relative abundance of all P2X and P2Y receptor proteins was analyzed by confocal immunofluorescence microscopy, as described in “Materials and methods”.
The intensity of the specific immunostaining was scored as follows: – = not detected; + = just sufficient to evaluate the presence and outline of positive cells; ++ = adequate to assess the morphological features of cell bodies and/or cellular processes; +++ = very bright.

The P2X4 receptor signal is instead very abundant in white matter, although present on a few fibers of gray matter as well (Fig. 1C, red). It partially colocalizes with all types of heavy-, light- and medium-chain neurofilament proteins (merged fields): SMI 32 (green), NF-L (inset g) and NF160 (inset i). Moreover, we find P2X4 protein also on GABAergic interneurons (inset h) and GABAergic spiny neurons (inset l). By Western blot analysis, we demonstrate that the P2X4 receptor is present in striatum as a single band of about 60 kDa, which is abolished in the presence of the P2X4 receptor–neutralizing immunogenic peptide (Fig. 1C, inset m). P2X5,6,7 and P2Y1 receptor immunoreactivities in striatum are very weak (Table 1) in gray matter and totally absent from white matter under our experimental conditions (data not shown). The P2Y2 receptor is highly expressed in striatum on axons of white matter and astrocytes of gray matter (Table 1). Moreover, it is detected as a double protein band in the 55–65 kDa range (data not shown). Strong P2Y4 receptor immunoreactivity is present only in the gray matter of striatum, localized on both types of GABAergic neurons: calbindin-positive (Fig. 1D) and parvalbumin-positive (insets n–q). The receptors are also widespread throughout the striatum on astrocytes, as shown by colocalization with the GFAP marker (inset o). By Western blot analysis, we show that the P2Y4 receptor is present in striatum as a double band of about 42 and 85 kDa (inset r), likely corresponding to the monomeric and dimeric aggregation states of the receptor [30, 31]. While the P2Y6 receptor is barely detectable (Table 1) on GABAergic neurons in striatum (data not shown), P2Y11,13,14 receptor proteins were not identified under our experimental conditions (Table 1). Finally, the P2Y12 receptor in striatum (Table 1) is abundantly expressed only on oligodendrocytes and myelin sheaths, as previously shown [26].

P2X and P2Y receptor proteins in substantia nigra
We conducted a parallel analysis of the cellular and subcellular in vivo distribution of P2X and P2Y receptors in transverse sections of adult rat SN. We show by double immunofluorescence confocal analysis that the different P2 receptor proteins display more comparable levels of expression than in the striatum, and are also more uniformly, although differentially, distributed throughout the SNC and SNR (Fig. 2). In particular, strong signals for ionotropic P2X2,5 (red, Fig. 2A, B), P2X1,4 (Table 1), metabotropic P2Y6,14 (red, Fig. 2C, D) and P2Y4 (Table 1), or moderate signals for P2X3,6 and P2Y1 receptors (Table 1), are present on dopaminergic (TH-positive) neurons of the SNC. Moreover, P2Y2 and P2Y12 receptors are abundantly expressed in SN (Table 1), P2Y2 on axons and astrocytes, and P2Y12 only on oligodendrocytes and myelin sheaths [26]. Conversely, in the SNR, weak P2X/Y receptor immunoreactivity is limited to sparse neuronal bodies, likely identified as GABAergic neurons by colocalization with parvalbumin (data not shown). The presence in SN at the tissue level of the ionotropic P2X2,5 (insets a in panel A and b in panel B of Fig. 2, respectively) and metabotropic P2Y6,14 (insets c in panel C and d in panel D of Fig. 2, respectively) proteins is confirmed by Western blot analysis, performed in all cases in the presence of the specific receptor–neutralizing immunogenic peptides.
Similarly to the striatum, immunoreactive signals for P2Y11,13 receptors were not identified under our experimental conditions (Table 1).

Fig. 2 P2X and P2Y receptor proteins in rat substantia nigra. Double immunofluorescence visualized by confocal analysis was performed on transverse sections through the substantia nigra of adult rats. Strong signals for ionotropic P2X2,5 and metabotropic P2Y6,14 (red Cy3 immunofluorescence) are present on dopaminergic neurons (TH-positive, green Cy2 immunofluorescence) of the substantia nigra pars compacta (SNC), whereas in the substantia nigra pars reticulata (SNR) P2X/Y immunoreactivity is limited to sparse neuronal bodies. Western blot analysis confirms the presence in substantia nigra of the receptor proteins P2X2,5 (insets a in panel A and b in panel B, respectively) and P2Y6,14 (insets c in panel C and d in panel D, respectively). Specificity of the P2 receptor signals was assessed by incubation of the primary antisera with the corresponding neutralizing immunogenic peptides (μg protein ratio 1:1 between peptide and antiserum). Scale bars in all panels are 50 μm. Similar results were obtained in at least four independent experiments.

6-Hydroxydopamine modulates the expression of selected P2 receptors in striatum and substantia nigra
No contralateral rotation indicative of a motor deficit was observed in rats before the 6-OHDA lesion; rotation was instead detected after the lesion (data not shown), together with loss of dopaminergic TH-positive neurons from the ipsilateral hemisphere of the SNC only (Fig. 3A and insets a, b).

Fig. 3 6-Hydroxydopamine modulates the expression of selected P2 receptor proteins in striatum and substantia nigra. Staining of rat substantia nigra after 6-hydroxydopamine treatment. Panel A: conventional microscopy images of Nissl staining show several dopaminergic neurons (arrows) in the contralateral control hemisphere, which are lost (asterisks) in the ipsilateral lesioned hemisphere. The specific ipsilateral dopaminergic lesion of the substantia nigra pars compacta (SNC) was also visualized by confocal TH immunostaining (green) (insets a, b). Panel B: confocal merged yellow images show upregulation of P2X1 receptor protein (red) in parvalbumin-positive GABAergic neurons (green) on the lesioned side of the substantia nigra pars reticulata (SNR) of 6-hydroxydopamine-treated rats. Panel C: confocal merged yellow images show a drastic increase in GFAP-positive astrocytes (green) on the lesioned side of 6-hydroxydopamine-treated rats and, correspondingly, an increase in the P2Y4 signal (red) (inset c). Scale bars are 100 μm in A, B and insets a, b; 20 μm in C; and 10 μm in inset c. Similar results were obtained in at least three independent experiments.

Concomitantly, we show that dopamine denervation in the 6-OHDA-lesioned rat generates a significant and selective rearrangement of P2 receptor proteins.
Whereas the expression pattern and immunofluorescence intensities of P2X1,4, P2Y2 (colocalizing with all neurofilaments and present in white matter on fibers projecting from the cortex) and P2Y12 (present on oligodendrocytes of white matter) remain constant in both the ipsi- and contralateral hemispheres after 6-OHDA treatment (as well as in control animals), all other P2X and P2Y receptors are decreased on parvalbumin- and calbindin-positive GABAergic neurons of the deafferented ipsilateral striatum (but not contralaterally or in control animals), as measured by semiquantitative analysis (Table 2) (n = 3).

Table 2 Map of P2 receptor modulation after dopamine denervation (ipsilateral striatum / ipsilateral SN)
P2X1: = / ↑GABA
P2X2: ↓GABA / ↓TH
P2X3: ↓GABA / ↓TH, ↑GABA
P2X4: ↓GABA / ↓TH, ↑GABA
P2X5: = / ↓TH
P2X6: = / ↓TH, ↑GABA
P2X7: = / =
P2Y1: = / ↓TH
P2Y2: = / =
P2Y4: ↓GABA / ↑GFAP
P2Y6: = / ↓TH
P2Y11: = / =
P2Y12: = / =
P2Y13: = / =
P2Y14: = / ↓TH
Relative increase (↑) or decrease (↓) in P2X and P2Y receptor proteins analyzed by confocal immunofluorescence microscopy in striatum and SN after in vivo treatment of rats with 6-hydroxydopamine (ipsilateral), compared with the control (unlesioned) brain hemisphere (contralateral). TH = presence in dopaminergic neurons; GABA = presence in GABAergic neurons; GFAP = presence in astrocytes.

Similarly, all P2X and P2Y receptors are lost in the lesioned (but not contralateral) substantia nigra pars compacta, consequent to the degeneration of the majority of TH-positive dopaminergic neurons (Table 2). Conversely, P2X1 (Fig. 3B) and P2X3,4,6 (Table 2) receptors present on GABAergic neurons, and P2Y4 receptors on astrocytes, increase their expression only in the ipsilateral substantia nigra pars reticulata adjacent to the lesioned pars compacta. In this same area a phenomenon of astrogliosis is also induced, as detected by the more abundant expression of GFAP-positive astrocytes (Fig. 3C).

Discussion
Because the roles of ATP in the CNS have until recently received little attention, often owing to a lack of appropriate research tools, our knowledge of the functional roles of P2 receptors in the brain is limited, although rapidly improving. As a group of nuclei interconnected with the cerebral cortex, thalamus and brainstem, and associated with a variety of functions such as motor control, cognition, emotion and learning, the BG [32] are a region that deserves thorough analysis. Our work was aimed at mapping in vivo the presence of P2 receptor subtypes in the BG nuclei of striatum and SN by immunofluorescence-confocal and Western blotting techniques. The specificity of the highly sensitive molecular probes used for the detection of all known P2X and P2Y receptor proteins has been previously validated [33, 34]. In addition, we undertook an analysis that excluded possible cross-reactivity for all antisera used. Our results not only establish that the majority of P2X (P2X1–7) and P2Y (P2Y1,2,4,6,11–14) receptors so far cloned from mammalian tissues are found in striatum and SN, but also demonstrate their distinctive localization on neurons and/or glial cells. In detail, we show that, with the exception of only the P2Y11 and P2Y13 receptors (whose immunoreactivity was not identified under our experimental conditions), all other subtypes are specifically localized in striatum and SN (both pars compacta and reticulata), although with different levels of expression, rated as low (P2X5,6 and P2Y1,6,14 in striatum), medium (P2X3 in striatum and SN; P2X6,7 and P2Y1 in SN) and high.
Moreover, while we show a prevalence of P2 receptors on neurons (P2X1,4 and P2Y2 colocalizing with neurofilament light, medium and heavy chains) with features that are either dopaminergic (P2X2–5 and P2Y1,4,6,14 colocalizing with TH, in SN) or GABAergic (P2X2–4 and P2Y4 colocalizing with parvalbumin and calbindin, in striatum), we also describe their expression on astrocytes (P2Y2,4 in striatum and SN, colocalizing with GFAP), microglia (P2X7, colocalizing with OX42) [27] and oligodendrocytes (P2Y12, colocalizing with MBP and RIP) [26]. By confirming previous autoradiographic studies [35, 36], our results therefore demonstrate the widespread but diversified distribution of P2 receptor proteins in striatum and SN, and extend to these nuclei the considerable biological complexity and molecular sophistication pertaining to P2 receptors [3]. Although the configuration of receptor subunits required for assembly into functional cation channels gated by extracellular ATP in different regions of the CNS, including the BG, is not yet known, the colocalization of so many different P2X subtypes in striatum and SN is certainly compatible with heteromultimeric assembly of ionotropic subunits. Since a growing body of biochemical and biophysical evidence now indicates that the propensity to form homo- and especially heteromultimers is also frequent among G protein-coupled receptors [37], including the P2Y subtypes [30, 31], the concurrent expression in striatum and SN of so many metabotropic receptors could likewise reflect a complex hetero-oligomeric architecture. Alternatively, the biological phenomenon of redundancy could account for the simultaneous presence of multiple P2 receptor subtypes in these nuclei, with the final outcome of increasing the structural and pharmacological heterogeneity of these brain regions. Finally, the composite architecture of P2 receptors that we have depicted in striatum and SN may also reflect a mechanism of receptor cooperative behavior (Volonté et al., personal communication) that sustains the concomitant complexity of this brain area in several tasks, such as the planning and modulation of movement pathways, cognitive processes involving executive functions, reward and addiction. These possibilities are, of course, not mutually exclusive. Striatal neurons, including the most abundant medium spiny neurons, receive convergent synaptic modulation from nigral dopaminergic neurons and from cortical glutamatergic projections [38]. The present study, showing that lesions of nigral dopaminergic neurons do not significantly affect purinergic receptors present on axons of striatal white matter but do generate a significant overall decrease in P2X and P2Y receptor proteins on striatal spiny neurons and GABAergic interneurons, thus confirms and extends the involvement of P2 receptors and extracellular ATP in the cortex-basal ganglia circuit [21]. Since dopaminergic denervation affects not only the nigrostriatal dopaminergic pathway but, as a consequence, also the corticostriatal glutamatergic pathway, with an increase in glutamatergic transmission [39–41] and in extracellular glutamate levels in the striatum [42], the reduced P2 receptor protein expression that we demonstrate in striatal gray matter could not only be a direct effect of the nigrostriatal denervation, but also a consequence of the disinhibitory mechanisms occurring in the corticostriatal circuit.
In this regard, it is well established that extracellular ATP participates in excitatory neurotransmission in the CNS [43], that release of extracellular ATP occurs in the CNS under both normal and pathological conditions [44] and, not least, that glutamate release is induced by extracellular ATP in CNS glutamatergic neurons [45]. Neurons of the pars compacta responsible for dopamine production in the brain, which we have shown here to completely lose their array of P2 receptors as a consequence of the neurodegeneration induced by 6-OHDA treatment, also receive inhibitory signals from the GABA-producing neurons of the pars reticulata [46]. Loss of dopamine neurons in the SNC, one of the main pathological features of Parkinson’s disease leading to a marked reduction in dopamine function in the brain, thus also impairs the inhibitory pathway of the SNR, with a consequent overactivation of GABAergic neurons. Our finding that the expression of both ionotropic P2X1,3,4,6 receptors on GABAergic neurons and metabotropic P2Y4 receptors on astrocytes is remarkably increased in the SNR after dopamine denervation therefore probably reflects a parallel compensatory overreaction of GABAergic neurons to dopamine shortage. One possible explanation is that purinergic mechanisms play a crucial role in the fine-tuned regulation not only of dopaminergic and glutamatergic cross-talk in the striatum, as occurs in the nucleus accumbens [47], but also of the GABAergic and dopaminergic interplay in SN, as occurs in the mesolimbic neuronal circuit [48]. This is consistent with the versatile functions performed by P2 receptors in the CNS under both normal and pathological conditions [43, 44, 49] and, in particular, with the intermediary role in oligodendrocyte-to-neuron [26], Bergmann glia-to-neuron and neuron-to-neuron communication [50] proposed for P2 receptors in various brain regions. In summary, the importance of our work is twofold. First, we provide a complete topographical analysis of all known P2X and P2Y receptor subtypes expressed in vivo at the protein level in rat striatum and SN, which, considered alongside functional studies, supports a key role for extracellular ATP as a cotransmitter/neuromodulator in these brain areas. Second, we show that dopamine denervation in the 6-OHDA animal model of Parkinson’s disease generates a significant rearrangement of P2 receptor proteins in these nuclei, thereby revealing the participation of P2 receptors in the lesioned nigrostriatal circuit. While requiring further investigation, our findings point to a potentially noteworthy novel pharmacological and therapeutic avenue for Parkinson’s disease.
[ "rat brain", "tyrosine hydroxylase", "6-hydroxydopamine", "parkinson’s disease", "purinergic receptors", "γ-aminobutyric acid" ]
[ "P", "P", "P", "P", "P", "M" ]
Diabetologia-4-1-2270360
Best practice guidelines for the molecular genetic diagnosis of maturity-onset diabetes of the young
Aims/hypothesis
Mutations in the GCK and HNF1A genes are the most common cause of the monogenic forms of diabetes known as ‘maturity-onset diabetes of the young’. GCK encodes the glucokinase enzyme, which acts as the pancreatic glucose sensor, and mutations result in stable, mild fasting hyperglycaemia. A progressive insulin secretory defect is seen in patients with mutations in the HNF1A and HNF4A genes encoding the transcription factors hepatocyte nuclear factor-1 alpha and -4 alpha. A molecular genetic diagnosis often changes management, since patients with GCK mutations rarely require pharmacological treatment and HNF1A/4A mutation carriers are sensitive to sulfonylureas. These monogenic forms of diabetes are often misdiagnosed as type 1 or 2 diabetes. Best practice guidelines for genetic testing were developed to guide testing and reporting of results.

Introduction
Maturity-onset diabetes of the young (MODY) describes the dominantly inherited disorder of non-insulin-dependent diabetes typically diagnosed before 25 years that was first recognised by Tattersall [1, 2]. MODY is the most common form of monogenic diabetes, accounting for an estimated 1–2% of diabetes in Europe [3, 4], but is often misdiagnosed as type 1 or type 2 diabetes. The term MODY is used to describe a group of clinically heterogeneous, often non-insulin-dependent forms of diabetes that are defined at the molecular genetic level by mutations in different genes. All show dominant inheritance and are disorders of beta cell dysfunction, but variable features include the age at onset, severity of the hyperglycaemia (and hence risk of complications) and associated clinical features. The most recent classification of diabetes by the American Diabetes Association and the World Health Organization recognises these discrete subtypes of MODY [5]. Mutations in the GCK and HNF1A genes are the most frequent cause of MODY in all populations studied. They account for approximately 70% of cases (see Table 1). The ratio of GCK to HNF1A mutations varies between countries because of different recruitment strategies for genetic testing; blood glucose screening in young, asymptomatic individuals will identify a higher proportion of GCK mutations.

Table 1 Genes in which mutations cause MODY (mutation frequency is not known in ~20% of cases)
GCK: protein, glucokinase; chromosome locus, 7p13; gene accession no., NM_000162.2; OMIM* (gene), 138079; OMIM# (phenotype), 125851; mutation frequency, 20–50%
HNF1A (TCF1): protein, hepatocyte nuclear factor-1 alpha; locus, 12q24.31; accession no., NM_000545.4; OMIM* 142410; OMIM# 600496; frequency, 20–50%
HNF4A: protein, hepatocyte nuclear factor-4 alpha; locus, 20q13.12; accession no., NM_000457.3 (see footnote a); OMIM* 600281; OMIM# 125850; frequency, ~5%
PDX1 (IPF1): protein, insulin promoter factor-1; locus, 13q12.2; accession no., NM_000209.2; OMIM* 600733; OMIM# 606392; frequency, <1%
NEUROD1: protein, neurogenic differentiation 1; locus, 2q31.3; accession no., NM_002500.2; OMIM* 601724; OMIM# 606394; frequency, <1%
HNF1B (TCF2): protein, hepatocyte nuclear factor-1 beta; locus, 17q12; accession no., NM_000458.1; OMIM* 189907; OMIM# 137920; frequency, ~5%
The GenBank reference sequence NM_000457.3 refers to the full-length P1 transcript, which uses exons 1a and 1b, and not 1d. For the full-length P2 transcript, which includes exon 1d, there is a human mRNA sequence AY680697 but no RefSeq. The convention is to use NM_000457.3 for exons 1a/1b and 2–10, with AY680697 for exon 1d only. HNF4A mutation descriptions in the literature use the translation start codon reported by Chartier et al. in 1994 [45]. However, an alternative start codon nine amino acids upstream was proposed in 1996 [46] and this alternative start codon is used in NM_000457.3.
Hence the convention is to report HNF4A mutations using the amino acid methionine at codon 10 in NM_000457.3 as the start codon, with the A of this codon as the first nucleotide.
(a) Nomenclature for the HNF4A gene is complicated because the gene encodes nine isoforms expressed from two promoters. The liver-specific P1 promoter drives the expression of transcripts 1–3, which include exons 1a and 2–10, and transcripts 4–6, which include exons 1a, 1b, 1c and 2–10. Transcripts 7–9 are expressed from the pancreatic (P2) promoter located approximately 46 kb upstream of the HNF4A transcription start site and exhibit splicing of the upstream exon 1d to exon 2, without the inclusion of sequences from exons 1a, 1b or 1c.

Heterozygous loss-of-function GCK mutations result in mild, stable hyperglycaemia from birth. Microvascular complications are rare, reflecting the fact that HbA1c is normally just above the upper limit of the normal range. Treatment with oral hypoglycaemic agents or insulin is not needed because it rarely changes HbA1c [6]. A genetic diagnosis is important for the small number of children misdiagnosed with type 1 diabetes and treated with insulin [7]. The identification of GCK mutations in women with gestational diabetes can be useful for obstetric management, since their babies who do not inherit the mutation are at risk of macrosomia [8], and it can guide follow-up in the mothers. Transcription factor mutations in the HNF1A or HNF4A genes cause a similar progressive diabetic phenotype, although the penetrance of HNF4A mutations is lower (S. Ellard and A. T. Hattersley, unpublished data). Sensitivity to sulfonylureas means that some patients can transfer from insulin to oral agents [9, 10]. A low renal threshold for glucose is a feature of HNF1A mutations [11] and may provide a useful method for screening at-risk family members during childhood [12]. Mutations identified in the GCK, HNF1A and HNF4A genes include missense, nonsense, small deletion/insertion/duplication, splice site and promoter region mutations [13, 14]. Partial and whole gene deletions have recently been reported in HNF1A and GCK [15]. The location of mutations within the HNF1A gene influences the age at diagnosis; the average age at diagnosis for patients with exon 1–6 mutations, which affect all three HNF1A isoforms, is younger than for those with mutations in exons 8–10, which affect only isoform HNF1A(A) [16, 17]. Rarer forms of MODY include heterozygous mutations in PDX1 (also known as IPF1; [18, 19]) and NEUROD1 [20, 21], but analysis of these genes is not usually included in routine molecular genetic testing for MODY. Dominantly inherited syndromic forms of diabetes may also be described as MODY subtypes. The renal cysts and diabetes syndrome results from HNF1B mutations; other features include renal abnormalities, female genital malformations, hyperuricaemia, pancreatic atrophy and abnormal liver function tests [22–24]. Mutations in the CEL variable number tandem repeat cause a syndrome of diabetes and pancreatic exocrine dysfunction [25]. These syndromes, and maternally inherited diabetes and deafness caused by the mitochondrial m.3243A→G mutation, are not included in these guidelines since testing is guided by the non-endocrine pancreatic or extra-pancreatic clinical features. A molecular genetic diagnosis of a GCK, HNF1A or HNF4A mutation is important because it confirms a diagnosis of MODY, classifies the subtype, predicts the likely clinical course and may change the patient’s treatment.
First-degree relatives will be at 50% risk of inheriting the mutation, and asymptomatic individuals may be offered predictive genetic testing (after appropriate genetic counselling) in order to provide reassurance (for those shown not to carry the mutation) or regular blood glucose monitoring with early diagnosis and appropriate treatment (for mutation carriers).

Methods
A group of European clinicians and scientists met on 22 May 2007 at a workshop to formulate best practice guidelines for molecular genetic testing in MODY. Discussions focused on clinical criteria for the selection of patients for testing, methodologies, interpretation of results and reporting of those results to the referring clinicians. A draft document was posted on 24 August 2007 and an online editing tool was used by participants to produce consensus guidelines.

Results

Clinical criteria for testing

Mild fasting hyperglycaemia: testing for GCK mutations
The finding of raised fasting blood glucose in the range of 5.5–8 mmol/l is unusual in children and young adults. It always raises concern that the patient may be about to develop type 1 diabetes or has type 2 diabetes. However, a considerable proportion of young, non-obese patients with persistent mild fasting hyperglycaemia will have a heterozygous mutation in the GCK gene. In a cohort of 82 children with incidental hyperglycaemia, 43% had GCK mutations [26]. The phenotype associated with GCK mutations is remarkably similar for all mutations. The following features suggest a diagnosis of a GCK mutation:
1. The fasting hyperglycaemia is ≥5.5 mmol/l (98% of patients), persistent (on at least three separate occasions) and stable over a period of months or years [27].
2. HbA1c is typically just above the upper limit of normal and rarely exceeds 7.5%.
3. In an OGTT the increment [(2 h glucose) − (fasting glucose)] is small (71% of patients in the large European study reported by Stride et al. [27] had an increment <3 mmol/l). An increment of 4.6 mmol/l is often used to prioritise testing and corresponds to the 90th centile (S. Ellard and A. T. Hattersley, unpublished data).
4. Parents may have ‘type 2 diabetes’ with no complications or may not be diabetic. On testing, one parent will usually have mildly raised fasting blood glucose (range 5.5–8 mmol/l) unless the mutation has arisen de novo. Testing the fasting glucose of apparently unaffected parents is important when considering a diagnosis of a glucokinase mutation.

Gestational diabetes: testing for GCK mutations
GCK mutations cause mild fasting hyperglycaemia throughout life, and this is often diagnosed during pregnancy when routine testing is performed. Since these patients have consistently raised fasting blood glucose levels, their babies who do not inherit the mutation may be macrosomic [28]. The diagnosis of a GCK mutation is important not only because the child may subsequently be picked up as having raised fasting blood glucose, which may lead to concern about type 1 diabetes, but also because the guidance given to the mother differs from that for the usual ‘pre-type 2’ diabetic phenotype, as GCK-related hyperglycaemia will not deteriorate with time. The following criteria identify when GCK testing is appropriate [29] (the two biochemical thresholds are combined in the sketch below):
1. Persistently raised fasting blood glucose in the range of 5.5–8 mmol/l before, during and after pregnancy.
2. An increment of <4.6 mmol/l on at least one OGTT (either during or after pregnancy).
3. A parent may have mild type 2 diabetes, but often this has not been detected, so the absence of family history should not exclude the diagnosis.
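The following minimal sketch (a hypothetical helper, not part of the guidelines) simply encodes the two biochemical thresholds above, i.e. a fasting glucose of 5.5–8 mmol/l together with an OGTT increment below 4.6 mmol/l; persistence on repeat testing, family history and clinical judgement obviously remain essential.

```python
# Hypothetical helper for prioritising GCK testing; not a diagnostic rule.
def suggest_gck_testing(fasting_glucose: float, ogtt_2h_glucose: float) -> bool:
    """Return True if the biochemistry fits the GCK-MODY pattern described above.

    Both arguments are blood glucose values in mmol/l.
    """
    increment = ogtt_2h_glucose - fasting_glucose  # (2 h glucose) - (fasting glucose)
    return 5.5 <= fasting_glucose <= 8.0 and increment < 4.6

# Example: fasting 6.1 mmol/l, 2 h value 8.9 mmol/l -> increment 2.8 -> True
print(suggest_gck_testing(6.1, 8.9))
```

A positive result from such a filter only flags a patient for testing; it does not diagnose GCK-MODY.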
Children and young adults with diabetes and a strong family history of diabetes: testing for HNF1A mutations
The possibility of monogenic diabetes should be considered whenever a parent has diabetes, even if they are thought to have type 1 or type 2 diabetes. The most common form of MODY is caused by HNF1A mutations. The clinical characteristics of patients with HNF1A mutations include:
1. Young-onset diabetes (typically before 25 years old in at least one family member).
2. Non-insulin-dependence outside the normal honeymoon period (3 years), e.g. not developing ketoacidosis in the absence of insulin, good glycaemic control on less than the usual replacement dose of insulin, or detectable C-peptide measured on insulin with glucose >8 mmol/l.
3. Family history of diabetes (at least two generations). This may be insulin treated and considered to be ‘type 1’ or ‘type 2’ diabetes. At least two individuals within the family would typically have been diagnosed in their 20s or 30s. There may also be an affected grandparent, although these are often diagnosed after 45 years.
4. OGTTs in the early stages tend to show a very large glucose increment, usually >5 mmol/l [27]. Some individuals may have a normal fasting level but a value within the diabetic range at 2 h.
5. The absence of pancreatic islet autoantibodies.
6. Glycosuria at blood glucose levels <10 mmol/l is often seen, as these patients have a low renal threshold [11].
7. Marked sensitivity to sulfonylureas, resulting in hypoglycaemia despite poor glycaemic control before starting sulfonylureas [9, 30].
Several features suggesting monogenic diabetes rather than young-onset type 2 diabetes should also be considered: no marked obesity or evidence of insulin resistance in diabetic family members, absence of acanthosis nigricans, and a family from an ethnic background with a low prevalence of type 2 diabetes (e.g. of European descent).

Children and young adults with diabetes and a strong family history of diabetes: testing for HNF4A mutations
Diabetes caused by mutations in the HNF4A gene is considerably less common than HNF1A-related diabetes (Table 1). The clinical characteristics are similar, except that there is no low renal threshold and the age at diagnosis may be later [31]. HNF4A mutations should be considered when HNF1A analysis does not detect a mutation but the clinical features are strongly suggestive of HNF1A. Patients are often sensitive to sulfonylureas [32]. HNF4A mutations are associated with macrosomia (approximately 56% of mutation carriers) and transient neonatal hypoglycaemia (approximately 15% of mutation carriers) [33, 34]. The possibility of HNF4A mutations should be considered when diabetic family members show marked macrosomia (>4.4 kg at term) or if diazoxide-responsive neonatal hyperinsulinism has been diagnosed in the context of familial diabetes (these pointers are gathered in the sketch below).

Babies with diazoxide-responsive neonatal hyperinsulinaemic hypoglycaemia and a strong family history of diabetes: testing for HNF4A mutations
Mutations of HNF4A are a cause of neonatal hypoglycaemia that remits during infancy or early childhood, with diabetes developing later in life [34]. Macrosomic babies with diazoxide-responsive hyperinsulinism and a strong family history of diabetes (see characteristic 3 in the HNF1A section above) should be considered for HNF4A mutation screening.
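Analogously, the HNF4A pointers above (a multi-generation family history combined with marked macrosomia at term or diazoxide-responsive neonatal hyperinsulinism) can be gathered in a small hypothetical helper. As with the GCK sketch, this only illustrates the selection logic and is not a clinical rule taken from the guidelines.

```python
# Hypothetical helper for flagging HNF4A testing; illustrative only.
def consider_hnf4a_testing(birth_weight_kg: float,
                           term_delivery: bool,
                           diazoxide_responsive_hyperinsulinism: bool,
                           familial_diabetes_two_generations: bool) -> bool:
    """Combine the red flags described above into a single screening flag."""
    macrosomia = term_delivery and birth_weight_kg > 4.4  # >4.4 kg at term
    return familial_diabetes_two_generations and (
        macrosomia or diazoxide_responsive_hyperinsulinism
    )

# Example: 4.6 kg term baby, no hyperinsulinism, two-generation family history
print(consider_hnf4a_testing(4.6, True, False, True))  # True
```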
Testing methodology
The mutation screening methodology should be described in the report [e.g. sequencing, denaturing high-performance liquid chromatography (dHPLC), conformation-sensitive capillary electrophoresis (CSCE)], together with its sensitivity. PCR primers should be checked for primer binding site single nucleotide polymorphisms (SNPs; a useful tool is available at http://ngrl.man.ac.uk/SNPCheck/index.html). Gene dosage analysis may be useful if a diagnosis of MODY is strongly suspected and no mutation is found on mutation screening.

Interpretation of results
The textbox includes recommended interpretations for the most common reporting scenarios.

Reporting
Each laboratory has its own reporting format, and general guidance on reporting is available from the European Molecular Genetics Quality Network (http://www.emqn.org), the UK Clinical Molecular Genetics Society (http://www.cmgs.org) and the Swiss Society of Medical Genetics (http://www.ssgm.ch). A one-page report is the preferred format. The report should state the methodology and specify the gene, exons and/or mutations tested for. If promoter sequences are examined, the report should specify the nucleotides analysed. An estimate of the assay sensitivity is particularly useful for pre-screening techniques such as dHPLC and CSCE. The use of mutation nomenclature approved by the Human Genome Variation Society (http://www.hgvs.org/mutnomen) is strongly recommended. The gene accession number (with version) is required in order to describe mutations unambiguously (see Table 1). The A nucleotide of the ATG start codon is numbered +1. Reports describing novel variants should state that the variant is novel and include the evidence in support of pathogenicity. This might include absence of the variant from a large series of ethnically matched controls or MODY patients (testing of 210 normal chromosomes is necessary to achieve at least 80% power to detect a polymorphism present in 1% of the population [35]; see the sketch below). Testing of other affected relatives is recommended in order to check for co-segregation and to calculate the LOD score in pedigrees of suitable size (LOD scores of ≥1 or ≥3 are suggestive or conclusive of linkage, respectively). For missense variants the evidence for pathogenicity might include conservation across species and a significant amino acid substitution. Several programs are available that predict the pathogenicity of a missense variant based upon amino acid conservation (SIFT; http://www.blocks.fhcrc.org/sift/SIFT.html) or the structure and function of the protein (PolyPhen; http://www.genetics.bwh.harvard.edu/pph), but they should be used to supplement other pieces of evidence rather than in isolation. Both missense and silent variants can affect splicing if the mutation lies within an exonic splicing enhancer or silencer. Splice predictor software (http://www.fruitfly.org or http://rulai.cshl.edu/cgi-bin/tools/ESE3/esefinder.cgi?process=home) may aid interpretation. Base substitutions affecting the conserved splice donor (GT) site, splice acceptor (AG) site or the conserved A nucleotide within the branch site are highly likely to be pathogenic, but splice predictor software may be useful in the interpretation of other intronic variants. Analysis of patient mRNA is often informative, but lymphoblastoid cell lines are usually required because of the low levels of expression of the MODY genes in blood.
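The 210-chromosome figure quoted above can be sanity-checked with a simple binomial model in which "detecting" a polymorphism means observing the allele at least once among n sampled chromosomes; the cited reference [35] may use a more conservative criterion, so this is only an approximation.

```python
# Power to observe at least one copy of an allele of frequency f in n chromosomes,
# under independent sampling: power = 1 - (1 - f)^n. Assumption: detection means
# seeing the allele at least once, which may differ from the cited method.
def power_detect(n_chromosomes: int, allele_freq: float = 0.01) -> float:
    """P(observing >= 1 copy of an allele of the given frequency)."""
    return 1.0 - (1.0 - allele_freq) ** n_chromosomes

print(f"power with 210 chromosomes: {power_detect(210):.2f}")  # ~0.88
```

Under this simple model, 210 chromosomes give approximately 88% power for a 1% polymorphism, consistent with the "at least 80% power" requirement stated above.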
Sequence analysis of RT-PCR products amplified from lymphoblastoid cell mRNA has demonstrated exon skipping, intron retention and the use of cryptic splice sites for a variety of intronic mutations in the GCK, HNF1A and HNF1B genes [36–38]. Novel promoter variants may be investigated by examination of known transcription factor binding sites or by in vitro transfection experiments [39–42]. They may also alter mRNA expression levels, which can be measured by allele-specific real-time PCR [43].

Polymorphisms
Some laboratories include details of the polymorphisms detected in the report. The reasons for doing this include: (1) making all data available to the requesting clinician, on the rationale that a polymorphism may later be reclassified as a mutation; and (2) the fact that identifying heterozygous SNPs excludes a gene deletion involving the exon(s) in question. However, this information can cause confusion or even misinterpretation of the result (A. T. Hattersley, unpublished data), and it certainly adds to the length of the report. While in some cases there may be reports in the literature of an association with type 2 diabetes or reduced insulin secretion, these polymorphisms do not cause MODY and we recommend that they be excluded from the report.

Treatment
Individual treatment recommendations are outside the scope of a molecular genetics report, since these are the referring clinician’s responsibility. It is useful to include an appropriate reference if there is evidence in the literature for a particular treatment associated with the genetic diagnosis (e.g. low-dose sulfonylureas in HNF1A/4A MODY).

Other issues
Genetic counselling should be provided for all asymptomatic individuals requesting predictive testing. We recommend that unaffected relatives be offered a biochemical test first (fasting blood glucose for GCK mutations or an OGTT for HNF1A/HNF4A mutations). If the biochemical test is consistent with a diagnosis of diabetes or hyperglycaemia, the genetic test will be diagnostic, not predictive. For families requesting predictive testing for children too young to provide informed consent, referral to a specialist clinical genetics unit (or equivalent) is strongly recommended. Reasons for testing children include (1) removing the uncertainty around the child’s status and (2) assisting with management, as a negative test would mean that monitoring of blood glucose/glycosuria is not necessary [44].

Conclusions
Molecular genetic testing is useful in patients with MODY because it confirms a diagnosis of monogenic diabetes, predicts the likely clinical course, defines the risk for relatives and determines treatment. At present, molecular genetic testing for MODY is relatively expensive, and phenotypic selection prior to testing is normal practice. With the development of new technologies it is likely that these costs will decrease over time and that the analysis of genes associated with monogenic diabetes may become routine for all newly diagnosed patients. In the meantime we hope that these guidelines will be useful in determining which patients should be offered testing, and in the interpretation and reporting of the test results.

Electronic supplementary material is available for this article (ESM PDF 12.9 kb).
[ "best practice", "maturity-onset diabetes of the young", "gck", "hnf1a", "hnf4a", "mody", "monogenic diabetes" ]
[ "P", "P", "P", "P", "P", "P", "P" ]
Intensive_Care_Med-4-1-2228379
Short-term beneficial effects of methylene blue on kidney damage in septic shock patients
Objective
We previously demonstrated that upregulation of renal inducible nitric oxide synthase (iNOS) is associated with proximal tubule injury during systemic inflammation in humans. In this study we investigated the short-term effect of methylene blue (MB), an inhibitor of the NO pathway, on kidney damage and function in septic shock patients.

Introduction
Refractory septic shock remains the major cause of death in noncoronary intensive care units, especially when accompanied by multiple organ failure, with an estimated mortality rate of 50–60% [1]. The incidence of acute renal failure in refractory septic shock is approximately 40–50% [2]. Nitric oxide has emerged as an important contributory factor in the pathogenesis of septic shock. We previously demonstrated that induction of renal NO is associated with proximal tubule injury during systemic inflammation in humans [3]. NO stimulates soluble guanylate cyclase (sGC) by binding to its heme moiety, which generates cyclic guanosine monophosphate (cGMP) [4]. In the kidney, NO and cGMP production are associated with lipopolysaccharide-induced renal proximal tubular cell toxicity [5]. Selective sGC inhibition during septic shock in rats resulted in an attenuation of renal dysfunction [6], indicating that blocking sGC may be a potential therapeutic strategy for treating septic shock-associated renal failure. Methylene blue (MB) binds to sGC, blocks cGMP production, and has the ability to scavenge NO and to inhibit NO synthases [7, 8]. Although several controlled and uncontrolled clinical studies have shown beneficial effects of MB on the hemodynamic instability of septic shock (reviewed in [9]), there are no studies on its putative protective renal effects. We therefore examined the short-term effects of continuous infusion of low-dose MB (1 mg/kg per hour) in patients with septic shock on the urinary excretion of acute kidney injury markers, namely the cytosolic glutathione S-transferases (GSTs) present in the proximal tubule (GSTA1-1) and distal tubule (GSTP1-1) [10].

Material and methods

Patients
Nine patients received a 4 h continuous infusion of 1 mg/kg per hour MB (1% w/v) provided by the VieCuri Medical Center pharmacy (for inclusion criteria see the Electronic Supplementary Material, ESM). Arterial blood and catheterized urine were collected at several time points during the first 24 h. Clinical parameters and the severity of illness, scored with the Acute Physiology and Chronic Health Evaluation II (APACHE II) and the Sepsis-Related Organ Failure Assessment (SOFA), were recorded.

Chemical assays
Biochemical parameters were determined by routine clinical chemistry. Hemoglobin, methemoglobin and bilirubin were measured to assess possible side effects of MB, such as hemolytic anemia and methemoglobinemia [11–13]. The total amount of the stable NO metabolites nitrate and nitrite (a measure of NO radical production) and the amounts of GSTA1-1 and GSTP1-1 in urine were assayed as described [3]. The blue color of the urine due to MB excretion did not affect the chemical assays.

Statistical analysis
Values are given as mean ± SE or as median (25–75% range), depending on their distribution. Differences between experimental groups were tested by analysis of variance for repeated measures. A two-tailed p value less than 0.05 was considered statistically significant.

Results

Patients
Seven patients ultimately died in the intensive care unit: one of refractory shock (within 12 h) and six of multiple organ failure.
Of the latter group, two died within 7 days and the remaining four within 28 days after the intervention. The mean calculated predicted mortality rate was 61%, and all patients had at least three organ failures, reflected in a mean SOFA score of 11.1 ± 0.9. The median stay in the intensive care unit was 16 days (range 7–24); the two survivors stayed 89 days (52–121) in hospital. The pathogenic organisms isolated by culture and the sites of infection are listed in the ESM. Median C-reactive protein was 178 mg/l (118–189); all patients had lactic acidemia (median 2.7 mmol/l, range 2.1–3.7) and thrombocytopenia (68 × 10^9/l, 50–104). The median MAP increased slightly, by 5 mmHg (2–11), from 69 (65–70) at baseline to 74 (68–82) 3 h after the start of MB (p < 0.05), with no change in the norepinephrine infusion rate. Methemoglobinemia and hemolytic anemia did not develop after MB (data not shown). All MB-treated patients showed blue coloring of urine and skin.

MB attenuates NO formation
The concentration of NO metabolites in plasma was higher in septic shock patients than in healthy volunteers [3] but did not change after MB administration (Fig. 1a). In contrast, MB significantly attenuated the urinary excretion of NO metabolites, by a median of 90% (75–95%, p < 0.05), from baseline (233 μmol/mmol creatinine, 112–536) to 6 h (37, 10–87) after the start of MB (Fig. 1b). At 4 h the MB infusion was stopped, after which the median excretion of urinary NO metabolites increased by 135% (65–795%, p < 0.05) within 6–24 h after MB treatment (Fig. 1b).

Fig. 1 NO metabolites in plasma and urine and the urinary excretion of the tubular injury markers glutathione S-transferase (GST) A1-1 and P1-1. NO metabolite levels in plasma (a, n = 9) and urine (b, n = 8) and levels of GSTA1-1 (c, proximal tubule, n = 8) and GSTP1-1 (d, distal tubule, n = 8) were measured at various times after MB administration in septic shock patients. The urinary excretion of NO metabolites and GSTs was corrected for creatinine excretion. Data are expressed as median with 25–75% range and analyzed by analysis of variance with repeated measures over the two time periods. *p < 0.05 vs. baseline; #p < 0.05 vs. 6 h after MB treatment.

MB attenuates kidney damage
All patients showed impaired renal function with oliguria and mild proteinuria (Table 1). One patient suffered from anuria during the first day. Six patients required continuous venovenous hemofiltration renal replacement therapy at a flow of 35 ml/kg (for a median of 8.5 days, 4–13), three of them during the MB infusion. The creatinine clearance improved by a median of 51% (18–173%, p < 0.05) during the first 24 h after MB but was still strongly impaired (Table 1). The urinary excretion of both GSTA1-1 and GSTP1-1 was elevated in all septic shock patients, indicating both proximal and distal renal tubule damage. During the first 6 h of MB, the urinary excretion of GSTA1-1 and GSTP1-1 was attenuated by a median of 45% (10–70%) and 70% (40–85%), respectively, vs. baseline (Fig. 1c, d, p < 0.05). After the end of the MB infusion, urinary excretion of GSTA1-1 and GSTP1-1 increased again, although not significantly (Fig. 1c, d).
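For readers wanting to reproduce the style of summary statistic used in these results (per-patient percent change from baseline, reported as median with 25–75% range), a minimal sketch follows; the values are illustrative placeholders, not the study data.

```python
import numpy as np

# Illustrative per-patient baseline and 6 h values (e.g. umol/mmol creatinine);
# these are invented numbers, not measurements from this study.
baseline = np.array([233.0, 310.0, 120.0, 540.0, 180.0, 95.0, 410.0, 260.0])
after_6h = np.array([25.0, 30.0, 14.0, 45.0, 40.0, 8.0, 50.0, 20.0])

# Percent attenuation from baseline, computed per patient
pct_change = 100.0 * (baseline - after_6h) / baseline

median = np.median(pct_change)
q25, q75 = np.percentile(pct_change, [25, 75])
print(f"median attenuation {median:.0f}% ({q25:.0f}-{q75:.0f}%)")
```

Each patient serves as his or her own control, so the change is computed within patients before the median and interquartile range are taken across the group.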
Table 1 Kidney function parameters of septic shock patients (n = 8) treated with MB (median, range)
Total urine volume, 0–24 h (ml): 495, 169–885
Protein excretion, 0–24 h (mg/day): 342, 245–434
Creatinine clearance (ml/min): baseline 8.2, 4.2–17.4; 24 h 10.6*, 9.6–14.8
Blood urea nitrogen: baseline 17.8, 10.8–20.0; 24 h 17.3, 10.4–22.5
Fractional excretion of sodium (%), >2%: baseline (n = 0) –; 24 h (n = 3) 3.1, 2.6–3.6
Fractional excretion of sodium (%), <1%: baseline (n = 8) 0.6, 0.3–0.7; 24 h (n = 5) 0.4, 0.2–0.5
*p < 0.05 vs. baseline

Discussion
Several clinical studies in septic shock patients have investigated the effects of MB on the heart, vascular wall and lungs [9]. This is the first report demonstrating that MB attenuates kidney damage in human septic shock. To determine the effect of NO pathway inhibition on renal damage, we examined the urinary excretion of early tubular injury markers and found that MB inhibited the NO pathway and preserved the integrity of the renal tubules. After termination of the MB infusion, these parameters returned to their elevated pretreatment levels. Both plasma concentrations (more than twofold) and urinary levels of NO metabolites in our septic shock patients were much higher than in healthy volunteers, as demonstrated earlier [3]. Hydrocortisone may have inhibited iNOS activation; however, the patients received a continuous infusion of hydrocortisone that started before the MB infusion and continued throughout the 24 h period. The observation that urinary NO metabolite excretion was attenuated only in the first 6 h after the start of MB therefore suggests that this effect is not related to steroids. Although urinary NO metabolite excretion was attenuated, we did not find a reduction in plasma NO metabolites. This contrasts with an earlier report in which patients received a bolus injection of MB prior to the continuous infusion [14]. Renal failure in septic shock is a complex and multifactorial disease process. During septic shock, systemic vasodilation increases renal sympathetic activity and angiotensin concentrations, which results in intrarenal vasoconstriction with sodium and water retention and a decreased glomerular filtration rate [2]. We previously demonstrated that induction of renal iNOS, constitutively expressed in the kidney [15], is associated with proximal tubule injury during systemic inflammation in humans [3]. As a result of its active secretory transport function and its role in urine concentration, the proximal tubule is a susceptible target and often the first site of damage [16]. Therefore, inhibition by MB of peroxynitrite formation from excessively produced NO and superoxide [17] may be beneficial for the kidney during septic shock, possibly explained by the local accumulation of MB in renal proximal tubules [18]. Global hemodynamic variables can influence renal function; however, only a small though statistically significant increase was found for mean arterial pressure, whereas the other global hemodynamic parameters did not change during MB treatment. Given the detailed nature of our investigation, these methods are obviously not feasible in a large-scale clinical intervention study. The most elegant way to examine the effects of MB would be a randomized, placebo-controlled cross-over study. However, in such critically ill patients this design may be considered unethical.
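As background to the creatinine clearance values in Table 1 above, the sketch below implements the standard UV/P calculation from a timed urine collection. The creatinine concentrations used are illustrative placeholders; only the 495 ml urine volume echoes the table's median.

```python
# Standard creatinine clearance: CrCl (ml/min) = (U_creat * V) / (P_creat * t),
# with urine and plasma creatinine in the same units and t the collection time.
def creatinine_clearance(urine_creat_umol_l: float,
                         urine_volume_ml: float,
                         plasma_creat_umol_l: float,
                         collection_min: float = 1440.0) -> float:
    """Creatinine clearance in ml/min from a timed (default 24 h) collection."""
    return (urine_creat_umol_l * urine_volume_ml) / (plasma_creat_umol_l * collection_min)

# e.g. 24 h collection of 495 ml, urine creatinine 4200 umol/l, plasma 180 umol/l
print(f"{creatinine_clearance(4200, 495, 180):.1f} ml/min")  # ~8.0 ml/min
```

With the low urine volumes and high plasma creatinine typical of these patients, clearances in the single digits follow directly, matching the severely impaired baseline values reported in Table 1.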
We deliberately chose a subgroup of severe septic shock patients with a high likelihood of sepsis-induced renal damage, for two reasons: first, this is the group of patients in which MB is used as a “last resort” therapy and, second, we wished to demonstrate the putative beneficial effects of MB on renal damage. With an estimated standard deviation of 36% in urinary GST excretion, 80 patients would be needed to demonstrate a 10% reduction in renal injury with 80% power. This number of patients was not feasible for our investigation. We therefore decided to determine the parameters before, during and after MB infusion over a 24 h period, which allows each patient to serve as his or her own control. Because of the observational nature and limited size of the present study and the heterogeneity of the patient population, our findings warrant confirmation against hard endpoints in a larger clinical trial. In our view, however, a long-term study is first necessary to assess the safety of chronic MB administration in septic patients with refractory shock. Promising effects of MB were found in a trial of vasoplegic patients treated with MB after cardiac surgery [19], in which a reduction in both mortality and the incidence of renal failure was observed. In conclusion, short-term infusion of MB in septic patients with refractory shock is associated with a decrease in NO production and an attenuation of the urinary excretion of renal tubular injury markers.

Electronic supplementary material is available for this article (DOC 26K and DOC 37K).
[ "acute kidney injury", "glutathione s-transferase", "inducible nitric oxide synthase expression", "nitric oxide metabolites" ]
[ "P", "P", "R", "R" ]