PMCID: PMC526202
PCOGR: Phylogenetic COG ranking as an online tool to judge the specificity of COGs with respect to freely definable groups of organisms
Background The rapidly increasing number of completely sequenced genomes led to the establishment of the COG database, which, based on sequence homologies, assigns similar proteins from different organisms to clusters of orthologous groups (COGs). Several bioinformatic studies have made use of this database to determine (hyper)thermophile-specific proteins by searching for COGs containing (almost) exclusively proteins from (hyper)thermophilic genomes. However, public software to perform individually definable group-specific searches has not been available. Results The tool described here fills exactly this gap. The software is accessible online and is linked to the COG database. The user can freely define two groups of organisms by assigning each of the (currently) 66 organisms either to groupA, to the reference groupB, or to be ignored by the algorithm. Then, for all COGs, a specificity index is calculated with respect to groupA, i.e., high-scoring COGs contain proteins from most of the groupA organisms, while proteins from most of the organisms assigned to groupB are absent. In addition to ranking all COGs according to the user-defined specificity criteria, a graphical visualization shows the distribution of all COGs by displaying their abundance as a function of their specificity indexes. Conclusions This software allows the detection of COGs specific to a predefined group of organisms. All COGs are ranked in order of their specificity, and a graphical visualization allows recognition of (i) the presence and abundance of such COGs and (ii) the phylogenetic relationship between groupA and groupB organisms. The software also allows the detection of putative protein-protein interactions, novel enzymes involved in only partially known biochemical pathways, and alternative enzymes that originated by convergent evolution.
Background The COG database has become a powerful tool in the field of comparative genomics. The construction of this database is based on sequence homologies of proteins from different completely sequenced genomes. Highly homologous proteins are assigned to clusters of orthologous groups (COGs) [1,2]. Each COG consists of individual proteins or groups of orthologs from at least 3 lineages and thus corresponds to a conserved domain. The COG collection currently consists of 138,458 proteins, which form 4,873 COGs and comprise 75% of the 185,505 (predicted) proteins encoded in 66 genomes of unicellular organisms [3]. In addition, the database now includes KOGs, containing the clusters of seven eukaryotic genomes. The COG database is an ideal source for searching for proteins specific to a certain group of organisms. Several such surveys aimed at finding (hyper)thermophile-specific proteins using the COG database have been published. For instance, Forterre detected reverse gyrase as the only hyperthermophile-specific protein [4]. In addition, a survey to find specific genes important for hyperthermophily [5] and a study identifying thermophile-specific proteins [6] have been published. However, those studies used rather inflexible tools designed for other purposes [7] or software written especially for the study and not accessible to the public. To overcome these issues, a more flexible software tool is needed that lets the user individually define the group of organisms for which specific COGs are to be searched. Here we describe phylogenetic COG ranking (PCOGR), a platform-independent software tool capable of ranking all COGs with respect to a freely definable group of organisms versus a group of reference organisms. Implementation PCOGR is written in PHP (v.4.3.3), including the domxml (v.20020815) plugin, and runs on an OpenBSD (v.3.4) operating system at dmz.uni-wh.de in an Apache (v.1.3.28) web-server environment. On the client side, HTML, JavaScript, and CSS are used. Phylogenetic COG ranking (PCOGR) is an online tool for analyzing the microbial COG database or, after clicking "Switch to PKOGR", the eukaryotic KOG database. PCOGR provides a means of determining the specificity of each COG with respect to the presence of sequences from organisms belonging to a predefined group (groupA) versus the absence of sequences from organisms belonging to a second predefined reference group (groupB). For this purpose, each of the organisms can be assigned to one of the two groups or set to be ignored by the analysis. The software then calculates a specificity index S for every individual COG. The highest-ranking COGs (large S) contain sequences from most groupA organisms, whereas most sequences from groupB organisms are absent. To compute S for each individual COG, the algorithm starts at S = 0, adds a constant A for each groupA organism and subtracts a constant B for each groupB organism present in the COG under analysis, with A = A_tot/B_tot and B = B_tot/A_tot, where A_tot is the total number of organisms belonging to groupA and B_tot is the total number of organisms belonging to groupB. After all COGs have been processed in this way, all S-values are scaled to values between 0 and 1. Then all COGs are output in the order of their specificity indexes S. In addition, a graphical representation shows the number of COGs as a function of their S-values in discrete intervals.
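The scoring scheme described above is simple to reproduce. The following Python sketch implements it under stated assumptions: the variable names are ours, the per-COG presence data would have to come from the COG database, and the final min-max scaling is our reading of "scaled to values between 0 and 1", which the text does not spell out.

```python
def rank_cogs(cogs, group_a, group_b):
    """Rank COGs by a PCOGR-style specificity index S.

    cogs    -- dict mapping COG id -> set of organisms with a member protein
    group_a -- set of organisms whose presence should raise S
    group_b -- set of reference organisms whose presence should lower S
    """
    a_tot, b_tot = len(group_a), len(group_b)
    add = a_tot / b_tot      # constant A, added per groupA organism present
    sub = b_tot / a_tot      # constant B, subtracted per groupB organism present

    raw = {}
    for cog, organisms in cogs.items():
        s = add * len(organisms & group_a) - sub * len(organisms & group_b)
        raw[cog] = s

    # Scale all S-values to [0, 1] before output (assumed min-max scaling).
    lo, hi = min(raw.values()), max(raw.values())
    scaled = {cog: (s - lo) / (hi - lo) for cog, s in raw.items()}
    return sorted(scaled.items(), key=lambda kv: kv[1], reverse=True)
```

A COG present in every groupA organism and no groupB organism then lands at the top of the returned list, matching the intended reading of "high-scoring".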
The total number of intervals to be displayed can be specified by the user (default = 40 for PCOGR and 7 for PKOGR). A JavaScript mouseover info box intuitively explains all functions of the graphical user interface of PCOGR. Furthermore, additional information about both the organisms and the output COGs is available through implemented links. Figures 1, 2, and 3 show screenshots of the parameter input and output sections, respectively. Results and discussion PCOGR allows the detection of group-specific proteins by both ranking all COGs and graphically showing their distribution over their specificity indexes. The graphical representations can be interpreted as follows: if the two predefined groups are closely related, one expects a single peak in the middle of the graph, i.e., there are few or no proteins specific to one of the groups, resulting in a specificity value of around 0.5 for most COGs. In contrast, if the two groups are rather distant, further maxima on the left, the right, or both sides become visible, i.e., there are group-specific proteins with S-values around 1 and/or around 0. Even two single organisms can be compared by assigning the first to groupA, the second to groupB, and ignoring all other organisms. For instance, comparing the closely related Escherichia coli strains O157:H7 EDL933 and O157:H7 results in a prominent single peak in the middle of the graph, whereas two further peaks at the edges become visible if two more distant organisms, e.g. Aquifex aeolicus and Saccharomyces cerevisiae, are compared. Distance and relationship may be interpreted either in phylogenetic or in physiological terms. To demonstrate that physiologically relevant differences in protein distributions can indeed be detected by PCOGR, two parameter presets are selectable: (i) a specificity ranking of hyperthermophile-specific versus non-thermophile-specific proteins as published by Makarova et al. [5], and (ii) a ranking of thermophile-specific versus non-thermophile-specific proteins as described by Klinger et al. [6]. For the ranking according to Makarova et al., the optimum growth temperatures of the organisms belonging to groupA are all above 80°C, and all other organisms are assigned to groupB. For the specificity ranking according to Klinger et al., the optimum growth temperature needed for an organism to be assigned to groupA is above 55°C instead of 80°C. For both presets, the user will notice two additional peaks, the first corresponding to COGs containing (hyper)thermophile-specific proteins and the second corresponding to COGs containing mesophile-specific proteins. A further attractive feature of PCOGR is the easy detection of novel protein-protein interactions, since physically interacting proteins should be phylogenetically similarly distributed [8]. Thus, if the phylogenetic pattern of a putative interaction target is known, a ranking with this pattern as the input will yield a ranking of potentially interacting candidates. To simplify such a procedure, the phylogenetic pattern of a user-defined COG can automatically be assigned as the preset of a subsequent ranking. As an example, we performed a ranking choosing the phylogenetic pattern of COG2025 (electron transfer flavoprotein, alpha subunit).
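In terms of the hypothetical rank_cogs sketch above, this pattern-preset search is just a re-ranking with the target COG's pattern defining the two groups; treating the organisms absent from the target COG as the reference group is our reading of the preset mechanism, not the published implementation.

```python
def rank_by_pattern(cogs, all_organisms, target_cog):
    """Rank COGs by similarity to the phylogenetic pattern of target_cog.

    Organisms present in the target COG become groupA; all remaining
    organisms become the reference groupB. High-scoring COGs then share
    the target's presence/absence pattern and are candidate interactors;
    the lowest-scoring COGs have complementary patterns, the signature
    of non-orthologous gene displacement (NOGD) discussed below.
    """
    group_a = cogs[target_cog]
    group_b = all_organisms - group_a
    return rank_cogs(cogs, group_a, group_b)
```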
This ranking resulted in only two high-scoring outputs (specificity value S = 1): COG2025 (the target) and COG2086 (electron transfer flavoprotein, beta subunit), which has been shown by X-ray crystallography to form a complex with the alpha subunit [9]. All following proteins have specificity values below 0.9, indicating the suitability of such a search for protein-protein interactions. Not only protein-protein interactions can be detected, but also enzymes involved in the same biochemical pathway as a given target enzyme [8]. This possibility may be useful for finding the biochemical function of as yet uncharacterized proteins, given that one or more catalysts of the same pathway are already characterized. For example, a search performed with the phylogenetic pattern of COG0135 (phosphoribosylanthranilate isomerase), an enzyme involved in the biosynthesis of L-tryptophan, places four (COG0135, COG0159, COG0547, and COG0134) of the five enzymes involved in tryptophan biosynthesis at the top four places of the ranking. The beta subunit of tryptophan synthase is the only enzyme of this pathway that is missing. A closer look reveals that this protein is assigned to two COGs instead of one (COG0133: rank 29 and COG1350: rank 1770). The latter COG is annotated as "predicted alternative tryptophan synthase beta-subunit (paralog of TrpB)". This double assignment may explain the absence of the beta subunit of tryptophan synthase from the high-scoring proteins of the ranking. Another attractive use of PCOGR is to look for an alternative enzyme form catalyzing the same reaction but originating from non-orthologous gene displacement (NOGD). The occurrence of NOGD in essential functions can be explored systematically by detecting complementary, rather than identical or similar, phylogenetic patterns [10]. A ranking performed with COG0588 (phosphoglycerate mutase 1) indeed placed COG3635 (predicted phosphoglycerate mutase, AP superfamily) at the seventh-to-last rank (rank 4867 out of 4873), demonstrating that PCOGR is also well suited for this purpose. Conclusions With the online availability of PCOGR, researchers can perform their own individual searches for group-specific proteins. This will not only allow a deeper insight into the phylogenetic relationships of organisms or groups of organisms but also help to detect new, highly group-specific proteins worth isolating and characterizing biochemically. In addition, novel protein-protein interactions can be detected in silico, and the tool is also suitable for assigning proteins of unknown function to partially known biochemical pathways. A further application lies in the search for alternative enzymes that originated by convergent evolution. Availability and requirements Project name: Phylogenetic COG ranking (PCOGR) Project home page: Operating system(s): Platform independent Programming language: PHP, JavaScript, CSS and HTML Other requirements: Web browser capable of executing JavaScript License: GNU General Public License Any restrictions to use by non-academics: Contact authors Authors' contributions FM carried out the software development and programming work. MK conceived of the study, participated in its design and coordination, and helped to draft the manuscript. All authors read and approved the final manuscript.
PMID: 15488147
DOI: 10.1186/1471-2105-5-150
PMCID: PMC524373
The fallacy of enrolling only high-risk subjects in cancer prevention trials: Is there a "free lunch"?
Background There is a common belief that most cancer prevention trials should be restricted to high-risk subjects in order to increase statistical power. This strategy is appropriate if the ultimate target population is subjects at the same high risk. However, if the target population is the general population, three assumptions may underlie the decision to enroll high-risk subjects instead of average-risk subjects from the general population: higher statistical power for the same sample size, lower costs for the same power and type I error, and a correct ratio of benefits to harms. We critically investigate the plausibility of these assumptions. Methods We considered each assumption in the context of a simple example. We investigated statistical power for a fixed sample size when the investigators assume that the relative risk is invariant over risk groups but, in reality, the risk difference is invariant over risk groups. We investigated possible costs when a trial of high-risk subjects has the same power and type I error as a larger trial of average-risk subjects from the general population. We investigated the ratios of benefits to harms when extrapolating from high-risk to average-risk subjects. Results Appearances here are misleading. First, the increase in statistical power with a trial of high-risk subjects rather than the same number of average-risk subjects from the general population assumes that the relative risk is the same for high-risk and average-risk subjects. However, if the absolute risk difference rather than the relative risk were the same, the power can be less with the high-risk subjects. In the analysis of data from a cancer prevention trial, we found that invariance of the absolute risk difference over risk groups was nearly as plausible as invariance of the relative risk over risk groups. Therefore, a priori assumptions of constant relative risk across risk groups are not robust, limiting extrapolation of estimates of benefit to the general population. Second, a trial of high-risk subjects may cost more than a larger trial of average-risk subjects with the same power and type I error because of the additional recruitment and diagnostic testing needed to identify high-risk subjects. Third, the ratio of benefits to harms may be more favorable in high-risk persons than in average-risk persons in the general population, which means that extrapolating this ratio to the general population would be misleading. Thus there is no free lunch when using a trial of high-risk subjects to extrapolate results to the general population. Conclusion Unless the intervention is targeted only to high-risk subjects, cancer prevention trials should be implemented in the general population.
Background Some prevention trials are restricted to high-risk subjects. If the investigators are only interested in the effects of the intervention on subjects at increased risk [1], or if the study is designed as a preliminary investigation in preparation for a definitive study in the general population, we think this restriction is reasonable. However, some investigators who are interested in studying the effect of the intervention in the general population may be tempted to design a "definitive" study to estimate the effect of the intervention in a high-risk group. Some investigators may believe that a trial of high-risk subjects would have greater power than a trial of the same size among average-risk subjects; examples of this type of thinking can be found in papers on risk prediction models [2,3]. Some investigators may believe that a trial of high-risk subjects with the same power as a trial of average-risk subjects would have lower costs. Some investigators may believe the ratio of benefits to harms can be correctly extrapolated from high-risk to average-risk subjects. Although the rationales for these beliefs are related, they involve distinct underlying assumptions that are important to examine critically. Methods and results Possibly lower statistical power To crystallize our thinking about statistical power, we consider the following simple, hypothetical, and realistic example. Investigators want to estimate the effect of an intervention in the general population, so they first consider designing a randomized trial among the general at-risk population. Suppose they anticipate that the cumulative probability of incident cancer over the course of the study is p_C = .02 in the control arm and p_I = .01 in the study arm, and they believe that this difference in probabilities is clinically significant. Also suppose that, due to the limited availability of the intervention, they can enroll at most n = 2000 study participants in each arm. Setting the two-sided type I error at .05, the investigators compute power using the following standard formula [1]:

power = NormalCDF((Δ - z_{1-α/2} se_Null) / se_Alt),   (1)

where NormalCDF is the cumulative distribution function of a normal distribution with mean 0 and variance 1, z_{1-α/2} = 1.96, Δ is the anticipated difference one wants to detect, n is the sample size per arm, se_Null is the standard error under the null hypothesis, and se_Alt is the standard error under the alternative hypothesis. Let p = (p_C + p_I)/2. As discussed in [1], for a study designed to estimate the absolute risk difference, the statistic of interest is the difference in observed proportions, so Δ = p_C - p_I with

se_Null = √(2p(1 - p)/n) and se_Alt = √((p_C(1 - p_C) + p_I(1 - p_I))/n).   (2)

For a study designed to estimate the relative risk, the statistic of interest is the log of the ratio of observed proportions, so Δ = log(p_C/p_I) with

se_Null = √(2(1 - p)/(np)) and se_Alt = √((1 - p_C)/(np_C) + (1 - p_I)/(np_I)).   (3)

Applying these formulas to the above example and substituting either (2) or (3) into (1), the investigators obtain a power of .74 based on the absolute risk difference statistic and a power of .76 based on the relative risk statistic [see Additional file 1]. Suppose the investigators consider this power too low. To increase power, they propose to restrict the study to a high-risk group in which the probability of cancer is .04. Also suppose the investigators make the typical assumption that if the intervention yields a relative risk of .5 in the general population, it would also yield a relative risk of .5 in the high-risk group. Applying (1)-(3) with high-risk subjects for whom p_C = .04 and p_I = .02 with n = 2000, the investigators compute a power of .96 using either the absolute risk difference or the relative risk.
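These power calculations are easy to verify numerically. A minimal sketch using scipy (the function name is ours; the formulas are (1)-(3) above):

```python
from math import log, sqrt
from scipy.stats import norm

def power_two_arm(p_c, p_i, n, alpha=0.05, statistic="rd"):
    """Power of a two-arm trial with n subjects per arm, per formulas (1)-(3)."""
    p_bar = (p_c + p_i) / 2
    z = norm.ppf(1 - alpha / 2)          # two-sided type I error
    if statistic == "rd":                 # absolute risk difference
        delta = p_c - p_i
        se_null = sqrt(2 * p_bar * (1 - p_bar) / n)
        se_alt = sqrt((p_c * (1 - p_c) + p_i * (1 - p_i)) / n)
    else:                                 # log relative risk
        delta = log(p_c / p_i)
        se_null = sqrt(2 * (1 - p_bar) / (n * p_bar))
        se_alt = sqrt((1 - p_c) / (n * p_c) + (1 - p_i) / (n * p_i))
    return norm.cdf((delta - z * se_null) / se_alt)

print(power_two_arm(0.02, 0.01, 2000, statistic="rd"))  # ~0.74, general population
print(power_two_arm(0.02, 0.01, 2000, statistic="rr"))  # ~0.76
print(power_two_arm(0.04, 0.02, 2000, statistic="rd"))  # ~0.96, high-risk under constant RR
print(power_two_arm(0.04, 0.03, 2000, statistic="rd"))  # ~0.41, high-risk under constant RD
```

The last line previews the scenario discussed next, in which the risk difference rather than the relative risk carries over to the high-risk group.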
Because the power is higher using high-risk subjects, the investigators plan the study in a high-risk population and intend to generalize the results to the general population. Is there a free lunch? An underlying assumption in this example is that the relative risk is invariant between the general population and the high-risk group. There is no free lunch, because the impact of violating this assumption can be substantial. For example, suppose instead that the absolute risk difference is invariant between the general population and the high-risk group. Under this scenario the absolute risk difference in the general population is .01, so the absolute risk difference in the high-risk group is also .01. In this case, for p_C = .04, p_I = .03, and n = 2000, the power (computed using either the absolute risk difference or the relative risk statistic) for the trial of high-risk subjects is only .41. The decreased power in a high-risk group under a constant risk difference model is not surprising: if the risk difference p_C - p_I is the same but p_I is increasing, the variances p_C(1 - p_C)/n and p_I(1 - p_I)/n will increase as p_C increases up to .5, which reduces the power. A crucial issue is whether the absolute risk difference or the relative risk is likely to be invariant between average-risk subjects in the general population and high-risk subjects. The answer depends on the cancer, the intervention, and the biology. To gain some appreciation of this issue, we analyzed published data (summarized in Table 1) from a prevention trial of particular interest to us, a study of tamoxifen for the prevention of breast cancer [5]. Rather than limiting the analysis to one particular high-risk group, we investigated subjects at various levels of risk defined separately by three variables: age, predicted risk (the five-year risk of cancer based on the Gail model [3]), and family risk.

Table 1. Data from a cancer prevention trial for investigating assumptions of constant risk difference and relative risk when risk groups change. Cancer is invasive breast cancer; predicted risk is the 5-year predicted risk; family risk is the number of first-degree relatives with breast cancer. Data are from Table 5 of [5], with the number at risk computed by dividing the number of breast cancers by the reported breast cancer rate. Entries are cancers / number at risk.

Variable          Risk group         Placebo group     Tamoxifen group
age at entry      1: ≤ 49            68 / 10149        38 / 10045
                  2: 50-59           50 / 7912         25 / 8040
                  3: > 60            57 / 7719         26 / 7782
predicted risk    1: ≤ 2.00%         35 / 6318         13 / 6311
                  2: 2.01-3.01%      42 / 8108         29 / 8262
                  3: 3.01-5.00%      43 / 7313         27 / 6959
                  4: ≥ 5.01%         55 / 4142         20 / 4425
family risk       1: 0               38 / 5891         17 / 5724
                  2: 1               90 / 15000        46 / 15182
                  3: 2               37 / 4263         20 / 4211
                  4: 3               10 / 729           6 / 855

We fit four models separately to each variable: (i) constant risk difference, p_Ii = p_Ci - δ, where δ is a risk difference that is constant over groups; (ii) varying risk difference, p_Ii = p_Ci - δ_i, where δ_i is a risk difference that varies over groups; (iii) constant relative risk, p_Ii = β p_Ci, where β is a relative risk that is constant over groups; and (iv) varying relative risk, p_Ii = β_i p_Ci, where β_i is a relative risk that varies over groups. Here p_Ci and p_Ii denote the control-arm and intervention-arm probabilities in risk group i. We obtained maximum likelihood estimates of δ, δ_i, β, and β_i using a Newton-Raphson procedure [see Additional file 2]. To investigate the plausibility of the constant relative risk and constant risk difference models in this example, we plotted the estimates of δ, δ_i, β, and β_i along with confidence intervals (Figure 1).
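The constant-RR fit can be reproduced with a generic optimizer in place of a hand-coded Newton-Raphson step. The appendix's exact procedure is not shown in this excerpt, so the following profile-likelihood sketch is only our approximation, using the age rows of Table 1:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Age rows of Table 1: (placebo cancers, placebo at risk, tamoxifen cancers, tamoxifen at risk)
rows = [(68, 10149, 38, 10045), (50, 7912, 25, 8040), (57, 7719, 26, 7782)]

def neg_loglik_constant_rr(beta):
    """Binomial log-likelihood with p_I = beta * p_C in every risk group.
    Each group's p_C is profiled out at its MLE given beta."""
    ll = 0.0
    for xc, nc, xi, ni in rows:
        inner = minimize_scalar(
            lambda pc: -(xc * np.log(pc) + (nc - xc) * np.log(1 - pc)
                         + xi * np.log(beta * pc) + (ni - xi) * np.log(1 - beta * pc)),
            bounds=(1e-6, 0.5), method="bounded")
        ll += -inner.fun
    return -ll

fit = minimize_scalar(neg_loglik_constant_rr, bounds=(0.05, 1.0), method="bounded")
print("MLE of constant relative risk beta:", round(fit.x, 3))  # near 0.5 for these data
```

The same scaffolding with p_I = p_C - delta gives the constant-RD fit, and twice the log-likelihood gap between the varying and constant models yields the likelihood ratio p-values reported below.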
In the top row of Figure 1 we plotted points corresponding to the group-specific estimates δ̂_i with (100 - 5/k)% confidence intervals and horizontal lines for the common estimate δ̂ with 95% confidence intervals, where k is the number of risk groups. We also present the p-values corresponding to twice the difference in log-likelihoods for Varying RD versus Constant RD. Similarly, in the bottom row of Figure 1, we plotted points corresponding to β̂_i with (100 - 5/k)% confidence intervals and horizontal lines for β̂ with 95% confidence intervals, along with the p-values corresponding to twice the difference in log-likelihoods for Varying RR versus Constant RR. Out of 6 p-values (3 risk factors × 2 statistics), only one, for the absolute risk difference under the risk factor of predicted risk, had a small p-value (and that p-value of .01 would not be significant at the .05 level under a Bonferroni adjustment of .05/6). Based on these p-values and inspection of Figure 1, the models Constant RD and Constant RR are both plausible, especially for age and family risk. Figure 1. Data from the tamoxifen prevention trial. See text for a description of groups. Horizontal lines are estimates and 95% confidence intervals for the models with constant absolute risk difference per 1000 (RD) or constant relative risk (RR). P-values correspond to likelihood ratio tests comparing the models with varying and constant risk differences or relative risks. The trial designer does not know the true state of nature. If Constant RD is the true state of nature, the power will be lower in the high-risk group than in the general population. However, if Constant RR is the true state of nature, the power will be greater in the high-risk group. Thus there is a substantial probability that the power is reduced when studying high-risk subjects rather than the general population. Therefore, there is no free lunch in terms of statistical power. Possibly increased costs Even if the model is correct (namely, p_C and p_I are correctly chosen), the smaller trial of high-risk subjects may be more expensive than the larger trial of average-risk subjects from the general population. Consider the following two trials, each with a power of .90 and a one-sided type I error of .05. In the trial of high-risk subjects p_C = .04 and p_I = .02, and in the trial of average-risk subjects p_C = .02 and p_I = .01. Suppose the statistic of interest is the absolute risk difference. To obtain the sample size for each randomization group we use the standard sample size formula [4],

n = (1.644854 √(2p(1 - p)) + 1.28155 √(p_C(1 - p_C) + p_I(1 - p_I)))² / (p_C - p_I)²,   (4)

where p = (p_C + p_I)/2, 1.644854 is the z-statistic corresponding to the 95th percentile of the normal distribution (for a one-sided type I error of .05), and 1.28155 is the z-statistic corresponding to the 90th percentile (for a power of .90). Based on (4), the sample size for a trial using average-risk subjects from the general population is 2529 per group and the sample size for a trial of high-risk subjects is 1244 per group. Let C_R denote the cost of recruitment per subject and C_I the cost of intervention and follow-up per subject, averaged over the two randomization groups. Suppose high-risk subjects comprise a fraction f of the general population. The total cost of the trial for average-risk subjects from the general population is

C_general = 2(C_R 2529 + C_I 2529),   (5)

and the total cost of the trial for high-risk subjects is

C_high-risk = 2(C_R 1244/f + C_I 1244),   (6)

where the factor of 2 accounts for the two randomization groups.
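A short script makes the sample-size and cost comparison concrete. It implements formula (4) and the break-even condition obtained by setting (6) greater than (5); the function names are ours, and the thresholds match the text's .13 and .34 up to rounding.

```python
from math import sqrt

def sample_size_per_group(p_c, p_i, z_alpha=1.644854, z_beta=1.28155):
    """Per-group sample size from formula (4): one-sided alpha = .05, power = .90."""
    p_bar = (p_c + p_i) / 2
    num = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * sqrt(p_c * (1 - p_c) + p_i * (1 - p_i))) ** 2
    return round(num / (p_c - p_i) ** 2)   # rounded, matching the text's 2529 and 1244

n_general = sample_size_per_group(0.02, 0.01)   # 2529 per group, average-risk trial
n_high    = sample_size_per_group(0.04, 0.02)   # 1244 per group, high-risk trial

def breakeven_cost_ratio(f, n_high=1244, n_general=2529):
    """Smallest C_R/C_I at which the high-risk trial becomes the more expensive one,
    from C_R * (n_high/f - n_general) > C_I * (n_general - n_high)."""
    denom = n_high / f - n_general
    return float("inf") if denom <= 0 else (n_general - n_high) / denom

for f in (0.10, 0.20):
    print(f, round(breakeven_cost_ratio(f), 2))   # ~.13 for f=.10, ~.35 for f=.20
```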
The condition for the trial of high-risk subjects to cost more than the trial of average-risk subjects (namely C_high-risk > C_general) is C_R(1244/f - 2529) > C_I(2529 - 1244), which can hold only when 1244/f - 2529 > 0. If f = .20, the trial of high-risk subjects will cost more than the trial of average-risk subjects if C_R/C_I > .34. If f = .10, the trial of high-risk subjects will cost more if C_R/C_I > .13. In many cancer prevention trials such values of C_R/C_I are likely. For example, diagnostic testing to identify high-risk smokers can include expensive airway pulmonary function tests or bronchoscopy. In the future, more trials will likely involve expensive genetic testing of subjects [5], with costs ranging from $350 to almost $3,000 per test according to recent information from Myriad Genetic Laboratories. As part of a sensitivity analysis related to genetic testing of subjects prior to enrollment in a trial, Baker and Freedman [5] considered values of .1, .5, and 1 for ratios similar to C_R/C_I. Even without diagnostic testing, the costs of obtaining high-risk subjects can be substantial. If f = .10, the initial recruitment will require screening ten times as many people as for a trial of average-risk subjects from the general population. This increased recruitment would likely entail higher advertising costs and increased overhead from the inclusion of additional institutions. One additional consideration is how noncompliance and contamination affect the intent-to-treat analysis. If noncompliance and contamination can be anticipated, the investigator can adjust the sample size and costs accordingly. Mathematically, the effect of noncompliance and contamination is to change the values of p_C and p_I in (4), which would then affect (5) and (6). In some settings, investigators may anticipate that high-risk subjects are more likely to comply with the intervention than average-risk subjects. To compensate for the anticipated increased compliance, study designers could reduce the sample size, which would lower costs. However, in other situations, investigators may anticipate that subjects found to be at high risk on a diagnostic test would seek the best therapy outside of the trial rather than chance randomization to standard or experimental therapy. To compensate for the anticipated dilution of the treatment effect, investigators would need to increase the sample size, which would increase costs. For these reasons, even if the probabilities under the alternative hypothesis are correctly specified, some trials of high-risk subjects may be more expensive than larger trials of average-risk subjects with the same power and type I error. Possibly misleading ratio of benefits to harms When there is strong evidence prior to the trial of a high probability of harmful side effects due to the intervention, one would want to restrict the intervention to high-risk subjects. Otherwise, some investigators may be tempted to estimate the ratio of benefits to harms in the trial of high-risk subjects and extrapolate the ratio to average-risk subjects. Unfortunately, even if the assumption of constant relative risk over risk categories were true, extrapolating the benefit-harm ratio from a high-risk group to the general population could be misleading. Suppose that in a randomized trial involving average-risk subjects from the general population the probability of cancer is .02 in the control arm and .01 in the study arm.
Also suppose that the relative risk is the same in the general population as in the high-risk group, so that in a randomized trial involving a high-risk group the probability of cancer is .04 in the control arm and .02 in the study arm. Furthermore, suppose that the probability of harmful side effects is the same for high-risk subjects as for average-risk subjects in the general population, namely .015 in the control arm and .025 in the study arm. Based on these results, for every 1000 high-risk persons who receive the intervention, (.04 - .02) × 1000 = 20 will benefit from the intervention and (.025 - .015) × 1000 = 10 will be harmed by side effects, yielding a benefit-harm ratio of 20:10 = 2:1. Similarly, for every 1000 average-risk persons who receive the intervention, (.02 - .01) × 1000 = 10 will benefit and (.025 - .015) × 1000 = 10 will be harmed, yielding a benefit-harm ratio of 10:10 = 1:1. In this example it would be incorrect to extrapolate the high benefit-harm ratio estimated from the high-risk group to the general population, for whom the benefit-harm ratio is much lower. For many cancer prevention interventions, the ratio of life-threatening disease avoided to life-threatening harms would be favorable in the high-risk group but not when extrapolated to the general population. Conclusion There is no "free lunch" when using high-risk subjects in prevention trials designed to make inferences about the general population. Using high-risk subjects instead of average-risk subjects from the general population may lower statistical power, increase costs, and yield a more favorable ratio of benefits to harms than is actually the case in the general population. Given the substantial costs of definitive randomized trials in cancer prevention, and the importance of accurately assessing the balance of benefits and harms when treating healthy, asymptomatic people, it is important to conduct trials in the actual target population rather than in high-risk populations with the plan of extrapolating the results to the general population. Competing interests The authors declare that they have no competing interests. Authors' contributions SGB wrote the initial draft, and BSK and DC made valuable improvements. All authors read and approved the final manuscript. Pre-publication history The pre-publication history for this paper can be accessed here: Supplementary Material Additional File 1: Appendix A, worked-out calculations of power. Additional File 2: Appendix B, likelihood formulations.
PMID: 15461821
DOI: 10.1186/1471-2288-4-24
PMCID: PMC524367
Perceived personal, social and environmental barriers to weight maintenance among young women: A community survey
Background Young women are a group at high risk of weight gain. This study examined a range of perceived personal, social and environmental barriers to physical activity and healthy eating for weight maintenance among young women, and how these varied by socioeconomic status (SES), overweight status and domestic situation. Methods In October-December 2001, a total of 445 women aged 18–32 years, selected randomly from the Australian electoral roll, completed a mailed self-report survey that included questions on 11 barriers to physical activity and 11 barriers to healthy eating (relating to personal, social and environmental factors). Height, weight and socio-demographic details were also obtained. Statistical analyses were conducted mid-2003. Results The most common perceived barriers to physical activity and healthy eating encountered by young women were related to motivation, time and cost. Women with children were particularly likely to report a lack of social support as an important barrier to physical activity, and lack of social support and time as important barriers to healthy eating. Perceived barriers did not differ by SES or overweight status. Conclusions Health promotion strategies aimed at preventing weight gain should take into account the specific perceived barriers to physical activity and healthy eating faced by women in this age group, particularly lack of motivation, lack of time, and cost. Strategies targeting perceived lack of time and lack of social support are particularly required for young women with children.
Introduction In many developed countries, overweight and obesity have reached epidemic proportions [1-8]. One group at particular risk of weight gain and the development of obesity is young women [2,9,10]. In the US, for example, one study that tracked weight in a large population sample over a 10-year period found that major weight gain (an increase in body mass index (BMI) > 5 kg/m²) was twice as common in women (5.3%) as in men (2.3%) [2]. A recent study of almost 9,000 women aged 18-23 years in Australia showed that 41% of the sample gained more than 5% of their baseline BMI over a four-year period (1996-2000) [9]. This risk of weight gain and the development of obesity places young women at increased risk of a range of chronic medical conditions and diseases, such as hypertension, type-2 diabetes, cardiovascular disease, and certain cancers [11]. In an effort to reverse the current global epidemic of overweight and obesity, strategies to promote increased physical activity and to encourage healthy eating have been promoted in many countries [12-15]. In Australia, for instance, individuals are encouraged to consume diets that are low in fat, high in fibre and rich in fruits and vegetables [13], and to participate in at least 30 minutes of moderate-intensity activity on at least five days per week [12]. Despite such efforts, many young women do not meet the current physical activity recommendations [16], and their diets are less than optimal. For example, mean daily intakes of fruits and vegetables fall well below recommended levels [17], and 50% of young Australian women consume at least one takeaway meal per week, which is likely to be high in energy density [9]. Poor compliance with dietary and physical activity guidelines is not unique to Australia [18-20]. In addition, recent work we have conducted suggests that many young women do not consider the kinds of lifestyle changes being recommended feasible in the context of their daily lives [21]. An understanding of the perceived barriers faced by young women in achieving healthy lifestyle changes is therefore important. Most existing studies examining perceived barriers to physical activity and healthy eating have focused on the general population [18,22-25], with few specifically considering the perceived barriers experienced by those at particular risk of weight gain, such as young women. However, the perceived barriers faced by young women are likely to differ from those faced by other groups, such as men or older women. For example, a study in the USA showed that women more frequently report 'tiredness' and 'time' as significant perceived barriers to healthy habits than do men, and that this may be partly attributable to their domestic situation [25]. In addition, young women are more likely than older women to experience particular life events (e.g. leaving the family home, starting work, entering a marital or de facto relationship, and becoming mothers) that may influence their physical activity and dietary habits [26,27]. As well as perceiving different barriers from those faced by other groups in the population, the perceived barriers to increasing physical activity and improving diet that young women face may vary according to their social and personal circumstances. For example, having children is likely to impact on a woman's ability to adopt healthy habits [21,28,29].
In addition, persons of lower socioeconomic status (SES) may have poorer access to parks, walking or jogging trails, and gym equipment than those of higher SES [25]. Access to good-quality, inexpensive healthy foods has also been reported to be more limited among persons of low SES; for instance, the cost of healthy foods has been reported to be greater for those living in deprived areas [30,31]. A number of studies have suggested that a lack of knowledge is a greater barrier to eating a healthy diet among those with a lower education level [22,23]. Being overweight can also be perceived as a significant barrier to physical activity [32]. However, whether these factors are perceived as barriers to physical activity and healthy eating among young women is unknown. In order to develop appropriate and effective obesity prevention strategies for young women, it is important to understand the barriers they perceive in attempting to control their weight. The aim of this study was to examine perceptions of a range of personal, social and environmental barriers to physical activity and healthy eating, specifically related to weight maintenance, among young women, and how these vary by domestic situation, SES and overweight status. Methods Participants A total of 445 women provided data for this study. Initially, a sample of 1200 women aged 18-32 years was selected from the Australian Electoral Roll using a stratified random sampling procedure, with strata based on the number of eligible cases in each of the eight States/Territories of Australia. As voting is compulsory for Australian adults, the electoral roll provides a complete record of population data on Australian residents aged 18 years and over. Excluding those who had moved and left no forwarding address, the study achieved a response rate of 41% (462 women participated), which is comparable to response rates reported in similar postal surveys with this age group [33,34]. Data from 17 women who were pregnant were excluded. The socio-demographic characteristics of the sample are reported in full elsewhere [21]. Briefly, 42% of the respondents were tertiary-educated. Half of the women were married and one in three had at least one child. One in three respondents was classified as overweight or obese. The socio-demographic profile of the sample was comparable to that of women of similar age (18-44 y) who participated in the most recent (2001) Australian National Health Survey [35]. Procedures A questionnaire was developed and pilot-tested with a convenience sample of 10 women in the same age group as participants. The questionnaire, a study description, an invitation to participate, a consent form and a reply-paid envelope for returns were mailed to the study sample in October 2001. Non-responders were sent a reminder postcard two weeks later and a second reminder with a replacement questionnaire a further three weeks later. Measures The participants completed the following questions. Socio-demographic background The socio-demographic questions included domestic situation (household composition) and education. Domestic situation was assessed by asking 'Who lives with you?' with the response options: no-one, I live alone; partner/spouse; own children; someone else's children; parents; brothers/sisters; other adult relatives; and other adults who are not family members.
This was subsequently re-categorized as living with parental family; living alone/share 'flatting'; living with partner (no children); or living with children (including those living with a partner and child/ren, and single mothers). Education level (highest level of schooling: still at school, primary school, some high school, completed high school, technical/trade school certificate/apprenticeship, or university/tertiary qualification) was subsequently categorized as tertiary educated or not tertiary educated and used as an indicator of SES. Body weight Women were asked to self-report their height and weight, and this information was used to calculate body mass index (BMI = weight (kg) / height (m)²). Self-reported height and weight have been shown to provide a reasonably valid measure of actual height and weight for the purpose of investigating relationships in epidemiological studies [36]. Women were categorised as overweight (BMI ≥ 25) or not overweight (BMI < 25) [11]. Perceived barriers to weight maintenance Young women's perceptions of barriers to weight maintenance were assessed using 22 items. Participants were asked 'How important are the following as barriers to you keeping your weight at the level you want?' The complete list of barrier items is included in Tables 1 and 2. These items were based on a review of the literature investigating barriers to weight-maintenance behaviours in other population groups [22-25]. Two sets of perceived barriers were assessed: those related to physical activity and those related to healthy eating. For each set of questions, participants were asked about access to information; motivation; enjoyment; skills; partner support and children's support (where relevant); friends' support; access; cost; time due to job demands; and time due to family commitments as possible barriers. Response options for all barrier items were: not a barrier; a somewhat important barrier; a very important barrier; not applicable. For analyses, the responses not applicable and not a barrier were combined.
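Returning to the body-weight measure above, a two-line helper reproduces the BMI computation and the cut-point used in this study (a trivial sketch; the function name is ours, and the cut-point is the BMI ≥ 25 threshold cited in the text):

```python
def bmi_category(weight_kg, height_m):
    """Return BMI and the overweight classification used in the study (BMI >= 25)."""
    bmi = weight_kg / height_m ** 2
    return bmi, "overweight" if bmi >= 25 else "not overweight"

print(bmi_category(70.0, 1.65))  # (~25.7, 'overweight')
```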
Table 1. Perceived barriers to physical activity (N = 445). For each item: factor loading; % rating it not a barrier; % a somewhat important barrier; % a very important barrier.

Factor 1: Personal barriers to physical activity (eigenvalue = 4.21, 38% of variance, Cronbach's alpha = 0.76)
- Do not have the motivation to do physical activity, exercise or sport: 0.58; 26; 34; 40
- Not enjoying physical activity, exercise or sport: 0.80; 57; 25; 18
- Do not have the skills to do physical activity, exercise or sport: 0.70; 81; 14; 5

Factor 2: Social support barriers to physical activity (eigenvalue = 1.13, 10% of variance, Cronbach's alpha = 0.68)
- No partner's support to be physically active: 0.80; 78; 13; 9
- No children's support to be physically active: 0.82; 94; 4; 2
- No friends' support to be physically active: 0.57; 84; 11; 5

Factor 3: Environmental barriers to physical activity (eigenvalue = 1.22, 11% of variance, Cronbach's alpha = 0.71)
- Do not have enough information about how to increase physical activity: 0.75; 83; 12; 5
- Not having access to places to do physical activity, exercise or sport: 0.57; 66; 23; 11
- Not being able to find physical activity facilities that are inexpensive: 0.70; 49; 29; 22
- Not having the time to be physically active because of job: 0.76; 42; 29; 29
- Not having the time to be physically active because of family commitments: 0.68; 63; 22; 15

Table 2. Perceived barriers to healthy eating (N = 445). Columns as in Table 1.

Factor 4: Personal and environmental barriers to healthy eating (eigenvalue = 4.61, 42% of variance, Cronbach's alpha = 0.83)
- Do not have enough information about a healthy diet: 0.70; 72; 17; 11
- Do not have the motivation to eat a healthy diet: 0.70; 34; 41; 25
- Do not enjoy eating healthy foods: 0.80; 64; 26; 10
- Do not have the skills to plan, shop for, prepare or cook healthy foods: 0.70; 73; 19; 8
- Do not have access to healthy foods: 0.65; 80; 16; 4
- Not able to buy healthy foods that are inexpensive: 0.60; 60; 27; 13

Factor 5: Social and environmental barriers to healthy eating (eigenvalue = 1.23, 11% of variance, Cronbach's alpha = 0.72)
- No partner's support to eat a healthy diet: 0.76; 79; 13; 8
- No children's support to eat a healthy diet: 0.80; 97; 2; 1
- No friends' support to eat a healthy diet: 0.57; 83; 12; 5
- Not having time to prepare or eat healthy foods because of job: 0.47; 57; 23; 20
- Not having time to prepare or eat healthy foods because of family commitments: 0.55; 77; 15; 8

Most important perceived barriers In order to ascertain women's perceptions of the single most important barrier to physical activity and healthy eating (which may not have been included in the list of barriers developed by the researchers), participants were asked the following two open-ended questions: 'What is the one thing that makes it hardest for you to be physically active?' and 'What is the one thing that makes it hardest for you to eat a healthy diet?' Statistical analyses Analyses were conducted in mid-2003 using SPSS version 11.0.0 statistical software [37]. Initially, descriptive analyses were performed to describe the proportion of women rating each of the items as not a barrier, a somewhat important barrier or a very important barrier. Content analyses of the open-ended questions were undertaken to identify the main recurring themes.
Two separate exploratory factor analyses using SPSS FACTOR were performed with the 11 barriers to physical activity and the 11 barriers to healthy eating, to identify underlying patterns of relationships among individual items, and to reduce and simplify the items in order to facilitate subsequent analyses. Principal components analysis with varimax rotation (since factors were not correlated) was used. For any cross-loading items (i.e. items that had loadings of greater than 0.4 on more than one factor), only the higher loading was taken into account when calculating final factor scores. Inter-item reliability for each factor was assessed by Cronbach's α coefficients. Kaiser's measure of sampling adequacy was used to confirm the appropriateness of factor analysis [ 38 ]. Standardized factor scores were computed for each factor, with a large positive score representing more important barriers and a large negative score, less important barriers. Analysis of variance or t-tests were performed separately for each of the standardized factor scores to investigate differences in perceived barriers to physical activity and healthy eating with regard to domestic situation, SES and overweight status. Results Perceived barriers to physical activity Table 1 presents the proportions of women reporting each of the perceived barriers to physical activity. The main barriers reported by young women related to motivation, time and cost. Combining the response categories 'somewhat important' and 'very important', 74% of the sample reported lack of motivation – 'not having the motivation to do physical activity, exercise or sport', time (58%) – 'not having time to be physically active because of my job,' and cost (51%) – 'not being able to find physical activity facilities that are inexpensive' – as common barriers to physical activity. Lack of time due to work commitments (reported by 58%) was more commonly reported than lack of time due to family commitments (37%), perhaps due to the relatively small proportion (30%) of young women in this study with at least one child. Less common perceived barriers to physical activity included lack of information, skills, partners' and children's support, and friends' support. Perceived barriers to healthy eating Table 2 presents perceived barriers to healthy eating. As with physical activity, lack of motivation (66%), lack of time due to job commitments (43%), and cost (inability to buy healthy foods that are inexpensive: 40%) were common perceived barriers. Less commonly reported barriers included lack of information, skills and friends', partners' and children's support, and access. As with physical activity, lack of time related to job demands (reported by 43%) was more common than lack of time due to family commitment (23%). The most important perceived barriers to physical activity and healthy eating Consistent with women's responses to the closed-ended questions, the most important perceived barriers to physical activity reported in response to the open-ended questions were lack of time due to work, study or family commitments (78%), lack of motivation (37%) and childcare issues (25%). The most important perceived barriers to healthy eating related to taste (24%); lack of time (21%); lack of motivation (13%); and the perception that healthy foods are inconvenient or expensive (13%). 
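The factor solutions reported next follow the procedure described in the statistical analyses: principal components extraction with varimax rotation, plus Cronbach's alpha for inter-item reliability. The original analysis used SPSS; a rough Python stand-in is sketched below (our sketch only: sklearn's FactorAnalysis is not identical to SPSS's principal components extraction, and the response matrix here is simulated rather than the survey data).

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
# Stand-in data: 445 respondents, 11 items scored 0-2 (not / somewhat / very important)
X = rng.integers(0, 3, size=(445, 11)).astype(float)

# Varimax-rotated factor solution (sklearn >= 0.24 supports rotation="varimax")
fa = FactorAnalysis(n_components=3, rotation="varimax").fit(X)
print(np.round(fa.components_.T, 2))   # item loadings on the 3 factors

print(cronbach_alpha(X[:, :3]))        # reliability of a hypothetical 3-item factor
```

On random stand-in data the alpha will be near zero; on the real survey responses one would expect values like the 0.68-0.83 range reported for the extracted factors.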
Factor analysis of perceived barriers to weight maintenance The factor analysis of the perceived barriers to physical activity revealed three interpretable factors (Table 1) with eigenvalues greater than one. Together, these factors explained 60% of the total variance. Two items, 'not having access to places to do physical activity, exercise or sport' and 'not having friends' support to be physically active', cross-loaded on two factors; each was included only on the factor on which it showed the larger loading. The Cronbach's α coefficients for the three factors ranged from 0.68 to 0.76, indicating moderate internal reliability. Provisional names were assigned to these three factors: 'personal barriers', 'social support barriers' and 'environmental barriers'. The items included as personal barriers to physical activity related to motivation, enjoyment, and skill. Social support barriers encompassed lack of support from family and friends, and environmental barriers related to information, access, cost, and time. The principal components analysis of the 11 barriers to healthy eating resulted in two distinct interpretable factors with eigenvalues greater than one (Table 2). The Cronbach's α coefficients for the two factors were 0.72 and 0.83, indicating moderate to good internal reliability. Together, the two factors explained 53% of the total variance. Provisional names were assigned to these factors: 'personal and environmental barriers' and 'social and environmental barriers'. Personal and environmental barriers to healthy eating included motivation, enjoyment, skills, information, cost, and access. Social and environmental barriers related to lack of support from family and friends and to time constraints. Associations of domestic situation, education and overweight status with perceived barriers Mean factor scores did not vary according to women's overweight status or SES. Mean factor scores did differ significantly by domestic situation for two factors: social support barriers to physical activity, and social and environmental barriers to healthy eating (see Table 3). Compared with women living in other domestic situations, women with children had the highest score on the social support barriers to physical activity factor, indicating that lack of support from partners, children and friends was a more important perceived barrier to physical activity for these women. This group also had the highest score on the social and environmental barriers to healthy eating factor, indicating that lack of social support and insufficient time were more important perceived barriers to healthy eating among women with children than among other women. Conversely, young women who lived with their parents had the lowest scores on these factors, indicating the relative unimportance of social support barriers to physical activity, and of social and environmental barriers to healthy eating, for this group.

Table 3. Mean standardized factor scores for barriers to weight maintenance by domestic situation. A large positive score represents more important barriers; a large negative score, less important barriers. Scores are listed in the order: parents; alone/share; partner; children; followed by the p-value.

- Personal barriers to physical activity: 0.08; 0.22; -0.09; -0.07; p = .12
- Social support barriers to physical activity: -0.37; -0.14; -0.06; 0.55; p = .000
- Environmental barriers to physical activity: 0.13; 0.12; 0.03; -0.18; p = .11
- Personal and environmental barriers to healthy eating: -0.04; 0.23; 0.02; -0.04; p = .25
- Social and environmental barriers to healthy eating: -0.30; -0.16; -0.06; 0.49; p = .000

Discussion This study suggests that a lack of motivation, time constraints due to work, and cost are the key perceived barriers to weight maintenance faced by young women. Overall, these findings support other research that has examined barriers to physical activity and healthy eating [18,22,25,39]. However, the present study is unique in providing insight into the relative importance of a range of personal, social and environmental factors as perceived barriers to weight maintenance among young women, a group at high risk of weight gain. The findings showed that young women tended to rate personal factors as the key perceived barriers to physical activity and healthy eating, followed by environmental factors, with social factors rated as less important. While the environment is likely to be an important source of influence on obesity-related behaviours [40], these findings highlight that efforts to prevent obesity should not ignore the central role of cognitive factors. Given the striking similarities between the types of barriers reported to impede physical activity and the perceived barriers to healthy eating, the findings also suggest that there may be economies of scale in health promotion programs aimed at preventing weight gain among young women. For example, strategies aimed at boosting motivation for healthy behaviour may help to promote increased physical activity and healthy eating simultaneously. While motivating young healthy women to adopt healthy eating and physical activity behaviours is likely to be challenging, recent intervention research suggests that motivationally-tailored interventions may be more successful than other approaches (e.g. those based on social-cognitive theory) in promoting physical activity and healthy eating [41,42]. It is noteworthy that perceived barriers to weight maintenance did not vary by socioeconomic status or overweight status in this sample of women. In contrast, previous research has shown that overweight men and women face a number of perceived barriers to physical activity [32]. Similarly, given that diet varies by socioeconomic status [43,44], we expected that women of lower socioeconomic status would be more likely to experience barriers to eating a healthy diet. Previous studies also suggest that persons of low SES often live in areas where the cost of food is greater and access to healthy foods is poorer [30,31]. The reasons for the difference between the present results and earlier findings are unclear. It may be, however, that in this sample of relatively young women, many were still completing their education, and hence any SES differences in perceived barriers to healthy behaviours were not yet established. Compared with other young women, those living with children were the most likely to report lack of social support for physical activity, and lack of support and time for healthy eating, as key perceived barriers to maintaining their weight. Young women who lived with their parents were the least likely to perceive these as barriers to weight maintenance. These findings are consistent with those of previous studies showing that getting married and having children are associated with decreased physical activity and greater weight gain [21,26].
Any weight-gain prevention program targeting women with children should incorporate a focus on enlisting social support both for physical activity and for shopping for and preparing healthy foods. In a previous study with the same sample, we reported that while the majority of the women were in a healthy weight range (51%) or overweight/obese (31%), 18% of the women were underweight [21]. It should be acknowledged that some women in this sample, particularly those who were underweight, may have been trying to gain weight. One limitation of the present study is that the questions assessing perceived barriers to weight maintenance did not distinguish women trying to keep their weight down from those trying to keep their weight up, and the interpretation of the questions on perceived barriers may have differed slightly between these groups. However, attempts to gain weight are relatively uncommon among young women [45], and hence this is likely to have affected only a small proportion of the sample. A second limitation is that the barriers were not assessed objectively, but rather through self-reports (i.e., perceived barriers). Nonetheless, it is important to consider women's perceptions of the factors hindering their efforts to engage in healthy behaviours, since objective barriers may be perceived differently by different women (e.g., poor access to a gym may be viewed as less of a barrier to physical activity by a woman who walks for exercise than by one who prefers aerobics). Finally, although the study achieved a somewhat modest response rate, the sample was selected from a nationally representative sampling frame, and the socio-demographic profile of respondents was comparable to that of similarly-aged women in the wider population [35]. Conclusions The findings of this study highlight the need for health promotion strategies that provide increased motivation, support and skills to enable young women to shop for and prepare healthy, quick and inexpensive meals. Similarly, the findings suggest a need to promote more time-efficient physical activity alternatives. Additional strategies that recognize the perceived barriers to physical activity and healthy eating faced by young women with children are particularly required. Competing interests The authors declare that they have no competing interests. Authors' contributions SA conducted the literature review, the final statistical analyses and early drafts of the results and conclusions sections. KB and DC conceived the study, design and measures, collected the data, coordinated the analyses and participated in the write-up of all sections. NW conducted preliminary analyses and drafted early results. VI contributed to drafting the final manuscript.
/Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC524367.xml
15462679
10.1186/1479-5868-1-15
545955
Antitumor effects of two bisdioxopiperazines against two experimental lung cancer models in vivo
Background Probimane (Pro), an anti-cancer agent developed in China, was derived from razoxane (ICRF-159, Raz), a drug created in Britain; both specifically target cancer metastasis and act as cardioprotectants against anthracyclines. Pro and Raz are bisdioxopiperazine compounds. In this work, we evaluated the anti-tumor and anti-metastatic effects of Pro and Raz in vivo against two lung tumor models, one of murine origin (Lewis lung carcinoma, LLC) and one of human origin (LAX-83). Results After determining the lethal dosages of Pro and Raz, we assessed and compared the inhibitory effects of Pro and Raz against primary tumor growth and metastatic occurrence of LLC at the LD5 dosage. Pro and Raz were active against primary tumor growth and significantly inhibited pulmonary metastasis of LLC at the same dose ranges (inhibitory rates > 90 %). Both Raz and Pro were effective in 1, 5 and 9 day administration schedules. Three different schedules of Raz and Pro were effective against the primary tumor growth of LLC (35–50 %). The synergistic anticancer effect of Raz with bleomycin (Ble) (from 41.3 % to 73.3 %) was more obvious than that with daunorubicin (Dau) (from 33.1 % to 56.3 %) in the LLC tumor model. Pro was also seen to have synergistic anti-cancer effects with Ble in the LLC model. Both Raz and Pro inhibited the growth of LAX-83 in a statistically significant manner. Conclusions These data suggest that both Raz and Pro have anti-tumor potential and that Raz and Pro have combined effects with Ble or Dau. The potential targets of bisdioxopiperazines may include lung cancers, especially tumor metastasis. The anti-cancer effects of Raz and Pro can be increased with the help of other anticancer drugs.
Background Razoxane (ICRF-159) (Raz), first developed in the UK, was the first agent shown to act against spontaneous metastasis in a murine model (Lewis lung carcinoma) in 1969 [ 1 ]. A large body of papers and projects has been published on the utility and mechanisms of Raz's anticancer actions, such as assisting radiotherapy [ 2 ], overcoming the multi-drug resistance (MDR) of daunorubicin and doxorubicin [ 3 ], and inhibiting topoisomerase II [ 4 ]. More importantly, Raz, as a cardioprotectant against anthracyclines, has been licensed in 28 countries on 4 continents. Since morpholine groups in some structures were reported to be responsible for cytotoxic or modulative actions on tumors, an anticancer agent, probimane [1,2-bis(N4-morpholine-3,5-dioxopiperazine-1-yl) propane; AT-2153, Pro], was synthesized in China by introducing two morpholine groups into Raz [ 5 ]. Raz and Pro belong to the bisdioxopiperazines. Like Raz, Pro also exhibits anti-tumor activity both in vivo and in vitro against experimental tumor models in small-scale investigations [ 6 , 7 ], and limited clinical data showed that Pro could inhibit human malignant lymphoma, even lymphomas resistant to other anticancer drugs [ 8 ]. Pro exhibits the same pharmacological effects as Raz, such as detoxification of Adriamycin (ADR)-induced cardiotoxicity and synergism with ADR against tumors [ 9 , 10 ]. We have found some novel biological effects of Pro, such as inhibition of the activity of calmodulin (CaM), a cell-signal regulator, which can explain its anticancer actions and the combined cytotoxic effect of Pro and ADR [ 11 ]. Pro was also shown to inhibit lipoperoxidation (LPO) of erythrocytes [ 12 ], influence tumor sialic acid synthesis [ 13 ] and inhibit the binding of fibrinogen to leukemia cells [ 14 ]. Lung cancer is the No. 1 killer among all categories of cancer in urban areas in China and many Western countries. The high mortality rate of lung cancer is driven largely by multi-drug resistance (MDR) and by the high incidence of metastasis in the clinic [ 15 ]. Since we assumed that Pro, like Raz, may possess useful therapeutic potential, we evaluated in vivo the chemotherapeutic parameters of Pro and Raz for lung cancers of both murine and human origin. Results Lethal toxicity of Pro and Raz in mice The lethal dosages of Pro and Raz are tabulated in Table 1. Since the toxicity of Pro and Raz seemed to lack sex specificity in mice, we were able to combine the numbers of male and female mice for LD50 and LD5 calculations. We used the approximate LD5 dosages of Pro (60 mg/kg ip × 7) and Raz (20 mg/kg ip × 7) as equitoxic dosages for further treatment studies.

Table 1. The subacute toxicity of Pro and Raz in mice. Mouse survival was observed for 1 month; 20 mice were used for each of the 5 dosages of a single agent.
Drugs | Protocol | LD5 (mg/kg) | LD50 (mg/kg)
Probimane | ip × 10 | 66 | 121
Razoxane | ip × 10 | 23 | 53

Antitumor and antimetastatic effects of Pro and Raz on LLC Antitumor and antimetastatic effects of Pro and Raz on LLC are tabulated in Tables 2 and 3. Pro and Raz at equitoxic dosages (LD5) showed a noticeable anticancer effect on primary tumor growth (inhibitory rates, approximately 30–45 %) and significantly inhibited the formation of tumor metastases (inhibitory rates on pulmonary metastasis > 90 %, P < 0.001). Primary tumor growth of LLC was inhibited more by Pro (48 %) than by Raz (40.3 %) in a 20 day trial, whereas the inhibition by Pro (35.7 %) was slightly less than that by Raz (40 %) in an 11 day trial.
Pro seems to be more persistent than Raz in inhibiting primary tumor growth of LLC. Antitumor effects of bisdioxopiperazines for different schedules and in combination with other anticancer drugs Antitumor effects of Raz and Pro on LLC are included in Tables 4, 5 and 6. We evaluated 1, 5 and 9 day administration schedules in our study. We found that Raz and Pro were effective against LLC in a statistically significant manner when given as 3 injections per day on days 1, 5 and 9. When Raz was administered to tumor-bearing mice only once daily on days 1, 5 and 9, there was no difference between treatment and vehicle control. Antitumor effects of Raz in combination with Ble on LLC (73.3 %) were better than those in combination with Dau (56.3 %) (Tables 5 and 6). Pro also showed synergistic effects in combination with Ble (Table 7).

Table 2. The influence of Pro and Raz on the primary tumor of LLC (Student's t-test). Route: ip × 7 daily; experiment term was 11 days. * P < 0.05 (treatment vs vehicle control). 30 mice in the control group and 20 in each treatment group; 100 % survival was observed in each group.
Compounds | Dosage (mg/kg/d) | Body weight (g) | Tumor weight (g) | Tumor inhibition (%)
Control | -- | 23.3/24.4 | 2.80 ± 0.04 | --
Razoxane | 20 | 23.3/23.4 | 1.61 ± 0.03* | 40.0
Probimane | 30 | 23.4/21.6 | 1.91 ± 0.03* | 32.1
Probimane | 60 | 23.3/23.8 | 1.80 ± 0.03* | 35.7

Table 3. The influence of Pro and Raz on the primary and metastatic tumor of LLC. PTI (%), primary tumor inhibition; MFCPM, metastatic foci count per mouse. Route: ip × 7 every 2 days; experiment term was 20 days. * P < 0.001 (treatment vs vehicle control). 30 mice in the control group and in each treatment group; 100 % survival was observed in each group.
Compounds | Dosage (mg/kg/d) | Body weight (g) | PTI (%) | MFCPM
Control | --- | 22.8/21.4 | -- | 30.9 ± 7.3
Razoxane | 20 | 22.7/21.5 | 40.3 | 1.2 ± 0.5*
Probimane | 30 | 23.3/22.5 | 42.0 | 1.5 ± 0.5*
Probimane | 60 | 23.3/20.3 | 48.0 | 1.0 ± 0.2*

Table 4. Antitumor effects of bisdioxopiperazines with different schedules on Lewis lung carcinoma. *Administration every 3 hours; 16 mice were included in each testing group. ** p < 0.05 (treatment vs control). Experimental term was 11 days.
Compounds | Dosage (mg/kg) | Schedule (administrations on days 1, 5, 9) | Tumor weight (g) | Tumor inhibition (%)
Control | -- | -- | 2.36 ± 0.05 | --
Razoxane | 80 | 1 time a day | 2.49 ± 0.05 | -5.5
Razoxane | 40 | 1 time a day | 2.32 ± 0.07 | 1.7
Razoxane | 20 | 1 time a day | 2.80 ± 0.06 | -18.6
Razoxane | 10 | 3 times a day* | 1.51 ± 0.04** | 36.0
Probimane | 20 | 3 times a day* | 1.19 ± 0.05** | 49.6

Table 5. Antitumor effects of Raz on Lewis lung carcinoma in combination with daunorubicin. *Administration every 3 hours. Experimental term was 11 days.
Compounds | Dosage (mg/kg) | Schedule (administrations on days 1, 5, 9) | Tumor weight (g) | Tumor inhibition (%)
Control | -- | -- | 2.34 ± 0.05 | --
Razoxane (Raz) | 10 | 3 times a day* | 1.57 ± 0.05 | 32.9
Daunorubicin (Dau) | 2 | 1 time a day | 1.10 ± 0.04 | 53.0
Raz + Dau | 10 + 2 | 3 times/1 time a day | 1.02 ± 0.04 | 56.4

Table 6. Antitumor effects of Raz on Lewis lung carcinoma in combination with bleomycin. *Administration every 3 hours in one day. ** p < 0.01 (treatment vs vehicle control).
Experimental term was 11 days.
Compounds | Dosage (mg/kg) | Schedule (administrations on days 1, 5, 9) | Tumor weight (g) | Tumor inhibition (%)
Control | -- | -- | 2.46 ± 0.06 | --
Razoxane (Raz) | 10 | 3 times a day* | 1.44 ± 0.07 | 41.5
Bleomycin (Ble) | 15 | 1 time a day | 1.50 ± 0.06 | 39.0
Raz + Ble | 10 + 15 | 3 times + 1 time a day | 0.66 ± 0.05** | 73.2**

Table 7. Antitumor effects of Pro on Lewis lung carcinoma in combination with daunorubicin or bleomycin. *Administration every 3 hours. Experimental term was 11 days.
Compounds | Dosage (mg/kg) | Schedule (administrations on days 1, 5, 9) | Body weight (g) | Tumor weight (g) | Tumor inhibition (%)
Control | -- | -- | 20.6/21.6 | 2.62 ± 0.08 | --
Pro | 20 | 3 times a day* | 20.6/20.8 | 1.45 ± 0.07 | 44.6
Dau | 2 | 1 time a day | 20.6/20.0 | 1.14 ± 0.08 | 56.5
Ble | 15 | 1 time a day | 20.7/21.2 | 1.36 ± 0.08 | 48.1
Pro + Dau | 20 + 2 | 3 times/1 time a day | 20.6/20.9 | 1.07 ± 0.05 | 59.2
Pro + Ble | 20 + 15 | 3 times/1 time a day | 20.7/19.8 | 0.59 ± 0.04 | 77.5

Antitumor activity of Pro and Raz on LAX-83 The experiments showed that LAX-83 was sensitive to Raz (40–60 mg/kg, ip × 5) and Pro (80–100 mg/kg, ip × 5), with inhibitory rates of 25–32 % and 55–60 % respectively (P < 0.01 vs control). CTX, used as a positive-control anticancer drug (40 mg/kg, ip × 5), exhibited antitumor activity against the growth of LAX-83 with an inhibitory rate of 84 %. Obvious necrosis in tumor tissues was observed on histological evaluation of the CTX and Pro treatment groups, but Pro produced larger vacuoles than CTX. Drug inhibition of tumor volume was calculated and is outlined in Table 8. We also tested the 5 most commonly used anticancer drugs: cyclophosphamide (CTX), 5-fluorouracil (5-Fu), methotrexate (MTX), cisplatin (DDP) and vincristine (VCR) (Table 9). In the LAX-83 model, CTX was shown to be the most effective. The anticancer effect of Pro was the same as or better than those of MTX, DDP and 5-Fu against LAX-83 tumor growth.

Table 8. Antitumor activities of Pro and Raz on human tumor LAX-83 using the subrenal capsule assay. Route: ip × 5 daily from the day after surgery. * P < 0.05, ** P < 0.001 (treatment vs vehicle control; t-test). Experiment was completed within 7 days. Tumor volume = 1/2 × width^2 × length.
Compounds | Dosage (mg/kg/d) | No. mice | Body weight (g) | Tumor volume (mm^3) | Inhibition (%)
Control | --- | 16 | 19.2/21.0 | 39.8 ± 3.2 | --
Razoxane | 40 | 12 | 20.8/21.5 | 29.7 ± 3.0* | 25
Razoxane | 60 | 12 | 19.8/18.8 | 27.2 ± 2.8* | 32
Probimane | 80 | 12 | 20.0/19.6 | 18.0 ± 2.6** | 55
Probimane | 100 | 12 | 20.0/20.0 | 15.8 ± 2.6** | 60
Cyclophosphamide | 40 | 12 | 21.0/20.9 | 6.4 ± 2.0** | 84

Table 9. Antitumor activities of anticancer drugs on human tumor LAX-83 using the subrenal capsule assay. Route: ip × 5 daily from the day after surgery. * P < 0.05, ** P < 0.001 (treatment vs vehicle control; t-test). Experiment was completed within 7 days. Tumor volume = 1/2 × width^2 × length.
Compounds | Dosage (mg/kg/d) | No. mice | Body weight (g) | Tumor volume (mm^3) | Inhibition (%)
Control | --- | 16 | 20.9/22.5 | 29.7 ± 3.2 | --
Methotrexate | 1.5 | 12 | 21.2/21.9 | 27.4 ± 3.0 | 7.7
Cisplatin | 1.5 | 12 | 22.8/21.7 | 16.6 ± 2.6** | 44.1
5-fluorouracil | 37.5 | 12 | 21.7/21.4 | 12.8 ± 2.6** | 57.5
Cyclophosphamide | 30.0 | 12 | 21.0/20.9 | 5.8 ± 2.3** | 80.5
Vincristine | 0.3 | 12 | 20.8/20.8 | 7.6 ± 2.2** | 74.4
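The inhibition percentages reported throughout Tables 2-9 follow a standard convention: one minus the ratio of the treated to the control group mean (tumor weight for LLC, tumor volume for LAX-83), times 100. A minimal sketch, our reconstruction rather than the authors' code, checked against Table 4:

```python
# Minimal sketch (our reconstruction, not the authors' code) of how the
# inhibition percentages in Tables 2-9 follow from the group means:
# inhibition (%) = (1 - treated_mean / control_mean) * 100, applied to
# mean tumor weight (LLC) or mean tumor volume (LAX-83).

def inhibition_pct(control_mean: float, treated_mean: float) -> float:
    return (1.0 - treated_mean / control_mean) * 100.0

# Check against Table 4 (control mean tumor weight 2.36 g):
for drug, dose, weight in [("Razoxane", 10, 1.51), ("Probimane", 20, 1.19)]:
    print(f"{drug} {dose} mg/kg: {inhibition_pct(2.36, weight):.1f} % inhibition")
# -> 36.0 % and 49.6 %, matching the reported values.
```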
Discussion Explanations of the anticancer and antimetastatic mechanisms of bisdioxopiperazines remain inconclusive. The anticancer mechanisms of Raz are currently attributed to antiangiogenesis and topoisomerase II inhibition [ 16 ]. Since the antimetastatic activities of Raz and Pro were much stronger than their actions against primary tumor growth, this selective targeting of metastasis ought to be especially useful in clinical cancer treatment. Raz and Pro show typical characteristics of antiangiogenesis agents, which target small tumor nodules. Meanwhile, recent reports on drugs targeting angiogenesis indicate that most anti-vascular drugs have low or even no effects on most cancers when used alone in the clinic, but show synergistic effects in combination with other anticancer drugs [ 17 , 18 ]. Consistent with this theory, our study shows synergistic anticancer actions of Raz and Pro with Ble or Dau. Previous work showed that Pro and Raz could reduce the cardiotoxicity of anthracyclines [ 1 , 9 , 10 ], so we may reasonably deduce that they can also reduce the cytotoxicity of anthracyclines. The data in our study suggest that synergistic effects of Raz with anthracyclines are present, but not as potent as those with Ble. Having tested the antitumor activity of clinically available anticancer drugs (CTX, 5-Fu, MTX, DDP and VCR) against LAX-83, with CTX the best among them, we find that the two bisdioxopiperazines studied in this work show anticancer efficacy broadly similar to that of commonly used drugs. Although the anticancer effects of CTX and VCR are better than those of Pro, the antitumor effects of other commonly used drugs, such as DDP, MTX and 5-Fu, are no better than those of Pro. Since the antitumor effects of MTX and DDP are even lower than those of Pro and Raz, we suggest that the anticancer effects of Pro and Raz lie within the effective range of commonly available anticancer drugs. Another useful property of Pro is that it is the most water-soluble of the bisdioxopiperazines. Most bisdioxopiperazines are poorly water-soluble and are given orally in the clinic. Although oral administration is easy for patients, bioavailability varies from patient to patient. For patients with poor oral absorption of bisdioxopiperazines, Pro can be injected iv to maintain stable drug levels. Our previous work showed that Pro accumulates strongly in tumor tissue while Pro levels in other tissues decrease rapidly [ 19 ]. Presently, a stereoisomer of Raz (dexrazoxane, ICRF-187), a water-soluble form of Raz, is being reinvestigated and has aroused the interest of clinical oncologists. Phase III clinical studies are currently underway in the US. More importantly, ICRF-187 has been licensed in 28 countries on 4 continents. This work shows a noticeable inhibition of lung cancers by Pro and Raz and suggests possible clinical usage of Raz and Pro against lung cancer. Conclusions The advantages of bisdioxopiperazines in the clinical treatment of lung cancers are as follows: (i) Pro and Raz can inhibit the growth of lung cancers, with and without the help of other anticancer drugs, such as Dau and Ble; (ii) like Raz, Pro strongly inhibits spontaneous pulmonary metastasis of LLC; (iii) since Pro can inhibit CaM [ 11 ], a calcium-activated protein associated with MDR and metastatic phenotypes, synergistic anticancer effects of Pro and Raz can be expected in combination with other anti-cancer drugs, such as Dau or Ble.
New concepts of the relationship between tumor metastasis and MDR in cancers have recently been put forward [ 20 ], and bisdioxopiperazines can inhibit both tumor metastasis and MDR. As a counterpart of Raz, Pro might be of interest and have chemotherapeutic potential in the clinic. Methods Drugs and animals Cyclophosphamide (CTX), daunorubicin (Dau), bleomycin (Ble), 5-fluorouracil (5-Fu), vincristine (VCR), cisplatin (DDP) and methotrexate (MTX) were purchased from Shanghai Pharmaceutical Company. Pro and Raz were prepared by the Department of Medicinal Chemistry, Shanghai Institute of Materia Medica, Chinese Academy of Sciences. C57BL/6J and Kun-Min strain mice were purchased from the Shanghai Center of Laboratory Animal Breeding, Chinese Academy of Sciences. Nude mice (Swiss-DF), obtained from Roswell Park Memorial Institute, USA, were bred in the Shanghai Institute of Materia Medica, Chinese Academy of Sciences, under specific pathogen-free conditions. The human pulmonary adenocarcinoma xenograft LAX-83 [ 21 ] and Lewis lung carcinoma (LLC) were serially transplanted in this laboratory. All animal experiments were conducted in compliance with the Guidelines for the Care and Use of Research Animals, NIH, established by Washington University's Animal Studies Committee. Bouin's solution consists of water saturated with picric acid: formaldehyde: glacial acetic acid (75: 20: 5, v/v/v). Lethal dosage determination in mice Kun-Min strain mice (equal numbers of male and female) were injected ip with Pro or Raz daily for 10 successive days. Deaths were counted over 1 month. Lethal dosages were calculated by probit analysis. Antitumor and antimetastatic studies of LLC C57BL/6J mice were implanted sc with LLC (2 × 10^6 cells) from donor mice. The mice were injected intraperitoneally with drugs daily or every two days, for 7 injections in total. On day 11 or day 20, mice were sacrificed; locally growing tumors were separated from skin and muscle and weighed; and the lungs of host mice were placed in Bouin's solution for 24 h and then submerged in 95 % alcohol for 24 h. Finally, the numbers of extruding metastatic foci in the lungs were counted. Antitumor actions of different schedules and combinations with different drugs C57BL/6J mice were implanted sc with LLC (2 × 10^6 cells) from donor mice. Mice were injected intraperitoneally with drugs on days 1, 5 and 9, as either a single injection or 3 injections at 3-hour intervals. Tumors were separated and weighed on day 11. Antitumor activity study of human tumors Nude mice were inoculated with LAX-83 under the renal capsule (SRC method) [ 22 ] and were injected intraperitoneally with drugs daily for the five days after inoculation. A week after transplantation, the nude mice were sacrificed and their kidneys taken out for measurement of tumor size using a stereomicroscope. Tumor volume was calculated as 1/2(ab^2), where a and b are the major and minor axes of the tumor. Kidneys with tumors were paraffin-embedded, sliced and hematoxylin-stained, and the tumor tissues were then examined under a light microscope. Statistical analysis Student's t-test was used to assess the differences between control and drug-treatment groups in the above experiments.
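To make the volume formula and the statistics concrete, here is a minimal sketch (our illustration, not the authors' code; all axis measurements below are invented) that computes subrenal-capsule tumor volumes as V = 1/2 × a × b^2 and compares two groups with Student's t-test:

```python
# Minimal sketch (ours, not the authors' code) of the two quantitative
# steps in the methods above: the tumor volume formula V = (1/2)*a*b^2,
# and a two-sample Student's t-test comparing control and treatment
# groups. The per-mouse measurements below are invented.
from scipy import stats

def tumor_volume(major_axis_mm: float, minor_axis_mm: float) -> float:
    """V = 1/2 * a * b^2, with a = major and b = minor axis (mm)."""
    return 0.5 * major_axis_mm * minor_axis_mm ** 2

# Hypothetical per-mouse axis measurements (mm) for two groups.
control = [tumor_volume(a, b) for a, b in [(5.1, 3.9), (4.8, 4.1), (5.4, 3.7)]]
treated = [tumor_volume(a, b) for a, b in [(4.0, 3.0), (3.7, 3.2), (4.2, 2.9)]]

t, p = stats.ttest_ind(control, treated)  # Student's t-test (equal variances)
print(f"t = {t:.2f}, p = {p:.3f}")
```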
List of abbreviations used: Pro, probimane; Raz, razoxane; CaM, calmodulin; LPO, lipoperoxidation; Dau, daunorubicin; Ble, bleomycin; LLC, Lewis lung carcinoma; LAX-83, a lung adenocarcinoma xenograft; ADR, adriamycin. Authors' contributions The experimental design was by Bin Xu and Da-Yong Lu. Experiments (anticancer activity tests) were performed by Da-Yong Lu. The manuscript was written by Da-Yong Lu and Jian Ding. Figure 1 Structural formulas of razoxane and probimane
/Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC545955.xml
15617579
10.1186/1471-2210-4-32
526216
Imbalance in the health workforce
Imbalance in the health workforce is a major concern in both developed and developing countries. It is a complex issue that encompasses a wide range of possible situations. This paper aims to contribute not only to a better understanding of the issues related to imbalance through a critical review of its definition and nature, but also to the development of an analytical framework. The framework emphasizes the number and types of factors affecting health workforce imbalances, and facilitates the development of policy tools and their assessment. Moreover, to facilitate comparisons between health workforce imbalances, a typology of imbalances is proposed that differentiates between profession/specialty imbalances, geographical imbalances, institutional and services imbalances and gender imbalances.
Introduction Imbalance in the health workforce is a major challenge for health policy-makers, since human resources – the different kinds of clinical and non-clinical staff who make each individual and public health intervention happen – are the most important of the health system's inputs [ 1 ]. Imbalance is not a new issue, as nursing shortages were reported in hospitals in the United States of America as early as 1915 [ 2 ]. It remains a major concern to this day, reported in both developed and developing countries and for most of the health care professions. Although imbalance in the health workforce is an important issue for policy-makers, various elements contribute to obscuring policy development. First, many reports of shortages are not borne out by the evidence. Rosenfeld and Moses [ 3 ] show that an overwhelming majority of newspapers, journals and newsletter articles describing the nursing situation in the United States presume the existence of a shortage. They found that even in those areas where concrete evidence of a shortage was not available, the term "nursing shortage" still appeared. Second, the notion of shortage is a relative one: what is considered a nursing shortage in Europe would probably be viewed differently from an African perspective. Finally, imbalances are of different types and their impact on the health care system varies. In consequence, there is a general need to critically review the imbalance issue. The objective of this paper is to contribute to a better understanding of the issues related to imbalance through a critical review of its definition and nature and the development of an analytical framework. Definition There are various approaches to defining imbalances [ 4 ]. From an economic perspective, a skill imbalance (shortage/surplus) occurs when the quantity of a given skill supplied by the workforce and the quantity demanded by employers diverge at the existing market conditions [ 5 ]. Labour market supplies and demands for occupational skills fluctuate continuously, so at times there will be imbalances in the labour market. In other words, a shortage/surplus is the result of a disequilibrium between the demand and supply for labour. In contrast, non-economic definitions are usually normative, i.e. there is a shortage of labour relative to defined norms [ 6 ]. In the case of health personnel, these definitions are based either on a value judgement – for instance, how much care people should receive – or on a professional determination – such as deciding what is the appropriate number of physicians for the general population. Nature One of the key questions regarding imbalances is how long these last: Is the imbalance temporary or permanent? In a competitive labour market, we should expect most imbalances to be resolved over time. Imbalances will tend to disappear faster the greater the reaction speed and also the greater the elasticity of supply (or demand) [ 7 ]. This type of imbalance (shortage or surplus) is defined as dynamic. In contrast, a static imbalance occurs because supply does not increase or decrease; market equilibrium is therefore not achieved. For instance, wage adjustments may respond slowly to shifts in demand or supply as a result of institutional and regulatory arrangements, imperfect market competition (monopoly, monopsony) and wage-control policies. Another example is physicians' education: because of the length of time required to educate physicians, changes in available supply take a long time to react significantly. 
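The dynamic/static distinction can be made concrete with a toy model. In the sketch below (ours; the linear demand and supply curves and all parameters are invented), the wage rises in proportion to excess demand; a positive adjustment speed makes the shortage shrink over time (a dynamic imbalance), while a speed of zero, standing in for rigid wages or slow-reacting supply, leaves the shortage in place (a static imbalance):

```python
# Toy model (our illustration; all parameters invented) of the dynamic
# vs. static distinction: a linear labour market in which the wage
# adjusts in proportion to excess demand. A higher adjustment speed or
# supply elasticity closes the shortage quickly (dynamic imbalance);
# a speed of zero leaves it in place (static imbalance).

def simulate(adjust_speed: float, periods: int = 50, w: float = 1.0) -> float:
    for _ in range(periods):
        demand = 100.0 - 20.0 * w      # hypothetical demand curve
        supply = 40.0 + 20.0 * w       # hypothetical supply curve
        shortage = demand - supply
        w += adjust_speed * shortage   # wage responds to excess demand
    return shortage

print(simulate(adjust_speed=0.01))  # near 0: the shortage resolves over time
print(simulate(adjust_speed=0.0))   # unchanged: a persistent (static) shortage
```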
Lack of information on the state of the various labour markets can also be a factor in the speed of market adjustment. To make proper labour market decisions, households and firms must be informed of the existing market conditions across markets. They must therefore know what wages are paid and the nature and location of job openings and available workers. Moreover, we should also differentiate between qualitative and quantitative imbalance. In a tight labour market, employers might not find the ideal candidate, but will still recruit someone. Under these conditions, the issue is the quality of job candidates rather than the quantity of people willing and able to do the job [ 8 ]. From the employers' perspective, a shortage of workers exists; from the job-market perspective, the existence of a shortage could be questioned because the jobs are filled. One negative hidden impact of a qualitative shortage is the number of positions that are filled with ineffective individuals [ 9 ]. A conceptual framework To better understand the role of factors affecting health workforce imbalances and to facilitate the development of policy tools, a conceptual framework is presented in this section. Introduction Factors affecting health workforce imbalances are numerous and complex, but focusing on crucial elements should permit insight into the issue of health workforce imbalances. The framework is depicted in Fig. 1 and contains six main components: the demand for health labour, the supply of health labour, the health care system, policies, resources and "global" factors. Figure 1 Framework for imbalance of human resources for health Central to this framework are the demand for and supply of health labour. Also included in the framework is the health care system, and in particular, some of its features that are likely to have an impact on health workforce imbalances. Policies constitute another crucial element of the framework. Indeed, both health policies and non-health-oriented policies can have an impact on health workforce imbalances. The framework also incorporates financial, physical and knowledge resources that contribute to shaping the health workforce. Finally, "global" factors such as economic, sociodemographic, political, geographical and cultural factors are included. These elements contribute directly or indirectly to shaping and transforming the entire society and hence the health workforce. The demand for health personnel The first element of the framework to be examined is the demand for health personnel. The demand for health personnel can be considered as a derived demand for health services. Accordingly, we should consider factors determining the demand for health services. Personal characteristics, such as health needs and cultural and sociodemographic characteristics, and economic factors play an important role. It has often been proposed that the planning of human resources for health be based solely on estimates of health needs in the population [ 10 ]. However, relying only on the concept of need is difficult, because it can be defined either broadly or narrowly and accordingly lead to a perception of either systematic shortage or surplus. Health needs are only one of the factors affecting the demand for health personnel. Several studies have attempted to estimate the impact of economic factors on the demand for health care. In particular in the United States, studies have attempted to estimate price and income elasticities of demand for medical services [ 11 - 13 ].
Measurements of price or income elasticities make it possible to evaluate the impact of a change in price or income on the demand for health care. Most studies reported elasticities in the range between 0.0 and -1.0, indicating that consumers tend to be responsive to price changes but that the degree of price sensitivity is not very large compared to that for many other goods and services [ 14 ]. Another element influencing the demand for health care is the value of a patient's time, such as travel time and waiting time. Acton [ 15 ] found that in the United States, elasticity of demand with respect to travel time ranged between -0.6 and -1, meaning that a 10% increase in travel time would induce a reduction of 6%-10% in the demand for health care. Other factors affect the demand for health labour. In particular, specific features of the health care system, as well as policies, resources and environmental factors, have an impact on the demand for health labour. Their respective roles are discussed further below. The supply of human resources for health After reviewing factors affecting the demand for health labour, we shall now turn to those affecting the supply of the health workforce. In particular, we shall consider the following elements: factors affecting the choice of health professional training/education, participation in the health labour market and migration. Education/professional training choice The availability of a renewed health workforce, as well as the type of profession and specialty chosen by individuals, is a major concern for health decision-makers. These issues are of particular relevance, especially since the number of younger people, predominantly women, choosing a nursing career is declining in some countries, and since individuals' choices of professional training/education do not always match the absorptive capacity of the market. From an economic perspective, the decision to undertake professional training/education is considered an investment decision. To emphasize the essential similarities of these investments to other kinds of investments, economists refer to them as investment in human capital [ 16 ]. Since investment decisions usually deliver payoffs over time, we must consider the entire stream of costs and benefits. The expected returns on human capital investments are a higher level of earnings, greater job satisfaction over one's lifetime and a greater appreciation of non-market activities and interests. Based on the human capital approach, the rate of return on education can be estimated. An average rate of return that is high and rising for a given profession will attract more individuals to that profession. On the other hand, a lower and decreasing average rate of return will discourage individuals from choosing that profession. Nowak and Preston [ 17 ], using the human capital approach, found that Australian nurses are poorly paid in comparison with other female professionals. The declining interest in nursing can be partly explained by the expansion, over the last three decades, of career opportunities in traditionally male-dominated occupations that entail a higher rate of return [ 18 ]. The number of young women entering the registered-nurse workforce has declined because many women who would have entered nursing in the past, particularly those with high academic ability, are now entering managerial and professional occupations that used to be traditionally male.
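The rate-of-return logic sketched above can be illustrated with a small calculation. The following sketch is ours, with invented figures; it treats a nursing qualification as a cash-flow stream of training costs followed by a wage premium, and finds the internal rate of return by bisection:

```python
# Minimal sketch (ours; all figures invented) of the human-capital
# calculation: education is an investment whose rate of return can be
# estimated from the stream of costs (tuition plus forgone earnings
# during training) and benefits (the wage premium afterwards).

def npv(cash_flows, rate):
    """Net present value of a yearly cash-flow stream at a discount rate."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical nursing degree: 3 years costing 20 units/year, then
# 30 years of a 5-unit annual wage premium.
flows = [-20.0] * 3 + [5.0] * 30

# The internal rate of return is the discount rate at which NPV = 0;
# bisection suffices because NPV is monotone in the rate here.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if npv(flows, mid) > 0 else (lo, mid)
print(f"internal rate of return ~ {lo:.1%}")
```

A profession whose rate of return rises relative to alternatives attracts entrants, which is exactly the mechanism invoked above to explain the shift of high-ability women away from nursing.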
Besides the human capital approach, the choice of a profession can also be explained by sociopsychological factors. For instance, individuals may choose a profession because it is highly valued by society or because of family tradition. In the health sector, the satisfaction afforded by caring for people and assisting them to improve their health is an important element used by nursing schools to attract new enrollees. In the light of this approach, the decline in the number of individuals choosing nursing as a career might also be explained by the fact that this profession is now less socially valued than before [ 19 , 20 ]. Participation in the labour market The economic theory of the decision to work views the decision as a choice concerning how people spend their time. Individuals face a trade-off between labour and leisure. They decide how much of their time to spend working for pay or participating in leisure activities, the latter being activities that are not work-related. An issue that has drawn a lot of attention recently is the impact of wage increases on labour participation, in particular for nurses. In the short term, higher wages can have at least two effects on the labour supply of current qualified nurses: first, qualified nurses who are working in other occupations may return to nursing activities; second, nurses now in practice may respond by working more hours. In the long run, higher wages in nursing relative to other occupations make nursing an attractive profession and will draw more people into nurse training programmes. In their literature review of the wage elasticity of nursing labour supply, Antonazzo et al. [ 21 ] and Chiha and Link [ 22 ] found that most of the studies indicate a positive relationship, although not a strong one, between wages and labour supply. Accordingly, increases in nursing wages are unlikely to cause significant increases in labour participation. A literature review on the women's workforce undertaken by Killingsworth and Heckman [ 23 ] indicated that in addition to the wage rate, women's participation is responsive to changes in unearned income, spouse's wage and having children (particularly of pre-school age). Another aspect of labour supply decisions, investigated by Philips [ 24 ], concerns the costs associated with entering the nursing labour market (such as costs of child care and housework). The elasticity of participation with respect to changes in working costs was evaluated at -0.67 for all nurses. This suggests that a subsidy leading to a decrease of 10% in these costs would increase the participation of nurses by 6.7%. Moreover, hospitals are also using a variety of strategies to recruit new staff. A survey of hospitals in the United States shows that richer benefits, such as health insurance and vacation time, are the most common incentives used. In addition, hospitals may offer other recruitment and retention benefits, such as tuition reimbursement, flexible hours and signing bonuses based on experience or length of commitment [ 25 ]. Many countries, but particularly developed ones, use such incentives to recruit new staff. Economic factors also play a role in physicians' participation in the labour market, as demonstrated by the impact of cost-containment policies in Canada, where most provincial governments have implemented an assortment of controls on health care expenses. Threshold reductions were introduced, so that fees payable to individual physicians were reduced as billing exceeded an agreed threshold.
As a consequence, physicians who had billed at the threshold level chose to take leaves of absence rather than receive a level of reimbursement they considered inadequate [ 26 ]. When health personnel choose an alternative or additional occupation, this is likely to have consequences on health labour supply. In developing countries, and particularly in Africa, attempts to reform the health care sector have frequently failed to respond to the aspirations of staff concerning remuneration and working conditions. Salaries are often inadequate and may be paid late, and health workers try to solve their financial problems in a variety of ways [ 27 ]. Private practice is only one of the many survival strategies that health personnel use to supplement their income and increase their job satisfaction. Teaching, attending training courses, supervision activities, research, trade and agriculture are some of these alternative strategies [ 28 ]. Labour market exit Parker and Rickam [ 29 ] examined the economic determinants of the labour force withdrawal of registered nurses in the United States, i.e. nurses leaving the profession to pursue a non-nursing occupation and employed nurses withdrawing from the labour force. Their results suggest that a significant number of registered nurses withdraw, at least temporarily, from the labour force. Among the significant elements influencing the withdrawal decision are the wage rate, other family income, presence of children and full-time/part-time work status. Increasing registered nurses' wages and working full-time is expected to reduce the probability of labour force withdrawal, whereas higher education levels, age and other family income increase the probability of labour force withdrawal. The relative importance of wage is also emphasized by studies investigating job satisfaction. There is support in the empirical literature for the existence of job dissatisfaction among nurses, and the link between job dissatisfaction and job exit [ 30 , 31 ]. In the United States the most important factors in nurses' resignation were, in order of importance: workload, staffing, time with patients, flexible scheduling, respect from nursing administration, increasing nursing knowledge, promotion opportunities, work stimulation, salary and decision-making. These studies suggest that salary is just one of the reasons why nurses are quitting. The relative importance of wage is confirmed by Shields and Ward [ 32 ]. Their results suggest that dissatisfaction with promotion and training opportunities has a stronger impact than workload or pay. Migration Migration of health personnel can have a serious impact on the supply of human resources in health, because it may exacerbate health personnel imbalances in "sending" countries. It is suggested that migration is an "individual, spontaneous and voluntary act" that is motivated by the perceived net gain of migrating – that is, the gain will offset the tangible and intangible costs of moving [ 33 ]. Decisions to migrate are often a family strategy to produce a better income and improve survival chances [ 34 ]. The reality for many health workers in developing countries is to be underpaid, poorly motivated and increasingly dissatisfied and sceptical [ 35 ]. The relevance of motivation to migration is self-evident. There can be little doubt that for many health workers an improvement in pay and conditions will act as an incentive to stay in the country. 
Improved pensions, child care, educational opportunities and recognition are also known to be important [ 36 - 38 ]. In Ghana it is estimated that only 191 of the 489 doctors who graduated between 1985 and 1994 were still working in the country in 1997 [ 39 ]. Health system characteristics As the health workforce is part of the health care system, we shall also consider features of the health care system that are likely to have an impact on the demand and supply of health labour. In particular, we shall examine market failures, the diversity of stakeholders, the supply-demand adjustment time lag and hospitals' potential monopsony power. Market failures From an economic perspective, the health care market is characterized by market failures, that is, the assumptions for perfect competition are violated. From a societal perspective, in the presence of market failures such as externalities, imperfect knowledge, asymmetry of information and uncertainty, market mechanisms lead to a non-optimal demand and/or supply of health services. In other words, shortages and surpluses are likely to result from the health care market. Most markets are characterized by market failures, but what is unique to the health services market is the extent of these market failures [ 40 ]. Governments try to correct health care market failures through policy interventions. A classic example of public intervention in the presence of a positive externality is the introduction of a policy of mandatory vaccination. However, implementing such policies is sometimes difficult and may result in only partial correction of the market failures. Stakeholders The health care system is characterized by a wide range of institutional stakeholders involved in shaping human resources for health, all of whom may have different objectives [ 41 , 42 ]. The objectives of a union or professional association do not necessarily coincide, for example, with those of a government ministry, a hospital manager or the central government. Unions/professional associations seek to increase their members' market power, employment and income [ 43 ], whereas the ministry of finance will want greater budgetary balance and will favour measures to limit health care expenditures. In the case of Mozambique, whereas the policy of employing national professionals by cooperation agencies has met with warm support from national cadres, its effect on the health sector is problematic [ 44 ]. The prospect of immediate financial gains puts pressure on qualified professionals to leave their posts within the Mozambique National Health Service to take up management or consultant positions. The substantial investment in their training is therefore producing dubious direct returns to the National Health Service. More seriously perhaps, the presence of donor-paid jobs outside the health sector (as programme coordinators, researchers, etc.) is creating pressure on the Ministry of Health itself, exacerbating the imbalances in the National Health Service and creating incentives for trained Mozambicans to leave the public sector. Time lag Moreover, adjustments between the demand and supply for health personnel may take a long time. In the health care field, the time lag between education and practising may be quite substantial.
To obtain licensure to practise medicine requires lengthy education and training, and the long lag time between a changed student intake and a change in supply has been noted [ 45 ]: supply adjustment for physicians is not immediate, but takes a long time. Hospitals' potential monopsony power A single entity that is the sole purchaser of labour is a monopsony. One example is the potential monopsony power of hospitals in hiring nurses, or of the ministry of health in hiring the health workforce. The amount of labour demanded will influence the price the monopsonist must pay for it. In contrast to the situation in a competitive market, the monopsonist is a price maker, not a price taker. Monopsony results in lower wages and lower employment of nurses compared to a competitive market. A number of studies have tested whether or not hospitals possess monopsony power with respect to nurses, and the results are contradictory. Sullivan [ 46 ] and Staiger et al. [ 47 ] concluded that hospitals have a substantial degree of monopsony power. In contrast, Hirsch and Schumacher [ 48 ] did not find empirical support for the monopsony model. Nurses' wages were found not to be related to hospital density, and to decrease rather than increase with respect to labour market size. Provider power/monopoly In contrast, providers' power may enable them to restrict the supply of human resources for health. Seldon, Jung and Cavazos [ 49 ] suggest that physicians in the United States have market power through such avenues as restricting supply and price-fixing. In France, trade unions are granted an institutional role at establishment level [ 50 ]. In India and Sri Lanka, a clear constraint to support-services contracting was the inability to counter the power of the public service unions in dictating employment terms and conditions [ 51 ]. The varying degree of homogeneity of the different professional groups may also explain their relative success in maintaining a monopoly of practice. In Iceland, one of the factors that contributed to breaking the professional monopoly of pharmacists was division within the profession [ 52 ]. Regulations The type of regulation associated with a profession plays an important role in the supply of its members. Regulation has, by tradition, been achieved through a combination of direct government regulation and, to a large extent, rules adopted by professional associations, whose self-regulatory powers enable them to establish both entry requirements and rules regarding professional conduct [ 53 ]. Such barriers to entry exist in particular for doctors, but also in other health professions, such as dentistry. Some argue that these barriers constitute a means to limit entry into the profession, and hence maintain high incomes. Muzondo and Pazderka [ 54 ] established, for Canadian professional licensing restrictions, a relationship between different variables of self-regulation and higher income. Seldon et al. [ 55 ] suggest that physicians in the United States have market power through such sources as restricting supply and price-fixing. However, the proponents of self-regulation claim that these barriers are a means to ensure quality health care and to protect patients from incompetent providers. In contrast, although most countries have a professional nursing association, nurses tend to have limited power to regulate entry to the profession. This could be associated with the failure of the large diversity of specialist groups in nursing to unite on issues related to professional regulation [ 56 ].
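The monopsony result stated earlier in this section (lower wages and lower employment than in a competitive market) is easy to verify in the standard linear textbook model. The sketch below is our illustration with invented parameters, not anything estimated in the studies cited:

```python
# Stylized textbook comparison (ours; all numbers invented) of the
# monopsony result: with a linear labour supply w = a + b*L and a
# marginal revenue product MRP = c - d*L, a monopsonist equates MRP
# with the *marginal* cost of labour (a + 2b*L) rather than the wage,
# hiring fewer nurses at a lower wage than a competitive market would.

a, b = 10.0, 0.5   # inverse labour supply: w = a + b*L
c, d = 40.0, 0.5   # marginal revenue product: MRP = c - d*L

L_comp = (c - a) / (b + d)        # competitive market: MRP = w
w_comp = a + b * L_comp

L_mono = (c - a) / (2 * b + d)    # monopsony: MRP = a + 2b*L
w_mono = a + b * L_mono

print(f"competitive: L = {L_comp:.1f}, w = {w_comp:.1f}")
print(f"monopsony:   L = {L_mono:.1f}, w = {w_mono:.1f}")  # both lower
```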
Health and non-health policies Health and non-health policies contribute to shaping the health care system and have an influence on the demand and supply of health labour. Health policy can be defined as a formal statement or procedure within institutions (notably government) that defines priorities and the parameters for action in response to health needs, available resources and other political pressures. Health policy is often enacted through legislation or other forms of rule-making that create regulations and incentives for providing health services and programmes and access to them. For instance, the decision to introduce or expand health insurance coverage is likely to have an impact on the demand for health services. This is illustrated by the RAND Health Insurance Experiment, a controlled experiment that increased knowledge about the effect of different insurance copayments on the use of medical services. Insurance copayments ranged from zero to 95%. The RAND study concluded that as the co-insurance rose, overall use and expenditure fell for adults and children combined [ 57 ]. Non-health policies reflect state interventions in areas such as employment, education and regional development that contribute to shaping the health workforce. These policies do not directly address health issues, but have an indirect impact on them. In France, a controversial new regulation was introduced that reduced the workweek to a maximum of 35 hours in an attempt both to create hundreds of thousands of new jobs and to achieve greater flexibility in the labour force. Unions responded by demanding the creation of more posts in public hospitals. Financial, physical and knowledge resources Financial, physical and knowledge resources are crucial to any type of health care workforce. The level of resources attributed to the health care system, and how these resources are used, will have a significant impact on health workforce issues. In terms of financial resources, human resources account for a high proportion of national budgets assigned to the health sector [ 58 ]. Health expenditure claims an increasingly important share of the gross domestic product and, in most countries, wage costs (salaries, bonuses and other payments) are estimated to account for between 65% and 80% of recurrent health system expenditure [ 59 , 60 ]. Physical resources include human resources within the health sector and other sectors; buildings and engineering services such as sanitation, water and heating systems for community use and for the use of medical care institutions; and equipment and supplies. Finally, the health workforce is also constrained by its human capital, which can be associated with the qualification and education of the health workforce. Education of the health workforce is the systematic instruction, schooling or training given in preparation for work. Global factors Economic, sociodemographic, cultural and geographical factors contribute to shaping and transforming society and hence have a direct or indirect impact on health workforce issues. From an economic perspective, for instance, there is evidence of a correlation between the level of economic development of a country and its level of human resources for health.
Countries with higher GDP per capita are said to spend more on health care than countries with lower income, as demonstrated by cross-sectional studies, [ 61 ] and hence would also tend to have larger health workforces. Moreover, both the demand and supply are likely to be affected by sociodemographic elements such as the age distribution of the population. On the demand side, the ageing of the population is giving rise to an increase in the demand for health services and health personnel, especially nurses for home care. On the supply side, the ageing of the health workforce, and in particular of nurses, has serious implications for the future of the nursing labour market. For example, the Institute of Medicine noted that older registered nurses have a reduced capacity to perform certain tasks [ 62 ]. It was found that between 1983 and 1998 the average age of practising registered nurses increased by more than four years, from 37.4 to 41.9 years [ 63 ]. In contrast, the average age of the United States workforce as a whole increased by less than two years during the same period. Furthermore, the proportion of the registered-nurse workforce younger than 30 years decreased from 30.3% to 12.1% during this period. Geographical and cultural factors also play a role in determining the demand and supply of human resources. Geographical characteristics affect the organization of health services delivery. For instance, a country with many islands or with isolated population groups will face particular challenges in terms of health workforce issues. Similarly, significant climatic changes are likely to give rise to changes in health needs, which in turn will call for changes in health services and in the health workforce. Finally, both cultural and political values also affect the demand for and supply of human resources for health. Health workforce imbalances: a typology This section considers a typology of imbalances, and differentiates between the following: • Profession/specialty imbalances: Under this category, we consider imbalance in the various health professions, such as doctors or nurses, as well as shortages within a profession, e.g. shortage of one type of specialists. • Geographical imbalances: These are disparities between urban and rural regions and poor and rich regions. • Institutional and services imbalances: These are differences in health workforce supply between health care facilities, as well as between services. • Gender imbalances: These are disparities in female/male representation in the health workforce. Profession/specialty imbalances Imbalances have been reported for almost all health professions, and in particular for nurses. The United States General Accounting Office [ 64 ] reports a nursing shortage. However, the nursing shortage has not been institution-wide but is concentrated in specialty care areas, particularly intensive care units and operating rooms [ 65 ]. The shortage of registered nurses in intensive care units is explained in part by the sharp decline in the number of younger registered nurses, whom intensive care units have historically attracted. Shortages in operating rooms probably reflect that many registered nurses who work in this setting are reaching the age when they are beginning to reduce their hours worked or are retiring altogether. Major variations occur in the number of health care workers per capita population and in the skill mix employed across countries, as depicted in Fig. 2 . 
The nurse/doctor ratio varies widely from one country to another, as shown in Fig. 2. The nurse/doctor skill mix is important and may have consequences for the respective tasks of nurses and doctors [ 66 ]. It is also interesting to note that these variations are taking place among countries with relatively similar levels of economic development. Figure 2 Distribution of physicians, nurses, midwives, dentists and pharmacists in selected countries. WHO data, 2000. Geographical imbalances Virtually all countries suffer from a geographical maldistribution of human resources for health, and the primary area of concern is usually the physician workforce [ 67 ]. In both industrialized and developing countries, urban areas almost invariably have a substantially higher concentration of physicians than rural areas. Understandably, most health care professionals prefer to settle in urban areas, which offer opportunities for professional development as well as education and other amenities for themselves and their families. But it is in the rural and remote areas, especially in the developing countries, that the most severe public health problems are found. The geographical maldistribution of doctors has been the object of particular attention. In general there is a higher concentration of general practitioners in the inner suburbs of the metropolitan areas. According to the Australian Medical Workforce Advisory Committee [ 68 ], the reasons for the high concentration of general practitioners in inner city areas are: • historical • lifestyle-related: access to amenities • spouse/husband-related: greater employment opportunities • child-related: better access to secondary and tertiary education services • professional, family and social ties and professional ambitions. The geographical distribution of health care personnel is an important issue in many countries. Managua, the capital of Nicaragua, contains one-fifth of the country's population but around half of the available health personnel [ 69 ]. In Bangladesh, a disproportionate share of the doctors (35%) and nurses (30%) in health services are located in four metropolitan districts where only 14.5% of the population lives [ 70 ]. This concentration pattern is characteristic of developing countries. In Indonesia the geographical distribution of physicians is a particular concern, since Indonesia's vast size and difficult geography present a tremendous challenge to health service delivery [ 71 ]. It is difficult to place doctors in remote islands or mountain or forest locations with few amenities, no opportunities for private practice, and poor communications with the rest of the country. To improve the geographical distribution of physicians, governments have often used combinations of compulsory service and incentives. So far, there is virtually no country in the world that has solved the problem of a rural/urban imbalance of the physician workforce [ 67 ]. This does not necessarily mean that policies and programmes designed to reduce the imbalance have had no effect. For example, Thailand has successfully begun to stem the migration of health professionals from rural to urban areas and from public to private facilities with a range of strong financial incentives [ 72 ].
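One simple way to quantify the geographical imbalances just described is a concentration ratio: the share of health workers located in an area divided by that area's share of the population, with 1.0 meaning a proportionate distribution. A minimal sketch (ours), using the Managua and Bangladesh figures quoted above:

```python
# Minimal sketch (ours) of a concentration ratio for geographical
# imbalance: the share of health workers in an area divided by its
# share of the population (1.0 = proportionate distribution).

def concentration_ratio(worker_share: float, population_share: float) -> float:
    return worker_share / population_share

# Figures quoted in the text:
print(concentration_ratio(0.35, 0.145))  # doctors in Bangladesh's 4 metro districts, ~2.4
print(concentration_ratio(0.50, 0.20))   # health personnel in Managua, ~2.5
```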
Institutional and services imbalances Institutional imbalances occur when some health care facilities have too many staff because of prestige, working conditions, ability to generate additional income, or other situation-specific factors, while others are understaffed [ 73 ]. Magnet hospitals, for example, are characterized by adequate to excellent staffing, low turnover, a rich nursing skill mix and greater job satisfaction, among other factors, even in the face of a general health personnel shortage [ 74 ]. Imbalance between the types of health services provided may also arise. In particular, we can consider the issue of curative versus preventive care. In effect, it has been estimated that most diseases (80%) and accidents are preventable through known methodologies, yet at present there is an imbalance in the funding of medical research, with only 1%-2% going to prevention and 98%-99% spent on curative approaches [ 75 ]. This imbalance in funding raises the question of a health workforce imbalance between preventive and curative care. Gender imbalances In many countries, women still tend to concentrate in the lower-status health occupations and to be a minority among more highly trained professionals and managers. In Bangladesh, the distribution by gender of the health workforce shows that women account for little more than one-fifth of the health services workforce [ 76 ]. The distribution of women by occupational category is biased in favour of nurses. Women are very poorly represented in other categories, such as dentists, medical assistants, pharmacists, managers/trainers and doctors. The underrepresentation of women in managerial and decision-making positions may lead to less attention to and poorer understanding of the problems specific to women and the particularities of their utilization patterns [ 77 ]. Female general practitioners have been shown to practise differently from males, managing different types of medical conditions, with some differences due to patient mix and patient selectivity, and others inherent in the sex of the physician. In some more traditional areas, some women will not seek care for themselves or even for their children because they do not have access to a female provider [ 76 ]. Discussion This framework can be used to assess policy reforms and their impact on health workforce imbalances; it also provides a common framework for cross-country comparisons. This framework emphasizes the number and type of factors affecting health workforce imbalances, illustrating the complexity of this issue. From a policy perspective, it is particularly interesting to identify factors that policy-makers can influence in order to remedy imbalance problems. Various monetary and non-monetary incentives are used to influence the supply and/or demand for the health workforce. Subsidies, grants and scholarships, for example, can be used to attract more nursing students, whereas wage increases, additional benefits and flexible working hours are commonly used to attract or retain the health workforce. The numerous factors and actors involved in the health workforce imbalance issue call for a coherent health workforce vision and policy. In that context, health planning plays an important role, since it contributes to shaping the health care system. Moreover, since from a societal perspective market mechanisms alone do not allow an adequate demand/supply of health personnel to be reached, public interventions such as human resources planning are a means to correct for market failures. Health planning involves a time horizon.
Forecasting the future number of health personnel needed and developing policies to meet such figures are common to any health care system. Physicians are the profession for which the most planning effort has been expended to achieve a workforce of appropriate size. Countries' desire to meet population health needs and to avoid the social welfare losses resulting from a shortage or an oversupply explains, to a large extent, the importance attributed to planning in the context of public health policies. The policy implications of forecasting either a shortage or a surplus of health care personnel are different, and hence attempts at projections must be rigorous. For instance, referring to previous studies predicting significant surpluses, Cooper [78] notes that such large surpluses have not occurred so far, because of a decrease in physician work effort; factors such as age, sex and lifestyle contributed to this evolution. As a result of forecasted physician surpluses, various policy recommendations have been formulated. The United States Institute of Medicine [79] published a report recommending, among other things, that there be no new medical schools, that existing schools not increase their class size and that the number of first-year residency positions be reduced. The Pew Health Professions Commission [80] issued a report recommending more severe steps, such as the closing of some medical schools and tightening the visa process for international medical graduates. This framework also captures the different types of imbalances. This is important since the choice of a policy will also depend on the type of imbalance. Significant disparities in human resources for health between health occupations, regions, genders or health services are recognized as classic problems of imbalance. However, the question of a public/private imbalance is more debatable. On the one hand, we can argue that for equity and access, a health care system should have a strong public component. On the other hand, we can imagine a private-sector-oriented health care system with mechanisms to ensure access for the poor. Conclusion In an attempt to contribute to a better understanding of imbalances in the health workforce, this paper has discussed a framework for human resources for health and proposed a typology of imbalances. Although the term "imbalance" is commonly used with respect to the health workforce, it is clear that imbalance in the health workforce encompasses a wide range of possible situations and is a complex issue. The use of a framework should facilitate the development of policy tools and their assessment. Competing interests None declared. Authors' contributions All authors participated in writing the original text and read and approved the final manuscript.
/Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC526216.xml
15377382
10.1186/1478-4491-2-13
517949
Comparison of the NEI-VFQ and OSDI questionnaires in patients with Sjögren's syndrome-related dry eye
Background To examine the associations between vision-targeted health-related quality of life (VT-HRQ) and ocular surface parameters in patients with Sjögren's syndrome, a systemic autoimmune disease characterized by dry eye and dry mouth. Methods Forty-two patients fulfilling European/American diagnostic criteria for Sjögren's syndrome underwent Schirmer testing without anesthesia, ocular surface vital dye staining, and measurement of tear film breakup time (TBUT). Subjects were administered the Ocular Surface Disease Index (OSDI) and the 25-item National Eye Institute Vision Functioning Questionnaire (NEI-VFQ). Main outcome measures included ocular surface parameters; OSDI subscales describing ocular discomfort (OSDI-symptoms), vision-related function (OSDI-function), and environmental triggers; and NEI-VFQ subscales. Results Participants (aged 31–81 y; 95% female) all had moderate to severe dry eye. Associations of OSDI subscales with the ocular parameters were modest (Spearman ρ < 0.22) and not statistically significant. Associations of NEI-VFQ subscales with the ocular parameters reached borderline significance for the near vision subscale with TBUT (ρ = 0.32, p = .05) and for the distance vision subscale with van Bijsterveld score (ρ = 0.33, p = .04). The strongest associations between the two questionnaires were for ocular pain and mental function with OSDI-symptoms (ρ = 0.60 and 0.45, respectively), and for general vision, ocular pain, mental function, role function, driving, and the overall score with OSDI-function (ρ = 0.60, 0.50, 0.61, 0.64, 0.57, and 0.67, respectively). Conclusions Associations between conventional objective measures of dry eye and VT-HRQ were modest. The generic NEI-VFQ was similar to the disease-specific OSDI in its ability to measure the impact of Sjögren's syndrome-related dry eye on VT-HRQ.
Background Dry eye is a common disorder of the ocular surface and tear film and is estimated to affect from 2% to over 15% of persons in surveyed populations, depending on the definition used [1-6]. Symptoms of dry eye are a major reason to seek ophthalmic care: a study by Nelson and co-workers found that 1.3% of Medicare patients had a primary diagnosis of keratoconjunctivitis sicca or dry eye [7]. Dry eye can range from mild to severe disease; although the majority of patients with dry eye experience ocular discomfort without serious vision-threatening sequelae, severe dry eye can compromise corneal integrity by causing epithelial defects, stromal infiltration, and ulceration, and can result in visually significant scarring [8]. Moderate to severe dry eye disease can adversely affect performance of visually demanding tasks due to pain and impaired vision [9]. In addition, corneal surface irregularity due to epithelial desiccation, quantified by using corneal topography, can decrease visual acuity [10]. Patient-reported measurements used to evaluate the specific impact of eye disease and vision on symptoms (discomfort), functioning (the ability to carry out activities in daily life), and perceptions (concern about one's health) are referred to as vision-targeted health-related quality of life (VT-HRQ) instruments. Valid and reliable measurements of VT-HRQ have become essential to the assessment of disease status and treatment effectiveness in ocular disease [11]. There are two general categories of VT-HRQ instruments: generic, which are designed to be used for a broad spectrum of visual disorders and ocular disease; and disease-specific, which are tailored toward particular aspects of a specific ocular disorder. In general, disease-specific instruments tend to be more sensitive than generic ones in detecting VT-HRQ impairments [12]; however, generic instruments allow comparisons across more diverse populations and diseases [13]. In addition, generic instruments may be able to capture additional aspects of systemic disease, related to the ocular disorder in question, providing a broader characterization of health-related quality of life [14, 15]. There is therefore no clear-cut basis in a given study or population for choosing a generic versus a disease-specific measure: if possible, both should be utilized to determine whether one or the other is more consistent with clinical indicators, or if one appears to obtain additional, relevant information on patient status [16]. However, weak-to-moderate associations between clinical indicators and quality-of-life measures may indicate that the VT-HRQ measure is capturing elements of disease above and beyond those that can be measured clinically (for example, visual acuity may be good but a patient may have problems with functioning related to contrast sensitivity or glare disability). Again, depending on the characterization of the disease desired and the goal of the study, a researcher might choose an instrument that either is or is not strongly correlated with clinical signs. The measurement of the impact of dry eye on a patient's daily life, particularly symptoms of discomfort, is a critical aspect of characterizing the disease [17].
Despite the fact that most studies have found weak or no correlations between symptoms and signs of dry eye [18-20], symptoms are often the motivation for seeking eye care and are therefore a critical outcome measure when assessing treatment effect [7]; hence they are increasingly used as a surrogate for ocular surface disease in many epidemiologic studies. Indeed, recent studies have focused on developing more robust ways of measuring patient-reported symptoms of dry eye [21-23]. The Ocular Surface Disease Index (OSDI)© [24] was developed to quantify the specific impact of dry eye on VT-HRQ. Sjögren's syndrome is an autoimmune systemic disease characterized by dry mouth and dry eye signs and symptoms [25, 26]. Its manifestations include fatigue, arthritis, neuropathy, and pulmonary and renal disease. Histopathologic evidence of salivary gland inflammation and the presence of the serum autoantibodies SSA or SSB are important diagnostic features of the disease [27]. Sjögren's syndrome has been stated to be the second most common autoimmune disease, ranking between rheumatoid arthritis and systemic lupus erythematosus [27]. In the U.S., it is estimated that between 1 and 4 million persons (approximately 1–2 in 200) have Sjögren's syndrome [28]. Prevalence estimates for other countries range from 0.3 to 4.8% [29]. Female gender and older age are known risk factors for Sjögren's syndrome [30]. A wide range of studies have assessed the ocular manifestations of Sjögren's syndrome [31-33]; however, assessment of symptoms and quality of life has been limited and, in most cases, generic measures of well-being, psychological distress, and fatigue without ocular dimensions have been employed [34-40]. Further, while there are many published studies of VT-HRQ in mild to moderate dry eye, there are few publications on VT-HRQ in Sjögren's syndrome, which is characterized by dry eye causing significant ocular irritation as well as systemic disease factors that could have their own additional significant impact on VT-HRQ. Our purpose in this study was to examine VT-HRQ in patients with primary Sjögren's syndrome, using a generic and a dry-eye-disease-specific instrument. We examined the associations of ocular surface parameters with the VT-HRQ scores, hypothesizing that the disease-specific instrument would be more closely related than the generic one to the clinical markers of disease. We also examined the association of the generic and disease-specific VT-HRQ scores with each other. Methods The study protocol was approved by the National Eye Institute Institutional Review Board. All patients provided informed consent prior to examination. Consecutive patients with diagnosed primary Sjögren's syndrome were recruited from the NIH Clinical Center, Bethesda, MD. The diagnosis of primary Sjögren's syndrome was based on the European-American criteria, which require at least four of the following six features: signs and symptoms of dry eye and of dry mouth, histopathologic evidence of inflammation on minor salivary gland biopsy, and positive anti-Ro or anti-La antibodies. Before the clinical examination, a trained interviewer administered two questionnaires (described further below) to measure VT-HRQ to each patient. The subsequent clinical examination included a comprehensive anterior segment evaluation, including slit lamp biomicroscopy, evaluation of lid margin thickness and hyperemia, conjunctival erythema, chemosis, tear film debris and mucus, and extent of meibomian gland plugging.
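As a toy illustration of the "at least four of six features" rule just described, the criterion can be expressed as a simple count over boolean findings. This is a deliberately simplified Python sketch: the item names are hypothetical labels chosen for readability, and the published European-American criteria attach operational definitions and additional qualifications to each item that are not modeled here.

```python
# Toy sketch of the European-American "at least four of six features" rule.
# Item names are illustrative placeholders, not the official wording; the
# published criteria define each item operationally (e.g., Schirmer <= 5 mm
# contributes to the ocular-signs item) and add qualifications not shown here.
CRITERIA_ITEMS = (
    "ocular_symptoms", "oral_symptoms", "ocular_signs",
    "histopathology", "salivary_gland_involvement", "autoantibodies",
)

def meets_four_of_six(findings: dict) -> bool:
    """Return True if at least 4 of the 6 criteria items are positive."""
    positives = sum(bool(findings.get(item)) for item in CRITERIA_ITEMS)
    return positives >= 4

# Example: a patient positive on exactly four items satisfies the rule.
patient = {"ocular_symptoms": True, "oral_symptoms": True,
           "ocular_signs": True, "autoantibodies": True}
assert meets_four_of_six(patient)
```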
Tests of tear function and ocular surface status were performed as described below. The OSDI [24] (provided by Allergan, Inc., Irvine, CA) was used to quantify the specific impact of dry eye on VT-HRQ. This disease-specific questionnaire includes three subscales: ocular discomfort (OSDI-symptoms), which includes symptoms such as gritty or painful eyes; functioning (OSDI-function), which measures limitation in performance of common activities such as reading and working on a computer; and environmental triggers (OSDI-triggers), which measures the impact of environmental triggers, such as wind or drafts, on dry eye symptoms. The questions are asked with reference to a one-week recall period. Possible responses refer to the frequency of the disturbance: none of the time, some of the time, half of the time, most of the time, or all of the time. Responses to the OSDI were scored using the methods described by the authors [24]. Subscale scores were computed for OSDI-symptoms, OSDI-function, and OSDI-triggers, as well as an overall averaged score. OSDI subscale scores can range from 0 to 100, with higher scores indicating more problems or symptoms. However, we subtracted the OSDI overall and subscale scores from 100, so that lower scores would indicate more problems or symptoms. The 25-item NEI Visual Function Questionnaire (NEI-VFQ) [41, 42] is a non-disease-specific (i.e., "generic") instrument designed to measure the impact of ocular disorders on VT-HRQ. Depending on the item, responses to the NEI-VFQ pertain to either the frequency or the severity of a symptom or functioning problem. A recall period is not specified in the questionnaire. Responses to the NEI-VFQ were scored using the methods described by the authors [43]. Subscale scores for general vision, ocular pain, near vision, distance vision, social functioning, mental functioning, role functioning, dependency, driving, color vision, and peripheral vision, as well as an overall score, were computed. The NEI-VFQ scores can range from 0–100, with lower scores indicating more problems or symptoms. Schirmer tests of tear production without and with anesthesia were performed by inserting a Schirmer tear test sterile strip (35 mm, Alcon Laboratories, Inc, Fort Worth, TX) into the inferior fornix, at the junction of the middle and lateral third of the lower eyelid margin, for 5 minutes with the eyes closed. The extent of wetting was measured by referring to the ruler provided by the manufacturer on the envelope containing the strips. Possible scores range from 0 to 35 mm, with lower scores indicating greater abnormality in tear production. This test was repeated after instillation of a topical anesthetic, 0.5% proparacaine [44]. A Schirmer without anesthesia score of ≤ 5 mm in at least one eye is one required element of dry eye, as defined by the European-American Sjögren's syndrome diagnostic criteria [45]. The assessment of ocular surface damage was performed by a cornea specialist using vital dye staining with 2% unpreserved sodium fluorescein and then 5% lissamine green dye. The cornea and the temporal and nasal regions of the conjunctiva were scored individually from 0–5 for fluorescein and 0–5 for lissamine green using the Oxford grading scheme [46]. The Oxford score was derived by adding the scores for corneal fluorescein and nasal plus temporal conjunctival lissamine green staining. Total Oxford score could range from 0–15.
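To make the questionnaire scoring concrete, here is a minimal sketch of OSDI-style scoring. It assumes the commonly published OSDI formula (the sum of 0–4 item responses multiplied by 25 and divided by the number of items answered) and reproduces this study's subtract-from-100 convention; the function names and example responses are illustrative, not taken from the instrument.

```python
from typing import Optional, Sequence

def osdi_subscale(responses: Sequence[Optional[int]]) -> Optional[float]:
    """OSDI-style 0-100 score from 0-4 item responses (None = unanswered).

    Assumes the widely published formula (sum of answered items * 25) /
    (number answered), which maps all-0 responses to 0 and all-4 to 100.
    """
    answered = [r for r in responses if r is not None]
    if not answered:
        return None  # a subscale cannot be scored with no answered items
    return sum(answered) * 25.0 / len(answered)

def reverse_for_reporting(score: float) -> float:
    """This study's convention: subtract from 100 so lower = more problems."""
    return 100.0 - score

# Example: items answered (3, 4, 2, 3) with one skipped -> raw score 75.0,
# reported here as 100 - 75 = 25 (lower values indicating worse symptoms).
raw = osdi_subscale([3, 4, 2, 3, None])
print(raw, reverse_for_reporting(raw))
```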
The van Bijsterveld score (VB) [47] was assessed using lissamine green staining of the cornea (0–3) and the nasal and temporal conjunctiva (0–3 each). Total VB score could range from 0–9. For all staining tests, higher scores indicate worse ocular surface damage. Tear film stability was assessed using fluorescein tear film breakup time (TBUT). Five microliters of 2% sodium fluorescein was instilled into the inferior fornix and the patient was asked to blink several times. Using the cobalt blue filter and slit lamp biomicroscopy, the time to the first area of tear film breakup after a complete blink was determined. If the TBUT was less than 10 seconds, the test was repeated for a total of 3 values and the average was calculated. For analysis, for each individual, the maximum (worse) score of the two eyes was used for the Oxford score and VB, and the minimum (worse) score of the two eyes was used for Schirmer with and without anesthesia and for TBUT. TBUT values greater than or equal to 10 seconds [48] were coded as 10 (normal) and < 10 seconds was defined as abnormal. A Schirmer without anesthesia result of ≤ 5 mm or a VB score ≥ 4 was used as objective evidence of dry eye, following the European/American criteria for the diagnosis of dry eye in Sjögren's syndrome [49]. Hypotheses of specific associations were formulated based on the areas and domains assessed by the two VT-HRQ instruments. Scatterplots and Spearman's correlation coefficient (ρ) [50] were used to examine associations between pairs of variables (a code sketch of this aggregation and correlation step is given below). Multiple linear regression [51] was used to assess the strength of association between pairs of variables while adjusting for confounders (e.g., age). Results Characteristics of participants A total of 42 patients, 40 female and 2 male, were included in this study. The average age was 55 years (range, 31–81 y). Most (81%) were of European descent. Visual acuity in the better eye was 20/20 or better for 68% of the patients; the remainder had 20/25 or better in the better eye, except for one patient who was 20/32 in both eyes. Ocular examination (Table 1) showed that, on average, the participants suffered from moderate to severe dry eye: mean Oxford score was 7.2 and mean VB score was 5.3. The average Schirmer without anesthesia score was 4.8 mm, with nearly all (79%) having scores less than 10 mm and the majority (59%) having scores less than 5 mm. Mean TBUT was 2.9 seconds, with nearly all (87%) having values less than 5 seconds.

Table 1. Characteristics of participants (n = 42)

                                        Mean (sd) [range]      N (%)
  Age (y)                               54.9 (12.7) [31–81]
  Ethnicity
    European-derived                                           34 (81%)
    African-derived                                            3 (7%)
    Other                                                      5 (12%)
  Gender
    Female                                                     40 (95%)
    Male                                                       2 (5%)
  Visual acuity*
    20/20+ OU                                                  18 (44%)
    20/20+, better eye                                         10 (24%)
    20/25+, better eye                                         12 (29%)
    <20/25, better eye                                         1 (2%)
  Vital dye staining
    Oxford score                        7.2 (3.4) [1–14]
      5+                                                       34 (81%)
    van Bijsterveld score**             5.3 (2.7) [0–9]
      4+                                                       28 (74%)
  Tear production
    Tear film break-up time (s)**       2.9 (1.7) [1–8]
      < 5 sec                                                  33 (87%)
    Schirmer without anesthesia (mm)    4.9 (5.4) [0–20]
      0–5                                                      25 (60%)
      5–<10                                                    8 (19%)
      10+                                                      9 (21%)
  Meibomian gland disease**
    None                                                       10 (26%)
    1                                                          8 (21%)
    2+                                                         20 (53%)
  European-American dry eye criteria met                       37 (90%)

  *One person had missing visual acuity information; **four persons had missing information for some components of the clinical examination.

Association of OSDI© with ocular surface parameters OSDI scores (all subtracted from 100) indicated moderate problems with symptoms, functioning, and adverse environmental conditions.
Mean OSDI-symptoms score was 62.5, mean OSDI-function score was 78.2, and mean OSDI-triggers score was 60.2. However, some patients had no problems in these areas: 12% reported no problems with irritation symptoms, 21% reported no problems with functioning, and 24% had no problems with environmental triggers. Associations of the OSDI subscale and overall scores with ocular surface parameters (Oxford score, VB, TBUT, and Schirmer score with and without anesthesia) are shown in Table 2. In general, no substantive associations were found, except for visual functioning with TBUT (ρ = 0.22), and none of the observed associations reached statistical significance. Median scores on the OSDI were compared between normal/abnormal categories of ocular surface variables (Schirmer without anesthesia score < 5, 5–<10, versus 10+; TBUT < 5 versus ≥ 5; VB < 4 versus 4+; Oxford score < 5 versus 5+; European-American criteria, yes versus no). Considerable overlap in the distributions between categories was observed for all subscales, with no significant differences in median values (data not shown).

Table 2. Association of OSDI (scores subtracted from 100) with ocular surface parameters (Spearman ρ)

  OSDI subscale            Mean (sd); % floor    Oxford score   van Bijsterveld score   TBUT    Schirmer without anesthesia   Schirmer with anesthesia
  Symptoms                 62.5 (25.7); 12%      0.02           0.16                    -0.10   -0.04                         0.02
  Visual function          78.2 (21.4); 21%      0.15           0.17                    0.22    0.12                          0.05
  Environmental triggers   60.2 (34.3); 24%      -0.01          0.13                    -0.02   0.04                          0.12
  Overall                  70.0 (20.2); 10%      0.07           0.19                    0.06    0.04                          0.08

Association of NEI-VFQ with ocular surface parameters Overall, scores on the NEI-VFQ subscales tended to be high. Average scores for near and distance vision, social and mental functioning, dependency, driving, and peripheral vision were over 80, and a substantial percentage reported no problems at all with any of the items on the subscale: 26% for near vision, 24% for distance vision, 83% for social functioning, 17% for mental functioning, 74% for dependency, 38% for driving, and 79% for peripheral vision. The subscale indicating the most impairment was the ocular pain subscale, with a mean score of 66.7. Associations of the NEI-VFQ subscale and overall scores with ocular surface parameters (Oxford score, VB, TBUT, and Schirmer score with and without anesthesia) are shown in Table 3. Overall, associations were weak to moderate, and none attained statistical significance. General vision showed moderate correlations with Oxford, VB, and TBUT scores (ρ values from 0.20–0.27). Ocular pain showed a moderate correlation with TBUT (ρ = 0.23) and Schirmer with anesthesia score (ρ = 0.22). Near vision was associated with VB (ρ = 0.20) and to a greater extent with TBUT (ρ = 0.32). Distance vision showed moderate associations with Oxford score, TBUT, and Schirmer with anesthesia score (ρ values from 0.21–0.26) and a stronger association with VB (ρ = 0.33). Social functioning was moderately associated with VB (ρ = 0.24). Role functioning was associated with Schirmer scores both with and without anesthesia, more strongly so with the Schirmer with anesthesia score (ρ = 0.31). Dependency was associated with TBUT (ρ = 0.29) and Schirmer with anesthesia score (ρ = 0.21). An anomalous finding was that peripheral vision showed a moderate association with VB score (ρ = 0.29). Mental functioning and driving showed no associations with any of the ocular surface parameters.
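The worse-eye aggregation and correlation step referenced in Methods can be sketched as follows, assuming pandas and SciPy; the per-eye column names (`*_od`, `*_os`) are hypothetical.

```python
import pandas as pd
from scipy.stats import spearmanr

def worse_eye(df: pd.DataFrame) -> pd.DataFrame:
    """Collapse per-eye measures to one per-subject value, keeping the worse eye.

    Higher is worse for the staining scores (take the max); lower is worse
    for Schirmer and TBUT (take the min). TBUT >= 10 s is coded as 10 (normal).
    """
    out = pd.DataFrame(index=df.index)
    out["oxford"] = df[["oxford_od", "oxford_os"]].max(axis=1)
    out["vb"] = df[["vb_od", "vb_os"]].max(axis=1)
    out["schirmer"] = df[["schirmer_od", "schirmer_os"]].min(axis=1)
    out["tbut"] = df[["tbut_od", "tbut_os"]].min(axis=1).clip(upper=10)
    return out

def sign_vs_subscale(signs: pd.DataFrame, subscale: pd.Series) -> pd.Series:
    """Spearman rho of each clinical sign against one questionnaire subscale."""
    rhos = {}
    for col in signs.columns:
        rho, _p = spearmanr(signs[col], subscale, nan_policy="omit")
        rhos[col] = round(rho, 2)
    return pd.Series(rhos)
```

The age adjustment reported alongside these correlations could be layered on with an ordinary least-squares model (e.g., statsmodels' `ols("subscale ~ sign + age", data=...)`), although a rank-based adjustment would stay closer to the Spearman framing used here.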
Median scores on NEI-VFQ scales were compared between normal/abnormal categories of ocular surface variables (Schirmer without anesthesia score < 5, 5–<10, versus 10+; TBUT < 5 versus ≥ 5; VB < 4 versus 4+; Oxford score < 5 versus 5+; European-American criteria, yes versus no). Considerable overlap in the distributions between categories was observed for all subscales, with no significant differences in median values (data not shown), with the exception of the European-American criteria, where, counterintuitively, near vision scores were higher (better) for those with dry eye (83.7) than for those without (45.8; p = .03). However, only 4 patients were in the "no dry eye" category, so this result may be an artifact of the small sample size.

Table 3. Association of NEI-VFQ with ocular surface parameters (Spearman ρ)

  NEI-VFQ subscale    Mean (sd); % floor   Oxford score   van Bijsterveld score   TBUT   Schirmer without anesthesia   Schirmer with anesthesia
  General vision      78.6 (12.8); 14%     0.22           0.20                    0.27   -0.04                         0.08
  Ocular pain         66.7 (22.2); 12%     0.06           0.06                    0.23   -0.06                         0.22
  Near vision         80.4 (19.4); 26%     0.18           0.20                    0.32   -0.02                         0.16
  Distance vision     80.2 (18.4); 24%     0.25           0.33                    0.26   -0.04                         0.21
  Social function     96.1 (11.2); 83%     0.14           0.24                    0.15   -0.07                         -0.09
  Mental function     83.1 (17.5); 17%     0.15           0.18                    0.19   -0.10                         0.17
  Role function       73.2 (25.4); 29%     0.07           -0.02                   0.16   0.22                          0.31
  Dependency          94.4 (10.5); 74%     -0.09          -0.04                   0.29   0.03                          0.21
  Driving             84.9 (15.5); 38%     -0.02          0.04                    0.19   -0.07                         -0.06
  Peripheral vision   91.7 (19.6); 79%     0.11           0.29                    0.15   -0.15                         -0.13
  Overall             83.6 (12.8); 2%      0.19           0.20                    0.24   -0.04                         0.19

Association of OSDI© with NEI-VFQ subscales In general, stronger associations were observed between subscales of the OSDI and NEI-VFQ (Table 4) than between ocular surface parameters and either the OSDI or the NEI-VFQ. Because of the large number of potential comparisons, we restrict discussion to associations that were hypothesized based on clinical plausibility. To test whether the overall (i.e., combined) OSDI and NEI-VFQ scales were related, we examined their linear relationship (Figure 1). Indeed, the association of these scales was strong (ρ = 0.61) and remained statistically significant after age adjustment. We hypothesized that the OSDI-symptoms subscale and the NEI-VFQ ocular pain subscale should show strong association, and in fact this was observed (ρ = 0.60, p < .001 after adjustment for age). A scatterplot of the data is shown in Figure 2. We also hypothesized that the OSDI-triggers measure should be associated with the NEI-VFQ ocular pain subscale. This association was moderate (ρ = 0.46, Figure 3) and did not remain statistically significant after age adjustment. The OSDI-function subscale measures a domain that has theoretical overlap with the NEI-VFQ subscales for general, near, and distance vision, as well as driving, so we hypothesized that these correlations should also be relatively strong. This was true in particular for general vision (ρ = 0.60, Figure 4) and driving (ρ = 0.57, Figure 7), both of which remained highly statistically significant after adjustment for age (p < .001). The correlations of OSDI-function with NEI-VFQ near and distance vision were not as strong (ρ = 0.46 and 0.45, Figures 5 and 6) and were not statistically significant after adjusting for age. Table 4. Associations of OSDI© subscales (subtracted from 100) with NEI-VFQ subscales (Spearman ρ)
  NEI-VFQ subscale    OSDI Symptoms   OSDI Visual function   OSDI Environmental triggers   OSDI Overall
  General vision      0.34            0.60*†                 0.28                          0.51*
  Ocular pain         0.60*†          0.50*                  0.46†                         0.62*
  Near vision         0.08            0.46†                  0.23                          0.33
  Distance vision     0.37            0.45†                  0.27                          0.46
  Social function     0.16            0.26                   0.17                          0.22
  Mental function     0.45*           0.61*                  0.20                          0.53*
  Role function       0.19            0.64*                  0.33                          0.48*
  Dependency          0.17            0.42*                  0.17                          0.33
  Driving             0.28            0.57*†                 0.33                          0.48*
  Peripheral vision   0.18            0.02                   0.04                          0.14
  Overall             0.43            0.67*                  0.37                          0.61*†

  †Association hypothesized at the start of the study; *statistically significant after age adjustment (p < 0.001).

Figure 1 Association between OSDI (scores subtracted from 100) and NEI-VFQ overall scales. Spearman ρ: 0.61*. Figure 2 Association between OSDI ocular discomfort subscale (scores subtracted from 100) and NEI-VFQ ocular pain subscale. Spearman ρ: 0.60*. Figure 3 Association between OSDI environmental triggers subscale (scores subtracted from 100) and NEI-VFQ ocular pain subscale. Spearman ρ: 0.46. Figure 4 Association between OSDI visual function subscale (scores subtracted from 100) and NEI-VFQ general vision subscale. Spearman ρ: 0.61. Figure 7 Association between OSDI visual function subscale (scores subtracted from 100) and NEI-VFQ driving subscale. Spearman ρ: 0.57. Figure 5 Association between OSDI visual function subscale (scores subtracted from 100) and NEI-VFQ near vision subscale. Spearman ρ: 0.46. Figure 6 Association between OSDI visual function subscale (scores subtracted from 100) and NEI-VFQ distance vision subscale. Spearman ρ: 0.45. Table 4 shows that, in fact, several other significant associations not conjectured in our original hypotheses were observed. In particular, the OSDI-function subscale, in addition to the associations hypothesized above, showed substantial and statistically significant associations with ocular pain (ρ = 0.50), mental function (ρ = 0.61), role function (ρ = 0.64), and dependency (ρ = 0.42). The OSDI-symptoms subscale showed a moderate and statistically significant association with NEI-VFQ mental health (ρ = 0.45). The overall OSDI scale showed significant associations with NEI-VFQ general vision (ρ = 0.51), ocular pain (ρ = 0.62), mental and role functioning (ρ = 0.53 and 0.48, respectively), driving (ρ = 0.48), and the overall NEI-VFQ score (ρ = 0.61). Discussion We compared subscale scores for an ocular surface disease-specific instrument (OSDI) with a generic VT-HRQ instrument (NEI-VFQ-25) in patients with a systemic autoimmune disease associated with moderate to severe dry eye. We found that patients with primary Sjögren's syndrome had OSDI scores (mean, 30, before subtraction from 100) similar to those previously published [24] for moderate to severe dry eye patients (the mean score was 36 for severe cases). Despite the fact that all of our patients had Sjögren's syndrome, with moderate to severe dry eye, we found that correlations of ocular surface parameters with VT-HRQ (i.e., patient-reported) parameters tended to be weak or nonexistent, consistent with several other studies demonstrating poor correlations between signs and symptoms of dry eye [18-20]. Indeed, contrary to our expectations, NEI-VFQ correlations with objective ocular surface parameters tended to be higher than those of the OSDI, although all were relatively modest (all < 0.35) and none reached statistical significance. One explanation could be that the nature of the items for each of these instruments is quite different. The OSDI queries the frequency of a symptom or difficulty with an activity, over a one-week recall period.
The NEI-VFQ incorporates questions about both the frequency and intensity of symptoms and their impact on activities, with no specified recall period. Perhaps this added element of capturing both the frequency and intensity of a symptom or impact accounts for some of the differences we found. For subscales that are similar, agreement was higher but still moderate, possibly due to differences in the nature of the questions or response options. The OSDI is targeted to assess how much the symptoms of dry eye affect the patient's current status (i.e., in the past week), whereas the NEI-VFQ may be more suited to capturing the overall impact of a chronic ocular disease on VT-HRQ. In this group of primary Sjögren's syndrome patients, associations between subscales of the NEI-VFQ and OSDI were moderate to strong (< 0.70) and in the hypothesized directions. Significant associations were seen between the OSDI and NEI-VFQ overall scales; OSDI-symptoms and NEI-VFQ ocular pain; and OSDI-function and NEI-VFQ general vision and driving. This suggests that both instruments are capturing important aspects of VT-HRQ. It is not surprising that the highest correlations were observed between subscales with similar domains, which serves to validate the use of alternate methodologies. On the other hand, it is counterintuitive that the generic and disease-specific instruments appeared similar (or that the generic seemed to do a little better) with respect to their association with objectively measured clinical signs of dry eye, as the NEI-VFQ was designed to capture broader aspects of VT-HRQ. For the NEI-VFQ, we found moderate correlations (greater than 0.3) of distance vision with VB and near vision with TBUT. This was surprising, as one might have expected that subscales measuring ocular discomfort or pain (i.e., more disease-specific for dry eye) would have the strongest correlations with clinical measures of dry eye. Clinical signs of dry eye include measures of tear production, ocular surface staining, and tear film break-up; visual acuity and other aspects of visual function are not generally as widely used. However, some investigators have reported that visual acuity in dry eye patients is correlated with decreased spatial contrast sensitivity [52] and is functionally reduced with sustained eye opening due to increased surface irregularity, which can be detected with corneal topography [53, 10]; this could explain our finding of moderate associations of ocular surface measures with near and distance vision. It has been proposed [10] that "subtle visual disturbance" is an important reason for dry eye patients to seek care. Indeed, improvement in blurred vision symptoms was one of the most frequently reported benefits of topical cyclosporine treatment for dry eye in a large, multicenter clinical trial [54]. The impact of the quality of vision or functional visual acuity on VT-HRQ has not been a focus of studies of the subjective aspects of dry eye. Our data indicate that the impact of dry eye on VT-HRQ is only partially accounted for by ocular pain in patients with severe dry eye, such as in Sjögren's syndrome. Would we expect the associations to be different in Sjögren's patients? Sjögren's syndrome is an autoimmune exocrinopathy, and the effects of its systemic nature and chronicity on dry eye may have been more readily captured by the NEI-VFQ's ability to measure both the frequency and intensity of problems with VT-HRQ.
In contrast, although the OSDI includes items to measure function, responses are limited to the frequency of problems. Because the type of dry eye in Sjögren's syndrome is more likely to be severe, and all patients in our study had Sjögren's-related dry eye, we speculated that somewhat stronger associations between signs and symptoms might be observed. On the other hand, ocular surface inflammation and decreased corneal sensation are features of severe dry eye which might alter a patient's perception of symptoms of ocular irritation and might be the cause of weaker correlations between signs and symptoms [48, 55]. Indeed, reduced corneal sensation could provide inadequate feedback through the ophthalmic nerve to the central nervous system, resulting in less efferent stimulation of the lacrimal gland, reduced tear production, and the promotion of a vicious cycle. In addition, meibomian gland dysfunction plays a key role in dry eye in Sjögren's syndrome [56]. Therefore, aqueous and evaporative tear deficiency may combine to produce a particularly diseased ocular surface. Conclusions In addition to clinical signs, it is important to include assessments of VT-HRQ and visual function to fully characterize the impact of dry eye on health status. The correlations between signs and VT-HRQ are modest at best, indicating that VT-HRQ captures an additional component of disease that is not captured by the clinical assessment. This does not necessarily mean that the measures of VT-HRQ or the methods of detecting clinical signs are deficient, but rather that VT-HRQ is an additional element of the overall impact of this disease process on affected individuals. Furthermore, in diseases with systemic manifestations that may influence quality of life independently of dry eye symptoms, such as Sjögren's syndrome, appropriate tests of VT-HRQ are critical to completely characterize quality of life in these patients. It may also be valuable to explore possible differences in associations of clinical signs with VT-HRQ in patient populations with different manifestations or causes of dry eye. List of abbreviations VT-HRQ: Vision-targeted health-related quality of life; TBUT: Tear film breakup time; OSDI: Ocular Surface Disease Index; NEI-VFQ: National Eye Institute Visual Function Questionnaire; VB: van Bijsterveld. Authors' contributions SV helped to design the study, performed all analyses, and took the lead in writing the manuscript. LG performed the patient interviews and assisted with data analyses. GFR provided advice on statistical methods and presentation of the results. JA conceived and helped to design the study and assisted with writing the manuscript.
/Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC517949.xml
15341657
10.1186/1477-7525-2-44
545941
Differential regulation of Aβ42-induced neuronal C1q synthesis and microglial activation
Expression of C1q, an early component of the classical complement pathway, has been shown to be induced in neurons in hippocampal slices following accumulation of exogenous Aβ42. Microglial activation was also detected by surface marker expression and cytokine production. To determine whether C1q induction was correlated with intraneuronal Aβ and/or microglial activation, D-(-)-2-amino-5-phosphonovaleric acid (APV, an NMDA receptor antagonist) and glycine-arginine-glycine-aspartic acid-serine-proline peptide (RGD, an integrin receptor antagonist), which block and enhance Aβ42 uptake, respectively, were assessed for their effects on neuronal C1q synthesis and microglial activation. APV inhibited, and RGD enhanced, microglial activation and neuronal C1q expression. However, addition of Aβ10–20 to slice cultures significantly reduced Aβ42 uptake and microglial activation, but did not alter the Aβ42-induced neuronal C1q expression. Furthermore, Aβ10–20 alone triggered C1q production in neurons, demonstrating that neither neuronal Aβ42 accumulation nor microglial activation is required for neuronal C1q upregulation. These data are compatible with the hypothesis that multiple receptors are involved in Aβ injury and signaling in neurons. Some lead to neuronal C1q induction, whereas other(s) lead to intraneuronal accumulation of Aβ and/or stimulation of microglia.
Introduction Alzheimer's disease (AD) is the most common form of dementia in the elderly. Its main pathological features include extracellular amyloid beta (Aβ) deposition in plaques, neurofibrillary tangles (composed of hyperphosphorylated tau protein) in neurons, progressive loss of synapses and cortical/hippocampal neurons, and upregulation of inflammatory components, including activated microglia and astrocytes and complement activation [1]. Although the contribution of abnormal phosphorylation and assembly of tau to AD dementia remains a focus of investigation, therapies that interfere with Aβ production, enhance its degradation, or cause its clearance from the central nervous system (CNS) have been the center of many studies in search of a cure for this disease. Microglial cells, when activated, are believed to be responsible for much of the Aβ clearance through receptor-mediated phagocytosis [2, 3]. Upon activation, microglia acquire features more characteristic of macrophages, including high phagocytic activity, increased expression of leukocyte common antigen (CD45), major histocompatibility complex (MHC) class II and the costimulatory molecules B7, and secretion of proinflammatory substances [4]. In addition, phagocytic microglia also participate in the removal of degenerating neurons and synapses as well as Aβ deposits ([5], and reviewed in [6]). Thus, while some microglial functions are beneficial, the destructive effects of the production of toxins (such as nitric oxide and superoxide) and proinflammatory cytokines by activated microglia apparently overcome the protective functions in the chronic stage of neuroinflammation [7, 8]. In vitro studies have shown both protection and toxicity contributed by microglia in response to Aβ, depending on the state of activation of the microglia [9, 10]. Correlative studies on AD patients and animal models of AD strongly suggest that accumulation of reactive microglia at sites of Aβ deposition contributes significantly to neuronal degeneration [3, 11], although decreased microglia have been reported to be associated with both lowered and enhanced neurodegeneration in transgenic animals [12, 13]. Aβ itself is believed to initiate the accumulation and activation of microglia. However, recent reports provide evidence for neuron-microglial interactions in regulating CNS inflammation [14]. Nevertheless, the molecular mechanisms responsible for activation and regulation of microglia remain to be defined. Complement proteins have been shown to be associated with Aβ plaques in AD brains, specifically those plaques containing the fibrillar form of the Aβ peptide [11]. Complement proteins are elevated in neurodegenerative diseases like AD, Parkinson's disease, and Huntington's disease, as well as in more restricted degenerative diseases such as macular degeneration and prion disease [11, 15-18]. Microglia, astrocytes, and neurons in the CNS can produce most of the complement proteins upon stimulation. C1q, a subcomponent of C1, can directly bind to fibrillar Aβ and activate complement pathways [19], contributing to CNS inflammation [13]. In addition, C1q has been reported to be synthesized by neurons in several neurodegenerative diseases and animal injury models, generally as an early response to injury [20-23], possibly prior to the synthesis of other complement components. Interestingly, C1q and, upon complement activation, C3 can also bind to apoptotic cells and blebs and promote ingestion of those dying cells [24-26].
Elevated levels of apoptotic markers are present in AD brain tissue, suggesting that many neurons undergo apoptosis in AD [27-29]. Excess glutamate, an excitatory neurotransmitter released from injured neurons and synapses, is one of the major factors that perturb calcium homeostasis and induce apoptosis in neurons [30]. Thus, it is reasonable to hypothesize that neuronal expression of C1q, as an early injury response, may serve a potentially beneficial role of facilitating the removal of apoptotic neurons or neuronal blebs [31] in disease, thereby preventing excess glutamate release, excitotoxicity, and the subsequent additional apoptosis. We have previously reported that in rat hippocampal slice cultures treated with exogenous Aβ42, C1q expression was detected in pyramidal neurons following the internalization of Aβ peptide. This upregulation of neuronal C1q could be a response to injury from Aβ that would facilitate removal of dying cells. Concurrently, microglial activation was prominent upon Aβ treatment. In the present study, the relationship of Aβ-induced neuronal C1q production to microglial activation and Aβ uptake in slice cultures was investigated. Materials and methods Materials Aβ1–42, obtained from Dr. C. Glabe (UC Irvine), was synthesized as previously described [32]. Aβ10–20 was purchased from California Peptide Research (Napa, CA). Lyophilized (in 10 mM HCl) Aβ peptides were solubilized in H2O, and N-2-hydroxyethylpiperazine-N'-2-ethanesulfonic acid (HEPES) was subsequently added to a final concentration of 10 mM HEPES, 500 μM peptide. This solution was immediately diluted in serum-free medium and added to slices. Glycine-arginine-glycine-aspartic acid-serine-proline (RGD) peptide was purchased from Calbiochem (San Diego, CA). D-(-)-2-amino-5-phosphonovaleric acid (APV) was purchased from Sigma (St. Louis, MO). Both compounds were dissolved in sterile Hanks' balanced salt solution (HBSS) without glucose, at 0.2 M and 5 mM, respectively, before being diluted in serum-free medium. Antibodies used in the experiments are listed in Table 1; RT-PCR primers, synthesized by Integrated DNA Technologies (Coralville, IA), are listed in Table 2. All other reagents were from Sigma unless otherwise noted.

Table 1. Antibodies used in immunohistochemistry

  Antibody/antigen   Concentration   Source
  anti-rat C1q       2 μg/ml         M. Wing, Cambridge, UK
  OX-42 (CD11b/c)    5 μg/ml         BD/PharMingen, San Diego, CA
  ED-1               3 μg/ml         Chemicon, Temecula, CA
  anti-CD45          0.5 μg/ml       Serotec Inc, Raleigh, NC
  4G8 (Aβ)           1 μg/ml         Signet Pathology Systems, Dedham, MA
  6E10 (Aβ)          0.5 μg/ml       Signet Pathology Systems

Table 2. PCR primers and cycling conditions for the RT-PCR assay

  Gene      Primer sequences (forward / reverse)                                Denaturation   Annealing         Extension         Cycles   Ref
  C1qB      5'-cgactatgcccaaaacacct-3' / 5'-ggaaaagcagaaagccagtg-3'             94°C 1 min     60°C 1 min 30 s   72°C 2 min        35       [61]
  MCSF      5'-ccgttgacagaggtgaacc-3' / 5'-tccacttgtagaacaggaggc-3'             92°C 30 sec    58°C 1 min        72°C 1 min 30 s   35       [62]
  CD40      5'-cgctatggggctgcttgttgacag-3' / 5'-gacggtatcagtggtctcagtggc-3'     94°C 30 sec    58°C 30 sec       72°C 1 min        30       [63]
  β-actin   5'-ggaaatcgtgcgtgacatta-3' / 5'-gatagagccaccaatccaca-3'             94°C 30 sec    60°C 30 sec       72°C 1 min        25       [61]
  IL-8      5'-gactgttgtggcccgtgag-3' / 5'-ccgtcaagctctggatgttct-3'             94°C 1 min     56°C 1 min        72°C 1 min        39       [64]

Slice cultures Hippocampal slice cultures were prepared according to the method of Stoppini et al [33] and as described in Fan and Tenner [34].
All experimental procedures were carried out under protocols approved by the University of California Irvine Institutional Animal Care and Use Committee. Slices prepared from hippocampi dissected from 10-day-old Sprague Dawley rat pups (Charles River Laboratories, Inc., Wilmington, MA) were kept in culture for 10 to 11 days before treatment started. All reagents were added in serum-free medium (with 100 mg/L transferrin and 500 mg/L heat-treated bovine serum albumin) that had been equilibrated at 37°C, 5% CO2 before addition to the slices. Aβ1–42 or Aβ10–20 was added to slice cultures as described previously [34]. Briefly, peptide was added to cultures in serum-free medium at 10 or 30 μM. After 7 hours, the peptide was diluted with the addition of an equal amount of medium containing 20% heat-inactivated horse serum. Fresh peptide was applied for each day of treatment. Controls were treated the same way except without peptide. RGD or APV was added to the slice cultures at the same time as Aβ42. Immunohistochemistry At the end of the treatment period, medium was removed, and the slices were washed with serum-free medium and subjected to trypsinization as previously described [34] for 15 minutes at 4°C to remove cell-surface-associated, but not internalized, Aβ. After washing, slices were fixed and cut into 20 μm sections for immunohistochemistry, or extracted for protein or RNA analysis as described in Fan and Tenner [34]. Primary antibodies (anti-Aβ antibody 4G8 or 6E10; rabbit anti-rat C1q antibody; CD45 (leukocyte common antigen, microglia), OX42 (CD11b/c, microglia), or ED1 (rat microglia/macrophage marker)), or their corresponding control IgGs, were applied at the concentrations listed in Table 1, followed by biotinylated secondary antibody (Vector Labs, Burlingame, CA) and finally FITC- or Cy3-conjugated streptavidin (Jackson ImmunoResearch Laboratories, West Grove, PA). Slides were examined on an Axiovert 200 inverted microscope (Carl Zeiss Light Microscopy, Göttingen, Germany) with an AxioCam (Zeiss) digital camera controlled by the AxioVision program (Zeiss). Images (of the entire CA1-CA2 region of the hippocampus) were analyzed with the KS 300 analysis program (Zeiss) to obtain the percentage area occupied by positive immunostaining in a given field. ELISA Slices were homogenized in ice-cold extraction buffer (10 mM triethanolamine, pH 7.4, 1 mM CaCl2, 1 mM MgCl2, 0.15 M NaCl, 0.3% NP-40) containing the protease inhibitors pepstatin (2 μg/ml), leupeptin (10 μg/ml), aprotinin (10 μg/ml), and PMSF (1 mM). Protein concentration was determined by BCA assay (Pierce, Rockford, IL) using the BSA provided for the standard curve. An ELISA for rat C1q was adapted from Tenner and Volkin [35] with some modifications, as previously described [34]. RNA preparation and RT-PCR Total RNA from cultures was isolated using the Trizol reagent (Life Technologies, Grand Island, NY) according to the manufacturer's instructions. RNA was treated with RNase-free DNase (Fisher, Pittsburgh, PA) to remove genomic DNA contamination. Each RNA sample was extracted from 3 to 5 hippocampal organotypic slices in the same culture insert. The reverse transcription (RT) reaction conditions were 42°C for 50 min, then 70°C for 15 min. Tubes were then centrifuged briefly and held at 4°C. Primer sequences and PCR conditions are listed in Table 2. PCR products were electrophoresed in a 2% agarose gel in TAE buffer and visualized by ethidium bromide fluorescence.
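The percentage-area readout described for the KS 300 image analysis step can be illustrated with a simple intensity threshold. The NumPy sketch below stands in for whatever segmentation criterion the commercial package actually applies; it is an illustration of the computation, not a description of that software.

```python
import numpy as np

def percent_positive_area(image: np.ndarray, threshold: float) -> float:
    """Percentage of pixels above an intensity threshold in one field.

    `image` is a single-channel field (e.g., Cy3 fluorescence over the
    CA1-CA2 region); `threshold` is a placeholder for the segmentation
    criterion used to call a pixel immunopositive.
    """
    positive = image > threshold
    return 100.0 * positive.sum() / positive.size

def summarize(images: list, threshold: float) -> tuple:
    """Mean and sample SD over fields, mirroring the bar-graph summaries."""
    vals = np.array([percent_positive_area(img, threshold) for img in images])
    return float(vals.mean()), float(vals.std(ddof=1))
```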
To test for differences in total RNA concentration among samples, the mRNA level for rat β-actin was also determined by RT-PCR. Results were quantified using NIH Image software [36] by measuring DNA band intensity from digital images taken on a GelDoc (BIO-RAD) with the Quantity One program. Results NMDA receptor antagonist APV inhibits Aβ42 uptake and Aβ42-induced microglial activation and neuronal C1q production We have previously reported that C1q was detected in cells positive for neuronal markers and that microglial cells were activated in slices following Aβ42 ingestion [34]. Lynch and colleagues have shown that APV, a specific NMDA glutamate receptor antagonist, was able to block Aβ42 uptake by hippocampal neurons in slice cultures [37]. This provided a mechanism to down-modulate Aβ42 internalization and test the effect on induction of C1q synthesis in neurons. Slices were treated with no peptide, 50 μM APV, 30 μM Aβ42, or 30 μM Aβ42 + 50 μM APV for 3 days with fresh reagents added daily. Cultures were collected and processed as described in Materials and Methods. As previously reported, addition of exogenous Aβ42 resulted in Aβ uptake by hippocampal neurons, induction of C1q synthesis in neurons, and activation of microglial cells (Figure 1d, e, f compared with 1a, b, c). As anticipated, Aβ42 uptake in neurons, detected by both 4G8 (Figure 1g) and 6E10 (data not shown), was inhibited by APV co-treatment. Neuronal C1q immunoreactivity was also inhibited when APV was added to Aβ42-treated slices (Figure 1h). Aβ42-triggered microglial activation, assessed by upregulation of antigens detected by anti-CD45 (Figure 1i vs. 1f), OX42 and ED1 (data not shown), was also completely suppressed by APV. To quantify the immunohistochemistry results, images were taken from the entire CA1-CA2 region of each immunostained hippocampal section and averaged. Image analysis further substantiated the reduction in Aβ uptake, C1q synthesis and microglial activation (Figure 1j). C1q gene expression at the mRNA and protein levels was also assessed by RT-PCR and ELISA, respectively. Results showed a decrease of C1q mRNA and protein in extracts of slices treated with 30 μM Aβ42 + APV, compared to 30 μM Aβ42 alone (Figure 2a and 2b, n = 2). Figure 1 APV inhibited Aβ uptake, neuronal C1q production, and microglial activation. Slices were treated with no peptide (a, b, c), 30 μM Aβ42 (d, e, f), or 30 μM Aβ42 + 50 μM APV (g, h, i) for 3 days with fresh reagents added daily. Immunohistochemistry for Aβ (4G8, a, d, g), C1q (anti-rat C1q, b, e, h), and microglia (CD45, c, f, i) was performed on fixed and sectioned slices. Scale bar = 50 μm. Results are representative of three separately performed experiments. j. Immunoreactivity of Aβ (open bar), C1q (black bar), or CD45 (striped bar) was quantified as described in Materials and Methods. Values are the mean ± SD (error bars) from images taken from 8 slices (2 sections per slice) in 3 independent experiments (* p < 0.0001 compared to Aβ, ANOVA single-factor test). Figure 2 Inhibition of Aβ-induced C1q synthesis by APV. a. C1q and β-actin mRNAs were assessed by RT-PCR in slices after 3 days of no peptide, 30 μM Aβ, or 30 μM Aβ + 50 μM APV treatment. Results are from one experiment representative of two independent experiments. b. Slices were treated with no peptide (open bar), 30 μM Aβ (black bar), or 30 μM Aβ + 50 μM APV (striped bar) daily for 3 days.
Data are presented as percentage of control in ng C1q/mg total protein (mean ± SD of three independent experiments, **p = 0.01 compared to Aβ, one-tailed paired t-test). Integrin receptor antagonist GRGDSP (RGD) peptide enhances Aβ42 uptake and Aβ42-induced microglial activation and neuronal C1q expression It has been shown that an integrin receptor antagonist peptide, GRGDSP (RGD), can enhance Aβ ingestion by neurons in hippocampal slice cultures [37]. Therefore, we adopted this experimental manipulation as an alternative approach to modulate the level of Aβ uptake in neurons and assess the correlation between Aβ ingestion and neuronal C1q expression. Slices were treated with no peptide, 2 mM RGD, 10 μM Aβ42, or 10 μM Aβ42 + 2 mM RGD for 3 days with fresh peptides added daily. At the end of treatment, slices were collected and processed. Addition of RGD peptide by itself did not result in neuronal C1q induction or microglial activation (CD45) compared to the no-treatment control, as assessed by immunostaining (data not shown). While greater ingestion was seen at 30 μM (Figure 1d, e, f), addition of 10 μM Aβ produced detectable Aβ ingestion, C1q expression, and microglial activation (Figure 3d, e, f compared with 3a, b, c). The lower concentration of Aβ was chosen for these experiments to ensure that potentiation of uptake could be detected (versus a saturation of uptake at higher Aβ42 concentrations). When RGD was provided in addition to 10 μM Aβ42, the Aβ immunoreactivity in neurons detected with antibodies 4G8 (Figure 3g vs. 3d) and 6E10 (similar results, data not shown), the neuronal C1q expression (Figure 3h vs. 3e), and the microglial CD45 upregulation (Figure 3i vs. 3f) triggered by Aβ42 were all significantly enhanced. Enhanced microglial activation was also detected with the OX42 and ED1 antibodies (data not shown). Quantification by image analysis (Figure 3j) definitively demonstrated the increased accumulation of Aβ in neurons, microglial activation, and induction of neuronal C1q synthesis in the presence of RGD. RT-PCR (Figure 4a) and ELISA (Figure 4b) further demonstrated that both mRNA and protein expression of C1q were enhanced by RGD. Thus, under the conditions tested, both neuronal C1q synthesis and microglial activation are coordinately affected when the internalization of Aβ is modulated negatively by APV or positively by RGD. Figure 3 RGD enhanced Aβ uptake, neuronal C1q expression, and microglial activation. Hippocampal slices were treated with no peptide (a, b, c), 10 μM Aβ42 (d, e, f), or 10 μM Aβ42 + 2 mM RGD (g, h, i) for 3 days with fresh peptides added daily. Immunohistochemistry for Aβ (4G8, a, d, g), C1q (anti-rat C1q, b, e, h), and microglia (CD45, c, f, i) was performed on fixed slice sections. Scale bar = 50 μm. Results are representative of three separately performed experiments. j. Immunoreactivities of Aβ (open bar), C1q (black bar), or CD45 (striped bar) were quantified as described in Materials and Methods. Values are the mean ± SD (error bars) from images taken from 8 slices (2 sections per slice) in 3 independent experiments (* p < 0.0001, compared to Aβ, ANOVA single-factor test). Figure 4 Enhancement of Aβ-induced C1q synthesis by RGD. a. C1q and β-actin mRNAs were assessed by RT-PCR in slices after 3 days of no peptide, 10 μM Aβ, or 10 μM Aβ + 2 mM RGD treatment. Results are from one experiment representative of two independent experiments. b. Slices were treated with no peptide (open bar), 10 μM Aβ (black bar), or 10 μM Aβ + 2 mM RGD (striped bar) daily for 3 days.
3 or 4 slices that had received the same treatment were pooled, extracted, and the proteins analyzed by ELISA. Data are presented as percentage of control in ng C1q/mg total protein (mean ± SD of three independent experiments, **p = 0.06 compared to Aβ, one-tailed paired t-test). Aβ10–20 blocks Aβ42-induced microglial activation but triggers C1q synthesis in hippocampal neurons Data reported by Giulian et al suggest that residues 13–16, the HHQK domain in human Aβ peptide, mediate the Aβ-microglia interaction [38]. To investigate the effect of HHQK peptides in this slice culture system, rat hippocampal slices were treated with no peptide, 10 μM Aβ42, 10 μM Aβ42 + 30 μM Aβ10–20, or 30 μM Aβ10–20 for 3 days with fresh peptides added daily. Sections were immunostained for Aβ, C1q, and microglia. Aβ immunoreactivity was significantly reduced in the Aβ42 + Aβ10–20-treated tissues compared to the Aβ42-alone treatment (Figure 5g vs. 5d). Slices treated with Aβ10–20 alone lacked detectable immunopositive cells with either the 4G8 or the 6E10 anti-Aβ antibody (Figure 5j and data not shown). Furthermore, as anticipated [38], when Aβ10–20 was present, microglial activation by Aβ42, as assessed by levels of CD45, OX42, and ED1, was significantly reduced (Figure 5i vs. 5f and data not shown). Image analysis confirmed the inhibition of Aβ uptake (Figure 5m, open bars) and microglial activation (Figure 5m, striped bars) by the HHQK-containing Aβ10–20 peptide. However, production of C1q in neurons treated with Aβ42 was not inhibited by Aβ10–20 (Figure 5h vs. 5e). In fact, with Aβ10–20 alone, neurons were induced to express C1q at a level similar to that induced by Aβ42 (Figure 5k). The sustained C1q induction by Aβ10–20 was confirmed by RT-PCR for C1q with mRNAs extracted from slices (Figure 6a). Figure 5 Aβ10–20 blocked Aβ42 uptake and microglial activation, but not neuronal C1q induction. Slices were treated with no peptide (a, b, c), 10 μM Aβ42 (d, e, f), 10 μM Aβ42 + 30 μM Aβ10–20 (g, h, i) or 30 μM Aβ10–20 (j, k, l) for 3 days with fresh peptides added daily. Immunohistochemistry for Aβ (4G8, a, d, g, j), C1q (anti-rat C1q, b, e, h, k), and microglia (CD45, c, f, i, l) was performed on fixed and sectioned slices. Results are representative of three independent experiments. Scale bar = 50 μm. m. Immunoreactivities of Aβ (open bar), C1q (black bar), or CD45 (striped bar) were quantified as described in Materials and Methods. Values are the mean ± SD (error bars) from images taken from 8 slices (2 sections per slice) in 3 independent experiments. Microglial activation by Aβ42 was significantly inhibited by Aβ10–20 (* p < 0.0001, compared to either Aβ42 + Aβ10–20 or Aβ10–20, ANOVA single-factor test). Figure 6 a. Aβ10–20 inhibited Aβ42-induced CD40 mRNA elevation, but not that of C1q or MCSF. C1q, MCSF, CD40, and β-actin mRNAs were assessed by RT-PCR in slices treated for 3 days with no peptide, 10 μM Aβ42, 30 μM Aβ10–20, or 10 μM Aβ42 + 30 μM Aβ10–20. Results are from one experiment representative of two independent experiments. b. APV blocked the MCSF, CD40, and IL-8 mRNA induction triggered by Aβ42. RT-PCR for MCSF, CD40, IL-8, and β-actin was performed on RNA extracted from slices treated with no peptide (control), 30 μM Aβ42, or 30 μM Aβ42 + 50 μM APV for 3 days. Results are from one experiment representative of two separate experiments.
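Because the RT-PCR readout in Figures 2, 4 and 6 is semi-quantitative, band densitometry is normalized to the β-actin loading control before treatments are compared. The Python sketch below illustrates that normalization; the band-intensity numbers are invented placeholder values for illustration only, not data from the study.

```python
# Sketch of beta-actin normalization for semi-quantitative RT-PCR.
# Band intensities are assumed to be pre-measured densitometry values in
# arbitrary units (e.g., from NIH Image); all numbers below are invented
# placeholders, not measurements reported in the paper.
band_intensity = {
    "control":     {"C1qB": 120.0, "actin": 980.0},
    "Abeta42":     {"C1qB": 310.0, "actin": 1010.0},
    "Abeta42_APV": {"C1qB": 150.0, "actin": 995.0},
}

def normalized(sample: str, gene: str) -> float:
    """Gene band intensity divided by the beta-actin loading control."""
    lanes = band_intensity[sample]
    return lanes[gene] / lanes["actin"]

baseline = normalized("control", "C1qB")
for sample in band_intensity:
    ratio = normalized(sample, "C1qB")
    print(f"{sample}: C1qB/actin = {ratio:.3f} ({ratio / baseline:.2f}x control)")
```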
CD40, IL-8, and MCSF mRNAs are induced by Aβ42 and differentially regulated by Aβ10–20 and APV It is known that activated microglial cells can produce pro-inflammatory cytokines, chemokines, and nitric oxide, as well as express higher levels of co-stimulatory molecules such as CD40 and B7 [ 39 ]. Many of these proteins have been shown to be upregulated in microglia stimulated by Aβ in cell culture and in vivo [ 40 ]. A semi-quantitative reverse transcriptase PCR technique was used to determine how certain inducible activation products were modified in slice cultures stimulated with exogenous Aβ42, alone or in the presence of Aβ10–20 or APV. Rat slices were treated with 30 μM Aβ42 +/- APV or 10 μM Aβ42 +/- 30 μM Aβ10–20 for 3 days before mRNAs were extracted from tissues. LPS, added at 150 ng/ml for 24 h, served as a positive control, with positive detection of all molecules tested (data not shown). RT-PCR revealed that mRNAs for CD40 and IL-8 were enhanced in Aβ-treated slice cultures relative to the control after 3 days (Figure 6a and 6b). Both Aβ10–20 and APV inhibited Aβ42-triggered upregulation of CD40 (Figure 6a and 6b), consistent with the inhibition of microglial activation by both Aβ10–20 and APV assessed by immunohistochemistry. APV also blocked Aβ42-induced IL-8 expression (Figure 6b), as did Aβ10–20 (data not shown). Macrophage colony-stimulating factor (MCSF), a proinflammatory mediator of microglial proliferation and activation, has been shown to be expressed by neurons upon Aβ stimulation [ 41 ]. The expression of MCSF was induced in slice culture by Aβ treatment by Day 3 (Figure 6a and 6b) and this increase was blocked by the presence of APV (Figure 6b). In contrast, Aβ10–20 did not alter the Aβ42-triggered MCSF induction (Figure 6a), suggesting that MCSF may be required for microglial activation but alone is not sufficient to induce that activation. Discussion Previously, it has been shown that Aβ is taken up by pyramidal neurons in hippocampal slice culture and that the synthesis of complement protein C1q is induced in neurons [ 34 ]. Here we demonstrate that blocking Aβ42 accumulation in neurons with the NMDA receptor antagonist APV and increasing Aβ42 ingestion with the integrin antagonist RGD are accompanied by inhibition and elevation of neuronal C1q expression, respectively. However, Aβ10–20, which markedly inhibits Aβ42 accumulation in pyramidal neurons, does not have any inhibitory effect on neuronal C1q expression. Thus, intraneuronal accumulation of Aβ is not necessary for Aβ-mediated induction of neuronal C1q synthesis. Since Aβ10–20 alone can induce a level of C1q expression in neurons comparable to that induced by Aβ42, it is hypothesized that amino acids 10–20 of the Aβ peptide contain the sequence that is recognized by at least one Aβ receptor. It was reported by Giulian et al. that the HHQK domain (residues 13–16) in Aβ is critical for the Aβ-microglia interaction and activation of microglia, as they demonstrated that small peptides containing HHQK suppress microglial activation and Aβ-induced, microglia-mediated neurotoxicity [ 38 ]. We have previously reported that rat Aβ42, which differs from human Aβ42 in 3 amino acids, including 2 in the 10–20 region and 1 in the HHQK domain, was internalized and accumulated in neurons but failed to induce neuronal C1q expression [ 34 ].
This is consistent with the hypothesis that a specific Aβ interaction (either neuronal or microglial), presumably via the HHQK region of the Aβ peptide, but not intracellular Aβ accumulation, can lead to C1q induction in hippocampal neurons. Neurons are the major cell type that accumulates exogenous Aβ in slice cultures. Microglial activation, as assessed by CD45, OX42, and ED1, was increased with enhanced neuronal Aβ42 uptake and inhibited when Aβ42 uptake was blocked by APV or Aβ10–20 in this slice culture system. These data would be consistent with a model in which neurons, upon internalization of Aβ peptide, secrete molecules that modulate microglial activation [ 14 , 41 , 42 ] (Figure 7, large arrows). Synthesis and release of those molecules may require the intracellular accumulation of Aβ, since blocking intraneuronal Aβ accumulation always blocked microglial activation. The finding that treatment with Aβ10–20 alone did not result in intraneuronal Aβ immunoreactivity or microglial activation, while rat Aβ42, which did accumulate within neurons, induced activation of microglial cells, is consistent with this hypothesis. It should be noted that an absence of Aβ immunoreactivity in Aβ10–20 treated slices does not exclude the possibility that Aβ10–20 was ingested but soon degraded by cells; thus, accumulation of Aβ, rather than ingestion alone, may be necessary to induce secretion of microglia-activating molecules from neurons. Giulian et al. reported that the HHQK region alone was not able to activate microglia [ 38 ]. Thus, Aβ10–20 might block microglial activation by competing with Aβ42 for direct microglial binding, as well as by blocking uptake and accumulation of Aβ in neurons. Figure 7 Model of Aβ interaction with neurons and microglia in slice cultures. Interaction of exogenous Aβ peptide with neuronal receptors leads to at least two separate consequences, in one of which C1q expression is upregulated in neurons. A second receptor mediates the secretion of certain modulatory molecules, which leads to microglial activation involving the expression of CD45, CR3, CD40, and IL-8. This does not exclude direct interactions of Aβ with receptor(s) on microglia that may also contribute to microglial activation. Activated glial cells, especially microglia, are major players in the neuroinflammation seen in Alzheimer's disease [ 43 ]. Microglial cells can be activated by Aβ and produce proinflammatory cytokines, nitric oxide, superoxide, and other potentially neurotoxic substances in vitro, although the state of differentiation/activation of microglia and the presence of other modulating molecules are known to influence this stimulation [ 7 , 9 , 43 ]. "Activated" microglia also become more phagocytic and can partially ingest and degrade amyloid deposits in brain. This leads many to hypothesize that there are multiple subsets of "activated" microglia, each primed to function in a specific but distinct way [ 5 , 43 ]. In hippocampal slice cultures, we and others have shown that Aβ42 triggered microglial activation as assessed by immunohistochemical detection of CR3 (OX42) and cathepsin D [ 34 , 37 ]. Several chemokines, including macrophage inflammatory protein-1 (MIP-1α, MIP-1β), monocyte chemotactic protein (MCP-1), and interleukin 8 (IL-8), have been reported to be increased in Alzheimer's disease patients or in cell cultures treated with Aβ [ 44 , 45 ]. CD40, a co-stimulatory molecule, is also upregulated in Aβ-treated microglia [ 10 ].
In this study, similar to reports on cultured microglia, immunoreactivity of CD45 was found to be increased on microglia in Aβ42-treated slice cultures, and CD40 and IL-8 messenger RNAs were elevated after Aβ42 exposure. As expected, CD40 and IL-8 mRNA induction was blocked whenever immunohistochemical analysis showed inhibition of microglial activation. [We did not observe changes in MIP-1α or MIP-1β mRNAs in slice culture with Aβ42 treatment, and MCP-1 was too low to be detected with or without Aβ stimulation, although it was detectable in LPS-treated slices (data not shown).] The data presented thus far suggest the hypothesis that neurons, upon uptake and accumulation of Aβ, release certain substances that activate microglia. One possible candidate among those neuron-produced substances is MCSF, which has been reported to be induced in neuronal cultures upon Aβ stimulation [ 41 , 46 ] and is known to be able to trigger microglial activation [ 47 ]. Indeed, MCSF mRNA was found to increase after 3 days of Aβ treatment (Figure 6a and 6b). The diminished MCSF signal with the addition of APV and the coordinate lack of microglial activation are consistent with a proposed role for MCSF, produced by stimulated neurons, in activating microglia. However, in the presence of Aβ10–20, MCSF induction was unaltered, though microglial activation was inhibited. Thus, MCSF alone does not lead to the upregulation of the above-mentioned microglial activation markers. In this organotypic slice culture, no significant neuronal damage was observed after 3 days of treatment with Aβ at concentrations that have been reported to cause neurotoxicity in cell cultures. One possible explanation is that the peptide has to penetrate the astrocyte layer surrounding the tissue to reach the multiple layers of neurons. Thus, the effective concentration of Aβ at the neurons is certainly much lower than the added concentration. The failure of Aβ to induce neurotoxicity in slices to the same extent as in cell cultures may also indicate the loss of certain protective mechanisms in isolated cells. A distinct advantage of the slice culture model is that the tissue contains all of the cell types present in brain, the cells are all at the same developmental stage, and the cells may communicate in a similar fashion as in vivo. Our data demonstrating distinct pathways for the induction of neuronal C1q and the activation of microglia by amyloid peptides suggest the involvement of multiple Aβ receptors on multiple cell types in response to Aβ (Figure 7, model) and possibly in Alzheimer's disease progression. This multiple-receptor mechanism is supported by reports suggesting that many proteins/complexes can mediate the Aβ interaction with cells [ 48 ]. These include, but are not limited to, the alpha7 nicotinic acetylcholine receptor (alpha7nAChR) and the P75 neurotrophin receptor (P75NTR) on neurons, the scavenger receptors and heparan sulfate proteoglycans on microglia, as well as the receptor for advanced glycosylation end-products (RAGE) and integrins on both neurons and microglia (Figure 7). Several signaling pathways have been implicated in specific Aβ-receptor interactions [ 49 - 51 ]. However, it is not known which receptors are required for induction of C1q in neurons. In addition, as of yet the function of neuronal C1q has not been determined. Previous reports from our lab have shown that C1q is associated with hippocampal neurons in AD cases but not in normal brain [ 52 ], and the fact that it is synthesized by the neurons has been documented by others [ 23 , 53 ].
In addition, C1q was prominently expressed in a preclinical case of AD (significant diffuse amyloid deposits, with no plaque-associated C1q and no obvious cognitive disorder) and is expressed in other situations of "stress" or injury in the brain [ 54 - 58 ]. Indeed, overexpression of human cyclooxygenase-2 in mice leads to C1q synthesis in neurons, and inhibition of COX-2 activity abrogates C1q induction. These data suggest that, in addition to facilitating phagocytosis by microglia [ 59 , 60 ] (particularly of dead cells or neuronal blebs), the induction of C1q may be an early response of neurons to injury or a regulator of an inflammatory response, consistent with a role in the progression of neurodegeneration in AD. Whether and how neuronal C1q production affects the survival of neurons is still under investigation. Identifying the receptors responsible for neuronal C1q induction may be informative in understanding the role of C1q in neurons in injury and disease. Conclusions In summary, induction of C1q expression in hippocampal neurons by exogenous Aβ42 is dependent upon specific cellular interactions with the Aβ peptide that require an HHQK region-containing sequence, but does not require intraneuronal accumulation of Aβ or microglial activation. Thus, induction of neuronal C1q synthesis may be an early response to injury that facilitates clearance of damaged cells while modulating inflammation and perhaps facilitating repair. Microglial activation in slice culture involves the induction of CD45, CD40, CR3, and IL-8, and correlates with intraneuronal accumulation of Aβ, indicating a contribution of factors released by neurons upon Aβ exposure. MCSF may be one of those stimulatory factors, though by itself MCSF cannot fully activate microglia. Removal of Aβ to prevent deposition and of cellular debris to avoid excitotoxicity would be a beneficial role of microglial activation in AD. However, activated microglia also produce substances that are neurotoxic. Therefore, the goal of modulating the inflammatory response in neurodegenerative diseases like AD is to enhance the phagocytic function of glial cells while inhibiting the production of proinflammatory molecules. Being able to distinguish in the slice system C1q expression (which has been shown to facilitate phagocytosis of apoptotic cells in other systems [ 24 ]) from microglial activation suggests a plausible approach to reaching that goal in vivo. List of abbreviations Aβ: amyloid beta; AD: Alzheimer's disease; APV: D-(-)-2-amino-5-phosphonovaleric acid; BSA: bovine serum albumin; GRGDSP (RGD): glycine-arginine-glycine-aspartic acid-serine-proline; HBSS: Hanks' balanced salt solution; HEPES: N-2-hydroxyethylpiperazine-N'-2-ethanesulfonic acid; MCSF: macrophage colony stimulating factor; NMDA: N-methyl-D-aspartic acid; PMSF: phenylmethylsulfonylfluoride; TAE: triethanolamine. Competing interests The author(s) declare that they have no competing interests. Authors' contributions RF cultured and processed the tissue, performed all experiments (immunohistochemistry, ELISA, PCR and others), analyzed the data, and drafted the manuscript. AJT contributed to the design of the study, guided data interpretation and presentation, and edited the manuscript.
/Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC545941.xml
15642121
10.1186/1742-2094-2-1
548295
Expression pattern and regulation of genes differ between fibroblasts of adhesion and normal human peritoneum
Background Injury to the peritoneum during surgery is followed by a healing process that frequently results in the attachment of adjacent organs by a fibrous mass, commonly referred to as adhesions. Because injuries to the peritoneum during surgery are inevitable, it is imperative that we understand the mechanisms of adhesion formation in order to prevent its occurrence. This requires a thorough understanding of the molecular sequence that results in the attachment of injured peritoneum and the development of fibrous tissue. Recent data show that fibroblasts from the injured peritoneum may play a critical role in the formation of adhesion tissues. Therefore, identifying changes in the gene expression pattern of peritoneal fibroblasts during this process may provide clues to the mechanisms by which adhesions develop. Methods In this study, we compared expression patterns of a large number of genes in fibroblasts isolated from adhesion and normal human peritoneum using gene filters. Contributions of TGF-beta1 and hypoxia to the altered expression of specific genes were also examined using a semiquantitative RT-PCR technique. Results Results show that several genes are differentially expressed between fibroblasts of normal and adhesion peritoneum and that the peritoneal fibroblast may acquire a different phenotype during adhesion formation. Genes that are differentially expressed between normal and adhesion fibroblasts encode molecules involved in cell adhesion, proliferation, differentiation, and migration, and factors regulating cytokines, transcription, translation, and protein/vesicle trafficking. Conclusions Our data substantiate that adhesion formation is a multigenic phenomenon and that not all changes in gene expression pattern between normal and adhesion fibroblasts are a function of TGF-beta1 and hypoxia, which are known to influence adhesion formation. Analysis of the gene expression data in the perspective of the known functions of these genes points to additional targets that may be manipulated to inhibit adhesion development.
Background Peritoneal adhesions resulting from surgical injury are often associated with pelvic pain, bowel obstruction and infertility [ 1 ]. Epidemiological studies conclude that 30 to 35% of all hospital readmissions are associated with adhesion-associated complications, of which 4.5 to 5.1% are directly related to adhesions [ 2 ]. The mechanisms of adhesion formation are not completely known. It is also not clear why adhesions form in some patients and not in others. Therefore, deciphering the genetic components that signal adhesion formation may help diagnose adhesion-prone patients prior to surgery. Needless to say, such information will facilitate finding ways to prevent post-surgical adhesion formation. The parietal and visceral peritoneum that covers the intraperitoneal organs is lined by a layer of squamous epithelial cells, the mesothelium. The submesothelial layer consists of fibroblasts, macrophages and blood vessels. Surgical abrasion of the peritoneum releases mesothelial cells, macrophages, fibroblasts, and blood containing cytokines and several cell types at the site of injury. Coagulation of blood creates a fibrinous mass between injured surfaces. In some patients, fibrinolysis of the clot followed by proliferation of mesothelial cells covers the wound. In others, failure of fibrinolysis followed by proliferation and migration of fibroblasts into the proteinous mass generates the fibrous tissues of adhesion. Consequently, the process of adhesion formation includes inflammatory response, fibrin deposition, cell proliferation, differentiation, migration and death, angiogenesis, and extracellular matrix (ECM) turnover, regulated by cytokines, hypoxia, and genetic and epigenetic factors [ 3 ]. Recent studies illustrate the roles of peritoneal fibroblasts in adhesion development [ 4 - 10 ]. It has also been proposed that fibroblasts from chronic wounds migrate into the fibrin deposit and secrete ECM proteins, causing wound contraction and scar formation [ 11 ]. The migration of fibroblasts may be coordinated by TGF-β1-mediated interactions of integrin receptors [ 10 ] with the RGD sequence of the fibrin, fibrinogen and fibronectin in the fibrin clot [ 12 ]. Additional cytokines and the hypoxic condition at the site of injury may also influence peritoneal fibroblasts to attain a phenotype supporting formation of adhesion tissue. This change in the phenotype of fibroblasts may be induced by changes in the expression pattern of several genes during the process of adhesion development. Therefore, identifying differences in the global gene expression pattern between normal and adhesion fibroblasts may provide additional clues to the mechanisms by which normal fibroblasts attain the adhesive, proliferating and migratory phenotype required for the formation of the fibrous tissues of adhesions. In the present study, we compared gene expression patterns between adhesion and normal peritoneal fibroblasts using GF211 gene filters (Research Genetics) containing 4325 randomly selected known genes. Furthermore, we confirmed the expression pattern of genes of interest by a semiquantitative RT-PCR method and examined the possible contribution(s) of TGF-β1 and hypoxia to the transformation of normal peritoneal fibroblasts into an adhesion phenotype. Methods Peritoneal-tissue collection, fibroblast isolation and culture Tissues were collected at the initiation of surgery and after entry into the abdominal cavity of female patients (25–50 years) undergoing laparotomy for pelvic pain as described earlier [ 4 ].
All patients gave informed written consent for the tissue collection, which was conducted under a protocol approved by the Wayne State University Institutional Review Board. Normal parietal peritoneal tissues were collected from these patients from the anterior abdominal wall, approximately midway between the umbilicus and symphysis pubis, and lateral to the midline incision. Peritoneal tissues from adhesions, located at least 3 inches away from the site of normal tissue collection, were also collected from the same patient. The peritoneal fibroblasts were isolated and separated from mesothelial cells by a differential centrifugation procedure briefly described earlier [ 4 ]. The isolation of fibroblasts from mesothelial cells was also verified by RT-PCR detection of collagen type I, matrix metalloproteinase-2 (MMP-2) and transforming growth factor-β3 (TGF-β3) [ 13 - 15 ]. The primary cultures were maintained in a humidified incubator (37°C, 5% CO 2 ) for 3 days in DMEM (Life Tech.) supplemented with 10% fetal bovine serum (Life Tech.) and antibiotics (penicillin and streptomycin, 50 U/ml; Life Tech.). The monolayers of cells were passaged in trypsin-EDTA solution (Life Tech.). Cells at passages 3–5 were cultured in serum-free medium in 75 cm 2 flasks (Fisher Scientific, Pittsburgh, PA) to 75% confluency prior to studies. Gene expression pattern in the fibroblasts from adhesion and normal peritoneum Total RNA was isolated from monolayers of fibroblasts at 12 h of culture in serum-free medium using Trizol reagent (Invitrogen Inc.). Human Gene Filters (GF211; Research Genetics, Inc., Huntsville, AL) containing 4325 known human cDNA spots were used for the identification of differentially expressed genes between adhesion and normal fibroblasts from human peritoneum. The method suggested by the manufacturer was strictly followed. In brief, 10 μg of total RNA from monolayer cultures of fibroblasts was subjected to cDNA synthesis in the presence of 10 μl of 33P-dCTP (10 mCi/ml; ICN Radiochemicals, Irvine, CA). Radiolabeled cDNA was separated from the free nucleotides using a Bio-Spin 6 chromatography column (Bio-Rad Laboratories). Gene filters were labeled as adhesion or normal fibroblasts and washed with 0.5% sodium dodecyl sulfate (SDS) prior to prehybridization. Individual membranes were transferred to separate roller tubes of the hybridization oven (Fisher Scientific, Inc., Pittsburgh, PA), each containing MicroHyb hybridization solution (Research Genetics) supplemented with Human Cot-1 DNA (Life Technology) and Poly dA (Research Genetics). The membranes were rotated at 10 RPM and at 42°C for 2 h. Radiolabeled cDNA prepared from adhesion and normal fibroblast total RNA was denatured by heating in a boiling water bath for 3 min. The denatured probes were then injected into the prehybridization solution containing the respective membrane. The membranes were hybridized with the respective probe for 18 h at 42°C. The hybridization solution was then replaced with washing solution (2 × SSC containing 1% SDS). The temperature of the oven was raised to 50°C and the rotor speed was increased to 15 RPM. Membranes were washed for 20 minutes, after which the washing solution was replaced with a batch of fresh, prewarmed (50°C) washing solution. Washing was continued for an additional 20 min. A third wash was performed with 0.5 × SSC solution containing 1% SDS at 55°C for 15 minutes. Membranes with cDNA spots facing up were covered with Saran wrap and exposed to a phosphor screen (Kodak) overnight.
The screen was scanned with a Phosphor Imager (Storm System; Amersham Biosciences Corp., Piscataway, NJ). After acquisition of signal intensities from the normal and adhesion fibroblasts of one patient, filters were stripped according to protocol and subjected to gene filter experiments with the RNA samples from a second patient, and the images were scanned. Tiff images obtained from the normal and adhesion fibroblasts of the two patients were analyzed using Pathway 4 software (Research Genetics) to identify differentially expressed genes between the normal and adhesion fibroblasts of each patient. Relative abundance of selected genes in the fibroblasts from adhesion and normal peritoneum Steady-state mRNA levels of selected genes, which are known to have roles in cellular adhesion, proliferation, migration and apoptosis and which demonstrated different expression levels between adhesion and normal fibroblasts in the gene filter experiments, were verified further by a previously described semiquantitative RT-PCR method [ 16 ]. Total RNA (1 μg) from the monolayer culture of adhesion or normal fibroblasts was subjected to reverse transcription as described earlier. Complementary DNA (100 ng) was subjected to PCR amplification of the cDNA of interest in a 25 μl reaction mixture containing 50 mM Tris-HCl (pH 8.4), 50 mM KCl, 2.5 mM MgCl 2 , 0.2 mM dNTP, 0.5 U Taq polymerase (all from Life Technology, BRL) and 1 μM each of sense and antisense primers. Primer sequences were designed using the Primer3 software. The control primers (sense 5'-ggaggttcgaagacgatcag-3' and antisense 5'-cgctgagccagtcagtgtag-3') were expected to provide an amplicon of 509 bp from human 18S ribosomal subunit cDNA (gi: 337376). Accession numbers of the genes of interest are provided in Table 2, and the nucleotide sequences of primers and expected amplicon sizes are provided in Table 3. PCR consisted of a hot start at 95°C for 1 min, followed by cycles of melting at 95°C for 30 sec, annealing at 58°C for 1 min and extension at 72°C for 1 min, with a final extension at 72°C for 10 min. Table 2 Genes differentially expressed in adhesion fibroblasts and known to have roles in cell adhesion, proliferation, migration, differentiation and death.
Increased in intensity:

| Accession number | Definition | Fold change (P1)* | Fold change (P2)* | Functions |
|---|---|---|---|---|
| gi:17986276 | Collagen, type IV alpha 2 | 2.4 | 2.7 | See Discussion |
| gi:4506760 | S100 calcium-binding protein A10 | 2.3 | 2.7 | See Discussion |
| gi:6679055 | Nidogen 2 | 6.4 | 7.1 | See Discussion |
| gi:14250074 | Transmembrane 4 superfamily member 1 | 3.7 | 2.6 | See Discussion |
| gi:4758081 | Chondroitin sulfate proteoglycan 2 | 3.4 | 3.2 | See Discussion |
| gi:187538 | Metallothionein 1E | 4.3 | 4.0 | See Discussion |
| gi:4336324 | Small membrane protein 1 | 2.4 | 2.6 | Cell viability [54] |
| gi:17738299 | Cyclin-dependent kinase inhibitor 2A | 2.0 | 2.0 | Cell proliferation [55] |
| gi:16359382 | Nuclear receptor subfamily 4, group A | 1.6 | 2.1 | Antagonizes TNF-α induced apoptosis [56] |
| gi:40353726 | Synaptopodin | 2.9 | 2.4 | Actin cytoskeleton dynamics [57] |
| gi:23398519 | Vasodilator-stimulated phosphoprotein | 1.5 | 2.1 | Enhances actin-based cell motility, cytoskeletal dynamics [58] |
| gi:28329 | α-Smooth muscle actin | 3.0 | 3.2 | Myofibroblast transformation [44] |
| gi:14574570 | Bcl-2 related gene bfl-1 | 1.6 | 1.4 | Anti-apoptotic; inhibitor of p53-induced apoptosis |
| gi:796812 | p53 tumor suppressor | 1.5 | 1.6 | Cell cycle arrest and apoptosis [52] |

Decreased in intensity:

| Accession number | Definition | Fold change (P1)* | Fold change (P2)* | Functions |
|---|---|---|---|---|
| gi:184522 | Insulin-like growth factor binding protein 3 | 3.2 | 2.3 | See Discussion |
| gi:4504618 | Insulin-like growth factor binding protein 7 | 2.3 | 2.0 | Growth-suppressing factor [59] |
| gi:28610153 | Interleukin 8 | 3.2 | 2.6 | Inhibits fibroblast migration, delays wound healing, reduces wound contraction [60] |
| gi:4504982 | Lectin, galactoside-binding, soluble 3 (galectin) | 3.0 | 3.0 | Tumor-suppressive and pro-apoptotic [61] |
| gi:12803916 | Gap junction protein, beta 1 (Connexin 32) | 1.8 | 2.2 | Tumor-suppressive and pro-apoptotic [62] |
| gi:14589894 | Cadherin 5, type 2, VE-cadherin (vascular) | 2.3 | 1.7 | Downregulation associates with tumor metastasis; initiates endothelial-mesenchymal transdifferentiation [63] |
| gi:16198356 | Lactotransferrin | 2.2 | 2.1 | Inhibits growth of malignant tumors; elevated by high levels of estrogen [64] |
| gi:21619838 | Lipocalin 2, oncogene 24p3 | 3.3 | 2.5 | Pro-apoptotic [65] |
| gi:23273645 | Calponin 1, basic, smooth muscle | 1.7 | 2.5 | Inhibits smooth muscle cell contraction; tumor-suppressive [66] |
| gi:40225461 | RAP1A, member of RAS oncogene family | 1.8 | 1.6 | Inhibits cell proliferation [67] |
| gi:4507112 | Synuclein-gamma | 1.5 | 1.3 | Expression reduced in carcinoma [68] |

* Adhesion/Normal peritoneal fibroblast gene expression intensity values from patient 1 (P1) and patient 2 (P2).
Table 3 PCR primers, amplicon sizes and expression ratios of genes between adhesion and normal peritoneal fibroblasts

| Transcript | Primer sequences (5' to 3') | Amplicon size (bp) | Adhesion/Normal ratio (gene filter)* | Adhesion/Normal ratio (RT-PCR)** |
|---|---|---|---|---|
| 18S ribosomal subunit | sense ggaggttcgaagacgatcag; antisense cgctgagccagtcagtgtag | 509 | (no spot) | 0.9 |
| Collagen type IV alpha 2 chain (COL4A2) | sense caccatgcccttcctgtact; antisense ttgcattcgatgaatggtgt | 351 | 2.6 | 2.3 |
| S100 calcium binding protein A10 (S100A10) | sense cacaccaaaatgccatctca; antisense cttctatgggggaagctgtg | 389 | 2.5 | 2.1 |
| Nidogen 2 (NID2) | sense gcttacgaggaggtcaaacg; antisense ttcacccggaaggtattcag | 500 | 6.8 | 2.9 |
| Transmembrane 4 superfamily member 1 (TM4SF1) | sense tcgcggctaatattttgctt; antisense gcctccaagcactccattta | 500 | 3.2 | 1.9 |
| Chondroitin sulfate proteoglycan 2 (CSPG2) | sense gaaccaaattatggggcaga; antisense ctcccaatccttcgtcgata | 400 | 3.3 | 3.0 |
| Insulin-like growth factor binding protein 3 precursor (IGFBP3) | sense gggtaggcacgttgtaggaa; antisense gtgaggctggctaagaatgc | 603 | -2.8 | -2.8 |
| Metallothionein (hMT-Ie) | sense cagagggtctctgggtttca; antisense gccccatgtcctctcactaa | 400 | 4.2 | 3.3 |

* Average intensity of adhesion/normal peritoneal fibroblast gene expression from patient 1 (P1) and patient 2 (P2) presented in Table 2. A minus (-) sign indicates a fold decrease in intensity in the adhesion fibroblasts. ** Ratios of adhesion/normal mean values from 4 patients.

Table 4 Expression profiles of genes in adhesion vs. normal peritoneal fibroblasts and the effects of TGF-β1 or hypoxia on the expression levels of these genes in normal peritoneal fibroblasts

| Transcript | Adhesion/Normal fibroblasts (gene filter & RT-PCR) | TGF-β1 effect (RT-PCR) | Hypoxia effect (RT-PCR) |
|---|---|---|---|
| COL4A2 | ↑ | ↑ | ↑ |
| S100A10 | ↑ | ↑ | ↑ |
| NID2 | ↑ | — | ND |
| TM4SF1 | ↑ | — | ND |
| CSPG2 | ↑ | ↑ | — |
| IGFBP3 | ↓ | — | ND |
| hMT-Ie | ↑ | ↑ | ↑ |

↑ = upregulation (p < 0.05); ↓ = downregulation (p < 0.05); — = no change; ND = not determined.

Initially, cDNAs of interest were amplified from normal peritoneal fibroblasts at different numbers of PCR cycles (25 to 35). PCR products were subjected to agarose gel electrophoresis. A molecular weight marker (100 bp DNA ladder; Life Technology) was also loaded in adjacent lanes. DNA in the gel was stained with a 1:10,000 dilution of SYBR Green I dye (Molecular Probes, Inc., Eugene, OR) and photographed using a DC 120 Kodak digital camera (Eastman Kodak, Rochester, NY) for verification of amplicon size and analysis of band intensity using ImageJ software. Band intensities were plotted to determine the linear range of the PCR reactions for the amplification of target transcripts. Target cDNAs were then amplified by PCR from normal and adhesion peritoneal fibroblasts at a specific PCR cycle number within the linear range of amplification. Total RNA samples from the normal and adhesion fibroblasts of 4 patients (including RNA from the normal and adhesion fibroblasts of the two patients used for the gene filter experiments) were used for the RT-PCR experiments. Optical densities obtained from the amplicons of the 4 patients (1 normal and 1 adhesion fibroblast RNA sample per patient) were used to derive mean ± standard error of the mean values representing the relative levels of each mRNA species in normal and adhesion fibroblasts. Effects of TGF-β1 or hypoxia on gene expression pattern The effects of TGF-β1 or hypoxic conditions on the steady-state levels of specific gene transcripts in normal peritoneal fibroblasts were also studied, to examine whether adhesion-promoting factors can shift the gene expression pattern of normal fibroblasts toward that of adhesion fibroblasts.
Normal peritoneal fibroblasts were cultured in six-well culture plates (FALCON). When confluent, monolayers of cells in culture were exposed to 1 ng/ml TGF-β1 (Sigma Chemical Company, St. Louis, MO) or hypoxia (2% oxygen) for 24 h. Control plates were cultured for the same duration in the absence of TGF-β1 or hypoxia. Total RNA was isolated from the control, TGF-β1-treated and hypoxia-treated cells and subjected to RT-PCR reactions as described above to determine the relative levels of 18S ribosomal subunit and gene-specific transcripts in the control and treated cells. RT-PCR experiments were conducted twice with the normal peritoneal fibroblasts isolated from 3 patients to obtain six control, six TGF-β1-treated and six hypoxia-exposed amplicons. This included normal fibroblasts from one new patient and two patients that were used exclusively for RT-PCR experiments for the confirmation of gene array data. Statistical analysis The band intensity value of each RT-PCR experiment (normal, adhesion or treated fibroblasts) was used to derive mean ± standard error of the mean values using Statview 4.5 software (Abacus Concepts, Berkeley, CA). Differences between means were tested for significance by one-way analysis of variance with the appropriate post hoc test using the same software to compare differences in the steady-state levels of different mRNA species. Results Expression pattern of genes between adhesion and normal peritoneal fibroblasts Hybridization intensities of radiolabeled cDNA differed between normal and adhesion fibroblasts in both patients when analyzed using the Pathway software. Comparison of hybridization intensities from individual gene spots between normal and adhesion fibroblast RNA (Figure 1) demonstrated that the expression levels of ~4% of genes were >1.5-fold different. BLAST searches of the accession numbers of genes from the list provided by the manufacturer showed that the genes with altered expression levels between normal and adhesion fibroblasts are reported to be involved in cell adhesion and migration, transformation, transcription and translation, and include growth factors as well as cytokines and signaling molecules. Figure 1 Images depicting radioactive signals from GF211 filters hybridized with radiolabeled cDNA. Gene filters were hybridized with 33P-labeled cDNA from normal peritoneal fibroblasts or fibroblasts from adhesion tissue. Unbound signals were washed off and relative radioactive signal intensities were detected using a Phosphorimager as described in the Methods. A. Tiff images of radioactive signals from individual gene spots of filters hybridized with normal (above) and adhesion fibroblasts, both isolated from patient 1. B. Scatter plot showing signal intensities from normal peritoneal (Intensity I) and adhesion (Intensity II) fibroblasts. Dotted lines indicate a two-fold change in hybridization intensities from the median (solid line). Gene filter data from the two patients showed similar expression patterns of collagen type I (alpha 2), collagen type III (alpha 1), fibronectin 1, matrix metalloproteinase-1 (MMP-1), transforming growth factor beta-1 (TGF-β1), TGF-β2 and tissue plasminogen activator, as reported earlier using a multiplex PCR technique (Table 1).
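For illustration, the fold-change screen described above (flagging spots whose adhesion/normal intensity ratio differs by more than 1.5-fold) can be sketched in a few lines of Python. The gene names and intensity values below are invented placeholders, not the study's measurements; the actual analysis of all 4,325 spots was performed with the Pathway 4 software.

```python
import numpy as np

# Placeholder normalized hybridization intensities for a handful of spots.
genes = ["COL4A2", "S100A10", "NID2", "IGFBP3", "18S-like control"]
normal = np.array([100.0, 80.0, 40.0, 260.0, 500.0])     # normal fibroblasts
adhesion = np.array([250.0, 190.0, 270.0, 95.0, 510.0])  # adhesion fibroblasts

for gene, n_val, a_val in zip(genes, normal, adhesion):
    ratio = a_val / n_val
    # Follow the paper's sign convention: a minus sign denotes lower
    # intensity in adhesion fibroblasts (see Tables 1 and 3).
    fold = ratio if ratio >= 1 else -1.0 / ratio
    status = "differential" if abs(fold) > 1.5 else "unchanged"
    print(f"{gene:>18}: fold change {fold:+.1f} -> {status}")
```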
Signal intensities representing TGF-β3 (gi:22531293), TGF-β III receptor (gi:26251745), VEGF-A (gi:2565322), VEGF-B (gi:39725673) and VEGF-C (gi:19924300) expression levels were respectively 1.6-, 1.5-, 1.9-, 1.3- and 1.3-fold (average values from two patients) lower in the adhesion compared to normal fibroblasts. No spots for anti-apoptotic bcl-2 or pro-apoptotic bax were present on GF211 filters. Signal intensities representing the anti-apoptotic molecule bcl-2 related gene bfl-1 (gi:14574570) and the pro-apoptotic molecule p53 (gi:796812) were higher (Table 2) in adhesion compared to normal fibroblasts. Expression levels of the pro-apoptotic molecules bad (gi:14670387) and bak1 (gi:33457353) were not different between normal and adhesion fibroblasts. Additional genes that are differentially expressed between normal and adhesion fibroblasts and known to be involved in apoptosis as well as cell adhesion, proliferation and migration are listed in Table 2. Table 1 Ratios of signal intensities from adhesion and normal peritoneal fibroblasts detected from gene filters, representing relative expression levels of genes in patient 1 (P1) and patient 2 (P2).

| Gene | Accession number | P1 (A/N) | P2 (A/N) | Reference* |
|---|---|---|---|---|
| Collagen type I (alpha 2) | gi:48762933 | 1.4 | 1.5 | [4,15,53] |
| Collagen type III (alpha 1) | gi:15149480 | 2.0 | 1.7 | [15] |
| Fibronectin 1 | gi:53791222 | 1.5 | 1.2 | [4,15] |
| MMP-1 | gi:13027798 | 1.6 | 1.4 | [4] |
| TGF-β1 | gi:10863872 | 1.4 | 1.7 | [4,15] |
| TGF-β2 | gi:339549 | 1.5 | 1.3 | [4] |
| tPA | gi:2161467 | -1.5 | -2.0 | [8] |

A minus (-) sign represents lower signal intensity in adhesion (A) compared to normal (N) fibroblasts (gene filter data). * Citations reporting expression levels of the respective genes in fibroblasts from normal human peritoneum and adhesions using a multiplex PCR technique. Semiquantitative RT-PCR experiments (Figure 2), conducted to verify the expression patterns of specific genes from the gene filter experiments that had not been studied earlier in peritoneal fibroblasts, confirmed higher expression (p < 0.05) of collagen type IV (alpha 2) chain (COL4A2), S100 calcium binding protein A10 (S100A10), nidogen 2 (NID2), transmembrane 4 superfamily member 1 (TM4SF1), chondroitin sulfate proteoglycan 2 (CSPG2) and metallothionein (hMT-Ie) in adhesion compared to normal fibroblasts. The semiquantitative RT-PCR experiments also confirmed lower expression levels of insulin-like growth factor binding protein 3 precursor (IGFBP3) mRNA in the adhesion compared to normal peritoneal fibroblasts. Transcript levels of the 18S ribosomal subunit estimated by the RT-PCR method were not significantly different between fibroblasts isolated from normal and adhesion peritoneum (Figure 2 and Table 3). Figure 2 Relative abundance of specific mRNA species in the normal and adhesion fibroblasts. Genes differentially expressed between the normal and adhesion fibroblasts, as identified by gene filter experiments, were amplified by the RT-PCR technique at 26 PCR cycles. PCR products (20 μl) were subjected to electrophoresis, stained with fluorescent dye, photographed, and optical densities were determined as described in Methods. A: Representative gel showing amplicons from normal (odd lane numbers) and adhesion (even lane numbers) fibroblasts. Lanes 1 & 2, 3 & 4, 5 & 6, 7 & 8, 9 & 10 and 11 & 12 show RT-PCR products from COL4A2, NID2, CSPG2, S100A10, 18S ribosomal subunit and TM4SF1 mRNA, respectively. Lanes 13 & 14, 15 & 16 and 17 & 18 show RT-PCR products from 18S ribosomal subunit, IGFBP3 precursor and hMT-Ie mRNA, respectively. L: Lanes loaded each with 7 μl of 100 bp DNA ladder.
The 600 bp band of the ladder is indicated by an arrowhead. B. Histogram showing the mean and standard error of the mean of optical densities derived from amplicons of specific genes (x axis) from normal (empty bars) and adhesion (filled bars) fibroblasts isolated from 4 patients as described in Methods. *Significantly different (p < 0.05) between normal and adhesion fibroblasts. Effects of TGF-β1 or hypoxia on the expression levels of specific genes in the normal peritoneal fibroblasts Exposure to TGF-β1 or hypoxic conditions for 24 h altered the expression levels of specific genes in the normal peritoneal fibroblasts, as evidenced by semiquantitative RT-PCR. Transcript levels of COL4A2, S100A10, CSPG2 and hMT-Ie were upregulated by TGF-β1 in the normal peritoneal fibroblasts (Figure 3), whereas transcript levels of NID2, TM4SF1 and IGFBP3 were not altered by TGF-β1 treatment. Hypoxic conditions elevated the expression levels of COL4A2, S100A10 and hMT-Ie transcripts in the normal peritoneal fibroblasts (Figure 4). Transcript levels of CSPG2 were not significantly altered by hypoxia. Figure 3 Effects of TGF-β1 on the steady-state levels of specific mRNA species in normal peritoneal fibroblasts. Normal peritoneal fibroblasts were cultured for 24 h in the absence or presence of TGF-β1 and total RNA from the cells was examined for the steady-state levels of different mRNA species by the semiquantitative RT-PCR technique as described in Methods. A. Representative gels showing amplicons generated by RT-PCR from specific gene transcripts (denoted on the left of the panel) from control (lanes 1, 2 and 3) and TGF-β1-treated (lanes 4, 5 and 6) cells. Complementary DNA for all genes except the IGFBP3 precursor was amplified at 26 PCR cycles; IGFBP3 precursor transcripts were amplified at 25 cycles. L: Lane loaded with 100 bp DNA ladder. B. Histogram showing the mean and standard error of the mean of optical densities from amplicons representing specific mRNA species (x axis). The RT-PCR experiments were conducted twice with normal peritoneal fibroblasts isolated from 3 patients to obtain OD values of six amplicons from control (empty bars) or treated (shaded bars) fibroblasts for statistical analysis. *Significantly different from control conditions at p < 0.05. Figure 4 Effects of hypoxia on the steady-state levels of specific genes in normal peritoneal fibroblasts. Normal peritoneal fibroblasts were cultured for 24 h under normoxic and hypoxic conditions and total RNA from the cells was examined to determine the steady-state levels of specific transcripts as described in Methods. Complementary DNA for all genes was amplified at 26 PCR cycles. Histogram showing the mean and standard error of the mean of optical densities of amplicons representing specific mRNA species (x axis) from control (empty bars) or hypoxia-exposed cells (shaded bars) from 3 patients. The RT-PCR experiments were conducted twice to obtain OD values of six amplicons from normoxic or hypoxic fibroblasts for statistical analysis. Images of gels with amplicons from cells treated with hypoxia are not shown. *Significantly different from control conditions at p < 0.05. Discussion We present evidence that the expression pattern of a large number of genes differs between fibroblasts isolated from adhesion tissues and normal human peritoneum, supporting the notion that adhesion fibroblasts may attain a different phenotype following peritoneal injury.
Genes that displayed altered expression levels in this transition included those involved in cell proliferation and differentiation, signaling, transcription and translation, proteolysis, and cytokine activity. The results indicate that fibroblasts from adhesion tissue may perceive and respond to external and internal cues differently than those residing in normal human peritoneum. We attempted to decipher the functional consequences of the altered gene expression pattern in the adhesion fibroblasts to further elucidate the mechanism of adhesion formation and to point to additional ways in which adhesion development may be restrained. Expression patterns of genes in fibroblasts from normal and pathological sites have also been shown to differ in earlier studies [ 17 ]. More relevant to the present study are the reports [ 4 , 8 ] on the mRNA levels of human type I collagen (alpha 2), fibronectin 1, MMP-1, TIMP-1, TGF-β1, TGF-β2, IL-10, PAI-1, tPA and COX-2 in adhesion and normal peritoneal fibroblasts from humans estimated by a multiplex PCR technique. Gene filter data from the two patients also showed similar patterns of collagen type I (alpha 2), fibronectin 1, MMP-1, TGF-β1, TGF-β2 and tPA mRNA levels in the normal and adhesion fibroblasts (Table 1). The expression patterns of TIMP-1, IL-10, PAI-1 and COX-2 in adhesion and normal peritoneal fibroblasts as reported earlier [ 4 , 8 , 9 ] could not be verified by the gene filter experiments because GF211 filters do not have spots representing these genes. Even so, the similarities in the expression patterns of many genes between the two patients (Tables 1–3) and those reported earlier using a multiplex PCR technique [ 4 , 8 ] validate our findings. The semiquantitative RT-PCR experiments conducted to verify the expression patterns of specific genes recorded in the gene filter experiments show that the mRNA levels of COL4A2, S100A10, nidogen-2, TM4SF1, CSPG2, hMT-Ie and IGFBP3 precursor indeed differ between normal and adhesion fibroblasts. Even though the expression levels of these transcripts were significantly different between normal and adhesion fibroblasts, only minor variations in the optical densities of amplicons were recorded within normal or adhesion tissues of patients of different age groups. This indicates that age-dependent differences in the expression levels of genes in fibroblasts from normal or adhesion tissues may tend toward relatively similar expression levels in culture. Although our study focused on the steady-state levels of mRNA species and not on translational or posttranslational events, analysis of the functional consequences of altered expression of the encoded proteins from the literature, as discussed below, indicates that changes in the pool of these mRNA species may lead to the transformation of normal peritoneal fibroblasts into a specialized phenotype during the healing process. COL4A2 is a major structure-defining component of all basement membranes [ 18 ] and forms a framework for the ordered aggregation of additional molecules such as laminin, heparan sulfate proteoglycans, and nidogen [ 19 ]. The relatively higher levels of COL4A2 observed in adhesion fibroblasts may enhance synthesis of basement membrane in the tissues of adhesions.
As the COL4A2 gene is upregulated during malignant transformation and tumor vessel proliferation [ 20 ], it is anticipated that upregulated levels of COL4A2 in adhesion fibroblasts may aid the formation of adhesion tissue by increasing proliferation of adhesion fibroblasts and supporting new vessel formation for the nourishment of the growing tissue. S100A10 protein interacts with annexin A2, forming a heterotetrameric structure, AIIt, that docks into the cell membrane, promoting F-actin reorganization and cell migration [ 21 ]. AIIt also colocalizes with uPAR and plasminogen in cells [ 22 ]. Heightened levels of S100A10 may enhance migration of adhesion fibroblasts by changing F-actin dynamics and influencing the cathepsin B and plasminogen machinery [ 23 ]. S100A10 also interacts with cytosolic phospholipase A2, inhibits its activity and decreases synthesis of arachidonic acid [ 24 ]. Therefore, an increase in S100A10 levels in adhesion fibroblasts may deplete intracellular levels of arachidonic acid and prostaglandin E2 (PGE2), which are known to inhibit cell proliferation, collagen I synthesis, contraction of ECM and fibroblast migration [ 25 ]. Nidogen-2 (entactin-2) interacts with laminin-1 P1, collagen I, collagen IV, perlecan and fibulin-2 in the extracellular space and stabilizes the basement membrane. It also interacts with α6β1 and α3β1 integrin receptors on cells [ 26 ]. The relatively higher levels of nidogen-2 secreted by adhesion fibroblasts into the extracellular space may strengthen the basement membrane and enhance integrin-mediated adhesion and migration of fibroblasts into the growing tissue of adhesion. TM4SF molecules (tetraspanins) play important roles in cell migration and in the generation of complexes with integrins that are functionally relevant for cell motility, tumor progression and wound healing [ 27 ]. It is proposed that tetraspanins can influence cell migration by (i) modulating integrin signaling and integrin-mediated reorganization of the cortical actin cytoskeleton; (ii) regulating compartmentalization of integrins on the cell surface; or (iii) directing intracellular trafficking and recycling of integrins [ 27 ]. Therefore, heightened intercalation of TM4SF1 into the cell surface of adhesion fibroblasts may facilitate their integrin-mediated migration into the developing tissues of adhesion. Versican (CSPG2) is also known to influence α4β1 and α2β1 integrin-mediated invasion of melanoma cells [ 28 ]. Higher CSPG2 in the fibroblasts of adhesion tissues may assist in the integrin-CSPG2 mediated migration of peritoneal fibroblasts to the site of injury and increase the number of fibroblasts by enhancing proliferation and decreasing apoptosis, as evidenced in other cell types [ 28 , 29 ]. Versican interacts with hyaluronan and CD44 and increases the viscoelastic nature of the pericellular matrix, creating a highly malleable extracellular environment that supports the cell-shape changes necessary for cell proliferation and migration [ 30 ]. Because hMT-Ie transcripts are detected in cell types that have undergone myoepithelial differentiation [ 31 ], the significant differences in hMT-Ie mRNA levels between adhesion and normal peritoneal fibroblasts indicate that fibroblasts in adhesions are at a different state of differentiation than those in normal peritoneum.
Molecules including IL-1, IL-6, TNF-α, EGF, bFGF, glucocorticoids, LPS and estrogen that promote post-surgical adhesion formation [ 32 - 34 ] directly or indirectly increase MT-1 transcripts and proteins in several tissues and cell types [ 35 ]. Therefore, it is likely that these molecules may increase adhesion formation by augmenting hMT-Ie levels, which in turn may increase proliferation, reduce cell death and confer invasiveness on adhesion fibroblasts [ 36 ]. In contrast to the increases in the above-mentioned mRNA species in adhesion fibroblasts, steady-state levels of the IGFBP3 precursor transcript were found to be lower. Because IGFBP-3 is known to inhibit cell growth by sequestering IGF, its decreased level may enhance proliferation of adhesion fibroblasts [ 37 ]. Reduced levels of IGFBP3 mRNA have been reported in tumorigenic cells [ 38 ]. Therefore, the reported lower incidence of pelvic adhesion formation in primates on anti-estrogenic therapy [ 32 ] could be due to the antiproliferative effects of anti-estrogens, mediated in part by IGFBP-3 [ 39 ]. IGFBP-3 also induces growth inhibition and apoptosis [ 40 ]. The decrease in IGFBP-3 levels in adhesion fibroblasts may therefore promote adhesion development both by increasing proliferation and by reducing apoptosis at the site of injury. Our attempts to examine the regulatory roles of TGF-β1 and hypoxia, factors known to promote adhesion development [ 3 ], on the expression patterns of specific genes show that not all changes in the gene expression pattern between normal and adhesion fibroblasts are a function of these factors (Figures 3 and 4; Table 4). Our data show that while the mRNA levels of COL4A2, S100A10 and hMT-Ie are elevated by both TGF-β1 and hypoxia in human peritoneal fibroblasts, the mRNA levels of CSPG2 are influenced only by TGF-β1. Moreover, transcript levels of nidogen-2, TM4SF1 and IGFBP3 were not influenced by TGF-β1. Based on these results, we hypothesize that genes that are not influenced by TGF-β1 and hypoxia in peritoneal fibroblasts may be influenced by factors such as interleukins and TNF-α that are also known to play a role in adhesion formation. Alternatively, TGF-β1 and/or hypoxia may influence the actions of these genes at the posttranscriptional level without altering transcript levels. The TGF-β1-induced upregulation of integrin α5, αv and α6 subunits in normal human peritoneal fibroblasts without altered mRNA levels [ 10 ] is consistent with this possibility. It is also possible that TGF-β1 and hypoxia may alter the expression of these genes in mesothelial and other cell types following peritoneal injury. Likewise, the lower levels of VEGF transcripts in adhesion fibroblasts may be compensated by higher levels in other cell types required for angiogenesis during adhesion formation [ 3 ]. The lower intensity detected for the VEGF-A isoform in adhesion fibroblasts may also be due to the fact that the spots representing this isoform do not distinguish the different VEGF-A splice variants that are known to be up- or downregulated during adhesion formation [ 16 ]. It is known that a new phenotype of fibroblasts is induced during wound healing. These fibroblasts, termed myofibroblasts, express higher levels of α-smooth muscle actin and vinculin-containing fibronexus adhesion complexes [ 41 ]. Fibroblasts isolated from adhesion tissues express higher levels of α-smooth muscle actin transcripts compared to normal peritoneal fibroblasts (Table 2) [ 42 ], and TGF-β1 induces formation of adhesion complexes in these cells [ 10 ].
These observations, in addition to the known roles of TGF-β in the development of post-surgical adhesions [ 43 ] and in the transformation of fibroblasts into smooth muscle α-actin-expressing myofibroblasts [ 44 ], imply that this cytokine may drive the transformation of normal fibroblasts into a phenotype similar to myofibroblasts in the developing tissues of adhesion. Therefore, hindering this transformation may reduce adhesion formation. For instance, augmenting E prostanoid 2 (EP2) receptor pathways may be a way to reduce the incidence of adhesion formation, because prostaglandin E 2 (PGE 2 ) has been shown to inhibit TGF-β1-induced expression of α-SMA, production of collagen I and the transformation of fibroblasts to myofibroblasts via EP2 signaling [ 45 ]. Additionally, adhesion formation may be reduced by P311 ( PTZ17 ) and interferon γ treatments, which inhibit TGF-β1-induced myofibroblast transformation [ 46 , 47 ]. During the course of normal wound healing, myofibroblasts disappear, possibly by apoptosis [ 48 ]. In contrast, when there is abnormal wound healing, myofibroblasts persist [ 49 ]. Data obtained in our study also indicate that adhesion fibroblasts may resist apoptosis due to anti-apoptotic effects mediated by increased hMT-Ie and CSPG2 levels and downregulation of IGFBP3. They may also attain a highly proliferative nature due to upregulation of the S100A10 and CSPG2 genes and downregulation of IGFBP3 (Table 2). A higher proliferating and reduced apoptotic nature of adhesion fibroblasts, derived from an altered ratio of bcl-2 and bax expression, was suggested in an earlier study [ 5 ]. It is apparent now that the higher proliferative and reduced apoptotic nature of human adhesion fibroblasts as reported earlier [ 5 ] could also derive from the altered expression of hMT-Ie, CSPG2, S100A10 and IGFBP3, and of Bfl-1, which inhibits p53-induced apoptosis and is induced by the cytokines TNF-α and IL-1β [ 50 ]. This altered phenotype of adhesion fibroblasts, acquired during the healing process, may lead to the accumulation of an excess number of cells at the site of peritoneal injury, resulting in fibrosis and scar formation. Of note, one of the pivotal differences between wounds that proceed to normal scars and those that develop hypertrophic scars or fibrosis may be a lack of, or reduced, cell death [ 51 ]. Therefore, excess fibroblasts at the site of peritoneal wound healing may divert the normal process of healing toward fibrosis and adhesion. The elevated levels of p53 in the adhesion fibroblasts during this disarray, as evident from the gene filter data (Table 2), may guard against transition toward malignancy [ 52 ]. Conclusions It is evident from our study that the steady-state levels of several genes differ between adhesion and normal peritoneal fibroblasts in humans and that adhesion development may be a function of several genes. Changes in the functional interdependence of these genes at the site of injury may transform normal peritoneal fibroblasts into cell type(s) with an altered phenotype. These cells, designated adhesion fibroblasts, may mimic previously described myofibroblasts and are highly proliferative. These cells resist apoptosis and secrete ECM molecules to renovate the basement membrane. With a changed expression pattern of cell surface molecules, these cells may respond to intracellular signaling for migration over the fibrin clot.
This altered nature of adhesion fibroblasts may therefore play a major role in the formation of the fibrous mass of adhesion tissue that bridges adjacent, injured peritoneum. Blocking changes in the expression or function of the genes necessary for this transformation of normal peritoneal fibroblasts may curtail adhesion formation. This could be achieved by the application of PGE 2 (acting via EP 2 signaling), interferon γ and P311, and by applying measures to induce apoptosis in the peritoneal fibroblasts at the site of injury. The obvious question, "how to maintain apoptosis at a desired level for normal peritoneal healing?", however, remains to be answered. List of Abbreviations Collagen type IV (alpha 2) chain (COL4A2), Nidogen 2 (NID2), Chondroitin sulfate proteoglycan 2 (CSPG2), S100 Calcium binding protein A10 (S100A10), Transmembrane 4 superfamily member 1 (TM4SF1), Metallothionein (hMT-Ie), Insulin-like growth factor binding protein 3 precursor (IGFBP3), Transforming growth factor (TGF), Prostaglandin E2 (PGE2), Urokinase plasminogen activator receptor (uPAR), Annexin 2 and S100A10 complex (AIIt), tissue plasminogen activator (tPA), Plasminogen activator inhibitor (PAI), Cyclooxygenase (COX), Matrix metalloproteinase (MMP), Tissue inhibitor of metalloproteinase (TIMP), Interferon γ (IFN-γ), Interleukin (IL). Authors' Contributions GMS and MPD were responsible for the isolation of peritoneal fibroblasts from normal peritoneum and adhesion tissues as well as for establishing the hypoxia chambers. MPD provided patient information and valuable suggestions during the writing of the manuscript. UKR performed the microarray and semiquantitative RT-PCR experiments, analyzed the data and wrote the manuscript.
/Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC548295.xml
15642115
10.1186/1477-7827-3-1
527875
Statistical design considerations for pilot studies transitioning therapies from the bench to the bedside
Pilot studies are often used to transition therapies developed using animal models to a clinical setting. Frequently, the focus of such trials is on estimating the safety in terms of the occurrence of certain adverse events. With relatively small sample sizes, the probability of observing even relatively common events is low; however, inference on the true underlying event rate is still necessary even when no events of interest are observed. The exact upper limit to the event rate is derived and illustrated graphically. In addition, the simple algebraic expression for the confidence bound is seen to be useful in the context of planning studies.
Introduction In the translational research setting, statisticians often assist in the planning and analysis of pilot studies. While pilot studies may vary in their fundamental objectives, many are designed to explore the safety profile of a drug or a procedure [1,2]. Often, before applying a new therapy to large groups of patients, a small, non-comparative study is used to estimate the safety profile of the therapy using relatively few patients. This type of investigation is typically encountered in the authors' experience as collaborating biostatisticians at our General Clinical Research Center, as well as in developing applications addressing the National Institutes of Health Roadmap Initiative. In the context of pilot studies, traditional levels of α (the Type I error rate) and β (the Type II error rate) may be inappropriate, since the objective of the research is not to provide definitive support for one treatment over another [3]. For example, the null hypothesis in a single-arm pilot study might be that the tested intervention produces a safety profile equal to a known standard therapy. A Type I error (rejecting the null hypothesis when it is true) in the context of this preliminary investigation would encourage additional examination of the treatment in a new clinical trial. This is in contrast to a Type I error in a Phase III/IV clinical trial, in which the error could result in widespread exposure of patients to an ineffective treatment. Allowing for a less stringent Type I error rate is critical when trying to transition therapies from animal models to clinical practice, since it identifies a greater pool of potential therapies that could undergo additional research in humans. Similarly, power (1 - β) is of less practical importance in a single-arm, non-comparative (or historically controlled) pilot study, since the results would almost always require confirmation in a controlled trial setting. Shih et al [4] extend the deviations from traditional hypothesis-driven analyses to suggest that preliminary investigations should focus on observing responses at the subject level rather than testing a treatment's estimated mean response. In the section that follows, we relate these notions in the context of safety data analysis and provide interpretations that can be used for sample size considerations. Methods For ease of presentation, assume the pilot study will involve n independent patients for whom the probability of the adverse event of interest is π, where 0 < π < 1. A 100 × (1 - α)% confidence interval is to be generated for π, and an estimate of the sample size, n, is desired. Denote X as the number of patients sampled who experience the adverse event of interest. Then the probability of observing x events in n subjects follows the usual binomial distribution, namely

$P(X = x) = \binom{n}{x}\,\pi^{x}(1 - \pi)^{n - x}$.

Denote π_u as the upper limit of the exact one-sided 100 × (1 - α)% confidence interval for the unknown proportion, π [5]. Then π_u is the value such that

$\sum_{k=0}^{x} \binom{n}{k}\,\pi_u^{k}(1 - \pi_u)^{n - k} = \alpha$.     (1)

A special case of the binomial distribution occurs when zero events of interest are observed. In pilot studies with relatively few patients, this is of practical concern and warrants particular attention. When zero events are realized (i.e., x = 0), equation (1) reduces to $(1 - \pi_u)^n = \alpha$. Accordingly, the upper limit of a one-sided 100 × (1 - α)% confidence interval for π is

$\pi_u = 1 - \alpha^{1/n}$.     (2)

The resulting 100 × (1 - α)% one-sided confidence interval is $(0,\; 1 - \alpha^{1/n})$.
Graphically, one can represent this interval on a plot of π against n, as illustrated in Figure 1 for α = 0.05, 0.10 and 0.25. As the figure illustrates, for relatively small sample sizes there is a large amount of uncertainty in the true value of π. It is critical to convey this uncertainty in the findings and to guard against inferring that a potential treatment is harmless when no adverse effects of interest are observed with limited data. Louis [6] likewise cautioned against over-interpreting the clinical observation of zero false negatives in the context of diagnostic testing, stating that zero false negatives may generate unreasonable optimism regarding the true rate, particularly for smaller sample sizes. Figure 1 Upper limit of the 100 × (1 - α)% one-sided confidence interval for the true underlying adverse event rate, π, for increasing sample sizes when zero events of interest are observed. Furthermore, one can use (2) in other clinically important ways. For instance, an investigator may be planning a pilot study and want to know how large it would need to be to infer, with 100 × (1 - α)% confidence, that the true rate did not exceed a pre-specified value, say π_0, given that zero adverse events were observed. Using (2), it follows that

$n = \lceil \ln(\alpha) / \ln(1 - \pi_0) \rceil$.     (3)

To illustrate the utility of this solution, consider the following example. Ototoxicity is well documented with increasing doses of cisplatin, a platinum-containing antitumoral drug that is known to be effective against a variety of solid tumors. It is of clinical interest to identify augmentative therapies that can alleviate some of the cell death, since up to 31% of patients receiving initial doses of 50 mg/m² cisplatin are expected to have irreversible hearing loss [7,8]. Therefore, it is desirable to rule out potential treatments not consistent with this rate of hearing loss before considering more conclusive testing. Using equation (3), we would conclude that the augmentative therapy has a hearing loss rate less than 0.31, at the 90% confidence level, if a total of 7 patients are recruited and all 7 do not experience ototoxicity. Therefore, an initial sample size of 7 patients would be sufficient to identify augmentative therapies, such as heat shock or antioxidant supplements, that demonstrate preliminary efficacy in humans. In the event that one or more ototoxic events are observed, the results may not be statistically different from the historical rate (31% in this example). The results of several of these pilot studies could then be used to rank-order potential therapies, thereby providing an empirically justified approach to therapy development. Conclusions In translational research, it is common to explore the adverse event profile of a new regimen. In this note, we illustrate how a simple expression has utility for the generation of confidence intervals when zero events are observed. A more comprehensive and methodological treatment of inference with zero events can be found in Carter and Woolson [9] and in Winkler et al [10], which treats the issue from a Bayesian statistical viewpoint. This commentary and related works have implications as a practical finding for the interpretation of clinical trial safety data and offer clinicians advice on the range of adverse event rates that can be thought to be consistent with the observation of zero events. The presented formula offers more flexibility than the "rule of 3" approximation [11] since it allows for the specification of significance levels other than α = 0.05.
The ability to choose the significance level might be important when designing or interpreting preliminary data obtained from a pilot study. In summary, small sample sizes and a focus on safety are often associated with translational research, and the statistical approaches to these studies may need to deviate from traditional, hypothesis-driven designs. Competing interests The author(s) declare that they have no competing interests. Authors' Contributions RC and RW contributed to the conceptualization, writing and editing of this manuscript.
PMID: 15511289
DOI: 10.1186/1479-5876-2-37
PMCID: PMC535530
A trial design for evaluation of empiric programming of implantable cardioverter defibrillators to improve patient management
The delivery of implantable cardioverter defibrillator (ICD) therapy is sophisticated and requires the programming of over 100 settings. Physicians tailor these settings with the intention of optimizing ICD therapeutic efficacy, but the usefulness of this approach has not been studied and is unknown. Empiric programming of settings such as anti-tachycardia pacing (ATP) has been demonstrated to be effective, but an empiric approach to programming all VT/VF detection and therapy settings has not been studied. A single standardized empiric programming regimen was developed based on key strategies with the intention of restricting shock delivery to circumstances when it is the only effective and appropriate therapy. The EMPIRIC trial is a worldwide, multi-center, prospective, one-to-one randomized comparison of empiric to physician tailored programming for VT/VF detection and therapy in a broad group of about 900 dual chamber ICD patients. The trial will provide a better understanding of how particular programming strategies impact the quantity of shocks delivered and facilitate optimization of complex ICD programming.
Background Over the past decade ICD implantation has become increasingly straightforward, yet ICD programming and follow-up have become more complex owing to enhancements in device features and capabilities. While sophisticated algorithms provide high sensitivity and improved specificity of arrhythmia detection, allowing delivery of necessary, effective therapy while minimizing inappropriate defibrillation shocks, detection and therapy of ventricular tachycardia (VT) / ventricular fibrillation (VF) still require the programming of about 100 settings [1-3]. Good programming choices are crucial because they affect patient acceptance of ICD therapy. Patients who receive multiple shocks have greater difficulty adjusting to the ICD implant, and may become anxious or depressed, especially if a prior history of these ailments exists [4]. Reducing the number of shocks delivered to the patient would improve overall patient management. To date, there is no consensus on how to use information about a patient's complex diseases to program the ICD, and usually little is known about the patient's spontaneous VT rates, risk of syncope, or the therapies that effectively terminate their spontaneous ventricular arrhythmias. Furthermore, ICD indications have changed dramatically within the last five years. Physicians may retain old programming habits even with enhanced devices or expanding patient indications, which may result in sub-optimal detection and therapy, such as unnecessary shocks for faster VT, supraventricular tachycardia (SVT), and non-sustained VT. Physicians often adjust many programmable settings that may benefit the patient. For example, physicians may prescribe patient-specific regimens for anti-tachycardia pacing (ATP) or shock energies based on laboratory testing. While one would expect this tailoring of programming to improve outcomes, it has never been studied. Empiric programming has been shown to be effective for subsets of ICD settings, including subsets of dual chamber detection and ATP therapies [3,5-10]. Whether this holds true for comprehensive programming of VT/VF detection and therapy for all ICD patients is unknown. A proven optimal programming approach would be useful for simplifying therapy prescription, improving therapy outcomes, reducing inadvertent programming errors, and reducing overall shock-related morbidity. The EMPIRIC trial has been designed to evaluate a standardized empiric programming regimen by testing the hypothesis stated below. The EMPIRIC trial outcome will provide an understanding of how programming strategies impact defibrillation shock delivery in ICD therapy. EMPIRIC Trial Hypothesis This trial tests the hypothesis that the shock-related morbidity of ICD therapy is similar whether patients are treated with a standardized empiric programming regimen for VT/VF detection and therapy or with a patient-specific, physician-tailored approach. Indices of Shock Morbidity Only sustained VT/VF that cannot be painlessly terminated should result in shock therapy, and it is unusual for SVT to require shock therapy. Shock morbidity is related to the number and frequency of shocks that patients receive; morbidity is therefore reduced if shocks are delivered only when necessary for effective arrhythmia termination. Thus, indices that address shock morbidity should reflect both the frequency and the appropriateness of shocks for VT/VF and SVT.
Shock morbidity is quantifiable by determination of the following:
♦ proportion of true VT/VF episodes that are shocked
♦ proportion of true SVT episodes that are shocked
♦ time to first shock (VT/VF or SVT)
♦ time to first VT/VF shock
♦ time to first SVT shock
These parameters are used to define the EMPIRIC trial's main objectives. EMPIRIC Trial Objectives The primary objective is to demonstrate that the proportion of shocked VT/VF episodes and the proportion of shocked SVT episodes in a population whose ICDs are programmed using a standardized regimen for VT/VF detection and therapy are either similar to or less than the same proportions in a similar population whose ICDs are programmed using a physician-tailored approach. This primary objective was chosen to independently evaluate the effects of programming on both appropriate and inappropriate ICD shocks (which are likely to have different implications for patient management). The advantage of this approach is that it focuses on the frequency of shock delivery while also allowing an assessment of shock appropriateness. However, this assessment could be confounded by a disproportionate number of SVT events in the two study groups. For example, an abundance of non-shocked SVT events in the physician-tailored arm, despite a greater incidence of inappropriate SVT shock therapies in that arm, would nevertheless result in the proportion of SVT episodes shocked being similar in the two arms. The analysis is also heavily dependent on the electrogram data stored in the ICDs. Given the limited electrogram storage capability of ICDs, differing rates of electrogram storage might occur between study arms or between VT/VF and SVT episodes, which may skew the amount of data available for analysis. Therefore, the key secondary endpoint in this study is the time to delivery of first shock therapy in any given patient. This endpoint offers the advantage that it permits patient cross-over between the study arms without compromising the endpoint, and it is a clinically robust indicator of patient shock-related morbidity. Furthermore, its analysis is not influenced by the appropriateness or otherwise of a shock therapy and therefore cannot be confounded by the differential occurrence of non-shocked SVT events in the study arms. Other secondary endpoints will further evaluate the impact of the standardized programming regimen on patients through assessment of detection performance, health care utilization, shock impact on device longevity, and "true VT/VF" episode durations. EMPIRIC Trial Protocol Design The EMPIRIC trial is a worldwide, multi-center, prospective, one-to-one randomized comparison of empiric to physician-tailored programming. About 900 patients were enrolled worldwide at 52 centers from August 2002 to October 2003. Each patient will be followed for approximately one year. The inclusion criteria require patients to meet all of the following conditions:
1. Indicated for an ICD according to internationally accepted criteria.
2. Willing to sign informed consent or offer a legal representative who can provide consent.
3. Achieved a 10 Joule safety margin at implant.
Patients are excluded if they:
1. Have permanent atrial fibrillation (AF).
2. Had a previous ICD.
3. Have a medical condition that precludes the testing required by the protocol or limits trial participation.
4. Have a life expectancy of less than one year.
5. Are unable to complete follow-ups at the trial center.
6. Are enrolled or participating in another clinical trial.
Randomization Patients receiving a Marquis DR ICD are randomized to one of the two programming approaches after meeting a 10 J safety margin. In order to control for physician practice between the two treatment arms, randomization is stratified by treatment center. Further, since the incidence and prevalence of spontaneous VT/VF and SVT among primary prevention patients is not well known, randomization is also stratified by ICD indication (secondary vs. primary). A secondary indication includes patients with a history of spontaneous sustained VT/VF or syncope with suspected VT. A primary prevention indication includes all other patients. Programming Approaches The physician-tailored approach is based on the standard practice of each physician. All VT/VF programming may be tailored to the patient, except that VT detection must be set to 'On' or 'Monitor' to record episodes of slower VT. The empiric standardized regimen is based on various programming strategies to reduce shocks. In this arm, initial device settings are fixed (see Table 1), with the exception of the VT detection interval, which can be set slower than 150 bpm when clinically necessary.

Table 1. Empiric Arm Programming
Zone (Detection) | Interval | Beats To Detect | Redetect | Therapies
VF (On) | 300 ms | 18/24 | 9/12 | 30 J × 6
FVT (via VF) | 240 ms | NA | 9/12 | Burst (1 sequence), 30 J × 5
VT (On) | ≥ 400 ms* | 16 | 12 | Burst (2), Ramp (1), 20 J, 30 J × 3
SVT criteria On: AF/Afl, Sinus Tach (1:1 VT-ST Boundary = 66%); SVT Limit = 300 ms. Burst ATP: 8 intervals, R-S1 = 88%, 20 ms decrement. Ramp ATP: 8 intervals, R-S1 = 81%, 10 ms decrement.
*The VT detection interval may be set to a rate slower than 150 bpm when clinically necessary (see text).

VT/VF detection and therapy programming changes are permitted at follow-up in both arms only when medically justified. These changes must be documented and are reviewed throughout the study. Data Collection Patients are followed for a 12-month period, with required clinic visits at 3, 6 and 12 months. Data collection includes: VT/VF and SVT episodes, device programming, medical justifications for VT/VF programming changes, cardiovascular medication, adverse device events, P and R wave measurements, and cardiovascular-related hospitalizations. Study Design Challenges A challenge of the study design is the possibility that physician practice could become biased by in-trial experience, gravitating towards the empiric standardized regimen. This might occur if empiric programming is perceived to be efficacious, particularly with respect to management of rapid ventricular tachycardia by pace termination. Collection of pre-trial programming practices provides the capacity to evaluate potential "treatment drift"; this result will be reported. Additionally, in an effort to prevent drift or possible physician bias in programming in the physician-tailored arm, a weekly comparison of programming status and initial implant programming will be assessed through device interrogation information. Any programming changes made must be supported by a medical justification grounded in event-related occurrences (i.e. system- or procedure-related adverse events, spontaneous episodes, or inappropriate shocks). In order to protect the integrity of the protocol design, reprogramming will be encouraged for non-justified programming deviations. In this manner the initial treatment strategies are tested using an intention-to-treat analysis with characterization of programming changes. Empiric Arm Programming Strategies The empiric arm standardized programming regimen is based on the following key strategies to reduce shocks.
1) Strategies to reduce shocks for VT/VF
• Multiple ATP attempts for VT ≤ 200 bpm: Three sequences of ATP will be attempted for rhythms with ventricular rates ≤ 200 bpm. Empiric ATP has been shown to terminate ≥ 90% of VTs in the VT zone [5-10]. Furthermore, induced VTs do not predict spontaneous VT cycle length, morphology, or therapy efficacy [11]. Three sequences will be attempted for rates up to 200 bpm because the average rate of fast VTs was 199 bpm in the PainFREE Rx1 study, where the FVT zone was 188–250 bpm, and more ATP provided incremental shock reductions [6]. ATP will be used in all patients because even cardiac arrest patients have been shown to have VTs [5,12-14].
• ATP for VTs 201–250 bpm: One sequence of ATP will be delivered for fast VTs (FVT) using the FVT via VF zone, which maintains sensitivity to polymorphic VT (PVT) and VF and delivers ATP if the 8 beats prior to FVT detection are ≤ 250 bpm. Approximately 81% of ICD-detected VF is monomorphic VT (MVT). MVT can be pace-terminated approximately 75% of the time with one sequence of ATP, without increased risk of syncope or acceleration [6,7,15].
• Longer detection duration: The VF initial beats to detect will be set to 18 of 24. Shorter beats to detect are often programmed by physicians, but may increase unnecessary shocks for non-sustained VT and for SVTs. At least 25% of ICD-detected VF is non-sustained VT/VF [15-17].
• High-output 1st VF and FVT shock: A 30 Joule energy will be used for the first VF and FVT shock. This will allow additional time for the spontaneous conversions that frequently occur. A higher shock energy may also improve 1st-shock success and therefore reduce the need for multiple shocks within an episode. The LESS study found no difference in 1st-shock success with 31 J versus DFT++; however, it analyzed all VT/VF faster than 200 bpm [18]. ATP should terminate a majority of these rhythms, and for that reason the benefit of empiric high-energy shocks for PVT/VF or after a failed ATP is unknown. The primary reason some physicians program lower-energy 1st shocks is concern about syncope. Several recent studies have shown very low syncope rates [6,19]. Furthermore, charge times are much faster and more stable over the life of the device than in older ICDs. For instance, the Medtronic Marquis DR 30 Joule charge time is 5.9 and 7.5 seconds at beginning and end of life, respectively [20].
2) Strategies to Reduce Shocks for SVTs and Sensing Issues
• Empiric SVT criteria: The PR Logic criteria of AF/A. Flutter and Sinus Tach will be programmed 'On' in all patients. These criteria have been shown to have a relative VT/VF sensitivity of 100% and a positive predictive value of 88.4% [3].
• SVT criteria applied to faster rates: The SVT limit and VF rate cut-off will be increased to 200 bpm in all patients to provide SVT discrimination at faster rates. Two of the top five reasons for inappropriate detections in the GEM DR Study (933 patients) were a ventricular rate during AF in the VF zone and an SVT cycle length faster than the programmed SVT limit [3].
• Avoid detecting 1:1 SVTs with long PRs as VT: 1:1 SVTs with long PR intervals accounted for 38% of inappropriate detections in the Gem DR (7271) Clinical Study [3]. A retrospective analysis found that changing the 1:1 VT-ST boundary programmable parameter from 50% to 66% might eliminate 32% of all inappropriate detections.
The downside to this approach is that it may result in a 0.8% rate of VT/VF misclassification or delay [21].
• Longer detection duration: The VF initial beats to detect will be set to 18 of 24. Shorter beats to detect may result in more unnecessary shocks for SVTs or for ventricular over-sensing.
• ATP attempts: In addition to terminating ventricular arrhythmias without shocks, ATP should eliminate some inappropriate shocks when inappropriate detections occur, by terminating SVTs or slowing conduction.
The VT rate cut-off is one of the most important ICD settings because it can result in untreated symptomatic VT if set too fast, and in unnecessary therapies for non-sustained VT, SVTs, or sensing issues if set too slow. Reports have shown that some secondary prevention patients have significant symptoms from VTs outside treated zones [22]. The VT cut-off in the empiric arm is set to ≤ 150 bpm to err on the side of treating VTs and to advance understanding of the incidence of slower VTs in all patient populations. The optimal VT rate cut-off may need to be set according to the patient's presenting conditions at implant (e.g., a faster cut-off in primary prevention patients). Statistical Considerations The primary endpoint is the proportion of true episodes that are shocked during the 12-month follow-up period. The standardized empiric programming regimen will be considered non-inferior to the physician-tailored programming approach if both the proportion of shocked VT/VF episodes and the proportion of shocked SVT episodes are no more than 10 percentage points greater in the empiric arm than in the physician-tailored arm. The chosen margin of 10 percentage points is considered clinically important. It is assumed that 24% of patients will have at least one true VT/VF episode and 33% of patients will have at least one true SVT episode during the 12-month follow-up period. Based on unpublished data from other Medtronic trials, the within-patient correlation coefficient for multiple episodes is assumed to be 0.3. Assuming a distribution of episode counts per patient similar to that observed in these previous trials, and shock rates of 30% and 14% for VT/VF and SVT episodes respectively, a total of 900 patients (450 in each arm) will give at least 80% power for the VT/VF hypothesis and 90% power for the SVT hypothesis, each tested at the 0.05 significance level (the margin check is illustrated numerically below). The critical secondary endpoint, time to first shock therapy, will be analyzed using the Cox proportional hazards model for 1) any VT/VF or SVT, 2) true VT/VF only and 3) true SVT only. The empiric programming approach will be considered non-inferior if the upper confidence limit for the hazard ratio is less than 1.5. Other Planned Analyses To better understand the changing ICD patient populations, we will investigate whether or not the proportion of appropriate and inappropriate shocks delivered is related to the following baseline characteristics: main indication for implant (especially spontaneous sustained monomorphic VT), left ventricular ejection fraction, CAD status, history of Atrial Tach/Atrial Fib/Atrial Flutter, NYHA classification, use of amiodarone, sotalol, or beta-blockers, and inducibility for VT/VF. In addition, to facilitate understanding of the optimal programmable settings for various patient sub-groups, we will consider the impact of programmable settings on outcomes.
Returning to the planned analyses, we will examine in particular the "treated cut-off" (TC), which is the VT detection cut-off if VT detection is 'On', or the VF detection cut-off if VT detection is 'Off' or 'Monitor'. Outcomes in patients with a faster TC (physician-tailored arm) will be compared to those in patients with a slower TC (either physician-tailored arm or empiric arm). Other programmable settings that will be investigated include the number of beats to detect VF and the number of ATP attempts at various rates (e.g., <175 bpm, 175–200 bpm, >200 bpm). The types of arrhythmias, median ventricular cycle length, and therapies delivered will also be characterized relative to the patient's conditions and programming. Furthermore, the incidence of slower VTs in patients without a history of spontaneous, sustained monomorphic VT will be characterized. Conclusions and Trial Impact The EMPIRIC trial is a worldwide, multi-center, prospective, one-to-one randomized comparison of shock-related morbidity in a population of about 900 ICD patients whose ICD therapy is determined either by a standardized programming regimen or by physician-tailored programming of VT/VF detection and therapy. Shock-related morbidity is assessed by a primary objective that compares between study arms the proportion of VT/VF episodes that are shocked and the proportion of SVT episodes that are shocked, and by a key secondary endpoint that compares the time to first shock therapy. ICD patient populations have changed rapidly within the last five years, but little has been published on optimal programming for the emerging patient subsets (e.g., primary prevention). Therefore, a standardized regimen of parameters is used in this trial for all patient populations. Today's patient population is quite diverse, so a slightly more sophisticated programming approach may be necessary (e.g. changing the VT cut-off based on the main ICD indication), or perhaps complex physician tailoring is critical to reducing shocks. The EMPIRIC trial will characterize the shock morbidity of a single empiric programming approach compared to patient-specific, physician-tailored programming. Empiric programming may be an acceptable strategy if it achieves equivalence with physician-tailored programming. The EMPIRIC trial results will also provide a better understanding of how particular programming strategies impact the frequency of shocks delivered and will facilitate a way to optimize complex ICD programming. Competing Interests
1. Have you received reimbursements, fees, funding, or salary from an organization that may in any way gain or lose financially from the publication of this paper in the past five years, or is such an organization financing the article-processing charge for this article? Dr. Morgan: Yes, Medtronic has paid me honoraria. Dr. Sterns: Yes, I am a paid investigator in several Medtronic clinical trials and a key investigator in the present trial. I understand that Medtronic is paying the processing fee for this article. Dr. Wilkoff: Yes, Medtronic, Guidant, St. Jude Medical. Hanson, Ousdigian, and Otterness: Yes, employees of Medtronic.
2. Have you held any stocks or shares in an organization that may in any way gain or lose financially from the publication of this paper? Dr. Morgan, Dr. Sterns and Dr. Wilkoff: No. Hanson, Ousdigian, and Otterness: Yes, own Medtronic stock.
3. Do you have any other financial competing interests? Dr. Morgan, Dr. Sterns, Dr. Wilkoff, Hanson, Ousdigian, and Otterness: No.
4. Are there any non-financial competing interests you would like to declare in relation to this paper? Dr. Morgan, Dr. Sterns, Dr. Wilkoff, Hanson, Ousdigian, and Otterness: No.
Authors' Contributions All six authors contributed to the study design and the writing of this manuscript.
PMID: 15541169
DOI: 10.1186/1468-6708-5-12
PMCID: PMC548281
PCR cloning of a histone H1 gene from Anopheles stephensi mosquito cells: comparison of the protein sequence with histone H1-like, C-terminal extensions on mosquito ribosomal protein S6
Background In Aedes and Anopheles mosquitoes, ribosomal protein RPS6 has an unusual C-terminal extension that resembles histone H1 proteins. To explore homology between a mosquito H1 histone and the RPS6 tail, we took advantage of the Anopheles gambiae genome database to clone a histone H1 gene from an Anopheles stephensi mosquito cell line. Results We designed specific primers based on RPS6 and histone H1 alignments to recover an Anopheles stephensi histone H1 corresponding to a conceptual An. gambiae protein, with 92% identity. Southern blots suggested that the Anopheles stephensi histone H1 gene has multiple variants, as is also the case for histone H1 proteins in Chironomid flies. Conclusions Histone H1 proteins from Anopheles stephensi and Anopheles gambiae mosquitoes share 92% identity with each other, but only 50% identity with a Drosophila homolog. In a phylogenetic analysis, Anopheles, Chironomus and Drosophila histone H1 proteins cluster separately from the histone H1-like, C-terminal tails on RPS6 in Aedes and Anopheles mosquitoes. These observations suggest that the resemblance between histone H1 and the C-terminal extensions on mosquito RPS6 has been maintained by convergent evolution.
Background Ribosomal protein (RP) S6 is a phosphorylated protein that resides on the small subunit of eukaryotic ribosomes. Phosphorylation occurs on a cluster of five serine residues near the C-terminal end of the protein. Although details remain unclear, the phosphorylation state of RPS6 is believed to influence the translational efficiency of some mRNAs [1], possibly mediated by direct contact between RPS6 and the 28S rRNA in the large subunit. RPS6 has also been implicated in ribosome biogenesis, and is thought to play a conserved role in the initiation of protein synthesis [2]. In Aedes aegypti and Aedes albopictus mosquitoes, the RPS6 protein is ~17 kDa larger than its Drosophila homolog, and on polyacrylamide gels it migrates as the largest protein from the small ribosomal subunit. Ae. aegypti and Ae. albopictus RPS6 cDNAs encode an approximately 100 amino acid extension at the C-terminal end of the protein. The extension is particularly rich in lysine, alanine and glutamic acid, and most closely resembles the sequence of histone H1 proteins from diverse sources [3]. Because RPS6 is thought to have regulatory function(s) in a variety of cell signaling pathways [2], we were surprised to uncover this difference between mosquito and Drosophila RPS6 proteins. We have recently shown that RPS6 protein isolated from ribosomal subunits retains its histone H1-like tail [4]. Thus, unlike the case with the ubiquitinated ribosomal protein S27a in the rat [5], the histone tail is not removed from the mosquito ribosomal protein prior to ribosome assembly. RpS6 cDNA from an Anopheles stephensi cell line encodes an approximately 170 amino acid histone H1-like C-terminal extension, and in silico analysis reveals a similar modification encoded by the rpS6 gene in Anopheles gambiae. In both Aedes and Anopheles mosquitoes, the C-terminal extension is completely encoded in Exon 3, directly contiguous with the upstream open reading frame encoding the series of serines that may be phosphorylated [4]. Anopheline mosquitoes are believed to be ancestral to the Culicidae, which includes the genera Aedes and Culex [6]. Thus, to a first approximation, we infer that the longer tail in Anopheles mosquitoes represents the ancestral state, and that the RPS6 tail has been lost in the higher Diptera, which include D. melanogaster. Although mosquito RPS6 tails in general resemble histone H1 proteins, their divergence between Aedes and Anopheles mosquitoes is high relative to the conventional portion of the RPS6 coding sequence. Because histone H1 is the most variable of the histone proteins, and functions as a linker rather than as a component of the histone octamer, we set out to clone a cDNA encoding a bona fide histone H1 protein from an An. stephensi cell line. In a phylogenetic comparison, the An. stephensi histone H1 protein clusters with homologs from Drosophila and Chironomus, rather than with RPS6 histone H1-like tails from mosquitoes. These results indicate that the histone H1-like tails on mosquito RPS6 proteins are evolving independently of conspecific histone H1 proteins. Results Design of PCR primers The gene encoding Drosophila melanogaster histone H1 spans 1204 nucleotides, and encodes a 256 amino acid protein in a single exon [7]. There is a single recorded His1 allele in Drosophila [8], while multiple histone H1 variants have been described in Chironomid flies [9-11].
When the deduced sequence of the Drosophila histone H1 protein (Accession NM_058232) was compared to the Anopheles gambiae genome using the program BLAST [12] on the NCBI website (National Center for Biotechnology Information), we obtained 5 accessions with E values ranging from 3e-35 to 8e-43, distributed on mosquito chromosomes 2 and 3. Upon further examination, we noted that XP_314184 and XP_314186 (chromosome 2) corresponded to the same protein. Two additional histone H1 candidates (XP_311486 and XP_309451) were encoded on chromosome 3. These three conceptual Anopheles proteins shared 70–80% identity to one another, and about 50% identity to the Drosophila H1 protein sequence. In the EST-other database, we found a single uninformative match to an unidentified An. gambiae entry (dbEST id = 11236311), with the relatively modest E value of 0.055. Histone H1 sequences from Aedes mosquitoes are not yet in existing databases. The 50% identity between Drosophila and Anopheles histone H1 proteins was relatively low, compared to the approximately 80% amino acid identity between Drosophila and Anopheles RPS6, exclusive of the histone H1-like tail in the mosquito protein. The Drosophila H1 histone was also ~50% identical to that from Chironomus thummi, a fly closely related to mosquitoes in the infraorder/superfamily Culicomorpha [13]. To design primers that would amplify a histone H1 gene, and not the histone H1-like tail in mosquito rpS6, we aligned one of the An. gambiae H1 candidate proteins (XP_311486) to a histone H1 protein from C. thummi, and examined the alignment for precise matches (Fig. 1A) that did not match well in a separate alignment of the An. gambiae histone H1 protein with the An. gambiae RPS6 tail (Fig. 1B). The forward primer (F1) corresponded to amino acids PKKPKKP in An. gambiae, and the reverse primer (R1) corresponded to residues AAKKPKA (Fig. 2). Figure 1 Primer design. To design primers, we aligned An. gambiae putative histone H1 candidate XP_311486 (Panel A, top) with a histone H1 protein (Q07134; Panel A, bottom) from C. thummi. Boxed residues were chosen for primer design, according to the An. gambiae nucleotide sequence. Panel B shows these primer residues aligned between the An. stephensi RPS6 tail (top) and the putative Anopheles gambiae histone H1 (bottom). Vertical bars designate identities. Figure 2 Sequence of the An. stephensi histone H1 gene. The positions of internal primers F1 and R1, and of primers F2 and R2, are designated by arrows. The ATG start codon and TAA stop codon are boxed. Recovery of the An. stephensi histone H1 gene We used the F1 and R1 primers with HindIII-digested genomic DNA from An. stephensi cells to obtain an approximately 450 bp PCR product, which was sequenced and verified to encode a histone H1 protein. The 5'-end of the gene, which extended 81 nucleotides upstream of the ATG start codon, was obtained using primer R1 with the GeneRacer kit (Invitrogen, Carlsbad, CA), with total RNA as the template. The absence of a poly(A) tail on histone mRNAs required an unconventional strategy to obtain the 3'-end of the coding sequence. First, we used HindIII-digested genomic DNA template, with a primer based entirely on the 3'-UTR of An. gambiae XP_314184, without success. When we designed a second primer (R2, in Fig. 2) extending from the 3'-UTR through the TAA stop codon and into the coding region, we obtained the 3'-end of the coding sequence. Finally, primers F2 and R2 (Fig. 2) were used to verify the complete nucleotide sequence.
Southern blots with An. stephensi genomic DNA The likelihood that the mosquito genome contains multiple histone H1 gene variants is consistent with the multiple H1 variants that have been described in Chironomus [9-11] and the eight histone H1 subtypes that have been described in mammals [14,15]. When we used the An. stephensi cDNA to probe Southern blots of genomic DNA digested with various restriction enzymes with 6 bp recognition sites, most enzymes gave multiple bands, with the notable exception of BamHI, which hybridized to a single band longer than 10 kb (Fig. 3). Based on the observation that the D. melanogaster H1, H2A, H2B, H3 and H4 histone genes are organized in approximately one hundred 5 kb repeats per haploid genome [16], the large BamHI fragment from An. stephensi may be a starting point for recovery of a complete cluster of the An. stephensi histone gene family. Figure 3 Southern blot of An. stephensi genomic DNA hybridized to the An. stephensi histone H1 probe. DNA was digested with BamHI (B), EcoRI (E), HindIII (H) and PvuI (P). Positions of size markers are shown at right. The An. stephensi nucleotide sequence (GenBank accession # AY672907) matched An. gambiae histone H1 candidates on chromosomes 2 and 3 with an E value of 0.0. In addition, 6 unmapped sites also had E values of 0.0. A final two sites had E values of 4e-170 and 3e-127. The deduced An. stephensi protein sequence was 92% identical to An. gambiae protein XP_314184 on chromosome 2 (Fig. 4A). A similar level of identity was obtained with An. gambiae XP_309451 on chromosome 3, but the alignment required introduction of a 58 amino acid gap in the shorter (190 residue) deduced Anopheles gambiae protein (not shown). Identity with An. gambiae XP_311486 was 79%. Based on these criteria, we have cloned the An. stephensi homolog of An. gambiae XP_314184. Figure 4 Comparison of mosquito histone H1 proteins and RPS6 histone H1-like tails. Panel A shows the alignment of the experimentally determined An. stephensi histone H1 amino acid sequence with the An. gambiae conceptual protein XP_314184. Panel B shows a phylogram produced in PAUP* by neighbor joining, with the nematode C. elegans histone H1-like protein 2 (AAM44399) designated as the outgroup. The alignment includes histone H1 proteins from various Diptera and the known histone H1-like tails on mosquito RPS6. Values on the horizontal lines indicate branch lengths, defined as the fraction of substitutions between the nodes that define the branch. Bootstrap values based on 1000 replicates are shown within circles. A single tree with identical topology was obtained with the optimality criterion set to parsimony. Comparisons of histone H1 proteins with mosquito RPS6 C-terminal extensions The identity between Drosophila and Anopheles (or Drosophila and Chironomus) histone H1 proteins was only 50%. This divergence undoubtedly reflects the ~250 million years [6] separating Nematoceran from Cyclorrhaphan Diptera. In this study, we were interested in comparing mosquito histone H1 proteins to the histone H1-like tails of mosquito RPS6.
Figure 4B shows a neighbor-joining analysis in which we compared protein sequences from the Aedes and Anopheles RPS6 histone H1-like tails, exclusive of the conventional RPS6 protein sequence, with histone H1 proteins from the nematode Caenorhabditis elegans (AAM44399), the closely related flies Chironomus thummi (Q07134) and Chironomus tentans (AAB62239), Drosophila, and the Anopheles gambiae and Anopheles stephensi homologs (Fig. 4A). With the C. elegans sequence designated as the outgroup, the phylogram shows that the RPS6 tails cluster into a distinct group relative to the Dipteran histone H1 proteins. Circled values indicate bootstrap values based on 1000 replicates. When the analysis was repeated with the optimality criterion set to parsimony, we obtained a tree with the same topology, with the 77% value shown in Fig. 4B reduced to 59%, and the 97% value reduced to 94%. The 100% values remained unchanged. In an alignment of mosquito RPS6 tails with the Anopheles H1 histones (Fig. 5), we note that while some degree of identity covers the entire histone H1 protein, the C-terminal half of the H1 histone has a higher proportion of identities to the RPS6 tail, as indicated by the distribution of consensus residues. Within the RPS6 tails, however, the boxed motifs VAKK(D/E)A, KKEVKK, AAPA, KKEAPKRKPE and KG(D/E)ASAAK(E/D) are shared by all four mosquitoes. In contrast, the additional amino acids in the Anopheles RPS6 tails, which are represented by gaps in the Aedes sequences (Fig. 5), did not show regions of homology with Anopheles histone H1. Figure 5 Alignment of mosquito RPS6 tails with mosquito histone H1 proteins. Angam (CAD89874), An. gambiae; Anstep (AY237124), An. stephensi; Aealbo (Q9U762), Ae. albopictus; Aeaegy (Q9U761), Ae. aegypti. The alignment was produced with ClustalX (version 1.83), using default settings. Indicators of consensus residues are shown below the alignment. Boxes in the top four entries indicate identities (aside from D/E substitutions) shared by the mosquito RPS6 tails. Discussion An important rationale for cloning an An. stephensi histone H1 was to compare its sequence to the histone H1-like tails on mosquito RPS6 ribosomal proteins. Our choice of an Anopheles histone H1 was based on the existing database for An. gambiae, the observation that the tail in Anopheles RPS6 is nearly twice as long as that in Aedes RPS6 proteins [4], and evidence that the genus Anopheles is ancestral to Aedes [6]. Because putative homologies to the Drosophila histone H1 protein could be recovered as conceptual translation products from the An. gambiae database, we used these sequences to design primers that would discriminate between an An. stephensi histone H1 gene and the histone H1-like extension in An. stephensi RPS6. Because the Drosophila gene is encoded in a single exon, and the histone message was unlikely to be polyadenylated [14], we used genomic DNA from An. stephensi as the template for our PCR reaction. The gene we recovered had more than 90% identity to XP_314184 in An. gambiae. The proteins differed in length by a single amino acid residue, and showed 92% identity. When we analyzed RPS6 tails and histone H1 genes, we found that the Dipteran histone H1 proteins and the RPS6 tails each fell into distinct groups, suggesting that in present-day mosquitoes these proteins are evolving independently.
Although these data are consistent with the possibility that present-day histone H1 proteins and the histone H1-like tails on mosquito RPS6 protein share a common ancestral gene, the histone tails seem to be evolving independently in the two mosquito genera, and have changed more rapidly than the conventional portion of mosquito RPS6 proteins. Because RPS6 is considered an important functional component of the ribosome, it seems surprising that a histone H1-like tail occurs at the C-terminal end of this particular protein. However, histone H1-like tails have been reported at the N-terminus of Drosophila melanogaster ribosomal proteins L22 and L23a [17]. The An. gambiae homolog of D. melanogaster L23a also contains an N-terminal histone-like extension. The N-terminal tails of Drosophila L22 and L23a were found in an effort to identify proteins that interact with poly(ADP-ribose) polymerase (PARP). In future studies, we plan to explore whether the histone H1-like tail undergoes posttranslational modification, and whether it plays a functional role in ribosome biogenesis, perhaps through the activity of PARP. Experimental procedures Mosquito cells and culture conditions We used the ASE-IV Anopheles stephensi mosquito cell line [18], which was adapted to Eagle's minimal medium supplemented with non-essential amino acids, glutamine and 5% heat-inactivated fetal bovine serum [19]. This formulation is called E-5 medium. Genomic DNA preparation Cells grown as suspended vesicles for 4 to 5 days in twenty 60 mm plates were collected by centrifugation, and the cell pellet was washed twice with phosphate-buffered saline (PBS; [20]). The cell pellet was resuspended in 20 ml lysis buffer (10 mM Tris-HCl, pH 7.5, 10 mM EDTA, 200 μg/ml proteinase K), and SDS was added to a final concentration of 0.5%. The lysate was incubated at 37°C overnight. NaCl was added to a final concentration of 0.4 M, and the DNA was extracted once with 20 ml phenol, twice with an equal volume of phenol:chloroform (1:1), and twice with an equal volume of chloroform. Two volumes of ethanol were added, and the DNA was spooled onto a clean glass rod. The DNA was dried and dissolved in 10 ml of TE (10 mM Tris-HCl, pH 8.0, containing 1 mM EDTA) at 37°C. RNase A was added to a final concentration of 200 μg/ml and incubated at 37°C for 4 hours. DNA was phenol extracted, ethanol precipitated and dissolved in TE as described above. DNA amplification by PCR Genomic DNA (0.4 mg) was digested with HindIII (Promega) at 37°C overnight. Enzyme was removed by phenol:chloroform extraction, and the DNA was recovered by precipitation with ethanol and dissolved in TE. Digested DNA (100 ng) was used as template for the PCR reaction, which contained 1X PCR buffer, 1.5 mM MgCl2, 0.2 mM of each of the four dNTPs, 0.4 μM each of primer F1 (5'-CCG AAG AAG CCG AAG AAG CCC) and R1 (5'-TGC TTT CGG CTT CTT GGC AGC), and 2.5 units of Taq DNA polymerase (Promega, Madison, WI). PCR was performed with an initial denaturation at 94°C for 2 minutes. The next 35 cycles included 94°C denaturation for 45 sec, 55°C annealing for 1 minute, and 72°C extension for 1 minute. The reaction was terminated by a final elongation cycle at 72°C for 2 minutes. The PCR product was recovered from a 0.9% agarose gel, purified using Ultra-Clean 15 (MO Bio Laboratories Inc., Solana Beach, CA) and cloned into the pGEM-T Easy vector (Promega). The 3'-end of the gene was obtained in a similar manner, using primers R2 (Fig. 2) and F1.
Amplifying the 5'-end of the cDNA Total RNA was recovered from ASE-IV cells by guanidine isothiocyanate extraction and cesium chloride centrifugation as described by Davis et al. [21]. The final RNA pellet was dissolved in DEPC-treated water and stored at -70°C. RNA (1 μg) was used with the GeneRacer kit (Invitrogen) to obtain the 5'-end of the mRNA, using primer R1 as the reverse primer. Programs and accession numbers The analysis in Fig. 4A was produced using the Genetics Computer Group (GCG; Madison, WI) program "gap". The tree in Fig. 4B and the alignment in Fig. 5 were produced by an alignment of amino acid residues using the default parameters of Clustal X (version 1.83) [22]. The tree was created in PAUP* [23], with the C. elegans H1 protein designated as the outgroup. The An. stephensi histone H1 sequence has GenBank accession # AY672907. Authors' contributions YZ did the experimental work; AMF helped with experimental design and manuscript preparation. Both authors read and approved the final manuscript.
PMID: 15667661
DOI: 10.1186/1471-2164-6-8
PMCID: PMC545799
A Drosophila protein-interaction map centered on cell-cycle regulators
A Drosophila protein-protein interaction map was constructed using the LexA system, complementing a previous map using the GAL4 system and adding many new interactions.
Background Protein-protein interactions have an essential role in a wide variety of biological processes. A wealth of data has emerged to show that most proteins function within networks of interacting proteins, and that many of these networks have been conserved throughout evolution. Although some of these networks constitute stable multi-protein complexes while others are more dynamic, they are all built from specific binary interactions between individual proteins. Maps depicting the possible binary interactions among proteins can therefore provide clues not only about the functions of individual proteins but also about the structure and function of entire protein networks and biological systems. One of the most powerful technologies used in recent years for mapping binary protein interactions is the yeast two-hybrid system [ 1 ]. In a yeast two-hybrid assay, the two proteins to be tested for interaction are expressed with amino-terminal fusion moieties in the yeast Saccharomyces cerevisiae . One protein is fused to a DNA-binding domain (BD) and the other is fused to a transcription activation domain (AD). An interaction between the two proteins results in activation of reporter genes that have upstream binding sites for the BD. To map interactions among large sets of proteins, the BD and AD expression vectors are placed initially into different haploid yeast strains of opposite mating types. Pairs of BD and AD fused proteins can then be tested for interaction by mating the appropriate pair of yeast strains and assaying reporter activity in the resulting diploid cells [ 2 ]. Large arrays of AD and BD strains representing, for example, most of the proteins encoded by a genome, have been constructed and used to systematically detect binary interactions [ 3 - 6 ]. Most large-scale screens have used such arrays in a library-screening approach in which the BD strains are individually mated with a library containing all of the AD strains pooled together. After plating the diploids from each mating onto medium that selects for expression of the reporters, the specific interacting AD-fused proteins are determined by obtaining a sequence tag from the AD vector in each colony. High-throughput two-hybrid screens have been used to map interactions among proteins from bacteria, viruses, yeast, and most recently, Caenorhabditis elegans and Drosophila melanogaster [ 4 - 10 ]. Analyses of the interaction maps generated from these screens have shown that they are useful for predicting protein function and for elaborating biological pathways, but the analyses have also revealed several shortcomings in the data [ 11 - 13 ]. One problem is that the interaction maps include many false positives - interactions that do not occur in vivo . Unfortunately, this is a common feature of all high-throughput methods for generating interaction data, including affinity purification of protein complexes and computational methods to predict protein interactions [ 11 - 14 ]. A solution to this problem has been suggested by several studies that have shown that the interactions detected by two or more different high-throughput methods are significantly enriched for true positives relative to those detected by only one approach [ 11 - 13 ]. Thus it has become clear that the most useful protein-interaction maps will be those derived from combinations of cross-validating datasets. A second shortcoming of the large-scale screens has been the high rate of false negatives, or missed interactions. 
This is evident from comparing the high-throughput data with reference data collected from published low-throughput studies. Such comparisons with two-hybrid maps from yeast [13] and C. elegans [5], for example, have shown that the high-throughput data rarely cover more than 13% of the reference data, implying that only about 13% of all interactions are being detected. The finding that different large datasets show very little overlap, despite having similar rates of true positives, supports the conclusion that high-throughput screens are far from saturating [10,12]. For example, three separate screening strategies were used to detect hundreds of interactions among the approximately 6,000 yeast proteins, and yet only six interactions were found in all three screens [10]. These results suggest that many more interactions might be detected simply by performing additional screening, or by applying different screening strategies to the same proteins. In addition, anecdotal evidence has suggested that the use of two-hybrid systems based on different fusion moieties may broaden the types of protein interactions that can be detected. In one study, for example, screens performed using the same proteins fused to either the LexA BD or the Gal4 BD produced only partially overlapping results, and each system detected biologically significant interactions missed by the other [15]. Thus, the application of different two-hybrid systems and different screening strategies to a proteome would be expected to provide more comprehensive datasets than would any single screen. We set out to map interactions among the approximately 14,000 predicted Drosophila proteins by using two different yeast two-hybrid systems (LexA- and Gal4-based) and different screening strategies. Results from the screens using the Gal4 system have already been published [6]. In that study, Giot et al. successfully amplified 12,278 Drosophila open reading frames (ORFs) and subcloned a majority of them into the Gal4 BD and Gal4 AD expression vectors by recombination in yeast. They screened the arrays using a library-screening approach and detected 20,405 interactions involving 7,048 proteins. To extend these results we subcloned the same amplified Drosophila ORFs into vectors for use in the LexA-based two-hybrid system, and constructed arrays of BD and AD yeast strains for high-throughput screening. Our expectation was that maps generated with these arrays would include interactions missed in previous screens, and would also partially overlap the Gal4 map, providing opportunities for cross-validation. Initially, we screened for interactions involving proteins that are primarily known or suspected to be cell-cycle regulators. We chose cell-cycle proteins as a starting point for our interaction map because cell-cycle regulatory systems are known to be highly conserved in eukaryotes, and because previous results have suggested that the cell-cycle regulatory network is centrally located within larger cellular networks [16]. This is most evident from examination of the large interaction maps that have been generated for yeast proteins using yeast two-hybrid and other methods. Within these maps there are more interactions between proteins that are annotated with the same function (for example, 'Pol II transcription', 'cell polarity', 'cell-cycle control') than between proteins with different functions, as expected for a map depicting actual functional connections between proteins.
Interestingly, however, certain functional groups have more inter-function interactions than others. Proteins annotated as 'cell-cycle control', in particular, were frequently connected to proteins from a wide range of other functional groups, suggesting that the process of cell-cycle control is integrated with many other cellular processes [ 16 ]. Thus, we set out to further elaborate the cell-cycle regulatory network by identifying new proteins that may belong to it, and new connections to other cellular networks. Results Construction of an extensive protein interaction map centered on cell-cycle regulators by high-throughput two-hybrid screening We used the same set of 12,278 amplified Drosophila full-length ORFs from the Gal4 project [ 6 ] to generate yeast arrays for use in a modified LexA-based two-hybrid system (see Materials and methods). In the LexA system the BD is LexA and the AD is B42, an 89-amino-acid domain from Escherichia coli that fortuitously activates transcription in yeast [ 17 ]. In the version that we used, both fusion moieties are expressed from promoters that are repressed in glucose so that their expression can be turned off during construction and amplification of the arrays [ 18 ]. Previous results have shown that this prevents the loss of genes encoding proteins that are toxic to yeast, and that interactions involving such proteins can be detected by inducing their expression only on the final indicator media [ 18 , 19 ]. The ORFs were subcloned into the two vectors by recombination in yeast as previously described [ 3 , 6 ], and the yeast transformants were arrayed in a 96-well format. The resulting BD and AD arrays each have approximately 12,000 yeast strains, over 85% of which have a full-length Drosophila ORF insert (see Materials and methods). For all strains involved in an interaction reported here, the plasmid was isolated and the insert was sequenced to verify the identity of the ORF. As a first step toward generating a LexA-based protein-interaction map, we chose 152 BD-fused proteins that were known regulators of the cell cycle or DNA damage repair, or that were homologous to such regulators (see Additional data file 2). We used all 152 proteins as 'baits' to screen the 12,000-member AD array. We used a pooled mating approach [ 19 ] in which individual BD bait strains are first mated with pools of 96 AD strains. For pools that are positive with a particular BD, the corresponding 96 AD strains are then mated with that BD in an array format to identify the particular interacting AD protein(s). We had previously shown that this approach is very sensitive and allows detection of interactions involving proteins that are toxic to yeast or BD fused proteins that activate transcription on their own [ 19 ]. Moreover, the final assay in this approach is a highly reproducible one-on-one assay between an AD and a BD strain, in which the reporter gene activities are recorded to provide a semi-quantitative measure of the interaction. Using this approach we detected 1,641 reproducible interactions involving 93 of the bait proteins. We also performed library screening [ 6 ] with a subset of the 152 baits that did not activate the reporter genes on their own. This resulted in the detection of 173 additional interactions with 57 bait proteins. Thirty-nine interactions were found by both approaches, and these involved 21 of the 44 BD genes active in both approaches. 
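The overlap bookkeeping used throughout this section reduces to set operations on interaction pairs. The short Python sketch below illustrates the idea; the gene names are hypothetical placeholders, and treating each interaction as an unordered bait-prey pair is an assumption made for illustration, not a convention taken from this study (the actual interaction lists are in Additional data file 3).

```python
# Minimal sketch: overlap between two screening approaches, treating each
# interaction as an unordered bait-prey pair. All gene names below are
# hypothetical placeholders; the real interaction lists are in Additional
# data file 3.

def normalize(pairs):
    """Convert (bait, prey) tuples into orientation-independent pairs."""
    return {frozenset(pair) for pair in pairs}

pooled_mating = normalize([("Cdc2", "CycJ"), ("SkpA", "Fbx1"), ("Cdc2", "CycE")])
library_screen = normalize([("Cdc2", "CycJ"), ("SkpA", "Fbx2")])

shared = pooled_mating & library_screen
print(f"pooled mating: {len(pooled_mating)}, library screen: {len(library_screen)}, "
      f"found by both: {len(shared)}")
```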
There were 95 BD genes for which interaction data was obtained by the pooled mating approach, and 59 active BD genes in the library screening approach. The average number of interactions was 18 per BD gene in the pooled mating data, while the library screening data had an average of only four interactions per active BD gene. The average level of reporter activation for the 39 interactions that were detected in both screens was significantly higher than the average of all interactions (see Additional data file 3), suggesting that the weaker interactions are more likely to be missed by one screen or another, even though they are reproducible once detected. Altogether we detected interactions with 106 of the 152 baits, which resulted in a protein-interaction map with 1,814 unique interactions among the products of 488 genes (see Additional data file 3). The map includes interactions that were already known or that could be predicted from known orthologous or paralogous interactions (see below). The map also includes a large number of novel interactions, including many involving functionally unclassified proteins. Evaluation of the LexA-based protein interaction map As is common with data derived from high-throughput screens, the number of novel interactions detected was large, making direct in vivo experimental verification impracticable. Thus, we set out to assess the quality of the data by examining the topology of the interaction map, by looking for enrichment of genes with certain functions, and by comparing the LexA map with other datasets. First we examined the topology of the interaction map, because recent studies have shown that cellular protein networks have certain topological features that correlate with biological function [ 20 ]. In our interaction map, the number of interactions per protein ( k ) varies over a broad range (from 1 to 84) and the distribution of proteins with k interactions follows a power law, similar to previously described protein networks [ 6 , 21 ]. Most (98%) of the proteins in the map are linked together into a single network component by direct or indirect interactions (Figure 1a ). The network has a small-world topology [ 22 ], characterized by a relatively short average distance between any two proteins (Table 1 ) and highly interconnected clusters of proteins. Removal of the most highly connected proteins from the map does not significantly fragment the network, indicating that the interconnectivity is not simply due to the most promiscuously interacting proteins (Figure 1b ). In other interaction maps generated with randomly selected baits, proteins with related functions tend to be clustered into regions that are more highly interconnected than is typical for the map as a whole [ 5 , 6 , 16 ]. Moreover, interactions within more highly interconnected regions of a protein-interaction map tend to be enriched for true positives [ 6 , 23 - 25 ]. Thus, the overall topology of the interaction map that we generated is consistent with that of other protein networks, and in particular, with the expectation for a network enriched for functionally related proteins. Next we assessed the list of proteins in the interaction map to look for enrichment of proteins or pairs of proteins with particular functions. An interaction map with a high rate of biologically relevant interactions should have a high frequency of interactions between pairs of proteins previously thought to be involved in the same biological process. 
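The enrichment statistics reported below can be framed as a hypergeometric test. The following sketch is illustrative only: all counts are hypothetical placeholders rather than the values used in this study (those appear in Table 2 and Additional data file 4), and the use of scipy is our choice, not part of the original analysis.

```python
# Minimal sketch of a functional-enrichment test. All counts are hypothetical
# placeholders (the study's values appear in Table 2 and Additional data
# file 4). P is the hypergeometric probability of observing at least k
# annotated proteins in a sample of n drawn from N proteins, K of which
# carry the annotation.
from scipy.stats import hypergeom

N = 14000   # approximate number of predicted Drosophila proteins
K = 150     # proteins genome-wide annotated with the GO term (hypothetical)
n = 153     # annotated proteins in the interaction map
k = 25      # map proteins annotated with this term (hypothetical)

fold_enrichment = (k / n) / (K / N)
p_value = hypergeom.sf(k - 1, N, K, n)   # P(X >= k)
print(f"{fold_enrichment:.1f}-fold enrichment, P = {p_value:.2g}")
```

The same calculation applies to pairs of interacting proteins, with N taken as the number of possible protein pairs rather than proteins.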
Among the 488 proteins in the map, 153 have been annotated with a putative biological function using the Gene Ontology (GO) classification system [ 26 , 27 ]. Because we used a set of BD fusions enriched for cell-cycle and DNA metabolic functions, we expected to see similar enrichments in the list of interacting AD fusions, as well as more interactions between genes with these functions. Both of these expectations are borne out. In the list of BD genes, both cell-cycle and DNA metabolism functions are enriched approximately 17-fold compared to similarly sized lists of randomly selected proteins ( P < 0.00002). In the AD list, these two functions are enriched four- and threefold, respectively (Table 2 ). The frequency with which interactions occur between pairs of proteins annotated for DNA metabolism is five times higher than expected by chance; similarly, cell-cycle genes interact with each other six times more frequently than expected ( P < 0.001). Thus, the enrichment for proteins and pairs of interacting proteins annotated with the same function suggests that many of the novel interactions will be biologically significant. It also suggests that the map will be useful for predicting the functions of novel proteins on the basis of their connections with proteins having known functions, as described for other interaction maps [ 16 , 28 ]. Comparison of the Drosophila protein-interaction maps Direct comparison of the LexA cell-cycle map with the Gal4 data revealed that only 28 interactions were found in common between the two screens (Table 1 ). Moreover, more than a quarter of the proteins in the LexA map were absent from the Gal4 proteome-wide map. Among the 106 baits that had interactions in the LexA map, for example, 60 failed to yield interactions in the Gal4 proteome-wide map, even though all but six of these were successfully cloned in the Gal4 arrays [ 6 ] (see Additional data file 6). Similarly, 46 of the 152 LexA baits that we used failed to yield interactions from our work, yet 14 of these had interactions in the Gal4 map. Thus, the lack of overlap between the two datasets is partly due to their unique abilities to detect interactions with specific proteins. Nevertheless, for the 347 proteins common to both maps, the two screens combined to detect 1,428 interactions, and yet only 28 of these were in both datasets. This indicates that the two screens detected mostly unique interactions even among the same set of proteins. Comparison with a set of approximately 2,000 interactions recently generated in an independent two-hybrid screen [ 29 ] showed only three interactions in common with our data, in part because only eight of the same bait proteins were used successfully in both screens. Although only 28 interactions were found in both the Gal4 map and our map, this rate of overlap is significantly greater than expected by chance ( P < 10^-6; Table 1 ). To show this, we generated 10^6 random networks having the same BD proteins, total interactions and topology as the LexA map, and found that none of these random maps shared more than two interactions in common with the Gal4 map. To assess the relative quality of the 28 common interactions we used the confidence scores assigned to them by Giot et al . [ 6 ]. They used a statistical model to assign confidence scores (from 0 to 1), such that interactions with higher scores are more likely to be biologically relevant than those with lower scores. 
The average confidence score of the 28 interactions in common with our LexA data (0.63) was higher than the average for all 20,439 Gal4 interactions (0.34) or for random samplings of 28 Gal4 interactions (0.32; P < 0.0001), indicating that the overlap of the two datasets is significantly enriched for biologically relevant interactions. Thus, the detection of interactions by both systems could be used as an additional measure of reliability. The surprisingly small number of common interactions, however, severely limits the opportunities for cross-validation, and suggests that both datasets are far from comprehensive. An alternative explanation for the small proportion of common interactions is the possible presence of a large number of false positives in one or both datasets. The estimation of false-positive rates is challenging, in part because it is difficult to prove that an interaction does not occur under all in vivo conditions, and also because the number of potential false positives is enormous. Nevertheless, the relative rates of false positives between two datasets can be inferred by comparing their estimated rates of true positives [ 11 - 13 ]. To compare true-positive rates between the LexA and Gal4 datasets, we looked for their overlap with several datasets that are thought to be enriched for biologically relevant interactions (Table 3 ). These include a reference set of published interactions involving the proteins that were used as baits in both the LexA and Gal4 screens; interactions between the Drosophila orthologs of interacting yeast or worm proteins (orthologous interactions or 'interlogs' [ 30 , 31 ]); and interactions between proteins encoded by genes known to interact genetically, which are more likely to physically interact than random pairs of proteins [ 32 , 33 ]. As expected, the overlap with these datasets is enriched for higher confidence interactions. The average confidence scores for the Gal4 interactions in common with the yeast interlogs, worm interlogs and Drosophila genetic interactions are 0.63, 0.68 and 0.80, respectively, substantially higher than the average confidence score for all Gal4 interactions (0.34). This supports the notion that these datasets are enriched for true-positive interactions relative to randomly selected pairs of proteins. We found that the fractions of LexA- and Gal4-derived interactions that overlap with these datasets are similar (Table 3 ). For example, 25 (1.4%) of the 1,814 LexA interactions and 294 (1.4%) of the 20,439 Gal4 interactions have yeast interlogs. This suggests that the LexA and Gal4 two-hybrid datasets have similar percentages of true positives, and thus similar rates of false positives. They also appear to have similar rates of false negatives, which may be over 80% if the calculation is based on the lack of overlap with published interactions (Table 3 ). This supports the explanation that the main reason for the lack of overlap between the datasets is that neither is a comprehensive representation of the interactome, and suggests that a large number of interactions remain to be detected. Biologically informative interactions Further inspection of the LexA cell-cycle interaction map revealed biologically informative interactions and additional insights for interpreting high-throughput two-hybrid data. For example, we expected to observe interactions between cyclins and cyclin-dependent kinases (Cdks), which have been shown to interact by a number of assays. 
Our interaction map includes six proteins having greater than 40% sequence identity to Cdk1 (also known as Cdc2). A map of all the interactions involving these proteins reveals that they are multiply connected with several cyclins (Figure 2 ). For example, all of the known cyclins in the map interacted with at least two of the Cdk family members. The map includes 20 interactions between five Cdks and six known cyclins plus one uncharacterized protein, CG14939, which has sequence similarity to cyclins. Only one of these interactions (Cdc2c-CycJ) is known to occur in vivo [ 34 ], and several others are thought not to occur in vivo (for example Cdc2-CycE [ 35 ]). Similarly, the Gal4 interaction map has three Cdk-cyclin interactions [ 6 ], including one known to occur in vivo (Cdk4-CycD) and two that do not occur in vivo [ 35 ]. Thus, while some of these interactions are false positives in the strictest sense, the data is informative nevertheless, as it clearly demonstrates a high incidence of paralogous interactions - where pairs of interacting proteins each have paralogs, some combinations of which also interact in vivo . Such patterns are consistent with potential interactions between members of different protein families, even though they do not reveal the precise pair of proteins that interact in vivo . This class of informative false positives may be common in two-hybrid data where the interaction is assayed out of biological context. Experimentally reproducible interactions, whether or not they occur in vivo , can be used to discover interacting protein motifs or domains [ 6 , 36 ]. They can also suggest functional relationships between protein families and guide experiments to establish the actual in vivo interactions and functions of specific pairs of interacting proteins. The Cdk subgraph also illustrates that proteins with similar interaction profiles may have related functions or structural features. To look for other groups of proteins having similar interaction profiles we used a hierarchical clustering algorithm to cluster BD and AD fusion proteins according to their interactions (see Materials and methods). The resulting clustergram reveals several groups of proteins with similar interaction profiles (Figure 3 ). One of the most prominent clusters (Figure 3 , circled in blue) includes three related proteins involved in ubiquitin-mediated proteolysis, SkpA, SkpB and SkpC. Skp proteins are known to interact with F-box proteins, which act as adaptors between ubiquitin ligases, known as SCF (Skp-Cullin-F-box) complexes, and proteins to be targeted for destruction by ubiquitin-mediated proteolysis [ 37 ]. A map of the interactions involving the Skp proteins shows a group of 21 AD proteins that each interact with two or three of the Skp proteins (Figure 4 ). This group is highly enriched for F-box proteins, including 13 of the 15 F-box proteins in the AD list; the other two F-box proteins interacted with only one Skp (Figure 4 ). Several of the interactions in common with the Gal4 data are also in the Skp cluster, and 12 out of 16 of these involve proteins that interact with two or more Skp proteins. Thus, the Skp cluster provides another example of how proteins with similar interaction profiles may be structurally or functionally related, and how such clusters may be enriched for biologically relevant interactions. 
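As an illustration of this kind of profile-based grouping, the sketch below clusters a small hypothetical bait-by-prey incidence matrix by average-linkage hierarchical clustering on Jaccard distances. The matrix entries and the distance metric are assumptions made for the example; the clustergram in Figure 3 was generated with GeneSpring on the full dataset (see Materials and methods).

```python
# Minimal sketch of clustering proteins by interaction profile. The
# bait-by-prey incidence matrix below is hypothetical, and Jaccard distance
# with average linkage is our choice for illustration; Figure 3 was produced
# with GeneSpring on the full dataset.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

baits = ["SkpA", "SkpB", "SkpC", "Cdc2", "Cdc2c"]
preys = ["Fbx1", "Fbx2", "Fbx3", "CycE", "CycJ", "CycB"]   # column labels
profiles = np.array([
    [1, 1, 1, 0, 0, 0],   # SkpA interacts with F-box-like preys
    [1, 1, 0, 0, 0, 0],   # SkpB
    [1, 0, 1, 0, 0, 0],   # SkpC
    [0, 0, 0, 1, 1, 1],   # Cdc2 interacts with cyclin-like preys
    [0, 0, 0, 1, 1, 0],   # Cdc2c
])

# Jaccard distance compares which preys are shared, ignoring joint absences
tree = linkage(pdist(profiles.astype(bool), metric="jaccard"), method="average")
groups = fcluster(tree, t=2, criterion="maxclust")
for bait, group in zip(baits, groups):
    print(bait, "-> cluster", group)
```

With these toy profiles the Skp proteins and the Cdk-like proteins fall into separate clusters, mirroring the pattern described above.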
These observations are consistent with previous results showing that protein pairs often have related functions if they have a significantly larger number of common interacting partners than expected by chance [ 24 , 38 ]. These groups of proteins are likely to be part of more extensive functional clusters that could be identified by more sophisticated topological analyses (for example [ 39 - 44 ]). Maps showing several other major clusters derived from the clustergram are shown in Additional data file 7. The interaction profile data is statistically confirmed by domain-pairing data, which shows that certain pairs of domains are found within interacting pairs of proteins more frequently than expected by chance (Table 4 ). These include the Skp domain and F-box pair, the protein kinase and cyclin domains, and several less obvious pairings. For example, the cyclin and kinase domains are observed to be associated with various zinc-finger and homeodomain proteins, and the kinase domain with a number of nucleic-acid metabolism domains (Table 4 ). A similar analysis of the Gal4 data, performed by Giot et al . [ 6 ], revealed a number of significant domain pairings, including the Skp/F-box and the kinase/cyclin pairs and several others found in the LexA dataset. Therefore, although the number of proteins in the LexA dataset is relatively small, domain associations are observed in the data, demonstrating that a high-density interaction map, with a high average number of interactions per protein, provides insight into patterns of domain interactions that is as valuable as that obtained from a proteome-wide map. Discussion Proteome-wide maps depicting the binary interactions among proteins provide starting points for understanding protein function and the structure and function of protein complexes, and for mapping biological pathways and regulatory networks. High-throughput approaches have begun to generate large protein-interaction maps that have proved useful for functional studies, but are also often plagued by high rates of false positives and false negatives. Several analyses have shown that the set of interactions detected by more than one high-throughput approach is enriched for biologically relevant interactions, suggesting that the application of multiple screens to the same set of proteins results in higher-confidence, cross-validated interactions [ 11 - 13 ]. Such cross-validation has been limited, however, by the lack of overlap among high-throughput datasets. Here we describe initial efforts to complement a recently published Drosophila protein interaction map that was generated using the Gal4 yeast two-hybrid system [ 6 ]. We constructed yeast arrays for use in the LexA-based two-hybrid system by subcloning approximately 12,000 Drosophila ORFs, using the same PCR amplification products used in the Gal4 project, into the LexA two-hybrid vectors. Initially, we used a novel pooled mating approach [ 19 ] to screen one of the 12,000-member arrays with 152 bait proteins related to cell-cycle regulators. By using both a different screening approach and a different two-hybrid system, we expected to increase coverage and to validate some of the interactions detected by the Gal4 screens. The level of coverage for a high-throughput screen can be estimated by determining the percentage of a reference dataset that was detected; reference sets have been derived from published low-throughput experiments, for example, which are considered to have relatively low false-positive rates. 
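The coverage calculation, and the dataset-size estimate developed below in the Discussion, both reduce to simple arithmetic on counts quoted in the text. The sketch below uses those published numbers; framing the estimate as a capture-recapture (Lincoln-Petersen) calculation, under the assumption that one dataset behaves as a homogeneous random sample of all detectable interactions, follows the reasoning given in the Discussion.

```python
# Minimal sketch of the coverage and dataset-size arithmetic, using numbers
# quoted in the text. The estimator is a capture-recapture (Lincoln-Petersen)
# calculation, under the assumption that one dataset is a homogeneous random
# sample of all detectable interactions.

lexa_total, gal4_total = 1814, 20439     # interactions detected by each screen
true_positive_rate = 0.11                # minimal estimate from Giot et al.
overlap = 28                             # interactions found by both screens

lexa_tp = true_positive_rate * lexa_total    # ~200 true interactions
gal4_tp = true_positive_rate * gal4_total    # ~2,248 true interactions

# If all 28 shared interactions are true positives, the total number of
# detectable true interactions is approximately (200 x 2,248) / 28:
estimated_total = lexa_tp * gal4_tp / overlap
print(f"estimated detectable true interactions: ~{estimated_total:,.0f}")

# Coverage of a reference set is simply the fraction recovered: a screen
# detecting 17 of 100 reference interactions has 17% coverage.
```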
High-throughput two-hybrid data for yeast and C. elegans proteins were shown to cover only about 10-13% of the corresponding reference datasets [ 5 , 10 , 13 ]. Two factors may contribute to this lack of coverage. First, some interactions cannot be detected using the yeast two-hybrid system, even though they could be detected in low-throughput studies using other methods. Examples include interactions that depend on certain post-translational modifications, that require a free amino terminus or that involve membrane proteins. Second, high-throughput yeast two-hybrid screens often fail to test all possible combinations of interactions; in other words, the screens are not saturating or complete. Although the relative contribution of these two factors is difficult to estimate, results from screens to map interactions among yeast proteins suggest that the major reason for the lack of coverage is that the screens are incomplete. Complete screens would identify all interactions that could possibly be detected by a given method; ideally therefore, two complete screens using the same method would identify all the same interactions. However, the rate of overlap among the different yeast proteome screens is low, even though they used very similar two-hybrid systems. Moreover, the overlap between screens is not significantly greater than the rate at which they overlap any reference set [ 4 , 10 ]. This is true even when only higher-confidence interactions are considered; for example, two large interaction screens of yeast proteins detected 39% and 65% of a higher-confidence dataset, respectively, but only 11% of the reference set was detected by both screens [ 12 ]. These results indicate that the lack of coverage in high-throughput two-hybrid data is largely due to incomplete screening, and that significantly larger datasets than those currently available will be needed before different datasets can be used to cross-validate interactions. The rates of coverage and completeness from our high-throughput two-hybrid screening with Drosophila proteins are consistent with those for the yeast proteins. We used the LexA system to detect 1,814 reproducible interactions to complement the 20,439 interactions previously detected in a proteome-wide screen using the Gal4 system [ 6 ]. The overlap between the LexA and Gal4 screens is less than 2% of each dataset, whereas their overlap with a reference set was 17% and 14%, respectively, and only 2% of the reference set was detected by both screens (Table 3 ). Taken together, these results suggest that, like the yeast interaction data, both Drosophila datasets are far from complete and that many more interactions could be detected by additional two-hybrid screening. The actual number of interactions that might be detected by complete two-hybrid screening might be roughly estimated from the partially overlapping datasets, as was done to estimate the number of genes in the human genome [ 45 , 46 ]. In this approach, the overlap of two subsets, given that one subset is a homogeneous random sample of the whole, is sufficient to estimate the size of the whole. To make such an estimate with high-throughput two-hybrid data, however, it is necessary to first filter out false positives, as they are mostly different for the two datasets, as suggested by the fact that the nonoverlapping data has a lower rate of true positives than the overlapping data. Giot et al . 
estimated that at least 11% of the Gal4 interactions are likely to be biologically relevant, based on the prediction accuracy of their statistical model [ 6 ]. We found by comparison with other datasets that the rates of true positives are not substantially different between the LexA and Gal4 data (Table 3 ). Thus, if we use 11% as the minimal rate of true positives in each dataset, we obtain 200 true interactions from the LexA screen and 2,248 from the Gal4 screens. If we further assume that all of the 28 common interactions are true positives, we can estimate that complete screens should be able to detect around 16,000 true-positive interactions (200 × 2,248/28). If each screening approach has a false-positive rate of 89%, then around 150,000 interactions from each approach would be required in order to create complete, cross-validating datasets, where the overlap would consist of true positives. This estimate is highly sensitive to both the frequency of true positives in the two datasets and the number of positives in the overlap between the datasets; for example, if the true-positive frequency is underestimated by only twofold, the estimated total will be four times larger. False-positive interactions have been classified as technical or biological [ 5 ]. A technical false positive is an artifact of the particular interaction assay, and the two proteins involved do not actually interact under any setting. A biological false positive is one in which the two proteins genuinely and reproducibly interact in a particular assay, but the interaction does not take place in a biological setting; for example, the interacting proteins may never be temporally or spatially co-localized in vivo . Using the approach described here, the interactions are shown to be reproducible during the one-on-one two-hybrid assays that are used to record reporter activity scores, suggesting that we have minimized the frequency of technical false positives. We suggest that the biological false positives might be further classified as informative and non-informative. Informative false positives are interactions that do not occur in vivo , but that nevertheless have some biological basis for being detected and are potentially useful for guiding future experiments. In our data, for example, the Cdk and Skp proteins each interact with a different group of targets, which in turn interact with multiple Cdk or Skp proteins. From this data alone, we would accurately predict that Cdk proteins interact with cyclins, and that Skp proteins interact with F-box proteins, even though only some of the specific combinations are true in vivo partners. Similarly, from analysis of domain pairs in the LexA dataset, other patterns are evident, such as homeobox domains being associated with both protein kinase and cyclin domains (Table 4 ). Additional information or experimentation would be needed to determine which of the specific paralogous interactions function in vivo . Co-affinity purification, for example, might be used to directly test all possible pairs of paralogous interactions implied by the two-hybrid map. Alternatively, the genes encoding each possible pair of proteins could be examined for correlated expression patterns, for example, to suggest more likely pairs or to exclude pairs that are not coexpressed. Conclusions We used high-throughput screening to detect 1,814 protein interactions involving many proteins with cell-cycle and related functions. 
The resulting interaction map is similar in quality to other large interaction maps and is dominated by previously unidentified interactions. The majority of the proteins in the map have not been assigned a biological function, and the map provides a first clue about the potential functions of these proteins by connecting them with characterized proteins or pathways. High-throughput interaction data such as this should allow researchers to quickly identify possible patterns of protein interactions for use in selecting additional functional assays to perform on their gene(s) of interest. This narrows down the number of potential assays necessary to establish function for a given gene from hundreds to just a handful; conversely, when studying a specific function, such as the cell cycle, interaction data can identify which few genes, selected from thousands, may have a role in the process. Just as the sequencing of various genomes has not allowed unambiguous ascription of biological function to the majority of the identified genes, mapping of an interactome by high-throughput methods does not allow final assignment of interaction capacity or of higher functionality to a protein. This requires additional experiments, guided by these and other high-throughput data. The results presented here show that extending and combining different two-hybrid datasets will allow further refinement of the selection of functional analyses to be performed for each protein of the proteome. Materials and methods Plasmids and strains Yeast two-hybrid vectors used are related to those originally described for the LexA system [ 17 ]. The vector for expressing amino-terminal LexA DNA-binding domain (BD) fusions was pHZ5-NRT, which expresses fusions from the regulated MAL62 promoter [ 18 ]. The vector for expressing amino-terminal activation domain (AD) fusions from the GAL1 promoter was pJZ4-NRT, which was constructed from pJG4-5 [ 17 ] by replacing the ADH1 terminator with the CYC1 terminator and inserting the 5' and 3' recombination tags (5RT1 and 3RT1 [ 18 ]) into the cloning site downstream from the AD coding region. Construction details can be found in Additional data file 1. Maps and sequences are available at [ 47 ]. Yeast ( S. cerevisiae ) strains RFY231 (MATα trp1::hisG his3 ura3-1 leu2::3Lexop-LEU2) and RFY206 (MATa his3Δ200 leu2-3 lys2Δ201 ura3-52 trp1Δ::hisG) were previously described [ 2 , 48 ]. RFY206 containing the lacZ reporter plasmid pSH18-34 [ 49 ] is referred to here as strain Y309. Yeast two-hybrid arrays Two yeast arrays were constructed by homologous recombination (gap repair) in yeast [ 3 ]. We began with the 13,393 unique PCR products, which were generated using gene-specific primer pairs corresponding to the predicted Drosophila ORFs, from ATG to stop codon, described in Giot et al . [ 6 ]. For the AD array, we co-transformed RFY231 with each PCR product along with pJZ4-NRT that had been linearized with Eco RI and Bam HI, and selected recombinants on glucose minimal medium lacking tryptophan. Five colonies from each transformation were picked and combined into a well of a 96-well plate. For the BD array, we co-transformed Y309 with each PCR product along with pHZ5-NRT that had been linearized with Eco RI and Bam HI, and selected recombinants on glucose minimal medium lacking histidine and uracil. BD clones used in the screens and AD clones showing positive interactions were sequenced to verify the ORF identities. See Additional data files for details. 
Two-hybrid screening The BD fused proteins used as baits in our screens are listed in Additional data file 2. The AD array was screened using a two-phase pooled mating approach [ 19 ]. First, pools containing the 96 AD strains from each plate in the AD array were constructed by scraping strains grown on agar plates, dispersing in 15% glycerol, and aliquoting into a 96-well format; the 142 pools, representing approximately 13,000 AD strains, were arrayed on two 96-well plates. In the first phase, individual BD strains were mated with the 142 AD pools by dispensing 5-μl volumes of each culture onto YPD plates using a Biomek FX robot (Beckman Coulter). After 2 days of growth at 30°C, yeast were replicated to medium selective for diploids (which have the AD, BD and lacZ reporter plasmids) and containing both galactose and maltose to induce expression of the AD and BD fusions, respectively. The plates also lacked leucine to assay for expression of the LEU2 reporter, and contained X-Gal (40 μg/ml) to assay for expression of lacZ . These plates were photographed after 5 days at 30°C and interactions were scored as described [ 19 ]. In the second phase of screening, single BD strains were mated with the appropriate panel(s) of 93 AD strains corresponding to the pools that were positive in the first phase. The LEU2 and lacZ reporters were assayed on separate plates: growth on plates lacking leucine was scored from 0 (no growth) to 3 (heavy growth); the extent of blue on the X-Gal plates was scored from 0 (white) to 5 (dark blue). After re-testing interactions (see Additional data files) the AD plasmids from interacting AD strains were rescued in bacteria and clones were sequenced to verify insert identity. Cloned plasmids were then reintroduced into RFY231 and used in all possible combinations of one-on-one mating operations with the appropriate BD strains to repeat the interaction assay a third time. The same set of BDs was also used to screen a pool of all approximately 13,000 AD strains using a library screening approach as described in the Additional data files. All interaction data from both screens are listed in Additional data file 3 and are also available at [ 47 , 50 ] and at IntAct [ 51 ] in the Proteomics Standards Initiative - Molecular Interactions (PSI-MI) standard exchange format [ 52 ]. Data analysis The interaction profiles for the BD fused proteins and AD fused proteins were independently clustered and are plotted in Figure 3 using Genespring software (Silicon Genetics). Protein-interaction map graphs in Figures 1 , 2 and 4 and Additional data file 7 were drawn with a program developed by Lana Pacifico (L. Pacifico, F. Fotouhi and R.L.F., unpublished work) available at [ 47 ]. To determine Drosophila interlogs of yeast or worm interactions, a list of Drosophila proteins belonging to eukaryotic clusters of orthologous groups (KOGs) [ 53 ] was obtained from the National Center for Biotechnology Information (NCBI) [ 54 ]. Each fly protein was assigned one or more KOG IDs, based on the cluster(s) to which it belongs. A list of interactions among yeast ( S. cerevisiae ) proteins, derived mostly from high-throughput yeast two-hybrid screens [ 4 , 55 ] and from the determination of proteins in precipitated complexes [ 56 , 57 ], was obtained from the Comprehensive Yeast Genome Database [ 58 , 59 ]. For the interactions determined by precipitation of complexes, two lists were generated. 
One list included the binary interactions between the bait protein and every protein that was co-precipitated, but not between the precipitated proteins (hub and spoke model). The second list included all possible binary interactions among the members of a complex (matrix model). The lists were each used to generate a list of interactions between KOG pairs, which in turn was used to generate a list of potential interactions between pairs of Drosophila proteins belonging to those KOGs. Similarly, Drosophila -worm ( C. elegans ) interlogs were determined using the list of interactions between worm proteins determined by high-throughput yeast two-hybrid screening [ 5 ]. Drosophila genetic interactions were obtained from Flybase [ 27 , 60 ]. To compare the two-hybrid data with other datasets we generated random interaction maps having the same BD proteins, total interactions and topological properties as the LexA or Gal4 data. The AD clones in each interaction list were indexed; an equal number of genes was then randomly drawn from the Drosophila Release 3.1 genome [ 61 ] and used to replace the original AD clones at the same indexed positions. Fifty thousand such random networks were generated for each two-hybrid dataset, and then compared with the yeast interlogs, worm interlogs, and genetic interactions to determine the amount of overlap expected by chance. P values were calculated as the number of random networks in which at least the observed number of overlapping interactions was detected, divided by 50,000. In most cases P < 0.0002 (see Additional data file 6). Additional methods are in Additional data file 1. To compare the number of common interactions between the LexA and Gal4 maps with the number expected by chance, we generated 10^6 random LexA maps and found that they never contained more than two interactions in common with the Gal4 map; thus, the P -value for the 28 common interactions is less than 10^-6. Additional data files The following additional data are available with the online version of this paper. Additional data file 1 contains Supplementary materials and methods; Additional data file 2 contains Supplementary Table 1, BD 'baits' used in the LexA screens; Additional data file 3 contains Supplementary Table 2, Interactions detected in the LexA screens; Additional data file 4 contains Supplementary Table 3, Enrichment of Gene Ontology classes, complete list; Additional data file 5 contains Supplementary Table 4, Enrichment of Domain pairs, complete list; Additional data file 6 contains Supplementary Table 5, P -values for overlap among datasets, and Supplementary Table 6, Interactions from the LexA and Gal4 screens that successfully used the same BD bait proteins; Additional data file 7 is a PDF containing Supplementary Figure 1, Interaction maps of other clusters; Additional data file 8 is a PDF containing Supplementary Figure 2, Proteins clustered by interaction profile; Additional data file 9 contains the legends to Supplementary Figures 1 and 2. 
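Returning to the randomization procedure described under Data analysis: it can be summarized in a few lines of code. The sketch below is a simplified reimplementation rather than the original program; gene identifiers are placeholders, interactions are treated as unordered pairs for the overlap count, and the genome list is a stand-in for Drosophila Release 3.1.

```python
# Simplified reimplementation of the randomization described above: AD
# partners are replaced by genes drawn at random from the genome, preserving
# the BD proteins, the number of interactions and the map topology, and the
# overlap with a second dataset is recomputed. All identifiers are
# placeholders.
import random

def overlap_p_value(observed, other, genome, iterations=50000):
    other_set = {frozenset(p) for p in other}
    observed_overlap = len({frozenset(p) for p in observed} & other_set)
    ad_clones = sorted({ad for _, ad in observed})
    hits = 0
    for _ in range(iterations):
        # substitute a random gene for each AD clone, at the same positions
        mapping = dict(zip(ad_clones, random.sample(genome, len(ad_clones))))
        shuffled = {frozenset((bd, mapping[ad])) for bd, ad in observed}
        if len(shuffled & other_set) >= observed_overlap:
            hits += 1
    return hits / iterations   # empirical P value

genome = [f"CG{i}" for i in range(1, 14001)]   # stand-in gene list
# usage (hypothetical pair lists): overlap_p_value(lexa_pairs, gal4_pairs, genome)
```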
DNA Display I. Sequence-Encoded Routing of DNA Populations
Recently reported technologies for DNA-directed organic synthesis and for DNA computing rely on routing DNA populations through complex networks. The reduction of these ideas to practice has been limited by a lack of practical experimental tools. Here we describe a modular design for DNA routing genes, and routing machinery made from oligonucleotides and commercially available chromatography resins. The routing machinery partitions nanomole quantities of DNA into physically distinct subpools based on sequence. Partitioning steps can be iterated indefinitely, with worst-case yields of 85% per step. These techniques facilitate DNA-programmed chemical synthesis, and thus enable a materials biology that could revolutionize drug discovery.
Introduction Subsequent to the discovery of DNA as the information-carrying blueprint for biopolymer assembly, the possibility has existed for its utilization to program molecular processes devised by man. DNA is an attractive material for several reasons. It provides very high information density: a micromolar solution of thousand-base DNA fragments can store 10^6 bits per femtoliter. The information is amplifiable, so that a single molecule can be copied to produce a measurable quantity of nucleic acid. A large collection of enzymatic tools (e.g., polymerases, helicases, recombinases, and restriction enzymes) and man-made tools (e.g., oligonucleotide synthesizers, thermal cyclers, and purification kits) exist to manipulate DNA. Several technologies take advantage of these facts. For example, patterned DNA fragments have been used to direct self-assembly of nucleic acid objects ( Seeman 2003 ), to follow the fate of cells in complex populations ( Shoemaker et al. 1996 ), to localize substrates and catalysts for "lab on a chip" experiments ( Winssinger et al. 2002 ), and for DNA computing ( Braich et al. 2002 ). More recently, the idea has been advanced that patterned DNAs could be used to direct small-molecule synthesis ( Harbury and Halpin 2000 ; Gartner and Liu 2001 ), providing a genetic code for organic chemistry. A fundamental difficulty in using DNA to program molecular events is transducing the information contained within a nucleic acid sequence into a corresponding physical outcome. One general scheme to link DNA identity to a downstream process relies on sequence-specific partitioning. This self-separation is accomplished straightforwardly by hybridization of DNA molecules to immobilized oligonucleotides. Once spatially separated, the different pools of nucleic acid can be subjected to different processing steps. Thus, the sequence of a DNA fragment determines its fate. For multistep procedures, sequential hybridizations to multiple subsequences within a DNA molecule are required. Iterative partitioning of DNA molecules is equivalent to routing the molecules through a network, with each sequence taking a unique path. Routing small quantities of DNA requires high-yielding, high-fidelity, and repeatable preparative hybridization. Although a vast literature exists on DNA hybridization for analytical purposes, literature on preparative applications, where DNA must be recovered after hybridization, is quite limited. Major precedents include RNA purification over polyrA binding resins ( Aviv and Leder 1972 ) or tRNA binding resins ( Tsurui et al. 1994 ), and an electrophoresis-based selection procedure used in DNA computing ( Kenney et al. 1998 ). However, none of these methods is suitable for routing, either because they are not efficient, not repeatable, or difficult to interface with a downstream physical outcome. Here we present a practical method to autoroute DNA libraries through multiple decision points of a tree-type network, making DNA-programmed assembly processes possible. Results Our experimental scheme is illustrated in Figure 1 . A population of DNA "genes" consisting of catenated coding positions is constructed. Defined sets of "codon" sequences exist at the first position (a1, b1, and so on), second position (a2, b2, and so on), and subsequent positions. Codons present in one coding position are mutually exclusive of the codons at any other position. 
The identity of the first codon determines the fate of the gene at the first decision point of the network, the second codon at the second decision point, and so forth. Figure 1 Routing DNA through Networks (A) Structure of a simplified nine-member routing gene library. The ssDNA consists of 20-base noncoding regions (black lines Z1–Z4) and 20-base coding positions (colored bars (a,b,c)1–3). All library members contain the same four DNA sequences at the four noncoding regions. At each of the three coding positions, three mutually exclusive codons, (a,b,c)n, are present for a total of twenty-seven different routing genes. Resin beads coated with an oligonucleotide complementary to one codon (anticodon beads; gray ball at left) capture by hybridization ssDNAs containing the corresponding codon. (B) To travel through the network, the ssDNA library starts on one or multiple DEAE columns (black column on left) and is hybridized to a set of anticodon columns (red, green, and blue columns) corresponding to the set of codons in the first coding position. The genes are thus physically partitioned into subpools based on sequence identity and can be processed accordingly. Each subpool is subsequently transferred to a distinct DEAE column, completing the first step through the network. The hybridization splitting, processing, and transfer steps are repeated for all subsequent coding regions. After completion of the final step, the library is concentrated on a reverse-phase column (RP; black-and-white column on right) and eluted for solution manipulation. Genes are "read" by hybridization to a set of anticodon columns. Each anticodon column displays an oligonucleotide complementary to one codon sequence, and a complete set of columns comprises the complements to all codons at a single coding position. Genes bind to the columns by codon–anticodon base pairing, and are thus partitioned. To read a subsequent coding position, the genes are first transferred to a nonspecific DNA binding resin, regenerating an unhybridized state. This DNA is then hybridized to a new set of anticodon columns. By a series of such reading cycles, the sequence of a gene guides it through the network. DNA Routing Genes For DNA routing genes, we adopted a modular design adaptable to networks of varying depth and width. We chose codons consisting of 20 bases, catenated to neighboring codons through 20-base noncoding regions ( Figure 1 ). To prevent aberrant codon–anticodon pairing, all sequences were taken from a set of more than 10,000 distinct 20mers that do not crosshybridize in microarray experiments ( Giaever et al. 2002 ). The work reported here utilized 340-base fragments that specified routes through a tree with eight hierarchical levels and ten branches per level. Each of the 10^8 unique genes contained routing instructions for eight decision points. Construction of the gene library proceeded in two stages ( Figure 2 A). Initially, 160 40-base oligonucleotides comprising a codon and an adjacent noncoding region were synthesized. We assembled sets of 16 of these 40mers (for example, the oligonucleotides corresponding to codons a1, a2, . . . , a8) into ten 340-base genes ("all a," "all b," etc.). The ten different genes were subcloned. Eight 60-base segments were then PCR amplified from an equimolar mixture of the parental plasmids. Each segment consisted of a coding position and two adjacent noncoding regions. 
The eight degenerate products were spliced together into 340-base fragments by primerless PCR, thus producing a library of 10^8 complexity. In principle, collective assembly of the 160 40mer oligonucleotides would have created the library in one step. In practice, the two-stage approach provided better control over codon distributions in the final gene population. Figure 2 Construction and Diversification of Routing Gene Populations (A) Overlapping complementary oligonucleotides that span an entire gene (for example (Z–a)1–8 and a1–8′–Z2–9′) were assembled into full gene products ("all a," "all b," etc.) by primerless PCR and subcloned. Equivalent amounts of the ten resulting plasmids (a1–8, … , j1–8) were mixed and used as template for eight separate PCR reactions with noncoding region primer pairs (Zi/Zi+1′) that flanked a single coding position. The eight degenerate PCR products (Zn−xn−Zn+1) were assembled into a library of 10^8 different genes by primerless PCR (right). (B) To generate ssDNA, a T7 promoter (pT7) was appended to the 3′ end of the double-stranded DNA library. The minus strand of the library was transcribed using T7 RNA Polymerase (T7 RNAP), and reverse transcribed from a Z1 primer using MMLV Reverse Transcriptase (MMLV RT) in a coupled reaction. The resulting DNA/RNA heteroduplex was treated with sodium hydroxide to hydrolyze the RNA, providing ssDNA. The noncoding regions play an instrumental role in the construction and handling of genes. By providing conserved crossover points, they facilitate the modular generation of highly complex populations from a small number of starting oligonucleotides. For the same reason, the noncoding regions make it possible to diversify existing gene sets by recombination. The noncoding regions also place codons in the correct coding position, ensuring that all genes incorporate one codon per branch point of a network. The existence of a well-defined "reading frame" results from the fact that anticodon columns only hybridize to DNA subsequences at a specified coding position, and not to codons elsewhere in the gene. To obtain hybridization-capable nucleic acid, the duplex DNA genes must be converted to a single-stranded form. (Possibly, duplex DNA hybridization to oligonucleotides through D-loops could be driven by RecA and ATP [ Shortle et al. 1980 ]). A number of approaches for generating single-stranded DNA (ssDNA) have been described (for example Nikiforov et al. 1994 ; Williams and Bartel 1995 ; Pagratis 1996 ; Ball and Curran 1997 ), but we found most of them unsuitable for large-scale work. A modified nucleic-acid-sequence-based amplification protocol ( Compton 1991 ) ultimately proved most expedient. Thus, we appended a T7 polymerase promoter to duplex DNA routing genes by PCR amplification with appropriate primers ( Figure 2 B). This material was used as the substrate for a coupled transcription/reverse-transcription reaction to generate DNA/RNA heteroduplexes. Hydrolysis of the RNA strand of the heteroduplexes with sodium hydroxide provided nanomole quantities of high-quality single-stranded DNA. Oligonucleotide Hybridization Chromatography Synthesis of anticodon columns involves covalent attachment of oligonucleotides to a solid phase. Thiol-containing and amine-containing oligonucleotide modification reagents are commercially available, and either should facilitate coupling to an appropriately activated resin. 
However, pilot experiments indicated that amide linkages were more easily prepared than thioether linkages. The deprotection protocols for oligonucleotide-linked sulfhydryl moieties were more complex than for amine moieties, and sulfhydryl-modified oligonucleotides were prone to oxidation and general loss during manipulation steps. Amine-modified oligonucleotides were easier to work with and were thus used for production of anticodon columns. It proved necessary to desalt crude oligonucleotides over reverse-phase cartridges before coupling. As candidate solid phases, we tested commercially available chromatography resins made of polystyrene (Magnapore macroporous chloromethylpolystyrene beads, Argogel-NH2, epoxide-activated Poros 50 OH), methacrylate (Ultralink Biosupport Medium and Iodoacetyl), and agarose (N-hydroxysuccinimide (NHS)-activated Sepharose, carbonyl diimidazole-activated Sephacryl S-1000). 20-base modified anticodon oligonucleotides were coupled to the resins. Quantification by reverse-phase chromatography of the uncoupled oligonucleotide remaining in solution provided a measure of reaction progress ( Figure 3 ). An underivatized ten-base oligonucleotide was included in all coupling reactions to control for nonspecific loss of nucleic acid. Figure 3 Anticodon Column Synthesis (A) Jeffamine 1500 (compound 1) was reacted with glutaric anhydride, and the singly acylated linker (compound 2) was purified over a HiTrap SP column. Purified compound 2 was coupled to NHS-activated Sepharose (gray ball). Treatment of the linkered resin compound 3 with TBTU/NHS, and subsequent incubation with a 5′-amino modified oligonucleotide (NH2-DNA), completed the synthesis of an anticodon column. (B) Refractive index FPLC chromatograms of PEG compounds 1 and 2 before and after purification by cation-exchange chromatography. Linker compound 1 migrates as a bisamine (green trace) while compound 2 migrates as a monoamine (red trace). (C) HPLC chromatograms of a 5′-aminated 20-base oligonucleotide (NH2-20mer) and a nonaminated ten-base oligonucleotide control (10mer) incubated with TBTU/NHS activated resin compound 3. Chromatograms of the starting material (black) and supernatant after 12 h (red) are shown. An unknown side-product of the coupling reaction (NH2-20mer side-product) is labeled. To test hybridization properties, 50 μl columns of the derivatized resins were loaded with 1 nmol each of a complementary 20-base oligonucleotide and a noncomplementary ten-base oligonucleotide by cyclical flow in high-salt buffer. After column washing, bound oligonucleotides were eluted with deionized water. The specificity and efficiency of hybridization were evaluated by high performance liquid chromatography (HPLC) analysis of the load, flow-through, and elute fractions ( Figure 4 ). By this assay, none of the initial resins functioned for preparative DNA fractionation, either because they failed to bind DNA well (Argogel-NH2, epoxide-activated Poros 50 OH, NHS-activated Sepharose, and Biosupport Medium) or because they bound DNA without sequence specificity (Magnapore beads and Iodoacetyl). Figure 4 Linker Effects on Hybridization The hybridization to anticodon columns of a ten-base noncomplementary oligonucleotide and a 20-base complementary oligonucleotide was analyzed by HPLC. Chromatograms of the hybridization load (blue), flow-through (red), and elute (black) are shown. 
(A) shows the anticodon column that was synthesized by coupling the anticodon oligonucleotide directly to NHS-activated Sepharose. (B) shows the anticodon column that was synthesized by coupling the anticodon oligonucleotide to NHS-activated Sepharose through a PEG linker. Following an observation that long polyethylene glycol (PEG) linkers dramatically improve hybridization to DNA on polypropylene surfaces ( Shchepinov et al. 1997 ), we synthesized an approximately 100-atom modified PEG spacer to sit between the resin and the anticodon oligonucleotide (see Figure 3 ). The synthetic scheme utilized inexpensive, commercially available reagents and ion-exchange chromatography for purification. Efforts to attach the spacer to Biosupport Medium were unsuccessful, but the spacer coupled readily to NHS-activated Sepharose and to carbonyl diimidazole-activated Sephacryl S-1000. The linkered Sephacryl and Sepharose materials immobilized amine-modified oligonucleotides to final densities of approximately 90 nmol per milliliter of resin ( Figure 3 C). Anticodon columns containing 50 μl of either resin efficiently and reversibly hybridized to 1 nmol of a complementary 20-base oligonucleotide, while exhibiting unmeasurable binding to a noncomplementary ten-base oligonucleotide ( Figure 4 ). Subsequent experiments were carried out with the Sepharose-based resin. The hybridization columns proved extremely robust, withstanding over 30 hybridization cycles, treatment with 10 mM sodium hydroxide, and exposure to dimethylformamide (DMF) without a detectable decrease in performance. Fidelity of Routing We next investigated how buffer conditions and temperature influenced the accuracy and yield of 340-base ssDNA hybridization to anticodon columns. For these experiments, a single radiolabeled DNA gene was diluted 10-fold into an excess (50 pmol) of an unlabeled routing gene library, and loaded onto a 250-μl diethylaminoethyl (DEAE) Sepharose column. The DEAE Sepharose column was placed in a closed 3-ml fluid circuit containing ten anticodon columns, of which only one complemented a codon within the radiolabeled DNA. Hybridization buffer was pumped over the system in a direction that placed the complementary anticodon column distal to the DEAE Sepharose column ( Figure 5 , left). After hybridization, the flow-through was collected, and bound nucleic acid was eluted off of each column in the system. The quantity of radiolabeled DNA present in each fraction was determined by scintillation counting. Figure 5 Cyclical Multistep Routing (Left) Genes are transferred from the "n−1" step DEAE columns to the "n" step anticodon columns by connecting all columns in series and cyclically pumping a high-salt buffer through the system with a peristaltic pump (gray box) for 1 h at 70 °C and 1 h at 46 °C. (Right) Genes are transferred from an "n" step anticodon column to an "n" step DEAE column by connecting the two columns in series and cyclically pumping 50% DMF through the system for 1 h at 45 °C. Arrows indicate the direction of flow. By varying the temperature (25 °C to 70 °C), salt identity (sodium chloride, lithium chloride, or tetramethylammonium chloride) and salt concentration (10 mM to 2 M) of the hybridization buffer, we determined that 1.5 M sodium chloride in a phosphate-buffered solution at pH 6.5 and 45 °C provided the most robust hybridization behavior over multiple codon sequences. 
In addition, an initial high-temperature step (70 °C) and the presence of a DEAE column inline proved critical to achieving uniformly high yields (Table 1). The high temperature and DEAE column may serve to break up structures in the DNA genes that inhibit association with anticodon columns. Consistent with previous microarray data (Shoemaker et al. 1996), addition of 20-base oligonucleotides complementary to the noncoding regions improved hybridization efficiency. The hybridization kinetics were fast, approaching equilibrium to within 5% in less than an hour at 46 °C. Using optimal hybridization conditions, 90% or more of the input radiolabeled DNA was routed to the correct anticodon column irrespective of the sequence pair used.

Table 1. 340-Base ssDNA Hybridization Efficiencies and Specificities. A radiolabeled “all b” gene was hybridized to anticodon columns corresponding to one coding region (see Figure 5, left). The fraction of input radiolabel recovered from each component of the system is reported, as measured by scintillation counting. Cold DNA: an unlabeled library of 10^6 genes was added to the hybridization reaction in 10-fold excess over radiolabeled DNA. 70 °C Step: hybridization was carried out at 70 °C for 1 h before cooling to 45 °C. FT: radiolabel recovered from hybridization flow-through. DEAE: radiolabel recovered from an inline DEAE column. a′–j′: radiolabel recovered from the specified anticodon column or pair of anticodon columns. Lost: input radiolabel not recovered from any component.

Serial Multistep Routing

In order to route a DNA fragment through successive levels of a hierarchical tree, multiple hybridization steps are required. DNA from parental anticodon columns must be isolated and hybridized to anticodon columns corresponding to the daughter-node branches. The manipulations must be highly efficient to ensure good routing yields through trees with many levels. We investigated several schemes for accomplishing iterative hybridizations. Our initial strategies utilized a multistep procedure with three columns (anticodon, DEAE, and reverse-phase), linear transfer formats, and centrifugal evaporation. These three-column strategies did not prove to be high-yielding. We eventually observed that efficient iterative hybridizations could be accomplished with only two columns, using cyclic column-to-column transfers. Thus, a parental anticodon column with bound DNA was placed in a liquid circuit with a 250-μl DEAE-Sepharose column (Figure 5, right). A 50% DMF solution was pumped over the system, breaking interactions between the anticodon column and bound DNA, and promoting the binding of free DNA to the anion-exchange resin. (DNA bound to DEAE columns can be conveniently interfaced with an encoded process, such as covalent transformation by solid-phase organic chemistry [Halpin et al. 2004].) Subsequently, the DEAE-Sepharose column with bound DNA was placed in a liquid circuit with a set of anticodon columns corresponding to the branches of the daughter node. As described above, a high-salt buffer was pumped cyclically over the system to elute DNA from the anion-exchange resin, and to promote hybridization to the new set of anticodon columns. The two reciprocal transfers constitute one hybridization cycle and can be repeated indefinitely. The iterative hybridizations proceed with very high DNA recoveries (greater than 95% for anticodon to DEAE and greater than 90% for DEAE to anticodon) for several reasons.
First, the columns “see” a large volume of liquid flow in the cyclic format, although the total volume of buffers used is small. Second, because the transfers are column-to-column, losses associated with manipulation of dilute DNA solutions do not occur. The two-column strategy makes it practical to iterate successive hybridizations, with worst-case overall yields of 0.85^n for n hybridizations. The final requirement was to isolate DNA as a concentrated, salt-free aqueous solution upon completion of routing. For this purpose, DNA bound to anticodon columns was eluted with a small volume of an (ethylenedinitrilo)tetraacetic acid (EDTA) solution and precipitated. Alternatively, DNA bound to DEAE-Sepharose columns was transferred to a reverse-phase cartridge by cyclically pumping a high-salt buffer over the two columns in series. DNA on the reverse-phase cartridge was then washed with deionized water, eluted in acetonitrile/water, concentrated by evaporation, and desalted over a microcentrifuge gel-filtration column.

Discussion

Several technical improvements to our protocols are possible. The hybridization conditions could be further optimized to increase yields and shorten times, perhaps by the addition of proteins such as Escherichia coli SSB or RecA (Nielson and Mathur 1995). For procedures involving multiple generations of a gene population, a T7 promoter must be appended repeatedly to the library. Incorporation of an RNAZ module into the routing genes would eliminate this step by providing a permanent T7 promoter (Breaker et al. 1994). Conceivably, ssDNA production could be rendered unnecessary by using peptide nucleic acids as capture oligonucleotides or as complements to the noncoding regions. Peptide nucleic acids have been reported to invade DNA duplexes, forming more stable heteroduplexes (Kuhn et al. 2002). Routing of DNA populations provides a general way to exploit DNA as a programming medium. For example, a routing approach has been utilized to compute solutions to the traveling salesman problem (Braich et al. 2002). In order to obtain the answer, it was necessary to isolate DNA fragments containing a defined set of subsequences through iterated hybridizations. By increasing the speed and yield of such isolation steps, the tools described here should aid DNA computing advances. The preparative hybridization protocols will also facilitate purification of defined genomic sequences and primary mRNA transcripts for the study of nucleic acid modifications, and for the analysis of adjunct proteins. One advantage of iterated hybridization in this context is that it increases the overall specificity of purification, in much the same way that successive amplifications with nested primers increase the specificity of PCR reactions. The technique could also be applied to the isolation of nucleoprotein complexes, such as telomerase, that have been tagged with nucleic acid epitopes. Finally, the fates of individual molecules undergoing a process of covalent assembly can be programmed by routing. For example, the protocols presented here were used to direct the split-and-pool synthesis of a combinatorial chemistry library (Halpin and Harbury 2004). That work involved routing genes through a tree with six levels and ten branches per level. In order to program large libraries of very low-molecular-weight compounds, routing through shallow trees with thousands of branches per level will be required. Adaptation of the anticodon columns to a microarray format would achieve this goal in a practical manner. Such massively parallel DNA-directed chemistry has the potential to revolutionize modern drug discovery.
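As a rough plausibility check on the yield figures quoted above, the per-transfer recoveries (0.95 for anticodon-to-DEAE and 0.90 for DEAE-to-anticodon, roughly 0.85 per hybridization cycle) can be compounded over the levels of a routing tree. A minimal sketch in Python, with the six-level tree depth taken from the library synthesis cited above:

```python
# Compounded worst-case routing yield: a minimal sketch.
# Per-cycle recoveries are the lower bounds quoted in the text;
# the six-level tree matches the library synthesis cited above.
ANTICODON_TO_DEAE = 0.95
DEAE_TO_ANTICODON = 0.90
per_cycle = ANTICODON_TO_DEAE * DEAE_TO_ANTICODON  # ~0.855 per hybridization

for n in (1, 3, 6, 10):
    print(f"{n:2d} hybridizations: worst-case yield {per_cycle ** n:.2f}")
# Six hybridizations (one per tree level) retain roughly 0.85^6 ≈ 0.39
# of the input DNA; ample for a PCR-amplifiable gene library.
```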
Materials and Methods

Materials. O,O′-Bis(3-aminopropyl) polyethylene glycol of average molecular weight 1500 Da (compound #14535-F, also called Jeffamine 1500) and all other chemicals and solvents were purchased from Sigma-Aldrich (St. Louis, Missouri, United States). DEAE columns were prepared by pipetting approximately 250 μl of DEAE Sepharose Fast Flow resin (#17-0709-01; Amersham Biosciences, Little Chalfont, United Kingdom) into empty TWIST column housings (#20-0030; Glen Research, Sterling, Virginia, United States).

DNA library assembly. One hundred sixty 40mer oligonucleotides were synthesized using standard phosphoramidite chemistry. The 5′ 20 bases of each oligonucleotide consisted of a noncoding region sequence, while the 3′ 20 bases consisted of a codon sequence. Oligonucleotides were purified by electrophoresis on 15% denaturing acrylamide gels. The oligonucleotides were divided into ten sets exclusive to a-type codons (a1–a8), b-type codons (b1–b8), and so forth. Each set of 16 oligonucleotides was assembled into a 340-base fragment by primerless PCR (Stemmer et al. 1995). Assembly reactions contained 1 μl Vent DNA Polymerase (#0254; New England Biolabs, Beverly, Massachusetts, United States), 1X Vent buffer, 250 μM each dNTP, and 0.1–10 pmol of each oligonucleotide, and were run for 20 to 35 cycles (unless otherwise noted, PCR reactions had a volume of 100 μl). In a second PCR step, the assembly products were amplified from 1 μl of the assembly reaction using 0.2 nmol of each end primer. The amplified genes were purified on 2% NuSieve (#50081; FMC BioProducts, Rockland, Maryland, United States) agarose gels and subcloned between the SphI and EcoRI sites of the pET24A plasmid (#70769-3; Novagen, Madison, Wisconsin, United States). To construct a full library, the ten plasmids were mixed in equal proportions and used as template for eight PCR reactions. The primers were Z1/Z2′ for the first reaction, Z2/Z3′ for the second reaction, and so forth. The resulting 60-base-pair products were purified on 3% NuSieve agarose gels. Following quantification by densitometry, 120 ng of each fragment was used in a single 50-μl, ten-cycle primerless PCR reaction to assemble a library. The assembly products were amplified using 1 μl of the assembly reaction as template and 0.2 nmol of each end primer. The final library was subcloned, and 36 isolates were sequenced to verify the presence of the expected codon distribution at each coding position.

Preparation of ssDNA. ssDNA was generated using a modified NASBA reaction (Compton 1991). Duplex DNA template (1–10 pmol) was transcribed/reverse-transcribed in a 200-μl reaction containing 1 nmol primer, 40 mM Tris (pH 8.3), 20 mM magnesium chloride, 40 mM potassium chloride, 10% DMSO, 5 mM DTT, 0.1 mg/ml BSA, 3.5 mM each rNTP, 2.5 mM each dNTP, 1000 U MMLV RNase H-minus reverse transcriptase (for example, #M3682; Promega, Madison, Wisconsin, United States), 100 U T7 RNA Polymerase (for example, Promega #P2075), and 2 U of pyrophosphatase (New England Biolabs #M0296). To prepare radiolabeled ssDNA, a primer kinased with [γ-33P]ATP was used. Reactions were incubated for 12 h at 42 °C. Following the enzymatic step, RNA was hydrolyzed by addition of sodium hydroxide to 100 mM and heating of the reaction tube for 2 min at 100 °C.
The solution was subsequently neutralized with acetic acid and spun in a benchtop microfuge at 16,000 g for 2 min to remove precipitated material. The supernatant was transferred to a fresh tube, brought to 50 mM EDTA, and ethanol precipitated. ssDNA product was purified by electrophoresis on 4% denaturing acrylamide gels. Excised gel bands were crushed and rotated overnight in 3–6 ml 5 mM Tris (pH 8.0), 500 μM EDTA, and 500 μM EGTA. Acrylamide was removed by spin column filtration, and the solution volume was reduced to 800 μl by centrifugal evaporation. Samples were phenol/chloroform extracted, ethanol precipitated, and resuspended in water.

Purification of bisamine linker (compound 1). The crude Jeffamine material was purified by cation-exchange fast protein liquid chromatography over a 5-ml HiTrap SP column (Amersham Biosciences #17-1152-01). In early work, 1-ml batches of a 250-mg/ml aqueous solution were loaded onto the column in 50 mM acetic acid, washed with load buffer, and bumped off with 1 M lithium chloride. Subsequently, we developed a gradient protocol. The material was loaded in water, and the product was eluted with a linear water–hydrogen-chloride gradient (0–30 mM hydrogen chloride over 15 column volumes) at 6 °C monitored by refractive index detection (RID-10A; Shimadzu, Tokyo, Japan). After every fifth injection, the column was washed with 1.5 M sodium chloride to remove a yellow residue and was then reequilibrated in deionized water. Pooled fractions of the bisamine peak were brought to pH 10 by addition of solid sodium carbonate, and the purified Jeffamine product (compound 1) was extracted into methylene chloride. The combined organic layers were dried over sodium sulfate, and solvent was removed by rotary evaporation. Yields of the pale yellow solid were 40% based on the weight of crude starting material.

Synthesis of amine-acid linker (compound 2). One mole equivalent of 1.5 M glutaric anhydride in dioxane was added to a briskly stirred 250-mg/ml aqueous solution of purified Jeffamine (compound 1). After 30 min, the crude reaction product was injected in 1-ml batches over a 5-ml HiTrap SP column and eluted as described above. Pooled fractions of the monoamine peak were brought to pH 7 by addition of solid sodium bicarbonate, and the purified linker product (compound 2) was extracted into methylene chloride, dried over sodium sulfate, and obtained as a pale yellow solid by rotary evaporation. Yields were 35% to 50% based on the weight of purified Jeffamine starting material. A compound similar to the purified linker compound 2 is commercially available (#0Z2W0F02; Nektar Therapeutics, San Carlos, California, United States).

Synthesis of linkered resin (compound 3). To prepare resin compound 3, compound 1 or compound 2 was dissolved at 300 mg/ml in DMF/200 mM DIEA. The linker solution was incubated with one volume equivalent of drained NHS-activated Sepharose (Amersham Biosciences #17-0906-01). The suspension was rotated at 37 °C for 72 h, washed over a plastic frit with DMF to remove excess linker, and incubated with 1 M ethanolamine in DMF for an additional 12 h at 37 °C. The product resin was washed and stored at 6 °C. Resins coupled to compound 1 were further treated by incubation with an equal volume of DMF containing 100 mM glutaric anhydride and 15 mM pyridine at 37 °C under rotation for 48 h.

Resin activation (compound 4).
Typically, TBTU (320 mg) and NHS (115 mg) were dissolved in 4 ml of DMF/500 mM DIEA, and drained compound 3 (1 ml) was added. The suspension was rotated at 37 °C for 1 h. Product resin was washed with the following sequence (20 ml of each): ethyl acetate, tetrahydrofuran, ethanol, water, 5 M sodium chloride, water, and DMF. Resin activation was performed just prior to oligonucleotide coupling.

Construction of anticodon columns. Twenty-base capture oligonucleotides were synthesized using standard phosphoramidite chemistry, with the addition of a C12-methoxytritylamine modifier at the 5′-end (Glen Research #10-1912). Following ammonia cleavage and drying, the oligonucleotides were desalted over C18 Sep-Pak cartridges (#WAT020515; Waters Corporation, Milford, Massachusetts, United States). Purification proceeded according to the manufacturer's instructions, but a deionized water wash was inserted before the final elution step to remove residual TEAA. Coupling reactions of oligonucleotides to resin were carried out in low-binding 0.65-ml microcentrifuge tubes (#11300; Sorenson Bioscience, Salt Lake City, Utah, United States). Ten nanomoles of a capture oligonucleotide and 10 nmol of a nonaminated 10mer control oligonucleotide in a 40-μl aqueous solution were mixed with 160 μl of DMF/200 mM DIEA. Fifty microliters of drained resin compound 4 was added, and the suspension was rotated at 37 °C for 12 h. Reaction progress was monitored by HPLC. Supernatant aliquots (20 μl) were injected onto a 4.6-mm × 25-cm Varian Microsorb-MV 300-5 C18 column (#R0086203C5; Varian, Palo Alto, California, United States) and eluted with a linear water–acetonitrile gradient (0%–45% acetonitrile in five column volumes) in the presence of 0.1 M TEAA (pH 5.2) at 50 °C. After 12 h, the resin was pelleted by centrifugation at 100 g for 1 min, supernatant was removed, and the resin was incubated with 1 M ethanolamine in DMF for an additional 12 h at 37 °C. The derivatized resins were loaded into empty DNA synthesis column housings (#CL-1502-1; Biosearch Technologies, Novato, California, United States).

Oligonucleotide hybridization. Hybridization was performed in a closed system consisting of an anticodon column, male tapered luer couplers (Biosearch Technologies #CL-1504-1), capillary tubing (Amersham Biosciences #19-7477-01), silicone tubing (#8060-0020; Nalgene Labware, Rochester, New York, United States), tubing connectors (Amersham Biosciences #19-2150-01, #18-1003-68, and #18-1027-62), and a peristaltic pump (Amersham Biosciences #18-1110-91). Approximately 1 ml of hybridization buffer (60 mM sodium phosphate [pH 6.5], 1.5 M sodium chloride, 10 mM EDTA, and 0.005% Triton X-100) containing 400 pmol of a complementary 20-base oligonucleotide and 400 pmol of a noncomplementary ten-base oligonucleotide was cyclically pumped through the system at 0.5 ml/min for 1 h in a 46 °C water bath. DNA was eluted off the anticodon column with 4 ml of 1 mM EDTA (pH 8.0) and 0.005% Triton X-100 heated to 80 °C. Flow-through and elute fractions were analyzed by HPLC as described above (0%–18% acetonitrile in five column volumes).

ssDNA hybridization. ssDNA was loaded onto a DEAE-Sepharose column as described (Halpin et al. 2004). Anticodon columns were connected in series to the DEAE column using male tapered luer couplers, capillary tubing, silicone tubing, and tubing connectors.
Approximately 3 ml of hybridization buffer containing 1 nmol of each oligonucleotide complementary to the noncoding regions was cyclically pumped over the system at 0.5 ml/min for 1 h at 70 °C, 10 min at 37 °C, and 1 h in a 46 °C water bath within a 37 °C room. Hybridized DNA was transferred back to a fresh DEAE column, or eluted with 4 ml of 1 mM EDTA (pH 8.0) and 0.005% Triton X-100 heated to 80 °C. For analysis purposes, the hybridization flow-through, the DEAE resin, and the anticodon column elutes were mixed with 10 ml of scintillation cocktail (Bio-safe 2; Research Products International, Mount Prospect, Illinois, United States) and shaken vigorously. Counting was performed using the 35S preset channel of a scintillation counter (Beckman Instruments, Fullerton, California, United States).

Anticodon to DEAE DNA transfer. DEAE and anticodon columns were connected in series using male tapered luer couplers, 3.16-mm manifold tubing (#39-628; Rainin, Oakland, California, United States), and Tygon 3603 tubing. Using a peristaltic pump (Minipuls 2; Gilson, Middleton, Wisconsin, United States), approximately 7 ml of a 50% DMF solution was flowed cyclically over the columns at 3 ml/min for either 1 h at 45 °C or 12 h at 25 °C.

Endpoint isolation of DNA. DEAE columns were connected in series to C8 Sep-Pak columns (Waters #WAT036775) using male tapered luer couplers, Tygon 3603 tubing, and tubing connectors. Approximately 6 ml of 50 mM ethanolamine (pH 10.0), 1.5 M sodium chloride, 1 mM EDTA, and 0.005% Triton X-100 was cyclically pumped over the columns at 1 ml/min for 1 h at 50 °C. The Sep-Pak columns were then washed with 12 ml of 100 mM TEAA (pH 6.5) followed by 12 ml of water. ssDNA was eluted from the Sep-Pak column with 4 ml of 50% acetonitrile heated to 80 °C. Samples were concentrated by centrifugal evaporation to a volume of approximately 30 μl and desalted over G25 Sephadex spin columns (Sigma-Aldrich #G-25-150).
/Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC434148.xml
15221027
10.1371/journal.pbio.0020173
545969
Receptor and secreted targets of Wnt-1/β-catenin signalling in mouse mammary epithelial cells
Background Deregulation of the Wnt/β-catenin signal transduction pathway has been implicated in the pathogenesis of tumours in the mammary gland, colon and other tissues. Mutations in components of this pathway result in β-catenin stabilization and accumulation, and the aberrant modulation of β-catenin/TCF target genes. Such alterations in the cellular transcriptional profile are believed to underlie the pathogenesis of these cancers. We have sought to identify novel target genes of this pathway in mouse mammary epithelial cells. Methods Gene expression microarray analysis of mouse mammary epithelial cells inducibly expressing a constitutively active mutant of β-catenin was used to identify target genes of this pathway. Results The differential expression in response to ΔNβ-catenin for five putative target genes, Autotaxin, Extracellular Matrix Protein 1 (Ecm1), CD14, Hypoxia-inducible gene 2 (Hig2) and Receptor Activity Modifying Protein 3 (RAMP3), was independently validated by northern blotting. Each of these genes encodes either a receptor or a secreted protein, modulation of which may underlie the interactions between Wnt/β-catenin tumour cells and between the tumour and its microenvironment. One of these genes, Hig2, previously shown to be induced by both hypoxia and glucose deprivation in human cervical carcinoma cells, was strongly repressed upon ΔNβ-catenin induction. The predicted N-terminus of Hig2 contains a putative signal peptide suggesting it might be secreted. Consistent with this, a Hig2-EGFP fusion protein was able to enter the secretory pathway and was detected in conditioned medium. Mutation of critical residues in the putative signal sequence abolished its secretion. The expression of human HIG2 was examined in a panel of human tumours and was found to be significantly downregulated in kidney tumours compared to normal adjacent tissue. Conclusions HIG2 represents a novel non-cell autonomous target of the Wnt pathway which is potentially involved in human cancer.
Background

The Wnt/β-catenin signal transduction pathway plays a central role in metazoan development, controlling such diverse processes as cell growth, proliferation and organogenesis [ 1 ]. Wnt-1 is the prototypic member of this large family of secreted glycoproteins and was originally identified as a gene insertionally activated by mouse mammary tumour virus [ 2 ]. Wnt-1 is one of a number of Wnt family members which act to control the cellular level of β-catenin. Wnt proteins bind seven-pass transmembrane receptors of the Frizzled family, and a signal is transduced via Dishevelled to a complex which contains the Adenomatous Polyposis Coli (APC), Axin and Glycogen Synthase Kinase-3β (GSK-3β) proteins [ 3 , 4 ]. This signal antagonizes the phosphorylation of β-catenin by GSK-3β. There are four phosphorylation sites in the N-terminus of β-catenin which, in the absence of Wnt signal, are phosphorylated by Casein Kinase I alpha and GSK-3β [ 5 , 6 ]. This phosphorylation leads to the ubiquitination and subsequent proteasomal degradation of β-catenin [ 7 ]. Inhibition of β-catenin phosphorylation by Wnt signalling leads to the accumulation of β-catenin, which forms a bipartite complex with members of the TCF/LEF transcription factor family and activates the transcription of target genes, a process which is regulated by multiple interacting factors [ 8 ]. Overexpression of Wnt-1 in the mammary glands of transgenic mice leads to extensive hyperplasia and tumorigenesis [ 9 ]. APC was identified as the tumour suppressor gene mutated in the hereditary colorectal cancer syndrome, Familial Adenomatous Polyposis [ 10 , 11 ]. Mutations in Axin and β-catenin have also been detected in tumours of the colon and other tissues [ 12 ]. Deregulation of this pathway appears to play a contributory role in a significant proportion of human tumours of epithelial origin and hence, the identification of effector genes of this pathway is an important step towards the elucidation of the mechanisms involved. Many of the Wnt targets thus far identified are cell-cycle regulators [ 13 , 14 ] and transcription factors [ 15 - 20 ], and function in a cell-autonomous manner, providing insight into the mechanisms by which tumour cells deregulate proliferation and inhibit apoptosis. Tumours are complex organs composed of tumour cells, stromal fibroblasts, endothelial cells and cells of the immune system; and reciprocal interactions between these cell types in the tumour microenvironment are necessary for tumour growth [ 21 , 22 ]. Here we postulate that proteins secreted by Wnt/β-catenin tumour cells and receptors expressed by these cells may play roles in mediating interactions between neighbouring tumour cells or between tumour cells and their microenvironment. Consequently, in this study we have focussed our attention on identifying novel genes encoding receptors and secreted proteins.

Methods

Cell culture. All reagents were purchased from Sigma unless otherwise noted. HC11 mouse mammary epithelial cells were cultured in 5% CO2 at 37°C in RPMI 1640, supplemented with 10% Foetal Bovine Serum, 2 mM L-glutamine, 2.5 μg/ml insulin, 5 ng/ml epidermal growth factor and 50 μg/ml gentamycin [ 23 ]. HC11-lacZ and HC11-ΔNβ-catenin cells were routinely cultured in 2 μg/ml tetracycline to repress transcription of the tetracycline-regulated transgene. HEK293 and MDCK cells were grown in DMEM supplemented with 10% Foetal Bovine Serum.
The HC11-lacZ and HC11-ΔNβ-catenin cell lines were generated by infecting the cells with an ecotropic retrovirus (TRE-tTA) in which the tTA cDNA is under the control of a tetracycline-responsive promoter. Consequently, tTA expression is minimal in the presence of tetracycline and, upon tetracycline withdrawal, tTA activates its own transcription in an autoregulatory manner [ 24 ]. HC11 cells expressing tTA were subsequently infected with ecotropic retroviruses derived from RevTRE (Clontech) which directed the expression of either β-galactosidase or ΔNβ-catenin in a tetracycline-dependent manner. Bosc23 cells were used to produce ecotropic retroviruses [ 25 ]. Cells were transiently transfected with the appropriate retroviral construct and the supernatant was collected 48 hours post-transfection. Polybrene was added to a final concentration of 5 μg/ml and the supernatant was added to HC11 cells for 24 hours. HC11 cells were then subjected to antibiotic selection using either 250 μg/ml G418 or 200 μg/ml hygromycin B as appropriate.

RNA isolation. Cell monolayers were washed twice in ice-cold Phosphate Buffered Saline and lysed by addition of Trizol (Invitrogen). Total RNA was isolated according to the manufacturer's instructions. PolyA+ RNA was purified from total RNA using Oligotex (Qiagen) according to the manufacturer's instructions.

Northern blotting. Ten micrograms of total RNA from each cell line was fractionated on a denaturing formaldehyde agarose gel and transferred to a positively charged nylon membrane (Hybond N+, Amersham Pharmacia Biotech) in 10X SSC. Membranes were prehybridised for four hours in 50% (v/v) formamide, 5X SSPE, 2X Denhardt's reagent, 0.1% (w/v) SDS and 100 μg/ml denatured herring sperm DNA. Radiolabelled probes were prepared from PCR-amplified cDNA clones using the Rediprime II kit (Amersham Pharmacia Biotech) according to the manufacturer's instructions. EST sequences corresponding to the coding sequence of the genes-of-interest were identified by BLAST [ 26 ] and obtained from the I.M.A.G.E. consortium through the UK Human Genome Mapping Project Resource Centre (Hinxton, UK). ESTs bearing the following I.M.A.G.E. cloneIDs were used: Autotaxin – 533819; CD14 – 2936787; Ecm1 – 717050; Hig2 – 367488; Ramp3 – 615797; HIG2 – 4366895. Following overnight hybridisation with the labelled probe, the membranes were washed twice in 1X SSC, 0.1% (w/v) SDS at room temperature for 20 min, and twice in 0.2X SSC, 0.1% (w/v) SDS at 68°C for 10 min, and exposed to film at -80°C for 48 hours. Bound probe was quantitated using a phosphorimager (Molecular Dynamics).

Western blotting. Cell monolayers were rinsed twice with ice-cold Phosphate Buffered Saline and total cell lysates were prepared by scraping cells into a minimal volume of 50 mM Tris-HCl pH 7.5, 150 mM NaCl, 0.5% NP40 and Complete protease inhibitor cocktail (Roche). Aliquots containing 80 μg protein from each sample were analysed by SDS-PAGE [ 27 ], and transferred electrophoretically to a PVDF membrane. Mouse monoclonal antibodies were used to detect tTA (Clontech), β-catenin (Transduction Laboratories) and EGFP (Santa Cruz Biotechnology). Samples of conditioned medium were concentrated 12-fold using Microcon YM-10 centrifugal filter units (Millipore) prior to analysis.

Construction of plasmids. A BglII fragment containing the lacZ cDNA was excised from the CMV-lacZ construct (a gift of Trevor Dale) and sub-cloned into BamHI-digested RevTRE to make RevTRE-lacZ.
A plasmid containing a myc-tagged ΔNβ-catenin was obtained from Hans Clevers. The myc-tagged ΔNβ-catenin was excised with KpnI and NotI, the ends were blunted, and the fragment was subcloned into HpaI-digested RevTRE to make RevTRE-ΔNβ-catenin. The mouse Hig2 open reading frame was amplified by PCR from I.M.A.G.E. cDNA clone 367488 using the primers TTTACTAGTAGGAGCTGGGCACCGTCGCC and TTTTACCGGTGCCTGCACTCCTCGGGATGGATGG. The PCR product was digested with AgeI and SpeI and subcloned into the AgeI and NheI sites in pEGFP-C1 (Clontech) to make the Hig2-EGFP fusion gene. Site-directed mutagenesis was carried out by the method of Sawano and Miyawaki (2000) [ 28 ]. The primer TGCTGAACCTCGAGGAGCTGGGCATCATG was used to make the Hig2-EGFP(Y8V9/D8D9) mutant.

Transient transfections. Transient transfections were performed using Lipofectamine (Invitrogen) according to the manufacturer's instructions. Briefly, 1.5 × 10^5 cells were plated in 3.5 cm wells on the day prior to transfection. Each well was transfected with a total of 0.9 μg DNA under serum-free conditions for six hours, after which the cells were washed and incubated for a further 48 hours before assaying expression.

β-galactosidase activity assay. For the tetracycline dose response curve, 5000 HC11-lacZ cells for each condition were cultured in triplicate in 96-well plates for 72 hours, and β-galactosidase activity was determined as previously described [ 24 ].

Results

Generation of HC11-lacZ and HC11-ΔNβ-catenin cell lines. Stable cell lines were generated in which either ΔNβ-catenin or β-galactosidase was expressed in a tetracycline-dependent manner. These cell lines were established using a novel autoregulatory system in which the expression level of the tetracycline transactivator (tTA) protein is minimised during routine culture and is induced upon withdrawal of tetracycline with concomitant upregulation of the transgene-of-interest [ 24 ]. This strategy helps to minimise deleterious effects due to tTA toxicity. A dose-response analysis for the HC11-lacZ cell line is shown in Figure 1A. β-galactosidase expression is effectively repressed at tetracycline concentrations in excess of 20 ng/ml and is strongly induced in the absence of tetracycline. The N-terminal truncation mutant of β-catenin can be detected by western blotting by both its myc-epitope tag and an anti-β-catenin antibody (Figure 1B). tTA expression is detectable only in the absence of tetracycline, demonstrating the autoregulatory nature of this system.

Microarray analysis. Transgene expression was induced in HC11-lacZ and HC11-ΔNβ-catenin cells by withdrawal of tetracycline for 72 hours. Total RNA was isolated, from which mRNA was purified. cDNAs were labelled and hybridized to an 8962-element Incyte mouse GEM1 cDNA microarray (Incyte Genomics, Palo Alto, CA). These data are provided as supplementary material (see Additional file 1). Among those genes upregulated were two genes shown by other workers to be transcriptional targets of this pathway – Fibronectin [ 29 ] and Autotaxin [ 30 ] (data not shown) – suggesting that our model of Wnt/β-catenin signalling deregulation results in the activation of a set of target genes which overlaps, at least partially, with pathway targets in other cell lines. The microarray experiment described here was performed only once but differential expression was repeatedly validated by northern blotting from independent samples for the genes discussed here.
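Each element of such a two-channel comparison yields a pair of intensities (ΔNβ-catenin versus lacZ probe). A minimal sketch of how candidate targets might be flagged from data of this kind; the intensity values and the two-fold cutoff are illustrative assumptions, not the analysis pipeline used in this study:

```python
import math

# Illustrative two-channel ratio screen; the intensities and the
# two-fold cutoff are assumptions, not the study's actual pipeline.
elements = [
    # (gene, probe 1 intensity: ΔNβ-catenin, probe 2 intensity: lacZ)
    ("Autotaxin", 5400.0, 1250.0),
    ("Hig2",       310.0, 2100.0),
    ("Gapdh",     8800.0, 8500.0),
]

for gene, p1, p2 in elements:
    log2_ratio = math.log2(p1 / p2)
    if abs(log2_ratio) >= 1.0:  # at least a two-fold change
        direction = "up" if log2_ratio > 0 else "down"
        print(f"{gene}: log2 ratio {log2_ratio:+.2f} ({direction} upon induction)")
```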
Validation of targets. Five genes were selected for further study – Extracellular Matrix Protein 1 (Ecm1), Autotaxin, Receptor Activity Modifying Protein 3 (Ramp3), Cd14 and Hypoxia Inducible Gene 2 (Hig2). Each putative target gene was initially subjected to a secondary screen by Northern blotting to confirm the differential expression in response to ΔNβ-catenin (Figure 2). RNA samples used for Northern blotting were from induction experiments independent of those used for microarray analysis, thus demonstrating repeatedly by two distinct methods that the transcript levels of these genes are altered in cells overexpressing ΔNβ-catenin. The expression level of each of the transcripts was quantitated using a phosphorimager and normalised to the expression of Gapdh mRNA in the samples. The data in Figure 2 represent film exposure times ranging between 24 and 72 hours. Quantitations were performed using short (one hour or less) exposures to a phosphorimager screen, such that the signal intensity was not saturating.

Molecular cloning of mouse Hig2. Hypoxia Inducible Gene 2 encodes a 63 amino acid polypeptide and was one of several genes identified in a screen for genes regulated by hypoxia in a human cervical epithelial cell line [ 31 ]. HIG2 shares no sequence similarity with other known proteins. In order to facilitate the functional analysis of this gene, ESTs were identified which encoded mouse and rat Hig2, and the sequences of chimpanzee and baboon were inferred from genomic sequence data. A multiple alignment of the inferred amino acid sequences shows that these polypeptides are highly similar (Figure 3A). Analysis of these sequences using a Kyte-Doolittle hydrophobicity plot showed that the N-termini of these proteins contain a series of hydrophobic amino acids (Figure 3B). This region of hydrophobicity was reminiscent of a signal peptide, and sequence analysis using the signal peptide prediction program SignalP [ 32 , 33 ] supported this possibility.
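The hydropathy analysis mentioned above amounts to a sliding-window average of per-residue hydropathy values. A minimal sketch using the standard Kyte-Doolittle scale; the example sequence is a hypothetical signal-peptide-like N-terminus, not the actual Hig2 sequence:

```python
# Kyte-Doolittle hydropathy: sliding-window mean of residue hydropathies.
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def hydropathy_profile(seq, window=9):
    """Mean hydropathy for each window of `window` residues."""
    vals = [KD[res] for res in seq.upper()]
    return [sum(vals[i:i + window]) / window
            for i in range(len(vals) - window + 1)]

# Hypothetical signal-peptide-like N-terminus (not the real Hig2 sequence):
seq = "MNLLLVYVLLALPLAQASDKEE"
for pos, h in enumerate(hydropathy_profile(seq), start=1):
    bar = "#" * max(0, int((h + 4.5) * 2))  # crude text plot
    print(f"{pos:3d} {h:+5.2f} {bar}")
```

A sustained stretch of positive window means at the N-terminus is the signature the authors describe; dedicated predictors such as SignalP then weigh additional features of the sequence.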
Hig2 has an N-terminal signal peptide and is secreted. To investigate the subcellular localisation of Hig2, a Hig2-EGFP fusion gene was constructed and expressed in both HC11 and Madin-Darby Canine Kidney (MDCK) cells by transient transfection (Figure 4A and 4C). In both cell lines, Hig2-EGFP is localised to large round vesicle-like structures in the cytoplasm. Similar observations were made in HEK293 cells (data not shown). The fluorescence was detected predominantly around the periphery of these structures, suggesting that they do not consist of solid masses of aggregated protein. When two aspartate residues were introduced into the putative signal peptide by site-directed mutagenesis, Hig2(Y8V9/D8D9)EGFP, this distinctive subcellular localization was abolished (Figure 4B and 4D). These large structures did not colocalize with markers of mitochondria (pDsRed2-mito, Clontech), lipid droplets (Nile Red, Molecular Probes), or endosomes and lysosomes (pulse-chase analysis with TRITC-dextran) (data not shown). However, in live HC11 cells transfected with Hig2-EGFP, observations at high magnification revealed that the cytoplasm of these cells contained many very small solid green vesicles moving along the cytoskeleton. These vesicles were approximately 1/100 the size of the large vesicles shown in Figure 4A and 4C, and were not observed in cells transfected with either Hig2(Y8V9/D8D9)EGFP or EGFP alone. The rapidity of this motion in live cells, even at room temperature, precluded capture of these images but suggested the possibility that measurable amounts of secreted Hig2-EGFP might be found in the culture medium. HEK293 cells were chosen as they could be transfected at high efficiency (approximately 80%). The presence of the green transport vesicles was confirmed, and 48 hours after transfection, samples of total cell lysate and conditioned medium were analysed by western blotting. Secreted Hig2-EGFP was detected in the conditioned medium of Hig2-EGFP cells, but not Hig2(Y8V9/D8D9)EGFP cells (Figure 5). Multiple bands were detected in cell lysates for both Hig2-EGFP fusion proteins: whether these represent artifactual degradation products or physiologically relevant biological entities is as yet unknown. Such multiple banding has also been observed with other EGFP-fusion proteins targeted to the secretory pathway (Amphiregulin-EGFP, PK unpublished observations). At least one of the bands may result from internal translation initiation at the consensus Kozak initiation sequence of pEGFP-C1, which is located between the Hig2 and EGFP open reading frames. EGFP was also detected in the conditioned medium. This is consistent with previous reports of GFP secretion via a non-classical Brefeldin A-insensitive pathway [ 34 ]. In this study, several cell lines are described (including HEK293) in which wild-type GFP is released from the cell without passing through the Golgi apparatus. Thus, it is formally possible that, instead of its secretion being directed by the putative signal peptide, HIG2-EGFP might be released from the cell via this pathway in a manner specifically dependent on the EGFP moiety. The presence of post-translational modifications acquired during endoplasmic reticulum/Golgi apparatus-mediated secretion would exclude the latter hypothesis. The altered mobility of HIG2-EGFP in the medium suggested that it might be glycosylated; however, the mobility was not changed by treatment with the glycosidase PNGaseF, suggesting that this secreted protein is not glycosylated (data not shown). Previous studies using GFP fused to a signal peptide directing entry into the ER demonstrated that, in this redox environment, the cysteine residues of GFP form intermolecular disulphide bridges which result in oligomerization of GFP molecules [ 35 ]. Oligomers of Hig2-EGFP were detected (Figure 5, black arrowheads) but no oligomerization of EGFP was observed. Hig2 itself does not contain cysteine residues; thus the oligomerization is mediated by the EGFP domains. These data are consistent with HIG2-EGFP entry into the classical secretory pathway. Collectively, these data demonstrate that Hig2 contains a functional N-terminal signal peptide and is likely a secreted protein.

Expression of HIG2 in human tumours. To investigate the relevance of HIG2 in human tumours, the expression level of this gene was examined in 68 tumour cDNA samples compared to normal adjacent tissue from the same patients using a Matched Tumour/Normal cDNA blot (Clontech) (Figure 6A). The levels of HIG2 were approximately similar in most of the tumour types examined but were strongly and consistently downregulated in most of the cases of kidney and stomach tumours analysed. These data suggest that the downregulation of HIG2 observed upon deregulated β-catenin signalling in vitro may be of clinical relevance in human tumours.
Discussion

cDNA microarray analysis of the transcriptional changes resulting from overexpression of a constitutively active β-catenin revealed a panel of putative target genes of the Wnt/β-catenin pathway in mouse mammary epithelial cells. This differential expression was confirmed by Northern blotting in five cases. Autotaxin was originally identified as a secreted enzyme with potent motility stimulating activity [ 36 ] and has both pyrophosphatase and phosphodiesterase activity [ 37 ]. Transplantation experiments in athymic mice showed that ras-transformed NIH-3T3 fibroblasts became significantly more tumorigenic, invasive and metastatic when transfected with Autotaxin [ 38 ], and purified recombinant Autotaxin has potent angiogenic activity in vivo [ 39 ]. Autotaxin has been shown to be regulated by both Wnt-1 and retinoic acid [ 30 ]. Autotaxin has been shown to have lysophospholipase activity, and the effects of Autotaxin on tumour cell motility are mediated by its conversion of lysophosphatidylcholine to lysophosphatidic acid (LPA), a potent signalling molecule [ 40 , 41 ]. Extracellular Matrix Protein 1 was first identified as a novel 85 kDa protein secreted by a mouse osteogenic stromal cell line [ 42 ]. In situ hybridisation showed that Ecm1 was strongly expressed in most newly formed blood vessels, and experiments using purified recombinant Ecm1 showed that it could increase the proliferation rate of vascular endothelial cells in vitro and also stimulate angiogenesis in vivo. The ability to induce de novo angiogenesis is an absolute requirement for tumours to grow beyond a size which can be readily perfused by oxygen and nutrients from the interstitial fluid. ECM1 is overexpressed in many epithelial tumours including 73% of breast tumours analysed [ 43 ]. Homozygous loss-of-function mutations in the human ECM1 gene were recently identified by linkage analysis as the causative mutations behind Lipoid Proteinosis, a rare autosomal recessive disorder characterised by hyaline deposition in the skin, mucosae and viscera [ 44 ]. The identification of Autotaxin and Ecm1 as genes upregulated by activation of this pathway, together with VEGF [ 45 ], suggests that deregulation of Wnt/β-catenin signalling during tumour initiation and progression may be one of the factors which promotes tumour angiogenesis. CD14, which can function as both a receptor and a secreted protein, was downregulated upon ΔNβ-catenin expression. CD14 is a glycosyl-phosphatidylinositol-linked cell surface protein, preferentially expressed in monocytes, where it acts as a receptor for Lipopolysaccharide Binding Protein:Lipopolysaccharide complexes [ 46 ]. Soluble CD14 (sCD14) is also expressed in mammary epithelial cells in vitro and has been detected in human milk, where it is postulated to play a role in neonatal immunity [ 47 ], and is strongly upregulated in mammary luminal epithelial cells in vivo at the onset of involution [ 48 ]. Receptor Activity Modifying Protein 3 (RAMP3) was downregulated upon ΔNβ-catenin induction and is one of three members of the RAMP family. These proteins are involved in mediating the cellular response to the neuropeptides calcitonin, calcitonin gene related peptide, amylin and adrenomedullin. The RAMP family members function as chaperones for the seven transmembrane domain G-protein coupled receptors for these neuropeptides, shuttling the receptor to the cell surface and altering receptor glycosylation.
The ligand binding phenotype of the receptor is dependent on the RAMP family member with which it is associated [ 49 ]. RAMP3-Calcitonin Receptor (CR) heterodimers form a functional receptor for amylin [ 50 ], and RAMP3-Calcitonin-Receptor-Like-Receptor (CRLR) heterodimers act as an adrenomedullin receptor [ 51 ]. Expression of both CR and CRLR was detected in HC11 by RT-PCR (data not shown), suggesting that functional receptor-RAMP complexes are present in this cell line. Adrenomedullin, the ligand for the CRLR/RAMP3 receptor dimer, functions as a growth factor in several human tumour cell lines [ 52 ], in addition to promoting angiogenesis in vivo [ 53 ] via CRLR/RAMP3 and CRLR/RAMP2 receptor dimers [ 54 , 55 ]. Hypoxia-inducible gene 2 (Hig2) was one of several genes identified in a representational difference analysis screen for genes regulated by hypoxia in a human cervical epithelial cell line. The human gene encodes a 63 amino acid polypeptide of unknown function [ 31 ]. Expression of mouse Hig2 was downregulated in HC11 cells overexpressing ΔNβ-catenin. The identification of a group of mammalian orthologues revealed a well conserved hydrophobic region in the N-terminus, reminiscent of a signal peptide. A Hig2-EGFP fusion protein entered the secretory pathway and was detected in conditioned medium of transfected cells. The introduction of a pair of charged amino acids into the hydrophobic region abolished secretion, lending support to the hypothesis that this region contains a functional signal peptide. The nature of the large vesicular structures observed in Hig2-EGFP overexpressing cells is as yet unclear. Mammary epithelial cells are known to contain membrane-enclosed lipid droplets, as well as a variety of vesicular compartments involved in the secretion of casein, citrate, lactose and calcium [ 56 ]; however, the presence of these vesicles in MDCK and HEK293 cells argues that they are not mammary specific. Indeed, co-localization experiments suggest that these structures are neither mitochondria, lysosomes, endosomes nor lipid droplets. Given the demonstration that Hig2 is secreted, these structures most likely correspond to overexpressed Hig2-EGFP in transit through the endoplasmic reticulum and Golgi apparatus. As no antibody is available against Hig2, it was not possible to investigate the localisation of the endogenous protein, but these data represent a useful initial step in the functional characterisation of this gene. Analysis of the expression of HIG2 using a matched Tumour/Normal tissue cDNA array showed that HIG2 is widely expressed. In most cases, the levels of HIG2 in the tumours and the associated normal tissue controls were similar. HIG2 was, however, strongly and consistently downregulated in the majority of the kidney and stomach tumours analysed. This represents a significant validation of our in vitro findings in human tumours, and suggests that HIG2 may exert a tumour suppressive effect in vivo. Human HIG2 is located on 7q32.2, a commonly deleted region in several tumour types, most prominently leukaemias and lymphomas [ 57 ]. Deletion analysis of 7q in a panel of patients with Splenic Lymphoma with Villous Lymphocytes by Catovsky and colleagues suggests that a critical tumour suppressor is located on 7q32 [ 58 ].
Conclusions

The identification of this panel of candidate target genes for this clinically important signal transduction pathway adds to those identified by other workers in a variety of model systems and suggests that, as well as promoting tumour cell proliferation and survival in a cell autonomous manner, activation of this pathway is likely to have a series of non-cell autonomous effects. Here we have focussed on the identification of Wnt/β-catenin target genes that are either secreted signalling molecules or receptors. It is likely that such targets are involved in mediating autocrine proliferation, promotion of angiogenesis and the mediation of reciprocal communication between Wnt/β-catenin tumours and their microenvironmental milieu.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

PK carried out all of the experimental procedures and drafted the manuscript. PK, TE and AA contributed to the design of the study. All authors read and approved the final version of this manuscript.

Supplementary Material

Additional File 1. cDNA microarray dataset: HC11 ΔNβ-catenin v. HC11 lacZ. This file contains the dataset from the Incyte GEM1 cDNA microarray comparison between HC11 cells overexpressing ΔNβ-catenin (Probe 1) and β-galactosidase (Probe 2). This file may be easily imported into MS Excel, Genespring or other microarray analysis software to facilitate further analysis.
/Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC545969.xml
15642117
10.1186/1471-2407-5-3
539322
Disentangling Sub-Millisecond Processes within an Auditory Transduction Chain
Every sensation begins with the conversion of a sensory stimulus into the response of a receptor neuron. Typically, this involves a sequence of multiple biophysical processes that cannot all be monitored directly. In this work, we present an approach that is based on analyzing different stimuli that cause the same final output, here defined as the probability that the receptor neuron fires a single action potential. Comparing such iso-response stimuli within the framework of nonlinear cascade models allows us to extract the characteristics of individual signal-processing steps with a temporal resolution much finer than the trial-to-trial variability of the measured output spike times. Applied to insect auditory receptor cells, the technique reveals the sub-millisecond dynamics of the eardrum vibration and of the electrical potential and yields a quantitative four-step cascade model. The model accounts for the tuning properties of this class of neurons and explains their high temporal resolution under natural stimulation. Owing to its simplicity and generality, the presented method is readily applicable to other nonlinear cascades and a large variety of signal-processing systems.
Introduction

Animals and human beings rely on accurate information about their external environment and internal state for proper behavioral reactions. This vital requirement has led to a large variety of highly sophisticated sensory systems [ 1 ]. A common feature, though, is the step-by-step conversion of the incoming signal through multiple sequential transformations. In auditory systems, for example, air-pressure fluctuations induce oscillations of mechanical resonators such as the eardrums, basilar membranes, and hair sensilla [ 2 , 3 , 4 , 5 ]. These oscillations cause the opening of mechanosensory ion channels in auditory receptor cells [ 6 , 7 , 8 ]. The resulting electrical currents change the cells' membrane potentials. This, in turn, activates voltage-dependent ion channels that eventually trigger action potentials, which are passed to higher brain areas for further information processing (Figure 1). Each processing step induces a transformation of the stimulus representation that may include rectification, saturation, and temporal filtering. In the mammalian ear, this processing sequence is extended by nonlinear mechanical amplification and feedback [ 9 ], which influence the individual processing steps. Similar multi-step sequences of biophysical or biochemical transduction processes underlie the proper function of all sensory and many other signaling systems.

Figure 1. Sequential Processing in the Auditory Transduction Chain. A sequence of several steps transforms an incident sound wave into a neural spike response. (1) Mechanical coupling. The acoustic stimulus induces vibrations of a mechanical membrane (basilar or tympanic membrane). (2) Mechanosensory transduction. The deflections cause the opening of mechanosensory ion channels in the membrane of a receptor neuron. Many details of this transduction process are still unknown. The depicted schematic coupling follows the gating-spring model proposed for mechanosensory transduction in hair cells [ 43 ]. (3) Electrical integration. The electrical charge due to the transmembrane current accumulates at the cell membrane. (4) Spike generation. Action potentials are triggered by voltage-dependent currents. Each of these four steps transforms the signal in a specific way, which may be nearly linear (as for the eardrum response) or strongly nonlinear (as for spike generation, which is subject to thresholding and saturation). In general, the illustrated steps may contain further sub-processes such as cochlear amplification or synaptic transmission between hair cells and auditory nerve fibers. For the auditory periphery of locusts investigated in the present study, this schematic picture resembles anatomical findings [ 18 ], which reveal that the receptor neurons are directly attached to the eardrum and that they send their action potentials down the auditory nerve without any further relay stations.
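To make the cascade picture concrete, the four captioned steps can be strung together as a toy simulation: a linear filter, a static squaring nonlinearity, leaky integration, and a saturating spike-probability function. The sketch below is purely illustrative; its filter shapes, time constants, and sigmoid parameters are arbitrary assumptions rather than the quantitative model developed in this paper:

```python
import math

# Illustrative four-step cascade (arbitrary parameters, not the fitted model):
# (1) linear mechanical filter -> (2) static squaring nonlinearity ->
# (3) leaky electrical integration -> (4) sigmoidal spike probability.
DT = 1e-5  # time step: 10 microseconds

def cascade_spike_probability(sound, tau_mech=5e-5, tau_elec=5e-4,
                              threshold=0.02, slope=200.0):
    x = v = peak = 0.0
    for s in sound:
        x += DT / tau_mech * (s - x)   # (1) first-order mechanical filter
        j = x * x                      # (2) squaring (energy-like) step
        v += DT / tau_elec * (j - v)   # (3) leaky integration of charge
        peak = max(peak, v)
    # (4) spike probability from the peak "membrane potential"
    return 1.0 / (1.0 + math.exp(-slope * (peak - threshold)))

def two_clicks(a1, a2, dt, total=2e-3):
    """Two 20-microsecond clicks of amplitude a1 and a2, dt seconds apart."""
    n = int(total / DT)
    sound = [0.0] * n
    for amp, t0 in ((a1, 1e-4), (a2, 1e-4 + dt)):
        for k in range(2):             # 20 microseconds = 2 samples
            sound[int(t0 / DT) + k] = amp
    return sound

# For these illustrative parameters, the two-click stimulus lands near p ≈ 0.5.
print(cascade_spike_probability(two_clicks(1.0, 1.0, 4e-5)))
```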
We here show that it is possible to extract fine temporal details of individual processes within such signal-processing chains from observing the output activity alone. This progress results from a new method that extends an experimental strategy well known from measuring threshold curves in neurobiology [ 10 ] or applying equivalence criteria in psychophysics [ 11 ]: varying stimulus parameters such that the investigated pathway, cell, or system stays at a constant level of output activity. The key to the new method is to compare different stimuli within these measured iso-response sets in such a way that single processing steps can be dissociated. A cascade model is used as a mathematical framework to infer the salient features of the individual processes. This allows us to quantitatively characterize the signal-processing dynamics even under in vivo conditions. Unlike many classical approaches of systems identification, the method is not based on temporal correlations between the input and output; hence, the time resolution of the method is not limited by the output precision of the system under study. In a spike-based analysis of neural response properties, this allows us to assess the dynamical features of the involved processes with considerably higher resolution than suggested by the spike jitter. A particularly fine temporal resolution is needed to analyze signal processing in auditory systems that solve complex tasks such as sound localization, echolocation, and acoustic communication [ 12 , 13 , 14 , 15 ]. Here, even single receptor cells display extraordinary sub-millisecond precision [ 14 , 16 , 17 ], with the underlying signal-processing steps involving yet shorter time scales. How these individual steps operate over short times and eventually allow such remarkable precision is largely unknown because of the high vulnerability of the auditory periphery. This calls for methods based on neurophysiological measurements from a remote downstream location such as the auditory nerve, so that the mechanical structures of the ear remain intact. As a suitable model system to study signal processing in the ear, we chose the auditory periphery of the locust (Locusta migratoria). Its anatomy is well characterized [ 18 ], and the auditory nerve is easily accessible for electrophysiological recordings. The nerve contains the axons of the receptor cells. These can be roughly divided into two groups according to their frequency of maximum sensitivity, which lies near 5 kHz for low-frequency receptor cells and around 15 kHz for high-frequency receptor cells. The mechanical structure of the locust system is simpler than that of mammals, as the receptor cells are directly attached to the tympanic membrane, the animal's eardrum. Also, in contrast to the signal amplification in the vertebrate cochlea, there are no known feedback loops, a circumstance which facilitates the modeling. General features of mechanoreceptors, on the other hand, are surprisingly similar across species and are also shared by hair cells in the mammalian inner ear [ 8 ].

Results

To analyze signal processing in the locust ear, we performed intracellular recordings in vivo from single receptor-cell axons in the auditory nerve. The stimuli consisted of two short clicks. The clicks were sound-pressure pulses with peak amplitudes A1 and A2, respectively, and were separated by a short time interval, Δt (Figure 2A; see also Figure S1 for microphone recordings). For such stimuli, the receptor cell fired at most one action potential per double click; stimulus intensity hardly influenced spike timing, but strongly affected spike probability, as shown in Figure 2B. The response strength may thus be described by the probability that a spike occurs within a certain time window after the two clicks.

Figure 2. Receptor Neuron Responses for Two-Click Stimuli. (A) Stimulus parameters. Acoustic stimuli consisted of two short clicks with amplitudes A1 and A2, respectively, separated by a peak-to-peak interval Δt.
The clicks were triangular and had a total width of 20 μs. The peak-to-peak interval was generally less than 1.5 ms. (B) Raster plots of spike responses. Spike times obtained from a single receptor neuron with four different peak intensities (83–86 dB SPL) are shown for 30 runs each. For the different intensities, both click amplitudes were varied while their ratio was kept fixed, with intensity values referring to the larger click amplitude. The inter-click interval in this example was 40 μs. The values of p denote the measured spike probabilities. The inset displays spike times from the strongest sound stimulus at higher magnification. All spikes fall in a temporal window between 4.5 and 5.5 ms after stimulation. Spike times were recorded with a temporal resolution of 0.1 ms.

These data illustrate that the response of the receptor cell is well described by the occurrence probability of a single spike in a rather broad time window, for example, between 3 and 10 ms after stimulus presentation. As is often observed for these receptor cells, there is virtually no spontaneous activity. For fixed time interval Δt, an iso-response set consists of those combinations of A1 and A2 that lead to the same predefined spike probability p. Since the spike probability increases with the click amplitudes, A1 and A2 can easily be tuned during an experiment to yield the desired value of p (see Materials and Methods). The tuning scheme was applied for stimulus patterns with different relative sizes of the two clicks, so that a multitude of different combinations of A1 and A2 corresponding to the same p was obtained. Rapid online analysis of the neural responses and automatic feedback to the stimulus generator made it possible to apply this scheme despite the time limitations of the in vivo experiments. Figure 3 shows typical examples of such iso-response sets, measured for different time intervals Δt. For each of the three cells displayed, two distinct values of Δt were used. The sets can be used to identify stimulus parameters that govern signal processing at a particular time scale. Most importantly, the iso-response sets exhibit specific shapes that vary systematically with Δt. For short intervals (below approximately 60 μs), the sets generally lie on straight lines, at least for low-frequency receptor cells. High-frequency receptor cells do not display straight lines even at the smallest Δt used in the experiment (40 μs) for reasons that will become apparent later. For long intervals (between approximately 400 and 800 μs, depending on the cell), the iso-response sets fall onto nearly circular curves. Note that in Figure 3C, the iso-response set for Δt = 500 μs deviates from the symmetry between A1 and A2. In Figure 3D, the inter-click interval of Δt = 120 μs fell in neither of the two regimes discussed above, and the corresponding iso-response set shows a particularly bulged shape. Recordings from a total of eight cells agree with the observations from the three examples displayed in Figure 3.

Figure 3. Measurements of Iso-Response Sets and Identification of Relevant Stimulus Parameters. (A) Acoustic stimuli. The stimuli consisted of two short clicks with amplitudes A1 and A2 that were separated by a peak-to-peak interval Δt, here shown for Δt = 40 μs (upper trace) and Δt = 750 μs (lower trace). (B–D) Examples of iso-response sets from three receptor cells. Here, as throughout the paper, iso-response sets correspond to a spike probability of 70%.
Each panel shows iso-response sets from a single receptor cell for two different values of Δt, one smaller than 100 μs (filled circles) and one larger (open squares). The solid lines denote fits to the data of either straight lines or circles. The values for Δt used in the experiments are indicated in the respective panels. Error bars denote 95% confidence intervals. For the short intervals, the data are well fitted by straight lines (A1 + A2 = constant). For the long intervals in (B) and (C), circles (A1² + A2² = constant) yield good fits; a slight asymmetry is clearly visible in (C). The data for the intermediate inter-click interval Δt = 120 μs in (D) are not well fitted by either of these shapes. Here, the measured points are connected by a dashed line for visual guidance. Note that in (B) the overall sensitivity of the neuron seems to have changed; the intersections of the straight line and the circle with the x- and y-axes do not match exactly, although the stimulus in these cases is the same, a single click. The reason may be either a slow adaptation process or a slight rundown of the recording over the experimental time of around 30 min. However, this does not account for the more prominent differences in shape of the two iso-response sets.
These examples demonstrate that on different time scales, different stimulus parameters are relevant for the transduction process: the amplitude A of a sound stimulus for short times and its energy A² for long times. The two prominent shapes of the iso-response sets—straight lines and circles—reflect two different processing steps in the auditory transduction chain. A straight line implies that the linear sum, A1 + A2, of both click amplitudes determines the spike probability and demonstrates that the sound pressure is most likely the relevant stimulus parameter. Such linear summation of the pressure on short time scales is not surprising, considering the mechanical properties of the eardrum; owing to its mechanical inertia, rapidly following stimuli can be expected to superimpose. This interpretation is in agreement with laser-interferometric and stroboscopic observations of the eardrum, which have demonstrated that it reacts approximately linearly to increases in sound pressure [3,19]. For the longer intervals, on the other hand, the iso-response sets are circles to good approximation, indicating that the quadratic sum, A1² + A2², now determines the spike probability. It follows that the sound energy, which is proportional to the squared pressure, is the relevant stimulus parameter on this time scale. This quadratic summation represents a fundamentally different mode of stimulus integration from the linear summation on short time scales and indicates the involvement of a different biophysical process. A process that can mediate stimulus integration over longer intervals is the accumulation of electrical charge at the neural membrane. According to this explanation, the electrical potential induced by a click is proportional to the click's energy; contributions from consecutive clicks are then summed approximately linearly because of the passive membrane properties. This is in accordance with earlier investigations for stationary sound signals that revealed an energy dependence of the neurons' firing rate [20]. We conclude that in between the mechanical vibration of the eardrum and the accumulation of electrical charge at the neural membrane, there is a squaring of the transmitted signal.
This squaring may be attributed to the core process of mechanosensory transduction, i.e., the opening of ion channels by the mechanical stimulus.
The above findings motivate the following mathematical model, which describes how a stimulus consisting of two sound clicks is transformed into a spike probability. Within the model, a single click of amplitude A generates a vibration of the tympanum with strength X = c1·A, i.e., linear in the amplitude with a proportionality constant c1. This mechanical vibration leads to a membrane potential, whose effect on the generation of the spike some time T after the click is given by J = c2·X² = c2·(c1·A)², i.e., quadratic in the amplitude with an additional proportionality constant c2. The square follows from the circular shape of the iso-response sets for longer time scales, which indicated that a quadratic operation must take place before the accumulation of charge at the neural membrane. Finally, the spike probability p is given by a yet unknown function p = g(J). As J is the relevant quantity determining spike probability, we also refer to it as “effective stimulus intensity.” The model contains a freedom of scaling; any proportionality constants in J can be absorbed into the function g(J). To simplify the notation, we thus set c1 = c2 = 1 and obtain X = A for the strength of the mechanical vibration and J = X² = A² for the effective stimulus intensity in response to a single click. Note that in this picture, the mechanical vibration and the membrane potential are each captured by a single quantity that does not describe the time course of the corresponding processes, but rather their integrated strength in response to a click. In general, the conversion of the mechanical vibration into a membrane potential as well as the spike generation are dynamical processes that do not happen at a single moment in time. For simplicity, however, one may think of X as describing the velocity of the mechanical vibration immediately after the click and J as capturing the membrane potential at the time of spike generation.
For the two-click stimulus with amplitudes A1 and A2, respectively, we choose the first click to be small enough that it does not lead to a spike by itself. The measured action potential is thus elicited at some time T after the second click. To derive the model equation for this experimental situation, we divide the time from the first click to spike generation into the period between the two clicks and the period following the second click. Let us start by focusing on the inter-click interval. After the first click, the mechanical vibration has the strength X1 = A1. However, how much electrical charge accumulates during the inter-click interval to influence spike generation at time T after the second click depends on the length Δt of the inter-click interval. This effect is incorporated into the model by a Δt-dependent scaling factor Q(Δt) and results in a first contribution of the first click to spike generation, given by J1 = A1²·Q(Δt). Since Q(Δt) denotes the effect of the first click within the inter-click interval only, it should vanish in the limit of very small Δt. Let us now consider the remaining time before spike generation. After the second click, the mechanical vibration is due to a superposition of both clicks.
For short inter-click intervals, the straight iso-response lines suggest a simple addition of the two click amplitudes; in general, however, the contribution of the first click to the membrane vibration after the second click will again depend on the inter-click interval Δt. This is modeled by a scaling factor L(Δt), i.e., the vibration after the second click has a strength X2 = A1·L(Δt) + A2. Accordingly, the effect of the two-click vibration on the membrane potential at time T after the second click is J2 = (X2)² = (A1·L(Δt) + A2)². For very small Δt, L(Δt) should approach unity to account for the equal contribution of both clicks for vanishing inter-click intervals. The total effective stimulus intensity is then given by

J = A1²·Q(Δt) + (A1·L(Δt) + A2)²     (equation 1)

This quantity determines the spike probability p via the relation p = g(J). How does this model explain the particular shapes of the iso-response sets in Figure 3? The linear and the circular iso-response sets apparently correspond to the two special cases: (1) L(Δt) = 1 and Q(Δt) = 0 (straight line) and (2) L(Δt) = 0 and Q(Δt) = 1 (circle). We can therefore regard equation 1 as a minimal model incorporating linear as well as quadratic summation, as suggested by the measured iso-response sets. Based on the experimental data, we expect that the first case is approximately fulfilled for small Δt and the second case in some range of larger Δt. In our biophysical interpretation, the first case means that the two clicks are added at the tympanic membrane (L(Δt) ≈ 1), but the short interval between the two clicks prevents a substantial accumulation of charge from the first click alone (Q(Δt) ≈ 0), as already discussed above. The second case may be found for Δt long enough that the mechanical vibration has already decayed (L(Δt) ≈ 0). The two clicks are then individually squared, i.e., they independently lead to two transduction currents. The currents add up if the time constant of the neural membrane is significantly longer than the inter-click interval (Q(Δt) ≈ 1). In the two limiting cases, equation 1 is symmetric with respect to A1 and A2, reflecting the symmetry of, e.g., the data in Figure 3B. However, for values of Δt where neither of the two cases is strictly fulfilled, this symmetry of the iso-response sets will be distorted, as is noticeable for the longer Δt in Figure 3C. Other sets of values for L(Δt) and Q(Δt) may lead to very different iso-response shapes, as in Figure 3D.
Equation 1 presents a self-contained model for click stimuli and is sufficient to analyze the temporal characteristics of the individual steps. It can be interpreted as a signal-processing cascade that contains two summation processes, one linear in the click amplitudes and one quadratic. For click stimuli, the functions L(Δt) and Q(Δt) are thus filter functions associated with the linear and quadratic summation, respectively. Despite the simple structure of the model, the filters L(Δt) and Q(Δt) can be expected to retain the salient features of the underlying biophysical processes, such as frequency content and integration time. In Protocol S1, we show that equation 1 can be obtained in an a posteriori calculation from a generalized cascade model and that this derivation leads to an interpretation of L(Δt) as the velocity of the mechanical vibration and of Q(Δt), at least for large enough Δt, as the time course of the membrane potential following a click.
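Before turning to the generalized model, equation 1 itself is easy to explore numerically. The following minimal Python sketch (not part of the original study; the iso-response level and the values of L and Q are assumed for illustration) traces the iso-response curves the equation predicts:

```python
import numpy as np

def iso_response_a2(a1, L, Q, J):
    """Second-click amplitude A2 that keeps the effective stimulus
    intensity J = A1^2*Q + (A1*L + A2)^2 (equation 1) at a fixed level."""
    inner = J - Q * a1**2                 # what the squared term must supply
    a2 = np.sqrt(np.maximum(inner, 0.0)) - L * a1
    return np.where((inner >= 0) & (a2 >= 0), a2, np.nan)

a1 = np.linspace(0.0, 1.0, 6)
J = 1.0                                   # iso-response level (arbitrary units)
print(iso_response_a2(a1, L=1.0, Q=0.0, J=J))   # straight line: A1 + A2 = 1
print(iso_response_a2(a1, L=0.0, Q=1.0, J=J))   # circle: A1^2 + A2^2 = 1
```

The first print yields 1 − A1 (linear summation), the second √(1 − A1²) (quadratic summation), reproducing the two measured shapes of the iso-response sets.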
In this generalized model, the input signal is an arbitrary sound pressure wave A(t), and the effective stimulus intensity is a continuous function of time, J(t), which is given by

J(t) = ∫ q(τ)·[∫ l(τ′)·A(t − τ − τ′) dτ′]² dτ     (equation 2)

Here, the input A(t) is first convolved with a temporal filter, l(τ), the result is squared and subsequently convolved with a second filter, q(τ), as depicted in Figure 4. The filters l(τ) and q(τ) have characteristics similar to the click-version filters L(Δt) and Q(Δt), but are not identical to them. Their relations follow from the calculation in Protocol S1. As we here focus on click stimuli, we will use the simpler equation 1 to evaluate the temporal structures of L(Δt) and Q(Δt).
Figure 4 Generalized Cascade Model of the Auditory Transduction Chain
The model is composed of a sequence containing two linear temporal filters, l(τ) and q(τ), and two static nonlinear transformations, namely a quadratic nonlinearity and an output nonlinearity g̃(·), which may differ from the nonlinearity g(·) of the click-stimulus model (see Protocol S1). First, the stimulus A(t) is convolved with the filter l(τ) (linear integration). Second, the result is squared (nonlinear transformation). Third, the result of the previous step is convolved with the filter q(τ), yielding the effective stimulus intensity J(t) (linear integration). Fourth, a final transformation g̃ of J(t) (nonlinear transformation) determines the response, which in this generalized model is the time-dependent firing rate r(t). The model thus corresponds to an LNLN cascade. This abstract structure directly follows the sequential configuration of the biophysical processing steps shown in Figure 1.
Note that we interpret equation 1 to yield the spike probability after the second click. If the first click is large and the second small, however, the first click alone may account for some of the observed spikes; clearly this is the case when the second click vanishes. This is not captured by equation 1, and one might expect that, for large values of A1, these additional spikes lead to measured values of A2 that are slightly smaller than expected for a circular iso-response set. The data in Figure 3, however, suggest that this effect is small and not picked up by our experiment. Nevertheless, for the following quantitative study, we will always keep the first click at a level where the click by itself does not contribute substantially to the spike probability.
The previous experiment showed that the separate effects of the two summation processes can be discerned for short and long time intervals. For intermediate Δt, however, their dynamics may largely overlap. Is it nevertheless possible to design an experiment that directly reveals the whole time course of the mechanical vibration L(Δt) and the electrical integration Q(Δt)? This would provide a parameter-free description of both processes and advance the quantitative understanding of the auditory transduction dynamics. To reach this goal, we again measure iso-response sets. As before, we exploit that for fixed Δt, any pair of click amplitudes (B1, B2) should result in the same spike probability p as the pair (A1, A2) as soon as J(A1, A2) = J(B1, B2). It is this straightforward relation that allows us to determine both L(Δt) and Q(Δt) independently of each other. In fact, some appropriate set of measurements that fulfill the iso-response relation is all that is needed to calculate L(Δt) and Q(Δt).
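As a brief aside, the order of operations in the LNLN cascade of equation 2 can be made explicit in a discrete-time sketch. The Python fragment below is illustrative only; the filter shapes and time constants are assumptions, loosely following the forms used for the Figure S2 simulations, not measured values:

```python
import numpy as np

def lnln_cascade(stim, l_filt, q_filt, g=lambda j: j):
    """Equation 2 in discrete time: filter with l, square, filter with q,
    then apply an output nonlinearity g (identity by default)."""
    x = np.convolve(stim, l_filt)[:len(stim)]    # linear stage (mechanics)
    j = np.convolve(x**2, q_filt)[:len(stim)]    # square, then membrane filter
    return g(j)

dt = 1e-6                                        # 1 microsecond time step
t = np.arange(0.0, 2e-3, dt)                     # 2 ms of filter support
l_filt = np.sin(2*np.pi*5e3*t) * np.exp(-t/150e-6)   # assumed filter shapes,
q_filt = np.exp(-t/500e-6)                           # cf. Figure S2

stim = np.zeros(len(t))
stim[0], stim[100] = 1.0, 0.8                    # two clicks, 100 us apart
print(lnln_cascade(stim, l_filt, q_filt).max())  # peak effective intensity
```

This aside is independent of the click-stimulus derivation, which now continues.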
Illustrating the iso-response concept just described, we now proceed with a particularly suitable choice of stimulus patterns, which keeps the mathematical requirements for the calculation at a minimum. For each Δt, we measure two different iso-response stimuli; as a key feature, one of these has a “negative” second click, i.e., a sound-pressure pulse pointing in the opposite direction from the first click, as depicted in Figure 5A. Mathematically, this choice of stimulus patterns leads to two simple equations for the two unknowns L(Δt) and Q(Δt), which can be solved explicitly, as explained in Materials and Methods. By repeating such double measurements for different values of Δt, the whole time course of L(Δt) and Q(Δt) is obtained.
Figure 5 Temporal Structure of the Mechanical Oscillation and Electrical Integration
(A) Stimulus patterns. Two clicks were presented, separated by a time interval Δt. The first click (amplitude A1) was held constant throughout this experiment. The second click was presented in the same direction as the first click (solid line, amplitude A2) or in the opposite (“negative”) direction (dashed line, amplitude Ã2). The click amplitudes A2 and Ã2 were adjusted to fall in the desired iso-response set.
(B–G) Mechanical oscillation and electrical integration of a high-frequency (B and E) and two low-frequency (C and F, and D and G, respectively) receptor neurons. (B–D) Time course of the eardrum vibration. The individual values (circles) were calculated from the measured values of A2 and Ã2 for each Δt. The results are compared with a theoretical curve from a damped harmonic oscillator (solid line) with fundamental frequency f and decay time constant τ_dec fitted to the data. (E–G) Time course of the electrical integration process. The measured data are compared to an exponential fit (solid line) with a time constant τ_int.
Figure 5 shows examples of L(Δt) and Q(Δt) for three different cells. L(Δt) displays strong oscillatory components, as was observed for all cells. This property presumably reflects the eardrum's oscillation at the attachment site of the receptor cell. The detailed temporal structure of L(Δt) now allows us to investigate the salient features of this oscillation. To quantify our findings, we fit a damped harmonic oscillation to the measured data for L(Δt) and extract the fundamental frequency as well as the decay time constant. We can use these values to predict the neuron's characteristic frequency (the frequency of highest sensitivity) and the width of its frequency-tuning curve. Figure 6 shows the comparison of these predictions with traditional measurements of the tuning curves for all 12 cells measured under this experimental paradigm with sufficient sampling to extract L(Δt). The remarkable agreement confirms that the new analysis faithfully extracts the relevant, cell-specific properties of the transduction sequence. The correspondence between the tuning characteristics and the filter L(Δt) also explains why high-frequency receptor cells do not feature straight lines for their iso-response sets even at the shortest inter-click interval (40 μs) used in the experiment. For those cells, L(Δt) decays rapidly, thus not allowing access to the region where L(Δt) ≈ 1.
Figure 6 Predictions of Tuning Characteristics
(A) Tuning curves for the same two cells as in Figure 5B and 5E, and 5C and 5F, respectively.
The data show the intensity required to drive a receptor cell at a firing rate of 150 Hz for different sound frequencies in the range of 1 to 40 kHz. The characteristic frequency f_CF is determined as the minimum of the tuning curve, and the tuning width Δf_3dB as the width of the curve 3 dB above the minimum value.
(B) Comparison of the predicted and measured characteristic frequency and tuning width. The predictions were obtained from the fundamental frequency and decay time constant of the measured filter L(Δt); the measured values are taken from the tuning curves as in (A) (n = 12). The encircled data points correspond to the three examples shown in Figure 5. The width of the tuning curves is notoriously difficult to assess quantitatively, as it depends sensitively on an accurate determination of the intensity minimum of the tuning curve. This contributes strongly to the differences in the tuning-width values.
The short initial rise phase of the measured Q(Δt) in Figure 5E and 5F illustrates the rapid buildup of the membrane potential after a click. The exponential decay following this phase suggests that the accumulated electrical charge decays over time owing to a leak conductance. Previously, this time constant could not be measured because of difficulties in obtaining recordings from the somata or dendrites of the auditory receptor cells. Using our new method, we find time constants in the range of 200 to 800 μs. These values are small compared to time constants in more central parts of the nervous system, reflect the high demand for temporal resolution in the auditory periphery, and explain the high coding efficiency of the investigated receptor neurons under natural stimulation [21].
In most of our recordings, the temporal extent of the filter L(Δt) was considerably smaller than that of Q(Δt). This usually leads to a region around a Δt of 400–800 μs, depending on the specific cell, where L(Δt) ≈ 0 and Q(Δt) is still near unity. These findings correspond to the circular iso-response sets of the initial experiment. Towards very small Δt, on the other hand, the data show that Q(Δt) usually decreases strongly. As explained earlier, this is expected from the linear iso-response sets, and it is exemplified by the data shown in Figure 5E and 5F. In addition, the first few hundred microseconds of the data may show considerable fluctuations of Q(Δt) for some recordings, as in Figure 5G. Different effects may influence this early phase of Q(Δt). (1) The electrical potential might be shaped by further dynamics in addition to the low-pass properties of the neural membrane, such as inactivation of the transduction channels or electrical resonances as found in some hair cells [6]. (2) The fluctuations could reflect the oscillatory influx of current that follows from the oscillation of the eardrum. In other words, the low-pass filtering of the neural membrane may not be strong enough to quench all oscillatory components of the transduction currents. The resulting effect on the filter Q(Δt)—though too small to be picked up reliably by the present experiments—can be observed in simulations of the processing cascade; see Figure S2. At present, we cannot distinguish between these two interpretations. More detailed future experiments, however, may allow a quantitative test of these hypotheses. Measuring the mechanical and electrical response dynamics, L(Δt) and Q(Δt), completes the model.
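To illustrate how f, τ_dec, and τ_int are extracted from measured filters, here is a small Python sketch on synthetic data (all parameter values and the noise level are assumed; the fit forms follow the Data fitting subsection of Materials and Methods, using scipy's standard least-squares fitting):

```python
import numpy as np
from scipy.optimize import curve_fit

def osc(dt, a, omega, delta):
    """Damped-oscillation fit for L(dt); cf. the Data fitting subsection."""
    return a * np.exp(-delta * dt) * np.cos(omega * dt)

def expdec(dt, a, tau_int, c):
    """Exponential-decay fit for Q(dt)."""
    return a * np.exp(-dt / tau_int) + c

# synthetic measurements (all values assumed for illustration)
dts = np.arange(40e-6, 1.5e-3, 20e-6)
L_meas = osc(dts, 1.0, 2*np.pi*5e3, 1/150e-6) + 0.02*np.random.randn(len(dts))
Q_meas = expdec(dts, 1.0, 500e-6, 0.0) + 0.02*np.random.randn(len(dts))

pL, _ = curve_fit(osc, dts, L_meas, p0=[1.0, 2*np.pi*4e3, 5e3])
late = dts > 150e-6                      # skip the initial rise phase of Q
pQ, _ = curve_fit(expdec, dts[late], Q_meas[late], p0=[1.0, 4e-4, 0.0])
print("f =", pL[1]/(2*np.pi), "Hz; tau_dec =", 1/pL[2], "s; tau_int =", pQ[1], "s")
```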
In order to test the model's validity and its suitability for making quantitative predictions, we investigated its performance on a different class of stimuli, namely combinations of three short clicks. Having measured the required values of L(Δt) and Q(Δt) with two-click stimuli as in the previous experiment (see Figure 5), we now ask the following question: if we keep the first two clicks small enough that they do not lead to a spike response, can we predict the size of the third click required to reach a given spike probability? We can use the measured values of L(Δt) and Q(Δt) to calculate these predictions and test them experimentally by performing a series of three-click iso-response measurements. This experiment was performed on three different cells; one cell featured an unusually high response variability, and results from the other two cells are shown in Figure 7. The agreement between the predicted and the true click amplitudes shows that the model yields quantitatively accurate results.
Figure 7 Model Predictions for Three-Click Stimuli
(A) Stimulus patterns. The stimuli consisted of three clicks with amplitudes A1, A2, and A3 that were separated by time intervals Δt1 and Δt2, respectively. The second and third clicks were given either in the same or in the opposite (“negative”) direction as the first click. A1 and A2 were set equal and held constant, and A3 was adjusted to yield a spike probability of 70%. The following pairs of time intervals (Δt1, Δt2) were applied: (100 μs, 100 μs), (100 μs, 200 μs), and (200 μs, 100 μs).
(B and C) Predicted and measured amplitudes of the third click for two different cells. Predictions were made after L(Δt) and Q(Δt) had been measured with two-click experiments such as in Figure 5. The comparison between predicted and measured values of A3 therefore contains no free parameters. The model equation for three-click stimuli is presented in Materials and Methods. As demonstrated by these data, the model allows quantitatively accurate predictions.
Discussion
We have presented a novel technique to disambiguate single processing steps within a larger sensory transduction sequence and to analyze their detailed temporal structures. Our approach is based on measuring particular iso-response sets, i.e., sets of stimuli that yield the same final output, and on specific quantitative comparisons of such stimuli to dissociate the individual processes. For the investigated auditory transduction chain in the locust ear, this strategy led to a precise characterization of two consecutive temporal integration processes, which we interpret as the mechanical resonance of the eardrum and the electrical integration of the attached receptor neuron. The method revealed new details of these processes with a resolution far below 1 ms. The results for the time course of the mechanical resonance agree with traditional measurements of tuning curves and show the decay of the oscillation with a temporal precision much higher than expected from the jitter of the measured output signal, the spikes. The time constants of the electrical integration that were extracted from the data had not been accessible by other means. The analysis resulted in a four-step model of auditory transduction in locusts. The model comprises a series of two linear filters and two nonlinear transformations.
The quadratic nonlinearity that separates the two linear filters suggests that the mechanosensory transduction can be described by an energy-integration mechanism, as the squared amplitude corresponds to the oscillation energy of the tympanum. This quadratic form was derived from the circular shape of the iso-response sets for longer time scales Δt and is in accordance with the energy-integration model that was found to capture the sound-intensity encoding of stationary sound signals in these cells [20]. Furthermore, the direct-current component of the membrane potential in hair cells is also proportional to sound energy [22], and in psychoacoustic experiments, energy integration accounts for hearing thresholds [23,24,25,26]. However, a recent analysis of response latencies in auditory nerve fibers and auditory cortex neurons in cats suggests an integration of the pressure envelope for determining thresholds [27]. This effect may be attributable to the synapse between the hair cell and the auditory nerve fiber in the mammalian ear. In the locust ear, this synapse does not exist, as the fibers are formed by the axons of the receptor neurons themselves. Although the quadratic nonlinearity is fully consistent with our data, there is a second possibility within the general cascade model, equation 2, namely squaring after rectification. From a biophysical point of view, this would be expected if the mechanosensory ion channels can only open in one direction. Based on the current data, we cannot distinguish between these two possibilities. As the two scenarios should lead to slightly different response characteristics, future high-resolution experiments should be able to resolve this question.
The linear filters L(Δt) and Q(Δt) were interpreted as the mechanical oscillation of the tympanum and the electrical integration at the neural membrane. Their oscillatory and exponential decay characteristics, respectively, support this view. In principle, however, other processes may well contribute to these characteristics, e.g., electrical resonances as seen in hair cells of the turtle and bullfrog [6,28]. These electrical amplification processes would be expected to influence the filter Q(Δt), but our data generally provide little evidence for such effects. Deviations from the exponential decay characteristics in Q(Δt) may in part be attributable to the oscillatory influx of charge resulting from the tympanic vibration. This may lead to a small oscillatory component in the early phase of the filter (cf. Protocol S1; Figure S2).
The mechanical coupling in the first step of our model is linear. This is in accordance with mechanical investigations of the tympanum using laser interferometry [3] and stroboscopic measurements [19]. As the short clicks used in our study produce reliable spiking responses only at high sound pressure, however, we cannot exclude the influence of nonlinear coupling at low sound pressure, which has been hypothesized on the basis of distortion-product otoacoustic emissions [29]. In addition, the mechanical properties of the tympanum seem to change slightly under prolonged stimulation and give rise to mechanical adaptation effects with time scales in the 100-ms range [30]. Spike-frequency adaptation also adds a nontrivial feedback term to the minimal feedforward model of Figure 4.
Similarly, specific potassium currents and sodium-current inactivation induced by sub-threshold membrane-potential fluctuations may complicate the transduction dynamics for more general inputs, but do not leave a signature in the present click-stimulus data.
The model was quantitatively investigated by using combinations of short clicks. The particular structure of these stimuli allowed a fairly simple mathematical treatment. The derivation of equation 1 relied on capturing the mechanical vibration and the membrane potential, respectively, by single quantities in each time period following a click. This was possible because of the expected stereotypic evolution of the dynamic variables during the “silent phases” between and after the clicks. A generalization to arbitrary acoustic stimuli would require a more elaborate model in the form of equation 2 as well as extensions that account for neural refractoriness and adaptation.
Besides its applicability under in vivo conditions, the presented framework has several advantageous properties. First, the method effectively decouples temporal resolution on the input side from temporal precision on the output side by focusing on spike probabilities. In all our measurements, for example, spike latencies varied by about 1 ms within a single recording set owing to cell-intrinsic noise (see Figure 2). Still, we were able to probe the system with a resolution down to a few microseconds. This would not have been possible using classical techniques such as poststimulus time histograms, reverse correlation, and Wiener-series analysis. All these methods are intrinsically limited by the width of the spike-time jitter and thus cannot capture the fine temporal details of rapid transduction processes. With our method, the resolution is limited only by the precision with which the sensory input can be applied. For the investigated system, the achievable temporal resolution thus increases by at least two orders of magnitude. Second, the method is robust against moderate levels of spontaneous output activity, as this affects all stimuli within one iso-response set in the same way. Methods that require measurements at different response levels, on the other hand, are likely to be systematically affected because the same internal noise level may have a different influence at different levels of output activity. Finally, in many input–output systems, the last stage of processing can be described by a monotonic nonlinearity. Here, this is the relation between the effective stimulus intensity J and the spike probability p = g(J), which includes thresholding and saturation. By always comparing stimuli that yield the same output activity, our analysis is independent of the actual shape of g(J). Preceding integration steps may thus be analyzed without any need to model g(J). This feature is independent of the specific output measure and applies to spike probabilities, firing rates, or any other continuous output variable. Let us also note that the method does not require that the time scales of the individual processes be well separated. For the studied receptor cells, mechanical damping was on average about two times faster than electrical integration, and even for cells with almost identical time constants, iso-response measurements led to high-quality data and reliable parameter fits. Nor is the method limited to particularly simple nonlinearities. All that is needed are solid assessments of the iso-response sets.
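The independence from the output nonlinearity g(J) can be demonstrated in a few lines of Python. In this toy sketch (both nonlinearities are invented for illustration), two very different monotonic functions g produce iso-response sets of identical circular shape, differing only in radius:

```python
import numpy as np

def J(a1, a2):
    """Effective intensity for the long-interval case of equation 1
    (L = 0, Q = 1): pure quadratic summation."""
    return a1**2 + a2**2

g1 = lambda j: 1.0 - np.exp(-2.0 * j)    # two invented monotonic
g2 = lambda j: j**3 / (1.0 + j**3)       # output nonlinearities g(J)

def a2_for_p(a1, g, p=0.7, lo=0.0, hi=10.0):
    """Bisect for the A2 that yields spike probability p at this A1."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(J(a1, mid)) < p else (lo, mid)
    return 0.5 * (lo + hi)

for g in (g1, g2):
    radii = [a1**2 + a2_for_p(a1, g)**2 for a1 in (0.2, 0.4, 0.6)]
    print(radii)   # constant within each g: the same circle, whatever g is
```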
Mathematically, it is straightforward to replace some or all of the analytical treatments of this work with numerical approaches, if required by the complexity of the identified signal-processing steps. This extension allows one to use a general parametrization of the full processing chain when the nonlinear transformation cannot be estimated from iso-response sets at large and small Δt. Instead, performing more than the two measurements at each intermediate Δt in the second experiment (see Figure 5) will provide additional information that can be exploited to improve the numerical estimates of the nonlinearity.
As in many other approaches to nonlinear systems identification, the development of a quantitative model relies on the prior determination of the appropriate cascade structure. Unfortunately, there is no universal technique for doing so. In many cases, intuition is required to find suitable models, which should eventually be tested by their predictive power. In the present case, the finding of characteristic shapes of the iso-response sets gave a clear signature of two distinct linear filters with a sandwiched quadratic nonlinearity. In addition, this structure was supported by its amenability to straightforward biophysical interpretation. Generalizing our results, specific iso-response sets may aid structure identification in conjunction with a priori anatomical and physiological knowledge. Once the cascade structure is established, the individual constituents can be quantitatively evaluated by specific comparisons of iso-response stimuli. Comparing responses to clicks in positive and negative directions, as in this study, is in essence similar to the approach used by Gold and Pumphrey [31], who evaluated the perceptual difference between short sine tones with coherent phase relations and sine tones that contained phase-inverted parts in order to estimate the temporal extent of the cochlear filters.
A yet open problem is the inclusion of feedback components. The present approach relies on the feedforward nature of the system to disentangle the individual processing steps. In particular cases, however, the iso-response approach may also aid in separating feedforward and feedback contributions, namely, when the feedback depends purely on the last stage of the processing cascade [30]. In this situation, iso-response measurements lead to a constant feedback contribution, and the analysis of the feedforward components may be carried out as in the present case. The experiment may then be repeated for different output levels to map out the feedback characteristics.
The feedforward model that we have proposed here for the auditory transduction chain has the form of an LNLN cascade (where “L” stands for linear and “N” stands for nonlinear), composed of two linear temporal integrations and two nonlinear static transformations [32]. Similar signal-processing sequences combining linear filters and nonlinear transformations are ubiquitous at all levels of biological organization, from molecular pathways for gene regulation to large-scale relay structures in sensory systems. In neuroscience, applications range from the sensory periphery, including frog hair cells [33], insect tactile neurons [34], and the mammalian retina [35,36,37], over complex cells in visual cortex [38,39], to psychophysics [40]. These studies are restricted to models that contain a single nonlinear transformation, corresponding to NL, LN, or LNL cascades [32,41].
An extension of these analyses was presented by French et al. [42], who derived an NLN cascade for fly photoreceptors. Complementary to the correlation techniques underlying the parameter estimation in those models, the method presented in this work provides a new way of quantitatively evaluating and testing cascade models. The increased complexity of the LNLN cascade identified in the present case was made accessible by invoking particular iso-response measurements, and a higher temporal resolution was achieved by focusing on how spike probabilities depend on the temporal stimulus structure instead of relying on temporal correlations between stimulus and response.
Our experimental technique will be most easily applicable to systems whose signal processing resembles the cascade structure investigated here. The general concept of combining different measurements from within one iso-response set covers, however, a much larger range of systems. With increasingly available high-speed computer power for online analysis and stimulus generation, this framework therefore seems well suited to solve challenging process-identification tasks in many signal-processing systems.
Materials and Methods
Electrophysiology
We performed intracellular recordings from axons of receptor neurons in the auditory nerve of adult Locusta migratoria. Details of the preparation, stimulus presentation, and data acquisition are described elsewhere [20]. In short, the animal was waxed to a Peltier element; head, legs, wings, and intestines were removed, and the auditory nerves, which are located in the first abdominal segment, were exposed. Recordings were obtained with standard glass microelectrodes (borosilicate, GC100F-10, Harvard Apparatus, Edenbridge, United Kingdom) filled with 1 mol/l KCl, and acoustic stimuli were delivered by loudspeakers (Esotec D-260, Dynaudio, Skanderborg, Denmark, on a DCA 450 amplifier, Denon Electronic, Ratingen, Germany) ipsilateral to the recorded auditory nerve. The reliability of the sound signals used in this study was tested by playing samples of the stimuli while recording the sound at the animal's location with a high-precision microphone (40AC, G.R.A.S. Sound and Vibration, Vedbæk, Denmark, on a 2690 conditioning amplifier, Brüel and Kjær, Langen, Germany). See Figure S1 for example recordings. Spikes were detected online from the recorded voltage trace with the custom-made Online Electrophysiology Laboratory software and used for online calculation of spike probabilities and automatic tuning of the sound intensities. Spike times were measured with a resolution of 0.1 ms. During the experiments, the animals were kept at a constant temperature of 30 °C by heating the Peltier element. The experimental protocol complied with German law governing animal care.
Measurement of iso-response sets
Since the spike probability p of the studied receptor neurons increases monotonically with stimulus intensity, parameters of iso-response stimuli corresponding to the same value of p can be obtained by a simple online algorithm that tunes the absolute stimulus intensity. For fast and reliable data acquisition, we chose p = 70%. The response latency of the neurons varied by 1–2 ms, so spike probabilities could be assessed by counting spikes over repeated stimulus presentations in a temporal window from 3 to 10 ms after the first click.
In the first set of experiments, stimulus patterns were defined by fixed ratios of A1 and A2, and the tuning was achieved by adjusting the two amplitudes simultaneously. The ratios were chosen so that the angles α in the A1–A2 plane, given by tan α = A2/A1, were equally spaced. In the second set of experiments, A1 was kept fixed, and only A2 was adjusted; similarly, in the three-click experiments, only A3 was adjusted. In the following, the intensity I always refers to the peak amplitude A_max of the stimulus pattern, measured in decibel sound pressure level (dB SPL),

I = 20·log10(A_max/A0),

with the standard reference pressure A0 = 20 μPa. For each stimulus, the absolute intensity I70 corresponding to a spike probability of 70% was determined online in the following way. Beginning with a value of 50 dB SPL, the intensity was raised or lowered in steps of 10 dB, depending on whether the previous intensity gave a spike probability lower or higher than 70% from five stimulus repetitions. This was continued until rough upper and lower bounds for I70 were found. From these, a first estimate of I70 was obtained by linear interpolation. Seven intensity values in steps of 1 dB, from 3 dB below to 3 dB above this first estimate, were then repeated 15 times. From the measured spike probabilities, a refined estimate of I70 was obtained by linear regression. Nine intensities from 4 dB above to 4 dB below this value were repeated 30 times (in some experiments 40 times). The final estimate of I70 was determined offline by fitting a sigmoidal function with parameters α and β to these nine intensity-probability pairs. This relation between p and I was then inverted to find the intensity, and thus the absolute values of the amplitudes, that correspond to p = 0.7.
Extraction of L(Δt) and Q(Δt) from iso-response sets
The response functions L(Δt) and Q(Δt) can be obtained independently of each other by combining the results from different measurements within one iso-response set. Here, we derive explicit expressions based on a specific choice of stimuli that are particularly suited for our system. Two measurements are needed to obtain both L(Δt) and Q(Δt) for a given time interval Δt. Each stimulus consists of two clicks. The first click has a fixed amplitude A1; the amplitude A2 of the second click at time Δt later is adjusted so that a predefined spike probability p is reached. For the second measurement, the experiment is then repeated with a “negative” second click, i.e., a click with an air-pressure peak in the opposite direction from the first click. The absolute value of this click amplitude is denoted by Ã2. We thus find the two pairs (A1, A2) and (A1, Ã2) as elements of an iso-response set. Since the spike probability increases with the effective stimulus intensity J, equal spike probability p implies equal J. The two pairs (A1, A2) and (A1, Ã2) therefore correspond to the same value of J. According to the model, equation 1, the click amplitudes thus satisfy the two equations

J = A1²·Q(Δt) + (A1·L(Δt) + A2)²     (equation 5)
J = A1²·Q(Δt) + (A1·L(Δt) − Ã2)²     (equation 6)

Setting the two right sides equal to each other, we obtain

A1·L(Δt) + A2 = A1·L(Δt) − Ã2   or   A1·L(Δt) + A2 = Ã2 − A1·L(Δt).

The first solution of this mathematical equation, Ã2 = −A2, does not correspond to a physical situation, as both A2 and Ã2 denote absolute values and are therefore positive. The remaining, second solution reads

Ã2 − A2 = 2·A1·L(Δt).

Solving for L(Δt), we obtain

L(Δt) = (Ã2 − A2)/(2·A1)     (equation 10)

Substituting L(Δt) from equation 10 in equation 5 or equation 6, we find

J = A1²·Q(Δt) + ((A2 + Ã2)/2)².

This yields

Q(Δt) = c − ((A2 + Ã2)/(2·A1))²     (equation 12)

with c = J/A1². As we keep A1 and J constant throughout the experiment, this determines Q(Δt) up to the constant c.
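Equations 10 and 12 transcribe directly into Python; the snippet below includes a synthetic consistency check in which all numerical values are assumed for illustration:

```python
import numpy as np

def extract_L_Q(a1, a2_pos, a2_neg, c):
    """Recover L and Q at one inter-click interval from an iso-response
    pair: a2_pos (same-direction second click) and a2_neg (absolute value
    of the 'negative' second click), following equations 10 and 12."""
    L = (a2_neg - a2_pos) / (2.0 * a1)
    Q = c - ((a2_pos + a2_neg) / (2.0 * a1))**2
    return L, Q

# consistency check on synthetic values (all numbers assumed)
a1, L_true, Q_true, J = 0.5, 0.4, 0.8, 1.0
root = np.sqrt(J - Q_true * a1**2)
a2_pos = root - L_true * a1              # solves equation 5
a2_neg = root + L_true * a1              # solves equation 6
print(extract_L_Q(a1, a2_pos, a2_neg, c=J / a1**2))   # -> (0.4, 0.8)
```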
The constant c can be inferred from an independent measurement with a single click: by setting A1 = 0 in equation 5, we see that J corresponds to the square of the single-click amplitude that yields the desired spike probability. Alternatively, c can be estimated from the saturation level of Q(Δt) for large Δt, as was done in the present study. The specific form of the effective stimulus intensity, equation 1, led to particularly simple expressions for the response functions L(Δt) and Q(Δt); see equation 10 and equation 12, respectively. Other nonlinearities may result in more elaborate expressions or implicit equations, but this technical complication does not limit the scope of the presented approach.
Data fitting
The datasets for L(Δt) were fitted with the velocity response function of a damped harmonic oscillator,

L(Δt) ∝ exp(−δ·Δt)·[cos(ω·Δt) − (δ/ω)·sin(ω·Δt)],

where ω and δ were optimized to minimize the total squared error. From these, the fundamental frequency f and the decay time constant τ_dec were determined as f = ω/(2π) and τ_dec = 1/δ. A simpler fit function of the form exp(−δ·Δt)·cos(ω·Δt) led to essentially indistinguishable results for f and τ_dec. The resonance frequency, which corresponds to the characteristic frequency, f_CF, of the tuning curve, and the tuning width, Δf_3dB, can be predicted from the fitted values of ω and δ according to the theory of harmonic oscillators:

f_CF = √(ω² + δ²)/(2π),   Δf_3dB = δ/π.

The datasets for Q(Δt) were fitted with an exponential decay,

Q(Δt) = a·exp(−Δt/τ_int) + c,

where the parameters a, τ_int, and c were adjusted. Here, only data points for Δt > 150 μs were taken into account, as Q(Δt) initially shows a rising phase. The obtained value for c was used to determine the constant J/A1² in equation 12. For comparing the predicted values with measurements, the minimum and width of the tuning curves (see Figure 6A) were determined by fitting a quadratic function to the five data points closest to the data point with the smallest intensity.
Model predictions for three-click stimuli
For stimuli consisting of three clicks with amplitudes A1, A2, and A3 that are separated by time intervals Δt1 and Δt2, respectively (see Figure 7A), an approximate equation for the effective stimulus intensity J can be derived in the following way: The first click induces a tympanic vibration proportional to A1 and a membrane potential proportional to A1². Following the second click, the tympanic deflection has become A1·L(Δt1) and is augmented by A2. This yields a membrane potential proportional to (A1·L(Δt1) + A2)². After the third click, the tympanic deflection has evolved to A1·L(Δt1 + Δt2) + A2·L(Δt2), so that the membrane potential is increased by (A1·L(Δt1 + Δt2) + A2·L(Δt2) + A3)². Summing up the different contributions and approximating the influence of the inter-click intervals on the membrane potential by appropriate factors of Q, we find for the effective stimulus intensity

J = A1²·Q(Δt1 + Δt2) + (A1·L(Δt1) + A2)²·Q(Δt2) + (A1·L(Δt1 + Δt2) + A2·L(Δt2) + A3)².

The value of J for a predefined spike probability can be measured from a single-click experiment by setting A1 = A2 = 0 and tuning A3 until the desired spike probability is reached. After having measured L(Δt) and Q(Δt) from two-click experiments, the above equation can be used to predict the amplitude A3 needed to reach this predefined spike probability for any combination of A1, A2, Δt1, and Δt2.
Supporting Information
Protocol S1 General Cascade Model
(50 KB PDF). Click here for additional data file.
Figure S1 Examples of Click Stimuli
The four panels show different examples of stimuli used in our study. Each panel illustrates the computer-generated pulse signal that drives the loudspeaker (upper trace) and the resulting air-pressure fluctuations as measured with a high-precision microphone at the site of the animal's ear (lower trace). The computer-generated clicks are triangular with a total width of 20 μs. The stimuli shown are (A) a single click, (B) a double click with a peak-to-peak interval Δt = 50 μs, (C) a double click with Δt = 500 μs, and (D) another double click with Δt = 500 μs whose second click points in the opposite (“negative”) direction. The measurements of air-pressure fluctuations indicate a slight broadening of the click width and some residual vibrations, but they nevertheless present a good approximation of the sharp original pulses.
(10 KB PDF). Click here for additional data file.
Figure S2 Simulation and Analysis of the General Cascade Model in Response to Two-Click Stimuli
The general cascade model, equation 2 in the main text, was used with filters modeled as l(t) = sin(2πft)·exp(−t/τ_dec) and q(t) = exp(−t/τ_int). The parameters were taken from the first two cells presented in detail in the main text: f = 14.5 kHz, τ_dec = 100 μs, and τ_int = 300 μs for Cell 1 (left column), and f = 5.1 kHz, τ_dec = 154 μs, and τ_int = 590 μs for Cell 2 (right column).
(A and B) Responses of the tympanic vibration. x(t) denotes the signal after application of the linear filter l(t) (arbitrary units), for a positive second click (solid line) and a negative second click (dashed line). Inter-click intervals in the two examples shown were Δt = 80 μs for Cell 1 and Δt = 130 μs for Cell 2.
(C and D) Corresponding responses of J(t). The second click was tuned so that the maximum of J(t) was equal for positive and negative second clicks. This required click amplitudes of size 1.92 and −2.49 relative to the first click for Cell 1, and 2.09 and −1.27 for Cell 2.
(E–H) Filters L(Δt) and Q(Δt) extracted according to equation 1 in the main text from tuning the maximum of J(t) for many different values of Δt (gray dots). The parameters f, τ_dec, and τ_int indicated in the plots were obtained by fitting a damped harmonic oscillator and an exponential function to L(Δt) and Q(Δt), respectively (black lines). The initial part of Q(Δt) shows small fluctuations that result from the oscillatory influx of charge following the tympanic vibrations. In (G), a magnified view of the initial section is shown in the inset.
(138 KB PDF). Click here for additional data file.
15660161
10.1371/journal.pbio.0030008
545996
The association between clinical integration of care and transfer of veterans with acute coronary syndromes from primary care VHA hospitals
Background Few studies report on the effect of organizational factors facilitating transfer between primary and tertiary care hospitals either within an integrated health care system or outside it. In this paper, we report on the relationship between the degree of clinical integration of cardiology services and transfer rates of acute coronary syndrome (ACS) patients from primary to tertiary hospitals within and outside the Veterans Health Administration (VHA) system. Methods Prospective cohort study. Transfer rates were obtained for all patients with ACS diagnoses admitted to 12 primary VHA hospitals between 1998 and 1999. Binary variables measuring clinical integration were constructed for each primary VHA hospital reflecting: presence of an on-site VHA cardiologist; a referral coordinator at the associated tertiary VHA hospital; and/or a referral coordinator at the primary VHA hospital. We assessed the association between the integration variables and overall transfer from primary to tertiary hospitals, using random effects logistic regression, controlling for clustering at two levels and adjusting for patient characteristics. Results Three of twelve hospitals had a VHA cardiologist on site, six had a referral coordinator at the tertiary VHA hospital, and four had a referral coordinator at the primary hospital. Presence of a VHA staff cardiologist on site and of a referral coordinator at the tertiary VHA hospital each decreased the likelihood of any transfer (OR 0.45, 95% CI 0.27–0.77, and OR 0.46, 95% CI 0.27–0.78, p = 0.002, respectively). Conversely, having a referral coordinator at the primary VHA hospital increased the likelihood of transfer (OR 6.28, 95% CI 2.92–13.48). Conclusions Elements of clinical integration are associated with transfer, an important process in the care of ACS patients. In promoting optimal patient care, clinical integration factors should be considered in addition to patient characteristics.
Background Coronary artery disease is the leading cause of death among Americans [ 1 ]. Hospitalization for acute coronary syndromes (ACS), which includes both acute myocardial infarction (AMI) and unstable angina, is common and costly. Many patients admitted with ACS to primary hospitals (i.e. those without on-site cardiology subspecialty services, including cardiac catheterization facilities) are transferred to tertiary hospitals for cardiac catheterization and consideration of coronary revascularization. The coordination and integration between primary and tertiary hospitals has important implications for integrated health care delivery systems. The Veterans Health Administration (VHA) is one of the largest vertically integrated health care delivery systems in the United States [ 2 ]. The VHA is organized in 21 regional networks. Regionalization has been adopted by many integrated health care delivery systems, both to improve quality and to increase efficiency [ 3 - 6 ]. In most VHA regions, a single tertiary hospital is associated with one or more primary hospitals. A particular challenge in the VHA is providing access to sub-specialty cardiology services for patients hospitalized with acute coronary syndromes because primary hospitals are often geographically distant from tertiary hospitals [ 5 ]. Treatment guidelines for acute coronary syndromes [ 7 - 10 ] suggest that some diagnostic tests and therapies can be performed at most primary VHA hospitals, while others, such as cardiac catheterization and coronary revascularization, require transfer to a tertiary hospital. Well-functioning transfer processes are critical to making a policy of regionalization work. In addition, there are strong financial and organizational incentives to provide care within an integrated health care system like VHA rather than referring to non-VHA hospitals, even when this requires transfer to distant tertiary hospitals [ 11 ]. In the VHA, transfers within the system represent cost savings, while transfers out, by and large, represent cost increases. In addition to cost issues, there are also coordination of care concerns that are addressed through within-system transfer, particularly in a system with a common electronic medical record. However, the constraint on within-system transfer is that patients requiring urgent or emergent transfer to receive definitive care should be transferred to the nearest facility with capacity to provide care, even if this requires a transfer out of the system. Issues related to cost differences due to transfer within and outside integrated health care systems are most applicable in the United States, where the multiplicity of payers is a major financial concern; in other countries with integrated national health care, or single payer, systems, these issues are less relevant, although issues of care coordination may still be important. The objective of this study was to evaluate the association between structural components of clinical integration and patient transfer rates from VHA primary hospitals to tertiary hospitals, both within and outside the VHA system for patients with ACS. We hypothesized that primary VHA hospitals with structural components of clinical integration present would have a higher rate of within-system transfer of ACS patients than primary VHA hospitals lacking these components. 
Methods The VHA Access to Cardiology study was a prospective cohort study of 2,733 patients with a primary discharge diagnosis of either acute myocardial infarction (ICD9-CM 410.xx) or unstable angina (ICD9-CM 411.xx) discharged over a one year period (March 1, 1998 through February 28, 1999) from 24 VHA hospitals in five regions, including Minnesota and the Dakotas, the Southwest, the Rocky Mountains, the Pacific Northwest, and Southern California. Patient demographics, clinical characteristics, and specific processes of care including hospital transfer were obtained as part of the Access to Cardiology Study. All patients admitted to one of the 12 primary VHA hospitals in the study were eligible for this analysis (n = 862 out of the 2,733 in the larger Access to Cardiology study). The remaining 12 VHA Medical Centers were tertiary hospitals with cardiology services and cardiac catheterization laboratories on site. These were not the focus of the analysis reported in this paper. We excluded 107 patients because they were initially admitted to a private hospital and transferred into a primary VHA hospital. In addition, we excluded 3 patients who were transferred from one primary VHA hospital to another. Finally, 27 patients had missing data in the variable indicating prior history of congestive heart failure, which was included in the final analysis. As a result, a total of 725 patients from 12 primary VHA hospitals were included in these analyses. The study protocol was approved by the Human Subjects Committee at the University of Washington, and by Institutional Review Boards and Research and Development Committees at each participating VHA hospital. Transfer rates Patient transfer from a primary VHA hospital to a tertiary hospital (either VHA or private) was the primary outcome for this study. Secondary outcomes included both transfer from a primary VHA hospital to a tertiary VHA hospital, and transfer from a primary VHA hospital to a private (non-VHA) tertiary hospital. Transfers to a tertiary VHA hospital were considered transfers within the system, while transfers to a private hospital were considered transfers outside the system. Transfer data were available for all 725 patients in the study cohort. We constructed two binary variables for the analyses: transfer to any tertiary care hospital (yes/no), and transfer to a tertiary VHA hospital versus transfer to a private (non-VHA) hospital. Clinical integration The key independent variable for this study was clinical integration of cardiac services. We defined clinical integration [ 12 , 13 ] as the extent to which patient care services, in this case cardiology consultation services, are coordinated across the units and hospitals in the VHA providing care to cardiology patients. We measured clinical integration of cardiac services using three binary variables to indicate the presence or absence of these structural elements of clinical integration: a) a VHA staff cardiologist on-site at least episodically at the primary VHA hospital (either through a full or part time VHA staff cardiologist on site, or through periodic visits by a VHA staff cardiologist from the affiliated tertiary VHA hospital); b) a referral coordinator at the tertiary referral VHA hospital; and c) a referral coordinator at the primary VHA hospital. Referral coordinators at primary VHA hospitals are generalists, in that they facilitate referrals, transfers, and sometimes consultations for patients with many different kinds of diseases or health problems. 
In contrast, at tertiary VHA hospitals, referral coordinators are often associated with particular sub-specialties, and work closely with these specialty services to provide assistance to referring hospitals and providers in determining whether transfer, referral, or consultation is advisable, and in expediting the processes. These were all hospital-level variables. We combined the two groups, VHA staff cardiologist on site and periodic visits by a VHA staff cardiologist, for two reasons. First, only one of the 12 primary hospitals in the sample had an on-site VHA cardiologist, and the sample size in that group was too small to analyze independently. Second, in our interviews with Chiefs of Cardiology at the tertiary VHA hospitals, there was unanimity in their beliefs that either type of VHA cardiologist being available in a primary hospital produced more appropriate referrals and improved interactions between providers at the primary hospital and the VHA tertiary cardiology service. The data used to construct these measures came from on-site interviews conducted with Chiefs of Cardiology at each of the tertiary VHA hospitals associated with the primary VHA hospitals included in this study. During on-site interviews, Chiefs of Cardiology were asked to describe all of the primary VHA hospitals that refer ACS patients to them on a regular basis, and to identify the presence or absence of each of the structural elements of clinical integration. Interviews followed a structured protocol, ensuring uniform data collection. In all cases, the Chiefs of Cardiology were able to provide detailed information about the services available at both the tertiary and primary VHA hospitals. We also asked the Chiefs of Cardiology about the degree of competitiveness for cardiac services in the local markets for each of the primary VHA hospitals. This was an ordinal variable with three levels: non-competitive, moderately competitive, or highly competitive market. In all cases, the Chief of Cardiology was able to answer the questions about market competition in the primary hospital market without difficulty, indicating considerable awareness of market conditions and the impact these had on their referral base. In addition, we constructed two separate variables to control for patient distance from the primary VHA hospital to which they were initially admitted, and for the distance between primary and tertiary VHA hospitals. The patient distance variable was measured as the distance from the patient's home zip code centroid to the primary VHA hospital. The distance between the primary and tertiary referral VHA hospitals was measured in miles using VHA national databases. We tested different specifications of the distance variables and concluded that it was best to enter the distance between primary and tertiary VHA hospitals as a continuous variable; for patient distance to the primary VHA hospital, the functional form made no difference to the estimation results. In the final analyses, it was dichotomized at greater than or equal to 100 miles – approximately two hours' driving time. The patient distance variable is measured at the patient level, while the hospital distance variable is measured at the hospital level.
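The paper does not specify how the zip-centroid distances were computed; the following is a plausible sketch (hypothetical implementation, assuming a great-circle distance between centroid and hospital coordinates) of constructing the dichotomized patient-distance variable:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two points, e.g. a patient's
    home zip-code centroid and the primary VHA hospital."""
    r = 3958.8                            # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi/2)**2 + math.cos(p1)*math.cos(p2)*math.sin(dlmb/2)**2
    return 2 * r * math.asin(math.sqrt(a))

d = haversine_miles(44.98, -93.27, 46.87, -96.79)   # example coordinates
patient_far = int(d >= 100)               # dichotomized at 100 miles
print(round(d), "miles; far =", patient_far)
```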
We included several measures of patient clinical characteristics: age 65 or over; prior history of chronic obstructive pulmonary disease, bleeding disorder (such as hemophilia or anticoagulation therapy), smoking, prior percutaneous coronary intervention (PCI), or chronic heart failure; having a "Do Not Resuscitate" order; and several measures of the seriousness or urgency of the patient's condition during the index admission to the primary VHA hospital: ST segment elevation on electrocardiogram or elevated cardiac enzymes at presentation, and a composite variable indicating the presence of a serious event during admission. Presence of a serious event during admission was a binary variable taking the value "1" if at least one of the following conditions was present: angina persisting more than 24 hours after admission; hypotensive episode; heart failure during admission; cardiac arrest; or positive stress test during admission. All of these variables were abstracted from the medical record. Analyses We explored the bivariate associations between the clinical integration variables, distance variables, patient characteristic variables, and patient transfer using one-way analysis of variance with the Scheffé correction for multiple comparisons. To construct the most parsimonious models from the full set of candidate independent variables (clinical integration variables and patient characteristics), we used backward stepwise logistic regression, beginning with all available patient clinical characteristics that had been shown to predict mortality outcomes for ACS patients in prior studies; variables were eliminated from the model if their p-value exceeded 0.1 (a schematic sketch of this selection rule is given after the transfer-rate summary below). A number of the candidate variables, including many of the history and co-morbidity variables, were not significant, and we created the summary in-hospital event variable described above, which combined several of the highly significant variables from the index hospital admission (details available from the authors). C-statistics for the final models ranged from 0.77 to 0.85. We used Stata SE version 8.2 for all analyses. We then investigated the relationship between clinical integration of cardiac services and transfer rates using random effects logistic regression [ 14 ], correcting for cluster sampling by hospital and region and controlling for distance and for patient characteristics that reflect cardiac disease severity and may therefore affect the likelihood of transfer. Two models were estimated: one for transfer to any tertiary care hospital, and a second for the conditional probability that a patient was transferred to a VHA tertiary hospital rather than a non-VHA tertiary hospital, given that they were transferred. Random effects logistic regression allowed us to control for the effects of clustering at both the hospital and the regional (Veterans Integrated Service Network, or VISN) level. The intra-class correlation of overall transfer with hospital and VISN jointly was 0.12 (p = 0.006), suggesting the need to control for clustering at both levels. Results Among the 12 primary VHA hospitals included in the sample, the mean rate of transfer was 42% (319 of 725). The mean rate of transfer to a tertiary VHA hospital was 31% (237 of 725), and to a private hospital 11% (82 of 725). Most patients were transferred in order to receive cardiac catheterization or coronary revascularization.
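Before turning to the remaining results, the backward elimination rule described under Analyses can be sketched as follows. This is a minimal Python illustration on simulated data, not the study's actual analysis, which was run in Stata with random effects for hospital and VISN; the variable names and data below are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_stepwise_logit(df, outcome, candidates, p_remove=0.1):
    """Refit a logistic model, dropping the least significant predictor
    until every remaining predictor has p <= p_remove."""
    kept = list(candidates)
    while kept:
        X = sm.add_constant(df[kept])
        fit = sm.Logit(df[outcome], X).fit(disp=0)
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= p_remove:
            return fit          # all remaining predictors meet the criterion
        kept.remove(worst)      # eliminate the weakest variable and refit
    return None

# Hypothetical data: 725 patients, a binary outcome and candidate predictors.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(0, 2, size=(725, 4)),
                  columns=["transfer", "smoker", "chf", "copd"])
final_model = backward_stepwise_logit(df, "transfer", ["smoker", "chf", "copd"])
```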
In addition, 37% of patients were treated in primary VHA hospitals that were over 250 miles from their tertiary referral VHA hospital, and 18% of patients lived over 100 miles from the primary VHA hospital to which they were admitted. Three of the 12 primary VHA hospitals had a VHA cardiologist available at least episodically on site; six had a referral coordinator at the associated tertiary center; and four had a referral coordinator at the primary VHA hospital. The distribution of these components is shown in Figure 1; five of the twelve hospitals had none of the three components of integration. Figure 1 Distribution of integration components across the 12 primary VHA hospitals. Unadjusted associations The bivariate associations between the patient characteristic variables, clinical integration variables, and type of transfer are shown in Table 1. All of the patient characteristics except history of chronic obstructive pulmonary disease were strongly and positively associated with transfer to a tertiary hospital. The distance between primary and tertiary VHA hospitals differed significantly between the three groups, with overall transfer associated with greater distance between the primary and tertiary VHA hospital. The degree of market competition was also significantly associated with transfer, principally to tertiary private hospitals. Each of the three individual components of integration was significantly associated with transfer from primary VHA hospitals.

Table 1 Patient and facility characteristics by transfer type

| Variable | Overall for study sample (N = 755) | Not transferred (N = 436) | Transferred to tertiary VHA hospital (N = 237) | Transferred to tertiary private hospital (N = 82) | p-value* |
|---|---|---|---|---|---|
| Patient age 65 and over | 58.0% | 63.1% | 52.3% | 48.8% | 0.005 |
| Prior medical history | | | | | |
| Chronic obstructive pulmonary disease | 37.1% | 40.6% | 31.9% | 33.7% | 0.067 |
| Bleeding disorder | 3.6% | 2.1% | 5.5% | 6.2% | 0.035 |
| Smoker | 31.6% | 26.8% | 41.8% | 27.2% | <0.001 |
| Prior percutaneous coronary intervention | 15.2% | 11.7% | 21.5% | 15.8% | 0.003 |
| Chronic heart failure | 23.0% | 28.9% | 13.1% | 18.8% | <0.001 |
| Course of index hospital admission | | | | | |
| ST segment elevation on EKG | 17.8% | 12.8% | 19.0% | 39.0% | <0.001 |
| Cardiac enzymes abnormal on presentation | 52.5% | 52.0% | 46.4% | 71.3% | <0.001 |
| Do not resuscitate during hospitalization | 5.3% | 6.7% | 2.1% | 5.2% | 0.039 |
| In-hospital event** | 47.3% | 37.8% | 62.9% | 52.4% | <0.001 |
| Distance, market and integration variables | | | | | |
| Distance from patient home zip code centroid to hospital >100 miles | 18.1% | 15.6% | 21.1% | 22.0% | 0.128 |
| Distance from primary VHA to tertiary VHA hospital (miles) | 281 | 270 | 285 | 326 | 0.045 |
| Degree of market competition (1 = not competitive; 3 = highly competitive) | 1.74 | 1.82 | 1.57 | 1.79 | <0.001 |
| VHA cardiologist on site | 30.6% | 29.8% | 36.3% | 19.5% | 0.015 |
| Tertiary VHA hospital has referral coordinator | 54.7% | 56.4% | 60.8% | 30.5% | <0.001 |
| Primary VHA hospital has referral coordinator | 33.0% | 28.9% | 43.9% | 24.4% | <0.001 |

* p-values from ANOVA testing the difference between means across patients not transferred, transferred to a VHA tertiary hospital, or transferred to a non-VHA tertiary hospital for continuous variables; chi-square tests for categorical variables
** Presence of at least one of the following adverse events during admission: angina persisting more than 24 hours after admission; a hypotensive episode; an episode of heart failure; cardiac arrest; or positive stress test during admission

Risk-adjusted association: transfer to any tertiary care hospital Results of the random effects logistic regressions for transfer to any tertiary care hospital are
shown in Table 2. Patient factors increasing the likelihood of transfer to a tertiary hospital included being a smoker; history of chronic heart failure; ST-segment elevation on the presenting electrocardiogram; an in-hospital event (presence of at least one of the following during admission: angina persisting more than 24 hours after admission; a hypotensive episode; an episode of heart failure; cardiac arrest; or positive stress test during admission); and distance from the patient's home to the hospital of more than 100 miles.

Table 2 Results of random effects logistic regression of transfer to any tertiary care hospital

| Variable | Odds ratio | p-value | Lower limit 95% CI | Upper limit 95% CI |
|---|---|---|---|---|
| Patient age 65 and over | 0.69 | 0.06 | 0.48 | 1.01 |
| Chronic obstructive pulmonary disease | 0.48 | <0.001 | 0.31 | 0.74 |
| Bleeding disorder | 0.68 | 0.04 | 0.47 | 0.98 |
| Smoker | 3.28 | 0.01 | 1.32 | 8.12 |
| Prior percutaneous coronary intervention | 1.30 | 0.18 | 0.89 | 1.91 |
| Chronic heart failure | 2.10 | <0.001 | 1.33 | 3.32 |
| ST segment elevation on presenting electrocardiogram | 2.07 | <0.001 | 1.32 | 3.26 |
| Cardiac enzymes abnormal on presentation | 0.92 | 0.65 | 0.64 | 1.31 |
| Do not resuscitate during hospitalization | 0.29 | <0.001 | 0.12 | 0.65 |
| In-hospital event* | 3.14 | <0.001 | 2.21 | 4.46 |
| Distance from patient home zip code centroid to hospital >100 miles | 1.71 | 0.02 | 1.08 | 2.70 |
| Distance from primary VHA to tertiary VHA hospital (miles) | 0.998 | 0.03 | 0.997 | 0.999 |
| Degree of market competition (1 = not competitive; 3 = highly competitive) | 0.55 | <0.001 | 0.41 | 0.73 |
| VHA cardiologist on site | 0.48 | <0.001 | 0.29 | 0.79 |
| Tertiary VHA hospital has referral coordinator | 0.39 | <0.001 | 0.23 | 0.69 |
| Primary VHA hospital has referral coordinator | 6.53 | <0.001 | 3.29 | 12.98 |

* Presence of at least one of the following adverse events during admission: angina persisting more than 24 hours after admission; a hypotensive episode; an episode of heart failure; cardiac arrest; or positive stress test during admission

Patient factors that decreased the likelihood of transfer to any tertiary hospital included a history of chronic obstructive pulmonary disease or bleeding disorder, and having a do-not-resuscitate (DNR) order during the hospital admission. In addition, the greater the distance between the primary and tertiary VHA hospital, the less likely patients were to be transferred at all, and the more competitive the market for cardiac care, the less likely the patient was to be transferred to a tertiary care hospital. All three components of integration were significantly associated with transfer to tertiary care, although in different directions. After adjustment for patient and other characteristics, the presence of a VHA staff cardiologist and having a referral coordinator at the tertiary VHA hospital decreased the likelihood of transfer to any tertiary care hospital. In contrast, the presence of a referral coordinator at the primary VHA hospital increased the probability of transfer to a tertiary hospital. Risk-adjusted association: transfer to tertiary VHA hospital vs. tertiary non-VHA hospital The results of this analysis are shown in Table 3. Patient factors associated with transfer to a VHA rather than a private tertiary hospital included a prior history of percutaneous coronary intervention and a history of chronic heart failure. Patient factors associated with transfer to a private rather than a VHA tertiary hospital included ST-segment elevation on the presenting electrocardiogram, abnormal cardiac enzymes on presentation, and the presence of a do-not-resuscitate order during the hospitalization.
Table 3 Results of conditional random effects logistic regression of transfer to a VHA tertiary care hospital compared to a private tertiary care hospital

| Variable | Odds ratio | p-value | Lower limit 95% CI | Upper limit 95% CI |
|---|---|---|---|---|
| Patient age 65 and over | 1.42 | 0.29 | 0.75 | 2.71 |
| Chronic obstructive pulmonary disease | 0.56 | 0.15 | 0.25 | 1.23 |
| Bleeding disorder | 1.10 | 0.75 | 0.60 | 2.03 |
| Smoker | 1.14 | 0.84 | 0.34 | 3.77 |
| Prior percutaneous coronary intervention | 3.67 | <0.001 | 1.91 | 7.04 |
| Chronic heart failure | 2.05 | <0.001 | 1.43 | 2.95 |
| ST segment elevation on presenting electrocardiogram | 0.27 | <0.001 | 0.14 | 0.51 |
| Cardiac enzymes abnormal on presentation | 0.30 | 0.02 | 0.11 | 0.81 |
| Do not resuscitate during hospitalization | 0.14 | <0.001 | 0.04 | 0.54 |
| In-hospital event* | 1.47 | 0.31 | 0.70 | 3.08 |
| Distance from patient home zip code centroid to hospital >100 miles | 2.10 | 0.10 | 0.86 | 5.10 |
| Distance from primary VHA to tertiary VHA hospital (miles) | 1.00 | 0.35 | 0.99 | 1.00 |
| Degree of market competition (1 = not competitive; 3 = highly competitive) | 0.19 | 0.06 | 0.03 | 1.05 |
| VHA cardiologist on site | 1.17 | 0.85 | 0.23 | 6.06 |
| Tertiary VHA hospital has referral coordinator | 20.62 | <0.001 | 4.50 | 94.47 |
| Primary VHA hospital has referral coordinator | 1.38 | 0.69 | 0.27 | 6.99 |

* Presence of at least one of the following adverse events during admission: angina persisting more than 24 hours after admission; a hypotensive episode; an episode of heart failure; cardiac arrest; or positive stress test during admission

The degree of market competition was not significantly associated with transfer to a VHA versus a private tertiary hospital. Neither of the distance variables was associated with transfer to either VHA or non-VHA tertiary hospitals. Furthermore, only one of the individual integration variables entered separately was significantly associated with the likelihood of transfer to a tertiary VHA versus a private hospital, and although the parameter estimate for the variable indicating the presence of a referral coordinator at the tertiary hospital was large and significant, it was very imprecise (i.e., it had a large standard error). This is probably due to the relatively small number of patients included in the estimation (N = 319) and uneven splits among hospitals, clustered by VISN. Discussion The goal of this study was to investigate the association between measures of clinical integration of care and transfer of patients with acute coronary syndromes in the VHA. In particular, we evaluated whether structural components of clinical integration, such as the presence of referral coordinators and on-site cardiologists, were associated with patient transfer within and/or outside of the VHA healthcare system. In multivariate analysis, the presence of referral coordinators located at primary care VHA hospitals increased the overall likelihood of transfer of ACS patients. In contrast, having a VHA staff cardiologist available or a referral coordinator at a tertiary VHA hospital significantly decreased the likelihood of any transfer to a tertiary care hospital. Finally, we found that only one of the three integration components, presence of a referral coordinator at the tertiary VHA hospital, was significantly associated with transfer to a tertiary VHA hospital compared to a non-VHA tertiary hospital. Our finding that referral coordinators at primary care hospitals increase the likelihood of transfer to tertiary care hospitals is consistent with prior studies demonstrating that referral coordinators increase the ease of referral and the frequency of transfer [ 5 , 15 - 19 ].
Presence of a referral coordinator at the primary hospital means that a knowledgeable staff person, not a physician but usually a clinician such as a nurse, is available to coordinate and facilitate what can otherwise be a very cumbersome process of referral and transfer. This individual typically locates and communicates with tertiary care providers and facilitates the paperwork and other processes required for patient transfer. Our finding that the presence of a referral coordinator at a tertiary VHA hospital was negatively associated with transfer may, however, appear paradoxical. It is possible that referral coordinators at the tertiary centers facilitate consultation, which may, at least for lower-risk patients, appropriately reduce the need for transfer. It is of some concern, however, that these referral coordinators may be serving in a gatekeeper role with regard to transfer decisions; future research should focus on the role of these referral coordinators and the decision-making associated with them. Of note, when transfer did occur, the presence of a referral coordinator at the tertiary VHA hospital was positively associated with transfer to VHA rather than non-VHA facilities. This suggests that referral coordinators may function differently with different kinds of patients, decreasing overall transfer rates but facilitating within-system transfer when transfer occurred. In general, we found that transfers to tertiary care were largely associated with patient characteristics that make transfer appropriate: sicker and more urgent patients, except those for whom more intensive care may not be indicated (e.g. DNR status), were significantly more likely to be transferred. In particular, patients with ST-segment elevation on their presenting electrocardiogram and abnormal cardiac enzymes were significantly more likely to be transferred, presumably for coronary revascularization. These patients are the most likely to benefit from revascularization [ 5 , 9 ], and their higher probability of transfer suggests that appropriate triage and risk stratification took place in the primary VHA hospitals providing their care. In addition, we found that these patients were more likely to be transferred to non-VHA tertiary hospitals, presumably because these hospitals were closer to the primary VHA hospital than the affiliated tertiary VHA hospital, indicating appropriate out-of-system transfer for the most urgent patients, who could benefit from rapid access to tertiary care. The finding that DNR status appears to be associated with transfer to a private rather than a VHA tertiary hospital may be due to small cell size, combined with other characteristics of the small number of patients with that status among those transferred at all (9 of 319). Distance between the patient's home and the primary VHA hospital was significantly associated with an increased likelihood of subsequent transfer to a tertiary care hospital. This may indicate that patients who live further from the hospital take longer to present and are therefore sicker on arrival, requiring higher levels of care. Also of interest, distance between primary and tertiary VHA hospitals was significantly associated with a decreased likelihood of transfer, indicating that where primary and tertiary VHA hospitals are further apart, primary VHA hospitals may elect to keep more ACS patients rather than transfer them.
Future research is needed on the appropriateness of transfer of ACS patients, as it is not clear that variation in transfer based on distance between hospitals represents appropriate variation in care. The finding that cardiologist availability at primary VHA hospitals was associated with less transfer to tertiary care hospitals may reflect that local or distant cardiology consultation was sufficient in some cases (e.g., for lower-risk patients) to avoid transfer. Similarly, the availability of a transfer coordinator at the tertiary VHA hospital may have provided an avenue for consultation and avoidance of transfer in some cases. Future studies are needed to define the mechanisms linking reduced transfers to both on-site cardiology availability and tertiary hospital transfer coordinators. The findings of this study have potential application outside the VHA: referral coordination is associated with transfer from primary to tertiary hospitals, but it may operate differently for different types of patients, and it may work through one mechanism within a health care system and another outside that system. Previous studies [ 20 ] have found that patients' access to needed services, such as revascularization after acute myocardial infarction, has a significant effect on mortality outcomes. Services such as referral coordination, which increase the likelihood that a patient will be transferred, can reduce the negative impact of receiving initial care in a hospital without specialized tertiary services, such as cardiac catheterization. These findings are potentially relevant in all health care systems where hospitals have different levels of service. Even though they are based on a relatively small patient sample, the implications of the findings (that referral coordinators at primary hospitals increase the probability of transfer, which is linked to better outcomes at tertiary centers [ 21 ] with a full range of treatment options) should spark discussion within a health care system such as the VHA about recommending the use of referral coordinators in primary hospitals. Limitations First, we were not able to conduct full-scale validation and reliability testing of the clinical integration measures, which would have required a larger sample of participating hospitals to permit split-sample validation. Second, we used structural, rather than process, elements of integration in this analysis. We focused on structural elements both because they are relatively easy to measure (present or not) and because, in Donabedian's widely accepted model of quality in health care, structure precedes process and outcome [ 22 , 23 ]. Third, clinical integration is a complex, multi-faceted construct that we captured in a relatively simple way. However, we wanted to see whether measures that would be straightforward to implement in a health care system like the VHA, such as referral coordinators, had an impact on this key process of care. We measured other components of integration, including communication methods, provider satisfaction with communication methods, and overall perceptions of how well referral and consultation worked in providing care to ACS patients. Individually, these factors were not as strongly linked to the transfer process as the three structural components we present in this analysis.
Fourth, because transfer is closely related to patient outcomes, especially for ACS patients [ 21 ], careful modeling of the relationship between transfer and mortality and morbidity outcomes is essential. We plan to conduct future analyses on the relationships between patient characteristics, transfer, and mortality and morbidity outcomes. In addition, it is important to note that most veterans over the age of 65 are dually eligible for Medicare as well as VHA benefits, and previous analyses have shown that a majority of veterans with acute myocardial infarction, even among those who use VHA hospitals, receive care for AMI in private hospitals [ 24 , 25 ]. This study was designed only to assess transfer of veterans who went to primary VHA hospitals for their ACS care. Conclusions We found that referral coordinators located at primary care VHA hospitals increase the overall likelihood of transfer of ACS patients. Referral coordinators at tertiary VHA hospitals and the presence of on-site cardiologists appeared to decrease the likelihood of transfer. Only one component of integration, presence of a referral coordinator at the tertiary hospital, was associated with within-system compared to out-of-system transfer. These findings have significant potential implications for the VHA. One of the goals of an integrated health care system is to maintain optimal coordination between its component parts [ 12 ]. This study demonstrates that simple structural components of care, such as a referral coordinator at either a primary or tertiary care hospital, can have an impact on a key process of care above and beyond patient characteristics. Competing interests The author(s) declare that they have no competing interests. Authors' contributions AES participated in the design and conduct of the study, conducted the analyses and wrote the manuscript. SLP participated in conducting the project, and assisted in writing the manuscript. DJM participated in writing the manuscript. NRE participated in the design and conduct of the study. NDS participated in writing the manuscript. JSR participated in the statistical analyses and co-wrote the manuscript. All authors read and approved the final manuscript. Pre-publication history The pre-publication history for this paper can be accessed here:
15649313
10.1186/1472-6963-5-2
509236
Improving the scaling normalization for high-density oligonucleotide GeneChip expression microarrays
Background Normalization is an important step in microarray data analysis, minimizing biological and technical variations, and choosing a suitable approach can be critical. The default method for GeneChip expression microarrays applies a constant factor, the scaling factor ( SF ), to every gene on an array. The SF is obtained from a trimmed average signal of the array after excluding the 2% of the probe sets with the highest and the lowest values. Results Among the 76 U34A GeneChip experiments, the total signals on each array showed 25.8% variation in terms of the coefficient of variation, although all microarrays were hybridized with the same amount of biotin-labeled cRNA. The 2% of the probe sets with the highest signals, normally excluded from the SF calculation, accounted for 34% to 54% of the total signals (40.7% ± 4.4%, mean ± sd). In comparison with normalization factors obtained from the median signal or from the mean of the log-transformed signals, SF showed the greatest variation; the normalization factors obtained from log-transformed signals showed the least. Conclusions Eliminating 40% of the signal data during SF calculation showed no benefit, whereas normalization factors obtained from log-transformed signals performed best. We therefore suggest using the mean of the logarithm-transformed data, rather than the arithmetic mean of the signals, for normalization of GeneChip gene expression microarrays.
Background The high-density oligonucleotide microarray, also known as the GeneChip ® , made by Affymetrix Inc (Santa Clara, CA), has been widely used in both academic institutions and industrial companies, and is considered the "standard" gene expression microarray among several platforms. A single GeneChip ® can hold more than 50,000 probe sets, covering every gene in the human genome. A probe set is a collection of probe pairs that interrogates the same sequence, or set of sequences, and typically contains 11 probe pairs of 25-mer oligonucleotides [ 1 - 3 ]. Each pair contains the complementary sequence to the gene of interest, the so-called perfect match (PM), and a specificity control, the mismatch (MM) [ 3 ]. The gene expression level is calculated from the hybridization intensities of the probe pairs and is referred to as the "signal" [ 4 - 10 ]. The normalization method used in the GeneChip software is called scaling and is defined as an adjustment of the average signal value of all arrays to a common value, the target signal value, in order to make data from multiple arrays comparable [ 4 , 11 ]. The purpose of data normalization is to minimize the effects of experimental and/or technical variations so that meaningful biological comparisons can be made and true biological changes can be found among multiple experiments. Several approaches have been proposed and shown to be effective and beneficial, mostly in studies on two-color spotted microarrays [ 12 - 19 ]. Some authors proposed normalizing the hybridization intensities, while others preferred to normalize the intensity ratios. Some used global, linear methods, while others used local, non-linear methods. Some suggested using spike-in controls, house-keeping genes, or invariant genes, while others preferred all the genes on the array. For GeneChip data, different models have been proposed to normalize signal values or probe pair values [ 10 , 20 - 24 ]. Despite the presence of other alternatives, many biologists still use the default scaling method and consider it satisfactory and useful for identifying biological alterations [ 23 , 25 , 26 ]. With the increasing awareness and use of GeneChip technology, and the willingness of many biologists to continue using the GeneChip software, it is worth improving the performance and correcting the problems of the software. In this report, the author demonstrates that excluding the 2% of probe sets with the highest and the lowest values in the scaling algorithm has little benefit, whereas logarithmic transformation of the signal values prior to scaling proved to be the optimal normalization strategy and is strongly recommended. Results The statistical algorithm in the current GeneChip software (MAS 5 and GCOS 1) for gene expression microarray data has eliminated negative gene expression values, a problem present in earlier versions of the software [ 5 , 7 ]. It uses a robust averaging method based on the Tukey biweight function to calculate the gene expression level from the logarithm-transformed hybridization data [ 3 - 5 , 11 ]. The reported value for a probe set is the antilog of the Tukey biweight mean multiplied by an SF and/or a normalization factor ( NF affy ). When both the SF and NF affy are equal to 1, there is no normalization or manipulation of the original data. Both NF affy and SF are computed in virtually the same way.
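As an aside, the one-step Tukey biweight mean mentioned above can be sketched as follows. This is an illustration, not Affymetrix's implementation; the tuning constants c = 5 and epsilon = 0.0001 are values commonly quoted for MAS 5 and should be treated as assumptions here.

```python
import numpy as np

def one_step_tukey_biweight(x, c=5.0, epsilon=1e-4):
    """Robust average that down-weights values far from the median."""
    x = np.asarray(x, dtype=float)
    m = np.median(x)
    mad = np.median(np.abs(x - m))                   # median absolute deviation
    u = (x - m) / (c * mad + epsilon)                # scaled distance from median
    w = np.where(np.abs(u) < 1, (1 - u**2) ** 2, 0.0)  # bisquare weights
    return np.sum(w * x) / np.sum(w)

# Applied to the log-transformed probe-pair values of one probe set
# (11 hypothetical PM-MM differences); the antilog gives the reported signal.
log_values = np.log2([120, 135, 150, 80, 145, 160, 900, 140, 130, 155, 148])
signal = 2 ** one_step_tukey_biweight(log_values)
```

The reported signal is then multiplied by the SF and/or NF affy, whose calculation is described next.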
NF affy is calculated in comparison analysis, comparing the array average of one experiment with that of a baseline experiment, while SF is obtained in absolute analysis by comparing the signal average of one experiment with a common value, the target signal [ 3 - 5 , 11 , 22 ]. The average value used in GeneChip is a trimmed average: it is calculated not from all probe sets, but from the 96% of probe sets remaining after the 2% with the highest and the 2% with the lowest signals are removed. In this report, a total of 76 experiments with the rat U34A GeneChip were analyzed. As shown in Table 1 , the total hybridization signals varied although all arrays were hybridized with the same amount of biotin-labeled cRNA and scanned with the same scanner at identical settings. The array with the highest hybridization intensities had 2.8 times the total signal of the array with the lowest. The average array signals showed 25.8% variation in terms of the coefficient of variation. The mean signals were significantly greater than the median signals on each array, indicating a non-normal distribution. The density plot showed a long-tailed, skewed distribution (not shown), and the average of such data is known to be sensitive to the larger values in the data set. The rat U34A GeneChip contains 8799 probe sets; hence 2% corresponds to about 176 probe sets. The sum of the 2% of probe sets with the lowest signals accounted for less than 0.1% of the total signals (0.05% ± 0.01%, mean ± SD, n = 76), and its impact on the SF calculation can be ignored. However, the sum of the 2% of probe sets with the highest signals, the TrimTotal as used in this report, was responsible for about 40% of the total signals (from 34% to 54%, Table 1 ). The remaining 96% of the probe sets used for the SF calculation produced only about 60% of the signals. Excluding 4% of the probe sets did not reduce the variation but rather slightly increased it, which in turn resulted in a wider range of SF s (Table 1 ). It was also found that the TrimTotal was highly correlated with the total signal (R = 0.928), but less so with the medians (R = 0.536) and the mean of the log signals (R = 0.643). The trimmed percentage ( Tp ) was negatively associated with the median (R = 0.558, b = -1.116) and the mean of the log signals (R = 0.495, b = -0.968), but not with the total signal of all probe sets. Among other approaches to global linear normalization, one can also use the median signal or the mean of the logarithm-transformed signals to calculate the NF. NFLogMean showed a higher correlation with NFMedian than with SF , and there were larger differences between NFLogMean and SF than between NFLogMean and NFMedian (Fig. 1 ). To test whether the larger difference was a result of removing 4% of the probe sets from the calculation, another NF, NFTrimLogMean , was obtained using the same data as for SF but with a log transformation. There is a very significant correlation between NFTrimLogMean and NFLogMean (R = 0.9998); the 4% of probe sets removed from the NFTrimLogMean calculation reduced the total data by only 4% after log transformation. Since it is impossible to obtain the true normalization factor, the average of the four global linear NF s mentioned above was used to estimate the 'true' NF. To compare them with the true NF, a score ( NFscore ) is introduced: each NF is scored against the respective 'true' NF to obtain its NFscore .
The average NFscore (± SD) was 7.01% (± 6.24%), 4.51% (± 3.48%), 2.25% (± 2.33%) and 1.95% (± 1.61%), and the sum of NFscore was 5.33, 3.43, 1.71 and 1.48 for SF , NFMedian , NFTrimLogMean and NFLogMean , respectively (Fig. 1 ). The sum of NFscore indicates the accumulated deviation from the true NF; the larger the number, the larger the accumulated deviation. A fifth NF, obtained from the arithmetic mean of all probe sets on the array, was also added to the NFscore calculation and comparison, and the results supported the same conclusion (data not shown). It is fair to conclude that NFLogMean produced the least variation. Discussion Logarithmic transformation is a well-accepted approach for stabilizing variance and has become a common choice for data transformation and normalization for spotted microarrays [ 12 , 16 ]. Much improvement has been made in GeneChip microarray technology and the accompanying software during the past few years. The current version of the GeneChip software has improved its performance over the earlier versions that used the Average Difference to express levels of gene expression [ 3 , 4 ]. However, the normalization algorithm was inherited and remains the only and default option for gene expression data processing in both MAS 5 and the newly released GeneChip Operating Software (GCOS). Both continue to use the arithmetic mean of signals to obtain the SF in absolute analysis (single array) and the NF in comparison analysis (two arrays) [ 3 - 5 , 7 , 11 , 22 ]. It is clearly shown here that the trimmed average and the resulting SF had a larger variance than the median-based NF or the NF based on the mean of log-transformed signals. Similar results were observed for other GeneChip expression arrays, such as the mouse U74A and the human U133A (data not shown). Elimination of the highest and the lowest 2% of the probe set signals did not stabilize the trimmed means; given that this trimming discards about 40% of the total signal, the approach cannot be considered optimal. The logarithmic transformation of signals stabilized the variation well and made the normalization process much less dependent on the mean and less affected by outliers. Although simple and popular, global linear normalization has its drawbacks, especially when the relationship among multiple experiments or genes is not linear. To address such problems, several methods have been proposed for local and non-linear normalization [ 12 , 14 - 17 , 20 , 22 , 27 ]. Data normalization is a critical step in the microarray data mining process. The use of different normalization approaches may have a profound impact on the selection of differentially expressed genes and on conclusions about the underlying biological processes, especially when subtle biological changes are investigated [ 12 , 16 , 28 ]. Conclusions Normalization of microarray data allows direct comparison of gene expression levels among experiments. A global linear normalization, called scaling, has been widely used in GeneChip microarray technology for gene expression analysis. The scaling factor ( SF ) is calculated from a trimmed average of gene expression levels after excluding the 2% of the data points with the highest and the lowest values. It is shown here that the 2% of probe sets with the highest signals contained from 34% to 54% of the total signals. Elimination of these outliers did not reduce, but rather increased, the variation among multiple arrays.
Instead, normalization factors obtained from the mean of the log-transformed signals performed best. Thus, the current scaling method, although widely used, is not optimal and needs further improvement. The mean of the logarithm-transformed signals is highly recommended for calculating normalization factors. Methods GeneChip experiments and data Total RNA was isolated from rat tissues or cells in Trizol reagent and purified with the Qiagen RNeasy kit. cDNA was synthesized in the presence of oligo(dT)24-T4 (Genset Corp, La Jolla, CA), and biotinylated UTP and CTP were used to generate biotin-labeled cRNA according to the recommended protocols [ 29 ]. The rat genome microarray, U34A GeneChip (Affymetrix Inc., Santa Clara, CA), was hybridized with 15 μg of gel-verified fragmented cRNA. Hybridization intensity was scanned with a GeneArray 2500 scanner (Agilent, Palo Alto, CA) and processed with Microarray Suite (MAS) 5.0 software [ 4 ]. Data from a total of 76 independent GeneChip experiments were used in this study. Normalization factor (NF) Gene expression data exported from MAS 5.0 were submitted to a Perl script to calculate the different normalization factors. In the scaling approach, a trimmed average signal is calculated after excluding the 2% of probe sets with the highest and the 2% with the lowest signal values. The scaling factor ( SF ) is obtained using equation (1) in comparison with a chosen fixed number, the target signal ( TS ), and was verified against the results from MAS 5.0 at the same settings [ 3 , 4 , 11 ]:

SF_j = TS / S_TrimMean,j     (1)

The other normalization factors for comparison were obtained as follows:

NFMedian_j = TS / S_med,j     (2)

NFLogMean_j = 2^(nf_j), with nf_j = log2(TS) - (1/n) * sum_{i=1..n} log2(S_ij)     (3)

where i = 1, ..., n indexes the probe sets, j = 1, ..., J indexes the array experiments, S_i is the signal, i.e. the antilog of a robust average (Tukey biweight) of log(PM - MM) reported by MAS 5.0 [ 5 ], S_med,j is the median signal on array j , and S_TrimMean,j is the trimmed average on array j after excluding the 2% of probe sets with the highest and the lowest signals [ 3 , 4 , 11 , 22 ]. NFMedian_j is obtained from the median signal on array j , and NFLogMean_j from the mean of the log-transformed signals. TS was set to 150, 38 and 38 for SF , NFMedian and NFLogMean , respectively, in order to obtain NFs of similar magnitude. To compare the different NFs, a score, NFscore , is introduced:

NFscore_j = | NF_j - TrueNF_j | / TrueNF_j , with TrueNF_j = ( SF_j + NFMedian_j + NFLogMean_j + NFTrimLogMean_j ) / 4

where NFTrimLogMean_j was calculated from equation (3) after excluding the 2% of probe sets with the highest and the lowest signals, and TrueNF_j served as the 'true' NF. The sum of NFscore over all arrays, sum_{j=1..J} NFscore_j , was used as a measure of accumulated deviation. Other analysis Unless otherwise specified, logarithm transformation was carried out with base 2. The trimmed total signal, TrimTotal , is the sum of the signals from the 2% of probe sets with the highest signal values; the total signal, Total , is the sum of the signals of all probe sets on the array; and the trimmed percentage is Tp_j = ( TrimTotal_j / Total_j ) × 100%. Abbreviations GeneChip ® is a registered trademark of Affymetrix Inc. PM: perfect match; MM: mismatch; SF: scaling factor; NF: normalization factor; TS: target signal. Short phrase: Normalization of GeneChip microarray data
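Putting equations (1)-(3) and the NFscore together, the factors compared in this paper can be sketched as below. Note that the definition of nf_j in equation (3) is reconstructed from the surrounding text, and the signal data here are simulated, so this is an illustrative sketch rather than the author's Perl script.

```python
import numpy as np

def normalization_factors(signals, ts_sf=150.0, ts_log=38.0, trim=0.02):
    """Return (SF, NFMedian, NFLogMean, NFTrimLogMean) per equations (1)-(3),
    using the target signals reported in the text (150 for SF, 38 otherwise)."""
    s = np.sort(np.asarray(signals, dtype=float))
    k = int(round(trim * s.size))                  # ~2% of the probe sets
    trimmed = s[k:s.size - k]                      # drop lowest and highest 2%
    sf = ts_sf / trimmed.mean()                    # (1) scaling factor
    nf_median = ts_log / np.median(s)              # (2) median-based NF
    nf_logmean = 2 ** (np.log2(ts_log) - np.log2(s).mean())          # (3)
    nf_trimlogmean = 2 ** (np.log2(ts_log) - np.log2(trimmed).mean())
    return sf, nf_median, nf_logmean, nf_trimlogmean

# NFscore of each factor against the average of the four ('true') NFs.
rng = np.random.default_rng(1)
signals = rng.lognormal(mean=5.0, sigma=1.5, size=8799)  # hypothetical U34A array
nfs = np.array(normalization_factors(signals))
true_nf = nfs.mean()
nf_scores = np.abs(nfs - true_nf) / true_nf
```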
15283861
10.1186/1471-2105-5-103
548518
"Harnessing genomics to improve health in Africa" – an executive course to support genomics policy
Background Africa in the twenty-first century is faced with a heavy burden of disease, combined with ill-equipped medical systems and underdeveloped technological capacity. A major challenge for the international community is to bring scientific and technological advances like genomics to bear on the health priorities of poorer countries. The New Partnership for Africa's Development has identified science and technology as a key platform for Africa's renewal. Recognizing the timeliness of this issue, the African Centre for Technology Studies and the University of Toronto Joint Centre for Bioethics co-organized a course on Genomics and Public Health Policy in Nairobi, Kenya, the first of a series of similar courses to take place in the developing world. This article presents the findings and recommendations that emerged from this process, recommendations which suggest that a regional approach to developing sound science and technology policies is the key to harnessing genome-related biotechnology to improve health and contribute to human development in Africa. Methods The objectives of the course were to familiarize participants with the current status and implications of genomics for health in Africa; to provide frameworks for analyzing and debating the policy and ethical questions; and to begin developing a network across different sectors by sharing perspectives and building relationships. To achieve these goals the course brought together a diverse group of stakeholders from academic research centres, the media, and non-governmental, voluntary and legal organizations to stimulate multi-sectoral debate around issues of policy. Topics included scientific advances in genomics, innovation systems and business models, international regulatory frameworks, and ethical and legal issues. Results Seven main recommendations emerged: establish a network for sustained dialogue among participants; identify champions among politicians; use the New Partnership for Africa's Development (NEPAD) as an entry point onto the political agenda; commission an African capacity survey in genomics-related R&D to determine areas of strength; undertake a detailed study of R&D models with demonstrated success in the developing world (e.g. China, India, Cuba, Brazil); establish seven regional research centres of excellence; and create sustainable financing mechanisms. A concrete outcome of this intensive five-day course was the establishment of the African Genome Policy Forum, a multi-stakeholder forum to foster further discussion on policy. Conclusion With African leaders engaged in the New Partnership for Africa's Development, science and technology is well poised to play a valuable role in Africa's renewal, by contributing to economic development and to improved health. Africa's first course on Genomics and Public Health Policy aspired to contribute to the effort to bring this issue to the forefront of the policy debate, focusing on genomics through the lens of public health. The process that led to this course has served as a model for three subsequent courses (in India, Venezuela and Oman) and for the establishment of similar regional networks on genomics and policy, which could form the basis for inter-regional dialogue in the future.
Background Inequities in global health continue to be among the major challenges facing the international community [ 1 ]. Despite tremendous advances in medicine, the benefits of science and technology have yet to make a major impact on the health and quality of life of the majority of the world's population. Recognizing its fundamental role as an engine of development, the New Partnership for Africa's Development (NEPAD) has identified science and technology as a key platform for Africa's renewal [ 2 ]. A major challenge for Africa, and for the entire international community, is to bring scientific and technological advances to bear on the health priorities of poorer countries [ 3 , 4 ]. Africa in the twenty-first century is faced with a heavy burden of disease, combined with ill-equipped medical systems and underdeveloped technological capacity [ 5 ]. The crippling poverty in many countries on the continent contributes to the disease burden and hampers countries' ability to address the problem adequately [ 6 ]. While Africa's response to its health challenges has varied considerably across the continent, with governments traditionally placing less emphasis on developing S&T than on other sectors [ 7 ], there has been ongoing R&D activity in genomics and related fields of technology over the past several years in various parts of the region. The African Medical Research Foundation (AMREF), Africa's largest indigenous health charity, has for nearly half a century made an important contribution to addressing health challenges in Africa through partnerships with local communities, governments and donors [ 8 ]. A number of centres of excellence have emerged across the continent in recent decades, including the International Centre of Insect Physiology and Ecology (ICIPE) in Nairobi, where important work has been done to uncover the role of insects in the transmission of infection, and the Institute for Molecular and Cell Biology-Africa (IMCB-A), founded in 1999 to study the molecular mechanisms of tropical infections. A further example is the new Biosciences Facility for Eastern and Central Africa, recently launched as part of a NEPAD initiative [ 9 ]. NEPAD, which has been adopted by the United Nations General Assembly as Africa's development framework, has called "for the establishment of regional platforms with concrete actions to build and strengthen Africa's competence to harness and use new technologies for human development" [ 2 ]. Its strategy acknowledges that Africa will have to overcome considerable challenges, including creating adequate regulatory and biosafety frameworks, building scientific capacity, and developing integrated systems of innovation. In March 2002, the African Centre for Technology Studies (ACTS) and the University of Toronto Joint Centre for Bioethics (JCB) co-organized an intensive five-day Course on Genomics and Public Health Policy in Nairobi, Kenya, bringing together scientists, policy makers, journalists, lawyers and NGOs from ten African countries to discuss, collectively, the question of "How best to harness genomics to improve health in Africa?" This course was sponsored by Genome Canada, the International Development Research Centre, and the African Centre for Technology Studies, through the Norwegian Agency for Development Co-operation. The primary goal of the course was to familiarize participants with the potential of genomics and related biotechnologies to address health needs in Africa.
This article presents the findings and recommendations that emerged from this process, and suggests how such courses might be more broadly employed as a method for bringing together opinion leaders to share ideas and work collectively to develop practical policy solutions. Methods The programme was planned collaboratively by the African Centre for Technology Studies and the Joint Centre for Bioethics. The basic layout of the sessions and their topics was modelled on a prior course held in Toronto, Canada in May 2002. The programme was organized in line with the objectives outlined in Table 1 . Course participants as well as session leaders were identified on the basis of recommendations from recognized experts in the region and through literature searches. Many session leaders were local experts, well placed to contextualize the "new science" of genomics within the frame of concerns and realities particular to Africa. Care was taken to select participants representing a range of interests and backgrounds, including individuals from science, economics, law, government, the press, and non-governmental organizations. Such diversity was sought in recognition of the importance of "cross-pollination" on a multifaceted topic like genomics, of the need for multiple actors to take part in building policy, and of their role in mediating the dialogue between policymakers and the public. In total, 30 participants attended; the countries and institutions they represented are listed in Table 2 . Despite concerted efforts to draw a balanced group, the participant list reveals a markedly high proportion of academics, and indeed no representatives from industry. Moreover, only three of the participants were women. The organizers covered all costs of attending the course (transportation, hotel accommodation, and meals), so that inability to pay would not be an inhibiting factor for those who wished to participate.
Table 1 Objectives of the course
• To familiarize participants with the current status and implications of genomics and biotechnology for health in Africa, and to provide information relevant to public policy
• To provide frameworks for analyzing and debating the policy issues and related ethical questions, and to help understand, anticipate and possibly influence the legal and regulatory frameworks which will operate, both nationally and internationally
• To begin developing an opinion leaders network across different sectors (industry, academia, government, and voluntary organizations) by sharing perspectives and building relationships

Table 2 Countries and institutions represented
African Centre for Technology Studies, Kenya
African Malaria Vaccine Testing Network (AMVTN), Tanzania
African Medical and Research Foundation, Kenya
Centre for the Development of People (CEDEP), Uganda
Chemistry Department, University of Zambia, Zambia
Department of Biochemistry, University of Khartoum, Sudan
Department of Epidemiology of Parasitic Disease, National School of Medicine and Pharmacy, Mali
Department of Obstetrics and Gynecology, Assiut University, Egypt
Department of Pathology, Makerere University, Uganda
Department of Virology, University of Ibadan, Nigeria
Division of Human Genetics, Faculty of Health Sciences, University of Cape Town, South Africa
Dysmorphology and Alcohol Pharmacokinetics in Fetal Alcohol Syndrome, South Africa
Federal Ministry of Science and Technology, Nigeria
Inter-Region Economic Network (IREN), Kenya
Journalists Against AIDS (JAAIDS), Nigeria
Lawyer, Kenya
Maternal, Child and Women's Health, Department of Health, Western Cape Province, South Africa
Molecular Biology Research Facility, Nelson R Mandela School of Medicine, South Africa
National Council for Science and Technology, Kenya
National Health Laboratory Service and Division of Human Genetics, University of the Witwatersrand, South Africa
School of Public Health, University of Ghana, Ghana
Science and Development News, and BiotekAfrika, Kenya
Science Secretary, Uganda Council for Science and Technology, Uganda
The People Newspaper, Kenya

Because of the diversity of the participants, no background in science was presupposed. The sessions were organized so that participants were first introduced to the "new science" of genomics, and were then instructed in areas including national innovation systems, business models, intellectual property rights, international conventions and regulatory structures, ethics, and the role of networks in facilitating dialogue, advocacy and policy making. A detailed timetable of the programme is shown in Table 3 . Presenters used overhead transparencies or presentation software such as Microsoft PowerPoint. Active participation was encouraged throughout, with at least 45 minutes allotted for discussion at the end of each session, on the assumption that each participant brought considerable expertise and valuable practical experience of his or her own. The programme therefore employed a peer-learning environment in which participants could learn from each other, in addition to learning from material presented by instructors. Each participant was provided with a course reader, which included additional background material on session topics; class sessions used a variety of learning methods including lectures, discussions, case analysis, and simulations. Table 3 Agenda for the Course on Genomics and Public Health Policy in Africa.
| Time | Day 1 | Day 2 | Day 3 | Day 4 | Day 5 |
|---|---|---|---|---|---|
| 9.00–10.30 | Introduction (Prof Abdallah Daar, Dr John Mugabe); New Science I: Introduction (Dr Stephen Scherer) | Internet-based Leader Networking: Exercise (Prof Joseph D'Cruz) | Intellectual Property Rights I (Dr Patricia Kameri-Mbote) | Ethics I (Dr Peter Singer) | Group Presentations |
| 11.00–12.30 | New Science II (Dr Stephen Scherer) | National Innovation Systems (Prof Norman Clark) | Intellectual Property Rights II (Dr Patricia Kameri-Mbote) | Ethics II (Prof Abdallah Daar) | Group Presentations, continued |
| 1.30–3.00 | New Science III (Prof Onesmo ole-MoiYoi) | Business Models (Prof Joseph D'Cruz) | Internet-based Leader Networking: Results (Prof Joseph D'Cruz) | Science & Innovation Policy in International Conventions (Dr John Mugabe) | |
| 3.30–5.00 | Genomics and Global Health (Dr Peter Singer) | Group Work | Group Work | Group Work | |

Early in the course, participants were divided into small Study Teams consisting of persons with diverse backgrounds, in order to maximize complementary skills. These Study Teams were an integral part of the learning process of the programme. Sessions were intended primarily to provide input for the Study Teams, which assembled several times during the week. Their primary task was to draw upon the course material and their own experiences to propose recommendations for policy relating to genomics and biotechnology in Africa. Presentations were made on the last day of the course, and the final sessions focused on how to take forward the ideas and proposals generated during the course. This was the first course of its kind in Africa, as well as the first of a series of planned courses on genomics policy to be held in developing countries; evaluation was therefore a key component of the programme. At the end of each day, participants were given a questionnaire on which to evaluate the day's sessions; at the end of the course, they were asked to complete a more detailed questionnaire giving feedback on the overall aims and organization of the course. Results The course opened with an Introduction, in which Prof. Abdallah Daar and Dr John Mugabe welcomed the participants, explained the course's objectives, and invited each of the participants to introduce him- or herself to the rest of the group. The first substantive session was led by Dr Stephen Scherer and was intended to provide a comprehensive overview of the science of genomics and its relevance to health. Several of the participants had a limited scientific background; the presentation therefore included very basic descriptions of the science involved, as well as images and a brief video, and gradually progressed to a discussion of its applications in health research and medicine, both now and in the future. This session was followed by an introduction by Prof. Onesmo ole-MoiYoi, a pioneering Kenyan scientist, to advances in genomics and molecular biology within the African context, including cutting-edge research at his institute and others on the continent, as well as the broader relevance of genomics and molecular approaches to the health of Africa's people, animals and environment. The first day closed with a session led by Dr Peter Singer, who described a five-point strategy to systematically capture the benefits of genomics for the health of citizens in developing countries, through research, capacity-strengthening, consensus-building, public engagement, and an investment fund.
Examples of ongoing work by the University of Toronto's Canadian Program on Genomics and Global Health in these areas were discussed, including the results of its 2002 study to identify the most promising biotechnologies for improving health in developing countries [ 13 ]. Prof. Joseph D'Cruz opened the second day with a discussion introducing participants to new approaches to forming and expressing opinions about emerging issues using the internet. Leaders in any area are required to develop their own views about new developments in their fields, and the process of forming these views is facilitated by peer discussion. Though traditionally these processes have taken place face-to-face, the internet offers an alternative medium that allows individuals to interact with their peers in other locations at a time and pace suited to each individual's commitments, without forcing the group to reach early consensus. Prof. Norman Clark followed with a session aimed at introducing participants to the concept of 'National Systems of Innovation', a conceptual framework for analysing country-specific factors that influence innovation across sectors. Innovation is understood here to encompass the processes of generating new ideas, products and production processes, as well as processes of institutional change and development. Such frameworks can be useful in identifying and analyzing key factors affecting African countries' ability to engage effectively in biotechnology and genomics for human development. The afternoon session focused on the business life cycle of a genomic product, tracing its development from the laboratory bench to a patented invention that is exploited commercially; it addressed the strategic issues and choices that firms face at each point in this life cycle, using a case-study-based approach to frame the issues. The final one-and-a-half hours of the day were devoted to group work in the Study Teams, whose members had been selected to bring diverse views and experiences to bear on their deliberations. The third day of the course was devoted primarily to the issue of intellectual property rights (IPRs). The two sessions on IPRs were led by Dr Patricia Kameri-Mbote, a Kenyan lawyer and scholar. During the first of these, Dr Kameri-Mbote explained the nature and different kinds of IPR protection, and explored how these impact on biotechnology development and technology transfer. She also considered the relationship between IP protection and public health in developing countries, using specific cases that have arisen under the World Trade Organization's Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS). Positions held by different countries and scholars on IP and biotechnology transfer in health were examined, and international, regional and national intellectual property regimes were reviewed. The second session focused on the links between IP, public health and the transfer of biotechnology, as well as the ethical, social and policy implications of the WTO ministers' "Doha Declaration" on TRIPS and public health. At the end of the third and fourth days, participants again met for 1.5 hours in their Study Teams to prepare their proposals. Day four of the course had a heavy focus on the ethical dimensions of emerging technologies like genomics. The first session provided an overview of ethical issues related to genomics and public health policy. Prof. Abdallah Daar led this and the second session on ethics.
He described the World Health Organization's draft Guiding Principles on Medical Genetics and Biotechnology document, which he co-authored and which provides a broad overview of the ethical principles in this field. During the second session, Prof. Daar and Dr Singer led the group through a case involving benefit sharing, and introduced the Human Genome Organization's principles and statement on benefit sharing. Dr Singer then described an ethical framework and approach to priority-setting for genomics technologies in health care institutions. The last hour of this session was devoted to providing a forum for participants to share their expertise and experiences in areas related to policy. The final session of the day was led by Dr John Mugabe, then-Director of the African Centre for Technology Studies in Nairobi, Kenya. This session introduced participants to international conventions and protocols that emerged out of the United Nations Conference on Environment and Development (UNCED), and focused on science and innovation issues covered by the Convention on Biological Diversity and its Cartagena Protocol on Biosafety, and the International Treaty on Plant Genetic Resources for Food and Agriculture. Specific lessons were drawn for international rule-making for health equity, and emphasis was given to biotechnology, risk assessment, technology transfer, sharing benefits of global scientific and technological advances, and technical cooperation. On the last day, each of the four Study Teams presented their proposals, which addressed the overarching question of the course: "How to harness genomics and related biotechnology to improve health in Africa?" Study Teams presented one at a time; after each presentation, there was a period for questions and discussion, and afterward an opportunity to consider all proposals together in light of the host of issues raised during the course of the week. The presentations, though prepared independently by each group, demonstrated a number of common themes that tended to be organized in terms of long-term foundational issues of sustainability, and more concrete short-term issues relating to garnering political involvement. Table 5 enumerates the key recommendations that emerged from these sessions.
Table 5: Recommended Action-Steps
1. Establish a regional network to foster sustained inter-sectoral dialogue.
2. Identify champions among politicians.
3. Use the New Partnership for Africa's Development (NEPAD) as an entry point onto the political agenda.
4. Commission an African capacity survey in genomics-related R&D to determine areas of strength.
5. Undertake a detailed study of R&D models with demonstrated success in the developing world.
6. Establish seven regional research centres of excellence.
7. Create sustainable financing mechanisms.
Discussion The following is a synthesis of the participants' efforts, summarizing and describing key issues that emerged from their presentations and throughout the week's deliberations. It includes several concrete action-steps recommended by the participants, which flow from these considerations. Creating a Platform for Ongoing Dialogue and Advocacy The course generated a great deal of enthusiasm and vigorous discussion, and there was consensus among the participants on the need to create a mechanism for capitalizing on this momentum.
Course participants and faculty therefore established an e-mail-based network, the African Genome Policy Forum (AGPF), to allow the continued exchange of ideas and the building of consensus on issues related to genomics and public health policy. The group, composed of participants from areas of government, academia, civil society and the media, was created to bring to the table the views of their respective constituencies, and to inform their peers of insights gained from the course and through the network. The network may also play an advocacy role in promoting the responsible use of genomics as a tool to improve health and promote development in Africa. Concrete Action-Step 1: Establish a regional network to foster sustained inter-sectoral dialogue On the final day of the course, it was decided that a regional network, the "African Genome Policy Forum", be established comprising all participants and session leaders; it was further agreed that the Joint Centre for Bioethics would set up a website, discussion board and e-mail-based platform to facilitate ongoing discussion and inter-sectoral debate on the issues and proposals raised during the course. Mobilizing Political Support The success of any major initiative requires sustained dialogue with politicians. It is important to take the time to address their legitimate concerns, by clarifying the specific relevance of genomics and its applicability within the context of their communities. A point of particular relevance is the link between technologies like genomics and Africa's development, which has been well described in a number of recent reports [e.g. [6, 10]]. Participants highlighted the importance of taking back to their colleagues in their respective countries and institutions the lessons drawn from the course; those participants in public office agreed to seize opportunities to raise some of the issues and proposals of the course when attending relevant forums. In particular, the nascent New Partnership for Africa's Development, adopted in 2001 under the mandate of the Organisation of African Unity, was repeatedly pointed to as an opportunity to bring genomics and its relevance to health in Africa onto the political agenda. Science and technology is among NEPAD's seven priority areas; another is human development, which encompasses health [11]. Genomics provides a clear example of how these two areas – science and technology, and health – come together, and can serve as a model for considering how science and technology and health concerns can be better integrated to address the continent's economic and health needs. Concrete Action-Step 2: Identify champions among politicians The most efficient means of garnering political support is often to go directly to the politicians themselves – those who have been supportive of or outspoken on the issues in question – to put the subject before their colleagues. The course itself represented an important step in this direction, as it brought together a spectrum of stakeholders, including academics, civil society, and government officials. The course, and the subsequently established network, therefore furnished an opportunity for direct communication and dialogue among individuals with a shared vision, including policymakers in a position to "champion" the issues and proposals that emerged from the course to their colleagues and others.
Concrete Action-Step 3: Use the New Partnership for Africa's Development (NEPAD) as an entry point onto the political agenda NEPAD offers a possible forum to bring the subject of genomics-related biotechnology onto the political agenda, and provides a means of informing African leaders of genomics and its relevance to improving health and development in Africa. In particular, the AGPF recommends the establishment by NEPAD of an 'African Genomics Committee', which would provide a plan for utilizing genomics and other new technologies to enhance health in Africa, advocate for increased investment in S&T, target other relevant stakeholders in individual countries, educate policy makers about the need for a strong R&D base established through partnerships across Africa, and organize steering committees to identify gaps and implement strategies for improvement. Prioritizing Needs Participants agreed on the need to consider emerging technologies like genomics in light of Africa's specific health challenges, and consequently on the importance of prioritizing these and identifying strategic entry points. Infectious (including sexually transmitted) diseases, genetic and other non-communicable disorders, sanitation, nutrition, environmental pollution and loss of biodiversity were all proposed as areas requiring concerted attention, with a special emphasis on the potential for using genomics-related biotechnology to target the three biggest killers in Africa: malaria, HIV/AIDS and tuberculosis. There are already well-known African-led initiatives to apply scientific innovation to combat important health concerns, such as the Multilateral Initiative on Malaria and the African Malaria Vaccine Testing Network (AMVTN). It will be important to build on existing success stories, and to identify gaps in terms of priority health areas receiving inadequate attention. This will help to focus efforts and to more efficiently channel limited resources, both financial and human. A regional approach, which has since been adopted by NEPAD, was proposed as a promising mechanism for harnessing existing competence to address local needs. Concrete Action-Step 4: Commission an African capacity survey in genomics-related R&D to determine areas of strength This survey would identify strategic areas of strength, such as existing centres of excellence, potential areas of improvement, and health priorities receiving inadequate attention. It would also serve to identify local and national innovators, and to inform the structuring of the Regional Centres of Excellence described below. Capacity Building & Public Engagement For several years, genomics has been linked with a number of high-profile, intensely controversial issues like human cloning and genetically modified organisms. While emerging technologies like genomics raise a number of important ethical and social issues that deserve careful consideration [12], a nuanced message takes account of the possibilities as well as the challenges of new approaches. Often, technological applications can complement existing, well-established health approaches [13]. Scientists, policy-makers, and the media have an important part to play in publicizing science, and in pointing out its relevance to Africans in a moderate rather than hyperbolic tone [14]. Local leaders can have an important role to play, not only in reflecting the leading-edge opinions of their different constituencies to policymakers, but also by raising awareness within their communities.
A more informed public is often a more engaged public, which can effectively advocate for the development of policies that reflect legitimate concerns, while leaving space to explore promising avenues of scientific endeavour. Public engagement was seen to form part of a long-term strategy for capacity building, and for raising the overall profile of science and technology in Africa. The discussions reflected a conception of capacity strengthening as intimately linked with quality education – at all levels, and across disciplines. Core to this debate among course participants was the belief that endogenous capacity must be developed in order that Africa can begin to be self-sufficient, and itself become an innovator. Participants identified the following categories as needing attention: Primary, secondary and tertiary education There is a need to introduce innovative techniques to teach science and technology in the classroom, in order to generate interest and aptitude in the subject matter from an early stage in the educational process. Besides contemporary scientific approaches, indigenous knowledge and its applications to health could also be a relevant component to include in the curriculum. Policymakers Those in a position to shape policy should be familiarized with codes of ethics pertaining to their field; moreover, they should be educated about how best to capitalize on international frameworks (e.g. WTO's Trade-Related Aspects of Intellectual Property Rights; the UN's Convention on Biological Diversity) in order to ensure that their countries benefit from such arrangements, and are not exploited. Policymakers should develop strategies for negotiating their interests collectively in international forums, when appropriate, given shared needs and values. Media There is a general need to strengthen capacity in the area of communication, in particular by increasing the level of science literacy among the media. This might include integrating journalism and science programs at the college and university levels. There is a corresponding need to improve the ability of scientists to communicate the relevance of their work to the public, and to policy-makers. ELSI There is a great need to build capacity in Africa with regard to the ethical, legal and social issues (ELSI) which inevitably accompany the emergence of new technologies. Strategies would in many cases involve sensitizing the public to issues of relevance, such as their rights as patients and participants in research (e.g. informed consent, confidentiality of patient information), encouraging dialogue about the social consequences of introducing new technologies into traditional settings, and putting frameworks in place (e.g. ethics review boards) to ensure that ethical, quality and safety standards are met before research is undertaken. Partnerships Along with the need to strengthen the R&D base in science and technology, participants of the course identified a related need to increase the emphasis on commercialization – not only as a tool for sparking innovation but also to permit the generation of the capital necessary to sustain the industry. An important step in the process of moving toward commercialization is the forming of alliances within countries, between universities and industry, sometimes known as "cross-linking".
The fruitfulness of the Africa course, where people from across sectors and sub-regions came together with a common mission, reinforced the value and the importance of establishing cross-sectoral networks and collaborations. Networks provide a means of generating new ideas, pooling the creative energies of individuals, and exchanging advice and expertise around a particular area of focus, in this case genomics and health policy. Such networks could play an advocacy role, combining the voices and the influence of key players from diverse disciplines and sectors, to advance a common aim. Collaborations, at the level of institutions – both within and between countries and regions – would facilitate the transfer of both knowledge and technology. During the course, it was pointed out that there is a particular need to encourage linkages between universities and industry to, among other things, facilitate the move from research and development to product generation and commercialization. This could include mechanisms to facilitate relationships between universities undertaking research in biotechnology and local industries. Institutional partnerships and collaborations at all levels, including internationally, can mean the channelling of resources to common areas of focus, and the pooling of the relative strengths and resources of partner institutions [15]. Such collaborations require very clearly defined roles for partners, and transparency with respect to goals, prioritization of needs, funding, and mechanisms to ensure equitable access to products. Creating sustainable financing mechanisms Ensuring that the benefits of science and technology, including emerging fields like genomics, are realized requires a long-term strategy for sustained investment. Concrete Action-Step 5: Design proposals for obtaining sustained investment for both research and development (R&D) in genomics and related biotechnologies to improve health, and the commercialization of the products of R&D Three models were suggested:
1. The establishment of an African Science and Technology Fund, dedicated to supporting research and development in the area of health-related biotechnology; this would rely upon the contribution of African governments.
2. The establishment of an Investment Fund for genome-related biotechnologies for improving health; this would represent an innovative approach to obtaining capital, providing a further incentive for investors to put money into development by creating a fund that provides a return on investment, as well as furnishing funds for advancement. Such a fund might be dedicated to providing capital for the development of mature, or future, health-related technologies.
3. Capitalizing on existing funds allocated for research related to diseases afflicting Africa, such as the Global Fund to Fight AIDS, Tuberculosis and Malaria.
Genomics and biotechnology represent a powerful set of tools for health improvement, and the World Health Organization, through its Genomics and World Health (2002) report, has raised the field as an important issue deserving international attention. It is important to use this positive emphasis to give weight to the case for the relevance of biotechnology to health in developing countries, particularly for policy makers.
Research and Development (R&D) With respect to R&D, there are already areas of strength on the continent; it is crucial to identify localized expertise, and to establish linkages with centres elsewhere in the region, as well as abroad, to ensure the transfer of knowledge and of technology, and to facilitate human resource development. Infrastructure must be developed to attract qualified African researchers to remain in or to return to Africa – both to support them technically, intellectually and socially, and to provide them with similar opportunities for creativity and growth as may be found in other locales. The Biosciences Facility, established in 2003 by NEPAD, takes up this challenge, promoting "scientific excellence by bringing together a critical mass of scientists drawn from national, regional and international institutions in state-of-the-art facilities where they can undertake cutting-edge research to help solve the most important development constraints faced by the poor in Africa" [9]. While the new Biosciences Facility is the first of a network of centres of excellence focused primarily on using science to help poor farmers, it may be an appropriate model for similar initiatives using a regional approach to target health challenges. Concrete Action-Step 6: Undertake a detailed study of R&D models with demonstrated success in the developing world, e.g. China, India, Cuba and Brazil Developing countries in various parts of the world have proven that they too can have strong technology sectors, and can make important contributions in terms of science and innovation. Their successes represent an opportunity to bring to the attention of politicians that there are developing countries succeeding in genomics. A detailed study of these models can provide important insights into how Africa can capitalize on the promise of genomics and biotechnology, particularly as it relates to health. In 2003, the Joint Centre for Bioethics completed a qualitative study of R&D in biotechnology in South Africa; similar studies are underway in Cuba, Egypt and China. Research of this kind could feed into more systematic efforts in the region to better understand how some developing countries, including those in Africa, have managed to develop S&T research and manufacturing capacity in the health sector. Concrete Action-Step 7: Establish seven Regional Research Centres of Excellence The proposed centres would be distributed across the Northern, Southern, Eastern, Western and Central African sub-regions. Each centre would have its own area of focus, in terms of targeted health problems, depending on regional expertise. The Centres would not be the sole preserve of each region, but would in fact use the strengths and specializations of each region to achieve the goal of harnessing genomics to improve health in Africa. These regional centres of excellence need not preclude the existence of national centres of excellence. The Biosciences Facility is modelled on such an approach. Conclusion Analysis The course on Genomics and Public Health Policy in Africa was carefully designed, with inputs from both its Canadian and African co-organizers, to have a programme and participant profile reflecting the inter-disciplinarity of the issue being considered. Genomics cuts across S&T, environmental, development, industrial, education and health policy, and generates important ethical, legal and social issues.
It therefore requires a genuinely participatory and multi-stakeholder approach, as well as frank discussions about both the potential promise and the perils of a relatively new science. The strength of the course, as reflected in the evaluations submitted by participants, was the rare opportunity for discussion and networking among opinion leaders from different sectors. Both during and between sessions, participants exchanged perspectives and experiences with others from different regions of the continent, and from different disciplines. Senior political officials, journalists, academics, and civil society representatives worked together in Study Teams to create proposals. Discussions were lively and open, with broad participation from those in attendance. However, a weakness of the course was the absence of industry representatives, who would certainly have contributed an important and valuable point of view. The small number of women participants was also a notable disadvantage. Later courses modelled on the Nairobi offering (i.e. those in Latin America, the Eastern Mediterranean, and India) had greater success in drawing participants from industry and obtaining a better gender balance. Notably, however, the recommendations that emerged from these courses, while reflecting differences due to regional priorities and context, did not vary considerably despite the broader contribution, particularly from the private sector [20]. A major outcome of the Nairobi course, and one which had strong support from participants, was the creation of a virtual network to facilitate ongoing interaction and discussion. Within two weeks of its completion, a website was created for the course, as well as a web-based discussion board. While there was some initial activity on the discussion board, this eventually subsided, and it was soon evident that this approach had failed. In an effort to revive the momentum and to solicit ideas from AGPF members about how best to move forward with the network, a short survey was sent to members asking what their needs were, both in terms of the network as well as in terms of the technical facilities at their disposal. The response rate was extremely low; however, those who provided feedback confirmed what the participation level suggested: namely, that information technology facilities in Africa are such that very few individuals, outside of some well-equipped academic or private institutions, have regular access to the internet. The web-based discussion board was, therefore, in practice a highly unsustainable option for the majority of participants. The point was also raised that it was not enough to be connected electronically; there was also a need to share a more tangible goal or project, and to have a more visible leader from within the group, to galvanize efforts and motivate continued interaction. One respondent explained that finding the time to contribute to such networks is extraordinarily difficult for many Africans, who often "wear many hats". As a result, a general interest was insufficient to justify diverting time from other tasks; a concrete, realizable goal was essential for engaging individuals who already feel over-stretched. As a consequence of these inputs, an email-based forum was established, since most AGPF members have better access to email than to the web, and a moderator was temporarily appointed for the group.
Activity on the forum improved and continues today, more than two years later, though interventions are irregular and generally extend to the sharing of information or material of interest, rather than discussions about issues. The India course on Genomics and Public Health Policy was held in January 2003, less than one year after the inaugural Nairobi effort. Based on feedback from the previous course, the questionnaire about participants' technical and substantive needs in relation to the creation of a network was distributed during the course itself, permitting the creation of a network much more responsive to the needs of the participants. Moderators from among the participants were nominated before the course's end and their roles clarified, to facilitate the sustainability and autonomy of the network. Later in 2003, two further courses were held, in Oman and in Venezuela, both of which incorporated a further element reflecting the learning from the first two courses. On both occasions, the Joint Centre for Bioethics collaborated with the Regional Offices of the World Health Organization: in the first instance, with the Eastern Mediterranean office (EMRO) and, in the second, with the Pan-American Health Organization (PAHO). This collaboration ensured that the recommendations of each course had an institutional structure through which they could be channelled, to reach the ear of decision-makers. EMRO and PAHO have extensive links with ministries of health within their regions, as well as with representatives from civil society and industry. This provided an opportunity for the results of the course to have a much wider impact. By contrast, the impact of the Nairobi course is very much linked to the efforts of individual participants to engage with their constituencies and with the NEPAD initiative, in which one of their members is now a senior actor. The Forum developed following the Nairobi course has not provided a framework to drive action in the way initially intended; however, it continues to provide a portal for information-sharing and dialogue. Final Remarks The executive course on Genomics and Public Health Policy in Africa was the first of its kind to be held on the continent. The response of participants indicated a tremendous enthusiasm for, and interest in, discussing the emerging technology of genomics and its applications for addressing the health woes of Africans. The sessions covered a spectrum of topics, from basic science to ethics, business models and international frameworks – exemplifying the range of intersecting issues relevant to informed discussions about genomics and related policy. The course was also a demonstration of the fruitfulness of a multi-stakeholder approach. An important aim of the course was to encourage network-building and the development of meaningful interactions, as a foundation for sustained dialogue among opinion leaders. Participants were encouraged to develop independent proposals in a collaborative environment, rather than to be passive recipients of "expertise" from the session leaders. The result was a series of concrete proposals for action, and the establishment of an e-network to provide a forum for ongoing communication, discussion and elaboration of the issues and proposals raised during the course.
Several participants agreed to raise the proposals and themes articulated during the course with their colleagues; the course also generated some publicity, as journalists invited to attend and to participate actively in the meeting reported on the key issues in various media [[16, 17]; see also [18]]. Since the completion of this course, three more offerings have taken place: one in India, in collaboration with the Indian Council of Medical Research (ICMR), in January 2003; another in Oman in August 2003; and a third in Venezuela in 2004. A fourth course is being planned for a venue in South-east Asia. The Nairobi offering demonstrated clearly the receptiveness of African researchers and policy makers to such an initiative, and captured the vision of a cross-section of stakeholders around how to ensure that the new wave of scientific promise does not pass them by, or crush them in its wake, but instead is harnessed for better health and to further economic development in their region [19]. The courses in India and Oman similarly gave rise to regional e-networks [20], which may eventually be connected to form an inter-regional forum for dialogue, providing a basis for the sharing of experiences and expertise across regions in the developing world. Each of the executive courses held to date has addressed similar themes in relation to genomics and health, but each has also been adapted to the particular context and interests of the host country or region. This has partly been achieved through active collaboration between the Joint Centre for Bioethics and the host institutions. The electronic networks provide a means of generating a long-term impact, driven by participants who are empowered, in their particular capacities, to take forward the ideas shared and the proposals developed through their interaction. The Nairobi course also highlighted the importance of being proactive in soliciting suggestions from participants about creative means of virtual networking that realistically address the poor information technology infrastructure in most parts of Africa. It was also instructive in demonstrating that a network is not itself self-sustaining; it must be driven by a clear, shared vision among participants, and possibly even a concrete and realizable project. Moreover, ideally a moderator from within the group should take leadership in feeding the forum and motivating ongoing participation. The New Partnership for Africa's Development (NEPAD) has made science and technology (including genomics and biotechnology) a key platform in its plan for economic renewal [2, 9]. Indeed, the recommendations outlined above overlap considerably with those described in a recent document detailing the resolutions of the first science and technology workshop of NEPAD, held in February 2003 [2]. The recent establishment of the African Biosciences Facility as a centre of scientific and technological excellence in the region is further evidence that the recommendations articulated by the AGPF reflect a more widely shared vision. There is a growing recognition in Africa, and internationally, of the role that genomics and biotechnology can play, not only in alleviating the health scourges of the poor, but also in addressing some of their economic concerns. With appropriate emphasis on its health needs, incentives for meaningful partnerships, sound regulatory structures, innovation and foresight, Africa could be in a position to benefit from genomics and related fields of biotechnology.
The Course on Genomics and Public Health Policy in Africa had as its overarching goal that of bringing together a vibrant cross-section of individuals to foster dialogue around this timely issue. The African Genome Policy Forum works to build on this foundation, to sustain the momentum of the course, and to fulfill some of the participants' proposed goals. Perhaps most significantly, this series of courses represents a practical and effective mechanism for drawing together a variety of actors to address an issue of recognized import, which deserves a truly inter-disciplinary approach. Moreover, it is an initiative that generates important debate, but which is ultimately focused around generating concrete proposals to inform policymaking. Competing interests The author(s) declare that they have no competing interests. Authors' contributions All authors participated in and contributed to the course. ACS drafted the manuscript. PAS and ASD conceived of the course, refined the manuscript for critical content and approved the final version; and, with JM, participated in the course design and its coordination. AGPF members provided intellectual input, through their lively discussions and proposals during the Course on Genomics & Public Health Policy in Africa, held 4–8 March 2002. Funding The Canadian Program on Genomics and Global Health is funded by several sources listed online. This course was funded primarily by Genome Canada and the International Development Research Centre (Canada). PAS holds a Distinguished Investigator Award from the Canadian Institutes for Health Research. ASD is supported by the McLaughlin Centre for Molecular Medicine. The African Centre for Technology Studies, which hosted the course, was supported by the Norwegian Agency for Development Co-operation.
Table 4: Reading materials
1. Scherer, S.W. 2001. The Human Genome Project. Isuma: Canadian Journal of Policy Research, Vol. 2, No. 3, 11–19.
2. Owens, K., King, M.-C. 1999. Genomic views on human history. Science 286, 451–455.
3. Roses, A.D. 2000. Pharmacogenetics and the practice of medicine. Nature 405, 857–865.
4. Nature, Human Genome Volume, Vol. 409, Feb. 2001.
5. Science, Human Genome Volume, Vol. 291, Feb. 2001.
6. Singer, P.A., Daar, A.S. 2001. Harnessing Genomics and Biotechnology to Improve Global Health Equity. Science 294, 87–89.
7. Singer, P.A., Daar, A.S. 2000. Avoiding Frankendrugs. Nature Biotechnology 18(12), 1225.
8. Powell, W.W. 1998. Learning from Collaboration: Knowledge and Networks in the Biotechnology and Pharmaceutical Industries. California Management Review 40(3), Spring.
9. Juma, C., Clark, N. 2002. Technological Catch-up: Opportunities and Challenges for Developing Countries. SUPRA Occasional Paper, Research Centre for the Social Sciences, University of Edinburgh, February.
10. Von Hippel, E. 1986. Lead Users: a source of novel product concepts. Management Science 32(7), 791–805.
11. OECD. 1998. National Systems of Innovation. OECD, Paris.
12. Thomke, S., Nimgade, A. 2001. Millennium Pharmaceuticals, Inc. Harvard Business School case, 24 pp.
13. Goldberg, R.A. Gene Research, the Mapping of Life and the Global Economy. Harvard Business School case, 58 pp.
14. Cullet, P. TRIPS and the Human Right to Health in Developing Countries. International Environmental Law Research Centre.
15. Lanjouw, J.O. April 2001. A Patent Policy Proposal for Global Diseases.
Yale University, Brookings Institution and the NBER.
16. Hartley & Hartley. Limitations on using existing legal doctrines in addressing changes in technology: the example of the "Fertility Fraud" cases at UC Irvine. Hartley & Hartley Attorneys at Law (California).
17. Declaration on the TRIPS Agreement and Public Health. 2001. WTO Ministerial Meeting, Doha, Qatar.
18. Daar, A.S., Mattei, J.-F. 1999. Appendix 2: Draft Guiding Principles and Recommendations, with alternative suggestions, after receiving comments. In: Medical Genetics and Biotechnology: Implications for Public Health. World Health Organization, December.
19. Daar, A.S., Mattei, J.-F. 1999. Chapter 6: The Human Genome Diversity Project. In: Medical Genetics and Biotechnology: Implications for Public Health. World Health Organization, December.
20. Daar, A.S., Mattei, J.-F. 1999. Chapter 7: Issues Raised by Conducting Research With Indigenous and Genetically Defined Communities. In: Medical Genetics and Biotechnology: Implications for Public Health. World Health Organization, December.
21. HUGO Ethics Committee. 2000. Statement on Benefit-Sharing. April 9.
22. Knoppers, B.M., Hirtle, M., Lormeau, S. 1996. Statement on the Principled Conduct of Genetic Research. HUGO Ethical, Legal, and Social Issues Committee Report to HUGO Council, March.
23. World Health Organization. 2000. Statement of the WHO Expert Consultation on New Developments in Human Genetics.
24. Singer, P.A., Martin, D.K., Giacomini, M., Purdy, L. 2000. Priority setting for new technologies in medicine: qualitative case study. BMJ 321(7272), 1316–1318.
25. Daniels, N. 2000. Accountability for reasonableness. BMJ 321(7272), 1300–1301.
26. Martin, D.K., Pater, J.L., Singer, P.A. 2001. Priority-setting decisions for new cancer drugs: a qualitative case study. Lancet 358(9294), 1676–1681.
27. Mugabe, J. et al. 1996. Managing Access to Genetic Resources: Strategies for Sharing Benefits. ACTS Press, Nairobi.
28. Mugabe, J., Clark, N. 1997. Technology Transfer and the Convention on Biological Diversity. ACTS Press, Nairobi.
29. Sanchez, V., Juma, C. 1993. Biodiplomacy (Chapter 1). ACTS Press, Nairobi.
/Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC548518.xml
15667651
10.1186/1478-4505-3-2
543459
Relationship among Dexamethasone Suppression Test, personality disorders and stressful life events in clinical subtypes of major depression: An exploratory study
Background The present study aimed to investigate the relationship between the dexamethasone suppression test, personality disorder, stressful life events and depression. Material Fifty patients (15 males and 35 females) aged 41.0 ± 11.4 years, suffering from Major Depression according to DSM-IV criteria, entered the study. Method Diagnosis was obtained with the aid of the SCAN v 2.0 and the IPDE. Psychometric assessment included the HDRS, the HAS, the Newcastle Scale (versions 1965 and 1971), the Diagnostic Melancholia Scale, the Personality Deviance Scale and the GAF scale. The 1 mg DST was used. Statistical Analysis MANOVA, ANOVA with the LSD post-hoc test, and the chi-square test were used. Results Sixteen (32%) patients were non-suppressors. Eight patients without Personality Disorder (PD) (24.24%), and 5 of those with a PD of cluster B (50%), were non-suppressors. Atypical patients were the subtype with the highest rate of non-suppression (42.85%). No difference between suppressors and non-suppressors was detected in any of the scales. Discussion The results of the current study suggest that a pathological DST is not a core feature of major depression. They also suggest that there is more than one subtype of depression with respect to the response to stress. It seems that half of depressed patients (50%) do not experience high levels of stress, either in terms of self-reported experience or of neuroendocrine function. The rest of the patients, however, either experience high levels of stress, or manifest its somatic analogue (DST non-suppression), or have a very low threshold of stress tolerance, which makes them behave in a hostile way.
Background Life events and environmental stressful factors may relate to the development of depression [1-4]. However, biological theories suggest that the cause of depression lies in a biochemical disturbance of the functioning of the central nervous system (CNS). The Dexamethasone Suppression Test (DST) [5] is the best known and most widely used biological marker; its results suggest that a disorder of the HPA axis is present in at least some depressed patients [6]. DST non-suppression is of unknown aetiology and, as a test, is not specific to any disease. Rather, it constitutes an endocrine expression of stress. Basically, the DST is reported to assess norepinephrine function. Topographically, it assesses the function of the hypothalamus and, indirectly, of the structures which project to it. However, it is also supposed to be the result of an increased serotonin (5-HT) or ACh activity, or of a disturbance of the feedback to the hippocampus [7] and the hypothalamus. Debate continues as to whether some forms of depression are characterized by hypercortisolemia or by early escape in HPA axis tests. Possibly, DST non-suppression and hypercortisolemia are two different things [8]. The present study aimed to investigate the relationship between the dexamethasone suppression test, personality disorder (PD), stressful life events and the clinical manifestations of major depression. The hypothesis to test was that subtypes of depression could be identified on the basis of the presence of personality disorder (which constitutes an abnormal interpretation of, and response to, environmental stimuli), the presence of abnormal DST results and/or hypercortisolemia (which both constitute an idiosyncratic neuroendocrine response to stress), and the presence or not of stressful life events (which trigger the above behavioral and neuroendocrine responses). The presence or not of Personality Disorder and the response to the DST are both characteristics of the patient. Life events reflect the impact of the environment on the patient. So, life events provoke responses from the side of the patient, which are largely determined by personality and DST response. Thus, four groups of patients can be identified and studied, according to the combination of the co-existence of DST non-suppression and personality disorder. Material Fifty (50) major depressive patients (15 males and 35 females), aged 41.0 ± 11.4 (range 21–60) years [9, 10], took part in the study. All provided written informed consent. Fourteen of them fulfilled criteria for atypical features, 16 for melancholic features (according to DSM-IV) and 32 for somatic syndrome (according to ICD-10). Nine patients did not fulfil criteria for any specific syndrome according to either classification system. Patients were in- or outpatients of the 3rd Department of Psychiatry, Aristotle University of Thessaloniki, Greece. They constituted consecutive cases that fulfilled the inclusion criteria, and no systematic bias exists. The SCAN v 2.0 [11] was used for the diagnosis of depression and its subtypes, and the IPDE [12-14] was used for the diagnosis of personality disorders. Seventeen patients (34%) suffered from a personality disorder (PD). Ten of them (20%) had a cluster B PD.
Concerning depressive subtypes, 5 (out of 16) melancholics (31.25%), 7 (out of 14) atypicals (50%), 9 (out of 32) patients with somatic syndrome (28.13%), and 3 (out of 9) 'undifferentiated' patients (33.33%) fulfilled criteria for PD (note: patients with PD do not total 5 + 7 + 9 + 3 = 24, but only 17 as mentioned above, because there is overlap between the depressive syndromes). No patient suffered from a paranoid, schizotypal, antisocial, dissocial, narcissistic or avoidant PD, although individual criteria were met. No criteria belonging to the schizotypal or antisocial PDs were met. No patient fulfilled criteria for catatonic or psychotic features or for seasonal affective disorder. No patient fulfilled criteria for another DSM-IV axis-I disorder, except for generalized anxiety disorder (N = 10) and panic disorder (N = 7). Another 5 patients had both generalized anxiety disorder and panic disorder (in total, 22 patients, i.e. 44%, had some anxiety disorder). The present study did not include a normal control group, since the aim of the study was to compare depressive subtypes with each other. Method Laboratory testing included blood and biochemical testing, a pregnancy test, T3, T4, TSH, B12 and folic acid. The psychometric assessment included the Hamilton Depression Rating Scale (HDRS), the Hamilton Anxiety Scale (HAS), the 1965 and 1971 Newcastle Depression Diagnostic Scales (1965- and 1971-NDDS), the Diagnostic Melancholia Scale (DMS) [15] and the Global Assessment of Functioning Scale (GAF) [16]. An attempt was made to assess the direction of aggression of the depressed patients with the use of the Personality Deviance Scale (PDS) [17]. This was done mainly because the direction of aggression is considered to be a core feature of the etiopathogenesis of depression according to psychodynamic theories, but also because it is related to personality traits. The PDS consists of the following subscales: (a) the Extrapunitive Scale (ES), comprising 1. HT: Hostile Thoughts and 2. DO: Denigratory Attitudes Toward Other People; these scales and subscales are scored in such a way that high scores denote lack of the characteristic; (b) the Intropunitive Scale (IS), comprising 1. LSC: Lack of Self-Confidence and 2. DEP: Overdependency on Others; these are scored in such a way that high scores denote presence of the characteristic; (c) the Dominance Scale (DS), comprising 1. MIN: Domineering Social Attitude and 2. HA: Uninhibited Hostile Acts; the MIN is scored in such a way that high scores denote presence of the characteristic, while the HA has the opposite property. Data concerning personal and family history and stressful life events included: (a) age of onset; (b) presence of a recent suicide attempt; (c) history of such attempts; and (d) stressful life events during the last 6 months before the onset of the symptomatology, assessed with the questionnaire of Holmes [18]. The 1 mg Dexamethasone Suppression Test (DST) protocol demands the administration of 1 mg dexamethasone per os at 23.00 on the first day, and determination of serum cortisol levels simultaneously and on the next day at 16.00 and 23.00. Cortisol levels, expressed in μg/dl, were measured with a luminescence immunoassay (intra-assay reliability: 4.9%; inter-assay: 7.5%). Non-suppression cut-off level: 5 μg/dl.
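To make the suppression call concrete, the following is a minimal sketch in Python. Since the text does not spell the rule out explicitly, it assumes that a patient is labelled a non-suppressor if either post-dexamethasone sample exceeds the 5 μg/dl cut-off, a reading consistent with the worked example given later in the Discussion; the function and variable names are illustrative only.

```python
# Minimal sketch of the suppression call implied by the protocol above.
# Assumption (not stated verbatim in the text): a patient is called a
# non-suppressor (NS) if EITHER post-dexamethasone cortisol sample
# (day 2, 16.00 or 23.00) exceeds the 5 microg/dl cut-off; the baseline
# sample (day 1, 23.00) is not used for the call.

CUTOFF_UG_DL = 5.0  # non-suppression cut-off level from the protocol

def dst_status(day2_16h: float, day2_23h: float) -> str:
    """Classify a patient as suppressor ('S') or non-suppressor ('NS')."""
    return "NS" if (day2_16h > CUTOFF_UG_DL or day2_23h > CUTOFF_UG_DL) else "S"

# Hypothetical patient whose evening sample escapes suppression:
print(dst_status(day2_16h=2.5, day2_23h=5.5))  # prints 'NS'
```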
Statistical Analysis Multiple Analysis of Variance (MANOVA) was performed with DST (suppression vs. non-suppression) and Personality Disorder (present vs. absent) as factors. The dependent variables list included: Age, Age of Onset, Number of previous episodes, Number of DSM-IV Criteria, Number of atypical features, Number of melancholic features, GAF, NDDS 1965, NDDS 1971, Endogenous axis of the DMS, Reactive axis of the DMS, Number of stressful life events, HDRS-17, HDRS-21, HDRS Depressive index, HDRS Anxiety index, HDRS Sleep index, HDRS Non-specific index, HAS, HAS Somatic subscale, HAS Psychic subscale, PDS Hostile Thoughts Scale, PDS Denigratory Attitude Scale, PDS Extrapunitive Scale, PDS Low Self-Confidence Scale, PDS Overdependency on Others Scale, PDS Intropunitive Scale, PDS Domineering Social Attitude Scale, PDS Uninhibited Hostile Acts Scale and PDS Dominance Scale. Afterwards, Analysis of Variance (ANOVA) with the Least Significant Difference (LSD) test as a post-hoc test was performed. Finally, chi-square tests were performed. PD and DST were independently placed in cross-tabulation with the presence or absence of Recent Suicide Attempt, History of Suicide Attempt, Generalized Anxiety or Panic Disorder, Melancholic Features, Atypical Features, Somatic Syndrome, 'Undifferentiated' symptomatology, Full and sustained remission, Relapsing circumscribed episodes, Chronic depression without full remission, Presence of stressful life events, Family history of any mental disorder, Family history of depression in 1st degree relatives, and Family history of depression in 2nd degree relatives.
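As an illustration of the two-factor design just described, the following hypothetical sketch shows how a two-way ANOVA with interaction and a chi-square cross-tabulation can be set up with statsmodels and scipy. The DataFrame, its column names and all of its values are invented for the example and do not reproduce the study data.

```python
# Hypothetical sketch of the 2 x 2 analysis design (factor 1: PD present
# vs. absent; factor 2: DST suppressor vs. non-suppressor). The DataFrame
# and every value in it are invented and do not reproduce the study data.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy.stats import chi2_contingency

df = pd.DataFrame({
    "age":    [44, 35, 33, 41, 48, 30, 36, 39],
    "pd_dx":  ["no", "no", "yes", "yes", "no", "no", "yes", "yes"],
    "dst_ns": ["no", "yes", "no", "yes", "no", "yes", "no", "yes"],
})

# Two-way ANOVA with interaction for one dependent variable (here: age).
model = ols("age ~ C(pd_dx) * C(dst_ns)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Chi-square test on a cross-tabulation, e.g. DST status vs. PD.
chi2, p, dof, expected = chi2_contingency(pd.crosstab(df["dst_ns"], df["pd_dx"]))
print(f"chi-square p = {p:.3f}")
```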
Results Women were more than twice as many as men (70% versus 30%), which is not uncommon [19] and reflects the higher prevalence of depression observed in women. Sixteen out of 50 depressed patients (32%) were DST non-suppressors (NS). Eight out of 17 (47.05%) depressed patients with PD were also NS. When the patients with a coexistent personality disorder (PD) were excluded, 8 of the remaining 33 patients (24.24%) were NS. When only cluster B PDs were excluded, the respective percentage of NS was 27.5% (11 out of 40). Fifty percent of cluster B PD patients were NS (5 S and 5 NS). Six out of 14 (42.85%) atypical patients were NS, which makes this subtype the one with the highest NS percentage. None of the chi-square tests revealed significant findings (all p > 0.01). MANOVA results were significant both for Personality Disorder (p < 0.001) and for DST (p < 0.001) (table 1).
Table 1: Two-way MANOVA results. Both personality disorder and DST results, as well as their interaction, produced significant results. Factors: 1 = Personality Disorder (present vs. absent); 2 = DST results (suppressors vs. non-suppressors).
Effect | Wilks' Lambda | Rao's R | df 1 | df 2 | p-level
1 | 0.02 | 18.26 | 30 | 12 | 0.000
2 | 0.02 | 20.99 | 30 | 12 | 0.000
1×2 | 0.01 | 28.42 | 30 | 12 | 0.000
ANOVA testing, separately for each dependent variable, revealed significant findings for the PD factor concerning the number of episodes and the HT, DO and HA subscales of the PDS. When DST was used as the factor, significant findings were found concerning the endogenous axis of the DMS and the HDRS depressive index. The interaction of PD and DST produced significant findings concerning age, age of onset, number of atypical features, number of stressful life events, and the DO subscale of the PDS (table 2). Post-hoc comparisons for DST showed that NS patients were more endogenous (1971-NDDS and DMS endogenous axis) but with a lower HDRS depressive index (p < 0.05). Post-hoc comparisons for PD showed that patients without PD had more previous episodes and fewer hostile thoughts (HT) and uninhibited hostile acts (HA) (p < 0.05). The post-hoc results for the groups defined by the interaction of PD with DST are shown in table 3. A graphical representation of these results is shown in figures 1 and 2.
Table 2: ANOVA results for each dependent variable separately (only significant results are shown). Factors: 1 = Personality Disorder (present vs. absent); 2 = DST results (suppressors vs. non-suppressors); 1×2 = their interaction.
Dependent variable | Effect | df effect | MS effect | df error | MS error | F | p-level
Age | 1 | 1 | 93.29 | 46 | 103.25 | 0.90 | 0.347
Age | 2 | 1 | 80.23 | 46 | 103.25 | 0.78 | 0.383
Age | 1×2 | 1 | 935.13 | 46 | 103.25 | 9.06 | 0.004
Endogenous axis of DMS | 1 | 1 | 9.08 | 46 | 8.26 | 1.10 | 0.300
Endogenous axis of DMS | 2 | 1 | 78.71 | 46 | 8.26 | 9.53 | 0.003
Endogenous axis of DMS | 1×2 | 1 | 21.10 | 46 | 8.26 | 2.55 | 0.117
Age of onset | 1 | 1 | 71.51 | 46 | 117.92 | 0.61 | 0.440
Age of onset | 2 | 1 | 82.59 | 46 | 117.92 | 0.70 | 0.407
Age of onset | 1×2 | 1 | 750.95 | 46 | 117.92 | 6.37 | 0.015
Number of episodes | 1 | 1 | 17.46 | 46 | 2.11 | 8.28 | 0.006
Number of episodes | 2 | 1 | 0.48 | 46 | 2.11 | 0.23 | 0.637
Number of episodes | 1×2 | 1 | 0.31 | 46 | 2.11 | 0.15 | 0.703
Number of atypical features | 1 | 1 | 0.81 | 46 | 0.75 | 1.09 | 0.302
Number of atypical features | 2 | 1 | 0.59 | 46 | 0.75 | 0.79 | 0.377
Number of atypical features | 1×2 | 1 | 4.35 | 46 | 0.75 | 5.82 | 0.020
Number of stressful life events | 1 | 1 | 10.45 | 46 | 3.27 | 3.20 | 0.080
Number of stressful life events | 2 | 1 | 4.87 | 46 | 3.27 | 1.49 | 0.229
Number of stressful life events | 1×2 | 1 | 19.51 | 46 | 3.27 | 5.97 | 0.018
HDRS depressive index | 1 | 1 | 1.47 | 46 | 7.04 | 0.21 | 0.650
HDRS depressive index | 2 | 1 | 44.23 | 46 | 7.04 | 6.29 | 0.016
HDRS depressive index | 1×2 | 1 | 4.01 | 46 | 7.04 | 0.57 | 0.454
PDS HT subscale | 1 | 1 | 76.28 | 41 | 9.74 | 7.83 | 0.008
PDS HT subscale | 2 | 1 | 4.23 | 41 | 9.74 | 0.43 | 0.514
PDS HT subscale | 1×2 | 1 | 10.51 | 41 | 9.74 | 1.08 | 0.305
PDS DO subscale | 1 | 1 | 44.95 | 41 | 10.11 | 4.44 | 0.041
PDS DO subscale | 2 | 1 | 10.27 | 41 | 10.11 | 1.02 | 0.319
PDS DO subscale | 1×2 | 1 | 40.50 | 41 | 10.11 | 4.01 | 0.052
PDS HA subscale | 1 | 1 | 97.48 | 41 | 13.12 | 7.43 | 0.009
PDS HA subscale | 2 | 1 | 7.91 | 41 | 13.12 | 0.60 | 0.442
PDS HA subscale | 1×2 | 1 | 30.77 | 41 | 13.12 | 2.35 | 0.133
Table 3: Post-hoc comparisons between the four diagnostic groups determined by DST results and the presence of personality disorder, concerning the continuous variables (Least Significant Difference (LSD) test).
Group A: DST suppressors, no PD (N = 25, 50%). Group B: DST non-suppressors, no PD (N = 8, 16%). Group C: DST suppressors, with PD (N = 9, 18%). Group D: DST non-suppressors, with PD (N = 8, 16%). Values are mean (SD); the six p-values refer to the pairwise comparisons A/B, A/C, A/D, B/C, B/D and C/D.
Variable | A | B | C | D | p A/B | p A/C | p A/D | p B/C | p B/D | p C/D
Age | 44.90 (9.55) | 34.00 (10.89) | 33.78 (8.96) | 40.57 (11.63) | 0.005 | 0.002 | 0.168 | 0.964 | 0.241 | 0.173
Age of onset | 33.33 (11.24) | 29.00 (10.74) | 23.44 (7.13) | 35.00 (13.14) | 0.217 | 0.009 | 0.967 | 0.223 | 0.313 | 0.028
Number of episodes | 1.52 (1.89) | 1.88 (1.55) | 0.33 (0.71) | 0.43 (0.53) | 0.575 | 0.068 | 0.092 | 0.017 | 0.021 | 0.893
Number of atypical features | 0.71 (0.85) | 1.63 (1.06) | 1.67 (1.00) | 1.14 (0.38) | 0.019 | 0.010 | 0.102 | 0.935 | 0.375 | 0.298
DMS endogenous axis | 4.33 (2.29) | 5.88 (1.89) | 2.11 (2.52) | 6.57 (4.28) | 0.217 | 0.032 | 0.155 | 0.004 | 0.754 | 0.018
Number of life events reported | 2.05 (0.97) | 2.50 (2.39) | 4.22 (2.77) | 2.14 (1.77) | 0.260 | 0.001 | 0.529 | 0.193 | 0.720 | 0.082
HDRS depressed index | 11.43 (2.38) | 8.50 (2.14) | 10.22 (3.87) | 8.86 (2.79) | 0.005 | 0.350 | 0.014 | 0.282 | 0.837 | 0.378
HT | 19.24 (2.36) | 19.63 (2.56) | 17.44 (3.88) | 15.71 (4.50) | 0.703 | 0.129 | 0.012 | 0.197 | 0.045 | 0.422
DO | 13.00 (3.16) | 9.88 (3.44) | 13.11 (2.57) | 14.14 (3.63) | 0.028 | 0.927 | 0.431 | 0.043 | 0.036 | 0.515
HA | 18.86 (3.61) | 19.75 (3.28) | 17.44 (4.90) | 14.71 (1.25) | 0.548 | 0.385 | 0.007 | 0.279 | 0.002 | 0.175
DST baseline cortisol (day 1, 23:00) | 3.85 (2.79) | 7.71 (10.28) | 3.79 (1.71) | 5.43 (4.37) | 0.123 | 0.724 | 0.568 | 0.275 | 0.491 | 0.474
DST cortisol at day 2, 16:00 | 1.40 (1.13) | 6.81 (7.91) | 1.34 (0.98) | 4.84 (5.32) | 0.002 | 0.973 | 0.001 | 0.057 | 0.584 | 0.047
DST cortisol at day 2, 23:00 | 1.25 (1.45) | 8.04 (5.19) | 1.36 (0.71) | 5.13 (1.40) | 0.000 | 0.769 | 0.000 | 0.002 | 0.212 | 0.000
Figure 1: Histogram of the distribution of frequencies of depressive subtypes in the four groups. Figure 2: Characteristics of the four groups (white arrows on a dark background indicate that the characteristic takes its largest or lowest value in the respective group in comparison to all four).
DST suppressors without PD were older, with more severe depressed mood and fewer atypical features (50% of patients; figure 2, group A). DST non-suppressors without PD were hypercortisolemic, with less severe depressed mood and a denigratory attitude towards others (16% of patients; figure 2, group B). DST suppressors with PD were younger, with a younger age of onset, more atypical features, less endogeneity and more stressful life events (18% of patients; figure 2, group C). DST non-suppressors with PD had an older age of onset, high endogeneity and high levels of expressed hostility (16% of patients; figure 2, group D). Discussion The current study reports that the frequency of personality disorders (PD) in depressed patients is 2.5–3 times higher than in the general population. Nearly half (47.05%) of these PD patients were also DST non-suppressors (NS). Atypical depression was the depressive subtype with the highest frequency of both personality psychopathology and DST non-suppression. Figure 2 presents a graphical image of the intercorrelations between personality disorder, DST results and clinical manifestations. There seems to be a circular relationship between PD, DST, age at interview, age of onset, number of episodes, reactivity to the environment, hostility and depressed mood. DST results seem to be a severity marker rather than being directly related to symptomatology. In patients without PD, DST non-suppression (group B in figure 2) may relate to milder depressed mood, a higher denigratory attitude and hostility, a higher number of previous episodes and hypercortisolemia. In patients with PD, non-suppression (group D in figure 2) was related to an 'endogenous quality' of depression and higher levels of hostility.
These patients (group D) are highly hostile and perform uninhibited hostile acts, yet simultaneously have a lower denigratory attitude and fewer hostile thoughts (possibly the hostility is impulsive) and an older age of onset. Half of the depressed patients belonged to group A (suppressors without PD) and were characterized by the absence of atypical features. One could say that they represent a more 'formal' group of depressed patients. The rest of the patients were equally distributed among the three remaining groups (B, C and D). Groups B and C may represent two distinct types of vulnerability to stress (hypercortisolemia and DST non-suppression in group B; PD in group C), while group D seems to represent a more severe form of depression, with an 'autonomous' hostility independent of the environment. This severe type could be considered the product of the accumulation of both vulnerabilities that characterize groups B and C, with the addition of a very low threshold for the tolerance of stress. Some 4–10% of normal persons are reported to be DST non-suppressors. The reason for this is unknown; however, it has been suggested that it is due to an underlying mood disorder or a family history of affective disorder. Another explanation suggests that the DST in fact reflects the degree of psychological pressure or discomfort of the subject, and not a specific vulnerability or characteristic of depression. It seems that non-suppression increases gradually along a continuum that has mourning outpatients at one pole (13% NS) and severe melancholic inpatients with psychotic features and suicidal ideation at the opposite one (64% NS) [20]. In this frame, the percentage of non-suppression reported in the current study (32%) is not in contrast with the international literature, since most of the patients were outpatients and 16 of them (32%) were melancholics. An important finding is the 42.85% rate of non-suppression in atypical patients. This is reported for the first time in the international literature. DST non-suppression and hypercortisolemia may constitute two separate entities. For example, a patient may have a baseline cortisol value of 6 μg/dl, a second value of 2.5 μg/dl and a third value of 5.5 μg/dl, and thus be classified as NS although not hypercortisolemic. On the contrary, a patient with a baseline cortisol value of 10 μg/dl, a second value of 4 μg/dl and a third also equal to 4 μg/dl is classified as a suppressor, but is hypercortisolemic.
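The dissociation can be reproduced with a short sketch. The 5 μg/dl non-suppression cut-off is taken from the protocol, whereas the hypercortisolemia criterion below (mean of the three samples above 5 μg/dl) is purely an illustrative assumption, chosen so that the sketch reproduces both example patients; the text itself gives no numerical definition of hypercortisolemia.

```python
# Sketch reproducing the two example patients above. The 5 microg/dl
# suppression cut-off comes from the protocol; the hypercortisolemia rule
# used here (mean of the three samples above 5 microg/dl) is ONLY an
# illustrative assumption, as the text gives no numerical definition.

CUTOFF = 5.0  # microg/dl

def classify(baseline: float, second: float, third: float) -> tuple:
    non_suppressor = second > CUTOFF or third > CUTOFF
    hyper = (baseline + second + third) / 3 > CUTOFF  # assumed criterion
    return ("NS" if non_suppressor else "S",
            "hypercortisolemic" if hyper else "not hypercortisolemic")

print(classify(6.0, 2.5, 5.5))   # ('NS', 'not hypercortisolemic')
print(classify(10.0, 4.0, 4.0))  # ('S', 'hypercortisolemic')
```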
Atypical patients, on the other hand, when compared with melancholics, reported more stressful life events, relatively higher levels of anxiety and shorter brain potentials [25]. While it is not possible to say what is cause and what is effect, it is interesting that papers in the international literature suggest that conditions of internal conflict increase DA activity and lead to the appearance of displacement activities, which in turn serve to lower the level of arousal and stabilize the system [26]. Increased appetite, food intake and weight gain (atypical features) could be attributed to such a displacement activity. From the opposite point of view, the depletion of DA stores is reported to increase vulnerability to stress, because the already hyperfunctioning neurons (DST non-suppression) fail to respond properly [27]. According to Tazi et al [26], behavioral analogues of the defensive mechanism of displacement seem to suppress this process and in this way contribute to better coping with stressful situations.

Conclusion

Although the sample of the current study is relatively small, the results suggest that there is more than one subtype of depression with respect to the response to stress. The majority of depressed patients (50%) seem not to experience high levels of stress, either in terms of self-reported experience or of neuroendocrine function. The remaining patients, however, either experience high levels of stress internally, show its somatic analogue (DST non-suppression), or have a very low threshold of stress tolerance, which makes them behave in a hostile way.

Competing interests

The authors declare that they have no competing interests.
/Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC543459.xml
15598349
10.1186/1475-2832-3-15
519025
Protein-polymer nano-machines. Towards synthetic control of biological processes
The exploitation of nature's machinery at length scales below the dimensions of a cell is an exciting challenge for biologists, chemists and physicists, while advances in our understanding of these biological motifs are now providing an opportunity to develop real single-molecule devices for technological applications. Single-molecule studies are already well advanced, and biological molecular motors are being used to guide the design of nano-scale machines. However, controlling the specific functions of these devices in biological systems under changing conditions is difficult. In this review we describe the principles underlying the development of a molecular motor with numerous potential applications in nanotechnology, and the use of specific synthetic polymers as prototypic molecular switches for control of the motor function. The molecular motor is a derivative of a Type I Restriction-Modification (R-M) enzyme, and the synthetic polymer is drawn from the class of materials that exhibit a temperature-dependent phase transition. The potential exploitation of single molecules as functional devices has been heralded as the dawn of a new era in biotechnology and medicine. It is not surprising, therefore, that the efforts of numerous multidisciplinary teams [ 1 , 2 ] have been focused on attempts to develop these systems as machines capable of functioning at sub-micron and nanometre length scales [ 3 ]. However, one of the obstacles to the practical application of single-molecule devices is the lack of methods for functional control in biological media under changing conditions. In this review we describe the conceptual basis for a molecular motor (a derivative of a Type I Restriction-Modification enzyme) with numerous potential applications in nanotechnology, and the use of specific synthetic polymers as prototypic molecular switches for controlling the motor function [ 4 ].
1. Type I Restriction-Modification enzymes

Type I R-M enzymes are multifunctional, multisubunit enzymes that provide bacteria with protection against infection by DNA-based bacteriophage [ 5 ]. They accomplish this through a complex restriction activity that cuts the DNA at random locations, which can be extremely distal (>20 kbp) from the enzyme's recognition sequence. In fact, the enzyme is capable of two opposing functions (restriction and modification), which are controlled enzymatically through an allosteric effector (ATP) and temporally through the assembly of the holoenzyme. In addition, the R-M enzyme has a powerful ATPase activity, which is associated with DNA translocation prior to cleavage; it is this translocation process that leads to random cleavage sites. These enzymes are therefore unusual molecular motors that bind specifically to DNA and then move the rest of the DNA through this bound complex (Fig 1 ).

Figure 1 DNA translocation by a Type I Restriction-Modification enzyme. The yellow block represents the recognition sequence for the enzyme. The enzyme binds at this site and, upon addition of ATP, DNA translocation begins. During translocation, an expanding loop is produced.

Type I R-M enzymes fall into families based on complementation grouping, protein sequence similarities, gene order and related biochemical characteristics [ 6 - 8 ]. Within one sub-type (the IC family) there are three well-described members, including EcoR124I, which is the focus of our interest. This enzyme recognises the DNA sequence GAAnnnnnnRTCG [ 9 ] and comprises three subunits (HsdR, HsdM and HsdS) in a stoichiometric ratio of R2M2S [ 10 , 11 ] (Fig 2 ). However, Janscák et al. also showed that the EcoR124I R-M holoenzyme exists in equilibrium with a sub-assembly complex of stoichiometry R1M2S [ 11 ], which is unable to cleave DNA but retains the ATPase and motor activity [ 12 ]. The HsdS subunit is responsible for DNA specificity; HsdM is required for DNA methylation (modification activity), and together they can form an independent DNA methyltransferase (M2S) [ 13 , 14 ]. HsdR, along with the core MTase, is absolutely required for DNA cleavage (restriction activity) and is also responsible for ATP binding and subsequent DNA translocation. The HsdR subunit is therefore the motor subunit of the enzyme and is associated with helicase activity [ 15 - 18 ]. However, the precise mechanism of DNA translocation is uncertain, and the true nature of the motor function has yet to be fully determined, although a number of important functional units (nuclease, helicase and assembly domains) have been identified within the HsdR subunit [ 19 ].

Figure 2 Schematic of the motor subunits. HsdS denotes the DNA-binding subunit; HsdM is the subunit responsible for DNA methylation; and the HsdR subunit, together with the core enzyme, acts to restrict DNA.

2. A versatile molecular motor

The motor activity of Type I R-M enzymes is the mechanism through which random DNA cleavage is accomplished. Szczelkun et al. [ 20 ] showed that cleavage occurs only in cis, indicating that the motor component of the HsdR subunit is able to 'grasp' adjacent DNA and pull it through the enzyme-DNA-bound complex. According to the Studier model [ 21 ], cleavage occurs when two translocating enzymes collide (Fig 3 ). However, highly efficient cleavage of circular DNA carrying only a single recognition site for the enzyme suggests that collision-based cleavage is not the whole story [ 20 , 22 ].
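Since the recognition sequence quoted in section 1 (GAAnnnnnnRTCG, where n is any base and R a purine) fully determines where the motor docks on DNA, a single-strand site scan is simple to express in code. The short Python sketch below is illustrative only: the function name and the example sequence are invented here, and a complete scan would also need to search the reverse complement, since the site is asymmetric.

```python
import re

# EcoR124I recognition sequence as given in the text: GAA, a 6-bp spacer of
# any bases, then RTCG, where R is a purine (A or G).
ECOR124I_SITE = re.compile(r"GAA[ACGT]{6}[AG]TCG")

def find_recognition_sites(dna: str) -> list[int]:
    """Return 0-based start positions of EcoR124I sites on the given strand."""
    return [m.start() for m in ECOR124I_SITE.finditer(dna.upper())]

# Hypothetical example sequence with one embedded site (gaa + tttttt + atcg):
seq = "ccccgaattttttatcgcccc"
print(find_recognition_sites(seq))  # [4]
```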
Figure 3 Mechanism of DNA cleavage. The enzyme subunits are represented as follows: green ellipse, the M2S complex; green box, the HsdR subunit (with ATPase and restrictase activities; C denotes the cleavage site). The black line represents DNA, with the yellow box denoting the recognition sequence. The arrow shows the direction of DNA translocation. For more details see text.

DNA translocation has been assayed in bulk solution using protein-directed displacement of a DNA triplex, and the kinetics of the one-dimensional motion determined. The data show processive DNA translocation followed by collision with the triplex and displacement of the oligonucleotide. A linear relationship between lag duration and inter-site distance gives a translocation velocity of 400 ± 32 bp/s at 20°C; furthermore, the data can only be explained by bi-directional translocation. An endonuclease with only one of the two HsdR subunits responsible for motion can still catalyse translocation; the reaction is less processive, but can 'reset' in either direction whenever the DNA is released (Fig 4 ).

Figure 4 Motor activity of a Type I R-M enzyme. (a) The yellow block represents the DNA-binding (recognition) site of the enzyme, which is represented by the green object approaching from the top of the diagram and about to dock onto the recognition sequence. (b) The motor is bound to the DNA at the recognition site and begins to attach to adjacent DNA sequences. (c) The motor begins to translocate the adjacent DNA sequences through the motor/DNA complex, which remains tightly bound to the recognition sequence. (d) Translocation produces an expanding loop of positively supercoiled DNA. The motor follows the helical thread of the DNA, resulting in spinning of the DNA end (illustrated by the rotation of the yellow cube). (e) When translocation reaches the end of the linear DNA it stops, resets, and the process begins again.

As previously mentioned, the final step of the subunit assembly pathway of the Type I Restriction-Modification enzyme EcoR124I produces a weak endonuclease complex of stoichiometry R2M2S1. We have produced a hybrid HsdR subunit combining elements of the HsdR subunits of the EcoR124I and EcoprrI [ 23 - 25 ] Type I Restriction-Modification enzymes. This subunit has been shown to assemble with the EcoR124I DNA methyltransferase (MTase) to produce an active complex with low-level restriction activity. We have also assembled a hybrid REase, and the data obtained show that the hybrid endonuclease containing only HsdR(prrI) is an extremely weak complex, producing primarily the R1-complex. The availability of the hybrid REase produced from the core MTase(R124I) and HsdR(prrI), which provides a stable R1-complex, also gives a useful molecular motor that will not cleave the DNA it translocates.

3. Sub-cellular localisation of R-M enzymes

As can be seen from the above, DNA cleavage by Type I restriction enzymes occurs by means of a very unusual, and highly energy-dependent, mechanism. These enzymes are therefore believed to be involved not only in defence of the bacterial cell, but also in some types of specialised recombination system controlling the flow of genes between bacterial strains [ 26 , 27 ]. Restriction enzymes protect the cell by cutting foreign DNA and might therefore be assumed to be located at the cell periphery: a periplasmic location would be well adapted for the restriction activity of R-M enzymes, but recombination requires a cytoplasmic location.
Using immunoblotting to analyse subcellular fractions, Holubova et al. [ 28 ] found that the subunits of the R-M enzyme were located predominantly in the spheroplast extract. The HsdR and HsdM subunits were found in the membrane fraction only when co-produced with HsdS, and therefore only when part of a complex enzyme, either methylase or endonuclease. Further studies have shown that the R-M enzyme is bound to the membrane via the HsdS subunit, and that for some enzymes this may involve DNA [ 29 ].

4. Uses of the EcoR124I molecular motor: polymer-protein conjugates in nanobiotechnology

One of the major obstacles to the practical application of single-molecule devices is the absence of control methods in biological media, where substrates or energy sources (such as ATP) are ubiquitous. Synthetic polymers offer a robust and highly flexible means by which devices based on single biological molecules can be controlled. They can also be used to link individual biomacromolecules to surfaces, to package them, or to control their specific functions, thus extending the applicability of the natural molecules outside conventional biological environments. Moreover, a number of synthetic polymers have recently been developed that can potentially perform nanoscale operations in a manner identical to natural and engineered biopolymers. A key property of these materials is 'smart' behaviour, especially the ability to undergo conformational or phase changes in response to variations in temperature and/or pH. Synthetic polymers with these properties are being developed for applications ranging from microfluidic device formation [ 30 ] through pulsatile drug release [ 31 - 34 ], control of cell-surface interactions [ 35 - 39 ] and actuators [ 40 ] to, increasingly, nanotechnology devices [ 41 ]. In the context of bio-nanotechnology we focus here on the uses of one particular subclass of smart materials, substituted polyacrylamides, but it should be noted that many more examples of synthetic polymers and engineered/modified biopolymers exhibit responsive behaviour, and new types and applications of smart materials are constantly being reported.

Poly(N-isopropylacrylamide) (PNIPAm) is the prototypical smart polymer and is both readily available and well understood [ 42 ]. PNIPAm undergoes a sharp coil-globule transition in water at 32°C, being hydrophilic below this temperature and hydrophobic above it. This temperature (the Lower Critical Solution Temperature, or LCST) corresponds to the region of the phase diagram at which the enthalpic contribution of water hydrogen-bonded to the polymer chain becomes less than the entropic gain of the system as a whole; it is thus largely dependent on the hydrogen-bonding capabilities of the constituent monomer units (Fig 5 ). Accordingly, the LCST of a given polymer can in principle be "tuned" as desired by varying its hydrophilic or hydrophobic co-monomer content.

Figure 5 Inverse temperature-solubility behaviour of responsive polymers at the Lower Critical Solution Temperature (LCST). The left-hand side shows the hydrated polymer below the LCST; above the LCST, entropic loss of water leads to chain collapse (right-hand side).
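The switch-like character of the coil-globule transition can be made concrete with a toy two-state model. The logistic form, the 0.5°C transition width and the function name below are assumptions chosen purely for illustration and are not fitted to PNIPAm data; only the 32°C transition temperature comes from the text.

```python
import math

T_LCST_C = 32.0  # PNIPAm coil-globule transition temperature, from the text
WIDTH_C = 0.5    # assumed transition width; real sharpness depends on chain length, co-monomers etc.

def fraction_globule(temp_c: float) -> float:
    """Toy two-state model: fraction of chains in the collapsed (globule) state.

    A logistic function centred on the LCST: well below it the chain is a
    hydrated coil, well above it the chain is a collapsed globule.
    """
    return 1.0 / (1.0 + math.exp(-(temp_c - T_LCST_C) / WIDTH_C))

for t in (30.0, 31.5, 32.0, 32.5, 34.0):
    print(f"{t:5.1f} °C -> fraction globule = {fraction_globule(t):.3f}")
```

In the conjugate described in section 4.2, this switch maps directly onto motor activity: below the LCST the extended coil sterically shields the active site, while above it the collapsed globule frees the site (cf. Fig 6).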
4.1 Soluble PNIPAm-biopolymer conjugates

Covalent attachment of single or multiple responsive polymer chains to biopolymers offers the possibility of exerting control over their biological activity since, in theory at least, the properties of the resultant polymer-biopolymer conjugate should be a simple additive function of those of the individual components. This principle is now being widely exploited in pharmaceutical development, as covalent attachment of, for example, PEG chains to therapeutic proteins has been shown to stabilize the proteins without loss of their biological function [ 43 - 48 ]. Polymer-biopolymer conjugates can be prepared as monodisperse single units or as self-assembling ensembles, depending on the chemistries used for attaching the synthetic component and on the associative properties of the polymer and/or biopolymer. Furthermore, by altering the response stimulus of the synthetic polymer, and how and where it is attached to the biopolymer, the activity of the overall conjugate can be very closely regulated. These chimeric systems can thus be considered true molecular-scale devices.

Pioneering work in this area has been carried out by Hoffman, Stayton and co-workers, who engineered a mutant of cytochrome b5 such that a single cysteine introduced by site-directed mutagenesis was accessible for reaction with maleimide end-functionalised PNIPAm [ 49 ]. Since native cytochrome b5 does not contain any cysteine residues, this substitution provided a unique attachment point for the polymer. The resultant polymer-protein conjugate displayed LCST behaviour and could be reversibly precipitated from solution by variation in temperature. This approach has proved very versatile, and a large number of polymer-biopolymer conjugates have now been prepared, incorporating biological components as diverse as antibodies, protein A, streptavidin, proteases and hydrolases [ 50 , 51 ]. The biological functions or activities of these conjugate systems were all similar to those of their native counterparts, but were switched on or off by thermally induced polymer phase transitions. Of special note are recent reports of a temperature- and photochemically switchable endoglucanase, which displayed varying and opposite activities depending on whether temperature or UV/Vis illumination was used as the switch [ 52 ].

4.2. Controllable DNA packaging and compartmentalization devices

We are currently developing responsive polymers as a switch to control the EcoR124I motor function and are investigating this polymer-motor conjugate as part of an active drug delivery system. We aim at a practical demonstration of a nano-scale DNA packaging/separation and delivery system uniting the optimal features of both natural and synthetic molecules. In essence, we assemble a supramolecular device containing the molecular motor, capable of binding and directionally translocating DNA through an impermeable barrier. To control the process of translocation in biological systems, where a constant supply of ATP is present, we have added to the motor subunit of EcoR124I the thermoresponsive poly(N-isopropylacrylamide) (PNIPAm), which, through its coil-globule transition, acts as a temperature-dependent switch controlling motor activity. PNIPAm copolymers with reactive end-groups are attached to a preformed R subunit of the motor via coupling of a maleimide-tipped linker on the synthetic polymer terminus to a cysteine residue.
This residue was selected because it is both accessible and located close to the active centre on the R subunit of the motor. The protein-polymer conjugates are stable to extensive purification and, when combined with the M2S complex, the activity of this conjugate motor system is similar to that of the native counterpart, but can be switched on or off by thermally induced polymer phase transitions [ 53 , 54 ]. Thus the conjugation of the responsive polymer to the molecular motor generates a nano-scale switchable device (Fig 6 ) that can translocate DNA under one set of conditions (i.e. into a protective capsule or into a compartment). Conversely, in another environment (e.g. inside cells), in response to changed conditions (e.g. altered temperature or pH), the polymer switch will change its conformation, allowing ATP to power the motor and releasing DNA from capsules or compartments.

Figure 6 Schematic representation of the molecular motor function controlled by a thermoresponsive polymer switch. R, M and S denote the specific motor subunits. Chain extension of the polymer below the LCST provides a steric shield blocking the active site; chain collapse above the LCST enables access to the active site and restores enzyme function. For more details see text.

The conjugation of the motor with synthetic polymers brings additional advantages. One such benefit arises from the ability to functionalise the polymer side chains or terminus in a way that allows attachment of the entire complex to surfaces for sensing and device applications. Therefore, although our hybrid polymer-protein conjugate was originally aimed at gene targeting (as it has the potential to increase the delivery of intact DNA to cell nuclei and thereby increase gene expression), this system may also be used in building automated nano-chip sensors and therapeutic and diagnostic devices, where DNA itself would be a target, or where DNA might be used as a 'conveyor belt' for attached molecules. The strength of the molecular motor has proven sufficient to disrupt most protein-DNA interactions, and thus numerous processes and applications requiring highly localised force can also be envisaged.

5. Conclusions

The use of synthetic polymers offers a number of possibilities that could not otherwise be exploited, or would be difficult to take advantage of, if purely biological systems were used. Moreover, the combination of the properties of molecular motors with "smart" polymers has hitherto been unexplored and represents a novel concept in nanotechnology, which could ultimately lead to a wholly new class of molecular devices. Nanoscale control of molecular transport in vitro, and especially in vivo, opens up a whole host of possibilities in medicine, including drug or DNA delivery (e.g. gene therapy), but also applications where protection of a therapeutic is required under one biological regime and release in another (e.g. prodrugs conjugated to DNA that can be released by nuclease-mediated degradation at the site of action). In addition, this system may allow the generation of switchable nanodevices and actuators, controllable by changes in the synthetic copolymer structure as well as by ATP-mediated DNA motion, and may pave the way for biofeedback-responsive nanosystems. It could be used for nano-scale isolation of various biochemical processes in separate compartments connected via a tightly controlled shuttle device.
In essence, this concept bridges the disciplines of chemistry and biology by using a biological motor to control chemistry and a synthetic polymer to regulate biological processes.

Authors' contributions

KF conceived the idea of using the modified R-M enzyme as a molecular motor and carried out, with co-workers, the molecular studies of the motor components; SSP carried out the polymer synthesis, polymer-motor conjugations and functional studies; CA designed and participated in the synthesis of smart polymers; and DCG conceived of the study. All authors participated in study design and coordination as well as the reading and approval of the final manuscript.
/Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC519025.xml
15350203
10.1186/1477-3155-2-8