pmcid | title | abstract | fulltext | file_path
---|---|---|---|---|
516447 | Construction of a questionnaire measuring outpatients' opinion of quality of hospital consultation departments | Background Few questionnaires on outpatients' satisfaction with hospital care exist. All have been constructed without giving enough room for the patient's point of view in the validation procedure. The main objective was to develop, according to psychometric standards, a self-administered generic outpatient questionnaire exploring opinion on quality of hospital care. Method First, a qualitative phase was conducted to generate items and identify domains using the critical incident technique and a literature review. A list of easily comprehensible, non-redundant items was defined using the Delphi technique and a pilot study on outpatients. This phase involved outpatients, patient association representatives and experts. The second step was a quantitative validation phase comprising a multicenter study in 3 hospitals, 10 departments and 1007 outpatients. It was designed to select items, identify dimensions, and measure reliability and internal and concurrent validity. Patients were randomized according to the place of questionnaire completion (hospital v. home) (participation rate = 65%). Third, a mail-back study on 2 departments and 248 outpatients was conducted to replicate the validation (participation rate = 57%). Results A 27-item questionnaire was obtained, comprising 4 subscales (appointment making, reception facilities, waiting time and consultation with the doctor). The factorial structure was satisfactory (loading > 0.50 on each subscale for all items, except one item). Interscale correlations ranged from 0.42 to 0.59; Cronbach α coefficients ranged from 0.79 to 0.94. All item-scale correlations were higher than 0.40. Test-retest intraclass coefficients ranged from 0.69 to 0.85. A unidimensional 9-item version was produced by selecting, within each subscale, the one third of items with the strongest loading on the principal component and the best item-scale correlation corrected for overlap. Factors related to satisfaction level independently of department were age, previous consultations in the department and satisfaction with life. Completion at the hospital immediately after the consultation led to an overestimation of satisfaction. No satisfaction score differences existed between spontaneous respondents and patients responding after reminder(s). Conclusion A good estimation of patient opinion on hospital consultation performance was obtained with these questionnaires. When comparing performances between departments, or within the same department over time, scores need to be adjusted for the 3 variables that influence satisfaction independently of department. Completion of the questionnaire at home is preferable to completion in the consultation facility, and reminders are not necessary to produce non-biased data. | Background Medical care aims not only to improve health status but also to respond to patient needs and wishes and to ensure their satisfaction with care [ 1 ]. Likewise, conducting surveys to measure satisfaction with psychometrically validated questionnaires entails assessment of the quality of care organization and procedures [ 2 ]. Patient judgement on medical care also contributes to medical outcome. In the case of ambulatory care, it has been clearly shown that satisfied patients are more likely to cooperate with treatment, to maintain a continuing relationship with a practitioner [ 3 ] and thus enjoy a better medical prognosis [ 4 ].
From a conceptual point of view, the construct of patient satisfaction has been defined by Ware as an "attempt to capture the personal evaluation of care that cannot be known by observing care directly", the opinion of patients being considered a multidimensional subjective indicator of quality of care [ 5 ]. The model most commonly, though implicitly, used in satisfaction research is the discrepancy model (satisfaction level is related to the degree of fulfillment of expectations), which gives patient expectations a central role [ 6 ]. This model, according to Sitzia, "implies that concentrating upon areas of expressed dissatisfaction is more valuable than obtaining consistency of expressed satisfaction" [ 4 , 7 ]. In France, measuring satisfaction has been mandatory since 1996 and several questionnaires have been developed to evaluate inpatient care [ 8 - 12 ]. Most existing outpatient satisfaction questionnaires have been developed to assess primary care practice, especially general practice [ 13 - 20 ]. However, it could be hypothesized that the content of questionnaires evaluating primary care physicians may differ from that of questionnaires exploring hospital consultations with a specialist, because of differences in patient expectations. It could thus be assumed that dimensions that are very important in primary care, such as the human qualities of the physician and medical information, have lesser importance in hospital consultations, while technical competency has a more important place [ 21 - 23 ]. Few questionnaires have been developed for hospital consultations. Of these, some were specific to one type of consultation, such as oncology [ 24 ], rheumatology [ 25 ] or diabetes clinics [ 21 ], while others were non-generic questionnaires [ 14 ]. There is one French-language questionnaire on satisfaction with outpatient hospital care; however, it was developed from an "expert" viewpoint [ 26 ]. Hence the decision to construct a complementary "patient-oriented" questionnaire involving potential respondents in the generation and selection of items. Even if health care organization differs across countries, the role of the hospital in most countries is very similar, and it could be expected that a questionnaire developed in France could be used in other countries with a public health system, in particular European countries. The main objective was to develop, according to psychometric standards, a generic outpatient satisfaction questionnaire that could be used to compare hospital outpatient departments with one another or the same department over time. The questionnaire needed to be brief, understandable and easy to complete for outpatients aged 18 years or older in medicine, surgery and psychiatric hospital consultations. It was designed to be self-administered. The final French version is being adapted into English, German, Italian, Spanish and Hebrew. The secondary objective was to define administration procedures for routine studies that minimize non-response bias. Three situations were tested: i) questionnaire issued and completed at the hospital, immediately after consultation; ii) questionnaire mailed and completed at home before any reminder; iii) questionnaire mailed and completed at home only after reminder(s). The groups were compared for satisfaction. Overview of the questionnaire development It comprised 2 phases. First, a qualitative phase for item generation and construction of a first version of the questionnaire (41-item version).
Second, a quantitative phase comprising 2 steps: a first validation phase that provided a shortened version of the questionnaire (27-item version), and a replication validation phase to corroborate the results of the previous steps. Finally, a very short-form version (9-item) was constructed. All versions are presented in the Appendix (see additional file 1). A steering committee comprising methodologists, hospital practitioners and representatives of patient associations defending health care user rights supervised the questionnaire development procedure. All analyses were performed using SAS software (version 8). Method Qualitative phase of item generation A psychologist conducted 25 individual semi-structured interviews with recent outpatients, using the critical incident technique [ 27 ]. Subjects were asked to detail specific events they had experienced and situations associated with neutral, pleasant or unpleasant emotions that had influenced their opinion of the consultation. An interview guide constructed according to the chronological order of a consultation was used. The interviews were pursued until no new ideas emerged. Patients expressing ideas that were too general, or those talking about non-personal experiences, were interrupted in order to refocus on a particular personal experience. Each interview lasted 30 minutes on average. All the different wordings of a given idea were recorded. Interviews were transcribed and items were generated from the verbatim statements (n = 105 items). A literature review was carried out on validated satisfaction questionnaires [ 5 , 13 - 20 , 23 - 26 , 28 - 30 ]. This yielded a preliminary list of areas of satisfaction with consultation. Items found in the literature but not in the interviews were collated (n = 26). This procedure also identified other factors related to outpatient satisfaction with consultation (patient and physician profiles), relevant for inclusion in the questionnaire. The aim was to select, for the final questionnaire, the variables linked to satisfaction independently of the place of consultation (department). These variables constitute background adjustment factors needed to avoid bias when comparing departments with one another or the same department over time (patient age, for example) [ 31 ]. A list of satisfaction items (n = 131) was constructed and classified into the following domains: administrative procedures, appointment making, receptionist and nurses, waiting time, facilities, duration and privacy of the consultation with the doctor, human relationships with the doctor, information provided by the doctor and shared decision-making, doctor's technical competence, coordination and continuity of care, and global satisfaction. The source of items (interview v. literature) was indicated. Using the Delphi technique [ 32 ], the steering committee and six patients (members of the National League against Cancer) selected items within each domain (n = 60). The number of items to be chosen was proportional to the number of items proposed in each domain. The list of items was submitted as many times as necessary to obtain a consensus of at least 80% among the raters. A focus group (one two-hour meeting) coordinated by two of the authors (IG, SV), including two patient association representatives and three patients with previous individual access to the list of items, checked the acceptability of item wording and the exhaustiveness of the list.
A pilot study was conducted on 55 outpatients from different outpatient departments, using a preliminary questionnaire comprising the selected items, to check the comprehensibility and acceptability of items and response patterns. Confusing items were removed, rewritten or replaced. The list of the items extracted from this qualitative phase is shown in the appendix. Questionnaire The questionnaire obtained from the qualitative phase and tested in the first study comprised 41 negatively and positively worded satisfaction items (Appendix [see Additional File 1]). The traditional approach was chosen, in which the item is structured as a statement of opinion. A Likert five-point balanced response scale was chosen (in French: 'yes certainly', 'yes probably', 'neither yes nor no', 'probably not', 'certainly not') because it seems to be the best format [ 5 , 33 ] and is the most often used [ 5 , 13 , 14 , 17 - 19 , 24 , 28 , 29 ]. A 'does not apply' category was provided for 19 items relating to situations not universally relevant. Each item was scored from 0 to 4, 4 indicating the greatest satisfaction. Non-response and 'does not apply' categories were treated as missing data. Patients were asked to answer for their last consultation in the department. Several other items on general satisfaction were also included in this questionnaire: one overall satisfaction item, using a seven-point scale (from 'not at all' to 'completely' satisfied), two items on intended behavior (to recommend, to consult again), using a four-point scale ('yes certainly', 'yes probably', 'probably not', 'certainly not'), and one open-ended question. These items were included to test concurrent validity. The questionnaire also comprised data on sociodemographic profile, medical status, visit background and characteristics, and an overall satisfaction with life item (using a 7-point scale, from 'not at all' to 'completely' satisfied). This last variable was included because of the relationship between affective disposition and the expression of satisfaction [ 34 , 35 ] and because of the relationship between satisfaction with life and satisfaction with care [ 36 ]. Samples and study design of the quantitative phase First study (first validation phase) To select items, a first study was conducted in 2001–2002 in 10 wards of 3 short-stay public teaching hospitals of the Paris area (Paul Brousse, Bichat and Georges Pompidou European hospitals). Data were collected in 7 medical departments (internal medicine, rheumatology, 2 cardiology, dermatology, infectious disease, and oncology) and 3 surgical outpatient departments (urology, orthopedics, surgical gynecology). All consecutive eligible ambulatory patients over 18 years in scheduled consultation with a physician were included, to obtain approximately 100 subjects per department. Patients hospitalized before or immediately after the consultation were excluded. Research assistants approached outpatients immediately after consultation and invited them to participate. Outpatients were randomized prior to being approached. Outpatients randomized to group 1 completed the questionnaire alone immediately after consultation and left it in a box. Patients of group 2 received the questionnaire by mail at home for completion. They were asked to complete and return it by post in a prepaid envelope carrying a neutral address. Non-respondents were sent up to 3 more questionnaires at one-week intervals.
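Returning to the scoring rule described above (items scored 0–4, with non-response and 'does not apply' treated as missing), the following is a minimal Python sketch of how item scores could be derived. The English response labels and function names are illustrative stand-ins for the French originals, not the authors' actual code; reverse-coding for negatively worded items is included as an option.

```python
import numpy as np
import pandas as pd

# English renderings of the five French response labels, mapped to 0-4
# (4 = greatest satisfaction). Labels here are illustrative stand-ins.
LIKERT = {"certainly not": 0, "probably not": 1, "neither yes nor no": 2,
          "yes probably": 3, "yes certainly": 4}

def score_item(response, negatively_worded=False):
    """Score one raw answer; non-response and 'does not apply' -> NaN."""
    if response not in LIKERT:
        return np.nan  # treated as missing data, per the rule above
    score = LIKERT[response]
    return 4 - score if negatively_worded else score

answers = pd.Series(["yes certainly", "does not apply", "probably not"])
print(answers.map(score_item))  # -> 4.0, NaN, 1.0
```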
To assess reliability over time, a sample of 38 respondents from the second group was sent a second questionnaire to return completed, without any reminder. Finally, of the 1548 outpatients approached, 70.9% agreed to participate (n = 1097) and 65.1% completed the questionnaire (n = 1007). Response rates were 57.0% in group 1 and 73.7% in group 2 (40.2% before any reminder, 63% after one reminder, 69.7% after two and 73.7% after three). Reasons for non-participation were refusal or lack of time (12.9% of the overall sample), language barrier (8.5%), inability for medical reasons (7.2%), other reasons (0.6%) and agreement but no return of the questionnaire after 3 reminders (5.8% of the overall sample and 12.0% of group 2). Compared to respondents, the non-respondent group comprised older subjects (60.2% v. 52.6% aged over 50 years, p < 0.001), more foreigners (12.5% v. 29.1%, p < 0.001) and more patients consulting for the first time in the department (28.0% v. 22.4%, p = 0.02). Response rates also differed according to the department (p < 0.001) and the hospital (p < 0.001). Second study (replication phase) To confirm the results of the previous study, a second study was conducted in the year 2002 in two departments (internal medicine and infectious disease) of one short-stay public teaching hospital. All consecutive outpatients of 18 years and over (not hospitalized immediately after consultation) were included, to obtain 100 participants per department. The questionnaires were posted with a prepaid envelope. One reminder was sent 10 days after the first mailing to non-respondents. The participation rate was 33.9% before the reminder and 56.5% after (n = 248). Results First validation phase Item selection A first selection of items was made from the descriptive response distribution for each item. The criteria used to guide item selection/deletion were: high rates of non-response and 'not applicable' responses (≥ 20%), except for items where high rates in this response category were expected; ceiling and floor effects (≥ 50%); and unacceptable test-retest reliability (weighted kappa coefficient < 0.60). Pragmatic considerations also tempered selection: the interest of the item in itself, the number of items covering the same domain, and redundancy. Results showed that the proportion of missing responses per item was low. As expected, for the two items relating to accessibility of the service in case of emergency (items 5 and 6, Appendix [see Additional File 1]), the number of 'does not apply' responses was high (30.7% and 45.0%). A ceiling effect was observed for all items (from 54.4% to 79.6%), except for those on facilities and waiting time (items 10 to 13). Test-retest reliability was good for 20 items (weighted kappa ≥ 0.7 for 10 items and from 0.6 to 0.69 for 10 items). For 5 items, the coefficient ranged from 0.45 to 0.56. The item on doctors' warnings about side effects of treatment (item 22) had a very low weighted kappa (k = 0.17). At this stage 12 items were discarded. Item 22 was retained for its clinical relevance (Table 1 ).
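The selection/deletion criteria above translate directly into a screening routine. The sketch below is a hypothetical illustration, not the authors' SAS code: it assumes item scores of 0–4 with NaN for non-response or 'does not apply', and uses scikit-learn's linearly weighted kappa for the test-retest criterion.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def flag_item(scores, retest=None):
    """Flag one item against the selection criteria described above.

    scores/retest: arrays of item scores (0-4), NaN = non-response or
    'does not apply'. Thresholds follow the text: >= 20% missing,
    >= 50% ceiling or floor, weighted kappa < 0.60 on test-retest."""
    scores = np.asarray(scores, dtype=float)
    valid = scores[~np.isnan(scores)]
    flags = {
        "too_many_missing": np.isnan(scores).mean() >= 0.20,
        "ceiling_effect": (valid == 4).mean() >= 0.50,
        "floor_effect": (valid == 0).mean() >= 0.50,
    }
    if retest is not None:
        retest = np.asarray(retest, dtype=float)
        ok = ~np.isnan(scores) & ~np.isnan(retest)
        kappa = cohen_kappa_score(scores[ok].astype(int),
                                  retest[ok].astype(int), weights="linear")
        flags["unreliable"] = kappa < 0.60
    return flags

# Example: an item with a strong ceiling effect (hypothetical data)
print(flag_item([4, 4, 4, 3, 4, np.nan, 4, 2]))
```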
Table 1 Item description and scaling properties of the questionnaires extracted from the validation phase (26-item version) and from the replication phase. Intermediate questionnaire: 26 items retained at the end of the first validation phase (first study, n = 1007). Final questionnaire: 27-item questionnaire tested in the replication phase (second study, n = 248). Title of the scales Consultation with the doctor Appointment making Reception Waiting time & facilities Overall scale Consultation with the doctor Appointment making Reception & facilities Waiting time Overall scale Items properties # of items in the scale 13 6 3 4 26 13 6 5 3 27 # of questionnaires with at least 1/2 of items completed (1) 996 931 1004 1001 1003 235 248 247 244 248 # of items with 'non response' ≥ 20% 0 0 0 0 0 0 0 0 0 0 # of items with 'does not apply' response ≥ 20% 0 2 0 0 2 1 0 0 0 1 # of items with ceiling effect ≥ 50% (≥ 60%) 13 (12) 6 (4) 3 (2) 0 (0) 22 (18) 12 (4) 1 (1) 2 (0) 0 (0) 15 (5) # of items with floor effect ≥ 50% 0 0 0 0 0 0 0 0 0 Range of weighted kappa (# of items with kappa ≥ 0.60) 0.14–0.83 (10) 0.46–0.77 (3) 0.45–0.78 (1) 0.68–0.82 (4) 0.14–0.82 (18) - - - - Scaling properties Mean score (± sd) 85.1 (17.2) 83.2 (19.9) 88.0 (14.5) 69.6 (24.9) 82.7 (13.7) 84.1 (17.2) 80.6 (18.5) 75.3 (18.3) 61.3 (19.6) 78.9 (15.3) Ceiling / floor effect (%) 26.2 / 0.1 32.2 / 0.2 38.5 / 0.1 16.2 / 1.1 4.2 / 0.1 25.8 / 0.4 24.7 / 0.4 13.7 / 0.5 19.0 / 4.9 4.4 / 0.4 Skewness value / SE -3.00 -2.09 -3.5 -0.86 -2.67 -0.98 -0.83 -0.58 -0.20 -0.76 Range of interscale correlations 0.33–0.35 0.34–0.37 0.35–0.40 0.33–0.40 - 0.46–0.51 0.51–0.59 0.42–0.49 0.42–0.53 - # of items with own-scale correlation ≥ 0.40 (3) 12 6 2 4 - 13 6 5 3 - # of items with own-scale (3) correlation greater than with other scales 13 6 3 4 - 13 6 4 3 - Cronbach alpha coefficient 0.85 0.82 0.69 0.77 0.88 0.94 0.87 0.79 0.89 0.94 Intraclass coefficient [95% CI] 0.69 [0.49–0.83] 0.84 [0.71–0.91] 0.86 [0.75–0.92] 0.83 [0.71–0.91] 0.90 [0.81–0.94] - - - - (1) Including non-response and 'does not apply' responses (2) From the final principal component factor analysis (3) Corrected for overlap Factorial structure The 29 items retained were entered into a principal-components factor analysis (PCFA) with 'varimax' rotation, and the 26 items with a substantial loading (≥ 0.40) on only one factor were retained (Appendix [see Additional File 1]). Another PCFA was computed on the 26 remaining items to determine the structure of the instrument. The scree plot revealed a predominant eigenvalue but nevertheless a four-dimensional structure (the following eigenvalues showed a smooth decrease). Hence the proposal is to consider a four-dimension structure with the possibility of an overall score. One dimension grouped the 13 items relating to consultation with the physician. The second dimension grouped the 6 variables relating to appointment-making. The third and fourth related respectively to waiting time and facilities (4 items) and reception (3 items). None of the 26 items loaded on more than one factor. Only item 26 ('doctor in touch with attending physician') had a borderline loading (0.37), but it was kept because coordination of care in the hospital setting is important. The stability of the 4 factors was ascertained with PCFA on subgroups (male v. female and surgery v. medicine) and with 'oblique' rotation. Scale properties Scores for each scale were based on the standardized sum of the items, giving a range from 0 (low satisfaction) to 100 (high satisfaction).
Scores were computed when at least half the items in a scale were completed. Because of a ceiling effect, mean scale scores are relatively high, except for the 'waiting time and facilities' scale (Table 1 ). Interscale correlations were good for the four scales. One item had a borderline correlation with its own scale (r = 0.37 for item 7, 'the consultation room was clearly sign-posted') and one item had a low correlation (r = 0.33 for item 26, 'doctor in touch with attending physician'). All items had a higher correlation with their hypothesized scales than with other scales. Reliability was good, meeting both Cronbach alpha and intraclass correlation coefficient requirements (Table 1 ). A very strong association between the overall scale, intended behaviors, comments and the global satisfaction question was noted, suggesting good concurrent validity (Table 2 ). Table 2 Association between overall satisfaction scale, intended behaviors and global satisfaction item from the first validation study (n = 1007) and replication study (n = 248) First validation phase First study (n = 1007) Replication phase Second study (n = 248) n Score (sd) (1) P n Score (sd) (2) P Overall satisfaction item (3) 1 ( not at all satisfied ) 13 58.0 (21.6) 2 12 68.7 (17.7) 3 18 56.0 (13.5) <0.001 (4) - na - 4 39 59.4 (14.0) 5 125 72.1 (11.5) 6 285 80.3 (10.0) 7 ( completely satisfied ) 496 90.6 (7.6) To recommend to relatives or friends certainly not 13 52.7 (18.2) <0.001 (4) - na - probably not 30 57.3 (16.7) yes probably 272 75.4 (13.1) yes certainly 665 87.6 (9.5) To consult again certainly not 10 58.7 (24.2) <0.001 (4) - na - probably not 20 57.3 (17.9) yes probably 213 72.5 (14.5) yes certainly 756 86.6 (10.3) To consult again 9 Do not agree - na - 46 57.8 (14.4) <0.001 agree 18 64.2 (12.4) Fully agree 1 84.3 (11.7) Content of the open-ended question negative comment 303 76.1 (14.7) 85 72.1 (14.9) mixed comment 110 78.4 (13.5) <0.001 8 81.0 (12.4) <0.001 no comment 442 85.4 (12.1) 91 82.0 (15.8) positive comment 152 91.2 (8.2) 42 85.5 (11.1) na: not available (1) Overall satisfaction score (26-item scale extracted from the first study) (2) Overall satisfaction score (27-item final scale extracted from the second study) (3) 7-point scale from 1 'not at all satisfied' to 7 'completely satisfied' (4) ANOVA test regrouping responses 1, 2 and 3 Replication phase Questionnaire tested (see appendix) A modified version of the questionnaire was constructed at the end of the previous step. To avoid the ceiling effect highlighted at the previous stage, response choices were modified (using the pattern 'fully agree', 'agree', 'moderately agree', 'not really agree', 'not agree at all'). One satisfaction item on waiting time was added and one item on the facilities was reworded, to improve the chance of revealing a 'waiting time' subscale and a 'reception-facilities' subscale and because the reliability of the 'reception' subscale was borderline. Patient demographic variables identified at the previous stage as having a relationship with satisfaction scores, one item on intended behavior and an open-ended comment field were also added to the questionnaire. Psychometric properties of the final 27-item version The number of items with a ceiling effect decreased. Item completion rates were good (Table 1 ). PCFA was performed on the 27 items. The scree plot highlighted the same internal structure.
The 'varimax' rotation revealed that two dimensions were identical to those identified in the first study ('consultation with the doctor' and 'contact-appointment') (Table 4 ). The two others were slightly altered: the three items on 'waiting time' were isolated from the items about 'facilities', which grouped themselves with the 'reception' factor. All items had a good loading on their own factor. Item 9 ('pleasantness and availability of receptionist') was the only item with a secondary loading on another component. It was kept because it was the only item on the human qualities of non-medical staff, which were cited very often by patients in the qualitative phase (Table 1 ). Table 4 Principal components factor analysis (varimax rotation) computed with the final 27-item version of the questionnaire (second study, n = 248) Factor 1 Factor 2 Factor 3 Factor 4 Consultation with the doctor Appointment making Reception & facilities Waiting time 1 Easy to make an appointment by phone 0.07 0.49 0.12 0.39 2 Pleasantness of staff answering the phone 0.24 0.56 0.30 0.19 3 Acceptable time lapse to obtain appointment 0.19 0.81 0.22 0.18 4 Possibility of obtaining an appointment on convenient day and hour 0.20 0.77 0.20 0.14 5 Contacting someone in the facility on the phone for help or advice in case of problem 0.30 0.68 0.18 0.08 6 In an emergency, getting a quick appointment in the facility 0.19 0.77 0.24 0.11 7 Inside the hospital the consultation room was clearly sign-posted 0.17 0.15 0.67 -0.16 8 Administrative procedures (completing papers and paying) fast and easy 0.15 0.25 0.61 0.16 9 Pleasantness and availability of receptionist 0.20 0.44 0.59 0.23 10 Waiting room pleasant 0.11 0.15 0.78 0.26 11 Premises clean 0.17 0.22 0.66 0.18 12 Saw the doctor at the appointed time 0.24 0.18 0.14 0.84 13 Waiting time acceptable 0.25 0.29 0.10 0.85 14 Information on how long to plan for 0.23 0.16 0.20 0.78 15 The doctor was welcoming 0.69 0.09 0.38 0.08 16 Took an interest in me, not just my medical problem 0.72 0.06 0.15 0.18 17 Spent adequate time with me 0.82 0.14 0.21 0.07 18 Examined me carefully 0.78 0.04 0.21 0.07 19 Explained what he/she was doing during the consultation 0.78 0.15 0.17 0.12 20 Wanted to know if I had pain 0.75 0.13 0.17 0.15 21 Asked if I was taking medication for other health problems 0.72 0.07 0.08 0.09 22 Warned me about possible side effects of treatment (operation, drugs) 0.68 0.30 0.01 0.19 23 Took my opinion into account 0.79 0.20 0.02 0.10 24 Explained decisions 0.76 0.25 0.18 0.20 25 I got the information I wanted 0.78 0.23 0.12 0.17 26 He/she is in touch with my GP 0.57 0.20 0.02 0.15 27 Agree with doctor's instructions 0.49 0.32 0.04 -0.06 For item-scale correlations, item 9 also correlated with these two scales (the 'reception-facilities' factor and 'contact-appointment'). It was decided to attribute it to the factor that maximized internal consistency (the 'reception-facilities' scale). All items met the requirement of being highly correlated with their own scale, and all interscale correlations were satisfactory, as was internal consistency (Table 1 ). The overall scale was significantly associated with comments and intended behaviors (Table 2 ).
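For readers wishing to reproduce this type of analysis, the following Python sketch illustrates the core computations reported in this section: principal-component loadings with a varimax rotation, Cronbach's alpha and item-scale correlations corrected for overlap. It assumes a complete-case respondents-by-items score matrix and uses generic function names; it is a sketch, not the authors' original SAS code.

```python
import numpy as np
from sklearn.decomposition import PCA

def varimax(loadings, gamma=1.0, n_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of an (items x factors) loading matrix."""
    p, k = loadings.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(n_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(loadings.T @ (
            L ** 3 - (gamma / p) * L @ np.diag(np.sum(L ** 2, axis=0))))
        R = u @ vt
        if np.sum(s) < d * (1 + tol):
            break
        d = np.sum(s)
    return loadings @ R

def cronbach_alpha(X):
    """Internal consistency of an (n respondents x k items) score matrix."""
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum()
                          / X.sum(axis=1).var(ddof=1))

def corrected_item_scale(X):
    """Item-scale correlations corrected for overlap (item excluded)."""
    total = X.sum(axis=1)
    return np.array([np.corrcoef(X[:, j], total - X[:, j])[0, 1]
                     for j in range(X.shape[1])])

# X: complete-case item scores; random stand-in data for illustration only
X = np.random.default_rng(0).integers(0, 5, size=(248, 27)).astype(float)
pca = PCA(n_components=4).fit(X)
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
rotated = varimax(loadings)  # assign each item to its highest-loading factor
print(cronbach_alpha(X), corrected_item_scale(X)[:3])
```

The short-form construction described next draws on these same quantities (loading on the principal component and corrected item-scale correlation).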
Construction of a unidimensional 9-item form As the factorial analyses of both the first validation and replication phases revealed a predominant factor that could be split into four underlying dimensions, it was decided to construct a unidimensional form of the questionnaire that could produce an overall global score, which would be very useful in evaluation studies. Within each dimension, one third of the items were selected according to two criteria: items without a 'not applicable' response choice, and items having a strong loading on the principal component in the PCFA. Thus 9 items were selected: 4 items from the 'consultation with the doctor' scale, 2 from the 'contact-appointment' scale, 2 from 'reception-facilities' and 1 from 'waiting time' (Appendix [see Additional File 1]). A final PCFA on these 9 items showed scale unidimensionality. Item loadings on this factor ranged from 0.56 to 0.78. Item-scale correlations corrected for overlap ranged from 0.47 to 0.65. Internal consistency was good (Cronbach α = 0.86). Effect of mode of questionnaire administration on estimation of patient satisfaction The first study showed that, compared to the satisfaction scores obtained with completion at home, mean scores for all hospital-completed satisfaction scales were significantly higher. In the group that completed the questionnaire at home, comparison between respondents before any reminder and respondents after reminder(s) showed no difference in satisfaction scores, whatever the scale considered (Figure 1 – Satisfaction scores according to the place of completion and time of answering [before v. after reminder]). Figure 1 Satisfaction scores according to the place of completion and time of answering (before v. after reminder) (first study, n = 1007). Differences between departments A multiple linear regression showed that differences between departments were highly significant, even when patient characteristics that influenced patients' satisfaction were taken into account (i.e. age, satisfaction with life and previous consultations). Satisfaction scores ranged from 79.3 to 91.7 for the 'consultation with the doctor' scale, from 72.8 to 94.2 for the 'appointment making' scale, from 83.4 to 91.3 for the 'reception' scale, from 57.3 to 80.5 for 'waiting time-facilities' and from 77.8 to 89.3 for the overall scale. Older age, good satisfaction with life and numerous previous consultations in the department were all associated with high levels of satisfaction, independently of the department (Table 3 ). Table 3 Association between demographic, medical and outpatient consultation characteristics considered as explanatory variables and overall satisfaction score as dependent variable (1) (linear regression analysis from the first study) DF F-value P-value Demographic profile Age (quantitative variable) 1 7.75 0.006 Matrimonial status (married or living with partner v. single v. divorced, separated, or widowed) 2 0.93 0.42 Working status (employed v. student v. unemployed v. retired v. prolonged sick leave v. other) 5 0.79 0.56 Level of education (university yes v. no) 1 1.72 0.19 Overall satisfaction with life (quantitative variable) 1 51.1 0.001 Modes of care provision Outpatient department (n = 10) 9 4.3 0.001 # of consultations in the department (first v. 2 to 3 v. 4 to 5 v. more than 5) 3 2.92 0.03 At least one hospitalization in the ward 1 0.73 0.39 Medical profile Duration of the health problem justifying the consultation (less than 6 months v.
6 months and more) 1 – 0.22 Severe medical problem ('yes definitely' v. 'yes rather' v. 'neither yes nor no' v. 'not really' v. 'definitely not' v. 'do not know') 5 1.19 0.31 Comorbidity (yes v. no) 1 1.97 0.16 Perceived health status, compared to persons of same age (better v. similar v. worse) 2 0.49 0.61 829 observations used in the analysis, r 2 = 0.21 (1) First short version of the satisfaction questionnaire. List of other variables not entered into the model because of non-significance in the bivariate analysis (p > 0.1): gender, nationality, gender of the physician, motive of the consultation (physical, psychological or mixed), prescription of tests or medication, and having a general practitioner. Discussion Psychometric properties of the scale The 27-item and 9-item versions of the questionnaire developed here appear sufficiently concise, valid and reliable to provide a non-biased subjective evaluation of the outpatient viewpoint on the quality of care and services in hospital consultations. The questionnaire demonstrated very good internal consistency and good reliability over time. The construction strategy presented here follows most of the recommendations for "good practice" in the validation of measurement tools of patient satisfaction with care [ 7 ]. The questionnaire content comprises culture-specific features but overall remains consistent with various North American and European studies [ 21 , 23 , 26 , 37 ]. The predominant role given to patients in the early development stages, the literature review and the involvement of various experts ensure good content, construct and face validity. This first qualitative step, often insufficiently detailed and structured in satisfaction questionnaire construction, is indeed crucial [ 38 ]. The quantitative phase (i.e. first validation with replication) used not only statistical and psychometric results to reach decisions, but also the "intrinsic" and "clinical" relevance of items. This is a very important point. First, because satisfaction studies aim not only to measure quality from the user viewpoint, but also to highlight practical elements that can be modified to improve quality. Second, questionnaires that are perceived to have content validity are needed to generate interest in results among health professionals and provide an incentive for changes in their approach to their jobs. Third, the tendency of health professionals to develop "home-made" questionnaires, and their reluctance to use validated questionnaires developed elsewhere, can be countered if questionnaire items are perceived as relevant. Dimensions of the questionnaire Each dimension comprises items exploring both technical aspects of care (i.e. equipment, competence, accessibility, continuity, compliance, pain management, waiting and consultation time...) and interpersonal aspects of care (i.e. information, decision sharing, attitude...). These aspects are both predictors of patient opinion on care and services [ 22 , 23 , 37 ], because implementation of appropriate technical medical strategies is necessary, but not sufficient, to achieve the desired outcome. Good management of the human relationship is also needed because, as Donabedian remarks, "the interpersonal process is the vehicle by which technical care is implemented and on which its success depends" [ 1 ]. According to this author, technical and interpersonal performances are the first circle around the "bull's eye" of the "quality of care" target.
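As an illustration of the adjusted comparison reported in Table 3, here is a minimal sketch using Python's statsmodels rather than the SAS procedures actually used by the authors; the synthetic data and column names are placeholders, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 829  # analysis sample size reported in Table 3
# Synthetic stand-in data: one row per respondent with the overall 0-100
# score and the background factors retained in the model (names hypothetical).
df = pd.DataFrame({
    "overall_score": np.clip(rng.normal(83, 14, n), 0, 100),
    "department": rng.choice(list("ABCDEFGHIJ"), n),
    "age": rng.integers(18, 90, n),
    "life_satisfaction": rng.integers(1, 8, n),  # 7-point item
    "prev_consult": rng.choice(["first", "2-3", "4-5", ">5"], n),
})

# Department effects adjusted for the three case-mix variables
model = smf.ols("overall_score ~ C(department) + age + life_satisfaction"
                " + C(prev_consult)", data=df).fit()
print(anova_lm(model, typ=2))  # per-term F tests, analogous to Table 3
```

Between-department comparisons would then rest on the adjusted department coefficients rather than on raw mean scores.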
The most important dimension explaining outpatient opinion of hospital quality is the actual consultation with the physician, representing half the items in the tool. This is consistent with other generic patient questionnaires on satisfaction with ambulatory care, which also comprise a majority of items related to the medical intervention [ 17 , 19 , 21 , 26 , 29 , 37 ]. No independent subscales regarding specific aspects of the patient-physician encounter (i.e. communication, professional competence, interpersonal skills...) were identified here, although they have been regularly identified by authors developing GP satisfaction questionnaires [ 5 , 17 , 29 , 37 , 39 , 40 ]. This could be explained by the fact that, as hypothesized, the expectations of outpatients with respect to hospital care differ from expectations regarding primary care. Possibly patients have different needs and expectations according to the type of consultation, hospital specialists generating more mixed expectations: because the specific technical competence of hospital specialists predominates, patients have greater difficulty in dichotomizing doctors' skills into "affective" and "technical" dimensions [ 23 , 41 , 42 ], whereas "affective" qualities have a predominant role in primary care [ 15 , 16 , 43 , 44 ]. This is corroborated by the fact that generic questionnaires designed to evaluate hospital care (inpatient or outpatient) most often do not identify such human versus technical dimensions [ 8 , 10 , 26 , 45 , 46 ]. The three other dimensions ('contact-appointments', 'reception-facilities' and 'waiting time') are all related to organizational, non-medical aspects of care. These dimensions are classically identified in other generic questionnaires [ 17 , 18 , 21 , 23 , 28 , 29 , 40 , 42 ]. Comparison of the two factorial structures shows stability for all dimensions except 'reception-facilities' and 'waiting time'. From a strictly psychometric viewpoint, these two dimensions, both exploring events occurring just before the consultation itself, could arguably be pooled; in terms of care quality, however, they can pinpoint independent improvement measures, and calculating two different scores may improve the probability of highlighting the impact of such measures. Differences between departments and role of background factors It was shown that satisfaction scores were strongly related to the consultation department, regardless of outpatient, physician and care-provision characteristics. These results suggest that this measure is more sensitive to levels of department performance than to patient profile or to modes of consultation, as shown elsewhere [ 47 ]. It is therefore important that each department should identify its weak points to implement specific targeted actions to improve care quality. As in numerous studies, it was observed that older patients have a higher opinion of the care provided than others [ 7 , 23 , 48 ]. For several authors, this contributes to the construct validity of satisfaction questionnaires [ 41 ]. The same was observed for patients with multiple contacts with a department [ 10 ]. This could be explained by a better match between expectations and experience for repeat consulters, dissatisfaction during a first contact leading patients to consult elsewhere. The strong relationship between overall satisfaction with life and opinion on care expresses the influence of the individual affective disposition trait (i.e.
general tendency of an individual to be optimistic or pessimistic), which also influences job satisfaction, a concept very close to patient satisfaction with care [ 49 ]. Other studies have found relationships between satisfaction and variables strongly associated with the perception of overall quality of life, like mental health status and health-related quality of life [ 12 , 24 ]. The influence of these three background factors suggests the need to adjust patient satisfaction scores for these three variables (i.e. patient age, number of contacts and satisfaction with life) when comparing performances between departments or measuring performance over time within departments [ 31 ]. Impact of data collection method For patients completing the questionnaire immediately after consultation in the hospital, satisfaction estimates were higher than in the case of home completion, in spite of procedures to preserve anonymity and confidentiality at the hospital. Little data exists on the impact of the place of completion for self-administered questionnaires on satisfaction with consultation: two studies concluded that patients express less satisfaction when the questionnaire is completed at home rather than in the medical facility [ 13 , 50 ], and one concluded that there was no difference according to data collection method, but lacked power because of a small sample size [ 51 ]. This could be interpreted as an over-estimation of patient satisfaction in the case of completion in the facility, patients being more prone to express their real opinion when they have more time to consider the consultation and are safely back home [ 13 ]. Moreover, the response rate in the hospital completion group was relatively low (57%), reflecting both refusal to participate or inability to respond, and reluctance to answer a satisfaction questionnaire immediately after consultation because of a long waiting time beforehand, or because a relative, an ambulance or a taxi was waiting to take the patient home. It could be concluded that completion at home may be better than immediately after consultation. In the present study, no difference was observed between respondents without reminder and respondents only after reminder(s). This result is in agreement with other studies assessing inpatient satisfaction [ 9 , 52 ]. It could be concluded that reminders are not necessary to produce non-biased data. Limitations This work has several limitations. First, the overall response rate only reached 65%, despite reminders sent to patients receiving mailed questionnaires at home. However, unlike other studies, the response rates calculated did not exclude patients unable to respond for medical reasons (i.e. who were very ill or did not understand French) or homeless patients giving an invalid address (shelter...). Second, non-respondents differed from respondents regarding two background factors influencing satisfaction levels, with over-sampling of less-satisfied subjects in the respondent group (young patients and first-time consulters). There are also differences in participation rates between departments that could lead to over-estimating real differences between departments, because the more satisfied outpatients within each department may have been excluded. Third, validation is a continuous process and further studies are required to confirm these first results. The experimental nature of these studies may have induced bias in questionnaire responses.
There is thus a need to replicate the findings using confirmatory statistical methods (IRT or structural equation models, for example) on data from non-experimental, routine studies. Conclusion A good estimation of patient opinion on hospital consultations can be obtained with these two questionnaires. When comparing performances between departments, or within the same department over time, scores need to be adjusted for the three variables that influence satisfaction independently of department (patient age, previous consultations in the department and overall satisfaction with life score). Mail-back completion of the questionnaire at home seemed preferable to completion in the consultation facility immediately after the consultation. Reminders are not necessary to produce non-biased data. Authors' contribution IG – initiation of the research, supervision of the project and drafting the manuscript; SV – coordination of the 2 studies, participation in the interpretation of the results and revision of the draft paper; CDS – performing statistical analyses; PD and PR – participation in the conception, design and coordination of the research; BF – participation in the interpretation of the results, supervision of the statistical analysis and revision of the draft paper. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC516447.xml |
555595 | Cross-cultural adaptation of the VISA-A questionnaire, an index of clinical severity for patients with Achilles tendinopathy, with reliability, validity and structure evaluations | Background Achilles tendinopathy is considered to be one of the most common overuse injuries in elite and recreational athletes, and the recommended treatment varies. One factor that has been stressed in the literature is the lack of standardized outcome measures that can be used in all countries. One such standardized outcome measure is the Victorian Institute of Sports Assessment – Achilles (VISA-A) questionnaire, which is designed to evaluate clinical severity in patients with Achilles tendinopathy. The purpose of this study was to cross-culturally adapt the VISA-A questionnaire to Swedish, and to perform reliability, validity and structure evaluations. Methods Cross-cultural adaptation was performed in several steps including translations, synthesis of translations, back translations, expert committee review and pre-testing. The final Swedish version, the VISA-A Swedish version (VISA-A-S), was tested for reliability on healthy individuals (n = 15) and patients (n = 22). Tests for internal consistency, validity and structure were performed on 51 patients. Results The VISA-A-S had good reliability for patients ( r = 0.89, ICC = 0.89) and healthy individuals ( r = 0.89–0.99, ICC = 0.88–0.99). The internal consistency was 0.77 (Cronbach's alpha). The mean [95% confidence interval] VISA-A-S score in the 51 patients (50 [44–56]) was significantly lower than in the healthy individuals (96 [94–99]). The VISA-A-S score correlated significantly (Spearman's r = -0.68) with another tendon grading system. Criterion validity was considered good when comparing the scores of the Swedish version with the English version in both healthy individuals and patients. The factor analysis gave the factors pain/symptoms and physical activity. Conclusion The VISA-A-S questionnaire is a reliable and valid instrument and comparable to the original version. It measures two factors: pain/symptoms and physical activity , and can be used in both research and the clinical setting. | Background Achilles tendinopathy is a common overuse injury, especially among athletes involved in activities that include running and jumping [ 1 - 4 ]. Several studies report the incidence of Achilles tendon disorders in runners to be 6–18% of all injuries [ 3 , 5 , 6 ]. Most commonly afflicted are middle-aged men, but Achilles tendinopathy occurs in both men and women at various ages [ 2 , 4 , 7 - 9 ]. Common complaints are pain during and after physical activity, tenderness on palpation and morning stiffness [ 7 - 10 ]. Symptoms usually subside with decreased physical activity, but tend to return as soon as physical activity is increased [ 2 ]. With increased severity patients may also have pain during daily functional activities [ 7 , 10 ]. Achilles tendinopathy causes many patients to significantly decrease their physical activity level, with a potentially negative impact on their overall health and general well-being [ 2 , 3 , 7 ]. Despite the high incidence of Achilles tendon disorders there have been few randomized treatment studies in patients with Achilles tendinopathy [ 8 , 11 - 16 ]. It is also difficult to compare the results of studies as outcome measures vary widely.
A universally used clinical outcome measure of symptoms and function would help comparisons between treatments in various clinics and research studies, and could also form the basis of criteria for various treatments. Robinson et al. [ 17 ] developed a questionnaire as an index of clinical severity of Achilles tendinopathy: the Victorian Institute of Sports Assessment – Achilles questionnaire (VISA-A). The VISA-A questionnaire is an easily self-administered questionnaire that evaluates symptoms and their effect on physical activity. It can be used to compare different populations with Achilles tendinopathy, and facilitates comparisons between studies. The VISA-A score has already been used to evaluate the outcome of treatment [ 16 ]. In the clinic, the VISA-A questionnaire can be used to determine the patient's clinical severity and provide a guideline for treatment, as well as for monitoring the effect of treatment. In order to use the VISA-A questionnaire with non-English-speaking patients it needs to be translated, culturally adapted and properly evaluated [ 18 ]. Therefore, the purpose of this study was to translate and culturally adapt the English VISA-A questionnaire to Swedish, to perform reliability and validity evaluations of the Swedish version, and to analyze the factor structure of the questionnaire. Methods To establish good face validity and content validity, the translation and cultural adaptation of the VISA-A questionnaire into Swedish was performed in several steps [ 18 ]. The English version was translated into Swedish independently by three people. All three were working in the medical field and had English as a second language. The three translations were synthesized into one Swedish version by a panel of experts consisting of four physical therapists who specialize in musculoskeletal disorders. The back translations of the Swedish version into English were performed by another three people. Two of the back translators were in the medical field (a sports medicine doctor and a physical therapist), and the third was Swedish but has lived in the USA for many years. The panel of experts (same as above) then compared the original version with the back translations. The panel of experts consolidated the various versions into one pre-final version of the VISA-A questionnaire – Swedish version (VISA-A-S). The pre-VISA-A-S was pilot tested on five patients and five healthy individuals. After pilot testing, question 1 was made clearer by entering the minutes in the boxes. The final version of the VISA-A-S (Additional file 1 ) was tested on both healthy individuals and patients with a diagnosis of Achilles tendinopathy. All subjects were given written information about the purpose and procedure of the study, and informed consent was obtained. Ethics approval was obtained from the Ethics Committee at the Medical Faculty, Gothenburg University, Sweden. For the test-retest evaluation, we recruited a convenience sample of 15 healthy individuals (Table 1 ), aged 20–40 years. They completed the VISA-A-S questionnaire three times within two weeks. The questions were answered with respect to their right Achilles tendon. Table 1 Summary of study populations Mean, standard deviation (SD) and 95% confidence interval (CI) for age, duration of symptoms and VISA-A-S score for the study populations.
Age (years) Duration of symptoms (months) VISA-A-S Score Mean SD 95% CI Mean SD 95% CI Mean SD 95% CI Healthy (n = 15, 3 F, 12 M) 29.5 4.3 27.1 – 31.9 N/A N/A N/A 96 4 94 – 99 Reliability Group Patients (n = 22, 8 F, 14 M) 45.4 15.5 38.6 – 52.3 55.5 134.7 6.3 – 57.4 50 24 40 – 61 Validity Group Patients (n = 51, 19 F, 32 M) 43.1 14.5 39.0 – 47.2 31.8 90.8 6.3 – 57.4 50 23 44 – 56 (N/A = not applicable, F = females, M = males) Fifty-one patients with Achilles tendinopathy (Table 1 ), mean age 43.1 years (95% CI 39.0–47.2), were included in the reliability evaluation for internal consistency and in the validity evaluations. Twenty-two (Table 1 ) of the 51 patients also participated in a test-retest evaluation. Bilateral symptoms were reported in 15 of the 51 patients (7 of the 22 in the test-retest group). The patients were recruited from 11 physical therapy clinics throughout Sweden. The inclusion criteria were the same as in the original study [ 17 ]. The subjects had to be older than 18 and be able to give written consent. The subjects had to have a diagnosis of Achilles tendinosis, paratendinitis, or partial rupture, with or without a retrocalcaneal or Achilles bursitis. The diagnosis was based on patient history and the physical therapists' clinical findings. Subjects with total Achilles tendon rupture and pregnant or nursing women were excluded. At their first physical therapy visit, all patients completed questionnaires regarding their injury, physical activity [ 19 ] and tendon injury according to Stanish [ 1 ], and one VISA-A-S questionnaire for each leg. For the patients with bilateral symptoms, the side with the lowest VISA-A-S score, or, if the score was equal, the side with the longest duration of symptoms, was chosen for the evaluations. The 22 patients who participated in the test-retest evaluation completed the VISA-A-S questionnaire a second time within a week of the first visit. Construct validity of the VISA-A-S was tested according to the original article on the VISA-A English version [ 17 ]. The results from the 51 patients who completed the VISA-A-S questionnaire were compared with the results from the tendon grading system by Stanish et al. (1984). The results from the VISA-A-S questionnaire for patients with Achilles tendinopathy were also compared with the results of healthy individuals. Criterion validity of the VISA-A-S questionnaire was evaluated by comparing the results of our patients (n = 51) with the results of the two patient groups, the non-surgical group (n = 45) and the surgical group (n = 14), in the original article by Robinson et al. (2001). The results of the healthy individuals in our study were also compared with the results for the healthy individuals in the original study. The structure of the VISA-A-S questionnaire was evaluated with a factor analysis. Statistical analysis All data were analysed with SPSS 11.5 for Windows. Descriptive data are reported as mean, standard deviation and 95% confidence interval. Test-retest data were analysed by Pearson's r , as for the VISA-A English version [ 17 ]. The Intraclass Correlation Coefficient (ICC) and the Wilcoxon paired test for non-parametric data were also calculated for the test-retest data, since the questionnaire presents ordinal data. Internal consistency was assessed by calculation of Cronbach's alpha. Comparison of the VISA-A-S with the Stanish et al. (2000) tendon grading system was performed by calculating Spearman's rank correlation coefficient for non-parametric data.
VISA-A-S scores for the healthy group and the patient group were compared using the Mann-Whitney U test. For comparison of the VISA-A-S with the VISA-A, a two-sample t-test was used, since only means and standard deviations, and no raw data, were available from the results of the original study [ 17 ]. The level of significance was set at p < 0.05. A principal axis factoring with varimax rotation, eigenvalue over 1.0, was applied for evaluation of the structure of the questionnaire. Results Table 2 summarizes the reliability evaluation of the VISA-A-S questionnaire. The VISA-A-S showed good test-retest reliability for healthy individuals (Pearson's r > 0.88, ICC > 0.88) and for patients (Pearson's r = 0.89, ICC = 0.89). There was no significant difference between the scores on test days 1 and 2. When analyzing each question separately (Table 3 ), the results showed good reliability. For questions 3 and 6, however, there were significant differences (p = 0.007 and p = 0.03 respectively) between the two test occasions. The internal consistency for the 8 questions in the VISA-A-S was 0.77, as measured with Cronbach's alpha. Table 2 Summary of reliability tests of the VISA-A-S score Pearson's r ICC Wilcoxon Cronbach's alpha Healthy (n = 15) Test-retest 1–2 0.88 0.88 0.07 2–3 0.99 0.99 0.32 1–3 0.90 0.90 0.07 Patients (n = 22) Test-retest 1 week 0.89 0.89 0.051 Patients (n = 51) Internal consistency 0.77 Table 3 Test-retest scores for the 8 questions in the VISA-A-S score (patients, n = 22) Pearson's r ICC Wilcoxon Question 1 0.74 0.72 0.184 Question 2 0.81 0.81 0.308 Question 3 0.89 0.86 0.007 Question 4 0.78 0.78 0.269 Question 5 0.71 0.71 0.793 Question 6 0.68 0.66 0.032 Question 7 0.87 0.86 0.577 Question 8 0.79 0.79 0.721 The VISA-A-S score correlated significantly with the tendon grading system by Stanish et al. (2000) (Spearman's r = -0.68; p < 0.01). The patients with Achilles tendinopathy had a significantly lower score (p < 0.0001) than the healthy individuals. The mean VISA-A-S score for patients in the present study was significantly (p < 0.01) lower than the mean VISA-A score for the non-surgical group in the original article by Robinson et al. (2001). When comparing the VISA-A-S score for patients in the present study with the VISA-A score of the surgical group in the original study [ 17 ], there was no significant difference (p > 0.2). There was also no significant (p > 0.2) difference between the healthy individuals' scores on the VISA-A-S and those of the healthy individuals in the original article [ 17 ]. The factor analysis revealed two factors of importance (eigenvalue over 1.0): pain/symptoms (questions 1–6) and physical activity (questions 7 and 8). Discussion A widely used clinical outcome measure for patients with Achilles tendinopathy would help comparisons between treatments in various clinics and research studies. The VISA-A questionnaire is an easily self-administered questionnaire. Since research is performed in various countries, it is important to properly translate, culturally adapt and evaluate instruments such as questionnaires in order to be able to compare results [ 18 ]. This study demonstrates that the Swedish version of the VISA-A questionnaire has good reliability and validity. With careful translation and cultural adaptation, we established good face validity and content validity . The test-retest reliability and the internal consistency were considered good.
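The reliability statistics reported above are standard and can be reproduced with scientific Python tools; a minimal sketch follows, with hypothetical test-retest totals (not the study's raw data). The paper does not state which ICC variant was used, so ICC(2,1) is shown as one common choice for test-retest data.

```python
import numpy as np
from scipy import stats

def icc_2_1(x1, x2):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    One common choice; the study's exact ICC flavour is not stated."""
    Y = np.column_stack([x1, x2])
    n, k = Y.shape
    grand = Y.mean()
    ms_rows = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((Y.mean(axis=0) - grand) ** 2) / (k - 1)
    ss_err = np.sum((Y - Y.mean(axis=1, keepdims=True)
                       - Y.mean(axis=0, keepdims=True) + grand) ** 2)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)

# Hypothetical VISA-A-S totals on two occasions (not the study's raw data)
test = np.array([50, 62, 41, 77, 55, 68, 90, 33, 59, 72])
retest = np.array([52, 60, 45, 75, 58, 65, 88, 35, 61, 70])
print(stats.pearsonr(test, retest))   # test-retest correlation
print(icc_2_1(test, retest))          # intraclass correlation
print(stats.wilcoxon(test, retest))   # paired non-parametric difference
# Validity analyses would use, analogously:
# stats.spearmanr(visa_scores, stanish_grades)   # construct validity
# stats.mannwhitneyu(patient_scores, healthy_scores)
```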
A significant and strong correlation between the VISA-A-S and the tendon grading system by Stanish et al. (2000) indicates good construct validity . The comparison of the results of the patients and healthy individuals in the present study with the results of the non-surgical group, the surgical group and the healthy individuals, as reported in the original article by Robinson et al. (2001), indicates good criterion validity . The factor analysis gave the two factors pain/symptoms (questions 1–6) and physical activity (questions 7 and 8), strongly confirming that the questionnaire is valid for evaluating the patient's symptoms and their effect on physical activity. The factor analysis and an internal consistency of 0.77, as measured by Cronbach's alpha, indicate that no question should be excluded. We did not include a separate group of pre-surgical patients as in the original study by Robinson et al. (2001), because advances in the non-surgical rehabilitation of patients with Achilles tendinopathy during recent years have resulted in a markedly reduced number of patients awaiting surgery. The patients in the present study can therefore be viewed as representing patients from both groups (surgical and non-surgical) used in the original study [ 17 ]. This would explain why the patient group in the present study had a significantly lower score when compared to the non-surgical group in the original article [ 17 ]. The one-week interval between the two tests in the test-retest reliability part was somewhat long. This interval was chosen because it is the usual time that lapses between a patient's first and second visit with the physical therapist. This may explain why two of the questions differed significantly between test days: during this week the patients had met with their physical therapist, which may have caused them to change their view of their symptoms and physical ability. Good criterion validity indicates that the Swedish version and the English version of the VISA-A questionnaire evaluate the same aspects of clinical severity in patients with Achilles tendinopathy. It can thus be expected that similar scores in the two versions indicate the same index of severity in patients with Achilles tendinopathy. The 11 physical therapy clinics throughout Sweden which participated in this study all reported that the questionnaire was easily administered and required a minimum of communication between the physical therapist and the patient. The physical therapists perceived the questionnaire as a good clinical tool, useful when treating patients with Achilles tendinopathy. A review of the literature with regard to treatment of Achilles tendinopathy yielded only a few randomized treatment trials [ 8 , 11 - 16 ]. There are, however, prospective and retrospective cohort studies as well as case studies [ 4 , 20 - 25 ]. Comparing the results of all these studies is difficult since the outcome measures vary. Questionnaires like the VISA-A and VISA-A-S, which give an index of the clinical severity for patients with Achilles tendinopathy and are easily administered and easy to fill out, could be very helpful in the future. Paavola et al. (2002) noted that few randomized intervention studies had a follow-up longer than twelve months. The VISA-A and VISA-A-S questionnaires can be filled out easily and quickly and require a minimum of assistance, and could therefore be very helpful for long-term follow-ups.
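The criterion-validity comparison described above relied on a t-test computed from published summary statistics alone, and scipy exposes exactly this. In the sketch below, the present study's patient summary (mean 50, SD 23, n = 51, from Table 1) is taken from the text, while the comparison group's mean and SD are placeholders to be replaced with the values published by Robinson et al. (2001).

```python
from scipy.stats import ttest_ind_from_stats

# Patients in the present study, from Table 1: mean 50, SD 23, n = 51.
# mean2/std2 below are hypothetical placeholders; substitute the values
# published for the comparison group in Robinson et al. (2001).
t_stat, p_value = ttest_ind_from_stats(mean1=50, std1=23, nobs1=51,
                                       mean2=64, std2=20, nobs2=45)
print(t_stat, p_value)
```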
The VISA-A questionnaire has successfully been used as an outcome measure in a randomized, double-blind, placebo-controlled treatment trial [ 16 ]. We are currently evaluating the VISA-A-S questionnaire's responsiveness over time in a randomized treatment study for patients with Achilles tendinopathy.

Conclusion

This study carefully followed the recommended steps for cross-cultural adaptation and evaluated the reliability and validity of the new version. The factor analysis, which identified the two factors pain/symptoms and physical activity, reinforces that the VISA-A questionnaire can be used as an index of clinical severity. The present study culturally adapts and validates the VISA-A-S questionnaire (Additional file 1 ) for the Scandinavian countries, and the adapted version is comparable to the original.

Competing interests

The author(s) declare that they have no competing interests.

Authors' contributions

KGS conceived of the study, participated in its design, performed data acquisition, analyzed and interpreted the data and drafted the manuscript. RT conceived of the study, participated in its design, interpreted the data and helped to draft the manuscript. JK participated in the study design, interpreted the data and helped to draft the manuscript. All authors read and approved the final manuscript.

Pre-publication history

The pre-publication history for this paper can be accessed here:

Supplementary Material

Additional File 1: VISA-A-S questionnaire. The Swedish version of the VISA-A questionnaire in Microsoft Word format. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC555595.xml |
539339 | Computation Provides a Virtual Recording of Auditory Signaling | null | A small rodent rustles through a field in the still night, making just enough noise to betray its location to a circling barn owl. A female frog sits on the bank of a pond amid a cacophony of courting bullfrogs, immune to the mating calls of all but her own species. Thanks to a sophisticated sensory processing system, animals can cut through a vast array of ambient auditory stimuli to extract meaningful information that allows them to tell where a sound came from, for example, or whether they should respond to a particular mating call.

An acoustic stimulus arrives at the ear as sound energy in the form of air pressure fluctuations. The sound signal triggers oscillations in mechanical resonators such as the eardrum and hair sensilla. These oscillations convert sound energy into mechanical energy, opening ion channels in auditory receptor cells and producing electrical currents that change the neuron's membrane potential. This, in turn, produces the action potential that carries the sound signal to the brain. This multistep signal transduction process takes less than a millisecond, but exactly how it occurs at this time scale remains obscure. Direct measurements of the individual steps can't be made without destroying the mechanical structure; consequently, most measurements are taken downstream of the mechanical oscillations at locations like the auditory nerve. Likewise, the temporal resolution of most stimulus–response trials is far too imprecise to analyze processing at the sub-millisecond level.

Given these experimental limitations, Tim Gollisch and Andreas Herz turned to computational methods and showed that it's possible to reveal the individual steps of complex signal processing by analyzing the output activity alone. Using grasshopper auditory receptors as models, the authors identified the individual signal-processing steps from eardrum vibrations to electrical potential within a sub-millisecond time frame and propose a model for auditory signaling. The crucial step in their study is the search for those sets of inputs (stimuli) that would yield a given fixed output (response). To get the parameters to describe the final output, the authors generated a sound stimulus (two short clicks) and recorded axon responses of receptor neurons in a grasshopper auditory nerve. From these recordings, they defined the fixed output as the probability of a receptor neuron firing a single action potential. They then asked how the various parameters, which were associated with different time scales, could produce the same predefined firing probability.

[Figure: A schematic representation of auditory signaling]

By varying the stimulus parameters and comparing the obtained values within their mathematical framework—and making certain assumptions, for example, that the steps signal through a "feedforward" process—they could then tease out the individual processing steps that contribute to the desired output within the required time frame. With this approach, Gollisch and Herz disentangled individual steps of two consecutive integration processes—which they conclude are the mechanical resonance of the eardrum and the electrical integration of the receptor neuron—down to the microsecond level. Surprisingly, this fine temporal resolution is achieved even though the neuron's action potentials jitter by about one millisecond.
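To make the cascade idea concrete, here is a toy simulation. It is entirely illustrative, with invented parameters, and is not the authors' model: a two-click stimulus drives a damped mechanical resonance, whose squared output is integrated electrically and passed through a saturating nonlinearity to give a spike probability; a small scan then looks for click-amplitude pairs that yield the same fixed output, mimicking the iso-response search described above.

```python
# Toy feedforward cascade: clicks -> damped resonance -> leaky integration
# -> saturating output. All parameters are invented for illustration.
import numpy as np

dt = 1e-6                              # 1-microsecond steps: the time scale at issue
t = np.arange(0.0, 2e-3, dt)           # 2 ms simulation window

def firing_prob(a1, a2, gap=0.5e-3, f0=5000.0, tau_mech=0.3e-3, tau_el=0.2e-3):
    """Spike probability for two clicks of amplitudes a1, a2 separated by `gap`."""
    stim = np.zeros_like(t)
    stim[0] = a1 / dt                  # click 1 as a discrete impulse
    stim[int(gap / dt)] = a2 / dt      # click 2, 0.5 ms later
    # stage 1: damped mechanical resonance (eardrum impulse response)
    h = np.exp(-t / tau_mech) * np.sin(2 * np.pi * f0 * t)
    x = np.convolve(stim, h)[: t.size] * dt
    # stage 2: square (energy) and leaky electrical integration
    g = np.exp(-t / tau_el)
    v = np.convolve(x ** 2, g)[: t.size] * dt
    # stage 3: saturating nonlinearity -> probability of one spike
    return 1.0 - np.exp(-2e5 * v.max())

# iso-response search: which (a1, a2) pairs give the same fixed output?
for a1 in np.linspace(0.0, 1.0, 5):
    a2_grid = np.linspace(0.0, 1.0, 41)
    probs = np.array([firing_prob(a1, a2) for a2 in a2_grid])
    best = a2_grid[np.argmin(np.abs(probs - 0.7))]   # closest grid point
    print(f"a1={a1:.2f} -> a2={best:.2f} gives p(spike) closest to 0.7")
```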
Thus, using just the final output, this approach can extract the temporal details of the individual processes that contribute to the chain of auditory transduction events. While the method is best suited to deconstructing unidirectional pathways, the authors suggest it could also help separate "feedforward" from feedback signaling components, especially when feedback is triggered by the final steps. And since many sensory systems share the same basic signal-processing steps, the method is likely applicable to a broad range of problems. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC539339.xml |
549194 | Narrative Exposure Therapy as a treatment for child war survivors with posttraumatic stress disorder: Two case reports and a pilot study in an African refugee settlement | Background Little data exist on the effectiveness of psychological interventions for children with posttraumatic stress disorder (PTSD) resulting from exposure to war or conflict-related violence, especially in non-industrialized countries. We created KIDNET, a child-friendly version of Narrative Exposure Therapy (NET), and evaluated its efficacy as a short-term treatment for children. Methods Six Somali children aged 12–17 years suffering from PTSD and resident in a refugee settlement in Uganda were treated with four to six individual sessions of KIDNET by expert clinicians. Symptoms of PTSD and depression were assessed pre-treatment, post-treatment and at nine months follow-up using the CIDI Sections K and E. Results Important symptom reduction was evident immediately after treatment, and treatment outcomes were sustained at the 9-month follow-up. All patients completed therapy, reported functioning gains and could be helped to reconstruct their traumatic experiences into a narrative with the use of illustrative material. Conclusions NET may be safe and effective for treating children with war-related PTSD in the setting of refugee settlements in developing countries. |

Background

In the wars and armed conflicts of the past decades, children have been heavily exposed to war and conflict-related violence. The United Nations High Commissioner for Refugees (UNHCR) recently stated that 43% of its population of concern are children under the age of 18 [ 1 ]. Mental health experts are also becoming more aware that war and conflict-related event types are among those that may result in children developing disorders of the stress spectrum, including posttraumatic stress disorder (PTSD) [ 2 - 5 ]. An increasingly important field of research addresses the wide-ranging negative sequelae that children and adolescents in modern post-conflict populations such as Iraq, Kuwait, Bosnia, Rwanda, Croatia, South Africa and others may develop consequent to war and conflict violence [ 6 - 17 ]. Current research emphasis is now more than ever being placed on developing appropriate interventions that address the needs of survivors experiencing a range of symptoms after trauma exposure [ 18 - 29 ]. Given the pervasiveness of war and conflict-related trauma, especially in resource-poor countries, interventions tailored to suit the circumstances of the overwhelming number of such survivors are especially in demand. However, treatment outcome studies in this field are still few. Many such interventions are derived from interventions initially developed for adults, such as cognitive behavioural therapy. Cognitive-behavioural interventions have been successfully used with school children exposed to violence, after single-incident stressors, after natural disasters, and to treat sexually abused children [ 18 , 30 - 33 ]. Other interventions currently in use with children include psycho-pharmacological treatments, play therapy, psychological debriefing and testimony therapy [ 17 , 20 , 23 , 26 , 34 - 39 ]. It is notable that most approaches have not yet been tested within post-conflict populations of children and adolescents living in non-industrialized countries. Narrative Exposure Therapy (NET) is a treatment approach that was developed for the treatment of PTSD resulting from organized violence.
vivo developed Narrative Exposure Therapy as a standardized short-term approach based on the principles of cognitive behavioural exposure therapy, adapting the classical form of exposure therapy to meet the needs of traumatized survivors of war and torture [ 40 - 42 ]. In exposure therapy, the patient is requested to repeatedly talk about the worst traumatic event in detail while re-experiencing all emotions associated with the event. In the process, the majority of patients undergo habituation of the emotional response to the traumatic memory. Together with the reconstruction of the traumatic memory, this habituation leads to a remission of PTSD symptoms. As most victims of organized violence have experienced many traumatic events, it is often impossible to identify the worst event before treatment. To overcome this difficulty, in NET the patient constructs a narration of his or her whole life from early childhood up to the present date while focusing on the detailed report of traumatic experiences.

The focus of NET is therefore twofold. As with exposure therapy, one goal is (1) to reduce the symptoms of PTSD by confronting the patient with memories of the traumatic event. However, recent theories of PTSD and emotional processing suggest that the habituation of the emotional processes is only one of the mechanisms that improve symptoms [ 43 ]. Other theories suggest that the distortion of the explicit autobiographic memory of traumatic events leads to a fragmented narrative of the traumatic memories. Thus, (2) the reconstruction of autobiographic memory into a consistent narrative should be used in conjunction with exposure therapy. Emphasis is put on the integration of emotional and sensory memory within the autobiographic narrative.

Narrative Exposure Therapy was initially developed for adults, but has been adapted for use with children older than 8 years [ 43 , 44 ]. In narrative exposure procedures, children are asked to describe what happened to them in great detail, paying attention to what they experienced in terms of what they saw, heard, smelled and felt, the movements they recall, and how they felt and thought at the time. Initially, the session is distressing, but as it is long enough to allow habituation, distress levels diminish towards the end and more and more details are recalled. After only four sessions of exposure, scores on intrusion and avoidance may drop significantly [ 43 ].

This study investigated the effectiveness of NET when applied to child refugees. The investigation was carried out in the context of the Nakivale mental health project, which aimed at the examination of mental health symptoms as well as the evaluation of different treatment approaches in the Nakivale refugee settlement in Uganda [ 45 ]. The first aim of this paper is to present and illustrate the procedure of KIDNET as a child-friendly treatment approach for traumatized children in post-conflict populations. In addition, we present the results of a small-sample pilot test to allow the examination of the feasibility and potential efficacy of the method in a field context.

Methods

Ethical approval

The study protocol was approved by the Ethical Review Board of the University of Konstanz and by the Ugandan National Council for Science and Technology, Kampala.

Participants

Six child refugees (ages 13–17 years; 3 girls and 3 boys) of Somali ethnic origin, screened as having PTSD in a larger epidemiological survey in Uganda's Nakivale refugee settlement, took part [ 45 ].
Instruments and procedure

As instruments need to go through a work-intensive process of translation and validation, and interpreters as well as interviewers need extensive training so that instruments are properly applied, we used the same already-validated instruments as in the adult epidemiological survey [ 45 ]. The Posttraumatic Diagnostic Scale (PDS) and the Hopkins Symptom Checklist-25 (HSCL) were administered face-to-face by trained, local, non-professional interviewers as assisted self-report interviews in order to screen for posttraumatic stress disorder and co-morbid depression [ 46 , 47 ]. The six children were identified as having PTSD according to the PDS. These diagnoses for both PTSD and depression were validated using the Composite International Diagnostic Interview (CIDI) version 2.1 [ 48 ], Sections K and E respectively, administered by expert clinicians with the help of extensively trained interpreters within two weeks of the initial interview. This was done within the context of a clinical interview during which the clinicians clarified the questions for the children. The clinicians employed child-appropriate language to make sure the children understood the questions. In all six cases, the initial diagnosis was confirmed. All six were assigned to a Narrative Exposure Therapy (KIDNET) treatment group, with treatment being offered by expert clinicians experienced in the use of NET. The post-tests were also conducted by expert clinicians and trained interpreters, again using the CIDI.

All six child patients gave verbal assent for the screening. They were then formally offered individual treatment after diagnosis, along with a brief psycho-education describing the nature and prevalence of PTSD symptoms and what treatment would entail. A standard written rationale that had been developed for this purpose was used, the goal being to explain that PTSD-related symptoms and dysfunction are frequently consequent to multiple traumatic experiences. All six gave their assent for treatment, which only began after informed consent from the parents or guardians was granted. It was made clear that both assessment and treatment would be entirely voluntary, and no monetary or food-item inducements would be offered. In all cases, both the patients and their parents or guardians were relieved that treatment was offered. The patients were tested again with the CIDI approximately four weeks (post-test) and nine months (follow-up) after the end of treatment.

Treatment modality

Narrative Exposure Therapy treatment comprised 4 to 6 sessions, each lasting 1–2 hours. The treatment involved one-to-one sessions, with a clinician attending to a single child patient at a time. As the very first step of Session 1, the patient was requested to draw any picture that came to mind. Next, the patient was given a lifeline (a length of rope) and a selection of stones of varying characteristics and sizes, as well as fresh flowers of varying sizes and colours. He or she was then asked to construct his or her lifeline, outlining the major events in chronological order, using flowers for positive events and stones for negative events. When he or she was quite sure about the sequence and magnitude of events, the patient was then asked to make a drawing of this lifeline, with brief titles for each event. The narrative session then began, with the patient narrating the events of his or her life starting with his or her birth, with the aid of the lifeline and the drawings.
The patient used a representative object, which he or she moved at will to indicate where he or she had reached on the lifeline. In the following narration procedure, the participant constructed a detailed chronological account of his or her own biography in cooperation with the therapist. The therapist's task was to document the patient's autobiography, which was corrected with each subsequent reading. A special focus of the therapy was the transformation of the generally fragmented report of traumatic experiences into a coherent chronological narrative, and working through emotions, sensations and reactions relevant to the traumatic events. During the discussion of the traumatic life experiences, the therapist asked for current emotional, physiological, cognitive and behavioural reactions, while accompanying the patient back into the details and emotions surrounding each event and helping the patient to reconstruct the trauma memory. The participant was encouraged to relive these reactions and emotions while reporting the events. The discussion of a traumatic event was not terminated until a habituation of the emotional reactions presented and reported by the patient had occurred. During the session and in subsequent sessions, the testimony was read back to the patient, who was asked to correct, modify or add to it until a complete document of the patient's experiences had been produced. During the last session, the participant received a complete written document of his or her biography. The precepts of Narrative Exposure Therapy are described in detail in a manual [ 43 ].

The main innovation of KIDNET compared with the adult version of NET is the use of illustrative material such as a lifeline (usually a length of rope or string), stones and flowers, as well as coloured drawings and role-play, to help the child reconstruct the memories of his or her experiences. Unlike with adult NET thus far, the patients were encouraged to extend the narration beyond the present and to describe their hopes and aspirations for the future, mainly with the use of flowers. This was done at the end of therapy. These hopes and aspirations were included in the drawings, lifeline and narrations as an integral part of the document. The therapist also highlighted the length of rope still left over, to illustrate opportunities and possibly improved life circumstances, despite negative past experiences. The patient was requested to reconstruct his or her lifeline at the beginning of each session, after which narration was resumed. The therapist was also alert to detect any connection between the initial picture and the traumatic events in the storyline, especially the worst-ever event. As a final task, after handing over the document at the end of the last session, the therapist requested the child to draw any picture that came to mind, in order to compare this with the initial picture drawn.

Results

Two cases of Narrative Exposure Therapy

Case one – AWH

AWH is a very slim young person of middle height, with active uneasy eyes that never rest. He was interviewed in March 2003, when he was 15 years old, by a locally trained interviewer using the PDS. This PTSD diagnosis was confirmed within a two-week period by an expert using the CIDI, after which AWH was invited for therapy. He was very shy when he came. In fact, he approached the house, then ran back into the street. After some time, he slowly approached again. Both the therapist and the interpreter put in a lot of effort to make him feel welcome and comfortable.
As we later learnt, he lost both parents during the civil war in Somalia. This was without question his "worst event". He has lived by himself in the camp since the age of 14, but has fended for himself since the age of 9. He had no family in the camp or anywhere else, as far as he knew. At the time of his parents' death in Somalia, he recalled a younger brother and sister, but he had never heard of them again. He has been a registered refugee in Uganda since 1998. His initial drawing at the beginning of therapy shows the tiny lonely figure of a child, placed in the middle of the large white sheet of paper, nothing else. He named it "a boy". The beginning of NET was challenging, since he seemed to recall very little of his early years as a child in Somalia. He was also very economical with words, and upon being asked to place flowers for good events in his life, he found there was no event that would deserve a flower. He simply placed two stones on his lifeline: one for the day his parents were shot in front of him, one for the time he reached Nairobi as a refugee, a time when he had to struggle hard to stay alive in the absence of any aid. After NET 1 (session 1), he did recall his early years in quite some detail and even remembered some joyful events, for one of which he later placed a flower: 'the time I used to play football with my dad in the evenings in our compound'. After having witnessed the death of his parents and being able to escape through the back of the house, he was completely on his own. He escaped Mogadisho with a group of strangers, survived Nairobi by himself, eating left-overs at hotel garbage dumps, and finally smuggled himself into Uganda by hiding in the back of a bus. Excerpts of his story read:

I was born in Mogadisho, Somalia. I do not know my exact birth year, I think it is 1986. I grew up with both my parents. I have a sister who is 2 years younger called Halimo and a 4 years younger brother called Mohamed. We lived in a part of town called ... My mother had fair brown hair and skin. She was young and I loved her a lot. I was her first-born and her favourite. She even told me so. My father was of darker complexion. He was also a young man in those days. He was hard working. He had a shop close by in the market. He would usually leave in the morning and return home in the evening. Sometimes when he came home, he played with us in the evening. We played football together. Those were good times. I do not know how old I was then; I just remember that I was very young... When I remember those days I get sad. All these memories come back and I only know what I have lost. The years went by and I used to live like this until the war broke out. I don't remember the year, but I was still young... One day we fled from home in the late afternoon... We went with a car to a place called Bal'aad, about 30 km out of town... Eventually we went back home to Mogadisho. I must have been about 10 years by then. A few months later, the war reached us again. It was early in the morning. A group of about 10 civilian men came to our house. They were armed with guns... I stood very near to my parents. I was so scared. Suddenly I heard the sound of bullets. One of the soldiers had started shooting. The moment I saw that he pulled the trigger and heard the first bullet, I panicked. I started running. I felt such great fear. I ran inside the house and tried to hide myself behind a door in one of the rooms.
I was shivering, fearing, thinking, 'they will also come for me, they will come and kill me'. I still have a heart beat now, when I recall that moment. After some time it went quiet outside. I still stood behind the door, silent, not moving. After a while I slowly moved towards the window and peeped out. What I saw was terrible. My mother and my father had been hit by the bullets. They were both lying on the ground. My mother had fallen on top of my father. They both had blood on their clothes. My mother had blood on her face and her stomach. They were not moving anymore, they had died. Until that day, I had never seen a dead person. I felt horror. I was so afraid of them, shocked by what I saw. I only thought of running away, leaving this place. I escaped through the back of the house and jumped over the fence. This was the last time I have seen my parents and also the last time I had been in our home in Mogadisho... While fleeing, I joined strangers in the street. So many people were trying to flee. I simply ran with them. When these people reached their destination, they branched off from the road. It was night time by then. I was alone. I hated my life. I followed the road and finally fell asleep under a bush. I had given up about life by then. I felt like I had died as well. I knew about the danger of wild animals and lions, but I did not care... This is how I came to Uganda. When we reached Kampala I saw a group of Somalis and went to greet them. They took me in and I lived with them for a few weeks. They also showed me how to register as a refugee with UNHCR. I remember the day I came to Nakivale Refugee Camp. I was so surprised how people can live in a place like this. I stayed with the Somali family that found me in Kampala for about two years in the camp. Finally, Red Cross helped me to build my own house in 2000, I was about 14 years then. Since then I live alone. I started going to school when I came to Nakivale. I will graduate from P7 at the end of this year. I have learnt how to live by myself; I can do everything by myself. I never ask for help. No one can help me anyway. I have never heard about my brother and sister again. Whether they are still alive and if so, where I could find them. But now I am ready to look for them.

As mentioned before, he has been surviving on his own since the age of 9 years. This has probably led to his enormous shyness, but also to an amazing sense of self-reliance. 'I can do everything by myself. I never ask for help. Anyway who could help me?' There is also a strong feeling of sadness and loneliness around him that is very moving. Our local translator Haji, who is also trained as a NET therapist, was moved to free-flowing tears at several moments of AWH's life recall, especially when he spoke about his loneliness and desperation. AWH is a smart boy, so he frequently challenged the concept of therapy in the beginning: 'It won't make my parents come back, and it won't change my life situation'. AWH never really entered a state of expressing strong emotions like crying or anger during therapy. He said he could not cry, not even by himself. There were, however, visible signs of emotional processing, like tears and a strong heartbeat, especially during NET 1 and 2. AWH reported having strong PTSD symptoms when he started therapy, especially continuous nightmares and flashbacks, mainly related to the day his parents were killed. Re-experiencing actually increased, he said, once therapy was started. After NET 4 he talked of having fewer symptoms.
In his last session AWH talked about wanting to try and trace his family with the help of the Red Cross. He was especially interested in finding his two younger siblings. When the symptoms were summed according to the CIDI Section K, AWH had a sum score of 12 in the CIDI pre-test. This had decreased to 8 in the CIDI post-test immediately after the end of therapy, and was 9 at the nine-month follow-up. At the nine-month follow-up, AWH was looking well-dressed, and had put on both height and weight. He had completed his primary school education at the camp primary school and spoke confidently of his plans. He had also joined the camp soccer team and now played along with the group every evening, a significant behavioural change, given that he previously would not even talk or socialize with others. He was not more friendly than usual, but admitted significant symptom reduction to an expert evaluator who had never seen him before.

Case two – UG

UG was a pretty seventeen-year-old girl in March 2003 when she was interviewed by a locally trained interviewer. She looked, however, visibly ill and strained – quite unlike a normal happy young woman. She complained of constant headache and pain in her eyes. UG's PTSD diagnosis was confirmed within a two-week period by an expert using the CIDI. UG did not wait to be offered treatment, but came herself to the project centre and asked for help. In her own words, "I have been to all the doctors but they have done nothing for me. All they give me are painkillers. I had to drop out of school because of my headaches and pain in my eyes." When the treatment protocol was explained to her, she was enthusiastic about relating her experiences and readily gave her assent. Her mother was quick to give her consent. She said she would try anything that would help her daughter. In her own words, 'My daughter has not been her real self for a long time.'

At the beginning of NET 1, UG drew a picture of the Somali flag. Asked to explain, she said she loved her homeland and hoped to go back there one day when there was peace. While laying out her lifeline, UG included as the flowers in her life a happy family life in Somalia till the age of 4, her arrival in Uganda after a long and difficult flight itinerary, and being accepted as a refugee by the UNHCR. The stones in her life represented the worst event, when her older brother was killed, her sister was severely injured and her two younger brothers were lost, never to be seen again to date. Other stones symbolised flight difficulties such as extreme hunger, and the failure to find a solution for her headaches and eye pain, which she had had since the worst event. In subsequent NET sessions, UG talked in detail about these events. Excerpts from her story read:

I was born on August 26th 1986 in Mogadishu.... I had six brothers, two younger than me, and two sisters older than me. We were a very happy family. As a big family, we conversed a lot and made jokes. We were very happy with our father. He made us laugh and brought us presents whenever he went anywhere. This was a happy time till I was 4 years old... One day,... we heard guns and bullets firing and we were excited. My father ran from work and picked up the other children from school and came back home with them. Some people came to chase us away from our house because we were a minority clan (Madiban). They were from the Hawie (Habrigidir). They were men, very many. They told us to leave everything and flee from the house. They did not beat us.
I was still a child, with a soft head. I heard the bullets and started vomiting and fell down. My mother picked me up and put me on her back. We all left the house... On the way, we met very many militia men dressed as army men. They told us to lie down on the ground. I was still crying. The rest were silent. One of them knocked me with the butt of the gun on the soft part of my head. Then I kept quiet. They wanted to kill everyone... They put us in a house for security purposes. A heavy gun was shot near the house. There was another house near our house. They shot this house and the fragments reached our house. Some people were killed in our group, including my brother. He was older than me. He was called Mahad. He was in his school uniform. He fell down on his stomach. I did not see him fall but I saw the blood. My sister Khadija was lying down when the fragments hit her. I did not see any blood, but the fragments went into her stomach and she was hurt in her stomach. My father thought she was dead, but people said she was still alive and she was still talking. He went to her and told her to get up. She could not get up. My father carried her to the hospital. Two of my brothers, Abdirahman aged four years, and Abdirasaq aged three years, disappeared in that group as everyone ran away. They were my two younger brothers. My dead brother Mahad was left there. We never saw my two brothers who disappeared again. So many other people died as well... After one year, we came back to Mogadishu. We found our father and sister in the hospital. They first stayed in the hospital for some time, then moved to a house. My sister had been operated on. My father had become a bit deaf because of the gunshot. We decided to leave Somalia and go somewhere else more secure. We left Somalia with nothing. My mother, my father, my injured sister Khadija, and the other six children... We went to Kampala. My mother was advised to go to the UNHCR and got a mandate. My sister Shamso and my cousin sister Fardosa also got separate mandates.... In 1997, we were resettled in Nakivale camp... I hit my head down and I felt the pain of the gun again. I was not hurt but I got a terrible headache for three days. I went to the doctor but I did not get any assistance except Panadols and eyedrops. Since then, I was unable to go to school because the headaches increased. My eyes had begun paining from birth, but they increased with the headaches. I am still here but I hope for a better future and a happier life e.g. to be resettled elsewhere, to have an education, to have a happy family of my own and to make contact with my lost loved ones.

Among her hopes and aspirations for the future, UG hoped to trace her missing brothers under the auspices of the Red Cross, as well as re-enter the education system. She also hoped to one day have a happy family of her own. In the final picture at the end of therapy, she drew a happy family inside a nice house, living in peace. UG had a CIDI pre-test sum score of 16; immediately after treatment, this had decreased to 12 in the CIDI post-test, and to 1 at the nine-month follow-up. The nine-month follow-up found UG no longer in the camp but in Kampala, the capital city of Uganda. She looked happy and cared about her appearance. She said she had no more headaches and few eye problems. "I feel like a newborn child," she told the blind evaluator. She had moved away from the camp to explore possibilities for further education and resettlement, and to seriously try to trace her brothers.
She also reported joining other adolescents when they were out to socialize. "My biggest dream is to play soccer myself, but as a girl, I would never be allowed. But at least I go and cheer for the boys when they play." She says her family members, especially her mother, have all noticed the difference in her and keep commenting on the improvements she has made. She is now more like any healthy young woman.

Pilot study

All the patients accepted and completed treatment. Two patients were unavailable for the post-test, as they had left the settlement after treatment. At the 9-month follow-up, all children could be tested again, as the investigation included testing in other regions to which Somali refugees had moved. At baseline, all six had moderate to severe PTSD according to the CIDI ( M = 14.3, SD = 1.9). Scores dropped to M = 9.0 ( SD = 2.2) at post-test and again to M = 6.2 ( SD = 3.3) at the 9-month follow-up (see Figure 1 and Table 1). A mixed model (random factor: Subject; fixed factor: Time; missing data: restricted maximum likelihood), which allows the inclusion of all six cases in the analysis, indicated a significant reduction of symptoms across time; F (2,5) = 15.45, p < 0.01. This result was confirmed by a Friedman test using row-wise exclusion of the two cases with missing data: χ² (df = 2) = 6.50, p = 0.039. The analysis at the individual level showed that the symptom scores of each individual patient decreased between the pre-test and the follow-up.

[Figure 1. Scatterplot of the sum of PTSD symptoms as recorded by the CIDI Section K across therapy. Both a mixed model ANOVA and a Friedman test confirmed a significant reduction of symptoms between time intervals.]

Table 1. The participants' individual scores on the PTSD section of the Composite International Diagnostic Interview pre-treatment, post-treatment and at 9-month follow-up

Code     Age          Sex      CIDI pre     CIDI post   CIDI follow-up
AMA      16           female   12           9           7
ABJ      15           male     16           -           6
HI       13           female   15           7           4
DMM      16           male     15           -           10
AWH      15           male     12           8           9
UG       17           female   16           12          1
M (SD)   15.3 (1.4)            14.3 (1.9)   9.0 (2.2)   6.2 (3.3)

After nine months, four of the six patients no longer met the criteria for PTSD; the remaining two both had borderline scores. Before treatment, four of the six patients presented with clinically significant depression, as they fulfilled DSM-IV criteria for major depression according to the CIDI interview. None of the subjects met the criteria for clinically significant depression according to the CIDI at the post-test or the 9-month follow-up.

Discussion

In this pilot trial we tried to get a first impression of the efficacy and safety of KIDNET, a short-term treatment approach for the therapy of traumatized adolescents. Results showed an important reduction in posttraumatic symptoms, as early as the post-test. At the 9-month follow-up, 2 of the 6 patients still fulfilled PTSD criteria, but now at borderline levels and with less functional impairment. Clinically significant depression had remitted to non-clinical levels in all four adolescents who had presented with depression in the pre-test. This study has the limitations of pilot studies with small sample sizes. Symptom changes cannot be causally attributed to treatment, but might also be caused by spontaneous remission. Nevertheless, given the cross-sectional data available, in particular the high prevalence of PTSD in the Somali refugee population (nearly 50% [ 45 ]), it seems unlikely that spontaneous remission occurred at this rate.
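Both tests can be reproduced from the Table 1 scores. The sketch below is illustrative rather than the authors' original analysis code (the software they used is not stated): SciPy's Friedman test after row-wise exclusion of the two incomplete cases, which reproduces the reported χ²(2) = 6.50, and a random-intercept mixed model fitted by REML with statsmodels as a close stand-in for the mixed model described above.

```python
# Friedman test (row-wise exclusion) and random-intercept mixed model (REML)
# on the six patients' CIDI sums from Table 1.
import numpy as np
import pandas as pd
from scipy.stats import friedmanchisquare
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "subject":   ["AMA", "ABJ", "HI", "DMM", "AWH", "UG"],
    "pre":       [12, 16, 15, 15, 12, 16],
    "post":      [9, np.nan, 7, np.nan, 8, 12],   # two missing post-tests
    "follow_up": [7, 6, 4, 10, 9, 1],
})

# row-wise exclusion: only the four complete cases enter the Friedman test
complete = df.dropna()
chi2, p = friedmanchisquare(complete["pre"], complete["post"], complete["follow_up"])
print(f"Friedman chi2(2) = {chi2:.2f}, p = {p:.3f}")   # chi2 = 6.50, p = 0.039

# mixed model: Time as fixed factor, Subject as random intercept, all six cases
long = df.melt(id_vars="subject", var_name="time", value_name="score").dropna()
fit = smf.mixedlm("score ~ C(time)", data=long, groups=long["subject"]).fit(reml=True)
print(fit.summary())
```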
Otherwise, the high prevalence would be difficult to explain, as the PTSD would have remitted earlier. Neither the clinical impression nor the symptom scores indicated a worsening of the symptoms in any of the patients. Therefore, this pilot study suggests that NET might be used effectively as a short-term treatment with child patients, even in the unsafe conditions of a refugee camp in an African country. A possible adjustment would be to allocate more sessions where needed, such as where traumatic events were particularly severe or numerous.

The case reports illustrated that the child patients were able and willing to narrate their traumatic experiences, especially with the aid of illustrative material. There is also evidence that children can recall details of traumatic events that occurred when they were younger. Teenaged children are also able to comply with short-term treatment approaches such as the version of NET presented here. It is noteworthy that this group of children had experienced multiple and very severe war events and yet showed a clear benefit from treatment with NET. This encourages research into the effectiveness of KIDNET with other child trauma populations, such as child PTSD after single stressors, after natural disasters or after accidents. More pressing, of course, are randomized controlled trials comparing KIDNET to existing therapies for children. Any such comparisons between different groups of children have to await further research. It is often a matter of survival that mental function is restored to an extent that the survivor can cope with daily stressors, and that children actively take advantage of scholastic opportunities and ultimately strive to find ways out of the camp. In this study, the therapies were carried out by highly trained experts. Given the large numbers of possibly traumatised youth in post-conflict low-income areas worldwide, treatment costs would be prohibitive unless further research can increase the ease with which non-professional paramedics can acquire adequate therapy skills through rigorous short-term training and supervision.

Conclusions

KIDNET proved to be a feasible method for the treatment of traumatized children in an African refugee settlement. The clinical observations and the assessed reduction in symptoms support testing KIDNET in further controlled trials.

Competing interests

The author(s) declare that they have no competing interests.

Authors' contributions

The study was designed by LPO, FN, MO, MS and TE. LPO, VE and ES carried out the treatments and the assessments; FN and MS supervised the treatments. Data were analyzed by FN and LPO. LPO drafted the manuscript; all authors revised the manuscript and approved the final version.

Pre-publication history

The pre-publication history for this paper can be accessed here: | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC549194.xml |
545962 | Patient satisfaction with out-of-hours primary care in the Netherlands | Background In recent years out-of-hours primary care in the Netherlands has changed from practice-based to large-scale cooperatives. The purpose of this study is to determine patient satisfaction with current out-of-hours care organised in general practitioner (GP) cooperatives, and to gain insight into factors associated with this satisfaction. Methods From March to June 2003, 2805 questionnaires were sent to patients within three weeks after they had contacted the GP cooperative in their region. The study was conducted in the province of Limburg in the South of the Netherlands. One-third of these questionnaires was sent to patients who had only received telephone advice, one-third to patients who attended the GP cooperative for consultation, and one-third to patients who received a home visit. Four weeks after the first reminder, a telephone interview of non-respondents was performed among a random sample of 100 patients. Analyses were performed with respect to the type of consultation. Results The total response was 42.4% (1160/2733). Sixty-seven percent of patients who received telephone advice only reported being satisfied with out-of-hours care. About 80% of patients who went to the GP cooperative for consultation, or who received a home visit, reported being satisfied. Factors that were strongly associated with overall satisfaction included the doctor's assistant's attitude on the phone, the opinion on the GP's treatment, and waiting time. Conclusion Patients seem generally satisfied with out-of-hours primary care as organised in GP cooperatives. However, patients who received telephone advice only are less satisfied compared with those who attended the GP cooperative or those who received a home visit. |

Background

In recent years, out-of-hours primary care in the Netherlands has been substantially reorganised. Formerly, general practitioners (GPs) used to perform these services in small locum groups (6 to 8 GPs) in which they joined a rota system. Nowadays, out-of-hours care is organised in large-scale GP cooperatives (45 to 120 GPs), following examples in the UK and Denmark [ 1 , 2 ]. The initiative to reorganise out-of-hours care came mainly from the profession itself, motivated by increased dissatisfaction with the organisation of former out-of-hours primary care services. This dissatisfaction was mainly due to the high perceived workload (a regular day of work followed each out-of-hours service) and the poor separation between work and private life. The main advantage of the reorganisation was the substantial reduction in the number of hours a GP has to be on call. Furthermore, the organisation of out-of-hours care became much more professional through installing management, employing doctor's assistants, and using chauffeured cars. Studies have indicated that GPs appear to be generally satisfied with out-of-hours care organised in cooperatives [ 3 ].

Not only did things change for doctors, but patients also experienced some important changes in out-of-hours primary care. Generally, the reorganisation caused a shift from more personal care to more anonymous care, with increased distance to the GP. Formerly, when patients needed primary care outside office hours, the probability of being seen by their own or a local GP with whom they were familiar was higher. In addition, when patients contacted the GP out of hours in the past, they were most likely to speak to the GP himself on the phone.
Nowadays, the phone is staffed by a doctor's assistant who decides what action should follow the patient's call. Moreover, out-of-hours care used to be delivered by local GPs, implying short distances to the GP's practice. With large-scale GP cooperatives, the distance to a GP outside office hours will have increased substantially for most patients. We expected that patient satisfaction would have been reduced after the reorganisation, because factors that may be important to patients and that guaranteed personal out-of-hours care at a short distance were changed substantially. Furthermore, in Denmark it has been shown that patient satisfaction dropped significantly after the out-of-hours primary care reform [ 4 , 5 ].

Patient satisfaction with out-of-hours primary care has quite often been investigated, especially in the UK [ 4 - 11 ]. Mostly, comparisons have been made between different types of out-of-hours services. Several of these studies focused on out-of-hours primary care as organised in GP cooperatives. These studies have shown that patients are generally satisfied with out-of-hours primary care organised in GP cooperatives [ 5 , 8 , 9 , 11 ]. Nevertheless, patients receiving telephone advice only appear to be less satisfied compared with those attending the cooperative or those receiving a home visit. In addition, it has been shown that patients' expectations about their contact with the GP cooperative strongly affect their overall satisfaction with out-of-hours care [ 12 ]. Other variables that appear to be related to overall satisfaction are access to a car, age, and waiting time [ 8 ].

Insight into patient satisfaction with out-of-hours care supplies the health care provider with important information on the patient's perception of the quality of that care. In recent years, Dutch GP cooperatives have often received negative publicity in newspapers. The reorganisation has had some important implications for patients, and therefore research on their opinions about current out-of-hours care is warranted. The purpose of this study is to determine patient satisfaction with current out-of-hours care, and to determine how satisfaction is related to different aspects of the patient's contact with a GP cooperative.

Methods

Setting

The study was conducted in the province of Limburg in the South of the Netherlands. With respect to out-of-hours primary care, the province is organisationally divided into five regions. Two of these regions each have two GP cooperatives (NL and ML), one region (OZL) has one GP cooperative with two satellite centres, and in the other two regions (WM and MH) only one GP cooperative is operational. All cooperatives but one (MH) are organisationally separate from the emergency department of the local hospital, and are located near the hospital. This implies that patients may choose between attending the emergency department and the GP cooperative for medical problems outside office hours. The MH cooperative is located at the emergency department of the region's only hospital and sees all patients needing out-of-hours care, except those with a referral for emergency care. In total, these seven GP cooperatives cover a population of about 1.1 million people (the total Dutch population is over 16 million) and have been fully operational since 1 September 2001.

Development of the questionnaire

To determine relevant issues for the questionnaire, we interviewed GPs and managers involved with out-of-hours primary care.
In addition, we analysed the process for a patient contacting the GP cooperative separately for all three loci of care (telephone advice, consultation at the cooperative, and home visit), to make sure that all facets of the GP cooperative that a patient faces would be incorporated in the questionnaire. Moreover, we analysed unpublished Dutch questionnaires in this field, and the patient satisfaction questionnaire developed by McKinley et al. [ 13 ]. Based on these three analyses, we identified a number of relevant elements (initial scales). Next, a set of items was developed to enable us to produce multi-item scales. Subsequently, this list was sent for commentary to the patient organisation in our province, the two largest health insurance funds, and the five GP cooperative organisations. These organisations were asked to critically review the list of items, and to add or remove items if they considered it necessary. After receiving all commentary, the questionnaire was adjusted and submitted to five people not involved in its development but with experience of out-of-hours primary care, to check the clarity of the questions.

Finally, three questionnaires were constructed, one for each of the three types of consultation (telephone advice, consultation at the cooperative, and home visit). The three questionnaires differed on items related to the specific type of contact, but general items were the same for all three. In this way it was possible to avoid complex skip sections, which lengthen the questionnaire and can reduce the response rate. We used a balanced five-point Likert scale (strongly agree, agree, neutral, disagree, strongly disagree) to record responses.

The questionnaire related to telephone advice contained six initial scales measuring: accessibility of the cooperative by phone, doctor's assistant's attitude, questions asked by the assistant, advice given by the assistant, urgency of the patient's complaint, and overall satisfaction. The questionnaire related to consultations at the cooperative contained ten initial scales: accessibility of the cooperative by phone, doctor's assistant's attitude, questions asked by the assistant, urgency of the patient's complaint, waiting time at the cooperative, waiting room, distance to the cooperative, GP's attitude, treatment by the GP, and overall satisfaction. The questionnaire related to home visits contained eight initial scales: accessibility of the cooperative by phone, doctor's assistant's attitude, questions asked by the assistant, urgency of the patient's complaint, waiting time until the GP arrives, GP's attitude, treatment by the GP, and overall satisfaction. In addition, patient characteristics such as age, gender, level of education, and health insurance (as a measure of socioeconomic status) were recorded. Patients were also asked which type of consultation they expected prior to their contact with the GP cooperative, and whether they thought that the right diagnosis had been made.

Sample

From March to June 2003, a sample of 2805 patients – who had contacted the GP cooperative in their region – received a questionnaire by mail. Patients received this questionnaire within three weeks after they had contacted the GP cooperative. Sampling was performed per GP cooperative within the four-month period. With respect to patients who received telephone advice only and those who attended the GP cooperative, a computer program randomly selected every fourth patient contact with the GP cooperative, backwards from the moment of sampling.
Since the number of home visits is limited, all 150 patients who had been visited by a GP from the cooperative prior to the moment of sampling received a questionnaire. These procedures ensured that the time between receiving the questionnaire and the contact with the GP cooperative was not more than three weeks. Per region, 450 questionnaires were sent out: 150 to patients who received only telephone advice, 150 to patients who visited the GP cooperative, and 150 to patients who received a home visit. Because of parallel research, more questionnaires were sent out in one of the regions (WM): 1005 questionnaires, equally distributed among the three types of patient contact with the GP cooperative. The study size was chosen based on previous research by McKinley et al. [ 7 , 13 ], who presented a study sample of about 1400 patients. We estimated that about half of all questionnaires would be returned, and therefore distributed 2805 questionnaires. The study was approved by the Institutional Medical Ethics Board.

Reminder and non-respondents interview

Three to four weeks after the questionnaire had been distributed, a reminder was sent to patients who had not returned the questionnaire, with the exception of the WM area. Four weeks after the last reminder, a random sample of 100 patients who had not responded was contacted by phone. They were asked about their reasons for not returning the questionnaire, and about their opinion on the contact they had had with the GP cooperative. This interview was performed during office hours, over a three-week period.

Statistics

Principal components analysis with varimax rotation was used to test whether the items could be assumed to measure similar aspects or components of patients' opinions about their contact with the GP cooperative. Next, Cronbach's alpha coefficient was calculated for each component to estimate internal consistency as a measure of reliability. Finally, scale scores were calculated per component by summing the scores per item and expressing the total result as a percentage of the maximum score for each scale [ 13 , 14 ]. Scale scores could range between 0 and 100. The relationship between individual variables and overall satisfaction was analysed using multiple regression analysis, with subscale satisfaction scores as covariates. Variables that did not significantly contribute to the regression model were excluded from the final model. In case of missing data, listwise deletion of missing cases was applied. All data were analysed using SPSS-pc, version 10.0.5. (A schematic code sketch of these scale and regression computations is given after Table 4 below.)

Results

Patient characteristics

Seventy-two of the 2805 questionnaires were excluded, either because they could not be delivered (the patient had moved or given a wrong address), the patient had died, or the patient had been sent a double questionnaire (multiple contacts). Eventually, the response was 42.4% (1160/2733). Generally, more women responded to the questionnaire, and about three-quarters of the respondents had public health insurance (Table 1). The age of respondents who received telephone advice only was comparable with that of those who attended the GP cooperative for a consultation. The respondents who received a home visit were generally older; two-thirds were over sixty years of age.

Table 1. Patient characteristics
Telephone advice Consultation at the GP cooperative Home visit n (%) n (%) n (%) Response 366/908 (40.3) 392/912 (43.0) 402/903 (44.5) Age 0 – 20 years 127 (35.5) 146 (39.0) 9 (2.3) 21 – 40 years 96 (26.8) 81 (21.7) 26 (6.6) 41 – 60 years 67 (18.7) 82 (21.9) 93 (23.8) > 60 years 68 (19.0) 65 (17.4) 263 (67.3) Total 358 (100) 374 (100) 391 (100) Gender Male 148 (42.3) 159 (48.5) 177 (46.0) Female 202 (57.7) 169 (51.5) 208 (54.0) Total 350 (100) 328 (100) 385 (100) Level of education Low 92 (27.2) 91 (25.0) 161 (46.4) Middle 164 (48.5) 188 (51.6) 131 (37.8) High 82 (24.3) 85 (23.4) 55 (15.8) Total 338 (100) 364 (100) 347 (100) Health insurance Public 268 (74.4) 283 (73.5) 314 (80.5) Private 92 (25.6) 102 (26.5) 76 (19.5) Total 360 (100) 385 (100) 390 (100) Telephone advice Forty percent (366/908) of the patients who had received telephone advice only, returned the questionnaire. 67% of these patients responded to be satisfied (44.3%) or very satisfied (22.3%) with their contact with the GP cooperative, and 57% thought that the current out-of-hours care was an improvement compared to the former situation. We identified the same six scales that were initially set to represent patients' opinions on aspects of primary out-of-hours care (table 2 ). All six scales had Cronbach's alpha coefficients between 0.64 and 0.93. Detailed information on the scales and items can be found in table 7 . Table 2 Description of scales representing patients' opinion on different aspects of out-of-hours primary care. Cases Cronbach's alpha Scale score Scales a n Mean ± SD (95%CI) Telephone advice Accessibility by phone 364 0.72 76.5 ± 18.9 (74.6–78.5) Doctor's assistant's attitude 363 0.91 72.8 ± 22.1 (70.5–75.1) Questions asked by assistant 361 0.64 58.6 ± 25.4 (56.0–61.3) Advice given by assistant 351 0.93 53.7 ± 27.3 (50.8–56.5) Urgency of complaint 363 0.86 69.1 ± 24.5 (66.6–71.7) Overall satisfaction 361 0.93 64.2 ± 26.1 (61.5–66.9) Consultation at the GP cooperative Accessibility by phone 385 0.73 79.3 ± 17.6 (77.5–81.1) Doctor's assistant's attitude 386 0.88 79.8 ± 16.3 (78.2–81.4) Questions asked by assistant 384 0.65 63.5 ± 23.0 (61.2–65.8) Urgency of complaint 384 0.79 72.0 ± 21.5 (69.8–74.1) Waiting time at cooperative 387 0.62 61.5 ± 25.8 (58.9–64.1) Waiting room 381 0.60 65.6 ± 20.3 (63.5–67.6) Distance to cooperative 388 0.75 66.7 ± 21.2 (64.5–68.8) Treatment by GP 377 0.93 81.0 ± 18.9 (79.1–82.9) Overall satisfaction 392 0.88 73.7 ± 19.8 (71.7–75.6) Home visit Accessibility by phone 391 0.86 80.9 ± 18.4 (79.1–82.7) Doctor's assistant's attitude 393 0.90 80.6 ± 18.6 (78.7–82.4) Questions asked by assistant 383 0.73 59.2 ± 26.6 (56.5–61.9) Urgency of complaint 383 0.78 86.7 ± 16.0 (85.1–88.3) Treatment by GP 380 0.96 84.4 ± 19.7 (82.4–86.4) Waiting time until GP arrives 369 - 60.0 ± 30.7 (56.8–63.1) Overall satisfaction 390 0.92 74.6 ± 22.4 (72.4–76.9) a Scale scores range from 0 to 100, where 0 represents very dissatisfied and 100 represents highly satisfied. Table 7 Patient satisfaction questionnaire. (Original items are in Dutch) Scale 1. Accessibility by phone t,c,v It was easy to find the phone number of the GP cooperative # (+) It was easy to get through on the telephone (+) The time until the doctor's assistant picked up the phone was short (+) Scale 2. 
Scale 2. Doctor's assistant's attitude (t,c,v)
- The doctor's assistant was friendly on the phone (+)
- The doctor's assistant had enough time to talk to me on the phone (+)
- The doctor's assistant seemed to understand the problem (+)
- The doctor's assistant took my problem seriously (+)
- The information given by the doctor's assistant was very clear (+)
Scale 3. Questions asked by the doctor's assistant (t,c,v)
- The doctor's assistant asked too many questions (-)
- I thought it was annoying that the doctor's assistant started with noting my personal data before asking about my complaints (-)
Scale 4. Urgency of complaint (t,c,v)
- I believed my problem was very severe (+)
- I thought my problem needed immediate care (+)
Scale 5. Advice given by doctor's assistant (t)
- The doctor's assistant's information about my problem was good (+)
- The advice the doctor's assistant gave me was very useful (+)
- The telephone advice by the doctor's assistant had reassured me (+)
- The telephone advice by the doctor's assistant was sufficient considering my problem (+)
- I thought the doctor's assistant was right to give me telephone advice only (+)
Scale 6. Waiting time at the cooperative (c)
- I thought I had to wait too long at the registration desk (-)
- I thought I had to wait too long before the GP came to see me (-)
Scale 7. Waiting room (c)
- There was enough material (magazines et cetera) in the waiting room to entertain the patients (+)
- The waiting room looked very clean (+)
Scale 8. Distance to the GP cooperative (c)
- I think the travel time from my house to the GP cooperative is too long (-)
- The GP cooperative is easily accessible (+)
Scale 9. Treatment by the GP (c,v)
- The GP took my problem seriously (+)
- The GP was friendly (+)
- The GP gave me clear information about my problem (+)
- The advice the GP gave me was very useful (+)
- The GP had enough time for me during the consultation (+)
- I was very pleased with the treatment by the GP (+)
Scale 10. Waiting time until GP arrives (v)
- I thought it took too long for the GP to arrive (-)
Scale 11. Overall satisfaction (t,c,v)
- I am satisfied about this contact with the GP cooperative (+)
- I am satisfied about the time it took to help me (+)
- I think the GP cooperative functions very well (+)
- Satisfaction rating on a scale from 1 to 10 regarding the functioning of the GP cooperative ‡
- Satisfaction rating on a scale from 1 to 10 regarding the telephone procedure at the GP cooperative ‡,*

t scale for the patient group who received telephone advice only
c scale for the patient group who attended the GP cooperative for a consultation
v scale for the patient group who received a home visit
* this item was excluded from the scale related to patients who attended the GP cooperative
# this item was excluded from the scale related to patients who received a home visit
‡ these items have been divided by two to reach the same range as the other items

Overall satisfaction in this group was significantly related to five scales, explaining 62% of the variance (see Table 3). When patients judged that the right diagnosis had been made, overall satisfaction was higher. We found that satisfaction also increased with age. When patients were satisfied with the accessibility of the cooperative by phone, the doctor's assistant's attitude on the phone, and the doctor's assistant's advice, overall satisfaction was higher.
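A minimal sketch of the multiple regression reported in the tables that follow is given below, using statsmodels OLS in Python. The column names and the fabricated demo data are hypothetical stand-ins for the study variables; the study itself was analysed in SPSS.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def satisfaction_model(df: pd.DataFrame):
    predictors = ["diagnosis_right",          # 1 = right, 0 = wrong
                  "age",
                  "accessibility_score",      # 0-100 subscale scores
                  "assistant_attitude_score",
                  "assistant_advice_score"]
    X = sm.add_constant(df[predictors])
    # Listwise deletion of missing cases, as in the paper
    return sm.OLS(df["overall_satisfaction"], X, missing="drop").fit()

# Demo on fabricated data shaped like the study variables
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "diagnosis_right": rng.integers(0, 2, n),
    "age": rng.integers(18, 90, n),
    "accessibility_score": rng.uniform(0, 100, n),
    "assistant_attitude_score": rng.uniform(0, 100, n),
    "assistant_advice_score": rng.uniform(0, 100, n),
})
df["overall_satisfaction"] = (15 + 12 * df["diagnosis_right"]
                              + 0.4 * df["assistant_attitude_score"]
                              + rng.normal(0, 10, n)).clip(0, 100)
print(satisfaction_model(df).summary())   # unstandardised B, SE, t, p, adjusted R^2
```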
Table 3 Regression analysis with overall satisfaction with out-of-hours primary care as dependent variable, for patients who received only telephone advice (adjusted R² = 0.615)

                                   B         SE      Beta     t        Sig.
Constant                           -2.404    4.302            -0.559
Diagnosis (1 = right, 0 = wrong)   12.345    2.644   0.200    4.668    < 0.001
Patient's age                      0.077     0.036   0.076    2.128    0.034
Accessibility by phone a           0.155     0.054   0.112    2.859    0.005
Doctor's assistant's attitude a    0.401     0.067   0.355    5.960    < 0.001
Doctor's assistant's advice a      0.267     0.055   0.282    4.840    < 0.001

Variables that did not significantly contribute to the regression model: patient's gender, type of health insurance, level of education, expectation about type of consultation, patient's perceived urgency of his or her complaint, and opinion on the questions asked by the doctor's assistant.
a Scale score ranges from 0 to 100, where 0 represents very dissatisfied and 100 represents highly satisfied.

Consultation at the GP cooperative
Forty-three percent (392/912) of the patients who attended the GP cooperative returned the questionnaire. Approximately 80% of these patients reported being satisfied (54.6%) or very satisfied (26.3%) with their contact with the GP cooperative, and 61% thought that the current out-of-hours care was an improvement compared to the former situation. We identified nine scales that represent patients' opinions on aspects of primary out-of-hours care (Table 2), with Cronbach's alpha coefficients between 0.62 and 0.93. Two initial scales, the patient's opinion on the GP's attitude and on the treatment by the GP, were merged into one. All other identified scales were the same as the initial scales. Detailed information on the scales and items can be found in Table 7. Seven variables proved to be predictors of overall satisfaction, explaining 51% of the variance (see Table 4). Patients who expected prior to their contact with the cooperative that they would be asked to come to the GP cooperative were generally more satisfied. Those who believed that their medical problem was urgent were less satisfied. Long waiting times and dissatisfaction with the distance to the cooperative also reduced overall satisfaction. When patients were satisfied with the accessibility of the cooperative by phone, the doctor's assistant's attitude on the phone, and the GP's treatment, overall satisfaction was higher.

Table 4 Regression analysis with overall satisfaction with out-of-hours primary care as dependent variable, for patients who went for a consultation to the GP cooperative (adjusted R² = 0.501)

                                   B         SE      Beta     t        Sig.
(Constant)                         -5.249    5.187            -1.012
Expectation about contact *        4.313     2.113   0.078    2.042    0.042
Accessibility by phone a           0.095     0.047   0.088    2.022    0.044
Doctor's assistant's attitude a    0.165     0.055   0.138    2.981    0.003
Urgency of own complaint b         -0.072    0.036   -0.078   -2.008   0.045
Waiting time a                     0.181     0.030   0.241    6.059    < 0.001
Distance to cooperative a          0.176     0.035   0.192    4.965    < 0.001
GP's treatment a                   0.454     0.042   0.441    10.756   < 0.001

Variables that did not significantly contribute to the regression model: patient's age and gender, type of health insurance, level of education, diagnosis (1 = right, 0 = wrong), and opinion on the questions asked by the doctor's assistant.
a Scale score ranges from 0 to 100, where 0 represents very dissatisfied and 100 represents highly satisfied.
b Scale ranges from 0 to 100: 0 represents not urgent and 100 represents very urgent according to the patient.
* Indicates whether the patient received the type of contact (telephone advice, consultation at the cooperative, or home visit) he or she expected (1 = in accordance with expectation, 0 = not in accordance with expectation).
Home visits
Almost forty-five percent (402/903) of the patients who received a home visit by a GP from the cooperative returned the questionnaire. About 81% of these patients reported being satisfied (42.8%) or very satisfied (38.8%) with their contact with the GP cooperative, and 61% thought that the current out-of-hours care was an improvement compared to the former situation. We identified six multi-item scales that represented the patient's opinion on different aspects of out-of-hours primary care, with Cronbach's alpha coefficients between 0.73 and 0.96. Two initial scales, the patient's opinion on the GP's attitude and on the treatment by the GP, were merged into one. All other identified scales were the same as the initial scales. Detailed information on the scales and items can be found in Table 7. We found that five variables predicted overall satisfaction, explaining 51% of the variance (see Table 5). Similar to the group of patients who had received telephone advice only, patients who received a home visit were generally more satisfied when they believed that the GP of the cooperative had made the right diagnosis. When patients were satisfied with the accessibility of the cooperative by phone, the doctor's assistant's attitude on the phone, and the GP's treatment, overall satisfaction was higher. In addition, when patients were satisfied with the waiting time until the GP arrived, overall satisfaction increased.

Table 5 Regression analysis with overall satisfaction with out-of-hours primary care as dependent variable, for patients who received a home visit from a GP from the cooperative (adjusted R² = 0.506)

                                    B         SE      Beta     t        Sig.
(Constant)                          -11.650   5.213            -2.235
Diagnosis (1 = right, 0 = wrong)    11.948    2.461   0.207    4.856    < 0.001
Accessibility by phone a            0.232     0.059   0.198    3.946    < 0.001
Doctor's assistant's attitude a     0.329     0.061   0.282    5.364    < 0.001
GP's treatment a                    0.260     0.050   0.233    5.155    < 0.001
Waiting time until GP arrives *,a   0.154     0.030   0.218    5.183    < 0.001

Variables that did not significantly contribute to the regression model: patient's age and gender, type of health insurance, education level, expectation about type of consultation, urgency of own complaint, and opinion on the questions asked by the doctor's assistant.
a Scale score ranges from 0 to 100, where 0 represents very dissatisfied and 100 represents highly satisfied.
* Single-item scale

Overall satisfaction
The means of the three loci of care, adjusted for age, sex, insurance status, and level of education, show that there is no difference between overall satisfaction in the group of patients who visited the GP cooperative (75.1 ± 1.31) and those who received a home visit (72.5 ± 1.37) (Table 6). However, patients who received telephone advice only (66.2 ± 1.30) were significantly less satisfied than the other two groups of patients.

Table 6 Adjusted means for overall satisfaction a

                                 Mean   SD     95% CI
Telephone advice                 66.2   1.30   63.6-68.7
Consultation at GP cooperative   75.1   1.31   72.5-77.6
Home visit                       72.5   1.37   69.8-75.2

a Adjusted for age, sex, insurance status, and level of education.

Non-response
Out of 100 randomly selected patients who had not returned the questionnaire, we were able to reach 63 by phone.
Of these 63 non-respondents, 35 (55.6%) were male and 28 (44.4%) were female. Many of them reported that they had forgotten to return the questionnaire (40%). A minority said they were not interested (6.7%) or did not find it necessary (6.7%). Most non-respondents (46.7%) gave other reasons, such as having no time, finding it too difficult, or having lost the questionnaire. Of these patients, about 71% reported being satisfied or very satisfied with their contact with the GP cooperative.

Discussion
The results of this study indicate that patients were generally satisfied with their contact with the GP cooperative. Patients who received telephone advice only, however, were less satisfied than those who attended the GP cooperative and those who received a home visit. A small majority believed that the current out-of-hours care is an improvement compared to the former situation. The response rate in our study is not as high as reported previously by others who investigated patient satisfaction with out-of-hours primary care [5,7-9,11]. Reasons for patients not returning the questionnaire in our study were assessed through the non-respondents interview. We found that most patients gave reasons that were not directly related to their contact with the GP cooperative. Therefore, we assume that this reduced response rate may have had little effect on the outcome of our study. In addition, overall satisfaction in the non-respondents group did not differ much from that of the respondents. In the process of determining which aspects of out-of-hours care are relevant to patients, we consulted the provincial patient organisation and studied discussions on out-of-hours care in newspapers. We did not use patient interviews, although these might have identified other relevant domains of out-of-hours care. However, we think that the current questionnaire captures many domains of out-of-hours care that are relevant to patients as well as to health professionals. Based on the results of a Danish study [4,5], we expected overall patient satisfaction to be low because our study took place relatively shortly after out-of-hours care had been reorganised. However, we did not assess patient satisfaction before the reorganisation, and it therefore remains unclear whether satisfaction has changed. Nevertheless, this study showed that more than half of the patients believe that the reorganisation has improved out-of-hours primary care. We have no reason to believe that the results of this study cannot be generalised to other regions in the Netherlands. Most GP cooperatives in the Netherlands are comparable, with respect to organisation and population size, to those in this study. In addition, the region in our study includes both rural and urban areas. Despite the similarities with out-of-hours primary care in other countries such as Ireland, the UK and Denmark, there are also differences in the way these cooperatives are organised, and care should therefore be taken when generalising these results to other countries. We identified various factors that are closely related to overall satisfaction. These factors give important insight into aspects of the GP cooperative that really matter in the patient's opinion on out-of-hours care. The patient's opinion on the doctor's assistant's attitude on the phone proved to be the strongest predictor of overall satisfaction for those who received telephone advice and those who received a home visit.
Also for those attending the GP cooperative, this factor was a relatively strong predictor, although in this group the patient's satisfaction with the GP's treatment was by far the strongest predictor of overall satisfaction. Thus, it appears that patients' impression of their first contact with the cooperative, which is mostly by telephone, strongly influences overall satisfaction. In accordance with other studies, we found that patients who received telephone advice only are generally less satisfied with the out-of-hours service than those attending the GP cooperative and those receiving a home visit [4,5,8,9,11]. Patients' expectation of care is assumed to be an important factor influencing overall satisfaction [12]. In our study, only 35% of the patients with telephone advice expected that they would receive this type of consultation. In contrast, 85% of the patients who were asked to attend the cooperative or received a home visit found this type of consultation in line with their expectations. This difference in expectation of care may very well explain the difference in overall satisfaction. It is questionable whether extra information to the public on the telephone triage process will adjust patients' expectations. Similar to what Salisbury et al [8] suggested, we believe that a shift to an out-of-hours care organisation based predominantly on telephone advice may decrease overall patient satisfaction. Therefore, proper information about the telephone procedure at the GP cooperative is desirable [15]. This information can be supplied by the doctor's assistant on the phone, and by written information through folders and posters in GP practices.

Conclusions
This study has shown that patients are generally satisfied with out-of-hours care, but that patients with telephone advice only are less satisfied than those attending the cooperative or receiving a home visit. The patient's opinion on several aspects of out-of-hours care can predict overall satisfaction, with different predictors for the three types of consultation. However, the accessibility by phone and the doctor's assistant's attitude on the phone are always significantly related to overall satisfaction, regardless of the type of consultation. This implies that efforts to improve overall satisfaction should always focus on at least these two factors. The questionnaire used in this study has potential for use as a standardised instrument for assessing satisfaction with out-of-hours care in the Netherlands, for either research or service monitoring.

Competing interests
The author(s) declare that they have no competing interests.

Authors' contributions
CU participated in the design of the study, performed the statistical analysis, and drafted this manuscript. AA, SH, PZ, and HC participated in the design of the study, supervised the project, and provided critical edits to this manuscript. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC545962.xml
526221 | Incidence of "quasi-ditags" in catalogs generated by Serial Analysis of Gene Expression (SAGE) | Background Serial Analysis of Gene Expression (SAGE) is a functional genomic technique that quantitatively analyzes the cellular transcriptome. The analysis of SAGE libraries relies on the identification of ditags from sequencing files; however, the software used to examine SAGE libraries cannot distinguish between authentic and false ditags ("quasi-ditags"). Results We provide examples of quasi-ditags that originate from cloning and sequencing artifacts (i.e. genomic contamination or random combinations of nucleotides) that are included in SAGE libraries. We have employed a mathematical model to predict the frequency of quasi-ditags in random nucleotide sequences, and our data show that clones containing two or fewer ditags (which include chromosomal cloning artifacts) should be excluded from the analysis of SAGE catalogs. Conclusions Cloning and sequencing artifacts contaminating SAGE libraries can be eliminated using a simple pre-screening procedure to increase the reliability of the data. | Background
Serial Analysis of Gene Expression (SAGE) is a rapid method to study mRNA transcripts in cell populations [1]. Two major principles underlie SAGE: (1) short expressed sequence tags (ESTs) are sufficient to identify individual gene products, and (2) multiple tags can be concatenated and identified by sequence analysis [1,2]. With the ever-expanding sequence information available in public databases, identification of gene transcripts with SAGE tags has greatly facilitated transcriptome comparisons and gene identification [3]. SAGE data are usually analyzed with software packages like "SAGE300" or "SAGE2000". The majority of SAGE libraries use NlaIII or Sau3A (SalAI) as anchoring enzymes (AE) to create SAGE tags. Both of these enzymes have 4-bp palindromic recognition sequences (CATG for NlaIII and GATC for Sau3A) that flank individual ditags within concatemers. A major component of the software analysis is the identification of anchoring enzyme recognition sequences (AERS) that flank target sequences (SAGE ditags). After finding the first AE recognition sequence, the software continues reading the sequence until it finds the next one. The software then compares the distance between these recognition sequences with the predicted ditag length (20-24 bp in the case of NlaIII or Sau3A), and ditags that are too short (<20 bp) or too long (>24 bp) are excluded. However, if the length of the AERS-flanked sequence satisfies the size criteria, it is identified as a ditag. This algorithm relies on the assumption that all sequences consist of correctly organized ditag concatemers; however, the cloning efficiency of SAGE rarely reaches 100%. In this report, we show that up to 5% of ditags from some SAGE libraries should be omitted from the final analysis. These false ditags (termed "quasi-ditags") result from genomic contaminants and apparently random combinations of nucleotides generated by cloning or sequencing errors. Using a mathematical model to simulate the frequency of quasi-ditags in DNA, we propose a method to exclude quasi-ditags from SAGE catalogs.

Results
From twelve independent SAGE libraries, we analyzed numerous clones lacking organized ditag concatemers that would be excluded by SAGE software packages, including clones lacking inserts, clones with inserts containing bacterial or rodent genomic sequences, and clones with unidentifiable sequences (Figure 1).
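The extraction rule described above (find an AERS, read to the next one, keep the intervening stretch only if it is 20-24 bp) can be sketched in a few lines of Python. This is a re-implementation of the published description, not the SAGE300/SAGE2000 code itself, and it assumes the 20-24 bp criterion applies to the stretch between the two recognition sites.

```python
import re

AERS = "CATG"                         # NlaIII recognition sequence
MIN_LEN, MAX_LEN = 20, 24             # accepted distance between sites

def extract_ditags(sequence: str) -> list:
    """Return the stretches between consecutive AERS that pass the size filter."""
    seq = sequence.upper()
    sites = [m.start() for m in re.finditer(AERS, seq)]
    ditags = []
    for left, right in zip(sites, sites[1:]):
        candidate = seq[left + len(AERS):right]
        if MIN_LEN <= len(candidate) <= MAX_LEN:
            ditags.append(candidate)
    return ditags

# One valid 22-bp ditag, then a 2-bp and a 5-bp stretch that are rejected
demo = "TT" + AERS + "A" * 22 + AERS + "GG" + AERS + "C" * 5 + AERS
print(extract_ditags(demo))           # ['AAAAAAAAAAAAAAAAAAAAAA']
```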
Depending on the quality of the SAGE library, examples of clones in Figure 1A-1C can represent up to 50% of the total number of clones sequenced [4], but generally range from 2-20%. A more typical example is taken from our R1 ES cell and AMH-II SAGE libraries [5,6], which contained 5,988 and 4,478 clones, respectively. The cloning efficiency was ~79% and ~76% (4,714 and 3,413 clones with inserts, respectively). Among these, 411 and 167 clones in the R1 ES SAGE library and 305 and 194 clones in the AMH-II library contained sequences with only 1 or 2 ditags, respectively. During our sequence analysis of the clones that had produced the fewest ditags (1-2 per clone), we identified a subset of sequences (up to 40%) that contain ditags that may be false. Importantly, some of these "ditags" matched bacterial genomic sequences (Figure 2A), while others seemed to represent random combinations of nucleotides. Figure 2B shows an example of a clone that contains a single ditag sequence embedded within a sequence of unidentifiable origin. Because most of this sequence is not composed of concatenated ditags, this embedded ditag may therefore represent a quasi-ditag, which should be excluded from further analysis. These two examples, among others, suggest that some inserts in pZErO-1 contain sequences that just by chance mimic SAGE ditags.

To predict the potential frequency of randomly occurring quasi-ditags, we employed a stochastic model system to generate random sequences. We then used both computer-generated random sequences and true genomic DNA sequences to test this possibility. Random sequences were generated and analyzed with a Visual Basic program designed to mimic SAGE software analysis of ditags. The simulated sequences varied in length from 600 to 1200 nucleotides, which corresponds to the average sequence lengths generated by automated sequence analyzers. One million random sequence strings with L = 600, 700, 800, 900, 1000, 1100, and 1200 nucleotides were generated. Table 1 shows the expected frequencies of quasi-ditags according to the model (equation (5)) and the observed frequencies based on computer simulations. The line plots of the expected (model) and observed (computer simulation) quasi-ditag frequencies are almost identical (Figures 3 and 4). Fragmented Saccharomyces cerevisiae genomic DNA that lacks SAGE ditag concatemers was also employed for in vivo / in silico model validation, and a number of quasi-ditags were detected in these sequences (Figures 4 and 5). In Saccharomyces cerevisiae genomic DNA [7], quasi-ditag frequencies were somewhat lower than those in the computer-generated sequences, potentially due to the presence of nucleotide repeats and unequal frequencies of individual nucleotides in the yeast genomic sequences. These data, however, support our hypothesis that quasi-ditags can be generated randomly from potential sequencing errors or from genomic contaminants. This analysis furthermore underscores the limited extent to which quasi-ditags occur: the distribution of the expected number of quasi-ditags per clone is clearly bimodal, with peaks at 1 and 2 ditags (Q1 and Q2, respectively). At the same time, the frequency of occurrence of three quasi-ditags (Q3) is extremely low (0.01% for L = 600 to 0.02% for L = 1200), such that the value of P3(20-24) effectively converges to zero for the majority of SAGE catalogs (i.e. <3000-5000 clones) (Figure 4). Accordingly, the clones that include ditag concatemers of greater length should lack quasi-ditags.
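A scaled-down Python version of the simulation just described is sketched below. The original study generated one million Visual Basic strings per length; this sketch uses far fewer sequences, which is enough to reproduce the qualitative pattern (mostly 1-2 quasi-ditags per random sequence, almost never 3 or more).

```python
import random
import re
from collections import Counter

def count_quasi_ditags(seq: str, site: str = "CATG", lo: int = 20, hi: int = 24) -> int:
    """Count AERS-flanked stretches that the 20-24 bp rule would accept."""
    pos = [m.start() for m in re.finditer(site, seq)]
    return sum(lo <= b - (a + len(site)) <= hi for a, b in zip(pos, pos[1:]))

def simulate(length: int, n_seqs: int = 20_000, seed: int = 1) -> Counter:
    """Tally quasi-ditag counts over n_seqs random equal-frequency sequences."""
    rng = random.Random(seed)
    tally = Counter()
    for _ in range(n_seqs):
        seq = "".join(rng.choices("ACGT", k=length))
        tally[count_quasi_ditags(seq)] += 1
    return tally

for L in (600, 900, 1200):
    tally = simulate(L)
    n = sum(tally.values())
    # Frequency of sequences containing 1, 2, 3, ... quasi-ditags
    print(L, {k: round(v / n, 4) for k, v in sorted(tally.items()) if k > 0})
```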
Clones containing only one or two ditags/quasi-ditags, however, could be excluded from SAGE analyses without adversely affecting the data set (Figure 6). As an example, we extracted sequences from clones that produced 1-2 total ditags from the AMH-II and R1 ES cell libraries. This reduced the total number of tags by 1.06% for ES R1 and 1.94% for AMH-II, but it effectively removed all contaminating bacterial sequences and improved the data reliability. However, the total AMH-I library (2,365 clones, ~78% cloning efficiency; [6]) had a larger proportion of ditags extracted as being too long (>24 bp), as indicated by a lower tag-per-clone ratio (average insert size of 12.2 tags/clone vs. 22.6 in the AMH-II library) despite the same average sequence length, suggesting a higher proportion of quasi-ditags. Analysis of the AMH-I SAGE library revealed 353 and 52 clones that contained just 1 or 2 ditags, respectively. Exclusion of these sequences decreased the total number of tags by 5.21% (calculated after duplicate dimer exclusion), and proved critical to our subsequent quantitative SAGE comparisons [6]. Failure to remove these quasi-ditag sequences decreased the quantitative reproducibility (R values) between the AMH-I and AMH-II SAGE libraries, showing that quasi-ditags can adversely affect the reliability of SAGE libraries.

Discussion
SAGE is an important tool of modern molecular biology widely used in a number of applications. We hypothesized that actual SAGE catalogs could be contaminated by false ditags ("quasi-ditags") of various origins. Although SAGE software packages are designed to ignore sequences that lack 20-24 bp stretches flanked by two anchoring enzyme recognition sites, they do not exclude quasi-ditags originating from genomic contaminants or unknown sequences that may arise as cloning or sequencing artifacts (Figure 2). Negative controls (self-ligated vector) do not produce any colonies after Zeocin selection and cannot account for the appearance of background clones and quasi-ditags in Zeocin-resistant bacteria. Since some quasi-ditags, however, originate directly from E. coli, we suggest that one probable source of these contaminating tags is recombination events that occur in E. coli. Indeed, such a mechanism has already been documented [8] and has led to the development of Stbl2 bacteria that are mcrA-/mcrBC- hsdRMS- mrr-. Since pZErO-1 was not transformed into recombination-deficient bacteria (DH10B), large-scale amplifications of this plasmid within bacteria would be expected to lead to some random recombinations, and the generation of quasi-ditags (e.g. Figure 2A). Some of the ditags derived from the clones that had produced the fewest ditags (1-2 per clone) do not match genomic sequences and thus might originate from sequencing errors. We therefore propose a model that provides a mathematical basis for the hypothesis that such a possibility exists. The mathematical model presented in the manuscript is an attempt to predict the frequency distribution of quasi-ditags in random sequences. The phenomenon itself is rather complex and there is no simple model that would capture it in full complexity. We, however, believe that we have selected a reasonable level of model complexity that captures the major pattern of the frequency distribution. Using the computer simulation we show that random combinations of nucleotides can indeed be recognized by SAGE software as valid SAGE ditags.
We also demonstrate that quasi-ditags may constitute a non-negligible proportion of SAGE catalogs. Our model, which simulates the frequency of quasi-ditags in DNA (equations (1-6)), suggests that single or double ditags may represent quasi-ditags; however, the results of the in silico experiments show that the probability of finding more than two quasi-ditags in the same sequence converges effectively to zero (Table 1 and Figure 4). Based on these findings, we suggest that additional steps be performed with SAGE libraries. We recommend removing clones with sequences containing ≤ 2 ditags at a pre-processing step ("clean-up"). The removal of clones containing 1 or 2 ditags can effectively remove bacterial genomic sequences and potential sequencing artifacts from SAGE libraries. The overall number of SAGE tags excluded by this additional step (authentic and quasi-ditags) is usually low, and generally does not exceed 1.0-1.8% of the total number of sequenced SAGE tags [5,6,9,10]; however, the frequency of potential quasi-ditags could be high (>5%) in some SAGE libraries. In the AMH-I library, for example, the fraction of clones lacking appropriate ditag concatemers was >20%. In these instances, quasi-ditags significantly contribute to the final SAGE tag count, and should be removed. The chart in Figure 6 plots values for the ditag distribution from both the model-based simulations (L = 800 bp) and actual clones from the SAGE libraries that had sequences of the same mean length (L ≈ 800 bp). The expected maximum frequency of 1-2 quasi-ditags in the plotted model data approximated the observed frequency of clones with 1-2 total ditags detected in the pool of the actual SAGE clones. In contrast, the frequency of occurrence of three or more quasi-ditags predicted by the model is extremely low, demonstrating a divergence in the distributions of expected quasi-ditags and valid SAGE ditags for higher numbers of ditags per clone. Note that, owing to the gel-purification of concatemers, the majority of clones in the representative samples belong to the clusters of higher ditag numbers (AMH-II and ES R1 libraries, 13-26 total ditags; AMH-I library, 4-11 total ditags). Comparing the observed frequencies of actual SAGE clones that produced 1-2 total ditags with the expected quasi-ditag frequencies for sequences of a given length might be indicative of the possible contribution of cloning and sequencing artifact-derived quasi-ditags (Figure 6). The possible contribution of quasi-ditags to the final tag yield in SAGE libraries cannot be accurately predicted in advance, but a failure to report the cloning efficiency and the number of clones with 1 or 2 ditags precludes an evaluation of potential false tags present in SAGE catalogs. Current SAGE protocols do not ensure 100% accurate size fractionation of concatemers: some of the smallest concatemers could therefore be cloned and sequenced. We recognize that some authentic tags (representing valid, but extremely short, inserts that were not extracted during gel-purification of concatemers) will be excluded by removing all clones containing only 1 or 2 ditags. Nevertheless, we suggest that any potential loss of authentic ditags in the clean-up procedure is negligible compared to the advantage of having more reliable SAGE results. SAGE protocols are extremely complex technologically, and every possible means should be employed to ensure the qualitative and quantitative accuracy of catalogs at both the experimental and analytical steps.
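The recommended "clean-up" step amounts to a simple filter applied before tag counting, sketched below in Python. The clone dictionary and toy sequences are hypothetical; the extraction rule follows the 20-24 bp criterion described earlier.

```python
import re

def ditags(seq: str, site: str = "CATG", lo: int = 20, hi: int = 24) -> list:
    """Extract AERS-flanked stretches that pass the 20-24 bp size filter."""
    pos = [m.start() for m in re.finditer(site, seq.upper())]
    gaps = [seq[a + len(site):b] for a, b in zip(pos, pos[1:])]
    return [g for g in gaps if lo <= len(g) <= hi]

def clean_up(clones: dict, min_ditags: int = 3):
    """Drop clones yielding <= 2 ditags before tag counting, as recommended."""
    kept = {cid: s for cid, s in clones.items() if len(ditags(s)) >= min_ditags}
    dropped = {cid: s for cid, s in clones.items() if cid not in kept}
    removed_tags = sum(len(ditags(s)) for s in dropped.values())
    return kept, dropped, removed_tags

# Two toy clones: a proper ten-ditag concatemer and a single-ditag artifact
good = "CATG" + ("A" * 22 + "CATG") * 10
bad = "GGTTCCAA" * 5 + "CATG" + "C" * 21 + "CATG" + "GGTTCCAA" * 5
kept, dropped, removed = clean_up({"clone1": good, "clone2": bad})
print(sorted(kept), sorted(dropped), removed)   # ['clone1'] ['clone2'] 1
```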
Evaluation of the cloning efficiency and precision (e.g. with RAST-PCR [11]) and of sequencing accuracy is therefore essential at the stage preceding large-scale sequencing of the clones. Nonetheless, introduction of the simple pre-processing step eliminating false ditags would further improve the accuracy of the method, resulting in its wider application.

Conclusions
We have hypothesized that actual SAGE catalogs could be contaminated by false ditags (termed "quasi-ditags") of various origins and employed a mathematical model to predict the frequency of quasi-ditags in random nucleotide sequences. Cloning and sequencing artifacts contaminating SAGE libraries can be eliminated using a simple pre-screening procedure to increase the reliability of the data.

Methods
SAGE
Serial analysis of gene expression (SAGE) was performed according to the original protocol [1] with minor modifications [5,12]. Human (PC3) and mouse (P19, R1, D3, EG-1, MEF) cells and tissues (adult and old heart) were employed for the construction of SAGE libraries and sequence analysis to illustrate the "clean-up" process. SAGE tags were generated with NlaIII and BsmFI restriction enzymes (New England Biolabs, Beverly, MA, USA). Sequencing was performed by Perkin-Elmer Applied Biosystems / Celera Genomics (Foster City, CA, USA) and Agencourt Bioscience Corporation (Beverly, MA, USA).

Stochastic model
Anchoring enzyme recognition sites (AERS) are 4 bp long. Assuming for simplicity that all 4 nucleotide bases (A, T, C, and G) have equal frequencies, the probability that a random combination of 4 nucleotides matches the AERS is 4^-4 = 1/256. In a sequence of length L, the expected number of AERS (e.g. CATG for the NlaIII anchoring enzyme) is L/256. Thus, the probability of finding k tags CATG in a random sequence of length L is determined by the Poisson distribution:

p(k) = e^(-L/256) (L/256)^k / k!   (1)

If two CATG sequences (AERS(CATG)) are located within the sequence of length L, then the probability that they are separated by a 20-24 bp distance (P(20-24)) is approximately:

P(20-24) ≈ 10 / (L - 24)   (2)

where 10 is the number of possible relative positions of two AERS(CATG) that yield a quasi-ditag and 24 is the mean distance from the center of one SAGE tag to the end of the sequence that does not leave enough space for another tag to form a quasi-ditag. If >2 AERS are present in the sequence, then there is a chance that an additional AERS would appear within the quasi-ditag formed by the first two AERS; the probability that an additional AERS does not appear within the quasi-ditag (3) can be approximated using 30 bp as the average length of a nucleotide string outside of the ditag. If the total number of AERS(CATG) equals k, then the number of possible AERS pairs is:

k(k - 1) / 2   (4)

Taken together, the probability of at least one quasi-ditag in a sequence that has exactly k AERS(CATG) is approximately:

P_k(20-24) ≈ 1 - (1 - P(20-24))^(k(k-1)/2)   (5)

Then, the probability (Q1) of finding at least one quasi-ditag in a sequence of given length L is:

Q1 = Σ (k ≥ 2) p(k) · P_k(20-24)   (6)

where p(k) is given by equation (1). There is also a probability that more than one quasi-ditag exists within the sequence. In some cases the same AERS(CATG) can serve as a portion of two neighboring quasi-ditags (...CATG-(N)20-24-CATG-(N)20-24-CATG...). In other cases, two or more quasi-ditags can be located independently in the sequence. If a sequence with k tags already has one quasi-ditag bounded by two tags, then the other (k - 2) tags may form additional quasi-ditags. The probability of the existence of additional quasi-ditags, on condition that one ditag is already present, is approximately q(k - 2). Then the total probability that any random sequence has at least two quasi-ditags is:

Q2 = Σ p(k) · P_k(20-24) · q(k - 2)   (7)

In the same way, Q3 and higher-order terms follow (8), and so on. The probability that a random sequence has exactly n quasi-ditags is:

R_n = Q(n) - Q(n + 1)   (9)
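A small numerical evaluation of the model is sketched below. Note that equations (1), (2), and (4)-(6) above were restored from the surrounding definitions (the original typeset formulas did not survive extraction), so this code should be read as an approximation of the published model rather than a verified re-implementation.

```python
from math import comb, exp, factorial

def p_k(k: int, L: int) -> float:
    """Poisson probability of k AERS in a random length-L sequence, eq. (1)."""
    lam = L / 256
    return exp(-lam) * lam ** k / factorial(k)

def P_pair(L: int) -> float:
    """Probability that two AERS sit 20-24 bp apart, reconstructed eq. (2)."""
    return 10 / (L - 24)

def Q1(L: int, k_max: int = 40) -> float:
    """Probability of at least one quasi-ditag, eqs. (4)-(6) as reconstructed."""
    return sum(p_k(k, L) * (1 - (1 - P_pair(L)) ** comb(k, 2))
               for k in range(2, k_max))

for L in (600, 800, 1000, 1200):
    print(L, round(Q1(L), 4))
```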
Software and analysis
A random nucleotide generator (for L = 600-1200) and an analysis program that mimics the "SAGE300" or "SAGE2000" software algorithms were written in Visual Basic and are available upon request. Genomic DNA sequences of Saccharomyces cerevisiae that lack SAGE ditag concatemers were also employed for in vivo / in silico model validation. Randomly selected S. cerevisiae chromosomes were downloaded from GenBank, fragmented to create a minimum of 300 sequences (L = 600-1200), and searched for quasi-ditags using the "SAGE2000" software (available at the SAGE website [13]). The frequency distribution of the number of ditags was analyzed in raw sequences from 3 randomly chosen 96-well plates from the AMH-I, AMH-II and ES R1 SAGE libraries (285 sequences for each library) using the same software.

Authors' contributions
SVA developed the hypothesis and overall plan and performed SAGE, the computer simulations, and the analysis of yeast genome fragments. AAS developed and implemented the mathematical model predicting the appearance of "quasi-ditags" in random sequences of given length. Both authors contributed to the writing and approved the final manuscript. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC526221.xml
479043 | Ancient Adaptive Evolution of the Primate Antiviral DNA-Editing Enzyme APOBEC3G | Host genomes have adopted several strategies to curb the proliferation of transposable elements and viruses. A recently discovered novel primate defense against retroviral infection involves a single-stranded DNA-editing enzyme, APOBEC3G, that causes hypermutation of HIV. The HIV-encoded virion infectivity factor (Vif) protein targets APOBEC3G for destruction, setting up a genetic conflict between the APOBEC3G and Vif genes. This kind of conflict leads to rapid fixation of mutations that alter amino acids at the protein–protein interface, referred to as positive selection. We show that the APOBEC3G gene has been subject to strong positive selection throughout the history of primate evolution. Unexpectedly, this selection appears more ancient than, and is likely only partially caused by, modern lentiviruses. Furthermore, five additional APOBEC genes in the human genome appear to be engaged in similar genetic conflicts, displaying some of the highest signals for positive selection in the human genome. Despite being only recently discovered, editing of RNA and DNA may thus represent an ancient form of host defense in primate genomes. | Introduction Mobile genetic elements have been in conflict with host genomes for over a billion years. Our own genomes reveal the remarkable effects of retrotransposition, as about 45% of our genomic DNA results directly from this process ( Lander et al. 2001 ). This perennial state of conflict has led eukaryotes to adopt several strategies to curb the proliferation of transposable elements and viruses. These include transcriptional silencing through DNA and histone methylation ( Tamaru and Selker 2001 ; Selker et al. 2003 ) or RNA interference ( Ketting et al. 1999 ; Tabara et al. 1999 ; Aufsatz et al. 2002 ), and even directed mutagenesis of mobile elements ( Selker et al. 2003 ). Despite facing this gauntlet of defense strategies, transposable elements have thrived in eukaryotic genomes (with Neurospora crassa being a notable exception [ Selker et al. 2003 ]) by evolving suitable countermeasures. Our current understanding of the intracellular interplay between host defenses and the assault of transposable elements suffers from a paucity of cases where both counterstrategies have been clearly identified. This is in contrast to extracellular cases, where interactions between viral proteins and either host immune surveillance or host receptors have been well established. Understanding the nature and evolutionary time-frame of intracellular conflict is key to understanding the current state of eukaryotic genomes. Recent studies of host inhibition of HIV have uncovered mutations introduced by DNA editing as a novel means by which host genomes battle viruses intracellularly. Furthermore, the means by which viruses combat this defense strategy are also identified, thus providing an unprecedented opportunity to study the evolution of intracellular genetic conflict. Different human cell lines vary in their susceptibility to HIV infection. The gene responsible for this differential susceptibility was identified as apolipoprotein B–editing catalytic polypeptide 3G (APOBEC3G) ( Sheehy et al. 2002 ), whose product targets HIV and simian immunodeficiency virus (SIV) for editing as their genomes undergo reverse transcription in the cytoplasm of host cells. 
APOBEC3G is a cytidine deaminase that edits cytosines to uracils in the minus strand DNA copied from the viral RNA genome, resulting in promiscuous guanine-to-adenine (G-to-A) hypermutation of the plus (protein-coding) strand of the viral DNA ( Harris et al. 2003 ; Mangeat et al. 2003 ; Zhang et al. 2003 ). APOBEC3G is expressed in testes, ovary, spleen, peripheral blood leukocytes, and T-lymphocytes ( Jarmuz et al. 2002 ; Sheehy et al. 2002 ) and is packaged in nascent virions and delivered into new host cells along with the viral genome ( Harris et al. 2003 ). How this editing reduces the evolutionary fitness of the virus is not well established. The mutations introduced by the editing process may either directly reduce viral fitness, or target the uracil-containing viral DNA for destruction ( Gu and Sundquist 2003 ). Before the discovery of APOBEC3G, RNA editing was thought to function solely in the diversification of gene-encoded information. The discovery of viral targeting by APOBEC3G represents a new phase in our understanding of nucleic acid editing in primates. APOBEC3G belongs to a family of nine primate genes that catalyze the deamination of cytosine to uracil in DNA and/or RNA ( Figure 1 ). Two other members of this family are known to have important in vivo editing functions. APOBEC1 encodes a protein that site-specifically edits the mRNA of apolipoprotein B (APOB), leading to a truncated form of the APOB lipid-transport protein ( Chan et al. 1997 ), which is important for determining levels of low-density lipoprotein production. Another member of this family, activation-induced deaminase (AID), is important for all steps following V(D)J recombination in B lymphocytes ( Fugmann and Schatz 2002 ), from generating antibody diversity to class-switching events. Significantly, APOBEC1 and AID act within the nucleus, whereas APOBEC3G is exclusively cytoplasmic, which prevents it from mutating “self” DNA molecules. Whereas rodents have a single APOBEC3 gene, humans have at least six ( Jarmuz et al. 2002 ), including APOBEC3G . The functions of the other members of this expanded APOBEC3 cluster are unknown, although APOBEC3C has been shown to be catalytically active, exhibiting DNA mutator activity in a bacterial system that is like APOBEC3G ( Harris et al. 2002 ). More recently, APOBEC3F has also been associated with anti-HIV biological activity ( Wiegand et al. 2004 ; Zheng et al. 2004 ). Figure 1 The Primate APOBEC Family (A) The human genome contains nine known members of the APOBEC family. AID and APOBEC1 are located approximately 900 kb apart on human Chromosome 12. The primate-specific APOBEC3 cluster of six genes resides on human Chromosome 22, and likely arose through a series of gene duplication events ( Jarmuz et al. 2002 ; Wedekind et al. 2003 ). The single APOBEC3 -like gene found in mouse resides on Chromosome 15 (not shown), which is syntenic to human Chromosome 22 ( Sheehy et al. 2002 ). There is EST evidence for both APOBEC3D and APOBEC3DE (see Materials and Methods ), and we treat these as three separate transcripts in our analysis because currently there is no evidence for the relevant protein products. (B) All members of the APOBEC family contain an active site that encodes a zinc-dependent cytidine deaminase domain with the HXE, PCXXC signature ( Mian et al. 1998 ), a linker peptide, and a pseudoactive domain ( Navaratnam et al. 1998 ; Jarmuz et al. 2002 ). 
The active and pseudoactive domains are related by structure only, and likely originated from a gene duplication event followed by degeneration of the catalytic activity of the pseudoactive domain. Several members of the human APOBEC3 gene cluster ( APOBEC3B, 3DE, 3F, and 3G ) have undergone an additional duplication/recombination event and now contain two each of the active and pseudoactive sites ( Jarmuz et al. 2002 ; Wedekind et al. 2003 ), as does the single APOBEC3 -like gene found in mouse. DOI:10.1371/journal.pbio.0020275.g001 Most lentiviruses encode an accessory gene, virion infectivity factor (Vif), whose product counteracts the antiviral activity of APOBEC3G. Vif interacts with APOBEC3G and targets it for ubiquitination and proteasome-dependent degradation, thus preventing its incorporation into nascent virions ( Madani and Kabat 1998 ; Simon et al. 1998 ; Marin et al. 2003 ; Sheehy et al. 2003 ; Stopak et al. 2003 ; Yu et al. 2003 ). This interaction can be species-specific, as the Vif protein of one lentivirus will counteract APOBEC3G from its host species, but not always the APOBEC3G from a different primate species ( Mariani et al. 2003 ). Thus, APOBEC3G and Vif are predicted to be under selection to decrease and enhance, respectively, their interaction with one another, each driving rapid change in the other. Genetic conflicts like this one are predicted to result in the rapid fixation of mutations that alter amino acids, specifically those that affect this protein–protein interaction. This scenario is referred to as positive selection and is commonly seen in host–pathogen interactions. In this report, we directly test this prediction by studying the paleontology of selective pressures that have acted on APOBEC3G in the primate lineage, to ask whether APOBEC3G has been subject to positive selection, and to date the origins of this genetic conflict. We find that APOBEC3G has been under remarkably strong positive selection, and has undergone several episodes of adaptive evolution throughout the history of primates. Unexpectedly, we find that the positive selection acting on APOBEC3G predates modern lentiviruses, indicating that a more ancient, and perhaps ongoing, conflict has shaped its evolution. We also report evidence for strong positive selection acting on a majority of the APOBEC genes, suggesting that this family of genes may have expanded in primate genomes for genome defense via RNA/DNA editing. Results/Discussion APOBEC3G Has Been Evolving under Positive Selection in Primates To determine what selective pressures have shaped APOBEC3G evolution, we sequenced the APOBEC3G gene from a panel of primate genomes representing 33 million years of evolution. We sequenced the complete APOBEC3G coding sequence (approximately 1,155 bp) from ten primate species, including four hominids (other than human), four Old World monkeys (OWMs), and two New World monkeys (NWMs) ( Figure 2 ). A phylogeny constructed using either complete APOBEC3G sequences or individual exons (unpublished data) is congruent to the widely accepted primate phylogeny ( Purvis 1995 ), indicating that all sequences isolated by our PCR strategy are truly orthologous. Figure 2 APOBEC3G Has Been Under Positive Selection for at Least 33 Million Years The ω values and actual numbers of non-synonymous and synonymous changes (R:S, included in parentheses) in APOBEC3G are indicated on the accepted primate phylogeny ( Purvis 1995 ) that includes five hominids, five OWMs, and two NWMs. 
OWMs diverged from hominids about 23 million years ago, whereas NWMs diverged around 33 million years ago (Nei and Glazko 2002). ω values were calculated with the PAML package of programs using the free-ratio model, which allows ω to vary along each branch. In some instances, zero synonymous substitutions lead to an apparent ω of infinity. HIV/SIV-infected species are indicated by asterisks. DOI:10.1371/journal.pbio.0020275.g002 The hallmark of positive selection is an excess of non-synonymous substitutions (which alter the amino acid being encoded) relative to synonymous substitutions (which retain the encoded amino acid). Because non-synonymous changes are more likely to be deleterious, they are typically culled out by selection (Hurst 2002) (referred to as purifying or negative selection). Therefore, in protein-coding open reading frames, the number of observed changes per synonymous site (Ks) usually exceeds the number of observed changes per non-synonymous site (Ka). In the case of APOBEC3G, however, we found that a majority of branches of the phylogeny (including internal branches) show evidence of positive selection (defined as Ka/Ks [ω] greater than one; see Figure 2). This implies that APOBEC3G has been subject to positive selection throughout the history of primate evolution. In support of this conclusion, all pairwise comparisons of the entire APOBEC3G gene between the various primates have ω greater than one (unpublished data). Maximum likelihood analysis using the PAML (phylogenetic analysis by maximum likelihood) suite of programs also finds strong evidence that the full-length APOBEC3G gene has been subject to positive selection (p < 10^-13). Numbers in parentheses in Figure 2 indicate the actual numbers of non-synonymous and synonymous changes (R:S) that have occurred along each branch. The average Ks in APOBEC3G is not unusually low; it is about 0.09 between hominids and OWMs and 0.26 between hominids and NWMs, compared to 0.08 and 0.15, respectively, for comparisons of various intronic and noncoding regions of primate genomes (Li 1997). Thus, we can rule out the possibility that selection has led to deflated Ks values in APOBEC3G that would lead to artificially high ω ratios. Indeed, these high ω ratios can be explained only by a significantly higher rate of non-synonymous substitutions. Of the primates analyzed, lentiviral infections have been observed only in the African monkeys, chimpanzees, and humans (Peeters and Courgnaud 2002). HIV/SIV-infected species are indicated with asterisks in Figure 2. Estimating the age of lentiviruses is difficult because of their rapid rate of evolution and frequent cross-species transfer, but it has been suggested that primate lentiviruses are no older than 1 million years (Sharp et al. 1999). The presence of modern lentiviruses appears to bear no correlation to either the presence or the strength of positive selection. For instance, the lineage leading to hominids has an ω of 3.3, the highest overall. The positive selection acting on APOBEC3G thus appears to predate modern lentiviruses, and interactions with lentiviral Vif proteins are not likely to be a major cause of this unusually strong signal of positive selection.
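For readers unfamiliar with the ω statistic used throughout this section, the sketch below gives a simplified Nei-Gojobori-style Ka/Ks calculation in Python. It counts synonymous and non-synonymous sites and differences codon-by-codon (single-base differences only, no pathway averaging) and applies the Jukes-Cantor correction; the study itself used PAML's full maximum-likelihood machinery, so treat this as illustrative only. The toy alignment is fabricated.

```python
from itertools import product
from math import log

BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON = {"".join(c): AA[i] for i, c in enumerate(product(BASES, repeat=3))}

def syn_sites(codon: str) -> float:
    """Number of synonymous sites: synonymous single-base changes / 3."""
    n = 0
    for pos in range(3):
        for b in BASES:
            if b != codon[pos]:
                mut = codon[:pos] + b + codon[pos + 1:]
                if CODON[mut] == CODON[codon]:
                    n += 1
    return n / 3.0

def ka_ks(seq1: str, seq2: str) -> float:
    S = N = Sd = Nd = 0.0
    for i in range(0, len(seq1) - 2, 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        s = (syn_sites(c1) + syn_sites(c2)) / 2
        S += s
        N += 3 - s
        diffs = [p for p in range(3) if c1[p] != c2[p]]
        if len(diffs) == 1:                    # simplification: skip multi-hit codons
            if CODON[c1] == CODON[c2]:
                Sd += 1
            else:
                Nd += 1
    jc = lambda p: -0.75 * log(1 - 4 * p / 3)  # Jukes-Cantor correction
    return jc(Nd / N) / jc(Sd / S)             # Ka / Ks = omega

# Toy 8-codon alignment: two synonymous and one non-synonymous difference
a = "ATGAAACCCGGGTTTCATGAACTG"
b = "ATGAAGCCCGGGTTCCGTGAACTG"
print(round(ka_ks(a, b), 2))                   # ~0.1, i.e. purifying selection
```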
In support of this conclusion, HIV has not been in the human population long enough to account for the positive selection of APOBEC3G specific to the human lineage (a 7:0 R:S ratio) arguing that, although the positive selection of Vif may be explained in large part by that of APOBEC3G, the reverse is certainly not the case. Positive Selection in APOBEC3G Is Not Localized to One Domain We wanted to identify the specific domains in APOBEC3G that were subject to positive selection, because this might suggest the driving evolutionary force. For instance, the positive selection in the major histocompatibility complex proteins is confined to only small segments of the protein that constitute the antigen-recognition site ( Hughes and Nei 1988 ; Yang and Swanson 2002 ), because only these sites participate in protein–protein interactions subject to genetic conflict. All members of the APOBEC family contain a similar domain organization (see Figure 1 B) that consists of an active site that encodes a zinc-dependent cytidine deaminase domain with the HXE, PCXXC (H, histidine; X, any amino acid; E, glutamic acid; P, proline; C, cysteine) signature ( Mian et al. 1998 ), a linker peptide, and a pseudoactive domain ( Navaratnam et al. 1998 ; Jarmuz et al. 2002 ). The active and pseudoactive domains are believed to have originated from a gene duplication event followed by degeneration of the catalytic activity of the pseudoactive domain. APOBEC3G and some other APOBEC genes have also undergone a second gene duplication/fusion event ( Jarmuz et al. 2002 ; Wedekind et al. 2003 ). Representative examples of pairwise (sliding window) comparisons of Ka/Ks ratios between two hominids, two OWMs, and two NWMs suggest that the same domain of APOBEC3G has not been subject to positive selection throughout primate evolution ( Figure 3 A– 3 C). In both the hominid and NWM comparisons, the second half of the gene shows evidence of positive selection ( Figure 3 A and 3 C), but in an OWM comparison, it is the first half that is under positive selection ( Figure 3 B). When the APOBEC3G gene is divided into structural domains, we find that all domains, including the active site domains, have undergone multiple distinct episodes of positive selection ( Figure S1 ). This highly unusual pattern suggests that the genetic conflicts that have shaped APOBEC3G evolution have involved episodic protein–protein interactions with different parts of the entire APOBEC3G protein. Figure 3 Episodic Positive Selection on Different Regions of the APOBEC3G Gene (A–C) Sliding window (300-bp window; 50-bp slide) analysis of Ka and Ks was performed on three representative pairs of primate APOBEC3G sequences, between two hominids (human–orangutan) (A), between two OWMs (crested macaque–baboon) (B), and between two NWMs (tamarin–woolly monkey) (C). Ka/Ks, Ka, and Ks are plotted against the length of the gene (with a schematic of protein domains along the x-axis) to illustrate that different domains of APOBEC3G have undergone positive selection, depending on which lineage is examined. The value for ω, indicated by Ka/Ks, is not shown for part of the crested macaque–baboon comparison (B), because Ks is zero in this region (see plot below). (D) A schematic of the domains of human APOBEC3G illustrates the N-terminal domain (aa 1–29), the two active sites (aa 30–120 and 215–311), and the pseudoactive sites (aa 162–214 and 348–384). Also illustrated is the Vif-interaction domain of APOBEC3G (aa 54–124) ( Conticello et al. 
2003 ) as well as the single amino acid residue responsible for species-specific sensitivity to Vif (aspartic acid 128; cross shape in linker 1) ( Bogerd et al. 2004 ; Schrofelbauer et al. 2004 ). PAML ( Yang 1997 ) was used to identify individual residues (codons) that have significant posterior probabilities of ω greater than 1.0 (see Materials and Methods ). Those codons with posterior probabilities greater than 0.95 and greater than 0.99 are indicated by open and closed inverted triangles, respectively (listed in Figures S2 and S3 ). This represents only a subset of the residues that are likely to be under positive selection, highlighting those residues that have repeatedly undergone non-synonymous substitutions. For instance, residue 128 is not highlighted, as it has a posterior probability of only 0.55 because it has undergone only one fixed non-synonymous change (along the OWM lineage). Domains have been defined by protein sequence alignment to APOBEC1 ( Jarmuz et al. 2002 ). The first pseudoactive domain is likely to include in its C-terminus a second duplication of the N-terminal domain, although this boundary cannot be resolved because of sequence divergence. DOI:10.1371/journal.pbio.0020275.g003 We also employed a maximum-likelihood approach (see Materials and Methods ), using the PAML suite of programs ( Yang 1997 ) to identify the specific residues that have been repeatedly subject to positive selection in primates. These analyses (in the best fit model) identify 30% of the codons as having evolved under stringent purifying selection (ω of approximately zero). These include the catalytically important residues that are invariant throughout all APOBECs. The same analysis also identifies approximately 30% of the codons as having evolved under positive selection with an average ω of nearly 3.5 (residues that are evolving without selective constraint would be expected to have an average ω of one). Even among adaptively evolving proteins, this is an unusually high proportion of sites, once again implicating a large number of residues in APOBEC3G as having participated in some kind of genetic conflict. Of these, several residues are identified as being under positive selection with high confidence (posterior probability greater than 0.95, inverted triangles in Figure 3 D). In simulations using datasets with comparable levels of sequence divergence and strength of positive selection to our APOBEC3G dataset (tree length = 1.59), PAML analyses were found to be highly accurate in identifying residues subject to positive selection ( Anisimova et al. 2002 ). The schematic in Figure 3 D highlights the region where Vif is believed to interact with human APOBEC3G ( Conticello et al. 2003 ). It also highlights the single amino acid residue (cross symbol in linker 1) that is responsible for the species-specific interactions seen between Vif and APOBEC3G in African green monkeys (SIV) and humans (HIV) ( Bogerd et al. 2004 ; Schrofelbauer et al. 2004 ). There is a noticeable lack of correlation between the sites on APOBEC3G that are important for Vif interaction and those sites that are identified by PAML with high confidence, supporting our earlier conclusion that Vif interactions have played only a small role in dictating the positive selection of APOBEC3G . Other APOBEC Genes May Participate in Host Defense The discovery that APOBEC3G is involved in host defense was predicated on the tissue-specific inhibition of HIV. 
Other studies have investigated a possible inhibitory role of other APOBEC genes but found that only APOBEC3G and APOBEC3F exert an antiviral defense against HIV (Mariani et al. 2003; Wiegand et al. 2004; Zheng et al. 2004). An unbiased look at selective pressures among other APOBEC genes could reveal clues to their function. We calculated whole-gene Ka/Ks ratios for other members of the human APOBEC family, using orthologs from the chimpanzee genome project (Table 1, second column). This analysis reveals strong evidence of purifying selection acting on AID and APOBEC3A but positive selection acting on APOBEC3B and APOBEC3DE (as well as APOBEC3D and APOBEC3E alone) in addition to APOBEC3G. There is no expression evidence for APOBEC3E, and it is unclear whether it occurs as a stand-alone gene, but its ω ratio of 5.6 is among the highest seen for any human-chimp comparison and argues strongly that it is a functional gene and an active participant in some form of genetic conflict. Whole-gene analyses are notoriously poor at identifying specific domains of positive selection, especially when the rest of the gene is subject to purifying selection. We therefore performed a sliding window Ka/Ks test (Endo et al. 1996), which also reveals positive selection acting on APOBEC3F (amino acids [aa] 117-250).

Table 1 Positive Selection throughout the APOBEC3 Gene Cluster
* p < 0.05; ** p < 0.01; *** p < 0.001
a 189 out of 200 amino acids analyzed in human-chimp comparison
b 376 out of 383 amino acids analyzed in human-chimp comparison
c 196 out of 202 amino acids analyzed in human-chimp comparison
d 119 out of 186 amino acids analyzed in human-chimp comparison
e 314 out of 387 amino acids analyzed in human-chimp comparison
N.D., not determined
ω ratios were calculated for human-chimp orthologs, and tested against the neutral expectation that ω = 1 (p-values obtained from simulations performed in K-estimator). Values of ω significantly less and greater than one imply purifying and positive/diversifying selection, respectively. We were unable to obtain enough APOBEC2 sequence from the chimpanzee genome project to do this analysis, so APOBEC2 was sequenced from orangutan. When sliding window analysis was performed, APOBEC1 (human-orangutan; see Figure 4), APOBEC3C (human-gorilla), and APOBEC3F (human-chimp) show regions of both significant positive and purifying selection. Windows of positive selection in these genes are indicated as amino acid ranges (e.g., aa 1-100 for APOBEC1) along with the associated ω values and statistical significance. DOI:10.1371/journal.pbio.0020275.t001

The limited divergence between the human and chimp genomes leaves some comparisons not informative enough to detect selection (APOBEC1 and APOBEC3C), and there was insufficient chimpanzee sequence available in one case (APOBEC2). To gain further information about these genes, we sequenced them from either orangutan or gorilla (Table 1, third column). These comparisons reveal that strong purifying selection has acted on APOBEC2, but positive selection can be detected in both APOBEC1 (aa 1-100; also see Figure 4) and APOBEC3C (aa 34-133).
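The sliding-window scan behind these results can be sketched generically in Python, as below. The `score` argument is any pairwise statistic over an aligned coding sequence, for example the ka_ks() function from the earlier sketch; the demo plugs in a simple mismatch fraction so the block runs on its own, and the window/step sizes follow the 250-bp window and 50-bp slide used for Table 1 and Figure 4.

```python
def sliding_windows(seq1: str, seq2: str, score, window: int = 250, step: int = 50):
    """Score aligned windows of two equal-length coding sequences."""
    out = []
    for start in range(0, len(seq1) - window + 1, step):
        s = start - start % 3               # keep the window in reading frame
        try:
            val = score(seq1[s:s + window], seq2[s:s + window])
        except (ZeroDivisionError, ValueError):
            val = None                      # e.g. omega undefined when Ks = 0
        out.append((s, val))
    return out

# Self-contained demo with a mismatch-fraction scorer and toy sequences
mismatch = lambda a, b: sum(x != y for x, y in zip(a, b)) / len(a)
s1 = "ATGAAACCCGGG" * 30                    # 360 bp of toy 'coding' sequence
s2 = ("ATGAAGCCCGGG" * 15) + ("ATGAAACCAGGG" * 15)
for pos, v in sliding_windows(s1, s2, mismatch):
    print(pos, round(v, 3))
```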
Although we might have expected APOBEC1 to be evolving only under purifying selection based on its important editing of APOB mRNA, our analysis suggests that APOBEC1 has also participated in some kind of genetic conflict involving its first active site, and suggests that the rapid evolution of APOBEC1 seen previously in mouse–rat comparisons may also be due to positive selection ( Nakamuta et al. 1995 ). Figure 4 shows representative sliding window analyses of genes undergoing gene-wide purifying (APOBEC2) and positive (APOBEC3E) selection. These findings greatly extend the current understanding of the APOBEC family, and implicate a majority of APOBEC genes as participants in host defense. They also raise the possibility of other editing systems being involved in genome defense; for instance, hepatitis delta virus is known to be edited by adenosine deaminase ( Polson et al. 1996 ). Figure 4 Selective Pressures on APOBEC1, APOBEC2, and APOBEC3E Sliding window analysis (250-bp window; 50-bp slide) was performed on three APOBEC genes. Although APOBEC1 demonstrates purifying selection when the whole gene is analyzed ( Table 1 ), the sliding window analysis of a human–orangutan comparison reveals a window (aa 1–100) in the first active site (dark gray bar), which shows evidence of positive selection ( p < 0.01). Sliding window analysis of APOBEC2, which is also evolving under purifying selection ( Table 1 ), does not show any windows where ω is greater than one. APOBEC3E, which gives the strongest signal for positive selection ( Table 1 ), has ω greater than one for almost all windows. (Note that ω is not plotted where Ks = 0). DOI:10.1371/journal.pbio.0020275.g004 Human APOBEC3G Polymorphisms and AIDS The antiviral activity of APOBEC3G and the excess of non-synonymous changes specific to human APOBEC3G (see Figure 2 ) implicate non-synonymous polymorphisms as being functionally very important. Because binding by Vif inhibits APOBEC3G's antiviral ability, we might predict that APOBEC3G should be subject to overdominant selection (heterozygous individuals being at a selective advantage), especially in populations with a high incidence of HIV infection, since different alleles of APOBEC3G may have different susceptibility to various viral strains. The action of APOBEC3G on viral evolution could also be complex: although it is ineffective as an antiviral mechanism in the presence of Vif, its action could result in an increased likelihood of adaptive changes and viral diversity in the host due to the introduced G-to-A hypermutations. Polymorphisms in APOBEC3G may thus have a direct impact on the progression time from initial HIV infection to AIDS, and should be investigated as such. What Drives the Long-Term Evolution of APOBEC3G? The evidence for positive selection of APOBEC3G does not identify the biological step that exerts this selective pressure. Formally, this step could be the yet-undefined mechanism by which APOBEC3G is packaged into virions, the interaction of APOBEC3G with Vif-like destruction proteins encoded by other viruses, and/or its interaction with the proteasome machinery. APOBEC3G may indeed interact with other viruses, because G-to-A hypermutation—a hallmark of the single-stranded DNA–editing activity of APOBEC3G-like enzymes—has been observed in some nonlentiviral viruses ( Vartanian et al. 2003 ), and because APOBEC3G has recently been shown to inhibit the replication of the hepatitis B virus upon deliberate coexpression ( Turelli et al. 2004 ).
However, this inhibition of hepatitis B is not correlated with G-to-A hypermutation, suggesting that APOBEC3G may also inhibit viral replication independent of its catalytic activity. The ancient, constant pressure of positive selection on APOBEC3G in primates raises the possibility that at least some of its evolution may be explained by a struggle not in the lymphocytes, but in the germline, where APOBEC3G is also abundantly expressed ( Jarmuz et al. 2002 ), and where genome-restricted mobile genetic elements need to transpose to ensure survival. Of the three main classes of eukaryotic mobile elements, only two are active in humans and, most likely, other primate genomes. The first and major class includes the LINE1 (long interspersed element–1) non-LTR (long terminal repeat) retroposons that are not a likely target for APOBEC3G, because they carry out their reverse transcription in the nucleus (APOBEC3G is restricted to the cytoplasm). A second class, the LTR-bearing human endogenous retroviruses (HERVs), is identical in many aspects of its life cycle to retroviruses. While the selective disadvantage to an individual organism conferred by endogenous retroviruses may pale in comparison to that of pathogenic viruses, over time the steady retrotransposition of endogenous retroviruses is likely to be more detrimental to a species than scattered, episodic interactions with viruses. Thus, the constant efforts of HERVs to jockey for evolutionary dominance may provide a more likely explanation for the positive selection of APOBEC3G and other APOBEC genes in primate genomes. Materials and Methods Genomic DNA sequencing of primate samples. Genomic DNA was obtained from Coriell (Camden, New Jersey, United States). Species and Coriell repository numbers are: Pan troglodytes (chimpanzee) (NAO3448A), Pan paniscus (bonobo) (NGO5253), Gorilla gorilla (gorilla) (NG05251B), Pongo pygmaeus (orangutan) (NAO4272), Macaca nigra (Celebes crested macaque) (NG07101), Macaca fascicularis (crab-eating macaque) (NA03446), Erythrocebus patas (patas monkey) (NG06254), Lagothrix lagotricha (common woolly monkey) (NG05356), and Saguinus labiatus (red-chested mustached tamarin) (NG05308). Papio anubis (baboon) DNA was a personal gift from Dr. Trent Colbert. The APOBEC3G, APOBEC1, APOBEC2, and APOBEC3C genes were amplified exon-by-exon from genomic DNA with PCR Supermix High Fidelity (Invitrogen, Carlsbad, California, United States), and PCR products were sequenced directly. PCR and sequencing primers are shown in Table S1 . The human APOBEC3G sequence was obtained from the Ensembl database of the human genome project (ENSG00000100289). The Chlorocebus aethiops (African green monkey) APOBEC3G sequence (GenBank AY331714.1) is missing the last 21 bp of the coding sequence because it was sequenced from mRNA ( Mariani et al. 2003 ) in a previous study. Exon–intron boundaries are conserved, except in APOBEC3G from NWMs (woolly monkey and tamarin) where the “AG” directly 5′ of the eighth coding exon is missing. Sequences have been deposited in GenBank under the following accession numbers: APOBEC3G (AY622514–AY622593), APOBEC3C (AY622594–AY622597), APOBEC2 (AY622598–AY622599), APOBEC1 (AY622600–AY622604). Sequences of other APOBEC family members. 
Human sequences were obtained from the Ensembl or GenBank databases: APOBEC1 (ENSG00000111701), APOBEC2 (ENSG00000124701), AID (ENSG00000111732), APOBEC3A (ENSG00000128383), APOBEC3B (NM_004900.3), APOBEC3C (ENSG00000179750), APOBEC3DE (ENSG00000179007), and APOBEC3F (ENSG00000128394). Transcripts for both APOBEC3D (NM_152426) and APOBEC3DE (BC017022.1) exist in the database. Chimp sequences were obtained from orthology to human genes assigned on the University of California at Santa Cruz Genome Bioinformatics Website ( http://www.genome.ucsc.edu ). All orthologous chimp exons were checked for AG and GT flanking the 5′ and 3′ boundaries, respectively, an indication that human splice sites are conserved. The mouse APOBEC3 protein sequence can be found in GenBank (NP_084531.1). Sequence analysis. DNA sequences were aligned using Clustal_X ( Thompson et al. 1997 ), with hand alignment of small indels based on amino acid sequence. Changes along each lineage (see Figure 2 ) were assigned using parsimony and counted by hand. Changes at 18 positions could not be unambiguously assigned as non-synonymous or synonymous and were excluded from the R:S ratios. Ka and Ks for pairwise comparisons ( Figures 3 A– 3 C and 4 ; Table 1 ), as well as their confidence values, were calculated using the K-estimator software package ( Comeron 1999 ). For confidence values, simulations were carried out under the condition where Ka equals Ks and compared to actual Ka from that region, and multiple parameters for transition:transversion ratios were simulated. Maximum likelihood analysis was performed with the PAML software package ( Yang 1997 ). Global ω ratios for the tree (see Figure 2 ) were calculated by a free-ratio model, which allows ω to vary along different branches. To detect selection, the multiple alignments were fitted to either the F3×4 or F61 models of codon frequencies. We then compared the log-likelihood ratios of the data using different NSsites models: model 1 (two-state, neutral, ω > 1 disallowed) to model 2 (similar to model 1 but ω >1 allowed), and model 7 (fit to a beta distribution, ω > 1 disallowed) to model 8 (similar to model 7 but ω >1 allowed). In both cases, permitting sites to evolve under positive selection gave a much better fit to the data ( p < 10 −13 ) with a significant fraction of the sites (more than 30%) predicted to evolve at average ω ratios greater than 3.5 (see Figure S2 for details). These analyses also identified certain amino acid residues with high posterior probabilities (greater than 0.95) of having evolved under positive selection ( Figures 3 D and S2 ). Supporting Information Figure S1 Episodic Evolution of APOBEC3G Protein Domains The evolutionary history of APOBEC3G is represented. R:S ratios are indicated along each branch of the primate cladogram. The N-terminal domain (A) has undergone adaptive evolution in at least three distinct periods. Despite being only 29 codons long, this domain has accumulated ten non-synonymous changes and only two synonymous changes in the African green monkey since it and the patas monkey last shared a common ancestor. Similarly, the orangutan has retained eight non-synonymous changes and no synonymous changes since it split from the rest of the hominids. Finally, a ratio of 6:0 R:S changes is seen in the split between the NWMs and the common ancestor of OWMs and hominids. 
Surprisingly, even the two active site structures of APOBEC3G (B and E) show evidence for adaptive evolution (despite all the putative catalytic residues being conserved), including along the branch leading to the common ancestor of all hominids. The first pseudoactive domain (D) acquired ten non-synonymous and no synonymous changes since the hominids split from the OWMs. (343 KB PDF). Figure S2 PAML Analysis of APOBEC3G Maximum likelihood analysis was performed on APOBEC3G sequences using the PAML software package. To detect selection, the multiple alignments were fitted to either the F3×4 (A) or F61 (B) models of codon frequencies. We compared the log-likelihood ratios of the data using comparisons of different NSsites models: model 1 (two-state, neutral, ω > 1 disallowed) versus model 2 (similar to model 1 but ω > 1 allowed) and model 7 (fit to a beta distribution, ω > 1 disallowed) versus model 8 (similar to model 7 but ω > 1 allowed). In both cases, permitting sites to evolve under positive selection gave a much better fit to the data ( p < 10 −13 ) (C) with a significant fraction of the sites (more than 30%) predicted to evolve at average ω ratios greater than 3.5. These analyses also identified certain amino acid residues with high posterior probabilities (greater than 0.95) of having evolved under positive selection (A and B). (60 KB PDF). Figure S3 Alignment of APOBEC3G Protein Sequences The individual domains of the APOBEC3G protein are demarcated. Catalytically important residues are highlighted in bold, and those residues identified by PAML analysis as being under positive selection are indicated with gray shading. Blue shading highlights the single amino acid residue that can switch specificity of Vif interaction with APOBEC3G. AGM, African green monkey. (46 KB PDF). Table S1 Complete List of Primers Used in This Study (39 KB PDF). Accession numbers The GenBank ( http://www.ncbi.nlm.nih.gov/ ) and Ensembl ( http://www.ensembl.org/ ) accession numbers for the genes and gene products discussed in this paper are as follows. GenBank: APOBEC1 (AY622600–AY622604), APOBEC2 (AY622598–AY622599), mouse APOBEC3 (NP_084531.1), human APOBEC3B (NM_004900.3), APOBEC3C (AY622594–AY622597), APOBEC3D (NM_152426), APOBEC3DE (BC017022.1), APOBEC3G (AY622514–AY622593), and African green monkey APOBEC3G (AY331714.1). Ensembl (all human sequences): APOBEC1 (ENSG00000111701), APOBEC2 (ENSG00000124701), AID (ENSG00000111732), APOBEC3A (ENSG00000128383), APOBEC3C (ENSG00000179750), APOBEC3DE (ENSG00000179007), APOBEC3F (ENSG00000128394), and APOBEC3G (ENSG00000100289). Coriell ( http://www.coriell.umdnj.edu/ ) repository numbers for primate genomic DNAs are Pan troglodytes (chimpanzee) (NAO3448A), Pan paniscus (bonobo) (NGO5253), Gorilla gorilla (gorilla) (NG05251B), Pongo pygmaeus (orangutan) (NAO4272), Macaca nigra (Celebes crested macaque) (NG07101), Macaca fascicularis (long-tailed macaque) (NA03446), Erythrocebus patas (patas monkey) (NG06254), Lagothrix lagotricha (common woolly monkey) (NG05356), and Saguinus labiatus (red-chested mustached tamarin) (NG05308).
554112 | Analysis of the human Alu Ye lineage | Background Alu elements are short (~300 bp) interspersed elements that amplify in primate genomes through a process termed retroposition. The expansion of these elements has had a significant impact on the structure and function of primate genomes. Approximately 10 % of the mass of the human genome is comprised of Alu elements, making them the most abundant short interspersed element (SINE) in our genome. The majority of Alu amplification occurred early in primate evolution, and the current rate of Alu retroposition is at least 100 fold slower than the peak of amplification that occurred 30–50 million years ago. Alu elements are therefore a rich source of inter- and intra-species primate genomic variation. Results A total of 153 Alu elements from the Ye subfamily were extracted from the draft sequence of the human genome. Analysis of these elements resulted in the discovery of two new Alu subfamilies, Ye4 and Ye6, complementing the previously described Ye5 subfamily. DNA sequence analysis of each of the Alu Ye subfamilies yielded average age estimates of ~14, ~13 and ~9.5 million years old for the Alu Ye4, Ye5 and Ye6 subfamilies, respectively. In addition, 120 Alu Ye4, Ye5 and Ye6 loci were screened using polymerase chain reaction (PCR) assays to determine their phylogenetic origin and levels of human genomic diversity. Conclusion The Alu Ye lineage appears to have started amplifying relatively early in primate evolution and continued propagating at a low level as many of its members are found in a variety of hominoid (humans, greater and lesser ape) genomes. Detailed sequence analysis of several Alu pre-integration sites indicated that multiple types of events had occurred, including gene conversions, near-parallel independent insertions of different Alu elements and Alu -mediated genomic deletions. A potential hotspot for Alu insertion in the Fer1L3 gene on chromosome 10 was also identified. | Background The proliferation of Alu elements has had a significant impact on the architecture of primate genomes [ 1 ]. They comprise over 10% of the human genome by mass and are the most abundant short interspersed element (SINE) in primate genomes [ 2 ]. Alu elements have achieved this copy number by duplicating via an RNA intermediate in a process termed retroposition [ 3 ]. During retroposition the RNA copy is reverse transcribed by target primed reverse transcription (TPRT) and subsequently integrated into the genome [ 4 - 6 ]. While unable to retropose autonomously, Alu elements are thought to borrow the factors that are required for their amplification from the LINE (long interspersed element) elements [ 6 - 9 ], which encode a protein with endonuclease and reverse transcriptase activity [ 10 , 11 ]. Because of their high copy number, Alu repeats have been a significant source of new mutations as a result of insertion and post-integration recombination between elements [ 12 , 13 ]. The majority of Alu amplification occurred early in primate evolution, and the current rate of Alu retroposition is at least 100 fold slower than the peak of amplification that appears to have occurred 30–50 million years ago [ 2 , 14 - 16 ]. Even though there are over one million Alu elements within the human genome, only a small number of these elements are capable of movement [ 17 ]. 
As a result of the limited amplification capacity of Alu elements, a series of discrete subfamilies of Alu elements that share common diagnostic mutations have been identified in the human genome [ 18 - 21 ]. A small subset of "young" Alu repeats is so recent in origin that they are present in the human genome and absent from the genomes of non-human primates, with some of the elements being polymorphic with respect to insertion presence/absence in diverse human genomes [ 16 , 22 - 25 ]. Individual SINE elements have proven to be essentially homoplasy-free characters, which are therefore quite useful for resolving phylogenetic and population genetic questions [ 2 , 26 - 34 ]. For example, young Alu subfamilies which arose around the radiation of Subtribe Hominina (gorillas, chimpanzees, and humans) four to six million years ago [ 35 ] were used as homoplasy-free phylogenetic markers to resolve the branching order in hominids [ 36 ]. Relationships among other primates have also been resolved using relatively large numbers of Alu elements as phylogenetic markers [ 28 , 37 - 40 ]. We have previously characterized a large number of recently integrated Alu elements found in the human genome that fall in six distinct lineages, termed Ya, Yb, Yc, Yd, Yg and Yi, based upon their diagnostic mutations [ 41 - 52 ]. Here, we describe the distribution in the human genome of three Alu subfamilies that are members of the Alu Ye lineage [ 53 ] and are characterized by four (Ye4), five (Ye5) and six (Ye6) diagnostic mutations, respectively. Results Subfamily size and age Alu Ye elements were identified in the draft sequence of the human genome using BLAST [ 54 ] queries of the draft sequence to identify exact complements to an Alu Ye specific oligonucleotide (Fig. 1 ). See the Materials and Methods section for details on the search. Using this approach, we identified 25 Ye4 subfamily members that shared four diagnostic base positions and thus comprised the Alu Ye4 subfamily. We also identified 103 elements that shared five diagnostic base positions and comprise the Alu Ye5 subfamily, and 25 Ye6 subfamily members that shared six diagnostic base positions and comprised the Alu Ye6 subfamily. Each of the subfamilies was named in accordance with standard nomenclature for new Alu subfamilies [ 55 ]. Figure 1 Sequence alignment of Alu Ye subfamilies. The consensus sequence for the Alu Y subfamily is shown at the top. The sequences of the Alu Ye4, Ye5 and Ye6 subfamilies are shown below. The dots below represent the same nucleotides as the consensus sequence. Deletions are shown as dashes and mutations are shown as the correct base for each of the subfamilies. To estimate the copy number of the Ye4, Ye5 and Ye6 Alu subfamilies, we performed BLAST searches of the draft sequence of the human genome using an Alu Ye lineage-specific oligonucleotide to query the database (as outlined in the methods). Seventeen of the 25 Alu Ye4 elements were unique (non-paralogous). There were also 76 unique Ye5 Alu elements and 23 unique Ye6 Alu subfamily members. Multiple alignments of the Alu elements from each subfamily were constructed and the number of mutations from the consensus sequence for each Alu subfamily was determined. In each case the mutations were divided into those that occur at CpG dinucleotides and those that occur at non-CpG positions, without including small insertions or deletions, as described previously [ 47 - 49 ].
The mutations are divided into these two different classes to estimate the average age of each subfamily because the CpG base positions in repeated sequences mutate at a rate that is about six times higher than that of non-CpG positions [ 56 ], as a result of the spontaneous deamination of 5-methylcytosine residues [ 57 ]. Mutation densities were calculated for each Alu Ye subfamily. For 17 elements from the Alu Ye4 subfamily, the non-CpG and CpG mutation densities were 2.1% (83/3944) and 12.5% (106/850). Using a neutral rate of evolution of 0.15% per million years for non-CpG positions [ 58 ] and 0.9% per million years for the CpG base positions [ 56 ] along with the average mutation density yields age estimates of 14.03 and 13.86 million years old for the Ye4 subfamily. For the Alu Ye5 subfamily, 76 elements comprising a total of 17632 non-CpG and 3800 CpG nucleotides were analyzed; these contained 351 non-CpG and 431 CpG mutations. The mutation densities of the Ye5 subfamily were 1.99% and 11.34% for the non-CpG and CpG nucleotides, yielding age estimates based on the average mutation density of 13.27 and 12.60 million years old. For the Alu Ye6 subfamily, 23 elements comprising a total of 5336 non-CpG and 1150 CpG nucleotides were analyzed; these contained 86 non-CpG and 92 CpG mutations. The mutation densities of the Ye6 subfamily were 1.61% and 8% for the non-CpG and CpG nucleotides, yielding age estimates based on the average mutation density of 10.75 and 8.89 million years old. Evolutionary analysis In order to determine the approximate time of insertion for each Alu Ye4, Ye5 and Ye6 subfamily member, we performed a series of PCR reactions using human and non-human primate DNA samples as templates. Unfortunately, not all of the loci identified in the draft sequence were amenable to PCR analysis, as some of them had inserted into other repetitive regions of the genome, making the design of flanking unique sequence PCR primers difficult. For the Ye subfamilies, 120 of the 153 elements identified in the draft human genomic sequence were amplified by PCR. Examination of the orthologous regions of the various species' genomes revealed a series of different PCR patterns indicative of the time of retroposition of each of the elements into the primate genomes. Results from a series of these experiments showed a gradient of Ye Alu repeats, beginning with some elements that are recent in origin and unique to the human genome (e.g. Ye5AH110) and ending with elements that are found within all ape genomes (e.g. Ye5AH148). The distribution of all the Ye elements in various primate genomes is summarized in Additional File 2. Gene conversion Gene conversion, both between Alu elements and in other regions of the human genome, exerts a significant influence on the accumulation of single nucleotide diversity within the human genome [ 2 , 50 ]. To estimate the frequency of gene conversion in the Alu Ye subfamily members, we compared the sequences of the elements found in the human genome to the consensus sequences of other Alu subfamilies. Using this approach, we identified two Alu Ye5 subfamily members that appeared to have been subjected to partial gene conversion at their 3' ends. Alu Ye5AH70 contains three mutations that are diagnostic for the Yb8/9 subfamily. Similarly, Alu Ye5AH173 contains three Alu Sc mutations. Each of the sequence exchanges occurred in a short contiguous sequence, suggesting that they were products of gene conversion rather than homoplasic point mutations.
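The subfamily age estimates reported above are simple enough to verify directly: each age is just the observed mutation density divided by the corresponding neutral rate. The short Python sketch below (with variable names of our own choosing) reproduces the reported values from the counts given in the Results.

```python
# Subfamily age = mutation density / neutral substitution rate, using the
# rates cited in the text: 0.15% per Myr for non-CpG positions [58] and
# 0.9% per Myr for CpG positions [56]. Counts are taken from the Results.

NON_CPG_RATE = 0.15  # percent per million years
CPG_RATE = 0.90      # percent per million years

subfamilies = {
    # name: (non-CpG mutations, non-CpG sites, CpG mutations, CpG sites)
    "Ye4": (83, 3944, 106, 850),
    "Ye5": (351, 17632, 431, 3800),
    "Ye6": (86, 5336, 92, 1150),
}

for name, (nc_mut, nc_sites, cpg_mut, cpg_sites) in subfamilies.items():
    nc_density = 100.0 * nc_mut / nc_sites    # percent
    cpg_density = 100.0 * cpg_mut / cpg_sites
    print(f"{name}: non-CpG {nc_density:.2f}% -> {nc_density / NON_CPG_RATE:.2f} Myr; "
          f"CpG {cpg_density:.2f}% -> {cpg_density / CPG_RATE:.2f} Myr")

# Output matches the estimates reported above: Ye4 ~14.0/13.9 Myr,
# Ye5 ~13.3/12.6 Myr and Ye6 ~10.7/8.9 Myr.
```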
We identified one Alu -containing locus that was involved in a full gene conversion/replacement event (Ye5AH181). In this case, the orthologous Alu elements have similar flanking sequences and direct repeats, although they are not precisely identical due to the random mutations that accumulated over time. DNA sequence analysis of this locus showed that the Alu element of selected new world monkey genomes (spider monkey, woolly monkey and tamarin) belonged to the Alu Sg subfamily. This suggests that a gene conversion of an older, pre-existing Alu Sg may have introduced the Ye5 sequence in the common ancestor of humans, chimpanzees, gorillas and orangutans. Amplification of this locus was unsuccessful in the old world monkey taxa tested. Alu -mediated genomic deletions Two deletions of part of the human genome appeared to be associated with newly inserted Alu Ye elements. These deletions were identified at loci Ye5AH24 and Ye5AH27. In the case of Ye5AH24, the deletion was associated with a gene conversion of an Alu Y in both orangutan and siamang to Alu Ye5 in human, bonobo, common chimpanzee and gorilla, and involved the removal of about 500 bp from the 3' flanking region. For Alu Ye5AH27, the deletion was associated with a gene conversion of an Alu Sx element (orangutan and siamang) to Alu Ye5 (human, bonobo, common chimpanzee and gorilla) and involved the removal of 142 bp from the 3' flanking region. Based on these data, we estimate the frequency of Alu retroposition-mediated deletions at approximately 1.67% (2/120). The pre-integration sites for three elements (Ye5AH11, Ye5AH40 and Ye5AH173) did not amplify in any non-human primate species. Previously, the insertion of L1 elements has been shown to be associated with large genomic deletions [ 59 ]. Thus, one possible explanation for the absence of pre-integration PCR products would be that a large deletion (>1 kb) occurred at each of these loci during Alu integration. If a deletion occurred during the integration of an Alu element in the human genome, then the pre-integration product size calculated computationally would be an underestimate of the true size of the locus. To investigate this possibility, we utilized long template PCR reactions of these loci that would facilitate the amplification of larger (up to 25 kb) products. Unfortunately, PCR amplicons were not generated from any of these loci, suggesting that the retrotransposition of these Alu elements in humans may have generated deletions greater than 25 kb in size. Alternatively, the orthologous loci in non-human primate genomes may have undergone additional mutations at the oligonucleotide primer sites, preventing PCR amplification. Independent Alu insertions We have also identified one locus (Ye5AH161) that contained multiple paralogous Alu insertions in the human/chimpanzee/gorilla, old world monkey and new world monkey lineages (Fig. 2 ). In the human, chimpanzee and gorilla lineage (subtribe Hominina) there was an independent insertion of an Alu Ye5 in the 5' flank of an Alu Sx that is common to all taxa. In all the old world monkey genomes tested (green monkey, macaque and rhesus monkey), an Alu Sp has inserted in the 5' flank of the shared Sx element, about 58 bp away from the Alu Ye5 present in Hominina. Also, in the woolly and spider monkeys (new world monkeys), there was an independent insertion of an Alu Sx in the 5' flank of the shared Alu Sx. In gibbon, siamang and orangutan, there were no independent Alu insertions at this locus; only the common Alu Sx is present.
In orangutan, however, there was an extra 145 bp of genomic sequence inserted inside the old Alu Sx. The pattern discussed suggests that these three independent parallel insertion events occurred sometime after the divergence of these primates from one another. This locus on chromosome 10q23.33 lies in intron 39–40 of the human Fer1L3 gene, about 50 bp from exon 39. This locus may be considered a hot spot for Alu insertion. An alignment of locus Ye5AH161 is available as Additional file 1. Figure 2 Parallel insertions at the Ye5AH161 locus. A) The figure shows an agarose gel image of the PCR products resulting from amplification at the Ye5AH161 locus in 13 primate species. The ~795 bp PCR product is found in the human, common chimpanzee, pygmy chimpanzee, gorilla, green monkey, rhesus monkey, macaque, woolly monkey and spider monkey genomes. Smaller bands were found in orangutan, gibbon and siamang. Sequence analysis of the PCR products shows three independent insertions: a Ye5 in subtribe Hominina (human, chimpanzee and gorilla), a second insertion of an Alu Sp in old world monkeys, and an Alu Sx insertion in new world monkeys. Suspected non-homologous recombination has inserted 145 bp in the orangutan genome at this locus. B) A schematic representation of the multiple independent Alu insertions and the distance between the shared Alu Sx and the independently inserted Alu elements. The sequence of Fer1L3 exon 39 is shown. Silent mutations are highlighted and the distances from the inserted Alu elements are indicated. Abbreviations used in the figure are: Human (H), Chimpanzee (C), Gorilla (G), Orangutan (O), Gibbon (Gn), Siamang (S), Green monkey (Gm), Rhesus monkey (R), Macaque (M), Woolly monkey (W) and Spider monkey (Sm). We also identified another near-parallel independent Alu insertion event at the human Ye5AH16 locus in all the old world monkey genomes tested (green monkey, macaque and rhesus monkey), within the same locus where an Alu Ye5 element was located in the human, chimpanzee, gorilla and orangutan genomes. Thus, the near-parallel insertion most likely occurred after the divergence of humans and apes from old world monkeys, but before the radiation of the old world monkeys. The element present in the old world monkey genomes is an Alu Y and is 80 bp from the human insertion site. Human genomic diversity To determine the human genomic diversity associated with each of the Alu Ye4, Ye5 and Ye6 subfamily members, we performed a series of PCR reactions on a collection of 80 geographically-diverse human genomes. Using this approach, we identified one new Alu insertion polymorphism (Ye5AH167) from the loci analyzed in this report. The allele frequencies, genotypes and heterozygosities for the Alu insertion polymorphism are shown in Table 1.
Table 1 Human genetic diversity of Ye5AH167
Population                    +/+   +/-   -/-   f(Ye5)   Het(1)
African American                6     8     6    0.50     0.51
Asian                           2    16     2    0.50     0.51
European/German Caucasian       3     9     7    0.39     0.49
South American                  5    13     1    0.61     0.49
Average heterozygosity(2)                                 0.50
1. Unbiased heterozygosity. 2. The average heterozygosity for all populations.
Discussion Our detailed analysis of the Alu Ye5 subfamily resulted in the recovery of two new Alu subfamilies, Ye4 and Ye6. Each of these Alu subfamilies has a relatively small copy number in the human genome.
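The quantities in Table 1 above follow mechanically from the genotype counts. The sketch below, with names of our own choosing, computes the insertion allele frequency and an unbiased heterozygosity, h = 2p(1 - p) * 2n/(2n - 1); the choice of this standard small-sample correction is our assumption, but it reproduces the tabulated values.

```python
# Insertion allele frequency and unbiased heterozygosity from the
# genotype counts in Table 1. "+" denotes presence of the Ye5 insertion.

populations = {
    # population: (+/+, +/-, -/-) genotype counts
    "African American": (6, 8, 6),
    "Asian": (2, 16, 2),
    "European/German Caucasian": (3, 9, 7),
    "South American": (5, 13, 1),
}

hets = []
for pop, (hom_ins, het_count, hom_abs) in populations.items():
    n = hom_ins + het_count + hom_abs        # individuals sampled
    p = (2 * hom_ins + het_count) / (2 * n)  # insertion allele frequency
    h = 2 * p * (1 - p) * (2 * n) / (2 * n - 1)  # unbiased heterozygosity
    hets.append(h)
    print(f"{pop}: f(Ye5) = {p:.2f}, het = {h:.2f}")

print(f"Average heterozygosity = {sum(hets) / len(hets):.2f}")
```

Running this reproduces every row of Table 1, including the average heterozygosity of 0.50.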
The proportion of polymorphic elements within each of the subfamilies is quite low: only one member of the Ye subfamilies (Ye5AH167), or 0.83% of the Alu Ye elements screened, is polymorphic with respect to insertion presence/absence in the human genome. In contrast, many other young Alu subfamilies have levels of insertion polymorphism in excess of 20% [ 2 ]. Therefore, the amplification of these Alu subfamilies within the human genome has occurred at a very low rate, and may have recently ceased entirely. The estimated average ages of ~14, ~13 and ~9.5 million years old for the Alu Ye4, Ye5 and Ye6 subfamilies, respectively, are consistent with their relatively recent origin in primate genomes. This is also consistent with the master gene model of SINE retroposition, which suggests that as a master element accumulates mutations over time, the resulting elements will share those mutations [ 60 ]. Members of the Alu Ye lineage are dispersed throughout the genomes of all hominoids (humans, greater and lesser apes), suggesting that this subfamily of Alu elements began to amplify about 15–20 million years ago. Therefore, the Ye subfamily appears to have been retroposition competent during hominoid evolution, but must have been relatively inefficient at producing copies. Although the rate of Ye amplification has not been dramatic within the human lineage, it may be quite interesting to recover Alu Ye subfamily members from other ape genomes and to determine the rate of Ye subfamily amplification in those genomes, to see if there has been any differential amplification of these elements in non-human primate genomes. The differential amplification of ID SINEs within various members of the rodent lineage has been reported previously, suggesting that the amplification of SINEs within various genomes is subject to change [ 61 , 62 ]. Gene conversion between Alu repeats has been reported previously [ 26 , 63 , 64 ]. The gene conversion events involving three Alu Ye subfamily members were quite interesting. In one case (Ye5AH181), the Alu -containing locus was involved in a full gene conversion event where an Alu Sg in new world monkeys is replaced by an Alu Ye5 in humans, chimpanzees, gorillas and orangutans. In the other two cases (Ye5AH70 and Ye5AH173), only a small portion of the 3' end of the Ye elements was involved in the gene conversion. This is in good agreement with the molecular nature of gene conversion events recently reported for the Ya5 and Yb8/9 Alu subfamilies [ 47 , 48 , 64 , 65 ]. The detection of three gene conversion events from about 153 Alu Ye elements suggests that gene conversion of these elements has been relatively rare, with a rate of 1.96%. However, this rate is comparable to that reported previously for the Alu Ya5 and Yb8 subfamilies within the human genome, as well as that for the Ta subfamily of human LINE elements [ 64 - 66 ]. In all cases, the Ye Alu family members that were involved in the gene conversion were monomorphic for insertion presence within the human genome. In the partial gene conversion events, the Ye Alu repeats were gene converted by Yb8/9 and Sc Alu elements. The Yb8/9 Alu subfamily was one of the first groups of Alu repeats ever reported to be involved in gene conversion, and may be more prone to these types of events as a result of a retroposition rate that is slightly higher than that of other recently integrated Alu subfamilies in the human genome [ 48 , 64 , 65 ].
The gene conversion between Alu elements may in part be a function of the length of time that the individual Alu elements have resided in the human genome [ 26 , 50 ]. Based on an examination of low copy number transgenes in the mouse, it has been suggested that the germline recombination machinery in mammals has evolved to prevent high levels of ectopic recombination between repetitive sequences [ 67 ]. It is quite possible that the high copy number of Alu elements allows for pairing between regions of sequence identity of different Alu elements, initiating the start of gene conversion before cellular control systems can terminate the process, resulting in the production of small gene conversion tracts. The identification of multiple paralogous Alu insertions involving an Alu Ye element (Ye5AH161) in the human/bonobo/common chimpanzee/gorilla lineage, an Alu Sp in the old world monkey lineage and an Alu Sx in the new world monkey lineage is also interesting. The paralogous insertion of an Alu repeat into the orthologous regions of human and non-human primate genomes is an independent evolutionary event [ 26 ]. To date there are no known cases of the independent insertion of paralogous Alu elements into identical sites within different genomes. The detection of parallel insertions is a function of the rate of retroposition of Alu elements within various primate lineages and the time since the most recent common ancestor [ 26 ]. However, this locus (Ye5AH161) supports the idea of hotspots for the integration of Alu repeats within primate genomes. Future studies on the integration of different SINE elements in syntenic regions of human and rodent genomes may yield new insight into the molecular nature of hotspots for SINE element integration. Genomic deletions created upon LINE-1 retrotransposition have recently been identified using cell culture assays [ 59 ]. The rate of LINE element-mediated deletion in the human genome has been estimated indirectly to be about 3% [ 68 ], or 8–13% through sequencing of variably sized L1HS pre-integration sites in primates [ 69 ]. The precise molecular mechanism of the LINE-mediated genomic deletions is still unclear. Recently, an Alu -mediated deletion that resulted in the inactivation of the human CMP-N-acetylneuraminic acid hydroxylase gene [ 70 ], as well as Alu -mediated deletions of noncoding genomic sequences [ 71 ], have been identified. Here we report two new examples of Alu retroposition-mediated deletions that may have happened by a mechanism similar to that of the LINE element-mediated genomic deletions, since Alu and L1 elements utilize a common mobilization pathway [ 6 , 8 , 72 ]. In both cases, Alu Ye5AH24 and Alu Ye5AH27, the deletion appears to have occurred after the separation of humans, chimpanzees and gorillas from orangutan and siamang, during the process of gene conversion, similar to the lineage-specific Alu deletions reported previously [ 70 , 71 ]. Here, we have estimated the frequency of Alu retroposition-associated genomic deletions as approximately 1.67%. The size of the deleted sequences was over 300 bp on average. New Alu integrations have been estimated to occur in vivo at a frequency of one new event in every 10 to 200 births [ 12 ]. If sizable deletions accompany one in every 100 new Alu retroposition events in vivo, the genomic impact of these events could be substantial. This is not a trivial number of deletions when extrapolated to the copy number of Alu elements in the human genome, which is over one million [ 2 ].
Approximately 16,700 Alu elements may have been involved in retroposition-mediated deletion events within primate genomes. If each of these deletion events removes an average of 300 bp of genomic sequence, this would mean that Alu retroposition mediates the deletion of about 5 Mb of primate genomic sequence. However, if the Alu -associated deletions have involved larger sequences, similar to those recently reported for LINE elements [ 59 ], then the impact of these events may be 50–500 Mb of lineage-specific deletions. In either case, these types of events represent a novel mechanism of lineage-specific deletion within the primate order. Detailed studies of the orthologous regions of primate genomes deleted in this manner may prove instructive for understanding the genetic basis of the differences between humans and non-human primates. Conclusion The Alu Ye lineage has had an extended history of expansion in the human lineage. Its expansion appears to have begun soon after the divergence of the hominoids from the remainder of the catarrhine primates and to have proceeded at a relatively low level since then. Extended periods of relatively low levels of retrotransposition may allow some mobile elements to retain duplication capability for long periods of time. Despite a relatively low level of retrotransposition, the Alu Ye lineage has contributed to the architecture of the human genome through insertion mutations, retrotransposition-associated genomic deletions, and gene conversion. Methods Computational analysis To identify Alu Ye elements in the draft sequence of the human genome (August 6, 2001, UCSC GoldenPath assembly), we used Basic Local Alignment Search Tool (BLAST) [ 54 ] queries of the draft sequence to identify exact complements to the oligonucleotide 5'-GAACCCCGGGGGGCGGAGCCTGCAG-3' that is diagnostic for the Ye lineage, as shown in Fig. 1. All of the exact complements to the oligonucleotide queries, along with 1000 bp of adjacent flanking unique DNA sequence, were excised and stored as unique files and subjected to additional analysis as outlined previously [ 47 - 49 ] (a minimal sketch of such a screen is given at the end of this section). A complete list of all the Alu elements identified in the searches is located in Additional file 2. DNA samples and PCR amplification Oligonucleotide primers and PCR amplification conditions for each of the Alu Ye lineage loci analyzed were as previously described [ 47 - 49 ], with the primers and annealing temperatures shown in Additional file 2. Diverse human DNA samples were available from previous studies [ 47 - 49 ]. The cell lines used to isolate DNA samples were as follows: chimpanzee ( Pan troglodytes ), WES (ATCC CRL1609); gorilla ( Gorilla gorilla ) lowland gorilla Coriell AG05251B, Ggo-1 (primary gorilla fibroblasts) provided by Dr. Stephen J. O'Brien, National Cancer Institute, Frederick, MD, USA; bonobo ( Pan paniscus ) Coriell AG05253A; orangutan ( Pongo pygmaeus ) ATCC CRL6301; green monkey ( Chlorocebus aethiops ) ATCC CCL70 (old world monkey); and owl monkey ( Aotus trivirgatus ) OMK (OMKidney) ATCC CRL 1556 (new world monkey). Cell lines were maintained as directed by the source and DNA isolations were performed using Wizard genomic DNA purification (Promega). DNA samples from peripheral lymphocytes or tissue were prepared from the gibbon ( Hylobates lar ) and siamang ( Hylobates syndactylus ).
Additional non-human primate DNA samples ( Pan troglodytes, Pan paniscus, Gorilla gorilla, Pongo pygmaeus, Macaca mulatta (old world monkey), Macaca nemestrina (old world monkey), Saguinus labiatus (new world monkey), Lagothrix lagotricha (new world monkey), Ateles geoffroyi (new world monkey) and Lemur catta (prosimian)), available as a primate phylogenetic panel (PRP00001), were purchased from the Coriell Institute for Medical Research. Sequence analysis DNA sequencing was performed on gel-purified PCR products that had been cloned using the TOPO TA cloning vector (Invitrogen), using chain termination sequencing [ 73 ] on an Applied Biosystems 3100 automated DNA sequencer. The sequences of the orthologous loci (that contained a paralogous Alu element) have been assigned accession numbers AY849282-AY849301. Sequence alignments of the Ye lineage subfamily members were performed using MegAlign software (DNAStar version 3.1.7 for Windows 3.2). The ages for each of the Alu Ye subfamilies were calculated using mutation densities as previously described [ 43 , 47 - 49 , 65 ], with rates suggested by Xing et al. [ 56 ]. Authors' contributions AS performed all experimental work for the project, shared in the analysis and interpretation of the results and wrote the first draft of the manuscript. DAR provided assistance with analysis and interpretation of the data and in preparing the manuscript for submission. DJH wrote the software used to extract Ye elements and the associated flanking sequences from the human genome draft sequence. JJ provided assistance with the analysis and interpretation of the data and input on late drafts of the manuscript. MAB provided the initial input for the project as well as valuable input on each draft of the manuscript. Supplementary Material Additional File 1 This supplemental file contains a sequence alignment for locus Ye5AH161 in FASTA format. Additional File 2 This supplemental table lists all Alu Ye elements recovered, with information on PCR conditions, chromosomal location and phylogenetic origin. It is in Microsoft Word format.
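As promised above, the computational screen described in the Methods reduces to a simple exact-match scan. The sketch below finds exact matches to the Ye diagnostic oligonucleotide (or its reverse complement) in a genome sequence and excises 1000 bp of flanking sequence around each hit. It is an illustration standing in for the BLAST queries actually used; the function names are ours.

```python
# Exact-match screen for the Ye-diagnostic oligonucleotide given in the
# Methods, with 1000 bp of flanking sequence excised around each hit.
# Assumes the genome string is uppercase A/C/G/T.

OLIGO = "GAACCCCGGGGGGCGGAGCCTGCAG"  # Ye-diagnostic oligo from the Methods
FLANK = 1000

def revcomp(seq):
    """Reverse complement of a DNA string."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def find_ye_candidates(genome, oligo=OLIGO, flank=FLANK):
    """Yield (position, strand, flanked_sequence) for each exact hit
    to the oligo on either strand."""
    for strand, probe in (("+", oligo), ("-", revcomp(oligo))):
        pos = genome.find(probe)
        while pos != -1:
            lo = max(0, pos - flank)
            hi = min(len(genome), pos + len(probe) + flank)
            yield pos, strand, genome[lo:hi]
            pos = genome.find(probe, pos + 1)
```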
539301 | Protocol for a randomised controlled trial of a decision aid for the management of pain in labour and childbirth [ISRCTN52287533] | Background Women report fear of pain in childbirth and often lack complete information on analgesic options prior to labour. Preferences for pain relief should be discussed before labour begins. A woman's antepartum decision to use pain relief is likely influenced by her cultural background, friends, family, the media, literature and her antenatal caregivers. Pregnant women report that information about analgesia was most commonly derived from hearsay and least commonly from health professionals. Decision aids are emerging as a promising tool to assist practitioners and their patients in evidence-based decision making. Decision aids are designed to assist patients and their doctors in making informed decisions using information that is unbiased and based on high quality research evidence. Decision aids are non-directive in the sense that they do not aim to steer the user towards any one option, but rather to support decision making which is informed and consistent with personal values. Methods/design We aim to evaluate the effectiveness of a Pain Relief for Labour decision aid, with and without an audio-component, compared to a pamphlet in a three-arm randomised controlled trial. Approximately 600 women expecting their first baby and planning a vaginal birth will be recruited for the trial. The primary outcomes of the study are decisional conflict (uncertainty about a course of action), knowledge, anxiety and satisfaction with decision-making and will be assessed using self-administered questionnaires. The decision aid is not intended to influence the type of analgesia used during labour, however we will monitor health service utilisation rates and maternal and perinatal outcomes. This study is funded by a competitive peer-reviewed grant from the Australian National Health and Medical Research Council (No. 253635). Discussion The Pain Relief for Labour decision aid was developed using the Ottawa Decision Support Framework and systematic reviews of the evidence about the benefits and risks of the non-pharmacological and pharmacological methods of pain relief for labour. It comprises a workbook and worksheet and has been developed in two forms – with and without an audio-component (compact disc). The format allows women to take the decision aid home and discuss it with their partner. | Background Patient participation in clinical decision making Making evidence-based decisions in clinical practice is not always straightforward: patients and their healthcare providers may need to weigh up the evidence between several comparable options, the evidence for some treatments may be inconclusive, and the information needs to be tailored to each patient's clinical context and personal preferences [ 1 , 2 ]. Good medical decision making should take into account the best available evidence, along with patients' preferences and values [ 3 ]. However, finding effective and efficient mechanisms for doing this in the clinical setting is a challenge. To assist patients and their doctors in making informed decisions, information must be unbiased and based on current, high quality, quantitative research evidence. However, patient information materials are often outdated, inaccurate, omit relevant data, fail to give a balanced view and ignore uncertainties and scientific controversies [ 4 , 5 ]. 
It is increasingly evident that the provision of patient and provider information alone, even if evidence-based, is not sufficient to influence health outcomes and behaviour [ 6 ]. It is only when mechanisms are provided that tailor this information to the individual patient that health outcomes related to treatment decisions are positively affected [ 7 ]. With this in mind, decision aids are emerging as a promising tool to assist practitioners and their patients in evidence-based decision making [ 1 ]. Decision Aids Decision aids are "interventions designed to help people make specific and deliberative choices among options by providing (at minimum) information on the options and outcomes relevant to the person's health status" [ 1 ]. Additional strategies may include providing: information on the condition; the probabilities of outcomes tailored to a person's health risk factors; an explicit values clarification exercise; examples of others' decisions; and guidance in the steps of decision making [ 1 ]. Decision aids are non-directive in the sense that they do not aim to steer the user towards any one option, but rather to support decision making which is informed, consistent with personal values and acted upon [ 1 ]. Decision aids have been found to improve patient knowledge and create more realistic expectations, to reduce decisional conflict (uncertainty about the course of action) and to stimulate patients to be more active in decision making without increasing anxiety [ 1 ]. Internationally, decision aids have been evaluated in a variety of health and clinical settings. Although their use in pregnancy and birth has only just begun to be explored, this is an area in which consumers are known to want to participate actively in decision making [ 8 ]. A survey of 790 Australian women reported a tenfold increase in dissatisfaction among women who did not have an active say in decisions about pregnancy care [ 8 ]. Similarly, in the UK, women rated the explanation of procedures, including the risks, before they are carried out and involvement in decision making as most important to satisfaction with care [ 9 ]. Significantly, neither obstetricians nor midwives appreciated the importance to women of "being told the major risks for each procedure" [ 9 ]. Our own survey of pregnant women attending an antenatal clinic found that overwhelmingly women wanted to be involved in decisions regarding their pregnancy care, and this was regardless of age, parity, education or delivery preferences [ 10 ]. Labour pain The pain of labour is a central part of women's experience of childbirth and is a constant feature of antenatal discussion groups [ 11 ]. Most women giving birth use some methods of pain relief (pharmacologic and/or non-pharmacologic) during labour. In Australia, 92% of primiparas and 71% of multiparas use analgesic agents during labour [ 12 ]. Significantly, there have been more clinical trials of pharmacological pain relief during labour and childbirth than of any other intervention in the perinatal field [ 13 ]. However, satisfaction with childbirth is not necessarily contingent upon the absence of pain [ 14 ]. Many women are willing to experience pain in childbirth but do not want pain to overwhelm them.
The Royal College of Obstetricians and Gynaecologists (RCOG) makes the following evidence-based recommendations [ 15 ]:
• Continuous caregiver support from a single individual should be available to women in labour
• Midwives must involve women in decisions about analgesia and recognise the value of promoting personal control
• Maternity services should ensure access to written and verbal information on pain relief and should support women in their choices for pain relief
• Maternity services should respect women's wishes to have some control over their pain relief
• Improved public information and data on pain and analgesia
In Australia over 250,000 women give birth annually and the increasing use of epidural analgesia means some 75,000 women have an epidural in labour each year [ 16 ]. Among primiparas in NSW, the epidural rate increased from 25% in 1990 to 42% in 2000, but was as high as 74% in hospitals with greater availability of epidurals [ 12 ]. Other pharmacologic methods of pain relief used by primiparas include opioids (36%) and nitrous oxide (55%) [ 12 ]. Pharmacologic methods of pain relief in labour and childbirth Randomised controlled trials have shown epidural analgesia provides the most efficacious pain relief for labour, but the adverse consequences include prolonged labour, restricted mobility, use of oxytocin augmentation and an increased incidence of instrumental delivery [ 17 , 18 ]. Consequences of instrumental delivery at 6 months postpartum include perineal pain (54%), urinary incontinence (18%), bowel problems (19%), haemorrhoids (36%) and sexual problems (39%) [ 19 ]. Further, the complications of epidurals can include unsatisfactory analgesia, dural-puncture headache, hypotension, nausea/vomiting, fever, localised backache, shivering, pruritus and urinary retention [ 18 ]. Randomised trials show that, although not as effective as epidural analgesia, inhalational analgesia (e.g. 50% nitrous oxide in oxygen) and systemic opioid analgesics (e.g. pethidine) can provide modest benefit to some patients during labour or supplement an unsatisfactory epidural [ 13 ]. Both these methods can cause nausea, vomiting and dizziness, and additionally opioid side-effects may include orthostatic hypotension, delayed stomach emptying and respiratory depression in the baby [ 13 ]. Non-pharmacologic methods of pain relief in labour and childbirth A number of women prefer to avoid pharmacological analgesia if possible [ 20 ]. The wish to maintain personal control during labour and birth, the desire to participate fully in the experience, and concerns about untoward effects of medications during labour are among the factors that influence their attitude [ 20 ]. Non-pharmacological methods of pain relief include maternal movement and position changes, superficial heat and cold, immersion in water*, massage, acupuncture/acupressure, transcutaneous electrical nerve stimulation (TENS)*, aromatherapy, attention focussing, hypnosis*, music/audioanalgesia* and continuous caregiver support*. Only a few of these methods (marked*) have been assessed in randomised trials [ 20 - 22 ]. Only continuous caregiver support resulted in reduced analgesia requirements (as well as reduced length of labour and incidence of operative delivery). Although the other interventions trialled did not reduce the use of pharmacologic analgesia, they were well liked by women and had few side effects. Decision making and pain in labour Women report fear of pain in childbirth and often lack complete information on analgesic options prior to labour [ 11 ].
For example, a Royal Australian and New Zealand College of Obstetricians and Gynaecologists brochure on 'Epidural and Spinal Anaesthesia' reports the advantages of epidurals but does not mention any possible adverse outcomes or complications [ 23 ]. While written informed consent is required for epidural analgesia, it is not required for other analgesic options. Further, the consent for epidural (covering only the procedure and complications) is obtained by the anaesthetist at the time of the procedure – by which time most women are already distressed [ 24 ]. Dickerson stresses the importance of discussing preferences for pain relief before labour begins [ 13 ]. A woman's antepartum decision to use pain relief is likely influenced by her cultural background, friends, family, the media, literature and her antenatal caregivers [ 25 ]. A survey of Australian women found that antepartum information about analgesia was most commonly derived from hearsay and least commonly from health professionals [ 26 ]. Antenatally, 82% of women wish to see how labour progresses and only want analgesia when pain becomes severe or intolerable [ 14 ]. Antenatal plans for analgesia are strongly associated with use: 96% of women who definitely planned to have an epidural received one [ 25 ]. The management of pain in labour is a clinical decision that fulfils Eddy's criteria for a decision in which patients' values and preferences should be included [ 2 ]. The outcomes for analgesia options, and women's preferences for the relative value of benefits compared to risks, are variable and could result in decisional conflict. For such a clinical decision, a decision aid would be expected to improve patient knowledge and create realistic expectations, to reduce decisional conflict and to stimulate patients to be more active in decision making without increasing anxiety [ 1 ]. Leap has suggested a 'working with pain' framework for managing labour and childbirth in a positive context [ 11 ]. This framework, which aims to develop an understanding of 'normal pain' as part of the process of labour, rather than the absolute amelioration of pain, has been recommended by the Royal College of Obstetricians and Gynaecologists. Development of a decision aid on the management of pain during labour During 2003 and 2004, we developed an evidence-based decision aid about the management of pain in labour for women having their first baby. This followed a needs assessment that collected data on the attitudes, preferences and knowledge of nulliparous women who were making plans about pain relief for labour and childbirth. The needs assessment found that women's knowledge of pain relief options was limited and that these women would benefit from a decision aid for labour analgesia. In developing the decision aid we utilised the NHMRC guideline "How to prepare and present information for consumers of health services" [ 27 ] and the Ottawa framework established and rigorously tested by the Ottawa Health Decision Center [ 28 ]. The decision aid was developed to incorporate a workbook (with and without a complementary audio-component as a compact disc) and a worksheet. The workbook highlights key points (similar to a slide presentation) and the audio component connects these points in a narrative format, providing more detail than the workbook.
The worksheet is a one-page sheet to be completed by the woman to record her decision-making steps, to list any questions she needs answered before deciding, and to encourage her to discuss her plans with her labour care providers. Most importantly, the decision aid is intended to be non-directive in that it does not aim to steer the user towards any one option or increase or decrease intervention rates but rather to act as an adjunct to care. The decision aid was designed for women to use at home or in the clinical setting, and takes about 30 minutes to complete. After working through the decision aid, women should take the completed worksheet to their next antenatal appointment to discuss their preferences with their health care provider. The worksheet is also useful for the practitioner, who can see rapidly from it what evidence the patient has considered, what her values and preferences are and which way she is leaning in her preferences for analgesia during labour. The decision aid was developed, pilot tested and revised with extensive consumer involvement, as outlined in the NHMRC guideline on preparing information for consumers [ 27 ]. The content of the decision aid was largely driven by consumers' questions and information needs as determined from the focus groups and from the process of drafting, pilot testing and re-drafting. A number of draft decision aids (including workbook, audio transcript, and worksheet) were developed and each subjected to pilot testing and revision as we obtained feedback. The process of testing and revising started with the study project group. The next phase included a review by a group of national and international content experts, including decision aid experts, obstetricians, midwives, perinatal epidemiologists, parent educators and psychologists. Once we were convinced that the content was accurate, the decision aid was pilot-tested amongst consumers. There were several rounds of consumer review and refinement. Initially we aimed to compare the Decision Aid (workbook and audio-component) with usual care and counselling; however, preliminary work led us to alter our original study design. We could find no studies that compared Decision Aids with and without an audio-component. As the audio-component adds considerable complexity to the development and cost of the Decision Aid we decided to have two intervention arms: a Decision Aid with an audio-component and a Decision Aid without an audio-component. Further, in pilot testing we found that women in the usual care arm were disappointed not to receive any information. Thus, to minimise refusals and losses to follow-up we decided to issue the women in the control group with a pamphlet called "Pain relief during childbirth – A guide for women". This pamphlet is published by the Royal Australian and New Zealand College of Obstetricians and Gynaecologists, is publicly available and includes information about methods of pain relief during labour [ 29 ]. These changes to the study protocol were approved by the institutional ethics committee prior to commencement of the trial. Methods/design 1. Specific Aim To compare the relative effectiveness of the Pain Relief for Labour Decision Aid with a pamphlet on women's decisional conflict, knowledge, expectations, satisfaction with decision making and anxiety, and examine its impact on service utilisation and perinatal outcomes (as secondary outcomes). 2.
Hypotheses The primary study hypotheses are: Use of the Pain Relief for Labour Decision Aid by women expecting their first baby: 1. Reduces decisional conflict (uncertainty about the course of action) 2. Increases knowledge of labour analgesia 3. Increases satisfaction with their decision making 4. Reduces anxiety. The secondary hypotheses of the study are: Use of the Pain Relief for Labour Decision Aid by women expecting their first baby will not influence: 1. The type of analgesia women use for labour 2. Maternal and infant outcomes. 3. Study design We will conduct a randomised trial with the following study groups to assess the impact of the decision aid: Group 1: The pamphlet, "Pain relief during childbirth – A guide for women" [ 29 ] Group 2: Decision aid with an audio-component Group 3: Decision aid without an audio-component 4. Setting An Australian tertiary obstetric hospital with a full range of non-drug and anaesthetic options for pain relief in labour. Epidurals are available 24 hours a day from anaesthetic staff designated to the labour ward. All forms of antenatal care (clinic, birth centre, private, shared care with a family physician) will be included in the study. 5. Participants/eligibility criteria Primiparous women in late pregnancy (≥36 weeks gestation) who are expecting to have a vaginal birth of a single infant will be eligible for the study. Primiparous women were selected because previous pregnancies have a strong impact on decision making and analgesia use in labour [ 14 , 16 ]. Exclusions include women who will not have any choice about analgesia, for example planned caesarean section (e.g. breech, placenta praevia, HIV), planned epidural (e.g. symptomatic heart disease), contraindications to analgesia (e.g. drug sensitivities, anticoagulants, thrombocytopaenia). The decision aid was produced in English and designed to be simple and accessible for women with low levels of literacy. 6. Procedures, recruitment, randomisation and collection of baseline data The study procedure draws on the usual schedule of weekly antenatal visits in late pregnancy (Figure 1 ). We plan a pragmatic approach to assess the decision aid under the conditions most likely to be applied in practice. A research nurse will ask eligible women to participate, explain the trial and obtain informed consent, collect baseline data and randomly allocate women (using telephone randomisation) to one of the study groups. This is only a minor deviation from current practice. As women of child-bearing age are known to be very mobile, participants will be asked to provide alternative contact details (e.g. friend or relative) to enhance subsequent follow-up. Private obstetricians will be asked to offer participation in the study to their patients. Those interested will be requested to come to the antenatal clinic for recruitment and randomisation. The private obstetrician will provide standard care. Flyers and posters will be prepared to inform women of the study and will be distributed through family physicians and obstetricians as well as the clinics. Figure 1 Schema of Pain Relief for Labour Decision Aid trial Brief baseline data will be collected to assess comparability of the study groups. The baseline assessment will include age, brief socio-demographic data, highest level of education achieved, anxiety as assessed by the state component of the short Spielberger anxiety scale [ 30 ], and information sources about labour analgesia.
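Section 6 specifies telephone randomisation to the three study groups but not the underlying allocation algorithm. Purely as an illustration of how such a service might keep the arms balanced during recruitment, the Python sketch below implements permuted-block randomisation; the block size, seed and group labels are assumptions of this example, not details taken from the protocol.

```python
import random

def permuted_block_schedule(n_participants,
                            groups=("pamphlet", "decision aid + audio", "decision aid"),
                            block_size=6, seed=2004):
    """Generate a permuted-block allocation schedule.

    Every block contains each group an equal number of times, so arm
    sizes can never drift apart by more than one block during recruitment.
    """
    assert block_size % len(groups) == 0, "block size must be a multiple of the number of groups"
    rng = random.Random(seed)
    schedule = []
    while len(schedule) < n_participants:
        block = list(groups) * (block_size // len(groups))
        rng.shuffle(block)           # random order within each block
        schedule.extend(block)
    return schedule[:n_participants]

# Allocations for the first 12 women enrolled:
for i, arm in enumerate(permuted_block_schedule(12), start=1):
    print(i, arm)
```

With a block size of 6, every consecutive run of six allocations contains each arm exactly twice, which keeps group sizes close to the roughly 200-per-arm target throughout recruitment.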
7. Intervention The aim of the decision aid is to assist preference elicitation, and not to influence the direction of the decision taken. Women in each study group will be given the opportunity to review the intervention they are allocated (decision aid or pamphlet) while in the antenatal clinic and/or to take home, whichever is most convenient. Many women will also want to discuss their preferences with their partner. At the next antenatal visit, women will be contacted by the research nurse to discuss the information materials and any questions they may have had. 8. Follow-up i) First follow-up questionnaire All participants will be given a follow-up questionnaire prior to their next antenatal consultation. (See Outcome Measures below). ii) Midwife questionnaire After a study participant delivers, the midwife who provided the labour care will complete a brief questionnaire to assess the impact of the decision aid on the management of labour analgesia. Information will also be collected on caregiver support in labour, birthplace (delivery suite or birth centre), use of non-drug analgesic options and stage of labour at admission. iii) Second follow-up questionnaire At 12–16 weeks postpartum all participants will be mailed a second follow-up questionnaire. This will assess women's satisfaction with the decisions made and the decision-making processes. (See Outcome Measures below). Questionnaires will be mailed with reply paid envelopes, with up to two reminder telephone prompts to non-responders. iv) Qualitative follow-up We will conduct in-depth interviews to explore the impact of the decision aid on women's experiences in labour and childbirth. A sub-sample of 30 women will be purposively selected, to reflect heterogeneity of experience of labour. The interviews will provide an understanding of the complexities of analgesic preferences, management, expectations, satisfaction, and psychological health following delivery. These data will enable examination of unpredicted and subtle effects of the decision aid on psychosocial outcomes that may not be captured using quantitative methods. Interviews will be face-to-face and conducted in women's homes or at a clinic, according to participants' preferences. Interviews will be recorded and transcribed. Data will be analysed using thematic analysis. 9. Blinding and contamination As with many obstetric interventions, blinding is virtually impossible. The main outcomes of this study are self-reported and the women are clearly not blinded to their treatment allocation. However, we will institute a number of measures aimed at keeping antenatal staff blind to the treatment allocation and preventing contamination of the control group: • Women will review the decision aid with the research nurse and complete the first questionnaire (primary outcome measures) prior to their next antenatal consultation • Usual antenatal care providers will be blinded to the exact content and format of the decision aid • Regular in-service (educational training) for the antenatal care providers to explain the trial protocol and to make clear the potential effect of unmasking or contamination. • Monitoring decision aid distribution and keeping the decision aids locked up, accessible only to the research nurse • Asking participants not to reveal their treatment allocation, or share their decision aid material with antenatal staff or other women. If participants do not want to keep their decision aid they will be asked to return it.
10. Outcome measures Primary outcomes The primary outcomes of this study will be: Decisional conflict (uncertainty about which preference to choose) will be assessed by the Decisional Conflict Scale, which has established reliability, good psychometric properties and is short (16 items) [ 31 ]. It has been used to evaluate a range of decision aids [ 1 ]. Measures of knowledge and realistic expectations about labour analgesia options and the benefits and risks of these options will be specific to this project. Thus we will need to develop and test these measures as part of the project. Anxiety will be measured by the state component of the short Spielberger anxiety scale, which has been extensively used and validated [ 30 , 32 ]. We do not anticipate the decision aid will increase women's anxiety but it is important to document any changes in anxiety associated with the decision aid. Satisfaction with analgesia decisions will be assessed using the Satisfaction with Decision Scale – a very brief six-item scale with high reliability, developed specifically to assess satisfaction with health care decisions [ 33 ]. Satisfaction with the decision and anxiety will be measured again at 12–16 weeks postpartum. This interval was chosen to avoid the potential bias arising from questioning women still in the hospital, who may feel that a critical appraisal is disloyal to their caregivers and whose opinions have been shown to be more positive and short-lived than those obtained further out from the birth itself [ 34 ]. At that time we will also ask about exposure to the decision aid (to assess contamination), support during labour and use of pain relief methods prior to hospital admission. These issues will be further explored in the sample selected for in-depth interview. Secondary outcomes Service utilisation outcomes The aim of the decision aid is to assist preference elicitation, and not to influence the direction of the decisions taken. Nevertheless, it is important to collect service utilisation and pregnancy outcome data so we will record and compare the pain relief methods used by women in all arms of the study, as well as recording and comparing rates of pregnancy complications and perinatal outcomes. The latter will be obtained (with informed consent) from the existing computerised obstetric database and include: medical or obstetric complications, induction or augmentation of labour, mode of delivery (vaginal, emergency or planned CS), enrolment to delivery interval, gestational age, birthweight, Apgar scores, perinatal deaths, Neonatal Intensive Care Unit admission and length of stay. 11. Statistical issues Sample size The planned sample size is 600 women, with approximately 200 women to be recruited to each arm of the trial. Based on data for 2001 from the tertiary obstetric hospital where the study will be conducted, about 1500 primiparous women give birth to singleton infants after 36 weeks gestation each year and 92% use some form of analgesia. We anticipate that at least 50% of women will be both eligible and willing to participate. The sample size calculations for the trial (significance 0.05, power 0.8) are based on the mean difference in the decisional conflict scale between any two arms of the trial. The effect of decision aids on this scale is documented and effect size data are available [ 1 ].
Meta-analysis of four randomised controlled trials that compared a decision aid to a pamphlet and reported a mean difference in decisional conflict gives a pooled mean difference of -4.35, 95% CI -6.8 to -1.9 (on a scale ranging from 0 lowest to 100 highest decisional conflict; median standard deviation 13.0) [ 35 - 38 ]. Assuming a mean difference of -4.35 and standard deviation 13.0, we will need about 141 women in each arm of the trial to demonstrate a difference in decisional conflict. Approximately 20% of primiparous women have a caesarean section (6% before labour and 14% after labour has commenced) [ 12 ]. Some of these women will lose their options for analgesia, although some may have extensive use of analgesic agents prior to caesarean section (CS). We plan to conduct an a priori sub-group analysis that excludes women who lose their options for analgesia (defined as a CS planned after randomisation, an emergency CS within 1 hour of arriving in labour or those who receive a therapeutic epidural) as these women may have different satisfaction, anxiety and decisional conflict outcomes. We will inflate the sample size estimate by 20% (from 141 to 169) to ensure sufficient power in the sub-group analyses. A further inflation of 15% for loss to follow-up gives the final sample size of at least 195 women in each arm of the trial (the arithmetic is worked through in the sketch below). If there are no significant differences in outcome for the two decision aid groups (with or without the audio-component), the decision aid groups will be pooled, giving two women with the intervention for each woman in the pamphlet group and thereby increasing the power to detect differences between the decision aid and the pamphlet. Data analysis Analyses will be by intention to treat, including withdrawals and losses to follow-up: first for all women randomised, and then excluding women who lose their options for analgesia. Study groups will be compared in terms of baseline characteristics. As this is a randomised trial, we would anticipate minimal differences in baseline characteristics. If, however, important differences are found, these potential confounders will be adjusted for in the analysis of outcomes. For the primary outcomes, the mean score for each measure for each group will be compared using t-tests. If adjustment for confounders is needed, a multiple linear regression model will be used. The secondary outcomes will be compared using chi-squared tests of significance for categorical data and t-tests for continuous data. If adjustment for confounding is necessary, logistic regression and multiple linear regression will be used, respectively. 12. Ethical considerations This work involves the development of a decision aid for the management of pain in labour and childbirth. Women must decide between a range of non-pharmacological and pharmacologic methods of pain relief. However, this decision must be made in the context of the likely analgesic effects of each option, the risk of complications and adverse obstetric effects, and maternal preference for relief of pain. There are currently no evidence-based materials available. We therefore expect this project to be beneficial for participating women. A systematic review of decision aids found they improved knowledge without increasing anxiety. Nevertheless, we will measure anxiety levels at baseline and follow-up to document any adverse effects. A trained research nurse will interview all women and obtain written informed consent.
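Returning to the sample-size calculation in section 11, the per-arm figures can be reproduced with the standard two-sample normal-approximation formula. The sketch below assumes scipy is available and uses rounding conventions chosen for this example; it recovers the 141, 169 and 195 figures quoted above.

```python
import math
from scipy.stats import norm

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Per-arm sample size for comparing two means (normal approximation):
    n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / delta)^2."""
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for a two-sided 0.05 test
    z_beta = norm.ppf(power)            # 0.84 for 80% power
    return 2 * ((z_alpha + z_beta) * sd / delta) ** 2

base = math.ceil(n_per_arm(delta=4.35, sd=13.0))  # 141 women per arm
subgroup = round(base * 1.20)                     # +20% for sub-group analyses -> 169
final = math.ceil(subgroup * 1.15)                # +15% for loss to follow-up -> 195
print(base, subgroup, final)                      # 141 169 195
```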
Women will be encouraged to discuss any concerns/anxiety with the research nurse and/or with their usual antenatal care provider. Women will be reassured that they are able to withdraw from the study at any time with no adverse effects on their pregnancy management. Participation will require women to complete self-report questionnaires during and after pregnancy. Working through the decision aid will take approximately 30 minutes and review of their preferences or outstanding questions will be at a routine antenatal visit. Therefore, we do not consider this to be an excessive burden on their time. The study has been approved by the Central Sydney Area Health Service Ethics Review Committee (Protocol no. X02-0247) and the University of Sydney Human Ethics Committee (Ref No. 3419). This project is funded by a nationally competitive peer-reviewed grant from the Australian National Health and Medical Research Council (No. 253635). 13. Confidentiality and data security Participants in the trial will be identified by a study number only, with a master code sheet linking names with numbers being held securely and separately from the study data. To ensure that all information is secure, data records will be kept in a secure location at the University of Sydney and accessible only to research staff. As soon as all follow-up is completed the data records will be de-identified. De-identified data will be used for the statistical analysis and all publications will include only aggregated data. The electronic version of the data will be maintained on a computer protected by password. All hard copy patient identifiable data and electronic backup files will be kept in locked cabinets, which are held in a locked room accessed only by security code and limited staff. Data files will be stored for seven years after completion of the project as recommended by the NHMRC. Disposal of identifiable information will be done through the use of designated bags and/or a shredding machine. Competing interests The author(s) declare that they have no competing interests. Authors' contributions CR, CRG, LT and KM were involved in the conception and design of the study. CR, NN and CRG were responsible for the drafting of the protocol. All authors have read and given final approval of the manuscript. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC539301.xml |
545779 | Phylogenomic evidence supports past endosymbiosis, intracellular and horizontal gene transfer in Cryptosporidium parvum | An analysis of Cryptosporidium parvum genes of likely endosymbiont or prokaryotic origin supports the hypothesis that C. parvum evolved from a plastid-containing lineage. | Background Cryptosporidium is a member of the Apicomplexa, a eukaryotic phylum that includes several important parasitic pathogens such as Plasmodium , Toxoplasma , Eimeria and Theileria . As an emerging pathogen in humans and other animals, Cryptosporidium often causes fever, diarrhea, anorexia and other complications. Although cryptosporidial infection is often self-limiting, it can be persistent and fatal for immunocompromised individuals. So far, no effective treatment is available [ 1 ]. Furthermore, because of its resistance to standard chlorine disinfection of water, Cryptosporidium continues to be a security concern as a potential water-borne bioterrorism agent [ 2 ]. Cryptosporidium is phylogenetically quite distant from the hemosporidian and coccidian apicomplexans [ 3 ] and, depending on the molecule and method used, is either basal to all Apicomplexa examined thus far, or is the sister group to the gregarines [ 4 , 5 ]. It is unusual in several respects, notably for the lack of the apicoplast organelle, which is characteristic of all other apicomplexans that have been examined [ 6 , 7 ]. The apicoplast is a relict plastid hypothesized to have been acquired by an ancient secondary endosymbiosis of a pre-alveolate eukaryotic cell with an algal cell [ 8 ]. All that remains of the endosymbiont in Coccidia and Haemosporidia is a plastid organelle surrounded by four membranes [ 9 ]. The apicoplast retains its own genome, but this is much reduced (27-35 kilobases (kb)), and contains genes primarily involved in the replication of the plastid genome [ 10 , 11 ]. In apicomplexans that have a plastid, many of the original plastid genes appear to have been lost (for example, photosynthesis genes) and some genes have been transferred to the host nuclear genome; their proteins are reimported into the apicoplast where they function [ 12 ]. Plastids acquired by secondary endosymbiosis are scattered among eukaryotic lineages, including cryptomonads, haptophytes, alveolates, euglenids and chlorarachnions [ 13 - 17 ]. Among the alveolates, plastids are found in dinoflagellates and most examined apicomplexans but not in ciliates. Recent studies on the nuclear-encoded, plastid-targeted glyceraldehyde-3-phosphate dehydrogenase (GAPDH) gene suggest a common origin of the secondary plastids in apicomplexans, some dinoflagellates, heterokonts, haptophytes and cryptomonads [ 8 , 18 ]. If true, this would indicate that the lineage that gave rise to Cryptosporidium contained a plastid, even though many of its descendants (for example, the ciliates) appear to lack a plastid. Although indirect evidence has been noted for the past existence of an apicoplast in C. parvum [ 19 , 20 ], no rigorous phylogenomic survey for nuclear-encoded genes of plastid or algal origin has been reported. Gene transfers, either intracellular (IGT) from an endosymbiont or organelle to the host nucleus or horizontal (HGT) between species, can dramatically alter the biochemical repertoire of host organisms and potentially create structural or functional novelties [ 21 - 23 ].
In parasites, genes transferred from prokaryotes or other sources are potential targets for chemotherapy due to their phylogenetic distance from the host or the lack of a host homolog [ 24 , 25 ]. The detection of transferred genes in Cryptosporidium is thus of evolutionary and practical importance. In this study, we use a phylogenomic approach to mine the recently sequenced genome of C. parvum (IOWA isolate; 9.1 megabases (Mb)) [ 7 ] for evidence of the past existence of an endosymbiont or apicoplast organelle and of other independent HGTs into this genome. We have detected genes of cyanobacterial/algal origin and genes acquired from other prokaryotic lineages in C. parvum . The fate of several of these transferred genes in C. parvum is explored by expression analyses. The significance of our findings and their impact on the genetic makeup of the parasite are discussed. Results BLAST analyses From BLAST analyses, the genome of Cryptosporidium , like that of Plasmodium falciparum [ 26 ], is more similar overall to those of the plants Arabidopsis and Oryza than to any other non-apicomplexan organism currently represented in GenBank. The program Glimmer predicted 5,519 protein-coding sequences in the C. parvum genome, 4,320 of which had similarity to other sequences deposited in the GenBank nonredundant protein database. A significant number of these sequences, 936 (E-value < 10^-3) or 783 (E-value < 10^-7), had their most significant non-apicomplexan similarity to a sequence from plants, algae, eubacteria (including cyanobacteria) or archaea (Table 1 ). To evaluate these observations further, phylogenetic analyses were performed, when possible, for each predicted protein in the entire genome. Phylogenomic analyses The Glimmer-predicted protein-coding regions of the C. parvum genome (5,519 sequences) were used as input for phylogenetic analyses using the PyPhy program [ 27 ]. In this program, phylogenetic trees for each input sequence are analyzed to determine the taxonomic identity of the nearest neighbor relative to the input sequence at a variety of taxonomic levels, for example, genus, family, or phylum. Using stringent analysis criteria (see Materials and methods), 954 trees were constructed from the input set of 5,519 predicted protein sequences (Figure 1 ). Analysis of the nearest non-apicomplexan neighbor on the 954 trees revealed the following nearest neighbor relationships: eubacterial (115 trees), archaeal (30), green plant/algal (204), red algal (8), and glaucocystophyte (4); other alveolate (61) and other eukaryotes made up the remainder. As some input sequences may have more than one nearest neighbor of interest on a tree, a nonredundant total of 393 sequences were identified with nearest neighbors in the above lineages. Searches of the C. parvum predicted gene set with the 551 P. falciparum predicted nuclear-encoded apicoplast-targeted proteins (NEAPs) yielded 40 significant hits (E-value < 10^-5), 23 of which were also identified in the phylogenomic analyses. A combination of these two approaches identified 410 candidates requiring further detailed analyses. Of these candidates, the majority were eliminated after stringent criteria were applied because of ambiguous tree topologies, insufficient taxonomic sampling, lack of bootstrap support or the presence of clear vertical eukaryotic ancestry (see Materials and methods). Thirty-one genes survived the screen and were deemed to be either strong or likely candidates for gene transfer (Table 2 ).
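The screening pipeline described above reduces thousands of BLAST matches to a shortlist of transfer candidates by applying E-value, similarity and coverage cutoffs before any tree is built. The sketch below is a simplified, hypothetical re-implementation of that filtering step; the `hits` records and field names are illustrative only, since the actual pipeline used PyPhy as described in Materials and methods.

```python
from collections import Counter

# Hypothetical parsed BLAST hits for one predicted C. parvum protein:
# each hit records the subject taxon group, E-value, % similarity, and
# the fraction of the query covered by the alignment.
hits = [
    {"group": "Viridiplantae", "evalue": 1e-25, "similarity": 62.0, "coverage": 0.81},
    {"group": "Cyanobacteria", "evalue": 3e-22, "similarity": 55.0, "coverage": 0.74},
    {"group": "Metazoa",       "evalue": 2e-04, "similarity": 41.0, "coverage": 0.35},
]

def usable_hits(hits, max_evalue=1e-7, min_similarity=50.0, min_coverage=0.60):
    """Keep hits meeting the pre-tree cutoffs quoted in the text
    (60% length coverage, 50% similarity, stringent E-value)."""
    return [h for h in hits
            if h["evalue"] <= max_evalue
            and h["similarity"] >= min_similarity
            and h["coverage"] >= min_coverage]

kept = usable_hits(hits)
# Tally which taxonomic groups dominate the retained hits -- a gene whose
# best non-apicomplexan matches are plant/cyanobacterial is flagged for
# manual phylogenetic analysis, not declared a transfer outright.
print(Counter(h["group"] for h in kept))
```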
Of the 31 recovered genes, several have been previously published or submitted to the GenBank [ 20 ], including those identified as having plant or eubacterial 'likeness' on the basis of similarity searches when the genome sequence was published [ 7 ]. The remaining sequences were further tested to rule out the possibility that they were artifacts ( C. parvum oocysts are purified from cow feces which contain plant and bacterial matter). Two experiments were performed. In the first, nearly complete genomic sequences (generated in a different laboratory) from the closely related species C. hominis were screened using BLASTN for the existence of the predicted genes. Twenty out of 21 C. parvum sequences were identified in C. hominis . The remaining sequence was represented by two independently isolated expressed sequence tag (EST) sequences in the GenBank and CryptoDB databases (data not shown). In the second experiment, genomic Southern analyses of the IOWA isolate were carried out (Figure 2 ) for several of the genes of bacterial or plant origin. In each case, a band of the predicted size was identified (see Additional data file 1). The genes are not contaminants. Genes of cyanobacterial/algal origin Extant Cryptosporidium species do not contain an apicoplast genome or any physical structure thought to represent an algal endosymbiont or the plastid organelle it contained [ 6 , 7 ]. The only possible remaining evidence of the past association of an endosymbiont or its cyanobacterially derived plastid organelle might be genes transferred from these genetic sources to the host genome prior to the physical loss of the endosymbiont or organelle itself. Several such genes were identified. A leucine aminopeptidase gene of cyanobacterial origin was found in the C. parvum nuclear genome. This gene is also present in the nuclear genome of other apicomplexan species ( Plasmodium , Toxoplasma and Eimeria ), as confirmed by similarity searches against ApiDB (see Materials and methods). In P. falciparum , leucine aminopeptidase is a predicted NEAP and possesses an amino-terminal extension with a putative transit peptide. Consistent with the lack of an apicoplast, this gene in Cryptosporidium contains no evidence of a signal peptide and the amino-terminal extension is reduced. Similarity searches of the GenBank nonredundant protein database revealed top hits to Plasmodium , followed by Arabidopsis thaliana , and several cyanobacteria including Prochlorococcus , Nostoc and Trichodesmium , and plant chloroplast precursors in Lycopersicon esculentum and Solanum tuberosum (data not shown). A multiple sequence alignment of the predicted protein sequences of leucine aminopeptidase reveals overall similarity and a shared indel among apicomplexan, plant and cyanobacterial sequences (Figure 3 ). Phylogenetic analyses strongly support a monophyletic grouping of C. parvum and other apicomplexan leucine aminopeptidase proteins with cyanobacteria and plant chloroplast precursors (Figure 4a ). So far, this gene has not been detected in ciliates. Another C. parvum nuclear-encoded gene of putative cyanobacterial origin is a protein of unknown function belonging to the biopterine transporter family (BT-1) (Table 2 ). Similarity searches with this protein revealed significant hits to other apicomplexans (for example, P. falciparum , Theileria annulata , T. 
gondii ), plants ( Arabidopsis , Oryza ), cyanobacteria ( Trichodesmium , Nostoc and Synechocystis ), a ciliate ( Tetrahymena ) and the kinetoplastids ( Leishmania and Trypanosoma ). Arabidopsis thaliana apparently contains at least two copies of this gene; the protein of one (accession number NP_565734) is predicted by ChloroP [ 28 ] to be chloroplast-targeted, suggestive of its plastid derivation. The taxonomic distribution and sequence similarity of this protein with cyanobacterial and chloroplast homologs are also indicative of its affinity to plastids. Only one gene of algal nuclear origin, glucose-6-phosphate isomerase (G6PI), was identified by the screen described here. Several other algal-like genes are probable, but their support was weaker (Table 2 ). A 'plant-like' G6PI has been described in other apicomplexan species ( P. falciparum , T. gondii [ 29 ]) and a 'cyanobacterial-like' G6PI has been described in the diplomonads Giardia intestinalis and Spironucleus and the parabasalid Trichomonas vaginalis [ 30 ]. Figure 4b illustrates these observations nicely. At the base of the tree, the eukaryotic organisms Giardia , Spironucleus and Trichomonas group with the cyanobacterium Nostoc , as previously published. In the midsection of the tree, the G6PI of apicomplexans and ciliates forms a well-supported monophyletic group with the plants and the heterokont Phytophthora . The multiple protein sequence alignment of G6PI identifies several conserved positions shared exclusively by apicomplexans, Tetrahymena , plants and Phytophthora . This gene does not contain a signal or transit peptide and is not predicted to be targeted to the apicoplast in P. falciparum . The remainder of the tree shows a weakly supported branch including eubacteria, fungi and several eukaryotes. The eukaryotes are interrupted by the inclusion of G6PI from the eubacterial organisms Escherichia coli and Cytophaga . This relationship of E. coli G6PI and eukaryotic G6PI has been observed before and may represent yet another gene transfer [ 31 ]. Genes of eubacterial (non-cyanobacterial) origin Our study identified HGTs from several distinct sources, involving a variety of biochemical activities and metabolic pathways (Table 2 ). Notably, the nucleotide biosynthesis pathway contains at least two previously published, independently transferred genes from eubacteria. Inosine 5' monophosphate dehydrogenase (IMPDH), an enzyme for purine salvage, was transferred from ε-proteobacteria [ 32 ]. Another enzyme involved in pyrimidine salvage, thymidine kinase (TK), is of α or γ-proteobacterial ancestry [ 25 ]. Another gene of eubacterial origin identified in C. parvum is tryptophan synthetase β subunit ( trpB ). This gene has been identified in both C. parvum and C. hominis , but not in other apicomplexans. The relationship of C. parvum trpB to proteobacterial sequences is well-supported as a monophyletic group by two of the three methods used in our analyses (Figure 4c ). Other HGTs of eubacterial origin include the genes encoding α-amylase and glutamine synthetase and two copies of 1,4-α-glucan branching enzyme, all of which are overwhelmingly similar to eubacterial sequences. α-amylase shows no significant hit to any other apicomplexan or eukaryotic sequence, suggesting a unique HGT from eubacteria to C. parvum . Glutamine synthetase is a eubacterial gene found in C. parvum and all apicomplexans examined. 
The eubacterial affinity of the apicomplexan glutamine synthetase is also demonstrated by a well supported (80% with maximum parsimony) monophyletic grouping with eubacterial homologs (data not shown). The eubacterial origin of 1,4-α-glucan branching enzyme is shown in Figure 5 . Each copy of the gene is found in a strongly supported monophyletic group of sequences derived only from prokaryotes (including cyanobacteria) and one other apicomplexan organism, T. gondii . It is possible that these genes are of plastidic origin and were transferred to the nuclear genome before the divergence of C. parvum and T. gondii ; the phylogenetic analysis provides little direct support for this interpretation, however. Mode of acquisition We examined the transferred genes for evidence of non-independent acquisition, for example, blocks of transferred genes or evidence that genes were acquired together from the same source. Examination of the chromosomal location of the genes listed in Table 2 demonstrates that the genes are currently located on different chromosomes and in most cases do not appear to have been transferred or retained in large blocks. There are two exceptions. The trpB gene and the gene for aspartate ammonia ligase are located 4,881 base-pairs (bp) apart on the same strand of a contig for chromosome V; there is no annotated gene between these two genes. Both genes are of eubacterial origin and are not found in other apicomplexan organisms. While it is possible that they have been acquired independently with this positioning, or later came to have this positioning via genome rearrangements, it is interesting to speculate that these genes were acquired together. The origin of trpB is proteobacterial. The origin of aspartate ammonia ligase is eubacterial, but not definitively of any particular lineage. In the absence of genome sequences for all organisms, throughout all of time, exact donors are extremely difficult to assess and inferences must be drawn from sequences that appear to be closely related to the actual donor. In the second case, C. parvum encodes two genes for 1,4-α-glucan branching enzymes. Both are eubacterial in origin and both are located on chromosome VI, although not close together. They are approximately 110 kb apart and many intervening genes are present. The evidence that these genes were acquired together comes from the phylogenetic analysis presented in Figure 5 . The duplication that gave rise to the two 1,4-α-glucan branching enzymes is old, and is well supported by the tree shown in Figure 5 . A number of eubacteria (11), including cyanobacteria, contain this duplication. The 1,4-α-glucan branching enzymes of C. parvum and T. gondii represent one copy each of this ancient duplication. This suggests that the ancestor of C. parvum and T. gondii acquired the genes after they had duplicated and diverged in eubacteria. Expression of transferred genes Each of the genes identified in the above analyses (Table 2 ) appears to be an intact non-pseudogene, suggesting that these genes are functional. To verify the functional status of several of the transferred genes, semi-quantitative reverse transcription PCR (RT-PCR) was carried out to characterize their developmental expression profile. Each of the RNA samples from C. parvum -infected HCT-8 cells was shown to be free of contaminating C. parvum genomic DNA by the lack of amplification product from a reverse transcriptase reaction sham control. RT-PCR detected no signals in cDNA samples from mock-infected HCT-8 cells. 
On the other hand, RT-PCR product signals were detected in the C. parvum -infected cells of six independent time-course experiments for each of the genes examined (those for G6PI, leucine aminopeptidase, BT-1, a calcium-dependent protein kinase, tyrosyl-tRNA synthetase, dihydrofolate reductase-thymidylate synthase (DHFR-TS)). The expression profiles of the acquired genes show that they are regulated and differentially expressed throughout the life cycle of C. parvum in patterns characteristic of other non-transferred genes (Figure 6 ). A small published collection of 567 EST sequences for C. parvum is also available. These ESTs were searched with each of the 31 candidate genes surviving the phylogenomic screen. Three genes - aspartate ammonia ligase, BT-1 and lactate dehydrogenase - are expressed, as confirmed by the presence of an EST (Table 2 ). Discussion A genome-wide search for intracellular and horizontal gene transfers in C. parvum was carried out. We systematically determined the evolutionary origins of genes in the genome using phylogenetic approaches, and further confirmed the existence and expression of putatively transferred genes with laboratory experiments. The methodology adopted in this study provides a broad picture of the extent and the importance of gene transfer in apicomplexan evolution. The identification of gene transfers is often subject to errors introduced by methodology, data quality and taxonomic sampling. The phylogenetic approach adopted in this study is preferable to similarity searches [ 33 , 34 ] but several factors, including long-branch attraction, mutational saturation, lineage-specific gene loss and acquisition, and incorrect identification of orthologs, can distort the topology of a gene tree [ 35 , 36 ]. Incompleteness in the taxonomic record may also lead to false positives for IGT and HGT identification. In our study, we have attempted to alleviate these factors, as best as possible, by sampling the GenBank nonredundant protein database, dbEST and organism-specific databases and by using several phylogenetic methods. Still, these issues remain a concern for this study as the taxonomic diversity of unicellular eukaryotes is vastly undersampled and studies are almost entirely skewed towards parasitic organisms. The published analysis of the C. parvum genome sequence identified 14 bacteria-like and 15 plant-like genes based on similarity searches [ 7 ]. Six of these bacterial-like and three plant-like genes were also identified as probable transferred genes in the phylogenomic analyses presented here. We have examined the fate of genes identified by one analysis and not the other to uncover the origin of the discrepancy. First, methodology is the single largest contributing factor. Genes with bacterial-like or plant-like BLAST similarities that did not appear to be transfers in the phylogenetic analyses were mostly cases in which PyPhy was unable to generate trees, owing to an insufficient number of significant hits in the database or to the stringent coverage length and similarity requirements adopted in this analysis. Only seven of the previously identified 15 plant-like and 11 of 14 eubacterial-like genes survived the predefined criteria for tree construction. Second, subsequent phylogenetic analyses including additional sequences from non-GenBank databases failed to provide sufficient evidence or significant support for either plant or eubacterial ancestry.
Third, searches of dbEST and other organism-specific databases yielded other non-plant or non-eubacterial organisms as nearest neighbors, thus removing the possibility of a transfer. The limitations of similarity searches and incomplete taxonomic sampling are well evidenced in our phylogenomic analyses. From similarity searches, C. parvum , like P. falciparum [ 26 ], is more similar to the plants Arabidopsis and Oryza than to any other single organism. Almost 800 predicted genes have best non-apicomplexan BLAST hits (E-value ≤ 10^-7) to plants and eubacteria (Table 1 ). Yet only 31 can be inferred to be transferred genes at this time with the datasets and methodology available (Table 2 ). In many cases (for example, phosphoglucomutase) the C. parvum gene groups phylogenetically with plant and bacterial homologs, but with only modest support. In other cases, such as pyruvate kinase and the bi-functional dehydrogenase enzyme (AdhE), gene trees obtained from automated PyPhy analyses indicate a strong monophyletic grouping of the C. parvum gene with plant or eubacterial homologs, but this topology disappears when sequences from other unicellular eukaryotes, such as Dictyostelium , Entamoeba and Trichomonas , are included in the analysis (data not shown). The list of genes in Table 2 should be considered a current best estimate of the IGTs and HGTs in C. parvum rather than a definitive list. As genomic data are obtained from a greater diversity of unicellular eukaryotes and eubacteria, phylogenetic analyses of nearest neighbors are likely to change. Did Cryptosporidium contain an endosymbiont or plastid organelle? The C. parvum sequences of cyanobacterial and algal origin reported here had to enter the genome at some point during its evolution. Formal possibilities include vertical inheritance from a plastid-containing chromalveolate ancestor, HGT from the cyanobacterial and algal sources (or from a secondary source such as a plastid-containing apicomplexan), or IGT from an endosymbiont/plastid organelle during evolution, followed by loss of the source. Cryptosporidium does not harbor an apicoplast organelle or any trace of a plastid genome [ 7 ]; thus an IGT scenario would necessitate loss of the organelle in Cryptosporidium or the lineage giving rise to it. The exact position of C. parvum on the tree of life has been debated, with developmental and morphological considerations placing it within the Apicomplexa, and molecular analyses locating it in various positions, both within and outside the Apicomplexa [ 3 ], but primarily within. If we assume that C. parvum is an apicomplexan, and if the secondary endosymbiosis which is believed to have given rise to the apicoplast occurred before the formation of the Apicomplexa, as has been suggested [ 18 ], C. parvum would have evolved from a plastid-containing lineage and would be expected to harbor traces of this relationship in its nuclear genome. Genes of likely cyanobacterial and algal/plant origin are detected in the nuclear genome of C. parvum (Table 2 ) and thus IGT followed by organelle loss cannot be ruled out. What about other interpretations? While it is formally possible that these genes were acquired independently via HGT in C. parvum , their shared presence in other alveolates (including the non-plastidic ciliate Tetrahymena ) provides the best evidence against this scenario, as multiple independent transfers would be required and so far there is no evidence for intra-alveolate gene transfer.
Vertical inheritance is more difficult to address as it involves distinguishing between genes acquired via IGT from a primary endosymbiotic event and those acquired from a secondary endosymbiotic event. Our data, especially the analyses of G6PI and BT-1, are consistent with both primary and secondary endosymbioses, provided that the secondary endosymbiosis is pre-alveolate in origin. As more genome data become available and flanking genes can be examined for each gene in a larger context, positional information will be informative in distinguishing among the alternatives. The plastidic nature of some genes is particularly apparent. There is a shared indel among leucine aminopeptidase protein sequences in apicomplexans, cyanobacteria and plant chloroplast precursors (Figure 3 ). The C. parvum leucine aminopeptidase does contain an amino-terminal extension of approximately 65-85 amino acids (depending on the alignment) relative to bacterial homologs, but this extension does not contain a signal sequence. The extension in P. falciparum is 85 amino acids and the protein is believed to be targeted to the apicoplast [ 26 , 37 ]. No similarity is detected between the C. parvum and P. falciparum amino-terminal extensions (data not shown). Other genes were less informative in this analysis. Among these, aldolase was reported in both P. falciparum [ 38 ] and the kinetoplastid parasite Trypanosoma [ 38 ] as a plant-like gene. The protein sequences of aldolase are similar in C. parvum and P. falciparum , with an identity of 60%. In our phylogenetic analyses, C. parvum clearly forms a monophyletic group with Plasmodium , Toxoplasma and Eimeria . This branch groups with Dictyostelium , Kinetoplastida and cyanobacterial lineages, but bootstrap support is not significant. The sister group to the above organisms comprises the plants and additional cyanobacteria, but again with no bootstrap support (see Additional data file 1 for phylogenetic tree). Another gene, enolase, contains two indels shared between land plants and apicomplexans (including C. parvum ) and was suggested to be a plant-like gene [ 29 ], but alternative explanations exist [ 39 ]. The biochemical activity of the polyamine biosynthetic enzyme arginine decarboxylase (ADC), which is typically found in plants and bacteria, was previously reported in C. parvum [ 19 ]. However, we were unable to confirm its presence by similarity searches of the two Cryptosporidium genome sequences deposited in CryptoDB using plant ( Cucumis sativus , GenBank accession number AAP36992), cyanobacterial ( Nostoc sp., NP_487441; Synechocystis sp., NP_439907) and other bacterial ( Yersinia pestis , NP_404547) homologs.
In another case, copies of a 1,4-α-glucan branching enzyme gene duplication pair that is present in many eubacteria, were detected on the same chromosome in C. parvum . C. parvum also contains many transferred genes from distinct eubacterial sources that are not present in other apicomplexans (for example, IMPDH, TK (thymidine kinase), trpB and the gene for aspartate ammonia ligase). The endosymbiotic event that gave rise to the mitochondrion occurred very early in eukaryotic evolution and is associated with significant IGT. However, most of these transfer events happened long before the evolutionary time window we explored in this study [ 41 ]. Many IGTs from the mitochondrial genome that have been retained are almost universally present in eukaryotes (including C. parvum which does not contain a typical mitochondrion [ 7 , 42 - 44 ]) and thus would not be detected in a PyPhy screen since the 'nearest phylogenetic neighbor' on the tree would be taxonomically correct and not appear as a relationship indicative of a gene transfer. The impact of gene transfers on host evolution Gene transfer is an important evolutionary force [ 21 , 22 , 45 , 46 ]. Several of the transferred genes identified in C. parvum are known to be expressed. IMPDH has been shown to be essential in C. parvum purine metabolism [ 32 ] and TK has been shown to be functional in pyrimidine salvage [ 25 ]. It is not yet clear whether these genes were acquired independently in this lineage, or have been lost from the rest of the apicomplexan lineage, or whether both these have happened. However, it is clear that their presence has facilitated the remodeling of nucleotide biosynthesis. C. parvum no longer possesses the ability to synthesize nucleotides; instead it relies entirely on salvage. Many apicoplast and algal nuclear genes have been transferred to the host nuclear genome, where they were subsequently translated in the cytosol and their proteins targeted to the apicoplast organelle. However, as there is no apicoplast in C. parvum , acquired plastidic proteins are theoretically destined to go elsewhere. In the absence of an apicoplast, it is tempting to suspect that plastid-targeted proteins would have been lost, or would be detected as pseudogenes. No identifiable pseudogenes were detected and at least one gene is still viable. The C. parvum leucine aminopeptidase, which still contains an amino-terminal extension (without a signal peptide), is intact and is expressed, as shown in Figure 6 . None of the cyanobacterial/algal genes identified in our study contains a canonical presequence for apicoplast targeting. One exception to this is phosphoglucomutase, a gene not present in Table 2 because of its poorly supported relationships in phylogenetic analyses. This gene exists in two copies as a tandem duplication in the C. parvum genome. One copy has a long amino-terminal extension (97 amino acids) beginning with a signal peptide. The extension does not contain characteristics of a transit peptide. Expression of a fluorescent reporter construct containing this extension in a related parasite, T. gondii , did not reveal apicoplast targeting but instead secretion via dense granules (see Additional data file 1). Exactly how and where intracellularly transferred genes (especially those that normally target the apicoplast) have become incorporated into other metabolic processes remains a fertile area for exploration. 
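The discussion above repeatedly turns on whether a nuclear-encoded protein carries an amino-terminal signal or transit peptide; the authors used dedicated predictors (e.g. ChloroP) and reporter constructs for this. Purely as an illustration of the underlying idea, the sketch below flags N-termini that contain a long hydrophobic stretch. This is a crude heuristic standing in for a real signal-peptide predictor; the window length, residue set and example sequence are assumptions of this example, not the paper's method.

```python
HYDROPHOBIC = set("AVLIMFWC")

def has_crude_signal_peptide(protein_seq, window=8, n_terminal=30):
    """Very rough stand-in for a signal-peptide predictor: report True if
    any `window` consecutive residues in the first `n_terminal` amino acids
    are all hydrophobic (signal peptides contain a hydrophobic core)."""
    head = protein_seq[:n_terminal].upper()
    return any(all(aa in HYDROPHOBIC for aa in head[i:i + window])
               for i in range(len(head) - window + 1))

# Made-up N-terminus with a hydrophobic core followed by charged residues:
print(has_crude_signal_peptide("MKLLVVLLAIFSSASA" + "DEQKR" * 10))  # True
```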
Conclusions Cryptosporidium is the recipient of a large number (31) of transferred genes, many of which are not shared by other apicomplexan parasites. The genes have been acquired from several different sources including α-, β-, and ε-proteobacteria, cyanobacteria, algae/plants and possibly the Archaea. We have described two cases in which genes appear to have been acquired together from a eubacterial source: trpB and the aspartate ammonia ligase gene are located within 5 kb of each other, while the two copies of 1,4-α-glucan branching enzyme represent copies of an ancient gene duplication also observed in cyanobacteria. Gene transfer in eukaryotes was once thought to be a relatively rare event, but reports are now increasingly common. The abundance of available eukaryotic genome sequence is providing the material for analyses that were not possible only a few years ago. Analysis of the Arabidopsis genome [ 47 ] has revealed potentially thousands of genes that were transferred intracellularly. HGTs are still a relatively rare class of genes among multicellular eukaryotes, most probably because of the segregation of the germ line. By definition, unicellular eukaryotes do not have a separate germ line and are obligated to tolerate the acquisition of foreign genes if they are to survive. Among unicellular eukaryotes, there are now many reports of HGTs: Giardia [ 48 , 49 ], Trypanosoma [ 38 ], Entamoeba [ 21 , 49 ], Euglena [ 50 ], Cryptosporidium [ 25 , 32 , 40 ] and other apicomplexans [ 51 ]. As discussed earlier, genes transferred from distant phylogenetic sources such as eubacteria could be potential therapeutic targets. In apicomplexans, transferred genes are already some of the most promising targets of anti-parasitic drugs and vaccines [ 7 , 25 , 52 ]. We have shown that several transferred genes are differentially expressed in the C. parvum genome, and in two cases (IMPDH and TK), the transferred genes have been shown to be functional [ 25 , 32 ]. The successful integration, expression and survival of transferred genes in the Cryptosporidium genome have changed the genetic and metabolic repertoire of the parasite. Materials and methods Cryptosporidium sequence sources Genomic sequences for C. parvum and C. hominis were downloaded from CryptoDB [ 53 ]. Genes were predicted for the completed C. parvum (IOWA) sequence as previously described using the Glimmer program [ 54 ] trained on Cryptosporidium coding sequences [ 52 ]. A few predicted genes that demonstrated apparent sequence incompleteness were reconstructed from genomic sequence by comparison with apicomplexan orthologs. The predicted protein-encoding data set contained 5,519 sequences. A comparison of this gene set to the published annotation revealed that the Glimmer-predicted gene set contained all but 40 of the 3,396 annotated protein-encoding sequences deposited in GenBank. These 40 were added to our dataset and analyzed. Glimmer does not predict introns and some introns are present in the genome [ 7 , 20 ]; thus our gene count is artificially inflated. Likewise, the official C. parvum annotation did not consider ORFs of less than 100 amino acids that did not have significant BLAST hits and thus may be a slight underestimate [ 7 ]. Database creation An internal database (ApiDB) containing all available apicomplexan sequence data was created [ 25 ]. A second BLAST-searchable database, PyPhynr, was constructed that included SwissProt, TrEMBL and TrEMBL_new, as released in August 2003, predicted genes from C.
parvum , ORFs of more than 120 amino acids from Theileria annulata , and more than 75 amino acids from consensus ESTs for several apicomplexan organisms. Genomic sequences for T. gondii (8x coverage) and clustered ESTs were downloaded from ToxoDB [ 55 , 56 ]. Genomic data were provided by The Institute for Genomic Research (TIGR), and by the Sanger Institute. EST sequences were generated by Washington University. In addition, this study used sequence data from several general and species-specific databases. Specifically, the NCBI GenBank nr and dbEST were downloaded [ 57 ] and extensively searched. To provide taxonomic completeness, additional genes were obtained via searches of additional databases including: Entamoeba histolytica [ 58 ], D. discoideum [ 59 ], the kinetoplastids Leishmania major [ 59 ], T. brucei [ 59 ], T. cruzi [ 60 ], and the ciliate Tetrahymena thermophila [ 61 ]. Sequence data for T. annulata , E. histolytica , D. discoideum , L. major and T. brucei were produced by the Pathogen Sequencing Unit of the Sanger Institute and can be obtained from [ 62 ]. Preliminary sequence data for T. thermophila were obtained from TIGR and can be accessed at [ 63 ]. Phylogenomic analyses and similarity searches The source code of the phylogenomic software PyPhy [ 27 ] was kindly provided by Thomas Sicheritz-Ponten and modified to include analyses of eukaryotic groups and to improve functionality [ 51 ]. For initial phylogenomic analyses, a BLAST cutoff of 60% sequence length coverage and 50% sequence similarity was adopted and the neighbor-joining program of PAUP 4.0b10 for Unix [ 64 ] was used. Our phylogenomic pipeline and PyPhy implementation are described in detail [ 51 ] and outlined in Figure 1 . Output gene trees with phylogenetic connections (that is, the nearest non-self neighbors at a distinct taxonomic rank) [ 27 ] to prokaryotes and algae-related groups were manually inspected. As the trees are unrooted, several factors were considered in the screen for candidate transferred genes. If the C. parvum gene did not form a monophyletic group with prokaryotic or plant-related taxa regardless of rooting, the subject gene was eliminated from further consideration. If the topology of the gene tree was consistent with a phylogenetic anomaly caused by gene transfer but could also be interpreted differently under an alternative rooting, it was removed from consideration at this time. If the top hits of both nr and dbEST database searches were predominantly non-plant eukaryotes and the topology of the tree was poor, the subject gene was considered an unlikely candidate. Finally, all 551 protein sequences predicted to be NEAPs in the malarial parasite P. falciparum [ 26 ] were used to search the C. parvum genome and the results were screened using a BLAST cutoff E-value of 10^-5 and a length coverage of 50%. Sequences identified by these searches were added to the candidate list (if not already present) for manual phylogenetic analyses to verify their likely origins. It should be noted that all trees were screened for the existence of a particular phylogenetic relationship. In some cases the proteins utilized to generate a particular tree are capable of resolving relationships among many branches of the tree of life, and in others they are not.
Despite these differences in resolving power, the proteins which survive our phylogenetic screen and subsequent detailed analyses described below exhibit significant support for the branches of the tree in which we are interested. Similar procedures were used to characterize the complement of nuclear-encoded genes of plastid origin in the Arabidopsis genome [ 65 ]. BLAST searches were performed on GenBank releases 138-140 [ 57 ]. Detailed phylogenetic analyses of candidate genes identified by phylogenomic screening: candidate genes surviving the PyPhy phylogenomic screen were reanalyzed with careful attention to taxonomic completeness, including representative species from major prokaryotic and eukaryotic lineages when necessary and possible. New multiple sequence alignments were created with ClustalX [ 66 ], followed by manual refinement. Only unambiguously aligned sequence segments were used for subsequent analyses (see Additional data file 1). Phylogenetic analyses were performed with a maximum likelihood method using TREE-PUZZLE version 5.1 for Unix [ 67 ], a distance method using the program neighbor of PHYLIP version 3.6a package [ 68 ], and a maximum parsimony method with random stepwise addition using PAUP* 4.0b10 [ 64 ]. Bootstrap support was estimated using 1,000 replicates for both parsimony and distance analyses and quartet puzzling values were obtained using 10,000 puzzling steps for maximum likelihood analyses. Distance calculation used the Jones-Taylor-Thornton (JTT) substitution matrix [ 69 ], and site-substitution variation was modeled with a gamma-distribution whose shape parameter was estimated from the data. For maximum likelihood analyses, a mixed model of eight gamma-distributed rates and one invariable rate was used to calculate the pairwise maximum likelihood distances. The unrooted trees presented in Figures 4 and 5 were drawn by supplying TREE-PUZZLE with the maximum parsimony tree and using TREE-PUZZLE distances as described above to calculate the branch lengths. The trees were visualized and prepared for publication with TreeView X Version 0.4.1 [ 70 ]. Genomic Southern analysis C. parvum (IOWA) oocysts (10 8 ) were obtained from the Sterling Parasitology Laboratory at the University of Arizona and were lysed using a freeze/thaw method. Genomic DNA was purified using the DNeasy Tissue Kit (Qiagen). Genomic DNA (5 μg) was restricted with Bam H1 and Eco R1 respectively and electrophoresed on a 0.8% gel in 1x TAE buffer, transferred to a positively charged nylon membrane (Bio-Rad), and fixed using a UVP crosslinker set at 125 mJ as described in [ 71 ]. C. parvum genomic DNA for the probes (700-1,500 bp) was amplified by PCR (see Additional data file 1). Semi-quantitative reverse transcription-PCR Sterilized C. parvum (IOWA isolate) oocysts were used to infect confluent human adenocarcinoma cell monolayers at a concentration of one oocyst per cell as previously described [ 72 ]. Total RNA was prepared from mock-infected and C. parvum -infected HCT-8 cultures at 2, 6, 12, 24, 36, 48 and 72 h post-inoculation by directly lysing the cells with 4 ml TRIzol reagent (GIBCO-BRL/Life Technologies). Purified RNA was resuspended in RNAse-free water and the integrity of the samples was confirmed by gel electrophoresis. Primers specific for several transferred genes identified in the study were designed (see Additional data file 1) and a semi-quantitative RT-PCR analysis was carried out as previously described [ 72 ]. Primers specific for C. 
parvum 18S rRNA were used to normalize the amount of cDNA product of the candidate gene to that of C. parvum rRNA in the same sample. PCR products were separated on a 4% non-denaturing polyacrylamide gel and signals from specific products were captured and quantified using a phosphorimaging system (Molecular Dynamics). The expression level of each gene at each time point was calculated as the ratio of its RT-PCR product signal to that of the C. parvum 18S rRNA. Six independent time-course experiments were used in the analysis. Additional data files Additional data is provided with the online version of this paper, consisting of a PDF file (Additional data file 1) containing: materials and methods for genomic Southern analysis; the amino-acid sequences of genes listed in Table 2; accession numbers for sequences used in Figure 4; accession numbers for sequences used in Figure 5; expression of C. parvum phosphoglucomutase in T. gondii; table of primers used for RT-PCR experiments; phylogenetic tree of aldolase; alignment files for phylogenetic analyses in Figure 4; and the alignment of 1,4-α-glucan branching enzyme sequences used in Figure 5. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC545779.xml |
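As a postscript to the Materials and methods above, the following is a minimal sketch of the BLAST-level screening cutoffs described there (at least 50% similarity over at least 60% of the query for the initial phylogenomic screen; E <= 1e-5 over at least 50% coverage for the NEAP search). It assumes NCBI BLAST tabular output (-outfmt 6 style); the file name, the query-length table, and the use of percent identity as a stand-in for percent similarity are illustrative assumptions, not the authors' code.

```python
def parse_blast_tabular(path):
    """Yield (query, subject, pident, aln_len, evalue) per tabular hit line."""
    with open(path) as handle:
        for line in handle:
            f = line.rstrip("\n").split("\t")
            # qseqid sseqid pident length mismatch gapopen qstart qend sstart send evalue bitscore
            yield f[0], f[1], float(f[2]), int(f[3]), float(f[10])

def passes_phylogenomic_cutoff(pident, aln_len, query_len,
                               min_similarity=50.0, min_coverage=0.60):
    """Initial PyPhy-style screen: >=50% similarity over >=60% of the query."""
    return pident >= min_similarity and aln_len >= min_coverage * query_len

def passes_neap_cutoff(evalue, aln_len, query_len,
                       max_evalue=1e-5, min_coverage=0.50):
    """Screen used for the P. falciparum NEAP queries."""
    return evalue <= max_evalue and aln_len >= min_coverage * query_len

if __name__ == "__main__":
    query_lengths = {"cgd1_1234": 412}     # hypothetical gene id and length
    candidates = set()
    for q, s, pident, aln_len, evalue in parse_blast_tabular("hits.tsv"):
        if q in query_lengths and passes_phylogenomic_cutoff(
                pident, aln_len, query_lengths[q]):
            candidates.add((q, s))
    print(len(candidates), "hits survive the screen")
```

Hits surviving such a filter would then go on to the manual phylogenetic analyses described above; the filter itself only narrows the candidate list.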
526219 | A comprehensive comparison of comparative RNA structure prediction approaches | Background An increasing number of researchers have released novel RNA structure analysis and prediction algorithms for comparative approaches to structure prediction. Yet, independent benchmarking of these algorithms is rarely performed as is now common practice for protein-folding, gene-finding and multiple-sequence-alignment algorithms. Results Here we evaluate a number of RNA folding algorithms using reliable RNA data-sets and compare their relative performance. Conclusions We conclude that comparative data can enhance structure prediction but structure-prediction-algorithms vary widely in terms of both sensitivity and selectivity across different lengths and homologies. Furthermore, we outline some directions for future research. | Background Motivation RNA, once considered a passive carrier of genetic information, is now known to play a more active role in nature. Many recently discovered RNAs are catalytic, for example RNase P which is involved in tRNA maturation and the self-splicing introns involved in mRNA maturation [ 1 ]. In addition, there is evidence that RNA based organisms were an essential step in the evolution of modern DNA-protein based organisms [ 2 , 3 ]. The number of non-coding RNAs (ncRNA) in humans remains a mystery, but progress in this direction suggests the number of ncRNAs produced is comparable to the number of proteins [ 4 - 6 ]. Surprisingly, the number of protein coding genes does not correlate with our concept of "organism complexity", hence it has been hypothesised that control of gene expression via a combination of alternative splicing and non-coding RNAs are responsible for this, implying that the "Central Dogma" (RNA is transcribed from DNA and translated into protein) at least in higher eukaryotes is woefully inadequate [ 7 , 8 ]. A fundamental tenet of biology is that a stable tertiary structure is essential for biological function. In the case of RNA the secondary structure (the base-pair set for an RNA molecule) provides a scaffold for the tertiary structure [ 9 , 10 ]. Yet, the experimental determination of RNA structure remains difficult [ 11 ]; Researchers increasingly turn to computational methods. To date the most popular structure prediction algorithm is the Minimum Free Energy (MFE) method for folding a single sequence, this has been implemented by two packages: Mfold [ 12 ] and RNAfold [ 13 ]. However, there are several independent reasons why the accuracy of MFE structure prediction is limited in practise (see discussion below). Generally the best accuracy can be achieved by employing comparative methods [ 14 ]. This paper explores the extent to which this statement is true, given the current state of the art, for automated methods. There are currently three approaches to automated comparative RNA sequence analysis where the comparative study is supported by available algorithms (see plans A, B, and C, figure 1 ). A researcher following plan A may align sequences using standard multiple sequence alignment tools (i.e. ClustalW [ 15 ], t-coffee [ 16 ], prrn [ 17 ],...), then use signals provided by structure neutral mutations for the inference of a consensus structure. Frequently the mutual-information measure is used for this [ 18 - 20 ]. Recently tools have been developed that use a combination of MFE and a covariation-score [ 21 , 22 ] or probabilistic models compiled from large reference data-sets [ 23 , 24 ]. 
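The covariation signal exploited in plan A is often quantified as the mutual information between two alignment columns. A minimal sketch of that measure (not any particular tool's implementation), with the alignment given as equal-length gapped RNA strings:

```python
from collections import Counter
from math import log2

def column(alignment, i):
    return [seq[i] for seq in alignment]

def mutual_information(alignment, i, j):
    """MI(i, j) = sum over (x, y) of f_xy * log2(f_xy / (f_x * f_y)),
    computed over rows where neither column is gapped."""
    pairs = [(a, b) for a, b in zip(column(alignment, i), column(alignment, j))
             if a != "-" and b != "-"]
    n = len(pairs)
    if n == 0:
        return 0.0
    fxy = Counter(pairs)
    fx = Counter(a for a, _ in pairs)
    fy = Counter(b for _, b in pairs)
    return sum((c / n) * log2((c / n) / ((fx[a] / n) * (fy[b] / n)))
               for (a, b), c in fxy.items())

aln = ["GCCAAGGC", "GGCAAGCC", "GACAAGUC"]   # toy alignment
print(mutual_information(aln, 1, 6))        # columns 1 and 6 covary (~1.58 bits)
```

High MI between two columns is consistent with compensatory mutations maintaining a base-pair, which is why the measure is used to support consensus structure inference.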
However, a multiple-sequence-alignment step assumes a well conserved sequence. This is often not so with swiftly evolving ncRNA sequences; in this case, incorrect sequence alignments can destroy any covariation signal. This has motivated plan B, the use of the "Sankoff-Algorithm", an algorithm designed for the simultaneous alignment, folding and inference of a protosequence for a set of homologous structural RNA sequences [ 25 ]. The recurrences combine sequence alignment and Nussinov (maximal pairing) folding [ 26 ]. The algorithm requires extreme computational resources (O(n^{3m}) in time and O(n^{2m}) in space, where n is the sequence length and m is the number of sequences). Current implementations, Foldalign [ 27 , 28 ], Dynalign [ 29 ] and PMcomp [ 26 ], are restricted implementations of the Sankoff-algorithm which impose pragmatic limits on the size or shape of substructures. The final approach (plan C) applies when no helpful level of sequence conservation is observed. We may exclude the sequence alignment step, predict secondary structures for each sequence (or sub-group of sequences) separately, and directly align the structures. Because of the nested branching nature of RNA structures, these are adequately represented as trees. The concept of a similarity measurement via edit operations, a standard procedure for string comparisons, has been generalised to trees [ 30 - 33 ]. Tree comparison and tree alignment models have been proposed [ 34 , 35 ] and implemented [ 13 , 36 - 39 ]. The crucial point in plan C is whether the initial independent folding produces at least some structures that align well and hence give clues as to the underlying consensus structure – when one exists. An increasing number of researchers have recently released novel RNA structure analysis and prediction algorithms [ 22 , 23 , 37 , 40 - 43 ]. Yet few algorithms are tested upon standardised example data-sets, and often they are not compared with algorithms of the same pedigree. Algorithm evaluation is a regular event for protein structure prediction [ 44 - 47 ], gene prediction [ 48 - 50 ] and multiple sequence alignment [ 51 - 54 ]. Based on reliable data-sets, we evaluate: • the viability of plan A, B, or C given tools available today, and • the relative performance of the tools used within each plan. We shall explicitly not evaluate computational efficiency, which (by necessity) differs widely between the tools. We also do not evaluate user friendliness (such as ease of installation and convenience of input or output formats, etc.) except for some remarks in the discussion section. Data-sets, documentation and relevant scripts are freely available from . Structural alignments and consensus structures RNA secondary structure inference is the prediction of base-pairs which form the in vivo structure, given only the sequence of bases. Three general considerations apply: (1) The in vivo structure is not only predetermined by the primary structure, but also by cellular components such as chaperones, base modifications, and even by the transcriptional process itself. There are currently no computational tools available that assess these effects. (2) There are 'ribo-switches', whereby two or more functional structures exist for a given sequence [ 55 - 57 ]. Such cases will fool all the tools studied here, because asking for a single consensus structure is simply the wrong question. On the other hand, the potential of conformational switching can be reliably detected [ 58 - 60 ].
(3) Structures may contain pseudo-knots, which are ignored by most current tools due to reasons of computational complexity and scarcity of these motifs. We do not consider pseudoknots here. However, several comparative approaches that include pseudoknots are currently under development, and certainly merit a comparative study of their own. Note that in an application scenario, we often do not know whether the considerations (1–3) apply. The comparative approach to structure inference is initiated from a set of homologous RNA sequences. Attempts are made to infer the in-vivo structure for each of them, as well as a consensus structure that captures the common, relevant structural aspects. The consensus structure per se does not exist in vivo, and so some mathematical rigour should be applied when working with this notion. An RNA sequence is a string over the RNA alphabet {A, C, G, U}. An RNA sequence B = b_1,..., b_n contains n bases, but no structural information. For comparative analysis, we are given the RNA sequences B_1,..., B_k. A secondary structure can be associated with each sequence B as a string S over the alphabet {"(", ".", ")"}, where parentheses in S must be properly nested, and B and S must be compatible: if (s_i, s_j) are matching parentheses, then (b_i, b_j) must be a legal base-pair. A base-pair is also denoted as b_i·b_j, s_i·s_j, or simply i·j when the sequence is clear from the context. Both sequences and structures may be padded with a gap symbol "-", in order to align sequences and structures of different lengths. For compatibility of padded sequences and structures, we require that b_i = "-" iff s_i = "-". A multiple structural alignment is a multiple sequence alignment of the 2k sequences B_1, S_1,..., B_k, S_k, such that B_i is compatible with S_i, and the following consistency criterion is satisfied: for any S_i and S_j and any base-pair a·b in S_j, we have S_i[a] ≠ ")" and S_i[b] ≠ "(", and if S_i[a] = "(" or S_i[b] = ")", then a·b is also a base-pair in S_i. This means that if one partner of a base-pair in S_j is aligned to one partner of a base-pair in S_i, their partners must also be aligned to each other (see figure 2 for an illustration). A consensus structure C for a multiple structural alignment can be determined by a majority rule approach using a threshold p with 0.5 < p ≤ 1. We define c_a = x if S_i[a] = x for at least p·k of the k structures S_i, and c_a = "." otherwise. The latter definition is somewhat arbitrary; when relating the consensus structure to a particular sequence B in the alignment, we quietly turn those dots into gaps that align with gaps in B. For p = 1, we speak of a strict consensus, and the base-pair set in C is the intersection of the base-pairs in all S_i. A consensus structure exhibits base-pairs shared by the majority of structures under consideration, but has no sequence information associated with it. Each individual structure for a concrete sequence typically has additional base-pairs which are properly nested between those that constitute the consensus. Given a consensus structure C and a sequence B compatible with it, we can obtain a structure refold(B, C) which is the best thermodynamic folding for B that exhibits the base-pairs specified by C, plus additional ones that do not conflict with the former. Refolding can be achieved by RNAfold with option -C (this option is used to constrain the minimum free energy prediction with prior knowledge – such as known base-pairs, unpaired regions, etc).
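A sketch of the two operations just defined. consensus() applies the majority rule to aligned dot-bracket strings (the exact threshold convention is an assumption; a consistent alignment guarantees that both partners of a pair pass or fail together), and refold(B, C) wraps the ViennaRNA RNAfold command-line tool with -C, using the gap convention stated in the next sentence of the text. The output parsing is simplified.

```python
import subprocess

def consensus(structures, p=0.5):
    """Majority-rule consensus of aligned, equal-length dot-bracket strings."""
    k = len(structures)
    out = []
    for col in zip(*structures):
        ch, count = max(((c, col.count(c)) for c in set(col)),
                        key=lambda t: t[1])
        out.append(ch if ch in "()" and count > p * k else ".")
    return "".join(out)

def refold(seq, constraint):
    """refold(B, C): constrained MFE folding via RNAfold -C."""
    gaps = [i for i, c in enumerate(seq) if c == "-"]
    s = seq.replace("-", "")
    c = "".join(ch for i, ch in enumerate(constraint) if seq[i] != "-")
    out = subprocess.run(["RNAfold", "-C", "--noPS"],
                         input=f"{s}\n{c}\n", text=True,
                         capture_output=True, check=True).stdout
    structure = list(out.splitlines()[1].split()[0])  # "structure (energy)"
    for i in gaps:                                    # reinsert gap columns
        structure.insert(i, "-")
    return "".join(structure)

print(consensus(["((....))", "((....))", "(......)"]))  # -> ((....))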
If B and S contain gaps, we remove them before refolding and reintroduce them in the same positions afterwards. Given a consistent structural alignment, it is easy to derive a consensus structure, as we can count majorities at individual positions. If the 5' partner of a base-pair passes the majority threshold, consistency implies that its 3' partner also makes it into the consensus. Given a consensus structure and a sequence alignment without structural information, we can approximate a structural alignment by computing S i = refold ( B i , C ). We call this structural alignment reconstruction. While all S i will be consistent with C , and with each other as far as the base-pairs of C are concerned, they may be inconsistent for the base-pairs introduced in refolding. This is tolerable, since if we trust the consensus to capture the relevant common structural features, there is no need to require that all members of a family agree upon extra-consensus features. We note in passing that it seems worthwhile to study the conditions under which consensus derivation and structural alignment reconstruction are mutually inverse operations, but such theoretical issues are outside our present scope. Interpreting database information While the plans A, B and C we are about to evaluate strive to find a good consensus structure from sequence data, the "truth" available to us comes in a different form. Structural databases only convey a consensus by example : They provide a reference sequence, say B 1 , with an experimentally proved structure S 1 , and provide a multiple sequence alignment of B 1 , S 1 and additional sequences B 2 ,..., B n in the family under consideration. The sequence alignment is chosen to exhibit structural similarities between the reference structure and the other family members, but in general, we do not know the precise model of achieving similarity, nor do we know whether this model has been solved to optimality. One consequence of this situation would be to conclude that the reference structure is the only reliable anchor point available to us for evaluation. Comparative analysis tools would then be evaluated by the capacity to predict this particular structure by using family information. This would be a meaningful way to proceed, however, the effect of structural homogeneity within a sequence family would go unmeasured, and so would the difficulty or success of exploiting it. We therefore proceed in a different way which we call consensus reconstruction . The reference structure S 1 need not be compatible with any B i except for i = 1. However, we can still compute S i := refold ( B i , S 1 ) by treating bases as unpaired where they violate compatibility. (This is also achieved with RNAfold , option -C.) What we obtain in this way is a reconstructed structural alignment, which will be consistent to the extent that the reference structure indeed describes the common structural features, and to the extent that the database sequence alignment reflects these. In all our test cases, this alignment was overall consistent, an indicator that the families and their structural features are in fact well defined. From this alignment, we derive a consensus structure as explained above using a threshold p = 0.5, which will serve as the standard of truth in our evaluation. One may argue that our approach to reconstruct the truth is somewhat ad-hoc and should be replaced by a more systematic method. 
However, this is what the tools we evaluate try to achieve, and we should not add one of our own as the standard of truth. Hence, our consensus reconstruction is designed to stay as close as possible to the database information. Caveats Results of observations based on the above measures must be interpreted with care. We list a number of caveats that must be kept in mind when proceeding to the subsequent sections. Use of defaults In all tests, one could possibly obtain better predictions by tuning the program's parameters. We felt that it would be inappropriate to do so, since in the evaluation, we know the correct result and could use this knowledge in the tuning, whereas in a true application context, one does not have such guidance. Hence we used the recommended defaults in all cases. Tool abuse In some cases we apply a tool to data where we know that the model structure has features not recognised by the tool. An example is a structure with multiloops or pseudoknots, searched for with a tool that explicitly excludes such structures. We permit such cases, because again, in a true application context one does not know whether the tool is appropriate or not, and it is still of interest to see how close to the correct structure one can get. Standard of truth We take for granted the correctness of structural alignments taken from the literature, and the consensus reconstructed thereof. Should one of the tested algorithms produce a result that is actually better (closer to the functionally important structure), it may be penalised. Also, we do not consider a large number of data-sets here, it is possible that performance of some algorithms improves on a different selection of data-sets. Tools improve Our data reflect the state of the art in 2004. Most of the tools tested are very recent, and their authors are still improving them. Hence, not all observations will remain reproducible. In fact, we hope this study helps to obtain better results in the future. Methods We have compiled RNA sequence alignments consisting of up to 11 sequences derived from reliable sources (see table 1 ). These have been used to test several RNA analysis packages. Each alignment contains at least one reference sequence B 1 with (preferably) an experimentally verified secondary structure S 1 . Experimental verification of a structure may be from a variety of sources: x-ray crystallography, NMR, enzymatic structure probing or phylogenetic inference. A comparison of phylogenetic with x-ray crystallographic structures has shown the phylogenetic predictions of rRNA to be very reliable (sensitivity > 97%) [ 61 ]. This data specifies a "consensus by example", as explained above, to which our consensus reconstruction was applied to obtain the "true" consensus. To avoid results bias, we constructed test alignments, with corresponding phylogenies that, wherever possible, were free of highly similar clades. In addition, we endeavoured to ensure that the reference sequence was central to the phylogeny, or more specifically, not an out group. To meet these requirements, sequences from large data-sets were sorted into high-similarity and medium-similarity groups (with respect to the model sequence), from which maximum-likelihood phylogenies [ 62 ] were constructed. These were pruned until the desired size and topology was achieved. For each data-set two sequence alignments were constructed, one of high sequence identity (approximately 90–99%) and the other more diverse data-set of medium sequence identity (approximately 70–90%). 
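A sketch of the identity-based grouping used to build the two alignments per data-set. The percent-identity bins come from the ranges just stated; the identity definition (gap-gap columns ignored) and everything else here are illustrative assumptions:

```python
def percent_identity(a, b):
    """Percent identity of two aligned (gapped) sequences."""
    cols = [(x, y) for x, y in zip(a, b) if not (x == "-" and y == "-")]
    matches = sum(1 for x, y in cols if x == y and x != "-")
    return 100.0 * matches / len(cols)

def bin_by_identity(reference, others, high=(90, 99), medium=(70, 90)):
    """Split candidate sequences into high- and medium-identity groups
    with respect to the (aligned) reference sequence."""
    groups = {"high": [], "medium": []}
    for name, seq in others.items():
        pid = percent_identity(reference, seq)
        if high[0] <= pid <= high[1]:
            groups["high"].append(name)
        elif medium[0] <= pid < medium[1]:
            groups["medium"].append(name)
    return groups
```

In the study itself these groups were further pruned using maximum-likelihood phylogenies; the sketch covers only the initial similarity sorting.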
Our data-sets are quite diverse and must for the purposes of this study be considered difficult to analyse in structural terms. The shape of ribosomal RNA is believed to be influenced by interaction with ribosomal proteins. The shape of RNase P shows relatively little sequence and structure conservation, and furthermore, it contains pseudoknots which are generally excluded by prediction algorithms. Transfer RNAs are known to be a hard case for thermodynamic folding, primarily due to the propensity of modified bases which influence structure formation. All tools tested may perform better upon less complex data-sets, but the purpose of this study is not to show how good the algorithms are but to compare relative performance when prediction is difficult. Performance Measures Sensitivity (X) and selectivity (Y) are common measures for determining the accuracy of prediction methods [ 63 ]. Selectivity is also known as the "specificity" [ 28 ] and "positive predictive value" [ 64 , 65 ]. We use slightly modified versions of the standard definitions of X and Y for examining RNA secondary structure prediction: X = TP / (TP + FN) and Y = TP / (TP + FP − ξ), where TP is the number of "true positives" (correctly predicted base-pairs), FN is the number of "false negatives" (base-pairs in the reference structure that were not predicted) and FP is the number of "false positives" (incorrectly predicted base-pairs). However, not all FP base-pairs are equally false! We classify FP base-pairs as either inconsistent, contradicting or compatible. Predicted base-pairs which conflict with a base-pair in the reference structure are labelled inconsistent (i.e. i·j is predicted where either i·k and/or h·j are paired in the reference structure (h ≠ i and j ≠ k)). Predicted base-pairs (i·j) which are non-nested with respect to the reference structure are labelled contradicting (i.e. there exist base-pairs k·l in the reference satisfying k < i < l < j). Note that some base-pairs may both contradict and be inconsistent with the reference structure. Predicted base-pairs which are neither true positives, contradicting nor inconsistent are labelled compatible and can be considered neutral with respect to algorithm accuracy. Hence these are subtracted in the selectivity evaluation; their number is ξ in the above equation. It is of interest to note that the base-pair metric [ 66 , 67 ] between the reference and predicted structures, d_BP(S_ref, S_pred), is the sum of FN and FP, and hence is different from the measure used here. A measure combining both selectivity and sensitivity is useful for ranking algorithms. For this we employ the Matthews correlation coefficient [ 63 ] defined below: MCC = (TP·TN − (FP − ξ)·FN) / √((TP + FP − ξ)(TP + FN)(TN + FP − ξ)(TN + FN)). MCC ranges from -1 for extremely inaccurate (TP = TN = 0) to 1 for very accurate predictions (FP − ξ = FN = 0). When comparing RNA structures TN = 0 occurs only in extreme examples, hence MCC generally ranges from 0 to 1. Furthermore, for the specific case of RNA structure comparisons, MCC can be approximated by the arithmetic-mean or geometric-mean of X and Y [ 28 ]. Results Single sequence methods The accuracy of the MFE single sequence method has been evaluated elsewhere and was found to have an accuracy of 73% when averaged over many different RNAs and "base-pair slippage" was tolerated in the evaluation [ 68 ]. A recent and more stringent work found MFE predictions had a sensitivity of 56% and selectivity of 46% for RNase P, SRP and tmRNA structures [ 64 ]. Similar values are also reported by the "Gutell Lab" for tRNA and rRNA structures [ 69 - 71 ].
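The performance measures defined above translate directly into code. A sketch, with structures given as sets of (i, j) base-pairs (i < j); following the approximation noted above [ 28 ], MCC is estimated as the geometric mean of X and Y to avoid the enormous TN term, and "contradicting" is implemented as crossing in either orientation:

```python
from math import sqrt

def evaluate(reference, predicted):
    """Sensitivity X, selectivity Y and an MCC approximation."""
    ref, pred = set(reference), set(predicted)
    partner = {}
    for i, j in ref:
        partner[i], partner[j] = j, i
    tp = len(ref & pred)
    fn = len(ref - pred)
    fp_pairs = pred - ref
    inconsistent = {(i, j) for i, j in fp_pairs
                    if partner.get(i, j) != j or partner.get(j, i) != i}
    contradicting = {(i, j) for i, j in fp_pairs
                     if any(k < i < l < j or i < k < j < l for k, l in ref)}
    xi = len(fp_pairs - inconsistent - contradicting)   # compatible pairs
    fp = len(fp_pairs)
    x = tp / (tp + fn) if tp + fn else 0.0
    y = tp / (tp + fp - xi) if tp + fp - xi else 0.0
    return x, y, sqrt(x * y)

ref = {(0, 8), (1, 7), (2, 6)}
pred = {(0, 8), (1, 7), (3, 5)}     # (3, 5) is merely compatible
print(evaluate(ref, pred))          # X ~ 0.67, Y = 1.0, MCC ~ 0.82
```

Note how the compatible pair (3, 5) is subtracted via ξ, so selectivity is not penalised by a prediction that merely adds a nested pair inside the reference structure.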
We need to clarify the accuracy of this method on the particular data-sets we employ here for comparison with the multi-sequence methods. After all, if MFE folding worked perfectly for our given data-sets, there would be no need to resort to comparative methods. Mfold & RNAfold Mfold [ 12 , 72 ] and RNAfold [ 13 , 73 ] both implement the Zuker-Stiegler algorithm for computing minimal free energy (MFE) structures assuming a "nearest neighbour model" and using empirical estimates of thermodynamic parameters for neighbouring interactions and loop entropies to score structures. The algorithm is O(n^3) in time and O(n^2) in memory, where n is the sequence length. Both employ the same thermodynamic parameters [ 68 ]. Hence, differences in the predictions are generally minor and are the result of slightly different implementations. There appear to be no significant differences in terms of algorithm accuracy. The sensitivity, selectivity and correlation of MFE methods (for the four data-sets considered here) ranged from 22–63%, 20–60% and 0.18–0.61 respectively (see figures 3 & 4 ). The low accuracies (22%, 20% & 0.18) are due to an alternative long-stem conformation of S. cerevisiae tRNA-Phe which the free energy methods favour. Mfold infers 'suboptimal' structures by calculating minimum free energy structures with the restriction that every possible base-pair is forced in a one-by-one fashion. Unique structures are then ranked by energy. Investigating the top two suboptimal structures from Mfold resulted in an overall increase in the range of sensitivity, selectivity and correlation: 22–69%, 20–67% and 0.18–0.68 respectively. The predictions shown here are used to illustrate the potential advantages of using comparative analyses over single sequence methods. Sfold Sfold [ 41 , 74 ] represents another energy-based single-sequence folding algorithm. For a given RNA, Sfold stochastically samples all possible structures in the Boltzmann ensemble of secondary structures using conditional probabilities which are computed with the partition function [ 75 ]. Clustering techniques could then be used to obtain representative 'likely' structures. Instead, the current implementation samples 1000 structures and sorts these by energy; the minimum and maximum energy structures are computed and the energy range is divided into 10 equally sized energy blocks. The minimum energy structure from each block is returned with ranking 1 to 10. We consider the top 3 structures, labelled 'Sfold (1–3)'. In terms of accuracy, the results are very similar to those of the Zuker-Stiegler single sequence methods, although with a slightly higher variance (see figures 3 & 4 ). Intrinsic limits of single sequence methods There are systematic limits to the accuracy of single sequence prediction methods. The thermodynamics may not be accurate, as some parameters are extrapolated and parameter measuring conditions in vitro are different from in vivo conditions. Indeed the thermodynamic model itself is an estimate of the real physics of RNA folding. Also, many bases of structural RNAs are chemically modified by sugar methylation, pseudo-uridine, dihydrouracil, etc.; these are generally ignored by these methods. Kinetics of folding are also ignored. Given only a single sequence, we have no way to distinguish base-pairs and structure elements important for the consensus from those that are peculiar for the given sequence.
Finally, some functional RNAs have bistable structures, while in others, the structure is irrelevant, hence not conserved, and the optimal MFE structure is biologically meaningless. This is some justification of why researchers proceed to comparative methods. Comparative method: alignment folding (plan A) To simulate realistic RNA folding studies we used ClustalW [ 15 ] to re-align each of our test data-sets, then folded these using each of the methods mentioned below. The resultant predicted structures were then compared to our reconstructed consensus structures. RNAalifold RNAalifold [ 21 , 76 ] implements an extension of the Zuker-Stiegler algorithm for computing a consensus structure from RNA alignments. The algorithm computes an energy matrix averaged over the N sequences in the alignment, and a covariation score matrix B_ij augmented with penalties for inconsistent sequences. A standard trace-back procedure is performed to recover a consensus structure with the optimal sum of average energy and covariation score. The algorithm is remarkably efficient: O(N·n^2 + n^3) in time and O(n^2) in memory. The sensitivity, selectivity and correlation of the RNAalifold predictions ranged from 57–91%, 57–100% and 0.57–0.95 respectively, showing a significant increase in the accuracy measures when compared to the MFE methods. Pfold Pfold implements a "stochastic context free grammar" (SCFG) designed to produce a "prior probability distribution of RNA structures" for an RNA alignment input [ 23 , 24 , 77 ]. A maximum-likelihood phylogeny is used to weight posterior probabilities computed from large reference data-sets. The algorithm is generally accurate and efficient. The over-all sensitivity, selectivity and correlation of the Pfold predictions ranged from 0–100%, 0–100% and 0.0–1.0, respectively. But removing those points where Pfold predictions were empty structures (LSU rRNA (H & M) and SSU rRNA (M), see figure 3 ), the prediction accuracies ranged from 66–100%, 89–100% and 0.77–1.0, respectively. The zeros are due to 'under-flow errors'; a solution is presently under construction by the authors (pers. commun. Bjarne Knudsen). ILM ILM (iterated loop matching) is one of the few comparative RNA folding algorithms which can return pseudo-knotted structures [ 22 , 78 ]. It uses a combination of thermodynamic and mutual information content scores [ 18 ] to produce a secondary structure. All possible stems ("small" internal loops and bulges inclusive) are generated and ranked according to a combination of thermodynamic and mutual-information scores. The stem with maximal score is selected, scores are updated and stems conflicting with the selection are removed; then the next highest scoring stem is selected. This algorithm is iterated until no stems remain. ILM generally ranked low in terms of selectivity and was not as sensitive as either RNAalifold or Pfold on the high similarity data, but did improve on the medium similarity data-sets (see figure 3 ). The over-all sensitivity, selectivity and correlation of ILM predictions ranged from 44–100%, 37–75% and 0.40–0.86, respectively. To ensure the low selectivity values were not due to the reference structure being pseudo-knot free, we re-evaluated ILM with reference structures replete with pseudo-knots. The new sensitivity, selectivity and correlation values ranged from 31–100%, 26–75% and 0.29–0.86; in fact, evaluating with pseudo-knotted structures did little to increase ILM selectivity.
But keep in mind that the sensitivity of the other (non-knot-inclusive) methods must decrease when a significant proportion of the true base-pairs are engaged in pseudo-knots. The inclusion of pseudo-knot prediction vastly increases the number of possible secondary structures, which is why pseudo-knots are generally excluded from exhaustive folding algorithms. In addition, there is a general lack of experimentally derived thermodynamic parameters which include pseudoknots. ILM is a method still under development, hence the performance may improve once pseudo-knots can be more accurately modelled. Comparative method: simultaneous sequence alignment and folding (plan B) Sankoff The Sankoff algorithm is a dynamic programming approach to obtain a common base-pair list with maximal sum of base-pair weights. Basically, this is a merger of the sequence alignment and Nussinov [ 79 ] (maximal-pairing) folding dynamic programming methods [ 26 ]. Sankoff's algorithm can be used to obtain both an alignment and a consensus structure. Full implementations of the "Sankoff algorithm" for the solution of simultaneous RNA folding, alignment and protosequence problems have proven too computationally taxing (O(n^{3m}) in time and O(n^{2m}) in space, for sequence length n and m sequences) to be practical [ 25 ]. Hence, three restricted versions of this algorithm have been implemented. These are Foldalign [ 27 ] and Dynalign [ 29 ]; recently, PMcomp has also been published [ 26 ]. Carnac [ 80 , 81 ] is another recent innovation designed to detect conserved stems in unaligned sequences; we include it here as a relative of the Sankoff approach. Foldalign Foldalign [ 27 ] can be interpreted as "a mixture of local alignment and maximum number of base-pairs algorithm" [ 28 , 82 ]. A combination of "clustal" [ 15 ] and "consensus" [ 83 ] heuristics is used to build multiple sequence alignments from pair-wise comparisons. Restricting the maximum motif size (for this study 50 was used) and forbidding bifurcating structures (multi-loops) reduces the time complexity to O(n^4 N) (where N is the number of sequences and n is the length of the longest sequence). A simple match-based scoring scheme is used to rank putative conserved structure elements. The Tool Abuse Caveat generally applies to the tool Foldalign, as all of our data-sets contain multi-loops. The use of Foldalign for the prediction of global, multi-looped secondary structures is not recommended, as Foldalign is specifically designed for the location of short regulatory motifs such as IREs [ 84 ] where the motifs are only related at the level of (non-bifurcating) structure and not at the level of sequence. Hence the relatively poor sensitivity, selectivity and correlation, which ranged from 5–24%, 23–36% and 0.11–0.27 respectively, for our test data-sets. Dynalign Dynalign [ 29 , 85 ] is a pairwise implementation of the Sankoff algorithm, which uses a "full energy model" to locate a common low energy structure (including multi-loops) and align two structural RNAs. The computational complexity of the full Sankoff is reduced by restricting the difference in the indices i and j of aligned nucleotides (where i indexes positions in sequence 1 and j indexes sequence 2) to be less than M. In addition, Dynalign uses the same method employed by Mfold to reduce the conformation space, by limiting the size of internal loops [ 29 , 86 ]. The complexity is thus reduced to O(n^3 M^3). The current Dynalign implementation is restricted to pair-wise sequence comparisons.
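All of the plan-B variants above restrict the Sankoff recursion, which merges alignment with Nussinov-style maximal pairing. For orientation, a minimal sketch of the unrestricted Nussinov DP itself (O(n^3) time, O(n^2) space); the minimum hairpin-loop size of three is a common convention, not a claim about any tool discussed here:

```python
def can_pair(a, b):
    return {a, b} in ({"A", "U"}, {"G", "C"}, {"G", "U"})

def nussinov(seq, min_loop=3):
    """Maximum number of nested base-pairs in seq."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                       # case: i unpaired
            for k in range(i + min_loop + 1, j + 1):  # case: i pairs with k
                if can_pair(seq[i], seq[k]):
                    left = dp[i + 1][k - 1]
                    right = dp[k + 1][j] if k + 1 <= j else 0
                    best = max(best, 1 + left + right)
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov("GGGAAAUCC"))   # toy sequence, three nested pairs
```

The Sankoff algorithm essentially runs such a recursion jointly over m sequences while aligning them, which is where the O(n^{3m}) cost quoted above comes from.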
Rather than compute all pairwise foldings we compared all sequences with the reference structure. Due to the computational expense of this algorithm it could only be used to predict tRNA and RNase P structures. Dynalign performed well on the tRNA, medium sequence homology data-set (sensitivity, selectivity and correlation of 94%, 95% and 0.94 respectively, when averaged over all pairwise alignments with the reference sequence). With this one high-scoring point removed, averaged sensitivity, selectivity and correlation values ranged from 32–54%, 33–54% and 0.32–0.54 respectively. Comparing the performances of MFold and Dynalign showed that MFold performance was always superior on the RNase P data-set, Dynalign however did much better on the shorter and more diverse tRNA sequences. Performance gains could be made by investing more computer time and refolding RNase P with larger ' maximum insert size', which was set to 10 during this study. The use of Dynalign on the RNase P data-sets in this study is therefore a case of tool-abuse, as the parameters recommended by the authors of Dynalign were not used (to ensure calculations completed in reasonable time). Carnac The Carnac algorithm, as mentioned previously, is not strictly an implementation of the Sankoff algorithm. A set of filters are employed through which sets of sequences are passed in a pair-wise fashion [ 80 , 81 , 87 ]. Sequences are scanned for stems and "high similarity" regions of sequences (dubbed "anchor points") are identified, a dynamic program is used to select conserved stems using anchor point and covariation information. The Carnac algorithm was remarkably selective at base-pair predictions. However, the sensitivity of the algorithm was generally low, although when evaluated with the correlation coefficient it is comparable to RNAalifold and Pfold. Sensitivity, selectivity and correlation values for Carnac predictions ranged from 45–71%, 92–100% and 0.65–0.82 respectively. The sensitivity of Carnac can be increased by constraining a minimum free energy fold (i.e. with "RNAfold-C") with the Carnac predicted structure, but this cost in terms of selectivity. On average this increased the sensitivity by 22.5, decreased the selectivity by 17.2 and slightly increased the correlation by 0.05. Alignment of predicted structures (plan C) RNA forester RNAforester [ 37 , 88 ] implements the tree alignment model. In contrast to approaches that produce only a similarity value, but no underlying alignment, it computes pairwise alignments of two input structures. RNAforester can produce either global or local alignments; we used the global mode. A structure alignment is itself a branching (tree-like) structure; the set of matched base-pairs can be derived from it and evaluated as with the other approaches. We used the tRNA and RNase P data-sets and generated structure single sequence predictions with RNAfold. All predicted structures were aligned pairwise and a neighbour-joining approach used to cluster and align high similarity sequences and structure profiles. The highest scoring alignment was used to derive a predicted consensus that was evaluated against the consensus tRNA model structures. Sensitivity, selectivity and correlation ranges of consensus structures computed from the highest scoring RNAforester alignments were 29–67%, 27–67% and 0.26–0.66 respectively. 
It seems likely that much of the inaccuracy of this approach is due to MFE structure prediction; however, the structure-clustering approach frequently separates mis-folded MFE predictions from the accurate folds. MARNA The MARNA algorithm [ 39 , 89 ] proceeds by constructing edge weights between nucleotides in a pairwise fashion. Weights are structure-enhanced sequence similarities transformed from the edit distances proposed by Zhang [ 90 ]. Phase two pipes the set of alignment edges into t-coffee [ 16 ] for multiple alignment production. The resultant alignments are not strictly structural alignments in the sense defined above. Rather, these are sequence alignments influenced by structure. Sensitivity, selectivity and correlation values of consensus structures computed from MARNA alignments of MFE structures ranged from 29–52%, 32–84% and 0.30–0.65 respectively. We also tried trimming high-entropy base-pairs from the MFE predictions using the bound Q_ij > 1, where Q_ij is a score derived from the pair-probabilities p_ij computed using McCaskill's partition function [ 75 ]. The new accuracy ranges were 29–71%, 92–100% and 0.53–0.84. A related approach, trimming low-probability base-pairs, was recently shown to improve the selectivity of MFE predictions [ 65 ]. MARNA is generally less dependent upon the accuracy of the input structures, hence it performs slightly better with the poorly predicted tRNA structures than RNAforester. Discussion We have evaluated three different strategies for comparative structure prediction, and altogether eight tools (not counting the single sequence methods); the results are summarised in figures 3 & 4 . A surprising discovery, given that the test data-sets are so diverse, is that algorithm-specific clusters formed in sensitivity versus selectivity scatter plots, indicating algorithm-specific eccentricities. A number of algorithms which might have been evaluated here have been excluded, primarily due to the heavy computational costs of the various implementations on our longer data-sets. We favoured recent algorithms which could be compiled on modern computers and those with input and output which could be simply dealt with (for example returning dot-bracket [ 13 , 37 , 91 ] or tabular-connect type formats [ 12 , 29 , 41 ], rather than the coordinates and lengths of stacks or the graphic (gif/pdf) representations favoured by a minority of researchers). Practical recommendations For well aligned short sequences, both Pfold and RNAalifold generally perform well; Pfold performed marginally better than RNAalifold. It is likely that some moderate refinements to RNAalifold would improve accuracy without altering the efficiency, for example if gaps were not penalised in the free-energy evaluation and a more sophisticated model for scoring mutations was employed; perhaps ribosum matrices [ 92 ] could be used to weight base-pair bonuses and penalties. For well aligned, long sequences the performance and speed of RNAalifold was excellent. For data-sets consisting of short (< 200 bases) and diverse sequences Dynalign might do well, as it does not require sequence similarity – in fact the scoring function does not include sequence comparison. Otherwise, one might choose to use a mixture of RNAalifold and/or Pfold to fold similar clades and RNAforester and/or MARNA to align folded clades. Advocates of plan A should note that many multiple sequence alignment algorithms generally do not favour transitions over transversions, or employ ad hoc 2-parameter methods to model these (ClustalW [ 15 ] for example).
Structural RNA sequences however evolve rapidly via structure neutral mutations which are frequently transitions and rarely transversions [ 92 , 93 ]. Multiple sequence algorithms which employ more complex yet more accurate models of sequence evolution will undoubtedly produce "better" alignments for folding. Carnac produced highly selective structures for all the test data-sets, which if used to constrain a free energy fold produced sensitive predictions with a cost to selectivity. The consistency of Carnac performance is remarkable, for all the data-sets considered here this heuristic approach performed well. It is however unclear how Carnac will perform on highly diverse data-sets. For advocates of plan C, we have an encouraging message: Both MARNA and RNAforester perform better on the medium similarity data than on high similarity data. This seems paradoxical at first glance, but one must understand that for an approach purely based on predicted structures, high sequence similarity can be a curse rather than a blessing: If sequences are very similar, they may jointly fold into the wrong MFE structure. With more sequence variation, it becomes more likely that at least some family members have good predictions, which by their mutual similarity can be picked out from the rest. This means that especially in the case of low sequence similarity, where nothing else works, plan C, currently the least explored strategy of all, has a certain promise. Conclusions Finally, let us outline some directions for future research. An implementation of the single sequence pseudoknot algorithms [ 42 , 43 , 94 ] employing similar strategies to RNAalifold [ 21 ] for alignment folding would be most useful. Based upon the RNAalifold results this approach would dramatically increase the accuracy of these algorithms upon certain data-sets. Also, an extension of these allowing constrained foldings to incorporate prior knowledge would be of assistance, this has proved extremely useful for MFE predictions. Sampling structures from reference alignments is also likely to prove beneficial. The implementation of fast and accurate variants of the Sankoff algorithm remains an open problem. Again allowing constrained foldings and alignments would be useful. The further development of "BLAST-like" folding heuristics for this should be a priority, obviously Carnac is a good start. The MARNA approach for producing structurally enhanced multiple alignments produced rather selective results after trimming high-entropy base-pairs from MFE predictions. This suggests that weighting edit-distances with partition-function derived probabilities or entropies will produce reasonable RNA alignments. A consensus structure could then be derived from MFE-structures or from PFold or RNAalifold predictions on the resultant alignment. This approach would effectively decouple the Sankoff algorithm into manageable structure-enhanced-alignment and folding stages. Note added in proof Two further developments are likely to increase the power of plan C. Pure multiple structure alignment (as opposed to pairwise alignment used here) presented in [ 95 ] may leave out some misfolded structures from a progressively constructed profile aligment. A small but representative set of near-optimal structures can now be derived by abstract shape analysis [ 96 ]. Combining both approaches, one could consider a progressive multiple alignment approach where these representative, near-optimal structures are included for each sequence. 
More training data is essential for this field to progress, for this homology search tools are essential. Infernal [ 91 , 97 ] used to construct the Rfam database [ 98 , 99 ] is an excellent approach but sensitivity might be increased with a phylogenetic approach and RNA-specific sequence search tools. The implementation of methods combining energetics, covariation [ 21 ] and co-transcriptional folding [ 100 ] in a statistically reasonable manner is also a potentially fruitful direction for development. Authors' contributions PPG carried out the experiments, the analysis and drafted the manuscript. RG suggested comparing comparative structure prediction methods and assisted in the manuscript preparation. All authors read and approved the final manuscript. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC526219.xml |
517946 | A novel method of cultivating cardiac myocytes in agarose microchamber chips for studying cell synchronization | We have developed a new method that enables agar microstructures to be used to cultivate cardiac myocyte cells in a manner that allows their connection patterns to be controlled. Non-contact three-dimensional photo-thermal etching with a 1064-nm focused infrared laser beam was used to form the shapes of agar microstructures. This wavelength was selected as it is not absorbed by water or agar. Identical rat cardiac myocytes were cultured in adjacent microstructures connected by microchannels and the interactions of asynchronously beating cardiac myocyte cells observed. Two isolated and independently beating cardiac myocytes were shown to form contacts through the narrow microchannels and by 90 minutes had synchronized their oscillations. This occurred by one of the two cells stopping its oscillation and following the pattern of the other cell. In contrast, when two sets of synchronized beating cells came into contact, those two sets synchronized without any observable interruptions to their rhythms. The results indicate that the synchronization process of cardiac myocytes may be dependent on the community size and network pattern of these cells. | Finding Single-cell based analysis methods have become more and more important for understanding cell-group effects, such as how information is controlled and recorded in a cell group or network. Early tissue culture studies of cardiac myocyte cells demonstrated that a single beating cell can influence the rate of a neighbouring cell in close contact and that a group of heart cells in a culture, beating synchronously with a rapid rhythm, can act as pacemaker for a contiguous cell sheet [ 1 ]. Though this early work predicted that a rapidly beating region of tissue acts as pacemaker for a slower one, and examined the synchronization process of two isolated beating cardiac myocytes [ 2 ], the cell-to-cell connection could not be controlled completely without using microstructures on the cultivation plate. As a means of attaining the spatial arrangement of cardiac myocytes, we have developed a new single-cell cultivation method and a system using agar microstructures, based on 1064-nm photo-thermal etching [ 3 - 6 ]. We have also developed an on-chip single-cell sorting method for cultivating particular cells chosen from a crude mixture of cells [ 7 ], and have found the adaptation process of epigenetic memorization in cells by storing the information as the localization of proteins [ 8 ]. This paper reports the practical use of the agar chamber for screening the community-size effect on the synchronization process of adjacent cardiac myocyte cells having independent oscillation. Figure 1 shows the schematic drawing of the agar microchambers on the chip. The microchambers and microchannels were constructed by localized melting of a portion of the 5-μm-thick agar layer using a 1064-nm focused infrared laser beam, a process we have termed photo-thermal etching. The 1064-nm laser beam is not absorbed by either water or the agar, and selectively melts a portion of the agar just near the thin chromium layer, as this layer absorbs the beam energy. Microstructures such as holes and channels can be easily produced using this non-contact etching within only a few minutes, without the requirement of any cast moulding process.
The melting of agar by laser occurred as follows: (a) the 1064-nm infrared laser beam was focused on the agar layer on the glass slide; (b) the agar at the focal point and on the light pathway started to melt; (c) when the focused beam was moved parallel to the chip surface, a portion of agar around the focal spot of laser melted and diffused into water; (d) after the heated spot had been moved, a channel was created at the bottom of the agar layer connecting the two adjacent holes. The microscope confirmed the melting had occurred, and then either the heating was continued until the spot size reached the desired size, or the heating position was shifted to achieve the desired shape. Cardiac myocytes were cultivated in each hole of the agar microchambers on the chip as shown in Fig. 1 . Collagen-type I (Nitta gelatin, Osaka, Japan) was coated on the glass layer surface to improve the attachment of the cell to the bottom of the microchambers. Figure 1 (A): Schematic drawing of the on-chip agar cultivation assay. (B): Optical micrograph of 24-h cultivation of two cardiac myocyte cells. (C): Time-course of oscillation of cardiac myocytes shown in Fig. (B). (D): Optical micrograph of 24-h cultivation of two sets of the synchronized pairs. (E): Time-course of oscillation of cardiac myocytes shown in Fig. (D). Neonatal rat cardiac myocytes were isolated and purified as follows. First, the hearts of 1- to 3-day-old Wistar rats (Nippon Bio-supp. Center, Tokyo, Japan) were excised under ether anaesthesia. The ventricles were separated from the atria and then washed with phosphate buffered saline (PBS, 137 mM NaCl, 2.7 mM KCl, 8 mM Na 2 HPO 4 , 1.5 mM KH 2 PO 4 , pH 7.4) containing 0.9 mM CaCl 2 and 0.5 mM MgCl 2 . The ventricles were minced in PBS without CaCl 2 or MgCl 2 and then incubated in PBS containing 0.25% collagenase (Wako, Osaka, Japan) for 30 minutes at 37°C to digest the ventricular tissue. This procedure was repeated twice more and the cell suspension was then transferred to cell culture medium (DMEM [Invitrogen Corp., Carlsbad, CA USA] supplemented with 10% fetal bovine serum, 100 U/ml penicillin, and 100 μg/ml Streptomycin) at 4°C. The cells were filtered through a 40-μm nylon mesh and centrifuged at 180 g for 5 minutes at room temperature. The cell pellet was re-suspended in a HEPES buffer (20 mM HEPES, 110 mM NaCl, 1 mM NaH 2 PO 4 , 5 mM glucose, 5 mM KCl, and 1 mM MgSO 4 , pH 7.4). Cardiac myocytes present in the suspension were separated from other cells (i.e., fibroblasts and endothelial cells) by the density centrifugation method. The cell suspension was then layered onto 40.5% Percoll (Amersham Biosciences, Uppsala, Sweden) diluted in the HEPES buffer, which had previously been layered onto 58.5% Percoll diluted in the same buffer. The cell suspension was then centrifuged at 2200 g for 30 minutes at room temperature. Cardiac myocytes were retrieved from the interface of the 40.5% and 58.5% Percoll layers. Retrieved cells were then re-suspended in the cell culture medium. An aliquot (5- μl) of the suspension was diluted to achieve a final concentration of 3.0 × 10 5 cells/ml then plated into the chip. Individual cardiac myocytes were picked up by a micropipette and manually introduced into each chip microchamber and incubated on a cell-cultivation microscope system at 37°C in the presence of a humidified atmosphere of 95% air /5% CO 2 . It should be noted that because the microchamber sidewalls were made of agar, then the cells could not easily pass over the chambers. 
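The plating density given above (3.0 × 10^5 cells/ml) implies a routine dilution computation; a trivial sketch, with all numbers illustrative:

```python
def dilution_volume(stock_density, target_density, final_volume_ml):
    """Volumes of cell stock and medium needed to hit a target density."""
    if target_density > stock_density:
        raise ValueError("stock is already more dilute than the target")
    stock_ml = final_volume_ml * target_density / stock_density
    return stock_ml, final_volume_ml - stock_ml

# e.g. a stock counted at 2.4e6 cells/ml, diluted to 3.0e5 cells/ml in 1 ml:
print(dilution_volume(2.4e6, 3.0e5, 1.0))   # (0.125 ml stock, 0.875 ml medium)
```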
Phase-contrast microscopy was used to measure the contraction rhythm of the cardiac myocytes and the network formation of cells in the two adjacent chambers that were connected by the focused beam. The spontaneous contraction rhythm of cultured cardiac myocytes was evaluated by a video-image recording method. Images of beating cardiac myocytes were recorded with a CCD camera through the use of a phase contrast microscope. The sizes (cross-sectional areas) of cardiac myocytes, which changed considerably with contraction, were also analyzed and recorded every 1/30 s by a personal computer with a video capture board. Figure 1 shows a micrograph image of two isolated, independently beating cardiac myocytes coming into contact through the microchannel. Ninety min after the physical contact, the two connected cells started to oscillate synchronously. The time course of the beating was as shown in Fig. 1 . As shown in the graph, the process of synchronization was accomplished only after one of the cells stopped beating and then synchronized its oscillation with the other cell. Movie 1 (see additional file 1, "movie1.mpg") depicts the process of beating synchronization. Once the synchronized oscillation of the two cells was accomplished (arrowhead in Fig. 1 ), the two cells maintained synchronization similar to that observed in whole tissue. A time interval of approximately 90 min was needed to form the gap junction between the two adjacent cells. The same method was also used to make more complicated network patterns of cardiac myocytes. Figure 1 shows a micrograph of a four-cell network. As shown in the graph (Fig. 1 ), the two sets of beating pairs synchronized without having to stop, unlike the behaviour previously observed for the synchronization of isolated cells (see additional file 2, "movie2.mpg"). This suggests that the synchronization dynamics and rhythm of a cell group are more stable than those of single cells. In conclusion, we present a 1064-nm photo-thermal etching technology with which to create agarose microchambers for growing networks of cardiac myocyte cells. Using the system, we observed for the first time differences in the synchronization process of cardiac myocyte cells and their dependence on community size. This system has great potential for use in the biological/medical fields for cultivating the next stage of single-cell based networks and measuring their properties in laboratories. Authors' contributions KK and TK carried out the microchamber design, cell preparation, single cell cultivation and observation, and image analysis; both contributed equally to this article. KY conceived of the study, and participated in its design and coordination. All authors read and approved the final manuscript. Supplementary Material Additional File 1: synchronized oscillation of the two cells (movie1.mpg). Additional File 2: synchronized oscillation of the two sets of cells (movie2.mpg). | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC517946.xml |
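A sketch of how the synchronization seen in this article's area traces could be quantified, assuming the 30 frames/s cross-sectional-area recordings described above are available as arrays; numpy, the peak threshold and the matching tolerance are all assumptions, not the authors' analysis code:

```python
import numpy as np

def beat_times(area, fps=30.0, thresh=1.0):
    """Times (s) of contraction peaks in a normalized cell-area trace."""
    a = np.asarray(area, dtype=float)
    a = (a - a.mean()) / a.std()
    peaks = [i for i in range(1, len(a) - 1)
             if a[i] > thresh and a[i] >= a[i - 1] and a[i] > a[i + 1]]
    return np.array(peaks) / fps

def synchrony(t1, t2, tol=0.1):
    """Fraction of cell-1 beats matched by a cell-2 beat within tol seconds."""
    if len(t1) == 0 or len(t2) == 0:
        return 0.0
    return float(np.mean([np.abs(t2 - t).min() < tol for t in t1]))
```

Tracking synchrony() over successive time windows would show the abrupt transition for the isolated pair (one cell stopping, then following the other) versus the interruption-free merging of the two synchronized pairs.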
538755 | Leniency and halo effects in marking undergraduate short research projects | Background Supervisors are often involved in the assessment of projects they have supervised themselves. Previous research suggests that detailed marking sheets may alleviate leniency and halo effects. We set out to determine whether, despite the use of such a marking schedule, leniency and halo effects were evident in supervisors' marking of undergraduate short research projects (special study modules (SSMs)). Methods Review of grades awarded by supervisors, second markers and control markers to the written reports of 4th-year medical students who had participated in an SSM during two full academic years (n = 399). Paired t-tests were used to compare mean marks, Pearson correlation to look at agreement between marks, and multiple linear regression to test the prediction of one mark from several others adjusted for one another. Results There was a highly significant difference of approximately half a grade between supervisors and second markers, with supervisors marking higher (t = 3.12, p < 0.01, difference in grade score = 0.42, 95% CI for mean difference 0.18–0.80). There was a high correlation between the two marks awarded by the supervisor for performance of the project and for the written report (r = 0.75), but a low-to-modest correlation between supervisor and second marker (r = 0.28). Linear regression analysis of the influence of the supervisors' mark for performance on their mark for the report gave a non-significant result. This suggests a leniency effect but no halo effect. Conclusions This study shows that, with the use of a structured marking sheet for the assessment of undergraduate medical students, supervisors' marks are not associated with a halo effect, but leniency does occur. As supervisor assessment is becoming more common in both undergraduate and postgraduate teaching, new ways to improve objectivity in marking and to address the leniency of supervisors should be sought. | Background There is compelling evidence from the literature that supervisors may be unreliable when asked to assess the performance of their own students. Two effects may come into play when assessing students' work: the so-called 'halo' effect [1], in which a good or bad performance in one area affects the assessor's judgement in other areas, and 'leniency' [2], in which assessors are reluctant to mark students down for a variety of reasons, including fear of impairing the student-teacher relationship, fear of a negative emotional reaction from the student, or fear of reflecting poorly on the teacher's own expertise. Increasingly, however, particularly in medical education, teachers and supervisors are being asked to assess their own students. We describe a study investigating to what extent effects such as halo and leniency were operating in supervisor-marked Special Study Modules (SSMs) in the Edinburgh University undergraduate course. SSMs were introduced into the fourth year of the 5-year undergraduate medical curriculum in 1995, in response to the recommendations of the General Medical Council's document Tomorrow's Doctors [3]. Edinburgh SSMs aim to develop students' skills in self-directed and enquiry-led learning, team working and writing a short thesis or report (of about 3000 words). They also give students an opportunity to choose an area of study and to pursue it in depth.
Students spend 8 weeks on individual projects under the supervision of a member of the University of Edinburgh academic staff. The projects cover a wide range of topics in virtually every specialty, including clinical audit, laboratory-based research and clinical projects, and over 300 supervisors are involved. For assessment, an identical structured form was used by all assessors. Supervisors were asked to assess students both on their performance during the 8-week SSM and on their written report. Each component was awarded a separate grade by the supervisor, and a combined grade was calculated by taking the mean of the two. This mean grade contributed 50% to the final SSM mark. A second marker, usually another SSM supervisor working in a related area of research, with no prior knowledge of the student or the project, also assessed the written report, and this mark contributed the remaining 50% of the final mark. It was intended that this would permit supervisors to compare their own students' projects with others and would ensure greater consistency in the marking. Where there was a discrepancy of more than one full alphabetic grade category (e.g. A and C) between the supervisor and the second marker, or where a fail grade was awarded, the report was assessed, without prior knowledge of the previous marks, by at least one other experienced member of the Board of Examiners (control marker). The mark schemes for these assessments are described in Figure 1. Other than the guidance described, there was no formal training of assessors.

Figure 1 Marking scheme

On reviewing the marks, we noticed that there appeared to be a high correlation between the supervisor's marks for any one student's performance during the attachment and the marks for their written report, but a low correlation between the supervisor's and second marker's marks for the student's written report. This observation led us to investigate the hypothesis that the supervisors' knowledge of the students influenced their mark for the written report.

Methods We reviewed the grades of all students from two full academic years (n = 399) who had participated in an SSM between 1999 and 2001 to answer the following questions: (1) What is the correlation between the supervisor's marks for performance and report, and if this is high, is there a causal relationship? (2) Is there a real difference in the marks awarded for the report between the supervisor and the second marker, and if so, what is the cause of the difference? (3) In cases of discrepant marks, where the reports were further marked by control markers, what is the correlation of the control markers with the supervisors and second markers? The grades awarded for performance and reports were translated to a numerical scale thus: A+ = 1, A = 2, A- = 3, B+ = 4, through to E = 14. No grades below E (Marginal Fail) were awarded. We used paired t-tests to compare mean marks, Pearson correlation to look at agreement between markers, and multiple linear regression to test the prediction of one mark from several others adjusted for one another.

Results Table 1 shows the mean and standard deviation, expressed on a numerical scale, of the grades given by supervisors, second markers and control markers.
Table 1 Mean and standard deviation of grades expressed on a numerical scale (grade score) awarded by the supervisor for performance and for the written report, and by the second marker and control markers for the written report (A+ = 1, A = 2, etc.; the lower the grade score, the higher the mark)

Marker                     Component marked   N     Grade score   Standard deviation
Supervisor                 Performance        383   4.12          2.60
Supervisor                 Written report     383   4.64          2.50
Supervisor                 Combined mark      389   4.45          2.48
Second marker              Written report     373   5.16          2.40
Mean of control markers    Written report      98   5.86          2.21
(Overall)                  Final mark         399   5.18          2.06

Using paired t-tests to compare mean marks for the written report between supervisors and second markers revealed a highly significant difference (t = 3.12, p < 0.01), with the supervisor scoring higher than the second marker (difference in grade score = 0.42, 95% confidence interval for mean difference 0.18–0.80). Correlation between the two marks was modest (r = 0.28). Control markers tended to mark the lower-scoring students. While control marks for the written report were numerically lower than the supervisors', the difference failed to reach significance (t = 1.81, p = 0.07). Despite there being no significant difference between control markers and second markers, correlation was low (r = 0.11). There was a considerably higher correlation between the two marks awarded by each supervisor, i.e. for the student's performance and written report (r = 0.75), but again a highly significant difference in the mean marks (t = 5.69, P < 0.001; difference in grade score = 0.52; 95% confidence interval for mean difference 0.34–0.69). The influence of the supervisor's mark for performance on his/her mark for the report was analysed by linear regression, which gave a non-significant result for the performance mark adjusted for the written mark. Table 2 summarises these comparisons.

Table 2 Summary of statistical analysis of data

                                  vs. supervisor, written report                    vs. control marker, written report
Supervisor, performance           t = 5.69, p < 0.001; highly significant           t = 3.07, p = 0.003;
                                  difference, performance scoring higher            significant difference
                                  than report (difference in grade score
                                  = 0.52); r = 0.75; linear regression
                                  non-significant
Second marker, written report     t = 3.12, p < 0.01; highly significant            t = 0.68; no significant
                                  difference, supervisor scoring higher             difference; r = 0.11
                                  (difference in grade score = 0.42, 95% CI
                                  for mean difference 0.18–0.80); r = 0.28
Control marker, written report    t = 1.81, p = 0.07; no significant
                                  difference

Discussion Analysis of the grades awarded demonstrated a significant difference in the mean marks awarded by the supervisors and second markers, with the supervisors marking nearly half a grade higher than the second markers. The correlation between these markers' assessments of the reports was also modest, suggesting that the two groups of markers were not using the same criteria to reach their decisions, despite being provided with descriptors and a mark scheme. It is important to note that most supervisors were also second markers: at the same time as assessing their own student's project they were marking others, and so had a direct and simultaneous comparison. The same individual therefore appeared to use different criteria depending on whether they were marking their supervised student's report or another's.
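For illustration, the grade coding and the three statistical comparisons used above (paired t-test, Pearson correlation, multiple linear regression) can be sketched in Python. The data below are synthetic, and the 14-point grade list is one plausible coding consistent with A+ = 1 through E = 14 (the exact intermediate grades are an assumption).

```python
# Illustrative sketch with synthetic data; not the authors' analysis code.
import numpy as np
from scipy import stats

# One plausible grade-to-score coding consistent with A+ = 1 ... E = 14.
GRADES = ["A+", "A", "A-", "B+", "B", "B-", "C+", "C", "C-",
          "D+", "D", "D-", "E+", "E"]
GRADE_SCORE = {g: i + 1 for i, g in enumerate(GRADES)}

rng = np.random.default_rng(0)
supervisor_report = rng.normal(4.6, 2.5, 383)              # synthetic scores
second_marker = supervisor_report + rng.normal(0.4, 2.9, 383)
supervisor_perf = supervisor_report + rng.normal(-0.5, 1.7, 383)

# Paired t-test: do supervisors score the report higher than second markers?
t, p = stats.ttest_rel(second_marker, supervisor_report)

# Agreement between the two markers of the same report.
r, _ = stats.pearsonr(supervisor_report, second_marker)

# Multiple regression: one mark predicted from two others jointly,
# i.e. each predictor adjusted for the other (ordinary least squares).
X = np.column_stack([np.ones(383), supervisor_perf, supervisor_report])
coef, *_ = np.linalg.lstsq(X, second_marker, rcond=None)
print(t, p, r, coef)
```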
The lack of a significant difference between the mean marks awarded by the second marker and the control marker suggests that they were awarding the same range of grades overall, but the modest correlation indicates that for individual students there was again significant inter-marker variability. Control markers, unlike supervisors and second markers (who may supervise only one project a year), have experience of reviewing large numbers of SSM reports. There was also a significant difference in the mean marks awarded by supervisors for performance and for written reports, but in this analysis there was a much higher correlation between the marks. However, further analysis of this finding by linear regression failed to demonstrate an undue influence of the performance mark on that of the report. Although we have been unable to provide evidence that the supervisor's mark for performance has an undue influence on the mark for the written report (a halo effect), we have demonstrated that supervisors mark significantly higher than second markers, suggesting a leniency effect. This indicates that the supervisor's mark is influenced by having known and worked with the student. Such effects have been demonstrated before in many forms of education [4-8]. Contributing factors may include insight into, and therefore sympathy for, the student's difficulties in performing the project; inability to be objective when the student has become part of the work team; unwillingness of the supervisor to acknowledge that a piece of work emanating from his or her team is of poor quality; or lack of the confidence or courage to feed back a bad assessment to the student personally. These factors need further exploration. Increasingly in medical education, supervisors are expected to assess their students summatively [9,10]. Assessors are unlikely to be affected equally by leniency and halo effects, and this will advantage some of their students and disadvantage others. These effects are likely to be strongest for supervisors who, like some of those in our study, are assessing a relatively small number of students and are inexperienced in assessment [6]. If we are to continue to use supervisor-based assessments, we must find ways to combat these effects. Other authors' suggestions for improving objectivity and partially overcoming halo and leniency effects include detailed marking sheets [6,11], training for assessors in providing feedback on assessments [5], and providing feedback on assessors' marking performance [6]. We are aware that the marking scheme in Figure 1, while structured, still permitted a fair degree of interpretation by examiners. Since carrying out this project we have introduced more detailed marking schemes, with specific questions and detailed descriptors for each level of achievement, for assessing the students' performance and report. These now include an assessment of how the student overcame any problems that arose and how this may have affected the outcome of the project. We have also provided more detailed guidance to markers, and we intend to review the inter-marker variability in light of this increased guidance. These findings raise the ethical question of whether or not we should continue to utilise supervisors in this assessment process.
We plan to continue to use supervisors as markers because of the expertise they bring to the specific field of study and their realistic appreciation of the difficulties encountered by the student during the course of the project. Moreover, the supervisor is sometimes the only person capable of marking the student's performance, which we consider a very valuable assessment of the student's personal and professional abilities. We do realise that this is a difficult responsibility for supervisors. Better staff development of supervisors as markers and a more detailed marking schedule may help ensure appropriate marks for performance. Furthermore, we will consider introducing 360-degree assessment to include all members of staff who have interacted with the student, particularly to improve formative feedback to students.

Conclusions In this paper we have demonstrated the problem of inter-marker variability between the supervisor of undergraduate projects and the second marker, even when a mark scheme is used. This emphasises the difficulty of creating mark schemes, and of providing adequate staff training, that ensure markers apply the criteria in the same way to very varied reports. On average, supervisors awarded higher marks for their students' reports than the second markers, but the influence of the performance mark on this was not significant. We suggest that this difference is due to leniency on the part of the supervisor, resulting from the student being part of the supervisor's team, but these influences need further exploration.

Competing interests SR, HC and BMcK are all involved in undergraduate teaching at the University of Edinburgh.

Authors' contributions BMcK, HC and SR contributed equally to the design of the research and the writing of the project. RE analysed the data.

Pre-publication history The pre-publication history for this paper can be accessed online. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC538755.xml |
555585 | Arginase attenuates inhibitory nonadrenergic noncholinergic nerve-induced nitric oxide generation and airway smooth muscle relaxation | Background Recent evidence suggests that endogenous arginase activity potentiates airway responsiveness to methacholine by attenuating agonist-induced nitric oxide (NO) production, presumably by competition with epithelial constitutive NO synthase for the common substrate, L-arginine. Using guinea pig tracheal open-ring preparations, we investigated the involvement of arginase in the modulation of neuronal nitric oxide synthase (nNOS)-mediated relaxation induced by inhibitory nonadrenergic noncholinergic (iNANC) nerve stimulation. Methods Electrical field stimulation (EFS; 150 mA, 4 ms, 4 s, 0.5–16 Hz)-induced relaxation was measured in tracheal preparations precontracted with histamine to 30% of maximal tone, in the presence of 1 μM atropine and 3 μM indomethacin. The contribution of NO to the EFS-induced relaxation was assessed with the nonselective NOS inhibitor L-NNA (0.1 mM), while the involvement of arginase activity in the regulation of EFS-induced NO production and relaxation was investigated using the specific arginase inhibitor nor-NOHA (10 μM). Furthermore, the role of substrate availability to nNOS in EFS-induced relaxation was assessed in the presence of various concentrations of exogenous L-arginine. Results EFS induced a frequency-dependent relaxation, ranging from 6.6 ± 0.8% at 0.5 Hz to 74.6 ± 1.2% at 16 Hz, which was inhibited by the NOS inhibitor L-NNA by 78.0 ± 10.5% at 0.5 Hz to 26.7 ± 7.7% at 8 Hz (P < 0.01 all). In contrast, the arginase inhibitor nor-NOHA increased EFS-induced relaxation by 3.3 ± 1.2-fold at 0.5 Hz to 1.2 ± 0.1-fold at 4 Hz (P < 0.05 all); this increase was reversed by L-NNA to the level of control airways in the presence of L-NNA (P < 0.01 all). Similar to nor-NOHA, exogenous L-arginine increased EFS-induced airway relaxation (P < 0.05 all). Conclusion The results indicate that endogenous arginase activity attenuates iNANC nerve-mediated airway relaxation by inhibiting NO generation, presumably by limiting L-arginine availability to nNOS. | Background The inhibitory nonadrenergic noncholinergic (iNANC) nervous system is the most effective bronchodilating neural pathway of the airways. Inhibition of nitric oxide synthase (NOS) markedly reduces the iNANC relaxation of both guinea pig [1-3] and human airways [4,5], indicating that nitric oxide (NO) is a major neurotransmitter of the iNANC system. In addition, vasoactive intestinal polypeptide (VIP) has been implicated in iNANC relaxation [6,7], and colocalization of NOS and VIP has been demonstrated both in guinea pig [8] and in human airway nerves [9]. NO is generated by a family of NOS isoforms that utilize the semi-essential amino acid L-arginine, oxygen and NADPH as substrates to produce NO and L-citrulline [10]. Three isoforms of NOS have been identified: neuronal NOS (nNOS), endothelial NOS (eNOS) and inducible NOS (iNOS). In the airways, the constitutive NOS (cNOS) isoforms are mainly expressed in the iNANC neurons (nNOS), the endothelium (eNOS) and the epithelium (nNOS and eNOS), whereas iNOS, which is induced by proinflammatory cytokines during airway inflammation, is mainly expressed in macrophages and epithelial cells [11]. Another L-arginine-metabolizing enzyme is arginase, which hydrolyzes L-arginine to L-ornithine and urea.
Arginase is classically considered to be an enzyme of the urea cycle in the liver, but it also occurs in extrahepatic tissues, including the lung [12,13]. Two distinct isoforms of arginase have been identified in mammals: arginase I, a cytosolic enzyme mainly expressed in the liver, and arginase II, a mitochondrial enzyme mainly expressed in extrahepatic tissues [13]. Extrahepatic arginase has been implicated in the regulation of NO synthesis by limiting the availability of intracellular L-arginine for NOS [12-15]. In addition, arginase might be involved in cell growth and tissue repair via the production of L-ornithine, a precursor of polyamines and proline [13]. Both arginase isoforms are constitutively expressed in the airways, particularly in the bronchial epithelium and in fibroblasts from peribronchial connective tissue [12]. Using a perfused guinea pig tracheal tube preparation, we have previously demonstrated that endogenous arginase activity is functionally involved in the regulation of airway smooth muscle tone [16]. Endogenous arginase potentiates methacholine-induced airway constriction by diminishing agonist-induced NO production, through competition with epithelial cNOS for the common substrate, L-arginine [16]. Previous studies had demonstrated that L-arginine availability is indeed a limiting factor for agonist-induced NO production and airway relaxation [17]. A role for arginase in the iNANC system has been found in the internal anal sphincter [18] and the penile corpus cavernosum [19,20]: arginase inhibition increased electrical field stimulation (EFS)-induced relaxation of these preparations, indicating that endogenous arginase activity attenuates nNOS-mediated NANC relaxation. The role of endogenous arginase in the regulation of iNANC-derived NO generation in the airways has not yet been investigated. In the present study, we demonstrate that endogenous arginase activity and L-arginine availability are importantly involved in the modulation of iNANC nerve-mediated NO production and relaxation of guinea pig tracheal smooth muscle.

Methods

Animals Male specific-pathogen-free HsdPoc:Dunkin Hartley guinea pigs (Harlan, Heathfield, UK), weighing 500–800 g, were used in this study. The animals were group-housed in cages in climate-controlled animal quarters and given water and food ad libitum, with a 12-h on/12-h off light cycle. All protocols described in this study were approved by the University of Groningen Committee for Animal Experimentation.

Tissue preparation The guinea pigs were sacrificed by a sharp blow to the head. After exsanguination, the trachea was removed from the larynx to the bronchi and rapidly placed in Krebs-Henseleit (KH) buffer solution at 37°C, gassed with 95% O2 and 5% CO2. The composition of the KH solution (in mM) was: NaCl 117.50; KCl 5.60; MgSO4 1.18; CaCl2 2.50; NaH2PO4 1.28; NaHCO3 25.0 and D-glucose 5.50; pH 7.4. The trachea was prepared free of serosal connective tissue. Twelve single proximal tracheal open-ring preparations were mounted for isotonic recording (0.3 g preload) between two parallel platinum point electrodes in water-jacketed (37°C) organ baths containing 20.0 ml of gassed KH solution and indomethacin (3 μM), which remained present throughout the experiment to eliminate any influence of prostanoids.
Electrical field stimulation-induced relaxation experiments After a 30-min equilibration period, tracheal preparations were relaxed with isoprenaline (0.1 μM) to establish basal tone. After a 30-min washout period, with three washes with fresh KH solution, the maximal contraction of the tracheal preparations to histamine was determined by cumulative additions of the agonist (0.1, 1, 10 and 100 μM). After washout (30 min), the tracheal preparations were precontracted with histamine to 30% of the maximal histamine-induced tone in the presence of atropine (1 μM) to prevent EFS-induced cholinergic airway contraction. On the plateau, biphasic EFS (150 mA, 4 ms, 4 s, 0.5–16 Hz) was applied and frequency-response curves (0.5–16 Hz in doubling steps) were recorded; one frequency-response curve was performed per preparation. When used, the nonselective NOS inhibitor Nω-nitro-L-arginine (L-NNA; 100 μM), the specific arginase inhibitor Nω-hydroxy-nor-L-arginine (nor-NOHA; 10 μM), a combination of both inhibitors, or L-arginine (0.3, 1.0 or 5.0 mM) was applied 30 min prior to the addition of histamine. In line with previous observations [21], neither the NOS inhibitor, the arginase inhibitor nor L-arginine affected agonist-induced tone in the open-ring preparations. All measurements were performed in triplicate. After the final EFS-induced relaxation, followed by washout, isoprenaline (10 μM) was added to establish basal tone.

Data analysis All individual relaxations elicited by EFS were estimated as the peak height of the EFS-induced response and expressed as a percentage of the maximal relaxation established in the presence of isoprenaline. The contribution of NO to the EFS-induced relaxation was determined by the effect of the NOS inhibitor L-NNA. Similarly, the role of arginase activity in the modulation of EFS-induced airway relaxation was determined by the effect of the arginase inhibitor nor-NOHA. The role of substrate availability in EFS-induced airway relaxation was assessed by measuring the responses in the presence of various concentrations of exogenous L-arginine. All data are expressed as means ± s.e.m. Statistical significance of differences was evaluated using a paired or unpaired two-tailed Student's t-test as appropriate, and significance was accepted at P < 0.05.

Chemicals Histamine dihydrochloride, indomethacin, atropine sulphate, Nω-nitro-L-arginine, (-)-isoprenaline hydrochloride and L-arginine hydrochloride were obtained from Sigma Chemical Co. (St. Louis, MO, USA). Nω-hydroxy-nor-L-arginine was kindly provided by Dr J.-L. Boucher (Université Paris V).

Results In guinea pig tracheal open-ring preparations, EFS induced a frequency-dependent relaxation of histamine-induced tone, ranging from 6.6 ± 0.8% at 0.5 Hz to 74.6 ± 1.2% at 16 Hz. Incubation with the NOS inhibitor L-NNA caused a significant inhibition of the EFS-induced relaxation at 0.5 to 8 Hz, particularly at the lower frequencies. The effect of L-NNA ranged from 78.0 ± 10.5% inhibition at 0.5 Hz to 26.7 ± 7.7% inhibition at 8 Hz (P < 0.01 all; Fig. 1).

Figure 1 Role of NO and arginase in iNANC nerve-induced relaxation of guinea pig tracheal smooth muscle. Electrical field stimulation-induced relaxation of precontracted guinea pig tracheal open-ring preparations in the absence and presence of the NOS inhibitor L-NNA (100 μM), the arginase inhibitor nor-NOHA (10 μM) or a combination of both inhibitors. Results are means ± s.e.m. of 8 experiments.
* P < 0.05 and ** P < 0.01 compared to control; † P < 0.05 and ‡ P < 0.01 compared to nor-NOHA-treated.

In contrast, incubation with the arginase inhibitor nor-NOHA significantly enhanced EFS-induced relaxation, by 3.3 ± 1.2-fold at 0.5 Hz to 1.2 ± 0.1-fold at 4 Hz (P < 0.05 all; Fig. 1), that is, at the frequencies most sensitive to L-NNA. The increased relaxation in the presence of nor-NOHA was fully reversed by L-NNA (P < 0.05 all), to the level of control preparations in the presence of L-NNA alone (Fig. 1). Incubation with L-arginine caused a dose-dependent increase of total EFS-induced relaxation, which was maximal at 5.0 mM L-arginine (data not shown). In the presence of 5.0 mM L-arginine, a significant increase in EFS-induced relaxation was observed at all frequencies compared with untreated preparations (P < 0.05 all; Fig. 2). At the lower frequencies, this increase was similar to the increase in EFS-induced relaxation observed after incubation with nor-NOHA (Fig. 2).

Figure 2 Role of L-arginine availability and arginase in iNANC nerve-induced relaxation of guinea pig tracheal smooth muscle. Electrical field stimulation-induced relaxation of precontracted guinea pig tracheal open-ring preparations in the absence and presence of exogenous L-arginine (5.0 mM) or the arginase inhibitor nor-NOHA (10 μM). Results are means ± s.e.m. of 5–13 experiments. * P < 0.05 and ** P < 0.01 compared to control.

Discussion Using perfused tracheal preparations, we have previously demonstrated that endogenous arginase activity is involved in the regulation of agonist-induced airway constriction by inhibition of NO production, presumably through competition with cNOS for L-arginine [16]. In the present study, we demonstrate that endogenous arginase activity is also involved in the regulation of iNANC nerve-mediated airway smooth muscle relaxation. In line with previous observations [1], the NOS inhibitor L-NNA inhibited EFS-induced iNANC relaxation of guinea pig tracheal preparations. This inhibition was most pronounced at the lower frequencies, indicating a prominent role of nNOS-derived NO at these frequencies. By contrast, inhibition of arginase activity by nor-NOHA caused a considerable (up to 3.3-fold) increase in EFS-induced relaxation at low frequencies, indicating that endogenous arginase activity restricts iNANC nerve-mediated airway smooth muscle relaxation. The increased relaxation after arginase inhibition was completely reversed by L-NNA, clearly indicating that arginase activity attenuates iNANC nerve-mediated airway smooth muscle relaxation by limiting NO production, presumably through competition with nNOS for their common substrate, L-arginine. The observation that exogenous L-arginine significantly enhanced the EFS-induced airway smooth muscle relaxation confirms that L-arginine is indeed a limiting factor in EFS-induced, NO-mediated airway smooth muscle relaxation under basal conditions. Remarkably, the effect of nor-NOHA was similar to that observed in the presence of the maximally effective L-arginine concentration, indicating that endogenous arginase activity is a major factor in the regulation of neural NO-mediated airway smooth muscle relaxation.
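As a minimal sketch of the Data analysis section above, the following Python fragment expresses EFS peak heights as a percentage of the isoprenaline-established maximal relaxation and applies a paired two-tailed t-test to a control versus nor-NOHA comparison at one frequency. All numbers and names are made up for illustration; they are not the study's data.

```python
# Hypothetical sketch of the data analysis: % of maximal relaxation
# plus a paired two-tailed t-test, as described in Data analysis.
import numpy as np
from scipy import stats

def percent_relaxation(peak_heights_mm, max_relaxation_mm):
    """EFS peak heights as % of the isoprenaline-defined maximum."""
    return 100.0 * np.asarray(peak_heights_mm) / max_relaxation_mm

# Paired preparations at 0.5 Hz, control vs. nor-NOHA (invented values).
control = percent_relaxation([0.7, 0.9, 0.6, 0.8, 0.7, 0.9, 0.8, 0.6], 10.0)
nor_noha = percent_relaxation([2.2, 2.6, 2.0, 2.5, 2.3, 2.8, 2.4, 2.1], 10.0)
t, p = stats.ttest_rel(nor_noha, control)
print(f"mean ± s.e.m.: {nor_noha.mean():.1f} ± {stats.sem(nor_noha):.1f}; P = {p:.3f}")
```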
Recently, we discovered that increased arginase activity is importantly involved in the pathophysiology of asthma, contributing to the allergen-induced NO deficiency and the subsequent airway hyperresponsiveness to methacholine after the early asthmatic reaction by limiting the availability of L-arginine for cNOS to produce bronchodilating NO [22]. Arginase activity and expression were also considerably increased in two mouse models of allergic asthma, irrespective of whether the animals were challenged with ovalbumin or with Aspergillus fumigatus [23]. Moreover, enhanced mRNA or protein expression of arginase I was observed in human asthmatic lung tissue, particularly in inflammatory cells and in the airway epithelium [23], while increased arginase activity was measured in asthmatic serum [24]. In guinea pig tracheal strips, it has previously been demonstrated that EFS-induced iNANC relaxation is reduced after ovalbumin challenge, due to a deficiency of iNANC nerve-derived NO [25]. Thus, it is tempting to speculate that increased arginase activity could similarly be involved in the allergen-induced reduction of iNANC activity. A role for arginase in restricting the availability of L-arginine for nNOS in iNANC nerves has also been proposed in the pathophysiology of erectile dysfunction [19]. In support, increased expression and activity of arginase II, contributing to reduced NO production, has been demonstrated in diabetic cavernosal tissue [26]. Neuronal arginase may also be involved in gastrointestinal motility disorders, by reducing nNOS-mediated iNANC relaxation in the internal anal sphincter [18].

Conclusion This is the first demonstration that endogenous arginase activity is functionally involved in iNANC nerve activity in the airways, by attenuating the generation of nNOS-derived NO. Disturbance of this novel regulatory mechanism of airway responsiveness might be involved in the pathophysiology of allergic asthma.

Abbreviations cNOS, constitutive nitric oxide synthase; EFS, electrical field stimulation; eNOS, endothelial nitric oxide synthase; iNANC, inhibitory nonadrenergic noncholinergic; iNOS, inducible nitric oxide synthase; KH, Krebs-Henseleit; L-NNA, Nω-nitro-L-arginine; NADPH, nicotinamide adenine dinucleotide phosphate; nNOS, neuronal nitric oxide synthase; nor-NOHA, Nω-hydroxy-nor-L-arginine; VIP, vasoactive intestinal polypeptide.

Competing interests The authors declare that they have no competing interests.

Authors' contributions HMa designed and coordinated the study, performed a major part of the experiments, performed the statistical analysis and drafted the manuscript. MAT assisted substantially in performing the experiments. JZ participated in the design of the study, the interpretation of results and the final revision of the manuscript. HMe conceived of the study and participated in its design and direction, as well as in preparing the manuscript. All authors read and approved the final manuscript. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC555585.xml |
539305 | Identification of proteins in laser-microdissected small cell numbers by SELDI-TOF and Tandem MS | Background Laser microdissection allows precise isolation of specific cell types and compartments from complex tissues. To analyse proteins from small cell numbers, we combined laser-microdissection and manipulation (LMM) with mass spectrometry techniques. Results Hemalaun-stained mouse lung sections were used to isolate 500–2,000 cells, enough material for complex protein profiles by SELDI-TOF MS (surface-enhanced laser desorption and ionization/time-of-flight mass spectrometry) employing different chromatographic ProteinChip® Arrays. Initially, to establish the principle, we identified specific protein peaks from 20,000 laser-microdissected cells, combining column chromatography, SDS-PAGE, tryptic digestion, SELDI technology and tandem MS/MS using a ProteinChip® Tandem MS Interface. Secondly, we aimed to reduce the labour required to microdissect several thousand cells. We therefore first defined target proteins in a few microdissected cells, then recovered them in whole-tissue-section homogenates from the same lung and applied these homogenates to the analytical techniques. Both approaches resulted in successful identification of the selected peaks. Conclusion Laser-microdissection may thus be combined with SELDI-TOF MS for the generation of protein marker profiles in a cell-type- or compartment-specific manner in complex tissues, linked with mass fingerprinting and peptide sequencing by tandem MS/MS for definitive characterization. | Background Investigation of cell-type-specific gene expression and regulation in complex tissues is hampered by the limited accuracy of cell isolation and the limited sensitivity of post-isolation analysis. Laser-microdissection techniques have proven to be a reliable tool for selectively harvesting cell clusters or single-cell profiles from stained tissue sections for mRNA and protein investigation. When combined with qualitative and quantitative PCR, mRNA can be successfully analysed from a few cells [1-3]. The combination of laser-microdissection and cDNA arrays allows the investigation of differential gene expression in a cell-type-specific manner for a multitude of genes in parallel [4,5]. For proteome analysis, two-dimensional polyacrylamide gel electrophoresis (2D-PAGE) has previously been performed on 50,000 to 250,000 microdissected cells, followed by peptide mass fingerprinting of single spots [6-8]. Isolation of such high cell numbers by laser-microdissection is extremely time-consuming, or even impracticable in complex tissues. Recently, several groups have successfully combined laser-microdissection with surface-enhanced laser desorption/ionization mass spectrometry (SELDI MS) to generate reproducible MS profiles from 200–5,000 cells [9-14]. Changes in these protein profiles resulting from different biological conditions can be employed as biomarkers. Similarly, using matrix-assisted laser desorption/ionization mass spectrometry (MALDI MS), spectra could be generated from 500 to 2,500 microdissected cells [15,16]. However, the definitive identification of the peptides/proteins underlying single biomarkers from laser-microdissected material demands substantially higher cell amounts and laborious, time-intensive procedures. To date, only one biomarker has been identified after profiling of laser-microdissected tissue. In this case, Melle et al.
combined 2D-PAGE with peptide mapping and tandem mass spectrometry to identify a protein expressed at significantly higher levels in tumour tissue, and confirmed its identity by immunodepletion assay and immunohistochemistry [13]. In addition to generating compartment-specific biomarker profiles, we aimed to develop alternative strategies circumventing the laborious 2D-PAGE for definitive protein identification using limited cell numbers derived from microdissected material. The strategies were evaluated on the septal and vascular compartments of the complex lung tissue.

Results

Generation of protein profiles by SELDI mass spectrometry Laser-microdissection and manipulation was used to isolate 500–2,000 alveolar septum cells (Figure 1), which were then transferred into 15 μl of HEPES/Triton X-100 lysis buffer. Approximately 30 intrapulmonary vessels (corresponding to 500–2,000 cell profiles) were microdissected from tissue sections and lysed identically. Following the first isolation, the remaining cell pellet was subjected to Urea/Thiourea/CHAPS (UTC) buffer. (a) To assess the effect of different lysis buffers and different surface properties of the ProteinChip® Arrays, HEPES/Triton X-100 protein lysate and UTC lysate were applied independently to SAX (strong anionic exchanger) and WCX (weak cationic exchanger) ProteinChip® Arrays. Compared with the weaker HEPES lysis, UTC buffer yielded remarkably more peaks (signal-to-noise ratio (S/N) ≥ 3) on WCX arrays. On the other hand, HEPES buffer gave more individual spectra on SAX arrays. 500 cells were sufficient to detect more than 35 peaks on both SAX and WCX arrays; however, immobilization of 2,000 cells resulted in over 50 peaks on WCX arrays (Figure 2A). In view of limiting cell numbers, a two-step extraction procedure (HEPES buffer followed by UTC) proved useful for displaying a larger number of peaks for differential expression analysis. This procedure was therefore used for all further profiling experiments. (b) Comparing alveolar septum cells to intrapulmonary vessels, the profiles differed considerably, with only a few overlapping peaks. Representative profiles are given in Figure 2B. (c) Four representative SELDI-TOF MS spectra of alveolar septum cells from four different animals are shown in Figure 3A. These data show good reproducibility of protein detection by SELDI-MS, in agreement with previous studies by Zhukov et al. [14], who also assessed the reproducibility of SELDI-MS using laser-microdissected lung material. To determine the limit in protein abundance for further identification, peaks with different intensities were chosen: one high-abundance protein with a molecular weight of 15.7 kD and two low-abundance proteins of 13.8 and 14.0 kD, respectively. The three protein peaks with different intensities are presented in the zoomed area of Figure 3B. For the identification of these proteins, the following strategies were evaluated.

Enrichment of proteins from microdissected cells by column chromatography and SDS-PAGE Protein lysate of 10,000 to 20,000 microdissected septum cells was extracted with UTC buffer; the remaining material on the same needles was further extracted with SDS sample buffer. Protein samples from the UTC and SDS extracts were separated by SDS-PAGE. Although both extracts showed several colloidal Coomassie Brilliant Blue (CBB)-stained bands, the SDS extract revealed several proteins in the MW region between 12–16 kD (Figure 4).
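The peak-counting criterion used in the profiling experiments above (peaks with S/N ≥ 3) can be illustrated with a short sketch. The noise estimate below (median absolute deviation of the baseline) and the function name are assumptions for illustration only; the actual analysis relied on the ProteinChip software's own peak detection.

```python
# Hypothetical sketch: keep spectrum peaks whose signal-to-noise ratio
# is >= 3, with noise estimated from the median absolute deviation.
import numpy as np
from scipy.signal import find_peaks

def peaks_sn3(mz: np.ndarray, intensity: np.ndarray, sn_min: float = 3.0):
    # Robust noise estimate (assumption; not the vendor's noise model).
    noise = 1.4826 * np.median(np.abs(intensity - np.median(intensity)))
    idx, _ = find_peaks(intensity, height=sn_min * noise)
    return mz[idx], intensity[idx] / noise  # peak masses and their S/N
```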
Some of the clearly separated gel bands in the molecular weight region of the target proteins were excised and subjected to in-gel trypsin digestion. Band 1 represented the 14.0 kD peak, while band 2 corresponded to the 13.8 kD peak, as identified later by peptide mapping and MS/MS experiments. The 15.7 kD protein was enriched by micro-spin column chromatography using a Q HyperD® spin column (Ciphergen Biosystems, CA). The protein extract equivalent to approximately 50,000 cells was applied to the column, and 6 fractions were eluted according to a stepwise pH gradient and concentrated by trichloroacetic acid precipitation. Aliquots of 3 μl (100 μl total volume) of each fraction were applied to an NP20 ProteinChip® Array (hydrophilic chemistry) to detect the presence and enrichment of the selected proteins. In the organic fraction (last elution step), all three proteins (13.8, 14.0 and 15.7 kD) were detected on an NP20 ProteinChip®. After separation of the complete organic fraction by SDS-PAGE, proteins were stained with colloidal CBB. A protein band with an estimated molecular weight between 15–16 kD was excised and subjected to tryptic digestion (not shown).

Identification of the isolated target proteins by tryptic peptide mass fingerprinting on ProteinChip® Arrays and direct peptide fragmentation by tandem MS/MS Protein bands isolated from the gel were subjected to trypsin digestion. Gel pieces were extracted twice and the resulting peptide fragments applied to NP20 and H4 ProteinChip® Arrays, with hydrophilic and hydrophobic chromatographic properties, respectively. Peptide mass fingerprinting of the gel bands was done using the PBS IIc instrument. The results are given in Table 1. Two histone proteins (bands 1 and 2) and haemoglobin beta were the first candidates in the ProFound database search. For unambiguous identification, selected peptides were sequenced directly from the arrays by collision-induced dissociation (CID), using a ProteinChip® Interface coupled to a tandem mass spectrometer [17-19]. Representative MS and MS/MS spectra from band 2 are given in Figure 5. The peptide with an m/z ratio of 1692.88 (Figure 5A) was selected for sequencing by CID-MS/MS (Figure 5B). The obtained sequence was assigned to histone proteins (H2A1 or H2A4, Table 1). Notably, the molecular masses of the identified proteins correlated well with the results obtained from the profiling experiments (Table 1 and Figure 3B). Analysis of tryptic peptide fragments from band 3 confirmed the molecular mass from the profiling experiments (15.7 kD) and showed strong evidence for haemoglobin beta (sequence coverage 45.7%).

Enrichment of the target protein markers from tissue sections by SDS-PAGE Owing to the labour-intensive requirements of isolating high cell numbers by laser-microdissection, we sought to recover the target proteins in whole lung tissue sections, intending to use this material for subsequent protein identification. Five to seven cryosections of lung tissue (10 μm), previously known from laser-microdissection to contain the target proteins, were collected, and proteins were isolated following the same procedure as for the laser-microdissected cells. Applying an aliquot corresponding to approximately 4,000 cells to a spot of a WCX ProteinChip® Array, we were able to recover the target protein markers within the homogenate spectrum (Figure 6). Using another aliquot of the section material for SDS-PAGE, we isolated single bands of the expected weight, as already described for the microdissected cells.
This material was subjected to tryptic digestion and peptide mass fingerprinting. Again, the two histone proteins and haemoglobin beta were identified.

Discussion The combination of laser-microdissection and mass spectrometry has been shown to be a reliable tool for compartment- and cell-type-specific biomarker profiling in complex tissues. Several groups have described the successful generation of mass spectra from as few as 500 to 2,000 cells after microdissection [11,15,16], which was reproduced in the present investigation. Such spectra may be employed to ascertain the cellular origin of microdissected samples. Moreover, when showing differential expression under various biological conditions, mass spectral peaks may serve as biomarkers, independent of their identification. This fast and convenient technique thus represents a valuable tool for providing disease- or status-specific protein marker patterns to be used for diagnostic and predictive purposes [20-22]. In the present investigation we confirmed the feasibility and excellent reproducibility of this approach when analyzing lung tissue compartments. To investigate proteins on 2D gels, high amounts of cells are required: using this approach as a starting point for protein profiling with subsequent MS identification, 50,000 to 250,000 microdissected cells have to be loaded per gel. Our aim was to minimize this laborious procedure, which is hardly compatible with microdissection of minor cellular compartments. Thus, we first generated compartment-specific profiles from laser-microdissected material by mass spectrometry and subsequently collected high cell numbers to identify the previously selected proteins. Moreover, using an interface to a tandem MS/MS instrument, analysis of tryptic mass fingerprints can be performed from the same array without the need for additional material. To reduce the amount of material needed on the gel, different staining techniques can be used (e.g. silver or SYPRO Ruby® fluorescence staining). While these stains offer higher sensitivity, the problem is shifted towards the MS technique: identification may fail because of the minute amount of protein per gel spot. Therefore, in our study we used robust and straightforward CBB staining, ensuring that the amount of material required for the MS techniques could be detected. Additionally, for low-abundance proteins, trypsin digestion of an eluted, gel-resolved protein can be performed directly on the array. Marker protein isolation and enrichment of the protein peak may also be enhanced by applying suitable pH and salt conditions directly on the array. A promising alternative to labour-intensive 2D-gel applications is the detection of biomarker proteins in microdissected cells and their subsequent identification in tissue slices. Since the exact weight of the target protein is known from the preceding experiments with microdissected cells, the section homogenate can be screened for the respective peaks. As tissue sections are easy and fast to obtain, they were used to generate profiles on WCX arrays from the same lungs as used for microdissection. While these spectra differed partially from those derived from microdissected alveolar septum cells, the peaks corresponding to the pre-defined target proteins were easily detected in the homogenate spectrum. In addition, the corresponding bands could be detected by SDS-PAGE, and subsequent tryptic mass fingerprinting confirmed the identity of histones H2B F and H2A1 and haemoglobin beta.
Typically, high-resolution 2D techniques require several days from sample application to the final staining of protein spots. Isolation of 500 to 2,000 cells by laser-microdissection requires minutes to hours, depending on the targeted cell type, tissue area or organ compartment. Array pre-treatment, immobilization of the lysate and washing take around 90 minutes, and MS measurement is performed within a few minutes. This comparison illustrates the time saved by our approach. Owing to the accuracy of the measurement, we found that the molecular weight derived from SELDI mass spectrometry corresponded very well with the exact protein mass. For the three investigated proteins, the mass accuracy was approximately 0.1% with external calibration, and is thus remarkably better than that of 2D-PAGE. Nevertheless, the exact molecular weight alone is insufficient to identify the protein concerned directly via database searching.

Conclusions Combination of laser-microdissection with SELDI-TOF MS generates reproducible and credible biomarker profiles in a cell-type- or compartment-specific manner from complex tissues. For identification of the underlying peptides/proteins, this approach may be combined with enrichment and isolation strategies, linked with mass fingerprinting by SELDI-TOF MS and peptide sequencing by tandem MS/MS for definitive chemical characterization. These techniques allow analysis of differential protein expression in low cell numbers microdissected from complex tissues.

Methods

Mouse lung preparation All animal experiments were approved by the local authorities [Regierungspräsidium Giessen, no. II25.3-19c20-15(1) GI20/10-Nr.22/2000]. Lung preparation was performed as described previously [5]. In brief, male BALB/c mice (Charles River, Sulzfeld, Germany; 20–22 g) were exposed to normobaric normoxia in a chamber at FiO2 = 0.21. After 1 day, the animals were sacrificed; the lungs were flushed via the pulmonary artery, and 800 μl of prewarmed TissueTek® (Sakura Finetek, Zoeterwoude, The Netherlands) was instilled into the airways via a tracheal cannula. Afterwards, the lungs were excised and immediately frozen in liquid nitrogen.

Laser-assisted microdissection Laser-microdissection and manipulation was performed as described previously [1-3,5]. In brief, cryosections (10 μm) of lung tissue were mounted on glass slides. After hemalaun staining for 30 s, sections were subsequently immersed in 70% and 96% ethanol and stored in 100% ethanol until use. No more than 10 sections were prepared at the same time, to restrict storage time. Alveolar septum cells and intrapulmonary vessels, respectively, were microdissected under visual control using the Laser Microbeam System (P.A.L.M., Bernried, Germany) and isolated with a sterile 30 G needle (Figure 1). Needles with adherent material were transferred into a reaction tube containing HEPES/Triton X-100 lysis buffer (50 mM HEPES, pH 7.2, 1% Triton X-100).

Comparative protein expression profiling by SELDI Needles with adherent cells were transferred to reaction tubes containing 15 μl of the HEPES/Triton X-100 lysis buffer. After vigorous shaking for 30 min at room temperature, the samples were centrifuged at 14,000 g for 10 min. 3 μl of the supernatant were directly applied to the spots of SAX and WCX ProteinChip® Arrays and incubated in a humid chamber for 1 h. For profiling, SAX ProteinChip® Arrays were preincubated in SAX binding buffer (100 mM Tris, pH 8.5, 0.02% Triton X-100).
After the first extraction of the microdissected material with HEPES buffer, 15 μl of UTC buffer (6 M urea, 2 M thiourea, 2% CHAPS, 75 mM DTT) were applied to the needles/cells. The pellet in UTC buffer was centrifuged for 10 min at 14,000 g and the complete supernatant was used for profiling on WCX ProteinChip® Arrays. These were pre-treated with 10 mM HCl for 5 min and equilibrated twice with WCX binding buffer (100 mM NaOAc, pH 4.5, 0.02% Triton X-100) for 5 min. Fifteen μl of UTC supernatant were diluted 1:10 v/v in WCX binding buffer and incubated under vigorous shaking for 45 min. To handle the large sample volume (150 μl), the Bioprocessor (Ciphergen Biosystems, Inc.) was used. After removal of the samples, every spot was washed twice with binding buffer, followed by a final 10-s water rinse. After air-drying, saturated sinapinic acid (0.6 μl) dissolved in 50% acetonitrile and 0.5% trifluoroacetic acid (TFA) was added twice. Subsequently, mass analysis of bound peptides/proteins was performed using the Ciphergen PBS IIc system. ProteinChip® Arrays were analyzed by averaging 100–150 laser shots collected in positive mode. The optimization range of the time-lag focusing was set between 10–30 kD. Deflector settings were used to filter out peaks with m/z < 2,000. Calibration was performed externally using purified peptide and protein standards. The spectra obtained were analyzed with the ProteinChip Software, version 3.01.

Small-scale column chromatography and SDS-PAGE SDS-PAGE was performed as described previously [23] with minor modifications. We used 15% acrylamide separation and 5% stacking gels in a mini-gel chamber (Roth, Karlsruhe). Supernatants of the UTC buffer extraction, containing approximately 2,000 cells/μl, were mixed 1:1 with twofold-concentrated sample buffer (100 mM Tris-Cl pH 6.8, 4% SDS, 20% glycerol, 3% DTT, 0.05% bromophenol blue). Samples were boiled for 3 min and a total volume of 20 μl (equivalent to 20,000 cells) was loaded onto single lanes of the gel. Afterwards, 1 μl of SDS sample buffer per 1,000 cells was added to the needles in order to solubilize the proteins remaining after UTC buffer extraction. These SDS extracts were also applied to SDS gels. For selective enrichment of marker proteins, we used small anionic exchange spin columns, yielding 6 fractions after elution with a stepwise pH gradient from pH 9–3 plus an organic fraction. The protein amount corresponding to 50,000 cells was loaded onto a Q HyperD spin column, and aliquots of the eluted fractions were analyzed on NP20 ProteinChip® Arrays to reveal the enrichment of the selected protein peaks in a given fraction. The fractions containing the enriched proteins were concentrated by TCA precipitation in order to apply the complete fraction to a single lane of an SDS gel. Staining was performed with colloidal Coomassie Brilliant Blue (CBB; Roth).

Protein tryptic digestion (peptide mass fingerprint) The CBB-stained bands matching the expected molecular weight regions of the selected proteins were excised and subjected to trypsin digestion. Gel pieces were washed three times with 400 μl of 100 mM ammonium bicarbonate/50% acetonitrile for 15 min, followed by a 15-min incubation in 100% acetonitrile. After removal of the supernatant, the gel pieces were dried briefly in a speed-vac centrifuge. Depending on the gel volume, 10–15 μl of a trypsin solution (20 ng trypsin/μl in 25 mM ammonium bicarbonate) was applied and digestion was performed overnight (16 h) at 37°C.
Afterwards, the reaction tubes were centrifuged and 0.5–1.5 μl aliquots of each supernatant were applied to the spots of H4 (hydrophobic surface coating) and NP20 (hydrophilic surface coating) ProteinChip® Arrays. A 20% matrix solution of alpha-cyano-4-hydroxycinnamic acid (CHCA) was applied to the spots. The remaining gel pieces were extracted with a 60% acetonitrile/0.2% TFA solution for 1 h, with 5 min of sonication, to extract the remaining organic peptides. Supernatants from this second extraction step were also applied to H4 and NP20 ProteinChip® Arrays. All arrays were measured in the PBS IIc system by averaging 150 laser shots. After subtraction of all peaks also present in the blank gel piece (e.g. trypsin autolysis peaks), m/z values were submitted to ProFound and Mascot for database searching.

Tandem MS/MS analysis Peptides from the tryptic digestion were applied to NP20 ProteinChip® Arrays and 2 × 0.6 μl of a saturated CHCA solution was added. For quality control of the peak intensities, the NP20 arrays were analyzed in a PBS IIc instrument. Afterwards, the arrays were transferred to a tandem MS instrument. Data were acquired on a Micromass QTOF II (Manchester, UK) tandem quadrupole/time-of-flight (Q-TOF) mass spectrometer equipped with a PCI 1000 ProteinChip® Tandem MS Interface (Ciphergen Biosystems). Ions were created using a pulsed nitrogen laser operating at 30 pulses/s. Nitrogen gas was used for collisional cooling of the formed ions, and argon gas was used for all low-energy collision-induced dissociation experiments. The system was externally calibrated in MS/MS mode using the parent ion and selected fragments of human adrenocorticotropic hormone (ACTH) fragment 18–39 (m/z = 2465.1983; Sigma-Aldrich).

Authors' contributions GK: laser-microdissection, preparation of samples, PAGE, SELDI-MS. MM: SELDI-MS and its optimization for microdissected material. RB: tandem MS measurements. RMB: instruction in laser-microdissection. WS: design of project, preparation of the manuscript. NW: animal model, preparation of the lungs. LF: design of project, preparation of the manuscript. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC539305.xml |
545796 | GOToolBox: functional analysis of gene datasets based on Gene Ontology | Tools are presented to identify Gene Ontology terms that are over- or under-represented in a dataset, to cluster genes by function and to find genes with similar annotations. | Rationale Since complete genome sequences have become available, the number of annotated genes has increased dramatically. These advances have allowed the systematic comparison of the gene content of different organisms, leading to the conclusion that organisms share the majority of their genes, with only relatively few species-specific genes. On this basis, one can develop strategies to infer gene annotations from model species to less experimentally tractable organisms. However, such functional inferences require the definition of species-independent annotation policies. In this regard, the Gene Ontology consortium [1] was created to develop a unified view of gene functional annotations for different model organisms. Three structured vocabularies (or ontologies) have been proposed, which allow the description of the molecular functions, biological processes and cellular locations of any gene product, respectively. Whereas the majority of GO terms are common to several organisms, some are specific to a few organisms only, enabling the description of aspects of gene function that are specific to a few lineages. Within each of these ontologies, the terms are organized hierarchically, according to parent-child relationships, in a directed acyclic graph (DAG). This allows a progressive functional description, matching the current level of experimental characterization of the corresponding gene product. The hierarchical organization of the Gene Ontology is particularly well adapted to computational processing and is used for the functional annotation of the gene products of several model organisms such as budding yeast [2], Drosophila [3], mouse [4], nematode [5] and Arabidopsis [6]. More recently, GO annotations for human genes have been proposed in the context of the GOA project [7]. In parallel, the recent development of new high-throughput methods has generated an enormous amount of functional data and has motivated the development of dedicated analysis tools. For instance, one might wonder whether genes detected as being coexpressed in a DNA chip experiment are related in terms of molecular or cellular function. In practical terms, we address here the following generic questions. First, are there statistically over- or under-represented GO terms associated with a given gene set, compared with the distribution of these terms among the annotations of the complete genome? Second, within a particular gene set, are there closely functionally related gene subsets? And third, are there genes having GO similarities with a given probe gene? To formulate such questions properly in a well-defined mathematical framework, we have developed a set of methods and tools, collectively called GOToolBox, to process the GO annotations of any model organism for which they are available (Figure 1). All the programs are written in Perl and use the CGI and DBI modules. All the ontology data and the gene-GO term associations are taken from the GO consortium website. These data are structured in a PostgreSQL relational database, which is updated monthly. Statistics are calculated using the R statistical programming environment. The web implementation of the GOToolBox is accessible at [8].
Features

In this section, we describe the five main functionalities of the GOToolBox suite. Two of them (GO-Proxy and GO-Family) are not encompassed by any other GO-processing tool currently available (see also 'Comparison of GOToolBox with other GO-based analysis programs').

Dataset creation

The first step in analyzing gene datasets consists in retrieving, for each individual gene of the dataset, all the corresponding GO terms and their parent terms using the Dataset creation program. The genomic frequency of each GO term associated with genes present in the dataset is then calculated. The resulting information is structured and stored in a data file, available for download on the GOToolBox server for one week. This file also contains the counts of terms within a reference gene dataset (genome or user-defined), and can then be used as an input for the GO-Stats and GO-Proxy programs described below.

Ontology options

An optional tool, GO-Diet, allows either the reduction of the term dataset to a slim GO hierarchy (either one proposed by the GO consortium or a user-defined one) or the restriction of the considered terms to a chosen depth of the ontology. It is also possible to filter terms based on the way they have been assigned to gene products (evidence code). This tool is useful for decreasing the number of GO terms associated with a gene dataset, thereby facilitating the analysis of the results of the programs described below, particularly when the input gene list and/or the number of associated GO terms is large. Note that the GO-Diet program can generate a gene-term association file in the TLF format, allowing the use of GO terms as gene labels with the TreeDyn tree-drawing program [9]. The GO-Diet options are proposed in the Dataset Creation form.

GO term statistics

Frequencies of terms within the dataset are calculated and compared with reference frequencies (for example, with genomic frequencies or with the frequencies of these terms in the complete list of genes spotted on an array). This procedure allows the delineation of enrichments or depletions of specific terms in the dataset. The probability of obtaining by chance a number k of annotated genes for a given term among a dataset of size n, knowing that the reference dataset contains m such annotated genes out of N genes, is then calculated. This test follows the hypergeometric distribution described in Equation 1:

P(X = k) = [C(m, k) × C(N − m, n − k)] / C(N, n)   (1)

where C(a, b) denotes the binomial coefficient and the random variable X represents the number of genes within a given gene subset annotated with a given GO term. Implemented in the GO-Stats tool, this formula permits the automatic ranking of all annotation terms, as well as the evaluation of the significance of their occurrences within the dataset. An illustration of such an approach is given in 'Mining biological data'. A typical GO-Stats output is presented in Figure 2.

GO-based gene clustering

The goal of the GO-Proxy tool is to group together functionally related genes on the basis of their GO terms. The rationale underlying our method is that the more GO terms two genes share, and the fewer gene-specific terms they have, the more likely they are to be functionally related. For any two genes of the gene set, the program calculates an annotation-based distance between genes, taking into account all GO terms that are common to the pair and the terms that are specific to each gene. Indeed, any two genes can have 0, 1 or several shared GO terms (common terms) and a variable number of terms specific to each gene (specific terms).
This distance is based on the Czekanowski-Dice formula (Equation 2):

D(x, y) = #(Terms(x) Δ Terms(y)) / (#(Terms(x) ∪ Terms(y)) + #(Terms(x) ∩ Terms(y)))   (2)

In this formula, x and y denote two genes, Terms(x) and Terms(y) are the lists of their associated GO terms, # stands for 'number of' and Δ for the symmetrical difference between the two sets. This distance formula emphasizes the importance of the shared GO terms by giving more weight to similarities than to differences. Consequently, for two genes that do not share any GO terms, the distance value is 1, the highest possible value, whereas for two genes sharing exactly the same set of GO terms, the distance value is 0, the lowest possible value. All possible binary pairs of genes from the dataset are considered, resulting in a distance matrix. This matrix is then processed with a clustering algorithm, such as the WPGMA algorithm, and a functional classification tree is drawn, in which the leaves correspond to input genes. On the basis of this tree, classes can be defined, for instance by using partition rules, and the statistical relevance of the terms associated with each class is calculated using the method described for GO-Stats. The Czekanowski-Dice distance and the corresponding clustering have already proved their effectiveness in delineating protein functional classes derived from the analysis of protein-protein interaction graphs [10].

Finding GO-related genes

A last tool, GO-Family, aims at finding genes that share GO terms with a user-entered gene, on the basis of a functional similarity calculation. It searches the genomes of one or several supported species (currently five). Given an input gene name, the program retrieves the associated GO terms and compares them with those of all other genes by calculating a functional similarity percentage. The program then returns the list of similar genes, sorted by score. By similar genes, we mean either genes having more than one common associated term, or genes that have different associated terms but one or more common parent terms. When measuring the similarity percentage S between the input gene A and another gene G, one can identify terms that are common to the two genes (Tc), and terms that are specific to A (Ta) and to G (Tg). Three different similarity measures have been implemented and are proposed to the user:

Si = (Tc/(Ta+Tc)) × 100   (3)

Sp = (Tc/(Ta+Tg+Tc)) × 100   (4)

Scd = (1 − ((Ta+Tg)/(Ta+Tg+2Tc))) × 100   (5)

respectively called the similarity percentage relative to the input gene (Si), the similarity percentage relative to the pair of genes (Sp) and the Czekanowski-Dice proximity percentage (Scd). The results are ranked by decreasing similarity values. A typical GO-Family output is presented in Figure 3.

Mining biological data with the GOToolBox

In this section, we provide two examples showing how combinations of several GO analysis tools can be used to validate or further delineate gene functional classifications.

Application of GOToolBox to the study of protein-protein interaction networks

PRODISTIN [10] is a functional classification method for proteins, based on the analysis of a protein-protein interaction network, that aims to compare and predict a cellular role for proteins of unknown function. Given a set of proteins and a list of interactions between them, a distance is calculated between all possible pairs of proteins. A distance matrix is then generated, to which the NJ clustering algorithm is applied.
A classification tree is then built, within which functional classes are defined on the basis of the annotation terms associated with the proteins involved in known biological processes. GO-Diet and GO-Stats are useful at two steps of the analysis (Figure 4a). The first is to generate the GO annotation set necessary to define the functional classes of proteins. In this particular study, devoted to the yeast interactome, the term dataset was fitted to the fourth ontology level using GO-Diet. We chose to work at this particular level because it was previously shown to provide a good representation of the complexity of the cellular functions of the proteins described by the biological process annotations [10]. The second step is to estimate the relevance of the annotations associated with the resulting classes using the associated GO terms. The GO-Stats program can be used in this framework, using as reference dataset the list of proteins given as input to PRODISTIN (Figure 4b). As shown in Table 1, the classes issued from PRODISTIN can be associated with one or several GO terms. In the latter case, the calculated annotation biases emphasize the most relevant terms for the functional assignment of the class (first row in Table 1), allowing the ranking of the annotation terms. When the class is associated with a single GO term (second and third rows in Table 1), GO-Stats estimates the probability of obtaining by chance a class of the same size and functional coherence associated with this GO term. For instance, in Table 1, the term 'RNA metabolism' is clearly over-represented in the second class, whereas this is certainly not true in the case of the 'cell cycle' class.

Functional clustering of sets of transcription factor targets

GO can also be used to split gene sets into coherent functional subclasses on the basis of shared annotation terms. As an illustration, we have analyzed a gene set encompassing putative targets of the Engrailed transcription factor in Drosophila melanogaster. These genes were identified on the basis of in vivo UV cross-linking and chromatin immunoprecipitation experiments (X-ChIP) [11]. These experiments led to the cloning and sequencing of several hundred DNA fragments, allowing the computational identification of a well-conserved DNA pattern, closely related to the known Engrailed consensus. In order to delineate potential functional biases among Engrailed targets, we used GO-Diet and GO-Proxy to cluster the corresponding genes on the basis of 'Biological Process' GO annotations. In the first step, the set of putative target genes was fed to the Dataset creation program and slimmed down by cutting the annotations to the fourth level of the Gene Ontology, using GO-Diet. This eliminates the poorly informative terms. In a second step, the resulting dataset was processed with GO-Proxy, leading to 11 classes, as shown in Table 2. Finally, for each of these classes, the probability of obtaining it by chance was calculated, enabling the evaluation of the significance of the corresponding class relative to the initial gene dataset. In this analysis, the GOToolBox suite has proved very useful for defining functionally related subgroups within a set of genes harbouring different functions (D.M., F. Maschat and B.J., unpublished work).
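To make the GO-Proxy procedure concrete, here is a minimal sketch in Python (GOToolBox itself is implemented in Perl/R) of the Czekanowski-Dice distance (Equation 2) followed by WPGMA clustering; the gene names and term sets are invented for illustration:

```python
# Czekanowski-Dice distances between GO term sets, then WPGMA clustering.
from itertools import combinations
from scipy.cluster.hierarchy import linkage, fcluster

genes = {
    "geneA": {"GO:0006364", "GO:0016070"},
    "geneB": {"GO:0016070", "GO:0008152"},
    "geneC": {"GO:0007049"},
}

def cz_dice(x, y):
    """Equation 2: 0 for identical term sets, 1 for disjoint ones."""
    return len(x ^ y) / (len(x | y) + len(x & y))

names = sorted(genes)
# condensed (upper-triangle) distance matrix, in the order SciPy expects
dists = [cz_dice(genes[a], genes[b]) for a, b in combinations(names, 2)]
tree = linkage(dists, method="weighted")            # "weighted" is WPGMA
print(fcluster(tree, t=0.8, criterion="distance"))  # cut the tree into classes
```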
Comparison of GOToolBox with other GO-based analysis programs

In this study, we have described the GOToolBox suite, which performs five main tasks: gene dataset creation, selection and fitting of the ontology level (GO-Diet), statistical analysis of terms associated with gene sets (GO-Stats), GO-based gene clustering (GO-Proxy), and gene retrieval based on GO annotation similarity (GO-Family). Recently, several web-based GO-processing tools have been developed to display, query or process GO annotations. In this section, we compare GOToolBox with several of these GO-processing programs. As shown in Table 3, comparisons were performed with 12 web-based programs listed on the official GO site [12].

Functionalities unique to the GOToolBox suite

First, it should be highlighted that, to the best of our knowledge, no other program currently performs all five functions proposed in GOToolBox. Furthermore, the GO-Proxy and GO-Family tasks are unique to GOToolBox. These two functionalities are potentially very useful to the biologist. On the one hand, the GO-Proxy implementation of a gene-to-gene distance calculation based on several GO terms allows the determination of classes consisting of functionally related genes. This feature should prove useful whenever the user wishes to identify functional subgroups within a list of genes of interest. On the other hand, the ability to search for genes similar to a user-defined gene on the basis of related GO terms (GO-Family) is also unique among GO-processing tools. When used to find functionally similar genes within a given species, the GO-Family program is often able to find paralogs as well as other genes with related functions, independently of sequence similarities. Similarly, when used to find functionally similar genes in other species, the program can successfully identify genes with related functions, including orthologs. In addition, the GO-Family program could be very valuable in the context of genome annotation: it could be used by database annotators to verify the coherence of the annotations of genes with known related functions, which, if correctly annotated, would indeed be expected to be detected by the program. Because of the presence of these two programs in our suite, we are inclined to think that GOToolBox represents a major improvement over other GO-based web tools.

Comparison of statistical analyses performed by all GO-based web tools

Numerous programs have been developed to provide statistical evaluation of the occurrence of GO terms (Table 3). We compared these programs to GO-Stats at two levels: the statistics used to calculate the enrichment/depletion of GO terms, and the availability of different features, such as the output types and the GO term filtering utilities used to create the gene dataset. As shown in Table 3 (column 3), four different approaches to calculating the probability of having x genes annotated for a given GO category have been implemented in the various dedicated programs: the hypergeometric distribution, the binomial distribution, Fisher's exact test and the Chi-square test. The latter two are non-parametric tests and are therefore less powerful than P-value calculations obtained with either the hypergeometric or the binomial distribution. In particular, the Chi-square test seems to be the least efficient, because it only gives valid results for large gene datasets and does not distinguish between over- and under-represented terms [13].
The binomial distribution gives the probability of obtaining x genes annotated for a given GO category when randomly picking one gene among N genes k times, i.e., sampling with replacement, so that the same gene can be picked several times; this does not correspond to our sampling situation. It is important to note that when N is large, the hypergeometric distribution tends to give the same results as the binomial distribution. Overall, the hypergeometric distribution is both the most appropriate model and the most powerful statistical test. To compare the results obtained with the different methods of P-value calculation, we have implemented these methods in the GO-Stats module of GOToolBox, except for the Chi-square test, for the reasons explained above. The implementation of these tests in GO-Stats allows the methods to be compared without having to deal with problems due to program-specific input formats, data updates, and supported/unsupported organism species, as is often the case when using different programs. In addition, this gives great flexibility to the user, who can choose among different statistical methods. We checked whether, as might be expected, different programs using the same statistical method give the same results. This was essentially the case, with slight variations probably due to the use of different versions of GO by some programs (data not shown). Therefore, the comparison between programs mainly relies on the number of statistical tests available. As shown in Table 3, three programs (GOToolBox, GFINDer [13] and CLENCH [14]) propose the same three statistical tests, whereas all other programs have implemented only one method. However, among these three programs, GOToolBox is the only one in which a multiple-testing correction is implemented to adjust P-values and control for the occurrence of false positives. We chose the Bonferroni correction since it is the most stringent in assessing the significance of enrichment/depletion.

Comparison of other features proposed by GO-based web tools

In addition to the statistical tests used by the different programs, the presence of functional features offering flexibility to the end-user can also be considered as a criterion for program comparison. Features such as the GO term filtering utilities and the output types proposed by different programs are worth comparing (Table 3, last two columns). The GO term filtering functions allow one to restrict the number of GO terms associated with each gene in the dataset, to facilitate interpretation of the results. This restriction can be performed in several ways: either by mapping the terms onto a slim ontology or by fitting the terms to a given level (depth) of the ontology hierarchy. As shown in Table 3, only GOToolBox allows the use of both these filtering methods. They have been implemented and are accessible under the 'Create Dataset' form. In addition, in GOToolBox it is possible to restrict the number of terms associated with each gene by taking into account only terms inferred in a particular way (for instance, terms inferred from direct assay), and to combine this filtering with the slim mapping or the level fitting described above. As far as the output types are concerned, most programs propose a tabulated output file with terms ranked according to their P-values, with the exception of GoMiner [15] and GOTM [16], which makes interpretation of the results more difficult in those cases.
However, a positive attribute of GO Term Finder [17], GOTM and GoMiner over GOToolBox is that they propose directed acyclic graph (DAG) graphics for the visualization of results. At the moment, GO-Stats allows the visualization of relationships between terms in tabulated output only, but a future version of GOToolBox will also incorporate a DAG graphical output option. In conclusion, GOToolBox is a multipurpose, flexible and evolvable software suite that compares favorably with all existing GO-based web-analysis programs. Its two unique features, GO-Proxy and GO-Family, enable new kinds of analyses to be carried out, based on the functional annotations of gene datasets. These new functionalities are likely to be very useful to the many biologists wanting to extract novel and meaningful biological information from gene datasets.
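To make the three GO-Family similarity measures (Equations 3–5) concrete, here is a minimal Python sketch with invented GO term sets; the real tool computes these scores from its annotation database:

```python
# Si, Sp and Scd similarity percentages between an input gene A and a gene G.
def go_family_scores(terms_a, terms_g):
    tc = len(terms_a & terms_g)   # terms common to both genes
    ta = len(terms_a - terms_g)   # terms specific to the input gene A
    tg = len(terms_g - terms_a)   # terms specific to gene G
    si = 100 * tc / (ta + tc) if ta + tc else 0.0                     # Eq. 3
    sp = 100 * tc / (ta + tg + tc) if ta + tg + tc else 0.0           # Eq. 4
    scd = (100 * (1 - (ta + tg) / (ta + tg + 2 * tc))
           if ta + tg + 2 * tc else 0.0)                              # Eq. 5
    return si, sp, scd

gene_a = {"GO:0006364", "GO:0016070", "GO:0008152"}
gene_g = {"GO:0016070", "GO:0008152", "GO:0006412"}
print(go_family_scores(gene_a, gene_g))  # candidates are ranked by these scores
```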
Immunity Promotes Virulence Evolution in a Malaria Model

Evolutionary models predict that host immunity will shape the evolution of parasite virulence. While some assumptions of these models have been tested, the actual evolutionary outcome of immune selection on virulence has not. Using the mouse malaria model Plasmodium chabaudi, we experimentally tested whether immune pressure promotes the evolution of more virulent pathogens by evolving parasite lines in immunized and nonimmunized ("naïve") mice using serial passage. We found that parasite lines evolved in immunized mice became more virulent to both naïve and immune mice than lines evolved in naïve mice. When these evolved lines were transmitted through mosquitoes, there was a general reduction in virulence across all lines. However, the immune-selected lines remained more virulent to naïve mice than the naïve-selected lines, though not to immunized mice. Thus, immune selection accelerated the rate of virulence evolution, rendering parasites more dangerous to naïve hosts. These results argue for further consideration of the evolutionary consequences of vaccination for pathogen virulence.

Introduction

Genetic variation in pathogen virulence (harm to the host) has been found whenever it has been looked for. A considerable body of theory, based on the transmission consequences of virulence, has been developed to predict how natural selection will act on this genetic variation and how it will shape virulence levels in natural populations of disease-causing organisms (Frank 1996; Dieckmann et al. 2002). For instance, natural or vaccine-acquired host immunity protects hosts from dying, thereby relieving the parasite of the potential fitness costs of prematurely shortened infections. Thus, host populations with high levels of immunity can maintain more virulent pathogens than can naïve host populations (Gandon et al. 2001). To date, the best example of virulence evolving upwards in response to enhanced levels of host defense comes from an uncontrolled "experiment" in the field: upon release into a highly susceptible host population, the myxoma virus evolved lower virulence (Fenner and Ratcliffe 1965) but then later increased in virulence once the host population had evolved resistance (Best and Kerr 2000). As well as altering between-host selection pressures on virulence, host immunity can alter the nature of within-host selection. Different directions of virulence evolution are expected depending on the details of within-host competition among parasites (e.g., Nowak and May 1994; Van Baalen and Sabelis 1995; Chao et al. 2000; Brown et al. 2002). Unfortunately, these details are not well understood for any pathogen (Read and Taylor 2001). The only generality is that serial passage of pathogens almost always increases virulence (Ebert 1998), implying that virulent variants have a fitness advantage within hosts. However, all serial passage experiments of which we are aware were conducted in immunologically naïve hosts, so the effects of immunity on virulence evolution are unknown. In theory, immunity could impose selection in several ways. For instance, lower parasite loads should reduce resource competition (e.g., for red blood cells) among parasites occupying the same host, but increase the competition for enemy-free space (e.g., by immune evasion). This could lead to more aggressive parasites racing to stay ahead of proliferating immune responses (Antia et al.
1994); it could also lead to the evolution of novel antigenic variants that have a selective advantage only in immunized hosts. Immunization will also alter the timing of immune selection, thus potentially selecting for changes in parasite life-history parameters that affect virulence, such as an earlier or higher rate of production of transmission stages (Koella and Antia 1995). Finally, the rate at which virulence evolution occurs may be limited by the size of the parasite population inside the host, and therefore may be retarded by host immunity. Thus, at least in theory, there are many potential consequences for virulence evolution of prior host immunity, both long-term and short-term in nature. One barrier to testing theoretical models of virulence evolution is that the models typically predict the outcome at evolutionary and epidemiological equilibrium. New equilibria may or may not take a long time to reach, but will in any case depend on the dynamics of the host population and the environmental conditions under which transmission occurs: this means that experimental evolution to new equilibria will be hard to study in the laboratory for medically relevant pathogens. However, the short-term consequences for virulence evolution, which are at least as important to public health policy as the long-term consequences, may be more tractable. This is especially true for diseases for which animal models are available. In this study, we begin the empirical effort to determine the likely direction of immune-mediated virulence evolution by performing experimental evolution of the rodent malaria parasite, Plasmodium chabaudi, in laboratory mice. We evolved multiple lines of P. chabaudi in immunized and naïve mice by repeated serial passage of blood-stage parasites (i.e., bypassing the normally obligate mosquito vector), starting from two different starting populations. After 20 passages, the lines had evolved sufficiently to make comparisons between the immune-selected lines (I-lines) and naïve-selected lines (N-lines) for virulence and transmissibility.

Results/Discussion

We found that both the I-lines and N-lines evolved to become more virulent than their ancestral populations, but the I-lines became even more virulent than the N-lines (Figure 1A). This higher virulence was manifest in both naïve and immunized mice. When the lines were transmitted through mosquitoes, there was generally a reduction in virulence across all the lines, but the I-lines remained more virulent than the N-lines to naïve mice, though not to immunized mice (Figure 1B). We discuss these two principal findings separately below.

Figure 1. Virulence Evolution in Mouse Malaria during Serial Passage in Immunized Versus Naïve Mice. Virulence was measured by minimum red blood cell density (y-axis) in lines of P. chabaudi before ("ancestral lines"; black and gray symbols) and after serial passage through immunized ("I-lines"; red lines and triangles) or naïve ("N-lines"; green lines and circles) mice, before (A) and after (B) mosquito transmission. Evolved and ancestral lines were compared in both naïve (solid lines) and immunized mice (broken lines). Filled symbols, before mosquito transmission; open symbols, after mosquito transmission. Lines were selected from an avirulent, "unadapted" clone (CW-0; left set of lines) and a virulent, "preadapted" ancestral population (CW-A; right): the latter was derived from the former by 12 serial passages in a previous experiment (Mackinnon and Read 1999b).
Each symbol (with ± 1 standard error based on the variance between subline means) represents the mean of mice infected with an ancestral line or a set of passaged lines (i.e., five sublines, two mice per subline). Prior to mosquito transmission (A), differences between the I-lines and N-lines were significant in three of the four cases (p < 0.05 for lines from the unadapted line infecting naïve mice, p < 0.01 for unadapted infecting immunized, and p < 0.001 for preadapted infecting immunized); in the fourth case (p > 0.1 for preadapted infecting naïve), virulence of the ancestral line was already apparently near-maximal. After mosquito transmission (B), the differences between the I-lines and N-lines remained the same in naïve mice as before transmission (interaction between the mosquito transmission effect and the I-line-versus-N-line difference: p > 0.7 in both the unadapted and preadapted cases). However, these line differences were eliminated in immunized mice (interaction term: p = 0.02 for the unadapted case, p = 0.08 for the preadapted case). Mosquito transmission significantly reduced the virulence of the preadapted ancestral line in immunized mice (p = 0.03) but not in the other ancestral-line-by-immune-treatment combinations (p > 0.2 in these cases). In the selection lines, mosquito transmission significantly reduced virulence in five of the eight comparisons (p = 0.009 and p = 0.13 in the N-lines in naïve mice derived from CW-0 and CW-A, respectively; p = 0.55 and p = 0.005 in N-lines in immunized mice; p = 0.022 and p = 0.26 in I-lines in naïve mice; and p = 0.006 and p < 0.0001 in I-lines in immunized mice). Ancestral pretransmission lines had similar levels of virulence in the separate pretransmission and posttransmission experiments, with the exception of the preadapted ancestral line in immunized mice, which had higher virulence in the latter than in the former (p = 0.002). Similar results were obtained when virulence was measured by maximum weight loss (unpublished data). No deaths occurred during the pretransmission experiments. In the posttransmission experiment, in addition to one death that occurred early in the infection, prior to any weight loss or anemia (excluded from analyses), five deaths occurred: four in naïve mice (two in the N-lines, one in the I-lines, and one in the nontransmitted ancestral line, all derived from the preadapted line) and one in an immunized mouse (preadapted, nontransmitted ancestral line).

Immunity Selects for Higher Virulence

The results suggest that immune selection on blood-stage parasites is more efficient at selecting virulent variants than is selection in naïve mice. Response to selection is a function of the amount of variation in the population and the proportion of the population that survives to produce offspring, i.e., the selection intensity. The higher selection response in the I-lines is unlikely to be due to greater variation on which selection could act, because the parasite population size on the day of transfer in immunized mice was on average 2-fold smaller than in naïve mice (Figure 2).
It is also unlikely to be due to lower host death in the I-lines, as there were no line differences in mortality in naïve mice over the entire course of the experiment (10/223 naïve mice infected with N-lines versus 2/40 naïve mice infected with I-lines, p > 0.10 by two-tailed Fisher's exact test; zero mortality in immunized mice), and all but one of the deaths occurred after the day of transfer. The most likely explanation is that immunity generated more intense selection by killing a greater proportion of the parasite population up until the point of transfer (Figure 2). Winners of the race into the syringe on day 7 were those parasite variants that survived immune selection, and these parasites proceeded to cause more damage to their host later in the infection.

Figure 2. Effect of Immunization on Asexual Parasitemia and Gametocytemia. Each curve represents the mean asexual parasitemia (dark blue) and gametocytemia (light blue) over all parasite lines (ancestral and selected) in naïve (solid lines; n = 47) and immunized mice (broken lines; n = 50) during the pretransmission evaluation phase. Immunization reduced asexual parasitemia and gametocytemia throughout the infection (p < 0.001 based on the log10 daily average taken over all days). The arrow indicates the day of transfer during the selection phase of the experiment.

But why would selection favor more virulent parasites? Our previous studies have consistently shown that peak parasite densities in the acute phase are positively correlated with the level of virulence they generate (Mackinnon and Read 1999a, 1999b, 2003; Mackinnon et al. 2002; Ferguson et al. 2004). We therefore expected to find that the higher virulence in I-lines was accompanied by higher parasite densities, in which case we would deduce that immune selection had favored variants that were better able to outgrow immune defenses. While we found positive relationships between asexual multiplication and virulence across all the lines, including the ancestral ones (Figure 3A), the I-lines and N-lines were statistically indistinguishable (p > 0.05) for (i) parasitemia on day 4, (ii) parasitemia on day 6 or 7, (iii) the increase in parasitemia from day 4 to day 6 or 7, and (iv) maximum parasitemia, with one exception: maximum parasitemia was significantly higher in I-lines than N-lines derived from unadapted ancestors when measured in immunized mice, and this only in one of the two replicate experiments (23% versus 6.9% parasitemia, p < 0.001). Thus, there is little evidence to suggest that the increased virulence was due to a higher asexual multiplication rate (or a lower death rate of asexuals) in those parasites that successfully made it into the syringe. Our data demonstrate that immunity acts as a powerful upward within-host selective force on virulence, but the precise mechanism awaits further study.

Figure 3. Relationships between Virulence, Asexual Multiplication, and Transmission Potential across Ancestral and Selection Lines. Virulence, as measured by minimum red blood cell density, is plotted against maximum parasitemia in (A), and average daily gametocyte production (a measure of lifetime transmission potential) is plotted against virulence in (B). Data are all from pretransmission lines (ancestral and selected) measured in naïve (closed symbols; solid line) and immunized mice (open symbols; broken line).
Regression analysis for both traits showed significant (p < 0.001) and similar (p > 0.05) slopes within both naïve and immunized mice, and significantly lower (p < 0.001) maximum parasitemia and gametocyte production in immunized than in naïve mice. When the two data points from naïve mice with values above 3 × 10^9 rbc/ml were excluded from the analyses, the slopes remained statistically similar (p > 0.05). Unselected ancestral populations, black squares; N-lines, green circles; I-lines, red triangles; avirulent unadapted ancestral population, small symbols; virulent preadapted ancestral population, large symbols.

There were positive relationships between virulence and lifetime transmission potential across all the lines (Figure 3B), consistent with our previous studies (reviewed in Mackinnon and Read 2004), but the differences between the I-lines and N-lines were not statistically significant (p > 0.05). Gametocyte densities are a good predictor of transmission probability in P. chabaudi and other Plasmodium species (Mackinnon and Read 2004), so these results demonstrate that the more virulent parasites evolved in semi-immune mice would transmit as successfully as the less virulent parasites evolved in naïve hosts. Thus, in the absence of a cost, virulent variants favored by within-host immune selection are expected to spread throughout an immunized host population.

The Effects of Mosquito Transmission

Malaria parasites, like many microbes (Ebert 1998), are remarkable in their ability to adapt rapidly to changes in their host environment, and some of this is known to be due to phenotypic switching mechanisms in virulence-related phenotypes such as binding to host cells (Barnwell et al. 1983), red cell surface antigen expression (Brown and Brown 1965; Barnwell et al. 1983; David et al. 1983; Handunetti et al. 1987; Gilks et al. 1990), and red cell invasion pathways (Dolan et al. 1990). Some of these phenotype-based changes are transient, while others appear to be stable, i.e., maintained over sequential blood-stage passages. In our experiment, it is possible that the increases in virulence we observed following serial passage were at least partly due to altered gene expression rather than changes at the genome level. The public health consequences of this sort of change depend on whether the higher virulence is maintained during mosquito transmission and upon transfer to hosts with levels of immunity different from those in which selection took place. We found that the I-lines were more virulent than the N-lines in both naïve and immunized hosts (see Figure 1A). However, after mosquito transmission, the I-lines remained more virulent than the N-lines only in naïve hosts: the difference in immune hosts was negated by mosquito transmission (see Figure 1B). Possible reasons for this are discussed further below. For now, we note that the data are consistent with (though do not directly test) the prediction (Gandon et al. 2001) that enhancement of host immunity by anti-blood-stage vaccination will render malaria populations more dangerous to naïve hosts, at least in the short to medium term. Whether our long-term prediction (Gandon et al. 2001) that immunized populations will drive virulence to a higher level at evolutionary equilibrium proves true can be established only by monitoring vaccine-covered parasite populations in the field.
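As an aside, the mortality comparison reported above (10 deaths among 223 naïve mice on N-lines versus 2 among 40 on I-lines) is easy to check; a minimal Python sketch of the two-tailed Fisher's exact test:

```python
# Two-tailed Fisher's exact test on the naive-mouse mortality counts.
from scipy.stats import fisher_exact

table = [[10, 223 - 10],   # N-lines: deaths, survivors
         [2, 40 - 2]]      # I-lines: deaths, survivors
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(p_value)  # consistent with the reported p > 0.10
```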
We observed a general reduction in virulence across all lines following mosquito transmission (see Figure 1), particularly when measured in immunized mice, and particularly in lines that had been selected under immune pressure, i.e., the I-lines, and in the CW-A ancestral line, which had been serially passaged on day 12 postinfection (PI). Many laboratory studies in malaria have shown that high- or low-virulence phenotypes accrued through serial passage can be maintained upon transmission through mosquitoes (James et al. 1936; Coatney et al. 1961; Alger et al. 1971; Walliker et al. 1976; Knowles and Walliker 1980; Walliker 1981; Barnwell et al. 1983), although occasional major losses (or gains) of virulence do occur (Alger et al. 1971; Walliker et al. 1976; Knowles and Walliker 1980; Gilks et al. 1990). Mosquito transmission could therefore play a significant role in virulence evolution driven by within-host selective processes (as distinct from the between-host selective processes underlying the vaccination hypothesis in Gandon et al. [2001]). The mechanistic basis for the reduction in virulence following mosquito transmission remains to be determined. We offer the following speculations. It may be that the virulence reductions we and others have observed are due to stochastic loss of virulent variants during the population bottlenecking that occurs during mosquito transmission (the variability between lines in virulence loss during mosquito transmission favors this hypothesis). Alternatively, virulence reduction may be due to the deterministic forces of selection against virulent variants that have lost or reduced the ability to transmit through mosquitoes (Ebert 1998): the potential trade-off between virulence in the vertebrate host and the production and infectivity of sporozoites in the mosquito has not yet been explored. A further possibility is that the virulence reductions observed following mosquito transmission are due to the systematic resetting during meiosis of the expression of genes that had been switched on or up-regulated during asexual serial passage. For example, it is known that mosquito transmission induces the expression of a different set of the clonally variant (i.e., phenotypically switching) surface antigens from those expressed at the time of ingestion by the mosquito (McLean et al. 1987; Peters et al. 2002). It is possible that the variants that appear early in the infection, either because of some genetically programmed ordering of expression or because of higher intrinsic switching rates, are recognized by the immune system in a preimmunized host, thus giving the late-appearing variants a selective advantage. Our data are consistent with this idea, since mosquito transmission eliminated the difference between the I-lines and N-lines in immunized mice but not in naïve mice, suggesting that part of the virulence advantage in immunized hosts was due to novelty in the clonally variant surface antigens. Finally, an interesting possibility is that it is the loss of diversity per se during mosquito transmission (either at the genetic level or at the level of phenotypic expression) that causes a reduction in virulence, by limiting the invading parasites' ability to evade immune defenses: our data are also consistent with this hypothesis.
Any of these mechanisms could explain the loss of virulence during mosquito transmission, but none is sufficient to explain why the I-lines were more virulent than the N-lines in naïve mice both before and after mosquito transmission. Thus, more than one distinct underlying mechanism probably explains the virulence differences observed here, such as differences in intrinsic virulence properties and differences in levels of antigenic diversity within the lines. Identifying the mechanisms, any links between them, and their relative roles in determining parasite survival in naïve versus immunized hosts is of key importance in understanding the virulence evolution and immunoepidemiology of malaria in the field.

Other Serial Passage Studies in Malaria

To what extent do our observations accord with previous work on serial passage of malaria in immune-modified environments? Results from other studies are difficult to interpret, as none maintained control lines for selection (i.e., lines passaged in the nonmanipulated immune environment), most had no replication of lines within the selection treatment, and some used just a single selection step. Nevertheless, some tentative conclusions may be drawn. Comparisons of selected and ancestral parasites have been made after three different forms of immune manipulation: (i) down-regulation of immunity by removal of the spleen prior to infection, (ii) up-regulation of immunity by transfer of immune serum at the beginning of infection, and (iii) up-regulation of immunity by infection, sometimes with subcurative drug treatment in order to establish a chronic infection. In the first two, parasites were selected from the primary wave of parasitemia, as in our experiment, whereas in the third, selected parasites were isolated from relapses much later in the infection (40–150 d PI). Parasite lines passaged through splenectomized hosts often lose the ability to bind to host endothelial cells (cytoadherence) in the microvasculature of the deep tissues, and therefore the ability to avoid being passaged through the spleen (Garnham 1970), the primary site of immune-mediated clearance (Wyler 1983). This loss of binding is often accompanied by a loss of the ability to express (Barnwell et al. 1983; Handunetti et al. 1987; Gilks et al. 1990), or a major alteration in the level of expression of (David et al. 1983; Fandeur et al. 1995), the highly variable and clonally variant switching parasite antigens on the surface of the red cell known to be important for the maintenance of long-term chronic infections (Brown and Brown 1965). In P. falciparum at least (David et al. 1983; Hommel et al. 1983), this coincident change in the two properties occurs because both phenotypes are mediated by the same parasite molecule, denoted PfEMP1 (Baruch et al. 1995; Smith et al. 1995; Su et al. 1995). Importantly, in two of three studies, the line of parasites that lost cytoadherence and/or surface antigen expression had much-reduced virulence in spleen-intact naïve hosts compared to their ancestral lines (Barnwell et al. 1983; Langreth and Peterson 1985; Gilks et al. 1990). If our immunization procedure was priming the spleen for effective parasite clearance, our results are consistent with these findings.
However, the second form of immune selection (passage of acute-phase parasites from hosts injected with antiserum at the beginning of the infection) yielded parasites with lower virulence to naïve mice than their ancestors in one study (Wellde and Diggs 1978), although it had no impact on virulence in two other studies (see Briggs and Wellde 1969). The third type of immune selection (isolation of parasites from relapses late in the infection) has generated parasites with virulence to naïve hosts that is lower than (Cox 1962), higher than (Sergent and Poncet 1955), or similar to (Cox 1959) that of their ancestors. In all these studies, which involved only single passages, selected parasites were more virulent than their ancestors to immunized hosts, suggesting that the selected parasites were predominantly of a novel antigenic type (a fact that has sometimes been demonstrated; Voller and Rossan 1969). Whether antigenic novelty is traded off against multiplication rate or virulence among the repertoire of variants expressed during a single infection, as has also been suggested from field population studies (Bull et al. 1999), is an interesting question that deserves more attention. However, in our study, in which we focused on the longer-term and more natural environment of hosts preimmunized with a heterogeneous parasite population, the higher virulence of the I-lines compared to the N-lines in both naïve and immunized mice leads us to deduce that selection associated with virulence overrides selection for immune evasion alone.

Conclusion

Our data demonstrate that host immunity can increase the potency of within-host selection for higher virulence in malaria. Whether our results generalize to other immunization protocols, parasite clones, parasite species, host genotypes, repeated mosquito passage, and so on requires extensive further experimentation. But, coupled with the malaria parasite's famous ability to adapt rapidly to novel conditions in the laboratory (see above) and to variant-specific vaccine pressure (Genton et al. 2002) and drugs (Peters 1987) in the field, these results urge the continuous monitoring of the virulence of parasite populations if asexual-stage malaria vaccines become widely used. Similar concerns might apply to other microparasites (bacteria, viruses, and protozoa) that rely on rapid multiplication within the host for successful transmission.

Materials and Methods

Selection phase. Starting from two separate ancestral lines derived from clone CW (see below), five parasite lines ("sublines") from each ancestral line were repeatedly passaged in mice (female C57Bl/6J, 7–10 wk old) that were naïve to malaria infection (N-lines), and five from each ancestral line were passaged in immunized mice (I-lines, see below), forming 20 sublines in total. Passages involved the syringe transfer to a fresh mouse of 0.1 ml of diluted blood containing 5 × 10^5 parasites from a donor mouse that had been infected 7 d previously. Day 7 PI is during the period of rapid population growth, about 2 d prior to peak parasitemia, after which population size rapidly declines (see Figure 2). Parasite lines under the same selection regime (i.e., passage in immune versus naïve mice) were not mixed at each transfer, thus yielding five independent replicate sublines in each of the four selection-treatment-by-ancestral-line groups.
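As a minimal sketch of the dilution arithmetic behind such a transfer (the donor parasitemia and red cell density below are invented; in practice the values were measured per donor mouse):

```python
# How much to dilute donor blood so that 0.1 ml contains 5e5 parasites.
parasitemia = 0.10        # fraction of red cells infected (assumed)
rbc_per_ml = 8.0e9        # donor red blood cell density (assumed)
target_parasites = 5e5    # inoculum size used in the passages
inoculum_ml = 0.1

parasites_per_ml = parasitemia * rbc_per_ml        # 8e8 parasites/ml here
required_per_ml = target_parasites / inoculum_ml   # 5e6 parasites/ml needed
dilution_factor = parasites_per_ml / required_per_ml
print(f"dilute donor blood 1:{dilution_factor:.0f}, then inject 0.1 ml")
```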
Immunization was by infection with 10^4 parasites of a different clone (denoted ER), followed by drug cure with 10 mg/kg of mefloquine for 4 d starting on day 5 PI. Naïve mice were injected with parasite-free media but were not drug treated. Reinfection took place on average 3 wk after the end of drug treatment (range 1.5–5 wk): as the half-life of mefloquine in mice is reported to be 18 h (Peters 1987), the residual amount in the blood by this stage was expected to be very low. The same deep-frozen stock of ER was used each generation. ER is genetically distinct from CW at marker loci (data not shown) and was originally isolated from different hosts. Before use in this experiment, ER had undergone two passages since mosquito transmission and more than 20 passages prior to that. No recrudescent infections in immunized mice were detected prior to challenge. In generations 10 and 11, all lines were passaged through naïve mice. The serial passage experiments in this study were replicated using two different starting populations (ancestral lines): one avirulent (CW-0) and one virulent (CW-A). CW-0 had been cloned by serial dilution from an isolate obtained from its natural host, the thicket rat, Thamnomys rutilans, and then blood-passaged every 12 d for a total of 12 passages to produce the CW-A line. During these passages, CW-A was subjected to selection for low virulence on the basis of how much weight loss it caused in mice. Despite this selection, however, CW-A increased in virulence relative to CW-0 during these passages (Mackinnon and Read 1999b). Prior to use in the current experiments, both CW-0 and CW-A underwent four further serial passages in naïve mice, and were not recloned. All the lines, including the ancestral lines, were transmitted once through Anopheles stephensi mosquitoes by allowing 50–100 mosquitoes aged 2–5 d to take a blood meal for 20–30 min on an anaesthetized gametocytemic mouse that had been inoculated 6–10 d previously, i.e., prior to the peak of infection. Then, 11–12 d later, these mosquitoes (typically 10–20 of them infected, as assessed by random surveys of oocyst prevalence) were allowed to feed back onto anaesthetized naïve mice. After 7–10 d, the blood from these sporozoite-infected mice was harvested and stored in liquid nitrogen. These aliquots were used to initiate blood infections in naïve mice that were then used as donors of asexual parasites for the mice involved in the posttransmission experiments. As the lines were transmitted through mosquitoes noncontemporaneously, typically with one mouse per subline, comparisons among the lines for infectivity to mosquitoes were not made during these transmission exercises.

Evaluation phase. After 18 passages, the pretransmission lines were evaluated in two replicate experimental blocks in naïve (generations 19 and 21) and immunized mice (generations 20 and 22). Ancestral lines were evaluated only in generations 21 and 22. This set of trials was denoted the "pretransmission experiments." In a separate set of experiments, the "posttransmission experiments," the mosquito-transmitted lines were compared with each other, as well as with the nontransmitted ancestral lines, in two replicate experimental blocks in both naïve (generations 23 and 24) and immunized mice (generations 25 and 26). In both these experiments, across both blocks, ten mice were used for each of the four selection groups (two per subline), and five mice were used per ancestral line.
Red blood cell density was measured every 1–2 d until day 18 PI by flow cytometry (Coulter Electronics, Luton, United Kingdom), and the minimum density reached was taken as the measure of virulence. Live weight of the mouse was also recorded every 1–2 d. During the pretransmission experiments (generations 19–22), parasitemia and gametocytemia (the proportions of red blood cells infected with asexual parasites and gametocytes, respectively) were evaluated from Giemsa-stained thin blood smears every 2 d from day 4 PI until day 18 PI, and then four more times until day 43 PI. Total lifetime transmission potential was measured as the average gametocytemia throughout the infection from day 4 to day 18 PI.

Analysis. Statistical analyses were performed separately for the pretransmission and posttransmission experiments, as these were carried out at different times. The virulence measure used for the final analysis was minimum red blood cell density, though other measures of virulence were also analyzed (unpublished data). Since the selection treatment was replicated on sublines, making subline the independent experimental unit, the means of mice within sublines were first calculated. These were then analyzed for the effects of immune environment on selection response by fitting a linear model with factors for selection line (with three levels for nontransmitted ancestral lines, N-lines, and I-lines in the case of the pretransmission experiments, and four levels for the transmitted versions of these three lines plus the nontransmitted ancestral lines in the case of the posttransmission experiments), ancestral population (CW-0, CW-A), and an interaction between these two factors. Statistical tests of differences between the selection lines and other factors in the model were thus made using t-tests, with the variance between subline means as the residual. An alternative model, fitted to data on individual mice (rather than subline means) and incorporating subline as a random effect, was found to be unsatisfactory because, in some treatment groups, the model did not converge and estimates of the subline variance were highly variable between groups. To determine the effects of mosquito transmission on the line differences in virulence, a further analysis was performed on the combined data from the pretransmission and posttransmission experiments, fitting a fixed-effect factor of line-within-experiment in the statistical model (seven levels: three lines for the pretransmission experiment and four for the posttransmission experiment). These analyses were carried out separately for each of the four immune-treatment-by-ancestral-line groups. Since the pretransmission ancestral line was included in both the pretransmission and posttransmission experiments, the effect of mosquito transmission (and its standard error) on the N-lines and I-lines, which was not measured directly (i.e., in a single experiment), could be estimated by reference to this line. For example, the effect of mosquito transmission in the N-lines was estimated as the difference between the N-lines and their pretransmission ancestral line in the pretransmission experiment minus the analogous contrast in the posttransmission experiment. This was done using the method of linear contrasts provided in the SAS GLM procedure (SAS 1990). The effect of mosquito transmission on the difference between the I-lines and N-lines was calculated similarly, but without reference to the pretransmission ancestral line.
The effect of mosquito transmission on the ancestral lines was estimated from the direct comparison available only in the posttransmission experiment data.
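For readers who want to reproduce this style of analysis outside SAS, here is a minimal sketch of the subline-means linear model in Python with statsmodels; the data frame values are invented, and the layout (one row per subline mean) is an assumption:

```python
# Linear model on subline means: selection line, ancestral population, interaction.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "min_rbc":  [2.1, 2.3, 1.8, 1.7, 2.9, 3.0, 2.6, 2.4],  # x 10^9 rbc/ml (invented)
    "line":     ["N", "N", "I", "I", "N", "N", "I", "I"],
    "ancestor": ["CW-0"] * 4 + ["CW-A"] * 4,
})

# the residual variance is the variance between subline means, as in the paper
model = smf.ols("min_rbc ~ C(line) * C(ancestor)", data=df).fit()
print(model.summary())

# t-test of the N-line vs. I-line contrast (I is the reference level here)
print(model.t_test("C(line)[T.N] = 0"))
```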
A novel pancoronavirus RT-PCR assay: frequent detection of human coronavirus NL63 in children hospitalized with respiratory tract infections in Belgium

Background: Four human coronaviruses are currently known to infect the respiratory tract: human coronaviruses OC43 (HCoV-OC43) and 229E (HCoV-229E), the SARS-associated coronavirus (SARS-CoV) and the recently identified human coronavirus NL63 (HCoV-NL63). In this study we explored the incidence of HCoV-NL63 infection in children diagnosed with respiratory tract infections in Belgium. Methods: Samples from children hospitalized with respiratory diseases during the winter seasons of 2003 and 2004 were evaluated for the presence of HCoV-NL63 using an optimized pancoronavirus RT-PCR assay. Results: Seven HCoV-NL63 positive samples were identified; six were collected during January/February 2003 and one at the end of February 2004. Conclusions: Our results support the notion that HCoV-NL63 can cause serious respiratory symptoms in children. Sequence analysis of the S gene showed that our isolates can be classified into two subtypes corresponding to the two prototype HCoV-NL63 sequences isolated in The Netherlands in 1988 and 2003, indicating that these two subtypes may currently be cocirculating.

Background

Coronaviruses are large, enveloped, positive-stranded RNA viruses [1]. The viral RNA genome is 27–32 kb in size, capped, polyadenylated and encapsidated in a helical nucleocapsid. The envelope is studded with long, petal-shaped spikes, giving the virus particle a characteristic crown-like appearance. Three distinct groups of coronaviruses have been described based on serological affinity and genome sequence. Coronaviruses can infect humans and a variety of domestic animals and can cause highly prevalent diseases such as respiratory, enteric, cardiovascular and neurologic disorders [2,3]. Until recently, only three human coronaviruses had been thoroughly studied. Human coronavirus OC43 (HCoV-OC43; group 2) and human coronavirus 229E (HCoV-229E; group 1) were identified in the 1960s. They are responsible for 10–30% of all common colds, and infections occur mainly during winter and early spring [4-7]. A third human coronavirus, SARS-CoV, was identified as the causal agent during the 2002–2003 outbreak of severe acute respiratory syndrome (SARS) [8-10]. Phylogenetic analysis showed that SARS-CoV does not closely resemble any of the three previously known groups of coronaviruses, and therefore a tentative fourth group of coronaviruses was suggested [11,12]. However, an early split-off of SARS-CoV from the coronavirus group 2 lineage has also been suggested [13,14]. A new human coronavirus associated with respiratory illness, HCoV-NL63, was recently identified by a research team in The Netherlands [15]. The virus was isolated in January 2003 from a nasopharyngeal aspirate of a 7-month-old child suffering from bronchiolitis, conjunctivitis and fever. Screening of specimens from patients with respiratory symptoms identified seven additional HCoV-NL63 infected individuals, both children and adults, between December 2002 and February 2003. The complete viral genome sequence was determined. It shows the genome organisation characteristic of coronaviruses: the 5' two-thirds of the genome contains two large open reading frames (ORFs), ORF1a and ORF1b. In the 3' part of the genome, genes encoding four structural proteins are found: spike (S), envelope (E), membrane (M), and nucleocapsid (N).
The hemagglutinin-esterase (HE) gene, characteristic of group 2 coronaviruses, is not present in HCoV-NL63. Sequence analysis demonstrated that HCoV-NL63 shares 65% sequence identity with HCoV-229E. Phylogenetic analysis confirmed that HCoV-NL63 is a new group 1 coronavirus, most closely related to HCoV-229E and porcine epidemic diarrhea virus (PEDV) [15]. Shortly after van der Hoek and colleagues published their discovery of the new human coronavirus HCoV-NL63, a second research group described the characterization of essentially the same virus [16]. That virus was isolated from a nose swab sample collected from an 8-month-old child suffering from pneumonia in The Netherlands in April 1988. Real-time RT-PCR assays were designed for the screening of respiratory tract samples. Four additional HCoV-NL63 positive samples, from children aged 3 months to 10 years, were detected between November 2000 and January 2001. HCoV-NL63 can therefore be considered an important new cause of respiratory illness, and two different subtypes might currently be cocirculating in the human population [15]. In this study, we explored the incidence of HCoV-NL63 infection in children diagnosed with respiratory tract infections in Belgium.

Methods

Isolates and patients

We studied 309 isolates from 279 patients with severe respiratory symptoms collected from January 2003 until March 2004 at the University Hospital in Leuven, Belgium. These isolates originated from bronchoalveolar lavages, pharyngeal swabs, nasopharyngeal aspirates, and sputum samples. Routine diagnostic testing was performed for respiratory syncytial virus (RSV), influenza virus, parainfluenza virus and adenovirus. No prior amplification by cell culture was performed. The results of diagnostic tests for RSV were negative for 244 isolates, while 65 isolates were positive for RSV. Patients ranged in age from 1 month to 16 years, with a mean age of 2 years. The temporal distribution of the isolates corresponded to the yearly RSV epidemic period: 236 samples were collected from January to June 2003 and 73 samples were recovered during the first trimester of 2004 (Figure 1A).

Figure 1. Detection of HCoV-NL63 and HCoV-OC43 in samples from patients suffering from severe respiratory symptoms. (A) Number of samples tested per month. (B) Patients infected with HCoV-NL63 and HCoV-OC43. A single HCoV-229E positive sample was isolated in April 2003 (not shown).

Pancoronavirus RT-PCR assay

RNA was extracted from the collected specimens using the QIAamp Viral RNA Mini kit (QIAGEN, Westburg, The Netherlands) according to the instructions of the manufacturer. Screening of the samples was performed by amplifying a 251 bp fragment of the polymerase gene using the following primer set: Cor-FW (5'-ACWCARHTVAAYYTNAARTAYGC-3') and Cor-RV (5'-TCRCAYTTDGGRTARTCCCA-3') (Figure 2). These one-step RT-PCR assays (OneStep RT-PCR kit; QIAGEN) were performed in a 50 μl reaction volume containing 10 μl RNA extract, 10 μl 5x QIAGEN OneStep RT-PCR Buffer, 2 μl dNTP mix (final concentration of 400 μM of each dNTP), 1.8 μl QIAGEN OneStep RT-PCR Enzyme Mix (a combination of Omniscript and Sensiscript reverse transcriptase and HotStarTaq DNA polymerase), 4 μM of each primer, and RNase-free water to 50 μl.
The reaction was carried out with an initial reverse transcription step at 50°C for 30 min, followed by PCR activation at 95°C for 15 min, 50 cycles of amplification (30 sec at 94°C; 30 sec at 48°C; 1 min at 72°C), and a final extension step at 72°C for 10 min in a GeneAmp PCR system 9600 thermal cycler (Applied Biosystems, Foster City, CA, USA). PCR products were run on a polyacrylamide gel, stained with ethidium bromide, and visualized under UV light. Figure 2 Selection of primers for the novel pancoronavirus RT-PCR. Shown is the alignment of 14 coronaviral sequences of a conserved region of the polymerase gene. The forward (Cor-FW) and reverse (Cor-RV) primer sequences are shown at the bottom (Y = C/T, W = A/T, V = A/C/G, R = A/G, H = A/T/C, D = A/G/T, N = A/C/T/G). The coordinates of Cor-FW and Cor-RV are 14017 and 14248, respectively, in the HCoV-NL63 complete genome sequence. The 14 coronavirus sequences used here are available from GenBank under the following accession numbers: HCoV-NL63, AY567487; HCoV-229E, AF304460; infectious bronchitis virus (IBV), Z30541; SARS-CoV, AY313906; HCoV-OC43, AY391777; PEDV, AF353511; bovine coronavirus (BCoV), AF391541; transmissible gastroenteritis virus, AF304460; MHV, X51939; PHEV, AF124988; sialodacryoadenitis virus (SDAV), AF124990; turkey coronavirus (TCoV), AF124991; canine respiratory coronavirus (CRCV), AY150273; feline infectious peritonitis virus (FIPV), AF124987. RT-PCR assays for HCoV-NL63 Samples that were found positive for HCoV-NL63 were confirmed using one-step RT-PCR assays that amplified four different regions of the HCoV-NL63 genome. First, a 314-bp fragment in the nucleocapsid region was amplified with two specific HCoV-NL63 primers: N5-PCR1 (5'-CTGTTACTTTGGCTTTAAAGAACTTAGG-3', nt 26695-nt 26721) and N3-PCR1 (5'-CTCACTATCAAAGAATAACGCAGCCTG-3', nt 26982-nt 27008). Second, a 237-bp fragment in ORF1b was amplified using the primers repSZ-1 and repSZ-3 described by van der Hoek and colleagues [ 15 ]. A third RT-PCR assay was carried out on the HCoV-NL63 positive samples, amplifying an 839-bp fragment with ORF1a specific primers: SS5852-5P and P4G1M-5-3P [ 15 ]. These one-step RT-PCR assays were performed essentially as described above. They were carried out using 5 μl RNA extract and 0.6 μM of each primer. Only 45 cycles of amplification were run, and the annealing temperature was set at 50°C. Finally, a 663-bp fragment of the spike gene was amplified using an RT-nested PCR. The outer primer set SINL5 (5'-GAGTTTGATTAAGAGTGGTAGGTTG-3', nt 20391-nt 20415) and SINL3 (5'-AACAGTGTAGTTAACTACACGG-3', nt 21068-nt 21089) was used in a one-step RT-PCR, performed as described above, using 10 μl of RNA extract and an annealing temperature of 48°C. A nested PCR was carried out with the inner primer set SINL5n (5'-GGTTGTTGTTACGCAATAATGGTCGT-3', nt 20411-nt 20436) and SINL3n (5'-ACACGGCCATTATGTGTGGTGAC-3', nt 21051-nt 21073). The nested reaction mix was composed of 1 unit Taq polymerase, 1 μl of a 25 mM dNTP mix, 10 μl 5X buffer C (PCR Optimizer Kit, Invitrogen, The Netherlands), and 30 pmol of forward and reverse primer in a 50 μl reaction volume. As template, 10 μl of the outer PCR product was added. The cycling conditions were as follows: an initial denaturation at 94°C for 5 min, followed by 40 cycles of amplification (45 sec at 94°C, 45 sec at 54°C, 1 min at 72°C), and a final extension of 5 min at 72°C. PCR products were run on a polyacrylamide gel, stained with ethidium bromide, and visualized under UV light. 
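The breadth of the consensus primer pair follows directly from the IUPAC degeneracy codes listed in the Figure 2 legend. As an illustration (not part of the original study), the short Python sketch below expands the Cor-FW/Cor-RV strings into the concrete oligo pools they represent; the primer sequences are taken from the text, and everything else is illustrative.

```python
# Sketch: enumerating the oligo pools implied by the degenerate
# pancoronavirus primers (IUPAC codes as in the Figure 2 legend).
from itertools import product

IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "W": "AT", "S": "CG", "K": "GT", "M": "AC",
    "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT",
}

def degeneracy(primer):
    """Number of distinct oligos in the synthesis pool."""
    n = 1
    for base in primer:
        n *= len(IUPAC[base])
    return n

def expand(primer):
    """Yield every concrete sequence encoded by a degenerate primer."""
    for combo in product(*(IUPAC[b] for b in primer)):
        yield "".join(combo)

cor_fw = "ACWCARHTVAAYYTNAARTAYGC"
cor_rv = "TCRCAYTTDGGRTARTCCCA"

print(degeneracy(cor_fw))  # 2304 distinct forward oligos
print(degeneracy(cor_rv))  # 48 distinct reverse oligos
```

The forward primer thus encodes 2304 distinct oligos and the reverse primer 48, which illustrates how a single primer pair can anneal to the conserved polymerase region of all 14 aligned coronavirus genomes.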
The amplicons were purified using the QIAquick PCR purification kit (QIAGEN) and sequenced with the respective primer pairs using the ABI PRISM BigDye Terminator Cycle Sequencing Reaction kit (version 3.1) on an ABI PRISM 3100 DNA sequencer (Applied Biosystems) according to the manufacturer's instructions. Positive and negative controls were included in each PCR experiment. The HCoV-NL63 positive control was RNA isolated from an HCoV-NL63 culture. Sequence analysis and phylogenetic analysis of the amplicons Chromatogram sequencing files were inspected with Chromas 2.2 (Technelysium Pty Ltd, Helensvale, Australia), and contigs were prepared using SeqMan II (DNASTAR, Madison, WI, USA). The consensus sequences obtained were compared with the prototype HCoV-NL63 sequences available in GenBank database release 142.0 using BLAST analysis (NCBI BLAST server). Multiple sequence alignments were prepared using CLUSTAL X version 1.82 [ 26 ] and manually edited in the GeneDoc alignment editor [ 27 ]. Phylogenetic analysis was conducted using MEGA version 2.1 [ 28 ]. Nucleotide sequence accession numbers The sequences determined in this study have been deposited in the GenBank sequence database under accession numbers AY758276 to AY758301. RT-PCR assays for HCoV-OC43 and HCoV-229E Our collection of samples was also screened using the pancoronavirus RT-PCR assay for the presence of HCoV-OC43 and HCoV-229E. Positive results were confirmed by one-step RT-PCR using HCoV-OC43 and HCoV-229E specific primer pairs located in the membrane glycoprotein region (OC43-FW: 5'-GGCTTATGTGGCCCCTTACT-3', nt 28580-nt 28599; OC43-RV: 5'-GGCAAATCTGCCCAAGAATA-3', nt 28894-nt 28913; 229E-FW: 5'-TGGCCCCATTAAAAATGTGT-3', nt 24902-nt 24921; 229E-RV: 5'-CCTGAACACCTGAAGCAAT-3', nt 25456-nt 25475) [ 18 ]. One-step RT-PCR and sequence analysis were performed essentially as described above, except that the annealing temperature was set at 55°C. Results Pancoronavirus RT-PCR assay A pancoronavirus RT-PCR assay is a useful tool for detecting all coronaviruses in a clinical sample. Besides allowing quick screening for several pathogens in one assay, it offers the possibility of identifying previously unknown coronaviruses. The consensus RT-PCR assay described by Stephensen et al., designed to amplify all known coronaviruses, is not able to detect HCoV-NL63 because of several mismatches with the primer sequences [ 15 , 17 ]. We modified these consensus primers based on an alignment of the HCoV-NL63 prototype sequence and 13 other coronavirus sequences (Figure 2 ). To determine whether the newly designed pancoronavirus RT-PCR assay efficiently amplifies a broad range of coronaviruses, the assay was tested on cell culture supernatant of the four known human coronaviruses and three animal coronaviruses: HCoV-NL63, HCoV-OC43, HCoV-229E, SARS-CoV, feline infectious peritonitis virus (FIPV), porcine hemagglutinating encephalomyelitis virus (PHEV), and murine hepatitis virus (MHV). Amplification of the expected 251 bp region was observed for all tested coronaviruses (Figure 3 ). The sensitivity of the pancoronavirus RT-PCR assay was assessed by testing tenfold dilutions of HCoV-NL63 and HCoV-OC43 RNA. While 50 copies of HCoV-OC43 RNA per μl of nasopharyngeal aspirate could be detected, the sensitivity for HCoV-NL63 was somewhat lower, i.e. 5 × 10³ RNA copies per μl of nasopharyngeal aspirate. Figure 3 Gel electrophoresis after pancoronavirus RT-PCR assay. 
The indicated band of 251 bp corresponds with the expected amplicon size. Molecular Weight Marker VI (Boehringer Mannheim, Germany) was used as a marker. Detection of HCoV-NL63 in clinical specimens The pancoronavirus RT-PCR assay was used for screening of specimens from hospitalized patients with respiratory symptoms collected between January 2003 and March 2004. Samples from which a 251 bp fragment could be amplified were further identified by sequencing using the pancoronavirus primers. We studied 309 specimens with a temporal distribution that corresponded with the yearly RSV epidemic period (Figure 1A ). A total of 244 samples were found negative for RSV by diagnostic testing. The 279 patients in this study comprised 211 patients aged <2 years (75.6%) and 68 aged 2–16 years (24.4%). We detected HCoV-NL63 in 7 samples (2.3%). One positive sample was collected at the end of January 2003, and coinfection with RSV type B was present. Five of the positive samples were collected within a ten-day period at the end of February 2003, and one positive sample was collected at the end of February 2004, which showed coinfection with adenovirus and parainfluenza virus (Figure 1B , Table 1 ). The seven positive samples were obtained from one patient aged 1 month, four patients aged 1 year, one patient aged 2 years, and one patient aged 16 years. The patient files showed that all subjects suffered from respiratory tract illness and some had underlying disease (Table 1 ).

Table 1 Patients hospitalized with respiratory tract illness associated with HCoV-NL63 infection
Patient nr. | Age | Sex | Symptoms | Underlying disease | Specimen | Sample date
1153 a | 1 year | male | URTI: fever, coughing, wheezing, rhinitis, diarrhoea | none | NPA | 27 Jan 2003
33545 | 16 years | male | LRTI: fever, coughing, respiratory distress, pharyngitis | Smith-Lemli-Opitz syndrome | NPA | 14 Feb 2003
21596 | 1 year | female | LRTI: fever, coughing, respiratory distress | Vater syndrome, epilepsy | NPA | 20 Feb 2003
53887 | 1 month | female | URTI: fever, rhinitis, two siblings have URTI | none | NPA | 20 Feb 2003
40001 | 1 year | male | LRTI: respiratory distress, cardiac arrest, rotavirus-positive diarrhoea | epilepsy | NPA | 21 Feb 2003
64880 | 2 years | male | URTI: fever, coughing, wheezing | neurofibromatosis | NPA | 24 Feb 2003
70688 b | 1 year | female | LRTI: pneumonia, fever, cyanosis, diarrhoea | none | PS | 25 Feb 2004
a positive for RSV type B; b positive for adenovirus and parainfluenza virus. LRTI, lower respiratory tract illness; URTI, upper respiratory tract illness; PS, pharyngeal swab; NPA, nasopharyngeal aspirate.

The seven HCoV-NL63 positive respiratory samples were confirmed by alternative RT-PCR assays. Amplification of a fragment of the nucleocapsid gene and of ORF1b was carried out. Sequence analysis of the N gene fragments and the ORF1b fragments showed 98–100% similarity to the prototype HCoV-NL63 sequences available in the GenBank database (AY567487, AY518894). A third one-step RT-PCR was carried out for each positive sample to amplify part of the ORF1a gene. Sequence analysis of the ORF1a PCR products revealed 99% sequence identity with both HCoV-NL63 prototype sequences available in GenBank. A neighbor-joining phylogenetic tree was constructed based on an alignment of the ORF1a nucleotide sequences from the HCoV-NL63 positive samples and the available HCoV-NL63 sequences in GenBank. HCoV-229E was used as an outgroup. The dendrogram shows that all HCoV-NL63 sequences cluster together, but two subclusters can be observed (Figure 4 ). 
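The neighbor-joining analyses reported here were done with CLUSTAL X and MEGA. For readers who want to reproduce the general approach programmatically, the following sketch builds a neighbor-joining tree with bootstrap replicates using Biopython instead; the input file name, the identity distance model and the outgroup call are assumptions that merely mirror the analysis described in the text.

```python
# Minimal sketch of a neighbor-joining reconstruction like the one
# described for the ORF1a fragments, using Biopython in place of
# CLUSTAL X / MEGA. 'orf1a.aln' is an assumed CLUSTAL-format alignment.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Phylo.Consensus import bootstrap_trees, majority_consensus

aln = AlignIO.read("orf1a.aln", "clustal")

calculator = DistanceCalculator("identity")          # simple p-distance model
constructor = DistanceTreeConstructor(calculator, "nj")

tree = constructor.build_tree(aln)                   # neighbor-joining tree
replicates = bootstrap_trees(aln, 500, constructor)  # bootstrap pseudoreplicates
consensus = majority_consensus(list(replicates))

consensus.root_with_outgroup("HCoV-229E")            # outgroup as in Figure 4
Phylo.draw_ascii(consensus)
```

The same recipe applies to the partial S gene alignment, where the text reports 500 bootstrap pseudoreplicates and PEDV as the outgroup.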
Figure 4 Phylogenetic analysis of the partial ORF1a nucleotide sequences. Accession numbers: HCoV-NL63, AY567487; HCoV-NL, AY518894; HCoV-229E, AF304460; NL-p466, AY567488; NL-p246, AY567489; NL-p251, AY567490; NL-p496, AY567491; NL-p223; AY567492; NL-p248, AY567493; NL-p72, AY567494; CAN39, AY675541; CAN52, AY675542; CAN57, AY675543; CAN140, AY675544; CAN146, AY675545; CAN214, AY675546; CAN449, AY675547; CAN470, AY675548; CAN483, AY675549; CAN495, AY675550; CAN528, AY675551; CAN531, AY675552; CAN543, AY675553. Inspection of the two full genome HCoV-NL63 sequences available in GenBank demonstrates that the amino-terminal region of the spike protein in particular can be very divergent. We therefore amplified this region to investigate its variability in our patients. An RT-nested PCR assay was used to amplify part of the S gene. These partial spike sequences showed 98% similarity with the HCoV-NL63 prototype strains. An alignment of the S gene sequences from the Belgian samples, partial spike sequences from the positive samples identified in The Netherlands (data not shown), and the prototype HCoV-NL63 sequences was used to construct a neighbor-joining phylogenetic tree. The neighbor-joining tree was evaluated by 500 bootstrap pseudoreplicates. Two clusters can again be observed (Figure 5 ). Figure 5 Phylogenetic analysis of the partial S gene nucleotide sequences based on an alignment of the Belgian spike sequences, spike sequences from the positive samples identified in The Netherlands, and the prototype HCoV-NL63 sequences available in GenBank. Accession numbers: HCoV-NL63, AY567487; HCoV-NL, AY518894. PEDV was used as an outgroup. Detection of HCoV-OC43 and HCoV-229E Screening of our sample collection for the presence of HCoV-OC43 and HCoV-229E was also performed. We detected HCoV-OC43 in 7 of 309 samples (2.3%) and HCoV-229E in one sample (0.3%). The seven HCoV-OC43 positive samples were collected during the winter and early spring of 2003 and 2004. The sample in which we detected HCoV-229E was collected in April 2003. The positive samples were confirmed by RT-PCR using specific HCoV-OC43 and HCoV-229E primer pairs that amplify part of the M gene [ 18 ]. The HCoV-OC43 and HCoV-229E partial membrane sequences of the contemporary Belgian strains showed 97–99% similarity with the HCoV-OC43 and HCoV-229E prototype sequences in GenBank. Discussion RSV, influenza viruses, adenoviruses, and parainfluenza viruses are probably the most important viral agents of severe respiratory diseases. However, a substantial part of respiratory tract infections cannot be attributed to any known pathogen. Underlying conditions and immunosuppression largely determine the impact of respiratory viruses on individuals [ 19 ]. The common cold viruses HCoV-OC43 and HCoV-229E have also been associated with more severe lower respiratory tract conditions in infants and immunocompromised patients [ 20 - 23 ]. The clinical symptoms associated with HCoV-NL63 infections still need to be determined, but there are some indications that HCoV-NL63 can cause severe respiratory illnesses in children and immunocompromised adults [ 15 , 16 ]. Using a pancoronavirus RT-PCR, we detected HCoV-NL63 in patients suffering from relatively severe respiratory diseases necessitating hospitalization. These positive samples were collected from children aged 1 month to 16 years. 
Two patients suffered from severe underlying disease. One suffered from Smith-Lemli-Opitz syndrome, a rare autosomal recessive disorder due to a primary enzymatic defect in cholesterol metabolism. The other was diagnosed with VATER, a syndrome characterized by the sporadic association of specific birth defects or abnormalities such as vertebral and vascular anomalies, anal atresia, tracheal and esophageal problems, and renal anomalies. All HCoV-NL63 infected patients made a complete recovery from their respiratory symptoms. One-step RT-PCR assays were used to detect and confirm these positive samples. Results from epidemiological surveys conducted in the 1970s have led to the conclusion that human coronaviruses are distributed worldwide and circulate during seasonal outbreaks [ 22 ]. Our results indicate that HCoV-NL63 is the causal agent in a significant proportion of respiratory diseases of unknown etiology. We detected HCoV-NL63 in respiratory samples collected in February 2003, with a frequency of 7.1%, and during February 2004, with a frequency of 2.5%. These results seem to support the tendency of human coronaviruses to circulate mainly during the winter season [ 7 , 24 ]. However, in this study, sampling was only performed from January to May during the yearly RSV epidemic period, while no samples from the summer and autumn months were screened. The first publication on HCoV-NL63 showed that the virus circulated in Amsterdam during the winter months of 2002/2003 [ 15 ]. More recently, another set of Amsterdam samples, obtained during the winters of 2001/2002 and 2003/2004, was screened. HCoV-NL63 was found in one trachea sample obtained in February 2002, and in two oropharyngeal aspirates from December 2003 and January 2004, respectively (data not shown). Combined with the data that we present here from Belgium, these findings confirm that HCoV-NL63 reappears each winter season, similar to the previously known respiratory viruses. Recently, research teams from Australia, Japan and Canada have submitted partial HCoV-NL63 sequences to the GenBank database (AY600442-AY600446, AY662694-AY662698, AY675541-AY675553). This indicates that this newly discovered human coronavirus has a worldwide distribution. Sequence analysis of the highly conserved nucleocapsid region showed that the Belgian isolates are similar to the two prototype HCoV-NL63 complete genome sequences in GenBank, isolated in The Netherlands in 1988 and 2003. Furthermore, phylogenetic analysis of part of the ORF1a region of our isolates showed the same subclusters of HCoV-NL63 that were described previously [ 15 ] (Figure 4 ). This finding supports the suggestion that several HCoV-NL63 subtypes with distinct molecular markers are cocirculating, in Belgium as well. A large insert in the 5' part of the S gene of HCoV-NL63 compared with HCoV-229E has been described [ 15 , 16 ]. The two HCoV-NL63 complete genome sequences show only 89% sequence identity in this spike insert region, which implies that there are at least two different HCoV-NL63 subtypes. Sequence analysis of this spike insert region revealed that our samples show similarity to both prototype HCoV-NL63 subtypes, which was confirmed by phylogenetic analysis. The partial S gene sequences cluster together with the two prototype HCoV-NL63 sequences in two different groups (Figure 5 ). This confirms that the HCoV-NL63 subtypes first isolated in 1988 and 2003 are cocirculating. 
When analysing the dendrograms based on the ORF1a and S gene sequences, a discordance in the clustering pattern of some HCoV-NL63 isolates (e.g. HCoV-NL and NL-p223) can be observed, suggesting a possible recombination event. Further analysis of complete genome sequences of these isolates is required. Drawing conclusions based on phylogenetic analysis of a single gene therefore requires caution, as the true phylogeny can only be demonstrated by analysing complete genome sequences. Screening of our sample collection for the presence of HCoV-OC43 and HCoV-229E revealed seven HCoV-OC43 positive samples and only one HCoV-229E positive sample. All positive samples were isolated during winter and early spring, which is concordant with the results of previous epidemiological studies. HCoV-OC43 infected samples were mainly identified during February 2003 and February 2004 (Figure 1B ). These data show that the epidemic seasons of HCoV-OC43 and HCoV-NL63 coincide. The positive samples were collected from children aged 1 to 12 years, all of whom suffered from respiratory symptoms. The very low detection rate of HCoV-229E compared with the frequent detection of HCoV-NL63 might imply that HCoV-NL63, which is closely related to HCoV-229E, is currently more important as a causal agent of respiratory disease. At the moment, there are no data concerning cross-neutralization between HCoV-229E and HCoV-NL63. In theory, such cross-neutralization might be possible, since both viruses are relatively closely related species belonging to coronavirus group 1. Antigenic cross-reactivity has already been demonstrated between SARS-CoV and the group 1 coronaviruses TGEV, FIPV and CCoV [ 25 ]. The development of a pancoronavirus RT-PCR assay using a primer set that matches all known coronaviruses might be useful for the identification of new coronaviruses. This pancoronavirus RT-PCR assay can also be used as a diagnostic tool to detect any of the four currently known human coronaviruses in clinical samples. Conclusions Human coronavirus NL63 is an important new respiratory pathogen that can cause severe respiratory infections in children. Sequence analysis of the HCoV-NL63 isolates detected in our study demonstrates that our Belgian isolates can be classified into two subtypes corresponding to the two prototype HCoV-NL63 sequences isolated in The Netherlands in 1988 and 2003. Our findings indicate that these two subtypes may currently be cocirculating. Competing interests The author(s) declare that they have no competing interests. Authors' contributions EM conceived of the study and designed it together with LV, EK, and MVR. EM developed the pancoronavirus RT-PCR and performed the RT-PCR and sequencing reactions. EM and LV drafted the manuscript. KZ assembled the respiratory samples. SL performed the RT-PCR sensitivity assays. PM was responsible for the graphical support of the manuscript. LVDH, KP and BB developed the HCoV-NL63 RT-PCRs and helped with the design of the study and the writing of the manuscript. All authors contributed to the final version of the manuscript, read and approved it. Pre-publication history The pre-publication history for this paper can be accessed online. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC549190.xml 
524368 | Identification of endogenous retroviral reading frames in the human genome | Background Human endogenous retroviruses (HERVs) comprise a large class of repetitive retroelements. Most HERVs are ancient and invaded our genome at least 25 million years ago, except for the evolutionarily young HERV-K group. The vast majority of the encoded genes are degenerate due to mutational decay, and only a few non-HERV-K loci are known to retain intact reading frames. Additional intact HERV genes may exist, since retroviral reading frames have not been systematically annotated on a genome-wide scale. Results By clustering hits from multiple BLAST searches using known retroviral sequences we have mapped 1.1% of the human genome as retrovirus related. The coding potential of all identified HERV regions was analyzed by annotating viral open reading frames (vORFs), and we report 7836 loci verified by protein homology criteria. Among 59 intact or almost-intact viral polyproteins scattered around the human genome we have found 29 envelope genes, including two novel gammaretroviral types. One encodes a protein similar to a recently discovered zebrafish retrovirus (ZFERV) while another shows partial, C-terminal, homology to Syncytin (HERV-W/FRD). Conclusions This compilation of HERV sequences and their coding potential provides a useful tool for pursuing functional analyses, such as RNA expression profiling and effects of viral proteins, which may, in turn, reveal a role for HERVs in human health and disease. All data are publicly available through a searchable online database. | Background It has become evident that the human genome harbors a fairly small number of genes, and exons account for little over 1% of our DNA. This stands in stark contrast to various types of repetitive DNA, and it has been estimated that transposable elements alone take up almost half of our genome [ 1 ]. Among such multi-copy elements are human endogenous retroviruses (HERVs). These represent stably inherited copies of integrated retroviral genomes (so-called provirus structures) that have entered our ancestors' genome. It has been estimated that HERVs and related sequences such as solitary long terminal repeat structures (solo-LTRs) and retrotransposon-like ( env -deficient) elements constitute approximately 8% of the human genome [ 1 ]. Phylogenetic analysis of the retroviral polymerase gene ( pol ) [ 2 ] and envelope genes ( env ) [ 3 ] has identified at least 26 distinct HERV groups. However, less well-defined sequence comparisons suggest that there may be well over 100 different HERV groups [ 4 , 5 ]. Within the family Retroviridae most of the seven genera are represented by endogenous members, and HERVs are divided into classes I, II and III depending on sequence relatedness to gammaretroviruses , betaretroviruses or spumaviruses , respectively. Many HERVs are named according to tRNA usage (i.e. HERV-K has a primer binding site that matches a lysine tRNA), while others have been more or less provisionally named by their discoverer. It seems increasingly clear that the nomenclature for endogenous retroviruses (ERVs) needs to be revised to accommodate such wide diversity. Furthermore, it is evident that many more ERVs are yet to be discovered, as retroviral elements are present in most, if not all, vertebrates and even in some invertebrates [ 6 , 7 ]. With a single exception (HERV-K) all HERV groups are ancient (i.e. 
they entered the genome prior to human speciation), having invaded our genome at least 25 million years ago [ 6 , 8 , 9 ], presumably as an infection of the germ line. Alternatively, it is possible that ERVs have evolved from pre-existing genomic elements such as LTR retrotransposons [ 10 ]. After colonization, most HERV groups have spread within the genome either by re-infection or by intracellular transposition [ 11 , 12 ] and have reached copy numbers ranging from a few to several hundred [ 13 ]. The vast majority of these provirus copies are non-functional due to the accumulation of debilitating mutations. Indeed, no replication-competent HERVs have yet been described, although fully intact members of the HERV-K group have been reported [ 14 ]. Other mammalian species such as mouse, cat and pig harbor modern replication-competent ERVs that to a large extent may interact with related exogenous viruses [ 15 , 16 ]. The presence of endogenous retroviral sequences in our genome has several possible implications: i ) replication and (random) insertion of new proviral structures, ii ) effects on adjacent cellular genes, iii ) long-range genomic effects and iv ) expression of viral proteins (or RNA). Since the majority of HERVs are highly defective, no de novo insertions have been observed, and presumably HERV mobilization very rarely results in spontaneous genetic disorders or gene knock-outs as seen with other active retrotransposons such as L1 elements [ 17 ]. However, existing HERV loci have been shown to alter gene expression by providing alternative transcription initiation, new splice sites or premature polyadenylation sites [ 18 ]. Moreover, the presence of enhancers and hormone-responsive elements in the LTR structures of existing HERVs may up- or down-regulate the transcription of flanking cellular genes. It has been speculated that transcription initiation from HERVs/solo-LTRs into neighboring genes in the antisense orientation might interfere with gene expression. Alternatively, gene transcripts encompassing antisense viral sequences could down-regulate HERV expression. The human C4 gene may provide an example of the latter, where antisense HERV-K sequences are generated and display an effect on a heterologous target [ 19 ]. Such effects may possibly rely on formation of dsRNA and RNA interference. On a genome scale, the presence of closely related sequences may trigger events of ectopic recombination and hence lead to chromosomal rearrangements. Sequence analysis of provirus-flanking DNA suggests that this has occurred during primate evolution [ 20 ]. The frequency and significance of such events in human disorders are not clear at present. Finally, HERVs may express viral proteins. The common retroviral genes gag , (pro) , pol and env lead to expression of three viral polyproteins (Gag, Gag-Pol and Env) that are processed by a viral or host protease into the active structural and enzymatic subunits. Although most HERV genes are no longer intact, a small fraction has escaped mutational decay. For a subgroup of HERV-K (HTDV) all proteins can apparently be expressed, and particle formation has been detected in teratocarcinoma cell lines [ 13 ]. Furthermore, HERV-K (HTDV) also directs expression of a small accessory protein, Rec (formerly cORF), that up-regulates nucleo-cytoplasmic transport of unspliced viral RNA [ 21 , 22 ]. Loci from other HERV groups have maintained a single intact open reading frame, such as the env genes from HERV-H [ 23 ], HERV-W [ 24 ] and HERV-R (ERV3) [ 25 ]. 
Conservation of an open reading frame during primate evolution clearly suggests some biological function. Animal studies have demonstrated that ERV proteins may in fact serve a useful role for the host, either by preventing new retroviral infection or by adopting a physiological role. Syncytin, an Env-derived protein that mediates cell-cell fusion during human placenta formation, provides a striking example of the latter [ 26 , 27 ]. Recently, a second Env protein, dubbed Syncytin 2 and proposed to have a similar cell-fusion role [ 28 ], was identified. Env proteins may also inhibit cell entry of related exogenous retroviruses that use a common surface receptor, and a Gag-derived protein restricts incoming retroviruses in mice [ 29 ]. In the literature, expression of HERVs has frequently been linked with human disease, including various cancers and a number of autoimmune disorders [ 30 ]. While causal links between disease and HERV activity have yet to be established, it is clear from animal models that expression of endogenous retroviral proteins can affect cell proliferation and invoke or modulate immune responses. A few recent examples include i ) the possible association of Rec (HERV-K) with germ-cell tumors [ 31 ], ii ) the immunosuppressive abilities of HERV-H Env in a murine cancer model resulting in disturbed tumor clearance [ 32 ] and iii ) the possible superantigenic (SAg) properties of envelopes from HERV-K and HERV-W [ 33 , 34 ] and the increased activity of such proviruses in multiple sclerosis [ 34 ], rheumatoid arthritis [ 35 ], schizophrenia [ 36 ] and type-1 diabetes [ 33 ]. SAg expression from the HERV-K18 locus may furthermore be induced by IFN-α and thus by viral infections such as Epstein-Barr virus [ 37 , 38 ]. One major problem in verifying putative disease associations is the multi-copy nature of HERVs and the ambiguous assignment to individual proviruses; a problem that can be solved by properly annotating the human genome. Among Env-associated effects, the mechanism of SAg-like activity is believed to involve true epitope-independent stimulation of T cells, while the mechanism of action of the immunosuppressive CKS-17-like domain is still unknown. This immunosuppressive peptide region maps to the envelope gene [ 39 ] and may significantly alter the pathogenic properties of retroviruses and even enhance cancer development. Phylogenetic analysis suggests that a CKS-17-like motif arose early in the evolution of retroviruses and is widespread in many current HERV lineages [ 3 ]; thus the identification of novel envelope genes attracts particular attention. Computer-assisted identification of HERV loci has previously been reported. These approaches include searching for conserved amino-acid motifs within the pol gene [ 2 , 40 ] and env gene [ 3 ], detection of full-length env genes by nucleotide similarity [ 41 ] and compilation of LTR- or ERV-classified repeats as reported by RepeatMasker analysis [ 4 , 5 , 42 ]. Currently only Paces et al. [ 5 , 42 ] provide a searchable database where individual loci are mapped to chromosomal coordinates [ 43 ]. However, except for the detection of 16 full-length env genes in a recent survey by de Parseval et al. [ 41 ] and a detailed analysis of the intactness of HERV-H-related proviruses [ 40 ], no one has systematically detected HERV regions and scanned them for their content of viral open reading frames. 
In this paper we report the mapping of 7836 regions in the human genome that show sequence resemblance to known retroviral genomes and that cover the majority of large proviral structures or HERV loci; importantly, we provide a detailed annotation of all viral open reading frames. Results In order to screen the human genome for HERV-related sequences we performed multiple nucleotide BLAST searches and subsequently clustered neighboring hits into larger regions up to about 10 kb in size (Figure 1A/1B ). The query sequences cover all known retroviral genera and include both endogenous and exogenous strains from various host organisms. To avoid detection of solo-LTR structures we used the coding regions as query (Figure 1A ). The corresponding DNA sequences were scanned for the presence of all viral open reading frames (vORFs, here defined as stop-codon-to-stop-codon fragments above 62 codons) with significant homology to known retroviral proteins (E < 0.0005) and annotated as Gag, Pol or Env. From our initial BLAST-identified regions we detect 7836 genuine HERV-related regions in which at least one, and usually several, vORFs can be detected. The majority of these HERV regions correspond quite well to the internal parts of a provirus locus. However, the insertion of other repetitive elements inside a provirus produces a mosaic structure that is less well defined. In terms of our HERV regions this may lead either to "partition" of a provirus into two or more consecutive HERV regions (as illustrated by the "provirus into provirus" insertion depicted in Figure 1B ) or to enclosure of minor stretches of non-retroviral DNA (such as Alu elements or small microsatellites) within the sequences of some HERV regions. Hence, the precise boundaries of the retrovirus-related DNA (often defined by nucleotide similarity alone) must be manually inspected and the flanking LTRs must be identified in order to deduce the exact proviral structure. To assist in LTR determination we scanned for flanking direct repeats and included LTR elements as identified by RepeatMasker analysis [ 4 ]. Because of these exceptions we refer to our data as "HERV regions", although in most cases they correspond to individual HERV loci. Figure 1 A: Genomic organization of simple retroviruses when present as a provirus (DNA) integrated in the host genome. The regulatory long terminal repeats (LTRs) flank the internal three major genes gag , pol and env . A fourth gene, pro , is present between gag and pol as a separate gene in some retroviruses, and as part of either gag or pol in others. B: Individual BLAST hits (white and yellow boxes) on either strand of the human genome were clustered into HERV regions (blue boxes) or discarded using a score function. Finally, only HERV regions with at least one retroviral ORF were kept (see Materials and Methods). In the example illustrated, HERV ID 5715 was presumably inserted into an existing HERV locus in the opposite orientation. HERV ID 5715 is located in the first intron of the CD48 gene (antisense direction) and is also known as HERV-K18 or IDDMK1,2 22. C: HERV ID 5715 with graphical vORF annotation. Putative LTR structures are indicated and all ORFs (stop-codon-to-stop-codon fragments above 62 aa) are mapped and annotated by homology criteria. The average region size is 4300 nucleotides, and the ~7800 HERV regions cover ~1.1% of the human genome. 
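The vORF definition used above (stop-codon-to-stop-codon fragments above 62 codons) is straightforward to implement. The following sketch is an illustration of that definition rather than the authors' actual pipeline code; it scans the three forward frames of a region sequence and yields candidate fragments (open-ended terminal fragments after the last stop codon are ignored in this sketch).

```python
# Illustrative scan for stop-to-stop viral ORF candidates in the three
# forward frames, following the >62-codon definition given in the text.
STOP_CODONS = {"TAA", "TAG", "TGA"}

def vorf_candidates(seq, min_codons=63):
    """Yield (frame, start, end) for stop-to-stop fragments >= min_codons.

    Under equal codon usage, the chance that a random stop-to-stop
    stretch reaches 63 codons is (61/64)**63, roughly 0.05, which is
    the rationale the paper gives for its length cutoff.
    """
    seq = seq.upper()
    for frame in range(3):
        prev_stop_end = frame  # position just past the previous stop codon
        for i in range(frame, len(seq) - 2, 3):
            if seq[i:i + 3] in STOP_CODONS:
                if (i - prev_stop_end) // 3 >= min_codons:
                    yield frame, prev_stop_end, i
                prev_stop_end = i + 3
```

Each yielded fragment would then be translated and classified as Gag, Pro, Pol or Env by protein homology, as described in the Methods section below.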
All data are publicly available in a searchable online database. Our data include i ) chromosomal coordinates and sequence information for the 7836 HERV regions, ii ) annotation of ~38000 retroviral ORFs within these regions and iii ) graphical visualization of individual HERV regions (Figure 1C ) or larger chromosomal windows. All DNA and predicted vORF sequences can be retrieved and are linked to external genome browsers for further analysis. Skewed chromosomal distribution and few intragenic HERVs The 7836 HERV regions (~2.7 per Mb) are not uniformly distributed among the 22+2 chromosomes (χ² test, P ≈ 0). Table 1 summarizes the genome distribution statistics, from which it is clear that chromosomes 2, 7, 9, 10, 15, 16, 17, 20 and 22 are less densely populated, while chromosomes 4, 19, X and Y have higher densities than expected from a random distribution. In particular, the Y chromosome stands out with more than 14 HERVs per Mb. The distribution of HERVs per Mb along each of the chromosomes is also not uniform, except perhaps on chromosome 21 (Table 1 ). Furthermore, we observe local "hotspots", most prominent on chromosomes 19, X and Y. For instance, a 5 Mb window on chromosome Y (position 18–23 Mb) encompasses 120 HERV regions. Moreover, there are a number of cases where HERVs have presumably been inserted right next to, or even into, an existing locus. HERV ID 5715 provides a nice example of the latter, where a HERV-K member has presumably integrated into an existing HERV-K (Figure 1B ). We also detect the perfect HERV-K tandem repeat previously reported [ 44 ]. However, in contrast to Reus et al. [ 44 ], we find a single in-frame stop codon within both gag genes (HERV ID 26658–9). We also find other examples of closely situated HERV loci, for instance HERV ID 44313, which is composed of two proviruses of distinct origin (HERV-K and a γ-retrovirus-like sequence), both severely degenerated.

Table 1 Genomic distribution of HERV regions
Chr. | Length (Mb) | Windows analyzed a | Observed HERVs | Expected HERVs | χ² test b | χ² test within chr. c
1 | 246 | 228 | 654 | 614.7 | 0.0987 | <0.0001
2 | 243 | 242 | 534 | 652.4 | 1.3E-06 | <0.0001
3 | 199 | 197 | 581 | 531.1 | 0.0250 | <0.0001
4 | 192 | 190 | 641 | 512.2 | 4.0E-09 | <0.0001
5 | 181 | 180 | 446 | 485.3 | 0.0656 | <0.0001
6 | 171 | 169 | 496 | 455.6 | 0.0513 | <0.0001
7 | 159 | 157 | 342 | 423.3 | 4.9E-05 | 0.0003
8 | 146 | 145 | 396 | 390.9 | 0.7923 | <0.0001
9 | 136 | 120 | 258 | 323.5 | 2.0E-04 | <0.0001
10 | 135 | 135 | 304 | 364.0 | 1.3E-03 | <0.0001
11 | 134 | 133 | 379 | 358.6 | 0.2695 | <0.0001
12 | 132 | 132 | 393 | 355.9 | 0.0440 | <0.0001
13 | 113 | 98 | 239 | 264.2 | 0.1146 | <0.0001
14 | 105 | 88 | 205 | 237.3 | 0.0335 | <0.0001
15 | 100 | 83 | 135 | 223.8 | 1.7E-09 | <0.0001
16 | 90 | 82 | 101 | 221.1 | 2.6E-16 | <0.0001
17 | 82 | 80 | 98 | 215.7 | 4.4E-16 | <0.0001
18 | 76 | 77 | 167 | 207.6 | 0.0043 | 0.0001
19 | 64 | 57 | 259 | 153.7 | 9.4E-18 | <0.0001
20 | 64 | 62 | 76 | 167.2 | 1.0E-12 | <0.0001
21 | 47 | 36 | 85 | 97.1 | 0.2181 | 0.0588
22 | 49 | 36 | 55 | 97.1 | 1.7E-05 | <0.0001
X | 154 | 152 | 629 | 409.8 | 9.7E-29 | <0.0001
Y | 50 | 25 | 359 | 67.4 | 1.1E-278 | <0.0001
TOTAL | 3068 | 2905 | 7832 d | 7832 | |
a Only windows overlapping with NCBI GoldenPath (release 34). b Single chromosomes tested against the group of other chromosomes; P-values below the significance level 0.00208 (0.05/24, Bonferroni corrected) are underlined in the original. c The genomic positions of HERVs were χ² tested against a random distribution using 10000 simulations for each chromosome. d Four additional HERV regions are located in the DR51 haplotype of the HLA region on chromosome 6 and are not counted here. 
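The expected counts in Table 1 are proportional to the number of 1 Mb windows analyzed per chromosome (for chromosome 1, 7832 × 228/2905 ≈ 614.7). As a hedged illustration, the chromosome Y row can be re-checked with a two-category χ² test in SciPy; the counts come from Table 1, while the use of scipy is an assumption, since the authors do not state which software produced their statistics.

```python
# Re-checking the chrY excess reported in Table 1 with a simple
# observed-vs-expected chi-square test (scipy assumed).
from scipy.stats import chisquare

total = 7832
observed = [359, total - 359]     # HERVs on chrY vs. all other chromosomes
expected = [67.4, total - 67.4]   # expectation proportional to windows analyzed

result = chisquare(observed, f_exp=expected)
print(result.statistic, result.pvalue)  # an extreme chrY excess (cf. 1.1E-278 in Table 1)
```

The exact P-value may differ slightly from the table, which was computed over all 1 Mb windows rather than two pooled categories, but the conclusion (a roughly five-fold excess on chromosome Y) is the same.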
The number of HERV regions located within (non-HERV) genes is significantly reduced compared with a random distribution (χ² test, P < 10⁻³⁰⁰): only 13% of our 7836 HERV regions are situated inside a gene, even though 33% of the genome is spanned by genes (Figure 2 ). In total, 813 genes (see Additional file 1 ) carry one or more HERV regions within their predicted boundaries and as such provide a valuable set of genes that may show altered expression due to the presence of internally located proviruses. There is a strong bias (χ² test, P < 10⁻⁵²) for intragenic HERVs to be oriented antisense relative to the gene (Figure 2 ). HERV sequences located between genes are equally distributed between the two strands, and their orientation does not depend on the distance from the gene (data not shown). Figure 2 Number of HERV regions located inside genes, and their orientation relative to the gene. The expected numbers assume a random genomic distribution. Limited number of intact viral open reading frames Of the ~38000 retroviral ORFs, 25% are classified as Gag, 7% as Pro, 55% as Pol and 13% as Env proteins. This correlates well with the expected sizes of the gag , pro , pol and env genes, although Pol may be slightly overrepresented. The vast majority of the vORFs (stop to stop) are short (Table 2 ) and presumably do not encode any functional proteins, although a role in cellular processes cannot be excluded. Long vORFs, on the other hand, may still retain their original viral function. In total, 42 HERV regions encompass either a Gag or an Env ORF above 500 codons or a Pol ORF above 700 codons (sizes that approach those of intact viral proteins); together they comprise 17 Gag, 13 Pol and 29 Env proteins (Table 2 and Figure 3 ). Only two HERV-K related loci (HERV ID 13983 and 29013) carry long reading frames for all viral genes. However, neither of them is completely intact. In fact, 41 of the above 59 long vORFs are betaretroviral and stem from the HERV-K group. Interestingly, 15 of the remaining 18 non- betaretroviral ORFs are envelope proteins (see below). Our method detects only a single non- betaretroviral Gag ORF above 500 codons (located in a gammaretroviral structure, HERV ID 44200–1), while two long Pol ORFs are present in full-length HERV-Fc and HERV-H elements (HERV ID 1178 and 10816) that also harbour intact env genes [ 41 , 45 ].

Table 2 Distribution of vORF lengths (stop codon to stop codon)
vORF size (aa/codons) | Gag | Pro | Pol | Env | HERV regions
63–100 | 4820 | 1322 | 10390 | 2354 | 6795
100–200 | 4015 | 1002 | 9110 | 2278 | 5803
200–300 | 643 | 165 | 1426 | 361 | 1894
300–400 | 160 | 54 | 286 | 81 | 527
400–500 | 33 | 3 | 70 | 24 | 123
500–600 | 1 | – | 20 | 12 | 33
600–700 | 4 | – | 10 | 9 | 22
700–800 | 10 | – | 4 | 7 | 15
800–900 | – | – | 1 | 1 | 2
900–1000 | 1 | – | 5 | – | 6
> 1000 | 1 | – | 3 | – | 4

Figure 3 Genomic distribution of all Gag (red) and Env (blue) ORFs above 500 aa and Pol (green) ORFs above 700 aa. Right-pointing triangles denote intact ORFs, while left-pointing triangles denote ORFs that are almost intact apart from a single stop codon or frame-shift mutation. If one extends the search criteria and scans the human genome for retroviral genes where a single mutation (one nucleotide insertion, deletion or substitution) either removes premature termination or restores the correct reading frame, the number of long Gag, Pol and Env proteins increases two-fold to 27, 23 and 43, respectively (Figure 3 ). 
Novel envelope genes identified Our method detects 29 Env ORFs (stop to stop) above 500 codons (Table 3 ), including a few seemingly intact or almost-intact env genes in the human genome that have not previously been reported. One particularly interesting locus (HERV ID 40701) shows similarity to a recently reported full-length endogenous retrovirus from zebrafish ( Danio rerio ), dubbed ZFERV [ 46 ]. A phylogenetic analysis of the zebrafish ERV suggested that it is distinct from the existing retrovirus genera, being most similar to the gammaretroviruses [ 46 ]. An analysis of short Gag and Pol ORFs upstream of the Env gene (HERV ID 40701) confirms the relatedness to gammaretroviruses (weak similarity to feline leukemia virus). Also, two loci (HERV ID 44200–1 and 44204–5) harbor novel Env-like ORFs that show C-terminal homology to Env from HERV-W/syncytin-1 [ 26 , 27 ] and HERV-FRD/syncytin-2 [ 28 ], while the N-terminal sequences show no clear homology. The identified ORFs are highly similar (96% aa identity) except for a small C-terminal truncation, and both genes are located within a narrow 40 kb region on chromosome 19 (Table 3 ). Interestingly, both these loci are positive in our EST mapping analysis (see below). Furthermore, among the 29 Env ORFs, five turned out to carry a specific 292 bp deletion (indicative of type 1 HERV-K (HML-2)) that fuses the pol and env reading frames. The same deletion is present in the HERV-K18 Env locus that has been reported to have SAg-like activity [ 37 ].

Table 3 Previously and newly identified long Env ORFs in the human genome
Gene a | Bibliographic name | Chromosomal position of locus (NCBI release 34) | Length c | ORF ID | Comment | EST matches d
HERV-H-like Env | | Chr. X 70307525–70316940 (+1) | 474 | 4769 | N-term unknown; minor C-term deletion |
EnvF(c)1 | | Chr. X 95868842–95875915 (+1) | 583 | 8944 | Intact a |
HERV-W Env | | Chr. X 105067535–105070015 (-1) | 475 | 24413 | Minor N-term deletion | 3
HERV-K Env (type 1) | | Chr. 1 75266332–75270814 (+1) | 586 | 42910 | In-frame pol-env fusion | 3
HERV-K Env (type 1) | K18-SAg IDDMK1,2 22 | Chr. 1 157878336–157885675 (+1) | 560 | 46511 | In-frame pol-env fusion |
EnvH3 | EnvH/p59 | Chr. 2 155926784–155933168 (+1) | 554 | 70149 | Intact a |
HERV-K Env (type 1) | | Chr. 2 130813720–130815944 (-1) | 687 | 80419 | In-frame pol-env fusion |
EnvH1 | EnvH/p62 H19 | Chr. 2 166767087–166774769 (-1) | 583 | 82113 | Intact a |
EnvR(b) | | Chr. 3 16781208–16788508 (+1) | 513 | 86185 | Intact a |
HERV-K Env (type 1) | | Chr. 3 114064939–114072223 (-1) | 597 | 103885 | In-frame pol-env fusion; C-term deletion |
EnvH2 | EnvH/p60 | Chr. 3 167860265–167867997 (-1) | 562 | 107739 | Intact a |
HERV-K-like Env | | Chr. 5 34507318–34513254 (-1) | 475 | 153615 | N- and C-term deletion |
EnvFRD | Syncytin 2 | Chr. 6 11211667–11219905 (-1) | 537 | 171089 | Intact a | 16
EnvK4 | HERV-K109 | Chr. 6 78422690–78431275 (-1) | 697 | 174741 | Intact a |
EnvK2 b | HML-2.HOM HERV-K108 | Chr. 7 4367317–4383401 (-1) | 698 | 188263, 188274 | Intact a | 4
EnvR | Erv3 | Chr. 7 63862984–63871411 (-1) | 605 | 191393 | Intact a | 17
EnvW | Syncytin (1) | Chr. 7 91710047–91718755 (-1) | 537 | 192333 | Intact a | 100
EnvF(c)2 | | Chr. 7 152498159–152502575 (-1) | 545 | 195475 | Intact a | 1
EnvK6 | HERV-K115 | Chr. 8 7342682–7353583 (-1) | 698 | 204173 | Intact a |
HERV-K Env | | Chr. 11 101104479–101112064 (+1) | 661 | 240932 | Minor C-term deletion | 6
HERV-K-like Env | | Chr. 12 104204746–104209814 (+1) | 658 | 255589 | Minor C-term deletion |
EnvK1 | | Chr. 12 57008431–57016689 (-1) | 697 | 260042 | Intact a |
ZFERV-like Env | | Chr. 14 91072914–91085655 (-1) | 664 | 285129 | |
HERV-K Env (type 1) | | Chr. 16 35312483–35314318 (+1) | 550 | 293143 | In-frame pol-env fusion |
EnvT | | Chr. 19 20334642–20343232 (+1) | 664 | 310016 | Intact a |
HERV-W/FRD-like Env | | Chr. 19 58210000–58211244 (+1) | 477 | 312172 | N-term unknown; minor C-term deletion | 3
HERV-W/FRD-like Env | | Chr. 19 58244133–58246051 (+1) | 535 | 312208 | N-term unknown | 3
EnvK3 | HERV-K (C19) | Chr. 19 32821287–32829201 (-1) | 698 | 314652 | Intact a |
a Nomenclature for verified and complete env genes as in de Parseval et al. [41]. Note that EnvK5 (HERV-113) at Chr. 19 [14] is not present in NCBI release 34 of the human genome. b EnvK2 is organized as a tandem repeat. c ORF length from start to stop codon. d Number of ESTs that map to the same genomic region (see text).

EST matching to HERV regions with long ORFs We mapped 265 ESTs to one of the 42 HERV regions that encode a long Gag, Pol or Env ORF (Figure 3 ). The EST GenBank accession numbers, the matching HERV IDs and the source organ and tissue types are provided as supplementary material (see Additional file 2 ). Briefly, 20 of the 42 HERV regions were found to have matching ESTs, suggesting transcriptional activity. For the long envelope genes we have included the number of EST matches in Table 3. Our analysis reveals that besides "activity" of members of the HERV-K group, only HERV-Fc(2), HERV-R (Erv3) and a few HERV-W/FRD members (including Syncytin-1 and -2) have unambiguous EST matches. By far, Syncytin-1 dominates with 100 EST matches, followed by Syncytin-2 and HERV-R. Syncytin-1 and Syncytin-2 were predominantly found in placental EST libraries (see Additional file 2 ), which is also true for 5 of 17 HERV-R ESTs. Interestingly, for the two (partial) HERV-W/FRD-like env genes, four of six ESTs are also derived from placental tissues. Discussion We report a mapping of 7836 loci in the human genome that show nucleotide sequence similarity to retroviral genomes and, importantly, we provide a detailed analysis of their coding potential by annotation of all viral ORFs (stop-codon-to-stop-codon fragments longer than 62 codons). This compilation of HERV regions and their corresponding viral ORFs is available as a searchable database [ 47 ]. A graphical example is provided in Figure 1C. In total, our HERV regions (which exclude flanking LTRs) amount to 1.1% of the human genome, a number that agrees well with previous reports [ 1 , 42 ]. The vast majority of the mapped HERV regions contain several frame-shift mutations or in-frame stop codons that truncate the viral ORFs and thus testify to their ancient association with the human genome. In fact, we detect only 42 proviruses that have retained Gag, Pol or Env ORFs in a size range that approaches full-length proteins (Figure 3 and Table 2 ). As expected, the majority are part of the evolutionarily young HERV-K (HML-2) group. None of these HERV-K loci is completely intact, although one potentially replication-competent locus (HERV-K113, polymorphic in humans and not present in the NCBI34 genome) has been reported [ 14 ]. Alternatively, complementation among HERV-K loci may allow infectious particle formation, and this clearly defines interesting candidates to investigate experimentally. Moreover, assuming a high error rate during transcription or retrotransposition, one cannot exclude that almost-intact loci may occasionally revert to their original functional state and become replication-competent. Based on our data, about 34 gag , pol or env genes can be restored by a single point mutation or a single insertion-deletion event. 
Within our list of intact or almost-intact viral ORFs in the human genome, we detect only a single gag gene and two pol genes that are not from the HERV-K group. However, among the 29 long envelope genes, 15 are gammaretroviral (Table 3 ). The fragmented, pseudogene nature of the gag and pol genes (small ORFs) in several of these provirus loci strongly suggests that selection has preserved the env genes. In the case of syncytin-1 and -2 (HERV-W and HERV-FRD members, respectively), evolutionary conservation can be understood in functional terms, since the encoded envelope proteins have been suggested to play an essential role in placental development by causing trophoblast syncytia formation [ 28 , 48 ]. Compelling evolutionary evidence for purifying selection in these genes has recently been gathered to support this hypothesis [ 28 , 49 , 50 ]. Concerning other ancient loci such as HERV-R (erv3), no evidence for a physiological role has yet been established, despite remarkable conservation and expression of the env gene. Potential cellular roles for envelope genes that may drive purifying selection include i ) protection from infection by related retroviruses through receptor interference, as demonstrated for the murine fv4 locus [ 51 ], ii ) mediation of organized cell-cell fusion, like the syncytin genes [ 26 - 28 ] and iii ) a hypothesized role in preventing an immune response against the developing embryo by means of the immunosuppressive domain [ 52 ]. Two seemingly intact env genes not detected in the recent survey of intact human envelope genes [ 41 ] are equally interesting in terms of possible functional conservation. One is located on chromosome 14q32.12, and this novel gene shows low but significant similarity to a recently reported endogenous retrovirus from zebrafish (ZFERV [ 46 ]). BLAST analysis of the protein-coding regions suggests that this HERV group belongs to the gammaretroviral genus. Whether this gene is still active, or whether the encoded protein still maintains function and/or plays a cellular role, is yet to be established. Although we were unable to detect any unambiguous EST matches to this gene (Table 3 ), RT-PCR analysis indicates low RNA abundance in a few human tissues including placenta (Kjeldbjerg AL, Aagaard L, Villesen P and Pedersen FS, unpublished). A second seemingly intact novel env gene is found on chromosome 19q13.41, and interestingly a C-terminally truncated "twin" gene is located just 40 kb away. Both genes appear to be active as judged by EST data (Table 3 ), mostly in placental tissue (see Additional file 2 ). We have confirmed this by RT-PCR analysis (unpublished), and ongoing expression analysis aims at clarifying the activity and function of these novel genes. Among the long betaretroviral env genes, five turned out to carry a specific 292 bp deletion that fuses the pol and env reading frames. This deletion variant of the HERV-K (HML-2) group is indicative of the type 1 genomes [ 53 ] that, despite the lack of functional proteins, have been mobilized quite efficiently. Alternatively, recombination or gene conversion may have conserved this HERV-K deletion variant [ 11 , 54 ]. It is noteworthy that the Env protein from one of these Δ292 genes, HERV-K18, is reported to have SAg-like activity [ 37 ]; whether the other four Δ292 genes have a similar K18 SAg-like function is an open question. Although our analysis is extensive, it is most likely not exhaustive. 
The sensitivity is obviously limited by our query sequences, and some ancient HERVs may have suffered mutational decay to a degree that makes it impossible to detect them by homology. For instance, the ZFERV-related env gene reported here was only detected due to inclusion of the ZFERV sequence [ 46 ], and although available data such as HERVd [ 43 ] also point to this region, it is reported there as a number of incomplete HERVs. Similarly, nucleotide-based searches (such as RepeatMasker and BLAST detection) only partially detect the novel HERV-W/FRD-like envelope genes and the intact envelope genes of the HERV-Fc family, even though these proviruses are fairly intact, as suggested by a recent mobilization of HERV-Fc in the primate lineage [ 45 ]. Thus, inclusion of more retroviral query sequences, such as our vORF-validated HERV data, will likely improve detection methods in an iterative manner ("phylogenetic walking"), as previously applied by Tristem [ 2 ]. Finally, screening the human genome in silico does not guarantee detection of polymorphic HERV loci for which the empty pre-integration site is still segregating in the human population. Indeed, an experimental survey has recently detected two such polymorphic loci in the human population (HERV-K113 and 115 [ 14 ]), and like HERV-K113, other recently acquired proviruses may escape our attention. In general, our analysis of the genomic positions of the ~7800 HERV regions revealed three distinct patterns, all of which confirm earlier reported results: i ) there is an unequal distribution of HERVs between chromosomes and along the genome. In particular, the Y chromosome stands out with a five-fold excess of our vORF-positive (internal) HERV sequences (Table 1 ), and it has thus been dubbed "a chromosomal graveyard" [ 55 ]. This agrees well with previous genome surveys of LTR/ERV-related elements, and the phenomenon is likely associated with the high level of heterochromatin and low levels of recombination [ 55 - 58 ]. ii ) HERVs are underrepresented within genes, and iii ) HERVs found in introns are predominantly oriented in the antisense direction (Figure 2 ). This pattern is well known [ 56 , 58 ] and expected due to selection against gene disruption or interference by retroviral regulatory elements such as promoters, splice sites and polyadenylation signals. This selection may have counteracted a preference for proviral integration (and retrotransposition) near or inside genes, as suggested by recent studies of several retroviral genera [ 59 , 60 ]. Conclusion Initially, HERV discovery was driven by the search for replication-competent viruses and their possible association with human cancers, as established in other species. Recent research has demonstrated that the presence of endogenous retroviral sequences in our genome has a number of complex functional and evolutionary consequences and cannot simply be regarded as "junk" DNA. The increased complexity and diversity of HERVs, as testified by the identification of two novel env genes in this survey, make expression analysis and functional assessment a difficult task. To aid this process, our genome-wide HERV data as well as the predictions of Gag, Pol and Env reading frames in these loci are a useful resource, and our data can be searched and visualized online. Clearly, the 42 HERVs encompassing intact or near-intact gag , pol and env genes as described here are interesting experimental objects, although less intact viral proteins may also hold biological activity. 
In the near future, the use of comparative genomics and the mapping of allele polymorphisms will most certainly enhance the identification of endogenous retroviruses and reveal selection patterns that may eventually decipher a role for these genes in human health and/or disease. Methods In order to identify HERV regions in the human genome we performed BLAST searches using sensitive parameters. BLAST hits were saved in a database and subsequently clustered into putative HERV loci. These putative loci were then scanned for viral open reading frames (vORFs) and for the presence of flanking direct repeat sequences (putative LTRs). Subsequently, ORFs were categorized based on a library of known retroviral proteins and non-retroviral proteins. Identifying HERV regions In order to cover as many different HERV families as possible we compiled a query set of 237 publicly available sequences from GenBank, published papers and Repbase [ 4 ]. These sequences cover all known retroviral genera and include both endogenous and exogenous strains from various host organisms (the query set is available upon request). Each query sequence was manually edited to remove LTR elements in order to avoid detection of solo LTRs. BLAST searches against contigs from NCBI release 34 of the human genome were performed using WU-BLAST (Gish, W., 1996–2003) with default parameters except for W = 8, E = 0.001, V = 1000000, B = 1000000. Search results were stored in a MySQL database and mapped to chromosomal positions using Ensembl Bioperl packages [ 61 ]. Overlapping BLAST hits were clustered into putative HERV regions, allowing a gap of 500 nucleotides between hits. A region score was calculated as the sum of e-value-weighted hit lengths divided by the region length. Only regions longer than 300 nucleotides and with a region score > 3.0 (threshold based on empirical tests) were kept, resulting in 45658 putative HERV regions. Detection of direct flanking repeats (putative LTRs) was done by comparing a window before and after the HERV region. ORF finding and categorization For the 45658 putative HERV loci, we scanned the DNA sequence (including 1000 bases flanking the locus) for forward open reading frames (stop codon to stop codon) of length > 62 amino acids (aa). Stop-codon-to-stop-codon fragments were chosen to accommodate the use of non-conventional translational initiation by retroviruses at the internal pro and pol genes (by means of ribosomal frame-shifting and termination suppression). The predicted proteins, in particular for the gag and env genes, may therefore contain incorrect N-terminal regions that must be removed by looking for appropriate start codons. ORFs of length below 63 aa were discarded, as the probability of finding such ORFs in a random sequence increases to more than 0.05 (assuming equal codon frequencies). All ORFs were then assigned to a category by FASTA searching against a library of known retroviral proteins (RV) and known non-retroviral proteins (NON_RV). RV proteins were downloaded from NCBI and categorized as GAG, POL, PRO, ENV, ACC (accessory protein) or UNWANTED (for unwanted or unknown proteins). The NON_RV set consists of all human SwissProt proteins of length 400–700 aa whose descriptions do not include the words "endogenous, virus, envelope, env-, env, gag-, gag, pol-, pol, reverse". The final library consisted of 6260 records (3454 RV proteins + 2806 NON_RV proteins). Each ORF was assigned to the same category as its highest-scoring hit. 
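To make the clustering and scoring step concrete, here is a minimal sketch in Python. The 500-nt gap, the 300-nt length cutoff and the score threshold of 3.0 come from the text; the exact e-value weighting function is not spelled out in the paper, so the weight used here is an explicit placeholder assumption.

```python
# Illustrative re-implementation of the BLAST-hit clustering and
# region-score filter described above; not the authors' pipeline code.
import math

def evalue_weight(evalue):
    # Placeholder assumption: the paper says hit lengths are "e-value
    # weighted" without giving the function; here more significant hits
    # receive weights closer to 1.
    return min(1.0, -math.log10(max(evalue, 1e-300)) / 100.0)

def cluster_hits(hits, max_gap=500):
    """Merge BLAST hits (start, end, evalue), sorted by start, into regions."""
    regions, current = [], [hits[0]]
    cur_end = hits[0][1]
    for hit in hits[1:]:
        if hit[0] - cur_end <= max_gap:
            current.append(hit)
            cur_end = max(cur_end, hit[1])
        else:
            regions.append(current)
            current, cur_end = [hit], hit[1]
    regions.append(current)
    return regions

def region_score(region):
    """Sum of e-value-weighted hit lengths divided by the region length."""
    start = min(h[0] for h in region)
    end = max(h[1] for h in region)
    weighted = sum((h[1] - h[0]) * evalue_weight(h[2]) for h in region)
    return weighted / (end - start)

def keep_region(region, min_len=300, min_score=3.0):
    start = min(h[0] for h in region)
    end = max(h[1] for h in region)
    return (end - start) > min_len and region_score(region) > min_score
```

Because hits from many query sequences can overlap the same stretch of genome, the score measures how densely (and how significantly) a region is covered, so a threshold above 1 selects regions supported by multiple overlapping retroviral matches.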
All loci with a significant RV ORF (vORF) were flagged as HERVs (E < 0.0005) - this data set consists of 7836 loci. Manual inspection of long ORFs above 400 codons revealed that two envelope ORFs (ORF ID 86185 and 312172) were (mis)categorized as non-significant (NonS) due to low sequence similarity to our retroviral protein library. EST matching to individual proviruses In order to match the human ESTs to the vORF-positive HERV regions, we first performed an all-against-all search using NCBI MegaBLAST [ 62 ]. The output was filtered so that only the best matching pairs (HERV-EST) were kept and put into a database. The ESTs that matched the HERV regions encompassing a long ORF were subsequently assigned to a human genomic region using EST mapping data from the UCSC Genome Browser [ 63 ]. ESTs that unambiguously mapped to the same genomic region as the HERV regions of interest were counted as positive EST matches. List of abbreviations used: ERV, endogenous retrovirus; EST, expressed sequence tag; HERV, human endogenous retrovirus; LTR, long terminal repeat; vORF, viral open reading frame. Competing interests The authors declare that they have no competing interests. Authors' contributions The study was conceived by LAA and FSP; PV and LAA participated in designing and coordinating the study; PV carried out all programming and compilation of the accompanying web resource, while LAA prepared the query sequences, performed detailed analysis of the results and drafted the manuscript, and PV and CW performed the statistical analysis. All authors read and approved the final manuscript. Supplementary Material Additional File 1 Table 1. Genes with one or more vORF HERVs inside. Genes were selected from Ensembl (Current Release 21.34d.1) and compared with all HERV regions containing a retroviral ORF. 813 genes (642 with descriptions) contained 1182 HERVs (969) overlapping the gene chromosomal coordinates (exons + introns). The HERV score is a measure of the density of retroviral BLAST hits in the region. Additional File 2 Table 2. ESTs matching HERVs containing a long viral ORF. ESTs were compared to HERVs using MegaBLAST. Only ESTs that best matched the target HERV were kept. Finally, ESTs mapping conclusively to the same genomic regions as the target HERV were kept. EST library information (organ and tissue) was parsed from GenBank. The positions are in NCBI35 coordinates due to overly stringent settings of EST mappings in the NCBI34 mapping at UCSC. HERV positions were lifted to NCBI35 coordinates using the "lift genome annotations" tool at the UCSC Genome Browser. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC524368.xml |
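The best-pair filtering in the EST matching above can be sketched in a few lines; the field names ('est', 'herv', 'bitscore') are assumptions, since the actual database schema is not given.

def best_pairs(megablast_hits):
    """For each EST, keep only its single best-scoring HERV match."""
    best = {}
    for hit in megablast_hits:  # each hit: {'est': ..., 'herv': ..., 'bitscore': ...}
        prev = best.get(hit['est'])
        if prev is None or hit['bitscore'] > prev['bitscore']:
            best[hit['est']] = hit
    return list(best.values())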
545782 | Genome-wide patterns of carbon and nitrogen regulation of gene expression validate the combined carbon and nitrogen (CN)-signaling hypothesis in plants | Microarray analysis and the 'InterAct class' method were used to study interactions between carbon and nitrogen signaling in Arabidopsis. | Background Carbon and nitrogen are two major macronutrients required for plant growth and development. Specific carbon and nitrogen metabolites act as signals to regulate the transcription of genes encoding enzymes involved in many essential processes, including photosynthesis, carbon metabolism, nitrogen metabolism, and resource allocation [ 1 - 5 ]. For example, studies have shown that carbon sources (for example, glucose or sucrose) affect the expression of genes involved in nitrogen metabolism, including genes encoding nitrate transporters and nitrate reductase [ 6 , 7 ]. Conversely, nitrogen sources (such as nitrate) have been shown to affect the expression of genes involved in carbon metabolism, including genes encoding PEP carboxylase and ADP-glucose synthase [ 8 ]. Responses to carbon and nitrogen result in important changes at the growth/phenotypic level as well. For example, carbon and nitrogen treatments have antagonistic effects on lateral root growth [ 9 ], while their effect on cotyledon size, chlorophyll content and endogenous sugar levels appears to be synergistic [ 10 ]. In plants, there are multiple carbon-responsive signaling pathways [ 11 - 13 ], and progress has been made in uncovering parts of the sugar-sensing machinery, including the identification of a putative glucose sensor, hexokinase [ 14 ]. However, our current knowledge of how genes and biological processes are regulated by carbon signaling in plants, particularly at the level of transcription, is still limited. For example, a search of the PlantCare [ 15 , 16 ] and TRANSFAC [ 17 ] databases revealed only seven plant cis elements that have been shown to be carbon-responsive cis elements (C-elements), and none has been identified from studies in Arabidopsis thaliana. Although much less is known concerning the mechanisms controlling nitrogen signaling, microarray analysis has been used to identify nitrogen-responsive genes [ 8 , 18 ]. It has recently been proposed that glutamate receptor 1.1 (AtGLR1.1) functions as a regulator of carbon and nitrogen metabolism in A. thaliana [ 19 ], but a global understanding of the genes and processes that are regulated by carbon and nitrogen signaling in plants and the mechanism by which this occurs is still lacking. Previously, microarrays were used to identify genes and biological processes regulated by interactions between carbon and light signaling in A. thaliana, including the identification of a putative cis regulatory element that is responsive to either light or carbon signals [ 13 ]. In this study, we present a genome-wide analysis of the effects of transient carbon and/or nitrogen treatments on mRNA levels, with a particular focus on genes whose mRNA levels are affected by the combined carbon and nitrogen (CN) treatment. This study has enabled us to evaluate a number of models for intersections between carbon and nitrogen signaling (Figure 1) and to identify genes and biological processes that are regulated by the interactions between carbon and nitrogen signaling pathways. In addition, we have identified putative cis elements that may be responsible for coordinating a gene's responses to both these signaling pathways.
Results Testing models of carbon and nitrogen regulation The goal of this study was to use a genomic approach to test the hypothesis that carbon and nitrogen signaling pathways interact to regulate the expression of genes in Arabidopsis . We predicted six general models that could describe the possible modes of gene regulation due to carbon, nitrogen and CN together. Three of these models do not involve interactions between carbon and nitrogen signaling. The 'No effect' model includes genes not regulated by carbon, nitrogen and/or CN. The 'C-only' model includes genes regulated only by carbon. Finally, the 'N-only' model includes genes regulated only by nitrogen. Three additional models are needed to describe the regulation of genes affected by interactions between carbon and nitrogen signaling (Figure 1a ). Model 1 (CN independent) depicts a gene W , for which carbon and nitrogen signals act as independent pathways, so that the effects of carbon and nitrogen are additive. Model 2 (CN dependent) depicts a gene X , for which regulation requires carbon and nitrogen, and neither carbon alone nor nitrogen alone has an effect. Model 3 (CN dependent/independent) incorporates both an independent and a dependent component to the interactions of carbon and nitrogen signaling. For gene Y , carbon alone has an independent inductive effect, while nitrogen has a carbon-dependent effect as it can enhance the effect of carbon, but has no effect on its own (Model 3 CN-enhanced). For gene Z , nitrogen alone has an independent inductive effect, while carbon has a nitrogen-dependent effect. These general models can be broken down into more descriptive sub-models. For example, Model 2 can be broken into two sub-models for which CN results in either an inductive or repressive effect. To test the in vivo significance of the above models, a microarray analysis of RNA from plants treated transiently with distinct carbon and nitrogen treatments was carried out, and the results were analyzed to determine the carbon and nitrogen regulation of different genes. For this study, we analyzed RNA isolated from Arabidopsis seedlings exposed to four different transient carbon and/or nitrogen treatments (-C/-N, +C/-N, -C/+N, and +C/+N) (Figure 2 ) using Affymetrix whole-genome microarray chips. Analysis of gene expression across these treatments was performed on the whole genome using InterAct Class [ 13 , 20 ], an informatic tool that enabled us to classify genes into each of the above models based on their relative responses to carbon and/or nitrogen treatments. The analysis of the microarray data with InterAct Class enabled us to group genes whose relative responses to carbon, nitrogen and CN were similar to each other. In this case, each InterAct class is made up of four values listed in the following order: value 1 = the expression due to carbon; value 2 = the expression due to nitrogen, value 3 = the expression due to carbon and nitrogen supplied as a combined treatment (CN); and value 4 = the synthetic expression of C+N calculated by adding the expression due to carbon plus the expression due to nitrogen, which is a 'virtual' treatment. InterAct Class is a ranking system used to qualitatively compare gene-expression profiles across multiple treatments. For each gene, each treatment is assigned a value representing the effect of the treatment on the expression of that gene. 
Treatments that result in repression of a gene are assigned a negative number, treatments that do not significantly affect a gene are assigned zero, and treatments that cause induction are assigned a positive number. If more than one treatment causes induction or repression, the treatments are ranked so that the treatment that causes the most induction or repression is assigned the number furthest from zero (this ranking is illustrated in a short sketch at the end of Materials and methods). The four hypothetical genes in Figure 1a (W, X, Y and Z) were classified by InterAct Class (Figure 1b), demonstrating that, with this program, it becomes easy to determine whether the regulation of a gene is due to a complex (non-additive) interaction between carbon and nitrogen signaling. For such genes, the value assigned to CN (the third InterAct Class number) will be higher or lower than the value assigned to C+N (the fourth InterAct Class number). These genes will fall into Models 2 and 3 (Figure 1b, genes X, Y and Z). Out of 23,000 genes on the Affymetrix chip, 3,652 passed our stringent filtering criteria for reproducibility among treatment replicates and were assigned an InterAct class. Our subsequent analysis of the expression patterns of these 3,652 genes validated the existence of 60 different InterAct classes (Table 1 and Additional data file 1). These 60 InterAct classes represent a broad spectrum of expression patterns that validate each of the six general models for gene regulation. This analysis shows that the vast majority of the 3,652 genes analyzed (2,485 genes) are responsive to carbon and/or nitrogen treatment. Moreover, almost half of these genes (1,175 genes) are regulated by an interaction between carbon and nitrogen signaling (Table 1). For example, there are 175 genes in Model 3 CN-enhanced, for which expression due to CN is greater than expression due to C+N (Table 1 and Additional data file 1). This suggests that an interaction between carbon and nitrogen signaling affects the expression of this set of genes. MIPS funcat analysis uncovers biological processes that are regulated by carbon and/or nitrogen The InterAct classes were assigned to one of the six general models. To identify biological processes that contain a significant number of genes regulated by carbon, nitrogen and/or CN, we determined which Munich Information Center for Protein Sequences (MIPS) functional categories (funcats) [ 21 , 22 ] were statistically under-represented in the No effect model (InterAct class 0000), compared to all the genes assigned an InterAct class (Table 2) (not to all the genes in the genome; this takes into account any bias that may have occurred as a result of the filtering process before InterAct class analysis). Under-representation of a biological process in the No effect model means that for that particular funcat, there are fewer genes in the No effect model than expected on the basis of how all the genes assigned to an InterAct class behave. This means that processes under-represented in the 0000 InterAct class contain a significant number of genes that respond to carbon and/or nitrogen treatments compared to the general population of genes in the analysis. For example, 31.6% (1,089/3,447) of the genes assigned to an InterAct class and a funcat are assigned to the No effect model (Table 2). This percentage was used as a basis of comparison to determine if genes in any specific funcat varied significantly from the general population.
For example, if genes in the metabolism funcat are not regulated by carbon and/or nitrogen in a significant fashion, the number of genes expected to be in the No effect model would be equal to the total number of genes in the metabolism funcat that are assigned an InterAct class (496) times 0.316, which would equal 156.7 genes. However, the actual number of metabolism genes in the No effect model is 120, which is significantly less than 156.7 (p-value = 6.0 × 10-4). Therefore, the metabolism funcat is under-represented in the No effect model, showing that metabolism displays significant regulation by carbon and/or nitrogen. (The underlying binomial calculation is sketched in code at the end of Materials and methods.) This analysis revealed several primary funcats (01 = metabolism, 02 = energy and 05 = protein synthesis) that are significantly under-represented in the No effect model (Table 2). Thus, a significant number of genes involved in metabolism, protein synthesis and energy respond to carbon, nitrogen and/or CN. For the funcats that are under-represented in the No effect model, this type of analysis was extended to examine the regulation of these funcats in all of the sub-models. This analysis enabled us to determine into which sub-models the genes from these funcats fell and whether the genes in these funcats are under- or over-represented (-S and +S, respectively) in these sub-models (Table 3) (see Additional data file 1 for the p-values and for the funcat analysis extended to every sub-model and every funcat). Identification of cis elements associated with CN-regulated genes To begin to elucidate the mechanisms that control gene regulation in response to carbon and nitrogen treatments, we sought to identify putative cis elements that might be responsible for regulating genes in Model 3 CN-enhanced (Table 1). These genes are likely to contain cis elements involved in interactions between carbon and nitrogen signaling because the expression due to CN is greater than that due to C+N. Previously, genes that are biologically related and similarly expressed were used to find putative cis-regulatory elements involved in carbon and/or light regulation [ 13 ]. For this study, to identify related genes in metabolism, we added a new statistical functionality to the informatic tool PathExplore [ 23 ], which enabled us to identify metabolic pathways that contain more genes than expected in a list of genes [ 24 ]. As used here, PathExplore is useful for finding functionally related genes from analyses that combine data from multiple microarray chips (for example, InterAct Class and clustering). In this case, we searched for pathways that contained more than the expected number of genes in Model 3 CN-enhanced, compared to the general population. Ferredoxin metabolism, with three genes, was found to be over-represented in Model 3 CN-enhanced (p-value = 0.022) (Table 4a). These genes were also found to be induced in roots and shoots of nitrate-treated plants [ 18 ], and the protein products of these genes are all predicted to be localized to the chloroplast [ 25 ], further suggesting that they are biologically related and co-regulated. As we found that genes in the funcat protein synthesis are over-represented in Model 3 CN-enhanced (Table 3), we selected a set of genes in protein synthesis that are in Model 3 CN-enhanced for additional cis search analysis. Four nuclear genes encoding ribosomal proteins predicted to be localized to the mitochondria [ 25 ] were assigned to InterAct class 1021 (Table 4b).
These four genes meet the criteria of being biologically related and having similar expression patterns, and were also analyzed for potential cis-regulatory elements. Over-represented motifs in the promoters of the four protein synthesis genes or the three ferredoxin metabolism genes were identified using AlignAce [ 26 , 27 ] (AlignAce motifs). We predicted two general mechanisms, each with identifiable cis-regulatory elements, by which carbon and nitrogen could have a non-additive effect (for example, Model 3 CN-enhanced) on the transcription of a gene (Figure 3). These models predict that because the genes used for cis discovery are induced by carbon alone, there must be a transcription factor (and cognate cis element) that responds to carbon alone. Such carbon-responsive cis elements (C-elements) can be identified because they should also be over-represented in the promoters of genes that are induced by carbon alone (the C-only inductive model). From this analysis, a number of the AlignAce motifs identified from the ferredoxin metabolism and protein synthesis genes in Model 3 CN-enhanced were also shown to be associated with C-only inductive model genes (Table 5; C1-C11). The simplest model that could result in the expression due to CN being greater than C+N is depicted in Figure 3a. In this model, the promoters that contain a C-element are also regulated by a completely independent transcription factor (and cognate cis element) that responds specifically to a CN-signaling pathway (Figure 3a). If such a CN-responsive cis element (CN-element) exists, it would be predicted to be over-represented in the promoters of genes in Model 3 CN-enhanced, but would not be over-represented in the C-only inductive model. Two of the AlignAce motifs fit this pattern (motifs CN1 and CN2, Table 5), suggesting that they are CN-elements. If CN1 and CN2 regulate gene expression, they might be expected to be evolutionarily conserved. Unfortunately, A. thaliana and/or Oryza sativa have multiple genes encoding ferredoxin and ferredoxin reductase, and as such, the true orthologs of the genes used for this analysis cannot be conclusively identified for a promoter analysis (the same is true for the ribosomal genes used for analysis). Another prediction is that if CN1 and CN2 regulate gene expression, biologically related genes might also contain CN1 and CN2. Interestingly, ferredoxin-dependent nitrite reductase (At2g15620) contains three copies of CN1 and one copy of CN2 in its promoter. This gene is in Model 3 CN-enhanced (InterAct class 1021), its protein product is localized to the chloroplast [ 25 ] and its expression is induced in shoots and roots of nitrate-treated plants [ 8 ], suggesting that the gene is biologically related to and co-regulated with the ferredoxin and ferredoxin reductase genes used for this analysis. We next tested whether finding three copies of CN1 and one copy of CN2 in the promoter of ferredoxin-dependent nitrite reductase was statistically likely by testing randomized versions of the promoter. We found that three copies of CN1 were unlikely (p-value = 0.0364), but it would not be unlikely to find one copy of CN2 (p-value = 0.200). In addition, a total of four copies of CN1 and CN2 was very unlikely (p-value = 0.018) in any combination (for example, three CN1 and one CN2, two CN1 and two CN2, or one CN1 and three CN2, and so on). As A. thaliana has only one copy of ferredoxin-dependent nitrite reductase, we searched the O.
sativa genome sequence for ferredoxin-dependent nitrite reductase genes. Again, we found only one gene [ 28 ]. BLAST [ 29 ] did not find enough similarity between the promoters of the A. thaliana ferredoxin-dependent nitrite reductase gene and the O. sativa gene for an alignment. Despite this lack of similarity, we tested for the presence of CN1 and CN2 in the promoter of this gene; three copies of CN1 (p-value = 0.052) and one copy of CN2 (p-value = 0.389) were found. Again, it was very unlikely that a total of four copies of CN1 and CN2 (p-value = 0.045) would occur in the promoter sequence. Identification of nitrogen-dependent enhancers of carbon regulation (NDEs) A second mechanism by which the expression due to CN could be greater than C+N could involve a nitrogen-responsive cis element that alone has little or no effect on gene regulation, but when present in combination with a C-element, enhances the induction caused by carbon and is dependent on a carbon-responsive transcription factor (Figure 3b). Other regulatory modules in plants have been identified in which the regulation due to one cis element requires the presence of another [ 30 ]. In the example examined here, the nitrogen-dependent cis element enhances the induction caused by the C-element, making it a nitrogen-dependent enhancer of carbon regulation (NDE). To identify NDEs, our strategy for cis element identification was modified. NDEs would be expected to be over-represented in the promoters of Model 3 CN-enhanced genes, but only when present in combination with a separate C-element, as both elements are required to give the enhanced expression due to CN. However, some of the AlignAce motifs are potentially involved in regulating expression due to the carbon treatment in cooperation with the already identified C-elements. These cis elements would be similar to NDEs, as they would be over-represented in genes induced by carbon in combination with the already identified C-elements. As these motifs are not NDEs, we sought to identify them and remove them from the analysis. AlignAce motifs were tested to determine whether they are over-represented in the promoters of genes that contain any of the C-elements and are in the C-only inductive model. Those that were found to be over-represented were eliminated from further analysis because these motifs are potentially involved in carbon regulation and are not NDEs. Next, the remaining 33 AlignAce motifs were tested to determine if any are NDEs by determining whether they are over-represented in combination with a C-element within the promoters of the Model 3 CN-enhanced genes. Seven of the potential NDEs are over-represented (p-value < 0.05) with at least one C-element in the promoters of the Model 3 CN-enhanced genes, resulting in 12 significant combinations between putative NDEs and C-elements (that is, some of the potential NDEs are over-represented with more than one C-element; data not shown). To determine if this approach resulted in an enrichment of NDEs, the promoter sequence of each gene was randomized, and the same test was performed. This enabled us to determine whether the remaining 33 AlignAce motifs were over-represented in combination with each C-element in the randomized promoters of the Model 3 CN-enhanced genes. Sets of the randomized promoters (200 sets) were tested, and none of them had as many significant pairs of potential nitrogen-dependent enhancers of carbon regulation and C-elements as the 12 found in the actual promoters.
This randomization indicates that our approach successfully enriched for NDEs in the actual promoters of the Model 3 CN-enhanced genes and that not all of the observed significant combinations can be due to false positives (p-value < 0.005). Not surprisingly, each of the seven potential NDEs was found to be over-represented with C-elements using the randomized promoters. This shows that false positives can occur in testing for NDEs. The results from the randomized promoters were used to identify which potential NDEs are over-represented with more C-elements than expected (that is, for which not all of the combinations can be explained by false positives). Two NDEs (N1 and N2) were found to be associated with C-elements (Table 5; C3, C6, C7 and C10) in six (N1C6, N1C7, N2C3, N2C6, N2C7 and N2C10) of the 12 significant combinations between the 33 remaining AlignAce motifs and the C-elements. N1 and N2 are involved in more significant combinations than expected on the basis of the randomization study (Table 6; last column). If N1 or N2 work with the C-elements (C3, C6, C7 and C10) to regulate gene expression in response to CN, then genes that contain both motifs and are in Model 3 CN-enhanced should be misrepresented in certain functional groups, as these genes are truly co-regulated. This misrepresentation should occur not only with respect to the genome, but also with respect to the genes in Model 3 CN-enhanced. This result is expected because these genes are more closely related to each other than to the other genes in Model 3 CN-enhanced, and because their CN regulation is the result of the action of the same transcription factor(s). Funcat analysis was used to determine if any functional categories were misrepresented in the genes whose promoters contain N1C6, N1C7, N2C3, N2C6, N2C7 or N2C10 and are in Model 3 CN-enhanced. As the genes used to derive most of the pertinent cis motifs encode proteins that are localized to mitochondria, we also tested to see if these genes were misrepresented in the predicted localization of the proteins they encode with respect to the genes in Model 3 CN-enhanced. For the genes whose promoters contain N1C6, N1C7, N2C3, N2C6, N2C7 or N2C10 and are in Model 3 CN-enhanced, only the 'protein synthesis' funcat was found to be misrepresented amongst the primary funcats as compared to all the genes in Model 3 CN-enhanced (Table 7). The genes predicted to encode mitochondria-localized proteins are over-represented for some combinations, but genes localized to the cytoplasm or chloroplast are never misrepresented (Table 7). Two combinations (N2C3 and N2C8) do not show over-representation in protein synthesis and/or genes encoding mitochondria-localized proteins, suggesting they are false positives. All the others show over-representation in some category, further suggesting the potential biological relevance of these cis elements (Table 7). Discussion This report contains one of the first genome-wide investigations of carbon- and nitrogen-signaling interactions in A. thaliana [ 31 ]. While the focus of our analysis is related to genes controlled by carbon and nitrogen interactions, information from this study can also be used to globally identify genes and processes responsive to regulation by carbon or nitrogen alone. This type of analysis reveals that carbon is a more ubiquitous regulator of the genome than nitrogen.
The most obvious manifestation of this is the number of genes assigned an InterAct class that are regulated by C-only (1,310) versus N-only (4) (Table 1). This result is not surprising, because carbon plays a major part in many biological processes and is therefore a major regulator of those processes. However, our studies show that nitrogen has a significant role in modifying the effect of carbon on gene expression. In particular, it is noteworthy that many genes (208 genes) show a response to the CN treatment that is different from that of plants treated with carbon alone (Table 1 and Additional data file 1). This analysis demonstrates that nitrogen does have an effect on gene expression, but that in the vast majority of cases, the nitrogen effect is largely carbon-dependent. The carbon dependence of nitrogen regulation may reflect the metabolic interdependence of carbon and nitrogen. For example, carbon skeletons are required for the assimilation of nitrogen into amino acids. Biological processes containing genes that respond significantly to carbon, nitrogen and/or CN were initially identified by finding MIPS funcats [ 21 , 22 ] that contained genes that were under-represented in InterAct class 0000 (the No effect model) (Table 2). Funcats under-represented in the No effect model have a significant number of genes regulated by carbon and/or nitrogen. It is not surprising that processes like metabolism, protein synthesis, and energy are under-represented in the No effect model. These processes control metabolism or require energy generated by metabolism, and therefore expression of genes involved in these processes is likely to change in response to changes in levels of carbon, nitrogen and/or CN caused by external feeding or depletion after starvation. Regulation of protein synthesis might reflect its being a downstream process responding to an increase in amino acids as a result of feeding carbon, nitrogen and/or CN. To gain a better understanding of how the metabolism, energy and protein synthesis funcats are regulated by carbon and/or nitrogen, the sub-models in which they are misrepresented were identified (Table 3). This analysis revealed that the energy funcat is over-represented in InterAct classes that correspond to repression by carbon. It has been shown that carbon sources repress the expression of genes involved in photosynthesis [ 32 ]. As photosynthesis genes are part of the energy funcat, the photosynthesis sub-funcat (02.40) was tested and found to be over-represented in the C-only repressive model, in agreement with the previously observed repression of photosynthesis genes by carbon [ 32 ]. Surprisingly, metabolism is over-represented in Model 3 CN-suppressed, indicating that many of the genes involved in metabolism show less expression due to CN than expected. The majority of these genes (28 out of 34) were repressed by carbon, induced by nitrogen and repressed by CN, and were assigned to InterAct classes such as -21-2-1 (see Additional data file 1). Several of these genes encode enzymes involved in the catabolism of complex carbohydrates, including β-fructofuranosidase (At1g12240), β-amylase (At3g23920) and β-glucosidase (At3g60130 and At3g60140). ASN1 (At3g47340), which has been proposed to be involved in producing asparagine for the transport of nitrogen when carbon levels are low and has been shown to be repressed by carbon [ 32 ], was assigned to Model 3 CN-suppressed (-21-2-1).
In addition, GDH1 (At5g18170), which has been proposed to be involved in ammonia assimilation when ammonia levels are high, is repressed by carbon and induced by nitrogen [ 33 ], and was assigned InterAct class -21-2-1, again a Model 3 CN-suppressed class. These genes therefore seem to be regulated as a result of decreased levels of carbon, increased levels of nitrogen or an imbalance between carbon and nitrogen. For example, when carbon sources are limiting (nitrogen is in excess), ASN1 is induced because it is involved in shifting the excess nitrogen to asparagine, as asparagine is an efficient way to store and transport nitrogen with respect to carbon [ 34 ]. However, when carbon is in excess or carbon and nitrogen are balanced, ASN1 is repressed. The regulation of these genes demonstrates the exquisite control of metabolic genes required to balance carbon and nitrogen availability. Our studies also showed that protein synthesis is one of the processes most affected by the interactions between carbon and nitrogen signaling (Table 3). In addition, the funcat entitled 'protein with binding function or cofactor requirement' (structural or catalytic) is also over-represented in Model 3 CN-enhanced (see Additional data file 1), partly due to genes that encode proteins involved in translation, including At4g10450 (putative ribosomal protein L9 cytosolic; InterAct class 2132) and At4g25740 (putative ribosomal protein S10; InterAct class 1021) (see Additional data file 1). This suggests that protein synthesis is regulated by carbon (see above), but also by complex interactions between carbon and nitrogen signaling. Little work has been done on the transcriptional control of protein synthesis by carbon and/or nitrogen signaling in plants. However, it has been shown in yeast that genes encoding ribosomal proteins are induced by nitrogen in the presence of carbon; whether this induction by nitrogen requires carbon to be present was not addressed in the yeast study [ 35 ]. Furthermore, in the fungus Trichoderma hamatum, the gene for ribosomal protein L36 is regulated by interactions between carbon and nitrogen, as it is induced only by CN, and not by carbon or nitrogen alone [ 36 ]. Our studies of carbon and nitrogen regulation of gene expression in plants, combined with the studies in fungi, suggest that transcriptional regulation of genes involved in protein synthesis by carbon and nitrogen signaling interactions is evolutionarily conserved. Finally, we sought to identify the cis-regulatory mechanisms involved in carbon and nitrogen signaling interactions. We hypothesized that there could be two general transcriptional mechanisms that would result in the expression due to CN being greater than that due to C+N (Figure 3). In one case, the regulation due to carbon and the regulation due to CN are completely independent (Figure 3a), and in the other case, the regulation due to nitrogen is dependent on a carbon-responsive transcription factor and cis element (Figure 3b). Since CN1 and CN2 (Table 5) are over-represented in Model 3 CN-enhanced genes (for example, InterAct class 1021) independently of a C-element, we propose that CN1 and CN2 regulate gene expression due to CN that is independent of a C-element (Figure 3a). This hypothesis is supported because CN1 and CN2 were found in the ferredoxin-related genes, which contain no C-elements that are over-represented in Model 3 CN-enhanced.
However, we cannot rule out the possibility that CN1 and CN2 are promiscuous NDEs (Figure 3b) that interact with many C-elements, which might result in over-representation of CN1 and CN2 in Model 3 CN-enhanced genes, but not in over-representation of a specific C-element. Further analysis suggests that CN1 is involved in regulating the expression of ferredoxin-dependent nitrite reductase. Finding three copies of CN1 in the promoter of the A. thaliana ferredoxin-dependent nitrite reductase gene is statistically unlikely (p-value = 0.0364), and while three copies in the promoter of the O. sativa gene did not reach the 0.05 cutoff, this might represent some small change in the specificity of the regulating factor between O. sativa and A. thaliana. The failure of BLAST to detect any similarity between the promoters of these two genes indicates that the promoters have diverged extensively at the sequence level, so a slight change in the specificity of the regulating factor is not unexpected. The same analysis suggests that CN2 is a false positive, because it is not over-represented in the promoters of the ferredoxin-dependent nitrite reductase genes. However, we cannot rule out the possibility that the combination of CN1 and CN2 is what is important in regulating these genes, as having a total of four copies of CN1 and CN2 is unlikely in the promoters of both genes. One possibility is that there is a positional relationship between the copies of CN1 and CN2 that is important. From a quick visual inspection, there does not appear to be a conserved relationship between the three copies of CN1 and one copy of CN2 in the A. thaliana and O. sativa promoters. These issues will have to be resolved by further experimental work; however, these results do suggest that ferredoxin, ferredoxin reductase and ferredoxin-dependent nitrite reductase are co-regulated by carbon and nitrogen due to CN1 and/or CN2. CN1 and/or CN2 therefore might act to link nitrogen reduction and energy metabolism. Our analysis found CN-elements in the promoters of the ferredoxin-related genes (Table 4a), but not in those of the nuclear-encoded ribosomal mitochondrial protein genes (Table 4b). Also, none of the C-elements found in the ferredoxin-related genes (C1 through C5) is over-represented in the Model 3 CN-enhanced genes, suggesting that these elements have no role in CN regulation and that the CN and carbon signaling are independent (Table 5). In contrast, most of the C-elements in the promoters of the ribosomal protein genes are also over-represented in the promoters of the Model 3 CN-enhanced genes (C6 through C9), suggesting that they have a role in carbon and CN regulation. In addition, the majority of the C-elements (C6, C7 and C10) found to be over-represented in combination with NDEs (N1 and N2), and the most statistically significant of these enhancers (N2), were found in the promoters of the ribosomal proteins (Table 6). This suggests that the CN transcriptional regulation of genes for ribosomal proteins is primarily due to NDEs (Figure 3b). Thus, it is not surprising that many of the genes potentially regulated by the combination of C-elements and NDEs are involved in protein synthesis (Table 7). However, the putative NDEs most probably regulate genes involved in a number of different biological processes.
For example, genes that contain the combination N1C7 and are in Model 3 CN-enhanced include metabolic genes (for example, At3g25900 (homocysteine S-methyltransferase), At2g30970 (aspartate aminotransferase) and At3g52940 (C-14 sterol reductase)), histone-related proteins (for example, At1g54690 (histone H2A) and At2g27840 (histone deacetylase-related)), and putative signaling/regulatory proteins (for example, At4g39990 (Ras-related GTP-binding protein BG3), At5g38480 (14-3-3 protein) and At3g18130 (guanine nucleotide-binding protein)). This analysis represents a first step in understanding how carbon and nitrogen signaling interact to control gene expression and has identified genes and putative cis elements that are responsive to carbon and nitrogen signaling interactions. It is noteworthy that the putative CN-elements and NDEs have not been previously identified and as such may represent novel components of the CN regulatory circuit. Further study of the identified genes and cis elements is required to bring about a complete understanding of interactions between carbon and nitrogen signaling. Materials and methods Plant growth and treatment for analysis Arabidopsis thaliana seeds of the Columbia ecotype were surface-sterilized, plated on designated media and vernalized for 48 h at 8°C. Plants were grown semi-hydroponically under 16-h-light (70 μE/m2/s)/8-h-dark cycles at a constant temperature of 23°C on basal Murashige and Skoog (MS) medium (Life Technologies) supplemented with 2 mM KNO3, 2 mM NH4NO3 and 30 mM sucrose [ 37 ]. Two-week-old seedlings were transferred to fresh MS media without nitrogen (KNO3 and NH4NO3) or carbon (sucrose) and dark-adapted for 48 h. To perform specific metabolic treatments, 25 dark-adapted seedlings were transferred to fresh MS medium containing 0% or 1% (w/v) sucrose and/or 2 mM KNO3 and 2 mM NH4NO3 or no nitrogen, and illuminated with white light for an additional 8 h (70 μE/m2/s). Following these transient carbon and nitrogen treatments, whole seedlings were harvested, immediately frozen in liquid nitrogen, and stored at -80°C before RNA extraction. RNA isolation and microarray analysis RNA was isolated from whole seedlings using a phenol extraction protocol as previously described [ 38 ]. Double-stranded cDNA was synthesized from 8 μg total RNA using a T7-Oligo (dT) promoter primer and reagents recommended by Affymetrix. Biotin-labeled cRNA was synthesized using the Enzo BioArray High Yield RNA Transcript Labeling Kit. The concentration and quality of cRNA were estimated by an A260/A280 reading and by running 1:40 of a sample on a 1% (w/v) agarose gel. cRNA (15 μg) was used for hybridization (16 h at 42°C) to the Arabidopsis ATH1 Target (Affymetrix). Washing, staining and scanning were carried out as recommended by the Affymetrix instruction manual. Expression analysis was performed with the Affymetrix Microarray Suite software (version 5.0) set at default values with a target intensity set to 150. Three biological replicates for each treatment were carried out. Using Affymetrix probes to assign genes to InterAct classes Only Affymetrix probes representing genes that were deemed to be expressed in all treatments and replicates were used for subsequent analysis by InterAct Class [ 13 , 20 ]. For a gene to be considered expressed, the absolute call made by Affymetrix Microarray Suite 5.0 had to be 'present' (P) for each of three replicates for each of four treatments (12 chips total).
These genes have reliable values assigned to them that can be used for further analysis, whereas the proper InterAct Class assignment of a gene with an A ('absent') call would not be ensured. It should also be noted that the always-P genes are less noisy than the genes that have an A call (data not shown). In the InterAct Class analysis, four values were assigned to each gene on the basis of its response to carbon and/or nitrogen. The first three values are the expression due to carbon (the expression in treatment 2 minus the expression in treatment 1; see Figure 2), the expression due to nitrogen (the expression in treatment 3 minus the expression in treatment 1; see Figure 2), and the expression due to CN (the expression in treatment 4 minus the expression in treatment 1; see Figure 2). The fourth InterAct Class value represents the expected expression due to C+N, which was calculated by adding the expression due to carbon to the expression due to nitrogen (see the sketch at the end of this section). The expression due to carbon, the expression due to nitrogen, the expression due to CN and the C+N values were calculated for each replicate and then analyzed with InterAct Class without binning [ 20 ]. Statistical analysis of InterAct Classes and functional categories p-values were calculated for the MIPS functional categories (funcats) [ 21 , 22 ] analysis as described previously [ 13 ]. Briefly, the number of genes assigned to the funcat being analyzed and any InterAct class was used as n; p was the number of genes assigned to the specific model being analyzed divided by the number of genes assigned to an InterAct class and funcat; k was the number of genes in the funcat being analyzed and assigned to the model being analyzed. This analysis, with the baseline being all the genes assigned an InterAct class, accounts for any biases that may have been caused by discarding all the absent genes. The one-tailed p-value was considered when the Poisson approximation of binomial probabilities was used. For the binomial-ratio and the exact binomial probability test, the p-value for k or more out of n was used. Identification of putative cis-regulatory elements in promoters of CN-regulated genes Pathways whose genes are over-represented in Model 3 CN-enhanced were identified using the informatic tool PathExplore [ 23 ] function 13 [ 24 ]; the methodology is described on the associated web pages. Briefly, a binomial test is used; the genes assigned an InterAct class were used as the parent list, n was the number of genes in Model 3 CN-enhanced (the child list), k was the number of genes in the pathway being analyzed and in the child list, and p was the number of genes in the pathway being analyzed and in the parent list divided by the number of genes in the parent list. We limited our search to pathways that contained more than two genes in the Model 3 CN-enhanced list. To identify cis-regulatory elements involved in regulating genes in Model 3 CN-enhanced and protein synthesis, we used genes involved in protein synthesis that were assigned Model 3 CN-enhanced to drive the cis search: At1g07070 (60S ribosomal protein L35a), At2g36620 (60S ribosomal protein L24), At5g07090 (ribosomal protein S4), and At5g58420 (ribosomal protein S4 like). The methodology used to identify putative carbon and CN regulatory elements was carried out as described previously [ 11 ]. RSA tools was used to extract the A.
thaliana promoters for every gene [ 39 , 40 ]; AlignAce was then used to identify over-represented motifs in the promoters of the genes being analyzed (AlignAce motifs) [ 24 ]. To determine if a motif is over-represented in the promoters of genes in a particular sub-model, the sequence extracted from RSA tools and its reverse complement were searched to determine how many promoters contained the AlignAce motif and in what copy number (see the sketch at the end of this section). Then a binomial test was used to determine if the promoters that contain the motif in the proper number of copies are over-represented in a particular sub-model. For this analysis, the number of genes with the AlignAce motif being analyzed in their promoter is n, p is the number of genes in the sub-model (for example, Model 3 CN-enhanced) divided by the total number of genes assigned an InterAct class, and k is the number of genes whose promoters contain the AlignAce motif being analyzed (in a specific copy number) and that are in the particular sub-model being tested. A p-value was only calculated if k was greater than nine. In each case, the lowest p-value is given. Cis elements over-represented in the C-only inductive model are considered to be putative C-elements, and cis elements that are over-represented in the promoters of Model 3 CN-enhanced genes but are not over-represented in the promoters of C-only inductive genes are considered to be putative CN-elements (Table 5). To identify interacting elements, a similar analysis was used. For example, to identify motifs interacting with a C-element (Table 5) in regulating induction due to carbon (C-associated elements), genes whose promoters contain the C-element were identified. The promoters of these genes were then checked for a second motif. The number of genes that contained the C-element being analyzed and the second motif was used as n. The number of genes in the C-only inductive model that contained the C-element being analyzed divided by the number of genes assigned an InterAct class and containing the C-element being analyzed was used as p. The number of genes whose promoters contain that C-element and the second motif being analyzed (in a specified copy number) and that are in the C-only inductive model was used as k. In this example, the analysis will determine if the genes that contain the second motif and the C-element being analyzed are over-represented in the C-only inductive model compared to the genes that contain only the C-element. The same approach was used to identify NDEs, as described below. Further analysis for NDEs The 33 motifs (13 motifs from ribosomal proteins plus 20 motifs from ferredoxin-related proteins (data not shown)) that are not N-, CN- or C-associated elements were tested to determine whether they are potential NDEs. They were tested to see whether genes whose promoters contained these motifs plus a C-element (Table 5) are over-represented in Model 3 CN-enhanced, as compared to all the genes whose promoters contain the C-element, as described above. If a p-value less than 0.05 is obtained, the C-element and potential NDE form a significant combination and are likely to regulate carbon and nitrogen interactions. As each motif is tested with each of the 11 C-elements, two steps were taken to control for the multiple tests. First, single strands of the promoter sequences of the A.
thaliana genes were randomized 200 times, the reverse complement of the randomized strand was determined, and the number of times the 33 remaining AlignAce motifs were found to be over-represented (p-value < 0.05) with the C-elements was determined and compared to the number of significant combinations (p-value < 0.05) between the 33 remaining motifs and the C-elements when the actual promoters were used. In no set of the randomized promoters were the potential NDEs found to form more significant combinations with the 11 C-elements than in the actual promoter sequences (p-value < 1/200 = 0.005). In the second control step, the number of significant combinations that each of the 33 remaining AlignAce motifs was involved in was determined and compared to the number of significant combinations found with the 200 sets of randomized promoters. For one motif, if one random set is significant with as many C-elements as the real promoters, the p-value would be 0.005 (1/200). Further analysis of CN1 and CN2 The promoter for At2g15620 was extracted from RSA tools [ 39 , 40 ]. The reverse complement of the strand from RSA tools was determined in order to identify occurrences of CN1 and CN2 in either strand of the promoter, as described above for determining over-representation of the AlignAce motifs in the promoters of the genes in Model 3 CN-enhanced. To determine whether CN1 and CN2 occur more times than expected in the promoter, the sequence from RSA tools [ 39 , 40 ] was randomized 5,000 times and the above procedure was repeated. The number of times CN1 and/or CN2 were found in the randomized versions as many or more times than in the actual promoter was determined and used to calculate a p-value (that is, if 50 random cases do as well as or better than the actual case, the p-value = 50/5,000 = 0.01). The sequence database was searched using BLAST [ 29 ] for a gene similar to At2g15620 in the O. sativa sequence. Only one hit was found. This gene is annotated as a ferredoxin-dependent nitrite reductase [ 28 ]. The 1,000 base pairs upstream of this gene were taken and 'BLAST align two sequences' was used to determine whether this sequence is similar to the promoter of At2g15620. BLAST did not find enough similarity to create an alignment. The sequence was then subjected to the same test described above for the promoter of At2g15620. Funcat analysis of the NDEs Funcat analysis of the genes whose promoters contain specific cis elements was performed similarly to the approach described above. Briefly, the number of genes assigned to the funcat being analyzed and Model 3 CN-enhanced was used as n; p was the number of genes assigned to Model 3 CN-enhanced and the funcat being analyzed divided by the number of genes assigned to Model 3 CN-enhanced and a funcat; k was the number of genes in the funcat being analyzed that were assigned to the Model 3 CN-enhanced category and contained the combination of C- and N-element being analyzed. Statistical significance of localization was calculated similarly, the only difference being that instead of genes assigned a funcat, genes whose protein products are predicted to be localized in the compartment being analyzed were used. Predicted protein localizations were extracted from the TAIR web page [ 25 ].
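The per-replicate value calculation and the InterAct Class ranking described above lend themselves to a brief illustration. The following Python sketch is a hypothetical simplification, not the published tool [ 13 , 20 ]: treatment numbering follows Figure 2 (1 = -C/-N, 2 = +C/-N, 3 = -C/+N, 4 = +C/+N), a fixed threshold stands in for the tool's replicate-based significance filtering, and tied ranks arise only from exact equality.

def interact_values(t1, t2, t3, t4):
    """Expression of one gene under the four treatments (one replicate)."""
    c = t2 - t1          # expression due to carbon
    n = t3 - t1          # expression due to nitrogen
    cn = t4 - t1         # expression due to the combined CN treatment
    c_plus_n = c + n     # 'virtual' additive C+N value
    return c, n, cn, c_plus_n

def interact_class(effects, threshold=0.0):
    """Rank the (C, N, CN, C+N) effects into a class string such as '1021'."""
    signs = [0 if abs(e) <= threshold else (1 if e > 0 else -1)
             for e in effects]
    ups = sorted({e for e, s in zip(effects, signs) if s > 0})
    downs = sorted({-e for e, s in zip(effects, signs) if s < 0})
    digits = []
    for e, s in zip(effects, signs):
        if s == 0:
            digits.append('0')
        elif s > 0:
            digits.append(str(ups.index(e) + 1))        # weakest induction = 1
        else:
            digits.append(str(-(downs.index(-e) + 1)))  # weakest repression = -1
    return ''.join(digits)

# Example: carbon induces, nitrogen alone has no effect, and CN induces more
# strongly than C+N, giving class 1021 (Model 3 CN-enhanced).
print(interact_class(interact_values(100, 220, 100, 350), threshold=50))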
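The funcat under- and over-representation tests, including the worked metabolism example in the Results, amount to a one-tailed binomial calculation. The sketch below uses SciPy's exact binomial distribution; because the study sometimes used a Poisson approximation of the binomial, the computed value agrees with the reported p-value = 6.0 × 10-4 only up to the choice of approximation.

from scipy.stats import binom

n = 496            # metabolism genes assigned an InterAct class
p = 1089 / 3447    # fraction of all classified genes in the No effect model
k = 120            # metabolism genes observed in the No effect model

print(n * p)               # expected count, ~156.7 genes
print(binom.cdf(k, n, p))  # one-tailed P(X <= 120); a small value indicates
                           # under-representation in the No effect model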
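Finally, the two-strand motif counting and the promoter-randomization test described above can be sketched as follows. Two simplifications are assumptions of this sketch, not of the study: motifs are matched as exact strings (AlignAce motifs are degenerate patterns), and shuffling preserves only base composition.

import random

_COMPLEMENT = str.maketrans('ACGTacgt', 'TGCAtgca')

def reverse_complement(seq):
    return seq.translate(_COMPLEMENT)[::-1]

def count_motif(promoter, motif):
    """Count (possibly overlapping) motif occurrences on both strands."""
    total = 0
    for strand in (promoter, reverse_complement(promoter)):
        pos = strand.find(motif)
        while pos != -1:
            total += 1
            pos = strand.find(motif, pos + 1)
    return total

def randomization_pvalue(promoter, motif, trials=5000):
    """Fraction of single-strand shuffles in which the motif occurs at least
    as many times as in the real promoter (e.g. 50/5,000 -> p = 0.01)."""
    observed = count_motif(promoter, motif)
    bases = list(promoter)
    hits = 0
    for _ in range(trials):
        random.shuffle(bases)
        if count_motif(''.join(bases), motif) >= observed:
            hits += 1
    return hits / trials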
Additional data files The following additional data are available with the online version of this paper: Additional data file 1 containing a table listing the Affymetrix probe ID, gene, and InterAct class for all the Affymetrix probes assigned an InterAct class; Additional data file 2 listing the data from the 12 Affymetrix microarray chips used in this study. Supplementary Material Additional data file 1 A table listing the Affymetrix probe ID, gene, and InterAct class for all the Affymetrix probes assigned an InterAct class Additional data file 2 The data from the 12 Affymetrix microarray chips used in this study | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC545782.xml |
548503 | Human resources: the Cinderella of health sector reform in Latin America | Human resources are the most important assets of any health system, and health workforce problems have for decades limited the efficiency and quality of Latin American health systems. World Bank-led reforms aimed at increasing equity, efficiency, quality of care and user satisfaction did not attempt to resolve the human resources problems that had been identified in multiple health sector assessments. However, the two most important reform policies - decentralization and privatization - have had a negative impact on the conditions of employment and prompted opposition from organized professionals and unions. In several countries of the region, the workforce became the most important obstacle to successful reform. This article is based on fieldwork and a review of the literature. It discusses the reasons that led health workers to oppose reform; the institutional and legal constraints to implementing reform as originally designed; the mismatch between the types of personnel needed for reform and the availability of professionals; the deficiencies of the reform implementation process; and the regulatory weaknesses of the region. The discussion presents workforce strategies that the reforms could have included to achieve the intended goals, and the need to take into account the values and political realities of the countries. The authors suggest that autochthonous solutions are more likely to succeed than solutions imported from the outside. | Introduction Health reforms that aim at increasing efficiency, quality and users' satisfaction need to take into consideration human resource issues, because the health sector is labor-intensive and the performance of health systems depends on qualified and motivated workers [ 1 - 4 ]. At the same time, the support of the workforce is crucial to ensure successful implementation of reforms. In Latin America, the need to improve the performance of the workforce had been pointed out in many health sector assessments conducted in the 1970s and 1980s by the United States Agency for International Development (USAID), the World Bank (WB), other agencies and independent researchers. (See for Argentina [ 5 - 7 ], for Bolivia [ 8 - 10 ], for Brazil [ 11 ], for Chile [ 7 , 12 ], for Colombia [ 12 - 14 ], for Costa Rica [ 15 ], for the Dominican Republic [ 16 - 18 ], for Ecuador [ 19 , 20 ], for El Salvador [ 21 ], for Guatemala [ 22 , 23 ], for Mexico [ 12 , 24 - 26 ], for Nicaragua [ 27 ], for Panama [ 28 , 29 ], and for Uruguay [ 7 ].) From these reports and studies, and notwithstanding the differences among the countries in the region, we can summarize the problems present during the 1970s and 1980s as follows: • The skill mix of health personnel was often inadequate to meet the needs of the communities, and highly qualified staff often performed tasks that could be conducted by less-trained providers.
The health systems of the region were characterized by an excess number of medical specialists and insufficient numbers of other professionals such as primary care providers, nurses, pharmacists, public health specialists, epidemiologists, health economists, accountants, social workers, administrators, communication experts, planners, health educators, nutritionists, physical therapists and sanitary engineers;
• There was an over-concentration of qualified health personnel in hospitals and urban centers, coupled with shortages in poor neighborhoods and rural areas;
• A large majority of physicians held at least two jobs, one in the government and one in the private sector. In countries with fragmented health systems, physicians could even have three jobs: part-time work for the social security institute, a post in the ministry of health, and a private practice. Dual or triple employment generated conflicts of interest; physicians used the public sector to draw patients for their private practice, their productivity in the public post was low, and absenteeism was high;
• Human resources management systems were weak, largely due to dispersal of accountability: in many countries the terms and conditions of employment were under the control of the public service commission or the ministry of finance, and the education of human resources was under the control of the ministry of education or the private sector. Ministries of health did not have any input in determining the types and number of persons to be trained, and their involvement in hiring and managing the health workforce was limited. Health managers handled relations with the labor unions, had some limited supervisory roles, ensured organizational adherence to recruitment policies, and were responsible for some training;
• Salary increases were generally based on years of service. In the majority of countries, central labor unions negotiated working conditions directly with governments and signed collective agreements that left administrators with little room to compensate workers according to performance;
• Personnel decisions (hiring and promotion) were too often guided by favoritism, political dictates, and nepotism;
• Health professionals were insufficiently committed to the public system due to the conflicts of interest mentioned above, poor personnel management systems and the perception that wages were low;
• The medical profession strongly dominated the definition of health sector policies and the regulation of the conditions of practice of all health professions;
• Communication between providers and patients was poor, and providers and service users had very different social and cultural backgrounds. In countries with Amerindian-speaking populations, providers did not speak their languages;
• The regulation of training institutions and conditions of practice was weak;
• The training of health promoters and other auxiliary personnel such as dental assistants, midwives, laboratory technicians, equipment maintenance and repair technicians, and pharmacy clerks was poor or non-existent, and thus their performance was poor.
According to the literature reviewed, these conditions led to low productivity and efficiency; inadequate equipment; shortages of supplies and drugs; unmotivated and inadequately trained staff; questionable quality of care; and low users' satisfaction.
By the mid-1970s, the need to reform the human component of the health services was very urgent, and the urgency increased with the severe economic downturn that countries of the region suffered in the early 1980s. The size of the Latin American health labor force (about nine million [ 30 ]) implied that reformers attempting to resolve the human resources problems mentioned above needed to dedicate a large amount of time and resources to it. This paper reviews the impact of the health reforms carried out under the leadership of the World Bank. Data come from a review of the literature including the leading Latin American and non-Latin American journals, monographs, documents found in ministries and reform offices, technical reports, papers presented at conferences and fieldwork carried out by the authors between 1980 and the present in Bolivia, Colombia, Costa Rica, Dominican Republic, Ecuador, El Salvador, Honduras, Mexico and Peru. The World Bank's neoliberal health sector reforms In the early 1980s, with a few exceptions, countries in the region entered a severe economic downturn. The International Monetary Fund and the World Bank required – as a condition for new lending and for refinancing the debts – reduction of public spending. The impact of structural adjustment programs on health services was severe and compromised the ability of governments to maintain the physical infrastructure of the facilities, provide the necessary supplies and equipment, and maintain competitive salaries for the workforce. The World Bank took advantage of this situation to provide loans to the ministries of health and social security funds. Together with the loans, the Bank offered guidelines for the reorganization of the health services according to the ideological economic principles held by the institution. The World Bank had started its activities in the health sector in the early 1970s, and by the early 1990s it was the world's largest health sector lender. In 1999, the total accumulated value of health, nutrition, and population loans worldwide amounted to USD 16.8 billion (in 1996 dollars) [ 31 ]. By the late 1980s, the Bank placed health financing at the center of its health policy dialogue with borrowers. In 1987 the Bank proposed four changes [ 32 ]: to impose user fees at government facilities; introduce insurance or other risk coverage systems; use nongovernmental resources more effectively; and decentralize planning, budgeting and purchasing for government health services. A few years later, it looked at the roles of the government and the market in the health sector, and described the main components that have guided most World Bank-led health reform efforts [ 33 ]. These principles were reiterated in 1997 [ 34 ]. The underlying principles of the Bank-led reforms include the belief that the private sector is more efficient than the public sector, and that decentralized administrative units are better-equipped to respond to the needs of the population than centralized governments. The Bank also proposes to limit government financing to a basic package of services, to be determined for each country by cost-effectiveness studies and the country's ability to pay. Services included in the basic package are made available at no cost to the indigent population; governments and the rest of the population subsidize them. In the Bank's health reform model, the role of the state is limited to that of a regulator of the health care market.
The United States health system is closer to the World Bank model than the national health services or social security funds of other industrial nations. The World Bank-led reforms did not include strategies to correct the human resources problems presented earlier, even if some of them had been identified by Bank-sponsored studies; the World Bank reformers believed that market forces would resolve them. As we will see, the contrary occurred, and privatization and decentralization had some negative consequences for the workforce. The reliance on the market also hid the structural problems that needed to be taken into account at the time reformers designed human resource policies. The design and implementation processes, which were characterized by secrecy, generated dissatisfaction among health workers. We have organized our analysis of human resources during the reforms under five categories: resistance of the workforce to the implementation of a market-oriented health care model; faulty implementation processes; inadequately trained personnel in managerial positions and in service delivery; institutional and legal dimensions including insufficient and faulty information, lack of financial resources, and civil service statutes; and weak regulatory legislation to ensure the quality of professionals and the performance of the sector. Resistance to the implementation of a market-oriented health care model The neoliberal health reforms intended to change the values that had inspired the Latin American systems and the relations between the government and the health workforce. Thus, health would no longer be considered a right; only the insured would be entitled to receive a broad array of services. Health workers would lose the protection they had enjoyed as public servants and become part of a flexible workforce (Table 1 ).
Table 1 Resistance of the workforce to a market-oriented health care model
a. Health is not a right, and the reformed system is no longer based on the principles of solidarity and access to health care.
b. Health workers become part of a flexible workforce and are encouraged to compete, instead of collaborate, among themselves.
c. Worker unions lose power to influence the system and negotiate work conditions on behalf of affiliates.
d. The reform is an abdication of the responsibilities of the government to protect the population.
e. Physicians fear losing their professional autonomy.
Humans tend to resist change, but opposition to health reform was compounded by conflicts with the value system that inspired it. Most Latin American constitutions recognize health as a right and most governments had adopted Alma Ata's primary health care principles as their strategy to promote Health for All. Health policies were based on the principle of solidarity and fostered cooperation among health workers and also between them and other workers in related sectors such as education and agriculture [ 35 ]. It was assumed that health workers would do the best for their clients out of a sense of personal duty [ 4 ]. Human resource changes were needed to implement Alma Ata, but the budget reductions required by the World Bank caused a significant deterioration of working conditions. As their purchasing power worsened, health workers intensified undesirable behaviors to increase their income.
These included levying illegal fees, diverting patients from the public sector to their private clinics [ 15 ], using public supplies and equipment for personal profit [ 36 ], and reducing their productivity in the public sector [ 15 ]. As indicated, Latin American physicians have supplemented their income from the public system by means of their private practice. Physicians value the stability and fringe benefits of a public post, but as the public delivery system deteriorated in many countries, private clients became their main source of revenue. By the late 1980s in most Latin American countries (with the exception of Argentina, Costa Rica, Cuba, Dominican Republic, Guatemala and Panama), more than half of the health expenditures, including the cost of medicines, occurred in the private sector [ 37 ], and in most countries of the region, personnel expenditures accounted for about 60% to 70% of the public health budget [ 38 ]. One of the objectives of the neoliberal reforms was to create a more flexible labor force by decreasing the number of tenured public employees and increasing the number of temporary personnel. This change threatened the job security of the civil service, a very important dimension in countries with little political stability where workers are frequently exposed to arbitrary political removals; the providers' income, which would now depend on their ability to compete for contracts and clients; and what workers expected from their employer (recognition, opportunities for self-actualization and promotion). The promoters of health reforms failed to acknowledge that in a politically unstable region, civil service tenure was necessary to maintain an efficient, productive, and loyal labor force. Predictably, these threats triggered the opposition of professional associations and unions, led to strikes, and lowered productivity during the reform process. Health workers of most countries of the region are unionized [ 39 ]. The unions protect the workers from the politicization of appointments and promotions, and their leaders regularly engage in collective bargaining with the managers of the public sector. The stability provided by civil service status facilitates the formation of strong union leadership: union leaders remain in office longer than policymakers. Labor unions anticipated that the decentralization and privatization of the sector would have a negative impact on their membership and their bargaining power. Labor unions in Bolivia, Ecuador, El Salvador, Mexico, Nicaragua, Peru and Venezuela expressed their opposition to the reforms on the grounds that they were an excuse for governments to relinquish their constitutional responsibility to ensure access to care and would lead to dramatic changes in work conditions [ 39 ]. In some countries, such as El Salvador and Mexico, unionized workers successfully stalled or delayed the implementation of the reform. Analysts of the health reform policies agree with the workers' concerns. Segall [ 4 ] asserts that a market system does not nurture the service ethic that should characterize the workforce and leads providers to adopt self-seeking behaviors instead of working in the interests of the patient or the community. Others worry that the uncertainties generated by the reform, the stress and demands added to the workforce and the misalignment between the values of the workers and those of the reformed system are very detrimental to the workers' motivation [ 1 , 40 ].
According to Rigoli and Dussault [ 41 ] the unions' fears were justified; union membership appears to have decreased in recent years. In addition, health professionals strongly opposed the idea of having their professional autonomy limited by professional administrators who would force them to adhere to diagnostic and treatment protocols based on economic principles rather than on technical criteria. The cool reception that Mexican professional associations offered to foreign insurance companies and health maintenance organizations is a good example of this concern [ 42 ]. Institutional and legal constraints The large majority of countries in Latin America do not have accurate information on the numbers and distribution of the health workforce. There is no centralized entity responsible for gathering information on the health personnel practicing in the country or in the different geographical regions, and even when only public sector workers are examined, the numbers collected at different administrative levels differ (people may move to a different location, and regional or local governments hire additional staff without reporting to the central authorities). The reforms made it even harder to obtain accurate information. If the system is decentralized and the majority of providers are in the private sector, the sources of information from which human resources data need to be collated multiply. The challenge may be even greater if each agency gathers the information differently, and without this basic information it is very difficult to engage in human resources development planning (Table 2 ).
Table 2 Institutional and legal dimensions
a. Lack of accurate information on the availability of human resources and their distribution.
b. Civil service status limits the capacity of managers to change the working conditions of personnel.
c. Decentralizing human resources is expensive: homologation of salaries and benefits; hiring of additional personnel.
d. Decentralized governments have limited ability to manage personnel to respond to the needs of the population.
Civil service laws and negotiated agreements with worker unions have limited the freedom of public sector managers to reorganize the workforce; managers had to "administer the rules" [ 43 ] issued elsewhere. For instance, managers could not change the status of public sector employees to flexible contract workers, dismiss employees, increase workloads or change the working schedule. They may also have had difficulties rewarding their employees because the salaries and the conditions of employment were set outside the health sector by the public employment commission, the ministry of labor, or the ministry of finance or their corresponding decentralized entities. Reformers did not foresee these limitations. Managers overcame these constraints by hiring additional personnel under flexible contracts. In Brazil there are now about 15 different types of contracts for public sector employees; a significant expansion of the use of temporary workers and contracts without fringe benefits has been reported in Argentina [ 44 ], Colombia [ 45 ], Ecuador [ 2 ], El Salvador [ 30 ], Panama and Peru [ 2 ]. Funds for the temporary positions have different origins, including extrabudgetary government allocations, revenues generated through user fees, or savings from positions left empty due to retirements or administrative leaves. Transferring human resources to other politico-administrative levels was more difficult than anticipated.
Kolehmainen-Aitken [ 46 ] and Homedes and Ugalde [ 47 ] have suggested that this is the most complex part of the decentralization process. The fear of transfers caused discontent and anxiety. On the other hand, managers did not want to absorb everybody who was transferred [ 46 ]. The decentralized personnel felt insecure about the new reporting mechanisms and the new managers' expectations, and vulnerable to political crossfire. Decentralizing human resources is expensive. Prior to decentralization – in countries such as Bolivia, Colombia and Mexico – local or regional governments frequently hired additional personnel under different pay scales and benefit packages than those used by the central government. After decentralization, each decentralized administrative unit had to set pay scales, fringe benefits, performance assessments and reward systems for its workers. Because it was not possible to lower the salaries and reduce the benefits of workers with higher salaries and benefits – generally federal/national workers – it was necessary to bring the salaries and benefits of all the workers to the level of those who had the most generous package. Higher salaries and benefits represent an increase in the fixed costs of the system for an indefinite period. Moreover, the creation and/or strengthening of the decentralized administrative structures are often not accompanied by a decrease in the number of federal employees, further raising the costs of personnel to the system [ 48 ]. In Mexico, from 1985 to 1987 the cost of transferring federal employees to 14 states was 140,000 million pesos (approximately 452 million US dollars) [ 49 ], and in Colombia the World Bank indicated that the cost of transferring state and departmental health personnel to Cali's Municipal Health Secretariat was prohibitive [ 50 ]. Decentralization has been promoted under the assumption that the proximity of decision makers to the communities facilitates the provision of services more in accordance with the needs of the community. But the decentralized structures' ability to respond to local needs has been constrained by the civil servants they inherited, who are often inadequate in terms of skill mix and geographical distribution; by the conditions of employment imposed by other government sectors and the unions; and by the local elite, who do not have the health needs of the communities in mind and who lobby to have relatives and friends hired by local health authorities. In several countries of the region, decentralization has broadened the urban-rural inequities in the distribution of personnel and in the quality of services [ 51 - 54 ]. Rich decentralized units are able to offer better working conditions and attract qualified personnel from poorer municipalities, a sort of brain drain. Similarly, the development of the private sector also tends to drain qualified resources from the public sector [ 43 ]. Inadequately trained personnel Countries considered the human resource implications of reform only when they faced resistance from the unions or realized they did not have the financial and human resources required to implement the reform [ 55 ]. The health reform plan in Costa Rica recognized the ambiguity of the policies and procedures related to human resources but did not modify them. As planned, incentives were established to increase workers' productivity and the use of short-term contracts. These two strategies ended up triggering greater grievances on the part of public sector employees [ 43 ].
The human resources units of the health ministries and the regional and local administrations were inadequately prepared for the reform (Table 3 ). Traditionally they had had a narrow scope of responsibilities, often limited to managing the relationship with the unionized workforce, ensuring compliance with national/provincial policies on recruitment and deployment of personnel, and organizing continuing education activities [ 56 ]. The personnel attached to these departments generally had limited or no human resources development training [ 30 , 57 ].
Table 3 Inadequately trained personnel
a. Human resources units are not adequately staffed, especially to manage change.
b. Lack of management experts, especially experts in insurance systems and contract managers.
c. Insufficient numbers of people trained in primary health care and public health related fields.
d. Training centers unable to produce personnel to operate the reformed health system.
The tasks required by the reform overstretched the capacity of these units. These tasks included developing new organizational structures, defining new job descriptions, reassigning responsibilities, designing new reporting systems, establishing performance evaluation methods and assisting all decentralized units to carry out similar duties in their jurisdictions. The decentralization of personnel often exposes rivalries and discontent among personnel who feel unfairly treated. All these issues need to be addressed, negotiated and resolved, and most human resources units were ill-prepared to lead the process [ 56 ]. For example, when the health system in Bolivia was decentralized, the salaries for several workers in Santa Cruz were delayed for months. Costa Rica introduced performance-based contracts between the Social Security Fund (Caja Costarricense de Seguro Social) and the health units, but according to a senior executive of the Ministry of Health, the targets were set at minimum levels and some units decreased their level of production [personal communication with the authors]. The mismatch between the abilities of the workforce and the needs of the system documented in the 1980s still prevails. The lack of coordination between training institutions and employers is at the root of the problem, along with weaknesses in the regulation of the health professions and the dominance of physicians' groups in the policy-making process. According to Bach [ 56 ], shortages of personnel trained in disciplines such as primary health care, health economics, public health, health communication, health education, nutrition, and environmental engineering continue to severely limit the possibilities for improving the quality and efficiency of the health care system. Only in very few instances has reform included the human resources development activities needed to address these issues. For example, Costa Rica trained multifunctional teams for their lowest-level facilities and promoted the training of general doctors instead of specialists [ 58 ]. The National Autonomous University of Mexico modified its curriculum to promote family medicine; it introduced public health concepts and interdisciplinary experiences around issues related to promoting the health and wellbeing of the elderly and protecting workers from occupational hazards [ 59 ]. In most countries, managerial positions were traditionally given to physicians with little or no management training [ 55 ].
The neoliberal reforms require managers and staff with experience in specific domains such as insurance, the capacity to write and enforce contracts, the ability to monitor performance, knowledge of performance-based reimbursement systems, and expertise in health services research to evaluate progress. Decentralization to the state and municipal levels generates the need for even more managers, and countries that have given autonomy to hospital executives to manage their human and financial resources face additional challenges. With the exception of Chile, countries in the region had limited expertise in private health insurance [ 60 ]; Costa Rica, the Dominican Republic, Mexico and Venezuela engaged expensive foreign technical assistance to develop performance-based contracts and management information systems. The promoters of the reforms recognized the need for good managers but, because they had heavily criticized the public sector and questioned its role, it was difficult to recruit qualified staff into managerial positions and, in turn, it was difficult for those recruited to motivate and retain qualified staff [ 56 , 57 ]. Most health reform projects included management training. The Pan American Health Organization [ 61 ] evaluated 15 such training programs and concluded that the training did not change the performance of the systems and that management capacity improved in only two projects. The reasons for the failure included difficulties in recruiting trainers; the inability of local universities to respond to the needs of the projects; inappropriate selection of training participants; conflict between project units and the ministries of health; political interference; and the absence of a human resource development plan. Theoretically the deficiencies in formal training can be corrected with supervision and continuing education activities, but this did not occur in Latin America. The authors of the report emphasize the need for countries to develop comprehensive human resources development plans to ensure the efficacy and sustainability of the training programs. After having spent millions of dollars in training and infrastructure, the region's capacity to manage contracts is still limited [ 62 ], and when contracts are in place, they are expensive to administer and the legal system is insufficiently developed to enforce them. In Colombia, most public hospitals were unable to compete with the private sector and are now bankrupt [ 63 ]. Poor management in decentralized entities has been considered one of the main reasons for the failure of decentralization [ 64 , 65 ]. Faulty reform implementation process Countries in the region used a top-down approach to define and implement reform. The implementation was often led by a handful of top health executives, newly created reform offices, the political elite and international agencies, which in turn contracted for the technical assistance of international consultants and universities closely aligned with the neoliberal ideology of the World Bank. In general, there was little interest in involving professional associations, unions or even local universities in this process [ 2 , 56 ] (Table 4 ).
Table 4 Faulty reform implementation process
a. Lack of involvement of professional associations and labor unions in the definition of the reform.
b. Secrecy surrounding definition of the reform raises suspicions among those responsible for implementing it and predisposes them to resist the changes.
c. Lack of transition plan.
In Costa Rica the labor unions were involved initially in the reform, but their influence was undermined by the World Bank and the Inter-American Development Bank, which adopted a more closed, centralized decision-making style [ 58 ]. According to a top executive of the Ministry of Health in El Salvador, the group preparing the reform operated in secret, and in his view, the secretive process was desirable because if health workers had participated they would have created obstacles to its implementation [ 66 ]. Indeed, it appears likely that the labor unions and professional associations in this country and many others would not have approved neoliberal reforms. Some authors have offered a different interpretation of the exclusion of the workforce and assert that it was the complexity of the human resources issues and the need to involve many players (the ministries of education, labor, finance and health, and professional associations and unions) in their solutions that led reform promoters to ignore the labor force and other stakeholders [ 30 ]. Regardless of the underlying motivation, the secrecy that surrounded the process of defining and implementing the reforms produced rumors, confusion and hostility toward the reform among civil servants and professional groups [ 30 , 39 , 67 , 68 ]. The objectives and processes by which reforms were introduced were never made clear, and often the reforms were perceived as responding more to the ideological concerns of international organizations, in particular the International Monetary Fund and the World Bank, than to the needs, resources and sociopolitical realities of the countries [ 57 ]. The president of the Medical Association of El Salvador said that his association had tried to obtain information about the reform for over a year and that all his knowledge was based on "rumors and guesswork that led nowhere" [ 66 ]. In addition, health reformers did not consider the strategies and resources needed during the transition and early stages of implementation, such as allocating new financial resources and establishing clear communication channels. Moreover, as discussed in the previous section, the reforms failed to adequately train managers who could lead the transition to a new management system [ 40 ]. Inadequate regulatory system to ensure high-quality training and health providers The need for regulation increases in health systems where the private sector plays a prominent role. In Latin America it was not until the early 1990s, largely as a consequence of the health reforms, that health policy makers became aware of the urgency of regulating the health system. A regulatory system includes adequate regulations, the institutions to enforce them, and a judicial system that ensures a timely response in the event of conflict. Enforcing regulations is important to guarantee that trained personnel provide safe and adequate services (Table 5 ).
Table 5 Inadequate regulatory framework to ensure the quality of professionals and the performance of the sector
a. Limited quality controls in training institutions.
b. Physician-dominated field that precludes other professional groups from being recognized as health care providers within the official health care system.
c. Limited accreditation of health care professionals.
Prior to the reforms, the regulatory systems were poorly developed and, when they existed, they were not tailored to the needs of consumers and enforcement was very limited [ 69 ].
For instance, the licensing of providers was a purely bureaucratic formality with no assessment of qualifications. The patronage and bossism observed in many countries were further expressions of regulatory deficiencies [ 70 ]. The number of medical schools has grown spectacularly in the last two decades and, in most countries, there were no mechanisms to ensure the quality of the training institutions or to test the abilities of the graduates. The number of medical schools in Chile grew by 68% between 1992 and 2000, in Peru also by 68%, in Argentina by 61% and in Brazil by 21%. The growth occurred mainly in the private sector [ 71 ]. Costa Rica had no private universities until the 1970s; now there are 70, several of which train health professionals [ 72 ]. By its very nature the regulation of the health professions relies heavily on the opinions of professionals, and especially on physicians, who in turn place a great value on their autonomy and have had little interest in responding to social and political demands [ 55 ]. Medical associations have traditionally opposed health reforms and have had a very strong influence on health policy-making [ 29 , 66 , 73 , 74 ]. The dominance of physicians has alienated other professions such as therapists, nurses, pharmacists, optometrists and psychologists [ 75 ]. For example, in Chile the government proposed to train and use more optometric technicians, but medically trained ophthalmologists opposed the proposal. After a long negotiation process involving ophthalmologists, optometric technicians, insurance companies, universities, parliamentary representatives and consumers, an agreement was reached. A four-year trial period allows optometric technicians to expand their scope of practice while medical schools take in more ophthalmologists for training [ 58 ]. The regulation of the health professions has a long way to go in Latin America, and it is probably impossible to establish sustainable regulatory mechanisms in the absence of political and judicial reforms; for the system to work, it needs to free itself from political interference [ 70 ]. Some argue that the separation between professional associations and licensing bodies [ 76 ] must be increased, and there is general agreement that the perspective of the general public [ 75 ] must be included. Consequences of the health reform on human resources Bach [ 56 ], Brito et al. [ 77 ], Dussault and Dubois [ 78 ] and Rigoli and Dussault [ 41 ] have identified human resources issues as the main obstacle to the success of the reforms. The neoliberal health reforms did not solve the workforce problems that had previously been identified, and created additional ones that have had a negative impact on the health systems (see Table 6 ).
Table 6 Consequences of the health reform on human resources
a. Working conditions have worsened, and talented workers migrate to the private sector or to other countries.
b. The motivation of workers has deteriorated.
c. Productivity and quality may have deteriorated.
d. The uneven ratio of specialists to primary physicians has not changed.
e. The uneven distribution of personnel (hospital and urban bias) persists.
f. Corruption has not decreased.
The implementation of the reforms has been uneven in the Latin American region. Technical, logistical, political and financial problems have surfaced everywhere.
While most countries decentralized, a few – such as Colombia and Chile – managed to significantly expand private insurance and, with the exception of Brazil, very few have engaged in large contracts with private sector providers. The most salient feature has been significant changes in hiring modalities. In Brazil there are 15 different types of contract arrangements [ 30 ], and in Peru the need to expand service coverage led to hiring 10 000 health professionals (physicians, nurses and technicians) between 1992 and 1996 under temporary contracts without social security; by the late 1990s about 12% of the health workers did not have social security [ 77 ]. Health workers in Ecuador have suffered wage reductions; in Mexico, with decentralization, the states have increasingly hired temporary workers [ 47 ]; and in Argentina there has been a rise in precarious contracts, even fraudulent ones, such as full-time jobs under the label "autonomous professional" [ 30 ]. Another important result of the reform is the surge in multiple job-holding, particularly in Argentina, Brazil, El Salvador, Panama, Peru, Uruguay and, to a lesser degree, Chile [ 30 ], which has caused stress and dissatisfaction among physicians. A survey of nurses conducted in Argentina, Brazil, Colombia and Mexico [ 44 ] revealed that the reforms brought more stressful conditions at work, job dissatisfaction, insecurity from flexible contracts, malpractice concerns, inter-institutional migration, and new bureaucratic tasks for which nurses were not trained. The nurses specifically mentioned that they needed to do more work in less time with fewer staff, and complained about excessive paperwork, including billing, and about having less time for direct patient care than before the reform. One of the nurses who participated in the study said "Patients may feel that we really don't care that much about them, because we just don't have enough time to spend with them and really know what is going on [ 44 ]." In a different survey, nurses who had gone through reform restructuring held more negative perceptions of patient care than those who had not, and they also expressed a stronger desire to unionize [ 41 ]. Furthermore, according to the Tavistock Group [ 79 ] "cooperation throughout a health care system can produce better outcomes and much greater value for individuals and for society. Such cooperation requires agreement across disciplinary, professional and organizational lines about the fundamental ethical principles that should guide all decisions in a truly integrated system of health care delivery." If this statement is correct, by fragmenting the system through privatization and decentralization, and by introducing competition among health professionals, the neoliberal health reforms have compromised the quality of care. Similarly, one of the basic principles of neoliberal reforms – that the efficiency of the system will increase by using flexible contracts and rewarding productivity – is not supported by the data. The World Bank conducted an evaluation of civil service reforms in 15 countries and concluded: "None of the cases reviewed so far have revealed any empirical evidence that the Civil Service Reform and related Technical Assistance Loans have succeeded in fostering the needed change in work attitudes, ethics and organizational culture that could lead to greater efficiency/productivity in the civil service" [ 1 ]. Research has also uncovered problems with using performance-based payment schemes.
For example, in health it is often difficult to establish who is responsible for the outcome. Costa Rica implemented a pilot project in Barva de Heredia in which physicians received an incentive based on productivity, but this was not extended to the nurses and the other clinic staff [ 80 ]. The project increased the costs to the system without increasing the productivity of physicians or the quality of care, and as a result the government halted its plans to extend the model to other health facilities. Health providers can also manipulate the information to maximize their benefits rather than the well-being of the patients, and substandard working conditions rather than workers' actions may be responsible for a poor outcome [ 81 ]. Mexican providers opposed a malpractice evaluation system because they did not want to be held liable for errors due to equipment deficiencies and lack of supplies [ 42 ]. Health workers in Costa Rica feared that the focus on productivity would compromise the commitment to patients [ 58 ] and discourage the provision of services that require extra time, such as health education [ 2 , 55 ] and mental health counseling. Establishing a valid and reliable merit-pay system is extremely complicated; placing too much emphasis on material rewards may displace more intrinsic motivators such as the pleasure of doing good or caring for the patient. Bennett and Franco [ 1 ] even suggest that loyalty to the organization may decrease as the worker becomes aware of more lucrative opportunities with other employers. This could have serious consequences. Attracted by NGOs and the higher salaries of private hospitals, the most talented public servants could leave the public sector. Others have raised questions about the sustainability of these strategies. In Brazil, productivity-based payment systems resulted in increased productivity, but the increase was not sustained over time and created competition among workers who were expected to collaborate [ 82 ]. For an interesting discussion on incentive systems and motivation in a different context, see Le Grand [ 83 ]. There is a belief among neoliberal economists that private sector workers are more productive and less corrupt than public employees. Recent hospital studies confirm that, because of fear of termination, absenteeism is less frequent among non-tenured physicians hired through short-term contracts than among civil servants, but short-term contracts have not increased commitment to the institution [ 84 ]. Corruption continues to be pervasive in both private and public hospitals, and productivity differences between the private and public hospitals have not been documented [ 85 ]. Costa Rica has attempted to reduce the waiting lists by contracting for the provision of services with private groups, under the condition that the recipient of the contract is someone not working for the clinic that makes the referral, a condition that is often violated [ 72 ]. Decentralization can also be seen as a transfer of financial responsibilities from central government to local authorities, which has the potential to affect wages and job stability [ 39 ] and increase inequity. Poor local authorities cannot compete with the conditions of employment offered by wealthier municipalities and have difficulties in attracting personnel.
In a decentralized system it is more difficult to structure career ladders, especially for workers who choose to locate in rural areas [ 81 ], and decentralization can also exacerbate forms of patronage and political domination. The experience in many countries has shown that it is more difficult to resist the politicization of decision-making when health managers interact with local political leaders without central controls [ 46 , 57 , 70 ]. In decentralized health systems, particular attention needs to be paid to establishing good coordination among those responsible for the vertical program at the central level, the decentralized administrative units, and the clinical staff. The absence of good coordination may result in health workers' reporting to two supervisors: the person responsible for the vertical program and the supervisor of the health facility or region, as is the case in Mexico [ 47 ]. In sum, in the majority of Latin American countries, the neoliberal reforms have not made the health delivery model more responsive to the needs of the community; have not increased the productivity of health workers; have had a negative impact on working conditions and staff motivation; appear to have further compromised the quality of care; and have had a limited impact on the capacity to regulate the health professions and training institutions. Discussion More recent studies suggest that many of the old workforce problems remain unresolved [ 56 , 70 , 86 , 87 ]. Even the World Bank, which promoted the reforms, has finally recognized [ 88 ] that the neoliberal strategies are not having the desired impact. The Bank questions the performance of the private sector and highlights the need to find the institutional arrangements and policies that best respond to local conditions and resources. Health reform provided a perfect opportunity to promote and encourage workforce improvement. Prerequisites for the progress of such processes are political will, effective relationships between the educational and service-providing institutions, and the open collaboration of professional groups. However, the reforms had the opposite effect. The neoliberal orientation challenged the use of conventional regulation strategies because, by encouraging professionals to seek their own interest instead of the interests of society as a whole, it called into question whether society and regulatory bodies could continue to trust the criteria expressed by health professionals. Human resources account for the lion's share of health budgets, and poor performance had been identified as the main constraint to efficiency, quality of services and users' satisfaction. The values that guided the neoliberal reform and the privatization and decentralization initiatives worsened the problems affecting the Latin American workforce and added new challenges. The reform implementation process was also responsible for the failures. The promoters of the neoliberal health reforms underestimated the importance of involving professional organizations and unions in the planning and implementation of the reform efforts, raising suspicion and resistance among organized health workers. In addition, the reforms were designed in secret and implemented using a top-down approach. The only significant advance in human resource development in the last 10 years is the increased interest in strengthening the capacity to regulate training institutions and practitioners.
Solving the problems that affect human resources is no easy task, and the solutions are probably different for each country; ignoring the problems and hoping that the market will resolve them is a recipe for failure. Managing change is very complex; embarking on reform without having secured the collaboration of the workforce and ensured the availability of sufficient qualified staff is irresponsible. The exclusion of the organized labor force from reform discussions is indefensible; the collaboration and motivation of the health workers are essential to the reform, and the leaders of the organized groups can assist in informing and aligning the workforce [ 40 , 89 ]. Policy changes requiring a different skill mix of health personnel require careful planning because there is a significant time lag between deciding that there is a need to train additional professionals and having them available. For example, because each graduating cohort is small relative to the existing stock of practitioners, a 10% rise in the number of students in a medical school produces only a 2% increase in the supply of doctors 10 years later [ 78 ]. Most countries did include a training component, but as mentioned earlier it was insufficient, and part of the problem was that the reform implementation was rushed, without the benefit of field-testing the underlying theories, gathering evidence on the appropriateness of the strategies, or learning from the reform experience. One thing that reforms could have done was to support interventions that, independently of the reform, some countries had designed to overcome workforce weaknesses. Some interventions were national programs, others were pilot projects, and there were also experimental projects. The following are examples of these autochthonous interventions. To solve the urban-rural gap and improve equity, most ministries of health in the region created the obligatory rural health service known as pasantía or servicio social, which requires physicians to spend one year in a rural health center prior to graduation or immediately after receiving their degree [ 35 ]. A few social security institutes have also found ways to reduce health inequities; such is the case of the Mexican and Ecuadorian Institutes of Social Security (IMSS and IESS). IMSS organized COPLAMAR, an extensive primary and secondary care program for poor rural populations, which brought general practitioners and specialists to rural areas. The Ecuadorian program known as Seguro Social Campesino offered primary care services for rural dwellers and, when needed, hospitalization at the Ecuadorian Social Security Institute; this program aimed at reducing the rural-urban gap in a country in which 70% of the population then lived in rural areas [ 90 ]. Mexico created a training program for traditional midwives who worked in dispersed rural populations; the objective of the program was to enhance the quality of their services and reduce maternal and infant mortality [ 91 ]. The Ministry of Health of the Dominican Republic, in an effort to increase health equity, trained and deployed more than 5 000 health promoters in rural centers. The promoters periodically visited every rural household to monitor infant growth, promote nutrition and sanitation, and assist in immunization campaigns [ 35 ]. Costa Rica attracted the world's attention with the Hospital Without Walls, a program that required the specialists of a regional hospital to schedule – when needed – weekly visits to dispersed rural populations.
The program also intended to convey the message to specialists that they were not different from other health workers and had an obligation to serve poor rural dwellers even when doing so would involve personal inconvenience [ 80 ]. To enhance the professional status of primary care practitioners, reduce referrals, and improve quality of care, IMSS created many positions in the specialty of family medicine, forcing the medical profession to recognize the status of the new specialty. Colombia's Ministry of Health sent nurses to a graduate health education program, taking advantage of a US fellowship program, in order to diversify the human resource composition of the Ministry. Some of these projects were successful; this was the case of family physicians and COPLAMAR in Mexico. However, the first 14 Mexican states that decentralized in the 1980s dismantled the program, transferring it to the states' departments of health to create the state health system, and the quality of rural health deteriorated rapidly [ 92 ]. Others function poorly, as is the case of the compulsory year of social service in all the countries where it was established. Similarly, the health promoters program in the Dominican Republic suffered from insufficient training, lack of continuing education, absence of efficient supervision, and poor remuneration, shortcomings that can be extended to all health promoter programs of the region. The Seguro Social Campesino has suffered from inadequate financing, and plans to extend it to the entire rural population have been placed on permanent hold. Other programs were discontinued because of indifference and lack of support from policy makers. Thus, the Hospital Without Walls ceased after all Ministry of Health hospitals were transferred to the Social Security Institute. Due to budgetary problems, the Colombian Ministry of Health did not employ the health educators on their return from graduate school. The health reforms would have provided a perfect opportunity to support and strengthen many of these and other autochthonous interventions. A proper course of action would have been to evaluate the projects that had been developed locally, identify their strengths and weaknesses, and establish their viability and sustainability. Through trial and error and with appropriate resources and incentives, many of them are likely to be more effective and less costly than foreign programs invented by those who hardly know the realities of developing countries and are inspired by ideological principles and questionable economic theories. There is much more that needs to be done to improve the training and management of human resources for health, and very often the solutions depend on the collaboration of a wide range of stakeholders: those who produce health workers, those who employ them, those who pay for their services, those who negotiate working conditions and those who define the standards of professional practice. It is no easy task, and it can be successfully accomplished only if there is strong political will, if there is openness and trust among all stakeholders, and if sufficient resources and time are allocated to this effort. Most countries of the region have the capacity to find appropriate solutions to the problems they are facing.
Dussault [ 55 ] argued that change is possible only on the basis of shared values, and as we have seen, the values that inspired the neoliberal reform did not coincide with those expressed in the Latin American constitutions and in the primary health care principles that had guided the development of the health sector during the 1970s and 1980s. As Segall mentioned [ 4 ], it is important to recover the spirit of cooperation among health providers, and there is a need to take explicit steps to raise their motivation and patient-centered behavior. Without a motivated workforce, all other efforts to change the system may even be counterproductive. Policy makers and administrators will have to explicitly identify strategies that foster collaboration, inner motivation and work ethics, and this may require abandoning the market orientation of the neoliberal reforms and embracing the values that inspired the primary health care movement. Competing interests The author(s) declare that they have no competing interests. Authors' contributions Both authors have contributed equally to the design, data collection, data analysis, drafting and completion of this article. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC548503.xml
509239 | A multilocus likelihood approach to joint modeling of linkage, parental diplotype and gene order in a full-sib family | Background Unlike a pedigree initiated with two inbred lines, a full-sib family derived from two outbred parents frequently has many different segregation types of markers whose linkage phases are not known prior to linkage analysis. Results We formulate a general model of simultaneously estimating linkage, parental diplotype and gene order through multi-point analysis in a full-sib family. Our model is based on a multinomial mixture model taking into account different diplotypes and gene orders, weighted by their corresponding occurring probabilities. The EM algorithm is implemented to provide the maximum likelihood estimates of the linkage, parental diplotype and gene order over any type of markers. Conclusions Through simulation studies, this model is found to be more computationally efficient than existing models for linkage mapping. We discuss the extension of the model and its implications for genome mapping in outcrossing species. | Background The construction of genetic linkage maps based on molecular markers has become a routine tool for comparative studies of genome structure and organization and the identification of loci affecting complex traits in different organisms [ 1 ]. Statistical methods for linkage analysis and map construction have been well developed in inbred line crosses [ 2 ] and implemented in the computer packages MAPMAKER [ 3 ], CRI-MAP [ 4 ], JOINMAP [ 5 ] and MULTIMAP [ 6 ]. Increasing efforts have been made to develop robust tools for analyzing marker data in outcrossing organisms [ 7 - 12 ], in which inbred lines are not available due to the heterozygous nature of these organisms and/or long generation intervals. Genetic analyses and statistical methods in outcrossing species are far more complicated than in species that can be selfed to produce inbred lines. There are two reasons for this. First, the number of marker alleles and the segregation pattern of marker genotypes may vary from locus to locus in outcrossing species, whereas an inbred line-initiated segregating population, such as an F 2 or backcross, always has two alleles and a consistent segregation ratio across different markers. Second, linkage phases among different markers are not known a priori for outbred parents and, therefore, an algorithm should be developed to characterize a most likely linkage phase for linkage analysis. To overcome these problems of linkage analysis in outcrossing species, Grattapaglia and Sederoff [ 13 ] proposed a two-way pseudo-testcross mapping strategy in which one parent is heterozygous whereas the other is null for all markers. Using this strategy, two parent-specific linkage maps will be constructed. The limitation of the pseudo-testcross strategy is that it can only make use of a portion of molecular markers. Ritter et al. [ 7 ] and Ritter and Salamini [ 9 ] proposed statistical methods for estimating the recombination fractions between different segregation types of markers. Using both analytical and simulation approaches, Maliepaard et al. [ 10 ] discussed the power and precision of the estimation of the pairwise recombination fractions between markers. Wu et al. [ 11 ] formulated a multilocus likelihood approach to simultaneously estimate the linkage and linkage phases of the crossed parents over multiple markers.
Ling [ 14 ] proposed a three-step analytical procedure for linkage analysis in outcrossing populations, which includes (1) determining the parental haplotypes for all of the markers in a linkage group, (2) estimating the recombination fractions, and (3) choosing a most likely marker order based on optimization analysis. This procedure was used to analyze segregating data in an outcrossing forest tree [ 15 ]. Currently, none of these models for linkage analysis in outcrossing species can provide a one-step analysis for the linkage, parental linkage phase and marker order from segregating marker data. In this article, we construct a unifying likelihood analysis to simultaneously estimate linkage, linkage phases and gene order for a group of markers that display all possible segregation patterns in a full-sib family derived from two outbred parents (see Table 1 of Wu et al. [ 11 ]). Our idea here is to integrate all possible linkage phases between a pair of markers in the two parents, each specified by a phase probability, into the framework of a mixture statistical model. In characterizing a most likely linkage phase (or parental diplotype) based on the phase probabilities, the recombination fractions are also estimated using a likelihood approach. This integrative idea is extended to consider gene orders in a multilocus analysis, in which the probabilities of all possible gene orders are estimated and a most likely order is chosen, along with the estimation of the linkage and parental diplotype. We perform extensive simulation studies to investigate the robustness, power and precision of our statistical mapping method incorporating linkage, parental diplotype and gene orders. An example from the published literature is used to validate the application of our method to linkage analysis in outcrossing species.
Table 1 Estimation from two-point analysis of the recombination fraction ( $\hat{r}$ ± SD) and the parental diplotype probabilities of parent P ( $\hat{p}$ ) and Q ( $\hat{q}$ ) for five markers in a full-sib family of n = 100. For each marker, the estimates are listed as $\hat{r}$ ± SD, $\hat{p}$ , $\hat{q}$ under the true r = 0.05, and then under the true r = 0.20.
Marker 1 (P × Q: a b × c d) a : first marker, no pairwise estimates.
Marker 2 (a b × a b): 0.530 ± 0.0183, 0.9960, 0.9972; and 0.2097 ± 0.0328, 0.9882, 0.9878.
Marker 3 (a o × o a): 0.0464 ± 0.0303, 1 (0 b ), 0 (1 b ); and 0.2103 ± 0.0848, 1 (0 b ), 0 (1 b ).
Marker 4 (a b × b b): 0.0463 ± 0.0371, 1, 1/0 c ; and 0.1952 ± 0.0777, 1, 1/0 c .
Marker 5 (a b × c d): 0.0503 ± 0.0231, 1, 1/0 c ; and 0.2002 ± 0.0414, 1, 1/0 c .
a Shown is the parental diplotype of each parent for the five markers hypothesized; in the original table layout, vertical lines denote the two homologous chromosomes. b The values in parentheses present a second possible solution. For any two symmetrical markers (2 and 3), $\hat{p}$ = 1, $\hat{q}$ = 0 and $\hat{p}$ = 0, $\hat{q}$ = 1 give an identical likelihood ratio test statistic (Wu et al. 2002a). Thus, when the two parents have different diplotypes for symmetrical markers, their parental diplotypes cannot be correctly determined from two-point analysis. c The parental diplotype of the second parent cannot be estimated in these two cases because marker 4 is homozygous in this parent. The MLE of r is given between the two markers under comparison, whereas the MLEs of p and q are given at the second marker.
Two-locus analysis A general framework Suppose there is a full-sib family of size n derived from two outcrossed parents P and Q. The two homologous sets of chromosomes are coded as 1 and 2 for parent P and as 3 and 4 for parent Q. Consider two marker loci M 1 and M 2 , whose genotypes are denoted as 12/12 and 34/34 for parents P and Q, respectively, where we use / to separate the two markers.
When the two parents are crossed, we have four different progeny genotypes at each marker, i.e., 13, 14, 23 and 24, in the full-sib family. Let r be the recombination fraction between the two markers. In general, the genotypes of the two markers for the two parents can be observed in a molecular experiment, but the allelic arrangement of the two markers in the two homologous chromosomes of each parent (i.e., the linkage phase) is not known. In the current genetic literature, a linear arrangement of nonalleles from different markers on the same chromosomal region is called the haplotype . The observable two-marker genotype of parent P is 12/12, but it may be derived from one of two possible combinations of maternally- and paternally-derived haplotypes, i.e., [11] [22] or [12] [21], where we use [] to define a haplotype. The combination of two haplotypes is called the diplotype . Diplotype [11] [22] (denoted by 1) is generated due to the combination of two-marker haplotypes [11] and [22], whereas diplotype [12] [21] (denoted by $\bar{1}$) is generated due to the combination of two-marker haplotypes [12] and [21]. If the probability of forming diplotype [11] [22] is p , then the probability of forming diplotype [12] [21] is 1 - p . The genotype of parent Q and its possible diplotypes [33] [44] and [34] [43] can be defined analogously; the formation probabilities of the two diplotypes are q and 1 - q , respectively. The cross of the two parents should be one and only one of four possible parental diplotype combinations, i.e., [11] [22] × [33] [44], [11] [22] × [34] [43], [12] [21] × [33] [44] and [12] [21] × [34] [43], expressed as $11$, $1\bar{1}$, $\bar{1}1$ and $\bar{1}\bar{1}$, with a probability of pq , p (1 - q ), (1 - p ) q and (1 - p )(1 - q ), respectively. The estimation of the recombination fraction in the full-sib family should be based on a correct diplotype combination [ 10 ]. The four combinations will each generate 16 two-marker progeny genotypes, whose frequencies can be expressed as a 4 × 4 matrix. Writing $a = (1-r)^2/4$, $b = r(1-r)/4$ and $c = r^2/4$, the genotype frequency matrix for [11] [22] × [33] [44] is $$\mathbf{H}_{11} = \begin{pmatrix} a & b & b & c \\ b & a & c & b \\ b & c & a & b \\ c & b & b & a \end{pmatrix},$$ and the matrices $\mathbf{H}_{1\bar{1}}$, $\mathbf{H}_{\bar{1}1}$ and $\mathbf{H}_{\bar{1}\bar{1}}$ for [11] [22] × [34] [43], [12] [21] × [33] [44] and [12] [21] × [34] [43] are obtained by exchanging r and 1 - r in the gamete frequencies of parent Q, of parent P, or of both parents, respectively. Note that these matrices are expressed in terms of the combinations of the progeny genotypes for the two markers M 1 and M 2 , respectively. Let $\mathbf{n} = (n_{j_1 j_2})_{4 \times 4}$ denote the matrix of the observations of progeny, where $j_1, j_2$ = 1 for 13, 2 for 14, 3 for 23, or 4 for 24 for the progeny genotypes at these two markers. Under each parental diplotype combination, $n_{j_1 j_2}$ follows a multinomial distribution. The likelihoods for the four diplotype combinations are expressed as $$L_{11}(r) \propto a^{N_1} b^{N_3 + N_4} c^{N_2}, \quad L_{1\bar{1}}(r) \propto a^{N_3} b^{N_1 + N_2} c^{N_4}, \quad L_{\bar{1}1}(r) \propto a^{N_4} b^{N_1 + N_2} c^{N_3}, \quad L_{\bar{1}\bar{1}}(r) \propto a^{N_2} b^{N_3 + N_4} c^{N_1},$$ where $N_1 = n_{11} + n_{22} + n_{33} + n_{44}$, $N_2 = n_{14} + n_{23} + n_{32} + n_{41}$, $N_3 = n_{12} + n_{21} + n_{34} + n_{43}$, and $N_4 = n_{13} + n_{31} + n_{24} + n_{42}$. It can be seen that the maximum likelihood estimate (MLE) of r ( $\hat{r}$ ) under the first diplotype combination is equal to one minus $\hat{r}$ under the fourth combination, and the same relation holds between the second and third diplotype combinations. Although there are identical plug-in likelihood values between the first and fourth combinations as well as between the second and third combinations, one can still choose an appropriate $\hat{r}$ from these two pairs because one of them leads to $\hat{r}$ greater than 0.5. Traditional approaches for estimating the linkage and parental diplotypes are to estimate the recombination fractions and likelihood values under each of the four combinations and choose the legitimate estimate of r with the higher likelihood.
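To make this traditional two-point procedure concrete, the following is a minimal sketch (our illustration, not the authors' software; it assumes NumPy and indexes the progeny genotype classes 13, 14, 23 and 24 as 0-3; the function name is hypothetical). It computes the closed-form MLE of r and the corresponding log-likelihood under each of the four parental diplotype combinations.

import numpy as np

def two_point_mles(n):
    """Closed-form MLE of r and log-likelihood under each diplotype combination.

    n: 4x4 array of progeny counts; rows and columns index the genotype
    classes 13, 14, 23, 24 at the first and second marker, respectively.
    """
    n = np.asarray(n, dtype=float)
    total = n.sum()
    # Recombination classes (see text): N1, both gametes parental;
    # N2, both recombinant; N3, only the Q gamete recombinant;
    # N4, only the P gamete recombinant.
    N1 = np.trace(n)
    N2 = n[0, 3] + n[1, 2] + n[2, 1] + n[3, 0]
    N3 = n[0, 1] + n[1, 0] + n[2, 3] + n[3, 2]
    N4 = n[0, 2] + n[2, 0] + n[1, 3] + n[3, 1]

    def fit(a, b, c):
        # Likelihood proportional to [(1-r)^2/4]^a [r(1-r)/4]^b [r^2/4]^c.
        r = (b + 2 * c) / (2 * total)        # closed-form MLE
        r = min(max(r, 1e-9), 1 - 1e-9)      # guard the logarithms
        ll = (a * np.log((1 - r) ** 2 / 4)
              + b * np.log(r * (1 - r) / 4)
              + c * np.log(r ** 2 / 4))
        return r, ll

    return {"11":        fit(N1, N3 + N4, N2),
            "1,1bar":    fit(N3, N1 + N2, N4),
            "1bar,1":    fit(N4, N1 + N2, N3),
            "1bar,1bar": fit(N2, N3 + N4, N1)}

Consistent with the text, the first and fourth fits (and likewise the second and third) return identical maximized likelihoods with estimates summing to one, so the legitimate solution is the member of the better pair whose estimate does not exceed 0.5.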
In this study, we incorporate the four parental diplotype combinations into the observed data likelihood, expressed as

$$L(\Theta \mid \mathbf{n}) = \prod_{j_1=1}^{4}\prod_{j_2=1}^{4}\left[h_{j_1 j_2}\right]^{n_{j_1 j_2}},$$

where Θ = (r, p, q) is an unknown parameter vector, which can be estimated by differentiating the likelihood with respect to each unknown parameter, setting the derivatives equal to zero and solving the likelihood equations. This estimation procedure can be implemented with the EM algorithm [ 2 , 11 , 16 ]. Let H = (h_{j1 j2})_{4×4} be a mixture matrix of the genotype frequencies under the four parental diplotype combinations weighted by the occurring probabilities of the diplotype combinations, expressed as

$$H = pq\,H_{11} + p(1-q)\,H_{10} + (1-p)q\,H_{01} + (1-p)(1-q)\,H_{00},$$

where H_{11}, H_{10}, H_{01} and H_{00} are the 4 × 4 genotype-frequency matrices under the diplotype combinations 11, 10, 01 and 00, respectively. Similar to the expression of the genotype frequencies as a mixture over the four diplotype combinations, the expected number of recombination events contained within each two-marker progeny genotype is a mixture over the four diplotype combinations, i.e.,

$$D = \big[pq\,D_{11}\circ H_{11} + p(1-q)\,D_{10}\circ H_{10} + (1-p)q\,D_{01}\circ H_{01} + (1-p)(1-q)\,D_{00}\circ H_{00}\big] \oslash H,$$

where ∘ and ⊘ denote element-wise multiplication and division, and the matrices D_{11}, D_{10}, D_{01} and D_{00} contain the numbers of recombination events (0, 1 or 2) of each progeny genotype under the corresponding diplotype combination. Define, analogously, the posterior-probability matrices

$$P = \big[pq\,H_{11} + p(1-q)\,H_{10}\big] \oslash H, \qquad Q = \big[pq\,H_{11} + (1-p)q\,H_{01}\big] \oslash H,$$

whose elements give the conditional probabilities that parent P (respectively Q) carries diplotype 1, given the two-marker progeny genotype. The general procedure underlying the {τ + 1}th EM step is given as follows. E Step: At step τ, using the matrices based on the current estimates r^(τ), p^(τ) and q^(τ), calculate the expected number of recombination events between the two markers for each progeny genotype, together with the posterior diplotype probabilities, where d_{j1 j2}, h_{j1 j2}, p_{j1 j2} and q_{j1 j2} are the (j1 j2)th elements of the matrices D, H, P and Q, respectively. M Step: Calculate the updates

$$r^{(\tau+1)} = \frac{1}{2n}\sum_{j_1,j_2} n_{j_1 j_2}\, d_{j_1 j_2}, \qquad p^{(\tau+1)} = \frac{1}{n}\sum_{j_1,j_2} n_{j_1 j_2}\, p_{j_1 j_2}, \qquad q^{(\tau+1)} = \frac{1}{n}\sum_{j_1,j_2} n_{j_1 j_2}\, q_{j_1 j_2}.$$

The E and M steps are repeated until r converges to a value with satisfactory precision. The converged values are regarded as the MLEs of Θ.

Model for partially informative markers

Unlike an inbred line cross, a full-sib family may have many different marker segregation types. We symbolize the observed marker alleles in a full-sib family by A1, A2, A3 and A4, which are codominant to each other but dominant to the null allele, symbolized by O. Wu et al. [ 11 ] listed a total of 28 segregation types, which are classified into 7 groups based on the amount of information for linkage analysis: A. Loci that are heterozygous in both parents and segregate in a 1:1:1:1 ratio, involving either four alleles A1A2 × A3A4, three non-null alleles A1A2 × A1A3, three non-null alleles and a null allele A1A2 × A3O, or two null alleles and two non-null alleles A1O × A2O; B. Loci that are heterozygous in both parents and segregate in a 1:2:1 ratio, which include three groups: B1. One parent has two different dominant alleles and the other has one dominant allele and one null allele, e.g., A1A2 × A1O; B2. The reciprocal of B1; B3. Both parents have the same genotype of two codominant alleles, i.e., A1A2 × A1A2; C. Loci that are heterozygous in both parents and segregate in a 3:1 ratio, i.e., A1O × A1O; D. Loci that are in the testcross configuration between the parents and segregate in a 1:1 ratio, which include two groups: D1. Heterozygous in one parent and homozygous in the other, including three alleles A1A2 × A3A3, two alleles A1A2 × A1A1, A1A2 × OO and A2O × A1A1, and one allele (with three null alleles) A1O × OO; D2. The reciprocals of D1. Marker group A is regarded as containing fully informative markers because the four progeny genotypes are completely distinguishable. The other six groups all contain partially informative markers, since some progeny genotypes cannot be phenotypically separated from other genotypes.
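To make the two-point mixture EM above concrete, here is a minimal sketch for two fully informative markers. It is our own illustrative implementation, not the authors' program: the per-combination matrices are built gamete by gamete instead of being tabulated, and every name in it is invented.

```python
import numpy as np
from itertools import product

# Progeny genotype coding at each marker: (allele from P, allele from Q) -> index 0..3
CODE = {(1, 3): 0, (1, 4): 1, (2, 3): 2, (2, 4): 3}

def combo_matrices(r, phase_P, phase_Q):
    """4x4 genotype frequencies F and recombination counts K for one diplotype combination.
    phase = 1 means diplotype [11][22] (resp. [33][44]); phase = 0 means the flipped diplotype."""
    F, K = np.zeros((4, 4)), np.zeros((4, 4))
    for (a1, a2), (b1, b2) in product(product((1, 2), repeat=2), product((3, 4), repeat=2)):
        recP = (a1 != a2) if phase_P else (a1 == a2)   # gamete recombinant w.r.t. assumed phase
        recQ = (b1 != b2) if phase_Q else (b1 == b2)
        prob = (r / 2 if recP else (1 - r) / 2) * (r / 2 if recQ else (1 - r) / 2)
        j1, j2 = CODE[(a1, b1)], CODE[(a2, b2)]
        F[j1, j2] += prob
        K[j1, j2] = recP + recQ
    return F, K

def em_two_point(n, r=0.25, p=0.5, q=0.5, tol=1e-8, max_iter=1000):
    """Joint EM for Theta = (r, p, q) over the four-combination mixture."""
    n = np.asarray(n, dtype=float)
    N = n.sum()
    for _ in range(max_iter):
        pi = {(1, 1): p * q, (1, 0): p * (1 - q), (0, 1): (1 - p) * q, (0, 0): (1 - p) * (1 - q)}
        FK = {c: combo_matrices(r, *c) for c in pi}
        H = sum(pi[c] * FK[c][0] for c in pi)                      # mixture frequencies
        D = sum(pi[c] * FK[c][0] * FK[c][1] for c in pi) / H       # E[#recombinations | cell]
        P = (pi[1, 1] * FK[1, 1][0] + pi[1, 0] * FK[1, 0][0]) / H  # Pr(P has diplotype 1 | cell)
        Q = (pi[1, 1] * FK[1, 1][0] + pi[0, 1] * FK[0, 1][0]) / H  # Pr(Q has diplotype 1 | cell)
        r_new = (n * D).sum() / (2 * N)                            # M step
        p, q = (n * P).sum() / N, (n * Q).sum() / N
        if abs(r_new - r) < tol:
            break
        r = r_new
    return r, p, q
```

With counts drawn from, say, combination 10 and r = 0.2, the iteration converges to r̂ ≈ 0.2, p̂ ≈ 1 and q̂ ≈ 0. The neutral starting values p = q = 0.5 matter: starting either probability at exactly 0 or 1 pins that parent's phase for all subsequent iterations.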
This incomplete distinction leads to the segregation ratios 1:2:1 (B), 3:1 (C) and 1:1 (D). Note that marker group D can be viewed as fully informative if we are only interested in the heterozygous parent. In the preceding section, we defined a (4 × 4) matrix H for the joint genotype frequencies between two fully informative markers. For partially informative markers, however, only the joint phenotypes can be observed and, thus, the joint genotype frequencies, as contained in H, must be collapsed according to the same phenotype. Wu et al. [ 11 ] designed specific incidence matrices (I) relating the genotype frequencies to the phenotype frequencies for different types of markers. Here, we use the notation H′ = I1 H I2ᵀ for the (b1 × b2) matrix of the phenotype frequencies between two partially informative markers M1 and M2, where b1 and b2 are the numbers of distinguishable phenotypes for markers M1 and M2, respectively. Correspondingly, we have the collapsed matrices (DH)′, P′ and Q′, obtained in the same way. The EM algorithm can then be developed to estimate the recombination fraction between any two partially informative markers. E Step: At step τ, based on the matrix (DH)′ derived from the current estimate r^(τ), calculate the expected number of recombination events between the two markers for a given progeny phenotype, where (dh)′_{j1 j2}, h′_{j1 j2}, p′_{j1 j2} and q′_{j1 j2} are the (j1 j2)th elements of the matrices (DH)′, H′, P′ and Q′, respectively. M Step: Calculate r^(τ+1) as in the fully informative case, with the phenotype counts and the collapsed matrices replacing the genotype counts and the uncollapsed matrices. The E and M steps are repeated until the estimate converges to a stable value.

Three-locus analysis

A general framework

Consider three markers in a linkage group that have three possible orders, M1–M2–M3, M1–M3–M2 and M2–M1–M3. Let o1, o2 and o3 be the corresponding probabilities of occurrence of these orders in the parental genome. Without loss of generality, for a given order, the allelic arrangement of the first marker between the two homologous chromosomes can be fixed for a parent. Thus, changing the allelic arrangements at the other two markers leads to 2 × 2 = 4 parental diplotypes. The three-marker genotype of parent P (12/12/12) may therefore have four possible diplotypes, [111] [222], [112] [221], [121] [212] and [122] [211]. Relative to the fixed allelic arrangement 1|2 of the first marker on the two homologous chromosomes, the probabilities of the allelic arrangements 1|2 and 2|1 are denoted by p1 and 1 - p1 for the second marker and by p2 and 1 - p2 for the third marker, respectively. Assuming that the allelic arrangements are independent between the second and third markers, the probabilities of the four three-marker diplotypes are p1p2, p1(1 - p2), (1 - p1)p2 and (1 - p1)(1 - p2), respectively. The four diplotypes of parent Q can be constructed analogously, with probabilities q1q2, q1(1 - q2), (1 - q1)q2 and (1 - q1)(1 - q2), respectively. Thus, there are 4 × 4 = 16 possible diplotype combinations (whose probabilities are the products of the corresponding diplotype probabilities) when parents P and Q are crossed. Let r12 denote the recombination fraction between markers M1 and M2, with r23 and r13 defined similarly. These recombination fractions are associated with the probabilities with which a crossover occurs between markers M1 and M2 and between markers M2 and M3. The event that a crossover occurs in each of the two intervals is denoted by D11, and the event that no crossover occurs in either interval by D00, whereas the events that a crossover occurs only in the first interval or only in the second interval are denoted by D10 and D01, respectively.
The probabilities of these events are denoted by d00, d01, d10 and d11, respectively, and their sum equals 1. According to the definition of the recombination fraction as the probability of a crossover between a pair of loci, we have r12 = d10 + d11, r23 = d01 + d11 and r13 = d01 + d10. These relationships have been used by Haldane [ 17 ] to derive the map function that converts the recombination fraction to the corresponding genetic distance. For a three-point analysis, there are a total of 16 (16 × 4) matrices of genotype frequencies under a given marker order, each corresponding to one diplotype combination; a combination is indexed by the alternative allelic arrangements (1|2 or 2|1) of the second and third markers in parent P and in parent Q. According to Ridout et al. [ 18 ] and Wu et al. [ 11 ], the elements of these matrices are expressed in terms of d00, d01, d10 and d11. Similarly, there are 16 (16 × 4) matrices for the expected numbers of crossovers associated with the events D00, D01, D10 and D11 for a given marker order. In their Table 2, Wu et al. [ 11 ] gave the three-locus genotype frequencies and the numbers of crossovers in the different marker intervals under the marker order M1–M2–M3.

Table 2. Estimation from three-point analysis of the recombination fraction (r̂ ± SD) and the parental diplotype probabilities of parent P (p̂) and Q (q̂) for five markers in a full-sib family of n = 100.

Recombination fraction r = 0.05:

| Marker (P × Q) | Case 1: r̂ ± SD | Case 2: r̂ ± SD | p̂, q̂ |
|---|---|---|---|
| 1 (ab × cd) | | | |
| 2 (ab × ab) | 0.0511 ± 0.0175 (pair 1–2) | 0.1008 ± 0.0298 (pair 1–3) | 0.9978, 0.9986 |
| 3 (ao × oa) | 0.0578 ± 0.0269; 0.0557 ± 0.0312 (pair 2–3) | 0.0988 ± 0.0277 (pair 2–4) | 0.9977, 0; 1, 0 |
| 4 (ab × bb) | 0.0512 ± 0.0307; 0.0476 ± 0.0280 (pair 3–4) | 0.0932 ± 0.0301 (pair 3–5) | 1, 1/0; 1, 1/0 |
| 5 (ab × cd) | 0.0514 ± 0.0229 (pair 4–5) | | 1, 1 |

Recombination fraction r = 0.20:

| Marker (P × Q) | Case 1: r̂ ± SD | Case 2: r̂ ± SD | p̂, q̂ |
|---|---|---|---|
| 1 (ab × cd) | | | |
| 2 (ab × ab) | 0.2026 ± 0.0348 (pair 1–2) | 0.3282 ± 0.0482 (pair 1–3) | 0.9918, 0.9916 |
| 3 (ao × oa) | 0.2240 ± 0.0758; 0.2408 ± 0.0939 (pair 2–3) | 0.3241 ± 0.0488 (pair 2–4) | 0.9944, 0; 1, 0 |
| 4 (ab × bb) | 0.1927 ± 0.0613; 0.1824 ± 0.0614 (pair 3–4) | 0.3161 ± 0.0502 (pair 3–5) | 1, 1/0; 1, 1/0 |
| 5 (ab × cd) | 0.2017 ± 0.0393 (pair 4–5) | | 1, 1 |

Case 1 denotes the recombination fraction between two adjacent markers (two estimates are given, separated by a semicolon, where a pair enters two successive three-point analyses), whereas case 2 denotes the recombination fraction between the two markers separated by a third marker. See Table 1 for other explanations.

The joint genotype frequencies of the three markers can be viewed as a mixture over the 16 diplotype combinations and the three orders, weighted by their occurring probabilities:

$$H = \sum_{k=1}^{3} o_k \sum_{c=1}^{16} \pi_c\, H_k^{(c)},$$

where π_c is the probability of the cth diplotype combination (a product such as p1p2 · q1q2) and H_k^{(c)} is the genotype-frequency matrix under order k and combination c. Similarly, the expected number of recombination events contained within a progeny genotype is a mixture over the different diplotype and order combinations, with the crossover-count matrices for D00, D01, D10 and D11 entering in place of the genotype-frequency matrices. Also define the posterior matrices P1, P2, Q1 and Q2 for the allelic arrangements of the second and third markers in the two parents, constructed analogously to P and Q in the two-point analysis. The occurring probabilities of the three marker orders are likewise mixtures over all diplotype combinations and can be written compactly in matrix notation. We implement the EM algorithm to estimate the MLEs of the recombination fractions between the three markers.
The general equations formulating the iteration of the {τ + 1}th EM step are given as follows. E Step: At step τ, calculate the expected numbers of recombination events associated with D00 (α), D01 (β), D10 (γ) and D11 (δ) for the (j1 j2 j3)th progeny genotype (where j1, j2 and j3 denote the progeny genotypes at the three individual markers, respectively), together with the posterior probabilities of the allelic arrangements and of the marker orders (k = 1, 2, 3), where n_{j1 j2 j3} denotes the number of progeny with a particular three-marker genotype and h_{j1 j2 j3}, α_{j1 j2 j3}, β_{j1 j2 j3}, γ_{j1 j2 j3}, δ_{j1 j2 j3}, p1(j1 j2 j3), p2(j1 j2 j3), q1(j1 j2 j3) and q2(j1 j2 j3) are the (j1 j2 j3)th elements of the matrices H, D00, D01, D10, D11, P1, P2, Q1 and Q2, respectively. M Step: Calculate the updated values of d00, d01, d10 and d11, together with p1, p2, q1, q2 and the order probabilities o1, o2 and o3, as the progeny-count-weighted averages of the corresponding posterior quantities. The E and M steps are repeated until d00, d01, d10 and d11 converge to values with satisfactory precision. From the MLEs of the d's, the MLEs of the recombination fractions r12, r13 and r23 can then be obtained according to the invariance property of MLEs.

Model for partially informative markers

Consider three partially informative markers with the numbers of distinguishable phenotypes denoted by b1, b2 and b3, respectively. Define H′ as the (b1b2 × b3) matrix of phenotype frequencies for the three partially informative markers; the corresponding collapsed matrices for the expected crossover numbers and the posterior arrangement probabilities are defined similarly. Using the procedure described in Section (2.2), we implement the EM algorithm to estimate the MLEs of the recombination fractions among the three partially informative markers.

m-point analysis

Three-point analysis considering the dependence of recombination events among different marker intervals can be extended to the linkage analysis of an arbitrary number of markers. Suppose there are m ordered markers on a linkage group. The joint genotype probabilities of the m markers form a (4^(m-1) × 4)-dimensional matrix. There are 2^(m-1) × 2^(m-1) such probability matrices, each corresponding to a different parental diplotype combination. Reasonable estimates of the recombination fractions rely upon the characterization of a most likely parental diplotype combination based on the calculated multilocus likelihood values. The m-marker joint genotype probabilities can be expressed as a function of the probabilities of whether or not a crossover occurs between adjacent markers, where l1, l2, ..., l_{m-1} are indicator variables denoting the crossover events between markers M1 and M2, markers M2 and M3, ..., and markers M_{m-1} and M_m, respectively. An indicator is defined as 1 if there is a crossover and 0 otherwise. Because each indicator can take the value one or zero, there are a total of 2^(m-1) D's. The occurring probability of an interval-specific crossover can be estimated using the EM algorithm. In the E step, the expected number of interval-specific crossovers is calculated (as in the E step of the three-point analysis). In the M step, an explicit equation is used to estimate each crossover-pattern probability. The MLEs of these probabilities are further used to estimate the m(m - 1)/2 recombination fractions between all possible marker pairs. In m-point analysis, parental diplotypes and gene orders can be incorporated into the model.

Monte Carlo simulation

Simulation studies are performed to investigate the statistical properties of our model for simultaneously estimating linkage, parental diplotype and gene order in a full-sib family derived from two outbred parents. Suppose there are five markers of a known order on a chromosome.
These five markers segregate in the ratios 1:1:1:1, 1:2:1, 3:1, 1:1 and 1:1:1:1, respectively, in the order given. The diplotypes of the two parents for the five markers are given in Table 1, and a segregating full-sib family is generated from these two parents. In order to examine the effects of the parameter space on the estimation of linkage, parental diplotype and gene order, the full-sib family is simulated with different degrees of linkage (r = 0.05 vs. 0.20) and different sample sizes (n = 100 vs. 200). As expected, the estimation precision of the recombination fraction depends on the marker type, the degree of linkage and the sample size. More informative markers, more tightly linked markers and larger sample sizes display greater estimation precision of linkage than less informative markers, less tightly linked markers and smaller sample sizes (Tables 1 and 2). To save space, we do not give the results on the effects of sample size in the tables. Our model provides an excellent estimation of the parental linkage phases, i.e., parental diplotypes, in two-point analysis. For example, the MLE of the probability (p or q) of a parental diplotype is close to 1 or 0 (Table 1), suggesting that we can always accurately estimate the parental diplotypes. But for two symmetrical markers (markers 2 and 3 in this example), the two sets of MLEs, p̂ = 1, q̂ = 0 and p̂ = 0, q̂ = 1, give an identical likelihood ratio test statistic. Thus, two-point analysis cannot specify the parental diplotypes for symmetrical markers even when the two parents have different diplotypes. The estimation precision of linkage can be increased when a three-point analysis is performed (Table 2), but the gain depends on the marker types and the degree of linkage. The advantage of three-point over two-point analysis is more pronounced for partially than for fully informative markers, and for less tightly than for more tightly linked markers. For example, the sampling error of the MLE of the recombination fraction (assuming r = 0.20) between markers 2 and 3 from two-point analysis is 0.0848, whereas this value from a three-point analysis decreases to 0.0758 when the fully informative marker 1 is included but increases to 0.0939 when the partially informative marker 4 is included. Three-point analysis can clearly determine the diplotypes of the two parents as long as one of the three markers is asymmetrical. In our example, using either of the asymmetrical markers 1 or 4, the diplotypes of the two parents at the two symmetrical markers (2 and 3) can be determined. Our model for three-point analysis can also determine a most likely gene order. In the three successive three-point analyses (markers 1–3, markers 2–4 and markers 3–5), the MLEs of the probability of the correct gene order are all almost equal to 1, suggesting that the estimated gene order is consistent with the order hypothesized. To demonstrate how our linkage analysis model is more advantageous than the existing models for a full-sib family population, we carry out a simulation study for linked dominant markers. In two-point analysis, two different parental diplotype combinations are assumed: (1) [ aa ] [ oo ] × [ aa ] [ oo ] (cis × cis) and (2) [ ao ] [ oa ] × [ ao ] [ oa ] (trans × trans). The MLE of the linkage under combination (2), in which the two dominant non-alleles are in a repulsion phase, is not as precise as that under combination (1), in which the two dominant non-alleles are in a coupling phase [ 12 ].
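The full-sib families used in these comparisons are simple to generate. The sketch below is our own reconstruction of such a generator for the two-dominant-marker case just described (all names and the exact coding are assumptions, not the authors' code): gametes are drawn with a crossover probability r between the two markers, and progeny phenotypes record only presence or absence of the dominant band.

```python
import random
from collections import Counter

def gamete(diplotype, r):
    """Draw one two-marker gamete from a diplotype such as ('ao', 'oa')."""
    i = random.randrange(2)                      # chromosome transmitted at marker 1
    j = 1 - i if random.random() < r else i      # crossover with probability r
    return diplotype[i][0], diplotype[j][1]

def simulate_family(dip_P, dip_Q, r, n):
    """Progeny phenotypes at two dominant markers: True = dominant band present."""
    fam = []
    for _ in range(n):
        gP, gQ = gamete(dip_P, r), gamete(dip_Q, r)
        fam.append(tuple('a' in alleles for alleles in zip(gP, gQ)))
    return fam

random.seed(1)
cis = Counter(simulate_family(('aa', 'oo'), ('aa', 'oo'), r=0.20, n=100))    # coupling x coupling
trans = Counter(simulate_family(('ao', 'oa'), ('ao', 'oa'), r=0.20, n=100))  # repulsion x repulsion
```

Tabulating the four phenotype classes of `trans` and fitting the phase-specific likelihoods for dominant markers reproduces the pattern reported below in Table 3: all four phase combinations fit the repulsion data equally well, and only the mixture model recovers r ≈ 0.2.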
For a given data set with unknown linkage phase, the traditional procedure for estimating the recombination fraction is to calculate the likelihood values under all possible linkage phase combinations (i.e., cis × cis, cis × trans, trans × cis and trans × trans). The combinations cis × cis and trans × trans have the same likelihood value, with the MLE under one combination being equal to one minus the MLE under the other. The same relationship holds for cis × trans and trans × cis. A most likely phase combination is chosen as the one with the largest likelihood and a legitimate MLE of the recombination fraction (r̂ ≤ 0.5) [ 10 ]. For our data set simulated from [ aa ] [ oo ] × [ aa ] [ oo ], one can easily select cis × cis as the best estimate of the phase combination, because it corresponds to a larger likelihood and a smaller r̂ (Table 3). Our model incorporating the parental diplotypes provides comparable estimation precision of the linkage for the data from [ aa ] [ oo ] × [ aa ] [ oo ] and precisely determines the parental diplotypes (see the MLEs of p and q; Table 3). Our model has a great advantage over the traditional model for the data derived from [ ao ] [ oa ] × [ ao ] [ oa ]. For this data set, the same likelihood is obtained under all four possible diplotype combinations (Table 3). In this case, one would select cis × trans or trans × cis, because these two phase combinations are associated with a lower estimate of r. But this estimate of r (0.0393) is biased, since it is far less than the value of 0.20 hypothesized. Our model gives the same estimation precision of the linkage for the data derived from [ ao ] [ oa ] × [ ao ] [ oa ] as is obtained when the analysis is based on the correct diplotype combination (Table 3). Moreover, our model precisely determines the parental diplotypes (p̂ = q̂ = 0).

Table 3. Comparison of the estimation of the linkage and parental diplotype between two dominant markers in a full-sib family of n = 100 from the traditional model and our model.

Data simulated from cis × cis:

| | cis × cis | cis × trans | trans × cis | trans × trans | Our model |
|---|---|---|---|---|---|
| Correct diplotype combination | Correct | Incorrect | Incorrect | Incorrect | |
| Log-likelihood^a | -46.2 | -92.3 | -92.3 | -46.2 | |
| r̂ under each diplotype combination | 0.1981 ± 0.0446 | 0.5000 ± 0.0000 | 0.5000 ± 0.0000 | 0.8018 ± 0.0446 | |
| Estimated diplotype combination | Selected | | | | |
| r̂ under the correct diplotype combination | 0.1981 ± 0.0446 | | | | 0.1982 ± 0.0446 |
| Diplotype probability for parent P (p̂) | | | | | 1.0000 ± 0.0000 |
| Diplotype probability for parent Q (q̂) | | | | | 1.0000 ± 0.0000 |

Data simulated from trans × trans:

| | cis × cis | cis × trans | trans × cis | trans × trans | Our model |
|---|---|---|---|---|---|
| Correct diplotype combination | Incorrect | Incorrect | Incorrect | Correct | |
| Log-likelihood^a | -89.6 | -89.6 | -89.6 | -89.6 | |
| r̂ under each diplotype combination | 0.8573 ± 0.1253 | 0.0393 ± 0.0419 | 0.0393 ± 0.0419 | 0.1426 ± 0.1253 | |
| Estimated diplotype combination | | Selected | Selected | | |
| r̂ under the correct diplotype combination | | | | 0.1426 ± 0.1253 | 0.1428 ± 0.1253 |
| Diplotype probability for parent P (p̂) | | | | | 0.0000 ± 0.0000 |
| Diplotype probability for parent Q (q̂) | | | | | 0.0000 ± 0.0000 |

^a The log-likelihood values given here are those from one random simulation for each diplotype combination under the traditional model.

In three-point analysis, we examine the advantage of implementing linkage analysis with gene orders. Three dominant markers are assumed to have two different parental diplotype combinations: (1) [ aaa ] [ ooo ] × [ aaa ] [ ooo ] and (2) [ aao ] [ ooa ] × [ aao ] [ ooa ].
The traditional approach is to calculate the likelihood values under the three possible gene orders and choose the order with the maximum likelihood for estimating the linkage. Under combination (1), a most likely gene order can be well determined and, therefore, the recombination fractions between the three markers are well estimated, because the likelihood value of the correct order is always larger than those of the incorrect orders (Table 4). However, under combination (2), the estimates of linkage are not always precise, because the gene order is incorrectly determined with a frequency of 20%. The estimates of the r's deviate greatly from their actual values when they are based on a wrong gene order (Table 4). Our model incorporating gene order provides better estimates of the linkage than the traditional approach, especially between markers whose dominant alleles are in a repulsion phase. Furthermore, a most likely gene order is determined by our model at the same time as the linkage is estimated.

Table 4. Comparison of the estimation of the linkage and gene order between three dominant markers in a full-sib family of n = 100 from the traditional model and our model. For the traditional model, estimates are shown under each of the three candidate gene orders (order I being the true one); ô1, ô2 and ô3 are the gene-order probabilities estimated by our model.

Data simulated from [ aaa ] [ ooo ] × [ aaa ] [ ooo ]:

| MLE | Order I (correct) | Order II (incorrect) | Order III (incorrect) | Our model |
|---|---|---|---|---|
| Estimated best gene order (%^a) | 100 | 0 | 0 | |
| r̂12 | 0.2047 ± 0.0422 | | | 0.2048 ± 0.0422 |
| r̂23 | 0.1980 ± 0.0436 | | | 0.1985 ± 0.0434 |
| r̂13 | 0.3245 ± 0.0619 | | | 0.3235 ± 0.0618 |
| ô1, ô2, ô3 | | | | 0.9860 ± 0.0105, 0.0060 ± 0.0071, 0.0080 ± 0.0079 |

Data simulated from [ aao ] [ ooa ] × [ aao ] [ ooa ]:

| MLE | Order I (correct) | Order II (incorrect) | Order III (incorrect) | Our model |
|---|---|---|---|---|
| Estimated best gene order (%^a) | 80 | 11 | 9 | |
| r̂12 | 0.1991 ± 0.0456 | 0.8165 ± 0.1003 | 0.9284 ± 0.0724 | 0.2104 ± 0.0447 |
| r̂23 | 0.1697 ± 0.0907 | 0.8220 ± 0.0338 | 0.1636 ± 0.0608 | 0.2073 ± 0.0754 |
| r̂13 | 0.3218 ± 0.0755 | 0.2703 ± 0.0586 | 0.7821 ± 0.0459 | 0.2944 ± 0.0929 |
| ô1, ô2, ô3 | | | | 0.9952 ± 0.0058, 0.0045 ± 0.0058, 0.0003 ± 0.0015 |

^a The percentages of a total of 200 simulations in which a given gene order had the largest likelihood under the traditional approach. In this example, used to examine the advantage of implementing gene orders, known linkage phases are assumed.

Our model is further used to perform joint analyses including more than three markers. When the number of markers increases, the number of parameters to be estimated increases exponentially. For four-point analysis, the speed of convergence was slow, and the accuracy and precision of parameter estimation were affected for a sample size of 200 (data not shown). According to our simulation experience, more-than-three-point analysis can be improved by increasing the sample size or by using the estimates from two- or three-point analysis as initial values.

A worked example

We use an example from the published literature [ 18 ] to demonstrate our unifying model for the simultaneous estimation of linkage, parental diplotype and gene order. A cross was made between two triple heterozygotes with genotype AaVvXx at markers A, V and X. Because these three markers are dominant, the cross generates 8 distinguishable phenotype classes, with observations of 28 for A-/V-/X-, 4 for A-/V-/xx, 12 for A-/vv/X-, 3 for A-/vv/xx, 1 for aa/V-/X-, 8 for aa/V-/xx, 2 for aa/vv/X- and 2 for aa/vv/xx. We first use two-point analysis to estimate the recombination fractions and parental diplotypes between all possible pairs of the three markers.
From two-point analysis, the estimated parental diplotypes are [ Av ] [ aV ] × [ AV ] [ av ] or [ AV ] [ av ] × [ Av ] [ aV ] for markers A and V, [ Vx ] [ vX ] × [ VX ] [ vx ] or [ VX ] [ vx ] × [ Vx ] [ vX ] for markers V and X, and [ AX ] [ ax ] × [ AX ] [ ax ] for markers A and X, with the corresponding MLEs of the recombination fractions obtained between each pair of markers. From the two-point analysis, one of the two parents carries the dominant alleles of markers A and X in repulsion with the dominant allele of marker V. Our subsequent three-point analysis combines parental diplotypes and gene orders to estimate the linkage among these three dominant markers. The estimated gene order is X–A–V, under which the MLEs of the three recombination fractions are obtained. The estimated parental diplotype combination is [ XAV ] [ xav ] × [ XAv ] [ xaV ] or [ XAv ] [ xaV ] × [ XAV ] [ xav ]. The three-point analysis of these three markers by Ridout et al. [ 18 ] led to estimates of the three recombination fractions all equal to 0.20. But their estimates may not be optimal, because the effect of uncertainty in the gene order on the estimation was not considered.

Discussion

Several statistical methods and software packages have been developed for linkage analysis and map construction in experimental crosses and well-structured pedigrees [ 2 - 6 ], but these methods require unambiguous linkage phases over a set of markers in a linkage group. For outcrossing species, such as forest trees, it is not possible to know the exact linkage phases for any two parents that are crossed to generate a full-sib family prior to linkage analysis. This uncertainty about linkage phases makes linkage mapping in outcrossing populations much more difficult than in phase-known pedigrees [ 7 , 9 ]. In this article we present a unifying model for simultaneously estimating the linkage, parental diplotype and gene order in a full-sib family derived from two outbred parents. As demonstrated by the simulation studies, our model is robust across different parameter spaces. Compared with the traditional approaches that calculate the likelihood values separately under all possible linkage phases or orders [ 9 , 10 , 18 ], our approach is advantageous in three respects. First, it provides a one-step analysis for estimating the linkage, parental diplotype and gene order, thus facilitating the implementation of a general method for analyzing any marker segregation type in outcrossing populations within a single software package. For some short-generation-interval outcrossing species, marker information can be obtained from grandparents, parents and progeny; the model presented here allows the marker genotypes of the grandparents to be used to derive the diplotypes of the parents. Second, our model for the first time incorporates gene ordering into a unified linkage analysis framework, whereas most earlier studies emphasized only the characterization of linkage phases through a multilocus likelihood analysis [ 11 , 14 , 15 ]. Instead of a comparative analysis of different orders, we propose to determine a most likely gene order by estimating the order probabilities. Third, and most importantly, our unifying approach can significantly improve the estimation precision of the linkage for dominant markers whose non-alleles are in a repulsion phase. Previous analyses have indicated that the estimate of the linkage between dominant markers in a repulsion phase is biased and imprecise, especially when the linkage is not strong and when the sample size is small [ 12 ].
There are two reasons for this: (1) the linkage phase cannot be correctly determined, and/or (2) there is a fairly high probability (20%) of detecting a wrong gene order. Our approach provides more precise estimates of the recombination fraction because the correct parental diplotypes and the correct gene order can be determined. Our approach will be broadly useful in the genetic mapping of outcrossing species. In practice, a two-point analysis can first be performed to obtain the pairwise estimates of the recombination fractions, and with this pairwise information markers are grouped based on the criteria of a maximum recombination fraction and a minimum likelihood ratio test statistic [ 2 ]. The parental diplotypes of the markers in individual groups are then constructed using three-point analysis. With the limited sample sizes available in practice, we do not recommend more-than-three-point analysis, because it would introduce too many additional unknown parameters to be estimated precisely. If such an analysis is desirable, however, one may use the results from the lower-point analyses as initial values to improve the convergence rate and possibly the precision of parameter estimation. In any case, our two- and three-point analyses lay a key stepping stone for map construction through two approaches. One is the least-squares method, as originally developed by Stam [ 5 ], which can integrate the pairwise recombination fractions into the reconstruction of a multilocus linkage map. The second is to use the hidden Markov chain (HMC) model, first proposed by Lander and Green [ 2 ], to construct genetic linkage maps by treating map construction as a combinatorial optimization problem. The simulated annealing algorithm [ 19 ] for searching for the optima of the multilocus likelihood function needs to be implemented for the HMC model. A user-friendly software package being written by the senior author will implement the two- and three-point analyses as well as the algorithm for map construction based on the estimates of the pairwise recombination fractions. This software will be made available online to the public. Our maximum likelihood-based approach is implemented with the EM algorithm. We have also incorporated the Gibbs sampler [ 20 ] into the estimation procedure of the mixture model for the linkage, characterizing the different parental diplotypes and gene orders of the markers. The results from the Gibbs sampler are broadly consistent with those from the EM algorithm, but the Gibbs sampler is computationally more efficient for complicated problems than the EM algorithm. Therefore, the Gibbs sampler may be particularly useful when our model is extended to consider multiple full-sib families in which the parents are selected from a natural population. For such a multi-family design, population genetic parameters describing the genetic structure of the original population, such as allele frequencies and linkage disequilibrium, should be incorporated and estimated in the model for linkage analysis. It can be anticipated that the Gibbs sampler will play an important role in estimating these parameters simultaneously along with the linkage, linkage phases and gene order.

Authors' contributions

QL derived the genetic and statistical models and wrote computer programs. YHC participated in the derivations of models and statistical analyses. RLW conceived of the ideas and algorithms, and wrote the draft. All authors read and approved the final manuscript. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC509239.xml |
555552 | Elevated creatine kinase activity in primary hepatocellular carcinoma | Background Inconsistent findings have been reported on the occurrence and relevance of creatine kinase (CK) isoenzymes in mammalian liver cells. Part of this confusion might be due to induction of CK expression during metabolic and energetic stress. Methods The specific activities and isoenzyme patterns of CK and adenylate kinase (AdK) were analysed in pathological liver tissue of patients undergoing orthotopic liver transplantation. Results The brain-type, cytosolic BB-CK isoenzyme was detected in all liver specimens analysed. Conversely, CK activity was strongly increased and a mitochondrial CK (Mi-CK) isoenzyme was detected only in tissue samples of two primary hepatocellular carcinomas (HCCs). Conclusion The findings do not support significant expression of CK in normal liver and most liver pathologies. Instead, many of the previous misconceptions in this field can be explained by interference from AdK isoenzymes. Moreover, the data suggest a possible interplay between p53 mutations, HCC, CK expression, and the growth-inhibitory effects of cyclocreatine in HCC. These results, if confirmed, could provide important hints at improved therapies and cures for HCC. | Background Creatine kinase (CK) isoenzymes catalyse the reversible transfer of the phosphate group of phosphocreatine (PCr) to ADP, to yield ATP and creatine (Cr). The CK/PCr/Cr system is present primarily in tissues with high and fluctuating energy demands such as brain, heart and skeletal muscle, and serves as a temporal and spatial "energy buffer" that helps to maintain a high intracellular phosphorylation potential in situations of increased metabolic demand (for reviews, see [ 1 , 2 ]). In mammals, Cr can be taken up by the intestine from the food, or can be synthesized de novo . The liver is the main site of Cr production in the body (see [ 2 ]). After its synthesis, Cr is transported through the blood and is taken up by Cr-containing tissues via a specific Cr transporter. Whereas the importance of the liver in Cr biosynthesis is undisputed, some confusion still exists on the CK activity and PCr content in this organ. The majority of findings suggest no or minute levels of CK and PCr in liver tissue and, in particular, in hepatocytes (e.g., [ 3 - 6 ]). Other studies that used more sensitive experimental approaches provided evidence for low levels of PCr and CK, specifically localized in sinusoidal endothelial cells ([ 7 - 9 ]; see also [ 10 ]). Finally, in a few cases, more extreme findings were made: unusually high levels of CK activity were measured in liver tissue by Shatton et al. [ 11 ], Goullé et al. [ 12 ], and Wali & Makinde [ 13 ]. The majority of studies indicated that the low levels of CK activity in liver are due solely to the brain-type cytosolic CK (BB-CK) isoenzyme. On the other hand, besides BB-CK which was suggested to be present in endothelial and Kupffer cells, Vaubourdolle et al. [ 14 ] also provided evidence for the presence of the muscle-type cytosolic (MM-CK) isoenzyme in Ito cells, and for mitochondrial CK (Mi-CK) in hepatocytes. Similarly, Kanemitsu et al. [ 15 ] purified Mi-CK from normal human liver, which would imply significant amounts of this isoenzyme in liver tissue. Finally, increases in serum CK activity were frequently observed in cases of severe liver disease, with the most obvious source of CK being the pathological liver tissue itself [ 16 - 18 ]. 
In a number of studies reporting significant levels of CK activity in liver, interference by adenylate kinase (AdK) isoenzymes in the CK activity assays [ 19 - 21 ] is very likely (e.g., [ 13 ]), or can at least not be excluded, thus questioning the validity of these studies. Another possible cause for inconsistent findings might be compensatory up-regulation of CK expression in pathological liver tissue. Two lines of evidence that favour this hypothesis are: (i) partially hepatectomized rat liver was reported to show an increase in BB-CK activity (see [ 22 ]); and (ii) overexpression of CK isoenzymes in the liver of transgenic mice was shown to stabilize energy metabolism under low-oxygen stress and after a metabolic challenge [ 23 , 24 ], to accelerate regeneration of liver mass following major hepatectomy [ 10 , 25 ], and to increase endotoxin tolerance [ 5 , 26 ]. Because of these conflicting data, the goal of the present study was to analyse in detail the CK and AdK activities in pathological liver tissue of patients undergoing orthotopic liver transplantation. Methods Liver samples The present project was approved by the ethics commission of the University of Innsbruck. In total, 25 liver samples were analysed. Twenty-three samples were obtained from 18 explanted organs of liver transplant recipients, one sample was obtained at autopsy (no. 1), and the last sample was from a normal rat liver. According to pathomorphological criteria, the 25 samples can be divided into 5 groups: (1) Nine samples of cirrhotic liver tissue (nos. 5, 7, 11, 13, 17–19, 23, 24: 4 due to hepatitis B or C virus infection, 3 due to primary or secondary biliary cirrhosis, 1 due to chronic alcohol abuse, and 1 due to vena hepatica occlusion); (2) six samples of neoplastic tissue (nos. 4, 9, 14, 15, 20, 21: 3 cholangiocellular carcinomas, 2 primary hepatocellular carcinomas, and 1 liver metastasis of a malignant melanoma); (3) three samples of necrotizing liver tissue due to acute or subacute organ rejection (nos. 2, 6, 22); (4) five samples of macroscopically normal liver parenchymal tissue (nos. 3, 8, 10, 16, 25) surrounding focal liver pathologies (i.e., 2 primary HCCs [samples 4 and 9]; metastasis of malignant melanoma [sample 15]; vena hepatica occlusion [sample 24]); (5) two samples originating from a normal rat liver (no. 12) and from a patient with steatosis hepatis (no. 1). Preparation of homogenate, cytosolic and mitochondrial fractions of human and rat liver All steps were performed on ice or at 4°C. Approximately 5 g of liver tissue was homogenized in 45 ml buffer A (250 mM sucrose, 5 mM HEPES, 0.5 mM EGTA, pH 7.4). The homogenate was subjected to centrifugation for 5 min at 800 g. The pellet was discarded, and the supernatant centrifuged for 4 min at 5,100 g (centrifugation C2). The supernatant of C2 was further clarified by centrifugation for 12 min at 12,300 g, thus yielding the cytosolic fraction. The pellet of C2 was resuspended in 10 ml buffer A, followed by centrifugation for 2 min at 12,300 g (C3). After resuspension of the C3 pellet in 10 ml buffer A and centrifugation for a further 10 min at 12,300 g, the sediment was resuspended in 4 ml buffer A, thus yielding the mitochondrial fraction. One-ml aliquots of the different fractions were immediately frozen in liquid nitrogen and stored at -80°C until analysis. 
Measurements of CK and AdK activity

For CK and AdK activity measurements, the following assay medium was used: 110 mM imidazole, pH 6.7, 20.5 mM glucose, 11 mM Mg-acetate, 2.05 mM EDTA·Na₂, 2.1 mM ADP, 2.1 mM NADP, 21 mM N-acetylcysteine, 9 U/ml of hexokinase, and 5.8 U/ml of glucose-6-phosphate dehydrogenase (both from Sigma). Enzymatic activity was measured at 25°C as an increase in NADPH absorbance at 340 nm. For AdK, three separate measurements were made for each sample in the same assay medium. For CK measurements, 5.1 mM AMP was added to the assay medium to inhibit AdK activity. For each sample, three measurements with 10.3 mM PCr and three measurements without PCr (blank measuring residual AdK activity after inhibition with AMP) were made, and the CK activity was calculated as the difference of the respective means. All values in this paper represent specific activities per mg of homogenate, cytosolic or mitochondrial protein. Protein amounts were measured according to the method of Bradford [ 27 ] with bovine serum albumin as standard.

Cellulose polyacetate electrophoresis (CPE)

CPE was performed at room temperature for 90 min at a constant voltage of 150 V, but otherwise as described previously [ 28 ]. CK and AdK isoenzyme bands were visualized at 37°C with an overlay gel technique in a reaction protocol similar to the one described above for the measurement of enzymatic activity. NADPH was reacted with nitroblue tetrazolium in the presence of phenazine methosulfate to yield formazan. For visualization of CK bands, AMP was added to the overlay gel to inhibit AdK activity. Since AMP alone may not be sufficient to inhibit all AdK activity [ 21 ], two identical cellulose polyacetate strips were run; one was developed with PCr in the overlay gel, whereas for the other, PCr was omitted from the overlay gel (blank).

Results

CK and AdK activities were measured in total homogenate (Fig. 1 ), cytosolic and mitochondrial fractions (data not shown) obtained by differential centrifugation from 25 normal and pathological liver samples. The highest CK activities were observed in the two primary HCCs analysed (liver samples no. 4 and 9), with specific CK activities in the homogenate of 0.36 and 0.21 U·(mg protein)⁻¹, respectively. In most other liver samples, the specific CK activities in the homogenate were below 0.05 U·(mg protein)⁻¹. Whereas enzymatic activity measurements revealed low, but consistent CK activity in many of the cytosolic fractions, no CK activity was detected in the mitochondrial fractions, except for HCC sample no. 9 with a specific CK activity of approx. 0.1 U·(mg protein)⁻¹ (due to limited sample size, subcellular fractionation was not feasible for HCC sample no. 4). These findings were corroborated qualitatively by isoenzyme electrophoresis on cellulose polyacetate strips. Visualization of the different CK isoenzymes by an overlay gel technique revealed that the brain-type cytosolic BB-CK isoenzyme was present in all liver samples. Conversely, bands for the dimeric and octameric forms of Mi-CK were only observed in the two primary HCC samples (nos. 4 and 9; Fig. 2 ). The CK/AdK activity ratio in the homogenate was 1.4 and 2.6 for the two primary HCCs (liver samples no. 4 and 9, respectively), 0.5 for liver sample no. 5 (secondary biliary cirrhosis), and < 0.2 for all other liver samples. Similar findings were made for the cytosolic and mitochondrial fractions, with CK/AdK activity ratios of < 0.3 for the cytosolic fraction and < 0.05 for the mitochondrial fraction.
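For reference, the conversion from the measured A340 slopes to the specific activities quoted above follows the standard NADPH calculation. The sketch below is our own illustration (the numbers are placeholders, not data from this study); it assumes a 1-cm light path, the molar absorptivity of NADPH at 340 nm (6.22 mM⁻¹·cm⁻¹), and the blank subtraction described in the Methods.

```python
EPSILON_NADPH = 6.22  # mM^-1 cm^-1 at 340 nm; 1-cm path length assumed

def specific_activity(dA_per_min, dA_blank_per_min, sample_ml, assay_ml, protein_mg_per_ml):
    """U per mg protein; 1 U = 1 umol NADPH formed per min (CK assay: +PCr minus -PCr blank)."""
    dA = dA_per_min - dA_blank_per_min                 # remove residual AdK signal
    u_per_ml_sample = dA / EPSILON_NADPH * (assay_ml / sample_ml)
    return u_per_ml_sample / protein_mg_per_ml

# hypothetical homogenate: slope 0.045/min with PCr, 0.005/min without,
# 20 ul sample in a 1.0-ml assay, 0.9 mg protein per ml of sample
print(round(specific_activity(0.045, 0.005, 0.02, 1.0, 0.9), 3), "U/mg")   # -> 0.357 U/mg
```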
For HCC sample no. 9, however, these ratios were significantly higher: 4.8 (cytosolic fraction) and 0.33 (mitochondrial fraction).

Discussion

CK is an enzyme still widely analysed in clinical diagnostics. Although a wealth of CK measurements have been reported in the scientific literature, there is still inconsistency and incomplete knowledge about such an apparently simple question as the CK (isoenzyme) content of mammalian liver in both health and disease. In the present study, we detected the presence of BB-CK in all liver samples analysed by using CK activity measurements and cellulose polyacetate electrophoresis. However, in the normal and most pathological liver samples that we analysed, the specific CK activity was very low (< 0.05 U·(mg protein)⁻¹), levels which are comparable with or lower than data reported for rat and human liver [ 29 , 30 ], but much lower than the specific CK activities in skeletal muscle, heart and brain (2–37 U·(mg protein)⁻¹; [ 29 , 31 - 33 ]). We additionally observed that (i) the specific AdK activities in these samples were consistently higher than the specific CK activities (on average, > 10-fold), (ii) both activity measurements and cellulose polyacetate electrophoresis revealed similar specific AdK activities in the cytosolic and mitochondrial fractions (although from different AdK isoenzymes; data not shown), and (iii) mitochondrial respiration, in the presence of ATP, could be fully stimulated by AMP, but not by creatine (data not shown). This last observation favours the interpretation that in normal hepatocytes, CK isoenzymes are not expressed, and that the AdK isoenzyme system plays a role in high-energy phosphate buffering and transport similar to the role of CK in brain, skeletal muscle and heart. Although histochemical data are missing, the results obtained here are most consistent with a localization of small amounts of BB-CK in sinusoidal endothelial cells [ 14 ]. Interestingly, we observed a strong induction of both BB-CK and Mi-CK expression in two samples of primary HCC. Despite CK/AdK activity ratios in vitro of 1.4–2.6, the specific CK activities were still relatively low (0.21–0.36 U·(mg protein)⁻¹). Therefore, in the absence of histochemical data, it cannot be concluded with certainty whether the increased levels of CK are due to increased vascularization of the tumour (possibly associated with a higher proportion of CK-containing endothelial cells), or to induction of CK expression in the malignant cells. Induction of CK expression has been observed previously in many types of tumours (see [ 2 ]) and may reflect an adaptation of the tumour tissue to the increased energetic demands. Evidence for induction of CK in liver tumours mostly comes from hepatoma cells grown in tissue culture [ 34 , 35 ] or, indirectly, from increased amounts of circulating BB-CK and Mi-CK in the blood of patients with liver tumours [ 36 ]. On the other hand, analysis of the tumour tissue itself, both by classical biochemical methods and by microarray technology, provided inconsistent results. Some authors reported induction of CK expression in liver tumours ([ 37 - 39 ]; and, in part, [ 40 ]; M. Sakamoto and S. H. Yim, personal communication), others repression [ 41 ], and still others observed no statistically significant differences between normal and malignant liver tissue [ 11 , 30 ]. This may be a reflection of the diverse clinicopathological and biological phenotypes of HCC, with different underlying molecular defects.
A key player in the picture might be the p53 tumour suppressor gene. Mutations in p53 are quite prevalent in HCC, especially in tumours with low cellular differentiation [ 42 , 43 ]. On the other hand, p53 was shown to control BB-CK expression: transrepression as observed for wild-type p53 is prevented by different mutations in the p53 gene [ 44 ]. Therefore, it is tempting to speculate that induction of BB-CK in HCCs is caused directly or indirectly by mutations in p53. Expression of CK in HCC may have therapeutic implications, which is all the more important given (i) the limited responsiveness of HCC to currently available therapeutic approaches and, thus, (ii) the poor prognosis associated with this disease. Cr analogues (cyclocreatine and β-guanidinopropionic acid) and also Cr itself were previously shown to have antitumour activity, both in cell culture and in in vivo models ([ 45 , 46 ]; see also [ 2 ]). The responsiveness of tumour cells to growth inhibition by cyclocreatine seems to be correlated with their specific CK activity; cell lines with a specific CK activity of > 0.10 U·(mg protein) -1 were generally sensitive to the drug. As for the liver, β-guanidinopropionic acid and creatine slowed the growth of AS30-D ascites tumour cells in culture (chemically induced rat hepatoma; [ 34 ]). Similarly, cyclocreatine revealed antitumour effects in a rat model of chemically induced hepatocarcinogenesis [ 47 ]. Conclusion The present findings shed light on some old enigmas and open up fascinating avenues for future research. Our findings do not support significant expression of CK in normal liver and most liver pathologies, but rather indicate that many of the previous misconceptions in this field can be explained by interference from AdK isoenzymes. On the other hand, given the need for improved understanding of the molecular pathogenesis of HCC, and for improved therapies and cures, the induction of CK expression in HCC described here calls for a more in-depth analysis of the interplay between p53 mutations, HCC, CK expression, and the growth-inhibitory effects of cyclocreatine in HCC. List of abbreviations AdK, adenylate kinase; BB-CK, brain-type cytosolic CK isoenzyme; CK, creatine kinase; Cr, creatine; HCC, hepatocellular carcinoma; Mi-CK, mitochondrial CK; MM-CK, muscle-type cytosolic CK isoenzyme; PCr, phosphocreatine. Competing interests The authors declare that they have no competing interests. Authors' contributions GM and RM covered the medical part of this study. GM, FNG and MW performed the biochemical experiments. MW drafted the manuscript. Pre-publication history The pre-publication history for this paper can be accessed here: | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC555552.xml |
517952 | A rational treatment of Mendelian genetics | Background The key to a rational treatment of elementary Mendelian genetics, specifically to an understanding of the origin of dominant and recessive traits, lies in the facts that: (1) alleles of genes encode polypeptides; (2) most polypeptides are catalysts, i.e. enzymes or translocators; (3) the molecular components of all traits in all cells are the products of systems of enzymes, i.e. of fluxing metabolic pathways; (4) any flux to the molecular components of a trait responds non-linearly (non-additively) to graded mutations in the activity of any one of the enzymes at a catalytic locus in a metabolic system; (5) as the flux responds to graded changes in the activity of an enzyme, the concentrations of the molecular components of a trait also change. Conclusions It is then possible to account rationally, and without misrepresenting Mendel, for: the origin of dominant and recessive traits; the occurrence of Mendel's 3(dominant):1(recessive) trait ratio; deviations from this ratio; the absence of dominant and recessive traits in some circumstances, the occurrence of a blending of traits in others; the frequent occurrence of pleiotropy and epistasis. | 1. Background The currently favoured explanation for the origin of Mendel's dominant and recessive traits is untenable [ 1 ]. The primary error in this current attempted explanation is the assumption that there is a direct, proportional, relationship in a diploid cell between a series of allegedly dominant and recessive alleles written as ( AA + 2 Aa + aa ) and the dominant, hybrid and recessive traits written as ( AA + 2 Aa + aa ). This assumption (Figure 2 , in reference [ 1 ]) incorporates four fundamental faults: Figure 2 Accounting for Mendel's observation of a 3(dominant):1(recessive) trait ratio in his F2 populations of plants. Mendel's notations for a dominant trait, a hybrid and a recessive trait were ( A ), ( Aa ) and ( a ) respectively. For reasons given in the preceding paper [1], a hybrid trait is represented in Figure 2 by ( H ). The molecular components of all traits are synthesised by a metabolic pathway. When the activity of any one enzyme in a metabolic pathway is changed in discrete steps, the flux to a trait component responds in non-linear (non-additive) fashion [3]. If the flux response is quasi-hyperbolic, as shown here, the hybrid trait ( H ) will be indistinguishable from the trait ( A ) expressed in the wild-type cell or organism, even when the enzyme activity in the hybrid ( H ) has been reduced to 50% of the wild-type activity. Trait ( a ), will be distinguishable from both traits ( A ) and ( H ) only if the enzyme activity is further reduced to a sufficient extent. Under these circumstances the trait series ( A + 2 H + a ) becomes (3 A + a ); Mendel's 3(dominant):1(recessive) trait ratio is accounted for without introducing arbitrary and inconsistent arguments [1]. (i) A failure to distinguish between the parameters and the variables of any system of interacting components, specifically between the determinants (alleles in modern terminology) and what is determined (the form of the trait or characteristic expressed in a cell or organism). Thus, because Mendel defined the terms dominant and recessive for traits or characters , it was illegitimate (and illogical) to call alleles dominant or recessive, and to represent them by the same letters used by Mendel to represent traits [ 1 ]. 
(ii) A trait series written as ( AA + 2 Aa + aa ) suggests, incorrectly, that dominant and recessive traits comprise two aliquots, ( A + A ) or ( a + a ), of dominance or recessivity. (iii) A failure to take account of the long established fact that the first non-nucleotide product of the expression of an allele is a polypeptide and that most polypeptides are enzymes or membrane-located translocators. (iv) A failure to note that the components of all tangible traits comprised the molecular products of metabolic pathways, i.e., the products of sequences of enzyme-catalysed reactions. Correction of the first two of these four faults has already been achieved (section 4 in reference [ 1 ]) by writing an allele series as ( UU + 2 Uu + uu ) and the corresponding trait series as ( A + 2 H + a ). In these statements ( U ) and ( u ) are normal and mutant (not dominant and recessive) alleles respectively. Mendel's notation ( A ) and ( a ) is used to represent dominant and recessive traits but ( H ) replaces Mendel's implausible notation ( Aa ) for a hybrid class of trait [ 1 ]. Mutations at another gene locus, in the same or a different cell, will be written as ( WW + Ww + ww ); the corresponding trait series will appear as ( B + 2 H + b ). Mendel's notation ( Aa ) for a hybrid trait will be used in this article only when referring directly to Mendel's paper [ 2 ]. 2. A rational explanation of Mendel's observations Our stated task was to explain logically how an allele series ( UU + 2 Uu + uu ) is expressed as a series of qualitatively distinguishable F2 traits ( A + 2 H + a ) when F1 hybrids ( H ) are allowed to self-fertilise [ 1 ]. This is very simply achieved by correcting faults (iii) and (iv) in four successive steps (sections 2.1–2.4) based on a paper published 23 years ago [ 3 ]. A fifth step (section 2.5) allows us to go beyond that paper to explain how the trait ratio 3(dominant):1(recessive) sometimes occurs and sometimes does not. A sixth step (section 2.6), consistent with the earlier ones, explains why dominance and recessivity are not always observed. Section 2.7 validates an earlier section. Section 2.8 accounts for some aspects not dealt with in textbooks and reviews of genetics. The treatment in this section 2 is extended in section 3 to account for quantitatively different traits, in section 4 to illustrate some implications of the present treatment, and in section 5 to account for pleiotropy and epistasis. Section 6 defines the conditions that must be met if a rational account is to be given for the occurrence of dominant and recessive traits. 2.1. A generalised metabolic system If: the first non-nucleotide product of expression of an allele is a polypeptide and most polypeptides are enzymes [ 3 , 4 ], it follows that most mutations at any one gene locus will result in the formation of a mutant enzyme at a catalytic locus in a metabolic pathway. This is true even if the functioning enzyme is composed of more than one polypeptide, each specified by different genes. It then follows that we need to ask how the concentration of a normal molecular component of a trait will be affected by a mutation of any one enzyme within a metabolic system . In short, a systemic approach, outlined below, is obligatory. 
This is the key to an understanding of the origin of dominant and recessive traits, as first pointed out in the following two sentences: "When as geneticists, we consider substitutions of alleles at a locus, as biochemists, we consider alterations in catalytic parameters at an enzyme step. - -. The effect on the phenotype of altering the genetic specification of a single enzyme - - - is unpredictable from a knowledge of events at that step alone and must involve the response of the system to alterations of single enzymes when they are embedded in the matrix of all other enzymes ." ([ 3 ]; p.641). 2.2 Metabolic systems and steady states Metabolic processes are facilitated by a succession of catalysed steps; i.e. by enzyme-catalysed transformations of substrates to products or by carrier-catalysed translocation of metabolites across membranes. Because enzymes and membrane-located carriers (or porters) are saturable catalysts that exhibit similar kinetics it is convenient in this article to refer only to enzymes and to represent both kinds of catalysts by the letter E . Any segment of a sequence of enzyme-catalysed reactions can then be written as shown in Figure 1 . Figure 1 A segment of a model metabolic pathway. This diagram shows those features, discussed in the text, that permit a systemic analysis of the response of any variable of a metabolic system (e.g. a flux J or the concentration of any intracellular metabolite S) to changes in any one parameter of the system (e.g. an enzyme activity). Each S is an intracellular metabolite; each X is an extracellular metabolite. In a diploid cell, every E stands for a pair of enzymes (allozymes), each specified by one of the two alleles at a gene locus. Each E is then a locus of catalytic activity within a system of enzymes; each v stands for the individual reaction rates catalysed jointly by a pair of allozymes in a diploid cell. Either or both allozymes at such a locus may be mutated. There are ten important features of any such system. (1) Each enzyme, E 1 to E 6 , is embedded within a metabolic pathway, i.e. within a system of enzymes. (2) All components of this system except the external metabolites X 0 and X 6 are enclosed by a membrane. (3) E 1 and E 6 may then represent membrane-located enzymes or translocators. (4) X 0 and X 6 interact with only one enzyme, whereas each internal metabolite ( S 1 , S 2 , S 3 , S 4 , S 5 ) interacts with two flanking enzymes. (5) In a haploid cell there will be one specimen of an enzyme molecule ( E ) at each catalytic locus. In a diploid cell there will be two specimens of enzyme molecules (two allozymes) at each catalytic locus: one specified by the maternal allele, the other by the paternal allele, at the corresponding gene locus or loci. The effective catalytic activity at each metabolic locus in a diploid will be, in the simplest case, the sum of the two individual activities. It is the single effective enzyme activity ( v ) at each catalytic locus that concerns us here, irrespective of whether the cell is haploid, diploid or polyploid. (6) The catalytic activity ( v ) at any one metabolic locus can be left at its current value or changed to and maintained at a new value by the experimentalist, e.g. by suitable genetic manipulation of an allele. Each allele in these circumstances is therefore an internal parameter of the system; it is accessible to modification by the direct and sole intervention of the experimentalist [ 1 ]. 
(7) Because X0 and X6 are external to the system in Figure 1, their concentrations can be fixed, and maintained at a chosen value, by the direct intervention of the experimentalist; they are external parameters of the metabolic system.

(8) In contrast to X0 and X6, the concentrations of metabolites S1 to S5 within the system cannot be fixed and maintained at any desired value solely by the direct intervention of the experimentalist. The concentrations of S1 to S5 are internal variables of the system. (If a fixed amount of any one of these metabolites were to be injected through the membrane into the system, continued metabolism would ensure that the new intracellular metabolite concentration could not be maintained.)

(9) By the same arguments, each reaction rate (v) and the flux (J) through the system are also variables of the system.

(10) The magnitude of each variable of the system is determined at all times by the magnitudes of all the parameters of the system and of its immediate environment. The variables comprise the concentrations (s1, s2, s3, s4, s5) of the intracellular metabolites shown in Figure 1 and any other intracellular metabolites; the individual reaction rates v1, v2, v3, v4, v5, v6; and the flux J through this system of enzyme-catalysed steps.

It follows that, provided we maintain the concentrations of X0 and X6 constant, the system depicted (Figure 1) will, in time, come to a steady state such that: v1 = v2 = v3 = v4 = v5 = v6 = J (the flux through this system). At the same time the concentration of each intracellular metabolite S1 to S5 will settle to an individual steady value.

2.3. The response of the system variables to a change in any one system parameter

In a metabolic system, the product of any one enzyme-catalysed reaction is the substrate for the immediately adjacent downstream enzyme (Figure 1). If, for any reason, the concentration of the common intermediate metabolite of two adjacent enzymes is changed (for example by mutation of one of the two adjacent enzymes), the concentration of the other adjacent enzyme will not change but its activity will change in accordance with the known response of an enzyme activity (at constant enzyme concentration) to a change in the concentration of its substrate or product. In other words, no matter how complicated that system may be, the activity of any one enzyme depends, at all times, on the activity of the adjacent enzyme; and this is true for every pair of adjacent enzymes throughout the system (up to the point in the system where a terminal product is formed). [This last statement is obviously still true for the system in Figure 1 if we omit the words in parentheses, but only because the extracellular product X6 is a terminal product. X6 is not an intermediate metabolite, flanked by two adjacent enzymes; it is not a substrate that is further metabolised by the system depicted. There are instances where an intracellular terminal product is formed. We must therefore add the words in parentheses if the statement is to apply generally.]
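The behaviour just described can be checked with a minimal numerical sketch. The kinetics, rate constants and equilibrium constant below are illustrative assumptions of mine, not a model taken from reference [3]: six reversible first-order steps link X0, S1 to S5 and X6, with X0 and X6 held constant as external parameters. The six rates settle to one common value J, and mutating a single enzyme shifts every rate, the flux, and every metabolite concentration to a new steady state.

```python
# A minimal numerical sketch of sections 2.2-2.3 (toy kinetics; the rate law,
# rate constants and equilibrium constant are illustrative assumptions).
# Six reversible first-order steps v_i = e_i*(pool_{i-1} - pool_i/KEQ) link
# X0, S1..S5, X6; X0 and X6 are held constant (external parameters).

KEQ = 2.0           # assumed equilibrium constant, identical for every step
X0, X6 = 10.0, 1.0  # fixed external metabolite concentrations

def steady_state(e, steps=200_000, dt=0.001):
    """Euler-integrate dS_i/dt = v_i - v_{i+1} until the system settles."""
    s = [0.0] * 5                       # S1..S5 start empty
    for _ in range(steps):
        pools = [X0] + s + [X6]
        v = [e[i] * (pools[i] - pools[i + 1] / KEQ) for i in range(6)]
        for i in range(5):
            s[i] += dt * (v[i] - v[i + 1])
    pools = [X0] + s + [X6]
    v = [e[i] * (pools[i] - pools[i + 1] / KEQ) for i in range(6)]
    return v, s

v, s = steady_state([1.0] * 6)
print("wild type:  v =", [round(x, 3) for x in v])  # v1 = ... = v6 = J

v, s = steady_state([1.0, 0.5, 1.0, 1.0, 1.0, 1.0])  # mutate E2 to 50%
print("E2 at 50%:  v =", [round(x, 3) for x in v])   # a new, lower, common J
print("            s =", [round(x, 3) for x in s])   # every S has shifted
```

The common value of the six rates is the flux J of the text. Note that halving E2 lowers J by only about 20% in this toy system, a first hint of the buffering examined next.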
A finite change (by mutation) in any one allele at a locus will change the activity (v) of one enzyme at the corresponding metabolic locus; but, for reasons just stated in the first paragraph of this section 2.3, the activity (v) of each of the other enzymes will alter, the flux (J) will change, and the concentrations of all the metabolites (S1 to S5) will also change, some more than others, until the system settles to a new steady state. Thus, finite changes in the magnitude of any one of the internal or external parameters of the system will shift the original values of all the variables of the system to a new set of steady-state values. But, provided the external parameters X0 and X6 are kept constant, we can be sure that a change in any one selected internal parameter (an allele or an enzyme) would be the sole cause of any changes in the system variables. In short, we are obliged to adopt a whole-system (a systemic) approach if we want to understand how the flux to a trait component responds to a change in any one internal or external parameter of the system, no matter how that change in a parameter value is brought about. We are here concerned with changes in any one internal parameter, such as a mutation in one or both alleles of a diploid cell.

Suppose the activity of any one of the enzymes E1 to E6 in Figure 1 were to be changed stepwise (e.g. by a series of mutations of one or both alleles at a locus in a diploid) so that the residual activity of the enzyme was decreased in successive steps to, say, 75%, 60%, 45%, 25%, 0% of its initial activity. How would the flux (flow) through the whole series of enzymes vary; i.e. how would the flux (to a trait component) respond, and how would the concentration of that molecular component of a trait respond, when any one enzyme activity was changed by mutation in a series of finite steps? It was shown, by experiment, that graded changes in the activity of any one of four different enzymes in the arginine pathway resulted in a non-linear (quasi-hyperbolic) response of the flux to arginine in constructed heterokaryons of Neurospora crassa ([3], Figures 1a–1d). Similar non-linear (non-additive) flux responses were observed when a series of mutations occurred in a single enzyme in four other metabolic pathways in four different diploid or polyploid systems ([3], Figures 1e–1h). Similar flux responses were observed during genetic down-modulation of any one of five enzymes involved in tryptophan synthesis in Saccharomyces cerevisiae [5]. The same quasi-hyperbolic response of a defined flux to a series of graded changes in one enzyme activity was observed in a haploid cell [6]. We can therefore dismiss the possibility that these non-linear responses (of a flux-to-a-trait-component) were restricted to the systems investigated by Kacser and Burns [3] or were in some way related to the ploidy of the cells and organisms they studied. On the contrary, the various flux responses are a fundamental biochemical property of the fluxing metabolic system. It does not matter how the graded changes in activity of any one enzyme are brought about. Mutation is one way but not the only one. Graded replacement of a defective gene that expressed the chloride translocator in the cystic fibrosis mouse produced continuously non-additive responses of various functions associated with chloride transport, including the duration of survival of the mouse [7].
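The shape of these responses can be reproduced with an even simpler toy model (my construction; the enzyme count and activities are assumptions). For an unbranched chain of reversible first-order steps v_i = e_i(S_{i-1} - S_i), with the first substrate fixed at 1 and the terminal product held at 0, the steady-state flux is exactly J = 1/Σ(1/e_i); the response of J to one activity e, the others fixed, is therefore the hyperbola J = 1/(C + 1/e):

```python
# A toy linear-chain model (illustrative assumptions, not the published data):
# for v_i = e_i*(S_{i-1} - S_i) with the boundary pools fixed at 1 and 0, the
# steady-state flux is J = 1 / sum(1/e_i).  Varying one enzyme activity e
# while the others stay at 1.0 gives the hyperbola J = 1/(C + 1/e); at e = 0
# the flux is zero (a complete metabolic block).
C = 9.0  # assumed combined 1/activity of the nine unmutated enzymes

def relative_flux(e, C=C):
    """Flux as a fraction of the wild-type flux (focal enzyme at e = 1)."""
    return (C + 1.0) / (C + 1.0 / e)

for pct in (600, 200, 100, 75, 60, 50, 45, 25, 10, 5, 1):
    print(f"enzyme at {pct:3d}% -> flux at "
          f"{100 * relative_flux(pct / 100):5.1f}% of wild type")
```

In this sketch, halving the one enzyme costs about 9% of the flux, and a six-fold increase gains about the same, while cutting the activity to 1% collapses the flux to under 10% of wild type: the asymmetric, quasi-hyperbolic behaviour of Figure 2. The same arithmetic foreshadows section 4: raising the dose of one gene barely raises the flux unless the doses at several loci are raised together.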
Induced synthesis of graded concentrations of a single membrane-located enzyme resulted in continuously non-linear changes in growth rate, glucose oxidation, and the uptake and phosphorylation of α-methyl glucose by Escherichia coli cells [8]. Stepwise decreases in cytochrome c oxidase activity (by titrating rat muscle mitochondria with an enzyme-specific inhibitor) had little effect on respiration until the enzyme activity was decreased to about 25% of normal; further decreases in this one enzyme activity caused a precipitous, continuously non-linear, decrease in mitochondrial respiration [9]. Other examples of non-linear (non-additive) responses of a defined flux to a change in activity of one enzyme in a metabolising system have been recorded [10], [[11], Figures 6.2, 6.3, 6.4, 6.6, 6.7, 6.8]. The results of these various "genetic" and "biochemical" experiments illustrate the generality of the statement by Kacser and Burns [3] quoted in section 2.1 of this article.

Figure 6. Biochemistry and genetics merged thirty years ago. The symbol indicates the catalysed translocation of an extracellular substrate or substrates (X3) and the subsequent intracellular catalysed transformations, including scavenging pathways, that form nucleoside triphosphate (NTP) precursors for the transcription process. Similarly, indicates the catalysed translocation of the extracellular substrates (X2) and the subsequent synthesis from (X2), and other intracellular substrates, of the amino acid (AA) precursors for the translation process. The enzymes subsumed as ETs and ETl are involved in the final stages of the expression (transcription and translation) of genes g1, g2, g3, g4, etc. as polypeptides (P1, P2, P3, P4, etc.). In diploid cells a pair of proteins will be synthesised from each pair of alleles at a gene locus. Those pairs of polypeptides (proteins) that are catalytically active in a diploid cell are represented by the single symbols E1, E2, E3, E4, etc. in this Figure 6. Further details are given in Section 5.5.

2.4. A rational explanation for the origin of dominant and recessive traits

How did the observations of non-linear responses of individual fluxes to graded changes in any one enzyme activity lead to a rational explanation for the origin of Mendel's dominant and recessive trait classes [2]? For reasons already given, we cannot arrive at the answers to this question by relying on the illogical and illegitimate idea that alleles are themselves dominant or recessive. Such entities have never existed and do not now exist. Alleles can only be normal or abnormal (i.e. normal or mutant). If the ploidy of the cell cannot explain the non-additive response of a flux to mutations in an allele, it is equally certain that naming alleles as dominant or recessive will not provide the explanation [1]. We need to focus attention on the universally observed non-linear (often quasi-hyperbolic) responses of the flux-to-a-trait-component (and the concomitant change in concentration of that component) when the activity of any one enzyme, within a metabolic system of enzymes, is changed (decreased or increased), in stages, by any means available (including down-modulation by mutation and up-modulation by increasing the gene dose). In this Section 2.4, and in Sections 2.5–2.7, consideration of the role of allele pairs (uu, uU, UU) in determining the outcome of mutations or changes in gene dose is set aside; this role will be considered in Section 2.8.
For the moment, attention is focussed on what can be learned from the non-linear response of a flux – to the molecular component(s) of a trait – when the activity of one enzyme in a metabolic system is changed in graded steps by mutation or by changes in gene dose. Figures 1a–1d in Reference [3] showed that the flux to the normal trait component (arginine), and thus the concentration of arginine, was not significantly diminished before any one of four enzyme activities was decreased by more than 50%. In Figures 1b and 1d the enzyme activity was decreased to about 15% of normal activity in Neurospora crassa before any significant diminution in the flux to arginine (and in the concentration of arginine) was detectable [3]; any further diminution of either enzyme activity caused a continuous but precipitous fall in the production of this trait component. Similar characteristics were displayed by a diploid (Figure 1h in Reference [3]). Figure 2 represents these observations. Flux response plots with these characteristics are quasi-hyperbolic and asymmetric in the sense that, over low ranges of enzyme activity, the flux (and the metabolite concentrations in that fluxing pathway) responds markedly to small increases or decreases in enzyme activity; on the other hand, over high ranges of enzyme activity, substantial changes in activity have a small effect, if any, on the flux to a trait component and on the concentrations of the molecular components of a defined trait. A change in any "flux-to-trait-component" implies a change in the concentrations of those metabolic products that typify a defined trait.

It was shown that a dominant trait (A) corresponded to the normal (100%) activity of the enzyme that was subsequently mutated to give lower activities [3]; i.e., the plotting co-ordinate (wild-type enzyme activity versus trait A) defined the terminus of the asymptote of the flux response plot depicted in Figure 2. A hybrid (H) must then correspond to any point on the asymptote of Figure 2 that would not allow us (and would not have allowed Mendel) to distinguish an F1 hybrid (H) from its parent that displayed a dominant trait (A). A recessive (a) must then correspond to any point on the steeply falling part of the flux-response plot (Figure 2) that would allow us (or would have allowed Mendel) to distinguish the dominant trait (A) and the hybrid (H) from the recessive trait (a), e.g. dominant trait red flowers and hybrid red flowers from the recessive trait white flowers [1]. Note especially that a recessive trait would not necessarily correspond to zero flux (a complete metabolic block and a complete absence of the normal, downstream, metabolic products) in Figure 2.

The paper by Kacser and Burns [3] thus explains, for the first time in 115 years, how recessive traits arise from a sufficient decrease, by mutation, in one enzyme activity when that enzyme is embedded in a metabolic system. The explanation depends on recognising that when graded changes occur by mutation (in one, both or all of the allozymes at any one metabolic locus in biochemical pathways) there will be a non-linear response of the flux to the molecular component(s) of a defined trait, and concurrently a non-linear response of the concentrations of the normal molecular components of a trait (section 2.3).
Section 2.9 in reference [1] showed that it was difficult to understand how Mendel's recessive traits (a) were displayed in 1/4 of his F2 population of plants (A + 2Aa + a) when these same recessive traits were not displayed in Mendel's hybrids (Aa). We have replaced Mendel's implausible idea that his F1 hybrids (Aa) displayed only trait (A). We have substituted the plausible idea – based on experimental evidence [3] – that, under certain conditions, the F1 hybrid trait (H) is indistinguishable from trait (A). In the treatment advocated here, there is no problem in understanding how 1/4 of the individual plants in the F2 population of genetically related plants (A + 2H + a) displayed the recessive trait (a). We can now also see why Mendel emphasised the need to study crosses between parental plants that displayed readily distinguishable trait forms, e.g. red flowers (A) in one parent and white flowers (a) in the other [1]. Figure 2 shows that this distinction would be possible only if the activity of one enzyme in the dominant-trait plant was sufficiently diminished in the recessive-trait plant. Note too that trait dominance and trait recessivity are not independent phenomena (nor are they opposite, one to the other). We cannot define a dominant trait except as an alternative to a recessive trait; both traits must be observable before we can identify either of them. The statements in these last two sentences were obvious in Mendel's original paper [2] but they have been inexplicably overlooked by many later authors.

2.5. Mendel's 3(dominant):1(recessive) trait ratio occurs sometimes, not always

Does this explanation for the origin of dominant and recessive traits also account for the occurrence of Mendel's 3(dominant):1(recessive) trait ratio? The answer is yes. Does it also explain why this ratio is not always observed? The answer is again yes (although the original authors [3] did not pose or answer these two questions). If the flux response plot is sufficiently asymmetric (approaches a hyperbolic plot, as in Figure 2), the concentration of molecular components of a defined trait will not be measurably different (when the activity of one enzyme is decreased by, say, 50%) from the concentrations of those same molecular components when the enzyme activity was 100%. If the trait displayed by the hybrid (H) is indistinguishable from the trait (A), as in Figure 2, the trait distribution in the F2 population (A + 2H + a) becomes 3(A) + (a); i.e. the trait ratio in this population will be 3(dominant):1(recessive). This explanation for the occurrence of the 3:1 trait ratio in Mendel's, or any other, F2 population of cells or organisms depends entirely on an experimentally observed, sufficiently asymmetric, response of the flux (to the molecular components of a defined trait) when changes occur in enzyme activity at any one metabolic locus in a fluxing biochemical pathway (Figure 1). It does not depend on the naïve and illegitimate assumption that alleles are either dominant or recessive (Sections 3.2, 3.3, 4 in Reference [1]). Figure 2 illustrates one of a family of regularly non-linear (non-additive) response plots which exhibit various degrees of asymmetry [3]. Is the flux response always sufficiently asymmetric for the 3:1 trait ratio to be observed? It is not; a simulation of both outcomes follows.
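Both answers can be made concrete with a small Monte Carlo sketch. All the numbers below are illustrative assumptions of mine, not Mendel's data: the flux model is the toy hyperbola of section 2.3, (u) is taken as a null allele, and an F2 plant is scored as displaying the dominant trait whenever its flux is within 10% of the wild-type flux, i.e. indistinguishable by inspection. A long pathway (strongly asymmetric response, Figure 2) yields the 3:1 ratio; a short pathway (the situation of Figure 3, discussed next) does not:

```python
# A Monte Carlo sketch (illustrative assumptions, not Mendel's data) of how a
# sufficiently asymmetric flux response yields the 3:1 F2 trait ratio, and how
# a less asymmetric response fails to.  Flux model as in the previous sketch:
# J = 1/(C + 1/e) for e > 0, J = 0 for e = 0, where C is the summed
# 1/activity of the unmutated enzymes.
import random

random.seed(1)

def flux(e, C):
    return 0.0 if e == 0.0 else 1.0 / (C + 1.0 / e)

def f2_ratio(C, n=10_000, tolerance=0.10):
    """Self Uu x Uu; score a plant 'dominant' if its flux is within
    `tolerance` of the wild-type flux (indistinguishable by eye)."""
    activity = {"UU": 1.0, "Uu": 0.5, "uu": 0.0}   # u assumed null here
    J_wt = flux(1.0, C)
    dominant = 0
    for _ in range(n):
        pair = "".join(sorted(random.choice("Uu") + random.choice("Uu")))
        if flux(activity[pair], C) >= (1 - tolerance) * J_wt:
            dominant += 1
    return dominant / (n - dominant)

print("long pathway  (C = 19): dominant:recessive =", round(f2_ratio(19), 2), ": 1")
print("short pathway (C = 1):  dominant:recessive =", round(f2_ratio(1), 2), ": 1")
```

With the long pathway the heterozygote sits on the asymptote (its flux is about 95% of wild type) and the ratio comes out close to 3:1; with the short pathway the heterozygote's flux falls to about two-thirds of wild type, the hybrid is visibly different, and the 3:1 ratio disappears.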
A flux response was observed in one particular (diploid) metabolic system (Reference [3], Figure 1f) that was still clearly non-linear (non-additive) but not as asymmetric as that shown in Figure 2. As in Figure 2, so in Figure 3, a recessive trait (b) can be clearly distinguished from the dominant trait (B) because the concentrations of the molecular components of this trait are sufficiently different when one enzyme activity in the metabolic system is decreased to a sufficient extent. The trait displayed by the hybrid (H) is now distinguishable (rather than indistinguishable) from the dominant trait (B) expressed in a genetically related normal cell or organism when, as in Figure 2, the enzyme activity is decreased to an arbitrarily chosen 50% of the normal activity. The 3(dominant):1(recessive) trait ratio will not then be observed (Figure 3). A blend of traits (B) and (b) is possible in the hybrid (H), for example when traits (B) and (b) are distinguished by colour differences.

Figure 3. Mendel's 3(dominant):1(recessive) trait ratio does not always occur. Mendel's notations for a dominant trait, a hybrid and a recessive trait were (B), (Bb) and (b) respectively. For reasons given in the preceding paper [1], the hybrid is represented in Figure 3 by (H). When graded changes are made in any one enzyme in a metabolic pathway the response of the flux through that pathway is always non-linear (non-additive) but not always quasi-hyperbolic (Figure 2). Consequently, when the enzyme activity at one metabolic locus is decreased in the heterozygote to (say) 50% of wild-type, the trait displayed by the hybrid (H) is now distinguishable from the trait (B) displayed by the wild-type cell or organism and from the trait (b) displayed by the homozygously mutant cell or organism. Mendel's 3(dominant):1(recessive) trait ratio will not be observed. The explanation is consistent with the explanation for the observation of the 3:1 trait ratio in Figure 2 and achieves what the currently favoured explanation of Mendel's observations cannot achieve [1].

2.6. Dominant and recessive traits are not always observed

It is well known that dominance and recessivity are not universally observed. Are they therefore of no significance? Some authors have been tempted to think so. Their view is understandable because, before the work of Kacser and Burns [3], we lacked any credible explanation for the occurrence of dominant and recessive traits. Can we now see why dominance and recessivity are not always observed? The answer is again yes. Examination of Figures 2 and 3 shows that it will be possible to observe dominant and recessive traits in genetically related organisms only when the enzyme activity at a metabolic locus is decreased from 100% to an activity approaching, but not necessarily reaching, 0%. When the response plot is of the kind shown in Figure 2, it would be possible to decrease the expressed enzyme activity at a metabolic locus by at least 75%, perhaps by 85%, without eliciting any detectable change in trait from that displayed by the wild-type or normal organism. In other words, some mutations will not, apparently, display Mendelian dominance and recessivity (dominant and recessive traits). Only if the effective enzyme activity is decreased by at least 95% in this instance (Figure 2) would clear dominance and recessivity be noted. This is an extreme case; Figure 3 illustrates the other extreme.
Between these extremes, various degrees of asymmetry of flux response plots may be observed (Figure 1 in Reference [3]). Nevertheless, unless: (i) the change in enzyme activity is measured, (ii) it is realised that there is a non-additive relationship between a change in any one enzyme activity at a metabolic locus and a change in expressed trait, and (iii) the shape of the flux response plot (Figure 2, Figure 3) is revealed by plotting, it is simply not possible to state that the system under investigation does or does not display Mendelian dominance and recessivity. Terms such as semi-dominance merely indicate that the flux response plot is not quite asymmetric enough to be sure that a 50% reduction in enzyme activity produces a trait that is indistinguishable from the dominant trait.

2.7. Is the Kacser & Burns treatment universally applicable?

The change in the concentrations of normal metabolites has been treated in the present article as the source of a change in trait. This accords with the treatment in Figure 1 of reference [3]. Allowance should, however, be made for the possibility that the change in concentration of a metabolite is, in reality, a change in the concentration of a "signalling" metabolite (e.g. an allosteric activator or inhibitor of another enzyme in the pathway that generated the "signalling" metabolite, or in another pathway). Such mechanisms merely shift the cause of the change in metabolite concentration to another part of the matrix of intracellular metabolic pathways. In other words, the Kacser and Burns approach remains a valid explanation for the origin of dominant and recessive traits.

2.8. Accounting for all the plotting points in Figures 2 and 3

In Figure 2, the relative enzyme activities (100, 50, 0) would be expressed from the series of allele pairs UU, Uu, uu in a diploid cell (Section 1) only if the mutant allele (u) was expressed as a catalytically inactive polypeptide. The same considerations apply to the relative enzyme activities expressed from the allele pairs WW, Ww, ww in Figure 3. It is obvious that the continuously non-linear response plots (Figures 2, 3; and References [3-10]) could not be constructed if these three allele pairs were the only ones available to express a corresponding series of enzyme activities. Figure 1 in Reference [3] showed that more than three distinct enzyme activities were observed in experimental practice in any one system. It is easy to see how relative enzyme activities other than 0, 50, 100 could be observed in a polyploid or heterokaryon (Figures 1a–1e in Reference [3]). To account for the occurrence in a diploid of relative enzyme activities in addition to those taking values of 0, 50, 100 (in Figures 2 and 3, and in Figures 1f–1h of Reference [3]), we need to allow for allele pairs in addition to the three (UU, Uu, uu or WW, Ww, ww) in which the mutant alleles (u or w) express a catalytically inactive polypeptide. The restriction to just three allele pairs in a diploid may be traced to Sutton [1]. He wrote Mendel's F2 trait series (A + 2Aa + a), incorrectly, as (AA + 2Aa + aa) and the number of distinguishable chromosome pairs as (AA + 2Aa + aa), so establishing a false one-for-one relationship between pairs of chromosomes (AA or aa) and dominant or recessive traits (AA or aa). Sutton's notation for chromosome pairs was later transferred to allele pairs.
In this article, dominant and recessive traits are represented, following Mendel, by (A) and (a) respectively; alleles have been represented by different letters (e.g. UU, Uu, uu) in order to distinguish alleles (parameters) from traits (variables). We should allow for the situation where (U†) is a mutant of (U) that would express an allozyme activity lower than that expressed from (U) but not so low as that expressed from (u); and where (u*) would be a mutant of (U) that expresses an allozyme activity greater than that expressed by (u) = 0 in the traditional treatment but not so great as to merit the notation (U). The outcome of different hypothetical crosses that involve different mutations of one or both alleles at a given locus in genetically related diploid parents would then be as follows (a mechanical enumeration of these crosses is sketched after this list):

(1) Repeated crosses (Uu × Uu) would give, on average, the allele series (UU + 2Uu + uu), thus permitting expression of no more than three distinctive enzyme activities at the corresponding metabolic locus.

(2) The cross (Uu* × Uu) would give the allele series (UU + Uu + Uu* + uu*) in which two of the allele pairs differ from those in the progeny of the first cross; and in which three different heterozygotes are formed.

(3) The cross (U†u × Uu) would give the allele series (UU† + Uu + U†u + uu) in which only one allele pair in the progeny populations is identical with one of the allele pairs in the progeny from the second cross.

(4) The cross (UU† × Uu) would give, on average, the allele series (UU† + UU + Uu + U†u), which has only two allele pairs in common with the progeny of the third of these crosses of genetically related parents.

(5) The cross (U†u × Uu*) would give, on average, the allele series (UU† + U†u* + Uu + uu*).

In the second and fourth crosses it was assumed that the two heterozygous parents did possess exactly the same normal allele (U) at this particular locus so, among their progeny, the allele pair (UU) occurred. Analogously, among the progeny from the third cross, the allele pair (uu) occurred. But, importantly, in each of crosses (2), (3) and (4) three different heterozygotes occurred in each progeny population (a heterozygote is defined in a diploid by the occurrence of allele pairs other than those represented here by UU or uu). The allele pairs in the heterozygotes in any one progeny population of these crosses (2), (3) and (4) are not all identical with those in the progeny of another of these crosses. The parents in the fifth cross did not share an identical allele; no two alleles of a pair are then identical in the progeny. The allele pair (Uu) occurs in all of the progeny of these five crosses but only because one of the two parents carried this allele pair or because one parent carried allele (U) and the other carried allele (u). Cross (1) typifies events in self-fertilising organisms but is not typical of sexual reproduction in other organisms (cf. Figure 2 in reference [1]). Male and female parents that are identically heterozygous at any locus must be rare. Crosses (2)–(5) between two heterozygous parents will produce, under the circumstances noted above, truly homozygous allele pairs (such as UU and uu), but they will also produce, on average, three different heterozygotes among their progeny (four heterozygotes in the fifth cross).
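The bookkeeping in these five crosses is easy to verify mechanically. In the sketch below, 'U+' stands in for (U†) purely because of plain-text constraints; the enumeration itself simply pairs each gamete of one parent with each gamete of the other and counts the resulting allele pairs:

```python
# Enumerate the progeny allele pairs of the five hypothetical crosses of
# section 2.8.  'U' is the normal allele, 'U+' a mildly defective mutant
# (U-dagger in the text), 'u*' a partially active mutant, 'u' a null.
from collections import Counter
from itertools import product

def cross(parent1, parent2):
    """Count the four equally likely gamete combinations of a cross."""
    return Counter("".join(sorted(g)) for g in product(parent1, parent2))

crosses = [
    (("U", "u"),  ("U", "u")),    # (1) Uu  x Uu
    (("U", "u*"), ("U", "u")),    # (2) Uu* x Uu
    (("U+", "u"), ("U", "u")),    # (3) U+u x Uu
    (("U", "U+"), ("U", "u")),    # (4) UU+ x Uu
    (("U+", "u"), ("U", "u*")),   # (5) U+u x Uu*
]
for i, (p1, p2) in enumerate(crosses, 1):
    tally = cross(p1, p2)
    series = " + ".join(f"{n}{pair}" if n > 1 else pair
                        for pair, n in sorted(tally.items()))
    print(f"cross ({i}): {''.join(p1)} x {''.join(p2)} -> {series}")
```

Running this reproduces the five allele series listed above, including the (UU + 2Uu + uu) average for the self-fertilising cross (1).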
The consequences are then as follows. From each locus in a diploid cell that expresses catalytic polypeptides, allozymes (pairs of enzymes) will be expressed: one from the gamete donated by the male parent, the other from the gamete donated by the female parent. For simplicity, it will be assumed here that the combined allozyme activity at each catalytic locus in the metabolic pathways of the cell is the sum of the activities of the two allozymes at each such locus. The traditional allele series (UU + 2Uu + uu) in a diploid will then generate the enzyme series (EE + 2Ee + ee) at one metabolic locus in different, genetically related, individuals. This enzyme series provides two extreme combined allozyme activities, namely 100% (EE) and 0% (ee). There are no allele pairs at this locus that could provide <0% or >100% enzyme activity. All other allele pairs, e.g. (UU†), (U†u), (U†u*), (Uu*), (uu*), would provide combined allozyme activities that lie between the 100% and 0% values just described. Only if (u) happens to be a null mutant will the heterozygote (Uu) express a single enzyme activity (v) equal to 50% of the maximum available from (UU). Only in this circumstance will the allele pair (uu) express two inactive polypeptides; the enzyme activity will then be zero at a metabolic locus and a "metabolic block" will occur at that locus.

Assembling the data from, for example, the second and third of the five hypothetical crosses between the genetically related parents described above gives an allele series (UU, UU†, U†u, Uu, Uu*, uu*, uu). These allele pairs would contribute seven different allozyme pairs (EE, EE†, E†e, Ee, Ee*, ee*, ee) at one metabolic locus and seven different, single, enzyme activities (v), one from each pair of allozymes. Given a range of enzyme activities in excess of the traditional three, a sufficient number of co-ordinates will be available to establish a continuously non-additive plot of the response of one defined flux (J) against changes in enzyme activity (v) at one metabolic locus in genetically related cells or organisms (Figures 2, 3). There is no guarantee that all of these mutants will be generated in every case but, since (U†) and (u*) each represent only one of several possible mutations of allele (U), we may be reasonably confident of observing traits expressed from allele pairs in addition to, or instead of, those expressed from the two traditional mutant pairs (Uu) and (uu). Assembling sets of enzyme activity and flux (or metabolite concentration) data from the progeny of different but genetically related parents then creates the non-linear flux response plots illustrated in Figures 2 and 3. All plotting points in the idealised Figures 2 and 3 should be regarded as tokens for the experimental plots published earlier [3].

This simple explanation for the occurrence of more than three co-ordinates for a plot of flux response against changes in enzyme activity (or gene dose) means that it is no longer acceptable to base arguments and conclusions on the assumed presence of only one heterozygote (Uu) in a diploid allele series at a locus, and on only one corresponding hybrid trait. Furthermore, statements that all heterozygotes express 50% (and only 50%) of the phenotype expressed from the homozygous wild-type are based on the false idea that the mutant allele (u) always produces a totally inactive enzyme.
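To see how these seven allele pairs yield seven distinct activities, assign each single allele an illustrative activity and sum the pair. The percentages below are assumptions of mine, chosen only to respect the orderings stated above (U > U† > u* > u = 0, with each normal allele contributing half of the 100% wild-type activity):

```python
# Combined allozyme activities for the seven allele pairs of section 2.8.
# Single-allele activities are illustrative assumptions ('U+' = U-dagger).
allele_activity = {"U": 50.0, "U+": 35.0, "u*": 15.0, "u": 0.0}

pairs = [("U", "U"), ("U", "U+"), ("U+", "u"), ("U", "u"),
         ("U", "u*"), ("u", "u*"), ("u", "u")]
for a, b in pairs:
    v = allele_activity[a] + allele_activity[b]
    print(f"{a + b:<5} combined activity v = {v:5.1f}% of wild type")
```

Seven pairs give seven distinct activities, all between 0% and 100%; the heterozygote (Uu) lands on exactly 50% only because (u) was assumed null. Feeding such a series of activities into a flux response plot like Figure 2 supplies the extra plotting co-ordinates discussed above.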
Figures 1a–1h of Reference [3] depended upon the availability of 5, 6 or 7 plotting points relating the flux response to experimentally determined changes in enzyme activity (effectively to changes in allele constitution at a locus). In addition to the traditional heterozygote (Uu), there must be a number of heterozygotes (e.g. UU†, U†u, Uu*, uu*), and a corresponding range of enzyme activities (v), that account for the response of a flux (J) to a change in enzyme activity at one metabolic locus (Figures 1, 2, 3). In Figure 2, some of these additional heterozygotes will establish the asymptote of the flux response plot. The trait expressed from any such heterozygote would be indistinguishable from the trait expressed from the normal allele pair (UU); they could have accounted for the occurrence of Mendel's hybrids (Aa) which appeared to display only the dominant trait (A). This is further evidence that the traditional treatment of elementary Mendelian genetics is inadequate and misleading [1].

3. Quantifiable differences between any two forms of a trait

Differences in traits are generally and usefully described by qualitative terms: hirsute/bald; red flowers/white flowers; lithe/obese; muscular/"skinny"; slow/fleet; albino/black. Such descriptive terms do, however, disguise the obvious fact that these apparently qualitative differences in outward appearance are based on quantitative differences in the concentrations of molecular products that contribute to the outward appearance or function of a cell or organism. These comments apply to the apparently qualitative differences examined by Mendel (Table 1 in reference [1]) and to those trait forms typified by a trait series (A + 2H + a), where (A) indicates the dominant trait form, (a) the recessive trait form and (H) a hybrid trait that may be indistinguishable (Figure 2) from the dominant trait (A) or distinguishable (Figure 3) from the dominant trait (B). It should not therefore be supposed that the paper by Kacser and Burns [3] provided an explanation only for the occurrence of qualitative differences between any two traits. On the contrary, a continuously variable response of each of several defined fluxes was brought about when mutations of alleles at one locus changed the activity of one enzyme in a metabolic pathway (or when changes in gene dose changed the concentration and thus the activity of one enzyme in a metabolic pathway). The flux responses were labelled "Flux to arginine", "Flux to biomass", "Flux to melanin", "Flux to products", "Flux to DNA repair" (Figure 1 in reference [3]). The molecular compositions of "arginine", "biomass", "melanin", and "products" (of ethanol metabolism) were not changed. Their concentrations were changed as graded mutations at a gene locus caused graded changes in one enzyme activity in those pathways that created arginine, biomass, melanin, or the products (of ethanol metabolism). Similarly, a change in the "flux to DNA repair" was achieved by graded increases in the dose of the gene specifying the synthesis of the "repair enzyme" that excises covalently linked adjacent thymines in DNA and allows incorporation of thymidine in place of the excised pyrimidines. This "repair enzyme" activity is absent in Xeroderma pigmentosum patients. Additional examples of quantitative changes in the concentration of molecular components of a trait will be found in other publications [5-11].
None of these changes provides any justification for representing a trait by twinned letters, e.g. (AA) or (aa). The single letters (A) and (a) stood for qualitative differences in trait form in Mendel's work; they stand equally well for quantitative changes in a trait in modern work. The non-linear response plots of Kacser and Burns [3] apply to quantitative and to apparently qualitative changes in the phenotype that arise from mutations of any one enzyme at a metabolic locus in a biochemical pathway.

4. Implications of the systemic approach of Kacser and Burns [3]

Figure 2 shows the response of the phenotype to changes in enzyme activity at a metabolic locus or to changes in gene dose at the corresponding gene locus. It follows, if the response plot takes this form, that increasing the dose of this particular gene in a wild-type haploid cell (or the dose of the normal homozygous alleles in a wild-type diploid or polyploid cell) is unlikely to produce a detectable change in the phenotype (e.g. an increase in the concentration of the trait component produced by a metabolic pathway; or a change in cell function associated with that pathway). It was demonstrated that it was necessary, under these circumstances, to increase concurrently the gene dose at each of no fewer than five loci if significant increases in the flux (and in the concentration of metabolic product) were to be achieved [5]. The systemic approach to a rational explanation of the origins of dominant and recessive traits [3] has obvious implications for biotechnologists. Figure 2 (representing several plots in Reference [3]) also suggests that somatic recessive conditions (in contrast to so-called dominant conditions) could be ameliorated by partial gene replacement therapy. Experiments in the cystic fibrosis mouse model support this suggestion [7]; they show that the systemic approach to the origins of dominant and recessive traits has implications for medical genetics. It was pointed out (section 2.6) that substantial decreases in the dose of normal alleles at any one locus (or in the enzyme activity at the corresponding metabolic locus) may not elicit detectable changes in the trait(s) of the cell. In other words, given a response plot approximating to that shown in Figure 2, traits – including associated cell functions – are inherently buffered against substantial increases or decreases in the dose of any one gene, or against substantial changes in enzyme activity at the corresponding metabolic locus. This appears to be the probable origin of the so-called "robustness" or buffering of chemotaxis against changes in enzyme kinetic constants [12-15]. This proposed explanation for metabolic buffering is quite general; it does not depend on the particular kinetic mechanisms that have been suggested to account for this buffering [12]; it also suggests that there is no need to postulate the presence of diagnostic "biological circuits" as the source of this buffering of the phenotype against mutations at a single locus. Attempts to improve the concentration of metabolic products by increasing the gene dose at one locus above that available in the wild-type or normal cell could be successful, at least to some self-limiting extent, if a response plot like Figure 3 applies. Induced synthesis of one membrane-located enzyme activity to between 20% and 600% of wild-type activity illustrates the possibility [8].
In this instance, plots like Figure 3 applied only to changes in the uptake and phosphorylation of α-methyl glucoside; changes in growth rates and glucose oxidation gave response plots like Figure 2. The explanation for the difference may lie in the suggestion [3] that shorter pathways will yield response plots like Figure 3, while the longer the pathway, the more likely it is that markedly asymmetric plots like Figure 2 will be observed.

5. Expansions of the present treatment

5.1. Why mutating one enzyme in a metabolic pathway may alter more than one trait; and mutating more than one enzyme may annul these changes in more than one trait

If the explanation for the origin of dominant and recessive traits depends on realising that fluxing metabolic pathways generate the molecular components of all traits, and that mutating any one enzyme in these pathways alters the flux and the concentrations of those normal metabolic products that are molecular components of a trait, other genetic phenomena could perhaps also be explained. Only two of the thirteen texts surveyed [1] gave a definition, in their glossaries, of pleiotropy and epistasis. Both agreed that pleiotropy was a phenomenon where a change at one gene locus brought about a change in more than one trait. Both attributed epistasis to an interaction between genes or their alleles. Neither of these descriptions of pleiotropy and epistasis is particularly revealing. The following account, like those preceding it, does not depend on the fiction that all mutations generate inactive enzymes.

Figure 1 is elaborated as shown in Figure 4. One pathway, like that shown in Figure 1, is now coupled to another analogous pathway by the conserved metabolite pair (p, q). The sum of the concentrations of (p) and (q) is constant (is conserved) but the ratio of the two concentrations (p/q) is a free variable. All the characteristics of the metabolic system in Figure 1 (Section 2) apply to each of the two fluxing pathways in Figure 4. Claims in the biochemical literature in the past that changes in the ratio (p/q) controlled metabolic fluxes were and remain untenable; one variable of a system cannot be said to control another variable of the system.

Figure 4. Accounting for the occurrence of pleiotropy. One unbranched pathway is coupled to another by a conserved metabolite pair p and q. Such coupling is not uncommon in cellular systems and is one source of pleiotropy. Mutation of any one enzyme in one pathway will affect both fluxes (Ja and Jb) to a trait component and the concentrations of those trait components. See also Figure 5.

Figure 4, like Figure 1, illustrates the need to adopt a systemic approach in attempts to understand the responses of a metabolising system to changes in any enzyme activity brought about by mutation. Figure 1 may also be elaborated as shown in Figure 5. An input flux from X1 to S4 divides into two output fluxes [16]. Of the input flux, a proportion (α) enters one of the two output fluxes (Ja) and a proportion (1-α) enters the other output flux (Jb). The magnitude of (α) is determined by the magnitudes of the activities of all the enzymes of the metabolic system; (α) is a systemic characteristic [17]. Again, all the characteristics of the model metabolic system in Figure 1 (Section 2) apply to each of the two pathways that generate fluxes Ja and Jb shown in Figure 5.

Figure 5. Accounting for the occurrence of pleiotropy and epistasis.
Mutation of any one of enzymes E2, E3, E4 would affect both fluxes Ja and Jb to separate trait components. Mutation of any one of enzymes E5a, E6a, etc. would decrease flux Ja to a trait component but increase Jb to another trait component; the concentrations of trait components in pathway Ja would decrease, those in pathway Jb would increase. Epistasis would occur if a subsequent mutation occurred in any one of enzymes E5b, E6b, etc. A branched metabolic pathway is thus a potential source of pleiotropy and epistasis; see the text for further discussion. This diagram, like that in Figure 4, emphasises the importance of adopting a systemic approach in understanding the potential effect, on a trait or traits, of a mutation in any one enzyme in enzyme-catalysed systems.

5.2. The origin of pleiotropy explained

It will be obvious that a mutation of any one enzyme in either of the two pathways of Figure 4 will cause changes in the fluxes through both of the coupled pathways (and in the concentrations of metabolites in both pathways). Similarly, a mutation in any one enzyme of the input flux of Figure 5 will affect the concentrations of metabolites in both output fluxes Ja and Jb. Pleiotropy (a change in more than one trait as a consequence of a single mutation), when it is detected, is thus seen to depend on mutating an enzyme within a metabolic pathway, on the consequential changes in metabolite concentrations, and on the structure and interdependence of biochemical pathways. Only if one of the enzymes in the input pathway shows zero activity will both output fluxes (Ja and Jb) cease (Figure 5).

5.3. The origin of epistasis explained

Given a steady input flux from X1 to S4 (Figure 5), a mutation of one of the enzymes (E5a, E6a or any other enzyme in this output limb) would decrease flux Ja and increase flux Jb. The concentrations of metabolites in pathway Ja would decrease and those in pathway Jb would increase, a further example of a pleiotropic response to a single mutation. But suppose that, following the mutation of E5a, a mutation occurred in E6b or any other enzyme in this alternative output limb. Clearly, the effect of the first mutation on the cell characteristics would be at least partly nullified by the second mutation – a phenomenon known as epistasis and sometimes attributed in genetic texts to an interaction between genes but shown here to depend on mutations of one or more enzymes, and on the structure and interdependence of metabolic pathways. Only if the activity of one of the enzymes in one of the two output pathways is diminished to zero by mutation will the products of that output limb downstream from the mutation be lost. If the fluxes proceeded in the opposite direction to that shown in Figure 5 (so that two pathways merged into one), mutation of an enzyme in one of the input fluxes followed by a mutation of an enzyme in the other input pathway could again elicit epistatic responses in the system.

5.4. Are pleiotropy and epistasis always detectable?

Particular but common metabolic structures (Figures 4, 5) provide the potential for pleiotropy and epistasis; i.e. changes in concentrations of normal metabolites when an enzyme is mutated within a metabolic pathway. Whether pleiotropy or epistasis is detected, or not, will depend on the severity of the mutation and on the nature of the flux response plots (Figures 2, 3) as demonstrated in section 2; a numerical sketch of the branched case follows.
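A toy branched system makes both phenomena visible. The kinetics and numbers below are my own illustrative assumptions, chosen only for transparency (cf. Figure 5): one input step v_in = e_in(X1 - S) feeds a branch-point metabolite S, which two output limbs drain as Ja = e_a·S and Jb = e_b·S, so the steady state is solvable by hand:

```python
# A toy branched pathway (illustrative kinetics, not a published model):
# one input step feeds a branch-point metabolite S, drained by two limbs:
#   v_in = e_in*(X1 - S),  Ja = e_a*S,  Jb = e_b*S.
def branch_fluxes(e_in=1.0, e_a=1.0, e_b=1.0, x1=10.0):
    s = e_in * x1 / (e_in + e_a + e_b)   # steady-state branch-point level
    return e_a * s, e_b * s

ja, jb = branch_fluxes()
print(f"wild type:               Ja = {ja:.2f}, Jb = {jb:.2f}")

# Pleiotropy: one mutation (E_a down to 20%) lowers Ja and *raises* Jb,
# changing two trait components at once.
ja, jb = branch_fluxes(e_a=0.2)
print(f"E_a at 20%:              Ja = {ja:.2f}, Jb = {jb:.2f}")

# Epistasis: a second mutation in the other limb (E_b down to 20%) largely
# restores the original partition of flux, masking the first mutation.
ja, jb = branch_fluxes(e_a=0.2, e_b=0.2)
print(f"E_a and E_b both at 20%: Ja = {ja:.2f}, Jb = {jb:.2f}")
```

The second mutation restores the 50:50 partition (α returns to 1/2) even though the total flux falls; whether any of these changes is detectable as a change in trait depends, as before, on where they sit on the flux response plot.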
5.5. Biochemistry and genetics are not separable topics

Beadle and Tatum [18] isolated a series of mutants of Neurospora crassa and tested their ability to grow on basal medium or on basal medium supplemented with different metabolites or cofactors. Wild-type Neurospora crassa grew on basal medium. Different isolated mutants would grow only if the basal medium was supplemented with the specific product of an enzyme rendered partially or fully inactive in one of the mutants. These brilliant observations led to the paradigm "one gene, one function" [19,20], later to "one gene, one enzyme". These observations [18] made explicit what was implied by the observations of Garrod [21-24] on inborn errors of metabolism, namely: metabolism is catalysed by a sequence (or system) of different enzymes; and a sufficient decrease (by mutation) in the activity of any one enzyme may cause a change in the trait(s) or characteristic(s) of the system (e.g. the ability to grow, to accumulate cell mass [18]).

Beadle [20] expressed surprise that Garrod's work had received so little attention. He wrote: "It is a fact both of interest and historical importance that for many years Garrod's book had little influence on genetics. It was widely known and cited by biochemists, and many geneticists in the first two decades of the century knew of it and the cases so beautifully described in it. Yet in the standard textbooks written in the twenties and thirties - - - - few mention its cases or even give a reference to it. I have often wondered why this was so. I suppose most geneticists were not yet inclined to think of hereditary traits in chemical terms. Certainly, biochemists with a few notable exceptions such as the Onslows, Gortner and Haldane were not keenly aware of the intimate way in which genes direct the reactions of living systems that were the subject of their science."

This lack of attention to the implications of Garrod's work is all the more surprising when it is recalled that Bateson [[25], p.133] pointed out that alkaptonuria (a change in concentration of the normal metabolite, homogentisic acid, and one of Garrod's inborn errors of metabolism) was an example of a Mendelian recessive trait or character; see also [[26], p.19]. In other words, some important aspects of genetics depended on recognising the role of changes in an enzyme activity, within a metabolic system, in effecting a change in a trait. The aphorism "one gene, one enzyme" was refined to "one allele, one polypeptide" after the elucidation of the structure of DNA [27,28] and the rapid advances made in the next 10 or 15 years in elucidating the mechanisms of expression of diploid alleles as pairs of polypeptides or proteins [29-32], most of which are enzymes [3,4]. These more recent discoveries (Figure 6) emphasise what was implied by the work of Beadle and Tatum [18]: the molecular components of dominant and recessive traits or characteristics, in all biological forms, are generated by fluxing metabolic pathways catalysed by sequences or systems of enzymes. Dominant and recessive traits are not the direct product of the expression of alleles as suggested by the currently favoured explanation of Mendel's observations (Figure 2 in Reference [1]); they are produced indirectly by a system of enzymes (Figures 1, 4, 5, 6).
Figure 6 depicts the direct relationship between any one gene (g1, g2, g3, g4) and the synthesis of individual polypeptides (P1, P2, P3, P4), most of which, but not all, are enzymes (E1, E2, E3, E4). All polypeptides, catalytic and non-catalytic, are synthesised in this way. X1, X2 and X3 in Figure 6 are immediately identified as extracellular parameters of a cell system. X3 stands for those substrates that lead, through a series of enzyme-catalysed reactions, to the synthesis of nucleoside triphosphates (NTPs) and their subsequent incorporation into mRNA. Note that mRNA is a terminal product of this pathway. It is a coding entity, a proxy for DNA. Each mRNA specifies the order of incorporation of individual amino acids into a polypeptide, but no individual mRNA molecule participates as a substrate in the subsequent steps of the catalysed formation of a polypeptide. The control of the overall expression of a gene as a polypeptide is therefore necessarily treated in Metabolic Control Analysis as a cascade of two fluxing metabolic pathways, one that starts at X3, the other that starts at X2 [33]. X2 stands for those extracellular substrates that lead, through a series of enzyme-catalysed reactions, to the synthesis de novo of amino acids (AAs) and their subsequent incorporation, along with any existing amino acids, into a polypeptide (P).

In a haploid cell, one polypeptide is synthesised from each gene locus. In a diploid, one polypeptide is synthesised from each of two alleles at a gene locus. If these pairs of polypeptides are catalytically active, each enzyme in a diploid cell (E1, E2, E3, etc.) consists of a pair of allozymes, one of each pair specified by the allele derived from the male parent, the other specified by the allele derived from the female parent. Each pair of allozymes, whether normal or mutated, exhibits only one measurable activity (v) at a catalytic locus in a metabolic pathway. If the pairs of polypeptides (P) synthesised by a diploid cell are not catalytically active they will not, of course, play a direct role in catalysing a metabolic pathway. They may have other important functions (e.g. as hormones) and may be components of traits.

X1 stands for all those initial extracellular substrates feeding the matrix of inter-dependent biochemical pathways that typify all functioning cells. It is these pathways that generate the non-protein, non-polyribonucleotide, molecular products of all cell traits. Each of these three major fluxing pathways (Figure 6) is catalysed by a succession of enzyme-catalysed reactions as shown in Figure 1. The flux through any one of these pathways will respond to a mutation of any one enzyme in the pathway as shown in Figures 2 and 3; any change in these fluxes could change the concentrations of the intermediate metabolites or the final product (section 2.3); but, provided mutations do not alter the specificity of an enzyme, they will not change the existing molecular structure or composition of these metabolites. Most attention is concentrated on the pathway initiated by X1 for the simple reason that this pathway stands for the whole matrix of interdependent biochemical fluxes that generate such a wide range of the non-protein (and non-polyribonucleotide) molecular components of cell traits (e.g. skin pigments, membrane lipids, chlorophyll, xanthocyanins, non-peptide hormones, neural transmitters, chitin, serum cholesterol, peptidoglycans, etc.).
If any one of the three major pathways shown in Figure 6 is coupled to another pathway (Figure 4) or contains a branch (Figure 5), there will be potentially detectable pleiotropic and epistatic responses to mutations of any of the pathway enzymes (section 5.3). Such pathway coupling and branching is a common feature of the pathways that start with one of the extracellular substrates typified by X1. Even if the implications of the work of Beadle and Tatum [18] were not fully realised at the time, Figure 6 might have suggested that a fresh approach to an understanding of the origins of dominant and recessive traits was needed. The currently favoured explanation for Mendel's findings ([1], Figure 2) does not take account of the biochemical pathways of the synthesis of enzymes (Figure 6) established 30–40 years ago, does not acknowledge that the molecular components of all traits are synthesised by systems of enzymes, does not take account of the change in concentration of molecular components of traits when any one enzyme is mutated, and fails to distinguish the system parameters (alleles) from the system variables (traits).

Note that changes in the concentrations of external metabolites (whether they are substrates like X1, X2, X3 in Figure 6, or extracellular inhibitors or activators of intracellular enzymes) may effect changes in intracellular metabolism and consequently modify the effects of a mutation. This topic is not immediately relevant in the present article but is a notable feature of Metabolic Control Analysis. Descriptions of the role of the Combined Response Coefficient (R) in permitting extracellular effectors to modulate intracellular metabolism (and thus the effects of a mutation) will be found elsewhere [11,34-36].

If pleiotropic and epistatic responses to a mutation are as common as is suggested (sections 5.1–5.4), the question then arises: how do we account for Mendelian segregation of traits during sexual reproduction? The answer lies in the fact that a mutation at a biochemical locus, within the matrix of interdependent pathways, has its most obvious effect on the most closely associated pathways. Distant pathways (on the scale of cellular dimensions) will be less obviously affected. Kacser and Burns (Reference [3], p.649) pointed out that "This apparent independence of most characters makes simple Mendelian genetics possible, but conceals the fact that there is universal pleiotropy. All characters should be viewed as 'quantitative' since, in principle, variation anywhere in the genome affects every character." Section 3 in the present article emphasised the importance of quantitative changes in cell traits. The considerations in this paragraph are germane to the apparent absence of a detectable change of phenotype in some so-called 'knock-out' experiments.

6. Conditions that must be met to explain dominance and recessivity

The explanation advocated in this article for the origins of dominant and recessive traits from normal and mutant alleles in a diploid is based on:

(i) An obligatory distinction, by notation and nomenclature, between the variables (traits) and the parameters (alleles and enzymes) of genetic/biochemical systems.

(ii) The contention that the molecular components of all traits are the products of fluxing metabolic systems (Figures 1, 4, 5, 6).
(iii) Experimental evidence for an inevitable non-linear response of a flux (through a metabolic system of enzymes) to graded changes in the activity of any one of those enzymes [3], evidence that is supported by a number of independent observations [5-11].

(iv) A demonstration that dominant and recessive traits arise from changes in the concentration of the normal molecular components of a defined trait.

(v) The argument that changes in concentration of a trait component may nevertheless be revealed as a qualitative change in that trait.

(vi) A demonstration that both alleles (normal or mutant) at a locus in a diploid are generally expressed. If the normal allele expresses a catalytically active polypeptide, many mutants of this allele will express an enzyme with lower activity; a mutated enzyme with zero activity is an extreme case.

(vii) The demonstration that an explanation of Mendel's observations cannot be based on an allele series containing only three terms (e.g. uu, 2uU, UU), one of which is a unique heterozygote (uU).

(viii) A demonstration that dominant and recessive traits cannot be generated by those polypeptides that are not enzymes embedded in a system of enzymes.

(ix) Rejection of the unjustified traditional claim that a hybrid (H) expresses a dominant trait (A) because the (allegedly) recessive allele (u) in a heterozygote (Uu) is always completely ineffective or because the allegedly dominant allele (U) suppresses the allegedly recessive allele (u) in the heterozygote [1].

(x) Rejection of the traditional, unsubstantiated and implausible claim that one so-called dominant allele in a heterozygote is as effective as two such alleles in the wild-type cell [1].

It was also shown that pleiotropy and epistasis can be explained by taking a similar systemic approach to that used in explaining the origin of dominant and recessive traits. It is then apparent that, to account rationally for Mendel's observations of dominant and recessive traits, a minimum of four conditions must be met.

(i) Alleles must be distinguished by notation, nomenclature and concept from traits; functions of components of the genotype must be distinguished from properties of components of the phenotype. Traits alone may be dominant or recessive.

(ii) Alleles cannot be called "dominant" or "recessive". (When alleles are so called, the flaws present in the current attempts to explain Mendel's observations will inevitably re-appear [1].)

(iii) It must be shown how dominant traits become distinguishable from recessive traits in the same cell or organism (Figures 2, 3).

(iv) It must be shown how a hybrid trait sometimes becomes indistinguishable from the dominant trait and sometimes does not (Figures 2, 3). The first circumstance will account for Mendel's 3(dominant):1(recessive) trait ratio; the second for exceptions to this ratio.

If all four conditions are to be met, the first two must be met first. The treatment given in sections 2–5 meets each of these requirements.

7. Conclusions

Kacser and Burns [3] provided the basis for a rational explanation for the origin of dominant and recessive traits that arose from mutations of alleles at any one gene locus in a diploid or polyploid cell (sections 2.3, 2.4).
Inherent in this explanation, as set out above (sections 2.5, 2.6), are further explanations for the occurrence of the 3(dominant):1(recessive) trait ratio in some situations in a diploid (Figure 2 ), for the absence of this trait ratio from other situations (Figure 3 ), for the absence of dominant and recessive traits in yet others and for the appearance of a blend of parental traits in some heterozygotes. These five demonstrations are internally consistent. In contrast to the currently favoured attempt to explain Mendel's results [ 1 ], no arbitrary assumptions are introduced (section 2.8) to explain how heterozygous allele pairs (e.g. UU † , U † u , Uu *, uu* ) may produce a trait that is indistinguishable from the trait expressed from the "homozygous" allele pairs ( UU ). In other words, provided: (a) all current misrepresentations of Mendel's paper [ 1 ] are first discarded, (b) alleles are distinguished by notation and nomenclature from the traits they specify, (c) alleles are regarded as normal or mutant (but not dominant or recessive), it is possible to provide a rational and internally consistent explanation for the origin of Mendel's dominant and recessive traits, for the occurrence of his 3:1 trait ratio, and for exceptions to these observations noted by later investigators. The same systemic approach is applicable to current problems in biotechnology and medical genetics (section 4). It also explains the origins of pleiotropy and epistasis (section 5), and challenges the assumption that a mutation in a non-catalytic protein provides an example of Mendel's dominant and recessive traits [ 1 ]. Mendel found, by experiment, that the proportions of plant forms in each of his F2 populations were represented by ( A + 2 Aa + a ). In the present paper these proportions have been written as ( A + 2 H + a ). If the symbol ( H ) for a hybrid in Figure 2 is replaced mentally and temporarily by ( Aa ), it will be clear why Mendel postulated that his hybrids ( Aa ) displayed trait ( A ) and not trait ( a ). If the same exercise is repeated in Figure 3 by replacing ( H ) temporarily by ( Bb ), it will be clear why Mendel observed an anomalous blending of flower colours in the hybrids when he crossed parental bean plants bearing different flower colours. The treatment of elementary Mendelian genetics advocated here is based on the work of Kacser and Burns [ 3 ]. So far as the present author is aware, that paper has not been discussed in any student textbook of "classical" or "molecular" genetics published in the intervening 23 years. Orr [ 37 ] did not see the full significance of the Kacser and Burns paper [ 3 ]. Darden [ 38 , p. 72] declared that "(trying) to unravel the complex relations between mutant alleles and enzymes (Kacser and Burns, 1981) - - - is not a major research topic in genetics." Several possible reasons for this failure to see the merits of the Kacser and Burns paper [ 3 ] may be worth consideration. They include: (1) Persistent misrepresentations of Mendel's paper, and incorporation of these distortions into currently favoured explanations of Mendel's observations [ 1 ]. (2) A failure to recognise the consequences of not distinguishing between the function of the alleles and the properties of traits in attempting to explain Mendel's results. Normal and mutant alleles specify the kind (and order of incorporation) of amino acids into polypeptides (most but not all are enzymes).
Dominance and recessivity are a reflection of changes in the concentration(s) of the molecular component(s) of a trait when an enzyme is mutated within a fluxing metabolic pathway. (3) Tardy recognition of the need to adopt the systemic approach of Metabolic Control Analysis in explaining the response of the variables of a biological system to perturbations of the magnitude of any one system parameter. (4) A reluctance to accept a change in concepts even when currently accepted representations of Mendel's results are demonstrably untenable. (5) Elucidation of the double helical structure of DNA (Figure 6 ) and all that followed in the next 10–15 years imposed profound changes on genetics, but these changes were perhaps not always taken into account. (6) A determination in some quarters to regard genetics as an autonomous subject. It has been obvious at least since the work of Beadle and Tatum [ 18 ] that such claims cannot be sustained. Genetics is intimately related to, and in some respects dependent upon, biochemistry. The converse is equally true. Genetics and biochemistry are not separable topics in biology. It is significant that Kacser & Burns were also one of two sets of authors who initiated the systemic approach to the control of metabolite concentrations and fluxes [ 39 , 40 ]. This approach was elaborated by the original authors and many others. For some accounts and reviews, see [ 11 , 36 , 41 - 44 ]. 8. A correction In an earlier paper [ 45 ] it was stated that Mendel had inferred the presence of segregating particles. These particulate determinants were then represented by ( A ) and ( a ). These statements are here formally withdrawn. They were consistent with textbook treatments of Mendelian genetics [ 1 ] but a subsequent reading of Mendel's original paper revealed that these statements, and others that occur frequently in the recent reviews of Mendel's paper and in current textbooks, were incorrect and misleading. A history of the misunderstandings and misrepresentations that have sustained the currently favoured depiction of Mendelian genetics [ 1 ] will be presented elsewhere. A paper setting out the concepts of parameters and variables will also be submitted. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC517952.xml
534812 | An Evolutionary View of Tiger Conservation | null | Tyger! Tyger! burning bright In the forests of the night What immortal hand or eye Could frame thy fearful symmetry? When William Blake wrote these words in the late 1700s, the deforestation and habitat destruction that would decimate wild tiger populations had already begun. In 1900, an estimated 100,000 wild tigers lived throughout much of Asia, from India in the west to Sumatra and Indonesia in the south to Siberia in the east. Today, the ongoing stresses of habitat loss, hunting, and an illegal trade in tiger parts have spared fewer than 7,000 tigers. Of eight traditionally classified subspecies of Panthera tigris , three have gone extinct since the 1940s. Conservation strategies to combat this grinding attrition are tailored to each subspecies. But several lines of evidence suggest that subspecies designations—based on geographic range and morphological traits such as body size, skull traits, coat color, and striping patterns—may be flawed. An earlier molecular study of 28 tigers found little evidence of genetically distinct subspecies, while surveys of tiger habitat found few physical barriers sufficient for subspecies isolation. To get a clearer picture of the genetic structure of existing tiger populations, Shu-Jin Luo, Jae-Heup Kim, Stephen O'Brien, and nineteen colleagues performed a comprehensive genetic analysis of mitochondrial and nuclear genes from over 130 tigers. By identifying distinct patterns of variation within these gene families, the authors reconstructed the evolutionary distribution and ancestry of the tiger. Their results support many of the traditional subspecies designations and identify further subdivisions in others. A Bengal tiger in the tall grassland (Photo: Ullas Karanth, WCS-India) Luo et al. collected "voucher specimens" (taken from animals of verified wild ancestry and geographic origin) of blood, skin, and hair from 134 tigers representing the entire tiger range, and examined them, along with samples of preserved pelts and hair, for three molecular markers. The markers—a stretch of mitochondrial DNA (mtDNA) sequence, a gene with highly variable DNA sequence called DRB that's involved in pathogen recognition, and short repeating genetic elements called microsatellites—act as unique signposts that flag significant demographic and evolutionary events in the tiger populations. mtDNA sequences were extracted from tigers originating in the Russian far east (Siberian, or Amur, tigers), south China, northern Indochina, the Malaya Peninsula, Sumatra, and the Indian subcontinent. The mtDNA analysis identified 30 haplotypes—characteristic regions on a chromosome—that could be clustered. Some of the clusters supported traditional classifications—e.g., for the Sumatran ( P. t. sumatrae ) and Bengal ( P. t. tigris ) tigers—but others suggested that the Indochinese subspecies ( P. t. corbetti ) should be divided into two groups, representing a northern Indochinese and a peninsular Malaya population (which the authors designated respectively as P. t. corbetti and P. t. jacksoni , after the tiger conservationist Peter Jackson). Interestingly, clusters for the captive South China tigers also fell into two distinct lineages— P. t. amoyensis , the traditional grouping, and P. t. corbetti , though the designation is still tentative. These subdivisions were largely supported by the other genetic analyses. The distinct genetic patterns found in the tiger populations suggest six rather than five living subspecies.
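As a toy illustration of the clustering step described above (invented haplotypes, not the study's data or pipeline), mtDNA haplotypes can be grouped by counting pairwise nucleotide differences and applying hierarchical clustering:

```python
# Toy example (invented 10-bp haplotypes): cluster mtDNA haplotypes by
# pairwise nucleotide differences, the kind of grouping described above.
from itertools import combinations
from scipy.cluster.hierarchy import linkage, fcluster

haplotypes = {
    "hap1": "ACGTACGTAC",
    "hap2": "ACGTACGTAT",
    "hap3": "TCGAACGTGC",
    "hap4": "TCGAACGTGT",
}
names = list(haplotypes)
# Condensed distance matrix: mismatched positions for every pair.
dists = [sum(a != b for a, b in zip(haplotypes[x], haplotypes[y]))
         for x, y in combinations(names, 2)]
tree = linkage(dists, method="average")
clusters = fcluster(tree, t=2, criterion="maxclust")
print(dict(zip(names, clusters)))  # e.g. {'hap1': 1, 'hap2': 1, 'hap3': 2, 'hap4': 2}
```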
Reduced gene flow and genetic drift in isolated populations, as well as human activity, likely caused these partitions. The low genetic variability seen in the Siberian tigers, for example, might be explained by severe population declines: the animals were nearly exterminated in the early 1900s, and today only 500 remain. Sumatran tigers, on the other hand, show relatively high genetic variability and uniqueness, possibly reflecting a historically large breeding population that was later isolated. Whether recent population and habitat declines, as opposed to earlier events, can fully explain these patterns is not clear. But these results offer valuable data for conservation strategies and captive breeding programs that rely on distinctions in subspecies taxonomy and geographic provenance. Evoking both the darker side of creation and humanity, Blake could not have imagined the modern fate of his “Tyger.” Scholars have long debated the multilayered meaning of his poem, including the second stanza, which starts, In what distant deeps or skies Burnt the fire of thine eyes? Will we reduce future generations to a literal reading? | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC534812.xml |
555591 | Phylogenomic approaches to common problems encountered in the analysis of low copy repeats: The sulfotransferase 1A gene family example | Background Blocks of duplicated genomic DNA sequence longer than 1000 base pairs are known as low copy repeats (LCRs). Identified by their sequence similarity, LCRs are abundant in the human genome, and are interesting because they may represent recent adaptive events, or potential future adaptive opportunities within the human lineage. Sequence analysis tools are needed, however, to decide whether these interpretations are likely, whether a particular set of LCRs represents nearly neutral drift creating junk DNA, or whether the appearance of LCRs reflects assembly error. Here we investigate an LCR family containing the sulfotransferase (SULT) 1A genes involved in drug metabolism, cancer, hormone regulation, and neurotransmitter biology as a first step for defining the problems that those tools must manage. Results Sequence analysis here identified a fourth sulfotransferase gene, which may be transcriptionally active, located on human chromosome 16. Four regions of genomic sequence containing the four human SULT1A paralogs defined a new LCR family. The stem hominoid SULT1A progenitor locus was identified by comparative genomics involving complete human and rodent genomes, and a draft chimpanzee genome. SULT1A expansion in hominoid genomes was followed by positive selection acting on specific protein sites. This episode of adaptive evolution appears to be responsible for the dopamine sulfonation function of some SULT enzymes. Each of the conclusions that this bioinformatic analysis generated using data that has uncertain reliability (such as that from the chimpanzee genome sequencing project) has been confirmed experimentally or by a "finished" chromosome 16 assembly, both of which were published after the submission of this manuscript. Conclusion SULT1A genes expanded from one to four copies in hominoids during intra-chromosomal LCR duplications, including (apparently) one after the divergence of chimpanzees and humans. Thus, LCRs may provide a means for amplifying genes (and other genetic elements) that are adaptively useful. Being located on and among LCRs, however, could make the human SULT1A genes susceptible to further duplications or deletions resulting in 'genomic diseases' for some individuals. Pharmacogenomic studies of SULT1A single nucleotide polymorphisms, therefore, should also consider examining SULT1A copy number variability when searching for genotype-phenotype associations. The latest duplication is, however, only a substantiated hypothesis; an alternative explanation, disfavored by the majority of evidence, is that the duplication is an artifact of incorrect genome assembly. | Background Experimental and computational results estimate that 5–10% of the human genome has recently duplicated [ 1 - 4 ]. These estimates represent the total proportion of low-copy repeats (LCRs), which are defined as homologous blocks of sequence from two distinct genomic locations (non-allelic) >1000 base pairs in length. LCRs, which are also referred to in the literature as recent segmental duplications, may contain all of the various sequence elements, such as genes, pseudogenes, and high-copy repeats. A set of homologous LCRs makes up an LCR family. Non-allelic homologous recombination between members of an LCR family can cause chromosomal rearrangements with health-related consequences [ 5 - 7 ].
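Because the LCR definition above is operational (non-allelic homologous blocks longer than 1000 base pairs), it translates directly into a filter over alignment hits. The sketch below assumes a hypothetical record format for self-alignment results; it illustrates the definition rather than reimplementing the cited surveys.

```python
# Illustrative filter (hypothetical input format): keep self-alignment
# hits that satisfy the operational LCR definition given above.

MIN_LCR_LENGTH = 1000  # LCR definition: homologous block > 1000 bp

def is_lcr(hit):
    """A hit is a candidate LCR if it is long enough and non-allelic,
    i.e. its two aligned blocks come from distinct genomic locations.
    (Real pipelines would test for coordinate overlap, not equality.)"""
    long_enough = hit["length"] > MIN_LCR_LENGTH
    non_allelic = (hit["chrom_a"] != hit["chrom_b"]
                   or hit["start_a"] != hit["start_b"])
    return long_enough and non_allelic

hits = [  # made-up example records
    {"chrom_a": "chr16", "start_a": 29498152, "chrom_b": "chr16",
     "start_b": 30236110, "length": 146337},   # SULT1A4/1A3-like pair
    {"chrom_a": "chr16", "start_a": 500000, "chrom_b": "chr16",
     "start_b": 500000, "length": 5000},       # allelic self-match
    {"chrom_a": "chr2", "start_a": 100, "chrom_b": "chr7",
     "start_b": 900, "length": 400},           # too short
]
print([is_lcr(h) for h in hits])  # [True, False, False]
```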
While data are not yet available to understand the mechanistic basis of LCR duplication, mechanisms will emerge through the study of individual cases [ 8 ]. At the same time, the appearance of LCR duplicates may be an artifact arising from one of a number of problems in the assembly of a genome of interest. Especially when classical repetitive sequences are involved, it is conceivable that mistaken assembly of sequencing contigs might create in a draft sequence of a genome a repeat where none exists. In the post-genomic world, rules have not yet become accepted in the community to decide when the burden of proof favors one interpretation (a true repeat) over another (an artifact of assembly). Again, these rules will emerge over time through the study of individual cases. Through the assembly of many case studies, more general features of duplication and evolutionary processes that retain duplicates should emerge. Although each LCR family originates from one progenitor locus, no universal features explain why the particular current progenitor loci have been duplicated instead of other genomic regions. From an evolutionary perspective, duplicated material is central to creating new function, and to speciation. One intriguing hypothesis is that genes whose duplication and recruitment have been useful to meet current Darwinian challenges find themselves in regions of the chromosome that favor the generation of LCRs. Browsing a naturally organized database of biological sequences, we identified human cytosolic sulfotransferase (SULT) 1A as a recently expanded gene family with biomedically related functions. SULT1A enzymes conjugate sulfuryl groups to hydroxyl or amino groups on exogenous substrates (sulfonation), which typically facilitates elimination of the xenobiotic by the excretory system [ 9 ]. Sulfonation, however, also bioactivates certain pro-mutagenic and pro-carcinogenic molecules encountered in the diet and air, making it of interest to cancer epidemiologists [ 10 , 11 ]. These enzymes also function physiologically by sulfonating a range of endogenous molecules, such as steroid and thyroid hormones, neurotransmitters, bile salts, and cholesterol [ 9 ]. Three human SULT1A genes have been reported [ 12 , 13 ]. The human SULT1A1 and 1A2 enzymes are ~98% identical and recognize many different phenolic compounds such as p -nitrophenol and α-naphthol [ 14 - 19 ]. The human SULT1A3 enzyme is ~93% identical to SULT1A1 and 1A2, but preferentially recognizes dopamine and other catecholamines over other phenolic compounds [ 19 - 23 ]. High resolution crystal structures of SULT1A1 and 1A3 enzymes have been solved [ 24 - 26 ]. Amino acid differences that contribute to the phenolic and dopamine substrate preferences of the SULT1A1 and 1A3 enzymes, respectively, have been localized to the active site [ 27 - 30 ]. Polymorphic alleles of SULT1A1 , 1A2 , and 1A3 exist in the human population [ 31 - 33 ]. An allele known as SULT1A1*2 contains a non-synonymous polymorphism, displays only ~15% of wild type sulfonation activity in platelets, and is found in ~30% of individuals in some populations [ 31 ]. Numerous studies comparing SULT1A1 genotypes in cancer versus control cohorts demonstrate that the low-activity SULT1A1*2 allele is a cancer risk factor [ 34 - 36 ], although other studies have failed to find an association [ 12 ]. Ironically, the protection from carcinogens conferred by the high activity SULT1A1*1 allele is counterbalanced by risks associated with its activation of pro-carcinogens. 
For example, SULT1A enzymes bioactivate the pro-carcinogen 2-amino-α-carboline found in cooked food, cigarette smoke and diesel exhaust [ 37 ]. The sulfate conjugates of aromatic parent compounds convert to reactive electrophiles by losing electron-withdrawing sulfate groups. The resulting electrophilic cations form mutagenic DNA adducts leading to cancer. Recently, it has become widely understood that placing a complex biomolecular system within an evolutionary model helps generate hypotheses concerning function. This process has been termed "phylogenomics" [ 38 ]. Through our bioinformatic and phylogenomic efforts on the sulfotransferase 1A system, we detected a previously unidentified human gene that is very similar to SULT1A3 , transcriptionally active, and not found in the chimpanzee. In addition, we report that all four human SULT1A genes are located on LCRs in a region of chromosome 16 replete with other LCRs. A model of SULT1A gene family expansion in the hominoid lineage (humans and great apes) is presented, complete with date estimates of three preserved duplication events and identification of the progenitor locus. Positively selected protein sites were identified that might have been central in adapting the SULT1A3 and 1A4 enzymes to their role in sulfonating catecholamines such as dopamine and other structurally related drugs. Results and Discussion Four human SULT1A genes on chromosome 16 LCRs The human SULT1A1 and 1A2 genes are tandemly arranged 10 kilobase pairs (kbp) apart in the pericentromeric region of chromosome 16, while the SULT1A3 gene is located ~1.7 million base pairs (Mbp) away (Figure 1B and 1C ). In addition to the three known SULT1A genes, we found a fourth gene, SULT1A4 , by searching the human genome with the BLAST-like alignment tool [ 39 ]. SULT1A4 was located midway between the SULT1A1 / 1A2 gene cluster and the SULT1A3 gene (Figure 1B and 1C ). Figure 1 Genomic organization of the SULT1A LCR family. (A) 30 LCRs (red) aligned to the SULT1A3 LCR (blue). Core sequences of SULT1A and LCR16a families are shown between dashed lines. (B) Chromosome 16 positions of 29 SULT1A3-related LCRs. (C) Known genes, bacterial sequencing contigs, and LCRs (outlined in boxes) in three 350 kbp regions of chromosome 16. The SULT1A4 gene resided on 148 kbp of sequence that was highly identical to 148 kbp of sequence surrounding the SULT1A3 gene (Figure 1A and Table 1 ). The high sequence identity between the SULT1A3 and 1A4 genomic regions suggested that they were part of a low copy repeat (LCR) family. This suspicion was confirmed by mining the Recent Segmental Duplication Database of human LCR families [ 40 ]. In addition to the four-member SULT1A LCR family, the 148 kbp SULT1A3 LCR was related to 27 other LCRs (Figure 1A and Table 1 ). Many of the SULT1A3-related LCRs are members of the previously identified LCR16a family [ 41 , 42 ]. The SULT1A3-related LCRs mapping to chromosome 16 collectively amounted to 1.4 Mbp of sequence – or 1.5% of chromosome 16. 
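The 1.4 Mbp and 1.5% figures can be checked arithmetically from the Length column of Table 1 below (chromosome 16 entries only); the chromosome 16 size used here, roughly 90 Mbp, is an approximation for the assembly of that era.

```python
# Arithmetic check of the chromosome 16 LCR coverage quoted above, using
# the Length column of Table 1 (chr16 entries only; the chr18p LCR "A"
# is excluded). Chromosome 16 is taken as ~90 Mbp, an approximation.

chr16_lcr_lengths = [
    18949, 5580, 25489, 26687, 76642, 23842, 36813, 21995, 23310, 23325,
    20673, 70194, 6875, 104101, 101816, 108199, 24662, 39640, 8674, 7407,
    9768, 22306, 27773, 24349, 71728, 146337, 152241, 34568, 44931, 21289,
]
total_bp = sum(chr16_lcr_lengths)
print(f"{total_bp / 1e6:.2f} Mbp")                       # 1.33 Mbp
print(f"{100 * total_bp / 90e6:.1f}% of chromosome 16")  # ~1.5%
# The summed lengths come to ~1.33 Mbp, in line with the ~1.4 Mbp and
# 1.5% quoted in the text (the small difference presumably reflects
# rounding or inclusion criteria in the original tally).
```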
Table 1 SULT1A3-related LCRs

LCR Name*   Chromosome   Strand   Start       End         Length   % Identity†
A           chr18p       +        11605429    11633851    28422    97.8
B           chr16p       +        11985022    12003971    18949    97.6
C           chr16p       -        14747420    14753000    5580     94.8
D           chr16p       -        14766628    14792117    25489    96.5
E           chr16p       -        14805750    14832437    26687    96.6
F           chr16p       -        14996007    15072649    76642    96.9
G           chr16p       +        15161625    15185467    23842    95.6
H           chr16p       -        15417052    15453865    36813    95.9
I           chr16p       -        16394409    16416404    21995    96.4
J           chr16p       +        16437719    16461029    23310    96.5
K           chr16p       +        18371484    18394809    23325    96.4
L           chr16p       +        18414255    18434928    20673    96.4
M           chr16p       -        18834216    18904410    70194    96.5
N           chr16p       +        18962854    18969729    6875     95.2
O           chr16p       +        21376182    21480283    104101   97.8
P           chr16p       +        21808293    21910109    101816   98.3
Q           chr16p       -        22414809    22523008    108199   97.1
R           chr16p       +        28316465    28341127    24662    97.8
S           chr16p       +        28427424    28467064    39640    97.3
1A1         chr16p       +        28481970    28490644    8674     86.0
1A2         chr16p       +        28494950    28502357    7407     86.6
T           chr16p       -        28621035    28630803    9768     97.7
U           chr16p       -        28692200    28714506    22306    98.1
V           chr16p       -        28800873    28828646    27773    97.7
W           chr16p       -        29084138    29108487    24349    97.8
X           chr16p       +        29426409    29498137    71728    97.6
1A4         chr16p       +        29498152    29644489    146337   99.1
1A3         chr16p       +        30236110    30388351    152241   100
Y           chr16q       +        69784235    69818803    34568    96.2
Z           chr16q       +        70016088    70061019    44931    97.4
AA          chr16q       -        74188141    74209430    21289    97.4

*LCR names are as in Figure 1. †Percent identity is relative to the 1A3 LCR.

To determine if other genes in the SULT super family were also recently duplicated during LCR expansions, we searched the Segmental Duplication Database [ 4 ] for human reference genes located on LCRs. No other complete cytosolic SULT genes were located on LCRs, but 25% of the SULT2A1 open reading frame (ORF) was located on an LCR (Table 2 ).

Table 2 Duplication Status of SULT Genes

Accession   Gene                Chromosome   ORF Length   ORF Duplicated
NM_001055   SULT1A1, phenol     chr16        895          895
NM_001054   SULT1A2, phenol     chr16        895          895
NM_003166   SULT1A3, dopamine   chr16        895          895
NM_014465   SULT1B1             chr4         804          0
NM_001056   SULT1C1             chr2         898          0
NM_006588   SULT1C2             chr2         916          0
NM_005420   SULT1E1             chr4         892          0
NM_003167   SULT2A1, DHEA       chr19        864          210
NM_004605   SULT2B1             chr19        1059         0
NM_014351   SULT4A1             chr22        862          0

The steroid sulfatase gene, which encodes an enzyme that removes sulfate groups from the same biomolecules recognized and sulfonated by SULT enzymes, is frequently deleted in patients with scaly skin (X-linked ichthyosis) due to nonallelic homologous recombination between LCRs on chromosome X [ 43 , 44 ]. As demonstrated by the X-linked ichthyosis example, SULT1A copy number or activity in the human population could be modified – with health-related consequences – by nonallelic homologous recombination between LCRs on chromosome 16. SULT1A4 : genomic and transcriptional evidence The sequence of the SULT1A4 gene region from the human reference genome was so similar to that of the SULT1A3 region (>99% identity) that the differences were near those that might arise from sequencing error or allelic variation. It was conceivable, therefore, that some combination of sequence error, allelic variation, and/or faulty genome construction generated the appearance of a SULT1A4 gene that does not actually exist. We therefore searched for additional evidence that the SULT1A4 gene was real. We asked whether any evidence was consistent with the hypothesis of an artificial SULT1A4 LCR from erroneous genome assembly, as opposed to the existence of a true duplicate region. Here, the quality of the genomic sequencing is important.
The junction regions at the ends of the SULT1A4 LCR were sufficiently covered; at least nine sequencing contigs overlapped either junction boundary (Figure 1C ). This amount of evidence has been used in other studies to judge the genomic placement of LCRs [ 45 ]. As another line of evidence, we compared the nucleotide sequences of the SULT1A4 and 1A3 genomic regions (Table 3 ). Among the 876 coding positions the only difference was at position 105, where SULT1A4 possessed adenine (A) and SULT1A3 possessed guanine (G). Thus, if two genes do exist, they differ by one silent transition at the third position of codon 35. The untranslated regions, however, contained thirteen nucleotide differences while the introns contained seven additional differences (Table 3 ). These 21 differences between the SULT1A4 and 1A3 genomic regions disfavor the hypothesis that sequencing errors played a role in the correct/incorrect placement of these LCRs.

Table 3 SULT1A4 and SULT1A3 Genomic Region Differences

Location*   Nucleotide   SULT1A4 Region   SULT1A3 Region
5' UTR      -6,246       G                C
5' UTR      -6,118       C                T
5' UTR      -6,007       G                C
5' UTR      -5,246       -                T
5' UTR      -4,433       -                T
Intron 1B   -2,775       C                T
Intron 1B   -2,671       -                T
Intron 1B   -2,670       -                T
Intron 1B   -2,594       T                G
Intron 1A   -91          -                A
Exon 2      +105         A                G
Intron 4    +853         -                A
Intron 4    +1,487       A                G
Exon 8      +3,569       -                A
Exon 8      +3,570       -                A
Exon 8      +3,571       -                T
Exon 8      +3,572       -                T
3' UTR      +5,379       G                C
3' UTR      +6,438       C                -
3' UTR      +6,335       C                -
3' UTR      +6,210       C                -

* 21 alignment positions are shown where the nucleotide/gapping (-) of the SULT1A4 region differed from that of the SULT1A3 region. Exon and intron names of the SULT1A3 gene are according to [33]. All nucleotides are numbered relative to the first nucleotide of the start codon, which has a value of +1. There was no position 'zero'. The last nucleotide of the coding sequence occurs at position +3,188. Approximately 3 kb of upstream (5' UTR) and downstream (3' UTR) nucleotides were included in the comparison.

The SULT1A4 gene was located near the junction of two LCRs (Figure 1C ). For this reason, it was not clear whether SULT1A4 had a functional promoter. We took a bioinformatic approach to address this question. Expressed sequences ascribed to SULT1A3 were downloaded from the NCBI UniGene website [ 46 ]. Each sequence was aligned to SULT1A3 and SULT1A4 genomic regions. Based on the A/G polymorphism at the third position of codon 35, five expressed sequences were assigned to SULT1A4 and nine to SULT1A3 (Table 4 ). Other expressed sequences were unclassified because they did not overlap codon 35. If the SULT1A4 gene does exist, there is ample evidence from expressed sequences to make conclusions about its transcriptional activity.

Table 4 Evidence of SULT1A4 Expression

Accession            Gene*     Tissue†                          Pos. 105
[Genbank:CB147451]   SULT1A4   liver                            A
[Genbank:BF087636]   SULT1A4   head-neck                        A
[Genbank:W76361]     SULT1A4   fetal heart                      A
[Genbank:W81033]     SULT1A4   fetal heart                      A
[Genbank:BC014471]   SULT1A4   pancreas, epitheliod carcinoma   A
[Genbank:F08276]     SULT1A3   infant brain                     G
[Genbank:BF814073]   SULT1A3   colon                            G
[Genbank:BG819342]   SULT1A3   brain                            G
[Genbank:BM702343]   SULT1A3   optic nerve                      G
[Genbank:BQ436693]   SULT1A3   large cell carcinoma             G
[Genbank:AA323148]   SULT1A3   cerebellum                       G
[Genbank:AA325280]   SULT1A3   cerebellum                       G
[Genbank:AA349131]   SULT1A3   fetal adrenal gland              G
[Genbank:L25275]     SULT1A3   placenta                         G

*Gene classifications made according to the nucleotide at position 105 as described in the text. † Tissue descriptions were taken from GenBank accessions.
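The assignment procedure just described reduces to reading one diagnostic coding position. A minimal sketch, using an invented aligned expressed sequence, of classification by the position-105 A/G difference from Table 3:

```python
# Sketch of the diagnostic-nucleotide assignment described in the text:
# coding position 105 (third position of codon 35) is 'A' in SULT1A4 and
# 'G' in SULT1A3. The aligned EST below is invented for illustration.

DIAGNOSTIC_POSITION = 105  # 1-based position within the 876-bp coding sequence

def assign_gene(aligned_cds):
    """Assign an expressed sequence to SULT1A4 or SULT1A3 by codon 35."""
    base = aligned_cds[DIAGNOSTIC_POSITION - 1].upper()
    if base == "A":
        return "SULT1A4"
    if base == "G":
        return "SULT1A3"
    return "unclassified"  # gap, other base, or EST not covering codon 35

# Hypothetical EST aligned to the coding sequence ('-' marks no coverage):
# it covers positions 91-105 only, and position 105 reads 'G'.
est = "-" * 90 + "ATGCTGGCTCAGGAG" + "-" * 771
print(assign_gene(est))  # SULT1A3
```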
The codon 35 A/G polymorphism was reported as allelic variation in SULT1A3 by Thomae et al. [ 33 ]. It is conceivable that Thomae et al. sequenced both SULT1A3 and SULT1A4 because of the identical sequences surrounding them. In their study, 89% of CAA (1A4) and 11% of CAG (1A3) codon 35 alleles were detected in one population. Why were the frequencies not more equal, as would be expected if SULT1A4 is always CAA and SULT1A3 is CAG? One hypothesis is that SULT1A3 is indeed CAG/CAA polymorphic as reported, while SULT1A4 is always CAA. Interestingly, in both the chimpanzee and gorilla, codon 35 of SULT1A3 is CAA. This implies that the ancestral SULT1A3 gene (prior to duplication) likely had a CAA codon. An A to G transition might have been fixed in a fraction of SULT1A3 genes after the divergence of humans and great apes. If this scenario is true, some transcripts assigned to SULT1A4 on the basis of codon 35 may actually be from individuals expressing the ancestral CAA version of SULT1A3 . SULT1A progenitor locus We aligned the coding sequences of all available SULT1A genes and used various nucleotide distance metrics and tree-building algorithms to infer the gene tree without constraints. The unconstrained topology placed platypus as the out group, with the placental mammals ordered (ox,(pig,(dog,(rodents)),(rabbit,(primates)))). This differed from the topology inferred while constraining for the most likely relationships among mammalian orders (platypus,((dog,(ox,pig)), ((rabbit,(rodents)), primates))) [ 47 ]. We considered both trees, and found that the conclusions drawn throughout the paper were robust with regard to these different topologies. Therefore, only the tree inferred while constraining for the most likely relationships among mammalian orders is discussed (Figure 2 ). Figure 2 SULT1A gene tree. TREx upper-limit date estimates of hominoid SULT1A duplications are shown as Ma in red. KA/KS values estimated by PAML are shown above branches. Infinity (∞) indicates a non-reliable KA/KS value greater than 100. The 1A3/1A4 branch is dashed. NCBI accession numbers of sequences used: chimpanzee 1A1 [Genbank:BK004887], chimpanzee 1A2 [Genbank:BK004888], chimpanzee 1A3 [Genbank:BK004889], ox [Genbank:U34753], dog [Genbank:AY069922], gorilla 1A1 [Genbank:BK004890], gorilla 1A2 [Genbank:BK004891], gorilla 1A3 [Genbank:BK004892], human 1A1 [Genbank:L19999], human 1A2 [Genbank:U34804], human 1A3 [Genbank:L25275], human 1A4 [Genbank:BK004132], macaque [Genbank:D85514], mouse [Genbank:L02331], pig [Genbank:AY193893], platypus [Genbank:AY044182], rabbit [Genbank:AF360872], rat [Genbank:X52883]. Using the transition redundant exchange (TREx) molecular dating tool [ 48 ], we placed upper-limit date estimates at the SULT1A duplication nodes (Figure 2 ). The SULT1A gene family appears to have expanded ~32, 25, and 3 million years ago (Ma). Therefore, the SULT1A duplications likely occurred after the divergence of hominoids and old world monkeys, with the most recent duplication occurring even after the divergence of humans and great apes. Mouse, rat, and dog genomes each contained a single SULT1A gene. The simplest evolutionary model, therefore, predicted that one of the four hominoid SULT1A loci was orthologous to the rodent SULT1A1 gene. Syntenic regions have conserved order of genetic elements along a chromosomal segment, and evidence of synteny between homologous regions is useful for establishing relationships of orthology and paralogy.
Human SULT1A1 is most like rodent Sult1a1 in sequence and function, and before the advent of whole genome sequencing it was assumed that they were syntenic and therefore orthologous [ 49 ]. Complete genome sequences have since emerged and alignments between them are available in the visualization tool for alignments (VISTA) database of human-rodent genome alignments [ 50 , 51 ]. The VISTA database contains mouse-human pairwise alignments and mouse-rat-human multiple alignments. The multiple alignments were found to be more sensitive for predicting true orthologous regions between rodent and human genomes [ 51 ]. We searched the VISTA database for evidence of any human-rodent syntenic regions involving the four SULT1A loci. The more sensitive multiple alignments failed to record any human-rodent syntenic regions involving the SULT1A1 , SULT1A2 , or SULT1A4 loci but detected synteny involving the SULT1A3 loci and both rodent genomes (Figure 3 ). These results are indicative of a hominoid specific SULT1A family expansion from a progenitor locus corresponding to the genomic region that now contains SULT1A3 . The results from the VISTA database were not as clear when the less sensitive alignment method was employed (Figure 3 ). Figure 3 Synteny plots demonstrating SULT1A3 is the progenitor locus of the hominoid SULT1A family. Each box shows a VISTA percent identity plot between a section of the human genome and a section of a rodent genome. Different rodent genomes and alignment methods are indicated as follows: 1 = mouse (Oct. 2003 build) multiple alignment method (MLAGAN); 2 = rat (June 2003 build) multiple alignment method (MLAGAN); 3 = mouse (October 2003 build) pairwise alignment method (LAGAN). Human gene locations are shown above and human chromosome 16 coordinates below. SULT1A3 and 1A4 LCRs were 99.1% identical overall (Table 1 ). More careful inspection revealed that the SULT1A3 and 1A4 LCRs were 99.8% identical over the first 120 kbp, but only 98.0% identical over the last 28 kbp (data not shown). This 10-fold difference in percent identities (0.2% vs. 2.0%) suggested that the SULT1A4-containing LCR was produced by two independent duplications. The chimpanzee draft genome assembly aligned with the human genome [ 52 ] provides evidence in support of this hypothesis. There is conserved synteny between human and chimp genomes over the last 28 kbp of the 1A4-containing LCR, but no synteny over the first 120 kbp where the SULT1A4 gene is located (data not shown). This finding and the TREx date estimate for the SULT1A3 / 1A4 duplication event at ~3 Ma indicate that SULT1A4 is a human invention not shared by chimpanzees – our closest living relatives. It should be noted that the chimpanzee genome assembly is less reliable than the assembly of the human genome. The coverage is significantly lower, and the methods used for assembly are viewed by many as being less reliable, in part because they relied on the human assembly. Other possibilities, less supported by the available evidence, should be considered, including deletion of the chimpanzee SULT1A4 gene since the human-chimp divergence, or failure of the draft chimpanzee genome assembly to detect the 120 kbp segment on which the SULT1A4 gene resides. Adaptive evolution in hominoids From an analysis of gene sequence change over time, molecular evolutionary theory can generate hypotheses about whether duplication has led to functional redundancy, or whether the duplicates have adopted separate functional roles.
If the latter, molecular evolutionary theory can suggest how different the functional roles might be by seeking evidence for positive (adaptive) selection for mutant forms of the native proteins better able to contribute to fitness. Positive selection of protein function can best be hypothesized when the ratio of non-synonymous (replacement) to synonymous (silent) changes, normalized to the number of non-synonymous and synonymous sites throughout the entire gene sequence (KA/KS), is greater than unity. Various models of evolutionary sequence change can be used to calculate these ratios. The simplest assumes a single KA/KS ratio over the entire tree (one-ratio). More complex models assume an independent ratio for each lineage (free-ratios), variable ratios for specific classes of sequence sites (site-specific), or variable ratios for specific classes of sequence sites along specified branches (branch-site specific) [ 53 - 57 ]. Estimating the free parameters in each of these models by the maximum likelihood method [ 58 ] enables testing two nested evolutionary models as competing hypotheses, where one model is a special case of another model. The likelihood ratio test (LRT) statistic, which is twice the log likelihood difference between the nested models, is comparable to a χ2 distribution with degrees of freedom equal to the difference in free parameters between the models [ 59 ]. Evidence for adaptive evolution typically requires a KA/KS ratio >1 and a statistically significant LRT [ 60 ]. We estimated KA/KS ratios for each branch in the 1A gene tree by maximum likelihood with the PAML program [ 61 ]. A typical branch in the SULT1A gene tree had a ratio of 0.16, and the ratio was 0.23 on the branch separating extant SULT1A3 / 1A4 genes from the single SULT1A gene in the last common ancestor of hominoids (Figure 2 ). Thus, the KA/KS ratio estimated as an average over all sites did not suggest adaptive evolution along the 1A3/1A4 branch. We then implemented three site-specific and two branch-site evolutionary models that allow KA/KS ratios to vary among sites. Four of the five models estimated that a proportion of sites (2–8%) had KA/KS >1 (Table 5 ). Each model was statistically better at the 99 or 95% confidence level than the appropriate null model as determined using the LRT statistic (Table 6 ). Table 6 lists the specific sites that various analyses identified as being potentially involved in positive selection and a subset of these sites that are changing along the SULT1A3/1A4 branch.

Table 5 Likelihood Values and Parameter Estimates for SULT1A Genes

Model              f.p.*   Log L       Parameter Estimates†
One-ratio          39      -5,047.81   KA/KS = 0.15
Free-ratios        69      -5,005.18   KA/KS ratios for each branch shown in Figure 2
Site-specific:
Neutral            36      -5,021.14   p0 = 0.48 (p1 = 0.52); KA/KS0 = 0, KA/KS1 = 1
Selection          38      -4,884.89   p0 = 0.41, p1 = 0.13 (p2 = 0.46); KA/KS0 = 0, KA/KS1 = 1, KA/KS2 = 0.19
Discrete (k = 2)   37      -4,931.05   p0 = 0.68, p1 = 0.32; KA/KS0 = 0.06, KA/KS1 = 0.77
Discrete (k = 3)   40      -4,880.78   p0 = 0.59, p1 = 0.33 (p2 = 0.08); KA/KS0 = 0.02, KA/KS1 = 0.31, KA/KS2 = 1.24
Beta               37      -4,884.27   p = 0.27, q = 1.07
Beta+selection     39      -4,879.97   p = 0.30, q = 1.33; p0 = 0.98, p1 = 0.02; KA/KS = >2.0
Branch-site specific:
Model A            38      -5,013.29   p0 = 0.48, p1 = 0.49 (p2 = 0.03); KA/KS0 = 0, KA/KS1 = 1, KA/KS2 = >2.0
Model B            40      -4,886.52   p0 = 0.68, p1 = 0.30 (p2 = 0.02); KA/KS0 = 0.04, KA/KS1 = 0.56, KA/KS2 = >2.0

*f.p. is the number of free parameters in each model. † Evidence for positive selection is shown in boldface. Proportions of sites in each KA/KS class, p0, p1, and p2, were not free parameters when in parentheses. Neutral site-specific model assumes two site classes having fixed KA/KS ratios of 0 and 1, with the proportion of sites in each class estimated as free parameters. Selection site-specific model assumes a third proportion of sites with KA/KS estimated from the data. Discrete model assumes 2 or 3 site classes (k) with the proportion of sites, and KA/KS ratios for each proportion, estimated as free parameters. Beta model assumes a beta distribution of sites, where the distribution is shaped by the parameters p and q. Beta+selection model assumes an additional class of sites having a KA/KS ratio estimated from the data. Model A, an extension of the neutral model, assumes a third site class on the 1A3/1A4 branch with KA/KS estimated from the data. Model B, an extension of the discrete model with two site classes (k = 2), also assumes a third site class on the 1A3/1A4 branch with KA/KS estimated from the data.

Table 6 Likelihood Ratio Tests for the SULT1A Genes

Comparison                       Log L1      Log L0      2ΔLog L   d.f.   P-value
Selection vs. Neutral            -4,884.89   -5,021.14   272.50    2      P < 0.001
Discrete (k = 3) vs. One-ratio   -4,880.78   -5,047.81   334.06    4      P < 0.001
Beta+selection vs. Beta          -4,879.97   -4,884.27   8.60      2      0.01 < P < 0.05
Model A vs. Neutral              -5,013.29   -5,021.14   15.70     2      P < 0.001
Model B vs. Discrete (k = 2)     -4,886.52   -4,931.05   89.06     2      P < 0.001

Positively selected sites*: 3 (0.86), 7 (0.63), 30 (0.71), 35 (0.73), 71 (0.88), 77 (0.92), 84 (0.92), 85 (0.95), 86 (0.97), 89 (0.99), 89 (0.88), 89 (0.99), 89 (0.99), 93 (0.97), 105 (0.72), 105 (0.53), 107 (0.82), 107 (0.75), 132 (0.87), 132 (0.78), 143 (0.51), 146 (0.80), 146 (0.97), 222 (0.99), 222 (0.58), 236 (0.53), 245 (0.99), 245 (0.99), 261 (0.90), 275 (0.70), 288 (0.89), 290 (0.95), 293 (0.72)

*In parentheses for each positively selected site is the posterior probability that the site belongs to the class with KA/KS >1. Posterior probabilities >90% are bold-face. Positively selected sites also experiencing non-synonymous change on the 1A3/1A4 branch are underlined.

A hypothesis of adaptive change that is based on the use of KA/KS values can be strengthened by joining the molecular evolutionary analysis to an analysis based on structural biology [ 62 , 63 ]. Here, we ask whether the sites possibly involved in an episode of sequence evolution are, or are not, randomly distributed in the three dimensional structure. To ask this question, we mapped the sites to the SULT1A structure (Figure 4 ). Sites holding amino acids whose codons had suffered synonymous replacements were evenly distributed throughout the three-dimensional structure of the enzyme, as expected for silent changes that have no impact on the protein structure and therefore cannot be selected for or against at the protein level (Figure 4 ). In contrast, sites experiencing non-synonymous replacements during the episode following the duplication that created the new hominoid gene are clustered on the side of the protein near the substrate binding site and the channel through which the substrate gains access to the active site (Figure 4 and Table 7 ). This strengthens the hypothesis that replacements at the sites are indeed adaptive. The approach employed here based on structural biology does not lend itself easily to evaluation using statistical metrics.
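Although the structural argument is qualitative, the likelihood ratio tests above are straightforward to check numerically. A small sketch, assuming scipy is available, that recomputes the 2ΔLog L statistics and p-values reported in Table 6:

```python
# Recompute the likelihood ratio tests of Table 6: LRT = 2*(logL1 - logL0),
# compared against a chi-squared distribution with df equal to the
# difference in free parameters between the nested models.
from scipy.stats import chi2

tests = [  # (comparison, logL1, logL0, df) taken from Table 6
    ("Selection vs. Neutral",        -4884.89, -5021.14, 2),
    ("Discrete (k=3) vs. One-ratio", -4880.78, -5047.81, 4),
    ("Beta+selection vs. Beta",      -4879.97, -4884.27, 2),
    ("Model A vs. Neutral",          -5013.29, -5021.14, 2),
    ("Model B vs. Discrete (k=2)",   -4886.52, -4931.05, 2),
]
for name, l1, l0, df in tests:
    lrt = 2 * (l1 - l0)
    p = chi2.sf(lrt, df)  # survival function = upper-tail probability
    print(f"{name:30s} 2dlogL = {lrt:7.2f}  P = {p:.3g}")

# Beta+selection vs. Beta gives 2dlogL = 8.60, P ~ 0.014, matching the
# reported 0.01 < P < 0.05; the other four comparisons give P << 0.001.
```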
Rather, the structural results are valuable based on the visual impression that they give, and the hypotheses that they generate. Figure 4 Non-synonymous changes along the 1A3/1A4 branch cluster on the SULT1A1 enzyme structure [PDB: 1LS6] [26]. Red sites experienced non-synonymous changes, green sites experienced synonymous changes. The PAPS donor substrate and p -nitrophenol acceptor substrates are shown in blue. Image was generated using Chimera [86].

Table 7 Non-synonymous Changes on the 1A3/1A4 Branch

Site*   Nucleotide Changes/Site   Hominoid SULT1A Ancestor: Residue (PP†), Properties   →   Hominoid SULT1A3 Ancestor: Residue (PP), Properties
44      1                         Ser (1.00), tiny polar                     →   Asn (1.00), small polar
71      1                         His (0.99), non-polar aromatic positive    →   Asn (1.00), small polar
76      1                         Phe (1.00), non-polar aromatic             →   Tyr (1.00), aromatic
77      2                         Met (0.99), non-polar                      →   Val (1.00), small non-polar aliphatic
84      1                         Phe (1.00), non-polar aromatic             →   Val (1.00), small non-polar aliphatic
85      1                         Lys (1.00), positive                       →   Asn (1.00), small polar
86      2                         Val (0.98), small non-polar aliphatic      →   Asp (1.00), small polar negative
89      3                         Ile (0.98), non-polar aliphatic            →   Glu (1.00), polar negative
93      1                         Met (0.00), non-polar                      →   Leu (1.00), non-polar aliphatic
101     1                         Ala (1.00), tiny non-polar                 →   Pro (1.00), small
105     1                         Leu (1.00), non-polar aliphatic            →   Ile (1.00), non-polar aliphatic
107     1                         Thr (1.00), tiny polar                     →   Ser (1.00), tiny polar
132     1                         Ala (1.00), tiny non-polar                 →   Pro (1.00), small
143     1                         Tyr (1.00), aromatic                       →   His (1.00), non-polar aromatic positive
144     2                         His (0.99), non-polar aromatic positive    →   Arg (1.00), polar positive
146     2                         Ala (1.00), tiny non-polar                 →   Glu (1.00), polar negative
148     1                         Val (1.00), small non-polar aliphatic      →   Ala (1.00), tiny non-polar
222     1                         Leu (0.99), non-polar aliphatic            →   Phe (1.00), non-polar aromatic

* Sites underlined were identified as being positively selected using the branch-site specific models. † Posterior probabilities that the ancestral residues are correct, conditional on the model of sequence evolution used.

We then examined literature where amino acids had been exchanged between SULT1A1 and SULT1A3. One of the sites, at position 146, identified as being involved in adaptive change, is known to control substrate specificity in SULT1A1 and 1A3 [ 27 - 30 ]. The remaining sites identified are nearby. Conclusion An interesting question in post-genomic science asks how to create biological hypotheses from various drafts of whole genome sequences. In generating these hypotheses, it is important to remember that a genomic sequence is itself a hypothesis, about the chemical structure of a small number of DNA molecules. In many cases, biologists wish to move from the genomic sequence, as a hypothesis, to create hypotheses about biological function, without first "proving" the genome sequence hypothesis. This type of process, building hypotheses upon unproven hypotheses, is actually common in science. In fact, very little of what we believe as fact is actually "proven"; formal proof is virtually unknown in science that involves observation, theory, and experiment. Rather, scientists generally accumulate data until a burden of proof is met, with the standards for that burden being determined by experience within a culture.
In general, scientists have an idea in an area as to what level of validation is sufficient to avoid making mistakes an unacceptable fraction of the time, and proceed to that level in their ongoing work, until they encounter a situation where they make a mistake (indicating that a higher standard is needed), or encounter enough examples where a lower standard works, and therefore come to accept a lower standard routinely [ 64 ]. Genomics has not yet accumulated enough examples for the culture to define the standards for a burden of proof. In the example discussed here, several lines of reasoning would be applied to analyze the sulfotransferase gene family. First, the fact that the draft genome for chimpanzee contains three paralogs, while the draft genome for human contains four, would normally be interpreted (as it is here) as evidence that an additional duplication occurred in the time since chimpanzee and humans diverged. It would also, however, be consistent with the loss of one of four hypothetical genes present in the common ancestor of chimpanzee and humans in the lineage leading to chimpanzee. Another possibility is that the finishing stages of the chimpanzee genome project will uncover a SULT1A4 gene. Normally, one would resolve this question using an out group taxon, a species that diverged from the lineage leading to chimpanzee and human before chimpanzee and human themselves diverged. The nearest taxa that might serve as an out group today are, however, rat and mouse. As noted above, they diverged so long ago ( ca . 150 MY separates contemporary rodents from contemporary primates) that the comparison provides no information. And no closer out group taxon ( e.g ., orangutan) has had its genome completely sequenced. Here, the two hypotheses (duplication versus loss after the chimpanzee-human speciation) are distinguished (to favor post-speciation duplication) based on an analysis of the silent nucleotide substitutions using the TREx metric. The very small number of nucleotide differences separating the SULT1A3 and SULT1A4 coding regions favors the generation of the two paralogs after chimpanzee and human diverged. This comparison, however, potentially suffers from the statistics of small numbers. The number of differences in the coding region (exactly one) is small. By considering ~10 kbp of non-coding sequence, however, additional differences were found. It is possible that in the assembly of the human genome, a mistake was made that led to the generation of a SULT1A4 region that does not actually exist. In this hypothesis, the ~20 nucleotide differences between the SULT1A3 and SULT1A4 paralogs must be the consequence of allelic polymorphism in the only gene that exists. This is indeed how some of the data were initially interpreted. Does the preponderance of evidence favor the hypothesis of a very recent duplication to generate a pair of paralogs ( SULT1A3 and SULT1A4 )? Or does the evidence favor the hypothesis that the SULT1A4 gene is an illusion arising from gene assembly error coupled to sequencing errors and/or allelic variation at ca. 20 sites? The culture does not yet have a standard of assigning the burden of proof here, although a choice of hypothesis based simply on the count of the number of mistakes that would need to have been made to generate each hypothesis (none for the first, at least three for the second) would favor the former over the latter. 
Thus, perhaps naively, the burden of proof now favors the former, and we may proceed to generate the biological hypothesis on top of the genomic hypothesis. Here, the hypothesis has immediate pharmacogenomic and genomic disease implications due to the specific functional behaviors of SULT1A enzymes. LCR-mediated genomic rearrangements could disrupt or amplify human SULT1A gene copy number. Given our current environmental exposure to many forms of carcinogens and pro-carcinogens that are either eliminated or activated by SULT enzymes, respectively, it is plain to see how SULT1A copy number variability in the human population could underlie cancer susceptibilities and drug or food allergies. The majority of evidence indicates that a new transcriptionally active human gene, which we refer to as SULT1A4 , was created when 120 kbp of chromosome 16 duplicated after humans diverged from great apes. Thus, SULT1A4 , or possibly another gene in this region, is likely to contribute to distinguishing humans from their closest living relatives. It is also conceivable that an advantage in gene regulation, as opposed to an advantage from gene duplication, was the driving force behind the duplication of this 120 kbp segment. While cause and effect are difficult to separate, the examples presented here support the hypothesis that genes whose duplication and recruitment are useful to meet current Darwinian challenges find themselves located on LCRs. The SULT1A4 gene is currently the most obvious feature of the duplicated region and has been preserved for ~3 MY without significant divergence of its coding sequence. One suggestion for the usefulness of SULT1A4 is that it expanded sulfonating enzymes to new tissues. The SULT1A4 gene is located only 10 kbp upstream from the junction boundary of its LCR and 700 kbp away from the SULT1A3 locus. It is possible, therefore, that promoter elements from the new genomic context of the SULT1A4 gene would drive its expression in tissues where SULT1A3 is not expressed – a hypothesis testable by more careful transcriptional profiling. Multiple SULT1A genes were apparently useful inventions by our stem hominoid ancestor. Following the duplication of an ancestral primate SULT1A gene ~32 Ma, positive selection acted on a small proportion of sites in one of the duplicates to create the dopamine sulfonating SULT1A3 enzyme. In the example presented here, the evidence of adaptive change at certain sites is corroborated by the ad hoc observation that the sites cluster near the active site of the protein. The well known substrate binding differences at the active sites of SULT1A1/1A2 and SULT1A3 (and now SULT1A4) substantiate these findings. When studying well-characterized proteins as we have done here, episodes of functional change can be identified by piecing together several lines of evidence. It is not immediately possible, unfortunately, to assemble as much evidence for the majority of proteins in the biosphere. Thus, an important goal in bioinformatics is to recognize the signal of functional change from a restricted amount of evidence. Of the three lines of evidence employed here (codon-based metrics, structural biology, and experimental), structural biology, with its obvious connections to protein function and impending growth from structural genomics initiatives, will probably be the most serviceable source of information for most protein families. 
This should be especially true for protein families not amenable to experimental manipulation, or with deep evolutionary branches where codon-based metrics are unhelpful. If we are to exploit the incontrovertible link between structure and function, however, new structural bioinformatic tools and databases relating protein structure to sequence changes occurring on individual branches are much needed. This bioinformatic study makes several clear predictions. First, a PCR experiment targeted against the variation between the hypothetical SULT1A3 and SULT1A4 human genes should establish the existence of the two separate genes. Second, a reverse transcription-PCR experiment would be expected to uncover transcriptional activity for the SULT1A3 and SULT1A4 human genes. Since this paper was submitted, these experiments have been done, and indeed confirm our predictions made without the experimental information [ 65 ]. Further, after this manuscript and its computationally-based predictions were submitted for publication, a largely finished sequence for chromosome 16 has emerged [ 66 ] that confirms our analysis here in every respect. Methods SULT1A LCR family organization in the human genome The July 2003 human reference genome (based on NCBI build 34) was queried with the SULT1A3 coding region using the BLAST-like alignment tool [ 39 ], and search results were visualized in the UCSC genome browser [ 67 ]. Two distinct locations on chromosome 16 were identified as equally probable. One location was recognized by NCBI Map Viewer [ 68 ] as the SULT1A3 locus. The other locus was dubbed SULT1A4 following conventional naming for this family. The coding sequence and genomic location of SULT1A4 , as well as expressed sequences derived from SULT1A4 , have been deposited with the GenBank Third Party Annotation database under accession [Genbank:BK004132]. To determine the extent of homology between the SULT1A3 and 1A4 genomic locations, ~500 kbp of sequence surrounding SULT1A3 and ~500 kbp of sequence surrounding SULT1A4 were downloaded from NCBI and compared using PIPMAKER [ 69 , 70 ]. Before submitting to PIPMAKER, high-copy repeats in one of the sequences were masked with REPEATMASKER [ 71 ]. The Human Recent Segmental Duplication Page [ 72 ] was consulted to identify other LCRs related to the SULT1A3-containing LCR. Chromosomal coordinates of 30 SULT1A3-related LCRs were arranged in GFF format and submitted to the UCSC genome browser as a custom track. Sequences corresponding to the chromosomal coordinates of the 30 LCRs were then downloaded from the UCSC genome browser and parsed into separate files. Each LCR was aligned with the SULT1A3-containing LCR using MULTIPIPMAKER [ 73 ]. The Segmental Duplication Database [ 74 ] was used to examine the duplication status of each gene in the cytosolic SULT super family. The bacterial artificial chromosome contigs supporting each member of the SULT1A LCR family, and the known genes within each LCR, were inspected with the UCSC genome browser [ 75 ]. The DNA sequences of nine bacterial artificial chromosome contigs supporting the SULT1A4 genomic region [NCBI Clone Registry: CTC-446K24, CTC-529P19, CTC-576G12, CTD-2253D5, CTD-2324H19, CTD-2383K24, CTD-2523J12, CTD-3191G16, RP11-28A6] and seven contigs supporting the SULT1A3 region [NCBI Clone Registry: CTD-2548B1, RP11-69O13, RP11-164O24, RP11-455F5, RP11-612G2, RP11-787F23, RP11-828J20] were downloaded from the UCSC genome browser website. 
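The custom-track step described in the preceding paragraph is a simple serialization exercise. A sketch of how LCR coordinates might be written as a GFF custom track for the UCSC browser, using the standard nine tab-separated GFF fields (the source and feature labels are arbitrary choices, and only two of the 30 rows are shown):

```python
# Write LCR coordinates as a GFF custom track for the UCSC genome browser.
# GFF fields: seqname, source, feature, start, end, score, strand, frame,
# attribute (tab-separated). Source/feature names below are arbitrary.

lcrs = [  # (name, chromosome, strand, start, end) -- two rows from Table 1
    ("1A4", "chr16", "+", 29498152, 29644489),
    ("1A3", "chr16", "+", 30236110, 30388351),
]

with open("sult1a_lcrs.gff", "w") as out:
    out.write('track name="SULT1A_LCRs" description="SULT1A3-related LCRs"\n')
    for name, chrom, strand, start, end in lcrs:
        fields = [chrom, "lcr_survey", "LCR", str(start), str(end),
                  ".", strand, ".", f"lcr_id {name}"]
        out.write("\t".join(fields) + "\n")
```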
Phylogenetics The MASTERCATALOG was used to perform initial inspections of the SULT gene family and to deliver a non-redundant collection of SULT1A genes. Additional SULT1A ORFs were extracted from gorilla working draft contigs [Genbank:AC145177] ( SULT1A1 and 1A2 ) and [Genbank:AC145040] ( SULT1A3 ) and chimpanzee whole genome shotgun sequences [Genbank:AACZ01082721] ( SULT1A1 ), [Genbank:AADA01101065] ( SULT1A2 ), and [Genbank:AACZ01241716] ( SULT1A3 ) using PIPMAKER exon analysis. These new SULT1A genes have been deposited with the GenBank Third Party Annotation database under accession numbers [Genbank:BK004887-BK004892]. DNA sequences were aligned with CLUSTAL W [ 76 ]. The multiple sequence alignment used in all phylogenetic analyses is presented as supplementary data [see Additional file 1 ]. Pairwise distances were estimated under various distance metrics (Jukes-Cantor, Kimura 2-parameter, and Tamura-Nei) that account for among-site rate variation using the gamma distribution [ 77 ]. Phylogenies were inferred using both neighbor-joining and minimum evolution tree-building algorithms under the following constraints ((((primates), rodents), (artiodactyls, carnivores)), platypus). Phylogenetic analyses were conducted using the MEGA2 v2.1 [ 78 ] and PAUP* v4.0 [ 79 ] software packages. Parameter estimates of site class proportions, KA/KS ratios, base frequencies, codon frequencies, branch lengths, and the transition/transversion bias were determined by the maximum likelihood method with the PAML v3.14 program [ 61 ]. Positively selected sites, posterior probabilities, and marginal reconstructions of ancestral sequences were also determined using PAML. Sites experiencing synonymous changes along the 1A3/1A4 branch were recorded by hand from an ancestral sequence alignment. Molecular dating Starting with aligned DNA sequences, the number (n) of two-fold redundant codons (Lys, Glu, Gln, Cys, Asp, Phe, His, Asn, Tyr) where the amino acid had been conserved in pairs of aligned sequences, and the number of these codons where the third position was identically conserved (c), were counted by the DARWIN bioinformatics platform [ 80 , 81 ]. The pairwise matrix of n and c values for all SULT1A genes is presented as supplementary data [see Additional file 2 ]. The c/n quotient equals the fraction of identities (f2) in this system, or the transition redundant exchange (TREx) value [ 48 ]. TREx values were converted to TREx distances (kt values) by the following equation: kt = -ln[(f2 - Eq)/(1 - Eq)], where k is the rate constant of nucleotide substitution, t is the time separating the two sequences, and Eq is the equilibrium state of the TREx value [ 48 ]. The equilibrium state of the TREx value was estimated as 0.54 for primates, and the rate constant at two-fold redundant sites where the amino acid was conserved (k) was estimated as 3.0 × 10^-9 changes/site/year for placental mammals (T. Li, D. Caraco, E. Gaucher, D. Liberles, M. Thomson, and S.A.B., unpublished data). These estimates were determined by sampling all pairs of mouse:rat and mouse:human orthologs in the public databases and following accepted placental mammal phylogenies and divergence times [ 82 , 83 ]. Therefore, the date estimates reported are based on the contentious assumptions that (i) rates are constant at the third position of two-fold redundant codons across the genome, (ii) the fossil calibration points are correct, and (iii) the mammalian phylogeny used is correct.
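The dating formula above can be applied directly. A short sketch using the constants given in the text (Eq = 0.54 for primates, k = 3.0 × 10^-9 changes/site/year); the example c and n counts are hypothetical rather than values from the study's matrix:

```python
# TREx dating as described in the text: f2 = c/n is the fraction of
# conserved third positions at two-fold redundant codons; the distance is
# kt = -ln((f2 - Eq) / (1 - Eq)), and the time estimate is t = kt / k.
from math import log

EQ = 0.54      # equilibrium TREx value estimated for primates
K = 3.0e-9     # changes/site/year at these sites in placental mammals

def trex_age(c, n):
    """Upper-limit divergence time in years from conserved-site counts."""
    f2 = c / n
    kt = -log((f2 - EQ) / (1 - EQ))
    return kt / K

# Hypothetical counts: 178 of 180 two-fold redundant sites conserved.
print(f"{trex_age(178, 180) / 1e6:.1f} Ma")  # ~8.1 Ma for f2 = 0.989
```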
Branch lengths were obtained for the constrained tree topology from the pairwise matrix of TREx distances using PAUP* v4.0. Upper-limit date estimates for nodes corresponding to SULT1A duplication events were obtained by summing the longest path of branches leading to a node and dividing that value by k. Comparative genomics Human-chimpanzee genome alignments were inspected at the UCSC Genome Browser. Human-rodent genome alignments were examined with the VISTA Genome Browser [50,51,84]. VISTA default parameters were used for drawing curves. Alignments constructed using both the pairwise method (LAGAN) and the multiple alignment method (MLAGAN) between the human genome builds frozen on April 2003 or July 2003 and both rodent genomes were inspected. Transcriptional profiling All expressed sequences ascribed to SULT1A3 were downloaded from NCBI UniGene [85] and aligned with the SULT1A3 and SULT1A4 genomic regions using PIPMAKER. Alignments were inspected for the polymorphism in codon 35, as well as any other potential patterns, to determine whether they were derived from SULT1A3 or SULT1A4. Abbreviations kbp (kilobase pairs); LCR (low copy repeat); Mbp (million base pairs); Ma (million years ago); ORF (open reading frame); SULT (sulfotransferase); TREx (transition redundant exchange); VISTA (visualization tool for alignments). Authors' contributions M.E.B. carried out the study and drafted the manuscript. S.A.B. participated in designing the study and preparing the manuscript. Supplementary Material Additional File 1: Multiple sequence alignment of SULT1A genes used in all phylogenetic analyses. Characters conserved in all sequences are indicated with asterisks. Additional File 2: Pairwise n and c values between SULT1A genes. The names of the sequences are the row-headers and the column-headers. The lower triangular matrix contains n values, and the upper triangular matrix contains c values. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC555591.xml |
534806 | Discovery of mammalian genes that participate in virus infection | Background Viruses are obligate intracellular parasites that rely upon the host cell for different steps in their life cycles. The characterization of cellular genes required for virus infection and/or cell killing will be essential for understanding viral life cycles, and may provide cellular targets for new antiviral therapies. Results Candidate genes required for lytic reovirus infection were identified by tagged sequence mutagenesis, a process that permits rapid identification of genes disrupted by gene entrapment. One hundred fifty-one reovirus resistant clones were selected from cell libraries containing 2 × 10^5 independently disrupted genes, of which 111 contained mutations in previously characterized genes and functionally anonymous transcription units. Collectively, the genes associated with reovirus resistance differed from genes targeted by random gene entrapment in that known mutational hot spots were underrepresented, and a number of mutations appeared to cluster around specific cellular processes, including IGF-II expression/signalling, vesicular transport/cytoskeletal trafficking, and apoptosis. Notably, several of the genes have been directly implicated in the replication of reovirus and other viruses at different steps in the viral lifecycle. Conclusions Tagged sequence mutagenesis provides a rapid, genome-wide strategy to identify candidate cellular genes required for virus infection. The candidate genes provide a starting point for mechanistic studies of cellular processes that participate in the virus lifecycle and may provide targets for novel anti-viral therapies. | Background Cellular genes are likely to participate in all phases of viral life cycles, including attachment to cellular receptors, internalization, disassembly, translation of mRNA, assembly, and egress from the cells [1]. The susceptibility to virus infection varies greatly among different cell types, and virus-resistant cells frequently emerge post-infection [2-4]. This suggests that genetic determinants can influence host cell contributions to the virus life cycle. Despite examples of mammalian genes that influence virus infection, the identification of such genes has been hampered by the lack of practical methods for genetic analysis in cultured cells. In the present study, we tested whether tagged sequence mutagenesis (a gene entrapment strategy widely used to mutate genes in mouse embryonic stem cells [5-10]) could be used to identify candidate cellular genes required for lytic infection by reovirus, a small cytolytic RNA virus that replicates in the cytoplasm. The mammalian reoviruses serve as useful models for virus-host cell interaction due to their capacity to replicate preferentially in proliferating and undifferentiated cells [3]. Gene traps are efficient mutagens, as assessed in mice by studies of mutations originally induced in embryonic stem cells. In somatic cells, the approach assumes that loss-of-function mutations induced by gene entrapment may confer reovirus resistance as a result of gene dosage effects (e.g. haploinsufficiency), pre-existing heterozygosity, or loss of heterozygosity. Following infection with the U3NeoSV1 retrovirus gene trap shuttle vector, libraries of mutagenized rat intestinal epithelial (RIE)-1 cell clones were isolated in which each clone contained a single gene disrupted by provirus integration [6].
The entrapment libraries were infected with reovirus type 1, and virus-resistant clones were selected under conditions that also selected against the emergence of persistently infected (PI) cells, which may express virus resistance in the absence of cellular mutations [4]. Genes disrupted in a total of 151 reovirus resistant cells were identified by sequencing regions of genomic DNA adjacent to the entrapment vector [6]; of these, 111 contained mutations in previously characterized genes and anonymous transcription units. Reovirus-resistant clones were selected at higher frequencies from entrapment libraries than from non-mutagenized cells, suggesting that reovirus-resistant phenotypes were induced by gene trap mutagenesis. However, in any genetic screen, clones with the selected phenotype may arise from spontaneous mutations, and consequently, additional experiments are required to demonstrate that individual genes disrupted by gene entrapment actually contribute to the reovirus-resistant phenotype. For example, a mutation in Ctcf, which encodes a transcriptional repressor of insulin-like growth factor II (IGF-II), was one of 4 mutations associated with reovirus resistance that affected IGF-II expression and/or signalling. Subsequent experiments demonstrated that enforced IGF-II expression is sufficient to confer high levels of reovirus resistance [4]. In short, genes collectively identified by tagged sequence mutagenesis in a panel of reovirus resistant clones provide candidates for mechanistic studies of cellular processes that participate in the virus lifecycle. Since the disrupted genes do not adversely affect cell survival, drugs that inhibit proteins encoded by the genes are not expected to be overtly toxic to cells. Hence, the candidate genes may also include targets for novel anti-viral therapies. Results Tagged sequence mutagenesis and selection of reovirus resistant clones Twenty libraries of mutagenized RIE-1 cells, each representing approximately 10^4 independent gene trap events, were isolated following infection with the U3NeoSV1 gene trap retrovirus. U3NeoSV1 contains coding sequences for a neomycin resistance gene in the U3 region of the viral long terminal repeat (LTR). Selection for neomycin resistance generates clones with proviruses inserted within actively transcribed genes. Cells pooled from each entrapment library were separately infected with type 1 reovirus at a multiplicity of infection of 35, and reovirus-resistant clones were selected in serum-free media to suppress the emergence of persistently infected (PI) cells [4]. A total of 151 reovirus-resistant clones were isolated: approximately 1 mutant per 10^3 gene trap clones, or 1 mutant per 10^7 reovirus infected cells. For comparison, the frequency of recovering resistant clones from RIE-1 cells not mutagenized by gene entrapment was less than 10^-8. This suggests that reovirus-resistant phenotypes were induced by gene trap mutagenesis. Reovirus-resistant cells selected in serum-free media did not express viral antigens (Figure 1) and did not produce infectious virus as assessed by plaque assay (E.L. Organ, unpublished results). Most clones were resistant to infection by high-titre reovirus and were further analyzed (Figure 2).
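To make the selection frequencies above concrete, here is a small arithmetic sketch using the rounded figures quoted in the text; it is illustrative only, not a reanalysis of the data.

```python
# Illustrative arithmetic from the quoted figures (rounded values).
resistant_clones = 151
independent_mutants = 2e5        # 2 x 10^5 independently disrupted genes

per_mutant = resistant_clones / independent_mutants
print(f"~{per_mutant:.1e} resistant clones per independent gene-trap clone "
      f"(~1 per {1 / per_mutant:,.0f})")   # ~1 per 10^3, as stated

per_infected_cell = 1e-7         # quoted: 1 mutant per 10^7 infected cells
background = 1e-8                # non-mutagenized cells: < 10^-8
print(f"mutagenized/background ratio > {per_infected_cell / background:.0f}x")
```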
While reovirus resistance did not initially result from the establishment of a persistent infection, many clones became persistently infected upon subsequent passages, presumably because mutant cells that display virus resistance are susceptible to the establishment of a PI state [2] from residual virus used in selection. Figure 1 Characterization of phenotypic properties of cloned RIE-1 cells resistant to reovirus type 1 infection (A) Cells were stained for reovirus antigen as previously described [3]. Only the PI cells contain reovirus antigen as detected by immunohistochemistry (dark wells). Upper wells contain cloned mutant RIE-1 cells from two sets of RIE-1 mutant cell lines selected for reovirus resistance. The lower wells contain PI RIE-1 (left) and uninfected wild type RIE-1 (right). (B) Reovirus susceptible L-cell monolayers, maintained in 1 ml of complete medium, were used to detect the presence of virus in a 100 μl lysate obtained from mutant cells (upper two wells), PI RIE-1 cells (lower left) or uninfected parental RIE-1 cells (lower right). Note that only L-cell monolayers exposed to a lysate from PI RIE-1 cells lysed within one week of exposure (gentian violet stain). Figure 2 RIE-1 mutant cells resist lytic infection by reovirus The columns contain an unselected RIE-1 cell library, RIE-1 40C, and representative reovirus resistant mutant cell clones. Serial two-fold dilutions of reovirus were made, with the highest titer in the upper row, MOI = 1 × 10^4. Resistance to reovirus type 1 infection was observed in the mutant cells 3 to 7 days post-infection. The bottom row of cells, denoted by "C", was not infected, to serve as a control for cell viability and proliferation. Cells were stained with gentian violet four days post-infection. A clear well indicates cell death following virus infection. Identification of genes disrupted in reovirus-resistant clones The U3NeoSV1 gene trap vector contains a plasmid origin of replication and an ampicillin resistance gene; thus, regions of genomic DNA adjacent to the targeting vector were readily cloned by plasmid rescue and sequenced [6]. The flanking sequences were compared to the nucleic acid databases to identify candidate cellular genes that confer resistance to lytic infection by reovirus when altered by gene entrapment. Altogether, the 151 cloned flanking sequences matched 111 annotated genes and transcription units in the public DNA sequence databases (non-redundant (nr), high-throughput genomic sequences (htgs), and human, mouse, and rat genome sequences) [6]. Forty flanking sequences were uninformative because they matched repetitive elements or regions of genomic DNA not associated with any annotated transcription unit. The Supplementary Table [see Additional File 1] lists genes disrupted in reovirus resistant clones for which some functional information is available. Many of these genes encode proteins that are known to physically interact. Genes associated with particular metabolic or signalling pathways are shown in Table 1. These include gene products that could play potential roles in all aspects of virus replication: entry, disassembly, transcription, translation, and reassembly (Table 1, Figure 3, Supplementary Table [see Additional File 1]).
Eleven genes were independently mutated in separate cell libraries: insulin growth factor binding protein 5 protease (prss11), the type C-like lectin proteins (Clr)-f and -C, Dnaja1-/Aprataxin+ (Aprx), GATA binding protein 4 (Gata4), Bcl2 like-1 (Bcl2l1), chromosome 10 open reading frame 3 (Chr10orf3), myoferlin/fer-1 like protein 3 (Fer1l3), S100a6 (encoding calcyclin), and two functionally anonymous cDNAs (Supplementary Table [see Additional File 1]). The proviruses in these independent mutant clones were located within 7 to over 1500 nucleotides of each other (data not shown). Table 1 Classification of trapped genes according to function Trapped genes are listed by the official HUGO Gene Nomenclature Committee names, when available. Functional placement of genes or their products is determined by literature assignments. Some genes perform more than one cellular role and are classified arbitrarily; others have undefined roles.

| Transcription | Cytoskeletal-Related | Membrane | Signalling | Vesicle/Trafficking |
| --- | --- | --- | --- | --- |
| Brd2 | Anx3 | Abca4 | E2ip2 | Anxa1 |
| Brd3 | Cald1 | Celsr2 | Fkbp8 | Anxa2 |
| Ctcf | Calm2 | Csmd2 | Fusip1 | Atp6v0c |
| E2f2 | Kif13b | Erbb2ip | Gata4 | Copg2 |
| Gtf2e1 | Mapt | OL16 | Grb2 | Golga2 |
| Hnrpl | Ppm1a | Pgy1 | Jak1 | Hm13 |
| Hoxc13 | Rps18 | Rab13 | Madh7 | Igf2r |
| Hp1-bp74 | Stmn1 | Serp1 | Map3k7ip1 | Psa |
| Id3 | Tpm1 | | Pde4b | Rabl3 |
| Zfp7 | | | Rraga | Rin2 |
| Znf207 | | | Ryk | S100a6 |

| Translation | Apoptosis | Metabolism | Chaperonin | Ubiquitination/Proteasome |
| --- | --- | --- | --- | --- |
| Cstf2 | Bcl2l | Gas5 | Dnaja1 | Psma7 |
| Eif3s10 | IkBζ | Lipc | | Ube1c |
| Srp19 | Mical2 | Mgat1 | | Rfp2 |
| | | Pts | | |

Unassigned: Aptx, Hspc135, Ocil, Wdr5, Clr-f, Klhl6, Ror1, Dlx2, Mox2r, Scmh1, Dre1, Numb, Trim52.

Figure 3 A model of the life cycle of reovirus: proposed checkpoints based on function of the cellular genes identified by the insertional mutagenesis The virus life cycle begins (top, clockwise) with virus binding to a cell surface receptor and being endocytosed into early endosomes. These endosomes then associate with annexin-II (Anxa2) [62] and fuse with annexin-II-associated vesicles containing newly synthesized lysosomal enzymes migrating from the Golgi [63], which further fuse with the lysosome. The vacuolar H+-ATPase (Atp6v0c) acidifies the lysosome, allowing acid-dependent proteases to digest the outer coat from the virus particles and activate them [64]. These activated particles then pass through the lysosomal membrane and begin transcription of mRNA. The Golgi protein gm130 (Golga2) is believed to mediate the docking of vesicles as they carry their newly synthesized cargo through the Golgi stack [65,66]. N-acetylglucosaminyl transferase I (Mgat1) initiates the glycosylation of cell surface proteins (receptors?) and may play a major role, through kinship recognition, in helping maintain the correct assortment of lysosomal enzymes [67-71]. The Igf2r shuttles enzymes bound for the lysosome from the Golgi [72] and transfers Igf2 to the lysosome. While the roles of calcyclin and α-tropomyosin (Tpm1) are still unclear, they specifically bind each other, and calcyclin is known to bind Anxa2 [16,20]. Thus, they may be involved in endosome fusion. Eif3s10 specifically binds the virus message to begin its preferential translation. The DnaJa1 protein may facilitate the proper folding of virus proteins with its chaperone function [73]. However, the DnaJa1 protein and Eif3 may play additional roles in virus trafficking or apoptosis, respectively. Eventually, morphogenesis is complete when crystalline-like arrays of new virions form, cell lysis occurs, and virus is released.
Many of the cellular proteins encoded by the mutated genes have direct or indirect roles in the trafficking of endosomes or in lysosomal fusion, and thus may play roles in the early disassembly or delivery of transcriptionally active virions to the appropriate cell location. While the presence of multiple, independent mutations in specific genes provides indirect evidence for their involvement in the reovirus lifecycle, the genes could also represent hot spots for gene entrapment. The U3NeoSV1 vector preferentially targets genes with 5' exons that can splice in-frame with a cryptic splice site in the Neo gene to produce enzymatically active Neo fusion proteins. As a consequence, mutagenesis by U3NeoSV1 is actually quite biased, such that of 400 mutations characterized in ES cells, one-third involved genes disrupted multiple times, including Pecam1, which was targeted 9 times [11]. However, none of the multiply targeted genes associated with reovirus resistance involved previously observed entrapment hotspots. Conversely, over 10% of the mutations identified in ES cells involved genes for RNA binding proteins, a preference not observed among genes collectively associated with reovirus resistance. Only Madh7 and Gas5, each represented once among the reovirus-resistant clones, were disrupted by U3NeoSV1 in ES cells. Both genes are commonly targeted by other retroviral gene trap vectors and thus probably represent hot spots for gene entrapment [5,8]. Potential involvement of disrupted genes in virus replication The genes associated with reovirus resistance can be grouped according to their presumed roles in virus entry, disassembly, translation, and maturation. Reovirus enters the host via an endocytic pathway that requires acidification and proteolysis to remove the viral outer capsid. Several candidate genes have presumptive roles that would be anticipated to affect virus replication by interfering with virus disassembly. For example, the mannose-6-phosphate receptor/insulin growth factor-2 receptor (Igf2r) transports cathepsins to the lysosome [12], and acidification of the lysosome is dependent upon the vacuolar H+-ATPase (Atp6v0c) [13]. NH4Cl is a weak base that interferes with the function of two of the tagged gene products, Igf2r and Atp6v0c, and blocks the disassembly of reovirus and several other viruses that enter cells via the endocytic pathway. Moreover, specific inhibitors of the vacuolar H+-ATPase gene product have been used to block the infectivity of reovirus and influenza A virus [13,14]. Four mutations in three different genes (Igf2r; Prss11, a protease associated with insulin binding protein 5; and Ctcf, a transcriptional repressor of IGF-II) are predicted to affect IGF-II expression and/or signalling. Cells containing the Ctcf mutation were subsequently found to express elevated levels of IGF-II, while enforced IGF-II expression was sufficient to confer high levels of reovirus resistance. The resistance was caused, at least in part, by a block in virus disassembly [4]. Similarly, both anti-IGF-II receptor antibodies and soluble IGF-II receptor have been reported to inhibit herpes simplex virus infection in vitro [15]. By inference, the recovery of several clones with mutations in genes involved in the IGF-II expression/signalling pathway suggests that mutations in multiple genes may affect the same phenotype by acting on a common pathway.
Additionally, several of the disrupted genes encode proteins that are thought to participate in the trafficking of cargo in cells, and may participate in various stages of virus infection. These include mutations in genes encoding three annexins (Anxa1, Anxa2, Anxa3) and calcyclin (S100a6/S100a1), proteins that may bind to each other [16-18], and mutations affecting cytoskeletal and cytoskeletal-associated proteins (Cald1, Kif13b, Mapt, Mkln1, Stmn1, Tm9sf4, and Tpm1). Annexin-II associates with cytomegalovirus virions, and anti-annexin-II antibodies have been found to prevent cytomegalovirus plaque formation [19]. Annexin-II is known to bind to several of the other gene products mutated in our library, including annexin-I, calcyclin, and α-tropomyosin (Supplementary Table [see Additional File 1]) [17,18,20]. One of the clones has a disruption in the gene for a novel cell receptor, OL-16, which is a member of the immunoglobulin superfamily [21,22]. A presumptive cellular receptor for reovirus, junctional adhesion molecule (Jam)-1 [22], has been shown to bind to all reovirus serotypes [23], whereas reovirus infection has been found to be host-cell specific [24]. OL-16 is expressed both in L-cells and in RIE-1 cells, which can be infected by reovirus type 1, but not in murine erythroleukemia (MEL) cells, which are resistant to infection by type 1 reovirus [25]. Forced expression of an OL-16 transgene in MEL cells increases their susceptibility to reovirus type 1 infection (J. Sheng and D. Rubin, unpublished results). Several of the candidate genes have products that interact with either reovirus or with other viruses (Supplementary Table [see Additional File 1]). Cellular activities involved in post-transcriptional gene regulation may influence the processing or translation of virus transcripts. Two candidate genes participate in these processes. Eif3, part of a multi-subunit translation initiation complex, has been found to specifically bind the 5' end of hepatitis C and classical swine fever virus mRNA [26]. The 64-kDa subunit of Cstf, which affects polyadenylation of mRNA, can be cross-linked to herpes simplex virus mRNA in infected HeLa cell extracts [27]. Other candidate genes are associated with the interferon pathway and host inflammatory responses [28-30]. For example, IκBζ (MAIL), as a component of the NF-κB pathway, may directly or tangentially affect interferon production, inflammation, or apoptosis. In addition, one gene encodes 6-pyruvoyl-tetrahydropterin synthase (Pts), a major regulator of interferon activity [31] associated with inducible nitric oxide synthase (iNOS). iNOS levels within cells affect the efficiency of replication of many viruses, including the avian reoviruses [32,33]. Many of the targeted gene products have roles involving the Golgi or endosomal compartments (Figure 3), and additional genes play a role in differentiation or growth arrest. Of these, several are in the transforming growth factor (TGF)-β and NF-κB regulatory pathways: Ppm1a [34], Madh7 [35-37], Ube1c [38,39], and Map3k7ip1 [40-43] (Table 1, Supplementary Table [see Additional File 1]). In addition, subunits of the Eif3 complex have been functionally linked to Mapkbp1 and the proteasome [44].
We have also disrupted a number of genes that participate in apoptosis (Supplementary Table [see Additional File 1]), and three disrupted genes affect N-linked protein glycosylation, a process that may affect compartmentalization of proteins or ligand interactions. Reovirus resistant cells have altered susceptibility to HSV-1 As several of the genes listed in the Supplementary Table [see Additional File 1] have been associated with herpes simplex virus-1 (HSV-1) replication, seven clones were tested for their susceptibility to HSV-1 infection [15,45]. These experiments utilized HSV-1(KOS)tk12, an infectious virus that expresses a lacZ reporter as an immediate-early gene [46]. Data representing seven clones with mutations that tag known genes are provided in Figure 4. Four clones, with mutations in the Eif3s10, Anxa1, Mgat1, and Igf2r genes, were resistant to HSV-1 infection (Figure 4B,4C, b, d, f, and h), and there was a diminished capacity to express the immediate-early lacZ reporter gene. However, two of the clones (Figure 4B,4C, c and e), with mutations in the genes encoding calcyclin and annexin-II, were more susceptible than the parental RIE-1 cells to HSV-1 infection and expressed higher levels of the immediate-early lacZ reporter gene. Representative clones that contain altered levels of HSV-1 immediate early gene expression are shown in Figure 4A. LacZ expression in cells containing a disrupted calcyclin (S100a6) gene was readily apparent 4 h following infection, whereas lacZ expression was barely detected in Eif3s10 mutant cells 16 h following infection. In all cases, levels of lacZ expression correlated with susceptibility to HSV-1 infection, suggesting that resistance involved early steps in the viral lifecycle. Figure 4 HSV-1 infection is affected in cell clones selected as reovirus resistant The level of transcription and translation of the reporter gene, lacZ, present in the immediate early genes of HSV-1 is shown in A and B. A) The level of expression of mRNA is shown at 4, 8 and 16 h following infection for a library of non-reovirus-selected RIE-1 cells (L42), and two clones that disrupt the Eif3s10 (p162) and S100a6 genes. The cell clone with a disrupted S100a6 gene has a dramatic increase in HSV-1 expression, with a concomitant decrease in cellular gene expression, by 16 h. While there is more mRNA loaded in the lane with a disrupted Eif3s10 gene than is present in the other lanes, there is no evidence for HSV-1 expression in this cell until 16 h following infection. B) Translation of the LacZ reporter in the immediate early genes of HSV-1. At 8 hours following infection, translation of the LacZ gene is dramatically increased in clones with mutant S100a6 and Anxa2 genes, barely detectable in the population of non-selected library cells (L42) and a cell clone that tags the Aptx+/DnaJa1- genes, and is not evident in the other mutant clones. C) Cell survival was determined by gentian violet staining of cells at 72 hours. L42 cells, and clones with mutations in the Aptx+/DnaJa1, annexin II, and S100a6 genes, lysed, whereas clones with mutations in the Eif3s10, Anxa1, Mgat1, and Igf2r genes were resistant to lytic infection. a-library 42, non-selected; b-Eif3s10; c-calcyclin; d-Anxa1; e-Anxa2; f-Mgat1; g-Aptx+/DnaJa1 (negative strand); h-Igf2r Discussion Candidate genes required for lytic reovirus infections were identified by tagged sequence mutagenesis, a process that permits rapid identification of genes disrupted by gene entrapment.
Since virus-resistant mutants may arise by a variety of mechanisms, additional experiments are needed to demonstrate that individual genes disrupted by gene entrapment actually contribute to the reovirus-resistant phenotype. Even so, several lines of evidence suggest that the genes collectively identified by tagged sequence mutagenesis include cellular activities that participate in the virus lifecycle. First, reovirus-resistant clones were selected at higher frequencies from entrapment libraries than from non-mutagenized cells, suggesting that reovirus-resistant phenotypes were induced by gene trap mutagenesis. Second, the genes associated with reovirus resistance differed from genes targeted in an unselected manner in mouse ES cells. Known mutational hot spots of the U3NeoSV1 were under-represented, and a number of mutations associated with virus resistance appeared to cluster within specific cellular processes and/or affected different components of multi-protein complexes that are likely to play roles in the virus lifecycle. These include IGF-II expression/signalling (3 genes), cytoskeletal/vesicular/trafficking (20 genes), signalling pathways (11 genes), and apoptosis (4 genes). Finally, we recently demonstrated that the disruption of Ctcf, a transcriptional repressor of IGF-II, was directly responsible for reovirus-resistance. In particular, cells containing the Ctcf mutation express elevated levels of IGF-II, while parental RIE-1 cells forced to express IGF-II acquired high levels of reovirus resistance. The mutation in Ctcf was chosen for further analysis because it was one of 4 mutations affecting IGF-II expression and/or signalling [4]. By inference, the recovery of inserts affecting other genes in the IGF-II signalling pathway suggests that the same phenotype can be achieved through mutations in multiple genes in a common pathway. Taken together, these results suggest that the candidate genes identified by tagged sequence mutagenesis provide useful information to direct mechanistic studies of cellular processes that participate in the virus lifecycle. These studies utilized a diploid cell line to select for reovirus resistance. Therefore, recessive phenotypes resulting from loss-of-function mutations are generally expected to require separate inactivation of genes carried on both autosomes. In principle, this could occur through pre-existing heterozygosity or by loss of the unoccupied allele through one of several mechanisms, such as gene conversion, non-disjunction, or transcriptional repression. Several of the candidate genes discovered in these experiments are imprinted, and may therefore be anticipated to be mono-allelic in their expression, including the maternally imprinted Igf2r. Alternatively, mutations induced by gene entrapment may confer reovirus resistance as a result of gene dosage effects (e.g. haploinsufficiency). For example, recent data suggest that the most common genetic disease in Caucasians, cystic fibrosis, involves mutations in the ABC-cassette transporter protein, CFTR, that confer resistance to infection by Salmonella typhi [47]. Protection is afforded to persons heterozygous for this allele. Similarly, cyclosporin analogs, which affect P glycoprotein (a member of the ABC cassette transporters), inhibit the growth of Cryptosporidium parvum [48]. Of note, one of the genes associated with reovirus resistance identified in the present study, Pgy1 (Abcb1), encodes P glycoprotein (Table 1, Supplementary Table [see Additional File 1]).
Finally, while the U3NeoSV1 entrapment vector lacks the MuLV enhancer element, we cannot exclude the possibility that the phenotype observed was related to dominant mutations caused either by transcriptional activation of adjacent cellular genes or by the expression of truncated proteins with dominant-negative activity. The circumstances that allow gene entrapment to disrupt the function of diploid genes illustrate that events secondary to provirus integration may be required for expression of some reovirus resistant phenotypes. Consequently, while the entrapment libraries were theoretically large enough (2 × 10^5 independent mutations) to disrupt all expressed genes, it seems unlikely that all the genes that are required for virus infection, and that can be targeted by tagged sequence mutagenesis, were identified in the present study. Reovirus infection may induce apoptosis in vivo and in vitro, and the suppression of apoptosis enhances the survival of mice infected with reovirus type 3 [49,50]. Mutations associated with reovirus resistance included a number in proapoptotic genes, including IκBζ and Bcl2l1; however, the precise role of this pathway, and of the genes we have disrupted, in modulating reovirus infectivity is unknown [51,52]. Therefore, while many genes are associated with known pathways, further studies will be required to understand the manner by which these pathways influence reovirus infection. Genetic alterations giving rise to reovirus resistant clones have variable effects on HSV-1 replication, with some reovirus resistant clones showing enhanced HSV-1 replication. The reasons for this are unknown, although the two viruses enter cells by different mechanisms. The early steps in HSV replication require entry into cells and release of the capsid with migration to the nucleus, steps in which both viral and cellular proteins play roles [53,54], whereas the entry of reovirus does not involve transit to the nucleus. Enhanced HSV-1 replication in clones containing mutations in the S100a6 (calcyclin) and Anxa2 genes was accompanied by a dramatic increase in immediate-early gene expression. This temporal enhancement of HSV-1 replication may reflect activities of the calcyclin and annexin 2 proteins that suppress HSV-1 entry [55-57]. In addition to the clone with a mutation in Anxa1, clones with mutations in Eif3s10, Mgat1, and Igf2r also show decreases in transcription and translation of virus mRNA and in cell death. Of these, mutations in the Igf2r are known to affect HSV replication [15,54,58], whereas associations of HSV replication with the proteins encoded by Eif3s10, Anxa1, and Mgat1 are novel. These data suggest that some of the candidate genes discovered in clones surviving reovirus infection may affect common cellular processes that are used by other viruses. Studies in a variety of systems indicate that resistance to infection may be found in nature or achieved in cultured cells. Results presented here support the hypothesis that pathways involved in reovirus infection can be identified through a functional genomics approach based upon insertional mutagenesis. Systematic selection of virus-resistant mutant cells in which the mutant gene can be easily identified may also identify targets for the development of anti-viral therapies. Drugs that disrupt cellular processes may circumvent the problem of virus resistance generally observed with drugs directed against virus-encoded proteins.
Moreover, since mutations associated with virus resistance are, by necessity, not lethal to the cell, drugs that target the same processes are not expected to have overtly toxic side effects. The fact that the resulting candidate genes may play roles in the replication of other viruses suggests that different viruses may use similar host proteins for common steps required for virus entry, disassembly, transcription, translation, and reassembly. This view is supported by our studies with HSV-1 and by published reports implicating several of the genes disrupted in reovirus resistant cells (Supplementary Table [see Additional File 1]) in the replication of other viruses. Thus, mutant clones selected for resistance to lytic infection by one virus may provide targets for therapeutics that are active against other families of viruses. The dramatic increase in the pace of the genome project has led to an explosion of information concerning the genome sequences of several species of animals and pathogenic organisms. However, most of the gene sequences have not been functionally ascribed with regard to host-parasite interactions. As there are approximately 30,000 to 50,000 mammalian genes, the definition of function will become the major task facing scientists interested in the relationship between host genes and viral disease over the next decade. Conclusions Candidate host genes that participate in lytic virus infections were identified utilizing insertional mutagenesis. Mutant cell clones were recovered that lost their capacity to support virus replication but were able to proliferate. There was enrichment for genes involved in particular metabolic or signalling pathways, with many of the genes being selected more than once from independently derived libraries of RIE-1 cells. Several of the gene products are known to bind to each other. These genes or their products, identified by this process of selection, may provide targets for therapeutic intervention. Methods RIE-1, L-Cells and Virus Reovirus type 1, strain Lang, was initially obtained from Bernard N. Fields. Virus was passaged in L-cells, and a third-passage stock, purified over a CsCl gradient as previously described [59], was used for these experiments. To develop PI cell lines, RIE-1 cells were infected with reovirus type 1 at a multiplicity of infection (MOI) of 5, and surviving cells were maintained in Dulbecco's modification of Eagle's minimum essential medium (DMEM) (Irvine Scientific, Santa Ana, CA, USA). The herpes simplex virus (HSV)-1 clone, HSV-1 KOStk12, which expresses a reporter gene, lacZ, as an immediate-early gene [46], was a generous gift of Patricia Spear, Northwestern University, USA. For RIE-1 and L-cells, medium was supplemented with 10% fetal bovine serum, 2 mM L-glutamine, 100 units per ml penicillin, and 100 μg per ml streptomycin (Irvine Scientific, Santa Ana, CA, USA) [complete medium]. In some experiments, serum was omitted from the medium. The survival of cell monolayers following infection with reovirus or HSV-1 was determined by staining with gentian violet. Tagged sequence mutagenesis and selection for reovirus resistance Following infection of RIE-1 cells with the U3neoSV1 vector at an MOI of 0.1, mutagenized cells were selected for neomycin resistance in medium containing 1 mg/ml G418 sulfate (Clontech, Palo Alto, CA, USA) [6].
Twenty libraries of mutant RIE-1 cells, and one library of A549 human adenocarcinoma cells, each consisting of 10^4 gene entrapment events, were expanded until approximately 10^3 sibling cells represented each mutant clone. These cells were plated at sub-confluent density, incubated in serum-free media for 3 days until they became quiescent, and infected with reovirus serotype 1 at an MOI of 35 plaque-forming units (pfu) per cell. Eighteen hours following infection, the cells were detached with trypsin and plated in DMEM medium containing 10% fetal bovine serum (FBS) (Hyclone Laboratories, Inc., Logan, Utah, USA). After 6 h, the medium was removed and cells were maintained in serum-free medium until only a few cells remained attached to the flask. On average, one to ten clones were recovered from a library consisting of 10^7 mutant cells, an enrichment for selected cells of six orders of magnitude. Cells that survived the selection were transferred to cell culture plates in media containing 10% FBS, and the cells were divided for extraction of DNA and cryopreservation. Transcription and translation of HSV-1 immediate early gene reporter The transcription and translation of the HSV-1 immediate early reporter gene, lacZ, were determined by standard northern blot techniques and β-galactosidase assay, respectively. Generation of libraries of mutagenized RIE-1 cells Libraries of mutagenized cells were infected with reovirus serotype 1, strain Lang, to select for clones resistant to lytic infection. Selection of virus-resistant clones was performed in serum-free medium to suppress the emergence of persistently infected (PI) cells [4]. This is important since PI cells, which arise by a process involving adaptive mutations in both the virus and the cell genomes [60], provide a means whereby RIE-1 cells can acquire virus resistance in the absence of cellular mutations. Uninfected RIE-1 cells undergo growth arrest in serum-free medium, whereas PI RIE-1 cells are killed. DNA sequence analysis Genomic DNA immediately adjoining the 5' end of the proviral insert in each of 130 cell lines was cloned by plasmid rescue [6]. Approximately 300 to 600 base pairs of this flanking DNA were sequenced and compared with the non-redundant (nr) and expressed sequence tag (dbEST) nucleic acid databases [61]. The probability of a match with orthologous sequences in the databases varies due to interspecies variation, the amount of exon in the flanking DNA (in cases where the flanking DNA matches cDNA sequences), alternative splicing, and sequencing errors. Matches with sequences in the database were considered potentially significant if the probability score was <10^-5 and the sequence was non-repetitive. In most cases, the matching gene was in the same transcriptional orientation as the provirus. Moreover, matches involving cDNA sequences were co-linear across exons present in the flanking genomic DNA and diverged at splice sites. As indicated, virtually all of the genes identified had matches to murine, rat, or human gene sequences with p < 10^-10. Authors' contributions ELO and JS conducted most of the laboratory work. HER provided the vectors and advice on their use. DHR discovered that persistently infected cells require serum to survive, allowing the selection of genetically resistant cell clones, and did the genetic analysis. Drs. Ruley and Rubin provided funding and supervision for the research, and prepared the manuscript. All authors have read and approved the final manuscript.
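The match criteria described in the DNA sequence analysis section above (probability score below 10^-5, non-repetitive sequence, and usually matching transcriptional orientation) lend themselves to a simple filter. The sketch below shows one hypothetical way to encode them; the field names and hit records are illustrative placeholders, not the study's actual pipeline.

```python
# Hypothetical filter mirroring the stated criteria for flanking-sequence
# matches: significance threshold and non-repetitive sequence. The hit
# records below are invented examples for illustration.
SIGNIFICANCE_CUTOFF = 1e-5

hits = [
    {"gene": "Ctcf",  "p": 3e-42, "repetitive": False, "same_orientation": True},
    {"gene": "LINE1", "p": 1e-30, "repetitive": True,  "same_orientation": True},
    {"gene": "Gas5",  "p": 2e-4,  "repetitive": False, "same_orientation": True},
]

def informative(hit):
    # A match is considered potentially significant only if it clears
    # the probability cutoff and does not involve a repetitive element.
    return hit["p"] < SIGNIFICANCE_CUTOFF and not hit["repetitive"]

for hit in hits:
    status = "keep" if informative(hit) else "discard"
    print(f'{hit["gene"]}: {status}')
```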
Supplementary Material Additional File 1: Genes associated with resistance to lytic reovirus infection identified by tagged sequence mutagenesis. A list of previously named genes disrupted by the insertional mutagen U3neoSV1 and recovered in cell clones resistant to lytic infection is provided. The rat mRNA and vector insertion site accession numbers, rat chromosome location, human homologue chromosome location, links to the NCBI Entrez Gene and NCBI Nucleotide databases, and known virus interactions are listed. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC534806.xml |
524354 | Comparisons of the M1 genome segments and encoded μ2 proteins of different reovirus isolates | Background The reovirus M1 genome segment encodes the μ2 protein, a structurally minor component of the viral core, which has been identified as a transcriptase cofactor, nucleoside and RNA triphosphatase, and microtubule-binding protein. The μ2 protein is the most poorly understood of the reovirus structural proteins. Sequences have been reported for at least 9 of the 10 genome segments of each of the 3 prototypic reoviruses type 1 Lang (T1L), type 2 Jones (T2J), and type 3 Dearing (T3D), but the M1 genome segment sequences of only T1L and T3D have been previously reported. For this study, we determined the M1 nucleotide and deduced μ2 amino acid sequences for T2J, nine other reovirus field isolates, and various T3D plaque-isolated clones from different laboratories. Results Determination of the T2J M1 sequence completes the analysis of all ten genome segments of that prototype. The T2J M1 sequence contained a single-base-pair deletion in the 3' non-translated region, compared to the T1L and T3D M1 sequences. The T2J M1 gene showed ~80% nucleotide homology, and the encoded μ2 protein showed ~71% amino acid identity, with the T1L and T3D M1 and μ2 sequences, respectively, making the T2J M1 gene and μ2 protein among the most divergent of all reovirus genes and proteins. Comparisons of these newly determined M1 and μ2 sequences with newly determined M1 and μ2 sequences from nine additional field isolates and a variety of laboratory T3D clones identified conserved features and/or regions that provide clues about μ2 structure and function. Conclusions The findings suggest a model for the domain organization of μ2 and provide further evidence for a role of μ2 in viral RNA synthesis. The new sequences were also used to explore the basis for M1/μ2-determined differences in the morphology of viral factories in infected cells. The findings confirm the key role of Ser/Pro208 as a prevalent determinant of differences in factory morphology among reovirus isolates and trace the divergence of this residue and its associated phenotype among the different laboratory-specific clones of type 3 Dearing. | Background RNA viruses represent the most significant and diverse group of infectious agents for eukaryotic organisms on earth [1,2]. Virtually every RNA virus, except the retroviruses, must use an RNA-dependent RNA polymerase (RdRp) to copy its RNA genome into progeny RNA, an essential step in viral replication and assembly. The virally encoded RdRp is not found in uninfected eukaryotic cells and therefore represents an attractive target for chemotherapeutic strategies to combat RNA viruses. A better understanding of the structure/function relationships of RNA-virus RdRps has been gained from recent determinations of X-ray crystal structures for several of these proteins, including the RdRps of poliovirus, hepatitis C virus, rabbit calicivirus, and mammalian orthoreovirus [3-6]. However, the diverse and complex functions and regulation of these enzymes, including their interactions with other viral proteins and with cis-acting signals in the viral RNAs, mean that we have hardly scratched the surface in understanding most of them. The nonfusogenic mammalian orthoreoviruses (reoviruses) are prototype members of the family Reoviridae, which includes segmented double-stranded RNA (dsRNA) viruses of both medical (rotavirus) and economic (orbivirus) importance (reviewed in [7-9]).
Reoviruses have nonenveloped, double-shelled particles composed of eight different structural proteins encasing the ten dsRNA genome segments. Reovirus isolates (or "strains") can be grouped into three serotypes, represented by three commonly studied prototype isolates: type 1 Lang (T1L), type 2 Jones (T2J), and type 3 Dearing (T3D). Sequences have been reported for all ten genome segments of T1L and T3D, as well as for nine of the ten segments of T2J (all but the M1 segment) ( e.g. , see [ 10 , 11 ]). Each of these segments encodes either one or two proteins on one of its strands, the plus strand. After cell entry, transcriptase complexes within the infecting reovirus particles synthesize and release full-length, capped plus-strand copies of each genome segment. These plus-strand RNAs are used as templates for translation by the host machinery as well as for minus-strand synthesis by the viral replicase complexes. The latter process produces the new dsRNA genome segments for packaging into progeny particles. The particle locations and functions of most of the reovirus proteins have been determined by a combination of genetic, biochemical, and biophysical techniques over the past 50 years (reviewed in [ 8 ]). Previous studies have identified the reovirus λ3 protein, encoded by the L1 genome segment, as the viral RdRp [ 6 , 12 - 14 ]. Protein λ3 is a minor component of the inner capsid, present in only 10–12 copies per particle [ 15 ]. It has been proposed to bind to the interior side of the inner capsid, near the icosahedral fivefold axes, and recent work has precisely localized it there [ 16 , 17 ]. In solution, purified λ3 mediates a poly(C)-dependent poly(G)-polymerase activity, but it has not been shown to use virus-specific dsRNA or plus-strand RNA as template for plus- or minus-strand RNA synthesis, respectively [ 14 ]. This lack of activity with virus-specific templates suggests that viral or cellular cofactors may be required to make λ3 fully functional. Within the viral particle, where only viral proteins are known to reside, these cofactors are presumably viral in origin. The crystal structure of λ3 has provided substantial new information about the organization of its sequences and has suggested several new hypotheses about its functions in viral RNA synthesis and the possible roles of cofactors in these functions [ 6 ]. Notably, crystallized λ3 uses short viral and nonviral oligonucleotides as templates for RNA synthesis to yield short dsRNA products [ 6 ]. The reovirus μ2 protein has been proposed as a transcriptase cofactor, but it remains the most functionally and structurally enigmatic of the eight proteins found in virions. Like λ3, μ2 is a minor component of the inner capsid, present in only 20–24 copies per particle [ 15 ]. It is thought to associate with λ3 in the particle interior, in close juxtaposition to the icosahedral fivefold axes, but has not been precisely localized there [ 16 , 17 ]. A recent study has shown that purified μ2 and λ3 can interact in vitro [ 18 ]. The M1 genome segment that encodes μ2 is genetically associated with viral strain differences in the in vitro transcriptase and nucleoside triphosphatase (NTPase) activities of viral particles [ 19 , 20 ]. Recent work with purified μ2 has shown that it can indeed function in vitro as both an NTPase and an RNA 5'-triphosphatase [ 18 ]. 
The μ2 protein has also been shown to bind RNA and to be involved in the formation of viral inclusions, also called "factories", through microtubule binding in infected cells [18,21-23]. Nevertheless, its precise function(s) in the reovirus replication cycle remain unclear. Other studies have indicated that the μ2-encoding M1 segment genetically determines the severity of cytopathic effect in mouse L929 cells, the frequency of myocarditis in infected mice, the levels of viral growth in cardiac myocytes and endothelial cells, the degree of organ-specific virulence in severe combined immunodeficiency mice, and the level of interferon induction in cardiac myocytes [24-29]. The complete sequence of the M1 segment has been reported for both T1L and T3D [23,30,31]. However, computer-based comparisons of the M1 and μ2 sequences to others in GenBank have previously failed to show significant homology to other proteins, so that no clear indications of μ2 function have come from that approach. Nevertheless, small regions of sequence similarity to NTP-binding motifs have been identified near the middle of μ2, and recent work has indicated that mutations in one of these regions indeed abrogate the triphosphatase activities of μ2 [18,20]. For this study, we performed nucleotide-sequence determinations of the M1 genome segments of reovirus T2J, nine other reovirus field isolates, and reovirus T3D clones obtained from several different laboratories. The determination of the T2J M1 sequence completes the sequence determination of all ten genome segments of that prototype strain. We reasoned that comparisons of additional M1 and μ2 sequences might reveal conserved features and/or regions that provide clues about μ2 structure and function. The findings provide further evidence for a role of μ2 in viral RNA synthesis. We also took advantage of the newly available sequences to explore the basis for M1/μ2-determined strain differences in the morphology of viral factories in reovirus-infected cells. Results and Discussion M1 nucleotide and μ2 amino acid sequences of reovirus T2J and nine other field isolates We determined the nucleotide sequence of the M1 genome segment of reovirus T2J to complete the sequencing of that isolate's genome. T2J M1 was found to be 2303 base pairs in length (GenBank accession no. AF124519) (Table 1). This is one base pair shorter than the M1 segments of reoviruses T1L and T3D [23,30,31], due to a single base-pair deletion in T2J corresponding to position 2272 in the 3' nontranslated region of the T1L and T3D plus strands (Fig. 1, Table 1). Like the T1L and T3D M1 plus strands, the T2J M1 plus strand contains a single long open reading frame, encoding a μ2 protein of 736 amino acids (Fig. 2, Table 1), having the same start and stop codons (Fig. 1), and having a 5' nontranslated region that is only 13 nucleotides in length (Table 1). Because of the single-base deletion described above, the 3' nontranslated region of the T2J M1 plus strand is only 82 nucleotides in length, compared to 83 for T1L and T3D (Table 1). Regardless, M1 has the longest 3' nontranslated region of any of the genome segments of these viruses, the next longest being 73 nucleotides in S3 (reviewed in [32]).
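Deduced-protein properties like those summarized in Table 1 below (molecular mass, isoelectric point, charged-residue counts) can be recomputed from a μ2 sequence with standard tools. The sketch here uses Biopython's ProtParam as one possible route; the authors' actual software is not stated, and the sequence shown is a short placeholder fragment, not the full-length 736-residue μ2.

```python
# Sketch: recompute Table 1-style properties for a deduced mu2 sequence.
# Requires Biopython. The sequence below is a placeholder fragment; a
# real analysis would use the full 736-residue mu2 sequence.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

mu2_fragment = "MAYIAVPAVVDSRSSEREGSLFSKLRNLFHRK"  # placeholder only

analysis = ProteinAnalysis(mu2_fragment)
counts = analysis.count_amino_acids()

print(f"mass (kDa): {analysis.molecular_weight() / 1000:.1f}")
print(f"pI: {analysis.isoelectric_point():.2f}")
print(f"Asp+Glu: {counts['D'] + counts['E']}")
print(f"Arg+Lys+His: {counts['R'] + counts['K'] + counts['H']}")
```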
Table 1 Features of M1 genome segments and μ2 proteins from different reovirus isolates. Columns give values for each reovirus isolate a; rows give each M1 or μ2 property b.

| Property | T1L c | T2J | T3D d | T3D e | T1C11 | T1C29 | T1N84 | T2N84 | T2S59 | T3C12 | T3C18 | T3C44 | T3N83 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Accession no. | X59945 | AF124519 | M27261 | AF461683 | AY428870 | AY428871 | AY428872 | AY428873 | AY428874 | AY551083 | AY428875 | AY428876 | AY428877 |
| total nuc | 2304 | 2303 | 2304 | 2304 | 2304 | 2304 | 2304 | 2304 | 2304 | 2304 | 2304 | 2304 | 2304 |
| 5' NTR | 13 | 13 | 13 | 13 | 13 | 13 | 13 | 13 | 13 | 13 | 13 | 13 | 13 |
| 3' NTR | 83 | 82 | 83 | 83 | 83 | 83 | 83 | 83 | 83 | 83 | 83 | 83 | 83 |
| total AA | 736 | 736 | 736 | 736 | 736 | 736 | 736 | 736 | 736 | 736 | 736 | 736 | 736 |
| mass (kDa) | 83.3 | 84.0 | 83.3 | 83.2 | 83.2 | 83.3 | 83.4 | 83.3 | 83.5 | 83.2 | 83.3 | 83.3 | 83.4 |
| pI | 6.92 | 7.44 | 6.98 | 6.89 | 7.10 | 7.09 | 6.98 | 6.92 | 6.96 | 6.89 | 6.92 | 7.09 | 7.01 |
| Asp+Glu | 85 | 84 | 85 | 85 | 84 | 84 | 85 | 85 | 84 | 85 | 85 | 84 | 85 |
| Arg+Lys+His | 102 | 105 | 102 | 101 | 103 | 103 | 102 | 102 | 100 | 101 | 102 | 103 | 103 |

a Abbreviations defined in text. b nuc, nucleotides; NTR, nontranslated region; AA, amino acids; pI, isoelectric point. c All indicated values are the same for the T1L M1 and μ2 sequences obtained for the Brown laboratory clone ([31]; indicated GenBank accession number), the Nibert laboratory clone ([23]; GenBank accession no. AF461682), and the Coombs laboratory clone (this study). d T3D M1 and μ2 sequences for the Joklik laboratory clone ([30]; indicated GenBank accession number) and the Cashdollar laboratory clone ([23]; GenBank accession no. AF461684). e T3D M1 and μ2 sequences for the Nibert laboratory clone [23] and the Coombs laboratory clone (this study). Figure 1 Sequences near the 5' (A) and 3' (B) ends of the M1 plus strands of 14 reovirus isolates. The start and stop codons are indicated by bold and underline, respectively. The one-base deletion in the 3' noncoding region of the T2J sequence is indicated by a triangle. Positions at which at least one sequence differs from the others are indicated by dots. GenBank accession numbers for corresponding sequences are indicated between the clones' names and 5' sequences in "A". Clones are: T1L (type 1, Lang), T1C11 (type 1, clone 11), T1C29 (type 1, clone 29), T1N84 (type 1, Netherlands 1984), T2J (type 2, Jones), T2N84 (type 2, Netherlands 1984), T2S59 (type 2, simian virus 59), T3D (type 3, Dearing), T3C12 (type 3, clone 12), T3C18 (type 3, clone 18), T3C44 (type 3, clone 44), and T3N83 (type 3, Netherlands 1983). T1L clones were obtained from Dr. E.G. Brown (Brown) or our laboratories (Coombs/Nibert). T3D clones were obtained from Drs. W.K. Joklik and L.W. Cashdollar (Joklik/Cashdollar) or our laboratories (Coombs/Nibert). Figure 2 Alignment of the deduced μ2 amino acid sequences of T1L, T2J, T3D, and various field isolates. The single-letter amino acid code is used, and only the T1L μ2 sequence from the Brown laboratory is shown in its entirety. For other isolates, only those amino acids that differ from this T1L sequence are shown. Clones are arranged in the same order as in Fig. 1; the second T1L μ2 sequence is from the Nibert and Coombs laboratories, the first T3D μ2 sequence is from the Joklik and Cashdollar laboratories, and the second T3D μ2 sequence is from the Nibert and Coombs laboratories. Amino acid positions are numbered above the sequences. Some symbols represent various nonconservative changes among the isolates: *, change involving a charged residue; §, change involving an aromatic residue; †, change involving a proline residue; ‡, change involving a cysteine residue. Residue 208, which has been previously shown to affect microtubule association by μ2, is indicated by a filled diamond.
Residues 410–420 and 446–449, which have been previously identified as NTP-binding motifs, are indicated by filled circles. Consecutive runs of wholly conserved residues ≥ 15 amino acids in length are indicated by the lines numbered 1 to 8. To gain further insights into μ2 structure/function relationships, we determined the M1 nucleotide sequences of nine other reovirus field isolates [33,34]. The M1 segments of each of these viruses were found to be 2304 base pairs in length (GenBank accession nos. AY428870 to AY428877 and AY551083), the same as T1L and T3D M1 (Fig. 1). Like those of T1L, T2J, and T3D, the M1 plus strand from each of the field isolates contains a single long open reading frame, again encoding a μ2 protein of 736 amino acids (Fig. 2) and having the same start and stop codons (Fig. 1). Their 5' and 3' nontranslated regions are therefore the same lengths as those of T1L and T3D M1 (Table 1). As part of this study, we also determined the M1 nucleotide sequences of the reovirus T1L and T3D clones routinely used in the Coombs laboratory. We found these sequences to be identical to those recently reported for the respective Nibert laboratory clones [23]. Further comparisons of the M1 nucleotide sequences The T2J M1 genome segment shares 71–72% homology with those of both T1L and T3D (Table 2). This makes T2J M1 the most divergent of all nonfusogenic mammalian orthoreovirus genome segments examined to date, with the exception of the S1 segment, which encodes the attachment protein σ1 and which shows less than 60% nucleotide sequence homology between serotypes ([35,36]; reviewed in [11]). In contrast, the homology between T1L and T3D M1 is ~98%, among the highest values seen to date between reovirus genome segments from distinct field isolates [11,31,34,37-39]. Table 2 Pairwise comparisons of M1 genome segment and μ2 protein sequences from different reovirus isolates. Identity (%) compared with reovirus isolate a.

| Virus isolate | T1L b | T1L c | T2J | T3D d | T3D e | T1C11 | T1C29 | T1N84 | T2N84 | T2S59 | T3C12 | T3C18 | T3C44 | T3N83 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| T1L b | -- | 99.9 f | 80.8 | 98.6 | 98.8 | 99.2 | 98.0 | 98.4 | 98.8 | 96.3 | 98.8 | 99.0 | 98.0 | 98.2 |
| T1L c | 99.9 f | -- | 81.0 | 98.8 | 98.9 | 99.3 | 98.1 | 98.5 | 98.9 | 96.2 | 98.9 | 99.2 | 98.1 | 98.4 |
| T2J | 71.6 | 71.6 | -- | 80.0 | 80.2 | 80.4 | 80.3 | 80.2 | 80.4 | 81.5 | 80.2 | 80.3 | 80.3 | 80.4 |
| T3D d | 97.8 | 97.9 | 70.9 | -- | 99.6 | 98.6 | 97.4 | 97.8 | 98.2 | 95.5 | 99.6 | 98.5 | 97.4 | 98.0 |
| T3D e | 97.9 | 98.0 | 71.0 | 99.7 | -- | 98.8 | 97.6 | 98.0 | 98.4 | 95.7 | 100 | 98.6 | 97.6 | 98.1 |
| T1C11 | 98.7 | 98.7 | 71.3 | 97.1 | 97.1 | -- | 98.0 | 98.4 | 98.8 | 96.1 | 98.8 | 99.6 | 98.0 | 98.8 |
| T1C29 | 96.3 | 96.4 | 71.1 | 95.8 | 95.8 | 95.5 | -- | 97.3 | 97.8 | 95.7 | 97.6 | 97.8 | 100 | 97.0 |
| T1N84 | 96.3 | 96.3 | 70.8 | 95.7 | 95.8 | 95.9 | 94.5 | -- | 98.5 | 95.7 | 98.0 | 98.2 | 97.3 | 97.4 |
| T2N84 | 97.1 | 97.1 | 71.0 | 96.5 | 96.6 | 96.7 | 95.4 | 96.5 | -- | 96.2 | 98.4 | 98.6 | 97.8 | 97.8 |
| T2S59 | 89.8 | 89.9 | 71.3 | 89.2 | 89.3 | 89.2 | 89.4 | 89.1 | 89.7 | -- | 95.7 | 95.9 | 95.7 | 95.1 |
| T3C12 | 97.8 | 97.9 | 71.0 | 99.7 | 99.9+ | 97.2 | 95.7 | 95.7 | 96.6 | 89.3 | -- | 98.6 | 97.6 | 98.1 |
| T3C18 | 98.8 | 98.9 | 71.2 | 97.3 | 97.4 | 99.4 | 95.8 | 95.8 | 96.8 | 89.4 | 97.4 | -- | 97.8 | 98.6 |
| T3C44 | 96.5 | 96.6 | 71.1 | 95.9 | 95.9 | 95.7 | 99.7 | 94.6 | 95.5 | 89.4 | 95.9 | 96.0 | -- | 97.0 |
| T3N83 | 97.7 | 97.8 | 71.4 | 96.4 | 96.4 | 98.6 | 94.7 | 94.9 | 95.8 | 88.5 | 96.4 | 98.4 | 95.0 | -- |

a Abbreviations defined in text. b T1L M1 and μ2 sequences for the Brown laboratory clone ([31]; GenBank accession no. X59945). c T1L M1 and μ2 sequences for the Nibert laboratory clone ([23]; GenBank accession no. AF461682) and the Coombs laboratory clone (this study). d T3D M1 and μ2 sequences for the Joklik laboratory clone ([30]; GenBank accession no. M27261) and the Cashdollar laboratory clone ([23]; GenBank accession no. AF461684).
e T3D M1 and μ2 sequences for the Nibert laboratory clone [23]; GenBank accession no. AF461683) and the Coombs laboratory clone (this study). f Values for M1-gene sequence comparisons are shown below the diagonal, in bold; values for μ2-protein sequence comparisons are shown above the diagonal. The M1 genome segments of the nine other reovirus isolates examined in this study are much more closely related to those of T1L and T3D than to that of T2J (Table 2 ), as also clearly indicated by phylogenetic analyses (Fig. 3 and data not shown). Such greater divergence of the gene sequences of T2J has been observed to date with other segments examined from multiple reovirus field isolates [ 11 , 34 , 37 - 39 ]. Type 2 simian virus 59 (T2S59) has the next most broadly divergent M1 sequence, but it is no more similar to the M1 sequence of T2J than it is to that of the other isolates (Table 2 , Fig. 3 ). In sum, the results of this study provided little or no evidence for divergence of the M1 sequences along the lines of reovirus serotype (Fig. 3 ), consistent with independent reassortment and evolution of the M1 and S1 segments in nature. Upon considering the sources of these isolates [ 34 ], the results similarly provided little or no evidence for divergence of the M1 sequences along the lines of host, geographic locale, or date of isolation (Fig. 3 ). These findings are consistent with ongoing exchange of M1 segments among reovirus strains cocirculating in different hosts and locales. Similar conclusions have been indicated by previous studies of other genome segments from multiple reovirus field isolates [ 11 , 34 , 37 - 39 ]. The M1 nucleotide sequence of type 3 clone 12 (T3C12) is almost identical to that of the T3D clone in use in the Coombs and Nibert laboratories, with only a single silent change (U→C) at plus-strand position 1532 ( i.e., 99.9+% homology). However, several of the T3C12 genome segments show distinguishable mobilities in polyacrylamide gels (data not shown), confirming that T3C12 is indeed a distinct isolate. Figure 3 Most parsimonious phylogenetic tree based on the M1 nucleotide sequences of the different reoviruses. Sequences for T1L and T3D clones from different laboratories are shown (laboratory source(s) in parentheses). Horizontal lines are proportional in length to nucleotide substitutions. Further comparisons of the μ2 protein sequences The T2J μ2 protein shares 80–81% homology with those of both T1L and T3D (Table 2 , Fig. 2 ). Consistent with the M1 nucleotide sequence results, this makes T2J μ2 the most divergent of all nonfusogenic mammalian orthoreovirus proteins examined to date, with the exception of the S1-encoded σ1 and σ1s proteins, which show less than 55% amino acid sequence homology between serotypes [ 35 , 36 ]; reviewed in [ 11 ]. In contrast, the homology between T1L and T3D μ2 approaches 99%, among the highest values seen to date between reovirus genome segments from distinct isolates [ 11 , 31 , 34 , 37 - 39 ]. Also consistent with the M1 nucleotide sequence results, the μ2 proteins of the nine other reovirus isolates examined in this study are much more closely related to those of T1L and T3D than to that of T2J (Table 2 , Fig. 3 ), affirming the divergent status of the T2J μ2 protein. The μ2 protein sequence of T3C12 is identical to that of the T3D clone in use in the Coombs and Nibert laboratories. In addition, the μ2 protein sequence of T1C29 is identical to that of T3C44. 
These are the first reported instances of reovirus proteins from distinct isolates sharing identical amino acid sequences [ 11 , 32 , 34 , 37 - 39 ], reflecting the high degree of μ2 conservation. The encoded μ2 proteins of the twelve reovirus isolates are all calculated to have molecular masses between 83.2 and 84.0 kDa, and isoelectric points between 6.89 and 7.44 pH units (Table 1 ). This range of isoelectric points is the largest yet seen among reovirus proteins other than σ1s [ 11 ], but is largely attributable to the divergent value of T2J μ2 (others range only from 6.89 to 7.10). The substantially higher isoelectric point of T2J μ2 is explained by its larger number of basic residues (excess arginine) relative to the other isolates (Table 1 ). Comparisons of the twelve μ2 sequences showed eight highly conserved regions, each containing ≥ 15 consecutive residues that are identical in all of the isolates (Fig. 2 ). The highly conserved regions are clustered in two larger areas of μ2, spanning approximately amino acids 1–250 and amino acids 400–610. Conserved region 5 in the 400–610 area encompasses the more amino-terminal of the two NTP-binding motifs in μ2 (Fig. 2 ) [ 18 , 20 ]. The other NTP-binding motif is also wholly conserved, but within a smaller consecutive run of conserved residues. The region between the two motifs is notably variable (Fig. 2 ). Conserved region 5 also contains the less conservative of the two amino acid substitutions in the T1L-derived temperature-sensitive (ts) mutant tsH11.2 (Pro414→His) [ 40 ]. The pattern of conserved and variable areas of μ2 was also seen by plotting scores for sequence identity in running windows over the protein length (e.g., [ 32 ]). In addition to the conserved regions described above, areas of greater-than-average variation are evident in this plot, spanning approximately amino acids 250–400 and 610–736 (the carboxyl terminus) (Fig. 4 ). The 250–400 area is notable for regularly oscillating between conserved and variable regions (Fig. 4 ). The two large areas of greater-than-average sequence conservation, spanning approximately amino acids 1–250 and 400–610 (Fig. 4 ), are likely to be involved in the protein's primary function(s). The more variable 250–400 area between the two conserved ones might represent a hinge or linker of mostly structural importance. Figure 4 Window-averaged scores for sequence identity among the T1L, T2J, and T3D μ2 proteins. Identity scores averaged over running windows of 21 amino acids and centered at consecutive amino acid positions are shown. The global identity score for the three sequences is indicated by the dashed line. Two extended areas of greater-than-average sequence variation are marked with lines below the plot. Two extended areas of greater-than-average sequence conservation are marked with lines above the plot. Eight regions of ≥ 15 consecutive residues of identity among all twelve μ2 sequences from Fig. 2, as discussed in the text, are numbered above the plot. The Ser/Pro208 determinant of microtubule binding is marked with a filled diamond. The two putative NTP-binding motifs are marked with filled circles.
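A window-averaged identity profile of the kind shown in Figure 4 is straightforward to recompute. The following is a minimal sketch in plain Python (not the GCG PLOTSIMILARITY program the authors used, and scoring strict identity rather than a similarity matrix); it assumes pre-aligned, equal-length sequence strings such as the three μ2 proteins, and the variable names are illustrative:

def window_identity(seqs, window=21):
    """Fraction of positions identical across all sequences,
    averaged over a running window centered on each residue."""
    length = len(seqs[0])
    # 1.0 where every sequence carries the same residue, else 0.0
    column_identity = [
        1.0 if len({s[i] for s in seqs}) == 1 else 0.0
        for i in range(length)
    ]
    half = window // 2
    profile = []
    for center in range(length):
        lo, hi = max(0, center - half), min(length, center + half + 1)
        profile.append(sum(column_identity[lo:hi]) / (hi - lo))
    return profile

# Usage (hypothetical variables holding the aligned mu2 sequences):
# profile = window_identity([mu2_t1l, mu2_t2j, mu2_t3d], window=21)
# The global identity score (dashed line in Fig. 4) corresponds to the
# mean of column_identity over the whole protein length.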
As indicated earlier, μ2 is one of the most poorly understood reovirus proteins, from both a functional and a structural point of view. For example, atomic structures are available for seven of the eight reovirus structural proteins, with μ2 being the missing one. Thus, in an effort to refine the model for μ2 structure/function relationships based on regional differences, we obtained predictions for secondary structures, hydropathy, and surface probability. PHD PredictProtein algorithms suggest that μ2 can be divided into four approximate regions characterized by different patterns of predicted secondary structures (Fig. 5C ). An amino-terminal region spans to residue 157, a "variable" region spans residues 157 to 450, a "helix-rich" region spans residues 450 to 606, and a carboxyl-terminal region spans the sequences after residue 606. The amino-terminal region contains six predicted α-helices and three predicted β-strands, and is highly conserved across all twelve μ2 sequences. The "variable" region is the most structurally complex and contains numerous interspersed α-helices and β-strands. The "helix-rich" region contains seven α-helices and is highly conserved across all twelve μ2 sequences. The carboxyl-terminal region varies across all three serotypes. Overall, the μ2 protein is predicted to be 48% α-helical and 14% β-sheet in composition, making it an "α-β" protein according to the CATH designation [ 41 ]. Interestingly, most tyrosine protein kinases with SH2 domains are also "α-β" proteins by this designation. The T1L and T3D μ2 hydropathy profiles were identical to each other. Both show numerous regions of similarity to the hydropathy profile of the T2J μ2. However, there are also several distinct differences between the T1L and T2J profiles (Fig. 5 ). Alterations in amino acid charge at residues 32, 430 to 432, and 673 in the T2J sequence account for the major differences in hydrophobicity between T2J and the other serotypes. In addition, the carboxyl-terminal 66 residues show multiple differences in hydropathy. The surface probability profiles of the three serotypes' μ2 proteins are identical (Fig. 5 ) and show numerous regions that are highly predicted to be exposed at the surface of the protein as well as regions predicted to be buried. Figure 5 Secondary structure predictions of μ2 protein. (A) Hydropathicity index predictions of T2J (- - -) and T1L (-----) μ2 proteins, superimposed to accentuate similarities and differences. Hydropathy values were determined by the Kyte-Doolittle method [72], using DNA Strider 1.2, a window length of 11, and a stringency of 7. (B) Surface probability predictions of the T2J μ2 protein, determined as per Emini et al. [73], using DNASTAR. The predicted surface probability profiles of T1L and T3D (not shown) were identical to T2J. (C) Locations of α-helices and β-sheets were determined by the PHD PredictProtein algorithms [74], and results were graphically rendered with Microsoft PowerPoint software. Symbols mark α-helix, β-sheet, and turn. Differences in fill pattern correspond to an arbitrary division of the protein into four regions: N, amino terminal; V, variable; H, helix-rich; C, carboxyl terminal. The locations of variable regions are indicated by the thick lines under the domain representation. The MOTIF and FingerPRINTScan programs were used to compare the highly conserved regions of μ2 with other sequences in protein data banks (ProSite, Blocks, and ProDomain). The results revealed that several of the conserved regions in μ2 share limited similarities with members of the DNA polymerase A family and with the SH2 domain of tyrosine kinases. The sequence YEAgDV in μ2, located in conserved region 2 (Fig. 2 ), is similar to the "YAD" motif of DNA polymerase A from a number of different bacteria (e.g., YEADDV in Deinococcus radiodurans). The YAD motif is located in the exonuclease region of DNA polymerase A, a region which also functions as an NTPase and enhances the rate of DNA polymerization [ 42 ]. The SH2 domain of tyrosine kinases was the highest-scoring hit for the conserved regions of μ2 with FingerPRINTScan. Four of the five motifs in the 100-amino-acid SH2 domain matched the μ2 sequence. The SH2 domain mediates protein-protein interactions through its capacity to bind phosphotyrosine [ 43 ]. The protein motifs found by focusing on the conserved regions of μ2 provide supportive evidence that this protein is involved in nucleotide binding and metabolism. However, the described similarities did not match with greater than 90% certainty, and no other significant homologies were detected. The inability to identify higher-scoring GenBank similarities, first noted when the sequences of the T3D and T1L M1 genes were reported [ 30 , 31 ], attests to the uniqueness of this minor core protein. Biochemical confirmations In an effort to provide biochemical confirmation of the predicted variation in the different isolates' μ2 proteins, we analyzed the T1L, T2J, and T3D proteins by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and immunoblotting. Despite the slightly larger molecular mass calculated from its sequence (Table 1 ), T2J μ2 displayed a slightly smaller relative molecular weight on gels than T1L and T3D μ2 (Fig. 6A ). This aberrant mobility may reflect the higher isoelectric point of T2J μ2 (Table 1 ). Polyclonal anti-μ2 antibodies that had been raised against purified T1L μ2 [ 44 ] reacted strongly with both T1L and T3D μ2, but only weakly with T2J μ2 (Fig. 6B ), despite equal band loading as demonstrated by Ponceau S staining. These antibody cross-reactivities correlated well with the predicted protein homologies (Table 2 ). Figure 6 SDS-PAGE and immunoblot analyses of virion and core particles. Proteins from gradient-purified T1L (1), T2J (2), and T3D (3) particles were resolved in 5–15% SDS-polyacrylamide gels as detailed in Materials and methods. Gels were then fixed and stained with Coomassie Brilliant Blue R-250 and silver (A). Alternatively, proteins from the gels were transferred to nitrocellulose, probed with anti-μ2 antiserum (polyclonal antibodies raised against T1L μ2, kindly provided by E. G. Brown), and detected by chemiluminescence (B). Virion proteins are indicated to the left of panel A, except for μ2, which is indicated between the panels. Factory morphologies among reovirus field isolates We took advantage of the new M1/μ2 sequences to extend analysis of the role of μ2 in determining differences in viral factory morphology among reovirus isolates [ 23 ]. Sequence variation at μ2 residue Pro/Ser208 was previously indicated to determine the different morphologies of T1L and T3D factories: Pro208 is associated with microtubule-anchored filamentous factories, as in T1L and the Cashdollar laboratory clone of T3D, whereas Ser208 is associated with globular factories, as in the Nibert laboratory clone of T3D [ 23 ]. For the previous study we had already examined the factories of T2J and some of the nine other isolates used for M1 sequencing above. We nonetheless examined the factories of all ten isolates anew in the present study, using the same stocks used for sequencing. T3C12 was the only one of these isolates that formed globular factories; the remainder, including T2J, formed filamentous factories (Fig. 7 , Table 4 ).
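The residue-208 association examined here can be expressed as a simple lookup. The following toy sketch in Python takes the residue assignments and morphologies from Table 4 below; the dictionary encoding is ours, for illustration only:

# Residue at mu2 position 208 -> predicted factory morphology (per Table 4)
MORPHOLOGY_BY_208 = {"Pro": "filamentous", "Ser": "globular"}

residue_208 = {
    "T1L": "Pro", "T2J": "Pro", "T3D (Cashdollar)": "Pro",
    "T3D (Nibert)": "Ser", "T1C11": "Pro", "T1C29": "Pro",
    "T1N84": "Pro", "T2N84": "Pro", "T2S59": "Pro",
    "T3C12": "Ser", "T3C18": "Pro", "T3C44": "Pro", "T3N83": "Pro",
}

for isolate, residue in residue_208.items():
    # Prediction matches the observed morphology for every isolate in Table 4
    print(isolate, MORPHOLOGY_BY_208[residue])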
This finding is consistent with the fact that T3C12 is the only one of these isolates that has a serine at μ2 residue 208, like T3D from the Nibert laboratory; the remainder, like T1L and T3D from the Cashdollar laboratory, have a proline there (Fig. 2 , Table 4 ) [ 23 ]. Thus, although the results identify no additional μ2 residues that may influence factory morphology, they are consistent with the identification of Pro/Ser208 as a prevalent determinant of differences in this phenotype among reovirus isolates. Figure 7 Viral factory morphology as demonstrated by the distribution of μNS in cells infected with various reovirus isolates. CV-1 cells were infected at 5 PFU/cell with the isolate indicated above each panel, fixed at 18 h p.i., and immunostained with μNS-specific rabbit IgG conjugated to Alexa 594. Size bars, 10 μm.
Table 4 Properties of different reovirus isolates

Virus isolate a   Virus factory morphology b   Amino acid at μ2 position 208
T1L               filamentous c                Pro c
T2J               filamentous d                Pro
T3D e             filamentous c                Pro c
T3D f             globular c                   Ser c
T1C11             filamentous                  Pro
T1C29             filamentous                  Pro
T1N84             filamentous d                Pro
T2N84             filamentous d                Pro
T2S59             filamentous d                Pro
T3C12             globular d                   Ser
T3C18             filamentous d                Pro
T3C44             filamentous                  Pro
T3N83             filamentous d                Pro
a Abbreviations defined in the text. b Determined by immunofluorescence microscopy as described in the text. c Reported in Parker et al. [23]. d Reported in supplementary data of Parker et al. [23]. e T3D clone from the Cashdollar laboratory. f T3D clone from the Nibert laboratory.
Factory morphologies and M1/μ2 sequences of other T3D and T3D-derived clones T3D clones from the Nibert and Cashdollar laboratories have been shown to exhibit different factory morphologies based on differences in the microtubule-binding capacities of their μ2 proteins and the presence of either serine or proline at μ2 residue 208 [ 23 ]. We took the opportunity in this study to examine additional T3D clones. The clones from some laboratories formed globular factories in infected cells whereas those from other laboratories or the American Type Culture Collection formed filamentous factories (Fig. 8 , Table 5 ). T3D-derived ts mutants tsC447 , tsE320 , and tsG453 [ 45 ] formed filamentous factories (Fig. 8 , Table 5 ). Other ts mutants were not examined; however, evidence has been presented [ 46 ] that tsF556 [ 45 ] forms filamentous factories as well. Figure 8 Viral factory morphology as demonstrated by the distribution of μNS in cells infected with T3D clones obtained from different laboratories or with T3D-derived ts clones. Laboratory sources are indicated in parentheses. CV-1 cells were infected at 5 PFU/cell with the clone indicated above each panel, fixed at 18 h p.i., and immunostained with μNS-specific rabbit IgG conjugated to Alexa 488. Size bars, 10 μm.
Table 5 Properties of different T3D and T3D-derived clones (positions of variation in T3D μ2)

Virus isolate   Laboratory source   Virus factory morphology   150   208     224   372
T3D             Nibert a            globular b                 Gln   Ser b   Glu   Ile
T3D             Coombs a            globular                   Gln   Ser     Glu   Ile
T3D             Schiff a            globular                   Gln   Ser     Glu   Ile
T3D             Tyler a             globular                   Gln   Ser     Glu   Ile
T3D             Cashdollar c        filamentous b              Arg   Pro b   Glu   Met
T3D             Duncan c            filamentous                Arg   Pro     Glu   Met
T3D             Shatkin             filamentous                Gln   Pro     Ala   Ile
T3D             ATCC                filamentous                Gln   Pro     Glu   Ile
tsC447          Coombs c            filamentous                Gln   Pro     Glu   Ile
tsE320          Coombs c            filamentous                Gln   Pro     Glu   Ile
tsG453          Coombs c            filamentous                Gln   Pro     Glu   Ile
a Origin traceable to B. N. Fields laboratory. b Reported in Parker et al. [23]. c Origin traceable to W. K.
Joklik laboratory; derived from T3D; sequences of tsC447 (GenBank accession no. AY428878), tsE320 , and tsG453 are identical. We additionally determined the M1 sequences of the wild-type and ts T3D clones newly tested for factory morphology. All clones with globular factories have a serine at μ2 position 208 whereas all those with filamentous factories have a proline there (Table 5 ). These findings provide further evidence for the influence of residue 208 on this phenotypic difference. All wild-type T3D clones with globular factories were recently derived from a Fields laboratory parent whereas all wild-type or ts T3D clones with filamentous factories were derived from parents in other laboratories. (Although extensively characterized by both Fields ( e.g. , [ 47 , 48 ]) and Joklik ( e.g. , [ 49 , 50 ]), the original T3D-derived ts mutants in groups A through G were generated in the Joklik laboratory [ 45 ]). This correlation suggests that formation of filamentous factories is the ancestral phenotype of reovirus T3D and that the Ser208 mutation in T3D μ2 was established later, in the Fields laboratory. As we noted in a previous study [ 23 ], several other laboratories reported evidence for filamentous T3D factories in the 1960's ( e.g. , [ 51 , 52 ]), following its isolation in 1955 [ 53 ]. Since microtubules were noted to be commonly associated with T3D factories in Fields laboratory publications from as late as 1973 [ 54 ], but not in one from 1979 [ 55 ], the μ2 Ser208 mutation was probably established in, or introduced into, that laboratory during the middle 1970's. Investigators should be alert to these different lineages of T3D and their derivatives for genetic studies. For example, reassortant 3HA1 [ 56 ] contains a T3D M1 genome segment derived from clone tsC447 , and its factory phenotype is filamentous (data not shown). Additional genome-wide comparisons of T1L, T2J, and T3D Several types of genome-wide comparisons of T1L, T2J, and T3D have been reported previously [ 11 ]. For this study we examined the positions and types of nucleotide mismatches in these prototype isolates in order to gain a more comprehensive view of the evolutionary divergence of their protein-coding sequences. Most mismatches between T2J and either T1L or T3D segments, ~68%, are in the third codon base position, while ~21% are in the first position and ~11% are in the second position. Each of these mismatch percentages was converted to an evolutionary divergence value by multiplying mismatch percentage by 1.33 [ 31 ] (Table 3 ). These values have been used to argue that the homologous T1L and T3D genome segments diverged from common ancestors at different times in the past, with the M1 and L3 segments having diverged most recently and the M2, S1, S2, and S3 segments having diverged longer ago [ 31 ]. The consistently high values for divergence at third codon base positions among pairings with T2J genome segments (Table 3 ) indicate that all ten T2J segments diverged from common ancestors substantially before their respective T1L and T3D homologs. Relative numbers of synonymous and nonsynonymous nucleotide changes identified in pairwise comparisons of the coding sequences of these isolates (Table 3 ) support the same conclusion. 
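The per-codon-position tallies in Table 3 below follow from a direct count. A minimal sketch of that calculation, including the conversion of mismatch percentage to an evolutionary divergence value by multiplying by 1.33 [ 31 ], assuming two aligned, gap-free coding sequences trimmed to the long open reading frame (function and variable names are illustrative):

def codon_position_variation(cds_a, cds_b):
    """Percent mismatch at first, second, and third codon base positions,
    plus the divergence estimate (mismatch percentage x 1.33)."""
    assert len(cds_a) == len(cds_b) and len(cds_a) % 3 == 0
    n_codons = len(cds_a) // 3
    result = {}
    for pos in range(3):  # codon base positions 1..3
        mismatches = sum(
            1 for c in range(n_codons)
            if cds_a[3 * c + pos] != cds_b[3 * c + pos]
        )
        pct = 100.0 * mismatches / n_codons  # base changes / total positions x 100
        result[f"position {pos + 1}"] = {
            "mismatch %": round(pct, 1),
            "divergence": round(1.33 * pct, 1),
        }
    return result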
Table 3 Pairwise comparisons of variation at different codon positions in reovirus genome segments Variation (%) in the long open reading frame of genome segment Codon position Isolate pair L1 L2 L3 M1 M2 M3 S2 S3 S4 first a T1L:T2J 16.9 19.9 12.2 24.6 11.1 25.3 13.7 25.5 13.1 T2J:T3D 16.7 20.4 12.7 26.1 10.7 25.0 14.0 25.5 13.9 T1L:T3D 2.4 15.4 1.4 1.5 6.0 7.6 6.1 6.6 4.0 second a T1L:T2J 5.3 8.0 3.3 11.8 1.7 10.0 4.1 8.4 5.1 T2J:T3D 5.1 7.5 3.2 11.8 1.7 9.6 4.1 8.0 5.5 T1L:T3D 0.8 3.5 0.3 0.4 2.1 2.0 0.0 2.2 1.1 third a T1L:T2J 77.1 83.7 79.4 80.1 81.5 81.2 74.0 79.1 73.8 T2J:T3D 76.7 77.4 79.1 81.0 82.7 83.0 73.0 73.9 76.7 T1L:T3D 12.9 76.1 7.5 6.5 53.3 39.2 53.6 48.1 21.9 syn. b T1L:T2J 88.3 90.2 89.6 85.8 90.0 87.1 83.8 90.2 81.9 T2J:T3D 87.5 84.2 89.3 87.0 89.3 89.8 83.6 85.4 84.2 T1L:T3D 15.0 85.9 8.8 7.9 59.3 46.4 63.1 58.2 25.8 nonsyn. b T1L:T2J 5.9 9.1 3.8 12.6 2.6 11.8 4.8 10.2 6.2 T2J:T3D 5.9 8.9 3.9 13.1 3.2 11.5 4.7 9.6 6.8 T1L:T3D 0.8 5.0 0.3 0.5 1.2 2.0 0.7 1.3 1.3 cons. c T1L:T2J 60.0 66.3 57.1 63.8 50.0 60.6 50.0 60.8 73.5 5.0 8.7 2.5 12.2 1.3 10.7 2.9 8.5 6.8 T2J:T3D 62.7 64.5 56.1 64.6 65.2 60.5 52.0 60.8 71.1 5.1 8.6 2.5 12.9 2.1 10.0 3.1 8.5 7.4 T1L:T3D 36.4 77.4 88.9 80.0 50.0 62.5 100 40.0 63.6 0.6 5.6 0.6 1.1 1.1 2.8 1.2 1.0 1.9 noncon. c T1L:T2J 18.1 10.7 17.9 17.0 11.1 18.9 20.8 17.6 14.7 1.5 1.4 0.8 3.3 0.3 3.3 1.2 2.5 1.4 T2J:T3D 18.6 9.9 19.3 16.3 13.0 16.8 20.0 15.7 21.1 1.5 1.3 0.9 3.3 0.4 2.8 1.2 2.2 2.2 T1L:T3D 18.2 8.6 11.1 0.0 12.5 3.1 0.0 20.0 27.3 0.3 0.6 0.1 0.0 0.3 0.1 0.0 0.5 0.8 S1 not included because of uncertainty in where to place gaps. a Values determined for each pairwise comparison as: # base changes / total such positions × 100. b Values determined as # of observed changes/ # of positions at which changes could have occurred × 100. c Upper value indicates proportion of all amino acid substitutions that are conservative or nonconservative (using CLUSTAL W analysis with BLOSUM weighting); semi-conservative substitutions not included. Lower bold value indicates proportion of indicated types of alterations as a percentage of total number of amino acids within whole protein. The types of amino acid substitutions within each of the prototype isolates' proteins were also examined. Pairwise analyses showed that most substitutions in most proteins were conservative (Table 3 ). Nonconservative substitutions were relatively rare in most proteins' pair-wise comparisons. For example, comparison of the T1L and T3D μ2 proteins showed none (0.0%) of the 10 amino acid substitutions were nonconservative, and most T1L:T3D comparisons gave low nonconservative substitution values ranging from 0.1–0.5% of total amino acid residues within the respective proteins. However, some genes, most notably M1, M3, and S3, demonstrated higher nonconservative variation, with values approaching 3.5% of total amino acid residues. Most of these higher nonconservative substitution values were observed when T2J proteins were compared to either T1L or T3D proteins. In addition, in many proteins, the majority of nonconservative substitutions were located within the amino-terminal portions (first ~20%) of the respective proteins (data not shown). The frequencies with which different redundant codons are used to encode certain mammalian amino acids are non-random (reviewed in [ 57 ]). This phenomenon is mirrored by different abundances of the complementary tRNA molecules in mammalian cells. 
For example, CG pairs are underrepresented in mammalian genomes and common in their "rare" codons (see Table 6 ). A recent study revealed that many RNA viruses of humans display mild deviations from host codon-usage frequencies and that these deviations are more prominent among viruses with segmented genomes [ 57 ]. However, reoviruses were not included in that study. By examining reovirus isolates T1L, T2J, and T3D, for which whole-genome sequences are now available, we found that codons that qualify as rare in mammals are not rare in reovirus (Table 6 ). Moreover, the few codons that qualify as rare in reovirus (ACC, AGC, CCC, CGG, CUC, and GCC; data not shown) are common in mammals. The basis and significance of these deviations remain unknown, but could have impacts on the rates of translation of reovirus proteins. It is perhaps notable in this regard that the four most highly expressed reovirus proteins (μ1, σ3, μNS, and σNS) have the lowest average frequencies of codons that are rare in mammals (Table 6 ). Thus, incorporation of rare codons into reovirus coding sequences could be a mechanism of dampening the expression of certain viral proteins. Table 6 Codon-usage frequencies in reovirus for eight codons that are rare in mammals Frequencies of selected codons in coding sequences of: a Mammalian genomes Reovirus genomes Individual reovirus genome segments (major protein encoded by each) Codon AA b Exp c Mus Bos Homo T1L T2J T3D L1 (λ3) L2 (λ2) L3 (λ1) M1 (μ2) M2 (μ1) M3 (μNS) S1 (σ1) S2 (σ2) S3 (σNS) S4 (σ3) ACG Thr 0.25 0.11 0.13 0.11 0.23 0.30 0.24 0.17 0.28 0.22 0.27 0.17 0.16 0.30 0.38 0.26 0.20 CCG Pro 0.25 0.11 0.12 0.11 0.17 0.20 0.17 0.12 0.20 0.15 0.27 0.20 0.14 0.18 0.25 0.07 0.11 CGU Arg 0.17 0.09 0.08 0.08 0.20 0.22 0.24 0.22 0.19 0.14 0.25 0.19 0.31 0.12 0.16 0.21 0.29 CUA Leu 0.17 0.08 0.09 0.08 0.15 0.13 0.14 0.18 0.13 0.14 0.19 0.09 0.18 0.16 0.09 0.05 0.16 GCG Ala 0.25 0.10 0.11 0.11 0.24 0.26 0.26 0.29 0.22 0.30 0.31 0.15 0.16 0.25 0.30 0.10 0.29 GUA Val 0.25 0.12 0.11 0.12 0.18 0.17 0.15 0.20 0.23 0.12 0.15 0.23 0.14 0.23 0.17 0.14 0.23 UCG Ser 0.17 0.05 0.06 0.06 0.14 0.17 0.14 0.13 0.14 0.18 0.16 0.11 0.03 0.13 0.18 0.20 0.16 UUA Leu 0.17 0.06 0.07 0.07 0.20 0.18 0.20 0.32 0.20 0.16 0.23 0.14 0.07 0.18 0.32 0.13 0.16 mean - 0.21 0.09 0.10 0.09 0.19 0.20 0.19 0.22 0.20 0.19 0.21 0.18 0.16 0.21 0.22 0.16 0.18 a As fraction of all codons for the particular amino acid. Bold, value higher than that in any of the indicated mammals; underlined, value more than double that in any of the indicated mammals. b Amino acid encoded by the codon c Expected frequency if codons for each amino acid are used randomly (assuming equal A, C, G, and U contents and no di- or trinucleotide bias). Methods Cells and viruses Reoviruses T1L, T2J, T3D, and T3C12 were Coombs and/or Nibert laboratory stocks. Other reovirus isolates were provided by Dr. T. S. Dermody (Vanderbilt University). Virus clones were amplified to the second passage in murine L929 cell monolayers in Joklik's modified minimal essential medium (Gibco) supplemented to contain 2.5% fetal calf serum (Intergen), 2.5% neonatal bovine serum (Biocell), 2 mM glutamine, 100 U/ml penicillin, 100 μg/ml streptomycin, and 1 μg/ml amphotericin B, and large amounts of virus were grown in spinner culture, extracted with Freon (DuPont) or Vertrel-XF (DuPont), and purified in CsCl gradients, all as previously described [ 19 , 58 ]. Sequencing the M1 genome segments All oligonucleotide primers were obtained from Gibco/BRL. 
Genomic dsRNA was extracted from gradient-purified virions with phenol/chloroform [ 59 ]. Strain identity was confirmed by resolving aliquots of each in 10% SDS-PAGE gels and comparing dsRNA band mobilities [ 60 ]. Oligonucleotide primers corresponding to either the 5' end of the plus strand or the 5' end of the minus strand were as previously described [ 40 ]. Additional oligonucleotides for sequencing were designed and obtained as needed. cDNA copies of the M1 genes of each virus were constructed by using the 5' oligonucleotide primers and reverse transcriptase (Gibco/BRL). The cDNAs were amplified by the polymerase chain reaction [ 61 ] and resolved in 0.7% agarose gels [ 59 ]. The bands corresponding to the 2.3-kb gene were then excised, purified, and eluted with Qiagen columns, using the manufacturer's instructions. Sequences of the respective cDNAs were determined in both directions by dideoxynucleotide cycle sequencing [ 62 - 64 ], using fluorescent dideoxynucleotides. Sequences at the termini of each M1 segment were determined by one or both of two methods. For some isolates, sequences near the ends of the segment were determined by modified procedures for rapid amplification of cDNA ends (RACE) as previously described [ 32 , 65 ]. In addition, the sequences at the ends of all M1 segments were determined in both directions by a modification of the 3'-ligation method described by Lambden et al. [ 66 ]. Briefly, viral genes from gradient-purified virions were resolved in a 1% agarose gel, and the M segments were excised and eluted with Qiagen columns as described above. Oligonucleotide 3'L1 (5'-CCCCAACCCACTTTTTCCATTACGCCCCTTTCCCCC-3'; phosphorylated at the 5' end and blocked with a biotin group at the 3' end) was ligated to the 3' ends of the M segments according to the manufacturer's directions (Boehringer Mannheim) at 37°C overnight. The ligated genes were repurified by agarose gel and Qiagen columns to remove unincorporated 3'L1 oligonucleotide and precipitated overnight with ice-cold ethanol. The precipitated genes were dissolved in 4 μl of 90% dimethyl sulfoxide. cDNA copies of the ligated M1 genes were constructed by using oligonucleotide 3'L2 (5'-GGGGGAAAGGGGCGTAATGGAAAAAGTGGGTTGGGG-3') and gene-specific internal oligonucleotide primers designed to generate a product of 0.5 to 1.2 kb in length. These constructs were amplified by PCR, purified in 1.5% agarose gels, excised, and eluted as described above. Sequences of these cDNAs were determined with gene-specific internal oligonucleotides and with oligonucleotide 3'L3 (5'-GGGGGAAAGGGGCGTAAT-3') by dideoxy-fluorescence methods. Sequence analyses DNA sequences were analyzed with DNASTAR, DNA Strider, BLITZ, BLAST, and CLUSTAL-W. Phylogenetic analyses were performed using the PHYLIP programs. DNAPARS (parsimony) (Fig. 3 ) and DNAML (maximum likelihood) (data not shown) produced essentially identical trees. These programs were run using the Jumble option to test the trees using 50 different, randomly generated orders of adding the different sequences. In addition, DNAPENNY (parsimony by branch-and-bound algorithm) generated a tree with the same branch orders as DNAPARS and DNAML. RETREE and DRAWGRAM were used to visualize the tree and to prepare the image for publication. Final refinement of the image was performed with Illustrator. Synonymous and nonsynonymous substitution frequencies were calculated according to the methods of Nei and Gojobori [ 67 ] as applied by Dr. B. Korber.
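As a simplified illustration of that distinction, the sketch below counts only observed codon changes, classifying a change as synonymous when the encoded amino acid is unchanged; it omits the Nei-Gojobori normalization by the number of sites at which synonymous and nonsynonymous changes could occur, and it assumes aligned, gap-free, in-frame coding strings:

from Bio.Seq import Seq  # Biopython; any standard-code translator would do

def classify_codon_changes(cds_a, cds_b):
    """Count observed synonymous and nonsynonymous codon differences
    between two aligned, in-frame coding sequences (a simplification,
    not the full Nei-Gojobori method)."""
    syn = nonsyn = 0
    for i in range(0, len(cds_a), 3):
        codon_a, codon_b = cds_a[i:i + 3], cds_b[i:i + 3]
        if codon_a == codon_b:
            continue
        if str(Seq(codon_a).translate()) == str(Seq(codon_b).translate()):
            syn += 1
        else:
            nonsyn += 1
    return syn, nonsyn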
Codon frequencies in the M1 coding sequences were determined using the COUNTCODON program. Values for codon frequencies in mammalian genomes were obtained from the Codon Usage Database. Protein sequence analyses were performed using the GCG programs in SeqWeb version 2 (Accelrys). Multiple sequence alignments were done with PRETTY. Determinations of molecular weights, isoelectric points, and residue counts were done with PEPTIDESORT. Determinations of percent identities in pairwise comparisons were done with GAP. Plots of sequence identity over running windows of different numbers of amino acids (Fig. 4 and data not shown) were generated with PLOTSIMILARITY, and the image for publication was refined with Illustrator (Adobe Systems). In addition, protein sequences were analysed for conservative and nonconservative substitutions by pairwise CLUSTAL-W analyses, using BLOSUM matrix weighting [ 68 ]. SDS-PAGE Gradient-purified virus and core samples were dissolved in electrophoresis sample buffer (0.24 M Tris [pH 6.8], 1.5% dithiothreitol, 1% SDS), heated to 95°C for 3–5 min, and resolved in a 5–15% SDS-PAGE gradient gel (16.0 × 12.0 × 0.1 cm) [ 69 ] at 5 mA for 18 h. Some sets of resolved proteins were fixed and stained with Coomassie Brilliant Blue R-250 and/or silver [ 70 ]. Immunoblotting Gradient-purified viral and core proteins were resolved by SDS-PAGE as described above, and sets of resolved proteins were transferred to nitrocellulose membranes with a Semi-Dry Transblot manifold (Bio-Rad Laboratories) according to the manufacturer's instructions. Transfer of all proteins was confirmed by Ponceau S staining. Nonspecific binding was blocked in TBS-T (10 mM Tris [pH 7.5], 100 mM NaCl, 0.1% Tween 20) supplemented with 5% milk proteins, and the membranes were probed with polyvalent anti-μ2 antibody (a kind gift from Dr. E. G. Brown, University of Ottawa). Membranes were washed with TBS-T, reacted with horseradish peroxidase-conjugated goat anti-rabbit IgG (Jackson ImmunoResearch Laboratories), and immune complexes were detected with the enhanced chemiluminescence system (Amersham Life Sciences) according to the manufacturer's instructions. Infections and IF microscopy CV-1 cells were maintained in Dulbecco's modified Eagle's medium (Invitrogen) containing 10% fetal bovine serum (HyClone Laboratories) and 10 μg/ml gentamycin solution (Invitrogen). Rabbit polyclonal IgG against μNS [ 71 ] was purified with protein A and conjugated to Alexa Fluor 488 or Alexa Fluor 594 using a kit obtained from Molecular Probes and titrated to optimize the signal-to-noise ratio. Cells were seeded the day before infection at a density of 1.5 × 10⁴/cm² in 6-well plates (9.6 cm²/well) containing round glass cover slips (18 mm). Cells on cover slips were inoculated with 5 PFU/cell in phosphate-buffered saline (PBS) (137 mM NaCl, 3 mM KCl, 8 mM Na₂HPO₄ [pH 7.5]) containing 2 mM MgCl₂. Virus was adsorbed for 1 h at room temperature before fresh medium was added. Cells were further incubated for 18–24 h at 37°C before fixation for 10 min at room temperature in 2% paraformaldehyde in PBS or 3 min at -20°C in ice-cold methanol. Fixed cells were washed with PBS three times and permeabilized and blocked in PBS containing 1% bovine serum albumin and 0.1% Triton X-100. Antibody was diluted in the blocking solution and incubated with cells for 25–40 min at room temperature. After three washes in PBS, cover slips were mounted on glass slides with Prolong (Molecular Probes).
Samples were examined using a Nikon TE-300 inverted microscope equipped with phase and fluorescence optics, and images were collected digitally as described elsewhere [ 23 ]. All images were processed and prepared for presentation using Photoshop (Adobe Systems). Authors' Contributions PY and NDK participated equally in designing primers and determining the T2J M1 sequence; TJB, MMA, and JSLP determined the M1 sequences of the T3C12 clone and other labs' T3D clones, as well as factory morphologies of all clones; and all authors participated in writing the manuscript. MLN and KMC are the principal investigators and KMC determined the M1 sequences of the other field isolates and ts mutants. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC524354.xml |
548271 | The role of 'confounding by indication' in assessing the effect of quality of care on disease outcomes in general practice: results of a case-control study | Background In quality of care research, limited information is found on the relationship between quality of care and disease outcomes. This case-control study was conducted with the aim of assessing the effect of guideline adherence for stroke prevention on the occurrence of stroke in general practice. We report on the problems related to a variant of confounding by indication that may be common in quality of care studies. Methods Stroke patients (cases) and controls were recruited from the general practitioner's (GP) patient register, and an expert panel assessed the quality of care of cases and controls using guideline-based review criteria. Results A total of 86 patients was assessed. Compared to patients without shortcomings in preventive care, patients who received sub-optimal care appeared to have a lower risk of experiencing a stroke (OR 0.60; 95% CI 0.24 to 1.53). This result was partly explained by the presence of risk factors (6.1 per case, 4.4 per control), as reflected by the finding that the OR came much closer to 1.00 after adjustment for the number of risk factors (OR 0.82; 95% CI 0.29 to 2.30). Patients with more risk factors for stroke had a lower risk of sub-optimal care (OR for the number of risk factors present 0.76; 95% CI 0.61 to 0.94). This finding represents a variant of 'confounding by indication', which could not be fully adjusted for due to incomplete information on risk factors for stroke. Conclusions At present, inaccurate recording of patient and risk factor information by GPs seriously limits the potential use of a case-control method to assess the effect of guideline adherence on disease outcome in general practice. We conclude that studies on the effect of quality of care on disease outcomes, like other observational studies of intended treatment effect, should be designed and performed such that confounding by indication is minimized. | Background There is a long tradition of studying at population level the quality of medical care provided to patients who died from conditions amenable to medical intervention. This type of study (so-called 'in-depth' or 'audit' study) aims to identify deficiencies in medical care that may have contributed to death. It was first systematically carried out on maternal death, and later on other causes of avoidable death [ 1 - 4 ]. This method can be applied to other potentially avoidable conditions, e.g. those that could be avoided by appropriate preventive care. The general approach is to document in detail the process of care provided to a single patient preceding the occurrence of an adverse event, followed by an assessment of the quality of care by an expert panel, either with or without the use of explicit criteria [ 5 ]. An important limitation of this type of study, without control subjects, is its inability to fully establish a causal relationship between identified deficiencies in care and the adverse outcome, and to determine to what extent identified deficiencies are associated with the occurrence of such an event. Identified deficiencies in care are expected to indicate only to a certain extent an increase in risk of an adverse health outcome, while the probability of having an adverse outcome can be calculated only if we compare the care provided to patients who suffered an adverse outcome with that of patients who did not suffer such an event.
For this reason it has been proposed to perform a case-control study with patients with an adverse event as 'cases' and a comparable group of patients without an adverse outcome as 'controls' [ 6 ]. We performed a case-control study with the aim of assessing the effect of guideline adherence for stroke prevention on the occurrence of stroke in general practice. Unfortunately, we encountered various obstacles in the design and conduct of this study, in particular related to the recruitment of cases and controls, the availability of information on the care delivery process in the GP's data registration system, and the control of differences other than differences in the quality of care. The aim of this paper is to highlight the problems related to a variant of confounding by indication that may be common in quality of care studies. Observational studies of intended treatment effects are particularly prone to 'confounding by indication', and can produce misleading estimates of the size and/or direction of treatment effects [ 7 , 8 ]. Confounding by indication refers to an extraneous determinant of the outcome parameter that is present if a perceived high risk or poor prognosis is an indication for intervention. This means that differences in care between cases and controls, for example, may partly originate from differences in indication for medical intervention, such as the presence of risk factors for particular health problems. The latter has frequently been reported in studies evaluating the efficacy of pharmaceutical interventions [ 9 , 10 ], screening tests [ 11 ], and vaccines [ 12 ]. We hypothesise that this may apply not only to indications for medical intervention but also to guideline adherence and quality of care. In retrospectively comparing the quality of care between patients with and without a stroke, stroke patients may have received more preventive care because more indications for preventive interventions were present. Because differences in indications for preventive intervention correspond with the probability of an adverse outcome (more indications will be associated with a higher risk of an adverse outcome), it is necessary to control for these differences when comparing care between cases and controls. If one fails to control for confounding by indication, more (and probably better) care can be expected to correlate with a higher risk of stroke. In quality of care research, there is, as yet, little information regarding the role of confounding by indication in studies that investigate the effect of quality of care on disease outcomes. Methods Sample From the Dutch national GP register, a random sample was taken of 58 GPs working in Rotterdam and the surrounding region. The study was restricted to patients with a first-ever stroke meeting the following criteria for inclusion: (a) diagnosis of intracerebral haemorrhage or infarction according to the World Health Organization (WHO) definition of stroke [ 13 ], (b) age between 39 and 80 years, (c) occurrence of stroke in the period 1996–1997, (d) stroke caused by cardiovascular disease (CVD) and not by trauma, infection or malignancy, (e) presence of hypertension, (f) GP of the patient practising in the southern part of Rotterdam or surrounding region, (g) patient registered with the local GP for not less than two years, and (h) patient not living in a nursing home during the two-year period prior to stroke.
Cases and controls were selected from the GPs' patient register, using health outcome (stroke) and risk factor (e.g. hypertension) entries. For each case, two controls were randomly selected and matched with the cases in terms of overall distribution on sex, age, and hypertension (most important risk factor for stroke). Cases and controls were not matched on the same GP. Data collection In a pilot study among 32 GPs, the quality of care measurement instruments (audit procedure and questionnaire) were tested. GPs participating in the pilot study did not participate in this study. Data on the process of care, two years prior to the occurrence of stroke (for controls from January 1995 to January 1997), were collected by means of structured face-to-face interviews with the GP, using separate questionnaires for each stroke patient. GPs were interviewed between March and October 1999. At the time of interview, GPs used either hand-written or electronic patient records to retrieve patient information. In case information was not available in the patient's record, information was drawn from the GP's memory. For each question, the type of data source was registered. The questionnaire comprised questions related to patient characteristics and family and medical history of CVD and risk factors, and the detection and treatment of cardiovascular risk factors such as hypertension, diabetes mellitus, transient ischemic attack (TIA) and cardiac failure. Similarly, data were collected on lifestyle-related risk factors such as smoking status, overweight, and excessive alcohol intake. Expert panel and assessment method The quality of preventive care and its potential to prevent stroke was assessed and valued by a six-member panel of experts. The panellists (three neurologists and three GPs) were selected on the basis of their clinical expertise with respect to stroke prevention, experience in quality of care evaluation, academic or non-academic background and professional discipline. Six practice guidelines relevant to stroke prevention (hypertension, diabetes mellitus, TIA, peripheral vascular disease, cardiac failure and angina pectoris) were selected by the panel [ 14 ]. These guidelines, based on scientific evidence, broad consensus, and clinical evidence, are developed and implemented by the Dutch College of General Practitioners as part of a national guideline program operational since 1987 [ 15 ]. From each guideline, the panellists identified specific elements of care and systematically converted these into review criteria (n = 65), allowing detailed measurement of GP's adherence [ 16 ]. All these criteria were all used to construct the patient questionnaire. In a two-round evaluation, with a final plenary round, cases were assessed by the panellists (panellists were divided in sub-panels). Each sub-panel assessed a specific number of cases. Based on identified elements of sub-optimal care and seriousness of shortcoming in terms of 'minor' and 'major', the panellists allocated grades on a scale of 0 to 3 (Table 1 ). 
Table 1 Grades of (sub)optimal care given by the expert panel (in both groups all patients are hypertensive)

Grade 0, no sub-optimal factors have been identified: cases 12 (43%), controls 18 (31%)
Grade 1, sub-optimal factor(s) have been identified, but are unlikely to be related to the occurrence of stroke in this patient: cases 8 (29%), controls 18 (31%)
Grade 2, sub-optimal factor(s) have been identified, and possibly have failed to prevent the stroke in this patient: cases 4 (14%), controls 18 (31%)
Grade 3, sub-optimal factor(s) have been identified, and are likely to have failed to prevent the stroke in this patient: cases 4 (14%), controls 4 (7%)
Sub-optimal care (grades 1, 2, 3): cases 16 (57%), controls 40 (69%)
Total (grades 0, 1, 2, 3): cases 28 (100%), controls 58 (100%)

The two-round process was focused on detecting consensus among the panellists (providing the same grade), and no attempt was made to force the panellists to consensus. The intersubpanel agreement was κ = 0.63 (overall agreement on assigned grades between sub-panels was 74%). A detailed description of the assessment method is provided elsewhere [ 17 ]. Analysis Analysis of the data was done by using simple cross-tabulations, and by using logistic regression analysis to model the chance of getting a stroke as a function of the presence of sub-optimal care (as ascertained by the panel), age and sex, and risk factors for stroke. Results GP participation and recruitment of cases/controls The rate of participation was 62% (36 GPs). The main reason given by GPs for not participating in the study was lack of time and interest (68%). Participating and non-participating GPs did not differ significantly in age, practice type, and date of qualification. Ninety-two percent of the GPs used electronic GP information systems. Among cases and controls there was a nonsignificant difference in mean age; cases were slightly older than controls (67 versus 65 years). Initially, before we excluded patients 'without' hypertension, GPs identified and selected 50 cases and 58 controls (1.4 cases and 1.6 controls per GP). The expected number of cases was 2.5 stroke patients per GP per year [ 18 ]. After excluding patients without hypertension, 28 cases and 58 controls with hypertension entered the study. Availability of data Overall, data for verification of the initial diagnosis of stroke, assessment of GPs' guideline adherence, and judgement of the causality of the relationship between non-adherence and the occurrence of stroke could be collected from the patient records. However, information on risk factors such as family history of CVD, body weight (overweight), excessive alcohol intake, and smoking was less easily obtained. Depending on the type of risk factor, in 8–56% of all subjects information on risk factors was unknown to the GP (8% in patients with overweight, 11% in patients smoking cigarettes, 17% in patients with excessive alcohol consumption, and 56% in patients with a family history of CVD). In 41–58%, information was taken from the GP's memory instead of the patient register. Indications for confounding by indication In 43% of the cases and 31% of the controls, no sub-optimal care could be identified (grade 0), whereas in 57% and 69%, respectively, sub-optimal care was identified (grade 1, 2 or 3). Thus the odds ratio for a case to receive sub-optimal care, compared to a control, was 0.60 (95% CI 0.24–1.53) (Table 1 ).
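The crude odds ratio just reported can be reproduced directly from the Table 1 counts. A minimal sketch in Python, using the standard Woolf (log-odds) 95% confidence interval:

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for exposure (sub-optimal care) in cases vs. controls.
    a, b = exposed/unexposed cases; c, d = exposed/unexposed controls."""
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Table 1: 16/12 cases and 40/18 controls with/without sub-optimal care
print(odds_ratio_ci(16, 12, 40, 18))  # ~ (0.60, 0.24, 1.53) after rounding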
Compared with controls receiving sub-optimal care, the number of shortcomings in care per case receiving sub-optimal care was higher (28/16 = 1.7 versus 41/40 = 1.0) (Table 2 ). The percentage of shortcomings in hypertensive care, however, was considerably higher among controls (90% versus 57% among cases). The latter apparently correlates with the fact that controls less often have risk factors other than hypertension (see the next paragraph).

Table 2 Guideline-derived elements of care used to indicate shortcomings in care among stroke patients and controls (number of shortcomings: cases, controls)

Derived from the hypertension practice guideline:
- Detection of hypertension: 1, 2
- Confirmation of diagnosis of hypertension: 2, 1
- Pharmacologic therapy (anti-hypertensive medication): 2, 1
- Follow-up (quarterly): 8, 17
- Follow-up (annually): 3, 16
Derived from the diabetes mellitus practice guideline:
- Follow-up (quarterly): 4, 3
- Laboratory evaluation: 1, 0
- Referral to eye specialist: 1, 0
Derived from the TIA practice guideline:
- Treatment (therapy and follow-up after TIA): 1, 1
Derived from more than one practice guideline:
- Advice to quit smoking: 2, 0
- Dietary advice (overweight): 1, 0
- Evaluation of cardiovascular risk profile: 2, 0
Total number of shortcomings: 28, 41
Total number of patients with shortcomings: 16, 40
TIA, transient ischemic attack. Note: each patient could have more than one element of sub-optimal care.

The mean number of risk factors among cases (6.1 per patient) was higher than among controls (4.4 per patient) (Figure 1 ). Figure 1 Risk factor distribution. Prevalence (%) of risk factors for stroke among stroke patients (n = 28) and controls (n = 58). Total number of risk factors among stroke patients is 172, and among controls 277. Mean number of risk factors per case is 6.1, and for controls 4.4. Multivariate logistic regression indicates that cases receiving sub-optimal care (grade 1, 2, or 3) have a lower risk of stroke (crude OR 0.60) (Table 3 ). If adjusted for sex and age distribution, the odds ratio does not change significantly (adjusted OR 0.64). Subsequently, in an attempt to investigate the possible role of confounding by indication, we adjusted for risk factor prevalence. Indeed, with an adjusted OR of 0.82 (95% CI 0.29–2.30), it seems that risk factor prevalence to some extent explains why patients receiving sub-optimal care have a lower risk of stroke. This relationship is statistically borderline significant (p = 0.096), and could be an explanation for the somewhat surprising result found earlier, that is, that cases receive sub-optimal care less often than controls.

Table 3 Relationship between quality of care and the occurrence of stroke (odds ratio and 95% CI)

Model 1: sub-optimal care (v. optimal, ref.) 0.60 (0.24–1.53)
Model 2: sub-optimal care 0.64 (0.25–1.65); female (v. male, ref.) 0.90 (0.36–2.30); age 1.03 (0.98–1.08)
Model 3: sub-optimal care 0.82 (0.29–2.30); female 0.61 (0.22–1.72); age 1.03 (0.98–1.08); risk factors 0.76 (0.61–0.94)
Note: to control for risk factors, the number of risk factors per patient was included in the regression model.

Patients with a higher number of risk factors for stroke indeed have a lower risk of sub-optimal care (OR for the number of risk factors present 0.76; 95% CI 0.61–0.94). As expected, higher numbers of risk factors per patient also increase the risk of stroke (OR for the number of risk factors present 1.34; 95% CI 1.10–1.62). Discussion This study demonstrated confounding by indication in a case-control study analysing the association between guideline adherence and the occurrence of stroke in general practice.
It also provided insight into the possibilities for controlling for this confounding bias. We learned that, at present, difficulties in patient recruitment and data retrieval seriously limit the potential use of a case-control method to assess the relationship between guideline adherence for stroke prevention and stroke in general practice. We found that in specific domains data were incomplete and not readily available in the patient records. As a consequence, in many cases GPs were unable to identify stroke patients from their patient register, which most likely introduced under-reporting of stroke patients. As compared to national frequencies (2.5 stroke patients per GP per year) [ 18 ], GPs participating in our study identified fewer stroke patients (1.7 per GP). The same applies to information on patients' family history of CVD and lifestyle-related risk factors, which was inaccurate and in many cases not available in the patient's register. The latter finding is consistent with previous work on the accuracy of information on CVD risk factors in GPs' patient records [ 19 , 20 ], indicating that data from GPs' records on lifestyle-related CVD risk factors are frequently incomplete or absent. Incomplete information on risk factors for stroke is a serious threat to the validity of the results of case-control studies investigating the relationship between process of care and health care outcome. It complicates evaluation of GPs' adherence to recommended guidelines, and makes it difficult, if not impossible, to control for confounding by indication. Apart from that, information on risk factors that was available in the patient records is presumably not 100% valid. Strong indications for the existence of confounding by indication were found, albeit different from how it is usually described in the literature. Confounding by indication, which is conceived as a substantial problem in observational studies of treatment efficacy, usually refers to a situation in which patients who are more in need both receive more care and have a higher risk of an adverse health outcome [ 8 ]. In our study, we show that confounding by indication can also cause patients with an adverse health outcome (stroke) to appear to receive better quality of care. A more detailed analysis showed, similar to results found in previous studies, that this result partly emanates from a higher prevalence of risk factors for stroke among patients suffering stroke at a later stage in life, which increases not only the risk of stroke but also GPs' compliance with guidelines. We hypothesize that, on average, patients with more risk factors for stroke receive more attention or visit their GP more frequently, which in turn facilitates guideline adherence (e.g. compliance with quarterly follow-up of treated hypertensive patients) and at the same time results in better quality of care. Controlling for (recorded) risk factors reduced the counter-intuitive result by approximately one half, and we hypothesise that incomplete registration of risk factors for stroke explains why, even after controlling for risk factors, the odds ratio associating sub-optimal care with stroke in these stroke-prone, high-risk patients remained below 1.00. We hope that our paper draws the attention of quality of care researchers to this variant of confounding by indication, which may lead to biased associations between process measures of quality of care and care outcomes.
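The adjusted odds ratios of Table 3 are obtained from multivariate logistic regression. The following sketch shows how such an adjustment can be fitted with standard tools; it assumes a hypothetical patient-level table with one row per subject, and the column names are illustrative, not those of the study's actual dataset:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# df columns (illustrative): stroke (1 = case, 0 = control), suboptimal (1/0),
# female (1/0), age (years), n_risk_factors (count per patient)
def adjusted_odds_ratios(df: pd.DataFrame):
    model = smf.logit(
        "stroke ~ suboptimal + female + age + n_risk_factors", data=df
    ).fit(disp=False)
    # Exponentiated coefficients are adjusted odds ratios (cf. Table 3, model 3)
    return np.exp(model.params), np.exp(model.conf_int())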
Conclusions This study shows that, at present, difficulties in patient recruitment and data retrieval seriously limit the potential use of a case-control method to assess the relationship between guideline adherence for stroke prevention and stroke in general practice. It demonstrates the role of confounding by indication, causing patients with an adverse health outcome to appear to receive better quality of care. Competing interests The author(s) declare that they have no competing interests Authors' contributions JK, NK, PK, AP and JM conceived and designed the study. Analyses were performed by JK and GB. All authors contributed to this article and earlier drafts of the manuscript. Pre-publication history The pre-publication history for this paper can be accessed here: | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC548271.xml |
548265 | Tissue eosinophilia: a morphologic marker for assessing stromal invasion in laryngeal squamous neoplasms | Background The assessment of tumor invasion of underlying benign stroma in neoplastic squamous proliferations of the larynx may pose a diagnostic challenge, particularly in small biopsy specimens that are frequently tangentially sectioned. We studied whether thresholds of an eosinophilic response to laryngeal squamous neoplasms provide an adjunctive histologic criterion for determining the presence of invasion. Methods Eighty-seven (n = 87) cases of invasive squamous cell carcinoma and preinvasive squamous neoplasia were evaluated. In each case, the number of eosinophils per high power field (eosinophils/hpf), and per 10 hpf in the tissue adjacent to the neoplastic epithelium, were counted and tabulated. For statistical purposes, elevated eosinophil counts were defined and categorized as: focally and moderately elevated (5–9 eos/hpf), focally and markedly increased (>10/hpf), diffusely and moderately elevated (5–19 eos/10hpf), and diffusely and markedly increased (>20/10hpf). Results In invasive carcinoma, eosinophil counts were elevated focally and/or diffusely more frequently than in non-invasive neoplastic lesions. The increased eosinophil counts, specifically >10/hpf and >20/10hpf, were both statistically significantly associated with stromal invasion. Greater than 10 eosinophils/hpf and/or >20 eosinophils/10hpf had the highest predictive power, with sensitivity, specificity and positive predictive value of 82%, 93% and 96%, and 80%, 100% and 100%, respectively. Indeed, greater than 20 eosinophils/10 hpf was virtually diagnostic of tumor invasion in our series. Conclusion Our study suggests for the first time that an elevated eosinophil count in squamous neoplasia of the larynx is a morphologic feature associated with tumor invasion. When the number of infiltrating eosinophils exceeds 10/hpf and/or 20/10 hpf in a laryngeal biopsy with squamous neoplasia, it represents an indicator of possible tumor invasion. Similarly, the presence of eosinophils meeting these thresholds in an excisional specimen should prompt a thorough evaluation for invasiveness when evidence of invasion is absent, or when invasion is suspected by conventional criteria in the initial sections. | Background Invasive squamous cell carcinoma (SC) is the most common malignancy of the larynx [ 1 ]. Distinguishing between preinvasive squamous neoplasia (high grade squamous cell dysplasia / squamous cell carcinoma in-situ, SCIS) and SC may be difficult in small biopsy specimens, particularly when the tissue is superficial and fragmented, a prominent inflammatory infiltrate obscures the epithelial-stromal interface, and/or there is tangential sectioning of the acanthotic neoplastic squamous epithelium. Even in larger resection specimens, the presence of invasion may sometimes be elusive if the invasive element lacks paradoxical maturation, characterized by prominent eosinophilic cytoplasm that may undergo either central or individual cell keratinization, well developed cell borders, and large vesicular nuclei with prominent nucleoli. The existence of an adjunctive feature associated with invasion would be helpful in assessing whether there is any degree of invasion in these challenging cases, or whether such a feature should raise the suspicion that the lesion may harbor an invasive component when invasion is absent by conventional diagnostic criteria.
A moderate to marked stromal eosinophil infiltrate, which may extend into the neoplastic epithelium, has occasionally been reported in invasive carcinoma [ 2 - 4 ]. Spiegel et al recently reported that the presence of eosinophils is associated with invasion in neoplastic squamous lesions of the female genital tract, and proposed that eosinophilia provides an adjunctive morphologic feature for identifying SC in the cervix and vulva [ 5 , 6 ]. One of us (DT) has observed moderate to marked stromal eosinophilia in cases of SC of the larynx, whereas stromal eosinophils were usually either absent or rare in cases of laryngeal SCIS. We speculated that the degree of stromal eosinophilia is a pathologic feature that would provide an adjunctive criterion for distinguishing SC from SCIS in the larynx, and undertook a systematic study to test this hypothesis. In this study, we focused on a single head and neck region, the larynx, to avoid any potential selection bias, since squamous neoplasia and the associated host response and changes in the head and neck are heterogeneous and vary across anatomic locations [ 7 ]. Methods The biopsy and resection specimens with available H&E stained slides of laryngeal SC and SCIS diagnosed at Roswell Park Cancer Institute from 1993 through 2000 were reviewed by two of the authors simultaneously (MZ and DT). Cases with prior radiation and/or chemotherapy were excluded. All histology specimens at Roswell Park Cancer Institute were fixed in 10% formalin. Sections of 5 μm thickness were cut from paraffin blocks and stained with conventional hematoxylin and eosin. For each biopsy and resection specimen, the original diagnosis was recorded and compared with the review diagnosis. Cases with any degree of invasion, including "minimal invasion" or "microscopic invasion," in either a biopsy or resection specimen or both were classified as SC. Cases with SCIS only in a biopsy specimen that subsequently showed invasion in the resection specimen were classified as SC. On the other hand, a resection specimen lacking invasion was required for a case to be classified as SCIS. For each specimen, the high-power field (hpf) (Olympus BH2 ×10 ocular and ×40 objective lens) with the maximum number of eosinophils was identified and recorded as eos/hpf. Then, the eosinophils in the adjacent nine contiguous hpf were counted, added to those in the first, and recorded as eos/10hpf. Only nucleated cells with intensely red cytoplasmic granules were accepted as eosinophils; care was taken to exclude red blood cells with superimposed mononuclear and polymorphonuclear inflammatory cells, and eosinophils confined to lymphovascular spaces were excluded. During the course of the study, it was noted that frozen section preparation results in the degranulation of eosinophils and makes them difficult to recognize; however, under these conditions, collections of extracytoplasmic, typically red granules occupying approximately the expected size of an eosinophil allow for their identification. As an internal control, non-neoplastic portions of the specimens, whenever available, were also evaluated. Statistical methods Frequency was computed for each eosinophil category in invasive and non-invasive squamous neoplasia specimens obtained from biopsy and excision. The chi-square test was used to examine differences in frequency distribution between each elevated eosinophil category and the referent eosinophil category (0–4 eos/10hpf).
This analysis was conducted independently for specimens obtained from biopsy and excision. Sensitivity, specificity, positive predictive value and negative predictive value were computed for eosinophil counts exceeding 10 eos/hpf or 20 eos/10hpf to evaluate whether these two eosinophil-count thresholds have statistically meaningful clinical implications. A p value less than 0.05 was used to determine statistical significance. Elevated eosinophil counts were defined and categorized as: focally and moderately elevated (5–9 eos/hpf), focally and markedly increased (>10/hpf), diffusely and moderately elevated (5–19 eos/10hpf), and diffusely and markedly increased (>20/10hpf), while 0–4 eosinophils/10hpf was used as the baseline. The recorded eosinophil counts were analyzed to determine whether there were thresholds of eosinophil counts, in the stroma versus the neoplastic squamous epithelium, per 1 hpf and 10 hpf that were significantly associated with invasive tumor in both biopsy and follow-up excisional/resectional specimens. Results A total of 87 cases were evaluated, and sixty-eight percent of the cases (n = 59) displayed chronic inflammation in the stroma. Fifty-seven were biopsy specimens and 30 were ablative resection specimens. The diagnoses of the 57 biopsy specimens were 35 invasive carcinoma, 4 minimally invasive carcinoma, 2 suspicious for invasion, and 16 preinvasive squamous cell neoplasia. Twenty-seven of the biopsy specimens (18 invasive carcinoma, 1 minimally invasive carcinoma, 1 suspicious for invasion and six preinvasive squamous cell neoplasms) were followed by ablative resections (7 wide excisional resections and 20 total laryngectomies). The follow-up specimens confirmed 19 invasive carcinomas and six preinvasive carcinomas, and revealed an invasive carcinoma in the suspicious case. In this case, there were stromal nests of neoplastic squamous cells associated with 15 and 34 eosinophils per 1 and 10 hpf, respectively, and clinical imaging studies revealed advanced disease (stage IIIa). An additional 3 ablative resection specimens (all excisional biopsies) were initial operations; these comprised 2 invasive carcinomas and 1 preinvasive lesion. The distribution of eosinophil counts is summarized in Table 1 . Both diffuse and focal elevated eosinophilic infiltration were noted in invasive tumors and, to a much lesser extent, in their non-invasive counterparts. Typical examples of eosinophilic infiltration in the stromal tissue are illustrated in Figures 1A–1C . As shown in Table 2 , thirty-six (57%) of the patients with invasive squamous cell carcinoma were found to have diffuse eosinophilic infiltration (>20/10hpf), whereas diffusely elevated eosinophils were not observed in any non-invasive lesion (p < 0.05). The same held true for focally marked eosinophilia (>10/hpf) when the invasive group was compared with the non-invasive group. One exception was observed in a non-invasive neoplasia (Case 37, high grade dysplasia/SCIS), which showed a markedly increased eosinophil count (10 eosinophils/hpf). Notably, this high grade dysplasia with an elevated eosinophil infiltrate proved to harbor an invasive carcinoma in the follow-up resection specimen. These results indicate that a close association exists between stromal invasion and the presence of elevated tissue eosinophils. Stromal eosinophilia was statistically significantly associated with invasion in squamous cell carcinoma (Table 2 ).
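For readers who want to verify figures of this kind, the sketch below shows how the four test characteristics and the chi-square comparison described in the statistical methods can be computed from a 2 × 2 table; the counts are hypothetical placeholders, not the study's exact cell values.

```python
# Illustrative sketch (hypothetical counts): test characteristics and the
# chi-square comparison for a 2x2 table of marker status (above/below an
# eosinophil threshold) vs. invasion status.
from scipy.stats import chi2_contingency

def test_characteristics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV against the reference diagnosis."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts: tp = invasive, marker-positive; fp = non-invasive,
# marker-positive; fn = invasive, marker-negative; tn = non-invasive, marker-negative
tp, fp, fn, tn = 25, 0, 14, 16
print(test_characteristics(tp, fp, fn, tn))

# Chi-square test comparing the frequency distributions across the two groups
chi2, p, dof, expected = chi2_contingency([[tp, fp], [fn, tn]])
print(f"chi2 = {chi2:.2f}, p = {p:.4g}")
```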
Table 1. Distribution of eosinophils in invasive and non-invasive squamous neoplasia

Specimen and diagnosis | eos/hpf: 0 | 1–4 | 5–9 | 10–20 | >20 | eos/10hpf: 0 | 1–4 | 5–19 | ≥20
Biopsy, invasive CA (n = 41) | 1 (2%) | 5 (12%) | 8 (20%) | 15 (36%) | 12 (29%) | 1 (3%) | 5 (12%)¹ | 10 (24%)¹ | 25 (61%)¹
Biopsy, non-invasive (n = 16) | 13 (81%) | 1 (6%) | 1 (6%) | 1 (6%) | none | 7 (44%)² | 6 (38%)² | 3 (18%)² | none
Excision/resection, invasive CA (n = 22) | 1 (5%) | 4 (18%) | 8 (36%) | 6 (23%) | 3 (14%) | 1 (5%) | 3 (14%) | 7 (31%) | 11 (50%)
Excision/resection, non-invasive (n = 8) | 6 (76%) | 1 (12%) | 1 (12%) | none | none | 4 (50%) | 3 (38%) | 1 (12%) | none
¹ Two invasive carcinoma cases were less than 10 hpf in size. ² One non-invasive case was less than 10 hpf in size.

Figure 1. a. Absence of eosinophils in normal squamous epithelium; note the moderately inflamed submucosal tissue. Arrows point to the inflammatory cells. 200×. b. A squamous cell carcinoma in-situ (non-invasive tumor) with no elevated eosinophils in a chronic inflammatory background. Arrows point to the inflammatory cells. 200×. c. Markedly increased eosinophils in an invasive squamous cell carcinoma; note that the eosinophils (arrows) were a major component of the infiltrating nucleated cells. 200×.

Table 2. Significance of eosinophils in invasive and non-invasive squamous neoplasia

Lesion | 5–9 eos/hpf¹ (p) | ≥10 eos/hpf (p) | 5–19 eos/10hpf² (p) | ≥20 eos/10hpf (p)
Invasive CA in biopsy | 8/41 (20%), p > 0.05 | 27/41 (66%), p < 0.01 | 10/41 (24%), p > 0.05 | 25/41 (61%), p < 0.005
Non-invasive lesion in biopsy | 1/16 (6%) | 1/16 (6%) | 3/16 (18%) | 0/16 (–)
Invasive CA in excision | 8/22 (36%), p > 0.05 | 9/22 (41%), p < 0.05 | 7/22 (31%), p > 0.05 | 11/22 (50%), p < 0.05
Non-invasive lesion in excision | 1/8 (12%) | 0/8 (–) | 1/8 (12%) | 0/8 (–)
¹ eos/hpf: for each specimen, the high power field (hpf) (Olympus BH2 ×10 ocular and ×40 objective lens) with the maximum number of eosinophils in the lesion area. ² eos/10hpf: for each specimen, the hpf with the maximum number of eosinophils in the lesion area plus nine additional contiguous hpf.

There is no association between elevated tissue eosinophils and the overall inflammatory response of the stroma in the specimens studied (p > 0.05). Specifically, among the 37 cases with ≥10 eosinophils/hpf, 24 displayed non-specific inflammation, while among the 50 cases with <10 eosinophils/hpf, 35 displayed non-specific inflammation. Among the 36 cases with ≥20 eosinophils/10hpf, 24 revealed non-specific inflammation, while among the 51 cases with <20 eosinophils/10hpf, 34 revealed non-specific inflammation. In fact, some invasive carcinomas (n = 6) contained virtually no chronic inflammatory background, but showed markedly elevated tissue eosinophilia (Fig. 2 ). In addition, a number of cases (n = 21) with elevated eosinophilia showed a distinct polarization of the infiltrating cells, namely eosinophils accumulating at the invading front of the tumor (Fig. 2C ).

Figure 2. a. A low power view of an invasive carcinoma; note the absence of a significant inflammatory background. 40×. b. A higher power view of the square area labeled A in Figure 2a; no eosinophils were present in the stromal tissue between the tumor nests. 200×. c. A higher power view of the square area labeled B in Figure 2a; elevated eosinophils were present at the invading front of the carcinoma. 200×.

The predictive values of tissue eosinophils in assessing stromal invasion in squamous neoplastic lesions of the larynx are presented in Table 3 .
In biopsy specimens, diffuse elevation of eosinophils (>20/10 hpf) had a sensitivity, specificity and positive predictive value for invasion of 80%, 100% and 100%, respectively. In these specimens, the presence of >10 eosinophils/hpf predicted invasion with a sensitivity of 81% and a positive predictive value of 96%, while values below this threshold had a predictive value for the absence of invasion of 68%. Similarly, the presence of >20 eosinophils/10hpf in the excisional specimens had a sensitivity, specificity, and positive predictive value for invasion of 69%, 100% and 100%, respectively. In these excisional specimens, the presence of >10 eosinophils/hpf had a sensitivity, specificity, and positive predictive value for invasion of 64%, 100% and 100%, respectively. Values below the thresholds of 10 eosinophils/hpf or 20 eosinophils/10hpf had predictive values for the absence of invasion of 40% and 42%, respectively.

Table 3. Predictive value of eosinophils in assessing stromal invasion in squamous neoplasia of the larynx

Eosinophils in lesion | Sensitivity (%) | Specificity (%) | Positive predictive value (%) | Negative predictive value (%)
Biopsy specimens, ≥10 eos/hpf | 66 | 94 | 96 | 52
Biopsy specimens, ≥20 eos/10hpf | 80 | 100 | 100 | 68
Excisional specimens, ≥10 eos/hpf | 64 | 98 | 100 | 58
Excisional specimens, ≥20 eos/10hpf | 68 | 100 | 100 | 58
Sensitivity, specificity, positive predictive value and negative predictive value were computed for eosinophil counts exceeding 10 eos/hpf or 20 eos/10hpf to evaluate whether these two thresholds have statistically meaningful clinical implications.

Sections with adequate non-neoplastic epithelium were available in twelve cases; eight of these contained no eosinophils. The highest counts for non-neoplastic epithelium were 4 eosinophils/hpf and 8 eosinophils/10hpf. Although non-neoplastic epithelium appears to contain fewer eosinophils than squamous neoplasia, no statistical analyses were performed because the number of available non-neoplastic regions in this series was too small. Discussion For decades, pathologists have used a variety of histologic features, including desmoplastic stromal reaction, intrastromal foreign body reaction to keratin, and the presence of separate minute clusters of intrastromal neoplastic cells, to assess and identify invasion [ 8 , 9 ]. However, when evaluating small, poorly oriented, tangentially cut specimens, one sometimes enters an area replete with uncertainty. The presence of a morphologic feature associated with invasion would be helpful in determining whether any degree of invasion has occurred in equivocal cases. In practice, we have noticed the frequent presence of eosinophilic infiltration in invasive squamous cell carcinoma of the larynx, which is usually absent in its non-invasive neoplastic counterparts. This consistent observation prompted us to carry out the current study. In this series, a systematic study of tissue eosinophils in squamous neoplasia of the larynx suggests that elevated eosinophil counts are a morphologic marker for assessing tumor invasiveness. We observed that in invasive squamous carcinomas eosinophils were significantly elevated focally and/or diffusely, statistically more frequently than in non-invasive neoplasia. The increased eosinophil counts (>10/hpf and >20/10 hpf) in laryngeal biopsy and excisional specimens were all statistically significantly associated with stromal invasion.
In contrast, values below both of these thresholds had a significant predictive value for the absence of invasion. The slight decrease in the correlation of >10 eosinophils/hpf with invasion in excisional specimens, relative to that in biopsy counterparts, may be attributed to the increased chance of observing microscopic clusters of eosinophils unrelated to invasion in the larger specimens. It is not surprising to observe inflammation in the specimens examined, likely due to several factors including the specific anatomic location and an overall inflammatory response of the stroma to the tumor, among others [ 7 , 9 ]. However, there was no association between elevated eosinophils and the overall inflammatory response of the stroma in the specimens studied. Furthermore, a number of cases with elevated eosinophilia showed a distinct polarization of the infiltrating cells, specifically eosinophils accumulating at the invading front of the tumor (Fig. 2C ). Cumulatively, our findings strongly indicate that elevated tissue eosinophilia is a specific cell response independent of a non-specific inflammatory reaction. Although elevated eosinophil counts are statistically significantly associated with stromal invasion in squamous cell carcinoma of the larynx, a high number of eosinophils was occasionally observed in non-invasive counterpart tissues (Table 1 ). In other words, the presence of eosinophils in squamous neoplasia of the larynx is not pathognomonic for stromal invasion, and caution must be exercised when evaluating the number of infiltrating eosinophils. However, the quantitation method and thresholds identified in the current study may represent an adjunctive feature in the assessment of stromal invasion in squamous neoplasia. Specifically, the presence of eosinophils at these thresholds should raise the suspicion that invasive or microinvasive carcinoma is present within the specimen, particularly when >10 eosinophils/hpf and/or 20 eosinophils/10hpf are observed. Since the first observation of malignancy with marked blood eosinophilia, described by Rheinbach in 1893, eosinophilia has been described in human cancers of a variety of organs [ 10 - 12 ]. In head and neck squamous cell carcinoma, the reported presence of tissue eosinophils ranges between 22 and 89% [ 13 - 16 ]. Most of these series have focused on whether the presence of a prominent eosinophilic infiltrate has prognostic value, or is an indicator of response to treatment. Some authors have claimed that a marked or moderate eosinophilic infiltrate is associated with a poor prognosis [ 12 , 17 ], while others have found that eosinophilia is a favorable prognostic feature [ 13 , 14 ]. No study has addressed the value of eosinophils in distinguishing invasive from non-invasive squamous neoplasia in the head and neck. The mechanism of eosinophil accumulation in invasive carcinoma remains largely unknown. It has been suggested that such eosinophilic infiltration may be induced by a tumor-derived eosinophil chemotactic factor [ 18 , 19 ]. A recent study further indicated that stromal eosinophils in squamous cell carcinoma may play a key role in tumor invasion through activation of gelatinase [ 20 , 21 ]. It was found that 92-kd gelatinase, a key member of the matrix metalloproteinases, which are involved in tumor invasion through breakdown of the basement membrane and extracellular matrix, is actively expressed by eosinophils.
In conclusion, although the etiology of tissue eosinophilia in invasive carcinoma is unknown, our study is the first to suggest that an elevated eosinophil count in squamous neoplasia of the larynx may serve as a morphologic feature associated with tumor invasion. The presence of more than occasional individual eosinophils, specifically when the number of infiltrating eosinophils exceeds 10/hpf and/or 20/10 hpf in a laryngeal biopsy with squamous neoplasia, represents a histologic marker for the presence of tumor invasion. Similarly, the presence of eosinophils reaching these thresholds in an excisional specimen should prompt a thorough search for invasiveness when evidence of invasion is absent, or when invasion is suspected by conventional criteria in the initial sections. Although the present study assesses a quantitative parameter of tumor invasion, in our daily practice we find it useful that a readily appreciable elevation of tissue eosinophilia alerts us to search for possible invasiveness in tissue biopsies of laryngeal lesions. Competing interests The author(s) declare that they have no competing interests. Authors' contributions Drs. Said, Spiegel and Tan for study design; Drs. Said and Tan for pathology evaluation; Drs. Alrwi, Douglas, Hicks, Loree, Riguel and Wiseman for surgical evaluation and clinical follow-up as well as chart review; Dr. Yang for statistical analyses; Dr. Cheney for administrative and financial support. Pre-publication history The pre-publication history for this paper can be accessed here: | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC548265.xml |
555546 | Carotid intimal-media thickness as a surrogate for cardiovascular disease events in trials of HMG-CoA reductase inhibitors | Background Surrogate measures for cardiovascular disease events have the potential to increase greatly the efficiency of clinical trials. A leading candidate for such a surrogate is the progression of intima-media thickness (IMT) of the carotid artery; much experience has been gained with this endpoint in trials of HMG-CoA reductase inhibitors (statins). Methods and Results We examine two separate systems of criteria that have been proposed to define surrogate endpoints, based on clinical and statistical arguments. We use published results and a formal meta-analysis to evaluate whether progression of carotid IMT meets these criteria for HMG-CoA reductase inhibitors (statins). IMT meets clinically based criteria to serve as a surrogate endpoint for cardiovascular events in statin trials, based on relative efficiency, linkage to endpoints, and congruency of effects. Results from a meta-analysis and post-trial follow-up from a single published study suggest that IMT meets established statistical criteria by accounting for intervention effects in regression models. Conclusion Carotid IMT progression meets accepted definitions of a surrogate for cardiovascular disease endpoints in statin trials. This does not, however, establish that it may serve universally as a surrogate marker in trials of other agents. | Atherosclerosis is a generalized disease that causes lesions in large- and medium-sized elastic and muscular arteries. As lesions progress, arterial walls are remodeled, a process through which the size of the arterial lumen is preserved. Because of this, the disease is clinically asymptomatic during its earlier stages and may go unnoticed for decades as the risk of its clinical manifestation as acute vascular disease grows [ 1 , 2 ]. Epidemiological studies and intervention trials based on the incidence of acute vascular disease endpoints require years of follow-up, the participation of large populations, or both. As a consequence, such studies consume considerable time and financial resources [ 3 ]. The use of surrogate markers for atherosclerosis extent and progression is widespread. Currently, the most established of these is carotid intima-media thickness (IMT) as measured by B-mode ultrasound. It is a natural extension to consider these measures as surrogate markers for cardiovascular disease clinical endpoints [ 4 , 5 ]. If this extension is valid, the time, expense, and participant burden of understanding and developing treatments to reduce the risk of clinical endpoints can be reduced. To be rigorous, any such claim must be based on accepted definitions and/or sets of criteria for surrogacy. This document examines the evidence that carotid IMT, a marker for atherosclerosis, meets two prominent sets of criteria for defining surrogate outcomes. Definitions of surrogate markers Both clinical and statistical criteria for surrogacy have been proposed. Clinical Criteria for Surrogacy Boissel, et al. lay out criteria that markers must meet to be considered valid surrogates for clinical endpoints [ 6 ]. We group these into three domains. B1 (Efficiency): The surrogate marker should be relatively easy to evaluate, preferably by non-invasive means, and more readily available than the gold standard.
The time course of the surrogate should precede that of the endpoints so that disease and/or disease progression may be established more quickly via the surrogate. Clinical trials based on surrogates should require fewer resources, less participant burden, and a shorter time frame.

B2 (Linkage): The quantitative and qualitative relationship between the surrogate marker and the clinical endpoint should be established based on epidemiological and clinical studies. The nature of this relationship may be understood in terms of its pathophysiology or in terms of an expression of joint risk.

B3 (Congruency): The surrogate should produce estimates of risk and benefit parallel to those of the endpoints. Individuals with and without vascular disease should exhibit differences in surrogate marker measurements. In intervention studies, anticipated clinical benefits should be deducible from the observed changes in the surrogate marker.

Statistical Criteria for Surrogacy Prentice views surrogacy as a statistical property and defines it with mathematical expressions [ 7 , 8 ]. Four criteria are required for S to serve as a surrogate for endpoint T with respect to intervention Z:

P1: The intervention should affect the distribution of T.
P2: The intervention should affect the distribution of S.
P3: The distribution of T should be dependent on S.
P4: Endpoint T should be conditionally independent of Z given S, i.e. S should fully account for the impact of Z on T.

This definition may be specific to a particular setting and cohort; a marker may meet the criteria for surrogacy for one intervention, but fail the criteria for others. The criteria for surrogacy are based on explicit models, and may also depend on covariates and additional explanatory factors being collected and incorporated into these models. Establishing Surrogacy These clinical and statistical definitions require different approaches to establish surrogacy, neither of which is clear-cut. To meet the criteria outlined by Boissel, et al. [ 6 ], experience and data from clinical trials are required to demonstrate efficiency and congruence, and data from bench and cohort studies are required to establish plausible linkage. Arguments for surrogacy address whether these data are sufficiently compelling. To meet the criteria outlined by Prentice [ 7 ], decisions must be made on the parametric model describing the relationship between intervention and outcomes. The plausibility of surrogacy is argued by the ability of the surrogate marker, once incorporated in this model, to account for (i.e., induce conditional independence in) this relationship using experimental data (and by P1 is limited to interventions that affect outcomes). Since statistical relationships cannot be established with certainty, arguments are required that the empirical evidence for conditional independence provided by the data is sufficiently compelling to adopt the hypothesis of conditional independence required by the Prentice criteria. B-mode ultrasound IMT B-mode ultrasound imaging technology has evolved to the extent that the walls of superficial arteries can be imaged non-invasively, in real time, and with high resolution. Unlike angiography or 'luminology', ultrasound imaging can visualize the arterial wall at every stage of atherosclerosis, from 'normal' arterial wall to complete arterial occlusion. Arterial wall thickness can therefore be measured as a continuous variable from childhood to old age, in patients and healthy controls [ 9 ].
Studies that have evaluated the origin of the lumen-intima and media-adventitia ultrasound interfaces in relation to carotid and femoral far-wall arterial histology have demonstrated that the distance between these interfaces reflects the intima-media complex. Consequently, this distance is referred to as IMT [ 10 , 11 ]. IMT has been widely used in both observational studies and intervention studies. Surrogacy of carotid IMT with respect to statins We are interested in examining the potential of carotid IMT to serve as a surrogate marker for cardiovascular events, in particular cardiovascular mortality, myocardial infarction, and clinical stroke. The clinical and statistical arguments for surrogacy are contextual, i.e. they are based on specific relationships and mechanisms. It is unreasonable to make open-ended claims that carotid IMT is a surrogate for these endpoints for all interventions and all cohorts, a point that has not been emphasized sufficiently. Our specific focus is to examine surrogacy in clinical trials of HMG-CoA reductase inhibitors (statins). Empirical evidence is largely drawn from statin clinical trials conducted in cohorts of adults at elevated risk for cardiovascular endpoints. Our choice of statins is based, in part, on the many published trials available for these agents. We acknowledge that it is quite possible that IMT may be a valid surrogate for cardiovascular events with respect to statins (i.e. accounting for the effects of these agents on cardiovascular events), but may not be a valid surrogate for other agents (e.g. diuretics or postmenopausal hormone therapy) or endpoints. We used Medline searches to identify seven placebo-controlled clinical trials of statins that report both IMT outcomes and cardiovascular events (see Table 1 ): the Asymptomatic Carotid Artery Progression Study (ACAPS), the Kuopio Atherosclerosis Prevention Study (KAPS), the Pravastatin, Lipids, and Atherosclerosis in the Carotid Arteries Study (PLAC-2), the Carotid Atherosclerosis Italian Ultrasound Study (CAIUS), the Regression Growth Evaluation Statin Study (REGRESS), the Beta-Blocker Cholesterol Lowering Asymptomatic Plaque Study (BCAPS), and the Fukuoka Atherosclerosis Trial (FAST) [ 12 - 18 ].

Table 1. Clinical trials involving HMG-CoA reductase inhibitors and reporting both carotid IMT and cardiovascular event outcomes.

Clinical trial (N*) | Statin | Relative impact on IMT progression, primary outcome (mm/yr): mean [95% CI] (reported p-value) | Reported cardiovascular endpoints | Abstracted CVD event odds ratio [95% CI]
ACAPS (25) (N = 919) | Lovastatin | -0.015 [-0.023, -0.007] (p = 0.001) | CVD death, MI, stroke | 0.34 [0.12, 0.69]
KAPS (26) (N = 447) | Pravastatin | -0.014 [-0.022, -0.006] (p = 0.005) | CVD death, MI, stroke | 0.57 [0.22, 1.47]
PLAC-II (47) (N = 151) | Pravastatin | -0.009 [-0.031, 0.013] (p = 0.44) | Clinical coronary events | 0.37 [0.11, 1.24]
CAIUS (48) (N = 305) | Pravastatin | -0.014 [-0.021, -0.005] (p = 0.0007) | CVD death, MI | 1.02 [0.14, 7.33]
REGRESS (28) (N = 255) | Pravastatin | -0.030 [-0.056, -0.004] (p = 0.002) | Clinical events | 0.51 [0.24, 1.07]
BCAPS (49) (N = 793) | Fluvastatin | -0.008 [-0.013, -0.003] (p = 0.002) | CVD death, MI, stroke | 0.64 [-0.24, 1.66]
FAST (50) (N = 164) | Pravastatin | Significant benefit (p < 0.001) | CVD death, MI | 0.32 [0.10, 1.06]
Pooled estimate | | -0.012 [-0.016, -0.007]** | | 0.48 [0.30, 0.78]
*Arms used in meta-analysis; **Excludes FAST.

Do IMT measurements meet clinical criteria for surrogate markers of cardiovascular disease events?
The three criteria described by Boissel, et al. [ 6 ] for surrogate markers relate to efficiency, linkage, and congruency, and are addressed in turn. B1: Efficiency Carotid IMT has been widely used in clinical trials. Reliable protocols have been established for its measurement, and it is arguably more sensitive to the effects of interventions than cardiovascular disease events. Six of the seven clinical trials in Table 1 reported a significant beneficial impact of statins on IMT progression with respect to their primary IMT outcome measure; the seventh trial, PLAC-II, found no significant impact on its primary IMT outcome measure but reported a significant impact on a secondary IMT measure. In six of seven trials, there were beneficial trends with respect to reported cardiovascular disease endpoints; however, only in one trial (ACAPS) did this trend reach nominal statistical significance. Thus, while IMT measures were sufficiently sensitive for benefit to be established within trials of this size, benefit with respect to cardiovascular events could not generally be established. B2: Linkage The strong association between carotid IMT and cardiovascular events has been demonstrated repeatedly. For example, the Cardiovascular Health Study found it to be the risk factor most strongly associated with incident cardiovascular events [ 19 ]. In the Rotterdam Study, Del Sol, et al. found that a single carotid IMT measurement was of the same importance as a battery of commonly used risk factors in the prediction of CHD and CVD [ 20 ]. The Atherosclerosis Risk in Communities (ARIC) study found that carotid IMT of 1 mm or more was associated with two to five times the hazard of CHD and four to eight times the hazard of stroke [ 21 , 22 ]. Using a nested case-control approach and a mean duration of follow-up of 2.7 years, the Rotterdam Study found that per standard deviation increase (0.16 mm) in IMT, the odds ratio for stroke was 1.41 and for myocardial infarction was 1.43 [ 23 ]. Atherosclerosis is a manifestation of the pathophysiology underlying cardiovascular disease. The links between carotid IMT and atherosclerosis are well established, and IMT measures, as markers of atherosclerosis, have contributed greatly to the understanding of atherosclerosis progression [ 24 , 25 ]. These measures have characterized the role of many risk factors for atherosclerosis and currently serve as the basis for several studies examining its genetics. The mechanisms by which atherosclerosis is causally related to cardiovascular events are also well established. B3: Congruency IMT (continuous) and events (categorical) represent different measurement scales, so it is difficult to argue that they are influenced by statin therapy to quantitatively similar degrees. We drew evidence that the impacts are qualitatively similar from a meta-analysis of the clinical trials listed in Table 1 , developing pooled estimates of the relative impact of HMG-CoA reductase inhibitor (statin) therapy on IMT progression and on the odds ratio of cardiovascular endpoints [ 26 ]. Because standard errors for IMT changes were not reported for the FAST trial, it was excluded from this analysis. Across the trials, statin therapy was associated with an average decrease in IMT progression of 0.012 mm/yr, with 95% confidence interval [-0.016, -0.007]. This pooled estimate confirms with greater precision the results from the individual trials.
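As an illustration of how pooled estimates of this kind are usually produced, the sketch below applies fixed-effect, inverse-variance weighting on the log odds ratio scale to three of the trial odds ratios from Table 1. This is a generic sketch of the method, not the authors' analysis code (the published meta-analysis pooled all seven trials and may differ in detail), and the proportion-of-treatment-effect calculation at the end anticipates the adjusted odds ratio discussed in the paragraphs that follow.

```python
# Fixed-effect, inverse-variance pooling of odds ratios on the log scale.
import math

def pool_odds_ratios(trials):
    """trials: list of (OR, ci_lower, ci_upper); returns pooled OR and 95% CI.
    The SE of each log-OR is recovered from its CI: (ln(u) - ln(l)) / (2 * 1.96)."""
    logs = [math.log(or_) for or_, lo, hi in trials]
    ses = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for or_, lo, hi in trials]
    w = [1 / se ** 2 for se in ses]
    pooled = sum(wi * li for wi, li in zip(w, logs)) / sum(w)
    se_pooled = (1 / sum(w)) ** 0.5
    return tuple(math.exp(x) for x in
                 (pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled))

# Three of the seven trials (ACAPS, KAPS, REGRESS), for illustration only
print(pool_odds_ratios([(0.34, 0.12, 0.69), (0.57, 0.22, 1.47), (0.51, 0.24, 1.07)]))

# Proportion of treatment effect (PTE) captured by the surrogate, under one
# common definition: 1 - beta_adjusted / beta_unadjusted on the log-OR scale.
# With the pooled OR of 0.48 attenuating to 0.64 after adjustment for IMT
# progression (discussed below), this gives ~0.39; the ~0.3 quoted below is
# consistent with computing the proportion on the (1 - OR) scale instead.
print(1 - math.log(0.64) / math.log(0.48))  # ~0.39
```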
More importantly, the meta-analysis yields a significant odds ratio of 0.48 [0.30, 0.78] for the reduction in cardiovascular events associated with statin therapy. Thus, a meta-analysis across a number of trials demonstrates a benefit with respect to cardiovascular disease events that is congruent with the benefits established by individual IMT trials. Do IMT measurements meet the statistical criteria for surrogate markers of cardiovascular disease events? The four criteria of Prentice [ 7 ] are as follows. P1: Impact of Interventions on Endpoint There is convincing evidence, some of which is summarized in the meta-analysis described above, that statin therapy reduces the risk of cardiovascular events, to the extent that this is now an indication for their use. P2: Impact of Interventions on Carotid IMT As noted above, this association is supported by the results of our meta-analysis (Table 1 ) and elsewhere (e.g. [ 4 ]). P3: Link Between Carotid IMT and Cardiovascular Events The considerable evidence for this association has been discussed above. P4: Conditional Independence Between Statin Therapy and Cardiovascular Events Given Carotid IMT We know of no published literature that examines this conditional independence for statin therapies. Such a study is difficult to mount, as it requires both sufficient power to demonstrate the relative impact of an intervention on IMT progression and sufficient size and follow-up time after this demonstration to assess the ability of measured IMT progression to account for subsequent risk. The only published account to examine the conditional independence of cardiovascular events given carotid IMT is for colestipol-niacin therapy in the Cholesterol Lowering Atherosclerosis Study (CLAS) clinical trial [ 27 ]. The 2-year CLAS trial demonstrated that colestipol-niacin therapy reduced IMT progression [ 28 ]; the trial cohort was surveyed an average of 8.8 years after the conclusion of CLAS to tally post-trial incidence of coronary events (nonfatal MI, coronary death, and coronary artery revascularization). These investigators found that while treatment assignment, by itself, was significantly related to the occurrence of these events (relative risk 0.41; p = 0.01), when on-study IMT progression was included as a covariate, this relationship evaporated (relative risk 1.1; p > 0.2). Friedman, et al. use the term proportion of treatment effect captured (PTE) to describe how well a surrogate marker meets criterion P4 [ 29 ]; at face value, the findings from CLAS produce an estimate that PTE exceeds 1. In our meta-analysis, when IMT progression is included as a covariate in regression models linking cardiovascular disease events to statin treatment, the relative odds ratio is attenuated from 0.48 (as tabulated above) to 0.64 and is no longer statistically significant (p = 0.13). This suggests that changes in IMT may account for some, but not all, of the effect of statins on cardiovascular events (i.e. a PTE of 0.3). Several issues complicate this argument, however. Even if a surrogate successfully meets Prentice's criteria for surrogacy within individual trials, because designs, cohorts, and endpoints vary, it is to be expected that a surrogate would account for only some, not all, of the treatment effects in regression models across trials. Secondly, like many markers, IMT is subject to measurement error that is not insubstantial.
This measurement error, if uncorrected, may lead to marked underestimates of relationships [ 30 ]: measured IMT progression may appear to account for less of the relationship between interventions and events than true progression does. These issues obscure the validation of surrogacy from meta-analyses based on published summary statistics. We can only conclude that IMT progression may account for at least some of the treatment effects attributable to statin therapy, but that it is difficult to quantify the degree of this relationship and that full surrogacy cannot be ruled out. Summary We have examined, in a structured and rigorous manner, the evidence that carotid IMT progression may serve as a surrogate for cardiovascular disease endpoints in statin trials. Each of the criteria for surrogacy described by Boissel, et al. appears to be met. The first three of Prentice's criteria are met, and the fourth is met by the one published study for which it can be evaluated (although not for statin therapy). Meta-analyses of statin trials provide support for Boissel's criteria and the first three of Prentice's criteria, and are not inconsistent with Prentice's fourth criterion. It is possible that these arguments may generalize to other agents whose mechanisms are similar to statins; however, additional analyses, based on criteria for surrogate outcomes, would be required to make this extension. Competing interests MAE received an honorarium from Sankyo Pharma, Inc for a meeting during which ideas for this manuscript were developed. He is an occasional consultant to other companies concerning the design of clinical trials involving carotid ultrasonography. DHO serves on data safety and monitoring boards for Pfizer and Astra/Zeneca and serves as consultant to Sankyo Pharma and to Merck. JGT received an honorarium from Sankyo Pharma, Inc for a portion of this work. TO has no competing interests. GE has received an honorarium and consulting fees from AstraZeneca Pharmaceuticals for assistance in the planning and implementation of a clinical trial involving carotid ultrasonography and statin therapy. He also serves as an occasional consultant to other companies on the design and conduct of trials involving carotid ultrasonography in which statins may be included as background therapies, but are not part of the experimental intervention. HM is an occasional consultant to Sankyo Pharma, including attending the meeting during which ideas for this manuscript were developed. He very occasionally consults with other pharmaceutical companies such as MSD, Essex, and Lilly. He is a consultant to Boston Scientific for interventional cardiology and intravascular ultrasound and is also a regular teacher in carotid stenting for the Guidant and Cordis companies. Authors' contributions MAE, DHO, and HM conceived and drafted this manuscript. MAE, JGT, and TM organized and conducted its meta-analysis. GE provided oversight to analyses and contributed to interpretation of results. All authors read and approved the final manuscript. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC555546.xml |
554102 | Access to communication technologies in a sample of cancer patients: an urban and rural survey | Background There is a growing awareness among providers of the symptom burden experienced by cancer patients. Systematic symptom screening is difficult. Our plan was to evaluate a technology-based symptom screening process using touch-tone telephone and Internet in our rural outreach cancer program in Indiana. Would rural patients have adequate access to technologies for home-based symptom reporting? Objectives 1) To determine access to touch-tone telephone service and Internet for patients in urban and rural clinics; 2) to determine barriers to access; 3) to determine willingness to use technology for home-based symptom reporting. Methods Patients from representative clinics (seven rural and three urban) in our network were surveyed. Inclusion criteria were age greater than 18, ability to read, and a diagnosis of malignancy. Results The response rate was 97%. Of 416 patients completing the survey (230 rural, 186 urban), 95% had access to touch-tone telephone service, while 46% had Internet access (56% of urban patients, 38% of rural patients). Higher rates of Internet access were related to younger patient age, current employment, and higher education and income. The primary barrier to Internet access was lack of interest. Use of the Internet for health-related activities was less than 50%. The preferred means of symptom reporting among patients with Internet access was the touch-tone telephone (70%), compared to reporting by the Internet (28%). Conclusion Access to communication technologies appears adequate for home-based symptom reporting. The use of touch-tone telephone and Internet reporting, based upon patient preference, has the potential of enhancing symptom detection among cancer patients that is not dependent solely upon clinic visits and clinician inquiry. | Background In recent years awareness of the symptom burden experienced by many cancer patients has grown [ 1 , 2 ]. At some time in their illness, symptoms such as fatigue, pain, nausea, depression, and hopelessness are very likely to occur. These symptoms can be disabling and they can even limit treatment. There is a growing body of literature demonstrating that interventions for these troubling symptoms are effective [ 3 , 4 ]. These interventions can improve the patient's quality of life by enabling the patient to function better at home and at work. While there is awareness among providers of the symptom distress experienced by patients, and there are effective symptom interventions, the problem in the day-to-day care of cancer patients is symptom identification [ 5 ]. At a recent meeting convened by the National Institutes of Health, it was concluded that little is known about the actual frequency and validity of symptom screening for common cancer and cancer treatment related symptoms. In the summary statement there was expert consensus about the need for routine screening for symptoms from the point of diagnosis. Assessments should be repeated during the course of treatment. Symptom data should be integrated into the routine care of cancer patients. Community Cancer Care (CCC) is an organization with home offices in Indianapolis, Indiana, that provides professional and program development services to 23 hospitals throughout the state of Indiana. Professional oncology services are provided by 18 medical oncologists-hematologists who are employed by CCC or serve under contract.
One psychiatrist, an advanced-practice nurse, and a certified nurse are dedicated to quality-of-life efforts. Each year, an average of 2500 new patients are seen in the network of clinics. At any given time, approximately 16,000 patients are receiving care in the CCC network. While CCC has clinics in metropolitan Indianapolis, rural outreach and program development in rural hospitals have been a major focus of CCC since its inception in 1983. Twenty-one clinics are located in Indiana towns with populations less than 16,000. Twenty counties served by CCC have populations less than 45,000. Using paper-and-pencil scales, we tried unsuccessfully to build a symptom screening process into the daily clinic workflow. The clinic process was slowed. Some patients could not complete the instruments. Patients' reports of their symptoms could not be analyzed quickly and placed on the chart for the provider to use. Symptom screening was limited to the day of the clinic visit. We could not easily evaluate a patient's status between office visits. Trends in symptom occurrence were difficult to identify. With pencil-and-paper instruments it was a laborious and expensive process to establish a database for our patients' symptom reports, a necessary step in program evaluation. Because of these limitations, our goal is to develop a technology solution to gather, analyze, and present symptom reports to physicians and nurses. Feasible options for reporting symptoms include either a touch-tone telephone or an Internet-connected computer. Because of well-documented differences in access to telephone service and the Internet [ 7 ], we conducted a survey in urban and rural oncology clinics to determine how many of our network patients had access to the required communication technology. For patients who had access to the Internet, we were interested in identifying predictors of access as well as patients' willingness to use the Internet for symptom reporting and other cancer-related purposes. Methods Procedures The study design and survey instrument were reviewed and approved by the Institutional Review Board of Community Medical Research Institute in Indianapolis. A convenience sample of cancer patients was gathered from the clinic network of CCC. Three urban clinics and seven rural clinics were conveniently selected for data collection. All of these sites had concentrated, busy clinic days during which patients could be recruited. Clinics were designated "urban" or "rural" based on their zip code being categorized urban or rural by the U.S. Department of Health and Human Services, Office of Rural Health Policy [ 6 ]. Staff members at clinic sites were instructed to offer the survey instrument to all patients attending clinic during selected weeks of March and April 2003. All patients were volunteers. All patients had to be at least 18 years of age, be able to read, and have a diagnosis of malignancy (either solid tumor or blood). The number of patients who refused to complete the survey was recorded. The survey instrument The survey instrument included nine items about demographics and access to touch-tone telephone service and the Internet. If patients indicated they did not have access to the Internet, the survey instrument directed them to questions about the reasons they did not have access. If patients indicated they did have access to the Internet, the survey instrument directed them to seven additional questions about how they use or might use the Internet.
Statistical analysis We used two-sample t-tests to test for mean differences and chi-square tests to test for differences in proportions of demographic characteristics across clinic setting and Internet access. Logistic regression models were used to evaluate access to the Internet as a function of clinic setting, adjusting for demographic characteristics. Results Four hundred and sixteen patients completed the survey (230 rural, 186 urban). The response rate was 97%. Thirteen patients refused to complete the survey, stating they were too ill or too tired. Table 1 summarizes characteristics of the sample, comparing patients in urban vs. rural settings. Patients in the rural sample were significantly older, had lower education levels, and were more likely to be Caucasian than patients in the urban sample. Touch-tone telephone service was available to most (95%) respondents, while 46% (95% CI 0.41–0.51) had access to the Internet. Compared to urban patients, those in rural settings had comparable telephone access but were less likely to have Internet access (38% vs. 56%, p < .001). Most patients (>80%) reported accessing e-mail and the Internet from home. As shown in Table 2 , patients with Internet access were significantly younger and had higher education and income levels than patients without Internet access. Additionally, patients with Internet access were more likely to be currently employed and from an urban clinic. Table 3 summarizes the results of a logistic regression model for Internet access. Higher income and current employment increased the likelihood of having Internet access, while older age and less education decreased the likelihood. Two-thirds (67%) of patients cited lack of interest as the reason for not having Internet access. Other common reasons were unfamiliarity with the Internet (21%), cost (20%), and hesitation to use a computer (13%). There were no significant differences between urban and rural patients regarding why they did not access the Internet. Fifty percent of patients with Internet access reported using it for health care purposes, in both rural and urban clinics, and nearly 60% reported having used the Internet to seek information about their cancer. Among the 169 patients with Internet access who indicated their preferred method(s) for symptom reporting, the telephone was the most popular method (70.4% of respondents), followed by Internet-based symptom reporting (28%) and a touch-screen computer in the clinic waiting room (15%). Compared to urban patients, rural patients were somewhat more likely to prefer telephone symptom reporting (79% vs. 63%, P = .02) and less likely to prefer Internet-based reporting (20% vs. 35%, P = .02). Finally, 137 respondents indicated the different ways they might use the Internet for their health care. Requesting information from a physician or nurse was the most frequently cited potential use (77% of respondents). Other reasons included submitting information about their own condition (59%), identifying and managing symptoms (54%), scheduling appointments (52%), and obtaining prescriptions (50%). Discussion The high rate (95%) of access to touch-tone telephone service among cancer patients in our network is comparable to data from other government surveys [ 7 , 8 ]. Internet access in both our urban sample (56%) and our rural sample (38%) is below general population estimates for the United States [ 9 ], but equal to the data generated for Indiana in a 2000 survey [ 7 ].
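A minimal sketch of the logistic regression described under "Statistical analysis" is given below; the data file and column names are hypothetical stand-ins, since the survey's actual variable coding is not reported.

```python
# Sketch of the Internet-access logistic model (hypothetical data layout).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # placeholder: one row per respondent

# Assumed coding: internet = 1 if the respondent reported Internet access, else 0;
# setting = "urban"/"rural"; employed = 1/0; education and income are categories.
model = smf.logit(
    "internet ~ age + C(education) + C(income) + employed + C(setting)",
    data=df,
).fit()

print(model.summary())
print(np.exp(model.params))  # coefficients expressed as odds ratios
```

Expressing the coefficients as odds ratios in this way corresponds to the kind of summary reported in Table 3, where income, employment, age, and education shift the likelihood of having Internet access.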
In a more recent survey, 63% of Indiana residents reported access to the Internet [ 10 ]. While the proportion reporting access in our sample was lower, this may in part be due to the over-sampling of rural subjects as well as certain demographic characteristics. Age, education level, income, and employment status were major variables influencing Internet access. While fewer individuals in rural settings reported having Internet access, the rural-urban differences were no longer significant after adjusting for age, educational level, annual income and employment status. Thus rural-urban differences may be due to socio-demographic factors more than to a higher presence of technology barriers in rural settings. Barriers to Internet use identified by patients, and the limited use of the Internet, offer opportunities for better patient communication and education. Over half of the patients without Internet access reported they were not interested. Perhaps waiting-room computers with links to cancer-related web sites with good educational and problem-solving content could spur interest. Educational programs for our cancer patients about the Internet and its use may also be helpful. The cost of Internet services did not seem to be a significant factor. These data suggest that a very significant proportion of cancer patients (more than half of those with Internet access) were willing to use this modality to communicate with their cancer clinic for multiple tasks. While email may offer a convenient means of communication with a physician's office, there are many barriers to its use. Eysenbach has written a thorough review of the potential problems of liability and time pressures [ 11 ]. While Katz and his colleagues found no time savings when email was used as a communication tool, it may well be an effective tool in some rural settings [ 12 ]. Other researchers have also suggested that patient satisfaction and participation in their health care can be increased by patients' use of the Internet [ 13 ]. The findings of this survey must be interpreted with caution. While very few patients refused to complete the survey, the patient sample is a convenience sample, not a total sample and not a random sample of our patients. With only 46% of our sample (191 patients) having access to the Internet, generalization should be cautious pending replication in a larger sample. The survey instrument did not include questions about patients' readiness to use a touch-tone phone for completing a symptom questionnaire, and this is a limitation of the study. Conclusion Our findings suggest that either touch-tone telephone or Internet-based computer methods might be used to collect home-based symptom ratings for cancer patients in both urban and rural centers. While access to technologies is adequate, acceptance and usability of such a system remain to be demonstrated. Patient preference for a telephone-based or Internet-based system can be definitively ascertained only after patients use both systems. With lack of interest being the most common barrier to Internet access, education and "get acquainted" programs for patients who do not have Internet access may be warranted. Alternatively, since many patients prefer the touch-tone telephone for symptom reporting, the use of IVR (Interactive Voice Response) technology provides another way of symptom reporting, coupled with centralized nurse care management of cancer-related symptoms.
Indeed, we are proceeding to test this in a study in which cancer patients will have an option of home-based symptom monitoring by either IVR or the Internet, coupled with centralized nurse care management of cancer-related symptoms. Patient resource centers with Internet access in outpatient clinics may be another mechanism to consider. Competing interests The author(s) declare that they have no competing interests. Authors' contributions MA: Assisted with data analysis and interpretation of results. Reviewed, corrected and submitted the final manuscript. DET: Designed the project including the questionnaire, obtained IRB approval, supervised completion of the survey, assisted with data analysis and writing the final manuscript. DB: Coordinated data collection from clinic sites and compiled survey results. KK: Assisted with data analysis, interpretation of results, in addition to review, preparation and submission of the final manuscript. AP: Carried out the data analysis and assisted with interpretation of results in addition to preparation of the methods and results sections of the manuscript. SE: Assisted in site recruitment and review of the manuscript. WMD: Assisted in site recruitment and review of the manuscript. Pre-publication history The pre-publication history for this paper can be accessed here: | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC554102.xml |
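A minimal sketch of the kind of analysis described in the Methods above (chi-square tests for rural-urban differences in proportions, and a logistic regression for Internet access adjusted for demographics) is given below for illustration. The data file, column names and category codings are assumptions, not the authors' actual analysis code.

import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

# Hypothetical survey data: one row per respondent, with assumed columns
# 'internet' (1 = has access, 0 = none), 'setting' ('rural'/'urban'),
# 'age', 'education', 'income' and 'employed'.
df = pd.read_csv("survey.csv")

# Chi-square test for a rural-urban difference in the proportion with access.
chi2, p, dof, _ = chi2_contingency(pd.crosstab(df["setting"], df["internet"]))
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")

# Logistic regression: access as a function of setting, adjusted for demographics.
model = smf.logit(
    "internet ~ C(setting) + age + C(education) + C(income) + C(employed)",
    data=df,
).fit()
print(model.summary())
print(np.exp(model.params))  # exponentiated coefficients = odds ratios

Under such a model, a rural-setting odds ratio below 1 that moves toward 1 after adjustment would correspond to the pattern reported above, in which the unadjusted rural-urban difference was no longer significant once demographic variables were taken into account.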
548517 | Simultaneous development of the Pediatric GERD Caregiver Impact Questionnaire (PGCIQ) in American English and American Spanish | Background The objective of this study was to develop simultaneously a new questionnaire, the Pediatric GERD Caregiver Impact Questionnaire (PGCIQ), in American English and American Spanish in order to elucidate the impact of caring for a child with GERD. Methods Four focus group discussions, two in American English and two in American Spanish, were conducted to develop a relevant conceptual model. Focus group participants were the primary caregivers of children with GERD (newborn through 12 years of age). Participant responses were qualitatively analyzed to identify potential differences in caregiver perspectives by the caregiver's language, socio-economic status and demographic profile as well as the child's age and disease severity level. Items in the PGCIQ were generated simultaneously in English and Spanish by reviewing results of qualitative analysis from focus groups in each language. The PGCIQ was finalized in both languages after testing content validity and conducting an in-depth translatability assessment. Results Analysis of focus group comments resulted in the development of a first draft questionnaire consisting of 58 items in 10 domains. Content validity testing and an in-depth translatability assessment resulted in wording modification of 37 items, deletion of 14 items and the addition of a domain with five items. Feedback from the content validity testing interviews indicated that the instrument is conceptually relevant in both American English and American Spanish, clear, comprehensive and easy to complete within 10 minutes. The final version of the PGCIQ contains 49 items assessing ten domains. An optional module with nine items is available for investigative research purposes and for use only at baseline. Conclusion The PGCIQ was developed using simultaneous item generation, a process that allows for consideration of concept relevance in all stages of development and in all languages being developed. The PGCIQ is the first questionnaire to document the multidimensional impact of caring for an infant or young child with GERD. Linguistic adaptation of the PGCIQ in multiple languages is ongoing. A validation study of the PGCIQ is needed to examine its psychometric properties, further refine the items and develop an appropriate scoring model. | Background Gastric reflux of short duration is a normal physiological event for all infants less than six to seven weeks old. When gastric reflux occurs more frequently and past the age of seven weeks, it becomes a clinically significant problem that is diagnosed as pediatric gastroesophageal reflux disease (GERD). Symptoms of pediatric GERD include pain, irritability, frequent spitting-up or vomiting, constant or sudden crying, poor sleep habits and frequent waking. At present, these symptoms represent a clinically significant problem for one of every 500 infants between the ages of six weeks and 18 months [ 1 ]. Although these symptoms are not uncommon in childhood, few symptomatic children are treated [ 2 ]. The prevalence of symptoms consistent with pediatric GERD varies with age and depends upon the type of symptoms. When referring solely to regurgitation symptoms, over 80% of children experience spontaneous remission by age 18 months [ 4 ]. In comparison, findings in the literature indicate that remission of GERD symptoms occurs in 70% of the population at three years of age [ 8 , 17 ]. 
In fact, research suggests that among children three to nine years of age, only 2.3%, 1.8% and 7.2% will experience symptoms of regurgitation, heartburn and epigastric pain, respectively [ 18 ]. Interestingly, despite large variation in prevalence rates by age group, differences in prevalence have not been reported across gender, ethnic groups or socio-economic classes. Many health care professionals (HCPs) specializing in the treatment of GERD feel that published prevalence rates underestimate the extent of the condition [ 6 , 7 ]. Interviews with HCPs suggest that pediatric GERD is under-treated because many pediatricians are not aware of how to effectively diagnose and treat the condition [ 8 ]. Furthermore, many specialists and caregivers believe that GERD is often "missed" by physicians, since it is "normal and common for infants to spit up several times a day" [ 4 ]. Failure to properly diagnose, lack of treatment, or sub-optimal treatment of these symptoms can lead to serious complications such as failure to thrive, anemia, esophagitis and respiratory disorders [ 4 ]. In addition to its serious consequences for infants and young children, untreated or ineffectively treated pediatric GERD, according to reports from parent advocacy groups, exerts a substantial negative impact on the life of the child's primary caregiver [ 4 ]. Caregivers of pediatric GERD patients report sleep loss and psychological and physical strain related to the ineffective or inadequate treatment of pediatric GERD [ 4 ]. The burden of care appears to affect every facet of the caregiver's life, including daily activities, social interactions, professional pursuits and family relationships. This burden results in changes in the caregiver's physical and psychological health, quality of life and financial well-being. The current study had two objectives related to the assessment of the impact of pediatric GERD on the caregiver's daily life. The first objective was to determine if there was an existing instrument suitable for measuring the impact of caring for an infant or child with GERD. If no instrument could be identified, the second objective was to develop an instrument to quantify the impact of pediatric GERD on caregivers, thereby providing a means to improve public awareness of the issue. Questionnaire development rationale A focused literature review was conducted to determine if a caregiver-reported outcome measure that assesses the impact of caring for an infant or child with GERD had been developed. This review included a search of commercial and Mapi Values in-house medical databases of published literature from 1990 to the present in order to identify available instruments and studies relevant to caregiver burden in pediatric GERD. Additionally, the review examined whether generic quality of life measures had been previously applied to the assessment of the impact of caring for a pediatric GERD patient. The literature review uncovered numerous instruments developed for caregivers of adult, elderly or terminally ill patients (e.g., cancer or AIDS patients) [ 9 ]. In contrast, few instruments or studies were identified that specifically assess the impact of a child's illness on the primary caregiver. 
Common approaches to assessing the impact of caring for a chronically ill child included asking caregivers open-ended questions about family strain [ 10 ], evaluating the effect of the child's illness on family resources [ 10 ] and measuring the impact of the child's illness on the caregiver's well-being and quality of life [ 11 ]. No instruments exploring the impact of caring for a child with GERD on the caregiver's quality of life were identified during this review. Nor were any generic measures identified as having been used to quantify the impact of caring for a pediatric GERD patient on the primary caregiver. Furthermore, no single disease-specific instrument was found that assesses the economic, emotional, psychosocial and physical burden experienced by the caregiver of a chronically ill child. The "Pediatric GERD Caregiver Impact Questionnaire" (PGCIQ) was developed in American English and American Spanish to address the need for an instrument to assess the impact of caring for a child with GERD. This instrument was developed for use in observational studies and multi-national clinical trials to systematically assess and document the physical, psychosocial, psychological and financial impact of caring for pediatric GERD patients. In addition, items in the PGCIQ were specifically developed to capture changes in caregiver burden in response to successful treatment. This instrument will allow documentation of the impact of caring for a child with GERD and provide evidence to increase public and physician awareness of the condition. The PGCIQ was developed simultaneously in American English and American Spanish to accommodate the rapidly changing population composition of the United States (U.S.). The U.S. Hispanic population grew by 61% from 1970 to 1980 and by another 53% in the following 10 years [ 12 ]. Between 1990 and 2000, Hispanics were the fastest growing ethnic group in the country [ 13 ]. Because the PGCIQ is intended for use predominantly in the U.S., it was deemed critical that the instrument be sensitive within Spanish-speaking as well as English-speaking populations. Utilizing the simultaneous development approach reduces risk factors that threaten the validity of cross-cultural comparisons in the two language groups. Although the questionnaire was initially developed in American English and American Spanish, it was designed to be suitable for cross-cultural and linguistic adaptation into multiple languages. Thus, the PGCIQ was developed to provide valuable information in multi-national clinical evaluations regarding the value of different GERD treatments from the caregiver's perspective. Information obtained from the PGCIQ can be used to inform and educate health care providers and payers about the needs of caregivers of pediatric GERD patients. Methods Simultaneous questionnaire development The development of the PGCIQ followed the methodology of simultaneous questionnaire development. This process was selected to reduce the risk of systematic measurement error at the item level (i.e., item bias) and to guard against construct bias by ensuring that the construct being measured was identical across all developed language versions. Combined with a translatability assessment conducted by linguistic experts, this methodology was intended to yield an "optimal" measure for adaptation of the American English version into different cultures. 
Additionally, the process aimed to produce a measure that was less susceptible to cultural differences than a questionnaire developed in one language and followed by translation into other languages [ 14 ]. Study participants Participants were recruited by pediatric gastroenterologists and pediatricians. The primary caregivers included in the discussions were 18 years or older and were caring for children (newborn to 12 years of age) who were either newly diagnosed with GERD or seeking treatment for a new episode of GERD after a period without treatment. Diagnoses of GERD required that children present with common clinical manifestations of GERD. Neonates and young babies were required to have chronic regurgitation, most commonly demonstrated by vomiting. In young children, diagnosis was confirmed by physical discomfort that manifested as prolonged crying, fussing, arching, or refusal of feedings. In addition, all children three months or older were required to have symptoms of GERD requiring acid-suppressive therapy. Caregivers were excluded if their child had a history of acute life-threatening events due to manifestations of GERD or a severe unstable illness that could exacerbate caregiver burden. In order to optimize the relevance of the instrument across a wide range of age groups, the study authors attempted to achieve an equal distribution of children within three groups: premature infants to three months, four to 11 months and 12 months to 12 years. Across all three groups, participant eligibility was confirmed by case report forms completed by the treating physician. Concept elicitation Four focus group discussions, two in American English and two in American Spanish, were conducted to elicit issues relevant to caring for a child with GERD. For each language group, discussions were conducted on the East and West Coasts to ensure adequate representation of Mexicans, Puerto Ricans and Cubans, the three largest groups of Spanish-speaking Americans currently residing in the U.S. [ 15 ]. Each two-hour focus group was conducted using a structured focus group discussion guide that was developed in American English and linguistically adapted into American Spanish. American English and American Spanish focus groups were facilitated by two female researchers, both living in the U.S. The native Spanish speaker was from Ecuador. Focus group moderators received identical, in-depth training from Mapi Values, and all focus groups were videotaped to ensure adherence to the discussion guide. Caregivers were asked to discuss how caring for their child had impacted their lives in the following areas: daily activities, social and family life, emotional and physical functioning and financial well-being. At the start of each focus group discussion, caregivers completed a 48-item GERD Caregiver Informational Survey, developed for this study, which contained questions about the caregiver's and child's demographic, socioeconomic and clinical profile. Qualitative analysis of the focus group discussions was conducted separately for each language group. This analysis included a comprehensive review of verbatim transcripts by native-speaking researchers who also conducted the focus group discussions. Participants' comments were coded to highlight key concepts and psychosocial correlates. Coded comments were subsequently grouped together to elicit the domains and issues important to caregivers. These domains and correlates provided the framework of the conceptual model for questionnaire development (Figure 1 ). 
Figure 1 Conceptual Model Following creation of a conceptual model, global concepts within each language group were compared to evaluate similarities and differences in caregiver perspectives. In addition, responses were evaluated to identify potential differences by the child's developmental stage, severity of disease, socio-economic status and demographic profile. Participants' verbatim quotes from each language group were then consolidated within the common domains. Item generation Items were simultaneously generated in American English and American Spanish after consolidating caregivers' verbatim comments. For each item, American English- and American Spanish-speaking interviewers discussed relevant concepts identified from the focus group discussions and agreed upon conceptually equivalent wording. This process ensured that each item in the questionnaire was pertinent to caregivers in both language groups and formed the basis for Version 1 of the questionnaire. Translatability assessment A panel of linguistic experts conducted a translatability assessment to finalize Version 1 of the PGCIQ. The questionnaire was evaluated for its cultural adaptability and translatability in all potential target languages to ensure the cultural equivalence of items. Additionally, this process was used to limit the threat of bias in all current and potential target languages. Items identified as irrelevant for future target languages were considered for deletion or modification in the English and Spanish versions. Content validity testing Content validity interviews were conducted by the native-speaking researchers who also conducted the focus group discussions. Researchers interviewed caregivers to assess the ease of comprehension, clarity, cultural equivalence and relevance of the first version of the PGCIQ. Participants were recruited following the same methods and criteria as for the focus group discussions. The interviews aimed to assess the clarity, comprehension and appropriateness of all instructions, questionnaire items and response scales. Interviews were transcribed verbatim in the participants' native language. Each transcript was comprehensively reviewed, analyzed and coded to highlight caregiver impressions of Version 1 of the PGCIQ. Coded comments were subsequently grouped together to elicit the caregivers' feedback by item. Items were considered for modification if more than one caregiver in either language had difficulties with or suggestions for the item. The interview results from each language group were analyzed separately and later compared for potential differences. In addition, caregiver responses were evaluated for potential differences by the child's developmental stage, severity of disease, socio-economic status and demographic profile. If an item was identified during content validity testing as potentially problematic, the item was reviewed and simultaneously modified by interviewers in both languages. If an item was problematic in at least one of the languages, the interviewers reviewed the caregivers' verbatim suggestions and attempted to agree upon a conceptually equivalent modification. If no suitable wording could capture the same concept in both languages, the item was deleted from the questionnaire. The final version of the questionnaire aimed to be relevant to caregivers of pediatric GERD patients, conceptually equivalent in American English and American Spanish, free from bias or jargon and easy to understand, interpret and complete within 10–15 minutes. 
The final version of the questionnaire was also developed to be appropriate for administration at a fifth-grade literacy level, as confirmed by a Flesch-Kincaid grade-level test [ 17 ], and to be linguistically adaptable, as confirmed by a translatability assessment. Results Focus group discussion participants Caregiver characteristics Twenty-seven caregivers, 12 American English-speaking and 15 American Spanish-speaking, participated in the focus group discussions. The final distribution of interview participants by geographic region was East Coast English (n = 7), West Coast English (n = 6), East Coast Spanish (n = 5) and West Coast Spanish (n = 9). The average age of focus group participants was 37 years, with a range of 18 to 64 years. The participants were predominantly female (89%), married (81%), living with their spouse and children (89%) and recipients of a high school diploma (81%). The fact that participants were primarily female reflects the composition of the population of primary caregivers seen by the investigators (referring physicians). Moreover, the study investigators attributed the lower numbers of single and divorced caregivers in the discussion groups to difficulties in taking time off work. Analysis of the caregiver characteristics also revealed differences by language group. Relative to Spanish-speaking participants, a greater proportion of English-speaking respondents were female (100% vs. 80%) and married (100% vs. 67%). Table 1 contains the characteristics of the focus group participants. Table 1 Focus group discussions: caregiver and child characteristics American English n (%) American Spanish n (%) Total n (%) Number of Caregivers by Language 12 (44%) 15 (56%) 27 (100%) Gender Female 12 (100%) 12 (80%) 24 (89%) Male 0 (0%) 3 (20%) 3 (11%) Age Mean 37 37 37 Range 21–58 18–64 18–64 Marital Status Divorced 0 (0%) 1 (7%) 1 (4%) In long-term relationship 0 (0%) 1 (7%) 1 (4%) Married 12 (100%) 10 (67%) 22 (81%) Single 0 (0%) 2 (13%) 2 (7%) Other 0 (0%) 1 (7%) 1 (4%) Relationship to Child Aunt or Uncle 0 (0%) 1 (7%) 1 (4%) Grandparent 1 (8%) 0 (0%) 1 (4%) Parent 11 (92%) 14 (93%) 25 (93%) Number of Children by Caregiver Language 12 (44%) 15 (56%) 27 (100%) *Age (years, months) Mean 3.6 2.2 2.8 Median 1.6 3.3 2.5 Range 0.2–11.5 0.5–3.6 0.5–11.5 Gender Female 6 (50%) 8 (53%) 14 (52%) Male 5 (42%) 7 (47%) 12 (44%) Outlier excluded 1 (8%) 0 (0%) 1 (4%) *outlier age of 18.2 was excluded in determination of mean, median and range values Child characteristics The children in this study were primarily female (52%) and their average age was 2.8 years, with a range of six months to 11.5 years. The average age was determined after excluding an outlier (aged 18.2 years) from the calculation. In terms of the target age groups, 8% of the sample were premature to three-month-old infants, 33% were four- to 12-month-old infants, and 55% were 12-month-old to 12-year-old children. In total, 81% of the children were three years old or younger, with only five of the children over the age of three. The majority of this sample thus consisted of children under the age of three years. The age range of this population was consistent with the literature, which indicated that the prevalence of GERD symptoms significantly declines with age, with less than 30% of GERD cases persisting past three years of age [ 17 ]. According to caregivers' ratings, their children were primarily in very good (26%), good (26%) or fair health (26%). 
Analysis of additional clinical measures revealed significant differences in the children's health by language group, with Spanish-speakers reporting more hours of fussiness per day and more emergency room visits and hospitalizations. Table 1 also contains the characteristics of the children of the focus group participants. Concept elicitation Qualitative analysis of the focus group discussion results revealed nine key concepts relevant to caregivers of pediatric GERD patients: experiences related to GERD diagnosis, taking care of the child, daily activities, emotional well-being, household expenses, physical health, social life, relationships, and employment prior to and after GERD diagnosis. Each key concept is discussed in the following sections. Experiences related to GERD diagnosis Caregivers recalled feelings of fear, helplessness and guilt associated with the onset of their children's GERD symptoms. Many of these feelings were discussed in the context of negative interactions with HCPs. Caregivers shared stories about their struggle to get doctor appointments or referrals and being told that their children's symptoms were "normal" or that they were "overreacting" or "paranoid." These experiences appeared to be particularly stressful for the Spanish-speaking caregivers. In several cases, Spanish-speaking caregivers stayed with their physicians, despite receiving inadequate medical attention, because of their lack of comfort with the U.S. health system. Thus, their children tended to be diagnosed at an older age than children of English-speakers, resulting in elevated feelings of fear, helplessness and guilt. Although caregivers stressed the trauma they experienced in obtaining their child's diagnosis, this experience did not appear to change once the child had received an accurate GERD diagnosis. This domain is therefore unlikely to be affected by treatment and has been included as optional. Taking care of the child Caregivers discussed four unique parenting issues related to taking care of a child with GERD: using special feeding techniques, putting the child to bed safely, disciplining the child and finding a reliable childcare provider. First, caregivers of children under four years of age discussed the need to use special feeding techniques. These caregivers commented that they needed to feed their children "small portions," feed them "constantly" and prepare special meals or formulas. Second, caregivers of infants reported adopting unique bedtime routines. These caregivers used equipment such as wedges, car seats, pillows and monitors to put their children to bed safely. Third, caregivers of children over the age of four cited discipline as a challenging issue. Several caregivers recalled emotional tales of disciplining their children for throwing up "on purpose," while others feared that their children would spit up every time they were "yelled at" or "punished." Finally, virtually all of the caregivers cited childcare as a major concern. Most caregivers were unwilling or unable to find childcare providers capable of handling their children's special needs. Even those caregivers who found suitable care providers experienced a sense of distress due to concerns that others would not care for their child adequately. For example, one caregiver stated "No one else is going to give your child the attention [he/she] needs". 
Impact on daily activities The caregivers cited three daily activities that were significantly impaired by their children's GERD: mealtimes, housework and bathing. Concerning mealtimes, the caregivers shared stories about spending time "preparing multiple meals" for the family (separate meals for their child), lacking time to "sit down and eat" and having to eat cereal, frozen meals or take-out for dinner multiple evenings in a row. With regard to housework, the caregivers shared anecdotes about having to do laundry "every day," "constantly cleaning the carpet" and putting plastic covers over the furniture to protect it from their children's spit-up or vomit. Finally, the caregivers recalled stories about not having time to bathe, struggling to shower while holding the child and feeling too tired to get dressed. Limitations in these three activities appeared to be most pertinent among caregivers of infants and caregivers of children with more severe GERD symptoms. Impact on emotional well-being Caring for a child with GERD had a major impact on the caregivers' emotional well-being. The caregivers mentioned a myriad of feelings, including fear, worry, grief and depression. The lingering feelings of fear experienced by caregivers most frequently pertained to the possibility that their children might choke and stop breathing. These feelings of fear were reported primarily by caregivers of infants. Feelings of worry were experienced by caregivers of all age groups and predominantly focused on the child's "failure to thrive." These feelings of worry were particularly burdensome for caregivers of children with severe GERD marked by substantial weight loss and significant developmental delays. Two specific differences between language groups were found in this domain. English-speaking caregivers mentioned feelings of guilt for "disliking" or "not wanting" their children and feelings of envy upon seeing other "healthy babies." In contrast, Spanish-speakers explained that it was a "given" that caregivers would be close to their children and that they did not experience the feeling of "disliking" their child. Impact on household expenses Caregivers discussed incurring additional expenses for a variety of products and services related to caring for their children, including doctor/hospital bills, special infant formula(s), medicine, laundry, childcare, new or replaced furniture, new or replaced carpets, cleaning supplies and special equipment designed for managing GERD (e.g., wedges, car seats and high chairs). During the focus group discussions, the majority of caregivers spontaneously provided information related to their employment and income level when discussing the financial impact of caring for their children. Where possible, this information was noted and recorded by the moderators for analysis. Additionally, the caregivers were asked about their insurance coverage and typical expenses in the Caregiver Informational Survey. Analysis of these data indicated that the caregivers' expenses did not differ by their socio-economic status or language group. However, caregivers from lower-income households appeared to be more greatly impacted by their additional household expenses and reported greater financial strain. Impact on physical health The majority of caregivers indicated that their physical health had declined since their children first exhibited symptoms. 
Common terms used by the caregivers to describe the state of their physical health were "tired," "exhausted," "stressed out" and "distracted." In addition, the caregivers shared numerous stories about carrying their infants all day, sleeping in shifts with their spouses and developing physical ailments such as migraines and skin conditions due to the strain of caring for a child with GERD. These physical health challenges seemed to be particularly burdensome for caregivers of children with more severe cases of GERD. Impact on social life Caregivers reported reduced social interactions outside of the home and increased social isolation due to their children's GERD. Numerous caregivers commented that they preferred to "stay at home" due to the embarrassment of dealing with their children's constant "vomiting" and "screaming" in public and the emotional strain of seeing healthy babies of the same age as their children. For several caregivers, social isolation was exacerbated by difficulties experienced while talking on the telephone due to their children's "vomiting", "screaming" and need for attention. Issues of social isolation appeared to be most relevant for caregivers of younger infants and caregivers of children with more severe symptoms. Impact on relationships Caregivers described how caring for a child with GERD affected their relationships with their spouse/partner, relatives, other child(ren) in the family, and their child with GERD. Discussions about the caregiver's relationship with their spouse/partner focused upon the tension experienced between the couple jointly caring for a child with GERD. Tension was often attributed to disagreements over the child's medical care, reduced personal time, reduced desire for physical intimacy and decreased communication. Comments about their relationships with relatives emphasized feelings of frustration with family members who "just don't get it," "complain so much" and "always judge." Remarks about their relationships with other children in the family reflected reduced time to dedicate to the siblings of the GERD patient. Some caregivers confessed that due to the attention required by their child with GERD, they were "less patient," more "demanding" and more "neglecting" of their other children. Finally, stories about the impact on the bond between the child with GERD and the primary caregiver reflected a bidirectional relationship. In some cases, the child's illness seemed to weaken the bond with the primary caregiver, whereas the illness appeared to strengthen the bond in other instances. No major differences were revealed in the impact on these four relationships by language, age, socio-economic or symptom severity group. Impact on employment The impact of caring for a child with GERD on an individual's employment was primarily dependent on the caregiver's employment situation prior to the child's diagnosis. The caregivers who were previously employed mentioned a number of ways that caring for a child with GERD affected their paid employment, including changes in work schedules, difficulty taking enough time off to care for their child, reduced productivity and changes in long-term career goals. In order to provide sufficient care for their children, many of the caregivers changed their work schedule by leaving their jobs altogether, switching jobs or cutting back their hours. 
Moreover, several caregivers took more paid or unpaid vacation time in order to take their children to doctors' appointments, share the burden of care with their spouses or provide more attention to their children. Employed caregivers also described indirect costs in terms of reduced productivity and inability to concentrate at work. Many of those caregivers who were not employed prior to the child's diagnosis reported altering or sacrificing their career plans to ensure they had enough time to devote to the child with GERD. The aforementioned employment issues appeared to be relevant to caregivers across language groups, age groups and symptom severity levels. Overall, however, the employment issues appeared to be especially pertinent for individuals from low-income households, many of whom worked in hourly rather than salaried positions. Item generation Item generation based on nine identified areas and the caregivers' verbatim responses resulted in 10 domains with a total of 58 items that were simultaneously generated in American English and American Spanish. The first draft assessed the following areas: experiences related to diagnosis, caring for the child, daily activities, emotional well-being, physical health, social life, relationships, household expenses and employment prior to onset of GERD and current employment. The domain assessing the caregivers' experiences related to diagnosis was developed for baseline use only. This domain contains six items with "yes" and "no" response options and uses the present time as its recall period. Issues about employment were divided into two domains pertaining to the time before the child's diagnosis as well as the current time period in order to quantify how the direct and indirect costs associated with caring for a child with GERD change in response to treatment. With the exception of four domains (experiences related to diagnosis, household expenses, employment prior to onset of GERD and current employment), all questions in the remaining domains have ordinal scales evaluating intensity and frequency. These questions are phrased in the first person, for example: "During the last two weeks, I needed to prepare special meals or formulas for my child." A 5-point Likert scale was chosen as the response format, as the upper limit of an individual's capacity for discriminative judgments has been shown to be near seven (plus or minus two) [ 18 ]. Previous studies have suggested that 5-point Likert scales provide an appropriate level for respondents at a fifth-grade level of literacy to discriminate without a loss of information. Furthermore, a Likert scale with more than five points becomes problematic for translation into other languages. The nine core domains and one optional domain in the first version of the PGCIQ were selected to correspond with the key themes identified from the focus group discussions. To ensure cultural equivalence of items, only those concepts that had been identified in both language groups served as the basis for generating items. Considering that 80% of caregivers had children under the age of three, item generation was predominantly based upon verbatim comments elicited from caregivers of infants and young children. Child's age In the item generation focus group discussions, over 80% of the caregivers had children aged three years or younger. 
Specific concepts that were noted in the focus group discussions as relevant to caregivers of older children and adolescents, but not to caregivers of infants or toddlers, included discipline issues, concern about the child's emotional well-being and concern about the child's academic development and self-esteem. In contrast, concepts that were relevant to caregivers of infants and toddlers, but not to caregivers of older children or adolescents, included the need to feed the child frequently, feed the child small portions and spend a significant amount of time feeding the child. In general, infants and toddlers also seemed to require more constant one-on-one attention, thereby exerting a greater impact on the caregivers' daily activities, social functioning and family lives. Caregiver's gender, education and socio-economic status The PGCIQ was developed with caregivers from a range of socio-economic and educational backgrounds. In general, all domains were relevant to caregivers across socio-demographic groups. However, the "employment" and "household expenses" domains appeared more relevant to caregivers of lower economic and educational status due to a higher prevalence of hourly wage earners. Compared with salaried employees, those working for hourly wages reported greater difficulties getting time off from work to care for their child, less coworker understanding, more financial strain and more pressure to remain in positions they did not enjoy. Those in lower-income households also struggled more with additional household expenses due to the child's GERD and were more likely to report they could not afford products or services (for example, car seats, wedges, therapy sessions) to improve their children's health. Language group The focus group discussions were conducted in American English and American Spanish. All domains appeared to be relevant to both language groups. Specific issues that were more relevant to the English-speaking groups included experiencing feelings of "envy," feelings of guilt for "disliking the child" and feeling "bonded with the child with GERD." In contrast, employment challenges and issues of tension, reduced intimacy and communication in the partner relationship appeared to be more relevant to the Spanish-speaking groups. Relative to the English-speakers, the Spanish-speakers also voiced the following unique challenges: more financial strain, more pressure to be an "ideal" parent, more difficulty getting attention from English-speaking doctors, less understanding from co-workers and less access to information about GERD. Translatability assessment The results of the in-depth translatability assessment revealed that 16 of 58 items were potentially problematic for future adaptation of the PGCIQ into other languages. These 16 items were reviewed by native speakers. Four of the items were deleted and 12 of the items were modified to improve the cultural adaptability of the PGCIQ. The modified items were flagged for further testing in content validity interviews. Content validity interviews Caregiver and child characteristics Version 1 of the PGCIQ was tested with twenty caregivers, 10 English-speaking and 10 Spanish-speaking, who agreed to participate in content validity testing interviews. The final distribution of interview participants by geographic region was East Coast English (n = 4), West Coast English (n = 6), East Coast Spanish (n = 3) and West Coast Spanish (n = 7). 
The characteristics of the interview population were similar to those of the focus group population. The average age of the participants was 38 years, with a range from 24 to 79 years. The majority of the participants were female (70%), married (75%), living with their spouse and children (70%) and recipients of a high school degree (75%). These characteristics resemble those of the focus group population and favor the experiences of married females. Table 2 provides the characteristics of content validity interview participants. Table 2 Content validity interviews: caregiver and child characteristics American English N (%) American Spanish N (%) Total N (%) Number of Caregivers by language 10 (50%) 10 (50%) 20 (100%) Gender Female 8 (80%) 6 (60%) 14 (70%) Male 2 (20%) 2 (20%) 4 (20%) Missing Data 0 (0%) 2 (20%) 2 (10%) Age Mean 31 47 38 Range 24–37 30–79 24–79 Relationship to Child Grandparent 0 (0%) 3 (30%) 3 (15%) Parent 10 (100%) 5 (50%) 15 (75%) Missing Data 0 (0%) 2 (20%) 2 (10%) Number of Children by Caregiver Language 10 (50%) 10 (50%) 20 (100%) Child's Age (years, months) Mean 0.7 4.2 2.3 Median 0.5 2.0 0.7 Range 0.4–1.4 0.3–10.5 0.3–10.5 Gender Female 4 (40%) 5 (50%) 9 (45%) Male 6 (60%) 5 (50%) 11 (55%) Missing Data 0 (0%) 0 (0%) 0 (0%) The distribution of children was 45% female (n = 9) and 55% male (n = 11). The average age of the population was 2.3 years, with a range from 0.33 to 10.5 years. As noted, the majority of comments used for item generation of Version 1 pertained to infants and toddlers under the age of three years. Thus, an attempt was made to test the relevance of the PGCIQ in caregivers of older children during the content validity interviews. Despite attempts to recruit caregivers of older children, the final age distribution was skewed towards children 12 months old and younger. Premature through three-month-old infants made up 17%, four- to 12-month-old infants 56% and 12-month-old through 12-year-old children 28% of the final distribution. Similar to the age distribution of the focus group discussions, 85% of the interview participants cared for children through the age of three, with only three of the 20 caregivers having children over three years. Table 2 contains characteristics of the children of the interview participants. Caregiver Impressions Overall, the respondents had positive impressions of the PGCIQ and described it as "easy to understand" and "easy to answer." The respondents were also satisfied with the recall period, length and the format of the questionnaire. The average time of completion was eight minutes, with a range from five to 15 minutes. The caregivers suggested a number of revisions to improve clarity. These suggestions, combined with the results of the in-depth translatability assessment, resulted in wording modification of 25 items and deletion of 10 items. The majority of wording modifications were slight changes to enhance the clarity of the items. For example, one item, "During the last two weeks, I was limited in preparing meals," was modified to "During the last two weeks, I was limited in preparing meals for my family." Additionally, several caregivers suggested including a new domain to capture their relationships with family members. In response to this feedback, a new domain, "Your Relationship with Your Family Members," was added. 
Items in this section used the same wording as the items in the "Your Relationship with Your Partner" section, with the word "partner" replaced by "family member(s)." Final questionnaire The version of the PGCIQ created after content validity testing contains 49 items assessing 10 core domains: "Taking Care of Your Child," "Your Daily Activities," "Your Emotional Well-Being," "Your Household Expenses," "Your Physical Health," "Your Social Life," "Your Relationship with Your Partner," "Your Relationship with Your Family Members," "Your Employment Prior to Caring for Your Child with GERD," and "Your Current Employment." An additional, optional module with nine items, "Your Experiences Related to Diagnosis," is available for investigative research purposes and for use only at baseline. Discussion The following sections present considerations for the PGCIQ's use and potential limitations of the study. The following topics are addressed: "Your Relationship with Your Family Members"; symptom severity level; child's age and caregiver characteristics. Your Relationship with Your Family Members The new domain, "Your Relationship with Your Family Members," was added following content validity testing interviews based on feedback from participants that additional questions about relationships with family members would provide valuable information. This new domain has not been tested with the wording "family members." As this is a significant change that has not been tested, this domain must be scrutinized during the linguistic validation and psychometric testing process. Symptom severity level The PGCIQ was generated utilizing data from caregivers of children with a range of severity levels. Caring for children with more severe GERD or significant co-morbidities (for example, hiatal hernias or developmental disorders) tended to have a greater impact upon the primary caregiver across all nine domains. Conversely, a longer time period since diagnosis tended to reduce the impact on the caregiver. The reduction in impact over time appeared to be the result of two factors. First, many children eventually received successful treatment, which alleviated the GERD symptoms and reduced the strain on the caregiver. Second, the caregivers learned to accept and adapt to their children's condition with time. Child's age Given the age distribution of the study population, the PGCIQ is currently recommended for use in caregivers of pediatric GERD patients, newborn through three years old. Despite attempts to recruit across three age groups, ranging from newborn to 12 years of age, 80% of the focus group participants and 85% of the interview participants cared for a child aged newborn through three years of age. Although these age distributions were consistent with findings in the literature that approximately 70% of GERD cases spontaneously remit by age three, we felt that the majority of data favored infants and toddlers. Further testing of this instrument is therefore recommended in order to consider its use in populations of caregivers of children over the age of three. Researchers interested in extending the PGCIQ to children over the age of three are encouraged to consider the issues relevant to a child's age presented in the results section. Caregiver's gender, education and socio-economic status The focus group discussion and content validity participants were primarily married women. From interviews with physicians, it was found that the majority of caregivers who seek treatment for pediatric GERD patients are women. 
Even though this instrument has been developed in a primarily female population, the identified caregiver issues appear to span gender roles. The men who participated in the focus group discussions were a small but vocal minority who raised the same core domains as the female participants. Particular items of concern when using the questionnaire in men may be those related to domestic daily activities, given that these items tap more stereotypically female responsibilities. However, if the man is the primary caregiver, then it would stand to reason that he would be responsible for domestic daily activities. Future studies should seek to test the validity of the PGCIQ in larger populations of men and women to ensure that the items are equally relevant across gender roles. Conclusions The PGCIQ has been developed in American English and American Spanish following a rigorous methodology of simultaneous item development for use in observational studies and multi-national clinical trials. The instrument is being linguistically validated from the American English version into several other languages following the appropriate methodology. After linguistic validation and psychometric testing are complete, a theoretical scoring model can be developed, and the PGCIQ may be used in international studies. Considering the age composition of the focus group and content validity interview population, the PGCIQ is currently recommended for use only in caregivers of pediatric GERD patients, newborn through three years old. Further testing of this instrument is recommended in order to consider its use in populations of caregivers of children over the age of three. Additionally, the study population was predominantly female, married and of moderate to high educational status. However, analysis of the caregivers' verbatim comments suggested that the nine domains were equally relevant to men and women, married and unmarried participants and caregivers of different educational levels. Future tests of the PGCIQ should include large samples of men and women of different marital and educational status to confirm the relevance of these domains. The PGCIQ is the first questionnaire to document the multi-dimensional impact of caring for an infant or young child with GERD, making it a valuable tool to gather information and quantify the relevant issues and impacts experienced by caregivers of pediatric GERD patients. Information gathered from this tool can be used to inform and educate physicians, managed health care organizations, pharmacists and other health care providers about the needs of caregivers. In addition, items in the PGCIQ were specifically developed to capture changes in caregiver impact in response to successful treatment. Because caregivers mentioned all nine domains as affected by their child's GERD, we conjecture that intervention effects would be associated with changed scores in all nine domains. Thus, the PGCIQ may provide valuable information in multi-national clinical evaluations regarding the value of different GERD treatments from the caregivers' perspective. This simultaneously generated patient-reported outcome instrument is one of the few examples of simultaneous development identified in the literature. This methodology results in a measure with a reduced risk of factors that might otherwise threaten the successful cross-cultural adaptation of the instrument, making the PGCIQ an appropriate instrument for adaptation into multiple languages. 
It is important for the PGCIQ to undergo psychometric testing to develop a final scoring model before the instrument can be used to assess the impact of pediatric GERD on caregivers. Academics and researchers interested in obtaining a copy of the PGCIQ should contact the lead author. Authors' contributions JK participated in the study design and coordination. DLK participated in the design, coordination, and analysis and its interpretation. SB participated in the field work, coordination, and analysis and interpretation. JAC conceived of the study and participated in its design. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC548517.xml |
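For illustration, the fifth-grade readability check mentioned in the Methods above rests on the standard Flesch-Kincaid grade-level formula, 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. The sketch below is an approximation only: the formula is standard, but the syllable counter is a crude vowel-group heuristic and the sample item is simply drawn from the text above.

import re

def count_syllables(word: str) -> int:
    # Approximate syllables as runs of consecutive vowels (crude heuristic).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    # Flesch-Kincaid grade level =
    #   0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

item = "During the last two weeks, I needed to prepare special meals or formulas for my child."
print(round(fk_grade(item), 1))  # prints an approximate U.S. grade level

In practice, a dedicated readability library with a proper syllable dictionary would be preferable to this heuristic, but the sketch conveys why short words and short sentences keep an instrument near a fifth-grade reading level.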
514532 | Public – private 'partnerships' in health – a global call to action | The need for public-private partnerships arose against the backdrop of inadequacies on the part of the public sector in providing public goods on its own, in an efficient and effective manner, owing to lack of resources and management issues. These considerations led to the evolution of a range of interface arrangements that brought together organizations with the mandate to offer public goods, on the one hand, and those that could facilitate this goal through the provision of resources, technical expertise or outreach, on the other. The former category includes governments and intergovernmental agencies; the latter, the non-profit and for-profit private sector. Though such partnerships create a powerful mechanism for addressing difficult problems by leveraging on the strengths of different partners, they also bring with them complex ethical and process-related challenges. The complex transnational nature of some of these partnership arrangements necessitates that they be guided by a set of global principles and norms. Participation of international agencies warrants that such partnerships be set within a comprehensive policy and operational framework consistent with the organizational mandate; involvement of countries requires legislative authorization, within the framework of which procedural and process-related guidelines need to be developed. This paper outlines key ethical and procedural issues inherent to different types of public-private arrangements and issues a Global Call to Action. | Public-private partnerships in health – a global call to action Public-private partnerships are being increasingly encouraged as part of the comprehensive development framework. The need to foster such arrangements is supported by a clear understanding of the public sector's inability to provide public goods entirely on its own, in an efficient, effective and equitable manner, because of lack of resources and management issues. These considerations have necessitated the development of different interface arrangements, which involve the interfacing of organizations that have the mandate to offer public goods, on the one hand, and those that could facilitate this goal, on the other. Within the health sector, public-private partnerships are also the subject of intensely fueled debate [ 1 ]. Several examples, which fall within this framework, highlight a potential for the creation of a powerful mechanism for addressing difficult problems by leveraging on the strengths of different partners; however, these also illustrate complex issues, as such arrangements bring together a variety of players with different and sometimes conflicting interests and objectives, working within different governance structures [ 2 ]. This paper focuses on public-private partnerships that are intended to address broad questions of providing sustainable health outcomes rather than on the day-to-day interaction that occurs when the government buys a health service from a private supplier or where it leaves the entire matter of health service supply to the private sector. The public sector in this paper refers to national, provincial/state and district governments, municipal administrators, local government institutions, and all other government and inter-governmental agencies with the mandate of delivering 'public goods'. 
The word private denotes two sets of structures: the for-profit private sector, encompassing commercial enterprises of any size, and the non-profit private sector, referring to Non-Governmental Organizations (NGOs), philanthropies and other not-for-profits. The word partnership in this paper refers to long-term, task-oriented and formal relationships. There has been ample critique relating to the convention of using the word partnership to describe such arrangements; much of this debate is valid, given that there are certain requisites for terming an arrangement a partnership. For the same reasons, it also needs to be differentiated from privatization, which involves permanent transfer of control through transfer of ownership rights or an arrangement in which the public sector shareholder has waived its right to subscribe. A distinction also needs to be made between partnerships and contractual arrangements, particularly with regard to the relationship between the public sector and NGOs. Although such arrangements can be used for strategic purposes, they are inherently distinct from partnerships. Types of public-private interface arrangements The database of the Initiative on Public-Private Partnerships for Health of the Global Forum for Health Research lists 91 international partnership arrangements in the health sector, which can qualify to be called public-private partnerships. Of these, 76 are dedicated to infectious disease prevention and control, notably AIDS, tuberculosis and malaria; four focus on reproductive health issues, three on nutritional deficiencies, while a minority focus on other issues (health policy and research {1}, injection and chemical safety {2}, digital divide {1}, blindness and cataract {4}) [ 3 ]. This categorization takes into account large transnational public-private partnerships. There are, however, many other arrangements at a country level, which bring in their wake similar challenges to those posed by larger partnerships. Several classifications have been proposed to conceptualize and categorize public-private partnerships. These may be based on the terms of the constituent membership or the nature of activity [ 4 , 5 ]. However, by virtue of the definitions and characteristics of the public and private sectors, it can be stated that public-private arrangements are fostered either when governments and inter-governmental agencies interface with the for-profit private sector to tap into resources, or when they interface with the non-profit private sector for technical expertise or outreach. Several varieties of arrangements of various sizes, forms and scope at a global, regional or country level qualify to fall within this categorization. Transnational partnerships involving a visible role of the for-profit sector are at one end of the spectrum. These usually involve larger partnerships and a complex grouping; depending upon their structure, they may bring together several governments, local and international NGOs, research institutions and UN agencies in transnational programs, often also involving the non-profit sector. Such partnerships can be housed and coordinated by different sources [ 6 ]. 
They can be owned by the public sector and have private sector participants, as in the case of the Global Alliance for Vaccines and Immunization (GAVI) [ 7 ], Roll Back Malaria (RBM) [ 8 ], the Stop TB partnership (Stop TB) [ 9 ], the Safe Injections Global Network (SIGN) [ 10 ], the Global Polio Eradication Programme (PEI) [ 11 ], the Special Programme for Research and Training in Tropical Diseases (TDR) [ 12 ], and the Special Programme for Research Development and Research Training in Human Reproduction (HRP) [ 13 ]. Partnerships can be principally orchestrated by companies, as in the case of Action TB [ 14 ], and can be legally independent, such as the International Aids Vaccine Initiative (IAVI) [ 15 ], Medicines for Malaria Venture (MMV) [ 16 ], the Global Alliance for TB Drug Development (GATBDD) [ 17 ], and the Concept Foundation (CF) [ 18 ]. Large partnerships can also be hosted by a civil society NGO; examples include the Malaria Vaccine Initiative (MVI) [ 19 ], the Mectizan Donation Programme (MDP) [ 20 ], and the HIV Vaccine Initiative (HVI) [ 21 ]. At the other end of the spectrum, there are examples of individual governments forming partnerships with the for-profit private sector [ 22 ]. There are also examples of situations where a government partners with an NGO with a particular strength, technical or outreach-related. The recent evolution of a public-private partnership for the prevention and control of non-communicable diseases in Pakistan is an example of this approach, where the government leverages the technical strength of the private sector partner to address an emerging health challenge [ 23 ]. Examples also exist of NGOs seeking support from corporate partners both at a national and an international level. The World Heart Federation has recently structured transparent and successful business relationships with the corporate sector to support global programs, with initial encouraging results [ 24 , 25 ]. Partnerships in the health sector can serve various purposes; the categories stated by the Initiative on Public-Private Partnerships for Health are summarized in Table 1 . Such partnerships are novel arrangements and potentially present an opportunity for multiple partners to contribute to the same goal. Many of these have positively contributed to health outcomes in the past; developing technologies for tropical diseases, developing surveillance and screening strategies, and contributing to technical aspects of sustainable drug development and vector control are a few examples [ 26 , 27 ]. Nevertheless, partnerships involving the for-profit private sector bring in their wake many concerns, as they involve a donor-recipient relationship [ 28 ].
Table 1 Categorization of public-private partnerships based on the purpose they serve
1. Product development – GATBDD, IAVI, MMV and MVI.
2. Improving access to healthcare products – CF, MDP, Accelerated Access Initiative (AAI) [48], Global Alliance to Eliminate Leprosy (GAEL) [49], Global Alliance to Eliminate Lymphatic Filariasis (GAELF) [50] and the Global Polio Eradication Initiative (GPEI) [51].
3. Global coordination mechanisms – GAVI, RBM, Stop TB, Global Alliance for Improved Nutrition (GAIN) [52], and the Micronutrient Initiative (MI) [53].
4. Strengthening health services – Alliance for Health Policy and Systems Research (AHPSR) [54], Multilateral Initiative on Malaria (MIM) [55], African Comprehensive HIV/AIDS Partnerships (ACHAP) [56].
5. Public advocacy and education – Alliance for Microbicide Development (AMD) [57], African Malaria Partnership (AMP) [58], Global Business Coalition on HIV and AIDS (GBC) [59] and Corporate Council on Africa (CCA) [60].
6. Regulation and quality assurance – the International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) [61], Pharmaceutical Security Institute (PSI) [62] and the Anti-Counterfeit Drug Initiatives [63].
In many countries, there are long-established links between the public sector and NGOs. Theoretically, since NGOs are not driven by a profit-generating motive, many of the ethical challenges that potentially exist in partnering with the for-profit sector are not of relevance in this case. However, it can plausibly be argued that NGOs, though ostensibly objective and altruistic, may in fact have quite complex motives. In promoting public-private partnerships, therefore, several issues need to be clearly flagged in an attempt to address them in tandem with efforts that aim to foster such relationships. Within that context, a set of ethical and process-related challenges are summarized hereunder. Ethical challenges, which are largely generic across the range of public-private partnerships, relate to the following dimensions:
1. Global norms and principles: Many of the large partnerships involving a variety of players are of a transnational nature. However, against this backdrop, there are no global norms and principles to set a framework within which global public health goals can be pursued in a partnership arrangement.
2. Impartiality in health: If public-private partnerships are not carefully designed, there is a danger that they may reorient the mission of the public sector, interfere with organizational priorities, and weaken its capacity to uphold norms and regulations. Such a shift is likely to displace the focus from the marginalized and may therefore be in conflict with the fundamental concept of equity in health.
3. Social safety nets: It has been increasingly argued that engaging in a partnership mode provides the public sector with an opportunity to renounce its responsibilities; this may, in a sense, lead to withdrawal of social safety nets. Failure to commit to maintaining the role of the state in such partnerships may result in a laissez-faire attitude, prejudicial to the interest of the most vulnerable groups.
4. Conflict of interest: Many partnerships are initiated on the premise that they fulfill a social obligation, and can involve good intentions on the part of individuals and organizations. However, the basic motive that drives the for-profit sector demands that these involve a financial pay-off in the long term. In such cases, a clear distinction needs to be drawn between corporate sponsorships and philanthropic donations with long-term, visible public health goals. This issue has been further complicated in recent years as many global health initiatives funded by endowments generated by foundations have partnerships with the private sector as a key feature [ 29 ]. Such donor-recipient relationships bring in their wake many concerns. These include concerns that such arrangements provide the for-profit private sector an opportunity to improve its organizational image by engaging in cause-related marketing, and concerns that these engagements facilitate access of the commercial sector to policy makers.
On the other hand, many NGOs, even in the developing countries, are little more than lobby groups with a particular interest, which may or may not be aligned to the public good.
5. Redirecting national health policies: There are also concerns that such partnerships redirect national and international health policies and priorities and have the potential to defeat crucial local and national efforts.
6. Fragmentation of the health system: Partnerships generally tend to aim for short-term, high-profile goals and tend to pick the lowest-hanging fruit. Partnerships do not have the mandate, and cannot be held accountable, to synchronize their activities with emerging processes within countries aimed at developing their health systems. Therefore, if they are instituted in countries with weak health systems, they have the potential to fragment the healthcare system by instituting independent vertical programs. The changing global agenda around vaccines helps to highlight many of these issues. Previously, policies around vaccination were grounded in the general principle of promoting equitable access to a few vaccines around the world. However, new initiatives and their vertical systems have less of a focus on sustainability, may not contribute to strengthening of the health system, and have the potential to redirect national health policies that focus on equitable care [ 30 ].
7. Contribution to common goals and objectives: It is common for partners to have different objectives while pursuing a relationship, though it may be implicit that partnerships are contributing to common goals.
8. Lack of outcome orientation: Often, partnerships exist in form only and do not contribute to improvements in quality and efficiency.
Operational and process-related challenges in public-private partnerships relate to the following dimensions:
1. Legislative frameworks, policies and operational strategies: Many developed countries have legislation to interface with the private sector [ 31 ]. However, in the developing world there is a general failure to enact overarching legislation relating to public-private partnerships. As a result, such arrangements develop on an ad hoc and opportunistic basis and may have questionable credibility; in consequence, policies and specific operational strategies also fail to develop.
2. Participatory approach to decision making: The expression 'partnership' gives the impression of equality. However, in many cases a participatory approach to the decision-making process is not optimally accomplished. This has implications of varying degrees. Almost all of the 91 large transnational partnerships referred to earlier are focused on the developing world. However, among these, 85 have their secretariats in Europe and North America, with the United States and Switzerland being the commonest host countries. Clearly, this lack of proximity to the intended beneficiaries has a bearing on the extent to which the beneficiaries have a role in the decision-making process [ 32 ]. The decision-making process in a partnership may also be biased because of the stronger partner's influence. At a country level, in the case of governments interfacing with NGOs, the stronger partner, which is usually the government, generally tends to make the rules.
On the other hand, in the case of relationships with the for-profit private sector, there is the danger of the financially stronger partner influencing the public sector's decision-making process on policies and on regulatory and legislative matters that have implications for its profit-making motive.
3. Governance structures: Workable partnerships require a well-defined governance structure that allows for the distribution of responsibilities to all the players. Public-private partnerships may run into problems because of ill-defined governance mechanisms. A recent evaluation of the RBM project, while acknowledging the successes of the partnership in drawing global attention to the scale of the problem posed by malaria, outlined serious governance-related issues [ 33 ]. More recently, an independent evaluation of the global Stop TB partnership has also resulted in the issuance of detailed recommendations for improved governance [ 34 ].
4. Power relationships: Skewed power relationships are a major impediment to the development of successful relationships. Governments in developing countries usually tend to assume core responsibility for the joint initiative and take charge over the weaker partner. In the case of NGOs with outreach-related strengths, this usually takes the form of a 'contractual relationship' without much regard to the participatory processes which should be key to a public-private partnership arrangement. In the case of relationships with NGOs with technical strength, there are more serious issues relating to power relationships, particularly with regard to who assumes the leadership role.
5. Criteria for selection: The criteria for selection are an important issue from both an ethical and a process-related perspective, as they raise questions of competence and appropriateness. In many instances the public sector is vague about important issues related to screening potential corporate partners and those in the non-profit sector.
6. Sustainability: The question of long-term sustainability is often ignored in public-private partnerships. An analysis of the operation of GAVI has concluded that it overemphasizes high-technology vaccines, lacks sustainability, relies too heavily on the private sector and, consequently, runs the risk of compounding health inequities in the poorest countries [ 35 ].
7. Accountability: Many partnerships do not ensure that all players are held accountable for the delivery of efficient, effective and equitable services in a partnership arrangement. Often in public-private relationships it is unclear to whom these partnerships are accountable, according to what criteria, and who sets priorities. To hold partners accountable for their actions, it is imperative to have clear governance mechanisms and to clarify partners' rights and obligations. Clarity in such relationships is needed in order to avoid ambiguities that lead to the break-up of partnerships. A case in point is the recent break-up of GAEL with the exit of the International Association of Anti-leprosy [ 36 ].
The Call to Action In the world we live in today, global agendas are being increasingly shaped by the private sector. The for-profit private sector's immense resources make it an irresistible partner for public health initiatives. These arrangements can also be mutually synergistic.
Governments and international agencies can tap into additional resources to fulfill their mandate, whereas the commercial sector can fulfill its social responsibility, for which it is being increasingly challenged. Additionally, the recent SARS epidemic and bio-terrorist threats should help the private sector understand the value of investment in health for reasons beyond fulfilling its social obligations. Active involvement of the non-profit sector and donor coordination in country goals is also being increasingly encouraged within comprehensive development frameworks; this approach is in harmony with the Poverty Reduction Strategy Paper framework [ 37 ]. Development and health actors have highlighted the need to harness the potential that exists in collaborating with the private sector to advance public health goals. This is also becoming increasingly essential as both the public and the private sector recognize their individual inability to address the emerging public health issues that continue to be tabled on international and within-country policy agendas. Public-private partnerships therefore seem both unavoidable and imperative. However, in building such collaborations, certain measures must be taken at a global level to assist global partnerships and to set a framework within which efforts at a country level can develop. As a first step, there is a need to develop a set of global norms and ethical principles, and a broad-based agreement over these must be achieved. The transnational nature and global outlook of emerging partnerships necessitate that these stem from a broad-based international dialogue. It is critical that the driving principles for such initiatives be rooted in 'benefit to society' rather than 'mutual benefit to the partners', and that they center on the concept of equity in health. Norms must stipulate that partnerships contribute to the strengthening of social safety nets in disadvantaged settings and be set within the context of 'social responsibility', as the idea is neither for private funds to be put to public use nor to privatize public responsibilities. Global principles must specify that partnerships should be in harmony with national health priorities; they should complement and not duplicate state initiatives and should be optimally integrated with national health systems without any conflict of interest. Norms must make it mandatory for all partners in a 'partnership' arrangement to contribute to common goals, as a true partnership is one in which the partners, though having different motivations and values, have a shared objective. Global norms must require that partners be committed to making contributions, to sharing risks and to sharing in the decision-making process. Principles should emphasize an outcome orientation. Development of a public-private partnership in itself should not be seen as an outcome , but as a process and an output ; it is important for partnerships not just to exist in form but to contribute to improvements in health outcomes. It must be made binding for international agencies to develop transparent policy and procedural frameworks. Many international agencies have established guidelines on interacting with the private sector [ 38 - 45 ]. However, there is a need for comprehensive policies and operational strategies, which are crucial to ensuring transparency and protecting the public interest [ 46 ].
Inviting third-party reviews and ensuring an open process for deliberations will help to ensure transparency and demonstrate that these processes are indeed being structured in the public interest. Global efforts should demand, encourage and assist the development of policy and legislative frameworks shaping public-private partnerships within countries [ 47 ]. In the setting of developing countries, however, there is a need for international actors to guide these frameworks and for them to emanate from within the framework of global norms and standards. Assisting with capacity development through donor coordination may be a necessary prerequisite to this approach. Legislative and policy frameworks within countries will help to legitimize public-private relationships, lend credence to this approach, help to foster an enabling environment and provide a mandate for the development of ethical guidelines to further direct these initiatives. Within stipulated legislative and policy frameworks, support must be provided to developing countries to develop specific guidelines to steer such relationships. Guidelines can assist with the development of selection criteria and help specify the roles of the public and the private sectors. They can also assist with the development of models that outline combined governance structures, clearly aimed at improved systems of governance. Guidelines must also articulate a clear policy on a participatory approach to the decision-making process. In addition, they should assist with assigning responsibilities to various levels of government and then holding people and institutions, both within governments and among the private sector bodies that partner with them, accountable for their performance. Though an evidence-based approach and ethical considerations must never be compromised in such endeavors, and every effort should be made to ensure that goals are mutually compatible, guidelines also need to be flexible in order to accommodate each partner's organizational requirements and integrity. Moreover, they need to be pragmatic. The public sector needs to recognize the basic legitimacy of the private sector and the profit motive that drives it. It is also essential for the public sector to respect the organizational autonomy and priorities of the non-profit sector. In this context, partnerships and contractual relationships need to be carefully differentiated. Partnerships must also be the subject of rigorous empirical research, which would enable a detailed assessment of the specific issues inherent to the various types of public-private partnership arrangements from an ethical and methodological perspective. The impetus for driving global and national efforts to create a transparent and conducive environment for public-private partnerships needs to come from the public sector. This raises the issue of capacity within countries; the gap needs to be bridged by assistance from UN agencies, which have the mandate of harnessing and coordinating support among a variety of players for global actions. However, the results of such actions will only be as good as governments make them; weak and poorly informed governments cannot remedy their own deficiencies by seeking to yoke the private sector to their own uncertain cart. In conceptualizing a framework that assists with setting global norms and guidelines and within-country legislative actions, it needs to be recognized that the dynamics of public-private partnership arrangements are generic across the social sector.
It may therefore be useful to allow this commonality to prevail in initiating global and country-specific actions.
423163 | Correction: Mre11 Assembles Linear DNA Fragments into DNA Damage Signaling Complexes | null | In PLoS Biology , volume 2, issue 5: Mre11 Assembles Linear DNA Fragments into DNA Damage Signaling Complexes Vincenzo Costanzo, Tanya Paull, Max Gottesman, Jean Gautier DOI: 10.1371/journal.pbio.0020110 Because of a labeling error, the size of the DNA fragment used throughout the experiments was reported incorrectly (reported as 1 kb). The actual size of the DNA fragment was 131 bp. The fragment corresponds to the M13 DNA sequence from nucleotide 5656 to nucleotide 5787. This difference in fragment size does not affect any of the conclusions of the paper. This correction note may be found online at DOI: 10.1371/journal.pbio.0020229.
554116 | A Theory of Mind investigation into the appreciation of visual jokes in schizophrenia | Background There is evidence that groups of people with schizophrenia have deficits in Theory of Mind (ToM) capabilities. Previous studies have found these to be linked to psychotic symptoms (or psychotic symptom severity), particularly the presence of delusions and hallucinations. Methods A visual joke ToM paradigm was employed where subjects were asked to describe two types of cartoon images, those of a purely Physical nature and those requiring inferences of mental states for interpretation, and to grade them for humour and difficulty. Twenty individuals with a DSM-IV diagnosis of schizophrenia and 20 healthy matched controls were studied. Severity of current psychopathology was measured using the Krawiecka standardized scale of psychotic symptoms. IQ was estimated using the Ammons and Ammons quick test. Results Individuals with schizophrenia performed significantly worse than controls in both conditions, this difference being most marked in the ToM condition. No relationship was found between poor ToM performance and positive psychotic symptomatology, specifically delusions and hallucinations. Conclusion There was evidence for a compromised ToM capability in the schizophrenia group on this visual joke task. In this instance this could not be linked to particular symptomatology. | Background Theory of Mind and schizophrenia Theory of Mind (ToM) describes the ability to recognise that other people have minds containing beliefs and intentions and to be able to interpret these correctly. The term, first coined by Premack and Woodruff [ 1 ], is also referred to as mind-reading [ 2 ] or 'mentalising', when the correct inferences regarding the intentions and beliefs of others are used to predict and control behaviour [ 3 ]. ToM ability has been conceived as a capacity to represent epistemic mental states comprising an agent and an attitude to the truth of a proposition, e.g. "Peter believes that it is raining" [ 4 , 5 ]. The truth of this proposition concerning the mental state of an agent (Peter, who believes it is raining) need not be affected by the truth of the embedded proposition (it is raining), which may be false [ 6 , 7 ]. In this way, Leslie and Roth [ 6 ] proposed that a major requirement for computing such representations is a mechanism that decouples the content of the proposition (it is raining) from reality. These special representations have come to be termed metarepresentations or M-representations. ToM deficits in schizophrenia are widely reported from the numerous behavioural and neuroimaging studies that have investigated this phenomenon [e.g. [ 8 ]]. It is proposed that certain symptoms characteristic of schizophrenia may also reflect specific impairments in ToM abilities [see [ 9 ]], these being the positive symptoms of delusions and hallucinations and chronic negative symptoms. Frith [ 10 ] also hypothesised that positive schizophrenic symptoms could result from impairment in metarepresentation. In more detail, Frith hypothesised that in certain cases of schizophrenia something may go wrong with the decoupling process involved in computing metarepresentations [ 11 ]. This might occur in two ways. Using the above scenario, firstly, the content (it is raining) becomes detached from the rest of the proposition (Peter believes that...) and, secondly, the content is perceived as a representation of the real world rather than someone's belief about it.
This statement, unattached to any implication that it is a thought or belief of the patient or another person, may then be misconstrued, e.g. as a third-person auditory hallucination. Different forms of hallucination may be experienced according to the precise propositions misperceived. Misinterpretation of the behaviour or intentions of others may manifest as the delusions of reference, misidentification and persecution experienced by some individuals with schizophrenia. Indeed, it can be said that rather than an absence of ToM capabilities in these individuals, there is actually an inappropriate and excessive use of basically intact theory of mind capabilities [ 9 ]. This follows since a basically intact theory of mind mechanism is needed, say, to infer other people's persecutory intentions (even when these are mistaken inferences), and there is an over-attribution of intentions of this type in persecutory-deluded people with schizophrenia [ 12 , 13 ]. Frith has also referred to a distinction between over-mentalising in schizophrenia and under-mentalising in autism [ 14 ]. Pictorial studies Sarfati et al used a strictly pictorial task in which three-picture cartoon sequences were shown depicting a character producing an action, and the participants had to choose the fourth and final picture from a choice of three images [ 15 ]. Successful image choice depended on understanding the character's intent behind the action. They found that individuals with schizophrenia who had thought and speech disorganisation had a significant, specific difficulty attributing mental states to others. Sarfati et al then enlarged this experimental protocol by introducing a verbal dimension to the task [ 16 ]. There were now two answer conditions to the original three-picture cartoon sequences relaying character intent: the pictorial condition, identical to the above, and a new verbal condition where the choice of endings comprised verbal sentences. Disorganised individuals with schizophrenia performed significantly worse than the other experimental groups. Interestingly, all the groups' performance improved in the verbal condition, but the presence of verbal material did not make the disorganised patients' performance similar to that of the other groups. Sarfati et al followed this work up by looking at the difference in performance on the same task before and after the introduction of the verbal answer condition [ 17 ]. They compared a schizophrenia group and a matched control group. All of the control group, and half of the schizophrenia group, who did not perform at the best level in the pictorial answer condition remediated with verbalization. In contrast to their a priori hypothesis, it was not schizophrenia patients with thought and language disorders who remediated in the verbal condition. Langdon et al used a task comprising four-card black-and-white cartoon picture sequences of four varieties: social script stories testing logical reasoning about people without needing to infer mental states, mechanical stories testing physical cause-and-effect reasoning, false belief stories testing general mind-reading abilities, and capture stories testing inhibitory control. Cards were placed face down in a square layout and participants had to turn the cards over and place them in the correct order to show a logical sequence of events.
In order to control for possible contributory effects of executive dysfunction, inhibitory control was tested using capture picture-sequences and executive planning was tested using the Tower of London task. In both studies, it was found that individuals with schizophrenia showed a selective ToM impairment which could not be completely explained by reasoning or planning deficits or poor inhibitory control [ 18 , 19 ]. Brüne showed individuals a muddled four-picture cartoon sequence depicting a ToM scenario between characters [ 12 ]. The participants had to put the pictures into the correct sequence and then answer first- and second-order ToM questions related to the depiction. Whereas first-order questions require acknowledgement of what one story character thinks about the world, second-order questions require acknowledgement of what one story character thinks about another story character's thoughts. The schizophrenia group was outperformed by the control group. Corcoran et al used visual jokes to look at potential ToM deficits in schizophrenia [ 3 ]. Two sets of jokes were used: a Physical set of slapstick humour that did not require ToM capabilities to understand the joke contained within the picture, and a ToM set in which an appreciation of the mental states of the characters (false belief and deception) was required. ToM deficits were found in individuals with schizophrenia exhibiting passivity phenomena (e.g. thought insertion/withdrawal) and behavioural disorders. The primary interest of the current study was to examine the associations between specific schizophrenic symptoms and ToM capabilities using the cartoon method devised by Corcoran [ 3 ], but with a larger battery of visual jokes (over treble the number of picture stimuli). Patients with schizophrenia were compared to a closely matched group of healthy controls. It was anticipated, in keeping with Frith's model [ 11 ] and the data from Corcoran et al [ 3 ], that not only would the schizophrenia group perform significantly worse than the controls, but that the severity of positive symptoms, in particular hallucinations and delusions, would be most strongly associated with ToM impairment. Methods Participants Forty participants aged 19–65 years were recruited for this study. Twenty of these had a diagnosis of DSM-IV schizophrenia [ 20 ]. These were either in-patients of an acute psychiatric ward who were clinically stable and awaiting discharge, or outpatients attending clinics at the Royal Edinburgh Hospital. They were all receiving antipsychotic medication. The antipsychotic medication dose at the time of testing was recorded for each patient and, using standard published tables, was converted into a daily chlorpromazine-equivalent dosage [ 21 , 22 ]. Twenty healthy volunteers from various community and hospital sources were also recruited as a control group. An estimate of their current level of overall intellectual function was made using the Quick Test [ 23 ]. Demographic characteristics for both the experimental and control groups are shown in Table 1 and the clinical details of psychiatric participants can be seen in Table 2 .
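The chlorpromazine-equivalent standardization mentioned above is, in essence, a lookup-table calculation. The sketch below (Python) illustrates the idea only: the conversion factors are rough illustrative values, not the published tables [ 21 , 22 ] that the study actually used, and the function name is ours.

```python
# Illustrative only: approximate mg-chlorpromazine-per-mg factors. The study
# used standard published conversion tables, which should be consulted for
# any real calculation.
CPZ_FACTOR = {
    "chlorpromazine": 1.0,
    "haloperidol": 50.0,   # e.g. 2 mg haloperidol is commonly equated to ~100 mg CPZ
    "risperidone": 50.0,
    "olanzapine": 20.0,
}

def cpz_equivalent_mg(drug: str, daily_dose_mg: float) -> float:
    """Convert a daily antipsychotic dose to its chlorpromazine equivalent."""
    return daily_dose_mg * CPZ_FACTOR[drug.lower()]

print(cpz_equivalent_mg("olanzapine", 10.0))  # 200.0 mg CPZ-equivalent per day
```

Standardizing doses this way is what later allows medication load to be entered as a single covariate in the regression analyses.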
Table 1 Demographic characteristics – mean (SD) – of the subject groups
Group / n (m:f) / Age / Estimated IQ / Years of education
Schizophrenia: 20 (12:8) / 39.8 (11.6) / 97 (9.5) / 13.3 (2.9)
Control: 20 (11:9) / 39.8 (13.2) / 100 (7.7) / 13.5 (2.5)
Table 2 Clinical details of the patients with schizophrenia – mean (sd)
Age of onset: 28.4 (10.6) / Duration of illness (yrs): 10.9 (11) / Number of admissions: 8.85 (13.2) / Medication: 40% typical, 60% atypical antipsychotics
Symptom assessment To assess their present symptomatology, the schizophrenia patients were assessed on the Krawiecka Standardized Scale for Rating Chronic Psychotic Patients [ 24 ]. Symptoms present over the previous week, or signs at interview, are assigned a score on a five-point scale (where 0 = absent, 1 = mild, 2 = moderate, 3 = marked, 4 = severe). Ratings are given for four positive symptoms (coherently expressed delusions, hallucinations, incoherence and irrelevance of speech, and incongruity), two negative symptoms (poverty of speech and flattened behaviour) and three non-specific symptoms (depression, anxiety and psychomotor retardation). As a result, the maximum scores obtainable were 16 for positive symptoms, 8 for negative symptoms and 12 for non-specific symptoms. The Krawiecka scores were also used to investigate in more detail the effect specific positive symptomatology had on ToM capabilities: the scores out of four given for delusions and hallucinations were used in this analysis. All participants in this study gave written, informed consent. The task Sixty-three single-image cartoon jokes, printed on A4 cards, were generously provided by the authors of previous studies [ 25 ]. Thirty-one of these were designated 'theory of mind cartoons'. Understanding the humour in these jokes required the attribution of ignorance, false belief or deception to one of the characters and, therefore, an analysis of their mental state. The other 32 jokes were Physical ("slapstick") or behavioural in nature and consequently did not require ToM capabilities for their correct interpretation. All of the images were caption-less. Examples of each type are shown in Figure 1 . Figure 1 (a) An example of the Physical jokes subset. (b) An example of the ToM jokes subset. It was explained to the subjects that they would be shown cartoons intended to be funny. The two complete sets of cartoons were then shown to each subject in turn. The order in which they were presented was alternated so that half the participants viewed the ToM cartoons first and half viewed the ToM cartoons second. The subjects were shown each joke one by one and instructed to indicate to the observer when they believed they had understood its meaning. This response time was recorded to the nearest second using a stopwatch. The participants then gave a short explanation of their interpretation of the joke's meaning. Responses were scored 1 for a correct answer and 0 for an incorrect answer. For a theory of mind answer to be correct, appropriate mental state language had to be used. Furthermore, participants were asked to subjectively grade each cartoon image for humour and difficulty on a scale of 1–5, where 1 was not funny or very easy and 5 very funny or very difficult, respectively. Simple physical descriptions of the scenario were required for the Physical joke responses to be scored correct. Examples of acceptable responses can be viewed in Table 3 .
Table 3 Examples of acceptable and unacceptable replies to jokes featured in Fig 1
(a) Physical joke
Acceptable responses: 'The man is using the swing like a giant Newton's Cradle'; 'The children are swinging against each other, like one of those desk toys'
Unacceptable responses: 'The man is happy because the children are on swings'; 'The man wants to send him on the end flying off the swing, so he gets hurt'
(b) Theory of mind joke
Acceptable responses: 'The man thinks that someone is putting a gun in his back, but it's a guitar'; 'The couple don't realise that they are making the man think he is being robbed'
Unacceptable responses: 'The couple are waiting for a bus and the man is jumping to reach something'; 'The couple are trying to push the man over with the guitar so that they can get on the bus first'
Tests were all performed in quiet, distraction-free rooms. Statistical analysis Data analysis was performed using SPSS for Windows Version 11.0. General linear model repeated-measures ANOVA was used to determine the significance of any difference in the Physical versus ToM scores seen between the groups. General linear model ANCOVA controlling for Physical joke score was used to investigate the selectivity of any group difference in ToM capabilities. Linear regression analysis was used to relate Physical and ToM scores to Krawiecka sub-totals for positive, negative and non-specific symptoms, individual Krawiecka symptoms, medication dose and joke block presentation order. Independent two-tailed t-tests were used to compare the group score differences in the two conditions (when carrying out simple contrasts following the general linear model repeated-measures ANOVA), the average subjective ratings for humour and difficulty assigned to the stimuli by the participants, and the average response times to get the jokes. Results Patients with schizophrenia compared to controls Using general linear model repeated-measures ANOVA, highly significant main effects were found for the repeated measure (i.e. joke type: F = 112.9, p < 0.0001) and group (F = 42.6, p < 0.0001), as well as a significant interaction of group by joke type (F = 10.3, p = 0.003). Table 4 summarises this.
Table 4 Performance on Physical and ToM jokes between the study groups – mean (sd)
Schizophrenia group: Physical 23.3 (4.5) / ToM 12.7 (6.2)
Controls: Physical 28.2 (2.94) / ToM 22.6 (2.4)
Follow-up t-tests comparing individuals with schizophrenia to controls were highly significant for both the ToM condition (p < 0.0001) and the Physical condition (p < 0.001). Additionally, within both the patient and control groups, scores were significantly worse for ToM jokes than Physical jokes (p < 0.0001 for both groups). However, the significant interaction showed that the difference of 10.6 for the patient group was greater than that for the controls (5.6). Using general linear model ANCOVA, controlling for Physical joke score, a significant group difference on ToM joke scores was still evident, F = 19.5, p < 0.05. The two groups were well matched for age, IQ and sex, and any difference between them was shown to be non-significant by independent two-tailed t-test (p > 0.1). It was unnecessary, therefore, to perform regression analyses to co-vary for these factors.
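The group-by-joke-type analysis above is a standard mixed (split-plot) ANOVA and can be reproduced in outline with any statistics package. The sketch below, in Python using the pingouin library (one option among several; the study itself used SPSS), runs on invented scores whose cell means follow Table 4; the 40-subject structure matches the study but all variable names and the noise level are ours.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)

# Toy long-format data shaped like the study (scores are invented, not the
# real data): 20 patients and 20 controls, each scored on both joke types.
means = {("schizophrenia", "Physical"): 23.3, ("schizophrenia", "ToM"): 12.7,
         ("control", "Physical"): 28.2, ("control", "ToM"): 22.6}
rows = [{"subject": s, "group": g, "joke_type": j,
         "score": rng.normal(means[(g, j)], 3.0)}
        for s, g in enumerate(["schizophrenia"] * 20 + ["control"] * 20)
        for j in ("Physical", "ToM")]
df = pd.DataFrame(rows)

# Mixed ANOVA: within-subject factor = joke type, between-subject factor =
# group. The 'Interaction' row corresponds to the group-by-joke-type effect.
aov = pg.mixed_anova(data=df, dv="score", within="joke_type",
                     subject="subject", between="group")
print(aov[["Source", "F", "p-unc"]])
```

On data with these cell means, the interaction term captures exactly the pattern reported above: the ToM-versus-Physical gap is larger in the patient group (10.6 points) than in the controls (5.6 points).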
Subjective joke ratings, response times and order of joke set presentation It was found via independent t-test analysis that there was no significant difference between the schizophrenia patients' and control participants' subjective ratings for humour and difficulty, or between the average response times of correct responses (p > 0.05). Results are summarized in Table 5 .
Table 5 Subjectivity scores and response times – mean (sd); times in seconds
Controls – Physical: humour 2.3 (.48) / difficulty 1.9 (.62) / time for correct responses 5.04 (2.2)
Controls – ToM: humour 2.4 (.35) / difficulty 1.9 (.57) / time for correct responses 5.2 (2.9)
Schizophrenia group – Physical: humour 2.4 (.47) / difficulty 2.4 (.68) / time for correct responses 7.2 (2.5)
Schizophrenia group – ToM: humour 2.6 (.42) / difficulty 2.4 (.66) / time for correct responses 6.8 (2.7)
Furthermore, linear regression indicated that the order of presentation of the joke sets had no significant effect on ToM or Physical joke scores. Symptoms Correlations were run to investigate the relationships between performance on ToM and Physical jokes and different symptom scores (assessed on the Krawiecka five-point scale). These data are displayed in Table 6 .
Table 6 Krawiecka symptom scores in patients with schizophrenia and their association with performance on ToM and Physical joke conditions (n = 20 throughout; none of the correlations reach significance)
Positive symptoms: mean 5.0 (SD 3.2) / correlation with ToM -0.029 / with Physical 0.36
Negative symptoms: mean 1.6 (SD 1.8) / correlation with ToM -0.108 / with Physical 0.015
Non-specific symptoms: mean 1.6 (SD 1.8) / correlation with ToM 0.100 / with Physical 0.157
Delusions: mean 2.5 (SD 1.6) / correlation with ToM 0.153 / with Physical -0.083
Hallucinations: mean 1.9 (SD 1.7) / correlation with ToM -0.053 / with Physical 0.173
Depression: mean 0.65 (SD 0.875) / correlation with ToM 0.306 / with Physical 0.222
Incoherence of speech: mean 0.3 (SD 0.657) / correlation with ToM -0.194 / with Physical 0.097
Poverty of speech: mean 0.45 (SD 0.826) / correlation with ToM -0.186 / with Physical -0.102
As stated, performance was not significantly reduced in association with increasing severity of positive or negative symptoms as a whole, or of delusions and hallucinations specifically. The features of depression, incoherence and poverty of speech were also analysed to see if they could be having an effect on the patients' ToM and Physical joke performance, but there were no significant findings. The converted equivalent daily chlorpromazine medication doses were correlated with performance and the associations were also found to be non-significant for both cartoon conditions. Discussion Schizophrenia subjects compared to controls This study showed that individuals with schizophrenia and normal IQ had a poorer understanding of both types of jokes (and at least a reduced ability to relay their humorous intent) than matched healthy controls. This is to be expected, as schizophrenia patients have previously been reported to show poor appreciation of humour [ 3 ]. It seems unlikely that this is explained by depression, as regression analysis showed it not to be significantly related to poor ToM performance. However, the difference between the Physical and ToM joke scores was significantly greater for schizophrenia patients than controls. This implies that it is some aspect of the schizophrenia disease process that is associated with ToM impairment in the patient group, rather than a general difficulty with the appreciation of humour. If the schizophrenia group had a poorer understanding of the jokes then we would expect this to be reflected in the subjective gradings for humour and difficulty. As shown in Table 5 , the schizophrenia group actually graded the jokes non-significantly higher for both humour and difficulty.
Furthermore, despite both groups performing significantly worse in the ToM condition than in the Physical condition, they both graded the two joke sets as equally difficult. A possible explanation is that people were instructed that the cartoons were meant to be funny and so may have rated a joke as humorous even if they did not find it funny. The subjective gradings of the jokes did not necessarily require a correct understanding of the joke for a numerical value for humour and difficulty to be assigned: everyone could give numerical gradings for a joke, but not everyone could correctly describe the jokes or use the relevant mentalising language in their joke description. In terms of performance, both groups found the ToM jokes significantly more difficult than the Physical ones. The former were certainly more detailed and by their very nature comprised characters in ToM scenarios. It could be that these jokes were more difficult to understand, but there was no significant difference between the response times for the two joke types in either group. Poor verbal reporting of mentalistic terms may be an intrinsic feature of schizophrenia, and this could have resulted in this schizophrenia group's poor performance on this set of jokes. Language and thought are intrinsically linked, and the question arises as to whether disordered verbalisation in schizophrenia is a speech disturbance only or part of a disorder of thinking [ 26 ]. Likewise, the observed ToM deficit seen in this study could reflect a lack of response in mentalistic terms, related either to a specific deficit in inferential skills or to a more general inability to verbalise others' mental states [ 16 ]. As regards our patients' verbalisation skills, they all had zero or low Krawiecka scores for the symptoms of poverty of speech and incoherence/irrelevance of speech. We therefore believe that their poor performance was the result of a compromised ToM function rather than a general deficit in verbal expression. These data suggest that, as predicted, schizophrenia patients have problems in interpreting the thoughts of others, supporting the findings of previous work [ 3 ]. The closely matched demographic characteristics of the two groups suggest that the problems in 'mentalising' evident in schizophrenia are not simply attributable to the influence of factors such as age, sex and, importantly, IQ. There is, however, an alternative interpretation of these results. The individuals with schizophrenia may not be showing a domain-specific difficulty with ToM function but rather may be performing differentially more poorly than the control group on the more difficult ToM condition, such that the observed deficit could reflect a differential sensitivity to increased task difficulty. Symptom-specific findings When the obtained totals for positive Krawiecka symptoms were analysed, it was found that there was not a significant relationship between higher positive symptomatology and poor ToM performance, contrary to what had been predicted. Closer scrutiny of individual positive symptoms also revealed that neither delusions, hallucinations nor speech incoherence were significantly linked to an impaired ToM performance. Previous studies have shown paranoid delusions to be significantly related to poor ToM performance, in both first- and second-order ToM tasks and in both verbal and pictorial paradigms [ 27 , 28 , 32 ].
Interestingly, Langdon et al [ 14 ], also using a pictorial paradigm, found no evidence linking poor mentalising capabilities to positive symptoms. These findings might be attributed to several individuals who, despite scoring the maximum Krawiecka score (4) for delusions, hallucinations or both, performed similarly to controls in the ToM condition. Alternatively, the nature of our patients' delusions and hallucinations may not be of the kind specifically implicated in ToM impairment. Unfortunately, our sample size was too small to allow further investigation of patients with different types of delusion. Unlike the findings of previous research, negative features of schizophrenia were not associated with ToM capabilities. However, the mean Krawiecka scores for these features were low within the subject group, and our number of subjects was relatively small. Limitations and further work This study was limited, especially for symptom sub-group analyses, by its relatively small sample size, although we did find disease effects. With a larger sample, further symptom-specific sub-groups could be made (e.g. different types of delusions or hallucinations, formal thought disorder, different aspects of negative symptomatology, etc.). Furthermore, another control group of non-schizophrenia psychiatric patients may have been useful to explore more closely the role of diagnosis as opposed to symptoms. One of our previous studies used a psychiatric control group of patients with a psychotic affective disorder and found that positive psychotic symptomatology was linked to poor ToM performance and was not diagnosis specific [ 29 ]. This implies that ToM deficits are not necessarily specific to schizophrenia but could be related to psychoses, and specifically to the positive symptoms of delusions and hallucinations, although, as acknowledged above, we found no evidence for such an association in the present study. We believe that the Physical cartoons themselves acted as an adequate internal control. If the schizophrenia group had performed as poorly on the Physical cartoons as they did on the ToM cartoons, this could imply either a general verbalization deficit or a general cognitive impairment. Since this was not the pattern found, our results count against a domain-general interpretation of this type. Furthermore, as mentioned previously, regression analysis showed no significant effect of language impairment, as assessed using the Krawiecka symptoms of poverty of speech and incoherence of speech, on ToM joke performance. ANCOVA also showed that the group differences on the ToM jokes could not be accounted for by the group differences on the Physical jokes. This was taken as evidence for an observable and selective compromise of ToM capacity within the schizophrenia group. However, an unrelated cognitive neuropsychological task testing another cognitive domain (e.g. executive function, working memory) could have been implemented, and this could have been used to further elaborate whether the observed compromised ToM function was a specific deficit or secondary to general cognitive impairment [see, for example, refs 18 and 19, which used the Tower of London task in this way]. Further research is therefore required in ToM and schizophrenia to see whether the presence of schizophrenia itself is enough to impair ToM capabilities or whether ToM impairment is due, instead, to the presence of particular symptoms or of some general neuropsychological deficit.
A further question, which we did not address in the present study, is whether the ToM deficits observed in schizophrenia are state (related to fluctuating symptom severity) or trait in nature. Conclusion The schizophrenia group performed significantly worse in both the Physical and ToM conditions on this visual joke task than the matched control group. The performance in the ToM condition was disproportionately worse and is taken as evidence for a compromised ToM capability in the schizophrenia group, which is in keeping with previous research. In this instance, poor ToM performance could not be significantly linked to any particular symptomatology, as had been hypothesised. Competing interests The author(s) declare that they have no competing interests. Authors' contributions DM conceived and designed the study, collected neuropsychological test data and drafted the manuscript. HT helped implement the study, helped collect neuropsychological test data and co-wrote the first draft. DMac and DCO were involved in the psychiatric ratings of the patients and revisions of later drafts. PM advised on statistical analysis and helped to write the corresponding sections. SL supervised clinical aspects of the study and revised later drafts, and ECJ revised the final draft and approved this version for publication. Pre-publication history The pre-publication history for this paper can be accessed here:
544846 | Core biopsy as a tool in planning the management of invasive breast cancer | Background Core biopsy is a method of choice for the triple assessment of breast disease as it can reliably distinguish between benign and malignant tumours and between in-situ and invasive cancers, and can be useful to assess oestrogen receptor status. This study was carried out to assess the reliability of core biopsy in predicting the grade and type of cancer accurately, as obtaining this information can influence initial therapeutic decisions. Patients and methods A total of 105 patients who had invasive breast carcinoma diagnosed by core biopsy in the year 2001 and who subsequently underwent surgical management were included. The core biopsy results were compared with the final histology with the help of kappa statistics. Results A moderate level of agreement between the predicted grades and final grades was noted (kappa = 0.585). The agreement was good between the predicted and final type of tumour (kappa = 0.639). Conclusions Core biopsy as a predictor of grade and type has limited use at present. We suggest that initial clinical decisions should not be based on the results of core biopsy. | Background Core biopsy is rapidly replacing fine needle aspiration cytology (FNAC) as the procedure of choice for the triple assessment of breast problems. Where there is access to an experienced cytopathologist, FNAC can provide a rapid and cost-effective means of triage of patients who would benefit from more expensive core biopsy [ 1 ]. Core biopsy is, however, a more reliable predictor of the pathology [ 2 - 4 ] and can distinguish between benign and malignant tumours and between in-situ and invasive cancers. Collins et al have shown that the majority (83%) of core biopsies and excisional procedures demonstrate exact histological agreement [ 5 ]. Core biopsy may give a good guide to the grade and type of the cancer, and it can also be used to assess oestrogen receptor (ER) status. Core biopsy has also been found to be a good tool to assess the effect of neo-adjuvant chemotherapy on the grade of breast cancer [ 6 ]. As the range of options for the treatment of breast cancer widens, it has become increasingly important that clinicians are provided with accurate prognostic information on which to base initial therapeutic decisions. Prognostic factors for breast cancer have been extensively studied. Histological grade and type can be used to predict biological behaviour, as assessed by overall survival and local recurrence, for women with primary breast carcinoma [ 7 - 9 ]. Histological grade is one of the three prognostic factors used in calculating the Nottingham Prognostic Index [ 10 ]. The aim of this study, therefore, was to see how reliable core biopsy is in predicting the grade and the type of cancers, as that could influence the further management of breast cancer. Patients and methods All patients with invasive breast cancer diagnosed by core biopsy and treated subsequently by surgical excision in the year 2001 at a district general hospital were included in the study. Of the 105 patients whose records were studied retrospectively, 47 lesions were palpable and 58 lesions were screen detected. The core biopsies were performed under ultrasound guidance as part of triple assessment, and at least four cores were obtained from palpable lesions and six or more from screen-detected lesions with a 22 mm automated core biopsy gun. Two dedicated breast pathologists had authorised all the reports.
The age of patients ranged from 35 to 84 years, with a median of 62 years. The histology reports for the core biopsy and the final histology were extracted and compared. Patients with carcinoma in-situ diagnosed by core biopsy and patients who underwent neo-adjuvant therapy were excluded from the study. The level of agreement between core and excision biopsy was assessed using kappa statistics. Results Of the 105 patients, there was no prediction of grade in 2 patients, and in 19 a prediction of 'grade 1 or 2' or 'grade 2 or 3' was made. This left 84 in whom a clear prediction was made. On final histology, 35 (33.3%) were categorised as grade 1, 45 (42.9%) were grade 2 and 25 (23.8%) were grade 3. The predicted grades versus final grade results are detailed in Table 1 .
Table 1 Cross-tabulation showing predicted versus final grade (kappa = 0.585)
Predicted grade 1: final grade 1 = 21 / final grade 2 = 0 / final grade 3 = 0
Predicted grade 2: 5 / 35 / 9
Predicted grade 3: 1 / 6 / 7
Predicted grade 1 or 2: 7 / 1 / 1
Predicted grade 2 or 3: 0 / 2 / 8
Grade not predicted: 1 / 1 / 0
Of the 84 cores in which a clear prediction of grade was made, 63 (75%) were correct. All 21 of the predicted grade 1's were correct, 35 (71%) of the grade 2's were predicted correctly, but only 7 (50%) of the grade 3's were predicted correctly on core biopsy. Of the predicted grade 2's which were reclassified, 5 (10%) were downgraded and 9 (18%) were upgraded. Of the reclassified grade 3's, 6 (43%) were downgraded to grade 2 and 1 (7%) was downgraded to grade 1. Of the 105 patients, 101 had a prediction of type made. Of the 84 cases predicted to be ductal, 81 (96%) were correct; two were reclassified as lobular and one as mixed histology. Of the 14 predicted to be lobular, 9 (64%) were correct, four were reclassified as ductal and one as mixed (Table 2 ). Of the three cases predicted as mixed, only one was mixed on final pathology.
Table 2 Cross-tabulation of predicted versus final tumour types (kappa = 0.639)
Predicted lobular: final lobular = 9 / final ductal = 4 / final ducto-lobular = 1
Predicted ductal: 2 / 81 / 1
Predicted ducto-lobular: 1 / 1 / 1
Type uncertain: 1 / 2 / 1
In general, the level of agreement between the predicted grades and final grades was moderate (kappa = 0.585) and that between predicted and final types was slightly better (kappa = 0.639).
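As a check on the headline figure, Cohen's kappa can be recomputed by hand from the 3 x 3 block of Table 1 (the 84 cases with a clear single-grade prediction). The short Python sketch below is our own illustration, not part of the study's analysis; restricted to these 84 cases it gives kappa of about 0.586, essentially the reported 0.585.

```python
import numpy as np

# 3x3 block of Table 1: rows = predicted grade 1-3, columns = final grade 1-3,
# restricted to the 84 cases with a clear single-grade prediction.
confusion = np.array([
    [21,  0, 0],
    [ 5, 35, 9],
    [ 1,  6, 7],
])

n = confusion.sum()                    # 84 cases
p_observed = np.trace(confusion) / n   # 63/84 = 0.75 observed agreement
# Chance agreement: sum over grades of (row total * column total) / n^2
p_chance = (confusion.sum(axis=1) * confusion.sum(axis=0)).sum() / n**2
kappa = (p_observed - p_chance) / (1 - p_chance)
print(round(kappa, 3))                 # 0.586, essentially the reported 0.585
```

The gap between 75% raw agreement and a kappa of roughly 0.59 illustrates why the authors describe the agreement as only moderate: a substantial share of the raw agreement is expected by chance alone.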
Discussion Fajardo et al reported percutaneous, image-guided biopsy to be an accurate diagnostic alternative to surgical biopsy in women with mammographically detected suspicious breast lesions [ 11 ]. False negative results occur less often with image-guided core biopsy [ 12 ]. However, neither needle size [ 13 ] nor the amount of clinical material obtained [ 14 ] has been found to influence the histology results. A recent study has shown that access to expert breast pathologists can avoid the inconsistencies observed between expert and general pathologists in the category of borderline lesions [ 15 ]. Histological grade and type, tumour size and the presence or absence of axillary node metastases are well-recognised prognostic factors in breast cancer. Tumour grade, size and nodal involvement are the three factors considered in the Nottingham Prognostic Index [ 10 ]. Histological grade and type on their own can be helpful in predicting the biological behaviour of the tumour with regard to local recurrence and overall survival [ 7 - 9 ]. Preoperative grading and typing with core biopsy, therefore, can influence further management of the cancer. This is all the more important as the sensitivity and specificity of mammography for predicting grade or type are poor [ 16 ]. Greenough (1925) was the first to categorise breast tumours into three grades according to their differentiation. He also assessed the association of grades with "cure", though the term cure was not clearly defined [ 17 ]. Since then a clear association between grades and prognosis has been established [ 17 - 23 ]. The higher the grade, the greater the chance of the tumour relapsing [ 24 , 25 ]. It has also been noted that oestrogen receptor (ER) negative tumours are usually of higher grade [ 26 - 28 ]. The higher the tumour grade, the more aggressive the tumour, and nodal involvement too is directly related to the aggressiveness of the tumour [ 29 ]. All these factors suggest that the higher the grade of a tumour, the more radically it should be managed. Knowing the grade accurately preoperatively would help in planning further management of the tumour. It is possible to identify all these prognostic factors in core biopsy. A small earlier study has shown 80% sensitivity of core biopsy for correct diagnosis and a poor (50%) sensitivity for diagnosing invasive cancers among mammographically detected cancers [ 30 ]. It is not possible to comment on this here, as only invasive cancers were included in the present study. Of the two major histological types, lobular is known for its multifocality and multicentricity and its diffusely infiltrating nature [ 31 ]. It is important to correctly identify lobular carcinoma, as these tumours are often hormone responsive [ 21 ]. Our results suggest that the prediction of grade and type of breast cancer from core biopsy has only limited use at present. For the reasons stated above, the group of patients we would most want to predict accurately is those with high-grade and lobular-type tumours; our results suggest that these are precisely the patients who are most difficult to predict in practice. However, the present study, being retrospective, has its own drawbacks. A prospective study specifically aimed at kappa statistics between core biopsy and final histopathology may be able to answer this question better. Further refinements are needed in the technique of core biopsy, and these technical innovations will ultimately improve its results. Competing interest The author(s) declare that they have no competing interests. Authors' contributions AD : Original idea, planning of study, background search, data compilation and drafting the manuscript. TG : Data collection, data compilation, help with the manuscript drafting. SH : Overall supervision and guidance with the study, helped in the analysis and helped with manuscript drafting and revisions. All authors read and approved the final version. Funding source None | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC544846.xml
543456 | Long branch attraction, taxon sampling, and the earliest angiosperms: Amborella or monocots? | Background Numerous studies, using in aggregate some 28 genes, have achieved a consensus in recognizing three groups of plants, including Amborella , as comprising the basal-most grade of all other angiosperms. A major exception is the recent study by Goremykin et al. (2003; Mol. Biol. Evol. 20:1499–1505), whose analyses of 61 genes from 13 sequenced chloroplast genomes of land plants nearly always found 100% support for monocots as the deepest angiosperms relative to Amborella , Calycanthus , and eudicots. We hypothesized that this conflict reflects a misrooting of angiosperms resulting from inadequate taxon sampling, inappropriate phylogenetic methodology, and rapid evolution in the grass lineage used to represent monocots. Results We used two main approaches to test this hypothesis. First, we sequenced a large number of chloroplast genes from the monocot Acorus and added these plus previously sequenced Acorus genes to the Goremykin et al. (2003) dataset in order to explore the effects of altered monocot sampling under the same analytical conditions used in their study. With Acorus alone representing monocots, strongly supported Amborella -sister trees were obtained in all maximum likelihood and parsimony analyses, and in some distance-based analyses. Trees with both Acorus and grasses gave either a well-supported Amborella -sister topology or else a highly unlikely topology with 100% support for grasses-sister and paraphyly of monocots (i.e., Acorus sister to "dicots" rather than to grasses). Second, we reanalyzed the Goremykin et al. (2003) dataset focusing on methods designed to account for rate heterogeneity. These analyses supported an Amborella -sister hypothesis, with bootstrap support values often conflicting strongly with cognate analyses performed without allowing for rate heterogeneity. In addition, we carried out a limited set of analyses that included the chloroplast genome of Nymphaea , whose position as a basal angiosperm was also, and very recently, challenged. Conclusions These analyses show that Amborella (or Amborella plus Nymphaea ), but not monocots, is the sister group of all other angiosperms among this limited set of taxa and that the grasses-sister topology is a long-branch-attraction artifact leading to incorrect rooting of angiosperms. These results highlight the danger of having lots of characters but too few taxa and, especially, molecularly divergent taxa, a situation long recognized as potentially producing strongly misleading molecular trees. They also emphasize the importance in phylogenetic analysis of using appropriate evolutionary models. | Background A correct understanding of relationships among the "earliest" lineages of angiosperms is important if we wish to elucidate the causes and consequences of their origin, to understand patterns and tempos of character evolution in the earliest lineages, and to decipher subsequent patterns of diversification. [We sometimes use "earliest", "deepest", "basal", etc. as a convenient shorthand to refer to plants hypothesized to belong to lineages that result from the first or one of the first evolutionary branchings within angiosperm evolution. We do not mean to imply that any extant plants (e.g., Amborella ) are themselves the "earliest" angiosperms, but rather that they belong to the lineage of angiosperms that resulted from the first evolutionary split in angiosperm evolution.
When the term "sister" is used to refer to a phylogenetic placement, it refers to the sister group to the rest of the angiosperms unless otherwise specified.] A breakthrough in the seemingly intractable problem of identifying the earliest lineages of angiosperms occurred in 1999 and 2000, when each of many multigene studies identified the same three groups as the earliest branching angiosperms [ 1 - 9 ]. Most of these studies, as well as most subsequent analyses [ 10 - 17 ], have converged on the placement of the monotypic genus Amborella , a vessel-less shrub with unisexual flowers endemic to New Caledonia, as the sister-group to all living angiosperms (Fig. 1 , Table 1 ), with the next two divergences within angiosperms corresponding to the water lilies (Nymphaeaceae) and then the Austrobaileyales. This grade leads toward the well-supported remainder of the flowering plants, also known as core angiosperms [ 18 ] (Fig. 1 ). The monophyly of each of the five lineages of core angiosperms is well established, but relationships among them are unclear (Fig. 1 ). Figure 1 Current consensus hypothesis of angiosperm relationships. Tree topology is based on [42, 91] and references in Table 1. Small asterisks indicate the general phylogenetic position of the ten angiosperms (generic names shown for all but the three grasses) examined by Goremykin et al. [19]. The large asterisk indicates the addition in this study of the early-arising monocot Acorus to the Goremykin et al. [19] dataset. The height of the triangles reflects the relative number of species in eudicots (~175,000 species), monocots (~70,000), and magnoliids (~9,000) as estimated by Judd et al. [18] and Walter Judd (personal communication). The other five angiosperm groups shown contain only between 1 and ~100 species.

Table 1. Comparison of recent studies (a) that identify the sister lineages of angiosperms.

Study    No. of genes (genomes b)    No. of angiosperms    No. of nucleotides    Amborella sister (c)    Basal vs. core angiosperms (c)    Monophyly of monocots (c)
[4]      5 (c, m, n)                 97                    8,733                 + 90                    + 97                              + 99/98
[3]      5 (c, m, n)                 45                    6,564                 + 94 (d)                + 99 (d)                          + 98 (d)
[6]      3 (c, n)                    553                   4,733                 + 65 (e)                + 71 (e)                          + 95 (e)
[1]      2 (n)                       26                    2,208                 + 92/83 (f)             + 86                              + 100
[2]      2 (n)                       52                    2,606                 + 88/57 (f)             + 68                              + 87
[8]      6 (c, m, n)                 33                    8,911                 - n/a (g)               + 99                              + 100
[9]      17 (c)                      18                    14,244                + 69                    + 94                              + 53
[11]     1 (c)                       38                    4,707                 + 99                    + 100                             + 100
[14]     1 (c)                       361                   1,749                 + 86                    + 89                              + 99

(a) Not included are several other studies also supportive of Amborella -sister, but which are largely duplicative of the above [5, 7, 31], or whose structure does not match sufficiently with the structure of this table [10, 12, 13], or which have extremely limited sampling (6 taxa) within angiosperms [15]. (b) c = chloroplast; m = mitochondrial; n = nuclear. (c) Indicated relationship recovered (+) or not recovered (-); parsimony BS values shown unless otherwise specified. See Fig. 1 for definition of indicated relationships. (d) Only BS values derived from ML analysis are shown. (e) Jackknife support values. (f) Bootstrap values were inferred from separate phyA and phyC treatments; other BS values in this study were derived from concatenated phyA and phyC sequences. (g) n/a – not applicable. This study found Amborella + Nymphaea as sister to all other angiosperms (see Discussion).

In sharp contrast stands the study of Goremykin et al. [ 19 ], in which the Amborella chloroplast genome was sequenced and in which 61 protein genes shared among 13 land plants (including 10 angiosperms) were analyzed.
In 31 of 33 phylogenetic analyses this study found that " Amborella is not the basal angiosperm and not even the deepest branching among dicots" ([ 19 ] Abstract). Instead, these results indicate, with 100% BS in most analyses, that the first split within angiosperm evolution occurred between monocots and dicots. Goremykin et al. [ 19 ] imply that the earlier studies are in error with respect to the placement of Amborella because these "studies were based on a limited number of characters derived from only a few genes" and used "unmasked sequences of chloroplast genes [i.e., with all three codon positions included] with high substitution rates at their synonymous sites" (p. 1503). Thus, we are faced with a major paradox. On the one hand, many different studies, employing in aggregate 28 different genes (19 chloroplast, five mitochondrial, and four nuclear; Table 1 ), consistently and strongly place the branch leading to Amborella deeper in angiosperm evolution than the branch leading to the monocots, whereas a study that employed twice as many genes found the opposite result, also with strong support. It is critical to resolve this paradox, for the groups and issues involved are such important ones in angiosperm phylogeny. One notable difference between the two sets of studies concerns taxon sampling, which can be critical in phylogenetic analysis [ 20 - 24 ]. Even though sampling strategies in the Amborella -deep studies listed in Table 1 varied substantially, ranging from 18 to 553 species of angiosperms and from 2,208 to 14,244 nucleotides (NT) of aligned data, a commonality was their relatively broad taxon sampling. Most of these studies attempted to represent the diversity of living angiosperms by including critical species identified by prior morphological [ 25 - 28 ] and single-gene molecular analyses [ 29 - 31 ]. Even the listed study with the fewest taxa [ 9 ] was based on exemplar species, compiled by the Green Plant Phylogeny Research Coordination Group and chosen to represent most of the major putatively basal lineages suggested by a large body of previously accumulated results. In contrast, the Goremykin et al. [ 19 ] study included only 10 angiosperms. Five of these belong to a single derived group (eudicots) and three are grasses (the only monocots sampled), leaving Amborella and Calycanthus (the only sampled member of the other three lineages of core angiosperms) as the other two angiosperms sampled (Fig. 1 ). It is known that grasses have accelerated substitution rates in all three genomes [ 9 , 32 - 35 ], especially the chloroplast genome, making them a poor representative for such a large and diverse group as monocots. Relevant here is that the grasses-sister topology obtained by Goremykin et al. [ 19 ] (see their Fig. 3 , which also corresponds to our Fig. 3A ) shows one long branch, leading to grasses, connecting to another long branch, separating angiosperms from the outgroups. When the outgroups are removed and the Goremykin et al. [ 19 ] tree is taken as an unrooted network, it becomes apparent that there is no difference between their ingroup topology and those of studies that obtained the Amborella -sister rooting. In other words, given the taxonomic sampling of Goremykin et al. [ 19 ], their grasses-sister topology differs from the canonical Amborella -sister topology only with respect to where the outgroup branch attaches [ 36 ], either to grasses or to Amborella (see Discussion and Fig. 8 for an elaboration of this point). 
These observations led us to suspect that the grasses-sister topology is an artifact stemming from long branch attraction (LBA), a phenomenon known [ 37 - 39 ] to give strongly supported, but spurious results under precisely the set of conditions operative in the Goremykin et al. [ 19 ] study. These are 1) inadequate taxon sampling, 2) large amounts of data per taxon, 3) two known long branches (the grass branch and the outgroup branch) separated by short internodes, and 4) phylogenetic analyses that do not account for rate heterogeneity. The current study was undertaken to test whether the grasses-sister topology is indeed an LBA artifact. We hypothesize that, by analyzing the Goremykin et al. [ 19 ] dataset with a focus on rate heterogeneity and taxon sampling of monocots, the Amborella -sister topology will be recovered instead. In addition, we carried out a similar, but much more limited set of analyses in response to a follow-up paper by Goremykin et al. [ 40 ] that appeared while this manuscript was in the final stages of preparation and which similarly challenged the position of Nymphaea as a basal angiosperm. Results Addition of Acorus We gathered new sequence data for an additional monocot representative, Acorus , and added it to the 13 taxa, 61 gene first- and second-position alignment matrix of Goremykin et al. [ 19 ] to give a 14 taxa, 61 gene first- and second-position alignment matrix. Acorus was chosen for two reasons. First, it is well supported as the sister to all other monocots [ 41 - 43 ]. Thus, Acorus plus grasses represent monocot diversity about as well as any two groups of monocots. Second, unlike grasses, its chloroplast genome does not appear to have evolved at unusually high rates [ 9 , 44 ]. The Acorus dataset consisted of 40 protein gene sequences, 22 newly determined in this study and 18 from preexisting databases. This corresponds to 65.6% (40/61) of the genes and 71.4% (32,072/44,937) of the nucleotide characters analyzed by Goremykin et al. [ 19 ]. A number of initial analyses were conducted in parallel on the "full" Acorus matrix, containing data for all 61 genes and including gaps where data for Acorus were not available, and a "truncated" matrix, containing only those 40 genes where Acorus sequences were available. Inspection of the resulting trees revealed no topological incongruences and no significant change in bootstrap support (BS) between the full and truncated analyses [see Additional files 1 and 2 ]. The results presented hereafter for Acorus are based on the full matrix dataset. This allows us to include all available relevant data, allowing the fullest and most direct comparisons to the Goremykin et al. [ 19 ] analyses. Representative results of either adding Acorus to the Goremykin et al. [ 19 ] matrix or substituting it for grasses are shown in Fig. 2 . Using Acorus instead of grasses to represent monocots has a major effect on the results. This is especially dramatic for equal-weighted maximum parsimony (MP) analyses of both nucleotides and amino acids, where there is a shift from 100% BS for monocots-sister when only grasses are used to represent monocots (Figs. 2A and 2D ) to 100% and 93% support for Amborella -sister when Acorus is used instead (Figs. 2B and 2E ). The same topological shift is seen with maximum likelihood (ML) using equal rates across sites (cf. Figs. 2G and 2H ), although the swing in BS values is less pronounced (61% for grasses-sister vs. 100% for Amborella -sister). 
Transversion parsimony (RY-coding) of the original dataset (Fig. 2J ) gives the Amborella -sister topology, but with poor support (56%). Substituting Acorus for grasses improves the support for Amborella -sister to 100% (Fig. 2K ). Figure 2 The effect of changing sampling of monocots as a function of phylogenetic method. Analysis of the 61-gene data matrix using: Rows A-C , DNA parsimony; D-F , protein parsimony; G-I , DNA ML HKY85 with no rate categories; J-L , RY-coded DNA parsimony. The first column of trees is with the Goremykin et al. [19] taxon sampling (grasses, but not Acorus ), the second is with Acorus but not grasses, and the third is with both grasses and Acorus . All analyses used the first- and second-position matrix, either with or without the addition of Acorus as explained in Methods. Trees J-L use the same matrices, but with the nucleotides RY-coded. Figure 3 Neighbor joining analyses using different evolutionary models and/or taxon sampling. Distance matrices were calculated from the first- and second-position matrix of Goremykin et al. [19] using ( A ) the K2P model, ( B ) the ML HKY85 model with four gamma-distributed rate categories and parameters estimated from the corresponding ML analysis, and ( C ) the K2P model with Acorus added to the first- and second-position matrix as described in Methods. Inclusion of both grasses and Acorus produced two very different topologies, depending on the method used. On the one hand, standard MP, with both nucleotides (Fig. 2C ) and amino acids (Fig. 2F ), gives a grasses-sister topology in which monocots are paraphyletic with 100% BS (i.e., there is 100% support for Acorus as the sister to "dicots" to the exclusion of grasses). On the other hand, equal-rates ML (Fig. 2I ) and transversion parsimony (Fig. 2L ) give an Amborella -sister topology, with moderate (79%) to strong (98%) support, in which monocots are monophyletic with equivalent support. To make the results more directly comparable to the Goremykin et al. study [ 19 ] and to investigate the performance of various distance-based models, we tested many different neighbor joining (NJ) models. We did this also because, of all MP, ML and NJ methods initially investigated, the only approaches that failed to give the Amborella -sister topology when Acorus was substituted for grasses were the NJ methods without an ML model. When the PAUP* [ 45 ] distance is set to any of 12 settings (Mean, P, JC [ 46 ], F81 [ 47 ], TajNei [ 48 ], K2P [ 49 ], F84 [ 50 ], HKY85 [ 51 ], K3P [ 52 ], TamNei [ 53 ], GTR [ 54 , 55 ] or LogDet [ 56 , 57 ]), Amborella , Calycanthus , and Acorus form a monophyletic group with 100% BS. Importantly, however, this same grouping is obtained, with all 12 distance settings, even when grasses are included, such that, as in equal-weighted parsimony analyses (Figs. 2C and 2F ), grasses are sister to all other angiosperms and monocots are not monophyletic (Fig. 3C and analyses not shown). Finally, it should be noted that ML and NJ methods using models (see next section) that give Amborella -sister when only grasses represent monocots, continue to do so, but with higher BS, when Acorus is added, either with or without grasses [see Additional files 1 and 2 ]. Site-to-site rate heterogeneity If the lineage leading to Amborella is sister to the rest of angiosperms, as the analyses with Acorus strongly indicate, why do so many of the Goremykin et al. [ 19 ] analyses support the grasses-sister topology?
We explored this question by conducting analyses using a broad range of models and methods as applied to their data matrix (i.e., with only grasses representing monocots). We first compared the relative likelihood of the grasses-sister and Amborella -sister topologies using ML with all 56 combinations of the 14 substitution models and four rate-heterogeneity conditions specified by the MODELBLOCK script provided by MODELTEST [ 58 ]. The four rate-heterogeneity conditions are 1) equal rates across sites, 2) estimated percentage of invariant sites, 3) four gamma-distributed rate categories, and 4) a combination of invariant sites and gamma-rate categories. With equal rates across sites, the grasses-sister topology received the higher likelihood for all 14 substitution models (Table 2 ). For the least complex, Jukes-Cantor [ 46 ] model (a single substitution rate with equal base frequencies), all four rate-heterogeneity conditions preferred the grasses-sister topology. In a more complex model (F81), which uses estimated base frequencies, the Amborella -sister topology was preferred when either invariant sites or gamma rate categories were used, but not when they were used in combination. For the other 12 models, the Amborella -sister topology was preferred for all three conditions that allowed for rate heterogeneity across sites (Table 2 ).

Table 2. The 56 MODELTEST models and the grasses- or Amborella-sister topology that received the higher likelihood.

Model    equal      +I           +G           +I+G
JC       grasses    grasses      grasses      grasses
F81      grasses    Amborella    Amborella    grasses
K80      grasses    Amborella    Amborella    Amborella
HKY      grasses    Amborella    Amborella    Amborella
TrNef    grasses    Amborella    Amborella    Amborella
TrN      grasses    Amborella    Amborella    Amborella
K81      grasses    Amborella    Amborella    Amborella
K81uf    grasses    Amborella    Amborella    Amborella
TIMef    grasses    Amborella    Amborella    Amborella
TIM      grasses    Amborella    Amborella    Amborella
TVMef    grasses    Amborella    Amborella    Amborella
TVM      grasses    Amborella    Amborella    Amborella
SYM      grasses    Amborella    Amborella    Amborella
GTR      grasses    Amborella    Amborella    Amborella

The four rate-heterogeneity conditions used in these MODELTEST analyses are: 1) "equal" = equal rates across sites; 2) "+I" = estimated percentage of invariant sites; 3) "+G" = four gamma-distributed rate categories; and 4) "+I+G" = combination of invariant sites and 4 gamma-rate categories. These results held when the parameters estimated on one topology (either Amborella -sister or grasses-sister) were used to calculate the likelihood of the other topology (the topology used had only a minor effect on the values of the parameter estimates). For both topologies, the model chosen by MODELTEST using either the hierarchical likelihood ratio tests or the Akaike information criterion was the five-substitution-type transversion (TVM) + I + G model, in which the probability of going between A and G is equal to that of C and T. With this model, using parameter estimates from either topology, a heuristic search found the Amborella -sister topology with 98% BS, and the SH-test [ 59 ] showed the grasses-sister topology to be significantly worse at the 5% level (p = 0.04). These MODELTEST analyses identified site-to-site rate heterogeneity, accounted for using either gamma-distributed rates or invariant sites, as a critical analytical parameter. We therefore explored this in greater detail using one particular substitution model, the HKY85 model [ 51 ].
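As a concrete illustration of what "four gamma-distributed rate categories" means in practice, the sketch below (ours, not part of either study) computes the category rates under Yang's (1994) equal-probability discretization with the median approximation; SciPy is assumed to be available. With a small shape parameter such as the α = 0.31 estimated below, most categories evolve very slowly while one evolves much faster, which is precisely the heterogeneity that equal-rates analyses ignore.

```python
from scipy.stats import gamma  # SciPy assumed available

def discrete_gamma_rates(alpha, k=4):
    """Yang's (1994) equal-probability discrete-gamma rate categories,
    median approximation, rescaled so that the mean rate is 1."""
    # Gamma(shape=alpha, scale=1/alpha) has mean 1 by construction.
    medians = [gamma.ppf((2 * i + 1) / (2.0 * k), a=alpha, scale=1.0 / alpha)
               for i in range(k)]
    mean = sum(medians) / k
    return [r / mean for r in medians]

print(discrete_gamma_rates(0.31))   # strongly skewed: near-zero rates plus one large rate
print(discrete_gamma_rates(20.0))   # all four rates close to 1 (approaches equal rates)
```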
We chose the moderately complex and commonly used HKY85 substitution model with empirical base frequencies over the TVM model to help speed up the calculation of bootstrap replicates. An ML-HKY85 analysis with equal rates and an estimated transition:transversion (Ti/Tv) ratio of 1.485 gives the same grasses-sister topology (Fig. 4A ) as found by Goremykin et al. [ 19 ] (see Fig. 2G , which is equivalent topologically to their Fig. 3 ), albeit with low BS (61%) for grasses-sister. In contrast, a tree built using four rate categories, with the gamma shape parameter (α = 0.31) estimated from the Goremykin et al. [ 19 ] matrix and topology, gives 96% BS for Amborella -sister (Fig. 4B ). Although we present here only the commonly used four-rate-category model, a two-rate-category model gives the same qualitative results in all cases analyzed [see Additional file 3 ]. Figure 4 Maximum likelihood analyses using different evolutionary models. Trees A-C were calculated using the first- and second-position Goremykin et al. [19] matrix. Tree D was calculated using all three codon positions. All trees were built using ML with the HKY85 model and the following treatments of rate heterogeneity: A . No rate categories. B . Four gamma-distributed rate categories. C . Estimated proportion of invariant sites (no gamma rate categories). D . No rate categories (all three positions). Parameters were estimated separately for each analysis as described in Methods. To assess the stability of the topology to changes in the α parameter, we scanned the range α = [0.01–20.0], with the number of rate categories fixed at four. The same Amborella -sister topology obtained using the estimated α (0.31) was also recovered over a wide range of α values (α = 0.01–9.0; Fig. 5A ). The BS for Amborella -sister and the SH-test p-value [ 59 ] of the Amborella -sister over the grasses-sister topology both improve as α decreases to the estimated value, and continue to improve as α approaches zero (Fig. 5A ). As α approaches infinity, the rate categories approach the same value (i.e., equal rates) [ 60 ]. Accordingly, the BS and p-value curves in Fig. 5 approach the values of the equal-rates trees. Figure 5 Bootstrap support and the SH-test p-value for the Amborella -sister or grasses-sister topologies as a function of (A) the gamma distribution α parameter value or (B) the proportion of invariable sites. The left vertical line in A and right line in B indicate the rate-heterogeneity parameter estimated from the data. The right vertical line in A and left line in B indicate the boundary where the topology of the best tree transitions between Amborella -sister and grasses-sister. All analyses were performed using the 61-gene first- and second-position matrix of Goremykin et al. [19] and the ML HKY85 model with the α parameter or proportion of invariant sites indicated on the X-axis. The transition-transversion parameter was estimated for each specified rate-heterogeneity parameter. p(Δ|L_Amb - L_grasses|) signifies the SH-test p-value for the difference between the likelihood scores of the two topologies. Bootstrap searches and SH-tests were performed as described in Methods. We performed a similar analysis with the proportion of invariant sites (PInvar option in PAUP*). Using the estimated PInvar = 0.58 without gamma-distributed rate categories, we obtained the Amborella -sister topology (Fig. 4C ) with 97% BS. As with α, the Amborella -sister topology was stable over a wide range of PInvar values [0.09 ≤ PInvar ≤ 0.995 (Fig. 5B )].
The BS and the SH-test p-value for Amborella -sister improve as PInvar increases (Fig. 5B ). The SH-test for Amborella -sister is significant at the 5% level using the estimated value of PInvar and remains significant as PInvar increases. The BS for a sister-group relationship of Amborella and Calycanthus is identical (within the variance expected for BS values) with that for grasses-sister across the entire range of both α and PInvar values, while both of these BS values always equal 100 minus the BS value for Amborella -sister (Figs. 5A and 5B ). This is exactly as expected (see Discussion) if the only difference between the grasses-sister/ Amborella + Calycanthus topology and the Amborella -sister topology is where the outgroup branch roots within angiosperms. Put another way, almost all of the BS replicates were one of these two topologies. There are 20,071 (out of 30,017; 66.9%) constant sites in the Goremykin et al. [ 19 ] matrix. When these constant sites are removed (a short sketch of this filtering step appears below), the highest HKY85 ML tree (using equal rates) places Amborella -sister with 98% BS and with p = 0.03 for the SH-test relative to grasses-sister [see Additional file 4 , Fig. A]. Furthermore, NJ analysis with the equal-rate ML model also obtains Amborella -sister (with 100% BS) when constant sites are removed [see Additional file 4 , Fig. B]. This is another way of allowing the rates to increase, since the rates of the sites that are changing are no longer constrained by the constant sites. It allows the ML model to work with a more homogeneous set of rates and reduces the need for rate categories. Removing the constant sites also allows the ML model to describe the actual evolutionary process at the changing sites more accurately than imposing a proportion of invariant sites does, because the changing sites are not down-weighted by an invariant-sites term. As a consequence of the faster rate with constant sites excluded, the branch lengths of the resulting trees are ~2.6 times longer than when constant sites are included. We further explored the NJ method using ML models of evolution to compute distances, with constant sites included. We were able to precisely reproduce the grasses-sister result (Fig. 3 from Goremykin et al. [ 19 ]) with NJ and the K2P model (Fig. 3A ). NJ using a distance matrix calculated based on ML, with parameters estimated under the HKY85 model with equal rates, also gives grasses-sister with 100% BS. However, distances calculated using the ML HKY85 model with an estimated proportion of invariant sites place Amborella sister with low BS (58%) [see Additional file 5 ], while distances derived from the ML HKY85 model with four estimated gamma-distributed rate categories give Amborella -sister with stronger support (89%; Fig. 3B ). Third codon positions In order to most directly assess the Goremykin et al. [ 19 ] analyses, which used only first and second codon positions, the above analyses were restricted to first and second codon positions. In addition, however, most of the above analyses were also carried out with a dataset that includes all three codon positions. The resulting trees provide similar if not higher support for Amborella -sister than those obtained with just first and second positions. For example, using all three positions, the gamma-rates ML tree analogous to Fig. 4B gives 100% BS for Amborella -sister, and the ML distance-based NJ tree analogous to Fig. 3B gives 99% BS for Amborella -sister (trees available upon request).
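Returning to the constant-site filtering mentioned above, the sketch below (ours, purely illustrative) assumes the alignment is held as a dict mapping taxon names to equal-length sequence strings; applied to the 30,017-site matrix it would retain the 9,946 variable positions.

```python
def strip_constant_sites(alignment):
    """Drop columns in which all taxa share a single state
    (gaps and missing data ignored), keeping only variable sites."""
    taxa = list(alignment)
    n_sites = len(alignment[taxa[0]])
    keep = [i for i in range(n_sites)
            if len({alignment[t][i] for t in taxa} - set("-?")) > 1]
    return {t: "".join(alignment[t][i] for i in keep) for t in taxa}
```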
The most noteworthy shift towards stronger support involves ML analysis with equal rates, where inclusion of third positions changes the topology from grasses-sister (with 61% BS; Fig. 4A ) to Amborella -sister (and with 100% support; Fig. 4D ). We also conducted a few analyses of third positions only (again using the set of taxa analyzed by Goremykin et al. [ 19 ]). These too recovered Amborella -sister, with 100% BS using ML with either equal rates or gamma-distributed rates [see Additional file 6 ]. Individual gene analyses By taking rate heterogeneity into account or improving taxon sampling, we have shown that the concatenated-genes dataset supports the Amborella -sister hypothesis, strongly so in most analyses. To explore the effects of phylogenetic methods and taxon sampling on individual gene analyses, we analyzed each of the 61 genes in the Goremykin et al. [ 19 ] dataset individually (Fig. 6 ). These much smaller subsets of data are, as expected, more sensitive than the concatenated dataset to the model of DNA evolution, taxon sampling, and inclusion/exclusion of third positions. Without appropriately taking these factors into account, some genes give topologies that conflict with the current consensus view of plant phylogeny. With all three positions and using ML with four gamma-distributed rate categories, the highest-likelihood tree in 29 of 61 genes is the Amborella -sister topology and only five genes give grasses-sister (Fig. 6A ). The highest-scoring trees for the remaining genes (most of which are short) place a wide variety of groups as sister, in nearly all cases with low BS (data not shown). Bootstrap support values and the number of trees having Amborella sister increase with gene length (Fig. 6A ). When MP is used on the same datasets, the opposite pattern is observed. Here, the grasses are sister in 27 of 61 trees, whereas Amborella is sister with only 12 genes (Fig. 6B ). Excluding third positions results in the same trend in terms of MP versus ML, but the support values are much lower and the number of highly unlikely topologies is much higher (see Additional file 7 ). Figure 6 Support for Amborella -sister or grasses-sister from the 61 chloroplast genes analyzed individually. A . ML HKY85 analyses with four gamma-distributed rate categories. Parameter estimates were calculated individually for each gene in a manner analogous to that performed on the concatenated dataset. B . MP analyses. All three codon positions are included in all analyses shown in both figures. Solid red lines correspond to Amborella -sister and dashed blue lines to grasses-sister topologies. The single gene trees also illustrate the effect of taxon sampling. When Acorus is added and all three positions are used in ML analyses with four rate categories, none of the gene trees find monocots sister, whereas exactly half of the 40 genes put Amborella sister [see Additional file 8 , top figure]. When the third position is excluded, 12 genes put Amborella sister and BS levels drop significantly, while still no genes put monocots sister [see Additional file 8 , bottom figure]. Very similar results are obtained when the grasses are removed [see Additional file 9 ]. In contrast to the parsimony results without Acorus (where grasses-sister is the favored topology; Fig. 6B ), when Acorus is added and parsimony is used (with all three positions), only two genes put monocots sister (and both with low BS, 13 and 34%), whereas 11 of 40 genes put Amborella sister [see Additional file 10 , top figure].
With Acorus added and grasses removed, 21 genes place Amborella sister and 1 places Acorus sister [see Additional file 10 , bottom figure]. Addition of Nymphaea While this manuscript was in the final stages of preparation, the chloroplast genome sequence of Nymphaea alba became available (released to the EMBL database on July 13, 2004). This sequence was generated as part of a very recent study, also by Goremykin et al. [ 40 ], in which it was added, as the only new sequence, to the same data matrix as analyzed in their earlier study [ 19 ] and subjected to a similar set of phylogenetic analyses. Under these conditions, the grasses-sister topology was again recovered (and with 100% support) in nearly all analyses, with Nymphaea and Amborella recovered as sister taxa (also with 100% support). In their abstract, Goremykin et al. [ 40 ] present these findings as supporting their prior conclusion [ 19 ] that monocots are sister to the rest of angiosperms. However, their Discussion presents a more nuanced treatment than before, concluding that "we may be some ways from being confident of identifying the most basal angiosperms. Clearly the sequencing of genomes for more closely related outgroups and putatively basal angiosperms will be important for overcoming potential problems of model misspecification and long-branch attraction." We carried out a limited set of analyses of the 14-taxa Goremykin et al. [ 40 ] data matrix. We did so because of time constraints and because it became immediately clear from our relatively few analyses with Nymphaea that our main results and conclusions were entirely unchanged by its inclusion/exclusion. Using the Goremykin et al. [ 40 ] methods, we also recovered the same grasses-sister trees they reported (data not shown). However, when using the analytical conditions described in the preceding sections, we never found grasses-sister (Fig. 7 ). Instead, grasses were grouped with the other core angiosperms with strong BS (86–100%). Interestingly, contrary to most published studies (see Background and Table 1 ), Amborella alone did not emerge as sister to all other angiosperms in any of these analyses. Most commonly (Figs. 7B,7C,7D ), Amborella and Nymphaea together comprised the sister lineage to other angiosperms (with 66–100% BS), whereas an equal-rates ML analysis found Nymphaea deepest (albeit with low BS, 47%) and Amborella next deepest (Fig. 7A ). Figure 7 Inclusion of Nymphaea in analyses that account for rate heterogeneity. A . ML HKY85 with no rate categories (cf. Fig. 4A). B . ML HKY85 with four gamma-distributed rate categories (cf. Fig. 4B). C . ML with estimated proportion of invariant sites (no gamma rate categories; cf. Fig. 4C). D . NJ using an ML HKY85 model with four gamma-distributed rate categories to calculate distances (cf. Fig. 3B). All analyses used first- and second-positions only. Discussion The grasses-sister topology is an LBA artifact That long branch attraction can be a serious problem in phylogenetic inference has long been known to the systematics community, ever since this phenomenon was first explored by Felsenstein [ 37 ]. Felsenstein described conditions of unequal evolutionary rates under which phylogenetic inference will result not only in an incorrect topology, but will converge asymptotically to the wrong phylogeny with increasing confidence as more data are added, ultimately producing 100% support for the wrong tree (hence being positively misleading).
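This positively misleading behaviour is easy to reproduce in a toy simulation of the "Felsenstein zone". The sketch below is our own illustration, not part of either study; the four-taxon tree and branch lengths are arbitrary, chosen only to place two long branches on opposite sides of a short internal branch.

```python
import math
import random

def jc_evolve(state, t, rng):
    """Exact Jukes-Cantor: change with probability 3/4 (1 - e^(-4t/3)),
    uniformly to one of the three other bases."""
    if rng.random() < 0.75 * (1.0 - math.exp(-4.0 * t / 3.0)):
        return rng.choice([b for b in "ACGT" if b != state])
    return state

def felsenstein_zone_counts(n_sites=20000, t_long=1.0, t_short=0.05, seed=1):
    """Simulate sites on the true tree ((A,B),(C,D)), with long branches
    leading to A and C, and tally the parsimony-informative patterns."""
    rng = random.Random(seed)
    counts = {"AB|CD (true)": 0, "AC|BD (LBA)": 0, "AD|BC": 0}
    for _ in range(n_sites):
        root = rng.choice("ACGT")
        left = jc_evolve(root, t_short, rng)    # ancestor of A and B
        right = jc_evolve(root, t_short, rng)   # ancestor of C and D
        a, b = jc_evolve(left, t_long, rng), jc_evolve(left, t_short, rng)
        c, d = jc_evolve(right, t_long, rng), jc_evolve(right, t_short, rng)
        if a == b != c == d:      # i.e., a == b, c == d, b != c
            counts["AB|CD (true)"] += 1
        elif a == c != b == d:
            counts["AC|BD (LBA)"] += 1
        elif a == d != b == c:
            counts["AD|BC"] += 1
    return counts

print(felsenstein_zone_counts())
```

With these settings the spurious AC|BD site pattern outnumbers the true AB|CD signal, so parsimony joins the two long branches, and its support for the wrong grouping only grows as more sites are added.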
Hendy and Penny [ 39 ] showed that this phenomenon can occur for parsimony even under equal evolutionary rates if taxa are insufficiently sampled along a branch, while Lockhart et al. [ 61 ] showed that an ML equal-rates model can incorrectly join long branches when there is rate heterogeneity across sites. In the case of DNA sequence data, due to the limited number of character states, taxa with the greatest sequence divergence are expected to be "attracted" to each other by chance alone if long and short branches are sufficiently different in length. With large amounts of data, this can result in spurious, yet strongly supported, relationships. We used two complementary approaches to test the hypothesis that the grasses-sister topology favored in the study of Goremykin et al. [ 19 ] is caused by spurious attraction of the long branches leading to angiosperms and to grasses. Both approaches were designed to make the most direct comparisons possible to their dataset and phylogenetic methodology. First, and most importantly, we found that – even in the absence of corrections for rate heterogeneity – addition of just one more monocot to their dataset produced trees strongly supportive of 1) the Amborella -sister topology and 2) the idea that the grasses-sister topology is a consequence of LBA causing a misrooting of angiosperms. When the monocot Acorus was directly substituted for grasses, strong support for Amborella -sister was obtained (Fig. 2 ). This even occurred under analytical conditions that give strong support for grasses-sister when Acorus is not included. When Acorus and grasses were both included, two alternative, seemingly radically different topologies were obtained. Reconciliation of these topologies gets to the heart of the phylogenetic issues at hand. For as Fig. 8 shows, these two topologies are actually entirely congruent with respect to relationships among the various angiosperms, differing only in where the outgroup branch attaches within angiosperms [ 62 ], i.e., on the branches leading either to Amborella or to grasses (also see Fig. 5 and its treatment in Results). Figure 8 Competing hypotheses for the rooting of angiosperms showing the same underlying angiosperm topology when outgroups are excluded. A . Rooting within monocots (Mono), on the branch between grasses and all other angiosperms (see Fig. 2C, whose BS values are shown here, and also Fig. 2F; also see Goremykin et al. [19]). B . Unrooted network, with arrow showing alternative rootings as in A and C. C . Canonical rooting on the branch between Amborella and the rest of angiosperms (see Fig. 2I, whose BS values are shown here, and also Fig. 2L). We emphasize that 100% BS was obtained for Amborella -sister and for monocot monophyly (compared to 79% and 78% in C) using ML methods that allow for site-to-site rate heterogeneity (e.g., Additional files 1–3). The Amborella -sister topology is in agreement with the many diverse phylogenetic studies summarized in Table 1 and in Background, except for that of Goremykin et al. [ 19 ]. With Acorus included (Figs. 2I and 2L ), it also shows monocots as monophyletic, consistent with a large body of evidence [ 7 , 35 , 41 - 43 , 63 ], and depicts faster chloroplast DNA evolution on the monocot lineage leading to grasses than in the Acorus lineage, also consistent with a substantial body of evidence (e.g. [ 9 , 44 ]). Conversely, the grasses-sister topology (Figs. 2C and 2F ) is consistent only with the Goremykin et al. 
[ 19 ] results, fails to recover monophyly of monocots [has them either paraphyletic (Figs. 2C and 2F ) or even polyphyletic (Fig. 3C ), and always with 100% support], and fails to portray the known rapid evolution of chloroplast DNA in the lineages leading to grasses. All this leads us to conclude that the grasses-sister topology is almost certainly an artifact, most likely due to LBA between the long branches leading to grasses and to angiosperms. Second, we reanalyzed the same dataset used by Goremykin et al. [ 19 ] and found that methods that account for rate heterogeneity across sites [ 61 , 64 - 67 ] put Amborella sister, usually with high BS (Figs. 2J , 3B , 4B , 4C , and 5 ; also see most Additional files). This was true for all 14 MODELTEST substitution models (Table 2 ) except for the simplest, JC model. When rates vary between sites, as with the chloroplast dataset under consideration, it is usually appropriate to model the evolutionary process to reflect this. The evolutionary models explored here point to LBA as the cause of the controversial grasses-sister topology and demonstrate that even with conservative corrections for rate heterogeneity, Amborella moves to the sister position within angiosperms (e.g., Figs. 5A and 5B ). In summary, our two principal approaches for reassessing the results and analyses of Goremykin et al. [ 19 ] lead to what we regard as compelling evidence for two major conclusions. First, Amborella , not grasses, is the sister angiosperm among this set of taxa. Second, any tendency for angiosperms to root on grasses is an LBA artifact stemming from the confluence of limited taxon sampling, rapid evolution in grasses, a long branch between the outgroups and angiosperms, and rate heterogeneity across sites. Furthermore, we point out that while our manuscript was nearly finished, two independent papers appeared [ 68 , 69 ] that also challenged Goremykin et al. [ 19 ] and reached conclusions similar to ours. Both studies are complementary to ours, because instead of taking the Goremykin et al. [ 19 ] 61-gene chloroplast dataset as the starting point, as we did, they used a 3-gene dataset (the same two chloroplast genes and one nuclear gene) plus the Goremykin et al. [ 19 ] set of taxa as the starting point for a variety of taxon-sampling experiments. In addition, an important forthcoming study [ 70 ], which added five new chloroplast genome sequences to the dataset of Goremykin et al. [ 19 ], found "strong support" for the Amborella -sister topology. That four entirely independent studies, using a variety of taxon sets, character sets, and analytical approaches, all lead to such similar results and conclusions makes it all the more likely that the grasses-sister topology is indeed a phylogenetic artifact. Is Amborella or Amborella + Nymphaeaceae sister to the rest of angiosperms? Although our results reject grasses/monocots as the sister to all other angiosperms, support for Amborella as the first branch of angiosperm evolution must necessarily be qualified given the very limited sampling of whole chloroplast genomes (besides Amborella , only monocots, Calycanthus , and eudicots; see Fig. 1 ). There is still uncertainty as to the exact placement of Amborella relative to the other two deepest lineages of angiosperms, especially Nymphaeaceae [ 8 , 9 ], although the overall weight of published evidence currently favors Amborella as the deepest angiosperm (see [ 10 , 12 ] and references in Table 1 ).
This uncertainty is heightened by our limited analyses that included Nymphaea and used methods that account for rate heterogeneity. These analyses never recovered an Amborella -sister topology. Instead, they most commonly found a sister clade comprising both Amborella and Nymphaea (Figs. 7B,7C,7D ), or even found Nymphaea alone to be the sister-most angiosperm (Fig. 7A ). Likewise, in the one analysis reported by Goremykin et al. [ 40 ] in which Amborella and Nymphaea were found sister to the other angiosperms, these two taxa clustered as sisters rather than forming a basal grade. Clearly, then, the question of which group is sister to the rest of extant angiosperms should be regarded as unsettled and in need of further exploration, using much more data (such as whole chloroplast genomes from a large number of diverse angiosperms, as well as more mitochondrial and/or nuclear data) and better analytical methodologies as they become available. At the same time, we must face up to two serious limitations arising from extinction. First, Amborella trichopoda is the only known species in the entire Amborellaceae/Amborellales, i.e., it is the only taxon available whose DNA can be used to represent a lineage of ca. 150 million years in age arising at or near the base of angiosperms. Second, the stem branch leading to angiosperms is long both in substitutions and in years [ 9 , 62 ] (also approaching 150 million years) and thus represents a long-branch attractor, with the potential to spuriously attract other branches besides that leading to grasses. LBA between outgroup and ingroups is particularly insidious because, as illustrated in Fig. 2 (C and F vs. I and L), it tends to mask the long nature of the ingroup branches. Amborella does not show any evidence of having a long branch in published analyses with more extensive taxon sampling. It is nonetheless difficult to rule out (but see [ 10 ]) the possibility that Amborella may be only near-sister among angiosperms (e.g., part of a Nymphaeaceae/ Amborella clade that itself is the earliest branch of angiosperms, as suggested by Barkman et al. [ 8 ] and some of our analyses), with its generally sister position representing only a slight topological distortion (nearest neighbor interchange) caused by attraction to the long outgroup branch. For that matter, we point out (also see [ 71 ]) that the long branch leading to angiosperms also makes it difficult to rule out the possibility that the monophyletic-gymnosperm topologies recovered by multigene analyses (e.g., [ 35 , 72 - 74 ]) might result from LBA between angiosperms and the outgroup branch leading to seed plants. General implications Many of our analyses, including all but one of the 61-gene concatenate analyses shown, included only first and second codon positions. This is because Goremykin et al. [ 19 ] chose to exclude third codon positions from their analyses, and because we wanted to make the most direct comparisons possible to their analyses. Third positions were excluded because most of the 61 chloroplast genes were claimed to be "very divergent" at synonymous sites (Ks for most genes between Pinus and angiosperms was between 0.50 and 1.50 substitutions/site), which they felt could lead to "misleading" phylogenetic results.
However, because our analyses with all three positions or only third positions gave such similar results to those using only first and second positions, we believe that for this particular dataset third positions are not contributing "excessive" homoplasy and leading to spurious affiliations. This conclusion is consistent with a considerable body of literature dealing with the phylogenetic utility of third positions in organellar genes [ 75 - 80 ], while simulations have shown that "saturated" data can be very reliable, provided that taxon sampling is sufficiently high [ 21 , 24 ]. Caution is nonetheless well advised in situations involving relatively sparse taxon sampling (some of which may be unavoidable, i.e., where extinction has been significant) and/or greater divergences than in this study. For example, chloroplast third positions are problematic in analyses across all of algal/plant evolution (e.g., [ 81 ]), and even appear to be problematic at the relatively shallow level of seed plant phylogeny [ 35 , 73 , 82 ]. Our findings, and those of others [ 68 - 70 , 83 ], highlight the potential danger of phylogenetic analyses that employ lots of genes, but too few and/or the wrong taxa. Adequate taxon sampling is in a sense even more important here than with single- or few-gene trees, because of the potential for even subtle systematic bias in a particular lineage's evolution to generate strongly supported misleading trees. Equally, if not more importantly, our results emphasize the crucial importance of using phylogenetic methods that best model the underlying molecular evolutionary processes, especially by accounting for site-to-site rate variation. Methods Sequencing chloroplast genes from Acorus We used long PCR to generate full-length or partial sequences from Acorus gramineus Soland. (a voucher specimen is deposited at the IND herbarium) for 22 of the 61 chloroplast genes analyzed by Goremykin et al. [ 19 ]. Long PCRs were conducted using the AccuTaq™ LA DNA Polymerase (Sigma, Atlanta, GA, USA), following instructions provided by the manufacturer. Initially, sets of primers designed by Graham and Olmstead [ 9 ], which cover a large portion of the chloroplast genome ( psbC-D and psbE-J operons; from rpl2 to 3'- rps12 gene), as well as the primers described in [ 84 - 87 ] for the rbcL , atpB , trnL-F , and trnE-D regions, respectively, were used for amplifications and/or sequencing. For the most part, however, based on the initial sequences, a number of sequencing primers were designed and used for chromosome walking with long PCR products. Primer sequences are available upon request from SS. PCR products were separated by electrophoresis using 0.8% agarose gels, visualized with ethidium bromide, and cleaned using Qiagen columns (Valencia, CA, USA). Cleaned products were then directly sequenced using the BigDye™ Terminator cycle sequencing kit (PE Applied Biosystems, Foster City, CA, USA) on an ABI 3100 DNA automated sequencer (PE Applied Biosystems, Foster City, CA, USA). Sequence data were edited and assembled using Sequencher™ 4.1 (Gene Codes Corporation, Ann Arbor, MI, USA). The Acorus sequences for these 22 chloroplast genes ( atpA , atpE , clpP , cemA , lhbA , 3'- petB , petD , petG , petL , psaB , psaI , rpl20 , rpoA , rpoB , rpoC1 , rpoC2 , rps2 , rps14 , rps18 , rps19 , ycf3 , ycf4 ) are deposited in GenBank (accession numbers AY757810-AY757831).
These were combined for phylogenetic analyses with full-length or partial Acorus sequences already available in GenBank for 18 other chloroplast genes [AF123843 ( psbB , psbT , psbN , psbH ), AF123771 ( rps7 , 3'- rps12 ), AF123828 ( psbE , psbF , psbL ), AF123813 ( psbD , psbC ), AF123785 ( rpl2 ), D28866 ( rbcL ), X84107 ( rps4 ), U96631 ( psbA ), AB040155 ( matK ), AF197616 ( atpB ), and AJ344261 ( psaA )]. The 40 Acorus genes used here come from two closely related species – A. calamus (14 genes) and A. gramineus (26 genes) – and correspond to 65.6% (40/61) of the genes and 71.4% (32,072/44,937) of the nucleotide characters analyzed by Goremykin et al. [ 19 ]. Alignment For all first and second codon position analyses, the data matrix provided by V. Goremykin was used without modification. For analyses that included Acorus , the Acorus genes were individually aligned with the individually extracted gene alignments from the Goremykin et al. [ 19 ] dataset using CLUSTALW [ 88 ], and the resulting gene alignments were concatenated to regenerate a matrix identical to the original except for the extra row containing Acorus . Using the same procedure, Acorus was also added to the amino acid matrix provided by V. Goremykin. The relevant 61 chloroplast genes of Nymphaea [ 40 ] were likewise added to both alignments. We also constructed a new matrix consisting of all three codon positions by extracting genes from 13 sequenced chloroplast genomes of land plants (GenBank numbers: AP002983, AP000423, AJ271079, Z00044, AJ400848, AJ506156, AJ428413, X86563, AB042240, X15901, D17510, AP004638, X04465), aligning them, and hand-editing apparent mistakes. The first and second position version of this matrix was nearly identical to the Goremykin et al. [ 19 ] matrix, except for a few minor differences (the overall length was slightly shorter due to removal of terminal extensions that were either created by single-taxon indels or consisted of nonhomologous extending genes). All phylogenetic trees resulting from this first and second position matrix and the Goremykin et al. [ 19 ] matrix were identical in topology and nearly identical in BS values. All alignments used in this study are available in Nexus format upon request of DWR. Phylogenetic analyses Phylogenetic analyses were performed in PAUP* 4.0b10 [ 45 ]. Unless specified, all nucleotide-based trees were built using only first- and second-codon positions. For ML analyses, parameters were initially estimated using an equal-weighted parsimony tree. An ML tree was then built, and parameters were re-estimated using this tree if it differed from the parsimony tree. This iteration was continued until the last two topologies converged (the final ML topology was almost always equal to the one obtained with the ML parameters estimated from the parsimony topology). For all ML analyses we also calculated an NJ tree using distances calculated from the ML model being tested. For DNA and protein parsimony the default PAUP* 4.0b10 [ 45 ] step matrices were used. Bootstrap support [ 89 ] was estimated with 100 replicates using parameters estimated from the final topology. Thus the methodology cited for a particular tree refers to the model used for the bootstrap replicates. For parsimony and ML searches the heuristic algorithm was used with simple and as-is stepwise addition, respectively; tree bisection-reconnection swapping; and no limit on the number of trees saved in memory. Unless specified, the default PAUP* settings were used in all analyses.
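The per-gene alignment-and-concatenation step described above is straightforward to script. The sketch below is our own illustration, not the authors' actual pipeline; it assumes per-gene FASTA alignments and Biopython, and pads any taxon missing from a gene with gaps, as was done for the genes lacking Acorus data.

```python
from Bio import AlignIO  # Biopython assumed available

def concatenate_gene_alignments(gene_files, taxa):
    """Build a supermatrix from per-gene alignment files, inserting
    a run of gaps for any taxon absent from a given gene."""
    parts = {t: [] for t in taxa}
    for path in gene_files:
        aln = AlignIO.read(path, "fasta")
        length = aln.get_alignment_length()
        seqs = {rec.id: str(rec.seq) for rec in aln}
        for t in taxa:
            parts[t].append(seqs.get(t, "-" * length))
    return {t: "".join(p) for t, p in parts.items()}
```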
An automated script (available upon request from DWR) was used to run the analyses. Detailed log files and trees of each analysis were saved and are available upon request from DWR. Most analyses were performed on two 3 GHz Linux machines. Treetool [ 90 ] was used for viewing and printing trees. The Shimodaira-Hasegawa (SH) test [ 59 ] was performed using the "lscores" command of PAUP* with the options SHTest = RELL and BootReps = 10000. The ML parameters being tested were estimated on each topology to calculate its own log likelihood, except where otherwise specified. Abbreviations BS – bootstrap support; LBA – long branch attraction; ML – maximum likelihood; MP – maximum parsimony; NJ – neighbor joining; Ti/Tv – transition:transversion; NT – nucleotides; PInvar – proportion of invariant sites Authors' contributions SS generated the new sequences (from Acorus ) used in this study and conceived and drafted the first and last figures. DWR carried out the phylogenetic analyses and made all other figures. All three authors contributed to the overall design of the study, drafted parts of the manuscript, and read and approved the final manuscript. Supplementary Material Additional File 1 Trees from truncated matrix with Acorus . These first- and second-position trees show that the results are essentially the same when positions that have Acorus data missing are removed. The first row using the ML HKY85 model is with four rate categories and parameters estimated as described in Methods. The third row uses the ML model parameters calculated as in the first row to calculate a distance matrix that was used for NJ analyses. For comparison, the corresponding bootstrap values for Amborella sister to the angiosperms in the full matrix, going across each row, are 1. (99 vs. 100, 100 vs. 100), 2. (NA but same topology and similar BS, 100 vs. 100), 3. (86 vs. 88, 84 vs. 90). Click here for file Additional File 2 Trees from truncated RY-coded matrix with Acorus included . These are the same analyses as in Additional file 1 except that the DNA is RY-coded. For comparison, the corresponding BS values for the Amborella -sister relationship in the full matrix, along each row, are: 1. (100 vs. 100, 100 vs. 100), 2. (98 vs. 100, 100 vs. 100), 3. (100 vs. 100, 100 vs. 100). Click here for file Additional File 3 Comparison of gamma-distributed rates with two versus four rate categories . This figure shows that using two rate categories gives essentially the same results as using four rate categories with this dataset. The dataset is the first- and second-position, 61-gene matrix with grasses, Acorus , or both used to represent monocots. The ML HKY85 model was used and parameters were estimated as described in Methods. Click here for file Additional File 4 Trees when constant sites are removed from the first- and second-position matrix of Goremykin et al. [19] . A . ML HKY85 and equal rates. B . NJ with distances calculated using an ML HKY85 model and equal rates. Click here for file Additional File 5 NJ analysis using ML proportion-of-invariant-sites distances . Distances were calculated using the ML HKY85 model, the estimated proportion of invariant sites, and the first- and second-position matrix of Goremykin et al. [19]. Click here for file Additional File 6 ML trees using third positions only . A . HKY85 model with equal rates. B . HKY85 model with four gamma-distributed rates.
Click here for file Additional File 7 Sister group to the rest of the angiosperms found in individual gene analyses using first- and second-position data without Acorus. Top, ML HKY85 with four gamma-distributed rates. Bottom, parsimony analysis. Click here for file Additional File 8 Sister group to the rest of the angiosperms found in individual gene analyses using the ML HKY85 model with four gamma-distributed rates and Acorus added. Top, all three positions. Bottom, first and second positions. Click here for file Additional File 9 Sister group to the rest of the angiosperms found in individual gene analyses using the ML HKY85 model with four gamma-distributed rates, with Acorus added and grasses removed. Top, all three positions. Bottom, first and second positions. Click here for file Additional File 10 Sister group to the rest of the angiosperms found in individual gene analyses using parsimony on all three positions. Top, Acorus added. Bottom, Acorus added and grasses excluded. Click here for file | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC543456.xml |
545999 | Evaluation of methods for predicting the topology of β-barrel outer membrane proteins and a consensus prediction method | Background Prediction of the transmembrane strands and topology of β-barrel outer membrane proteins is of interest in current bioinformatics research. Several methods have been applied so far for this task, utilizing different algorithmic techniques, and a number of freely available predictors exist. The methods can be broadly divided into those based on Hidden Markov Models (HMMs), those based on Neural Networks (NNs) and those based on Support Vector Machines (SVMs). In this work, we compare the different available methods for topology prediction of β-barrel outer membrane proteins. We evaluate their performance on a non-redundant dataset of 20 β-barrel outer membrane proteins of gram-negative bacteria, with structures known at atomic resolution. We also describe, for the first time, an effective way to combine the individual predictors, at will, into a single consensus prediction method. Results We assess the statistical significance of the performance of each prediction scheme and conclude that the Hidden Markov Model based methods HMM-B2TMR, ProfTMB and PRED-TMBB are currently the best predictors, according to per-residue accuracy, the segments overlap measure (SOV) and the total number of proteins with correctly predicted topologies in the test set. Furthermore, we show that the available predictors perform better when only the transmembrane β-barrel domains are used for prediction, rather than the precursor full-length sequences, even though the HMM-based predictors are not influenced significantly. The consensus prediction method performs significantly better than each individual available predictor, increasing the accuracy by up to 4% in SOV and up to 15% in correctly predicted topologies. Conclusions The consensus prediction method described in this work optimizes the predicted topology with a dynamic programming algorithm and is implemented in a web-based application freely available to non-commercial users at . | Background To date, transmembrane proteins are divided into two structural classes: the α-helical membrane proteins and the β-barrel membrane proteins. Proteins of the α-helical membrane class have their membrane-spanning regions formed by hydrophobic helices consisting of 15–35 residues [ 1 ]. These are the typical membrane proteins, found in the cell membranes of eukaryotic cells and in bacterial inner membranes [ 1 ]. β-barrel membrane proteins, on the other hand, have their transmembrane segments formed by antiparallel β-strands, spanning the membrane in the form of a β-barrel [ 2 , 3 ]. These proteins are found solely in the outer membrane of gram-negative bacteria, and presumably in the outer membranes of mitochondria and chloroplasts, a fact perhaps explained by the endosymbiotic theory [ 4 - 7 ]. Transmembrane protein topology prediction has been pursued for many years in bioinformatics, mostly focusing on the α-helical membrane proteins. One reason is that α-helical transmembrane segments are more easily predicted by computational methods, owing to the easily detectable pattern of highly hydrophobic consecutive residues and the applicability of simple rules such as the "positive-inside rule" [ 8 ]. Another reason is the relative abundance of α-helical membrane proteins compared to β-barrel membrane proteins.
This discrepancy is present both in the total number of membrane proteins in complete genomes and in the datasets of experimentally solved 3-dimensional structures. Currently, the number of structures of outer membrane proteins known at atomic resolution is rising rapidly, owing to improvements in cloning and crystallization techniques [ 9 ]. This, fortunately, has given rise to an increasing number of prediction methods and freely available web-predictors. The first computational methods deployed for the prediction of transmembrane strands were based on hydrophobicity analyses, using sliding windows along the sequence in order to capture the alternating patterns of hydrophobic-hydrophilic residues of the transmembrane strands [ 10 , 11 ] (a minimal sketch of this idea appears at the end of this passage). Other approaches included the construction of special empirical rules using amino-acid propensities and prior knowledge of the structural nature of the proteins [ 12 , 13 ], and the development of Neural Network-based predictors to predict the location of the Cα's with respect to the membrane [ 14 ]. The major disadvantages of these older methods were the limited training sets on which they were based, and their reduced capability to capture the structural features of bacterial outer membrane proteins, especially for sequences with no similarity to the proteins of the training set. During the last few years, more refined methods using larger datasets for training have appeared. These methods include refined Neural Networks (NNs) [ 15 , 16 ], Hidden Markov Models (HMMs) [ 17 - 21 ] and Support Vector Machine (SVM) predictors [ 22 ]. Some of these methods are based solely on the amino acid sequence, while others also use as input evolutionary information derived from multiple alignments. Other popular methods, such as the method of Wimley [ 23 ] and BOMP [ 24 ], do not explicitly report the transmembrane strands but are instead oriented towards genome-scale discrimination of β-barrel membrane proteins. In this work, we evaluate the performance of the prediction methods available to date. Using a non-redundant dataset of 20 outer membrane β-barrel proteins with structures known at atomic resolution, we compare each predictor in terms of per-residue accuracy (using the fraction of correctly predicted residues and the Matthews correlation coefficient [ 25 ]) and strand prediction accuracy, measured by the segments overlap measure (SOV) [ 26 ]. We also report the number of correctly predicted topologies (i.e. when both the localization of the strands and the orientation of the loops are predicted correctly). We conclude that the recently developed Hidden Markov Model methods HMM-B2TMR [ 17 ], ProfTMB [ 21 ] and PRED-TMBB [ 20 ] perform significantly better than the other available methods. We also conclude that prediction accuracy is affected significantly if the full sequences (including long N-terminal and C-terminal tails and the signal peptide) are used as input, rather than only the transmembrane β-barrel domain. This effect is more pronounced for the NN and SVM predictors, since the regular grammar of the HMMs successfully maps the model topology to the proteins' modular nature. Finally, we developed a consensus prediction method, using as input the individual predictions of each algorithm, and we conclusively show that this approach performs better, in all measures of accuracy, than each individual prediction method separately.
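To make the early sliding-window idea mentioned above concrete, here is a minimal illustrative sketch. It assumes the standard Kyte-Doolittle hydropathy scale; the original methods [ 10 , 11 ] used their own scales and window settings, so this is an interpretation of the general approach, not a reimplementation of any cited method.

```python
# Kyte-Doolittle hydropathy values (standard published scale).
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def alternating_hydrophobicity(seq, w=10):
    """Score each window by the gap between the mean hydropathy of
    alternating positions: a large gap suggests an amphipathic strand
    with one face toward the lipid and one toward the pore."""
    scores = []
    for i in range(len(seq) - w + 1):
        win = [KD.get(aa, 0.0) for aa in seq[i:i + w]]
        even = sum(win[0::2]) / len(win[0::2])
        odd = sum(win[1::2]) / len(win[1::2])
        scores.append(abs(even - odd))
    return scores
```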
Although consensus methods have proven to be more accurate in the past, in the case of α-helical membrane proteins [ 27 - 29 ] and also for secondary structure prediction of globular, water-soluble proteins [ 30 - 32 ], this is the first time that such a method has been applied to β-barrel outer membrane proteins. Results and discussion The results obtained from each individual algorithm on the test set of the 20 proteins are summarized in Table 1 . It is obvious that all of the methods perform worse on the measures of per-segment accuracy in the case of full-length sequences. On the other hand, for measures of per-residue accuracy, most of the methods perform better in the case of full-length sequences, a fact already mentioned in [ 21 ]. This is explained by the fact that when using full-length sequences, more non-transmembrane residues are predicted correctly, thus increasing the fraction of correctly predicted residues and the correlation coefficient. Furthermore, when ranking the different methods, PRED-TMBBposterior performs best, followed by HMM-B2TMR and ProfTMB. PRED-TMBBnbest performs slightly worse than PRED-TMBBposterior in terms of per-residue accuracy and SOV, but is inferior to the other top-scoring HMMs in terms of correctly predicted topologies. In order to assess the statistical significance of these observations and draw further safe conclusions, we rely on a statistical analysis of the results obtained. The MANOVA test (Table 2A ) yields a highly significant p-value for both independent variables (p < 10 -4 ). This means that there is truly a difference in the vector of the five measured attributes across the different methods and the type of sequence used as input. By including in the model the interaction term between the two factors, we get a marginally insignificant p-value (p = 0.0619), indicating that some of the methods behave differently with input sequences of different type. Examining each one of the attributes independently (Table 3A ), we observe that the type of the input sequence does not significantly influence the measures of per-residue accuracy (correctly predicted residues and the correlation coefficient, p-values equal to 0.9444 and 0.0224 respectively) but strongly influences the per-segment measures such as SOV (p < 10 -4 ), correctly predicted topologies (p = 0.0193) and correct barrel size (p = 0.0001). In all cases, the type of the method is a highly significant factor (p < 10 -4 ), reflecting the fact that there are true differences in the performance of the methods. The interaction term in the ANOVA is significant only for the SOV measure (p = 0.0272), and marginally significant for the correctly predicted residues (p = 0.402). However, these results do not provide us with a clue as to which method performs better (or worse) than the others; they state only that one or more methods depart significantly from the mean. The ranking of the methods has to be concluded by inspecting Table 1 . In order to discover the statistically significant differences between the methods, we proceeded by grouping the methods according to the type of algorithm they utilize. In this way, we grouped together the HMM-based methods (HMM-B2TMR, PRED-TMBB, ProfTMB and BETA-TM) and the NN- and SVM-based methods (TMBETA-NET, B2TMPRED, PSI-PRED and TBBPred). Thus, instead of having a factor with 8 levels describing the methods, we now have a factor with 2 levels (HMM and not HMM).
The MANOVA test (Table 2B ) once again yields a statistically significant result for both factors (p < 10 -4 ) and for the interaction term (p = 0.0025), giving a clear indication that the visually observed superiority of the HMM-based methods has a statistically significant justification. The statistically significant interaction of the two factors furthermore suggests that the decrease in some of the measured attributes when submitting full-length sequences is smaller (if anything) for HMM-based methods than for the NN- and SVM-based ones. In fact, considering the three top-scoring HMM methods, we observe that the per-segment measures are not influenced by the type of the input sequence, whereas the per-residue measures are significantly increased with full-length sequences as input, reflecting the fact that more non-transmembrane residues are correctly predicted, as already noticed in [ 21 ]. Considering each one of the measures of accuracy with ANOVA (Table 3B ), the type of the method is a highly significant factor in all of the tests, and the type of the input sequence is highly significant for the per-segment measures of accuracy. The interaction term is highly significant for SOV (p = 0.0011) and marginally insignificant for correctly predicted residues (p = 0.052). These findings suggest that the HMM-based predictors perform better, on average, than the NN- and SVM-based methods in almost all of the measured attributes. We should mention here that the difference between HMM and NN/SVM methods is larger for the measures of per-segment accuracy than for per-residue accuracy. Even the simplest and least accurate HMM-based method, BETA-TM, which uses single-sequence information, compares favorably to the refined NN/SVM methods that use profiles derived from multiple alignments. As a matter of fact, only B2TMPRED, which uses a dynamic programming algorithm to refine the prediction, predicts the correct topology and/or the barrel size of the proteins more accurately than BETA-TM, but it still cannot reach the accuracy of the other HMM-based methods. Furthermore, the HMM-based methods are not influenced significantly by whether full-length sequences or just the β-barrel domains are submitted for prediction. Interestingly, the NN/SVM methods often falsely predict the signal peptide sequences as transmembrane strands in the precursors, whereas the HMMs do not. This observation is consistent with the theory regarding the nature of HMM- and NN-based methods: the regular grammar of the HMMs can capture more effectively the temporal variability of the protein sequence and successfully map the proteins' modular nature to a mathematically sound model. Therefore, it is not surprising that the best available predictors for α-helical membrane protein topology are also those based on HMMs [ 33 ]. On the other hand, NN methods are more capable of capturing long-range correlations along the sequence. This results in the correct identification of an isolated strand, but since the β-barrel proteins follow strict structural rules, the modular nature of the barrels is captured more effectively by HMMs. NNs may often falsely predict isolated transmembrane strands in non-barrel domains, or predict strands with a non-plausible number of residues, or even barrels with an odd number of strands.
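As a concrete illustration of these grammar constraints, a post-processing filter might reject implausible barrels as sketched below. This is an illustrative sketch only; the length bounds are assumptions chosen for the example, not values taken from any of the evaluated methods.

```python
def plausible_barrel(strands, min_len=6, max_len=22):
    """Check a predicted topology against basic beta-barrel grammar:
    an even, non-zero number of strands, each of plausible length.
    `strands` is a list of (start, end) residue indices, inclusive."""
    if len(strands) == 0 or len(strands) % 2 != 0:
        return False
    return all(min_len <= end - start + 1 <= max_len
               for start, end in strands)
```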
From a structural perspective, it is also of great interest that the repetitive structural domains of β-barrels are the β-hairpins, whereas the counterparts in α-helical membrane proteins are the isolated hydrophobic helices, often connected by loop regions of arbitrary length. These observations will have a significant impact not only on isolated predictions for one or a few proteins, but also on predictions for sequences arising from genome projects, where one expects to have the precursor sequences. Thus, predictions on such sequences will be more reliable when obtained from HMM predictors rather than NN- and SVM-based ones. However, the performance of even the best currently available predictors is not as good as the predictions obtained for α-helical membrane proteins [ 33 ]. This is somewhat expected, and it has a simple interpretation considering the grammatical structure of the short amphipathic transmembrane β-strands as opposed to the longer and highly hydrophobic transmembrane α-helices [ 1 ]. One issue that it was not possible to investigate statistically is the use of evolutionary information in the form of profiles derived from alignments. It is well known that the inclusion of information arising from alignments significantly increases the performance of secondary structure prediction algorithms [ 34 ]. This was exploited in the past in the case of α-helical membrane protein prediction [ 35 , 36 ], and it was investigated thoroughly in a recent work [ 37 ]. However, for β-barrel membrane proteins there is no such clear answer. The authors of the methods that use evolutionary information [ 15 , 17 , 21 ] justified their choice by showing that the inclusion of alignments as input improves the performance of their models by up to 18%. Furthermore, we showed here that NN-based methods using multiple alignments (B2TMPRED) perform significantly better than similar methods relying on single sequences (TMBETA-NET). However, the top-scoring HMM method, PRED-TMBB, performs comparably to the other HMM methods that use evolutionary information, even though it relies on single-sequence information. This finding may be explained by the choice of the training scheme for PRED-TMBB, since it is the only method trained according to the CML criterion, and with manually curated annotations for the transmembrane strands. However, it raises an important question as to whether the prediction accuracy could be improved further by using evolutionary information. Future studies in this area will reveal whether improvements in prediction could arise from combining evolutionary information with an appropriate choice of training scheme, or whether we have eventually reached a limit of the predictive ability for β-barrel membrane proteins and depend only on the advent of more representative three-dimensional structures. When comparing the performance of individual methods, one has to keep in mind several important aspects of the comparison. On the one hand, the limited number of β-barrel membrane proteins known at atomic resolution resulted in a test set that includes some (or all) of the proteins used for training each individual method, or a close homologue. This does not imply that the comparison of the methods is biased (regarding the ranking), but rather that the absolute values of the measures of accuracy may be influenced. Thus, when it comes to newly solved structures, we may expect somewhat lower rates in the measures of accuracy for all methods examined.
On the other hand, when comparing the results of the individual methods as they appear in the original publications, we observe some discrepancies. These arise mainly because, when reporting the results of a prediction method, the authors usually report the measures of accuracy obtained in the jackknife test (leave-one-out cross-validation). Furthermore, the authors of the individual methods report measures of accuracy obtained using different types of input sequences, and they compare against different annotations of the observed transmembrane strands. For instance, some authors report measures of accuracy obtained from the β-barrel domain of the proteins, others from the sequences deposited in PDB, and others also report results from precursor sequences. As for the observed transmembrane strands used for comparisons, most of the authors used the annotations for the strands found in PDB, and only PRED-TMBB used manually annotated segments that better resemble the part of the strand inserted into the lipid bilayer. This last observation partly explains the better prediction accuracy obtained by PRED-TMBB, mainly in the measures of per-residue accuracy (correctly predicted residues and correlation coefficient). One important result of this study is the development of the consensus prediction method for predicting the transmembrane strands of β-barrel membrane proteins. Even though consensus prediction has proved to be a valuable strategy for improving the prediction of α-helical membrane proteins [ 27 , 29 , 38 ], no such effort has been undertaken before for transmembrane β-barrels. A consensus of all of the available methods does not improve the prediction accuracy compared to the top-scoring methods, indicating that there is a considerable amount of noise in the individual predictions, originating mainly from the low-scoring methods. However, when using the three top-scoring HMM methods (PRED-TMBB, HMM-B2TMR and ProfTMB) along with one or more of the best-performing NN/SVM methods (B2TMPRED, TBBPred-SVM, TBBPred-NN and TBBPred-Combined), we get impressive results, outperforming the top-scoring methods in almost all measured attributes. As is evident from Tables 1 and 4 , the consensus prediction method performs better than each one of the individual predictors. The improvement ranges from around 1% for the correctly predicted residues and the correlation coefficient, up to 4% for SOV and 15% for the correctly predicted topologies. We should note that these particular results were achieved using PRED-TMBBposterior, ProfTMB, HMM-B2TMR, B2TMPRED and TBBPred-NN, but other combinations of the aforementioned methods perform similarly (Table 4 ). This large improvement in the measures of per-segment accuracy is an important finding of this study. However, in the web-based implementation of the consensus prediction method, we allow the user to choose at will the methods that will be used for the final prediction. This was decided for several reasons. Firstly, for a newly found protein, we might have larger variations in the predictions, and we could not be sure whether the choice of different algorithms would give better results or not. Secondly, the different predictors do not share the same functionality and availability.
For instance, some predictors respond by e-mail (B2TMPRED, PSIPRED), most of the others by http (PRED-TMBB, BETA-TM, TMBETA-NET etc.), and others may be downloaded and run locally (ProfTMB, PSIPRED), whereas one of the top-scoring methods (HMM-B2TMR) is available only as a commercial demo requiring a registration procedure. These facts forced us not to provide a fully automated server (instead, we require the user to cut and paste the predictions), but also to allow flexibility in the chosen methods and to let the user decide which methods to use. For this reason, we also give users the opportunity to provide, if they wish, custom predictions. This way, a user may choose to use another method that may appear in the future or, alternatively, to use manually edited predictions. Conclusions We have evaluated the currently available methods for predicting the topology of β-barrel outer membrane proteins, using a non-redundant dataset of 20 proteins with structures known at atomic resolution. By using multivariate and univariate analysis of variance, we conclude that the HMM-based methods HMM-B2TMR, ProfTMB and PRED-TMBB perform significantly better than the other (mostly NN-based) methods, in terms of both per-residue and per-segment measures of accuracy. We also found a significant decrease in the performance of the methods when full-length sequences are submitted for prediction instead of just the β-barrel domain. However, the HMM-based methods are more robust, as they were found to be largely unaffected by the type of the input sequence. This is an important finding that has to be taken into account, not only in the case of predictions for single proteins, but especially in the case of predictions performed on precursor sequences arising from genome projects. Finally, we have combined the individual predictors into a consensus prediction method that performs significantly better than even the top-scoring individual predictor. This is the first time a consensus prediction method has been applied to the prediction of the transmembrane strands of β-barrel outer membrane proteins. The consensus method is freely available for non-commercial users at , where the user may choose which of the individual predictors to include in order to obtain the final prediction. Methods Data sets The test set that we used was compiled mainly with consideration of the SCOP database classification [ 39 ]. In particular, all PDB codes from SCOP that belong to the fold "Transmembrane beta-barrels" were selected, and the corresponding structures were obtained from the Protein Data Bank (PDB) [ 40 ]. For variants of the same protein, only one solved structure was kept, and multiple chains were removed. The structure of the β-barrel domain of the autotransporter NalP of N. meningitidis [ 41 ] was also included; it is not present in the SCOP classification although it is clearly a β-barrel membrane protein. The sequences were subjected to a redundancy check, removing chains with a sequence identity above a certain threshold. Two sequences were considered similar if they demonstrated an identity above 70% in a pairwise alignment over a length of more than 80 residues. For the pairwise local alignment BlastP [ 42 ] was used with default parameters, and similar sequences were removed by implementing Algorithm 2 from [ 43 ]. The remaining 20 outer membrane proteins constitute our test set (Table 5 ). The structures of TolC [ 44 ] and alpha-hemolysin [ 45 ] were not included in the test set.
TolC forms a trimeric β-barrel, where each monomer contributes 4 β-strands to the 12-strand barrel. Alpha-hemolysin of S. aureus is active as a transmembrane heptamer, whose transmembrane domain is a 14-strand antiparallel β-barrel in which two strands are contributed by each monomer. Neither structure is included in the fold "transmembrane beta-barrels" of the SCOP database. In summary, the test set (Table 5 ) includes proteins functioning as monomers, dimers or trimers, with a number of transmembrane β-strands ranging from 8 to 22, and is representative of the known functions of outer membrane proteins to date. In order to investigate the effect of the full sequence on the different predictors, we conducted two sets of measurements. First, all proteins were submitted to the predictors in their full length. We chose not to remove the signal peptides, considering that completely unannotated sequences, mostly originating from genome projects, are most likely to be submitted to predictive algorithms in their precursor form. Of the 20 sequences constituting our set, 4 belonging to the family of TonB-dependent receptors, namely FhuA [ 46 ], FepA [ 47 ], FecA [ 48 ] and BtuB [ 49 ], possess a long (150–250 residues) N-terminal domain that acts as a plug, closing the large pore of the barrel. This domain is present in all four of the structures deposited in PDB. One of the proteins of our dataset, OmpA, possesses a long 158-residue C-terminal domain extending into the periplasmic space, which is absent from the crystallographically solved structure [ 50 ]. Finally, the secreted NalP protein possesses a very long N-terminal domain, 815 residues in length, that is transported to the extracellular space through the pore formed by the autotransporter β-barrel pore-forming domain, of which we have the crystallographically solved structure [ 41 ]. For the second set of measurements, we extracted only the transmembrane β-barrel domain of each protein in our dataset. In the case of the long N- or C-terminal domains mentioned above, we retained only the last or first 12 residues, respectively. Even in the structures known at atomic resolution, there is no straightforward way to determine the transmembrane segments precisely, since the lipid bilayer itself is not contained in the crystal structures. This is the case for both α-helical and β-barrel membrane proteins. There are, however, many experimentally and theoretically derived sources of evidence suggesting that the outer membrane lipid bilayer in gram-negative bacteria is generally thinner than the bilayer of the inner membrane or that of a typical eukaryotic cell membrane. Thus, it is believed that the outer membrane possesses an average thickness of around 25–30 Å, a fact mainly explained by its lipid composition, average hydrophobicity and asymmetry [ 51 ]. The annotations for the β-strands contained in the PDB entries are inadequate, since there are strands that clearly extend far away from the bilayer. Some approaches have been used in the past to locate the precise boundaries of the bilayer, but they require visual inspection of the structures and human intervention [ 23 , 52 ]. In order to have objective and reproducible results, we used the annotations for the transmembrane segments deposited in the Protein Data Bank of Transmembrane Proteins (PDB_TM) [ 53 ].
The boundaries of the lipid bilayer in PDB_TM have been computed with a geometrical algorithm performing calculations on the 3-dimensional coordinates of the proteins, in a fully automated procedure. Prediction methods The freely available web-predictors evaluated in this work, along with the corresponding URLs, are listed in Table 6 . OM_Topo_predict is the first Neural Network-based method trained to predict the location of the Cα's with respect to the membrane [ 14 ]. Initially, the method was trained on a dataset of seven bacterial porins known at atomic resolution, but it was later retrained in order to include some newly solved (non-porin) structures. B2TMPRED is a Neural Network-based predictor that uses as input evolutionary information derived from profiles generated by PSI-BLAST [ 15 ]. The method was trained on a non-redundant dataset of 11 outer membrane proteins, and it uses a dynamic programming post-processing step to locate the transmembrane strands [ 54 , 55 ]. HMM-B2TMR is a profile-based HMM method that was initially trained on a non-redundant set of 12 outer membrane proteins [ 17 ] and later (current version) on a larger dataset of 15 outer membrane proteins [ 55 ]. This method also uses as input profiles derived from PSI-BLAST. It was trained according to a modified version of the Baum-Welch algorithm for HMMs with labeled sequences [ 56 ], in order to incorporate the profile as input instead of the raw sequence, whereas for decoding it utilizes the posterior decoding method, with an additional post-processing step involving the same dynamic programming algorithm used in B2TMPRED [ 55 ]. We should note that HMM-B2TMR is the only method that is currently available only as a commercial demo, requiring a registration procedure. PRED-TMBB is an HMM-based method developed by our team [ 19 ]. Initially, it was trained on a set of 14 outer membrane proteins [ 19 ] and later on a training set of 16 proteins [ 20 ]. It is the only HMM method trained according to the Conditional Maximum Likelihood (CML) criterion for labeled sequences, and it uses single sequences as input. The prediction is performed either by the Viterbi or the N-best algorithm [ 57 ], or "a-posteriori" with the aid of a dynamic programming algorithm used to locate both the transmembrane strands and the loops. In this work, we chose to use both N-best and "a-posteriori" decoding, and we treat them as different predictors. This was done because the two alternative decoding algorithms follow entirely different philosophies and in some cases yield different results. BETA-TM is a simple HMM method trained on 11 non-homologous proteins using the standard Baum-Welch algorithm [ 58 ]. It also operates in single-sequence mode, and the decoding is performed with the standard Viterbi algorithm. ProfTMB is the latest addition to the family of profile-based Hidden Markov Models [ 21 ]. It also uses as input evolutionary information derived from multiple alignments created by PSI-BLAST. It is trained using the modified Baum-Welch algorithm for labeled sequences, whereas the decoding is performed using the Viterbi algorithm. Its main difference from HMM-B2TMR, PRED-TMBB, BETA-TM and other previously published, but not publicly available, HMM predictors [ 18 ] is the fact that it uses different parameters (emission probabilities) for strands having their N-terminus towards the periplasmic space, and others for strands having their N-terminus towards the extracellular space.
Furthermore, it uses different states for modeling inside loops (periplasmic turns) of different lengths. TMBETA-NET is a Neural Network-based predictor using single-sequence information as input [ 16 ]. This method uses a set of empirical rules to refine its prediction, in order to eliminate non-plausible predictions for TM strands (for instance, a strand with 3 residues). TBBpred is a predictor combining both NNs and SVMs [ 22 ]. The NN-based module also uses evolutionary information derived from multiple alignments, whereas the SVM predictor uses various physicochemical parameters. The user may choose one of the methods, or combine them both. The authors of the method have shown that combining the predictions obtained by NNs and SVMs significantly improves the prediction accuracy [ 22 ]. For the evaluation of the performance and for the consensus prediction, we chose to use all three options, in order to investigate which one performs better. Finally, we evaluated the prediction of the transmembrane strands obtained from a top-scoring general-purpose secondary structure prediction algorithm. This was done in order to investigate systematic differences in the prediction of the transmembrane β-strands, but also because experimentalists routinely use such algorithms in deciphering assumed topologies for newly discovered β-barrel membrane proteins [ 59 - 61 ]. For this purpose, we chose PSI-PRED, a method based on Neural Networks that uses multiple alignments derived from PSI-BLAST for the prediction and has been shown to perform among the top-scoring methods for secondary structure prediction [ 62 ]. Other, equally successful methods, such as PHD [ 63 ], perform similarly but are not considered here. Measures of accuracy Several measures were used for assessing the accuracy of the prediction algorithms. For the transmembrane strand predictions we report the well-known SOV (segments overlap measure), which is considered to be the most reliable measure for evaluating the performance of secondary structure prediction methods [ 26 ]. We also report the total number of correctly predicted topologies ( TOP ), i.e. when both the strands' localization and the loops' orientation have been predicted correctly, and the correctly predicted barrel size ( BS ), i.e. the same as the correctly predicted topologies but allowing for one strand mismatch [ 20 ]. As measures of per-residue accuracy, we report both the total fraction of correctly predicted residues ( Q β ) in a two-state model (transmembrane versus non-transmembrane) and the well-known Matthews Correlation Coefficient ( C β ) [ 25 ]. Statistical analysis The measures of accuracy mentioned above are the dependent variables that we wish to compare. We treat each prediction on each protein as an observation, and as independent variables we use the type of the submitted sequence ( TYPE ), which could be either the full precursor sequence or the transmembrane barrel domain only (a factor with two categories), and the individual predictive method ( METHOD ), which has 11 categories. Furthermore, we grouped the methods into those based on a Hidden Markov Model and those that were not. This factor ( HMM ) was evaluated later, in order to assess the impact of the type of the prediction method. The formal way to assess the overall statistical significance is to perform a two-way multivariate analysis of variance (MANOVA) [ 64 ], as sketched below.
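As an illustration only (the original analysis was not necessarily run this way), such a two-way MANOVA with an interaction term, followed by a univariate two-way ANOVA, can be set up in Python with statsmodels. The file name and column names (`Qb`, `Cb`, `SOV`, `TOP`, `BS`, `method`, `seqtype`) are hypothetical stand-ins for the per-prediction observations described above.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Hypothetical table: one row per (protein, method, sequence type),
# with the five measured attributes as columns.
df = pd.read_csv("accuracy_per_prediction.csv")

# Two-way MANOVA on the vector of measured attributes, including the
# METHOD x TYPE interaction; mv_test() reports Wilks' lambda among others.
mv = MANOVA.from_formula(
    "Qb + Cb + SOV + TOP + BS ~ C(method) * C(seqtype)", data=df)
print(mv.mv_test())

# Follow-up univariate two-way ANOVA for a single attribute (here SOV).
fit = ols("SOV ~ C(method) * C(seqtype)", data=df).fit()
print(anova_lm(fit, typ=2))
```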
In evaluating statistical significance we used Wilks' lambda, but the results are not sensitive to this choice, since other similar measures (Hotelling-Lawley trace, Roy's largest root, etc.) gave similar results. A statistically significant result for both factors ( TYPE , METHOD ) will imply that the vector of the measured attributes varies significantly across the levels of these factors. We also included in the models the interaction term between the two factors ( TYPE*METHOD or TYPE*HMM ). This was necessary in order to investigate the potential differences of the dependent variables in the various combinations of the independent variables. For instance, a significant interaction of TYPE with HMM will indicate that the effect of the input sequence differs between the two types of methods. Having obtained a significant result from the MANOVA test, we could use a standard 2-way analysis of variance (ANOVA) for each of the dependent variables, in order to confirm which of the measured attributes varies significantly across the two factors. In the ANOVA models, we also included the interaction terms. In all cases, results with a p-value less than 0.05 were declared statistically significant. For the ANOVA and MANOVA models, we report the test statistic and the corresponding p-value for the fitted models (including the interaction term). The consensus prediction method In order to produce a combined prediction, we have two alternatives: to use some kind of ensemble Neural Network or, alternatively, to summarize the individual predictions using a consensus method. Ensemble networks show a number of significant advantages over consensus methods [ 65 , 66 ], but they suffer from the limitation that each individual predictor has to be available every time a request is made. Since we are dealing with web-based predictors, and we do not have the option of having local copies of each predictor installed, this could be disastrous; thus, the consensus method is the only available and reliable solution. Suppose we have the amino acid sequence of a protein of length L , denoted by x = x 1 , x 2 , ..., x L , and for each residue i we have the prediction of the j th predictor ( j = 1, 2, ..., 7), where d j ( i ) = 1 if predictor j assigns residue i to a transmembrane strand, and d j ( i ) = 0 otherwise. Thus, we can define a per-residue score S i by averaging over the independent contributions of each predictor, S i = (1/7) Σ j d j ( i ). This way, we can obtain a consensus prediction score for the whole sequence, S = S 1 , S 2 , ..., S L . This score is capable of yielding inconsistent predictions, such as a strand with 3 residues for example. For this reason it is then submitted to a dynamic programming algorithm that locates the transmembrane strands precisely. The algorithm is essentially the same as that used by [ 19 ], with the major difference being that it considers only two states (transmembrane vs. non-transmembrane). It optimizes the predicted topology according to predefined parameters imposed by the observed structures. We also force the algorithm to consider as valid only topologies with an even number of transmembrane strands, as observed in the crystallographically solved structures. Having determined the number of transmembrane strands, the final choice of the topology is based on the lengths of the predicted loops.
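A minimal Python sketch of the scoring and strand-placement steps is given below. It is an illustration only, not the authors' actual dynamic programming implementation: a greedy thresholding pass stands in for the DP step, and the threshold and length bounds are assumed values chosen for the example.

```python
import numpy as np

def consensus_scores(predictions):
    """predictions: list of 0/1 arrays (one per predictor, length L),
    where 1 marks a residue predicted as transmembrane.
    Returns S, the per-residue average over the predictors."""
    return np.mean(np.asarray(predictions, dtype=float), axis=0)

def strands_from_scores(S, threshold=0.5, min_len=6, max_len=22):
    """Greedy stand-in for the dynamic programming step: threshold the
    scores, keep runs of plausible length, and drop the weakest strand
    if an odd number remains (barrels have an even strand count)."""
    strands, start = [], None
    for i, s in enumerate(np.append(S, 0.0)):  # sentinel closes last run
        if s >= threshold and start is None:
            start = i
        elif s < threshold and start is not None:
            if min_len <= i - start <= max_len:
                strands.append((start, i - 1))
            start = None
    if len(strands) % 2 == 1:                  # enforce even strand count
        weakest = min(strands, key=lambda t: S[t[0]:t[1] + 1].mean())
        strands.remove(weakest)
    return strands
```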
As already mentioned, in the 3-dimensional structures the periplasmic loops are significantly shorter than the extracellular ones; thus, by comparing the total loop lengths of the two alternative topologies, we decide on the final orientation of the protein. Authors' contributions PGB conceived of the study, performed the collection and analysis of the data and drafted the manuscript; TDL participated in data collection, implemented the consensus algorithm and designed the web interface; SJH supervised and coordinated the whole project. All authors have read and accepted the final manuscript. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC545999.xml |
516443 | Malaria and anemia prevention in pregnant women of rural Burkina Faso | Background Pregnant women are a major risk group for malaria in endemic areas. Little information exists on the compliance of pregnant women with malaria and anaemia preventive drug regimens in the rural areas of sub-Saharan Africa (SSA). In this study, we collected information on malaria and anaemia prevention behaviour in pregnant women of rural Burkina Faso. Methods Cross-sectional qualitative and quantitative survey among 225 women of eight villages in rural northwestern Burkina Faso. Four of the villages had a health centre offering antenatal care (ANC) services, while the other four were more than five kilometres away from a health centre. Results Overall ANC coverage (at least one visit) was 71% (95% in health centre villages vs 50% in remote villages). Malaria and anaemia were considered the biggest problems during pregnancy in this community. Women using ANC were quite satisfied with the quality of services, and compliance with malaria and anaemia prevention regimens (chloroquine and iron/folic acid) was high in this population. Knowledge of the benefit of bed nets and good nutrition was less prominent. Distance, lack of money and ignorance were the main reasons for women not attending ANC services. Conclusions There is an urgent need to improve access of rural SSA women to ANC services, either by increasing the number of rural health centres or by establishing functioning outreach services. Moreover, alternative malaria and anaemia prevention programmes, such as intermittent preventive treatment with effective antimalarials and the distribution of insecticide-treated bed nets, need to be implemented on a large scale. | Background Each year, between 75,000 and 200,000 infant deaths are attributed to malaria infection in pregnancy globally, and between 200,000 and 500,000 pregnant women develop severe anaemia as a result of malaria in Sub-Saharan Africa (SSA) [ 1 ]. Pregnant women are at increased risk for malaria compared to non-pregnant women, and the severity of the clinical manifestations in the woman and her foetus depends on the level of pre-pregnancy immunity [ 2 ]. While in areas of low malaria endemicity all pregnant women are equally susceptible to the consequences of malaria infection, in areas of high endemicity women appear to be most susceptible during their first pregnancy [ 3 ]. However, more recent publications point to significant susceptibility in primigravidae as well as in multigravidae [ 4 ]. Pregnancies in women living in malaria endemic regions are associated with a high frequency and density of P. falciparum parasitaemia, with high rates of maternal morbidity including fever and severe anaemia, with abortion and stillbirth, and with high rates of placental malaria and consequently low birth weight in newborns, caused by both prematurity and intrauterine growth retardation [ 1 , 3 , 5 ]. In order to reduce malaria-related ill health, regular chemoprophylaxis has been recommended for all pregnant women living in malaria-endemic areas [ 6 ]. Most African countries, including Burkina Faso, include routine chemoprophylaxis in their official antenatal care programmes. However, in practice, coverage of chemoprophylaxis is limited due to low accessibility and quality of antenatal care (ANC) services as well as problems with compliance [ 7 ].
It has been estimated from a survey in four African countries that less than 20% of women use a prophylactic regimen close to the WHO recommendations [ 8 ]. While insecticide-treated bed nets and curtains (ITN) have been shown to substantially reduce malaria morbidity and mortality in children, initial trials on the efficacy of ITNs for malaria prevention in pregnancy produced conflicting evidence and were classified as inconclusive by the Cochrane Collaboration [ 3 , 9 ]. However, with the publication of the findings from a major ITN trial in a holoendemic area of western Kenya, the use of ITN during pregnancy is gaining credibility [ 10 ]. In Burkina Faso, the official policy for malaria and anaemia prevention in pregnant women comprises chloroquine and iron/folic acid supplementation, respectively. However, little information exists on ANC coverage and on compliance with preventive regimens in the rural areas of Burkina Faso. We assessed the coverage of antenatal care and investigated the knowledge and adoption of preventive practices with respect to malaria and anaemia in users and non-users of antenatal care. Methods Study area The study took place in the rural part of the research zone of the Centre de Recherche en Santé de Nouna (CRSN) in Nouna Health District, northwestern Burkina Faso. The CRSN research zone consists of Nouna town and 41 of the surrounding villages, with a total population of around 60,000 inhabitants. The Nouna area is a dry orchard savanna, populated mainly by subsistence farmers of different ethnic groups. Malaria is holoendemic but highly seasonal in this part of West Africa [ 11 ]. Most malaria transmission takes place during or shortly after the rainy season, which usually lasts from June until October [ 11 ]. Modern health services in the CRSN research zone are limited to four village-based health centres and the district hospital in Nouna town. As a consequence, malaria control is based mainly on home treatment with chloroquine, the official first-line treatment drug in Burkina Faso. Roughly half of all households in the area possess at least one untreated bed net, and since 2000 ITN have been distributed as part of an ongoing trial in young children [ 12 , 13 ]. The official policy for malaria and anaemia prevention during pregnancy in Burkina Faso consists of a curative dose of 1,500 mg chloroquine over three days, followed by a weekly dose of 300 mg chloroquine and a combined daily dose of 200 mg iron and 0.25 mg folic acid. This regimen should be followed from the first ANC visit until six weeks after delivery. Since 2002 it has been the official policy in Burkina Faso to offer ANC services free of charge. This includes the ANC card, physical examination and counselling, and malaria/anaemia prevention drugs. However, urine examination, gloves, and drugs for other concomitant diseases still have to be paid for. Urine examination and gloves usually cost 150–200 F CFA (1 Euro = 650 F CFA). Study design The study was cross-sectional and descriptive in nature, using both qualitative and quantitative methods for data collection. The study was implemented in May and June 2003. The research team comprised the investigators and six trained local interviewers familiar with the commonly spoken local languages and French. The questionnaires were pre-tested before administration. The study took place in eight of the 41 villages of the CRSN study area.
Villages were selected as follows: in the first stage, the four villages of the CRSN study area with an existing health centre were purposively selected. To account for socio-demographic variability and geographical accessibility, in each of these health centre-defined sub-areas another village, at least 5 kilometres distant from the health centre, was randomly selected. Qualitative research We conducted six Focus Group Discussions (FGD): two with pregnant women using ANC, two with husbands of pregnant women using ANC, and two with pregnant women not using ANC. The respective FGDs were held with groups of six to 12 participants from study villages with and without a health centre. Key informant interviews were conducted with four maternity health workers, seven traditional birth attendants and 29 women group leaders. The interviews assessed their knowledge, attitudes and practices regarding malaria and anaemia prevention in pregnancy. Quantitative research The design of the quantitative survey instrument was informed by the results of the qualitative interviews. A structured questionnaire was administered to all women from the eight study villages who had delivered a live child during the last six months (n = 225). Information on births was available through the existing Demographic Surveillance System (DSS) in the study area [ 14 ]. The questions focussed on socio-demographic characteristics, obstetrical history, knowledge and practice of preventive measures against malaria and anaemia during pregnancy, factors influencing the utilisation of ANC services, and compliance with chloroquine and iron/folic acid supplementation during pregnancy. The questionnaires were filled in by the interviewers, who also cross-checked the given answers against all available ANC cards (n = 156/225). On the ANC cards, the estimated age of pregnancy at the first visit was recorded through the fundal height in centimetres, as women were not able to recall their last menstrual period during ANC visits. These data were afterwards transformed into a specific scoring system. We used two definitions for malaria prophylaxis: a complete curative dose of 1,500 mg chloroquine followed by regular weekly 300 mg doses (complete prophylaxis), and an incomplete regimen consisting of only the 300 mg weekly doses (incomplete prophylaxis). A combined daily dose of 200 mg iron and 0.25 mg folic acid was defined as complete prophylaxis. Statistical analysis The data were entered in Microsoft Access 2000, cleaned, and then analysed with Epi Info 2000. Univariate analysis was done with the chi-square test or Fisher's exact test to compare proportions for categorical variables. Results were considered significant when the 2-sided P value was <0.05; a minimal worked example of such a comparison is sketched below. Ethical aspects We received ethical approval from the institutional Ethical Committee at the Department of Tropical Hygiene in Heidelberg, Germany, and the local Ethical Committee in Nouna, Burkina Faso. Oral informed consent was obtained from all participants. Results Study population The characteristics of the participants in the quantitative survey are shown in Table 1 . The great majority of the surveyed women were married, illiterate housewives/farmers, and their ages ranged between 15 and 49 years. The distribution of ethnicity among surveyed women was as follows: 41% Bwaba, 39% Marka, 15% Mossi, 3% Samo and 2% others. The median number of pregnancies among surveyed women was 6 (range 1–13).
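To make the univariate comparison concrete, here is a minimal sketch (not part of the original analysis, which used Epi Info 2000) applying a chi-square test to the distance-to-health-centre distribution of ANC users and non-users reported in Table 1 below.

```python
from scipy.stats import chi2_contingency

# Counts of ANC users vs non-users by distance band (from Table 1).
table = [
    [99, 5],   # 0-4 km
    [43, 59],  # 5-7 km
    [17, 2],   # 8-15 km
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, dof = {dof}, p = {p:.2g}")
```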
There were no differences between the background characteristics of users and non-users of ANC services, except regarding distance to the nearest ANC services. When comparing women living at a distance of ≤ 5 km with women living > 5 km away, distance was significantly associated with ANC use (p < 0.001).

Table 1 Background characteristics of women interviewed

Background characteristics (%)    ANC users n = 159    Non-users n = 66    All n = 225
Age group
  15–19                           24 (15)              12 (18)             36 (16)
  20–29                           81 (51)              37 (56)             118 (52)
  30–49                           54 (34)              17 (26)             71 (32)
Education
  No schooling                    145 (91)             62 (94)             207 (92)
  Primary education               14 (9)               4 (6)               18 (8)
Parity
  1                               26 (16)              8 (12)              34 (15)
  2–3                             46 (29)              17 (26)             63 (28)
  4–13                            87 (55)              41 (62)             128 (57)
Distance* (km)
  0–4                             99 (62)              5 (8)               104 (46)
  5–7                             43 (27)              59 (89)             102 (45)
  8–15                            17 (11)              2 (3)               19 (8)

* Distance to the nearest health centre in kilometres

ANC coverage In this study population, the minimal ANC coverage (defined as at least one ANC visit during pregnancy) was 159/225 (71%), and coverage was 63/225 (28%) if we consider the optimal frequency of at least three ANC visits (the national goal). Minimal ANC coverage was 97/102 (95%) in villages with a health centre vs 62/123 (50%) in remote villages (p < 0.001), while optimal ANC coverage was 55/102 (54%) in villages with a health centre vs 8/123 (7%) in remote villages (p < 0.001). Among ANC users, 27%, 40% and 33% of women visited ANC services once, twice, or more than twice, respectively, during their pregnancy. The first ANC visit of ANC users took place during the first trimester in 14% of cases, during the second trimester in 57%, and during the third trimester in 27%. Malaria and anaemia prevention knowledge and behaviour Malaria and anaemia were considered the most common diseases during pregnancy by the majority of the participants in the FGDs, the key informant interviews and the survey. In the Dioula language (the lingua franca of the region), malaria corresponds to "Soumaya", light to moderate anaemia to "Djolidessé" and severe anaemia to "Djoliban". Most women in the FGDs were knowledgeable about the malaria prevention effect of chloroquine and the anaemia prevention effect of iron/folic acid. The knowledge of malaria and anaemia prevention measures among ANC users is given in Table 2 . Regarding malaria, the majority of surveyed women stated that it can be prevented with chloroquine (white tablets), while only a minority mentioned mosquito nets or other measures. Regarding anaemia, iron/folic acid supplementation (red tablets or vitamins) was stated by the majority of surveyed women as being protective, while a much smaller percentage of women mentioned nutrition as an important factor. Stating chloroquine and mosquito nets as prevention measures against malaria, and iron/folic acid against anaemia, was significantly associated with ANC use (p < 0.002; p < 0.001; p < 0.001).

Table 2 Knowledge of preventive measures against malaria and anaemia

Knowledge factors (%)      ANC users n = 159    Non-users n = 66    All n = 225    p-value
Malaria prevention
  Chloroquine              128 (66)             35 (59)             163 (65)       <0.001
  Mosquito nets            40 (21)              2 (3)               42 (17)        <0.001
  Hygiene                  6 (3)                3 (5)               9 (4)          n.s.
  Protective clothing      12 (6)               0 (0)               12 (5)         <0.05
  Does not know            7 (4)                19 (32)             26 (10)        <0.001
Anaemia prevention
  Iron/folic acid          129 (76)             16 (25)             145 (63)       <0.001
  Adequate nutrition       29 (17)              8 (13)              37 (16)        n.s.
  Does not know            11 (7)               39 (62)             50 (22)        <0.001

Table 3 shows data on self-reported use of malaria and anaemia prophylaxis with chloroquine and iron/folic acid among ANC users, together with the corresponding data taken from their ANC cards. A correct prescription of chloroquine (complete prophylaxis) was seen on 60% of ANC cards, and a correct prescription of iron/folic acid (complete prophylaxis) on 78%. In contrast, self-reported oral information given during ANC visits on complete and incomplete prophylaxis regimens was less frequent and matched well with self-reported information on the chloroquine and iron/folic acid dosages actually taken. Most women reported being compliant with the oral information on the chloroquine and iron/folic acid regimens from the first ANC visit until delivery.

Table 3 Prescription on ANC card of chloroquine prophylaxis and iron/folic acid supplementation, self-reported ANC instructions and self-reported intake among ANC users (n = 159)

                                            Chloroquine (%)    Iron/folic acid (%)
Prescription on ANC card
  Complete prophylaxis                      95 (60)            124 (78)
  Incomplete prophylaxis                    29 (18)            -
  Incorrect prescription                    32 (20)            32 (20)
  No prescription (no ANC card)             3 (2)              3 (2)
Instructions given at ANC visits (self-reported)
  Complete prophylaxis explained            46 (29)            111 (70)
  Incomplete prophylaxis explained          65 (41)            -
  Incorrect instructions given              41 (26)            41 (26)
  No instructions given                     7 (4)              7 (4)
Dosages taken (self-reported)
  Complete prophylaxis                      46 (29)            149 (94)
  Incomplete prophylaxis                    67 (42)            -
  Incorrect dose                            43 (27)            8 (5)
  No prophylaxis                            3 (2)              2 (1)
Duration of chloroquine and iron/folic acid prophylaxis
  From 1st ANC until before delivery        20 (13)            20 (13)
  From 1st ANC until delivery               115 (72)           124 (78)
  From 1st ANC until after delivery         21 (13)            13 (8)
  No prophylaxis                            3 (2)              2 (1)

Factors influencing the use of ANC services Of the ANC users, the great majority reported being satisfied with the quality of ANC services. Only 16% of ANC users were aware that ANC services had recently become free of charge, and 42% reported that they had still paid for services. Most (73%) ANC users had paid between zero and 200 F CFA, while 12% and 6% had paid between 200 and 1,500 and between 1,500 and 7,500 F CFA, respectively. Apart from distance to the nearest health centre, lack of resources and ignorance were the most frequently stated reasons in the qualitative and quantitative interviews why women did not attend ANC services. The majority of ANC non-users reported no specific prophylaxis during their pregnancy, but 6% irregularly took self-medication for malaria/anaemia prevention. Moreover, 30% of ANC non-users had sought advice from traditional birth attendants (TBA). Typical statements in the FGDs with non-users of ANC services were collected as follows: • "ANC is not free of charge, you must pay for the ANC card and the medicine, and it could be up to 500 F CFA, and for me I find it expensive. Our husbands find it expensive too; that is why the majority of us cannot attend ANC services". • "I don't know the advantages of ANC services; some of the pregnant women, if they don't feel sick, wouldn't accept to attend ANC". • "There is a lack of advice from health workers; if you go to the ANC for the 1st time, they give drugs, they don't tell you when you should come back and how many visits you should attend. That is how they do here". • "If you are not sick, you don't pay for drugs; what you pay for is the ANC card at 150 F CFA...." • "But even the ANC card is free of charge, only the gloves cost 100 F CFA...."
• "I had not yet heard about the free ANC services ... Since I know now that ANC is free of charge I will attend the services." • "We do not have money to go to the health centre it will be nice to have one in our village". Itching, vomiting and fatigue were the most frequently stated side effects of malaria/anaemia prevention drug regimens during interviews, sometimes leading to non-compliance. Most women also stated that bed nets are considered too expensive for their household. Discussion The main findings of this study are (1) that coverage of antenatal care is far from complete, particularly in villages without a health centre, (2) that malaria- and anaemia-related knowledge and compliance with preventive measures is comparatively high with a wide gap between users and non-users of antenatal care and (3) that health services need to improve their response to the women's need for preventive care in pregnancy. Use and coverage of antenatal care In this community-based study from a rural area of Burkina Faso, we found an overall ANC coverage of 71%, which is however not representative given the used methodology. Most women had two ANC visits during their pregnancy, mainly during the second and third trimester. As we found ANC coverage to be much higher in villages with a health centre compared to villages quite distant from a health centre, and as we included similar numbers of health centre and non-health centre villages, our ANC coverage figure is likely to be an overestimate. The national demographic and health survey in Burkina Faso claims an ANC attendance (at least one ANC during one pregnancy) of 59% [ 15 ]. Although health workers from rural health centres in Burkina Faso are advised to do regularly outreach work in the villages of their respective catchment areas, in practice such visits are rare due to a number of reasons such as lack of transport. Our findings thus support the need for better access of rural SSA women to ANC services [ 16 ]. Other reasons given for non-use of ANC services included ignorance and lack of money. This confirms similar observations from other rural African areas [ 20 ]. Interestingly, the Burkinabé Government had recently changed its policy towards free ANC service provision. However, our findings show that this policy is rather confusing as some parts of ANC procedures are not included. Consequently it was not yet fully understood by the population. However, this change in policy was considered promising during our interviews, and it was also reassuring that most of the ANC users were satisfied with the quality of ANC services. Knowledge and compliance with preventive interventions related to malaria and anaemia Malaria and anaemia were seen as important disease entities during pregnancy in our interviews. Moreover, most women interviewed were quite knowledgeable about effective malaria/anaemia prevention measures. However, compared to ANC users ANC non-users were significantly less knowledgeable about malaria/anaemia prevention measures. The interpretation is not straight forward because more knowledgeable women may be more likely to attend antenatal care or increased knowledge may be the result of health education in anatenatal care; most likely both effects contribute to the observed gap between ANC users and ANC non-users. 
Responsiveness of health services
There is now broad agreement on the need for new strategies for community participation in the implementation of effective malaria control activities, as well as for better education of service providers in both the public and the private sector [ 17 ]. It was reassuring to find that nearly all of the women who reported ANC use had an ANC card in their house. Specific prescriptions were found on most cards, but malaria prophylaxis prescriptions in particular were often incomplete. This explains why self-reported compliance with the recommended prevention regimens was sub-optimal with regard to malaria. Such discrepancies have already been observed in a recent study in pregnant women of Nouna town [ 18 ]. However, self-reported compliance matched well with the oral instructions reportedly given by the respective health workers. This points to the importance of correct oral information in rural areas with a high prevalence of illiteracy [ 19 ]. The best model for effective and cost-effective malaria prevention during pregnancy in SSA still needs to be developed. Chloroquine has been the mainstay of malaria control in sub-Saharan Africa (SSA), but the emergence of chloroquine-resistant Plasmodium falciparum has called the efficacy of this well-known drug into question [ 21 ]. The first cases of in vitro and in vivo chloroquine resistance in Burkina Faso were seen in 1983 and 1988, respectively, and reported clinical failure rates after use of chloroquine for treatment of uncomplicated malaria in children were around 5% in the early 1990s and 10% in the most recently performed surveys [ 22 ]. Although this is still below the threshold of clinical failure generally considered to require a change of first-line treatment, it has recently been shown that chloroquine failed to prevent malaria in pregnant women of Burkina Faso [ 23 ]. Current alternatives include intermittent treatment with sulfadoxine-pyrimethamine and the use of ITNs. Sulfadoxine-pyrimethamine, given in 1–3 therapeutic dosages during the second and third trimester of pregnancy, has recently been demonstrated to be an effective and cost-effective schedule for malaria prevention [ 24 ]. Compared with chloroquine prophylaxis, this regimen has the advantage that it is given to women when they attend antenatal clinics, thus avoiding problems with compliance. ITNs are increasingly considered an important tool in the prevention of malaria in young children and pregnant women, and the provision of ITNs through ANC services has recently been proposed as a promising distribution channel [ 13 , 25 ]. The findings of this study confirm the cost barrier to the private purchase of bed nets and ITNs and thus support the call for major subsidies if high ITN coverage is to be achieved [ 12 , 13 , 25 ].

Competing interests
None declared.

Authors' contributions
CM, AJ and OM designed the study. FS and BK were responsible for the conduct of the study in Burkina Faso. CM analysed the data. All authors contributed to the interpretation of the data, helped write the paper, and read and approved the final manuscript.

Pre-publication history
The pre-publication history for this paper can be accessed here: | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC516443.xml |
212697 | Digital Evolution | In silico experiments reveal how evolution can work--without missing links. What can biologists learn from them? | Rich Lenski decided he was onto a good thing from his very first encounter with digital evolution. It all began when he used the technology, in which artificial organisms in the form of computer code evolve independently by self-replicating, mutating, and competing, to re-examine an earlier study with bacteria. The original study had contradicted ‘some influential theory’ suggesting that random mutations show a systematic tendency towards synergistic interactions. His digital results, he discovered, matched his organic ones. ‘It's great when these two powerful experimental systems agree, because it suggests some generality about the evolution of genetic architectures', recalls Lenski, professor of microbial ecology at Michigan State University (MSU). ‘But even if the digital and biological realms sometimes come into scientific conflict, it would only lead one to ask why and then probe the relevant factors more deeply’.

Complex Challenges and the Virtue of Simplicity
He can hardly contain himself. ‘It's a win–win situation, leading towards increased generality, on the one hand, and further experiments to better understand specific outcomes, on the other’. For his part, Lenski has since gone much further with the technology (Box 1; Figure 1) and also soon expects to be announcing results that could broaden digital evolution's appeal even more.

Box 1. Impossible Evolutionary Experiments
Richard Lenski is using digital organisms to do ‘impossible’ evolutionary experiments. In one, he says, ‘we test every incipient mutation before it occurs in a population and then allow it or disallow it, depending on its fitness effect, to see how important neutral and deleterious mutations are for long-term adaptation’. Lenski, professor of microbial ecology at Michigan State University, says his mind boggles at how digital evolution opens up so many avenues for research. ‘I sometimes feel like a kid in a candy store who might starve because he can't make up his mind what he wants’. These opportunities and, at the other end, the prospect of having too much data to analyse, which Lenski admits is a strange thing for an evolutionary biologist to complain about, enforce a discipline to prioritise and define objectives: ‘What exactly is the hypothesis I want to test, and what exactly must I measure to test that hypothesis?’ Such enthusiasm for the technology makes it difficult for him to understand why some biologists might dismiss digital evolution as ‘very interesting but with no value’ or turn their backs on it altogether. ‘My own view’, says Lenski, ‘is that something that is very interesting is also worth thinking about and exploring more fully, especially when it offers the opportunity to examine complex problems in greater depth and with more precision than is otherwise possible’. But he cautions against mistaking his enthusiasm for studying digital organisms as a call to abandon other lines of research. ‘There's obviously much of value for understanding evolution that comes from many different empirical and theoretical perspectives’, he says. ‘That's one reason that evolutionary biology is such a vibrant field right now’. Lenski still spends as much research time on bacteria as he does on digital organisms.
‘Although it's sometimes frustrating not to be able to devote 100% to each system, each one is so interesting to me that I couldn't bear to drop either of them’. The two systems have different strengths and limitations, which Lenski tries to exploit in his research, he says. From his laboratory's studies on long-term E. coli populations, he and his colleagues showed earlier this year how they used gene-expression arrays to work backwards to a set of key mutations in a global regulatory gene. More recent work, currently being written up, ‘has led us to some adaptive mutations in several other key loci’, he notes. As for his digital research using the Avida software system, Lenski acknowledges that speed is an obvious advantage, but not the most significant one. ‘An even more important advantage is the ability to observe the dynamics and dissect the outcomes of evolution with absolute precision. For example, there are no missing links in the digital world’. Nevertheless, he wryly highlights one shortcoming of Avida: ‘We'll know that we have been successful once the Avidians have evolved the ability to design their own experiments and write the papers without us’.

Figure 1. Hybrid Graphic of Petri Dishes with Bacteria Blending into Digital Organisms
Lenski spends as much research time with bacteria (left) as he does with digital organisms (right), balancing the strengths and limitations of the two systems in an effort to understand and explain the principles of evolutionary theory. (Hybrid graphic courtesy of Dusan Misevic, Michigan State University.)

Earlier this year, he and Chris Adami, who heads the Digital Life Laboratory at the California Institute of Technology (Caltech), published some breathtaking findings from the field. Their collaboration brings together biologists and computer scientists, physicists and philosophers in an artificial world on a quest to understand how evolution works. Though they may still be some way from reaching that objective, their latest advance suggests that they are on the right track. The research confronts evolutionary theory's long-standing challenge to explain how an organism can develop complex features simply as a result of random mutation and natural selection. The challenge remains a controversial one, too. Supporters of intelligent design, a branch of the creationist movement, promote the notion of ‘irreducible complexity’ as evidence that Darwinian evolution is a flawed theory. The notion holds that a complex feature cannot evolve sequentially from its elements, and must have been designed in one step by some higher intelligence. Traditional investigations, based on molecular biology and palaeontology, have yielded much evidence about the incremental evolution of the eye or the brain, for instance. But continuing ignorance about many developmental processes and the absence of key fossil records mean that accounts without missing links, which would endorse the theory, may never be realised. Which is what tempted Lenski and Adami to examine the challenge in their virtual world. This is a world where timescales contract and, above all, where other constraints of ‘wet’ biology have no place. ‘It's not just the speed, by any means’, says Lenski.
‘It's also the power to manipulate almost any variable one can imagine, to measure variables with absolute precision, to store information that then allows one to trace back a complex chain of events, and to take evolved organisms and subject them to new sorts of analyses that one might not even have anticipated when first collecting the data’. It is a place where virtue is made of simplicity. ‘The worlds we're dealing with here are extraordinarily simple compared with the real world’, says Adami. ‘Any of the biochemistry associated with transcription and translation, for example, anything more complex than relatively short viral types of genomes, that's out of our league’, he notes. ‘We can't see transcription and translation because we don't have transcription and translation; we go right from sequence to function’. But the principles of evolutionary theory make such restrictions unimportant, he says. ‘Many of the [theory's] predictions don't depend on these little details of molecular biology’, notes Adami. ‘The principles are very, very general, and very simple, and in the end they are mostly responsible for the overall dynamics that you see in these simple systems’. Lenski goes further. These virtual realities, he says, ‘offer us a window into an alternative world, and perhaps even a part of the future of our own, where the fundamental evolutionary mechanisms of mutation and natural selection play out in a novel physical realm’. Lenski is interested in watching evolution as it happens and has a track record in the study of evolving organic systems, primarily using Escherichia coli. ‘We're making great strides elucidating the precise genetic bases of the adaptation that has occurred during tens of thousands of generations in our long-term E. coli populations’, he reports. ‘Even after more than 30,000 generations in a constant environment, we're still seeing some major phenotypic evolutionary changes’, he adds.

Evolution in Action
Adami, who also works in theoretical physics at the Jet Propulsion Laboratory at Caltech, has developed a software platform, known as Avida, for research on evolving computer programs, the digital organisms that he terms ‘Avidians’. The second version, Avida 2.0, became available for free public use (http://dllab.caltech.edu/avida/) earlier this year, a decade after work began. ‘I came to Caltech in 1992 on a special fellowship’, he recalls, ‘which basically told me, "You can do whatever you want and we're not going to check on you for three years—just sit there and think of something"’. So he did—and discovered the pioneering work on evolving computer programs by Tom Ray, the computational ecologist who invented the Tierra software system. ‘In a sense, Tom Ray's Tierra was a proof of concept: he showed that computer programs can evolve, and it was a watershed moment. Without his work, mine wouldn't have existed’, acknowledges Adami. ‘But I wanted this digital life system to be an experimental system just like, let's say, Rich Lenski and E. coli bacteria’. Adami worked quickly with the help of undergraduates to design and write code and soon had a beta-version ready: ‘Sure, these kids can program’, he laughs. But the programmers were human and errors crept in. The team would run the system overnight and discover ‘weird things’ the next morning: ‘The path of evolution went in a strange way, not because the world dictated it, but because some bug dictated it’, notes Adami.
‘You need to know your system perfectly, at least at the beginning, and that was really the hard part for the next five years’. On the way, however, the work attracted the attention of Microsoft, the software company, which was eager to know how its designers could evolve computer programs instead of writing them and inevitably introducing bugs, too. Some software already stretches to more than 10 million lines of code, and Microsoft, concerned for its survival as the fittest, foresaw a problem. It predicted programs expanding so much that, sometime between 20 and 50 years into the future, they would reach what Adami calls the ‘complexity wall’, where the number of errors would make them unusable. The alternative of evolving programs looked like a great idea to Microsoft, especially the way Adami tells it. ‘I know a piece of software that's 3 billion lines of code that controls all our actions’, he says, referring to the human genome. ‘There may be bugs, but they don't lead to a crash. It's very robust programming, with pieces taken from all kinds of different sources, and somehow it works. And the reason why it works is because it was evolved and not written’. For a year, the Caltech team explored the features of programming languages that make one language more evolvable than another, but moved on when Microsoft's interests switched to more directly applied science and Adami wanted to continue to focus on the fundamental principles underpinning evolution. Avida was ready to run and beginning to offer a much more versatile platform than Tierra, with advances that have since been honed even further. ‘We can exchange not only the [processor's] instruction set on the fly, we can also change the entire structure of the CPU [central processing unit] on the fly’, says Adami. ‘If you want to test different physics or chemistry, the flexibility of Avida compared with Tierra is like the difference between driving a modern Porsche and a Model-T Ford. They're both cars, but …’ The most important difference, insists Adami, ‘is the possibility of rewards to programs if they accomplish interesting things, in this case computations’. He draws a parallel between the way replicating micro-organisms exploit chemical reactions to yield energy and the way evolving Avidians perform computations to secure extra CPU time. ‘It's a one-to-one analogy’, notes Adami, ‘and the fact that it works so well may tell you something very, very fundamental about the duality between computational chemistries and biochemical chemistries’. In Adami's collaboration with Lenski to show how complex features can evolve sequentially, the Avidian genome is a circular sequence of instructions in computer code. At the start of its computational existence, an Avidian can only replicate. If it evolves logic functions in the process, however, the system rewards it with energy, in the form of time on the CPU. This reward enables the evolving Avidian to execute instructions that in turn help it to mature to secure more rewards, and so on, to safeguard its future. The results thrilled the experimenters. Teams at Caltech and MSU were able to trace the genealogy of Avidians, without any missing links, from simple self-replicator through unexpected transitional form to complex performer of many logic functions, with random mutation and natural selection alone responsible for the evolution. 
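The reward loop described here (replication ability plus bonus ‘CPU time’ for performing computations) can be caricatured in a few lines of code. The sketch below is a deliberately toy mutation-selection loop, not Avida's actual code or API; the instruction alphabet, the motif-based stand-in for ‘performing a logic function’, and the reward values are all invented for illustration:

```python
# Toy sketch of reward-driven digital evolution (hypothetical, NOT Avida).
# "Performing a logic function" is stood in for by carrying a motif; richer
# motifs build on simpler ones and earn more reproduction weight, loosely
# mirroring how complex functions co-opt simpler precursors.
import random

ALPHABET = "abcd"
MOTIFS = {"ab": 1.0, "abc": 2.0, "abcd": 4.0}  # nested: "abcd" also earns the others

def fitness(genome: str) -> float:
    # base replication ability plus a bonus for each "computation" performed
    return 1.0 + sum(bonus for motif, bonus in MOTIFS.items() if motif in genome)

def mutate(genome: str, rate: float = 0.05) -> str:
    # each position is randomly rewritten with a small probability
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in genome)

random.seed(0)
population = ["".join(random.choice(ALPHABET) for _ in range(12)) for _ in range(100)]
for generation in range(200):
    weights = [fitness(g) for g in population]
    # fitness-proportional selection: more reward means more offspring
    population = [mutate(g) for g in
                  random.choices(population, weights=weights, k=len(population))]
best = max(population, key=fitness)
print(generation, best, fitness(best))
```

Run repeatedly, such a loop tends to discover the higher-reward motifs through intermediate ones, which is the qualitative point of the Lenski-Adami experiment, although the real system evolves executable programs rather than strings.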
‘Many biologists are delighted to see such a clear demonstration of the evolution from scratch of demonstrably complex features’, says Lenski, ‘and in a way that accords so well with the hypothesis first voiced by Darwin and nowadays supported by a large body of comparative data that complex new features arise by co-opting existing structures that previously served other functions’. He also notes much interest in the way that damaging mutations sometimes proved to be essential stepping stones in the evolution of new functions. To opponents of evolutionary theory, Lenski is eager to emphasise that the study ‘does not address the origin of life, nor whether the universe itself was designed to allow the evolution of complex organisms. Rather, our study shows that random mutation and natural selection can produce quite complex features, via many pathways, provided that the environment also favours some (but not all) transitional forms, even when the transitional forms are favoured for performing different functions from those that evolve later’. The Limits to Truth For many other biologists, however, digital evolution seems to have very little relevance. One eminent British evolutionary biologist dismissed the research in just eight words, according to the field's godfather, Tom Ray. ‘His comment: “It's just not biology. Period. End of discussion”. That's the whole story right there’, recalls Ray. Less strident reservations concern the limits on complexity that the virtual world imposes and suspicions about the ability of digital processing to mirror evolutionary principles accurately. For Francisco Ayala, professor of biological sciences at the University of California, Irvine, it appears to be simply a question of trust in the natural world. ‘Computers can give you only what you put in’, he says. ‘With natural models, you're not putting anything in—you're segregating a small region as an aspect of reality’. There are also more mundane worries over the technical skills needed for the computational operations, a fear acknowledged by Lenski. ‘Computational skills are certainly opening up some exciting new directions [in evolutionary biology]’, he says, ‘but there are of course many other useful skills and fascinating directions’. At Caltech, meanwhile, Adami's team is trying to make Avida easier to use, backed by the National Institutes of Health's first-ever funding for digital-life work. Misunderstandings about the technology arise over whether the research is an ‘instance’ or a ‘model’ of evolution, suggests Ray, who now divides his time between the Advanced Telecommunications Research Laboratories in Kyoto and the University of Oklahoma, where he holds chairs in zoology and computer science. ‘I never intended [Tierra] as a model, but that's the way a lot of people saw it because they weren't really prepared for this new idea, this different perspective of another instance of life’, he says. ‘They had a more traditional view of what you do with a computer, which is that you send e-mail, you process things, and you make models’. Levels of veracity determine limits of extrapolation, says Ray. ‘Digital evolution is an abstraction, and it's not going to be able to tell us what humans will evolve into or why dinosaurs went extinct or what will be the next emerging disease…. You need the whole planet to do that kind of modelling’. But once you appreciate the constraints, ‘it's a phenomenally good tool, because it's evolution in a bottle. You can instrument it 100%’, he notes. 
‘I think Lenski and Adami have done a very good job of developing it that way’. Ray himself is now more interested in genomics and pharmacology and their application in a biologically inspired engineering project to design software agents, or ‘virtual creatures’, as he terms them. For Lenski, experiments with Avida provide ‘both an "instance" and a "model" of evolution’. He says that ‘populations of the digital organisms really do evolve and adapt, albeit in an unfamiliar physical realm. At the same time, they provide a sort of experimental model for testing and understanding the general principles of evolution’. And he agrees with Ray that digital evolution is not intended to explain how we got where we are today, ‘in the sense of unravelling which species are more related to which other species, or what organismal features are adaptive for what purposes, and so forth’. The goal, says Lenski, is to examine evolutionary processes and dynamics in greater depth and detail than are otherwise possible. ‘Watching a process as it occurs and being able to probe genetic details and manipulate environmental variables can provide new insights and evidence that one cannot get by comparative studies that typically require one to infer historical processes from present-day patterns’.

The First Steps to Freedom
Such developments fascinate and enthral Paul Rainey, an evolutionary ecologist, even though he rarely needs any computing power for his research and recognises that digital evolution still lacks an ecological dimension. Rainey, who earlier this year moved from Oxford to become professor of ecology and evolution at the University of Auckland, uses bacterial populations of Pseudomonas fluorescens, which grow from single genotypes in pristine tubes, to test long-standing hypotheses about the causes of ecological diversification. ‘The bottom line is that we're reducing the complexity we see in the real world to a much more manageable level’, he says. ‘The nice thing about bacterial populations is that ecological and evolutionary timescales coincide, so that you can actually see the ecological context of evolutionary change’. Rainey, a friend and colleague of Lenski's, would welcome the chance to take advantage of the speed, robustness, and flexibility of digital evolution to further his research, but doubts whether the technology will ever be able to match the performance of his ‘wet’ laboratory. Though his natural model is simple, it remains far too complex to program, he suspects. ‘We try to understand how selection is working in this very complex ecological context, which includes interactions between genotypes and within genotypes and interactions with an environment that is constantly changing’, he says. ‘This sets the scene for selection, and the selective forces are constantly changing…. None of that complexity is really captured in Avida’. But Rainey is in for a surprise, according to Adami. ‘The pace of development of Avida has accelerated’, he says. ‘More people are working on it because we have bigger grants. And Charles Ofria [who helped to design the software as a postgraduate at Caltech] is doing much of the development at Michigan State [University, where he is now assistant professor of computer science and engineering] with his students’.
The result is that Avidians have made their first steps towards sexual freedom within ecologically diverse environments or, more accurately, code recombination in a multi-niche virtual world. For almost a decade, says Adami, Avida has been a single-niche world in which every organism in the population sees exactly the same world and only a single species inhabits that world. But Avida has now been expanded, he continues, ‘in such a manner that populations can see different types of worlds and they can adapt independently to different resources’. A research paper is being finalised on how the software is making its first steps towards incorporating the notion of evolutionary ecology. ‘We show what pressures are necessary to make a population that is homogeneous branch out and speciate into a stable system’, notes Adami. ‘Now we want to explore recombination, which we've always shied away from.’ With asexual reproduction virtually understood, the researchers are ready to tackle sexual reproduction in the digital world, says Adami. ‘Some people are furiously working at implementing that.’ ‘Our goal is not to mimic natural systems in detail, but rather to expand Avida to give digital organisms access to more of the basic processes of life’, says Lenski. ‘Our goal is not so much to endow the ancestral organisms with additional capabilities, but rather we want to see how digital organisms will evolve if they are placed in an altered world where such things as sex and communication are physically possible. I see many years of interesting research along these lines’. Reflecting on future applications for the research, Lenski suggests it highlights how the traffic in computational biology is now becoming a significant and little-recognised two-way exchange. Computer scientists are not only helping biologists to organise and analyse their vast datasets, says Lenski, but ‘biological principles, from evolution and genetics to neurobiology and ecology, are informing computer scientists and engineers in designing software and hardware … and that holds tremendous promise for the future’. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC212697.xml |
529466 | Rupture of totally implantable central venous access devices (Intraports) in patients with cancer: report of four cases | Background Totally implantable central venous access devices (intraports) are commonly used in cancer patients to administer chemotherapy or parenteral nutrition. Rupture of an intraport is a rare complication. Patients and methods During a 3-year period, a total of 245 intraports were placed in cancer patients for chemotherapy. Four of these cases (two colon cancer and one each of pancreas and breast cancer) had rupture of the intraport catheter; these form the basis of the present report. Results Mean time in situ for intraports was 164 ± 35 days. Median follow-up time was 290 days and total port time in situ was 40180 days. The incidence of port rupture was 1 per 10,000 port days. Three of the 4 cases were managed by successful removal of the catheters. In two of these the catheter was removed under fluoroscopic control using the femoral route, while in the third patient the catheter (partial rupture) was removed surgically. One of the catheters could not be removed and migrated to the right ventricle during manipulation. Conclusion Port catheter rupture is a rare but dreaded complication associated with subcutaneous port catheter device placement for chemotherapy. In case of such an event the patient should be managed by an experienced vascular surgeon and interventional radiologist, as in most cases the ruptured catheter can be retrieved by non-operative interventional measures. | Background Totally implantable central venous access devices (intraports) are commonly used in patients with cancer to administer chemotherapy, blood and blood products, antibiotics and parenteral nutrition, and to obtain blood samples for laboratory analysis. The catheter is usually placed in the subclavian vein under local anesthesia. There are many complications associated with this technique, such as hemothorax, pneumothorax, pocket infection, infection of the tunnel or the port, bleeding, hematoma, and thrombosis of the catheter or the vein. A very rare complication of the intraport catheter is rupture inside the subclavian vein [ 1 ]. The literature regarding common complications abounds; however, there is very little information on port ruptures. Some attempts have been made to correlate these with the type of port used, site of placement, type of chemotherapy, duration of catheter use, etc. [ 2 , 3 ]. However, what is ideal is still debatable. We report here 4 cases of intraport rupture encountered in our practice. Patients and methods Between June 1995 and May 1998, 245 patients underwent intraport insertion at the "Agii Anargiri" Anticancer Hospital of Kifissia, Athens, Greece, and were followed up for possible device-related morbidity. Baseline demographic information and the indication for placement were obtained from a retrospective review of patients' medical and operative reports. Follow-up continued until the device was removed or the patient died. Follow-up time ranged from 1 month to over 3 years. Port-A-Cath® devices with a titanium portal and detachable silicone rubber catheter (Arrow International™, USA) were used in all cases. All patients requiring removal or replacement of the device prior to completion of the intended treatment were identified from the operating room and hospital records. All intraports were inserted by a percutaneous access technique to the subclavian vein. The tip of the catheter was positioned in the superior vena cava or the right atrium.
The ports were then placed within a subcutaneous pocket created on the anterior chest wall. The position was documented by an immediate intraoperative chest roentgenogram. Four of these patients had port rupture, of which three were complete and one partial; one of these ruptures was accidental. Routine removal of the subcutaneous intraport was carried out in the operating room under local anesthesia with xylocaine 1% (10 cc). An incision was placed on the skin in the area over the drum of the device. The drum was then freed by cutting away the tissues surrounding the port. The port and the catheter were caught with Kocher forceps. The tissues around the catheter were dissected and the catheter was slowly pulled out. The catheter was cut into three pieces and sent for culture. Patients were discharged after two hours of observation. Results Over a three-year period between 1995 and 1998, a total of 245 port devices were fitted in cancer patients (139 females and 106 males). Mean age of the patients was 58 ± 6.3 years. Total port time in situ was 40180 days, while mean (SD) port time in situ was 164 (35) days. After a median follow-up of 290 days (range 30–690 days), four ports ruptured, corresponding to a rupture rate of approximately 0.1 per 1000 port days (0.01% per port day), or one rupture every 10,000 port days. Two patients had cancer of the colon and one each had cancer of the breast and pancreas. Both patients with colonic cancer received 5-fluorouracil and leucovorin, the patient with breast cancer received epirubicin and paclitaxel, and the patient with pancreatic cancer received gemcitabine, paclitaxel and cisplatin. No correlation was observed with type of chemotherapy or site of disease. Three of the port ruptures were complete and one partial; one of the ruptures was accidental (case 1). Details of the cases are provided in the next section. Three of the patients survived for two years after catheter removal, while the fourth patient, with accidental port rupture and whose catheter was left behind in the right ventricle, died of progressive disease two months later. Case 1 A 65-year-old female with colonic cancer received an intraport for chemotherapy administration. After 6 months, due to the poor response to chemotherapy, it was decided to remove the catheter. During the manipulations for removal, the catheter accidentally ruptured at the point of its entrance into the subclavian vein. The peripheral part of the catheter remained in the vein, while only the central part could be removed. A further attempt to expose the subclavian vein up to the superior vena cava failed. The patient underwent thoracotomy for removal of the remaining catheter two days later. The superior vena cava was opened and catheter removal was attempted; however, during this process the catheter slipped into the right atrium and further attempts to retrieve it were abandoned. The patient was started on anticoagulant treatment with enoxaparin sodium, 1 mg/kg every 12 h for 5 days and 40 mg/day for a further 14 days, to prevent thromboembolic events. The patient died two months later of progressive disease, without obvious complications related to the retained catheter. Case 2 A 68-year-old female with colonic cancer received an intraport for administration of chemotherapy. After one and a half years of treatment it was decided to remove the catheter, as it had thrombosed due to non-heparinization. At the time of its removal the catheter ruptured at the point of its entry into the subclavian vein. The peripheral part of the catheter remained in the vein.
An unsuccessful attempt was made to expose the subclavian vein up to the superior vena cava. Later this catheter migrated to the right ventricle. The catheter was removed using the technique of Yedlicka et al [ 5 ], through the left femoral vein, by advancing a vessel catheter to the right ventricle under fluoroscopic control. The broken catheter was caught with endovessel forceps and removed through the femoral vein. Case 3 A 74-year-old female suffering from breast cancer underwent intraport insertion for administration of chemotherapy. After fourteen months of treatment it was decided to remove the catheter, as it had thrombosed. On attempted removal the catheter was found to be ruptured at its entry into the subclavian vein (Figure 1). The next day, the broken part of the catheter was removed successfully under fluoroscopic control using the technique described above for case 2 (Figure 2). No complications were observed. Biomechanical analysis of the removed catheter showed a significant decrease in the elasticity of the material (Figure 3). Figure 1 Chest X-ray showing the port in the correct position, while the catheter is not visible here. The catheter had moved into the right atrium, from where it was removed fluoroscopically. Figure 2 X-ray showing the intravascular catheter being fluoroscopically removed by means of a special endo-vessel grasper. Figure 3 (a) The ruptured catheter had been sent for biomechanical analysis, which identified alteration of its elastic properties. (b) The port removed by means of the open technique. Case 4 A 56-year-old female patient with pancreatic carcinoma underwent intraport placement for chemotherapy. Eight months later, the patient complained of pain in the back during the administration of chemotherapy. A fluoroscopic examination showed a partially broken catheter in the vein, while the other part was lying in the subcutaneous tissue. The catheter was removed from the subclavian vein carefully to avoid complete rupture. As in case 3 above, biomechanical analysis showed a significant reduction in the elasticity of the catheter material. Discussion Since Aubaniac first described percutaneous central venous catheterization in 1952, insertion of central venous access devices for fluid administration has increased rapidly. The total complication rate associated with central venous catheter devices ranges from 0.4 to 29% [ 3 , 4 , 6 ]. Spontaneous rupture of port catheters appears to be a very rare and dreaded event. Biffi et al, in 1997 [ 1 ], reported three cases of port catheter rupture out of 178 ports inserted by them. The incidence of port rupture was estimated to be 1.68% (0.09/1000 port days). In their series the rupture occurred 66 days after placement, during a pause between subsequent chemotherapy cycles. The symptoms consisted of palpitations and chest discomfort in two patients, while the third remained asymptomatic. All the catheters in their series were removed by interventional technique without any complications [ 1 , 5 ]. The biomechanical analysis of ruptured catheters in our series showed a significant decrease in the elasticity of the material. No correlation between changes in the mechanical properties of the material and the specific chemotherapy administered through the port has been established so far [ 7 , 8 ]. In our series, too, the patients received differing chemotherapy regimens and no correlation was observed between the agents and the material properties, although this may also reflect the small number of such events.
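The incidence figures quoted here follow from simple person-time arithmetic. The sketch below (a minimal illustration, not part of the original analysis) recomputes them from the counts reported in this series:

```python
# Minimal sketch: rupture incidence from the raw counts reported above
# (4 ruptures over 40,180 port-days in 245 ports).
ruptures = 4
port_days = 40_180
ports = 245

rate_per_day = ruptures / port_days
print(f"{rate_per_day * 1000:.2f} ruptures per 1000 port-days")   # ~0.10
print(f"one rupture every {1 / rate_per_day:,.0f} port-days")     # ~10,045
# The discussion reports a cumulative incidence of 1.65%; the small
# difference from 4/245 likely reflects rounding or denominator choice.
print(f"cumulative incidence: {ruptures / ports:.2%}")            # ~1.63%
```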
In case 1 the port catheter rupture was due to wrong manipulations during catheter removal and accidental cutting of the peripheral part without first securing it in a clamp. The subsequent catheter movement was induced by negative intra-thoracic pressure during respiration [ 9 ]. Other potential causes of catheter fracture include incorrect fixation of the catheter to the locking steel ring, repeated high-pressure injections to resolve clot formation, and alteration of the catheter's mechanical properties. In two of our cases decreased elasticity of the material was found; however, one case could not be attributed to any known cause. There was no correlation of port rupture with type of chemotherapy or site of cancer, probably because there were only 4 events. Ballarini et al [ 10 ] suggested that catheter thrombosis mainly occurs due to incorrect fixation of the locking steel ring to the port, where it is associated with rupture. The estimated incidence of port catheter rupture and embolization varies from 0.9 to 1.7% of cases [ 1 , 10 - 12 ]. It was 1.65% in our series. There is no controversy that foreign bodies in the systemic circulation must be removed. This is best achieved under fluoroscopy with specific catheters and snare loops. Removal of intravascular material by means of minimally invasive techniques yields excellent results while minimizing morbidity and mortality [ 13 , 14 ]. If such attempts fail, open surgery should be considered. Conclusions Intravascular rupture of subcutaneous port catheters is a rare complication. The etiology remains elusive; however, incorrect placement has been advocated as the most important cause. Other causes include catheter material faults and alterations of the material's mechanical properties, possibly due to the administered substances; however, there are no data to support an effect of the administered substances. Ruptured catheters are best removed by minimally invasive radiological techniques. Competing interests The authors declare that they have no competing interests. Authors' contributions DF participated in the operations and drafted the manuscript. CT participated in the operations, patient follow-up, literature search and preparation of the draft. GF conducted the follow-up and helped to draft the manuscript. AN and SR participated in the operations, design of the study and preparation of the manuscript for publication. All authors read and approved the final manuscript. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC529466.xml |
529300 | The development of a strategy for tackling health inequalities in the Netherlands | Over the past decade, the Dutch government has pursued a research-based approach to tackling socioeconomic inequalities in health. We report on the most recent phase in this approach: the development of a strategy to reduce health inequalities in the Netherlands by an independent committee. In addition, we will reflect on the way the report of this committee has influenced health policy and practice. A 6-year research and development program was conducted that covered a number of different policy options and consisted of 12 intervention studies. The study results were discussed with experts and policy makers. A government advisory committee developed a comprehensive strategy that intends to reduce socioeconomic inequalities in disability-free life expectancy by 25% in 2020. The strategy covers 4 different entry-points for reducing socioeconomic inequalities in health, contains 26 specific recommendations, and includes 11 quantitative policy targets. Further research and development efforts are also recommended. Although the Dutch approach has been influenced by similar efforts in other European countries, particularly the United Kingdom and Sweden, it is unique in terms of its emphasis on building a systematic evidence-base for interventions and policies to reduce health inequalities. Both researchers and policy-makers were involved in the process, and there are clear indications that some of the recommendations are being adopted by health policy-makers and health care practice, although more so at the local than at the national level. |

Introduction
Before 1980, socioeconomic inequalities in health were a non-issue in public health (research) in the Netherlands. This changed in the early 1980's as a result of the publication of the Black Report in England [ 1 ], and a report on inequalities in health between neighborhoods in the city of Amsterdam [ 2 ]. Gradually, interest in health inequalities rose, first among researchers and then among policy-makers. Interest among policy-makers was further strengthened by the "Health For All by the year 2000" targets of the World Health Organization, which the Dutch government officially endorsed in 1985 [ 3 ]. In 1986, the Ministry of Health published its Health 2000 report, which was the first government document to include a paragraph on socioeconomic inequalities in health [ 4 ]. This was followed in 1987 by a conference organized by the prestigious Scientific Council for Government Policy, the outcome of which was a recommendation to start a research program on health inequalities [ 5 ] (see Table 1).
Table 1. Summary of policy developments from 1980 to 2000

1985  The Dutch government adopted the WHO Health For All policy targets
1986  Publication of the Health 2000 Report [15] by the Ministry of Welfare, Health and Cultural Affairs, including a paragraph on socioeconomic inequalities in health
1987  National conference on socioeconomic inequalities in health, organized under the aegis of the Scientific Council for Government Policy, resulting in a proposal for a national research programme (1989–1993) funded by the Ministry of Welfare, Health and Cultural Affairs
1991  National conference, again organized under the aegis of the Scientific Council for Government Policy, resulting in an agreement among the several parties involved to implement activities to reduce inequalities in health
1994  Results of the first national research programme were reported to the Minister of Public Health
1995  Publication of an important policy document by the Ministry of Public Health, Welfare and Sport (Health and Wellbeing). Reduction of socioeconomic inequalities in health was mentioned as one of the policy goals. Initiation of the second national research programme (1995–2000)
1996  Publication of a second document on Public Health Status and Forecasts, by the National Institute of Public Health and Environmental Protection. Socioeconomic inequalities in health were stressed as a major public health problem
2000  Report of the Lemstra committee on the enforcement of public health. The reduction of socioeconomic inequalities was mentioned as an important policy aim. Growing demand by the Ministry of Public Health and parliament for information on effective interventions to reduce inequalities in health
2001  Results of the second national research programme, and recommendations based on these results, reported to the Minister of Public Health

Since then, the Dutch Ministry of Health has followed a systematic, research-based approach to tackling socioeconomic inequalities in health. An initial five-year research program mapped the nature and determinants of socioeconomic inequalities in health in the Netherlands [ 6 ]. A second six-year program launched in 1994 sought to gain systematic experience with interventions and policies designed to reduce socioeconomic inequalities in health. We report on the final phase of the second program: the development of a strategy to tackle health inequalities, and the production of a report containing recommendations for health policy making [ 7 ]. These recommendations were partly based on the results of the evaluation studies included in the second program. In addition, we will reflect on the way this report has influenced health policy and practice. The report deals with socioeconomic inequalities in health, defined as systematic differences in health status between people with higher and lower socioeconomic status, as indicated by educational level, occupational class, and/or income level. Like other European countries, the Netherlands has substantial inequalities in health between socioeconomic groups. Differences in life expectancy at birth between socioeconomic groups are in the order of 4 years, and differences in healthy life expectancy have recently been calculated to be a staggering 14 years [ 8 ]. Inequalities in health care utilization, on the other hand, are quite modest, not only in an absolute sense [ 9 ], but also in comparison with other European countries [ 10 ].
In addition to socioeconomic health inequalities there are other important variations in health as well, e.g. between genders, regions, ethnic groups, and other socio-demographic categories [ 11 ]. Some of these are interwoven with socioeconomic inequalities in health, but the two programs mentioned above have tried to separate out the socioeconomic dimension from the other dimensions, in order not to dilute attention across too wide an area.

Case study
The research and development program
The main focus of the program was on developing and evaluating interventions and policies, but a number of other activities (monitoring of health inequalities, a longitudinal explanatory study, research seminars, publications, a documentation centre) were undertaken as well. Table 2 lists the evaluation studies that were commissioned after two calls for proposals and assessment by peer review. All interventions were aimed at tackling well-known determinants of socio-economic inequalities in health, such as poverty, smoking, working conditions, and accessibility of health care. Evaluation studies started between 1997 and 1999. The majority had a quasi-experimental design and compared health outcomes (e.g. school absenteeism) or intermediate measures (e.g. folic acid use) between an experimental and a control group. Positive results were reported for seven interventions: an integrated program to prevent school children from starting smoking, teeth brushing at primary school, adapted working methods and equipment for brick-layers, rotation of tasks among dustmen, formation of local care networks, peer education for Turkish diabetics, and introduction of nurse practitioners for asthma/Chronic Obstructive Pulmonary Disease patients. The other evaluation studies either failed because of an inadequate evaluation design or produced negative results [ 12 ] (see Table 2).
Table 2. Intervention studies undertaken within the second national program on socioeconomic inequalities in health

Interventions targeting socioeconomic disadvantage
• Supplementary benefits to parents living in poverty, identified during preventive health screening of children (no evidence on effectiveness collected)
Interventions targeting health-related selection
• Counselling of secondary school children with frequent school absence due to illness (evaluation design failed)
Interventions targeting factors mediating the effect of socioeconomic disadvantage on health
• Tailored mass media campaign to promote periconceptional folic acid use (intervention did not reduce socioeconomic gap in folic acid use)
• Community-based intervention to improve health-related behavior in deprived neighborhoods (evaluation results will become available in 2002)
• Integrated program (including social skills teaching and monetary rewards) to prevent school children in lower general and vocational education from starting smoking (intervention reduced smoking initiation rate)
• Teeth brushing at primary schools (intervention eliminated socioeconomic gap in teeth brushing)
• Adapted working methods (raised brick-laying) and equipment (lifting machine) for brick-layers (intervention reduced physical workload and sickness absenteeism)
• Rotation of tasks (driving and minicontainer loading) among dustmen (intervention reduced physical workload and sickness absenteeism)
• Introduction of self-organising teams in various production organisations (evaluation design failed)
Interventions targeting accessibility and quality of health care services
• Formation of local care networks among general practitioners, housing corporation staff and police officers to prevent homelessness among chronic psychiatric patients (intervention reduced house evictions and forced admissions to psychiatric hospitals)
• Peer education for diabetic patients of Turkish origin (intervention improved glycaemic control and healthy behaviour, but only in women)
• Introduction of nurse practitioners for asthma/COPD patients to general practice in deprived areas (intervention increased treatment compliance and reduced exacerbations)

When the results of the evaluation studies became available, meetings were held in 2000 with scientific experts and representatives from policy and practice in six different areas (income, education, health promotion, working conditions, housing conditions, health care). During these meetings possible recommendations for new policies and interventions were tested and refined [ 13 ]. The input for the meetings included not only the results of the evaluation studies but also two additional papers. The first paper, drawn up by a scientist, gave an overview of effective interventions to reduce socioeconomic inequalities in health in that area. In the second paper, the implications of this overview for policy were analysed by an author with experience in that specific policy area (e.g. a former secretary of state for educational affairs and a former minister of social affairs). The meetings contributed to a better understanding of current policy initiatives, and of the major obstacles and promoting factors for a policy aimed at reducing inequalities in health.

The government advisory committee
Subsequently, the committee overseeing the program held a number of plenary meetings to develop a comprehensive strategy to reduce health inequalities.
Committee members were appointed by the Minister of Health, and they included former and active politicians of various political backgrounds, as well as a representative of the Ministry of Health and researchers. A conscious attempt was made to represent the whole (relatively narrow) political spectrum in the Netherlands. Members ranged from left (represented by the social-democrat mayor of the fourth largest city in the country) to right (represented by a former chairman of, and current House of Lords member for, the conservative party, who was later succeeded by another House of Lords member for the same party), and the committee was chaired by a former Christian-democrat Minister of Social Affairs. Researchers had an important influence on the whole process: JM was secretary of the committee and KS acted as co-ordinator of the program, and both were involved in writing draft versions of the final report. The committee reported directly to the Minister of Health.

The rationale for the strategy
The committee started from the assumption that existing inequalities in health are at least partly unjust and that the government is responsible for achieving a reduction of these health differences. This assumption was based on the argument that health should be seen as a precondition for individuals being able to structure their own lives, as far as possible, according to their own ideas. Those health differences that are the consequence of an unequal distribution of living conditions over which individuals have no control were thus seen as health inequities, to be tackled by the government. It was argued that this would require a comprehensive strategy, given the persistent and widespread character of socio-economic inequalities in health. The committee wanted its strategy for reducing health inequalities to be based on sound evidence. Ideally, factors targeted by the strategy should be known to contribute to the explanation of health inequalities, and interventions and policies should be known to diminish the exposure of lower socioeconomic groups to these factors. While the first requirement could be met relatively easily (and documentation was provided, with references, in the final report of the committee), the second requirement was more difficult to meet. Although the program produced evidence on the effectiveness of interventions and policies and showed some positive results, this left important gaps in the knowledge base, both in terms of coverage of the various policy options and in terms of strength of evidence. This problem was also encountered in other countries [ 14 ]. The committee considered that one cannot expect further evidence to become available unless large-scale measures to reduce inequalities in health are taken. It therefore decided to recommend a combination of implementation of 'promising' interventions with continued evaluation efforts. For each of the interventions and policies that were recommended for implementation, it carefully listed the available evidence, plus references. In addition, the committee paid attention to the political feasibility of possible policy recommendations. This aspect was discussed during the plenary meetings, in the light of the (political) experience of the committee members as well as the outcome of the working conferences mentioned above.
Targets
The committee decided to base its strategy on a number of quantitative targets, because these can aid in plotting a clear policy course and can function as milestones for interim assessments of the strategy. It took the World Health Organization target as its starting point [ 15 ], and reformulated it for the Netherlands as: "By the year 2020, the difference in healthy life expectancy between people with a low and people with a high socioeconomic status should be reduced from 12 to 9 years, due to a (stronger) increase in healthy life expectancy in the lowest socioeconomic groups." In order to attain such an ambitious goal, major efforts are required, if only because during the last decades inequalities in health in the Netherlands have increased rather than decreased [ 16 ]. Although it was considered unwise to give up on the ambition laid down in this 'inspirational' target, the strategy focused on a set of 'intermediate' targets that seem feasible today or in the near future. These targets were chosen to represent each of the main entry-points for reducing socioeconomic inequalities in health, and were limited to intermediate outcomes for which quantitative data for the Netherlands are currently available.

Package of policies and interventions
Table 3 lists the interventions and policies constituting the strategy recommended by the committee. The strategy covers all four entry-points and spans the entire range between 'upstream' measures targeting socioeconomic disadvantage and 'downstream' measures targeting the accessibility and quality of health care services. Where current policies were expected to contribute to reducing health inequalities (education policies, income policies, work disability benefit schemes, health care financing schemes), the committee explicitly recommended continuation. This is by no means trivial, because none of these achievements of the past can be considered safe for the future. For example, the Dutch government is considering a reform of the health care financing system that could lead to reduced coverage of health care for those insured under the current public scheme, which would jeopardize equal financial accessibility.

Table 3 Recommended interventions and policy measures

Interventions and policies targeting socioeconomic disadvantage
• Continuation of policies that promote educational achievement of children from lower socioeconomic families.
• Prevention of an increase of income inequalities through adequate tax and social security policies.
• Intensification of anti-poverty policies, particularly policies that relieve long-term poverty through special benefit schemes and assistance with finding paid employment.
• Further development and implementation of special benefit schemes for families whose financial situation threatens the health of their children.

Interventions and policies targeting health-related selection
• Maintaining benefit levels for long-term work disability, particularly for those who are fully work disabled and those who are partly work disabled due to occupational health problems.
• Adaptation of working conditions for the chronically ill and disabled in order to increase their work participation.
• Health interventions among long-term recipients of social assistance benefits in order to remove barriers to finding paid employment.
• Further development and implementation of counselling schemes for school pupils with regular or long-term absenteeism because of health problems.
Interventions and policies targeting factors mediating the effect of socioeconomic disadvantage on health
• Adapting health promotion programs to the needs of lower socioeconomic groups, particularly by focusing on environmental measures including the introduction of free fruit at primary schools and an increase of the excise tax on tobacco.
• Implementation of school health promotion programs that target health-related behaviour (particularly smoking) among children from lower socioeconomic families.
• Introduction of health promotion efforts into urban regeneration programs.
• Implementation of technical and organisational measures to reduce physical workload in low-level occupations.

Interventions and policies targeting accessibility and quality of health care services
• Maintaining good financial accessibility of health care for people from lower socioeconomic groups.
• Relieving the shortage of general practitioners in disadvantaged areas.
• Reinforcing primary health care in disadvantaged areas by employing more practice assistants, nurse practitioners and peer educators, e.g. for implementing cardiovascular disease prevention programs and better care for chronically ill persons.
• Implementation of local care networks aiming at the prevention of homelessness and other social problems among chronic psychiatric patients.

In a number of other areas, the committee recommended intensified or new policies. These recommendations were partly based on the reported positive results of intervention studies. This applies to the recommendations relating to school health promotion programs, technical and organizational measures to reduce physical workload, reinforcement of primary care in disadvantaged areas by employing practice nurses and peer educators, and local care networks to prevent social problems among chronic psychiatric patients. The results of some of the other intervention studies led to recommendations for further development of those interventions, as in the case of special benefit schemes for families living in poverty and counselling schemes for school absenteeism. Most of the other recommendations, however, are primarily based on an understanding of the factors that have been shown to contribute to health inequalities, and of the best way to deliver interventions targeting these factors. The committee did not attempt to estimate the costs of the recommended interventions and policies.

Implementation
As experience has taught that the implementation of effective interventions cannot be taken for granted, the committee advised that a steering group be formed to drive and monitor the implementation process. On the one hand, this group should function as a highly visible focal point at which the expertise available in the Netherlands is made accessible to all relevant policy areas. On the other hand, the steering group should be able to act on its own initiative to capture and retain attention for socioeconomic inequalities in health and to promote the implementation of policy proposals. Given these two functions, the committee advised including experts as well as representatives from the main relevant policy areas in the steering group.

Research and development
Given the fact that research has not yet fully disclosed the origins of socioeconomic inequalities in health, the committee considered continuation of explanatory research to be vital, because it may lead to new entry-points for intervention.
The same applies to further development of effective interventions and policies. The committee therefore recommended evaluation of all recommended interventions and policies during and after their implementation.

Presentation of the report
The committee published its main report in March 2001 [ 7 ]. The report was launched at a press conference, and presented to both the Minister of Health and the minister for the 'Major Cities policy'. It received wide media coverage. All major newspapers wrote extensively about the findings and recommendations, and these were also presented and discussed in various national television and radio programmes. Some criticism was heard as well. This included the argument that any (shared) responsibility on the part of the government for reducing socioeconomic inequalities in health is at odds with the social trend towards stimulating individuals to take responsibility for themselves. This was discussed in the context of health-related behaviour (smoking, nutritional pattern etc.) in particular. A closing conference took place in October 2001. During that conference, the results of the evaluation studies as well as the proposed policy strategy were presented to a broad public, and reflected upon by, among others, Sir Donald Acheson from the UK. In addition, policy implications were discussed. Participants included researchers, policy makers and representatives from practice, not only from the public health and health care field, but also from other policy areas (social security, working conditions etc.).

Follow-up
The official cabinet reaction to the recommendations, presented to parliament in November 2001, was positive, but further elaboration of the recommendations as well as decision-making was deferred to the next cabinet [ 17 ]. A new cabinet was formed after turbulent elections in spring 2002 but fell within three months, without making decisions on a strategy to reduce socioeconomic inequalities in health. New elections were held in January 2003. The delay in political decision-making does not seem to have hindered the implementation of specific interventions that were evaluated within the programme. So far, at least a few of the interventions that have been shown to be effective have been implemented on a larger scale. These include the integrated programme to prevent school children from starting smoking, and the local care networks for chronic psychiatric patients.

Discussion
While many countries, including the UK, Sweden and Finland, have had national research efforts in the field of socioeconomic inequalities in health during the second half of the 1990s, the main distinguishing feature of the Dutch program is its emphasis on commissioning evaluations of interventions. Although this was done in a systematic way, using an explicit conceptual and methodological framework, the program also had its obvious limitations. It had a modest budget (totalling 3 million Euro over a period of 6 years) and funded no more than 12 rather small-scale intervention studies targeting relatively easily modifiable factors. The latter is not only due to the small budget of the program, but also to strict methodological requirements, which in practice made it nearly impossible to study the effectiveness of broader policy measures [ 18 ].
In hindsight, we consider this the most important limitation of the program: the lack of studies on the possible impact of broader policy measures, mainly as a result of the strict methodological criteria that were applied in the selection of research proposals. Even for the more specific and narrowly defined interventions selected for the program, some of the evaluation studies failed because the design could not be implemented. In the end, therefore, the contribution of the intervention studies to strategy development was modest. The unique elements of the Dutch approach should not distract from the fact that the Dutch experience received important inputs from abroad. Its start was a late response to the British Black Report and is directly related to the efforts of the European Office of the World Health Organization to put health equity on national policy agendas [ 15 ]. During the program there were close contacts between members of the committee and researchers and policy-makers in other European countries, through the European Network for Interventions and Policies to Reduce Inequalities in Health [ 19 ], so that experiences in other countries could be taken into account. The report of the Independent Inquiry in Britain [ 20 ] acted as a rich source of ideas, while a recent Swedish report on tackling inequalities in health [ 21 ] strengthened confidence in the usefulness of target setting for reducing inequalities in health. The Dutch approach reflects the input of both researchers and policy-makers, although the balance between the two has oscillated over time. The first signals that health inequalities should be addressed came from researchers, but were picked up by policy-makers within the Ministry of Health in the mid-1980s who were then looking for opportunities to strengthen health policy (as opposed to health care policy) in the Netherlands. This small group of bureaucrats succeeded in launching and following through the first research program, but left the Ministry or changed posts before the program came to an end. Partly due to continuous personnel changes in the Ministry, the intensity of the exchanges between researchers and policy-makers gradually diminished during the second program. When the final report was published, reactions from within the Ministry were rather cool, although the Minister, who had taken a personal interest in the matter, responded very favourably. At this stage, however, it seems that without a continuing "push" from the research side the bureaucrats could easily lose interest altogether, particularly now that there are rapid changes of cabinet. A major obstacle to a comprehensive package of policy measures seems to be the relatively weak position of the Ministry of Health as compared to other policy areas. It is obvious that a substantial reduction of health inequalities can be achieved only by involving other policy areas in addition to (preventive and curative) health care. This starting point seems to contrast with the ideas of the ministries in other policy areas, which seem to consider this issue the responsibility of the Ministry of Health in particular. So far, the Ministry of Health does not seem to have had much success in convincing other policy areas of the importance of contributing to reducing inequalities in health. The lack of success in mobilising other policy areas at the national level is probably partly related to the fact that the issue of inequalities in health is perceived as rather abstract by these other areas.
This probably requires the issue of inequalities in health to be "re-phrased" for each specific policy area, in terms that fit within its own ideas. Housing corporations, for example, do not consider themselves responsible for tackling health inequalities, but they do feel responsible for high-quality living conditions, which might in turn contribute to a better health status of people in lower socioeconomic groups. Paradoxically, an approach in which the issue of inequalities in health is cut into small pieces requires central steering. This forms the background to the committee's plea for a steering group. Remarkable progress has been made, not only in terms of knowledge production but also in terms of increased confidence among policy-makers and practitioners to take action to reduce inequalities in health. Many health agencies in the Netherlands are working to reduce socioeconomic inequalities in health. This is illustrated by the fact that the 'National Contract on Public Health', concluded in 2001 between many national and local agencies in the field of public health, has selected the reduction of socioeconomic inequalities in health as its first priority. Many local health agencies have already implemented some of the interventions discussed in this paper. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC529300.xml
544852 | The value of the qualitative method for adaptation of a disease-specific quality of life assessment instrument: the case of the Rheumatoid Arthritis Quality of Life Scale (RAQoL) in Estonia | Background Due to differences in their current socio-economic situations and historically shaped values, different societies have their own concepts of high-quality life. This diversity of concepts interferes with quality of life (QoL) research in the health sciences. Before deciding to apply a QoL assessment tool designed in and for another society, a researcher should answer the question: how will this instrument work under the specific circumstances of my research? Our study represents an example of the utilization of qualitative research methods to investigate the appropriateness of the Rheumatoid Arthritis Quality of Life Scale (RAQoL) for the assessment of QoL in Estonian patients. Methods Semi-structured interviews were conducted with rheumatoid arthritis (RA) patients of Tartu University Hospital and were analyzed using the principles of grounded theory. Results We described the significance of the questionnaire's items for our patients and also identified topics that were important for the QoL of Estonian RA patients but that were not assessed by the RAQoL. We concluded that the RAQoL can be successfully adapted for Estonia; the aspects of QoL not captured by the questionnaire but revealed during our study should be taken into account in future research. Conclusions Our results show that qualitative research can successfully be used for pre-adaptation assessment of a QoL instrument's appropriateness. | Background With this article we introduce our experience of how qualitative research can be utilized in the process of adapting a quality of life (QoL) assessment instrument. We will argue for the unique benefits of the qualitative method in assuring the validity of an adapted measure. QoL cannot be treated as something uniform and stable. One reason for its variability in time and space is the social nature of "quality". Different societies, with their current socio-economic situations, values and traditions, carry their own, to a certain degree dissimilar, understandings of high-quality life. Therefore QoL can be seen as a social construct. For further discussion on QoL domains see Schalock, 2004 [ 1 ]. Inclusion of QoL assessment in a set of health outcome measures is, given the achievements in improving the survival of chronically ill patients, well justified. But the social complexity of the construct means that its evaluation is also complex compared to traditional outcome measures, e.g. symptoms or the results of lab tests. Restricting QoL assessment to condition-bound groups (disease-specific QoL) permits potential influences on everyday life to be homogenized; problems with the various individual significances of these condition-characteristic impacts remain. Different QoL concepts in the health sciences try to improve the generalizability of assessment. Functionalistically oriented health-related QoL concentrates on restrictions in the performance of everyday activities [ 2 ]. Although this approach allows presumably quite characteristic impacts of a health condition or disease to be described and compared, it omits several, mostly social and cultural, factors. Without considering these influences the detailed specification of QoL remains incomplete.
The needs-based QoL model proceeds from the motivationalists' idea of universal human needs and defines QoL as the level of satisfaction of these needs [ 3 ]. Although universal in theory, this approach still has its imperfections in practical assessment: the ways in which even universal needs are fulfilled depend on the possibilities offered by society, and measurement of the satisfaction of these needs is accessible only through rating the acts that fulfil them; thus the assessment of QoL cannot be liberated from its time and space dimensions. Before deciding to use an existing instrument for QoL assessment, a researcher should answer the question: how will this questionnaire work in the particular circumstances of the research? Even an instrument that has performed excellently in the country of origin can lose some validity when applied in a different social context. There are three categories of topics that should be recognized for every QoL instrument: first, those important for the patients being investigated, and already assessed by the instrument; second, those unimportant (or not so important) for the patients being investigated, but still assessed by the instrument; and third, those important for the patients being investigated, but not assessed by the instrument. It is evident that an appropriate questionnaire consists mostly of the first category of items, that the content of the second category is minimized, and that all topics important for patients are included. The items belonging to the first two categories can be determined during a standard adaptation process. The drawback of this approach is the amount of adaptation work that usually has to be done before evidence about the merits of the questionnaire is obtained. Delimiting the third category requires a deeper insight into the society-specific determinants of QoL. Because Estonia is small (population about 1.4 million), studies involving only our patients may lack statistical power; some restrictions concerning scientific capacity and funding should also be admitted. Therefore a promising choice for Estonian clinical and health sciences can be seen in cooperative research projects. Reasoned selection, judicious adaptation and application of internationally approved assessment instruments could be crucial for success. As a part of the Soviet Union, Estonia stayed isolated from Europe for years and quite strong ideological pressure was brought to bear on its citizens. After re-establishing independence in the 1990s, Estonian society has undergone abrupt ideological and economic changes. A researcher working in the field of QoL assessment should take into account the suspected impact of these factors on the understanding of life quality among the local population. Our work offers an example of the application of qualitative interviews for exploring the essence of quality of everyday life for Estonian rheumatoid arthritis (RA) patients. We will illustrate the evaluation of the three topic categories using the example of an RA-specific QoL assessment scale.

The questionnaire of interest
The Rheumatoid Arthritis Quality of Life Scale (RAQoL) was developed in the 1990s as an outcome measure to assess the impact of RA and its treatment on QoL. The content was derived from interviews with 50 RA patients conducted simultaneously in Great Britain and the Netherlands [ 4 , 5 ]. The theoretical basis for the RAQoL is the needs-based model of QoL. The basic list of needs the model considers to be crucial to QoL had been published earlier [ 3 ].
List of needs
• food, drink, sleep, activity, sex, pain avoidance
• warmth, shelter, security, safety, freedom from fear, stability
• affection, love, physical contact, intimacy, attachment, communication, sharing experiences, sharing goals, affiliation
• curiosity, exploration, play, stimulation, enjoyment, creativity, meaningfulness
• identity, status, recognition, approval, appreciation, usefulness to others, respect, competence, self esteem, mastery, achievement, power, independence, freedom
• time structure
• self actualization

As an RA-specific instrument, the RAQoL was designed to assess the fulfillment of only those needs whose importance for RA patients emerged during the interviews. The original wording from the interviews was retained as much as possible. The final RAQoL is a 30-item measure in which each item is a simple statement to which patients indicate whether or not it is true for them at that moment. The following examples of RAQoL items demonstrate the wide range of everyday areas covered by the instrument: self-care, different indoor and outdoor activities, emotions and conditions, and interpersonal relations.

Examples of RAQoL items
• Item 18. I have problems taking a bath/shower
• Item 7. Jobs about the house take me a long time
• Item 6. I find it difficult to walk to the shops
• Item 16. I often get depressed
• Item 12. I find it hard to concentrate
• Item 13. Sometimes I just want to be left alone
• Item 29. I avoid physical contact

An affirmed item indicates an adverse quality of life. Each item on the RAQoL is scored '1' for an affirmed statement or '0' for a disaffirmed statement. All item scores are summed to form a total score ranging from 0 (good QoL) to 30 (poor QoL); a minimal illustrative sketch of this scoring rule is given below. The original RAQoL exists in two versions – UK English and Dutch; currently seven language adaptations of the RAQoL are available for use. Excellent test-retest reliability, internal consistency and construct validity of the original instrument and its versions have been demonstrated [ 6 - 10 ]. The RAQoL provides a valuable tool for assessing the impact of RA on QoL in international clinical trials and other research studies. For a full list of the RAQoL items see de Jong et al. 1997 [ 5 ].

Goal and research questions
The lack of an instrument for the systematic assessment of the QoL of RA patients in Estonia raised the question of the feasibility of adapting the RAQoL. Our task was to assess the value of applying the qualitative method to describe the appropriateness of the questionnaire before the adaptation. For studying the three above-described topic categories in connection with the RAQoL, we formulated the following two research questions:
• Are the RAQoL's topics important for our patients?
• What else do our patients find to be significant in connection with their everyday life quality?

Method
The choice of method and structure of interview
We decided to apply thematic analysis following the principles of grounded theory. Our choice was determined by the second research question: to answer it, the analysis had to be guided by the data itself, to enable new motifs and theories to emerge.
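Before turning to the details of the interview method, the binary scoring rule quoted above can be made concrete. The sketch below is our own illustration, not the RAQoL developers' scoring software; the function name and the representation of responses as booleans keyed by item number are assumptions made for the example.

```python
from typing import Dict

def raqol_total(responses: Dict[int, bool]) -> int:
    """Total a completed RAQoL form.

    `responses` maps each item number (1-30) to True if the patient
    affirmed the statement ("true for me at the moment"), False
    otherwise. An affirmed item scores 1 and a disaffirmed item 0,
    so the total ranges from 0 (good QoL) to 30 (poor QoL).
    """
    if set(responses) != set(range(1, 31)):
        raise ValueError("all 30 items must be answered")
    return sum(responses.values())  # True counts as 1, False as 0

# A hypothetical patient who affirms only items 6, 7 and 18:
example = {item: item in (6, 7, 18) for item in range(1, 31)}
print(raqol_total(example))  # -> 3, towards the 'good QoL' end of the scale
```

Note that rejecting incomplete forms is a design choice of this sketch; the source does not describe how missing responses are handled.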
Grounded theory, evolved by Barney G Glaser and Anselm L Strauss in the 1960s [ 11 ] and developed later by them and other scholars [ 12 , 13 ], offers a systematic approach for analyzing qualitative data using both inductive (open and axial coding, generation of core categories) and deductive (selective coding and theoretical/selective sampling) approaches in data processing [ 12 ]. The prerequisite for this inductive-deductive data handling is the simultaneous running of the processes of data collection, coding and analysis [ 13 ]. For data collection we decided to apply individual interviews, which we considered to be the method of choice for investigating RA patients' perceptions of the quality of their everyday lives. The alternative focus-group method was rejected because of the possible intimacy of some topics, which might have been difficult to discuss openly in a group setting. Considering the formulated research questions, we agreed that a semi-structured interview format should be preferred. We decided not to acquaint our research subjects with the original version of the questionnaire. We thought that familiarization with the instrument might restrain the respondents from disclosing their everyday life problems in full, especially those problems not captured by the RAQoL. The RAQoL items were arranged into groups according to the dimensions of everyday life they reflect. Four groups emerged (note some overlap; see the code sketch below): self-care and indoor activities (items 1, 3, 5, 7, 8, 10, 11, 18, 21, 26, 30), outdoor activities (items 2, 4, 6, 10, 14, 17, 20, 21, 25), emotions and conditions (items 9, 12, 13, 16, 19, 21, 22, 23, 24, 28) and relations (items 2, 4, 13, 15, 17, 20, 22, 25, 27, 29). We formed four open-ended interview questions to cover these dimensions. The questions were intended to introduce informal conversation during which different QoL aspects connected with everyday life could be revealed.
1. How does your disease influence your coping with everyday indoor activities, including self-care?
2. How does your disease influence your coping with outdoor activities?
3. What emotions can you describe in connection with your disease?
4. How has your disease influenced your relations with other people?
We prepared three to four additional secondary questions for each of the four interview questions, for cases in which some guidance of the interviewee was necessary. A fifth interview question was added in order to give the interviewees an opportunity to speak freely about everyday life problems not assessed by the RAQoL. A sixth question was intended to elucidate the hierarchical importance of everyday problems/restrictions for our patients.
5. What other impacts does your disease have on your life?
6. Which impacts of the disease on your life do you consider to be the most important?
We assumed the structure of the interview – from more detailed to general – to be appropriate for our patients; for most of them it would be their first time being interviewed.

The respondents
We were determined to include patients who met the ARA 1987 diagnostic criteria for RA [ 14 ]. We decided to exclude patients with a concurrent disease or health condition which, according to the available medical documentation and the opinion of their physician, could be considered to have a significant impact on QoL.
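As a brief aside before describing recruitment, the grouping of RAQoL items into the four dimensions listed above can be written down directly as a small data structure. The sketch below is our own illustration (the variable names are ours, not part of the RAQoL or the study protocol); it derives the items that appear in more than one dimension, i.e. the overlap noted above.

```python
from collections import Counter

# Item numbers per dimension, exactly as listed in the text.
DIMENSIONS = {
    "self-care and indoor activities": {1, 3, 5, 7, 8, 10, 11, 18, 21, 26, 30},
    "outdoor activities": {2, 4, 6, 10, 14, 17, 20, 21, 25},
    "emotions and conditions": {9, 12, 13, 16, 19, 21, 22, 23, 24, 28},
    "relations": {2, 4, 13, 15, 17, 20, 22, 25, 27, 29},
}

# Items assigned to more than one dimension ("note some overlap").
counts = Counter(item for items in DIMENSIONS.values() for item in items)
overlapping = sorted(item for item, n in counts.items() if n > 1)
print(overlapping)  # -> [2, 4, 10, 13, 17, 20, 21, 22, 25]

# Sanity check: together the four groups cover all 30 RAQoL items.
assert set().union(*DIMENSIONS.values()) == set(range(1, 31))
```

Each of the four dimensions corresponds one-to-one to one of the four open-ended interview questions quoted above.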
By choosing interviewees from among the inpatients of the Rheumatology department of Tartu University Hospital, one of the two specialized rheumatology centers in Estonia, we had access to sufficient medical data to apply the established inclusion and exclusion criteria. In Estonian rheumatology, inpatient care is the dominant approach and it often also comprises some traditionally outpatient procedures. Recruiting inpatients gave us the opportunity to sample the whole spectrum of RA patients in circumstances where participation in the research would not interfere with their everyday routine. Also, conducting interviews under hospital conditions allowed us to create similar settings, free of major distractions, for each conversation. Our sampling strategy derived from the wish to collect data that were as multifaceted as possible. We applied the principles of theoretical sampling, whereby information gathered during previous interviews determined the selection of subsequent respondents in order to obtain views from different positions. The final list of characteristics that we considered important for guaranteeing the versatility of the sample included: age, gender, duration of RA, severity of RA (assessed by functional class and radiological stage), education, working status, marital status, members of the family unit, and living conditions. The characteristics of the patients are presented in Table 1.

Table 1 Patients' characteristics. Fields per row: ID; gender; age; duration of RA (years); radiographic stage; functional class; education; working status; marital status; living with; living conditions. For radiographic stage and functional class estimation, data were collected from medical records; the Larsen-Dale [28] and Steinbrocker [29] classifications were used respectively.
1. Male; 30; 1; II; II; basic; not working because of the condition; separated; parents; flat in village
2. Male; 58; 12; III; II; vocational; not working because of the condition; married; wife; flat in town
3. Female; 38; 15; IV; III; vocational; not working because of the condition; married; husband and daughter (15 years old); house in village
4. Female; 54; 20; III; II; vocational; working; married; husband; house in village
5. Female; 66; 10; V; II; secondary; retired; married; husband, grown-up daughter, grandson (11); flat in town
6. Female; 49; 4; IV; III; higher; not working because of the condition; single; female flat mate; flat in town
7. Female; 47; 13; III; II; higher; working half-time; separated; son (13); flat in town
8. Male; 54; 10; IV; III; secondary; working; married; wife; farm
9. Female; 64; 20; IV; II; vocational; not working because of the condition; widowed; grown-up daughter; flat in town
10. Female; 69; 50; V; III; higher; retired; separated; alone; flat in town

The process of data collection
The interviewing took place from February to June 2002. We consulted medical records to determine inclusion and exclusion criteria, patients' demographics and disease characteristics. No patients were contacted earlier than on the third day in hospital, in order to allow them some time for adjustment. A day before the planned interview, the goal and expected course of the interview were explained to the patient, and the patient was asked to agree to be interviewed. One of the patients we contacted refused to participate on account of being due to leave hospital the next afternoon. All recruited patients gave their informed consent. Interviewing was conducted in a private room in the rheumatology department. All interviews were conducted by one researcher (MT) and were audio-taped.
Eight interviews were carried out in Estonian; two respondents (3 and 5) were interviewed in Russian, the respondents' native language. Interviews lasted from one and a half to three hours. For most of our respondents it was their first chance to discuss their everyday life problems with somebody outside the family. Still, all of the interviewees were very willing to share their experiences after overcoming some diffidence at the beginning of the interview, and talked openly about their lives with the disease. For the interviewer, the knowledge growing with every subsequent interview allowed her to move away from strict adherence to the interview questions towards more informal conversation and, if necessary, the examination of some topics in depth. Every interview was transcribed word-for-word and discussed before recruiting the next participant. The interviews conducted in Russian were translated from the tape by the interviewer and transcribed in Estonian. The tenth interview was exceptional. This patient was not hospitalized in the rheumatology department at that time, but we decided to invite her to our study because of the exceptionally long duration of RA recorded in her medical documentation – 50 years. The patient was contacted by phone and her agreement to participate was obtained. We met at the patient's home and she was asked to tell her life story with stress on everyday problems. Thus we got an exciting narrative history of Estonian rheumatology from a patient's perspective and collected valuable information for our current research. No new topics relevant to our research questions came forth after the seventh interview. We therefore decided to stop at the tenth. This decision agrees with the theoretical sampling idea that one should stop recruiting respondents when the researchers decide that the study has reached saturation [ 15 , 16 ]. The intensive discussion we carried out simultaneously with the data collection gave us the opportunity to determine the stage at which the inflow of new data no longer added any essential information to our study.

Coding and analyzing the data
As a first step we read and discussed the interview transcripts extensively. Tentative code families were sketched out as a notional framework for subsequent coding. Open coding of the transcripts was performed independently by two of us, MT and JS. Codes adhering to the previously identified code families were ascribed to expressions composed mostly of one or two sentences in order to distinguish their leading ideas; to some expressions several codes were attached. Due to the different structure and extent of the tenth interview, selective coding was applied to it and only those parts relevant to our research questions were coded. There were no disagreements between the researchers at the level of code families; some inter-coder discrepancy appeared in the assignment of particular codes. Still, full consensus on open codes was reached in discussion. We analyzed the differences in coding and concluded that they could be ascribed to the coders' different – medical and sociological – backgrounds. We illustrate our conclusion with the example of coding a patient's expression: can't even go help my sister in the country with potato planting, a bit sad (2). It was coded as 'inability to offer physical help' by MT and 'alterations in traditional family relations' by JS; the coders agreed on the 'relations with close ones' code family.
During discussion the consensus code 'inability to perform in family roles' was established, which introduced a deeper exploration of the topic of role performance. We agreed that the coding discrepancies were a benefit rather than a drawback of our research process. They allowed us to highlight and subsequently integrate different aspects of the applied codes on the boundaries of the marked tentative code families. Hence we decided not to perform a formal inter-coder reliability analysis, due to its diminished informative potential for this particular research. Axial codes were created by MT through the grouping and condensing of open codes. The axial coding was discussed and approved by all researchers. Side-by-side examination of related axial codes from different interviews formed the basis for creating core categories – the composition of a syllabus of motifs that emerged from the interviews. Core categories were formed together by MT and JS, and were discussed and acknowledged by the whole research group. In the analysis we used selective sampling of core categories to meet our two research objectives. First, to assess the importance of the RAQoL topics for our patients, we compared every single item of the RAQoL with the coded data from our interviews. Our assessment of importance emerged from discussion and took account of the closeness of the meanings of the item and the relevant expressions of the interviewee, the frequency of their occurrence, and their significance for the respondents. Second, to describe the QoL topics that were significant for our patients but were not evaluated by the questionnaire, we compared the coded data without a counterpart among the items with the list of needs offered by the needs-based QoL model.

Results
We will present our results as answers to the research questions. The descriptions of two of the three topic categories are included in the first answer, and the description of the third category in the second.

The importance of the RAQoL topics for our patients
Three groups of items can be highlighted: items whose importance was demonstrated by the data; items that could conditionally be considered important; and items whose significance could not be shown on the basis of the interview data.

Important items
Most topics assessed by the instrument were essential for our interviewed patients. For 22 of the 30 items, the interview data provided sufficient evidence to consider them applicable for the evaluation of Estonian RA patients' QoL. Examples of patients' utterances supporting the significance of the content, and also the appropriateness of the format, of each of these items can be given from two or more interviews. In some cases a remarkable diversity of utterances connected with one particular item was noted. We will illustrate this finding using item 17 as an example: I'm unable to join in activities with my family or friends. The following responses represent seven different reasons to agree with the proposition included in the item. We have given word-for-word translations of the quotes into English; the IDs of the patients are given in brackets.
Walking difficulties
• physically difficult to walk anywhere (1)
• a whole fuss with moving, don't want to torture myself (2)

Financial restrictions
• I used to ride the bus a lot before, now it's so expensive, I can't afford to visit anyone very often (9)

Difficulties connected with forced immobility
• Whenever you're sitting somewhere, are somewhere, it's hard to stay in one position all of the time, you have to make yourself move, go somewhere, or whatever (1)
• when your feet are ill and you sit for a very long time, then you can't even get up and move (4)

Changed quality of participation
• what's the use of going if you're no good anyway (2)
• of course I didn't go anywhere, only peeped from the car window (5)
• not going to the pub, don't know what to do there, can't handle dancing, and drinking doesn't work out either, no point in just sitting there the whole night (8)

Being ashamed of themselves
• when my joints were so tender and painful that I had to talk about my disease all the time then I definitely didn't look for company and I couldn't eat anyhow and I used a spoon for eating food that you ought to eat with a fork (9)

Unwillingness to create problems
• as I cause such a situation that my hosts have to help and watch me all of the time, I'd rather not go (6)

Worries about coping
• conditions of a home I go to, how cold it might be, if there's only cold water, is the toilet outside (6)

Difficulties related to bathing were mentioned in eight interviews and therefore item number 18: I have problems taking a bath/shower, was included in the group of appropriate items. Still, the interviews also offered some hints that the value of the item as a measure of the ability to carry out body care could be diminished by the traditional Estonian sauna culture, which is especially popular in rural areas (more than one third of the Estonian population is rural).
• I don't care much for the bath, mainly I have let my kids take me to the country and have gone to the sauna, that's much more like it (2)
• sometimes I go to the sauna, we have a sauna in the cottage in the countryside, it's no big deal to wash myself there, in the summer at least; a sauna might really be more convenient than this bath; there's just no hassle with getting up (7)
• I can manage washing myself in the sauna but I can't get up from the bath on my own (8)

Items that could conditionally be considered important
Motifs related to four of the remaining eight items emerged a number of times from the interviews. However, before adding these items to the list of appropriate ones, certain aspects should be taken into consideration. Two respondents (2 and 5) burst into tears during the interview when looking back on their lives with the disease, which verified the significance of item number 19: I sometimes have a good cry because of my condition. A third interviewee spontaneously talked about crying, describing it as something bright and relieving. It can be presumed that in some cases crying may be interpreted as a way of coping, which is not unambiguously related to the level of QoL.
• if you cry, you should cry thoroughly and then you'll feel better, but if you keep something inside you it'll start eating at you, if you get a chance to have a good cry it'll make you feel good /.../ crying is self-purification, later you feel light; that's a feeling a healthy person doesn't get (9)
A single word or expression matching the English 'frustration' cannot easily be found in Estonian.
The foreign word 'frustratsioon' has been introduced into Estonian quite recently but remains unknown to the majority of lay people. Although there are a number of expressions in the interview data corresponding to item number 9: I often get frustrated, the formulation of an Estonian version of the item suitable for embracing them all will be complicated.
• this is the feeling that you just are, the disease has already got a hold, the sequence of events just keeps going on and on (2)
• feelings of injustice that someone else is well but I'm not (5)
• it depresses me, and what the mind has built up is ruined by some moment and then the emotional breakdown comes again (6)
• often my thoughts run ahead of me and I'm feeling rested and would like to do something, but when I get down to it, then that's it, I end up hindering the work and not doing anything myself (8)
• I can't forgive my disease for ruining my life structure that had been carefully and arduously built over a long time (6)
In four interviews the insecurity connected with the disease's progression and its unpredictability was one of the main topics. In Estonia's rapidly changing society, securing one's future cannot be an easy task, and disabled people do not have much of a chance of success. It is therefore to be expected that patients liken the capability to determine their future to the ability to control their disease, and item number 28: I feel that I'm unable to control my condition, can be considered important. But again, difficulties in the formulation of the Estonian wording of the item will arise. Though quite common in everyday spoken Estonian, the equivalent of the expression 'to control a condition' was never used by the interviewees when talking about their concerns.
• afraid of planning, you never know when it strikes back again (1)
• not yet [unable to cope] but it might come in the future; I don't know what will happen next year /.../ you're afraid that if you really get so poorly that you can't take care of yourself who will take care of you (4)
• I still try to be like a human being but don't know how long I can manage (5)
• there's really nothing for granted in this world of course, but I am preparing a back-up solution in case things get worse (8)
No patients reported being disturbed by continuously thinking about the disease. Three interviewees with a disease duration of over 10 years talked about not thinking of their condition as a positive phenomenon connected with adaptation to the disease. We concluded that item number 23: My condition is always on my mind, can be treated as significant, although our interviews revealed no evidence that the problem is recognized when present.
• I don't think that I'm an ill person at the moment, just when these hands are painful or I just can't cope with everything; I don't think that I'm ill, that I'm so miserable, I don't think about it (4)
• I have to change and re-adjust my basic values, but I don't think about it all of the time, maybe I have adjusted them subconsciously /.../ I then, subconsciously, not thinking about it, eat something softer or don't go biting on a big apple if my jaw joint is painful (7)
• I don't even think about my disease at this point, it's like a husband now, day and night, it's there in everything I do and I know that as long as I live I will have it and I can't get rid of it and I don't make it a problem anymore (9)

Items whose significance could not be shown
To this group we assigned four items.
In one interview, the impact of pain on attention was described. We did not find this evidence sufficient for designating item number 12: I find it hard to concentrate, as significant. In our opinion, these expressions describe the switching of attention to another stimulus and do not refer to concentration difficulties. One reason why the interview data did not support the importance of this item may be the lack of currently active intellectual workers in our sample.
• You can't think about anything but pain; crossing the road you might get hit because you're only thinking about the pain and forget that you have to keep an eye on the road (6)
No data corresponding to item number 1: I have to go to bed earlier than I would like, emerged from the interviews. Because tiredness is a characteristic feature of RA, one explanation for the inability of our data to show this item's importance is the paucity of absorbing nighttime activities, especially in Estonian rural areas. Item number 24: I often get angry with myself, had no matches in our interviews. It is difficult to find any obvious explanation for this particularity, but we can suggest that in some cases anger was rechanneled against the medical system – a phenomenon that we will discuss later. Our data also failed to demonstrate the importance of item number 29: I avoid physical contact. We believe that the intimacy connected with this topic could be the reason our respondents avoided discussing it openly. If so, due to the greater impersonality of a questionnaire format, the item can retain its significance as a part of the instrument.

What else our patients found to be significant in connection with their everyday life quality
The analysis identified three groups of topics that concern needs from the list of those crucial for QoL, and that were important for our patients but not assessed by the questionnaire. Next we will name these needs and present the evidence of the limitations in their fulfillment. Our deeper inquiry into the reasons for these peculiarities will be presented in the Discussion section.

Identity, status, appreciation, respect, usefulness to others, and self-esteem
Adaptation to a disease is a difficult process comparable to passing through the phases of grief. An inevitable and hurtful part of it is the abandonment of old roles and the recognition of new ones. Changing role functioning was a common motif in the interviews. Regret and anxiety due to the inability to perform in the roles of a healthy person were expressed by the majority of respondents.
Gender role
• What man isn't disturbed by the inability to take care of himself, then you're like a kid not a man (8)
• My appearance isn't as attractive [as a woman] anymore as it could have been without the disease (4)

Role in family
• I can't even go help my sister in the country with potato planting, a bit sad (2)
• just sad because of him [the son], he asks me why I can't come outside to play soccer with him, well, I really can't (7)

Age role
• totally like a small kid, someone else has to help you all the time, whatever it is, meals or something else (1)
• I walked with a stick, it was a catastrophe, such an old granny (3)

Work-related role
• I want to do something, just to make something and do a job, my hands want to work; people drop by and try to rope you in – I can't, there's no doer, all the time "I can't", other days I can't at all (2)
• I am afraid of going to work soon; this hand is so ugly; what a hairdresser with such a horrible hand! (4)

The new role of a diseased person was generally interpreted as something deprecatory; being ill was considered the opposite of being normal.
• I can't move; my movements aren't like they should be, not like healthy and normal people have (1)
• before I used to wear high heel shoes like normal people (3)
• you're like some prehistoric creature, everybody goes by modern means but you like going back to the stone age (6)
Feeling ashamed of the disparity led to preoccupation with concealing the condition. Surrounding people were often seen as appraisers; their opinions were valued more highly than success in coping was. As a result, even the simplest aids that could be noticed by others were rejected. In some cases the fear of being labeled as different led to a preference for social isolation.

Concealing the condition
• I don't want to see things that point to my condition, don't want these to be seen, and of course, don't want to have to always hide everything (6)
• I control myself so that others won't notice the way I am – I have so many acquaintances that for a long time didn't know I was ill (9)
• I'm even afraid to tell anyone that this hand is so ill, I'm so quiet (4)

Rejection of aids
• [a special cup with two ears] I have it at home, but I don't use it; at this point I still try to be humanlike (5)
• you don't go shopping with a crutch, no way (8)
• I feel very uneasy eating in company; I still want to eat like people do, with regular tableware (9)

Preference for social isolation
• if I still find it impossible to eat or drink in company, then I don't; I won't go into company [to eat] like this (6)

Safety, freedom from fear, and stability
A well-functioning medical system should strengthen the feelings of security and stability of its clients. In the words of our respondents, however, the medical system constituted an enemy. The system was something to blame, to vent anger on; at the same time it was described as an inevitability that had to be obeyed and against which protection was required.

Blaming
• the system is wrong, the sick funds and all, why can't I get procedures done that are necessary for me (3)
• you sign up there [for a rheumatologist], you wait and wait, a lot changes in that time; you wait a month and a half; you get there; by that time the drugs have run out, later you're at fault for not having taken them (2)
• I went to a private clinic, almost like crying at the door, there are no vacancies, there's no one I could talk to, and I didn't.
I went away to the country /.../ now I'm here and now I'm told that this Achilles' has been broken for at least a month (10)

Obeying and need for protection
• [left alone with the disease] just can't be such a thing, not alone, but life is like this, don't know what to wish (2)
• helplessness, no person to protect me [against the system] (3)
Physicians, some of them named as saviors and supporters, were still more often seen as an impersonal part of the system. Professional incompetence and superficiality were connected with unwelcome turns in the course of the disease. One of the respondents regarded physicians as co-victims of the system, incapable of defending their patients.

Part of the blamed system
• if I had been sent to the rheumatologist right away then maybe things wouldn't have gotten this bad; thus, when my neck was stiff at first, it was thought I had caught a cold and it would pass (1)
• the troubles started when my leg trauma was labelled as radiculitis (2)
• initially no action was taken and consequently the general attitude [of the doctors] became such that nobody had the guts to do anything with me, and because it was such an awkward situation because they didn't have the right doctor who would have done something, I was left a bit high and dry (10)

Co-victims
• the kidney doctor said: not to come to me, I am out of money, go to the GP /.../ you go to the GP, she can't help you, prescribes what she can, she can't prescribe rheuma-drugs (2)

Procuring food, drink and other necessities of life
The ability to do necessary shopping is directly assessed by item number 6: I find it difficult to walk to the shops. Difficulties with walking to the shop were described by our respondents and the item was considered important. But other problems connected with shopping emerged even more often in the interviews – managing in the shop, checking out, and carrying the purchases.

Managing in a shop
• I don't want to stand in a line, my feet start to ache (7)
• [shopping trolley] is very big, you have to push it hard and it's narrow there [in the shop] (9)

Checking out
• checking out at the counter you're in a hurry and then comes this psychological moment that you get nervous and can't handle it (7)
• I open my wallet, they take what they need (5)

Carrying the purchases
• when the bag is too heavy then my feet can't stand it; you have to calculate how much you buy (6)
• sometimes you would like to buy more but you can't carry it; I just put stuff that I really need in the cart (9)

For our interviewees, a topic closely related to shopping was the capability to use public transport. The obstacles encountered when entering, leaving and riding a bus were described.

Getting on
• I can't get onto the bus, even worse with the tram; the steps are very steep, I can't manage however I try (3)

Getting off
• I can't get off, I ride to the next stop; the bus doesn't pull over to the sidewalk but stops further away (6)

Riding
• very bad to ride, I just stagger, lose my balance, sometimes you can fall quite badly; the worst is when you have to stand (9)
• sitting down and getting up [on the bus] is difficult for me, it's better I hold on to something for some time (5)

The lack of money was mentioned by all of our respondents. Its interference with the satisfaction of the majority of needs was described.
• I wouldn't say my spending has increased a lot, but my income is, yes, notably lower; before I even paid more income tax than I get paid now; the habits I had before have become impossible for me, financially (1)
• a handicapped person could also make his/her life comfortable and tolerate everything if it was possible financially (6)
• the pension is so tiny, I can't do anything with this; I can't manage a family with this (7)

Discussion
Our results showed the high significance of the majority of the RAQoL items for the interviewees. This allows us to state that the RAQoL can successfully be adapted into Estonian for use in international research projects. Our results also highlight the difficulties in translating some specific items (numbers 9 and 28), which should be reckoned with during the adaptation process. Three QoL aspects that were important for Estonian RA patients but were not evaluated by the RAQoL – the issues concerning changes in role performance, the safety and stability of communication with the medical system, and some issues of procuring the necessities of life – were revealed by the analysis of the interviews. Next we would like to discuss the reasons for these peculiarities and suggest some additional resources for in-depth reading. Only 13 years have elapsed since the end of Soviet rule in Estonia. Our respondents spent a considerable part of their lifetimes in the Soviet ideological environment and this has undoubtedly influenced their values and beliefs. An obligatory component of the homo sovieticus mentality was the placing of collective interests above personal ones; individuals were appreciated for their contribution to the common good. Successful performance in social roles approved by the regime was honored; inability to meet validated ideals was considered shameful. Although we can speak of a dramatic change in approved values and ideals after the end of the Soviet era – an independent, successful, competitive young individual is idealized now – the tendency to disapprove of the inability to fit expectations has remained. We believe that this potpourri of old and new values and attitudes could explain the high significance of themes connected with roles and role functioning in our respondents' conversations. A person disabled because of disease could not perform successfully in the acknowledged Soviet-time roles (the existence of people with special needs was simply hushed up by the Soviet media); the same is true for present-day Estonia. Being categorized as "socially uncompetitive" would alter the identity and self-esteem of disabled individuals, and concealment of the condition could be seen as a defense reaction. In the light of this, topics connected with role functioning and social acceptability should be included among those observed during a QoL investigation. See also [ 17 - 21 ]. The transition from one economic and ideological system to another in Estonia has caused social instability, which is reflected in an increase in people's subjective sense of insecurity and fear. The most insecure time for the Estonian population was the early 1990s; since that time, due to the stabilization of the economy and economic development, a sense of security is returning. Still, there is remarkable disaffection with different spheres of public administration; future-oriented reorganizations that do not provide immediate benefits are treated with caution.
The health care system, which has undergone fundamental reforms in the post-Soviet period, is a favored object of criticism for the Estonian media. In the Soviet era health care was funded from the state budget and all citizens had free access to health services. Today's health care delivery system in Estonia is financed through health insurance; the private sector is growing. The introduction of family practitioners, with novel financing principles and responsibilities, in the mid-1990s has changed the previously existing relations between patients and medical specialists; access to consultants has lost its immediacy. Bureaucracy, long waiting lists, visit charges (something unthinkable for Soviet medicine) and open discussion of the financial difficulties of the health service in the Estonian media have all made their contributions to lowering confidence in the health care system in the eyes of our patients. Communication with health care deliverers constitutes a significant part of everyday life for RA patients. Therefore the impact of this communication on the everyday life of patients cannot be ignored. In the case of a transitional society like Estonia's, the effect of health care delivery on patients' needs for stability and safety should be considered. See also [ 22 - 26 ]. In health care planning, the strategic military interests of the Soviet Union were given priority; this resulted in the development of an excessively large hospital network. Habilitation and rehabilitation were sidelined and drastically under-funded. In the Soviet era, people with special needs were partly institutionalized as disabled; their problems and even their existence (with the exception of war veterans) were simply hushed up by the authorities and media. There were no adaptations for people with physical special needs in public places and in the physical environment of towns in Soviet Estonia; elementary facilities were not accessible. People in wheelchairs began to be seen on our streets with the arrival of Western tourists in the early 1990s; they were joined shortly afterwards by our own people with special needs. In the years since, adaptations addressing the requirements of people with special needs have little by little been introduced into the physical environment. However, due to limited resources the improvements are coming slowly, and conditions have not reached a level where the comfort of people with special needs can be guaranteed. Coping with errands and chores should be taken into account as a distinct Qol topic in Estonia today. See also [ 27 ]. According to the Statistical Office of Estonia, the average monthly pension for the disabled in Estonia in 2003 was 1110 Estonian kroons (approximately 71 EUR). This constituted one sixth of average monthly gross wages and salaries, and was 301 kroons (19 EUR) lower than the estimated minimum means of subsistence for 30 days in the same period. From these figures we can see that financial troubles can overshadow every aspect of the everyday life of our patients. In our opinion, adding the assessment of these three aspects – changes in role performance, safety and stability of communication with the medical system, and some aspects of procuring the necessities of life – should be considered by Estonian researchers, especially when carrying out disease-specific Qol studies at the national level.
Conclusions In our research we used qualitative interviews to assess the appropriateness of an RA-specific Qol instrument, the RAQol, for adaptation for use in Estonia. We described the importance of the items to our patients and identified Qol topics that were significant for our respondents but that were not assessed by the questionnaire. We also discussed the nature of the discrepancies in the significance of Qol topics for Estonian patients. Our results show that using a qualitative study as an introductory part of Qol assessment instrument adaptation makes for more thoroughly considered Qol research. By evaluating the significance of items in the particular context, it helps avoid the mechanical acceptance of instruments simply because they have performed well in other societies and cultures. Moreover, it offers an opportunity to identify topics that are not included in the instrument but are important for the local interpretation of Qol, and which are otherwise often overlooked. For researchers, qualitative studies offer a deeper understanding of the instrument in question and of the research topic – the Qol of the patients. The data collected during this qualitative research process also have the potential to be used for a wider analysis of the Qol of Estonian RA patients. Still, our current interests were centered on the appropriateness of the adaptation of a particular Qol instrument, and therefore the use of the gathered data was quite limited. But we believe that the knowledge gained will be beneficial for our forthcoming research projects. Authors' contributions MT: the idea and conceptual construction of the research, recruitment of the participants and conducting of the interviews, and coordination of the coding and analysis processes. JS: methodological construction and supervision of the research, and active participation in the coding and analysis processes. KM and EH: participation in discussions and decision-making throughout the whole course of the research. All authors have read and approved the final manuscript. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC544852.xml |
526755 | Review and standardization of cell phone exposure calculations using the SAM phantom and anatomically correct head models | We reviewed articles using computational RF dosimetry to compare the Specific Anthropomorphic Mannequin (SAM) to anatomically correct models of the human head. Published conclusions based on such comparisons have varied widely. We looked for reasons that might cause apparently similar comparisons to produce dissimilar results. We also looked at the information needed to adequately compare the results of computational RF dosimetry studies. We concluded that the studies were not comparable because of differences in definitions, models, and methodology. Therefore we propose a protocol, developed by an IEEE standards group, as an initial step in alleviating this problem. The protocol calls for a benchmark validation study comparing the SAM phantom to two anatomically correct models of the human head. It also establishes common definitions and reporting requirements that will increase the comparability of all computational RF dosimetry studies of the human head. | 1. Background Cell phone safety remains a topic of broad public concern that attracts frequent media attention. This attention is focused on two areas of scientific controversy concerning cell phone safety. The first area is that of non-thermal biological effects. The existence of these effects is an important open question, but it is not the topic of this paper. However, if these effects exist, their manifestation will certainly be related to the amount of RF energy deposited in the tissue – RF dosimetry [ 1 ]. The second area of controversy, and the topic of this paper, is that of RF dosimetry, specifically computational RF dosimetry. Simply put, this is a computer simulation that estimates the deposition of RF energy, the specific absorption rate (SAR), in the head of a user. Because live human heads cannot be safely instrumented for these measurements, computational RF dosimetry provides the best estimate of SAR in actual human heads. For this same reason, compliance testing is done with phantom heads. The phantom head that is now the worldwide standard for compliance testing is the Specific Anthropomorphic Mannequin (SAM). SAM was developed by members of IEEE Standards Coordinating Committee 34, SubCommittee 2, Working Group 1 (SCC34/SC2/WG1). This working group was created to develop recommended practices for determining SAR in the head via measurement techniques [ 2 ]. SAM has also been adopted by the European Committee for Electrical Standardization (CENELEC) [ 3 ], the International Electrotechnical Commission [ 4 ], the Association of Radio Industries and Businesses [ 5 ], and the Federal Communications Commission [ 6 ]. SAM is a lossless plastic shell and ear spacer. Because current technology does not allow reliable measurement of SAR in small complex structures, like a simulated pinna, SCC34/SC2 chose to use a lossless ear spacer on SAM to maximize the energy reaching the head and minimize measurement uncertainty. SAM's dimensions were taken from the 90th-percentile anthropometric data corresponding to the adult male head as tabulated by the US Army [ 7 ]. The SAM shell is filled with a homogeneous fluid having the average electrical properties of head tissue at the test frequency. A primary design goal for SAM was that "SAM shall produce a conservative SAR for a significant majority of persons during normal use of wireless handsets" [ 2 ].
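For orientation, the point SAR underlying all of these comparisons is computed from the simulated local field as SAR = σ|E|²/(2ρ), where σ is the tissue conductivity (S/m), ρ the mass density (kg/m³), and |E| the peak amplitude of the electric field (V/m). A minimal sketch of this relation follows; the example tissue values are illustrative assumptions of ours, not values drawn from the reviewed studies.

```python
import numpy as np

def point_sar(e_peak, sigma, rho):
    """Point SAR (W/kg) from the peak local E-field amplitude:
    SAR = sigma * |E|^2 / (2 * rho). Assumes sinusoidal steady state
    with E given as a peak (not RMS) phasor amplitude."""
    return sigma * np.abs(e_peak) ** 2 / (2.0 * rho)

# Illustrative head-tissue-like values (assumed, not from the paper):
print(point_sar(e_peak=50.0, sigma=0.9, rho=1000.0))  # -> 1.125 W/kg
```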
To test whether this goal has been met, investigators have used computational RF dosimetry to compare the SAR in SAM to that in anatomically correct models of the human head. These anatomically correct head models are commonly derived from MRI scans. Each two-dimensional scan must be analyzed to identify individual tissue types. The two-dimensional scans must then be merged into a three-dimensional model that maintains smooth boundaries between tissue types [ 8 , 9 ]. Some investigators have found that SAM underestimates SAR in adults and children by a factor of two or more [ 10 ]. Other investigators have found that SAM overestimates SAR in both adults and children [ 11 , 12 ]. These contradictory findings produce confusion on the part of the public and regulatory agencies, and call the validity of computational RF dosimetry into question. While the published results of computational RF dosimetry comparing the SAM to anatomically correct models appear contradictory, a close examination of the work reveals that there are several procedural and reporting problems that may well account for the discrepancies in results. The groups headed by Gandhi and Kuster are not the only ones pursuing computational RF dosimetry using anatomically correct models of the human head [ 13 - 25 ]. Not all of these studies included SAM but, to various extents, all evidenced the same procedural and reporting problems that make comparison of results difficult. 2. Problem areas 2.1 Treatment of the pinna The first, and the most significant of these problems, is the treatment of the external ear (pinna). Specifically, the problem is determining whether the pinna may, or may not, be considered as part of the 1- or 10-gram SAR averaging volumes. When considering SAR averaging volumes the head and the pinna should be viewed as mutually exclusive; in other words, the pinna is not part of the head but is attached to it. Some investigators have chosen to treat the pinna in accordance with IEEE Std C95.1-1999 [ 26 ] and the ICNIRP Guidelines [ 27 ]. These standards do not consider the pinna to be an extremity. This means the pinna is subject to the same exposure limit, for peak spatial SAR, as the head. Investigators who refer to these standards include pinna tissue in the 1- or 10-gram averaging volumes used to compute SAR in anatomically correct models. Because the pinna is usually the tissue closest to the feed-point of the cell phone antenna, the highest point SAR values are usually found in the pinna; consequently, averaging volumes that include pinna tissue will produce higher SAR. Other investigators have treated the pinna in accordance with draft revision IEEE Std C95.1-200X. This draft standard expands the definition of extremity to include the pinna, which makes the pinna subject to a higher spatial peak SAR limit (see Table 1 ). These investigators exclude pinna tissue from their head tissue SAR averaging. Table 1. SAR limits from three different standards for extremities and other tissues. These limits are for exposure of the general public in an uncontrolled environment.
                 ICNIRP 1998         IEEE C95.1-1999     IEEE C95.1-200X
Extremities      4 W/kg over 10 g    4 W/kg over 10 g    4 W/kg over 10 g
Other tissues    2 W/kg over 10 g    1.6 W/kg over 1 g   2 W/kg over 10 g
When comparing published results it is often difficult, or impossible, to determine whether head tissue SAR values are based on averaging volumes that include or exclude the pinna. In fact, some papers make no mention of how the pinna was treated.
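To make the stakes of that reporting gap concrete, the sketch below (our illustration, not code from any reviewed study) computes a mass-averaged SAR over a cube grown around a peak voxel, optionally skipping voxels flagged as pinna; running it once with and once without the mask exposes the difference between the two conventions for a given model. The function name and the simplified cube-growing rule are our own assumptions.

```python
import numpy as np

def avg_sar_cube(sar, voxel_mass, center, target_mass=1e-3, pinna_mask=None):
    """Mass-averaged SAR (W/kg) over a cube grown around `center` until it
    holds about `target_mass` kg of tissue (1e-3 kg = 1 g). If `pinna_mask`
    is given, flagged voxels are excluded from the averaging volume
    (C95.1-200X-style). Simplified sketch: the cube is clipped at the array
    edge rather than handled per the full IEEE surface rules."""
    half = 0
    while half <= max(sar.shape):
        sl = tuple(slice(max(c - half, 0), c + half + 1) for c in center)
        keep = np.ones_like(sar[sl], dtype=bool)
        if pinna_mask is not None:
            keep &= ~pinna_mask[sl]
        mass = voxel_mass[sl][keep].sum()
        if mass >= target_mass:
            power = (sar[sl] * voxel_mass[sl])[keep].sum()  # watts in the cube
            return power / mass
        half += 1
    raise ValueError("model does not contain enough tissue mass")
```

Because the hottest voxels usually sit in the pinna, the masked and unmasked calls can differ substantially, which is precisely the ambiguity left open when papers do not state their convention.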
Although head tissue SAR is the major focus of attention, papers that consider the pinna as an extremity cannot simply ignore its existence; the pinna must still meet the higher spatial peak SAR limit for extremities. Another part of the problem dealing with the treatment of the pinna is simply determining what tissue constitutes the pinna. The IEEE defines the pinna as "the largely cartilaginous projecting portion of the outer ear consisting of the helix, lobule, and anti-helix" [ 2 ]. Unfortunately, these anatomical structures vary with each individual and their boundaries are subjective. Consequently, when excluding the pinna some investigators have excluded considerably more or less tissue than others. Because the pinna contains high SAR values, excluding or including tissue near the pinna in the averaging volume markedly changes the peak spatial 1- or 10-gram average. 2.2 Models The second problem area is the lack of common models. The only computer models that are common to all the computational RF dosimetry studies are the SAM and the Visible Human Male. The anatomic data for the Visible Human Male originated at the National Institutes of Health, but many groups and individuals lent a hand in converting it into a computational model. While a few investigators have other models, the only ones that can be compared across all the published results are SAM and the Visible Human. This also means that the only repeatable comparison that can be made is between the SAM and the Visible Human. It seems obvious that one can neither prove nor disprove that SAM produces a SAR greater than the maximum local SAR induced in humans for a significant majority of persons during normal use of wireless handsets when there is only one anatomically correct model available for comparison. Although not a major problem, it is still true that dielectric properties and names of tissue types in anatomically correct models have varied between investigators. Of course the head model is only half of any computational RF dosimetry study; the model of the RF source is the other half. The only common source that has been used in several published studies is a dipole [ 15 , 28 - 30 ]. Simulated cell phones have varied in size, shape, antenna type, antenna length, and sophistication. As with the anatomical head models, some very realistic models of cell phones are in use, but they are either proprietary or too expensive for widespread use. 2.3 Positioning The third problem is that of inconsistent positioning of the model cell phone relative to the head model. Simulated SAR in near-field situations is mainly a function of the geometry of the RF current density distribution on the source model and its geometric separation from the lossy head tissue [ 2 ]. When the separation distance is small, a one- or two-millimeter change can significantly alter the observed SAR [ 30 , 31 ]. The CAD files defining SAM show specific reference points and lines used to position cell phones for compliance testing. IEEE Std. 1528 defines two test positions for compliance testing, the touch and the tilted position (see Figures 1 and 2 , respectively). These positions are routinely used in computational RF dosimetry studies, but the anatomical head models do not have defined reference points. These reference points are defined with respect to anatomical features but, as with the definition of the pinna, the interpretation of these anatomical features can vary from investigator to investigator.
Consequently, even if two investigators are using the same cell phone and head model, there is no assurance that their positioning of the cell phone relative to the head model is the same. Figure 1 Touch position. Specific Anthropomorphic Mannequin with cell phone in touch position on the left side. RE = Right Ear, LE = Left Ear, M = Mouth. Figure 2 Tilt position. Specific Anthropomorphic Mannequin with cell phone in tilted position on the left side. RE = Right Ear, LE = Left Ear, M = Mouth. 2.4 Finite Difference Time Domain (FDTD) considerations 2.4.1 Rotation artifacts Usual practice is to align a monopole cell phone antenna with the FDTD grid to avoid the stairstep effect. The head model is then rotated to the correct position relative to the cell phone. After rotation, the voxelized model must be remeshed to align the voxels with the FDTD grid. This is not a trivial task, and algorithms to perform remeshing are constantly being improved. The authors have noted some unintended artifacts in voxelized models after remeshing. The first of these is grooving. Figure 3 shows a planar slice through the ear spacer and cheek of the SAM. Note the grooves in what should be a smooth surface. The SAR is zero in the grooves, but at the end of the grooves it is higher than in the surrounding voxels due to high E fields within the grooves. These artifacts can distort both the magnitude and location of the peak spatial SAR. Figure 3 Artifacts in slice through ear and cheek of SAM. Slice through ear spacer and cheek of the Specific Anthropomorphic Mannequin (SAM). Two of the many groove artifacts caused by rotation and remeshing are annotated. The upper portion of the figure is the ear spacer which, because it is lossless, has no Specific Absorption Rate (SAR). The lower portion of the figure shows the SAR in the simulated tissue just inside the shell of SAM; red is the highest SAR, violet is the lowest SAR. The jagged edges caused by grooving are not limited to surface features. Figure 4 shows unrotated and rotated slices through the same anatomic model. The smooth interface between tissue types has been distorted and isolated regions of different tissue types have been created in some locations. Figure 4 Artifacts in slice through anatomically correct model. The image on the left is an XY slice through an unrotated anatomically correct model of a human head. Each color represents a different tissue type. Each tissue type comprises a contiguous region and the boundaries between types are smooth. The image on the right is another XY slice through the same model after rotation around all three axes and remeshing; this is not the same plane represented by the image on the left because that plane is no longer parallel to any of the coordinate axes. In the image on the right tissue types are no longer contiguous regions and the boundaries between types show an unrealistic sawtooth pattern. Grooving has not been observed with all FDTD software, and even when it has been seen it has not occurred with all models. Researchers should routinely examine their models after rotation to ensure grooving is not a problem. All FDTD programs must, of necessity, perform their calculations on voxelized models. However, some programs use CAD models that are only converted to voxelized format after all rotation has been done. These programs avoid most coordinate transformation problems, but they are not infallible. They must still convert smoothly undulating biological surfaces into rectilinear voxels.
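One inexpensive screen for such remeshing artifacts, sketched below, is to rotate a copy of the tissue-label array and compare per-tissue region counts before and after: fragmentation of a previously contiguous tissue type flags the grooving and island effects just described. The sketch stands in for a real remeshing step with nearest-neighbour resampling from scipy; the function name and the test angle are our own choices.

```python
import numpy as np
from scipy import ndimage

def rotation_artifact_report(labels, angle_deg, axes=(0, 1)):
    """Rotate a voxelized tissue-label array with nearest-neighbour
    resampling and report, per tissue type, how many disconnected
    fragments exist before and after -- a crude artifact screen."""
    rotated = ndimage.rotate(labels, angle_deg, axes=axes,
                             order=0, reshape=False, mode="constant", cval=0)
    report = {}
    for tissue in np.unique(labels):
        if tissue == 0:                 # 0 = air/background
            continue
        _, n_before = ndimage.label(labels == tissue)
        _, n_after = ndimage.label(rotated == tissue)
        report[int(tissue)] = (n_before, n_after)
    return report  # tissue -> (fragments before, fragments after)

# Example: a solid block should remain one fragment after rotation
vol = np.zeros((40, 40, 40), dtype=np.int16)
vol[10:30, 10:30, 10:30] = 1
print(rotation_artifact_report(vol, angle_deg=17.0))
```

Any tissue whose fragment count grows after rotation deserves manual inspection before the simulation is run.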
Figure 5 shows empty voxels (air) along a tissue interface where they should not exist. Figure 5 Empty voxels along tissue boundary. This image is a close-up of empty voxels caused by rotation and remeshing along a tissue boundary. The white areas are empty voxels along the boundary between the two tissue types indicated by red and blue. 2.4.2 SAR Calculations Because the FDTD method calculates the electric fields at the voxel edges, the X-, Y-, and Z-directed power components associated with a voxel are defined in different spatial locations. These components must be combined to calculate SAR in the voxel. There are three possible approaches to calculate the SAR: the 3-, 6-, and 12-field components approaches. The 12-field components approach is the most complicated, but it is also the most accurate and the most appropriate from the mathematical point of view [ 32 ]. The 12-field components approach correctly places all E field components in the center of the voxel using linear interpolation. Therefore, the power distribution is now defined at the same location as the tissue mass. For these reasons the 12-field components approach is preferred by IEEE 1529 [ 33 ]. However, the actual approach used to calculate SAR in the FDTD voxels is usually not reported. After the SAR in every voxel is determined, multiple voxels must be combined to compute the 1- or 10-gram SAR spatial averaging volumes. These normally cubic volumes become difficult to construct at the surface of a model or when the volume is constrained to a particular tissue type. The particular algorithm used to construct these volumes can influence the resultant 1- or 10-gram SAR values. However, the actual algorithm used to construct the spatial averaging volumes is usually not reported. 2.5 Reporting results When the cell phone model is placed next to the SAM or anatomically correct model, it changes the cell phone's antenna feed-point impedance. The antenna feed-point impedance (Z), feed-point current (I), and net input power (P net ) are related by P net = ½|I|² Re(Z), taking I as the peak feed-point current amplitude. Because net power and feed-point current are usually not initial conditions in FDTD simulations, different feed-point impedances will produce different results for net power, feed-point current, and SAR. If different head models produce the same feed-point impedance this would not be a concern; however, several studies [ 16 , 17 ] have shown that the feed-point impedance depends on the head model, the size of the head next to the mobile phone, and the mobile phone model itself. To compare SAM with various anatomic models it is necessary to assume the same cell phone model at the same emission level for all simulations. Typically, for a given simulation, the SAR is normalized by feed-point current or net power. The normalized value is then multiplied by the feed-point current or net power level chosen for comparison. Commonly, SAR is compared for net input power levels of 125 mW, 600 mW, or 1 W, or for the corresponding feed-point current assuming a 50-ohm feed-point impedance. Some investigators have chosen to scale their results to net power while others have used feed-point current. Unfortunately the choice of scaling is frequently omitted, and the feed-point impedance is almost never reported, making it impossible to compare differently scaled results.
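The sketch below (ours, built only on the power relation above) illustrates the consequence of the scaling choice: one simulated SAR value is rescaled to a 600-mW comparison level, once by net power and once by feed-point current under the 50-ohm convention. The two answers coincide only when Re(Z) happens to be 50 ohms, which is exactly why unreported impedances make results incomparable.

```python
import numpy as np

def rescale_sar(sar_sim, i_sim, z_sim, p_ref=0.6, i_ref=None):
    """Rescale a simulated SAR (W/kg) to a reference emission level.

    sar_sim : SAR from the simulation
    i_sim   : peak feed-point current amplitude in the simulation (A)
    z_sim   : complex feed-point impedance in the simulation (ohm)
    p_ref   : reference net input power (W), e.g. 0.6 W
    i_ref   : reference current (A); default = current giving p_ref into 50 ohm
    """
    p_sim = 0.5 * abs(i_sim) ** 2 * z_sim.real       # net power actually delivered
    if i_ref is None:
        i_ref = np.sqrt(2.0 * p_ref / 50.0)           # 50-ohm convention
    by_power = sar_sim * p_ref / p_sim                # SAR scales linearly with power
    by_current = sar_sim * (abs(i_ref) / abs(i_sim)) ** 2  # ... and with |I|^2
    return by_power, by_current

# Two head models with different feed-point impedances, same simulated SAR:
for z in (45 + 10j, 80 + 25j):
    print(z, rescale_sar(sar_sim=1.0, i_sim=0.1, z_sim=z))
```

Taking the larger of the two rescaled values implements the worst-case reporting recommended in section 3.5.1 below.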
3. A possible solution To address the controversy and its underlying problems, the Protocol for the Computational Comparison of the SAM Phantom to Anatomically Correct Models of the Human Head was developed by IEEE Standards Coordinating Committee 34, SubCommittee 2, Working Group 2 (SCC34/SC2/WG2). This working group was created to develop recommended practices for determining SAR in the head via computational techniques [ 33 ]. This standard is still in draft. The protocol has two parts: a benchmark validation study, and a set of common definitions, models, and reporting requirements. The benchmark validation study is underway with fifteen participants. All participants should finish the study by mid-2004 and the results should be published by early 2005. Hopefully the common definitions, models, and reporting requirements will be used in future investigations, making comparison of results easier. 3.1 Treatment of the pinna The protocol asks all participants to report peak spatial SAR for averaging volumes that both include and exclude the pinna. The voxels comprising the pinna in the provided anatomic models are flagged so all participants will conform to one definition of the pinna. The pinna voxels are flagged by prefixing the standard tissue type with pinna- (such as pinna-skin, pinna-cartilage, and pinna-fat). The electrical properties of the flagged pinna voxels are unchanged. The IEEE Std 1528 definition for the pinna was followed, and the choice of each flagged voxel was confirmed by an Ear-Nose-Throat surgeon. 3.2 Models The Benchmark Validation Study calls for each participant to run twelve simulations: three head models, at two frequencies (835 and 1900 MHz), and in two cell phone positions (touch and tilted). The models are SAM, the Visible Human, and a seven-year-old Japanese male [ 16 ]. Each model is provided as a voxel file with an ASCII header file. For the two anatomically correct models, the tissue names and properties in the header file were made consistent with the definitions found on the Italian National Research Council, Institute for Applied Physics web site [ 34 ]. Although they are not part of the benchmark validation study, SCC34/SC2/WG2 plans to release several new anatomically correct models in the next few months to expand the population of models available for study. A generic cell phone is described for use in all benchmark validation studies, see Figure 6 . The length of the antenna is 71 mm for 835 MHz and 36 mm for 1900 MHz. Because the cell phone and SAM are symmetric, and the anatomically correct models are approximately symmetric, SCC34/SC2/WG2 chose to do all simulations with the phone on the right-hand side of the head. Figure 6 Generic cell phone. The Generic cell phone designed for the intercomparison protocol. Blue = perfect electrical conductor, gray = plastic insulator, green = rubber insulator, red = antenna feed-point voltage source, yellow = acoustic output. 3.3 Positioning The reference points, necessary for positioning the cell phone relative to the anatomically correct model, are also contained in the header file for each model. To aid comparison of results from all the participants, a common coordinate system was defined with origin at the acoustic output of the cell phone, see Figure 7 .
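Placement in this phone-referenced frame becomes reproducible once the rigid transform is stated explicitly, which is what the reporting items below ask for. The sketch that follows is our illustration (the 15-degree tilt and the offset are hypothetical values, not protocol numbers): it builds a rotation matrix of direction cosines and maps head-model points into the phone coordinate system.

```python
import numpy as np

def rotation_matrix(yaw_deg, pitch_deg, roll_deg):
    """Direction-cosine matrix from Z-Y-X Euler angles (degrees)."""
    a, b, c = np.radians([yaw_deg, pitch_deg, roll_deg])
    rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
    ry = np.array([[ np.cos(b), 0.0, np.sin(b)],
                   [ 0.0,       1.0, 0.0],
                   [-np.sin(b), 0.0, np.cos(b)]])
    rx = np.array([[1.0, 0.0,        0.0],
                   [0.0, np.cos(c), -np.sin(c)],
                   [0.0, np.sin(c),  np.cos(c)]])
    return rz @ ry @ rx

def to_phone_frame(points, rot, offset):
    """Map model-local points (m) into the phone-referenced frame
    (origin at the acoustic output): x' = R x + t."""
    return points @ rot.T + offset

# Hypothetical 15-degree tilt and a small +Y offset (illustration only):
rot = rotation_matrix(0.0, 0.0, 15.0)
ear_canal = np.array([[0.0, 0.0, 0.0]])
print(to_phone_frame(ear_canal, rot, offset=np.array([0.0, 0.006, 0.0])))
```

Publishing the matrix `rot` (the direction cosines) alongside the measured distances is what makes two investigators' placements directly comparable.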
The participants are asked to report the following positioning data for all simulations: the distance between the antenna feed-point and the nearest tissue voxel; the coordinates of the Ear Reference Point (ERP); and the direction cosines (as a rotation matrix) for the coordinate transformation of the head models for the touch and tilted positions. As defined in IEEE Std 1528, the ERP is 15 mm posterior to the ear canal in the plane passing through the mouth and both ear canals. Figure 7 Coordinate system. The left image shows the cell phone referenced coordinate system as seen from the right side of the Specific Anthropomorphic Mannequin (SAM). The right image shows the coordinate system as seen from the top of the SAM. The SAM Ear Reference Points, left and right, are where the Y axis intercepts the surface of the mannequin. 3.4 FDTD considerations The FDTD technique is called for by the P1529 draft [ 33 ]. FDTD was chosen because it is stable and accurate, does not require enormous computational resources, and can handle complex geometries. The participants are asked to report the following FDTD data for all simulations: the boundary conditions used and the minimum distance between the model and the boundary of the computational space; the time step size and the number of time steps used; and the grid (voxel) size and whether the grid was homogeneous or graded. To calculate the SAR in each voxel the protocol recommends the 12-field components approach. All 1- or 10-gram spatial averaging volumes are to be constructed in accordance with IEEE C95.3, Annex E. 3.5 Reporting The participants are asked to report the following SAR data for all simulations: the peak spatial SAR, both 1-g and 10-g averages, for all-tissue (head plus pinna), head-only, and pinna-only averaging volumes, and the location of the averaging cubes; the peak point value SAR and its location; and a color-coded SAR distribution for both 1-g and 10-g averages, in the ZY plane. 3.5.1 Scaling reported results For a realistic study it would be ideal to simulate the real-world situation. The question that remains is: "Does a real-world cell phone keep the power or the feed-point current constant when placed next to different human beings with different head shapes and head sizes?" Unfortunately there is not a definitive answer to this question. The behavior of a mobile phone depends on the system design and the power amplifier circuits. A detailed discussion for real-world mobile telephones has to be addressed by future projects with more realistic mobile phone models for numerical simulations. For now it is important to scale the calculated SAR values to both net input power and feed-point current and to present both results. The behavior of a real-world mobile phone lies within the SAR range bounded by scaling to the net input power and scaling to the feed-point current. For human health and safety considerations a worst-case approach is desirable. Until further knowledge on the behavior of a real-world cell phone is available, the scaling producing the worst-case result (largest SAR value) must be taken into account. 4. Conclusion The current version of IEEE Std C95.1 [ 26 ] does not classify the pinna as an extremity, making it subject to the basic SAR exposure limitation of 1.6 W/kg over 1 g. However, the much-anticipated 200X revision of C95.1 will reclassify the pinna as an extremity, raising its SAR exposure limit to 4 W/kg over 10 g.
Confusion over the inclusion or exclusion of the pinna in the SAR averaging volume will continue until the IEEE officially releases C95.1-200X. The IEEE should release C95.1-200X as soon as practical, and if this cannot be done in a reasonably short time, a supplement should be published clarifying the new status of the pinna as an extremity. Investigators should inspect all models after rotation to be sure they are free of artifacts caused by meshing along the new coordinate axes. If necessary, artifacts should be manually corrected before running the simulation. Blindly accepting the output of meshing algorithms can lead to errors. All relevant data and assumptions for the computational RF dosimetry study, as discussed in section 2 "Problem areas", must be reported in such detail that the reader is able to compare the results to other studies. The names and electrical properties for all anatomically correct models should comply with those shown on the Italian National Research Council, Institute for Applied Physics web site. To facilitate broad-based comparisons, new anatomically correct models should be placed in the public domain or made available for a modest fee. The number of anatomically correct models suitable for electromagnetic modelling and widely available for comparison to the SAM is still low. Because the SAM is intended to represent a significant majority of persons during normal use of wireless handsets, comparison to a large variety of anatomically correct models is desirable. It is the hope of IEEE SCC34/SC2/WG2 that consistent results in the benchmark validation will show that, by adhering to some common definitions and procedures, FDTD studies from different investigators using different anatomically correct models are comparable. Authors' contributions BB, past chairman of IEEE SCC34/SC2/WG2, drafted the Protocol for the Computational Comparison of the SAM Phantom to Anatomically Correct Models of the Human Head and this manuscript. WK, present chairman of IEEE SCC34/SC2/WG2, wrote approximately 25% of the manuscript, developed and simulated the phone model, and supplied several of the figures. Both authors read and approved the final manuscript. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC526755.xml |
535701 | Short Sleep Duration Is Associated with Reduced Leptin, Elevated Ghrelin, and Increased Body Mass Index | Background Sleep duration may be an important regulator of body weight and metabolism. An association between short habitual sleep time and increased body mass index (BMI) has been reported in large population samples. The potential role of metabolic hormones in this association is unknown. Methods and Findings Study participants were 1,024 volunteers from the Wisconsin Sleep Cohort Study, a population-based longitudinal study of sleep disorders. Participants underwent nocturnal polysomnography and reported on their sleep habits through questionnaires and sleep diaries. Following polysomnography, morning, fasted blood samples were evaluated for serum leptin and ghrelin (two key opposing hormones in appetite regulation), adiponectin, insulin, glucose, and lipid profile. Relationships among these measures, BMI, and sleep duration (habitual and immediately prior to blood sampling) were examined using multiple variable regressions with control for confounding factors. A U-shaped curvilinear association between sleep duration and BMI was observed. In persons sleeping less than 8 h (74.4% of the sample), increased BMI was proportional to decreased sleep. Short sleep was associated with low leptin ( p for slope = 0.01), with a predicted 15.5% lower leptin for habitual sleep of 5 h versus 8 h, and high ghrelin ( p for slope = 0.008), with a predicted 14.9% higher ghrelin for nocturnal (polysomnographic) sleep of 5 h versus 8 h, independent of BMI. Conclusion Participants with short sleep had reduced leptin and elevated ghrelin. These differences in leptin and ghrelin are likely to increase appetite, possibly explaining the increased BMI observed with short sleep duration. In Western societies, where chronic sleep restriction is common and food is widely available, changes in appetite regulatory hormones with sleep curtailment may contribute to obesity. | Introduction In population studies, a dose-response relationship between short sleep duration and high body mass index (BMI) has been reported across all age groups [ 1 – 7 ]. In the largest studied sample, elevated BMI occurred for habitual sleep amounts below 7–8 h [ 2 ]. A U-shaped curvilinear relationship between sleep duration and BMI was observed for women, but for men, there was a monotonic trend towards higher BMI with shorter sleep duration. Importantly, a recent prospective study identified a longitudinal association between sleep curtailment and future weight gain [ 6 ]. How sleep curtailment may interact with body weight is unknown, but hormones regulating appetite and energy expenditure may be involved. A number of hormones may mediate the interactions between short sleep duration, metabolism, and high BMI. We hypothesized that the two key opposing hormones in appetite regulation, leptin and ghrelin [ 8 , 9 ], play a significant role in the interaction between short sleep duration and high BMI. Leptin is an adipocyte-derived hormone that suppresses appetite [ 10 ]. Ghrelin is predominantly a stomach-derived peptide that stimulates appetite [ 9 , 11 ]. Other mediators of metabolism that may contribute include adiponectin and insulin. Adiponectin is a novel hormone secreted by adipocytes and is associated with insulin sensitivity [ 12 , 13 ]. We investigated the associations among sleep duration (acute and habitual), metabolic hormones, and BMI in the population-based Wisconsin Sleep Cohort Study [ 14 ]. 
Methods Overview The institutional review board of the University of Wisconsin Medical School approved all protocols for the study, and informed consent was obtained from all participants. The Wisconsin Sleep Cohort Study is an ongoing longitudinal study of sleep habits and disorders in the general population [ 14 ]. Briefly, to construct a defined sampling frame, all employees aged 30–60 y of four state agencies in south central Wisconsin were mailed a survey on sleep habits, health, and demographics in 1989. Mailed surveys were repeated at 5-y intervals. A stratified random sample of respondents was then recruited for an extensive overnight protocol including polysomnography at baseline. Stratification was based on risk for sleep-disordered breathing (SDB), with an oversampling of habitual snorers to ensure an adequate distribution of SDB. Analyses were adjusted for the weighted sampling when appropriate. Recruitment for baseline studies was staggered to conduct seven studies per week; study entry and follow-up time thus varied within the cohort. Exclusion criteria included pregnancy, unstable cardiopulmonary disease, airway cancers, and recent upper respiratory tract surgery. The baseline response rate was 51%, with most refusals due to the inconvenience of sleeping away from home. Follow-up studies have been conducted at 4-y intervals, with up to three follow-up studies to date. Collection of morning, fasted blood was added to the protocol in 1995. Extensive survey and other data available from the sampling frame have been used to evaluate the potential for response and dropout biases. Figure 1 provides an overview of the study population and the data collected. The sample comprised 1,024 participants with an overnight study and blood sample. A 6-d diary of sleep duration was added as part of a protocol to assess daytime sleepiness after the initiation of the cohort study; these data were available for 721 participants. Figure 1 Sample Construction and Data Collected All employees (aged 30–60 y) of four state agencies in south central Wisconsin were mailed surveys starting in 1989 regarding general health and sleep habits. From this population, a stratified random sample of respondents was recruited for an extensive overnight protocol providing polysomnography and sleep questionnaire data and morning, fasted serum for hormone and metabolite measurement. The metabolic hormones measured were ghrelin (856 participants), leptin (1,017 participants), adiponectin (1,015 participants), and insulin (1,014 participants) (see Table 1 ). Based on scheduling availability, 721 participants completed an added protocol to measure daytime sleepiness that included a 6-d sleep diary, of which 714 reported on naps (see Table 1 ). See text for further description of the study population and definitions of the sleep measures used. Table 1 Characteristics of the Sample. n = 1,024 except as noted. a Median (first quartile, third quartile). LDL, low-density lipoprotein. DOI: 10.1371/journal.pmed.0010062.t001 Data Collection This investigation is based on Wisconsin Sleep Cohort Study data collected from the mailed sleep surveys, overnight studies, and 6-d sleep diaries. Overnight studies were conducted in laboratory bedrooms, with participants setting their own sleep and rise times. After informed consent was obtained, questionnaires on lifestyle and health history were administered, and height and weight were measured. A blood sample was collected shortly after awakening from overnight polysomnography.
Polysomnography An 18-channel polysomnographic system was used to assess sleep states and respiratory and cardiac variables [ 14 ]. Sleep was studied using electroencephalography, electro-oculography, and chin electromyography (Grass Instruments, Quincy, Massachusetts, United States). Continuous measurements of arterial oxyhemoglobin saturation by pulse oximetry (Ohmeda, Englewood, Colorado, United States), oral and nasal airflow, nasal air pressure, and thoracic cage and abdominal respiratory motion (Respitrace Ambulatory Monitoring, Ardsley, New York, United States) were used to assess SDB [ 14 ]. Each 30-s interval of the polysomnographic record was scored for sleep stage and SDB using standard criteria [ 14 ]. The average number of apneas and hypopneas per hour of measured sleep, the apnea-hypopnea index (AHI), was the measure for SDB. For analyses including AHI as a covariate, participants were excluded if they had sleep studies with less than 4 h of usable polysomnography, or if they were receiving treatment for SDB. Polysomnographic measures of acute sleep Polysomnographic measures of sleep duration just prior to blood sampling were used to evaluate the degree of "acute sleep restriction." "Total sleep time" was total hours of polysomnographically defined sleep. "Wake after sleep onset" (WASO) was hours of wake time after three epochs of sleep had occurred. "Sleep efficiency" was total sleep time divided by time from lights out until arising in the morning. Self-reported sleep measures of chronic sleep Two variables were used to evaluate the degree of "chronic sleep restriction" by estimating average nightly sleep duration: (i) "usual sleep" (from questionnaires) and (ii) "average nightly sleep" (from sleep diaries). Questionnaire Usual sleep was estimated from the following questions: how many hours of sleep do you usually get in (a) a workday night? (b) a weekend or non-work night? These questions were included in all mailed surveys and were added to questionnaires completed at the overnight study in 1998. For participants studied after 1998, data from questionnaires administered at the overnight study were used (58%); for the remainder of the sample, data from the mailed survey closest in time to the overnight study were used. Usual sleep was calculated as (5 × workday sleep + 2 × weekend sleep)/7. Sleep diary Average sleep duration was also estimated using a 6-d sleep diary, kept by 721 participants as part of an added protocol to measure daytime sleepiness. The median time between the blood collection and completion of the diary was 18 d. Almost all diaries (97%) were completed within 6 mo of blood sampling. In diaries, participants recorded the time they went to bed and arose each day, and the duration of any naps. "Average nightly sleep" was calculated as the sum of the hours between bedtime and arising divided by six. "Average nightly sleep plus naps" added naps to the above sum. The Relationship between BMI and Sleep Duration For the analysis of the association of BMI and sleep duration, the sample comprised 1,040 participants with at least one 6-d sleep diary. Of these, 4-, 8-, and 12-y follow-up studies had been completed by 397, 179, and 11 participants, respectively (1,828 visits), providing repeated measures data for greater analytic efficiency and precision. Hormone Assays Following overnight fasting, serum was collected soon after awakening and stored at −70 °C. All samples were assayed in duplicate.
It was not possible to assay samples from all participants in all assays because of the volume of serum available; this particularly affected the ghrelin assay, which required the most volume. Leptin and insulin were determined using enzyme-linked immunoassays (ELISA; Linco Research, St. Charles, Missouri, United States). Total ghrelin and adiponectin were measured by radioimmunoassay (Linco Research). Sensitivity for the leptin and insulin enzyme-linked immunoassays was 0.5 ng/ml and 2 μU/ml, respectively. Sensitivity for the ghrelin and adiponectin radioimmunoassays was 10 pg/ml and 1 ng/ml, respectively. Intra- and inter-assay variations were all less than 5% and less than 10%, respectively. The quantitative insulin sensitivity check index (QUICKI) was calculated as 1/(log(I) + log(G)), where I is fasting insulin and G is fasting glucose [ 15 , 16 ]. Statistical Analysis All analyses were cross-sectional and performed using SAS/STAT 8.2. Leptin, ghrelin, and adiponectin were square-root transformed, and insulin log transformed, based on the distribution of residuals from the multivariate regression models. We evaluated the relationships of age, sex, BMI, and blood sample storage time with hormone levels using multiple regression. Partial correlations adjusted for age, sex, and BMI were calculated for hormones and QUICKI, with and without control of other potential confounders. The relationships between hormones and sleep were evaluated using multiple linear regression after control for potential confounders including age, sex, BMI, SDB, and morningness tendencies (as measured using the Horne-Ostberg questionnaire, an indirect surrogate of earlier rising time). In all analyses involving insulin, glucose, and QUICKI (but not leptin, ghrelin, and adiponectin), participants with diabetes (self-reported diagnosis, or currently taking insulin or diabetic medications, or with glucose >300 mg/dl) were excluded. Participants with SDB were not removed from the analyses shown. When controlling for AHI in models, participants who used continuous positive airway pressure or who had inadequate sleep were excluded. Because controlling for AHI did not significantly change the sleep-hormone regression coefficients, these analyses are not shown. The relationship of BMI with average nightly sleep was evaluated using a quadratic fit. This was examined using multiple visits ( n = 1,828) from 1,040 participants with sleep diary data available. Mixed modeling techniques were used to account for within-subject correlation for participants with multiple visits. The SAS procedure MIXED was used for modeling and hypothesis testing, using robust standard errors and a compound symmetric within-subject correlation structure. All reported p values are two-sided. For illustrative purposes, changes in leptin, ghrelin, and BMI for different sleep amounts were calculated at the average values and sex distribution of the relevant sample. Results Table 1 shows the characteristics of the sample, unadjusted for the weighted sampling scheme. Figure 2 shows the mean BMI for 45-min intervals of average nightly sleep after adjustment for age and sex. We found a significant U-shaped curvilinear relationship between average nightly sleep and BMI after adjustment for age and sex (average nightly sleep coefficient = −2.40, p = 0.008; (average nightly sleep)² coefficient = 0.156, p = 0.008; the two coefficients define a curve). The minimum BMI was predicted at 7.7 h of average nightly sleep.
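As a quick internal-consistency check, the reported coefficients reproduce this minimum: a curve b1·s + b2·s² is minimized at s = −b1/(2·b2) = 2.40/(2 × 0.156) ≈ 7.69 h. The sketch below (ours, not the authors' SAS code) performs that calculation and adds the back-transformation needed to turn a square-root-scale fit into percent-change statements like those quoted later in the Results; the intercept and slope in the second example are hypothetical, since the paper does not report them.

```python
# Vertex of the fitted quadratic: BMI ~ b0 + b1*s + b2*s^2
b1, b2 = -2.40, 0.156
s_min = -b1 / (2 * b2)
print(round(s_min, 2))  # 7.69 h, matching the reported 7.7 h

def pct_change_sqrt_model(a, b, s_from=8.0, s_to=5.0):
    """Percent change on the original scale for a model fit on sqrt(Y):
    sqrt(Y) = a + b*s  =>  Y = (a + b*s)**2."""
    y_from, y_to = (a + b * s_from) ** 2, (a + b * s_to) ** 2
    return 100.0 * (y_to - y_from) / y_from

# Hypothetical intercept and slope, for illustration only (not the paper's fit):
print(round(pct_change_sqrt_model(a=2.0, b=0.1), 1))  # -> -20.3 (% change)
```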
The most striking portion of the curve was for persons sleeping less than 8 h (74.4% of the sample), where increased BMI was proportional to decreased sleep. An increase in BMI from 31.3 to 32.4 (+3.6%) corresponded approximately to an average nightly sleep duration decrease from 8 h to 5 h, as estimated at the mean age (53.1 y) and sex distribution (54.4% male) of the sample with available sleep diary data. Figure 2 The Relationship between BMI and Average Nightly Sleep Mean BMI and standard errors for 45-min intervals of average nightly sleep after adjustment for age and sex. Average nightly sleep values predicting lowest mean BMI are represented by the central group. Average nightly sleep values outside the lowest and highest intervals are included in those categories. Number of visits is indicated below the standard error bars. Standard errors are adjusted for within-subject correlation. Table 2 shows the association of each of the hormones, glucose, and QUICKI with age, sex, and BMI. Serum ghrelin, leptin, adiponectin, insulin, and glucose were significantly correlated with BMI and sex. Storage time had significant effects on some but not all variables; in all cases, however, effect size was small, and the effect was corrected for in all calculations. Adiponectin and glucose were correlated with age. Table 3 shows partial correlations among the measured and calculated variables, adjusted for age, sex, and BMI. All correlations agree with previous studies and validate our assays and population sample. We also examined several potential confounders to be controlled for later, if needed, in our models. Identified relationships included: ghrelin with high-density lipoprotein (HDL), alcohol intake, and creatinine; leptin with diastolic blood pressure, smoking, and blood urea nitrogen (BUN); adiponectin with HDL, triglycerides, uric acid, and BUN; insulin with HDL, triglycerides, uric acid, and smoking; glucose with HDL, triglycerides, uric acid, alcohol intake, BUN, and creatinine; QUICKI with HDL, triglycerides, uric acid, smoking, and BUN. When diabetics (diagnosed) and participants with high glucose (glucose >300 mg/dl) were removed from this analysis, all relationships remained significant except for the correlation between ghrelin and adiponectin, and ghrelin and insulin. Table 2 Relationships among Metabolic Hormone Levels, Age, Sex and BMI. Each row represents a single regression analysis. a All models also included a term for time since sample was drawn (time of storage).
A change of one unit of the predictor variables (age, female indicator) results in a change of the coefficient's size and direction in the transformed variable. b Square-root transformation used in these models. c Natural logarithm transformation used in these models. d Participants with diabetes were excluded (self-reported diagnosis, currently taking insulin or diabetic medications, or glucose >300 mg/dl). DOI: 10.1371/journal.pmed.0010062.t002 Table 3 Partial Pearson Correlations of Metabolic Hormones and QUICKI after Adjustment for Sex, Age, and BMI. Shown are unstandardized correlation coefficients. a Square-root transformation. b Natural logarithm transformation. c A total of 843 participants had data for all variables; correlations were calculated within this subset. d QUICKI = 1/(log(I) + log(G)), where I is fasting insulin and G is fasting glucose. DOI: 10.1371/journal.pmed.0010062.t003 Using polysomnography to measure sleep objectively immediately prior to blood sampling, we found that ghrelin correlated significantly with total sleep time, sleep efficiency, and WASO ( Table 4 ). Using questionnaire and diary data estimating chronic sleep, significant correlations were found between leptin and average nightly sleep (with and without naps) and usual sleep amounts ( Table 4 ). A significant correlation was also observed between ghrelin and average nightly sleep plus naps. These relationships were consistently found when other possible confounding factors such as medications, hypertension, AHI, and the factors listed above were included in the statistical model (analysis not shown). In unadjusted models, leptin was significantly correlated with total sleep time and average weekly sleep with naps, and ghrelin was significantly correlated with sleep efficiency and WASO. Figure 3A shows the mean leptin levels for half-hour increments of average nightly sleep after adjustment for age, sex, BMI, and time of storage (see Table 2 ). In the multiple regression model (see Table 4 ), there was a significant increasing trend in leptin with average nightly sleep duration ( p = 0.01). When evaluated at the average values and sex distribution of our sample, a decrease from 8 to 5 h in average nightly sleep was associated with a predicted 15.5% decrease in leptin. Figure 3B shows the mean ghrelin levels for half-hour increments of total sleep time after adjustment for age, sex, BMI, and time of storage (see Table 2 ). In the multiple regression model (see Table 4 ), there was a significant decreasing trend in ghrelin with total sleep time ( p = 0.008). When evaluated at the average values and sex distribution of our sample, a decrease from 8 to 5 h of polysomnographically defined total sleep time was associated with a predicted 14.9% increase in ghrelin. There was no significant correlation between sleep duration (acute or chronic) and serum adiponectin, insulin, glucose, or QUICKI. Results of our analyses were unchanged after adjusting for the weighted sampling scheme. Figure 3 The Association between Sleep Duration and Serum Leptin and Ghrelin Levels (A) Mean leptin levels and standard errors for half-hour increments of average nightly sleep after adjustment for age, sex, BMI, and time of storage (see Table 2 ). Average nightly sleep values outside the lowest and highest intervals are included in those categories. Sample sizes are given below the standard error bars. The y-axis uses a square-root scale. Data derived from 718 diaries because three participants had missing leptin data.
(B) Mean ghrelin levels and standard errors for half-hour increments of total sleep time after adjustment for age, sex, BMI, and time of storage (see Table 2 ). Total sleep time values outside the lowest and highest intervals are included in those categories. The y-axis uses a square-root scale. Note that ranges for total sleep time amounts are typically shorter than those for average nightly sleep amounts (A; see Figure 1 ), and do not correlate strongly (see text). Table 4 Relationships between Sleep Variables and Metabolic Hormones, Adjusted for Age, Sex, BMI, and Time of Sample Storage. Each coefficient is from a separate regression model. The first three sleep variables were derived from nighttime polysomnography data; average nightly sleep (with and without naps) was derived from sleep diary data, and usual sleep was derived from questionnaire data. a Sample sizes for all polysomnography-derived sleep variables (sleep efficiency, total sleep time, and WASO). b Square-root transformation used in these models. c Outliers excluded. One participant was removed from all models because of a very high leptin level and low BMI (21 kg/m²). For the leptin/average nightly sleep and leptin/average nightly sleep with naps models, two participants were removed: one was a large outlier (very high leptin level), and one had 6-d diary sleep of less than 12 h, which was influential. Removing these two points resulted in a slightly smaller, less significant coefficient. For the leptin/usual sleep model, one outlier with a large leptin value was removed. Again, this resulted in a slightly smaller, less significant coefficient. d Natural logarithm transformation used in these models. e Participants with diabetes were excluded (self-reported diagnosis, currently taking insulin or diabetic medications, or glucose >300 mg/dl; n = 78). DOI: 10.1371/journal.pmed.0010062.t004 Discussion We found that habitual sleep duration below 7.7 h was associated with increased BMI, similar to findings in other studies including children [ 1 , 17 ], adolescents [ 5 ], and adults [ 2 , 3 ]. We also report a significant association of sleep duration with leptin and ghrelin that is independent of BMI, age, sex, SDB, and other possible confounding factors (analysis not shown for SDB and other confounders). Short sleep duration was associated with decreased leptin and increased ghrelin, changes that have also been observed in reaction to food restriction and weight loss and are typically associated with increased appetite. These hormone alterations may contribute to the BMI increase that occurs with sleep curtailment. Previous studies have shown that both acute sleep deprivation [ 18 ] and chronic partial sleep deprivation (sleep restriction) [ 19 ] can cause a decrease in serum leptin concentrations. These studies, however, were performed under highly controlled laboratory circumstances. Our results validate the association of decreased leptin with decreased sleep time in a large sample of adults under real-life conditions and now indicate a role for ghrelin. Leptin deficiency increases appetite and produces obesity [ 8 , 20 ]. Leptin administration suppresses food intake and reduces energy expenditure [ 21 , 22 ]. Importantly, low leptin as observed with sleep loss has a greater impact on appetite than high leptin levels, which are associated with leptin resistance, as occurs with obesity [ 8 ]. Levels of ghrelin, a potent stimulator of appetite [ 23 , 24 , 25 ], were higher in those with shorter sleep.
Ghrelin levels are also positively associated with hunger ratings [ 26 ], but decrease with increased BMI (see Table 2 ). In one study, after 3 mo of dietary supervision, a reduction in BMI of approximately 5% was associated with a 12% increase in ghrelin and a 15% decrease in leptin [ 27 ]. These changes, in participants of similar BMI to our sample and presumably producing increased appetite, are comparable to those observed with sleep loss of 2–3 h/night. With sleep loss, however, relatively high ghrelin and low leptin levels are associated with increased BMI. These changes can be hypothesized to play a contributory, rather than compensatory, role in the development of overweight and obesity with sleep restriction. Our findings are strengthened by the large and well-characterized population-based sample, attention to bias and confounding factors, and in-laboratory polysomnographic data. The changes in hormones with sleep duration were consistent and of significant magnitude. They also represent the first demonstration of a correlation between peripheral hormone levels and both self-reported (questionnaire and diary data) and polysomnographically determined sleep amounts in a general population sample. While these data are more comprehensive than previous studies on this topic, some misclassification error may exist because of intra-person variability or limitations of polysomnographic measurement. Little is known about the stability of self-reported sleep duration and polysomnographic measures of sleep duration over time. We examined the stability of the self-reported sleep duration data, and found these measures to be stable. For 860 participants who completed three surveys, the mean (standard deviation) of intra-person differences in usual sleep for two 5-y periods was 0.10 (0.47) h. For 190 participants with at least three sleep diaries, the mean (standard deviation) of intra-person differences in average nightly sleep for two 4-y intervals was 0.09 (0.41) h. Furthermore, the subjectively reported hours of usual sleep and the diary-derived average nightly sleep values were highly correlated ( r = 0.55, p < 0.001). One-night polysomnographically defined total sleep time had a similar intra-person mean difference (0.10 h), with a somewhat larger standard deviation (0.68) for 713 participants with at least three sleep studies. Elevated ghrelin mainly correlated with acute sleep loss as measured by polysomnography immediately prior to blood sampling (see Table 4 ; Figure 3 B), while reduced leptin correlated with chronic sleep restriction indicated by self-reported sleep measures (see Table 4 ; Figure 3 A). Measures of usual and one-night polysomnographically defined sleep time were only weakly, but statistically significantly, correlated ( r = 0.12, p < 0.001), supporting the concept that these measures reflect long-term and short-term changes in sleep amounts, respectively. Our findings are in agreement with the current view that leptin is important in signaling long-term nutritional status while ghrelin has a more significant role in acute hunger. The changes in leptin and ghrelin with sleep restriction could, therefore, provide a powerful dual stimulus to food intake that may culminate in obesity. Longitudinal and intervention studies will be necessary to define further the link between sleep curtailment and increased BMI. Only total ghrelin was measured, since active octanoylated ghrelin is unstable. 
Although both total and active ghrelin appear to be regulated in a similar and parallel manner, future studies will need to focus on measurement of the biologically active form. Other potentially important appetite regulatory hormones, such as PYY 3–36 [ 28 ], were not measured. Measures of appetite were not included in the Wisconsin Sleep Cohort Study overnight protocol; therefore, a direct examination of the relationship between the observed hormone changes with sleep duration and alterations in appetite was not possible. Hormone measurements were all performed on a single fasted, morning sample and may not reflect the 24-h profile. It is possible that participants with shorter sleep woke up earlier and that hormone differences may be partially related to circadian time. Leptin and ghrelin levels rise slightly during the night [ 29 ], and this could result in higher hormone levels in short sleepers. This may be an issue for ghrelin, as levels increased with acute sleep restriction. It is, however, unlikely to play a role in the leptin finding, since lower levels were found with chronic but not acute sleep restriction. Additionally, studies have shown a high correlation between morning, fasting leptin and ghrelin levels and 24-h mean profile [ 29 , 30 ]. We also found that the ghrelin and leptin changes were unaffected by morningness tendencies. The fact that studies investigating the diurnal profile of these hormones found similar hormonal changes over the entire 24-h period after experimental sleep restriction also corroborates our results [ 18 , 19 ]. The robustness of our findings and similar observations from smaller controlled studies [ 18 , 19 ] also suggest that our statistically significant results are unlikely to be a reflection of the number of analyses carried out. Animal studies have suggested a link between sleep and metabolism [ 31 , 32 ]. In rats, prolonged, complete sleep deprivation increased both food intake and energy expenditure. The net effect was weight loss and, ultimately, death [ 33 ]. Sleep-deprived rats fed a high protein-to-calorie diet had accelerated weight loss compared to sleep-deprived rats fed calorie-augmented (fatty) diets [ 32 ]. Food consumption remained normal in sleep-deprived rats fed protein-rich diets, but increased 250% in rats fed calorie-rich diets. Preference for fatty foods has also been reported anecdotally in sleep-deprived humans [ 32 , 34 ]. Sleep deprivation may thus increase not only appetite but also preference for lipid-rich, high-calorie foods. Animal experiments that have found weight loss after prolonged sleep deprivation have to be interpreted in the context of a stressful procedure producing intense sleep debt [ 35 , 36 ], which may interfere with adequate food intake. From our study, we hypothesize that the moderate chronic sleep debt associated with habitual short sleep is associated with increased appetite and energy expenditure. In societies where high-calorie food is freely available and consumption is uncontrolled, after milder chronic sleep restriction the balance may be tipped towards intake of high-calorie food rather than towards expenditure, culminating in obesity. Short sleepers may also have more time to overeat. Sleep loss from a baseline of 7.7 h was associated with a dose-dependent increase in BMI. This was the predominant effect in a population increasingly curtailing sleep [ 37 ]. Sleep greater than 7.7 h, however, was also associated with increased BMI.
Patients with SDB (a pathology associated with increased BMI) may spend a longer time in bed to compensate for fragmented sleep; however, controlling for AHI did not change the curvilinear BMI–sleep association. Another possibility is that in long sleepers, reduced energy expenditure due to increased time in bed has a greater impact than reduced food intake. In favor of this hypothesis, long sleepers exercise less [ 38 ]. In our data, we found that the odds ratio of high levels of self-reported exercise (>7 h/wk), based on a single survey question, decreased with increased sleep time, but controlling for this variable also did not change our findings (analyses not shown). Insulin resistance with sleep deprivation has been reported in a laboratory study of young, healthy volunteers [ 39 ]. When controlling for BMI, we found no significant correlation between insulin, glucose, or adiponectin levels and various measures of sleep duration. Also, there was no significant correlation between QUICKI (or the homeostatic model assessment HOMA [ 16 ]; data not shown) and sleep duration. This may be due to difficulties in detecting small effects on glucose tolerance under less-controlled conditions of large population studies. Our results demonstrate an important relationship between sleep and metabolic hormones. The direct effect of chronic partial sleep loss on food intake, energy expenditure, and obesity now needs to be explored. Altering sleep duration may prove to be an important adjunct to preventing and treating obesity. Patient Summary Why Was This Study Done? Recent studies have shown that there is a link between sleeping less and gaining weight. It isn't clear why there is this link—perhaps, for example, those who are awake in the middle of the night tend to head for the refrigerator for a snack. Another possibility is that the amount of sleep that we have might affect the hormones that control our appetite. We know that extreme sleep deprivation affects the level of leptin, a hormone that controls appetite. Here, the researchers wanted to study levels of leptin and other appetite hormones under more normal conditions, in people with a range of sleeping habits. What Did the Researchers Do? The researchers studied participants in a large sleep study that has been going on in Wisconsin for over 15 years. These participants have been filling out questionnaires about their sleep habits and their health in general, have kept sleep diaries, and have occasionally spent a night in the laboratory, where researchers studied their sleep in more detail. After sleeping overnight in the laboratory, the participants gave blood samples, which were tested for hormones. What Did They Find? The researchers found that people who slept less were on average heavier. And people who slept less had lower levels of leptin and higher levels of ghrelin, another hormone that controls food intake. What Does This Mean? The combination of low leptin and high ghrelin is likely to increase appetite. In other words, short sleep might stimulate appetite, which increases weight. What Next? Future studies need to examine the effect of regular short sleeping hours on appetite, food intake, and obesity. These studies could help to answer the question of whether the rise in obesity in many societies is partly due to the fact that people are sleeping less. And it seems well worth testing whether increasing sleep to seven or eight hours per night could help people to lose weight. 
Additional Online Information National Sleep Foundation Web page on obesity and sleep: http://www.sleepfoundation.org/features/obesity.cfm World Federation of Sleep Research Societies: http://www.wfsrs.org/homepage.html Red en Medicina del Sueño: http://www.rems.com.ar/ European Sleep Research Society: http://www.esrs.org/ | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC535701.xml |
515374 | Do Genes Respond to Global Warming? | null | Scientists continue to debate the extent to which human activities drive global warming, but few would dispute that it exists. The Intergovernmental Panel on Climate Change predicts that greenhouse gases will increase global temperatures by 3.6 degrees F by 2100—a rise unprecedented over the past 10,000 years. What might the world look like as we approach that point? Wetlands will disappear. Floods, hurricanes, and droughts will become progressively more severe. Infectious diseases will increase in virulence and range. Montana's famed glaciers may all but disappear within 30 years. A quarter of species may vanish by 2050. While the effects of climate change on species' geographic ranges and population dynamics have been studied to some extent, scientists know little about how species respond to climate change at the genetic level. In this issue, Elizabeth Hadly and colleagues analyze three different dynamic processes—environmental change, population response, and gene diversity fluctuations—and present evidence that climate change influences variation in genetic diversity. Focusing on two mammal species—the montane vole and the northern pocket gopher—Hadly et al. asked how the two species responded to historical climate-induced habitat alterations in northwestern Wyoming. They gathered fossils from Yellowstone National Park's Lamar Cave, which contains a treasure trove of carbon-dated deposits that mirror the community of mammals in the area today. Comparing genetic material extracted from fossil samples from different time points over the past 3,000 years to genetic data taken from contemporary animals, Hadly's team tracked genetic changes in populations of the two species and used this information (along with relative fossil abundance and modern population density) to estimate changes in effective population size over time. (Effective population size refers to the number of individuals contributing genetic material to the next generation. Populations with a small effective population size, for example, would be highly vulnerable to environmental catastrophe.) The genetic and demographic data were then combined with environmental records to analyze the relationship between the factors. Studying these populations in space and time—an approach the authors call "phylochronology"—offers an opportunity to analyze the genetic diversity of a species against the backdrop of environmental fluctuation within an evolutionary time frame. It also suggests how microevolutionary forces—factors that affect genetic variation in populations over successive generations—shape genetic responses to climate change. Such evolutionary forces include mutation, genetic drift (the random gene fluctuation in small populations that stems from the vagaries of survival and reproduction), and gene flow (changes in the gene frequency of a population caused by migration). The past 3,000 years includes two periods marked by dramatic climate change—the Medieval Warm Period and the Little Ice Age—that had different effects on local mammal populations depending on their habitat preferences. Habitat specialists, the vole and pocket gopher live in the wet mountain regions of western North America. Though both showed population increases during wetter climates and declines during warmer periods, Hadly et al. predicted the gene diversity fluctuations of the two species would differ based on their different ecological behaviors.
And that's what they found: genetic response is tied to population size. Pocket gophers have low population densities, stick close to home, and are fiercely territorial, while voles live in high-density populations and range more widely. For the gophers, population declines resulted in reduced gene diversity; for the voles—which have a larger effective population size and greater dispersal between populations—population declines resulted in increased gene diversity. But what forces underlie these differences in genetic variation? A recent study suggests that migration (a primary agent of gene flow) is most common in and between low-density patches in vole populations, which implicates gene flow as the driver of gene diversity patterns. But the authors don't rule out selection as a possibility, and suggest how to go about resolving the question. Hadly et al. show that phylochronology opens a unique window onto the relatively recent evolutionary past and offers “the potential to separate cause from effect.” They also conclude that “differences in species demography can produce differential genetic response to climate change, even when ecological response is similar.” With a 3-degree temperature increase in just the past 50 years in the American West, conservation of biodiversity may well depend on such insights. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC515374.xml |
526769 | CLC-2 single nucleotide polymorphisms (SNPs) as potential modifiers of cystic fibrosis disease severity | Background Cystic fibrosis (CF) lung disease, manifested by impaired chloride secretion, leads to eventual respiratory failure. Candidate genes that may modify CF lung disease severity include alternative chloride channels. The objectives of this study are to identify single nucleotide polymorphisms (SNPs) in the airway epithelial chloride channel, CLC-2, and correlate these polymorphisms with CF lung disease. Methods The CLC-2 promoter, intron 1 and exon 20 were examined for SNPs in adult CF dF508/dF508 homozygotes with mild and severe lung disease (forced expiratory volume at one second (FEV1) > 70% and < 40%). Results PCR amplification of genomic CLC-2 and sequence analysis revealed 1 polymorphism in the hClC-2 promoter, 4 in intron 1, and none in exon 20. Fisher's analysis within this data set did not demonstrate a significant relationship between the severity of lung disease and SNPs in the CLC-2 gene. Conclusions CLC-2 is not a key modifier gene of CF lung phenotype. Further studies evaluating other phenotypes associated with CF may be useful in the future to assess the ability of CLC-2 to modify CF disease severity. | Background Although more than 1000 mutations of the CF gene product, CFTR, are known, none of these can be used to make predictions about the occurrence of common complications, the severity, or course of pulmonary disease. The identification of a gene that modifies the phenotypic expression of CF would be very important for understanding this complex disease. Because CF is a disease of chloride transport in respiratory epithelia, alternative chloride channels present in the airway may be able to partially compensate for the CF defect. CLC-2 is one candidate alternative chloride channel in respiratory epithelia. Localization to the luminal surface of the airway and perinatal downregulation of CLC-2 in mammalian lung suggest a role in lung morphogenesis [ 1 , 2 ]. Persistent expression of CLC-2 mRNA and protein in tissues unaffected in CF suggests that CLC-2 may compensate for defects in CFTR expression [ 1 ]. CLC-2 has the capacity to conduct chloride in mature respiratory epithelia [ 3 , 4 ]. The rat CLC-2 promoter has SP-1 domains that are important for gene regulation [ 5 ]. A splice variant of CLC-2 skipping exon 20 occurs in rat lung, suggesting that alternative splicing may have functional significance in this tissue [ 6 ]. Because investigation of human CLC-2 genomic structure would be important for studies of gene regulation, we sought to identify single nucleotide polymorphisms in potential regulatory domains of human CLC-2. Genomic DNA was isolated from CF adults with severe and mild obstructive lung disease in order to determine if SNPs segregate with CF lung phenotype. Methods CLC-2 protein expression in CF nasal polyps Nasal polyps from CF patients were obtained at the time of elective surgery from 1989 to 1996. Genotypes of CF mutations were available for each patient, but not clinical status, in accordance with the approval by the Johns Hopkins Medical Institution Review Board. At harvest, the tissue was washed 3 times in HBSS, and incubated at 4°C overnight in Protease XIV (Sigma). Epithelial cells were isolated by gentle agitation and filtered through a 70-μm nylon cell strainer (Becton Dickinson; Franklin Lakes, NJ). Cells were grown on 1% collagen coated 35 mm dishes for 1 week.
Cell lysates were prepared using 2% sodium dodecyl sulfate (SDS) at 65°C and a cell scraper. Equivalent amounts of total protein from primary CF nasal polyp cultured cell lysates were loaded onto an SDS-polyacrylamide gel electrophoresis (PAGE) system, electrophoresed and transferred to a nitrocellulose membrane. CLC-2 protein levels were detected using the polyclonal chicken anti-CLC-2 antibody and the enhanced chemiluminescent reaction as previously described [ 2 ]. Population studied for CLC-2 polymorphisms Variable expression of CLC-2 protein in nasal cell lysates (see Results) suggested that CLC-2 is differentially expressed in adults and that examination of human CLC-2 genomic structure would be important to investigate this differential expression. Identification of volunteers for nasal epithelial cell culture was not permitted with the original IRB consent process. Therefore, a cohort of CF patients was recruited for polymorphism analysis. A review of the Johns Hopkins Medical Institution CF center database was conducted in 1998 to identify patients who had reached adulthood (age > 18 years) and were homozygous for the most common CF genotype, delF508, so that the effect of various CFTR genotypes would not confound the investigation of CLC-2 polymorphisms. Status of obstructive lung disease was defined using the most recent pulmonary function studies. Patients with spirometry FEV1 ≤ 40% predicted were classified as severe, and those with FEV1 ≥ 70% predicted as mild, in order to define 2 severity levels of CF lung disease. Of 74 eligible subjects (age > 18 years, del F508 homozygous), 43 had FEV1 ≥ 70%, 9 had FEV1 = 41–69%, and 22 had FEV1 ≤ 40%; 31 were recruited during routine visits to the CF center from June 1998 to January 2000. With informed consent, participants provided blood samples for genomic DNA isolation. This study was approved by the Institutional Review Board at Johns Hopkins Medical Institution. DNA was isolated from lymphocytes using standard procedures. Identification of CLC-2 polymorphisms The genomic structure of rat CLC-2 has been previously published [ 5 , 6 ] and has important sites for gene regulation. The human CLC-2 genomic sequence, however, was largely unknown at the start of this study. Promoters are important sites to examine for SNPs that might affect the regulation of a gene. The first intron of a gene can also function as an important regulatory domain. Because the rat lung has a splice variant that deletes exon 20 [ 6 ] due to an unusually high CT content in the upstream intron 19 and a rare AAG acceptor site, this region was also examined for polymorphisms. Primer pairs were thus chosen from rat [accession gi|4406230] and human CLC-2 sequence [accession S7770] to amplify the promoter, intron 1 and exon 20 from adult CF subjects homozygous for delF508. Sequencing of the human CLC-2 promoter, initially from one human genomic sample, was performed by polymerase chain reaction using the 5'-flanking rat hpolE1 primer (dTCC GGG TCA ATA TCC TTC ACA TCG), which lies approximately 2000 base pairs upstream from the rat CLC-2 coding sequence [ 5 ], and the 3'-hCLC-2 promoter primer (dCGC CCG TGG CTC CAT CCC TTC), which corresponds to sequence from the N-terminus of the hCLC-2 coding region [accession S7770 [ 7 ]]. PCR amplification was performed using the MasterAmp™ PCR Optimization Kit buffer J (Epicentre Technologies, Madison, WI) due to the high GC content of this region in the rat [ 5 ].
The amplified product was cloned into the TA cloning Vector (Invitrogen), plasmid DNA grown in E. coli , and DNA isolated using the Mini-prep kit (Qiagen, Valencia, CA) according to the manufacturer's instructions. The sequence of this ~2000 bp product provided genomic DNA sequence for the design of primer pairs that would yield overlapping PCR-amplified DNA fragments of the promoter. Sequencing and identification of human CLC-2 promoter polymorphisms in 15 CF patients with severe obstructive lung disease (FEV1 ≤ 40% predicted) and 16 CF patients with mild disease (FEV1 ≥ 70% predicted) were performed by polymerase chain reaction using overlapping primers designed from the initial hCLC-2 clone. The only product that yielded a SNP was amplified using primers 15F dGTC CCA GGA GTA GAC TTC C and 16R dCAC TGC CCT CTG GCC TC, providing a 760 base pair product, using cycling conditions of 94°C 6 mins, 35 cycles of 94°C 30 s, 59°C 30 s, 72°C 30 s and 72°C 6 min. A nested reaction with 20 μM primers 17F dTCC CCT CCG GCC TAC CCC TTC CGG T and 18R dGGA AGG ATT CGG AGA GGG TTG GGG C amplified both a 150 and a 300 bp product using Epicentre MasterAmp™ buffer J (Madison, WI) with cycling conditions 94°C 6 mins, 35 cycles of 94°C 30 s, 64°C 30 s, 74°C 30 s and 74°C 6 min. Because regulation of a gene may also occur through its first intron, we amplified this region from all subjects using primers 1F' dCGC TGC AGC ACG AGC AGA C and 1R' dCCC AAG GTC CTG AGT GTA CC, which yielded a 2273 bp product. Cycling conditions were 95°C 6 mins, 35 cycles of 95°C 30 s, 63°C 30 s, 72°C 3 minutes and 72°C 6 min. Finally, because exon 20 is alternatively spliced in rat lung [ 6 ], we examined whether or not SNPs existed in this region, including parts of exons 19 and 21 and the intervening introns, using primers 20F dGCC TCT TCT GTG GCA GTC C and 20R dCTT CAG GGC TCA TCT CGC C with PCR amplification conditions of 92°C 6 mins, 30 cycles of 92°C 30 s, 55°C 30 s, 72°C 30 s and 72°C 6 min. These primers amplify a 481 bp fragment covering the 3' end of E19 to the 5' end of E21. With PCR amplification of all 31 genomic CF samples using the primers listed in Table 1 , the presence of amplified products was confirmed on agarose gels. Amplified DNA and primers were separated using Millipore filters. The purified PCR products were sequenced in both directions using the same primers used for amplification and the Big Dye cycle sequencing kit (ver. 2 or 3.1, ABI) in accordance with the manufacturer's instructions. The fluorescently labeled products were separated and detected using either an ABI 377 or 3700 or 3730xl Automatic Sequencer (ABI). The trace files were read using Phred [ 8 , 9 ] and Phrap [ 10 ]. Each potential polymorphism was confirmed by visual inspection.
Table 1 Primers used to amplify CLC-2 polymorphisms (primer pair, oligomer sequences, and expected product size):
rat hpolE1, dTCC GGG TCA ATA TCC TTC ACA TCG, with hClC-2 promoter, dCGC CCG TGG CTC CAT CCC TTC (2128 bp)
15F, dGTC CCA GGA GTA GAC TTC C, with 16R, dCAC TGC CCT CTG GCC TC (760 bp)
17F, dTCC CCT CCG GCC TAC CCC TTC CGG T, with 18R, dGGA AGG ATT CGG AGA GGG TTG GGG C (147 + 300 bp)
Intron 1F, dCGC TGC AGC ACG AGC AGA C, with Intron 1R, dCCC AAG GTC CTG AGT GTA CC (2273 bp)
Exon 20F, dGCC TCT TCT GTG GCA GTC C, with Exon 20R, dCTT CAG GGC TCA TCT CGC C (481 bp)
Results Expression of CLC-2 protein in CF nasal cells CLC-2 protein is nearly undetectable in postnatal rat lung [ 2 ]; however, we hypothesized that postnatal expression of CLC-2 might confer a protective advantage for the respiratory epithelium of CF individuals.
We examined human CLC-2 protein expression using lysates from primary nasal cells obtained from elective polypectomy of CF patients with a variety of CFTR mutations. When similar amounts of total protein from nasal lysates were electrophoresed on an SDS-PAGE system, variable amounts of CLC-2 protein were detected [figure 1 ]. High levels of CLC-2 protein were expressed in some lysates, but CLC-2 protein was nearly undetectable in others, suggesting that CLC-2 expression is variably regulated in humans. CFTR genetic mutation information was available for these patients and did not correlate with levels of CLC-2 protein expressed [figure 1 ]. In addition, the expression of ClC-2 protein was diminished in transformed bronchial epithelial IB3-1 cells [ 11 ] (lane 10, figure 1 ), which were derived from primary nasal epithelial cells of a subject with delF508/W1282X (lane 8, figure 1 ). While data about the genetic mutations of the CFTR were available on these patients, information about their clinical status was not, in accordance with an agreement with the Johns Hopkins Institutional Review Board. Figure 1 ClC-2 expression by Western blot of nasal polyp lysates from CF adults with the following genotypes: Lanes 1,3,6: dF508/dF508; Lane 2: dF508/d559T; Lane 4: unknown; Lane 5: S549N/R553X; Lanes 7,9: dF508/unknown; Lane 8: dF508/W1282X; Lane 10: IB3-1 cell line, genotype dF508/W1282X. Arrow identifies CLC-2 bands. Single nucleotide polymorphisms in CLC-2 In order to minimize the confounding of genotype, race and age, all individuals were homozygous for the delF508 mutation of CFTR, Caucasian, and over 17 years old. FEV1% defined 2 cohorts: one with mild CF lung disease and an average FEV1% of 77.4 ± 3.18 (SEM) (Table 2 , n = 16, 9 male). The group with severe lung disease had an average FEV1% of 35.6 ± 3.13 (SEM) (n = 15, 9 male). The mean age of the mild and severe groups was not significantly different (22.6 ± 1.37 years vs. 24.7 ± 1.56 years, mean ± SEM). Because CLC-2 expression could be regulated through the promoter, we amplified the CLC-2 promoter from each patient's DNA with primers that produced overlapping sequences, which were examined for SNPs. In addition, intron 1 and exon 20 were investigated for SNPs because of their potential role in CLC-2 expression.
Table 2 Demographics of study subjects (mean ± SEM):
Severe: FEV1 35.6 (3.13); 9 M / 6 F; age 24.7 (1.56) years
Mild: FEV1 77.4 (3.18); 9 M / 7 F; age 22.6 (1.37) years
Promoter. PCR amplified a 2128 bp promoter product, confirmed by agarose gel. Sequence comparison revealed that bp 21 to 2128 of the amplified sequences was compatible with bp 317320 to 319427 of ref|NT_0292533|Hs_29412 and that there were no differences between the two sequences. Examination of these products determined that the upstream region was RPB8 exons 1–3 of the human gene polr2H (gi|8052522|), as expected from the rat genomic structure [ 5 ]. The human CLC-2 promoter is 69% GC rich and contains 4 GC boxes in the 225 bp upstream from the ATG start site (sequence to be submitted to GenBank). This area is very similar to the rat ClC-2 promoter, where binding of transcription factors Sp1 and Sp3 occurs [ 5 ]. The human CLC-2 promoter sequence is highly conserved, with as much as 82% sequence identity with rat (gi 4406230) and 77% with mouse (gi 28494743). Guinea pig genomic sequence (gi 5001715) aligns with approximately 100 bp of the terminal end of the human CLC-2 promoter, and rabbit (gi 642465) only with 19 bp upstream of the coding sequence (Figure 2a and 2b ).
One G/A polymorphism was identified in the 5' upstream sequence of human CLC-2. This SNP is -693 relative to the ATG start site of hCLC-2 (figure 2b , asterisk, genbank S7770), and has not previously been described. The -693 G/A polymorphism lies in a putative AP-2 binding site, predicted by TESS and MATINSPECTOR [ 12 , 13 ], which may affect regulation of the gene. Figure 2 Diagram of alignment of human CLC-2 promoter and mammalian homologues (H = human, R = rat, M = mouse, GP = guinea pig, and Rb = rabbit). CLC-2 translation initiation site in all 5 species is denoted by "start". One single nucleotide polymorphism (SNP) is present at nt -693 (human). Hpol is a polymerase whose gene product is on the complementary strand, upstream from the CLC-2 promoter. Five subjects with severe CF lung disease (FEV1 < 40%) had the genotype A/G, whereas eleven had G/G at position -693 (Table 3 ). Of the individuals with mild CF lung disease (FEV1 > 70%), 6 had A/G and 9 had G/G. By Fisher's test analysis there was no difference in the frequency of the promoter polymorphism between the severe and mild groups (p = 0.72; a worked computation for this 2 × 2 table is sketched below).
Table 3 Promoter and intron 1 hClC-2 polymorphisms (number of subjects with each genotype):
Promoter -693: FEV1 <40: AG (5), GG (11); FEV1 >70: AG (6), GG (9); p = 0.72
Intron 1 nt 358: FEV1 <40: GG (13), GC (2); FEV1 >70: GG (11), GC (3); p = 0.32
Intron 1 nt 427: FEV1 <40: AA (13), AG (2); FEV1 >70: AA (11), AG (3); p = 0.32
Intron 1 nt 1089: FEV1 <40: TT (9), CT (6); FEV1 >70: TT (6), CT (10); p = 0.21
Intron 1 nt 1909: FEV1 <40: GG (15), GC (0); FEV1 >70: GG (12), GC (2); p = 0.22
Intron 1. The first intron of human CLC-2 was amplified and the 2273 bp product confirmed by gel electrophoresis. This sequence correlates with bp 319453 to 321725 of ref|NT_0292533|Hs_29412. Human CLC-2 intron 1 has regions with as much as 74% sequence identity with rat (gi 2873366) and 85% with mouse (gi 28494743) (Figure 3a and 3b ). Examination of 31 human CF samples revealed four SNPs: 358 G/C, 427 A/G, 1089 T/C and 1909 G/C (Figure 3a ). There is complete linkage disequilibrium between SNPs 358 and 427. Two CF subjects with severe lung disease (FEV1 < 40%) had 358 G/C, 2 had 427 A/G, 6 had 1089 C/T, and none had 1909 G/C (Table 3 ). Of the mild subjects (FEV1 > 70%), 3 of 14 had 358 G/C, 3 of 14 had 427 A/G, 10 of 16 had 1089 C/T, and 2 of 14 had 1909 G/C. By Fisher's test analysis there was no difference in the frequency of any one of the intron 1 polymorphisms between the severe and mild groups (Table 3 ; p = 0.32, 0.32, 0.21, and 0.22 for SNPs 358, 427, 1089 and 1909, respectively). Figure 3 Sequence alignment of human, rat, mouse, guinea pig, and rabbit CLC-2 promoter. Site of human SNP at position -693 shown with asterisk. Conserved GC boxes underlined. Exon 20 Primers used to examine the potential exon 20 splice variant region in hCLC-2 amplified a 481 bp fragment that correlates with bp 328123 to 328556 of human genomic sequence NT_0292533 and 2446 to 2617 of hCLC-2 cDNA (accession S7770). No SNPs were identified in any of the 31 patient samples. Conclusions With an autosomal recessive pattern of inheritance, CF was long considered a monogenic disease with 1 mutant allele inherited from each parent. While CF neonatal screening is offered in several states of the U.S., counseling of families has been difficult, because CF genotyping does not easily predict the onset and severity of pulmonary complications [ 14 ]. Strategies to identify modifier genes for the CF phenotype are important for defining disease prognosis and developing new strategies to prevent progression of the disease.
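The Table 3 p-values come from Fisher's exact test on 2 × 2 genotype-by-severity tables. As a purely illustrative check (our own code, not the authors'), a minimal two-sided implementation applied to the promoter counts (severe: 5 A/G vs. 11 G/G; mild: 6 A/G vs. 9 G/G) reproduces the reported p = 0.72:

```java
/**
 * Two-sided Fisher's exact test for a 2x2 table. Illustrative sketch:
 * the two-sided p-value sums the probabilities of all tables with the
 * observed margins that are no more probable than the observed table.
 */
public final class FisherExact {

    // log(n!) computed by direct summation; adequate for small counts
    private static double logFactorial(int n) {
        double s = 0.0;
        for (int i = 2; i <= n; i++) s += Math.log(i);
        return s;
    }

    // Hypergeometric probability of the table [[a, b], [c, d]] with fixed margins
    private static double tableProb(int a, int b, int c, int d) {
        return Math.exp(logFactorial(a + b) + logFactorial(c + d)
                      + logFactorial(a + c) + logFactorial(b + d)
                      - logFactorial(a + b + c + d)
                      - logFactorial(a) - logFactorial(b)
                      - logFactorial(c) - logFactorial(d));
    }

    /** Rows are groups (severe, mild); columns are genotypes (A/G, G/G). */
    public static double twoSided(int a, int b, int c, int d) {
        double observed = tableProb(a, b, c, d);
        int row1 = a + b, col1 = a + c, n = a + b + c + d;
        double p = 0.0;
        // Enumerate every table consistent with the fixed margins
        for (int x = Math.max(0, col1 - (n - row1)); x <= Math.min(row1, col1); x++) {
            double prob = tableProb(x, row1 - x, col1 - x, n - row1 - col1 + x);
            if (prob <= observed * (1.0 + 1e-7)) p += prob;
        }
        return p;
    }

    public static void main(String[] args) {
        // Promoter -693 counts from Table 3: prints p = 0.72
        System.out.printf("p = %.2f%n", twoSided(5, 11, 6, 9));
    }
}
```

This is the same two-sided convention used by most statistical packages: all tables no more likely than the observed one contribute to the p-value.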
Several chloride conductances have been characterized in the mammalian lung: the cAMP-dependent cystic fibrosis transmembrane conductance regulator (CFTR) [ 15 ], the Ca ++ -dependent chloride channel (CaCC) [ 16 - 18 ], the outwardly rectifying chloride channel (ORCC) [ 19 ], the purinergic receptor-mediated chloride channel [ 20 , 21 ], and the voltage- and volume-regulated ClC family of chloride channels [ 3 , 22 - 24 ]. One or more of the chloride channels present in the respiratory epithelium may be able to partially compensate for defects in another. For example, there was no lung pathology in the first CF knock-out mouse models, where there is enhanced activity of a Ca ++ -dependent chloride channel [ 25 - 27 ]; however, lung disease is present when alternative chloride channels are absent [ 26 ]. The CF mouse, however, develops severe intestinal disease leading to premature death, which has been attributed to inadequate secretion via alternative chloride channels. Ca ++ -dependent chloride conductance is low in the intestine of the CF knock-out mouse. To take advantage of alternative chloride channels in the lung, UTP analogues have been used to stimulate chloride secretion in CF individuals via the purinergic receptor-mediated chloride channels [ 20 , 21 ]. One member of the ClC family of chloride channels may also be an alternative chloride conductance in the airway epithelium. We have demonstrated that CLC-2 mRNA and protein are abundantly expressed in the fetal lung [ 1 , 2 ] and that acidic pH can activate chloride secretion [ 3 , 22 ]. CLC-2 mRNA and protein levels are much higher in brain and kidney than in tissues that are more severely affected by defective CFTR (lung, intestine, liver) [ 1 ], suggesting that CLC-2 expression may protect against disease manifestations in certain tissues. CLC-2 immunolocalizes to the apical surface of the respiratory epithelium [ 2 , 22 ], consistent with the potential to function as a chloride channel in a secretory organ. In this study, we have shown that several CF subjects do express CLC-2 protein as adults (figure 1 ), unlike in rats [ 2 ]. In single channel recordings, overexpression of CLC-2 in a CF bronchial epithelial cell line demonstrated that chloride secretion can be enhanced [ 3 ]. While the CLC-2 knock-out mouse has degeneration of the retina and testes [ 28 ], loss of CLC-2 function has not been associated with lung disease. To date, overexpression of CLC-2 has not been described in an animal model to determine if this channel can be upregulated and serve as a potential therapeutic target for CF. In this study of 31 CF subjects, we identified 5 single nucleotide polymorphisms that have not previously been described for human CLC-2. One of these is -693 relative to the ATG start site of hCLC-2 (Genbank S7770). The -693 G/A polymorphism lies in a putative AP-2 binding site, predicted by TESS and MATINSPECTOR [ 12 , 13 ], and may be important for regulation of the gene. This polymorphism was no more frequent in the CF subjects with mild lung disease than in the subjects with severe lung disease. In the rat, SP-1 sites are important for gene regulation [ 5 , 29 , 30 ]. ClC-2 expression in the lung is developmentally downregulated at birth [ 2 ] and is dependent on Sp binding to GC boxes in the ClC-2 promoter [ 5 ]. These GC boxes are highly conserved in human and rat, suggesting they are important sites for gene regulation.
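GC content figures repeatedly in this work: it motivated the special PCR buffer in the Methods and characterizes the 69% GC-rich promoter and its GC boxes. A minimal helper, our own illustration rather than anything from the study, makes the computation explicit (the 17F primer sequence is taken from Table 1):

```java
/** Illustrative helper: fraction of G/C bases in a DNA sequence. */
public final class GcContent {

    public static double gcFraction(String seq) {
        int gc = 0, total = 0;
        for (char ch : seq.toUpperCase().toCharArray()) {
            if (ch == 'G' || ch == 'C') gc++;
            if ("ACGT".indexOf(ch) >= 0) total++; // spaces and ambiguity codes ignored
        }
        return total == 0 ? 0.0 : (double) gc / total;
    }

    public static void main(String[] args) {
        // Primer 17F from Table 1; the grouping spaces in the published oligo are skipped
        String p17f = "TCC CCT CCG GCC TAC CCC TTC CGG T";
        System.out.printf("17F GC fraction: %.2f%n", gcFraction(p17f)); // prints 0.72
    }
}
```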
Phosphorylation of Sp-1 decreases its DNA binding activity and coincides with the downregulation of CLC-2 expression [ 5 ]. SNPs in the conserved GC boxes were not identified in the subjects of this study. We also identified 4 polymorphisms in hCLC-2 intron 1. These did not appear more or less frequently in the mild CF subjects. Two of the polymorphisms were in complete linkage disequilibrium. The polymorphisms were not identified in areas that were highly conserved in rat or mouse. While splice variants of exons may be affected by intron/exon boundaries, we did not find any polymorphisms in the region of the rat exon 20 splice variant [ 6 ]. These findings suggest that CLC-2 is not regulated differently at the genomic level in relatively healthy CF adults. Lack of an association in this study does not exclude the possibility that CLC-2 plays a role in modifying the CF phenotype, as might be suggested by the variability of CLC-2 protein expression in primary respiratory epithelial cells from CF subjects in this study. Although we were limited by the inadequate power of a small sample size and by low phenotypic contrast, our data suggest that gene regulation of CLC-2 in relation to polymorphisms in regulatory domains does not play a major role in protection against CF lung disease. Studies that rely on recruitment of small numbers of patients have been shown to detect a difference when a strong relationship is present [ 31 ]. Another limitation may be in the selection of FEV1 at a single point in time, rather than using the rate of decline of FEV1. Other studies of CF modifier genes, which also relied on FEV1 at one time point, have similarly found difficulty in confirming a candidate gene. In addition, the effect on lung phenotype may occur at an earlier stage of CF lung disease, and examination of adults only, as in this study, may have limited our ability to detect a difference. The polymorphisms identified in this report should facilitate further investigation of CLC-2 regulation. While we did examine subjects with the same CF genotype (namely delF508 homozygous), measures of ion transport (sweat chloride, nasal potential difference), time to colonization with Pseudomonas, and frequency of pneumonia should be taken into account in future studies. The identification of candidate genes which may modify CF lung disease is important so that new therapies may be developed. Multidrug resistance genes have recently been identified that provide some "protection" to the CF lung phenotype [ 32 ]. Ion transport dysfunction of CFTR and the channels it regulates, however, may not be the only determinant of disease severity. Many have suggested that inflammatory mechanisms may also impact disease progression and survival in CF individuals [ 33 ]. Other classes of candidate genes possibly related to CF phenotype include tumor necrosis factor alpha (TNF-α), nitric oxide synthase (NOS), alpha 1-antitrypsin, mannose-binding lectin [ 34 ], and other ion channels such as the basolateral K+ channels [ 17 , 35 - 38 ]. Lastly, gene expression and function may be independent of genomic polymorphisms, as suggested by our data demonstrating variable expression of hClC-2 protein in CF nasal polyps, and this must also be considered as a mechanism whereby CLC-2 could alter the course of CF disease. Haug et al. recently identified mutations in the CLC-2 coding region that are associated with idiopathic generalized seizures in humans [ 39 ].
No lung disease was reported from loss of function of CLC-2, so presumably CLC-2 is not critical for the function of mature respiratory epithelium when CFTR is present. A ClC-2 knock-out mouse shows severe degeneration of the retina and testes, but no evident lung disease [ 28 , 40 ]. While there has been no report of a ClC-2 lung abnormality in these mice, they do not replicate the human seizure disorder, and mouse models do not exclude the possibility of a role in airway epithelial ion transport. For example, initial studies of CF knock-out mice also suggested no discernible lung disease that mimics CF in humans [ 41 , 42 ]. The activation of CLC-2 currents by acidic pH suggests that alterations of key regulatory domains of the channel may affect function. There is disagreement about whether or not a specific region of the N-terminus of CLC-2 is the sensor for acid and voltage regulation [ 43 - 45 ]. This study provides important information about the human CLCN2 genomic organization. Several polymorphisms of key regulatory domains of CLCN2 were identified in a cohort of subjects with cystic fibrosis who carry the same CF genotype. While we have found no significant association of CLC-2 polymorphisms with FEV1 % predicted in adulthood, further study of potential polymorphisms in CF subjects at an earlier age, and investigation of potential mutations in the coding region of CLC-2 that would lead to enhanced transepithelial chloride transport, would be necessary to determine if CLC-2 can modify CF. Competing interests The author(s) declare that they have no competing interests. Authors' contributions CB provided overall study design, analysis, and drafted the manuscript. TH designed sequencing methods and analyzed alignment. AS and PB conducted experiments. EB contributed to the design of the study and OS designed sequencing methods and provided analysis including statistics. All authors read and approved the final manuscript. Figure 4 Diagram of alignment of human CLC-2 intron 1 and mammalian homologues (H = human, R = rat, and M = mouse). Four single nucleotide polymorphisms (SNPs) are present at nt 358, 427, 1089, and 1909 (human). Figure 5 Sequence alignment of human, rat, and mouse CLC-2 intron 1 (nt 26-1488). Figure 6 Sequence alignment of human, rat, and mouse CLC-2 intron 1 (nt 1489-2138). Site of human SNP at position 1909 shown with asterisk. Pre-publication history The pre-publication history for this paper can be accessed here:
514718 | Atypical vessels as an early sign of intracardiac myxoma? | We report on a woman with previously unknown left atrial myxoma, who underwent percutaneous coronary intervention. 45 months after the initial coronary angiography, echocardiography demonstrated a large atrial myxoma, which had not been seen echocardiographically before. The retrospective analysis of the pre-intervention coronary angiography revealed atypical vessels in the atrial septum, which are interpreted as early signs of myxoma. | Background Primary cardiac tumors are rare disorders, with an incidence of 0.02% at autopsy. Three quarters of the primary tumors of the heart are benign, half of which are myxomas [ 1 ]. As noninvasive cardiac imaging becomes widely available, with increasing resolution provided by echocardiography, computed tomography and magnetic resonance imaging, cardiac tumors are being diagnosed more often. Angiography, apart from its preoperative role to rule out concomitant coronary artery disease, is rarely needed for the diagnostic work-up of cardiac tumors. This report describes the delayed presentation of a left atrial myxoma which was not depicted in an initial coronary angiography performed 51 months earlier in a woman with chest pain. Case Report A 62-year-old woman with known metabolic syndrome was referred to our clinic to exclude coronary artery disease invasively. She had been experiencing chest pain for four months, which had not increased in frequency or duration since it started. She denied pain at rest, nocturnal pain, difficulty breathing, or palpitations. An echocardiographic stress examination revealed significant ischemia with anteroseptal hypokinesia. Cardiac chambers were morphologically normal. The "baseline" echocardiography and stress echo gave no indication of a left atrial myxoma (Fig. 1 ). Figure 1 Stress echocardiography: long axis view (baseline). No sign of myxoma in left chamber or in left atrium. LA = left atrium, LV = left ventricle, Ao = Ascending aorta (with kind permission of Dr. Herbst, Potsdam) Coronary angiography revealed coronary artery disease with a stenosis of the proximal left anterior descending coronary artery (LAD) near the left main stem. A stent was implanted in the proximal LAD with no residual stenosis. The ventricle was morphologically normal. A coronary angiography performed 13 months after stent implantation showed no re-stenosis and a normal left ventricle. No echocardiogram was performed at this time. Thirty-two months after the control coronary angiography, the patient was readmitted because of increasing dyspnoea and palpitations. A transthoracic echocardiography disclosed a large (70 × 30 mm) mass in the left atrium attached to the interatrial septum. The tumor prolapsed into the left ventricle, obstructing the mitral valve orifice (Fig. 2 ). The mean pressure gradient across the mitral valve was 8 mm Hg. Figure 2 Echocardiography (45 months) long axis view. Myxoma in the left atrium prolapsing into the left ventricle. LV = left ventricle, My = myxoma A subsequent coronary angiography and LV and RV catheterization detected a mean diastolic pressure gradient of 12 mm Hg between the pulmonary capillary wedge and the left ventricular end-diastolic pressure, and no re-stenosis of the LAD stent. The angiography was notable for a large area with small atypical, tortuous vessels in the region of the interatrial septum. These vessels were shown to originate from branches of the right coronary artery (RCA) and the circumflex coronary artery (RCX) (Fig. 5 ).
Figure 5 Right coronary artery (RCA) in 60 degree LAO position (45 months, pre-operative coronary angiography). White arrow = atypical vessels in the interatrial septum Surgery was promptly performed and the tumor was successfully excised. Histology confirmed the diagnosis of a cardiac myxoma. A retrospective analysis of the initial coronary angiographies (baseline, 13 and 45 months) disclosed the atypical vessels in a small area of the interatrial septum (Fig. 3 , 4 , 5 ). Figure 3 Right coronary artery (baseline) in 90 degree LAO projection. White arrow = atypical vessels in the interatrial septum Figure 4 Left coronary artery (13 months) in 45 degree LAO and 30 degree CRAN projection. White arrow = atypical vessels in the interatrial septum Discussion Myxomas are benign but potentially dangerous, causing rhythm disturbances, peripheral embolization, or mechanical obstruction of the valves or of the atrial or ventricular cavity [ 2 , 3 ]. The site, mobility, and size of the myxoma determine the clinical course. Some authors found no correlation between the size of the tumor and the clinical picture [ 4 ]; others reported symptoms with left atrial myxomas weighing more than 70 g [ 5 ]. The rate of growth of myxomas is not exactly known [ 5 , 6 ]. An increase in size of 1.8 – 5.8 cm/year and in weight of up to 14 g/year has been reported [ 7 , 8 ]. The myxoma of our patient reached 70 × 30 mm before becoming symptomatic; growth to 7 cm over the roughly 45 months between the first angiography and the diagnosis corresponds to about 1.9 cm/year, within the reported range. Myxomas presenting with systemic embolism or intracavitary obstruction can be easily detected non-invasively. The early depiction of small intracardiac tumors by means of angiography relies on the detection of atypical vessels supplied by branches of the left or right coronary artery. Our case demonstrates that this early angiographic sign is difficult to find. However, vascular malformations are not pathognomonic for myxomas [ 9 ], and stromal tumors such as myxomas generally have a poor blood supply; hence, a coronary angiographic finding of a neo-vascularized or highly vascularized intracardiac area is more suggestive of another type of cardiac neoplasm. A cluster of small, tortuous and dilated vessels is also seen in old and organised thrombi, haemangiomas and venous malformations [ 10 ]. The size of the malformation has no relation to the size of the tumor [ 11 ]. It has been reported in the literature that myxomas can be induced by radiation [ 12 , 13 ]. Between the first presentation to our clinic and the diagnosis of myxoma, the patient underwent two coronary angiograms including stenting, with a cumulative radiological exposure of about 30 mSv (corresponding to at least 1500 chest X-rays). To our knowledge, the patient did not undergo any other relevant radiation exposure in the past. In the present case, a retrospective analysis of the patient's angiographies disclosed the atypical vessels, which were initially overlooked. These vessels could probably have been interpreted as an early sign of myxoma. Authors' contributions HPD and FK have written the manuscript and have equally contributed to this publication. HPD, VG and WR have performed the coronary angiographies. WK has performed cardiac surgery. HPD, ACB, FK and GB participated in the design and coordination of the final manuscript. All authors have read and approved the final manuscript. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC514718.xml |
548701 | The Molecular Biology Toolkit (MBT): a modular platform for developing molecular visualization applications | Background The large amount of data that are currently produced in the biological sciences can no longer be explored and visualized efficiently with traditional, specialized software. Instead, new capabilities are needed that offer flexibility, rapid application development and deployment as standalone applications or available through the Web. Results We describe a new software toolkit – the Molecular Biology Toolkit (MBT; ) – that enables fast development of applications for protein analysis and visualization. The toolkit is written in Java, thus offering platform-independence and Internet delivery capabilities. Several applications of the toolkit are introduced to illustrate the functionality that can be achieved. Conclusions The MBT provides a well-organized assortment of core classes that provide a uniform data model for the description of biological structures and automate most common tasks associated with the development of applications in the molecular sciences (data loading, derivation of typical structural information, visualization of sequence and standard structural entities). | Background Recent scientific and technical advances in the field of experimental biology, particularly in genomics, have produced large amounts of biological data, which has posed new conceptual challenges. The visualization and visualization-driven analysis of these experimentally derived data has become a key component of the scientific process. Until recently, these needs were typically approached by designing applications specialized in a set of well-defined specific tasks. So, for example, popular applications for molecular visualization include, but are not limited to, Molscript [ 1 ], PyMol [ 2 ], Rasmol [ 3 ] and Swiss-PdbViewer [ 4 ]. However, the analysis of these molecular data frequently requires novel approaches to visualization and integration with a variety of data types. Therefore, the ability to quickly prototype and develop software suitable for diverse tasks becomes paramount. Hence, libraries, like the MBT described here, are particularly useful, especially given that many applications can be accessed through the Web. This paper, aimed at bioinformatics software developers, provides a concise presentation of the design and capabilities of the toolkit, presents a number of models for its usage, and illustrates its performance with several applications. The paper is organized as follows. The following section provides a general overview of the application programming interface (API). The Results and Discussion section presents details about core components of the toolkit and illustrates its capabilities with two applications. The Conclusions section summarizes the main functionality of the toolkit, discusses availability and documentation, presents performance data and outlines future development plans. Implementation The application-programming interface (API) that serves as the foundation of the toolkit has emerged from the exploration of the typical needs encountered by a researcher in the analysis of a documented biological molecule. The list of requirements includes reading the data file and converting the information to data objects that are easily accessible by applications, extracting subsets of chemical components according to different chemical or biological properties of interest (e.g. 
chains, residues, CA atoms, ligands, etc.), deriving information that is not explicitly present in the original input data (e.g., covalent and hydrogen bonds, torsion angles, secondary structures) and visualizing chemical or physical properties of different subsets of the molecule, or the entire molecule. Based on the requirements identified above, we have arrived at the multi-layered design illustrated in Fig. 1 . The bottom (input/output) layer contains facilities for importing molecular data from a variety of sources. The StructureFactory class offers a uniform approach to loading structural data, independent of the format of the source. The class makes use of a set of loaders that can import data from a variety of sources, either on the same machine or located on a remote server. Moreover, this allows the developer to write applications that do not have to uniquely specify the source of data. Instead, a number of methods in the StructureFactory class enable loading of structures based on a series of source descriptors: file name, PDB id code, URL location, etc. The first loader capable of handling the source descriptor is used thus providing a data access and retrieval mechanism that is transparent to the user. The output of the load methods in the StructureFactory is a Structure object, which is effectively an interface to a summary of raw primary and secondary structure data. The Structure object is not restricted to holding structural information and can contain any other data relevant to the organization of the molecular entity. For instance, it can store the description of single or multiple protein sequences preserving the ordering and alignment of the amino acid residues and their properties. The StructureMap class builds the internal data model of the simple or complex molecule and provides hierarchical access to both raw data and derived information generated from the input. This includes access to chains, residues, atoms, bonds, nucleic acid components, ligand atoms, fragments associated with secondary structure components, or features defined by the user. The StructureStyles class provides information about rendering, coloring and selection attributes associated with any structure component produced by the StructureMap . A wide variety of methods in this class allow any module of an application, in particular any viewer, to set and retrieve the information describing the visually represented parameters of any structure component. The StructureDocument is designed as a container class that maintains a log of all loaded structures and viewers that are instantiated by an application in a given session. It also has the role of generating events associated with the addition or removal of structures and viewers. The next level of the API contains graphical user interface (GUI) elements. This portion of the package contains high level constructs, such as windows and panels designed to display 3D graphics, protein or DNA sequences or tree representations of the chemical components of the molecule, as well as lower level components that can be used to build a molecular scene according to a developer's preferences. Finally, at the application level, a number of applications are provided that can be used to illustrate the features of the toolkit, or as starting templates for new application development. 
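As a concrete illustration of this loading layer, the sketch below shows how an application might obtain a Structure and walk the derived hierarchy. The class names (StructureFactory, Structure, StructureMap, Chain) come from the text; the package, method names and signatures are our assumptions, not the published API:

```java
// Hypothetical usage sketch of the MBT loading layer; the package and all
// method names are assumed for illustration, not taken from the API docs.
import org.rcsb.mbt.model.*; // assumed package name

public class LoadExample {
    public static void main(String[] args) {
        // The factory tries each registered loader in turn, so the same
        // call accepts a file name, a PDB id code, or a URL.
        StructureFactory factory = new StructureFactory();
        Structure structure = factory.load("4HHB"); // assumed signature

        // StructureMap builds the hierarchical model from the raw data.
        StructureMap map = new StructureMap(structure); // assumed constructor
        for (int c = 0; c < map.getChainCount(); c++) {  // assumed accessors
            Chain chain = map.getChain(c);
            System.out.println("Chain " + chain.getChainId()
                    + ": " + chain.getResidueCount() + " residues");
        }
    }
}
```

Because the factory hides the loader choice, application code of this kind would not change when the data move from a local file to a remote server.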
For example, the applications illustrate how to use methods in the StructureMap class to retrieve all secondary structures within a given molecule, or how to obtain a list of atoms or residues. This is the level that most developers would modify in order to create custom applications tailored to their specific needs. Note that care has been taken to further enable application developers to use different components of the toolkit to build purely analytical applications that have no visualization component. For example, a developer could write a command-line tool that simply loads a molecule and gathers statistics about the structure data without presenting any graphical interface. Such an application could be part of a back-end process that runs in a web-server environment. Results and discussion Core components The MBT was developed as an object-oriented Java-based environment and hence is flexible, modular and lightweight, which facilitates maintenance and web deliverability and limits the required computer resources. Moreover, the toolkit can be easily extended and, having been written completely in Java, is effectively platform-independent. The internal data model describes the hierarchical organization of a protein molecule – Structure, Chain, Residue, and Atom. We have implemented this data model by designing a class hierarchy to efficiently encode its elements. The components of this class hierarchy (Fig. 2 ) are built around a Structure object (Fig. 1 ). Structure is a container designed to hold all raw information pertaining to the given unit of biological information: protein sequence, genomic sequence, taxonomy information, experimental data and so on. The remainder of the class hierarchy represents the components of the macromolecule. The toolkit is not inherently limited to operating on biological molecules, and can easily be used to manipulate small organic and inorganic molecules. Selection, rendering and coloring attributes for all biological molecules are handled by StructureStyles . The Fragment class represents, for example, one of the four known conformations: α-helix, β-strand, turn or random coil, but is sufficiently general to define ranges of residues grouped according to any property of interest. In addition, the data model contains classes that describe other objects associated with derived information, such as covalent or hydrogen bonds. In order to provide uniform style features across the toolkit a StructureStyles class has been implemented. This class maintains a representation of the rendering characteristics of all structure component objects so that any application module has access to the style data for any given object that needs to be visually represented. The set of style parameters maintained by the StructureStyles class is comprehensive – it is the union of the sets of style parameters required by any known viewer. Communication between different components of the toolkit is enabled by a flexible event handling mechanism. Changes in the data, rendering styles, addition or removal of viewers and many other actions with toolkit-wide impact generate descriptive events, which are recursively propagated across the toolkit components, allowing an automatic synchronization of the state of different active parts of an application. Version 1.0 of the MBT provides three standard viewers: a 3D structure viewer, a primary sequence viewer, and a tree viewer. The 3D viewer is implemented using the Java3D™ extension.
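As a brief aside before the viewer details, the style and event machinery just described might be exercised as follows; every accessor and mutator name in this sketch is an assumption made for illustration:

```java
// Hypothetical sketch: changing a style through StructureStyles so that
// the change propagates, via the event mechanism, to every active viewer.
import org.rcsb.mbt.model.*; // assumed package, as above

public class StyleExample {
    public static void recolor(Structure structure) {
        StructureMap map = structure.getStructureMap();        // assumed accessor
        StructureStyles styles = map.getStructureStyles();     // assumed accessor

        Residue residue = map.getResidue(0);                   // assumed accessor
        styles.setColor(residue, new float[] {1.0f, 0.0f, 0.0f}); // assumed mutator
        styles.setSelected(residue, true);                     // assumed mutator
        // A descriptive style event now propagates recursively, keeping the
        // 3D, sequence, and tree viewers synchronized with no extra code.
    }
}
```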
The use of Java3D for visualization was motivated by the convenience of the availability of high-level constructs for building complex 3D scenes. Analysis of the performance aspects [ 5 ] of Java3D has shown that some performance issues can be overcome through a careful organization of the molecular scene. Existing applications indicate that the visualization of most molecules using typical desktops and graphics cards is fast and fully interactive. For example, typical protein data sets with four to five thousand atoms (e.g., PDB identifiers 4HHB, 10MH, 6GEP) load and display in four or five seconds on a Pentium III 1.2 GHz laptop computer. A schematic representation of the data structure used by the 3D viewer is shown in Figure 3 . For each primary or secondary structure element, a geometry object ( GeometryEntity ) is built, which is then attached as a BranchGroup node to a SceneGraphObject (the common superclass for all graph objects in Java3D) representing the three-dimensional image of the molecule. The PsGeometry (primary structure) and SsGeometry (secondary structure) classes provide a number of methods that build complete 3D scenes from a given set of primary or secondary structure data objects (See Fig. 2 ). The geometry engine of the 3D viewer uses a flexible approach to generate ribbon-like surfaces. It allows the construction of ribbons using an extrusion with any cross-sectional shape. A few of the most commonly used shapes are immediately available as core components of the toolkit. However, developers could easily implement and register with the toolkit any additional shapes that may be of interest in their specific applications. The quality of geometry can be controlled either directly, by setting individual geometry rendering parameters, or indirectly, by a general quality parameter that optimizes the number of facets/vertices used in the construction of different geometric shapes. This allows for an easy adjustment of the application parameters in a wide performance-quality range, from very fast line-only drawing to a somewhat slower, publication-quality rendering. The sequence viewer is a module designed to display primary sequences of proteins and nucleic acids that are either derived from the loaded structures or acquired from individual sequence files or sequence alignments. The sequence viewer uses AWT drawing methods, which do not impose any specific requirements on the client system, as they are part of the standard Java distribution. The viewer is designed as a full-featured module capable of performing most of the sequence analysis tasks, including basic statistics, pattern and motif searching and display of secondary structure mapping onto the sequence. The viewer is capable of displaying an unlimited number of sequences and provides multiple representation options. The latter include residue coloring by several criteria (with the possibility of easy extension), setting the sequence display to any of the available system fonts, a flexible selection system, and more. As stated, the integrated event handling of the toolkit allows for simultaneous updates of the presentation layer for all participating viewers. Hence, the toolkit has built-in support for common selection and common coloring across all registered viewers. This offers an important visual cue in many applications, linking, for example, sequence and structural components. The tree viewer offers a hierarchical view of all components of a given molecule.
It reflects the logical organization of the derived StructureMap data including Molecule , Chain , Residue , and Atom objects. The tree view provides a convenient mechanism to select portions of a molecule based upon the biological relationship between atoms, residues, and chains. Finally, the MBT provides a repository of data and methods that can be used for the retrieval and/or derivation of physical, chemical and structural information associated with the molecules loaded by an application. For example, the package contains a periodic table with physico-chemical properties of the elements, as well as methods for the derivation of the secondary structure information, using the Kabsch-Sander algorithm [ 6 ]. Full details of all these features are provided with the documentation. Applications built using the MBT Applications can be explored and downloaded from the MBT Website [ 7 ]. They have been tested on a variety of UNIX, Windows and MAC OS X platforms. The Ligand Explorer [ 8 ] (a.k.a. LigPro; Figure 4 ) is an integral part of the reengineered RCSB Protein Data Bank (PDB) [ 9 , 10 ], which is currently in beta testing. In the present PDB, a user interested in protein-ligand interactions must download the structure, decide on a graphics program and likely learn a scripting language to provide details of hydrophilic and hydrophobic interactions between protein and ligand at different cutoff distances. Ligand Explorer achieves this at the push of a button. This produces a view with all ligands highlighted. The user then selects a ligand for a review of detailed interactions. Ligand Explorer can be downloaded as a separate application and used to access local files or files on the PDB's servers. The protein kinase exploration tool (Fig 5 .) is part of the new protein kinase resource [ 11 ]. It uses the 3D viewer supplied by the toolkit with a few modifications that allow more extensive coloring and rendering options. The multiple sequence viewer presents the multiple sequence alignments resulting from the multiple structure alignments which are stored in the database. Another viewer displays the superfamily relationship of the sequences present in the database. Conclusions The molecular biology toolkit (MBT) provides a set of pluggable and extensible classes for use by application developers interested in the visualization and analysis of macromolecular data. The MBT provides a set of pre-written data loaders, viewers, a common data model and the means to add to and customize the toolkit for specific applications delivered as applets through the Web or as standalone applications. Base functionality and comparable tools (where applicable) are as follows: • Classes to load raw data from a number of common protein structure and sequence data sources (PDB, mmCIF, FASTA, etc.) and a means to easily add new "loader" modules independent of the applications they might serve. • A common data model to which raw information is imported, mapped, and indexed. A number of data record types ( StructureComponent objects) are provided (e.g., Atom, Bond, Residue, Chain) and new data types are easily registered. Further the data model provides an extensible means to describe viewable or visible attributes (e.g., color, radius, drawing style) of these objects. • Written entirely in Java, programs may be embedded (using Java WebStart or the Java Plug-In) directly inside web pages. This enables the deployment of tightly coupled interactive web content much like the popular MDL Chime plug-in. 
• Applications are not restricted to the features provided by the MBT APIs. The Programmers Guide details how to extend the system. • With source code provided, core features of the toolkit may be directly modified or extended for independent use (though, this may cause your code to diverge from and become incompatible with subsequent releases of MBT). However, adding code within the existing framework and contributing it back for others to use is encouraged. The source code has been extensively commented to produce a rich and complete set of hyper-linked javadoc API documents. • A set of pre-written viewers (Sequence, Structure, Tree) that can be extended, replaced, omitted, or augmented with completely new viewers that implement entirely new visualization techniques. • The 3D StructureViewer module provides visual representations similar to RasMol and Molscript such as balls-and-sticks, CPK spheres, split-bonds, extrusion/ribbon-style backbone traces, and secondary structure cartoons (Helix, Turn, Coil, Strand). It also provides 3D labeling. • A scripting interface is being developed for MBT to enable toolkit functionality to be command-line or even script-driven (similar to MDL Chime or PyMol). Version 1.0 of the toolkit and sample applications, including those described here, are available for download from the project web page [ 7 ]. The same site contains links to various documentation pages (Project Introduction, Talk/Presentation Slides, Related Links, Installation Guide, Build Guide, Programmers Guide, Examples Source, and Toolkit API). The MBT has been tested on common hardware and UNIX, Windows and MAC OS X operating systems. A good consumer-level graphics card is recommended. The loading and generation of a 3D scene when representing a typical protein structure takes a few seconds. Large structures from the PDB [ 9 ] that contain over 10 5 atoms require physical memory in excess of 500 MBytes and on a notebook computer with a 1.2 GHz processor can take nearly one minute. Efforts at optimization are on-going. New applications are on-going including the generation of high quality images for all structures in the Protein Data Bank and new ways of visualizing protein-protein interactions. We invite contributions to the MBT by sending mail to mbt@sdsc.edu . Bugs may be reported to a bug tracker available on the project web site [ 7 ]. Availability and requirements • Project Name: Molecular Biology Toolkit (MBT) • Project Home Page: • Operating System: Platform independent • Programming Language: Java • Other requirements: Java 1.3.1 or higher, Java3D • License: Free for educational, research and non-profit purposes • Any restrictions to use by non-academics: Contact the University of California at San Diego's Technology Transfer Office ( invent@ucsd.edu , 1-858-534-5815) Authors' contributions JLM is one of the designers of the API and co-developer of the toolkit. AG designed and implemented the geometry generation modules, implemented the algorithms for secondary structure generation and bond detection, and drafted the paper. OVB developed the PKR Explorer. QZ developed the Ligand Explorer. PEB coordinated the whole project, suggesting the general functionality and scientific objectives of the toolkit. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC548701.xml |
551536 | Preferred analysis methods for Affymetrix GeneChips revealed by a wholly defined control dataset | A 'spike-in' experiment for Affymetrix GeneChips is described that provides a defined dataset of 3,860 RNA species. A 'best route' combination of analysis methods is presented which allows detection of approximately 70% of true positives before reaching a 10% false discovery rate. | Background Since their introduction in the mid 1990s [ 1 , 2 ], expression-profiling methods have become a widespread tool in numerous areas of biological and biomedical research. However, choosing a method for analyzing microarray data is a daunting task. Dozens of methods have been proposed for the analysis of both high-density oligonucleotide (for example, Affymetrix GeneChip) and spotted cDNA or long oligonucleotide arrays, with more being put forward on a regular basis [ 3 ]. Moreover, it is clear that different methods can produce substantially different results. For example, two lists of differentially expressed genes generated from the same dataset can display as little as 60-70% overlap when analyzed using different methods ([ 4 ] and see Additional data file 1). Despite the large number of proposed algorithms, there are relatively few studies that assess their relative performance [ 5 - 9 ]. A significant challenge to undertaking such studies is the scarcity of control datasets that contain a sufficiently large number of known differentially expressed genes to obtain adequate statistics. The comparative studies that have been performed have used a small number of positive controls, and have included a background RNA sample in which the concentrations of the various genes are unknown, preventing an accurate assessment of false-positive rates and nonspecific hybridization. The most useful control datasets to date for evaluating the effectiveness of analysis methods for Affymetrix arrays are cRNA spike-in datasets from Affymetrix and Gene Logic. The Affymetrix Latin square dataset [ 10 ] is a series of transcriptional profiles of the same biological RNA sample, into which 42 cRNAs have been spiked at various known concentrations. The dataset is designed so that, when comparing any two hybridizations in the series, all known fold changes are powers of two. The Gene Logic dataset [ 11 ] has a similar experimental design, but with 11 cRNAs spiked in at varying fold changes, ranging from 1.3-fold upwards. Here we present a new control dataset for the purpose of evaluating methods for identifying differentially expressed genes (DEGs) between two sets of replicated hybridizations to Affymetrix GeneChips. This dataset has several features to facilitate the relative assessment of different analysis options. First, rather than containing a limited number of spiked-in cRNAs, the current dataset has 1309 individual cRNAs that differ by known relative concentrations between the spike-in and control samples. This large number of defined RNAs enables us to generate accurate estimates of false-negative and false-positive rates at each fold-change level. Second, the dataset includes low fold changes, beginning at only a 1.2-fold concentration difference. This is important, as small fold changes can be biologically relevant, yet are frequently overlooked in microarray datasets because of a lack of knowledge as to how reliably such small changes can be detected. 
Third, our dataset uses a defined background sample of 2,551 RNA species present at identical concentrations in both sets of microarrays, rather than a biological RNA sample of unknown composition. This background RNA population is sufficiently large for normalization purposes, yet also enables us to observe the distribution of truly nonspecific signal from probe sets which correspond to RNAs not present in the sample. We have used this dataset to compare several algorithms commonly used for microarray analysis. To perform a direct comparison of the selected methods at each stage of analysis, we applied all possible combinations of options to the data. Thus, it was possible to assess whether some steps are more critical than others in maximizing the detection of true DEGs. Our results show that at several steps of analysis, large differences exist in the effectiveness of the various options that we considered. These key steps are: first, adjusting the perfect match probe signal with an estimate of nonspecific signal (the method from MAS 5.0 [ 12 ] performs best); second, checking that the log fold changes are roughly distributed around 0 (by observing the so-called M versus A plot [ 13 ], the plot of log fold change (M) versus average log signal intensity (A)), and if necessary, performing a normalization at the probe-set level to center this plot around M = 0; and third, choosing the best test statistic (the regularized t -statistic from CyberT [ 14 ] is most accurate). Overall, we find a significant limit to the sensitivity of microarray experiments to detect small changes: in the best-case scenario we could detect approximately 95% of true DEGs with changes greater than twofold, but less than 30% with changes below 1.7-fold before exceeding a 10% false-discovery rate. We propose a 'best-route' combination of existing methods to achieve the most accurate assessment of DEGs in Affymetrix experiments. Results and discussion Experimental design A common use of microarrays is to compare two samples, for example, a treatment and a control, to identify genes that are differentially expressed. We constructed a control dataset to mimic this scenario using 3,860 individual cRNAs of known sequence in a concentration range similar to what would be used in an actual experimental situation (see Materials and methods). The cRNAs were divided into two samples - 'constant' (C) and 'spike' (S) - and each sample was hybridized in triplicate to Affymetrix GeneChips (six chips total). The S sample contains the same cRNAs as the C sample, except that selected groups of approximately 180 cRNAs each are present at a defined increased concentration compared to the C sample (Figure 1 , Table 1 ). Out of the 3,860 cRNAs, 1,309 were spiked in with differing concentrations between the S and C samples. The rest (2,551) are present at identical relative concentration in each sample, to serve as a guide for normalization between the two sets of microarrays. For the sake of consistency with typical discussions of microarray experiments, we sometimes refer to the cRNAs with positive log fold changes as DEGs, despite their not representing true gene-expression data. Assignment of Affymetrix probe sets to DGC clones In the Affymetrix GeneChip design, the expression level of each RNA species is reported by a probe set, which in the DrosGenome1 chip [ 15 ] comprises 14 oligonucleotide probe pairs. 
Each probe pair contains two 25-mer DNA oligonucleotide probes; the perfect match (or PM) probe matches perfectly to the target RNA, and the mismatch (or MM) probe is identical to its PM partner probe except for a single homomeric mismatch at the central base-pair position, and thus serves to estimate nonspecific signal. The DrosGenome1 chip used in this experiment is based on release version 1.0 of the Drosophila genome sequence and thus does not represent the most up-to-date annotated version of the genome. To ensure that probe-target assignments are made correctly, we assigned the 14,010 probe sets on the DrosGenome1 GeneChip to the spiked-in RNAs by BLAST of the individual PM probe sequences against the Drosophila Gene Collection release 1.0 (DGC [ 16 ]) clone sequences that served as the template for the cRNA samples (Materials and methods). Of the 3,860 DGC clones used in this study, 3,762 (97%) have full-length cDNA sequence available at the DGC web site, 90 have 3' and 5'-end sequence only, and eight have no available sequence. For each probe set, all clone sequences with BLAST matches to PM probe sequences in that probe set are collected, allowing at most two (out of 25 base-pair (bp)) mismatches, and only allowing matches on the correct strand. If at least three PM sequences match to a given clone, then the probe set is assigned to that clone. Matches of one probe set to more than one clone are allowed. In this manner, 3,866 probe sets are assigned to at least one DGC clone each. Among these probe sets, 1,331 have an increased concentration between the S and C chips, whereas 2,535 represent RNAs with equal concentration between the two samples. Among those probe sets which do not have any assignment using this criterion, if fewer than three PM probes within the probe set have a BLAST match to any clone, the probe set is then called 'empty' (that is, its signal should correspond to nonspecific hybridization). There are 10,131 empty probe sets; combined with the 2,535 1x probe sets, about 90% of the probe sets on the chip represent RNAs with constant expression level between the C and S samples. The rest of the probe sets are then called 'mixed', meaning that they match to more than one clone, but each with only a few PM probe matches. There are only 13 mixed probe sets. The numbers of probe sets assigned to each fold-change class are depicted in Table 1 . Assessment of absent/present call metrics Our dataset design provides the rare knowledge of virtually all of the RNA sequences within a complex sample (excepting the small number (3%) of clones for which only partial sequence was available, and the possible rare mistakenly assigned or contaminated clone). We can therefore evaluate various absent/present call metrics on the basis of their ability to distinguish between the known present and absent RNAs. We investigate this issue at both the probe pair level and probe set level. For the probe pair level assessment, we first identify the probe pairs which we expect to show signal, and those which should not. We thus define two classes of probe pairs: first, perfect probe pairs, whose PM probe matches perfectly to a target RNA sequence, and neither PM nor MM probe matches to any other RNA in the sample with a BLAST E-value cutoff of 1 and word size of 7, and second, empty probe pairs, whose PM and MM probes do not match to any RNA sequence when using the same criteria. On the chip, which contains 195,994 probe pairs, there are 50,859 perfect probe pairs and 117,904 empty ones. 
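The assignment rule described above reduces to a small decision procedure. The sketch below encodes it in plain Java; the class and method names and the data layout are illustrative, not the authors' scripts, and counting a probe once per clone it matches is a simplifying reading of the rule.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class ProbeSetAssignment {
    /**
     * matchesPerClone: for one probe set, the number of its PM probes with a
     * BLAST match (at most 2 mismatches out of 25 bp, correct strand) to each
     * clone. A probe set is assigned to every clone matched by at least
     * three of its PM probes.
     */
    static List<String> assignedClones(Map<String, Integer> matchesPerClone) {
        List<String> assigned = new ArrayList<>();
        for (Map.Entry<String, Integer> e : matchesPerClone.entrySet()) {
            if (e.getValue() >= 3) assigned.add(e.getKey());
        }
        return assigned;
    }

    // "empty" when fewer than three PM probes match any clone; "mixed" when
    // several clones are matched but none by three or more probes.
    static String classify(Map<String, Integer> matchesPerClone) {
        if (!assignedClones(matchesPerClone).isEmpty()) return "assigned";
        int matchedProbes = 0;  // simplification: sums per-clone counts
        for (int n : matchesPerClone.values()) matchedProbes += n;
        return matchedProbes < 3 ? "empty" : "mixed";
    }
}
```

The perfect/empty distinction at the probe-pair level, from which the 50,859 and 117,904 counts above are derived, follows the same spirit one level down.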
Observation of the signal for these probe pairs (Figure 2a,b ) clearly shows that there is considerable signal intensity for the empty probe pairs. Figure 2c shows the ability of several metrics - log 2 (PM/MM), PM-MM, , and log 2 (PM) - to distinguish between perfect and empty probe pairs, by calculating receiver-operator characteristics (ROC) curves using the perfect probe pairs as true positives and the empty ones as true negatives. Each point on a curve depicts the specificity and sensitivity for RNA detection, when using a specific value of the corresponding metric as a cutoff for classifying probe sets as present or absent. Instead of depicting the false-positive rate (the fraction of true negatives that are detected as present) on the x -axis, which is customary for these types of graphs, we show the false-discovery rate (the fraction of detected probe sets which are true negatives), which distinguishes between the metrics more effectively for the top-scoring probe sets. Figure 2 clearly shows that metrics that compare the PM signal with the MM signal, such as log 2 (PM/MM) and PM-MM, are the most successful at distinguishing perfect from empty probe pairs. This indicates that the PM signal alone is a less effective indicator of RNA presence, probably because the probe hybridization affinity is highly sequence-dependent. However, even with the more successful metrics, only about 60% of the perfect probe sets are detected before reaching a 10% false-discovery rate, indicating that there is still a high level of variability in probe pair sensitivity, even when using the MM signal to estimate the probe hybridization affinity. When signals from the 14 probe pairs in each probe set are combined to create a composite absence/presence call, a much larger fraction of the spiked-in RNA species can be detected reliably. To obtain absent/present calls at the probe-set level, we perform the Wilcoxon signed rank test using each of the metrics listed above [ 17 ]. The p -values from this test are used to generate the ROC curves in Figure 2d . Again, the best results are obtained when the metric compares PM with MM signals, as opposed to monitoring signal alone. The metric used in MAS 5.0 ((PM-MM)/(PM+MM)), which is equivalent to log 2 (PM/MM), performs best. Therefore, the MM signals are important in generating accurate presence/absence calls. In our dataset, about 85% of the true positives could be detected before having a 10% false-discovery rate. The detection of perfect probe pairs is not improved when we include additional information from replicates. The 15% of probe sets which are called absent may represent truly absent RNAs, owing to failed transcription or labeling (see Additional data file 5). However, as we do not have an independent measure of failed transcription for the individual cRNA sequences in the target sample, we cannot completely rule out the possibility that they are the result of non-responsive probes or a suboptimal absent/present metric that fails to score low-abundance cRNAs. Regardless, as non-responsive probes or missing target cRNAs should affect both the C and S chips identically, these factors should not limit the value of this dataset in making relative assessments of different analysis methods. Generating expression summary values The first task in analyzing Affymetrix microarrays is to combine the 14 PM and 14 MM probe intensities into a single number ('expression summary') which reflects the concentration of the probe set's target RNA species. 
Generating this value involves several discrete steps designed to subtract background levels, normalize signal intensities between arrays and correct for nonspecific hybridization. To compare the effectiveness of different analysis packages at each of these steps, we created multiple expression summary datasets using every possible (that is, compatible) combination of the options described below. Algorithms were chosen for their popularity with microarray researchers and their open-source availability, and were generated using the implementations found in the Bioconductor 'affy' package [ 18 ]. Figure 3 summarizes the options that we chose within Bioconductor. We also used the dChip [ 19 ] and MAS 5.0 [ 12 ] executables made available by the respective authors in order to cross-check with the open-source implementations within Bioconductor. In addition, we applied two analysis methods that incorporate probe sequence-dependent models of nonspecific signal ( Perfect Match [ 20 ] and gcrma [ 21 ]). The combinations of options that were used to generate the 152 expression summary datasets are detailed in Additional data file 2. Background correction An estimate of the background signal, which is the signal due to nonspecific binding of fluorescent molecules or the autofluorescence of the chip surface, was generated using two possible metrics. The MAS background [ 17 ] is calculated on the basis of the 2nd percentile signal in each of 16 subsections of the chip, and is thus a spatially varying metric. The Robust Multi-chip Average ( RMA ) algorithm [ 22 ] subtracts a background value which is based on modeling the PM signal intensities as a convolution of an exponential distribution of signal and a normal distribution of nonspecific signal. Normalization at the probe level The signal intensities are normalized between chips to allow comparisons between them. Because in our dataset, a large number of RNAs are increased in S versus C (and none are decreased), commonly used methods often result in apparent downregulation for spiked-in probe sets in the 1x change category. We thus added a set of modified normalization methods which used our knowledge of the 1x probe sets. The following different methods were applied. Constant is a global adjustment by a constant value to equalize the chip-wide mean (or median) signal intensity between chips. Constantsubset is the same global adjustment but equalizing the mean intensity for only the probe sets with fold change equal to 1. Invariantset [ 23 ] is a nonlinear, intensity-dependent normalization based on a subset of probes which have similar ranks (the rank-invariant set) between two chips. Invariantsetsubset is the same as invariantset but the rank-invariant set is selected as a subset of the probe sets with fold change equal to 1. Loess normalization [ 24 ] is a nonlinear intensity-dependent normalization which uses a local regression to make the median fold change equal to zero, at all average intensity levels. Loesssubset normalization is the same as loess but using only the probe sets with fold change equal to 1. Quantile normalization [ 24 ] enforces all the chips in a dataset to have the same distribution of signal intensity. Quantilesubset normalization is the same as quantile but normalizes the spiked-in and non-spiked-in probe sets separately. PM correction We chose three ways to adjust the PM signal intensities to account for nonspecific signal. The first is to subtract the corresponding MM probe signal ( subtractmm ). 
The second is the method used in MAS 5.0, in which negative values are avoided by estimating the nonspecific signal when the MM value exceeds its corresponding PM intensity [ 17 ]. The third is PM only (no correction). The subtractmm and MAS methods are compatible only with the MAS background correction method; that is, it does not make sense to combine these with RMA background correction. Expression summary The 14 probe intensity values were combined using one of the following robust estimators: Tukey-biweight ( MAS 5.0); median polish (RMA); or the model-based Li-Wong expression index (dChip). Analyses including the subtractmm PM correction method require dealing with negative values when PM is less than MM, which occurs in about a third of the cases. Within Bioconductor, the Li-Wong estimator can handle negative values, but the other two metrics mostly output 'not applicable' (NA) for the probe set when any of the constituent probe pairs has negative PM - MM. The result for MAS and median polish is NA for about 85% of the probe sets on the chip. To study the consequence of losing so many probe sets, we modified one of these two metrics ( median polish ) to accept negative (PM - MM) ( medianpolishna ), and added this metric whenever subtractmm was used. Normalization at the probe set level Many of the expression summary datasets that were produced still show a dependence of fold change on the signal intensity (Figure 4a ). To correct this, a second set of expression summary datasets was created, in which a loess normalization at the probe set level was used to center the log-fold changes around zero (Figure 4b ). Comparison of the observed fold changes with known fold changes For each of the 150 expression summary datasets that we generated, fold changes between the S and C samples were calculated and then compared with the actual fold changes. Most expression summary datasets show good correlation between the observed and actual fold changes (Figure 5 ). The greatest sources of variability are probe sets with low signal intensity; as Figure 5b shows, the correlation improves dramatically when we filter out the probe sets with low signal. For all the expression summary datasets, the agreement between observed and actual fold changes is good (R 2 = 86 ± 3%) when the probe sets in the lowest quartile of signal intensity are filtered out. The expression summary datasets which involve correcting the PM signal by subtracting the MM signal ( subtractmm ) have the highest correlation coefficient, because low-intensity probe sets have been filtered out during processing, as described above. We therefore suggest that an important feature of a successful microarray analysis is to account for probe sets with low signal intensity, either by filtering them out or by using a signal-dependent metric for significance. Several ways of accomplishing such filtering are described below. We also observed that the fold changes resulting from the chips are consistently lower than the actual fold changes. Apparently, the decrease in fold change is only partly the result of signal saturation (Figure 5b-c ), and is not a byproduct of the robust estimators used to calculate expression summaries (because the low fold changes are also observed at the probe pair level; see Additional data file 3). In other experiments we have also observed that our Affymetrix fold-change levels are smaller than those obtained by quantitative reverse transcription (RT)-PCR (data not shown). 
One likely explanation is that we do not have an adequate estimate for nonspecific signal. For example, if we choose the MM signal as the nonspecific signal (thus calculating PM - MM, or PM - CT from MAS 5.0), we are probably overestimating the nonspecific signal, as the MM intensity value responds to increasing target RNA concentrations, and therefore contains some real signal. On the other hand, if we choose not to use a probe sequence-dependent nonspecific signal (such as in RMA), we are likely to underestimate the nonspecific signal for a large number of probes. In either case, the result is decreased fold change magnitudes. Artificially low fold-change values have been noted by others, including those investigating the Affymetrix Latin square [ 6 ], GeneLogic [ 22 ] and other [ 25 ] datasets, although some of the differences they report are smaller than are observed here. Test statistics and ROC curves Because a typical microarray experiment contains a large number of hypotheses (here 14,010) and a limited number of replicates (in this case three), high false-positive rates are a common problem in identifying DEGs. An important factor in minimizing false positives is to incorporate an appropriate error model into the signal/noise metric. We compared three t -statistic variants, which differ in their calculations of noise. The first is significance analysis for microarrays ( SAM ) [ 26 ], in which the t -statistic has a constant value added to the standard deviation. This constant 'fudge factor' is chosen to minimize the dependence of the t -statistic variance on standard deviation levels. The second is CyberT [ 14 ], in which the standard deviation is modeled as a function of signal intensity. The third is the basic (Student's) t -statistic. For CyberT and the basic t -test, we performed the tests on the expression summaries after log transformation, as well as on the raw data. As shown in the example ROC curve, the CyberT statistic outperforms the other statistics for the vast majority of expression summary datasets (Figure 6a ). Inspection of the false positives and false negatives shows the reason for the different performance. Because CyberT uses a signal intensity-dependent standard deviation, probe sets at low signal intensities have reduced significance even when their observed fold change is high (Figure 6b ). As shown in Figure 6c , the SAM algorithm (using the authors' Excel Add-in) does not effectively filter out these same false-positive probe sets (with low signal intensity and high fold change). Upon further inspection, we observed that the SAM algorithm favors using large values for the constant fudge factor, so that the t -statistic depends more on the fold change value, than on the noise level. The basic t -statistic is prone to false positives resulting from artificially low standard deviations, owing to the limited number of replicates in a typical microarray experiment (scattered magenta spots in Figure 6d ). This comparison agrees with the result of Broberg [ 9 ], who also found that the CyberT approach (there called 'samroc') outperforms several other methods. Because the CyberT statistic clearly performs the best, we use only this statistic to compare the options for the other steps in microarray analysis, below. Comparison of options at each of the other analysis steps Performance of the various options that were investigated varied significantly, as seen by the ROC curves shown in Figure 7 . 
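Before walking through those comparisons, the CyberT construction just selected can be made concrete. The sketch below is a plain-Java illustration of the general shape of such a regularized t-statistic, in which each group's variance is shrunk toward a background standard deviation estimated from probe sets of similar signal intensity; the helper names and the exact weighting are assumptions, not the authors' implementation.

```java
public class RegularizedT {
    // bgSdC/bgSdS: average standard deviation of probe sets with similar
    // average signal intensity; weight: pseudo-count controlling shrinkage.
    static double cyberTLike(double[] c, double[] s,
                             double bgSdC, double bgSdS, double weight) {
        double mC = mean(c), mS = mean(s);
        double vC = shrunkVariance(c, mC, bgSdC, weight);
        double vS = shrunkVariance(s, mS, bgSdS, weight);
        return (mS - mC) / Math.sqrt(vC / c.length + vS / s.length);
    }

    // Bayesian-style shrinkage of the group variance toward bgSd^2:
    // (w * bg^2 + sum of squared deviations) / (w + n - 2).
    static double shrunkVariance(double[] x, double m, double bgSd, double w) {
        double ss = 0;
        for (double v : x) ss += (v - m) * (v - m);
        return (w * bgSd * bgSd + ss) / (w + x.length - 2);
    }

    static double mean(double[] x) {
        double sum = 0;
        for (double v : x) sum += v;
        return sum / x.length;
    }
}
```

With only three replicates per group, the background term dominates unless the weight is small, which is precisely what protects against the artificially low standard deviations that trip up the basic t-statistic. With the statistic fixed, the remaining analysis choices can be compared on an equal footing.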
First, we find that a second loess normalization at the probe set level generally yields a superior result (Figure 7a,f ), as could be expected by observing the strong intensity-dependence of the fold-change values in Figure 4 . This intensity-dependence is most likely the result of the unequal concentrations of labeled cRNA for the C and S chips. However, this artifact is not unique to this dataset. We routinely observe similar intensity-dependent fold changes in comparisons of biological samples, especially when there are small differences in starting RNA amounts between the two samples (see Additional data file 4 for an example). Therefore, in the absence of a biological reason to suppose that the fold change should depend on signal intensity, it is important to view the plot of log fold change versus signal and recenter it around y = 0 when necessary. Owing to the significant improvement seen when the second normalization is used, the subsequent figures (Figure 7b-f ) only show the comparison of the remaining options in conjunction with this step (blue curves in Figure 7a ). Among the background correction methods, the MAS 5.0 method generally performs better than the RMA method (Figure 7b ). No clearly superior normalization method was found at the probe level (Figure 7c ), even when using the subset normalization variants, although quantile normalization tended to underperform in the absence of the second normalization step. With respect to adjusting the PM probe intensity with an estimate of nonspecific signal, Figure 7d clearly shows that either subtracting the MM signal ( subtractmm ), or using the MAS 5.0 correction method, is better than using uncorrected or RMA-corrected PM values ( PM-only ). The MAS 5.0 method performs the best because it does not create any negative values. This result is in apparent conflict with the conclusions of Irizarry et al . [ 5 ], who show drastically reduced noise at low signal intensity levels when the PM signal is not adjusted with MM values, and therefore better detection of spiked-in probe sets when using the fold change as the cutoff criterion. However, when Irizarry et al . use a test statistic that takes the variance into account, PM-only and MM-corrected methods ( MAS ) have similar sensitivity/specificity (Figure 3d,e from [ 5 ]). In the dataset presented here, the MAS PM-correction method yields a high variance at low signal-intensity levels, which effectively reduces the false-positive calls at this intensity range when using CyberT, thus resulting in better performance than when using PM-only . We can reconcile the Irizarry et al . result with our observations by considering a major difference between the datasets used by the two studies. Both the Affymetrix and GeneLogic Latin square datasets used in [ 5 ] involve a small number (10-20) of spiked-in cRNAs in a common biological RNA sample, and therefore comparisons are made between two samples that are almost exactly the same. As a result, the nonspecific component of any given probe's signal is expected to be almost identical in the two samples, and should not contribute to false-positive differential expression calls. In contrast, a large fraction of our dataset is differentially expressed; in addition, the C sample contains a high concentration of (unlabeled) poly(C) RNA. 
Because nonspecific hybridization depends both on a probe's affinity and on the concentrations of RNAs that can hybridize to it in a nonspecific fashion, we expect that each probe's signal can have different contributions of nonspecific hybridization between the C and S chips. Figure 2a shows that nonspecific hybridization can be a large component of a probe's signal. We hypothesize that, for our dataset, PM-only performs worse than MM-corrected methods ( subtractmm or MAS ) because PM-only does not try to correct for nonspecific hybridization in a probe-specific fashion. In contrast, for the Latin square datasets used in [ 5 ], PM-only works just as well as MM-corrected methods because the contribution of nonspecific hybridization is constant. Therefore, datasets which compare substantially different RNA samples (such as two different tissue types) should probably be processed using the MAS 5.0 method for PM correction. Figure 7e compares the different robust estimators that were used to create expression summaries. Of these, median polish (RMA) and the Tukey Biweight methods ( MAS 5.0) perform the best. Figure 7f highlights the 10 best summary method option sets, which are also depicted in Figure 3 , as well as straight applications of some popular software, with or without an additional normalization step at the probe-set level. The result from the MAS 5.0 software, when adjusted with the second loess normalization step, ranks among the top 10. However, the other methods (dChip, RMA and MAS 5.0 without probe-set normalization) are not as sensitive or specific at detecting DEGs. We were concerned that some of our analyses might be confounded by a possible correlation between low fold change and low expression summary levels, which could affect the interpretations of Figure 7 (comparing different methods) and the detection of small fold changes (see below). We therefore examined the distribution of expression levels within each spiked-in fold change group, and compared the methods with respect to their ability to detect a subset of probe sets with low expression summary levels (Additional data file 5). We found that the distribution of expression levels for the known DEGs was comparable among all the fold-change groups, and that all the conclusions reported here are similarly applicable to the low expression subset. However, the sensitivity of all methods was reduced, suggesting that they perform less well on weakly expressed than on highly expressed genes. As the number of low signal spike-ins was relatively small (265 probe sets), resulting in reduced accuracy for the ROC curves, the development of additional control datasets specifically focusing on DEG detection at low cRNA concentrations will be an important extension of this study. Models dependent on probe sequence provide a promising route to improving the accuracy of nonspecific signal measures. Here, we applied two different models ( perfect match and gcrma ) to the control dataset. With respect to detecting the true DEGs, these two models perform reasonably well, although slightly less well than the MAS 5.0 PM correction method. When we consider only the low signal DEGs (Additional data file 5), gcrma outperforms perfect match, and is similar in effectiveness to the top analysis option combinations. Estimating false discovery rates We have identified a set of analysis choices that optimally ranks genes according to significance of differential expression. 
To decide how many of the top genes to investigate further in follow-up experiments, it would be useful to have accurate estimates of the false-discovery rate (FDR or q -value), which is the fraction of false positives within a list of genes exceeding a given statistical cutoff. We used our control dataset to compare the actual q -values for the 10 optimal expression summary datasets with q -value estimates from the permutation method implemented in SAM. As shown in Figure 8b , permutation-based q -value calculations using each of the top ten datasets underestimate the actual q -value for a given cutoff. We attempted to reduce the contribution of biases inherent in any given data-processing step by combining the results from the top 10 expression summary datasets. The goal is to pinpoint those genes that are called significant regardless of small changes in the analysis protocol (changes that only marginally affect the DEG detection sensitivity and specificity according to our control dataset). To identify these 'robustly significant' genes, we created a combined statistic from the top 10 datasets depicted in Figure 7f , taking into account the significance of each individual test, as well as the variation in fold change between datasets (see Materials and methods). This combined statistic distinguishes between true and false DEGs equally as well as the best of the 10 input datasets (Figure 8a ). To make false-discovery rate estimates using this combined statistic, each of the 10 datasets was permuted (using the same permutation) and the combined statistic was recalculated. Figure 8b shows that this combined statistic gives a more accurate q -value estimate than any of the individual datasets. However, there is still considerable difference between the estimated and actual q -values. For example, if we estimate q = 0.05, the corresponding CyberT statistic has an actual q = 0.18, and if we estimate q = 0.1, then the actual q = 0.3. Therefore, until more accurate methods for estimating the false-discovery rate are developed, we recommend that a conservative choice of false-discovery rate cutoff be used (for example < 1%) to prevent actual numbers of false-positive DEG calls (that is, the true, rather than estimated, FDR) from being too high. Assessment of sensitivity and specificity As the identities and relative concentrations of each of the RNAs in the experiment were known, we were able to assess directly the sensitivity and specificity obtained by the best-performing methods. Examination of the ROC curves in Figure 7 reveals that sensitivity begins to plateau as the false discovery rate ( q ) increases from 10% to 30%. Taking an upper acceptable bound for q as 10%, the maximum sensitivity obtained is about 71%. Thus, under the best-performing analysis scheme, roughly 380 (29%) of the 1,309 DEGs are not detected as being differentially expressed, with the number of false positives equaling about 105. At q = 2%, sensitivity reduces to around 60%, meaning that more than 520 DEGs are missed, albeit with fewer than 20 false positives. We next looked at the dependence of sensitivity and specificity on the magnitude of the spiked-in fold-change value. We find that at q = 10%, sensitivity is increased to 93% when only cRNAs that differ by twofold or more are considered as DEGs (Figure 9a ). This sensitivity decreases only slightly (to 90%) when q is lowered to 5%. However, sensitivity drops off sharply as differences in expression below twofold are considered. 
At q = 10%, only 82% of DEGs with 1.5-fold or greater changes in expression are identified, dropping to 71% for all DEGs at 1.2-fold change or above (77% and 67% at q = 5%, respectively). The reduction in sensitivity is almost wholly due to the low-fold-change genes: less than 50% of DEGs with fold change 1.5, and none of the DEGs with fold change 1.2, are detected at q = 10% (Figure 9b ). It is tempting to conclude from this that we are achieving adequate sensitivity in our experiments and merely need not bother with DEGs below the twofold change level. However, we would argue that obtaining greater sensitivity should be an important goal. There is ample demonstration in the biological and medical literature that small changes in gene expression can have serious phenotypic consequences, as seen both from haploinsufficiencies and from mutations that reduce levels of gene expression through transcriptional regulation or effects on mRNA stability. Furthermore, effective fold changes seen in a microarray experiment might be considerably smaller than actual fold changes within a cell, if the sample contains additional cell populations that dilute the fold-change signal. As it is often not possible to obtain completely homogeneous samples (for example, when profiling an organ composed of several specialized cell types), this is likely to prove a very real limitation to detecting DEGs. In cases where pure cell populations can be obtained, for example by laser capture microdissection, the numbers of cells are often small and RNA needs to undergo amplification in order to have enough for hybridization. Here, non-linearities in RNA amplification might also lead to observed fold changes that fall below the twofold level. We used three microarray replicates for this study, as this is frequently the number chosen by experimentalists because of cost and limiting amounts of RNA. One possible extension of this work would be to examine how many replicates are necessary for reliable detection of DEGs at a given fold change level. Conclusions We have compared a number of popular analysis options for the purpose of identifying differentially expressed genes using an Affymetrix GeneChip control dataset. Clear differences in sensitivity and specificity were observed among the analysis method choices. By trying all possible combinations of options, we could see that choices at some steps of analysis are more critical than at others; for example, the normalization methods that we considered perform similarly, whereas the choice of the PM adjustment method can strongly influence the accuracy of the results. On the basis of our observations, we have chosen a best route for finding DEGs (Figure 3 ). As any single choice of analysis methods can introduce bias, we have proposed a way to combine the results from several expression summary datasets in order to obtain more accurate FDR estimates. However, these estimates remain substantially lower than actual false-discovery rates, demonstrating the need for continued development of ways to assess the false-discovery rate in experimental datasets. Our analysis further revealed the existence of a high false-negative rate (low sensitivity), especially for those DEGs with a small fold change, and thus suggests the need for improved analysis methods for Affymetrix microarrays. In order to be feasible, this study investigated only a fraction of the current options. 
The raw data from our hybridizations are available in Additional data files 6-7 and on our websites [ 27 , 28 ], and we encourage the use of this dataset for benchmarking existing and future algorithms. Also important will be the construction of additional control datasets to explore issues not well covered by the present study, such as performance of the analysis methods for specifically detecting low-abundance RNAs and the effects of including larger numbers of replicate arrays. We hope that these experiments will help researchers to choose the most effective analysis routines among those available, as well as guide the design of new methods that maximize the information that can be obtained from expression-profiling data. Materials and methods cRNA and hybridization PCR products from Drosophila Gene Collection release 1.0 cDNA clones [ 16 ] were generated in 96-well format, essentially as described [ 29 ]. Each PCR product includes T7 and SP6 promoters located 5' and 3' to the coding region of the cDNA, respectively. Each PCR reaction was checked by gel electrophoresis for a band of detectable intensity and the correct approximate size. Those clones which did not yield PCR product were labeled as 'failed' and eliminated from subsequent analysis. From sequence verification of randomly selected clones, we estimate the number of mislabeled clones to be < 3%. The contents of the plates were collected into 19 pools, such that each pool contained the PCR product from one to four plates (approximately 96-384 clones). Biotinylated cRNA was generated from each pool using SP6 polymerase (detailed protocol available upon request) and the reactions were purified using RNeasy columns (Qiagen). Concentration and purity for each pool was determined both by spectrophotometry and with an Agilent Bioanalyzer. The labeled products were then divided into each of two samples - constant (C) and spike (S) - at specific relative concentrations (Table 1 , Figure 1 ). Because the C sample contains less total RNA than the S sample, 20 μg of (unlabeled) poly(C) RNA was added to the C sample to equalize the nucleic acid concentrations. By mixing the labeled pools just before hybridization, we ensured that the fold change between C and S is uniform for all RNAs within a single pool, while still allowing the absolute concentrations of individual RNAs to vary. The two samples were then hybridized in triplicate to Affymetrix Drosophila arrays (DrosGenome1) using standard Affymetrix protocols. We chose to hybridize each replicate chip from an aliquot of a single C (or S) sample, resulting in technical replication; thus this dataset does not address the noise introduced by the labeling and mixing steps. The clones comprising each pool can be found in Additional data file 8, and the resulting Affymetrix chip intensity files (.CEL) files are available in Additional data files 6-7. Estimate of RNA concentrations The total amount of labeled cRNA that was added to each chip (approximately 18 μg) was comparable to a typical Affymetrix experiment (20 μg). Although we do not know the individual RNA concentrations, we estimate that these span the average RNA concentration in a biological GeneChip experiment. Our biological RNA samples typically result in about 40% of the probe sets on the DrosGenome1 chip called present, so the mean amount of individual RNA is 20 μg/(14,010 × 0.40) = 0.003 μg/RNA. 
In the C chips, the average concentration of individual RNAs in the different pools ranges from 0.0008 to 0.007 μg/RNA, so the concentrations are roughly similar to those in a typical experiment. We note, however, that there is no way to ensure that the concentration distribution is truly reflective of a real RNA distribution. This is especially true with respect to the low end of the range, as it is usually unknown how many of the absent genes on an array are truly absent versus weakly expressed and thus poorly detected by the analysis algorithms used. Therefore, our analysis possibly favors methods that perform best when applied to highly expressed genes. Software All of the analysis was performed using the statistical program R [ 30 ], including the affy and gcrma packages from Bioconductor [ 18 ], and scripts adapted from the hdarray library by Baldi et al. [ 31 , 32 ]. In addition, we used the dChip [ 19 ], MAS 5.0 [ 12 ], Perfect Match [ 20 , 21 ] and SAM [ 27 ] executables made available by the respective authors. Note that the false-discovery rate calculations were slightly different depending on the t-statistic variant: for the SAM statistic, false-discovery rates from the authors' Excel Add-in software were used, whereas for the CyberT and basic t-statistics, the Bioconductor false-discovery rate implementation was applied, which includes an extra step to enforce monotonicity of the ROC curve. In our experience, this extra step does not qualitatively alter the results. All scripts generated in this study are available for use [ 27 , 28 ]. Calculation of the statistic that combines the results of multiple expression summary datasets Say we have $n$ datasets and $C_{ij}$, $S_{ij}$ are the logged signals for a given probe set in the $j$th C and S chips, respectively, in dataset $i$. The mean signal (for this probe set) for the C chips in dataset $i$ is $\bar{C}_i = \frac{1}{n^C_i}\sum_j C_{ij}$, where $n^C_i$ is the number of C chips in dataset $i$; similarly, the mean signal for the S chips in dataset $i$ is $\bar{S}_i = \frac{1}{n^S_i}\sum_j S_{ij}$. The mean fold change over all datasets is $\bar{F} = \frac{1}{n}\sum_{i=1}^{n} F_i$, where $F_i = \bar{S}_i - \bar{C}_i$. The modified standard deviation for the C chips in dataset $i$ is based on the CyberT estimate, $\mathrm{sd.C}_i = \sqrt{\left(const\cdot\sigma_0^2 + (n^C_i - 1)\, s_{C_i}^2\right) / \left(const + n^C_i - 2\right)}$, where $\sigma_0$ is the average standard deviation for probe sets with the same average signal intensity as the $C_{ij}$, $s_{C_i}^2$ is the empirical variance of the $C_{ij}$, and $const$ is the weight for the contribution of $\sigma_0$. The modified standard deviation for the S chips in dataset $i$ ($\mathrm{sd.S}_i$) is defined analogously. The pooled variance over all 10 datasets is defined as $\mathrm{sd.pooled}^2 = \frac{1}{n}\sum_{i=1}^{n}\left(\mathrm{sd.C}_i^2/n^C_i + \mathrm{sd.S}_i^2/n^S_i\right)$. The variance between the 10 datasets is defined as $\mathrm{sd.between}^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(F_i - \bar{F}\right)^2$. Then the combined statistic was chosen to be $t_{\mathrm{combined}} = \bar{F}\,/\,\sqrt{\mathrm{sd.pooled}^2 + \mathrm{sd.between}^2}$. Additional data files Additional data is available with the online version of this article. Additional data file 1 contains a figure and explanatory legend showing the degree of overlap between two lists of differentially expressed genes. Additional data file 2 lists all analysis option combinations used to generate the expression summary datasets in this study. Additional data file 3 is a plot of observed versus actual spiked-in fold changes at the probe level. Additional data file 4 shows an example of an asymmetric M (log2 fold change) versus A (average log2 signal) plot for the comparison of two biological samples. Additional data file 5 contains a comparison of the analysis methods with respect to the detection of DEGs with low signal. Additional data file 6 is a Zip archive containing plain text files (in Affymetrix CEL format), the Affymetrix *.CEL files for the C chips in this dataset.
Additional data file 7 is a Zip archive containing plain text files (in Affymetrix CEL format), the Affymetrix *.CEL files for the S chips in this dataset. Additional data file 8 contains detailed information for the individual DGC clones used in this study. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC551536.xml |
555750 | Strategies to prevent HIV transmission among heterosexual African-American women | Background African-American women are disproportionately affected by HIV, accounting for 60% of all cases among women in the United States. Although their race is not a precursor for HIV, the socioeconomic and cultural disparities associated with being African American may increase their risk of infection. Prior research has shown that interventions designed to reduce HIV infection among African-American women must address the life demands and social problems they encounter. The present study used a qualitative exploratory design to elicit information about strategies to prevent HIV transmission among young, low-income African-American women. Methods Twenty five low income African American women, ages 18–29, participated in five focus groups of five women each conducted at a housing project in Houston, Texas, a large demographically diverse metropolitan area that is regarded as one of the HIV/AIDS epicenters in the United States. Each group was audiotaped, transcribed, and analyzed using theme and domain analysis. Results The participants revealed that they had most frequently placed themselves at risk for HIV infection through drugs and drinking and they also reported drug and alcohol use as important barriers to practicing safer sex. The women also reported that the need for money and having sex for money to buy food or drugs had placed them at risk for HIV transmission. About one-third of the participants stated that a barrier to their practicing safe sex was their belief that there was no risk based on their being in a monogamous relationship and feeling no need to use protection, but later learning that their mate was unfaithful. Other reasons given were lack of concern, being unprepared, partner's refusal to use a condom, and lack of money to buy condoms. Finally, the women stated that they were motivated to practice safe sex because of fear of contracting sexually transmitted diseases and HIV, desire not to become pregnant, and personal experience with someone who had contracted HIV. Conclusion This study offers a foundation for further research that may be used to create culturally relevant HIV prevention programs for African-American women. | Background Despite the impressive strides that have been made by behavioral scientists in developing culturally sensitive HIV intervention programs for minority populations in the United States, HIV infection continues to be a major public health problem and is increasingly affecting minority populations, persons infected through heterosexual contact, the poor and women [ 1 ]. African-American women are one group disproportionately affected by HIV [ 1 , 2 ]. Although they constitute only 13% of the female population, they account for 68% of all HIV cases among women in the United States [ 3 - 5 ] and AIDS is the leading cause of death among African-American women, aged 25 to 44 years [ 4 , 6 ]. The rate of HIV infection among African-American women is estimated to be four times higher than the rate for Latinas and more than nineteen times higher than the rates for Anglo women [ 7 ]. The disproportionate rate is further amplified among African-American women who use drugs [ 2 ]. The main exposure categories for all women in the United States is heterosexual contact accounting for approximately 40% of all new AIDS cases among this group [ 2 , 5 ]. This is followed by injection drug use that constitutes 25% of all new cases of AIDS among women. 
Among African-American women, 42% of AIDS cases are attributed to personal injection drug use while 38% are attributable to heterosexual transmission [ 8 ]. These behaviors often occur within the same context. From 1995 to 1999, mortality from AIDS decreased more among men than among women, and more among Whites and people of higher income status. However, the percentage decrease during the same time period was lowest among African-American women and women from the South [ 1 ], suggesting a need for an effective HIV prevention program for these women. Aside from heterosexual contact and injection drug use, depression, physical and sexual abuse, and lack of condom negotiation skills are some of the psychosocial determinants of HIV risk behaviors among women [ 5 ]. Drug use, violence, and depression have been described as a tripartite of risk factors that appear to have a profound influence on HIV risk and HIV infection among African-American women [ 8 ]. For example, crack cocaine use has had a devastating effect on the African-American community and appears to increase the likelihood of riskier sexual behavior as the amount of crack use increases [ 9 ]. In addition, conditions of poverty and homelessness are closely related to trading sex for drugs, a condition that affects many crack cocaine users and one that increases HIV risk [ 10 ]. While African-American women may not be placed at risk of HIV infection because of their race and ethnicity, St. Lawrence, Eldridge, Reitman, Little, Shelby, and Brasfield [ 11 ] note that race and ethnicity may be a reflection of the socioeconomic and cultural disparities that are associated with HIV transmission. According to Sanders-Philips [ 12 ], understanding the socioeconomic dynamics of HIV transmission among African-American women and incorporating this information into HIV prevention programs could significantly enhance HIV prevention efforts for these women. An array of socioeconomic and cultural factors exacerbates the high-risk behaviors that place African-American women at risk for HIV infection [ 12 , 13 ], the most notable being the role of poverty. Although poverty in itself is not a precursor for HIV infection, several studies have established a direct link between low socioeconomic status and AIDS incidence [ 14 - 17 ]. For many African-American women, changing HIV-risk-related behavior is difficult because they deal daily with the problems of poverty by engaging in sex-for-drug exchanges, prostitution, and violence, and because they experience powerlessness in negotiating safer sex practices in relationships with African-American men [ 12 , 13 ]. It has been suggested that HIV risk-reduction programs for African-American women must address the life demands and social problems that these women face, including poverty, alcohol and drug use, and other cultural and contextual issues that influence the role of women in safer sex decision making in the African-American community [ 11 , 18 ]. Research evidence has shown that for interventions to be effective among ethnic minority populations, they must be presented in a socio-cultural context as well as have gender specificity and a sound theoretical framework [ 19 - 21 ]. To date, only a limited number of studies have explored the impact of these variables on the decision to practice safe sex among African-American women, and the role that they play in developing culturally relevant HIV prevention interventions.
Individual and small group behavior change techniques have long been used in HIV prevention interventions for women and have resulted in increased condom use by inner-city women in primary health care settings, in mental health clinics, and among women living in economically disadvantaged neighborhoods [ 4 ]. Community-level interventions have been used less frequently, yet they are needed to disseminate health promotion messages that influence individual behavior change and strengthen social norms to support and reinforce such change. Lauby and colleagues [ 4 ] reported that a two-year, large-scale community-based intervention significantly impacted partner communication about condom use and attempts to get a main partner to use condoms. Their research highlights the effectiveness of reaching large numbers of women and changing their condom-use behaviors regarding communication with a main sex partner. Safer sex requires both partners' consent, and in risky sexual situations the male partner is often resistant [ 2 ]. Theall et al [ 5 ] examined the factors associated with HIV seropositivity among African-American women, aged 18 to 59 years, who were active crack cocaine users and/or injection drug users and found that an inability to say no to male sex partners was the strongest predictor of positive serostatus; as a result, skills building for negotiating and communicating safer sex practices is needed in prevention programs. Several HIV interventions have been developed to promote condom use and enhance sexual communication skills among African-American women. The most promising interventions have been programs that are based on social cognitive principles. However, Kalichman et al [ 21 ] note that HIV prevention programs that are based on social cognitive principles and proven to be effective in the scientific literature have not been widely utilized in community settings because of their dependence on expert interventionists for implementation in face-to-face formats, making them difficult to transfer to community-based organizations. In contrast, social cognitive theory principles applied to HIV prevention can be delivered effectively by videotapes and community-based organization personnel with minimal training in skills-building techniques. The rationale for using videotapes as part of an HIV intervention delivery system is provided by the emerging literature, which demonstrates the feasibility of this medium in changing high-risk sexual behaviors [ 21 , 22 ]. Using constructs from social cognitive theory, the health belief model, and the theory of reasoned action, Roye and Hudson [ 22 ] conducted a study to assess the impact of a culturally appropriate videotape-based intervention on condom use among urban adolescent women who use contraception. The study showed that the videotape-based intervention promoted favorable changes in sexual behaviors. Similarly, Kalichman et al [ 21 ] tested the efficacy of a culturally sensitive HIV prevention intervention for African-American women by randomly assigning African-American women to three intervention conditions: a single-session public health service videotape intervention that provided HIV information delivered by two white women; a second videotape intervention that provided the same information but delivered by an African-American woman; and a third intervention module that was similar to the second but with the addition of culturally relevant materials.
In follow-up assessments, the women who received the intervention that used culturally relevant materials reported an increase in antibody testing and requests for condoms. Taken together, these studies demonstrate that culturally sensitive videotape-based HIV prevention interventions may be effective in changing high-risk sexual behaviors. The research presented here results from qualitative studies conducted among African-American women in Houston, Texas, to elicit information that could be used to develop an HIV prevention intervention for similar populations. The research had two overarching purposes: to examine the sociocultural contexts of sexual risk taking among African-American women and to determine how a videotape-based HIV prevention intervention could be tailored so that it is effective in preventing HIV transmission among African-American women. Methods Design This study utilized a qualitative exploratory design to elicit information about strategies for preventing HIV transmission among African-American women. Twenty-five low-income African-American women, aged 18–29, participated in five focus groups of five women each conducted at a housing project in Houston, Texas. Houston, Texas, a large demographically diverse metropolitan area, was selected as the study site because of its distinction as a leading HIV/AIDS epicenter in the United States [ 6 ]. The housing complex was selected for convenience and because it is located within the same predominantly low-income African-American community as the research institution. Approval for the study was obtained from the relevant university Committee for the Protection of Human Subjects. Procedure Focus group participants were recruited by displaying posters and flyers at strategic locations in the housing complex identified by the project manager. The flyer listed the study inclusion criteria: African-American heterosexual female, aged 18 to 29, self-reported unprotected vaginal intercourse with two or more partners or with an injection-drug-using partner in the last six months, or having been diagnosed with or treated for a sexually transmitted disease in the past year. The flyer listed a university telephone number that prospective participants could call to obtain additional information about the study and/or to schedule their participation in a group. A trained research assistant confirmed the caller's eligibility. The study participants were recruited using a convenience sampling approach, and participants encouraged their friends to enroll in the study. Of the 89 prospective participants who contacted the university, 42 agreed to participate. Of that number, 17 did not appear on the scheduled day; thus 25 individuals formed the study sample. The groups were conducted at the housing project's clubhouse by a facilitator, slightly older than the target group, trained in focus group methodology and having approximately seven years' experience conducting groups with low-income African-American women. The facilitator was assisted by a trained research assistant who took notes to ensure that pertinent information not captured by the audiotape was obtained and to record major points and items discussed. Participants were told that their involvement would help researchers and clinicians learn about what they feel is important and about better ways of helping the African-American community to combat HIV and AIDS.
They were informed that their involvement was voluntary, that they could refuse to answer any question, and that they could cease participation at any time without penalty. Agreement was also obtained to audiotape the session. Prior to the start of each group, active written informed consent was obtained from each participant. The facilitator discussed with participants the issue of confidentiality of the information discussed at the meeting. Because some of the women were familiar with one another as a result of residency within the same housing complex, the facilitator ensured that all women were aware that what was discussed during the session should not leave the session at its conclusion. They were advised that they would receive a $25 mall gift certificate and condoms as incentives for their participation. Respondents were also advised that the tapes, which were anonymous, would be destroyed following transcription and checking. All questions as well as the informed consent were provided in English. Data Collection Semi-structured and open-ended questions were used to elicit information from the participants based on our research interest in determining perceptions of AIDS as a threat to African-American communities, barriers and facilitators to safer sex practices, characteristics of past situations that have placed persons at risk for HIV infection, the role of alcohol and drugs in creating high-risk sexual situations, suggestions on ways to enhance the saliency of AIDS in the African-American community, and suggestions on how HIV intervention videotapes could be produced to achieve maximum levels of interest [See Table 1 ].
Table 1 Focus group guide questions
1. Do you perceive AIDS as a threat to the African American community and why?
2. What are the perceived roles of women in heterosexual relationships in the African American community?
3. What are the expectations for personal and sexual responsibilities for contraception and sexually transmitted disease prevention among African American women?
4. What situations have placed you at risk for HIV infection in the past?
5. How have alcohol and drug use placed you at risk of HIV infection?
6. What are the things that motivate you to practice safe sex?
7. What are the things or barriers that prevent you from practicing safe sex?
8. Why do you think that AIDS is spreading so rapidly in the African American community?
9. What information do you think we need to include in a videotape developed to train African Americans about HIV prevention that will encourage them to watch the videotapes?
10. Do you have any other suggestions on how AIDS can be prevented in the African American community?
11. What can we do to get people to sign up for focus groups such as this one and also get them to participate in HIV/AIDS training programs?
12. What can we do to make these training programs most useful to you?
Focus group questions were generated in three stages. The first involved conducting interviews with six key informants [four public health researchers, one pharmacist, and one nurse experienced in HIV/AIDS prevention among African Americans] to generate working hypotheses on facilitators and barriers to HIV prevention and on program messages and methods. The hypotheses formulated were then tested and reshaped in interviews with women similar to members of the target population; this process continued until no new information emerged.
The resulting hypotheses were used to generate questions that were then field tested with members of the target population. At the end of the first group, additional changes were made as needed to the way in which questions were asked, not to elicit different information but to add clarity for the participants. The resulting focus group interviews from which this research is reported were conducted using a guide consisting of a written list of questions and probes. McCracken [ 23 ] has highlighted the advantages of such a guide: it ensures that all areas of interest are covered and focuses the researcher's attention on listening to the informants, enabling a better understanding of their lines of thought and, possibly, unanticipated explanations of the concepts. The duration of each group was about two hours. The focus groups were transcribed at the completion of all groups. Because the questions were carefully crafted and the purpose of the groups was to promote self-disclosure and to generate ideas and perceptions about HIV/AIDS in the African-American community, any idea that emerged was considered valid and not subject to verification by the research team. Data analysis Data analysis was performed according to the standard grounded theory approach of Glaser and Strauss [ 24 ]. The relatively unclear understanding of the sociocultural contexts of HIV sexual risk taking among African-American women made a qualitative analysis particularly useful. Codes and categories were developed by doing a line-by-line analysis of the participants' transcripts [ 25 ] and identifying the emerging themes. The thematic concepts representing ideas expressed by a majority of the members of three or more focus groups were characterized as a domain and are reported below (this classification rule is illustrated in a short sketch later in this section). Results Study participants The study sample consisted of 25 women, ages 18 to 29. The mean age was 24 years. Among the 25 women in the sample, two had completed some post-secondary education and the remainder had completed eighth or ninth grade. Twenty of the women were unemployed, and most of these women identified themselves as homemakers. Of the remaining five, the most frequently cited source of employment [n = 4] was the medical field [nursing or medical assistant]. Two identified themselves as HIV positive. None of the respondents had health insurance. The names used for the quotes given in this report are pseudonyms. Risk situations for HIV transmission The situational determinants of HIV risk taking and their impact on HIV/AIDS prevention behaviors and education programs were examined. Rates of HIV infection and AIDS are higher among African-American women than among other women, both in Houston and nationally [ 6 ], and African-American women are among the poorest of racial/ethnic minorities [ 26 ]. Although poverty itself is not a precursor for HIV infection, it does lead to several psychosocial factors that may place African-American women at higher risk for infection. The women in the present study reported that they had most frequently placed themselves at risk for HIV infection through drugs and drinking, and they also reported drug and alcohol use as important barriers to practicing safer sex. Although these behaviors may not be directly linked to poverty, it is reported that people who are oppressed will often turn to substances as a way of coping with daily life [ 27 ]. The women also reported that the need for money and having sex for money to buy food or drugs had placed them at risk for HIV transmission.
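The domain rule described under Data analysis (endorsement by a majority of the members of three or more groups) can be restated as a small script. The sketch below is purely illustrative: the authors coded their transcripts by hand, and every theme name and count here is invented.

```python
from collections import defaultdict

# Hypothetical sketch of the domain rule: a theme counts as a "domain" when it
# is expressed by a majority of the members of three or more focus groups.
# endorsements[group][theme] = members of that group who expressed the theme.
endorsements = {
    "group1": {"drug_alcohol_risk": 4, "sex_for_money": 3, "monogamy_belief": 1},
    "group2": {"drug_alcohol_risk": 3, "sex_for_money": 4, "monogamy_belief": 2},
    "group3": {"drug_alcohol_risk": 5, "sex_for_money": 2, "monogamy_belief": 3},
    "group4": {"drug_alcohol_risk": 4, "sex_for_money": 3, "monogamy_belief": 4},
    "group5": {"drug_alcohol_risk": 2, "sex_for_money": 3, "monogamy_belief": 1},
}
GROUP_SIZE = 5   # five women per group in this study
MIN_GROUPS = 3   # majority endorsement required in at least three groups

def find_domains(endorsements, group_size=GROUP_SIZE, min_groups=MIN_GROUPS):
    majority_counts = defaultdict(int)
    for themes in endorsements.values():
        for theme, n in themes.items():
            if n > group_size / 2:       # majority of this group's members
                majority_counts[theme] += 1
    return sorted(t for t, c in majority_counts.items() if c >= min_groups)

print(find_domains(endorsements))
# -> ['drug_alcohol_risk', 'sex_for_money']
```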
Understanding the situations that low-income African-American women identify as placing them at risk for HIV is of critical importance when developing programs that address HIV risk reduction. Most women had placed themselves at risk of transmission through drug use or needle sharing and having unprotected sex. The sexual activity took place in some instances in exchange for drugs or money and to purchase basic necessities such as food for their children. Other women stated that they had been placed at risk of HIV transmission because they believed they were in monogamous relationships but later learned their partners had been unfaithful. Edith, a 27-year-old volunteer, described the situations that have placed her at risk for HIV in the past year: When you are on drugs and then you are drinking, it impairs your senses and you don't use common sense or knowledge of what you are doing. You just get caught up in the moment. Mary, a 26-year-old homemaker, explained: Having sex period is a risk. When you can't feed your kids and you need money. When you go sleep around and have sex and they're not using condoms. Cause some of them say they don't use them and some of them say they don't want to use them. They have all kinds of excuses not to use them. Celia, a 25-year-old medical worker, described how she was placed at risk of HIV infection: I married a man, not really knowing him, and he was sleeping with a lot of women, and sleeping with me unprotected. Yeah, right after we got married, he told me he wasn't going to cheat on me no more and when I found out some of the women he was cheating on me with, I knew that they always stay at the doctor cause something always be wrong with them. I was pregnant and hoping that nothing was wrong with me. Related to risk for HIV, the participants were asked to discuss why HIV is so prevalent within their [the African-American] community. Unprotected sex, lack of awareness, lack of medical access, and early sexual initiation were most frequently mentioned. When the women were asked why HIV is spreading so rapidly in the African-American community, unprotected sex was cited most often. Lack of knowledge or awareness about prevention was also frequently stated. Drug use, however, was ranked low and mentioned infrequently; the women believed that lack of access to medical care, the low priority given to health, early sexual initiation, and feelings of invincibility among young people were more significant contributors to HIV's rapid spread among African Americans. Betty, a 25-year-old homemaker, said: I think it's spreading around so fast because people that have AIDS just constantly have sex and they pass it on and pass it on and pass it on. Celia expressed her agreement and added: Everybody's sleeping with everybody. The women are sleeping with women; the men are sleeping with men and then they sleeping with each other. Ménage à trois or threesomes, foursomes, they popping Ecstasy pills, just partying hard, they smoking marijuana, they speed balling, they taking downers, handlebars. Uh, I don't know, just really not caring and so they're not even going to the doctor to see if anything is wrong with them. They're just continually sleeping and not stopping to take care of their own bodies, to make sure that they are okay. Kelli, a 24-year-old operator, stated: Well, first of all because we're uneducated. And we're just unconcerned.
Like they say, you have to do what you have to do to get what you need, and if it's one of them five minute things, they ain't thinking about no condom. Kim, a 24-year-old cashier, said: I don't know – people feel like they can't get it. Maybe they feel like it just can't get to them. They feel like it can get to other people, but it can't get to them. Although the women related their individual risk for HIV infection to drugs and alcohol, they did not associate drug and alcohol use with the rapid spread of HIV within the overall African-American community. Barriers to safe sex practices When asked to name situations that have prevented them from protecting themselves against HIV infection, the women gave much the same reasons as when they were asked about situations that have placed them at risk for HIV infection. About one-third of the women named drug and alcohol use as responsible for them not taking needed precautions. Surprisingly, about one-third of the women stated that the barrier was their belief that they were in a monogamous relationship and thus at no risk, feeling no need to use protection, only to learn later that their mate was unfaithful. Other reasons given were lack of concern, being unprepared, partner's refusal to use a condom, and lack of money to buy condoms. Chaka discussed how drug and alcohol use were barriers to safe sex practices for her: Working in the club, you drink and if you intensify that with drugs, you don't know who you're going home with. You don't know who been to your house and what they have done to you on account of you being high, on account of you being drunk or you done overindulged in either or and it's just not a good feeling. Charlie stated that she was placed at risk of contracting HIV by her spouse: I married a man not really knowing him and he was sleeping with a lot of women and then sleeping with me unprotected. Vicki, an 18-year-old high school student, said: Like I said, not having the money to go and buy the protection. That's going to prevent you from preventing it. In this culture, particularly among poorer uneducated women, men may play a more domineering role over women [ 28 ]. There is also a misconception among African-American men and some women that condom use reduces the sensation produced during sexual intercourse [ 29 ]. Some of the women in this group reported that their partners had refused to use a condom. Facilitators to safe sex practices The women were motivated to practice safe sex because of fear of contracting sexually transmitted diseases and HIV, desire not to become pregnant, and personal experience with someone who had contracted HIV. Over one-third of the women acknowledged that fear of contracting sexually transmitted infections, including hepatitis and HIV, motivated them to practice safe sex. A participant stated: Every disease that's in the book that's not curable is enough to scare my clothes on. Mary said: Syphilis, gonorrhea, HIV, herpes; that ought to want to motivate anybody to practice safe sex. Other women said that their desire not to become pregnant motivated them to practice safe sex. Sally was motivated by the need to care for her children. She stated: Because I have three children and I don't plan on dying until the Lord takes me. A smaller number of women declared that they were motivated to practice safe sex because they had seen, firsthand, the effects of HIV.
Kim said: Things that motivate me – you see somebody outside on the streets and you see them with it and you see the effect that it's had on them and you look at them and you say that's something that you don't want to do so it motivates you to practice safe sex. Intervention components When asked to describe what should be included in a videotape aimed at prevention of HIV within the African-American community, the most prevalent response among participants was to include personal experiences of people affected by HIV and AIDS. They believed that testimonials from those infected with HIV and sensational footage of the ravaging effects HIV and AIDS have on the human body would be most effective. To create a video that addresses HIV prevention, over half of the participants recommended sensationalism to garner the audience's attention. The purpose of the video would be to show the illness, pain, rejection, medication regimen, and years of life lost among those infected. Fonda, a 24-year-old GED preparer with two years of college, said: To make them watch it, show the blood, the guts, the pus, the sores, the relationships... Sally said: Show them how the body deteriorates from having AIDS. Show them everything else in the world that they are going to miss out on if they don't take care of themselves. Another participant said: You need to let them see how these people are just in so much pain and rejection, and not having the finances and things. In order to live like the guy that's an athlete [Magic Johnson], you know to live by the medication, how people are going to other places to get it. A variety of other suggestions were given, including the use of popular culture in the form of rap and gospel music videos, productions, and concerts, and the creation of a Surgeon General's warning against unsafe sex [much like what has been done with tobacco]. Ceah, a 25-year-old health care worker, said: Drum them in any way you can. If they like rap, rap it to them. If they like gospel, you sing it to them. If prayer is what it takes, you pray it to them. By any means necessary, you get your message across. Try all ways. Other recommendations included developing a video in the form of a comedy-drama or a cartoon, or using a campaign similar to the one utilized by Mothers Against Drunk Driving, in which a person is shown before HIV infection and again after becoming ill. The participants also offered suggestions for recruiting African Americans into prevention programs where the videotape could be shown. The majority of women suggested the use of incentives, including air-conditioning units, fans, food, and amusement park tickets. The women recommended recruiting participants within the communities in which they live, going door-to-door if needed, and having such programs take place in community settings, such as schools and neighborhood centers. Other suggestions included holding family-friendly events that men would want to attend, on the reasoning that women and children would follow. Fonda believed that you have to give something to get people to participate. She said: You know, people want something for something; nobody wants nothing for nothing. Nothing is free in this world. Shirley, a GED student, suggested community involvement: Fundraisers, cooking things, get the community involved with you as the person that wants to get these things started. Once you get these things started, once you get the community, you got it.
Nilene suggested: Just get it [word] out, it has to be let out in some kind of way, as far as radio stations, TVs, billboards. People in the communities, the main office, the campus, I mean everywhere, it has to be everywhere. It can't be in one spot, it has to come out, go out. Veronica, a 25-year-old youth coordinator with one year of college, said: If you get the men there, the women will come. A basketball tournament, yes, they like basketball and once you get the men there, the women are gonna come. Seriously, it's bad to say it like that but you're giving something. It's a sex symbol, yes, you understand what I'm saying, but it's a way to get them out there. Conclusion Prior research has indicated the need to develop HIV intervention programs that target the socioeconomic contexts of women at risk for infection and that include cognitive or social-cognitive, educational, and/or skills-building components. Women in the present study recommended that videotaped HIV/AIDS messages be developed that highlight the sensational effects of the disease. Although research indicates that this method does not work [ 30 - 32 ], the results presented here suggest it may be worthwhile to field test a videotape featuring HIV-positive people with groups of African Americans to evaluate the utility of such a teaching tool, as it may have a role to play in AIDS prevention. Similar to our findings with African-American men [ 33 ], the participants also recommended free, confidential testing in a community-based setting with the provision of incentives for testing and participation. These findings offer a first glimpse at what researchers and practitioners can do to create culturally relevant HIV prevention programs for African-American women. Although the findings are limited due to the small sample size, the use of convenience sampling, and the location in the southern part of the United States, this research may provide a base for conducting larger studies among low-income African-American women. Before programs are developed, the barriers poor African-American women face on a daily basis should be addressed. Programs are needed not only to help women negotiate these barriers; the barriers themselves should also inform program development. This may necessitate the involvement of various social service agencies as well as health educators and nursing and medical professionals. The women also need skills training to enhance their abilities to negotiate safer sex practices with their partners. If the tide of HIV and AIDS infection among African Americans is to be stemmed, programs must incorporate culturally relevant contextual information presented to the target audience in a setting and in a manner that addresses their norms and beliefs and provides them the knowledge and skills needed to make correct decisions [ 2 ]. Health professionals may wish to learn more about the barriers these women face and work with social service providers to address the issues most salient to the women before developing patient education materials for HIV/AIDS prevention. Methods that appeal to the target audience should be devised, but nursing professionals should remember that low-income African-American women are a heterogeneous group. Interventions such as videotapes should be developed to have wide appeal yet retain contextual, cultural, and gender specificity, and it is important to remember that it is best to educate women within a community-based setting.
Resources are needed to identify, recruit, and retain African-American women in HIV intervention programs. Competing interests The author(s) declare that they have no competing interests. Contributors EJE, AFM, and GOO conceived and designed the study. EJE, AFM, and RJP jointly planned and executed the data analyses. EJE wrote the paper with assistance from AFM, RJP, GOO, and NIO. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC555750.xml
519002 | Gene Recruitment of the Activated INO1 Locus to the Nuclear Membrane | The spatial arrangement of chromatin within the nucleus can affect reactions that occur on the DNA and is likely to be regulated. Here we show that activation of INO1 occurs at the nuclear membrane and requires the integral membrane protein Scs2. Scs2 antagonizes the action of the transcriptional repressor Opi1 under conditions that induce the unfolded protein response (UPR) and, in turn, activate INO1 . Whereas repressed INO1 localizes throughout the nucleoplasm, the gene is recruited to the nuclear periphery upon transcriptional activation. Recruitment requires the transcriptional activator Hac1, which is produced upon induction of the UPR, and is constitutive in a strain lacking Opi1. Artificial recruitment of INO1 to the nuclear membrane permits activation in the absence of Scs2, indicating that the intranuclear localization of a gene can profoundly influence its mechanism of activation. Gene recruitment to the nuclear periphery, therefore, is a dynamic process and appears to play an important regulatory role. | Introduction For over a hundred years, it has been recognized that chromatin is distributed non-randomly within the interphase nucleus ( Rabl 1885 ; Boveri 1909 ). More recently, three-dimensional fluorescence microscopy studies have established that chromosomes are organized into distinct, evolutionarily conserved subnuclear territories (reviewed by Cockell and Gasser [1999] ; Isogai and Tjian [2003] ). However, DNA is mobile and can move between these domains (reviewed in Gasser [2002] ). Recent studies suggest that the subnuclear localization of genes can have dramatic effects on their chromatin state, rate of recombination, and transcription ( Cockell and Gasser 1999 ; Isogai and Tjian 2003 ; Bressan et al. 2004 ). Heterochromatin, for example, is generally found concentrated in close proximity to the nuclear envelope. Several genes conditionally colocalize with heterochromatin under conditions in which they are repressed. The transcriptional regulator Ikaros, for example, interacts both with regulatory sequences upstream of target genes and with repeats enriched at centromeric heterochromatin. When repressed, these genes become colocalized with heterochromatin, suggesting that Ikaros promotes repression by directly recruiting target genes into close proximity with heterochromatin ( Brown et al. 1997 , 1999 ; Cobb et al. 2000 ). Consistent with this view, euchromatic sequences that become colocalized with heterochromatin are transcriptionally silenced ( Csink and Henikoff 1996 ; Dernburg et al. 1996 ). In Saccharomyces cerevisiae, genes localized in proximity to telomeres are similarly transcriptionally silenced ( Gottschling et al. 1990 ). Silencing is due to Rap1-dependent recruitment of Sir proteins to telomeres ( Gotta et al. 1996 ), which promotes local histone deacetylation and changes in chromatin structure (reviewed in Rusche et al. [2003] ). Physical tethering of telomeres at the nuclear periphery through interactions with the nuclear pore is required for silencing ( Gotta et al. 1996 ; Laroche et al. 1998 ; Galy et al. 2000 ; Feuerbach et al. 2002 ). When a reporter gene flanked by silencer motifs was relocated more than 200 kb away from a telomere, silencing was lost ( Maillet et al. 1996 ). Silencing was restored to this gene by overexpression of SIR genes. 
Therefore it is thought that tethering serves to promote efficient recruitment of Sir proteins, which are enriched at the nuclear periphery and limiting elsewhere ( Maillet et al. 1996 ). Another example of gene silencing at the nuclear periphery comes from experiments in which defects in the silencer of the HMR locus could be suppressed by artificially tethering this locus to the nuclear membrane ( Andrulis et al. 1998 ). Thus, localization of chromatin to the nuclear periphery has been proposed to play a major role in transcriptional repression. By contrast, we report here that dynamic recruitment of genes to the nuclear membrane can have profound effects on their activation. The gene under study here is INO1, a target gene of the unfolded protein response (UPR), which encodes inositol 1-phosphate synthase. The UPR is an intracellular signaling pathway that is activated by the accumulation of unfolded proteins in the endoplasmic reticulum (ER), which can be stimulated by treatment with drugs that block protein folding or modification or, in yeast, by starvation for inositol ( Cox et al. 1997 ). These conditions activate Ire1, a transmembrane ER kinase/endoribonuclease ( Cox et al. 1993 ; Mori et al. 1993 ), which, through its endonuclease activity, initiates nonconventional splicing of the mRNA encoding the transcription activator Hac1 ( Cox and Walter 1996 ; Shamu and Walter 1996 ; Kawahara et al. 1997 ; Sidrauski and Walter 1997 ). Only spliced HAC1 mRNA is translated to produce the transcription factor; the Ire1-mediated splicing reaction, therefore, constitutes the key switch step in the UPR ( Sidrauski et al. 1996 ; Ruegsegger et al. 2001 ). Hac1 is a basic-leucine zipper transcription factor that binds directly to unfolded protein response elements (UPREs) in the promoters of most target genes to promote transcriptional activation ( Cox and Walter 1996 ; Travers et al. 2000 ; Patil et al. 2004 ). However, a subset of UPR target genes uses a different mode of activation. Transcriptional activation of these genes, including INO1, depends on Hac1 and Ire1. These target genes contain in their promoters an upstream activating sequence, the UAS INO element, whose activity is regulated by the availability of inositol and which is repressed by Opi1 under non-UPR conditions ( Greenberg et al. 1982 ; Cox et al. 1997 ). Opi1 repression is relieved in a Hac1-dependent manner upon induction of the UPR ( Cox et al. 1997 ). Positively acting transcription factors Ino2 and Ino4 then promote transcription from UAS INO -containing promoters ( Loewy and Henry 1984 ; Ambroziak and Henry 1994 ; Schwank et al. 1995 ). Our previous work established that the production of Hac1 by UPR induction functions upstream of Opi1, suggesting that the role of the UPR is to counteract Opi1-mediated repression ( Cox et al. 1997 ). To understand the regulation of UAS INO -controlled genes by the UPR, we have examined the molecular events leading to the activation of INO1 . We find that Scs2, an integral protein of the nuclear and ER membrane that was recently shown to play a role in telomeric silencing ( Craven and Petes 2001 ; Cuperus and Shore 2002 ), is required to activate INO1 . We observe dynamic INO1 recruitment to the nuclear membrane under activating conditions. Importantly, we find that recruitment requires Hac1 and is opposed by Opi1. Furthermore, we show that artificial recruitment of INO1 to the nuclear membrane can bypass the requirement for Scs2.
Gene recruitment to the nuclear membrane therefore plays an instrumental role in INO1 activation. Results Abundance and Localization of the Transcriptional Regulators Ino2, Ino4, and Opi1 Are Unaffected by UPR Induction To characterize the molecular basis of transcriptional activation of INO1, we first asked whether the steady-state levels of the known transcriptional regulators—the activators Ino2 and Ino4 and the repressor Opi1—were affected by induction of the UPR. To this end, we monitored the levels of myc -tagged proteins by Western blotting after UPR induction by inositol starvation ( Figure 1 A). Induction of the UPR did not result in a significant change of the abundance of any of the proteins. Thus, in contrast to what has been suggested in previous studies ( Ashburner and Lopes 1995a , 1995b ; Cox et al. 1997 ; Schwank et al. 1997 ; Wagner et al. 1999 ), INO1 transcription is not regulated through adjustment of the abundance of these regulators. Figure 1 Scs2 Regulates the Function of Opi1 on the Nuclear Membrane (A) Steady-state protein levels and localization of Opi1, Ino2, and Ino4 under repressing and activating conditions. Strains expressing myc -tagged Opi1, Ino2, or Ino4 ( Longtine et al. 1998 ) were grown in the presence ( INO1 repressing condition) or absence ( INO1 activating condition) of myo -inositol for 4.5 h. Tagged proteins were analyzed by Western blotting (size-fractionated blots on the left, designated Opi1, Ino2, and Ino4) and indirect immunofluorescence (photomicrographs on the right). For Western blot analysis, 25 μg of crude lysates were immunoblotted using monoclonal antibodies against either the myc epitope (top bands in each set) or, as a loading control, Pgk1 (bottom bands in each set; indicated with an asterisk). Immunofluorescence experiments were carried out using anti- myc antibodies and anti-mouse Alexafluor 488. Bright-field (BF) and indirect fluorescent (IF) images for a single z slice through the center of the cell were collected by confocal microscopy. (B) Ino2 and Ino4 heterodimerize under both repressing and activating conditions. Cells expressing either HA-tagged Ino4 (negative control) or HA-tagged Ino4 and myc -tagged Ino2 were grown in the presence or absence of 1 μg/ml tunicamycin (Tm; an inhibitor of protein glycosylation that induces protein misfolding in the ER) for 4.5 h and lysed. Proteins were immunoprecipitated using the anti- myc monoclonal antibody. Immunoprecipitates were size-fractionated by SDS-PAGE and immunoblotted using the anti-HA monoclonal antibody. (C) Coimmunoprecipitation of Scs2 with Opi1. Detergent-solubilized microsomal membranes from either an untagged control strain (lane C) or duplicate preparations from the Opi1- myc tagged strain ( myc lanes, 1 and 2) were subjected to immunoprecipitation using monoclonal anti- myc agarose. Immunoprecipitated proteins were size-fractionated by SDS-PAGE and stained with colloidal blue. Opi1- myc and the band that was excised and identified by mass spectrometry as Scs2 are indicated. IgG heavy and light chain bands are indicated with an asterisk. (D) Coimmunoprecipitation with tagged proteins. Immunoprecipitation analysis was carried out on strains expressing either Scs2-HA alone (lanes 1–3) or Scs2-HA together with Opi1p- myc (lanes 4–6). Equal fractions of the total (T), supernatant (S), and bound (B) fractions were size-fractionated by SDS-PAGE and immunoblotted using anti- myc or anti-HA monoclonal antibodies. (E) Epistasis analysis.
Haploid progeny from an OPI1/opi1 Δ SCS2/scs2 Δ double heterozygous diploid strain having the indicated genotypes were streaked onto minimal medium with (+ inositol) or without (– inositol) 100 μg/ml myo -inositol and incubated for 2 d at 37 °C. Next, we tested whether the subcellular localization of these regulators is modulated. We examined the localization of myc -tagged Opi1, Ino2, and Ino4 by indirect immunofluorescence ( Figure 1 A). Again, we observed no significant change upon UPR induction: Ino2 and Ino4 localized to the nucleus under both repressing and activating conditions. Localization of Opi1 also showed no change. Like Ino2 and Ino4, Opi1 localized to the nucleus under both conditions. However, in agreement with recent data by Loewen et al. (2003) , we found that Opi1 was concentrated at the nuclear membrane and diffusely distributed throughout the nucleoplasm ( Figure 1 A). Furthermore, coimmunoprecipitation experiments showed that Ino2 and Ino4 heterodimerize under both conditions, suggesting that this interaction is not regulated ( Figure 1 B). Taken together, these observations therefore pose an interesting puzzle: How is regulation achieved when the localization and abundance of all three regulators are unchanged between activating and repressing conditions? Opi1 Is Regulated by an Integral ER/Nuclear Membrane Protein To begin to explore a possible functional significance of Opi1's unusual localization pattern at the nuclear membrane, we sought to identify binding partners that might tether Opi1 to the membrane. To this end, we immunoprecipitated myc -tagged Opi1 under nondenaturing conditions from mildly detergent-solubilized microsomal membranes. Bands that were enriched in the immunoprecipitated fraction from the myc -tagged strain were identified by matrix-assisted laser desorption ionization mass spectrometry ( Figure 1 C). This procedure identified Scs2, a bona fide integral membrane protein known to reside in nuclear membranes and ER ( Nikawa et al. 1995 ; Kagiwada et al. 1998 ; Kagiwada and Zen 2003 ). To confirm that Scs2 and Opi1 interact, we performed coimmunoprecipitation analysis from extracts of strains expressing myc -tagged Opi1 and hemagglutinin (HA)-tagged Scs2. We observed specific recovery of Scs2-HA in Opi1- myc immunoprecipitates ( Figure 1 D). Recent results from a genome-wide immunoprecipitation study ( Gavin et al. 2002 ) and in vitro peptide binding studies ( Loewen et al. 2003 ) corroborate the interaction between Opi1 and Scs2. In contrast to the repressor Opi1, Scs2 has been implicated in the activation of INO1 transcription: Overexpression of SCS2 suppresses the Ino – growth phenotype in cells that cannot activate the UPR ( Nikawa et al. 1995 ), and loss of Scs2 impairs activation of INO1 ( Kagiwada et al. 1998 ; Kagiwada and Zen 2003 ). Therefore, either Scs2 is the downstream target of Opi1-mediated repression, or Scs2 functions upstream to relieve Opi1-mediated repression. To distinguish between these possibilities, we analyzed the growth of the double mutant in the absence of inositol. As shown in Figure 1 E, opi1 Δ cells grew in the absence of inositol because INO1 is constitutively expressed. In contrast, scs2 Δ cells did not grow under these conditions. Double mutant opi1 Δ scs2 Δ cells grew in the absence of inositol, indicating that Scs2 functions to regulate Opi1 and is dispensable in the absence of Opi1.
Given that Scs2 is an integral membrane protein, these data suggest that regulation of Opi1 occurs at the nuclear membrane. Ino2 and Ino4 Bind to the INO1 Promoter Constitutively Ino2 and Ino4 have been shown by gel-shift analysis of yeast extracts to bind directly to the UAS INO in the INO1 promoter ( Lopes and Henry 1991 ; Ambroziak and Henry 1994 ; Bachhawat et al. 1995 ; Schwank et al. 1995 ). Binding was observed in extracts from cells grown under repressing or activating conditions, and was increased in the absence of Opi1 ( Wagner et al. 1999 ). To monitor the interaction of Ino2 and Ino4 with the INO1 promoter in vivo, we used chromatin immunoprecipitation (ChIP) ( Solomon et al. 1988 ; Dedon et al. 1991 ). Consistent with the gel-shift experiments, we found that Ino2-HA and Ino4-HA bound to the INO1 promoter under both repressing and activating conditions ( Figure 2 A). Real-time quantitative PCR analysis of immunoprecipitated DNA confirmed that both Ino2 and Ino4 associated with the INO1 promoter constitutively ( Figure 2 B). Although we observed an increase in the association of Ino2 with the INO1 promoter under inducing conditions compared with repressing conditions, these results argue that occupancy of the promoter by Ino2/Ino4 is not sufficient for activation and that a subsequent step in the activation process must be the step regulated by the UPR. Figure 2 Ino2/Ino4 Bind to the INO1 Promoter Constitutively (A) Untagged control cells (upper images), or cells in which the endogenous copies of INO2 and INO4 were replaced with HA-tagged Ino2 (center images) or HA-tagged Ino4 (lower images) were harvested in mid-logarithmic phase and washed into medium with or without myo -inositol. After 4.5 h, about 1.5 × 10^8 cells were harvested and processed for Northern blot analysis (light images with dark bands, right). Northern blots were probed against both INO1 and ACT1 (loading control) mRNA. The remaining cells were fixed with formaldehyde and lysed. Chromatin was sheared by sonication and then subjected to immunoprecipitation with anti-HA agarose. Input DNA (In) and immunoprecipitated DNA (IP) were analyzed by PCR using primers to amplify the INO1 promoter and the URA3 gene. Amplified DNA was size-fractionated by electrophoresis on ethidium bromide-stained agarose gels (dark images with light bands, left). (B) Quantitative PCR analysis. Input and IP fractions were analyzed by real-time quantitative PCR. The ratio of INO1 promoter to URA3 template in the reaction is shown. Error bars represent the standard error of the mean (SEM) between experiments. The molecular mechanism by which Opi1 represses transcription is not understood. In particular, it is not clear whether Opi1 binds to the INO1 promoter directly. Early gel-shift experiments using yeast lysates suggested that Opi1 might interact with DNA ( Lopes and Henry 1991 ). However, this association has not been confirmed, and its significance is unknown. We used ChIP analysis and real-time quantitative PCR to assess the interaction of Opi1 with the INO1 promoter in vivo. We observed specific enrichment of the INO1 promoter by immunoprecipitation of Opi1 from cells grown in the presence of inositol (repressing condition) but no significant enrichment of the INO1 promoter by immunoprecipitation of Opi1 from cells starved for inositol (activating condition; Figure 3 ).
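The quantitative readout used for the ChIP experiments is the ratio of INO1 promoter to URA3 template recovered in each reaction. The paper does not give its quantification formula, so the sketch below simply assumes the standard relation between a real-time PCR threshold cycle (Ct) and template abundance, with perfect doubling per cycle; all Ct values here are invented for illustration.

```python
# Hypothetical sketch: relative abundance from threshold cycles, assuming an
# amplification efficiency of 2 (template doubles every cycle).

def relative_template(ct, efficiency=2.0):
    """Template abundance implied by a threshold cycle, up to a common constant."""
    return efficiency ** (-ct)

def ino1_over_ura3(ct_ino1, ct_ura3):
    """Ratio of INO1 promoter to URA3 template in one reaction."""
    return relative_template(ct_ino1) / relative_template(ct_ura3)
    # equivalently: 2 ** (ct_ura3 - ct_ino1)

# Example: in the IP fraction, INO1 crosses threshold 2.5 cycles before URA3,
# implying ~5.7-fold enrichment; the input fraction is near parity.
print(f"IP:    {ino1_over_ura3(24.0, 26.5):.2f}")   # ~5.66
print(f"input: {ino1_over_ura3(22.0, 22.1):.2f}")   # ~1.07
```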
By contrast, when we performed the immunoprecipitations from either hac1 Δ or scs2 Δ strains, we observed greater enrichment of the INO1 promoter sequences from cells grown under both activating and repressing conditions. These results are consistent with the notion that Opi1 binds to chromatin at the INO1 promoter and that the function of Hac1 and Scs2 is to promote Opi1 dissociation. Figure 3 UPR-Dependent Dissociation of Opi1 from Chromatin (A) Chromatin-associated Opi1 dissociates upon activation of the UPR. Cells of the indicated genotypes were harvested after growth for 4.5 h with or without myo -inositol, fixed, and processed as in Figure 2 . The scs2 Δ mutant was transformed with pRS315-Opi1- myc , a CEN ARS plasmid that expresses Opi1- myc at endogenous levels. Input DNA (In) and immunoprecipitated DNA (IP) were analyzed by PCR using primers to amplify the INO1 promoter and the URA3 gene. Amplified DNA was separated by electrophoresis on ethidium bromide–stained agarose gels. (B) Quantitative PCR analysis. Input and IP fractions were analyzed by real-time quantitative PCR. The ratio of INO1 promoter to URA3 template in the reaction is shown. Error bars represent the SEM between experiments. In contrast to immunoprecipitation of Ino2 and Ino4, which specifically recovered the INO1 promoter and not the control URA3 sequences (see Figure 2 ), immunoprecipitates of Opi1 recovered significant amounts of URA3 sequences as well ( Figure 3 A, upper bands). It is clear from the quantitative PCR analysis that Opi1 binding to the INO1 promoter is specific ( Figure 3 B). The different conditions used in the qualitative gel analysis (measuring PCR products after many cycles) and the quantitative PCR (measuring PCR products in the linear range of amplification) are likely to account for this difference. The INO1 Gene Relocalizes within the Nucleus upon UPR Activation Since Opi1 dissociation from the INO1 promoter correlates with activation and requires Hac1 and Scs2, an integral nuclear membrane protein, we wondered whether activation might occur at the nuclear periphery and thus might be dependent on the subnuclear positioning of the gene. Consistent with this hypothesis, we found that a form of Scs2 (Scs2ΔTMD) lacking the transmembrane domain was localized throughout the cell and was not excluded from the nucleus ( Figure 4 A, compare the cytosolic protein Rps2 to Scs2ΔTMD for colocalization with 4′,6′-diamidino-2-phenylindole) and was nonfunctional, rendering cells inositol auxotrophs despite being expressed at levels comparable to full-length Scs2 ( Figure 4 B and 4 C). Figure 4 Membrane Association Is Essential for Scs2 Function The carboxyl-terminal transmembrane domain of Scs2 was removed by replacement with three copies of the HA epitope (Scs2ΔTMD-HA; Longtine et al. [1998] ). (A) Scs2ΔTMD localization. Ribosomal protein S2 (Rps2-HA), Scs2-HA, and Scs2ΔTMD-HA were localized by immunofluorescence against the HA epitope. DNA was stained with 4′,6′-diamidino-2-phenylindole. Images were collected in a single z-plane (≤ 0.7 μm thick) by confocal microscopy. Unlike Rps2-HA, which was excluded from the nucleus (indicated with white arrows), Scs2ΔTMD-HA staining was uniform and evident in the nucleoplasm. (B) Scs2ΔTMD steady-state levels. Equal amounts of whole cell extract from cells expressing either Scs2-HA or Scs2ΔTMD-HA were analyzed by immunoblotting. (C) Scs2ΔTMD is nonfunctional.
Strains expressing the indicated forms of Scs2 were streaked onto medium with or without myo -inositol and incubated for 2 d at 37 °C. If INO1 were regulated at the nuclear periphery, then the INO1 locus should colocalize with the nuclear membrane under activating conditions. To test this idea, we constructed a strain in which an array of Lac operator (Lac O in Figure 5 ) binding sites was integrated adjacent to the INO1 locus ( Robinett et al. 1996 ). The strain also expressed a green fluorescent protein (GFP)-Lac repressor fusion protein (GFP-Lac I in Figures 5 and 6 ) that binds to the Lac operator array to allow localization of the INO1 gene. In a control strain, we integrated the same Lac operator array adjacent to the URA3 locus. Cells were fixed and GFP was visualized by indirect immunofluorescence. Most cells showed a single intranuclear spot localizing the tagged gene; the remaining cells showed two spots due to their post-replication state in the cell cycle. In both the tagged INO1 and the tagged URA3 strains, we simultaneously visualized the ER and nuclear membrane by indirect immunofluorescence against Sec63- myc using a different fluorophore ( Figure 5 A). Figure 5 The INO1 Gene Is Recruited to the Nuclear Membrane upon Activation An array of Lac operator repeats was integrated at INO1 or URA3 in strains expressing GFP-Lac repressor and myc -tagged Sec63. GFP-Lac repressor and Sec63- myc were localized in fixed cells by indirect immunofluorescence. Data were collected from single z sections representing the maximal, most focused signal from the Lac repressor. (A) Two classes of subnuclear localization. Shown are five representative examples of localization patterns that were scored as membrane-associated (photomicrographs and plots on left) or nucleoplasmic (right). For each image, the fluorescence intensity was plotted for each channel along a line that intersects both the Lac repressor spot and the center of the nucleus. (B) INO1 is recruited to the nuclear membrane upon activation. The fraction of cells that scored as membrane-associated is plotted for each strain grown in the presence (+) or absence (–) of inositol. The site of integration of the Lac operator (Lac O), the version of the GFP-Lac repressor (GFP-Lac I; either wild-type or having the FFAT membrane-targeting signal) expressed, and the relevant genotype of each strain are indicated. The dashed line represents the mean membrane association of the URA3 gene. The vertical arrow indicates the frequency of membrane association in the wild-type strain under activating conditions. Error bars represent the SEM between separate experiments. Each experiment scored at least 30 cells. The total numbers of cells (and experiments) scored for each column were: bar 1, 70 (2); bar 2, 66 (2); bar 3, 39 (1); bar 4, 71 (2); bar 5, 140 (4); bar 6, 88 (2); bar 7, 88 (2); bar 8, 92 (3); bar 9, 74 (2); and bar 10, 38 (1). Figure 6 Artificial Relocalization of INO1 Bypasses the Requirement for Scs2 (A) Northern blot analysis of membrane-targeted INO1 . Strains of the indicated genotypes having the Lac operator array integrated at INO1 and expressing either the wild-type GFP-Lac repressor or GFP-FFAT-Lac repressor were grown in the presence or absence of 1 μg/ml tunicamycin (Tm) for 4.5 h, harvested, and analyzed by Northern blot. Blots were probed for either INO1 or ACT1 (as a loading control) mRNA. The wild-type strain CRY1, lacking both the Lac operator array and the Lac repressor, was included in the first two lanes for comparison.
(B) Wild-type or scs2 Δ mutant strains in which the Lac operator had been integrated at INO1 were transformed with either GFP-Lac repressor or GFP-FFAT-Lac repressor. The resulting transformants were serially diluted (tenfold between wells) and spotted onto medium lacking inositol, uracil, and histidine, and incubated for 2 d at 37 °C. (C) Wild-type and scs2 Δ mutant strains transformed with either GFP-Lac repressor or GFP-FFAT-Lac repressor, but lacking the Lac operator, were streaked onto medium lacking inositol and histidine and incubated for 2 d at 37 °C. To ask whether INO1 associates with the nuclear membrane, we developed stringent criteria for scoring INO1 localization ( Figure 5 A). Using confocal microscopy, we collected a single z slice through each cell that captured the brightest, most focused point of the GFP-visualized Lac operator array. Images in which this slice traversed the nucleus (i.e., cells that showed a clear nuclear membrane ring staining with a "hole" of nucleoplasm) were binned into two groups: cells in which the peak of the spot corresponding to the tagged gene coincided with nuclear membrane staining were scored as membrane-associated, and cells in which the peak of the spot corresponding to the tagged gene was offset from nuclear membrane staining were scored as nucleoplasmic. This procedure allowed us to determine the fraction of cells in a given population in which the tagged gene colocalized with the membrane, thus providing a quantitative measure for membrane association. Five examples of each group, with fluorescence intensity plotted along a line bisecting the nucleus and the spot, are shown in Figure 5 A. To confirm that our scoring criterion would identify nuclear membrane association in a meaningful way, we applied it to two controls. As a control for membrane association, we localized INO1 in a strain expressing GFP-Lac repressor fused to a peptide motif from Opi1 containing two phenylalanines in an acidic tract (FFAT motif), which serves as a nuclear membrane–targeting signal ( Loewen et al. 2003 ). This motif was shown to bind to Scs2 and to be required for Opi1 targeting to the nuclear envelope ( Loewen et al. 2003 ). Importantly, targeting of Opi1 to the nuclear membrane still occurred in the absence of Scs2 in an FFAT-dependent manner ( Loewen et al. 2003 ), indicating that, in addition to Scs2, there must exist another, yet-unidentified receptor for FFAT in the nuclear membrane. As shown in Figure 5 B, INO1 in this strain scored as 85% membrane-associated ( Figure 5 B, bar 1), confirming both our scoring criteria and the previous result that FFAT indeed promotes nuclear membrane targeting. As a control for random distribution, we localized URA3 in a strain expressing GFP-Lac repressor without the FFAT targeting signal. URA3 scored as 23% membrane-associated ( Figure 5 B, bar 3). Induction of the UPR after depletion of inositol had no effect on the localization of either FFAT-tagged INO1 or URA3 in these strains ( Figure 5 B, bars 2 and 4). Given that 25% of the volume of the nucleus is contained in the outer shell represented by only 10% of the radius, this level of background is consistent with a random distribution of the URA3 gene throughout the nuclear volume. Based on the spatial resolution of our data ( Figure 5 A), a spot only 10% of the radius distant from the membrane signal would have been scored as membrane-associated.
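As a quick check of the shell-volume figure invoked above, model the nucleus as a sphere of radius $R$ and treat the outer 10% of the radius as scoring membrane-associated:

$$\frac{V_{\mathrm{shell}}}{V_{\mathrm{nucleus}}} = \frac{R^{3} - (0.9R)^{3}}{R^{3}} = 1 - 0.9^{3} \approx 0.27$$

This is in reasonable agreement with the roughly 25% membrane association expected, and observed, for a randomly distributed locus such as the URA3 control.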
We therefore defined the mean frequency of membrane-association of the URA3 control between these two conditions (25% ± 3%) as the baseline for subsequent comparisons ( Figure 5 B, dashed line). We next compared the membrane association of INO1 under repressing and activating conditions. Under repressing conditions, the membrane association of INO1 was only slightly greater than the baseline (32% ± 3%; Figure 5 B, bar 5). In striking contrast, when INO1 was activated, the frequency of membrane association of INO1 increased significantly over baseline (52% ± 3%; Figure 5 B, bar 6). Thus, we conclude that, in a significant portion of cells, the INO1 gene became associated with the nuclear membrane under UPR-inducing conditions. To confirm that the observed recruitment was indeed due to UPR induction, we compared the membrane association of INO1 under repressing or activating conditions in the hac1 Δ mutant. Because Hac1 is required for activation of INO1, we predicted that membrane association would be prevented in this mutant. Indeed, INO1 failed to become membrane associated in hac1 Δ mutants starved for inositol ( Figure 5 B, bars 7 and 8). Our earlier experiments suggested that Hac1 functions to promote dissociation of Opi1 from the INO1 promoter. We therefore tested next whether the presence of Opi1 prevents membrane association. To this end, we determined INO1 localization in the opi1 Δ strain, in which INO1 is constitutively transcribed ( Cox et al. 1997 ). Indeed, we observed a high degree of membrane association, both in the presence and absence of inositol (68% ± 5%; Figure 5 B, bars 9 and 10). Artificial Recruitment of INO1 Suppresses the scs2 Δ Ino – Phenotype The experiments described above indicate that there is a correlation between membrane association of INO1 and its transcriptional activation. To establish causality, we examined the effect of artificially targeting INO1 to the nuclear membrane. In an otherwise wild-type background, artificial targeting of INO1 to the nuclear membrane via FFAT-Lac repressor binding (same strain as in Figure 5 B, bars 1 and 2) had no effect on INO1 expression as assessed by Northern blot analysis ( Figure 6 A) or on the growth of the wild-type strain in the absence of inositol ( Figure 6 B; compare top two panels). This result suggests that membrane targeting per se is not sufficient to cause activation. In contrast, in the scs2 Δ mutant we observed that the inositol-requiring growth phenotype of the strain was suppressed by expression of the membrane-targeted FFAT-Lac repressor ( Figure 6 B; compare bottom two panels). This effect was strictly dependent on having the Lac operator array integrated at the INO1 locus; expressing GFP-FFAT-Lac repressor in the absence of the array ( Figure 6 C)—or if the array was integrated at the URA3 locus (unpublished data)—did not improve the growth of the scs2 Δ mutant in the absence of inositol. Consistent with the previous report that FFAT does not require Scs2 to promote nuclear membrane targeting, we observed approximately 50% membrane association of INO1 in the strain expressing the FFAT-Lac repressor (78 cells counted, unpublished data). Thus, the defect in transcription of INO1 in the scs2 Δ mutant could be rescued, at least partially, through artificial targeting of INO1 to the nuclear membrane. This result demonstrates that nuclear membrane association is functionally important for achieving INO1 transcriptional activation. 
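The scoring arithmetic behind the percentages reported in this section is simple enough to sketch. In the illustrative Python fragment below, the per-experiment counts are hypothetical placeholders (the paper reports only totals per bar); only the structure follows the Methods: a fraction of membrane-associated cells per experiment, then the mean and SEM across experiments.

```python
# Sketch: fraction of cells scored membrane-associated, with SEM between
# independent experiments (as in Figure 5B). Counts here are hypothetical.
from statistics import mean, stdev
from math import sqrt

# (membrane-associated cells, total cells scored) for each experiment;
# each experiment scored at least 30 cells.
experiments = [(12, 35), (11, 35), (13, 35), (10, 35)]

fractions = [m / n for m, n in experiments]
mean_frac = mean(fractions)
sem = stdev(fractions) / sqrt(len(fractions))  # SEM between experiments

print(f"membrane-associated: {mean_frac:.0%} +/- {sem:.0%} (n = {len(experiments)} experiments)")
```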
Discussion It is becoming increasingly clear that the spatial arrangement of chromosomes within the nucleus is important for controlling the reactions that occur on DNA and might be regulated (reviewed in Cockell and Gasser [1999] ; Isogai and Tjian [2003] ). Here we have shown that activation of INO1 occurs at the nuclear membrane and requires the integral membrane protein Scs2. Moreover, artificial recruitment of INO1 to the nuclear membrane permits activation in the absence of Scs2, indicating that the precise intranuclear localization of a gene can profoundly influence its activation. Most importantly, we have shown that the localization of INO1 depends on its activation state; gene recruitment therefore is a dynamic process and appears to play an important regulatory role. Regulation of Gene Localization The nucleoplasm is bounded by the inner nuclear membrane, which provides a template that is likely to play a major role in organizing the genome. It is clear from numerous microscopic and biochemical studies that chromatin interacts with nuclear membrane proteins, associated proteins such as filamentous lamins, and nuclear pore complexes (DuPraw, 1965; Murray and Davies, 1979; Paddy, 1990; Worman et al., 1990; Belmont et al., 1993; Glass et al., 1993; Foisner and Gerace, 1993; Sukegawa and Blobel, 1993 ; Luderus et al., 1994; Marshall et al., 1996 ). Indeed, several transcriptionally regulated genes have been shown to colocalize with heterochromatin at the nuclear periphery when repressed ( Csink and Henikoff 1996 ; Dernburg et al. 1996 ; Brown et al. 1997 , 1999 ). Likewise, silencing of genes near telomeres requires physical tethering of telomeres to nuclear pore complexes at the nuclear periphery ( Gotta et al. 1996 ; Maillet et al. 1996 ; Andrulis et al. 1998 ; Laroche et al. 1998 ; Galy et al. 2000 ; Andrulis et al. 2002 ; Feuerbach et al. 2002 ). Thus, the nuclear periphery has been generally regarded as a transcriptionally repressive environment ( Gotta et al. 1996 ; Maillet et al. 1996 ; Andrulis et al. 1998 ; Laroche et al. 1998 ; Galy et al. 2000 ; Andrulis et al. 2002 ; Feuerbach et al. 2002 ). In contrast, the work presented here shows that gene recruitment to the nuclear periphery can be important for transcriptional activation. This conclusion is supported by a recent study published while this manuscript was in preparation ( Casolari et al. 2004 ). These authors found that a subset of actively transcribed genes associates with components of nuclear pore complexes and that activation of GAL genes correlates with their recruitment from the nucleoplasm to the nuclear periphery and pore-complex protein association ( Casolari et al. 2004 ). The results presented here argue that recruitment of genes to the nuclear periphery is controlled by transcriptional regulators and is important for achieving transcriptional activation. Thus, together, the work by Casolari et al. (2004) and the work presented here demonstrate that gene recruitment to the nuclear periphery can have a general role in activating transcription. This notion is consistent with the “gene gating hypothesis” put forward by Blobel (1985) . As proposed in this hypothesis, transcription of certain genes may be obligatorily coupled to mRNA export through a particular nuclear pore complex. It remains to be shown for INO1, however, whether gene recruitment to the nuclear periphery involves interaction with nuclear pore complex components. 
Several other scenarios could explain why INO1 activation might require gene recruitment to the nuclear periphery. First, INO1 transcriptional activation requires the SAGA histone acetylase, and both the SWI/SNF and INO80 chromatin remodeling complexes ( Kodaki et al. 1995 ; Pollard and Peterson 1997 ; Ebbert et al. 1999 ; Shen et al. 2000 ; Dietz et al. 2003 ). Conversely, repression requires the Sin3/Rpd3 histone deacetylase and the ISW chromatin remodeling complex ( Hudak et al. 1994 ; Sugiyama and Nikawa 2001 ). Thus, if these factors have distinct subnuclear distributions, then the localization of genes regulated by them might influence their transcriptional state. Consistent with this notion, the SAGA complex interacts with nuclear pore complexes, and therefore might be concentrated at the nuclear periphery, where INO1 activation occurs ( Rodriguez-Navarro et al. 2004 ). Second, because INO1 and many other UAS INO -regulated genes are involved in the biosynthesis of phospholipids, it is possible that the state of the membrane itself plays a role, perhaps sensed by Scs2, in activating transcription. It has been shown that defects in phospholipids biosynthesis can disrupt regulation of INO1, although the mechanism of this regulation remains unknown ( Greenberg et al. 1982 ; McGraw and Henry 1989 ; Griac et al. 1996 ; Griac 1997 ; Shirra et al. 2001 ). Third, inositol polyphosphates have been shown to regulate SWI/SNF-catalyzed chromatin remodeling, and it is possible that their production is spatially restricted ( Shen et al. 2003 ; Steger et al. 2003 ). Role of Factors Regulating INO1 Activation Our current understanding of INO1 activation is summarized in a model in Figure 7 . The positive transcription activators Ino2 and Ino4 constitutively associate with the INO1 promoter, which is kept transcriptionally repressed by Opi1. We do not currently understand the mechanism by which Opi1 prevents activation. Activation of the UPR leads to the production of Hac1, which, by an unknown mechanism, promotes Opi1 dissociation from chromatin. We propose that Scs2 at the nuclear membrane binds to Opi1 released from the DNA and thus keeps Opi1 sequestered and prevented from rebinding. Indeed, overproduction of Scs2 bypasses the requirement for Hac1 in activation of INO1 transcription and allows hac1 Δ cells to grow in the absence of inositol ( Nikawa et al. 1995 ), supporting the role of Scs2 as a sink for Opi1 and suggesting that Opi1 may cycle between chromatin-bound and free states. Figure 7 Model for INO1 Gene Recruitment and Transcriptional Activation Ino2 and Ino4 bind constitutively to the INO1 promoter. Under repressing conditions, Opi1 associates with chromatin to prevent activation, and the INO1 locus localizes to the nucleoplasm. Hac1 synthesis under UPR-inducing conditions promotes dissociation of Opi1 from chromatin. Scs2 binds to Opi1 at the nuclear membrane to stabilize the non-chromatin-bound state. Dissociation is coupled to recruitment of INO1 to the nuclear membrane, where transcriptional activation occurs. Both Hac1 ( Cox et al. 1997 ) and Scs2 (see Figure 1 ) are dispensable for INO1 activation in the absence of Opi1, suggesting that their role is to relieve Opi1 repression. However, our data suggest that Hac1 and Scs2 have distinct functions: While the absence of either protein prevents the dissociation of Opi1 from chromatin and the activation of INO1, we propose that the role of Hac1 is to promote dissociation and that of Scs2 is to prevent reassociation. 
This model explains why artificially tethering INO1 to the nuclear membrane suppresses the absence of Scs2 but not the absence of Hac1 (unpublished data): We propose that the environment of membrane-tethered INO1 promotes late steps in the transcription activation—such as chromatin remodeling, discussed above—permitting INO1 to be expressed upon transient Hac1-induced Opi1 dissociation. Therefore, we envision that dissociation of Opi1 from the INO1 promoter is coupled to the delivery of the gene to an environment near the nuclear membrane that is permissive for its activation. The mechanistic role of Scs2 is currently not known. Its recently discovered function in promoting telomeric silencing ( Craven and Petes 2001 ; Cuperus and Shore 2002 ) suggests that Scs2 may play a more global role in the regulation of transcription at the nuclear membrane. Scs2 contains a major sperm protein domain, named after a homologous protein in Ascaris suum sperm that forms a cytoskeletal structure and confers motility to sperm cells. It is thus tempting to speculate that Scs2 might similarly self-associate in the plane of the nuclear membrane, perhaps providing a two-dimensional matrix on which membrane-associated reactions could be organized. One suggestion from our data is that Scs2 may function as a local sink for Opi1. But it is also clear that other nuclear membrane components are likely to participate in the reaction. Opi1, for example, still localizes to nuclear membranes even in scs2 Δ cells, indicating that another, yet-unidentified Opi1 binding partner must exist ( Loewen et al. 2003 ; unpublished data). Similarly, artificial INO1 recruitment to the membrane via the FFAT motif suppresses the scs2 Δ phenotype (see Figure 6 )—i.e., it is sufficient to position INO1 in an environment permissive for its induction—yet the FFAT binding protein and the molecular nature of the permissive environment remain unknown. Upon inducing the UPR, only 52% of the cells scored INO1 as membrane-associated (see Figure 5 ). Thus, under activating conditions, two types of cells are present in the population at any one time: those in which the INO1 gene is recruited to the membrane, and those in which the INO1 gene is dispersed throughout the nucleoplasm. This score correlated with the level of INO1 transcription; INO1 was membrane-associated in 68% of the cells in the opi1 mutant, which exhibits a correspondingly higher degree of activation than that observed in the wild-type strain. A quantitatively similar nuclear peripheral-nucleoplasmic distribution was observed upon activation of GAL genes ( Casolari et al. 2004 ), suggesting that it may be a general feature of gene recruitment. There are at least two possible interpretations for the observed bimodal distributions. First, the distribution profiles might represent heterogeneity in the activation of INO1 among cells. In this case, activation of INO1 would be variable in individual cells exposed to identical conditions. Gene recruitment thus would stably trap INO1 in a permissive environment for activation, and the localization of INO1 would strictly correlate with its activation state. Alternatively, gene recruitment might alter the balance between two rapidly exchanging states; that is, stable membrane recruitment would not be required for activation. In this case, the observed distributions would represent snapshots of transient colocalization of INO1 with the nuclear membrane within a population of cells that are uniformly activating transcription. 
Dynamic measurements of gene recruitment and single cell activity assays will need to be developed to distinguish between these possibilities. But no matter which of these possibilities proves to be correct, gene recruitment emerges as a new mechanism regulating eukaryotic gene expression and may be crucial to the regulation of many genes. Materials and Methods Antibodies and reagents Monoclonal anti- HA antibody HA11 was obtained from Babco (Berkeley, California, United States). Monoclonal anti- myc , anti- myc agarose and anti-HA agarose were from Santa Cruz Biotechnology (Santa Cruz, California, United States). Monoclonal anti-Pgk1, rabbit polyclonal anti-GFP, goat anti mouse IgG-Alexafluor 594, and goat anti-rabbit IgG Alexafluor 488 were from Molecular Probes (Eugene, Oregon, United States). All restriction endonucleases and DNA modification enzymes were from New England Biolabs (Beverly, Massachusetts, United States). Unless indicated otherwise, all other chemicals and reagents were from Sigma (St. Louis, Missouri, United States). Strains and plasmids All yeast strains used in this study were derived from wild-type strain CRY1 (ade2–1 can1–100 his3–11,15 leu2–3,112 trp1–1 ura3–1 MAT a ) . Tags and disruptions marked with either the kan r gene from E. coli or the His5 gene from S. pombe were introduced by recombination at the genomic loci as described ( Longtine et al. 1998 ). Strains used in this study, with relevant differences indicated are JBY345 (OPI1–13myc::kan r ), JBY350-r1 (scs2 Δ :: kan r ), JBY359 (SCS2-HA:: kan r ), JBY356–1A (opi1 Δ ::LEU2), JBY356–1B (opi1 Δ ::LEU2 scs2 Δ :: kan r ), JBY356–1C (scs2 Δ :: kan r ), JBY356–1D (wild-type control), JBY361 (scs2 Δ TMD-HA:: kan r ), JBY370 (INO2-HA3::His5+), JBY371 (INO4-HA3::His5+), JBY393 (INO4-myc::His5+ MAT a ), JBY397 (SEC63–13myc:: kan r INO1:LacO128:URA3 HIS3:LacI-GFP), JBY399 (SEC63–13myc::Kan^r INO1:LacO128:URA3 HIS3:LacI-FFAT-GFP), JBY401 (ino4 Δ ::LEU2 SEC63–13myc::Kan r INO1:LacO128:URA3 HIS3:LacI-GFP MATα), JBY404 (opi1 Δ ::LEU2 SEC63–13myc::Kan r INO1:LacO128:URA3 HIS3:LacI-GFP), JBY406 (opi1 Δ ::LEU2 SEC63–13myc::Kan r INO1:LacO128:URA3 HIS3:LacI-FFAT-GFP), JBY409 (SEC63–13myc::Kan r URA3:LacO128:URA3 HIS3:LacI-GFP), JBY412 (INO2-myc::His5+), JBY 416 (hac1 Δ ::URA3 SEC63–13myc::Kan r LacO128:INO1 HIS3:LacI-GFP) . Plasmid pRS315-Opi1- myc was created by first amplifying the OPI1-myc coding sequence and 686 bp upstream from the translational start site from strain JBY345 using the following primers: OPI1 promoter Up (5′-GGGAGATACAAACCATGAAG-3′) and OPI1 down (5′-ACTATACCTGAGAAAGCAACCTGACCTACAGG-3′). The resulting fragment was cloned into pCR2.1 using the Invitrogen (Carlsbad, California, United States) TOPO TA cloning kit. The OPI1-myc locus was then cloned into pRS315 as a HindIII-NotI fragment. Plasmid pASF144 expressing GFP-lacI has been described ( Straight et al. 1996 ). Plasmid pGFP-FFAT-LacI was constructed by digesting pASF144 with EcoRI and ligating the fragment to the following hybridized oligonucleotides, encoding the FFAT motif from OPI1: LacI_FFAT1 (5′-AATTGGACGATGAGGAGTTTTTTGATGCCTCAGAGG-3′) and LacI_FFAT2 (5′-AATTCCTCTGAGGCATCAAAAAACTCCTCATCGTCC-3′). The orientation of the insert was confirmed by DNA sequencing. Both pAFS144 and pGFP-FFAT-LacI were digested with NheI, which cuts within the HIS3 gene, and transformed into yeast. Plasmid p6INO1LacO128 was constructed as follows. 
The INO1 coding sequence, with 437 bp upstream and 758 bp downstream, was amplified from yeast genomic DNA using the following primers: INO1_promoter_Up (5′-GATGAGGCCGGTGCC-3′) and INO1_3′down (5′-AAGATTTCCTTCTTGGGCGC-3′), and cloned into pCR2.1 using the Invitrogen TOPO TA cloning kit, to produce pCR2.1-INO1. INO1 was moved from pCR2.1 into pRS306 as a KpnI fragment, to produce pRS306-INO1. The Lac operator array was then cloned from pAFS52 into pRS306-INO1 as a HindIII-XhoI fragment, to produce plasmid 10.2. Because the Lac operator fragment was smaller than had been reported (2.5 kb instead of 10 kb), presumably reflecting loss of Lac operator repeats by recombination, the Lac operator array was duplicated by digesting plasmid 10.2 with HindIII and SalI and introducing a second copy of the 2.5-kb HindIII-XhoI fragment, as described ( Robinett et al. 1996 ). The resulting plasmid, p6INO1LacO128, has a 5-kb Lac operator array, corresponding to approximately 128 repeats of the lac operator. To integrate this plasmid at INO1, p6INO1LacO128 was digested with BglII, which cuts within the INO1 gene, and transformed into yeast. The INO1 gene was removed from this plasmid to generate p6LacO128. This plasmid was used to integrate the Lac operator array at URA3 by digestion with StuI and transformation into yeast. Immunoprecipitations Cells were lysed using glass beads in IP buffer (50 mM Hepes-KOH pH 6.8, 150 mM potassium acetate, 2 mM magnesium acetate, and Complete Protease Inhibitors [Roche, Indianapolis, Indiana, United States]). The whole cell extract was used for coimmunoprecipitation of Ino2- myc and Ino4-HA. For immunoprecipitation of Opi1- myc , microsomes were pelleted by centrifugation for 10 min at 21,000 × g and resuspended in IP buffer. Triton X-100 was then added to either whole cell extract (Ino2- myc ; final concentration of 1%) or the microsomal fraction (Opi1- myc ; final concentration of 3%) and incubated for 30 min at 4 °C; detergent-insoluble material was then removed by centrifugation at 21,000 × g , 10 min. Anti- myc agarose was added to the supernatant and incubated 4 h at 4 °C, while rotating. For the experiment in Figure 1 D, a fraction of the total was collected after antibody incubation. After agarose beads were pelleted, an equal fraction of the supernatant was collected. Beads were washed either five (see Figure 1 B and 1 D) or ten times (see Figure 1 C) with IP buffer. A fraction of the final wash equal to the pellet fraction in Figure 1 D was collected. After the final wash, proteins were eluted from the beads by heating in sample buffer and separating by SDS-PAGE (see Figure 1 B and 1 D). Trypsin digestion, gel extraction, and mass spectrometry of proteins that coimmunoprecipitated with Opi1 were performed by the HHMI Mass Spectrometry facility (University of California, Berkeley, United States). Immunoblot and Northern blot analysis For immunoblot analysis, 25 μg of crude protein, prepared using urea denaturing lysis buffer ( Ruegsegger et al. 2001 ), was separated on Invitrogen NuPage polyacrylamide gels, transferred to nitrocellulose, and immunoblotted. RNA preparation, electrophoresis, and labeling of probes for Northern blot analysis has been described ( Ruegsegger et al. 2001 ) . Immunofluorescence Immunofluorescence was carried out as described ( Redding et al. 1991 ), except that cells were harvested and fixed by incubation in 100% methanol at –20 °C for 20 min. 
Fixed, spheroplasted, detergent-extracted cells were probed with 1:200 monoclonal anti- myc (see Figures 1 and 5 ), 1:200 monoclonal anti-HA (see Figure 4 ), or 1:1000 rabbit polyclonal anti-GFP (see Figure 5 ). Secondary antibodies were diluted 1:200. Vectashield mounting medium (Vector Laboratories, Burlingame, California, United States) was applied to cells before sealing slides and visualizing using a Leica TCS NT confocal microscope (Leica, Wetzlar, Germany). For experiments localizing the GFP-Lac repressor, we first collected a single z slice through each cell that captured the brightest, most focused point of the GFP-visualized Lac operator array. This z slice was picked blind with respect to the nuclear membrane staining. Images in which this slice showed a clear nuclear membrane ring staining with a "hole" of nucleoplasm were then scored as follows: Cells in which the peak of the GFP-Lac repressor spot coincided with Sec63- myc nuclear membrane staining were scored as membrane-associated, and cells in which the peak of this spot was offset from nuclear membrane staining were scored as nucleoplasmic. Chromatin immunoprecipitation Chromatin immunoprecipitation was carried out on strains expressing endogenous levels of tagged Ino2, Ino4, and Opi1 as described ( Strahl-Bolsinger et al. 1997 ), with the following modifications. The time of formaldehyde fixation was specific for each tagged protein. Strains expressing Ino2-HA were fixed for 15 min, strains expressing Ino4-HA were fixed for 60 min, and strains expressing Opi1- myc were fixed for 30 min. After lysis, cells were sonicated 15 times for 10 s at 30% power using a microtip on a Vibracell VCX 600 Watt sonicator (Sonics and Materials, Newtown, Connecticut, United States). After sonication, lysates were centrifuged 10 min at 21,000 × g to remove insoluble material and incubated for 4 h with anti-HA agarose or anti- myc agarose. After elution of immunoprecipitated DNA and reversal of crosslinks by heating to 65 °C for 8 h, DNA was recovered using Qiaquick columns from Qiagen (Alameda, California, United States). Eluted samples were analyzed by PCR using the following primers against the INO1 promoter or the URA3 gene: INO1_proUp2 (5′-GGAATCGAAAGTGTTGAATG-3′), INO1_proDown (5′-CCCGACAACAGAACAAGCC-3′), URAup (5′- GGGAGACGCATTGGGTCAAC-3′), and URADown (5′-GTTCTTTGGAGTTCAATGCGTCC-3′). Real time quantitative PCR analysis PCR reactions were carried out as described ( Rogatsky et al. 2003 ) using a DNA Engine Opiticon 2 Real-Time PCR machine (MJ Research, Waltham, Massachusetts, United States), using 1/25 of the immunoprecipitation fraction and an equal volume of a 1:400 dilution of the input fraction as template. Primers used were: INO1up3 5′-ATTGCCTTTTTCTTCGTTCC-3′), INO1down2 (5′-CATTCAACACTTTCGATTCC-3′), URAup2 (5′-AGACGCATTGGGTCAAC-3′), and URAdown2 (5′-CTTCCCTTTGCAAATAGTCC-3′). Dilution of the input fraction from 1:25 to 1:12,800 in fourfold steps demonstrated that reactions were within the linear range of template. This dilution series was used as a standard curve of C(T) values versus relative template concentration for both primer sets. The concentration of the INO1 promoter and the URA3 gene were calculated using this standard curve. The ratio of INO1 promoter to URA3 was corrected for each sample to make the input ratio equal to 1.0. 
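The standard-curve quantitation described above can be sketched as follows. This is an illustrative reconstruction rather than the authors' analysis script, and all Ct values shown are hypothetical. The logic follows the Methods: fit Ct against log template amount for the dilution series, invert the fit to convert sample Cts into relative concentrations for each primer set, then normalize the INO1/URA3 ratio so that the input ratio equals 1.0.

```python
# Sketch of standard-curve real-time PCR quantitation (Ct values hypothetical).
import numpy as np

def fit_standard_curve(rel_template, cts):
    """Fit Ct = slope * log2(relative template amount) + intercept."""
    slope, intercept = np.polyfit(np.log2(rel_template), cts, 1)
    return slope, intercept

def rel_concentration(ct, slope, intercept):
    """Invert the standard curve: Ct -> relative template concentration."""
    return 2.0 ** ((ct - intercept) / slope)

# Fourfold dilution series of the input fraction (amounts relative to the
# most concentrated point) and hypothetical Ct values for each primer set.
dilution = np.array([1, 1/4, 1/16, 1/64, 1/256])
ct_ino1 = np.array([18.0, 20.0, 22.1, 24.0, 26.1])
ct_ura3 = np.array([19.1, 21.0, 23.0, 25.1, 27.0])

s_i, b_i = fit_standard_curve(dilution, ct_ino1)
s_u, b_u = fit_standard_curve(dilution, ct_ura3)

# Hypothetical Cts for the immunoprecipitate (IP) and diluted input reactions.
ip_ratio = rel_concentration(24.5, s_i, b_i) / rel_concentration(26.8, s_u, b_u)
input_ratio = rel_concentration(20.3, s_i, b_i) / rel_concentration(21.0, s_u, b_u)

# Correct each sample so that the input INO1/URA3 ratio equals 1.0.
print(f"INO1/URA3 enrichment in the IP: {ip_ratio / input_ratio:.2f}")
```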
Supporting Information Accession Numbers The GenBank accession numbers of the genes and proteins discussed in this paper are Ire1 (NP_116622), INO1 (NP_012382), Trl1 (NP_012448), Opi1 (NP_011843), Hac1 (NP_011946), Ino2 (NP_010408), Ino4 (NP_014533), Scs2 (NP_009461), Rap1 (NP_014183), URA3 (NP_010893), SSS1 (NP_010371), Pgk1 (NP_009938), lacI (NP_414879), and Ikaros (Q03267).
Molecules That Cause or Prevent Parkinson's Disease

An overview of the molecules and associated cell biology underlying neuron death in Parkinson's Disease

The consequence of Parkinson's disease (PD) is well described: a progressive movement disorder that, whilst responding to symptomatic therapy, chronically disables its sufferers and adds an enormous economic burden in an aging society. We have some clues to the process underlying the disease from the snapshot provided by postmortem studies of diseased brains. Groups of neurons in specific brain regions are lost, notably those that produce dopamine in a part of the midbrain called the substantia nigra. Those neurons that do survive to the end of the disease course contain accumulations of proteins and lipids within their cytoplasm. Named after their discoverer, these "Lewy bodies" are one piece of evidence that protein aggregation is related to the ongoing disease process. In contrast, the causes of PD are poorly defined except in those rare variant forms that are clearly genetic. Several families have been described where PD-like syndromes are inherited in either a dominant or recessive fashion, and four of the underlying genes have been identified. The precise relationships between these different syndromes are complex and are the subject of some controversy. For example, it is not clear whether all the genetic diseases given PARK nomenclature have Lewy bodies and should be considered "true" PD—the term parkinsonism is preferred for these syndromes ( Hardy and Langston 2004 ). For the purposes of this primer, I will concentrate on the molecular biology of the genes linked to PD rather than disease etiology. However, my assumption is that symptoms of the disease are a reflection of neuronal dysfunction, and that in the disease state the balance between damage and survival tips in the direction of cell loss. Whilst dominant mutations overwhelm the ability of cells to survive, recessive mutations result in the absence of protective proteins and make the neuron grow weaker. Aggregation of α-Synuclein in Neurodegeneration On the detrimental side of the cell survival equation is the PD gene that was discovered first, α-synuclein. The synaptic protein encoded by this gene, α-synuclein, is prone to aggregation, and, as is the case for other aggregating proteins, mutations in α-synuclein are associated with dominantly inherited disease. Related to this, α-synuclein is a major protein component of Lewy bodies. The phenotype of patients with α-synuclein mutations varies from PD to a more diffuse Lewy body disease in which pathology is detected in the cerebral cortex and other areas of the brain. Mutations in α-synuclein include three point mutations (A30P, E46K, and A53T) and multiplication of the wild-type (normal) gene. All of these mutations increase the tendency of α-synuclein to aggregate, suggesting that disease is a consequence of protein aggregation. An interesting example is the triplication of the wild-type gene: toxicity and aggregation can both be driven by increased expression and are thus quantitative, not qualitative, effects ( Singleton et al. 2003 ). The fact that the wild-type protein can aggregate suggests that the process is fundamentally similar for both inherited and sporadic PD in which wild-type α-synuclein is also present in Lewy bodies.
Several commentators have suggested that non-genetic risk factors may also promote damage via their effects on (wild-type) α-synuclein conformation or aggregation (e.g., Di Monte 2003 ). This reinforces the notion that α-synuclein is central to the pathogenesis of both sporadic and familial PD. There is some controversy about the exact nature of the toxic species produced by α-synuclein, as one point mutation (A30P) behaves differently from the others. Instead of forming fibrils, which are insoluble, high-molecular-weight species, A30P forms relatively soluble, partially aggregated species ( Conway et al. 2000 ). These intermediate-sized protein aggregates are referred to as oligomers or protofibrils. Some authors have argued that since A30P causes disease, oligomers/protofibrils are the authentic toxic species. It is generally assumed that fibrils are the form of α-synuclein deposited into Lewy bodies, but whether Lewy bodies damage cells is controversial. One possibility is that by sequestering α-synuclein into this insoluble body and compartmentalizing the potentially toxic species away from possible targets in the cytoplasm, the Lewy body represents an attempt of the cell to protect itself ( Olanow et al. 2004 ). Whether the Lewy body is damaging or neuroprotective, there are clearly several possible targets for toxic α-synuclein within the cell. For example, aggregated α-synuclein can permeabilize cellular membranes and thus might damage organelles ( Volles and Lansbury 2003 ). Mitochondrial function and synaptic transmission may be especially affected, and both of these can secondarily increase oxidative stress within the cytosol ( Greenamyre and Hastings 2004 ). When overexpressed, mutant α-synuclein can inhibit the proteasome ( Petrucelli et al. 2002 ), a multiprotein complex that degrades many unwanted or inappropriate proteins in cells. Mutant forms of α-synuclein also inhibit chaperone-mediated autophagy, another important protein turnover pathway that involves lysosomes ( Cuervo et al. 2004 ). Between these two effects, it is likely that cells with aggregated α-synuclein will become less able to handle damaged or misfolded proteins. It is also possible that other cellular processes that we have not yet identified are affected by the presence of this protein that has such an innate tendency to aggregate. Presumably, neurons require α-synuclein for their normal function and thus cannot simply dispense with this protein that has toxic properties, although mice in which the α- synuclein gene is knocked out have no obvious deficits (see Dauer and Przedborski [2003] for discussion). Parkin, DJ-1, and PINK1 in Neuroprotection Evolution has provided cells with many ways to protect themselves. As we will see, mutations that cause recessive diseases result in the loss of these neuroprotective functions. The genes involved in recessive parkinsonism are, in order of discovery, parkin, DJ-1, and PINK1 . The three protein products of these genes all have different functions, thus implicating several different cellular functions in neuroprotection. Parkin is an E3 ubiquitin–protein ligase, promoting the addition of ubiquitin to target proteins prior to their degradation by the proteasome. The identification of parkin's function was facilitated by the observation that the protein contains a RING finger ( Zhang et al. 2000 ), a common motif amongst this class of E3 enzymes. 
Several parkin substrates have been proposed, and at least two are damaging to neurons if they are allowed to accumulate ( Dong et al. 2003 ; Yang et al. 2003 ). Therefore, our best evidence to date indicates that parkin benefits neurons by removing proteins that might otherwise damage the cell. In fact, expression of parkin is neuroprotective in a number of contexts, and there is even evidence for a beneficial effect of this E3 ligase on mitochondrial function ( Shen and Cookson 2004 ). Data on PINK1 are limited, but the protein contains two motifs that indicate its likely cellular role. At the amino-terminus of PINK1 is a mitochondrial-targeting sequence, and mitochondrial localization has been confirmed in the one study published to date ( Valente et al. 2004 ). Most of the rest of PINK1 is a Serine/Threonine protein kinase domain, followed by a short carboxy-terminal region of unclear significance. The substrates of PINK1 have not yet been identified, but presumably phosphorylation of these substrates controls some critical function for neuronal survival. In their paper, Valente and colleagues show that PINK1 decreases damage to mitochondria induced by proteasome inhibition, but a recessive mutant PINK1 is unable to protect cells. The discussion of protein functions gets more complicated in the case of DJ-1. Unlike parkin or PINK1, there are no motifs within DJ-1 that hint strongly at a single function. Instead, DJ-1 is a member of a large superfamily of genes with several different functions across species ( Bandyopadhyay and Cookson 2004 ). These include proteases in thermophilic bacteria, transcription factors, and chaperones that promote protein refolding. Several research groups have published data in support of DJ-1 having one or more of these activities, including the report, published in this issue of PLoS Biology , that DJ-1 is a molecular chaperone that regulates α-synuclein, among other molecules ( Shendelman et al. 2004 ). It is not yet firmly established which activity of DJ-1 is most relevant to recessive parkinsonism. The important function of DJ-1 might be unrelated to any of the above activities. For example, there are several roles of this protein in modulation of transcriptional responses, which may be critical in maintaining neuronal viability ( Bonifati et al. 2003 and references therein) DJ-1 is also known to be responsive to oxidative conditions, under which cysteine residues are oxidized to form cysteine-sulfinic acids. There is some discussion about which cysteine residue is oxidized; the most likely is cysteine 106, which is present in a nucleophile elbow in the protein. We have suggested that modifying this residue precludes DJ-1 oxidation under mild conditions and also blocks the neuroprotective activity of DJ-1 against mitochondrial toxicity ( Canet-Aviles et al. 2004 ). Therefore, whatever the function of DJ-1, it seems to be related to oxidation. In support of this idea, cells with DJ-1 knocked out show increased sensitivity to oxidative stress ( Yokota et al. 2003 ). Another study published in this issue of PLoS Biology shows that dopamine neurons differentiated from embryonic stem cells lacking functional DJ-1 are especially sensitive to oxidative stress ( Martinat et al. 2004 ). This discussion indicates that the genes responsible for recessive parkinsonism all have different functions but are all, in a broad sense, neuroprotective. A very difficult question to answer is whether this has anything to do with α-synuclein. 
We have shown that parkin can mitigate the toxicity of mutant α-synuclein ( Petrucelli et al. 2002 ). Although there are reports that a proportion of α-synuclein is a parkin substrate ( Shimura et al. 2001 ), most of the protein is not degraded by the ubiquitin-proteasome system. Recent evidence points, instead, to an important role of the lysosome, the other major pathway within cells for degrading unwanted proteins, in clearing α-synuclein ( Cuervo et al. 2004 ). On balance, therefore, there is no direct evidence that parkin controls α-synuclein toxicity by an effect on protein levels within the cell. Furthermore, parkin does not just prevent α-synuclein toxicity: it is beneficial against several other stresses (discussed in Shen and Cookson 2004 ), leading to the possibility that this protein protects neurons against more than just the processes implicated in PD. It has also been suggested that DJ-1 can prevent the accumulation of aggregated α-synuclein and that cysteine 53 is critical for this activity ( Shendelman et al. 2004 ). However, DJ-1 is not just a chaperone for α-synuclein; it can also promote refolding of citrate synthase, glutathione transferase, and neurofilament light. Other research groups have reported similar findings ( Olzmann et al. 2004 ), although there are differences between these studies in which cysteine residues are thought to be required for DJ-1 function. Given that there are some differences in these results, further clarification of the role for DJ-1 in α-synuclein-mediated toxicity is needed. More generally, we have to bear in mind that whether recessive parkinsonism has anything to do with α-synuclein is still an open question. What is clear is that some neurons rely on parkin, DJ-1, or PINK1 to protect themselves against the many stresses that they face. However, mutations in these genes do not cause generalized neurodegeneration; in fact, they tend to be more restricted and less progressive than, for example, α-synuclein mutations. This suggests, at least to my mind, that recessive mutations indicate something about the neurons that are damaged in these disorders. Why is this of more than academic importance? Perhaps by identifying the proximal events that are sufficient to cause a specific set of neurons to degenerate, we might begin to design therapies that address the underlying degeneration in PD and not just the consequences. Figure 1 Molecules That Cause or Prevent Parkinson's Disease (A) shows a simplified, linear view of the aggregation pathway of α-synuclein (in blue). The monomer of α-synuclein is a natively unfolded protein with several repeats, shown by dark bars on the monomer. The protein has an innate tendency to aggregate with other molecules of α-synuclein, first into oligomers (also known as protofibrils), then into fibrils. It is the fibrillar forms of α-synuclein that are deposited into the classic pathological structures of PD, Lewy bodies. There are several studies that suggest that the oligomeric intermediates are the major toxic species, although this is not certain. (B) shows the recessive mutations associated with parkinsonism and their possible relationships to subcellular targets, either mitochondria (left) or the proteasome (right). Insults to either of these can cause cellular damage and may interact. For example, proteasome inhibitors can cause mitochondrial damage, which can be antagonized by PINK1. Parkin can promote the turnover of proteasomal substrates, and DJ-1 can prevent mitochondrial damage. 
Quite whether (B) relates to (A) is not clear, but recent results with DJ-1 imply that DJ-1 has chaperone activity towards oligomers of α-synuclein (see text). Although there is much to be done to resolve the order of these events, it is likely that, either alone or in concert, damage to multiple cellular pathways leads to neuronal dysfunction and, eventually, cell death.
Hominids Lose Control

What makes us human? From a philosophical perspective, the answer may lie in part in our apparently unique need—and self-awareness—to ask the question in the first place. From a biological perspective, the answer lies in part in the sequence of our DNA. While fossil evidence has provided a rough draft of the story of human evolution, much more remains to be learned about the path our genes followed, a path that diverged millions of years ago from our closest living hominid relatives, the chimp and bonobo. Charting differences between human genomes and those of our evolutionary relatives—both near and distant—has become a powerful tool for filling in the gaps in the human fossil record. Comparing the human genome to the genomes of other great apes can provide a window into the molecular changes that may ultimately spell the difference between human and nonhuman primates. That task was recently aided by the release of the draft sequence of the chimpanzee genome. Comparing the protein-coding sequences of human and chimp has identified molecular dissimilarities between us, which is to be expected. Though many differences between species can be explained at the molecular level by differences in protein structure, where and when a given protein is produced can be just as or even more important. Differences in protein expression arise from sequences in non-coding DNA that influence the timing and regulation of protein production and action. In a new study, Peter Keightley and colleagues conduct parallel comparative genomics studies—comparing regulatory regions in the chimp and human genome with those of mouse and rat—and make a startling discovery. The hominid lineages show a surprising lack of selective constraint—deleterious mutations have apparently accumulated—compared to the rodents, racking up an estimated additional 140,000 harmful mutations fixed, or retained, in the human and chimp lineages since they diverged. Such mutations have been selectively eliminated in mouse and rat. The authors focused on DNA sequences making up the bulk of gene-regulating elements—regions immediately preceding or following protein-coding sequences, as well as the first intron of each gene (an intron is a non-coding DNA sequence squeezed between two adjacent coding fragments). The degree of conservation in these areas was weighed against the conservation in other nearby non-coding sequences, which were assumed to be free of selective constraints. Keightley and colleagues found marked conservation in the regulatory regions between mice and rats, but nearly none between humans and chimps. This result suggests that the gene-regulating elements of hominids are subject to nearly unfettered mutation accumulation, likely due to an absence of natural selection forces strong enough to stabilize the ancestral sequences common to both human and chimpanzee. How can one explain these puzzling results? Keightley and colleagues propose that selection is ineffective against mildly unfavorable mutations in the gene-regulating regions because of the small effective population size in the evolutionary history of hominids. What do these results suggest for the future of human evolution? It's unlikely that the regulatory gatekeepers of our genome will allow mutations to spin out of control.
Even if the number of unwanted mutations were to increase, stronger natural selection against them is likely to develop in parallel, Keightley and colleagues explain, protecting our fitness from a downward spiral. The authors' results support the notion that population size exerts a powerful influence on evolutionary changes at the molecular level and that many changes in gene control regions are under weak selection. With each new sequenced genome added to the comparative genomics lexicon, scientists are becoming increasingly conversant in the grammar and syntax of gene sequences—and filling in more and more gaps in the human story, letter by letter.
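The constraint comparison at the heart of this study can be illustrated with a small worked example. The sketch below is my own simplification with invented substitution counts, not the authors' data or pipeline: constraint is estimated as the fractional deficit of substitutions in regulatory regions relative to nearby putatively neutral sequence, and the excess of harmful mutations fixed in hominids is what selection would have removed had hominid regulatory DNA been as constrained as rodent DNA.

```python
# Sketch: selective constraint as the deficit of substitutions in regulatory
# DNA relative to nearby (assumed neutral) non-coding DNA. Counts invented.

def constraint(subs_regulatory, sites_regulatory, subs_neutral, sites_neutral):
    """1 - (regulatory divergence / neutral divergence); 0 means no selection."""
    d_reg = subs_regulatory / sites_regulatory
    d_neu = subs_neutral / sites_neutral
    return 1.0 - d_reg / d_neu

# Hypothetical substitution counts per million aligned sites
rodent = constraint(6_000, 1_000_000, 10_000, 1_000_000)   # -> 0.40
hominid = constraint(9_900, 1_000_000, 10_000, 1_000_000)  # -> 0.01

# If hominid regulatory DNA had been as constrained as rodent DNA, selection
# would have removed this many of the observed substitutions:
observed_hominid_subs = 9_900
expected_under_rodent_constraint = 10_000 * (1 - rodent)
excess = observed_hominid_subs - expected_under_rodent_constraint

print(f"rodent constraint {rodent:.2f}, hominid constraint {hominid:.2f}")
print(f"excess substitutions fixed in hominids: {excess:.0f}")
```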
Generalized Seizure in a Mauritian Woman Taking Bupropion

Presentation of Case

A 24-y-old woman was admitted to the emergency department having had a generalized seizure (acute loss of consciousness, convulsive movements of her arms and legs, and confusion on regaining consciousness). She was on the sixth day of treatment with 300 mg daily of slow-release bupropion (Zyban SR) as an aid to smoking cessation. She had a past medical history of tonsillectomy and hay fever, for which she was taking budesonide nasal drops (two drops daily, each drop 200 mcg). She was on no other medication. There was no history of head trauma, liver disease, or alcohol withdrawal. Clinical examination, including neurological examination, was normal. The patient's weight was 48 kg. Her blood pressure was 130/80 mm Hg. Electrocardiogram showed a sinus tachycardia at 102 beats per minute. Radiography of the skull and a computed tomography scan of the brain without contrast were both normal. The patient's blood glucose, urea, electrolytes, and liver function tests were all normal. Her serum calcium was 2.01 mmol/l (normal range, 2.0–2.6 mmol/l) and her hemoglobin was 116 g/l (normal range, 120–140 g/l). The bupropion was discontinued, and the patient recovered without any further seizures or other neurological sequelae.

Case Discussion

Bupropion for Smoking Cessation Originally developed as an antidepressant, bupropion has more recently been licensed in many countries as an aid to smoking cessation. It came onto the Mauritian market as a smoking cessation aid in November 2003. The British National Formulary ( www.bnf.org ) recommends starting the drug one to two weeks before the target smoking stop date, initially at a dose of 150 mg daily for 6 d and then 150 mg twice daily. The maximum period of treatment is 7–9 wk; treatment should be discontinued if abstinence is not achieved by 7 wk. The efficacy of bupropion as an aid to smoking cessation has been shown in randomized, double-blind, placebo-controlled trials [ 1 , 2 ]. But there have also been reports of death, seizure, serum sickness, generalized acute urticaria, myocardial infarction, and psychosis in people taking the drug [ 3 , 4 , 5 , 6 , 7 ]. Our report is of a woman who had a generalized seizure on the sixth day of treatment with bupropion; she had no other risk factors for seizures. Contraindications to Bupropion To reduce the risk of seizures, the drug should not be given to patients with a current seizure disorder or any history of seizures, with a current or previous diagnosis of bulimia or anorexia nervosa, with a known central nervous system tumour, or to those experiencing abrupt withdrawal from alcohol or benzodiazepines [ 8 , 9 ]. The United Kingdom Medicines Control Agency states that bupropion must not be prescribed in patients with other risk factors for seizures, unless there is a compelling clinical justification for which the potential medical benefit of smoking cessation outweighs the potential increased risk of seizure [ 8 ]. Predisposing risk factors for seizure include the following: concomitant use of medications known to lower seizure threshold (including antipsychotics, antidepressants, antimalarials, tramadol, theophylline, systemic steroids, quinolones, and sedating antihistamines), history of head trauma, diabetes treated with hypoglycemics or insulin, history of alcohol abuse, and use of stimulants or anorectic products [ 8 ]. The Seizure Risk Bupropion is associated with a dose-related risk of seizure.
The Medicines Control Agency states that the incidence of seizures is one in 1,000 based on doses up to the maximum recommended daily dose of 300 mg per day [ 8 ]. The seizure risk may be reduced by taking no more than 150 mg (1 pill) at a time and, if taking two daily doses of 150 mg each (two pills per day), ensuring that doses are taken at least 8 h apart (see www.bnf.org ). Up to 24 July 2002, in the UK there were 184 reports of seizures suspected as being associated with the use of bupropion [ 8 ]. In about half of the reports, patients had a past history of seizures and/or risk factors for their occurrence. The Presenting Case In the case we have presented, the patient had no current or previous history of seizure disorders, bulimia, or anorexia nervosa. She was on the sixth day of treatment with bupropion, taking 300 mg per day in two separate doses. She had no other predisposing risk factors for seizure. In particular she was taking no prescription medications, or over-the-counter medications (such as those containing ephedrine products) that are known to lower the seizure threshold. While systemic steroids can lower the seizure threshold, systemic absorption of the nasal steroid drops the patient was taking is an extremely unlikely cause for her seizure (she was not taking a high dose of high-strength drops). Her weight (48 kg) may have been an important factor, since bupropion reaches higher plasma levels in smaller individuals—indeed clinical trials of the drug have generally excluded patients weighing under 100 lb (about 45 kg) [ 2 ]. Learning Points This case report is a useful reminder to clinicians that bupropion is associated with a dose-dependent risk of seizures. In about half of the reports of seizure associated with bupropion, there was a past history of seizures and/or risk factors for their occurrence. It is extremely important to adequately assess seizure risk before prescribing bupropion. Patients should be made aware of the possible adverse effects of the drug.
Persistent demographic differences in colorectal cancer screening utilization despite Medicare reimbursement

Background Colorectal cancer screening is widely recommended, but often under-utilized. In addition, significant demographic differences in screening utilization exist. Insurance coverage may be one factor influencing utilization of colorectal cancer screening tests. Methods We conducted a retrospective analysis of claims for outpatient services for Washington state Medicare beneficiaries in calendar year 2000. We determined the proportion of beneficiaries utilizing screening fecal occult blood tests, flexible sigmoidoscopy, colonoscopy, or double contrast barium enema in the overall population and various demographic subgroups. Multiple logistic regression analysis was used to determine the relative odds of screening in different demographic groups. Results Approximately 9.2% of beneficiaries had fecal occult blood tests, 7.2% had any colonoscopy, flexible sigmoidoscopy, or barium enema (invasive) colon tests, and 3.5% had invasive tests for screening indications. Colonoscopy accounted for 41% of all invasive tests for screening indications. Women were more likely to receive fecal occult blood test screening (OR 1.18; 95%CI 1.15, 1.21) and less likely to receive invasive tests for screening indications than men (OR 0.80, 95%CI 0.77, 0.83). Whites were more likely than other racial groups to receive any type of screening. Rural residents were more likely than urban residents to have fecal occult blood tests (OR 1.20, 95%CI 1.17, 1.23) but less likely to receive invasive tests for screening indications (OR 0.89; 95%CI 0.85, 0.93). Conclusion Reported use of fecal occult blood testing remains modest. Overall use of the more invasive tests for screening indications remains essentially unchanged, but there has been a shift toward increased use of screening colonoscopy. Significant demographic differences in screening utilization persist despite consistent insurance coverage.

Background Screening for colorectal cancer is now recommended by several organizations [ 1 - 5 ], and insurance coverage of screening tests is becoming more widespread. For example, the Centers for Medicare and Medicaid Services began reimbursement for the commonly used screening tests in 1998, covering 100% of charges for fecal occult blood tests and 80% of charges for flexible sigmoidoscopy, colonoscopy for high risk individuals, and barium enema. Coverage was extended to include colonoscopy for average risk individuals in July, 2001. Despite existing guidelines, many eligible people are not receiving screening tests according to current recommendations [ 6 - 9 ]. In 2001, only 23.5% of surveyed adults over the age of 50 had received fecal occult blood testing in the previous year, and 43.4% had received lower endoscopy in the previous 10 years [ 7 ]. However, use of screening colonoscopy may be increasing [ 10 ]. Age, race, insurance coverage, and place of residence have all been associated with utilization [ 7 , 9 , 11 - 15 ]. Although lack of insurance coverage may be one reason for under-utilization, we recently showed that the proportion of Medicare beneficiaries receiving invasive colorectal screening tests (defined as colonoscopy, flexible sigmoidoscopy, or barium enema) did not increase in 1998, immediately after introduction of Medicare coverage for these tests [ 16 ].
In a 9-month period during this year, only 6.3% of Washington state Medicare beneficiaries received fecal occult blood testing, 6.3% had any type of invasive tests, and 3.2% had invasive screening tests. The purpose of this study was to examine the effect of insurance coverage on overall utilization of screening tests and on demographic differences in screening utilization. Methods Data source The study was approved by the University of Washington Institutional Review Board. We used the calendar year 2000 Physician/Supplier Part B Standard Analytic File and the Denominator File, which are administrative databases covering Medicare beneficiaries and maintained by the Centers for Medicare and Medicaid Services. The Denominator File contains information about date of birth, gender, race, place of residence, vital status, and enrollment in Medicare Part A, Medicare Part B or capitated health plans. The Physician/Supplier Part B Standard Analytic File contains claims data for outpatient physician and supplier services, including the date of the visit, associated diagnoses (coded as International Classification of Diseases [ICD9] codes), and procedures performed (coded as Current Procedural Terminology [CPT] or common procedure [HCPCS] codes). Patient selection All Medicare beneficiaries listed as Washington State residents in the Denominator File in calendar year 2000 were eligible for inclusion (n = 772,153). Beneficiaries who were less than 65 years old (n = 132,711), who died during the study year (n = 32,678), or who were not enrolled in both parts A and B throughout the study year (n = 33,263) were excluded. We excluded patients enrolled in capitated health plans during any part of the study year (n = 170,232) because they may have received screening tests while in these plans for which claims were not submitted. Based on ICD9 codes in the Physician/Supplier Standard Analytical File, we also excluded patients with a diagnosis code for a personal history of colon polyps (V12.72, n = 683), colon or rectal cancer (V10.05 or V10.06, n = 398), or inflammatory bowel disease (555.x, 556.x, n = 227) in this calendar year, since these patients are at increased risk of colorectal cancer and may need more frequent surveillance. Patients without one of these diagnoses were analyzed as average risk. However, if patients did have a history of one of these conditions, but this code was not listed in calendar year 2000, they could have been misclassified as being at average risk. We did not exclude patients with a family history of colorectal cancer, as we felt they could not be reliably identified from the available ICD9 diagnosis codes. We had 401,961 eligible beneficiaries for analysis. Identification of screening tests We identified screening fecal occult blood tests using the HCPCS code assigned by the Centers for Medicare and Medicaid Services for this test (G0107). We also examined the use of invasive colon tests (flexible sigmoidoscopy, colonoscopy, and double contrast barium enema). However, using ICD9 codes, it can be difficult to designate a given invasive colon test as screening or diagnostic [ 17 ]. To define tests as screening indication or diagnostic indication, we used the following algorithm, similar to our previous study [ 16 ]. 
We first identified these procedures using all CPT or HCPCS codes for colonoscopy, flexible sigmoidoscopy, and barium enema (colonoscopy – 44388, 44389, 44392, 44393, 44394, 45378, 45380, 45383, 45384, 45385, G0105, G0121; sigmoidoscopy – 45300, 45305, 45308, 45309, 45315, 45320, 45330, 45331, 45333, 45338, 45339, G0104; barium enema – 74270, 74280, G0106, G0120, G0122). We then defined an invasive procedure as performed for a screening indication if: 1) the procedure was coded using the relevant HCPCS codes for screening tests; 2) ICD9 codes V76.51 (screening-malignant neoplasm-colon) or V76.41 (screening-malignant neoplasm-rectum) were associated with the procedure; or 3) there were no ICD9 diagnosis codes of gastrointestinal tract symptoms, weight loss, or anemia associated with any physician visits within the previous 3 months (abdominal pain – 787.3, 789.0x, 789.6x; altered bowel habits – 564.0, 787.x; gastrointestinal bleeding – 578.x; positive fecal occult blood test – 792.1; weight loss – 783.2; iron deficiency anemia – 280.x; anemia, unspecified – 285.9). Because of this 3-month exclusion rule, we analyzed only claims submitted between April 1, 2000 and December 31, 2000. We analyzed only the first test performed, as later tests may have been performed to evaluate abnormalities found on the initial test. Data analysis We determined the proportion of average risk beneficiaries who received screening fecal occult blood tests or who underwent flexible sigmoidoscopy, colonoscopy, or double contrast barium enema. For these tests, proportions were calculated using all tests identified by all CPT and/or HCPCS codes (all invasive tests) or only tests identified by the algorithm described above (invasive tests for screening indications). We also analyzed screening test utilization in population subgroups as defined by age, sex, race, and place of residence (urban vs. rural). We compared differences in proportions of beneficiaries undergoing screening using chi-square tests. Place of residence was defined as urban or rural depending on the health service area in which the patient lived. Rural health service areas include all ZIP codes that are closest to a rural hospital, as defined by the Washington State Department of Health. Multiple logistic regression analysis was used to determine the relative odds of screening in different demographic groups (Stata 8.0, Stata Corp., College Station, TX). Significance of the regression models was tested using the log-likelihood statistic, and the method of Hosmer and Lemeshow was used to assess goodness of fit of the regression models. Results Beneficiaries were predominantly white, and there were more females than males (Table 1 ). In the nine-month study period, 9.2% of Washington State Medicare beneficiaries had a claim submitted for screening fecal occult blood tests (Table 1 ). Fecal occult blood testing was more common in women than in men, in beneficiaries aged 70 to 74 than in other age groups, and in rural residents than in urban residents. Whites were the most likely to receive screening fecal occult blood tests, and Hispanics the least likely. These differences were all statistically significant (p < 0.001). Overall, 7.2% had any invasive test (colonoscopy, flexible sigmoidoscopy, or barium enema for diagnostic or screening indications) in the 9-month study period (Table 1 ). Utilization of invasive tests for screening indications was uncommon, occurring in only 3.5% during the nine-month study period. 
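Returning briefly to the Methods, the screening-versus-diagnostic classification described above maps cleanly onto code. The sketch below is an illustrative reconstruction, not the authors' actual program; the code lists are abbreviated to a few examples, and the claim-record layout (dictionaries with date, procedure, and diagnosis fields) is hypothetical.

```python
# Sketch of the algorithm classifying an invasive colon test as "screening".
from datetime import timedelta

SCREENING_HCPCS = {"G0104", "G0105", "G0106", "G0120", "G0121", "G0122"}
SCREENING_ICD9 = {"V76.51", "V76.41"}  # screening, malignant neoplasm of colon/rectum
# ICD9 prefixes for GI symptoms, weight loss, or anemia (abbreviated list)
SYMPTOM_PREFIXES = ("787", "789.0", "789.6", "564.0", "578", "792.1",
                    "783.2", "280", "285.9")

def is_screening(claim, prior_visits):
    """Apply the three-rule algorithm to one invasive-test claim.

    prior_visits: all physician-visit claims for the same beneficiary.
    """
    # Rule 1: billed under a dedicated screening HCPCS code.
    if SCREENING_HCPCS & set(claim["procedures"]):
        return True
    # Rule 2: a screening diagnosis is attached to the procedure itself.
    if SCREENING_ICD9 & set(claim["diagnoses"]):
        return True
    # Rule 3: no symptom, weight-loss, or anemia diagnosis at any physician
    # visit within the previous 3 months.
    window_start = claim["date"] - timedelta(days=90)
    for visit in prior_visits:
        if window_start <= visit["date"] < claim["date"]:
            if any(dx.startswith(SYMPTOM_PREFIXES) for dx in visit["diagnoses"]):
                return False
    return True
```

Under this rule set, an invasive test with no screening code still counts as screening provided no qualifying diagnosis appears at any visit in the preceding 90 days, which is why only claims from April 1, 2000 onward could be classified.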
With all invasive tests combined, men, beneficiaries aged 70 to 74, whites, and urban residents were more likely to utilize tests than women, other age groups, other racial groups, and rural residents, respectively. With all invasive tests for screening indications combined, similar demographic variation in utilization was found. Fifty-eight percent of all invasive tests and 41% of invasive tests for screening indications were colonoscopies. However, when examining utilization of colonoscopy, sigmoidoscopy, and barium enema separately, some interesting demographic differences were seen (Table 1). Men, beneficiaries aged 70 to 74, whites, and urban residents were more likely to undergo colonoscopy. Flexible sigmoidoscopy was more common in men, beneficiaries aged 65 to 69, whites, and urban residents. These differences were still present, but less pronounced, when looking at colonoscopy and flexible sigmoidoscopy for screening indications. Use of barium enema for screening was infrequent in both rural and urban patients. Although Hispanics were less likely to utilize colonoscopy and sigmoidoscopy, they were more likely to undergo barium enema than whites. We developed multiple logistic regression models to determine the relative odds of receiving screening tests in different population subgroups (Table 2). Parallel, previously published data from 1994–98 are presented for comparison [16]. These models show that women were more likely to receive screening fecal occult blood tests (odds ratio 1.18; 95% confidence interval 1.15, 1.21), but less likely to receive invasive tests for screening indications (odds ratio 0.80; 95% confidence interval 0.77, 0.83). Beneficiaries aged 75 and over were less likely to be screened than younger beneficiaries. For example, compared with beneficiaries aged 65–69, those aged 75–79 were less likely to be screened with either fecal occult blood tests (odds ratio 0.94; 95% confidence interval 0.91, 0.96) or with invasive tests (odds ratio 0.82; 95% confidence interval 0.78, 0.86). Screening utilization was also significantly lower in beneficiaries aged 80 years or older compared with those aged 65–69. Hispanics were less likely than whites to be screened with either fecal occult blood tests (odds ratio 0.30; 95% confidence interval 0.23, 0.38) or with the invasive tests (odds ratio 0.40; 95% confidence interval 0.28, 0.56). Rural residents were more likely to be screened with fecal occult blood tests (odds ratio 1.20, 95% confidence interval 1.17, 1.23), but less likely to receive invasive tests for screening indications (odds ratio 0.89; 95% confidence interval 0.85, 0.93). We developed similar multiple logistic regression models to look individually at utilization of colonoscopy, sigmoidoscopy, or barium enema for diagnostic or screening indications (Table 3). Utilization of colonoscopy and flexible sigmoidoscopy was less common in women than in men, while women were more likely to undergo barium enema (odds ratio 1.13, 95% confidence interval 1.04, 1.24). The odds of beneficiaries undergoing colonoscopy initially increased slightly with age, but then decreased at age 80 and over. The odds of undergoing sigmoidoscopy decreased with age, while the odds of undergoing barium enema increased with age. Hispanics were less likely than whites to undergo colonoscopy or sigmoidoscopy, but more likely to undergo barium enema (odds ratio 1.84, 95% confidence interval 1.22, 2.78).
Other racial groups were less likely than whites to utilize colonoscopy, flexible sigmoidoscopy or barium enema. With inclusion of only screening tests (Table 4 ), women again utilized colonoscopy and sigmoidoscopy less often than men, but utilized barium enema similarly (odds ratio for barium enema 0.98, 95% confidence interval 0.84, 1.13). Utilization of colonoscopy was relatively constant until age 80, but then declined. Again, Hispanics were less likely than whites to utilize colonoscopy or sigmoidoscopy for screening indications (odds ratio for colonoscopy 0.36; 95% confidence interval 0.21, 0.62). Urban residents were more likely than rural residents to receive colonoscopy or sigmoidoscopy for screening indications (odds ratio for colonoscopy 1.10; 95% confidence interval 1.03, 1.18), but utilized barium enema similarly. Discussion We previously showed that colorectal cancer screening tests are under-utilized and that utilization did not increase shortly after introduction of the Medicare screening benefit [ 16 ]. In this study, we extend these findings and examine the effect on utilization after 2 to 3 years of insurance coverage. Although utilization of fecal occult blood testing increased moderately between 1998 and 2000 (6.30% vs. 9.15% over 9 months, respectively), utilization of more invasive tests remained infrequent (6.26% in 1998 vs. 7.19% in 2000 receiving any invasive test; 3.17% in 1998 vs. 3.48% in 2000 receiving invasive tests for screening indications over 9 months). This was true for all demographic subgroups examined. However, there was some shift in the type of invasive procedure done, with increasing use of colonoscopy compared with flexible sigmoidoscopy and barium enema. In 2000, 58% of all invasive tests and 41% of invasive tests for screening indications were colonoscopies, compared to 47% and 35% in 1998, respectively. Medicare coverage for screening colonoscopy in average risk beneficiaries did not begin until July 2001, and therefore most colonoscopy exams during our study were likely done in high risk patients. Utilization of screening colonoscopy may have increased even further after this change in reimbursement policies to cover average risk individuals. In addition, we show that insurance coverage for screening does not eliminate disparities in screening utilization. In fact, disparities actually increased over time in some instances. Compared to 1994–1998 [ 16 ], the relative odds of any invasive testing for Hispanics versus whites actually decreased in 2000, while the effect for invasive tests for screening indications in different racial groups was mixed (Table 2 ). Disparities related to gender and place of residence were essentially unchanged between 1994–8 and 2000. These findings extend those of other studies in the general population [ 12 , 14 ], where universal insurance coverage of screening was not present, and studies of previous years in Medicare beneficiaries [ 13 , 15 ]. The precise reasons for the observed demographic disparities in screening are unknown. The sex and race-related disparities are consistent with other data showing differential use of medical services in general in these population subgroups. Screening in general was most common in beneficiaries age 65 to 74. As the potential benefit of screening decreases with age and shorter life expectancy, the age-related decrement in screening after age 75 may be clinically appropriate. 
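The odds ratios quoted above come from multiple logistic regression models fit in Stata. As a rough sketch of the model form only (not the study's actual code), the same analysis could be expressed with statsmodels, assuming a per-beneficiary DataFrame with hypothetical columns screened (0/1), sex, age_group, race and residence:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def screening_odds_ratios(df: pd.DataFrame) -> pd.DataFrame:
    """Fit screened ~ demographics and return exponentiated coefficients
    (odds ratios) with 95% confidence intervals."""
    fit = smf.logit(
        "screened ~ C(sex, Treatment('male')) + C(age_group, Treatment('65-69'))"
        " + C(race, Treatment('white')) + C(residence, Treatment('urban'))",
        data=df,
    ).fit(disp=False)
    table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
    table.columns = ["odds_ratio", "ci_lower", "ci_upper"]
    return table.drop("Intercept")   # the intercept is not an odds ratio
```

Each reference level (male, aged 65–69, white, urban) matches the comparison groups used in Tables 2–4; model fit would still need to be checked, e.g. with a Hosmer-Lemeshow-style test as the authors did.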
Regarding geographic differences, the availability of screening services, especially for the invasive and more resource-intensive tests such as colonoscopy and sigmoidoscopy, may be greater in urban than in rural areas. Fecal occult blood testing is less resource-intensive and is likely to be more available in rural areas, potentially explaining some of the geographic differences in screening. Projecting these data out over a longer period gives a more complete picture of utilization differences. For example, 3.5% of whites, 2.8% of blacks, and 1.6% of Hispanics had invasive screening tests done over the 9-month study period. Assuming constant screening rates, over a 5-year period, 23% of whites would undergo invasive tests, compared to only 19% of blacks and 11% of Hispanics. These differences are further magnified if fecal occult blood test utilization is also considered. These screening disparities may contribute to the known differences in colorectal cancer incidence and survival in different racial groups [18, 19]. This study has several limitations. First, we cannot clearly separate the effects of Medicare coverage from secular trends towards increasing utilization of screening tests. Second, we used administrative claims databases to assess health services utilization. Although the accuracy of coding for the diagnoses and procedures studied here is not established, claims coding for surgical services and procedures is fairly reliable and accurate [20-24]. Third, we analyzed data from only one state, and these results may not necessarily be generalizable to other regions. In particular, the number of minorities in this study was relatively small, and the confidence intervals for minority groups in the multiple logistic regression models were wide. However, patterns of utilization of colorectal cancer screening tests were similar in Kansas Medicare beneficiaries [9, 12]. Another study looking at national trends in colorectal tests in Medicare beneficiaries found that use of sigmoidoscopy, barium enema, and fecal occult blood testing declined over a similar period, while colonoscopy utilization increased [25]. Our study only included patients aged 65 and older, and we did not assess utilization in patients younger than this. This was a cross-sectional study, and patients who may have previously been screened with procedures that are not recommended annually would have been classified as unscreened in our analysis. Utilization of colonoscopy and colonoscopy practice patterns may have changed substantially since 2000 [26-28]. Lastly, we excluded patients who enrolled in capitated health plans, where health services utilization patterns may differ from those in traditional fee-for-service plans [29, 30]. Screening frequency should also be studied within these health plans. Although we developed an algorithm to distinguish screening from diagnostic tests, we cannot be certain that tests we designated as screening were truly intended as screening tests. We excluded patients with physician visits for gastrointestinal symptoms over the prior 3 months, but it may take longer to have a colonoscopy scheduled for these indications. This may influence our estimates of screening frequency. In our previous study, we found that 82% of procedures designated as screening from the HCPCS codes would have been classified as screening from our algorithm [16].
However, even if all colonoscopies, flexible sigmoidoscopies, or barium enemas done were intended as screening, only 7.2% of the study population would have had an invasive screening test during the 9-month study period, or 9.6% per year. Since not all invasive tests done are intended for screening, we believe that less than 7% of our study population had invasive screening tests during the 9-month study period. This study extends our previous work and shows that provision of insurance coverage of screening tests does not necessarily increase utilization of such tests in the medium term. Even 2 to 3 years after beginning universal coverage and widespread publicity about colorectal cancer screening [ 31 ], screening rates changed only modestly, and demographic differences in screening utilization remained. Thus, insurance coverage may be only one small factor affecting patients' decisions to undergo colorectal cancer screening [ 32 ]. We did find a moderate shift towards use of the most expensive test, colonoscopy, over time. It may be that the more ready availability of colonoscopy services in urban areas influences patients' or providers' decisions to use this form of screening. Conversely, female or non-white beneficiaries may be more reluctant to undergo invasive screening tests, or providers may be less likely to offer invasive tests to these subgroups. In addition, out-of-pocket costs for screening tests may still be prohibitive for some populations, affecting screening utilization. We did not have information about private insurance or indirect costs which could influence decisions about screening. These aspects of disparities in screening utilization cannot be addressed using administrative claims data. Therefore, further efforts should be made to identify and address additional barriers to and preferences about colorectal cancer screening in the general Medicare population, and especially in underserved subgroups. Conclusion Overall utilization of colorectal cancer screening tests increased only modestly 2 to 3 years after institution of Medicare coverage, but there was a shift towards screening colonoscopy and away from less invasive tests. Demographic differences in screening persisted despite consistent insurance coverage. List of abbreviations CPT: Current Procedural Terminology HCPCS: Health Care Common Procedures Coding System ICD9: International Classification of Diseases – 9 Competing interests The author(s) declare that they have no competing interests. Authors' contributions CK conceived the study, participated in data analysis and drafted the manuscript. WK participated in design and analysis of the study. LMB participated in design of the study and data analysis. All authors read and approved the final manuscript. Pre-publication history The pre-publication history for this paper can be accessed here:
544885 | Analysis of oligonucleotide array experiments with repeated measures using mixed models | Background Mixed factorial experiments with two or more factors are becoming increasingly common in microarray data analysis. In this case study, the two factors are presence (Patients with Alzheimer's disease) or absence (Control) of the disease, and brain region, olfactory bulb (OB) or cerebellum (CER). In the design considered in this manuscript, OB and CER are repeated measurements from the same subject and, hence, are correlated. It is critical to identify sources of variability in the analysis of oligonucleotide array experiments with repeated measures, and correlations among data points have to be considered. In addition, multiple testing problems are more complicated in experiments with multi-level treatments or treatment combinations. Results In this study we adopted a linear mixed model to analyze oligonucleotide array experiments with repeated measures. We first construct a generalized F test to select differentially expressed genes. The Benjamini and Hochberg (BH) procedure of controlling false discovery rate (FDR) at 5% was applied to the P values of the generalized F test. For those genes with a significant generalized F test, we then categorize them based on whether the interaction terms were significant or not at the $\alpha$-level ($\alpha_{new}$ = 0.0033) determined by the FDR procedure. Since simple effects may be examined for the genes with a significant interaction effect, we adopt the protected Fisher's least significant difference test (LSD) procedure at the level of $\alpha_{new}$ to control the family-wise error rate (FWER) for each gene examined. Conclusions A linear mixed model is appropriate for analysis of oligonucleotide array experiments with repeated measures. We constructed a generalized F test to select differentially expressed genes, and then applied a specific sequence of tests to identify factorial effects. This sequence of tests was designed to control the gene-based FWER. | Background Experiments in which subjects are assigned randomly to levels of a treatment factor (or treatment combinations of more than one factor) and then are measured for trends at several sampling times, spaces or regions (within-subject factors) are increasingly common in clinical and medical research. The analysis of interaction, main effects and simple effects is appropriate for analyzing these types of experiments [1]. Main effects are average effects of a factor, and interaction effects measure differences between the effects of one factor at different levels of the other factor. As an example, this paper studies a 2 × 2 factorial treatment design, in which effects of two factors (treatment and region, for example) are studied and each factor has only two levels (with or without a certain treatment, two different regions of the studied subjects). The measurements from different regions of a subject are repeated measures on the individual and are correlated. In combination with microarray technology [2], this type of design allows one to investigate how treatments alter changes in gene expression in time or region simultaneously across a large number of genes. Two issues are crucial in the analysis of microarray experiments with repeated measures.
Firstly, sources of variability must be identified, and the correlation structure among within-subject measurements needs to be taken into account; secondly, multiple testing is also an immediate concern if tests of interaction, main effects, and/or simple effects are performed for each gene. It has been shown that replication is the key not only to increasing the precision of estimation but also to estimating errors associated with tests of significance [3]. Previously, a number of ways to identify and model various sources of errors were proposed for replicated microarray experiments, and corresponding methods of extracting differentially expressed genes were suggested [4-8]. Recently, a linear modelling approach [9] and analysis of microarray experiments using mixed models were also introduced [10-12], in which the dependency structure of repeated measurements at the probe level was discussed. Statistical methods to analyze more complicated experiments, where correlated measurements are taken on one or more factor levels, have not yet been fully described. In this study, we modified the two-stage linear mixed models [10], and extended them to more complicated designs. Attention to the multiplicity problem in gene expression analysis has been increasing. Numerous methods are available for controlling the family-wise type I error rate (FWER) [13-17]. Since microarray experiments are frequently exploratory in nature and the sample sizes are usually small, Benjamini and Hochberg [18] suggested a potentially more powerful procedure, the false discovery rate (FDR), to control the proportion of errors among the identified differentially expressed genes. A number of studies for controlling FDR have followed [17, 19-25]. However, these approaches for dealing with the multiplicity problems in microarray experiments are largely focused on relatively simple one-way layout experimental designs, and the number of genes that are involved in an experiment was the major concern. More complicated designs, such as factorial designs with two or more factors, intensify the multiplicity problem not only because thousands of genes are involved in an experiment, but also because tests for interactions, main effects, and, possibly, simple effects need to be performed to further characterize differences for each gene. It has not been suggested explicitly, however, how to deal with such multiple-testing problems for factorial experiments with two or more factors in the microarray literature. In this paper, we present a method for analyzing oligonucleotide array experiments with repeated measures using a linear mixed model, which allows us to model variance-covariance structures associated with such complicated experiments. Our method is also related to that of Wolfinger et al., 2001, Chu et al., 2002, Kerr et al., 2000, and Wernisch et al., 2002 [5, 9-11]. In addition, we construct a generalized F test to test the null hypothesis that all the means for all Disease by Region combinations are equal. The Benjamini and Hochberg (BH) procedure of controlling FDR at 5% is applied to the P values of the generalized F tests. The test to determine whether the interaction term is significant is performed only for each gene with a significant generalized F test. In addition, simple effects are examined for the genes with a significant interaction effect, and main effects are tested for those differentially expressed genes which do not exhibit significant interaction.
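The BH step applied to the generalized F P values is simple to state in code. A minimal numpy sketch (the function name is ours, not the authors'):

```python
import numpy as np

def bh_threshold(pvals, q=0.05):
    """Benjamini-Hochberg step-up: return the largest P value declared
    significant at FDR level q (0.0 if nothing is rejected)."""
    p = np.sort(np.asarray(pvals, dtype=float))
    m = p.size
    passed = p <= q * np.arange(1, m + 1) / m   # step-up criterion p_(k) <= k*q/m
    return p[passed][-1] if passed.any() else 0.0

# With the study's per-gene P values this cut-off came out at 0.0033, which is
# then reused (as alpha_new) for the follow-up interaction and simple-effect tests.
```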
In the 2 × 2 factorial, this sequence of tests controls the maximum FWER and, hence, the FWER for all genes. We also illustrate how to summarize and categorize the interactions using simple diagrams. We demonstrate our method on the analysis of microarray data from two regions of the brain, the olfactory bulb (OB) and the cerebellum (CER), from control subjects and patients with AD. Although a 2 × 2 experiment was used in this manuscript, our methods can be extended to designs with more than 2 factors or more than 2 levels in one or more factors. The OB was used because AD patients show pronounced decrements in their olfactory sensitivity early in the clinical course of the disease [26]. The cerebellum was selected as a control tissue because it is generally considered to be minimally affected in AD. Results Analysis of gene expression in OB and CER of controls and AD patients Based on the statistical methods described (see Methods), 708 genes were considered to be significant by the procedure of controlling FDR at 5% for multiple testing across genes. The largest P-value considered to be significant was 0.0033, determined by the FDR procedure. Among the 708 genes, 137 showed significant interaction at the level of 0.0033, 49 had a significant disease effect (32 were up-regulated and 17 were down-regulated in AD patients), and 559 had significant regional effects (331 were up-regulated and 228 were down-regulated in the OB) (Table 1). There were 37 genes that appeared on both lists of significant disease and regional effects (not shown). Further validation studies, such as real-time RT-PCR, could be performed to examine which interpretation is more reasonable.

Table 1 Summary of genes with main effects

                 Disease              Region
  Direction      I          D         I            D
  # of genes     32         17        331          228
  Fold change    1.1~2.9    1.2~2.8   1.1~104.3    1.1~121.7

I: significant upregulation of gene expression; D: significant downregulation of gene expression.

A significant interaction effect for a gene has to be explained so that the gene can be further categorized based on the nature of the possible alterations of its expression levels. The interaction patterns were identified based on the change directions and test results for the following simple effects: control vs AD for OB, control vs AD for CER, CER vs OB for control, and CER vs OB for AD. The interaction effects can also be illustrated using simple diagrams by plotting together the average log2-based intensities under control and AD conditions for both OB and CER. Nonparallel lines in a diagram often imply an interaction effect. An interaction effect can be either directional or magnitudinal. In this study, directional interactions refer to the situations when the changes (in gene expression) between AD and control in OB are in the opposite direction compared to the changes between AD and control in CER. In a magnitudinal interaction, the directions of the changes between AD and control are the same but the magnitudes of the changes are significantly different. The gene LOC91614 (UniGene Cluster Hs.180545), which encodes a novel 58.3 kDa protein, is an example with a directional interaction effect. As shown in Figure 1 A1, it is significantly up-regulated 2.08 fold in the OBs of AD patients (Table 2) and significantly down-regulated 3 fold (1/0.33) in their CERs (Table 2). The function of this gene is unknown, but, based on the domains identified in its protein sequence, it is likely to be involved in intracellular signalling cascades.
Given this divergence in the direction of regulation in these 2 brain regions, this gene would be of interest for further characterization. The gene encoding the proteolytic lysosomal enzyme cathepsin H (UniGene Cluster Hs.114931) has a different pattern of interaction effects, as shown in Figure 1 A2. It was significantly up-regulated 3.51 fold in the OBs of AD patients (Table 2) and shows a slight non-significant trend toward up-regulation in their CERs (Table 2). This is consistent with the pronounced activation of lysosomal enzymes that occurs in regions of the AD brain vulnerable to neurodegeneration (Nixon et al., 2000), and with the slight increase in lysosomal density in the CER compared with the pronounced increase in sites of the AD brain with significant neuropathology (prefrontal cortex and hippocampus; [27]). The patterns for other genes with significant interaction effects were determined using the method described above.

Figure 1 Simple diagrams to illustrate significant interaction and main effects of Disease and Region. The average log-transformed intensities under control and AD conditions for both OB and CER were plotted together for each gene with either significant interaction or main effects. The two points from each region were connected using a straight line, and non-parallel lines imply interaction. Two example genes with interaction effects are shown in A: A1 represents a directional interaction and A2 an interaction in magnitude. Two genes with only a main effect of disease are illustrated in B, one of which showed down-regulation in AD (B1), while the other was upregulated in AD in both OB and CER (B2). In the bottom panel, two genes with only regional differences are shown: the gene in C1 has a higher expression level in the CER, and the gene in C2 shows the opposite pattern. See also Tables 2 and 3.

Table 2 Example of genes with significant interaction effects

                       LOC91614                              Cathepsin H
                       OB                 CER                OB                    CER
  Con (replicates)     7.02, 6.61, 6.58   6.78, 7.09, 6.83   10.53, 10.63, 10.49   9.46, 9.71, 9.42
  Con (mean ± SD)      6.74 ± 0.25        6.90 ± 0.17        10.55 ± 0.07          9.53 ± 0.16
  AD (replicates)      7.58, 7.59, 8.21   5.30, 5.03, 5.58   12.30, 12.41, 12.37   9.54, 9.82, 9.79
  AD (mean ± SD)       7.80 ± 0.36        5.30 ± 0.28        12.36 ± 0.06          9.72 ± 0.15
  Overall P            0.00058                               2.12e-06
  Interaction P        0.00036                               5.86e-06
  ConOB vs ADOB        Re +; fold 2.08; dir I                Re +; fold 3.51; dir I
  ConCER vs ADCER      Re +; fold 0.33; dir D                Re +; fold 1.14; dir N
  ConOB vs ConCER      Re -; fold 0.90; dir N                Re +; fold 2.03; dir I
  ADOB vs ADCER        Re +; fold 5.66; dir I                Re +; fold 6.23; dir I

ConOB vs ADOB: control vs AD for OB; ConCER vs ADCER: control vs AD for CER; ConOB vs ConCER: OB vs CER for control subjects; ADOB vs ADCER: OB vs CER for AD patients. Overall P: the P value of the generalized F test. Interaction P: the P value of the interaction term. Re: the result of the protected Fisher's LSD procedure, where "+" indicates a significant difference and "-" implies a non-significant difference; dir: the direction of alteration in gene expression levels, D for decrease, I for increase and N for no change in AD when comparing control vs AD, or in OB when comparing CER vs OB; fold: fold change of each pairwise comparison calculated from the inverse-transformed log2-based data. The fold change of LOC91614 gene expression between AD and control in OB was calculated as $2^{(7.58+7.59+8.21)/3} / 2^{(7.02+6.61+6.58)/3} = 2.08$. The fold changes in other situations were calculated in a similar way. Similar notations were also used in Table 3. See also Figure 1.

In the absence of interaction effects, main effects are often meaningful.
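As an aside before turning to main effects, the directional-versus-magnitudinal categorisation and the fold-change arithmetic of Table 2 are easy to mechanise. A toy sketch follows; the inputs are mean log2 intensities plus the LSD significance flags, and the function is illustrative, not the authors' code:

```python
def classify_interaction(con_ob, ad_ob, con_cer, ad_cer,
                         sig_ob: bool, sig_cer: bool) -> str:
    """Categorise a significant interaction from the two disease simple effects."""
    d_ob, d_cer = ad_ob - con_ob, ad_cer - con_cer   # AD minus control, log2 scale
    if sig_ob and sig_cer and d_ob * d_cer < 0:
        return "directional"      # opposite directions in OB and CER
    return "magnitudinal"         # same direction, or change in only one region

fold = lambda diff: 2 ** diff     # log2 difference -> fold change

print(classify_interaction(6.74, 7.80, 6.90, 5.30, True, True))  # LOC91614: directional
print(round(fold(7.80 - 6.74), 2))                               # 2.08, as in Table 2
```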
The genes that have a significant main effect of either Disease or Region were also identified and characterized by examining the average difference between AD and controls or the average difference between OB and CER. Main effects can also be illustrated by the simple diagrams described above, in which the lines are often parallel. Four genes were used as examples to illustrate the main effect of Disease and the main effect of Region (Table 3, Figure 1).

Table 3 Example of genes with significant main effects

  HMGN2   Overall P = 0.0003; Interaction P = 0.6462; Disease: P = 0.0009*, fold 0.63, dir D; Region: P = 0.011†, fold 1.24, dir N
          Con  OB 12.68, 12.73, 12.83 (12.75 ± 0.08)   CER 12.37, 12.34, 12.53 (12.41 ± 0.10)
          AD   OB 12.01, 11.94, 12.16 (12.04 ± 0.11)   CER 11.53, 12.04, 11.81 (11.79 ± 0.26)
  TSG101  Overall P = 0.0002; Interaction P = 0.0726; Disease: P = 0.0021*, fold 1.51, dir I; Region: P = 0.056, fold 1.42, dir N
          Con  OB 10.44, 10.22, 10.26 (10.31 ± 0.12)   CER 10.09, 10.16, 9.80 (10.02 ± 0.19)
          AD   OB 11.20, 11.01, 10.96 (11.06 ± 0.13)   CER 10.30, 10.59, 10.34 (10.41 ± 0.16)
  RELN    Overall P = 0.0025; Interaction P = 0.8328; Disease: P = 0.5004, fold 0.92, dir N; Region: P = 0.0005*, fold 0.37, dir D
          Con  OB 11.49, 10.91, 10.83 (11.08 ± 0.36)   CER 12.56, 12.47, 12.62 (12.55 ± 0.08)
          AD   OB 11.33, 11.00, 10.66 (11.00 ± 0.34)   CER 12.45, 12.35, 12.41 (12.41 ± 0.06)
  B2M     Overall P = 0.0011; Interaction P = 0.3767; Disease: P = 0.5986, fold 0.94, dir N; Region: P = 0.0002*, fold 2.20, dir I
          Con  OB 14.01, 14.54, 13.89 (14.15 ± 0.35)   CER 13.21, 13.34, 12.74 (13.10 ± 0.32)
          AD   OB 14.03, 13.84, 14.47 (14.11 ± 0.32)   CER 12.70, 12.88, 13.06 (12.88 ± 0.18)

* indicates that the P value passes the FDR 5% criterion. † indicates a P value smaller than 0.05 but larger than the critical value 0.0033 determined by the FDR procedure. Overall P and Interaction P are as defined in Table 2. P values for the main effects of disease and region are from Type III ANOVA tests using the proc mixed procedure in SAS. Fold for the main effect of disease: the ratio of the average intensities of AD (averaged over OB and CER) over Control (averaged over OB and CER). Fold for the main effect of region: the ratio of the average intensities of OB (averaged over Control and AD) over CER (averaged over Control and AD). See also Figure 1.

The genes HMGN2 and TSG101 both have a significant effect of Disease (Table 3). HMGN2 (high mobility group nucleosomal binding protein 2) was significantly down-regulated (1.6 fold; p = 0.0009) in the OBs of AD patients compared to elderly non-demented controls; there was no significant difference in mean expression levels in the OB and CER, as shown in Figure 1 B1 and B2. Its down-regulation in the OBs of AD patients is consistent with the generally reduced level of gene expression that has been described in the AD brain [28]. The gene TSG101 was up-regulated 1.51 fold (p = 0.0021), with no significant differences in expression levels in the OB and CER. The encoded protein is a member of the mammalian class E vps proteins, which mediate ubiquitination-dependent receptor sorting within the endosomal pathway. The up-regulation of TSG101 suggests a potential disruption of OB neurogenesis. Two examples of genes with Regional effects are RELN and B2M (Table 3). RELN is expressed at lower levels in the OB than in the CER (2.7 fold, p < 0.0005), as shown in Figure 1 C1 and C2. The encoded protein is a secreted extracellular matrix molecule that interacts with integrin signalling to generate a signal for migratory developing neurons to stop and form layers; thus, a defect in this gene results in improper development of the cerebellum as well as other brain regions [29].
B2M, the gene encoding β2 microglobulin, is expressed at 2.2-fold higher levels in the OB than in the CER (p < 0.0002). One potential explanation for the higher levels of B2M expression in the OB than the CER is that antigens can enter the brain directly along the pathway provided by the axon of the olfactory receptor neuron or within the sheath of the olfactory nerve; numerous proteins and pathogens enter the brain via this route (e.g., [30, 31]). The potentially higher level of antigenic stimulation in the OB may result in the up-regulation of B2M expression, which would not occur in the CER due to the lack of such a direct connection with the external environment. The remaining genes with either significant effects of Disease or Region were also identified and categorized in a similar way and summarized in Table 1. Discussion In this study, we adopted a linear mixed model to analyze oligonucleotide array experiments with repeated measures. We constructed a generalized F test to select differentially expressed genes and compared our method to another frequently used approach. Using the method described above, we identified 708 differentially expressed genes, 137 of which had a significant interaction, while 571 had a main effect of either Disease or Region. Using simple diagrams, we can illustrate and further categorize the interactions and main effects. This linear mixed model approach allows us to identify various sources of variability, including experimental effects, random effects of subjects and random error. The performance of the generalized F statistic depends on the validity of the assumed covariance structures and the degree of replication. We assumed homoscedastic variances for each gene. This may not be true for all genes in reality. With small sample sizes, which are common in microarray studies, simpler covariance structures which require the estimation of fewer variance components are preferred. Simulation studies showed that, with a sample size of 3, the generalized F test performs reasonably well in cases with homoscedastic variances. We also tested the factorial effects on the 708 genes identified by the BH procedure using the more conservative Bonferroni adjustment to the α-level, in order to simultaneously control FDR and account for the multiple tests performed for the factorial effects. For example, controlling FDR at 0.05/3 = 1.67% produced a list of 77 genes. The method we developed is more powerful. In addition, the alternative method identified only regional effects, with no significant interactions or main effects of disease. In this manuscript, we adopt the BH procedure to control FDR at 5% based on the generalized F tests. Any other standard multiple testing procedure may also be applied. A specific sequence of tests was used to identify factorial effects and control the gene-based FWER in our study. For researchers who are interested in all pairwise comparisons among treatment groups, Hayter's modification of the LSD method [32] controls the FWER for all genes. We also assumed independence of the significance tests among genes. This assumption, which is also adopted in the majority of the microarray literature, may not be completely valid since gene expression is tightly regulated. The correlation among genes varies across developmental stages, from tissue to tissue, etc., and we may never be able to quantify it precisely.
The assumption that genes are correlated in small clusters has been adopted by Benjamini and Yekutieli [21] in their FDR control study. This assumption, however, has not been completely verified. Conclusions A linear mixed model is appropriate for analysis of oligonucleotide array experiments with repeated measures, allowing us to quantify various sources of error. We constructed a generalized F test to select differentially expressed genes, and then applied a specific sequence of tests to identify factorial effects. This sequence of tests was designed to control the gene-based FWER. Our methods can be extended to designs with more than 2 factors or more than 2 levels in one or more factors. The generalized F test can be constructed for any number of factors or levels of factors. Methods Sources and processing of tissue OBs were obtained with appropriate informed consent from patients with Alzheimer's disease (AD) and control subjects enrolled in the Biologically Resilient Adults in Neurological Studies (BRAiNS) project of the Sanders-Brown Center on Aging. At autopsy, OBs and pieces of the lateral tip of the cerebellums were removed from 6 females, 3 with AD (mean age, 79.0 years; mean postmortem interval 3.8 h) and 3 controls (mean age, 78.6 years; mean postmortem interval 2.9 h), and immediately placed in liquid nitrogen. BRAiNS control subjects had no clinical evidence of dementia or other neurological problems and scored within the normal range on yearly mental status tests; on neuropathological examination, their brains exhibited age-related but not disease-related changes. AD patients received a diagnosis of probable AD in the Memory Disorders Clinic; on neuropathological examination, their brains met multiple criteria for definite AD and exhibited no indications of complications from cerebrovascular disease [33]. OBs and cerebellum were homogenized in TRI-Reagent (Molecular Research, Inc., Cincinnati, OH), and total RNA was extracted according to the manufacturer's protocol. RNA concentration was determined spectrophotometrically; its integrity and quality were assessed by spectrophotometry, agarose gel electrophoresis, and Bioanalyzer (Agilent Technologies, Wilmington, DE) virtual gels. Following target preparation, the samples were hybridized onto the Affymetrix Human Genome U133_A and _B GeneChips at the University of Kentucky Microarray Core Facility according to Affymetrix protocols. Experimental design OBs and pieces of the lateral tip of the cerebellums were previously removed from each of 3 control subjects and 3 patients with AD (all female, with similar ages). Total RNA was extracted from OB and CER tissues for each subject. Five μg of RNA from the OB and CER of each individual were hybridized with Affymetrix Human Genome U133_A and _B chips (2 GeneChips/tissue/individual = 24 GeneChips). Data from U133_A and _B chips for each RNA sample were combined to give 12 data sets with signal intensities for 44828 targets. Under the assumption of independence among genes, we have a 2 × 2 factorial design for each gene, with one factor being either control or AD and with repeated measures (regions, OB or CER) on each subject. The arrangement for the 2 × 2 mixed factorial design in this experiment is shown in Table 4, where $\mu_{11}$, $\mu_{12}$, $\mu_{21}$, and $\mu_{22}$ denote the average log2-based expression levels measured in the OB of controls, CER of controls, OB of AD patients and CER of AD patients, respectively.
Corresponding measurements from the same subject are correlated (indicated by matching colors in the original presentation of Table 4). Our primary interests are to identify various sources of variability and differentially expressed genes.

Table 4 The arrangement for the 2 × 2 factorial design with repeated measures

             OB            CER
  Control    $\mu_{11}$    $\mu_{12}$
  AD         $\mu_{21}$    $\mu_{22}$

$\mu_{11}$, $\mu_{12}$, $\mu_{21}$, and $\mu_{22}$ are the true means of measurements in the OB of controls, CER of controls, OB of AD patients and CER of AD patients, respectively.

Data preparation Normalization Background correction and initial total intensity normalization were first performed for the microarray raw data using Affymetrix Version 5 software [34], resulting in gene intensities for each gene-chip combination. The log intensity values were used in later processing. We chose the local regression method (loess) [35-37] to normalize the chips within each of the four treatment combinations. The total intensity method was performed to normalize arrays across treatment combinations. Data Filtering In our study, all positive control genes and genes that resulted in an "absent" call for all chips were removed from further analysis. If there was no evidence that these genes were expressed in any of the samples, then these genes can be removed to reduce problems associated with multiple comparisons. Other methods of removing low intensity points were also suggested by Bolstad et al., 2003 [37]. All ESTs were also removed from the analysis. Since the primary interest of these experiments is to identify known genes that are differentially regulated, eliminating ESTs will further reduce problems with multiple comparisons. After the data filtering steps, 10,590 genes remained, and the base-2 logarithms of the background-corrected and normalized intensities of these genes were subject to further statistical analyses. Algorithm and analysis Analysis of variance components We use a linear mixed model to describe the experiment. Let $Y_{gijk}$ be the base-2 logarithm of the background-corrected and normalized intensity of the $g$th gene, $g = 1, \ldots, 10590$, in the $i$th Treatment group, $i = 1, 2$, from the $j$th Region, $j = 1, 2$, on the $k$th subject, $k = 1, 2, 3$. "Treatment" here signifies the health condition of the subjects (controls or AD patients). A complete linear mixed model for this experiment is

$Y_{gijk} = \mu + D_i + S_{ik} + R_j + (DR)_{ij} + A_{ijk} + G_g + (GD)_{gi} + (GR)_{gj} + (GDR)_{gij} + \varepsilon_{gijk}$, (1)

where $\mu$ is the grand mean, $D_i$ and $R_j$ are the main effects of treatments and regions respectively, and $(DR)_{ij}$ are the treatment-region interaction effects. Here $S_{ik}$ are the random effects of subjects within disease group and $A_{ijk}$ are the random effects of chips. The symbols $G_g$, $(GD)_{gi}$, $(GR)_{gj}$ and $(GDR)_{gij}$ represent the main effect of gene, gene-treatment interaction effects, gene-region interaction effects, and gene-treatment-region interaction effects, while $\varepsilon_{gijk}$ are the additive stochastic errors. In general, it is impractical, using currently available software, to fit linear models such as (1) with microarray data, since this involves manipulation of the full covariance matrix of the observations, which usually contains thousands of levels. To be conceptually and computationally more efficient, Wolfinger et al., 2001 [10] suggested a two-step model to separate experiment-wise systematic effects (normalization sub-model) and the remaining effects for each gene (gene sub-model).
In our case, however, the design matrix for the fixed effects of $D_i$, $R_j$ and $(DR)_{ij}$ is orthogonal to the design matrix for the fixed effects involving each gene, including $G_g$, $(GD)_{gi}$, $(GR)_{gj}$ and $(GDR)_{gij}$. Therefore, the normalization model has no effect on the inference for each gene under the assumption in (1). A simpler model can be adopted for each gene, with the random effect $S_{ik}$ absorbed into the $S_{gik}$ terms and $A_{ijk}$ absorbed into the $\varepsilon_{gijk}$ terms. We make the standard stochastic assumptions that the random effects $S_{gik}$ and $\varepsilon_{gijk}$ are normally distributed with zero means and variances $\sigma_{gs}^2$ and $\sigma_g^2$, respectively. These random effects are assumed to be independent across their indices. The model equation then becomes

$Y_{gijk} = \mu_g + D_{gi} + R_{gj} + (DR)_{gij} + S_{gik} + \varepsilon_{gijk}$. (2)

In matrix notation, the model equation for each gene can be written

$Y = X\beta + Zu + \varepsilon$, (3)

where $Y$ is a vector of observations, $X$ and $Z$ are matrices of known constants for the fixed effects and random effects, respectively, $\beta$ is a vector containing the fixed effect parameters $D_{gi}$, $R_{gj}$, and $(DR)_{gij}$, $u$ is a vector of random effects, and $\varepsilon$ is the error or residual vector. Therefore, $Y \sim MVN(X\beta, V)$, where $V = ZDZ' + \Sigma$. The covariance matrices $D = \mathrm{var}(u)$ and $\Sigma = \mathrm{var}(\varepsilon)$ can have any valid variance-covariance matrix form. The variances of the gene-specific subject effects $S$ can vary for different treatments and different genes, while the $\varepsilon$ effects can have different variances for different treatments, regions and genes. The remaining terms are fixed effects. All effects and variance components in the model can be estimated using the method of restricted maximum likelihood (REML) [38]. In the homogeneous variance case assumed here, since observations across subjects are independent, the variance-covariance matrix for gene g, $V_g$, is block diagonal, $V_g = \mathrm{diag}(\Sigma_g)$, with each within-subject block

$\Sigma_g = \begin{pmatrix} \sigma_{gs}^2 + \sigma_g^2 & \sigma_{gs}^2 \\ \sigma_{gs}^2 & \sigma_{gs}^2 + \sigma_g^2 \end{pmatrix}$,

which follows from the two regional measurements on a subject sharing the random effect $S_{gik}$. If the assumption of homoscedasticity is not viable, the variance-covariance for gene g can easily be accommodated by allowing $\Sigma_g$ to vary across disease groups. Estimation of model parameters The estimate of primary interest is $\beta$, which contains the treatment and region effects and the treatment-region interaction for each gene. For each gene, $\beta$ is estimated by

$\hat{\beta} = (X'\hat{V}^{-1}X)^{-1} X'\hat{V}^{-1} Y$. (4)

The estimate $\hat{\beta}$ has covariance

$\mathrm{var}(\hat{\beta}) = (X'V^{-1}X)^{-1}$, (5)

where in practice the components of $V$ are replaced by their REML estimates. See Verbeke and Molenberghs (2000) [1] for methods to derive equations (3)–(5) and REML estimates of the random components in detail. Construct a generalized F test Genes showing significant interaction effects are defined as those in which the difference in expression levels between control and AD is not the same as the difference between OB and CER. Main effects are meaningful in the absence of an interaction effect. Genes showing a significant disease-related effect, or main effect of disease, are defined as those either under- or over-expressed in AD patients compared to controls to the same extent in both OB and CER, while genes with a significant main effect of region are those either under- or over-expressed in OB compared to CER to the same extent by both AD patients and controls. If the expression levels for a gene are the same across all treatment-region combinations, then there will be neither significant interaction nor main effects; therefore this gene should be excluded from further analysis. The expression of other genes may be altered by treatment and/or region effects, and further analysis of these genes is needed to characterize the experimental effects.
Therefore the first step to select differentially expressed genes in factorial designs is to choose those for each of which the hypothesis of equality of all cell means, $\mu_{11} = \mu_{12} = \mu_{21} = \mu_{22}$, is rejected. Because of the specific variance-covariance structure for a repeated measures experiment with two levels of the within-subject factor, it is convenient to test the equivalent composite hypothesis for each gene g, stated in terms of the main effects and the interaction. Specifically, we consider the composite null hypothesis $H_0$: $(DR)_{gij} = 0$, $D_{gi} = 0$ and $R_{gj} = 0$, which is equivalent to the equality of the four cell means. We can test this composite null hypothesis of no interaction and no main effects simultaneously by setting up the 3 corresponding linear contrasts listed in Table 5. A contrast is a linear combination of parameters for which the coefficients sum to zero [39]. Let L be the 3 × 8 matrix containing the coefficients of the 3 contrasts; then $H_0$ simplifies to $L\beta = 0$, where $L\beta$ is estimable, and can be tested using the generalized F test

$F = (L\hat{\beta})' \left[ L(X'\hat{V}^{-1}X)^{-1}L' \right]^{-1} (L\hat{\beta}) / \mathrm{rank}(L)$.

Table 5 Setting up the hypotheses using linear contrasts

  $H_0$              In terms of means                                        Coefficients for ($D_{g1}$, $D_{g2}$, $R_{g1}$, $R_{g2}$, $(DR)_{g11}$, $(DR)_{g12}$, $(DR)_{g21}$, $(DR)_{g22}$)
  $(DR)_{gij} = 0$   $(\mu_{21} - \mu_{11}) - (\mu_{22} - \mu_{12}) = 0$      0, 0, 0, 0, -1, 1, 1, -1
  $D_{gi} = 0$       $(\mu_{11} + \mu_{12}) - (\mu_{21} + \mu_{22}) = 0$      2, -2, 0, 0, 1, 1, -1, -1
  $R_{gj} = 0$       $(\mu_{11} + \mu_{21}) - (\mu_{12} + \mu_{22}) = 0$      0, 0, 2, -2, 1, -1, 1, -1

The hypotheses are listed in terms of model parameters and means. The coefficients for the model parameters of the linear contrasts were determined for the corresponding hypotheses.

Under $H_0$, the generalized F is distributed approximately as Snedecor's F with degrees of freedom $\mathrm{rank}(L)$ and $\nu$ ($F_{[\mathrm{rank}(L), \nu]}$). Since the variance-covariance matrix $V$ satisfies a compound symmetry condition, in our example this statistic is distributed as $F_{[3, 4]}$. Under other assumptions about the variance-covariance structure, the denominator degrees of freedom $\nu$ can be approximated by the degrees of freedom used to estimate $L(X'V^{-1}X)^{-1}L'$ via Satterthwaite's procedure [38, 40]. Details about how to select appropriate covariance structures were discussed by Littell et al. (1996) [38] and Keselman et al. (1998) [41]. Adjustment for multiple tests Multiple testing problems in microarray experiments with factorial designs are at least two-fold. Usually, hypothesis tests are performed for each of the thousands of genes involved, and tests of main effects and interactions may also be needed for each gene. Based on the generalized F test we constructed above, we now suggest a method for adjusting multiple tests. The most commonly used methods to adjust multiple tests are those controlling either FWER or FDR. These methods are first applied to the P-values from the generalized F tests, providing a list of genes that exhibit significant differences among the four cell means of the Disease by Region combinations. Some of these genes may have significant interactions, or only the main effects of treatment and/or region may be significant. Further characterizing the significant interactions is one of the major interests for researchers, and methods for investigating interaction contrasts are available [42-44]. In our study, simple effects were examined for the genes that have a significant interaction, to detect the differences between specific comparisons. Protected by the generalized F test, Fisher's least significant difference (LSD) method can be used to test the necessary simple effects. Here the appropriate error terms for these simple effects depend on whether or not the comparisons involve measurements from the same Disease group.
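Pulling the modelling and testing steps together, a hedged per-gene sketch in Python follows. The column names are assumptions, statsmodels stands in for the SAS proc mixed fit used by the authors, and slicing the fixed-effect block out of cov_params() assumes the fixed effects are ordered first in the parameter vector:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def generalized_F(b: np.ndarray, V: np.ndarray, L: np.ndarray) -> float:
    """Wald-type F for H0: L b = 0; refer to F(rank(L), nu) -- F(3, 4) here."""
    Lb = L @ b
    return float(Lb @ np.linalg.solve(L @ V @ L.T, Lb) / np.linalg.matrix_rank(L))

def test_gene(gene_df: pd.DataFrame) -> float:
    """gene_df: one row per chip, columns log2_intensity, disease, region, subject."""
    fit = smf.mixedlm("log2_intensity ~ disease * region",   # model (2), REML fit
                      data=gene_df, groups=gene_df["subject"]).fit(reml=True)
    b = np.asarray(fit.fe_params)                     # intercept + 3 effect estimates
    V = np.asarray(fit.cov_params())[:len(b), :len(b)]
    L = np.eye(len(b))[1:]                            # drop the intercept row: under
    return generalized_F(b, V, L)                     # treatment coding, equal cell
                                                      # means <=> these 3 coefficients = 0
```

Under the default treatment (dummy) coding the three non-intercept coefficients play the role of the interaction and two main-effect contrasts, so testing them jointly is equivalent to the composite hypothesis above.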
The sequence of tests proposed in this paper is more powerful, while still allowing for the control of FWER or FDR, compared to directly adjusting P-values using the BH procedure with a Bonferroni correction. In the latter method, if we control the overall FDR at 5%, we would perform the BH procedure at a level of 1.67% (0.05/3) for each test of interaction, main effect of disease, or main effect of region. Recipe of the analysis A short summary of the statistical methods used in this study follows: 1. Linear mixed models were used to describe the data based on the experimental design and some common assumptions, and the variance components were specified. 2. For each gene, a generalized F test was performed based on the described model, and the corresponding P-value was obtained. 3. To adjust the multiple tests for the number of genes, the BH method of controlling FDR [18] at 5% was applied to the P-values obtained above, providing a list of genes (list I) that exhibit significant differences among the means of the Disease*Region combinations. 4. Using $\alpha_{new}$, which equals the largest P-value considered significant in step 3, as the cut-off point, we chose genes with significant interactions (list II) from list I and, for genes in list II, tested the simple effects. By complete enumeration of all possible combinations of main effects and the interaction effect, one can prove that $\alpha_{new}$ is an appropriate choice to control the FWERs while selecting genes with either significant interaction or main effects in 2 × 2 factorial experiments. From the remaining genes, significant main effects of either disease or region (list III) were selected. In the example used in this study, $\alpha_{new} = 0.0033$. Statistical software Data normalization and generation of simulated data were performed using S-plus version 6.1. We used the SAS (version 9.0) proc mixed procedure for model fitting and significance analysis. The SAS program implementing linear mixed models for the AD data is available on request from the first author. Simulation studies We constructed a generalized F test to select differentially expressed genes (see Methods). To assess the performance of the constructed generalized F test with small sample sizes, we performed simulation studies. Since expression levels of genes in the OB or CER from an individual (either a control or a patient with AD) are considered to be repeated measures, correlated data should be generated for the simulations. First, we studied the case (case I) with an equal variance and covariance structure for each individual subject (control or AD patient). We generated 10,000 sets of correlated data; each set has 6 bivariate observations, with mean 20 and the following covariance structure for each subject under either Disease condition (i = 1 for control, and i = 2 for AD; j or j' = 1 for OB, and j or j' = 2 for CER; k = 1, 2, 3): where $Y_{gijk}$ and $Y_{gij'k}$ are measurements from the $j$th and $j'$th levels of Region for the $k$th subject in the $i$th level of Disease for gene g. The generalized F-statistics were computed for each of the 10,000 data sets and the histogram of the generalized F-statistics was compared with that of randomly generated F values from an $F_{[3, 4]}$ distribution, as shown in Figure 2A. The histogram of the generalized F-statistics has a slightly larger tail. The proportion of the generalized F-statistics that were larger than the critical value, $F_{[3, 4, \alpha = 0.05]} = 6.59$, was 5.12%, instead of the nominal 5%; by comparison, 4.97% of the randomly generated F values exceeded 6.59.
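The case I setup can be sketched as follows. Because the paper's covariance matrix did not survive extraction, the variance of 1.0 and covariance of 0.5 below are placeholders only, and the per-dataset mixed-model fits that would yield the F statistics are omitted:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])        # placeholder (OB, CER) within-subject covariance

# 10,000 simulated datasets x 6 subjects x one (OB, CER) pair, mean 20 throughout.
data = rng.multivariate_normal(mean=[20.0, 20.0], cov=Sigma, size=(10_000, 6))

# Each dataset would then be run through the per-gene mixed-model fit to get
# one generalized F statistic; the null reference point for the comparison is:
print(round(stats.f.ppf(0.95, dfn=3, dfd=4), 2))    # 6.59, the critical value quoted
```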
Figure 2 Histograms of simulated F statistics. The histograms of the F statistics from $F_{[3, 4]}$ (grey in A, B), from simulated data with the same covariance structure among individuals (cyan, case I in A), or with unequal variances for subjects from controls and AD patients (blue, case II in B). Case I has a slightly larger tail than the randomly generated F values, and the right tail of case II is thicker than both of the cases above. More complicated variance and covariance structures can also be assumed. For example, the controls and AD patients may have different covariance matrices. We then generated simulated data to study cases like this (case II). Using the same covariance structure as above, we generated 10,000 sets of data for controls. For AD, we generated 10,000 sets of data using a different covariance structure. Then we computed the generalized F-statistics and compared them with the randomly generated F values described above (Figure 2B). The histogram of the generalized F-statistics has a slightly larger tail than those of both the randomly generated F values and case I. In case II, 5.42% of the generalized F-statistics were larger than the critical value 6.59. With the small sample size in both cases (n = 3), the constructed generalized F-statistic behaves reasonably well. Authors' contributions HL carried out the study. AJS and CLW supervised the study. TVG and MLG carried out the molecular genetics studies. All authors contributed to the writing of this manuscript. All authors read and approved the final manuscript.
529314 | State-Dependent Decisions Cause Apparent Violations of Rationality in Animal Choice | Normative models of choice in economics and biology usually expect preferences to be consistent across contexts, or “rational” in economic language. Following a large body of literature reporting economically irrational behaviour in humans, breaches of rationality by animals have also been recently described. If proven systematic, these findings would challenge long-standing biological approaches to behavioural theorising, and suggest that cognitive processes similar to those claimed to cause irrationality in humans can also hinder optimality approaches to modelling animal preferences. Critical differences between human and animal experiments have not, however, been sufficiently acknowledged. While humans can be instructed conceptually about the choice problem, animals need to be trained by repeated exposure to all contingencies. This exposure often leads to differences in state between treatments, hence changing choices while preserving rationality. We report experiments with European starlings demonstrating that apparent breaches of rationality can result from state-dependence. We show that adding an inferior alternative to a choice set (a “decoy”) affects choices, an effect previously interpreted as indicating irrationality. However, these effects appear and disappear depending on whether state differences between choice contexts are present or not. These results open the possibility that some expressions of maladaptive behaviour are due to oversights in the migration of ideas between economics and biology, and suggest that key differences between human and nonhuman research must be recognised if ideas are to safely travel between these fields. | Introduction The study of animal behaviour has often incorporated concepts from economic theory. This was the case, for instance, with the introduction of game theory to the study of animal conflict ( Maynard-Smith and Price 1973 ; Maynard-Smith 1974 ). Similarly, optimal foraging theory ( Charnov 1976 ; Stephens and Krebs 1986 ) was based on viewing animals as maximisers, with utility often being replaced by rate of energy gain as a proxy for Darwinian fitness, and natural selection playing the role of the short-sighted architect of the decision mechanisms followed by individuals. The foundation for this migration of ideas between fields is the notion that optimal choice is defined by the value of the consequences of each option, and that this value is jointly determined by the option's properties and the chooser's state. This is clear within models, but presents considerable difficulties for empirical tests, and we address some of these problems in this paper. One consequence of expecting individuals to behave as if they maximised the expected value of a particular function (say, inclusive fitness) is captured in the economic concept of rationality. Since “rationality” is used with very different meanings in different fields (see Kacelnik [2004] for a discussion of rationality and its meanings), it is important to point out that here we will use the term only in its economic sense. Rationality, in this restricted sense, encapsulates several principles that are necessary conditions for the existence of a scale of value consistent across contexts ( Mas-Collel et al. 1995 ). Transitivity, for instance, is a hallmark of rational choice theories. It states that if “a” is preferred to “b”, and “b” to “c”, then “a” should also be preferred over “c”. 
If, say, “c” were to be preferred to “a”, it would not be possible to place the three options on an ordinal scale. Another principle included in economic rationality is that of independence from irrelevant alternatives (IIA; Arrow 1951 ), namely, the expectation that preference between a pair of options should be independent of the presence of inferior alternatives. There are different versions of IIA, depending on how demandingly one defines “preference”. A strong probabilistic version, known as the “constant-ratio rule” ( Luce 1959 ), states that the relative proportion of choices made between two options should be the same (as opposed to merely maintaining the same order), regardless of whether they are on their own (binary choices) or in the presence of a third (less preferred) option (trinary choices). A weaker version, known as “regularity”, states that rationality is violated if the proportion of choices for any preexisting option is increased after the addition of a new alternative to the choice set ( Luce and Suppes 1965 ). Breaches of rationality are well documented in observational or experimental studies on human choice ( Tversky 1969 ; Huber et al. 1982 ; Payne et al. 1992 ; Simonson and Tversky 1992 ; Tversky and Simonson 1993 ; Wedell and Pettibone 1996 ; Gigerenzer et al. 1999 ), and have forced a reinterpretation of much of the existing data and models. In many studies, these violations are taken to imply context-dependent valuation, namely the notion that the (subjective) value of each option is not determined only by its properties and consequences, but instead is constructed at the moment of choice as a function of the number and nature of other options available—a finding used, for example, in marketing and political campaigning for manipulating consumer preferences through the strategic presentation of products and candidates. An alternative view ( Kacelnik and Krebs 1997 ; Gigerenzer et al. 1999 ) is that although these mechanisms can cause costly choices, they (the mechanisms) are evolutionarily and/or ecologically rational, meaning that on average in the environment where they evolved or were individually acquired they generate stochastically optimal outcomes. Whichever the interpretation, however, locally costly deviations from rationality do occur, and can offer significant insights in the development of theoretical models of decision-making. A number of psychological mechanisms have been proposed to explain the effect of inferior alternatives on choice and other examples of irrationality. According to them, the observed failure to exhibit consistent preferences across contexts would be attributable to the dependence of the information-processing mechanisms used by individuals, or of the heuristics used for making choices, on the nature of the choice problem and available alternatives ( Shafir et al. 1989 ; Wedell 1991 ; Payne et al. 1992 ). While normative microeconomic theory is independent of process and focuses on revealed preferences, these developments relate the theory to cognition and give weight to the process by which agents reach decisions ( Kahneman and Tversky 2000 ). It is worth remembering that consistency of preference is accepted by all parties to be only relevant when constancy in the state of the subjects and in the properties of the options is assumed. 
A subject that prefers lamb to ice cream before dinner, ice cream to coffee immediately after dinner, and coffee to lamb a few minutes later is not considered to be showing intransitivity or violating any principle of rationality, because she is (trivially) changing state between the choices. Similarly, a subject that takes a mango when presented with a basket of many mangoes and only one apple, but takes an apple when faced with equal numbers of both fruits may not be considered irrational, because the value of an option may change when it is the last one, as there is a reputation cost of being impolite and taking the last available fruit of any kind ( Sen 1997 ). Many human experiments are comparisons between groups of subjects that can be assumed to be in equal states at the time of testing but, as we shall see, this is often not ensured in nonhuman animals. Violations of rationality by animals have also been reported ( Shafir 1994 ; Hurly and Oseen 1999 ; Waite 2001a ; Bateson 2002 ; Bateson et al. 2002 , 2003 ; Shafir et al. 2002 ). If these observations are corroborated and found to be systematic, the predictive power of the normative approach to animal behaviour should be questioned. Additionally, the observation of similarly irrational behaviour by animal and human subjects raises the possibility that the same cognitive mechanisms or processes operate in both cases. In fact, explanations for irrationality based on phenomena such as regret and overconfidence (e.g., Loomes and Sugden 1982 ), proposed with humans in mind, could be tested by examining whether the same circumstances elicit the expression of the same type of paradoxical behaviour in human and nonhuman subjects. If they do, and the mechanisms seem unlikely to operate in nonhuman agents, one may be advised to seek alternative explanations that work well for all kinds of subjects. Although these possibilities make the study of rationality valuable, critical procedural differences between the two fields have not been sufficiently acknowledged. One crucial distinction derives from the fact that, while human subjects can be verbally instructed about the properties of the alternatives, animals must be exposed to the contingencies to experience or learn about them. This difference hinders the comparison of the mechanisms underlying human and animal choices, since repeated exposure to different contexts often affects the organism's state, thus removing the justification for expecting transitivity, regularity, or any other principle of consistency. In the case of foraging research, different contexts can alter the subjects' net rate of intake during training, so that at the time of choice the resulting state differs and it may be unjustified to expect consistency of preferences. The fact that optimal decisions should be contingent upon state ( Houston and McNamara 1999 ) has been indeed an essential part of normative modelling in biology. As a consequence, apparent violations of rational principles by animals could also result from straightforward state-dependent optimality, the very framework being questioned. It may be added that, although we focus on changes in energetic state that could unwittingly be caused by training, these are not the only possible state consequences of instruction by exposure to the contingencies during training. 
A subject may be in a different state if the consumption of food items during training affects its nutritional requirements for the achievement of a balanced diet in future choice opportunities ( Simpson and Raubenheimer 1993 ; Raubenheimer and Simpson 1997 ), or if changes in the context of choice provide it with different information about its future options ( Houston 1997 ). Here we further develop the basis upon which to compare economic rationality between humans and nonhumans, and test whether state-dependent decision-making can be responsible for apparent violations of economic rationality in animal choices. To this end, we compare the foraging preferences of European starlings (Sturnus vulgaris) between members of a fixed, focal pair of options across different choice contexts. Our basic paradigm is defined in Figure 1 . The members of the focal pair of options differed in that while one of them (focal amount [FA]) offered a higher amount of food, the other (focal delay [FD]) was associated with a better (shorter) delay to food. These two attributes (amount and delay) were counterbalanced between the focal options so as to preserve their ratio (amount/delay), which is known to be (other factors remaining the same) a strong predictor of preference. A third option, or “decoy”, was also available during training and in some of the choice trials. The decoy could be either decoy amount (DA) or decoy delay (DD), depending on treatment (“High Intake” and “Low Intake” respectively, see Figure 1 ). We refer to the third option as “decoy” because its ratio of amount to delay was lower than in the focal options, and hence it is expected not to be preferred over either (in economic nomenclature, the decoys were “dominated” by the focal options). As postulated by context-dependent (or “comparative”) models of choice ( Shafir et al. 1989 , 1993 ; Wedell 1991 ; Tversky and Simonson 1993 ), a decoy can potentially affect preferences between a pair of options whenever subjective values are assigned comparatively, namely whenever an option's subjective value depends on the interaction of its properties with those of the remaining alternatives in a set, as well as when the decoy affects a subject's perception of the choice problem. We thus test whether each animal's preference between the focal options changes between two treatments that differed with respect to which of the two decoys was present. To increase comparability with previous research, the parameter values of the decoys were chosen to maximise their putative effect upon preference within the focal pair as postulated by psychological models purported to explain irrational choice (see Materials and Methods for details). Figure 1 Amount and Delay to Food Corresponding to Each Option The figure shows the parameters of the experiments using the conventional representation used in foraging theory, with energy gains in the ordinate and time in the abscissa. The origin of coordinates is the point of choice, so that time to the right indicates the delay between choice and reward, while time to the left represents all other times in the cycle, in this case the ITI. The options forming the focal choice pair are shown as white circles while those used as decoys are shown by black circles. FA and FD offer the same short-term rate of food intake (slope of the solid lines) of 0.5 units/s, whereas DA and DD offer the same short-term rate of 0.25 units/s. 
The slopes of the dashed lines (interrupted for space economy) indicate long-term rate of intake, considering the inter-trial interval of 60 s between consecutive feeding opportunities. “High intake” (horizontally adjacent rectangles) and “low intake” (vertically adjacent rectangles) denote the treatments in which decoy DA and DD (or their simulated energetic consequences), respectively, were present in addition to the focal pair. Since DA offers a higher long-term rate of gain than DD, intake is higher in the treatment where DA is present (“High Intake”). The reverse rationale applies to treatment “Low Intake.” Figure 1 also shows that, although the two focal options and the two decoys were equated in the ratio of amount to delay, they were not equated in terms of their energetic consequences. When all times in the cycle are included, the order in terms of energetic rate of return (the slope of the broken lines in Figure 1 ) is FA > DA > FD > DD. This means that differences in energetic state as a consequence of training with either of the two decoys could be a confounding factor in interpreting putative differential effects of the decoys. Specifically, repeated exposure to DA could lead to a higher cumulative intake, hence changing choices due to the expression of state-dependent preferences instead of the use of a comparative cognitive mechanism of choice. Our study is aimed at separating these possibilities. We tested preference between the focal options under three conditions: (1) treatments differing in energetic states when the decoys are absent; (2) treatments differing in which of the two decoys is present, when the energetic consequences caused by each decoy are not controlled; (3) treatments differing in which decoy is present, when the energetic differences they may cause are abolished by supplementary feeding. Our rationale for this design is that if the effects of the decoys are independent of their energetic consequences (i.e., differences in preference between the treatments are observed in all conditions), then these effects may indeed be evidence for a comparative cognitive mechanism of valuation, possibly caused by the same types of cognitive biases and heuristics reported in the human literature. However, if the effects of the decoys are abolished by controlling for energetic consequences and are generated by imposing state changes in the absence of decoys, it would be more parsimonious to explain the effects as state-dependent decision-making. Our results strongly favoured the latter hypothesis. Results Discrimination of Amounts We started by testing whether the birds could discriminate between the amounts of reward associated with each of the foraging options shown in Figure 1 (there is already strong evidence that they are able to discriminate between the delays used [ Brunner et al. 1992 ]). Figure 2 shows the proportion of choices of each bird to the option offering the largest amount of food. All birds in both groups significantly preferred the option offering the larger amount (binomial tests, p < 0.01 in all cases), confirming that the birds discriminate between these amounts. Figure 2 Discrimination Test Proportion of choices (± standard error [s.e.]) made by birds for the option offering the largest amount of food when time parameters were held constant. Choice proportions are significantly different from random for all birds (binomial test, p < 0.01). 
Birds 1, 2, and 3 (white bars) were presented with choices between one and two units of food, and birds 4, 5, and 6 (black bars) with choices between two and five units of food. Effects of Intake Rate without Decoys The proportion of choices made for each focal option in the absence of decoys when energetic state was manipulated experimentally is shown in Figure 3 A. Although the purpose of this experiment was to examine how the strength of preference between the focal options was affected by differences in energetic state, it is worth pointing out that there was an overall preference for FA (the option with higher long-term rate of gain) over FD even though the ratio of amount to delay was the same for both options. This preference for FA is not caused by the accumulated energetic consequences of exposure to each option, because the two focal options were experienced in mixed sessions. In Mazur's (1987) “hyperbolic” model, which is widely used in the behavioural analysis literature, the time between feeding events (or inter-trial interval ) is not included, but instead a constant with the value of 1 s is added to the delay in the denominator. The effect of this term is also to make the value of FA higher than that of FD, consistent with the observed trend. Figure 3 Individual Proportion of Choices for FA Relative to FD in Treatments “High Intake” and “Low Intake” (A) Effect of intake on choices without decoys. Here, extra food simulates the intake consequences that the two decoys cause when they are present and consumed on 25% of the feeding opportunities ( p < 0.01). (B) Results of an experiment with decoys when energetic consequences of the decoys were allowed to take effect (group NC) ( p = 0.06). (C) Results of an experiment with decoys, similar to (B), in which the energetic consequences of the decoys were abolished (group C). In (B) and (C), each symbol corresponds to each of the subjects. The dashed lines show the mean values in each of the cases. The critical observation for the present purposes, however, is that the magnitude of the preference between the focal options differed significantly between intake treatments. Specifically, preference for FA over FD was higher in treatment “Low Intake” (the treatment with lower accumulated intake) than in treatment “High Intake” ( F 1,8 = 12.1, p < 0.008; Figure 3 A). The details of the supplementary feeding are given in Materials and Methods , but it is important to highlight that the difference in supplementary intake between treatments “High Intake” and “Low Intake” simulated the differences in state that would be consequent on repeated experience of decoys DA and DD, respectively. These results thus show that energetic state per se can directly affect the strength of preference between alternatives. The stability criteria (see Materials and Methods ) were reached by all but one bird. We therefore also conducted the analysis excluding this bird. The results were the same: Preference for FA over FD was significantly higher in treatment “Low Intake” than in treatment “High Intake” ( F 1, 7 = 18.9, p = 0.003). Test of Economic Rationality in the Presence and Absence of Controls for Intake In this experiment, two groups of six starlings each (group C had intake controlled between treatments, and NC had intake not controlled between treatments) were trained with three options (the two focal options and one of the decoys) and then allowed to choose between either two (binary trials) or three (trinary trials) of those options. 
The two treatments, “High Intake” and “Low Intake”, differed in which of the two decoys (DA and DD, respectively) was present during training and in the trinary choices. Within each group, every subject experienced both treatments. Considering the results of the previous experiment, preference for FA over FD should be higher in the treatment with lower accumulated intake (treatment “Low Intake”) for group NC, in which such intake differences were not eliminated. No differences should, however, be observed in group C, in which intake differences were abolished. The results from the binary choice trials (where only the focal options were present) are shown in Figures 3 B and 3 C. As predicted, for group NC, preference for FA tended to be higher in treatment “Low Intake” ( F 1,4 = 7.4, p = 0.06; Figure 3 B). In group C, intake differences resulting from the decoys were abolished by supplementary feeding, and no differences in preference between treatments were detected ( F 1,4 = 0.2, p = 0.677; Figure 3 C). To summarise, differences in the level of preference for the focal options in binary choices are present when intake differs but there are no decoys ( Figure 3 A) and when decoys are present and their intake consequences are not controlled ( Figure 3 B), but disappear when the decoys are present but their intake consequences are neutralised ( Figure 3 C). We also analysed the temporal aspects of state changes on choice. Because the long-term rate of gain offered by DA was higher than that offered by DD, in group NC the difference in cumulative intake between the “High Intake” and “Low Intake” treatments must have increased over the trials in a session. Accordingly, the rate of increase in the strength of preference for FA over FD along the trials (as measured by the slope of the regression of trial number against average proportion of choices for FA) was significantly higher in the latter than in the former treatment (group NC: F 1,4 = 27.35, p = 0.006). This difference was not observed for the group in which intake was controlled (group C: F 1,4 = 1.29, p = 0.32). The rational principle of regularity and the constant-ratio rule can be examined by comparing choices between the focal options in binary (only the two focal options present) versus trinary (two focal options and one decoy) trials (see details in Data Analysis). We make two types of comparisons, between-treatments and within-treatments. In the between-treatments comparison we compare the binary trials of one treatment with the trinary trials of the other. For example, we compare the binary trials of treatment “Low Intake” (when training included exposure to FA, FD, and DD, but choices were between FA and FD presented alone) against the trinary trials of treatment “High Intake” (when training included FA, FD, and DA, and choices were between FA, FD, and DA), and vice versa. In the within-treatments comparisons, we compare binary versus trinary trials within the same treatment (e.g., binary versus trinary trials of treatment “Low Intake” and binary versus trinary trials of treatment “High Intake”). Notice that within a given treatment (within-treatment comparisons), accumulated intake was the same in binary and trinary trials for both groups of subjects (C and NC), whereas between treatments (between-treatment comparisons), accumulated intake differed between binary and trinary trials for the group of subjects in which intake differences were not controlled (NC). 
Therefore, if intake, rather than purely cognitive effects, is the cause of changes in preference for the focal options, apparent violations of regularity and of the constant-ratio rule should be observed only in the between-treatments comparisons for group NC. Conversely, if the presence of the decoys has a cognitive effect upon preferences that is independent of state, such violations should be observed both in the between- and within-treatments comparisons. Table 1 lists the predicted direction of preferences for each of the treatments, considering the hypothesis that differences in intake generated by exposure to the decoys, rather than purely cognitive effects of the decoys, cause the apparent violations of rationality. The directions of preferences were predicted on the basis of the results of the experiment without decoys, which showed that preference for FA was higher in the treatment with lower accumulated intake. Hence, we expect preference for FA to be higher in treatment “Low Intake”; namely, we expect P(FA[“Low Intake”]) > P(FA[“High Intake”]), and consequently, P(FD[“Low Intake”]) < P(FD[“High Intake”]), where P is the strength of preference for the corresponding focal option in the relevant treatment. For simplicity, only the predictions for group NC are shown in Table 1 , since under the energetic hypothesis we do not expect differences in preference levels (and therefore violations of rationality) for the group in which intake differences were abolished (group C). Table 1 Between- and Within-Treatments Comparison of Binary and Trinary Choice Trials for Group NC All predictions are based on the results of the experiments without decoys, which showed that preference for FA was higher in the treatment with lower accumulated intake, that is, P(FA[Low Intake]) > P(FA[High Intake]), and conversely P(FD[Low Intake]) < P(FD[High Intake]), and assume that differences in intake generated by exposure to the decoys are the sole cause of differences in preferences between treatments. “FA*” indicates preference for FA relative to FD in the trinary choice trial, calculated as shown in equation 1 (see text). Abbreviations: Bin, binary; F(Int.), P(F[Intake]); Int, intake; N, no breach; Trin, trinary. Figures 4 and 5 show the results for the between- and within-treatments comparisons, respectively. The left panels in both figures show the results for group NC ( Figures 4 A, 4 C, 5 A, and 5 C) and the right panels for group C ( Figures 4 B, 4 D, 5 B, and 5 D). No violations of regularity, and no differences in relative choice proportions, were observed in group C (repeated-measures ANOVA, all p > 0.1). For group NC, the observed directions of preferences were, in all cases, consistent with all predictions shown in Table 1 . In terms of the significance of the observed changes in preference in the between-treatments comparisons ( Figure 4 A and 4 C), one out of the two predicted apparent violations of regularity was statistically significant: There was a significant increase in the absolute proportion of choices for an option (FD) in the trinary with respect to the binary context (F 1,4 = 7.8, p = 0.049; Figure 4 A). The constant-ratio rule (see Data Analysis in Materials and Methods ) was also violated as predicted in Table 1 , because the preference for FA relative to FD was significantly higher in the binary than in the trinary context (F 1,4 = 9.2, p = 0.039; Figure 4 A).
Finally, against the hypothesis that the effect of decoys on preferences were caused by purely cognitive processes of comparison, there were no significant differences in preferences in the within-treatments comparisons of binary versus trinary trials (repeated-measures ANOVA, all p > 0.1; Figure 5 A and 5 C). Figure 4 Between-Treatments Comparison in Binary and Trinary Choice Trials The bars show the mean (± s.e.) absolute (FD and FA: leftmost and centre pairs of columns in each panel, respectively) and relative (FA*: rightmost pair of columns in each panel) proportion of choices for each option in binary (white bars) and trinary (black bars) trials when intake rate is not controlled (group NC: A and C; white background) or is controlled (group C: B and D; grey background). Relative preferences were calculated using equation 1 (see text). We compared the preference between the same two focal options between the binary context of one treatment (e.g., treatment “High Intake”) and the trinary of the other (e.g., treatment “Low Intake”). In group C (B and D), none of the differences between binary and trinary contexts were statistically significant. For group NC (A and C), the asterisk (*) indicates a significant violation of either regularity or the constant-ratio rule at p < 0.05. Figure 5 Within-Treatments Comparison of Binary and Trinary Choice Trials The bars show the mean (± s.e.) absolute (FD and FA) and relative (FA*) proportion of choices for each option in the binary (white bars) and trinary (black bars) trials for group NC (A and C) or group C (B and D). Relative preferences were calculated using equation 1 (see text). We compared the preference for the same options between the binary and trinary contexts of the same treatment (when there are no differential energetic effects). There were no violations of either regularity or the constant-ratio rule ( p > 0.1). The presence of apparent violations when the effect of the decoys on intake was allowed and their absence when the effect was abolished was again consistent with the hypothesis that these apparent irrationalities were caused by differences in intake brought about by exposure to the decoys. The hypothesis is further confirmed by the absence (both for the group that received supplementary feeding and the one that did not receive it) of violations in the within-treatment comparisons, when the cognitive effect of the decoys was allowed but state was neutralised. In group NC, all birds reached the stability criteria. In group C, however, one bird did not reach the criteria in treatment “Low Intake”, and another did not reach stability in treatment “High Intake”. We therefore reanalysed the data for this group excluding these two birds. The results were the same, namely, in none of the tests for this group was rationality breached. Discussion Our aim in this study was to foster the development of a solid interdisciplinary basis upon which to compare research on economic rationality in humans and nonhumans, and to investigate whether normatively inspired hypotheses of animal behaviour may be systematically misleading, as they implicitly assume rationality. To this end, we examined whether violations of economic rationality in animals that have recently been reported in the literature represent real violations of rationality caused by the use of comparative cognitive mechanisms of choice as proposed for humans or, alternatively, to unwittingly imposed differences in the state of the subjects. 
To do this requires testing whether one can reproduce the reported breaches of rationality and whether they are abolished when cognitive effects are allowed but state differences are eliminated. A further test requires generating such violations by changes of state alone. We have achieved all of these conditions in our experiments. Why should preferences be modulated by energetic state? To address this question, it is necessary to start by considering why choices do not go exclusively to the option with maximum value. From an evolutionary perspective, one possibility is that subjects are adapted to some level of ambiguity (for instance, because the properties of options may change with time), and tracking these properties requires some level of response to each available option ( Houston et al. 1982 ). If partial preferences are taken as a given, the next stage is to model the factors that may affect them quantitatively. Here, it is possible that partial preferences depend on the benefits that could be derived from each of the available options and that these benefits depend on the state of the subject. To capture this possibility, the probability of choosing a suboptimal action (in this case FD, which offers a poorer long-term rate of gain) could be modelled as a function of the difference between the benefits accruing from each option while the subject is in a given state. For example, inspired by the “matching law” from behavioural analysis ( Herrnstein 1961 ), Kacelnik (1984) tested the fit of a model termed “profitability matching” for starlings experiencing the conditions of the “marginal value” foraging model. In the model, each strategy is deployed in proportion to the ratio of its payoff relative to the sum of the payoffs of all available alternatives. Functionally, such a strategy, while failing to maximise rate of return, may often approximate the optimal strategy or at least avoid costly deviations from it. As highlighted by other authors ( McNamara and Houston 1987 ), more frequent deviations from the optimal policy should be expected when their costs are smaller. In Figure 6 , we build on this assumption and on the “state” model proposed by Kacelnik and Marsh (2002 ; see also Marsh et al. 2004 ) and extend it to illustrate the putative effects of variations in state. We assume that repeated exposure to treatments offering lower and higher objective intake rates (corresponding to decoys DD and DA, respectively) causes some correlated measure of state to be higher in the latter case ( Figure 6 B). That is, state is assumed to be a positive function of rate of intake during the period preceding the choice itself. We then consider the improvement in state produced by choosing either of the targets (FA and FD). The difference between the improvement in state (ΔS) caused by choosing FA over FD is the same under both treatments, but the biological consequences may differ in magnitude if benefit is not linearly related to state. For the conditions experienced by the starlings in our experiment (where deprivation was very mild), it is reasonable to assume that biological gains were a decreasing function of their initial state (e.g., the contribution of a food item decreases with increasing reserves; see also McNamara and Houston 1982 ). Figure 6 A illustrates this relationship. The figure shows that the cost of choosing the target option with lower long-term rate of energetic gain (FD) is more severe in treatment “Low Intake” than in “High Intake” (|δDD|>|δDA|).
Preference for FA should thus be higher under treatment “Low Intake” if the frequency of choices for the leaner focal option is inversely related to their cost. This model is consistent with the equivalence between the effect of supplementary feeding and that of the decoys. Figure 6 A Functional Model of How State Can Affect Partial Preferences (A) Fitness is plotted as a concave function of the organism's state. Exposure to DD leads to a poorer state (Sd D ) than that reached after exposure to DA (Sd A ) (see also B). Sd D + FD and Sd D + FA denote the state reached by subjects under treatment “Low Intake” as a consequence of choosing focal options FD and FA, respectively. Similarly, Sd A + FD and Sd A + FA represent the state reached by subjects in treatment “High Intake” after choosing FD and FA, respectively. (B) State is assumed to be a growing, linear function of energy intake. DD and DA represent the average intake rates experienced by subjects that include the decoys with the same names in their diet. Although choosing FA is always better than choosing FD, and the difference between the states caused by this choice is the same under either treatment (Sd D and Sd A ), the fitness difference between choosing FA and FD is higher under treatment “Low Intake” (δDD) than “High Intake” (δDA). This should lead to a higher level of preference for FA in the former treatment if choices of the low-yielding option were to be reduced in proportion to their cost. From a mechanistic perspective, it is also possible that, under conditions of higher energetic intake, animals are less motivated to search and work for food. Our data support this possibility. The time subjects took to start working once presented with any option in no-choice trials (i.e., their latency to first peck) was significantly longer in the treatment “High Intake” than in “Low Intake” in the experiment without decoys ( F 1,8 = 24.6, p = 0.01) and in the experiment with decoys, but only for the group of subjects for which intake was not controlled (group NC: F 1,4 = 39.3, p = 0.003; compare this result to that for group C: F 1,4 = 3.7, p = 0.13). It is thus possible that these potential differences in motivational state led subjects to pay less attention to the alternatives during a choice opportunity in the treatment “High Intake”, resulting in the observed differences in preference levels. Within this framework, we now use two examples to consider whether unwittingly induced changes in intake between contexts could also underlie previously reported findings of irrational behaviour. Energetic State and Rationality in Jays A recent study tested the effect of background context on the foraging preferences of semi-tame food-hoarding grey jays ( Waite 2001a ). The jays were initially split into two groups, and each group was given 25 binary choices in only one of two backgrounds: In background context A, the jays had to choose between one and three raisins placed 0.5 m inside separate tubes (where distance into the tubes should correlate with perceived risk); in B, jays chose between two identical options, each offering one raisin 0.5 m inside the tubes. Both groups were subsequently presented with the choice between one raisin 0.3 m into one tube and three raisins 0.7 m into another tube. 
In violation of IIA, context had an effect: Preference for the option offering more raisins at a greater perceived risk was higher in the group that had experienced context B, a result interpreted as consistent with the existence of cognitive biases leading to departures from value maximisation ( Waite 2001a ). However, experience with the two background contexts and consequent differences in amount of food hoarded (i.e., level of energy reserves for future use) between groups could also have led to the observed results. Those jays that had been in context A had collected approximately 62 raisins, whereas those in context B had collected an average of 25 raisins. Assuming that the state of the jays was such that fitness increased following a decelerated function of accumulated raisins, it is possible that those jays previously presented with lower food supplies had more to gain by choosing the option yielding the larger amount of food. Equating their hoards with energy reserves, one could say that they were “hungrier” in context B and hence afforded greater risks to pick the maximum reward. This trade-off between energetic state and predation risk has been extensively discussed within the behavioural ecology literature (e.g., Houston and McNamara 1999 ) Energetic State and Rationality in Hummingbirds The comparison between binary and trinary choices sometimes employed in studies designed to test economic rationality in animal behaviour can also lead to changes in state. For example, Bateson et al. (2002) compared preferences of Rufous hummingbirds (Selasphorus rufus) among three flower types differing in volume and concentration of sucrose (target: 15 μl, 40% sucrose; competitor: 45 μl, 30%; decoy: 10 μl, 35%) in binary (target and competitor) and trinary (target, competitor, and decoy) contexts. The birds experienced both contexts consecutively. In each of them, they made repeated choices between the available flower types until a minimum number of 150 choices for the target and competitor had been reached. The strength of preferences for the competitor over the target increased significantly in the presence of the decoy (trinary context), and the authors interpreted these results as being inconsistent with the use of absolute evaluation mechanisms as normally postulated by functional accounts of behaviour. Yet, the resulting differences could have been caused by exposure to energetically different contexts. Net rate of energy intake of the target, competitor, and decoy were, respectively, 81.9, 92.0, and 59.5 J/s. Considering, for instance, an average proportion of choices for the decoy of about 20% (as described in the study), subjects would necessarily experience a higher intake rate in the binary than in the trinary context unless they modified the relative allocation of responses. Therefore, if the interval between consecutive foraging bouts did not differ systematically between contexts, cumulative gain along the 150 choices would have been lower in the trinary context, favouring preference for the option offering the higher net rate of intake, as reported. Further analyses on the extent to which state differed between contexts, testing for differences in inter-bout intervals, variability in choices, and unplanned differences in nutrient balance (i.e., in the actual volumes and concentration of sugar and water; see, for example, Simpson and Raubenheimer 1993 ) experienced in each context in this and other experiments with hummingbirds (e.g., Bateson et al. 
2003 ) are therefore needed before concluding that the results imply violations of rationality rather than compensations for differences in state generated by the introduction of a decoy. These examples were provided to illustrate that acknowledging and controlling for the effects that differences between choice sets may produce on an organism's state is paramount when investigating the influence of context on choice behaviour. This is particularly important because it is often difficult to predict how changes in state will affect preferences. For instance, our results showed a higher level of preference for the larger but more delayed reward when the starlings were under a poorer schedule—a result also previously reported by some authors ( Christensen-Szalanski et al. 1980 ; Rechten et al. 1983 ). Conversely, results showing an effect of state in the opposite direction have also been reported in the literature and interpreted as demonstrating greater impulsivity under hungrier conditions ( Snyderman 1983 ; Lucas et al. 1993 ). From a functional viewpoint, whether lowering energetic state should shift preference towards bigger and later rewards over smaller and more immediate ones or vice versa depends on the details of the problem. While many authors dealing with the problem of temporal discounting focus on one-shot choices, animal experiments are conducted in repeated trials, where delays mean lost opportunity, and where consideration of variance (“risk”) must come into the picture. For example, when the pressing factor is maximisation of rate of intake, greater ITIs have two effects: They alter the state of the subjects (lowering their energy reserves), and they shift the difference in long-term rates in favour of larger, more delayed rewards (a result that may mistakenly be considered a decrease in impulsivity). On the other hand, when risk is the main factor, it is impossible to make a general prediction, because the consequences of variance in both size and delay to reward are functionally sensitive to the curvature of the fitness versus state function, and this is likely to have one or more inflexion points. Because the directionality of state effects under biologically rational choice is difficult to predict, to demonstrate the presence of true breaches of rationality or to confirm previous findings as evidence of irrationality using these experimental economics paradigms, it is therefore essential not only to investigate the immediate effects of state on preference, but also to ensure that these violations are reproduced and not altered in any direction when state is controlled. Additionally, the observation that rewards received under higher states of need often lead to faster acquisition ( Capaldi and Havancik 1973 ; Tarpy and Mayer 1978 ; Balleine 1992 ) makes it fundamental to control for differences in state whenever subjects have to learn the properties of the rewards. Conclusion Following the growing body of claims for irrational choice behaviour by human subjects, recent reports on breaches of rationality in animals may be interpreted as questioning the predictive power of the optimality approach in behavioural ecology, favouring the view that the reported inconsistencies result from rigid rules of evaluation and choice leading to the assignment of context-dependent values to options and devaluing the contribution of functional reasoning. We do not doubt that the precise empirical description of decision rules is important. 
Indeed, findings of locally irrational behaviour are useful tools for the investigation of the mechanisms underlying choices, often forcing a reinterpretation of existing data and models of optimal decision-making. Additionally, the potential dependence of valuation mechanisms on the context of choice might have direct implications for other biological systems. For example, Shafir et al. (2003) have recently emphasised the role of pollinator perception and choice strategies in mediating the evolution of floral nectar distribution strategies, as well as the potential use of knowledge of cognition-mediated mechanisms of choice on the development of biological control programs. Still, if ideas are going to travel safely between economics and biology, crucial details of the experimental paradigms must be scrutinised, and differences between human and nonhuman research must be acknowledged. Here we emphasise that, due to the need to expose animals to the contingencies of the choice problem, contextual changes may lead to variations in the state of individuals, which in turn can affect both the amount of knowledge acquired by subjects and the parameters of the decision faced by the individuals, thus calling into question the significance of apparent violations of rational axioms. We do not claim that state dependence accounts for all reported inconsistencies in animal choice (e.g., Waite 2001b ; Shafir et al. 2002 ), nor are we suggesting that animal choices are based directly upon calculations of optimal state-dependent actions instead of direct psychological mechanisms of choice. Indeed, the notion of “rules of thumb” that perform well in most relevant ecological situations, but may also lead to suboptimal behaviour, has been long accepted in behavioural and evolutionary biology, and may well also comprise some of the comparative mechanisms of choice and “fast and frugal” heuristics previously described for human beings (e.g., Gigerenzer et al. 1999 ). However, we believe that if evolutionarily inspired normative models of behaviour are to be treated fairly, a deep scrutiny of the causes underlying observations of apparent economic irrationality in animal (and, for that matter, human) choices should be attempted. Economic theory has been and still is a source of inspiration for optimality theorising in biology, and experimental economics may just as well inspire understanding of the predictive failure of some of these models. Conversely, the systematic observation of local cases of irrationality in animals may provide insights into the nature of the mechanisms of choice employed by humans. However, our study highlights that at least some apparent similarities in the expression of “maladaptive” behaviours may be due to oversights in the implementation of experiments testing ideas that originate in other disciplines. Materials and Methods Our main experiment consisted of training starlings to choose between either three or two simultaneously presented foraging options. Each option was implemented as a coloured, intermittently flashing key that, when pecked once by the subject, stopped flashing and turned steadily on while the other keys darkened, and then delivered a certain amount of food following the first peck after a programmed delay. The amount and delay to food determined the features of each option. In total, there were four options in the experiment, two forming a “focal pair” of target options and two that were called “decoys”.
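To make this contingency concrete, the following simulation sketches a single no-choice trial; it is illustrative only (the actual experiment ran on Arachnid control software, and the pecking statistics, food amount, and function names here are assumptions), and it shows why experienced delays slightly exceed programmed ones, a point quantified in the protocol below.

```python
# A minimal, self-contained simulation of the single-key trial contingency
# described above. All names and the pecking statistics are illustrative
# assumptions; the actual experiment ran on Arachnid control software.
import random

def run_option(amount, delay, iti=60.0, mean_peck_interval=0.5):
    """Simulate one no-choice trial and return the experienced delay
    (time from the first peck, when the key turns steadily on, to food)."""
    t = 0.0
    # Food follows the FIRST peck AFTER the programmed delay has elapsed,
    # so the experienced delay slightly exceeds the programmed one.
    while t < delay:
        t += random.expovariate(1.0 / mean_peck_interval)  # time to next peck
    experienced_delay = t
    # Pellets are delivered at 1 unit/s, then all keys stay dark for the ITI.
    trial_duration = experienced_delay + amount * 1.0 + iti
    return experienced_delay, trial_duration

random.seed(1)
sims = sorted(run_option(amount=5, delay=10.0)[0] for _ in range(1001))
print(f"median experienced delay: {sims[500]:.1f} s")  # a little over 10 s
```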
Parameters of the options The actual reward parameters corresponding to the four options are shown in Figure 1 . The two focal options, FA and FD, offered a ratio of amount of reward to delay to reward of 0.5 food units per second, while each of the two decoys, DA and DD, offered a ratio of 0.25 units per second (the slopes of the solid lines in Figure 1 ). We call these ratios “short-term rates” for consistency with previous literature (viz., Bateson and Kacelnik 1996 ). Short-term rate is known to be strongly correlated with attractiveness, but it is not a description of objective intake rate, because it does not include times other than the delay between choice and outcome. This difference is important and underlies this study. Functional approaches to foraging behaviour, such as classical optimal foraging theory ( Stephens and Krebs 1986 ), have highlighted that energetic gains are a function of total intake over total time. This relationship is expressed by defining the value of an option by its real rate of returns as given by A/(D + ITI), where A is food amount, D is the delay between choice and outcome, and ITI is the sum of all other times in the foraging cycle. This expression, known as “long-term rate”, is the slope of the broken lines in Figure 1 . Any consideration of the effect of energy state on preferences must consider long-term rate, even if the subjects used only short-term rates to form preferences. Scholars concerned with the mechanisms by which stimuli acquire significance (and hence potential attractiveness) to a learning animal, such as the behavioural analysis and conditioning literatures, have focused on the conditions that make the association between the outcome and the predictive event easier. In this case, the predictive event is the onset of the stimulus marking the delay to food, which coincides with the animal's action of pecking at it ( Green et al. 1981 ; Kacelnik 1984 ; Mazur 1987 ; Bateson and Kacelnik 1996 ). We programmed the alternatives so that the two focal options were as close as possible to being equally attractive and superior to the two decoys, themselves equated in short-term rate. At the same time, we exploited the fact that the energetic consequences of the two decoys differ markedly in order to manipulate the subjects' energetic state. The parameters of the decoys were chosen to maximise their putative cognitive effect on preferences between the focal options. Several accounts of the effect of poorer alternatives have proposed that they may have an effect because decision-makers compare each attribute of the options (in our case amount and delay) independently, not integrated into a single expression of value. In the conditions described by Figure 1 , two putative mechanisms could cause DA to increase preference for FA. One of them, referred to as the comparative model ( Shafir et al. 1989 ; Wedell 1991 ; Shafir et al. 1993 ; Tversky and Simonson 1993 ), postulates that DA could favour FA by means of its asymmetric relationship of dominance (with dominance used as a synonym for superiority) with the focal options. The overall idea is that an option gains value when it is better than other options in the set along a particular attribute. In the present case, DA is dominated by FA in one attribute (delay) and equal in the other (amount), but it is dominated by FD in one attribute (delay) while it dominates it in the other (amount).
Thus, if subjects are influenced by the number of relationships of dominance between attributes, FA could be more attractive than FD for being the only option that dominates DA completely. A second mechanism of interest, known as the “range effect”, says that the same difference in a physical attribute can have a greater effect when embedded in a narrower range of values ( Parducci 1965 ). Therefore, the quantitative advantage of an option along a given attribute decreases as a function of the range of values present. For example, if only FA and FD are present, the range of delays is 6 s, and FD's advantage is 100%. When, say, DA is added, the range increases to 16 s, and FD's advantage over FA is only 37.5% of the total range. Since the range in amounts is not modified by adding DA, this again may favour FA. The same two mechanisms could make DD enhance the attractiveness of FD over FA. Note, however, that although these mechanisms provide a possible direction in which differences in preferences could be observed, changes in preference levels between contexts are usually interpreted as being compatible with context dependence regardless of their direction and the number of attributes describing the options ( Hurly and Oseen 1999 ; Bateson 2002 ; Bateson et al. 2002 ). Subjects. Subjects were 28 naïve starlings captured in Oxford (English Nature licence # 20020068). After capture, the birds were kept outdoors and, during the experiment, transferred to individual indoor cages (120 cm × 60 cm × 50 cm) that served as housing and testing chambers. Lights were on between 0500 and 1900 h, and temperature ranged from 12 °C to 16 °C. Subjects were visually but not acoustically isolated. The experiments took place between June and October 2002. Apparatus Each cage had a panel with a central food hopper and three response keys. Computers running Arachnid language (Paul Fray, Cambridge, UK) served for control and data collection. Rewards were units of Orlux pellets, crushed and sieved to an even size (0.025 ± 0.005 g). Automatic pellet dispensers (Campden Instruments, Leicester, UK) delivered rewards at a rate of 1 unit/s. Each option was signalled by a different key colour (red, green, yellow, blue, white, or pink). Experimental protocol Subjects were first trained to peck at keys to obtain rewards until all birds pecked at least 80% of the food opportunities. A discrete trials procedure with two types of trials (no-choice and choice) was then employed. No-choice trials provided the birds with information about each alternative, but also contributed to their rate of intake. These trials started with one key blinking. The first peck caused the light to stay steadily on. The first peck after the programmed delay had elapsed triggered the delivery of the programmed amount of food, followed by a fixed ITI of 60 s, during which all keys were off. Since food was delivered following the first peck after the end of the designated delay, experienced delays were slightly longer than those programmed (median [interquartile range] in experiment with decoys: treatment “Low Intake”, delay FD = 4.1 s [0.2 s], delay FA = 10.2 s [2.0 s], delay DD = 4.1 s [0.2 s]; treatment “High Intake”, delay FD = 4.1 s [0.2 s], delay FA = 10.2 s [1.8 s], delay DA = 20.2 s [0.3 s]). Choice trials began with two or three keys (depending on whether choice was binary or trinary) simultaneously blinking. The first peck on any of them caused the pecked key to turn steadily on and the others to turn off.
After that, the trial continued as in no-choice trials. The order and sides in which the options were presented were randomised. After the sessions, subjects were fed ad libitum with turkey crumbs for at least 2 h, then supplemented with ten mealworms, and then food deprived until the beginning of the experimental sessions on the following morning. In all experiments (except the calibration for discrimination of amounts) we used a within-subjects design with two treatments. The within-subjects design was preferred owing to the high level of variability between individual starlings' energetic requirements, which would have prevented accurate control of energetic state between groups. We therefore focus our analyses on within-subjects comparisons, since variations in energetic state between groups of different subjects would hinder the comparison of their preference levels. The pairing of options with colours was balanced across subjects and changed between treatments. Treatment order was balanced across birds. Subjects were given one resting day with ad libitum food between treatments. Each treatment lasted for 20 sessions. Data from the last five sessions were used for analyses. Discrimination of amounts In the discrimination experiment, we used a between-subjects design with six male starlings, split into two groups of three birds each. Group 1 chose between 1 and 2 units of food, and group 2 chose between 2 and 5 units. Rewards were delivered after the first peck on the corresponding key. Subjects experienced two sessions per day, at 0800 and 1300 h. Each session consisted of 84 trials, divided into 21 blocks of four trials each: two no-choice (each option presented once) followed by two choice trials. Effects of intake rate on choice without decoys To investigate the potential effect of different intake rates on preference between the target options, we used ten birds (five males and five females) in a within-subjects design with two treatments, which differed with respect to the amount of supplementary food delivered to the starlings. One of the treatments simulated the intake effect (total amount of food consumed) of experience with decoy DA (treatment “High Intake”) on 25% of the foraging opportunities (this proportion was established from the average proportion of trials in which the decoy was experienced in a pilot study). The other treatment simulated the intake effect of experience with decoy DD (treatment “Low Intake”) on 25% of all foraging opportunities. To achieve this we delivered unconditionally the reward corresponding to the appropriate decoy once per experimental block (details below), after the ITI that followed the last trial of that block. Unlike the trials with the pair of focal options, no action was needed on the part of the subjects to receive the unconditional reward, nor was any specific discriminative stimulus associated with it. There were three daily sessions, at 0600, 1000, and 1400 h. Each session consisted of 36 trials, grouped into 12 blocks of three trials each. Each block started with two no-choice trials (each focal option once), followed by an ITI and an unconditional food delivery in which the amount of the simulated decoy was delivered after the delay corresponding to that decoy had elapsed. The third and last trial of each block was a choice between the two focal options.
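The sketch below makes this block structure concrete; it is illustrative only (not the authors' Arachnid control code), and the pellet amounts per option are inferred from Figure 1 rather than stated in this paragraph.

```python
# A minimal sketch (not the authors' Arachnid control code) of one session of
# the experiment without decoys. Pellet amounts are inferred from Figure 1;
# they are assumptions, not values stated in this paragraph.

SIMULATED_DECOY = {"High Intake": {"amount": 5, "delay": 20},  # mimics DA
                   "Low Intake": {"amount": 1, "delay": 4}}    # mimics DD

def session(treatment, n_blocks=12):
    """Build one session's schedule and total the supplementary intake."""
    schedule, supplement = [], 0
    for _ in range(n_blocks):
        # Two no-choice trials, one per focal option.
        schedule += [("no-choice", "FD"), ("no-choice", "FA")]
        # One unconditional delivery after the ITI: no response required, no
        # discriminative stimulus; amount and delay of the simulated decoy.
        schedule.append(("unconditional", treatment))
        supplement += SIMULATED_DECOY[treatment]["amount"]
        # The block closes with a choice between the two focal options.
        schedule.append(("choice", ("FD", "FA")))
    return schedule, supplement

for treatment in ("High Intake", "Low Intake"):
    events, extra = session(treatment)
    print(f"{treatment}: {extra} supplementary units "
          f"across {len(events)} feeding opportunities")
# One decoy-simulating delivery per four feeding opportunities reproduces the
# 25% rate; under the assumed amounts the treatments differ by 48 units/session.
```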
Test of economic rationality in the presence and absence of controls for intake Twelve birds were randomly assigned to two groups (intake controlled, or group C, and intake not controlled, or group NC) of six birds each (three males and three females in each group). All subjects experienced two treatments, one with decoy DA (treatment “High Intake”) and another with DD (treatment “Low Intake”). In group C, the differences in intake rate between treatments caused by the exposure to the energetically different decoys were eliminated with supplementary feeding. Three daily sessions started at 0500, 0900, and 1400 h. Each session consisted of 63 trials, grouped into seven blocks of nine trials each: three no-choice trials followed by six choice trials in a random order (two trinary choices, two binary choices between the focal options, and two binary choices between each focal and the decoy). To equalise intake between treatments in group C, we adopted the following procedure. We calculated the maximum obtainable amount of food and delay per block for treatment “High Intake” (the treatment offering the higher cumulative delay and amount), and in both treatments delivered supplementary rewards up to this amount and delay twice per block. Thus, in every block of trials we equalised intake and total time to the same value in both treatments. The supplement was delivered after the fourth and the ninth trial of each block, after adding the appropriate delay to the ITI. In both treatments, supplements in the middle of the block were followed by a 5-min no-food interval to prevent satiation. Blocks were separated by 10-min intervals. Data analysis According to the principle of IIA, the strength of preference between two options should be independent of the presence of other (less preferred) options. These other options may either form part of the general situational background and be absent at the time of the choice, or form part of an enriched set of options at choice time. To test whether differences in background led to breaches of IIA, we compared choice proportions in binary choices (in which the two target options of the focal pair were paired) between the two treatments by conducting separate tests for each group of subjects. To test the temporal effects of potential state changes over the trials in the experimental sessions (i.e., whether the strength of preference between the focal options changed along a session), we calculated the slope of the regression of trial number against (transformed; see below) proportion of choices for FA over FD and tested whether the group of slopes was different between treatments for both groups of subjects. We also tested for differences in preference between the focal options across contexts, comparing binary with trinary choice trials. We performed two analyses. First, we tested whether the relative strength of preference between the focal options differed between the binary and trinary contexts. Relative preferences were calculated as p(FA,FD;{FA,FD,D}) = n(FA;{FA,FD,D}) / [n(FA;{FA,FD,D}) + n(FD;{FA,FD,D})] (equation 1), where p(FA,FD;{FA,FD,D}) is the relative preference for FA over FD when the alternatives indicated inside the curly brackets were present, and n(FA;{FA,FD,D}) is the number of choices for FA within the same set of alternatives. D stands for either of the decoys (DA or DD). The second term in the denominator follows the same notation. According to a strong probabilistic version of IIA known as the constant-ratio rule ( Luce 1959 ), relative preferences should be the same between binary and trinary contexts.
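As a worked illustration of these two tests, the sketch below applies equation 1 and the regularity criterion to hypothetical choice counts; it is not the statistical analysis actually used, which is described next.

```python
# A sketch of the two IIA tests just defined, applied to raw choice counts.
# The counts are hypothetical; the actual analysis used repeated-measures
# ANOVA on square-root arcsine-transformed proportions (see below).

def relative_preference(n_fa, n_fd):
    """Equation 1: preference for FA relative to FD within a choice set."""
    return n_fa / (n_fa + n_fd)

def check_iia(binary, trinary):
    """binary: counts for FA and FD; trinary: counts for FA, FD, and decoy D."""
    n_bin = binary["FA"] + binary["FD"]
    n_tri = trinary["FA"] + trinary["FD"] + trinary["D"]
    # Regularity: the absolute proportion of choices for a preexisting option
    # must not increase when the decoy is added to the choice set.
    for option in ("FA", "FD"):
        if trinary[option] / n_tri > binary[option] / n_bin:
            print(f"apparent regularity violation for {option}")
    # Constant-ratio rule: relative preference within the focal pair should be
    # the same in the binary and trinary contexts.
    p_bin = relative_preference(binary["FA"], binary["FD"])
    p_tri = relative_preference(trinary["FA"], trinary["FD"])
    print(f"relative preference for FA: binary {p_bin:.2f}, trinary {p_tri:.2f}")

# Hypothetical counts resembling the group NC pattern reported above:
check_iia(binary={"FA": 70, "FD": 30}, trinary={"FA": 55, "FD": 35, "D": 10})
```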
Second, we compared absolute strength of preference for each of the targets between the two contexts to test for violations of regularity. Regularity is a weaker form of IIA ( Luce and Suppes 1965 ), which asserts that the absolute proportion of choices for an option cannot increase when a new option is added to the choice set. Again, breaches of regularity are usually taken as strong evidence that the value of an option is assigned in a context-dependent way (see Schuck-Paim and Kacelnik [2002] for an intuitive explanation). We used repeated-measures ANOVA on square-root arcsine–transformed choice proportions, having treatment and order as within- and between-subjects factors, respectively. In all cases we tested the effect of order and interaction between the factors, but neither was significant. The assumptions of normality and homogeneity of variances were not violated for any of the transformed datasets. The Greenhouse–Geisser correction was applied whenever the assumption of sphericity was violated. Tests were always two-tailed. In all conditions, we additionally tested whether the birds' preferences were already stable when the experimental sessions were interrupted. We considered preferences to be stable when the regression of choice proportions (for the focal choice pair in binary choices, where only FA and FD were available) in five consecutive sessions (against session number) was not significant and the standard deviation of these proportions did not exceed 0.20. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC529314.xml
549594 | Free Community Science and the Free Development of Science | null | In free community science, where large numbers of scientists participate as volunteers in a single project, the ideal of scientific cooperation finds a new expression. Free community science was inspired by the free software movement, which itself was inspired by the ideal of scientific cooperation as the operating system developers of the Massachusetts Institute of Technology Artificial Intelligence Lab applied it to software development in the 1970s. This ideal has suffered for two decades from corporate pressure to privatize science, so it is very gratifying to see that the free software movement can today help reinvigorate the principle that inspired it. The ideal of scientific cooperation goes beyond the conduct of individual projects. Scientific cooperation is also being reinvigorated today through the open-access movement, which promotes the public's freedom to redistribute scientific and scholarly articles. In the age of computer networks, the best way to disseminate scientific writing is by making it freely accessible to all and letting everyone redistribute it. I give a vote of thanks to the Public Library of Science for leading the campaign that is now gaining momentum. When research funding agencies pressure journals to allow free redistribution of new articles they fund, they should apply this demand to the old articles “owned” by the same publishers—not just to papers published starting today. Journal editors can promote scientific cooperation by adopting standards requiring internet publication of the supporting data and software for the articles they publish. The software and the data will be useful for other research. Moreover, research carried out using software cannot be checked or evaluated properly by other scientists unless they can read the source code that was used. A significant impediment to publication and cooperation comes from university patent policies. Many universities hope to strike it rich with patents, but this is as foolish as playing the lottery, since most “technology licensing offices” don't even cover their operating costs. Like the Red Queen, these universities are running hard to stay in the same place. Society should recognize that funding university research through patents is folly, and should fund it directly, as in the past. Meanwhile, laws that encourage universities to seek patents at the expense of cooperation in research should be changed. Another impediment comes from strings attached to corporate research funding. Universities or their public funding agencies should ensure that private sponsors cannot block research they do not like. These sponsors must never have the power to veto or delay publication of results—or to intimidate the researchers. Thus, sponsors whose interests could be hurt by publication of certain possible results must never be in a position to cut the funding for a specific research group. The free software movement, the free redistribution policy of this journal, and the practice of free community science for developing diagnostic disease classifications [ 1 ] are all based on the same fundamental principle: knowledge contributes to society when it can be shared and developed by communities. All three face opposition from those who would like to privatize knowledge and charge tolls for its use.
In the free software movement we have 20 years' experience in resisting this opposition, and we have built up considerable strength and momentum. We can give the other two movements a boost, so they can advance more quickly. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC549594.xml |
545594 | Variation in alternative splicing across human tissues | Analysis of the alternative splicing patterns of genomically aligned ESTs revealed that human brain, testis and liver have unusually high levels of alternative splicing and identified candidate cis -acting factors likely to play important roles in tissue-specific alternative splicing in human cells. | Background The differentiation of a small number of cells in the developing embryo into the hundreds of cell and tissue types present in a human adult is associated with a multitude of changes in gene expression. In addition to many differences between tissues in transcriptional and translational regulation of genes, alternative pre-mRNA splicing (AS) is also frequently used to regulate gene expression and to generate tissue-specific mRNA and protein isoforms [ 1 - 5 ]. Between one-third and two-thirds of human genes are estimated to undergo AS [ 6 - 11 ] and the disruption of specific AS events has been implicated in several human genetic diseases [ 12 ]. The diverse and important biological roles of alternative splicing have led to significant interest in understanding its regulation. Insights into the regulation of AS have come predominantly from the molecular dissection of individual genes (reviewed in [ 1 , 12 ]). Prominent examples include the tissue-specific splicing of the c- src N1 exon [ 13 ], cancer-associated splicing of the CD44 gene [ 14 ] and the alternative splicing cascade involved in Drosophila melanogaster sex determination [ 15 ]. Biochemical studies of these and other genes have described important classes of trans -acting splicing-regulatory factors, implicating members of the ubiquitously expressed serine/arginine-rich protein (SR protein) and heterogeneous nuclear ribonucleoprotein (hnRNP) families, and tissue-specific factors including members of the CELF [ 16 ] and NOVA [ 17 ] families of proteins, as well as other proteins and protein families, in control of specific splicing events. A number of cis -regulatory elements in exons or introns that play key regulatory roles have also been identified, using a variety of methods including site-directed mutagenesis, systematic evolution of ligands by exponential enrichment (SELEX) and computational approaches [ 18 - 22 ]. In addition, DNA microarrays and polymerase colony approaches have been developed for higher-throughput analysis of alternative mRNA isoforms [ 23 - 26 ] and a cross-linking/immunoprecipitation strategy (CLIP) has been developed for systematic detection of the RNAs bound by a given splicing factor [ 27 ]. These new methods suggest a path towards increasingly parallel experimental analysis of splicing regulation. From another direction, the accumulation of large databases of cDNA and expressed sequence tag (EST) sequences has enabled large-scale computational studies, which have assessed the scope of AS in the mammalian transcriptome [ 3 , 8 , 10 , 28 ]. Other computational studies have analyzed the tissue specificity of AS events and identified sets of exons and genes that exhibit tissue-biased expression [ 29 , 30 ]. However, a number of significant questions about tissue-specific alternative splicing have not yet been comprehensively addressed. Which tissues have the highest and lowest proportions of alternative splicing? Do tissues differ in their usage of different AS types, such as exon skipping, alternative 5' splice site choice or alternative 3' splice site choice? 
Which tissues are most distinct from other tissues in the spectrum of alternative mRNA isoforms they express? And to what extent do expression levels of known splicing factors explain AS patterns in different tissues? Here, we describe an initial effort to answer these questions using a large-scale computational analysis of ESTs derived from about two dozen human tissues, which were aligned to the assembled human genome sequence to infer patterns of AS occurring in thousands of human genes. Our results distinguish specific tissues as having high levels and distinctive patterns of AS, identify pronounced differences between the proportions of alternative 5' splice site and alternative 3' splice site usage between tissues, and predict candidate cis -regulatory elements and trans -acting factors involved in tissue-specific AS. Results and discussion Variation in the levels of alternative splicing in different human tissues Alternative splicing events are commonly distinguished in terms of whether mRNA isoforms differ by inclusion or exclusion of an exon, in which case the exon involved is referred to as a 'skipped exon' (SE) or 'cassette exon', or whether isoforms differ in the usage of a 5' splice site or 3' splice site, giving rise to alternative 5' splice site exons (A5Es) or alternative 3' splice site exons (A3Es), respectively (depicted in Figure 1 ). These descriptions are not necessarily mutually exclusive; for example, an exon can have both an alternative 5' splice site and an alternative 3' splice site, or have an alternative 5' splice site or 3' splice site but be skipped in other isoforms. A fourth type of alternative splicing, 'intron retention', in which two isoforms differ by the presence of an unspliced intron in one transcript that is absent in the other, was not considered in this analysis because of the difficulty in distinguishing true intron retention events from contamination of the EST databases by pre-mRNA or genomic sequences. The presence of these and other artifacts in EST databases are important caveats to any analysis of EST sequence data. Therefore, we imposed stringent filters on the quality of EST to genomic alignments used in this analysis, accepting only about one-fifth of all EST alignments obtained (see Materials and methods). To determine whether differences occur in the proportions of these three types of AS events across human tissues, we assessed the frequencies of genes containing skipped exons, alternative 3' splice site exons or alternative 5' splice site exons for 16 human tissues (see Figure 1 for the list of tissues) for which sufficiently large numbers of EST sequences were available. Because the availability of a larger number of ESTs derived from a gene increases the chance of observing alternative isoforms of that gene, the proportion of AS genes observed in a tissue will tend to increase with increasing EST coverage of genes [ 10 , 31 ]. Since the number of EST sequences available differs quite substantially among human tissues (for example, the dbEST database contains about eight times more brain-derived ESTs than heart-derived ESTs), in order to compare the proportion of AS in different tissues in an unbiased way, we used a sampling strategy that ensured that all genes/tissues studied were represented by equal numbers of ESTs. 
It is important to point out that our analysis does not make use of the concept of a canonical transcript for each gene because it is not clear that such a transcript could be chosen objectively or that this concept is biologically meaningful. Instead, AS events are defined only through pairwise comparison of ESTs. Our objective was to control for differences in EST abundance across tissues while retaining sufficient power to detect a reasonable fraction of AS events. For each tissue we considered genes that had at least 20 aligned EST sequences derived from human cDNA libraries specific to that tissue ('tissue-derived' ESTs). For each such gene, a random sample of 20 of these ESTs was chosen (without replacement) to represent the splicing of the given gene in the given human tissue. For the gene and tissue combinations included in this analysis, the median number of EST sequences per gene was not dramatically different between tissues, ranging from 25 to 35 (see Additional data file 1). The sampled ESTs for each gene were then compared to each other to identify AS events occurring within the given tissue (see Materials and methods). The random sampling was repeated 20 times and the mean fraction of AS genes observed in these 20 trials was used to assess the fraction of AS genes for each tissue (Figure 1a ). Different random subsets of a relatively large pool will have less overlap in the specific ESTs chosen (and therefore in the specific AS events detected) than for random subsets of a smaller pool of ESTs, and increased numbers of ESTs give greater coverage of exons. However, there is no reason that the expected number of AS events detected per randomly sampled subset should depend on the size of the pool the subset was chosen from. While the error (standard deviation) of the measured AS frequency per gene should be lower when restricting to genes with larger minimum pools of ESTs, such a restriction would not change the expected value. Unfortunately, the reduction in error of the estimated AS frequency per gene is offset by an increase in the expected error of the tissue-level AS frequency resulting from the use of fewer genes. The inclusion of all genes with at least 20 tissue-derived ESTs represents a reasonable trade-off between these factors. The human brain had the highest fraction of AS genes in this analysis (Figure 1a ), with more than 40% of genes exhibiting one or more AS events, followed by the liver and testis. Previous EST-based analyses have identified high proportions of splicing in human brain and testis tissues [ 29 , 30 , 32 ]. These studies did not specifically control for the highly unequal representation of ESTs from different human tissues. As larger numbers of ESTs increase the chance of observing a larger fraction of the expressed isoforms of a gene, the number of available ESTs has a direct impact on estimated proportions of AS, as seen previously in analyses comparing the levels of AS in different organisms [ 31 ]. Thus, the results obtained in this study confirm that the human brain and testis possess an unusually high level of AS, even in the absence of EST-abundance advantages over other tissues. We also observe a high level of AS in the human liver, a tissue with much lower EST coverage, where higher levels of AS have been previously reported in cancerous cells [ 33 , 34 ]. Human muscle, uterus, breast, stomach and pancreas had the lowest levels of AS genes in this analysis (less than 25% of genes). 
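A compact Python sketch of this sampling scheme may help; `detect_as` below is a hypothetical stand-in for the pairwise EST comparison used to call an AS event, and the constants mirror the text (20 ESTs sampled without replacement, 20 random trials):

import random

def tissue_as_fraction(ests_by_gene, detect_as, sample_size=20, n_trials=20):
    # Keep genes represented by at least `sample_size` tissue-derived ESTs
    eligible = {g: e for g, e in ests_by_gene.items() if len(e) >= sample_size}
    fractions = []
    for _ in range(n_trials):
        n_as = sum(
            1 for ests in eligible.values()
            if detect_as(random.sample(ests, sample_size))  # without replacement
        )
        fractions.append(n_as / len(eligible))
    return sum(fractions) / n_trials  # mean AS fraction over the random trials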
Lowering the minimum EST count for inclusion in this analysis from 20 to 10 ESTs, and sampling 10 (out of 10 or more) ESTs to represent each gene in each tissue, did not alter the results qualitatively (data not shown). Differences in the levels of exon skipping in different tissues Alternatively spliced genes in this analysis exhibited on average between one and two distinct AS exons. Analyzing the different types of AS events separately, we found that the human brain and testis had the highest levels of skipped exons, with more than 20% of genes containing SEs (Figure 1b ). The high level of skipped exons observed in the brain is consistent with previous analyses [ 29 , 30 , 32 ]. At the other extreme, the human ovary, muscle, uterus and liver had the lowest levels of skipped exons (about 10% of genes). An example of a conserved exon-skipping event observed in human and mouse brain tissue is shown in Figure 2a for the human fragile X mental retardation syndrome-related ( FXR1 ) gene [ 35 , 36 ]. In this event, skipping of the exon alters the reading frame of the downstream exon, presumably leading to production of a protein with an altered and truncated carboxy terminus. The exon sequence is perfectly conserved between the human and mouse genomes, as are the 5' splice site and 3' splice site sequences (Figure 2a ), suggesting that this AS event may have an important regulatory role [ 37 - 39 ]. Differences in the levels of alternative splice site usage in different tissues Analyzing the proportions of AS events involving the usage of A5Es and A3Es revealed a very different pattern (Figure 1c,d ). Notably, the fraction of genes containing A3Es was more than twice as high in the liver as in any other human tissue studied (Figure 1d ), and the level of A5Es was also about 40-50% higher in the liver than in any other tissue (Figure 1c ). The tissue with the second highest level of alternative usage for both 5' splice sites and 3' splice sites was the brain. Another group of human tissues including muscle, uterus, breast, pancreas and stomach - similar to the low SE frequency group above - had the lowest level of A5Es and A3Es (less than 5% of genes in each category). Thus, a picture emerges in which certain human tissues such as muscle, uterus, breast, pancreas and stomach, have low levels of AS of all types, whereas other tissues, such as the brain and testis, have relatively high levels of AS of all types and the liver has very high levels of A3Es and A5Es, but exhibits only a modest level of exon skipping. To our knowledge, this study represents the first systematic analysis of the proportions of different types of AS events occurring in different tissues. Repeating the analyses by removing ESTs from disease-associated tissue libraries, using available library classifications [ 40 ], gave qualitatively similar results (see Additional data files 2, 3, and 4). These data show that ESTs derived from diseased tissues show modestly higher frequencies of exon skipping, but the relative rankings of tissues remain similar. The fractions of genes containing A5Es and A3Es were not changed substantially when diseased-tissue ESTs were excluded. From the set of genes with at least 20 human liver-derived ESTs, this analysis identified a total of 114 genes with alternative 5' splice site and/or 3' splice site usage in the liver. 
Those genes in this set that were named, annotated and for which the consensus sequences of the alternative splice sites were conserved in the orthologous mouse gene (see Materials and methods) are listed in Table 1 . Of course, conservation of splice sites alone is necessary, but not sufficient by itself, to imply conservation of the AS event in the mouse. Many essential liver metabolic and detoxifying enzyme-coding genes appear on this list, including enzymes involved in sugar metabolism (for example, ALDOB , IDH1 ), protein and amino acid metabolism (for example, BHMT, CBP2 , TDO2 , PAH , GATM ), detoxification or breakdown of drugs and toxins (for example, GSTA3 , CYP3A4 , CYP2C8 ). Sequences and splicing patterns for two of these genes for which orthologous mouse exons/genes and transcripts could be identified - the genes BHMT and CYP2C8 - are shown in detail in Figure 2b,c . In the event depicted for BHMT , the exons involved are highly conserved between the human and mouse orthologs (Figure 2b ), consistent with the possibility that the splicing event may have a (conserved) regulatory role. This AS event preserves the reading frame of downstream exons, so the two isoforms are both likely to produce functional proteins, differing by the insertion/deletion of 23 amino acids. In the event depicted for CYP2C8 , usage of an alternative 3' splice site removes 71 nucleotides, shifting the reading frame and leading to a premature termination codon in the exon (Figure 2c ). In this case, the shorter alternative transcript is a potential substrate for nonsense-mediated decay [ 41 , 42 ] and the AS event may be used to regulate the level of functional mRNA/protein produced. Differences in splicing factor expression between tissues To explore the differences in splicing factor expression in different tissues, available mRNA expression data was obtained from two different DNA microarray studies [ 43 - 45 ]. For this trans -factor analysis, we obtained a list of 20 splicing factors of the SR, SR-related and hnRNP protein families from proteomic analyses of the human spliceosome [ 46 - 48 ] (see Materials and methods for the list of genes). The variation in splicing-factor expression between pairs of tissues was studied by computing the Pearson (product-moment) correlation coefficient ( r ) between the 20-dimensional vectors of splicing-factor expression values between all pairs of 26 human tissues. The DNA microarray studies analyzed 10 tissues in addition to the 16 previously studied (Figure 3 ). A low value of r between a pair of tissues indicates a low degree of concordance in the relative mRNA expression levels across this set of splicing factors, whereas a high value of r indicates strong concordance. While most of the tissues examined showed a very high degree of correlation in the expression levels of the 20 splicing factors studied (typically with r > 0.75; Figure 3 ), the human adult liver was clearly an outlier, with low concordance in splicing-factor expression to most other tissues (typically r < 0.6, and often much lower). The unusual splicing-factor expression in the human liver was seen consistently in data from two independent DNA microarray studies using different probe sets (compare the two halves of Figure 3 ). The low correlation observed between liver and other tissues in splicing factor expression is statistically significant even relative to arbitrary collections of 20 genes (see Additional data file 8). 
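A minimal sketch of this concordance calculation, assuming a mapping `expression` from each tissue to its vector of 20 splicing-factor expression values in a fixed factor order:

import numpy as np

def pairwise_factor_correlations(expression):
    tissues = sorted(expression)
    corr = {}
    for i, a in enumerate(tissues):
        for b in tissues[i + 1:]:
            # Pearson r between the two 20-dimensional expression vectors
            corr[(a, b)] = np.corrcoef(expression[a], expression[b])[0, 1]
    return corr  # low r for a tissue pair flags discordant factor expression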
Examining the relative levels of specific splicing factors in the human adult liver versus other tissues, the relative level of SRp30c message was consistently higher in the liver and the relative levels of SRp40, hnRNP A2/B1 and SRp54 messages were consistently lower. A well established paradigm in the field of RNA splicing is that usage of alternative splice sites is often controlled by the relative concentrations of specific SR proteins and hnRNP proteins [ 49 - 52 ]. This functional antagonism between particular SR and hnRNP proteins is often due to competition for binding of nearby sites on pre-mRNAs [ 49 , 53 , 54 ]. Therefore, it seems likely that the unusual patterns of expression seen in the human adult liver for these families of splicing factors may contribute to the high level of alternative splice site usage seen in this tissue. It is also interesting that splicing-factor expression in the human fetal liver is highly concordant with most other tissues, but has low concordance with the adult liver (Figure 3 ). This observation suggests that substantial changes in splicing-factor expression may occur during human liver development, presumably leading to a host of changes in the splicing patterns of genes expressed in human liver. Currently available EST data were insufficient to allow systematic analysis of the patterns of AS in fetal relative to adult liver. An important caveat to these results is that the DNA microarray data used in this analysis measure mRNA expression levels rather than protein levels or activities. The relation between the amount of mRNA expressed from a gene and the concentration of the corresponding protein has been examined previously in several studies in yeast as well as in human and mouse liver tissues [ 55 - 58 ]. These studies have generally found that mRNA expression levels correlate positively with protein concentrations, but with fairly wide divergences for a significant fraction of genes. Over-represented motifs in alternative exons in the human brain, testis and liver The unusually high levels of alternative splicing seen in the human brain, testis and liver prompted us to search for candidate tissue-specific splicing regulatory motifs in AS exons in genes expressed in each of these tissues. Using a procedure similar to Brudno et al . [ 59 ], sequence motifs four to six bases long that were significantly enriched in exons skipped in AS genes expressed in the human brain relative to constitutive exons in genes expressed in the brain were identified. These sequences were then compared to each other and grouped into seven clusters, each of which shared one or two four-base motifs (Table 2 ). The motifs in cluster BR1 (CUCC, CCUC) resemble the consensus binding site for the polypyrimidine tract-binding protein (PTB), which acts as a repressor of splicing in many contexts [ 60 - 63 ]. A similar motif (CNCUCCUC) has been identified in exons expressed specifically in the human brain [ 29 ]. The motifs in cluster BR7 (containing UAGG) are similar to the high-affinity binding site UAGGG [A/U], identified for the splicing repressor protein hnRNP A1 by SELEX experiments [ 64 ]. The consensus sequences for the remaining clusters BR2 to BR6 (GGGU, UGGG, GGGA, CUCA, UAGC, respectively), as well as BR7, all resembled motifs identified in a screen for exonic splicing silencers (ESSs) in cultured human cells (Z.
Wang and C.B.B., unpublished results), suggesting that most or all of the motifs BR1 to BR7 represent sequences directly involved in mediating exon skipping. In particular, G-rich elements, which are known to act as intronic splicing enhancers [ 65 , 66 ], may function as silencers of splicing when present in an exonic context. A comparison of human testis-derived skipped exons to exons constitutively included in genes expressed in the testis identified only a single cluster of sequences, TE1, which share the tetramer UAGG. Enrichment of this motif, common to the brain-specific cluster BR7, suggests a role for regulation of exon skipping by hnRNP A1 - or a trans- acting factor with similar binding preferences - in the testis. Alternative splice site usage gives rise to two types of exon segments - the 'core' portion common to both splice forms and the 'extended' portion that is present only in the longer isoform. Two clusters of sequence motifs enriched in the core sequences of A5Es in genes expressed in the liver relative to the core segments of A5Es resulting from alignments of non-liver-derived ESTs were identified - LI1 and LI2. Both are adenosine-rich, with consensus tetramers AAAC and UAAA, respectively. The former motif matches a candidate ESE motif identified previously using the computational/experimental RESCUE-ESE approach (motif 3F with consensus [AG]AA [AG]C) [ 19 ]. The enrichment of a probable ESE motif in exons exhibiting alternative splice site usage in the liver is consistent with the model that such splicing events are often controlled by the relative levels of SR proteins (which bind many ESEs) and hnRNP proteins. Insufficient data were available for the analysis of motifs in the extended portions of liver A5Es (which tend to be significantly shorter than the core regions) or for the analysis of liver A3Es. A measure of dissimilarity between mRNA isoforms To quantify the differences in splicing patterns between mRNAs or ESTs derived from a gene locus, a new measure called the splice junction difference ratio (SJD) was developed. For any pair of mRNAs/ESTs that align to overlapping portions of the same genomic locus, the SJD is defined as the proportion of splice junctions present in both transcripts that differ between them, including only those splice junctions that occur in regions of overlap between the transcripts (Figure 4 ). The SJD varies between zero and one, with a value of zero for any pair of transcripts that have identical splice junctions in the overlapping region (for example, transcripts 2 and 5 in Figure 4 , or for two identical transcripts), and has a value of 1.0 for two transcripts whose splice junctions are completely different in the regions where they overlap (for example, transcripts 1 and 2 in Figure 4 ). For instance, transcripts 2 and 3 in Figure 4 differ in the 3' splice site used in the second intron, yielding an SJD value of 2/4 = 0.5, whereas transcripts 2 and 4 differ by skipping/inclusion of an alternative exon, which affects a larger fraction of the introns in the two transcripts and therefore yields a higher SJD value of 3/5 = 0.6. The SJD value can be generalized to compare splicing patterns between two sets of transcripts from a gene - for example, to compare the splicing patterns of the sets of ESTs derived from two different tissues. 
In this case, the SJD is defined by counting the number of splice junctions that differ between all pairs of transcripts ( i , j ), with transcript i coming from set 1 (for example, heart-derived ESTs), and transcript j coming from set 2 (for example, lung-derived ESTs), and dividing this number by the total number of splice junctions in all pairs of transcripts compared, again considering only those splice junctions that occur in regions of overlap between the transcript pairs considered. Note that this definition has the desirable property that pairs of transcripts that have larger numbers of overlapping splice junctions contribute more to the total than transcript pairs that overlap less. As an example of the splice junction difference between two sets of transcripts, consider the set S 1, consisting of transcripts (1,2) from Figure 4 , and set S 2, consisting of transcripts (3,4) from Figure 4 . Using the notation introduced in Figure 4 , SJD( S 1, S 2) = d ( S 1, S 2) / t ( S 1, S 2) = [ d (1,3) + d (1,4) + d (2,3) + d (2,4)]/ [ t (1,3) + t (1,4) + t (2,3) + t (2,4)] = [3 + 4 + 2 + 3]/ [3 + 4 + 4 + 5] = 12/16 = 0.75, reflecting a high level of dissimilarity between the isoforms in these sets, whereas the SJD falls to 0.57 for the more similar sets S 1 = transcripts (1,2) versus S 3 = transcripts (2,3). Note that in cases where multiple similar/identical transcripts occur in a given set, the SJD measure effectively weights the isoforms by their abundance, reflecting an average dissimilarity when comparing randomly chosen pairs of transcripts from the two tissues. For example, the SJD computed for the set S 4 = (1,2,2,2,2), that is, one transcript aligning as transcript 1 in Figure 4 and four transcripts aligning as transcript 2, and the set S 5 = (2,2,2,2,3) is 23/95 = 0.24, substantially lower than the SJD value for sets S 1 versus S 3 above, reflecting the higher fraction of identically spliced transcripts between sets S 4 and S 5. Global comparison of splicing patterns between tissues To make a global comparison of patterns of splicing between two different human tissues, a tissue-level SJD value was computed by comparing the splicing patterns of ESTs from all genes for which at least one EST was available from cDNA libraries representing both tissues. The 'inter-tissue' SJD value is then defined as the ratio of the sum of d ( S A , S B ) values for all such genes, divided by the sum of t ( S A , S B ) values for all of these genes, where S A and S B refer to the set of ESTs for a gene derived from tissues A and B, respectively, and d ( S A , S B ) and t ( S A , S B ) are defined in terms of comparison of all pairs of ESTs from the two sets as described above. This analysis uses all available ESTs for each gene in each tissue (rather than samples of a fixed size). A large SJD value between a pair of tissues indicates that mRNA isoforms of genes expressed in the two tissues tend to be more dissimilar in their splicing patterns than is the case for two tissues with a smaller inter-tissue SJD value. This definition puts greater weight on those genes for which more ESTs are available. The SJD values were then used to globally assess tissue-level differences in alternative splicing. A set of 25 human tissues for which at least 20,000 genomically aligned ESTs were available was compiled for this comparison (see Materials and methods) and the SJD values were then computed between all pairs of tissues in this set (Figure 5a ). 
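A minimal Python sketch of the SJD calculation may be useful here. Each transcript is reduced to its set of splice junctions within the region of overlap; the junction labels below are schematic stand-ins chosen so that the pairwise (d, t) values match the worked example from Figure 4 (the bookkeeping for partial transcript overlap is abstracted away):

def sjd_pair(a, b):
    d = len(a ^ b)       # junctions present in one transcript but not the other
    t = len(a) + len(b)  # total junctions compared
    return d, t

def sjd_sets(s1, s2):
    d_sum = sum(sjd_pair(a, b)[0] for a in s1 for b in s2)
    t_sum = sum(sjd_pair(a, b)[1] for a in s1 for b in s2)
    return d_sum / t_sum

# Schematic labels reproducing the worked values d(1,3)=3, d(1,4)=4,
# d(2,3)=2, d(2,4)=3 and t(1,3)=3, t(1,4)=4, t(2,3)=4, t(2,4)=5:
t1, t2, t3, t4 = {"a"}, {"b", "f"}, {"b", "c"}, {"b", "d", "e"}
print(sjd_sets([t1, t2], [t3, t4]))                # 12/16 = 0.75
print(sjd_sets([t1, t2], [t2, t3]))                # 8/14  ~ 0.57
print(sjd_sets([t1] + [t2] * 4, [t2] * 4 + [t3]))  # 23/95 ~ 0.24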
A clustering of human tissues on the basis of their inter-tissue SJD values (Figure 5b ) identified groups of tissues that cluster together very closely (for example, the ovary/thyroid/breast cluster, the heart/lymph cluster and the bone/B-cell cluster), while other tissues including the brain, pancreas, liver, peripheral nervous system (PNS) and placenta occur as outgroups. These results complement a previous clustering analysis based on data from microarrays designed to detect exon skipping [ 24 ]. Calculating the mean SJD value for a given tissue when compared to the remaining 24 tissues (Figure 5c ) identified a set of human tissues including the ovary, thyroid, breast, heart, bone, B-cell, uterus, lymph and colon that have 'generic' splicing patterns which are more similar to most other tissues. As expected, many of these tissues with generic splicing patterns overlap with the set of tissues that have low levels of AS (Figure 1 ). On the other hand, another group of tissues including the human brain, pancreas, liver and peripheral nervous system, have highly 'distinctive' splicing patterns that differ from most other tissues (Figure 5c ). Many of these tissues were identified as having high proportions of AS in Figure 1 . Taken together, these observations suggest that specific human tissues such as the brain, testis and liver, make more extensive use of AS in gene regulation and that these tissues have also diverged most from other tissues in the set of spliced isoforms they express. Although we are not aware of reliable, quantitative data on the relative abundance of different cell types in these tissues, a greater diversity of cell types is likely to contribute to higher SJD values for many of these tissues. Conclusions The systematic analysis of transcripts generated from the human genome is just beginning, but promises to deepen our understanding of how changes in the program of gene expression contribute to development and differentiation. Here, we have observed pronounced differences between human tissues in the set of alternative mRNA isoforms that they express. Because our approach normalizes the EST coverage per gene in each tissue, there is higher confidence that these differences accurately reflect differences in splicing patterns between tissues. As human tissues are generally made up of a mixture of cell types, each of which may have its own unique pattern of gene expression and splicing, it will be important in the future to develop methods for systematic analysis of transcripts in different human cell types. Understanding the mechanisms and regulatory consequences of AS will require experimental and computational analyses at many levels. At its core, AS involves the generation of alternative transcripts mediated by interactions between cis -regulatory elements in exons or introns and trans -acting splicing factors. The current study has integrated these three elements, inferring alternative transcripts from EST-genomic alignments, identifying candidate regulatory sequence motifs enriched in alternative exons from different tissues, and analyzing patterns of splicing-factor expression in different tissues. Our results emphasize differences in the frequencies of exon skipping versus alternative splice site usage in different tissues and highlight the liver, brain and testis as having particularly high levels of AS, supporting the idea that tissue-regulated AS plays important roles in the differentiation of these tissues. 
The high levels of alternative splice site usage in the liver may relate to the unusual patterns of splicing-factor expression observed in the adult liver, suggesting aspects of developmental regulation of AS at the tissue level. Obtaining a more comprehensive picture of AS will require the integration of additional types of data upstream and downstream of these core interactions. Upstream, splicing factors themselves may be differentially regulated in different tissues or in response to different stimuli at the level of transcription, splicing, or translation, and are frequently regulated by post-translational modifications such as phosphorylation, so systematic measurements of splicing factor levels and activities will be required. Downstream, AS may affect the stability of alternative transcripts (for example, in cases of messages subject to nonsense-mediated mRNA decay), and frequently alters functional properties of the encoded proteins, so systematic measurements of AS transcript and protein isoforms and functional assays will also be needed to fully understand the regulatory consequences of AS events. Ultimately, it will be important to place regulatory events involving AS into the context of regulatory networks involving control at the levels of transcription, translation and post-translational modifications. Materials and methods Data and resources Chromosome assemblies of the human genome (hg13) were obtained from public databases [ 67 ]. Transcript databases included approximately 94,000 human cDNA sequences obtained from GenBank (release 134.0, gbpri and gbhtc categories), and approximately 5 million human expressed sequence tags (ESTs) from dbEST (repository 02202003). Human ESTs were designated according to their cDNA library source (in total about 800) into different tissue types. Pertinent information about cDNA libraries and the corresponding human tissue or cell line was extracted from dbEST and subsequently integrated with library information retrieved from the Mammalian Gene Collection Initiative (MGC) [ 68 ], the Integrated Molecular Analysis of Gene Expression Consortium (IMAGE) [ 69 ] and the Cancer Genome Anatomy Project (CGAP) [ 70 ]. Library information obtained from MGC, IMAGE and CGAP is provided in Additional data file 5. Genome annotation by alignment of spliced transcripts The GENOA genome annotation script [ 71 ] was used to align spliced cDNA and EST sequences to the human genome. GENOA uses BLASTN to detect significant blocks of identity between repeat-masked cDNA sequences and genomic DNA, and then aligns cDNAs to the genomic loci identified by BLASTN using the spliced-alignment algorithm MRNAVSGEN [ 71 ]. This algorithm is similar in concept to SIM4 [ 72 ] but was developed specifically to align high-quality cDNAs rather than ESTs and thus requires higher alignment quality (at least 93% identity) and consensus terminal dinucleotides at the ends of all introns (that is, GT..AG, GC..AG or AT..AC). EST sequences were aligned using SIM4 to those genomic regions that had aligned cDNAs. Stringent alignment criteria were imposed: ESTs were required to overlap cDNAs (so that all the genes studied were supported by at least one cDNA-genomic alignment); the first and last aligned segments of ESTs were required to be at least 30 nucleotides in length, with at least 90% sequence identity; and the entire EST sequence alignment was required to extend over at least 90% of the length of the EST with at least 90% sequence identity. 
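These acceptance criteria reduce to a simple predicate. The following Python sketch uses hypothetical field names for one spliced EST-to-genome alignment; it is an illustration of the thresholds just listed, not the GENOA implementation:

from dataclasses import dataclass

@dataclass
class EstAlignment:
    first_seg_len: int        # length of the first aligned segment (nt)
    first_seg_identity: float
    last_seg_len: int         # length of the last aligned segment (nt)
    last_seg_identity: float
    aligned_fraction: float   # aligned length / EST length
    overall_identity: float

def passes_filter(a: EstAlignment) -> bool:
    return (
        a.first_seg_len >= 30 and a.first_seg_identity >= 0.90
        and a.last_seg_len >= 30 and a.last_seg_identity >= 0.90
        and a.aligned_fraction >= 0.90 and a.overall_identity >= 0.90
    )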
In total, GENOA aligned about 85,900 human cDNAs and about 890,300 ESTs to the human genome. The relatively low fraction of aligned ESTs (about 18%), and average aligned length of about 550 bases (the average lengths were not significantly different between different tissues, see Additional data file 6), reflect the stringent alignment-quality criteria that were imposed so as to be as confident as possible in the inferred splicing patterns. The aligned sequences yielded about 17,800 gene regions with more than one transcript aligned that exhibited a multi-exon structure. Of these, about 60% exhibited evidence of alternative splicing of internal exons. Our analysis did not examine differences in 3'-terminal and 5'-terminal exons, inclusion of which is frequently dictated by alternative polyadenylation or alternative transcription start sites and therefore does not represent 'pure' AS [ 73 , 74 ]. The EST alignments were then used to categorize all internal exons as: constitutive exons; A3Es, A5Es, skipped exons, multiply alternatively spliced exons (for example, exons that exhibited both skipping and alternative 5' splice site usage), and exons that contained retained introns. An internal exon present in at least one transcript was identified as a skipped exon if it was precisely excluded in one or more other transcripts, such that the boundaries of both the 5' and 3' flanking exons were the same in the transcripts that included and skipped the exon (for example, exon E3 in Figure 1 ). Similarly, an internal exon present in at least one transcript was identified as an A3E or A5E if at least one other transcript contained an exon differing in length by the use of an alternative 3' splice site or 5' splice site. The 'core' of an A3E or A5E is defined as the exon portion that is common to all transcripts used to infer the AS event. The extension of an alternatively spliced exon is the exon portion added to the core region by the use of an alternative 3' splice site or 5' splice site that is present in some, but not all transcripts used to infer the AS event. Pairs of inferred A3Es or A5Es differing by fewer than six nucleotides were excluded from further analysis, as in [ 8 ], because of the possibility that such small differences might sometimes result from EST sequencing or alignment errors. As the frequency of insertion-deletion errors greater than three bases using modern sequencing techniques is vanishingly small (P. Green, personal communication), a six-base cutoff should exclude the vast majority of such errors. Alternatively spliced exons/genes identified in specific tissues are available for download from the GENOA website [ 71 ]. Quantifying splice junction differences between alternative mRNA isoforms To quantify the difference in splicing patterns between mRNAs or ESTs derived from a gene locus, the splice junction difference ratio (SJD) was calculated. For any pair of mRNAs/ESTs that have been aligned to overlapping portions of a genomic locus, the SJD is defined as the fraction of the splice junctions that occur in overlapping portions of the two transcripts that differ in one or both splice sites. A sample calculation is given in Figure 4 . The SJD measure was calculated by taking the ratio of the number of 'valid' splice junctions that differ between two sequences over the total number of splice junctions, when comparing a pair of ESTs across all splice junctions present in overlapping portions of the two transcripts.
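As an aside, the skipped-exon definition above reduces to a small test. This simplified Python sketch treats transcripts as ordered lists of genomic (start, end) exon coordinates and checks only the flanking splice-site boundaries (a deliberate simplification of the full categorization):

def is_skipped(inclusive, exclusive, exon):
    i = inclusive.index(exon)
    if i == 0 or i == len(inclusive) - 1:
        return False  # only internal exons are considered
    up, down = inclusive[i - 1], inclusive[i + 1]
    # Skipped: the exon is absent from the other transcript, while the
    # flanking exon boundaries are preserved and joined by a single junction.
    return exon not in exclusive and any(
        a[1] == up[1] and b[0] == down[0]
        for a, b in zip(exclusive, exclusive[1:])
    )

tx_inc = [(100, 200), (300, 360), (500, 600)]
tx_exc = [(100, 200), (500, 600)]
print(is_skipped(tx_inc, tx_exc, (300, 360)))  # True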
A splice junction was considered valid if: the 5' splice site and the 3' splice site satisfied either the GT..AG or the GC..AG dinucleotide sequences at exon-intron junctions; and if the splice junction was observed at least twice in different transcripts. Identification of candidate splicing regulatory motifs Over-represented sequence motifs ( k -mers) were identified by comparing the number of occurrences of k -mers (for k in the range of 4 to 6 bases) in a test set of alternative exons versus a control set. In this analysis, monomeric tandem repeats (for example, poly(A) sequences) were excluded. The enrichment score of candidate k -mers in the test set versus the control set was evaluated by computing χ 2 (chi-squared) values with a Yates correction term [ 75 ], using an approach similar in spirit to that described by Brudno et al . [ 59 ]. We randomly sampled 500 subsets of the same size as the test set from the control set. The enrichment scores for k -mers over-represented in the sampled subset versus the remainder of the control set were computed as above. The estimated p -value for observing the given enrichment score (χ 2 -value) associated with an over-represented sequence motif of length k was defined as the fraction of subsets that contained any k -mer with enrichment score (χ 2 -value) higher than the tested motif. Correcting for multiple testing is not required as the p -value was defined relative to the most enriched k -mer for each sampled set. For sets of skipped exons from human brain- and testis-derived EST sequences, the test sets comprised 1,265 and 517 exons skipped in brain- and testis-derived ESTs, respectively, and the control sets comprised 12,527 and 8,634 exons constitutively included in human brain- and testis-derived ESTs, respectively. Candidate sequence motifs in skipped exons from brain and testis-derived ESTs with associated p -values less than 0.002 were retained. For the set of A5E and A3E events from human liver-derived EST sequences, the test set comprised 44 A3Es and 45 A5Es, and the control set comprised 1,619 A3Es and 1,481 A5Es identified using ESTs from all tissues excluding liver. In this analysis, A3Es and A5Es with extension sequences of less than 25 bases were excluded and sequences longer than 150 bases were truncated to 150 bases, by retaining the exon sequence segment closest to the internal alternative splice junction. Over-represented sequence motifs in A3Es and A5Es from liver-derived EST sequences with associated p -values less than 0.01 were retained. Gene-expression analysis of trans -acting splicing factors SR proteins, SR-related proteins, and hnRNPs were derived from published proteomic analyses of the spliceosome [ 46 - 48 ]. Expression values for these genes were obtained from the 'gene expression atlas' using the HG-U95A DNA microarray [ 43 ] and from a similar set of expression data using the HG-U133A DNA microarray [ 45 ]. Altogether, 20 splicing factors - ASF/SF2, SRm300, SC35, SRp40, SRp55, SRp30c, 9G8, SRp54, SFRS10, SRp20, hnRNPs A1, A2/B1, C, D, G, H1, K, L, M, and RALY - were studied in 26 different tissues present in both microarray experiments (Figure 3 ). The data from each gene chip - HG-U95A and HG-U133A - were analyzed separately. The average difference (AD) value of each probe was used as the indicator of expression level. In analyzing these microarray data, AD values smaller than 20 were standardized to 20, as in [ 43 ]. When two or more probes mapped to a single gene, the values from those probes were averaged.
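A sketch of this enrichment score in Python may clarify the procedure; scipy's chi2_contingency applies the Yates correction to 2x2 tables, and the occurrence counts below are invented for illustration:

from scipy.stats import chi2_contingency

def enrichment_score(kmer_test, other_test, kmer_ctrl, other_ctrl):
    # 2x2 table: occurrences of the k-mer vs. all other k-mer positions,
    # in the test set of alternative exons vs. the control set
    table = [[kmer_test, other_test], [kmer_ctrl, other_ctrl]]
    chi2, p, dof, expected = chi2_contingency(table, correction=True)
    return chi2

# The permutation p-value then compares this score against the maximum score
# seen in each of 500 random control subsets, as described in the text.
print(enrichment_score(150, 9850, 600, 99400))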
Pearson (product-moment) correlation coefficients between 20-dimensional vectors for all tissue pairs were calculated, using data from each of the two DNA microarray studies separately. Additional data files Additional data files containing the following supplementary data, tables and figures are available with the online version of this paper and from the GENOA genome annotation website [ 71 ]. The lists of GenBank accession numbers of human cDNAs and ESTs that were mapped to the human genome by the GENOA pipeline, GENOA gene locus identifiers, and gene loci with spliced alignments for the 22 human autosomes and two sex chromosomes are provided at our website [ 76 ]. Sets of constitutive and alternative exons in genes expressed in the human brain, testis and liver, and control sets used are also provided [ 77 ]. Additional data file 1 lists the average and median number of ESTs per gene and tissue, and the total number of genes per tissue using different minimum numbers of ESTs. Additional data file 2 lists the average total number of AS genes and AS genes containing SEs, A3Es and A5Es using ESTs derived from normal, non-diseased tissues. Additional data file 3 lists the number of constitutively spliced and AS genes, and AS genes containing SEs, A3Es and A5Es. Additional data file 4 shows the average fractions of AS genes and average fractions of AS genes containing SEs, A3Es and A5Es using ESTs derived from normal, non-disease-derived tissues. Additional data file 5 lists categories of cDNA libraries and designated tissues derived from the MGC, IMAGE and CGAP. Additional data file 6 shows the average lengths of ESTs that aligned to gene loci expressed in different tissues. Additional data file 7 lists human splicing factors of SR, SR-related and hnRNP genes, corresponding Ensembl gene numbers and Affymetrix microarray probe identification numbers. Additional data file 8 shows the distribution of the average Pearson correlation coefficient values across different tissues for expression levels of random sets of genes obtained from the Affymetrix microarray data. 
| /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC545594.xml |
544891 | Correlates of serum lipoprotein (A) in children and adolescents in the United States. The third National Health Nutrition and Examination Survey (NHANES-III) | Objective To determine the correlates of serum lipoprotein (a) (Lp(a)) in children and adolescents in the United States. Methods Cross-sectional study using representative data from a US national sample for persons aged 4–19 years participating in The Third National Health Nutrition and Examination Survey (NHANES-III). Results We observed ethnicity-related differences in levels of Lp(a) > 30 mg/dl, with values being markedly higher in African American (black) than nonhispanic white (white) and Mexican American children in a multivariate model ( P < 0.001). Higher levels of Lp(a) > 30 mg/dl were associated with parental history, body mass index and residence in metro compared to nonmetro areas in blacks, and with high birth weight in Mexican American children in the NHANES-III. In the entire group, total cholesterol (which included Lp(a)) and parental history of premature heart attack/angina before age 50 ( P < 0.02) showed consistent, independent, positive association with Lp(a). In subgroup analysis, this association was only evident in white ( P = 0.04) and black ( P = 0.05) children. However, no such collective consistent associations of Lp(a) were found with age, gender, or birth weight. Conclusion Ethnicity-related differences in mean Lp(a) exist among children and adolescents in the United States, and parental history of premature heart attack/angina was significantly associated with levels of Lp(a) in children. Further research on the associations of Lp(a) levels in childhood with subsequent risk of atherosclerosis is needed. | Introduction Levels of serum or plasma Lp(a) above 30 mg/dL are associated with increased risk of coronary artery disease and stroke in adults of European descent [ 1 - 3 ]. Given the high degree of structural homology of one of the domains of its apolipoprotein(a) component with plasminogen, one proposed mechanism is interference with thrombolysis [ 1 , 3 ]. Adults of African descent have mean levels of Lp(a) approximately twice those of Europeans but do not have commensurately increased risk of atherosclerotic disease; nor has Lp(a) been shown to be a coronary artery disease risk factor in blacks [ 4 , 5 ]. The explanations for the differential effects of Lp(a) on CVD risk among different populations are poorly understood. Because birth weight has been shown to influence levels of Lp(a) [ 6 ], and adverse patterns of blood lipids and atherosclerosis itself begin in childhood, studies of population and individual differences in the early onset and progression of risk factors through adolescence are important [ 7 ]. Given the reported contribution of intrinsic factors, family history, and environmental factors to the CVD risk in adults [ 8 - 10 ], the identification of inherited risk markers and environmental variables that may interact with levels of Lp(a) > 30 mg/dl to modify its influence on the development of atherosclerosis at an early age is therefore imperative. Few studies have examined the epidemiology of Lp(a) in representative samples of total populations of children and adolescents [ 11 , 12 ]. However, no study has examined whether the effects of inherited and acquired or environmental factors interact with Lp(a) > 30 mg/dl to cause differential attributable risk in different populations using data from a nationally representative sample of children in the US.
We utilized data from a national survey of over 30,000 persons age 1 year and older with extensive blood lipid data to examine correlates of Lp(a) in children and adolescents and specifically to determine whether: [ 1 ] ethnic differences in shape of Lp(a) distributions seen in adults are also seen as early as age 4 in children; [ 2 ] family history of cardiovascular disease is associated with higher levels of Lp(a); [ 3 ] the effects of ethnicity and family history of CVD on the levels of Lp(a) are influenced by low birth weight or other personal, behavioral or environmental variables. Methods Data for this analysis was obtained from The Third National Health and Nutrition Examination Survey (NHANES-III) conducted on a nationwide multi-stage probability sample of about 40,000 persons from the civilian, non-institutionalized population aged 2 months and over of the United States excluding reservation lands of American Indians. Of these, 31,311 were examined. Our analysis was restricted to children aged 4–11 years (518 whites, 877 blacks, and 685 Mexican Americans) and adolescents aged 12–19 years (336 whites, 665 blacks and 504 Mexican Americans) with valid Lp(a) measurements in Phase II of the survey conducted in 1991–1994. Details of the planning, sampling, operation, informed consent procedures, and measures taken to maintain confidentiality of information have been published previously [ 13 ]. Demographic, medical history and behavioral information were collected prior to the examination by household interview of the parents or guardians of children and of adolescents aged 12 and over. Parents of children aged 2 months to 11 years were asked "How much did the child weigh at birth?". Parents responding "don't know" were asked "Did the child weigh more than 5 1/2 pounds (2500 grams) or less?" Responders were then asked, "Did the child weigh more than 9 pounds (4100 grams) or less?" An approximate category of weight at birth was created by combining responses to exact birth weights and the latter two questions. Participants' parent or guardian was also asked, "Has either of the biological parents ever been told by a doctor that he or she had a) high blood pressure or stroke before age 50? b) heart attack or angina before the age of 50? c) high blood cholesterol at any age? d) diabetes at any age?" All "Yes" responses were followed by "Which, father, mother, or both?" Other interview variables are described elsewhere [ 13 ]. Blood samples were obtained at the examination centers [ 14 ]. A subsample of persons 12 years and over was asked to fast overnight for the examination of lipids in the morning. Lp(a) in serum was measured immunochemically by using an enzyme-linked immunosorbent assay (ELISA) (Strategic Diagnostics, Newark, DE) [ 14 ], which does not have cross-reactivity with plasminogen or LDL and is not sensitive to apo(a) size heterogeneity. The normal range was set at 0 to 30 mg/dL because concentrations above 30 mg/dL have been associated with increased risk for coronary heart disease and stroke [ 1 , 3 ] (plasma concentrations were 3% lower than serum concentrations). The quality control of the Lp(a) assay has been described in detail elsewhere [ 14 ]. Serum samples with Lp(a) > 80 mg/dL were diluted into the assay range with sample diluent. Serum total cholesterol was determined at the Centers for Disease Control using a modified ferric chloride technique (GFAA/Perkin-Elmer Model 3030 and 5100) [ 14 ].
High-density lipoprotein (HDL) was measured in serum following the precipitation of other lipoproteins with a polyanion/divalent cation mixture, and triglycerides were measured enzymatically with a Hitachi 704 autoanalyzer (Boehringer-Mannheim Diagnostics). LDL cholesterol level was calculated by the Friedewald equation for individuals 12 years and older who were examined in the morning and fasted 9 hours or more, and whose triglyceride concentration was less than or equal to 400 mg/dL. Because fasting was not required in children, LDL could be calculated on only 15% of this sample. In selected analyses, serum total cholesterol was corrected for Lp(a) cholesterol as follows: TCc = TC - Lp(a) × 0.30 [ 11 ]. Standing height was measured to the nearest 0.1 centimeter, weight to the nearest 0.01 kg, triceps, subscapular, suprailiac and mid-thigh skinfold thickness to the nearest 0.1 millimeter, and waist and buttocks circumference to the nearest 0.1 centimeter [ 15 , 16 ]. Statistical analysis Population estimates for many of the variables other than Lp(a) have been published by the National Center for Health Statistics [ 14 , 17 ]. Because body weight, family history, socio-economic factors including income, gender, ethnicity, birth weight and regional diversity have been shown to influence levels of Lp(a) and/or CVD risk in general [ 18 , 6 , 21 ], our analyses of the population estimates and correlates of Lp(a) were mindful of these factors. In order to ensure adequate weight for a given age group, and to examine pre-, peri- and post-pubertal effects on levels of Lp(a), the quintile distribution of age was used as a categorical variable. Detailed descriptive statistics and measures of association were computed initially using unweighted data. Kendall's nonparametric rank correlation was used to assess the association of Lp(a) with other variables and compared to the Pearson correlation [ 22 ]. To determine the influence of gender and ethnicity on the distribution of Lp(a), analysis of covariance was used to compute adjusted means for subjects within sex and ethnic categories, and to assess the statistical significance of differences of means among groups. Stepwise logistic multiple regression analysis was used to develop models for predicting Lp(a) > 30 mg/dL for each sex and ethnic group [ 22 ]. Only variables with pre-specified hypotheses and with statistically significant univariate correlation coefficients were eligible to enter the regression models. Following these preliminary analyses, preplanned hypotheses and major findings of the unweighted analyses were confirmed using techniques that incorporated sampling weights and design features of the survey [ 14 ]. Population estimates for mean Lp(a) and percentiles and statistical tests of weighted proportions were produced using Statistical Analysis System (SAS)-callable SUDAAN [ 23 ]. Chi-square analysis was used for comparisons of distributions of Lp(a), categorized into 10 mg/dL strata, between sex, ethnicity and age groups. Associations of Lp(a) with other variables were confirmed in the final weighted analysis, using the PROC LOGIST procedure in SUDAAN [ 23 ] with alpha set at < 0.05. Since substantial proportions of white and Mexican American children had undetectable Lp(a), log or other transformations could not produce an approximately normal distribution of Lp(a) for parametric analyses. Therefore, analytic results presented are primarily those using Lp(a) > 30 mg/dL as a categorized variable.
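The two lipid derivations above are simple enough to state as code. A small Python sketch, with units in mg/dL and the triglyceride restriction and Lp(a) correction factor taken from the text:

def friedewald_ldl(tc, hdl, tg):
    # Friedewald estimate of LDL cholesterol; valid only for fasting samples
    # with triglycerides <= 400 mg/dL
    if tg > 400:
        raise ValueError("Friedewald equation not valid for TG > 400 mg/dL")
    return tc - hdl - tg / 5

def lpa_corrected_tc(tc, lpa):
    # TCc = TC - Lp(a) x 0.30: total cholesterol corrected for the
    # cholesterol carried by Lp(a)
    return tc - 0.30 * lpa

print(friedewald_ldl(tc=180, hdl=50, tg=100))  # 110.0 (illustrative values)
print(lpa_corrected_tc(tc=180, lpa=40))        # 168.0 (illustrative values)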
Results Univariate Analyses Ethnicity Blacks had a higher median Lp(a) than whites, who had higher levels than Mexican Americans (Table 1). The difference was already apparent at ages 4–5 years. Further, the shape of the Lp(a) distribution differed markedly for blacks compared to the other ethnic groups, at each age and overall. Blacks had a bimodal distribution that was less skewed than that of whites or Mexican Americans (Figure 1). The percentage of children aged 4–19 with Lp(a) > 30 mg/dL was higher in blacks (54.3, SE 1.8) than in whites (20.3, SE 2.4) or Mexican Americans (16.8, SE 2.3), both overall (Chi square = 47.4, p < 0.001) and in each age group (Table 1).

Table 1 Selected percentiles of lipoprotein(a) distributions and prevalence of concentrations > 30 mg/dL in children and young adults aged 4–19 years by ethnic group and age: NHANES-III, 1988–1994. Percentile values are Lp(a) in mg/dL.

| Ethnic group | Age (yrs) | 5th | 10th | 50th | 90th | 95th | Percent > 30 mg/dL | N |
|---|---|---|---|---|---|---|---|---|
| Nonhispanic white | 4–5 | 0 | 0 | 7 | 38 | 62 | 15.0 | 214 |
| | 6–11 | 0 | 0 | 12 | 48 | 65 | 18.8 | 304 |
| | 12–15 | 0 | 0 | 10 | 48 | 56 | 20.2 | 187 |
| | 16–19 | 0 | 0 | 9 | 53 | 62 | 25.8 | 149 |
| Nonhispanic black | 4–5 | 2 | 6 | 31 | 75 | 94 | 52.5* | 303 |
| | 6–11 | 1 | 5 | 32 | 76 | 100 | 53.3* | 574 |
| | 12–15 | 0 | 5 | 33 | 77 | 95 | 56.4* | 358 |
| | 16–19 | 1 | 6 | 31 | 69 | 76 | 54.6* | 307 |
| Mexican American | 4–5 | 0 | 0 | 5 | 30 | 48 | 8.2 | 309 |
| | 6–11 | 0 | 0 | 9 | 45 | 62 | 21.0 | 376 |
| | 12–15 | 0 | 0 | 9 | 48 | 58 | 20.1 | 272 |
| | 16–19 | 0 | 0 | 8 | 36 | 52 | 11.4 | 232 |

* Indicates an unadjusted statistically significant difference in the prevalence of Lp(a) > 30 mg/dL between black and white children, and between black and Mexican American children, in the same age group. Significance level was set at < 0.05.

Figure 1 Percent frequency of Lp(a) (mg/dL) in children aged 4–16 years, by ethnic group, in the Third National Health and Nutrition Examination Survey, 1988–1994.

Age Table 3 shows percentiles by sex, age, and ethnic group. Among boys in all three ethnic groups, median Lp(a) was higher at age 6–11 than at age 4–5, then tended to decline slightly through age 16–19. Among girls, there was no consistent pattern for median Lp(a), which was highest at age 16–19 in whites, at age 12–15 in blacks, and at age 6–11 in Mexican Americans. The percentage with Lp(a) > 30 mg/dL varied significantly by age group only in Mexican Americans (Chi square = 10.6, P = 0.02) (Table 1).

Table 3 Selected percentiles of lipoprotein(a) distributions in children and young adults aged 4–19 years by sex, ethnic group, and age: NHANES-III, 1988–1994. Percentile values are Lp(a) in mg/dL.

| Sex | Ethnic group | Age (yrs) | 5th | 10th | 50th | 90th | 95th | N |
|---|---|---|---|---|---|---|---|---|
| Girls | Nonhispanic white | 4–5 | 0 | 0 | 7 | 35 | 62 | 113 |
| | | 6–11 | 0 | 0 | 9.5 | 34 | 60 | 146 |
| | | 12–15 | 0 | 0 | 8.5 | 47 | 55 | 106 |
| | | 16–19 | 0 | 0 | 13 | 54 | 61 | 84 |
| | Nonhispanic black | 4–5 | 1 | 7 | 32.5 | 74 | 100 | 146 |
| | | 6–11 | 0 | 3 | 31 | 72 | 98 | 288 |
| | | 12–15 | 2 | 9 | 34 | 78 | 102 | 191 |
| | | 16–19 | 1 | 5 | 30 | 71 | 80 | 166 |
| | Mexican American | 4–5 | 0 | 0 | 6 | 28 | 36 | 152 |
| | | 6–11 | 0 | 0 | 11 | 46 | 59 | 177 |
| | | 12–15 | 0 | 0 | 9 | 47 | 56.5 | 140 |
| | | 16–19 | 0 | 0 | 10 | 34 | 52 | 119 |
| Boys | Nonhispanic white | 4–5 | 0 | 0 | 7 | 38 | 62 | 101 |
| | | 6–11 | 0 | 0 | 17 | 63 | 75 | 158 |
| | | 12–15 | 0 | 0 | 11 | 49 | 56 | 81 |
| | | 16–19 | 0 | 0 | 7 | 38 | 65 | 65 |
| | Nonhispanic black | 4–5 | 2 | 5 | 31 | 75 | 94 | 157 |
| | | 6–11 | 2 | 6 | 34 | 78 | 105 | 286 |
| | | 12–15 | 0 | 2 | 32 | 75 | 78 | 167 |
| | | 16–19 | 0 | 9 | 32 | 66 | 75 | 141 |
| | Mexican American | 4–5 | 0 | 0 | 5 | 34 | 57 | 157 |
| | | 6–11 | 0 | 0 | 8 | 45 | 67 | 199 |
| | | 12–15 | 0 | 0 | 8.5 | 49 | 60 | 132 |
| | | 16–19 | 0 | 0 | 7 | 37 | 55 | 113 |

Gender Median Lp(a) did not differ consistently by gender across age or ethnic groups (Table 3). Similarly, the percentage with Lp(a) > 30 mg/dL did not differ significantly by gender within ethnic groups (Chi square = 0.003, P = 0.96). Birth Weight Birth weight by parental recall was available for children aged 4–11 years (Table 4). Lp(a) did not vary with birth weight in any ethnic group.
Further, in blacks, the percentage with Lp(a) > 30 mg/dL did not differ by birth weight category (52.5, 53.3, and 56.4, respectively). The small numbers of cases with both abnormal birth weight and Lp(a) > 30 mg/dL among whites and Mexican Americans precluded meaningful analysis.

Table 4 Median lipoprotein(a) and prevalence of concentrations > 30 mg/dL in children aged 4–11 years by ethnic group and birth weight: NHANES-III, 1988–1994.

| Ethnic group | Birth weight | Median Lp(a) (mg/dL) | Percent > 30 mg/dL | N |
|---|---|---|---|---|
| Nonhispanic white | <2500 g | 12 | 15.0 | 32 |
| | 2500–4100 g | 10 | 18.8 | 441 |
| | >4100 g | 7 | 20.2 | 43 |
| Nonhispanic black | <2500 g | 32 | 52.5 | 115 |
| | 2500–4100 g | 32 | 53.3 | 702 |
| | >4100 g | 29 | 56.4 | 44 |
| Mexican American | <2500 g | 3 | 8.2 | 45 |
| | 2500–4100 g | 8 | 21.0* | 573 |
| | >4100 g | 5 | 20.1* | 57 |

* Indicates a within-group statistically significant association of Lp(a) > 30 mg/dL with birth weight, using birth weight < 2500 g as the reference category. Significance level was set at < 0.05.

Family history In all groups combined (age range 4–16 years), the percentage with Lp(a) > 30 mg/dL was significantly higher among those with a parental history of heart attack/angina before age 50 years compared to those without (50.0 percent versus 30.2 percent; Chi square = 2.72, P = 0.011), whereas the percentage with Lp(a) > 30 mg/dL was similar among those with and without a parental history of diabetes or high cholesterol (Figure 2). Due to the small numbers of children with a parental history of heart attack within ethnic groups, the difference in the percentage of persons with Lp(a) > 30 mg/dL did not attain significance in within-group analyses: white children, 42.56% versus 18.36%, P = 0.23; black children, 72.37% versus 54.06%, P = 0.12; Mexican American children, 34.46% versus 18.26%, P = 0.33 (Table 2). In white children, the percentage was higher in children with a parent with high blood cholesterol compared to children without (26.74% versus 16.12%; P = 0.02). However, no significant differences were seen in the other groups.

Figure 2 Prevalence of lipoprotein(a) concentration > 30 mg/dL in children aged 4–16 years, by combined ethnic group and parental history of heart attack or angina, high cholesterol, or diabetes below age 50, in the Third National Health and Nutrition Examination Survey, 1988–1994.

Table 2 Median lipoprotein(a) and prevalence of concentrations > 30 mg/dL in children aged 4–16 years by ethnic group and parental history of heart attack or angina below age 50, high cholesterol, or diabetes: NHANES-III, 1988–1994.

| Ethnic group | Parental history | Median (mg/dL), Yes | Median (mg/dL), No | Percent > 30, Yes | Percent > 30, No | N, Yes | N, No |
|---|---|---|---|---|---|---|---|
| Nonhispanic white | Heart attack | 25 | 10 | 42.6* | 18.4 | 15 | 722 |
| | High cholesterol | 13 | 8 | 26.7 | 16.1 | 138 | 586 |
| | Diabetes | 12.5 | 8 | 32.8 | 18.0 | 34 | 701 |
| Nonhispanic black | Heart attack | 45 | 32 | 72.4* | 54.1 | 53 | 1242 |
| | High cholesterol | 28 | 33 | 48.9 | 55.5 | 111 | 1176 |
| | Diabetes | 33.5 | 32 | 56.6 | 54.8 | 86 | 1206 |
| Mexican American | Heart attack | 12 | 7.5 | 34.5* | 18.3 | 21 | 990 |
| | High cholesterol | 9.5 | 7 | 19.6 | 18.7 | 134 | 870 |
| | Diabetes | 7 | 8 | 19.2 | 18.7 | 45 | 966 |

* Indicates a statistically significant association of parental history of heart attack before age 50 with Lp(a) > 30 mg/dL in nonhispanic black, nonhispanic white, and Mexican American children. Significance level was set at < 0.05.

Region Among white children, median Lp(a) was lower in the Midwest (8.2 mg/dL) and South (8.2) than in the Northeast (16.4) or West (10.1). No differences were noted for blacks, and too few Mexican Americans lived in the Northeast and Midwest for evaluation.
In white children, the percentage with Lp(a) > 30 mg/dL was 15.1 in the Midwest, 25.1 in the Northeast, 21.2 in the South, and 21.4 in the West (P = 0.56). Among blacks, a greater percentage of metropolitan than non-metropolitan residents had Lp(a) > 30 mg/dL (57.0% versus 49.7%; P = 0.04). No significant differences were seen in the other groups. Income Family income < $20,000 was not associated with Lp(a) > 30 mg/dL in whites or Mexican Americans, but blacks with low income tended to have a higher percentage of individuals with Lp(a) > 30 mg/dL (56.9% versus 50.5%, P = 0.10). The poverty income ratio was not significantly correlated with Lp(a). Multivariate Analyses The following additional variables known to influence CVD risk were assessed as correlates of Lp(a) by ethnic group: age, total serum cholesterol, HDL cholesterol, casual triglycerides, hours of fasting, weight, height, body mass index, waist circumference, waist-to-hip ratio, subscapular skinfold thickness, suprailiac skinfold thickness, pulse rate, systolic and diastolic blood pressure, and frequency of heavy activity or hours of television viewing. Logistic regression analysis with Lp(a) > 30 mg/dL as the dichotomous dependent variable and age in months as the independent variable revealed a significant linear association in whites (beta = 0.004, SE = 0.002, P = 0.03) and a quadratic association in Mexican Americans (age: beta = 0.068, SE = 0.013, P < 0.001; age squared: beta = -0.000, SE = 0.000, P < 0.001), indicating a lower prevalence of Lp(a) > 30 mg/dL at both ages 4–5 and 16–19 than at 6–15 years. Age, age squared, and sex were entered first in all analyses described below. Controlling for age, sex was not significantly associated with high Lp(a) in any group. Compared to whites, non-Hispanic black ethnicity was significantly associated with high Lp(a) after controlling for age and sex (P < 0.001). Mexican American ethnicity was not significantly associated with a lower prevalence of high Lp(a) (P = 0.35). Within ethnic groups at ages 4–11 years, low birth weight (<2500 g) was not significantly associated with high Lp(a) after controlling for age and sex. High birth weight (>4100 g) was associated with high Lp(a) (beta = 1.87, P = 0.02) only in Mexican Americans. Parental history of heart attack/angina before age 50 was significantly associated with Lp(a) > 30 mg/dL after controlling for age and sex both in white children (beta = -1.14, SE = 0.55, P = 0.04) and in blacks (beta = -0.80, SE = 0.39, P = 0.05). Parental history of heart attack/angina was also significantly associated with high Lp(a) in all children (P = 0.02) after controlling for age, sex, and ethnicity. In white children, parental history of high blood cholesterol (P = 0.07) and diabetes mellitus (P = 0.16) were not significantly associated with high Lp(a) after adjustment for age and sex. Residence in central cities/fringe areas remained significantly associated with high Lp(a) in black children after controlling for age, sex, region, season, and time of day (beta = -0.33, SE = 0.13, P = 0.02). Region, rural/urban code, family income < $20,000, and higher poverty income ratio were not significantly associated with high Lp(a) in white or Mexican American children (all P > 0.05). Body mass index and weight were significant predictors of Lp(a) > 30 mg/dL only among black children after controlling for age, sex, region, rural/urban code, season, and poverty income ratio (e.g., weight in kg, P = 0.02). Neither HDL cholesterol nor casual triglyceride concentration was significantly associated with Lp(a) after controlling for multiple variables.
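The age terms in the logistic models above enter as a linear plus quadratic polynomial in age. A minimal unweighted sketch of such a model is shown below; the data-frame column names are hypothetical, and, unlike the published analysis, this ignores the survey weights and design features handled by SUDAAN.

```python
import pandas as pd
import statsmodels.api as sm

def fit_quadratic_age_model(df):
    """Logistic model for Lp(a) > 30 mg/dL with linear and quadratic age
    terms plus sex; df columns ('lpa', 'age_months', 'sex') are hypothetical."""
    y = (df["lpa"] > 30).astype(int)            # dichotomized outcome
    X = pd.DataFrame({
        "age":    df["age_months"],
        "age_sq": df["age_months"] ** 2,        # quadratic age term
        "sex":    df["sex"],
    })
    return sm.Logit(y, sm.add_constant(X)).fit(disp=False)
```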
Total serum cholesterol was significantly associated with Lp(a) after controlling for multiple variables in all three ethnic groups, as expected. Discussion The most important findings of this study are that ethnicity was significantly associated with Lp(a), and that parental history of heart attack was significantly associated with Lp(a) levels > 30 mg/dL. While non-Hispanic black children had significantly higher Lp(a) levels compared to white and Mexican American children, no consistent associations of age or gender with Lp(a) were found in NHANES-III below age 20. Low birth weight (<2500 g) was not significantly associated with high Lp(a) after controlling for age and sex in the entire group. A higher prevalence of Lp(a) > 30 mg/dL was evident in metropolitan compared to non-metropolitan non-Hispanic black children. Mechanisms Lp(a) is a circulating particle that consists of phospholipids, cholesterol, and apolipoprotein B-100 (i.e., an LDL particle), with apolipoprotein(a) attached to the latter at a single point [ 5 , 24 ]. Like LDL, oxidized Lp(a) may promote atherosclerosis by promoting the formation of foam cells, which release growth factors. Lp(a) acquires a pathogenic profile on entering the arterial cell wall as a result of the influence of factors operating in the inflammatory environment of the atheromatous vessel, such as proteolytic enzymes of the metalloproteinase family [ 25 ]. About 80% of the amino acids in apo(a) are homologous with those of plasminogen, suggesting a possible antifibrinolytic effect which might both promote atherosclerosis and trigger acute thrombotic occlusions [ 1 , 3 , 26 ]. Whereas levels of Lp(a) above 30 mg/dL were shown to increase the risk of coronary heart disease in European samples [ 3 , 24 ], no such association has been found in black populations, in whom the concentration is twice that in Europeans [ 24 , 27 ]. Study of serum concentrations of this particle in children is especially important since, unlike that of LDL, its concentration is postulated to be remarkably stable throughout the life of an individual. Thus, identification of persons at increased risk early in life would permit more effective intervention to lower levels of modifiable risk factors such as LDL cholesterol. Environment, Ethnicity, Age and Gender A number of studies of adults have compared Lp(a) levels in whites and blacks, and have uniformly reported two-fold higher levels in blacks [ 4 , 5 , 11 ]. In Texas, Mexican American adults were found to have lower Lp(a) than whites. Conversely, Kambor et al. observed higher mean and median plasma Lp(a) concentrations in Hispanic men than in white men in Colorado, with a lesser difference seen in women [ 28 ]. Although the explanations for these findings remain unclear, environmental factors, genetic admixture [ 29 ], or a combination of both should be considered. No previous studies comparing Lp(a) in white, black, and Mexican American children in the same sample were found prior to NHANES-III. In fact, few comparisons of plasma Lp(a) concentrations were found for Hispanic children or other children below age 8 years prior to NHANES-III [ 30 , 31 ]. The present findings extend the published report by examining children in greater detail and by examining the relationship of birth weight and family history of cardiovascular disease with Lp(a) in children. Perhaps one of the most noteworthy observations from this study is the significantly higher prevalence of Lp(a) > 30 mg/dL in black, compared to white and Mexican American, children (Table 1).
Ethnic differences in Lp(a) similar to those in adults were found in children as young as 4–5 years of age, supporting the presence of higher levels of Lp(a) in black children compared to the other ethnic groups (Table 1). This observation is consistent with the findings of the Bogalusa Heart Study of white and black children and the NHLBI Growth and Health Study of girls, which showed higher Lp(a) levels in black than in white children [ 30 , 32 ]. The findings in Mexican Americans in the NHANES-III study are analogous to reports from the Colorado study showing that a greater percentage of Hispanics than whites (19% versus 12%) had Lp(a) > 25 mg/dL. The explanation for ethnic differences in levels of Lp(a) in the US remains unclear. Other than total cholesterol, no single environmental or biological variable was consistently associated with levels of Lp(a) in the NHANES-III sample. While BMI and metropolitan (compared to non-metropolitan) residence contributed significantly to the prediction of Lp(a) > 30 mg/dL in black children, only higher birth weight contributed significantly to levels of Lp(a) in Mexican American children. Contrary to a previous report of an association of low birth weight with elevated Lp(a) concentration in black children [ 6 ], we found no consistent association of low birth weight with levels of Lp(a) in black or white children in the present study (Table 4). At the genetic level, heritability estimates were reportedly higher for whites than for blacks [ 33 , 34 ], despite the disproportionately higher levels of Lp(a) in blacks. This observation raises an important question about the genetic determinants of the differential levels of Lp(a) in non-Hispanic blacks compared to whites. A recent genetic linkage analysis by Barkley et al. found no evidence to support the presence of a single, separate gene with large effects segregating specifically in non-Hispanic blacks that might account for elevated Lp(a) levels [ 35 ]. Conversely, high levels of Lp(a) have been suggested to be an old African trait associated with mutations in the coding sequences of apo(a) [ 36 ]. Collectively, these disagreements among studies suggest that the higher levels of Lp(a) in non-Hispanic blacks compared to other ethnic groups may result from a complex interaction of genes with environmental and metabolic factors [ 37 ]. Future identification of the presence and nature of this interaction is imperative. Studies of adults found an association of higher age and female gender with higher Lp(a) levels [ 38 ]. The Bogalusa Heart Study found a small but significant gender difference and a weak positive correlation with age (p < 0.001) in white girls 11–17 years of age [ 30 ]. However, we found no consistent associations of age or gender with Lp(a) in NHANES-III below age 20 (Table 3). Although levels of Lp(a) tended to be highest at ages 6–11 years in boys, the lack of a similar trend in girls and the absence of age-related differences in the levels of Lp(a) in the combined group suggest that pre-pubertal or pubertal status may not significantly influence levels of Lp(a) in children. Altogether, our observations from the present NHANES-III study, together with the work of others [ 30 , 38 ], reveal very few significant associations of Lp(a) with personal, behavioral, or environmental variables.
It therefore appears likely that multiple factors at the environmental and/or genetic levels act together to differentially influence levels of Lp(a) in children and adolescents in the US. Family History Few studies of the association of Lp(a) with family history have been reported in children [ 30 , 39 ]. Our observation of a significant association of parental history of heart attack/angina before age 50 with levels of Lp(a) > 30 mg/dL in NHANES-III (Figure 2) is consistent with results from the Bogalusa study, which found an association of parental history of premature heart attack with higher levels of Lp(a) [ 30 ]. However, in contrast to the Bogalusa study, which showed an association of Lp(a) with parental history of hypercholesterolemia, this association did not attain statistical significance in black children in NHANES-III, although the trends for family history in black children were concordant with those in whites. More recently, Dirisamer and colleagues provided additional support for higher levels of Lp(a) in children and adolescents from families with premature coronary heart disease compared to those without familial coronary heart disease [ 40 ]. In young adults aged 23–35 years in the CARDIA study, a non-significant trend toward higher Lp(a) levels in those with a family history of myocardial infarction was observed in whites, but no association was seen in blacks [ 33 ]. Collectively, the apparent association of parental history of premature heart attack with levels of Lp(a) > 30 mg/dL may lend support to the theory of a genetic underpinning for the higher levels of Lp(a) observed in black children. In the NHANES-III study, no association of Lp(a) was seen with family history of stroke, hypertension, or diabetes. Similarly, family history of high cholesterol or diabetes was not significantly associated with levels of Lp(a) > 30 mg/dL in the entire sample (Figure 2), except in non-Hispanic white children (Table 2). Cross-sectional studies of adults have not consistently shown a relationship of Lp(a) with NIDDM. Conversely, several reports indicate an association between IDDM and Lp(a) [ 29 , 41 ]. However, NIDDM is thought to have a stronger genetic component in its etiology than IDDM [ 42 ]. Despite the inconsistencies in the literature, there is strong evidence to suggest that Lp(a) is a risk factor for vascular disease in diabetics [ 43 ]. Further research on clinical and subclinical diabetes and Lp(a) is needed. Limitations Limitations of the present study include possible bias from survey non-response, missing values for some variables, and confounding by variables not measured. Fortunately, several special studies of earlier NHANES-III data have indicated little bias due to non-response [ 44 ]. Although adequate reliability has been demonstrated for the Lp(a) measurement [ 14 ], the lack of a single, generally accepted laboratory method and national standardization program remains a problem, perhaps explaining in part the inconsistencies among studies [ 3 ]. The relatively large sample size provided good statistical power, and the conservative criteria for statistical significance reduced the possibility of chance findings attaining significance despite a large number of tests. Overall, the representativeness of the sample and the use of sample weights provide wide generalizability of the results to United States black, white, and Mexican American children and adolescents of the same ages. In conclusion, ethnicity was significantly associated with levels of Lp(a).
Parental history of heart attack/angina before age 50 years was associated with levels of Lp(a) > 30 mg/dL in offspring. Collectively, these findings suggest that different pathological thresholds may have to be established for elevated serum Lp(a) levels when used as a risk marker for coronary heart disease in different populations. Future research should include longitudinal studies of Lp(a) in white, black, and Hispanic children followed to adulthood. Racial admixture as well as environmental and behavioral variables associated with acculturation and urban residence should be studied, especially in Mexican American and black populations. Standardization of methods will facilitate inter-study and longitudinal comparisons. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC544891.xml |
535926 | Discriminative topological features reveal biological network mechanisms | Background Recent genomic and bioinformatic advances have motivated the development of numerous network models intending to describe graphs of biological, technological, and sociological origin. In most cases the success of a model has been evaluated by how well it reproduces a few key features of the real-world data, such as degree distributions, mean geodesic lengths, and clustering coefficients. Often pairs of models can reproduce these features with indistinguishable fidelity despite being generated by vastly different mechanisms. In such cases, these few target features are insufficient to distinguish which of the different models best describes real world networks of interest; moreover, it is not clear a priori that any of the presently existing algorithms for network generation offers a predictive description of the networks inspiring them. Results We present a method to assess systematically which of a set of proposed network generation algorithms gives the most accurate description of a given biological network. To derive discriminative classifiers, we construct a mapping from the set of all graphs to a high-dimensional (in principle infinite-dimensional) "word space". This map defines an input space for classification schemes which allow us to state unambiguously which models are most descriptive of a given network of interest. Our training sets include networks generated from 17 models either drawn from the literature or introduced in this work. We show that different duplication-mutation schemes best describe the E. coli genetic network, the S. cerevisiae protein interaction network, and the C. elegans neuronal network, out of a set of network models including a linear preferential attachment model and a small-world model. Conclusions Our method is a first step towards systematizing network models and assessing their predictability, and we anticipate its usefulness for a number of communities. | 1 Background The post-genomic revolution has ushered in an ensemble of novel crises and opportunities in rethinking molecular biology. The two principal directions in genomics, sequencing and transcriptome studies, have brought to light a number of new questions and forced the development of numerous computational and mathematical tools for their resolution. The sequencing of whole organisms, including Homo sapiens, has shown that there are in fact roughly the same number of genes, for example, in mice and men. Moreover, many of the coding regions of the chromosomes (the subsequences which are directly translated into proteins) are highly homologous. The complexity comes, then, not from a larger number of parts, or more complex parts, but rather through the complexity of their interactions and interconnections. Coincident with this biological revolution – the massive and unprecedented volume of biological data – has blossomed a technological revolution with the popularization and resulting exponential growth of computing networks. Researchers studying the topology of the Internet [ 1 ] and the World Wide Web [ 2 ] attempted to summarize these topologies via statistical quantities, primarily the distribution P(k) over nodes of given connectivity or degree k, which was found to be completely unlike that of a "random" or Erdös-Rényi graph. Instead, the distribution obeyed a power law P(k) ~ k^-γ.
As a consequence, many mathematicians concentrated on (i) measuring the degree distributions of many technological, sociological, and biological graphs (which generically, it turned out, obeyed such power-law distributions) and (ii) proposing various models of randomly generated graph topologies which could reproduce these degree distributions (cf. [ 3 ] for a thorough review). The success of these latter efforts reveals a conundrum for mathematical modeling: a metric which is universal (rather than discriminative) cannot be used for choosing the model which best describes a network of interest. The question posed is one of classification, meaning the construction of an algorithm, based on training data from multiple classes, which can place data of interest within one of the classes with small test loss. Systematic enumeration of substructures has so far been used to find statistically significant subgraphs or "motifs" [ 4 - 8 ] by comparing the network of interest to an assumed null model. Recently, the idea of clustering real networks into groups based on similarity in their "significance profiles" has been proposed [ 9 ]. We here use and extend these ideas to compare a given network of interest to a set of proposed network models. Rather than unsupervised clustering of real networks, we perform supervised classification of network models. In this paper, we present a natural mapping from a graph to an infinite-dimensional vector space using simple operations on the adjacency matrix. The coordinates (called "words", see Methods) reflect the number of various substructures in the network (see Figures 3 and 6). We then use support vector machines (SVMs) to build classifiers that are able to discriminate different network models. The performance of these classifiers is measured using the empirical test loss on a hold-out set, thus estimating the probability of misclassifying an unseen test network. We selected 17 different mechanisms proposed in the literature to model various properties of naturally occurring networks. Among them are various biologically inspired graph-generating algorithms which were put forward to model genetic or protein interaction networks. We are then able to classify naturally occurring networks into one of the proposed classes. We here classify data sets for the E. coli genetic network, the C. elegans neuronal network, and the yeast S. cerevisiae protein interaction network. To interpret and understand our results further, we define a measure of robustness to estimate the confidence of the resulting classification. Moreover, we calculate p-values using Gaussian kernel density estimation to find substructures that are characteristic of the network model or the real network of interest. We anticipate that this new approach will provide general tools of network analysis useful to a number of communities. Results and Discussion We apply our method to three different real data sets: the E. coli genetic network [ 10 ] (directed), the S. cerevisiae protein interaction network [ 11 ] (undirected), and the C. elegans neuronal network [ 12 ] (directed). Each node in E. coli's genetic network represents an operon coding for a putative transcriptional factor. An edge exists from operon i to operon j if operon i directly regulates j by binding to its operator site. This gives a sparse adjacency matrix with a total of 423 nodes and 519 edges. The S. cerevisiae protein interaction network has 2114 nodes and 2203 undirected edges.
Its sparseness is therefore comparable to that of E. coli's genetic network. The C. elegans data set represents the organism's fully mapped neuronal network. Here, each node is a neuron and each edge between two nodes represents a functional, directed connection between two neurons. The network consists of 306 neurons and 2359 edges, and is therefore about 7 times more dense than the other two networks. We create training data for undirected or directed models according to the real data set. All parameters other than the numbers of nodes and edges are drawn from a uniform distribution over their range. We sample 1000 examples per model for each real data set, train a pairwise multi-class SVM on 4/5 of the sampled data, and test on the 1/5 hold-out set. We determine a prediction by counting votes for the different classes. Table 1 summarizes the main results. All three classifiers show very low test loss, and two of them very high robustness (see Subsection Robustness under Methods). The average number of support vectors is relatively small. Indeed, some pairwise classifiers have as few as three support vectors, and more than half of them have zero test loss. All of this suggests the existence of a small subset of words which can distinguish among most of these models. The predicted models Kumar [ 13 ], Middendorf-Ziv (MZ) [ 14 ], and Sole [ 15 ] are based on very similar mechanisms of iterated duplication and mutation. The model by Kumar et al. was originally meant to explain various properties of the WWW. It is based on a duplication mechanism, where at every iteration a prototype for the newly introduced node is chosen at random and connected to the prototype's neighbors, or to other randomly chosen nodes, with probability p. It is therefore built on an imperfect copying mechanism which can also be interpreted as duplication-mutation, often evoked when considering genetic and protein-interaction networks. Sole is based on a similar idea, but is an undirected model, and allows for two free parameters: a probability controlling the number of edges copied and a probability controlling the number of random edges created. MZ is essentially a directed version of Sole. Moreover, we observe that none of the biological networks were predicted to be generated by preferential attachment, even though these networks exhibit power-law degree distributions. The duplication-mutation schemes arise as the most successful. However, it is interesting to note that every duplication-mutation model by construction gives rise to an effective preferential attachment [ 16 ]. Our classification results therefore do not dismiss the idea of preferential attachment, but merely the specific model which directly implements this idea. Kumar and MZ were classified with almost perfect robustness (see Subsection Robustness under Methods) against 500-dimensional (out of 4680 dimensions) subspace sampling. With 26 different choices of subspaces, E. coli was always classified as Kumar. We therefore assess with high confidence that Kumar and MZ come closest to modeling E. coli and C. elegans, respectively. In the case of Sole and the S. cerevisiae protein network we observed fluctuations in the assignment to the best model. Three out of 22 times, S. cerevisiae was classified as Vazquez (duplication-mutation), and other times as Barabasi (preferential attachment), Klemm (duplication-mutation), Kim (scale-free static), or Flammini (duplication-mutation), depending on the subset of words chosen.
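Conceptually, the pairwise multi-class scheme with vote counting can be reproduced with off-the-shelf tools. The sketch below uses scikit-learn's one-vs-one wrapper around a linear SVM as a stand-in for the SVM-Light setup actually used; variable names are ours.

```python
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier
from sklearn.svm import LinearSVC

def train_and_test(X, y, seed=0):
    """Train pairwise (one-vs-one) linear SVMs; the prediction is the class
    with the most votes over all pairwise classifiers.
    X: n_networks x n_words feature matrix; y: model labels."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, random_state=seed, stratify=y)  # 4/5 train, 1/5 test
    clf = OneVsOneClassifier(LinearSVC()).fit(X_tr, y_tr)
    test_loss = 1.0 - clf.score(X_te, y_te)                  # empirical test loss
    return clf, test_loss
```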
These fluctuations clearly indicate that different features support different models. Therefore the confidence in classifying S. cerevisiae as Sole is limited. The statistical significance of individual words in different models is investigated using kernel density estimation (see Methods) by finding words which maximize η_ij ≡ p_i(x_0)/p_j(x_0) for two different models (i and j) at the word value x_0 of the real data set. Figure 1 shows training data for two different models used to classify the C. elegans network: the MZ model [ 14 ], which wins in the classification results, and the runner-up Grindrod model [ 17 ]. The histograms for the word nnz_D(AU AD AT AU A) are shown along with their estimated densities; this word extremely disfavors the winning model relative to its runner-up (it minimizes η_ij). The opposite case is shown in Figure 2 for E. coli, where the plotted word distribution supports the winning model (Kumar [ 13 ]) and disfavors (maximizes η_ij) the runner-up Krapivsky-Bianconi model [ 18 , 14 ] (preferential attachment). More specifically, we are able to verify that the likelihood of generating a network with E. coli's word values is highest for the Kumar model for most of the words. Indeed, out of 1897 words taking at least 2 integer values for all of the models, the estimated density at the E. coli word value was highest for Kumar in 1297 cases, for Krapivsky-Bianconi [ 18 , 14 ] in 535 cases, and for Krapivsky [ 18 ] in only 65 cases. Figure 2 shows the distributions for the word nnz_D(AUT AUT AU AUT A), which had the maximum ratio of the probability density of Kumar over that of Krapivsky-Bianconi at E. coli's word value. In fact, E. coli has a zero word count, meaning that none of the associated subgraphs shown in Figure 3 actually occur in E. coli. Four of those subgraphs have a mutual edge, which is absent in the E. coli network and also impossible to generate in a Kumar graph. Krapivsky-Bianconi graphs allow for mutual edges, which could be one of the reasons for a higher count in this word. Another source might be that the fifth subgraph, showing a higher-order feed-forward loop, is more likely to be generated in a Krapivsky-Bianconi graph than in a Kumar graph. This subgraph also has to be absent in the E. coli network since it gives a zero word value, demonstrating that both the Kumar and Krapivsky-Bianconi models have a tendency to give rise to a topological structure that does not exist in E. coli. This analysis gives an example of how these findings are useful in refining network models and in deepening our understanding of real networks. For further discussion, refer to our website [ 14 ]. The SVM results suggest that one may only need a small subset of words to separate most of the models. The simplest approach to find such a subset is to look at every word for a given pair of models and compute the best split, then rank words by lowest loss. We find that among the most discriminative words some occur very often, such as nnz(AA) or nnz(AT A), which count the pairs of edges attached to the same vertex and either pointing in the same direction or pointing away from each other, respectively. Other frequent words include nnz_D(AA), nnz_D(AT A), and sum_U(AT A). Figures 4 and 5 show scatter-plots of the training data using the three most discriminative words.
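The likelihood ratio η_ij can be sketched in a few lines. Here scipy's gaussian_kde, with its automatic bandwidth, stands in for the cross-validated smoothing parameter described in Methods, and a small ε guards against vanishing densities.

```python
import numpy as np
from scipy.stats import gaussian_kde

def eta(word_vals_i, word_vals_j, x0, eps=1e-12):
    """eta_ij = p_i(x0) / p_j(x0): ratio of the estimated densities of one
    word under models i and j at the real network's word value x0.
    Assumes each training sample takes at least two distinct values."""
    p_i = gaussian_kde(word_vals_i)(x0)[0]
    p_j = gaussian_kde(word_vals_j)(x0)[0]
    return (p_i + eps) / (p_j + eps)
```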
Conclusions We proposed a method to discriminate different network topologies, and we showed how to use the resulting classifier to assess which model out of a set of given network models best describes a network of interest. Moreover, the systematic enumeration of countably infinite features of graphs can be successfully used to find new metrics which are highly efficient in separating various kinds of models. Our method is a first step towards systematizing network models and assessing their predictability, and we anticipate its usefulness for a number of communities. Methods Network models We sample training data for undirected graphs from six growth models, one scale-free static model [ 19 - 21 ], a small-world model [ 22 ], and the Erdös-Rényi model [ 23 ]. Among the six growth models, two are based on preferential attachment [ 24 , 25 ], three on a duplication-mutation mechanism [ 16 , 15 ], and one on purely random growth [ 26 ]. For directed graphs we similarly train on two preferential attachment models [ 18 ], two static models [ 17 , 27 , 20 ], three duplication-mutation models [ 13 , 28 ], and the directed Erdös-Rényi model [ 23 ]. More detailed descriptions and source code are available on our website [ 14 ]. For the (directed) E. coli transcriptional network and the (directed) C. elegans neuronal network we sample training data for all directed models; for the (undirected) S. cerevisiae protein interaction network we sample data for all undirected models. The set of undirected models includes two symmetrized versions of originally directed models [ 17 , 28 ]. One should note that the properties of a directed model can differ significantly from those of its symmetrized version. In general, the more network classes allowed, the more completely word space is explored, and therefore the more specific the classification can be. In order to classify real data, we sample training examples of the given models with a fixed total number of nodes N_0, and allow a small interval I_M of 1–2% around the total number of edges M_0 of the considered real data set. All additional model parameters are sampled uniformly over a given range (which is specified by the model's authors in most cases, and can otherwise be given reasonable bounds). A generated graph is accepted if its number of edges M falls within the specified interval I_M around M_0, thereby creating a distribution of graphs associated with each model which should best describe the real data set with the given N_0 and M_0. Some of the models can be described as generalizations of other models. Although a generalized model can overlap with a specific one in its support, word space is sufficiently high-dimensional that such confusing realizations are practically impossible. To build intuition, consider that the Erdös model itself includes all possible network topologies. Nonetheless there is extremely low test loss against any other model, indicating that it still defines a particular volume in this high-dimensional space. Similarly, very few real networks have non-negligible prediction scores for being classified as Erdös networks.
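The acceptance step just described (keep a sampled graph only if its edge count lands within I_M) might look as follows. An Erdös-Rényi generator is used purely as a placeholder for any of the 17 models, and the helper name is ours.

```python
import random
import networkx as nx

def sample_graph(n0, m0, tol=0.02, max_tries=1000, seed=None):
    """Rejection-sample graphs until the edge count M lies in the
    interval I_M = [m0*(1 - tol), m0*(1 + tol)]."""
    rng = random.Random(seed)
    p = m0 / (n0 * (n0 - 1) / 2.0)   # match the expected number of edges
    for _ in range(max_tries):
        g = nx.gnp_random_graph(n0, p, seed=rng.randint(0, 2**31 - 1))
        if abs(g.number_of_edges() - m0) <= tol * m0:
            return g
    raise RuntimeError("no sample fell within the edge window")
```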
Words The input space used for classifying graphs was introduced in our earlier work [ 6 ] as a technique for finding statistically significant features and subgraphs in naturally occurring biological and technological networks. Given the adjacency matrix A representing a graph (i.e., A_ij = 1 iff there exists an edge from j to i), multiplications of the matrix count the numbers of walks from one node to another (i.e., [A^n]_ij is the number of unique walks from j to i in n steps). Note that the adjacency matrix of an undirected graph is symmetric. The topological structure of a network is characterized by the numbers of closed and open walks of given length, which can be found by calculating the diagonal and off-diagonal components of the matrix, respectively. For this we define the projection operation D such that [D(A)]_ij = A_ij δ_ij (1) and its complement U = I - D. (Note that we do not use Einstein's summation convention. Indices i and j are not summed over.) We define the primitive alphabet {A; T, U, D} as the adjacency matrix A and the operations T, U, D, with the transpose operation T(M) ≡ M^T for any matrix M. T(A) and A distinguish walks "up" the graph from walks "down" the graph. From the letters of this alphabet we can construct words (a series of operations) of arbitrary length. A number of redundancies and trivial cases can be eliminated (for example, the projection operations satisfy DU = UD = 0), leading to the operational alphabet {A, AT, AU, AD, AUT}. The resulting word is a matrix representing a set of possible walks, which can be enumerated. An example is shown in Figure 6. Each word determines two relevant statistics of the network: the number of distinct walks and the number of distinct pairs of endpoints. These two statistics are determined by either summing the entries of the matrix (sum) or counting the number of nonzero elements (nnz) of the matrix, respectively. Thus the two operations sum and nnz map words to integers. This allows us to plot any graph in a high-dimensional data space: the coordinates are the integers resulting from these path-based functionals of the graph's adjacency matrix. The coordinates of the infinite-dimensional data space are given by integer-valued functionals F(L_1 L_2 ... L_n A) (2) where each L_i is a letter of the operational alphabet and F is an operator from the set {sum, sum_D, sum_U, nnz, nnz_D, nnz_U}. We found it necessary only to evaluate words with n ≤ 4 (counting all walks up to length 5) to construct low test-loss classifiers. Therefore, our word space is a 6 × (5 + 5² + 5³ + 5⁴) = 4680-dimensional vector space, but since the words are not linearly independent (e.g., sum_U + sum_D = sum), the dimensionality of the manifold explored is actually much smaller. However, we continue to use the full data space since a particular word, though it may be expressed as a linear combination of other words, may be a better discriminator than any of its summands. In [ 6 ], we discuss several possible interpretations of words, motivated by algorithms for finding subgraphs. Previously studied metrics can sometimes be interpreted in the context of words. For example, the transitivity of a network can be defined as 3 times the number of 3-cycles divided by the number of pairs of edges that are incident on a common vertex. For a loopless graph (without self-interactions), this can also be calculated as a simple expression in word space: sum_D(AAA)/sum_U(AA). Note that this expression of transitivity as the quotient of two words implies separation in two dimensions rather than in one.
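To make the construction concrete, one possible implementation of the word map is sketched below (our own naming conventions; the authors' code was MATLAB). Each letter multiplies by A after applying its operation to the matrix on its right, words are evaluated from the innermost letter outward, and each of the six functionals is applied to the resulting matrix.

```python
import itertools
import numpy as np

def D(M):
    """Projection onto the diagonal: [D(M)]_ij = M_ij * delta_ij."""
    return np.diag(np.diag(M))

def U(M):
    """Complement of D: the off-diagonal part of M."""
    return M - D(M)

# Operational alphabet: each letter maps (A, M) to A times an operated M.
LETTERS = {
    "A":   lambda A, M: A @ M,
    "AT":  lambda A, M: A @ M.T,
    "AU":  lambda A, M: A @ U(M),
    "AD":  lambda A, M: A @ D(M),
    "AUT": lambda A, M: A @ U(M.T),
}

FUNCTIONALS = {
    "sum":   lambda M: int(M.sum()),
    "sum_D": lambda M: int(D(M).sum()),
    "sum_U": lambda M: int(U(M).sum()),
    "nnz":   lambda M: int(np.count_nonzero(M)),
    "nnz_D": lambda M: int(np.count_nonzero(D(M))),
    "nnz_U": lambda M: int(np.count_nonzero(U(M))),
}

def word_vector(A, max_len=4):
    """Map an adjacency matrix to all word values F(L1 ... Ln A), n <= max_len.
    For max_len = 4 this yields 6 * (5 + 5**2 + 5**3 + 5**4) = 4680 coordinates."""
    A = np.asarray(A, dtype=np.int64)
    features = {}
    for n in range(1, max_len + 1):
        for letters in itertools.product(LETTERS, repeat=n):
            M = A
            for name in reversed(letters):   # innermost letter acts on A first
                M = LETTERS[name](A, M)
            for fname, F in FUNCTIONALS.items():
                features[f"{fname}({' '.join(letters)} A)"] = F(M)
    return features
```

Counting 5 letter choices per position for word lengths 1 through 4, times 6 functionals, reproduces the 4680 coordinates given above.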
However, there are limitations to word space. For example, a similar measure, the clustering coefficient, defined as the average over all vertices of the number of 3-cycles containing the vertex divided by the number of paths of length two centered at that vertex, cannot be easily expressed in word space because vertices must be considered individually to compute this quantity. Of course, the utility of word space is not that it encompasses previously studied metrics, but that it can elucidate new metrics in an unbiased, systematic way. SVMs A standard classification algorithm which has been used with great success in myriad fields is the support vector machine, or SVM [ 29 ]. This technique constructs a hyperplane in a high-dimensional feature space separating two classes from each other. Linear kernels are used for the analysis presented here; extensions to appropriate nonlinear kernels are possible. We rely on a freely available C implementation of SVM-Light [ 30 ], which uses a working-set selection method to solve the convex programming problem

minimize (1/2)|w|² + C Σ_{i=1}^{m} ξ_i subject to y_i(w · x_i + b) ≥ 1 - ξ_i, ξ_i ≥ 0, i = 1,..., m,

where f(x) = w · x + b is the equation of the hyperplane, x_i are training examples, and y_i ∈ {-1, +1} their class labels. Here, C is a fixed parameter determining the trade-off between small errors ξ_i and a large margin 2/|w|. We set C to a default value. We observe that training and test losses have a negligible dependence on C since most test losses are near or equal to zero even in low-dimensional projections of the data space. Robustness Our objective is to determine which of a set of proposed models most accurately describes a given real data set. After constructing a classifier enjoying low test loss, we classify our given real data set to find a 'best' model. However, the real network may lie outside of any of the sampled distributions of the proposed models in word space. In this case we interpret our classification as a prediction of the least erroneous model. We distinguish between the two cases by noting the following: consider building a classifier for apples and oranges which is then faced with a grapefruit. The classifier may decide that, based on the feature size, the grapefruit is an apple. However, based on the feature taste, the grapefruit is classified as an orange. That is, if we train our classifier on different subsets of words and always get the same prediction, the given real network must come closest to the predicted class based on any choice of features we might look at. We therefore define a robust classifier as one which consistently classifies a test datum in the same class, irrespective of the subset of features chosen. We measure robustness as the ratio of the number of consistent predictions over the total number of subspace classifications. In this paper we consider robustness for a subspace dimensionality of 500, a small fraction of the total number of dimensions (4680).
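The subspace robustness measure described above admits a direct sketch (names are ours; train_fn is any routine that returns a fitted classifier with a predict method).

```python
import numpy as np

def robustness(train_fn, X, y, x_real, n_rounds=26, dim=500, seed=0):
    """Fraction of random word-subspace classifiers that give the modal
    prediction for the real network, plus that modal class.
    X: n_samples x n_words training matrix; x_real: 1-D word vector."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_rounds):
        cols = rng.choice(X.shape[1], size=dim, replace=False)
        clf = train_fn(X[:, cols], y)
        preds.append(clf.predict(x_real[None, :][:, cols])[0])
    vals, counts = np.unique(preds, return_counts=True)
    return counts.max() / n_rounds, vals[counts.argmax()]
```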
Kernel density estimation A generative model, in which one estimates the distribution from which observations are drawn, allows a quantitative measure of model assignment: the probability of observing a given word value given the model. For a robust classifier, in which assignment is not sensitively dependent on the set of features chosen, the conditional probabilities should consistently be greatest for one class. To identify significant features we perform density estimation with Gaussian kernels for each individual word, allowing calculation of p(C = c | X_j = x), the probability of being assigned to class c given a particular value x of word j. By comparing ratios of likelihood values among the different models, it is therefore possible, for the case of non-robust classifiers, to determine which of the features of a grapefruit come closest to an apple and which features come closest to an orange. We compute the estimated density at a word value x_0 from the training data x_i (i = 1,..., m) as

p(x_0, λ) = (1/m) Σ_{i=1}^{m} (2πλ²)^{-1/2} exp(-(x_0 - x_i)²/(2λ²)),

where we optimize the smoothing parameter λ by maximizing the average log-probability Q of a hold-out set using 5-fold cross-validation. More precisely, we partition the training examples into 5 folds, where {f_i(j)}_j is the set of indices associated with fold i (i = 1,...,5). We then maximize

Q(λ) = (1/m) Σ_{i=1}^{5} Σ_j log p^{(-i)}(x_{f_i(j)}, λ),

where p^{(-i)} is the density estimated from the training examples outside fold i, as a function of λ. In all cases we found that Q(λ) had a well-pronounced maximum as long as the data were not oversampled. Because words can only take integer values, too many training examples can lead to the situation that the data take exactly the same values with or without the hold-out set. In this case, maximizing Q(λ) corresponds to p(x, λ) having single peaks around the integer values, so that λ tends to zero. Therefore, we restrict the number of training examples to 4N_v, where N_v is the number of unique integer values taken by the training set. With this restriction Q(λ) showed a well-pronounced maximum at a non-zero λ for all words and models. Word ranking The simplest scheme to find new metrics which can distinguish among given models is to take a large number of training examples for a pair of network models and find the optimal split between the two classes for every word separately. We then test every one-dimensional classifier on a hold-out set and rank words by lowest test loss. Web supplement Additional figures, a more detailed description of the network models, and detailed results can be found on our website [ 14 ]. Source code Source code was written in MATLAB and is downloadable from our website [ 14 ]. Authors' contributions MM, EZ, and CW had the original ideas for this paper. CW and LC guided the project. Most of the coding was done by MM and EZ. CA, JH, RK, CL, and GW coded most of the network generation algorithms. The final manuscript was mainly written by MM, EZ, CW, and LC. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC535926.xml |
524181 | ImageParser: a tool for finite element generation from three-dimensional medical images | Background The finite element method (FEM) is a powerful mathematical tool to simulate and visualize the mechanical deformation of tissues and organs during medical examinations or interventions. It is yet a challenge to build up an FEM mesh directly from a volumetric image, partially because the regions (or structures) of interest (ROIs) may be irregular and fuzzy. Methods A software package, ImageParser, is developed to generate an FEM mesh from 3-D tomographic medical images. This software uses a semi-automatic method to detect ROIs from the context of the image, including neighboring tissues and organs, completes segmentation of different tissues, and meshes the organ into elements. Results The ImageParser is shown to build up an FEM model for simulating the mechanical responses of the breast based on 3-D CT images. The breast is compressed by two plate paddles under an overall displacement as large as 20% of the initial distance between the paddles. The strain and tangential Young's modulus distributions are specified for the biomechanical analysis of breast tissues. Conclusion The ImageParser can successfully extract the geometry of ROIs from a complex medical image and generate the FEM mesh with user-defined segmentation information. | Background Diagnostic imaging devices such as CT, MRI and PET scanners are able to produce three-dimensional (3-D) descriptions of various features such as tissues and organs. In a computer, these images are data describing the intensity at each spatial point of a volume. The interpretation of the dataset requires special training and depends on experience. Researchers have introduced a variety of algorithms to visualize 3-D medical images and to extract the geometric information of objects from volumetric image data [ 1 - 3 ]. In recent years, the finite element method (FEM) has been widely used to simulate the mechanical deformation of tissues and organs during examinations or interventions [ 4 - 6 ]. To build up an FEM mesh from a medical image, the contour information of segmented regions of interest (ROIs) needs to be extracted first from a volume of data [ 7 , 8 ]. Then, the volume is meshed into nodes and elements, and material properties are assigned to each element in accordance with the segmentation information [ 9 ]. By further applying the boundary conditions and mechanical loadings on the corresponding nodes or elements, commercial FEM software packages such as ANSYS and ABAQUS can calculate the mechanical stress and strain, and predict the deformation and motion in the field of view. The purpose of this work is to establish an FEM model to simulate the deformation of a woman's breast under mammography compression. A patient's breast may include three kinds of tissues: fatty, parenchyma, and cancerous tissues [ 6 ]. During the examination, the breast is squeezed by two flat paddles to obtain an image with good contrast. The relative deformation carries information on the mechanical properties of the tumors and masses. Thus, the FEM is a powerful tool to simulate this kind of deformation. The breast needs first to be separated from the context of a biomedical image which includes some other organs, and the different tissues of the breast are then segmented. As seen in Figure 1, parenchyma has a cloud-like shape and the three tissues are fully mixed in some regions.
While existing modeling techniques [e.g., [ 1 , 2 ]] may be applied, a significant number of 3-D elements would have to be introduced because the geometric shapes of the constituent tissues are fairly irregular, with fuzzy boundaries. It is not optimal to apply those techniques to our research and clinical studies due to the requirements on computational efficiency. Figure 1 The Interface of ImageParser When Loading a 3-D Image. The image is automatically shown slice by slice with the slice number shown in the text box, and the interval between two slices can be changed. Clicking the slide bar or text box, we can focus on the current slice; double-clicking the window area, we can navigate the image slice by slice again; and dragging the slide bar or inputting the slice number in the text box, we can jump to the desired slice. In this paper, a software package called the ImageParser is developed to generate an FEM model from 3-D medical images. While aimed at the image segmentation, mesh generation, and deformation simulation of heterogeneous breast tissues, the method is applicable to many biomedical imaging and biomechanical analyses of soft and hard tissues, such as mammography and cardiovascular imaging. This software uses a semi-automatic method to detect the objective constituents from the context of an image including neighboring tissues and organs. It segments an image based on user-defined grayscale ranges, and meshes tissues into elements of a user-defined size. Inputting the generated FEM mesh into an FEM program, we can calculate the mechanical deformation under specific boundary conditions and mechanical loadings. The ImageParser is written in Microsoft Visual C# .NET (Microsoft Development Environment 2003 Version 7.1), and can be integrated into a high-level image analysis environment with good extensibility and scalability. Description Overview The ImageParser provides a window-style GUI as shown in Figure 1. A 3-D image is loaded and shown slice by slice. We can focus on any slice and edit it. While the image can be displayed in color mode, for image analysis we use an 8-bit grayscale to describe each voxel. An RGB color can always be transformed into grayscale according to a suitable equation [ 10 ]. The voxel size can be user-defined. The image can then be segmented into the real organs. Here we use Figure 1 as an example to show the procedure for generating an FEM mesh using the ImageParser. In Figure 1, the breast is the selected ROI within the axial CT image, which includes the breast, ribs, and organs in the thorax. To obtain its geometric information, we first isolate it from other unwanted regions by selecting a rectangular region as shown in Figure 2. It is noted that this selection also works for all other slices, so when we select the ROI we need to work on the most representative slice and reserve enough space not to truncate the wanted region in other slices. Because the shape of the breast is irregular, we can still see the rib and part of the thorax in the ROI, as well as some regions with the background color. We need to further detect the borderline of the breast in each slice. Figure 2 The Region of Interest in a Slice. Under the function of selecting the ROI, press the left button of the mouse and drag the mouse. When the dashed-line rectangle covers the ROI, release the button. All the slices will be shown with this selection. This software provides a semi-automatic interface to detect the borderline of the breast as shown in Figure 3.
We use the computer mouse to select some key points on the borderline, and then the software will automatically detect the borderline between the key points. Because the borderline changes from one slice to another, the software is designed to automatically detect the borderline of the neighboring slice using the known borderline as a seed. Repeating this procedure, the software can detect the borderlines for all slices. The algorithmic details will be provided in the next section. Based on these borderlines, we can reconstruct the surface of the breast. It is noted that in some special cases in which the borderlines are fuzzy or irregular, the software cannot effectively detect the borderlines. However, we can manually select more key points in the first slice, and process other slices similarly. Then, based on our experience, we can always manage to obtain the borderlines with high precision. Figure 3 Selecting and Detecting the Borderline of the Breast in the ROI. Under the function of selecting the outline, click the left button of the mouse on a point close to the borderline. The software will detect the closest point of the borderline and mark it with a yellow cross. After clicking the next point close to the borderline, the software will automatically detect the closest borderline between the previous point and this one. Repeating this procedure, the borderline is finally detected. Figure 3 shows three types of tissues in the breast: the black area representing fatty tissue, the gray area representing parenchyma, and the light area representing the tumor. Because these tissues have different mechanical properties, we need to segment them out of the breast as new ROIs. Here we use the grayscale to classify the voxels. From the grayscale histogram in Figure 4, we can find the grayscale ranges corresponding to the different tissue types. With the grayscale range for each tissue, the software can map the voxels onto the corresponding categories. For example, in this figure we can use the grayscale range from 0 to 64 to represent the fatty tissue, 64 to 144 the parenchyma, and 144 to 255 the cancerous tissue. Figure 4 The Grayscale Histogram of the ROI. The horizontal axis denotes the grayscale (G), and the vertical axis the number of voxels. G = 0 is the background color of black; G = 255 is the color of white. From the grayscale distribution corresponding to the tissues in Figure 3, we can define the grayscale ranges for fat, parenchyma, and tumor tissues. While we are able to directly output the segmentation information based on the voxels, it cannot be effectively used by any FEM software since the whole breast includes more than ten million voxels. We therefore mesh the breast into larger elements based on the specific requirements of precision and computational capability. Figure 5 shows the FEM mesh of one slice with the three tissues marked by different colors. Extending this procedure to all slices and considering the slice thickness and element size, we can obtain the 3-D FEM mesh of the breast. While we take cuboidal elements as an example to generate the mesh at this stage, we can also mesh the breast into other elements such as tetrahedra. Figure 5 The FEM Mesh of the Breast. The selected region is meshed by cuboidal elements. The color black denotes fat; gray, parenchyma; and white, tumor. The green lines are the boundaries of the elements. Though only one slice is shown here, elements are also generated for the other slices so that the 3-D FEM mesh of the breast is obtained.
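Mapping voxels to tissue categories by grayscale range reduces to a one-line operation on the volume. In this sketch the thresholds are the illustrative values read off Figure 4 (64 and 144), not constants built into the ImageParser, and Python stands in for the tool's C# implementation.

```python
import numpy as np

# Illustrative thresholds from the Figure 4 histogram (not tool constants):
TISSUE_BINS  = [64, 144]     # fat: [0,64), parenchyma: [64,144), tumor: [144,255]
TISSUE_NAMES = ["fat", "parenchyma", "tumor"]

def segment(volume):
    """Label each voxel of an 8-bit grayscale volume:
    0 = fat, 1 = parenchyma, 2 = tumor."""
    return np.digitize(volume, TISSUE_BINS)
```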
After the mesh of the breast is generated, we can import it into an FEM software package to simulate the mechanical deformation of the breast, given the material constants of all the tissues and appropriate boundary conditions.

Borderline Detection A medical image typically includes many kinds of organs and tissues, but biomedical engineers may only be interested in a small number of regions within a complex medical image. While certain algorithms have been developed to automatically detect the surface of a 3-D image [ 1 - 3 , 11 ], the object surface may not be well defined when the ROI sits in the context of a complex image and the boundary is unclear, especially for some soft tissues. We have to use our own knowledge to isolate ROIs from the image. We therefore propose a semi-automatic method, described in the following steps.

1. We first focus on one slice. When the first point ( x 0 , y 0 ) is selected close to the borderline, a function searches for the most probable border point in the square region with top-left corner ( x 0 - s , y 0 - s ) and bottom-right corner ( x 0 + s , y 0 + s ). Here s is a user-defined parameter with a default value of 3 pixels. In this region, the gradient at each point, Δ( x , y ), is defined by a Laplace operator [ 12 ], which in discrete form may be written as Δ( x , y ) = | f ( x +1, y ) + f ( x -1, y ) + f ( x , y +1) + f ( x , y -1) - 4 f ( x , y )|, where f ( x , y ) denotes the intensity at ( x , y ). The detected point is the one whose color differs from the background and which maximizes Δ( x , y )/[( x - x 0 ) 2 + ( y - y 0 ) 2 + ε ], where ε is a user-defined parameter with a default value of 0.1 that prevents a singularity at the point ( x 0 , y 0 ). The detected point is denoted ( x 1 , y 1 ). Normalizing the Laplacian by the squared distance to the selected seed point gives neighboring points a higher priority in the detection, so that even in regions where the borderline is unclear, or where two borderlines run close together, the program does not get lost.

2. We then select the next point on the borderline. After we manually select a point visually close to the borderline, the software adjusts its location, following the same method used to detect ( x 1 , y 1 ), and thereby detects the second border point ( x 2 , y 2 ). As shown in Figure 6 , a function then traces the borderline between ( x 1 , y 1 ) and ( x 2 , y 2 ). Two squares with edge length 2 s , marked in the dark color, sit inside a larger square whose diagonal runs from ( x 1 , y 1 ) to ( x 2 , y 2 ). We find the most probable border point in the two dark squares using the same method as in the first step. If this new border point lies in the top-right square, we replace ( x 2 , y 2 ) by it; otherwise, we replace ( x 1 , y 1 ). Once ( x 1 , y 1 ) or ( x 2 , y 2 ) has been updated, we continue to find the next border point in the same way, repeating this procedure until the distance between the two points is less than 2 s . Connecting all points in such an orderly way, we obtain the borderline. Note that this method converges, because the distance between the two working points becomes smaller and smaller during the procedure.

Figure 6 Detecting the Borderline between Two Points ( x 1 , y 1 ) and ( x 2 , y 2 ). First find the most probable border point in the two dark regions. Then treat the new point and the remaining old one just like ( x 1 , y 1 ) and ( x 2 , y 2 ), and find the next border point. Repeat this procedure until the distance between the two points is less than 2 s . Connecting all points in order, we obtain the borderline.
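As a rough illustration of step 1, the scoring of candidate border points can be written as follows. This is a Python sketch, not the authors' C# implementation; the row/column indexing and the simple equality test against the background value are assumptions.

```python
import numpy as np

def find_border_point(f, x0, y0, s=3, eps=0.1, background=0):
    """Score every pixel in the (2s+1) x (2s+1) window around the selected
    seed (x0, y0) by its discrete Laplacian, normalized by the squared
    distance to the seed, and return the best-scoring candidate."""
    best, best_score = (x0, y0), -np.inf
    for x in range(x0 - s, x0 + s + 1):
        for y in range(y0 - s, y0 + s + 1):
            if not (0 < x < f.shape[0] - 1 and 0 < y < f.shape[1] - 1):
                continue                      # skip the image margin
            if f[x, y] == background:
                continue                      # must differ from background
            lap = abs(float(f[x + 1, y]) + f[x - 1, y] + f[x, y + 1]
                      + f[x, y - 1] - 4.0 * f[x, y])
            score = lap / ((x - x0) ** 2 + (y - y0) ** 2 + eps)
            if score > best_score:
                best, best_score = (x, y), score
    return best
```

Step 2 then repeatedly calls the same search on the two dark sub-squares between the current pair of border points, replacing one of the two endpoints as described above, until the endpoints are less than 2 s apart.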
3. We repeat step 2 until the borderline is closed. We thus obtain the whole closed borderline in the slice.

4. For a 3-D medical image, owing to the similarity of neighboring slices, the software can map the selected key border points of one slice onto the neighboring slice and use the method of step 1 to find the corresponding border points in the new slice. It then adopts step 2 to detect the borderline between those border points. In this way we are able to detect the borderlines in all slices, and from these borderlines we can finally construct the surface of the selected ROI. Because the borderlines of the other slices are detected on the basis of the first slice, the choice of this slice greatly affects the quality of the results; we suggest that it should contain the most representative information. If the change between two neighboring slices is large, we can optionally reselect the border points in the new slice instead of leaving the detection to the computer. Note also that because the borderlines detected by the computer may be quite irregular, a cubic Bezier curve-fitting technique can be used to smooth them.

FEM Mesh Generation Among the several methods of automating mesh generation [ 9 ], meshing with cuboidal elements is the fastest and most stable way to mesh an organ of irregular shape, even though it may require more elements at the boundary. We therefore apply the cuboidal-element mesh to make this software applicable to complex cases. For instance, in Figure 3 the cloud-like parenchyma is dispersed throughout the fatty tissue; it is almost impossible to extract the exact geometry of the parenchyma, and most geometry-based methods are invalid in this case. In the cuboidal mesh, elements are generated layer by layer and are automatically connected through the overlaid nodes. Given an element size, we can calculate how many slices each layer of elements spans. For simplicity, we take the borderline of the central slice to be the borderline of the whole layer. Since the borderline consists of many points, we first build a grid using the element size, move each border point to the closest cross-point of the grid, and remove repeated points. We thus obtain the borderline as a set of cross-points of the grid. Note that the borderline may become entangled in places due to numerical truncation; we need to normalize it so that it encloses a singly connected region and the distance between two neighboring points equals the element size.

When we scan the singly connected region, two types of points occur on the borderline: jumping points and inertial points. If the left and right sides of a point are in different states, i.e. one side is inside the objective region and the other is outside, then the point is called a jumping point; otherwise it is called an inertial point. For instance, in an upright rectangle, all points on the left and right sides are jumping points, whereas the remaining points on the top and bottom sides are inertial ones. On a closed borderline, each point is connected to two neighbors. For a jumping point the two neighboring points have different y coordinates, whereas for an inertial point they do not. From this criterion we can identify the jumping points on the borderline.
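To make the criterion concrete, here is a minimal Python sketch (an illustration, not the authors' code) of the jumping-point test and of the row-by-row fill described in the next paragraph; `border` is assumed to be the closed, grid-normalized borderline as an ordered list of (x, y) points.

```python
def classify_jumping(border):
    """A border point is 'jumping' when its two neighbours along the closed
    polyline have different y coordinates, i.e. the contour actually crosses
    the scan line there instead of running along it."""
    n = len(border)
    jumping = []
    for i, (x, y) in enumerate(border):
        (_, y_prev), (_, y_next) = border[i - 1], border[(i + 1) % n]
        if y_prev != y_next:
            jumping.append((x, y))
    return jumping

def fill_interior(border):
    """Return interior points via even-odd pairing of jumping points per row."""
    rows = {}
    for x, y in classify_jumping(border):
        rows.setdefault(y, []).append(x)
    interior = []
    for y, xs in rows.items():
        xs.sort()                   # pairs (xs[0], xs[1]), (xs[2], xs[3]), ...
        for left, right in zip(xs[0::2], xs[1::2]):
            interior.extend((x, y) for x in range(left, right + 1))
    return interior
```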
Once the points of the borderline are given, we can sort them from top to bottom by y coordinate and from left to right by x coordinate. Then, for any y coordinate, we obtain a list of points with increasing x coordinates. During a horizontal scan at a fixed y coordinate, the number of jumping points in this list must be even; this gives us the pair-wise jumping points, and all internal points lie between each pair of jumping points. Scanning from top to bottom, we obtain all the internal points of the connected region. We can then obtain the cuboidal-element mesh for this layer by mapping each point onto one element.

Because different elements may have different material properties, we need to find the segmentation information for each element. From the grayscale histogram we have defined the grayscale ranges corresponding to the tissues. Typically an element contains many voxels belonging to different tissues, whereas the FEM requires each element to be homogeneous. We therefore count the number of voxels of each tissue in the element and take the majority tissue to be the material of the element. Thus we can map the elements onto the different tissues, as shown in Figure 5 . Meshing the object layer by layer, we finally obtain the total FEM mesh, from which we can further calculate the volume of each tissue.

The surface information of the object is important for applying boundary conditions and mechanical loadings. The software uses 2-D rectangular elements to describe the surface. Each 3-D cuboidal element has six rectangular faces, and we collect the faces from all cuboidal elements; for an object containing N cuboidal elements, we obtain 6 N 2-D rectangular elements. Obviously, not all of these rectangular elements lie on the surface of the object. If an element is not on the surface then, by connectivity, another 2-D element containing the same nodes must exist, belonging to the neighboring cuboidal element. Eliminating each such pair of inside elements, we obtain the surface elements. We then input the mesh, with its segmentation information, into an FEM program in the required data format, assign material properties to the tissues, and apply boundary conditions to the surface nodes. The mechanical deformation, internal stress, and strain can finally be calculated by the FEM software.
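The face-cancellation rule used above to extract the surface can be sketched as follows (illustrative Python, not the authors' code; the eight-node ordering of each cuboid is an assumption).

```python
from collections import Counter

def surface_faces(elements):
    """`elements` is an iterable of 8-node cuboids, each a tuple of node ids
    ordered (n0..n7) with n0-n3 the bottom face and n4-n7 the top face.
    Faces shared by two cuboids cancel; the remainder form the surface."""
    counts = Counter()
    for n in elements:
        faces = [(n[0], n[1], n[2], n[3]), (n[4], n[5], n[6], n[7]),  # bottom, top
                 (n[0], n[1], n[5], n[4]), (n[1], n[2], n[6], n[5]),
                 (n[2], n[3], n[7], n[6]), (n[3], n[0], n[4], n[7])]  # four sides
        for face in faces:
            counts[tuple(sorted(face))] += 1   # key faces by their node set
    return [face for face, c in counts.items() if c == 1]
```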
Results and Discussion 3-D FEM Mesh and Material Properties To illustrate the capability of this software, we construct an FEM model of a woman's breast and simulate the mechanical deformation under applied compressive forces. A set of CT images of the prone breast was acquired, consisting of 512 × 512 × 243 voxels with a voxel size of 0.46875 × 0.46875 × 0.6 mm 3 . As an example, the 148th slice is shown in Figure 1 . The breast includes three kinds of tissue: fat, parenchyma, and tumor, represented by three grayscale levels (dark, gray, and light, respectively). Using the ImageParser package, we mesh the breast with cuboidal elements of size 2.8125 × 2.8125 × 3 mm 3 . The breast is meshed into 14,902 elements with 18,486 nodes, as shown in Figure 7 . The tumor, parenchyma, and fatty tissue consist of 154, 5,783, and 8,965 elements, respectively, and the surface of the breast comprises 6,900 rectangular 2-D surface elements. The region of the breast is defined as 0 < x < 84.375 mm , 0 < y < 87.1875 mm and 0 < z < 135 mm . Here x runs from left to right in a slice of the image, y from top to bottom, and z from the first slice to the last. With respect to the human body, y represents the normal direction of the coronal plane, while z is the normal direction of the axial (transverse) plane (Figure 7 ).

Figure 7 3D FEM Mesh of the Breast. The breast is meshed by cuboidal elements of size 2.8125 × 2.8125 × 3 mm 3 ; 14,902 elements and 18,486 nodes are generated. The breast occupies the region 0 < x < 84.375 mm , 0 < y < 87.1875 mm and 0 < z < 135 mm .

Based on Krouskop et al. [ 13 ], the initial elastic moduli of the three tissues are taken as 20 kPa for fat, 35 kPa for parenchyma, and 100 kPa for tumor. Because these tissues may undergo large (finite) deformation, we apply the Mooney-Rivlin nonlinear elastic (hyperelastic) model as the constitutive law for finite deformation. From the initial elastic moduli we calculate the Mooney-Rivlin material constants as C 01 = 1,333 Pa and C 10 = 2,000 Pa for fat; C 01 = 2,333.3 Pa and C 10 = 3,500 Pa for parenchyma; and C 01 = 6,667 Pa and C 10 = 10,000 Pa for tumor. Note that, due to the nonlinear characteristics, the elastic modulus of each tissue changes as a function of deformation.

FEM Modeling by ANSYS ANSYS 7.0 [ 14 ] is a commercial nonlinear FEM software package. We input the nodes and elements into ANSYS and define the material models for the three tissues. Compression applied with two flat paddles is designed to simulate the clinical mammography examination. The ANSYS elastic contact model is adopted for the interaction between the breast tissue and the much more rigid paddle, whose Young's modulus and Poisson's ratio are taken as 210 GPa and 0.3, respectively. During the compression process the breast deforms, and the contact area between the breast surface and the paddle increases automatically. The friction coefficient between the breast and the paddle is assumed to be 0.2. As boundary conditions, all nodes attached to the thorax are constrained such that U x = U y = 0, so that they can only move in the z direction, for computational convenience. The two paddles move toward each other at a quasi-static strain rate, and the maximum paddle movement is limited to 13.5 mm (20% deformation) in the z direction.

FEM Results To simulate the nonlinearly elastic deformation, we divide the compression process into 20 incremental steps; at each step the displacement, strain, and stress fields are calculated. Figure 8 shows the von Mises strain and tangent Young's modulus distributions at the last step. The deformation-dependent tangent Young's modulus is defined as E = dσ e /dε e , where σ e and ε e are the von Mises stress and strain, respectively. Note that for a linear elastic material E is a constant; because the material properties of all the tissues here are nonlinear, E changes during the compression process.
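As a consistency check not stated in the original, at zero strain the tangent modulus reduces to the initial modulus, and, assuming near-incompressibility, the Mooney-Rivlin constants listed above reproduce the quoted initial moduli through the standard small-strain relation:

```latex
E_0 \;=\; 3\mu_0 \;=\; 6\,(C_{10} + C_{01}):\qquad
\text{fat: } 6(2000+1333)\,\mathrm{Pa} \approx 20\,\mathrm{kPa},\quad
\text{parenchyma: } 6(3500+2333.3)\,\mathrm{Pa} = 35\,\mathrm{kPa},\quad
\text{tumor: } 6(10000+6667)\,\mathrm{Pa} \approx 100\,\mathrm{kPa}.
```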
Figure 8 The Strain and Tangent Young's Modulus Distribution. The von Mises strain (a) and tangent Young's modulus (b) distributions in the layers at z = 69, 78, 87 mm are illustrated. Because the tumor is much harder than the other tissues, its tangent Young's modulus is markedly higher, and its strain lower, than in the neighboring region.

Figure 8(a) shows the von Mises strain for the sections at z = 69, 78 and 87 mm. The strain around the thorax is quite significant owing to the boundary constraint, whereas the strain close to the skin is small because of the free boundary condition. In the region of the tumor the strain is much smaller than in the neighboring region, because the tumor is much harder than the other tissues. Figure 8(b) shows that the tangent Young's modulus is no longer uniform, even within a single tissue, because the strain field is not uniform.

Conclusions The ImageParser system has been developed to create FEM mesh models from 3-D medical images. A semi-automatic method has been proposed to detect ROIs within complex image structures. The ROIs can be meshed into cuboidal elements and segmented based on the grayscale of the voxels. Using a 3-D CT image volume of a woman's breast, it has been demonstrated that the ImageParser can effectively mesh the breast into cuboidal elements and simulate realistic nonlinear deformation responses of the breast tissues under compression.

Authors' contributions LZS, GW, and MWV conceived and planned this research project. HMY and LZS designed and developed the software. TY prepared the CT image volume of the breast. TY, JW, and GW analyzed the CT images. HMY, LZS, and GW wrote the manuscript.
535932 | Cost-effectiveness of an intensive group training protocol compared to physiotherapy guideline care for sub-acute and chronic low back pain: design of a randomised controlled trial with an economic evaluation. [ISRCTN45641649] | Background Low back pain is a common disorder in western industrialised countries, and the types of treatment for low back pain vary considerably. Methods In a randomised controlled trial, the cost-effectiveness and cost-utility of an intensive group training protocol versus physiotherapy guideline care for sub-acute and chronic low back pain patients are evaluated. Patients with back pain for longer than 6 weeks who are referred to physiotherapy care by their general practitioner or medical specialist are included in the study. The intensive group training protocol combines exercise therapy with principles of behavioural therapy ("graded activity") and back school. This training protocol is compared to physiotherapy care according to the recently published Low Back Pain Guidelines of the Royal Dutch College for Physiotherapy. Primary outcome measures are general improvement, pain intensity, functional status, work absenteeism and quality of life. The direct and indirect costs will be assessed using cost diaries. Patients will complete questionnaires at baseline and 6, 13, 26 and 52 weeks after randomisation. Discussion No trials are yet available that have evaluated the effect of an intensive group training protocol including behavioural principles and back school in a primary physiotherapy care setting, and no data on cost-effectiveness and cost-utility are available. | Background Low back pain is a very common complaint with major social and economic consequences. In a recent cross-sectional study, the annual prevalence of low back pain in the general Dutch population was estimated at 44% [ 1 ]. The course of low back pain is usually relatively short: about 80–90% of people with low back pain recover spontaneously within four to six weeks. However, approximately 1–7% develop chronic low back pain. Although this is a relatively small group, the economic consequences are enormous [ 2 ]. The total costs of low back pain in the Netherlands in 1991 were estimated at 1.7% of the Gross National Product [ 3 ], and about 93% of these costs were due to absenteeism and disablement. Because of the enormous costs related to low back pain, effective interventions aimed at prevention and treatment of chronic complaints are necessary. The Cochrane Collaboration has published several systematic reviews on the effectiveness of different treatments for low back pain. Exercise therapy, back schools and behavioural therapy seem to be the most promising interventions for the treatment of chronic low back pain [ 4 ]. The authors recommended future trials with sufficiently large sample sizes and sufficiently long follow-up periods. Cost-effectiveness and cost-utility analyses of treatments were also recommended, because the observed differences in effectiveness were only small. Evidence-based physiotherapy for sub-acute and chronic low back pain patients consists of adequate information and an active approach, including behavioural principles. As physiotherapists have not yet put these principles into practice [ 5 - 7 ], two important barriers have to be dealt with. First, changing the behaviour of health care providers is always very difficult, even when guidelines are actively implemented [ 8 ].
Second, physiotherapists usually do not have specific knowledge of behavioural principles and are usually not specifically trained to provide behavioural therapy. To address these issues, physiotherapists in Amsterdam have developed a new intervention program. This program not only makes optimal use of the combination of exercise therapy, behavioural therapy and back school principles, but structures them into a protocol that helps physiotherapists perform the intervention in clinical practice. This trial will evaluate the cost-effectiveness and cost-utility of the intensive group training protocol compared with physiotherapy guideline care.

Methods Study design The study is a randomised controlled trial (RCT). Alongside the trial, a full economic evaluation will be conducted. The Medical Ethics Committee of VU University Medical Centre has approved the study design, protocols and informed consent procedures.

Setting The trial will be conducted in a primary physiotherapy care setting in Amsterdam and its surroundings. Eighty-five physiotherapists will participate in the trial; 40 physiotherapists are trained to provide the intensive group training protocol and 45 physiotherapists are instructed to provide usual physiotherapy care according to the Low Back Pain Guidelines of the Royal Dutch College for Physiotherapy (KNGF).

Study population Patients with non-specific low back pain referred to one of the participating physiotherapists by their general practitioner are eligible for participation in the trial. Patients are included if the current episode of low back pain has lasted more than 6 weeks and the complaints show no tendency to decrease, meaning that the patient has not increased his activities in the last three weeks. Furthermore, patients have to be between 18 and 65 years of age, live or work in Amsterdam, and be insured with one particular insurance company (Agis). This health insurance company covers about 80 to 90 percent of the Amsterdam population and is the only company that reimburses the intensive group training protocol. Patients are excluded from the study if 1) they have specific low back pain, attributable to e.g. infection, tumour, osteoporosis, rheumatoid arthritis, fracture, an inflammatory process, radicular syndrome or cauda equina syndrome; 2) their general practitioner or medical specialist has advised them not to perform physically straining activities; 3) they are pregnant; 4) they have pelvic pain/instability; 5) they are involved in a lawsuit related to their low back pain or to their disability for work.

Patients are recruited by the participating physiotherapists. If patients are interested in participating in the trial, they receive written information about the trial and their name and phone number are passed to a research assistant. The research assistant calls the patient two days later and explains the aim and implications of the study. If the patient agrees to participate, an appointment is made at a local research centre, where a research physiotherapist checks again whether the patient meets the eligibility criteria. Patients who meet the criteria and agree to participate in the trial must sign an informed consent form. Patients are then asked to complete baseline questionnaires, and the research physiotherapist conducts a baseline assessment of physiologic outcome measures.
In accordance with the CONSORT statement, information on the number of recruited and eligible patients and the reasons for exclusion or refusal to participate will be registered for all recruited patients by the participating physiotherapists and the research physiotherapist.

Treatment allocation Patients are randomly assigned to either the intensive group training protocol or physiotherapy guideline care. Randomisation is stratified for duration of complaints to ensure a sufficient number of sub-acute and chronic patients in each treatment group. To avoid inconvenience for the patients, seven local research centres have been set up in different parts of the city. For each research centre two randomisation lists are prepared, with permuted blocks of 4 patients to ensure an equal distribution of patients within each research centre. An independent statistician generated the randomisation lists using series of random numbers. The principal investigator (NvdR), who is not involved in the selection of patients, prepared the opaque, sealed envelopes, guaranteeing concealed randomisation. At the local research centre the administrative assistant hands the next envelope to the patient, who then opens it. The administrative assistant checks the envelope and informs the participating physiotherapist of the treatment allocation.

Blinding Both the research physiotherapists and the principal investigator remain blinded to the allocation of treatment. Patients cannot be blinded to the interventions; as a consequence, most outcome measures, consisting of self-report questionnaires, are not blinded either. All physical outcome measures are assessed blindly by the research physiotherapists, as we ask the patients not to reveal information about their treatment to them. The participating physiotherapists cannot be blinded to treatment allocation, but they are not involved in the assessment of outcome measures.
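The allocation scheme described above (two lists per centre, one per duration stratum, in permuted blocks of four) can be illustrated with the short Python sketch below. This is only a schematic; the trial's actual lists were produced by the independent statistician.

```python
import random

def randomisation_list(n_blocks, seed):
    """One concealed list: permuted blocks of 4, two allocations per arm."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = ["training", "training", "guideline", "guideline"]
        rng.shuffle(block)          # random permutation within each block
        sequence.extend(block)
    return sequence

# Two lists per research centre, one per stratum (sub-acute / chronic).
lists = {}
for centre in range(1, 8):
    for i, stratum in enumerate(("sub-acute", "chronic")):
        lists[(centre, stratum)] = randomisation_list(n_blocks=10,
                                                      seed=100 * centre + i)
```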
Interventions Patients assigned to physiotherapy guideline care are treated according to the recently published Low Back Pain Guidelines of the Royal Dutch College for Physiotherapy (KNGF) [ 9 ]. The guidelines recommend giving adequate information, advising patients to stay active, and providing exercise therapy with a behavioural approach for patients with sub-acute and chronic low back pain. As the guidelines are relatively new, the physiotherapists providing guideline care receive two training sessions of 2.5 hours each to ensure that the guidelines are properly applied; two hours of preparation before each session is strongly recommended. Two experts provide background information and discuss the content of the guideline. Video clips and statements on expected barriers are used to start discussions in groups of 10–15 physiotherapists supervised by expert trainers. After 4 months a follow-up session of 2.5 hours is organised to discuss practical problems and to ensure that all physiotherapists are working according to the guideline. The physiotherapists are asked to complete a form for each participating patient they treat, registering the treatment goals, the content of the treatment, the total number of sessions in the treatment period and, if applicable, the arguments for deviating from the guideline.

The intensive group training protocol combines exercise therapy with principles of back school and behavioural therapy. Back school principles include group lessons with adequate information on the causes of low back pain, factors influencing low back pain, advice on physical activity, and dealing with relapse. Operant conditioning and graded activity are included in the protocol as components of behavioural therapy. Baseline measurements, goal setting and time-contingency are the main elements of the intervention. The purpose of the protocol is to improve activities and participation in work or other social activities, instead of focusing on pain or anatomical impairments. The patient has an active role and is responsible for the results of the therapy; the physiotherapist acts as a coach and focuses on the achieved improvement instead of the remaining complaints [ 10 ]. Active behaviour is reinforced by the physiotherapist.

The protocol has a total duration of 30 weeks and consists of three phases: the starting phase, the treatment phase and the generalisation phase. It comprises 10 individual sessions of 30 minutes each and 20 group sessions of 1.5 hours each. During the first phase of three weeks, six individual sessions are planned for patient history and physical examination, providing information on the treatment, determining the baseline level of functional capacity, and signing a treatment contract. During the treatment phase, the group sessions take place twice a week for eight weeks. Every patient has his own gradually increasing exercise program, with an operant-conditioning behavioural approach based on the baseline level of functional capacity. The treatment phase gradually changes into the generalisation phase, in which patients learn to apply everything they have learned in the treatment phase to their own daily situation. The frequency of the sessions therefore decreases in the last four weeks; patients are encouraged to exercise more at home and to choose a physical activity they will continue after treatment has finished. Two individual sessions are planned for evaluation during the twelve-week training period, and two additional individual sessions are planned three weeks and three months after the group sessions have finished.

The exercise program consists of: 1. warming-up and cooling down; 2. aerobic exercises on a rowing machine, stationary bike or treadmill; 3. muscle strengthening exercises for the lower back, abdomen and buttocks; 4. exercises that specifically apply to the patient's situation; 5. home exercises. The exercises mentioned under point 4 are determined by a Patient Main Complaint Form [ 11 ]. During the first intake, patients are given this form, which lists thirty different activities (e.g. turning in bed, lifting, walking). The patient is asked to select and prioritise the activities he has had trouble with during the last week and would very much like to see improved in the following months. The physiotherapist discusses the form with the patient and designs specific exercises for these activities. Three baseline measurements are performed to determine the maximal performance (for example, the maximum number of repetitions) for each exercise separately. The starting point of the program is 70% of the mean of the three measurements, in order to avoid failure and ensure the experience of success. In agreement with the patient, the training quota are determined by the physiotherapist from this starting point, the goals and the training period, to provide a gradually increasing program, as sketched below.
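The quota arithmetic can be illustrated with a small sketch (hypothetical numbers; the protocol does not prescribe a specific interpolation, so a linear increase is assumed here).

```python
def training_quota(baselines, goal, n_sessions):
    start = 0.7 * sum(baselines) / len(baselines)   # 70% of the baseline mean
    step = (goal - start) / (n_sessions - 1)        # time-contingent increase
    return [round(start + i * step) for i in range(n_sessions)]

# e.g. baselines of 10, 12 and 14 repetitions, a goal of 25, 16 group sessions:
print(training_quota([10, 12, 14], goal=25, n_sessions=16))
# -> quotas rise from 8 to 25 repetitions, followed regardless of pain that day
```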
The exercise goals are determined by the patient and physiotherapist together, to ensure that the goals are realistic, concrete, trainable and measurable. The treatment contract is signed by both patient and physiotherapist; it states that the training quota are always followed exactly and that the patient keeps the graphs of finished sessions. For their training, the physiotherapists receive instruction on the background and content of the protocol and are trained to include behavioural principles in the physiotherapeutic management of low back pain, at two meetings of six hours each. In groups of 7–8 physiotherapists, discussions and role playing are supervised by one expert trainer with extensive experience in behavioural principles. Four months after the last meeting, two follow-up sessions of 4 hours are organised to discuss practical problems and practise difficult situations with the trainers. The principal investigator (NvdR) will regularly visit the group sessions at the physiotherapy practices to monitor the conduct of the intensive group training protocol. For each participating patient, the physiotherapist is asked to complete a registration form containing the treatment goals, the content of the different sessions and an evaluation of the protocol.

Contrast between physiotherapy guideline care and the intensive group training protocol The intensive group training protocol is a standardised approach consisting of 30 treatment sessions. As physiotherapy guideline care is not a protocol, the number of sessions will vary per patient; in daily practice the average number of treatment sessions is 9 and the average duration of treatment is 6 weeks [ 6 ]. The organisation of the intensive group training protocol is based on back school principles and takes place in groups of 5–8 patients, whereas physiotherapy guideline care is organised as usual physiotherapy care, with patients treated individually. The guidelines recommend exercise therapy with a behavioural approach, but provide no further guidance on the content of the exercise program (type, intensity, frequency and duration of exercises) or on integrating behavioural principles. In the intensive group training protocol, the content of the exercise therapy, back school and operant conditioning is thoroughly described, and the physiotherapists are trained to apply these skills in practice. The contrast therefore lies in the number of sessions, group versus individual therapy, and the conduct of the behavioural therapy.

Co-interventions and compliance During the intervention period, co-interventions are discouraged; they will, however, be reported and evaluated. Compliance with the intensive group training protocol is assessed by registering the number of treatment sessions that patients attend. The content of treatment and the number of treatment sessions received by the physiotherapy guideline care group will also be registered.

Outcome assessment In 1998 a proposal for the standardised use of outcome measures in low back pain studies was published [ 12 ]. An international group of investigators proposed a set of five domains that should be used in all low back pain studies: pain symptoms, back-related function, general well-being, disability and satisfaction with care. In addition, several other outcome measures commonly used in randomised trials on low back pain will be assessed.

Primary outcome measures 1. Functional status is assessed with the Roland Morris Disability Questionnaire [ 13 ].
The questionnaire consists of 24 questions related to activities of daily living. Each item is scored either 0 (disagree with the statement) or 1 (agree with the statement), and the total score ranges from 0 (no dysfunction) to 24 (maximum dysfunction). 2. General improvement is measured on a 6-point scale ranging from "much worse" to "completely recovered". 3. An 11-point numerical rating scale is used to determine pain intensity, ranging from 0 "no pain" to 10 "very severe pain" [ 14 ]. 4. Work absenteeism is measured with the Short Form Health and Labour Questionnaire [ 15 , 16 ]. This questionnaire was developed for collecting quantitative data on the relation between illness, treatment and work performance. Absence from work, reduced productivity at paid work, unpaid labour production, and impediments to paid and unpaid labour are the four dimensions addressed in the Health and Labour Questionnaire. 5. The EuroQol instrument is administered to assess the patient's general health status. The questionnaire describes general health status in 5 dimensions: mobility, self-care, usual activities, pain/discomfort and anxiety/depression [ 17 ]. Because each of the five dimensions can be divided into 3 levels, a total of 243 health states can be defined. Using the model by Dolan (1997), the total score will be expressed in utilities [ 18 ]. The official Dutch translation of the EuroQol will be administered.

Secondary outcome measures 6. The Tampa Scale for Kinesiophobia was developed by Miller et al. as a measure of fear of movement/(re)injury [ 19 ]. The questionnaire is relatively short and can easily be used in a primary care setting [ 20 ]. The scale consists of 17 items, each scored on a 4-point Likert scale ranging from "strongly disagree" to "strongly agree". The Dutch translation of the TSK by Vlaeyen et al. [ 21 ] will be used in the trial. 7. Cognitive and behavioural pain coping strategies are assessed using the Pain Coping Inventory [ 22 ]. This questionnaire covers 6 factors: pain transformation, distraction, reducing demands, retreating, worrying and resting. All 34 items are scored on a four-point scale where 1 equals "hardly ever/never" and 4 equals "very often". A recent validation study of the Pain Coping Inventory reported the coping scales to be reliable and sensitive enough to identify differences between coping strategies in pain patients [ 23 ]. 8. Self-efficacy beliefs are measured using the Pain Self-Efficacy Questionnaire [ 24 ]. With the approval of Nicholas, the original 10-item questionnaire was translated into Dutch by the authors and subsequently translated back by a professional translator. Each item is scored on a 7-point scale ranging from 0 "not at all confident" to 6 "completely confident", and a total score is obtained by summing the item scores. 9. For measuring patient satisfaction, four items (out of 17) from the Patient Satisfaction Scale of Cherkin et al. [ 25 ] are combined with nine items (out of 12) from the Patient Survey Instrument of Beattie et al. [ 26 ]. The Patient Satisfaction Scale was developed to measure patient satisfaction with the care received from their physician and is a multidimensional, disease-specific measure intended specifically for patients with low back pain. The Patient Survey Instrument is a multidimensional generic measure developed to determine overall satisfaction with physical therapy.
The items of the combined list are rated using a 5-point "agree–disagree" response format. The authors believe that the combination of the two instruments is more applicable to the situation in this trial.

Physical measurements The physical measurements will be performed at several local research centres, so all physical tests must be easy to administer and practical. To minimise patient burden, the tests should take as little time as possible and should not be too strenuous for the patient. 10. Anthropometric measurements will be taken for the interpretation of the physical outcome measures. Body weight, body height and skinfold measures are assessed. Skinfold thickness at the biceps, triceps, subscapular and suprailiac sites will be measured with a Harpenden skinfold calliper. The skinfold-thickness equation developed by Durnin and Womersley will be used to determine body fat mass [ 27 ]. 11. Aerobic capacity will be assessed with the Chester Step Test [ 28 , 29 ]. This test was developed to determine aerobic capacity in a relatively simple and practical way. The test is sub-maximal and ends when the participant's heart rate reaches 75% of its predicted maximum. The test starts at a very slow step rate (15 steps per minute), and every two minutes the step rate increases by 5 steps per minute. Because the action of stepping is familiar to most people, the majority of the patients in the study will be able to perform the test. 12. The isometric endurance of the back muscles is evaluated with the test according to Ito [ 30 ]. The patient lies on the floor with a pillow under the abdomen and arms by the side, and raises the trunk to a horizontal position; the time the patient can maintain this position is measured. 13. The fingertip-to-floor distance is measured to determine the flexibility of the spine [ 31 ]. Standing with bare feet together and knees straight, the participant is asked to bend forward maximally; the distance from the tips of the middle fingers to the floor is measured with a metal-ended tape measure.

Prognostic measures At baseline, data on various prognostic measures will be collected to evaluate whether randomisation has resulted in two prognostically comparable groups, and to allow adjustment for baseline differences in the analysis if necessary. 1. Data on individual factors such as age, gender, level of daily activity and preference for one of the treatment groups will be gathered by the administrative assistant. 2. Characteristics of the low back pain (duration and severity of the current episode and number of previous episodes) will be assessed by the research physiotherapists.

Cost data The aim of the economic evaluation is to determine and compare all back pain-related costs of patients receiving the intensive group training protocol or physiotherapy guideline care. The costs will be related to the effects of the interventions, and the cost-effectiveness analysis will be conducted from a societal perspective. Direct health care costs (including the costs of physiotherapy, additional visits to other health care providers, prescription medication, professional home care and hospitalisation) and direct non-health care costs (such as out-of-pocket expenses, costs of paid and unpaid help, and travel expenses) will be included. Indirect costs of production loss due to back pain will also be estimated, for both paid and unpaid labour.
Direct and indirect costs will be evaluated with cost diaries that patients keep for the whole period of their participation in the trial [ 32 ]. General health status is measured with the Dutch version of the EuroQol, so that the results of the cost-effectiveness analysis can be compared with those for other health care problems. Patients will be asked to complete questionnaires at baseline and 6, 13, 26 and 52 weeks after randomisation. Physical measurements will be performed at baseline and 13 and 52 weeks after randomisation. Table 1 gives an overview of the data collection.

Sample size To detect a clinically relevant difference in pain intensity (an improvement of 2 points on the 11-point pain intensity numerical rating scale after 52 weeks [ 33 ]) with a power (1-β) of 90% and a significance level of 5% (two-sided), two groups of 48 patients are needed. A population of chronic low back pain patients typically has a mean score of 7 (SD 2) on an 11-point pain intensity numerical rating scale. To detect a clinically relevant difference in disability (an improvement of 3 points on the RDQ after 52 weeks [ 34 ]) with a power (1-β) of 90% and a significance level of 5%, two groups of 60 patients are needed. Chronic low back pain patients typically have a mean score of 15 (SD 5) on the RDQ. We expect a drop-out rate of 10% at most; drop-out rates in similar RCTs on neck pain and tennis elbow conducted at our institute were less than 3%. Therefore, to obtain complete data sets for 120 patients with sub-acute and 120 patients with chronic low back pain, 280 patients will be recruited: 140 sub-acute and 140 chronic low back pain patients. The 85 participating physiotherapists will each be asked to recruit 5 patients, and the recruitment period will last 12 months.

Statistical analysis Intention-to-treat analyses will be conducted for all patients participating in both groups. A generalised linear mixed model will be applied to evaluate differences between the groups over the 52-week period. Subgroup analyses will be performed for duration of back pain (sub-acute, 6–12 weeks, versus chronic, >12 weeks), for severity of complaints at baseline, for age, and for the psychosocial characteristics somatisation, fear avoidance, catastrophising and self-efficacy. Bootstrapping will be used for pair-wise comparison of the mean differences in direct health care, direct non-health care, total direct, indirect and total costs between the intervention groups. Confidence intervals will be obtained by bias-corrected and accelerated (Bca) bootstrapping using 2000 replications [ 35 ]. Cost-effectiveness ratios will be calculated by dividing the difference in mean costs between the two interventions by the difference in their mean effects. The ratios will include the primary clinical effect measures of the trial, i.e. general improvement, functional status, pain intensity and quality of life. Ratios will be presented graphically on a cost-effectiveness and cost-utility plane, and acceptability curves will be calculated showing the probability that the intensive group training protocol is cost-effective at a specific ceiling ratio.
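For illustration, the sample-size and bootstrap computations described above can be sketched in Python. This is not the study's code: the sample-size function uses the standard normal-approximation formula, which approximately reproduces the 60-per-group figure for the RDQ, and a simple percentile interval stands in for the BCa interval specified in the protocol.

```python
import numpy as np
from scipy import stats

def n_per_group(delta, sd, alpha=0.05, power=0.90):
    """Normal-approximation sample size per group for a mean difference."""
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    return int(np.ceil(2 * (z * sd / delta) ** 2))

print(n_per_group(delta=3, sd=5))   # RDQ: 59, which the protocol rounds to 60

def bootstrap_ci(costs_a, costs_b, reps=2000, seed=1):
    """Pair-wise bootstrap of the mean cost difference (percentile CI shown;
    the study specifies bias-corrected and accelerated intervals)."""
    rng = np.random.default_rng(seed)
    diffs = [rng.choice(costs_a, len(costs_a)).mean()
             - rng.choice(costs_b, len(costs_b)).mean()
             for _ in range(reps)]
    return np.percentile(diffs, [2.5, 97.5])
```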
Discussion The intensive group training protocol includes interventions, such as exercise therapy, behavioural treatment and back school, that have recently been shown to be effective in patients with low back pain. The graded activity intervention is considered to be a form of behavioural treatment, and earlier studies have shown its effectiveness for workers on sick leave due to low back pain [ 36 , 37 ]. The trial by Staal et al. (2003) was conducted at an occupational health service department of an airline company in the Netherlands; in that study, graded activity was found to be more effective than usual care in reducing the number of days of sick leave. Lindström et al. (1992) examined the graded activity intervention in sick-listed workers at the Volvo factories in Sweden and also showed a significant reduction in the number of days of sick leave. All participants in both studies were workers on sick leave due to low back pain; the graded activity intervention has not yet been studied in a primary care setting. Although participants in our study follow an individual, gradually increasing exercise program, the training and back school take place in a group setting. In the Netherlands, the national physiotherapy guidelines for low back pain consist of general recommendations on the diagnostic and therapeutic management of low back pain, while the intensive group training protocol prescribes in detail the frequency, intensity and duration of the exercise therapy, the content of the informative group lessons, and the graded activity. The intensive group training protocol is expected to be more effective because it is a detailed protocol and because it combines principles of exercise therapy, back school and behavioural therapy, which have recently been shown to be effective for this patient population in systematic Cochrane reviews [ 38 - 40 ]. Although the general practitioner, and if applicable the occupational physician, will be informed about the treatment and progress of the patient, the intervention is mono-disciplinary; a multidisciplinary intervention in a primary care setting has major practical implications and would increase the costs of the intervention considerably. The intensive group training protocol itself probably generates higher costs than physiotherapy guideline care, but we expect a reduction in health care utilisation and productivity losses in the long term, compensating for the increase in treatment costs. This trial will provide physiotherapists with more knowledge of, and experience in, behavioural treatment for low back pain patients, and may increase the efficiency of physiotherapeutic care for this complex and expensive patient group. If the intensive group training protocol proves to be more cost-effective than physiotherapy guideline care, a future update of the national physiotherapy guideline will include more specific recommendations in line with this protocol, and the protocol will then be implemented throughout the Netherlands.

List of abbreviations KNGF = Royal Dutch College for Physiotherapy; Bca bootstrapping = bias-corrected and accelerated bootstrapping

Competing interests The author(s) declare that they have no competing interests.

Authors' contributions NvdR is responsible for the data collection and drafted the manuscript. MWvT, JMB, WvM and HCWdV were involved in developing the original idea for funding and were co-applicants on the successful funding proposal. WKF and ACO will both contribute to data collection and processing. All authors participated in the development of the research protocols and in the design of the study. All authors read and corrected draft versions of the manuscript.
Pre-publication history The pre-publication history for this paper can be accessed here:
524036 | Lead editorial: The need for greater perspective and innovation in epidemiology | This editorial introduces the new online, open-access, peer-reviewed journal, Epidemiologic Perspectives & Innovations . Epidemiology (which we define broadly, to include clinical research and various approaches to studying the health of populations) is a critically important field in informing decisions about the health of individuals and populations. But the desire for new information means that the health science literature is overwhelmingly devoted to reporting new findings, leaving little opportunity to improve the quality of the science. By creating a journal dedicated to all topics of and about epidemiology, except standard research reports, we hope to encourage authors to write more on the neglected aspects of the field. The journal will publish articles that analyze policy implications of health research, present new research methods and better communicate existing methods, reassess previous results and dogma, and provide other innovations in and perspectives on the field. Online publishing will permit articles of whatever length is required for the work, speed the time to publication and allow free access to the full content. | Epidemiology is a critically important field in informing decisions about the health of individuals and populations. It is also a young field, with the potential for seeing fundamental improvements in the conduct of the science every year. But the desire for new information means that the health science literature is overwhelmingly devoted to reporting new findings, leaving little opportunity to improve the quality of the science. Epidemiologic Perspectives & Innovations ( EP&I ) was created to provide a forum for efforts to improve the quality of health science research and its applications. Successful enterprises know they must devote a substantial portion of their resources – at least a few percent and often ten percent or more – to assessing whether the rest of their resources are being optimally directed. Such efforts include research and development, which improves the quality of products, and outcomes research, which assesses the impact of those products. In an applied science like epidemiology, these efforts should be devoted to designing new methods for conducting studies and interpreting results, translation of research into effective policy recommendations, critical review of past findings and current practice, and improvement of teaching the next generation of scientists. Given the resources expended by the health science research enterprise, epidemiology (which we define broadly, including both population health and clinical research, and covering biological, behavioral, and economic dimensions) is characterized by remarkably little innovation, let alone critical review of existing dogma. A well-educated epidemiologist transported forward in time from 1980 would probably be able to read (and participate in) most current research and would find few surprises other than a few specific study results. Yet a substantial portion of all the epidemiology ever conducted has been carried out since 1980 – probably more than half, even including in the count all medical literature back to the dawn of literacy. The outputs of the science have increased to a torrent; research to improve the quality of the science is a trickle. It can hardly be argued that this slow innovation and lack of perspective is because current approaches offer little room for improvement. 
Current discussions of advanced statistical methods, the nature of random error, sensitivity analysis and uncertainty quantification, and proper interpretation of results, to name just a few, show that most current epidemiologic research uses methodology in need of improvement. Granting that many problems would be eliminated if health researchers – who often have minimal training in epidemiology – just followed the dictates of a good basic epidemiology textbook, there are still major problems that lack simple solutions. It is troubling that we plow ahead with billions of dollars worth of research every year while making minimal effort to answer fundamental questions about what that research is really telling us. Epidemiology is far too important to our society to be treated as an exercise in uncritically following existing formulae. The limitations of the field become even more apparent after the research is conducted. Results are cast out into the world as if they speak for themselves, forcing policy makers, clinicians, and interested lay people to interpret them, despite their lack of expertise in the topic area and analytic methods, and lack of necessary context. A missing marketplace for ideas These problems leave plenty of blame to go around, but a fair amount of it rests with the lack of opportunity to publish scholarly analytic work aimed at solving the problems. While some health researchers might be guilty of not giving the field's limitations a second thought, most probably can envision contributions they would like to make to improve it. Every conclusions section containing a single paragraph of policy discussion suggests that researchers would like to contribute to policy analysis, but have exhausted their paltry word limit. Every dissertation that reflects upon and challenges standard methods shows fresh analytic thinking, but the innovations might be read by just the half dozen people who view the actual dissertation, since the resulting publications will likely be limited to a brief recounting of the methods and results (narrowly defined). Every time a professor explains to her students that something they learned in a previous class or from reading the literature is wrong, there is a lesson that should be getting out to everyone in the field, not just the ten students in that room. These scenarios call for full-length, analytically complete presentations (see endnote 1). Such analysis cannot be grafted onto a paper primarily focused on presenting numerical results from a study, given the severe length limitations in print journals in the health sciences. Indeed, such analyses must usually be longer than typical word limits allow, even without the study results or applications that are needed to illustrate them or provide background. A researcher who writes such a paper has a very difficult time getting it published. Moreover, it is easy to anticipate that difficulty and never even try to write the paper. The founding editors of this journal were inspired by their own experiences with these difficulties. There are journals that cover some of these areas, but to a remarkably limited extent. "Health policy" (and more so, "health economics") journals focus on the financial side of health care, rather than more general health policy or economic issues. Statistics journals, and even the research methodology slots available in the health research journals, favor mathematically complicated advances over more practical advice. 
With the exception of occasional relevant entries in medical journal education series, articles devoted to improving the teaching or understanding of epidemiologic research have no clear venue. Even with some niches for certain methods, policy, overview, or perspectives articles, it is difficult to be enthusiastic about writing in these areas with no clear idea of where the results are likely to be published. Epidemiologic Perspectives & Innovations ( EP&I ) was created to provide this forum, taking advantage of the greater visibility and article length offered by open-access online publishing. Article topics The following are areas of inquiry EP&I will publish. (An accompanying editorial [ 1 ] presents a more specific "wish list" of some of the particular analyses the editors would like to see.) Policy: Policy recommendations do not flow simply and directly from health research results, and educated recommendations demand more than a few sentences of analysis at the end of a research report. A good policy recommendation requires high-quality analysis of a nature and quantity that does not fit in standard health research journals. At the same time, health researchers cannot leave policy analysis of their results for other people to do and publish in policy journals because there are very few such people or journals. If health researchers do not take the lead on policy analyses based on their research, the analyses will likely never be done. EP&I fills the gap by providing a forum for policy analysis in the context of health research. Policy analysis articles can be free-standing or specifically based on research reports published elsewhere. Submissions in this area should be analytic (addressing policy/decision analysis, economics, ethics, or other areas of analytic inquiry), rather than commentary. Methodologic Innovation and Communication: EP&I welcomes submissions in all areas of epidemiologic research methodology, from study design to data analysis and reporting, including new tools, simple but important observations, and widely understandable applications of existing tools. The strength of our editorial board in this area means that submissions will be reviewed by experts who understand and appreciate new methods. Unlike most other journals publishing methods articles, EP&I welcomes submissions that are not necessarily at the technological cutting edge (though such submissions are also encouraged), but that contain lessons that are not widely known. We will spare authors the all-too-common experience of being told "everyone already knows that" when they submit a paper that calls for the use of methods or practices that are widely overlooked. Research articles are needed to translate methodological findings that are "known" (in the sense of having been discovered and understood by methods specialists), to make them known (in the sense of being understood and usable by most researchers in the field). Ethics, Philosophy, and Critical Analysis of the Field: In most of the health research literature, any discussion of philosophical points or assessments of the quality of research is labeled "commentary" and restricted to the opinions of a few luminaries. But carefully reasoned ethical analysis, epistemology, analysis of quality, and the like are not mere commentary, and often come from junior researchers or outsiders. Our accompanying "wish list" editorial provides some examples of these types of analysis. 
Re-analyses: The deluge of research results in health science means that few study results are ever carefully re-analyzed, even when their implications are quite important. When such re-analyses do occur, they are often limited to letters or perfunctory assessments in systematic reviews. EP&I offers a forum for publishing full-length re-analyses (which might use different analytic approaches, start with different premises, or report different results) of important previous research. Teaching Methods and Innovations: Many fields have a dedicated teaching section in one or more journals. EP&I will include articles that provide teaching tools, innovations, and methods. Online publishing allows authors to include computer code, spreadsheets, datasets, and other tools that will allow readers to make use of the teaching tools. Teaching articles will be peer reviewed by experienced teachers and at least one current student at the appropriate level to judge the material. The ideal teaching articles will present an approach or method, the specific tools necessary for a reader to implement it, and a report of the authors' experience in using the material. Review will be based primarily on the apparent usefulness of the presented approach. Multidisciplinary Research: This category is somewhat redundant, given that many of the aforementioned article types necessarily draw upon knowledge from multiple disciplines. But it is worth mentioning specifically because it is often difficult to publish work that is based in multiple fields of inquiry and thus does not fit easily into any one of them, or that is squarely in another discipline but is intended for an audience of health scientists. EP&I encourages such submissions and will review them based on their analytic merits in the fields in which they are based and their potential usefulness to health researchers. More generally, EP&I is a home for all articles of and about epidemiology, with the exception of standard research reports. (Reporting research results as part of one of the above article types is, of course, welcome.) This includes many types of papers that are not themselves epidemiologic analysis, but inform epidemiology or are about epidemiology. We suspect that many papers of the above types exist on paper or in researchers' heads, but have previously been difficult to get published. Many more will be written when they are appreciated as analytic work that is central to the field. Flexible format To provide maximum flexibility for these kinds of articles, we worked with BioMed Central to create the "Analytic Perspectives" article type. We expect most submissions to EP&I to be this article type, which allows authors to create a structure that fits their analysis (as opposed to methods-results-discussion, which generally would not fit) and is labeled to emphasize that the article is analytic (as opposed to commentary). We are also taking the unusual (for a health science journal) step of encouraging the use of endnotes. We believe that the lack of substantive endnotes or footnotes – to provide important asides, definitions, or clarifications, to note exceptions to general rules, or provide other elaboration – is a major detriment to the content of health research papers, making it difficult to present certain analytic points. 
For example, an interesting statistical claim or policy observation that is almost always true can be made without further elaboration (in which case the exceptions make the claim incorrect), accompanied by a paragraph of caveats (which is awkward and distracting), or left out (see endnote 2). Often the latter is the author's choice, which impoverishes the literature. An endnote could solve the problem. In other cases, endnotes could include short derivations of calculations that will be obvious to some readers and uninteresting to others, but of genuine interest to the rest. Anyone familiar with the social science literature or many other fields will understand the beneficial uses to which such notes can be put (see endnote 3). Authors should consult the instructions for submissions for mechanical details of endnote use. Target audience and authors We hope that many readers will read EP&I from virtual cover to virtual cover. Most readers of most health science journals scan the table of contents and read the one or two articles that report results in their subspecialty, or never even see a table of contents, but merely a list of search hits on PubMed. Most articles in EP&I should be of some interest to researchers who are serious about understanding analytic health research and its implications. We welcome submissions from researchers with all levels of experience in the field, and from experts in other fields writing for health researchers. Great innovations and critical analysis often come from senior scholars in a field, but they also often come from graduate students, outsiders, and others who are not heavily invested in the status quo of a field of inquiry. Conclusions In 1995, Science published the controversial article, "Epidemiology Faces its Limits" [ 2 ], which suggested that the field had already gathered all the low-hanging fruit and was not able to do much more. The premise implied by the title was dead wrong and still is: epidemiologic research (whether defined broadly or more narrowly) is nowhere near the limits of its technology or potential contribution to our knowledge. But nearly a decade later, the criticisms that rang true when that article was published still ring true; the progress toward "breakthroughs in the methodological tools of epidemiology" called for in that article has been limited. EP&I hopes to encourage the pursuit of breakthroughs (or, better still, a slow and steady flow of new innovations and perspectives) by providing a ready home for publishing a broad collection of such material. Endnotes 1. "Analytic" should not be confused with quantitative calculations and results, which seldom contain much actual analysis. Analysis can be thought of as the intellectual process of systematic inquiry aimed at understanding, explaining, or characterizing phenomena or concepts. 2. For example, authors might want to point out that nondifferential misclassification error that they suspect exists most likely biased a result toward the null. However, an unqualified statement to this effect will likely generate criticism that such error is not always toward the null and that believing so is a sure sign of methodological naivete. An endnote in which the authors point out that there are exceptions, but the bias is still usually toward the null, would allow them to make their point without a long awkward caveat breaking up the main text. 3. The literature in most of these fields uses footnotes, conveniently located at the bottom of the page.
Online publishing leaves us without a bottom of the page, but allows for convenient opening of multiple windows, offering the opportunity to have the endnotes open in a separate browser window. Readers of printed PDF versions will, alas, have to flip to the end.
A report of dangerously high carbon monoxide levels within the passenger compartment of a snow-obstructed vehicle

Background We sought to determine how quickly carbon monoxide would accumulate in the passenger compartment of a snow-obstructed vehicle. Methods A 1992 sedan was buried in snow to the level of the undercarriage, the ignition was then engaged and carbon monoxide levels recorded at 2.5-minute intervals. The primary outcome was the time at which a lethal carbon monoxide level was detected. Six trials were conducted: windows closed; windows open one inch; windows open 6 inches; windows closed and tailpipe swept clear of snow; windows closed and one cubic foot of snow removed around the tailpipe; windows closed and tailpipe completely cleared of snow to ground level in a path 12 inches wide. Results Lethal levels of carbon monoxide occurred within 2.5 minutes in the vehicle when the windows were closed, within 5 minutes when the windows were opened one inch, and within 7.5 minutes when the windows were opened six inches. Dangerously high levels of carbon monoxide were detected within the vehicle when the tailpipe had been swept clear of snow and when a one cubic foot area had been cleared around the tailpipe. When the tailpipe was completely unobstructed the carbon monoxide level was zero. Conclusions Lethal levels of carbon monoxide occurred within minutes in this snow-obstructed vehicle.

Background Carbon monoxide poisoning is the number one cause of toxin-related death in the United States [ 1 ]. It has been estimated that this poison may be responsible for up to 1,500 accidental deaths and 10,000 medical visits annually in the United States [ 2 ]. One of the manners in which people die from carbon monoxide poisoning is hypoxia secondary to entrapment within a snow-obstructed vehicle. Recommendations for prevention of these poisonings have been well documented but not thoroughly studied. The Centers for Disease Control (CDC) recommends that "following heavy snowfall, the public should be reminded to inspect vehicles to ensure that exhaust pipes are cleared of snow before engines are started" [ 3 ]. Brian Horner, in the Wilderness Medicine Letter, made the following recommendation: "The windows should be rolled down one inch on each side to provide cross-ventilation. Check the exhaust tail-pipe frequently to see that it is free of drifting snow" [ 4 ]. Both the Federal Emergency Management Agency (FEMA) [ 5 ] and the National Oceanic and Atmospheric Administration (NOAA) [ 6 ] recommend keeping the tailpipe clear and partially opening a window to prevent carbon monoxide poisoning. In reviewing case reports regarding snow emergencies, it appears that there have been situations where venting carbon monoxide through an open window may not have been sufficient to prevent dangerously high levels of carbon monoxide from accumulating within the passenger compartment [ 7 ]. This raises the question of whether current prevention guidelines are safe and accurate. We performed a pilot study with a single vehicle to simulate a snow emergency whereby a person stranded in a vehicle during a snowstorm would perform a safety maneuver. Our hypothesis was that opening a window a few inches for ventilation or clearing a tailpipe of snow would be sufficient to keep carbon monoxide levels in a non-lethal range.
Methods In 1995, a 1992 four-door sedan whose exhaust system had been inspected by the State of Massachusetts and found to be functioning normally was placed in a driveway and snow was shoveled around its base on all four sides until the car was obstructed by snow to the level of the bumpers. A Nighthawk, battery powered, digital readout, 80 mAmp carbon monoxide detector was attached to the front of the driver's seat headrest. The instrument had the ability to detect carbon monoxide levels in the range of zero to nine hundred ninety nine parts per million (0 – 999 ppm) [ 8 ]. The ignition was engaged and carbon monoxide levels were measured every two and a half minutes until either the maximum level of 999 ppm was detected or 10 minutes had passed. Six trials were performed. Between each trial all doors and windows were opened, and the carbon monoxide was allowed to exhaust from the vehicle until the detector registered zero parts per million. Results Dangerously high carbon monoxide concentrations were recorded in the passenger compartment within three minutes when the windows were closed, within five minutes when the front windows were open one inch and within 7.5 minutes when the windows were opened six inches (Table 1 ). With the windows closed and tailpipe swept clear, a carbon monoxide level of 751 ppm was recorded at 7.5 minutes. With the windows closed and one cubic foot cleared around the tailpipe, the highest CO level recorded was 299 ppm at 10 minutes. When the tailpipe was completely cleared (12 inches wide to ground level) no carbon monoxide was detected.

Table 1 Carbon monoxide levels (ppm) in a snow-obstructed vehicle.

Time (min)   Trial #1   Trial #2   Trial #3   Trial #4   Trial #5   Trial #6
0            0          0          0          0          0          0
2.5          999        530        8          28         0          0
5            -          999        113        555        33         0
7.5          -          -          999        751        265        0
10           -          -          -          585        299        0
12.5         -          -          -          622        276        0

Values in parts per million; a dash (-) indicates the trial had been discontinued after the detector maximum of 999 ppm was reached. Trial #1: Windows closed; Trial #2: Windows open one inch; Trial #3: Windows open 6 inches; Trial #4: Windows closed, tailpipe brushed clear of snow; Trial #5: Windows closed, tailpipe brushed clear of snow in an area one cubic foot around the tailpipe; Trial #6: Windows closed, tailpipe brushed clear of snow in an area 12 inches wide, depth to ground level.

Discussion and conclusions Carbon monoxide is an odorless, tasteless by-product of fossil fuel combustion [ 1 ]. When it accumulates undetected in the passenger compartment of a snow-obstructed vehicle it can rapidly cause toxicity and death. Toxic exposures increase during winter months in the United States, and heavy snowfalls that occur over a short period of time represent a potentially hazardous situation for travelers and for those who must remove snow from vehicles that have been parked outside [ 9 ]. In this study, only opening a window was not enough to prevent accumulation of carbon monoxide within the passenger compartment of the vehicle. As the guidelines suggest [ 3 - 6 ], snow must also be cleared from around the tailpipe. However, in this study, the passenger compartment was not safe until the tailpipe had been brushed clear of snow in an area twelve inches wide down to ground level. The major limitations of this study are the small sample size, the fact that the snow used in the trial was shoveled (higher density) rather than naturally accumulating, and that trial #1 was performed on a cold engine. Since multiple vehicles were not tested and since the temperature of the engine, air, and snow was not determined, it would be difficult to generalize the findings of this single vehicle study.
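To make the Table 1 readings easier to work with, the following minimal Python sketch (not part of the original study) tabulates the per-trial detector readings and reports the first interval at which each trial crossed a danger threshold. The 400 ppm cutoff and the handling of discontinued trials are assumptions for illustration only; the study itself treated even ~300 ppm as dangerously high.

```python
# Table 1 readings (ppm); None marks intervals after a trial was discontinued
# because the detector ceiling of 999 ppm had been reached.
times = [0, 2.5, 5, 7.5, 10, 12.5]  # minutes

trials = {
    "windows closed":          [0, 999, None, None, None, None],
    "windows open 1 inch":     [0, 530, 999, None, None, None],
    "windows open 6 inches":   [0, 8, 113, 999, None, None],
    "tailpipe swept clear":    [0, 28, 555, 751, 585, 622],
    "1 cubic foot cleared":    [0, 0, 33, 265, 299, 276],
    "cleared to ground level": [0, 0, 0, 0, 0, 0],
}

DANGER_PPM = 400  # illustrative threshold, not an official exposure limit

for name, readings in trials.items():
    # First time point whose reading meets or exceeds the threshold.
    hit = next((t for t, ppm in zip(times, readings)
                if ppm is not None and ppm >= DANGER_PPM), None)
    status = (f"reached {DANGER_PPM} ppm by {hit} min"
              if hit is not None else "stayed below threshold")
    print(f"{name:25s} {status}")
```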
Cold engines emit significantly higher levels of carbon monoxide at startup [ 10 ], all vehicles emit different levels of carbon monoxide at startup, and snow density and environmental conditions may affect carbon monoxide accumulation. Further studies to validate the study findings using multiple vehicles at multiple operating temperatures are recommended. Competing interests The authors declare that they have no competing interests. Authors' contributions JG gathered data and prepared the manuscript. SP conceived of the project and gathered data. BO gathered data and presented the abstract at the Society for Academic Emergency Medicine. HS oversaw the project and provided statistical support. Pre-publication history The pre-publication history for this paper can be accessed here:
Dissecting the Transcriptional Control of Body Patterning

To build the complex body plan of higher organisms, thousands of genes must act in a coordinated fashion, becoming active at the right time and in the right place to define structures like head, thorax, and abdomen, or cell types like skin, muscle, and bone. One of the central questions for developmental biologists is how such specific spatiotemporal expression of genes is achieved. The general mechanism of the control of gene expression is well understood: Special proteins, called transcription factors, bind to short stretches of DNA near a gene. By docking to such binding sites, they activate or repress the transcription of the gene into mRNA (which is then translated into protein). Transcription factors often act in a combinatorial fashion—that is, several different factors have to bind in close proximity to each other to achieve a particular transcriptional outcome. As a consequence, their binding sites form clusters, called regulatory elements or modules. In many contexts, the genes that are activated or repressed encode transcription factors themselves, forming a cascade of transcriptional control events. One such transcriptional control hierarchy is the segmentation gene network in the fruitfly Drosophila . Organized in four tiers and acting in combinatorial fashion, the segmentation genes lay out the anterior-posterior axis of the embryo. In a stepwise refinement of expression patterns, they translate broad, overlapping gradients formed by maternally provided transcription factors into a periodic pattern of 14 discrete stripes that prefigure the 14 segments of the larva. The segmentation gene network has long been one of the prime paradigms for studying transcriptional control, and many researchers have worked over the years to experimentally dissect the regulatory interactions within the hierarchy. For some of the most important genes, the regulatory elements driving their expression and the favored binding sites have been identified. Nevertheless, the picture of transcriptional regulation within the segmentation gene network has remained incomplete. This is where the research reported by Mark Schroeder et al. comes in: With the sequence of entire genomes available, it's possible to use existing binding site information to computationally search the neighborhood of genes for regulatory elements. The difficulty here is that in higher organisms such as Drosophila , the binding sites are typically short and variable, and the search space is large; on the other hand, the fact that sites cluster—where transcription factors work in concert—aids the task. To identify regulatory elements, the researchers developed an algorithm, named Ahab, that models the behavior of multiple transcription factors competing for binding sites and fine-tunes the search by detecting clusters of weak sites. Using this approach, Schroeder et al. identified 52 regulatory elements within the segmentation gene network, 32 of them novel. The authors tested a large number of the newly identified modules experimentally by placing them in front of reporter genes that reveal where the modules drive expression within the developing fly. They showed that almost all modules faithfully reproduce the expression pattern of the endogenous gene. To better understand the way segmentation gene modules function, the researchers then systematically analyzed their predicted binding site composition.
They correlated the composition of modules with the expression they produce and with the distribution of the transcription factors that bind to them. They were thus able to glean basic composition rules and to derive the mode of action for most of the factors, that is, whether they act as activators or as repressors. (Figure: Segmentation in the early Drosophila embryo.) Overall, Schroeder et al. show that a computational search can greatly reduce the experimental effort necessary for finding regulatory elements within the genomic sequence. Their study provides an example of how experimental and quantitative methods can be combined to achieve a more global analysis of the regulatory interactions within a transcriptional network.
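The cluster-detection idea described above can be illustrated with a small sketch. The Python fragment below is not the Ahab algorithm — Ahab models multiple competing factors probabilistically — but a toy version of the underlying intuition: slide a window along a sequence and let several weak matches to a position weight matrix (PWM) add up to a high window score. The PWM values and the sequence are invented for illustration.

```python
# Toy cluster scoring with a single hypothetical 4-bp motif (invented PWM).
PWM = {
    "A": [1.0, -2.0, 0.5, -1.0],
    "C": [-2.0, 1.2, -1.0, 0.2],
    "G": [-1.0, -1.0, 1.0, -2.0],
    "T": [0.2, 0.1, -2.0, 1.5],
}
MOTIF_LEN = 4

def site_score(seq, i):
    """Log-odds-style score of the 4-mer starting at position i."""
    return sum(PWM[seq[i + k]][k] for k in range(MOTIF_LEN))

def window_score(seq, start, width):
    """Sum of positive site scores in a window: a crude cluster statistic."""
    end = min(start + width, len(seq) - MOTIF_LEN + 1)
    return sum(max(0.0, site_score(seq, i)) for i in range(start, end))

seq = "ACGTACTTCCATACTTGGACTTACGT"  # invented example sequence
WIDTH = 12
starts = range(len(seq) - MOTIF_LEN + 1)
best = max(starts, key=lambda s: window_score(seq, s, WIDTH))
print(f"highest-scoring window starts at {best}, "
      f"score {window_score(seq, best, WIDTH):.2f}")
```

Because the window statistic sums many sub-threshold site scores, a cluster of weak sites can outrank a single strong match — the property that makes module detection feasible despite short, variable binding sites.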
The establishment and characterization of the first canine hepatocellular carcinoma cell line, which resembles human oncogenic expression patterns

Background Hepatocellular carcinoma (HCC) is one of the most frequent primary carcinomas worldwide, resulting in the death of many cirrhotic patients. Unfortunately, the molecular mechanisms of this cancer are not well understood; therefore, we need a good model system to study HCC. The dog is recognized as a promising model for human medical research, particularly compared with rodents. The objective of this study was to establish and characterize a spontaneous canine tumor cell line as a potential model for studies on HCC. Results Histomorphological, biochemical, molecular biological and quantitative assays were performed to characterize the canine HCC cell line that originated from a dog with a spontaneous liver tumor. Morphological investigations provided strong evidence for the hepatocytic and neoplastic nature of the cell line, while biochemical assays showed that the cells produced liver-specific enzymes. PCR analysis confirmed expression of ceruloplasmin, alpha-fetoprotein and serum albumin. Quantitative RT-PCR showed that the canine HCC cell line resembles human HCC based on the measurements of expression profiles of genes involved in cell proliferation and apoptosis. Conclusions We have developed a novel, spontaneous liver tumor cell line of canine origin that has many characteristics of human HCC. Therefore, the canine HCC cell line might be an excellent model for comparative studies on the molecular pathogenesis of HCC.

Background Hepatocellular carcinoma (HCC) is one of the most frequent primary tumors in man worldwide, with an estimated 564,000 new cases and almost as many deaths in 2000 [ 1 ]. It almost always develops in the setting of chronic hepatitis or cirrhosis, conditions in which many hepatocytes are destroyed, inflammatory cells invade the liver, and connective tissue is deposited. Unlike colorectal carcinoma, for example, for which a model can be generated based on known molecular events occurring during the process of carcinogenesis [ 2 ], the pathogenesis of HCC is largely unknown [ 3 ]. Although many risk factors have been reported to be involved in the transformation from a normal cell into a malignant tumor cell, such as HBV, HCV, alcohol, aflatoxin B, cirrhosis, older age, and male gender, the molecular mechanisms of neoplastic transformation and progression in HCC are not yet well understood. However, the study of those mechanisms is hampered because the liver tissue of patients with HCC has only limited value and primary hepatocytes are difficult to maintain in culture. Furthermore, primary hepatocytes rapidly lose detoxifying P450 isoenzymes. In addition, and because of the heterogeneity of the molecular genetic changes that can lead to HCC across species, molecular genetic studies in animals have not yet provided a precise general model for the molecular pathogenesis of HCC in humans. The dog is a valuable model for human comparative studies, since it has a comparable life span and habitat, and thus similar risk factors, and its domestication started over 10,000 years ago [ 4 ]. Moreover, like rodents, the dog develops spontaneous hepatocellular tumors. However, these tumors are not associated with hepatitis and cirrhosis, and develop in normal livers. Furthermore, the entire genome of the dog is currently being sequenced, which will allow further detailed species-specific molecular analyses.
Here, we describe the establishment and morphological, immunohistochemical, biochemical, and molecular characterization of the first canine hepatocyte cell line derived from a spontaneous HCC of a dog. The objective of this study was to investigate whether this cell line could be used as a potential model for studies on human HCC. Therefore, we investigated whether this canine hepatocyte tumor cell line had features similar to human HCC with respect to mutations in the hepatocyte growth factor receptor (c-MET) gene and the differential gene expression of several oncogenes, proto-oncogenes and proteins involved in proliferation, apoptosis and cell survival. Results Histopathology of the donor dog The primary neoplasm was histologically characterized by broad trabeculae, 2 – 6 cells in thickness, of well-differentiated hepatocytes and separated by sinusoidal structures lined by endothelium. The hepatocytes had uniform, moderately sized nuclei and small nucleoli; mitotic figures were very rare. Areas with marked steatosis or glycogen accumulation within the neoplastic hepatocytes were regularly observed. Locally within this well-differentiated tumor there was a carcinomatous area characterized by broad trabeculae of more basophilic cells with large nuclei, moderate anisokaryosis, usually one or more large nucleoli and 3–5 mitotic figures per high power field (Figure 1 ). The non-affected liver histology of the donor dog showed no abnormalities, such as inflammation. Figure 1 Histopathological characteristics of the original liver tumor from which the cHCC cell line is derived. A ) A well-defined tumor area (*), as well as a carcinomatous area (**); B ) An enlargement of the carcinomatous area (**). Development and histomorphological characterization Directly after establishment of the initial cell suspension (June 19, 2002), the cells appeared pleiomorphic. After approximately 10 weeks of culturing, the cells formed clusters of rounded, vital cells, which are non-adherent. This characteristic has remained ever since. Freezing and re-culturing of the cell line had no effect on cell growth. A 1:10 splitting and medium refreshment of the culture by careful trypsinization once a week (a "passage") is optimal. The trypsinized cell clusters were further cultured with fresh DMEM culture medium. The cells grow in these smaller clusters rather than as single cells. The cell clusters were collected, fixed and handled as described. As shown in Figure 2 , histology revealed solid cell clusters of large epithelial cells, with papillary projections at the periphery and extensive central necrosis. The cells were polygonal and moderately pleiomorphic, 10 – 20 μm in the largest diameter, and showed marked anisocytosis and sometimes vacuolation of the cytoplasm. The nuclei were large (5–10 μm) and centrally located, with large and often multiple prominent nucleoli, and many, sometimes bizarre, mitotic figures (Figures 2A and 2B ). Immunohistochemical staining for both HepPar1 and CK7 was best after Bouin fixation. In the Bouin-fixed material, the hepatocyte marker HepPar1 (Figure 3A ) revealed moderate granular cytoplasmic staining in the majority of the cells; CK7 (Figure 3B ) showed slight to moderate granular cytoplasmic staining in a minority of the cells. As controls for the two stainings, liver tissue and kidney tissue of a healthy dog were used. These were positively and negatively stained, respectively.
HepPar1 staining is present throughout the liver tissue samples, whereas CK7 is localized to the bile ducts. Figure 2 Histomorphological characterization of the cHCC cell line. A ) Solid cell clusters of large epithelial cells with papillary projections at the periphery and extensive central necrosis. Bouin fixation, HE staining; B ) Papillary growth of moderately pleiomorphic cells with anisocytosis and anisokaryosis, prominent nucleoli, and multiple mitotic figures. Carnoy fixation, HE staining. Figure 3 Immunohistochemical characterization of the cHCC cell line. A ) Immunohistochemical staining for HepPar1. Moderate granular cytoplasmic staining in the majority of the cells. Bouin fixation; B ) Immunohistochemical staining for CK7. Slight to moderate granular cytoplasmic staining in a minority of the cells. Bouin fixation. Biochemical characterization The activity of ALT, GLDH and AST was measured to investigate whether the cHCC cells produced liver-characteristic enzymes. In order to compare the amount of hepatic enzymes produced by the cHCC cell line, those measurements were also performed for the widely used human hepatocyte cell line HepG2 (enzyme activity of HepG2 was set at 100%), and the commonly used canine kidney cell line MDCK. The results showed that the cHCC cell line produced 25% of the highly liver-specific ALT compared to HepG2, whereas the MDCKs did not produce ALT at all (Table 3 ). Of another specific liver enzyme, GLDH, the cHCC cell line produced 19% of the activity of HepG2, whereas the MDCK cells produced only 10%. The cHCC cell line produced 28% of AST compared to HepG2, whereas MDCK produced only 8%.

Table 3 Liver enzyme activity measurement of the cHCC cell line compared with HepG2 and MDCK.*

Cell line   ALT (%)   AST (%)   GLDH (%)
HepG2       100       100       100
cHCC        25.8      27.7      19
MDCK        0         8         10

* Note: The activity of the enzymes is given as a percentage of the activity in HepG2.

Molecular characterization To further examine whether the cell line truly consists of hepatocytes, we isolated RNA from the cHCC, made cDNA, and performed PCRs for the gene expression of hepatocyte markers. The cHCCs proved to be PCR-positive for canine serum albumin, alpha-fetoprotein and ceruloplasmin. All obtained products were sequence-confirmed. Mutations in the c-MET gene Mutations in exons 15–21 of the c-MET gene have been described in human HCC. A PCR was therefore performed with primers based on this region of the c-MET gene. The products were analyzed and aligned with known canine and human c-MET sequences (GenBank accession numbers AB118945 and NM_000245, respectively). At nucleotide position 4089, a thymine (T) instead of an adenine (A) was observed, which resulted in a serine in the cHCC cell line versus a threonine in healthy tissue at codon 1363 (T1363S) (see Figure 4 ). Figure 4 Mutation in the c-MET gene of cHCC (in figure as "c-MET cHCC") compared to healthy liver tissue (canine c-MET; GenBank accession number AB118945; "canine c-MET" in figure) and human c-MET (GenBank accession number NM_000245; "c-MET human" in figure) sequences. nt 4021–4219 of the canine c-MET is shown. The mutation is marked in grey at nt 4089 (A to T) of the canine c-MET, which corresponds to a change of amino acid 1363, from a threonine to a serine (T1363S). The STOP codon in the canine and human c-MET is also marked in grey. Alignment was performed by SECentral CLUSTAL W (1.7) multiple sequence alignment.
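As a side illustration of the kind of comparison behind the T1363S finding, the short Python sketch below locates a single-nucleotide substitution between two aligned fragments and reports the affected codon. The sequences and the two-entry codon table are invented stand-ins, not the actual canine c-MET data.

```python
# Minimal codon-level comparison of two aligned, in-frame fragments.
CODON_TABLE = {"ACC": "T", "TCC": "S"}  # only the two codons needed here

ref    = "GGTACCGGA"   # hypothetical reference fragment (codon 2 = ACC, Thr)
sample = "GGTTCCGGA"   # hypothetical sample fragment  (codon 2 = TCC, Ser)

for i, (a, b) in enumerate(zip(ref, sample)):
    if a != b:
        start = (i // 3) * 3                     # first nt of affected codon
        r_codon = ref[start:start + 3]
        s_codon = sample[start:start + 3]
        print(f"nt {i + 1}: {a}->{b}; codon {start // 3 + 1}: "
              f"{r_codon}({CODON_TABLE.get(r_codon, '?')}) -> "
              f"{s_codon}({CODON_TABLE.get(s_codon, '?')})")
```

Run on these fragments it reports an A-to-T change producing a Thr-to-Ser substitution, mirroring the conservative change described above.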
Quantitative measurements of mRNA levels of gene products differentially expressed in cHCC To explore whether the cHCC cell line has expression profiles of genes involved in neoplastic growth and apoptosis similar to human HCC, the mRNA levels of the following genes were measured by means of quantitative RT-PCR: c-MET, PTEN, p27kip, Bcl-2, beta-catenin, SOCS3, ODC, TGF-alpha and collagen-1. The expressions of these genes were normalized by relating them to the housekeeping genes, HPRT and beta-actin. As shown in Figure 5 , c-MET, a receptor tyrosine kinase involved in cell survival and growth, was down-regulated 33-fold compared to the control group. PTEN, an inactivator of the Akt/PKB pathway, was down-regulated over 200-fold. To further substantiate activation of Akt/PKB, we measured two downstream targets, p27kip and Bcl-2. They were indeed down-regulated 5-fold and up-regulated 3-fold, respectively. HGF, the ligand of c-MET, induces mRNA levels of beta-catenin and TGF-alpha, both involved in cellular growth. The latter two were down-regulated 6- and 7-fold, respectively. Of two novel proteins associated with hepatocellular carcinomas in man [ 5 ], suppressors of cytokine signaling type 3 (SOCS3) and collagen-I, a non-significant 2-fold elevation of SOCS3 and a significant 7-fold down-regulation of collagen-I were observed. The gene expression of a proliferation factor, ornithine decarboxylase (ODC), also proved to be elevated 3-fold. Figure 5 Differential gene expression profiles of the cHCC cell line as compared to liver tissue of healthy dogs, measured by quantitative Real-Time PCR. Data represent mean ± SE of the groups. The "n" for these figures stands for the fold change of the gene expressions of the cell line as compared with our control group. Discussion We have described the establishment and characterization of a canine hepatocyte tumor cell line, derived from a spontaneous HCC in a dog. Both immortalized hepatocytes and hepatic progenitor cells come from various transgenic mouse lines [ 6 ], are drug-induced [ 7 ], or are obtained after SV40 large T-antigen transfection [ 8 ]. However, immortalization with SV40 induces HGF/c-MET activation via an autocrine HGF loop [ 9 ]. This clearly contrasts with the cHCC, where HGF-induced growth is absent (data not shown), most likely because of severely reduced c-MET levels. Our morphological study has accumulated good evidence – though no definitive proof – of the hepatocytic and neoplastic nature of the cHCC cells. Positive staining for the hepatocyte marker HepPar1 strongly indicates the hepatocytic origin of the cultured cells [ 10 ]. The neoplastic nature of the cells can be deduced from the pleiomorphism of the cells and the number of sometimes bizarre mitotic figures. The simultaneous presence of CK7 and HepPar1 positive cells is also consistent with the neoplastic nature of the cells [ 11 ], and suggests the presence of both fully differentiated hepatocytes and progenitor cells in the cHCC cell culture. The hepatocytic nature of the cHCC cell line is further indicated by the activity of the liver-specific enzyme ALT and the expression of hepatocyte markers, like serum albumin, ceruloplasmin, and alpha-fetoprotein, as is also the case for the human tumor liver cell line HepG2. Binding of HGF to c-MET triggers tyrosine autophosphorylation of the intracellular domain in the c-MET receptor and induces responses that account for mitogenesis and growth.
In human c-MET, point mutations have been described in the tyrosine kinase domain, which may be associated with the development of primary liver carcinomas [ 12 ]. In our study, we detected an unknown point mutation near the tyrosine kinase domain, which results in a conserved change from a threonine to a serine. Whether this mutation has any influence on the autophosphorylation of two adjacent tyrosines [ 13 ] remains to be proven. In quantitative RT-PCRs, we measured the differential gene expressions of several gene products involved in proliferation/growth and cell survival. In human HCC, the c-MET gene-expression was observed to be induced in 60% of the cases [ 14 ], whereas we found a down-regulation of the c-MET gene-expression. Although we only measured mRNA levels of c-MET, we can correlate them to their protein expression levels [ 15 ]. Furthermore, lack of HGF-responsiveness was also observed by reduced expression of beta-catenin and TGF-alpha in the cHCC cell line, which is in accordance with findings in human HCC [ 16 ]. It has also been observed that the tumor-suppressor PTEN, the Akt/PKB pathway inhibitor [ 17 ], was down-regulated drastically, leading to an increased activity of Akt/PKB, as shown by the anti-apoptotic protein Bcl-2 and the cell-cycle inhibitor p27kip, which were induced and inhibited, respectively. Both p27kip and PTEN are inhibited in human HCC as well [ 17 , 18 ]. In addition, the gene expressions in cHCC of SOCS3 and collagen-I were elevated (although not significantly) and inhibited, respectively, as it was also detected in human HCC [ 19 ]. Moreover, we found an up-regulation for ODC. This proliferation factor, induced by several growth factors, is responsible for proliferation in many cell types [ 20 ]. Taken together, the expression data show elevated proliferation, increased cell survival, and reduced apoptosis, which explains the neoplastic nature of cHCC. Conclusions From the morphological, biochemical, and molecular biological assays performed in this study, we conclude that the cHCC cell line clearly represents hepatocytes. In addition, cHCC has neoplastic characteristics comparable to HCC in man. Therefore, this cell line can be used as a model not only to study the molecular pathogenesis of human HCC, but also to investigate possible etiological agents of canine hepatitis. Methods Donor dog Our material was taken from the liver of an eleven-year old, privately owned female Cairn Terrier dog, diagnosed with HCC by histological analysis. With the owners' consent, the dog was routinely anesthetized, parts were taken from various areas of the neoplastic liver tissue. In addition, liver tissue was collected for histological analysis, fixed in 10% buffered formalin and paraffin embedded. Sections were routinely stained with hematoxylin and eosin (HE). Then, the dog was immediately euthanized. Isolation and culturing conditions Immediately after resection, the liver samples were kept in DMEM culture medium supplemented with 10% fetal calf serum (FCS; Fetal Calf Serum Gold, PAA Laboratories GmBH, Pasching, Austria), penicillin and streptomycin (P/S; 100 IU/ml and 100 μg/ml final concentration, respectively) and were kept on ice. Under sterile conditions, liver samples of various areas were cut into small pieces (5 × 5 mm) and trypsinized with 30 ml of trypsin/EDTA (0.5 g/l trypsin 1:250 and 0.2 g/l EDTA; BioWhittaker Europe, Verviers, Belgium) in a sterile Erlenmeyer flask placed on a stirring platform for 30 minutes. 
The cell suspensions were filtered with a 70 μm nylon filter (Falcon; Becton Dickinson Labware, Franklin Lakes, NJ). Erythrocytes were lysed from the filtered suspension. The remaining cell suspension was resuspended in DMEM supplemented with 10% FCS and P/S and was cultured at 37°C with 5% CO 2 and 95% air under a humidified atmosphere in non-coated flasks. It was observed every day for any changes. The medium was refreshed twice a week. Histomorphological characterization After 3 weeks of culturing with medium refreshment only and no splitting of the culture, the content of a T80 cm 2 flask with the cHCC cell culture was harvested by transferring the entire contents of the flask to a tube, which was centrifuged for 10 minutes at 1,500 g. The supernatant was replaced by freshly made fixation fluid for 4 hours. For optimal immunohistochemical staining, four different fixatives were used: zinc sulfate formalin, Bouin, Carnoy, and 10% neutral buffered formalin. The fixated cell pellet was transferred to a foam leaf-protected plastic embedding cassette. After fixation, samples were manually dehydrated and embedded in paraffin. Sections (3 μm thick) were cut and stained with hematoxylin and eosin (HE). For immunohistochemical staining, paraffin sections were mounted on poly-L-lysine coated slides, post-fixed into ice-cold acetone fixation fluid for 10 minutes, air dried and stored at room temperature (RT) until use. For the detection of HepPar1, slides were deparaffinized, immersed in 10 mM Tris, 1 mM EDTA buffer (pH 9), heated in a microwave oven for 10 minutes for antigen retrieval, cooled down for 10 minutes at RT and washed in PBS buffer. Endogenous peroxidase activity was blocked by 0.3% H 2 O 2 , in methanol, for 30 minutes at RT. After washing with PBS buffer containing 0.1%Tween-20, background staining was blocked by incubating the sections with normal goat serum (1:10 diluted in PBS), for 30 minutes. Sections were incubated overnight at 4°C with the primary antibody HepPar1 (clone OCH1E5, Dakocytomation, Glostrup, Denmark) diluted 1:50 in PBS. After washing in PBS-Tween, slides were incubated in DAKO EnVision™ + reagent, HRP-labeled (Dakocytomation,) for 45 minutes at RT. After washing in PBS buffer, sections were developed using 3,3-diaminobenzidine as chromogen, and counterstained with hematoxylin. For the detection of CK7, the slides were treated as described above but without antigen retrieval, and incubated overnight at 4°C with mouse anti-human CK7, clone OV-TL 12/30 (Dakocytomation), diluted 1:25 in PBS with 1% bovine serum albumin. For both HepPar1 and CK7, formalin-fixed paraffin-embedded canine liver and kidney tissue controls were incubated with and without the primary antibody. In contrast to the cell culture, in the liver tissue antigen retrieval for CK7 was necessary and, therefore, a 40-minute proteinase-K (Dakocytomation) digestion at RT was performed before blocking of the endogenous peroxidase, in methanol. Biochemical characterization The content of a T80 cm 2 flask with the cHCC cell culture was harvested, spun down for 5 minutes at 1,500 g, the cell pellet was washed in 10 ml PBS, centrifuged for 5 minutes at 1,500 g and the cells were resuspended in 1 ml PBS. Two hundred μl of the cell suspension were lysed and homogenized with a pestle in RIPA buffer containing 1% Igepal, 0.6 mM phenylmethylsulfonyl fluoride, 17 μg/ml aprotinine and 1 mM sodium orthovanadate (Sigma Chemical Co., Zwijndrecht, The Netherlands), for 30 minutes on ice. 
Total protein concentrations were calculated using a Lowry-based assay (DC Protein Assay, BioRad, Veenendaal, The Netherlands). For the liver enzyme measurement, 800 μl of the cell suspension was centrifuged at 12,100 g for 5 minutes. The pellet was lysed in milliQ by vortexing, centrifuged again for 5 minutes at 12,100 g and the supernatant was analyzed in a Beckman Synchron CX7 analyzer. The following enzymes were measured: ALT, AST and GLDH. AST and ALT were measured at 37°C with the Tris-pyridoxal phosphate method with Beckman-Coulter reagent. GLDH was measured with Roche reagent. All samples were subjected to the external quality control mission of the Dutch Foundation for Quality Assessment in Medical Laboratories. As comparison, the widely used human hepatoma cell line HepG2 (Deutsche Sammlung von Mikroorganismen und Zellkulturen GmbH, DSMZ, Germany) and MDCK canine kidney cell line (own collection) were used. These were grown to 80–100% confluency (T80 cm 2 flask) under standard culturing conditions, as described for the cHCC cell line. RNA isolation and reverse-transcription PCR Total cellular RNA was isolated from each frozen canine liver tissue in duplicate and from all the cell cultures used in this study using the Qiagen RNeasy Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. The RNA samples were treated with DNase-I (Qiagen RNase-free DNase kit). In total 3 μg of RNA was incubated with poly (dT) primers at 42°C for 45 min, in a 60 μl reaction, using the reverse transcription system (Promega Benelux, Leiden, The Netherlands). Molecular characterization To examine whether the cell line consisted of hepatocytes, we isolated total RNA and made cDNA as described above. PCRs were performed to investigate the gene expressions of hepatocyte (albumin, alfa-fetoprotein, ceruloplasmin) markers. All reactions were performed in a 50 μl volume with a thermal cycler (MJ Research Inc., Watertown, MA). Reaction mixtures contained 0.2 μM of each oligonucleotide primer (Isogen Life Science, Maarssen, The Netherlands), PCR buffer (Invitrogen Corporation, Carlsbad, CA), 2.5 U of Platinum Taq polymerase (Invitrogen), 2 mM MgCl 2 (Invitrogen) and 250 μM of each nucleotide (Promega Corporation, Madison, WI). The PCR conditions were: initial denaturation at 95°C for 4 min, followed by 40 cycles consisting of denaturation at 95°C for 1 minute, annealing at 60°C for 1 minute, elongation at 72°C for 1 min, and, finally, an elongation step at 72°C for 10 min. The PCR products were analyzed on a 1.5% agarose gel, and the DNA fragments were visualized with ethidium bromide. The primers used for these PCRs are depicted in Table 1 . Table 1 Oligonucleotides for RT-PCR used in this study. 
Primer             Target                             Primer sequence (5'-3')
mutMETF1           C-terminal part of canine c-MET    CCT TGG AAA AGT AAT AGT TC
mutMETR1           C-terminal part of canine c-MET    GTT TCA TGT ATG GTA GGA C
mutMETF2           C-terminal part of canine c-MET    GAA GTT TCC CAG TTT CTG AGC
mutMETR2           C-terminal part of canine c-MET    AAG GGT ATG GAG CAA CAC AT
CSA U              Serum albumin                      GTT CCT GGG CAC GTT TTT GTA TGA
CSA L              Serum albumin                      CTT GGG GTG CTT TCT TGG TGT AAC
Ceruloplasmin U    Ceruloplasmin                      GGA ATA TGA GGG GGC CAT CTA TC
Ceruloplasmin L    Ceruloplasmin                      GCA CGT CCA CTT CAT TAC CCA TGC C
alpha-fetoprot U   alpha-fetoprotein                  GGC TGC TCC GCC ATC CAT CC
alpha-fetoprot L   alpha-fetoprotein                  TTT TCC CCA TCC TGC AGA CAC TCC

Mutations in c-MET To investigate mutations in the tyrosine kinase domain of c-MET, a PCR was performed with two overlapping primer sets for this domain (Table 1 ), both resulting in an approximately 750 bp product. PCR conditions were as described above with an annealing temperature of 50°C. The products were sequenced using an ABI 3100 Genetic Analyzer (Applied Biosystems, Nieuwerkerk a/d IJssel, The Netherlands). Sequence analysis and alignments were performed with Lasergene software (DNASTAR Inc., Madison, WI). Samples for Real Time PCR Quantitative gene expression measurements of the cHCC cell line were compared with a group of four healthy liver tissues. Liver biopsies from the healthy dogs, which included two Cairn terriers (breed of the donor dog), were obtained under local anesthesia with a 16G biopsy needle and immediately snap-frozen and stored at -70°C until further analysis. Quantitative measurements of mRNA levels of gene products involved in neoplastic growth and apoptosis Real-Time PCR based on the high affinity double-stranded DNA-binding dye SYBR green I (SYBR ® green I, BMA, Rockland, ME) was performed in triplicate in a spectrofluorometric thermal cycler (iCycler ® , BioRad). Per reaction, 1.67 μl of cDNA was used in a 50 μl volume containing 1 × manufacturer's buffer, 2 mM MgCl 2 , 0.5 × SYBR ® green I, 200 μM dNTP's, 0.4 μM of each oligonucleotide primer, 1.25 units of AmpliTaq Gold (Applied Biosystems), on 96-well iCycler iQ plates (BioRad). Primers (Table 2 ) were designed using PrimerSelect software (DNASTAR Inc.). All PCR protocols included 40 cycles consisting of denaturation at 95°C for 20 sec, annealing for 30 sec, and elongation at 72°C for 30 sec. Melt curves (iCycler, BioRad), gel electrophoresis, and sequencing were used to examine each sample for purity and specificity. For each experimental sample, the amount of the genes under study, and of the housekeeping genes HPRT and beta-actin, were determined from the appropriate standard curve in autonomous experiments. Results were normalized according to the average amount of housekeeping genes and the values divided by the normalized values of the healthy group to generate relative expression levels [ 5 ]. Statistical analysis was performed using the Student T-test, and the level of significance was set to a p value of 0.05.
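The normalization just described can be captured in a few lines. The Python sketch below is a minimal illustration with invented standard-curve quantities, not the study's data: each target quantity is divided by the mean of the housekeeping-gene quantities for the same sample, and the tumor value is then expressed relative to the mean normalized value of the healthy group.

```python
# Relative expression: target quantity / mean(housekeeping quantities),
# then tumor vs. mean of the healthy group. All numbers are hypothetical.
def normalized(target_qty, hk_qtys):
    return target_qty / (sum(hk_qtys) / len(hk_qtys))

# Four healthy samples: (target quantity, [HPRT-like, beta-actin-like])
healthy = [normalized(12.0, [5.0, 6.0]), normalized(10.5, [4.8, 5.5]),
           normalized(11.2, [5.2, 5.9]), normalized(12.6, [5.6, 6.3])]

tumor = normalized(0.11, [5.1, 6.0])  # hypothetical strongly reduced target

fold_change = tumor / (sum(healthy) / len(healthy))
print(f"relative expression: {fold_change:.3f}")  # <1 means down-regulated
```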
Table 2 Oligonucleotides for quantitative RT-PCR used in this study.

Gene           Primer   Sequence (5'-3')                Tm (°C)   Product (bp)   Accession number
HPRT           U        AGCTTGCTGGTGAAAAGGAC            56        100            L77488/L77489
               L        TTATAGTCAAGGGCATATCC
Beta-actin     U        GATATCGCCGCGCTCGTCGTC           55        350            Z70044
               L        GGCTGGGGTGTTGAAGGTCTC
c-MET          U        TGTGCTGTGAAATCCCTGAATAGAATC     58        112            AB118945
               L        CCAAGAGTGAGAGTACGTTTGGATGAC
PTEN           U        AGATGTTAGTGACAATGAACCT          64.5      110            U92435
               L        GTGATTTGTGTGTGCTGATC
P27Kip         U        CGGAGGGACGCCAAACAGG             59        90             AY455798
               L        GTCCCGGGTCAACTCTTCGTG
Bcl-2          U        TGGAGAGCGTCAACCGGGAGATGT        62        87             AB116145
               L        AGGTGTGCAGATGCCGGTTCAGGT
Beta-catenin   U        ATGGGTAGGGCAAATCAGTAAGAGGT      64        107            AY485996
               L        AAGCATCGTATCACAGCAGGTTAC
TGF-alpha      U        CCGCCTTGGTGGTGGTCTCC            63        122            AY458143
               L        AGGGCGCTGGGCTTCTCGT
SOCS3          U        ACACCAGCCTGCGCCTCAAGACCT        63        119            AY485997
               L        CGCCTCGCCGCCCGTCA
Collagen I     U        GTGTGTACAGAACGGCCTCA            61        111            AF056303
               L        TCGCAAATCACGTCATCG
ODC            U        GTGGGCGATTGGATGCTCTTTG          59        111            BI395954
               L        TGTTGGCCCCGACATCACATAGTAG

Ethics All our procedures concerning animal use were approved by the owners and by the Utrecht University's Ethical Committee, as required by Dutch law. The animals received humane care in line with the University's guidelines. Abbreviations ALT – alanine aminotransferase; Akt/PKB – ser/thr protein kinase B (also PKB/Akt); AST – aspartate aminotransferase; CK7 – cytokeratin 7; GLDH – glutamate-lactate dehydrogenase; HCC – hepatocellular carcinoma; HepPar1 – hepatocyte paraffin-1; HGF – hepatocyte growth factor; HPRT – hypoxanthine phosphoribosyl transferase; MDCK – Madin-Darby canine kidney cells; ODC – ornithine decarboxylase; P27KIP – kinase inhibitor protein 27 kDa; PTEN – phosphatase and Tensin homolog deleted on chromosome TEN; SOCS3 – suppressors of cytokine signalling type 3; TGF-alpha – transforming growth factor alpha. Authors' contributions SYB carried out the establishment, the biochemical and molecular characterization, and the mutations in c-Met study. BS carried out the quantitative RT-PCR, whereas JY and RK carried out the histomorphological part. TvdI performed the description of the initial tumor. SYB, HFE, TvdI, JR and LC participated in the study design and coordination of the study. All authors read and approved the final manuscript.
Inactive enzymatic mutant proteins (phosphoglycerate mutase and enolase) as sugar binders for ribulose-1,5-bisphosphate regeneration reactors

Background The carbon dioxide fixation bioprocess in reactors necessitates recycling of D-ribulose-1,5-bisphosphate (RuBP) for continuous operation. A radically new closed-loop RuBP-regenerating reactor design has been proposed that will harbor enzyme complexes instead of purified enzymes. These reactors will need binders that selectively capture and release sugars and intermediate metabolites, enabling specific conversions during regeneration. In the current manuscript we describe properties of proteins that will act as potential binders in RuBP regeneration reactors. Results We demonstrate specific binding of 3-phosphoglycerate (3PGA) and 3-phosphoglyceraldehyde (3PGAL) from sugar mixtures by inactive mutants of the yeast enzymes phosphoglycerate mutase and enolase. The reversibility of binding with respect to pH and EDTA has also been shown. No chemical conversion of incubated sugars or sugar intermediate metabolites was found with the inactive enzymatic proteins. The dissociation constants for sugar metabolites are in the micromolar range; both proteins showed a lower dissociation constant (Kd) for 3-phosphoglycerate (655–796 μM) than for 3-phosphoglyceraldehyde (822–966 μM), indicating higher affinity for 3PGA. The proteins did not show binding to glucose, sucrose or fructose within the sensitivity limits of detection. Phosphoglycerate mutase showed slightly lower stability on repeated use than the enolase mutants. Conclusions Binders of sugars and their intermediate metabolites may have a useful role in RuBP regeneration reactors. The reversibility of binding with respect to changes in physicochemical factors, and stability when repeatedly subjected to changes in these conditions, are expected to make the mutant proteins candidates for in-situ removal of sugar intermediate metabolites for forward driving of specific reactions in enzyme-complex reactors.

Background One of the potential uses of sugar and sugar-intermediate binding proteins and binders of intermediate sugar metabolites derived from microbes is in the new and expanding area of environmental biotechnology, particularly in carbon dioxide fixation bioprocess reactors [ 1 , 2 ]. Accelerated consumption of fossil fuels and other anthropogenic activities have resulted in increased atmospheric levels of the greenhouse gas carbon dioxide (CO 2 ). Sustained increase of atmospheric CO 2 has already initiated a chain of events with unintended ecological consequences [ 3 - 7 ]. The reduction in atmospheric carbon dioxide level is highly desirable, lest it have a catastrophic impact upon both the environment and the economy on a global scale [ 5 - 7 ]. Biotechnological processes with recombinant catalytic proteins offer contained handling of carbon dioxide and could be one method of abatement of carbon dioxide pollution [ 8 , 9 ]. Recent advances in biotechnological methods make possible efficient capture [ 10 ] and fixation of CO 2 at the emission source/site into concatenated carbon compounds [ 9 , 11 ]. Such a process starts with initial capture of the carbon dioxide solubilized as carbonic acid or bicarbonate [ 10 ]. After adjustment of pH using controllers and a pH-stat, the solution is fed to immobilized Rubisco reactors [ 12 ] where the acceptor D-ribulose-1,5-bisphosphate (RuBP), after CO 2 fixation, is converted into 3-phosphoglycerate [ 8 , 9 ].
We have invented a novel scheme which proceeds with no loss of CO 2 (unlike cellular biochemical systems) in 11 steps in a series of bioreactors [ 13 ]. For starting up the process, however, a different scheme was used to generate RuBP from D-glucose rather than from 3-PGA [ 14 ]. The linear combination of reactors in the 11-step RuBP regeneration process requires large volume and weight and is unsuitable for use in mobile CO 2 emitters, leaving only stationary sources of emission to be controlled using this technology [ 8 , 9 ]. These problems are circumvented in a new scheme where enzyme-complex reactors were proposed instead of a linear combination of purified single-enzyme reactors [ 1 , 2 ]. In this scheme, the catalytic enzymes have been used as functionally interacting complexes/interactomes. The four reactors harboring enzymatic complexes/mixtures [ 2 ] replace the current 11 reactors for conversion of 3PGA into RuBP [ 8 , 9 ] and are termed enzyme-complex reactors (Figure 1 ). As an alternative to immobilized enzyme complexes, successive conversion could also be carried out in radial flow, with layers of single, purified but uniformly oriented enzymes arranged in concentric circles and an axial collection flow system providing the required combination of enzymes in individual reactors [ 1 ]. In preliminary experiments this arrangement shows promise, leading to a faster conversion rate while requiring less volume and material weight. Removal of sugars and sugar intermediate metabolites at key steps in enzyme-complex reactors is necessary for proper and specific driving of forward reactions. This requires in-situ separation of sugars or intermediate metabolites by specific binding entities. Compounds such as 3-phosphoglycerate (3PGA) and 3-phosphoglyceraldehyde (3PGAL) must be separated at key steps for proper functioning of these enzyme-complex reactors [ 1 , 2 ]. Proteins binding sugars and their intermediate metabolites, derived from microbial and other sources, have been used for various applications [ 15 , 16 ] but not yet in environmental biotechnology. However, they are potentially applicable in RuBP recycling. In this report we demonstrate the utility of two inactive mutants of enzymatic proteins: phosphoglycerate mutase (PGDM) and enolase. The inactive mutants of the yeast enzymes PGDM [ 17 - 19 ] and enolase [ 20 , 21 ] were characterized for properties that may render them potentially useful in reactors. We report determination of the enzymatic activity, sugar binding capacity, specificity in binding for different sugars and their metabolites, and reversibility in binding with respect to changes in physicochemical factors and stability on repeated exposure to these changes, using purified proteins. Results Purity of proteins The inactive PGDM mutant [ 17 , 18 ] was received as purified protein and analyzed on a 10% SDS-PAGE (Figure 2 ). The S39A mutant of yeast enolase 1 [ 20 , 21 ] was purified from recombinant yeast using ammonium sulphate precipitation and chromatographic purification on CM cellulose (Figure 2 ). The yeast enolase mutant H159A was also purified using a similar protocol (data not shown). The purified proteins were tested for modifying activity on 3-phosphoglycerate, 3-phosphoglyceraldehyde, glucose and sucrose (Figure 3A ).
Chromatographic detection of sugars and intermediate metabolites Paper chromatography was used for initial qualitative detection of sugars or sugar metabolites (Figure 3A ); subsequently, thin layer chromatography (Figure 3B ) was used for detection and determination of modification as well as for measurements. Relative measurements of spot area with respect to the controls allowed determination of binding as described in Methods. Binding constants Binding constants for the mutant proteins were determined and are presented in Table 1 . The dissociation constant (Kd) is a useful measure to describe the strength of binding or affinity between the proteins (P) and the ligands (L) and serves as an indicator of how easy it is to separate the complex PL. With both PGDM and the enolase mutants, high micromolar (μM) concentrations of L are required to form PL, indicating that the strength of binding is rather moderate or even low. The smaller the Kd, the stronger the binding. The dissociation constant for 3-phosphoglycerate was lower for both PGDM (655 ± 33 μM) and the yeast enolase mutants S39A (676 ± 28 μM) and H159A (797 ± 47 μM) than that for 3-phosphoglyceraldehyde (822 ± 42, 835 ± 38 and 966 ± 31 μM, respectively). H159A showed the weakest binding of the three proteins for 3-phosphoglycerate. In qualitative measurements both proteins showed release or lack of binding when subjected to low pH (pH 4.0). In addition, EDTA at concentrations that lead to chelation of Mg 2+ ions was very effective for release of ligands from enolase as well as from PGDM (Table 1 ). Reversibility in binding and repeated use The PGDM showed binding with 3PGA and 3PGAL at pH 7.5 in the presence of 50 mM NaCl. The binding was completely lost when the protein was exposed to pH 4.0 (Table 1 ). Binding was restored if the pH value was brought back to 7.5 within 30 min of exposure to the lower pH. The S39A mutant of yeast showed binding to 3PGAL as well as to 3PGA in the presence of 10 mM NaCl and 10 mM MgCl 2 . Both 3PGA and 3PGAL failed to bind the protein and were released when subjected to 15 mM EDTA. The binding was restored upon removal of EDTA and addition of NaCl and MgCl 2 . Both proteins retained more than 50% of their initial binding upon repeated use; in this study we did not use more than 30 min incubation at the lower pH during any single use. As shown in Table 2 , the proteins retained qualitatively more than 50% binding for at least 8 cycles of repeated changes between pH 4.0 and pH 7.5. PGDM was less stable (8 cycles) than the S39A or H159A mutants (10 cycles each). This is important for reactor applications. While it is possible to control reactor operations so that in-situ separation reactors can be brought back to pH 7.5 immediately upon binding, and for small reactors less than 30 min of residence time can be useful, more optimization studies are needed to establish the stability of these proteins at low pH. For MgCl 2 cycling, the proteins retained more than 85% activity even after 20 cycles and were not studied any further. Discussion Microbes (as well as higher organisms) produce a number of binding and metabolizing proteins for sugars and other intermediate products of sugar metabolism pathways. Complex conversions necessitate the use of binding entities to enhance reactor performance as well as to obtain converted compounds in a purified state.
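As an aside, dissociation constants of the magnitude reported above can be estimated from saturation-binding data with a simple one-site model, theta = [L]/(Kd + [L]). The Python sketch below uses invented data points (chosen to be roughly consistent with a Kd near 650 μM) and a crude grid search; it illustrates the fitting idea and is not the authors' actual procedure.

```python
# One-site binding model: fraction bound theta = L / (Kd + L).
ligand_uM = [100, 250, 500, 750, 1000, 2000]       # free ligand [L], invented
frac_bound = [0.13, 0.27, 0.43, 0.53, 0.60, 0.75]  # measured theta, invented

def sse(kd):
    """Sum of squared errors between data and the one-site model."""
    return sum((f - L / (kd + L)) ** 2 for L, f in zip(ligand_uM, frac_bound))

# Crude 1-D grid search over a plausible micromolar range.
kd_hat = min(range(1, 5001), key=sse)
print(f"estimated Kd ~ {kd_hat} uM")
```

In practice one would use a nonlinear least-squares routine and report a confidence interval, but the grid search makes the estimation logic transparent.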
The first module is the Rubisco reactor, where CO2 is fixed onto RuBP and converted into 3PGA; the second involves regeneration of ATP, which acts as a cofactor for the subsequent process; and the third is RuBP regeneration [ 8 , 9 ]. Recently, an additional module for efficient capture of CO2 has been devised, positioned before the first module [ 10 ]. However, the generation of RuBP from converted 3PGAL requires a series of conversions in 11 reactors [ 8 , 9 ]. Many reactors in this series generate sugars or intermediate metabolites that are not substrates for the immediately subsequent reactors. Specific capture and delivery will therefore help eliminate dilution, as well as interference in reactors where a sugar intermediate is not the direct substrate. Towards this goal, three different types of entities, divided into biological and chemical moieties, have potential for use. The biological entities include lectins, sugar-binding proteins of non-immunogenic origin, and inactive mutants of sugar-binding enzymes. The chemical entities include those that recognize and bind aldehydes, ketones and alcohols. Most of the biological entities, however, provide high specificity and strong binding that is reversible with respect to selected physico-chemical conditions. They are also compatible with the buffer systems of the conversion reactors in the loop where they are likely to be used. Here, using 3PGAL and 3PGA as test sugars, we have examined PGDM and the inactive yeast enolase mutants (S39A, H159A). Determination of the binding strength of these entities will enable pilot tests in novel reactors in a closed loop with the 3PGA-to-RuBP conversion reactors. It appears that the PGDM and enolase mutants are reversible binders and are stable over repeated use-cycles. However, the binding is moderate at best (Kd values are in the range of 655–966 μM). Similar measurements for wild-type enolase give 500 ± 28 and 673 ± 32 μM for binding of 3PGA and 3PGAL, indicating that wild-type enolase has a slightly higher binding affinity for these metabolites. Screening other inactive enzyme mutants, or mutants from different sources, for enhanced binding may help identify more suitable entities. Conclusions In the present report we demonstrate binding, without chemical modification, of 3PGA and 3PGAL by the inactive yeast mutant enzymatic proteins PGDM and enolase. The binding appears to be specific, as the proteins were not found to bind other sugars (glucose, fructose or sucrose) in the mixtures subjected to incubation. The binding is reversible with respect to pH and EDTA, and the proteins retain activity even after repeated use. MgCl2 cycling seems to have less effect on protein stability with respect to binding and release, and could be more suitable for use in larger reactors. While these properties make the proteins suitable for in-situ separation of sugar metabolites in reactors, the high micromolar dissociation constants (655–966 μM) indicate moderate or even low binding strength. Finding inactive enzymes with higher binding affinity, or engineering them for this purpose, will improve their utility. Methods Protein purification Purified yeast PGDM and recombinant cultures of the inactive yeast enolase mutants (S39A and H159A) were obtained as research gifts from Dr. J. Nairn and Prof. J. M. Brewer, respectively. The enolase mutants were purified following suitable modifications of published protocols [ 20 , 21 ].
Briefly, overnight-grown yeast cultures were harvested when absorbance values reached 12. The cells were harvested by centrifugation at 4000 × g, disrupted by sonication for 10 min in burst cycles with 20 sec gaps, and centrifuged at 12000 × g for 20 min. The crude protein solution was subjected to ammonium sulphate precipitation at 75% saturation. After ammonium sulphate precipitation, the protein was dialyzed for 16 hours in a dialysis bag (molecular weight cutoff 3000) against 0.1 M Tris-Cl (pH 8.5) containing 0.1 M NaCl and 0.1 mM MgCl2, with three changes of buffer solution. The enolase mutants were subjected to ion exchange chromatography for further purification on an ÄKTAprime protein purification system (Amersham Pharmacia Biotech, CA) using a Q Sepharose column at a flow rate of 0.5 ml/minute at 4°C. Protein was eluted using a NaCl gradient (0.1 M to 0.3 M); about 95 fractions of 8.6 ml each were collected. The proteins eluted at a conductivity equivalent of approximately 0.1–0.15 M NaCl (fractions 5 to 9). The purified protein was temporarily stored at 4°C and used for subsequent experiments. Protein was estimated by Bradford's method [ 22 ] using BSA (1 mg/ml) as standard. A 10% SDS-PAGE gel was prepared and subjected to Coomassie blue staining. Paper and thin layer chromatography Paper and thin layer chromatography were performed to demonstrate that the purified mutant proteins lacked any sugar-modifying capacity. Sugar mixtures at appropriate concentrations were incubated with the proteins (PGDM or the enolase mutants S39A and H159A) for up to 16 hours. At the end of each incubation interval (every 2 hours), the mixtures were centrifuged and aliquots from the experimental mixtures were spotted onto filter paper chromatograms (Whatman 3MM) or onto TLC plates. The TLC analyses were performed on plastic-backed 20 cm × 20 cm Silica Gel 60 F254 plates with 0.2 mm layer thickness (Merck). After spotting with an applicator, the samples were air-dried and placed in a TLC tank (27 cm × 24 cm × 7 cm) containing the solvent system. For both chromatographic methods, the spots were air-dried and the chromatograms were dipped in the solvent system [60% v/v n-propanol / 30% v/v concentrated ammonia / 10% v/v distilled water] and allowed to run for 5 hours. The chromatograms were then removed from the solvent system and subjected to staining. Three different staining techniques were used to detect sugars: ammonium molybdate, silver nitrate and alpha-naphthol staining [ 23 , 24 ]. For ammonium molybdate staining, the paper chromatogram was dipped in a solution containing 5 ml of 60 per cent w/w perchloric acid, 10 ml of 0.1 N hydrochloric acid and 25 ml of 0.4 per cent w/v ammonium molybdate, made up to 100 ml with distilled water. The paper, after drying in a current of warm air for a few minutes to remove excess water, was heated at 85°C for 7 min in a water-jacketed oven. The spots in the chromatogram were also visualized using alkaline silver oxide reagent. This reagent was composed of two parts: the first containing 0.1 ml of saturated aqueous silver nitrate plus 19 ml of acetone, and the second containing 0.5 g of NaOH dissolved in 5 ml of water and made up to 100 ml with ethanol. The first part was mixed immediately before use, and a few drops of water were added with stirring until all the AgNO3 was dissolved. The dried chromatogram was then dipped through the silver reagent and allowed to air dry for 10 min, subsequently dipped into the ethanolic sodium hydroxide, and again allowed to air dry.
Once the spots were visible, the paper was soaked in dilute (5 mg/l) sodium thiosulfate for 1 minute and rinsed in tap water. This last step dissolves the dark background and yields a permanent record. For staining with alpha-naphthol, the paper chromatogram was dipped in the following solution: 1 per cent w/v alpha-naphthol, 10 per cent v/v orthophosphoric acid, and distilled water to make up the volume. The air-dried paper or TLC plate was heated for a few minutes at 85°C in a water-jacketed oven for color development. Determination of binding parameters The dissociation constants for protein-sugar binding were estimated from measurements of spot area on chromatograms. For this purpose, covalently immobilized Protein A Sepharose beads (Pharmacia Biotech, CA) were used, and the proteins were immobilized on Protein A using an AminoLink kit (Pierce Chemicals, CA). A known concentration of protein was incubated at room temperature (25°C) with varying concentrations of sugar in the range of 1 μM to 1 mM in a fixed volume of 100 μl. At the end of the incubation (10 min), the mixture was centrifuged at 10000 × g, an aliquot of the supernatant was spotted, and the chromatogram (TLC) was developed. A similar mixture with BSA-coupled beads served as control. An area calibration using varying concentrations of sugar at a fixed aliquot spot volume was recorded under identical conditions. From the measured areas in the control and experimental sets, the free sugar was calculated; bound sugar was taken as the control value minus the sugar remaining in the experimental set. The experimental data were used to draw a Scatchard-type plot, from which the dissociation constant was calculated. With P representing free protein, L the ligand and PL the ligand-bound protein, the dissociation constant is defined as Kd = [Pfree][Lfree]/[PL]. The Kd values for PGDM and S39A enolase binding of 3PGA and 3PGAL were calculated from the experimental data using the MS Excel program. Authors' contributions DB and DB carried out purification of the enolase mutants, and MK and SM carried out the binding assays. MTS, SC, AG and VG participated in the design of the study and in performing the analyses; MTS also helped to draft the manuscript. SKB conceived of the study, participated in its design and coordination, and helped to draft the manuscript. All authors read and approved the final manuscript. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC548675.xml |
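To make the Scatchard-type analysis above concrete, the sketch below estimates Kd from hypothetical calibrated spot-area data; the numbers and variable names are illustrative assumptions, not values from the study, and Python is used only for illustration (the authors used MS Excel).

    import numpy as np

    # Hypothetical inputs: total ligand added and free ligand measured from
    # calibrated spot areas (both in uM), at a fixed amount of immobilized protein.
    total_ligand = np.array([10.0, 50.0, 100.0, 250.0, 500.0, 1000.0])
    free_ligand = np.array([9.0, 45.5, 91.5, 231.0, 468.0, 948.0])

    bound = total_ligand - free_ligand   # [PL], uM
    ratio = bound / free_ligand          # Scatchard ordinate: [PL]/[Lfree]

    # On a Scatchard plot (bound/free vs. bound) the slope is -1/Kd.
    slope, intercept = np.polyfit(bound, ratio, 1)
    kd = -1.0 / slope
    print(f"Estimated Kd = {kd:.0f} uM")

With data of this shape the fit recovers a Kd in the high-micromolar range, the same regime as the 655–966 μM constants reported above.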
499548 | Treatment of mechanically-induced vasospasm of the carotid artery in a primate using intra-arterial verapamil: a technical case report | Background Despite improvements in the safety and efficacy of endovascular procedures, considerable morbidity may still be attributed to vasospasm. Vasospasm has proven amenable to pharmacological interventions such as nitrates, intravenous calcium channel blockers (CCBs), and intra-arterial papaverine, particularly in small vessels. However, few studies have focused on medium to large vessel spasm. Here we report the use of an intra-arterial CCB, verapamil, to treat flow-limiting mechanically-induced spasm of the common carotid artery (CCA) in a primate. We believe this to be the first report of its kind. Case presentation As part of a study assessing the placement feasibility and safety of a catheter capable of delivering intra-arterial cerebroprotective therapy, a 16 kg female baboon prophylaxed with intravenous nitroglycerin underwent transfemoral CCA catheterization with a metallic 6-Fr catheter without signs of acute spasm. The protocol dictated that the catheter remain in the CCA for 12 hours. Upon completion of the protocol, arteriography revealed a marked decrease in CCA size (mean cross-sectional area reduction = 31.6 ± 1.9%) localized along the catheter length. Intra-arterial verapamil (2 mg/2cc) was injected and arteriography was performed 10 minutes later. Image analysis at 6 points along the CCA revealed a 21.0 ± 1.7% mean increase in vessel diameter along the length of the catheter corresponding to a 46.7 ± 4.0% mean increase in cross-sectional area. Mean systemic blood pressure did not deviate more than 10 mm Hg during the procedure. Conclusions Intraluminal CCBs like verapamil may constitute an effective endovascular treatment for mechanically-induced vasospasm in medium to large-sized vessels such as the CCA. | Background Rapid advancements in endovascular technology and techniques allow for treatment of an ever-increasing range of neurovascular diseases. Despite improvements in the safety and efficacy of these procedures, complications such as vasospasm, stroke, and perforation still occur [ 1 ]. Vasospasm, or contraction of smooth muscle fibers in the wall of a vessel, is a commonly recognized adverse event that may complicate an endovascular procedure by limiting distal blood flow. Vasospasm complicates many disease states, particularly those affecting small vessels. Recently, treatment of small-vessel vasospasm has proven amenable to pharmacological intervention. For example, in the treatment of cerebral artery spasm, intravenous nitrates [ 2 ], intravenous calcium channel blockers (CCBs) [ 3 ], and intra-arterial papaverine [ 4 ] and CCBs [ 5 ] have been shown to prevent or mitigate this small artery spasm. However, few studies have focused on the treatment of medium and large vessel spasm [ 6 ], and even fewer have taken aim at mechanically-induced vasospasm. This type of spasm, unlike subarachnoid hemorrhage-induced vasospasm, is not the result of inflammation [ 7 ] and a functional nitric oxide deficiency [ 8 , 9 ], but rather of direct physical irritation of the endothelium. In this report, we demonstrate the use of an intra-arterial CCB, verapamil, to treat flow-limiting mechanically-induced spasm of the common carotid artery (CCA) in a non-human primate. We believe this to be the first report of its kind.
Case presentation As part of a study aiming to assess the placement feasibility and safety of a catheter capable of delivering intra-arterial cerebroprotective therapy, a 16 kg female baboon ( Papio anubis ) underwent carotid artery catheterization under general anesthesia. Since Papio anubis is regarded as vasospasm-prone (unpublished data), the animal was pre-treated with oral nimodipine (Nimotop, Bayer, 1 mg/kg every 4 hours for 24 hours), and placed on a prophylactic infusion of intravenous nitroglycerin (200 mcg/hr) and heparin (100 units/hr). To place a 6 Fr (2 mm) treatment device in the 3–4 mm right CCA [ 10 ], the animal underwent transfemoral catheterization with a 7 Fr guiding catheter using the Seldinger technique. Under single-plane fluoroscopic guidance, the guiding catheter was placed into the brachiocephalic artery (5–6 mm) and then advanced into the right CCA after prophylactic administration of 2 mg of intra-arterial verapamil (1 mg verapamil/cc normal saline). A proprietary 6 Fr metallic catheter was then passed through the guiding catheter. Once inside the CCA, the 6 Fr catheter was exposed by retraction of the 7 Fr guiding catheter. Control arteriography, performed by injection of non-ionic iodinated contrast material through the guiding catheter, revealed normal patency of the carotid artery without evidence of spasm or limitation of arterial flow. As part of the study protocol, this co-axial catheter system remained in the brachiocephalic vessels for 12 hours. Throughout this procedure, the animal was maintained under general anesthesia with a narcotic-nitrous mixture. Intravenous nitroglycerin infusion (200 mcg/hr) and physiological monitoring were continued. The guiding catheter was connected to a heparinized saline infusion (3 units heparin/cc normal saline at a rate of 30 cc/hour). Before removing the co-axial system at the conclusion of the experiment, carotid arteriography was performed to verify positioning of the catheter and patency of the vessels. These images revealed a decrease in vessel diameter localized to the length of artery where the 6 Fr catheter was positioned (Fig. 1A and 1C). Prior to further manipulation of the catheters, an additional bolus of intra-arterial verapamil (2 mg/2 cc normal saline) was instilled through the guiding catheter positioned in the brachiocephalic artery. After ten minutes, repeat carotid arteriography demonstrated a visible increase in vessel caliber, presumably due to a reduction in vasospasm (Fig. 1B and 1D). The diameter of the CCA was compared before and after verapamil administration at 6 equally-spaced points along the catheter. This revealed an increase in the mean CCA diameter from 2.85 ± 0.14 mm during spasm to 3.45 ± 0.18 mm post-verapamil administration (Figure 2). This corresponded to a 21.0 ± 1.7% mean increase in the vessel diameter post-verapamil injection, which represents a 46.7 ± 4.0% mean increase in cross-sectional area (Fig. 3). Review of continuous invasive blood pressure tracings demonstrated minimal systemic response to the intra-arterial administration of verapamil at this dosage; systolic blood pressure did not deviate more than 10 mm Hg following instillation of verapamil. Figure 1 Anterior-posterior angiogram of right common carotid artery injection of a Papio anubis with a 6 Fr catheter in place both ( A. ) during vessel spasm on catheter, and ( B. ) 10 minutes after infusion of intraluminal verapamil (2 mg).
Overlay images showing 6 Fr catheter position in CCA (gold) during spasm ( C. ) and after alleviation with verapamil ( D. ). Arrows (→) indicate tip of catheter. Figure 2 Image analysis at 6 paired positions (Lines A-F) along catheter in common carotid artery both ( A. ) during vessel spasm, and ( B. ) 10 minutes after intraluminal verapamil (2 mg) administration. ( C. ) Raw data table includes vessel diameter measurements both pre and post-verapamil injection. Figure 3 ( A. ) Bar graph depicting both the pre and post-verapamil mean vessel diameters from six positions along the length of the common carotid artery (CCA) (2.85 ± 0.14 mm and 3.45 ± 0.18 mm, respectively), and ( B. ) cross-sectional areas (6.41 ± 0.61 mm² and 9.39 ± 1.0 mm², respectively). Note the 46.7% increase in mean cross-sectional area after verapamil administration. At the conclusion of the procedure, the co-axial catheter system was removed. Anesthetics, heparin, and nitroglycerin infusions were discontinued. The animal was awakened from anesthesia uneventfully, showing no signs of neurological impairment. MRI brain scan, including diffusion-weighted imaging at 36 hours, showed no evidence of cerebral infarction. Discussion Driven by technology and the ever-increasing need for minimally invasive treatment modalities, the number of endovascular procedures performed annually continues to rise. The increased number and variety of endovascular procedures have introduced new situations in which vasospasm may be encountered. The sheer size and complexity of large-bore catheters and their delivery systems make them more likely to induce spasm in the vessels in which they are utilized (medium and large caliber arteries). Thus, it is important to identify pharmacological agents that will relieve this vasospasm with minimal side effects. Spasm of arteries secondary to therapeutic medications or diagnostic instrumentation has long been acknowledged as a possible complication of interventional procedures. Vasospasm, in general, has been attributed to a variety of pharmacological stimuli ranging from cocaine [ 11 ] and alcohol [ 12 ] to L-thyroxine [ 13 ] and NSAIDs [ 14 ]. Vasospasm may also be attributed to mechanical irritation [ 15 ], as in the present study. In the past, treatment of mechanical spasm has simply been withdrawal of the offending catheter. A passive treatment such as this is often undesirable, especially when the catheter system needs to remain in position, as in our experiment. There are several agents that have been shown to be effective in preventing and treating vasospasm, each of which has its limitations. Intravenous nitrates have been the mainstay of vasospasm prevention for endovascular procedures [ 16 ], but their cardiovascular and intracranial pressure (ICP) effects limit their acute use for vasospasm treatment [ 17 ]. Intra-arterial papaverine has been used either as monotherapy or as an adjunct to balloon angioplasty in subarachnoid hemorrhage-induced vasospasm of smaller cerebral vessels [ 4 , 18 , 19 ]. However, papaverine therapy is short-acting, has untoward side-effects, such as elevation of ICP, and its role in larger vessel spasm remains ill-defined [ 20 ]. Recently, novel intra-arterial agents, such as mannitol and amrinone, have been used to reverse acute carotid spasm [ 21 ] and cerebral vasospasm following subarachnoid hemorrhage [ 22 ], respectively. Further efforts are needed to identify, compare, and validate pharmacotherapies for medium to large vessel spasm.
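A quick arithmetic cross-check of the figures reported above (a sketch added for clarity, not part of the original analysis): since the cross-sectional area of a vessel of diameter d is

    A = \pi d^{2}/4 ,
    \frac{A_{\text{post}}}{A_{\text{pre}}} = \left(\frac{d_{\text{post}}}{d_{\text{pre}}}\right)^{2}
      = \left(\frac{3.45}{2.85}\right)^{2} \approx 1.47 ,

the ~21% mean diameter increase implies roughly a 47% area increase, consistent with the reported 46.7 ± 4.0% (the small difference plausibly reflects averaging the six per-position ratios rather than taking the ratio of the mean diameters).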
To reduce the tone of a muscular artery, a logical point of intervention is inhibition of calcium influx into smooth muscle cells. Voltage-sensitive CCBs, including the phenylalkylamine verapamil and the 1,4-dihydropyridines (such as nifedipine, nimodipine, and amlodipine), function in this manner. Nimodipine improves outcomes after cerebral vasospasm secondary to subarachnoid hemorrhage [ 23 ]. Intraluminal administration of verapamil, in particular, has been used both for pretreatment of vessels for endovascular procedures [ 16 ] and for reversal of spasm in coronary grafts [ 6 ]. Recently, intra-arterial verapamil has also been reported to be safe and effective in the treatment of cerebral vasospasm [ 5 ]. Verapamil is well tolerated systemically, yet hypotension is the primary concern during administration. In this case report we document the use of a 2 mg intra-arterial verapamil injection into the CCA of a non-human primate to acutely reverse catheter-induced vasospasm. This use is unique for several reasons. Firstly, in contrast to the report by He et al. [ 6 ], in which intra-arterial verapamil was used to alleviate spasm of an internal thoracic artery graft, we used verapamil to treat a significantly larger caliber vessel. Secondly, the presence of elastic fibers in the larger carotid artery compared to the highly muscular internal thoracic artery represents a different functional architecture. Thirdly, He et al. [ 6 ] attributed their observed vasospasm to inotrope therapy (dobutamine, dopamine, and epinephrine) that was initiated for post-operative hemodynamic support. The spasm that we observed, however, occurred in the presence of hemodynamic stability and was isolated to a segment of the carotid in close association with a metallic 6 Fr catheter, suggesting mechanical irritation as the etiology. This report provides preliminary, not conclusive, evidence for the utility of intra-arterial verapamil in large vessel vasospasm, as its scope is limited by three issues. First, without a control (untreated) subject, it is impossible to say with certainty that the observed mitigation of vasospasm is due to the intervention and not the natural history of the disease. Considering that effects were observed in the presence of an in situ metallic catheter, we believe strongly that the vasodilation was due to the verapamil. Second, the observed mitigation of vasospasm occurred during a continuous intravenous infusion of nitrates. It is conceivable that the vasospasm was relieved by a synergistic effect of verapamil with these nitrates rather than by verapamil alone. Finally, this study, by its design, does not attempt to define the long-term durability of intra-arterial verapamil. Only through additional experimentation and use will the full utility of this agent as an intraluminal treatment for vasospasm be understood. Conclusions We describe the acute alleviation of in situ catheter-induced CCA vasospasm in a non-human primate by an intra-arterial infusion of verapamil (2 mg) without demonstrable complications. Although only an observational study in one subject, this report suggests that intra-arterial administration of verapamil may be an effective intervention for the treatment of mechanically-induced vasospasm in medium to large-sized muscular arteries and that further experimentation in this area is warranted. Competing interests None declared.
Authors' contributions ALC, GPC, and WJM performed the surgical procedure, delivered critical care to the animal, and composed and revised the manuscript. LF and PM performed the angiography studies. ESC conceived the study and oversaw its design and completion. Pre-publication history The pre-publication history for this paper can be accessed here: | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC499548.xml |
387277 | When Food Kills | Food-borne disease kills humans only rarely, although the ramifications and implications of these few deaths for science, regulators, and government are large | For the estimated 800 million people without enough food to eat, living largely in developing countries, the main food risk is starvation. But if you ask, ‘When does food actually kill?' in a country such as the United Kingdom, ‘Not that often' is the short reply you would give after reading Hugh Pennington's book When Food Kills: BSE, E. coli, and Disaster Science. The two food-borne diseases that occupy much of the book, Escherichia coli O157 and bovine spongiform encephalopathy (BSE), kill humans very rarely, although the ramifications and implications of these few deaths for science, regulators, and government are large. As Pennington clearly explains, there is still much uncertainty in the science of BSE, and the eventual UK death toll from the human form may be as low as a few hundred, with even the most pessimistic expert assessments putting the upper bound at fewer than 5,000. Food-borne E. coli O157 kills fewer than a dozen people a year in the UK. Whilst each death is a terrible tragedy and an indescribably harrowing experience for those close to the victim, these figures are small when compared with other ways in which food kills. Epidemiologists estimate that the dietary contributions to cardiovascular disease and cancer between them kill more than 100,000 people a year in Britain. Yet we hear much more about BSE and E. coli as food risks. For instance, a recent study by the King's Fund ( http://www.kingsfund.org.uk/pdf/healthinthenewssummary.pdf ) reports that the rate of news coverage in the UK of a death from variant Creutzfeldt-Jakob disease, the human form of BSE, is nearly 23,000 times that for a death from obesity. In his characteristically diverting and obscurely erudite way, Pennington describes this discrepancy between public perception and magnitude of risk by referring to an article on railway accidents published in 1859 by one Dionysius Lardner. The systematic and much more revealing analyses of risk perception by psychologists such as Paul Slovic over the past 25 years do not get a mention. In fact, one of the hallmarks of Pennington's style is his enthusiasm for taking his reader down little-known historical byways. Whether it be the drowning (possibly suicide) of King Ludwig II of Bavaria in the Starnberger See or the treatment of James Norris in Bethlehem Lunatic Asylum in 1814, Pennington has an almost endless supply of anecdotes to provide peripheral colour to his main narrative. Indeed, on some occasions his delight in the detail makes it hard to see where the main narrative is leading, although his aim is to show that similar conclusions can be drawn about risk management in food, transport, oil rigs, and other fields. Anyone who has heard Hugh Pennington speak will know that he has a remarkably direct and engaging style, which he translates into the written word with verve. Already on page 2, he gets us into the mood by referring to a sample from a five-year-old girl sent for analysis at the start of the Lanarkshire E. coli outbreak of 1996: ‘It was a stool. The word carries the impression of firmness, even of deliberate effort in its production. Hers was not'. His laconic sense of humour is also reflected in many of the wittily irrelevant or tangential photographs. My personal favourites are ‘Her Majesty in Gloves' on page 44 and ‘Turds on Campsite Track' on page 101.
The Lanarkshire E. coli O157 outbreak, which in late 1996 affected 202 people and killed eight, was very much Pennington's show. He chaired the public enquiry that led eventually to a change in the law, requiring all butchers in the UK handling cooked and raw meat to be licensed. The license itself is less important than the training in food safety management principles that precedes it. The butcher John Barr (and his staff), whose shop was the primary source of the outbreak, apparently did not know that you have to keep raw meat and ready-to-eat products separate to avoid cross-contamination with dangerous pathogens, such as E. coli O157, that can occur in raw meat. Pennington's authoritative and blow-by-blow account shows failings not only in the butcher (who was, incidentally, Scottish Master Butcher of the Year in 1996), but also in the inspectors who had visited his shop eight times in the previous two years. They had not, apparently, picked up that Barr and his staff employed the same knives for cutting up raw and cooked meat, nor that they used a ‘biodegradable' cleaning fluid, not realising that this is not the same as ‘biocidal'. The second theme, BSE, is given somewhat shorter treatment. Nevertheless, Pennington goes into some detail in assessing the prion theory of transmissible spongiform encephalopathies (he argues that a nucleic acid is not also involved). He also reviews the sequence of events that led the UK government in the early 1990s to conclude that there was not likely to be a risk to human health and to be slow to change its view. This and the concluding part of the book (see below) draw heavily on the Phillips Enquiry into BSE. Although this enquiry focussed on the response of the UK government, its lessons are relevant to other countries where BSE has emerged in recent years, including many European countries, Japan, Canada, and the United States. In his book Mountains of the Mind, Robert Macfarlane writes: ‘[F]or the hunter risk wasn't optional—it came with the job. I sought risk out, however. I courted, in fact paid for it. This is the great shift which has taken place in the history of risk…. [I]t became a commodity'. Pennington reflects a similar shift in attitude to food risk over the past half century or so. Back in 1938, although it was known that over 2,500 people a year in Britain died from drinking raw milk, the risk was not seen as large enough to warrant legislation to make pasteurisation compulsory. We are now used to much higher standards of food safety, and we can, as a society, enjoy the luxury of fear of relatively minor risks. Nevertheless, there are important lessons from past failures for all involved in food safety (and in other areas of risk management), and Pennington discusses some of these in his concluding chapters. He emphasises the need to continually review the evidence underpinning risk assessments, to communicate effectively with the media, and to ensure that actions to manage risks are effectively implemented and audited. Notably, he refers to the importance of inclusiveness and openness about risk and uncertainty in decision-making: ‘[I]f [this] becomes the norm, it will be possible to say that good has come out of tragedy'. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC387277.xml |
529260 | The role of hemodialysis machines dedication in reducing Hepatitis C transmission in the dialysis setting in Iran: A multicenter prospective interventional study | Background Hepatitis C virus (HCV) infection is a significant problem among patients undergoing maintenance hemodialysis (HD). We conducted a prospective multi-center study to evaluate the effect of dialysis machine separation on the spread of HCV infection. Methods Twelve randomly selected dialysis centers in Tehran, Iran were randomly divided into two groups: those using dedicated machines (D) for HCV infected individuals and those using non-dedicated HD machines (ND). 593 HD cases including 51 HCV positive (RT-PCR) cases and 542 HCV negative patients were enrolled in this study. The prevalence of HCV infection in the D group was 10.1% (range: 4.6%–13.2%) and it was 7.1% (range: 4.2%–16.8%) in the ND group. During the conduct of the study, 5 new HCV positive cases and 169 new HCV negative cases were added. In the D group, PCR positive patients were dialyzed on dedicated machines. In the ND group all patients shared the same machines. Results In the first follow-up period, the incidence of HCV infection was 1.6% and 4.7% in the D and ND groups, respectively (p = 0.05). In the second follow-up period, the incidence of HCV infection was 1.3% in the D group and 5.7% in the ND group (p < 0.05). Conclusions In this study the incidence of HCV in HD patients was reduced by the use of dedicated HD machines for HCV infected patients. Additional studies may help to clarify the role of machine dedication in conjunction with application of universal precautions in reducing HCV transmission. | Background Hepatitis C virus (HCV) transmission occurs mainly through large or repeated direct percutaneous punctures to blood vessels, for example repeated injections for drug abuse [ 1 ]. Less frequent routes are sexual transmission [ 2 ], perinatal transmission [ 3 ], acquisition from mucous membrane exposure [ 4 , 5 ], body fluids [ 6 ] and colonoscopy [ 7 ]. However, in up to 40% of infected individuals, the route of transmission remains unknown [ 8 ]. Since the introduction of blood and organ donor screening by antibody testing in 1991, HCV has rarely been transmitted by transfusion of blood products [ 1 ], but there remains a relatively high incidence of new infections in hemodialysis (HD) units [ 9 , 10 ]. Several reports around the world indicate that the frequency of HCV is higher in patients undergoing maintenance HD than in the general population. The reported prevalence of HCV infection in maintenance HD patients varies markedly from country to country and from one center to another [ 11 ], ranging between 8% and 39% in North America, 1% and 54% in Europe, 17% and 51% in Asia, and 1% and 10% in Australia [ 12 ]. In Iran the prevalence of HCV varies from 5.5% to 24% [ 13 , 15 ]. Molecular virological studies have clearly shown the nosocomial transmission of HCV to hemodialysis patients [ 16 , 17 ], but the exact modes of transmission remain unclear. Studies suggest several risk factors, including transmission through blood components [ 18 ]; patient-to-patient transmission through shared equipment [ 19 ], devices [ 20 ], or multidose vials [ 21 ]; and between patients treated on the same shift but not sharing equipment [ 16 ].
Basic hygienic precautions, for instance hand washing and the use of protective gloves when patients and HD equipment are touched, are observed worldwide, but only a few centers have isolated their HCV-positive patients or dialyzed them during dedicated shifts or on dedicated dialysis machines. At the present time, the Centers for Disease Control and Prevention (CDC) does not recommend isolation of patients with HCV [ 1 ]. Evaluation of this problem is difficult because of the paucity of prospective studies and the scarce data about patient-to-patient transmission in settings other than HD centers [ 6 ]; therefore, the benefit of isolation of HCV infected dialysis patients remains controversial. The prevalence of HCV in hemodialysis units is higher than in the normal population in Iran (5–24% [ 13 , 15 ] versus 0.3% [ 22 ]) and most other countries. Considering the added expense of patient isolation, we conducted a prospective study in hemodialysis units in Tehran, Iran, to evaluate the role of HD machine separation in reducing HCV transmission to HD patients. Methods Among 40 HD centers in Tehran, we randomly selected centers one by one to reach a total number of 593 patients (12 centers) to enroll in this study. Selected centers were randomly divided into dedicated (D) and non-dedicated (ND) HD machine groups, including 297 patients in the D group (4 centers) and 296 patients in the ND group (8 centers). All patients were checked for HCV antibodies by ELISA III before enrollment in the study. Positive cases were confirmed by RT-PCR. Only patients who were HCV positive by RT-PCR were considered to be HCV infected. Out of 593 HD cases, 51 were RT-PCR positive (30 in the D group and 21 in the ND group), and 542 were HCV negative (267 in the D group and 275 in the ND group). The prevalence of HCV infection was 10.1% (range: 4.6%–13.2%) in the D group and 7.1% (range: 4.2%–16.8%) in the ND group. During the conduct of the study, 5 new HCV positive cases (1 in the D group and 4 in the ND group) and 169 new HCV negative cases entered the study. Information regarding age, sex, occupation (health care personnel, surgeons and dentists), HCV infected relatives, previous peritoneal dialysis, surgery during the last 2 months, duration of hemodialysis, number of blood product transfusions, history of organ transplantation, and the causes of ESRD was collected. The obtained history of IV drug abuse, tattooing, and multiple sex partners was not reliable. 442 patients (254 cases in the D and 192 in the ND group) were followed for 9 months (first follow-up population). 281 patients (160 cases in the D and 121 cases in the ND group) who remained within our study were followed for an additional 9 months (second follow-up population). Histories of surgeries or blood product transfusion were obtained at each follow-up, with no significant difference found between the D and the ND groups. There were no significant differences between the D and the ND groups in the number of patients lost to follow-up due to death, renal transplantation or transfer to a different hospital (data not shown). Patients were dialyzed for 4 or 4.5 hours, 2 or 3 times weekly, using standard HD techniques with Cuprophane and Polysulfone dialyzers. All included HD patients were HIV and HBs-Ag negative. Dialysis membranes were low-pressure and used only once, and HD machines were bleached and rinsed between dialysis sessions according to the manufacturers' instructions. Socioeconomic level was essentially similar between the D and ND groups.
The only difference between the two groups of HD centers was that in group D, HCV positive patients were assigned to dedicated HD machines, whereas in group ND, HCV positive and negative patients were not assigned to dedicated machines. In both groups, all machines were located in dialysis wards and not in separate rooms. The patient-to-staff ratio in the D and the ND groups was not statistically different (3.1 and 3.4, respectively), and all staff members were negative for anti-HCV. To prevent HCV transmission, educational courses were held for the staff to reemphasize the CDC hygienic guidelines; however, an interview of all nurses directly involved in patient care disclosed some deviation from CDC hygienic guidelines. A minority of nurses remembered situations when they had failed to change their gloves due to an urgent adjustment of a hemodialysis machine. A checklist of hemodialysis-specific infection control practices was used, and new gloves were applied for each individual patient. Nevertheless, masks, aprons and protective glasses were not universally used. In all centers, all patients had specific dialysis stations assigned to them, and chairs and beds were cleaned after each use. Handling and storage of medications and hand washing were not done in the same areas, or areas adjacent to those, where used equipment or blood samples were handled. One of the ND centers was excluded from the study due to non-adherence to CDC hygienic guidelines in the first months of the study. Statistical analysis was performed using SPSS 10.5 software. Comparisons between groups were made by the chi-square test for categorical variables and by the t-test for quantitative variables. Results The mean age was 49.5 years (range 12–84), 58.7% of patients were male, and the mean HD duration was 21.6 months. The etiology of end-stage renal disease was hypertension in 36%, followed by diabetes in 28% and glomerulonephritis in 10.5%. 15.5% of patients in the dialysis centers had a one-time history of kidney transplantation, and 2.2% had undergone transplantation twice. The demographic data for the two groups are shown in Table 1.

Table 1 Demographic characteristics of dedicated and non-dedicated groups.

                                           Cases included at the beginning of the study    Cases included during the study (new cases)
                                           Dedicated      Non-dedicated                    Dedicated      Non-dedicated
Total count of included cases              267            275                              85             84
Age [Mean (SE)]                            48.5 (0.9)     50.6 (1.0)                       47.9 (3.1)     51.9 (1.8)
Male proportion (%)                        59.9           54.2                             62.2           61.9
At-risk occupation (%)                     0.4*           2.6                              5.1            6.1
Duration of HD [Mean (SE)]                 24.9 (2.8)     25.2 (4.9)                       12.6 (4.8)     11.8 (5.8)
Previous peritoneal dialysis (%)           6.9**          2.2                              2.4            2.4
IV drug abuse (%)***                       0.0            1.3                              3.9            1.2
Surgery during the last 2 months (%)       1.9            1.5                              12.2           8.3
Transfusion during the last 2 months (%)   27.0           21.0                             22.0           19.0
Previous transplantation (%)               17.7           18.2                             9.5            8.3

* P = 0.04 (Significant difference with the control group)
** P = 0.009 (Significant difference with the control group)
*** History of IV drug abuse was not ascertained.
All other differences between the D and the ND group were not significant.

In the first follow-up period, the incidence of HCV infection was 1.6% and 4.7% in the dedicated and the non-dedicated groups, respectively (p = 0.05). In the second follow-up period, the incidence was 1.3% in the dedicated and 5.7% in the non-dedicated groups (p < 0.05) (Table 2). Table 2 Incidence of HCV positive (PCR) cases in dedicated and non-dedicated groups in the first and second follow-up.
                 First follow-up                       Second follow-up
                 Positive No (%)   Negative No (%)     Positive No (%)   Negative No (%)
Dedicated        4 (1.6)           250 (98.4)          2 (1.3)           158 (98.7)
Non-dedicated    9 (4.7)           183 (95.3)          7 (5.8)           114 (94.2)
P value          = 0.05                                < 0.05

Discussion Earlier reports suggested that the possibility of intradialytic spread of HCV was very low and that treatment of HCV infected patients with dedicated machines was not strictly required [ 23 - 26 ]. Although there is no consensus regarding machine dedication between HCV non-infected and HCV infected patients, we found that using dedicated HD machines played an important role in reducing HCV transmission in both follow-up periods. Similar results have been shown previously. The low prevalence of HCV infection (HCV antibodies) in an HD unit in Istanbul (4.7%) suggested that patient isolation and the use of dedicated dialysis machines for seropositive patients decrease the transmission of HCV infection in HD centers [ 27 ]. Data derived from another study in Turkey demonstrated that nosocomial spread of HCV in HD units in which both seropositive and seronegative patients were treated together was higher than in units with dedicated machines [ 28 ]. A study in Lebanon has shown that infection by HCV may be dialysis machine-related, rather than transfusion-related [ 29 ]. Another study from Portugal also demonstrated that the incidence of HCV infection was lowest in units that used dedicated machines or dedicated rooms for anti-HCV-positive patients [ 30 ]. Genotyping analysis in a molecular study confirmed that implementation of rigorous hygienic routines and introduction of dedicated rooms and machines for HCV-infected patients are important measures for effective control of HCV infection in a hemodialysis environment [ 31 ]. Findings from a study conducted in Shiraz, Iran, where 5.5% of patients were anti-HCV positive, indicate that cross-infection by dialysis machines was mainly responsible for HCV infection. This study also reemphasized that cross-infection through dialysis machines, rather than transfusion of blood products, was the primary mode of transmission of hepatitis C virus among HD patients [ 15 ]. Some authors have recommended that it is sufficient to treat every dialysis patient as potentially infectious, with strict adherence to the "universal precautions for prevention of HCV transmission", to prevent the spread of HCV in dialysis units [ 32 , 33 ], and that isolation of HCV-infected dialysis patients and use of dedicated machines are unjustified [ 34 ]. P. Gilli et al. demonstrated that machine separation in the presence of strict application of hygienic precautions did not reduce HCV transmission [ 35 ]. In agreement with this report, simpler measures have been proposed, such as observance of Universal Precautions (UP), continuous training of the care staff, and the use of personal instruments for anti-HCV positive patients, which can stop the spread of HCV infection in HD centers [ 36 ]. In our study population, the prevalence of HCV infection was approximately the same in both groups at the beginning of the study, but the significantly lower incidence of HCV infection in the D group suggests that machine dedication strategies can be effective in reducing HCV transmission, at least in our HD centers. Conclusions Considering the prevalence of HCV infection and adherence to adequate infection control measures, HD machine dedication may help to decrease transmission of HCV infection in our dialysis units.
However, rigorous implementation of precautionary measures remains a cornerstone of the prevention of HCV transmission among patients undergoing maintenance hemodialysis; because unpredictable accidents can always take place in hemodialysis units, machine dedication may play an even more important role in preventing HCV transmission. Further studies are needed to evaluate the possible roles of machine dedication in the presence of strict adherence to hygienic precautions. Authors' contributions Dr AA Shamshirsaz designed the draft questionnaires and study protocol, managed the coordination of the surveys and drafted the manuscript. Dr. Kamgar participated in drafting the manuscript and data collection, coordinated the study and designed the protocol. Dr Bekheirnia conceived of and designed the study, performed the statistical analysis, drafted the manuscript and helped design the protocol. Dr Bouzari helped draft the manuscript, participated in statistical analysis and managed paraclinical surveys. Dr Habibzadeh drafted the manuscript, participated in statistical analysis and helped in data collection and physical examination. Dr Pourzahedgilani participated in physical exams, filled out questionnaires and searched scientific sources. Dr V Broumand participated in drafting the manuscript. Dr Moradi participated in statistical analysis. Dr. Ayazi, Dr. Hashemi, Dr. A.H Shamshirsaz, Dr. Borghei and Dr. Haghighi took part in physical exams and filling out questionnaires. Dr. B Broumand facilitated the study design and progress. Pre-publication history The pre-publication history for this paper can be accessed here: | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC529260.xml |
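The incidence comparisons reported in Table 2 above can be re-derived from the published counts. A minimal sketch using scipy is shown below for illustration only; the original analysis used SPSS, and with cell counts this small Fisher's exact test is arguably preferable to the chi-square test.

    from scipy.stats import chi2_contingency, fisher_exact

    # Counts from Table 2: [HCV-positive, HCV-negative] per group.
    first_followup = [[4, 250],   # dedicated machines
                      [9, 183]]   # non-dedicated machines
    second_followup = [[2, 158],
                       [7, 114]]

    for label, table in [("First follow-up", first_followup),
                         ("Second follow-up", second_followup)]:
        chi2, p_chi2, dof, expected = chi2_contingency(table)
        odds_ratio, p_exact = fisher_exact(table)
        print(f"{label}: chi-square p = {p_chi2:.3f}, Fisher exact p = {p_exact:.3f}")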
549557 | Use of dietary supplements by female seniors in a large Northern California health plan | Background Women aged ≥ 65 years are high utilizers of prescription and over-the-counter medications, and many of these women are also taking dietary supplements. Dietary supplement use by older women is a concern because of possible side effects and drug-supplement interactions. The primary aim of this study was to provide a comprehensive picture of dietary supplement use among older women in a large health plan in Northern California, USA, to raise awareness among health care providers and pharmacists about the need for implementing structural and educational interventions to minimize adverse consequences of self-directed supplement use. A secondary aim was to raise awareness about how the focus on use of herbals and megavitamins that has occurred in most surveys of complementary and alternative therapy use results in a significant underestimate of the proportion of older women who are using all types of dietary supplements for the same purposes. Methods We used data about use of different vitamin/mineral (VM) supplements and nonvitamin, nonmineral (NVNM) supplements, including herbals, from a 1999 general health survey mailed to a random sample of adult members of a large Northern California health plan to estimate prevalence of and characteristics associated with supplement use among women aged 65–84 (n = 3,109). Results Based on weighted data, 84% had in the past 12 months used ≥ 1 dietary supplement, 82% a VM, 59% a supplement other than just multivitamin or calcium, 32% an NVNM, and 25% an herbal. Compared to white, nonHispanic women, African-Americans and Latinas were significantly less likely to use VM and NVNM supplements and Asian/Pacific Islanders were less likely to use NVNM supplements. Higher education was strongly associated with use of an NVNM supplement. Prevalence did not differ by number of prescription medications taken. Among white, nonHispanic women, multiple logistic regression models showed that college education, good health, belief that health practices have at least a moderate effect on health, and having arthritis or depression significantly increased the likelihood of NVNM use, while having diabetes decreased it. Conclusions An extremely high proportion of older women are using dietary supplements other than multivitamins and calcium, many in combination with multiple prescription medications. Increased resources should be devoted to helping clinicians, pharmacists, supplement vendors, and consumers become more aware of the safety, effectiveness, and potential side effects of dietary supplements. | Background A recent national survey of medication use patterns found the highest prevalence of medication use over a one-week period was among women aged ≥ 65 years [ 1 ]. Over 80% of women in this age group had taken at least one prescription or over-the-counter medication during the week preceding the survey, and over half had taken five or more medications. In addition to these medications, nearly 60% had used some type of vitamin/mineral supplement and 14% an herbal or other type of dietary supplement. These estimated prevalences of vitamin/mineral, herbal, and other supplement use are higher than those derived from previous national surveys [ 2 - 4 ], and by extrapolation, so is the prevalence of concomitant use of supplements and drugs.
While many herbal medications and dietary supplements are safe for most people to use [ 5 ], there is growing evidence that some herbs and other types of non-herbal supplements can cause serious adverse effects [ 6 - 9 ]. Given the large proportion of this group concurrently using clinician-prescribed medications and self-prescribed dietary supplements, there is a great potential for drug-supplement interactions, especially since several surveys have shown that patients generally do not report or under-report use of supplements to their clinicians and pharmacists [ 10 - 13 ]. In a previous study, we used data from a 1999 general health survey of adult members of the Kaiser Permanente Medical Care Program of Northern California (KPCMP) to characterize use of nonvitamin, nonmineral (NVNM) supplements, including herbals, among the adult membership of this large health plan [ 14 ]. The results presented here expand on the earlier work by including vitamin/mineral (VM) supplement use, focusing on women aged 65–84, and employing logistic regression modeling to identify predictors of different types of dietary supplement use among these women. Our intent is to provide a comprehensive picture of dietary supplement use among this growing segment of the health care seeking population as a basis for planning structural and educational interventions to minimize adverse consequences of self-directed supplement use. A second aim is to raise awareness about how the focus on use of herbals and megavitamins that has occurred in most surveys of complementary and alternative therapy use results in a significant underestimate of the proportion of older women who are using all types of dietary supplements for the same purposes. This project was approved by the Kaiser Foundation Research Institute's Institutional Review Board in Oakland, CA. Methods Data sources During Spring 1999, a confidential general health survey ("Adult Member Health Survey") was mailed to a stratified random sample of 40,000 adults aged ≥ 25 who were members of the Kaiser-Permanente Medical Care Program in Northern California. Up to three attempts were made to obtain a mailed response from each person in the sample unless the individuals refused or were deemed ineligible due to curtailment of membership, death, language barrier, or incorrect address. The survey questionnaire used in the first two mailing attempts included questions about use of complementary and alternative medicine (CAM) modalities and dietary supplement use, in addition to questions covering demographic and health-related characteristics and medication use. (A shortened form of the questionnaire sent to non-respondents to the first two mailings did not contain the CAM and dietary supplement questions.) Completed non-abridged questionnaires were received from 72% (n = 3,109) of women aged 65–84 in the survey sample. We restricted our estimates to women under age 85 because both the response rate and numbers of respondents aged ≥ 85 were too low to generalize about the "oldest old" cohort. Use of dietary supplements during the previous 12 months was ascertained through two questions. First, a question covering use of 19 different CAM modalities: "Have you used the following methods to help treat or prevent health problems?" included items for "Any herbal medicine or supplement" and "Megavitamins/high dose vitamin therapy (not including daily multiple vitamin)." 
A separate question asked about use of selected dietary supplements: "In the past 12 months, did you use any nutritional supplements?" This question had a response checklist that included daily multiple vitamin with or without minerals (e.g., Centrum, One-a-Day), calcium with or without vitamin D, vitamin C, vitamin E, melatonin, ginkgo biloba, Echinacea, kava kava, glucosamine, St. John's Wort, and a space to write in other supplements, which were then individually coded and categorized as herbal supplements, other nonvitamin/nonmineral (NVNM) supplements, or vitamin/mineral (VM) supplements. Respondents were classified as VM users if they reported use of one or more vitamins/minerals, although we further subdivided this group into those who used only a multivitamin and/or calcium and those who reported using other VMs such as vitamins C and E or minerals such as zinc and magnesium, with or without a daily multivitamin and/or calcium ("Dietary Supplement other than a Multivitamin and/or Calcium"). Respondents were classified as herbal users if they had indicated herbal use on the CAM modality checklist or indicated use of one or more herbals on the dietary supplement checklist. Respondents were classified as NVNM supplement users if they had indicated use of any of the herbals, glucosamine, or melatonin on the dietary supplement checklist or wrote in a supplement subsequently categorized as an herbal/botanical, amino acid, enzyme, protein, hormone or other non-herbal NVNM dietary supplement [ 15 ]. The estimated percentages of women who regularly took one or more prescription medications were based on responses to the question "How many prescription medicines do you regularly take," while estimated use of prescription medication with a narrow therapeutic index was based on health plan pharmacy data for the sample. The decision to use self-report rather than pharmacy data to estimate the numbers of prescription medications regularly used was made because nearly 17% of the women had reported filling prescriptions outside of the health plan during the 12 months preceding the survey, and also because it would have been extremely labor-intensive to determine from pharmacy data the numbers of different types of prescription medications each respondent "regularly" used. Statistical analysis The respondent sample was assigned post-stratification weights so that analyses with weighted data would reflect the actual age (by 5-year intervals), gender and geographic distribution of the adult membership from which the sample was drawn. All percentages reported in the text and tables are based on weighted data. However, the tables include the actual (unweighted) subgroup denominators used in the analyses. All analyses were performed using PC-SAS version 8.2 [ 16 ]. Calculations of 95% confidence intervals (CI) and significance testing were done using the Proc Surveymeans procedure for data collected using a multi-stage survey design. The range of the confidence intervals is affected by the size of the subgroup denominator, which is why confidence intervals are tighter around prevalence estimates for the white, nonHispanic (whiteNH) subgroup than around the estimates for the other race/ethnic subgroups. Prevalence ratios (PR) were calculated to compare supplement use rates for subgroups of interest against rates for a reference group (e.g., herbal use among women aged 75–79 vs. aged 65–74). Confidence intervals around the PRs were used to assess the range of PRs compatible with the data at a level of 95% confidence.
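For illustration, the sketch below computes a prevalence ratio with the standard log-normal (Katz) confidence interval; the paper does not state which interval method Proc Surveymeans applied, and this simplified version ignores the post-stratification weights, so it should be read as an assumption rather than a reproduction of the published analysis. The age-group denominators are taken from Table 1 below; the case counts are hypothetical.

    import math

    def prevalence_ratio_ci(cases1, n1, cases0, n0, z=1.96):
        """Prevalence ratio of group 1 vs. reference group 0, with Katz log-normal CI."""
        p1, p0 = cases1 / n1, cases0 / n0
        pr = p1 / p0
        se_log = math.sqrt((1 - p1) / cases1 + (1 - p0) / cases0)
        return pr, pr * math.exp(-z * se_log), pr * math.exp(z * se_log)

    # Hypothetical example: herbal use, ages 75-79 (n = 1356) vs. ages 65-74 (n = 1468).
    pr, lo, hi = prevalence_ratio_ci(280, 1356, 400, 1468)
    print(f"PR = {pr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
    # A CI that includes 1.0 would indicate no statistically significant difference.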
In the text and tables, a PR confidence interval that includes 1.0 indicates that rates are not statistically significantly different from each other at the p < .05 level. Logistic regression models run with unweighted data were used to test whether African-American/Black (AA/B), Hispanic/Latina (H/L), and Asian/Pacific Islander (A/PI) women differed significantly from white nonHispanic (whiteNH) women in use of different supplements after controlling for age and also after controlling for age, education, and health status. Logistic regression models were used to identify statistically significant independent predictors of four types of use: (a) any dietary supplement, (b) any dietary supplement other than a multivitamin and/or calcium supplement only, (c) any NVNM supplement, and (d) any herbal supplement. The results we present are restricted to whiteNH women because we found that the predictor variables did not operate the same way in separately run models for the other race/ethnic groups in the sample. Indicator variables included in the logistic models were 3 age groups (75–79, 80–84 vs. 65–74), 4 education levels (< 12 years, some college, college graduate vs. high school graduate), overall health status (good/excellent health vs. fair/poor health), arthritis (yes vs. no), diabetes (yes vs. no), depression for at least two weeks during the past 12 months, and belief that "lifestyle/habits (what you eat, exercise, and weigh) affect health" moderately to extremely vs. not at all/a little bit. Rates of supplement use were not substantially different for ages 65–69 and 70–74, so these age groups were collapsed. All four logistic models included the same set of predictor variables and were run using data from the 96% (n = 2,378) of whiteNH women who had complete data for all variables. In the text and tables, adjusted odds ratios (Adj. OR) with 95% confidence intervals that cross 1.0 are not statistically significant at the p < .05 level. Results and discussion Table 1 shows selected demographic and health-related characteristics of the sample. The sample is predominantly whiteNH (80%), educated beyond high school (55%), and in good health (80%), although a large percentage have chronic health problems such as hypertension, diabetes, arthritis, and depression. Nearly 85% reported regularly taking at least one prescription medication and 19% five or more. Approximately 16% regularly used a prescription medication with a narrow therapeutic index, such as an anticoagulant, cardiac glycoside, or tricyclic antidepressant. Approximately 70% believed that their health habits had a moderate to large effect on their health.
Table 1 Characteristics of the sample population (n = 3109)
Characteristic | N | Unwtd.% | Wtd.%*
All Ages
Ages 65–74 | 1468 | 47.2 | 64.1
Ages 75–79 | 1356 | 43.6 | 22.9
Ages 80–84 | 285 | 9.2 | 13.0
Race/Ethnicity
White, nonHispanic | 2483 | 81.1 | 80.2
African-American/Black | 169 | 5.5 | 5.7
Hispanic/Latina | 147 | 4.8 | 5.0
Asian/Pacific-Islander | 222 | 7.3 | 7.8
Other | 41 | 1.3 | 1.2
Educational Attainment
< High School Graduate | 472 | 15.4 | 15.0
High School Graduate/GED | 921 | 30.0 | 30.0
Some College | 1090 | 35.6 | 34.8
4-Year College Graduate | 583 | 19.0 | 20.2
Health Status
Excellent/Very Good/Good | 2436 | 78.7 | 79.6
Fair/Poor | 658 | 21.3 | 20.4
Health Conditions
Heart Disease | 561 | 18.0 | 17.3
Diabetes | 350 | 11.3 | 11.4
Hypertension | 1449 | 46.6 | 46.4
Arthritis | 1202 | 38.7 | 38.1
Depression for ≥ 2 weeks during yr | 365 | 11.7 | 12.0
# Rx Medications Used (by self-report)
1 | 500 | 16.9 | 16.8
2–4 | 1429 | 48.4 | 48.9
5 or more | 567 | 19.2 | 19.1
Taking Rx Medication with a Narrow Therapeutic Index** | 513 | 16.5 | 15.9
Belief About How Much Health Habits/Lifestyle Affect Health
Little or no effect | 917 | 30.8 | 29.6
Moderate effect | 591 | 19.8 | 19.1
Great deal of effect | 1472 | 49.4 | 51.3
* Weighted percentages are based on respondent data weighted to reflect the age, gender, and geographic distribution of the membership. N's in table are actual numbers of respondents with this characteristic.
** Estimated based on health plan pharmacy records for sample.
Table 2 [See Additional file 1 ] shows estimates of the percentages of female seniors who used specific types of dietary supplements. Estimates are provided for all women and for women in the four major race/ethnic groups because the estimates for the overall population are so heavily influenced by the large proportion of whiteNH women in the sample. Overall, 84% of the women had used at least one dietary supplement during the previous 12 months, 82% a VM supplement, 59% a dietary supplement other than just a multivitamin and/or calcium, 32% an NVNM supplement, and 25% an herbal supplement. Among those who used at least one supplement (n = 2,574), a mean of 3.25 (sd = 15.29) supplements was used, with 22% using only one and 57.1% using ≥ 3 supplements. The mean number of supplements used excluding daily multivitamins and calcium was 2.14 (sd = 12.42), with 49% using only one and 27.3% using ≥ 3. After adjusting for age, African-American/Black seniors were significantly less likely than whiteNH seniors to use daily multivitamins and calcium, as well as all the other categories of dietary supplements. This difference remained statistically significant after also adjusting for education, health status, and the three chronic health conditions. Hispanic/Latina seniors were also significantly less likely than whiteNH seniors to use VM supplements (with the exception of vitamin E) and NVNM supplements, but did not significantly differ on herbal use. Asian/Pacific Islander seniors did not significantly differ from whiteNH seniors on use of any type of dietary supplement, use of VMs, or use of dietary supplements other than just a multivitamin and/or calcium. However, they were significantly less likely than whiteNH women to use NVNM supplements. Among women using any supplement, mean numbers of all supplements used and of supplements used excluding multivitamins and calcium were both significantly higher among whiteNH women than among women of color (not shown).
Table 2 also shows that use of glucosamine by women with arthritis was significantly lower among African-American/Black (OR = 0.32, CI: 0.13–0.82) and Hispanic/Latina (OR = 0.19, CI: 0.06–0.61) women as compared to whiteNH women, and lower, though not statistically significantly so, among Asian/PI women (OR = 0.56, CI: 0.28–1.11). Similarly, women of color who had experienced depression for at least two weeks during the 12-month interval were substantially less likely than whiteNHs to report using St. John's Wort to treat depression. Rates of ginkgo biloba use did not significantly differ by race/ethnicity. Table 3 [See Additional file 2 ] shows how use of dietary supplements varied by personal characteristics other than race/ethnicity for the whole sample, and Table 4 [See Additional file 3 ] provides this information for the whiteNH women who comprised most of the sample. The NVNM and herbal supplement use rates were lower for seniors aged 75–84 than for those aged 65–74, but the differences were smaller than those observed for race/ethnicity. While higher education was significantly associated with supplement use, it was more strongly associated with use of NVNM and herbal supplements than with VM use. For example, NVNM use among college graduates was approximately 40% higher than that for high school graduates, while the rate for those who had not completed high school was approximately 40% lower than the rate for high school graduates. Health status and belief about the effect of health practices were more strongly associated with NVNM supplement use than with use of any dietary supplement. Table 5 [See Additional file 4 ] shows the results of multiple logistic regression models predicting use of any dietary supplement, use of any dietary supplement other than just a multivitamin or calcium, use of any NVNM supplement, and use of any herbal by whiteNH women. Women who had not completed high school were significantly less likely than high school graduates to use a dietary supplement other than daily multivitamin/calcium and an NVNM, but did not significantly differ in likelihood of using herbals. Higher education, especially a college degree, was associated with significantly higher likelihood of use of all four categories of supplements as compared to high school education. Good health remained significantly associated with use of dietary supplements other than multivitamin/calcium, NVNM use, and herbal use, although in the case of herbals, it was only significant after the three health conditions were entered into the model. Belief that health practices had a moderate-large effect on health was a significant predictor of both VM and NVNM use, even after controlling for its strong association with educational attainment. Having arthritis increased the likelihood of use of VMs and NVNMs, but not herbals. In contrast, having diabetes decreased the likelihood of using VM and NVNM supplements. Experiencing depression did not have a significant effect on use of VM supplements, but doubled the likelihood of use of NVNM supplements, especially herbals.
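For readers who want to see the shape of the models behind Table 5, the following is a minimal sketch in Python/statsmodels rather than the authors' SAS code; the file and column names are hypothetical, but the indicator coding mirrors the Methods (reference groups: ages 65–74, high school graduates, fair/poor health), shown here for the NVNM outcome.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset of the 2378 whiteNH women with complete data
df = pd.read_csv("whitenh_women.csv")

model = smf.logit(
    "used_nvnm ~ C(age_group, Treatment('65-74'))"
    " + C(education, Treatment('HS graduate'))"
    " + good_health + arthritis + diabetes + depression"
    " + believes_habits_affect_health",
    data=df,
).fit()

# Adjusted odds ratios with 95% CIs; a CI crossing 1.0 is not significant
ors = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
ors.columns = ["Adj. OR", "2.5%", "97.5%"]
print(ors)
```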
A logistic regression model predicting glucosamine use among women with arthritis found that Hispanic/Latina race/ethnicity (OR = 0.25, CI: 0.08–0.82) significantly decreased the likelihood of glucosamine use as compared to whiteNH women, with a similar but not statistically significant decrease for African-American/Black women (OR = 0.42, CI: 0.16–1.08), while having education beyond high school (OR = 1.64, CI: 1.16–2.30) and the belief that health practices have a moderate-large effect on health (OR = 1.49, CI: 1.03–2.15) were associated with significantly greater likelihood of glucosamine use. The prevalence of use of NVNM supplements and dietary supplements other than a multivitamin and/or calcium did not differ by number of prescription medications taken (0, 1, 2–4, or ≥ 5) or by whether a prescription medication with a narrow therapeutic index was being taken. Approximately 10% of women seniors who were taking an anticoagulant were also using ginkgo biloba or garlic supplements, even though there is evidence that combining these supplements with anticoagulants can lead to adverse consequences. Figure 1 shows how use of dietary supplements as a complementary or alternative therapy to treat or prevent health problems is probably underestimated among female seniors due to study investigators' focus on herbals and megavitamins. Prevalences of use of herbal and other dietary supplements were estimated based on responses provided by the same women to both the herbal supplement item in the complementary and alternative medicine (CAM) checklist and the nutritional supplement question. The estimated prevalence of herbal supplement use based on reported specific supplements was twice as high as that based on the CAM checklist item alone. Broadening the definition of supplement use to include any NVNM significantly increased the percentage of users, primarily due to use of glucosamine for arthritis and other joint conditions. Further broadening to use of any NVNM or VM other than multivitamin and/or calcium resulted in a percentage twice as high as herbal use based on the coded supplements. Lastly, the prevalence of use of any type of VM or NVNM supplement, including more mainstream daily multiple vitamins and calcium, was more than three times greater than the prevalence of herbal use based on the coded supplements. Figure 1 Underestimation of dietary supplement use by tracking herbal use only among women aged 65–84. NVNM = Nonvitamin, nonmineral including herbals. Based on respondent data weighted to reflect the age, gender, and geographic distribution of the membership. Finally, Figure 2 shows estimated rates of NVNM supplement use among women aged 45–54 and 55–64 in this same health plan membership compared with rates of use among current seniors based on their responses to the same survey questions. Since current supplement use is probably one of the best predictors of future supplement use, the data suggest that the prevalence of NVNM use among women aged ≥ 65 years will increase substantially over the next couple of decades as the health plan population ages. Figure 2 Differences in dietary supplement use among women by age cohort in a health plan population. NVNM = Nonvitamin, nonmineral supplement including herbals. Supplement other than Multivitamin/Calcium = any dietary supplement other than multivitamin and/or calcium. Based on respondent data weighted to reflect the age, gender, and geographic distribution of the membership.
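The nesting behind Figure 1 follows directly from the classification rules in the Methods. The sketch below is not the authors' code: the respondent fields are hypothetical and the supplement sets are abbreviated to the checklist items named earlier. It shows how each successively broader definition contains the previous one, which is why the estimated prevalence climbs from 25% (herbals) to 32% (any NVNM) to 59% (anything beyond a multivitamin and/or calcium) to 84% (any supplement).

```python
HERBALS = {"ginkgo biloba", "echinacea", "kava kava", "st. john's wort"}
OTHER_NVNM = {"melatonin", "glucosamine"}   # non-herbal NVNM checklist items
MULTI_OR_CALCIUM = {"multivitamin", "calcium"}

def classify(checked_supplements, cam_checklist_herbal):
    """Nested user categories per the Methods; inputs are one respondent's
    lower-cased supplement names and her CAM-checklist herbal response."""
    supps = set(checked_supplements)
    herbal = cam_checklist_herbal or bool(supps & HERBALS)
    nvnm = herbal or bool(supps & OTHER_NVNM)
    vm = bool(supps - HERBALS - OTHER_NVNM)       # any vitamin/mineral
    beyond_multi_calcium = nvnm or bool(
        supps - HERBALS - OTHER_NVNM - MULTI_OR_CALCIUM)
    return {"herbal": herbal,                             # narrowest, 25%
            "nvnm": nvnm,                                 # 32%
            "beyond_multi_calcium": beyond_multi_calcium, # 59%
            "any_supplement": vm or nvnm}                 # broadest, 84%
```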
Conclusions
The 84% prevalence of use of any vitamin/mineral (VM) or nonvitamin/nonmineral (NVNM) supplements, 32% prevalence of NVNM supplement use, and 25% prevalence of herbal supplement use by women aged 65–84 in this Northern California health plan membership are substantially higher than prevalences previously reported for women ≥ 65 years of age. Compared to the results of the national Slone Survey of medication use among adults, this population was significantly more likely to report use of any vitamin/mineral (82% vs. 59%), daily multivitamin (57% vs. 33%), calcium (57% vs. 23%), any herbal or other NVNM supplement (32% vs. 14%), and specific NVNM supplements, including ginkgo biloba (15% vs. 5%), glucosamine (12% vs. 4%), and Echinacea (8% vs. < 1%) [ 1 ]. While the time frame for the surveys differed (use during the past 12 months vs. use during the past week), this is unlikely to affect the comparison of most of the specific VM and NVNM supplements, which are generally used almost daily. The percentages of herbal users (25%) and women using both herbals and prescription medicines (21%) were also higher than rates observed by Foster et al. (9% and 6%, respectively) based on Eisenberg et al.'s 1997 national survey of alternative medicine use over a 12-month interval [ 17 ]. Further, our survey found that the 32% prevalence of NVNM use and 59% prevalence of use of NVNM or VM supplements other than just a multivitamin or calcium were substantially higher than the prevalence of herbal use and were not affected by the number of prescription medications women were taking or whether any of those medications had a narrow therapeutic index. Our findings that supplement use was significantly higher among women who had higher levels of education, were white, nonHispanic (vs. African-American/Black or Hispanic/Latino), and were in good health are consistent with findings reported by other studies [ 18 - 22 ]. However, we also found that belief that one's health practices and lifestyle had at least a moderate effect on health was an additional significant predictor, but only for white, nonHispanic women; bivariate and logistic analyses done separately for African-American, Hispanic/Latino, and Asian/Pacific Islander subgroups found no indication that this factor influenced supplement use. For women of color, education beyond high school was the strongest predictor of use. Finally, we showed that some health conditions were significantly associated with higher likelihood of use of certain kinds of supplements (arthritis, depression), while another (diabetes) was associated with lower likelihood of use. This suggests that studies that employ one variable to represent presence of any chronic health problem may yield inaccurate results. The higher rates we observed in our study may be a result of differences in the demographic composition of the survey populations. Our sample was predominantly (80%) white, nonHispanic and educated beyond high school (35% some college and 20% college graduates), and in this and other surveys, being whiteNH and better educated was significantly associated with supplement use. However, while the usage rates among African-American/Black and Hispanic/Latina women and those without post-high school education are substantially lower, the demographics of the 1998–1999 Slone survey and 1997 Eisenberg et al. national survey samples are not substantially different from ours. Our observed usage rates may also be higher because of the social environment.
Previous national surveys have shown that rates of NVNM and any alternative therapy use among people in the Western United States are higher than rates for the entire country [ 3 , 23 , 24 ]. However, the health plan membership is diverse with regard to education, socioeconomic status, and health-related attitudes. The higher herbal and NVNM usage rates observed in this study compared to those reported by Foster et al. and Radimer et al. also may be related to the timing of the surveys. Eisenberg et al. reported highly significant increases in use of herbal medicine (2.5% to 12.1%) and megavitamins (2.4% to 5.5%) for the adult population overall in 1990 vs. 1997 [ 10 ]. In an earlier study, we reported an increase in use of herbals by women ≥ 65 years from 1.2% in 1996 to 9% in 1999, based on response to a question about use of different types of alternative therapies in this triennial health plan membership survey [ 25 ]. Finally, the wording of the questions to ascertain herbal use and NVNM use was not totally comparable across surveys. The estimates of VM and NVNM use reported in the Slone study are based on an open-ended question about use of any medication during the preceding 7 days, with a prompt for both VM and herbal supplement use, but not use of other types of NVNM supplements. Foster's estimate is based on response to a question about use of herbals as one of several different types of therapies. In contrast, our results are based on a question that provided a response checklist of some specific VM, herbal, and other NVNM supplements along with the opportunity for individuals to add additional supplements used, which were later coded and categorized. In an earlier study of alternative therapy use, we examined the difference in estimates of herbal use by the health plan membership based on indication of herbal use in a checklist of 17 different methods (not labeled as alternative therapies) used to treat or prevent health problems and indication of herbal use based on that item and response to the dietary supplement use question. We found that basing the estimate on the combined questions versus the single item nearly doubled the rate of herbal use among women ≥ 65 years (17.6% vs. 9.6%) [ 25 ]. Our finding that much larger percentages of adult women are taking NVNMs and VMs other than multivitamins and calcium suggests that for purposes of surveying populations about complementary and alternative therapy use and for medical interviews, the focus should be on use of all types of dietary supplements and medicinal teas, not just herbal supplements. There are several reasons for this. First, VMs and NVNMs other than herbals have the potential for causing adverse reactions, such as high doses of vitamin C or zinc resulting in gastric upset, that can lead to further self-medication and/or medical visits. Second, certain VMs and NVNMs other than herbals also have the potential to interact with prescription medications, resulting in decreased effectiveness or altered physiological indicators of how the medication is affecting the individual. Third, a focus on herbals alone excludes other NVNMs that are commonly used by patients with certain health conditions. For example, we estimate that nearly 3% of women used melatonin and 15% of whiteNH women (24% of those with arthritis) used glucosamine, neither of which is an herbal supplement.
Finally, surveys of Mexican-American and Central American older women have shown a high prevalence of use of herbal and other types of medicinal teas, which may not be picked up by questions asking about dietary supplement use [ 26 ]. However, since a growing proportion of the population is drinking herbal teas for nonmedicinal reasons (i.e., other than to treat health problems or symptoms), it will be important to find the best way to ask about medicinal tea use so as to avoid including in estimates those people who are drinking nonmedicinal herbal teas as an alternative to caffeinated beverages. Several surveys have found that patients do not tend to report use of herbals and other dietary supplements to their health care providers in clinical encounters [ 1 , 9 , 11 , 27 ]. Because of this lack of communication, there is a great potential for adverse interactions of drugs and dietary supplements in this age group. It may also be the case that some dietary supplements or particularly high dosages of supplements might actually cause symptoms or changes in physiological indicators that may be incorrectly attributed to other underlying health problems, resulting in unnecessary or inappropriate treatment either by the woman or her clinician. As the cost of prescription medications continues to rise and health insurers continue to place caps on medication coverage, it is likely that increasing numbers of older women, especially those on limited incomes, will turn to dietary supplements as a lower-cost alternative for treating health conditions. Concomitantly, the incidence of supplement-related health problems is likely to increase. Our finding that nearly 60% of older women in this population were using dietary supplements other than multivitamins and calcium underscores the importance of clinicians querying patients about use of all types of dietary supplements when assessing health problems and prescribing medications, and, as a backup, pharmacists inquiring about use of dietary supplements that may interact with prescription and over-the-counter medicines that are being purchased. Initiation of the communication by clinicians and pharmacists is likely to result in increased patient awareness that these dietary supplements may affect their health and treatment outcomes, which should then lead to higher rates of patient-initiated communication about dietary supplements they are using or considering using. Greater clinician and pharmacist awareness of all the different prescribed and self-directed regimens patients are using may lead to more proactive interventions to decrease adverse effects of supplement use. However, in order for clinicians and pharmacists to be able to respond to patient questions about dietary supplements, as well as to identify individuals at high risk for adverse effects, better information about the safety, effectiveness, and side-effects of dietary supplements needs to be available and easily accessible, such as through the Natural Standard and Natural Medicines databases. In conclusion, our study indicates that use of dietary supplements to treat or prevent health problems is very prevalent among older insured women, and that based on current use in younger age groups, the prevalence can be expected to increase over the next few decades.
It will be important for federal agencies, professional associations, manufacturers, and consumer groups to promote research into the safety and effectiveness of commonly used dietary supplements, to develop standards for product quality, and to develop guidelines for recommended dosages based on age, weight, and health history that can be disseminated to both health care professionals and stores or clinics that sell these products. At the same time, it is important to begin to educate patients and the broader public about the importance of more thoroughly researching the safety, effectiveness, and potential negative effects of particular dietary supplements before beginning to use them.
Competing interests
The author(s) declare that they have no competing interests.
Authors' contributions
NG designed the study, conducted the survey, analyzed the data, and drafted the manuscript. DS developed the scheme for coding the dietary supplement data, consulted on data analysis, and participated in manuscript development. Both authors read and approved the final manuscript.
Pre-publication history
The pre-publication history for this paper can be accessed here:
Supplementary Material
Additional File 1: Table 2 - Estimated percentages of female health plan members aged 65–84 using specific types of dietary supplements, overall and by race/ethnicity
Additional File 2: Table 3 - Estimated use of dietary supplements by women aged 65–84 by selected personal characteristics other than race/ethnicity
Additional File 3: Table 4 - Association of selected personal characteristics with dietary supplement use by white, nonHispanic women aged 65–84
Additional File 4: Table 5 - Results of multiple logistic regression models predicting dietary supplement use by white, nonHispanic women aged 65–84 | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC549557.xml
549543 | Long-term survival from gastrocolic fistula secondary to adenocarcinoma of the transverse colon | Background Gastrocolic fistula is a rare presentation of both benign and malignant diseases of the gastrointestinal tract. Malignant gastrocolic fistula is most commonly associated with adenocarcinoma of the transverse colon in the Western World. Despite radical approaches to treatment, long-term survival is rarely documented. Case presentation We report a case of a 24-year-old woman who presented with the classic triad of symptoms associated with gastrocolic fistula. Radical en-bloc surgery and adjuvant chemotherapy were performed. She is still alive ten years after treatment. Conclusions Gastrocolic fistula is an uncommon presentation of adenocarcinoma of the transverse colon. Radical en-bloc surgery with adjuvant chemotherapy may occasionally produce long-term survival. | Background
Gastrocolic fistula is a rare complication of both benign and malignant diseases of the gastrointestinal tract [ 1 - 6 ]. In the Western World, adenocarcinoma of the transverse colon is the commonest cause of a fistulous connection between the stomach and the colon, with a reported incidence of 0.3–0.4% in operated cases [ 3 , 4 ]. Despite radical en-bloc surgery, these patients usually have a poor prognosis [ 5 , 6 ]. Long-term survival for these patients is rarely reported [ 5 ]. The authors report a 24-year-old woman who presented with a gastrocolic fistula secondary to an adenocarcinoma of the transverse colon and describe her treatment and long-term follow up.
Case presentation
A 24-year-old woman presented to the surgical clinic with epigastric pain, faeculent vomiting, and post-prandial diarrhoea of three months' duration; she had lost over one stone in weight. She was previously healthy and was not taking any regular medications. There was no history of peptic ulcer disease, inflammatory bowel disease, trauma, or previous abdominal surgery. She had been investigated two years previously by a gastroenterologist for intermittent left-sided abdominal pain, at which time the clinical examination and blood tests were normal. Irritable bowel syndrome had been diagnosed, although no colonic imaging was performed. Both her maternal grandfather and great-grandfather had suffered from colonic cancer. An initial ultrasound scan of the abdomen revealed thickened bowel in the right upper quadrant with a dilated duodenum. A barium meal and follow-through was then performed: this demonstrated a mucosal abnormality on the greater curvature of the stomach with a fistulous tract into the transverse colon (Figure 1). Barium enema and colonoscopy were not performed. The presence of a mucosal abnormality on the greater curvature of the stomach was confirmed on upper gastrointestinal endoscopy, although initial biopsies revealed no evidence of a malignant neoplasm. Her blood tests revealed: haemoglobin 9.5 g/dl, mean cell volume 71.6 fl, and a white cell count of 20.2 × 10⁹/l; urea, electrolytes, and liver function tests were all normal. Figure 1 Barium meal demonstrating fistulous connection between greater curvature of the stomach and the distal half of the transverse colon (arrowed). In view of her symptoms, an exploratory laparotomy was undertaken. At surgery, a large mobile tumour of the distal transverse colon was identified; this was adherent to the greater curvature of the stomach, the mesentery, and several loops of jejunum.
A radical en-bloc resection was performed involving a subtotal gastrectomy, transverse colectomy, and small bowel resection (Figure 2). The patient made an uneventful recovery from surgery. Histology revealed a poorly differentiated mucinous adenocarcinoma of the colon without lymphatic involvement (Dukes' Stage B): this was adherent to and had penetrated the stomach wall. She received adjuvant 5-fluorouracil (420 mg/m²) and folinic acid (20 mg/m²) chemotherapy every four weeks for the following six months. Figure 2 Macroscopic en-bloc surgical specimen showing fistula between stomach and transverse colon (arrowed). She has been followed up with two-yearly colonoscopy and five-yearly upper gastrointestinal endoscopy. She remains well with no signs of either local or distant recurrence more than ten years after initial diagnosis.
Discussion
Advanced neoplasms of the stomach and transverse colon are the commonest causes of a gastrocolic fistula: adenocarcinoma of the transverse colon is commoner in the Western World [ 1 , 3 , 4 ], whereas adenocarcinoma of the stomach is a more frequent cause in Japan [ 5 ]. Gastrocolic fistula has also been reported with other tumour types such as gastric lymphoma [ 7 ], carcinoid tumours of the colon [ 8 ], and, rarely, metastatic tumours [ 9 ] and infiltrating tumours of the pancreas, duodenum, and biliary tract [ 3 ]. With advances in medical treatment, gastrocolic fistula secondary to peptic ulcer disease is now less common [ 6 ]. A variety of other causes of gastrocolic fistula have been reported: these include syphilis, tuberculosis, abdominal trauma, Crohn's disease, Cytomegalovirus gastric infection in AIDS patients, and percutaneous endoscopic gastrostomy (PEG) tubes [ 10 - 13 ]. The fistulous connection in a gastrocolic fistula usually arises between the greater curvature of the stomach and the distal half of the transverse colon because of their close anatomical proximity, separated only by the gastrocolic omentum [ 13 ]. Two theories have been advanced for the development of a fistula [ 1 , 3 , 4 ]: the tumour may invade directly across the gastrocolic omentum from the originating organ; alternatively, a tumour ulcer may provoke a surrounding inflammatory peritoneal reaction leading to adherence and fistulation between the two organs. Cases of malignant gastrocolic fistula have usually been characterised by the presence of large infiltrative tumours with a surrounding inflammatory reaction, as seen in our patient; lymph node involvement is unusual [ 13 ]. Our patient presented with the characteristic triad of symptoms associated with a gastrocolic fistula [ 5 , 14 ]: diarrhoea, weight loss, and faeculent vomiting. Other symptoms include abdominal pain, fatigue, faeculent eructations, and nutritional deficiencies. The gastrocolic fistula was identified in our patient by means of an upper gastrointestinal contrast series. Because the flow in the fistula is predominantly from transverse colon to stomach [ 15 ], several authors have suggested that barium enema is the more sensitive investigation in detecting and delineating such a fistula, although the detection rate may be lower in neoplastic cases [ 2 , 16 - 18 ]. Computerised tomography may also be useful in both delineating the fistula and identifying the underlying aetiology [ 5 , 19 ]. Endoscopy is an excellent tool for visualising the fistulous opening (especially in the stomach) and also allows preoperative histological confirmation [ 20 , 21 ].
Although two-stage approaches have been advocated historically for malignant gastrocolic fistula, in order to first correct nutritional deficiencies [ 22 ], most authors now prefer radical en-bloc resections [ 14 ]. Despite such approaches, most patients have a poor prognosis, and no patient had previously been reported to survive for more than nine years after resection [ 5 ]. This case report describes the longest disease-free survival of a patient with a malignant gastrocolic fistula. To the authors' knowledge, she is also the youngest patient to be reported. It is worth noting that colorectal cancer in patients aged less than 35 years is normally associated with a poorer prognosis compared with older age groups [ 23 - 25 ]. This is related to the biological characteristics of such tumours, with a higher proportion of mucinous, poorly differentiated tumours. As a result, younger patients present with more advanced disease. Such patients require early diagnosis and a radical approach to treatment.
Conclusions
Gastrocolic fistula is an uncommon presentation of adenocarcinoma of the transverse colon. Radical en-bloc surgery with adjuvant chemotherapy may occasionally produce long-term survival.
Competing interests
The author(s) declare that they have no competing interests.
Authors' contributions
MJF collated the information, searched the literature, and wrote the manuscript. JKD assisted in the literature search and writing of the manuscript. KM was responsible for long-term follow up of the patient and assisted in the literature search. MCP managed the patient, helped in preparing the manuscript, and edited the final version. All authors have read and approved the final version of the manuscript. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC549543.xml
539259 | Stoicism, the physician, and care of medical outliers | Background Medical outliers present a medical, psychological, social, and economic challenge to the physicians who care for them. The determinism of Stoic thought is explored as an intellectual basis for the pursuit of a correct mental attitude that will provide aid and comfort to physicians who care for medical outliers, thus fostering continued physician engagement in their care. Discussion The Stoic topics of good, the preferable, the morally indifferent, living consistently, and appropriate actions are reviewed. Furthermore, Zeno's cardinal virtues of Justice, Temperance, Bravery, and Wisdom are addressed, as are the Stoic passions of fear, lust, mental pain, and mental pleasure. These concepts must be understood by physicians if they are to comprehend and accept the Stoic view as it relates to having the proper attitude when caring for those with long-term and/or costly illnesses. Summary Practicing physicians, especially those who are hospital-based, and most assuredly those practicing critical care medicine, will be emotionally challenged by the medical outlier. A Stoic approach to such a social and psychological burden may be of benefit. | Background
Medical outliers are defined in health care reimbursement, especially in the prospective payment system, as those patients who require an unusually long hospital stay or whose stay generates unusually high costs, i.e., the most severely ill [ 1 ]. According to Meadow et al., racial and Hispanic minorities are more likely to be outliers, as are urban dwellers and those who live in counties in the USA that have poverty rates greater than 16.7% [ 2 ]. The ranks of the medically uninsured in the USA have risen to the highest level since 1998 (45,000,000), while at the same time an additional 1.3 million people have fallen below the poverty line [ 3 ]. Financial concerns have overwhelmed the American medical educational system and hospitals [ 4 , 5 ]. Medical outliers have become a significant problem. The US Congress has developed a system of payments through Medicare to protect hospitals from high-cost patient stays. Teaching hospitals and smaller public hospitals have a higher percentage of outliers. These hospitals are frequently subjected to patient "dumping" by larger, more powerful hospital systems trying to rid themselves of less lucrative clientele [ 6 ]. To make matters worse, teaching and public institutions may have their elective surgery schedules (normally profitable) impacted by the medical outlier problem [ 7 , 8 ]. Eric Cassell has pointed out that medical practice is now being shaped by commercial considerations; the issue of financing health care dominates all facets of medicine, including education, research, the relationships between physicians, and the relationships between physicians and their patients [ 9 ]. Nonetheless, "sick people are often cared for by physicians who, though burdened by the system in which they work, are dedicated to the sick and to medicine. Doctors who love their profession and who devote their lives to it are not rare" [ 9 ]. However, caring for medical outliers can be a burden for even the most dedicated physicians. In a previous debate an argument was made, using the works of Kant and Hegel, that physicians have an obligation to treat medical outliers [ 10 ]. Here an argument is made using Stoic thought in the pursuit of a corollary to support that maxim (obligatory care of medical outliers by physicians).
This corollary, that a physician's mental state (attitude) will determine his comfort in pursuing the above maxim, is supported by the Stoic view of determinism. Stoics will argue that becoming a physician is predetermined by Nature, or God, and, that having occurred, it must also be predetermined that a physician will care for medical outliers. Thus, the only thing physicians can alter, or control, is their attitude. Zeno, the founder of Stoicism, explains, "A man's excellence or virtue does not depend on his success in obtaining anything in the external world; it depends entirely on having the right mental attitude toward things" [ 11 ]. Stoicism teaches that the universe is rational, that it can be explained rationally, and organized rationally. Stoics taught that logos, the ability of humans to think, plan, and express themselves, was inherent in the cosmos. Therefore, logos is part of Nature, or God: "God and man are related to each other at the heart of their being rational agents. If a man fully recognizes the implications of this relationship, he will act in a manner which wholly accords with human rationality at its best, the excellence of which is guaranteed by its willing agreement with nature. This is what it is to be wise...." [ 12 ]. Stoics believed that a cosmic thread related every event to another and that such a premise allows a human to live a life at one with Nature, or God. In acquiring such an understanding a human could strike an accord between his or her attitudes, actions, and the course of events. The reason for having a good mental attitude and the understanding that a cosmic thread relates all actions and events, according to the Stoics, is that God rules all and nothing can occur unless God wills it. Accepting everything that happens to oneself will bring contentment: "So the good man will accept everything, knowing that it is not only unalterable, since Fate determines all, but also the work of God, the perfect being; namely that his happiness depends entirely upon himself, and not at the mercy of other persons or the play of outside forces. What brings happiness is to have the right attitude, to choose the right actions, to aim correctly at the mark" [ 11 ]. Therefore, to "try" is within a man's power, and good intentions are indeed enough according to Stoic philosophy. The ability to succeed is not necessarily within man's grasp, but his attitude is his own doing. If a person does his or her best and has no self-reproach for this effort, then he or she is one with Nature, or God. There are difficulties with accepting Stoic philosophy, which will be reviewed later. Nonetheless, the Stoic view of a physician's approach to the care of the medical outlier will be advocated as one that may allow a physician to be successful in outlier care. Success does not refer to monetary remuneration, but to an approach that will allow a physician mental comfort (a good attitude) in such an advocacy. Physicians must understand the concepts of good, the preferable, the morally indifferent, living consistently, appropriate actions, the cardinal virtues, and the passions to successfully apply the Stoic view.
Discussion
The good, the preferable, and the morally indifferent (in "right actions")
What is good? Any physician pondering this question may come up with answers such as a lucrative medical practice, minimal on-call days, or passing medical board examinations. These things are good, especially when compared to poverty, sickness, or unemployment.
However, the Stoics believed that what was "good" was also morally perfect (virtue, virtuous acts, and virtuous people). Virtue and virtuous things belonged in a league of their own. If you were virtuous, according to the Stoics, you were "good", therefore happy, and this was moral perfection. If you were virtuous you always did what was morally right. Things that are "bad" are morally imperfect (not virtuous); here we speak of evil and wickedness, not poverty, illness, or death. The Stoics' views of good and bad were extremes of perfection and imperfection. Beauty, wealth, a good job, and a good marriage were things that were preferable, but not morally "good". Illness, poverty, and death were less preferable, but not morally "bad". However, neither the preferable nor the less preferable was considered "good" or "evil"; they were considered morally indifferent: "Goodness, however, and knowledge, although they had value of a unique kind, could not be the only things to have value. Right action (author's italics) is a matter of choice concerned with morally indifferent things – will you look for wealth or accept poverty, marry or remain a bachelor, live or die? – and choice between absolutely different alternatives would not involve knowledge or reason.... Virtue then can consist in the effort to obtain these things that have value and avoid their contraries, and knowledge can be knowledge of what is to be preferred. But since things of this sort are not "good" or "bad", it is of no importance whether one has them or does not have them, as far as goodness is concerned. The good intention is enough; achievement may be impeded by forces outside a man's control" [ 11 ]. It can generally be agreed upon that caring for medical outliers, ideally, is a "right action" because we are dealing with sickness and economics, i.e., morally indifferent things that are not preferable. Many times caring for medical outliers is something we must do to keep our position, fulfill our contract, or avoid a lawsuit. However, if we speak of doing what is right in the virtuous, or morally perfect, sense, caring for medical outliers must be more than moral indifference. It must be an act of virtue, but can only be so if the physician is doing it out of the deepest sense of duty. Virtue in this case means having the right mental attitude toward outlier care and understanding that it is more than an ordinary good; it is an action that stands morally in a class of its own.
Living consistently
Philosophers in ancient Greece would inquire, "What is the goal of a perfect life?" [ 11 ]. The Stoic would answer, "living consistently". It means to live harmoniously because those who live in conflict are unhappy. Zeno explained that "the single plan by which life should be lived must be a plan formed by correct reason, and this would be one that is natural in the sense that it accords both with man's nature and with universal nature" [ 11 ]. In advocating the position of the Stoics it is very important to understand that they believed that man inherently considered the interests of his fellow humans important and accepted whatever difficulties divine providence placed upon him so that the wider plan of nature, or God, could be implemented. To be happy or content in the care of medical outliers a physician must have this consistency of life and an absence of conflict in this regard. To be able to understand this concept and accept it, the physician must be aware that he or she is a part of the whole.
For example, let us say a flower flourishes in a garden; we understand what "to flourish" means. Also, let us say we observe a group of birds; some are healthy and some are not. Therefore, we know what the natural condition, or norm, for a flower and a bird should be. This norm is good. So each thing, flower or bird, has a good that is a universal. There is a universal condition or nature that is appropriate for each thing: "Eating hay is natural to horses, but not to men. It accords with universal nature that horses should eat hay and that men should speak a language. But the former is inappropriate to men and the latter to horses. Universal Nature sanctions a norm for particular things – the nature of plants, animals and men – by reference to which they can be said to attain or not to attain their individual ends" [ 12 ]. Thus we can understand what is meant by "the part". Man is born and man will die. Man fights the Universal Nature of death, a particular "norm" of humans. In struggling against his role, or his "part" in nature, man may do extraordinary things to keep himself healthy or alive. A man may decide to preserve his home or his child's education by not having (paying for) health insurance. A physician, for example, may decide that a patient needs an organ transplant. The physician will engage this struggle whether or not the patient has health insurance, and the end result may make the patient a medical outlier, physiologically and/or economically. Universal Nature may sanction a norm for particular things, but humans, and especially physicians, often struggle against their role as a "part" and come into conflict with nature. Each flower, bird, person, or physician is a part of Nature's whole. Contrary events may happen to an individual "part". Birds are hunted and eaten, flowers are cut and put into a vase, and severely ill humans may end up on long-term ventilatory support. The Stoics believed that such events are part of nature's order. Such a view may be contrary to a human observing the whole of nature. However, if the human perspective is removed from an event, then, "From the perspective of the whole even such conditions are not unnatural, because all natural events contribute to the universal well-being" [ 11 ]. The Stoics combine their views of the part and the whole. They view the whole as perfect, and the nature of perfection requires inequities and incompatibilities; nothing that happens to a human is disadvantageous to him or her, nor is it a disadvantage to nature. Nature is perfect, so according to Stoicism, suffering does not occur for its own sake, but "it is necessary to the economy of the whole" [ 12 ]. For physicians to live consistently they must understand their place in nature. Stoics would explain to today's physicians that their contentment in dealing with medical outliers depends on this understanding of nature. If this is understood and accepted there will be no conflict and "living consistently" will be attainable.
Appropriate actions
Most physicians hope to make the right, or moral, decisions in regard to their actions. Caring for medical outliers is a choice that has to be made by many physicians. Stoics believed it was always appropriate to act virtuously, but acting virtuously can only occur when a man is "perfectly good", and since men are not "perfectly good", physicians cannot act virtuously, that is, be morally perfect.
However, Stoicism has made a niche for appropriate actions: "to act virtuously is always morally good, and to act faultily is always bad, to act appropriately is not in itself either good or bad in the sense of being morally "good" or "bad"" [ 11 ]. Even though physicians as humans are not perfect, and thus cannot act virtuously, nonetheless they can make appropriate decisions, take appropriate actions, and "do the right thing". A physician may care for a medical outlier, but it is not necessarily a morally good action if he or she does it without the complete understanding of why it is the right thing to do. In other words, taking care of a medical outlier is a just action, and thus an appropriately good action, but only if the physician is doing it without duress and without being mandated to attend to the patient (such as being on-call, by contract, or even being shamed into doing it).
Cardinal virtues
Irrational forces plague a man's mind, according to Plato, and these forces had to be controlled before a man could have the knowledge needed to act virtuously. The Stoics, however, did not feel this was true. They felt that if a man could be trained to think correctly, then he could learn to act virtuously. Zeno went on to define four cardinal virtues that were necessary for a man to acquire to be successfully trained to think correctly so that he could act virtuously. He used Plato's work as a basis and he defined the four virtues in terms of the fourth virtue, wisdom. Justice was wisdom concerned with distribution. Temperance (self-control) was wisdom concerned with acquisition. Bravery was wisdom concerned with endurance. Wisdom was defined as "knowledge of what should and should not be done, or knowledge of what is good or bad or neither" [ 11 ]. Medical outliers require justice. Physicians must be able to distribute their actions (medical practice) fairly and equitably to all those who are in need of such services. Temperance, or self-control, must be learned or acquired. Physicians have to control their emotions when assigned to a medical outlier, when their patients turn into medical outliers, or when seeing other parties' responses to medical outliers. Physicians should not allow anger, frustration, anxiety, or fear to overtake them. Enduring the endless days, weeks, or months of caring for a medical outlier certainly requires bravery and stamina. Fielding the unending phone calls and the constant re-tuning of a patient's hemodynamic status can be of marathon proportions. Wisdom, knowledge of what should or should not be done and what is good or bad, must not only apply to the type and amount of medical care but should also encompass the virtue of the acts of justice, compassion, and care given to the medical outlier.
The passions
Physicians need emotion. The Stoics did not disagree, but they wished to eliminate passion (pathos), or what many of them called a mental disturbance. The Stoic "passion" is defined as an excessive uncontrollable drive due to an overestimation of the worth of the "indifferent" things (or events) mentioned previously. Nonetheless, Stoics taught that to have great affection was indeed desirable, but at the same time one should remain passionless. Animals are driven to an action because of a stimulus, but in man such a stimulus (or impulse) requires the mind to accede to the stimulus. The Stoics found this to be important because they felt it to be a point of distinction between humans and animals.
To the Stoics, all living animals were compelled to respond to stimuli by their psyche, a mix of fire and air that was responsible for the functions of living animals (they held that the psyche was not immaterial and could be physically damaged). There are times, however, when a man's mind is out of control and his passions become excessive [ 11 ]. There were four kinds of passion the Stoics recognized: fear, lust, mental pain, and mental pleasure (as opposed to physical pleasure). The passions were explained by F.H. Sandbach: "Fear is a contraction of the psyche caused by the belief that something bad is impending. It causes paleness, shivering, and thumping of the heart. But the belief is false: what is feared is not what a Stoic calls "bad", but one of the morally indifferent things, e.g. death, pain, ill-repute. Fear is the result of exaggerating their importance, of believing they will bring real harm, whereas they do not affect man's essential moral being and if they come are to be accepted as part of the great plan of nature. Lust is a longing for something believed to be good, but again is falsely so believed, since the supposed good is morally indifferent. Mental pain is a contraction of the psyche resulting from the belief, again erroneous, that something bad is present.... Pleasure was defined as an irrational expansion of the psyche caused by the supposed presence of something good.... What is thought to be good is not in fact good, but at the best acceptable" [ 11 ]. In many instances the passions do come into play when a physician cares for a medical outlier. Fear affects the physician from several perspectives. Physicians fear adverse outlier outcomes, not only for the patient's sake and that of the family, but also out of concern for potential litigation, non-reimbursement for services rendered, and long hours incurred in the care of the patient. In regard to lust (desire), something believed to be good, but falsely so, several points can be made. The Stoics spoke of many types, or species, of lust, anger being foremost among them. This species of lust is very appropriate to discuss in regard to care of the medical outlier. Physicians do get angry and occasionally act out when challenged. Outliers involve a large investment of emotion, time, and a potential loss of income on the part of the physician (he or she could be caring for patients who are less involved and whose medical insurance has not expired). Also, hostility toward staff for small deviations from the plan of care may occur more frequently than the staff would like. As the patient's course of illness drags on, physicians may feel anger toward the patient and the family (even though it may be well concealed). Such anger occurs because the patient does not improve or improves too slowly. There may also be anger toward the family for asking too many questions or questioning the plan of care the physician is following. In addition, the family may also want other physicians consulted or more time from their current physician. What is, in fact, happening is that the family is merely trying to exert control over what little they can yet control. Mental pain or anguish is self-evident in medical outlier care. The previous passions of fear and lust (the species of anger) contribute to the burden of mental pain. The species of mental pain that Stoics address include grief and pity, two potentially powerful distorters of judgment for physicians.
Mental pleasure in caring for those who are seriously and/or chronically ill is not as self-evident. Again we do not speak here of physical pleasure. The species of mental pleasure include "pleasure at unexpected 'benefits', pleasure at other people's misfortunes, pleasures caused by deceit and magic" [ 11 ]. Physicians do not take pleasure in the misfortunes of their patients, but there may be interplay of this element when dealing with their colleagues in regard to medical outliers. There are times when various physicians have differing views as to what should be done in the course of a patient's care. When one physician is "wrong" and another is found to be "right" concerning a particular decision, procedure, diagnosis, or course of therapy, there are instances of gloating, or taking pleasure in a colleague's fall, error, or misperception. There is no doubt that medical outlier care evokes passions. There is little for a physician to do but his or her best in regard to giving health care. Much is out of the physician's control, and therefore much of Stoic thought is applicable: control the passions, keep a good attitude, have an open mind about plans of care, have an open mind as to who can participate in decisions (patient, family, other health care providers), understand what is a reasonable outcome, and remember that it does not matter who gets credit for good outcomes [ 13 ].
Flaws in Stoic thought
If all human events and actions are predetermined, how are human freedoms and free will to be addressed? Universal causation is the bedrock of Stoic philosophy. If human attitudes and beliefs are within an individual's power or sphere of influence, is this truly congruent with Stoic determinism? Robert L. Arrington illustrates the human attitude towards sickness as a foible in Stoic thought [ 14 ]. Illness can be a misfortune or an "indifference". The Stoics seem to hint that we should see illness as either an "indifference" or a misfortune and then choose. If we apply universal causation in this matter there must be a cause for us to view illness one way or another. Arrington's interpretation of this dilemma in Stoic philosophy is illuminating: "And if the causes that exist prior to our forming the attitude lead us to perceive the illness as misfortune, it is not possible for us to perceive it as a matter of indifference. If, on the contrary, the causes lead us to assume the attitude of indifference, then it becomes impossible for us to see the illness as misfortune. Either one of the sets of courses or the other must exist, from which it follows that it is either impossible for us to feel misfortune or impossible for us to feel indifference. If one of these options is impossible, the attitude we take is necessary in which case we really didn't have any options at all. And without options or choices, there is no such thing as freedom or voluntary behavior. And, so it seems, our attitudes and beliefs are not in our power" [ 14 ]. This argument regarding whether universal causation and determinism is consistent with free will has been debated for over 20 centuries. Today there are philosophers on both sides of the issue. Another flaw is the Stoic approach to evil. Stoics simply tell us it does not exist; events may seem evil, but they are not. Stoics teach that only the human perspective allows the interpretation that evil exists. Religions of the world, many philosophers, and people who have viewed and/or endured suffering cannot agree with the Stoics.
A further distortion in Stoic thought involves the idea that the life of virtue is the only "good" life. What about the "preferred" things that we as humans know make our lives better? What is wrong with "attaining the goals of impulse" [ 14 ]? There was a gradual progression in the evolution of later Stoic philosophy to allow the acceptance of the "preferable" things, and this erosion of principle led to many attacks on Stoicism from other philosophical quarters. And, finally, the Stoics felt the universe was rational and in unity. A divine thread ran through the cosmos connecting everything and everybody. Many philosophers cannot accept this concept. However, as we see the progression of this line of reasoning as it regards the study of the "string" theory in physics and the further work and modification of Einstein's views of relativity, we realize that there may be a mathematical basis to existence. The Stoics may be criticized about their "thread" through the cosmos, but when we discuss how time "bends" and describe gravity as "curved space," the critics of Stoicism may be tightrope-walking this same thread. The health care rendered to medical outliers can involve a substantial investment of time and cost on the part of the medical practitioner. The intensive interaction with the patient, his or her family, consultants, nurses, other ancillary staff, and the institution can sap the performance of involved physicians. Placing the argument for the predetermination of events aside, if one is a physician involved in a hospital-based specialty, critical care medicine, or surgery, the fact of the matter is that medical outliers will come to your door. By fate or by choice, physicians in the above-mentioned areas will be engaged with outliers. As the Stoics point out, the future is coming at you and there is nothing you can do about it, except adjust your attitude. While engaged in such an endeavor physicians will "try" to do their best, hopefully without self-reproach as to their efforts. Physicians, as humans, cannot be "good", or morally perfect. They become tired, hungry, worry about the bills, their children, their practice, hospital policies, etc. Nonetheless, most physicians realize and understand what appropriate actions are necessary in regard to the most ill and poorest of patients. Also, though physicians may not be morally perfect (virtuous in the Stoic sense), they know what is "preferable" for their patients, i.e., to get well, go home, and be with their families. These are morally "indifferent" things, but as Stoics point out, "virtue then can consist in the effort to obtain these things that have value and avoid their contraries, and knowledge can be knowledge of what is to be preferred" [ 11 ]. As mentioned previously, physicians frequently struggle against their role as a "part". By the mere fact that physicians acknowledge struggles with or against insurers, patients, families, colleagues, ancillary staff, and institutions, there is realization that they are part of a "whole"; at the same time, to "live consistently" requires an understanding of one's role in nature and the need for absence of conflict. In the struggle to help the critically ill, the chronically ill, or the incredibly poor, avoiding conflict is challenging. The cardinal virtues of justice, temperance, bravery, and wisdom come with upbringing, education, and life experiences. Lacking the proper intellectual or nurturing environment will not allow the flourishing of these virtues in an individual.
In schools of medicine the faculty attempt to inculcate these virtues in the students, although they are not always successful. The Accreditation Council for Graduate Medical Education (ACGME) has recently mandated six core competencies for resident education [ 15 ]. One of these competencies is Professionalism (and ethics). This, in effect, formalizes ethics education at the graduate level (residency). Such a graduate-level discourse should be preceded by a problem-based learning format at the medical school level, preferably before the students begin their clinical work. Thus, the Stoic view, or any other philosophical view or ethical concept, could be taught pre-clinically and then reinforced in an ACGME residency format. Justice is a far-reaching concept that more physicians need to embrace. It is difficult to teach and is best acquired through experience of its antithesis. Temperance, or self-control, can be mandated by medical staff guidelines and licensing boards, but this virtue actually needs to have been ingrained before medical school. Bravery is something that residency training seeds in a physician through the frequent facing of dying or hostile patients that come through a medical practice, especially at an academic institution that cares for the disenfranchised. Wisdom comes only with time and maturity; both are needed to acquire it. Physicians, medical students, and residents can, indeed, be taught to think correctly, as the Stoics emphasized. However, individuals will have varying degrees of success depending on their rearing/nurturing and educational environment. Emotion is necessary to physicians, but the passions, those uncontrollable mental disturbances due to overestimation of the value of "indifferent" things, may blind them in their judgment and decision-making. It is important that, when making important decisions in regard to medical outliers, the passions be "checked". To avoid frustration, disappointment and unhappiness in the practice of medicine as it regards medical outliers, physicians must do two things: (1) control things that are within their power (attitudes, desires, beliefs), and (2) be indifferent to the things that they cannot control (things external to themselves) [ 16 ]. Even though Stoicism has evoked controversy for over twenty centuries, it is relevant to a physician who must juggle patients, procedures, therapies, and colleagues in the care of a patient who has maximally taxed medical insurers, institutions, other practitioners, and their own families. Summary Insurers and institutions may have financial burdens, but those providing patient care, especially physicians, bear a disproportionate slice of the mental anguish associated with the care of medical outliers. Here an argument has been made that applying the philosophical tenets of Stoicism to the physician's intellectual pursuit of how to deal mentally with the care of medical outliers is appropriate. Physicians that are hospital-based and those practicing critical care medicine may well be the providers most emotionally challenged by outliers. A Stoic approach to such a social and psychological burden may be helpful. Competing interests The author(s) declare that they have no competing interests. Author's contributions TJP is responsible for the manuscript in its entirety. Pre-publication history The pre-publication history for this paper can be accessed here: | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC539259.xml
554089 | Locally advanced duodenal gangliocytic paraganglioma treated with adjuvant radiation therapy: case report and review of the literature | Background Gangliocytic paragangliomas are rare neoplasms that predominantly arise in the periampullary region. Though considered benign, the disease can spread to regional lymphatics. Case presentation A 49-year-old woman presented with melena and was found to have a periampullary mass. Endoscopic evaluation and biopsy demonstrated a periampullary paraganglioma. The tumor was resected with pylorus-preserving pancreaticoduodenectomy and was found to represent a gangliocytic paraganglioma associated with nodal metastases. In a controversial decision, the patient was treated with adjuvant external beam radiation therapy. She is alive and well one year following resection. The authors have reviewed the current literature pertaining to this entity and have discussed the biologic behavior of the tumor as well as the rationale for treatment strategies employed. Conclusion Paraganglioma is a rare tumor that typically resides in the gastrointestinal tract and demonstrates low malignant potential. Due to the rarity of the disease, there is no consensus on adjuvant treatment, even though nearly 5% of the lesions demonstrate malignant potential. | Background Gangliocytic paragangliomas are unusual neoplasms that may be identified anywhere within the gastrointestinal tract, but predominate in the periampullary region. This entity was first discussed by Dahl et al in 1957, and subsequently reported by Taylor and Helwig in 1962 [ 1 , 2 ]. Kepes and Zacharias named the tumor and described its characteristics in 1971 [ 3 ]. The pathognomonic feature of these neoplasms is the identification of three distinct cellular elements: spindle cells, epithelial cells, and ganglion cells. These tumors are considered benign, yet occasionally metastasize to regional lymph nodes, as well as to distant organs [ 4 ]. Long-term survival is common with appropriate resection. We report a case of a 49-year-old female who presented with melena and was found to have a periampullary gangliocytic duodenal paraganglioma. The details of the clinical presentation, histopathological findings, and therapeutic choices are provided. Case presentation A 49-year-old woman presented with a 6-month history of melena. During the preliminary consultation, she also complained of right upper quadrant pain, which radiated to her right lower quadrant and upper back. She underwent upper gastrointestinal (GI) endoscopy, which demonstrated a 3 cm, ulcerated, ampullary mass (figure 1 ). Endoscopic biopsies suggested the diagnosis of paraganglioma. The lesion did not obstruct the ampullary orifice. Both computed tomography (CT) and magnetic resonance imaging (MRI) of the abdomen failed to demonstrate this lesion or any additional abnormalities. Figure 1 Endoscopic photograph of the paraganglioma demonstrating an exophytic tumor whose edges enfold around a central area of necrosis that is actively bleeding. The patient underwent pylorus-preserving pancreaticoduodenectomy and lymph node dissection. The postoperative course was uneventful except for delayed gastric emptying, diagnosed by frequent vomiting during postoperative days 7 to 10 and confirmed by Gastrografin swallow. These symptoms eventually resolved with conservative management and the patient was discharged from the hospital on the 15th postoperative day.
Histopathological analysis Gross pathological evaluation of the resected specimen included a portion of duodenum with ampulla, measuring 16 cm in length, and a portion of pancreatic head measuring 5 cm in length. There was a polypoid periampullary mass protruding into the duodenal lumen, measuring 1.4 × 1.2 × 0.7 cm. On cut section, the mass was moderately firm and pink-tan. It was circumscribed but unencapsulated, and appeared to be covered by normal-appearing duodenal mucosa. There was no evidence of pancreatic invasion by the tumor. A total of seven lymph nodes were also removed, 5 peripancreatic (3.0 cm in greatest dimension) and 2 periduodenal (1.2 cm and 1.0 cm in greatest dimension, respectively). Histologically, the tumor consisted of a complex neoplastic proliferation that included a component resembling carcinoid or islet cell tumor (figure 2 ), admixed with a proliferation of spindled neurofibrillary cells and larger polygonal cells demonstrating gangliocytic differentiation (figures 3 and 4 ). There were areas of stromal hyalinization resembling amyloid, with focal calcification (figure 5 ). Congo red and Thioflavin T stains were negative for amyloid. The tumor extended through the muscularis propria and along the common bile duct, but did not invade the pancreas. The resection margins were free of tumor. Metastatic paraganglioma was present in 6 of 7 periduodenal and peripancreatic lymph nodes. The metastatic lymph nodes showed the same mixed histologic features as the primary tumor. Figure 2 Photomicrograph showing areas of the tumor that had an epithelial pattern resembling carcinoid tumor or islet cell tumor. These cells were cohesive, uniform in size and shape with small round uniform nuclei, and formed rosettes, cribriform structures, solid nests and trabecular cords (Hematoxylin and Eosin, original magnification ×100). Figure 3 Photomicrograph showing areas of the tumor that consisted of a proliferation of spindled neurofibrillary cells with admixed larger polygonal ganglionic cells (Hematoxylin and Eosin, original magnification ×100). Figure 4 Photomicrograph showing the ganglionic cells having large round nuclei with chromatin clearing and large central nucleoli. The cytoplasm is abundant and amphophilic staining. The ganglionic cells are admixed with the spindled cells (Hematoxylin and Eosin, original magnification ×400). Figure 5 Photomicrograph showing the focal areas of stromal hyalinization with calcification. Special stains for amyloid were negative (Hematoxylin and Eosin, original magnification ×200). Immunohistochemical analysis demonstrated that the tumor stained positively for S-100, chromogranin, synaptophysin, and cytokeratins AE1 and AE3. No reactivity was observed with MART-1 or HMB-45. Staining for c-kit (CD 117) was performed on sections of the primary tumor and of one of the lymph nodes with metastatic tumor. The carcinoid-like epithelial cells and the spindle-shaped neurofibrillar cells stained negatively for c-kit. The gangliocytic cells stained strongly positive for c-kit. Adjuvant therapy Due to the evidence of regional lymph node metastasis, the patient was counseled regarding adjuvant therapeutic options. The treating physicians queried recognized experts in the field of radiation therapy for gastrointestinal malignancies via email, and a consensus developed that external beam radiation therapy might be reasonable, although it was admitted that no data are available regarding the use of this modality for this disease entity.
Ultimately, a decision was made to administer external beam radiotherapy to the abdomen in an effort to eradicate any possible residual disease not removed during surgery and to reduce the risk of locally recurrent disease. No chemotherapy was advised due to the rarity of distant metastases and the lack of response of these neoplasms to conventional systemic therapy. The patient was treated with intensity-modulated radiotherapy in 28 fractions of 180 cGy/fraction over 37 elapsed days; 6 MV photon beam energy was used. The target was the postoperative tumor bed with a 5–10 mm margin. The total dose was 5,040 cGy. She tolerated treatment well and is now symptom-free more than one year following resection. Surveillance CT scans and endoscopy have been performed, both of which reveal no evidence of recurrent disease. Discussion Gangliocytic paraganglioma is a rare, typically benign tumor of the gastrointestinal tract most commonly located in the second portion of the duodenum, with a few cases having involved the jejunum and pylorus [ 5 - 7 ]. Burke et al reported that there seems to be a slight male predominance and an average age of 54 at presentation [ 5 ]. Other authors have denied that there is any gender preference [ 8 ]. This lesion usually presents with abdominal pain and gastrointestinal bleeding due to mucosal ulceration. Obstructive jaundice is less common [ 9 ]. Histologically, our patient's tumor demonstrated the characteristic tricellular pattern of gangliocytic paragangliomas. These tumors are typically composed of an admixture of ganglion cells, spindle cells and epithelial cells [ 5 , 10 - 13 ]. These tumors are submucosal, and rarely recur or metastasize [ 14 - 16 ]. In most reported cases of regional lymph node involvement, the metastatic cells consist predominantly of epithelial cells [ 15 ]. A distinctive element of the case we present is that six of 7 regional lymph node metastases contained all three characteristic cell types, and thus demonstrated the possibility for each of these cell types to acquire malignant potential. Immunohistochemically, these tumors stain positive for a variety of markers, as was demonstrated in this report. Such markers include those mentioned above as well as neuron-specific enolase, pancreatic polypeptide, somatostatin, myelin basic protein and neurofilament proteins [ 6 , 12 , 13 , 17 ]. The origin of gangliocytic paragangliomas has been widely debated and includes hypotheses ranging from a hamartomatous derivation to cellular elements arising from pancreatic neuroendocrine tissue, or that of the retroperitoneal celiac sympathetic or parasympathetic plexuses [ 4 , 12 ]. There are no data in the literature to guide clinicians on the use of adjuvant therapy, despite the fact that approximately 5% of cases demonstrate malignant features [ 4 ]. Since this patient had multiple positive lymph nodes and is relatively young, a trial of adjuvant radiotherapy to the operative bed was considered reasonable and was endorsed by radiation oncologists at high-volume cancer centers queried via email. Conclusion Gangliocytic paraganglioma is a rare duodenal tumor that can present with non-specific symptoms. A definitive diagnosis can be made histologically by observing the three characteristic cell types. Although this tumor is considered benign, the possibility exists for regional lymph nodal spread. Due to the rarity of the disease, no clear adjuvant treatment strategy has been determined for cases that demonstrate regional or distant metastasis.
Competing interests The authors declare that they have no competing interests. Authors' contributions AW wrote the original manuscript. AM performed the surgical resection and prepared requested revisions of the manuscript. JM performed the histopathological evaluation of the lesion and prepared the photomicrographs. CT administered radiation therapy to the patient, made editorial suggestions, and supervised AW. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC554089.xml
529274 | Image fusion for dynamic contrast enhanced magnetic resonance imaging | Background Multivariate imaging techniques such as dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) have been shown to provide valuable information for medical diagnosis. Even though these techniques provide new information, integrating and evaluating the much wider range of information is a challenging task for the human observer. This task may be assisted with the use of image fusion algorithms. Methods In this paper, image fusion based on Kernel Principal Component Analysis (KPCA) is proposed for the first time. It is demonstrated that a priori knowledge about the data domain can be easily incorporated into the parametrisation of the KPCA, leading to task-oriented visualisations of the multivariate data. The results of the fusion process are compared with those of the well-known and established standard linear Principal Component Analysis (PCA) by means of temporal sequences of 3D MRI volumes from six patients who took part in a breast cancer screening study. Results The PCA and KPCA algorithms are able to integrate information from a sequence of MRI volumes into informative gray value or colour images. By incorporating a priori knowledge, the fusion process can be automated and optimised in order to visualise suspicious lesions with high contrast to normal tissue. Conclusion Our machine learning based image fusion approach maps the full signal space of a temporal DCE-MRI sequence to a single meaningful visualisation with good tissue/lesion contrast and thus supports the radiologist during manual image evaluation. | Background In recent years, multivariate imaging techniques have become an important source of information to aid diagnosis in many medical fields. One example is the dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) technique [ 1 , 2 ]. After the administration of a gadolinium-based contrast agent, a sequence of d 3D MRI volumes is recorded from a certain part of the body (see Fig. 1 ). Thus, each spatial coordinate p = ( x , y , z ) in the volume can be associated with a temporal kinetic pattern vector $x_p \in \mathbb{R}^d$ which is regarded as a point in a signal space $\mathcal{X} \subseteq \mathbb{R}^d$ (see Fig. 2 ). The examination of these temporal kinetic patterns at different spatial coordinates in the volume allows the observer to infer information about local tissue types and states (see Fig. 3 ) [ 3 ]. Figure 1 Visualisation of contrast agent concentration as gray value images of the same volume slice at different points of time (Left to right: first precontrast, first postcontrast and fifth postcontrast image). The lesion is located near the centre of the right breast. Figure 2 Alternative view on a temporal sequence of d 3D MRI volumes: Each spatial coordinate p in a 3D volume can be associated with a d -dimensional temporal kinetic vector x p consisting of measurements of the local intensity at d points of time. Figure 3 Illustration of temporal kinetic patterns of contrast uptake for normal, benign and malignant tissue (left to right) measured during DCE-MRI with two precontrast and five postcontrast recordings. Especially the strong signal uptake between the two precontrast measurements and the first postcontrast measurement indicates suspicious tissue. Today, much effort is spent on enhancing the capabilities of the imaging techniques, e.g. increasing the spatial and temporal resolution. In contrast to these improvements in image acquisition, much less effort has been spent on effective visualisation methods.
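As a concrete illustration of this representation, the following minimal Python sketch (array names and sizes are illustrative assumptions, not taken from the paper) reshapes a temporal DCE-MRI sequence into a pattern matrix whose rows are the kinetic vectors x_p, and maps a fused scalar per pattern back onto the spatial grid:

```python
import numpy as np

# Assumed layout: a sequence of d 3D volumes stacked along the first axis.
d, nx, ny, nz = 7, 256, 128, 64
volumes = np.random.rand(d, nx, ny, nz)   # placeholder for real MRI data

# Each spatial coordinate p = (x, y, z) yields a kinetic pattern x_p in R^d:
# move time to the last axis, then flatten space so rows are patterns.
patterns = volumes.transpose(1, 2, 3, 0).reshape(-1, d)   # shape (N, d)

# After fusion maps each pattern to a scalar, restore the spatial order.
fused = patterns.mean(axis=1)             # stand-in for a real projection
fused_volume = fused.reshape(nx, ny, nz)  # gray value image volume
```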
Even though several approaches for detection and classification of suspicious lesions in DCE-MRI data of the breast have been proposed (e.g. [ 4 - 8 ]), it is still common practice for the huge amount of data to be analysed manually using simple operations such as subtraction images of two volumes. Obviously, these images can only comprise a small fraction of the information which is commonly spread over all volumes of the sequences. As a consequence, analysing multivariate images in radiology remains a time-consuming and challenging task which potentially can be alleviated by the application of image fusion techniques. Image fusion Image fusion methods have been an area of research for several decades. According to Genderen & Pohl [ 9 , 10 ], image fusion 'is the combination of two or more different images to form a new image by using a certain algorithm', e.g. integration of a large number of multivariate images from a remote sensing process into one image. Because Genderen & Pohl already stated PCA as a standard technique for image fusion in remote sensing, we adopt the more general definition of the term image fusion from the remote sensing community. Whereas in the medical imaging community the meaning of the term image fusion is commonly restricted to fusion of multimodal images, the definition of this term used in this article also includes multivariate images such as multispectral or multitemporal images. Pattern recognition methods such as artificial neural networks (ANN) have gained much attention from the remote sensing community [ 11 - 15 ]. From the point of view of pattern recognition, the problem of image fusion is strongly related to the task of dimension reduction: Ignoring the spatial order of the patterns x, the image data is an unordered set of patterns that forms a data distribution in the data space $\mathcal{X}$, and image fusion or dimension reduction corresponds to a mapping to a new low-dimensional space which retains certain properties of the original data distribution. Subsequently, the mapped patterns can be spatially ordered according to the locations p of the corresponding sources, leading to the final fused images. Well-known algorithms such as Principal Component Analysis (PCA) [ 16 ] or Self Organising Maps [ 17 ] have been successfully applied for various tasks of multispectral or multitemporal image fusion [ 11 - 15 ]. It is important to note that these methods are not subject to limitations on the dimensionality of $\mathcal{X}$. Hence, they are especially suited if $\mathcal{X}$ is high-dimensional. In this work, we investigate the application of machine learning algorithms to medical image fusion. We compare the results of the standard linear PCA with its nonlinear extension, the so-called Kernel PCA (KPCA) which was proposed by Schölkopf et al. in 1998 [ 18 ]. Our empirical observations are presented and discussed by means of DCE-MRI data sets from a breast cancer screening study [ 19 ]. Image material presented in this paper is also provided online in original size (PNG format) [ 20 ]. Methods In the following, we briefly describe the theoretical background of the linear PCA and nonlinear KPCA algorithms and their application to the task of image fusion. Both methods determine a set of projection directions, referred to as principal directions (PDs), by optimising a certain criterion. The mapping M is defined by a subset of all possible PDs.
Projecting each pattern $x_p$ onto one of these PDs associates each spatial position p with a new scalar value (the principal component) of $x_p$ which integrates information from the different components of $x_p$. The resulting 3D image can be visualised as a gray value image or using perceptually optimised colour scales [ 21 , 22 ]. Alternatively, the low dimensional representation of the patterns can be displayed as RGB composite images, if M is defined by a set of three PDs. Principal component analysis Principal Component Analysis is one of the most frequently used dimension reduction methods. Suppose the data are given by the set $\Gamma = \{x_i\}$, $x_i \in \mathbb{R}^d$, $1 \le i \le N$. PCA is a transformation into a new coordinate system of uncorrelated and orthogonal principal axes $\xi \in \mathbb{R}^d$, $|\xi| = 1$, which can be derived from the eigenvectors of the covariance matrix $C = \frac{1}{N}\sum_{i=1}^{N} x_i x_i^\top$ (for centred data) by solving the eigenvalue equation $\lambda \xi = C \xi$ (2) for $\lambda \ge 0$ and $\xi \in \mathbb{R}^d \setminus \{0\}$. The first eigenvector $\xi_1$ (the one with the largest eigenvalue $\lambda_1$) maximises the variance $\operatorname{var}(\langle \xi, x \rangle)$. Therefore, the set of the first $n \le d$ eigenvectors or PDs carries more variance than any other n orthogonal projections. Kernel principal component analysis In recent years, kernel based methods have been the object of much research effort within the machine learning community. The concept of a subset of kernel methods is based on the combination of well-known linear algorithms such as Principal Component Analysis or Fisher Discriminant Analysis with nonlinear kernel functions [ 23 , 24 ]. While the application of these functions allows more powerful nonlinear solutions, the kernelised algorithms retain most properties of their linear versions. Consider a nonlinear function $\Phi : \mathcal{X} \to \mathcal{F}$ which maps the examples $x \in \Gamma$ to some feature space $\mathcal{F}$ [ 25 ]. Furthermore, assume that the mapped data are centred in $\mathcal{F}$. In order to perform the PCA in $\mathcal{F}$, one has to find the eigenvectors $\xi$ of the covariance matrix $\bar{C} = \frac{1}{N}\sum_{i=1}^{N} \Phi(x_i)\Phi(x_i)^\top$, i.e. those vectors that satisfy $\lambda \xi = \bar{C} \xi$ (3) with $\xi \in \mathcal{F} \setminus \{0\}$ and $\lambda \ge 0$. Substituting (3), it is easy to see that the eigenvectors $\xi$ lie in the span of $\Phi(x_1), \ldots, \Phi(x_N)$. Therefore, Schölkopf et al. [ 26 ] define the equivalent eigenvalue problem $N\lambda \alpha = K \alpha$ (4) where $\alpha$ denotes the column vector of coefficients $\alpha^{(1)}, \ldots, \alpha^{(N)}$ describing the dual form of the eigenvector by $\xi = \sum_{i=1}^{N} \alpha^{(i)} \Phi(x_i)$ (5) and K is the symmetric Gram matrix with elements $K_{ij} = K(x_i, x_j) = \langle \Phi(x_i), \Phi(x_j) \rangle$ (6). Normalising $\alpha_k$ corresponding to the k-th eigenvalue $\lambda_k$ of K ensures $\lambda_k \langle \alpha_k, \alpha_k \rangle = 1$. Now, principal components can be extracted in $\mathcal{F}$ by projecting an example x on $\xi_k$ using $\langle \xi_k, \Phi(x) \rangle = \sum_{i=1}^{N} \alpha_k^{(i)} K(x_i, x)$ (7). It is crucial to note that for extracting principal components using (4) and (7) only the inner products $\langle \Phi(x_i), \Phi(x_j) \rangle$ are needed rather than the explicit images $\Phi(x_i)$ themselves. Hence, one can use kernel functions fulfilling Mercer's Theorem such as the Gaussian Kernel $K(x_i, x_j) = \exp\left(-\|x_i - x_j\|^2 / (2\sigma^2)\right)$ (8) with bandwidth parameter $\sigma$, or the Polynomial Kernel of degree d, $K(x_i, x_j) = \langle x_i, x_j \rangle^d$ (9), which allow the PCA in the corresponding $\mathcal{F}$ to be performed implicitly with reasonable computational costs. For the Polynomial Kernel we have a clear interpretation of KPCA. In this case, $\mathcal{F}$ is the space of all monomials of degree d of the pattern components. Thus, KPCA is a linear PCA of the corresponding high order statistical features.
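As a hedged, simplified sketch of both projections (not the authors' implementation), the following Python code solves the covariance eigenproblem of eq. (2) and the dual problem of eq. (4) with the Gaussian kernel of eq. (8); it uses the standard Gram-matrix centring summarised in the algorithm below and, for brevity, omits the centring of test-point kernels:

```python
import numpy as np

def pca_directions(X, n_components=3):
    """Linear PDs from the covariance eigenproblem, eq. (2)."""
    Xc = X - X.mean(axis=0)                  # centre the patterns
    C = Xc.T @ Xc / Xc.shape[0]              # d x d covariance matrix
    lam, xi = np.linalg.eigh(C)              # solves lambda * xi = C * xi
    order = np.argsort(lam)[::-1][:n_components]
    return lam[order], xi[:, order]          # eigenvalues, unit-norm PDs

def gaussian_gram(A, B, sigma):
    """Gram matrix of the Gaussian kernel, eq. (8)."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def kpca_fit(X, sigma, n_components=3):
    """Dual coefficients alpha_k from eq. (4), normalised so that
    lambda_k * <alpha_k, alpha_k> = 1."""
    N = X.shape[0]
    K = gaussian_gram(X, X, sigma)
    one = np.ones((N, N)) / N
    Kc = K - one @ K - K @ one + one @ K @ one   # centre the data in F
    lam, alpha = np.linalg.eigh(Kc)
    order = np.argsort(lam)[::-1][:n_components]
    lam, alpha = lam[order], alpha[:, order]
    return alpha / np.sqrt(np.maximum(lam, 1e-12))

def kpca_project(Z, X, alpha, sigma):
    """Principal components of patterns Z via eq. (7)."""
    return gaussian_gram(Z, X, sigma) @ alpha

X = np.random.rand(500, 7)                   # placeholder kinetic patterns
lam, pds = pca_directions(X)
pcs_linear = (X - X.mean(axis=0)) @ pds      # linear principal components
alpha = kpca_fit(X, sigma=0.5)
pcs_kernel = kpca_project(X, X, alpha, 0.5)  # kernel principal components
```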
The KPCA algorithm can be summarised as follows: 1. Calculate the Gram matrix K of $\Gamma$ using a suitably parameterised kernel function. 2. Transform K according to $\tilde{K} = K - 1_N K - K 1_N + 1_N K 1_N$, where $1_N$ denotes the $N \times N$ matrix with all entries equal to $1/N$. This transformation implicitly moves the centre of mass of the mapped data $\{\Phi(x_i)\}$, $x_i \in \Gamma$, to the origin of $\mathcal{F}$, i.e. centres the data in $\mathcal{F}$. 3. Calculate the eigenvector expansion coefficients $\alpha_k$, i.e. the eigenvectors of $\tilde{K}$, and normalise them. 4. Extract principal components using (7). Compression vs. discrimination Application of both image fusion techniques leads to a set of up to d PDs in case of PCA and up to N PDs in case of KPCA. In general, a compact visualisation of the complete data as a single image is desired. In this case, inspection of the fused image based on the PD corresponding to the first (largest) eigenvalue is optimal in terms of a general compression scheme: The projection on this PD retains most of the total data variance and leads to a reconstruction with least mean square error. Nevertheless, image fusion is commonly employed with a well defined intention, e.g. in order to detect a specific phenomenon such as bushfires in multitemporal satellite images [ 11 ] or (as in this work) tumour lesions in DCE-MRI data. In addition to the general compression characteristics, the fused image has to show task-specific discriminative properties which do not necessarily reflect the total data variance. In this case, using a PD corresponding to one of the following eigenvalues may lead to more discriminative visualisations. If the image data are fused by KPCA, an additional degree of freedom can be exploited. In addition to the index of the selected PD, the type and parameterisation of the kernel K can be varied, leading to alternative mappings to the feature space and changing the characteristic of the fusion image. Experiments In the following, the fusion results of both methods are discussed and illustrated with DCE-MRI sequences from six cases (referred to as S 1 ,..., S 6 ) which were taken during the MARIBS breast screening study [ 19 ]. Each sequence consists of seven 3D MRI volumes of the female breast, recorded with a separation of 90 sec using a standardised protocol (a fast spoiled gradient echo sequence (FLASH) with TR = 12 ms, TE = 5 ms, flip angle = 35°, FOV = 340 mm and coronal slice orientation). Before recording the third volume, a gadolinium-based contrast agent was administered with a bolus injection. Therefore, each spatial position p in the 256 × 128 × 64 (1.33 mm × 1.33 mm × 2.5 mm) sized volume is associated with a pattern $x_p \in \mathbb{R}^d$, d = 7, describing the temporal signal kinetic of the local tissue. The images were manually evaluated by an expert who marked voxels of tumour with a cursor on an evaluation device. Below, the kinetic signals of the marked tumour voxels are labelled '+'. Signals corresponding to voxels of the complement of the marked region are labelled '-'. For this kind of data, experiments of Lucht et al. [ 5 ] suggest recording a much longer temporal sequence of 28 images, which makes the need for efficient fusion techniques evident. Evaluation criteria In order to provide an objective discussion of the value and drawbacks of both algorithms, we focus on the following requirements: 1. The marked region should be visualised with high contrast compared to unmarked regions in order to facilitate detection of kinetic signals which are similar to the marked signals. 2. The fusion image should meet the first criterion without time-consuming manual manipulation by the observer (e.g. tuning of transfer functions such as windowing). Following the first criterion, the purpose of the visualisation is specified implicitly by the voxel labels.
In the present work, the expert marked regions of tumour tissue. Thus, optimal fusion images of an image sequence display locations of cancerous kinetic signals with high contrast to normal signals. In addition to the visualisation of the fusion images as gray value and RGB images, both methods are evaluated by means of a receiver-operating-characteristic (ROC) analysis [ 27 , 28 ]. To this end, pixel intensities of the fusion images are interpreted as confidence values for the existence of suspicious signals and are compared with the expert label as ground truth. The ROC analysis objectively measures the applicability of the fusion images for the task of lesion detection. However, no conclusion can be drawn about how well other tissue types are distinguishable in the fusion images, i.e. how well the information of the entire signal space is represented. Preprocessing For numerical reasons, the voxel value range of each volume sequence is individually normalised to [0; 1]. In order to preserve the signal kinetics, the individual minimal and maximal intensity values are determined simultaneously on all d image volumes of each sequence. To ensure this normalisation is robust with respect to single outlier values, the values are calculated based on an application of a 3 × 3 × 3 median filter. Since about 66% of each volume is covered by background, all image sequences are preprocessed with a fully automatic tissue/background separation method. The histogram of the sum of local intensity differences (sod) feature, individually calculated for each sequence, has a bimodal shape and shows a clearly separable maximum for the background voxels. The optimal threshold separating background from tissue can be computed automatically [ 29 ]. The resulting binary masks are postprocessed with a morphological closing operator [ 30 ] to ensure closed masks for the regions of tissue.
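A minimal sketch of this preprocessing pipeline might look as follows; the fixed-percentile threshold is a crude stand-in for the automatic bimodal-histogram method of [ 29 ], and all names and parameters are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import median_filter, binary_closing

def preprocess(volumes):
    """volumes: array of shape (d, nx, ny, nz) for one case."""
    # Robust [0, 1] normalisation: min/max determined simultaneously on
    # all d volumes, after a 3x3x3 median filter suppresses outliers.
    filt = np.stack([median_filter(v, size=3) for v in volumes])
    lo, hi = filt.min(), filt.max()
    norm = np.clip((volumes - lo) / (hi - lo), 0.0, 1.0)

    # Tissue/background separation on the sum of local intensity
    # differences (sod); a percentile threshold stands in for the
    # automatic bimodal-histogram method cited in the text.
    sod = np.abs(np.diff(norm, axis=0)).sum(axis=0)
    mask = sod > np.percentile(sod, 66)        # ~66% background assumption
    mask = binary_closing(mask, iterations=2)  # morphological closing
    return norm, mask
```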
Adaptation In order to automate and optimise the fusion process, a priori knowledge about the phenomenon to be visualised, given by the expert label, is used to find a suitable parameterisation of the algorithms, as described in detail in the following section. In practice, these labels are not available for new image sequences. Thus, the algorithms have to be adapted on a small number of image sequences, e.g. from a subgroup of cases of a screening study, which were manually evaluated by a human expert, and can be subsequently applied to the data of an arbitrary number of unseen cases. To ensure the experimental setup reflects the circumstances of a practical application, the data sets Γ used for adaptation consist of marked tissue signals from only five of the six image sequences, and the sixth unseen image sequence is used for the evaluation of the algorithm's capabilities. This setup is repeated six times, each time using a different image sequence for evaluation. In case of KPCA, using all kinetic signals from the five image sequences is prohibitive due to the computational and memory complexity. Therefore, the KPCA is adapted with a reduced data set Γ consisting of all signals of the marked tumour regions and an equal number of signals randomly selected from non-tumour regions. Parameter selection An essential part of kernel methods is the mapping from the data space $\mathcal{X}$ to the feature space $\mathcal{F}$ by the kernel function. In this paper, we focus on the frequently used Gaussian Kernel (8) which is parameterised by the bandwidth parameter σ. Selection of this parameter is crucial for the fusion process. For the experiments, σ is chosen by scanning the range [0.05,...,2.0] using a step size of 0.05. Because manual evaluation by visual examination of the fusion images of each parameterisation is time-consuming, we apply an automatic selection heuristic for the bandwidth based on the component-specific Fisher score $F^{(k)} = \left(\mu_+^{(k)} - \mu_-^{(k)}\right)^2 / \left(v_+^{(k)} + v_-^{(k)}\right)$ with class-specific means $\mu_\pm$ and variances $v_\pm$. The Fisher score is commonly used for ranking components $x^{(k)}$ of a set {( x , y )} of binary labelled ( y = ±) examples according to their discriminative power. In a similar manner, the score can be evaluated for different PDs on a random subset of the training set Γ utilising the corresponding principal component values with their associated expert labels, and thus can be interpreted as a measure for the first evaluation criterion. Furthermore, the sign of the PCA/KPCA based PDs can be adjusted in order to obtain a high value for the average intensity of tumour voxels, causing tumour lesions to appear as bright regions. Thereby, the a priori knowledge of which region of the five image sequences used for adaptation should be visualised with high contrast can be utilised for selecting proper parameterisations which lead to discriminative visualisations tailored to the given task. Fusion For each method and image sequence, the first three PDs are used for calculating fused images, referred to as I 1 , I 2 and I 3 . For the purpose of visualisation, the range of the voxel values is normalised to [0; 255]. Additionally, I 1 , I 2 and I 3 are composed into an RGB image I RGB . For fusion images based on KPCA, the bandwidth for each I k is chosen according to the individual maximum of the Fisher criterion, as illustrated in Fig. 10 . Figure 10 Plot of Fisher score values for PD 1 of the KPCA algorithm with varying bandwidth. The score indicates a varying magnitude of separation between the class of suspicious tissue signals and the class of normal tissue signals. Below, the fusion image I 1 for S 1 based on KPCA with four different bandwidth values A, B, C and D is shown. Variation of the bandwidth leads to fusion images with varying imaging properties. The bandwidth B leads to a fusion image that displays the tumour with the highest contrast to the surrounding tissue, and the Fisher score shows a peak at the corresponding position. For bandwidth values A, C and D, the Fisher score and the contrast in the fusion images decrease. Results Fusion results for the sequences S 1 ,..., S 6 based on the PCA algorithm are shown in the lower 2 × 2 block of Fig. 4 , Fig. 5 , Fig. 6 , Fig. 7 , Fig. 8 and Fig. 9 . For all six sequences, the fusion image I 1 based on the PD with the leading eigenvalue does not lead to discriminative visualisations. The tumour lesions appear with the same intensity as fatty tissue, while glandular tissue is displayed as dark areas ( S 3 , S 4 ). In contrast to I 1 , the discriminative power of I 2 is obviously much greater for all six image sequences. The display of the tumour lesions (high intensity values) differs significantly from areas of glandular tissue, blood vessels (medium intensity values) and fatty tissue (low intensity values). The contrast between tumour lesion and the surrounding tissue decreases in I 3 of S 2 , S 3 and S 5 . Additionally, the surrounding tissue is displayed in less detail ( S 1 , S 2 , S 4 , S 5 ). In accordance with the weak discriminative characteristic of I 1 and I 3 , the tumour lesions are coloured with shadings of green or cyan in the corresponding I RGB .
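The bandwidth-selection heuristic described under Parameter selection can be sketched as follows; this reuses the hypothetical kpca_fit/kpca_project helpers from the earlier sketch, and the data and labels are placeholders:

```python
import numpy as np

def fisher_score(values, labels):
    """Component-specific Fisher score of projected values for
    binary labels (1: tumour '+', 0: normal '-')."""
    pos, neg = values[labels == 1], values[labels == 0]
    return (pos.mean() - neg.mean()) ** 2 / (pos.var() + neg.var() + 1e-12)

X = np.random.rand(400, 7)           # placeholder training signals
y = np.random.randint(0, 2, 400)     # placeholder expert labels

best_sigma, best_score = None, -np.inf
for sigma in np.arange(0.05, 2.0 + 1e-9, 0.05):   # scanned range
    alpha = kpca_fit(X, sigma, n_components=1)    # from the sketch above
    pc1 = kpca_project(X, X, alpha, sigma)[:, 0]
    score = fisher_score(pc1, y)
    if score > best_score:
        best_sigma, best_score = sigma, score

# Sign adjustment: make tumour voxels bright on average.
pc1 = kpca_project(X, X, kpca_fit(X, best_sigma, 1), best_sigma)[:, 0]
if pc1[y == 1].mean() < pc1[y == 0].mean():
    pc1 = -pc1
```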
Figure 4 Fusion images I 1 , I 2 , I 3 and corresponding colour composite image I RGB for sequence S 1 based on KPCA (upper 2 × 2 block) and PCA (lower 2 × 2 block). The lesion is located near the centre of the left breast. Figure 5 Fusion images I 1 , I 2 , I 3 and corresponding colour composite image I RGB for sequence S 2 based on KPCA (upper 2 × 2 block) and PCA (lower 2 × 2 block). The lesion is located in the lower left part of the left breast. Figure 6 Fusion images I 1 , I 2 , I 3 and corresponding colour composite image I RGB for sequence S 3 based on KPCA (upper 2 × 2 block) and PCA (lower 2 × 2 block). The lesion is located near the centre of the left breast. Figure 7 Fusion images I 1 , I 2 , I 3 and corresponding colour composite image I RGB for sequence S 4 based on KPCA (upper 2 × 2 block) and PCA (lower 2 × 2 block). The lesion is located near the centre of the right breast. Figure 8 Fusion images I 1 , I 2 , I 3 and corresponding colour composite image I RGB for sequence S 5 based on KPCA (upper 2 × 2 block) and PCA (lower 2 × 2 block). The lesion is located near the implant in the right breast. Figure 9 Fusion images I 1 , I 2 , I 3 and corresponding colour composite image I RGB for sequence S 6 based on KPCA (upper 2 × 2 block) and PCA (lower 2 × 2 block). The lesion is located near the centre of the left breast and is surrounded by glandular tissue. Fusion images based on KPCA are shown in the upper 2 × 2 block of Fig. 4 , Fig. 5 , Fig. 6 , Fig. 7 , Fig. 8 and Fig. 9 . For S 1 , S 2 , S 3 and S 4 , image I 1 displays the tumour lesion with high contrast to the surrounding tissues. Adipose tissue appears in I 1 and I 2 with medium intensity. In I 2 of S 4 and S 3 , glandular tissue can be observed in addition to the tumour. These areas appear dark in I 1 . The fraction of glandular tissue regions in I 2 of S 1 and S 2 is much smaller, since the tumour is located near the chest muscle where the breast mostly consists of fatty tissue and blood vessels. An interesting detail can be observed in I 3 of S 4 . The image clearly shows a ring structure as part of or around the tumour lesion. At positions inside the ring which are displayed with high intensity values in I 1 and I 2 , the temporal kinetic patterns show a fast uptake with a following constant or slightly decreasing concentration of the contrast agent. In contrast to the signals inside the ring, all signals corresponding to the ring structure in I 3 show a steadily increasing concentration. In all composite images I RGB except for S 5 , the tumour lesions are coloured white and can be easily discriminated from fatty tissue (shadings of blue to purple) and glandular tissue (shadings of blue to green). For image S 5 , only I 1 shows a discriminative characteristic. The tumour is displayed as a small cluster of high intensity values in the lower right area of the right breast, next to the implant. According to common practice, the curves obtained from the ROC analysis of the fusion images I 1 , I 2 and I 3 are compared by measuring the area-under-the-curve (AUC) values. The corresponding AUC values are listed in Tab. 1 . The fusion image yielding the highest AUC value is printed bold for each sequence. For five of six sequences, a fusion image based on PCA yields the highest AUC value (column PCA in Tab. 1 ). The fusion image I 2 based on the second PD of the PCA algorithm significantly outperforms the corresponding PCA based fusion images I 1 and I 3 .
A similar predominance of I 2 can be observed for the KPCA based AUC values (column KPCA in Tab. 1 ). Here, I 2 outperforms I 1 and I 3 in four of six cases ( S 1 , S 2 , S 3 and S 5 ). Only for S 4 and S 6 does the fusion image I 1 yield the largest AUC value. Nevertheless, for KPCA the difference to the corresponding fusion images I 1 and I 3 is much less distinct. In particular, I 1 yields AUC values which are close to those of the corresponding fusion image I 2 . The predominance of the second component also decreases if the PCA algorithm is trained with the reduced data set used for adaptation of the KPCA (column PCA (reduced) in Tab. 1 ). In comparison with the results of the PCA adapted with the entire data set, the AUC values of I 2 decrease, while those of the fusion images I 1 and I 3 increase. Table 1 Area under ROC curve values for fusion images I 1 , I 2 and I 3 for series S 1 ,..., S 6 based on KPCA, PCA and PCA trained with the same reduced training set as KPCA. For each AUC value, the pixel intensities of the fusion images are interpreted as confidence values indicating the existence of suspicious signals at the corresponding positions. The largest AUC value for each case is printed bold.

                     KPCA                      PCA                       PCA (reduced set)
Sequence     I1      I2      I3        I1      I2      I3        I1      I2      I3
S1           0.950   0.972   0.879     0.539   0.993   0.633     0.772   0.972   0.692
S2           0.918   0.945   0.728     0.727   0.993   0.712     0.852   0.948   0.547
S3           0.995   0.998   0.710     0.520   0.997   0.926     0.799   0.997   0.747
S4           0.996   0.985   0.259     0.926   0.999   0.919     0.992   0.985   0.963
S5           0.959   0.966   0.904     0.693   0.997   0.925     0.814   0.964   0.344
S6           0.994   0.986   0.706     0.926   0.999   0.919     0.785   0.986   0.802

The influence of the bandwidth σ on the fusion characteristic is illustrated in Fig. 10 . For small values of the bandwidth σ, only a small fraction of the tumour lesion appears with high intensities. If the bandwidth is chosen according to the maximum of the Fisher score, the lesion is visualised with high contrast to the surrounding tissue. In the shown example, the Fisher criterion decreases along with the contrast of the visualisation for further increasing bandwidth values.
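The ROC evaluation underlying Table 1 can be reproduced in a few lines. The sketch below uses scikit-learn's roc_auc_score on a synthetic volume and a toy expert mask (both placeholders); the sign flip mirrors the PD sign adjustment described in the Methods:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

fused = np.random.rand(256, 128, 64)        # e.g. fusion image I2
mask = np.zeros((256, 128, 64), dtype=int)  # expert tumour labelling
mask[120:130, 60:66, 30:34] = 1             # toy '+' region

# Pixel intensities act as confidence values for suspicious signals.
auc = roc_auc_score(mask.ravel(), fused.ravel())

# Adjusting the sign of the PD is equivalent to flipping intensities.
if auc < 0.5:
    auc = 1.0 - auc
print(f"AUC = {auc:.3f}")
```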
Discussion The results shown in the preceding section indicate that fusion of DCE-MRI data by PCA or KPCA leads to compact and meaningful visualisations. Lesions are correctly displayed as bright regions or with specific colouring and can be easily discriminated from surrounding tissue. Once a small subgroup of cases is evaluated, the obtained secondary information in the form of labelled tumour areas is utilised for automation of the data processing and presentation: ( i ) The sign of the PD is selected in a way that tumour lesions always appear with high intensities. ( ii ) The parametrisation of the kernel function of the KPCA is optimised in such a way that the fusion images show the desired discriminative characteristics. Thus, both evaluation criteria stated in the section Evaluation criteria are accomplished. Although both methods are applicable for the task of image fusion, several properties should be discussed in more detail. According to the ROC analysis and visual appraisal, the fusion image I 2 based on PCA shows for nearly all cases a discriminative characteristic which is superior to all other fusion images based on PCA or KPCA. While I 1 based on PCA captures the slight overall enhancement of the major part of the breast, caused by minor accumulation of contrast agent in tissues such as fat, the fusion image I 2 corresponding to the second PD of PCA shows the lesions with high contrast to the surrounding tissue. This can also be observed by means of the PDs themselves. Figure 13 shows a plot of the components of the three PCA based PDs. The plot of PD 1 shows a nearly constant or slightly increasing curve, whereas the plot of the components of PD 2 is similar to a typical temporal course of contrast agent concentration in suspicious tissue (see Fig. 3 ). The plot of PD 3 shows increasing values for the components corresponding to the postcontrast measurements. From this it follows that the major part of the signal variance is caused by voxels which exhibit signals at different intensity levels with only minor changes of intensity in the course of time. This fraction of data variance is captured by PD 1 of PCA. The next major source of variance is the signal uptake between the precontrast and the first postcontrast measurement in suspicious tissue, which is captured by PD 2 and leads to the superior discriminative characteristics of the fusion image I 2 . PD 3 is sensitive to signals which show a continuously increasing intensity for the postcontrast measurements. Hence, I 3 is more discriminative than I 1 , but less discriminative than I 2 . Figure 13 Plot of the components of the vectors PD 1 (solid), PD 2 (dashed) and PD 3 (solid with crosses) based on the PCA algorithm. The plot of PD 2 shows a typical signal of suspicious tissue (see Fig. 3) and therefore leads to discriminative fusion images with high intensity values at positions of tissue that exhibits a significant signal uptake after injection of the contrast agent. The ROC analysis of the KPCA based fusion images indicates that the fusion images I 2 show superior discriminative characteristics for four of six cases ( S 1 , S 2 , S 3 and S 5 ). However, selection of a suitable kernel parametrisation leads to comparable AUC values for I 1 . For fusion images corresponding to PDs with smaller eigenvalues, KPCA based images still show more details than those based on PCA, if the bandwidth value is chosen according to the maximum of the Fisher score. Figure 11 shows the KPCA based (left column) and the PCA based (right column) fusion images I 4 , I 5 and I 6 for sequence S 4 . While KPCA distributes the total data variance on N PDs, the PCA method uses only d PDs. Therefore, the PCA based fusion images I 4 , I 5 and I 6 typically contain a large fraction of high-frequency noise. It is important to note that the fusion images based on KPCA are not necessarily uncorrelated, if each image is calculated using PDs with different bandwidth values, and therefore may display redundant information. In five of six cases, RGB visualisations based on KPCA show the tumour lesion as white regions which are easy to discriminate from other tissue types. In contrast to subtraction images, which also allow detection of lesions with high sensitivity (see e.g. [ 4 ]), the fusion images I RGB provide a more comprehensive display of the data. A single subtraction image displays only the information of a two-dimensional subspace of the signal space $\mathcal{X}$, i.e. the information of two manually selected components of the signal vector.
Without further manipulation of the transfer function and after selection of two suitable components, a subtraction image commonly shows the lesion as a cluster of high intensity values, while other types of tissue are not displayed or are indistinguishable. The fusion images are low dimensional representations of the entire signal space. Thus, the RGB composite images I RGB based on PCA or KPCA clearly display the lesion in combination with glandular or fatty tissue and major blood vessels. Figure 11 Fusion images I 4 , I 5 and I 6 for S 3 based on the PDs with the fourth, fifth and sixth largest eigenvalue. The left column shows the fusion images based on KPCA. Each fusion image was calculated with a bandwidth that was individually optimised according to the Fisher score. The right column shows the same images fused with PCA. In contrast to the KPCA based fusion images, these images show a significant fraction of high-frequency noise and less detail. One drawback of KPCA is the increased computational and memory complexity in contrast to PCA. In case of KPCA, the complexity scales with the size N of the training set Γ. During the adaptation of KPCA, an N × N sized kernel matrix has to be stored and manipulated, whereas the covariance matrix for PCA is only of size d × d . Thus for KPCA, the computation time (LINUX system / 1.8 GHz Pentium IV / 2 GB RAM) for the adaptation, i.e. calculation of the kernel matrix and extraction of 3 PDs, increases significantly with the size of the training set Γ and takes 73 seconds for Γ consisting of 2700 training items, which is comparable to the computation time of the PCA for the given setup (see Fig. 14 ). While even for large matrices a subset of eigenvectors can be extracted in a reasonable time using efficient numerical software packages like LAPACK [ 31 ], the memory complexity obviously limits the size of Γ. One way to address this problem is to subsample the data. Instead of using a random sample of the whole data set, the chosen scheme assures the presence of tumour voxels in the training set. In the former case, the presence of a larger number of tumour voxels is unlikely because of the unbalanced ratio between the number of tumour voxels and the number of non-tumour voxels. Nevertheless, the reduction of the training data causes a degradation of the detection performance and changing fusion characteristics (see Fig. 12 ). Figure 14 Computation time for adaptation of KPCA (solid line). The measured time includes calculation of the kernel matrix and the extraction of the first three PDs. Additionally, the time for adaptation of the PCA using the complete training data is shown (dashed line). Figure 12 Images I 1 , I 2 , I 3 and I RGB of S 3 (top block) and S 4 (bottom block) fused by the PCA algorithm which was adapted on the same reduced data set as KPCA. More important for practical applications of both methods is the computational expense for calculation of the fusion images. Using PCA, the value of a fusion image voxel is equivalent to the inner product of two d -dimensional vectors, and the calculation of the three fusion images I 1 , I 2 and I 3 of one volume slice takes approximately 1 second. In case of KPCA, the inner product has to be calculated in the feature space $\mathcal{F}$, and the PD in $\mathcal{F}$ is only implicitly given as an expansion of N kernel functions. Thus, computation of I 1 , I 2 and I 3 of one volume slice takes approximately 23 seconds for training sets Γ consisting of 1000 examples and increases linearly with the size of Γ (Fig. 15 ).
Figure 15 Computation time for the three fusion images I 1 , I 2 and I 3 of one slice using PCA (dashed line) and KPCA (solid line). The computation time of principal component values with KPCA increases linearly with the size of the training set. For PCA, the computation time depends only on the dimension of the signal pattern and is constant for the given setup. In consideration of the fact that both methods are able to fuse the multitemporal DCE-MRI data to single meaningful images which not only show the lesion with high intensities, but also other types of tissue such as fatty or glandular tissue, the standard linear PCA seems to be most suitable for the given signal domain because of its low computation time and superior detection performance. Only for PCA can the three fusion images be calculated for a complete volume in a reasonable time and without delaying the diagnostic process. According to the ROC analysis, the introduction of nonlinearity by the kernel function did not improve the discriminative properties of the fusion images, but visual appraisal of the RGB composite images based on KPCA suggests a more comprehensive display of the different types of tissue. It is an open question whether fusion images of other data domains with more complex or higher dimensional signals might benefit more obviously from the nonlinearity of KPCA. Conclusion In this paper, we have demonstrated the integration of distributed information from DCE-MRI image sequences into meaningful visualisations by means of PCA and KPCA. Both methods were able to accentuate the regions marked by the expert as important in image sequences blinded to automatic analyses. By the employment of task-specific information, the parametrisation of the KPCA algorithm was optimised in order to accentuate the relevant characteristics of the visualisation. List of abbreviations PCA Principal Component Analysis KPCA Kernel Principal Component Analysis PD Principal Direction DCE-MRI Dynamic Contrast Enhanced Magnetic Resonance Imaging Authors' contributions T. Twellmann, A. Saalbach and T. W. Nattkemper conceived the experimental setup. Implementation and realisation was done by O. Gerstung. Image acquisition was done under supervision of M. O. Leach. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC529274.xml
526379 | A computational approach for ordering signal transduction pathway components from genomics and proteomics data | Background Signal transduction is one of the most important biological processes by which cells convert an external signal into a response. Novel computational approaches to mapping proteins onto signaling pathways are needed to fully take advantage of the rapid accumulation of genomic and proteomics information. However, despite their importance, research on signaling pathway reconstruction utilizing large-scale genomics and proteomics information has been limited. Results We have developed an approach for predicting the order of signaling pathway components, assuming all the components on the pathways are known. Our method is built on a score function that integrates protein-protein interaction data and microarray gene expression data. Compared to the individual datasets, either protein interactions or gene transcript abundance measurements, the integrated approach leads to better identification of the order of the pathway components. Conclusions As demonstrated in our study on the yeast MAPK signaling pathways, the integrated analysis of high-throughput genomics and proteomics data can be a powerful means to infer the order of pathway components, enabling the transformation from molecular data into knowledge of cellular mechanisms. | Background Signal transduction is the primary means by which eukaryotic cells respond to external signals from their environment and coordinate complex cellular changes. It plays an important role in the control of most fundamental cellular processes including cell proliferation, metabolism, differentiation, and survival [ 1 ]. An extracellular signal is transduced into the cell through ligand-receptor binding, followed by the activation of intracellular signaling pathways that involve a series of protein phosphorylation and dephosphorylation events, protein-protein interactions, and protein-small molecule interactions. Recently, with the accumulation of genome sequence information, large-scale genomic and proteomic techniques have offered insights into the components of signal transduction pathways and the molecular and cellular responses to cell signaling. For example, large-scale yeast two-hybrid screening methods and the Co-IP technique have been used to identify physical interactions between proteins [ 2 - 5 ]. Synthetic lethal screens are used to identify genetic interactions [ 6 ]. The protein chip is an advanced in vitro technique for analyzing protein functions [ 7 ]. In addition, microarray experiments can simultaneously measure the transcript abundance of thousands of genes in different conditions. These experimental approaches have generated enormous amounts of data and provide valuable resources for studying signal transduction pathways. However, our understanding of the signal transduction processes underlying these data lags far behind the data accumulation. Therefore, there is a great need to develop computational methods to direct biological discovery, enabling biologists to discover the mechanisms underlying complex signaling pathways and the interactions among them. Given the fact that signal transduction is achieved by a cascade of protein interactions and activations, one major challenge in dissecting signal transduction pathways is to determine the order in which the signal is transduced. Traditionally, genetic epistasis analysis is used to address this question.
In such analysis, the order of gene function can be determined by comparing the phenotype of a double mutant ab to that of a single mutant a , or a single mutant b . However, this analysis is time-consuming, expensive, and sometimes the results can be misinterpreted [ 8 ]. Computational methods using large-scale genomics and proteomics information can expand the scope of experimental data and reduce the number of experiments required to detect the order of pathway components. Although it is important, little research has been performed in this field, with a major obstacle being the lack of completeness and accuracy of the data. Here we present a computational approach that integrates different types of information to predict the order of the pathway components, assuming all the pathway components are known. Results Because the yeast MAPK pathways involved in pheromone response, filamentous growth, maintenance of cell wall integrity and hypertonic shock response are among the most thoroughly studied pathways, we use them to develop and test our method (Fig. 1 ). As protein-protein interaction plays an important role in achieving the signal transduction process, useful prediction of the order of the pathway components will require knowledge of the interacting partners of these pathway components. Here, we utilize the Database of Interacting Proteins (DIP), which is based on a curated collection of all functional linkages of proteins obtained by experimental methods, including yeast two-hybrid experiments, immunoprecipitation, and affinity purification [ 9 ]. Although important, the usefulness of the interaction information is limited, as the presence of a physical interaction may not indicate the activation of the interacting proteins. Protein kinase analysis based on the protein chip technique provides direct information about protein phosphorylation and activation, but it only presents a very small fraction of the complete picture of protein activation. Compared to the protein chip data, gene expression data from DNA microarrays provide an overall picture of whole-cell response under different conditions. Therefore, we utilize this data source as indirect information about protein activation to complement the protein-protein interaction data. Our goal is to develop a computational method for integrating these data sources for ordering yeast MAPK pathway components. Two expression datasets are used in our analysis: one is composed of 56 conditions relevant to the behavior of MAPK signal transduction and the other is the "compendium" set, which is composed of 300 diverse mutations and chemical treatments [ 10 , 11 ]. To incorporate the gene expression data, we hypothesize that the genes encoding the proteins on the same signaling pathway, especially the adjacent pathway components, have similar gene expression profiles. In order to test the hypothesis, we calculated the correlations between each pair of genes using the two expression datasets, and performed a hypergeometric test on the similarity of gene expression patterns of the adjacent pathway components. The hypergeometric p-value is given by $p = \sum_{i=k}^{\min(n,M)} \binom{M}{i}\binom{N-M}{n-i} / \binom{N}{n}$, where N represents the total number of protein pairs, M represents the number of protein pairs in adjacent positions on a specific MAPK pathway, n is the total number of protein pairs that have an absolute value of correlation coefficient above a given threshold, e.g. 0.7, and k is the number of adjacent protein pairs having an absolute value of correlation coefficient above this threshold.
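Such a tail p-value can be computed directly with SciPy's hypergeometric survival function; the counts below are illustrative placeholders, not the paper's actual values:

```python
from scipy.stats import hypergeom

# Paper's notation -> scipy's positional arguments (M, n, N):
# population size N, M "successes" (adjacent pairs), n draws
# (pairs with |r| > 0.7), and k observed adjacent pairs among them.
N, M, n, k = 5000, 40, 300, 12   # placeholder counts

# P(X >= k): survival function evaluated at k - 1.
p_value = hypergeom.sf(k - 1, N, M, n)
print(f"enrichment p-value: {p_value:.2e}")
```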
The p-value obtained from the test is 2 × 10 -4 when the threshold is set to 0.7, indicating that protein pairs in adjacent positions on a pathway tend to have a higher correlation coefficient value than random protein pairs. This fact is applied in developing our score function that incorporates the gene expression information. For each MAPK pathway, we examine all permuted orders of the pathway components with the starting point (membrane receptor) and the ending point (transcription factor) of each MAPK pathway fixed, and calculate the score for each permutation according to the score function defined in the "Methods" section. Then, we rank each permutation based on its corresponding score, with the high-ranking orders being the more likely pathway orders. For the pheromone response pathway, the scores based on each individual data set and the scores based on integrating both data sets are shown in Fig. 2 . Based on protein-protein interaction data alone, the "true" pathway is assigned a score of 0.75, ranking 241 among all the 5040 possible pathways, while based on gene expression data alone, it is assigned a score of 0.96, ranking 25 among all the 5040 possible pathways. However, after we integrate the scores obtained from the two different sources, the "true" pathway obtains a score of 1.71, with a rank of 2, a much higher ranking than that based on either data type alone. Similar results are shown for the other three yeast MAPK pathways (Table 1 ). Therefore, our score function that integrates protein-protein interaction data and gene expression data seems to provide more accurate prediction of the order of the pathway components than methods based on either data source alone. This prediction can be used to guide hypothesis-driven research and significantly reduce the number of required experiments. Discussion The rapid accumulation of genomics and proteomics information and the development of large-scale experimental techniques motivate us to develop computational approaches to dissecting different pathways. Arkin et al. described a time-lagged correlation analysis to infer the interactions among the components on the first few steps of the glycolytic pathway, from which the order of the components on the glycolytic pathway could be deduced [ 13 ]. Schmitt Jr. et al. applied this method to identify the cause-effect relationships among genes in the organism Synechocystis in response to different light conditions [ 14 ]. The limitation of this time-lagged correlation analysis is its requirement of high-resolution time-scales for sampling. That is, unless the level of gene expression or the amount of the pathway components is measured at small sampling intervals, high resolution of the ordering of pathway components cannot be achieved. Gomez et al. used known protein-protein interactions of Saccharomyces cerevisiae as training data and represented the proteins as collections of domains to predict links within the human apoptosis pathway [ 15 ]. However, not all proteins have a defined domain composition. In principle, these two approaches use either gene expression data or protein-protein interaction data to infer pathways. However, neither method can jointly analyze data from different sources. Although protein-protein interaction data provide key information to reveal the relationships between components in a signal transduction pathway, they are subject to many biases (e.g.
high false positive and false negative rates) and are not able to capture the dynamic nature of the pathways, which are condition dependent. DNA microarray data offer information about whole-cell responses under different conditions but provide only indirect information on the ordering of genes in a specific pathway. These two different data types offer complementary information, and our approach, which infers the order of the pathway components based on the integration of these two data types, can significantly increase our ability to infer pathways. We note that, despite great improvements over the results based on a single data type, our approach is not able to rank the correct order at the top among all possible orders. This is largely due to the imperfection of current data sources. To further improve our method, we may require data of higher quality or incorporate more types of data, such as protein chip data. We note that the utility of integrating the yeast protein-protein interaction map and gene expression profiles to predict signal transduction networks has previously been described by Steffen and colleagues [ 16 ]. In their approach, the interaction data were used to create "candidate" pathways and infer the orders between the pathway components, and then the "candidate" pathways were scored according to the number of pathway members that were clustered together based on the expression profiles. However, as many interactions are currently not identified, some links between pathway members may be missing at the very first step and cannot be recovered in the following inference. In addition, the prediction results are highly dependent on the clustering method and the number of clusters into which the genes were grouped. In contrast, our starting point is to assume that all pathway components are known and to use gene expression data to calculate the correlation coefficients between genes, incorporating the results into our score function directly. While our overall objective is somewhat more modest than that of Steffen and colleagues, the motivation of our work was to test whether there is any information in the current data sources to infer the correct order of pathway components. If this goal could not be achieved when all pathway components are known, then it is very unlikely that any method starting from scratch to reconstruct a signal transduction pathway will succeed. Fortunately, our results indicate that this modest task can be accomplished and suggest the usefulness of genomics and proteomics information. We have shown that our method can lead to a good prediction for the well-known yeast MAPK signaling pathways. In addition, we have tested our approach on the DNA damage checkpoint pathway that is involved in cell-cycle progression. The "true" pathway ranks 4 among all the 750 possible pathways based on our integrated approach, while it has a rank of 46 and 60 based on protein-protein interaction data alone and gene expression data alone, respectively. Therefore, we conjecture that our approach may be applicable to many other pathways, including less well-understood ones. It is worth noting that signaling pathways are not limited to the one-dimensional sequences of genes that are the focus of this study; instead, they should be depicted as multidimensional networks. To extend prediction and modeling to such networks, we need to incorporate more biological information and apply more elaborate statistical approaches.
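To make the ranking procedure described in the Results concrete, here is a minimal sketch of the enumerate-score-rescale-rank loop. The functional forms of the two score components, and all input data, are assumptions for illustration, not the authors' exact implementation.

```python
# Hypothetical sketch of the ordering procedure: enumerate all orders with
# fixed endpoints, score each order against interaction and expression data,
# min-max rescale both scores, and rank permutations by the summed score.
from itertools import permutations

P_FN = 0.4  # assumed false negative rate of the interaction data

def interaction_score(order, interacts):
    # Reward observed interactions between adjacent components; an
    # unobserved adjacent pair still contributes p (possible false negative).
    return sum(1 - P_FN if frozenset(pair) in interacts else P_FN
               for pair in zip(order, order[1:]))

def expression_score(order, corr):
    # Sum of absolute expression correlations of adjacent components.
    return sum(abs(corr[a][b]) for a, b in zip(order, order[1:]))

def rescale(scores):
    # Map the scores of all candidate orders onto [0, 1].
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def rank_orders(components, receptor, tf, interacts, corr):
    middles = [c for c in components if c not in (receptor, tf)]
    orders = [(receptor, *mid, tf) for mid in permutations(middles)]
    s_int = rescale([interaction_score(o, interacts) for o in orders])
    s_exp = rescale([expression_score(o, corr) for o in orders])
    return sorted(((si + se, o) for si, se, o in zip(s_int, s_exp, orders)),
                  reverse=True)
```

With the endpoints fixed, n components yield (n−2)! candidate orders; 7! = 5040, matching the count quoted in the Results for the pheromone response pathway.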
Conclusions We have demonstrated that our integrated approach can significantly improve the performance of predicting the order of signaling pathway components, without detailed knowledge of all the genes in the pathway or the molecular nature of the gene products. It may be important to incorporate other valuable sources of data, including protein chip data, genomic sequence information and protein domain information, if we want to make the transition from a linear one-dimensional pathway to a multidimensional model of signaling networks, which represents a great challenge in the field of systems biology. Methods For protein-protein interaction data, the score function is defined as S_int = Σ_{i=1..n−1} [ X_{i,i+1}·(1−p) + (1−X_{i,i+1})·p ], where n is the total number of proteins on the pathway, and X_{i,i+1} = 1 if there is an observed interaction between the i th and the (i+1) th proteins on the pathway and X_{i,i+1} = 0 otherwise. Here p represents the false negative rate of the interaction data. In this study, we fixed the false negative rate at 0.4. It was estimated that the total number of interactions between all yeast proteins, or the size of the yeast interactome, is about 20000~30000 [ 17 , 18 ]. In this study, the interaction data we obtained from DIP include 15118 pairwise protein-protein interactions, which covers more than 50% of the total number of estimated protein interactions, assuming all of the interactions in DIP are true interactions. Indeed, this assumption should be valid, as DIP is manually curated and provides high-quality interaction data by minimizing the total number of false positive interactions. Therefore, the false negative rate of the interaction data in DIP may well be less than 0.5. As our method is based on the ranking of the calculated scores, the ranking of all possible orderings is not affected by any false negative rate below 0.5. However, interaction data availability is limited for some species; for example, only 1379 interactions among about 900 human proteins are included in DIP. In such cases, the performance of our approach may not be as informative as that in yeast. For gene expression data, the score function is defined as S_exp = Σ_{i=1..n−1} | r_{i,i+1} |, where r_{i,i+1} represents the correlation coefficient between the i th and the (i+1) th proteins on the pathway. The two data sources are considered with equal importance, so we rescale the score S_i of all the possible pathways to [0, 1] by S̃_i = (S_i − S_min) / (S_max − S_min), where S_min and S_max are the minimum and the maximum scores of all the possible pathways, respectively, for either protein-protein interaction data or gene expression data. The rescaling procedure is performed on both data sets. The integrated score is the sum of the rescaled scores for each individual data set. Authors' contributions YL designed the study, performed the pathway analysis, and drafted the manuscript. HZ conceived and guided the study. Both authors read and approved the final manuscript. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC526379.xml
517832 | A Mechanism of Prion Propagation | null | The key to any protein's function is its structure. Proteins first emerge as a linear strip of amino acids from the cellular protein-manufacturing machinery, and it is this primary sequence that determines a protein's ultimate conformation. Improperly folded proteins—which can gum up cells and, when secreted, tissues—are normally destroyed. But in a wide range of diseases, including prion (from proteinaceous and infectious) diseases and neurodegenerative diseases like Parkinson disease and Alzheimer disease, amyloid fibrils, or plaques—misshapen proteins that aggregate into characteristic rope-like configurations—accumulate in tissue. (Figure: schematic of the single-molecule fluorescence experiment used to establish the amyloid growth mechanism.) When amyloid precursors and prions (pronounced PREE-ons) lose their normal conformation, they acquire the ability to infect their neighbors. Like molecular dominoes, the fall of one malformed protein precipitates the downfall of its neighbors, as one protein after another assumes the misshapen form of the first. Any chance of developing methods to contain the expansionist tendencies of these proteins depends on understanding the mechanism of propagation, an area of active research. An abundance of small protein aggregates, called oligomers, is associated with amyloid fiber growth and formation. (Single proteins are called monomers; they "polymerize" into longer chains.) Mounting evidence suggests these so-called amyloid intermediates are the "toxic species" underlying amyloid diseases. The steps in amyloid formation, however, are unclear: Must amyloids follow a progression from monomer to oligomer to plaque? That is, are oligomers required for amyloid plaque formation? Using the yeast prion protein Sup35 to study how amyloids form, Jonathan Weissman and colleagues propose a model of amyloid plaque formation and show that it can indeed occur in the absence of the putative toxic oligomers. In yeast, the Sup35 protein forms self-replicating aggregations reminiscent of amyloid formation and prion propagation. Though yeast aren't susceptible to prion diseases, they do assume what scientists call the yeast prion state. Two protein domains called NM together form self-propagating amyloid fibers that give rise to the yeast prion state. Oligomers, which are typically seen when other proteins form amyloids, have also been seen during this process, some of them near NM fiber ends. Weissman's team wanted to know what these oligomers were doing. To investigate the role of oligomers in NM amyloid formation and growth, the researchers explored the relationship between monomer concentration and polymerization progress. Initially, fiber growth rate was tied to the concentration of NM monomers; but as concentrations increased, growth rate became limited by the NM conformational changes that occur after binding to the fiber ends. Shaking the samples increased the polymerization rate. During polymerization reactions, the authors observed a pronounced pause, followed by an abrupt increase in polymerization rate.
Since the length of the pause showed only a weak dependence on the concentration of monomers, Weissman and colleagues explain, this finding could not be explained by a simple model of nucleation polymerization, in which growth occurs monomer by monomer, emerging from a monomer "nucleus." Instead, Weissman and colleagues' findings support a model in which nucleated monomers initially support fiber growth, fibers undergo fragmentation, and monomers rapidly grow from the broken ends. Weissman and colleagues confirmed that the fibers were growing monomer by monomer by using fluorescence microscopy, which can detect single molecules, to observe monomers attaching to fragmented fiber ends. Though the authors do not rule out the possibility that oligomers could attach to the fiber ends as well, their results show that amyloid growth can occur independently of oligomers. Since many of the properties observed in Sup35 polymerization are evident in other amyloid-forming proteins, the model presented here may apply to them as well. Future studies will have to explore this question, along with the issues of how oligomers figure into the process and how they cause disease. Weissman and colleagues raise the possibility that creating conditions that favor fiber growth while inhibiting oligomer formation might limit the toxic effects of amyloid plaques. The approaches outlined here should lay the foundation for exploring these questions in higher organisms. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC517832.xml
529248 | Uric acid: A new look at an old risk marker for cardiovascular disease, metabolic syndrome, and type 2 diabetes mellitus: The urate redox shuttle | Background The topical role of uric acid and its relation to cardiovascular disease, renal disease, and hypertension is rapidly evolving. Its important role both historically and currently in the clinical clustering phenomenon of the metabolic syndrome (MS), type 2 diabetes mellitus (T2DM), atheroscleropathy, and non-diabetic atherosclerosis is of great importance. Results Uric acid is a marker of risk, and it remains controversial as to its importance as a risk factor (causative role). In this review we will attempt to justify its important role as one of the many risk factors in the development of accelerated atherosclerosis and discuss its importance as one of the multiple injurious stimuli to the endothelium, the arterial vessel wall, and capillaries. The role of uric acid, oxidative – redox stress, reactive oxygen species, and decreased endothelial nitric oxide and endothelial dysfunction cannot be overemphasized. In the atherosclerotic prooxidative environmental milieu the original antioxidant properties of uric acid paradoxically become prooxidant, thus contributing to the oxidation of lipoproteins within atherosclerotic plaques, regardless of their origins in the MS, T2DM, accelerated atherosclerosis (atheroscleropathy), or non-diabetic vulnerable atherosclerotic plaques. In this milieu there exists an antioxidant – prooxidant urate redox shuttle. Conclusion Elevations of uric acid > 4 mg/dl should be considered a "red flag" in those patients at risk for cardiovascular disease and should alert the clinician to strive to utilize a global risk reduction program in a team effort to reduce the complications of the atherogenic process resulting in the morbid – mortal outcomes of cardiovascular disease. | Background While the topicality of serum uric acid (SUA) being a risk factor is currently controversial [ 1 , 2 ], there is little controversy regarding its association as a risk marker with cardiovascular (CVD) and renal disease (especially in patients with hypertension, diabetes, and heart failure). SUA seems to be a graded marker of risk for the development of coronary heart disease (CHD) or cerebrovascular disease and stroke compared with patients with normal uric acid levels, especially those in the lower 1/3 of its normal physiological range [ 1 , 3 - 13 ]. A recently published article by LK Niskanen et al. has provided new information on this subject: they were able to demonstrate that the association of elevated SUA levels with CVD mortality in middle-aged men was independent of variables commonly associated with gout or the metabolic syndrome [ 3 ]. In 1951, Gertler MM and White PD et al. set out to determine the clinical aspects of premature coronary heart disease in 100 male patients 40 years old and younger. Their findings were increased mesomorphic body build, shorter stature, increased anterior-posterior chest wall diameter, and increased cholesterol and uric acid (5.13 +/- .11 vs. 4.64 +/- .06) as compared to the normal population [ 14 ]. A much larger trial (1967) confirmed the initial interest in SUA and CVD with the publication of the early, large (5,127 participants), epidemiologic, seminal Framingham study. This classical paper by Kannel et al. noted that an elevated SUA was also associated with an increased risk of coronary heart disease for men aged 30–59 [ 15 ].
In addition to the important finding of elevations in lipoproteins (specifically cholesterol levels greater than 250 mg/100 ml) being associated with CHD, there also appeared a definite association of elevated SUA with an increase in the incidence rate of CHD. The above authors also noted that subjects in this study with evidence of impaired carbohydrate metabolism or disordered purine metabolism could be assumed to have accelerated atherogenesis [ 15 ]. This controversy regarding SUA being a risk factor or a risk marker is not as important as understanding its overall role in the association with endothelial cell damage, dysfunction, and decreased endothelial nitric oxide (eNO) bioavailability, and how SUA interacts with the other substrate toxicities and increased reactive oxygen species (ROS) of the A-FLIGHT-U acronym, which result in accelerated atherosclerosis (table 1 ). Johnson RJ et al. have nicely demonstrated that hyperuricemia predicts cardiovascular events in the general population, the hypertensive population, and patients with pre-existing CVD. Furthermore, hyperuricemia predicts the development of future hypertension [ 11 ].
Table 1. The A-FLIGHT-U acronym: identification of multiple metabolic toxicities and injurious stimuli responsible for reactive oxygen species production (figure 2).
A – Angiotensin II (also induces the PKC-β isoform); amylin (hyperamylinemia) / amyloid toxicity; AGEs/AFEs (advanced glycosylation/fructosylation endproducts); apolipoprotein B; compromised antioxidant reserve; absence of the antioxidant network; aging; ADMA (asymmetrical dimethyl arginine).
F – Free fatty acid toxicity (obesity toxicity triad).
L – Lipotoxicity – hyperlipidemia (obesity toxicity triad).
I – Insulin toxicity (endogenous hyperinsulinemia – hyperproinsulinemia); inflammation toxicity.
G – Glucotoxicity (compounds peripheral insulin resistance); reductive stress; sorbitol/polyol pathway; pseudohypoxia (increased NADH/NAD ratio).
H – Hypertension toxicity; homocysteine toxicity; hs-CRP.
T – Triglyceride toxicity (obesity toxicity triad).
U – Uric acid toxicity: an antioxidant early in the physiological range and a conditional prooxidant late, when elevated, through the paradoxical (antioxidant → prooxidant) URATE REDOX SHUTTLE.
Endothelial cell dysfunction with eNOS uncoupling, decreased eNO, and increased ROS. The vulnerable atherosclerotic plaque milieu is acidic and proinflammatory, with excess metal ions (Fe, Cu) from vasa vasorum rupture and red blood cell plasma membranes due to intraplaque hemorrhage and plaque thrombus formation.
There are certain clinical clustering groups with increased cardiovascular risk which have associated hyperuricemia (table 2 ): non-diabetic patient groups with accelerated atherosclerosis; T2DM patient groups with accelerated atherosclerosis (atheroscleropathy); congestive heart failure patient groups with ischemic cardiomyopathy; metabolic syndrome patient groups (with hyperinsulinemia, hypertension, dyslipidemia, impaired glucose tolerance, and obesity); renal disease patient groups; hypertensive patient groups; African American patient groups; patient groups taking diuretics; and patient groups with excessive alcohol usage. Each of these clustering groups has metabolic mechanisms that may help to explain why SUA may be elevated (table 2 ). In addition to the recurring finding of an elevated tension of oxidative – redox stress and ROS in many of the groups is the importance of the MS and insulin resistance.
Table 2. Hyperuricemia: clinical clusters at cardiovascular risk (groups and abbreviated mechanisms).
Patients with CVD (accelerated atherosclerosis; congestive heart failure) – Increased apoptosis – necrosis of the arterial vessel wall and capillary, resulting in increased purine metabolism and hyperuricemia. Increased oxidative – redox stress. Antioxidant – prooxidant paradox: urate redox shuttle.
Patients with T2DM (accelerated atherosclerosis, atheroscleropathy) – Acting through obesity and insulin resistance. Accelerated atherosclerosis with increased vascular cell apoptosis and inflammatory necrosis, with increased purine metabolism resulting in hyperuricemia and increased oxidative stress through ischemia-reperfusion and xanthine oxidase. Additional reductive stress associated with glucotoxicity and pseudohypoxia. Increased oxidative – redox stress. Antioxidant – prooxidant paradox: urate redox shuttle. Obesity – insulin resistance. Hyperinsulinemia – insulin toxicity.
Metabolic syndrome (figure 1 ) (hyperinsulinemia, hypertension, hyperlipidemia – dyslipidemia, obesity, hyperglycemia) – Leptin may induce hyperuricemia. Insulin increases sodium reabsorption and is tightly linked to urate reabsorption. Increased oxidative – redox stress. Antioxidant – prooxidant paradox: urate redox shuttle.
Men and postmenopausal females – Estrogen is uricosuric.
Renal diseases – Decreases in GFR increase uric acid levels.
Hypertension – Urate reabsorption is increased in the setting of increased renal vascular resistance; microvascular disease predisposes to tissue ischemia, which leads to increased urate generation (excess purine metabolism) and reduced excretion (due to lactate competing with the urate transporter in the proximal tubule). Increased oxidative – redox stress. Antioxidant – prooxidant paradox: urate redox shuttle.
African Americans – Unknown (assumed genetic causes as yet unidentified).
Diuretic use – Volume contraction promotes urate reabsorption.
Alcohol use (in excess) – Increases urate generation and decreases urate excretion.
Uric acid, MS, T2DM, and atheroscleropathy. The importance of hyperuricemia in the clustering phenomenon of the metabolic syndrome was first described by Kylin in 1923, when he described the clustering of three clinical syndromes: hypertension, hyperglycemia, and hyperuricemia [ 16 ]. In 1988, Reaven GM described the important central role of insulin resistance in the seminal Banting lecture, where he described Syndrome X, which has now become known as the metabolic syndrome (MS) and/or the insulin resistance syndrome (IRS) [ 17 ]. Seven decades after the clustering phenomenon was reported by Kylin, in 1993, Reaven GM and Zavaroni I et al. suggested that hyperuricemia be added to the cluster of metabolic and hemodynamic abnormalities associated with insulin resistance and/or hyperinsulinemia of Syndrome X [ 18 ]. The four major players in the MS are hyperinsulinemia, hypertension, hyperlipidemia, and hyperglycemia. Each member of this deadly quartet has been demonstrated to be an independent risk factor for CHD, and they are capable of working together in a synergistic manner to accelerate both non-diabetic atherosclerosis and the atheroscleropathy associated with MS, PD, and T2DM. In a like manner, hyperuricemia, hyperhomocysteinemia, ROS, and highly sensitive C-reactive protein (hsCRP) each play an important role in expanding the original Syndrome X described by Reaven in the atherosclerotic process.
The above quartet does not stand alone but interacts in a synergistic manner, resulting in the progression of accelerated atherosclerosis and arterial vessel wall remodeling along with the original players and the A-FLIGHT-U toxicities (table 1 ). The MS of clinical clustering has been renamed multiple times over the past 16 years, indicating its central importance to cardiovascular disease, and was included in the recent National Cholesterol Educational Program – Adult Treatment Panel III (NCEP ATP III) clinical guidelines in order to assist the clinician in using this important tool to evaluate additional cardiovascular risk [ 16 - 19 ]. Hyperinsulinemia and hyperamylinemia. Insulin, proinsulin, and amylin individually and synergistically activate the renin – angiotensin system (RAS), with a subsequent increase in Ang II. Ang II is the most potent endogenous inducer of NAD(P)H oxidase; the resulting increase in NAD(P)H oxidase activity increases vascular – intimal reactive oxygen species (ROS) and superoxide (O 2 -• ) [ 19 , 20 ]. There are many deleterious effects of hyperinsulinemia in addition to its being responsible for sodium, potassium, water, and urate retention in the proximal kidney (table 3 ) [ 21 ].
Table 3. Deleterious effects of hyperinsulinemia (HI).
1. HI, hyperproinsulinemia, and hyperamylinemia synergistically activate the RAS, with subsequent increases in Ang II, renin, and aldosterone.
2. HI promotes Na + and H 2 O retention, which increases blood volume and pressure. In turn this activates the reabsorption of uric acid, resulting in elevation of SUA. In turn, increased SUA has been shown to increase tubular reabsorption of Na +.
3. HI increases membrane cation transport, increasing intracellular Ca ++, which increases tone and pressure.
4. HI activates the sympathetic nervous system.
5. HI stimulates vSMC proliferation, migration, and remodeling.
6. HI increases the number of AT-1 receptors.
7. HI creates cross talk between the insulin receptor and the AT-1 receptor, resulting in a more profound Ang II effect.
8. HI promotes the PI3-kinase Akt – MAP kinase shunt, impairing the metabolic PI3-kinase – Akt pathway while promoting the MAP kinase remodeling pathway.
9. HI induces Ang II, which promotes the MAP kinase pathway and remodeling.
10. HI induces Ang II, which is the most potent stimulus for production of NAD(P)H oxidase, with reactive oxygen species generation (superoxide production) and resultant vascular oxidative stress.
Hypertension. Hypertension is strongly associated with hyperuricemia. Elevated SUA levels are present in 25% of untreated hypertensive subjects, 50% of subjects taking diuretics, and greater than 75% of patients with malignant hypertension [ 22 ]. Potential mechanisms involved in the association of hyperuricemia and hypertension include the following:
1. Decreased renal blood flow (decreased GFR) stimulating urate reabsorption.
2. Microvascular (capillary) disease resulting in local tissue ischemia.
3. Ischemia, with associated increased lactate production that blocks urate secretion in the proximal tubule, and increased uric acid synthesis due to increased RNA-DNA breakdown and increased purine (adenine and guanine) metabolism, which increases uric acid and ROS through the effect of xanthine oxidase (XO).
4. Ischemia-induced increases in XO production, with increased SUA and ROS.
These associations with ischemia and XO induction may help explain why hyperuricemia is associated with preeclampsia and congestive heart failure.
Because endothelial dysfunction, local oxidant generation, elevated circulating cytokines, and a proinflammatory state are common in patients with cardiovascular disease and hypertension, there is an increased level of oxidative – redox stress within vascular tissues. Oxidative – redox stress results in impaired endothelium-dependent vasodilation with quenching of endothelial nitric oxide (eNO) and allows the endothelium to become a net producer of ROS, specifically superoxide, as the endothelial nitric oxide synthase (eNOS) enzyme uncouples to produce superoxide instead of eNO. A similar mechanism applies in type 2 diabetes and congestive heart failure [ 11 , 19 ]. It is important to note that allopurinol and oxypurinol (XO inhibitors) are capable of reversing the impaired eNO production in both heart failure [ 23 - 25 ] and type 2 diabetes mellitus (T2DM) [ 26 ]. Lin KC et al. were able to demonstrate that blood pressure levels were predictive of cardiovascular disease incidence synergistically with serum uric acid levels [ 27 ]. Two separate laboratories have demonstrated the development of systemic hypertension in a rat model of hyperuricemia induced with a uricase inhibitor (oxonic acid) after several weeks of treatment [ 28 , 29 ]. This hypertension was associated with increased renin and a decrease in neuronal nitric oxide synthase in the juxtaglomerular apparatus. Prevention of this hypertension was accomplished by an ACE inhibitor and, to a lesser extent, by L-arginine. These findings indicate an interacting role of the renin – angiotensin system and the NOS enzyme. The hypertension, neuronal nitric oxide synthase (nNOS), and renin changes were also prevented by maintaining uric acid levels in the normal range with allopurinol or benziodarone (a uricosuric). These models have provided the first challenging evidence that uric acid may have a pathogenic role in the development of hypertension, vascular disease, and renal disease [ 11 ]. Obesity. Obesity has reached epidemic proportions in the past decade and represents one of the confounding factors associated with the MS and T2DM [ 19 , 30 ] (figure 1 ). Hyperuricemia has been associated with increasing body mass index (BMI) in recent studies, an association apparent even in adolescent youth [ 30 - 33 ]. Figure 1 Metabolic syndrome: hyperuricemia. This image focuses on the "H" phenomenon consisting of the four major players in the MS: hyperinsulinemia, hypertension, hyperlipidemia with the lipotoxicity – obesity toxicity triad, and hyperglycemia. These players have frequently been referred to as the "deadly quartet" and the "H" phenomenon. It is important to note the central position of insulin resistance in this image, and also hyperuricemia. Hyperuricemia is flanked by hyperhomocysteinemia to indicate its importance in the MS. Each of these players has its own important role, and this image helps to portray the clustering effect and the synergism contributing to an overall increased oxidative – redox stress to the endothelium of the vasculature. Leptin levels are elevated and associated with insulin resistance in MS and early T2DM. Bedir A et al. have recently discussed the role of leptin as a possible regulator of SUA concentrations in humans and even suggested that leptin might be one of the possible candidates for the missing link between obesity and hyperuricemia [ 34 ].
Furthermore, hypertriglyceridemia and free fatty acids are related to hyperuricemia independently of obesity and central body fat distribution [ 30 , 33 ] (table 1 : (T) triglyceride toxicity and (F) free fatty acid toxicity). Hyperglycemia: impaired glucose tolerance: type 2 diabetes mellitus (T2DM). Glucotoxicity places an additional burden of redox stress on the arterial vessel wall and capillary endothelium. Hyperglycemia induces both an oxidative stress (glucose autoxidation and advanced glycosylation endproduct (AGE) – ROS oxidation products) and a reductive stress through pseudohypoxia, with the accumulation of NADH and NAD(P)H in the vascular intima [ 19 , 35 , 36 ]. This redox stress consumes the naturally occurring local antioxidants such as SOD, GPX, and catalase (table 4 ). Once these local intimal antioxidants are depleted, uric acid can undergo the paradoxical antioxidant – prooxidant switch, or the urate redox shuttle [ 37 , 38 ].
Table 4. Antioxidants: enzymatic – nonenzymatic inactivation of free radicals.
ENZYMATIC ANTIOXIDANTS
Superoxide dismutase (SOD). Reaction catalyzed: [2 O 2 - + 2 H + → (SOD) → H 2 O 2 + O 2 ]. Various isoforms: ecSOD (extracellular); Mn-SOD (mitochondrial); Cu/Zn-SOD (intracellular).
Catalase. Location: peroxisome. Reaction catalyzed: [2 H 2 O 2 → (catalase) → 2 H 2 O + O 2 ].
Glutathione peroxidase. Location: mitochondrion, cytosol, and systemic circulation. Glutathione (GSH, the glutamyl-cysteinyl-glycine tripeptide): the reduced -SH of GSH is oxidized to the disulfide GSSG. Glutathione peroxidase-catalyzed reaction: [2 GSH + H 2 O 2 → GSSG + 2 H 2 O]. Glutathione reductase-catalyzed reaction: [GSSG → 2 GSH], at the expense of [NADH → NAD + ] and/or [NAD(P)H → NAD(P) + ].
ENZYMATIC – NONENZYMATIC INACTIVATION OF FREE RADICALS
Nitric oxide synthase. Location: membrane. Isoforms: eNOS (endothelial), good; nNOS (neuronal), good; iNOS (inducible-inflammatory), bad. O 2 - and nitric oxide (NO) are consumed in this process with the creation of reactive nitrogen species (RNS): O 2 - + NO → ONOO- (peroxynitrite); ONOO- + tyrosine → nitrotyrosine. Nitrotyrosine reflects redox stress and leaves a measurable footprint. NO the good; O 2 • the bad; ONOO - the ugly.*
NONENZYMATIC ANTIOXIDANTS
Vitamins (A, C, and E). Thiols: sulfhydryl (-SH)-containing molecules. Albumin: an antioxidant because it is a thiol-containing macromolecule. Apoproteins: ceruloplasmin and transferrin bind copper and iron in forms which cannot participate in the Fenton reaction. Uric acid: early in the atherosclerotic process, in physiologic ranges, an antioxidant. PARADOX: late, in the elevated range, a prooxidant, with loss of the supporting antioxidants above and in a milieu of oxidative – redox stress within the atherosclerotic intima. In MS, T2DM, and advanced vulnerable atherosclerotic plaques SOD, catalase, and GPX are depleted. The urate redox shuttle. PARADOX: antioxidants may become prooxidant in a certain milieu.
* Beckman JS and Koppenol WH [1996] Nitric oxide, superoxide, and peroxynitrite: the good, the bad, and ugly. Am J Physiol 271(5 Part 1): C1424–C1437.
Homocysteine. A direct relation between homocysteine levels and SUA levels is known to occur in patients with atherosclerosis. Not only do these two track together (possibly reflecting an underlying elevated tension of redox stress), but they may also be synergistic in creating an elevated tension of redox stress, especially in the rupture-prone, vulnerable atherosclerotic plaque with depletion of locally occurring antioxidants [ 39 - 41 ] (figure 1 ).
Atherosclerosis and atheroscleropathy. Non-diabetic atherosclerosis and atheroscleropathy (the accelerated atherosclerosis associated with MS, prediabetes, and T2DM) are each impacted by the elevation of uric acid [ 42 , 43 ]. Prothrombotic milieu. In MS and T2DM there is an observed increase in thrombogenicity, with hyperactive platelets, increased PAI-1 (resulting in impaired fibrinolysis), and increased fibrinogen in the atherosclerotic milieu associated with the dysfunctional endothelial cell. Additionally, the vulnerable atherosclerotic plaque includes increased tissue factor, which increases the potential for thrombus formation when the plaque ruptures and exposes its contents to the lumen [ 19 , 42 , 43 ]. Uric acid as one of the multiple injurious stimuli to the endothelium of the arterial vessel wall and capillary. SUA in the upper 1/3 of the normal physiologic – homeostatic range (> 4 mg/dl), and abnormal elevations (> 6.5 or 7 mg/dl in men and > 6.0 mg/dl in women), definitely should be considered one of the multiple injurious stimuli to the arterial vessel wall and capillary, which may contribute to endothelial dysfunction and arterial – capillary vessel wall remodeling through oxidative – redox stress [ 2 , 3 , 19 ] (figure 2 ). There are multiple injurious stimuli to the endothelium and arterial vessel wall in the accelerated atherosclerosis associated with MS and T2DM (atheroscleropathy) (figure 2 ). It is important to note that redox stress occurs upstream from inflammation by activating the nuclear transcription factor NF-kappaB [ 39 ]. Over time, individually and synergistically, the injurious stimuli of the A-FLIGHT-U acronym (table 1 ) result in the morbid – mortal complications of MS, T2DM, atheroscleropathy, and non-diabetic atherosclerosis. Figure 2 Multiple injurious stimuli to the endothelium in non-diabetic atherosclerosis and atheroscleropathy. This image portrays the anatomical relationship between the endothelium, intima, media, and adventitia. Each of these layers plays an important role in the development of the accelerated atherosclerosis (atheroscleropathy) of the MS, PD, and overt T2DM. Of all the different layers, the endothelium seems to play a critical and central role. It is placed at a critical location and acts as an interface with nutrients and toxic products not only at its luminal surface in musculo-elastic arteries but also at the endothelial – extracellular matrix interface of the interstitium in capillary beds. The intima, sandwiched between the medial muscular layer and the endothelium, is the site of atherosclerosis, intimopathy, and the atheroscleropathy associated with MS, PD, and overt T2DM. There are multiple injurious stimuli to the endothelium, including ROS and hyperuricemia. It is important to note that redox stress occurs upstream from inflammation by activating the nuclear transcription factor NF-kappaB [39]. Over time, individually and synergistically, these injurious stimuli (table 1) result in the morbid – mortal vascular complications of MS, T2DM, atheroscleropathy, and non-diabetic atherosclerosis. Each of these A-FLIGHT-U toxicities may be viewed as an independent risk marker – factor and is known to have a synergistic effect when acting in concert [ 19 , 21 , 39 , 42 , 43 ].
Additionally, low density lipoproteins such as LDL-cholesterol are capable of being modified and retained within the intima through a process of oxidative modification by free radicals, hypochlorous acid, peroxynitrite, and selected oxidative enzymes such as xanthine oxidase, myeloperoxidase, and lipoxygenase (table 5 ) [ 19 , 44 - 50 ].
Table 5. Origin and enzymatic pathways of reactive oxygen species, and their oxidized products ([origin and location] – [ROS / potent oxidants] – [oxidized products]).
Mitochondrial respiratory chain – O 2 •, -OH • – oxidized lipids, proteins, nucleic acids, and autoxidation byproducts.
Inflammatory macrophage, membranous NAD(P)H oxidase – O 2 •, -OH •, H 2 O 2 – advanced lipoxidation endproducts (ALE), ortho (o-)tyrosine, meta (m-)tyrosine.
Granular myeloperoxidase (MPO) – hypochlorous acid (HOCl), acting on Tyr (tyrosine) and NO 2 – 3-chlorotyrosine, di-tyrosine, NO 2 - (nitrotyrosine).
Macrophage nitric oxide synthase, inducible (iNOS), large uncontrolled bursts – ONOO • – NO 2 - (nitrotyrosine).
Endothelial cell nitric oxide synthase (NOS), constitutive (cNOS), small controlled bursts – eNOS → NO; nNOS → NO; NO + O 2 • → ONOO • – NO 2 - (nitrotyrosine).
eNOS-derived NO – NO, the GOOD * – a naturally occurring, locally occurring, chain-breaking antioxidant.
Superoxide O 2 • – the BAD * – toxic effects of ROS on proteins, lipids, nucleic acids.
Peroxynitrite ONOO • – the UGLY * – toxic effects of ROS on proteins, lipids, nucleic acids.
Hypochlorous acid HOCl – the UGLY * – toxic effects of ROS on proteins, lipids, nucleic acids.
Restoration of eNO via the eNOS reaction – antioxidant – prevention of the toxic effects of ROS.
* Beckman JS and Koppenol WH [1996] Nitric oxide, superoxide, and peroxynitrite: the good, the bad, and ugly. Am J Physiol 271(5 Part 1): C1424–C1437.
The simple concept that SUA in patients with CVD, MS, T2DM, hypertension, and renal disease may reflect a compensatory mechanism to counter oxidative stress is intriguing. However, this does not explain why higher SUA levels in patients with these diseases are generally associated with worse outcomes [ 11 ]. An antioxidant – prooxidant urate redox shuttle. Antioxidants may become prooxidants in certain situations [ 51 - 55 ]. Therefore we propose the existence of an antioxidant – prooxidant redox shuttle in the vascular milieu of the atherosclerotic macrovessel intima and the local subendothelial capillary interstitium of the microvessel [ 38 , 51 , 52 ] (figure 3 ). Figure 3 Antioxidant – prooxidant urate redox shuttle. The antioxidant – prooxidant urate redox shuttle is an important concept to understand regarding accelerated atherosclerosis. This shuttle is important in understanding how the antioxidant uric acid becomes prooxidant in this environmental milieu, which results in its damaging role to the endothelium and in arterial vessel wall remodeling, with an elevated tension of oxidative – redox stress (ROS), accelerated atherosclerosis, and arterial vessel wall remodeling. SUA in the early stages of the atherosclerotic process is known to act as an antioxidant and may be one of the strongest determinants of plasma antioxidative capacity [ 53 ]. However, later in the atherosclerotic process, when SUA levels are known to be elevated (in the upper 1/3 of the normal range, > 4 mg/dl, and outside of the normal range, > 6 mg/dl in females and 6.5–7 mg/dl in males), this previous antioxidant (SUA) paradoxically becomes prooxidant.
This antioxidant – prooxidant urate redox shuttle seems to rely heavily on its surrounding environment, such as timing (early or late in the disease process), the location of the tissue and substrate, acidity (acidic, basic, or neutral pH), the surrounding oxidant milieu, depletion of other local antioxidants, and the supply and duration of oxidant substrate and its oxidant enzyme. In the accelerated atherosclerotic – vulnerable plaque, the intima has been shown to be acidic [ 54 ] and depleted of local antioxidants, with an underlying increase in oxidant stress and ROS (table 1 ) (table 5 ), and is associated with uncoupling of the eNOS enzyme, a decrease in the locally produced, naturally occurring antioxidant eNO, and endothelial dysfunction. This process is also occurring within the microvascular bed at the level of the capillary within various affected hypertensive and diabetic end organs [ 19 , 51 , 52 ] (figure 4 ). Figure 4 Uncoupling of the eNOS reaction. It is important to understand the role of endothelial dysfunction in accelerated atherosclerosis, and even more important to understand the role of eNOS enzyme uncoupling and how it relates to MS, PD, T2DM, and non-diabetic atherosclerosis. Oxygen reacts with the eNOS enzyme, in which the tetrahydrobiopterin (BH 4 ) cofactor has coupled the nicotinamide adenine dinucleotide phosphate, reduced (NAD(P)H) enzyme with L-arginine, which is converted to nitric oxide (NO) and L-citrulline. When uncoupling occurs, the NAD(P)H enzyme reacts with O 2 and the endothelial cell becomes a net producer of superoxide (O 2 • ) instead of the protective endothelial NO. This figure demonstrates the additional redox stress placed upon the arterial vessel wall and capillaries in patients with MS, PD, and overt T2DM. Nitric oxide and vitamin C have each been shown to inhibit the prooxidant actions of uric acid during copper-mediated LDL-C oxidation [ 38 , 55 ]. The ANAi acronym. We have devised an acronym, termed ANAi, to better understand the increase in SUA synthesis within the accelerated atherosclerotic plaque: A – apoptosis; N – necrosis; A – acidic atherosclerotic plaque and angiogenesis (both induced by excessive redox stress); i – inflammation and intraplaque hemorrhage, increasing red blood cells and the iron and copper transition metal ions within the plaque. This acronym describes the excess production of purines, the (A) adenine and (G) guanine base pairs, from RNA and DNA breakdown due to apoptosis and necrosis of vascular cells in the vulnerable – accelerated atherosclerotic plaques, allowing SUA to undergo the antioxidant – prooxidant urate redox shuttle (figure 3 ). Reactions involving transition metal ions such as copper and iron are important to the oxidative stress within atherosclerotic plaques. Reactions such as the Fenton and Haber-Weiss reactions, and similar reactions with copper, lead to an elevated tension of oxidative – redox stress. FENTON REACTION: Fe 2+ + H 2 O 2 → Fe 3+ + OH • + OH - ; Fe 3+ + H 2 O 2 → Fe 2+ + OOH • + H +. HABER – WEISS REACTION: H 2 O 2 + O 2 •- → O 2 + OH - + OH • ; H 2 O 2 + OH • → H 2 O + O 2 •- + H +. The hydroxyl radicals can then proceed to undergo further reactions, with the production of ROS through addition reactions, hydrogen abstraction, electron transfer, and radical interactions. Additionally, copper (Cu 3+ – Cu 2+ – Cu 1+ ) metal ions can undergo similar reactions, with the formation of lipid peroxides and ROS.
This makes the leakage of iron and copper from ruptured vasa vasorum very important in accelerating oxidative damage to the vulnerable accelerated atherosclerotic plaques, as well as in providing a milieu to induce the SUA antioxidant – prooxidant switch within these plaques [ 42 ]. These same accelerated – vulnerable plaques now have the increased substrate of SUA through apoptosis and necrosis of vascular cells (endothelial and vascular smooth muscle cells) and of the inflammatory cells (primarily the macrophage and, to a lesser extent, the lymphocyte). Endothelial function and endothelial nitric oxide (eNO). The endothelium is an elegant symphony responsible for the synthesis and secretion of several biologically active molecules. It is responsible for regulation of vascular tone, inflammation, lipid metabolism, vessel growth (angiogenesis – arteriogenesis), arterial vessel wall – capillary subendothelial matrix remodeling, and modulation of coagulation and fibrinolysis. One particular enzyme system seems to act as the maestro: the endothelial nitric oxide synthase (eNOS) enzyme and its omnipotent product, endothelial nitric oxide (eNO) (figure 2 ). The endothelial nitric oxide synthase (eNOS) enzyme reaction is of utmost importance to the normal functioning of the endothelial cell and the intimal interstitium. When this enzyme system uncouples, the endothelium becomes a net producer of superoxide and ROS instead of a net producer of the protective, antioxidant eNO (table 6 ) (figure 4 ).
Table 6. The positive effects of eNOS and eNO.
• Promotes vasodilatation of vascular smooth muscle.
• Counteracts smooth muscle cell proliferation.
• Decreases platelet adhesiveness.
• Decreases adhesiveness of the endothelial layer to monocytic WBCs (the "teflon effect").
• Anti-inflammatory effect.
• Antioxidant effect: it scavenges reactive oxygen species locally and acts as a chain-breaking antioxidant to scavenge ROS.
• Anti-fibrotic effect: when NO is normal or elevated, MMPs are quiescent; conversely, if NO is low, MMPs are elevated and active. MMPs are redox sensitive.
• NO inhibits the prooxidant actions of uric acid during copper-mediated LDL oxidation.
• NO has diverse anti-atherosclerotic actions on the arterial vessel wall, including antioxidant effects, by directly scavenging ROS – RNS and acting as a chain-breaking antioxidant; it also has anti-inflammatory effects.
There are multiple causes of endothelial uncoupling in addition to hyperuricemia and the antioxidant – prooxidant urate redox shuttle: the A-FLIGHT-U toxicities, ROS, T2DM, prediabetes, T1DM, insulin resistance, MS, renin – angiotensin – aldosterone activation, angiotensin II, hypertension, endothelin, dyslipidemia – hyperlipidemia, homocysteine, and asymmetrical dimethyl arginine (ADMA) [ 19 , 39 , 43 ]. Xanthine oxidase – oxidoreductase (XO) has been shown to localize immunohistochemically within atherosclerotic plaques, equipping the endothelial cell with the proper machinery to undergo active purine metabolism at the plasma membrane surface, as well as within the cytoplasm; it is therefore capable of overproducing uric acid while at the same time generating excessive and detrimental ROS [ 56 ] (figures 3 , 4 ). To summarize this section: the healthy endothelium is a net producer of endothelial nitric oxide (eNO); the activated, dysfunctional endothelium is a net producer of superoxide (O 2 - ), as associated with MS, T2DM, and atheroscleropathy [ 43 ].
Uric acid and inflammation. Uric acid and highly sensitive C-reactive protein (hsCRP) each now share a respected inclusion as two of the novel risk markers – risk factors associated with the metabolic syndrome. It is not surprising that these two markers of risk track together within the MS. If there is increased apoptosis and necrosis of vascular cells and inflammatory cells in accelerated – vulnerable atherosclerotic plaques, as noted in the above section, then one would expect to see an increase in the metabolic breakdown products of RNA and DNA, with the purines adenine and guanine metabolized to their end product, uric acid. SUA elevation may indeed be a sensitive marker for underlying vascular inflammation and remodeling within the arterial vessel wall and capillary interstitium. Is it possible that SUA levels could be as predictive as hsCRP, since SUA is a sensitive marker for underlying inflammation and remodeling within the arterial vessel wall and the myocardium [ 57 ]? Should the measurement of SUA be part of the National Cholesterol Educational Program Adult Treatment Panel III and future IV (NCEP ATPIII or the future NCEP ATPIV) clinical guidelines (especially in certain groups, such as females and African Americans)? Uric acid is known to induce the nuclear transcription factor NF-kappaB and monocyte chemoattractant protein-1 (MCP-1) [ 58 ]. Regarding TNF alpha, it has been shown that SUA levels significantly correlate with TNF alpha concentrations in congestive heart failure, and as a result Olexa P et al. conclude that SUA may reflect the severity of systolic dysfunction and the activation of an inflammatory reaction in patients with congestive heart failure [ 59 ]. Additionally, uric acid also stimulates human mononuclear cells to produce interleukin-1 beta, IL-6, and TNF alpha [ 11 ]. Tamakoshi K et al. have shown a statistically significant positive correlation between CRP and body mass index (BMI), total cholesterol, triglycerides, LDL-C, fasting glucose, fasting insulin, uric acid, systolic blood pressure, and diastolic blood pressure, and a significant negative correlation of CRP with HDL-C, in a study of 3692 Japanese men aged 34–69 years. They conclude that a variety of components of the MS are associated with elevated CRP levels in a systemic low-grade inflammatory state [ 60 ]. CRP and IL-6 are important confounders in the relationship between SUA and overall mortality in elderly persons; thus, when evaluating this association, the potential confounding effect of underlying inflammation and other risk factors should be considered [ 61 ]. Uric acid and chronic renal disease. Hyperuricemia can be the consequence of increased uric acid production or decreased excretion. Any cause of decreased glomerular filtration or tubular excretion, or of increased reabsorption, would result in an elevated SUA. Increased SUA has been found to predict the development of renal insufficiency in individuals with normal renal function [ 11 ]. In T2DM, hyperuricemia seems to be associated with MS and with early onset or increased progression to overt nephropathy, whereas hypouricemia was associated with hyperfiltration and a later onset or decreased progression to overt nephropathy [ 62 ]. An elevated SUA could provide advantageous information for the clinician when examining the global picture of T2DM, in order to detect those patients who might gain from more aggressive global risk reduction to delay or prevent the transition to overt nephropathy.
Elevated SUA contributes to endothelial dysfunction and increased oxidative stress within the glomerulus and the tubulo-interstitium, with associated increased remodeling fibrosis of the kidney, and, as noted earlier in this discussion, is pro-atherosclerotic and proinflammatory. This would have a direct effect on the vascular supply, affecting macrovessels, particularly the afferent arterioles. The glomeruli would also be affected through the effect of uric acid on the glomerular endothelium, with endothelial dysfunction due to oxidative – redox stress resulting in glomerular remodeling. SUA's effect on hypertension would have an additional effect on the glomeruli and the tubulo-interstitium, with remodeling changes and progressive deterioration of renal function. Increased ischemia – ischemia reperfusion would activate the xanthine oxidase mechanism and contribute to an increased production of ROS through H 2 O 2 generation and oxidative stress within the renal architecture, with resultant increased remodeling. Hyperuricemia could increase the potential for urate crystal formation and, in addition to elevated levels of soluble uric acid, could induce inflammatory and remodeling changes within the medullary tubulo-interstitium. A recent publication by Hsu SP et al. revealed a J-shaped association between SUA levels and all-cause mortality in hemodialysis patients [ 63 ]. They were able to demonstrate that patients with decreased serum albumin, underlying diabetic nephropathy, and SUA in the lowest and highest quintiles had higher all-cause mortality. It is interesting to note that almost all of the large trials of SUA and cardiovascular events have demonstrated this same J-shaped curve for all-cause mortality, with the nadir of risk occurring in the second quartile [ 11 ]. Johnson RJ et al. have speculated that the increased risk in the lowest quartile reflects a decreased antioxidant activity, while the increased risk at higher levels reflects the role of uric acid in inducing vascular disease and hypertension through the mechanism of the previously discussed antioxidant – prooxidant urate redox shuttle. This would suggest that treatment with xanthine oxidase inhibitors (allopurinol) should strive to bring levels to the 3–4 mg/dl range and not go lower [ 11 ]. Nutritional support for hyperuricemia. While it is not within the scope of this review to examine this important topic in depth, it is important to discuss some prevailing concepts and provide some clinical nutritional guidelines for hyperuricemia (table 8 ).
Table 8. Nutritional guidelines for hyperuricemia.
Obesity – Caloric restriction to induce weight loss in order to decrease the insulin resistance of the MS. Exercise to aid in weight reduction through increased energy expenditure, to increase eNOS and eNO, and to increase HDL-C with its antioxidant – anti-inflammatory effects. Both will result in REDOX STRESS REDUCTION.
Alcohol – Avoidance and/or moderation, especially of beer, with its increased purines from hops and barley. Also improves the liver's antioxidant potential. REDOX STRESS REDUCTION.
Low purine diet (moderation) – Moderation in meats and seafoods, especially shrimp and barbecue ribs ("all you can eat" specials). Vegetables and fruits higher in purines should not be completely avoided, as they provide fiber and naturally occurring antioxidants.
Lists should be provided of the vegetables and fruits that are higher in purines, to allow patients healthier choices. REDOX STRESS REDUCTION.
Fiber – Emphasize the importance of fiber in the diet, as fiber helps to bind excess purines in the gastrointestinal tract. REDOX STRESS REDUCTION.
Moderation is the key element in any diet approaching hyperuricemia. The nutritional "gold standard" for the treatment of hyperuricemia has been "the low purine diet". This traditional diet has recently come into question, as it may limit the intake of high-purine vegetables and fruits. Vegetables and fruits are important for the fiber they supply in addition to naturally occurring antioxidants. Recently, of greater importance is controlling obesity through generalized caloric restriction and increased exercise, to combat the overnutrition and underexercise of our modern-day society, as well as controlling the consumption of alcohol [ 64 ]. Nutritional support by the nutritionist and the diabetic educator (an integral part of the health care team) is of utmost importance when dealing with the metabolic syndrome, T2DM, and patients afflicted with cardiovascular atherosclerosis, in order to obtain global risk reduction, because we are what we eat. Conclusion From a clinical standpoint, hyperuricemia should alert the clinician to an overall increased risk of cardiovascular disease, especially in those patients with an increased risk of cardiovascular events. Hyperuricemia should therefore be a "red flag" to the clinician to utilize a team effort in achieving an overall approach to obtain a global risk reduction program through the use of the RAAS acronym (table 7 ).
Table 7. The RAAS acronym: GLOBAL RISK REDUCTION.
R – Reductase inhibitors (HMG-CoA). Decreasing modified LDL-cholesterol, i.e., oxidized, acetylated LDL-cholesterol. Decreasing triglycerides and increasing HDL-cholesterol. Improving endothelial cell dysfunction. Restoring the abnormal lipoprotein fractions. Thus, decreasing the redox and oxidative stress to the arterial vessel wall and myocardium. Redox stress reduction.
A – Ang II inhibition or receptor blockade: ACEi (-prils); ARBs (-sartans). Both inhibit the effect of angiotensin II locally as well as systemically, affecting hemodynamic stress through their antihypertensive effect as well as the deleterious effects of angiotensin II on cells at the local level (injurious stimuli), decreasing the stimulus for O 2 • production. Decreasing the A-FLIGHT toxicities. Positive effects on microalbuminuria and delaying the progression to end-stage renal disease. Plus the direct – indirect antioxidant effect within the arterial vessel wall and capillary. Antioxidant effects. Aspirin: antiplatelet, anti-inflammatory effect on the diabetic hyperactive platelet. Adrenergic (non-selective) blockade, in addition to its blockade of the prorenin → renin conversion. Amlodipine – felodipine, with their calcium channel blocking antihypertensive effect, in addition to their direct antioxidant effects. Redox stress reduction.
A – Aggressive control of diabetes to an HbA1c of less than 7. This usually requires combination therapy with the use of insulin secretagogues, insulin sensitizers (PPAR-gamma agonists), biguanides, alpha-glucosidase inhibitors, and ultimately exogenous insulin. Decreasing modified LDL cholesterol, i.e., glycated-glycoxidated LDL cholesterol. Improving endothelial cell dysfunction. Also decreasing glucotoxicity and the oxidative – redox stress to the intima and pancreatic islet.
Aggressive control of blood pressure, which usually requires combination therapy, including thiazide diuretics, to attain JNC 7 guidelines. Aggressive control of homocysteine with folic acid, with its associated additional positive effect of re-coupling the eNOS enzyme reaction by restoring the activity of the BH 4 cofactor to run the eNOS reaction via a folate shuttle mechanism and once again produce eNO. Aggressive control of uric acid levels with xanthine oxidase inhibitors (allopurinol and oxypurinol) should be strongly considered in view of the prevailing literature in order to achieve more complete: Global Risk Reduction. Redox stress reduction
S Statins. Improving plaque stability (pleiotropic effects) independent of cholesterol lowering. Improving endothelial cell dysfunction. Moreover, the direct/indirect antioxidant and anti-inflammatory effects within the islet and the arterial vessel wall promote stabilization of the unstable, vulnerable islet and arterial vessel wall. Style: lifestyle modification (weight loss, exercise, and changed eating habits). Stop Smoking. Redox stress reduction
SUA may or may not be an independent risk factor, especially since its linkage to other risk factors is so strong; however, there is not much controversy regarding its role as a marker of risk, or that it is clinically significant and relevant. Regarding the MS and epidemiologic evaluations: a multivariate model could well eliminate hyperuricemia as an independent risk factor even if it were contributing to the overall phenotypic risk of the syndrome. Additionally, we must remember that it was Reaven who, in 1993, called for the inclusion of hyperuricemia in Syndrome X, which we now call the MS or insulin resistance syndrome (IRS) [ 18 ]. A quote by Johnson RJ and Tuttle KR is appropriate for the concluding remarks: "The bottom line is that measuring uric acid is a useful test for the clinician, as it carries important prognostic information. An elevation of uric acid is associated with an increased risk for cardiovascular disease and mortality, especially in women" [ 64 ]. Abbreviations Serum uric acid (SUA); cardiovascular disease (CVD); coronary heart disease (CHD); endothelial nitric oxide (eNO); endothelial nitric oxide synthase (eNOS); reactive oxygen species (ROS); metabolic syndrome (MS); insulin resistance syndrome (IRS); reduced nicotinamide adenine dinucleotide phosphate oxidase (NAD(P)H oxidase); superoxide (O 2 -• ); xanthine oxidase (XO); type 2 diabetes mellitus (T2DM); angiotensin converting enzyme (ACE); renin-angiotensin-aldosterone system (RAAS); advanced glycosylation endproducts (AGE); superoxide dismutase (SOD); glutathione peroxidase (GPX); plasminogen activator inhibitor (PAI-1); angiotensin II (AngII); low density lipoprotein cholesterol (LDL-C); asymmetrical dimethyl arginine (ADMA); highly sensitive C-reactive protein (hsCRP); National Cholesterol Education Program Adult Treatment Panel III (NCEP ATPIII); nuclear transcription factor (NF-kappaB); monocyte chemoattractant protein-1 (MCP-1); tumor necrosis factor alpha (TNF alpha); interleukin 1 beta (IL-1beta); interleukin 6 (IL-6); body mass index (BMI); high density lipoprotein (HDL); hydrogen peroxide (H 2 O 2 ); free fatty acids (FFA). Competing interests The authors declare that they have no competing interests. Authors' contributions MRH and SCT envisioned, wrote and edited jointly. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC529248.xml |
521172 | Protein Thiol Modifications Visualized In Vivo | Thiol-disulfide interconversions play a crucial role in the chemistry of biological systems. They participate in the major systems that control the cellular redox potential and prevent oxidative damage. In addition, thiol-disulfide exchange reactions serve as molecular switches in a growing number of redox-regulated proteins. We developed a differential thiol-trapping technique combined with two-dimensional gel analysis, which in combination with genetic studies, allowed us to obtain a snapshot of the in vivo thiol status of cellular proteins. We determined the redox potential of protein thiols in vivo, identified and dissected the in vivo substrate proteins of the major cellular thiol-disulfide oxidoreductases, and discovered proteins that undergo thiol modifications during oxidative stress. Under normal growth conditions most cytosolic proteins had reduced cysteines, confirming existing dogmas. Among the few partly oxidized cytosolic proteins that we detected were proteins that are known to form disulfide bond intermediates transiently during their catalytic cycle (e.g., dihydrolipoyl transacetylase and lipoamide dehydrogenase). Most proteins with highly oxidized thiols were periplasmic proteins and were found to be in vivo substrates of the disulfide-bond-forming protein DsbA. We discovered a substantial number of redox-sensitive cytoplasmic proteins, whose thiol groups were significantly oxidized in strains lacking thioredoxin A. These included detoxifying enzymes as well as many metabolic enzymes with active-site cysteines that were not known to be substrates for thioredoxin. H 2 O 2 -induced oxidative stress resulted in the specific oxidation of thiols of proteins involved in detoxification of H 2 O 2 and of enzymes of cofactor and amino acid biosynthesis pathways such as thiolperoxidase, GTP-cyclohydrolase I, and the cobalamin-independent methionine synthase MetE. Remarkably, a number of these proteins were previously or are now shown to be redox regulated. | Introduction Cysteine is one of the most rarely used amino acids in the proteins of most organisms studied so far ( Pe'er et al. 2004 ). Therefore, when highly conserved in proteins, it usually plays crucial roles in the structure, function, or regulation of the protein. This is due to the ability of thiol groups to stabilize protein structures by forming covalent disulfide bonds and to coordinate metal ions, as well as due to their high reactivity and redox properties. Proteins in the extracellular space and oxidizing cell compartments (e.g., endoplasmic reticulum and periplasm) often rely on disulfide bonds to support their correct folding and maintain their structural stability ( Bardwell 1994 ). Cytosolic proteins, on the other hand, are present within the reducing environment of the cytosol. Here, cysteine residues are reduced and often found in binding pockets of substrates, coenzymes, or metal cofactors (e.g., in zinc binding dehydrogenases), or are present in the active site of enzymes, where they directly participate in the catalytic reaction (e.g., in cysteine proteases). Moreover, cysteine residues are also often involved in redox reactions, where transfer of electrons proceeds via thiol-disulfide exchange reactions. Importantly, the activity of all these cytosolic enzymes usually depends on the preservation of the reduced state of the cysteine residue(s) involved. 
The very same properties that make cysteine the perfect amino acid for redox reactions, metal coordination and thiol-disulfide interchanges, also make cysteines extremely vulnerable to oxidation by reactive oxygen species (ROS). ROS arise transiently during normal metabolism as toxic byproducts of respiration and have been shown to accumulate under conditions of oxidative stress. Over the past few years, an increasing number of thiol-containing proteins has been identified that use ROS as a mediator to quickly regulate their protein activity ( Linke and Jakob 2003 ). This new class of redox-regulated proteins includes the molecular chaperone Hsp33, which we discovered in 1999 ( Jakob et al. 1999 ), metabolic enzymes (e.g., glyceraldehyde-3-phosphate dehydrogenase [GapDH]) ( Cotgreave et al. 2002 ), prokaryotic and eukaryotic transcription factors (OxyR and Yap1) ( Rainwater et al. 1995 ; Zheng et al. 1998 ; Kang et al. 1999 ; Kuge et al. 2001 ; Kim et al. 2002 ), kinases (protein kinase C and Raf) ( Gopalakrishna and Jaken 2000 ; Hoyos et al. 2002 ), and phosphatases (PTP1B and PTEN) ( Barrett et al. 1999 ; Leslie et al. 2003 ). What all these proteins have in common are highly reactive cysteine residues that are quickly and reversibly modified upon exposure to oxidative stress. These modifications include disulfide bond formation (e.g., in Hsp33, RsrA, and OxyR), nitrosylation (e.g., in Ras and OxyR), glutathionylation (e.g., in PTP1B, GapDH, and OxyR), or sulfenic acid formation (e.g., in PTP1B and OxyR). These modifications cause significant conformational changes and either lead to the activation (e.g., in Hsp33, OxyR, PKC, and Raf-kinase) or inactivation (e.g., in p53 and PTEN) of the respective protein's function. Upon return to non–oxidative stress conditions, cellular reductants such as the small molecule glutathione as well as cellular reductases like thioredoxin and glutaredoxin rapidly reduce the cysteine modifications and restore the original protein activity. These findings suggested that basically any protein with reactive cysteine residue(s) has the potential of being redox regulated. Many important regulatory proteins such as zinc finger proteins contain clusters of conserved cysteines and are, therefore, attractive targets for redox regulation. Over the past few years, several proteomic strategies have been developed to identify proteins that undergo thiol modifications in vivo. These methods addressed very specific questions and were used to either identify disulfide-bonded or glutathionylated proteins under oxidative stress conditions in vivo ( Fratelli et al. 2002 ; Cumming et al. 2004 ), thioredoxin-targeted proteins in chloroplasts and Escherichia coli ( Motohashi et al. 2001 ; Yano et al. 2001 ; Kumar et al. 2004 ), or target proteins of periplasmic thiol-disulfide oxidoreductases ( Hiniker and Bardwell 2004 ; Kadokura et al. 2004 ). None of these methods, however, generated a general and global overview of thiol-modified proteins in vivo. We have now invented a differential thiol-trapping technique combined with two-dimensional (2D) gel analysis to monitor the in vivo thiol status of cellular proteins upon variations in the redox homeostasis of the cells. To test our method, we analyzed the thiol-disulfide status of proteins in aerobically growing E. coli cells and confirmed that the majority of proteins with thiol modifications are localized to the oxidizing environment of the periplasm. 
We found that the periplasmic thiol-disulfide oxidoreductase DsbA is responsible for these protein thiol modifications and identified novel DsbA substrate proteins. We then used our method to visualize directly the extent of ROS-induced thiol oxidation during aerobic growth and identified a number of cytosolic proteins with reactive cysteine residues that require functional thioredoxin to maintain their reduced thiol state. Finally, we analyzed the thiol status of cellular proteins in cells that were exposed to H 2 O 2 -induced oxidative stress and discovered a select group of new potentially redox-regulated proteins in vivo. Results A Differential Trapping Technique to Detect Oxidatively Modified Proteins In Vivo We have developed an innovative technique that allows us to monitor globally the in vivo thiol status of cellular proteins upon variations in the redox homeostasis of the cells. This method is based on the sequential reaction of two variants of the thiol-modifying reagent iodoacetamide (IAM) with accessible cysteine residues in proteins ( Figure 1 ). Wild-type or mutant cells that were grown exponentially in glucose-minimal medium at 37 °C were exposed to the desired oxidative stress treatment. Then, cells were treated with trichloroacetic acid (TCA) to rapidly quench thiol-disulfide exchange reactions. All accessible thiol groups were then alkylated with cold, unlabeled IAM under denaturing conditions. In the next step, all reversible thiol modifications that developed during normal growth or oxidative stress treatment (e.g., disulfide bonds and sulfenic acids) were reduced with DTT, and the newly accessible thiol groups were modified with 14 C-labeled IAM. Therefore, radioactivity was specifically incorporated into proteins that originally contained thiol modifications. High ratios of 14 C activity/protein are predicted for proteins with thiol modifications while low ratios of 14 C activity/protein are predicted for proteins whose thiol groups are not significantly modified in vivo ( Figure 1 ). Figure 1 Schematic Overview of Our Differential Thiol-Trapping Technique Under normal growth conditions (top), a hypothetical cytoplasmic protein present within the complex mixture of the crude whole-cell extract is fully reduced. Upon incubation in TCA, all thiol-disulfide exchange reactions are quenched and the cells lyse. In the first thiol-trapping step, the protein is denatured and incubated with IAM. Accessible thiol groups are quickly carbamidomethylated (CAM) and blocked for the subsequent reduction/alkylation steps. After TCA precipitation and washing, DTT is added to reduce oxidized cysteines, and 14 C-labeled IAM is used to modify potentially newly released, accessible cysteines. Under oxidative stress conditions (bottom), the cysteine residues become modified (e.g., sulfenic acid and disulfide bonds). In the first trapping step, IAM cannot attack the oxidized disulfide bond. Only after reduction with DTT are the cysteines accessible to the 14 C-labeled IAM. Therefore, the 14 C radioactivity correlates with the degree of thiol modification in the protein. The differentially trapped protein species are chemically identical regardless of their original thiol-disulfide status. This ensures their identical migration behavior on 2D gel electrophoresis. The differentially trapped protein extract was then separated by 2D gel electrophoresis.
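To make the logic of the two trapping steps concrete, the short sketch below encodes it as arithmetic. It is purely illustrative: labeling is assumed to be complete and stoichiometric, and the two example proteins and their oxidation fractions are invented, not measurements from this study.

```python
def c14_per_molecule(n_cys, frac_oxidized, reverse=False):
    """Expected number of 14C-IAM labels per protein molecule.

    Regular trapping: free thiols are blocked with cold IAM first,
    so only originally modified (then DTT-reduced) cysteines carry 14C.
    Reverse trapping: free thiols are labeled with 14C-IAM first.
    """
    oxidized = n_cys * frac_oxidized
    return (n_cys - oxidized) if reverse else oxidized

# Hypothetical proteins: (cysteine count, fraction of thiols modified in vivo)
proteins = {"cytosolic, reduced": (4, 0.05),
            "periplasmic, disulfide-bonded": (2, 0.95)}

for name, (n, f) in proteins.items():
    print(f"{name:30s} regular: {c14_per_molecule(n, f):4.2f}  "
          f"reverse: {c14_per_molecule(n, f, reverse=True):4.2f}")
```

In this toy model the oxidized periplasmic protein carries almost all of its label under regular trapping and almost none under reverse trapping, which is exactly the contrast the method exploits.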
Importantly, due to this specific trapping technique, all accessible thiol groups of each protein were carbamidomethylated to an extent that is independent of the original thiol-disulfide status of the protein. This ensures their identical migration behavior on 2D gels. The 2D gels were stained with colloidal Coomassie blue to get a measure of the total protein content. The 14 C radioactivity, which correlates to the degree of thiol modification in the individual spots, was determined by exposing the dried gels to phosphor screens. Then, the 14 C activity/protein ratio was visualized and quantified. The Majority of Oxidized Proteins Are Present in the Periplasm of E. coli In order to test our method, we analyzed the steady-state thiol-disulfide status of cellular proteins in wild-type E. coli. E. coli strains were grown in minimal medium to mid-logarithmic phase, and the cells were harvested. The cysteines were thiol trapped using our differential thiol-trapping technique and separated on 2D gels. To analyze the extent of thiol modification and the distribution of thiol-modified proteins in an unbiased way, we focused first on the 100 most abundant proteins on our colloidal Coomassie blue–stained 2D gels. We set the total spot intensity of all 100 protein spots on the colloidal blue gel to 100% and determined the relative spot intensity for each protein. We then quantified the 100 corresponding 14 C activity spots on the phosphor image, set their combined total spot intensities to 100%, and determined again the relative spot intensity for each protein. Finally, we determined the 14 C activity/protein ratios for each of the 100 proteins. Low 14 C activity/protein ratios were predicted for non-thiol-modified proteins presumably present in the reducing environment of the cytoplasm, while high ratios of 14 C activity/protein were predicted for proteins with cellular thiol modifications such as those present in the oxidizing milieu of the E. coli periplasm. As shown in Figure 2 , the majority of proteins (91 proteins) had a 14 C activity/protein ratio below 2.0, while nine proteins showed a higher than 2.0-fold ratio. Figure 2 Overall Thiol-Disulfide State of Cellular Proteins in Exponentially Growing E. coli Wild-Type Cells (A) Colored overlay of the Coomassie blue–stained 2D gel (shown in green) and the phosphor image (shown in red) of a differentially trapped protein extract from exponentially growing E. coli wild-type cells. Proteins with a high ratio of 14 C activity/protein appear red; proteins with a low ratio appear green. Protein spots with a ratio of 14 C activity/protein greater than 2.0 are indicated by an arrow, while circles label abundant proteins without cysteines. (B) Distribution of the 14 C activity/protein ratio in the 100 most abundant protein spots found on a Coomassie blue–stained gel. Bars representing spots with a ratio higher than 2.0 are colored red and are labeled with the name of the protein(s) they represent. (C) Distribution of the 14 C activity/protein ratio in the 100 most intense protein spots found on the phosphor images. Bars representing spots with a ratio higher than 2.0 are colored red and are labeled with the name of the protein(s) they represent. (D) Regular and reverse trapping of exponentially growing E. coli wild-type cells. Details of colored overlays of stained protein gels (shown in green) and phosphor images (shown in red) of cell extracts upon regular trapping (top) and reverse trapping (bottom). 
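The ratio computation just described amounts to a small pipeline: normalize the spot intensities of each image over the selected spots, divide the normalized 14 C intensity by the normalized protein intensity, and flag ratios above 2.0. A minimal sketch under those assumptions, with invented spot intensities standing in for the real spot tables:

```python
import numpy as np

def activity_protein_ratios(c14, protein):
    """Normalize each channel over the selected spots, then form ratios."""
    c14 = np.asarray(c14, float)
    protein = np.asarray(protein, float)
    c14_rel = 100.0 * c14 / c14.sum()           # relative 14C spot intensity (%)
    prot_rel = 100.0 * protein / protein.sum()  # relative protein spot intensity (%)
    return c14_rel / prot_rel

# Toy intensities for five spots (arbitrary units, not real data)
ratios = activity_protein_ratios(c14=[50, 10, 5, 400, 30],
                                 protein=[500, 80, 300, 90, 200])
for i, r in enumerate(ratios):
    flag = "thiol-modified?" if r > 2.0 else ""
    print(f"spot {i}: ratio = {r:4.1f} {flag}")
```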
Mass spectrometric identification of a large number of these proteins suggested that our differential thiol trapping is indeed very selective for proteins with thiol modifications. From the nine protein spots with a 14 C activity/protein ratio greater than 2.0, six are known periplasmic proteins such as the periplasmic oligopeptide permease (OppA) and glycerol-uptake protein (UgpB ), as well as proteins associated with the outer membrane like the outer membrane porin protein A (OmpA) ( Figure 2 A and 2 B). Importantly, all of these proteins harbor at least two cysteines, suggesting that they may form structural disulfide bonds in the oxidizing environment of the periplasm. Three potentially cytoplasmic proteins were found to have high 14 C activity/protein ratios: dihydrolipoyl transacetylase (AceF), lipoamide dehydrogenase (Lpd), and the stringent starvation protein (SspA) ( Figure 2 A and 2 B). AceF and Lpd correspond to the enzymes E2 and E3 of the pyruvate dehydrogenase complex. Lpd contains a reactive cysteine pair in the active site that undergoes disulfide bond formation during the regeneration of the disulfide bond of the covalently bound cofactor lipoamide of AceF ( Massey and Veeger 1960 ). Detection of these proteins in our analysis is an excellent indication that we obtained an in vivo snapshot of proteins that use disulfide bond formation in their catalytic cycle and shows that the method can detect the redox state of covalently bound thiol-containing cofactors as well. SspA contains only one cysteine residue. This cysteine residue might be glutathionylated in vivo under steady-state conditions. Alternatively, however, SspA might co-migrate with the low-abundance periplasmic protein arginine binding protein ArtI that harbors two highly conserved cysteines and migrates at a very similar position on periplasmic extract gels (A. Hiniker, personal communication). The majority of cytoplasmic proteins that contain numerous cysteines, on the other hand, revealed a 14 C activity/protein ratio below 2.0, including the very abundant elongation factor EF-Tu (Tu elongation factor [TufB]-IF1) with three cysteine residues ( 14 C activity/protein ratio = 1.3 ± 0.5) and isocitrate dehydrogenase with seven cysteine residues ( 14 C activity/protein = 1.2 ± 0.5). Protein spots that showed extremely low 14 C activity/protein ratios (less than 0.2) included the very abundant outer membrane porin protein E as well as the cytoplasmic proteins P-specific transport protein and trigger factor , all of which do not contain any cysteine residues ( Figure 2 A). These results indicated that under our labeling conditions, IAM was quite specific for cysteine residues, and labeling of non-thiol-containing amino acids could be neglected. Because many proteins with thiol modifications are low-abundance periplasmic proteins, we performed a similar analysis but focused now on the 100 most heavily 14 C-modified proteins rather than on the 100 most abundant proteins ( Figure 2 C). A number of proteins that showed a high 14 C activity/protein ratio were periplasmic proteins that we had previously identified. In addition, however, the low-abundance periplasmic proteins periplasmic histidine binding protein (HisJ), ArtJ, DsbA, and the periplasmic dipeptide binding protein (DppA) were identified as proteins with a very high degree of thiol modification ( 14 C activity/protein ratio > 2.0) ( Figure 2 A and 2 C). All four proteins are known to be localized to the periplasm of E. 
coli and again contain at least one pair of cysteine residues. Both DppA and HisJ have recently been identified as substrates of the disulfide oxidase DsbA ( Hiniker and Bardwell 2004 ), confirming that they do contain disulfide bonds in vivo. To obtain an idea about the sensitivity of our method, we closely analyzed DsbA, a protein that contains two cysteine residues and that has been found to be fully oxidized in wild-type E. coli cells using our technique as well as conventional thiol-trapping methods ( Kishigami et al. 1995 ). Although DsbA was only a faint spot on protein gels (318 spots showed a stronger signal), it was among the most abundant spots on the phosphor image (the 27th most intense spot). This showed that even in a low-abundance protein such as DsbA, the presence of only two thiol-modified cysteines is fully sufficient to create a clearly detectable 14 C signal. Regular and Reverse Thiol Trapping—Determining Redox States In Situ Quantitative analysis of the ratio of oxidized and reduced protein species in vivo can be used to determine the redox state of the protein in the cell, provided that the oxidation mechanism of the protein is known ( Watson and Jones 2003 ). We therefore considered that if our regular thiol trapping was completely alkylating all reactive cysteine residues, the 14 C activity/protein ratio of a defined protein spot should correspond to the amount of oxidized protein. To then visualize and quantify the amount of reduced protein species in the same protein spot, we decided to perform a reverse trapping in parallel. In the reverse trapping, all free, accessible cysteines were immediately alkylated with radioactive IAM, while oxidatively modified cysteines were alkylated with cold IAM after their reduction. Therefore, the 14 C activity/protein ratio should now correspond to the amount of reduced protein in the respective protein spot. We confirmed that proteins that had very high ratios of 14 C activity/protein, such as oxidized OmpA IF1 and IF2, in our regular trapping had very low ratios of 14 C activity/protein (0.2 ± 0.08) in our reverse-trapped samples ( Figure 2 D). A mostly reduced protein such as succinyl-CoA synthetase, which had a low ratio of 0.7 ± 0.2 under regular trapping conditions, on the other hand showed a very high ratio of 2.8 ± 0.7 under reverse-trapping conditions. Based on these results, we considered that the comparison of the 14 C activity/protein ratio of defined protein spots in regular and reverse-trapped samples could give us the ratio of oxidized and reduced protein under steady-state conditions, if the reaction mechanism of the protein oxidation was known. This should then allow us to determine the half-cell potential or redox state of the cellular proteins in vivo. To test our approach, we decided to calculate the redox state of Lpd, a cytosolic enzyme that we identified in our screen to be partly oxidized under steady-state conditions. Oxidative decarboxylation of pyruvate is accompanied by the reduction of lipoamide, the prosthetic group of AceF. To regenerate this complex, the dithiol of dihydrolipoamide is re-oxidized by the active-site disulfide bond of Lpd, which itself donates its electrons to the prosthetic flavin adenine dinucleotide and ultimately to nicotinamide adenine dinucleotide. Because the standard redox potentials of Lpd and the dihydrolipoic acid/lipoic acid redox pair of AceF are known ( Table 1 ) ( Maeda-Yorita et al. 1991 ; Nelson et al.
2000 ), we determined the 14 C activity/protein ratio of Lpd under regular and reverse-trapping conditions and determined its redox state in vivo. Table 1 Standard Redox Potentials, Redox Potentials, and 14 C Activity/Protein Ratios in the Reverse and Regular Trapping Experiments of Selected Redox Pairs
a Maeda-Yorita et al. (1991)
b Determined from the ratios obtained in the trapping experiment and the standard redox potential
c Nelson et al. (2000)
d Determined from the ratios obtained in the trapping experiment, assuming that AhpC is in equilibrium with the cellular redox potential as represented by the GSH/GSSG redox pair
e Calculated from the concentrations of GSH and GSSG in E. coli DHB4 ( Aslund et al. 1999 ), assuming that AhpC is in equilibrium with the cellular redox potential as represented by the GSH/GSSG redox pair
f Based on the formation of intermolecular dimers, a ratio of [AhpC red ] 2 /[AhpC ox ] was used ( Ellis and Poole 1997 )
The in vivo redox state of Lpd was found to be –0.261 ± 0.009 V (pH 7.0). This was in excellent agreement with the proposed flow of electrons through this multienzyme system, showing that Lpd is able to oxidize the dihydrolipoic acid in AceF ( E 0 = −0.290 V). A reliable direct measurement of the redox potential of the dihydrolipoic acid/lipoic acid redox couple in AceF was not possible because of the very low signal that we found for AceF in the reverse-trapping experiment. This indicated that AceF is mostly in its thiol-oxidized state. This probably reflects the fact that the prosthetic group is only accessible to Lpd in its reduced state ( Nelson et al. 2000 ), which presumably allows AceF to keep its prosthetic group oxidized even within the very reducing environment of the cytosol. The redox potential that we determined for Lpd also suggested that the components of this multienzyme complex are neither in equilibrium with one another nor with the overall GSH/GSSG redox potential in the cell (−0.24 V at pH 7.0, an intracellular concentration of 5 mM GSH, and a ratio of GSH/GSSG of 223:1 in E. coli DHB4 [ Aslund et al. 1999 ]). These results showed that the calculation of (standard) redox potentials for proteins in vivo using our differential trapping technique was possible when sufficient amounts of reduced and oxidized species could be detected and when their ratios could be reliably quantified. This technique thus proved very useful for estimating the direction of electron flow in metabolic pathways in vivo. We also calculated the standard redox potential of alkylhydroperoxide reductase small subunit (AhpC), a protein that uses disulfide bond formation to detoxify alkylhydroperoxides, assuming that under steady-state conditions it is in equilibrium with the overall cellular redox potential. We calculated a standard redox potential for AhpC of −0.257 ± 0.009 V. This agrees well with studies in Helicobacter pylori ( Baker et al. 2001 ) and our findings (see below) that suggested that thioredoxin ( E 0 = −0.270 V) ( Krause et al. 1991 ) might play a direct role in the catalytic cycle of AhpC.
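The arithmetic behind these potentials is the standard Nernst equation for a two-electron dithiol/disulfide couple; the only study-specific step is reading the reduced and oxidized fractions from the reverse- and regular-trapping 14 C activity/protein ratios, which presumes complete alkylation in both trapping steps:

```latex
% In vivo redox potential of a protein dithiol/disulfide couple (n = 2, 25 degrees C)
E = E^{0\prime} - \frac{RT}{nF}\,\ln\frac{[\mathrm{red}]}{[\mathrm{ox}]}
  \approx E^{0\prime} - \frac{0.059\ \mathrm{V}}{2}\,
    \log_{10}\frac{r_{\mathrm{reverse}}}{r_{\mathrm{regular}}}
```

For AhpC, which forms intermolecular disulfide-bonded dimers, footnote f of Table 1 replaces the simple concentration ratio with [AhpC red ] 2 /[AhpC ox ].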
Identification of the In Vivo Substrate Proteins of DsbA We found that our method specifically and reliably detected proteins with thiol modifications in vivo. This suggested to us that our method should also be an excellent tool to determine the in vivo substrate specificity of cellular thiol-disulfide oxidoreductases. We therefore decided to first compare the thiol-disulfide status of proteins in wild-type E. coli and strains that lack DsbA, the enzyme that is responsible for disulfide bond formation in the periplasm of E. coli . Previously, only a few DsbA substrates had been identified. The studies that addressed this question in the past relied either on the formation of covalent intermediates between an active-site cysteine mutant of DsbA and substrate proteins ( Kadokura et al. 2004 ), or on the instability and premature degradation of periplasmic proteins that are no longer stabilized by disulfide bonds because of the absence of DsbA ( Hiniker and Bardwell 2004 ). We grew wild-type E. coli cells and cells lacking the chromosomal copy of DsbA ( dsbA ::kan) to mid-logarithmic phase, harvested the cells, and differentially thiol-trapped the cysteines. As shown in Figure 3 and Table 2, using our new thiol-trapping technique, we identified a number of proteins that showed significantly less or no thiol modification in dsbA deletion strains than in wild-type strains. Among the proteins that we selected for mass spectrometric analysis were known DsbA substrate proteins (e.g., OmpA, DppA, organic solvent tolerance protein [Imp], and HisJ) as well as a number of proteins that have not yet been associated with DsbA (e.g., ArtJ and UgpB). Because all of these proteins are periplasmic and have at least one conserved pair of cysteines, it appears very likely that these proteins are also substrate proteins of DsbA. Figure 3 Identification of In Vivo Substrate Proteins of the Periplasmic Disulfide Bond Oxidase DsbA (A) Colored overlay of the stained 2D gel (shown in green) and the phosphor image (shown in red) of differentially trapped protein extract from exponentially growing E. coli wild-type cells. Proteins with a high 14 C activity/protein ratio appear red, while proteins with a low ratio appear green. Proteins that were found to have a significantly lower ratio of 14 C activity/protein in the dsbA ::kan strain (B and C) are labeled. (B) Overlay of the stained 2D gel (shown in green) and the phosphor image (shown in red) of a differentially trapped protein extract from exponentially growing dsbA ::kan cells. Proteins that were found to have a significantly lower 14 C activity/protein ratio in dsbA ::kan cells than in the wild-type strain (A) are marked with an arrowhead. A circle marks the position of DsbA on the wild-type gel. (C) Overlay of the stained 2D gel (shown in green) and the phosphor image (shown in red) of a differentially trapped protein extract from E. coli dsbA ::kan cells growing under oxygen limitation. Arrowheads label proteins that were found to have a significantly lower 14 C activity/protein ratio than the wild-type strain grown under oxygen limitation. A circle marks the position of DsbA on the wild-type gel. Table 2 Identification of the In Vivo Substrates of DsbA
a The activity/protein ratio of the given protein spot on gels from differentially trapped extracts from E. coli DHB4 ( wild-type) was divided by the corresponding activity/protein ratio on gels from differentially trapped extracts from E. coli LL029 ( dsbA ::kan)
b The p -value according to the TTEST function of Excel 2000 (Microsoft, Redmond, Washington, United States)
c The activity/protein ratio of the given protein spot on gels from differentially trapped extracts from E. coli DHB4 ( wild-type) grown under oxygen limitation was divided by the corresponding ratio on gels from differentially trapped extracts from E.
coli LL029 ( dsbA ::kan) grown under oxygen limitation
d Localization according to Swiss-Prot ( http://us.expasy.org/sprot/ ) ( Boeckmann et al. 2003 ) or PSORTB ( http://www.psort.org/psortb/index.html ) ( Gardy et al. 2003 ). CP, cytoplasmic; IM, inner membrane; OM, outer membrane; PP, periplasmic
e Imp and the outer membrane protein YaeT migrate to the same position in the 2D gels
Interestingly, the oxidation state of the known DsbA substrate protein OmpA, as well as of some other periplasmic proteins, appeared to be only moderately affected by the lack of DsbA ( Figure 3 A and 3 B). Because the process of folding and disulfide bond formation in OmpA has been shown to be reasonably fast (half time, t 1/2 = 5 min) even in the absence of DsbA ( Bardwell et al. 1991 ), we considered that air oxidation was probably responsible for the disulfide bond formation in proteins such as OmpA under steady-state conditions. To investigate the role that air oxidation might play in the disulfide bond formation of periplasmic proteins in vivo, we performed the same thiol-trapping experiments using wild-type and dsbA ::kan cells that were grown under very limiting oxygen conditions. Under those oxygen-limited conditions, the 14 C activity/protein ratio of OmpA and other periplasmic proteins was dramatically decreased compared to wild-type cells and also significantly lower than in dsbA ::kan cells grown under normal oxygen conditions ( Figure 3 C; Table 2 ). These results not only confirmed that the thiol modifications that we detected with our differential thiol-trapping technique are indeed in vivo modifications and are not introduced during the aerobic lysis and trapping of the sample, but also clearly showed that DsbA is not absolutely necessary for the thiol oxidation of certain periplasmic proteins under aerobic conditions. Under low-oxygen conditions, however, which occur in stationary phase or when cells are grown micro-aerobically, functional DsbA is absolutely required for the successful disulfide bond formation in the E. coli periplasm. TrxA Protects a Large Number of Intracellular Proteins from Oxidation While DsbA promotes disulfide bond formation in the E. coli periplasm, the thioredoxin and glutaredoxin systems reduce disulfide bonds in the E. coli cytoplasm. These systems not only prevent the formation of unwanted disulfide bonds in cytoplasmic proteins, which often lead to the inactivation of the respective proteins, but also play important regulatory roles in the cell. For instance, the oxidative stress response in both prokaryotes and eukaryotes is rapidly attenuated by glutaredoxins and thioredoxins, which reduce and inactivate the oxidative stress transcription factors OxyR or Yap1p ( Carmel-Harel and Storz 2000 ). Therefore, analysis of the proteins that use these systems for their specific reduction will help us to identify cytosolic proteins that use thiol modifications in their functional life cycle. To analyze the substrate specificity of cytoplasmic thiol-disulfide oxidoreductases, we decided to compare the thiol-disulfide status of proteins in a thioredoxin null mutant and the isogenic wild-type E. coli strain. Importantly, strains that lack the trxA gene do not exhibit a general disulfide stress phenotype ( Prinz et al. 1997 ), which minimizes potential secondary thiol modifications in proteins that could be otherwise attributed to those stresses ( Derman et al. 1993 ).
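The strain comparisons reported here reduce, spot by spot, to a fold change of the mean 14 C activity/protein ratio plus a two-sample t-test, matching the ratios and TTEST p-values reported in Tables 2 and 3. A minimal sketch of that comparison (the replicate values are invented for illustration; scipy's t-test stands in for the Excel TTEST function used in the study):

```python
import numpy as np
from scipy import stats

def compare_spot(wt_ratios, mut_ratios):
    """Fold change of the mean 14C activity/protein ratio (mutant / wild type)
    and a two-sample t-test p-value, one protein spot at a time."""
    wt, mut = np.asarray(wt_ratios, float), np.asarray(mut_ratios, float)
    fold = mut.mean() / wt.mean()
    p = stats.ttest_ind(mut, wt).pvalue
    return fold, p

# Toy replicate ratios for one spot in four trapped samples per strain
fold, p = compare_spot(wt_ratios=[0.8, 1.1, 0.9, 1.0],
                       mut_ratios=[2.4, 2.9, 2.2, 2.6])
print(f"fold change = {fold:.1f}, p = {p:.3f}")
if fold >= 2.0 and p < 0.05:
    print("candidate thioredoxin substrate")
```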
The dramatic alteration in the oxidation state of a large number of cellular proteins in a Δ trxA strain as compared to a wild-type E. coli strain is clearly visible ( Figure 4 ). Of the 100 proteins that were selected based on their high level in 14 C activity, 37 protein spots showed a more than 2-fold further increase in thiol modification in Δ trxA strains compared to wild-type strains, where functional thioredoxin is apparently working successfully to keep them reduced. Proteins whose 14 C activity/protein ratio did not change in the absence of thioredoxin included the majority of periplasmic proteins that had been identified before, as well as some highly abundant cytoplasmic proteins that harbor presumably inaccessible or unreactive cysteines. Figure 4 Identification of the In Vivo Substrate Proteins of Thioredoxin A (A) Colored overlay of the stained 2D gel (shown in green) and the phosphor image (shown in red) of differentially trapped protein extract from exponentially growing E. coli wild-type cells. Proteins with a high ratio of activity per protein appear red; proteins with a low ratio appear green. Proteins that were found to have a significantly higher ratio of activity per protein in the trxA − strain (B) are labeled. (B) Overlay of the stained 2D gel (shown in green) and the phosphor image (shown in red) of differentially trapped protein extract from exponentially growing E. coli Δ trxA cells. Proteins that were found to have a significantly higher ratio of 14 C activity/protein in the Δ trxA strain are labeled with an arrow. All 37 spots were selected for mass spectrometric analysis, which resulted in the identification of 27 individual proteins, 23 of which have either been shown or are predicted to be localized to the reducing environment of the cytoplasm and to contain at least one and up to ten cysteine residues ( Table 3 ). Of those, at least six proteins (e.g., thioredoxin-linked thiol peroxidase [Tpx], AhpC, GapDH, and aconitase B [AcnB]) have been previously shown to be targets of thioredoxin in plants ( Yamazaki et al. 2004 ) or, very recently, in E. coli ( Kumar et al. 2004 ) ( Table 3 ). Thirteen proteins have cysteine residues in the active site or the cofactor-binding site (e.g., aspartate semialdehyde dehydrogenase, citrate synthase, and γ-glutamyl phosphate reductase). Five proteins are known metal-binding proteins either coordinating zinc via cysteines (e.g., cobalamin-independent methionine synthase [MetE] and carbonic anhydrase [YadF]), binding iron (e.g., alcohol/acetaldehyde dehydrogenase [AdhE]), or harboring iron-sulfur clusters (e.g., AcnB and succinate dehydrogenase [SdhB]) ( Table 3 ). Reactive and/or accessible cysteine residues appear to make these proteins particularly vulnerable to small amounts of ROS, such as hydrogen peroxide (H 2 O 2 ), which are known to be produced as toxic byproducts of cellular respiration in aerobically growing cells ( Costa Seaver and Imlay 2001 ). That H 2 O 2 must be produced during aerobic growth also became obvious when we found the detoxifying enzymes AhpC and the thioredoxin-linked Tpx to be largely oxidized. Both proteins use disulfide bond formation to detoxify peroxides and appear to require functional thioredoxin to regenerate their reduced thiol status. Table 3 In Vivo Substrate Proteins of Thioredoxin A
a The 14 C activity/protein ratio of the given protein spot on gels from differentially trapped extracts from E.
coli DHB4 ( wild-type) was divided by the corresponding 14 C activity/protein ratio on gels from differentially trapped extracts from E. coli WP570 (Δ trxA )
b The p -value according to the TTEST function of Excel 2000
c Localization according to Swiss-Prot ( http://us.expasy.org/sprot/ ) ( Boeckmann et al. 2003 ) or PSORTB ( http://www.psort.org/psortb/index.html ) ( Gardy et al. 2003 ). CP, cytoplasmic; IM, inner membrane; OM, outer membrane; PP, periplasmic
The in vivo snapshot of the thiol status of proteins in thioredoxin-defective strains shows for the first time, to our knowledge, the detrimental effects of small amounts of ROS generated during aerobic growth on cytosolic proteins, and the important role that thioredoxin A plays in regenerating these proteins. A surprisingly large number of cytosolic proteins appear to harbor such oxidation-sensitive cysteine residues that require the constant presence of the reducing thioredoxin system under aerobic growth. We cannot exclude at this point the possibility that the other cellular redox system, glutaredoxin, is overwhelmed by the accumulation of thiol-modified proteins in thioredoxin-deficient strains as well. This might be the reason why we detect thiol-modified MetE, one of the very few known glutathionylated E. coli proteins ( Hondorp and Matthews 2004 ), which presumably requires functional glutaredoxin for its regeneration. It is also important to note that we focused only on the 100 most intense protein spots on the phosphor image. A significant number of additional proteins showed an at least 2-fold further increase in their 14 C activity/protein ratio in trxA − cells as compared to wild-type cells. These are likely also substrates of thioredoxin and remain to be identified ( Figure 4 ). Only three periplasmic proteins were found to be increasingly thiol-modified in the absence of cytoplasmic thioredoxin: Tpx, OppA, and the methionine-binding protein MetQ (YaeC). In the case of Tpx, thioredoxin A has been suggested to be an essential component in its functional regulation. This makes Tpx a potential substrate protein of the periplasmic thiol-disulfide oxidoreductase systems DsbC or DsbG, which are connected to cytoplasmic thioredoxin via the membrane protein DsbD. Absence of thioredoxin in the cytoplasm would lead to the accumulation of oxidized DsbC and DsbG in the periplasm, which would then no longer be able to reduce and/or isomerize disulfide bonds in periplasmic proteins such as Tpx; this would explain the accumulation of oxidized proteins in the periplasm. The same could apply to the other two periplasmic proteins that we identified in this study. Alternatively, however, thiol modifications that occur and are not reduced in the cytoplasm might prevent their efficient transport, or might impair potential attachments of lipid anchors. The latter might be the case for the lipoprotein MetQ (YaeC), whose single cysteine is predicted to be linked to a lipid anchor. Identification of Proteins Sensitive to Oxidative Stress Over the past few years, an increasing number of redox-sensitive proteins have been identified that use the oxidation state of reactive cysteine residues as a regulatory switch. Oxidative stress–induced thiol modifications lead to conformational changes and to the transient activation or inactivation of the respective protein.
Upon return to non–oxidative stress conditions, cellular reductants such as the thioredoxin system rapidly reduce the cysteine modifications and restore the original protein activity. The observation that the function of so many different cysteine-containing proteins is regulated by the redox conditions of the environment suggests that basically any protein with one or more reactive and exposed cysteines has the potential of being redox-regulated. Because many important regulatory and biosynthetic proteins contain clusters of cysteines in the active site or cofactor-binding site, they can be considered attractive potential targets for this novel form of functional regulation. To investigate thiol modification under conditions of exogenous oxidative stress, wild-type E. coli cells were grown to mid-logarithmic phase and exposed to oxidative stress treatment by addition of H 2 O 2 . Here, we focused on the 100 most heavily 14 C-labeled proteins after 10 min of H 2 O 2 treatment, and compared their 14 C activity/protein ratio to the ratio immediately before and 2, 5, and 30 min after the addition of H 2 O 2 ( Figure 5 ; Table 4 ). Of the 100 most thiol-modified proteins in oxidatively stressed E. coli cells, seven proteins showed increasing thiol modification upon exposure to oxidative stress. These included H 2 O 2 -detoxifying enzymes such as Tpx, as well as a number of biosynthetic enzymes such as MetE, GTP cyclohydrolase (FolE), and phosphoglycerate dehydrogenase (SerA). Figure 5 Identification of Oxidative Stress–Sensitive Proteins In Vivo (A) Overlay of the stained 2D gel (shown in green) and the phosphor image (shown in red) of differentially trapped protein extracts from H 2 O 2 -stressed E. coli wild-type cells. Proteins that were found to have a significantly higher ratio of 14 C activity/protein after 10 min of H 2 O 2 treatment than in untreated cells are labeled. (B) Time course of the oxidation of proteins in E. coli wild-type cells upon treatment with H 2 O 2 . Details of the colored overlays of stained protein gels (shown in green) and phosphor images (shown in red) of cell extracts taken before (Co) and 2, 5, 10, and 30 min after addition of H 2 O 2 to the cells. The selected proteins are FolE, Tpx, MetE, and TufB. Bar charts on the right show the oxidation-dependent change in ratio of 14 C activity/protein of the protein spot labeled by an arrowhead. (C) Time course of the oxidation of MetE in E. coli wild-type cells upon treatment with 1 mM diamide. Details of the colored overlays of stained protein gels (shown in green) and autoradiographs taken on X-ray films (shown in red) of cell extracts taken before (Co) and 2, 10, and 30 min after addition of diamide to the cells. Table 4 Oxidative Stress–Sensitive Proteins in E. coli
a The activity/protein ratio of the given protein spot on gels from differentially trapped extracts from E. coli DHB4 ( wild-type) treated with 4 mM H 2 O 2 was divided by the corresponding ratio on gels from differentially trapped extracts from untreated E. coli DHB4
b The p -value according to the TTEST function of Excel 2000
c Localization according to Swiss-Prot ( http://us.expasy.org/sprot/ ) ( Boeckmann et al. 2003 ) or PSORTB ( http://www.psort.org/psortb/index.html ) ( Gardy et al. 2003 ). CP, cytoplasmic; PP, periplasmic
Importantly, in the companion paper by Hondorp and Matthews (2004), MetE has been shown to be redox-regulated in vivo and in vitro.
The surface-exposed cysteine 645 in MetE was found to be particularly sensitive to oxidative stress–induced thiol modifications. The authors showed that this thiol modification transiently inactivated the enzyme, which provided an excellent explanation for the observed methionine auxotrophy that accompanies oxidative stress in E. coli . To compare the kinetics and extent of our H 2 O 2 -induced thiol modification of MetE with the diamide-induced glutathionylation observed by Hondorp and Matthews, we analyzed the time course of MetE modification upon exposure of E. coli cells to 1 mM diamide. As shown in Figure 5 C, MetE was maximally thiol-modified within 2 min of diamide treatment and maintained its high level of thiol modification over at least 30 min of incubation. This was in excellent agreement with the in vivo thiol trapping conducted by Hondorp and Matthews, who showed that at their first time point of 15 min, all of MetE was in the oxidized state. The second metabolic enzyme that we identified as being particularly sensitive to oxidative stress was FolE, which catalyzes the committed step in the synthesis of the one-carbon donor tetrahydrofolate. Analysis of its crystal structure revealed that FolE contains a cysteine-coordinating zinc center, which escaped prior detection because of its high oxidation sensitivity in vitro ( Rebelo et al. 2003 ). Air oxidation of FolE leads to the inactivation of the enzyme. We have now observed that FolE is one of the major targets of H 2 O 2 treatment in E. coli, with a more than 5-fold increase in 14 C activity/protein ratio upon H 2 O 2 treatment. This suggested that FolE is also transiently oxidized and inactivated upon oxidative stress in vivo, a finding that would clearly make physiological sense. Tetrahydrofolate is a highly oxidation-sensitive compound, and synthesizing it under oxidative stress conditions would be extremely wasteful for the cell. Analysis of the time course of thiol modification in FolE during H 2 O 2 -induced oxidative stress treatment showed a steady increase in thiol modification. This was in contrast to the time course of thioredoxin substrate proteins such as Tpx, whose thiol modification peaked around 2–5 min after the start of the oxidative stress treatment ( Figure 5 B). This suggests that FolE is indeed a protein whose redox state is not controlled by thioredoxin and that is especially sensitive to oxidative stress treatment. A large number of cysteine-containing cytoplasmic proteins (e.g., isocitrate dehydrogenase), as well as all of the identified periplasmic proteins, did not show any significant increase in oxidation-induced thiol modification. This indicated that under oxidative stress conditions, the majority of cytosolic proteins remain reduced and confirmed that the periplasmic proteins were already fully oxidized under aerobic growth conditions. Probably the best-known redox-regulated proteins in E. coli, the oxidative stress transcription factor OxyR and the molecular chaperone Hsp33, were not among the 100 most thiol-modified proteins after 10 min of oxidative stress. This was not very surprising, given that OxyR is a low-abundance protein and Hsp33 has an extremely low pI (pI 4.35) and cannot be detected by our 2D gel system.
The fact, however, that we identified a number of proteins that either have been shown (Tpx and MetE) or were predicted (FolE) to undergo thiol modification upon oxidative stress in vivo made us very confident that thiol modifications play regulatory or functional roles in the other proteins that we discovered as well (e.g., GlyA, PheT, and SerA). All of these proteins have numerous cysteine residues, which are either surface exposed or might play other functional roles that have not yet been identified ( Table 4 ). Detailed biochemical analysis is now required to investigate the exact role that thiol modifications play in these potentially redox-regulated proteins. Conclusion: A Widely Applicable New Method to Visualize Thiol Modifications In Vivo Over the past few years, a variety of reversible oxidative cysteine modifications have been discovered that regulate the activity of eu- and prokaryotic proteins. The most prominent modification is disulfide bond formation, but transient glutathionylation of cysteines and oxidation to sulfenic acid have also been found to play an important regulatory role in many proteins ( Barrett et al. 1999 ; Kim et al. 2002 ). These modifications are usually transiently introduced by specific stress conditions such as peroxide or disulfide stress and are resolved by cellular thiol-disulfide oxidoreductases such as thioredoxin and glutaredoxin, which are usually upregulated during these stress conditions ( Potamitou et al. 2002 ). Several new proteomic strategies have now been developed to probe for potential substrate proteins of cellular thiol-disulfide oxidoreductases and proteins that undergo thiol modifications during stress conditions. These strategies involve the use of radioactive glutathione to detect glutathionylated proteins under oxidative stress conditions ( Fratelli et al. 2002 ), fluorescent thiol-reactive dyes to identify thioredoxin-targeted proteins ( Yano et al. 2001 ), or active-site mutants of thiol-disulfide oxidoreductases, whose failure to complete the enzymatic reaction leads to an irreversible disulfide crosslink between thiol-disulfide oxidoreductase and substrate, which can then be identified by SDS-PAGE or diagonal PAGE in combination with mass spectrometry ( Motohashi et al. 2001 ; Balmer et al. 2003 ; Kadokura et al. 2004 ). Very recently, a study also examined proteins that interact very tightly with TAP-tagged thioredoxin in E. coli ( Kumar et al. 2004 ). This technique, however, was unable to distinguish between proteins that require thiol-disulfide exchange reactions with thioredoxins (i.e., enzymatic substrates) and proteins that simply associate with thioredoxin without the involvement of thiol chemistry. This was especially emphasized by the finding that at least 25% of the thioredoxin-associated proteins identified by Kumar et al. did not contain any cysteines. With the help of these various methods, a number of proteins have been identified that serve as substrate proteins of different thiol-disulfide oxidoreductases in E. coli and other organisms. We have now developed a technique that allows us to globally monitor and compare the thiol-disulfide status of all cellular proteins that can be resolved in 2D gels. With this technique, the substrate proteins of all major oxidoreductase systems can be identified and dissected simply by comparing the thiol status of proteins in the appropriate mutant strains.
In prokaryotes, for instance, substrate overlap and differences between thioredoxins A and C can be analyzed by simply comparing the thiol-disulfide status in trxA − and trxC − cells. The substrate specificity of the complete thioredoxin system can then be distinguished from the substrate specificity of the glutaredoxin systems by comparing the thiol-disulfide status of cellular proteins in strains lacking one of the systems altogether. Most importantly, this technique should be widely applicable to many different cell types and organisms. Preliminary experiments, for instance, showed that AdhE, GapDH, and SdhB are major targets for oxidative thiol modifications in yeast (data not shown). We found the respective E. coli homologs to be among the most redox-sensitive proteins in strains lacking thioredoxin A. The reason why this method is applicable to both pro- and eukaryotic cells is the rapid thiol-quenching step that involves the incubation and lysis of cells in the presence of TCA. This immediately stops all thiol-disulfide exchange reactions. All subsequent trapping steps are then conducted with the soluble proteins under denaturing conditions. These unique features should allow us and others to monitor and visualize the in vivo thiol status of cellular proteins upon exposure of various cells and organisms to virtually every physiological or pathological condition that is accompanied by oxidative stress. Materials and Methods Bacterial strains. E. coli DHB4 (F′ lac - pro lacI Q /Δ (ara - leu)7697 araD139 Δ lacX74 galE galK rpsL phoR Δ (phoA) PvuII Δ malF3 thi ) (herein referred to as wild-type), WP570 (DHB4 Δ trxA ) ( Prinz et al. 1997 ), and LL029 (DHB4 dsbA ::kan) were grown aerobically in glucose MOPS minimal medium ( Neidhardt et al. 1974 ) containing 40 μg/ml L -leucine and 10 μM thiamine at 37 °C. Oxygen-limited cultures were grown in completely full 15-ml screw cap tubes. E. coli LL029 was obtained by P1 transduction of a dsbA ::kan insertion mutation into DHB4. The dsbA -null strain AH55 was used as the source of the P1 transduction ( Hiniker and Bardwell 2004 ). Harvest of cell samples. Wild-type E. coli and the respective mutant cells were grown to an OD 600 of 0.4 at 37 °C. To expose wild-type E. coli cells to oxidative stress treatment, the cells were then treated with 4 mM H 2 O 2 or 1 mM diamide for the duration indicated. Then 1.8 ml of the cell culture was harvested directly into 200 μl of ice-cold 100% (w/v) TCA and stored on ice for at least 20 min. Differential thiol trapping of cellular proteins. The TCA-treated cells were centrifuged (13,000 g , 4 °C, 30 min), and the resulting pellet was washed with 500 μl of ice-cold 10% (w/v) TCA followed by a wash with 200 μl of ice-cold 5% (w/v) TCA. The supernatant was removed completely, and the pellet was resuspended in 40 μl of denaturing buffer (6 M urea, 200 mM Tris-HCl (pH 8.5), 10 mM EDTA, and 0.5% [w/v] SDS) supplemented with 100 mM IAM. This first alkylation procedure irreversibly modified all free thiol groups that were made accessible by the urea and SDS denaturation of the proteins. After 10 min of incubation at 25 °C, the reaction was stopped by adding 40 μl of ice-cold 20% (w/v) TCA. After 20 min of incubation on ice, the alkylated proteins were centrifuged again, and the pellet was washed with TCA as described before. The protein pellet was then dissolved in 20 μl of 10 mM DTT in denaturing buffer to reduce all reversible thiol modifications such as disulfide bonds and sulfenic acids.
After a 1-h incubation at 25 °C, 20 μl of a solution of 100 mM radioactively labeled [ 14 C-1]-IAM in denaturing buffer was added to titrate out the DTT and to irreversibly alkylate all newly reduced cysteines. The reaction mixture was incubated for 10 min at 25 °C. The reaction was stopped by adding 40 μl of 20% (w/v) TCA. After precipitation on ice and subsequent centrifugation, the pellet was washed first with TCA and then three times with 500 μl of ice-cold ethanol (for a schematic overview see Figure 1 ). Reverse-trapping experiments were conducted as described except that the first alkylation procedure was performed with [ 14 C-1]-IAM while the second alkylation step was performed with unlabeled IAM. For protein identification purposes, thiol-trapping experiments using nonradioactive IAM in both alkylation steps were performed in parallel. 2D gel electrophoresis. The pellet of the thiol-trapped proteins was dissolved in 500 μl of rehydration buffer (7 M urea, 2 M thiourea, 1% [w/v] Serdolit MB-1, 1% [w/v] dithiothreitol, 4% [w/v] CHAPS, and 0.5% [v/v] Pharmalyte 3–10), and the 2D gel electrophoresis was performed as previously described ( Hiniker and Bardwell 2004 ). Staining of the gels, storage phosphor autoradiography, and image analysis. Gels were stained using colloidal Coomassie blue stain ( Neuhoff et al. 1985 ) and scanned using an Expression 1680 scanner with transparency unit (Epson America, Long Beach, California, United States) at 200-dpi resolution/16-bit grayscale. Phosphor images were obtained by exposing LE Storage Phosphor Screens (Amersham Biosciences, Piscataway, New Jersey, United States) to dried gels for 7 d. The phosphor image screens were read out with the Personal Molecular Imager FX (Bio-Rad, Hercules, California, United States) at a resolution of 100 μm. The original image size of the phosphor image was changed to a resolution of 200 dpi with Photoshop 7.0 (Adobe Systems, San Jose, California, United States). The phosphor images and images of the stained proteins were analyzed using Delta 2D Software (Decodon, Greifswald, Germany). Data analysis. For each of the described experiments, at least four individually trapped samples were obtained from at least two independent cell cultures. The only exceptions were the time course of H 2 O 2 treatment at the time points 2, 5, and 30 min, the DsbA experiments under oxygen limitation, and the time course of diamide treatment. For each of the experiments, the phosphor image with the highest overall 14 C activity was chosen for spot detection. The 100 most abundant spots were chosen from the detected set of spots and the boundaries transferred to all other phosphor images and protein gel images using the Delta 2D “transfer spots” function. The absolute intensity for each of these 100 spots on the protein gels and the phosphor image was determined to quantitatively describe the amount of protein and 14 C activity for each protein spot. These absolute spot intensities were then normalized over all 100 spots (for trapping and reverse-trapping of wild-type cells, and for all H 2 O 2 experiments). This normalization scheme was changed when the thiol-disulfide status of the dsbA mutant strain was analyzed. This was based on the consideration that a large number of the most intense spots on the phosphor images are heavily thiol-modified periplasmic proteins, which are putative DsbA substrate proteins. Normalizing over those protein spots would largely affect our data analysis.
We therefore decided to normalize over four of the most abundant intracellular spots, TufB isoform (IF) 1, GapA IF 1, AhpC, and GroEL, whose thiol-disulfide status was not affected by the absence or presence of DsbA. In the case of the trxA mutant strain, similar considerations led us to normalize over four of the most abundant periplasmic protein spots, OmpA IF 1, OmpA IF 2, HisJ, and ArtJ, whose thiol-disulfide status was not influenced by the lack of TrxA activity. Finally, the ratio of 14C activity/protein was calculated by dividing the normalized intensity of the protein spot on the phosphor image by the corresponding normalized intensity of the Coomassie blue–stained protein spot. For a protein to be considered significantly thiol-modified, the average of the 14C activity/protein ratio for a given protein spot had to be at least 1.5-fold above the average of the 14C activity/protein ratio of this protein under control conditions. Identification of proteins from 2D gels. Thiol-trapped samples using nonradioactive IAM in both alkylation steps were separated on 2D gels and used to excise proteins of interest. These proteins were identified by Peptide Mass Fingerprinting at the Michigan Proteome Consortium ( http://www.proteomeconsortium.org ). Supporting Information Accession Numbers The Swiss-Prot ( http://www.ebi.ac.uk/swissprot/ ) accession numbers for the gene products discussed in this paper are 30S ribosomal subunit protein S2 (P02351), 50S ribosomal subunit protein L5 (P02389), AceF (P06959), AcnB (P36683), AdhE (P17547), AhpC (P26427), γ-glutamyl phosphate reductase (P07004), ArtI (P30859), ArtJ (P30860), aspartate semialdehyde dehydrogenase (P00353), carbonic anhydrase (P36857), citrate synthase (P00891), DAHP synthetase (P00886), DppA (P23847), DsbA (P24991), GapA (P06977), glutamyl-tRNA synthetase (P04805), GroEL (P06139), GTP cyclohydrolase I (P27511), HisJ (P39182), Hsp33 (P45803), Imp (P31554), isocitrate dehydrogenase (P08200), Lpd (P00391), MetE (P25665), MetQ/YaeC (P28635), NusA (P03003), OmpA (P02934), OppA (P23843), OxyR (P11721), phenylalanyl-tRNA synthetase beta-subunit (P07395), phosphate import ATP-binding protein (P07655), phosphoribosylaminoimidazole synthetase (P08178), phosphoribosylaminoimidazole-succinocarboxamide synthetase (P21155), phosphotransferase system enzyme I (P08839), PhoU (P07656), porin protein E (P02932), P-specific transport protein (P06128), pyruvate kinase I (P14178), SdhB (P07014), SerA (P08328), serine hydroxymethyltransferase (P00477), SspA (P05838), succinyl-CoA synthetase (P07459), Tpx (P37901), trigger factor (P22257), TufB (P02990), and UgpB (P10904). | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC521172.xml
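The spot quantification scheme described above reduces to a few array operations. A minimal sketch (Python with NumPy; the function name and input layout are ours, not from the study) of the normalization and the 1.5-fold significance criterion:

import numpy as np

def thiol_modification_calls(activity, protein, control_ratio, fold=1.5):
    # activity: 14C intensities of the selected spots on the phosphor image
    # protein: Coomassie intensities of the same spots on the protein gel
    # control_ratio: normalized activity/protein ratios under control conditions
    # Ratios should be averaged over the replicate trappings (at least four
    # per condition in the study) before applying the threshold.
    act_norm = np.asarray(activity, float) / np.sum(activity)    # normalize over the spot set
    prot_norm = np.asarray(protein, float) / np.sum(protein)
    ratio = act_norm / prot_norm                                 # 14C activity per unit protein
    significant = ratio >= fold * np.asarray(control_ratio)      # >= 1.5-fold above control
    return ratio, significant

For the mutant strains, the same function applies if the intensities are first rescaled by their sums over the four reference spots used there (TufB IF 1, GapA IF 1, AhpC, and GroEL for the dsbA mutant; OmpA IF 1, OmpA IF 2, HisJ, and ArtJ for the trxA mutant) rather than over all 100 spots.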
517826 | A Neuroeconomics Approach to Inferring Utility Functions in Sensorimotor Control | Making choices is a fundamental aspect of human life. For over a century experimental economists have characterized the decisions people make based on the concept of a utility function. This function increases with increasing desirability of the outcome, and people are assumed to make decisions so as to maximize utility. When utility depends on several variables, indifference curves arise that represent outcomes with identical utility that are therefore equally desirable. Whereas in economics utility is studied in terms of goods and services, the sensorimotor system may also have utility functions defining the desirability of various outcomes. Here, we investigate the indifference curves when subjects experience forces of varying magnitude and duration. Using a two-alternative forced-choice paradigm, in which subjects chose between different magnitude–duration profiles, we inferred the indifference curves and the utility function. Such a utility function defines, for example, whether subjects prefer to lift a 4-kg weight for 30 s or a 1-kg weight for a minute. The measured utility function depended nonlinearly on the force magnitude and duration and was remarkably conserved across subjects. This suggests that the utility function, a central concept in economics, may be applicable to the study of sensorimotor control. | Introduction In real world situations we often have to choose between possible actions that lead to different outcomes. To provide a computational framework for such a decision process, the notion of a utility function is often used ( Neumann and Morgenstern 1944 ). A utility function assigns to each possible action a number that specifies how desirable each outcome is. In the theory of rational choice, it is assumed that subjects will choose the action that leads to the most desirable outcome and thus the highest utility. The economics literature extensively discusses the problem of having a utility function that depends on two or more variables ( Edgeworth 1881 ; Pareto 1909 ). For example, people may associate a utility with the number of apples and oranges they are offered. There will be combinations of apples and oranges which have equal utility. Having three apples and three oranges could be judged as being equally good as having ten apples and one orange. These two possibilities would form two points along an "indifference curve" in apple–orange space, representing outcomes with identical utility that are, therefore, equally desirable. Such indifference curves have been extensively studied by economists in terms of goods and services (cf. Humphrey 1996 ). The sensorimotor system also has to choose between different actions. The utility of actions will depend on two components—the cost associated with performing an action and the desirability of the outcome. Here we characterize the utility function used by the sensorimotor system by measuring the indifference curves for human subjects experiencing short pulses of force. In sensorimotor control, utility functions that depend on several variables occur frequently. Consider, for example, unpacking a car after a snowboarding vacation. We could carry all the suitcases at the same time, reducing the time to unpack but maximizing the weight we have to lift concurrently. At the other extreme we could transport each item individually, which would minimize the magnitude of the force required at the expense of a long unpacking duration.
The chosen solution is likely to lie somewhere between these two extremes and may reflect an optimal decision based on a utility function that depends on the duration and magnitude of the forces. Once a utility function is specified, the decision problem becomes one of solving an optimal control problem, finding the actions that maximize the utility. A number of studies in the field of optimal sensorimotor control have proposed loss functions (the negative of utility) and derived the optimal actions given these proposed loss functions. For example, the minimum jerk model ( Hogan 1984 ; Flash and Hogan 1985 ) suggests that people minimize the average squared jerk of the hand (third derivative of position) when making reaching movements. Alternative models have suggested that during reaching people try to minimize the variation of endpoint errors that arise from noise on the motor commands ( Harris and Wolpert 1998 ; Todorov and Jordan 2002 ). However, these and many other similar studies assume a loss function and compare the predicted behavior with observed behavior, rather than measure the loss function directly. In recent work we have used a statistical approach to infer the loss function instead of assuming it: we characterized the statistics of the errors observed by subjects and showed that the inferred loss was approximately quadratic for small errors but robust to outliers for larger errors ( Körding and Wolpert 2004 ). Here we use an alternative approach that is analogous to the approaches used in economics to infer a loss function. Different movements may be associated with different costs or utilities. For example, a utility function could assign a numerical value to each possible movement, characterizing how costly it is to the organism. Here, we have examined how the utility associated with producing a force depends on two parameters, the duration and the magnitude of a force profile (see Materials and Methods for details). The force profiles were smoothed square waves that could be linearly scaled by the duration of the force, T, and the maximum value of the force, F. On each trial, subjects experienced two force profiles that differed in both T and F. They then had to choose which of the two force profiles they would experience again. They were told to choose the force that required the least effort. In this two-alternative forced-choice experiment subjects thus indicated their preference for one combination of F and T over another. This allowed us to infer indifference curves: given the choice of two combinations of F and T that are on the same indifference curve, subjects will have no preference. The associated utility of these force profiles is thus identical. To obtain a full utility function from a set of indifference curves we additionally needed to determine the utility of one indifference curve relative to another. This was achieved by finding "doubling points." A doubling point is a point on one indifference curve for which subjects show no preference when compared to experiencing a point on another indifference curve twice (that is, two smoothed square waves in quick succession—see Materials and Methods for details). Thus we could determine the full utility function. Results/Discussion In a two-alternative forced-choice paradigm, subjects chose which of two experienced force profiles they preferred to experience a second time (see Figure 1 ).
This allowed us to find a set of force profiles for which subjects showed equal preference and to which they were therefore indifferent. Although subjects experience these force profiles as being physically different, they show no preference in terms of which they wish to experience again. Figure 1 The Experimental Setup The subject's hand position (pink circle) was visible on the screen. The hand movement was restricted to stay within a small area (blue box). The direction of the force is represented by the blue arrow, and the temporal profile of the force is shown by the blue curve. Various hypothesized utility functions predict different choices and thus different indifference lines. The first model we hypothesized was that subjects would minimize the integrated force they are using (F × T). This predicts hyperbolas as indifference lines (Figure 2A). Alternatively, people could minimize the integrated squared force (F^2 × T) (Figure 2B). In this case they would prefer long-duration, weak forces to short-duration, strong forces of equal integrated force. Another possible model would be that people would just try to minimize the maximal force they had to produce, regardless of how long they had to hold it (F) (Figure 2C). Figure 2 Hypothesized and Measured Indifference Curves and Loss Function from a Single Subject (A–C) The predicted indifference lines are shown that minimize (A) the integrated force (F × T), (B) the integrated squared force (F^2 × T), and (C) the maximal force (F). (D) Experimental data from a single subject. The open circles are the reference forces. The blue full circles connected by the black lines represent indifference points. Error bars denote the 95% confidence intervals. Force profiles are illustrated (blue curves for single forces, pink curve for doubling points). (E) Inferred color plot of the loss function (warmer colors represent greater cost). A single subject's results are shown in Figure 2D. The reference forces are shown as open circles, while the indifference points are shown as filled circles. The locations of the indifference points had relatively small error bars (black 95% confidence intervals). This subject showed a preference for points towards the origin compared to those further from the origin (along the blue lines). Joining up such points in force–time space allows us to obtain indifference curves (black lines). For short duration profiles (less than 150 ms), as the duration increased, the force needed to decrease to maintain constant utility. This makes intuitive sense: as the duration of the experienced force increases, more effort is required to stabilize the arm. For longer durations (greater than 500 ms), as the duration increased, the force required to maintain equal utility also increased. This means that people prefer to experience a 2-s force profile over a 1-s force profile of the same magnitude. We explain this counterintuitive result—that increasing both the duration and force can keep the utility constant—in the following way. The shape of the force profiles for all conditions was kept self-similar. This means that force profiles with a longer duration have a slow onset and offset (each is 20% of the total duration). For long durations subjects can, therefore, progressively compensate for the imposed forces as they ramp up slowly, thereby producing less loss.
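The three candidate loss functions above make concrete, easily computed predictions for the indifference lines. A brief sketch (Python with NumPy; the reference values are hypothetical) that generates the predicted curves of Figure 2A–2C for comparison with data:

import numpy as np

T = np.linspace(0.05, 2.0, 200)   # durations to evaluate, in seconds
F_ref, T_ref = 6.0, 0.3           # hypothetical reference profile (N, s)

# Force at each duration that keeps the hypothesized loss equal to the
# loss of the reference profile:
indifference = {
    "integrated force (F*T)": F_ref * T_ref / T,                       # hyperbola, F*T = const
    "integrated squared force (F^2*T)": np.sqrt(F_ref**2 * T_ref / T),
    "maximal force (F)": np.full_like(T, F_ref),                       # horizontal line
}

Under the first two models the predicted indifference lines can only fall with increasing duration, so neither can reproduce the measured upturn at long durations described above.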
We furthermore measured how much smaller a force profile needed to be (scaled uniformly in duration and force) so that experiencing it twice had the same utility as experiencing the unscaled profile once. For the four open-circle reference points in Figure 2D, the pink circles show the corresponding four points that have half the utility. We can thus infer how the loss function changes as the force profiles are scaled (Figure 2E; see Materials and Methods ). Any order-preserving transformation of the utility function will have no effect on subjects' preferences. That means that arbitrary scalings can be applied to the loss function while the optimal behavior remains unchanged. This property of utility functions is well known in economics and has led to the idea of ordinal utility ( Pareto 1909 ), in which the ordering of preferences is the key feature of utility. The utility of the first reference point is thus arbitrarily set to be equal to one. The plotted relative utility is the utility function arising from this assumption. The double-hump experiment defines the derivative of the utility, which is interpolated and integrated to obtain the relative utility. To infer the relative utility function (Figure 2E), we had to assume local linearity. The loss function shows nonlinear behavior. Figure 3 shows the inferred utility function averaged over all the subjects. We can analyze how loss increases along the line connecting the reference points (T/F = 44.6). Fitting a model of the form Loss = (F·T)^α to the data from the double-hump experiments leads to an α of 1.1 ± 0.15 (mean ± SEM over subjects). This α, when fit to the data from all the subjects for each of the four lines, is approximately constant (1.2, 1.0, 0.9, 0.9). The shape of the loss function is highly conserved over the set of subjects. In particular, the effect that the indifference curves rise at both very short and very long durations is found across the set of subjects. Figure 3 Iso-Loss Contours and Loss Function for the Set of All Subjects The black curves are the iso-loss curves. Error bars denote the standard error of the mean over the population. The color plot represents the inferred loss function (warmer colors represent greater loss) obtained by interpolating the data from the double-hump forces. By applying the methodology developed by economists, we have shown that fundamental properties of the nervous system, such as loss functions, can be inferred from the choices humans make in a sensorimotor task. In general, these loss functions will depend on a large number of factors that were not measured in our experiment. For example, there are subjective emotional components to human decision making ( Sanfey et al. 2003 ). However, parametric variations would allow such multi-dimensional loss functions to be determined. Interestingly, the inferred loss function we report cannot easily be modeled by any simple function of our experimental variables F and T. However, it is highly conserved across the subjects, suggesting a common underlying mechanism is at work. Moreover, our results suggest that the opposite approach—first hypothesizing a loss function and then predicting human decision making—is likely to miss interesting aspects of the behavior and underlying processes. We are therefore hopeful that the application of economic methods to the study of the nervous system, referred to as neuroeconomics ( Glimcher 2003 ), will continue to provide new insights into the functioning of the central nervous system.
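The power-law fit reported above can be recovered directly from the doubling points: if a profile (F_h, T_h) experienced twice matches (F_r, T_r) experienced once, then under Loss = (F·T)^α the linearity assumption gives 2(F_h·T_h)^α = (F_r·T_r)^α, so α = ln 2 / ln[(F_r·T_r)/(F_h·T_h)]. A sketch (Python; the numerical pairs are hypothetical, not the study's data):

import numpy as np

def alpha_from_doubling(F_half, T_half, F_ref, T_ref):
    # Solve 2*(F_half*T_half)**a = (F_ref*T_ref)**a for the exponent a.
    return np.log(2.0) / np.log((F_ref * T_ref) / (F_half * T_half))

# Hypothetical doubling points along the T/F = 44.6 line:
pairs = [(3.2, 0.14, 4.5, 0.20), (5.0, 0.22, 6.7, 0.30)]
alphas = [alpha_from_doubling(*p) for p in pairs]
print(np.mean(alphas))   # the study reports alpha = 1.1 +/- 0.15 across subjects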
Materials and Methods Subjects and the manipulandum After providing written informed consent, five right-handed subjects (aged 20–40 y) participated in this study. The experiments were carried out in accordance with institutional guidelines. A local ethics committee approved the experimental protocols. While seated, subjects held the handle of a robotic manipulandum with two degrees of planar freedom. This was a custom-built device (vBot) consisting of a parallelogram constructed mainly from carbon fiber tubes that were driven by rare earth motors via low-friction timing belts. High-resolution incremental encoders were attached to the drive motors to permit accurate computation of the robot's position. Care was taken with the design to ensure it was capable of exerting large end-point forces while still exhibiting high stiffness, low friction, and also low inertia. The robot's motors were run from a pair of switching torque control amplifiers that were interfaced, along with the encoders, to a multifunctional I/O card on a PC using some simple logic to implement safety features. Software control of the robot was achieved by means of a control loop running at 1,000 Hz, in which position and force were measured and the desired output force was set. A virtual reality system was used that prevented subjects from seeing their hand and allowed us to present visual images in the plane of the movement (for full details of the setup see Goodbody and Wolpert 1998 ) (see Figure 1A). The force between the subject's hand and the manipulandum was continuously measured using a six-axis force transducer (Nano25; ATI Industrial Automation, Apex, North Carolina, United States) sampled at 1,000 Hz by the control loop. The experiment consisted of trials in which the robot generated force profiles on the subjects' hands. The force profiles experienced were parameterized by their duration T in ms and their maximal strength F in Newtons. The force profile f(t) approximated a square profile but had a smooth onset and offset, the onset and offset ramps each occupying 20% of the total duration so that profiles of different durations were self-similar. On each trial the subjects experienced two different force profiles and could then choose which of the two profiles to experience for a second time. Using such a forced-choice procedure allowed us to determine the indifference curves. Inferring indifference pairs Subjects saw a starting sphere and two selection spheres (see Figure 1A). Each trial started when the subject moved the cursor, representing their hand, into the starting sphere. The trial then had three phases. (1) One of the selection spheres turned green, and subjects were required to place the cursor into this sphere, where they experienced a force profile F1. The subjects then returned the cursor to the starting sphere. (2) The other selection sphere turned green, and subjects were required to place the cursor in that sphere, where they experienced a force profile F2. Subjects then returned the cursor to the starting sphere. (3) Both selection spheres turned green, and subjects were required to choose which of the two spheres to move to, where they would experience the same force associated with that sphere, either F1 or F2. Therefore, subjects could decide which force profile, F1 or F2, to experience a second time. To obtain four indifference curves, we chose four reference profiles that had durations T of 200, 300, 400, and 500 ms. The maximal force F was chosen for each reference so that the ratio T/F had the value 44.6.
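The exact smoothing used for f(t) is not reproduced here; one plausible construction consistent with the description (ramps occupying 20% of the duration, profiles self-similar across conditions) is a raised-cosine onset and offset:

import numpy as np

def force_profile(F, T, ramp_frac=0.2, dt=0.001):
    # Smoothed square pulse: maximal force F (N), duration T (s).
    # A raised-cosine ramp is assumed for illustration only; the study
    # does not specify the smoothing function.
    t = np.arange(0.0, T, dt)
    r = ramp_frac * T
    f = np.full_like(t, F)
    up, down = t < r, t > T - r
    f[up] = F * 0.5 * (1 - np.cos(np.pi * t[up] / r))
    f[down] = F * 0.5 * (1 - np.cos(np.pi * (T - t[down]) / r))
    return t, f

t, f = force_profile(4.5, 0.200)   # shortest reference: T = 200 ms, T/F = 44.6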
This choice of ratio gave a maximal force that ranged from 4.5 N for the shortest duration reference to 11.2 N for the longest duration reference. These reference points lie along a straight line in time–force space (see Figure 2A, open circles). On each trial, one of the two force profiles, F1 or F2, was set to be one of the reference forces and the other was a test force. The sphere associated with the reference force was randomized on each trial between the left and right locations. To obtain indifference lines, we wished to find points along the radial lines shown in Figure 2A that subjects judged equivalent to the four reference points. To obtain these we used a two-alternative forced-choice paradigm in which the test force was chosen from one of these lines, which correspond to T/F ratios of 2.0, 7.4, 20.0, 44.6 (double hump, see below), 85.4, 142.1, and 203.0, with the aim of finding the point along the line at which subjects would choose between the reference and test force indifferently (that is, at probability level 0.5). We used an adaptive fitting protocol (QUEST; Watson and Pelli 1983 ) to find the p = 0.5 threshold of a logistic function. The reference points and T/F ratio lines were interleaved in a pseudorandom order. Forty trials were performed to obtain each indifference pair. Each reference point, together with the six T/F ratio line points that subjects preferred equally, defines an indifference curve. Inferring the loss function The above procedure allowed us to obtain indifference lines, along which the utility has equal value. However, to obtain a full utility function we needed to join up these lines and determine the relative utility of one indifference line to another. To achieve this we performed a two-alternative forced-choice paradigm in which the reference force was as before, but the test force was selected from the T/F = 44.6 line, with the force profile presented twice in succession (the "double hump" force). This condition was run interleaved with the other conditions. We assumed that the utility of experiencing the double hump was twice the utility of a single hump (a linearity assumption). This assumption allowed us to link the reference point to a point of half its utility, further allowing us to linearly interpolate log(utility) between these points to obtain estimates of the loss function between the lines. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC517826.xml
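An offline counterpart of this adaptive procedure is a maximum-likelihood logistic fit to the recorded choices; the 0.5 point of the fitted function is the indifference point. A minimal sketch (Python with SciPy; this is a post-hoc fit, not the QUEST algorithm that was used online in the study):

import numpy as np
from scipy.optimize import minimize

def fit_indifference(scale, chose_test):
    # scale: test-profile position along one T/F ratio line (e.g., 40 trials)
    # chose_test: 1 if the subject chose the test force, else 0
    x, y = np.asarray(scale, float), np.asarray(chose_test, float)

    def nll(params):                       # negative log-likelihood of a logistic
        mu, s = params
        p = 1.0 / (1.0 + np.exp(-(x - mu) / max(abs(s), 1e-6)))
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    res = minimize(nll, x0=[x.mean(), x.std() + 1e-3], method="Nelder-Mead")
    return res.x[0]                        # mu: the p = 0.5 indifference point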
539271 | Antitumor effectiveness of different amounts of electrical charge in Ehrlich and fibrosarcoma Sa-37 tumors | Background In vivo studies were conducted to quantify the effectiveness of low-level direct electric current at different amounts of electrical charge, and the resulting survival rate, in fibrosarcoma Sa-37 and Ehrlich tumors; the effect of direct electric current in the Ehrlich tumor was also evaluated through measurements of tumor volume and of peritumoral and tumoral findings. Methods BALB/c male mice, 7–8 weeks old and weighing 20–22 g, were used. Ehrlich and fibrosarcoma Sa-37 cell lines were grown in BALB/c mice. Solid, subcutaneous Ehrlich and fibrosarcoma Sa-37 tumors, located dorsolaterally in the animals, were initiated by the inoculation of 5 × 10^6 and 1 × 10^5 viable tumor cells, respectively. For each type of tumor, four groups of 10 randomly assigned mice each (one control group and three treated groups) were formed. When the tumors reached approximately 0.5 cm^3, four platinum electrodes were inserted into their bases. The electrical charge delivered to the tumors was varied in the range of 5.5 to 110 C/cm^3 for a constant time of 45 minutes. An additional experiment was performed in BALB/c male mice bearing Ehrlich tumors to examine the effects of direct electric current from a histological point of view. A control group and a group treated with 77 C/cm^3 (27.0 C in 0.35 cm^3) at 10 mA for 45 min were formed. In this experiment, when the tumor volumes reached 0.35 cm^3, two anodes and two cathodes were inserted into the tumor base perpendicular to its long axis. Results Significant tumor growth delay and increased survival were achieved after electrotherapy, and both depended on the direct electric current intensity, the effect being more marked in the fibrosarcoma Sa-37 tumor. Complete regressions of fibrosarcoma Sa-37 and Ehrlich tumors were observed at electrical charges of 80 and 92 C/cm^3, respectively. Histopathological and peritumoral findings in the Ehrlich tumor revealed, in the treated group, marked tumor necrosis, vascular congestion, peritumoral neutrophil infiltration, an acute inflammatory response, and a moderate peritumoral monocyte infiltration. The morphologic pattern of the necrotic cell mass after direct electric current treatment was coagulative necrosis. These findings were not observed in any of the untreated tumors. Conclusion The data presented indicate that electrotherapy with low-level DEC is feasible and effective in the treatment of Ehrlich and fibrosarcoma Sa-37 tumors. Our results demonstrate that the sensitivity of these tumors to direct electric current and the survival rates of the mice depended on both the amount of electrical charge and the type of tumor. Also, complete regression of each type of tumor was obtained above a threshold amount of electrical charge. | Background The use of electric current in the treatment of malignant tumors has been known since the beginning of the 19th century. Several investigators have reported encouraging results from experimental low-level direct electric current (DEC) therapy in different types of tumor [1-3]. These studies have shown that DEC has an antitumor effect in different animal tumor models and in the clinic; however, it has not yet been universally accepted. The dose-response relationships obtained in these studies indicate that DEC effectiveness depends on both the type of tumor and the therapeutic scheme (amount of electrical charge and electrode array).
Lack of guidance has become an obstacle to the introduction of electrochemical treatment (EChT) into clinical oncology. This is due to the lack of standardization of the EChT method regarding DEC doses and electrode arrays. Ren et al. [4] studied the influence of dose and electrode spacing in breast cancer and concluded that an increase in dose led to increases in both the necrosis percentage and the survival rate. However, they did not find a significant effect of spacing on the tumor necrosis percentage. On the other hand, Chou et al. [5] showed that the number of electrodes depends on the tumor size and that electrodes inserted at the base perpendicular to the tumor long axis increased the antitumor effectiveness with respect to the other electrode configurations used. In spite of these results, the efficacy of DEC treatment has remained controversial, since an optimum electrode array and a threshold amount of electrical charge for each type of tumor have not been established. We believe that a procedure to determine the amount of electrical charge at which each type of tumor is completely destroyed is more feasible to implement than one for the optimum electrode array, which involves several variables, such as the polarity, number, and orientation of the electrodes. Knowledge of the optimum values of these parameters may make it possible to maximize the antitumor effectiveness of DEC and minimize its adverse effects on the organism. This would allow the establishment of a therapeutic procedure for tumor treatment in animals and in clinical oncology. The aim of this study is to test the hypothesis that the responses of tumors treated with DEC are dose dependent. Ehrlich and fibrosarcoma Sa-37 tumors were used. The survival rates of the mice bearing these two types of tumor were determined. The antitumor effects of DEC were also evaluated through the peritumoral and tumoral findings in the Ehrlich tumor. Methods Animals The experiment was run in accordance with Good Laboratory Practice rules and animal protection laws. The experiment was approved by the ethical committee of Oriente University, which follows the guidelines of the Cuban Animal Ethical Committee. BALB/c male mice, 7–8 weeks old and weighing 20–22 g, were used. They were supplied by the National Center for Production of Laboratory Animals (CENPALAB), Havana City, Cuba, and were kept under standard laboratory conditions with water and food ad libitum. Animals were healthy (without signs of fungal or other infections) and were maintained in plastic cages inside a room at a constant temperature of 23 ± 2°C and a relative humidity of 65%, with a natural day-night cycle. During therapy the animals were firmly fixed on wooden boards, and all treatments were performed in the absence of anesthesia. All treated animals showed uneasy and quick breathing during fixation. Tumor cell lines Ehrlich and fibrosarcoma Sa-37 cell lines, growing in BALB/c mice, were received from the Center for Molecular Immunology, Havana City, Cuba. Both cell lines are maintained in the Cell Culture Collection of the Department of Pathologic Anatomy, Hospital "Conrado Benítez", Santiago de Cuba, Cuba. The Ehrlich and fibrosarcoma Sa-37 ascitic tumor cell suspensions, transplanted to the BALB/c mouse, were prepared from the ascitic forms of the tumors.
Ehrlich solid, subcutaneous tumors, located dorsolaterally in the animals, were initiated by the inoculation of 5 × 10^6 viable tumor cells in 0.2 ml of 0.9% NaCl, while fibrosarcoma Sa-37 solid, subcutaneous tumors, also located dorsolaterally, were initiated by the inoculation of 1 × 10^5 viable tumor cells in 0.2 ml of 0.9% NaCl. For both tumors, the viability of the cells was determined by the Trypan blue dye exclusion test and was over 95%. Cell counts were made using a hemocytometer. Tumor growth was followed by measuring three perpendicular tumor diameters (a, b and c, where a > b > c) with a vernier caliper. The tumor volume was estimated using the ellipsoid formula V = (π/6)abc. The mean tumor volume with the corresponding standard deviation of three determinations was calculated for each experimental group. Mice with a non-palpable tumor at day 60 after the treatment were designated as cured. Tumor doubling time (DT, in days) was determined for each individual tumor as the time needed to double the initial tumor volume. For each experimental group the mean DT and its standard deviation were calculated. Histopathological study of the Ehrlich tumor Histologic sections from each tumor were cut along the largest diameter. They were fixed in 10% formalin solution and processed by the paraffin method. Hematoxylin and eosin stained slides were used to evaluate the presence of necrosis and were examined under an Olympus light microscope. The extent of necrosis was defined as the percentage of necrotic region relative to the whole area of the tumor section. The peritumoral alterations were evaluated as none (-), slight (+), moderate (++) and severe (+++). Electrochemical treatment To supply electrochemical treatment, a high-stability, low-noise DEC source was built at the National Center for Applied Electromagnetism (CNEA). The electrode configuration consisted of a multi-electrode array formed by two anodes and two cathodes inserted into the base perpendicular to the tumor long axis, keeping a distance of about 3 mm between them. Cathodes and anodes were connected in alternating sequence. This multi-electrode array was proposed taking into account the results reported by Chou et al. [5]. All electrodes were cleaned and sterilized in alcohol prior to use. Platinum electrodes 0.7 mm in diameter and 20 mm long were used. After the electrodes were inserted, they were connected to the DEC source. In order to find the thresholds of electrical charge at which Ehrlich and fibrosarcoma Sa-37 tumors are completely destroyed, different amounts of electrical charge in the range of 5.5 to 110 C/cm^3 were used. From this range of electrical charge, three values were chosen to show the DEC effectiveness in both types of tumor. When the tumors reached approximately 0.5 cm^3 in the BALB/c mice, a single-shot electrotherapy was supplied (day zero). For each type of tumor, four groups of 10 randomly assigned mice each were formed. For the Ehrlich tumor the groups formed were: control group (CG1), a group treated with an electrical charge of 36 C/cm^3 (18.0 C in 0.5 cm^3) at 6.7 mA for 45 min (TG1-1), a group treated with 63 C/cm^3 (31.5 C in 0.5 cm^3) at 11.7 mA for 45 min (TG1-2), and a group treated with 92 C/cm^3 (46.0 C in 0.5 cm^3) at 17 mA for 45 min (TG1-3).
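The quantities defined in this section are simple to compute; the following sketch (Python; function names are ours, and the linear interpolation is one plausible reading of the doubling-time definition) collects the ellipsoid volume estimate, the doubling time, and the delivered charge implied by each current-time pairing:

import numpy as np

def tumor_volume(a, b, c):
    # Ellipsoid approximation from three perpendicular diameters (a > b > c).
    return (np.pi / 6.0) * a * b * c

def doubling_time(days, volumes):
    # First day at which the tumor reaches twice its initial volume,
    # linearly interpolated between successive measurements.
    v0 = volumes[0]
    for i in range(1, len(volumes)):
        if volumes[i] >= 2 * v0:
            d0, d1, w0, w1 = days[i - 1], days[i], volumes[i - 1], volumes[i]
            return d0 + (2 * v0 - w0) * (d1 - d0) / (w1 - w0)
    return float("inf")   # never doubled during observation (the infinite-DT entries)

def dec_dose(current_mA, minutes, volume_cm3):
    # Total charge Q = I * t (in coulombs) and charge density (C/cm^3).
    charge = (current_mA / 1000.0) * (minutes * 60.0)
    return charge, charge / volume_cm3

q, q_density = dec_dose(17, 45, 0.5)   # TG1-3: 45.9 C, ~92 C/cm^3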
For the fibrosarcoma Sa-37 tumor the groups formed were: control group (CG2), a group treated with 36 C/cm^3 (18 C in 0.5 cm^3) at 6.7 mA for 45 min (TG2-1), a group treated with 63 C/cm^3 (31.5 C in 0.5 cm^3) at 11.7 mA for 45 min (TG2-2), and a group treated with 80 C/cm^3 (40.0 C in 0.5 cm^3) at 14.8 mA for 45 min (TG2-3). A dose of 105 C/cm^3 (52.5 C in 0.5 cm^3) at 19.4 mA for 45 min was supplied to 10 mice (5 bearing Ehrlich tumors and 5 bearing fibrosarcoma Sa-37 tumors). A dose of 110 C/cm^3 (55 C in 0.5 cm^3) at 20.3 mA for 45 min was likewise supplied to 10 mice (5 bearing Ehrlich tumors and 5 bearing fibrosarcoma Sa-37 tumors). These doses were used to evaluate the therapeutic and adverse effects of DEC above 100 C/cm^3. A control group of 10 mice was formed for each type of tumor. In order to examine the effects of direct electric current in the Ehrlich tumor from a histological point of view, two experimental groups were formed: a control group (CG-A) and a group treated with 77 C/cm^3 (27.0 C in 0.35 cm^3) at 10 mA for 45 min (TG-A). This treated group was divided into three subgroups, TG1-A, TG2-A and TG3-A, to show the tumoral and peritumoral findings at 1, 2 and 4 days after DEC treatment. Each experimental group consisted of 6 mice. When the Ehrlich tumor volumes reached 0.35 cm^3, two anodes and two cathodes were inserted into the base perpendicular to the tumor long axis and a single-shot electrotherapy was supplied (day zero). In all experiments, before treatment the DEC was increased gradually, step by step, over two minutes until the desired intensity was reached. During treatment it was kept constant and continually monitored. The voltage was also continually monitored; it varied between 5 and 25 V, in accordance with the change of tissue resistance during the current application. The total electrical charge was calculated in real time. After a single application of the intended dose, the treatment was stopped; the current was then decreased step by step over two minutes until its intensity was 0 mA. During electrotherapy, the mice were firmly restrained, without obvious discomfort; therefore no anesthesia was necessary. In the control groups, four electrodes were placed into the base perpendicular to the tumor long axis without applying any direct current (0 mA). The animals of these groups were firmly fixed but received no DEC, and they showed uneasy and quick breathing during their fixation. Survival rates of the mice bearing both types of tumor were determined for each experimental group. The survival rate (in %) was defined as the ratio between the number of live animals and the total number of animals, multiplied by 100%. Survival and mortality checks were made daily. Histopathological study of the tumor Histologic sections from each tumor were cut along the largest diameter. They were fixed in 10% formalin solution and processed by the paraffin method. Hematoxylin and eosin staining was used. Each section was divided into four microscopic fields in order to calculate the necrosis percentage through a panoramic lens. This percentage was calculated as the ratio between the necrosis area and the total tumor area, multiplied by 100%. Statistical criteria The nonparametric one-tailed Wilcoxon-Mann-Whitney rank sum test was used to compare volumes between the DEC-treated groups and their respective control groups.
Survival curves for the three treatment groups of mice for each tumor type were estimated using the Kaplan-Meier product-limit estimator [6]. McNemar's statistical criterion was used for comparing the main histopathological findings in peritumoral zones in animals from CG-A and TG-A. P values of less than 0.05 were considered significant. The mean value and its standard error are reported for each experimental group. Results As shown in Table 1 and Figure 1, Ehrlich tumors in DEC-treated mice were significantly inhibited compared with tumors of untreated mice (P < 0.02). This tumor growth inhibition following DEC treatment was observed in every individual mouse. There were also significant differences among the treated groups, most evident for TG1-3 (P < 0.05). A similar effect of DEC treatment was observed in fibrosarcoma Sa-37 bearing mice (Table 1 and Fig. 2). In these mice, DEC treatment also resulted in significant inhibition of tumor growth (P < 0.02). For this type of tumor, significant differences among the treated groups were also observed, most evident for TG2-3 (P < 0.05). The results of this study revealed that the sensitivity of the Ehrlich and fibrosarcoma Sa-37 tumors was dose dependent; the sensitivity of both types of tumor to DEC increased with the amount of electrical charge (Table 1 and Figs. 1 and 2). These results also made evident that the fibrosarcoma Sa-37 tumor was more sensitive to DEC than the Ehrlich tumor at the same amount of electrical charge (TG1-1 compared with TG2-1, and TG1-2 compared with TG2-2); for these doses the differences were significant (P < 0.05). For doses of 36 and 63 C/cm^3, Ehrlich tumors partially regressed for 2 and 4 days, respectively; at the same doses, fibrosarcoma Sa-37 tumors showed partial regressions lasting 4 and 5 days. Both eventually regrew. Complete regression of the Ehrlich tumor was observed 25 days after treatment with 92 C/cm^3 (Table 1 and Fig. 1), whereas for the fibrosarcoma Sa-37 tumor it was observed 15 days post-treatment with 80 C/cm^3 (Table 1 and Fig. 2). At 60 days post-treatment, the tumors were non-palpable in TG1-3 and TG2-3. For these doses there were no significant differences (P > 0.05) in the growth of the two types of tumor after treatment; however, there were significant differences in the time required for each type of tumor to be completely destroyed (P < 0.05). Among the untreated tumors, the fibrosarcoma Sa-37 tumor grew more quickly than the Ehrlich tumor, and its DT was 0.7 times that of the Ehrlich tumor (Table 1). The overall survival curves of the mice bearing Ehrlich and fibrosarcoma Sa-37 tumors are shown in Figures 3 and 4, respectively. These figures show that, for both types of tumor, the survival rate of the mice treated with DEC was significantly greater than that of their respective untreated mice (P < 0.001). These figures also show that the cure rates were 80% (8/10) for the Ehrlich tumor (TG1-3) and 90% (9/10) for the fibrosarcoma Sa-37 tumor (TG2-3). Significant differences between the survival rates of the mice treated with different amounts of electrical charge (P < 0.05) were also found, being most marked for TG1-3 and TG2-3 for the Ehrlich and fibrosarcoma Sa-37 tumors, respectively.
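As a reference for the survival analysis just described, a compact implementation of the Kaplan-Meier product-limit estimator (Python with NumPy; the input layout is ours):

import numpy as np

def kaplan_meier(time, event):
    # time: follow-up time of each mouse (days)
    # event: 1 if the mouse died at `time`, 0 if censored (e.g., sacrificed)
    time, event = np.asarray(time, float), np.asarray(event, int)
    s, curve = 1.0, []
    for t in np.unique(time[event == 1]):          # each distinct death time
        at_risk = np.sum(time >= t)                # animals still under observation
        deaths = np.sum((time == t) & (event == 1))
        s *= 1.0 - deaths / at_risk                # product-limit update
        curve.append((t, s))
    return curve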
For the dose of 36 C/cm^3 there were no significant differences between the two types of tumor (P > 0.05); however, for the other doses there were significant differences (P < 0.05). The cured mice were sacrificed at 100 days post-treatment. Before sacrifice, the animals were active and in good physical condition, with adequate body weight, good posture, and good coats of hair. After sacrifice, the histopathological findings in each of these mice showed complete disappearance of the tumor and evidence of healing. In the treated mice, very little necrotic tissue remained within a fibrous scar. Serology and histological findings of the organs revealed neither abnormalities nor metastases (results not shown). One mouse in TG1-3 died 1 day after DEC treatment. The histological findings revealed damage in the lungs due to hemorrhage and a small circular necrosis. Metastases were not observed in this mouse. The deaths of one mouse 25 days post-treatment in TG1-3 and of another 50 days post-treatment in TG2-3 were attributed to cannibalism among the mice, probably because of the blood present in the tumors after DEC treatment. All mice treated with amounts of electrical charge above 100 C/cm^3 died during the first 24 hours after DEC treatment. The histological findings showed severe alterations in the liver and kidney and an increase in the weight of these organs. Metastases were not observed in any of these mice. The histopathological findings revealed that in the untreated Ehrlich tumors (CG-A) the necrotic area was mainly central and constituted approximately 20% of the total tumor area (Fig. 5). In tumors treated with DEC, however, a wide necrotic area was observed. The tumor necrosis percentages of the treated groups at 1, 2 and 4 days after treatment were approximately 2.7, 3.9 (Fig. 6) and 4.7 (Fig. 7) times higher than that of CG-A, respectively. These differences were significant (P < 0.02). There were also significant differences between the necrosis percentages of treated tumors at 1, 2 and 4 days (P < 0.02). There was a lack of well-defined necrosis zones surrounding the electrodes. The morphologic pattern of the necrotic cell mass observed was coagulative necrosis. The dead tissue became both swollen and firm in consistency. Preservation of the basic profile of the coagulated cancerous cells and nuclear karyolysis were also observed, as were lysed erythrocytes. This type of necrosis was accompanied by an accumulation of neutrophil polymorphonuclear leucocytes. Lymphocytes (L) and plasma cells (CP) were observed in all the tumors in CG-A and TG-A, with no significant differences (Table 2, Fig. 8). Neutrophil infiltration (N) and vascular congestion (CV) were observed in all animals from TG-A (Figs. 9 and 10). The intensity grades of these peritumoral findings were severe; however, the intensity grade of the monocyte infiltration (M) was slight to moderate in TG-A. Edema and an acute inflammatory response were observed 1, 2 and 4 days after treatment (Figs. 9 and 10). These peritumoral findings were not present in any of the animals from CG-A (Table 2). There were significant differences (P = 0.008) between the peritumoral findings of CG-A and TG-A. In this experiment no mouse died of intercurrent disease during or after the treatment. Before sacrifice, the animals were active and in good physical condition, with adequate body weight, good posture, and good coats of hair.
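The P = 0.008 comparison above follows from McNemar's criterion applied to the presence or absence of each finding. An exact version of that test is short enough to state (Python; how the CG-A/TG-A animals were paired is not detailed in the text, so this is only the generic form of the test):

from math import comb

def mcnemar_exact(b, c):
    # b, c: counts of the two kinds of discordant pairs.
    # Exact two-sided P value from a Binomial(b + c, 0.5) reference.
    n, k = b + c, min(b, c)
    p = sum(comb(n, i) for i in range(k + 1)) / 2.0**n
    return min(1.0, 2.0 * p)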
Discussion The results of this study demonstrate that DEC has a marked antitumor effect: a single-shot electrotherapy delivered via four platinum electrodes inserted into the base of the fibrosarcoma Sa-37 and Ehrlich murine tumors significantly retarded their growth compared with the respective control groups. The fact that tumor regression increases with the amount of electrical charge may be explained by the necrosis induced in the tumor by DEC depending directly on its intensity, which agrees with the results of Robertson et al. [7]. An additional experiment corroborated that the decrease in each treated tumor volume is due to the higher necrosis percentage induced in the tumor by DEC action. The histopathological findings in mice 100 days post-treatment suggest that an increase of the dose brings about an increase in the percentage of tumor necrosis and in the necrotic overlap. These findings also confirm that the results of the pathology study were consistent with the survival study. We believe that necrosis is the predominant mechanism of cell death, as indicated by the cellular tumefaction (cellular swelling), cell rupture, breakdown of organelles, and acute inflammatory response observed during the first 4 days post-treatment in all treated tumors, a result that agrees with those previously reported by Dodd et al. [8] and Holandino et al. [9]. Von Euler et al. [10] demonstrated that the appearance of the necrosis depends on the polarity of the electrode. The findings of necrosis observed by these researchers around anode and cathode electrodes were also observed in all treated tumors (coagulative necrosis, extravasation of blood cells, nuclear karyolysis and edema), which is explained by the fact that both types of electrode were inserted into the tumors. On the other hand, Von Euler [11] observed both apoptosis and necrosis around the anode but only necrosis around the cathode. The necrosis may be due to the ischemia observed in all tumors treated with DEC, which could lead to irreversible cell injury of the tumor cells and therefore to cellular death. This could be related to other experimental findings reported after DEC treatment, such as degradation of phospholipids, loss of high-energy phosphate and increase of intracellular calcium [7], membrane damage [5], ionic imbalance [2,12], mitochondrial alterations [9] and ischemia/reperfusion injury [13]. The prolonged acute inflammation observed during the 4 days after DEC treatment may be explained by the persistent leukocyte infiltrate also observed among the peritumoral findings. This persistent leukocyte infiltrate (an essential feature of the inflammatory response) becomes a harmful agent because, during chemotaxis, the leukocytes amplify the effects of the initial inflammatory stimulus through the liberation of potent mediators (enzymes, chemical mediators and toxic oxygen radicals) that lead to both endothelial and tissue damage. This leukocyte infiltrate may also activate the immune system [14]. Reactive oxygen species have been shown to play an important role in all these processes; in addition, these species are essential elements in the emergence of an inflammatory process [14,15]. We therefore speculate that the oxidative burst may be the immediate cause of cell death in both tumors, although this was not investigated in this study.
These facts and the high necrosis percentages shown in this study may lead to the complete destruction of solid tumors treated with DEC. The complete disappearance of the Ehrlich and fibrosarcoma Sa-37 tumors achieved at 92 and 80 C/cm^3, respectively, suggests that each tumor model has a threshold of electrical charge above which it is completely destroyed. This threshold depends on the electrical nature of the tumor and its physiological characteristics (stage, volume and histogenic characteristics). This fact explains the cure of the mice and why the tumors did not double their initial volumes during the observation time (infinite DT, represented in Table 1 by the ∞ symbol). The experimental data revealed that the fibrosarcoma Sa-37 tumor showed higher sensitivity and curability to DEC than the Ehrlich tumor, and that both the tumor response and the survival rate of the mice were DEC dose dependent. However, among the untreated tumors, fibrosarcoma Sa-37 showed a shorter DT than the Ehrlich tumor, indicating the greater aggressiveness of fibrosarcoma Sa-37. The mortality observed in all animals treated with amounts of electrical charge above 100 C/cm^3 could be explained by the severe damage induced by DEC in the kidney and liver. Griffin et al. [12] explained this result by the induced serum electrolyte imbalance resulting from a metabolic load due to the breakdown products of the tumors. The hemorrhage observed in the lungs of the mouse that died 1 day after DEC treatment in TG1-3 may be explained by vascular rupture and/or perforation of blood vessels caused mechanically by the insertion of an electrode. The small circular necrosis also observed in this mouse's lungs may be a consequence of the cytotoxic action of DEC. The uneasy and quick breathing observed in both control and treated groups during the fixation of the mice did not have any influence on the results obtained in this study. Conclusions The data presented indicate that electrotherapy with low-level DEC is feasible and effective in the treatment of the Ehrlich and fibrosarcoma Sa-37 tumors. Our results demonstrate that the sensitivity of these tumors to direct electric current and the survival rates of the mice depended on both the amount of electrical charge and the type of tumor. Also, complete regression of each type of tumor is obtained above a threshold amount of electrical charge. Competing interests The author(s) declare that they have no competing interests. Authors' contributions HCC conceived the study and participated in its design and coordination. He also carried out the inoculation of the tumor cells in the mice and the measurement of the tumor volumes and the survival rate of the mice, and helped to draft the manuscript. MCSQ participated in the design of the study, in the assessment of the histological findings of the organs and of the tumoral and peritumoral findings, and helped to draft the manuscript. LEBC carried out the inoculation of the tumor cells in the mice, conceived of and participated in the design of the study, performed the statistical analysis, and helped to draft the manuscript. RNPB and DSL participated in the design of the study and contributed to the elaboration of this manuscript. MFS participated in the design of the study and carried out the serology. OGP and TRG participated in the design of the study and contributed to the elaboration of this manuscript. All authors read and approved the final manuscript.
JLMF participated in the design of the study and performed the statistical analysis. Pre-publication history The pre-publication history for this paper can be accessed here: | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC539271.xml |
526184 | Microarrays for global expression constructed with a low redundancy set of 27,500 sequenced cDNAs representing an array of developmental stages and physiological conditions of the soybean plant | Background Microarrays are an important tool with which to examine coordinated gene expression. Soybean ( Glycine max ) is one of the most economically valuable crop species in the world food supply. In order to accelerate both gene discovery and hypothesis-driven research in soybean, global expression resources needed to be developed. The applications of microarrays for determining patterns of expression in different tissues or during conditional treatments by dual labeling of the mRNAs are unlimited. In addition, discovery of the molecular basis of traits through examination of naturally occurring variation in hundreds of mutant lines could be enhanced by the construction and use of soybean cDNA microarrays. Results We report the construction and analysis of a low redundancy 'unigene' set of 27,513 clones that represent a variety of soybean cDNA libraries made from a wide array of source tissue and organ systems, developmental stages, and stress- or pathogen-challenged plants. The set was assembled from the 5' sequence data of the cDNA clones using cluster analysis programs. The selected clones were then physically reracked and sequenced at the 3' end. In order to increase gene discovery from immature cotyledon libraries that contain abundant mRNAs representing storage protein gene families, we utilized a high density filter normalization approach to preferentially select more weakly expressed cDNAs. All 27,513 cDNA inserts were amplified by polymerase chain reaction. The amplified products, along with some repetitively spotted control or 'choice' clones, were used to produce three 9,728-element microarrays that have been used to examine tissue-specific gene expression and global expression in mutant isolines. Conclusions Global expression studies will be greatly aided by the availability of the sequence-validated and low redundancy cDNA sets described in this report. These cDNAs and ESTs represent a wide array of developmental stages and physiological conditions of the soybean plant. We also demonstrate that the quality of the data from the soybean cDNA microarrays is sufficiently reliable to examine isogenic lines that differ with respect to a mutant phenotype and thereby to define a small list of candidate genes potentially encoding or modulated by the mutant phenotype. | Background Genes of higher plants are expressed in a coordinated fashion during development of tissue and organ systems and in response to different environmental conditions. This regulation may be tightly linked for some sets of genes, for example, in a specific biochemical pathway. Expression of regulatory genes may modulate the expression of key genes or entire sets of genes in individual pathways. The investigation of single-gene expression patterns as determined by RNA blotting or quantitative reverse transcriptase PCR has been used to understand how different temporal, developmental, and physiological processes affect gene expression. With recent advances in genomics, very large numbers of genes can now be simultaneously analyzed for their expression levels in a comparative fashion between two biological states using microarray or biochip technology. Several techniques for the 'global' analysis of gene expression have been described [1-4].
These include (a) high density expression arrays of cDNAs on conventional nylon filters with radioactive probing; (b) microarrays or 'chips' using fluorescent probes; and (c) serial analysis of gene expression (SAGE). Methods for global expression analysis require either knowledge of the entire genome of an organism or accumulation of a large EST (expressed sequence tag) database for the organism. In soybean ( Glycine max ), more than 286,000 5' EST sequences have been generated and deposited in public databases [[5], and this report]. These 5' ESTs represent a collection of 80 cDNA libraries from different tissue and organ systems at various stages of development and under diverse physiological conditions. Collaborative, multidisciplinary research to enhance the development of plant genome resources and information that would be publicly available for gene expression, gene tagging, and mapping has been a priority in recent years in plants of agronomic importance, including soybean [6]. Here, we report the development, qualification, and use of 27,513 members of a low redundancy set of tentatively unique cDNAs or 'unigenes' in soybean. The 3' sequence of this set was determined and microarrays constructed. The public availability of the low redundancy clone set, sequence information, and microarrays reported here will greatly enhance gene discovery and genomic-scale research in soybean and other legumes by the community of researchers. For example, we illustrate the use of the 5' and 3' sequence-verified cDNA microarrays to determine organ-specific expression, and we demonstrate their potential to discover the molecular basis of specific mutations in closely related isogenic lines. Results and discussion Cluster analysis of 280,000 ESTs reveals 61,127 'unigenes' in soybean Figure 1 illustrates an overview of the data generation and analysis used to create a low redundancy 'unigene' set and its use in the construction of cDNA microarrays. The 5' EST sequence information was used as the raw material, including the addition of over 30 new cDNA libraries and more than 160,000 5' sequences since an initial report [5]. In total, over 80 libraries and over 280,000 5' ESTs have now been generated from many tissue and organ systems at various stages of development, including roots, shoots, leaves, stems, pods, cotyledons, germinating shoot tips, flower meristems, tissue culture-derived embryos, and pathogen-challenged tissues. These libraries, with the one exception of library Gm-r1030 described below, were non-normalized. Thus, the mRNAs that are more abundant in various tissue and organ systems will be more highly represented in the EST collection. To remove redundancy and identify unique sequences, the ESTs were assembled using the Phrap program [7] into contiguous regions (contigs) representing overlapping sequences based on EST sequence similarity. In this way, longer overlapping sequences of expressed genes are assembled. Identical sequences that represent redundant mRNAs of various sizes are also recognized, and the number of sequences in a contig in a non-normalized cDNA library is a rough approximation of the relative abundance of that particular mRNA within that tissue. Figure 1 Steps in the construction and documentation of cDNA microarrays using a low redundancy soybean 'unigene' set of 27,513 cDNA clones. See Methods for details.
(a) NSF Plant Genome Program project "A Functional Genomics Program for Soybean" (NSF DBI #9872565); (b) Soybean Public EST project [5]; (c) Washington University Genome Center, St. Louis, MO; (d) Center for Computational Genomics and Bioinformatics, University of Minnesota, Minneapolis, MN [20]; (e) Genome Systems, St. Louis, MO, until its closure; (f) Keck Center for Comparative and Functional Genomics, University of Illinois, Urbana, IL [21]; (g) Soybean Functional Genomics, Department of Crop Sciences, University of Illinois, Urbana, IL [30]; (h) databases maintained by the National Center for Biotechnology Information, Bethesda, MD [22]. The combined number of contigs and singletons (sequences that occur only once) resulting from a computer assembly of ESTs is an estimate of the number of unique genes in the organism. As the number of ESTs grows, the number of unique genes in the organism will continue to be refined. Our current contig analysis of the entire public EST collection for soybean of 286,868 sequences yields 61,127 'unigenes', of which 36,357 are contigs and 24,770 are singletons. The finding of 61,127 soybean unigenes by EST cluster analysis agrees well with an independent contig and unigene assembly in the databases of The Institute for Genomic Research (TIGR), which shows 30,084 contigs and 37,601 singletons for a total of 67,826 tentatively unique sequences from among 334,730 sequences representing all publicly available sequences clustered in Release 11 [8]. Other large scale plant EST collections as analyzed by the TIGR gene indices [8] show 42,301 unigenes for Arabidopsis thaliana (of 247,429 ESTs), 36,976 for Medicago truncatula (of 189,919 ESTs), 31,012 for tomato (of 156,645 ESTs), 109,509 for wheat (of 494,195 ESTs) and 56,364 for maize (of 377,188 ESTs). The complete genome sequence for Arabidopsis has revealed an estimated 26,000 genes [9]. Of course, the unigene sets determined by EST clustering are only estimates of the number of unique genes in an organism and depend on the number of ESTs available, the technologies used to make the libraries, and the bioinformatic methods used to assign clusters. The soybean genome is approximately 1.2 × 10^9 bp, which is about 7.5 times the size of the Arabidopsis genome and twice that of tomato, but less than half the size of the maize genome. Thus, it is not unexpected that soybean may have a larger number of unigene clusters than Arabidopsis or tomato, for example. Although soybean is not hexaploid in origin as is modern wheat, it is thought to be an ancient autotetraploid, and many examples of duplicate loci exist in soybean. Virtual subtraction using high density cDNA filter arrays increases gene discovery in immature cotyledon libraries that abundantly express storage protein gene transcripts Certain tissues contain large amounts of specialized transcripts. Developing soybean cotyledons, for example, contain large amounts of RNA transcripts representing highly expressed storage protein genes. In order to select for some of the weakly expressed cDNA clones from a mid-maturation stage cotyledon library, we used a virtual subtraction approach with high density filters. A total of 18,000 bacterial clones containing cDNAs from library Gm-c1007 (immature cotyledons of the 100–300 mg fresh weight range from the Williams variety) were printed in high density on nylon filters and probed with 33P-cDNA produced by reverse transcription of total mRNA isolated from immature soybean cotyledons.
Figure 2A shows the highly complex hybridization pattern resulting from using total mRNA from immature cotyledons to probe the high density filter. The intensity of each dot represents the hybridization signal and the relative abundance of that cDNA in the message population. The phosphorimager pattern was quantified by image analysis software and 5000 of the lowest expressing clones were selected. The cDNA clones were physically reracked to a new set of 384 well plates to form the filter-normalized reracked library designated Gm-r1030. The 5' ends of these clones were then sequenced at Washington University. Figure 2B shows that 1,528, or 85%, of the 1799 sequences within the filter-driven reracked library Gm-r1030 were novel and not found in any of the 931 sequenced clones from the Gm-c1007 source cDNA library. Thus, the filter normalization method was an effective way to identify cDNAs with low expression and increase gene discovery in libraries that contain large numbers of transcripts from highly expressed genes. The virtual subtraction method using high density cDNA filters compares favorably with other mRNA subtraction methods used to create normalized libraries during the cloning process [10]. Figure 2 Gene discovery is increased by selection of weakly expressed cDNA clones from a cDNA library made from immature cotyledons. (A) Phosphorimager pattern: A high density membrane containing 18,432 double spotted colonies from the Gm-c1007 cDNA library made from immature cotyledons was hybridized with 33P-labeled cDNAs transcribed from mRNAs isolated from immature cotyledons. (B) Graphical representation of the new cDNAs selected by the normalization process using filter hybridization. Circles represent the 931 total sequences obtained from the non-normalized source cDNA library Gm-c1007 versus the 1799 sequences of Gm-r1030 that were selected as weakly expressed sequences from the filter hybridization experiments shown in part (A). The intersection of the circles represents sequences common to both sets. H = number of sequences with hits in the databases; N = number of sequences that did not have a hit in the databases; and T = total number of sequences. Selection and 3' sequencing of 27,513 soybean cDNAs from the soybean unigene set to use in microarrays High density cDNA arrays of bacterial cultures spotted on nylon membranes and probed with radioactively labeled transcripts are useful for gene discovery as illustrated in Figure 2 above, but they have limited use for quantifying the relative abundance of transcripts expressed in independent mRNA samples. An alternative method to the high density filters is microarray technology [2, 3] in which PCR-amplified cDNA inserts, or oligonucleotides, are printed on glass slides and probed with mRNA populations that have been separately labeled with different fluorescent probes. To enable global expression studies, the ideal would be to have each gene represented at least once on an array. Toward this aim, we selected 27,513 of the cDNA clones by four successive clustering assemblies of the soybean cDNAs performed as the 5' EST data accumulated. Table 1 shows the four reracked sets of cDNA clones (Gm-r1021, Gm-r1070, Gm-r1083, and Gm-r1088) and the level of uniqueness within them as defined by Phrap and CAP3 analysis [7, 11]. Thus, the clustering was conducted periodically as the number of 5' EST input sequences grew in size.
After each clustering, the previously selected and reracked cDNAs were excluded from subsequent reracking lists. In order to develop a 'unigene' set for soybean, a single representative of each contig was chosen. To select a representative from each contig, we chose the cDNA clone corresponding to the EST that was found at the furthest 5' region of each contig. Thus, we selected the cDNA clones most likely to be near full-length. Table 1 Comparison of the percent unique sequences as determined by either CAP3 or Phrap analysis for the 5' and 3' ESTs represented in each of the four successive reracked clone subsets that constitute the low redundancy soybean 'unigene' set Rerack order & name Number cDNAs No. ESTs clustered a Cap3 b Phrap b % Unique ESTs c Cap3 or Phrap 1. Gm-r1021 4,089 2,797 5' 2,202 s 259 c 2,054 s 334 c 88.0% : 80.4% 1. Gm-r1021 4,089 2,797 3' 1,836 s 413 c 1,682 s 505 c 85.4% : 78.2% 2. Gm-r1070 9,216 6,938 5' 5,566 s 620 c 5,116 s 831 c 89.2% : 78.0% 2. Gm-r1070 9,216 6,938 3' 4,284 s 1,124 c 3,900 s 1,340 c 85.7% : 75.5% 3. Gm-r1083 4,992 3,879 5' 3,426 s 200 c 3,289 s 260 c 93.5% : 79.7% 3. Gm-r1083 4,992 3,879 3' 2,474 s 599 c 2,256 s 723 c 91.5% : 76.8% 4. Gm-r1088 9,216 7,434 5' 6,295 s 521 c 5,909 s 745 c 91.7% : 89.5% 4. Gm-r1088 9,216 7,434 3' 4,719 s 1,173 c 4,152 s 1,513 c 79.3% : 76.2% Entire set, 1–4 27,513 27,513 5' d 21,873 s 2,402 c 18,663 s 3,966 c 88.2% : 81.2% Entire set, 1–4 27,513 21,048 3' 11,959 s 4,156 c 8,341 s 5,641 c 73.0% : 63.3% a Unless otherwise noted, the ESTs included in the cluster analysis represent only the cDNAs for which both the 5' and 3' sequences are known and for which the read length is over 200 bases. b The number of singletons (s) and number of contigs (c) are shown. c The % unique sequences is the number of singletons plus the number of contigs divided by the total number of ESTs. d In this analysis, all 5' sequences were included even if the corresponding 3' sequence was not known. EST clustering will overestimate the number of unique genes, as some of the shorter ESTs will not overlap and thus are falsely counted as independent, unique sequences. However, the clustering analysis can also falsely lump non-identical members of gene families into the same contig based on conservation of sequence similarity in the coding region. The 3' sequencing is especially useful for resolving both of these issues, as there is generally more variation in the 3' UTR in plant genes than in the coding region. For those reasons, and as a quality control of the reracking process, we sequenced the 3' ends of the reracked cDNAs. From the 27,513 total 3' sequencing attempts on the tentatively unique cDNAs represented in Table 1, a total of 22,088 sequences met the criteria of high quality sequence. The 3' sequencing was more problematic than the 5' sequencing due to termination of the sequencing reactions at some of the long polyA tails characteristic of soybean and many other plant cDNAs. An anchored primer was used to increase the success rate (see Methods). The average length of the 3' ESTs was 526 bases compared to the average 5' sequence read length of 474 for 280,094 ESTs. Since the clustering analyses were performed at successive intervals as the EST collection grew in size, we repeated the Phrap contig analyses separately using only the input sequences of each cDNA rerack for which both a 5' and 3' EST were known. We also performed a CAP3 analysis [11].
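The percent-unique figures in Table 1 follow directly from footnote c. As a minimal sketch in Python, with the input values transcribed from the table:

```python
def percent_unique(singletons: int, contigs: int, total_ests: int) -> float:
    """Percent unique ESTs per Table 1, footnote c:
    (singletons + contigs) / total ESTs clustered."""
    return 100.0 * (singletons + contigs) / total_ests

# Gm-r1021, 5' ESTs, CAP3: 2,202 singletons and 259 contigs among 2,797 ESTs
print(round(percent_unique(2202, 259, 2797), 1))  # 88.0, matching the table
```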
Table 1 shows that CAP3 values for the 5' sequences yielded 88.0 to 93.5% unique sequences, while the Phrap values were slightly lower at 78.0 to 89.5% unique sequences. Interestingly, the estimate of unique sequences using the 5' EST data did not change substantially from reracked library r1021, where only approximately 6800 ESTs were clustered, through library r1088, where over 250,000 sequences were clustered. A separate cluster analysis of only the 27,513 input 5' sequences revealed 81.2 to 88.2% unique sequences by Phrap and CAP3 analyses, respectively. The 3' ESTs were also separately subjected to CAP3 or Phrap analysis. The CAP3 values showed a slightly higher level of uniqueness (or lower level of redundancy), with 79.3 to 91.5% for CAP3 in the successive clustering analyses versus 75.5 to 78.2% unique sequences as determined by Phrap. An overall figure of 73.0% for the CAP3 analysis on the 21,048 total 3' sequences clustered was found versus 63.3% for Phrap. The differences between the 5' and 3' levels of uniqueness (i.e., 88.2% versus 73.0% for the entire sets as determined by CAP3) can be explained by the nature of reverse transcriptase action. Reverse transcription was primed using an oligo dT primer, so the cDNAs begin at the 3' end and terminate randomly at variable sites as the enzyme progresses to the 5' end of the mRNA template. Thus, 5' ESTs often begin at variable sites. Therefore, even though two 5' ESTs may have originated from the same mRNA transcript, they will not cluster if they are non-overlapping and will be counted as two separate ESTs. The 3' soybean EST reads begin just after the poly A tail and produce longer average read lengths than the 5' soybean ESTs; thus, the 3' ESTs are more likely to form an overlapping contig if there is any redundancy among them. The 5' and 3' sequences (where known) of each soybean unigene were queried against the non-redundant (nr) database with BLASTX [12]. Annotations were assigned to each 5' and 3' EST if the best match had an e value of ≤10^-6. Table 2 shows a complete cross list of all identifiers for each member of the 27,513 soybean unigenes in the reracked libraries including the 5' and 3' annotations. Table 2 Information contained in a comprehensive cross list of soybean unigene clone IDs. Shown are various identifiers and annotations for 27,513 reracked cDNAs used in microarray construction. The full list is provided with arrays and available upon request. Cross List Identifiers (for each cDNA clone) Example (one of 27,513 cDNAs) Comments Reracked Clone ID Gm-r1021-12 The individual cDNA clone ID in the 384-well destination plates after reracking or rearraying of the selected clones from the cDNA source library plates. Reracked Plate ID Gm-r1021 #1 The 384-well reracked plate name in increments of 384 (i.e., 1, 385, 769, etc.) Reracked row_column position A12 Position of the clone in the 384-well reracked plate Reracked 3' Keck Sequence ID GM210001A21A6 Sequence identifier assigned by the Keck Center for the 3' EST Reracked 3' Genbank Accession AW348131 Genbank assigned accession number for the 3' EST Reracked 3' Annotation glutathione S-transferase GST 22 [Glycine max] Top BLASTX hit for the 3' EST, at an e value of 10^-6 or lower Source Clone ID Gm-c1004-464 The individual cDNA clone ID in the 384-well source plate. Source Plate ID Gm-c1004 #385 The 384-well source plate name in increments of 384 (i.e., 1, 385, 769, etc.)
Source row_column position D8 Position of the clone in the 384-well source plate Source WashU Sequence ID sa26h04.y1 Sequence identifier assigned by Washington University, 5' EST Source 5' Genbank Accession AI442436 Genbank assigned accession number for the 5' EST Source 5' Annotation glutathione S-transferase GST 22 [Glycine max] Top BLASTX hit for the 5' EST at an e value of 10^-6 or lower Source Library Gm-c1004 Name of the cDNA source library Cultivar/Genotype Williams Specific information on the soybean variety or genotype Tissue/Developmental Stage Entire roots of 8-day old seedlings Tissue/organ system/stage from which the cDNA library was constructed Construction of microarrays representing the 27,513 soybean unigene cDNAs The current 'unigene' collection (or tentatively unique sequences) represents low redundancy sets of cDNA clones. We have processed all of these cDNAs for microarrays, as outlined in the Methods section, into three sets of 9,216 cDNAs per array. As shown in Table 3, these include reracked libraries Gm-r1070 (a set of 9,216 cDNA clones from libraries of various developmental stages of immature cotyledons, flowers, pods, and seed coats); Gm-r1021 plus Gm-r1083 (a set of approximately 9,216 cDNA clones from 8-day old seedling roots, seedling roots inoculated with Bradyrhizobium japonicum, 2-month old roots, and whole seedlings); and Gm-r1088 (a collection of 9,216 cDNA clones from a number of libraries made from cotyledons and hypocotyls of germinating seedlings and from leaves and other plant parts subjected to various pathogens or environmental stress conditions, and also from tissue-culture derived somatic embryos). As an example, the Gm-r1070 set contains 3,938 tentatively unique cDNAs that are directly derived from two flower cDNA libraries (Gm-c1015 and Gm-c1016) that were sequenced deeply, with over 14,000 5' ESTs obtained from these two libraries. In addition, a total of 2,639 cDNAs on the array are directly derived from source libraries made from the immature stages of cotyledon development and representing over 11,000 input sequences from the cotyledon stages of seed development. Table 3 Soybean microarrays and low redundancy unigene sets built from the public EST collection Microarrays and Reracked Unigene cDNA sets a Source cDNA Library a No. of cDNAs on array Soybean Variety Soybean tissues b Set 1. Gm-r1070: 9216 cDNAs highly representative of developing seeds and flowers Gm-r1070 Gm-c1016 2242 Williams 82 immature flowers " Gm-c1015 1696 Williams 82 mature flowers " Gm-c1008 869 Williams whole young pods (2 cm) " Gm-c1029 589 Williams immature cotyledons from 25–50 mg fresh weight seed " Gm-c1010 234 Williams immature cotyledons 100–200 mg seed fresh wt. " Gm-c1011 88 Williams immature cotyledons 100–200 mg seed fresh wt. " Gm-c1007 528 Williams immature cotyledons 100–300 mg seed fresh wt. " Gm-c1030 1200 Williams immature cotyledons 100–300 mg seed fresh wt., low expressing cDNAs from Gm-c1007 filter hybridizations " Gm-c1023 89 T157 immature seed coats from seed of 100–200 mg fresh wt. " Gm-c1019 1681 Williams immature seed coats from seed of 200–300 mg fresh wt. Set 2. Gm-r1021+Gm-r1083: 9216 cDNAs highly representative of roots Gm-r1021 (c) Gm-c1004 4224 Williams roots of 8-day old seedlings Gm-r1083 Gm-c1009 1117 Williams roots, 2 month old plants " Gm-c1028 3055 Supernod roots inoculated with B. japonicum " Gm-c1013 820 Williams whole 2–3 week old seedlings Set 3.
Gm-r1088: 9216 cDNAs highly representative of seedlings, leaves, and stressed or pathogen challenged tissues Gm-r1088 Gm-c1019 426 Williams immature seed coats from seed of 200–300 mg fresh wt. " Gm-c1023 929 T157 immature seed coats from seed of 100–200 mg fresh wt. " Gm-c1027 2706 Williams cotyledons of 3- and 7-day-old seedlings " Gm-c1036 613 Jack somatic embryos cultured on MSD 20 for 2 to 9 mo. " Gm-c1075 304 Jack differentiating somatic embryos cultured on MSM6AC " Gm-c1064 707 Williams epicotyl, 2 week old seedling, auxin treatment " Gm-c1065 1309 Williams germinating shoot, cold stressed, 3 day old seedlings " Gm-c1066 191 Williams leaf and shoot tip, salt stressed, 2 wk. old seedling " Gm-c1067 438 Williams 82 germinating shoot, 3 day old seedling, auxin treatment " Gm-c1068 630 Williams 82 leaf, drought stressed, 1 month old plants " Gm-c1072 365 PI 567.374 leaves and shoots from 2–3 week old seedlings induced for SDS symptoms " Gm-c1073 324 Williams 82 leaves and shoots from 2–3 week old seedlings induced for SDS symptoms " Gm-c1074 274 Williams 82 9–11 day old seedlings induced for HR response by P. syringae carrying the avrB gene a More descriptions of the reracked and source libraries are available in Genbank. b Tissues were collected from plants grown in the greenhouse or growth chamber, except for the immature and mature flowers, which were collected from plants grown in the field. c Since the Gm-r1021 reracked library contains 4089 cDNAs, a total of 135 were repeated to obtain an even 9216 when combined with the Gm-r1083 cDNAs. The cDNAs from the sequence-driven, reracked clone sets were amplified by PCR using the Qiagen-purified cDNA templates that were prepared for 3' sequencing (as opposed to amplification of the inserts directly from E. coli cultures containing the plasmid DNA). All 27,513 PCR reactions were performed with generic M13 forward and reverse primers using a robotic pipettor. Approximately 25% of the purified PCR cDNA inserts were subjected to agarose gel electrophoresis for quality control. Of these, the average insert size was estimated to be 1,340 bp for library Gm-r1021, 1,110 bp for library Gm-r1070, 1,259 bp for library Gm-r1083, and 1,269 bp for library Gm-r1088. The 9,216 amplified inserts of each set were singly spotted onto glass slides as outlined in the Methods section. A set of 64 control or 'choice' clones was assembled by hand into one 96-well plate (designated Gm-b10BB) and printed eight times repetitively throughout each array. Thus, the total number of spots on the array is 9,728, consisting of 9,216 cDNAs from the unigene set plus 512 (64 cDNAs × 8 repeats) from the choice clones. The choice clones were selected for various reasons. Some represent constitutively expressed genes (such as ubiquitin and EF1). Some are cDNAs whose expression is restricted to a subset of specific plant tissues (such as Rubisco or seed storage proteins). Some are clones of enzymes representing commonly used antibiotic resistance markers in transgenic plants (such as hygromycin or kanamycin resistance), and 32 are cDNAs that represent at least 13 different enzymes of the flavonoid pathway. The flavonoid pathway was chosen because the corresponding genes often respond to many biotic and abiotic stress conditions and it has been widely studied in plant systems.
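The spot arithmetic can be checked against the grid dimensions given later in the Methods (32 grids of 19 rows × 16 columns, with the first row of each grid reserved for choice clones); a minimal sketch:

```python
# Layout check: 32 grids, each 19 rows x 16 columns; the first row of every
# grid holds choice clones, the remaining 18 rows hold unigene cDNAs.
GRIDS, ROWS, COLS = 32, 19, 16

total_spots = GRIDS * ROWS * COLS          # 9,728 spots per array
choice_spots = GRIDS * 1 * COLS            # 512 = 64 choice clones x 8 repeats
unigene_spots = GRIDS * (ROWS - 1) * COLS  # 9,216 unigene cDNAs

assert (total_spots, choice_spots, unigene_spots) == (9728, 512, 9216)
```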
Soybean microarrays have potential to reveal the molecular basis of a mutant phenotype Figure 3 illustrates an example of the reliability of the soybean microarray approach using dual labeled RNA probes from two near isogenic lines. These data illustrate the potential to discover novel genes by analysis of contrasting probes from mutant and normal lines. In this experiment, we compared two isogenic soybean lines that differ only at the T locus. The T locus controls the color of the pigment in the trichome hairs on the stems, leaves, and pods of the plant and also modifies the composition of the flavonoids and the color of the seed coats. Total RNA extracted from developing seed coats of line XB22A (T/T genotype) was labeled with Cy3 and compared to RNA extracted from the same stage of developing seed coats from an isoline containing the spontaneous mutation 37609 (t*/t* genotype) that was labeled with Cy5. A replicate experiment with a dye swap was also performed. Hybridizations were performed to microarrays constructed with the low redundancy set Gm-r1070 representing cDNAs from seeds, seed coats, and flowers. Figure 3 shows both replicates before and after the flagging and normalization procedures conducted as described in the Methods section. As shown in Figure 3, the normalization procedure serves to compensate for within-slide differences between the Cy3 and Cy5 intensity levels and to shift the majority of spots to the line of equivalent expression between the isogenic lines. Also, as shown in Figure 3, very few of the 9,728 cDNAs on the array were reproducibly found to be expressed differentially in the two genotypes at levels higher or lower than two-fold. A group of 16 of these (encircled by a line) are overexpressed in the T/T line relative to the t*/t* line by approximately three-fold. These cDNAs correspond to the flavonoid 3' hydroxylase cDNAs that were repetitively spotted on the array as members of the 'choice' clone set. Table 4 shows the Cy5 and Cy3 intensities and the ratios of the two replicate slides for all cDNA spots that exceeded a two-fold difference after normalizations within each slide and between the replicate slides. Only 23 cDNAs were found to have values that met the criteria of exceeding a two-fold difference in both of the replicate slides. Of these, 16 were the repetitively spotted flavonoid 3' hydroxylase cDNAs. Only an additional 22 cDNAs (known as partial hits) had values exceeding two-fold levels in one but not both of the replicate slides. Thus, out of over 9200 cDNAs represented on the array, there are relatively few that show differential expression between the RNAs in the normal and mutant lines. Figure 3 The scatter plots of the log values of expression data from two duplicate microarray slides before (left) and after flagging and normalization (right). RNAs were extracted from seed coats of the 50–75 mg per seed fresh weight range by standard methods [13]. Replicate 1 was hybridized with Cy5 labeled RNA from seed coats of the T/T genotype and Cy3 labeled RNA from seed coats of the isogenic t*/t* mutant line. Replicate 2 is a dye swap experiment in which the mRNA from the T/T genotype is labeled with Cy3 and the isogenic t*/t* line is labeled with Cy5. The lines in each graph indicate expression either two-fold higher or two-fold lower than equivalent levels of expression.
The dots encircled by the box represent repeats of flavonoid 3' hydroxylase cDNAs on the array that are overexpressed in the RNA samples from seed coats of the T/T genotype. Table 4 Differentially expressed cDNAs detected in dual labeling microarray experiments comparing isogenic lines of the T locus in soybean. Columns: Clone ID; Genbank 3' Accession; Intensities a (Replicate 1: XB22A (T/T) Cy5, 37609 (t*/t*) Cy3; Replicate 2: XB22A (T/T) Cy3, 37609 (t*/t*) Cy5); Expression Ratios XB22A/37609 (T/T)/(t*/t*) (Rep 1 b, Rep 2 b, Ave b,c); Functional Annotation d. Overexpressed in XB22A Gm-b10BB-23 AF499730 28686 10847 38350 8194 2.645 4.680 3.520 Flavonoid-3' hydroxylase Gm-b10BB-23 AF499730 26794 9746 34956 7839 2.749 4.459 3.512 Flavonoid-3' hydroxylase Gm-b10BB-22 AF499731 23979 9018 33272 7626 2.659 4.363 3.440 Flavonoid-3' hydroxylase Gm-b10BB-23 AF499730 26094 9231 23580 5264 2.827 4.479 3.427 Flavonoid-3' hydroxylase Gm-b10BB-23 AF499730 26812 10670 35600 7963 2.513 4.471 3.350 Flavonoid-3' hydroxylase Gm-b10BB-22 AF499731 25663 9746 32520 7685 2.633 4.232 3.338 Flavonoid-3' hydroxylase Gm-b10BB-22 AF499731 24020 9468 34329 8241 2.537 4.166 3.295 Flavonoid-3' hydroxylase Gm-b10BB-22 AF499731 24578 10218 35465 8158 2.405 4.347 3.267 Flavonoid-3' hydroxylase Gm-b10BB-23 AF499730 24662 9850 26041 5957 2.504 4.371 3.208 Flavonoid-3' hydroxylase Gm-b10BB-22 AF499731 22240 9526 31641 7643 2.335 4.140 3.138 Flavonoid-3' hydroxylase Gm-b10BB-22 AF499731 24548 11084 35328 8233 2.215 4.291 3.100 Flavonoid-3' hydroxylase Gm-b10BB-22 AF499731 19465 8343 30897 7912 2.333 3.905 3.098 Flavonoid-3' hydroxylase Gm-b10BB-23 AF499730 26572 12004 37720 8844 2.214 4.265 3.084 Flavonoid-3' hydroxylase Gm-b10BB-23 AF499730 21583 9921 31828 7410 2.175 4.295 3.082 Flavonoid-3' hydroxylase Gm-b10BB-22 AF499731 21122 10431 37273 8852 2.025 4.211 3.028 Flavonoid-3' hydroxylase Gm-b10BB-23 AF499730 26053 12897 37691 8164 2.020 4.617 3.027 Flavonoid-3' hydroxylase Underexpressed in XB22A Gm-r1070-484 BE819850 2843 6235 3931 8702 0.456 0.452 0.454 Bowman-Birk inhibitor Gm-r1070-8083 BE823467 2471 5070 3498 8557 0.487 0.409 0.438 Ribonucleoprotein homolog Gm-r1070-8006 BE823540 3353 7237 4145 12219 0.463 0.339 0.385 Trypsin inhibitor, Kunitz Gm-r1070-9195 BE824378 1889 4384 2383 7562 0.431 0.315 0.358 No hits found Gm-r1070-120 BE657237 3083 7548 3769 11877 0.408 0.317 0.353 Trypsin inhibitor, Kunitz Gm-r1070-8909 BE824331 3771 10945 3756 13568 0.345 0.277 0.307 Beta conglycinin Gm-r1070-9099 BE824364 1806 19569 1663 31399 0.092 0.053 0.068 Albumin precursor/leginsulin (a) Intensities after background subtraction and global normalization between replicates and within each slide are shown. (b) The mean ratios of the 16 flavonoid hydroxylase cDNAs are significant below the p = 0.0001 level in a t-test compared to 2.0 as the mean. (c) The average ratio of both slides is calculated as follows: (XB22A Rep 1 intensity + XB22A Rep 2 intensity) / (37609 Rep 1 intensity + 37609 Rep 2 intensity) (d) The matches for all of the functional annotations were to soybean (Glycine max) sequences except for the ribonucleoprotein homolog, which was to Arabidopsis thaliana. An examination of the ratios of the 16 repetitively spotted flavonoid 3' hydroxylase cDNAs using a t test showed that the mean ratio of the repeated cDNAs on replicate 1 (2.424) was statistically significant at a P value of 0.0001 when compared to an expected mean of 2.0, or a two-fold expression difference.
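A minimal sketch of this check with SciPy, using the replicate 1 ratios transcribed from Table 4 (the exact test form used originally is not stated; the two-sided p value here is likewise far below 0.0001):

```python
from scipy import stats

# Replicate 1 ratios of the 16 repeated flavonoid 3' hydroxylase spots (Table 4)
ratios = [2.645, 2.749, 2.659, 2.827, 2.513, 2.633, 2.537, 2.405,
          2.504, 2.335, 2.215, 2.333, 2.214, 2.175, 2.025, 2.020]

# One-sample t-test against a hypothesized mean of 2.0 (a two-fold difference)
t, p = stats.ttest_1samp(ratios, popmean=2.0)
print(f"mean = {sum(ratios) / len(ratios):.3f}, t = {t:.2f}, p = {p:.1e}")
```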
Similarly low P values were found for replicate 2 and for the mean value (3.245) of both replicates. Thus, the flavonoid 3' hydroxylase cDNAs are statistically significant outliers in the microarray analysis. The microarray data presented here, showing that the cytoplasmic levels of the flavonoid 3' hydroxylase are higher in the T/T line, agree very well with RNA blot data which showed that the flavonoid 3' hydroxylase gene has reduced expression in the seed coats of the t*/t* isoline compared to the T/T lines [13]. In addition to the RNA blot data showing differences in these mutant lines, we have definitively shown that the flavonoid 3' hydroxylase is encoded by the T locus by sequence data of other alleles of the locus and by genetic cosegregation data [13]. We do not know the reason for the change in the expression levels of the seven other cDNAs shown in Table 4, most of them representing various seed or storage type proteins. While the T locus does determine the flavonoid and pigment compounds synthesized in various tissues including seed coats and trichomes, it is possible that the flavonoid compounds themselves exert an additional effect on seed protein synthesis in the seed coats. Alternatively, the observed differences in the levels of these cDNAs could be due to an artifact during the dissection procedure. We know from Northern blots that flavonoid 3' hydroxylase is highly expressed in the seed coats but is not expressed in the cotyledons, so any small amount of contaminating cotyledon cells due to imprecise dissection of the seed coats of one line versus the other could lead to observed differences in seed protein RNAs. As this example in Figure 3 and Table 4 illustrates, the use of dual labeled mRNAs from near isogenic lines to probe microarrays is a powerful approach with which to obtain a small list of candidate genes from among the thousands examined by microarray analysis. In this example, only eight functionally different cDNAs (or seven if the two trypsin inhibitor cDNAs are counted as one) of over 9200 cDNAs spotted on the array met the criteria of exceeding two-fold levels of expression in both replicates. If a cDNA is repetitively spotted on an array, as were the flavonoid 3' hydroxylase cDNAs, then the statistical significance of its expression difference can be assessed directly. After identifying a short list of candidate genes, it is then feasible to test them by other methods (such as RNA blotting, quantitative RT-PCR, RFLP or SNP analysis) in order to find an association of a particular cDNA with the mutant phenotype. Of course, if a particular mutation has a regulatory or epigenetic effect on a large number of downstream RNAs, or if a mutation does not affect the abundance of an mRNA, then the global expression approach may not be effective in identifying the primary nature of the mutant locus. For example, the standard recessive t allele at the T locus is the result of a premature stop codon and does not affect abundance of the flavonoid 3' hydroxylase mRNA to the same extent as does the t* mutation at that locus [13]. Tissue specific gene expression using the soybean microarrays In contrast to the results with near isogenic lines of the T locus, in which relatively few cDNAs showed differential expression between the two very closely related lines, the soybean microarrays reveal larger numbers of cDNAs with differential mRNA abundance levels in different tissue types of the same plants.
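The selection rule applied in both the isoline comparison above and the tissue comparison that follows — a normalized ratio beyond two-fold, in the same direction, in both replicate slides — can be sketched as:

```python
def consistent_twofold(rep1: float, rep2: float, threshold: float = 2.0) -> bool:
    """True if a spot's normalized ratio exceeds the two-fold threshold in the
    same direction in both replicate slides; changes seen in only one
    replicate are the 'partial hits' mentioned above."""
    up = rep1 >= threshold and rep2 >= threshold
    down = rep1 <= 1 / threshold and rep2 <= 1 / threshold
    return up or down

print(consistent_twofold(0.456, 0.452))  # True: Bowman-Birk inhibitor, Table 4
print(consistent_twofold(2.3, 1.6))      # False: a partial hit
```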
For illustration, Figure 4 shows one of the two replicates of a dual labeling experiment using the low redundancy set Gm-r1088 of 9,216 cDNAs. The Cy3 labeled probe in this experiment was RNA from roots of hydroponically grown soybean plants, and the Cy5 probe was RNA from leaves of the same plants. The upper ratio threshold is 2.0 and the lower threshold is 0.5. Figure 4 One of the scatter plots of the log values of expression data from microarray slides hybridized with Cy3 labeled RNA from leaves and Cy5 labeled RNA from roots. Many cDNAs have differential expression above or below the two-fold level as indicated by the lines. Table 5 lists a selection of the 300 clones with significantly elevated expression in leaves (ratios below the 0.5 threshold) and the 125 clones with significantly elevated expression in roots (ratios above the 2.0 threshold) that consistently varied more than two-fold in both of the replicated slides. Among the clones with elevated expression in roots are chalcone isomerase, putative aquaporins (tonoplast intrinsic protein), tubulin, an auxin-repressed protein, peroxidase, sucrose synthase and PEP carboxykinase. These plants were not inoculated with Rhizobia and therefore only a few nodulin-related genes were observed to be markedly upregulated: nodulin-26, nod factor binding lectin-nucleotide phosphohydrolase (GS50, Accession # AF207687) and a MtN19 homolog. Table 5 A selection of genes that are differentially expressed in leaves or in roots Clone Identification GenBank accession no. Average Ratio 1 Function 2 Annotation (BLAST hit and organism) Leaves up-regulated Gm-b10BB-41 AI495218 0.103 en Rubisco (Glycine max) Gm-r1088-7900 BU550654 0.294 en Light harvesting chlorophyll a/b binding protein (Arabidopsis thaliana) Gm-r1088-8981 BU549821 0.261 en Photosystem I subunit (Oryza sativa) Gm-r1088-3538 BU546899 0.365 en Thylakoid lumen protein (Arabidopsis thaliana) Gm-b10BB-47 AW185639 0.171 en Plastocyanin precursor (Glycine max) Gm-r1088-2905 BU546179 0.258 en Trehalose-6-phosphate phosphatase (Arabidopsis thaliana) Gm-r1088-7106 BU548940 0.150 st Vegetative Storage Protein (Glycine max) Gm-r1088-5827 BU548964 0.465 df Acidic chitinase (Glycine max) Gm-r1088-6724 BU549206 0.217 cmg Putative calreticulin (Oryza sativa) Gm-r1088-3756 BU546067 0.415 cmg Cytochrome P450 (Pyrus communis) Gm-r1088-8229 BU550097 0.454 cmg Catalase (Glycine max) Gm-r1088-5243 BU547961 0.143 cmg Putative serine carboxypeptidase II-3 precursor (Oryza sativa) Gm-r1088-1433 BU545435 0.200 cmg H protein (Flaveria anomala) Gm-b10BB-12 AI900038 0.211 cmg F3H (Flavanone-3-Hydroxylase) (Glycine max) Gm-r1088-4994 BU547986 0.205 cmg Matrix metalloproteinase MMP2 (Glycine max) Gm-r1088-2794 BU547254 0.138 cmg Putative lipoic acid synthase (LIP1) (Arabidopsis thaliana) Gm-r1088-4578 BU547484 0.165 cmg Lipid transfer protein-like protein (Retama raetam) Roots up-regulated Gm-r1088-5321 BU547784 3.806 no Nodulin-26 (Glycine max) Gm-r1088-7410 BU550525 2.304 no MtN19 homolog (Medicago truncatula) Gm-r1088-6955 BU551008 4.630 no Similar to nodulins and lipase homolog (Arabidopsis thaliana) Gm-r1088-6384 BU550458 4.431 to bZIP transcription factor (Arabidopsis thaliana) Gm-b10BB-11 AI930858 3.008 df Chalcone isomerase (Glycine max) Gm-r1088-6204 BU547671 2.541 cmg Putative aquaporin (tonoplast intrinsic protein) (Arabidopsis thaliana) Gm-r1088-2818 BU546503 2.848 cmg Phosphoenolpyruvate carboxykinase (Flaveria trinervia) Gm-r1088-1741 BU544616 2.724 cmg
Similar to sucrose synthase (Pisum sativum) Gm-b10BB-37 AW309104 6.228 cmg Proline-rich protein (Glycine max) Gm-b10BB-38 AI442449 2.233 cmg DAD-1 (Defender Against apoptotic cell Death) (Glycine max) Gm-r1088-5369 BU547794 8.372 cmg Ripening related protein (Glycine max) Gm-r1088-7112 BU548943 6.661 cmg Germin-like protein (Phaseolus vulgaris) Gm-r1088-5330 BU547868 4.974 cmg Pectinesterase (EC 3.1.1.11) precursor (Vigna radiata) Gm-r1088-6104 BU549267 4.554 cmg Asparagine synthase (glutamine-hydrolyzing) (Glycine max) Gm-r1088-741 BU544257 2.455 cmg Cationic peroxidase (Glycine max) Gm-b10BB-45 AW318233 2.135 cmg Tubulin (b chain) (Glycine max) Gm-r1088-5315 BU547781 4.334 u Specific tissue protein 1 (Cicer arietinum) Gm-r1088-5332 BU547869 4.267 oth Auxin-repressed protein (Robinia pseudoacacia) 1 The average ratios of individual values from two slides after normalization and using a dye swap procedure. 2 en – energy; st – storage; to – transcription; cmg – cell growth and maintenance; u – unknown; oth – other; df – defense; no – nodulation related The soybean proline-rich protein (SbPRP1, Accession # J05208) (represented on the array by AW309104, Gm-c1019-3688) is also among the root expressed clones. SbPRP1 has been shown to be expressed preferentially in the roots [14]. A gene of interest overexpressed in roots is DAD-1 (Defender Against apoptotic cell Death). No Rubisco or photosynthesis related genes were observed to be overexpressed in the roots, as would be expected for these non-green tissues. In leaves, genes typical for green tissues are upregulated as expected. These are the photosynthesis genes (e.g., Rubisco, plastocyanin precursor, chlorophyll a/b binding protein type II, trehalose-6-phosphate phosphatase, photosystem subunits and proteins, thylakoid lumen protein, light-harvesting chlorophyll a/b binding protein), and the vegetative storage proteins. Ribosomal proteins, cytochrome P450, catalase, and chitinase were also noted as overexpressed in the leaves. Our publicly available Gm-r1021 soybean unigene subset containing 4,089 cDNAs has also been used to examine differential gene expression in roots and shoots of older soybean plants [15]. We have previously utilized microarrays containing 9,216 clones of the Gm-r1070 set (representing many cDNAs from developing seeds, seed coats, flowers, and pods) to carry out a detailed analysis of induction of somatic embryos during culture of cotyledons on auxin-containing media [16]. The resulting transcript profiles were subjected to a cluster analysis and revealed the process of reprogramming of the cotyledon cells during the induction process. The 495 cDNAs (5.3% of the cDNAs on the array) that were differentially expressed were clustered into 11 sets using a non-hierarchical method (K-means) to reveal cDNAs with similar profiles in either the adaxial or the abaxial side of the embryos from 0 to 28 days at 7 day intervals. Among other conclusions, these global expression studies indicated that auxin induces dedifferentiation of the cotyledon and provokes a surge of cDNAs involved in cell division and oxidative burst. Thus, the soybean cDNA arrays that we have developed from the unigene cDNA set can be used to reveal the underlying physiological and biochemical pathways potentially operative in specific tissues, developmental stages, or environmental treatments.
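As a hypothetical sketch of that profile clustering (scikit-learn K-means on placeholder data; the real input in [16] was the normalized expression profiles of the 495 differentially expressed cDNAs across the 0–28 day time course):

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder profile matrix: 495 cDNAs x 5 time points (0, 7, 14, 21, 28 days)
rng = np.random.default_rng(0)
profiles = rng.normal(size=(495, 5))

# Non-hierarchical K-means partitioning into 11 clusters of similar profiles
km = KMeans(n_clusters=11, n_init=10, random_state=0).fit(profiles)
for k in range(11):
    print(f"cluster {k}: {int((km.labels_ == k).sum())} cDNAs")
```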
Obviously, cDNA arrays from soybean or any other organism that are constructed with PCR inserts with an average size of 1.1 kb will generally hybridize with any RNAs from gene family members that share greater than 85% homology. Thus, cDNA arrays will generally not distinguish expression from closely related duplicated sequences. Oligo arrays spotted with synthetic 70-mers, or Affymetrix short oligo arrays, have greater potential to separate the expression of closely related duplicated sequences if the oligos are chosen from the 3' or 5' non-coding regions that carry more sequence variability than the protein coding regions. Conclusions Although microarray data are limited for soybean and most plants other than Arabidopsis, the construction of the 27,513 member low redundancy 'unigene' cDNA set for soybean reported in this paper will greatly stimulate this area. The number of slides containing all 27,513 of the cDNAs is being reduced to one, or at most two slides, and the slides are publicly available. Spotted PCR products with an average size of over 1 kb are useful not only for soybean, but also for other legume species, as cross hybridization to the long probes will be substantial. The 3' sequencing reported here is particularly useful for differentiating gene family members and for future design of gene specific oligo arrays of either 70-mers spotted on glass slides or by Affymetrix technology using short oligos synthesized in situ. The cDNA or oligo-based microarrays add to the developing suite of genome analysis approaches in soybean [17]. A few of the many applications include profiling expression from genes that respond to challenges by various pathogens and by environmental stresses such as drought, heat, cold, flooding, and herbicide application. Also, by analysis of the near isogenic lines of the T locus as an example, we demonstrated the potential of soybean cDNA arrays to be used for discovery of genes responsible for uncharacterized mutations. Future expression profiling of mutant phenotypes or of genotypes that differ in protein or oil content and other quantitative traits will yield significant clues to the genes involved in those pathways and traits. Methods Contig assembly for unigene selection Raw sequence files of the 5' soybean EST data from Washington University or 3' data from the University of Illinois Keck Center were produced from sequence traces using the Phred base calling program [18, 19]. The sequences were trimmed for leading and trailing vector and linker sequences, and artifact E. coli sequences were removed. Quality checks included determining the number of ambiguous 'N' base calls in a sequence and trimming the leading and trailing poor quality (high-N) sections to obtain the best subsequence where the number of Ns was 4% or less of the total bases. The EST sequences were clustered into contig sets based on sequence overlap using the program Phrap [7]. The processing and analysis results for each sequence are displayed on a set of World Wide Web pages [20]. The distribution of sequence lengths in each submission set is displayed in histograms. The base call and quality information for each sequence in a submission are displayed in artificial gel images of the sequences. Each sequence is displayed as the raw sequence before vector filtering and the cleaned sequence after vector filtering.
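Returning to the quality trimming step above: one simple reading of that rule — keep the longest contiguous stretch whose 'N' content is at most 4% — can be sketched as follows (a hypothetical helper, quadratic in the worst case but adequate for read-length sequences):

```python
def best_subsequence(seq: str, max_n_frac: float = 0.04) -> str:
    """Longest contiguous stretch of seq in which ambiguous 'N' calls make up
    at most max_n_frac of the bases; a stand-in for the high-N trimming rule."""
    n_prefix = [0]
    for base in seq:
        n_prefix.append(n_prefix[-1] + (base == "N"))
    best = ""
    for i in range(len(seq)):
        for j in range(len(seq), i + len(best), -1):  # try longest window first
            if n_prefix[j] - n_prefix[i] <= max_n_frac * (j - i):
                best = seq[i:j]
                break
    return best

print(best_subsequence("NNACGTACGTNACGTACGTACGTACGTNN"))  # trims flanking Ns
```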
A color-coded sequence quality graph shows the part of a sequence retained after trimming as well as the regions trimmed for low quality, polyA or polyT, and vector sequences. Blast reports for each sequence are displayed and can be searched collectively for words or phrases of interest. Contig sequences and images of the contig assemblies are displayed on linked web pages along with graphs describing the contig qualities [20]. Clone reracking and 3' sequencing Soybean cDNA clones corresponding to the 5' most representative member of a contig or to a singleton were selected using Oracle database tables and SQL queries. The E. coli stocks representing those clones were reracked into new 384 well plates to form the sequence driven reracked libraries Gm-r1021 (4,089 cDNA clones), Gm-r1070 (9,216 cDNA clones), and Gm-r1083 (4,992 cDNA clones). Initially, these were reracked from source 384-well plates to destination 384-well plates by Genome Systems (St. Louis, MO) using a Qbot and shipped on ice to the University of Illinois for extraction and 3' sequencing. Reracked library Gm-r1088 (9,216 cDNA clones) was reracked at the University of Illinois Keck Center using a QPix robot (Genetix, New Milton, Hampshire, UK). Growth rates for the E. coli stocks were over 99.5%. The cDNA libraries were all constructed in either pSPORT 1 (Invitrogen, Carlsbad, CA) or pBluescript II SK (+) (Stratagene, La Jolla, CA) plasmid vectors in DH10B host cells. Each 384-well plate of a bacterial library was split into four 96-well, 2 ml block plates, each corresponding to a different quadrant (A1, A2, B1, B2), and grown overnight in 1 ml LB media with 100 μg/ml ampicillin. High quality DNA templates were purified using a QIAGEN BioRobot 9600 or BioRobot 8000 with QIAprep 96 Turbo miniprep kits (QIAGEN, Germantown, MD). Dideoxy terminator sequencing reactions for the 3' ends of the soybean cDNA clones were conducted by the University of Illinois Keck Center for Comparative and Functional Genomics [21] using standard methods and analyzed either on gel-based ABI 377 or capillary-based ABI 3700 instruments. Inserts within each vector type can be sequenced from the 5' end using the M13 reverse primer and from the 3' end using the M13 universal forward primer. However, for higher success rates at the 3' end, a degenerate anchored primer consisting of [5'-TTTTTTTTTTTTTTTTTT(A/C/G)-3'] was employed in order to enhance the success of 3' sequencing reactions by eliminating the need to sequence through the poly A tail. The primer was synthesized and purified by HPLC (Qiagen Operon, Alameda, CA) to remove shorter, incomplete primers. Using high quality Qiagen purified cDNA templates, the average 3' untrimmed read length was over 600 bases with a success rate of 80 to 85%. Original sequence trace files are available by ftp from the University of Illinois Keck Center [21]. The trimmed sequences were entered into Genbank [22]. The reracked 5' and 3' sequences were analyzed by both the CAP3 [11] and Phrap programs [7]. All cDNA clones of the low redundancy reracked 'unigene' sets are available to the public through Biogenetic Services, Inc., Brookings, SD, or the American Type Culture Collection, Manassas, VA. Annotation of the unigene cDNAs using BLASTX The 5' and 3' sequences of the 27,513 unigene cDNA clones were annotated using BLASTX against the nonredundant (nr) protein database with a cutoff E value of 10^-6.
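A sketch of that assignment rule, assuming BLASTX results exported in NCBI tabular format with hits sorted best-first within each query (both the format and the file name here are illustrative, not the original pipeline):

```python
import csv

def top_hit_annotations(blast_tsv: str, cutoff: float = 1e-6) -> dict:
    """Keep each query's first (best) hit whose e-value is at or below the
    10^-6 cutoff; queries with no qualifying hit receive no annotation."""
    annotations = {}
    with open(blast_tsv) as handle:
        for row in csv.reader(handle, delimiter="\t"):
            query, subject, evalue = row[0], row[1], float(row[10])
            if query not in annotations and evalue <= cutoff:
                annotations[query] = subject
    return annotations

# e.g. top_hit_annotations("unigene_vs_nr.blastx.tsv")
```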
The top BLAST hit was used as the annotation for each of the 5' and 3' ESTs represented in the unigene sets printed on the microarrays. In some cases, the protein family assignments were also made using the MetaFam program based on a BLASTX analysis against a protein sequence database consisting of a non-redundant set of sequences from SwissProt & TrEMBL [23], PIR & NRL [24], GenPept [25], and Integrated Genomics, Inc. (Chicago, IL). Each of the protein sequences in this database is also placed in a protein family in the MetaFam database [26-28]. The results from each BLASTX report were parsed and placed in an Oracle 8i database. The strong protein sequence hits from BLASTX are matched up to the MetaFam protein families to which those protein sequences belong. Amplification of cDNAs and preparation for use in microarray construction All pipetting steps involved in amplifying the cDNAs by PCR, purifying the cDNAs, and assembling them into 384-well spotting plates were conducted with a Multimek™ 96 automated pipettor (Beckman Instruments, CA) to reduce errors associated with manual pipetting. Amplification The same Qiagen plasmid DNA templates that were prepared for the 3' sequencing by a Qiagen robot at about 100+ ng/μl were also used for PCR amplification using Taq polymerase (Invitrogen, Carlsbad, CA) and universal forward and reverse primers in 96 well plates using the MJ DNA Engine Tetrad (MJ Research, Waltham, MA). Four PCR reaction plates were prepared at a time, one from each quadrant of a 384-well library plate. A master mix consisting of final concentrations of 1X PCR buffer (20 mM Tris-HCl, pH 8.4, 50 mM KCl), 2 mM MgCl2, 0.25 mM each of dGTP, dATP, dTTP, and dCTP, 1 μM of M13 universal primer, 1 μM of M13 reverse primer, and 0.05 U/μl of Taq polymerase (Invitrogen, Carlsbad, CA, cat. no. 18038-042) was prepared, and 48 μl were aliquoted into each well of a 96 well PCR reaction plate (MJ Research MSP-9621). A 0.5 μl aliquot of undiluted plasmid template DNA was added to the 48 μl of master mix. The plates were briefly centrifuged for 1 min at 1500 rpm and placed into an MJ PTC-200 DNA Engine for 1 min of denaturation at 94°C, followed by 28 cycles of 92°C for 30 sec, 56°C for 45 sec, and 72°C for 30 sec, and a final extension at 72°C for 5 minutes. A typical yield from the PCR was about 30–100 ng/μl. Purification The PCR products were loaded into Millipore multiscreen plates (Millipore #MANU 03050) and were subjected to a vacuum applied at 15 psi for about 10 min until the wells were completely empty. Then 60 μl of sterile water were added to each well using the Multimek automated pipettor and the PCR products were washed. The purified products were eluted in sterile water, retrieved, and then stored in 96 well plates at -20°C. A 1 μl aliquot of each well from 3 rows of each 96 well plate was run on a gel to check the quality of the PCR and purification of the cDNA. The yield after purification was between 30 and 40 μl with concentrations around 15–50 ng/μl. Spotting plate assembly The four quadrants were then reassembled into a 384-well spotting plate containing 6 μl per well: 4.5 μl of PCR product from the 96 well plates mixed with 1.5 μl of 4X Micro Spotting Solution Plus (MSP4X, Telechem, Sunnyvale, CA).
Alternatively, in earlier prints, the spotting plates were assembled at a final concentration of 3X SSC, 0.01% N-lauroylsarcosine by mixing 3.5 μl of purified PCR product with 1.5 μl of 10X SSC, 0.033% Sarkosyl, pH 7.0 (1.5 M NaCl, 0.15 M citric acid, trisodium salt, 1.12 mM N-lauroylsarcosine, Sigma L-9150). Microarray construction A set of 9,216 prepared cDNA inserts from 24 384-well spotting plates was single spotted onto amine coated glass slides (1 in × 3 in, Telechem Superamine SMM slides, Telechem International, Sunnyvale, CA) using a Cartesian PyxSys 5500 robot (Genomic Solutions, Ann Arbor, MI) equipped with 16 quill pins (ChipMaker II from Telechem International) and an environmental chamber. The cDNAs were printed at a 55% ± 5% relative humidity setting within the chamber and in a room whose humidity was controlled to between 45 and 60% using room dehumidifiers as needed. Control of humidity was critical for printing. All arrays contained 32 grids of spots arranged in an 8 × 4 matrix. Each grid had 19 rows and 16 columns of spots for a total of 9,728 spots per array. A total of 9,216 spots were the cDNAs prepared from the 'unigene' set, forming 18 of the 19 rows with 288 spots per grid. After all of the 9,216 cDNAs were printed, an additional row of 16 spots was printed as the first row of each grid for a total of 32 grids × 16 spots = 512 additional spots. These cDNAs were printed from the choice clone spotting plate designated Gm-b10BB, which contained 64 hand-picked clones. Thus, the 64 hand-picked choice clones were printed 8 times each, i.e., each clone was printed twice in each of four separate grids. In addition, since the Gm-r1021 library contained only 4089 cDNAs, an additional 135 were repeated in order to obtain an even 9216 cDNAs for printing when combined with the Gm-r1083 unigene set. The three microarray platforms were entered in the Gene Expression Omnibus database [29] with platform accession numbers GPL229 for Gm-r1070, GPL1013 for Gm-r1021+Gm-r1083, and GPL1012 for Gm-r1088. Complete tables of sequence identifiers and accession numbers for the unigene cDNAs printed on arrays, as illustrated in Table 2, are available [30]. Construction of the 'choice' clone PCR plate for repetitive spotting To construct the choice plate Gm-b10BB, 64 clones were chosen to be used as negative and positive controls for expression analysis in all microarray slides. These 64 clones were chosen to represent certain constitutively expressed genes or other markers for particular tissues, and the set is also highly representative of key genes of the soybean flavonoid pathway. The 64 clones were hand picked and grown overnight in microfuge tubes containing 100 μl of YT media at 37°C and 250 rpm. The following day, microfuge tubes containing 200 μl of YT supplemented with 100 μg/ml ampicillin and 8% glycerol were inoculated with 5 μl from the previous culture and grown overnight at 37°C and 250 rpm. To create a 96 well plate of these E. coli stocks, 100 μl of the previously grown culture were transferred to wells A1 through H8 and stored at -80°C. Wells A9 through H12 were left empty. A small database for the Gm-b10BB plate was prepared containing the name of each gene, its sequence, accession number, and the corresponding well in the Gm-b10BB plate. To make a replicate copy for sequencing, 100 μl of YT supplemented with 100 μg/ml ampicillin and 8% glycerol were inoculated with 5 μl of the -80°C E. coli stock and incubated overnight at 37°C and 250 rpm.
Miniprep DNAs were isolated and sequenced at the University of Illinois Keck Center using a 5' M13 primer. The identity of each clone was confirmed by comparison of the sequences obtained from the Keck Center with the sequences contained in our previously prepared database using the Pairwise BLAST tool available at the NCBI web page. All sequences showed >97% identity with the corresponding sequence in the database. PCR amplification using the DNA miniprep plate as a source for templates was performed with the Multimek 96 automated pipettor (Beckman) as described above. All PCR products were purified and separated on a 1% agarose gel to evaluate the purity of the amplified DNAs and determine their size. The purified and analyzed PCR products from the 96 well plate Gm-b10BB were used to assemble a 384 well spotting plate. The 384 well spotting plate contained 6 μl per well: 4.5 μl of PCR product from the 96 well plate aliquoted into each of the 4 quadrants and mixed with 1.5 μl of 4X Micro Spotting Solution Plus (MSP4X, Telechem, Sunnyvale, CA) after assembly. Post-print processing After all slides were printed, the cDNAs were UV-crosslinked to the slide coating with 650 mJ of ultraviolet light using a StrataLinker (Stratagene, La Jolla, CA). [Note: prior to cross-linking, the spots were rehydrated if necessary. Rehydration was required for the slides printed with the SSC-Sarkosyl spotting solution but was not required for those printed with Telechem spotting solution. DNA spots were rehydrated by passing the slide over a gentle vapor of steam for a few seconds until spots glistened but did not coalesce and then were quick dried on a 70°C heating block.] To remove excess spotted DNA as well as to denature attached DNA to single strands, slides were treated with the following series of washes with agitation: 2 min with 200 ml of 0.2% SDS, two 1 min water rinses, 95°C water for 2 min, 0.2% SDS for 1 min, and finally two water rinses of 1 min each. Slides were subjected to low speed centrifugation for 2 min at 500 rpm to dry and were stored in a slide rack in a dust free container. Plant material and RNA isolation and labelling Seed coats and cotyledons were dissected from plants grown to maturity in soil in the greenhouse. Roots and leaves were collected from soybean plants grown for 11 days after germination in an aerated hydroponic solution with normal nutrient conditions. Total RNA was extracted using phenol-chloroform and lithium chloride precipitation methods [31, 32]. RNA was further purified by use of RNeasy Mini or Maxi columns (Qiagen, Valencia, CA) according to the manufacturer's instructions. Prior to labelling, the purified RNA was concentrated in a Speed Vac (Savant Instruments, Holbrook, NY) or by using a YM-30 Microcon column (Millipore, Bedford, MA). For each RNA probe, 50 to 60 μg of purified total RNA was labeled by reverse transcription in the presence of Cy3- or Cy5-dUTP [33]. Briefly: the RNA and 5 μg oligo-dT 18–21 mer (Operon, Qiagen) were annealed in a 10 μl volume at 70°C for 10 min and cooled on ice. A 20 μl cocktail containing 1X first strand reaction buffer, 10 mM DTT, 0.5 mM each of dATP, dCTP, and dGTP, 0.2 mM dTTP, 100 μM Cy3- or Cy5-dUTP (Amersham Pharmacia), and 400 U of 200 U/μl SUPERSCRIPT™ II (Invitrogen, Carlsbad, CA, cat. no. 18064-014) was added to the 10 μl of the denatured RNA and oligo-dT mixture.
The 30 μl reaction was incubated for 1 hr at 42°C, after which 200 additional units of SUPERSCRIPT™ II were added and incubation was continued for another hour at 42°C. The reaction was then treated with RNase A and RNase H (0.5 μg and 1.0 U, respectively; Invitrogen, Carlsbad, CA) for 30 min at 37°C to degrade the RNA. The resulting Cy3- and Cy5-labeled cDNAs were paired and mixed together according to the intended experiment, and unincorporated nucleotides were removed using a PCR cleaning kit (Qiagen, Valencia, CA). Cleaned probes were concentrated in a SpeedVac (Savant Instruments, Holbrook, NY) for approximately 5 min to a volume of less than 32 μl prior to being used in hybridization to one array. Microarray hybridization reactions The microarray slides were prehybridized by incubation in 5X SSC, 0.1% SDS, 1% BSA at 42°C for 45 to 60 min. For each slide, the labeled cDNA probe was brought to 30.5 μl with the addition of sterile water. A 1.5 μl aliquot of 10 μg/μl polyA was added and the probe was denatured at 95°C for 3 min. An equal amount (32 μl) of pre-warmed 2X hybridization buffer (50% formamide, 10X SSC, 0.2% SDS) [33] was added to the mixture and the probe was pipetted between the pre-hybridized slide and the cover slip (LifterSlip, Erie Scientific Company, Portsmouth, NH). The slide was placed in a hybridization chamber (Corning, New York, NY) and incubated overnight for 16–20 hrs at 42°C. The next day the cover slip was removed and the slide was washed once in 1X SSC, 0.2% SDS prewarmed to 42°C; once in 0.2X SSC, 0.2% SDS at room temperature; and once in 0.1X SSC at room temperature. The washes were conducted with gentle shaking at 100 rpm for 5 min each. Slides were subjected to low speed centrifugation for 2 min at 500 rpm to dry. Scanning, quantitation, and normalization The hybridized slides were scanned with a ScanArray Express fluorescent microarray scanner (Perkin Elmer Life Sciences, Boston, MA) and their fluorescence quantified by ScanArray Express software or by GenePix Pro 3.0 (Axon Instruments, Union City, CA). A Perl program was written for post-analysis processing of the quantitated image files from ScanArray Express or GenePix Pro 3.0. Local background was subtracted from each spot intensity. Spots showing signal intensities below the 95th percentile of the background distribution in the Cy3 or Cy5 channel were filtered out. The ratio of the Cy5 mean to the Cy3 mean (r) was computed and used to adjust the Cy3 values to Cy3 × sqrt(r) and the Cy5 values to Cy5/sqrt(r). A between-replicate correction was made using an ANOVA model, which equalized average grid or slide intensities between replicates, for Cy3 and Cy5 separately. The ratio of the resulting adjusted intensities of Cy5 to Cy3 was computed for each spot. The coefficient of variation (standard deviation/mean) across replicates was calculated for each spot to evaluate repeatability of the hybridizations. High density filter hybridization and selection of weakly expressing cDNAs High density nylon filters containing 18,432 non-sequenced cDNA clones from the cDNA library Gm-c1007 made from immature cotyledons were spotted using a Qbot by Incyte Genomics. Before use, the filters were washed in 0.5% SDS solution that was heated to 60°C, poured over the membrane, and gently agitated for five minutes. This rids the filter of any residual debris and results in a cleaner hybridization.
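To recap the within-slide processing described under 'Scanning, quantitation, and normalization' above, a minimal NumPy sketch (array inputs are per-spot intensities and local backgrounds; the function name is illustrative, not the original Perl program):

```python
import numpy as np

def normalize_slide(cy3, cy5, cy3_bg, cy5_bg):
    """Subtract local background, drop spots below the 95th percentile of the
    background distribution in either channel, then balance the channels:
    Cy3 -> Cy3 * sqrt(r) and Cy5 -> Cy5 / sqrt(r), with r = mean(Cy5)/mean(Cy3)."""
    cy3 = np.asarray(cy3, float) - np.asarray(cy3_bg, float)
    cy5 = np.asarray(cy5, float) - np.asarray(cy5_bg, float)
    keep = (cy3 > np.percentile(cy3_bg, 95)) & (cy5 > np.percentile(cy5_bg, 95))
    cy3, cy5 = cy3[keep], cy5[keep]
    r = cy5.mean() / cy3.mean()
    return cy3 * np.sqrt(r), cy5 / np.sqrt(r)  # adjusted intensities, equal means
```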
Radiolabelling of probe Total mRNA from developing cotyledons was labeled with 33 P-dATP in the following manner: RNA in 8 μl water (up to 5 μg, but generally 2 to 3 μg of mRNA) was combined with 4 μl Oligo dT (0.5 μg/μl, 70 μM, Sigma Lot 29H9065). The mixture was heat treated for 10 min at 70°C and chilled on ice before adding the following: 6 μl of 5X first strand buffer (BRL/Life Tech Cat. #18064-014); 1 μl DTT; 1.5 μl each of 10 mM dGTP, dCTP, and dTTP; 1.5 μl reverse transcriptase (200 units/μl, SuperScript II RT from BRL/Life Tech Cat. #18064-014); and 10 μl 33 P dATP at 10 mCi/ (NEN, 33P Cat#612H04029). After incubation at 37°C for 90 min, the probe was purified by passage through a Bio-Spin 30 Chromatography Column (Bio-Rad Cat. #732-6006), then stored at 4°C until ready to be denatured and added to the pre-hybridized filter. Prehybridization The filter was rolled and placed in a hybridization bottle containing 25 ml of prehybridization solution without formamide [ 34 ] and was prehybridized for 3–4 hrs at 65°C in a rotor oven. Hybridization Adding the probe to the filter: Once the filter was pre-hybridized, the probe was denatured for 10 min at 95°C and then the entire radiolabeled probe was added directly to the prehybridization mixture (in the bottle with the filter). The hybridization was allowed to proceed for 12–18 hrs. Washing The filters were washed twice in the pre-warmed (50–55°C) low stringency wash solution (2X SSC, 0.5% SDS, 0.1% Na pyrophosphate) for 15 min each. The filters were then washed for about 2 hrs at 55°C in high stringency buffer (0.1X SSC, 0.5% SDS, 0.1% sodium pyrophosphate) with gentle shaking. Imaging Filters were analyzed with a Typhoon 8600 variable mode imager (Amersham Pharmacia Biotech, Inc, Piscataway, NJ) and imaged with the software package Array Vision (Imaging Research Inc., St. Catharines, Ontario, Canada) to correlate spot intensity and filter position. Spots with a very low intensity of 1 to 500 were selected at random in order to enrich for cDNAs representing mRNAs of low abundance. These clones were reracked into 384-well plates to form library Gm-r1030 and sent for 5' sequencing at the Washington University Genome Center. Distribution of materials Upon request, all novel materials described in this publication will be made available in a timely manner for noncommercial research purposes. The cDNA clones are available from Biogenetic Services, Brookings, SD or the American Type Culture Collection, Manassas, VA. Microarrays are available on a cost recovery basis by contacting Lila Vodkin, University of Illinois.
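The enrichment step above (randomly picking weakly hybridizing spots) amounts to a filter-and-sample operation. The following sketch is a hypothetical illustration of that selection: the clone identifiers and intensity values are invented, and only the 1-to-500 intensity window comes from the text.

```python
import random

random.seed(1)  # reproducible toy example

# Hypothetical mapping of clone IDs to filter-spot intensities.
spot_intensity = {"Gmc1007_%05d" % i: random.uniform(0, 5000) for i in range(18432)}

# Keep spots in the very-low-intensity window (1 to 500 units)...
weak = [clone for clone, signal in spot_intensity.items() if 1 <= signal <= 500]

# ...and sample clones at random for reracking into 384-well plates.
selected = random.sample(weak, k=min(len(weak), 4 * 384))
print(len(weak), "weak spots;", len(selected), "selected")
```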
List of abbreviations PCR polymerase chain reaction SSC standard saline citrate SDS sodium dodecyl sulfate Authors' contributions LOV led the unigene and microarray development, coordinated the project, and drafted the manuscript; AK constructed multiple cDNA libraries included in the unigene cDNAs, performed over 18,000 PCR reactions, performed the library normalization by filter screening shown in Figure 1 , the array hybridization reactions including that of Figure 4 , and drafted protocols; RS led the informatics and sample tracking efforts for array printing, high throughput PCR, cDNA clone reracking, printed arrays, developed and drafted protocols for analysis of array data; SJC initiated the PCR and array hybridization protocols for the project and constructed several cDNA libraries; DOG performed 10,000 PCR reactions, printed arrays, and participated in clone rearraying; RP contributed to protocol development, PCR production, and gel analysis; GZ accumulated the 'choice clone' cDNAs and performed the isoline hybridization analysis shown in Figure 3 ; MS contributed to informatics and sample tracking and gel analysis of PCR products; MVS performed EST cluster analysis for development of the unigene set and drafted sections of the paper; ES and CS performed EST analysis and clustering for the unigene set; ER coordinated EST processing and informatics; JE constructed multiple cDNA libraries used in selecting the unigene set; RS led the public EST project for library construction and 5' sequencing; AR-H and JCP provided hydroponically grown plant material and RNAs for Figure 4 and constructed a cDNA library; VC and PC constructed multiple cDNA libraries; GG and LL performed annotations of the 27,500 unigene set; JP and PS led or performed the 3' sequencing of the 27,500 unigene clones. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC526184.xml |
539265 | Hypothalamic-pituitary-gonadal axis hormones and cortisol in both menstrual phases of women with chronic fatigue syndrome and effect of depressive mood on these hormones | Background Chronic fatigue syndrome (CFS) is a disease defined as medically unexplained, disabling fatigue of 6 months or more duration, often accompanied by several of a long list of physical complaints. We aimed to investigate abnormalities of hypothalamic-pituitary-gonadal (HPG) axis hormones and cortisol concentrations in premenopausal women with CFS and to determine the effect of depressive mood on these hormones. Methods We examined follicle stimulating hormone (FSH), luteinizing hormone (LH), estradiol, progesterone and cortisol concentrations in 43 premenopausal women (mean age: 32.86 ± 7.11) with CFS and compared them with 35 matched healthy controls (mean age: 31.14 ± 6.19). Patients were divided according to menstrual cycle phase (follicular and luteal) and compared with phase-matched controls. Depressive mood was assessed by the Beck Depression Inventory (BDI), and patients with high BDI scores were compared to patients with low BDI scores. Results There were no significant differences in FSH, LH, estradiol and progesterone levels in either menstrual phase between patients and controls. Cortisol levels were significantly lower in patients than in controls. There were no significant differences in any hormone level between patients with high and low depression scores. Conclusion Despite a high rate of depression, low cortisol concentrations and normal HPG axis hormone levels in both menstrual phases were detected in premenopausal women with CFS. Hormone levels did not differ between patients with high and low depression scores. The depression accompanying CFS may differ from classical depression, and evaluation of the HPG and HPA axes should be performed to improve understanding of the pathophysiology of CFS and treatment planning. | Background Chronic fatigue syndrome (CFS) is a clinical presentation that primarily affects women and is characterized by severe disabling fatigue and other symptoms, including musculoskeletal pain, sleep disturbance, impaired concentration, and headaches, in the absence of organic illness or severe psychiatric disorder that would explain the fatigue. Operational criteria for the diagnosis of CFS have been developed by the Centers for Disease Control (CDC) [ 1 ]. Although the cause of CFS is poorly understood, several causative theories for underlying conditions have been proposed. Hypotheses about its etiology have included viral infections, immune dysregulation and abnormal endocrine function, among others [ 2 ]. It has been reported that the onset of CFS mostly follows a significant stressor, most frequently a viral infection, and that the course of the syndrome remits and relapses with the occurrence of physical and psychological stressors [ 3 ]. Stress is known to interfere with the menstrual cycle and may lead to chronic anovulation and amenorrhea [ 4 ]. This is generally thought to be caused by a decrease in the activity of the hypothalamic gonadotropin releasing hormone (GnRH) pulse generator with subsequent inhibition of the pituitary-gonadal axis [ 5 , 6 ]. Stress-induced activation of the hypothalamic-pituitary-adrenal (HPA) hormonal axis plays an important role in suppressing the HPG axis [ 7 ]. Infusion of corticotropin releasing hormone (CRH) into the cerebral ventricles leads to inhibition of LH secretion in primates [ 8 ].
CRH antagonism has also been shown to prevent the inhibitory effect of stress on the HPG axis in the rodent and in the monkey [ 9 ]. Women with hypothalamic amenorrhea have higher basal cortisol levels and a blunted cortisol response to exogenous administration of CRH, suggesting that the increase in cortisol secretion may reflect increased endogenous CRH activity [ 10 ]. Perturbations of HPA axis function have been described in CFS [ 11 , 12 ]. Studies of the HPA axis in CFS show a mild hypocortisolism of central origin, in contrast to the hypercortisolism of major depression [ 13 , 14 ]. There are similarities between the onset, course, and clinical syndromes of CFS and glucocorticoid deficiency states. The clinical syndromes of CFS and Addison's disease share many common features: one of the principal clinical features of Addison's disease is fatigue, the core feature of CFS. The other common symptoms of CFS include arthralgia, myalgia, adenopathy, exacerbation of allergic responses, intermittent fever, postexertional fatigue, and depressed mood. These symptoms can also be experienced by those withdrawing from hypercortisolaemic states [ 15 ]. CFS occurs more commonly in women [ 16 ]. It has been suggested that alterations in reproductive hormone levels might be involved in the pathoetiology of CFS [ 17 ]. It has also been reported that this condition may be due to estrogen deficiency and reflect underactivity of the HPG hormonal axis [ 18 ]. GnRH secretion from the hypothalamus drives the secretion of LH and FSH from pituitary gonadotropes [ 19 ]. FSH and LH govern the cyclical secretion of estradiol and progesterone over the course of the menstrual cycle. The pulsatile pattern of GnRH secretion is critical for the control of serum LH, FSH, and ovulation. The interaction between the HPA and HPG axes in stress-induced amenorrhea suggests that there may be perturbation of these axes in CFS. One important confound is co-morbid depressive illness, present in approximately 50% of CFS patients [ 20 ]. The relation between depressive mood and these axes has not yet been investigated thoroughly; clarifying it may contribute to understanding the pathoetiology of CFS and to defining a treatment strategy for this complex syndrome. In this study, we aimed to investigate the main hormones of the HPG and HPA axes (FSH, LH, estradiol, progesterone and cortisol) in premenopausal women with CFS, and furthermore to examine the relationship between these hormones and depressive mood by comparing patients with high and low depression scores. Methods Subjects A total of 43 premenopausal women diagnosed with CFS, according to the international CFS definition criteria [ 1 ], were recruited from the outpatient clinic of the Physical Medicine and Rehabilitation Department of Dicle University (Diyarbakır, Turkey) for this study. Fatigue assessment was done according to CDC criteria [ 1 ]. The fatigue was persistent or recurrent, lasting at least 6 months; of recent and/or well defined onset; not secondary to excessive physical activity or any organic or psychiatric disorder; not resolved by rest; and induced an important reduction of previous levels of physical and mental activity. Thirty-five age-matched, demographically similar healthy premenopausal women were also selected as controls. The ethics committee of Dicle University Hospital approved the study, and all subjects voluntarily agreed to participate.
All patients underwent medical screening that included physical examination and relevant investigations, with a minimum of urine analysis, full blood count, measurement of urea, electrolytes, and erythrocyte sedimentation rate, and tests of thyroid and liver function. All patients and controls were evaluated by structured psychiatric interview to exclude any additional psychiatric disorder prior to inclusion in the study. Depressive mood was assessed by the Beck Depression Inventory (BDI) in all patients and controls. Patients with CFS were divided into two groups according to BDI scores higher or lower than 17. All medications, including psychoactive and non-prescription medications, vitamins, and herbal remedies, were tapered and then stopped at least 2 weeks prior to the study [ 17 , 21 ]. No subjects or controls had frank hypocortisolism on endocrine assessment. No patients or controls had received any oral or intraarticular corticosteroid therapy during the three months preceding the study, or had received exogenous estrogens, progesterone, or any other drugs affecting sex hormone or cortisol metabolism. The other exclusion criteria [ 1 ] were: active, unresolved, or suspected disease likely to cause fatigue; alcohol or other substance abuse within 2 years prior to onset of the chronic fatigue and any time afterward; and a body mass index ≥45. All subjects and controls had normal menstrual cycles and were not taking the contraceptive pill. Subjects and controls were studied during the follicular (n: 28 with CFS and n: 23 controls) and luteal (n: 15 with CFS and n: 12 controls) phases of the menstrual cycle. Procedures and Hormone Assays Blood samples were collected in the early morning (8.30–10.30 AM) after an overnight fast and plasma was separated immediately by centrifugation; the sera obtained were stored at -20°C until hormonal assaying. All hormone values were assayed by the electrochemiluminescence immunoassay (ECLIA) method (Roche, 1010/1020 Elecsys Systems Immunoassay). Serum concentrations of follicle stimulating hormone (FSH, normal values 3.3 to 11.3 mIU/mL for follicular phase and 1.8 to 8.2 mIU/mL for luteal phase), luteinizing hormone (LH, normal values 2.4 to 12.6 mIU/mL for follicular phase and 1.0 to 11.4 mIU/mL for luteal phase), estradiol (E 2 , normal values 24.5 to 195 pg/mL for follicular phase and 40 to 261 pg/mL for luteal phase), progesterone (normal values 0–1.6 ng/mL for follicular phase and 1.1 to 21 ng/mL for luteal phase), and cortisol (normal values 6.2 to 19.4 μg/dl) were evaluated in all patients and controls in both menstrual periods. Statistical analyses were performed with the SPSS 8.0 PC program. Results were expressed as means ± SD (standard deviation). Statistical significance was tested using the independent Student's t test for comparisons between women with CFS and the control group. The Mann-Whitney U test was used for comparison of the hormonal data of women in the two phases of the menstrual cycle, and for the comparison between the patient groups with high and low BDI scores. The level of statistical significance was set at a two-tailed p-value of 0.05. Results All patients with CFS and healthy controls were premenopausal women of reproductive age; mean ages were 32.76 ± 7.07 and 31.14 ± 6.19 years, respectively. All of the patients had debilitating, clinically evaluated and medically unexplained fatigue that did not resolve with bed rest and was severe enough to significantly reduce daily activity for at least 6 months.
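As an aside, the phase-specific reference ranges listed in the Methods above can be encoded as a small lookup for flagging out-of-range values. This is a hypothetical helper written for illustration only; it was not part of the study.

```python
# Reference ranges from the Methods: FSH/LH in mIU/mL, estradiol in pg/mL,
# progesterone in ng/mL; cortisol (ug/dl) is not phase-specific.
REFERENCE = {
    "follicular": {"FSH": (3.3, 11.3), "LH": (2.4, 12.6),
                   "estradiol": (24.5, 195.0), "progesterone": (0.0, 1.6)},
    "luteal":     {"FSH": (1.8, 8.2), "LH": (1.0, 11.4),
                   "estradiol": (40.0, 261.0), "progesterone": (1.1, 21.0)},
}
CORTISOL_RANGE = (6.2, 19.4)

def out_of_range(phase, hormone, value):
    """True if a measured value falls outside its reference range."""
    lo, hi = REFERENCE[phase].get(hormone, CORTISOL_RANGE)
    return not (lo <= value <= hi)

print(out_of_range("follicular", "cortisol", 4.9))  # True: below 6.2 ug/dl
```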
The other clinical findings of patients with CFS are summarized in Table 1 . There were no significant differences in age or BMI between the groups (p > 0.05). There were no significant differences in the means of FSH, LH, progesterone and estradiol in all CFS patients compared to all controls (p > 0.05). Mean concentrations of cortisol were significantly lower in all CFS patients than in all controls (p < 0.001) (Table 2 ). There were no significant differences in the levels of FSH, LH, progesterone and estradiol in CFS patients compared to controls in the follicular phase (p > 0.05). Mean concentrations of cortisol were significantly lower in CFS patients than in controls in the follicular phase (p = 0.001) (Table 3 ). There were no significant differences in the levels of FSH, LH, progesterone and estradiol in CFS patients compared to controls in the luteal phase (p > 0.05). Mean concentrations of cortisol were significantly lower in CFS patients than in controls in the luteal phase (p < 0.05) (Table 4 ). Thirty patients (69.76%) with CFS had BDI scores ≥17. Mean cortisol concentrations of patients with BDI scores ≥17 were higher than those of patients with BDI scores <17, but the difference was not significant. There were no significant differences between these patient groups in HPG axis hormone levels (Table 5 ). Discussion In this study, reproductive HPG axis hormone levels showed no significant differences between women with CFS and controls during the follicular and luteal phases. These findings are in agreement with Korszun et al. [ 17 ], who reported data from 9 premenopausal women with fibromyalgia and 8 with CFS. They found no significant differences in reproductive axis function in either patient group in estrogen and progesterone levels, as well as LH pulsatility, during the follicular phase. However, Studd and Panay [ 18 ] reported data from 28 premenopausal women with CFS, and of these, 25% showed low plasma estradiol concentrations. The authors suggested that CFS may represent a hypoestrogenic state and recommended the use of hormone replacement therapy for women with CFS. In addition, they claimed that 80% of patients improved after treatment with estradiol patches and cyclical progestagens. Chronic fatigue syndrome is generally accepted as a stress-related disease, and dysfunction of the HPA axis has been reported in this syndrome [ 22 ]. In this study, cortisol levels were lower in women with CFS than in healthy controls. Some studies of the HPA axis in CFS show a mild hypocortisolism of central origin, in contrast to the hypercortisolism of major depression [ 13 , 14 ]. In an early study of the HPA axis in patients with CFS, Demitrack et al. [ 22 ] reported low 24-hour urinary free cortisol compared with that of control subjects. Baseline evening plasma corticotropin levels were elevated and cortisol levels were depressed. Significantly lower baseline cortisol levels were reported in an earlier study [ 23 ]. Despite these findings, the majority of subsequent studies have failed to replicate this. Differences in study methodology and sample characteristics may explain the variety of results. High circulating cortisol is a well-replicated finding in major depression [ 24 ], and so the presence of depression makes the cortisol findings more difficult to interpret. The significantly raised baseline cortisol levels in subjects with CFS studied by Wood et al. [ 25 ] were explained by their high BDI scores [ 20 ]. In this study, high BDI scores (≥17) were detected in 69.76% of patients with CFS.
Cortisol and HPG axis hormone concentrations were not significantly higher in patients with high BDI scores (≥17) than in patients with low BDI scores (<17). Scott and Dinan [ 14 ] reported low urinary free cortisol in patients with CFS compared with healthy controls. In addition, there was no difference between depressed and non-depressed patients with CFS. These findings are in agreement with our study. Another study [ 26 ] reported blunted corticotropin and cortisol responses to administration of ovine CRH, without differences in basal levels. Studies in primates have demonstrated that intracerebroventricular infusion of CRH, as well as of proinflammatory cytokines such as interleukin-1, can decrease LH secretion [ 27 , 28 ]. Stress-induced (hypothalamic) amenorrhea, as well as exercise-induced amenorrhea and anorexia nervosa, activate the HPA axis, increasing cortisol secretion and decreasing the corticotropin or cortisol response to exogenous CRH [ 29 - 31 ]. These HPA axis abnormalities are similar to those seen in depression, suggesting that activation of the HPA axis may be linked to inhibition of the HPG axis. Young et al. [ 32 ] found 30% lower plasma estradiol levels in women with depression than in controls in the follicular phase. The half-life of LH was significantly shorter in women with depression than in controls during both the follicular and luteal phases. The other reproductive hormones were normal in women with depression compared to control women in both phases of the menstrual cycle. In this study, significantly lower circulating cortisol levels were found in patients with CFS despite their high BDI scores. However, there was no significant difference in cortisol levels between the patients with low and high depression scores. This contrasts with the hypercortisolism of classical major depression and stress conditions. More recent studies support this contradiction [ 21 , 33 ]. In recent years, however, it has become increasingly apparent that depression is a heterogeneous condition from both a psychological and a physiological perspective [ 34 ]. Moreover, decreased HPA axis activity has been reported in some stress-related states such as CFS and atypical and seasonal depression [ 35 ]. These results suggest that the depression seen in CFS may differ from classical depression. There may be overlap between the symptoms of CFS and those depressive subtypes, or depression in CFS may take a reactive form. This could explain both the hypocortisolism in patients with CFS and the lack of HPG axis hormone abnormalities in this study. Single basal measurements of HPA and HPG axis hormones do not entirely reflect the activity of these axes; dynamic tests reveal differences in their function. We did not characterize the dynamic behavior of these axes, which is a limitation of our study. In conclusion, we detected low cortisol levels in patients with CFS in spite of their high rate of depressive mood. However, we were unable to demonstrate HPG axis hormone abnormalities in either menstrual phase. Hypocortisolism may be a biological factor that contributes to the chronicity of fatigue and may account for the normality of the HPG axis. The depressive mood of chronic fatigue syndrome may be different from classical depression. Future controlled and larger clinical trials are needed to clarify these matters.
Pre-publication history The pre-publication history for this paper can be accessed here: | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC539265.xml |
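The group comparisons reported above (independent Student's t test for patients versus controls, Mann-Whitney U test for the smaller subgroup contrasts, two-tailed p < 0.05) can be reproduced with standard libraries. The sketch below substitutes scipy for the SPSS 8.0 package named in the Methods, and the hormone values are hypothetical placeholders, not the study data.

```python
from scipy import stats

# Hypothetical morning cortisol values (ug/dl); not the study data.
cfs_cortisol = [8.1, 7.4, 9.0, 6.8, 7.9, 8.5, 7.2]
control_cortisol = [12.3, 11.8, 13.1, 12.9, 11.5, 12.7, 13.4]

# Patients vs. controls: independent two-sample t test (two-tailed).
t_stat, p_t = stats.ttest_ind(cfs_cortisol, control_cortisol)

# High- vs. low-BDI patient subgroups: Mann-Whitney U test, suited to
# small, possibly non-normal samples.
high_bdi = [8.3, 7.9, 8.8, 7.5]
low_bdi = [7.6, 7.1, 8.0]
u_stat, p_u = stats.mannwhitneyu(high_bdi, low_bdi, alternative="two-sided")

print(f"t test p = {p_t:.4f}; Mann-Whitney p = {p_u:.4f}")  # significant if p < 0.05
```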
514491 | Sir2-Independent Life Span Extension by Calorie Restriction in Yeast | Calorie restriction slows aging and increases life span in many organisms. In yeast, a mechanistic explanation has been proposed whereby calorie restriction slows aging by activating Sir2. Here we report the identification of a Sir2-independent pathway responsible for a majority of the longevity benefit associated with calorie restriction. Deletion of FOB1 and overexpression of SIR2 have been previously found to increase life span by reducing the levels of toxic rDNA circles in aged mother cells. We find that combining calorie restriction with either of these genetic interventions dramatically enhances longevity, resulting in the longest-lived yeast strain reported thus far. Further, calorie restriction results in a greater life span extension in cells lacking both Sir2 and Fob1 than in cells where Sir2 is present. These findings indicate that Sir2 and calorie restriction act in parallel pathways to promote longevity in yeast and, perhaps, higher eukaryotes. | Introduction The budding yeast Saccharomyces cerevisiae has served as a useful model for aging research, leading to the identification of new longevity genes and pathways whose counterparts can be examined in higher eukaryotes ( Kaeberlein et al. 2001 ). One measure of aging in yeast is the finite replicative life span (RLS) of mother cells, defined as the number of mitotic cycles completed prior to senescence ( Mortimer and Johnston 1959 ). Alternatively, the survival of nondividing yeast cells over time can be monitored and has been termed chronological aging ( Fabrizio and Longo 2003 ). It has been proposed that replicative aging in yeast may be a suitable model for the aging of dividing cells in mammals, such as germ cells; whereas, chronological aging of yeast may be related to the aging of postmitotic tissues. Replicative aging of yeast can be caused by the accumulation of extrachromosomal rDNA circles (ERCs) in the mother cell nucleus ( Sinclair and Guarente 1997 ), and mutations that decrease ERC formation correlate with increased life span. One example of such a mutation is deletion of the gene encoding the rDNA replication fork barrier protein Fob1, which results in a dramatic decrease in ERC levels accompanied by a 30%–40% increase in mean and maximum RLS ( Defossez et al. 1999 ). In addition to Fob1, the Sir2 protein has also been found to affect longevity by regulating the rate at which ERCs are formed ( Kaeberlein et al. 1999 ). Sir2 is an NAD-dependent histone deacetylase ( Imai et al. 2000 ; Landry et al. 2000 ; Tanner et al. 2000 ) necessary for transcriptional silencing near telomeres ( Aparicio et al. 1991 ), HM loci ( Ivy et al. 1986 ; Rine and Herskowitz 1987 ), and rDNA ( Bryk et al. 1997 ; Smith and Boeke 1997 ). Deletion of Sir2 increases both rDNA recombination ( Gottlieb and Esposito 1989 ) and ERC formation, while shortening life span by approximately 50% ( Kaeberlein et al. 1999 ). Conversely, overexpression of Sir2 increases life span by 30%–40%. Overexpression of Sir2 in the context of FOB1 deletion fails to further extend life span, consistent with the idea that Sir2 and Fob1 both impact aging by regulating ERC levels ( Kaeberlein et al. 1999 ). Calorie restriction (CR) of yeast cells can be accomplished by a reduction in the glucose concentration of growth media from 2% to 0.5% (or lower) and results in a 30%–40% increase in life span ( Lin et al. 2000 ). Several genetic models of CR have also been described. 
In one model, deletion of the HXK2 gene, coding for hexokinase, reduces the availability of glucose for glycolysis; while in the others ( gpa2Δ and gpr1Δ ), deletion decreases cAMP-dependent protein kinase activity ( Lin et al. 2000 ). Growth in low glucose and the various genetic models of CR have been treated as experimentally interchangeable. While they are clearly not identical, evidence to date suggests that they behave in a similar manner with respect to yeast aging ( Lin et al. 2000 , 2002 , 2004 ; Kaeberlein et al. 2002 , 2004 ). Several reports have suggested a link between the enhanced longevity associated with CR and increased activity of Sir2 ( Koubova and Guarente 2003 ). In one genetic model of CR, cdc25-10 is reported to decrease both rDNA recombination and ERC levels ( Lin et al. 2000 ). In addition, deletion of Sir2 has been shown to prevent life span extension by cdc25-10 and low glucose ( Lin et al. 2000 , 2002 ). These data have been used to support a model whereby CR activates Sir2, thus causing decreased ERC accumulation and increased life span. It was initially proposed that life span extension by CR is the consequence of a metabolic shift resulting in increased cellular NAD available as a substrate for Sir2-dependent histone deacetylation ( Lin et al. 2002 ). More recently, this theory has been supplanted by two competing models for activation of Sir2 by CR: (1) a decrease in cellular nicotinamide (a product inhibitor of Sir2) via upregulation of PNC1 ( Anderson et al. 2003a ), and (2) a decrease in cellular NADH (a competitive inhibitor of Sir2) ( Lin et al. 2004 ). We present here evidence that CR and Sir2 act in different genetic pathways to promote longevity and show that Sir2 is not required for full life span extension in response to CR. In addition, we offer data suggesting that previous experiments were misinterpreted. Finally, we propose a model that reconciles our findings with earlier reports and suggests a greater level of conservation between aging in yeast and higher eukaryotes. Results We recently carried out a large-scale study of more than 40 single-gene deletions reported to affect aging in yeast (unpublished data). This analysis was performed in the BY4742 genetic background, which has a mean life span significantly longer than most other yeast strains commonly used for aging research ( Table 1 ). Included in this analysis were three genetic models of CR (hxk2Δ , gpa2Δ , and gpr1Δ) and fob1Δ . As previously reported for shorter-lived strain backgrounds ( Defossez et al. 1999 ; Lin et al. 2000 ), each of these single-gene deletions resulted in a 30%–40% increase in life span in BY4742 ( Figure 1 A). Figure 1 Regulation of Longevity by CR and Fob1 (A) Life span analysis for three genetic models of CR and deletion of FOB1 . Strains shown (and mean life spans) are BY4742 (26.7), fob1Δ (37.8), gpa2Δ (34.9), gpr1Δ (34.4), and hxk2Δ (36.7). (B) fob1Δ and hxk2Δ increase life span additively. Strains shown (and mean life spans) are BY4742 (26.6), fob1Δ (37.3), hxk2Δ (36.7), and fob1Δ hxk2Δ (48.3). (C) fob1Δ and gpa2Δ increase life span additively. Strains shown (and mean life spans) are BY4742 (27.2), fob1Δ (37.8), gpa2Δ (36.7), and fob1Δ gpa2Δ (54.5).
Table 1 BY4742 Is Long Lived Relative to Other Yeast Strains Commonly Used in Aging Research Reported mean RLS (MRLS) for each strain and the percent difference relative to BY4742 ((Strain MRLS − BY4742 MRLS)/Strain MRLS) are shown Since both CR and deletion of FOB1 increased life span individually in BY4742, we examined the effect of CR combined with deletion of FOB1 . It is notable that this experiment has not to our knowledge been previously reported. We constructed a fob1Δ hxk2Δ double mutant and determined the replicative aging potential of this strain. As expected, both single mutants lived longer than wild-type mother cells ( p < 0.001). However, the life span of the fob1Δ hxk2Δ double mutant greatly exceeded that of either single mutant ( p < 0.001), suggesting an additional effect on longevity as a result of combining deletion of FOB1 with CR ( Figure 1 B). In order to demonstrate that this synthetic lengthening of life span in combination with fob1Δ was not specific to the hxk2Δ model of CR, we determined the life span of a fob1Δ gpa2Δ double mutant. As observed for HXK2, deletion of GPA2 combined with deletion of FOB1 resulted in a mean life span significantly greater than was observed for either single mutant ( p < 0.001), and nearly double that of wild-type cells ( Figure 1 C). With mean and maximum life spans of 54.5 and 94 generations, respectively , the fob1Δ gpa2Δ and fob1Δ hxk2Δ double mutants are to our knowledge the longest-lived yeast strains reported to date. The observation that CR further increases the long life span of a fob1Δ mutant is inconsistent with the model that CR increases life span solely by activation of Sir2. Since overexpression of SIR2 is sufficient to increase the life span of wild-type cells but fails to further extend the life span of a fob1Δ mutant ( Kaeberlein et al. 1999 ), CR (acting through Sir2) should also fail to further extend the life span of a fob1Δ strain, by this model. Our data therefore suggest the existence of a Sir2-independent pathway by which CR enhances longevity. In order to test this possibility, we determined whether CR would increase life span in the absence of Sir2. As observed in other strain backgrounds, deletion of SIR2 shortens life span by approximately 50% in BY4742 ( Figure 2 A), likely because of an elevated level of ERCs ( Kaeberlein et al. 1999 ). Neither deletion of HXK2 nor deletion of GPA2 conferred increased life span to the sir2Δ mutant. As expected, deletion of FOB1 was sufficient to suppress the life span defect of cells lacking Sir2 ( Figure 2 B), consistent with the idea that accelerated ERC accumulation is responsible for the severe life span defect of the sir2Δ strain. Surprisingly, in the sir2Δ fob1Δ double mutant, deletion of HXK2 resulted in a robust life span extension ( Figure 2 C; p < 0.001). Similarly, the life span of sir2Δ fob1Δ gpa2Δ triple mutant cells was significantly longer than that of sir2Δ fob1Δ double mutant cells ( Figure 2 D; p < 0.001). In fact, the life spans of sir2Δ fob1Δ hxk2Δ and sir2Δ fob1Δ gpa2Δ cells did not differ significantly ( p ≈ 0.4) from fob1Δ hxk2Δ and fob1Δ gpa2Δ cells, respectively. Thus, CR clearly enhances longevity in the absence of both Sir2 and Fob1, but not in the absence of Sir2 alone. While seemingly contradictory (see below), these findings demonstrate that Sir2 is dispensable for life span extension by CR, at least in the context of reduced ERC levels (as a result of fob1Δ ). 
Figure 2 Life Span Extension by CR Does Not Require Sir2 (A) CR fails to increase life span of a sir2Δ mutant. Strains shown (and mean life span) are BY4742 (26.7), sir2Δ (14.0), hxk2Δ sir2Δ (12.4), and gpa2Δ sir2Δ (11.7). (B) Deletion of FOB1 suppresses the short life span of a sir2Δ strain. Strains shown and mean life spans are: BY4742 (27.5), sir2Δ (14.0), sir2Δ fob1Δ (30.0). (C) Deletion of HXK2 increases the life span of a sir2Δ fob1Δ double mutant. Strains shown (and mean life spans) are BY4742 (26.5), sir2Δ fob1Δ (30.0), and sir2Δ fob1Δ hxk2Δ (45.3). (D) Deletion of GPA2 increases the life span of a sir2Δ fob1Δ double mutant. Strains shown (and mean life spans) are BY4742 (26.6), sir2Δ fob1Δ (30.0), and sir2Δ fob1Δ gpa2Δ (51.0). Genetic models of CR, such as hxk2Δ and gpa2Δ, have been used as convenient surrogates for CR by growth on low glucose ( Lin et al. 2000 , 2002 ); however, it is possible that these genetic models of CR may not completely recapitulate the effects of glucose deprivation. Additionally, unlike genetic models of CR, growth on low glucose provides an opportunity to control the degree of CR by manipulating the glucose concentration within a range of values ( Kaeberlein et al. 2002 ). Taking advantage of this property, we examined the life span of wild-type and sir2Δ fob1Δ double mutant cells on 2%, 0.5%, 0.1%, and 0.05% glucose ( Figure 3 ; Figure S1 ). Wild-type cells showed an increase in mean life span ranging from 15% to 25%, with maximal increases observed at 0.05% glucose ( p < 0.05). The effect of growth on low glucose was even more pronounced in the sir2Δ fob1Δ double mutant, with mean life span increased by 25% on 0.5% glucose ( p < 0.01) and by 60% on 0.05% glucose ( p < 0.001). Figure 3 CR Is More Effective at Enhancing Longevity in a sir2Δ fob1Δ Double Mutant than in Wild-Type Cells Percent increase in mean life span relative to growth on 2% glucose was determined for 20 mother cells from each strain at 0.5%, 0.1%, and 0.05% glucose. Our data conflict with the report that CR fails to increase the life span of a sir2Δ fob1Δ double mutant ( Lin et al. 2000 ). However, all of these prior experiments, as well as nearly all of the published life span data on CR in yeast, were carried out in the PSY316 strain background ( Lin et al. 2000 , 2002 , 2004 ; Anderson et al. 2002 , 2003a , 2003b ; Bitterman et al. 2002 ). We therefore asked whether strain-specific effects might account for this apparent discrepancy. Consistent with prior reports, we observed that growth on low glucose fails to increase the life span of a sir2Δ fob1Δ double mutant derived from strain PSY316 (unpublished data). However, the previous experiments demonstrating that either deletion of FOB1 or overexpression of SIR2 increase life span were carried out in W303R ( Kaeberlein et al. 1999 ), a genetic background apparently unrelated to PSY316. Notably, life span phenotypes for a fob1Δ mutant or SIR2 -overexpressing strain in PSY316 have not been reported. Thus, we created these strains and measured their life span. Neither deletion of FOB1 ( p = 0.29) nor overexpression of SIR2 ( p = 0.76) was sufficient to increase life span in the PSY316 background ( Figure 4 A). In fact, PSY316 behaves differently from the majority of other yeast strains with respect to the roles of SIR2 and FOB1 as regulators of longevity, since overexpression of SIR2 or deletion of FOB1 has been found to increase longevity in multiple genetic backgrounds, including BY4742 ( Table 2 ). 
Figure 4 CR Increases the Life Span of Cells Overexpressing SIR2 (A) Neither deletion of FOB1 nor overexpression of SIR2 impact longevity in PSY316. Strains shown (and mean life spans) are PSY316 (21.1), PSY316 fob1Δ (20.7), and PSY316 SIR2 -ox (21.7). (B) Overexpression of SIR2 and CR increase life span additively in BY4742. Strains shown (and mean life spans) are BY4742 on 2% glucose (26.1), BY4742 on 0.05% glucose (31.8), BY4742 SIR2 -ox on 2% glucose (34.6), and BY4742 SIR2 -ox on 0.05% glucose (42.2). Table 2 FOB1 Deletion or SIR2 Overexpression Increase Life Span in Multiple Genetic Backgrounds The percent effect on mean RLS ((mutant − wild-type)/wild-type × 100) as a result of FOB1 deletion or SIR2 overexpression is shown for each strain relative to the parental wild-type (strain background). The reported mean RLS for fob1Δ and SIR2- overexpressing mutants in each strain is also shown in parentheses. To our knowledge, PSY316 is the only background in which these interventions do not increase longevity Unlike in PSY316, overexpression of SIR2 in BY4742 significantly increases life span ( Figure 4 B; p < 0.001). Further, growth of SIR2 -overexpressing cells on low glucose results in an additional life span increase ( p < 0.001), similar to that observed for sir2Δ fob1Δ double mutant cells on low glucose. The observation that CR further enhances the already long life span of cells in which SIR2 is overexpressed reinforces our model that CR and SIR2 promote longevity by influencing different pathways ( Figure 5 ). Figure 5 Two Pathways Determine Yeast Longevity The longevity of mother cells can be modified by at least two independent interventions: altered ERC levels and CR. In cells lacking Sir2 but containing Fob1, senescence due to ERCs predominates, causing an extremely short life span that cannot be increased by CR. In cells lacking FOB1 , ERCs are greatly reduced and the CR pathway predominates. The presence or absence of Sir2 does not impact the longevity benefits of CR under this condition. Discussion We present substantial genetic evidence that CR and Sir2 act in different genetic pathways to promote longevity. The combination of CR with SIR2 overexpression results in an additive life span increase, as expected for two genetic interventions acting in parallel pathways. Further, in the context of FOB1 deletion, CR results in a larger relative increase in life span in the absence of Sir2 than in cells where Sir2 is expressed. Finally, the ability of CR to promote longevity in a strain lacking Sir2 definitively demonstrates the existence of a Sir2-independent aging pathway responsive to CR. Experiments have previously suggested that life span extension by CR in yeast is partially Sir2-independent ( Jiang et al. 2002 ). It is important to note, however, that the conditions employed for these experiments involved maintaining the cells on defined medium, which is known to slow growth rate and shorten life span by about 50% ( Jiang et al. 2000 ). Under these conditions, CR is reported to modestly increase mean life span of sir2Δ mother cells from seven generations to nine generations. This differs from our results, which demonstrate that CR has no significant effect on life span in the sir2Δ background when cells are grown under standard conditions (see Figure 2 A). We speculate that the apparently toxic effects of growth on defined medium (as evidenced by dramatically reduced life span and fitness) are partially mitigated by CR in a Sir2-independent manner. 
It is not clear whether this modest effect (two generations) is related in any way to the robust (20–30 generations) Sir2-independent life span extension caused by CR under standard growth conditions. The seemingly disparate findings that CR fails to extend the life span of a sir2Δ strain (see Figure 2 A) but dramatically extends life span in a sir2Δ fob1Δ double mutant background (see Figure 2 C– 2 D) can be explained by a model in which there are (at least) two pathways that regulate aging in yeast: one is ERC accumulation and the other is undefined at a molecular level, but responsive to CR (see Figure 5 ). In our long-lived wild-type background, both processes influence longevity. To explain why CR fails to extend life span in the sir2Δ strain, we postulate that the ERC pathway predominates in this mutant. Cells lacking Sir2 exhibit elevated rDNA recombination and increased levels of ERCs ( Kaeberlein et al. 1999 ), resulting in the premature death of nearly all mother cells prior to an age where the CR pathway becomes limiting. Thus, CR, acting through the alternative pathway, fails to impact aging in the sir2Δ mutant. In the fob1Δ mutant or sir2Δ fob1Δ double mutant, strains in which ERCs are greatly reduced ( Defossez et al. 1999 ; Kaeberlein et al. 1999 ), CR slows aging through the Sir2-independent alternative pathway. This independent pathway should be more important when ERC levels are reduced, and, consistent with this model, we find that CR has a more pronounced effect on life span under these conditions (see Figure 3 ; Figure S1 ). Our findings do not preclude the possibility that CR enhances Sir2 function through previously proposed mechanisms. However, the fact that the life spans of sir2Δ fob1Δ hxk2Δ and sir2Δ fob1Δ gpa2Δ triple mutants do not differ significantly from those of fob1Δ hxk2Δ and fob1Δ gpa2Δ double mutants, respectively, suggests that any role for Sir2 in the CR pathway is, at best, minor. Alternatively, it is possible that another protein can substitute for Sir2 as a downstream effector of CR when Sir2 is absent. This model seems unlikely, however, given a need to postulate that the hypothetical Sir2-like protein could function as a substitute for Sir2 only in a strain lacking Fob1, since CR fails to increase life span in the sir2Δ single mutant. The most likely candidate for such a Sir2-like protein is the Sir2 homolog, Hst1. We find that deletion of HST1 has no effect on life span (unpublished data), suggesting that, at least under normal conditions, Hst1 is not an important determinant of longevity. Nearly all of the evidence supporting a role for Sir2 in CR-mediated life span extension is derived from experiments carried out in PSY316, further weakening the case for a Sir2-dependent model. The inability of SIR2 overexpression, in particular, to increase life span in the PSY316 background supports the idea that Sir2 does not play a primary role in CR-mediated life span extension, as it is not straightforward to postulate a model whereby CR would increase life span via activation of Sir2 in a strain background that is insensitive to Sir2 dosage. Further, the inability of the fob1Δ mutation to increase life span in PSY316 provides a plausible explanation for why CR is unable to enhance longevity in the PSY316 sir2Δ fob1Δ double mutant, and suggests that either deletion of FOB1 fails to impact ERCs in this background or ERCs are not limiting for life span.
While we cannot rule out the possibility that the Sir2-independent nature of CR is unique to BY4742, we note that BY4742 behaves like the majority of other strains with respect to increased life span in response to deletion of FOB1 or overexpression of SIR2, while PSY316 is the only strain (to our knowledge) that is unresponsive to these interventions ( Table 2 ) . The observation that CR further increases the long life span of a fob1Δ strain (see Figure 1 B and 1 C) suggests that the mechanism of enhanced longevity by CR is unrelated to ERCs, as cells lacking Fob1 have dramatically reduced ERC levels. However, it is still possible that ERCs limit the life span of fob1Δ cells and that CR slows ERC accumulation by a second pathway that is insensitive to both Fob1 and Sir2. This seems unlikely, since CR is more effective at enhancing longevity in a sir2Δ fob1Δ double mutant than in wild-type cells. It has been observed that, while life span is comparable between wild-type and sir2Δ fob1Δ cells, ERCs are much reduced in the double mutant ( Kaeberlein et al. 1999 ), suggesting that sir2Δ fob1Δ cells are not senescing as a result of ERCs. Thus, life span extension by CR in this context is likely to be unrelated to ERCs. The existence of an ERC-independent aging pathway in yeast that is modulated by CR is of particular relevance to aging in higher organisms. CR is the only intervention shown to extend life span in a wide range of eukaryotes, including mammals ( Weindruch and Walford 1988 ). In contrast, there is no evidence that ERCs affect aging in organisms other than budding yeast. Nevertheless, in Caenorhabditis elegans , increased expression of the Sir2 ortholog, Sir-2.1, has been found to extend life span in a manner dependent on the Daf-16 transcription factor ( Tissenbaum and Guarente 2001 ). Similarly, the mammalian Sir2 ortholog, SirT1, has recently been reported to regulate the activity of murine Foxo3A ( Brunet et al. 2004 ; Motta et al. 2004 ) . These experiments support a role for Sir2 proteins in eukaryotic aging, linking Sirtuin activity to insulin/IGF-1 signaling. Evidence is accumulating, however, that CR and insulin/IGF-1 act in different pathways to regulate aging in complex eukaryotes. Life span extension by CR is independent of Daf-16 in C. elegans ( Lakowski and Hekimi 1998 ; Houthoofd et al. 2003 ), and CR can further extend the life span of long-lived insulin/IGF-1 pathway mutants in both C. elegans and mice ( Lakowski and Hekimi 1998 ; Bartke et al. 2001 ). We present similar evidence that the effects of CR and Sir2 are genetically distinct in yeast, raising the intriguing possibility that aspects of both aging pathways have been conserved. Materials and Methods Strains and plasmids. All yeast strains used in this study are congenic derivatives of BY4742 (MATα his3Δ1 leu2Δ0 lys2Δ0 ura3Δ0), except for PSY316AR (MATα RDN1::ADE2 his3- 200 leu2-3,112 lys2 ura3-52), PSY316AR fob1Δ::kanMX, and PSY316AR SIR2 -ox. All gene disruptions were verified by PCR. In addition, sir2Δ mutants were verified by the sterility phenotype associated with this mutation. Strains overexpressing Sir2 were constructed by genomic integration of an extra copy of SIR2, as described ( Kaeberlein et al. 1999 ), and life span was determined for four independent transformants. RLS analysis. Yeast strains for RLS analysis were removed from frozen stock (25% glycerol, −80 °C) and streaked onto YPD. After 2 d of growth, single colonies were selected and patched to YPD. 
The next evening, cells were lightly patched to the plates used for life span analysis (4–6 strains per plate). After overnight growth, cells were arrayed onto solid medium using a micromanipulator and allowed to undergo 1–2 divisions. Virgin cells were selected and subjected to life span analysis. Cells were grown at 30 °C during the day and stored at 4 °C at night. Daughter cells were removed by gentle agitation with a dissecting needle and tabulated every 1–2 cell divisions. All life span experiments were carried out on standard YPD plates (2% glucose), except for the low glucose experiments, which were performed on YEP plates supplemented with the indicated amounts of glucose. In order to prevent introduction of bias, strains were coded such that the researcher performing the life span experiment had no knowledge of the strain genotype for any particular strain. For each experiment, each strain was randomly coded at the time of removal from frozen stock. One individual was responsible for assigning codes (K. T. K.) while a different individual (M. K. or B. K. K.) performed the micromanipulation and was unaware of the genotypes of the strains being analyzed. Statistical analysis of data For statistical analysis, life span datasets were compared using a two-tailed Wilcoxon Rank-Sum test. Mother cell life span and p -value matrices for each figure are available in Dataset S1 ; life span data for individual mother cells are available in Dataset S2 . Wilcoxon p -values were calculated using the MATLAB ranksum function. Data shown in each figure and used to calculate p -values were derived from pair-matched, pooled experiments where each mutant was compared to wild-type cells examined within the same experiment(s). Strains are stated to have a significant difference in life span for p < 0.05. Supporting Information Dataset S1 P- Value Matrices for Figures 1 – 4 Each matrix contains the Wilcoxon Rank-Sum p -values for a two-tailed test in which the life span data for the strain in the corresponding row were compared against the life span data for the strain in the corresponding column. Significant p -values ( p < 0.05) are colored yellow. P -values were calculated using the MATLAB ranksum function. (46 KB PDF). Click here for additional data file. Dataset S2 Raw Mother Cell Life Span Data for Figures 1 – 4 (16 KB TXT). Click here for additional data file. Figure S1 CR Increases Life Span in Wild-Type and sir2Δ fob1Δ Mother Cells (A) Life span extension by CR is maximized at 0.05% glucose in BY4742 mother cells. Mean life spans are shown for cells grown on 2% glucose (24.8), 0.5% glucose (28.3), 0.1% glucose (30.1), and 0.05% glucose (32.1). (B) Life span extension by CR is maximized at 0.05% glucose in sir2Δ fob1Δ mother cells. Mean life spans are shown for cells grown on 2% glucose (26.0), 0.5% glucose (32.9), 0.1% glucose (40.8), and 0.05% glucose (42.0). (88 KB PS). Click here for additional data file. Accession Numbers The Saccharomyces Genome Database ( http://www.yeastgenome.org/ ) accession numbers for the yeast genes and gene products discussed in this paper are CDC25 (SGDID S0004301), FOB1 (SGDID S0002517), GPA2 (SGDID S0000822), GPR1 (SGDID S0002193), HST1 (SGDID S0005429), HXK2 (SGDID S0003222), PNC1 (SGDID S0003005), and SIR2 (SGDID S0002200) The LocusLink ( http://www.ncbi.nlm.nih.gov/LocusLink/ ) accession numbers for the non-yeast genes and gene products discussed in this paper are C. elegans Daf-16 (LocusLink 172981), C. 
elegans Sir-2.1 (LocusLink 177924), mouse Foxo3A (LocusLink 2309), and mouse SirT1 (LocusLink 23411). | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC514491.xml |
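The yeast study above compared life span distributions with a two-tailed Wilcoxon rank-sum test (the MATLAB ranksum function) and expressed effects as the percent change in mean RLS, (mutant − wild-type)/wild-type × 100. The sketch below reproduces both calculations in Python; the per-mother-cell generation counts are hypothetical, not the published datasets.

```python
from scipy.stats import ranksums

# Hypothetical replicative life spans (generations per mother cell).
wild_type = [22, 25, 31, 27, 24, 29, 26, 30, 23, 28]
mutant = [35, 41, 38, 44, 36, 39, 47, 33, 40, 42]

# Two-tailed Wilcoxon rank-sum test, the paper's significance criterion.
stat, p = ranksums(wild_type, mutant)

# Percent change in mean RLS, as in Table 2.
mean_wt = sum(wild_type) / len(wild_type)
mean_mut = sum(mutant) / len(mutant)
pct = (mean_mut - mean_wt) / mean_wt * 100

print(f"p = {p:.4f} (significant: {p < 0.05}); extension = {pct:.1f}%")
```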
387263 | Segmentally Variable Genes: A New Perspective on Adaptation | Genomic sequence variation is the hallmark of life and is key to understanding diversity and adaptation among the numerous microorganisms on earth. Analysis of the sequenced microbial genomes suggests that genes are evolving at many different rates. We have attempted to derive a new classification of genes into three broad categories: lineage-specific genes that evolve rapidly and appear unique to individual species or strains; highly conserved genes that frequently perform housekeeping functions; and partially variable genes that contain highly variable regions, at least 70 amino acids long, interspersed among well-conserved regions. The latter we term segmentally variable genes (SVGs), and we suggest that they are especially interesting targets for biochemical studies. Among these genes are ones necessary to deal with the environment, including genes involved in host–pathogen interactions, defense mechanisms, and intracellular responses to internal and environmental changes. For the most part, the detailed function of these variable regions remains unknown. We propose that they are likely to perform important binding functions responsible for protein–protein, protein–nucleic acid, or protein–small molecule interactions. Discerning their function and identifying their binding partners may offer biologists new insights into the basic mechanisms of adaptation, context-dependent evolution, and the interaction between microbes and their environment. | Introduction Microbes occupy almost every habitable niche in the biosphere, highlighting their enormous capability for adaptation and survival. This adaptive ability has been refined during millennia of evolution and has resulted in genes that evolve at very different rates. Some, such as housekeeping genes that code for the essential biochemical functions of the organism, are now evolving rather slowly. Others, which have to defend against potentially lethal attack by viruses or toxins and adapt to varying environmental conditions, often evolve more rapidly ( Murphy 1993 ; Moxon and Thaler 1997 ; Jordan et al. 2002 ). Pathogenic microbes, for example, face stringent tests of their adaptive potential because of the escalating efficiency of the host-defense mechanisms ( Moxon and Thaler 1997 ). In the arms race between pathogens and their hosts, both sides try to improve their overall fitness by deploying sophisticated strategies to generate genetic variability ( Elena and Lenski 2003 ). Sequence divergence during rapid evolution can take many forms. Some genes change throughout their entire sequences, resulting in apparently lineage-specific genes that lack clear similar sequences in current versions of GenBank. Others show a mosaic pattern of one or more variable regions interspersed within conserved regions. This latter group is the subject of this paper and we refer to them as segmentally variable genes (SVGs). For the purpose of the current analysis, we define such variable regions as having a minimum length of 70 amino acids, which would permit them to fold into independent domains. This distinguishes them from most nonfunctional interdomain segments, which are usually shorter and whose principal function depends on length rather than specific sequence content. An example of an SVG family is provided by the cytosine-5 DNA methyltransferases ( Posfai et al. 1989 ).
These enzymes typically form parts of restriction-modification systems, which are key components of an important bacterial defense mechanism to protect against phage attack and other unwanted infiltration of foreign DNA ( Cheng 1995 ). These methyltransferases catalyze the addition of a methyl group from S-adenosylmethionine to the 5-position of cytosine and contain a highly variable region of more than 90 amino acids that is responsible for specific DNA sequence recognition ( Figure 1 A; Posfai et al. 1989 ; Cheng 1995 ; Lange et al. 1996 ). A detailed examination of the three-dimensional (3D) structure of the variable region suggests that it folds into an independent domain, which has been shown to bind to DNA ( Cheng et al. 1993 ). The flanking sequences are highly conserved because they are responsible for the chemistry of methylation, which is common to all members of the family. Variability in this family has arisen because there is a need for great variation in the DNA sequences being recognized so that the specific pattern of methylation becomes a marker to distinguish innate DNA from foreign DNA. Figure 1 Variability Profile for Typical SVGs Blocks in the lines are conserved subsequences identified using the Pfam, BLOCKS, and PRINTS databases. In the variability profile, the x-axis is the amino acid position and the y-axis is the variability index (see Materials and Methods). Variable domains are marked by the black lines over the graph. (A) Cytosine-specific DNA methyltransferase, 355 amino acids long, in H. pylori . Notice the variable domain in the middle and the variable segment in its N-terminal region, which is shorter than 70 amino acids and has no known function. (B) Virulence-associated protein homolog (VacB), 644 amino acids long, in H. pylori . It has two variable domains at the N- and C-termini. To the best of our knowledge, there has been no systematic attempt to identify, catalog, and classify similar SVGs in the sequenced microbial genomes. Nor has any attempt been made to find potentially common functions among genes displaying this property. Since it is known that many genes involved in defense mechanisms, such as the DNA methyltransferases and the antigens exposed on the surface of bacteria, show such variability ( Roche et al. 2001 ), it is tempting to speculate that one might identify host-defense genes based on this property. Thus, the regional variability might reflect the influence of diversifying selection pressure that could come from constant interaction with other fast-evolving molecules in the environment. Could such genes be the predominant members of the SVG families? Or do other genes, such as those involved in basic energy metabolism and synthesis, show similar variability? In this paper we provide an initial systematic analysis. We describe our findings about the distribution of SVGs and the potential function achieved by segmental variability. Results Classification of Genes into Three Broad Groups We carried out a classification of the genes in 43 fully sequenced microbial genomes (see Table S1 for a full name list). A Web site ( http://geneva.bu.edu) is also provided with results for several selected genomes, including Escherichia coli, Helicobacter pylori, Neisseria meningitidis, and several others. Each gene is accompanied by schematic diagrams from Pfam ( Bateman et al. 2002 ), BLOCKS ( Henikoff et al. 1999 ), PRINTS ( Attwood et al. 2003 ), and the nongapped BLAST ( Altschul et al. 1990 ) analyses.
For each genome, the full proteome is compared with the nonredundant GenBank sequence set using nongapped BLAST ( see Materials and Methods for the parameters used). Based on the degree of conservation or divergence among similar genes in different species, we classify them into three broad groups. Lineage-specific genes are defined as genes with no significantly similar hits from other species in the current GenBank ( E -value cutoff, 1 E -5). SVGs are defined as genes containing at least one highly variable region, containing more than 70 amino acids, interspersed among well-conserved regions. In any single SVG family, the length of the variable region can differ only within a certain range ( see Materials and Methods for more details). In this paper, regions are considered to be variable if no sequence similarity can be detected against possible homologous genes, where the overall homology is determined by the conserved portions. The rest of the genes in the genome are considered as fully conserved genes. Naturally, this initial soft classification is somewhat dependent on specific thresholds and will be biased by the current state of GenBank and the quality of the annotation. In Figure 2 we show a scatter plot of the three classes of genes in the H. pylori genome in two-dimensional (2D) space, where the x-axis shows the length of the variable region and the y-axis shows the number of possible homologs of each gene. Lineage-specific genes (filled square in Figure 2 ) by definition naturally cluster on the x-axis. Most of the genes in this group are still annotated as unknown. A few genes with annotated functions in this group, such as the outer-membrane protein family in H. pylori ( Tomb et al. 1997 ), only appear in this organism and contribute to its unique biology. A second group contains fully conserved genes (filled triangle in Figure 2 ) with only short variable regions. It is in this class that most “housekeeping” genes fall. Examples include the subunits of ATP synthetase F1 (atpD, atpA, atpG) and ribosomal proteins such as rps4 ( Figure 2 ), etc. The third group contains the SVGs (filled diamond in Figure 2 ). A few examples in this group are labeled with their names in Figure 2 and will be discussed later. In Table 1 we list the number of genes in each category for a representative set of microbial genomes (see Table S1 for a full list). Figure 2 Classification of Three Groups of Genes from a Single Genome, H. pylori , in 2D Space The x-axis is the length of the variable region and the y-axis is the number of possible homologs a gene has from a BLAST search. The variable region length for a lineage-specific gene is defined as the length of the gene so that they naturally cluster onto the x-axis. Multiple variable regions in one gene are represented separately. Table 1 Classification of Genes into Three Broad Categories for a Representative Set of Microbial Genomes See Table S1 for the entire table SVGs are subdivided into different types depending on whether they have one, two, or more variable regions. The number of genes with a single variable region is much larger than the number of genes with multiple ones. In Figure 1 A we show the variation profile of an SVG containing one variable region. The variation profile is displayed together with conserved subsequences identified using the Pfam ( Bateman et al. 2002 ), BLOCKS ( Henikoff et al. 1999 ), and PRINTS ( Attwood et al. 2003 ) databases. This gene is the cytosine-specific DNA methyltransferase, M.HpyAVIB, from H. 
pylori . The variability lies in its DNA recognition domain (approximately 140 amino acids), which in this case recognizes the DNA sequence CCTC ( Lin et al. 2001 ). In Figure 1 B we give an example with two variable regions. It is the virulence-associated protein homolog VacB from H. pylori , which has variable regions at both its N-terminus (approximately 200 amino acids) and C-terminus (approximately 100 amino acids). VacB has been shown to encode a 3′–5′ exoribonuclease and is necessary for expression of virulence ( Cheng and Deutscher 2002 ). The conserved central region (approximately 400 amino acids; Pfam domain: RNB) defines a group of homologs distributed in a number of microbial genomes ( Zuo and Deutscher 2001 ). Note that the C-terminal region is variable; in the E. coli homolog it contains RNA-binding motifs ( Zuo and Deutscher 2001 ). Although the detailed physiological roles of VacB remain unknown ( Cheng and Deutscher 2002 ), the variable regions may contribute to the determination of substrate specificity of VacB in the RNA quality-control process that eliminates defective ribosomal RNA (rRNA) molecules in different species. The number of SVGs increases with genome size, which ranges from 0.5 Mb (Mycoplasma genitalium ) to 8.6 Mb (Streptomyces coelicolor ) ( Table 1 ). For most microorganisms included, the proportion of SVGs varies in the range of 10%–20%. The number of lineage-specific genes, on the other hand, does not appear to correlate with the genome size. Instead, it is influenced by the content of the database. For instance, a “minimal” genome, M. genitalium , has a relatively high content of SVGs (20%) and a low percentage of lineage-specific genes (0.2%). However, when a closely related species, M. pneumoniae , is excluded from the database, its proportion of lineage-specific genes rises to 14%, while the proportion of SVGs remains unchanged. In general, the genomic proportion of SVGs is less affected by the database content. Case Studies of SVGs and Functional Implication of Variability In the following sections, we have selected several SVG families to demonstrate the functional implication of segmental variability. Outer-membrane signal transduction genes/sensor histidine kinases In prokaryotes, two-component signal-transducing systems are common and consist of a histidine kinase (HK) and a response regulator. Most HKs are membrane-bound, homodimeric proteins with an N-terminal periplasmic sensing domain and a C-terminal cytoplasmic kinase domain. HKs usually possess a highly variable sensing domain (usually over 150 amino acids), while the cytoplasmic kinase domain is quite conserved. By diversifying the sensing domain, microorganisms can develop different two-component modules to respond to different signals and interact with small molecules from the exterior. Figure 3 displays the distance matrix calculated from the sensing domains and the kinase domains from a group of highly similar HK genes. As shown in Figure 3 , sensing domains are much more diverse than the kinase domains. Moreover, the two regions show distinct clustering patterns, of which only the one for the conserved kinase domains is close to the phylogenetic relationship inferred from 16S rRNA sequences (data not shown). Significant homologies in the sensing regions can only be found in closely related species (e.g., Ralstonia solanacearum [Rs] and Ralstonia metallidurans [Rm] in Figure 3 ), suggesting rapid divergence after speciation.
Other sensor genes involved in cell motility , e.g., genes encoding methyl-accepting chemotactic protein (MCP) (see tlpA, tlpC in Figure 2 ), are also highly variable in their N-terminal domains. In several bacteria , e.g., Vibrio cholerae , there are more segmentally variable MCP genes (approximately 40) than in other genomes (see the gene list of V. cholerae at http://geneva.bu.edu ), which presumably corresponds to an expanded ability to detect different chemical signals and find favorable environments. Although a few conserved motifs have been detected in the sensing region ( Galperin et al. 2001 ), the exact sensing signals for most prokaryotic HKs are unknown. Figure 3 2D Representation of the Distance Matrix Computed from the Variable and Conserved Domains in a Group of Similar HKs The upper triangle shows the variable domains, the lower one the conserved domains. Amino acid sequence distances are calculated by the PROTDIST program using the Dayhoff PAM matrix. The sequence from each species is the best match ( E -value < 1 E -10) in that genome to the query E. coli gene. Abbreviations for organisms: Ec, Escherichia coli K12; Ps, Pseudomonas syringae pv. syringae B728a; Rm, Ralstonia metallidurans ; Rs, Ralstonia solanacearum ; Li, Listeria innocua ; Tm, Thermotoga maritima ; Ml, Mycobacterium leprae ; Mt, Mycobacterium tuberculosis CDC1551; No, Nostoc sp. PCC 7120; Ef, Enterococcus faecalis ; Bs, Bacillus subtilis ; Ne, Nitrosomonas europaea ; Sy, Synechococcus sp. PCC 7942; At, Agrobacterium tumefaciens . The PROTDIST program is included in the PHYLIP software package version 3.5 ( Felsenstein 1989 ). Transporter genes and outer-membrane proteins The biggest family of SVGs is cell envelope-related, including the ATP-binding cassette transporters (ABC transporters), outer-membrane proteins, and virulence-related gene products. For membrane proteins, since parts of their sequences are exposed to the outside of the cell and interact directly with the environment, one might hypothesize that the variable portions have evolved rapidly to deal with the changing environmental conditions. ABC transporters are essential for microorganisms because they import nutrients into the cell and export noxious substances and toxins out of the cell. A typical ABC transporter gene in a prokaryote genome has a conserved ATPase domain (approximately 150 amino acids) and a large (over 300 amino acids) variable integral membrane domain. Two examples from this group are the multidrug-resistance genes hetA and spaB shown in Figure 2 . It is known that substrates interact with the specific binding sites inside the membrane domain ( Holland and Blight 1999 ), which suggests that the variability in the membrane domain may have to do with substrate selectivity or with different transport kinetics. Moreover, outer-membrane transporters are binding targets for bacteriophages and bacterial toxins. For example, the vitamin B12 transporter BtuB (614 amino acids) is the receptor for bacteriophage BF23 and E-colicin ( Bradbeer et al. 1976 ; Mohanty et al. 2003 ). The crystal structure of BtuB in E. coli has been solved ( Chimento et al. 2003 ). The variable region in E. coli BtuB overlaps with the 22-strand β-barrel (position 150–360), while the N-terminal hatch domain (position 6–132) and the extreme C-terminal TonB-box domain (position 550–614) are conserved among many homologs ( Figure S1 ). The extracellular loops between contiguous strands in the β-barrel are displayed outside the cell ( Chimento et al. 2003 ) and possibly serve as receptor sites for bacteriophages and toxins. The variability in these loops may be driven by defense against bacteriophages and interactions with different bacterial toxins. DNA/RNA-processing enzymes DNA/RNA processing enzymes form another large family of SVGs. Characteristic examples are the restriction and modification enzymes, where the DNA methylases have a variable region designed for DNA sequence recognition ( Cheng 1995 ) and the restriction enzymes are almost completely variable. Here we discuss two other genes: DNA gyrase B ( gyrB ) and DNA topoisomerase A ( topA ), whose competing actions control the degree of DNA supercoiling ( Tse-Dinh et al. 1997 ). Schematic alignments anchored by the conserved motifs from the BLOCKS database ( Henikoff et al. 1999 ) for both enzymes are shown in Figure 4 . The variable region in GyrB is an additional segment, approximately 160 amino acids long, that is present only in the gram-negative eubacteria ( Figure 4 B). Experiments probing the role of this region in E. coli GyrB have demonstrated its involvement in DNA binding, although the detailed function is unknown ( Chatterji et al. 2000 ). We suspect that variability in this inserted domain may determine the specificity of the interaction between GyrB and DNA or may reflect interactions with other molecules. It is intriguing to see that other gyrases lacking this region are also functional. Figure 4 Schematic Alignment of TopA and GyrB (A) TopA. (B) GyrB. Each line represents a sequence. Black boxes indicate the conserved blocks from the BLOCKS database and are aligned correspondingly. Red boxes in (A) are the zinc-finger motifs reported by Pfam. Notice that the number of occurrences of this motif varies and that there are several sequences without this motif in the C-terminal region. The lines between the boxes are the variable sequences that cannot be aligned. Variable domains are labeled in the figure. For TopA, the N-terminal region of approximately 600 amino acids shows extensive sequence similarity while the C-terminal region (over 100 amino acids) is variable both in sequence content and in length ( Figure 4 A). The conserved N-terminal region of TopA has the catalytic function of relaxing negatively supercoiled DNA ( Feinberg et al. 1999 ). The variable C-terminus of TopA sometimes contains multiple copies of zinc-binding motifs, although there are a few exceptions, e.g., TopA in Mycobacterium tuberculosis ( Figure 4 A). Interestingly, there are two copies of TopA in H. pylori 26695; one has three zinc-binding motifs in the C-terminal region and the other does not. The zinc-binding motifs in E. coli TopA are shown to be involved in the interaction with the β′ subunit of RNA polymerase ( Cheng et al. 2003 ) and in DNA binding ( Ahumada and Tse-Dinh 1998 ). Since RNA polymerase β′ subunit is a fully conserved gene, the overall sequence variation in the C-terminal region of TopA seems more likely to relate to DNA binding. TopA plays an important role in adaptation to environmental challenges, such as heat shock conditions ( Tse-Dinh et al. 1997 ). Deletion experiments show that in E. coli the C-terminal region is important for the in vivo function of TopA during the osmotic stress response ( Cheng et al. 2003 ). Altogether, these facts suggest a versatile role for the C-terminal region of TopA in these processes. Variable regions are sometimes found in DNA processing enzymes with essential and conserved functions.
One example is DNA polymerase I, which has a variable region between the conserved C-terminal 5′–3′ polymerase domain and the N-terminal 5′–3′ exonuclease domain. In some polymerases, this region encodes a 3′–5′ exonuclease activity for proofreading replication errors, and conserved motifs can be observed ( Derbyshire et al. 1995 ). However, other polymerases in the same family that lack such proofreading activity show much sequence divergence in this region ( Derbyshire et al. 1995 ). The exact reason why sequence variability is observed in these polymerases is unknown. Another interesting family is the aminoacyl-tRNA synthetases (AARS) ( Ibba and Söll 2000 ). This family of genes is well known for its precision in substrate selection. The molecules known to interact with AARS include tRNA, amino acids, and ATP. Since the same amino acids and ATP molecules are found in all organisms, variability inside the AARS sequences must relate to the recognition and interaction with the tRNAs. Correspondingly, each AARS usually contains a conserved domain for catalysis and acceptor helix interaction and a nonconserved domain that interacts with the variable distal parts of its substrate tRNA ( Schimmel et al. 1993 ). For instance, in bacterial-type prolyl-tRNA synthetase (ProRS), the N-terminal catalytic domain (approximately 200 amino acids) and the C-terminal anticodon-binding domain (approximately 150 amino acids) are highly conserved, while a less conserved region of about 180 amino acids is inserted between them ( Figure S2 ). This variable domain shows similarity to the YbaK domain, which is thought to be involved in oligonucleotide binding ( Zhang et al. 2000 ). Sporadic conserved residues in this region of E. coli ProRS are known to be involved in the posttransfer editing for mischarged Ala-tRNA Pro ( Wong et al. 2002 ). ProRS is also known to possess an inherent ability to mischarge cysteine ( Ahel et al. 2002 ). Partial deletion of this variable region of E. coli ProRS results in a lower ratio of proline acylation to cysteine acylation ( Ahel et al. 2002 ), suggesting a possible role for this region in substrate discrimination. Thus, the variability in this inserted domain of ProRS appears to contribute to substrate recognition and the editing function of the enzyme. Intriguingly, ProRS in Methanococcus jannaschii, which does not have this inserted region, also possesses editing abilities ( Beuning and Musier-Forsyth 2001 ). As a result, there is a possibility that this region may have another unknown function, e.g., interaction with other undetected molecules. Carbohydrate active enzymes Variable regions exist in carbohydrate metabolizing enzymes, such as glycosyltransferases (GTs) and glycoside hydrolases (GHs), which respectively catalyze the biosynthesis of diverse glycoconjugates and their selective cleavage ( Bourne and Henrissat 2001 ). Many pathogens express outer-membrane glycosylated oligosaccharides, which closely interact with the host environment ( Saxon and Bertozzi 2001 ). For example, they even mimic host cell surface glycoconjugates to evade immune recognition ( Persson et al. 2001 ). Both GTs and GHs have been classified into subfamilies based on sequence similarity ( Bourne and Henrissat 2001 ). Structural studies on bacterial GTs from different subfamilies always reveal two-domain molecules, such as LgtC ( Persson et al. 2001 ), GtfB ( Mulichak et al. 2001 ), MurG ( Hu et al.
2003 ), and SpsA ( Charnock and Davies 1999 ), with one domain responsible for donor molecule (usually nucleotide-diphospho-sugar) binding and the other domain involved in acceptor sugar molecule binding. These genes exhibit great variability in the acceptor-binding domains and conservation in the donor-binding domains (see Figure S3 for the example of GtfB), which agrees with the relatively limited types of donor species (usually UDP/TDP-sugar) and their conserved binding modes, but a diversity of acceptor molecules (LgtC: lactose; GtfB: vancomycin aglycone; MurG: N -acetyl muramyl pentapeptide; SpsA: unknown). Owing to the lack of homology in the acceptor binding domains, the substrate specificities encoded by these regions for most GTs are still unknown. Transcriptional regulators Prokaryotic transcriptional regulators form another large group of SVGs. Transcription regulators are usually two-domain proteins with one binding to DNA and one binding to ligand. The DNA-binding domains, which usually interact with DNA via helix–turn–helix, zinc-finger, or other modes, are more conserved than ligand-binding domains. Based on the characteristic conserved DNA-binding domains, transcriptional regulators can be classified into many different families ( Nguyen and Saier 1995 ; Rigali et al. 2002 ). Even within each family, the ligand-binding domains are variable. For instance, the C-terminal regions involved in effector molecule binding and oligomerization (E-b/O) inside the GntR transcriptional regulator family are highly variable both in sequence content and in size ( Rigali et al. 2002 ). The variability in the effector molecule-binding domains enables the transcriptional regulators to sense the presence of diverse ligands and signal the regulation of the downstream genes or operons accordingly. As in most previous cases, these variable regions remain functionally uncharacterized. Hypothetical genes In addition to genes with functional annotations, our method identifies a number of SVGs with unknown or hypothetical annotations in each genome ( H. pylori : 17 genes; N. meningitidis : 32 genes; V. cholerae : 69 genes, etc.; see http://geneva.bu.edu for the full list). In contrast to lineage-specific hypothetical genes, these hypothetical genes contain conserved domains, which suggest their functional importance. Although most of the conserved domains in these hypothetical genes have currently unknown function, there are a few exceptions. Among them are the prokaryotic mechanosensitive channel proteins, which respond to external osmotic pressure ( Pivetti et al. 2003 ). Examples include the 343 amino acid long E. coli B1330 and 371 amino acid long Bacillus subtilis YhdY, both of which are currently annotated as “hypothetical.” However, they both have the characteristic domain of mechanosensitive proteins (Pfam domain: MS_channel). The central regions (approximately 150 amino acids) of these genes are conserved while both the N-terminal region (approximately 100 amino acids) and the C-terminal region (approximately 100 amino acids) are variable (see alignment in Figure S4 ). The conserved central region encodes three transmembrane segments, and the molecules are predicted to have their N-terminus outside and C-terminus inside the cell ( Miller et al. 2003 ). Although the C-terminus is variable, the deletion experiments show that it is indispensable for stability and activity of this protein ( Miller et al. 2003 ). 
It is tempting to hypothesize that the interacting partners for both N- and C-termini might vary in different organisms. Functional Classification of SVGs We are interested in probing the functional distribution of SVGs within a single genome. Are certain functional categories overrepresented? In Figure 5 , we show a functional classification of SVGs in three microorganisms using 18 broad functional categories of the clusters of orthologous group (COG) database ( Tatusov et al. 1997 ). We calculated the percentage ( r in Figure 5 ) of SVGs within each functional class and the p -value of overrepresentation ( Figure 5 ). Several functional categories are overrepresented ( p -value < 0.01; see Figure 5 for details): (i) cell envelope biogenesis, outer membrane; (ii) DNA replication, recombination and repair; (iii) secondary metabolite biosynthesis, transport and catabolism; (iv) cell motility and secretion; (v) cell division and chromosome partitioning. Among them, only categories (i) and (ii) are overrepresented in all three genomes. Most functional categories involved in the basic metabolic processes are not significantly overrepresented or even underrepresented. The number of overrepresented categories and the order of significance differ from one genome to another, reflecting differences in genome content and presumably the relative importance of the different specific adaptations. Figure 5 Functional Classification of SVGs in Three Microorganisms M is the total number of genes in a COG broad functional category, and m is the number of SVGs within that category. r ( = m/M ) is the proportion of SVGs in that category. The p -value is calculated using a hypergeometric distribution: let N = number of genes in the genome; n = number of SVGs identified; M = number of genes belonging to a particular category; m = number of SVGs belonging to a particular category: p-value = Σ i=m..min(n,M) [C(M, i) C(N − M, n − i)] / C(N, n), where C denotes the binomial coefficient. The set of lineage-specific genes has been excluded in each genome to avoid the possible skew it brings to the estimation of significance. The significance level is set at 0.01. Cells with p -value less than 0.01 are shaded. In Figure 6 we show the relative abundance of a set of SVG families in several microorganisms based on shared keywords in the annotations. The relative enrichments in several gene families for some microbes seem to correlate with the peculiarities of niche adaptation. In particular, H. pylori has more SVGs involved in cell motility and chemotaxis than two other genomes with a similar genome size (N. meningitidis, Streptococcus pneumoniae). H. pylori is one of the few microbes that can colonize the highly acidic gastric environment ( Tomb et al. 1997 ). The motility of H. pylori is crucial for its infectious capability and there is evidence that poorly motile strains are less able to colonize or survive in the host ( O'Toole et al. 2000 ). S. pneumoniae has more carbohydrate-metabolizing enzymes, especially glycosyltransferases (GTs), which appear to be segmentally variable. The unique pattern of cell surface glycosylation in S. pneumoniae has been under extensive investigation and plays an important role in pathogenesis ( Tettelin et al. 2001 ). The GTs are responsible for making O -linked glycosylations on surface proteins, which coat the surface of the bacterium and interact with the host ( Tettelin et al. 2001 ). Figure 6 Abundance of SVGs in Different Functional Categories in Five Microorganisms The approximate total gene number for each organism is as follows: H. pylori , 1,566 genes; S. pneumoniae , 2,094 genes; N. meningitidis , 2,065 genes; E. coli , 4,289 genes; B. subtilis , 4,100 genes. Gene Duplication and SVGs Duplication followed by diversification is an efficient way of generating functional innovations ( Prince and Pickett 2002 ). Regional sequence divergence has been observed between duplicated gene copies ( Gu 1999 ; Dermitzakis and Clark 2001 ; Marin et al. 2001 ). We thus asked the following questions: (1) What is the distribution of paralogous genes in the set of SVGs in a single genome? (2) Is there a significant association between gene duplication and SVGs? In Figure 7 A, we show the distribution of paralogous genes among SVGs in several genomes. We consider paralogous genes to be similar genes in the same genome with a BLAST E -value less than 1 E -5. As shown in Figure 7 A, in H. pylori , N. meningitidis, and S. pneumoniae , the largest group of SVGs is the one with no paralogs. However, in E. coli , the largest group is the one with a single paralog. E. coli obviously has more paralogous genes in the SVG set, probably owing to expansion of its genome by duplication. In Figure 7 A (inset), we show the percentage of genes with different numbers of paralogs in each class for both segmentally variable and fully conserved genes in E. coli . Interestingly, over half of the fully conserved genes in E. coli do not have paralogs. There is a significant difference between the two distributions (χ 2 test, p -value < 1 E -5). In Figure 7 B, we list the number of genes in a contingency table and test the significance using a χ 2 test. For all genomes examined, there is a strong association between gene duplication and SVGs, suggesting an SVG is more likely to have originated from a duplicated gene. Figure 7 Paralogous Genes in SVGs (A) Paralog families in SVGs for four microorganisms. The x-axis shows the number of paralogs for each SVG. The y-axis shows the number of SVGs. The inset figure shows the percentage of genes with different numbers of paralogs for SVGs and fully conserved genes in the E. coli genome. The x-axis is the number of paralogs, and the y-axis is the percentage. (B) Contingency tables to examine the dependence between SVG and paralogous gene. χ 2 statistics are computed using the standard formula. Here we give an interesting example where one paralogous copy of a gene is segmentally variable and the other copy is fully conserved. In H. pylori strain 26695, gene products of HP1299 (253 amino acids) and HP1037 (357 amino acids) both have a conserved domain (approximately 250 amino acids; Pfam: Peptidase_M24) that is characteristic of the methionyl aminopeptidase ( map ) family (metalloprotease family M24) ( Rawlings and Barrett 1995 ). HP1299 is fully conserved in a number of microbes and is homologous to the E. coli map gene ( Figure S5 ), while the product of HP1037 has an extra N-terminal region (approximately 100 amino acids) that is variable among its similar genes ( Figure S6 ). Additionally, HP1037 is annotated as a conserved hypothetical gene. Examination of the multiple alignment shows that the five residues found in the E. coli map that are involved in cobalt (Co 2+ ) binding (Asp-97, Asp-108, His-177, Glu-204, Glu-235; Rawlings and Barrett 1995 ) are conserved in both genes. These findings suggest that HP1037 might also encode a map activity and that its variable N-terminal region might be involved in additional functional roles, e.g., interactions with other molecules.
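The two significance calculations used in this section lend themselves to a short sketch before the map example continues below. The counts are illustrative placeholders, not values from the paper; the hypergeometric tail follows the reconstructed Figure 5 formula, and the contingency test mirrors Figure 7B.

```python
# Sketch of the two tests above; all counts here are illustrative, not the paper's.
from scipy.stats import hypergeom, chi2_contingency

# Figure 5: over-representation of SVGs in one COG category.
N = 1500  # genes in the genome (lineage-specific genes excluded)
n = 220   # SVGs identified in that genome
M = 60    # genes annotated to the category
m = 18    # SVGs annotated to the category

# P(X >= m) when n genes are drawn from N, of which M are in the category.
p_over = hypergeom.sf(m - 1, N, M, n)
print(f"over-representation p = {p_over:.2e}")

# Figure 7B: association between duplication and segmental variability.
# Rows: SVG / fully conserved; columns: has paralog / no paralog.
table = [[130, 90],
         [480, 800]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-squared = {chi2:.1f}, p = {p:.2e}")
```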
In Saccharomyces cerevisiae , there are two map genes and both have an extra N-terminal region compared to the E. coli map gene. One copy of the yeast map gene contains zinc-finger motifs in the N-terminal region that are indispensable for in vivo function ( Li and Chang 1995 ). A functional role involving interaction with the ribosome has also been suggested for this N-terminal domain ( Vetro and Chang 2002 ). In most prokaryotes, it has been assumed that there is only one copy of the map gene. The SVG family exemplified by HP1037 may represent another family of map genes in prokaryotes. Discussion A major fraction of bioinformatics research on sequence analysis has focused on the conserved regions in proteins, trying to hypothesize the role of the protein by identifying sequence motifs that have been shown experimentally to correlate with a specific function. Some work has gone into cataloging the groups of lineage-specific proteins that show no similarity to other proteins in GenBank ( Galperin and Koonin 1999 ), but there the route to assigning function usually requires experimental approaches such as biochemistry or genetics or, more rarely, determination of the crystal structure of the gene product ( Zhang et al. 2000 ). Unfortunately, current bioinformatics methods are only occasionally helpful in suggesting where to begin such studies. In this paper we have initiated an effort to identify SVGs, which contain both well-conserved regions and highly variable regions. By looking carefully at a few specific examples where functional information is available from experimental data, we find that the variable region often seems to play a key role in mediating interactions with other molecules, both large and small. Sometimes the variable portions are involved in biological processes with a component of interaction between the cell and agents from the external environment. For instance, the DNA methyltransferases are part of a defense system that recognizes and clears invading foreign DNA; membrane-bound sensory HKs and mechanosensitive ion channels, etc., monitor changes of living conditions. Sometimes the variable portions are involved in intracellular processes that appear to have lineage-specific features. Thus, the variable regions inside DNA gyrase B and several types of AARSs probably determine the specificity of substrate recognition. The detailed factors that introduce the molecular variability may go well beyond our explanations here and likely vary from case to case. Some variable regions may have diverged a long time ago and are now kept constant, while others may keep changing. In all of these cases, SVGs are exceptionally worthy targets of further experimental investigation, and such investigations can be greatly aided by the presence of the conserved regions that may suggest a preliminary function to be tested. Why might certain genes contain these variable regions? Could they be simply relics left over during evolution and now serve no purpose? Are they just “pseudo-segments” with no function? There are several lines of evidence that support the hypothesis that when variable regions have been retained, they indeed serve a function. First, several studies have shown that deletions are, on average, more frequent than insertions ( Halliday and Glickman 1991 ). As a result, if a region is evolving under weak functional constraints, it tends to get smaller over time ( Lipman et al. 2002 ).
Second, in a special case, one can imagine that when a variable region occurs at the C-terminus of a protein and is not being selected, it is likely to suffer random mutations including nonsense mutations or insertions/deletions that cause a shift in reading frame. Thus, we searched GenBank release 136.0 for examples of genes that matched the conserved region of an SVG, but in which the C-terminus was missing or much shorter. The DNA sequences downstream of such hits were examined for similarity to the variable region in the query gene. Of the 83 SVGs with a C-terminal variable region in H. pylori , none had hits with a disrupting stop codon in the variable region; 20 of them had hits with genes showing insertions/deletions that cause frame shifts in the variable region. However, the real number is likely to be much smaller, since, based on previous work, many of these may be the result of sequencing errors ( Posfai and Roberts 1992 ). In other cases, we find that some proteins have lost the variable segment in a subset of genomes. For instance, in ProRSs, the variable segment is absent in archaea; in GyrB, the variable segment is absent in the Gram-positive bacteria. Clearly in those cases the organisms can get by without the variable domain, although they may have a compensating function in a different gene. But this again does not imply that the variable region has no function in those genes that have retained it. SVGs are distinct from sequences with shuffled domains ( Doolittle 1995 ) in that the variable region is bounded by the same sets of conserved portions, while domain shuffling usually manifests itself in a different sequential order of conserved domains. We also hypothesize that the variable regions in SVGs are not the result of multiple domain fusion events, each resulting in an insertion of a different sequence into the protein. This hypothesis is supported by the fact that the fused domains are often conserved across multiple organisms ( Marcotte et al. 1999 ). Additionally, our procedure requires that the variable regions be of similar length within a family of proteins, which are also restricted to conserved length distributions. This filter suggests a mutational mechanism that originated from an ancient protein. Indeed, it is possible that originally the variable region was a result of a single or possibly relatively few ancient fusion events, but this paper does not focus on the evolutionary origin of SVGs. Another prediction from our observations is that the variable regions are excellent candidates to bind substrates or partner macromolecules. They may be extremely helpful in discovering the networks of protein–protein or protein–nucleic acid interactions within a cell. Bioinformatics may even be able to help in this endeavor by finding genes that seem to have coevolving variable regions as a result of such interactions. Experimental data from techniques such as the yeast two-hybrid system or microarrays may provide evidence for interactions that can involve two variable regions. Much additional bioinformatics work will be needed to explore fully the potential of this method in hypothesizing function. For instance, the size limits we have arbitrarily imposed on the variable region should be tested systematically.
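The C-terminal scan described above, matching the conserved region while checking whether the variable region is missing or disrupted, can be outlined as follows. This is a hedged sketch: the Hit record and the slack parameter are hypothetical stand-ins for parsed BLAST output, not the pipeline actually run against GenBank release 136.0.

```python
# Hypothetical data structures standing in for parsed nongapped BLAST output.
from dataclasses import dataclass

@dataclass
class Hit:
    subject_len: int      # length of the matched protein (aa)
    query_match_end: int  # last aligned query position (aa)
    downstream_dna: str   # subject DNA downstream of the conserved match

def candidate_truncations(query_len: int, var_start: int,
                          hits: list[Hit], slack: int = 20) -> list[Hit]:
    """Keep hits that cover the conserved region but lack most of the
    C-terminal variable region (subject much shorter than the query)."""
    return [h for h in hits
            if h.query_match_end >= var_start - slack   # conserved region covered
            and h.subject_len <= query_len - slack]     # C-terminus missing
```

Each retained hit's downstream_dna would then be translated in all three frames and compared with the query's variable region, looking for disrupting stop codons or frame-shifting insertions/deletions.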
In our relatively simple formulation presented here, the length of the variable region and the number of proteins in the same family that do not have an alignment to the variable region are the primary factors in determining its statistical significance. Methods using other sequence analysis tools, such as multiple alignment and sequence profiles, may provide alternative ways to identify segmental pattern of variability. A fundamental problem is to differentiate random evolutionary drift from positive selection correlated to functional requirements. Although one might expect that the N- and C-termini may be more variable than the regions in the middle, our data suggest that variable regions in SVGs are not preferentially located in either end (data not shown). We have also examined the amino acid composition, codon usage, and GC content in the variable regions and the conserved regions of the same SVG. While there is no significant deviation of amino acid composition and GC content between the two regions in general, codon usage appears to be biased in the variable regions (data not shown). SVGs usually account for 10%–20% of the total genes in a microbial genome. Currently, we think of the class of lineage-specific genes as being the key factor that distinguishes one strain or species from another. The class of SVGs that we have defined in this paper must now be added to this collection of lineage-specific genes by virtue of the unique segments that constitute their variable regions. They also appear to provide functional elements that help to differentiate among strains and species. This point is well illustrated by considering the restriction-modification systems. Here, the DNA methyltransferases, which have a variable region responsible for DNA recognition, are members of the SVG class. With the help of their companion restriction endonucleases, which typically appear as lineage-specific genes, they serve to keep foreign, unmodified DNA sequences from entering the genome. In this case, the synergy of function provided by members of the two classes highlights the key role that both sets of genes must play in defining the individuality of a strain or species. Our analysis to date is limited to prokaryotes and archaea where SVGs are transcribed and translated as contiguous genomic segments. In eukaryotes, alternative RNA splicing introduces substantial additional complexity into the interpretation of gene structure and protein product, thereby rendering impossible the simple analysis we have applied here. It is tempting to consider alternative splicing as a highly evolved control mechanism to introduce the variability we find in the SVGs and thereby achieve the functional diversity necessary for cell survival under different conditions. In eukaryotes, alternatively spliced exons can be introduced in response to the functional demands of different cell types by merely juggling protein coding regions in the genome, thereby creating an SVG structure. If this view is correct, then it reinforces and highlights the importance of these SVGs to the workings of the cell. In this paper we have provided an initial glimpse of SVGs, which appear to provide an important genetic layer in the adaptation of cells to novel environments and hazardous pathogens. We have focused attention on the biological significance of these genes, especially those that have highly diverged segments. 
We are currently trying to develop a more refined classification of these genes so as to explore the functional significance of the variability. We would like to know whether extreme variability is required for diverse function or whether more modest variation is sufficient. Such questions require that we can first distinguish positive selection acting on these variable regions from neutral evolution leading to gene decay and eventual loss. Since the variable regions we report are often not amenable to current tools available for alignment, we are exploring new methods that will help us to assess whether positive selection is driving the evolution of these genes. In summary, we have identified an extremely useful way of classifying genes that leads to the identification of those with a high priority for both experimental and computational research. Materials and Methods Our method for detecting SVGs includes several steps: (1) identification of similar genes followed by query-anchored multiple alignment using nongapped BLAST ( Altschul et al. 1990 ); (2) taxonomy clustering of similar genes to avoid bias; (3) detection of segmental variability. Identification of similar genes Given a gene, we start by searching for all its similar genes in the nonredundant database (GenBank release 136.0, 15 June 2003) using nongapped BLAST ( Altschul et al. 1990 ). We use the nongapped BLAST because the gapless high scoring pairs (HSPs) reported are rather conservative. The gapped BLAST, however, tends to extend HSPs over variable regions, which has been observed in several examples (e.g., DNA-recognition domain in cytosine-specific methyltransferase; data not shown). Two criteria are used to define close similarity. First, the E -value is less than 1 E -10. Here we use a strict E -value threshold to avoid possible functional divergence among the homologs. Accordingly, we use the BLOSUM80 scoring matrix in the BLASTP search, although the result does not change dramatically if BLOSUM62 is used. Second, the overall length of the hit sequence does not differ significantly from the query sequence. We define the gap content (GapC) between two sequences as GapC = |L − l| / max(L, l), where L and l are the lengths of the protein sequences of the two genes. It is a measure of the smallest percentage of gaps needed to be introduced into the pairwise alignment. Sequences with a high GapC value indicate significantly different domain structures, possibly owing to domain insertions or losses, and thus are excluded from the set of similar genes. In our current implementation, we require that GapC be less than 0.2. Taxonomy clustering of the similar genes Similar genes reported by BLASTP are not evenly distributed among different species. In many cases, highly similar genes from different strains of the same species or highly similar paralogous genes from a particular strain tend to introduce bias into the dataset. We adopted a simple taxonomy clustering by using the NCBI Taxonomy Database ( Wheeler et al. 2003 ) to reduce this bias. We collapse all the similar genes from the same species into a single group. Then we choose the gene with the best similarity score to the query sequence as the representative of that species for later calculations. The definition of species follows the hierarchical taxonomy used in the NCBI Taxonomy database (superkingdom → phylum → class → subclass → order → family → genus → species → no rank [strain]). By doing taxonomy clustering, we are able to collect a less biased sample of similar genes from different species.
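A minimal sketch of the length filter just described, assuming the GapC formula reconstructed above:

```python
# GapC length filter for BLAST hits, as described in Materials and Methods.
def gap_content(L: int, l: int) -> float:
    """Smallest fraction of gaps needed to align sequences of lengths L and l."""
    return abs(L - l) / max(L, l)

def comparable_length(L: int, l: int, cutoff: float = 0.2) -> bool:
    """Retain a hit only if the overall lengths are similar (GapC < cutoff)."""
    return gap_content(L, l) < cutoff

print(gap_content(400, 350))        # 0.125 -> hit retained
print(comparable_length(400, 250))  # GapC = 0.375 -> False, hit excluded
```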
Detection of segmental variability Query-anchored multiple alignment after taxonomy clustering is performed by aligning the HSPs reported by nongapped BLAST (see Figure S2 and http://geneva.bu.edu ). Two unaligned regions in two sequences are considered as the variable regions if they are bounded by similar HSPs at both ends (or one end, if the unaligned region is at the terminus of the gene). To avoid the possibility of a large segment containing insertions or deletions, we again require that GapC be less than 0.2 between these two unaligned regions. For each amino acid position in the query gene, we can count the number of times (m) it is inside an HSP region and the number of times (n) it is inside a variable region. A high ratio of n over m + n suggests that this position is inside the variable region most of the time. We estimate the statistical significance ( p -value) of the variability for each position by a binomial distribution: p-value = Σ i=n..(m+n) C(m + n, i) (1 − q)^i q^(m + n − i), where q is the probability of an amino acid position being inside an HSP region. We estimate q by averaging the proportion of HSP in each hit sequence among all hits. If the p -value calculated using the above formula is less than the significance level, which we set at 0.05, we then consider this position as a variable position; otherwise, it is a conserved position. A consecutive run of variable positions forms a variable region. The next question is how long the variable region should be to be considered meaningful, as opposed to functionally unimportant regions such as linker regions, which are usually short. From our experience, there is no clear decision boundary between the length of the region and its functional importance. Any choice of cutoffs would have to balance between false positives and false negatives. However, previous studies on the length distribution of protein domains have shown that the most likely length of a protein domain is around 70 amino acids, and regions shorter than this are less likely to form a functional domain ( Wheelan et al. 2000 ). Based on this, we chose 70 amino acids as the length threshold for a variable region to be considered functionally important. In Figure S7 , we show the length distribution of the variable regions in all genes of H. pylori. A direct way of visualizing the variability of a protein sequence is by calculating the ratio of n over ( m + n ) for each position and plotting it. We call such plots variability profiles. Sample variability profiles are shown in Figure 1 . In Figure 1 A, two obvious peaks are present: one from position 20 to 70, the other from position 160 to 300. The latter (approximately 140 amino acids) forms a separate DNA recognition domain, while the former (approximately 50 amino acids) has no known function. In Figure 1 we also show conserved subsequences from the Pfam ( Bateman et al. 2002 ), BLOCKS ( Henikoff et al. 1999 ), and PRINTS ( Attwood et al. 2003 ) databases. The BLOCKS and PRINTS databases are relatively conservative in defining motifs. However, the Pfam domain seems to include the variable region within the conserved region, as shown in Figure 1 A. Supporting Information Data Deposit We provide a static collection of segmentally variable genes at our Web site, http://geneva.bu.edu . SVGs for several representative genomes are listed there. For SVG lists in other genomes, please request more information from Y. Zheng at E-mail: zhengyu@bu.edu .
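Pulling together the detection steps described above under "Detection of segmental variability", here is a compact sketch. It assumes the reconstructed binomial tail formula; the per-position counts m[i] and n[i] are taken as already extracted from the query-anchored alignment.

```python
# Per-position variability test and region extraction (Materials and Methods).
from scipy.stats import binom

def variability_profile(m, n):
    """Variability index n/(m+n) per position, as plotted in Figure 1."""
    return [ni / (mi + ni) if mi + ni else 0.0 for mi, ni in zip(m, n)]

def variable_positions(m, n, q, alpha=0.05):
    """Flag positions whose count of variable-region observations is
    improbably high under a binomial null with HSP probability q."""
    flags = []
    for mi, ni in zip(m, n):
        total = mi + ni
        # P(X >= ni) with per-observation probability 1 - q of being variable
        p = binom.sf(ni - 1, total, 1 - q) if total else 1.0
        flags.append(p < alpha)
    return flags

def variable_regions(flags, min_len=70):
    """Collapse consecutive flagged positions into regions >= min_len aa."""
    regions, start = [], None
    for i, flag in enumerate(flags + [False]):  # sentinel closes a trailing run
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:
                regions.append((start, i))
            start = None
    return regions
```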
All the case examples mentioned throughout the paper and Supporting Information have been compiled into one Web page, http://geneva.bu.edu/paper03.html , with hyperlinks. Readers can follow each hyperlink to access additional information from Pfam, BLOCKS, PRINTS, COG, and nongapped BLAST for each gene. Figure S1 Multiple Alignment of BtuB and Homologs Conservation score is plotted under the alignment (ClustalX). The conserved portions are as follows: N-terminal domain, extreme C-terminal domain, and a segment between N-terminal and C-terminal domain. The variable domain (between N-terminal and C-terminal) overlaps with the transmembrane 22-strand β-barrel regions. (2.69 MB EPS). Figure S2 Query-Anchored Alignment of ProRS The query protein is H. pylori ProRS. The blue segments are HSPs reported by nongapped BLAST. The yellow segments are the variable region. The gray region is the gap-rich region (GapC > 0.2, deletion in this alignment). See http://geneva.bu.edu/paper03.html for a high-resolution Web figure. (4.71 MB EPS). Figure S3 Multiple Alignment of GtfB and Its Homologs (3.12 MB EPS). Figure S4 Multiple Alignment of B. subtilis Gene yhdY and Its Homologs YhdY is currently annotated as a hypothetical protein and contains a conserved domain for mechanosensitive proteins (the middle region of the alignment) and two variable domains (N- and C-termini). (2.86 MB EPS). Figure S5 Multiple Alignment for H. pylori Gene HP1299 It is the methionine aminopeptidase (type Ia map ). This is an example of a fully conserved gene. (1.87 MB EPS). Figure S6 Multiple Alignment for H. pylori Gene HP1037 It is currently annotated as “conserved hypothetical protein.” The N-terminal region is variable. The conserved C-terminal domain is characteristic of methionine aminopeptidase. (2.22 MB EPS). Figure S7 Length Distribution of Variable Regions in the Genome of H. pylori Shown as a histogram. Only variable regions inside fully conserved genes and SVGs are included. Pink line shows the domain size distribution in 3D-structure database (data from Wheelan et al. 2000 ). (643 KB EPS). Table S1 Classification of Genes into Three Broad Categories (62 KB DOC). Accession Numbers The GenBank ( http://www.ncbi.nlm.nih.gov/GenBank/ ) accession numbers for the genes discussed in Figure 2 are as follows: atpA (2314285), atpD (2314283), atpG (2314284), dnaX (2313841), flgK (2314271), ftsK (2314237), gyrB (2313611), hetA (2314367), HP1450 (2314626), infB (2314195), M.hpyAVIB (2313124; REBASE [ http://rebase.neb.com ] ID M2.hpyAVI), mutS (2313742), NQO3 (2314431), NQO8 (2314432), polA (2314647), rps4 (2314460), spaB (2313717), spoT (2313901), tlpA (2313179), and tlpC (2313162). The GenBank accession numbers for the genes discussed in Figure 3 are as follows: Agrobacterium tumefaciens (15890351), B. subtilis (16079962), Enterococcus faecalis (8100675), E. coli K12 (16128553), L. innocua (16801788), Mycobacterium leprae (15826988), M. tuberculosis CDC1551 (15840173), Nitrosomonas europaea (22955201), Nostoc sp. PCC 7120 (17228666), P. syringae pv. syringae B728a (23470301), Ralstonia metallidurans (22980570), R. solanacearum (17548875), Synechococcus sp. PCC 7942 (21954778), and Thermotoga maritima (15644402); in case studies, B.
subtilis yhdY (2633299), E. coli b1330 (1787591), H. pylori cytosine-specific DNA methyltransferase (2313124), H. pylori HP1299 (2314463), H. pylori HP1037 (2314181), H. pylori prolyl-tRNA synthetase (2313329), and H. pylori VacB (2314413). | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC387263.xml |
493273 | Rofecoxib for dysmenorrhoea: meta-analysis using individual patient data | Background Individual patient meta-analysis to determine the analgesic efficacy and adverse effects of single-dose rofecoxib in primary dysmenorrhoea. Methods Individual patient information was available from three randomised, double blind, placebo and active controlled trials of rofecoxib. Data were combined through meta-analysis. Number-needed-to-treat (NNT) for at least 50% pain relief and the proportion of patients who had taken rescue medication over 12 hours were calculated. Information was collected on adverse effects. Results For single-dose rofecoxib 50 mg compared with placebo, the NNTs (with 95% CI) for at least 50% pain relief were 3.2 (2.4 to 4.5) at six, 3.1 (2.4 to 9.0) at eight, and 3.7 (2.8 to 5.6) at 12 hours. For naproxen sodium 550 mg they were 3.1 (2.4 to 4.4) at six, 3.0 (2.3 to 4.2) at eight, and 3.8 (2.7 to 6.1) at 12 hours. The proportion of patients who needed rescue medication within 12 hours was 27% with rofecoxib 50 mg, 29% with naproxen sodium 550 mg, and 50% with placebo. In the single-dose trial, the proportion of patients reporting any adverse effect was 8% (4/49) with rofecoxib 50 mg, 12% (6/49) with ibuprofen 400 mg, and 6% (3/49) with placebo. In the other two multiple dose trials, the proportion of patients reporting any adverse effect was 23% (42/179) with rofecoxib 50 mg, 24% (45/181) with naproxen sodium 550 mg, and 18% (33/178) with placebo. Conclusions Single dose rofecoxib 50 mg provided similar pain relief to naproxen sodium 550 mg over 12 hours. The duration of analgesia with rofecoxib 50 mg was similar to that of naproxen sodium 550 mg. Adverse effects were uncommon, suggesting safety in short-term use of rofecoxib and naproxen sodium. Future research should include restriction on daily life and absence from work or school as outcomes. | Background Dysmenorrhoea is associated with painful cramping of the lower abdominal or back muscles, with or without other symptoms such as nausea, vomiting, and diarrhoea. Onset of dysmenorrhoea is common during adolescence; up to 50% of women of reproductive age may be affected [ 1 ], with 10% incapacitated for up to three days each menstrual cycle. The pain caused by dysmenorrhoea can be debilitating, resulting in women being unable to perform daily activities and being absent from work or school. In consequence, dysmenorrhoea is associated with emotional, social, and economic burdens. Despite the impact of dysmenorrhoea on daily living, few women seek medical advice [ 2 ] or know which treatments work [ 3 ]. Raised concentrations of uterine prostaglandins are thought to cause the pain and cramping associated with dysmenorrhoea [ 4 - 6 ]. Nonsteroidal anti-inflammatory drugs (NSAIDs) inhibit prostaglandin synthesis and are commonly used to treat the condition. The newer Cox-2 selective inhibitors (coxibs) also inhibit prostaglandin synthesis, providing an alternative to conventional NSAIDs. Relatively low rates of gastrointestinal adverse effects allow the use of higher doses of coxibs in acute pain and dysmenorrhoea. These high doses may have the additional advantage of longer duration analgesia with extended dosing intervals. Systematic reviews have shown NSAIDs to be effective in the treatment of primary dysmenorrhoea [ 7 , 8 ]. While the latest Cochrane review [ 8 ] reported on 4,066 women in trials of NSAIDs in dysmenorrhoea, the trials themselves were small, with an average of about 50 women per trial.
These 63 randomised double-blind trials investigated 21 different NSAIDs, at different doses, in studies of varying design, outcomes, and duration. At least moderate pain relief over a cycle was reported in 14 comparisons between NSAID (any NSAID, any dose) and placebo in 599 women, with an NNT of 2.1 (1.9 to 2.5). The most studied NSAID was naproxen, with 287 women in seven trials, with an NNT of 2.5 (2.0 to 3.3) for this outcome. An earlier review [ 7 ] included more trials, some not double-blind, but came to substantially the same conclusion about analgesic efficacy in primary dysmenorrhoea. NSAIDs also improved activities of daily living, though this was reported in only 216 women [ 8 ]. Adverse event information in these trials was not informative, given the small number of women and trials and the rarity of adverse events in younger women taking NSAIDs for a limited time. Cox-2 selective inhibitors (coxibs) provide an alternative to conventional NSAIDs, with a potential advantage of once daily dosing [ 9 - 11 ]. This individual patient data meta-analysis of rofecoxib in dysmenorrhoea aimed to determine the efficacy and duration of analgesic activity of single dose rofecoxib, and to evaluate adverse effects. Methods QUOROM guidelines [ 12 ] for quality of reporting of meta-analyses were followed, though no flow chart was used since trial data all came from a single source. Merck Research Laboratories, Rahway, New Jersey provided individual patient data from three Phase III trials of rofecoxib in dysmenorrhoea (studies 38, 55 and 56), with the guarantee that all relevant studies completed by July 2002 had been made available. One of the trials has been published in full [ 9 ]. Searching PubMed to January 2004 identified one other randomised but open trial of rofecoxib [ 13 ] not sponsored by Merck. For inclusion, trials had to be randomised and double blind, compare rofecoxib with placebo, and provide single dose efficacy information. Outcome data were available for pain relief, pain intensity, time to remedication (use of rescue analgesic), and adverse effects. Each trial report was independently read and scored for quality using a three item, 0–5 point scale [ 14 ]. For inclusion a trial had to score a minimum of two points, one each for randomisation and double blinding, out of a maximum of 5. A sixteen-point scale was also used to assess trial validity [ 15 ]. Our intention was to use two pain outcomes (pain relief over the first dose, and pain relief over the whole cycle), in each case choosing the measure closest to at least half pain relief. Over the first dose, this might be a measure of total pain relief (TOTPAR), while over a whole cycle it might be a patient global evaluation of good or excellent, rather than mild, fair, no improvement, or worse. Analyses for comparator treatments were based on information available only from the trials of rofecoxib in this report. Outcome data were pooled in an intention to treat (number of patients randomised) analysis. Neither heterogeneity tests nor funnel plots were used [ 16 , 17 ]. Instead, clinical homogeneity of trials was examined graphically [ 18 ]. Relative benefit (or risk) was calculated using a fixed effects model [ 19 ]; no statistically significant difference between treatments was assumed when the 95% confidence interval included 1. Number-needed-to-treat (or harm) was calculated using the method of Cook and Sackett [ 20 ] using pooled observations.
NNT is the reciprocal of the absolute risk reduction or increase; for instance, if 75 out of 100 patients benefit with treatment and only 25 out of 100 benefit with placebo, the absolute risk increase is 0.75 − 0.25 = 0.5, and the NNT is 1/0.5 = 2. The z test [ 21 ] was used to determine statistical differences between NNTs for different doses, treatments or outcomes. Mean adverse event rates were calculated, weighting by treatment group size. Use of rescue medication was analysed as the proportion of patients remedicating at different time points within 12 hours. Results The mean age of women in the trials was 31 years, and at baseline pain was moderate in 66% of women and severe in 34%. All trials were randomised, double blind, and compared oral doses of rofecoxib with an active control and placebo in women with moderate to severe pain due to dysmenorrhoea. One trial was a single dose crossover (study 38), and two were multiple dose crossovers (studies 55 and 56), where the crossover was between single doses in different menstrual cycles. Single dose efficacy data were available for the first 12 hours of treatment in all trials, but not summary estimates over a cycle. Study designs and quality and validity scores are shown in Table 1 . All trials scored the maximum five points for quality and at least 13/16 points for trial validity; some of the criteria for validity were not appropriate because of the individual patient presentation of results. Table 1 Trial details
Study 38 (49 women): rofecoxib 25 mg, rofecoxib 50 mg, ibuprofen 400 mg, or placebo. Design: single oral dose, parallel, 3 menstrual cycles. Observations after 8 hrs: 12, 24. Quality score: 5/5. Validity score: ≥13/16.
Study 55 (60 women): rofecoxib 50 mg then 25 mg as required; naproxen sodium 550 mg every 12 hrs; placebo. Design: oral multiple dose study with single dose efficacy data and multiple dose adverse events; cross-over, 1 of 6 drug sequences, 3 menstrual cycles. Observations: 12 hour observations after a single dose in a three-day study. Quality score: 5/5. Validity score: ≥13/16.
Study 56 (122 women): rofecoxib 50 mg as required; rofecoxib 50 mg then 25 mg as required; naproxen sodium 550 mg every 12 hrs; placebo. Design: oral multiple dose study with single dose efficacy data and multiple dose adverse events; cross-over, 1 of 4 drug sequences, 4 menstrual cycles. Observations: 12 hour observations after a single dose in a three-day study. Quality score: 5/5. Validity score: ≥13/16.
The single dose trial (study 38) was conducted over three cycles. Each of 49 women received three of four treatments (rofecoxib 25 mg, rofecoxib 50 mg, ibuprofen 400 mg, or placebo). For the two multiple dose studies, one (study 55) compared a single dose of rofecoxib 50 mg followed by 25 mg daily as required, naproxen sodium 550 mg every 12 hours, or placebo in 60 women over three menstrual cycles. The other (study 56) compared rofecoxib 50 mg as required, rofecoxib 50 mg followed by 25 mg daily as required, naproxen sodium 550 mg every 12 hours, or placebo in 122 women over four menstrual cycles. Both trials reported multiple dose adverse effects. In the multiple dose trials, all women received each treatment regimen for one cycle. Pain intensity and pain relief were measured using the standard 4-point categorical pain intensity scale (0 none, 1 mild, 2 moderate, 3 severe) and a 5-point pain relief scale (0 none, 1 a little, 2 some, 3 a lot, 4 complete). Pain measurements were collected using patient diaries. Patients were assessed at baseline, then at least hourly for eight hours, and again at 12 hours for single dose efficacy data.
The exact time at which a patient requested remedication (or rescue analgesic), if required, was recorded. Adverse effects were recorded as the number of patients with any adverse effect(s), or particular adverse effects. Efficacy Full efficacy results over six, eight and 12 hours are shown in Table 2 . All active treatments were significantly more effective than placebo at all time points. Table 2 Number needed to treat for at least 50% pain relief (improved with active v placebo; relative risk and NNT with 95% CI)
Six hour outcomes:
Rofecoxib 25 mg (1 trial): 66/115 v 45/118 (57% v 38%); relative risk 1.5 (1.1 to 2.0); NNT 5.0 (3.7 to 7.8)
Rofecoxib 50 mg (3 trials): 140/226 v 70/225 (62% v 31%); relative risk 2.0 (1.6 to 2.5); NNT 3.2 (2.4 to 4.5)
Ibuprofen 400 mg (1 trial): 31/49 v 10/47 (63% v 21%); relative risk 3.0 (1.7 to 5.4); NNT 2.4 (1.7 to 4.2)
Naproxen sodium 550 mg (2 trials): 120/181 v 60/178 (66% v 34%); relative risk 2.0 (1.6 to 2.5); NNT 3.1 (2.4 to 4.4)
Eight hour outcomes:
Rofecoxib 25 mg (1 trial): 70/115 v 44/118 (61% v 37%); relative risk 1.6 (1.2 to 2.2); NNT 4.2 (2.8 to 9.0)
Rofecoxib 50 mg (3 trials): 147/226 v 73/225 (65% v 32%); relative risk 2.0 (1.6 to 2.5); NNT 3.1 (2.4 to 9.0)
Ibuprofen 400 mg (1 trial): 30/47 v 11/47 (61% v 21%); relative risk 2.6 (1.5 to 4.6); NNT 2.6 (1.8 to 5.1)
Naproxen sodium 550 mg (2 trials): 121/181 v 62/178 (68% v 35%); relative risk 2.0 (1.6 to 2.4); NNT 3.0 (2.3 to 4.3)
Twelve hour outcomes:
Rofecoxib 25 mg (1 trial): 64/115 v 45/118 (56% v 38%); relative risk 1.5 (1.1 to 1.9); NNT 5.7 (3.3 to 20)
Rofecoxib 50 mg (3 trials): 135/226 v 74/225 (60% v 33%); relative risk 1.8 (1.5 to 2.3); NNT 3.7 (2.8 to 5.6)
Ibuprofen 400 mg (1 trial): 27/49 v 12/47 (55% v 26%); relative risk 2.2 (1.3 to 3.7); NNT 3.4 (2.1 to 9.2)
Naproxen sodium 550 mg (2 trials): 111/181 v 62/178 (61% v 35%); relative risk 1.8 (1.4 to 2.2); NNT 3.8 (2.7 to 6.1)
Rofecoxib 25 mg was tested in a single trial, rofecoxib 50 mg in three, ibuprofen 400 mg in one and naproxen sodium 550 mg in two. For all the active analgesics the proportion of patients with at least 50% pain relief was about 60% at all time points, and with placebo it was about 30% at all time points (Table 2 ). Numbers needed to treat tended to be much the same for rofecoxib 50 mg, ibuprofen 400 mg and naproxen sodium 550 mg, though somewhat higher (worse) for rofecoxib 25 mg (Table 2 ). There was no significant difference between NNTs for single doses of study treatments at six, eight or 12 hours. For instance, no significant difference at the 12 hour comparison was seen between the NNTs of rofecoxib 25 mg and rofecoxib 50 mg (z score 1.19, p = 0.23), ibuprofen 400 mg (z score 0.28, p = 0.78), or naproxen sodium 550 mg (z score 1.09, p = 0.28), or between NNTs of rofecoxib 50 mg and ibuprofen 400 mg (z score 0.26, p = 0.79), or naproxen sodium 550 mg (z score 0.096, p = 0.60). Remedication The proportion of patients who remedicated at different time points over 12 hours is shown in Figure 1 . At 12 hours remedication occurred in 29% on rofecoxib 25 mg, 28% on rofecoxib 50 mg, 29% on naproxen sodium 550 mg, 41% on ibuprofen 400 mg, and 50% with placebo. Figure 1 Remedication time for all drugs Adverse effects Few adverse effects of a particular type were reported, and none were serious in any trial. The most commonly reported adverse effects were nausea and somnolence, but these occurred infrequently. A single dose in one trial (study 38) gave the proportion of patients reporting any adverse effect(s) as 10% (5/49 patients) with rofecoxib 25 mg, 8% (4/49) with rofecoxib 50 mg, 12% (6/49) with ibuprofen 400 mg, and 6% (3/49) with placebo. With multiple doses over a cycle, the proportion of patients reporting any adverse effect(s) was 23% (42/179 patients) with rofecoxib 50 mg, 24% (45/181) with naproxen sodium 550 mg, and 18% (33/178) with placebo.
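As a worked check of the arithmetic behind Table 2, the sketch below recomputes the NNT and its confidence interval for rofecoxib 50 mg versus placebo at six hours, and a z statistic for comparing two NNTs on the risk-difference scale. The normal-approximation interval is one common reading of the Cook and Sackett method cited in Methods; treat it as an approximation, not the authors' exact code.

```python
# Reproducing the Table 2 arithmetic for rofecoxib 50 mg v placebo at six
# hours (140/226 v 70/225). Assumes the risk-difference CI excludes zero.
import math

def nnt_with_ci(a_events, a_n, p_events, p_n, z=1.96):
    pa, pp = a_events / a_n, p_events / p_n
    arr = pa - pp                    # absolute benefit increase over placebo
    se = math.sqrt(pa * (1 - pa) / a_n + pp * (1 - pp) / p_n)
    lo_arr, hi_arr = arr - z * se, arr + z * se
    return 1 / arr, 1 / hi_arr, 1 / lo_arr   # NNT and its 95% CI bounds

def z_for_nnt_difference(arm1, arm2):
    """z test for two NNTs, compared on the risk-difference scale; each arm
    is (active events, active n, placebo events, placebo n)."""
    def arr_se(a_events, a_n, p_events, p_n):
        pa, pp = a_events / a_n, p_events / p_n
        return pa - pp, math.sqrt(pa*(1-pa)/a_n + pp*(1-pp)/p_n)
    arr1, se1 = arr_se(*arm1)
    arr2, se2 = arr_se(*arm2)
    return (arr1 - arr2) / math.sqrt(se1**2 + se2**2)

nnt, lo, hi = nnt_with_ci(140, 226, 70, 225)
print(f"rofecoxib 50 mg, 6 h: NNT {nnt:.1f} (95% CI {lo:.1f} to {hi:.1f})")
# -> NNT 3.2 (about 2.5 to 4.5), matching Table 2 within rounding

print(z_for_nnt_difference((140, 226, 70, 225), (120, 181, 60, 178)))
# small |z| -> no significant 6 h difference v naproxen sodium 550 mg
```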
Discussion Pain with dysmenorrhoea usually lasts for about three days, though with considerable individual variation. Trials of analgesics can have various forms. The simplest might be to give the same analgesic for the whole of the painful cycle, and ask a global question concerning efficacy at the end. Women might then be crossed over to a different treatment at the next cycle. A variation would be to use the same basic structure, but make more detailed evaluations of pain or pain relief over a limited time during the first day, though a global question could always be added. A more complicated design would use a cross-over within a single cycle. The three trials described here used a cross-over between cycles, with detailed pain measurements over 12–24 hours in the first painful day. Twelve-hour and 24-hour outcomes have also been reported in two other recent studies of coxibs in dysmenorrhoea [ 10 , 11 ]. With previous NSAID studies [ 8 ] the outcome most often used in placebo-controlled trials was at least moderate pain relief or an equivalent outcome over a whole cycle. A recent open-label study had a crossover design with drugs given each successive day [ 13 ]. All three trials were of the highest reporting quality, and had high validity scores, indicating that known sources of bias were unlikely to have occurred [ 22 , 8 ]. We know that to be sure of a result (as an NNT) we need information from about 400 patients when the NNT is about 2, but much more information when the NNT is higher (worse) [ 23 ]. Here we have information from about 450 women with rofecoxib 50 mg and 360 with naproxen sodium 550 mg, but only about 200 women contributed for rofecoxib 25 mg and fewer than 100 for ibuprofen 400 mg (Table 2 ). For rofecoxib 25 mg and ibuprofen 400 mg, therefore, uncertainty over the size of the effect continues. Individual patient information from three trials of high quality showed that for the outcome of at least half pain relief over 12 hours, rofecoxib, naproxen sodium and ibuprofen were similarly effective in the treatment of pain associated with dysmenorrhoea. This confirms what was known from a previous meta-analysis [ 8 ], in which most information was for naproxen at various doses with an NNT of 2.5 (2.0 to 3.3) for the outcome of at least moderate pain relief over 3–5 days compared with placebo. In this analysis the NNT for a single dose of 550 mg naproxen sodium was between 3.0 and 3.8 over six to 12 hours. The Cochrane review had information on 287 women in seven placebo-controlled trials, while two of the three trials here had information on 359 women taking naproxen sodium. Rofecoxib 50 mg was statistically indistinguishable from naproxen sodium 550 mg (Table 2 ) at all times, though rofecoxib 25 mg tended to have numerically higher (worse) NNTs at all times. Remedication over 12 hours was statistically indistinguishable between rofecoxib doses and naproxen. Here the number of women studied was even larger, with 451 women involved in the trials comparing rofecoxib 50 mg and placebo. The difference between rofecoxib 50 mg and naproxen sodium 550 mg would be in dosing schedules, with once-daily versus twice-daily dosing. Pain relief and duration of analgesia are not the only issues of importance in dysmenorrhoea. The impact of dysmenorrhoea on activities of daily living, disability or function, and absence from work or school are additional factors to be considered. These outcomes were not addressed by the trials for rofecoxib, which were conducted for regulatory purposes.
This limits the utility of the information, but trials of other coxibs (also conducted for regulatory purposes) have also concentrated on pain relief and duration of analgesia [ 10 , 11 ]. Future trials should examine a range of short-term analgesia and longer-term outcomes such as interference with daily living or absence from work or school. The analysis by Zhang and colleagues [ 7 ] did examine these additional outcomes, and found daily life to be less restricted with naproxen or ibuprofen than with placebo, and fewer absences from work or school to occur with naproxen than with placebo. These outcomes are infrequently reported [ 8 ], but are likely to be associated with pain, so decreased pain should improve these other outcomes as well; verification of this assumption with data from high quality clinical trials would nevertheless be welcome. Future individual patient analysis of trials in dysmenorrhoea would have the potential to examine issues around the efficacy of analgesics in women with heavy menstrual loss, or who use combined oral contraceptive pills. Such information was not available in this analysis, and in any event any sub-groups would probably have been too small for any definitive answer. In the trials for rofecoxib, information on adverse effects was collected using diaries. Few adverse effects were reported to have occurred and none were serious. The most common adverse effects were nausea and somnolence. These and headache have been frequently reported with other coxibs [ 10 , 11 ] and NSAIDs [ 7 , 24 ]. The problem when interpreting information on adverse effects, though, is that any symptom can be recorded as an adverse event, however tenuous its association with the study drug. We cannot be certain whether these symptoms were due to the condition or to the drug. Conclusions Based on information from three trials, a single dose of rofecoxib 50 mg is as effective as a single dose of naproxen sodium 550 mg in controlling the pain associated with dysmenorrhoea, and causes relatively few adverse effects. Competing interests RAM has been a consultant for Merck, Sharpe and Dohme Ltd, UK. RAM, JE and HJM have received lecture fees from pharmaceutical companies. The authors have received research support from charities and government sources at various times, but no such support was received for this work. None of the authors has any direct stock holding in any pharmaceutical company. The terms of the financial support from MSD included freedom for authors to reach their own conclusions, and an absolute right to publish the results of their research, irrespective of any conclusions reached. MSD did have the right to view the final manuscript before publication, and did so. Authors' contributions JE conducted the analyses, which were checked by RAM. All authors contributed equally to the design, writing and reviewing of the paper. Table 3 Proportion of women who used rescue analgesic (percent who remedicated by 6, 8 and 12 hrs)
Rofecoxib 25 mg: 21, 25, 28
Rofecoxib 50 mg: 22, 24, 27
Ibuprofen 400 mg: 22, 35, 41
Naproxen sodium 550 mg: 18, 24, 29
Placebo: 37, 44, 50
Pre-publication history The pre-publication history for this paper can be accessed here: | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC493273.xml |
534796 | Primary lymphocutaneous nocardiosis in an immunocompetent patient | Background Nocardia brasiliensis is a rare human pathogen usually associated with localized cutaneous infections. Case Presentation We report a case of primary lymphocutaneous Nocardia brasiliensis infection that developed after a bone fracture of the left hand of an otherwise healthy 32-year-old man. Treatment with trimethoprim-sulfamethoxazole given for a total of three months combined with surgical debridement resulted in complete resolution of the infection. Conclusion Nocardiosis should be part of the differential diagnosis in patients with sporotrichoid infection, particularly those with a history of outdoor injury. Culture of the affected tissue and antimicrobial susceptibility testing of the isolate should be performed for diagnosis and treatment. | Background Nocardiosis is a rare localized or systemic infection caused by several species of the genus Nocardia . This genus consists of strictly aerobic, Gram-positive, variably acid-fast, filamentous bacteria with a tendency to fragment into bacillary and coccoid forms [ 1 ]. N. asteroides , N. farcinica , N. nova (included in the N. asteroides complex) and N. brasiliensis are the species most often involved in human disease [ 1 , 2 ]. N. brasiliensis has been recovered from the soil in many tropical and subtropical areas but rarely in temperate areas. Traumatic inoculation of N. brasiliensis into the skin is the most typical mode of acquisition of the infection due to this organism [ 1 , 2 ]. Herein, we describe a man who had an accident at work and 1 week later developed lymphocutaneous infection caused by N. brasiliensis . Case Presentation A previously healthy 32-year-old man was referred to the emergency department of orthopedics with traumatic injuries of the index, middle and ring fingers of the left hand. The injury happened at work while he was operating a cotton-processing machine. On admission, routine laboratory investigations showed only an elevated white blood cell count (12,400/mm³), while red blood cell count, haemoglobin, chemistry and urine analysis were within normal limits. Radiography of the bones of his left hand revealed fractures of the nail bones of the middle and the ring finger. Surgical debridement of the damaged soft tissue was undertaken with amputation of these two nail bones. The patient was hospitalized and intravenous therapy with ceforanide (1 g/12 h), ofloxacin (200 mg/12 h) and metronidazole (500 mg/8 h) was initiated. Four days after his admission the hand injury became tender, erythematous and swollen, and began to drain. The purulent material expressed from the hand was sent for culture. Three days later the lesion worsened and was complicated by lymphangitis. The patient was noted to be febrile (38.5°C) without any other systemic symptoms. Physical examination revealed multiple erythematous subcutaneous nodules along the lymphatics extending up the patient's left forearm. These nodules were tender and painful. There was no regional lymphadenopathy. Debridement of the lesions was performed and the tissue was submitted for bacterial and fungal cultures. The Gram-stained smear showed polymorphonuclear leucocytes and Gram-positive fine, branching filaments, partially acid-fast, with a tendency to fragment into coccoid and bacillary forms.
Laboratory tests showed: white blood cell count, 18,000/mm³; absolute neutrophil count, 12,780/mm³; erythrocyte sedimentation rate, 75 mm/h; and C-reactive protein, 14.2 mg/dl (normal < 0.8 mg/dl). After 5 days of incubation, cultures of the pus and the tissue on Columbia blood agar grew rough white colonies adherent to the agar, with a velvety surface and a characteristic mouldy odor (Figure 1 ). Colonial characteristics, physiological properties and the biochemical tests performed identified the isolate as Nocardia brasiliensis (Table 1 ). Antimicrobial susceptibility testing, by determination of MICs with the E-test method (AB Biodisk, Solna, Sweden), showed that the isolate was sensitive to trimethoprim-sulfamethoxazole, amoxicillin-clavulanic acid, gentamicin, tobramycin, amikacin, and minocycline, intermediate to ciprofloxacin and resistant to ampicillin, second and third generation cephalosporins, erythromycin, clindamycin, ofloxacin and pefloxacin. The patient's antimicrobial therapy was changed to intravenous trimethoprim-sulfamethoxazole (160/800 mg b.i.d.). The patient responded to therapy: following 2 weeks of treatment he improved and all laboratory tests returned to normal. He was discharged 3 weeks after his admission on oral trimethoprim-sulfamethoxazole (160/800 mg b.i.d.). The antibiotic therapy was continued for a total of 3 months. His hand and arm lesions healed well, and examination 6 months later revealed complete resolution of the infection without signs of recurrence. Figure 1 Rough chalky-white colonies of Nocardia brasiliensis grown on Columbia blood agar Table 1 Physiological characteristics and biochemical reactions of our Nocardia brasiliensis isolate
Decomposition of: adenine −; casein +; tyrosine +; xanthine −
API 20C AUX assimilation results: glucose +; glycerol +; galactose +; N-acetyl-D-glucosamine +; inositol +; adonitol −; trehalose +
Equivalent growth at 35°/45°C: −
Lysozyme broth: +
Production of arylsulfatase (7 d): −
Gelatin liquefaction (7 d): +
Sensitivity to: gentamicin +; tobramycin +; amikacin +; erythromycin −
Discussion In the United States, there are an estimated 500–1,000 new cases of nocardiosis each year [ 3 ]. On the basis of epidemiological surveys conducted in France and Italy, the annual estimated incidence of human nocardiosis is 150–250 and 90–130 cases, respectively [ 4 , 5 ]. N. brasiliensis accounts for only about 7–14% of the reported cases [ 3 ]. The incidence of nocardiosis in Greece is still unknown because nocardial infections are not reported to the public health authorities. Only one case of N. brasiliensis lymphocutaneous syndrome has been previously described in the same geographic area [ 6 ]. N. brasiliensis , although rarely implicated in pulmonary and disseminated infections in immunocompromised patients, has been most commonly associated with cutaneous infections [ 7 ]. Nocardia enters the skin after traumatic inoculation injuries, varying from contaminated abrasions and puncture wounds to insect and animal bites. The most common resulting skin lesions are on the upper and lower extremities [ 7 ]. Cutaneous manifestations include: (i) mycetoma, (ii) lymphocutaneous (sporotrichoid) infection, (iii) superficial skin infection, and (iv) disseminated infection with cutaneous involvement.
The present case is consistent with the classical presentation of lymphocutaneous infection, with a primary lesion at the site of injury on the hand and an ascending lymphangitis involving the forearm. The inoculation probably occurred from cotton contaminated by Nocardia that entered the wound after the accident. The lymphocutaneous syndrome can be caused by a wide variety of microorganisms. The most common causative agents of the syndrome, in addition to Sporothrix schenckii and Nocardia brasiliensis , include Mycobacterium marinum and Leishmania species. Less common causes are Coccidioides immitis , Cryptococcus neoformans , Histoplasma capsulatum , Blastomyces dermatitidis , Pseudoallescheria boydii , other species of Mycobacterium , Streptococcus pyogenes , Staphylococcus aureus and viruses such as cowpox virus and herpes simplex virus [ 8 ]. A history of a traumatic wound contaminated with soil and the relatively brief incubation period (less than 2 weeks) suggest nocardiosis. Diagnosis of nocardial infection can be established by cultural isolation of the microorganism. Identification to the species level can be successfully performed either by conventional biochemical methods or by molecular techniques [ 1 , 9 ]. The trimethoprim-sulfamethoxazole combination is recognized as the drug of choice for nocardiosis [ 10 ]. Primary lymphocutaneous nocardiosis may be curable after a course of 2 to 4 months, although several studies report clinical cures of cutaneous nocardiosis caused by N. brasiliensis after only 2 to 3 weeks of therapy. In patients with sulfa intolerance or those who fail therapy with trimethoprim-sulfamethoxazole, alternative therapy must be based on sensitivity testing. Minocycline, tetracycline, amikacin and amoxicillin-clavulanic acid have been successfully used [ 11 ]. Although rare, lymphocutaneous nocardiosis must be considered, diagnosed with appropriate cultures and adequately treated, in order to prevent progression to dissemination of the primary skin disease. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC534796.xml |
535339 | Proteomics computational analyses suggest that the carboxyl terminal glycoproteins of Bunyaviruses are class II viral fusion proteins (beta-penetrenes) | The Bunyaviridae family of enveloped RNA viruses includes five genera: orthobunyaviruses, hantaviruses, phleboviruses, nairoviruses and tospoviruses. It has not been determined which Bunyavirus protein mediates virion:cell membrane fusion. Class II viral fusion proteins (beta-penetrenes), encoded by members of the Alphaviridae and Flaviviridae, are comprised of three antiparallel beta sheet domains with an internal fusion peptide located at the end of domain II. Proteomics computational analyses indicate that the carboxyl terminal glycoprotein (Gc) encoded by Sandfly fever virus (SAN), a phlebovirus, has a significant amino acid sequence similarity with envelope protein 1 (E1), the class II fusion protein of Sindbis virus (SIN), an Alphavirus. Similar sequences and common structural/functional motifs, including domains with a high propensity to interface with bilayer membranes, are located collinearly in SAN Gc and SIN E1. Gc encoded by members of each Bunyavirus genus share several sequence and structural motifs. These results suggest that Gc of Bunyaviridae, and similar proteins of Tenuiviruses and a group of Caenorhabditis elegans retroviruses, are class II viral fusion proteins. Comparisons of divergent viral fusion proteins can reveal features essential for virion:cell fusion, and suggest drug and vaccine strategies. | Introduction Two classes of viral envelope proteins that mediate virion:cell fusion have been described. Class I and II fusion proteins (aka α- and β-penetrenes) are distinguished, in part, by the location of the "fusion peptide," a cluster of hydrophobic and aromatic amino acids that appears critical for fusing viral and cell membranes. The fusion peptides of class I fusion proteins are located at or near the amino terminus, whereas fusion peptides of class II fusion proteins are internal. The overall structures of these two classes of viral fusion proteins are also distinct. Class I fusion proteins have a pair of extended α helices that are separated by sequences variable in length, but usually containing one or more dicysteine linkages. Several otherwise disparate viruses, including orthomyxoviruses, paramyxoviruses, retroviruses, arenaviruses, filoviruses and coronaviruses, encode class I fusion proteins [ 1 - 4 ]. Class II fusion proteins are comprised mostly of antiparallel β sheets. The prototypic class II fusion protein is the E glycoprotein of tick-borne encephalitis virus (TBEV), a member of the genus flavivirus of the Flaviviridae family [ 5 ]. E possesses three β sheet domains (I-III). In the slightly curved rod-like configuration of the E protein present in the virion, the fusion peptide is located at the tip of domain II, the furthest point distal from the C-terminal transmembrane anchor. The virion configuration of envelope glycoprotein E1, the fusion protein of the Alphavirus Semliki Forest virus (SFV), demonstrates a remarkable fit to the scaffold of TBEV E [ 6 ]. E of dengue virus (DEN) and West Nile virus, medically important flaviviruses, also can be fit to the class II structure [ 7 , 8 ]. Recent studies indicate that TBEV E, DEN E and SFV E1 undergo similar conformational changes upon exposure to low pH, as encountered during entry via endocytic vesicles, suggesting a common fusion mechanism [ 9 - 11 ].
Based on sequence similarities, it is likely that E1 of other Alphaviruses and E of other members of the flavivirus genus within the family Flaviviridae are also class II fusion proteins. Members of the two other genera in the Flaviviridae, hepaciviruses and pestiviruses, appear on the basis of proteomics computational analyses to encode truncated class II fusion proteins [ 12 ]. The Bunyaviridae family of enveloped RNA viruses includes five disparate genera. Orthobunyaviruses, phleboviruses, nairoviruses and tospoviruses are spread by insect vectors, whereas hantaviruses are spread by rodent vectors [ 13 ]. Members of each Bunyavirus genus include important human and animal pathogens, except the tospoviruses, whose members infect plants [ 14 , 15 ]. The Bunyavirus genome consists of three single-stranded RNA segments. The envelope glycoproteins are encoded by the middle-sized segment (M) [ 16 , 17 ]. Members of each genus encode two glycoproteins that are present on the virion surface, designated Gn and Gc to refer to their location amino terminal or carboxyl terminal on the M encoded polyprotein. The M segments of orthobunyaviruses, phleboviruses, and tospoviruses have been shown to encode "nonstructural" proteins (NSm). In the case of the orthobunyaviruses and phleboviruses, NSm is synthesized as part of the polyprotein, but in tospoviruses NSm is encoded via an ambisense strategy by a separate mRNA [ 18 ]. The identity and structure of Bunyavirus fusion protein(s) are unknown, though it is likely that Gn or Gc fulfills this role. Proteomics computational analyses suggest that Bunyavirus Gc, and similar proteins of Tenuiviruses and a group of Caenorhabditis elegans retroviruses, are class II viral fusion proteins (β-penetrenes). Materials and Methods Sequences For sequence and structural comparisons of Bunyavirus M encoded proteins, representatives of the five genera were used, including the phleboviruses Sandfly fever virus, Sicilian strain (SAN, accession number: AAA75043) and Rift Valley fever virus (RVF, P03518), orthobunyavirus Bunyamwera virus (BUN, NP047212), hantavirus Hantaan virus, strain 76–118 (HAN, P08668), nairovirus Crimean-Congo hemorrhagic fever virus, strain IbAr (CCHF, NP950235), and tospovirus tomato spotted wilt virus, ordinary strain (TSWV, NP049359). Additional phlebovirus M sequences compared included those of Uukuniemi virus (UUK, NP941979) and Punta Toro virus (PTV, VGVUPT). Bunyavirus M sequences were compared to sequences encoded in Alphavirus subgenomic RNA, including structural proteins of Sindbis virus (SIN, P03316), Semliki Forest virus (SFV, NP463458), Venezuelan equine encephalitis virus, strain TC-83 (VEE, P05674), Western equine encephalitis virus, strain McMillan (WEE, AAF60166), O'nyong-nyong virus, strain GULU (ONN, P22056), Mayaro virus, strain TRVL4675 (MAY, AA033335), Barmah Forest virus strain BH2193 (BFV, AA033347) and Ross River virus, strain NB5092 (RRV, NP740686). Comparisons were also made with proteins encoded by RNA2 of the HZ isolate of Rice stripe virus, a Tenuivirus (pvc2, AA031607), and with certain retroviruses of Caenorhabditis elegans , including Cer13 (hypothetical protein Y75D11A.5, NP508324).
We also compared Bunyavirus M sequences to structural proteins of Flaviviruses, including members of the flavivirus genus tick-borne encephalitis virus, strain Neudoerfl (TBEV, P14336); Japanese encephalitis virus, strain JaOARS982 (JEV, P32886), yellow fever virus, strain 17D-204 (YFV, P19901), dengue virus type 2, strain PR-159/S1 (DEN, P12823), and West Nile virus, strain NY 2000-crow3356 (WNV, AF404756). The prototype hepaciviruses, strain H (subtype 1a) of hepatitis C virus (HCV, P27958), and several pestiviruses, including the Alfort 187 strain of classical swine fever virus, aka hog cholera virus (CSFV, CAA61161), bovine viral diarrhea virus genotype 1 aka pestivirus type 1, stain NADL (BVDV, CAB91847) and border disease virus, strain BD31 (BVD, AAB37578), were used in other comparisons. Proteomics computational methods Methods to derive general models of surface glycoproteins have been described previously [ 2 ]. PRSS3, a program derived from rdf2 [ 19 ], which uses the Smith-Waterman sequence alignment algorithm [ 20 ], was used to determine the significance of protein alignments. PRSS3 is part of the FASTA package of sequence analysis programs available by anonymous ftp from ftp.virginia.edu. Default settings for PRSS3 were used, including the blosum50 scoring matrix, gap opening penalty of 12, and gap extension penalty of 2. MacMolly (Soft Gene GmbH, Berlin) was used to locate areas of limited sequence similarity and to perform Chou-Fasman and Robson-Garnier analyses [ 21 , 22 ]. PHDsec (Columbia University Bioinformatics Center) was the preferred method of secondary structure prediction [ 23 ]. PHDsec predicts secondary structure from multiple sequence alignments by a system of neural networks, and is rated at an expected average accuracy of 72% for three states, helix, strand and loop. Domains with significant propensity to form transmembrane helices were identified with TMpred (ExPASy, Swiss Institute of Bioinformatics). TMpred is based on a statistical analysis of TMbase, a database of naturally occurring transmembrane glycoproteins [ 24 ]. Sequences with a propensity to partition into the lipid bilayer were identified with Membrane Protein eXplorer version 2.2a from the Stephen White laboratory using default settings [ 25 ]. The NetOGlyc server was used to predict mucin-type GalNAc O-glycosylation sites. RasMac (University of California Regents/Modular CHEM Consortium), developed by Roger Sayle, was used to render a 3D model of SFV E1, which was extrapolated to SIN E1 and SAN Gc. Results Similar sequences or common structural/functional motifs are located collinearly in the carboxyl terminal glycoprotein of Sandfly fever virus and Sindbis virus envelope glycoprotein E1. Previously, Gallaher and coworkers modeled the structure of the retroviral transmembrane glycoprotein (TM) [ 2 ] onto the scaffold of the known structure of the HA2 portion of the influenza virus hemagglutinin [ 26 ]. Later, Gallaher [ 1 ] fit the fusion protein of Ebola virus, a filovirus, to retroviral TM. Both models proved remarkably similar to the structures of these fusion proteins solved later by X-ray crystallography [ 27 - 29 ]. These results indicate that Gallaher's "Rosetta Stone" strategy, which employs the fusion peptide and other identifiable features in combination with computer algorithms that predict secondary structure, is a useful approach to the construction of working models of class I viral fusion proteins.
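The logic of the PRSS3 significance test used throughout the Results below can be illustrated with a short sketch: align two sequences, then re-align one against many shuffles of the other and report how often a shuffled score matches or beats the observed score. For brevity this toy uses identity scoring with a linear gap penalty rather than PRSS3's blosum50 matrix and affine (12/2) penalties, so it is a schematic of the shuffle test, not a reimplementation of PRSS3.

```python
# Minimal sketch of a PRSS3-style shuffle test for alignment significance.
import random

def smith_waterman(a, b, match=5, mismatch=-4, gap=-8):
    """Best local alignment score between strings a and b (linear gaps)."""
    best = 0
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b, start=1):
            diag = prev[j - 1] + (match if ca == cb else mismatch)
            cur.append(max(0, diag, prev[j] + gap, cur[j - 1] + gap))
            best = max(best, cur[j])
        prev = cur
    return best

def shuffle_p_value(a, b, n_shuffles=1000, seed=1):
    """Empirical p: fraction of shuffles of b scoring >= the real score."""
    rng = random.Random(seed)
    observed = smith_waterman(a, b)
    chars, hits = list(b), 0
    for _ in range(n_shuffles):
        rng.shuffle(chars)                  # destroy order, keep composition
        if smith_waterman(a, "".join(chars)) >= observed:
            hits += 1
    return observed, (hits + 1) / (n_shuffles + 1)

# Toy example with short peptide-like strings (not the real Gc/E1 sequences):
score, p = shuffle_p_value("GVFVHNDVEAWMDRYKYYPETP", "GCFNHDVEWMDKYQYYPATP", 200)
print(score, p)
```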
This approach, supplemented with newer proteomics computational tools, was applied to envelope glycoproteins encoded by members of the Bunyaviridae. Our initial finding, obtained using the PRSS3 alignment algorithm [ 20 , 30 ], was that the amino acid (aa) sequence of Gc of Sandfly fever virus (SAN), a phlebovirus, has a significant similarity (p < 0.002) with the aa sequence of E1, the fusion protein of Sindbis virus (SIN), an Alphavirus (Table 1 ). SAN Gc also showed significant overall alignments with E1 of several other Alphaviruses examined, including Semliki Forest virus (SFV), Western equine encephalitis virus (WEE) and O'nyong-nyong virus (ONN). The Gc proteins of the three other phleboviruses, Rift Valley fever virus (RVF), Uukuniemi virus (UUK) and Punta Toro virus (PTV), the only phleboviruses with completely sequenced Gc coding regions, also showed significant sequence similarities to certain Alphavirus E1 proteins. The alignment of RVF Gc with SIN E1 and WEE E1 was statistically significant, but the alignment with SFV E1 was not. PTV Gc showed the highest overall aa sequence similarity with any Alphavirus E1 examined, that of Venezuelan equine encephalitis virus (VEE) (p < 0.0004). UUK Gc showed a significant overall alignment only with Ross River virus (RRV) E1, while RRV E1 failed to align with any of the other three phlebovirus Gc examined. These results from multiple comparisons of phlebovirus Gc and Alphavirus E1 indicate that the significant alignment between SAN Gc and SIN E1 is not a statistical aberration, but may underlie structural and functional similarities between the two viral glycoproteins. It is also of interest that the PRSS3 sequence alignment tool permitted detection of similarities not detected by the use of BLASTp or related computational methods. Table 1 Comparison of phlebovirus Gc with Alphavirus E1 using the PRSS3 sequence algorithm (p values) 1
SAN Gc: SIN 0.002; SFV 0.02; WEE 0.001; VEE 0.004; MAY 0.05; RRV NS; BFV 0.03; ONN 0.007
RVF Gc: SIN 0.04; SFV NS; WEE 0.03; VEE 0.003; MAY NS; RRV NS; BFV 0.04; ONN NS
PTV Gc: SIN NS; SFV 0.04; WEE 0.0002; VEE 0.0004; MAY NS; RRV NS; BFV NS 2 ; ONN NS
UUK Gc: SIN NS; SFV NS; WEE NS; VEE NS 3 ; MAY NS; RRV 0.05; BFV NS; ONN NS
1 Two-way comparisons were done between the full-length amino acid sequences of the indicated glycoproteins. Probabilities (p values) of a significant alignment are based on 1000 shuffles. SAN: Sandfly fever virus; RVF: Rift Valley fever virus; PTV: Punta Toro virus; UUK: Uukuniemi virus; SIN: Sindbis virus; SFV: Semliki Forest virus; WEE: Western equine encephalitis virus; VEE: Venezuelan equine encephalitis virus; MAY: Mayaro virus; RRV: Ross River virus; BFV: Barmah Forest virus; ONN: O'nyong-nyong virus. 2 p = 0.08. 3 p = 0.07.
Prior X-ray crystallographic studies have demonstrated that SFV E1 is a class II viral fusion protein (β-penetrene) [ 6 ]. Because of its extensive sequence similarity with SFV E1, SIN E1 is assumed to be a class II viral fusion protein [ 31 ]. The sequence similarities between SAN Gc and SIN E1 do not permit alignment by computational methods alone. The significance of the overall sequence similarity can, however, be attributed to three collinear similarity regions between the two glycoproteins detected by the PRSS3 algorithm (Fig. 1A ). Beginning from the amino terminus, the first sequence similarity starts in β-sheet Do in domain Ia of SIN E1 and extends past β-sheet b in domain IIa. The alignment was significant (P < 0.0002) between aa 889–928 of SAN Gc and aa 833–873 of SIN E1 (Fig. 1A ).
The next significant sequence similarity, aa 1022–1101 of SAN Gc and aa 946–1029 of SIN E1 (p < 0.03), includes the SIN E1 sequence from β sheet Fo in domain Ib to β sheet i in domain IIb. The third region of similarity, SAN Gc aa 1181–1287 and SIN E1 aa 1112–1202 (p < 0.003), includes most of SIN E1 domain III. In addition, two domains that are characteristic of class II viral fusion proteins are also apparent in SAN Gc. Prior studies have shown that aa 887–904 of SFV E1 are critically involved in virion:cell fusion, and indicate that this region includes the fusion peptide [ 32 , 33 ]. The fusion peptide of SIN E1 is assumed to be located similarly [ 31 ]. The fusion peptides of the class II viral fusion proteins are located at the end of domain II, and consist predominantly of aromatic aa (usually phenylalanine [F] or tryptophan [W]), hydrophobic aa, and aa with high turn potential (glycine [G] and proline [P]). Cysteine linkages usually stabilize the fusion peptides of class II viral fusion proteins in the overall structure (Fig. 1A , red). A sequence (aa 960–977) corresponding to a consensus class II fusion peptide is present in SAN Gc in a similar location to the SIN E1 fusion peptide (Fig. 1A ). Another common domain of class II viral fusion proteins readily identifiable in SAN Gc is the carboxyl terminal transmembrane anchor. Rossmann and coworkers provided experimental evidence that SIN E1 aa 1215–1241 contains the transmembrane domain of SIN E1 [ 31 ]. A similar hydrophobic sequence is located near the carboxyl terminus of SAN Gc. TMpred, an algorithm that identifies possible transmembrane helices, assigns a significant score of 3048 (>500 is statistically significant) to aa 1303–1322 of SAN Gc, which suggests that this is the transmembrane anchor of SAN Gc. Using the regions of local similarity and the fusion peptide and transmembrane domains, which are collinear, a proposed alignment between SAN Gc and SIN E1 can be constructed (Fig. 1B ). The alignment necessitates only one "insertion." Relative to domain IIb of SIN E1, it appears that SAN Gc has an added sequence (aa 932–958), a proposed "loop" flanked by cysteines and containing two N-linked glycosylation sites (NXT/S) reminiscent of glycosylated loops of other viral envelope proteins [ 34 ]. Figure 1 Colinear arrangement of similarities in Sindbis virus E1 and Sandfly fever virus Gc. Alignments were constructed as detailed in the text. Panel A: Linear arrangement of the domain structure of SIN E1 and proposed domain structure of SAN Gc according to the convention for class II viral fusion proteins (β-penetrenes) originally described for TBEV E by Rey et al. [5]. Regions of significant sequence similarities in SIN E1 and SAN Gc determined by the PRSS3 sequence alignment program are indicated. Probabilities (p values) are based on 1000 shuffles. Panel B: Amino acids are numbered from the beginning of the Sindbis virus subgenomic mRNA encoded polyprotein and the beginning of the SAN M segment encoded polyprotein. (:) refers to identical amino acids. (.) refers to chemically similar amino acids. Plum amino acids: N-glycosylation sites. Hydrophobic transmembrane domains were predicted using TMpred. Sequences with significant WWIHS scores were identified by MPeX (olive). In SAN Gc, predicted α-helices are indicated by dashed boxes and predicted β-sheets are underlined with a dashed arrow.
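The WWIHS/MPeX screening mentioned in the figure legend above, and used extensively in the next section, can be sketched as a sliding-window sum of per-residue water-to-interface transfer free energies. The values below are the Wimley-White interfacial scale figures as commonly tabulated (charged forms for D, E, K and R; neutral histidine) and should be checked against the original publication before any real use; the window length and threshold are illustrative choices, not MPeX defaults.

```python
# Sketch of a WWIHS sliding-window scan in the spirit of MPeX.
# Per-residue free energies in kcal/mol, water -> POPC interface;
# negative totals indicate favourable interfacial partitioning.
WW_INTERFACE = {
    "A": 0.17, "R": 0.81, "N": 0.42, "D": 1.23, "C": -0.24,
    "Q": 0.58, "E": 2.02, "G": 0.01, "H": 0.17, "I": -0.31,
    "L": -0.56, "K": 0.99, "M": -0.23, "F": -1.13, "P": 0.45,
    "S": 0.13, "T": 0.14, "W": -1.85, "Y": -0.94, "V": 0.07,
}

def interfacial_windows(seq, window=19, threshold=0.0):
    """Return (1-based start, window, total dG) for windows whose summed
    interfacial transfer free energy falls below threshold (favourable)."""
    hits = []
    for i in range(len(seq) - window + 1):
        dg = sum(WW_INTERFACE[aa] for aa in seq[i:i + window])
        if dg < threshold:
            hits.append((i + 1, seq[i:i + window], round(dg, 2)))
    return hits

# Toy demonstration on a tryptophan/phenylalanine-rich stretch:
demo = "SSTKDNNLALWLIFGFLAWLGGDSKEE"
for start, win, dg in interfacial_windows(demo, window=12):
    print(start, win, dg)
```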
Membrane interfacial domains in Bunyavirus glycoproteins To provide support for the proposed alignment of SAN Gc and SIN E1, another proteomics computational tool was used to compare potential membrane interactive domains in the glycoproteins. Besides fusion peptides, a motif that can be important in virus:cell fusion and is present in many class I and class II viral fusion proteins is an aromatic aa-rich domain proximal to the transmembrane anchor [ 35 ]. The pre-anchor domains are not highly hydrophobic according to the Kyte-Doolittle hydropathy prediction algorithm, but have a tendency to disrupt and partition into bilayer membranes as revealed by analyses using the Wimley-White interfacial hydrophobicity scale (WWIHS) [ 35 , 36 ]. SIN E1 contains a sequence prior to and overlapping the transmembrane anchor with a significant WWIHS score as determined by Membrane Protein eXplorer (MPeX) [ 25 ]. SAN Gc has two sequences with significant WWIHS scores in this region, the pre-anchor and the putative transmembrane domain. The fusion peptides of all class I and II viral fusion proteins examined to date overlap sequences with significant WWIHS scores (RFG, unpublished observation). The proposed fusion peptide in SAN Gc consists of a sequence with a significant WWIHS score, which further supports the assignment of this sequence as the fusion peptide. Additional sequences with significant WWIHS scores are located collinearly along SAN Gc and SIN E1. In total, six of the seven sequences in SAN Gc with significant WWIHS scores overlap in the proposed alignment with sequences with significant WWIHS in SIN E1. Analysis of membrane interfacial potential in the primary sequences thus provides further support for the proposed alignment of SAN Gc and SIN E1. A model of phlebovirus Gc Recently, Gibbons and coworkers determined the structure of a fragment of the SFV E1 ectodomain (lacking carboxyl terminal aa 392–438) after exposure to low pH and liposomes [ 33 ]. Under these conditions, which mimic an endosomal environment, the SFV E1 ectodomain fragment changes from a soluble monomer to a trimer as it inserts into the liposomal membrane after exposure to low pH. A similar post-fusion structure was found in two other class II fusion proteins, E of DEN and TBEV [ 10 , 11 ]. These investigators proposed several possible fusion intermediates of SFV E1 and other class II viral fusion proteins after exposure to low pH. These intermediates are assumed to be similar to structural intermediates of SIN E1. To determine the plausibility of the proposed SAN Gc and SIN E1 alignment, a model of SAN Gc was scaffolded on a presumed structural intermediate of SIN E1 in which, compared to the orientation in the virion at neutral pH, domain III is displaced closer to the fusion peptide (Fig. 2 ). The collinear sequence alignments between SAN Gc and SIN E1 suggest that both glycoproteins may have a similar domain structure. Similar sequences/structures are drawn in similar locations. In this possible fusion intermediate, the putative SAN Gc fusion peptide is assumed to be located at the end of the molecule furthest from the carboxyl terminal (C-terminal) transmembrane anchor domain. Like SIN E1 and other class II fusion proteins, SAN Gc may be comprised mostly of antiparallel β sheets, an expectation supported by several secondary structure prediction algorithms, including PHDsec [ 23 ], Chou-Fasman [ 21 ] and Robson-Garnier [ 22 ] analyses.
The proposed SAN Gc structure conforms closely to the structural predictions of PHDsec, the most robust algorithm, but is also generally consistent with both Chou-Fasman and Robson-Garnier predictions. In some cases, because of significant aa sequence similarity with SIN E1, ambiguous structures in SAN Gc are depicted as in SIN E1. Additional evidence for the proposed alignment is the location of cysteine residues of SAN Gc and SIN E1. The cysteine residues are usually the most conserved aa within a protein family because disulfide linkages are a critical determinant of secondary structure. The dicysteines of SIN E1 (Fig. 2B ) are arranged such that bonds are formed only between residues within the same putative domain. A similar arrangement is feasible for SAN Gc with C residues present in close proximity after scaffolding on the SIN E1 structure, and with possible linkages occurring within the three proposed domains. This model also locates the four SAN Gc glycosylation sites so they are surface accessible. Figure 2 Model of Sandfly fever virus Gc based on predicted structure of a Sindbis virus E1 fusion intermediate. Panel A: A structural intermediate of SFV E1 as determined by Gibbons et al. [33] was projected to SIN E1. Panel B: A model fitting SAN Gc to the predicted structure of SIN E1. Structures predicted to be similar are color-coded the same way in SIN E1 and SAN Gc. Grey lines: dicysteine linkages. Black stick figures: N-glycosylation sites (sites with central proline are often not used). Regions with significant Wimley-White interfacial hydrophobicity scale scores were predicted with MPeX (black). There are many possible alternatives to the cysteine linkages and secondary structures of SAN Gc drawn in Figure 2 . Nevertheless, a plausible three-dimensional model of SAN Gc that conforms to the scaffold of the known structure of Alphavirus E1 can be constructed. This result, coupled with predictions of a predominantly β sheet secondary structure of SAN Gc, provides further support for its proposed alignment with SIN E1. Sequence/structural features of Bunyavirus Gc suggest that a class II fusion protein structure is conserved in members of the Bunyaviridae To provide additional evidence for the proposed SIN E1/SAN Gc alignment and the SAN Gc class II fusion protein model, we determined whether structural/sequential similarities with class II fusion proteins are conserved in envelope proteins encoded by other members of the Bunyavirus family. With the exception of tospoviruses, which use an ambisense strategy for synthesis of a nonstructural protein, Bunyavirus M segments are negative in polarity and the mRNA transcribed contains a long open reading frame [ 37 ]. The M segment mRNA is translated as a polyprotein, which is post-translationally processed [ 38 ]. There is considerable diversity in the number and sizes of the M segment encoded polyproteins produced in infected cells, but all Bunyaviruses encode at least two glycoproteins, Gn and Gc. Prior analyses have revealed similarities between Gc encoded by members of orthobunyaviruses and tospoviruses [ 18 , 39 ], but evidence that Gn and Gc serve analogous functions in each Bunyavirus genus has not been available previously. Comparisons among Gc of members of the five genera of the Bunyaviridae using the PRSS3 algorithm revealed that the type members of each genus display significant sequence similarities with certain Gc of viruses of other Bunyavirus genera (Table 2 ).
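Before turning to the individual comparisons, the pairwise bookkeeping behind a table of this kind can be outlined in a few lines: every pair of sequences is compared two-way with a shuffle test (1000 shuffles in this study) and non-significant pairs are reported as NS. The sketch below is self-contained and uses a deliberately crude shared-k-mer score as a stand-in for the real Smith-Waterman/blosum50 score; the sequences are short hypothetical placeholders, not the actual Gc and E1 entries.

```python
# Outline of the pairwise p-value matrix construction (Tables 2 and 3 style).
from itertools import combinations
import random

def kmer_score(a, b, k=3):
    """Toy similarity: count of shared k-mers (stand-in for a real
    alignment score such as PRSS3's Smith-Waterman/blosum50 score)."""
    return len({a[i:i+k] for i in range(len(a)-k+1)} &
               {b[i:i+k] for i in range(len(b)-k+1)})

def shuffle_p(a, b, n=1000, seed=0):
    rng = random.Random(seed)
    obs, chars, hits = kmer_score(a, b), list(b), 0
    for _ in range(n):
        rng.shuffle(chars)
        if kmer_score(a, "".join(chars)) >= obs:
            hits += 1
    return (hits + 1) / (n + 1)

sequences = {  # hypothetical placeholders, not the real Gc/E1 sequences
    "BUN_Gc": "MIELIAFWVFAGLTLSSVQCRN",
    "HAN_Gc": "MGWSLIALLVCSFAGLTLSKNQ",
    "SIN_E1": "YEHATTVPNVPQIPYKALVERA",
}

matrix = {}
for (na, a), (nb, b) in combinations(sequences.items(), 2):
    p = shuffle_p(a, b)
    matrix[(na, nb)] = p if p <= 0.05 else "NS"
print(matrix)   # related pair comes out significant, unrelated pairs NS
```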
The most significant alignment detected among members of different genera of the Bunyaviridae was between RVF, type member of phleboviruses, and tomato spotted wilt virus (TSWV), type member of tospoviruses (p < 10⁻¹⁸). As noted previously [ 18 , 39 ], orthobunyavirus Gc also show a significant similarity to tospovirus Gc, with Bunyamwera virus (BUN) Gc displaying a significant similarity to TSWV Gc (p < 10⁻⁸). As with phlebovirus Gc, the prototype of the hantavirus genus, Hantaan virus (HAN), showed a modest sequence alignment (p < 0.05) with SIN E1, further supporting the proposed similarities between Bunyavirus Gc and Alphavirus E1. Significant alignments were not detected between Bunyavirus Gc or Alphavirus E1 and TBEV E or other flavivirus class II viral fusion proteins. Limited local similarities were observed between some Bunyavirus Gc and pestivirus E2. It is noteworthy that the significance of the overall sequence similarities between certain phlebovirus Gc and Alphavirus E1 is higher than some similarities among Gc of some prototypic members of the Bunyaviridae (compare Tables 1 and 2 ). Collectively, these results suggest that Gc of Bunyaviruses share a limited number of similar sequences. Table 2 Similarities among Bunyavirus Gc, Alphavirus E1 and related glycoproteins as determined with the PRSS3 sequence algorithm (p values) 1
BUN Gc (orthobunyavirus): HAN Gc 0.0005; CCHF Gc 0.01; RVF Gc 0.009; TSWV Gc 10⁻⁵; SIN E1 NS; Cer13 Env 2 NS; RiSV pvc2 2 NS; TBEV E NS
HAN Gc (hantavirus): CCHF Gc 0.0001; RVF Gc NS; TSWV Gc NS; SIN E1 0.05; Cer13 Env 2 NS; RiSV pvc2 2 0.003; TBEV E NS
CCHF Gc (nairovirus): RVF Gc 0.05; TSWV Gc 0.0001; SIN E1 NS; Cer13 Env 2 NS; RiSV pvc2 2 NS; TBEV E NS
RVF Gc (phlebovirus): TSWV Gc 10⁻¹⁸; SIN E1 0.04; Cer13 Env 2 10⁻⁸; RiSV pvc2 2 0.001; TBEV E NS
TSWV Gc (tospovirus): SIN E1 NS; Cer13 Env 2 NS; RiSV pvc2 2 NS; TBEV E NS
SIN E1 (Alphavirus): Cer13 Env 2 0.02; RiSV pvc2 2 NS; TBEV E NS
Cer13 Env 2 (CeRV): RiSV pvc2 2 0.02; TBEV E NS
RiSV pvc2 2 (Tenuivirus): TBEV E NS
1 Two-way comparisons were done between the full-length amino acid sequences of the indicated glycoproteins. Probabilities (p values) of a significant alignment are based on 1000 shuffles. BUN: Bunyamwera virus; HAN: Hantavirus; CCFV: Crimean-Congo hemorrhagic fever virus; RVF: Rift valley fever virus; TSWV: Tomato spotted wilt virus; SIN: Sindbis virus; Cer13: Caenorhabditis elegans retrovirus 13; RiSV: rice stripe virus; TBEV: tick-borne encephalitis virus. 2 C-terminal sequence.
Available computational methods alone do not permit overall alignments of Bunyavirus Gc; however, through the use of the PRSS3 and MacMolly alignment tools, and by inspection, certain common sequences can be identified. The most highly conserved sequences among type members of the five Bunyavirus genera conform to a consensus class II fusion peptide (Fig. 3 , red) and to a carboxyl terminal transmembrane domain (Fig. 3 , violet). Certain cysteine clusters, which are likely to stabilize the secondary structures of the proteins, also are present in similar locations along the proteins, including the beginning of putative domain III (Fig. 3 ). Further support for the alignment of Bunyavirus Gc is obtained by the use of the WWIHS. Sequences with significant WWIHS scores are located in similar locations in Bunyavirus Gc (Fig. 3 , olive). The proposed fusion peptides and transmembrane domains of each of the Bunyavirus Gc examined displayed significant WWIHS scores. In addition, most of the Gc examined had sequences with significant WWIHS scores in the putative IIa domains and either the putative Ic domain or the beginning of adjacent domain III.
This alignment suggests that orthobunyavirus and tospovirus Gc have extended regions of approximately 400 and 50 amino acids at the amino terminus relative to phlebovirus, hantavirus and nairovirus Gc. These results also suggest that motifs involved in virion:cell fusion are conserved in Gc throughout the Bunyaviridae, and that Gc is the fusion protein encoded by members of each Bunyavirus genus. Figure 3 Alignment of Gc amino acid sequences of prototype members of the five genera of the Bunyaviridae family. Alignments were constructed by identifying the fusion peptide (red) and the transmembrane anchor (violet) as described in the text. Additional local sequence similarities were identified by using the Complign feature of MacMolly, the PRSS3 alignment algorithm or by inspection. Sequences with significant WWIHS scores (olive) were identified by MPeX. Sequence/structural features of Bunyavirus Gc are present in a Tenuivirus protein and in an Env encoded by a Caenorhabditis elegans retrovirus As previously reported, phlebovirus Gc have a significant similarity to surface protein pvc2 encoded by plant viruses of the Tenuiviridae [ 40 ]. The PRSS3 sequence alignment algorithm confirms this similarity (Table 2 , p < 0.001). An envelope protein (Env) encoded by a group of retroviruses of the nematode C. elegans also demonstrates significant similarities to phlebovirus Gc [ 41 , 42 ]. There is a significant alignment detected by the PRSS3 algorithm between SAN Gc and the carboxyl terminal region of Env of Cer13, a potentially replication-competent member of this group (Table 2 , p < 10⁻⁸). These results further validate the use of the PRSS3 algorithm to identify limited similarities amongst viral proteins. Alignment of the carboxyl terminal portion of the pvc2 protein of rice stripe virus (RiSV), a Tenuivirus, and the envelope protein encoded by Cer13 retrovirus with two phlebovirus Gc reveals a collinear arrangement of fusion peptide consensus sequences (Fig. 4 , red) and potential carboxyl terminal transmembrane domains (Fig. 4 , violet). These proteins also have several overlapping sequences with significant WWIHS scores (Fig. 4 , olive). The retention of these features in proteins encoded by evolutionarily distant genomes provides further evidence that these motifs are important for the function of Bunyavirus fusion proteins. Figure 4 Alignment of phlebovirus Gc amino acid sequences with Tenuivirus surface protein pvc2 and the carboxyl terminal Env protein of a Caenorhabditis elegans retrovirus. Sequences are color-coded as in Figure 3. Protein order in polyproteins encoded by Bunyaviridae M segments The longest open reading frames of M segments of all members of the Bunyaviridae are antisense to the virion RNA. mRNAs transcribed from Bunyavirus M segments are translated into large polyproteins that are subsequently cleaved into functional proteins [ 38 , 43 ]. Gc of members of each of the five Bunyavirus genera are the carboxyl terminal proteins of the polyprotein (Fig. 5 ). Because viral proteins with similar functions may have similar genome locations, we sought evidence for sequence similarities among other Bunyavirus proteins encoded by the M segment. Gn of type members of the five Bunyavirus genera were compared to each other and each had a limited sequence similarity to at least one other Gn of a type member of a different genus (Table 3 ). The most significant alignment was between the Gn proteins of HAN, a hantavirus, and CCHF, a nairovirus (p < 10⁻⁴).
HAN Gn also showed a significant alignment with Gn of RVF, a phlebovirus. Both HAN Gn and Gn of TSWV, a tospovirus, also showed significant alignments with envelope protein 2 (E2) of SIN. SIN E2 has been implicated as the virion protein responsible for binding to the cell surface receptor [ 44 ]. These results suggest that the Gn of Bunyaviruses have limited similarity, and may have a common role or roles in the virus replication cycle. The order (from amino to carboxyl terminus) of proteins in the polyproteins of SIN and other members of the Alphaviridae is Capsid-E3-E2-6K-E1. Receptor and fusion functions may reside in two different Bunyavirus proteins, Gn and Gc respectively, occurring in the same order as the envelope glycoproteins, E2 and E1, that carry out these functions in Alphaviruses (Fig. 5 ). The similarities in protein order and functions support the hypothesis that Gc is the fusion protein of Bunyaviruses. These results also support the suggested nomenclature of Gn and Gc for the Bunyavirus M segment encoded glycoproteins as a replacement for the current ambiguous nomenclature, which variously assigns the designation G1 or G2 to unrelated Bunyavirus glycoproteins. The presumptive protein encoded by the amino terminal region of the pvc2 protein of RiSV also showed limited similarity to SIN E2 (Table 3 ). Thus, the similarities between proteins encoded by a Tenuivirus extend to another glycoprotein of a virus with a class II fusion protein. Figure 5 Common order of proteins in Bunyavirus M segment polyproteins. Related glycoproteins Gn and Gc are in the same order in the polyproteins of prototypic members of the Bunyaviridae. Prior designations of the glycoproteins are indicated in parentheses. Hydrophobic domains were predicted using TMpred. The O-glycosylation-rich (mucin-like) region in CCHF was delineated using NetOGlyc 3.1 as described previously by Sanchez and coworkers [46]. These authors also described the indicated potential cleavages of the CCHF polyprotein. Table 3 Similarities among Bunyavirus Gn, Alphavirus E2 and related glycoproteins as determined with the PRSS3 sequence algorithm (p values) 1
BUN Gn (orthobunyavirus): HAN Gn NS; CCHF Gn NS; RVF Gn NS; TSWV Gn 0.02; SIN E2 NS; Cer13 Env 2 NS; RiSV pvc2 2 NS; TBEV E NS
HAN Gn (hantavirus): CCHF Gn 10⁻⁴; RVF Gn 0.03; TSWV Gn NS; SIN E2 0.05; Cer13 Env 2 NS; RiSV pvc2 2 NS; TBEV E NS
CCHF Gn (nairovirus): RVF Gn NS; TSWV Gn NS; SIN E2 NS; Cer13 Env 2 NS; RiSV pvc2 2 NS; TBEV E NS
RVF Gn (phlebovirus): TSWV Gn NS; SIN E2 NS; Cer13 Env 2 NS; RiSV pvc2 2 NS 3 ; TBEV E NS
TSWV Gn (tospovirus): SIN E2 0.04; Cer13 Env 2 NS; RiSV pvc2 2 NS; TBEV E NS
SIN E2 (Alphavirus): Cer13 Env 2 NS; RiSV pvc2 2 0.04; TBEV E NS
Cer13 Env 2 (CeRV): RiSV pvc2 2 0.025; TBEV E NS
RiSV pvc2 2 (Tenuivirus): TBEV E NS
1 Two-way comparisons were done between the full-length amino acid sequences of the indicated glycoproteins. Probabilities (p values) of a significant alignment are based on 1000 shuffles. BUN: Bunyamwera virus; HAN: Hantavirus; CCFV: Crimean-Congo hemorrhagic fever virus; RVF: Rift valley fever virus; TSWV: Tomato spotted wilt virus; SIN: Sindbis virus; Cer13: Caenorhabditis elegans retrovirus 13; RiSV: rice stripe virus; TBEV: tick-borne encephalitis virus. 2 N-terminal sequence. 3 p = 0.08.
The simplest M polyprotein, encoding only Gn and Gc, is that of hantaviruses (Fig. 5 ). In addition to Gn and Gc, phleboviruses and orthobunyaviruses encode nonstructural proteins (NSm) that have two or three potential transmembrane spanning domains as detected by TMpred (Fig. 5 ).
Nairoviruses, such as Crimean-Congo hemorrhagic fever virus (CCHF), synthesize a similar protein [ 45 , 46 ], and we have designated this protein "NSm." NSm of RVF and BUN, type members of the phleboviruses and orthobunyaviruses, as well as the "NSm" protein of CCHF, have short regions of similarity with each other as revealed by the PRSS3 sequence alignment algorithm, although the overall alignments are not significant. There were also short regions of similarity between the NSm proteins of these three Bunyavirus genera and the prM/M-like proteins of Flaviviruses (not shown). Immature virions of members of the flavivirus genus of the Flaviviridae contain prM, a precursor of the small membrane protein M. prM is cleaved by furin or by a furin-like protease during virus release to produce the mature M protein localized on the surface of the flavivirus virion [ 47 , 48 ]. Flavivirus prM/M contains two potential membrane-spanning domains, and its functions may include shielding of internal cellular membranes from the fusion peptide of E [ 7 , 47 ]. It is possible that the phlebovirus, orthobunyavirus and nairovirus M segment encoded nonstructural proteins, all with multimembrane-spanning potential, serve the same function for Gc. NSm of TSWV, a tospovirus, showed no sequence similarity or structural similarity with any Bunyavirus protein examined. The functions of tospovirus NSm, which is encoded by the only positive polarity gene in any M segment, and the other Bunyavirus NSm proteins remain to be determined. Nairovirus M may encode two additional proteins, a mucin-like protein (MLP), which contains a variable region with a high concentration of potential O-glycosylation sites, and a protein designated here as X, neither of which has obvious homologs encoded by members of the other Bunyavirus genera [ 45 , 46 ]. The coding sequences of Bunyavirus M appear to have evolved in a manner preserving the order of the glycoproteins Gn and Gc, while allowing for insertion or deletion of sequences encoding additional proteins. Discussion Proteomics computational analyses suggest that Bunyavirus Gc proteins are class II viral fusion proteins (β-penetrenes), with a structure similar to the fusion proteins of Alphaviruses and Flaviviruses. Similar sequences or common structural/functional motifs are collinearly located in Bunyavirus Gc and Alphavirus E1. Features common to other class II fusion proteins, including an internal fusion peptide, a carboxyl terminal transmembrane domain and regions with a high propensity to interface with bilayer membranes, are conserved and in similar locations in Gc of viruses in each genus of the Bunyaviridae. These features are also present in glycoproteins encoded by nonenveloped Tenuiviruses of plants, and a group of C. elegans retroviruses previously shown to have remarkable sequence similarities to phlebovirus Gc. These results also indicate that Gallaher's "Rosetta Stone" strategy can be used to identify potential class II viral fusion proteins, as demonstrated previously for class I fusion proteins [ 1 - 3 , 49 ]. The common placement of proper names or "cartouches" allowed the ancient languages of the Rosetta Stone to be deciphered. As advanced by Gallaher, fusion peptides can serve a similar function to facilitate alignment of viral fusion proteins with limited sequence similarities. Many viral fusion proteins fit neither class I nor II and it is likely that other classes of viral fusion protein also exist.
However, among major classes of enveloped RNA viruses, there are at least six (myxoviruses, retroviruses, paramyxoviruses, filoviruses, arenaviruses and coronaviruses) that encode class I viral fusion proteins [ 1 - 4 ]. Alphaviruses, members of the flavivirus genus of the Flaviviridae, and, according to current analyses, Bunyaviruses encode class II viral fusion proteins. Computational analyses suggest that members of the two other genera of the Flaviviridae, hepaciviruses and pestiviruses, encode variant class II fusion proteins [ 12 ]. The viruses encoding class I or class II viral fusion proteins thus represent a substantial portion of enveloped RNA virus families known to infect vertebrates. It is significant that representative class I- and class II-encoding viruses are also found in evolutionarily distinct plant viruses and in viruses or virus-like genomic elements of nematodes and insects [ 41 , 42 , 50 , 51 ]. There may be constraints on the structures of viral proteins capable of effectively mediating virion:cell fusion, or a limited number of enveloped RNA virus lineages. Alphaviruses appear to use separate envelope proteins for fusion (E1) and attachment (E2) [ 44 ]. Because Bunyavirus Gc proteins display similarities to Alphavirus E1 and certain Bunyavirus Gn proteins display limited sequence similarities to Alphavirus E2, Bunyaviruses may have adopted a similar strategy. Verification that Gc is the fusion protein of Bunyaviruses will require a combination of X-ray crystallographic structural studies and site-directed mutagenesis of key features such as the putative fusion peptide. Verification that Gn serves as the receptor-binding protein for any Bunyavirus requires identification of its cell surface receptor. E, the class II fusion protein of TBEV, dengue virus, and other members of the flavivirus genus of the Flaviviridae, mediates both virion:cell fusion and receptor binding [ 52 , 53 ]. Therefore, it is possible that Bunyavirus Gc serves both as the fusion protein and as the receptor-binding protein. The remarkable similarities in both the pre- and post-fusion forms of the fusion proteins of SFV E1, an Alphavirus, and of DEN and TBEV, members of the flavivirus genus of the Flaviviridae, in the absence of detectable sequence similarities, suggest that Alphavirus and Flavivirus class II fusion proteins may have diverged from a common progenitor. Alternatively, there may have been convergent evolution towards the common structure. Likewise, the sequence similarities detected between phlebovirus Gc and SIN E1 are consistent with divergent evolution from a common progenitor, but are insufficient to directly establish a phylogenetic relationship. The results presented here suggest that Gc of members of the Bunyaviridae may have a common ancestor. Gn and Gc are in analogous locations in the polyproteins encoded by the five genera of the Bunyaviridae. The simplest Bunyavirus M polyprotein, that of hantavirus members, encodes only Gn and Gc, whereas the M segments of members of other Bunyavirus genera encode several additional proteins. Therefore, divergence of Bunyavirus M segments may have occurred through acquisition and/or loss of sequences in a cassette-like manner, constrained in part by the locations of the major glycoproteins. Comparisons of divergent viral fusion proteins with internal fusion peptides can reveal features essential for virion:cell fusion.
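The membrane interfacial propensity analyses referred to throughout this discussion (WWIHS scores) use the same windowed-scan idea, but with the Wimley-White interfacial hydrophobicity scale, which measures how favorably each residue partitions into a bilayer interface. The sketch below shows only the scanning logic; the values in WW_INTERFACIAL are dummy placeholders and must be replaced with the published Wimley-White (1996) scale before any biological use. The demo sequence is also invented.

```python
# DUMMY placeholder values: substitute the experimentally determined
# Wimley-White interfacial scale (Wimley & White, 1996) before drawing
# any biological conclusion from output of this sketch.
WW_INTERFACIAL = {aa: 0.0 for aa in "ACDEFGHIKLMNPQRSTVWY"}
WW_INTERFACIAL.update({"W": 1.9, "F": 1.1, "Y": 0.9, "L": 0.6})  # illustrative only

def wwihs_profile(seq: str, window: int = 12):
    """Windowed sum of interfacial propensities; high-scoring stretches
    are candidate membrane-interactive regions (fusion peptide, stem,
    pre-anchor) of a class II fusion protein."""
    return [sum(WW_INTERFACIAL[res] for res in seq[i:i + window])
            for i in range(len(seq) - window + 1)]

def high_scoring_regions(seq: str, window: int = 12, threshold: float = 3.0):
    profile = wwihs_profile(seq, window)
    return [(i, i + window - 1, score)
            for i, score in enumerate(profile) if score >= threshold]

# Invented demo sequence enriched in aromatics in the middle
demo = "GASTDEKR" + "WFYLWFYLWFYL" + "GASTDEKR"
for start, end, score in high_scoring_regions(demo):
    print(f"residues {start}-{end}: WWIHS-like score {score:.1f}")
```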
Regions of high membrane interfacial propensity, including the fusion peptide and the transmembrane anchor, appear in similar locations in Bunyaviruses, Alphaviruses and Flaviviruses. The presence of several additional sequences with the propensity to interact with bilayer membranes in class II viral fusion proteins has not been considered in previous virion:cell fusion models [ 9 , 11 , 54 ]. Cell entry of Alphaviruses and Flaviviruses is believed to occur via the endocytic route, and it is likely that this is also the entry route of Bunyaviruses [ 55 ]. Following binding to the cellular receptor, a putative function of Bunyavirus Gn (Fig. 6A ), the Bunyavirus virion may be taken up in an endocytic vesicle (Fig. 6B ). Exposure to acidic pH in the endosome may trigger conformational changes in the envelope proteins and in the virion itself, resulting in dissociation of Gn and Gc (Fig. 6C ). Current models of fusion mediated by Alphavirus and Flavivirus class II viral fusion proteins suggest that the low pH of the endosome triggers trimerization and a bending of class II fusion proteins at a flexible "hinge" region between domains I and II, elevating the fusion peptide so that it can insert into the host membrane [ 6 , 9 , 11 , 54 ]. Current models then suggest that a rearrangement of the stem (pre-anchor region), so that there are more extensive interactions with domains I-III, results in a deformation of the viral and target membranes and the formation of apposing membrane "nipples" (Fig. 6E ). Subsequently, the nipples are brought closer together by continued interactions of the stem with domains I-III, which results in bilayer hemifusion (Fig. 6F ). Complete fusion follows, allowing entry of the ribonucleoproteins containing the viral genomic RNA (Fig. 6G ). An analogous mechanism, involving deformation and nipple formation of the viral and cellular membranes caused by rearrangements of the viral fusion proteins (six-helix bundle formation), has been proposed for class I viral fusion proteins [ 54 ]. Figure 6 Hypothetical model of Bunyavirus:cell fusion. Steps in the entry process of Bunyaviruses can be extrapolated from current models of class II viral fusion protein-mediated virion:cell fusion. Panel A: The Bunyavirus glycoproteins Gn and Gc are modeled according to SIN virion structure analyses by Zhang et al. [31]. Based on limited similarities with Alphavirus E2 proteins (Table 3), Gn is depicted as the receptor-binding protein of Bunyaviruses. Certain Bunyaviruses may encode other membrane-associated proteins that interact with the fusion peptide or other regions of Gc. Panel B: Receptor binding triggers uptake of the Bunyavirus virion by endocytosis. Panel C: Acidification of the endocytic vesicle occurs via the action of proton transporters and may initiate Gn and Gc dissociation. Panel D: Bending at the flexible "hinge" region between domains I and II permits Gc trimer formation and insertion of the fusion peptide into the endosomal vesicle membrane. Panel D': Alternatively, Gc trimer formation may involve the rotation of domain III and a rearrangement (twist) of domain II, as shown for SFV E1, DEN E and TBEV E [11,33,54]. Panel E: As previously proposed [11,33,54], the formation of more extensive Gc contacts in the trimers and stem regions may release energy for distortion of the endosomal and viral membranes, resulting in formation of "nipple-like" projections. Panel E': Alternatively, aa sequences of Gc that form a track with the ability to interface with bilayer membranes (Fig.
2, black) may facilitate mixing of the endosomal and viral membranes. Panel F: Formation of further trimer contacts and hemifusion. Hemifusion may not occur in the D' and E' pathway. Panel G: Formation of the "fusion pore" and entry of the ribonucleoprotein (RNP) segments. Modified from models and concepts proposed in references 9-12. Current fusion models do not consider that the transmembrane domain and fusion peptide, while anchored into the viral and cellular membranes, would still be free to move laterally without distorting the membranes. More importantly, the virion is quite small compared to the cell, and would be freely mobile. Rearrangement of the fusion proteins may simply draw the virus closer to the cell without distorting either the viral or cellular membranes. An alternative to the models involving apposing membrane nipple formation is suggested by the observation that sequences of class II viral fusion proteins, including the fusion peptide, the transmembrane anchor and other sequences with high WWIHS scores, potentially form a nearly continuous track of membrane-interactive regions that could channel the movement of lipids during virion:cell fusion (Fig. 2 , black). Similar nearly contiguous sequences with significant WWIHS scores are present in the post-fusion intermediates of the Alphaviruses SIN and SFV, of DEN and other flaviviruses, and in the proposed structures of hepaciviruses and pestiviruses [ 12 ]. An intermediate, with the track of sequences with high membrane interfacial propensity, may be the first intermediate formed after exposure to low pH in liposomes (Fig. 6D' ). Upon formation of higher multimers of trimers, the regions with high WWIHS scores, in conjunction with the fusion peptide and transmembrane domain, could then form a "pore" in which the lipids of the cellular and viral bilayer membranes could mix directly (Fig. 6E' ). With lipid mixing facilitated by these membrane interfacial sequences, bilayer fusion may proceed without a hemifusion step, while still permitting entry of the genome-containing RNP (Fig. 6G ). In the absence of structural determinations by X-ray crystallography, models such as the one proposed here can provide useful hypotheses to guide experimental strategies for development of vaccines or drugs to prevent or treat infection by viruses with class II fusion proteins. Prior to the availability of X-ray structural data, several potent HIV-1 TM inhibitors were developed [ 56 , 57 ] based on the Gallaher HIV-1 TM fusion protein model [ 2 ]. Fuzeon™ (DP178; T20 enfuvirtide), one of these peptides, corresponding to a portion of one of the α helices and the pre-anchor domain, has been shown to substantially reduce HIV-1 load in AIDS patients, and has been approved for use in the treatment of HIV infection in the United States and European Union [ 58 , 59 ]. Peptides targeted to membrane-interactive motifs block virion:cell fusion mediated by DEN and West Nile virus, flaviviruses with class II fusion proteins (Hrobowski et al., submitted). Peptide inhibition strategies targeted to Gc may be broadly applicable to various members of the Bunyaviridae. Competing Interests The authors declare that they have no competing interests. Authors' Contributions CEG performed the sequence alignments and assisted in the preparation of figures. RFG supervised the work and wrote the manuscript. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC535339.xml |
526392 | Elevated levels of matrix metalloproteinase-3 in patients with coronary aneurysm: A case control study | Background Matrix metalloproteinases (MMPs) have been implicated in the pathogenesis of arterial aneurysms through increased proteolysis of extracellular matrix proteins. Increased proteolysis due to elevated matrix-degrading enzyme activity in the arterial wall may act as a susceptibility factor for the development of coronary aneurysms. The aim of this study was to investigate the association between MMPs and the presence of coronary aneurysms. Methods Thirty patients with aneurysmal coronary artery disease and stable angina were enrolled into the study (Group 1). Fourteen coronary artery disease patients with stable angina were selected as the control group (Group 2). MMP-1, MMP-3 and C-reactive protein (CRP) were measured in peripheral venous blood and compared between the groups. Results Serum MMP-3 level was higher in patients with aneurysmal coronary artery disease compared to the control group (20.23 ± 14.68 vs 11.45 ± 6.55 ng/ml, p = 0.039). Serum MMP-1 (13.63 ± 7.73 vs 12.15 ± 6.27 ng/ml, p = 0.52) and CRP levels (4.78 ± 1.47 vs 4.05 ± 1.53 mg/l, p = 0.13) were not significantly different between the groups. Conclusion MMPs can cause arterial wall destruction. MMP-3 may play a role in the pathogenesis of coronary aneurysm development through increased proteolysis of extracellular matrix proteins. | Introduction Coronary artery aneurysms are defined as dilated coronary artery segments that are greater than 1.5 times the diameter of adjacent normal segments [ 1 , 2 ]. The gold standard for diagnosing this type of aneurysm is coronary angiography, which provides information about the size, shape, location and number of aneurysms. Coronary aneurysms may occur during the development of coronary atherosclerosis. Previous studies have shown that coronary aneurysms are observed in 1% to 5% of patients with angiographic evidence of coronary artery disease [ 3 - 6 ]. In some studies, coronary aneurysms have been associated with an increased risk of myocardial infarction [ 3 , 4 ]. Although the mechanisms responsible for coronary aneurysm formation during the atherosclerotic process are unclear, atherosclerosis-induced aneurysms derive primarily from thinning and/or destruction of the media [ 6 - 8 ]. Possible factors contributing to aneurysms are matrix-degrading enzymes such as collagenases, gelatinases, and stromelysins [ 9 , 10 ]. More specifically, matrix metalloproteinases (MMPs) are enzymes that can degrade the structural proteins of connective tissue. Degradation of extracellular matrix proteins may weaken the connective tissue, thereby leading to a weakened vascular wall. We investigated the association between MMPs and coronary artery aneurysm by measuring the levels of MMP-1 and MMP-3 (both of which represent markers of proteolytic activity) in patients with coronary artery disease, some of whom had coronary aneurysms (cases) and others who did not (controls). Methods Patient population We reviewed the medical records of patients who had undergone coronary angiography between January, 2002 and April, 2003. Among 4,456 cases reviewed, 55 patients (1.23%) diagnosed with aneurysmal coronary artery disease were selected. Sixteen patients with acute coronary syndromes and nine patients with a history of balloon angioplasty were excluded from the study. The remaining 30 patients with aneurysmal coronary artery disease were enrolled into the study.
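The definition quoted in the Introduction, a segment greater than 1.5 times the diameter of the adjacent normal segment, reduces to a simple ratio check. The helper below is an invented illustration of that rule, using values close to the study's reported mean diameters.

```python
def is_aneurysmal(segment_diameter_mm: float, reference_diameter_mm: float,
                  threshold: float = 1.5) -> bool:
    """Coronary aneurysm definition used in the study: dilated segment
    diameter > 1.5 x the adjacent normal (reference) segment diameter."""
    if reference_diameter_mm <= 0:
        raise ValueError("reference diameter must be positive")
    return segment_diameter_mm / reference_diameter_mm > threshold

# Values close to the study's means (4.78 mm aneurysm vs 2.95 mm reference)
print(is_aneurysmal(4.78, 2.95))  # True, ratio ~1.6
print(is_aneurysmal(3.50, 2.95))  # False, ratio ~1.2
```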
Transverse diameters of the aneurysm and the reference vessel were measured using post-processing software (Shimadzu Corporation, DIGITEX ALPHA Plus System, Kyoto, Japan, 2001). The ratio between the dilated coronary artery segment and the reference vessel diameter was calculated. The control patients (n = 14) had coronary artery disease, but were free of aneurysmal coronary dilatation. Both groups had positive exercise stress tests and had been diagnosed with stable angina. Blood biochemistry and echocardiography were performed in all patients. No patient had a history of coronary atherectomy or balloon angioplasty. All participants gave informed consent. Autoimmune disease, inflammatory arteritis, chronic or acute infectious disease, use of steroid or anti-inflammatory drugs within the last three months, renal failure and cancer were accepted as exclusion criteria. Laboratory assays Specimen collection Fasting blood samples (8–10 hours fast) were obtained from the antecubital vein at approximately 9:00 a.m. These were centrifuged for 10 min at 3,000 × g at a temperature of about 4°C. Serum was stored at -70°C. Blood samples were analyzed at the Ege University Department of Microbiology, Section of Serology. Assay protocol for MMP-1 and MMP-3 MMP levels were determined using enzyme-linked immunosorbent assay (ELISA) kits, according to the manufacturer's instructions (MMP-1, Biotrak Amersham Pharmacia Biotech, United Kingdom; RPN 2610; MMP-3, Biotrak Amersham Pharmacia Biotech, United Kingdom; RPN 2613). The ELISA kits measured total MMP-1 (pro-MMP-1, free MMP-1, and MMP-1/tissue inhibitor of metalloproteinase (TIMP)-1 complex) and total MMP-3 (pro-MMP-3, free MMP-3, and MMP-3/TIMP-1 and MMP-3/TIMP-2 complexes) at >89% cross-reactivity. Samples were incubated in microtitre wells pre-coated with anti-MMP-1 (lyophilized rabbit anti-MMP-1) and anti-MMP-3 (peroxidase-labelled Fab antibody to MMP-3) antibodies. The assays use the pro form of a detection enzyme that can be activated (by captured active MMP) into an active detection enzyme. MMP-1 and MMP-3 can be measured in the ranges of 6.25–100 ng/ml and 3.75–120 ng/ml, respectively. The readings obtained from the optical readers at 450 nm were converted into ng/ml values from a standard curve. All samples were run in duplicate and were averaged. Within-assay precision values for duplicate determinations were 5.5%, 7.9% and 7.3% at MMP-1 concentrations of 16.89 ± 0.94 ng/ml, 35.53 ± 2.82 ng/ml and 54.08 ± 4.0 ng/ml, respectively. Between-assay precisions for repeated measurements of the same sample were 11.6%, 12.0% and 13.2% at MMP-1 concentrations of 23.19 ± 2.68 ng/ml, 55.27 ± 6.65 ng/ml and 98.04 ± 12.93 ng/ml, respectively. The within-assay precisions for duplicate determinations were 4.8%, 2.4% and 2.1% at MMP-3 concentrations of 13.7 ± 0.66 ng/ml, 33.7 ± 0.83 ng/ml and 83.2 ± 1.76 ng/ml, respectively. Between-assay precisions for repeated measurement of the same sample were 13.3%, 11.7% and 8.8% at MMP-3 concentrations of 11.2 ± 1.49 ng/ml, 27.6 ± 3.24 ng/ml and 75.4 ± 6.63 ng/ml, respectively. Determination of C-reactive protein levels Serum samples were obtained by centrifugation of vacutainer clotted tubes at 3,000 rpm for 10 minutes. High-sensitivity C-reactive protein (hs-CRP) samples were stored at -30°C and analyzed by latex particle-enhanced immunoturbidimetric assay. The total median inter-assay and intra-assay coefficients of variation for the assays were <6% for CRP. All results were recorded in the patients' files. Statistical analyses All values are reported as mean ± SD.
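Converting optical densities at 450 nm into ng/ml values "from a standard curve", as described above, is commonly done by fitting a four-parameter logistic (4PL) curve to the calibration standards and inverting it for the unknowns. The sketch below shows one way to do this with SciPy; the standard concentrations and absorbances are invented for illustration and are not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    """Four-parameter logistic: absorbance as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (x / ec50) ** hill)

# Hypothetical calibration standards (ng/ml) and their OD450 readings
std_conc = np.array([3.75, 7.5, 15.0, 30.0, 60.0, 120.0])
std_od = np.array([0.12, 0.21, 0.38, 0.70, 1.15, 1.60])

params, _ = curve_fit(four_pl, std_conc, std_od, p0=[0.05, 2.0, 30.0, -1.0])
bottom, top, ec50, hill = params

def od_to_conc(od: float) -> float:
    """Invert the fitted 4PL curve to recover concentration from OD."""
    return ec50 * (((top - bottom) / (od - bottom)) - 1.0) ** (1.0 / hill)

print(f"sample OD 0.55 -> {od_to_conc(0.55):.1f} ng/ml")
```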
The chi-square test was used for the comparison of categorical variables, while Student's unpaired t-test or the Mann-Whitney rank sum test was used, where appropriate, in the univariate analysis. Statistical analyses were performed with SPSS statistical software. A value of p < 0.05 was considered statistically significant. Results There were no significant differences in baseline characteristics between cases and controls. High-density lipoprotein, low-density lipoprotein, total cholesterol and triglyceride levels were not statistically different between the groups. Clinical characteristics of and medication use by the groups are shown in Table 1.

Table 1 Clinical characteristics and medication use of study participants

                                     Group 1 (n = 30)   Group 2 (n = 14)   p
Mean age (yrs)                       55.2 ± 10.0        51.8 ± 7.7         NS
Male sex % (n)                       70% (21)           64% (9)            NS
Diabetes mellitus % (n)              13% (4)            14% (2)            NS
Hypertension % (n)                   30% (9)            21% (3)            NS
Smoking % (n)                        60% (18)           50% (7)            NS
TC (mg/dl)                           196.8 ± 31.7       195.1 ± 38.2       NS
TG (mg/dl)                           148.7 ± 71.2       151.7 ± 64.0       NS
HDL-C (mg/dl)                        46.7 ± 11.8        50.7 ± 13.0        NS
LDL-C (mg/dl)                        125.1 ± 28.2       116.7 ± 34.2       NS
hs-CRP (mg/L)                        4.78 ± 1.47        4.05 ± 1.53        NS
MMP-1 (ng/ml)                        13.63 ± 7.73       12.15 ± 6.27       NS
MMP-3 (ng/ml)                        20.23 ± 14.68      11.45 ± 6.55       0.039
Baseline therapy
  Aspirin                            73% (22)           64% (9)            NS
  Nitrate                            57% (17)           64% (9)            NS
  Statin                             17% (5)            21% (3)            NS
Number of stenotic vessels
  One-vessel disease                 37% (11)           36% (5)            NS
  Two-vessel disease                 50% (15)           43% (6)            NS
  Three-vessel disease               13% (4)            21% (3)            NS
Reference vessel diameter (mm)       2.95 ± 0.48        -                  -
Aneurysm vessel diameter (mm)        4.78 ± 0.93        -                  -
Aneurysm/reference vessel ratio      1.6 ± 0.1          -                  -
Aneurysm segment
  Right coronary artery              53% (16)           -                  -
  Left anterior descending artery    27% (8)            -                  -
  Left circumflex artery             30% (9)            -                  -

Group 1: patients with coronary aneurysm; Group 2: patients without coronary aneurysm; TC: total cholesterol; TG: triglyceride; HDL-C: high-density lipoprotein cholesterol; LDL-C: low-density lipoprotein cholesterol; hs-CRP: high-sensitivity C-reactive protein; MMP-1: matrix metalloproteinase-1; MMP-3: matrix metalloproteinase-3; NS: non-significant.

Mean serum MMP-1 (13.63 ± 7.73 vs 12.15 ± 6.27 ng/ml, p = 0.52) and CRP levels (4.78 ± 1.47 vs 4.05 ± 1.53 mg/l, p = 0.13) were not significantly different between cases and controls. Mean serum MMP-3 values were significantly higher in the cases than in the controls (20.23 ± 14.68 and 11.45 ± 6.55 ng/ml respectively, p = 0.039). MMP-1, MMP-3 and hs-CRP levels are shown in Figure 1. Figure 1 Serum MMP-1, MMP-3 and hs-CRP levels in patients with and without coronary aneurysm. Discussion Essential factors contributing to the formation of coronary aneurysms include vessel media degradation and ulceration due to increased proteolytic activity. Connective tissue integrity, another factor contributing to aneurysm development, depends on the balance between degradation and repair of the extracellular matrix. Activation or inhibition of degrading enzymes affects extracellular matrix modeling [ 9 , 10 ], which, in turn, affects connective tissue and vascular wall integrity. Matrix-degrading enzyme activity is a tightly controlled process that involves transcription, activation of latent pro-enzymes and inhibition of proteolytic activity [ 11 - 13 ]. A key step in the regulation of MMPs may occur at the level of transcription [ 14 ]. The mechanism by which gene transcription is mediated is thought to involve a prostaglandin E2 (PGE2)-cAMP-dependent pathway. G-proteins have been implicated in this pathway [ 15 ].
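The between-group comparisons reported in the Results can be approximately reproduced from the published summary statistics alone. The snippet below uses SciPy's ttest_ind_from_stats; note that the published p values may derive from the Mann-Whitney test where the data were non-normal, so this is only a rough check.

```python
from scipy.stats import ttest_ind_from_stats

# Summary statistics reported in Table 1: (mean, SD, n) per group
groups = {
    "MMP-3 (ng/ml)": ((20.23, 14.68, 30), (11.45, 6.55, 14)),
    "MMP-1 (ng/ml)": ((13.63, 7.73, 30), (12.15, 6.27, 14)),
    "hs-CRP (mg/L)": ((4.78, 1.47, 30), (4.05, 1.53, 14)),
}

for name, ((m1, s1, n1), (m2, s2, n2)) in groups.items():
    # Welch's t-test (no equal-variance assumption)
    t, p = ttest_ind_from_stats(m1, s1, n1, m2, s2, n2, equal_var=False)
    print(f"{name}: t = {t:.2f}, p = {p:.3f}")
```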
Transcription activity can be stimulated by a variety of inflammatory cytokines, hormones, and growth factors [ 16 - 19 ]. Several factors are also known to inhibit MMP gene expression, including indomethacin, corticosteroids, and interleukin-4 [ 17 , 20 , 21 ]. MMP activity is also regulated by tissue-specific inhibitors. There are four known tissue inhibitors of metalloproteinases (TIMP-1, -2, -3 and -4). The TIMPs are secreted by a variety of cell lines, including smooth muscle cells and macrophages. Their activity is increased by growth factors and either increased or decreased by different interleukins [ 22 ]. Increased levels of MMP-2, MMP-3, MMP-9 and MMP-12 have been identified in aneurysm vessel walls [ 23 - 27 ]. Gene disruption of MMP-9 suppresses the development of experimental abdominal aortic aneurysms [ 28 ]. Conversely, decreased levels of TIMPs have been found in the aneurysm wall [ 26 ]. Allaire et al. [ 29 ] reported that local expression of TIMP-1 may prevent aortic aneurysm degeneration and rupture in a rat model. Carrell et al. [ 30 ] examined differences in MMPs between patients with aortic aneurysm and patients with aortic atherosclerosis but without aneurysm. Among a wide range of MMPs tested, only MMP-3 was over-expressed in the aortic aneurysm samples. Reduced aneurysm formation has been observed in mice with MMP-3 gene inactivation [ 31 ]. Finally, the recent observation that high circulating levels of MMP-3 are associated with coronary lesions in Kawasaki disease [ 32 ] also supports an important role for MMP-3 in the pathogenesis of coronary aneurysms. These data suggest that proteolytic balance in the vascular wall plays a key role in aneurysm development. MMP-1 (interstitial collagenase) and MMP-3 (stromelysin-1) are members of a family of proteinases that degrade one or more components of the extracellular matrix. In our study, it appears that elevated MMP-3 activity may represent a risk factor for coronary aneurysm formation. This finding is concordant with previously published studies. The mechanisms underlying this association are unclear, although genetic variation affecting MMP-3 expression may be responsible. Lamblin et al. [ 33 ] have reported findings consistent with this possibility, namely, that the MMP-3 5A allele is associated with the occurrence of coronary aneurysm. Others have reported that MMP-3 is expressed in atherosclerotic plaque cells, but not by cells in normal arteries [ 34 - 37 ]. In addition, extensive inflammation and destruction of musculo-elastic vessel wall elements have been observed in dilated human coronary arteries [ 38 , 39 ]. Schoenhagen et al. [ 40 ] suggest that the degradation of extracellular matrix by MMP-3 may contribute to the expansion of the coronary vessel wall. This effect is characteristic of positive remodeling. Based on these and our own observations, we maintain that MMP-3 over-expression may occur in aneurysm segments. Histopathologic studies would be needed to clarify whether or not this is the case. MMP levels are elevated in patients with acute myocardial infarction, unstable angina and after coronary angioplasty [ 35 , 41 , 42 ]. All patients in our study had been diagnosed with stable angina before being enrolled into the study. CRP reflects systemic inflammatory activity. In this study, we did not observe increased CRP levels in those patients with coronary aneurysms. One explanation for similar CRP expression between cases and controls might be that all study subjects had been diagnosed with stable angina pectoris.
Varying degrees of inflammation are reported among individuals with abdominal aortic aneurysms. This variation may relate to possible confounding by clinical manifestations (asymptomatic or symptomatic) and aneurysm progression rates (cm/year). Other investigators have failed to observe increased CRP levels among asymptomatic patients with abdominal aortic aneurysm [ 43 ]. Because elevated MMP-3 levels likely contribute to the development of coronary aneurysms, this matrix-degrading enzyme may represent an important therapeutic target. Luan et al. [ 44 ] reported that a number of statins inhibit MMP-3 activity in rabbits. COX-2 inhibitors may also suppress MMP expression. Production of MMPs by macrophages occurs through a PGE2/cAMP-dependent pathway [ 45 ]. Theoretically, COX-2 inhibitors could attenuate this pathway. Another approach to MMP inhibition has been demonstrated in animal models of adenovirus-mediated TIMP gene transfer [ 46 ]. In reporting our findings, we acknowledge that measurement of TIMP levels in cases and controls would have provided useful information about the possibility of proteolytic imbalance. Similarly, measurement of locally produced inflammatory cytokines, hormones and growth factors would have been informative, since these regulate matrix-degrading enzyme expression [ 16 - 19 ]. This could provide relevant information, as systemic inflammatory activity may not reflect local inflammatory infiltration in aneurysm segments. Finally, the study would have benefited from a larger sample size as well as genotype determination. We conclude that MMP-3 overexpression due to a proteolytic imbalance may lead to coronary aneurysm development through degradation of matrix components, especially the lamina elastica. New medical therapeutic options targeted specifically against MMP-3 may prove useful in the prevention of aneurysm formation. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC526392.xml |
526386 | The J-shape association of ethanol intake with total homocysteine concentrations: the ATTICA study | Background Epidemiological studies suggest a non-monotonic effect of alcohol consumption on cardiovascular risk, while there is strong evidence concerning the involvement of homocysteine levels in thrombosis. The aim of this work was to evaluate the association between usual ethanol consumption and homocysteine levels in cardiovascular disease-free adults. Methods From May 2001 to December 2002 we randomly enrolled 1514 adult men and 1528 women, without any evidence of cardiovascular disease, stratified by age and gender (census 2001), from the greater area of Athens, Greece. Among the variables ascertained, we measured daily ethanol consumption and plasma homocysteine concentrations. Results Data analysis revealed a J-shape association between ethanol intake (none, <12 gr, 12 – 24 gr, 25 – 48 gr, >48 gr per day) and total homocysteine levels (mean ± standard deviation) among males (13 ± 3 vs. 11 ± 3 vs. 14 ± 4 vs. 18 ± 5 vs. 19 ± 3 μmol/L, respectively, p < 0.01) and females (10 ± 4 vs. 9 ± 3 vs. 11 ± 3 vs. 15 ± 4 vs. 17 ± 3 μmol/L, respectively, p < 0.01), after controlling for several potential confounders. The lowest homocysteine concentrations were observed with ethanol intake of < 12 gr/day (Bonferroni α* < 0.05). No differences were observed when we stratified our analysis by type of alcoholic beverage consumed. Conclusion We observed a J-shape relationship between homocysteine concentrations and the amount of ethanol usually consumed. | Introduction Alcoholic beverages are widely consumed throughout the world and it has long been known that heavy alcohol consumption is hazardous to various body organs. In several countries alcohol is considered one of the leading causes of preventable death, after smoking [ 1 ]. However, there is now also substantial evidence that the intake of light to moderate amounts of ethanol is associated with reduced morbidity and mortality from several cardiovascular conditions, particularly coronary heart disease (CHD) [ 2 ]. The interpretation of these beneficial effects has been extensively discussed, and it has been suggested that the effects on cardiovascular disorders might not be due to ethanol per se but to other confounding factors [ 3 ]. Low to moderate ethanol consumption has been associated with reduced mortality, primarily due to a reduction in coronary heart disease (CHD). Conversely, heavy drinking increases mortality, mainly due to haemorrhagic stroke and non-cardiovascular diseases [ 4 , 5 ]. Some investigators consider increased homocysteine levels an independent risk factor for cardiovascular disease, and its involvement in mechanisms of thrombosis has been well documented [ 6 , 7 ]. Moreover, other studies suggest that an elevated plasma total homocysteine concentration increases the risk associated with some of the conventional cardiovascular risk factors [ 8 , 9 ]. However, there are findings that do not confirm or recognize the importance of homocysteine in actually causing coronary artery disease, while recent studies have considered homocysteine more a result than a cause of arteriosclerosis, especially because of the confounding effect of various nutrients and other lifestyle-related factors, including alcohol drinking [ 10 - 13 ]. We therefore studied the relation between the amount of ethanol consumed and homocysteine levels in 3042 adults enrolled in the ATTICA Study.
Subjects and Methods Study population The "ATTICA" study [ 14 ] is a health and nutrition survey carried out in the province of Attica (including 78% urban and 22% rural areas), of which Athens is the metropolis. The sampling was random and multistage, and was based on the age and sex distribution of the province of Attica provided by the National Statistical Service (census of 2001). All people living in institutions were excluded from the sampling, and we enrolled only one participant per household. From May 2001 to December 2002, 4056 inhabitants from the above area, who had no clinical symptoms or signs of cardiovascular or any other atherosclerotic disease (as assessed by physical examination and reported medical history), nor evidence of chronic viral infections, were randomly selected to enter the study. None of the participants was under current or chronic use of drugs that influence homocysteine levels, such as methotrexate, trimethoprim, cholestyramine and cyclosporine. Moreover, subjects did not have cold or flu, acute respiratory infection, dental problems or any type of surgery in the preceding week. Of the 4056 inhabitants, 1518 men (46 ± 13 years old) and 1524 women (45 ± 13 years old) agreed to participate (75% participation rate). Participants were interviewed by trained personnel (cardiologists, general practitioners, dieticians and nurses) who used a standard questionnaire. The selected sample was population-based and reflected the underlying population with respect to sex, age and residence. The number of participants was determined by power analysis and chosen to detect standardized differences greater than 0.5 in homocysteine levels between ethanol groups, with statistical power > 0.80 at a < 0.05 probability level (p-value). Measurements The questionnaire included demographic characteristics (age, sex, mean annual income and years of school), detailed medical history and lifestyle habits, such as food items consumed, smoking habits and physical activity status. Dietary intake during the year before enrolment was assessed through a semi-quantitative food frequency questionnaire provided by the EPIC-Greece Study. The questionnaire was administered in person by specially trained dieticians and has been validated [ 15 ]. Daily ethanol intake was assessed with a 7-day food record. All alcoholic beverages consumed, i.e. wine, beer, whisky, traditional alcoholic drinks like "retsina" or "tsipouro", and other spirits were recorded, and daily ethanol intake (in grams) was calculated. For the presentation of our findings we categorized ethanol intake into five groups: (a) no ethanol intake, (b) low (< 12 gr), (c) moderate (12 – 24 gr), (d) high (25 – 48 gr) and (e) very high (>48 gr). Moreover, the frequency of consumption of several food groups was quantified approximately in terms of the number of times per month each food was consumed. Regarding the rest of the investigated parameters, the educational level of the participants (as an index of social status) was measured in years of school. Information about smoking habits was collected using a standardized questionnaire developed for the Study. Current smokers were defined as those who smoked at least one cigarette per day. Former smokers were defined as those who had stopped smoking more than one year previously. The rest of the participants were defined as non-smokers.
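The sample-size rationale quoted above (detecting a standardized difference of 0.5 with power > 0.80 at a 5% significance level) can be verified with an off-the-shelf power calculation. The sketch below uses statsmodels and is a generic two-group illustration, not the study's original computation.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Smallest n per group to detect a standardized difference (Cohen's d) of 0.5
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   ratio=1.0, alternative="two-sided")
print(f"required n per group: {n_per_group:.0f}")  # roughly 64 per group

# Conversely: power achieved with, say, 500 subjects per comparison group
power = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=500, ratio=1.0)
print(f"achieved power with n = 500 per group: {power:.3f}")
```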
For the multivariate statistical analyses, cigarette smoking was quantified in pack-years (cigarette packs per day × years of smoking), adjusted for a nicotine content of 0.8 mg/cigarette. All participants were classified at entry according to their habitual physical activity. Class 1 were sedentary, engaging in little exercise; class 2 were moderately active during a substantial part of the day; and class 3 performed hard physical work much of the time. Classification was based on the responses to questions about occupation and usual activities, including part-time jobs and notable non-occupational exercise [ 14 ]. Body mass index was calculated as weight (in kilograms) divided by standing height (in meters) squared. Obesity was defined as body mass index > 29.9 kg/m2. Blood samples were collected from the antecubital vein between 8 and 10 a.m., with the subject in a sitting position, after 12 hours of fasting and abstinence from ethanol. For the determination of plasma fibrinogen, blood was anticoagulated with 3.8% trisodium citrate (9:1 vol/vol) and cooled on ice until centrifugation. For determination of homocysteine, blood was collected in a cool vacutainer containing EDTA, which was stored on ice for a maximum of 2 hours until centrifugation at 3000 g for 5 minutes at 4°C. Plasma homocysteine levels were measured with an automatic Abbott AxSYM analyzer, which is based on fluorescence polarization immunoassay technology. The intra- and inter-assay coefficients of variation for homocysteine did not exceed 5%. Arterial blood pressure was automatically measured at the end of the physical examination with the subject in a sitting position. Hypertension was defined as a systolic blood pressure ≥ 140 mmHg, a diastolic blood pressure ≥ 90 mmHg, or the use of any antihypertensive medication; hypercholesterolemia was defined as total cholesterol levels greater than 220 mg/dl or the use of lipid-lowering agents; and diabetes mellitus as a fasting blood glucose > 125 mg/dl or the use of antidiabetic medication. Statistical analysis Continuous variables are presented as mean values ± standard deviation, while qualitative variables are presented as absolute and relative frequencies. Associations between categorical variables were tested by the use of contingency tables and the calculation of the chi-squared test. Comparisons between normally distributed continuous variables and categorical variables were performed by the calculation of Student's t-test and multi-way analysis of covariance (multi-ANCOVA), after checking for homoscedasticity and controlling for various potential confounders. In the case of asymmetric continuous variables the tested hypotheses were based on the calculation of non-parametric tests, such as the Mann-Whitney and Kruskal-Wallis tests. The Kolmogorov-Smirnov criterion was used to assess normality of continuous variables. Finally, correlations between continuous variables were tested through multiple regression analysis after adjustment for potential confounders and interactions. The J-shape association between the exposure variable (ethanol intake) and homocysteine levels was illustrated by connecting the mean values of the investigated parameters using 3rd-order interpolating polynomials. All reported p-values are based on two-sided tests and compared to a significance level of 5%. However, due to multiple significance tests we used the Bonferroni correction (since the number of comparisons was less than ten) in order to account for the increase in Type I error. SPSS 11.0 software (SPSS Inc.
2002, Illinois, USA) was used for all the statistical calculations. Results Thirty-four percent of males and 62% of females reported ethanol abstinence within the recorded 7-day period (p < 0.001). In addition, 37% of males and 33% of females consumed < 12 gr of ethanol per day, 16% of males and 4% of females consumed 12 – 24 gr of ethanol per day, and 13% of males and 1% of females consumed > 24 gr of ethanol per day (1.6% of males and 0.4% of females consumed > 48 gr/d), during the preceding week. Furthermore, middle-aged male participants (45 – 65 years old) consumed higher quantities of ethanol compared to younger (< 45 years) or older individuals (18 ± 16 vs. 12 ± 14 vs. 15 ± 16 gr of ethanol per day, respectively, p = 0.002), while no statistically significant differences were observed between ethanol consumption and age in females (9 ± 13 vs. 11 ± 12 vs. 12 ± 14 gr of ethanol per day, respectively, p = 0.391). Ethanol intake came from wine in 65% of men and 77% of women, from beer in 22% of men and 11% of women, and from spirits or other drinks in 13% of men and 12% of women. Further descriptive characteristics of the studied population by ethanol consumption level are presented in Table 1. With the exception of years of school (p = 0.02) and prevalence of hypertension (p = 0.01), no other associations were observed between ethanol intake and smoking habits, prevalence of hypercholesterolemia, diabetes or obesity.

Table 1 Descriptive characteristics of the study's participants by alcohol intake and by gender

Daily ethanol intake
Males                      None    < 12 gr/d   12 – 24 gr/d   25 – 48 gr/d   > 48 gr/d
Current smoking            39%     54%         43%            49%            54%
Physical inactivity        62%     56%         67%            69%            64%
Years of school (SD)       14 (4)  13 (4)      11 (6)         11 (4)**       9 (4)**
Hypertension               28%     37%**       39%**          45%**          44%**
Hypercholesterolemia       33%     34%         44%            39%            34%
Diabetes                   11%     11%         9%             9%             7%
Obesity                    22%     24%         23%            19%            24%

Females
Current smoking            38%     25%         32%            37%            30%
Physical inactivity        64%     75%         62%            57%            80%
Years of school (SD)       13 (4)  12 (4)      11 (3)*        10 (4)**       8 (3)**
Hypertension               17%     28%**       22%            22%            26%**
Hypercholesterolemia       28%     35%         32%            37%            40%
Diabetes                   8%      10%         12%            6%             6%
Obesity                    18%     15%         12%            17%            20%

** Bonferroni α < 0.01 and * α < 0.05 for the comparisons between the ethanol intake and no-intake groups.

Homocysteine values were higher in males compared to females (14.5 ± 6 vs. 10.8 ± 3.5 μmol/L, p < 0.001). The 10th percentile for men was 8.6 μmol/L and for women 6.8 μmol/L, while the 90th percentiles were 18 μmol/L and 14 μmol/L for men and women, respectively. Due to the significant differences observed between genders in homocysteine levels, all the following analyses are gender-specific. Unadjusted analysis revealed a J-shape association between ethanol quantities consumed during the past week (none, < 12 gr, 12 – 24 gr, 25 – 48 gr, >48 gr of ethanol per day) and homocysteine levels in both males (13 ± 3 vs. 11 ± 3 vs. 14 ± 4 vs. 18 ± 5 vs. 19 ± 3 μmol/L, respectively, p < 0.01) and females (10 ± 4 vs. 9 ± 3 vs. 11 ± 3 vs. 15 ± 4 vs. 17 ± 3 μmol/L, respectively, p < 0.01). Post hoc analysis revealed that the lowest values of homocysteine were observed in people who reported a daily ethanol intake of <12 gr (Bonferroni α = 0.02 for males and α = 0.02 for females). No differences were observed when we stratified our analysis by the alcoholic beverage primarily consumed. Figure 1 illustrates the observed J-shape association between ethanol intake and homocysteine levels in males and females.
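Per the statistical methods, the curves in Figure 1 were drawn by connecting the five group means with a 3rd-order polynomial. The sketch below reproduces that construction with NumPy for the male group means reported above; the numeric midpoints assigned to the intake categories are an illustrative assumption.

```python
import numpy as np

# Intake categories mapped to illustrative midpoints (gr ethanol/day)
x = np.array([0.0, 6.0, 18.0, 36.0, 60.0])
# Mean homocysteine by intake group, males (μmol/L), from the Results
y = np.array([13.0, 11.0, 14.0, 18.0, 19.0])

# 3rd-order polynomial through the group means (least squares: 4
# coefficients for 5 points, yielding a smooth J-shaped curve)
curve = np.poly1d(np.polyfit(x, y, deg=3))

for xi in np.linspace(0, 60, 7):
    print(f"{xi:5.1f} gr/d -> predicted tHcy {curve(xi):5.1f} μmol/L")
```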
Figure 1 Homocysteine levels by daily ethanol intake in males (upper figure) and females (lower figure) (continuous line is a 3rd-order interpolating polynomial). However, since several potential confounders may influence the relationship between ethanol intake and homocysteine concentration, we repeated our analysis after taking into account age, gender, pack-years of smoking, presence of hypertension, hypercholesterolemia and diabetes, body mass index, consumption of fruits and vegetables (especially leafy green vegetables, legumes, citrus fruits and juices that are rich in folic acid), as well as years of school. Multivariate regression analysis showed that <12 gr/d ethanol intake was inversely associated with homocysteine levels (b-coefficient = -0.5, p = 0.02) as compared to no consumption. On the other hand, increased ethanol intakes, i.e. 12 – 24 gr/d, 24 – 48 gr/d or > 48 gr/d, were positively associated with homocysteine concentration (b-coefficient = 1.2, p = 0.03; b-coefficient = 1.8, p = 0.02; and b-coefficient = 1.9, p = 0.02, respectively). No differences were observed when we stratified our analysis by gender. Discussion The results of the present study revealed a J-shape association between ethanol consumption and homocysteine levels in a large, random and representative population sample free of cardiovascular disease. The lowest values of homocysteine were observed at a daily ethanol intake of less than 12 gr, both in men and women, and remained significant after adjustment for several potential confounders. Our results are in line with those of some other studies. For example, De Bree et al. [ 16 ] observed lower homocysteine concentrations at higher levels of ethanol consumption, with nondrinkers having a (geometric) mean homocysteine of 14.2 μmol/L, compared to 13.9 μmol/L in drinkers of ≤ 20 gr ethanol/day, 12.5 μmol/L in drinkers of between 20 and 40 gr/day and 13.1 μmol/L in drinkers of ≥ 40 gr/day. In our study, the lowest homocysteine concentrations were observed with ethanol intakes <12 gr/day. This difference between our study and the previous one may be attributable to the type of alcoholic beverage consumed, since in the study of De Bree et al. beer was the main alcoholic drink, while in our study it was wine. In another study, the strongest positive association of ethanol (from beer consumption) with homocysteine levels was observed at ethanol intakes of 4 to 14 gr/d [ 17 ]. Another study, in severely obese patients, revealed a U-shaped association between homocysteine concentrations and the amount of ethanol consumed [ 18 ]. In particular, the most beneficial effect was observed with consumption of < 100 gr ethanol/week, and especially in red wine consumers, compared to subjects who consumed white wine, beer or spirits. However, the lower homocysteine concentrations in those consuming less than 100 gr ethanol/week were not significant after controlling for serum folate concentration. Finally, a study in elderly subjects also found a J-shape relation, with nondrinkers and subjects consuming ≥ 60 drinks/month showing higher homocysteine concentrations compared to those consuming ≤ 60 drinks/month [ 19 ]. However, the interpretation of the results of that study is difficult because the total amount of ethanol ingested was not calculated. On the contrary, there are several studies that have shown a linear relationship between ethanol intake and homocysteine levels. For example, Folsom et al.
[ 20 ], in a study of middle-aged men and women, showed a positive association between ethanol and homocysteine. However, that study examined very low intakes of ethanol, ranging from 27 to 47 gr/week, and this may be the reason why a J-shaped association was not observed. According to our findings, a significant positive association was observed at much higher intakes (i.e. 84–168 gr/week). Another study in young women (aged 15–44) [ 21 ] showed that those consuming >7 drinks/week were 90% more likely to have elevated homocysteine levels (> 10 μmol/l) compared to those who did not consume ethanol. In the same study, subjects consuming 1–7 drinks/week had the same homocysteine levels as those who consumed none, partially supporting a dual relation between ethanol intake and homocysteine. However, the association between ethanol and homocysteine levels failed to achieve statistical significance. Finally, homocysteine was positively associated with ethanol intake in the Framingham Offspring cohort [ 22 ] at daily intakes of more than 15 g. In that study, liquor and red wine consumption was significantly and positively associated with homocysteine. This association was not observed with beer and white wine consumption. Our data were analyzed according to total ethanol intake and did not distinguish between different types of ethanol. Rimm et al. [ 23 ] reviewed the literature with respect to beverage-specific effects on coronary heart disease and could not find any systematic effects. On the contrary, they showed that the U-shaped relation between ethanol intake and cardiovascular disease mortality persisted in populations with very different drinking patterns. Although there have been many publications on this topic since the aforementioned review, no systematic pattern of results has emerged until now. Perhaps most notable in this respect are the findings which suggest similar protective effects of ethanol not only in Bavaria (Germany) and the Czech Republic, where beer is mainly consumed, but also in Mediterranean countries, where wine is the most popular alcoholic beverage [ 24 ]. Additionally, Greece is a Mediterranean country, where wine is the most commonly used alcoholic beverage. According to our findings, as well as the recent results from the EPIC-Greece study [ 25 ], 72% of women's total ethanol intake comes from wine, 26% from beer and 12% from spirits. For men, wine contributes 56% of total ethanol intake, beer 15% and aniseed drinks 20%. Therefore our data do not support the assumption of Mennen et al. [ 26 ], who suggested that the inverse association between ethanol and homocysteine is seen in populations which consume predominantly beer. Chronic alcoholism has been found to be associated with hyper-homocysteinaemia, which could be attributable to disturbed folate metabolism and to changes in circulating concentrations of vitamin B12 and pyridoxal phosphate, as well as to ethanol intake per se [ 27 ]. Finally, the dual effect of ethanol consumption on homocysteine has also been confirmed by data from animal studies, which clearly show effects of excessive ethanol intake on the methionine cycle [ 13 ]. Nevertheless, the finding that subjects who do not consume ethanol have higher homocysteine levels than light to moderate drinkers needs further investigation. Whether this fact can be attributed to ethanol per se or to other substances in alcoholic beverages (e.g. folate, B12, B6, betaine) remains unclear and more intervention and experimental studies are necessary.
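The adjusted analysis reported in the Results, with separate b-coefficients for each intake category relative to abstainers, corresponds to regressing homocysteine on dummy-coded intake groups plus the confounders. The sketch below shows such a model specification with statsmodels formulas; the DataFrame and its column names are placeholders for ATTICA-like data, not the actual study dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per participant with columns such as
# hcy (μmol/L), intake ('none', '<12', '12-24', '25-48', '>48'),
# age, male (0/1), packyears, bmi, hypertension/hchol/diabetes (0/1),
# school_years. The file name is hypothetical.
df = pd.read_csv("attica_like_data.csv")

# Treatment coding with 'none' as the reference category reproduces the
# reported contrasts: b < 0 for <12 gr/d, b > 0 for heavier intakes.
model = smf.ols(
    "hcy ~ C(intake, Treatment(reference='none')) + age + male + packyears"
    " + hypertension + hchol + diabetes + bmi + school_years",
    data=df,
).fit()
print(model.params)   # b-coefficients per intake category vs. abstainers
print(model.pvalues)
```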
Limitations As a cross-sectional study, this work cannot establish causal relations but only generate hypotheses about associations. The population studied in this work is homogeneous and may reflect lifestyle habits in similar cultures, like those of Western Europe and the Mediterranean. However, our findings cannot be extrapolated to other populations without further investigation and consideration. Also, the numbers of participants in the categories of high intake (>48 gr of ethanol/d) were rather small, and the impression given of the effects on homocysteine levels at even higher ethanol consumption may be misleading. Although this analysis has been adjusted for several known confounders, we have only indirectly investigated the impact of serum folate and of vitamin B6 and B12 intake (through food groups consumed) on homocysteine concentrations. In addition, kidney function is a strong determinant of homocysteine; however, we have not measured serum creatinine. The latter may be another limitation of our study. Additionally, misreporting of ethanol consumption due to social class can be a potential confounder. Conclusion The present study supports the existence of a J-shape association between ethanol consumption and homocysteine levels in both males and females, in a large, random and representative population sample free of cardiovascular disease. Therefore our results indicate that daily consumption of 1–2 units of ethanol is associated with lower homocysteine concentrations, and they provide further evidence for a non-monotonic association between ethanol intake and coronary heart disease risk in both genders. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC526386.xml |
534783 | RNA interference: learning gene knock-down from cell physiology | Over the past decade RNA interference (RNAi) has emerged as a natural mechanism for silencing gene expression. This ancient cellular antiviral response can be exploited to allow specific inhibition of the function of any chosen target gene. RNAi is proving to be an invaluable research tool, allowing much more rapid characterization of the function of known genes. More importantly, RNAi technology considerably bolsters functional genomics to aid in the identification of novel genes involved in disease processes. This review briefly describes the molecular principles underlying the biology of the RNAi phenomenon and discusses the main technical issues regarding optimization of RNAi experimental design. | Introduction In 1998 Fire and colleagues coined the term RNA interference (RNAi), referring to the phenomenon of post-transcriptional silencing of gene expression that occurs in response to the introduction of double-stranded RNA (dsRNA) into a cell [ 1 ]. This phenomenon can result in highly specific suppression of gene expression. RNAi technology is rapidly spreading in research laboratories worldwide, as it is associated with a number of practical and theoretical advantages over preexisting methods of suppressing gene expression (Table 1 ). RNAi promises to revolutionize key areas of medical research, as demonstrated by the preliminary findings obtained in the fields of cancer, infectious diseases and neurodegenerative disorders. In this review the principles underlying this phenomenon as well as the technical challenges encountered while using RNAi for research purposes are discussed.

Table 1 Comparison between different methods for gene silencing

Method                      Advantages                                       Drawbacks
RNA interference            Specific; relatively easy                        Knock-down (not knock-out); needs transfection
Anti-sense DNA              Easy; inexpensive                                Variable efficiency; variable specificity; needs transfection
Dominant negative mutants   Stable suppression; specific protein domains     Needs transfection; variable/unexpected effect
                            can be targeted
Knock-out animal            Complete gene silencing                          Labor intensive, expensive; lethal mutants may prevent embryonic development
Small molecule inhibitors   Easy delivery                                    Variable specificity; labor-intensive development

The physiology of RNAi Introduction of long dsRNA into a mammalian cell triggers a vigorous nonspecific shutdown of transcription and translation, in part due to activation of dsRNA-dependent protein kinase R (PKR) [ 2 ]. Activated PKR phosphorylates the translation initiation factor EIF2: this effect, in association with activation of RNase L and induction of interferon production, halts protein synthesis and promotes apoptosis. Overall, this is believed to represent an antiviral defense mechanism [ 3 ]. Owing to this phenomenon, initial observations of RNAi induced by long dsRNA in plants [ 4 ] and the nematode Caenorhabditis elegans [ 1 ] were at first applied to mammalian cells with little success. In a breakthrough experiment reported by Elbashir et al., it was discovered that dsRNAs 21–23 nucleotides long, termed small interfering RNAs (siRNAs), could suppress mammalian gene expression in a highly specific manner [ 5 ], pointing the way to gene silencing in mammalian cells. RNAi is a highly conserved mechanism across taxonomic groups [ 6 ].
In addition to having antiviral activity, RNAi is also believed to suppress the expression of potentially harmful segments of the genome, such as transposons, which might otherwise destabilize the genome by acting as insertional mutagens [ 7 ]. Though its mechanisms are not fully elucidated, RNAi represents the result of a multistep process (Figure 1 ). Upon entering the cell, long dsRNAs are first processed by the RNase III enzyme Dicer [ 8 ]. This functional dimer contains helicase, dsRNA-binding, and PAZ (named after piwi, argonaute, and zwille proteins) domains. Whereas the former two domains are important for dsRNA unwinding and mediation of protein-RNA interactions, the function of the PAZ domain is not completely elucidated [ 9 , 10 ]. Dicer produces 21–23 nucleotide dsRNA fragments with two-nucleotide 3' end overhangs, i.e. siRNAs. Recently it has been suggested that Dicer has functions other than dsRNA cleavage that are required for siRNA-mediated RNAi in mammals [ 11 ]. RNAi is mediated by the RNA-induced silencing complex (RISC) which, guided by siRNA, recognizes mRNA containing a sequence homologous to the siRNA and cleaves the mRNA at a site located approximately in the middle of the homologous region [ 9 ]. Thus, gene expression is specifically inactivated at a post-transcriptional level. In C. elegans, Dicer has been shown to interact with rde proteins. The rde proteins bind to long dsRNA and are believed to present the long dsRNA to Dicer for processing [ 12 ]. Mutants displaying a high degree of resistance to RNAi have been reported to possess mutations at the rde-1 and rde-4 loci [ 13 ]. Given the highly conserved nature of these enzymes, similar mutations may be of significance in mammalian cells. Figure 1 Mechanism of RNA interference (RNAi). The appearance of double-stranded (ds) RNA within a cell (e.g. as a consequence of viral infection) triggers a complex response, which includes, among other phenomena (e.g. interferon production and its consequences), a cascade of molecular events known as RNAi. During RNAi, the cellular enzyme Dicer binds to the dsRNA and cleaves it into short pieces of ~ 20 nucleotide pairs in length known as small interfering RNA (siRNA). These RNA pairs bind to the cellular enzyme complex called the RNA-induced silencing complex (RISC), which uses one strand of the siRNA to bind to single-stranded RNA molecules (i.e. mRNA) of complementary sequence. The nuclease activity of RISC then degrades the mRNA, thus silencing expression of the viral gene. Similarly, the genetic machinery of cells is believed to utilize RNAi to control the expression of endogenous mRNA, thus adding a new layer of post-transcriptional regulation. RNAi can be exploited in experimental settings to knock down target genes of interest with a highly specific and relatively simple technology (see text for more details). Besides gene silencing, RNAi might be involved in other phenomena of gene regulation. DNA/RNA interactions are known to influence DNA methylation. It appears that RNAi can also function on this level by methylating cytosines, as well as the CpG sequences more classically associated with methylation. If the target sequence shares homology with a promoter, transcriptional silencing may occur via methylation. Moreover, RNA appears to interact with chromatin domains, which may ultimately direct DNA methylation. Studies of C. elegans have shown that RNAi can spread among cells through mechanisms that may not hinge upon siRNA [ 14 ].
The systemic RNA interference-deficient (sid) locus, sid-1, encodes a conserved protein with a signal peptide sequence and 11 putative transmembrane domains, suggesting that the sid-1 protein may act as a channel for long dsRNA, siRNA, or a currently undiscovered RNAi-related signal. Sid-1 mutants retain cell-autonomous RNAi but fail to show spreading of RNAi. It remains unclear whether this systemic RNAi occurs in mammals, although a strong similarity is reported between sid-1 and predicted human and mouse proteins. siRNA synthesis and delivery strategies Several strategies for inducing siRNA-mediated gene silencing have been developed, each presenting specific advantages and disadvantages (Table 2 ).

Table 2 Comparison between siRNA delivery methods

Method                            Advantages                                            Drawbacks
Chemical or enzymatic synthesis   Rapid; enzymatic: no need to test individual          Transient RNAi; needs transfection; enzymatic: variable purity and specificity; chemical: expensive
                                  siRNAs; chemical: high purity
DNA plasmid vector or cassette    Less expensive; stable RNAi                           Labor intensive; needs transfection
Viral vector                      Stable RNAi; may be effective in cells resistant      Labor intensive; potential biohazard
                                  to transfection with dsRNA/plasmids

Synthesis, purification, and annealing of siRNAs by industrial chemical processes [ 15 ] is becoming increasingly popular. This method is rapid and purity is generally high. This may be the best approach for initial "proof of principle" experiments. In vitro siRNA synthesis is an alternative and relies upon the T7 phage polymerase [ 16 ]. This polymerase produces individual siRNA sense and antisense strands that, once annealed, form siRNAs. Extra nucleotides required by the T7 promoter are removed by RNase digestion and cleaning steps. Alternatively, recombinant RNase III can be used to cleave long dsRNAs to produce multiple siRNAs [ 17 ]. Although technically easy, this approach has the drawback of generating non-specific siRNAs. siRNAs can be produced from polymerase III promoter-based DNA plasmids or expression cassettes [ 18 ]. These constructs produce small inverted repeats, separated by a spacer of three to nine nucleotides, termed short hairpin RNAs (shRNAs), which are processed by Dicer into siRNAs [ 19 ]. Transcription begins at a specific initiation sequence, determined by the promoter used. In addition to a defined initiation sequence, the U6 polymerase III promoter terminates with TTTT or TTTTT [ 20 ]. The products are shRNAs that contain a series of uridines at the 3' end, a feature that seems to favor RNAi [ 21 ]. Suppression of gene expression by RNAi is generally a transient phenomenon [ 22 ]. Gene expression usually recovers 96 to 120 hours or 3 to 5 cell divisions after transfection, which is likely due to dilution rather than degradation of siRNAs. However, by introducing plasmids which express siRNA and a selection gene, stable RNAi can be sustained as long as two months after transfection [ 23 ]. Interest is growing in the use of viral vector-mediated RNAi. Adenoviral and retroviral vectors have been reported to produce siRNAs in vivo [ 24 , 25 ], and stable RNAi is obtained using this method, even in the absence of a selective pressure [ 26 , 27 ]. Virus-mediated RNAi may circumvent some of the problems associated with cells that are generally refractory to RNAi, such as non-transformed primary cells [ 28 ]. At present, the question of whether functional RNAi will continue in all progeny of a cell with stable vector integration remains unanswered.
Designing RNAi experiments

Several crucial considerations should be borne in mind while designing RNAi experiments. The examples below concern RNAi experiments performed with chemically synthesized siRNA.

1. The first step is to design a suitable siRNA sequence. A growing number of libraries of validated siRNAs directed toward some frequently targeted genes are available. However, if the gene of interest has not been targeted using siRNA before, a novel siRNA must be developed. In mammalian cells RNAi is mediated by 21- to 23-nucleotide siRNAs containing symmetrical two nucleotide 3' overhangs. Given a siRNA sequence alone, it is not currently possible to predict the degree of gene knockdown produced by a particular siRNA. Nevertheless, several observations have been made that can be taken into account to increase the probability of producing an effective siRNA. The chief variable is the gene target site. Generally, it is recommended that a target site located at least 100–200 nucleotides from the AUG initiation codon be chosen. Targets within 50–100 nucleotides of the termination codon should instead be avoided. The 5' and 3' untranslated regions (UTRs) should also be avoided, since associated regulatory proteins might compromise RNAi. This is just a general recommendation, as some siRNAs targeting the 3' UTR have also been shown to induce RNAi [ 29 ]. Numerous on-line design tools will produce a list of suitable gene target sites. It is important to ensure that the sequence is specific to the target gene by performing a BLAST search in order to avoid cross-reaction with unwanted genes. As an example, Biocomputing at the Whitehead Institute for Biomedical Research – a nonprofit independent research and educational institution affiliated with the Massachusetts Institute of Technology – is one of several organizations that has developed a freely available web-based siRNA design tool.

2. The structural characteristics of the siRNA molecules are another crucial aspect to be considered while designing RNAi experiments. siRNAs of 21 nucleotides with 3'-d(TT) or (UU) overhangs are considered the most effective [ 30 ]. Nucleotide-protein steric interactions are thought to contribute to the relationship between siRNA length and activity, but the reason for this relationship is not completely elucidated. For optimal siRNA secondary structure, the GC ratio should ideally be between 45 and 55%, and multiple identical nucleotides in series, particularly poly(C) and poly(G), should be avoided. It is also important to determine at the design stage any requirements for modification, such as fluorophore labeling to allow for siRNA tracking and quantification of transfection efficiency. (A toy implementation of these target-site filters is sketched after step 3 below.)

3. To induce RNAi, siRNA must be transfected into the cells of interest. Several transfection reagents exist, the most commonly used being liposomal or amine-based. In some cases electroporation may be used, but cell toxicity can be high with this technique [ 31 ]. Cell lines show varying responses to different transfection reagents, and it may be necessary to try more than one reagent or approach. Transfection efficiency is optimized by titrating cell density, transfection time, and the ratio of siRNA-to-transfection reagent. The cell passage number and antibiotic use can also affect the efficiency of transfection.
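The positional, GC-content, and homopolymer rules of thumb from steps 1 and 2 can be expressed as a simple filter. The Python sketch below is purely illustrative: the thresholds restate the recommendations above, the demo transcript is random, and any real candidate would still need a BLAST specificity check and experimental validation.

import random

def gc_fraction(seq: str) -> float:
    return (seq.count("G") + seq.count("C")) / len(seq)

def has_homopolymer_run(seq: str, length: int = 4) -> bool:
    # flags runs such as poly(C) or poly(G)
    return any(base * length in seq for base in "ACGT")

def candidate_sites(mrna: str, aug: int, stop: int, site_len: int = 19):
    """Yield (position, site) pairs lying >= 100 nt downstream of the AUG
    and >= 100 nt upstream of the stop codon, with 45-55% GC content and
    no homopolymer runs. `aug` and `stop` are 0-based codon positions."""
    for pos in range(aug + 100, stop - 100 - site_len + 1):
        site = mrna[pos:pos + site_len].upper()
        if 0.45 <= gc_fraction(site) <= 0.55 and not has_homopolymer_run(site):
            yield pos, site

# Demo on a made-up 600-nt transcript with AUG at position 50, stop at 550
random.seed(0)
mrna = "".join(random.choice("ACGT") for _ in range(600))
for pos, site in list(candidate_sites(mrna, aug=50, stop=550))[:3]:
    print(pos, site)

A surviving candidate would then be synthesized as a 21-nt duplex with symmetrical 3'-dTdT overhangs, per the structural recommendations in step 2.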
4. Recently, experimental design features have been suggested to guarantee the rigor of RNAi experiments [ 32 ]. Due to the high specificity of RNAi, a siRNA with a one-nucleotide sequence mismatch can serve as a negative control. If this approach is used, absence of homology with other targets should be confirmed at the design stage. It is important to remember that mismatched siRNAs could target mutant gene sequences. Therefore, loss of functional target gene silencing should be demonstrated to validate this approach. Alternatives include sequences that present no homology to any known gene. Some investigators have suggested that scrambled siRNA is not sufficiently homologous to the target sequence to function as an adequate control; therefore, they propose a combination of mismatched and scrambled controls [ 32 ]. A more challenging functional control is to demonstrate the "rescue" of the target gene function following artificial overexpression of the target gene. Transfection of a plasmid expressing the gene sequence to which a siRNA is targeted results in production of mRNA that would also be targeted by the siRNA. This problem can be overcome using plasmids containing silent mutations. This approach takes advantage of the degeneracy of gene coding, i.e., amino acids are represented by more than one three-nucleotide codon sequence. Rescue is achieved by expression of a protein identical to the native protein from a nucleotide sequence that differs from the native nucleotide sequence to which the siRNA is targeted. Alternatively, siRNAs directed to the 3'-UTR can be used. Many researchers use more than one siRNA, with each targeted to different areas of the gene sequence. A consistent RNAi response using different siRNAs with a variety of targets within the gene sequence of interest would increase confidence in experimental results. Dose-response characteristics should be determined and the lowest effective concentration of siRNA used to avoid nonspecific effects.

5. The effect of RNAi should be quantified at both the mRNA and the protein level. Protein knockdown should probably be evaluated only after mRNA reduction has been demonstrated: in fact, a reduction in protein levels not accompanied by a decrease in mRNA might indicate that other mechanisms are at work, such as RNAi mediated by microRNA. Northern blot analysis is considered by many to be the gold standard. Real-time reverse transcriptase polymerase chain reaction, incorporating internal controls to quantify "housekeeping" gene transcript levels, can also be used [ 33 ]. Protein knockdown can be confirmed by Western blot analysis, immunofluorescence, flow cytometry, and phenotypic and/or functional assays. Although RNAi generally occurs within 24 h of transfection, both onset and duration of RNAi depend on the turnover rate of the protein of interest, as well as the rate of dilution and longevity of the siRNAs. The duration of gene silencing can also be modified by factors such as the concentration of serum in the culture medium, which affects cell-cycle rate. It is therefore necessary to determine the time course of any silencing observed under specific conditions using the modalities discussed.
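For the mRNA quantification described in step 5, real-time RT-PCR data normalized to a housekeeping gene are often analyzed by the 2^-ΔΔCt method. The sketch below shows that arithmetic; note that the article cites the general approach [ 33 ] without prescribing this particular calculation, and the Ct values are invented for illustration.

def ddct_fold_change(ct_target_treated: float, ct_ref_treated: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    """Fold change of target mRNA (siRNA-treated vs. control),
    normalized to a housekeeping reference gene (2^-ddCt method)."""
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    return 2 ** -(dct_treated - dct_control)

# Target Ct rises ~2 cycles after siRNA treatment, housekeeping unchanged:
# expression is roughly 25% of control, i.e. ~75% knockdown.
print(ddct_fold_change(26.1, 18.0, 24.1, 18.0))  # ~0.25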
Conclusions

RNAi is now commonly used in biological and biomedical research to study the effect of blocking expression of a given gene. As the effect is rarely complete, it is generally termed a "knock-down" to distinguish it from the "knock-out" achieved by deletion of the gene. Although significant advances have been made as compared to previous methods, RNAi has its own limitations. Not every sequence works; most researchers report a success rate of about one in three. Moreover, although the effects are generally believed to be highly sequence-specific, some doubts remain as to whether or not some of the observed effects are "off target." Some residual activation of the interferon system has been reported, as well as degradation of closely related, but non-identical, mRNAs. Nevertheless, RNAi remains the most promising functional genomics tool recently developed. DNA microarray technology has now enabled the level of expression of every gene in the genome to be determined under any condition [ 34 , 35 ]. This has led to a huge accumulation of data on genes whose expression is significantly altered in several diseases. For example, large databases have been established of genes that are pathologically regulated in cancer. In some cases this has resulted in the identification of key genes involved in tumor development and provided important new therapeutic targets. However, in most cases the pattern of gene expression is far too complex to allow for identification of the relatively small number of misexpressed genes that are involved in causing or maintaining the disease rather than the much larger number that are innocent bystanders. The ability of RNAi to provide relatively easy ablation of gene expression has opened up the possibility of using collections of siRNAs to analyse the significance of hundreds or thousands of different genes whose expression is known to be upregulated in a disease, given an appropriate tissue culture model of that disease. Perhaps more important still is the possibility of using genome-wide collections of siRNAs, whether synthetic or in viral vectors, as screening tools. Two main avenues of research can rely upon RNAi libraries. First, in a high-throughput manner, each gene in the genome is knocked down one at a time and the cells or organism scored for a desired outcome, such as death of a cultured cancer cell but not a normal cell. Due to the very large number of assays required to screen all 35,000–50,000 genes in the human genome, this approach is highly labor-intensive and time-consuming. The other approach is to use large pools of RNAi viral vectors and apply a selective pressure such that only cells with the desired change in behavior survive. The genes knocked down in the surviving cells can then be identified by sequencing the RNA interference vectors that they carry. Both approaches show considerable promise in identifying novel genes that may make important therapeutic targets for inhibition either by conventional drug discovery methods or, more intriguingly, by RNA interference itself.
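The pooled-screen readout sketched above can be caricatured in a few lines of Python: vector inserts sequenced from cells surviving selection are tallied against the library to nominate enriched genes. All library contents, sequences, and gene names below are hypothetical.

from collections import Counter

# Hypothetical library mapping shRNA insert sequence -> targeted gene
LIBRARY = {
    "GCTGACCCTGAAGTTCATC": "GeneA",
    "CACCGTTGAATGCCTTAGA": "GeneB",
    "TTGCCAGTAGGACATCGAA": "GeneC",
}

def score_survivors(sequenced_inserts):
    """Count recoveries of each library gene among surviving cells;
    strongly enriched genes become candidate hits for follow-up."""
    hits = Counter()
    for insert in sequenced_inserts:
        gene = LIBRARY.get(insert)
        if gene is not None:
            hits[gene] += 1
    return hits.most_common()

recovered = ["GCTGACCCTGAAGTTCATC"] * 5 + ["CACCGTTGAATGCCTTAGA"]
print(score_survivors(recovered))  # GeneA is enriched among survivors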
BMP Receptor Signaling Is Required for Postnatal Maintenance of Articular Cartilage

Abstract

Articular cartilage plays an essential role in health and mobility, but is frequently damaged or lost in millions of people who develop arthritis. The molecular mechanisms that create and maintain this thin layer of cartilage that covers the surface of bones in joint regions are poorly understood, in part because tools to manipulate gene expression specifically in this tissue have not been available. Here we use regulatory information from the mouse Gdf5 gene (a bone morphogenetic protein [BMP] family member) to develop new mouse lines that can be used to either activate or inactivate genes specifically in developing joints. Expression of Cre recombinase from Gdf5 bacterial artificial chromosome clones leads to specific activation or inactivation of floxed target genes in developing joints, including early joint interzones, adult articular cartilage, and the joint capsule. We have used this system to test the role of BMP receptor signaling in joint development. Mice with null mutations in Bmpr1a are known to die early in embryogenesis with multiple defects. However, combining a floxed Bmpr1a allele with the Gdf5 - Cre driver bypasses this embryonic lethality, and leads to birth and postnatal development of mice missing the Bmpr1a gene in articular regions. Most joints in the body form normally in the absence of Bmpr1a receptor function. However, articular cartilage within the joints gradually wears away in receptor-deficient mice after birth in a process resembling human osteoarthritis. Gdf5 - Cre mice provide a general system that can be used to test the role of genes in articular regions. BMP receptor signaling is required not only for early development and creation of multiple tissues, but also for ongoing maintenance of articular cartilage after birth. Genetic variation in the strength of BMP receptor signaling may be an important risk factor in human osteoarthritis, and treatments that mimic or augment BMP receptor signaling should be investigated as a possible therapeutic strategy for maintaining the health of joint linings.

Introduction

Thin layers of articular cartilage line the bones of synovial joints and provide a smooth, wear-resistant structure that reduces friction and absorbs impact forces ( Brandt et al. 1998 ). Loss or damage to articular cartilage is a hallmark of arthritic diseases and is one of the most common reasons that both young and old adults seek medical care. Millions of people are afflicted with arthritis, and it ultimately affects more than half of people over the age of 65 ( Badley 1995 ; Yelin and Callahan 1995 ). A better understanding of the molecular mechanisms that create and maintain articular cartilage is crucial for discovering the causes of joint disorders and providing useful medical treatments. Joint formation begins during embryogenesis, when stripes of high cell density called interzones form across developing skeletal precursors ( Haines 1947 ). Programmed cell death occurs within the interzone, and a three-layered interzone forms that has two layers of higher cell density flanking a region of lower cell density. Non-joint precursors of the skeleton typically develop into cartilage, which hypertrophies and is replaced by bone. However, cells within the high-density layers of the interzone are excluded from this process and develop into the permanent layers of articular cartilage found in the mature joint ( Mitrovic 1978 ).
Studies over the last 10 y have begun to elucidate some of the signaling pathways that contribute to the early stages of joint formation. Wnt14 is expressed in stripes at the sites where joints will form, and it is capable of inducing expression of other joint markers when misexpressed at new locations in the limb ( Hartmann and Tabin 2001 ). Several members of the bone morphogenetic protein (BMP) family of secreted signaling molecules are also expressed in stripes at sites where joints will form, including those encoded by the genes Gdf5, Gdf6, Gdf7, Bmp2, and Bmp4 ( Storm and Kingsley 1996 ; Wolfman et al. 1997 ; Francis-West et al. 1999 ; Settle et al. 2003 ). Of these, Gdf5 expression is most strikingly limited to regions where joints will develop and is one of the earliest known markers of joint formation. Mutations in either Gdf5 or the closely related Gdf6 gene also block formation of joints at specific locations, providing strong evidence that these molecules are essential for the joint formation process ( Storm et al. 1994 ; Settle et al. 2003 ). However, mutations in Bmp2 or Bmp4 cause early embryonic lethality, making it difficult to test their role in joint formation ( Winnier et al. 1995 ; Zhang and Bradley 1996 ). Much less is known about how signaling pathways function during the subsequent maturation and maintenance of adult joint structures. Importantly, BMP signaling components are present in adult articular cartilage, suggesting that they may function during the late development or maintenance of this critical structure ( Erlacher et al. 1998 ; Chubinskaya et al. 2000 ; Muehleman et al. 2002 ; Bau et al. 2002 ; Bobacz et al. 2003 ). BMPs bind tetrameric complexes of two type I and two type II transmembrane serine-threonine kinase receptors. Upon BMP binding, these complexes transduce a signal by phosphorylating members of the Smad family of transcription factors ( Massague 1996 ). Recent experiments have implicated two different BMP type I receptors in skeletal patterning, BMPR1A and BMPR1B. Both receptors can bind BMP2, BMP4, and GDF5, although GDF5 shows higher affinity for BMPR1B ( Koenig et al. 1994 ; ten Dijke et al. 1994 ; Yamaji et al. 1994 ; Nishitoh et al. 1996 ; Chalaux et al. 1998 ). Both receptors are also expressed in dynamic patterns during normal development. In limbs, Bmpr1a expression becomes restricted to joint interzones, perichondrium, periarticular cartilage, hypertrophic chondrocytes, and interdigital limb mesenchyme. In comparison, Bmpr1b expression is seen primarily in condensing precartilaginous mesenchymal cells, regions flanking joint interzones, perichondrium, and periarticular cartilage ( Dewulf et al. 1995 ; Mishina et al. 1995 ; Zou et al. 1997 ; Baur et al. 2000 ). Null mutations in the Bmpr1b gene produce viable mice with defects in bone and joint formation that closely resemble those seen in mice missing Gdf5 ( Storm and Kingsley 1996 ; Baur et al. 2000 ; Yi et al. 2000 ). Null mutations in Bmpr1a cause early embryonic lethality, with defects in gastrulation similar to those seen in mice with mutations in Bmp4 ( Mishina et al. 1995 ; Winnier et al. 1995 ). Recent studies with floxed alleles suggest that Bmpr1a is also required for many later developmental events, but its roles in bone and joint formation have not yet been tested ( Mishina 2003 ). A genetic system for activating or inactivating genes specifically in joint tissues would be particularly useful for further studies of joint formation and maintenance. 
Here we take advantage of the tissue-specific expression pattern of the Gdf5 gene to engineer a Cre / loxP system ( Nagy 2000 ), Gdf5-Cre, that can be used to remove or ectopically express genes in joints. Tests with reporter mice show that this system is capable of modifying genes in all of the structures of the mature synovial joint, including the ligaments of the joint capsule, the synovial membrane, and the articular cartilage. Gdf5-Cre recombination bypasses the early embryonic lethality of null mutations in Bmpr1a, and shows that this receptor is required for early joint formation at some locations and for initiation of programmed cell death in webbing between digits. Interestingly, Bmpr1a is also required for postnatal maintenance of articular cartilage throughout most of the skeleton. In Gdf5-Cre/Bmpr1a floxP mice, articular cartilage initially forms normally, but subsequently loses expression of several key cartilage markers after birth. It ultimately fibrillates and degenerates, resulting in severe osteoarthritis and loss of mobility. These experiments suggest that BMP signaling is required for normal maintenance of postnatal articular cartilage, and that modulation of the BMP signaling pathway may play an important role in joint disease.

Results

Genetic System for Testing the Function of Genes in Joint Development

To generate a general system capable of specifically testing genes for functions in skeletal joint development, we engineered transgenic mice to express Cre recombinase in developing joints ( Figure 1 ). Gdf5 is a gene strongly expressed in stripes across developing skeletal elements during embryonic joint formation. A bacterial artificial chromosome (BAC) containing the Gdf5 locus was modified by homologous recombination in bacteria to insert a cassette encoding Cre-internal ribosome entry site (IRES)-human placental alkaline phosphatase (hPLAP) into the translation start site of Gdf5 ( Figure 1 A). This modified BAC was then used to make lines of transgenic mice. The resulting Gdf5-Cre transgenic mice were tested for transgene expression and Cre recombinase activity by crossing them to R26R reporter mice that activate the expression of lacZ after Cre-mediated removal of transcriptional stop sequences ( Soriano 1999 ). The resulting progeny were analyzed both for expression of the transgene by assaying HPLAP activity and for recombination of DNA by assaying LACZ activity. The progeny from all three lines showed strong LACZ expression primarily in joints, and in two of three lines HPLAP expression could also be seen in joint regions. Interestingly, HPLAP expression in the Gdf5-Cre transgenic GAC(A) line used for all subsequent breeding experiments was seen to precede LACZ expression during successive development of joints in the digits ( Figure 1 C) (unpublished data). These experiments clearly demonstrate that the Gdf5-Cre transgene expresses Cre recombinase and causes DNA recombination in developing joint regions.

Figure 1 A Genetic System to Drive Gene Recombination in Developing Joints. (A) A 140-kb BAC from the Gdf5 locus was modified by inserting Cre -IRES- hPLAP into the translation start site of Gdf5 and used to make transgenic mice. Not to scale. See Materials and Methods for details. (B–E) Visualization of Gdf5-Cre driven recombination patterns based on activation of lacZ expression from the R26R Cre reporter allele.
(B) LACZ activity is visible as blue staining in the ear (ea) and the joints of the shoulder (s), elbow (eb), wrist (w), knee (k), ankle (a), vertebra (vj), and phalanges (black arrowheads) of an E14.5 mouse embryo. (C) E14.5 hindlimb double-stained to show both HPLAP expression from the transgene (grey/purple staining) and LACZ expression from the rearranged R26R allele (blue staining). Note that both markers are visible in the oldest, proximal interphalangeal joint (black arrowhead), only HPLAP activity is visible in the more recently formed medial interphalangeal joint (black arrow), and neither HPLAP nor LACZ expression is visible in the youngest, most distal joint of the digit (white arrowhead). (D) Newborn (P0) forelimb with skin partially removed showing LACZ activity expressed in all phalangeal joints (red Salmon-gal staining, black arrowheads) and regions of some tendons (asterisk). (E) Section through the most distal phalangeal joint of a P0 hindlimb stained with Alcian blue to mark cartilage showing LACZ expression (stained red) in all tissues of developing joints: articular cartilage (black arrowhead), precursors of ligaments and synovial membranes (black arrow), and cells where cavitation is occurring (asterisk).

GAC(A) mice were crossed with the lacZ ROSA26 Cre reporter strain (R26R) to analyze the pattern of Cre-mediated lacZ recombination throughout development. Joints in developing limbs begin forming in a proximal-distal pattern such that the shoulder joint forms prior to the elbow joint. In addition, three major stages of early joint development have been defined by histology as (1) interzone formation, (2) three-layer interzone formation, and (3) cavitation ( Mitrovic 1978 ). Consistent with the proximal-distal pattern of joint development in the limbs, LACZ activity is seen at embryonic day 12.5 (E12.5) in the more proximal joints, including the shoulder and knee (unpublished data). By E14.5, LACZ expression is typically seen in all but the most distal joints of the limbs ( Figure 1 B and 1 C), but with some variability in both strength and extent of expression from embryo to embryo. The strongest-staining embryos often have additional staining in fingertips (not seen in the E14.5 embryo in Figure 1 C, but clearly detectable in the E13.5 embryo shown in Figure 2 ). Sections through developing joints show that LACZ is present in many cells at the interzone stage (unpublished data). However, expression of LACZ in nearly 100% of joint cells is not achieved until the three-layer interzone stage (for example, in the knee joint at E14.5 or in any of the phalangeal joints at E16.5; unpublished data). Within the developing skeleton, Cre-mediated expression of LACZ remains strikingly specific to joints throughout development. Furthermore, it is seen in all the structures of postnatal synovial joints including the articular cartilage, joint capsule, and synovial membrane ( Figure 1 D and 1 E) (unpublished data). These patterns are consistent with the well-established expression of Gdf5 in interzone regions during embryonic development ( Storm and Kingsley 1996 ). Adult expression patterns of the Gdf5 gene are not as well characterized, but Gdf5 expression has previously been detected in adult articular cartilage using both RT-PCR and immunocytochemistry ( Chang et al. 1994 ; Erlacher et al. 1998 ; Bobacz et al. 2002 ).
Figure 2 Bmpr1a Is Required for Webbing Regression and Apoptosis in Specific Regions of the Limb. (A and B) Control E14.5 forelimb (A) compared to an E14.5 mutant forelimb (B) showing webbing between digits 1 and 2 (arrowheads) and extra tissue at the posterior of digit 5 (arrows). (C) Gdf5-Cre induced lacZ expression from R26R in an E13.5 forelimb showing LACZ staining (blue) in metacarpal-phalangeal joints, between digits 1 and 2 (arrowhead), and in a region posterior to digit 5 (arrow). (D and E) Sections of E14.5 hindlimbs showing apoptosis visualized by TUNEL staining (green) and proliferation visualized by staining for histone H3 phosphorylation (red). Controls show strong, uniform TUNEL staining between digits 1 and 2 (D, arrowhead) while mutants show patchy TUNEL staining interspersed with mitotic cells in similar regions (E). Scale bar = 200 μm. (F) Quantitation of TUNEL staining and mitotic cells in the posterior region of the fifth digit shows apoptosis is reduced 30% while proliferation is increased 20% (asterisks indicate statistically significant difference). (G and H) By E15.5, interdigital tissue has regressed in controls (G, arrowhead). In contrast, tissue remains in mutants at this location, primarily derived from cells that have undergone Gdf5-Cre -mediated recombination that inactivates Bmpr1a function and activates expression of LACZ (H). Scale bar = 75 μm.

Other sites besides limb joints also have Cre-mediated lacZ expression. Starting at E13.5, LACZ activity is detected in an anterior and posterior domain of the limb bud ( Figure 2 C). At E14.5, LACZ activity is detectable in the developing ear pinnae, ribs, sternum, tissues in the face, and some regions of the brain and spinal cord ( Figure 1 B) (unpublished data). At birth, LACZ is also expressed in tendons running along the vertebral column, regions of tendons in the wrist and ankle, and some tendon insertions ( Figure 1 D) (unpublished data). By 5 wk of age, LACZ is also expressed in the hair follicles, ear cartilage, some cells in the growth plate of the long bones, and portions of the brain and spinal cord (unpublished data). Surprisingly, 23 of 63, or 37%, of transgenic mice analyzed also show some degree of wider “ectopic” LACZ expression, which can extend throughout many different tissues in the animal. However, sustained expression of the transgene itself, as assayed by HPLAP activity, is still restricted primarily to joints in animals that show evidence of more generalized recombination based on LACZ expression (unpublished data). This suggests that in a fraction of animals, sporadic expression of Cre at some time early in development is sufficient to lead to both ectopic recombination and LACZ expression. While the fraction of animals with broader recombination patterns must be tracked and accounted for during experiments, these animals offer the potential benefit of revealing additional new functions of target genes that could be subsequently studied with additional site-specific Cre drivers.

Gdf5-Cre/Bmpr1a floxP Animals Survive to Adulthood with Ear, Webbing, and Joint Defects

We next used the Gdf5-Cre system to test the role of BMP signaling during normal joint development. Gdf5-Cre transgenic mice were bred to animals carrying a conditional floxed allele of the Bmpr1a locus ( Mishina et al. 2002 ), usually in the presence of the R26R reporter allele to facilitate simultaneous visualization of Cre-mediated recombination patterns (see typical cross in Figure 3 ).
PCR amplification confirmed that a key exon of the Bmpr1a gene was deleted in mice that also carried the Gdf5-Cre transgene (unpublished data). Previous studies have shown that the recombined Bmpr1a floxP allele mimics a null allele of the Bmpr1a locus when transmitted through the germline ( Mishina et al. 2002 ). The Gdf5-Cre/Bmpr1a floxP conditional knockout mice were viable and survived to adulthood, showing that the Gdf5-Cre driver can bypass the early embryonic lethality previously reported in animals with a null mutation in the Bmpr1a locus ( Mishina et al. 1995 ).

Figure 3 Gdf5-Cre -Mediated Deletion of Bmpr1a. (A) Breeding strategy simultaneously deletes Bmpr1a floxP and allows visualization of Gdf5-Cre -mediated recombination by lacZ expression from R26R . (B–E) 5-week-old mutant and control mice stained with Alcian blue to mark cartilage and alizarin red to mark bone. (B) Ankle of control with strong blue staining lining each joint (arrowheads). (C) Ankle of mutant showing an absence of blue staining in most regions (arrowheads) and a joint fusion between the central (c) and second (2) tarsals (arrow). (D) Control and (E) mutant metatarsal/phalangeal joint which lacks blue staining in articular regions (arrowheads) but retains staining in the growth plate (asterisks). (F) Control forelimb. (G) Mutant forelimb with webbing between the first and second digit (black arrowhead).

The viable Gdf5-Cre/Bmpr1a floxP mice showed several phenotypes. First, the conditional knockout mice had shorter ears that often lay flatter against their heads than controls (controls 13.1 ± 0.1 mm long, n = 38; mutants 11.8 ± 0.2 mm, n = 11; p < 0.0001). BMP signaling is known to be required for growth of the external ear of mice ( Kingsley et al. 1992 ), and this phenotype likely reflects loss of Bmpr1a function in the fraction of ear cells that express the Gdf5-Cre transgene. Most mutant mice also showed soft tissue syndactyly or retention of webbing between the first and second digits of their feet, a phenotype that was more frequent and more severe in the forelimbs (201 of 220, or 91%, of forefeet and 109 of 220, or 50%, of hindfeet). Finally, mutant animals showed obvious skeletal changes in whole-mount skeletal preparations. At some sites in the ankles, joints seemed to be missing entirely, with fusion of bones that would normally be separate. For example, the second distal tarsal was fused to the central tarsal bone in every conditional knockout animal examined (18 of 18), a phenotype not observed in controls (zero of 18) ( Figure 3 B and 3 C). At other locations, joints had clearly formed but showed dramatic loss of staining with the cartilage matrix marker Alcian blue ( Figure 3 B– 3 E) (unpublished data). Normal Alcian blue staining was seen in non-articular regions, such as the cartilaginous growth plate ( Figure 3 D and 3 E, asterisk). These data suggest that Bmpr1a function is required for the formation of specific joints in the ankle region and for either generation or maintenance of articular cartilage in most other joints of the limb.

Developmental Origin of Webbing Phenotype

Interdigital mesenchyme is normally eliminated by apoptosis during embryonic development, a process that can be stimulated by BMP beads, inhibited by Noggin, or blocked by overexpression of dominant-negative BMP receptors ( Garcia-Martinez et al. 1993 ; Yokouchi et al. 1996 ; Zou and Niswander 1996 ; Guha et al. 2002 ).
Limbs of Gdf5-Cre/Bmpr1a floxP mutant embryos showed obvious retention of interdigital webbing between the first and second, but not other, digits of E14.5 forelimbs ( Figure 2 A and 2 B), a pattern that corresponds to the presence or absence of webbing seen in the adult limb. They also showed excess tissue on the posterior margin of the fifth digit ( Figure 2 B, arrow). Analysis of LACZ expression in Gdf5-Cre/R26R reporter embryos showed that Cre-mediated recombination has occurred by E13.5 in the metacarpal-phalangeal joints, and in the interdigital region between the first and second, but not other, digits. In addition, a domain of recombination and expression of LACZ is also reproducibly seen in the posterior half of the fifth digit ( Figure 2 C). Terminal deoxynucleotidyl transferase–mediated deoxyuridine triphosphate nick end labeling (TUNEL) staining of interdigital mesenchyme between the first and second digits ( Figure 2 D and 2 E) and the fifth digit flanking mesenchyme showed a decreased number of dying cells in the regions where excess tissue is retained in the mutant limbs. Numbers of phosphorylated histone H3-labeled proliferating cells were also elevated in these regions ( Figure 2 F). Most cells found in the webbed region between the first and second digits at E15.5 strongly expressed LACZ in Gdf5-Cre/Bmpr1a floxP mutant embryos ( Figure 2 H). These data suggest that regional loss of BMPR1A receptor signaling blocks programmed cell death in interdigital mesenchyme, and that the recombined cells survive and proliferate in the absence of BMPR1A signaling.

Failure of Early Joint Formation in Ankle Regions

The Bmpr1a gene is expressed in the interzone region of developing joints at E13.5 ( Baur et al. 2000 ). In situ hybridization showed that the gene is also expressed in the interzones of ankle joints and prospective articular cartilage regions of digit joints at E15.5 ( Figure 4 ). LACZ staining indicated that Cre-mediated recombination begins to occur in ankle joints around E14.5, and is extensive by E15.5 ( Figure 4 G and 4 J) (unpublished data). In the ankle joint regions that were obviously fused in postnatal mutant animals, alterations in early joint marker expression could also be seen by E15.5. At this stage, the Gdf5 gene is normally expressed in stripes that mark the sites of joint formation ( Figure 4 F), and the gene for the major collagen protein of cartilage matrix (Col2a1) is down-regulated in the interzone region ( Figure 4 E). In contrast, Col2a1 staining extended completely through the joint region between the second and central tarsal of Gdf5-Cre/Bmpr1a floxP mutants ( Figure 4 H, black arrow), and Gdf5 expression was seen only as a small notch extending into where the joint should be forming ( Figure 4 I, bracket). These data suggest that the fusions seen between ankle bones in postnatal mutant skeletons are the result of incomplete segmentation of skeletal precursors during embryonic development, a defect confined to some locations in the ankle.

Figure 4 Bmpr1a Is Expressed in Joints and Is Required for Continued Joint Formation in the Ankle Region. (A) Diagram of ankle bones from a wild-type mouse; bones fusing in mutant are colored red. Roman numerals II–IV, metatarsals; 2, 3, and 4/5, distal row of tarsal bones; c, central tarsal bone; ta, talus; ca, calcaneus. (B and C) In situ hybridization at E15.5 showing that Bmpr1a is expressed in ankle joint interzones (B, arrowheads) and in the forming articular regions of the phalangeal joints (C, arrowheads).
(D) Near adjacent section to (C) showing Gdf5-Cre induced LACZ expression from R26R in the forming joints of the digits (arrowheads). (E–J) Marker gene expression and R26R LACZ staining patterns on near adjacent sections of control and mutant embryos. In control mice at E15.5, ankle joints are clearly delineated as regions that have down-regulated Col2 (E), express Gdf5 throughout (F), and express LACZ in most cells (G; white arrowheads and black arrows). In mutant embryos at the same stage, joint formation is incomplete. Faint Col2 expression can be seen connecting a medial region of tarsal 2 with metatarsal II (H, white arrowhead), and Gdf5 expression does not extend all the way across the joint at this location (I, white arrowhead). Between tarsals c and 2, mutants express Col2 across the normal joint-forming region (H, black arrow) and lack expression of Gdf5 at sites where skeletal fusions are observed (I, black arrow and bracket). (J) Scale bar = 100 μm.

Failure to Maintain Articular Cartilage in Other Joints

In most joints of Bmpr1a conditional knockout mice, embryonic segmentation of skeletal precursors occurred normally. Although Gdf5-Cre -mediated recombination was seen as early as E13.5 in digit interzone regions (see Figure 2 C), no changes in cell death or cell proliferation could be seen in the metacarpal-phalangeal or metatarsal-phalangeal joints at E13.5 or E14.5 (unpublished data). Similarly, although clear LACZ expression was seen by E15.5 in interphalangeal joints and periarticular regions ( Figure 4 D), no difference in morphology or expression of Col2a1, Gdf5, or Bmpr1b was seen in the articular regions of the phalanges at these stages (unpublished data). At birth, digit joints were generally indistinguishable from those in control animals; chondrocytes were abundant in articular regions and were surrounded by typical cartilage matrix with normal staining by Safranin O, a histological stain for proteoglycans ( Figure 5 ). At this stage, both wild-type and mutant cells in articular regions also expressed high levels of Col2a1 and Aggrecan (Agg), the genes encoding the major structural proteins of cartilage matrix ( Figure 5 B and 5 G) (unpublished data). No alterations in cellular apoptosis or proliferation were observed (unpublished data).

Figure 5 Bmpr1a Is Required to Maintain Expression of ECM Components in Articular Cartilage. In situ hybridization or LACZ staining on near adjacent sections of metacarpal-phalangeal joints (A–C and F–H) and the tarsal 2-metatarsal II joint (D–E and I–J) of P0 mice. At birth, articular cartilage of controls (A–E) and mutants (F–J) appears similar by Safranin O staining (A and F) and Col2 expression (B and G). Mat4 expression confirms that articular cartilage is initially specified in mutants (D and I, brackets). LACZ expression confirms Cre-mediated recombination has occurred in articular cartilage (C, H, E, and J). (K–T) Near adjacent sections of the metacarpal-phalangeal joints of P14 mice. Two weeks after birth, articular cartilage of controls stains with pericellular Safranin O (orange staining, K), and expresses Col2 (L), Agg (M), and SOX9 (N). In contrast, mutant articular cells are smaller and more densely packed, lack pericellular Safranin O staining (P), have reduced expression of Col2 (Q) and Agg (R), but retain normal levels of SOX9 protein (S, brackets; dashed line marks faint edges of articular surfaces). LACZ expression confirms Cre-mediated recombination has occurred in articular cells (O and T, brackets).
(A and K) Scale bar = 75 μm.

To determine whether articular cells were properly specified in mutants, we also analyzed expression of Matrilin-4 (Mat4), a gene expressed specifically in the periarticular and perichondral regions of developing joints ( Klatt et al. 2001 ). In both control and mutant animals, transcription of Mat4 was clearly detectable in the articular cartilage layers of newborn joints ( Figure 5 D and 5 I). In all experiments, expression of LACZ indicated that Cre-mediated recombination had occurred throughout the articular regions ( Figure 5 C, 5 H, 5 E, and 5 J). The normal histological appearance, staining properties, and marker gene expression patterns suggest that Bmpr1a is not required for the initial formation or specification of articular cartilage. By 1 wk after birth, obvious differences began to be detected in the articular regions of mutant animals. The expression of Col2a1 was reduced throughout the articular surfaces of the carpals, metacarpals, and phalanges of the forefeet (unpublished data). Less severe reductions were also seen in articular cells of tarsals and metatarsals in the hindfeet (unpublished data). By 2 wk of age, Col2a1 expression was reduced in most cells of the articular region ( Figure 5 L and 5 Q), accompanied by markedly reduced Safranin O staining ( Figure 5 K and 5 P), and decreased expression of Agg and two genes normally expressed in more mature articular cartilage cells, Collagen 3 (Col3a1) and Collagen 10 (Col10a1) ( Figure 5 M and 5 R) (unpublished data) ( Eyre 2002 ). Inhibition of BMP signaling in cultured chondrocytes has previously been reported to induce Collagen 1 (Col1a1) expression, increase proliferation, and result in cells with flattened, fibroblast-like morphology ( Enomoto-Iwamoto et al. 1998 ). However, we saw no increase in the expression of Col1a1 in mutant articular cartilage, and no proliferation was detected in articular cells of either mutant or control animals (unpublished data). While recombined LACZ marker expression was detected in most articular cartilage cells, it was also observed in scattered subarticular chondrocytes, growth plate chondrocytes, and osteoblasts ( Figure 5 O and 5 T) (unpublished data). Although this implies that BMP signaling was defective in multiple cell types, the observed defects were confined to the articular cartilage. For example, Osteocalcin and Col1a1 expression appeared normal in osteoblasts (unpublished data). Together, these data suggest that BMPR1A activity is required in postnatal joint articular cartilage to maintain expression of many genes encoding structural components of cartilage matrix. Previous studies have shown that Sox9 is required for normal cartilage differentiation and for expression of cartilage extracellular matrix (ECM) genes including Agg , and is a direct transcriptional regulator of the key cartilage matrix gene Col2a1 ( Bell et al. 1997 ; Lefebvre et al. 1997 ; Bi et al. 1999 ; Sekiya et al. 2000 ). Notably, despite reduced expression of many cartilage matrix marker genes in Bmpr1a mutant mice, the SOX9 protein was present at normal levels in articular regions at all stages examined, including newborn, 2-wk-old, 7-wk-old, and 9-mo-old mice ( Figure 5 N and 5 S) (unpublished data).

Synovial Hypertrophy, Cartilage Erosion, and Accelerated Cartilage Maturation

Conditional loss of Bmpr1a led to marked hypertrophy of the synovial membrane in the joint capsule of some joints, particularly in the ankle region.
In the most severely affected joints, the expanded synovial membrane grew into the joint space and was associated with obvious loss or erosion of the articular cartilage ( Figure 6 A and 6 B, asterisks, arrows). Accelerated cartilage maturation and increased expression of Col10a1 was frequently seen in the chondrocytes underlying the articular erosions ( Figure 6 C and 6 D, brackets) (unpublished data). Interestingly, the regions of increased Col10a1 expression did not correspond to the regions that had undergone Cre-mediated recombination. Instead, increased expression of Col10a1 was seen in a zone of largely LACZ-negative cells stretching from the cartilage adjacent to the ossification front (where Col10a1 is normally expressed in maturing cartilage cells), toward the regions where surface articular cartilage was severely eroded or missing ( Figure 6 A and 6 B, arrowheads). Previous studies suggest that parathyroid hormone-related protein, a diffusible signal made in the articular surface, may normally inhibit maturation of underlying cartilage ( Vortkamp et al. 1996 ; Weir et al. 1996 ). Local loss of the articular surface could remove this inhibition and lead to a cell-nonautonomous acceleration of maturation in chondrocytes underlying points of articular erosion.

Figure 6 Synovial Membrane Expansion, Articular Surface Erosion, and Accelerated Maturation of Underlying Cartilage in Ankles of Bmpr1a Mutant Mice. Near adjacent sections from the tarsal 2-metatarsal II joint of 7-d-old mice. (A and B) LACZ staining (blue) shows Cre-mediated recombination is largely restricted to articular (arrowheads) and synovial cells (asterisks) in both controls and mutants. (C and D) In situ hybridization shows Col10 expression expands in mutants toward regions of synovial membrane expansion and articular surface erosion (brackets and arrows). This may be a cell-nonautonomous effect of joint damage, since the LACZ-expressing cells at the articular surface do not show upregulation of Col10 (arrowheads) and the region of expanded Col10 expression is largely made up of cells that have not undergone Cre-mediated recombination. Note the formation of a cartilaginous bridge along the joint capsule of the mutant where joint formation is disrupted at earlier stages (B, white arrowhead, and Figure 3 , white arrowheads). (A) Scale bar = 75 μm.

This synovial hypertrophy is associated with increased numbers of mononuclear cells resembling synoviocytes or macrophages, cell types that are difficult to distinguish even with surface markers at early postnatal stages. However, no neutrophils were observed, suggesting that there is little inflammation. At later stages synovial hypertrophy is reduced. Further work will be needed to determine whether synovial development is regulated by BMP signaling, or whether the synovium becomes enlarged as a response to nearby skeletal malformations (such as fusion of the second and central tarsals or defects in the articular cartilage).

Noninflammatory Degeneration of Articular Cartilage in Digit and Knee Joints

Outside of the ankle region, little or no evidence was seen for expansion of the synovial membrane. Instead, mutant mice showed histological signs of osteoarthritis, such as fibrillation of the articular surface ( Figure 7 ).
As previously seen in 1- and 2-wk-old animals, Safranin O staining and Agg and Col10 expression were all reduced in mutant articular regions of the forefeet and hindfeet by 7 wk of age, and the beginning signs of cartilage loss were observed (unpublished data). By 9 mo of age, many regions of articular cartilage were completely missing or extremely fibrillated, leaving regions of exposed bone on the surface ( Figure 7 A– 7 D). No alterations were seen in the expression of Osteocalcin, Col1a1, or matrix metalloprotease-13 at either 7 wk or 9 mo.

Figure 7 Loss of Bmpr1a Signaling Leads to Articular Cartilage Fibrillation and Degeneration in Digits and Knees of Aging Mice. (A–D) Near adjacent sections of metatarsal-phalangeal joints from 9-mo-old mice. Articular cartilage of controls is complete and stains strongly with Safranin O (A, orange stain). In contrast, articular cells of mutants are severely fibrillated or absent, with much reduced staining of Safranin O (C, arrowheads). LACZ expression confirms Cre-mediated recombination has occurred in articular cells (B and D). (E–P) Sagittal sections through knee joints of 7-wk- (E–J) or 9-mo-old animals (K–P); fe, femur; ti, tibia; gp, growth plate. Seven weeks after birth, the height of the tibial epiphysis is reduced in mutants (E and H, bars), and their articular layer stains poorly with Safranin O, is fibrillated, and is strikingly thinner (F and I, black arrowhead, and brackets). Near adjacent sections with LACZ staining confirm Cre-mediated recombination has occurred in articular cells (G and J). Note that in mutants, LACZ is absent in cells adjacent to those that do stain with Safranin O, suggesting Bmpr1a may act cell autonomously (I and J, white arrowheads). At 9 mo, the mutant tibial epiphysis is extremely thin (K and N, bars), and the articular layer is completely absent, leaving bone to rub directly on bone (L and O, bracket). LACZ staining shows Cre-mediated recombination occurred in articular cells of controls (M) and in some remaining skeletal tissue of mutants (P). Also note aberrantly formed meniscal cartilage in mutants (E, H, K, and N, arrows), and increased sclerosis in mutant epiphyses (E, H, K, and N, asterisks). (A and K) Scale bar = 50 μm; (I) scale bar = 300 μm.

The major weight-bearing joint of the hindlimb, the knee, showed changes that closely paralleled those seen in the foot joints. All markers of cartilage matrix looked similar to controls at E16.5, suggesting that early stages of joint formation were not disrupted (unpublished data). By postnatal day 7, Safranin O staining and Col2a1 and Agg expression were clearly reduced in the mutant, despite continued expression of Sox9 (unpublished data). The overall shape of mutant knee skeletal elements appeared similar to controls, although the fibrocartilaginous meniscus that resides between the femur and tibia appeared much less dense in mutants at E16.5. Some cartilage formed in the meniscus region, but the size of these elements was greatly reduced and contained abundant cells with a fibrous, noncartilaginous appearance (unpublished data). This reduction of the meniscus can also be seen in sections from 7-wk- and 9-mo-old animals ( Figure 7 E, 7 H, 7 K, and 7 N, arrows). At 7 wk of age the normally domed tibial epiphysis was flattened and depressed in the knees of mutant animals, markedly reducing the distance between the growth plate and articular surface ( Figure 7 E and 7 H, vertical bar).
Articular cartilage was also thinner than in control animals, showed nearly complete absence of Safranin O staining, and was either acellular or beginning to fibrillate in many regions ( Figure 7 F and 7 I). The few large Safranin O-stained cells still apparent in mutant articular regions appeared to correspond in position to rare LACZ-negative cells in adjacent sections, suggesting that Bmpr1a is required cell-autonomously in articular cartilage ( Figure 7 I and 7 J, white arrowheads). By 9 mo, large areas of mutant knees were devoid of articular cells, and the bones of the femur and tibia appeared to rub directly against each other. Furthermore, the epiphysis of the tibia was extremely depressed, to the point that growth plate cartilage was almost exposed through the surface of the bone ( Figure 7 K, 7 L, 7 N, and 7 O). In addition, mutants at 7 wk and 9 mo showed subchondral sclerosis, especially in the epiphysis of the femur ( Figure 7 E, 7 H, 7 K, and 7 N, asterisks). While subchondral sclerosis is commonly seen in cases of osteoarthritis, it is unclear in this case whether the sclerosis is mainly a response of bone formation to compensate for decreased articular cartilage, or whether it is the effect of loss of Bmpr1a signaling in some LACZ-positive cells that are also observed in these regions (unpublished data). The histological signs of joint arthritis were accompanied by functional impairments in both grasping ability and range of motion in mutant animals. Gdf5-Cre/Bmpr1a floxP mutant animals showed a highly significant reduction in their ability to grasp and remain suspended on a slender rod (mean suspension time: controls 38 ± 6 s, n = 39; mutants 6 ± 3 s, n = 11; p < 0.0001). Mutant mice also showed a clear decrease in the maximum range of mobility of two different joints in the digits, as assayed by passive manipulation (MT/P1 joint: controls 100 ± 0°, n = 26; mutants 82 ± 3°, n = 8; p < 0.0003; P1/P2 joint: controls 152 ± 1°, n = 23; mutants 140 ± 5°, n = 6; p < 0.05). The structural, histological, marker gene expression, and functional changes in mutant mice demonstrate that BMPR1A is required for normal postnatal maintenance of articular cartilage.

Discussion

Previous studies suggest that BMP signaling is involved in a large number of developmental events. Many of these events occur early in embryogenesis, and complete inactivation of BMP receptors causes death by E9.5 ( Mishina et al. 1995 ). The Gdf5-Cre recombination system bypasses the early embryonic lethality of Bmpr1a mutations, and provides important new information about the role of this receptor in limb and skeletal development. The three major limb phenotypes revealed by eliminating Bmpr1a with Gdf5 -driven Cre include webbing between digits, lack of joint formation at specific locations in the ankle, and failure to maintain articular cartilage after birth, resulting in severe arthritis. Previous studies have shown that manipulation of BMP signaling alters interdigital apoptosis during development of the limb, but no experiment has identified a specific member of the BMP signaling pathway that is required for this process ( Yokouchi et al. 1996 ; Zou and Niswander 1996 ; Zou et al. 1997 ; Guha et al. 2002 ). Our new loss-of-function data confirm that BMP signaling is required for interdigital apoptosis and suggest that Bmpr1a is a critical component for mediating this signal.
At some sites, loss of Bmpr1a function leads to a defect in the early stages of joint formation, resulting in a complete failure to form a joint and fusion of bones in the ankle. Mutations in two different ligands in the BMP family, Gdf5 and Gdf6, in the Bmpr1b receptor, and in the human Noggin locus ( Storm and Kingsley 1996 ; Gong et al. 1999 ; Baur et al. 2000 ; Yi et al. 2000 ; Settle et al. 2003 ) also produce defects in joint formation at specific locations in the limbs. The joint defects associated with multiple components of the BMP pathway provide strong evidence that BMP signaling is required for early stages of joint formation at some anatomical locations. Most joints still form normally when Bmpr1a is knocked out in Gdf5 expression domains. The lack of joint fusions outside the ankle region could be due to differences in requirement for BMP signaling in different joints, to compensating expression of other BMP receptors outside the ankles, or to differences in the detailed timing of Gdf5-Cre stimulated gene inactivation in ankles and other joint regions. Comparison of the expression of the HPLAP marker (driven directly by Gdf5 control elements) and the R26R LACZ marker (expressed following Gdf5-Cre recombination) suggests that recombination-stimulated changes in gene expression may be delayed for 0.5–1 d in the digit region (see Figure 1 C). In addition, levels of Bmpr1a mRNA and protein may persist for some time following Gdf5-Cre stimulated recombination, making it possible to bypass an early requirement for Bmpr1a in joint formation at some locations. Following the decay of Bmpr1a mRNA and protein, the Gdf5-Cre strategy should result in permanent inactivation of Bmpr1a function in recombined cells. This system thus provides one of the first strong genetic tests of Bmpr1a function at later stages of joint development. Despite the normal appearance of articular regions and gene expression immediately after birth, Bmpr1a -deficient animals are unable to maintain the normal differentiated state of articular cartilage as they continue to develop and age. These results suggest that BMP receptor signaling is essential for continued health and integrity of articular cartilage in the postnatal period. Articular cartilage is a key component of synovial joints and is one of the few regions in the skeleton where cartilage is maintained into adulthood. Despite the importance of articular cartilage in joint health and mobility, little is known about the factors that create and maintain it in thin layers at the ends of long bones. In our experiments, articular cartilage lacking Bmpr1a retains some normal characteristics, in that it maintains a very low proliferation rate, does not express Col1a1, and continues to express SOX9, a major transcription factor regulating expression of structural components of cartilage matrix. However, several of the most prominent structural components of cartilage matrix fail to be maintained in mutant animals, resulting in decreased synthesis of Col2a1, Agg, and proteoglycans. Therefore, BMPR1A appears to maintain articular cartilage primarily through inducing expression of key ECM components. It is interesting that the SOX9 transcription factor continues to be expressed in mutant cartilage despite loss of Col2a1, a direct target of this transcription factor ( Bell et al. 1997 ; Lefebvre et al. 1997 ).
Previous studies suggest that SOX9 activity can be modified by protein kinase A (PKA)-dependent protein phosphorylation, or by coexpression of two related proteins, L-SOX5 and SOX6 ( Lefebvre et al. 1998 ; Huang et al. 2000 ). In addition, close examination of the order of genes induced during chicken digit formation reveals that Sox9 turns on first, followed by Bmpr1b with L-Sox5, and then Sox6 and the cartilage matrix structural components Col2a1 and Agg ( Chimal-Monroy et al. 2003 ). These results, together with the altered pattern of gene expression seen in our Bmpr1a -deficient mice, suggest that BMPR1A signaling may normally act to stimulate SOX9 by post-translational protein modification, or to induce L-Sox5 or Sox6 in cartilage to maintain expression of ECM components. These models are consistent with the ability of BMP2 to both increase PKA activity and induce expression of Sox6 in tissue culture cells ( Lee and Chuong 1997 ; Fernandez-Lloris et al. 2003 ). Although we have tried to monitor the expression of L-Sox5 or Sox6 in postnatal articular cartilage, and test the phosphorylation state of SOX9 using previously described reagents ( Lefebvre et al. 1998 ; Huang et al. 2000 ), we have been unable to obtain specific signal at the late postnatal stages required (unpublished data). Furthermore, null mutations in L-Sox5 or Sox6 cause lethality at or soon after birth, and no effect on cartilage maintenance has been reported ( Smits et al. 2001 ). However, it seems likely that these or other processes regulated by BMP signaling cooperate with SOX9 to induce target genes in articular cartilage. Mutation of Smad3 or expression of dominant-negative transforming growth factor β (TGF-β) type II receptor also disrupts normal articular cartilage maintenance ( Serra et al. 1997 ; Yang et al. 2001 ). Both manipulations should disrupt TGFβ rather than BMP signaling, and both manipulations cause articular cartilage to hypertrophy and be replaced by bone. In contrast, our analysis of Bmpr1a mutant articular cartilage showed a loss of ECM components, but no signs of hypertrophy or bone replacement. Therefore, TGFβ and BMP signaling play distinct but necessary roles in maintaining articular cartilage. Although BMPs were originally isolated on the basis of their ability to induce ectopic bone formation, their presence in articular cartilage and strong effect on cartilage formation has stimulated interest in using them to repair or regenerate cartilage defects in adult animals ( Chang et al. 1994 ; Erlacher et al. 1998 ; Edwards and Francis-West 2001 ; Chubinskaya and Kuettner 2003 ). The failure to maintain articular cartilage in the absence of normal BMPR1A function suggests that ligands or small molecule agonists that interact specifically with this receptor subtype may be particularly good candidates for designing new approaches to maintain or heal articular cartilage at postnatal stages. Lack of Bmpr1a function in articular cartilage results in severe fibrillation of the articular surface and loss of joint mobility. The development of severe arthritis symptoms in Bmpr1a -deficient mice raises the possibility that defects in BMP signaling also contribute to human joint disease. Osteoarthritis is known to have a significant genetic component, but it likely involves multiple genetic factors that have been difficult to identify ( Spector et al. 1996 ; Felson et al. 1998 ; Hirsch et al. 1998 ).
Humans who are heterozygous for loss-of-function mutations in BMPR1A are known to be at risk for juvenile polyposis ( Howe et al. 2001 ; Zhou et al. 2001 ), but the risk of osteoarthritis for these people has not been reported. However, the control mice used in this study were heterozygous for a null allele of Bmpr1a, and they showed little sign of osteoarthritis even late in life. Several chromosome regions have been previously linked to arthritis phenotypes in humans using either association studies in populations or linkage studies in families. It is interesting to note that several of these chromosome regions contain genes encoding different members of the BMP signaling pathway, including the BMP5 gene on human chromosome 6p12 ( Loughlin et al. 2002 ), the MADH1 gene on human chromosome 4q26–4q31 ( Leppavuori et al. 1999 ; Kent et al. 2002 ), and the BMPR2 receptor on human chromosome 2q33 ( Wright et al. 1996 ). The complex nature of human osteoarthritis suggests that interactions between multiple genes may be involved in modifying susceptibility to the disease. The inclusion of genetic markers near BMP signaling components may help identify additional osteoarthritis susceptibility loci and facilitate the search for causative mutations. Development and disease processes in synovial joints have been difficult to study genetically, because synovial joints are generated and function at relatively late stages of vertebrate development. The Gdf5-Cre system provides a new method for restricting gene expression or inactivation primarily to articular regions, thus avoiding the pleiotropic functions of many genes in other tissues. Depending on the configuration of the floxed target gene, this system can be used either to activate the expression of a gene primarily in developing joints (see Figure 1 B– 1 D) or to inactivate gene function in articular regions (see Figure 3 ). Additional studies with this system should greatly enhance our knowledge of the development, function, and disease mechanisms of joints, and may bring us closer to better prevention and treatment of joint diseases. Materials and Methods Generation of Gdf5-Cre transgenic mice A mouse 129x1/SvJ BAC library (Invitrogen) was screened to identify a 140-kb BAC from the Gdf5 locus. This BAC was modified using a homologous recombination system in E. coli ( Yang et al. 1997 ) to place nuclear-localized Cre recombinase (from plasmid pML78, gift of Gail Martin) followed by IRES- hPLAP (from plasmid 1726, gift of Oliver Bogler) directly behind the ATG start site of Gdf5 . In the process, 583 bp of the first exon of Gdf5 was removed, and no functional GDF5 protein is predicted to be produced. The 5′ homology arm was subcloned from a PCR product tailed with XhoI and Bsp120I restriction sites that contains 781 bp of 5′ genomic Gdf5 sequence ending at the ATG translation start site (forward primer 5′-CTGTCTCGAGATGAGGTGGAGGTGAAGACCCC-3′; reverse 5′-GTTTGGGCCCATCCTCTGGCCAGCCGCTG-3′). Cre was subcloned from a 1.1-kb Bsp120I/EcoRI fragment of pML78. IRES hPLAP was subcloned from a 2.1-kb PCR product tailed with EcoRI and SpeI sites that contains the hPLAP translation stop site (forward primer 5′-ATCTCTCGAGGAATTCTCCACCATATTGCCGTCTTTTG-3′; reverse 5′-AGAACTCGAGACTAGTCGGGACACTCAGGGAGTAGTGG-3′). The 3′ homology arm was subcloned from a 0.8-kb PCR product amplified from a 0.9-kb XhoI Gdf5 genomic subclone containing part of the first exon and downstream intron. 
The forward primer contains the 3′ end of the first exon and is tailed with a SpeI site; the reverse primer is from the T7 promoter of the vector containing the 0.9-kb subclone and flanks the intronic XhoI site (forward primer 5′-CTAAACTAGTCACCAGCTTTATTGACAAAGG-3′; reverse 5′-GATTTCTAGAGTAATACGACTCACTATAGGGC-3′). The targeting construct was built and verified in pBSSK (Stratagene, La Jolla, California, United States), then digested with XhoI and subcloned into pSV1, the vector used for homologous recombination ( Yang et al. 1997 ). Southern blotting, PCR, and DNA sequence analysis confirmed that the appropriate targeting construct and BAC modifications had been made (unpublished data). Before the modified BAC was injected to produce transgenic animals, a loxP site present in the BAC vector, pBeloBAC11, was removed to prevent the addition of undesired Cre target sites into the genome. To do this, BAC DNA was prepared by CsCl separation, digested with NotI to free the insert from the vector, and size-fractionated over a sucrose gradient. Aliquots of fractions were run on a pulse-field gel and Southern blotted using vector-specific DNA as a probe. Fractions containing unsheared insert and almost no detectable vector DNA were dialyzed in microinjection buffer (10 mM Tris [pH 7.4] with 0.15 mM EDTA [pH 8.0]) using Centriprep-30 concentrators (Millipore, Billerica, Massachusetts, United States). This purified insert DNA was adjusted to 1 ng/μl and injected into the pronucleus of fertilized eggs from FVB/N mice by the Stanford Transgenic Facility. Transgenic founder mice were identified by PCR using Cre -specific primers 5′-GCCTGCATTACCGGTCGATGCAACGA-3′ and 5′-GTGGCAGATGGCGCGGCAACACCATT-3′, which amplify a 725-bp product, and were assessed for absence of BAC vector using vector-specific primers 5′-CGGAGTCTGATGCGGTTGCGATG-3′ and 5′-AGTGCTGTTCCCTGGTGCTTCCTC-3′, which amplify a 465-bp product. Three lines of Gdf5-Cre mice were established and maintained on the FVB background. Matings with R26R Cre-inducible LACZ reporter mice ( Soriano 1999 ) were used to test for Cre activity. Staining for LACZ and HPLAP on whole embryos or sections of embryos was accomplished following established protocols ( Lobe et al. 1999 ). The red LACZ substrate (see Figure 1 E) is 6-chloro-3-indoxyl-beta-D-galactopyranoside (Biosynth International, Naperville, Illinois, United States). General characterization of Bmpr1a mutant mice Bmpr1a null and floxed alleles ( Ahn et al. 2001 ; Mishina et al. 2002 ) were obtained on a mixed 129 and C57BL/6 background and maintained by random breeding. Mice carrying the null and floxed alleles were typically mated to Gdf5-Cre mice as shown in Figure 3 . The resulting mice are on a mixed 129; C57Bl/6; FVB/N background, with both controls and mutant animals generated as littermates from the same matings. Whole-mount skeletal preparations were made from 34- to 36-d-old mice ( Lufkin et al. 1992 ). Pairs of ears from euthanized 6-mo-old animals were removed, pinned, photographed, projected, and measured from the base of the curve formed between the tragus and antitragus to the farthest point at the edge of the pinnae. Grasping ability in 6-mo-old mice was measured by placing animals on a slender rod and timing how long they could remain suspended on the rod, to a maximum time allowed of 2 min. Data from five consecutive trials for each mouse were averaged. Range of motion assays were conducted on the MT/P1 and P1/P2 joints of the second hindlimb digit from euthanized 18-wk-old animals. Forceps were used to bend the joint to its natural stopping position, and the resulting angle was measured to the nearest 10° under 12.5× magnification using a 360° reticule. 
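A minimal sketch of the founder-screening decision described above: animals are kept when the Cre-specific PCR (725-bp product) is positive and the BAC-vector-specific PCR (465-bp product) is negative. The function name and boolean inputs are ours for illustration; the actual screening was read from PCR gels, not code.

```python
# Hypothetical sketch of the founder screening call: Cre PCR (725-bp product)
# must be positive and the BAC-vector PCR (465-bp product) must be negative.
def classify_founder(cre_pcr_positive, vector_pcr_positive):
    if cre_pcr_positive and not vector_pcr_positive:
        return "transgenic, vector-free"
    if cre_pcr_positive:
        return "transgenic, carries vector sequence"
    return "non-transgenic"

print(classify_founder(True, False))   # -> transgenic, vector-free
```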
The analyses described in this section were performed on animals lacking R26R . Control mice included all nonmutant genotypes generated by Parent 1 being heterozygous for Gdf5-Cre and Bmpr1a null and Parent 2 being heterozygous for Bmpr1a floxP (see Figure 3 ). All statistical analyses used Student's t-test or Welch's t-test, and values listed are mean ± standard error of the mean. Cell death and proliferation assays Limbs from mutant and control animals at E13.5 and E14.5 were dissected and frozen in OCT (Sakura Finetek, Torrance, California, United States). Cryosections of tissue were assayed by TUNEL using the In Situ Cell Death Detection Kit, Fluorescein (Roche, Basel, Switzerland). Following TUNEL, slides were washed in PBS, blocked with PBS + 0.05% Tween-20 + 5% goat serum, washed again, and incubated with a 1:200 dilution of a rabbit anti-phospho-histone-H3 antibody called Mitosis Marker (Upstate Biotechnology, Lake Placid, New York, United States) to identify cells in mitosis. Cy3-labeled anti-rabbit secondary antibody was used to detect the antibody. Cell nuclei were labeled with DAPI, and slides were mounted in Vectamount (Vector Laboratories, Burlingame, California, United States) and visualized at 100× magnification. The areas of selected anatomical sites were measured, and the number of TUNEL-labeled nuclear fragments and the number of Cy3-labeled nuclei were counted from three 10-μm sections spanning 50 μm, from three control and three mutant animals. The number of labeled cells in the metacarpal-phalangeal and metatarsal-phalangeal joints was counted in a 290 μm × 365 μm rectangle placed around the center of the joint. The posterior region of the fifth digit was defined by drawing a line from the tip of the digit down 2.15 mm and across to the lateral edge of the tissue. For this analysis, the R26R Cre reporter was not present. Histology and histochemistry Tissue from animals ranging from stages E14.5 to P14 was prepared for analysis by fixing in 4% paraformaldehyde (PFA) in PBS for 45 min to 4 h depending on the stage; washing three times in PBS, once in PBS + 15% sucrose for 1 h, and once in PBS + 30% sucrose for 2 h to overnight depending on the stage; and then freezing in OCT. Tissue from animals aged 7 wk to 9 mo was processed similarly to earlier stages except that it was decalcified in 0.5 M EDTA (pH 7.4) for 4 d prior to incubating in sucrose. All solutions were prechilled and used at 4 °C with agitation, and skin from tissues of P0 or older mice was lacerated or removed prior to processing. Tissue was then cryosectioned at 12 μm and processed. Staining of sections with Safranin O, Fast Green, and Harris' hematoxylin was carried out using standard histological procedures. Detection of LACZ activity with X-Gal was performed as described ( Lobe et al. 1999 ) and was followed by refixing in 4% PFA, rinsing with deionized water, counterstaining with Nuclear Fast Red (Vector Labs), rinsing with water again, and then mounting in Aquamount (Lerner Labs, Pittsburgh, Pennsylvania, United States). 
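A minimal sketch of the group comparisons described in the statistics paragraph above (Student's or Welch's t-test, values reported as mean ± standard error of the mean), using SciPy; the measurements below are hypothetical placeholders, not data from this paper.

```python
import numpy as np
from scipy import stats

control = np.array([118.0, 112.0, 125.0, 121.0, 119.0])  # hypothetical values
mutant = np.array([96.0, 101.0, 89.0, 94.0, 98.0])

# equal_var=True gives Student's t-test; equal_var=False gives Welch's t-test,
# which drops the equal-variance assumption.
t_stat, p_val = stats.ttest_ind(control, mutant, equal_var=False)

for name, group in (("control", control), ("mutant", mutant)):
    print(f"{name}: {group.mean():.1f} ± {stats.sem(group):.1f}")  # mean ± SEM
print(f"Welch's t = {t_stat:.2f}, p = {p_val:.4g}")
```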
RNA in situ hybridization was performed as described ( Storm and Kingsley 1996 ), with the following modifications: (1) Prior to the acetylation step, sections were incubated with 10–20 μg/ml proteinase K for 30 s to 7 min at room temperature (depending on the developmental stage), followed by refixing in 4% PFA and washing three times in PBS; (2) the prehybridization step was skipped; and (3) embryonic tissue sections used a different color development mix ( Thut et al. 2001 ). Probes for the following genes have been published previously: Bmpr1a ( Mishina et al. 1995 ), Col2a1 ( Metsaranta et al. 1991 ), Col10a1 ( Apte et al. 1992 ), Gdf5 ( Storm and Kingsley 1996 ), Osteocalcin ( Celeste et al. 1986 ), and Sox5 and Sox6 ( Lefebvre et al. 1998 ). The following probe templates were gifts: Agg , Dr. Vicki Rosen, Genetics Institute; Bmp2 and Bmp4 , Arend Sidow, Stanford University; Col1a1 , Bjorn Olsen, Harvard Medical School; Bmpr1b, Col3a1, and Mat4 probes were made from ESTs with IMAGE clone numbers 5056341, 478480, and 406027, respectively (Invitrogen, Carlsbad, California, United States). Sections for immunohistochemistry were fixed in 4% PFA, then digested with 942–2,000 U/ml type IV-S bovine hyaluronidase (Sigma, St. Louis, Missouri, United States) in PBS (pH 5) at 37 °C for 30 min to 2 h depending on the stage. Slides were then washed in PBS, treated with 0.3% hydrogen peroxide in 100% methanol for 30 min, washed, blocked with PBS + 0.05% Tween 20 + 5% goat or fetal bovine serum, washed again, and incubated with primary antibodies in PBS + 0.05% Tween 20 + 1% goat or fetal bovine serum overnight at 4 °C. Biotin-labeled secondary antibodies (Vector Labs) were tagged with HRP using the Vectastain Elite ABC kit (Vector Labs) followed by detection with DAB (Vector Labs). Primary antibodies and dilutions used were: goat anti-mouse MMP13, 1:100 (Chemicon International, Temecula, California, United States); rabbit anti-human SOX9, 1:500 ( Morais da Silva et al. 1996 ); rabbit anti-phosphorylated-SOX9 (SOX9.P), 1:10–1:250 ( Huang et al. 2000 ). Supporting Information Accession Numbers GenBank ( http://www.ncbi.nih.gov/Genbank/ ) accession numbers for the genes discussed in this paper are Gdf5 (AC084323) and Bmpr1a (NM_009758). | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC523229.xml |
534797 | Evaluation of a communication skills seminar for students in a Japanese medical school: a non-randomized controlled study | Background Few data exist on the effectiveness of communication skills teaching for medical students in non-English speaking countries. We conducted a non-randomized controlled study to examine if a short intensive seminar for Japanese medical students had any impact on communication skills with patients. Methods Throughout the academic year 2001–2002, a total of 105 fifth-year students (18 groups of 5 to 7 students) participated, one group at a time, in a two-day, small group seminar on medical interviewing. Halfway through the year, a five-station objective structured clinical examination (OSCE) was conducted for all fifth-year students. We videotaped all the students' interactions with a standardized patient in one OSCE station that was focused on communication skills. Two independent observers rated the videotapes of 50 students who had attended the seminar and 47 who had not. Sixteen core communication skills were measured. Disagreements between raters were resolved by a third observer's rating. Results There was a statistically significant difference in proportions of students who were judged as 'acceptable' in one particular skill related to understanding patient's perspectives: asking how the illness or problems affected the patient's life (53% in the experimental group and 30% in the control group, p = .02). No differences were observed in the other 15 core communication skills, although there was a trend for improvement in the skill for asking the patient's ideas about the illness or problems (60% vs. 40%, p = .054) and one of the relationship building skills: being attentive and empathic nonverbally (87% vs. 72%, p = .064). Conclusion The results of this study suggest that a short, intensive small group seminar for Japanese medical students may have had a short-term impact on specific communication skills, pertaining to understanding patient's perspectives. | Background The literature from English-speaking countries indicates that teaching communication skills is effective in improving learners' communication skills with patients [ 1 ]. However, the evidence from non-English speaking countries is sparse [ 1 ]. In addition, the conceptual frameworks for communication skills teaching are based on research evidence from English-speaking countries [ 2 ]. There is an ongoing debate about whether the principles and methods for teaching communication skills developed in English-speaking countries could be applied to other places with different languages and cultures [ 2 - 4 ]. Teaching communication skills is gaining popularity and proliferating for Japanese health professional students [ 5 ]. Yoshida et al. conducted a controlled study to examine the effects of such training with 16 Japanese dental students and had a positive result [ 6 ]. A few reports have been published on Japanese medical students [ 7 - 9 ]. However, to the best of our knowledge, no controlled studies for communication skills teaching have been conducted for that population. In many traditional medical schools in Japan, communication skills teaching is limited in time and scope, and isolated from other formal curricula. Thus it is important to know whether this type of training makes a difference, at least in the short run. 
This should also be of interest to educators elsewhere who similarly work in settings where there is not enough formal curricular time for communication skills teaching. The objective of this study was to evaluate the impact of a short, intensive small group seminar, which was based on Western educational principles, on Japanese medical students' communication skills with patients. Methods Participants Medical schools in Japan last six years, with the last two years consisting of clerkships. Before the fifth year, Japanese students typically have few direct interactions with patients. Throughout the academic year 2001–2002, a total of 105 fifth-year students from the Nagoya University School of Medicine rotated through the various clinical services of the Nagoya University Hospital. Students divided themselves into 18 groups of 5 to 7, but the sequential order of rotations was set by the medical school officials. Educational intervention As part of a 1-week clerkship rotation at the Department of General Medicine, students participated in a two-day, small group seminar on the medical interview and communication skills. Typically, either or both of two of the authors (KM, NB) facilitated the seminar. Both facilitators had experience in learning and teaching the medical interview and communication skills in the United States. The seminar utilized learner-centered, skills-oriented, experiential, and interactive learning methods. To guide the teaching of communication skills, we created a conceptual model for patient-physician communication referring to 3 existing models [ 2 , 10 , 11 ]. Although our main teaching focus was on communication process skills, we also addressed the content aspects of the medical interview (e.g., discussion of differential diagnosis). The learning activities during the seminar are summarized in Figure 1 . Figure 1 Learning activities during a two-day seminar on medical interviewing and communication skills. Outcome measures In September 2001, halfway through the academic year, a five-station objective structured clinical examination (OSCE) was conducted for all fifth-year students. The primary purpose of the OSCE was to provide trainees with the opportunity to receive feedback on their clinical skills from the faculty in a safe and structured environment. One OSCE station focused on the medical interview. Students engaged in a 5-minute interaction with a standardized patient presenting with cough. A total of 10 fourth-year students were trained to serve as standardized patients in a series of 3 small group sessions, each lasting 60 minutes [ 12 ]. During the interview, the fifth-year students were observed by faculty and evaluated for both station-specific and general communication skills on a pre-defined rating scale. The faculty gave students 3 minutes of feedback immediately after the encounter. Standardized patients did not give feedback. All interactions were videotaped and subsequently reviewed by faculty members to provide students with additional written feedback. Placed at the midpoint of the academic year, the OSCE provided us with the opportunity to evaluate the short-term effectiveness of the small group seminar on students' communication skills. We reviewed the videotapes of 52 students who had attended the seminar (the experimental group) and 53 students who had not at the time of the OSCE (the control group). The group assignment was based on the sequential order of clinical rotations, arbitrarily set by the medical school officials. 
The time intervals between the seminar and the OSCE ranged from 1 week to 5 months. Students were asked to provide informed consent using a form that had been approved by the Institutional Review Board at the Nagoya University Hospital. The interview rating form was created by one of the authors (KM) and includes 16 essential communication skills items. They are grouped into 6 communication tasks that should be accomplished during the initial 5 minutes of an encounter ( establish initial rapport, survey patient's reason(s) for the visit, determine the patient's chief concern, elicit and understand the patient's perspective, manage flow – provide the structure for the interview, and use of relationship building skills ). The performance was rated on a 4-point scale labelled as good, satisfactory, insufficient and poor. The skills items were selected for their association with improved patient outcomes. They were derived from evidence-based communication assessment tools (i.e., the Calgary-Cambridge observation guide, the SEGUE framework, and the checklist developed by the investigators of the Macy Initiative in Health Communication ) [ 2 , 10 , 11 ]. These instruments are based on the same conceptual models for patient-physician communication we referred to during our teaching seminar. Two staff members (KK, HW) were trained to serve as raters. The tapes were independently reviewed and scored using the students' communication skills rating scale. Ten arbitrarily selected videotapes of students' role-plays of a doctor-patient encounter during the small group seminar were used to ensure accuracy and inter-rater reliability. At the time of the research the raters primarily worked outside the University and did not participate in the teaching seminars. Thus they were blinded to the students' group assignments. Of the 105 students who attended the OSCE, 2 did not return the consent form, 5 did not give permission for the video review, and for 1 the videotape quality was too poor to be analyzed. Thus, a total of 97 videotapes were available for the analysis. A skill item was considered 'acceptable' if both raters scored the students' performance as 'good' or 'satisfactory.' It was labeled as 'unacceptable' if both raters scored the performance as 'insufficient' or 'poor.' When these two raters disagreed over the judgment about the students' performance (e.g., one rater scored the performance of a skill item as 'acceptable' and the other scored the performance of the same item as 'unacceptable'), a communication educator and researcher (KA) served as the tiebreaker. The overall disagreement rate between the two raters (KK, HW) was 21%. The raters disagreed more often on inviting the patient to tell the story chronologically (41%), actively responding to the patient's concerns and nonverbal cues (34%), and being attentive and empathic nonverbally (31%). Statistical analysis We compared baseline characteristics of the two groups using t-tests for a continuous variable (age) and chi-square tests for categorical variables. To evaluate the effect of the educational intervention, the proportion of students with 'acceptable' performance was compared with those whose performance was unacceptable using chi-square tests for all 16 skills items. All statistical analyses were done by JS using Stat View Version 5.0 (SAS Institute Inc., North Carolina). 
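A minimal sketch, in Python, of the two analysis steps just described: the two-rater adjudication rule with a third-rater tiebreak, and the chi-square comparison of 'acceptable' proportions between groups. The function names and example counts are hypothetical, and note that SciPy's default Yates continuity correction for a 2x2 table can give a slightly different p-value than an uncorrected test.

```python
from scipy.stats import chi2_contingency

ACCEPTABLE = {"good", "satisfactory"}

def judge_item(rater1, rater2, tiebreaker):
    """Two-rater adjudication with a third-rater tiebreak, as described above."""
    r1, r2 = rater1 in ACCEPTABLE, rater2 in ACCEPTABLE
    if r1 == r2:
        return "acceptable" if r1 else "unacceptable"
    return "acceptable" if tiebreaker in ACCEPTABLE else "unacceptable"

# Hypothetical 2x2 table for one skill item: rows are groups,
# columns are acceptable / unacceptable counts.
table = [[25, 22],   # intervention group (N = 47)
         [15, 35]]   # control group (N = 50)
chi2, p, dof, _ = chi2_contingency(table)

print(judge_item("good", "poor", "satisfactory"))   # -> acceptable
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```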
Results Student characteristics, including gender, did not differ between the groups except that more students in the control group engaged in self-study in preparation for the OSCE (p < .05) (Table 1 ). There were trends for more students in the control group to have taken an elective on communication skills in the 4th year and to be interested in a future generalist career. For both groups combined, the mean age was 23.5 years and 38% were women.

Table 1 Baseline characteristics of the students

                                                                Intervention Group (N = 47)   Control Group (N = 50)   P-value
 Mean age (SD)                                                  23.6 (1.5)                    23.4 (1.5)               0.48
 Women                                                          36% (N = 17)                  40% (N = 20)             0.70
 Did a self-study preparing for OSCE                            34% (N = 16)                  56% (N = 28)             0.03
 Took an elective on communication in medicine at the 4th year  43% (N = 20)                  54% (N = 27)             0.26
 Interested in becoming a generalist                            13% (N = 6)                   26% (N = 13)             0.10

The proportions of students who were judged to have performed as 'acceptable' for each of the 16 items are shown in Table 2 . There was a statistically significant difference for one particular skill related to understanding patient's perspectives: "exploring how the illness or problem affected the patient's life" (53% in the intervention group vs. 30% in the control group, p = .02). No significant differences were observed for the other 15 skills, although there was a trend favouring the intervention in the skill for "asking the patient about ideas concerning the illness or problem" (60% vs. 40%, p = .054) and one of the relationship building skills: "being attentive and empathic nonverbally" (87% vs. 72%, p = .064).

Table 2 Student performance of the skill judged as 'acceptable'

 Communication Tasks and Related Skills                         Intervention Group (N = 47)   Control Group (N = 50)   P-value
 Establish Initial Rapport
   Greet patient and obtain patient's name                      92%                           94%                      0.43
   Introduce self and clarify the role                          100%                          98%                      1.0
 Survey Patient's Reason(s) for the Visit
   Allow the patient to complete his/her opening statement      9%                            6%                       0.71
   Invite the patient to tell the story chronologically         49%                           46%                      0.77
   Actively listen, using verbal and nonverbal techniques       66%                           58%                      0.42
   Summarize. Check for understanding. Invite more questions?   70%                           60%                      0.29
 Determine the Patient's Chief Concern
   Ask closed-questions that are non-leading, one at a time     100%                          100%                     1.0
   Define the concern completely                                96%                           94%                      1.0
 Elicit and Understand the Patient's Perspective
   Explore contextual factors (e.g., job, family, hobbies)      66%                           62%                      0.69
   Ask the patient's ideas about the illness or problems        60%                           40%                      0.054
   Explore how the problem affects the patient's life           53%                           30%                      0.02
 Manage Flow – Provide the Structure to the Interview
   Summarize periodically throughout the interview              81%                           76%                      0.56
   Use signposting                                              40%                           30%                      0.28
 Use of Relationship Building Skills
   Be attentive and empathic nonverbally                        87%                           72%                      0.064
   Actively respond to patient's concerns and nonverbal cues    38%                           40%                      0.86
   Use appropriate language                                     100%                          100%                     1.0

Discussion A short, intensive small group seminar on medical interviewing appeared to have had an impact on some specific skills, pertaining to "eliciting and understanding the patient's perspectives." It did not seem to have improved the skills associated with the other tasks (establishing initial rapport, surveying the patient's reason(s) for the visit, determining the patient's chief concern, and managing flow – providing the structure for the interview), or the skills for building relationships. There are several strengths of our study. 
First, this is one of the few empirical, controlled studies from a non-English speaking country. Even though the students were not strictly randomized into intervention and control groups, the assignment was made arbitrarily by the administration, without regard to students' preferences or interests in medical interviewing. Thus, it is unlikely that the higher scores in the intervention group are attributable to self-selection. Although there was a significant difference between the groups in proportions of students who did a self-study for the OSCE, which might explain the lack of difference in most of the skills, the other characteristics such as age and gender were similarly distributed (Table 1 ). Second, interventions and evaluations were guided by the conceptual framework, modelled after the 3 widely used theoretical models that are based on rigorous, empirical research in the field of patient-physician communication [ 2 , 10 , 11 ]. Third, the communication skills evaluation instrument was matched with the competencies taught in the small group sessions [ 13 ]. By carefully delineating and defining specific communication skills that should be addressed in the teaching session and by evaluating the effect of the teaching intervention on these individual skills, we sought to examine whether some skills were more teachable than others in such brief, small group sessions. Our study also has weaknesses that should be addressed. First, our teaching method was based on the research findings in the Western world, and this rests on the untested assumption that these findings are equally valid in Japan. There is evidence that patient-physician communication patterns in Japan are different from those in the West. Previous research by Ohtaki and colleagues compared patient-physician communication patterns in Japan and the USA [ 14 ]. It included 20 outpatient consultations of four physicians in Japan and 20 outpatient consultations of five physicians in the USA. Japanese physicians spent less time on social talk than their USA counterparts (5% vs. 12%). Japanese patient-physician encounters included more pauses than those in the USA (30% vs. 8.2% of the total consultation length). There is a need for more empirical studies linking physicians' communication skills to patient outcomes specifically for the Japanese population. Second, our assessment of students' communication skills was based on observations of a single, five-minute OSCE station, the reliability of which as a measure of communication skills is known to be low [ 15 ]. Third, because we assessed the students' skills at only one time, we could not assess the change in students' performance before and after the intervention. Fourth, the use of junior students as standardized patients may have influenced the performance of the examinees. The accuracy of student-standardized-patients' (student-SPs') portrayal would be a critical issue especially when the OSCE is used to grade students. Although we did not objectively investigate the consistency of the portrayal by student-SPs, our examinees rated highly the fidelity of student-SPs, i.e., the degree to which they were acting as if they were real patients (mean score, 3.9 on a 5-point Likert scale) [ 12 ]. Fifth, our study might have only shown that the intervention was effective in improving students' skills for eliciting 'expert' observations of patient perspectives, not actual patient perspectives. We did not ask student-SPs whether examinees elicited their perspectives. 
Rather, we judged examinees' ability to elicit patient perspectives through their 'observable' behaviours from the experts' point of view. The role of student-SPs in evaluating fellow students' communication skills, particularly skills for eliciting patient perspectives, should be addressed in future studies. Finally, the statistically significant difference observed for only 1 skill among a total of 16 skills could be due to chance alone. It is certainly possible that our intervention was too weak to influence any of the 16 communication skills. One can hypothesize the reasons why the intervention appeared to make a difference to some communication skills competencies but not to others. One could speculate that the competencies that were not influenced by the intervention were either very easy in general or too difficult to acquire in such a short teaching session. For example, the skills for establishing initial rapport (greet patient and obtain the patient's name, introduce self and clarify roles) and skills for determining the patient's chief concern (ask closed-ended questions that are non-leading and one at a time, define the concern completely) may already be present from the outset or be so easy to acquire that a self-study just before the OSCE would make no difference in scores between the groups regardless of the intervention. On the other hand, the skills for surveying the patient's reason(s) for the visit, which require being open at the beginning of the interview, may be too difficult for students to demonstrate, with or without the intervention. In particular, only 9% in the intervention group and 6% in the control group demonstrated an acceptable performance for the skill of allowing patients to complete their opening statements. These very low scores may also indicate that during small group sessions, we did not emphasize enough the importance of not interrupting patients at the beginning of the interview. Another explanation is that 'content' skills (i.e., what we communicate) are easier for students to acquire than 'process' skills (i.e., how we communicate). Kurtz et al. noted that the skills for understanding patient's perspectives, for which our intervention made a difference, are actually 'content' skills, not 'process' skills [ 16 ]. One could argue that the intervention was just too short to influence other 'process' skills. These interesting hypotheses require further investigation. Conclusions The results of this study suggest that a short, intensive small group seminar for Japanese medical students may have had an impact on specific communication skills, namely, skills for exploring how the illness or problem affected the patient's life, asking the patient about ideas concerning the illness or problem, and being attentive and empathic nonverbally, at least in the short term. Further studies should be done to confirm this preliminary finding and to clarify the skills for which educational interventions could make a difference. Competing interests The author(s) declare that they have no competing interests. Authors' contributions KM contributed to the conception and design of the study, design and implementation of the educational intervention, interpretation of the data, and drafting of the manuscript. KK and KW contributed to the collection of the data and reviewing of the manuscript. KA contributed to the collection of the data and reviewing of the manuscript. 
JS contributed to the conception and design of the study, analysis and interpretation of the data and reviewing of the manuscript. NB contributed to the conception and design of the study, design and implementation of the educational intervention, interpretation of the data, and reviewing of the manuscript. Pre-publication history The pre-publication history for this paper can be accessed here: | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC534797.xml |
549033 | Clinical response in Japanese metastatic melanoma patients treated with peptide cocktail-pulsed dendritic cells | Background Metastatic, chemotherapy-resistant melanoma is an intractable cancer with a very poor prognosis. In immunotherapy targeting metastatic melanoma, mainly HLA-A2 + patients have been enrolled in studies in Western countries. However, immunotherapy oriented toward HLA-A24 + melanoma patients has not been fully investigated. In the present study, we investigated the effect of dendritic cell (DC)-based immunotherapy on metastatic melanoma patients with HLA-A2 or A24 genotype. Methods Nine cases of metastatic melanoma were enrolled into a phase I study of monocyte-derived dendritic cell (DC)-based immunotherapy. HLA-genotype analysis revealed 4 cases of HLA-A*0201, 1 of A*0206 and 4 of A*2402. Enriched monocytes were obtained using OptiPrep™ from leukapheresis products, and then incubated with GM-CSF and IL-4 in a closed serum-free system. After pulsing with a cocktail of 5 melanoma-associated synthetic peptides (gp100, tyrosinase, MAGE-2, MAGE-3 and MART-1 or MAGE-1) restricted to HLA-A2 or A24 and KLH, cells were cryopreserved until used. Finally, thawed DCs were washed and injected subcutaneously (s.c.) into the inguinal region in a dose-escalation manner. Results The mean percentage of DCs rated as lin - HLA-DR + in melanoma patients was 46.4 ± 15.6 %. Most of the DCs expressed high levels of co-stimulatory molecules and a type 1 phenotype (CD11c + HLA-DR + ), while a moderate number of mature DCs, positive for CD83 and CCR7, were contained in the DC products. DC injections were well tolerated except for transient liver dysfunction (elevation of transaminases, Grade I-II). All 6 evaluable cases except for early PD showed positive immunological responses to more than 2 melanoma peptides in an ELISPOT assay. Two representative responders demonstrated strong HLA-class I protein expression in the tumor and very high ELISPOT scores that might correlate with the regression of metastatic tumors. Clinical response through DC injections was as follows: 1 CR, 1 PR, 1 SD and 6 PD. All 59 DC injections in the phase I study were tolerable in terms of safety; however, the maximal tolerated dose of DCs was not determined. Conclusions These results suggested that peptide cocktail-pulsed DC-based immunotherapy has potential as a therapeutic tool against metastatic melanoma in Japan. | Background Despite many attempts in the last few years to target cancer-specific antigens, a breakthrough in terms of clinical response has yet to be achieved mainly because of a scarcity of effective genuine cancer antigens, immunological evasion, or an immunosuppressive state. Melanoma-associated antigens are categorized as class I human leukocyte antigen (HLA)-restricted cancer/testis antigens [ 1 ], which are considered to be tolerated by the immune system because they are also expressed in normal tissues. However, malignant melanoma is the best-known cancer in which multiple tumor-specific antigens have been defined and utilized in vaccination strategies as peptide vaccines or peptide-pulsed DC vaccines [ 2 - 9 ]. From a clinical point of view, several vaccination strategies for stage IV melanoma using a combination of several (more than 3) peptides with a restriction to HLA-A2 have been reported to date [ 10 , 11 ]. However, little immunotherapeutic research regarding HLA-A24-restricted multiple peptides has been conducted because HLA-A24 is not a common allele in Caucasians. 
Several studies have demonstrated the identification of many HLA-A24-restricted CTL epitopes from various cancer-related antigens including p53, CEA, telomerase, tyrosinase, MAGE proteins etc. [ 12 - 18 ]. When it comes to melanoma, our group demonstrated the feasibility of using a combination of 5 melanoma-associated peptides with restriction of HLA-A24 (peptide cocktail) as a specific cancer vaccine in an immunotherapeutical trial (Akiyama et al, Anticancer Res., 2004). Based on basic research results, a phase I clinical trial of HLA-A2 or A24-restricted melanoma peptide cocktail-pulsed dendritic cell-based immunotherapy has been performed. Here we describe the safety and efficacy of DC-based immunotherapy against metastatic melanoma. Materials and methods Patient characteristics and eligibility criteria Nine patients with metastatic melanoma were enrolled in a phase I clinical trial of a peptide cocktail-pulsed DC-based vaccine approved by the Institutional Review Board (No. 12–93 and 12–94) of the National Cancer Center, Tokyo. All patients gave written informed consent. All patients had received prior surgery, chemotherapy and radiation (Table 1 ). Three subjects had metastatic lesions in the brain and had been given radiation to control them. Inclusion criteria were: i) biopsy-proven stage IV metastatic melanoma, ii) age ≥ 18 years, iii) performance status ≤ 2, iv) HLA-A2 or A24 phenotype and v) measurable target lesions. Exclusion criteria were: i) prior therapy < 4 weeks before trial entry, ii) untreated CNS lesion, iii) pregnancy, iv) autoimmune disease, and v) concurrent corticosteroid/immunosuppressive therapy. All the patients received 3 DC vaccines subcutaneously (s.c.) at the inguinal region weekly, and toxicity was checked. DCs were injected in a dose-escalation design at a dose level per cohort of 1.0, 2.0 and 5.0 × 10 7 /body/shot (Table 1 ). The injected DC number was calculated from the percentage of Lin - HLA-DR + gated populations in a FACS analysis.

Table 1 Phase I study of DC-based therapy against melanoma

 Patient No.   Age   Sex   Previous therapy    Measurable lesions   DC injection (times)   Side effect    DTH peptide   DTH KLH   Response
 1             41    F     ST, CT, RT, IFNβ    lung, LN             1 × 10 7 (10)          Hepatic (II)   -             ++        PR
 2             75    M     ST, CT, IFNβ        LN                   1 × 10 7 (10)          -              +             +         SD
 3             49    F     ST, CT, IFNβ, RT    lung, liver          1 × 10 7 (3)           -              -             -         (PD)*
 4             49    M     ST, CT              lung, liver          2 × 10 7 (6)           -              -             -         PD
 5             50    M     ST, CT, IFNβ        lung, liver, LN      2 × 10 7 (6)           Hepatic (I)    -             -         PD
 6             69    M     ST, CT, IFNβ        LN                   2 × 10 7 (10)          -              +             +         CR
 7             61    M     ST, CT, RT          liver, LN            5 × 10 7 (8)           Hepatic (I)    +             ++        PD
 8             64    F     ST, CT, RT          lung                 5 × 10 7 (3)           Fever (I)      -             -         (PD)
 9             66    F     ST, CT              lung, LN             5 × 10 7 (3)           -              -             -         (PD)

* The (PD) patients represent those who received fewer than 4 DC injections because of an early progression of the disease.

Preparation of DCs and peptides Leukapheresis products from 7 L of processed blood were washed and centrifuged using density-adjusted OptiPrep™ (Axis-Shield PoC, Oslo, Norway), then the monocyte layer at the top was retrieved. Cells were transferred to an X-fold culture bag (Nexell, Irvine, CA) and cultured in the presence of GM-CSF at 50 ng/ml (CellGenix, Freiburg, Germany) and IL-4 at 50 ng/ml (CellGenix) in X-VIVO15 serum-free medium (Biowhittaker, Walkersville, MD). After 7 days, harvested cells were pulsed with a cocktail of 5 melanoma-specific synthetic peptides (25 μg/ml each) restricted to HLA A2 or A24 and KLH (25 μg/ml, Intracell, Frederick, MD). DC-enriched cells were washed and cryopreserved in Cryocyte bags (Baxter Healthcare Co., Deerfield, IL) until used. 
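A minimal sketch of the dose calculation implied above, in which the injected DC number is derived from the lin - HLA-DR + percentage measured by FACS. The function name is ours, and the purity value is simply the mean reported in the Results, used here only as an illustration.

```python
# Hypothetical sketch: total cells to thaw and inject so that the contained
# lin- HLA-DR+ (DC) fraction matches the cohort's target dose.
def total_cells_for_dose(target_dc_dose, dc_percent):
    return target_dc_dose / (dc_percent / 100.0)

target = 1.0e7    # cohort 1: 1 x 10^7 DCs per injection
purity = 46.4     # mean % of lin- HLA-DR+ cells reported in the Results
print(f"{total_cells_for_dose(target, purity):.2e} total cells per injection")
# -> about 2.16e7 cells must be injected to deliver 1e7 DCs at 46.4% purity
```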
The purity of CD14 + cells was evaluated with a flow cytometer (FACSCalibur, Becton-Dickinson Co., CA) before and after OptiPrep™ separation. The percentage of DCs was rated as the lin - HLA-DR + population (lineage antibodies including CD3, CD14, CD16, CD19, CD20, CD56; Becton-Dickinson Co.). The additional DC-related markers were determined on gated lin - HLA-DR + cells. The following peptides restricted to HLA-A2 or A24 were synthesized according to GMP standards by Multiple Peptide Systems, CA. HLA-A2: MART-1 27–35 (AAGIGILTV), gp100 209–217 (IMDQVPFSV), tyrosinase 368–376 (YMDGTMSQV), MAGE-2 157–166 (YLQLVFGIEV), MAGE-3 271–279 (FLWGPRALV) ; HLA-A24: gp100 152–160 (VWKTWGQYW), tyrosinase 206–214 (AFLPWHRLF), MAGE-1 135–143 (NYKHCFPEI), MAGE-2 156–164 (EYLQLVFGI), MAGE-3 195–203 (IMPKAGLLI). Characterization of tumor specimens before DC vaccines Skin metastatic lesions were obtained from patients who gave written informed consent. The expression of melanoma tumor antigens was investigated using RT-PCR as described previously [ 19 ]. HLA protein expression was also evaluated using an immunohistochemical (IHC) analysis with anti-HLA-A2 or A24 monoclonal antibody (One Lambda Inc., Canoga Park, CA). A phenotypical analysis of lymphocytes infiltrating the tumor site was also performed using IHC. Clinical and immunological monitoring Adverse effects were evaluated according to the NCI Common toxicity criteria after 3 DC injections. Standard conventional definitions of major (complete or partial) objective responses were used. Stable disease (SD) was defined as less than a 25% change in size with no new lesions lasting at least 4 weeks. Clinical response was rated as maximal through the DC vaccinations. The patients received up to 10 injections on the condition that at least one measurable lesion showed more than a stable disease (SD) response and/or an ELISPOT assay performed after 4 injections indicated a positive response for more than 1 melanoma-associated peptide. PBMC samples were harvested before and 29, 78, 134 and 190 days after the 1 st DC injection, and frozen prior to use for immunological monitoring tests. All patients were followed up for 2 years after the enrollment into the study. ELISPOT assay The ELISPOT assay was performed using in vitro re-stimulations. Briefly, PBMCs were incubated in a 24-well culture plate at 4 × 10 6 per ml and divided into non-adherent and adherent cells. Adherent cells were treated with a peptide cocktail and β2-microglobulin for 2 hrs, and co-cultured with non-adherent cells in the presence of IL-2 at 15 U/ml and IL-7 at 10 ng/ml. On day 7, non-adherent cells were re-stimulated with peptide-pulsed adherent cells. On day 14, responder cells (1 × 10 4 /well) were incubated with peptide-pulsed target cells (1 × 10 5 /well; .221A201 cells for HLA-A2 peptide or TISI cells for HLA-A24 peptide) in a 96-well culture plate coated with anti-IFN-γ antibody (MABTECH AB, Nacka, Sweden) overnight. Finally, positive spots stained with anti-IFN-γ antibody were measured using the KS ELISPOT system (Carl Zeiss AG, Overkochen, Germany). HLA-A2-restricted Influenza M1 peptide (GILGFVFTL) or HLA-A24-restricted EBNA3A peptide (RYSIFFDY) was used as a negative control. Tetramer staining PBMCs were re-stimulated twice in vitro and utilized for tetramer staining. 
CD8 + -enriched T cells were obtained by the depletion of CD4 + T cells using Dynabeads M-450 CD4 (Dynal, Oslo, Norway) and used for tetramer staining. The staining was performed according to the method reported by Kuzushima et al [ 20 ]. The PE-labeled tetramers used in the present study were as follows: HLA-A*0201 MART1 (Beckman Coulter Inc., San Diego, CA), HLA-A*0201 gp100, HLA-A*2402 tyrosinase, HLA-A*2402 MAGE-1, HLA-A*2402 HIV (RYLRDQQLL) and HLA-A*0201 Influenza M1 tetramers (MBL, Nagoya, Japan). Intracellular cytokine staining PBMCs were stimulated with 25 ng/ml of PMA (Sigma) and 1 μg/ml of ionomycin (Sigma) for 5 hrs in a 96-well culture plate. Brefeldin A (10 μg/ml) was also added to cultures in the last hour. After the stimulation, cells were stained with FITC-anti-CD4 MoAb, and subsequently intracellular staining was performed with fix/permeabilization buffer and PE-labeled anti-IFN-γ or anti-IL-4 MoAb (Pharmingen, San Diego, CA). Finally, the ratio of Th1 (IFN-γ + ) and Th2 (IL-4 + ) was calculated in PBMC samples obtained before and after DC vaccination. DTH reactions The HLA-A2 or A24 peptide cocktail solution diluted to a dose of 5 μg/ml (each peptide) and KLH (50 μg/ml) were injected intradermally into the patient's forearm, and the redness and induration at the injection site were measured. PPD was used as a positive control. Statistical analysis Statistical differences were analyzed using Student's paired two-tailed t -test. Values of p < 0.05 were considered significant. Results DC characterization The mean percentage of DCs rated as lin - HLA-DR + in melanoma patients was 46.4 ± 15.6 %, not different from that in healthy volunteers (data not shown). The frequencies of the DC-related markers were determined on gated lin - HLA-DR + cells: HLA-class I 97.5 ± 0.9 %, CD80 87.6 ± 6.9 %, CD86 85.5 ± 7.4 %, CD1a 55.2 ± 24.2 %, CD83 29.9 ± 13.3 %, CCR7 32.4 ± 13.7 %, DC SIGN 78.2 ± 19.3 %, CD11c + HLA-DR + 90.6 ± 6.0 %, CD123 + HLA-DR + 0.99 ± 1.3 %. Most of the DCs expressed high levels of co-stimulatory molecules and a type 1 phenotype (CD11c + HLA-DR + ), while a moderate number of mature DCs, positive for CD83 and CCR7, were contained in the DC products. On the other hand, the T cell-stimulating activity of DCs investigated in the MLR assay using allogeneic T cells was as strong as that of DCs obtained from healthy volunteers (data not shown). Characterization of tumor specimen An analysis of melanoma antigen expression by RT-PCR was performed in 3 cases. The expression of more than 2 antigens in the tumor was verified in all cases. HLA protein expression was positive in 5 out of 9 cases (Table 2 ). Patient 1, who showed a remarkable clinical response (PR), was representative of HLA protein-positive cases (Fig. 1 ). In contrast, in patient 7, HLA-A2 protein expression in the tumor was lost in the course of treatment.

Table 2 Immunological monitoring in melanoma patients

 Patient No.   HLA       Tumor antigen, HLA expression         ELISPOT              Tetramer             Th1/Th2 balance
 1             A*2402    3/5 (Tyr, M1, M2), A24(+)             3/5 (Tyr, M1, M2)    Tyrosinase (0.34%)   5.19 (1.45) a
 2             A*0201    A2(+)                                 2/5 (MART1, gp100)   MART1 (0.64%)        3.68 (1.49)
 3             A*2402    A24(-)                                N.D. b               N.D.                 -
 4             A*0206    A2(-)                                 2/5 (MART1, M2)      -                    3.05 (2.57)
 5             A*0201    A2(-)                                 2/5 (MART1, M2)      MART1 (1.48%)        2.83 (3.68)
 6             A*2402    2/5 (M2, M3), A24(+)                  2/5 (M2, M3)         -                    3.76 (2.00)
 7             A*0201    4/5 (MART1, Tyr, gp100, M2), A2(+)    2/5 (gp100, M2)      -                    2.64 (1.79)
 8             A*2402    A24(-)                                N.D.                 N.D.                 -
 9             A*0201    A2(+)                                 1/5 (gp100) c        N.D.                 N.D.

a The value in the parenthesis shows the Th1/Th2 ratio prior to DC vaccination. b N.D.; not done. c The value shows the one obtained prior to DC vaccines. 
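A minimal sketch of the Th1/Th2 ratio calculation described in the intracellular cytokine staining method above. The gate counts are hypothetical placeholders chosen to reproduce patient 1's reported ratios (1.45 before, 5.19 after), and the function name is ours, not the authors'.

```python
# Hypothetical sketch: Th1/Th2 ratio from intracellular cytokine staining,
# where Th1 = IFN-gamma+ and Th2 = IL-4+ CD4 T cells after PMA/ionomycin.
def th1_th2_ratio(ifng_pos_events, il4_pos_events):
    return ifng_pos_events / il4_pos_events

before = th1_th2_ratio(145, 100)   # -> 1.45, cf. patient 1 pre-vaccination
after = th1_th2_ratio(519, 100)    # -> 5.19, cf. patient 1 after 4 injections
print(f"Th1/Th2 shift: {before:.2f} -> {after:.2f} "
      f"({100 * after / before:.0f}% of baseline)")
```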
Figure 1 Immunohistochemical analysis of metastatic tumor tissue from responder patient 1 and non-responder patient 7. A; H-E stain and B; anti-HLA-A24 MoAb from patient 1. C; anti-HLA-A2 MoAb before DC vaccination and D; anti-HLA-A2 MoAb after 4 DC injections from patient 7. Magnification × 200. ELISPOT assay CTL precursors reactive to more than 2 melanoma peptides were detected after DC vaccination in 6 of 9 cases. Two HLA-A2 + cases (patients 5 and 9) showed HLA-A2 peptide-specific CTL responses before the vaccination. Patients 1 and 6, who showed remarkable clinical responses, exhibited many CTL precursors against the HLA-A24-restricted peptide cocktail (Table 3 , Figure 2 ). Notably, in patient 1, a remarkable increase in the CTL response to the HLA-A24 peptide cocktail was seen in accordance with the regression of metastatic tumor of the lung (Fig. 3 ). On the other hand, patient 7 also demonstrated a high CTL precursor frequency, but showed no significant clinical response.

Table 3 Peptide cocktail-specific CTL precursor frequency during DC vaccination

                                      Spot No./CD8 + T cell (%) a
 Patient No.   DC injection (times)   before      day 29      day 78      day 134     day 190
 1             1 × 10 7 (10)          1.19/0.45   6.96/0.06   8.82/0.63   8.81/0.08   5.4/0.08
 2             1 × 10 7 (10)          0.07/0.05   0.07/0.2    0.02/0      0.02/0      0.29/0.03
 3             1 × 10 7 (3)           N.D. b      N.A. c      N.A.        N.A.        N.A.
 4             2 × 10 7 (6)           0.39/0.53   1.29/0.03   1.12/0      N.A.        N.A.
 5             2 × 10 7 (6)           1.74/0.05   0.51/0.2    1.25/0.04   N.A.        N.A.
 6             2 × 10 7 (10)          0.21/0.27   0.31/0.28   1.18/0.24   7.80/0.19   9.82/0.30
 7             5 × 10 7 (8)           0.62/0.20   6.52/0.1    7.33/0.11   N.A.        N.A.
 8             5 × 10 7 (3)           N.D.        N.A.        N.A.        N.A.        N.A.
 9             5 × 10 7 (3)           3.09/1.24   N.D.        N.A.        N.A.        N.A.

The percentages represent the IFN-γ-positive spot No. divided by the total CD8 + cell No. from 1 × 10 4 PBMCs. a Each value represents the percentage with peptide cocktail/without peptide cocktail. b N.D.; not done. c N.A.; sample not available.

Figure 2 CTL responses in the course of DC injections in 6 evaluable cases. Patients 1, 2 and 6 were responders and patients 4, 5 and 7 were non-responders. Responders (cases 1, 6) showed remarkable CTL expansion in PBLs compared with before DC vaccination. In contrast, non-responders showed no significant CTL responses, with the exception of patient 7. Figure 3 Impact of DC vaccines on metastatic lesions of the lung in responder patient 1. Upper and lower panels show a lung and hilar lymph node metastatic lesion (arrow), respectively. The CT scan was made before therapy and after 4, 7 and 10 DC vaccinations. Tetramer staining After CD4 + T cell depletion, the frequency of CD8 + cells was more than 85%. The proportion of PE-labeled tyrosinase-HLA-A24 tetramer-positive cells among gated CD8 + cells was 0.34% in patient 1 (Table 2 ). HIV-A24 tetramer (negative control)-positive cells were not detected. The percentage of PE-labeled MART1-HLA-A2 tetramer-positive cells was 0.64% and 1.48% in patients 2 and 5, respectively. On the other hand, that of Influenza M1-HLA-A2 tetramer (negative control)-positive cells was 0.04%. Th1 and Th2 balance after DC vaccination In 5 of 6 evaluable cases, the balance of Th1 and Th2 shifted more to Th1 after 4 DC injections compared with prior to vaccination (Table 2 ). The amplitude of the shift seemed to be larger in clinical responders (patients 1, 2, 6) than in non-responders (patients 4, 5, 7) (% ratio increase: 264 ± 86 vs. 114 ± 35). DTH Three of 6 evaluable cases showed positive DTH to the peptide cocktail after DC injections (Table 1 ). On the other hand, 4 of 6 cases developed a DTH response to KLH protein. There were stronger reactions to KLH in patients 1 and 7. 
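A minimal sketch of the Table 3 calculation, as defined in its footnote: IFN-γ spot number divided by the CD8 + cell number among the 1 × 10 4 PBMCs plated per well. The spot counts and the assumed CD8 + fraction below are hypothetical; the study measured CD8 + numbers directly.

```python
# Hypothetical sketch of the Table 3 percentages: IFN-gamma spot number
# divided by the CD8+ T cell number among 1 x 10^4 plated PBMCs.
def precursor_pct(spots, pbmc_per_well=10_000, cd8_fraction=0.20):
    # cd8_fraction is an assumption for illustration only.
    cd8_cells = pbmc_per_well * cd8_fraction
    return 100.0 * spots / cd8_cells

with_peptide = precursor_pct(spots=139)     # hypothetical well counts
without_peptide = precursor_pct(spots=1)
print(f"{with_peptide:.2f}/{without_peptide:.2f}")  # same format as "6.96/0.06"
```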
Adverse effects of DC vaccine Safety was assessed after 3 DC injections in all 9 cases. Three of 9 patients developed mild hepatic dysfunction (grade I-II); however, it was only transient and resolved despite continued DC injections. Rheumatoid factor and anti-nuclear antibody were negative before the injection, but increased to 1:160 and 1:40, respectively, after the injections finished in patient 1. No clinical symptoms of autoimmune disease were found in patient 1 (Table 1 ). Clinical response Clinical response was rated as maximal through the DC vaccinations. In the 6 evaluable cases, excluding 3 cases of early PD due to rapid progression of the disease, 1 CR (patient 6), 1 PR (patient 1), 1 SD and 3 PD were obtained (Table 1 ). Large metastatic lesions in the lung and hilar nodes in patient 1 dramatically decreased in size after 4 DC injections, and almost disappeared after treatment finished (Fig. 3 ). Moderate-sized cervical metastatic lesions in patient 6 finally started to decrease after 8 DC injections and disappeared surprisingly rapidly after the end of DC therapy. In contrast, patient 7, who exhibited good immunological responses in the ELISPOT assay and DTH, showed no shrinkage of the tumor, resulting in cessation of therapy after 6 DC injections. Characterization of infiltrated lymphocytes in the tumor IHC analysis of infiltrated lymphocytes in the tumor after DC vaccines was performed only in patients 1 and 7. Obvious infiltration of a larger number of CD4 + or CD8 + T cells and a small number of CD20 + B cells was shown in patient 1 (Fig. 4 ). In contrast, no significant cell infiltration was seen in patient 7, in whom no therapeutic effect on the tumor developed (data not shown). Figure 4 Phenotype analysis of lymphocytes infiltrating the tumor site in responder patient 1. Obvious infiltration of a larger number of CD4 + or CD8 + T cells and a small number of CD20 + B cells is shown. Indirect staining using anti-CD4, CD8, CD20 or CD56 MoAb as primary Ab and goat anti-mouse Ab as secondary Ab was performed. Magnification × 200. Discussion Clinical trials of specific immunotherapy against metastatic melanoma using peptide-pulsed Mo-derived DCs have been performed mainly in Western countries, and some fruitful results were obtained [ 7 , 10 , 11 ]. In those cases, most of the patients belonged to the HLA-A*0201 type. In the present study, we investigated the effect of peptide-pulsed DCs on 4 cases of HLA-A*2402 + metastatic melanoma patients besides 4 cases of HLA-A*0201 + patients in a clinical phase I trial. This is the first report to demonstrate that peptide-pulsed DCs were effective in some HLA-A24 + melanoma patients in Japan. It is well known that HLA-A*2402 is a common genotype, present in around 60% of Asians. There was one case of an HLA-A*0206 patient among 5 HLA-A2 + patients (Table 2 ). Sidney et al. [ 21 ] demonstrated that over 70% of the peptides that bound A0201 with high affinity were found to bind at least two other supertype molecules like A*0202, A*0203 or A*0206. Taking this into consideration, the HLA-A*0206 patient was finally enrolled into the study. With regard to other HLA-A24 + solid cancers, stomach, colon and bladder cancers have been treated with peptide (MAGE-3)-pulsed DC vaccines, and showed a limited response [ 22 - 24 ]. 
Considering that melanoma is highly immunogenic and probably a good model for tumor-specific immunotherapy despite being an unusual tumor in Asian countries, it deserves a phase I study using peptide-pulsed DCs. In our study, peptide cocktails combining 5 peptides for each HLA type (HLA-A2 or A24) were prepared and used for DC pulsing. Our clinical study revealed positive ELISPOT responses against more than 2 peptides in all 6 evaluable cases. In previous reports, clinical DC therapy using more than 3 melanoma peptides demonstrated the induction of a specific CTL response against multiple melanoma peptides [ 10 , 11 ]. However, there is still some controversy over the efficacy of multiple epitope-based vaccinations, and Smith et al. [ 25 ] demonstrated that, although polyepitope vaccines are an effective way of priming polyvalent CTLs, continual stimulation with polyepitope vaccines might restrict CTL induction as a result of immunodominance. The results of our study should help answer that question, but testing of the peptide cocktail vaccine in more patients will be needed. To refine the quality and protocol of tumor-specific immunotherapy for clinical trials, the prediction of clinical response in an individual is important [ 26 ] and should be discussed. In our study, the correlation between immunological parameters and clinical response was investigated in a limited number of cases. First of all, as to HLA expression in the tumor, patients 1, 2, 6 and 7 were positive, and patients 4 and 5 were negative. HLA-negative cases showed a progression of the tumor. Even in positive cases, patient 7 turned negative in the course of DC therapy, showing tumor progression. Loss of HLA expression in melanoma is reported to be a complex phenomenon associated with melanoma antigen loss [ 27 ], β2-microglobulin gene mutation [ 28 ] or loss of heterozygosity (LOH) on chromosome 6, and may lead to tumor progression and metastasis. As to patient 7, considering that the melanoma antigen expression was maintained, the functional expression of β2-microglobulin should be investigated. All the other HLA-positive cases showed CR, PR and SD, respectively. There was a tendency for HLA expression to be associated with tumor response, and some researchers reported a positive correlation of HLA expression with tumor response in immunotherapy against melanoma. However, despite the positive correlation of HLA expression in the tumor with anti-tumor response, Nestle et al. demonstrated that HLA expression in the tumor did not correlate with survival in melanoma patients [ 29 ]. Second, the amplitude of the CTL response in the ELISPOT assay seems to be another key factor predicting anti-tumor response. Patients 1, 6 and 7 showed large responses to the peptide cocktail in the ELISPOT assay, and patients 2, 4 and 5 showed small responses. The former exhibited a remarkable regression of tumor except for patient 7. On the other hand, the latter showed a poor response. There was a tendency for the amplitude of the CTL response to be associated with tumor regression. Also, it was difficult to predict when immunological responses like CTL induction start to be activated in vivo during DC vaccination, and this question needs to be answered. In the present study, because of the limited number of patients given DC vaccines, the apparent tendency for HLA-class I protein expression in the tumor and the amplitude of ELISPOT responses to be associated with tumor regression is not convincing. 
Finally, in order to improve tumor response in the present study, there are still some issues regarding clinical DC preparation. First of all, the purity of CD14 + cells after Opti-prep separation is still low and may not be reproducible. Therefore, other clinical-grade monocyte separation methods using an elutriator or negative selection with CD2 and CD19 MoAbs [30] should be tried. Second, considering that the amplitude of the CTL response was associated with tumor regression, and that even a remarkable increase in CTL frequency inevitably diminished in spite of the repetition of DC vaccinations, it seems crucial to maintain an increased CTL frequency in blood, leading to tumor-infiltrating lymphocytes (TILs), and to expand these cells sufficiently to develop a substantial number of memory CD8 + CTLs in lymph nodes. A novel method achieving this will be needed to develop an effective cancer vaccine. Conclusions In the present study, we investigated the effect of dendritic cell (DC)-based immunotherapy on metastatic melanoma patients with the HLA-A2 or A24 genotype. Nine cases of metastatic melanoma were enrolled into a phase I study using HLA-A2 or A24-restricted peptide cocktail-pulsed DCs. All 6 evaluable cases showed positive immunological responses to more than 2 melanoma peptides in an ELISPOT assay. Clinical response through DC injections was as follows: 1 CR, 1 PR, 1 SD and 6 PD. All 59 DC injections in the phase I study were safely administered to patients. These results suggest that peptide cocktail-pulsed DC-based immunotherapy has potential as a therapeutic tool against HLA-A2 + or A24 + metastatic melanoma. Abbreviations DC, dendritic cell; HLA, human leukocyte antigen; GM-CSF, granulocyte macrophage-colony-stimulating factor; IL, interleukin; KLH, keyhole limpet hemocyanin; CTL, cytotoxic T lymphocyte; DTH, delayed-type hypersensitivity; CR, complete remission; PR, partial remission; SD, stable disease; PD, progressive disease; RT-PCR, reverse transcription-polymerase chain reaction; IFN, interferon; PBMC, peripheral blood mononuclear cell. Competing interests The authors declare that they have no competing interests. Authors' contributions YA participated in the design of the study and drafting the manuscript and was responsible for completing the study. RT, NI, MS and YH carried out apheresis and cell processing and were responsible for DC production. AY and NY were responsible for the clinical side of the study. IK, IN, KT and KM participated in the design of the study and performed biological assays. YT and KY reviewed the manuscript. All authors read and approved the final manuscript. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC549033.xml
509417 | Phylogenetic relationships of typical antbirds (Thamnophilidae) and test of incongruence based on Bayes factors | Background The typical antbirds (Thamnophilidae) form a monophyletic and diverse family of suboscine passerines that inhabit neotropical forests. However, the phylogenetic relationships within this assemblage are poorly understood. Herein, we present a hypothesis of the generic relationships of this group based on Bayesian inference analyses of two nuclear introns and the mitochondrial cytochrome b gene. The level of phylogenetic congruence between the individual genes has been investigated utilizing Bayes factors. We also explore how changes in the substitution models affected the observed incongruence between partitions of our data set. Results The phylogenetic analysis supports both novel relationships, as well as traditional groupings. Among the more interesting novel relationships suggested is that the Terenura antwrens, the wing-banded antbird ( Myrmornis torquata ), the spot-winged antshrike ( Pygiptila stellaris ) and the russet antshrike ( Thamnistes anabatinus ) are sisters to all other typical antbirds. The remaining genera fall into two major clades. The first includes antshrikes, antvireos and the Herpsilochmus antwrens, while the second clade consists of most antwren genera, the Myrmeciza antbirds, the "professional" ant-following antbirds, and allied species. Our results also support previously suggested polyphyly of Myrmotherula antwrens and Myrmeciza antbirds. The tests of phylogenetic incongruence, using Bayes factors, clearly suggest that allowing the gene partitions to have separate topology parameters increased the model likelihood. However, changing a component of the nucleotide substitution model had a much higher impact on the model likelihood. Conclusions The phylogenetic results are in broad agreement with traditional classification of the typical antbirds, but some relationships are unexpected based on external morphology. In these cases their true affinities may have been obscured by convergent evolution and morphological adaptations to new habitats or food sources, and genera like the Myrmeciza antbirds and the Myrmotherula antwrens obviously need taxonomic revision. Although Bayes factors seem promising for evaluating the relative contribution of components to an evolutionary model, the results suggest that even if strong evidence for a model allowing separate topology parameters is found, this might not mean strong evidence for separate gene phylogenies, as long as vital components of the substitution model are still missing. | Background The typical antbirds (Thamnophilidae) constitute a speciose family within the furnariid radiation (sensu [1]) of the New World suboscine clade. The family includes fully 200 species [2], all of which are restricted to neotropical forests. Most species are arboreal or undergrowth inhabitants, while only a few members are clearly terrestrially adapted, which otherwise seems to be the commonest lifestyle for most members of closely related clades (e.g., gnateaters Conopophagidae, antpittas Grallariidae, tapaculos Rhinocryptidae, and antthrushes Formicariidae). The highest diversity of typical antbirds is found in the Amazonian basin, and differences in ecological specializations make it possible to find as many as 40 species in the same area [3].
Morphologically, typical antbirds show considerable variation in size and in the patterns and colors of the plumage (black and shades of grey, buff and chestnut, with sexual plumage dimorphism in many species), while the variation in shape is more restricted. Many insectivorous niches are occupied, but the specialization of some species to follow army ants (to capture escaping insects) is perhaps the most well known. This habit has also given rise to the vernacular family name. In traditional classifications, the antpittas (Grallariidae) and antthrushes (Formicariidae) were grouped together with typical antbirds in an even larger family. However, the support for the expanded antbird family was indeed weak, and both morphological [4-6] and molecular [1, 7] evidence suggests that antpittas and antthrushes are distantly related to typical antbirds. DNA sequence data [1, 8] suggest that the gnateaters (Conopophagidae) form the sister clade to typical antbirds, while antpittas and antthrushes are more closely related to tapaculos (Rhinocryptidae), woodcreepers and ovenbirds (Furnariidae). Even though the monophyly of typical antbirds seems to be well supported by both syrinx morphology [6] and molecular data [1, 7], the phylogenetic relationships within this assemblage are poorly understood, with the confusion extending to all taxonomic levels. The monophyly of several genera of typical antbirds has been questioned [3, 9, 10], as has the delimitation of certain species [2, 11-14]. Some species have also been moved from one genus to another (e.g., the black-hooded antwren, which has been moved from the genus Myrmotherula to Formicivora [15]). The current knowledge of the phylogenetic relationships among typical antbirds rests mainly on interpretations drawn from external features, mostly of bill and feet, and has remained essentially the same for 150 years [2]. As typical antbirds are morphologically and ecologically diverse, they form a challenging group for studies of, e.g., adaptive evolution. However, such studies, as well as biogeographic interpretations, are difficult to make as long as there is no phylogenetic hypothesis. The aim of this study is therefore to create a hypothesis of the generic relationships of typical antbirds that could be used as a framework for more detailed studies of the evolution of the group. Two nuclear introns, intron 2 in myoglobin and intron 11 in the glyceraldehyde-3-phosphodehydrogenase gene (G3PDH), and the mitochondrial cytochrome b gene have been sequenced for 51 typical antbird taxa representing 38 of the 45 genera recognized by Ridgely and Tudor [3]. We have used Bayesian inference and Markov chain Monte Carlo (MCMC) to estimate the phylogenetic relationships. A common assumption made by molecular systematists is that gene trees accurately reflect species trees. Nevertheless, different data partitions may have different phylogenies due to processes such as lineage sorting, gene duplication followed by extinction, and lateral transfer by hybridization and introgression (reviewed in [16-18]). Primarily, there are two contradictory strategies for handling data sets with significant phylogenetic incongruence between independent data partitions. Advocates of a "total evidence approach" (e.g., [19, 20]) suggest that available data should always be combined, even though individual data partitions might be partly incongruent.
The arguments are that a combination of different data partitions might improve the total resolution, as different data partitions might be useful for resolving different areas of the tree, and that additive data sets might enhance phylogenetically informative characters that have been hidden by noise in the individual partitions. Opponents of this view (e.g., [21, 22]) advise that data partitions with a significant level of incongruence should not be combined, as reliable characters might be obscured by random or systematic errors and in the worst case result in an erroneous topology (even though individual data partitions might provide consistent estimates). However, when independent evidence is lacking and incongruence occurs between individual data partitions, it may be difficult to determine whether particular partitions are better estimates of the species tree than others. Researchers might favor the "total evidence approach" for this particular reason (even though the argument for not combining data partitions with significant levels of incongruence has strong merits). However, the degree of incongruence between individual gene trees could be used to determine whether the phylogenetic conclusions should be based on the combined data set, or only on those parts that are similar among the different partitions. A commonly used approach for analysing combined data with maximum likelihood is to assume a single (the same) substitution model for all of the combined genes (for exceptions, see [23, 24]). A significant result of incongruence between the combined result and the individual genes can then be hard to explain, since the incongruence could be due both to true differences in gene phylogeny and to a misfit in the assumed model of evolution for the combined data [21, 25]. This misfit could, for example, be a result of not allowing a heterogeneous model, that is, not allowing the different genes to have separate substitution models in the combined analysis [26]. We have thus explored our data partitions (the individual genes) with the congruence test described by Nylander et al. [27], which utilizes Bayes factors. The test is not an explicit significance test but compares the strength of evidence between two models of character evolution. Although nuclear genes (e.g., when situated on different chromosomes) may be considered members of different linkage groups, the maternally inherited mitochondrial genome is effectively independent from the nuclear genome. Organelle genomes have also been suggested to be more susceptible to "flow" between taxa during hybridization (although this is much less common in animals than in plants). In birds, Degnan and Moritz [28] and Degnan [29], for example, have demonstrated that the mitochondrial tree in Australian white-eyes misrepresented the tree of nuclear loci and the expected species tree, possibly due to previous hybridization events. We have thus primarily been interested in the potential incongruence between the mitochondrial cytochrome b and the two nuclear genes (myoglobin and G3PDH), but all combinations of the three genes were examined. However, limitations in the substitution models might be the most important explanation for observed incongruence between data partitions, rather than an intrinsic phylogenetic incongruence [27]. We also explored how changes in substitution models affected the observed incongruence in our data set. Results Molecular variation and sequence distances After alignment, the concatenated sequences were 2173 bp long.
A total of between 679 bp (Sclerurus scansor) and 723 bp (Myrmotherula leucophthalma) was obtained from myoglobin intron 2, between 351 bp (Rhegmatorina melanosticta) and 400 bp (Myrmeciza griseiceps) from G3PDH intron 11, and 999 bp from cytochrome b. The observed pairwise distances between ingroup taxa range between 0.7% and 10.7% in myoglobin, between 0.3% and 19.3% in G3PDH, and between 6.5% and 23.9% in cytochrome b. Indels were found in both the myoglobin intron 2 and the G3PDH intron 11. In most cases these are autapomorphic indels or occur in especially variable and repetitive regions. Given the tree topologies obtained from the Bayesian analyses, some synapomorphic indels were observed. For example, all Thamnophilus representatives share with Sakesphorus bernardi an insertion in the G3PDH intron, and, together with Dysithamnus mentalis and Herpsilochmus atricapillus, an insertion in the myoglobin intron. Phylogenetic inference and molecular models A priori selection of substitution models showed that fairly parameter-rich models were the best fit for all data partitions. Importantly, modeling rate variation seemed to be an important component. For the cytochrome b partition the GTR+I+Γ model was the best fit, and for myoglobin intron 2 it was GTR+Γ. For the G3PDH intron 11 the somewhat simpler HKY+Γ model was chosen. These models were used in the subsequent MCMC analyses of the individual genes as well as in the combined analysis. The parameter estimates from the two separate MCMC runs for each data set were found to be very similar (data not shown), thus allowing an inference from the concatenated output. After discarding the burn-in phase, the inference for cytochrome b was based on a total of 36,000 samples from the posterior, for myoglobin the inference was based on 38,000 samples, and for G3PDH and the combined data, inferences were based on 38,000 and 55,600 samples, respectively. For the phylogenetic inference, the mode of the posterior distribution of topologies was presented as a majority-rule consensus tree from each analysis (Figures 1, 2, 3, 4). Figure 1 The G3PDH majority rule consensus tree. The 50% majority rule consensus tree obtained from the Bayesian analyses of the G3PDH (glyceraldehyde-3-phosphodehydrogenase) intron 11 data set. Posterior probability values are indicated to the right of the nodes. Figure 2 The myoglobin majority rule consensus tree. The 50% majority rule consensus tree obtained from the Bayesian analyses of the myoglobin intron 2 data set. Posterior probability values are indicated to the right of the nodes. Figure 3 The cytochrome b majority rule consensus tree. The 50% majority rule consensus tree obtained from the Bayesian analyses of the cytochrome b data set. Posterior probability values are indicated to the right of the nodes. Figure 4 The combined majority rule consensus tree. The 50% majority rule consensus tree obtained from the analyses of the combined data set (the G3PDH intron 11, myoglobin intron 2 and cytochrome b data sets). Clades A, B and C are major groups of typical antbirds discussed in the text. Posterior probability values are indicated to the right of the nodes. The trees obtained from the Bayesian analyses of the individual genes (cytochrome b, myoglobin and G3PDH) and the combined data set all differ in topology and degree of resolution. The G3PDH gene produced the poorest-resolved tree (Figure 1) and also contains the smallest number of nodes with posterior probability values above 0.90.
The myoglobin (Figure 2) and cytochrome b (Figure 3) genes produced trees with similar degrees of resolution and nodal support, but there is a weak tendency for cytochrome b to give better resolution and support at terminal nodes. The combined data set (cytochrome b, myoglobin and G3PDH) produced the most resolved tree (Figure 4), with the highest number of strongly supported nodes (exceeding 0.90 posterior probability). Overall, the myoglobin, cytochrome b and combined trees are topologically rather similar, while the G3PDH tree is the most deviant. A common pattern in all trees is that several nodes are unresolved, or short with low or intermediate posterior probability support values (0.50–0.90). The observed topological conflicts between the obtained trees generally occur at these short nodes, and there are only a few nodes with posterior probability values above 0.90 that are in conflict between the trees. Of these, one concerns the outgroup relationships (the G3PDH tree supports, with 0.96 posterior probability, a position of Pteroptochos tarnii that differs from all other trees). The other two conflicts concern internal relationships within well-supported sub-clades: the cytochrome b tree places, with 0.98 posterior probability, Myrmotherula menetriesii basal to a clade consisting of Myrmotherula axillaris, Myrmotherula behni and Formicivora rufa. In the combined tree, Myrmotherula menetriesii is instead nested within this clade with 1.00 posterior probability. The myoglobin tree suggests, with 0.94 posterior probability, that Taraba major is basal to Batara cinerea and Hypoedaleus guttatus, while Taraba major is basal also to Mackenziaena severa and Frederickena unduligera, with 0.99 posterior probability, in both the combined and the cytochrome b trees. However, most suggested relationships are congruently supported by more than one of the trees obtained from the individual genes and by the combined data set. Several clades are also supported by all three gene trees as well as by the combined data set, including the recognition of monophyletic origins of 1) the "large antshrikes" (Taraba major, Batara cinerea, Hypoedaleus guttatus, Mackenziaena severa, and Frederickena unduligera), 2) the "professional" ant-following antbirds (Pithys albifrons, Phlegopsis erythroptera, Phaenostictus mcleannani, Rhegmatorhina melanosticta and Gymnopithys leucaspis), 3) a Sakesphorus-Thamnophilus antshrike lineage (Sakesphorus bernardi and the five representatives of the genus Thamnophilus), and 4) a clade consisting of the wing-banded antbird (Myrmornis torquata), the spot-winged antshrike (Pygiptila stellaris) and the russet antshrike (Thamnistes anabatinus). Sister-group relationships between the antvireos (Dysithamnus mentalis) and the Herpsilochmus antwrens (Herpsilochmus atricapillus), as well as between Myrmotherula obscura and Myrmochanes hemileucus, are also recognized by all trees. Based on the tree obtained from the Bayesian analysis of the combined data set, typical antbirds can also be divided into three major clades (marked as A, B and C in Figure 4). The first clade (clade A) includes four genera that are suggested to have a basal position in relation to all other typical antbirds (1.00 posterior probability in the combined tree).
This basal group (supported by 0.72 posterior probability in the combined tree) includes the representative of the Terenura antwrens (Terenura humeralis), the wing-banded antbird (Myrmornis torquata), the spot-winged antshrike (Pygiptila stellaris) and the russet antshrike (Thamnistes anabatinus). The second clade (clade B, Figure 4) is supported by 0.95 posterior probability in the combined tree and includes all antshrike genera (except the spot-winged antshrike and the russet antshrike, see clade A), the antvireos (Dysithamnus), the Herpsilochmus antwrens and the banded antbird (Dichrozona cincta). Within this large clade several lineages occur that receive more than 0.95 posterior probability. Noticeable within this clade is that neither the analyses of the individual genes nor the combined data set conclusively support that the representative of the antshrike genus Sakesphorus (Sakesphorus bernardi) is phylogenetically separated from the Thamnophilus antshrikes. The last clade (clade C, Figure 4), including the Myrmeciza antbirds, most antwren genera (e.g., Myrmotherula and Formicivora), the "professional" ant-following antbirds, and some allied species, is supported by a 1.00 posterior probability value. Also within this clade, several lineages are supported by posterior probability values above 0.90. However, the most interesting observation is the strong support for a polyphyletic origin of the Myrmeciza antbirds and the Myrmotherula antwrens. Tests of incongruence The Bayes factor tests showed extensive incongruence between partitions, at least in the sense that relaxing the assumption of a common topology parameter always gave a better model likelihood (Table 1). For example, allowing the cytochrome b partition to have a separate topology from the two nuclear partitions myoglobin and G3PDH gave a 2logB 12 of 60.8. This value strongly suggests that an unlinked model is superior to the model assuming a common topology parameter for all partitions. This would also suggest that there is strong conflict between the mitochondrial and the nuclear partitions. However, this inference is far from conclusive when we consider the linking of the topology parameter for other combinations of the data. Combining the topology parameter for either one of the nuclear partitions with the mitochondrial partition actually gives a better model (higher Bayes factors) than considering the mitochondrial vs. the nuclear partitions (Table 1). For example, compared to the model that assumes a common topology parameter, unlinking the myoglobin partition from the others gave a 2logB of 102.26. Unlinking the G3PDH partition gave an even better model, with a 2logB of 118.12. Furthermore, if we had to choose the one partitioning scheme with the highest model likelihood, the model allowing a separate topology parameter for all partitions would be the clear choice (having a 2logB of 241.36 compared to the common model). Table 1 Summary of Bayes factor tests of incongruence. Entries are twice the log of the Bayes factor in the comparison between models M 1 and M 2 (2logB 12). The row models are arbitrarily labeled M 1; thus, positive values indicate support for the column model over the row model. A dash (-) indicates which partitions have linked topology parameters.
Model                Cyt b-Myo-G3PDH   Cyt b, Myo-G3PDH   Cyt b-Myo, G3PDH   Cyt b-G3PDH, Myo   Cyt b, Myo, G3PDH
Cyt b-Myo-G3PDH      0                 60.84              118.12             102.26             241.36
Cyt b, Myo-G3PDH                       0                  57.28              41.42              180.52
Cyt b-Myo, G3PDH                                          0                  -15.86             123.24
Cyt b-G3PDH, Myo                                                             0                  139.1
Cyt b, Myo, G3PDH                                                                               0
The parsimony-based ILD test did not find significant incongruence between the three gene partitions (p = 0.967). Discussion Phylogenetic incongruence between gene partitions Allowing the gene partitions to have separate topology parameters clearly increased the model likelihood. That is, the unlinked models clearly had a better fit to the data than the linked models. Judging from the absolute value of the 2logB (Table 1), we are inclined to conclude that we should treat each partition as having its own posterior distribution of trees. However, the question is whether we can really say from these results that the gene partitions evolved on different phylogenies. There are several reasons why different data partitions may have different phylogenies, despite being sampled from the same taxa, or even the same individuals (see above). We cannot completely rule out the occurrence of any of these processes in our data. However, we believe that an interpretation based solely on Bayes factors might be hazardous. For instance, is it plausible that all three gene partitions evolved on three different phylogenies, or that the linking of cytochrome b and myoglobin is a more reasonable partition of the data than the mitochondrial versus the nuclear partitions? Nylander et al. [27] speculate that limitations in the substitution models might be a more reasonable explanation for the high Bayes factors observed when comparing unlinked and linked models. Changing a component of the nucleotide substitution model, e.g. adding parameters to model rate variation, had a much higher impact on the model likelihood than unlinking parameters among data partitions. To illustrate the impact of changing the substitution model in our data, we ran additional MCMC analyses under a different set of models, and compared them with the previous analyses using Bayes factors. The results were striking (Table 2). For example, we compared two models without rate variation, one with linked and the other with unlinked topologies (in both models GTR was used for cytochrome b and for myoglobin, and HKY for G3PDH). The 2logB was 295.98 in favor of the unlinked model. However, adding parameters for modeling rate variation to either of the two models increased the model likelihood tremendously. The 2logB in favor of a model having parameters for rate variation (applying the same substitution models as the ones chosen a priori using AIC, see material and methods) varied between 5125.22 and 5662.56, depending on the model being compared (Table 2). Similar observations of magnitude changes in Bayes factors were made by Nylander et al. [27] when allowing rate variation. Another striking feature was that once parameters for modeling rate variation had been incorporated into the model, unlinking topologies did not seem to have as pronounced an effect on the model likelihood (Table 2), compared to the models without rate variation. This observation is in concordance with previous findings that many functional genes have strong among-site rate variation and that adding the relevant parameters to the model is likely to have a large effect on the likelihood [23, 27, 30, 31]. Table 2 Summary of Bayes factor tests showing the effect of changing substitution model components.
Entries are twice the log of the Bayes factor in the comparison between models M 1 and M 2 (2logB 12). The row models are arbitrarily labeled M 1; thus, positive values indicate support for the column model over the row model. A dash (-) indicates which partitions have linked topology parameters. Asterisks (*) indicate models where the rates are assumed to be equal.
Model                Cyt b-Myo-G3PDH   Cyt b, Myo, G3PDH   Cyt b-Myo-G3PDH*   Cyt b, Myo, G3PDH*
Cyt b-Myo-G3PDH      0                 241.36              -5421.2            -5125.22
Cyt b, Myo, G3PDH                      0                   -5662.56           -5366.58
Cyt b-Myo-G3PDH*                                           0                  295.98
Cyt b, Myo, G3PDH*                                                            0
It is worth noting that the parsimony-based ILD test did not find significant incongruence between the three gene partitions. The value of this observation is uncertain, however, as the ILD test is based on another optimality criterion (parsimony). Furthermore, the strength of the test and the interpretation of its results have also been questioned (e.g., [32]). In conclusion, allowing partitions to have separate topology parameters puts fewer restrictions on the data. Hence, we should expect to find a better fit of the model to the data. Bayes factors seem promising for evaluating the relative contribution of components to an evolutionary model. However, judging from the relative increase in model likelihood when unlinking topologies compared to, e.g., adding parameters for rate variation, we would anticipate components of the substitution model (for example, allowing rate variation among lineages) to have a greater effect on accommodating incongruence in the data. That is, even if we find strong evidence for a model allowing separate topology parameters, this might not mean strong evidence for separate gene phylogenies, as long as vital components of the substitution model are still missing. For further discussion of Bayesian approaches to combined data issues see, e.g., [25, 26, 33]. Phylogeny and morphological variation in typical antbirds Even though we are unable to tell conclusively whether the observed phylogenetic incongruence between the individual gene partitions is due to genuine differences in phylogeny, or to limitations in the models used, we believe that the tree obtained from the combined data set represents the best estimate of the true relationships within the typical antbird assemblage. Obviously, several relationships are strongly supported, by congruent recognition in the individual gene trees and/or by high nodal support values. Nevertheless, other relationships have to be regarded as tentative, especially those where any of the individual gene trees gives strong nodal support for an alternative topology. It is noticeable that, although the individual genes congruently support several terminal groups, basal relationships are generally less well resolved and more often in conflict. Even though this observation might be biased due to the use of improper molecular models when calculating the trees, biased mutation rates in the studied genes, or a biased taxon sampling, it could indicate that the diversification of typical antbirds was characterized by some rapid bursts of speciation. There are only a few recent studies of typical antbirds with taxon samplings that include representatives from several genera, but these studies show similar difficulties in resolving generic relationships.
For example, in a study of phylogenetic relationships of Myrmotherula antwrens that included representatives from several other typical antbird genera, Hackett and Rosenberg [10] obtained considerably different topologies from plumage characters, allozyme and morphometric data, respectively. In addition, the phylogenetic relationships suggested from mitochondrial DNA sequence data for a partly comparable taxon sampling [9] have little resemblance to those in Hackett and Rosenberg [10]. The nodes between typical antbirds in the DNA-DNA hybridization "tapestry" by Sibley and Ahlquist [[7]: Figure 372] also contain a high proportion of short branches. It is also apparent that earlier antbird taxonomists, using external morphology, had difficulties in their taxonomic decisions and interpretations of higher-level relationships. Ridgway [[34]: p. 9] remarked that "The classification of this group is very difficult, more so probably than in the case of any American family of birds". Hackett and Rosenberg [10] concluded that antwren speciation has mainly been followed by plumage differentiation (and to some degree size differentiation) rather than changes in body proportions. Overall, this evolutionary pattern, with great changes in plumage and more limited changes in body proportions, seems to characterize the entire typical antbird assemblage (in contrast to the situation in ovenbirds, where there is great variation in body proportions but not in plumage characters). However, Hackett and Rosenberg [10] suggested that neither plumage nor morphometric data correctly predicted the genetic relationships among the studied taxa. Our results seem to support their assumption, as the plumage characters traditionally used in typical antbirds, such as stripes, wingbars, and general coloration, seem to be irregularly distributed in the phylogenetic tree. It is reasonable to assume that plumage characters in typical antbirds are variable to such a degree that they are of limited use in studies of higher-level relationships. High levels of homoplasy (convergences and reversals) in plumage characters have also been reported in other passerine birds, e.g., in Australian scrubwrens [35], brush-finches [36], and New World orioles [37]. However, if members of the "basal" group (clade A, Figure 4) and a few other aberrant taxa are excluded, the division of typical antbirds into the two main lineages in our phylogeny (clades B and C, Figure 4) is overall in good agreement with their body proportions (although there is considerable size variation within both clades). The antshrikes (excluding Thamnistes and Pygiptila), antvireos and Herpsilochmus antwrens in clade B (Figure 4) are all more or less robust birds with heavy and prominently hooked bills, and many of them have a barred plumage pattern. The taxa in clade C (Figure 4), which includes most antwren genera, the Myrmeciza antbirds, the "professional" ant-following antbirds and some allied species, are generally slimmer birds with longer, thinner bills that have a less prominent hook. Most suggested relationships within clades B and C are in good agreement with traditional classifications. The recognition of monophyletic origins of most of the "professional" ant-following taxa (Phaenostictus, Gymnopithys, Rhegmatorhina, Pithys and Phlegopsis) and the "large" antshrikes (Taraba, Hypoedaleus, Batara, Frederickena and Mackenziaena) are two examples where our results are congruent with traditional classifications.
The suggested relationships between the Hypocnemis and Drymophila antbirds, and between the Herpsilochmus antwrens and the antvireos (Dysithamnus), respectively, have also been proposed previously based on molecular data [9, 10]. Unfortunately, the genera Biatas, Clytoctantes, Percnostola, Rhopornis, Stymphalornis and Xenornis were lacking in our study; while most of these should probably be referred to clade C, Biatas is difficult to place. Some novel relationships and the phylogenetic positions of some aberrant taxa For certain taxa the position in our combined phylogeny is unexpected considering their external morphology and traditional classification. Most noticeable are the position of the banded antbird (Dichrozona cincta), which is nested within the clade with antshrikes, antvireos and Herpsilochmus antwrens (clade B, Figure 4), and the position of the wing-banded antbird (Myrmornis torquata) as sister to the russet antshrike (Thamnistes anabatinus) and the spot-winged antshrike (Pygiptila stellaris) (clade A, Figure 4). However, the increased number of molecular-based phylogenies in recent years has led to the discovery of several examples, at different phylogenetic levels, where birds have been misclassified due to significant morphological differences from the taxa to which they are most closely related [38-40]. The phylogenetic position of the wing-banded antbird (Myrmornis torquata) has long been obscure, and it was formerly placed with the typical army-ant followers (e.g., [2]). The wing-banded antbird has also been suspected to be related to the ground antbirds (Formicariidae sensu [7]) based on similarities in morphology and general appearance [7]. Our results confidently place it within the typical antbirds, a conclusion further supported by its vocalization [2], its choice of nest site and its white egg [41]. The well-supported relationship to the arboreal russet antshrike (Thamnistes anabatinus) and spot-winged antshrike (Pygiptila stellaris), suggested by our data, has apparently been obscured by structural differences caused by its adaptation to a terrestrial life-style shared with, for example, the antthrushes. A similar explanation may apply to the peculiar position of the banded antbird (Dichrozona cincta) in the combined phylogeny, as this taxon is also a mainly terrestrial bird, unlike the other members of the "antshrike" clade (clade B, Figure 4). The fact that the banded antbird has a rather long branch in the combined tree and that its phylogenetic position alters between the individual gene trees leads us to consider the phylogenetic position of the banded antbird (Dichrozona cincta) preliminary. However, it is obvious that it is not closely related to the Hylophylax antbirds with which it has traditionally been grouped (based on similarities in plumage patterns and weak sexual dimorphism). It should be noted that, due to the peculiar position of Dichrozona cincta, a second individual (ZMUC 128217) was sequenced for all three genes. No variation at all was found between the two individuals in G3PDH; in myoglobin, 1 ambiguous position was found; and in cytochrome b, 24 base pairs (2.4%) differed and 3 ambiguous positions were found. Overall, this variation is within the range that could be expected between individuals of a species. Thus, the strange position of Dichrozona cincta in our analyses is unlikely to be due to a sample or sequence mix-up.
There are several other, less striking examples where the position of taxa in our phylogeny conflicts with relationships suggested in classifications based on external morphology. The Herpsilochmus antwrens, for example (traditionally placed among the Myrmotherula, Microrhopias and Formicivora antwrens), are quite different in appearance from their sister group Dysithamnus in being rather slim, lacking a prominently hooked bill, and having a distinctly patterned plumage (however, as discussed above, a close relationship between Herpsilochmus and Dysithamnus is also supported by an independent molecular study). Other examples are the positions of Myrmorchilus and Neoctantes, respectively (see discussion below). In these cases their true affinities may have been obscured by morphological adaptations to habitats or food sources that differ from those preferred by their closest relatives. The strong support in the combined tree for basal positions of Myrmornis, Pygiptila, Thamnistes and Terenura relative to all other typical antbirds is perhaps the most unexpected result of our study. In a majority of classifications Terenura is placed close to the other antwrens, but with no strong data support. Although the precise position of the Terenura antwrens is partly ambiguous in our analysis, they obviously belong to an ancient radiation that is only distantly related to the other "antwrens". The Terenura antwrens differ from other "antwrens" in plumage pattern and in being more slender and warbler-like, with a thinner bill and longer tail. In a study based on mitochondrial DNA, the position of Terenura was ambiguous depending on how the data set was analyzed [9], but it was clearly not closely related to the other taxa included in that study (e.g., Myrmotherula, Formicivora, Herpsilochmus, Hypocnemis, Drymophila). The well-supported phylogenetic position of Pygiptila and Thamnistes as the sister group to Myrmornis (instead of being close to other antshrikes, as suggested in many linear classifications) is novel. However, Pygiptila and Thamnistes resemble each other in their ways of feeding in the sub-canopy, with Thamnistes also resembling the Pygiptila female in appearance and differing from most antshrikes in feeding behavior. DNA-DNA hybridization data [7] and protein electrophoresis [10] have previously shown Pygiptila to be genetically distant from the Thamnophilus antshrikes. The general external resemblance of Pygiptila and Thamnistes to other antshrikes is therefore best explained as being plesiomorphic, and this may also be the case with their suspended nest-type. The polyphyly of Myrmotherula antwrens and Myrmeciza antbirds Our results confirm both previous molecular studies suggesting that the Myrmotherula antwrens are polyphyletic [9, 10], and the morphology-based suspicion that the rather diverse genus Myrmeciza also constitutes an unnatural taxon [3]. Nevertheless, most Myrmeciza antbirds studied herein belong to the same clade, although they are not monophyletic, as several other genera (Myrmoborus, Gymnocichla, Pyriglena, Sclateria, Schistocichla, Hypocnemoides, and Hylophylax) are nested among them. However, the chestnut-tailed antbird (Myrmeciza hemimelaena), which represents a group of small and slim Myrmeciza antbirds with prominent wing spots in both sexes, groups with the Drymophila, Hypocnemis and Cercomacra antbirds.
The small and slim Myrmeciza antbirds morphologically resemble the Hypocnemis antbirds in having similar wing spots as well as a rather short, rufous-brown tail. The clade that includes the remaining Myrmeciza antbirds consists of three unresolved lineages. The first includes a group of large and heavily built Myrmeciza antbirds (represented by Myrmeciza fortis). Next outside this group is the fire-eye (Pyriglena leuconota), followed by the bare-crowned antbird (Gymnocichla nudiceps) and the Myrmoborus antbird representative (Myrmoborus myotherinus). These taxa have rather stout bodies and in most cases red eyes. Both the fire-eyes and the bare-crowned antbird were previously assumed to be related to the large, heavy-billed Myrmeciza antbirds (e.g., [3]). The second lineage consists of the silvered antbird (Sclateria naevia) and the Schistocichla antbird representative (Schistocichla leucostigma). These relationships are in good agreement with the overall plumage characters of these taxa [3], with the males being rather uniformly gray while the females are rufous. Such a plumage is also found in the genus Percnostola, with which the Schistocichla antbirds are considered to be most closely related (Schistocichla and Percnostola have even been regarded as congeneric, but it has also been suggested that Percnostola could be polyphyletic). In the third lineage, Myrmeciza griseiceps and Myrmeciza berlepschi form the sister clade to Myrmeciza loricata, Hypocnemoides maculicauda and Hylophylax naevia (the latter two are sister taxa). This group consists of rather typically shaped and sized "Myrmeciza" antbirds. Although it has a shorter tail, Hylophylax naevia shares its plumage pattern with Myrmeciza loricata (Hypocnemoides maculicauda is more discretely patterned), while Myrmeciza griseiceps and Myrmeciza berlepschi, on the other hand, are more uniformly colored birds. A non-monophyletic origin of the Myrmotherula antwrens, suggested by our data, agrees with the results of previous molecular studies [9, 10]. The results also support Hackett and Rosenberg's [10] protein electrophoresis data suggesting that the "gray" and "streaked" forms of Myrmotherula antwrens are more closely related to each other than either is to the "checker-throated" forms. The combined tree (Figure 4, clade C) suggests that the Myrmotherula antwrens evolved along two separate phylogenetic lineages. In the first, the "checker-throated" forms (Myrmotherula fulviventris and Myrmotherula leucophthalma) group with the black bushbird (Neoctantes niger) and constitute the sister group to the dot-winged antwren (Microrhopias quixensis) and the stripe-backed antbird (Myrmorchilus strigilatus). Based on external morphology these taxa indeed constitute a rather heterogeneous group. For example, the stripe-backed antbird has previously been suggested to be related to the Formicivora and Drymophila antwrens [42], which are distantly related according to our results. However, Neoctantes, Microrhopias and Myrmorchilus are monotypic genera that lack obvious close relatives. Myrmorchilus is essentially a terrestrial bird, living in chaco scrub, thus differing in habits and habitat from the "typical" antwren lifestyle. Neoctantes lives in humid forest like most Myrmotherula antwrens, but its bill is modified to hammer on stems, vines etc., and to be used as a wedge to pry off strips of bark [2].
The morphological differences between Neoctantes and Myrmorchilus on the one hand, and the "checker-throated" Myrmotherula antwrens on the other, could thus be the result of adaptive specializations in the former taxa. In the second lineage of Myrmotherula antwrens, the "streaked" forms, represented by the short-billed antwren (Myrmotherula obscura) and the black-and-white antbird (Myrmochanes hemileucus), form the sister group to the "gray" forms (represented by Myrmotherula menetriesii, axillaris and behni) and Formicivora rufa. Although the support for nesting Formicivora rufa among the "gray" forms of Myrmotherula is rather weak, it suggests that the generic boundary between Formicivora and the "gray" Myrmotherula antwrens is far from unambiguously settled. This is also indicated by the recent transfer of the black-hooded antwren from the genus Myrmotherula to Formicivora [15]. Bates et al. [9] also found a close relationship between Myrmotherula longipennis (belonging to the "gray" form of Myrmotherula antwrens) and the genus Formicivora (Formicivora grisea and Formicivora rufa). Conclusions The phylogenetic results support a division of most antbirds into two major clades that are in broad agreement with traditional classifications. The first clade includes most antshrike genera, the antvireos and the Herpsilochmus antwrens, while the second clade consists of the Myrmeciza antbirds, the "professional" ant-following antbirds, and allied species. However, some relationships within these clades, as well as the support for the Terenura antwrens, the wing-banded antbird (Myrmornis torquata), the spot-winged antshrike (Pygiptila stellaris) and the russet antshrike (Thamnistes anabatinus) being basal to all other typical antbirds, are unexpected based on external morphology. Possibly the true affinities of these taxa have been obscured by morphological convergence due to adaptations to new habitats or food sources. Our results also strongly support that both the Myrmeciza antbirds and the Myrmotherula antwrens are unnatural groupings in need of taxonomic revision. Certain other taxa may also be unnatural units, but definitive conclusions must await future analyses involving more taxa. Methods Taxon sampling, amplification and sequencing In total, 51 typical antbird species were selected for the molecular analyses, including representatives from 38 of the 45 genera recognized by Ridgely and Tudor [3]. From some antbird genera (Myrmeciza, Myrmotherula and Thamnophilus) several species were included, as the monophyly of these genera had been questioned [3, 9, 10]. The phylogenetic trees were rooted using representatives from major furnariid lineages suggested by Irestedt et al. [1]. Sample identifications and GenBank accession numbers are given in Table 3 (see additional file 1).
Nucleotide sequence data were obtained from two nuclear introns, myoglobin intron 2 and glyceraldehyde-3-phosphodehydrogenase (G3PDH) intron 11, and from the mitochondrial cytochrome b gene. The complete myoglobin intron 2 (along with 13 bp and 10 bp of the flanking regions of exons 2 and 3, respectively), corresponding to the region between positions 303 (exon 2) and 400 (exon 3) in humans (GenBank accession number XM009949), and the complete G3PDH intron 11 (including 36 bp and 18 bp of exons 11 and 12, respectively), corresponding to the region 3915 to 4327 in Gallus gallus (GenBank accession number M11213), were sequenced. From the cytochrome b gene, 999 bp were obtained, corresponding to positions 15037 to 16035 in the chicken mitochondrial genome sequence [43]. Some indels were observed in the alignments of myoglobin intron 2 and G3PDH intron 11, respectively (see results), but all gaps in the sequences were treated as missing data in the analyses. No insertions, deletions, stop or nonsense codons were observed in any of the cytochrome b sequences. Extraction, amplification and sequencing procedures for cytochrome b and myoglobin intron 2 follow the descriptions in Ericson et al. [44] and Irestedt et al. [1]. A protocol described by Fjeldså et al. [45] was followed for the amplification and sequencing of the G3PDH intron. For each gene and taxon, multiple sequence fragments were obtained by sequencing with different primers. These sequences were assembled into complete sequences with SeqMan II™ (DNASTAR Inc.). Positions where the nucleotide could not be determined with certainty were coded with the appropriate IUPAC code. Due to the rather low number of insertions in myoglobin intron 2 and G3PDH intron 11, the combined sequences could easily be aligned by eye. Phylogenetic inference and model selection We used Bayesian inference and Markov chain Monte Carlo (MCMC) for estimating phylogenetic hypotheses from DNA data (see recent reviews by Holder and Lewis [46] and Huelsenbeck et al. [47]). Bayesian inference of phylogeny aims at estimating the posterior probabilities of trees and other parameters of an evolutionary model. Importantly, two components need to be specified (apart from the data): the model of nucleotide substitution and the prior distributions for the parameters in that model. The models for nucleotide substitution were selected for each gene individually, prior to the MCMC, using the Akaike Information Criterion (AIC [48]). This was done using the program MrModeltest [49] in conjunction with PAUP* [50]. Specifically, MrModeltest compares 24 standard substitution models, including models allowing rate variation, utilizing the likelihood scores calculated by PAUP* on an initial, approximate phylogeny (see e.g., [51]). After models had been selected for the individual gene partitions, prior distributions for the model parameters were specified. For stationary state frequencies, we used a flat Dirichlet prior, Dir(1, 1, 1, 1). A Dirichlet prior, Dir(1, 1, 1, 1, 1, 1), was also used for the nucleotide substitution rate ratios of the general time-reversible model (GTR [52-54]). A Beta distribution, Beta(1, 1), was used for the transition/transversion rate ratio of the Hasegawa-Kishino-Yano model (HKY [55]). A uniform prior, Uni(0.1, 50), was used on the shape parameter of the gamma distribution of rate variation (Γ [56]), and a Uni(0, 1) prior was used for the proportion of invariable sites (I [57]).
An exponential prior, Exp(10), was used for branch lengths, and all trees were assumed to be equally likely (a flat prior on topology). The posterior probabilities of trees and parameters in the substitution models were approximated with MCMC and Metropolis coupling using the program MrBayes [58]. The gene partitions were analyzed both separately and combined. In the combined analysis, each gene partition was allowed to have separate parameters by using a rate multiplier [27, 58, 59]. One cold and three incrementally heated chains were run for 3 million generations, with a random starting tree and a temperature parameter value of 0.2. Trees were sampled every 100th generation, and the trees sampled during the burn-in phase (i.e., before the chain had reached its apparent target distribution) were discarded. Two runs, starting from different, randomly chosen trees, were made to ensure that the individual runs had converged on the same target distribution [60]. Convergence of parameters was checked by examining parameter means and variances between runs. After checking for convergence, final inference was made from the concatenated output from the two runs. A Bayesian test of incongruence Bayesian methods provide ways not only to estimate posterior probabilities for trees and parameters in a model, but also to evaluate the model itself. Bayes factors [61] allow us to make sophisticated comparisons between models used in phylogenetic analyses [27, 62, 63]. Bayes factors measure the strength of evidence in favor of one model M 1 compared to another M 2, given the data X, and are calculated as the ratio of the model likelihoods, B 12 = f(X|M 1)/f(X|M 2). The model likelihoods f(X|M i) are difficult to calculate analytically but can be estimated by using the output from an MCMC [27, 62]. Here we explore the congruence test described by Nylander et al. [27], which utilizes Bayes factors. The test is not a significance test but merely compares the strength of evidence between two models of character evolution. In the first model, data partitions are allowed to have their own unique sets of substitution parameters, but we assume the data to have evolved on the same topology, with partition-specific branch lengths. Strictly speaking, we are restricting the data partitions to have the same posterior distribution for topologies, but (potentially) different distributions for all other parameters. In the second model we relax the assumption of a single distribution of topologies for all data partitions. That is, if the data partitions (genes) truly evolved on different phylogenies, they are allowed to do so in the model. The comparison or 'test' is to see if the second model provides evidence compelling enough for it to be accepted as superior. Here we use the log of the Bayes factor; a value of >10 for 2logB 12 has been suggested as strong evidence against the alternative model, M 2 [61]. To carry out the incongruence test we utilized the unlink command in MrBayes, which allows parameters as well as topologies to be unlinked between partitions. We calculated Bayes factors and compared the effects on the model likelihood of linking or unlinking topologies between all the gene partitions. We were primarily interested in the potential incongruence between the mitochondrial cytochrome b partition and the two nuclear partitions, myoglobin and G3PDH, but all combinations of the three genes in our data set were examined.
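To make the MCMC-based estimation of model likelihoods concrete, the following minimal Python sketch (an illustration added here, not part of the original study) computes the harmonic mean estimator of the log marginal likelihood from the sampled log likelihoods of a run and then forms 2logB 12. The file names, column positions and burn-in handling are hypothetical, and the harmonic mean estimator is known to be unstable, so its output should be interpreted with caution.

    import numpy as np

    def harmonic_mean_log_ml(log_likelihoods):
        # Harmonic mean estimator of the log marginal likelihood from
        # post-burn-in per-sample log likelihoods of a single MCMC run:
        # log f(X|M) ~ -log(mean(exp(-logL_i))), evaluated in log space
        # (log-sum-exp) for numerical stability.
        neg = -np.asarray(log_likelihoods)
        m = neg.max()
        return -(m + np.log(np.mean(np.exp(neg - m))))

    def two_log_bayes_factor(samples_m1, samples_m2):
        # 2logB12 = 2(log f(X|M1) - log f(X|M2)); values above 10 are
        # conventionally read as strong evidence for M1 over M2 [61].
        return 2.0 * (harmonic_mean_log_ml(samples_m1)
                      - harmonic_mean_log_ml(samples_m2))

    # Hypothetical usage with log likelihood columns exported from two
    # runs (e.g., the .p sample files written by MrBayes), burn-in
    # samples already discarded:
    # unlinked = np.loadtxt("unlinked_run.p", skiprows=2, usecols=1)
    # linked = np.loadtxt("linked_run.p", skiprows=2, usecols=1)
    # print(two_log_bayes_factor(unlinked, linked))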
For comparison, we also tested whether the different gene partitions were in significant conflict with each other by using the parsimony-based incongruence length difference (ILD) test [64], implemented in PAUP* [50]. The results are based on 10,000 replicates, with ten iterations (random additions of taxa) per replicate. Authors' contributions MI designed the study, carried out the labwork, participated in the phylogenetic analyses, and drafted the manuscript. JF assisted with the design of the study and with the draft of the manuscript. JN performed the phylogenetic analyses and drafted parts of the results and the material and methods sections of the manuscript. PE conceived the study. All authors read and approved the manuscript. Supplementary Material Additional File 1 Table 3. Samples used in the study. The classification follows Ridgely and Tudor [3] for typical antbirds, and Irestedt et al. [1] for families. Abbreviations: AMNH = American Museum of Natural History, New York; FMNH = Field Museum of Natural History, Chicago; LSUMZ = Louisiana State University, Museum of Natural Science; NRM = Swedish Museum of Natural History; ZMUC = Zoological Museum of the University of Copenhagen. References: (1) Irestedt et al. [1]; (2) Fjeldså et al. [45]; (3) Johansson et al. [65]; (4) Fjeldså et al. [66]. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC509417.xml
546328 | Parsing a Cognitive Task: A Characterization of the Mind's Bottleneck | Parsing a mental operation into components, characterizing the parallel or serial nature of this flow, and understanding what each process ultimately contributes to response time are fundamental questions in cognitive neuroscience. Here we show how a simple theoretical model leads to an extended set of predictions concerning the distribution of response time and its alteration by simultaneous performance of another task. The model provides a synthesis of psychological refractory period and random-walk models of response time. It merely assumes that a task consists of three consecutive stages—perception, decision based on noisy integration of evidence, and response—and that the perceptual and motor stages can operate simultaneously with stages of another task, while the central decision process constitutes a bottleneck. We designed a number-comparison task that provided a thorough test of the model by allowing independent variations in number notation, numerical distance, response complexity, and temporal asynchrony relative to an interfering probe task of tone discrimination. The results revealed a parsing of the comparison task in which each variable affects only one stage. Numerical distance affects the integration process, which is the only step that cannot proceed in parallel and has a major contribution to response time variability. The other stages, mapping the numeral to an internal quantity and executing the motor response, can be carried out in parallel with another task. Changing the duration of these processes has no significant effect on the variance. | Introduction Even the most simple behaviour involves a chain of computations, which link perception, decision making, and action [ 1 , 2 , 3 ]. Measurements of response times (RTs) have been used as a major source of information on the organization of these stages [ 4 , 5 ], and more recently these analyses have been combined with neuroimaging data to identify separate processing modules [ 6 , 7 ]. This seemingly simple measure of time to completion of a cognitive operation has several intriguing properties. One of them is its noisy character. Even in very simple tasks, RTs typically vary over a broad range of several hundred milliseconds. Another property is that RT can slow down considerably under some circumstances in which the subject is distracted by another competing stimulus or task. This suggests the existence of at least some stages that act as a bottleneck and can only operate serially, one at a time. Here we set out to relate response variability and the serial versus parallel architecture of processing stages. Do all stages of processing contribute uniformly to this variance? Or are some stages particularly variable in their computation time? And does variability relate in a systematic manner to their parallel or serial nature? When two tasks are presented simultaneously (or sequentially at a short interval), a delay in the execution of the second task has been systematically observed [ 8 , 9 , 10 , 11 ]. This interference effect is referred to as the psychological refractory period (PRP) and has been explained by a model that involves three stages of processing: a perceptual component (P component), a central component (C component), and a motor component (M component), in which only the C component establishes a bottleneck [ 5 , 9 , 12 , 13 , 14 , 15 ]. 
PRP experiments have associated the C component with “response selection”, the mapping between sensory information and motor action [ 16 ]. A separate line of psychological research has investigated how the decision to respond is achieved. The decision-making process has been modelled as a noisy integrator that accumulates evidence provided by the sensory system [ 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 ]. Although many variants have been proposed, the basic idea is that perceptual evidence is stochastically accumulated in time. Decision thus results from a random walk of an internal abstract variable. Indeed, in many circumstances, such a decision mechanism can be optimal in the sense that it maximizes the overall likelihood of a correct classification of the stimuli [ 26 , 27 ]. In the simplest scheme, all the variance in RT is attributed to this integration process. Thus, the integration model establishes a possible parsing of a task into components: a fixed component that transforms the sensory information into an abstract variable, the accumulation of evidence itself (the only variable process), and the execution of the response. In the present work, we propose a single assumption that unifies those two lines of research. We postulate that only the integration process establishes a serial bottleneck, while all other stages can proceed in parallel with stages of another task ( Figure 1 ). This model should thus explain both the dual-task interference experiments and the detailed analysis of RT distributions. While extremely simple, the model makes powerful mathematical predictions in experiments in which the order of presentation of the two tasks and their relative timing are varied. Moreover, it also makes specific predictions in experiments in which the complexity of one of the tasks is changed. Depending on whether the locus of the change is in the P, C, or M component, the curve of RT as a function of the delay between stimuli takes a very different shape. Figure 1 The Model: The Process of Accumulation of Evidence Constitutes the Mind's Bottleneck Each task involves a sequence of three stages of processing. The perceptual and motor stages are fixed and can be carried out in parallel with stages of another task, while the central stage consists of a noisy integration (a random walk) until a decision threshold is reached. The central stage of task 2 cannot start until the central stage of task 1 is finished. Thus, this establishes a bottleneck and characterizes a serial process. The distribution of RTs for the second task is wider than that for the first task, because it combines the intrinsic variance of task 2 (the time to reach threshold) and the variance in the onset of the central stage of task 2, which is set by the ending of the central stage of task 1. We designed a behavioural task to test the validity of the model. This number-comparison task involves deciding whether a number presented on the screen is larger or smaller than 45. Different manipulations of the task render it more difficult, presumably at different stages of processing. The different task manipulations include notation (whether the number was presented in Arabic digits or in spelled words), distance (the numerical distance between the presented number and 45), and response complexity (whether subjects were asked to tap once or twice to indicate their choice).
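To make the model concrete, the following minimal Python sketch simulates this architecture under illustrative assumptions of ours (the stage durations, drift, and noise values below are invented for illustration and are not the fitted parameters reported later). It exploits the fact that the first-passage time of a drifting random walk to a threshold follows an inverse-Gaussian (Wald) distribution:

import numpy as np

rng = np.random.default_rng(0)

def central_duration(alpha, sigma, b=1.0, size=1):
    # The first-passage time of a drifting random walk to threshold b is
    # inverse-Gaussian (Wald) distributed: mean b/alpha, shape b**2/sigma**2.
    return rng.wald(b / alpha, b ** 2 / sigma ** 2, size)

def simulate(delta, n=5000, P1=0.10, M1=0.08, P2=0.10, M2=0.08,
             alpha1=2.0, alpha2=2.0, sigma=0.7):
    c1_end = P1 + central_duration(alpha1, sigma, size=n)  # end of task 1's C stage
    rt1 = c1_end + M1
    # Bottleneck: task 2's central stage starts only when its percept is ready
    # AND task 1's central stage has finished.
    c2_start = np.maximum(delta + P2, c1_end)
    rt2 = c2_start + central_duration(alpha2, sigma, size=n) + M2
    return rt1, rt2

for delta in (0.0, 0.3, 0.6, 0.9):  # stimulus onset asynchrony, in seconds
    rt1, rt2 = simulate(delta)
    print(f"SOA {delta:.1f} s: mean RT1 {rt1.mean():.3f} s, "
          f"mean RT2 {rt2.mean():.3f} s (sd {rt2.std():.3f})")

Once the asynchrony exceeds the duration of task 1's perceptual and central stages, RT2 (measured from trial onset) grows linearly with it; this is the PRP signature examined throughout the paper.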
Previous studies have shown that all of these manipulations change the difficulty of the task: RTs increase when numerical distance decreases and when numbers are presented in spelled words [ 28 , 29 ]. These effects have been shown to be additive and to involve distinct brain regions and components of the event-related potentials [ 29 , 30 ]. Thus, it is likely that they affect different components of processing, making this task a good candidate to explore the validity of the model. Results Subjects were asked to perform a dual task. One of the two tasks was presented visually and involved a number comparison: subjects decided whether a number presented on the screen was larger or smaller than 45. Hereafter we will refer to it as “number task”. The other was a tone-discrimination task that involved deciding whether the frequency of a single pure tone that was presented for 150 ms was high (880 Hz) or low (440 Hz) (subjects heard both tones repeatedly before the beginning of the experiment). Hereafter we will refer to it as “tone task”. Two different groups of subjects performed the task in the two possible orders, tone task followed by number task or vice versa. The number task was our main task of study, and was manipulated using three different factors: notation (whether the number was presented in Arabic digits or in spelled words), distance (the numerical distance between the presented number and 45), and response complexity (whether subjects were asked to tap once or twice to indicate their choice). The tone task was never varied throughout the experiment. The rationale underlying this experimental design is that the tone task is used as a probe to study, through interference experiments, the different stages of processing of the number task. This asymmetry between the two tasks, which might be helpful to keep in mind, was of course not stated to the subjects, who were just asked to attend equally to both tasks. The Results section is organized as follows. We first report an analysis of basic measures of central tendency and dispersion. We then address how different manipulations (within the number task or through the interference with the tone task) change the mean RTs and their dispersion. These analyses allow us to test the additivity of the effects of each factor and, through the interference analysis, whether they affect the perceptual, the central, or the motor stage. A second level of analysis involves a more detailed characterization of the shapes of the distributions of RTs. Fitting the distributions allows us to evaluate the accumulation model of response decision and to relate its components to those identified by their patterns of interference in the first-level analysis. For clarity and as a reference throughout the paper, all the definitions, components of the models, and experimental manipulations are summarized in Table 1 . Table 1 Definitions of Notations DOI: 10.1371/journal.pbio.0030037.t001 Analysis of Mean RTs and Interquartile Range Effect of the different manipulations of the number task The first analysis involved studying the effects of the different manipulations of notation, distance, and response complexity on mean RTs and response dispersion on the number task when it came first. Our model predicted that manipulations that affect separate stages should have additive effects on mean RTs, and that only manipulations that affect the central stage should significantly increase response dispersion.
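A quick numerical illustration of this second prediction, again under invented parameter values (a sketch, not the authors' analysis code): lengthening the fixed P/M stages shifts the whole RT distribution without widening it, whereas lowering the drift of the central integration increases both the mean and the spread.

import numpy as np

rng = np.random.default_rng(1)

def rts(alpha, t0, sigma=0.7, n=20000):
    # t0 = fixed (P + M) delay; the central stage adds an inverse-Gaussian time
    return t0 + rng.wald(1.0 / alpha, 1.0 / sigma ** 2, n)

def iqr(x):
    q25, q75 = np.percentile(x, [25, 75])
    return q75 - q25

base        = rts(alpha=2.0, t0=0.25)   # reference condition
longer_pm   = rts(alpha=2.0, t0=0.40)   # e.g., a notation or response manipulation
lower_drift = rts(alpha=1.5, t0=0.25)   # e.g., a closer numerical distance

for name, x in [("base", base), ("longer P/M", longer_pm), ("lower drift", lower_drift)]:
    print(f"{name:12s} mean = {x.mean():.3f} s   IQR = {iqr(x):.3f} s")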
For this analysis (and throughout the paper unless otherwise specified) distance, which is the absolute value of the difference between the presented number and 45, was binned into two groups, close (≤12) and far (>12). Central tendency was measured by estimating the mean RT after trimming for outliers, by discarding responses slower than 1,200 ms. Response dispersion was measured by estimating the interquartile range, i.e., the difference between the 75th percentile and 25th percentile of the distribution of RTs. (Identical results were obtained when using other measures, e.g., median and standard deviation of RTs. Note that, in general, serial stage models predict that factors affecting distinct stages should have additive effects on mean RTs, but not necessarily on median RTs. Our model, however, supposes that factors affecting P and M components do not add to response dispersion, but merely add a constant factor to RTs. Under this hypothesis, factors affecting selectively P, C, and M components should also have additive effects on median RTs. In fact, the effects of perceptual and motor factors should be quantitatively the same on mean and median RTs.) As we expected from several prior experiments [ 29 , 30 ], performing the task for spelled words required more time than for Arabic digits ( Figure 2 A). We also observed a significant distance effect: RT for close numbers was longer than for far numbers. In addition, these effects were additive, as revealed by ANOVAs with subjects as a random factor and notation and distance as within-subject factors ( Table 2 , effects of notation; Figure 2 A). Similarly, response complexity increased subjects' mean RT, and this effect was additive with the distance manipulation ( Table 2 , effects of response complexity; Figure 2 A). Figure 2 Effects of the Different Manipulations on the Mean and Dispersion of RT (A) Changes in the mean RT of the numeric task when it comes first in the different experimental manipulations. Changing the notation or the response complexity slows mean RT, and within each condition, responses are slower for close than for far distances. The difference between far and close conditions is independent of the experimental manipulation, indicating an additive effect that is tested in the ANOVAs (see Table 2 ). (B) A different pattern is observed for the interquartile range, which provides a measure of dispersion. While the distance manipulation results in a major change of the interquartile range, there is no major effect of notation or response complexity. Table 2 Results of the Different Manipulations of the Number Task on the Mean and Interquartile Range Red indicates a significant effect. DOI: 10.1371/journal.pbio.0030037.t002 Interestingly, the effects of the different manipulations on response dispersion did not follow the effects on the mean, indicating that some factors slowed RTs but did not significantly increase their dispersion. The distance manipulation resulted in a significant increase of the interquartile range, typical of a stochastic process in which the dispersion increases with the mean. In contrast, notation and response complexity, while causing an important change in the mean, did not result in a significant increase of the interquartile range ( Table 2 ; Figure 2 B).
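In code, the trimming and the two summary measures amount to a few lines. The file and column names below are our invention for illustration; the thresholds (1,200 ms trim, ±12 distance split) follow the text:

import pandas as pd

# Hypothetical trial-level data: one row per correct number-task trial, RT in ms.
df = pd.read_csv("number_task_trials.csv")      # subject, notation, taps, distance, rt_ms
df = df[df["rt_ms"] <= 1200]                    # trim outliers, as in the text
df["dist_bin"] = df["distance"].abs().le(12).map({True: "close", False: "far"})

def iqr(x):
    return x.quantile(0.75) - x.quantile(0.25)

summary = (df.groupby(["notation", "taps", "dist_bin"])["rt_ms"]
             .agg(mean_rt="mean", dispersion=iqr, n="size"))
print(summary)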
To fully address whether the number-comparison task involves three separate stages, each selectively affected by one experimental factor (distance, notation, and response complexity), a complete “additive factors” experimental design is needed, in which the factors are fully crossed so that all interactions can be tested. However, such a factorial design, if tested in the dual-task experiment, would involve an exceedingly large number of conditions, which would be very difficult to test on a subject-by-subject basis within a single session. Instead, we ran it as an independent experiment, in which a new group of subjects was asked to perform only the number task. The results of this new experiment, summarized in Table 3 , confirmed and extended our previous findings. (1) All factors (distance, notation, and response complexity) had a significant main effect on mean RT. (2) All interactions between factors were nonsignificant. In particular, the new experiment allowed us to test two interactions that could not be addressed previously (notation by response complexity and the triple interaction), which were also nonsignificant. This additive-factors analysis is thus fully compatible with our hypothesis that the number task involves three successive stages, each selectively influenced by one of the three factors. (3) As expected, those findings held for both analyses of the mean and median RTs, both of which are reported in Table 3 . (4) As shown previously, the interquartile range and standard deviation were only affected by the distance manipulation, but did not change significantly with the response complexity and notation manipulations. The size of the confidence intervals was similar in all conditions ( Table 3 ), suggesting that this was not just a matter of statistical power. For instance, response complexity had a major impact on mean RT (249 ± 20 ms), but no effect on its standard deviation (1.4 ± 8.9 ms), while distance had a more modest impact on mean RT (41.2 ± 5.8 ms) and a comparable effect on standard deviation (19.2 ± 6.5 ms). While this result does not necessarily imply that the variance associated with notation- and response-dependent processes is strictly zero, it suggests that those processes have a substantially lower contribution to the variance than the distance-dependent stage. Table 3 Results of the Different Manipulations of the Number Task (in a Single-Task Experiment) on Different Variables In this experiment, the condition Words 2 Taps was included to allow a full factorial design that permits testing all the pairwise interactions between the three factors and their triple interaction. The same results are obtained for different measures of central tendency (mean and median) and of dispersion (standard deviation and interquartile range). Red indicates a significant effect. Confidence intervals (CI) are reported, with all values in milliseconds. DOI: 10.1371/journal.pbio.0030037.t003 Taken together with the assumption of our model that only the central stage contributes to response variance, our observations suggest that the numerical distance factor affects the C decision component, while notation and response complexity affect noncentral P or M components.
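The additivity logic (significant main effects, nonsignificant interactions) maps directly onto a repeated-measures ANOVA. A sketch with statsmodels, reusing the hypothetical file and column names from the previous snippet (this is our illustration, not the authors' R code; they report using R with subjects as a random factor):

import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("number_task_trials.csv")
df = df[df["rt_ms"] <= 1200]
df["dist_bin"] = df["distance"].abs().le(12).map({True: "close", False: "far"})

# One RT per subject x cell is required, so trials are aggregated by their mean;
# additivity shows up as significant main effects with nonsignificant interactions.
res = AnovaRM(df, depvar="rt_ms", subject="subject",
              within=["notation", "taps", "dist_bin"],
              aggregate_func="mean").fit()
print(res)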
Interference by the tone task In addition to tests of additivity, a useful experimental technique for addressing the separable nature of different components and understanding their organization in time is the interference analysis, in which the task of study (the number-comparison task) is performed together with a probe task (the tone task). The delay in the onset between the two tasks is controlled experimentally, and to achieve a full separation of the three components, the two tasks must be presented in both possible orders ( Figure 3 ). Under the assumptions of the PRP model, the P and M components can be carried out in parallel with another task, but the central stage is the only one that provides a bottleneck, in the sense that the central components of the two tasks cannot be carried out simultaneously. From these premises, one can predict the curve giving RTs for the first and second tasks (RT1 and RT2, respectively) as a function of delay, and how it changes with a manipulation of the P, M, and C components of either task. The sets of predictions and a sketch of the logic are presented in Figure 3 . The aim here is thus to associate the experimental manipulations (notation, distance, and response complexity) with the different components in the PRP model (P, C, and M) by analysing the changes in mean RT with delay. Here and throughout the paper, we follow the convention that RTs to both tasks are reported from trial onset. Figure 3 Description of the Task and Sketch of the PRP Model and Its Predictions (A) Scheme of the main PRP effect. The vertical axis represents RT. The column on the left indicates the first task, and each coloured box within the column represents a different stage of processing: P component (dark green), C component (red), and M component (blue). The series of columns on the right indicate the processing time for task 2 at different delays (Δ), labelled on the x-axis. For each column, the three different boxes represent the three different stages of task 2: P component (green), C component (orange), and M component (cyan). As Δ progresses, the P component starts later. All components can be performed in parallel except for the C component, which establishes a bottleneck. This results in the following predictions: (1) the response to the first task is independent of Δ, and (2) RT2 (from onset of the trial), represented by the black line, is unchanged for small Δ, while at sufficiently large Δ (noninterference regime) it increases linearly with Δ, with a slope of one. (B) The predicted RT1 and RT2 (from trial onset) as a function of Δ are represented by the grey and black lines, respectively. (C) The model also establishes definite predictions for experiments in which one of the tasks is changed. The six different panels indicate all possible manipulations: first task changed (left column) or second task changed (right column) and whether the change affects the P component (first row), C component (middle row), or M component (bottom row). The changed component is labelled with a highlighted box and with an arrow. For simplicity, we assumed that the task manipulation always increases the duration of one component. RTs before the manipulation (which are the same across all panels) are represented with a solid line, grey for RT1 and black for RT2, and the RTs of the manipulated task are labelled with a dotted line with the same colour code.
If the first task is changed (left column), different effects are observed depending on whether the change is in the M component or in the P–C components (which cannot be distinguished with this manipulation). If the M component is affected (bottom row), RT1 changes, but the response to the second task is unchanged. If the locus of the change is in either the P or the C component (middle and top rows), there is a longer delay until execution of task 2, and the following effect is observed: for small Δ (interference regime), RT2 increases and the regime of interference extends, which is indicated by a shift of the kink to the right. If the second task is changed (right column), different effects are observed depending on whether the change is in the P component or in either the C or M component. If the change is in the P component (top row), for small Δ there is no net change in the response to the second task (because there was a wait at the end of the P component, extending it does not change total execution time), but there is less wait and thus the kink is shifted to the left. If the change is made in either the C or M component (middle and bottom rows), the result is a rigid shift, which is independent of Δ. By performing experiments in which the two tasks are presented in different orders, all task components can be differentiated. All task manipulations, according to the PRP model, should fall into one of three categories (perceptual, central, or motor), each defined by its characteristic RT signature. We begin by describing the mean RT results when the number task (for which experimental parameters were varied) was performed first, and the tone task came second. The PRP model predicts that each of the manipulated variables of notation, distance, and response complexity should have a main effect on the first number-comparison task, but only some of those effects (those that affect P and C components of the first task) should propagate to RT2, and should do so only at short interstimulus delays (Δ) ( Figure 3 ). To evaluate these predictions, mean RTs were calculated within each condition and each subject, and submitted to ANOVAs with subjects as a random factor and delay and the variable of interest as within-subject factors. The detailed results of those ANOVAs are reported in Table 4 . In the text, we merely draw attention to the main points. Table 4 Results of the ANOVAs of the Interference Experiments: Number Task Followed by Tone Task Each column corresponds to a different ANOVA. Each line represents a different effect: task manipulation, delay, and their interaction. Red indicates a significant effect. All 18 data cells follow the predictions of the PRP model. DOI: 10.1371/journal.pbio.0030037.t004 The ANOVAs on number-comparison RTs (RT1) revealed the expected main effects of number notation (74 ms, slower for verbal than for Arabic numbers), numerical distance (91 ms, slower for close digits than for far digits), and response complexity (175 ms, slower for two-tap responses than for one-tap responses). There was no main effect of Δ, and none of the task effects interacted with delay. These results suggest that, as instructed, participants performed the number comparison as task 1 independently of the delay of presentation of the subsequent tone task. Similar ANOVAs on tone-decision RTs (RT2) revealed a main effect of delay, characteristic of the PRP phenomenon.
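Before examining these curves in detail, it may help to see the predicted shape written out. The sketch below computes the model's mean RTs (from trial onset) while ignoring trial-to-trial noise, so the kink is sharp rather than smoothed; all stage durations are invented for illustration:

def predicted_means(delta, P1, C1, M1, P2, C2, M2):
    """Mean RTs (from trial onset) under the bottleneck model, ignoring
    trial-to-trial noise (so the kink is sharp rather than smoothed)."""
    rt1 = P1 + C1 + M1
    rt2 = max(delta + P2, P1 + C1) + C2 + M2  # C2 queues behind C1
    return rt1, rt2

# Invented stage durations (ms); the kink sits at delta = P1 + C1 - P2 = 450 ms.
base = dict(P1=150, C1=400, M1=100, P2=100, C2=300, M2=100)
for delta in range(0, 1001, 250):
    rt1, rt2 = predicted_means(delta, **base)
    rt2_slow_m2 = predicted_means(delta, **{**base, "M2": 250})[1]  # e.g., two taps
    print(f"SOA {delta:4d} ms: RT1 {rt1}, RT2 {rt2}, RT2 with longer M2 {rt2_slow_m2}")

Lengthening M2 shifts the whole RT2 curve rigidly, while lengthening P1 or C1 raises only the flat segment and pushes the kink to the right, exactly the signatures sketched in Figure 3.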
As shown in Figure 4 (left column, black solid curves), RTs were independent of delay up to a certain value, then began to increase linearly with further increases in delay. Figure 4 Dissociating P, C, and M Components by Their Interference Patterns In the left column, the number task is performed first and the tone task second. In the right column, the tone task is performed first and the number task second. In both cases, the number task is manipulated by the three factors of notation, distance, and response complexity. In all panels the code is identical: RT1 is coloured grey while RT2 is coloured black. The “easy” condition is represented by a solid line and the “difficult” condition by a dotted line. All the data can be explained in terms of the PRP model: notation (top row) affects the P component, distance (middle row) affects C, and response complexity (bottom row) affects M (see also Tables 4–6 for statistics, and note the agreement with the predicted RTs shown in Figure 3 ). Table 6 Results of the ANOVAs of the Interference Experiments: Tone Task Followed by Number Task Each column corresponds to a different ANOVA. Each line represents a different effect: task manipulation, delay, and their interaction. Red indicates a significant effect. All 18 data cells follow the predictions of the PRP model. DOI: 10.1371/journal.pbio.0030037.t006 Crucially, our three experimental factors had differential effects on those two segments of the RT curve. Notation and distance showed both a main effect and an interaction with delay ( Table 4 ). As a further test, we analysed the data separately for short delays, within the interference regime (Δ ≤ 350 ms), and for long delays (Δ ≥ 600 ms) ( Table 5 ). For the notation and distance manipulations, when collapsing the data across all short delays, there was a significant effect of both factors (respectively 87 ms and 100 ms). For long delays, RTs were no longer affected by those variables. These features are characteristic of effects that affect either the P or the C components of a task (see Figure 3 ). Table 5 t-Tests to Study the Effect of Each Manipulation on RT2 within the Regime of Interference (Short Delays) and within the Regime When the Two Tasks Are Performed Independently (Long Delays) Effect sizes are shown in milliseconds (in parentheses). Manipulations that differentially affect the short and long delays are responsible for the interactions reported in Tables 4 and 6 . Red indicates a significant effect. All 12 data cells follow the predictions of the PRP model. DOI: 10.1371/journal.pbio.0030037.t005 The situation was quite different for the response-complexity variable. The ANOVAs did not reveal either a main effect of response complexity or an interaction with delay on RT2 (see Figure 4 , bottom left, black curves; see also Table 4 ). Thus, none of the large (175 ms) effect observed on the first task propagated to the second task. This result is confirmed by the t-tests, where we did not observe a significant difference either at short delays or at long delays (see Table 5 ). This is characteristic of a variable that affects the motor stage of processing. We now describe the mean RT results when the tone task was performed first, and the number task (for which experimental parameters were varied) came second.
In this case, the PRP model predicts that there should be no effect of the manipulated variables of notation, distance, and response complexity on the first tone task; in addition, RT2 should exhibit a constant increase (independent of delay) when the change affects the M or C component and should change only for large delays when the change affects the P component (see Figure 3 ). As described above, to evaluate those predictions, mean RTs were calculated within each condition and each subject, and submitted to ANOVAs with subjects as a random factor and delay and the variable of interest as within-subject factors (see Table 6 ). The ANOVAs on the tone task RTs (first task, RT1) revealed no effects of the task manipulations, as predicted by the PRP model because the response to task 1 should be independent of the nature of task 2. The ANOVAs on the number-comparison RTs (second task, RT2) again revealed a very significant nonlinear effect of delay characteristic of the PRP effect. In addition, for the distance and response-complexity manipulations, we observed a task effect that did not interact with delay (see Table 6 ), typical of central and motor manipulations. For the notation manipulation, we observed a task effect that interacted with delay, typical of perceptual manipulations. These observations were consistent with the t-tests performed for short and long delays. When data were collapsed across all short delays for the distance and response-complexity manipulations, there was a significant effect of both factors (respectively 86 ms and 255 ms). In contrast, there was no significant difference in RT for the number task for the different notations at small delays (see Table 5 ). For all comparisons there was a significant effect for long delays: notation (68 ms), distance (92 ms), and response complexity (215 ms). Thus, the notation effect behaves with the characteristics of a variable that affects the P component, and combining this analysis with the prior analysis in which the number task came first, we observe that each manipulated variable affects a different component: notation affects P, distance affects C, and response complexity affects M (compare the predictions of each stage, Figure 3 ; and the data resulting from each manipulation, Figure 4 ). The dependence of RT on delay follows the prediction of the PRP model for all conditions, task manipulations, and task orders. However, we find a small departure from the model when we compare the mean RTs for both tasks when they were presented either first or second at the maximum delay (1,025 ms). In both cases we find that the response is slower when the task is presented first: number task, 756 ms when presented first and 678 ms when presented second; tone task, 720 ms when presented first and 518 ms when presented second. Thus there is a fixed component (independent of delay) of approximately 150 ms that needs to be added to RT1 to fully explain the data. Detailed Analysis of the Distribution of RTs Effect of the different manipulations of the number task The shape of the RT distributions (for correct trials) was analysed for each task when it was presented first. For the number task we analysed six different cases corresponding to the three different manipulations (Digits 1 Tap, Words 1 Tap, and Digits 2 Taps), and two levels of numerical distance: close distances (≤12) and far distances (>12).
For each of these distributions, the histograms of RTs and their cumulative distributions were calculated, and the latter were fitted to a simple model of RTs. The model was based on a fixed onset delay, t_o, followed by a forced random walk to a threshold T with slope α and diffusion constant σ ( Figure 5 ). The fixed delay (t_o) corresponds to the sum of the P and the M components (see Figure 1 ). Figure 5 Dissociating Parallel and Serial Components by RT Distributions (A) RT histograms (when the number task was presented first) fitted by a simple random-walk model, separately for far distances (left column) and close distances (right column) and for the three different tasks: Digits 1 Tap (top row), Words 1 Tap (middle row), and Digits 2 Taps (bottom row). (B) Cumulative plots of the same data. The effect of both notation and response appears to be a shift of the distribution to the right while the distance effect is a change in the slope. Within each panel, we have overlapped the corresponding fit (blue line) and the fit to the easiest condition—Digits 1 Tap, Far Digits (red line)—to make the change between the different distributions apparent. (C) The two fitted values (fixed delay and integration time) as a function of numerical distance for the three different tasks. The integration time decreases with distance, but it is independent of the tasks. In contrast, the fixed delay does not change with distance but changes with the task. The sum of the fixed delay and the integration time fits the mean reaction times for each distance (solid circles). (D) Statistics performed on the fit reveal that the fixed delay has a slope not significantly different from zero (i.e., it does not depend on distance), but it changes with task. In contrast, the integration time depends significantly on distance but does not change with task. The applicability of random-walk models to RT data has been widely studied in numerous tasks [ 18 , 19 , 20 , 21 , 22 , 23 ], including the number-comparison task [ 17 ]. While there are a large number of variants (see Discussion) that could capture further details of the data at the expense of increased theoretical complexity, our approach here is to keep the model as simple as possible; its sole purpose is to separate stochastic and invariant contributions to reaction times. The parameters were determined as follows. T can be set to one without loss of generality. For simplicity, we assumed that σ was the same for all six experimental conditions, while α and t_o could vary (we verified that none of the results depended qualitatively on the particular choice of σ). The best-fitting values were determined by exhaustive search using a minimum-squares criterion. The value of 1/α characterizes the integration time (which explains all the variance), while t_o captures fixed components that do not contribute to the variance. Thus, our purpose was to test the prediction of our model that the notation and response-complexity manipulations should affect the parameter t_o while the distance manipulation should affect the parameter α. Figure 5 shows the fitted distributions of RTs corresponding to the three different tasks: Digits 1 Tap ( Figure 5 A and 5 B, first row), Words 1 Tap ( Figure 5 A and 5 B, second row), and Digits 2 Taps ( Figure 5 A and 5 B, third row). For each of these tasks, we have separated the data corresponding to the close distances (right column) and the far distances (left column).
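Concretely, the density being fitted is the inverse-Gaussian (Wald) first-passage density shifted by t_o (the formula is given in Materials and Methods), and the exhaustive search can be sketched as below. The grid ranges, bin edges, and synthetic test data are our own illustrative choices, not the authors' fitting code:

import numpy as np

def shifted_wald_pdf(t, alpha, t0, sigma):
    # First-passage density of a drifting random walk to threshold 1,
    # shifted by the fixed nondecision delay t0 (see Materials and Methods).
    tau = t - t0
    out = np.zeros_like(t, dtype=float)
    ok = tau > 0
    out[ok] = (1.0 / (sigma * np.sqrt(2 * np.pi * tau[ok] ** 3))
               * np.exp(-(1.0 - alpha * tau[ok]) ** 2 / (2 * sigma ** 2 * tau[ok])))
    return out

def fit_by_grid(rts, sigma, alphas, t0s, bins=None):
    # Exhaustive search minimizing the squared error between the empirical
    # and model RT densities, with sigma held fixed, as in the text.
    if bins is None:
        bins = np.linspace(0.0, 2.5, 126)
    centers = 0.5 * (bins[:-1] + bins[1:])
    emp, _ = np.histogram(rts, bins=bins, density=True)
    best = None
    for a in alphas:
        for t0 in t0s:
            err = np.sum((shifted_wald_pdf(centers, a, t0, sigma) - emp) ** 2)
            if best is None or err < best[0]:
                best = (err, a, t0)
    return best[1], best[2]  # (alpha, t0)

# Sanity check on synthetic data with known parameters (alpha=2, t0=0.25, sigma=0.7):
rng = np.random.default_rng(2)
fake = 0.25 + rng.wald(1 / 2.0, 1 / 0.7 ** 2, 5000)
print(fit_by_grid(fake, sigma=0.7,
                  alphas=np.linspace(1.0, 3.0, 41), t0s=np.linspace(0.0, 0.5, 51)))

On synthetic data generated from known parameters, the search recovers α and t_o to within the grid resolution, which is the sanity check one would want before trusting fitted values.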
The fit was accurate, with the exception that it was smoother than the real data and thus did not fully capture a fairly abrupt peak at the modal response. The shapes of the distributions appeared to change in two qualitatively different manners. For fixed distances (same column) but changing task, the distributions shifted in time. Conversely, for fixed task (same row) but changing the numerical distance, the distribution became wider. For a finer-grained analysis, and to test the significance of this phenomenon, we binned the data into 24 different bins based on their distance from the reference 45 used for numerical comparison. For each bin, we calculated the α and t_o that provided the best fit. We found that t_o changes from task to task but does not depend on distance. In contrast, 1/α does not change across tasks but changes with numerical distance ( Figure 5 C). To test this, we performed a linear regression of both parameters as a function of distance, thus producing two estimates (the slope and the y-intercept) ( Figure 5 D). For t_o, the value of the slope (for the three tasks) does not differ significantly from zero (p > 0.3) and the value of the intercept differs significantly across tasks (p < 0.001). In contrast, for 1/α the intercept is not significantly different across tasks (p > 0.5) while the slope is significantly different from zero (p < 0.001). Thus, response complexity and notation manipulations affect t_o, while numerical distance affects 1/α. These results are consistent with the prior analysis, which showed that response complexity and notation manipulations did not significantly affect the interquartile range (another measure of dispersion) while the distance manipulation did significantly change the interquartile range. Prediction of the distribution of RT2s Here we try to explain the precise shape of the RT2 distributions by combining, based on the PRP model, the distributions obtained for each task when presented first. If the two tasks were completely sequential, then the resulting distribution would be simply the convolution of the two original distributions. However, the PRP model states that only the C component is sequential, and, thus, because some operations can be done in parallel, the resulting RT2s are shorter than expected from a convolution. The operation performed is not completely trivial and is described step by step in Materials and Methods . The only essential point is that this calculation cannot be performed by simply knowing the RTs to each task, but also requires an estimate of the duration of the M component of the first task (M1) and the P component of the second task (P2). These durations are not directly accessible to measurement, but they can be estimated as a result of the fitting of the distribution of RT2. Thus, confronting the distributions of the first and second tasks provides access to the otherwise hidden durations of the postulated component stages, allowing further tests of our model. For each task (Digits 1 Tap, Words 1 Tap, and Digits 2 Taps) we tried to fit the 20 distributions of RT2 (ten values of the delay for each of the two possible task orders, tone–number or number–tone) from the distributions of RT1, with P2 and M1 as free parameters. We found that with these parameters alone, the data could not be fitted (there were no values of the parameters that gave mean square residuals less than 0.3 for all distributions, and the fitted curves did not resemble the real data at all).
It seemed evident that the problem was that the predicted distributions were shifted in time with respect to the original distributions, and thus we decided to add one parameter, T_d, a rigid shift in time of all distributions of RT2 (see Discussion for the rationale of this parameter). We then found good fits for the ensemble of distributions ( Figure 6 , mean square residuals < 0.015) with the following values of T_d: Digits 1 Tap, 125 ms; Words 1 Tap, 125 ms; and Digits 2 Taps, 75 ms. Figure 6 Predicting the Distribution of RTs to the Second Task from the PRP Model Left: Cumulative plots of RTs to the number task when it is presented second (dots) and the predicted distribution based on the PRP model (solid lines). Each curve (coded in different colours) represents one of the ten possible values of Δ. Right: Same data for RTs to the tone task when it is presented second (dots) and the predicted distribution from the PRP model (solid lines). Each row corresponds to a different task: Digits 1 Tap (first row), Digits 2 Taps (second row), and Words 1 Tap (third row). Each panel was fit with three parameters: M1, P2, and a fixed delay. Each fit provides the parameters P2 and M1. When the number task was second, the parameters are P(Number) and M(Tone). When the tone task was second, the fit parameters are P(Tone) and M(Number). The square residuals were not sufficient to determine each parameter precisely, since the fit was unstable in the P2 − M1 direction (i.e., the residuals changed little if both parameters were changed while their sum was kept constant), but they were sufficient to calculate their sum ( Figure 7 ). In agreement with our previous observation, we found that the notation manipulation affects P(Number) + M(Tone) ( Figure 7 , left) but not P(Tone) + M(Number) ( Figure 7 , right). In contrast, and also consistent with our previous findings, the response-complexity manipulation affects P(Tone) + M(Number) ( Figure 7 , right) but not P(Number) + M(Tone) ( Figure 7 , left). Figure 7 Parameters Obtained from the PRP Fitting and Their Task Dependence The PRP fitting allowed us to estimate the values of P2 + M1. Depending on which task is presented first, we can calculate P(Number) + M(Tone) (left bars) or P(Tone) + M(Number) (right bars). P(Number) + M(Tone) changes with the notation manipulation but not with the response manipulation. Conversely, P(Tone) + M(Number) changes with the response manipulation but not with the notation manipulation. Furthermore, the left bars are consistently higher than the right bars, suggesting that visual perception of digits and words takes approximately 150–220 ms longer than auditory perception of a single tone. Finally, the parameters obtained from the interference experiment may be compared to those of the previous fit, which was based on the shape of the distributions of RTs for the first task, and which yielded estimates of 1/α (the time of integration) and t_o (a fixed delay). As expected from our model, across the different conditions summarized in Table 7 , we observe that t_o is always approximately equal to the sum of the durations of the P and M components, while 1/α is equal to the duration of the C component. This provides further evidence that the process of accumulation of evidence does indeed constitute the characteristic bottleneck (the C component) in dual-task experiments.
Table 7 Fitted Parameters When the fit method is “RT model”, parameters were obtained by fitting the shape of the distribution of RTs when the number task is the first task; when the fit method is “PRP fit”, parameters were obtained following the PRP model of interference, from RTs measured when the number task is the second task. t_o is the fixed delay and 1/α the integration time. The 1/α row also shows the percent of the total RT dedicated to the central integration process. The following parameters are estimated: P(Tone) + M(Number), P(Number) + M(Tone), and T_d. The comparison of both methods indicates a good quantitative convergence: when summed, the noncentral P and M components of the PRP model account for the same amount of time as the fixed contribution t_o in the RT distribution. DOI: 10.1371/journal.pbio.0030037.t007 Discussion We proposed a basic model that relates the organization of parallel and serial components and the process of accumulation of evidence to reach a decision. The model, although simple, results in a wide range of predictions that, as we have shown, hold across a variety of manipulations. We show that the perceptual transformation of sensory information into an abstract quantity representation can be carried out in parallel with another task and is a low-variability process (whose variability does not increase with the mean); that the accumulation of evidence establishes a bottleneck and is an intrinsically variable process; and that the execution of the response constitutes yet another parallel, low-variability process. Our data suggest that the integration of evidence in time to reach a decision constitutes the only central process in a simple cognitive task that links perceptions to actions. Validity of the PRP Model While dual-task experiments (in which two tasks are presented at variable delays) allow different interpretations, experiments in which one of the two tasks is parametrically manipulated provide a severe test of the PRP model [ 12 , 13 , 16 ]. Indeed, the simple hypothesis that the central module is the only serial stage results in concrete predictions about the dependence of mean RTs on delay [ 16 ]. The PRP model has previously been used successfully to identify and dissect processing components in different cognitive tasks. For example, in a detection experiment where the brightness and the probability of target occurrence were manipulated, it was shown that brightness behaved as a perceptual component while target frequency showed the characteristics of a C component [ 31 ]. PRP models have also been used to show that word selection involves C components while phoneme selection behaves as an M component [ 32 , 33 ]. Here we have tested, within the number-comparison task, three different manipulations, in the two possible orderings of the sequential tasks, thus providing an exhaustive test of the model. Our finding that all manipulations fall reliably within one of the PRP components provides strong evidence of the generality of this phenomenon. In addition, while it had been shown previously that the distribution of dual-task RTs was wider than that predicted by noninterfering processing of the two tasks [ 34 ], precisely fitting this distribution based on the PRP model has, to our knowledge, not been done before.
Our analysis of the distribution of RTs for the second task based on the distributions for each individual task when presented first implies that the model can explain not only the mean RTs, but also their entire distribution. Since the model is parametric, fitting it to the data yields absolute measurements of the duration of the central stage and of the sum of perceptual and motor stages ( Table 7 ). Those measurements, which we obtained by two different means (analysis of single-task RT distributions and of the PRP interference pattern), are consistent with previous experiments using the additive-factors method [ 29 ]. A striking result, however, is the duration of the C component, which even in a simple task represents about 70% of the total RT. Considering the simplest version of our task (comparison of Arabic numerals, one tap), our results indicate that 180 ms is taken by the sum of P and M components, while a full 550 ms is taken by the C component. Previous event-related potential experiments suggested that it takes approximately 190 ms to identify an Arabic digit and begin to access a quantity representation [ 29 ]. The present evidence indicates that this notation-dependent stage is absorbed during the PRP delay and thus belongs to the P component. Altogether, the evidence suggests that the central stage starts after digit identification and goes on all the way to the actual key press. While all the PRP predictions held, the only discrepancy with the model arose from an unexpected slowness of responses to the first task. As predicted, RT1 was independent of the delay. However, the mean RT was larger than found previously when subjects performed only the number-comparison task [ 35 ]. Even within our experiment, it was larger than the time taken to perform the same task when it was presented second at a delay of 1 s (in the noninterference regime). This discrepancy also became evident in the convolution of the two distributions, where the fitting turned out to be impossible without a translation in time, but became very accurate once this translation was added to the fit. Previous PRP experiments have also observed a similar slowing down of the first task, independent of Δ [ 36 ]. Thus we believe that a correction needs to be made to the PRP model. There are at least two possible and not exclusive rationales for this correction. First, temporal attention could be involved. The presentation of the first task could act as a temporal prime for the second task. Indeed, it has been shown that reaction times decrease when subjects know the precise timing of stimulus occurrence [ 37 ]. Second, executive attention might also have to be engaged before performing the first task, in order to prepare for performing the two tasks in the instructed order and with the specified responses. Thus, two components, a structural central bottleneck and a central task-setting component, may contribute to the delay in the dual-task paradigm [ 14 , 38 ]. Here, as in other PRP experiments, we have designed the tasks in order to maximally separate the inputs and outputs to the system (different perceptual modalities and different response hands). Under these conditions, as described above, we still find a source of central interference. Moreover, we find that the transformation from a word form to an abstract semantic representation does not participate in this central process, nor does the execution of two consecutive and repetitive motor actions.
The generality of these findings, however, has obvious bounds. We do not state here that any motor manipulation should result in a change in a parallel component; more complex motor responses, however, might require central supervision and create a bottleneck. Similarly, while we claim that mapping a word form to an abstract number representation can be done in parallel, we do not mean that it would not interfere with any possible stimulus. Finally, under some situations that lead to high automaticity, either through extensive training [ 39 , 40 ] or very consistent stimulus–response mapping, the central bottleneck may be negligible [ 41 , 42 , 43 ]. Alternative RT Models There is a vast literature on the analysis of the shape of RT distributions as a source of knowledge about the human information-processing system, and many different models of these distributions have been proposed [ 20 , 21 , 23 , 26 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 ]. Here we did not intend to fully test the validity of the different models or to see which provided a better explanation of our data. We rather chose a simple model that contains the essence of a stochastic integrator and tried to use it to understand the effects of different manipulations. Our most important finding is that the distance manipulation, which is the only one to show interference, as revealed by the PRP experiment, is also the only one to change the stochastic integration time. Conversely, the manipulations that show no interference only affect the fixed delay. Thus there is a consistent parsing of the task from both methods. There are, however, several variations in the model that would be particularly interesting to test in this condition. First, as alternatives to the random-walk model with fixed mean and variance that we adopted here, one may propose a noise-free integration whose slope varies from trial to trial [ 52 ] or a diffusion model with variance in the drift [ 45 ]. Distinguishing these models may provide a way to measure the timing of the flow of information between perceptual stages and central stages. Do sensory systems provide only one vote to a noisy decision machinery, or do they rather provide a series of stochastic votes, which the decision machinery accumulates? And if the latter, what is the sampling time of communication between both systems? A second important type of alternative to our model concerns the nature of the central process. Instead of a unique integrator, there might be a network of interacting integrators with lateral connections, which collectively implement the decision-making process and whose interactions create a functional bottleneck [ 23 ]. The existence of rare but attested cases in which two response-time tasks can be performed in parallel without cost [ 41 , 42 ] might seem to favour the existence of multiple integrators, but it is also possible that highly trained sensorimotor tasks can eventually be triggered directly, without going through an accumulation-based decision stage [ 53 , 54 ]. Finally, for simplicity our model assumed a constant decision threshold T. In a more complicated model, the value of the threshold might be changeable. Such a feature might be needed to fit the results of experiments in which one varies the prior probability of a given response or its associated reward (variables that were fixed throughout our experiment). 
For example, in a go/no-go experiment involving digit comparison, in which the probability of a response was fixed at a controlled probability p_go, it has been shown that p_go affects T, but not the drift rate (α) [ 17 ]. Even in experiments with fixed response probabilities, subjects might continually adjust their threshold, lowering it after a successful response and increasing it after an error [ 55 ]. Such adjustments might capture another characteristic feature of RTs, which is their intrinsic autocorrelation structure and, in particular, their increase following errors [ 26 , 56 ]. While typically even simple tasks result in highly variable distributions of RTs, under some particular circumstances, including extensive practice, very precise (almost invariant) distributions of RTs can be obtained, e.g., in subjects trained to estimate a fixed duration [ 57 ]. It has also been shown that task modifications can lead to fixed delays, i.e., increases in RT that do not change the variance [ 57 ]. These findings support the idea that certain mental processes, as we propose here, can be carried out with negligible variance. They imply that variability in RT does not result merely from an intrinsically noisy neuronal machinery but rather from the computation underlying each process. Here, based on our results, we propose a hypothesis that needs further testing: that the central processes that involve integration of information represent the bulk of the variance while perceptual and motor processes are highly reliable. In particular, we predict that if a task can be performed in an invariable fashion it should also be automatic, in the sense of becoming immune to central interference. Cerebral Substrates of the Different Components While we characterized the different processing stages through behavioural observations, it is essential to relate these findings to brain anatomy and physiology. At the single-task level, the neurophysiological bases of simple perceptual decision making have been widely studied in tactile- [ 58 , 59 , 60 ] and visual-discrimination tasks [ 53 , 54 , 61 , 62 , 63 , 64 , 65 , 66 ]. These studies have revealed direct physiological correlates of the accumulation process postulated in formal RT models. Some neurons appear to code for the current perceptual state. For instance, neurons in the middle temporal area (area MT) appear to encode the amount of evidence for motion in a certain direction [ 25 , 63 ]. Other neurons, distributed in multiple areas including posterior parietal, dorsolateral prefrontal, and frontal eye fields, appear to integrate this sensory information and thus show stochastically increasing firing rates in the course of decision making [ 25 , 61 , 67 ]. In agreement with the accumulation model of decision making, the rate of increase varies with the quality of sensory evidence [ 25 , 61 , 68 ], and the response is emitted when the firing exceeds a threshold [ 66 ]. Furthermore, accumulation of information about the upcoming response appears in the spike train after a latency of about 200 ms [ 69 , 70 ], which is relatively fixed for a given task and might thus index the duration of the initial perceptual stage. In humans, a similar indicator of accumulated evidence towards a motor decision is provided by scalp recordings of the lateralized readiness potential (LRP) [ 71 ]. The LRP is calculated as the difference in event-related potentials between electrodes overlying left and right motor cortices.
In a bimanual task, this index shows a monotonically increasing deviation that is predictive of the side of the upcoming motor response and whose intensity reflects the accumulated amount of evidence [ 72 , 73 ]. In numerical comparison, the LRP starts at approximately 200 ms [ 74 ], again compatible with a fixed perceptual delay. While the LRP component is localized to motor and premotor cortices, another event-related potential component more broadly distributed across the scalp, the P3, is also associated with postperceptual processes [ 71 , 72 ] and shows a continuous, accumulation-like increase as a function of numerical distance in a comparison task [ 75 ]. Thus, both LRP and P3 might reflect the accumulation of evidence observed in monkey electrophysiological studies in distributed parietal and frontal regions. Indeed, functional magnetic resonance imaging studies of the comparison task show that intraparietal and precentral cortices are systematically activated and that their activation correlates with the distance between the objects to be compared [ 76 ]. This bilateral parietal and frontal system has been identified as a shared response selection system across a diversity of input modalities and across different types of stimulus–response mappings [ 77 ]. There is a debate, however, concerning the universality of this system, because at least some studies have found variable sites of activation associated with response selection in different tasks [ 78 ]. For instance, in an auditory paradigm dissociating the amount of sensory evidence and the response accumulation process, the former was associated with superior temporal cortex and the latter with anterior insula and opercular frontal cortex [ 79 ]. What happens to those physiological decision processes during dual-task performance? At present, we know of no neurophysiological study and only a handful of human physiological studies of the PRP phenomenon. In event-related potentials, when C and P components were manipulated, the perceptual manipulation led to a change in the P2 component (generally associated with perceptual processing), while the central manipulation affected the amplitude and the onset of the P3 component [ 31 ]. The LRP is also delayed during the PRP, in tight correlation with RT [ 80 ]. Finally, functional magnetic resonance imaging, a time-insensitive measure, showed that in the interference regime of the PRP there is no increase in activation relative to performing the two tasks independently, even when searching at a low threshold within regions of interest, which included the prefrontal cortex, the anterior cingulate, and supplementary motor area [ 36 ]. This result suggests that the PRP does not result from active executive monitoring processes, but rather from a passive queuing of the second task, as proposed in the present model. Altogether, neurophysiological and brain-imaging studies suggest that, beyond an initial perceptual delay of about 200 ms, there begins a process of accumulation of evidence, which involves the joint activation of a distributed network of areas, with partially changing topography as a function of the nature of the task, but with frequent coactivation of parietal and premotor regions. Our results suggest that this accumulation system is responsible for establishing the PRP bottleneck.
This bottleneck might occur because the cerebral accumulation system is broadly distributed and largely shared across tasks, and thus must be entirely “mobilized”, at any given moment, by whichever task is currently performed (for a simulation of this process, see [ 81 ]). This neuronal implementation of our model leads to a precise electrophysiological prediction, which could be tested in further research: the accumulation neurons in the lateral intraparietal area and frontal eye field, in an animal trained to perform a pair of PRP tasks, should show two successive stages of accumulation staggered in time; in humans, this might be reflected in a rigid, nonoverlapping sequence of two LRP or P3 event-related components, whose respective durations should covary with the RTs to the two tasks. Materials and Methods Participants A total of 42 participants, all right-handed, were involved in this study (24 males). Sixteen participants (aged 25 y ± 5 y) performed the experiment in which the tone task was presented first, and the other 16 (aged 24 y ± 4 y) performed the experiment in which the number-comparison task was presented first. Ten participants (aged 22 y ± 2 y) performed the numeric task with the addition of the Words 2 Taps condition. Participants were all native French speakers and were remunerated for their participation. Procedure Participants were asked to perform two tasks, with the clear instruction that they had to respond accurately and as fast as possible to each of them. The delay in the onset of the two tasks changed randomly from trial to trial from 0 ms (simultaneous presentation) to 1,025 ms. Subjects responded to both tasks with key presses, with the right hand for the number-comparison task and with the left hand for the tone task. In the number-comparison task, a number was flashed in the centre of the screen for 150 ms, and subjects had to indicate whether the number was larger or smaller than 45. The presented number ranged between 21 and 69, excluding 45. In different blocks, subjects performed three different versions of the number task. In the first version, the number was presented in Arabic digits and subjects were asked to respond by tapping once on the corresponding key (Digits 1 Tap). In the second version, the number was presented as a written word (in French), and subjects were also asked to respond with a single key press (Words 1 Tap; we refer to this as the “notation manipulation”). Finally, in the third version, the number was presented in Arabic digits, but subjects were asked to respond by tapping the corresponding key twice (Digits 2 Taps; we refer to this as the “response-complexity manipulation”). Within each block, both the numerical distance between the target and 45 and the delay between the presentation of the two stimuli varied randomly, and trials were presented with an intertrial interval that fluctuated between 2,600 and 3,000 ms. In each block, which lasted almost 2 min, subjects performed 40 trials. Before the beginning of each block, subjects saw on-screen instructions indicating which version of the number task would be used in the upcoming block. Subjects practiced one block of each version to become familiar with the task. After this brief training, they performed a total of 18 blocks (six for each version) in an approximately 45-min session. Stimuli Stimuli were shown on a black-and-white display on a 17-in. monitor with a refresh rate of 60 Hz. Subjects sat 1 m from the screen.
Stimuli were always presented at the fovea; their size was 1° for the Arabic digits and 2.5° for the words. Auditory stimuli were pure tones of 150-ms duration and 440- or 880-Hz frequency. Auditory stimulation was provided through headphones. Data analysis All the analyses described here were done only on correct responses (which comprised 83% of the trials). Since there were two tasks and each task had two possible responses, chance level in this experiment was 25%. Errors (17%) included errors in either the first or the second task and trials in which subjects failed to respond to one of the tasks, or to both. One subject was discarded from the analysis because the data clearly revealed that he had not performed the task as required: his RT1 systematically occurred a few hundred milliseconds after the onset of the second task, indicating that he was waiting for both tasks to be presented before responding rather than, as instructed, responding to both tasks as fast as possible. For similar reasons, for all analyses, trials in which the RT to the first task was larger than 1,200 ms (<5% of the trials) were excluded. All the statistics were done using the R software package ( http://www.r-project.org/ ), and in all ANOVAs subjects were treated as a random factor. Throughout the paper, RTs for both tasks are, by convention, measured from trial onset, i.e., the onset of the first stimulus. Distribution analysis RTs were fitted to a model based on a fixed onset delay ( t o ) followed by a forced random walk dV = α · dt + σ · dz, with response emission as soon as V reaches a threshold b (see Figure 1 ). Thus the RT is defined by T R = inf[ t ≥ 0, V ( t ) ≥ b ], where b is the threshold. This problem (of the first hitting time to an absorbing barrier of a Brownian motion) has been widely studied and can be solved analytically using the Fokker–Planck equation. The probability of hitting the threshold for the first time at time t is given by the following equation: P ( t ) = b / ( σ √(2π t 3 ) ) · exp[ −( b − α t ) 2 / (2 σ 2 t ) ]. Changing the onset by a fixed delay t o and setting the threshold to one simply shifts the distribution, which then becomes P ( t ) = 1 / ( σ √(2π ( t − t o ) 3 ) ) · exp[ −( 1 − α ( t − t o ) ) 2 / (2 σ 2 ( t − t o )) ] for t > t o . This is the equation we used to fit the RT distributions. All six distributions resulting from the different experimental manipulations, corresponding to (Digits 1 Tap, Digits 2 Taps, Words 1 Tap) × (Distance Far, Distance Close), were fit with a single fixed value of σ and with values of α and t o that were allowed to vary across the experimental conditions. The best parameters were obtained through exhaustive search using a least-squares criterion. For each value of σ , the best values of α and t o were found for each experimental condition, and the mean square residuals were averaged across all distributions. The σ that minimized the mean-squares deviation across all distributions was 0.018. The changes in the remaining parameters with the different experimental conditions, which were of interest to this study, are reported in the Results section. We repeated this fit for a broad range of σ and found that the obtained results did not depend on the choice of σ . Predicted distributions based on the PRP model Here we describe how RTs for task 2 can be predicted based on the distributions of RTs for both tasks when presented first. Because of the presence of the PRP wait (which depends on the response time to the first task), this operation is not strictly a convolution.
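For concreteness, the shifted first-passage density and the exhaustive search can be sketched in a few lines of Python. This is our own illustration, not the authors' code; the grid ranges are arbitrary, and `empirical_density` stands for a normalized RT histogram evaluated on `t_grid` (in ms).

```python
import numpy as np

def shifted_wald_pdf(t, alpha, t0, sigma=0.018, b=1.0):
    """First-passage density of dV = alpha*dt + sigma*dz through threshold b,
    shifted by a fixed non-decision delay t0 (density is zero for t <= t0)."""
    tt = np.asarray(t, dtype=float) - t0
    pdf = np.zeros_like(tt)
    ok = tt > 0
    pdf[ok] = (b / (sigma * np.sqrt(2 * np.pi * tt[ok] ** 3))
               * np.exp(-((b - alpha * tt[ok]) ** 2) / (2 * sigma ** 2 * tt[ok])))
    return pdf

def fit_condition(t_grid, empirical_density, sigma=0.018):
    """Exhaustive search for (alpha, t0) minimizing the mean square residual."""
    best = (np.inf, None, None)
    for alpha in np.linspace(0.001, 0.01, 50):   # drift grid (arbitrary)
        for t0 in np.linspace(100, 400, 50):     # onset-delay grid, in ms
            err = np.mean((shifted_wald_pdf(t_grid, alpha, t0, sigma)
                           - empirical_density) ** 2)
            if err < best[0]:
                best = (err, alpha, t0)
    return best  # (residual, alpha, t0)
```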
Since the method is not trivial and, to our knowledge, has not been described elsewhere, we will describe it step by step. In a serial sequence of two processes (in which one needs to finish before the next one starts), each with a probability distribution of RTs given respectively by R 1 and R 2 , the probability of completing the sequence at time T is given by R ( T ) = Σ t R 1 ( t ) · R 2 ( T − t ) (equation 3). This formula is simply the convolution of the two original distributions. In a PRP experiment, however, the execution of the two tasks is not serial, since there are both serial (central) and parallel (noncentral) components. The first difference is that task 2 waits not for the complete execution of task 1 but rather for the completion of the P and C components of task 1 (see Figure 3 ). Hence, the first modification is that the first distribution needs to be shifted by M1 (to account for the real start-up time of the second distribution): R 1 * ( t ) = R 1 ( t + M1 ). The second modification, because of the nature of the PRP experiment, is that task 2 obviously cannot start until it is presented, and thus the onset time is actually given by R 1 ** ( t ) = max[ R 1 * ( t ), Δ ]. Thus the real distribution of onsets of task 2 is given by the accumulated probability of the shifted R 1 up to Δ (which results in a spike at Δ) followed by the tail of R 1 * ( t ). The spike becomes more pronounced as the delay becomes larger, until the two tasks become independent. The last consideration has to do with the time it takes to respond to task 2. If Δ is sufficiently large (in the independent regime), the probability of executing task 2 at time t is given by R 2 ( t ). However, within the interference regime (for small Δ), P2 (or part of it) has already been executed by the time the P and C components of the first task (which correspond to t − M1 ) are finished (see Figure 3 ). The distribution coincides with R 2 at t = Δ, but as t increases, part of the P component of task 2 has already been carried out; this saving saturates at t = Δ + P2. Thus the probability of executing task 2 at time t 2 , given that task 1 has been executed at time t + M1, is given by R 2 ( t 2 ), where t 2 = min[( t − Δ), P2] + t . This formula is only valid for t > Δ, but this is not a problem because in any case R 1 ** ( t ) = 0 for t < Δ. We can still define R 2 * ( t , T ) = R 2 [ T − t 2 ( t )] as the probability of completing task 2 in time T − t given that task 1 has been completed in t + M1. The final formula (adapting equation 3 after all the transformations) then becomes R ( T ) = Σ t R 1 ** ( t ) · R 2 * ( t , T ). Since all these transformations depend on Δ, M1, and P2, this prediction is parametric. The data were fit by exhaustive search according to a mean-squares criterion. We fitted all the data (for each task and for all the different delay values) to obtain the values of M1 and P2. As described in Results, this model was not sufficient to fit the data (note that we are simultaneously fitting a family of 30 curves), so we added a third fixed-delay parameter ( T d ) to the fit.
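The transformations above can be implemented directly on discretized RT distributions. The following Python sketch is our reading of the recipe (bin-based; function and variable names are ours, not the authors'); it reproduces the shift by M1, the pile-up spike at Δ, and the saving of up to P2, and includes the extra fixed delay T d.

```python
import numpy as np

def predict_rt2(r1, r2, delta, m1, p2, td=0.0, dt=1.0):
    """Predicted task-2 RT distribution (measured from task-2 onset).
    r1 : RT distribution of task 1 (bins of width dt, from task-1 onset).
    r2 : RT distribution of task 2 performed without interference
         (from task-2 onset).
    delta : stimulus-onset asynchrony; m1, p2, td : model parameters (ms).
    """
    n = len(r1)
    # 1) Task 2 waits only for P1 + C1, i.e., the task-1 RT minus M1.
    s = int(round(m1 / dt))
    free = np.zeros(n)
    free[:n - s] = r1[s:]
    # 2) Task 2 cannot start before Delta: probability mass below Delta
    #    piles up into a spike at Delta (the independent regime).
    k = min(int(round(delta / dt)), n - 1)
    free[k] += free[:k].sum()
    free[:k] = 0.0
    # 3) While queuing, up to P2 of task-2 perception is already done,
    #    so the response lands at (wait) + (rt2 alone) - (saved time).
    out = np.zeros(n)
    for i in np.nonzero(free)[0]:
        wait = i * dt - delta            # time spent queuing (>= 0)
        saved = min(wait, p2)            # saturates at P2
        for j in np.nonzero(r2)[0]:
            idx = int(round((wait + j * dt - saved + td) / dt))
            if 0 <= idx < n:
                out[idx] += free[i] * r2[j]
    return out
```

In the independent regime (large Δ) the spike carries all the mass and the prediction reduces to r2 shifted by T d; in the interference regime the residual queuing time (wait − saved) is added to RT2, which produces the characteristic PRP delay.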
With the inclusion of the parameter T d , the errors, measured as the mean square residual (i.e., the mean of the squared differences between the data and the fit across all the points of the ten distributions corresponding to all possible delays), were consistently below 0.015 (20 times smaller than could be achieved without this parameter). When the error was plotted as a function of P2 + M1, we observed a parabola-shaped profile with a clear minimum (reported in Figure 7 ). When the error was plotted in the orthogonal direction (P2 − M1), however, the fit was unstable, with different local minima. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC546328.xml |
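This identifiability structure (a sharp minimum along P2 + M1, a flat and multimodal profile along P2 − M1) can be probed by profiling the residual in rotated coordinates. A minimal sketch of that bookkeeping, assuming a `fit_error(m1, p2)` function that returns the mean square residual of the full fit above (the function name and grids are ours):

```python
import numpy as np

def profile_rotated(fit_error, m1_grid, p2_grid):
    """For each value of s = P2 + M1, record the best (lowest) residual over
    the orthogonal coordinate d = P2 - M1. A sharp parabola in best_by_sum
    together with a large spread_by_sum reproduces the pattern reported in
    Figure 7."""
    sums = {}
    for m1 in m1_grid:
        for p2 in p2_grid:
            s = round(p2 + m1, 6)
            sums.setdefault(s, []).append(fit_error(m1, p2))
    best_by_sum = {s: min(v) for s, v in sums.items()}
    spread_by_sum = {s: max(v) - min(v) for s, v in sums.items()}
    return best_by_sum, spread_by_sum
```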
549032 | Gap-junction channels inhibit transverse propagation in cardiac muscle | The effect of adding many gap-junction (g-j) channels between contiguous cells in a linear chain on transverse propagation between parallel chains was examined in a 5 × 5 model (5 parallel chains of 5 cells each) for cardiac muscle. The action potential upstrokes were simulated using the PSpice program for circuit analysis. Either a single cell was stimulated (cell A1) or the entire chain was stimulated simultaneously (A-chain). Transverse velocity was calculated from the total propagation time (TPT), measured from when the first AP crossed a V m of -20 mV to when the last AP crossed -20 mV. The number of g-j channels per junction was varied from zero to 100, 1,000 and 10,000 (R gj of ∞, 100 MΩ, 10 MΩ, and 1.0 MΩ, respectively). The longitudinal resistance of the interstitial fluid (ISF) space between the parallel chains (R ol2 ) was varied between 200 KΩ (standard value) and 1.0, 5.0, and 10 MΩ. The higher the R ol2 value, the tighter the packing of the chains. It was found that adding many g-j channels inhibited transverse propagation by blocking activation of all 5 chains, unless R ol2 was greatly increased above the standard value of 200 KΩ. This was true for either method of stimulation. The explanation is that, when there is strong longitudinal coupling among all 5 cells of a chain awaiting excitation, more transfer energy (i.e., more current) is required to excite all 5 cells of the chain simultaneously. | Introduction We have developed an electric field hypothesis for the mechanism of transmission of excitation from one cell to the next that does not require gap-junction channels [ 1 - 4 ]. In the electric field hypothesis, the electrical voltage that develops in the narrow junctional cleft (V jc ) when the prejunctional membrane generates an action potential serves to depolarize the postjunctional membrane to its threshold, by a patch-clamp-like effect. The parameters that affect the magnitude of V jc include the size of R jc , the transverse resistance of the junctional cleft. This results in excitation of the postjunctional cell, after a brief junctional delay. The total propagation time consists primarily of the summed junctional delays. This results in a staircase-shaped propagation, the surface sarcolemma of each cell firing almost simultaneously [ 2 ]. There are no low-resistance connections between the cells in several different cardiac muscle and smooth muscle preparations (reviewed in [ 3 , 4 ]). Propagation by mechanisms not requiring low-resistance connections has also been proposed by others [ 5 - 8 ]. Propagation has been demonstrated to be discontinuous (or saltatory) in cardiac muscle [ 9 - 12 ]. Fast Na + channels are localized in the junctional membranes of the intercalated discs of cardiac muscle [ 13 - 15 ], a requirement for the EF mechanism to work [ 1 - 4 , 13 ]. In connexin-43 and Cx40 knockout mice, propagation in the heart still occurs, but it is slowed [ 15 - 19 ], as predicted by our PSpice simulation study [ 20 ]. It was reported that the anisotropic conduction velocity observed in the heart is not a result of cell geometry [ 21 ]. Subsequently, we published a series of papers on the longitudinal and transverse propagation of action potentials in cardiac muscle and smooth muscle using PSpice analysis [ 20 , 22 , 24 ].
In the review process for our recent paper [ 24 ], one of the referees asked us to determine the effect of introducing strong cell coupling via gap-junction (g-j) channels between cells within each chain on the transverse propagation in our 5 × 5 model (5 parallel chains of 5 cells each). Unexpectedly, we found that strong cell coupling (10,000 or 1,000 g-j channels per junction) actually inhibited transverse propagation. This finding was briefly mentioned as an unpublished observation in that paper. The purpose of the present study was to investigate that unexpected phenomenon thoroughly. The results showed that, in cardiac muscle, transverse propagation was inhibited when many g-j channels were added between the cells of each chain. This was true whether a single cell in the first chain was stimulated (cell A1) or the entire chain (A-chain) was stimulated simultaneously. Methods Details of the methods used and the modeling for the PSpice analysis, including their limitations, were given in our previous papers [ 21 - 24 ]. The full version of the PSpice software for circuit analysis/design was obtained from the Cadence Co. (Portland, OR). The assumptions made were given previously, including the entire circuit that was used [ 22 ]. An abbreviated version of the circuitry is given in the first two figures. The surface membrane of each myocardial cell was represented by 2 units and each junctional membrane by 1 unit (Figs. 1 , 2 ). The values of the circuit parameters used under standard conditions were given previously for both the surface units and the junctional units in cardiac muscle [ 22 ]. The myocardial cell was assumed to be a cylinder 150 μm long and 16 μm in diameter. The myocardial cell capacitance was assumed to be 100 pF, and the input resistance to be 20 MΩ. A junctional tortuosity (interdigitation) factor of 4 was assumed for the cell junction. Figure 1 The 5 × 5 model for cardiac muscle, consisting of 5 parallel strands (A-E) of 5 cells each (1–5) (total of 25 cells). Each muscle cell was represented by a block of 4 basic units: 2 units representing the surface membrane (one upward-facing and one downward-facing) and one unit for each of the two junctional membranes. For simplicity, the lumped resistance of the gap-junctions is not indicated here, but is shown in Fig. 2. Transverse propagation is sequential activation of chains A to B to C to D to E. Figure 2 Blow-up of a small portion of the 5 × 5 model to show the electrical circuit for each basic unit, including the "black-box" required for excitability. R ol2 represents the longitudinal resistance of the interstitial fluid between the parallel chains; the higher the resistance, the tighter the packing of the chains. Depolarizing current (0.25 nA) is applied to the interior of either the first cell or the entire chain (A-chain) simultaneously. When gap-junction channels were added, a resistor (R gj ) was inserted across each cell junction, from the interior of one cell to the interior of the next. The circuit used for each unit was kept as simple as possible, using only those ion channels that set the resting potential (RP) and predominate during the rising phase of the AP. We wanted to reproduce only the rising phase of the APs, in order to study propagation in the 2-dimensional sheet of myocardial cells. It was not possible to insert a second or third "black-box" into the basic excitable units because the system became erratic; therefore, the dynamic behavior of the cardiac cell membrane was an approximation.
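As a quick sanity check on these passive parameters (our arithmetic, not part of the original paper), the quoted capacitance and input resistance imply a membrane time constant of 2 ms, and the quoted cell geometry implies a specific capacitance close to the textbook value of 1 μF/cm²:

```python
import math

# Passive properties implied by the model parameters quoted above.
length_cm = 150e-4        # 150 um cell length
diam_cm   = 16e-4         # 16 um cell diameter
c_m       = 100e-12       # 100 pF whole-cell capacitance
r_in      = 20e6          # 20 MOhm input resistance

area = math.pi * diam_cm * length_cm   # lateral surface, ~7.5e-5 cm^2
spec_cap = c_m / area                  # ~1.3 uF/cm^2
tau = r_in * c_m                       # membrane time constant = 2.0 ms
print(f"area = {area:.2e} cm^2, Cm = {spec_cap * 1e6:.2f} uF/cm^2, "
      f"tau = {tau * 1e3:.1f} ms")
```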
The RP was -80 mV, and the overshoot potential was +30 mV (AP amplitude of 110 mV). Propagation velocity was calculated from the measured total propagation time (TPT; the interval between when the APs of the first and last cells crossed -20 mV) and the cell length. Because the PSpice program does not have a V-dependent resistance with which to represent the increase in conductance for Na + ions in myocardial cells during depolarization and excitation, this function had to be simulated by a V-controlled current source (our "black-box") in each of the basic circuit units (Fig. 2 ). The current outputs of the black-box, at various membrane voltages, were calculated assuming a sigmoidal relationship between membrane voltage and resistance between -60 mV and -30 mV. The V values used in the GTABLE were those recorded directly across the membrane. The excitabilities of the basic units were the same as in our previous papers [ 21 - 24 ]. The upper chain of cells was assumed to be bathed in a large volume of Ringer solution connected to ground. The external resistance (R o ) of this fluid was divided into two components: a radial (or transverse) resistance (R or ) and a longitudinal resistance (R ol ). The longitudinal resistance values (R ol2 ) between the upper chain and the interior chains were increased over a wide range to reflect packing of the parallel chains into a bundle of fibers with different degrees of tightness (Fig. 1 ). The higher the R ol2 value, the tighter the packing of the chains, i.e., the lower the cross-sectional area of the ISF space. The applicable equation is R ol2 = ρ × L / A x , where ρ is the resistivity of the ISF (Ω-cm), L is the length (cm), and A x is the cross-sectional area (cm 2 ) of the ISF space. The transverse resistance of the interstitial fluid (ISF) space (R or2 ) also reflects the closeness of the chains; the lower the R or2 value, the closer the chains are packed. In the present 5 × 5 model, there were five parallel chains (chains A, B, C, D, and E) of five cells each (cells 1–5). Electrical stimulation (rectangular current pulses of 0.25 nA and 0.50 ms duration) was applied to the inside of either the first cell of chain A (cell A1) or all 5 cells of the A-chain. Under initial conditions, the cells in each chain were not interconnected by low-resistance pathways (gap-junction channels), so that transmission of excitation from one cell to the next had to occur by the EF developed in the narrow junctional cleft. Then gj-channels were added (100, 1,000, and 10,000 channels per junction) to determine what effect they would have on transverse propagation. Since the number of functional gj-channels per junction has been estimated to be about 1,000 [ 12 ], we varied the number over a very wide range. The resistances of the gap-junction channels were lumped into one equivalent resistance because they are all in parallel. As shown in Figure 1 , there were two surface membrane units in each cell (one facing upwards and one inverted) and one unit for each of the junctional membranes (intercalated discs of cardiac muscle). To improve clarity, in some runs the V-recording markers were placed on only one chain at a time. When all 25 cells in a model were being recorded simultaneously, the V markers were removed from most of the basic units to minimize confusion; that is, the voltage was recorded from only one surface unit (upward-facing) in each cell.
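Because parallel conductances add, the lumped R gj follows directly from the channel count. The sketch below (ours, not the authors' code) uses the ~100 pS unitary conductance implied by the values quoted above (100 channels → 100 MΩ) and also evaluates the R ol2 geometry formula; the ρ, L, and A x values in the demo call are illustrative only, chosen to reproduce the standard 200 KΩ.

```python
# Lumping parallel gap-junction channels into one equivalent resistor, and
# computing R_ol2 = rho * L / A_x for the ISF path (equation in the text).
GAMMA_S = 100e-12   # single-channel conductance (~100 pS, implied by
                    # 100 channels -> 100 MOhm in the text)

def r_gj_ohm(n_channels):
    """Equivalent junctional resistance of n identical parallel channels."""
    return float("inf") if n_channels == 0 else 1.0 / (n_channels * GAMMA_S)

def r_ol2_ohm(rho_ohm_cm, length_cm, area_cm2):
    """Longitudinal ISF resistance; a smaller A_x means tighter packing."""
    return rho_ohm_cm * length_cm / area_cm2

for n in (0, 100, 1_000, 10_000):
    print(f"{n:>6} channels -> R_gj = {r_gj_ohm(n) / 1e6} MOhm")
# prints inf, 100, 10, 1 MOhm -- the four R_gj values used in this study

# illustrative: rho = 60 Ohm-cm (assumed for Ringer), L = 150 um, A_x = 4.5e-6 cm^2
print(r_ol2_ohm(60.0, 150e-4, 4.5e-6) / 1e3, "KOhm")   # 200.0 KOhm
```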
The junctional cleft potential (V jc ) was recorded across R jc , the radial (or transverse) resistance of the narrow and tortuous junctional cleft. Under standard conditions, R ol2 was 200 KΩ, R or2 was 100 Ω, and R jc was 25 MΩ (50 MΩ ÷ 2). Results The 5 × 5 model (5 parallel chains of 5 cells each) of cardiac muscle was used to examine whether the addition of gap-junction (gj) channels between the cells in each chain would affect transverse propagation of simulated (PSpice) action potentials. The number of gj-channels was increased from zero (standard conditions; resistance of the gap junction (R gj ) of infinity) to 100 (R gj = 100 MΩ), 1,000 (R gj = 10 MΩ), and 10,000 (R gj = 1.0 MΩ). Experiments were done with electrical stimulation (0.5 nA, 0.5 ms) of only the first cell of the first chain (A; cell A1) and with simultaneous stimulation of all 5 cells of the A-chain (each cell receiving 0.5 nA of current). This second method of stimulation was used to obtain a more accurate assessment of strictly transverse propagation. Figure 3 illustrates some of the results with stimulation of only cell A1. Panel A shows the standard conditions: R gj = ∞ (0 gj-channels) and R ol2 of 200 KΩ (the longitudinal resistance of the interstitial space between the chains). As shown, all 5 chains (25 cells) responded. The total propagation time (TPT) was 4.2 ms (measured as the elapsed time between when the first and last APs crossed -20 mV). When 10,000 g-j channels were inserted between the cells of each chain (R gj = 1.0 MΩ), the last 2 chains (D and E) failed to respond ( panel B ). The 5 cells of each responding chain (A, B, and C) now fired simultaneously because of the high degree of cell coupling, thereby giving only three AP traces, as shown. However, raising R ol2 to 10 MΩ allowed all 5 chains to respond ( panel C ), and the propagation velocity was increased. These data are summarized in Table 1 , part A . Hence, greatly elevating R ol2 could overcome the impaired transverse propagation caused by the gj-channels. Figure 3 Transverse propagation of simulated action potentials (APs; rising phase) for cardiac muscle (5 × 5 model) with stimulation of one cell only (cell A1; first cell of the A-chain). A : R gj = ∞ (0 channels). R ol2 = 200 KΩ. Standard conditions. All 25 cells responded. B: R gj = 1.0 MΩ (10,000 channels). R ol2 kept unchanged. The last 2 chains (D, E) failed to respond. All 5 cells of each chain that responded (A, B, C) fired simultaneously because of the strong cell coupling. C: With R gj held at 1.0 MΩ, raising R ol2 to 10 MΩ (representing tighter packing of the parallel chains) now allowed all 5 chains to respond. Thus, adding gj-channels inhibited transverse propagation, but this inhibition could be overcome by raising R ol2 . Table 1 Summary of simulation data on cardiac muscle with stimulation of either a single cell or the entire chain.
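For orientation, a transverse velocity can be derived from the reported TPT by assuming a chain-to-chain spacing of one cell diameter (16 μm); this back-of-the-envelope conversion is ours, since Table 1 reports TPT values rather than velocities.

```python
# Transverse velocity estimate under standard conditions (single-cell
# stimulation, TPT = 4.2 ms across chains A -> E). The 16 um chain-to-chain
# spacing is an assumption (one cell diameter).
cell_diam_cm = 16e-4
n_hops = 4                    # A->B, B->C, C->D, D->E
tpt_s = 4.2e-3

velocity = n_hops * cell_diam_cm / tpt_s
print(f"{velocity:.2f} cm/s")  # ~1.52 cm/s
```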
Part A. Stimulation of cell A1 only (TPT nc = total propagation time through the first n chains; ---- = chain(s) did not respond):

R gj (MΩ)   R ol2 (MΩ)   TPT 5c (ms)   TPT 4c (ms)   TPT 3c (ms)   Chains responding
∞           0.2          4.2           3.5           2.7           5
100         0.2          2.4           2.3           2.2           5
10          0.2          ----          ----          2.1           3
1.0         0.2          ----          ----          2.1           3
∞           1.0          3.8           3.2           2.6           5
100         1.0          3.6           3.1           1.9           5
10          1.0          ----          ----          1.6           3
1.0         1.0          ----          ----          1.5           3
∞           5.0          2.7           2.6           2.3           5
100         5.0          2.0           1.6           1.2           5
10          5.0          ----          ----          1.0           3
1.0         5.0          ----          ----          0.9           3
∞           10           2.4           2.2           2.0           5
100         10           1.5           1.3           1.0           5
10          10           2.6           1.2           0.8           5
1.0         10           2.9           1.3           0.8           5

Part B. Simultaneous stimulation of the entire A-chain:

R gj (MΩ)   R ol2 (MΩ)   TPT 5c (ms)   TPT 4c (ms)   TPT 3c (ms)   TPT 2c (ms)   Chains responding
∞           0.2          3.6           2.7           2.0           1.2           5
100         0.2          2.5           2.4           2.4           1.2           5
10          0.2          ----          ----          ----          1.1           2
1.0         0.2          ----          ----          ----          1.1           2
∞           1.0          2.6           2.1           1.5           0.9           5
100         1.0          2.7           2.6           1.6           0.8           5
10          1.0          ----          ----          1.6           0.8           3
1.0         1.0          ----          ----          1.6           0.7           3
∞           5.0          1.7           1.4           0.9           0.6           5
100         5.0          1.6           1.4           0.9           0.6           5
10          5.0          ----          1.3           0.8           0.5           4
1.0         5.0          ----          1.3           0.8           0.5           4
∞           10           1.4           1.1           0.8           0.6           5
100         10           1.3           1.0           0.8           0.5           5
10          10           1.2           0.9           0.7           0.4           5
1.0         10           1.2           0.9           0.7           0.4           5

Figure 4 illustrates some of the results with simultaneous stimulation of all 5 cells of the A-chain. Panel A shows the standard conditions: R gj = ∞ and R ol2 = 200 KΩ. As shown, all 5 chains responded (TPT was 3.6 ms). Note that the 5 cells of chains B–E did not all respond simultaneously. When 10,000 gj-channels were added between the contiguous cells of each chain (R gj = 1.0 MΩ), the last 3 chains (C, D, and E) failed to respond ( panel B ). The 5 cells of each responding chain (A, B) now fired simultaneously because of the strong coupling. However, elevating R ol2 to 10 MΩ allowed all 5 chains to respond ( panel C ). These data are summarized in Table 1 , part B. Figure 4 Transverse propagation of simulated APs for cardiac muscle with simultaneous stimulation of the entire A-chain. This was done as a better assessment of transverse propagation, for comparison with stimulation of only one cell of the A-chain. A: R gj = ∞ (0 channels). R ol2 = 200 KΩ. Standard conditions. All 5 chains responded. B: R gj = 1.0 MΩ (10,000 channels). With R ol2 kept at 200 KΩ, chains C, D, and E failed to respond. C : With R gj held at 1.0 MΩ, raising R ol2 to 10 MΩ (representing tighter packing of the chains) now allowed all 5 chains to respond. All 5 cells of each chain responded simultaneously because of the strong coupling. Thus, adding gj-channels inhibited transverse propagation, but this inhibition was overcome by raising R ol2 . Figure 5 is a graphic summary of the results for both stimulation of only one cell (A1) ( panel A ) and stimulation of the entire A-chain ( panel B ). The data include R gj values and R ol2 values not illustrated in Figures 3 and 4 . As such, this figure is complementary to Table 1 , but Table 1 also lists the TPT values. As shown, progressive addition of gj-channels reduced the number of chains that responded, with both methods of stimulation, and progressive elevation of R ol2 reversed this inhibition. Figure 5 Summary of the transverse propagation experiments. Graphic plot of the number of chains responding as a function of R gj for cardiac muscle, with stimulation of only one cell (cell A1) ( panel A ) or of the entire A-chain ( panel B ). Data for various R ol2 values (0.2, 1.0, 5.0, and 10 MΩ) are indicated. Four different R gj values were tested: ∞ (0 channels), 100 MΩ (100 channels), 10 MΩ (1,000 channels), and 1.0 MΩ (10,000 channels). Thus, transverse propagation was depressed when there were many gj-channels (1,000 or 10,000), but elevation of R ol2 could overcome this depression.
Discussion The results demonstrated that, in a cardiac muscle model, the insertion of many gj-channels between abutting cells in each longitudinal chain actually inhibited transverse propagation of excitation between the parallel chains. This was true both for stimulation of only a single cell of the first chain (A-chain; cell A1) and for simultaneous stimulation of all 5 cells of the A-chain. The inhibition produced by the gj-channels could be overcome by greatly increasing the value of R ol2 (the longitudinal resistance of the narrow interstitial space between the parallel chains), reflecting tighter packing of the chains. This finding surprised us at first. We were expecting either that there would be no effect on transverse transmission of excitation or that transverse transfer of excitation would be enhanced. But on further reflection, the inhibition might have been predicted. This is based on the fact that, with strong longitudinal coupling between cells, the entire chain of 5 cells must be brought to threshold simultaneously. Therefore, if the transverse transfer energy available is limited and is near threshold for a given chain, it is likely that some chains will fail to fire. Thus, the problem with strong coupling lies not in the chains that are already excited, but rather in the quiescent adjacent chain that is in the process of becoming excited (the D-chain in the case of Fig. 3 and the C-chain in Fig. 4 ). Strong coupling in the "in-process" chain requires more energy transfer from the "already-activated" chain. If the "in-process" chain were not coupled by gj-channels, then if only one cell of the chain received enough stimulating energy to become activated, excitation would spread from it to the remaining cells of that chain. This idea was tested, and the results shown in Figure 6 are consistent with the mechanism proposed above. In panel A , in which R gj was 1.0 MΩ (10,000 gj-channels) uniformly in all 5 chains, chains D and E failed to fire. When the R gj of just chains D and E was changed to infinity (0 channels), all 5 chains fired ( panel B ). Therefore, the inhibition of transverse propagation can be reversed by uncoupling the cells of the chains that failed. Figure 6 Special experiment to test why the presence of many gap-junction channels inhibits transverse propagation. Cell A1 only was stimulated; R ol2 was 1.0 MΩ. A: Uniform value of R gj of 1.0 MΩ (10,000 channels) in all 5 chains. Chains D and E failed to respond. Compare with Fig. 3B for the standard R ol2 of 0.2 MΩ. B: The R gj values in chains D and E only were changed to infinity (0 channels). Now chains D and E responded. See text for details. This demonstrates that removing the gj-channels in the two chains awaiting excitation (D, E) increased the safety factor for transverse propagation. Consistent with the argument presented above, the transverse propagation velocity was not reduced by the insertion of gj-channels up through the last chain activated (the C-chain in the case of Fig. 3 and the B-chain in Fig. 4 ); if anything, the transverse velocity was increased slightly (TPT lowered). The TPT values for differing amounts of transverse spread of excitation (e.g., 2 chains, 3 chains, 4 chains, or all 5 chains) are listed in Table 1 for different degrees of cell coupling.
Note that, in part A, the largest change in TPT value is between R gj of ∞ (0 channels) and R gj of 100 MΩ (100 channels); adding more channels (R gj of 10 MΩ (1,000 channels) or 1.0 MΩ (10,000 channels)) did not further decrease the TPT. Similar results were found when the entire A-chain was stimulated (part B of Table 1 ). The reader should be alerted to the limitations of the PSpice program. It was not possible to insert a second or third "black-box" into the basic excitable units (because the system became erratic). Therefore, the dynamic behavior of the cardiac cell membrane was only a close approximation. We previously found (unpublished observation) that the transverse velocity was greater when the size of the model was increased (e.g., 3 × 4, 5 × 5, 7 × 7) up to a presumed maximum, indicating that the boundary conditions affected the behavior of the model. Therefore, the present experiments should be repeated on larger models, although we believe that the qualitative findings would be much the same. In addition, 2-dimensional activation maps should be made in the future to better elucidate how the wavefront spreads. These findings might have important clinical implications, especially for the genesis of arrhythmias in pathophysiological situations. Any pathology that altered the number of functioning gj-channels would affect not only the longitudinal propagation velocity but also the ability of excitation to propagate transversely and the transverse velocity. Therefore, the genesis of some arrhythmias, e.g., the reentrant type, could be promoted under such conditions. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC549032.xml |