Aspergillus fumigatus is an important human fungal pathogen, yet few expression systems are available to study the contribution of specific genes to the growth and virulence of this opportunistic mould. Regulatable promoter systems based upon prokaryotic regulatory elements of the E. coli tetracycline-resistance operon have been successfully used to manipulate gene expression in several organisms, including mice, flies, plants, and yeast. However, the system has not yet been adapted for Aspergillus spp.

Here we describe the construction of plasmid vectors that can be used to regulate gene expression in A. fumigatus using a simple co-transfection approach. Vectors were generated in which the tetracycline transactivator (tTA) or the reverse tetracycline transactivator (rtTA2s-M2) is controlled by the A. nidulans gpdA promoter. Dominant selectable cassettes were introduced into each plasmid, allowing for selection following gene transfer into A. fumigatus by incorporating phleomycin or hygromycin into the medium. To model an essential gene under tetracycline regulation, the E. coli hygromycin resistance gene, hph, was placed under the control of seven copies of the TetR binding site (tetO7) in a plasmid vector and co-transfected into A. fumigatus protoplasts together with one of the two transactivator plasmids. Since the hph gene is essential to A. fumigatus in the presence of hygromycin, resistance to hygromycin was used as a marker of hph reporter gene expression. Transformants were identified in which the expression of tTA conferred hygromycin resistance by activating expression of the tetO7-hph reporter gene, and the addition of doxycycline to the medium suppressed hygromycin resistance in a dose-dependent manner. Similarly, transformants were identified in which expression of rtTA2s-M2 conferred hygromycin resistance only in the presence of doxycycline. The levels of doxycycline required to regulate expression of the tetO7-hph reporter gene were within non-toxic ranges for this organism, and low-iron medium was shown to reduce the amount of doxycycline required to accomplish regulation. The vectors described in this report thus provide a new set of options to experimentally manipulate the level of specific gene products in A. fumigatus.

A. fumigatus is a saprophytic filamentous fungus that has become the leading mould pathogen in leukemia treatment centers and transplantation units in developed countries, second only to Candida spp. as a cause of systemic mycosis. Analysis of the Aspergillus case-fatality rate has demonstrated that more than 50% of patients die with, or as a result of, aspergillosis, despite having received the reference standard of therapy. The completion of the annotated sequence of the A. fumigatus genome is expected to greatly facilitate efforts to determine the contribution of specific gene products to the virulence of this opportunistic pathogen. Unfortunately, the genetic tractability of A. fumigatus has lagged behind some other fungal systems, particularly in the area of conditional expression systems. Inducible promoter systems have proven to be instrumental for the elucidation of gene function in a number of species, most notably with essential genes. Experimental manipulation of gene expression in A. fumigatus is presently accomplished through the use of DNA cassettes that are introduced into the organism as transgenes. An inducible promoter system based upon the alcA promoter from A. nidulans has been successfully used in A. fumigatus.
However, the alcA promoter can have significant effects on the metabolism of the organism, and this remains a concern for many applications, particularly for in vivo studies.

An alternative is offered by the E. coli tetracycline-resistance operon, a regulatory unit that detects minute concentrations of tetracycline and mounts an appropriate resistance response. Expression of the operon is controlled by a repressor protein, TetR, that binds to operator sequences (tetO) in the promoter/enhancer region of the operon and prevents transcription. In the presence of tetracycline, TetR is unable to bind tetO, which relieves the repression and allows the operon to be expressed. This system has been adapted for experimental gene regulation in eukaryotes by fusing TetR to the transcriptional activating domain of herpes simplex virus VP16, thereby creating a synthetic tetracycline-regulatable transcriptional activator protein (tTA) that can be used to regulate a gene placed under the control of a tetracycline-responsive promoter. In the vectors described here, the tetracycline-responsive promoter consists of tetO repeats linked to a 175 bp minimal gpdA promoter from A. nidulans; a schematic illustration of the two regulatory modes follows.
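To make the contrast between the two transactivator modes concrete, the following Python sketch models reporter output as a function of doxycycline concentration using simple Hill-type response curves. It is purely illustrative: the parameter values (EC50s, Hill coefficients, basal leakage) are invented for the example and are not measurements from this study.

```python
import numpy as np

def tet_off_response(dox, ec50=1.0, n=2.0, basal=0.05):
    """tTA ('tet-off'): reporter output falls as doxycycline rises,
    because doxycycline prevents tTA from binding tetO7."""
    return basal + (1.0 - basal) / (1.0 + (dox / ec50) ** n)

def tet_on_response(dox, ec50=5.0, n=2.0, basal=0.05):
    """rtTA ('tet-on'): reporter output rises with doxycycline,
    because rtTA binds tetO7 only when doxycycline is present."""
    return basal + (1.0 - basal) * (dox / ec50) ** n / (1.0 + (dox / ec50) ** n)

# Hypothetical doxycycline concentrations (ug/ml), chosen for illustration only.
for d in [0.0, 0.5, 1.0, 2.0, 5.0, 15.0, 50.0]:
    print(f"dox={d:5.1f}  tet-off={tet_off_response(d):.2f}  tet-on={tet_on_response(d):.2f}")
```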
In the tTA ('tet-off') configuration, transformants from the tTA/tetO7-hph co-transfection were screened for doxycycline-regulatable hygromycin resistance. Two transformants were selected for further analysis, one of which (tTA-1) showed moderate hygromycin sensitivity in the presence of doxycycline. Conidia from each of these transformants were spotted into the center of a plate of minimal medium containing both doxycycline and hygromycin, and the radial growth of the colony was monitored with time. The pH of the medium in this experiment was adjusted to 8 in order to maximize hygromycin toxicity. Under these conditions, doxycycline suppressed hygromycin-resistant growth in a dose-dependent manner, although individual transformants differed in the amount of doxycycline required. Inducible promoter systems are particularly useful for creating strains that can be inducibly depleted of an essential gene product, and incomplete suppression in a given transformant may reflect higher levels of hph RNA than in tTA-2 and/or basal expression from one or more integrated copies of the tetO7-hph reporter gene (which would not be affected by doxycycline). Nevertheless, there was a clear dose-response effect of doxycycline on hph expression and hygromycin-resistant growth in this strain.

The reverse transactivator rtTA2s-M2 is a 'tet-on' regulator that requires interaction of the transactivator with tetracyclines before it can bind tetO and activate transcription. To test this configuration, protoplasts were co-transfected with the rtTA plasmid and the tetO7-hph reporter. Doxycycline was incorporated into the medium at 100 μg/ml to ensure that the tetO7-hph transgene would be expressed at sufficient levels to protect against hygromycin toxicity. Approximately 15% of 27 hygromycin-resistant colonies showed reduced growth when shifted to hygromycin medium without doxycycline, one of which was selected for further analysis. In this transformant, doxycycline-dependent activation of the tetO promoter conferred hygromycin resistance in A. fumigatus. A further increase in hygromycin resistance was achieved at 15 μg/ml of doxycycline, but concentrations above 15 μg/ml had no additional effect. Northern blot analysis confirmed that the levels of hph RNA in the rtTA transformant were increased by the addition of doxycycline to the medium.

A recent report has shown that iron blocks the accumulation and activity of tetracyclines in bacteria. Since standard Aspergillus minimal medium contains iron, we tested the effect of lowering the iron concentration and found that low-iron medium reduced the amount of doxycycline required to regulate the tetO7-hph reporter gene.

The tetracycline-inducible method of gene regulation has become one of the most popular tools to manipulate gene expression in eukaryotes. The system has been used effectively in S. cerevisiae, and Candida albicans and C. glabrata are the only pathogenic fungi in which the system has been successfully applied thus far; however, neither of these studies used the tetR-VP16 fusions upon which the tTA and rtTA systems are based.

In this study we show that both the tet-off (tTA) and tet-on (rtTA) systems can be used to regulate the expression of a hygromycin resistance reporter gene in A. fumigatus. Since the hph gene is essential in the presence of toxic levels of hygromycin, the ability to control hygromycin resistance by modulating the levels of hph transcription validates the system as a tool for analysis of essential genes in A. fumigatus. In the tTA system we found that individual transformants varied in the amount of doxycycline that was necessary to regulate expression of the tetO7-hph reporter gene. Since doxycycline prevents the tTA protein from binding to the tetO sequence, this is most likely due to variability in the amount of tTA protein that is expressed in each transformant. A limitation of the tTA approach described here is that the majority of the hygromycin-resistant transformants from the tTA/tetO7-hph co-transfection were not susceptible to regulation by doxycycline. This may be due in part to leaky expression of the tetO7-hph reporter, caused by enhancers in the proximity of the integration site. It is also possible that levels of tTA expressed from the gpdA promoter used in this study were too high to be overcome by non-toxic concentrations of doxycycline. Since lower levels of tTA expression are more readily suppressed by doxycycline, it is conceivable that a weaker promoter used to drive tTA would increase the frequency with which doxycycline-regulatable transformants can be isolated. Lower levels of tTA expression could also be accomplished by using a shorter segment of the gpdA promoter used in this study.

The ability to quantitatively control expression from the tetO7-hph reporter gene was also observed in a strain expressing the reverse transactivator, rtTA. Concentrations of doxycycline from 2 μg/ml to 15 μg/ml gave a graded response of hygromycin resistance, indicating that A. fumigatus is responsive to concentrations of doxycycline that are similarly effective in S. cerevisiae and C. albicans. Only 15% of the hygromycin-resistant colonies from an rtTA/tetO7-hph co-transfection showed doxycycline-dependent hygromycin resistance, however, suggesting that some of the hygromycin resistance was due to leaky expression of the tetO7-hph gene. Leakage of tetO7-regulated genes has been described in other systems, and is attributed to enhancers located in the proximity of the integration site that increase expression of the tetO-linked gene. This emphasizes the need to screen for regulatable expression of tetO7-controlled genes regardless of whether they are integrated randomly in the genome or targeted to specific loci.

This report establishes the utility of the tetracycline-regulated system as an approach to regulate gene expression in A. fumigatus. A limitation of the system was that only 10–15% of the transformants could be regulated by doxycycline, whether tTA or rtTA was used, emphasizing the need to screen for regulatable transformants. A recent approach to limit the problem of leakiness of a tetO-driven gene is the use of trans-silencer proteins comprised of fusions between tetR and a transcriptional silencing domain. It is conceivable that incorporation of an A. fumigatus-derived trans-silencer protein into the co-transfection approach described in this study would improve the efficiency of the system.

All vectors are based on the pBluescript plasmid (Stratagene) and were linearized prior to transfection.
PCR amplification of components was performed using standard protocols with PfuTurbo DNA polymerase (Stratagene).

A segment containing seven copies of the tet operator sequence (tetO7) was PCR amplified from pUHD10-3 with the forward primer 5'-aagcttgcgtatcacgaggccctttc and the reverse primer 5'-aagcttctcgacccgggtaccgag (added HindIII cloning sites) and cloned into the HindIII site of pBluescript. A 1.6 kb fragment containing a minimal gpdA promoter from A. nidulans (-175 relative to the ATG of the hph open reading frame), the hph gene encoding resistance to hygromycin, and the trpC terminator from A. nidulans was then PCR amplified from pAN7-1 with forward primer 5'-gagctccccatcttcagtatattcatc (added SstI cloning site) and reverse primer 5'-tctagatcgcgtggagccaagagcgg (added XbaI cloning site) and cloned downstream of tetO7 into the SstI and XbaI sites of the plasmid, creating p482. To minimize read-through from flanking sequences into tetO7, a 280 bp segment of the terminator region of A. fumigatus cgrA was inserted upstream of tetO7. The cgrA terminator was PCR amplified from genomic DNA of A. fumigatus isolate H237 using the forward primer 5'-aagcttacagcagaagaatctctc (added HindIII cloning site) and reverse primer 5'-ctcgagatgattcatgacgtatattc (added XhoI cloning site), cloned into pCR2.1-Topo (Invitrogen), excised with HindIII, and inserted upstream of tetO7 in p482 to create p500.

A segment of the A. nidulans gpdA promoter (-679 relative to the ATG of the hph open reading frame) was amplified from pAN7-1 using the forward primer 5'-aagcttcggagaatatggagctt (added HindIII cloning site) and the reverse primer 5'-gaattcggtgatgtctgctcaag (added EcoRI cloning site) and cloned into pBluescript at the same sites. The tTA gene was then PCR amplified from pUHD15-1 with the forward primer 5'-gaattctggcaatgtctagattagataaaag (added EcoRI cloning site) and reverse primer 5'-atcatgtctggatcctcgcg and cloned into the EcoRI and BamHI sites downstream of the gpdA (-679) promoter. A 280 bp segment of the terminator region of A. fumigatus cgrA was amplified with forward primer 5'-actagtacagcagaagaatctctc (added SpeI site) and reverse primer 5'-gcggccgcatgattcatgacgtatattc (added NotI site) and inserted into the SpeI and NotI sites downstream of tTA. To introduce phleomycin selection into this construct, a phleomycin resistance cassette containing the A. nidulans gpdA promoter, the Streptoalloteichus hindustanus ble gene encoding resistance to phleomycin, and the S. cerevisiae CYC1 terminator was amplified from pBCphleo using the forward primer 5'-cctcaggcggagaatatggagcttcatcg and the reverse primer 5'-cctcaggaattaaagccttcgagcgtccc. The PCR product was cloned into pCR-Blunt II-TOPO (Invitrogen), excised with KpnI and XhoI and inserted into the PgpdA-tTA construct to create p444. The phleomycin cassette was excised from p444 with HindIII and re-ligated to create p473. To introduce hygromycin selection into p444, the phleomycin cassette was excised with KpnI and HindIII and replaced with a hygromycin resistance cassette that was amplified from pAN7-1 with forward primer 5'-ggtacccggagaatatggagcttc (added KpnI cloning site) and reverse primer 5'-aagcttgcttgagagttcaaggaag (added HindIII cloning site) to make p434.

An analogous plasmid in which rtTA2s-M2 is driven by the gpdA promoter was constructed, using a fragment amplified from pAN7-1, to create the rtTA plasmid p474. To introduce phleomycin resistance into p474, the phleomycin resistance cassette was excised from p444 with KpnI and HindIII and cloned into the same sites in p474 to create p480. To introduce hygromycin resistance into p474, the hygromycin resistance cassette described in p434 was excised from an unrelated plasmid as a HindIII fragment and cloned into the HindIII site of p474 to make p502. The tTA gene was excised from p473 with ... (A quick computational check of the added cloning sites in the primers above appears below.)
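As a lightweight sanity check on the cloning-site tags in the primers above, the short Python sketch below scans a subset of the listed primers for the recognition sequence of the enzyme each is supposed to add. The primer strings are copied from this section; the enzyme recognition sequences are standard (HindIII AAGCTT, XhoI CTCGAG, EcoRI GAATTC, KpnI GGTACC, SpeI ACTAGT, NotI GCGGCCGC, SstI/SacI GAGCTC, XbaI TCTAGA). This check is our illustration, not part of the original protocol, and the same approach applies to the remaining primers.

```python
# Recognition sequences for the enzymes named in the cloning steps.
ENZYMES = {
    "HindIII": "AAGCTT", "XhoI": "CTCGAG", "EcoRI": "GAATTC",
    "KpnI": "GGTACC", "SpeI": "ACTAGT", "NotI": "GCGGCCGC",
    "SstI": "GAGCTC", "XbaI": "TCTAGA",
}

# (primer name, sequence, enzyme site it should carry) -- sequences copied from the text.
PRIMERS = [
    ("tetO7 fwd",          "aagcttgcgtatcacgaggccctttc",  "HindIII"),
    ("tetO7 rev",          "aagcttctcgacccgggtaccgag",    "HindIII"),
    ("gpdA-hph-trpC fwd",  "gagctccccatcttcagtatattcatc", "SstI"),
    ("gpdA-hph-trpC rev",  "tctagatcgcgtggagccaagagcgg",  "XbaI"),
    ("cgrA term fwd",      "aagcttacagcagaagaatctctc",    "HindIII"),
    ("cgrA term rev",      "ctcgagatgattcatgacgtatattc",  "XhoI"),
]

for name, seq, enzyme in PRIMERS:
    site = ENZYMES[enzyme]
    status = "found" if site in seq.upper() else "MISSING"
    print(f"{name:20s} {enzyme:8s} site {status}")
```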
The A. fumigatus strains used in this study are listed in the Table. Strains were grown on Aspergillus minimal medium plates containing FePO4·4H2O; for low-iron minimal medium, the FePO4·4H2O concentration was reduced to 0.45 μM.

Plasmids were introduced into A. fumigatus protoplasts as previously described. The tetO7-hph reporter construct was co-transfected with 5 μg of the linearized tTA plasmid (p444) or 50 μg of the linearized rtTA plasmid (p474).

For experiments addressing the effects of doxycycline on hygromycin sensitivity, ten thousand conidia were spotted onto the surface of Aspergillus minimal medium agar containing hygromycin and doxycycline at the concentrations specified in the Figure legends. The plates were then incubated at 37°C, and colony diameter was measured with time. Radial growth rates were calculated from the exponential part of the resulting growth curves, as illustrated below.
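The growth-rate calculation lends itself to a small worked example. The sketch below, a minimal illustration with made-up diameter measurements rather than data from this study, fits a line to log-transformed colony diameters over an assumed exponential phase and reports the slope as the growth rate.

```python
import numpy as np

# Hypothetical colony diameters (mm) measured over time (hours); not real data.
hours = np.array([12, 24, 36, 48, 60, 72])
diameter = np.array([2.1, 3.4, 5.6, 9.0, 14.8, 24.0])

# During exponential growth, log(diameter) is linear in time, so the slope of a
# least-squares line through log(diameter) estimates the specific growth rate.
slope, intercept = np.polyfit(hours, np.log(diameter), 1)
print(f"growth rate = {slope:.3f} per hour (doubling time = {np.log(2) / slope:.1f} h)")
```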
For analysis of hph gene expression, RNA was isolated from overnight cultures in minimal medium supplemented with the indicated concentrations of doxycycline by crushing in liquid nitrogen and extracting RNA from the crushed mycelium with phenol/chloroform. Twenty micrograms of total RNA were fractionated by formaldehyde gel electrophoresis as previously described, transferred to a membrane, and hybridized to a 32P-labeled hph DNA probe under stringent conditions in 50% (v/v) formamide/5X SSC/2X Denhardt's solution/10% (w/v) dextran sulfate/1% (w/v) sodium dodecyl sulfate (SDS). The hph probe was an 800 bp EcoRI-BamHI fragment from pAN7-1 containing part of the hph open reading frame. Hybridization intensity was quantified with a Phosphorimager (Molecular Dynamics) and normalized for differences in gel loading by quantitating the relative levels of SYBR-green II-stained rRNA.

Abbreviations: tTA, tetracycline transactivator; rtTA, reverse tetracycline transactivator; TetR, tetracycline repressor; tetO, TetR binding sequence; hph, hygromycin resistance gene; ble, phleomycin resistance gene.

KV participated in vector construction, gene transfer into A. fumigatus, screening of transformants and drafting the manuscript. RB participated in plasmid construction. JCR contributed to the planning of the study. DSA conceived of the project and directed its design and execution. All authors have read and approved the final manuscript.

Basal cell carcinoma (BCC) is the most common carcinoma in humans. It accounts for 20% of carcinomas in men and 10–15% of carcinomas in women. Despite its high incidence, metastatic events are exceedingly rare; the reported frequency of metastatic dissemination is estimated at 0.0028–0.5 percent. Once metastasis is detected, there is a high mortality rate of 50% within 8 months.

In this study, we present a case of simultaneous lung and parotid metastases of a giant BCC primary located on the right medial canthus of a 62 year old female. Examination of the tumor located on the medial canthus showed "adenoid BCC". Computed tomography (CT) was performed to evaluate the parotid gland and the chest, and parotid and lung metastases were detected on CT. Routine laboratory tests and radiological investigations were done; there was no abnormal finding. We also investigated this patient with a bone scan and abdominal and cranial CT scans. Although metastasis of BCC is a very rare condition, this study reports a case of simultaneous parotid gland and lung metastasis originating from a giant BCC primary that was located on the right inner canthus of a 62 year old female.

Basal cell carcinoma (BCC) is the most common carcinoma in humans and accounts for 20% of carcinomas in men and 10–15% of carcinomas in women. Approximately 75–86% of primary BCCs are found on the head or neck. The most common location on the head is the nose, specifically the nasal tip and alae. BCC constitutes 90% of periorbital malignancies. BCC arising on the medial canthus tends to be deep and invasive and may result in perineural extension and loss of optic nerve function. Pieh et al reported that the highest recurrence rate of BCC following attempted excision, approximately 60%, was seen with lesions arising from the medial canthus, since these lesions tend to be more invasive and difficult to manage.

A 62-year-old woman was referred to the Plastic and Reconstructive Surgery Department for treatment of a bleeding exophytic tumor located on the right inner canthus. She had had the lesion for approximately 11 years. Initially, the patient was treated with excision and primary closure ten years ago; at this time the tumor had a diameter of 5 cm. The tumor was diagnosed as adenoid BCC microscopically and surgical margins were tumor-positive. The patient was operated on two years later, when the diameter of the recurrent tumor was 15 mm. Histological examination of this second specimen revealed an "adenoid BCC" with clear surgical margins.

Although the tumor recurred again after the second excision, the patient neglected medical advice and did not undertake any treatment (Figure). We also investigated this patient with a bone scan, abdominal and cranial CT scans, and a thoracic CT. Multiple metastatic lesions were seen in the chest CT (Figure).

Examinations of the cardiovascular, gastrointestinal, neurological, urogenital and hematological systems and other parts of the skin were performed by physical and routine laboratory and radiological techniques. There were no abnormal findings. Biopsy was performed from the tumor located on the inner canthus and revealed "adenoid basal cell carcinoma" (Figure).

The patient did not accept the offer of surgical treatment for the tumor on the inner canthus. She was referred to the Oncology Department and treated with radiotherapy and chemotherapy. The patient received approximately 6000 cGy of external beam radiation in total over 3 weeks.
Also, chemotherapy was initiated with cisplatin and 5-fluorouracil. She was followed up with physical examination and CT scans for six months and there were no metastases to other organs. She is still being followed.

Spates et al. noted that metastatic BCC was first reported in 1894. As outlined in the accepted diagnostic criteria, the following must be met for a diagnosis of metastatic BCC:

1) the primary tumor must occur in skin containing hair follicles and not the mucous membranes;

2) metastasis cannot be by simple extension, but must occur at a site distant from the primary tumor;

3) the primary tumor and the metastasis must have similar histologic appearances of basal cell carcinoma; and

4) squamous cell features must not be present in the lesions.

The case presented here meets these criteria.

While the usual BCC that gives rise to metastases is a large, ulcerated, locally invasive BCC of the head and neck that recurs despite repeated surgical procedures or radiotherapy, these features are not absolute prerequisites for metastasis.

Tumors greater than 3 cm in diameter have a 2% incidence of metastasis and/or death. This increases to 25% in lesions more than 5 cm in diameter and to 50% in lesions more than 10 cm in diameter.

The author(s) declare that they have no competing interests. EC conceived the study and coordinated the write-up and submission. AA participated in the writing of the manuscript. All authors read and approved the final manuscript.
Cigarette smoking prevalence among gay men is twice that of population levels. A pilot community-level intervention was developed and evaluated, aiming to meet UK Government cessation and cancer prevention targets. Four 7-week withdrawal-oriented treatment groups combined nicotine replacement therapy with peer support. Self-report and carbon monoxide register data were collected at baseline and 7 weeks. N = 98 gay men were recruited through community newspapers and organisations in London, UK. At 7 weeks, n = 44 (76%) were confirmed as quit using standard UK Government National Health Service monitoring forms. In multivariate analysis the single significant baseline variable associated with cessation was the previous number of attempts at quitting. This tailored community-level intervention successfully recruited a high-prevalence group, and the outcome data compare very favourably to national monitoring data (which report an average of 53% success). Implications for national targeted services are considered.

Analysis of tobacco marketing has demonstrated lesbian and gay youth as an emerging target community. Gay men have been disproportionately affected by HIV/AIDS disease in developed countries, and HIV risk-taking behaviour is associated with cigarette smoking among HIV-negative gay men.

The National Health Service (NHS) Cancer Plan, a UK Government health strategy, recommends that Primary Care Trusts (PCTs) take a commissioning lead in forming local alliances involving community groups, harnessing community efforts, and disseminating effective interventions. This innovative pilot study aimed to design, recruit to, and deliver a series of pilot smoking cessation group interventions and to evaluate outcomes using standard UK Government assessment criteria.

The intervention was developed and delivered by a community-based, volunteer-led charity in London, UK, with a remit to promote the health of gay men. Potential acceptability and effectiveness were maximised by providing an NHS-approved programme adapted for an appropriate service wholly facilitated and attended by gay men. Seven volunteers experienced in delivering group interventions within the organisation were trained in the 3-day course "Setting up and running specialist Smoking Cessation Clinics", part of the Smoking Cessation Training and Research Programme (SCTRP) at St Bartholomew's and Royal London School of Medicine and St George's Hospital Medical School. The programme of withdrawal-oriented treatment combines groupwork, nicotine replacement therapy and ongoing peer support throughout. An initial information session is followed by 6 closed group sessions, setting a quit date for week 3. This pilot consisted of 4 delivered groups, and each group consisted of 7 closed weekly meetings, each of 2 hours.

The service principle was a non-judgemental environment where gay men could address socialising and gay social spaces, recreational drug use, sexuality and HIV, and the impact of these on their motivations, and ability, to quit smoking. Several specific modifications were made to the taught model. Our intervention modified the SCTRP programme's use of "quit buddies", which promoted partnered support, instead creating "quit cells" of 3 or 4 participants. This design modification was made in the light of other group interventions delivered by this community organisation, in which reliance of a participant on more than one person for support was found to be more reliable.
The information on Zyban was expanded to address contraindications with HIV antiretroviral combination therapies. Exercises from assertiveness training courses were imported to assist participants in clearly communicating the intention to remain a non-smoker. In general, group discussion and processes were focussed on contexts culturally specific to gay men. A detailed intervention programme was written in order to promote consistency across the cycles of intervention delivery.

Week 1: information on the course content as well as expectations of the quit date is given, along with information regarding potential side effects and how to deal with them. Week 2: what to expect when you quit and how to deal with reactions, information on the effects of carbon monoxide, preparation for the quit date, personal action plan, and how to use a smoking diary. Week 3: information on how to use nicotine replacements, role play of assertive refusal of cigarettes, selection and formation of quit support cells, and personal statements of cessation. Week 4: group review of challenges of the first week of not smoking with reference to the smoking diary and personal action plan, exploration of potential "alternative" support such as meditation and exercise, discussion of the challenges of drug use with respect to smoking cessation. Week 5: group review of the previous week's experience, information on health benefits achieved to date and weight gain issues. Week 6: review of the previous week's experience, identification of future sources of support. Week 7: review of the previous week's experience, information on health benefits to date, elaboration on support sources, small celebration of the group's achievement.

Twenty-four recruitment advertisements were placed in the free London-wide and national gay press, and accompanying editorial and articles were secured to support the recruitment process.

Prior to the initial session, participants were sent the required UK Department of Health self-completion Smoking Cessation Service NHS Client Assessment Form. Carbon monoxide readings were taken at each session from week 2, using the "Smokealyser" calibrated carbon monoxide register, and readings were used in addition to self-report data to confirm smoking cessation at week 7. All intervention attendees were asked to give written permission for data collection purposes and were given guarantees of confidentiality.

All data were entered into SPSS for Windows v11. In line with NHS monitoring data requirements, the percentage of successful quitters was calculated as those who gave carbon monoxide readings and confirmed they had quit at week 7, as a percentage of those who set a quit date for week 3. Variables were entered individually into univariate binary logistic regressions, with cessation outcome as the dependent variable and participant baseline characteristics, attitudes and behaviour, and nicotine replacement methods as independent variables. Variables with p values below 0.25 were then entered stepwise into a multivariate logistic regression, with 95% confidence intervals (95% CI) reported; a minimal sketch of this two-stage screening appears below.
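The following Python sketch illustrates the two-stage analysis described above: univariate screening at p < 0.25 followed by a multivariate logistic regression on the retained variables. It is a minimal illustration on simulated data; the variable names and the simultaneous (rather than stepwise) entry at stage two are our simplifications, not the study's SPSS output.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 69  # analysis sample size reported in the text

# Simulated stand-ins for baseline variables; not the study data.
df = pd.DataFrame({
    "age": rng.normal(37, 9, n),
    "cigs_per_day": rng.integers(5, 40, n),
    "previous_attempts": rng.poisson(2.8, n),
})
logit = -1.5 + 0.5 * df["previous_attempts"]
df["quit"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Stage 1: univariate screening, keeping predictors with p < 0.25.
kept = []
for var in ["age", "cigs_per_day", "previous_attempts"]:
    fit = sm.Logit(df["quit"], sm.add_constant(df[[var]])).fit(disp=0)
    if fit.pvalues[var] < 0.25:
        kept.append(var)

# Stage 2: multivariate model on the retained variables,
# reporting odds ratios with 95% confidence intervals.
fit = sm.Logit(df["quit"], sm.add_constant(df[kept])).fit(disp=0)
ci = np.exp(fit.conf_int())
print(pd.DataFrame({"OR": np.exp(fit.params), "2.5%": ci[0], "97.5%": ci[1]}))
```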
Ninety-eight men registered to attend the intervention, and of these 76 attended at least the first session. Sixty-nine of the men returned the assessment sheet, and the outcome analysis is of those 69 men. The mean age of participants was 37.1 years, and n = 63 (90%) reported their ethnicity as White. Forty-four men (64%) had been educated to degree level or higher, and n = 52 (75%) were in full-time employment, with a further 9 (13%) men medically retired, n = 5 (7%) unemployed, n = 2 (3%) in full-time education and n = 1 (1%) retired. Seventeen men (25%) were entitled to free prescriptions (i.e. the welfare state pays for their prescribed medications). Sixty-five men (94%) reported that they drink alcohol, consuming a mean of 22.8 units per week.

The daily number of cigarettes smoked was recorded in the following bands: 1–5; 5–10; 11–20; 21–30; 31–40; 41+. The first cigarette after waking was smoked within the following intervals: 5 minutes; 6–30 minutes; 31–60 minutes; 61+ minutes. Smoking motivations are summarised in the Table.

Participants reported a mean of 2.6 consultations with their primary care General Practitioner (GP) in the previous year. Secondary/hospital consultations in the previous year were reported by n = 35 (52%) men, with a mean of 2.26 consultations for these men. Thirty-four men (51%) had been recommended by their GP to give up cigarette smoking, and n = 26 (38%) men were currently on prescribed medication. Fourteen men (20%) were diagnosed HIV-positive, n = 25 (51%) HIV-negative, n = 16 (23%) untested and n = 4 (6%) refused to answer. The participants rated their health as follows: excellent n = 10 (14.5%); good n = 36 (52%); moderate n = 20 (29%); poor n = 2 (3%); very poor n = 1 (1%).

Sixty-one men (90%) had made a previous attempt to quit, and of those who had made an attempt the mean was 2.85 attempts. Previously employed nicotine replacement methods were gum n = 30 (49%), patches n = 30 (49%), nasal spray n = 3 (5%), inhaler n = 12 (20%), microtabs n = 3 (5%), nicotine lozenges n = 4 (7%), and bupropion (Zyban) n = 12 (20%).

Participants described the importance of this current attempt to quit as extremely important; very important; quite important; not at all important (n = 0). Participants rated their chances of quitting for good on this attempt as extremely high; very high; quite high; not very high; very low.

Attendance at sessions was consistently high: of 532 person-sessions, only 13 sessions were missed. Non-attendance did not apparently cluster around a particular session.

At week 3, of the 69 men who gave data, n = 58 men (84%) set a quit date. At week 7 (4 weeks after the quit date) n = 44 men (64%) were confirmed as having quit using the CO monitor, representing 58% of those who attended the first session, 76% of those who set a quit date and 64% of those who gave data at baseline and week 7; the short calculation below makes these denominators explicit.
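For clarity, this arithmetic can be written out directly as a worked check, using only the numbers reported above:

```python
confirmed_quit = 44    # CO-verified quitters at week 7
set_quit_date = 58     # set a quit date at week 3
attended_first = 76    # attended at least the first session
gave_data = 69         # returned baseline assessment and week-7 data

for label, denom in [("of quit-date setters", set_quit_date),
                     ("of first-session attenders", attended_first),
                     ("of those giving data", gave_data)]:
    print(f"{label}: {100 * confirmed_quit / denom:.0f}%")
# prints 76%, 58%, and 64%, matching the figures in the text
```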
A further 3 men reported by telephone that they had quit smoking but did not attend the final session to give clinical data for verification. Nine men (13%) reported not having stopped smoking, n = 6 men (9%) set a quit date at week 3 and did not return to the group, and n = 7 men (10%) attended the first session only. For the purposes of this analysis, these 25 men were coded as not having quit in the following modelling.

This analysis considers those 44 men confirmed as having ceased compared to those 25 categorised as not having quit. Following univariate analysis (see Table), candidate variables were entered into the multivariate model; the single significant baseline variable associated with cessation was the previous number of attempts at quitting.

This pilot intervention has targeted a hitherto overlooked high-smoking-prevalence group, and has adapted a Government-approved intervention to meet the specific needs of gay men in an appropriate and acceptable setting. The success rate of 76% of men who had set a quit date being confirmed as having quit at week 7 compares extremely favourably to national monitoring data, which report a national success rate for smoking cessation services in 2001–2002 of 53%.

Public health targets must consider the needs of high-prevalence communities, and this may be achieved through innovative development of existing effective services. However, this study has highlighted the lack of targeted interventions for gay men, and the evidence demonstrates further elevated health needs compared to the general population in the fields of alcohol and drug use and mental health.

Further research may identify the factors which contributed to the effectiveness of this pilot complex participative intervention, including offering recruitment and delivery outside of community settings, measuring success rates for gay men in non-gay-specific or tailored groups, and the usefulness of "quit cells". Longer-term follow-up data and increasing dosage to include a follow-up session would also provide further useful data. In order to refine the intervention for trial testing, qualitative data regarding the utility, acceptability and preferences for the content of specific sessions would be illuminating. Further, the non-randomised design without a comparison group presents a limitation to the generalisability of findings, yet the intervention still offers cessation outcomes much better than the standard national cessation data quoted above, which were collected without quasi-experimental design using the same follow-up period. Data were not available on the 29 men who registered for the course but did not attend or complete baseline data, and so it is not possible to compare their demographics or smoking behaviours to those who took up the intervention. Certainly, replication of this first pilot would be necessary in other settings, e.g. non-metropolitan communities, where issues of feasibility and uptake should be addressed. Commissioners may consider the purchase of existing facilitators from cities to deliver in non-metropolitan areas where demand is likely to be lower, as smoking cessation service recommendations state that group leaders need to keep up to date with their skills and to use them on a regular basis.

In order to meet the smoking cessation needs of this hitherto overlooked population, and to meet public health policy targets, a rigorous research agenda must be established. While the use of required standard outcome monitoring must be continued, rigorous experimental trials using longer-term follow-up and commonly reported measures are required. Complex participative interventions must be developed, as in this pilot, from evidence-based interventions with full programme description to ensure replication. The development of appropriate interventions must first pilot services to ensure that they are appropriately adapted to maximise acceptability and uptake among target communities.

Lastly, provision of the service by skilled volunteer facilitators has ensured an acceptable, low-cost intervention with a rate of effectiveness in these four pilot groups that compares favourably to national non-targeted intervention outcomes calculated using the standard assessment formula. Acceptability of the model appears high with respect to the low number of missed sessions.
Voluntary sector provision and delivery should be considered a low-cost and highly acceptable route for delivering effective community-level smoking cessation interventions.

The authors declare that they have no competing interests. JB and NH managed the pilot study. RH was responsible for data management and analysis and drafted the manuscript. All authors read and approved the final manuscript.

The pre-publication history for this paper can be accessed here:
The Adenomatous polyposis coli (APC) tumour suppressor is found in multiple discrete subcellular locations, which may reflect sites of distinct functions. In Drosophila epithelial cells, the predominant APC relative (E-APC) is concentrated at the apicolateral adherens junctions. Genetic analysis indicates that this junctional association is critical for the function of E-APC in Wnt signalling and in cellular adhesion. Here, we ask whether the junctional association of E-APC is stable, or whether E-APC shuttles between the plasma membrane and the cytoplasm.

We generated a Drosophila strain that expresses E-APC (dAPC2) tagged with green fluorescent protein (GFP-E-APC) and analysed its junctional association with fluorescence recovery after photobleaching (FRAP) experiments in live embryos. This revealed that the junctional association of GFP-E-APC in epithelial cells is highly dynamic, and is far less stable than that of the structural components of the adherens junctions, E-cadherin, α-catenin and Armadillo. The shuttling of GFP-E-APC to and from the plasma membrane is unaltered in mutants of Drosophila glycogen synthase kinase 3 (GSK3), which mimic constitutive Wingless signalling. However, the stability of E-APC is greatly reduced in these mutants, explaining their apparent delocalisation from the plasma membrane as previously observed. Finally, we show that GFP-E-APC forms dynamic patches at the apical plasma membrane of late embryonic epidermal cells that form denticles, and that it shuttles up and down the axons of the optic lobe. We conclude that E-APC is a highly mobile protein that shuttles constitutively between distinct subcellular locations.

The Adenomatous polyposis coli (APC) protein is an important tumour suppressor in the colonic epithelium, and a key function of APC is its negative regulatory role in Wnt signalling. However, APC proteins have additional functions in connection with the actin and microtubule cytoskeletons that appear to be separate from their function in controlling Wnt signalling.

In order to explore this mechanism, we asked whether Drosophila E-APC might have a structural role at adherens junctions (AJs). If so, E-APC would be expected to be stably associated with AJs, similarly to the structural components of the adhesive complex. As in mammalian epithelia, E-APC is concentrated at the adherens junctions of Drosophila epithelial cells.

We used the GAL4 system to express GFP-E-APC throughout the embryo, and found that its subcellular distribution is very similar to that of endogenous E-APC in fixed embryos. In particular, GFP-E-APC is concentrated underneath the plasma membrane in apicolateral regions of embryonic epithelial cells.

Next, we conducted fluorescence recovery after photobleaching (FRAP) experiments in live embryos, to examine how stably GFP-E-APC is associated with adherens junctions. We bleached the fluorescence in a defined square centred over the junctional region of an epithelial cell with a short laser pulse, and examined the recovery of the fluorescence within this square over time. This revealed that the bleached junctional fluorescence recovers rapidly, indicating that the junctional association of GFP-E-APC is highly dynamic.

We also conducted FRAP experiments with structural AJ components, namely E-cadherin-GFP, Armadillo-GFP and α-catenin-GFP. In these cases, we could only recover a small fraction of the initial fluorescence within the time frame of the experiment, indicating that these proteins are far more stably associated with the junctions.

We conclude that E-APC is significantly more mobile than the structural AJ components. This suggests that E-APC shuttles either within the cortex, along the zonula adherens, or from the cytoplasm to the plasma membrane.
In late embryonic stages, GFP-E-APC forms striking patches underneath the apical plasma membrane of epidermal cells that are in the process of forming denticle extrusions. These patches are themselves dynamic.

It has been reported that E-APC and Armadillo are required for anchoring mitotic spindles in the cortex of dividing blastoderm cells in the early Drosophila embryo.

We also expressed GFP-E-APC in eye imaginal discs, to examine its subcellular distribution within a larval epithelial sheet. We thus noticed striking puncta of green fluorescence within the axons of the optic stalk that connects these discs to the larval brain. These puncta shuttle up and down the axons.

Drosophila glycogen synthase kinase 3 is encoded by shaggy/zeste white3 (sgg), and sgg mutants therefore mimic constitutive and sustained Wingless signalling. In sgg mutant embryos, the levels of membrane-associated GFP-E-APC are noticeably reduced. The subcellular distribution of E-APC and its accumulation at the adherens junctions are unchanged in other mutants of the Wingless signalling pathway. Taken together with our FRAP results from the sgg mutants, this suggests that the kinetic association of GFP-E-APC with the plasma membrane is unaffected by Wingless signalling.

zw3M11-1 and dshv26 mutant embryos lacking maternal and zygotic gene function were generated as described. sgg mutants (identified with an RFP-marked X chromosome) were hand-picked from timed egg collections under the dissecting microscope and separated into GFP-positive and GFP-negative embryos; unfertilised embryos were discarded.

Antibody staining of fixed embryos and analysis by confocal microscopy were performed as described previously. For Western blotting, rabbit anti-E-APC and mouse monoclonal antibodies were used as primary and secondary reagents.

For live imaging, embryos were dechorionated in 50% bleach for 1–2 minutes and washed. Embryos were transferred to a moistened black filter (Schleicher and Schüll), adhered to coverslips with heptane glue (made by mixing heptane and clear sellotape; Sellotape Ltd), and mounted in Voltalef oil (10S). For short-term imaging (<30 minutes), embryos were mounted on a glass slide with small coverslips as supports. For longer-term imaging, e.g. for bleaching of pre-denticle patches, embryos were mounted in oil and placed on Bio-foil gas-permeable membrane (Sartorius Ltd) mounted on a perspex frame.

FRAP experiments were performed using a Bio-Rad Radiance confocal microscope with a 40× NA 1.3 objective lens. Imaging was performed with a 488 nm argon laser at 5% laser power and the following confocal settings: iris at 4 mm, 50% gain, zoom 10, scan speed 500 lps, box size 512 × 512 pixels. These conditions were found to give minimal photobleaching over the observed time.

For each FRAP experiment, a pre-bleach image was recorded by selecting a focal plane and taking a Z-series consisting of 3 steps of 0.5 μm either side of the desired focal plane (from -1.5 μm to +1.5 μm). The LaserSharp software was used to define several regions of interest (ROIs) for bleaching. A maximum of one bleach ROI was placed in any cell, and several cells were always left unbleached. Typically, 3–5 ROIs were bleached in one field of view on one embryo. These regions were bleached at 100% laser power (scanning at 500 lps). Ten bleach scans were found to produce the best results for all constructs. After bleaching, a Z-series was recorded every 15 seconds for 5 minutes.
At the time of these experiments, the LaserSharp software did not contain a function for performing this type of 4D bleaching experiment. This problem was overcome by manually switching between imaging and bleaching settings and manually saving pre-bleach images and starting the time course. As a result, there was usually a 30–60 second delay between the pre-bleach image and the post-bleach images.

Data sets were analysed with the Bio-Rad LaserPix software. For each time point, the total pixel intensity distribution was compared to the pre-bleach image to select the corresponding region. The two images were then compared by eye to confirm that they corresponded to the same focal plane. The coordinates for the bleach ROIs were used to accurately locate the bleach spots on the pre-bleach image, and the mean fluorescence intensity for each ROI was calculated. Several equivalently sized ROIs were also placed on unbleached cells to measure any change in fluorescence due to photobleaching or movement.

To track movement of the cells, an acetate sheet was placed over the computer monitor and each ROI was marked on it, as well as the shapes of the cells surrounding it. By aligning the sheet with the appropriate cell shapes, the ROI could be appropriately positioned for each time point. This process was used to position each ROI on the appropriate image for each time point.

Once all ROIs had been placed on the image, the mean fluorescence intensities were calculated for each ROI, and their positions were saved on a copy of the image (see Figure). Data were then exported for further analysis; a sketch of the normalisation and fitting step appears at the end of this section.

Data sets were discarded for any of the following reasons. First, if movement of the embryo in the Z axis took the sample outside the range of the Z-series at any time point. Second, if movement in the X/Y axis was sufficient to move significant numbers of the bleach boxes outside of the observed region. Third, if an ROI ever left the field of view, all data points for that ROI were discarded. Fourth, all data sets were discarded if the intensities of the control ROIs changed dramatically at any point in the experiment, or showed a large general increase or decrease.

Pre-denticle structures were bleached in a similar manner to junctional E-APC described above.

GFP-E-APC was expressed in eye imaginal discs by the GAL4 system, using the driver line GMR.GAL4 (described in FlyBase). Eye discs and brains were dissected from crawling third instar larvae in PBS. Eye discs were teased away from the brain and inverted to reveal the optic stalk. Whole disc/brain preparations were mounted in a drop of PBS under a cover slip, supported by two smaller cover slips. Each disc was observed for no more than 30 minutes.

FRAP experiments were performed using a Bio-Rad Radiance confocal microscope and Bio-Rad LaserSharp software, using the 100× NA 1.4 objective lens. A narrow strip was bleached across the whole field of view by adjusting the size of the scanning area. These experiments were performed before a FRAP program was available for LaserSharp, so bleaching was performed manually, leading to somewhat variable intervals between each stage of the experiment. The region was bleached with the 488 nm line of an argon laser for approximately 20 scans. Time courses were recorded after each bleaching experiment for 5 minutes.
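As referenced above, the sketch below illustrates in Python the kind of post-processing these measurements require: bleached-ROI intensities are corrected with an unbleached control ROI, rescaled to the pre-bleach value, and fitted with a single-exponential recovery to estimate a mobile fraction and half-time. The numbers and the single-exponential model are illustrative assumptions, not the study's LaserPix workflow.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical mean ROI intensities: one bleached ROI and one unbleached control,
# sampled every 15 s for 5 min as in the protocol above.
t = np.arange(0, 300, 15.0)                      # seconds after bleach
control = np.full_like(t, 100.0)                 # idealised control ROI (no drift)
bleached = (20 + 60 * (1 - np.exp(-t / 40.0))
            + np.random.default_rng(1).normal(0, 2, t.size))
prebleach = 100.0

# Normalise: correct for acquisition photobleaching/drift via the control ROI,
# then express intensity as a fraction of the pre-bleach value.
norm = (bleached / control) * (control[0] / prebleach)

def recovery(t, f_mobile, tau, f0):
    """Single-exponential FRAP recovery from f0 toward f0 + f_mobile."""
    return f0 + f_mobile * (1 - np.exp(-t / tau))

(f_mobile, tau, f0), _ = curve_fit(recovery, t, norm, p0=(0.5, 30.0, 0.2))
print(f"mobile fraction = {f_mobile:.2f}, half-time = {np.log(2) * tau:.0f} s")
```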
A.C. developed and conducted most of the FRAP experiments; J.M. completed the analysis of GFP-E-APC in live and fixed sgg mutant embryos, including the Western blots; M.B. directed the study, helped with the microscopy and drafted the manuscript. All authors read and approved the final manuscript.

Supplementary material: FRAP of GFP-E-APC in early embryonic epithelial cells (example of a FRAP experiment of GFP-E-APC, as described in Figure 3); FRAP of GFP-E-APC in the larval optic stalk (example of a FRAP experiment of GFP-E-APC, as described in Figure 6).
Are short synthetic peptides the key to developing cancer vaccines? And what are the obstacles in the way? CTLs specifically recognize and lyse targets through the interaction of T cell receptors (TCRs) on the surface of the T lymphocyte with protein fragments (peptides) presented on the surface of target cells, in association with major histocompatibility complex (MHC) class I molecules. When a particular CTL interacts with a target cell, it rapidly divides to form a clonal population of T cells with the identical TCR.

Townsend and colleagues first elucidated the molecular basis of target cell recognition by CTLs in 1985. Furthermore, before Boon and colleagues cloned the first antigen recognized by tumor-reactive CTLs in 1991, it was not established which tumor antigens could be recognized by T cells. It is now known that tumor-bearing patients can mount a CD8+ T cell response that is specific for several of these antigens. The development of such responses, however, requires a large tumor load, occurs late in the disease, and probably does not cause the efficient destruction of the tumor cells.

Unfortunately, some synthetic peptides, including some corresponding to immunodominant epitopes (those which cause the biggest part of the immune response) from tumor antigens, only seem to bind MHC class I molecules with medium to low affinity and/or are recognized by specific T cells with relatively low avidity. These characteristics are the likely cause of the poor immune reaction generated by these peptides. One strategy to overcome this limitation is to modify the peptide to create variants with stronger activity.

Solinger and colleagues were the first to describe antigen variants producing T cell responses that were stronger than those elicited by the parental sequences; such variants are termed heteroclitic. A study by Lee and colleagues in this issue of PLoS Medicine examines a heteroclitic tumor antigen peptide variant and the T cell responses it elicits.

It is increasingly clear that even the smallest alteration in the structure of the MHC-peptide complex can result in significant changes in which TCRs are selected after vaccination. Thus, manipulating the immune T cell repertoire in vivo through the use of heteroclitic tumor antigen peptide variants could be harder than anticipated. As the field moves rapidly towards the use of new vaccine adjuvants with high immunogenic potential, reassessment of this strategy may be warranted.
There is a continued need to develop more effective cancer immunotherapy strategies. Exosomes, cell-derived lipid vesicles that express high levels of a narrow spectrum of cell proteins, represent a novel platform for delivering high levels of antigen in conjunction with costimulatory molecules. We performed this study to test the safety, feasibility and efficacy of autologous dendritic cell (DC)-derived exosomes (DEX) loaded with the MAGE tumor antigens in patients with non-small cell lung cancer (NSCLC).

This Phase I study enrolled HLA A2+ patients with pre-treated Stage IIIb (N = 4) and IV (N = 9) NSCLC with tumor expression of MAGE-A3 or A4. Patients underwent leukapheresis to generate DC, from which DEX were produced and loaded with MAGE-A3, -A4, -A10, and MAGE-3DPO4 peptides. Patients received 4 doses of DEX at weekly intervals.

Thirteen patients were enrolled and 9 completed therapy. Three formulations of DEX were evaluated; all were well tolerated, with only grade 1–2 adverse events related to the use of DEX (injection site reactions (N = 8), flu-like illness (N = 1), and peripheral arm pain (N = 1)). The time from the first dose of DEX until disease progression was 30 to 429+ days. Three patients had disease progression before the first DEX dose. Survival of patients after the first DEX dose was 52–665+ days. DTH reactivity against MAGE peptides was detected in 3/9 patients. Immune responses were detected in patients as follows: MAGE-specific T cell responses in 1/3, increased NK lytic activity in 2/4.

Production of the DEX vaccine was feasible and DEX therapy was well tolerated in patients with advanced NSCLC. Some patients experienced long-term stability of disease and activation of immune effectors.

Vaccine immunotherapy as an approach to cancer treatment has evolved over the last 10 years as the basic biology of the immune response has been elucidated. Tumor-associated antigens that are capable of eliciting cytotoxic T cell responses have been identified. Among the most frequently expressed across many malignancies are the MAGE antigens, originally described in melanoma but expressed by other tumors including non-small cell lung cancer (NSCLC). Immune effectors directed against such antigens can be expanded in vitro and can eradicate established murine tumors.

Dexosomes have been demonstrated to participate in antigen presentation. Dexosomes have also demonstrated significant antitumor activity in a mouse tumor model, suggesting that the use of dexosomes derived from dendritic cells may result in improved efficacy relative to the ex vivo dendritic cell approach for eradication of advanced cancer. Purified dexosomes were shown to be effective in both suppressing tumor growth and eradicating an established tumor in this model. Furthermore, the effect of the dendritic cell-derived dexosome was greater than that of the dendritic cell from which it was produced.

We therefore performed this study to investigate the safety, feasibility, and efficacy of administering autologous dexosomes loaded with tumor antigens (subsequently referred to as DEX) to patients with advanced NSCLC. We also evaluated the immunologic responses in selected patients and monitored the clinical outcomes.

This phase I clinical protocol was approved by the Duke University Medical Center Institutional Review Board and conducted in compliance with the Helsinki Declaration and under an IND from the United States Food and Drug Administration held by Anosys Corporation. All subjects provided written informed consent.
Patients were eligible for enrollment if they had histologically confirmed, unresectable Stage IIIA or B or Stage IV NSCLC, were HLA A*0201 positive, at least 18 years of age, and had adequate organ function and a Karnofsky performance status of at least 80%. Patients were required to have been treated with at least one prior standard chemotherapy regimen and to have measurable disease. In addition, patients were required to have tumor expressing MAGE-A3 or MAGE-A4. To avoid performing repeat biopsies, this was achieved by detecting MAGE-A3 or MAGE-A4 expression in peripheral blood tumor cells by RT-PCR using established methods.

The main exclusion criteria were: prior therapy within 4 weeks of the leukapheresis, CNS disease, history of autoimmune disease, concurrent use of systemic steroids, and presence of HIV infection or acute or chronic viral hepatitis B or C. Pregnant or lactating women were also excluded.

Dexosomes were manufactured from peripheral blood mononuclear cells (PBMCs) as previously described. Briefly, cells were cultured in a CO2 atmosphere in the presence of 50 ng/mL GM-CSF and 10 ng/mL of IL-4. On the 7th day of culture, the supernatant of the resulting dendritic cell preparation was harvested, filtered, and concentrated. Dexosomes were then isolated by ultracentrifugation on a D2O/sucrose cushion.

Patients were enrolled into three cohorts that varied in the method of MHC Class I peptide loading and concentration, as described in the Table. Patients received DEX at a fixed dose, expressed as the number of MHC class II molecules (on the order of 10^13), in a volume of 3 mL (divided into two injections given at two sites on opposite sides of the body) as a combination of subcutaneous (90% of the volume) and intradermal (10%) injections weekly for 4 weeks. No retreatment was allowed. Vital signs were monitored for 1 hour after each injection. Clinical responses were assessed by RECIST criteria. CT scans of the chest through the upper abdomen were obtained at baseline, 1 month following the last dose of DEX, and every 3 months after the last dose of DEX for 1 year, but scans to confirm responses were not required in this phase I study. All surviving patients have been followed every 6 months for assessment of vital status.

Prior to the initial leukapheresis and 1 week after the last dose of DEX, the following peptides were injected intradermally, in addition to the standard recall antigen panel of Candida, mumps, and tetanus: MAGE-A3(112–120), MAGE-A4(230–239), MAGE-A10(254–262), and MAGE-A3(247–258), each at 10 μg in 0.1 mL saline. The diameter of the induration and erythema was measured 48 hours following the peptide injection.

Immune response was evaluated at baseline and 1 week following the last dose of DEX with cryopreserved PBMCs obtained by leukapheresis. The ELISPOT assay was performed by ImmunoSite, Inc. according to previously reported methods, using in vitro stimulation of PBMCs with autologous DCs pulsed with MAGE-A3(112–120), MAGE-A4(230–239), and MAGE-A10(254–262). The number of spots (interferon-gamma-secreting T cells) per 20,000 responding PBMC was reported. The background number of spots against an irrelevant antigen was subtracted from the number of spots for the experimental conditions.

NK cells were isolated from cryopreserved PBMCs using an NK Cell Isolation Kit according to the manufacturer's instructions. NK cell purity was checked by flow cytometry using anti-CD3-FITC, anti-CD45-PerCP, and anti-CD56-APC antibodies. Isolated NK cells, and NK cells activated for 40 hours with IL-2 at 600 units/ml, were incubated at various effector-to-target ratios with chromium-51-labeled K562 cells, an NK target, for 4 hours at 37°C, and cytotoxicity was assessed by the amount of radiolabeled chromium released. Cytotoxicity was calculated as follows: percentage of target cell lysis = 100 × (counts per minute (cpm) of experimental release - cpm of spontaneous release) / (cpm of maximum release - cpm of spontaneous release).
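The standard chromium-release calculation above translates directly into code. The sketch below computes percent specific lysis for a series of effector-to-target ratios; the helper function and the sample cpm values are our illustration, not data from this study.

```python
def percent_specific_lysis(experimental_cpm, spontaneous_cpm, maximum_cpm):
    """Standard 51Cr-release formula:
    100 * (experimental - spontaneous) / (maximum - spontaneous)."""
    return 100.0 * (experimental_cpm - spontaneous_cpm) / (maximum_cpm - spontaneous_cpm)

spontaneous, maximum = 250.0, 4200.0  # hypothetical control wells (cpm)
for et_ratio, exp_cpm in [(50, 2800.0), (25, 2100.0), (12.5, 1400.0), (6.25, 900.0)]:
    lysis = percent_specific_lysis(exp_cpm, spontaneous, maximum)
    print(f"E:T {et_ratio:>5}: {lysis:5.1f}% specific lysis")
```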
The primary endpoints of this study were safety and feasibility, with secondary endpoints of clinical and immunologic response rates. The incidence, type and severity of adverse events were recorded during the study treatment through 30 days following the last dose of DEX. Descriptive statistics were used to present the data. Adverse events were coded using MedDRA version 5.0. Survival and time to progression were measured from the date of the first injection to the date of documented disease progression or death. For patients who progressed, the time to disease progression was determined by the interval from the first injection + 1 day to the last evaluation of disease staging. For patients who did not progress or die during the two-year follow-up period, the time to disease progression and survival were determined by the interval between the first dose of DEX and the date of the last evaluation of disease staging + 1 day, and reported with a '+' sign appended to mark censoring; a brief illustration of this convention follows.
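A minimal sketch of how such censored durations can be represented and summarized; the data structure and example values are ours, mirroring the '+' convention used in this report:

```python
from dataclasses import dataclass

@dataclass
class Duration:
    days: int
    censored: bool = False  # True when the patient had not progressed/died at last follow-up

    def __str__(self):
        return f"{self.days}+" if self.censored else str(self.days)

# Hypothetical times to progression, echoing the report's "30 to 429+ days" style.
ttp = [Duration(30), Duration(120), Duration(240), Duration(429, censored=True)]
print("time to progression:", ", ".join(str(d) for d in ttp))
print("events observed:", sum(not d.censored for d in ttp), "of", len(ttp))
```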
There were no significant organ or laboratory toxicities attributable to the vaccine. No autoimmune reactions were observed.

All 9 dosed patients underwent DTH testing with individual tumor-associated peptides prior to and following all doses of DEX. There was no DTH response to the specific peptide antigens prior to DEX therapy. Three patients had a positive response of at least 5 mm erythema or induration in the longest dimension 48 hours after skin testing with one of the MAGE peptides. Specifically, DU06 had 5 mm induration and erythema with MAGE-A4(230–239), DU24 had 6 mm induration and erythema with MAGE-A10(254–262), and DU49 had 5 mm induration and erythema with MAGE-A3(112–120).

The peptide-specific immune response to MAGE and CMV was analyzed using ELISPOT in 5 of 9 dosed patients. One patient (DU49) exhibited detectable increases in T cell precursor frequency to MAGE-A10(254–262) following in vitro stimulation. Assays for DU50 and DU63 could not be completed because of poor viability. Robust responses to anti-CD3 and to the control peptide CMV pp65 were observed in DU24, DU81, and DU49, but no MAGE-specific responses were detected.

Since most patients did not exhibit a significant increase in antigen-specific T cell activity, we hypothesized that regulatory influences such as CD4+CD25+ regulatory T cell populations might inhibit augmentation of the T cell response. In 2/3 patients who had analyzable specimens available, an increase in CD4+CD25+ T cells as a percentage of CD4+ T cells was observed following completion of DEX therapy when compared with baseline values. The small number of samples available for this analysis precludes any conclusions but does suggest that CD4+CD25+ T cell analyses should accompany future studies of DEX immunotherapy.

During the study, new data from Escudier et al (manuscript submitted) suggested that the immunologic activity of DEX might be due to activation of NK cells. We therefore explored the hypothesis that NK cells may be activated following DEX therapy. This was not planned as part of the initial analysis and therefore specimens of PBMC were limiting in all but 4 patients. Although there was no consistent change in NK cell percentage before and after immunization, NK activity increased following immunization in 2 of the 4 patients analyzed.

In summary, we observed increases in systemic immune responses against MAGE by DTH reactivity in 3/9 patients who had no reactivity to the MAGE peptides prior to immunization, as well as activation of NK cells, but found minimal increases in antigen-specific T cell activity by ELISPOT. The possible role of negative regulatory mechanisms was suggested by the presence of elevated levels of CD4+CD25+ regulatory T cells following immunization in some patients.

An intriguing immunologic observation was the increase in NK activity following immunization in 2/4 patients analyzed. Although DEX are intended to activate antigen-specific, MHC-restricted T cell responses, it is possible that cytokines released in response to DEX therapy could cause activation of NK cells or that DEX could directly activate NK cells. DEX therapy may stimulate both innate and adaptive arms of the immune response and thereby provide a rationale for maximizing the anti-tumor effect of this approach, even in cases where tumors have lost Class I antigens, a common finding as cancers become more advanced.

Despite the small sample size and the fact that 3/9 dosed patients had disease progression at the time of initiation of DEX treatment, we observed prolonged disease stabilization in some patients.
Large clinical trials in patients with advanced NSCLC have generally reported median times to progression of 3–5 months with systemic chemotherapy regimens.

DEX therapy was well tolerated. Immune activation and stability of disease were observed in some immunized patients with advanced NSCLC.

Michael Morse received funding from NIH 5R21CA89957-02. Additionally, portions of this study were funded by Anosys, Inc. Nancy Valente, Revati Shreeniwas, Mary Ann Sutton, Alain Delcayre, Di-Hwei Hsu, and Jean-Bernard Le Pecq held stock in and were employees of Anosys, Inc. H. Kim Lyerly was a consultant for Anosys, Inc.

MAM was the principal investigator of the study and oversaw all aspects including protocol development, patient management, data collection and analysis, and manuscript preparation. JG enrolled patients to the study, managed their care and participated in data analysis. TO performed the NK assays and analyzed the data. SK enrolled patients to the study and managed their care. AH performed in vitro immunologic assays and analyzed the data. TMC oversaw the immunologic analyses performed at Duke University and analyzed the data. NV, RS, and MAS oversaw development of the protocol, data collection and analysis, and manuscript preparation. AD developed and oversaw the MAGE screening for patient eligibility. D-HH oversaw portions of the immunologic analysis and data analysis. J-BL provided scientific direction regarding generation of the dexosomes, protocol development, data analysis and manuscript preparation. HKL provided consultation on immunologic assay development. All authors read and approved the final manuscript.

Table 2 (DOC) presents the remainder of the clinical and immunological data from all patients.
The nematode Caenorhabditis elegans is a little less lonely than the rest of us—it is a self-fertile hermaphrodite, which as a larva makes and stores sperm before switching to egg production for the remainder of its lifespan. A sister species, C. briggsae, is also hermaphroditic, but phylogenetic evidence suggests the last common ancestor of the two species had a female/male mode of reproduction. This raises the question of how the sex determination mechanisms, which must have evolved independently, differ between the two species. In this issue, Sudhir Nayak, Johnathan Goree, and Tim Schedl show that a crucial difference lies in the activities of two genes.

In C. elegans, the early period of sperm production is controlled by multiple proteins, two of which are the focus of this study: the RNA-binding protein GLD-1 (encoded by the gene gld-1) and the F-box-containing protein FOG-2 (encoded by the gene fog-2). Together, they repress translation of a gene, tra-2, by binding to its messenger RNA. This allows another gene, fem-3, to transiently masculinize the larval germline to produce sperm.

Comparing the genomes of C. elegans and C. briggsae, Schedl and colleagues found they share 30 out of 31 sex determination genes, but not fog-2. More surprisingly, they found that the role of gld-1 in sex determination is opposite in the two species. When C. elegans is deprived of gld-1, would-be hermaphrodites produce only oocytes. But when C. briggsae is deprived of gld-1, would-be hermaphrodites produce only sperm. Thus, the authors conclude, the control of hermaphrodite spermatogenesis is fundamentally different in the two species.

By further examining the C. elegans genome, the authors showed that fog-2 arose from a gene duplication event after the C. elegans–C. briggsae split, which occurred approximately 100 million years ago. Since then, its final exon, which codes for the C-terminal end of the protein, has undergone rapid evolution. The authors also show that this is the "business end" of the protein for its interaction with GLD-1, suggesting that the divergence of C. elegans and C. briggsae sex determination pathways resulted, in part, from FOG-2's new interaction with GLD-1.

Exactly what the role of fog-2 is in C. elegans is still unclear. The authors speculate that it may recruit additional factors onto the gld-1/tra-2 mRNA complex, increasing the efficiency of translation repression. Much remains to be discovered about C. briggsae sex determination as well. The authors suggest that additional genetic differences promoting self-fertility are likely to have accumulated since the two species diverged, which may act to strengthen the male–female germline switching signal. Investigation of this possibility may shed more light on how hermaphroditism operates in these two species, and how a developmental pathway controlling sex determination can evolve.
A novel approach for modeling gene-regulatory networks, based on graphical Gaussian modeling, is used to create a network for the isoprenoid biosynthesis pathway in Arabidopsis.

We present a novel graphical Gaussian modeling approach for reverse engineering of genetic regulatory networks with many genes and few observations. When applying our approach to infer a gene network for isoprenoid biosynthesis in Arabidopsis thaliana, we detect modules of closely connected genes and candidate genes for possible cross-talk between the isoprenoid pathways. Genes of downstream pathways also fit well into the network. We evaluate our approach in a simulation study and using the yeast galactose network.

The analysis of genetic regulatory networks has received a major impetus from the huge amounts of data made available by high-throughput technologies such as DNA microarrays. The genome-wide, massively parallel monitoring of gene activity will increase the understanding of the molecular basis of disease and facilitate the identification of therapeutic targets.

To fully uncover regulatory structures, different analysis tools for transcriptomic and other high-throughput data will have to be used in an integrative or iterative fashion. In simple eukaryotes or prokaryotes, gene-expression data has been combined with two-hybrid and phenotypic data. In higher organisms, however, little is known about regulatory control mechanisms. As a first step in reverse engineering of genetic regulatory networks, structural relationships between genes can be explored on the basis of their expression profiles. Here, we focus on graphical models as a probabilistic framework for this task.

Graphical models are powerful for a small number of genes. As the number of genes increases, however, reliable estimates of conditional dependencies require many more observations than are usually available from gene-expression profiling. Furthermore, because the number of models grows super-exponentially with the number of genes, only a small subset of models can be tested.

Some of these problems may be circumvented by restricting the number of possible models or edges, or by regularizing the estimation.

As an alternative approach to modeling genetic networks with many genes, we propose not to condition on all genes at a time. Instead, we apply graphical modeling to small subnetworks of three genes to explore the dependence between two of the genes conditional on the third. These subnetworks are then combined for making inferences on the complete network. This modified graphical modeling approach makes it possible to include many genes in the network while studying dependence patterns in a more complex and exhaustive way than with only pairwise correlation-based relationships.

For an independent validation of our method, we compare our modified graphical Gaussian modeling (GGM) approach with conventional graphical modeling in a simulation study. We show at the end of the Results section that our approach outperforms the standard method in simulation settings with many genes and few observations. For a further evaluation with real data, we apply our approach to the galactose-utilization data from Saccharomyces cerevisiae to detect known structures in the galactose regulatory network, and to expression data for the isoprenoid pathways in Arabidopsis thaliana.

Isoprenoids serve numerous biochemical functions in plants: for example, as components of membranes (sterols), as photosynthetic pigments (carotenoids and chlorophylls) and as hormones (gibberellins). Isoprenoids are synthesized through condensation of the five-carbon intermediates isopentenyl diphosphate (IPP) and dimethylallyl diphosphate (DMAPP).
In higher plants, two distinct pathways for the formation of IPP and DMAPP exist, one in the cytosol and the other in the chloroplast. The cytosolic pathway, often described as the mevalonate or MVA pathway, starts from acetyl-CoA to form IPP via several steps, including the intermediate mevalonate (MVA). In contrast, the plastidial pathway, known as the methylerythritol phosphate (MEP) pathway, involves condensation of pyruvate and glyceraldehyde 3-phosphate via several intermediates to form IPP and DMAPP. Whereas the MVA pathway is responsible for the synthesis of sterols, sesquiterpenes and the side chain of ubiquinone, the MEP pathway is used for the synthesis of isoprenes, carotenoids and the side chains of chlorophyll and plastoquinone. Although both pathways operate independently under normal conditions, interaction between them has been repeatedly reported.

Reduced flux through the MVA pathway after treatment with lovastatin can be partially compensated for by the MEP pathway. However, inhibition of the MEP pathway in seedlings leads to reduced levels of carotenoids and chlorophylls, indicating a predominantly unidirectional transport of isoprenoid intermediates from the chloroplast to the cytosol.

To gain more insight into the cross-talk between both pathways at the transcriptional level, gene-expression patterns were monitored under various experimental conditions using 118 GeneChip (Affymetrix) microarrays. To construct the genetic regulatory network, we focused on 40 genes, 16 of which were assigned to the cytosolic pathway, 19 to the plastidial pathway, and five of which encode proteins located in the mitochondrion. These 40 genes comprise not only genes of known function but also genes whose encoded proteins displayed considerable homology to proteins of known function. For reference, we adopt the established gene notation.

The genetic-interaction network among these genes was first constructed using GGM with backward selection under the Bayesian information criterion (BIC).

On a metabolic level, our results are substantiated by earlier labeling experiments using [1-13C]glucose, which revealed that sterols were formed via the MVA pathway, while plastidic isoprenoids were synthesized using intermediates from the MEP pathway. Moreover, incorporation of 13C-labeled 1-deoxy-D-xylulose into β-carotene, lutein and phytol indicated that the carotenoid and chlorophyll biosynthesis pathways proceed from intermediates obtained via the MEP pathway.

In contrast, a close connection between the MVA and the MEP pathways could not be detected. This suggests that cross-talk on the transcriptional level may be restricted to single genes in both pathways.

In a further analysis step, we examined which genes of the isoprenoid network the four identified downstream pathways attached to. Genes from the carotenoid pathway attached to DXR, MCT, CMK, GGPPS11, GGPPS12, AACT1, HMGR1 and FPPS1, supporting the hypothesis that AACT1 and HMGR1 are involved in communication between the MEP and MVA pathways. Genes from the plastoquinone pathway were predominantly linked to the genes DXPS2, HDS, HDR, GGPPS11, DPPS2 and PPDS2, whereas chlorophyll biosynthesis appears to be related to DXPS2, DXPS3, DXR, CMK, MCT, HDS, HDR, GGPPS11 and GGPPS12. Genes from the phytosterol pathway attach to FPPS1, HMGS, DPPS2, PPDS1 and PPDS2.

Incorporating 795 additional genes into the isoprenoid genetic network would not have been feasible with standard GGMs, as the graphical model would have had to be newly fitted for each additional gene. Candidate genes were therefore screened on the basis of |σij|, where σij denotes the pairwise correlation between genes i and j.
Also, hierarchical clustering would not have been an appropriate tool for detecting the similarities in the correlation patterns between the two isoprenoid metabolisms and their downstream pathways.

(Figure legend: the positions of the MVA pathway genes, labeled 'm', and the non-mevalonate pathway genes, labeled 'n', are shown to the right of the figure; the symbol + represents the positions of genes from the downstream pathways identified in the table.)

For an independent comparison between the modified and the conventional GGM approaches, we simulated gene-expression data with 40 genes and 100 observations. This simulation framework corresponds to the data for isoprenoid biosynthesis and is intended to be only exemplary at this point. An extensive simulation study is currently underway and will be presented elsewhere.

Following recent findings on the topology of metabolic and protein networks, we simulated scale-free graphs in which the number of nodes with k edges decays as a power law ∝ k^-γ. For metabolic and protein networks, γ is usually estimated to range between 2 and 3, which would result in very sparse networks with fewer edges than nodes in our simulation settings. To allow for denser networks, we generated 100 graphs each for γ = 0.5, 1.5 and 2.5. With 40 nodes, these graphs then comprised 88.3, 49.7 and 30.5 edges on average. For each edge, the conditional dependence of the corresponding gene pair was modeled with a latent random variable in a structural equation model, as described previously; from each model, gene-expression data with sample size N = 100 were generated.

The performance of the graphical modeling approaches was monitored using the rate of true and false positives in receiver operating characteristic (ROC) curves.

After incorporating all yeast genes into our network of the nine galactose genes, 13 genes were found to attach significantly well. Among these were genes with GAL4p-binding sites, which were also identified in a previous analysis of this data set.

Analysis of gene expression patterns, for example cluster analysis, often focuses on coexpression and pairwise correlation between genes. Graphical models are based on a more sophisticated measure of conditional dependence among genes. However, with this measure, modeling is restricted to a small number of genes. With a larger set of genes, it is rather difficult to interpret the model and to generate hypotheses on the regulation of genetic networks.

In our approaches, in the search for significant co-regulation between two genes all other genes in the model are also taken into account. However, the effect of these genes is examined separately, one gene at a time. Because of this simplification, modeling can include a larger number of genes. Also, each edge has a clear interpretation, representing a pair of significantly correlated genes whose dependence cannot be explained by a third gene in the model. Our frequentist method has a resemblance to the first two steps in the SGS and PC algorithms, which likewise restrict the size of the conditioning sets when testing for conditional independence.

By using a Gaussian model, we can only reveal linear dependencies between genes. For handling nonlinearities, gene-expression profiles should be discretized and analyzed in a multinomial framework. In principle, it should be straightforward to adapt our approach to a multinomial model. Because we focused on linear dependencies, we have not addressed this problem so far.

For the isoprenoid biosynthesis pathways in A. thaliana, we constructed a genetic network and identified candidate genes for cross-talk between both pathways. Interestingly, both positive and negative correlations were found between the identified candidate genes and the corresponding pathways.
AACT1 and HMGR1, key genes of the MVA pathway, were found to be negatively correlated with the module of connected genes in the MEP pathway. This suggests that in the experimental conditions tested, AACT1 and HMGR1 may respond differently (than the MEP pathway genes) to environmental conditions, or that they possess a different organ-specific expression profile. In either case, expression within the two groups seems to be mutually exclusive. On the other hand, a positive correlation was identified between IPPI1 and members of the MVA pathway, suggesting that this enzyme controls the steady-state levels of IPP and DMAPP in the plastid when a high level of transfer of intermediates between plastid and cytosol takes place.

Although we have considered only metabolic genes in this analysis, the method can be extended to identify genes encoding other types of proteins belonging to the same transcription module. In fact, transcription factors and other regulator proteins, as well as structural proteins such as transporters, are often found in the same expression module.

Similarly, the expression of genes in the phytosterol pathway appears to be influenced by genes from the MVA pathway. For the downstream regulation of plastoquinone biosynthesis, however, genes from both pathways seem to be involved. This finding is in agreement with the dual localization of enzymes from the plastoquinone pathway in either the plastid or the cytosol. The regulation of this pathway may therefore depend on processes happening on the metabolic and regulatory level in both compartments.

We have shown in a simulation study that for gene-expression data with many genes and few observations, the modified GGM approaches have performed better in recovering conditional dependence structures than conventional GGM. However, a final evaluation of our inferred network for the isoprenoid biosynthesis pathways in A. thaliana can only be made on the basis of additional knowledge and biological experiments. At this stage, the use of domain knowledge has provided some means of network validation. As genes from the respective downstream pathways were significantly more often attached to the isoprenoid network than were candidate genes from other pathways, we are quite confident that our method can grasp the modularity in the dependence structure within groups of genes and also between groups of genes. Such modularity would have been difficult to detect by standard graphical modeling or clustering.

Let q be the number of genes in the network, and n be the number of observations for each gene. The vector of log-scaled gene-expression values, Y = (Y1,...,Yq), is assumed to follow a multivariate normal distribution with mean μ = (μ1,...,μq) and covariance matrix Σ. The partial correlation coefficients ρij|rest, which measure the correlation between genes i and j conditional on all other genes in the model, are calculated as

ρij|rest = -ωij / (ωii ωjj)^(1/2),

where ωij, i, j = 1,...,q, are the elements of the precision matrix Ω = Σ^-1.

Using likelihood methods, each partial correlation coefficient ρij|rest can be estimated and tested against the null hypothesis ρij|rest = 0. An edge between genes i and j is drawn if the null hypothesis is rejected. Since the estimation of the partial correlation coefficients involves matrix inversion, estimators are very sensitive to the rank of the matrix. If the model comprises many genes, estimates are only reliable for a large number of observations.
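To make the formula concrete, the following minimal NumPy sketch (simulated data and toy dimensions, not the authors' code) computes all ρij|rest from the inverse of the sample covariance matrix exactly as defined above.

    import numpy as np

    rng = np.random.default_rng(0)
    n, q = 100, 5                                    # n observations, q genes (toy sizes)
    Y = rng.normal(size=(n, q))                      # stand-in for log-scaled expression values

    omega = np.linalg.inv(np.cov(Y, rowvar=False))   # precision matrix Omega = Sigma^-1
    d = np.sqrt(np.diag(omega))
    partial_corr = -omega / np.outer(d, d)           # rho_ij|rest = -omega_ij / sqrt(omega_ii * omega_jj)
    np.fill_diagonal(partial_corr, 1.0)
    print(partial_corr.round(2))

As the text notes, this inversion is only trustworthy when n clearly exceeds q; as q approaches or exceeds n the estimates degrade, which is precisely the motivation for the modified approach based on triples of genes described next.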
Commonly, the modeling of the graph is carried out in a stepwise backward manner starting from the full model, from which edges are removed consecutively. The process stops when no further improvement can be achieved by removal of an additional edge. The final model is usually evaluated by bootstrapping to exclude spurious edges in the model.

Let i, j be a pair of genes. The sample Pearson's correlation coefficient σij is the commonly used measure for coexpression. For examining possible effects of other genes k on σij, we consider GGMs for all triples of genes i, j, k with k ≠ i, j. For each k, the partial correlation coefficient ρij|k is computed and compared to σij. If the expression level of k is independent of i and j, the partial correlation coefficient would not differ from σij. If, on the other hand, the correlation between i and j is caused by k, because k co-regulates both genes, one would expect ρij|k to be close to 0. Here, we use the terminology that k 'explains' the correlation between i and j.

In order to combine the different ρij|k values in a biologically and statistically meaningful way, we define an edge between i and j if ρij|k ≠ 0 for all remaining genes k. In particular, if there is at least one k with ρij|k = 0, no edge between i and j is drawn, since the correlation between i and j may be the effect of k. Our approach can be implemented as a frequentist approach, in which each edge is tested for presence or absence, or alternatively as a likelihood approach with parameters θij, which describe the probability for an edge between i and j in a latent random graph.

For the gene pair i, j and all remaining genes k, p-values pij|k are obtained from the likelihood ratio test of the null hypothesis ρij|k = 0. In order to combine the different p-values pij|k, we simply test whether a third gene k exists that 'explains' the correlation between i and j. For this purpose, we apply the following procedure:

(1) For each pair i, j, form the maximum p-value pij,max = max{pij|k, k ≠ i, j}.

(2) Adjust each pij,max according to standard multiple testing procedures such as FDR.

(3) If the adjusted pij,max value is smaller than 0.05, draw an edge between the genes i and j; otherwise omit it.

The correction for multiple testing in step 2 is carried out with respect to the possible number of edges q(q - 1)/2 in the model. Implicitly, multiple testing over all genes k is also involved in step 1. However, because the maximum over all pij|k is considered, a multiple testing correction is not necessary.

The frequentist approach has the disadvantage that a connection between two genes is decided in an all-or-none fashion. In the latent random graph formulation, let y be a sample of n observations. For estimating θ, we maximize the log-likelihood L(θ) = log Pθ(y) via the EM-algorithm.

Let θt be a current estimate of θ. Further, let g be the unobserved graph, encoded as an adjacency matrix with gij ∈ {0,1} depending on whether there is an edge between genes i and j or not. In the E-step of the EM-algorithm, the conditional expectation of the complete-data log-likelihood is determined with respect to the conditional distribution p(g | y, θt) (Equation 1). By assuming independence between edges, Equation (1) factorizes over edges (Equation 2) and, after substitution and summing out of Equation (2), we obtain Equation (3). The two probability terms on the right side of Equation (3) are approximated by the statistical evidence for the edge i, j in GGMs with genes i, j and k.
As we only want to estimate the effect of k on the correlation between i and j, we distinguish only the two cases of whether k is a common neighbor of i and j (that is, gik = 1 and gjk = 1) or not. When k is a common neighbor, we test ρij|k ≠ 0 versus ρij|k = 0. When k is not a common neighbor of i and j, we test σij ≠ 0 versus σij = 0 for the pairwise correlation coefficients instead. Thus, we obtain approximations in terms of the p-values of the corresponding likelihood ratio tests. After replacing Equation (4) in Equation (3), the M-step of the EM-algorithm, that is, the maximization of Eθ(log Pθ(g) | y, θt) with respect to θ, leads to an iterative updating scheme θt → θt+1 (Equation 5).

In summary, we determine the probability parameters θ as follows:

(1) For gene pairs i, j, compute P(ρij|k ≠ 0) and P(σij ≠ 0) for all genes k ≠ i, j.

(2) Starting with θ0, apply Equation (5) iteratively until the error |θt+1 - θt| drops below a prespecified value, for example 10^-6.

Our latent random graph approach also enables us to fit a large number of additional genes into a constructed genetic network. In this case, for a gene pair i, j in step 1 of the analysis, the partial correlation coefficients ρij|k are not only computed and tested for genes k in the model but also for the additional candidate genes. However, the iteration in step 2 is not extended to these candidate genes. In other words, θij is only iteratively updated in Equation (5) if both genes i, j are in the original model. For candidate genes k, θik and θjk are kept fixed at a prespecified value, for example 1, and are not re-estimated in the EM-iteration process.

This outline introduces a second level into the modeling process. At the first level, the network between the original genes is constructed. At the second level, we test how additional candidate genes influence the parameters θ. If these candidates have an effect on the correlation between i and j, θij will decrease. Thus, by comparing the original network with the network inferred from allowing for additional genes in step 1, we can determine which candidate genes lower the θ-values and, accordingly, fit well into the network.

Additional data files are available with the online version of this paper: the gene expression values of the isoprenoid genes; the gene expression values of the 795 genes from other pathways; a more detailed description of the microarray data; and the correlation pattern of the 40 isoprenoid genes.
Carbon dioxide fixation bioprocesses in reactors necessitate recycling of D-ribulose-1,5-bisphosphate (RuBP) for continuous operation. A radically new closed-loop RuBP-regenerating reactor design has been proposed that will harbor enzyme complexes instead of purified enzymes. These reactors will need binders enabling selective capture and release of sugars and intermediate metabolites, enabling specific conversions during regeneration. In the current manuscript we describe properties of proteins that will act as potential binders in RuBP regeneration reactors.

We demonstrate specific binding of 3-phosphoglycerate (3PGA) and 3-phosphoglyceraldehyde (3PGAL) from sugar mixtures by inactive mutants of the yeast enzymes phosphoglycerate mutase and enolase. The reversibility of binding with respect to pH and EDTA has also been shown. No chemical conversion of the incubated sugars or sugar intermediate metabolites by the inactive enzymatic proteins was found. The dissociation constants for the sugar metabolites are in the micromolar range; both proteins showed a lower dissociation constant (Kd) for 3-phosphoglycerate (655–796 μM) compared with 3-phosphoglyceraldehyde (822–966 μM), indicating higher affinity for 3PGA. The proteins did not show binding to glucose, sucrose or fructose within the sensitivity limits of detection. Phosphoglycerate mutase showed slightly lower stability on repeated use than the enolase mutants.

The sugar and sugar-intermediate-metabolite binders may have a useful role in RuBP regeneration reactors. The reversibility of binding with respect to changes in physicochemical factors, and the stability of the proteins when subjected to repeated changes in these conditions, are expected to make the mutant proteins candidates for in-situ removal of sugar intermediate metabolites for forward driving of specific reactions in enzyme-complex reactors.

Sustained increase of atmospheric CO2 has already initiated a chain of events with unintended ecological consequences.

Chromatograms were allowed to run for 5 hours, then removed from the solvent system and subjected to staining. Three different staining techniques were used to detect sugars: ammonium molybdate, silver nitrate and alpha-naphthol staining.

The dissociation constants for protein-sugar binding were estimated by measurements of area in chromatograms. For this purpose, covalently immobilized protein A sepharose beads were used; the proteins were immobilized on protein A using an AminoLink kit. A known concentration of protein was incubated at room temperature (25°C) with varying concentrations of sugar in the range of 1 μM to 1 mM in a fixed volume of 100 μl. At the end of the incubation (10 min), the mixture was centrifuged at 10000 × g and an aliquot of the supernatant was spotted and a chromatogram (TLC) was developed. A similar mixture, but with BSA-coupled beads, served as a control. Area calibration using varying concentrations of sugar with a fixed aliquot spot volume was recorded under identical conditions. From the measured areas in the control and experimental sets, the free sugar was calculated; bound sugar is the control value minus the sugar left in the experimental set. The experimental data were used to draw a Scatchard-type plot from which the dissociation constant was calculated. With P representing free protein, L the ligand and PL the ligand-bound protein, the dissociation constant is defined as Kd = [Pfree][Lfree]/[PL].
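For illustration, the Scatchard analysis described above can be reproduced numerically. The sketch below (Python with NumPy, using simulated single-site binding data rather than the measured chromatogram areas) fits the linearized form bound/free = Bmax/Kd - bound/Kd, so that Kd is recovered as -1/slope.

    import numpy as np

    # Hypothetical single-site binding data: bound = Bmax * L / (Kd + L)
    Kd_true, Bmax_true = 700.0, 50.0              # Kd in uM (cf. the 655-966 uM range above)
    free = np.array([50.0, 100.0, 200.0, 400.0, 800.0, 1000.0])  # free ligand, uM
    bound = Bmax_true * free / (Kd_true + free)

    # Scatchard linearization: bound/free = -(1/Kd) * bound + Bmax/Kd
    slope, intercept = np.polyfit(bound, bound / free, 1)
    print("Kd   =", -1.0 / slope)                 # ~700 uM
    print("Bmax =", -intercept / slope)           # ~50

In practice a nonlinear least-squares fit of the hyperbolic binding isotherm is often preferred to the Scatchard transform, which distorts experimental error, but the linear form matches the plot described in the text.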
The dissociation constant (Kd) values of PGDM and S39A enolase for 3PGA and 3PGAL were calculated from the experimental data using a Microsoft Excel program.

DB and DB carried out purification of the enolase mutants; MK and SM carried out the binding assays. MTS, SC, AG and VG participated in the design of the study and performed analyses; MTS also helped to draft the manuscript. SKB conceived of the study, participated in its design and coordination, and helped to draft the manuscript. All authors read and approved the final manuscript.
Translational Research (TR) provides a set of tools and a communication context for scientists and clinicians to optimize the drug discovery and development process. In the proceedings of a Princeton conference on this timely topic, the strengths and needs of this developing field were debated. Outcomes and key points from these discussions are summarized in this article, which covers defining what we mean by translational research; ways in which to engender the TR mindset and embed it in organizations such as the pharmaceutical industry in order to optimize the impact of available technologies (including imaging methods); the scientific basis and under-pinnings of TR, including genomics knowledge and information sharing; as well as examples of application to drug discovery and development. Importantly, it should be noted that collaborations and communications between the stakeholders in this field, namely academia, industry and regulatory authorities, must be strengthened in order for the promise of TR to be delivered as better therapies to patients. Successful drug development requires satisfying a matrix of domains, from relevance to the disease and the drug-ability of the target, through feasibility and convenience of drug delivery, to demonstration of a favorable benefit-risk profile, in order to achieve a drug label that reflects physician and patient acceptance. Herein lies a key role for TR in helping to navigate this journey.

There are many challenges facing pharmaceutical companies in the post-genome era, not least of which is declining productivity and innovation. Not surprisingly, there is agreement between Industry, Academia and Regulatory communities that the drug discovery and development process needs to change in order to meet the future needs of patients with effective and desirable drugs. A key part of the strategic solution is to leverage the application of TR principles and practices, which if implemented will go a long way towards addressing the challenge posed by FDA's Critical Path Initiative.

In order to promote discussion on this topic, the International Quality and Productivity Center (IQPC) organized a conference that hosted a small group of clinical and basic science researchers and individuals from the pharmaceutical industry. Among the topics discussed were how to define "Translational Research", how to expedite the transfer of pre-clinical findings to influence development plans, how to select biomarkers to ensure support for decisions, how new strategies can be effectively translated to practical tactics, and what team players and collaborations are necessary to conduct successful TR. Success factors identified include: identification and validation of novel drug targets, development of robust and validated assays to screen drug leads for safety and potential efficacy in humans, and the identification of suitable patients for expedited but informative trials.

While the goal of TR is to implement in vivo measurements and leverage preclinical models that more accurately predict drug effects in humans, TR itself can be defined in many ways. At its core, however, is the thesis that information gathered in animal studies can be translated into clinical relevance and vice versa, thus providing a conceptual basis for developing better drugs. It could in fact be argued that the designation of a special term or definition for TR might be unnecessary or even misleading.
Historically, the term was assigned to create awareness and advocacy for the general public, clinicians and scientific communities, and especially for the government and other private sponsors in this field.

Whatever the precise definition of TR, it should serve as a forum to find a "common language" for clinicians and scientists in navigating the complexities of basic scientific approaches, data analysis and information processing. It clearly implies the need for intensive training of scientists and clinicians in multiple disciplines to acquire the expertise and experience to conduct TR. For the purposes of this symposium, the scope of translational research was defined as the application of scientific tools and methods to drug discovery and development. This can be achieved by integrating information concerning a) exposure (pharmacokinetics); b) biological activity (pharmacodynamics, including safety profiles), delineating differences between species and leading to the validation of target and mechanism biomarkers; and c) outcomes, leading to an understanding of efficacy and safety between species and ultimately to the qualification or linkage of biomarkers to clinical outcome (for a fuller discussion on biomarkers and surrogate endpoints, see the definitions cited in the literature).

Therefore, in taking a pragmatic or operational rather than a definitional approach, a key to a successful translation of non-human research to human clinical trials lies in the choice of biomarkers. While biological pathways tend to be homologous across species, and more so than pharmacokinetic parameters such as absorption and clearance, animal models themselves have a poor record of predicting human disease outcome. Nonetheless, biomarkers are the key for prediction of biological activity, if not drug efficacy, in humans. At least three types of biomarkers can be identified: (1) target biomarkers, measuring the interactions between a drug and its target; (2) mechanism biomarkers, measuring their downstream biological effects; and (3) outcome biomarkers, which reflect efficacy and safety. A second dimension can also be ascribed to biomarkers to help drug developers assign risk assessment to such approaches. This sub-classification links desired utility to points on a risk continuum, e.g. low, medium and high, in which 'low' describes a biomarker applied solely to animal models, for example for selecting compounds for progression into humans, whereas 'medium' is associated with utility for some aspects of early clinical profiling of efficacy and safety, including across-species correlation, and 'high' is associated with reproducibility and qualification as an outcome or even regulatory tool in humans.

Additionally, TR itself undergoes an evolution from pathfinding (hypothesis generating) to discovery research, to development, and finally to application. Each of these operational phases is amenable to being evaluated or supported by biomarkers, whether for the definition of objectives, proof of principle, or in assessing risk and feasibility. Consequently, the right choice of biomarkers can help drive decision-making and lower the costs and cycle-time for progression of a new drug from the bench into the clinic.
In summary, whatever the definition or classification ultimately used, in practical terms translational tools should be developed and applied on a "fit for purpose" basis, with prior assessment and agreement of the attendant risks.

Traditional Research and Development paradigms have accentuated the boundaries between the territories of the discovery and development worlds and have not been conducive to bridging key transition points. This is unfortunate, since the development world tends to lag behind advances made in discovery, a point recognized by FDA in launching the Critical Path Initiative.

While advances have been made on streamlining forward progression of R&D through organizational linkages, what has not happened to the same degree is a bi-directional flow of information, namely flow of information from the clinic back into the hands of the discovery scientist. The consequence of this is that the biological models used to qualify drug candidates may fail to be predictive of subsequent drug responses in the clinical setting. Thus a practical outcome of TR is to improve the overall probability of technical success (POS) in drug development.

Consequently, the next paradigm for R&D optimization depends not only on leveraging emerging technologies such as pathway mapping and in silico modeling, but also on empowering key scientists and clinicians with the task of enhancing the prediction and iteration learning cycle. Since there are different organizational solutions for embedding the TR mindset within an organization, a key element is to provide TR expertise to drug development teams. Furthermore, innovation and productivity values are critically linked through information exchange. Rapid iteration and transfer of knowledge gained from prototype development experience will enable more rapid compound redesign against the highly desired target and be reflected as enhanced innovation. On the productivity side, the tools outlined in the Critical Path Initiative, once effectively implemented, should be reflected in improved success rates and reduced cycle times.

The journey, however, starts with understanding the scientific foundations of physiology and pathophysiology, thus providing a rational linkage between the gene, its expressed product, disease expression and ultimately outcome. The discipline of biomarker identification and development, as mentioned previously, encompasses these principles and is a core tool in the TR scientist's armamentarium.

Biomarkers (which are not necessarily surrogate endpoints, and in fact few are) are key tools for escorting the drug candidate from the bench to the bedside and back. That is, they can be both animal "diagnostic" as well as human "diagnostic" tools. A key implementation tool is therefore to identify early on which biomarkers may be of value and to study these in the relevant animal models, that is, specifically include them in preclinical screening paradigms, as well as identify their role in the clinical development plan. Biomarkers, which include imaging techniques as well as protein and genetic markers, may fulfill several roles in R&D, from compound screening and selection through dose justification, decision-making and risk mitigation; however, the key is to overtly link them to the discovery and development plans with a priori agreed performance characteristics, such that there is agreement on the utility of the marker.

There are many good examples of the value or non-value of preclinical models in predicting subsequent human response and safety.
The journey from preclinical experience to the clinic is a well-worn one, albeit without the degree of overall predictiveness we would desire. On the other hand, there is a marked paucity of examples in which clinical experience or observation was translated back into a legitimate drug target and discovery effort (e.g. Viagra). Thus, a major opportunity lies both in developing more sensitive and specific animal models of disease (e.g. knock-in/knock-out) and in fully leveraging novel clinical observations. At the same time, it is the ultimate validation in the clinic that counts, and rapid feedback of that information will allow the conditional probabilities and learning cycle to be enhanced. By enabling these principles through organizational and cultural change, the impact of TR will be seen in high-quality mid-phase transitions as well as reduced cycle-times and resource burdens.

The era of genome-scale biology has seen an increase in, and production of, vast amounts of biological data together with an extensive increase in biology-oriented databases. To make the best use of biological databases and the knowledge they contain, different kinds of information from different sources must be integrated in ways that make sense to biologists. A major component of the integration effort is the development and use of annotation standards such as ontologies. Ontologies offer a conceptualization of domains of knowledge and facilitate both communication between researchers and the use of domain knowledge by computers for multiple purposes. Therefore, the Gene Ontology (GO) project was founded in 1998 in an attempt to provide consistent descriptors for gene products in different databases, and to standardize classifications for sequences and sequence features. Since then, the GO Consortium has grown to include many databases, including several of the world's major repositories for plant, animal and microbial genomes.

In the past, biological processes and the underlying genes, proteins, other molecules and environmental factors have been studied separately more than on an integrated basis. The challenge, however, for future research on human disease is to understand not only the mechanistic basis, but also the underlying dynamics of gene product expression. Thus, biological research should emphasize the analysis of patterns of gene expression over individual measurements.

GO has been developed to predict behavior of entire biological systems, with terms assigned to three aspects: (1) Molecular Function describes activities, such as catalytic or binding activities, at the molecular level, e.g. kinase activity. (2) Biological Process describes biological goals accomplished by one or more ordered assemblies of molecular functions; e.g. 'cell death' can have both subtypes, such as 'apoptosis', and subprocesses, such as 'apoptotic chromosome condensation'. (3) Cellular Component describes locations, at the levels of subcellular structures and macromolecular complexes, e.g. 'nuclear inner membrane' with the synonym 'inner envelope'.

The powerful use of comparative gene expression analysis in human disease was exemplified by a recent study on gene expression profiles of gastric cancer patients and their correlation to survival, in which Leung et al. showed that expression profiles correlate with patient outcome.
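The three GO aspects described above map naturally onto a small data structure. The sketch below (Python; an illustrative representation, not an official GO schema, and the accession shown is for illustration) captures a term, its aspect and its synonyms as they might be used for annotation.

    from dataclasses import dataclass, field
    from enum import Enum

    class GOAspect(Enum):
        MOLECULAR_FUNCTION = "molecular_function"
        BIOLOGICAL_PROCESS = "biological_process"
        CELLULAR_COMPONENT = "cellular_component"

    @dataclass
    class GOTerm:
        go_id: str                                   # GO accession (illustrative)
        name: str
        aspect: GOAspect
        synonyms: list = field(default_factory=list)
        parents: list = field(default_factory=list)  # is-a / part-of parent terms

    # Example mirroring the text: 'nuclear inner membrane', synonym 'inner envelope'
    term = GOTerm(go_id="GO:0005637", name="nuclear inner membrane",
                  aspect=GOAspect.CELLULAR_COMPONENT, synonyms=["inner envelope"])
    print(term.name, "->", term.aspect.value)

Because each term carries parent links, annotations made at a specific term can be propagated up the ontology, which is what makes GO useful for comparing expression patterns across data sets.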
The Gene Ontology (GO) project provides structured, controlled vocabularies and classifications that cover several domains of molecular and cellular biology and are freely available for community use in the annotation of genes, gene products and sequences. In summary, the application of mathematical models and computer simulations to analyze gene expression profiles and to compare complex data sets of various origins may provide new insight into the pathogenesis of cancer progression and metastasis.

Cancer vaccines are promising therapeutics designed to elicit immune responses against antigens expressed by tumor cells. However, vaccines that have worked well in preclinical models have not translated into consistent responses in the clinic. Since vaccines are comprised of multiple components, multiple immunological endpoints are used to identify the least effective vaccine components in cancer patients. Post-clinical research strategies are subsequently designed with a focus on improving the least effective vaccine components.

To improve the performance of cancer vaccines in the clinic, which is traditionally judged by clinical endpoints, novel endpoints and biomarkers are needed to assist in understanding why cancer vaccines are not working. From clinic to bench, a systematic strategy is needed for pre-clinical optimization that addresses vaccine limitations identified in the clinic; and from bench to clinic, performance criteria need to be established for a follow-up clinical study. After gathering the therapeutic options, testing has to be prioritized on the basis of: a) already available data; b) availability of the therapeutic modality; c) models and assays available internally; d) turnaround time; and e) the patent landscape.

Prioritization and rapid evaluation of novel therapeutics will decrease the turnaround time and facilitate decision-making. However, several tools are needed to make this a reality. For example, complex therapeutic strategies require biomarker or even surrogate endpoints from clinical trials to direct the development of second-generation therapeutics. The rapid qualification and choice of surrogate endpoints should be based on knowledge gathered in an "early-stage therapeutic opportunities database". This comprehensive database should include data on therapeutic targets, models, assays and published results; indeed, the plethora of new therapeutic strategies in preclinical stages can only be managed by accessing informative databases. Moreover, pre-clinical compound optimization can be facilitated by establishing quantitative endpoints of short duration, and lastly, go/no-go decision points must be established for surrogate endpoints and clinical responses in animal models.

However, several outstanding scientific issues also have to be addressed in order to improve cancer vaccine development strategies, such as the importance of clinical surrogate endpoints, the relevance of animal models, the lack of concordance between assays, and the lack of concordance between surrogate endpoints and the clinical response.

A core principle of TR revolves around validation of targets, biomarkers and treatment modalities in humans. These activities, and drug development itself, cannot be undertaken without patients or clinical data.
How TR can be integrated in a multi-center, multi-cultural organization involving patient accrual from more than 38 different countries worldwide for the research and treatment of cancer can be exemplified by the EORTC, a non-profit organization conducting more than 100 clinical trials and treating 7000 cancer patients yearly.

Advancement in basic science and immunology and an overwhelming revolution in biotechnology have changed the targets and endpoints in cancer trials from the mere assessment of cytotoxicity to defined mechanisms of potential anti-tumor effect. That is, in the era of "targeted therapies", molecular therapeutics are now being designed to target "strategic" checkpoints that underlie the malignant phenotype. The challenges to be met are: 1) dealing with new compounds affecting novel molecular targets, 2) innovation in design and analysis of clinical trials, 3) cooperation between translational researchers and networks of clinical investigators, and 4) informed patients. The major concerns in conducting clinical trials are rising costs coupled with efficacy rates as low as 5% in cancer patients, making signal-to-noise detection not only difficult but expensive.

The need for research on tumor tissue requires the set-up of tumor banks, and the associated administrative burden often discourages young oncologists. EORTC established a tumor bank comprising real tissue samples but including a "virtual review" by pathologists. This ensures the availability of a well-categorized and prognostically evaluated collection of primary tumors and allows an online-searchable bank for researchers to access. Indeed, the tumor bank harbors paraffin-embedded tumors as well as frozen tumor tissues, and storage of tissue is de-centralized at the institute where it is collected. To assure equal quality of tissue, so that the outcomes of scientific experiments can be compared, standardization of the collection and storage methods is fundamental. Therefore, protocols for storage, retrieval and tracking of tissues will be standardized and implemented in all participating laboratories.

Access to the tumor bank allows screening of many available tumor samples for the expression of molecular targets and will help to unravel novel biomarkers for diagnosis and treatment. Such access will allow us to overcome the missed opportunities due to lack of tissue collection in clinical trials: better pre-screening of potentially responsive patients based on expression of certain biomarkers (e.g. expression of bcl-2) and the treatment of target-positive patients may have ensured a better clinical outcome in this target class.

The challenge of testing promising new modalities for the cure of disease that had shown efficacy in experimental models lies in a lack of understanding of the underlying mechanisms, the heterogeneity of human genetic backgrounds and a lack of suitable controls in human studies. Strategies have been developed at the NIH for the global monitoring of patients by studying, with high-throughput technology, the systemic effects of treatments as well as their effect within the target organ. For this bedside-to-bench effort, a systematic sampling of human tissues of local (site of immunogen application), systemic (circulation) and peripheral (tumor site) origin needs to be standardized to ensure high quality of samples, avoiding degradation of protein, RNA and DNA. This TR approach allows experimental studies in human samples during or after therapy through amplification of transcripts for analysis of minimal sample tissue, and the application of monitoring techniques for genetic profiling.
Further, proteomic-based approaches allow the kinetics of the mechanisms of action of therapeutics to be followed.

Studying the effects of treatment in a bedside-to-bench approach provides markers for the characterization of the disease process and/or for testing hypotheses generated by experimental models. Therefore, the nature of research in the clinical setting can realistically be described as 'hypothesis generating', rather than 'hypothesis driven', through a discovery-driven approach. Analysis of the genetic background can reveal polymorphisms of genes involved in immune reactions, such as cytokines and their receptors, which might influence the outcome of immunological interventions in different patient populations.

Analysis of disease heterogeneity can be approached by transcriptional analysis, through linear amplification of RNA and subsequent analysis by cDNA array and transcriptome array, and/or by functional protein analysis, through protein characterization by proteomics.

Despite the many obstacles in monitoring therapeutic effect in early phase clinical trials and the lack of a hypothesis, the scientific significance of these trials should be reviewed assuming that the new treatment will not be beneficial. Desirable outcomes include learning about the disease process, the primary goal of the therapy and the reasons for its failure. Another concern should be whether we have taken advantage of the patient population accrued to learn something, albeit independent of treatment, about the disease process itself. Clinical trials should therefore be designed, within ethical constructs, to look at questions beyond the ones related solely to treatment. This can be achieved through (1) establishment of libraries of relevant clinical samples for immediate or future studies, (2) prospective collection of data into a consistent format, and (3) a tight link between clinical and scientific data.

Developing better therapies for chronic inflammatory diseases also exemplifies the use of the latest technological advances in TR, such as proteomics, transcriptomics and cellomics, for the identification or application of biomarkers. Chronic inflammation frequently precedes the development of cancer in adults, for example in the lung and esophagus.

Several factors, such as: 1) the nuclear protein HMGB1; 2) the S100 family of molecules; 3) the purine metabolites ATP, AMP and uric acid; and 4) heat shock proteins, have emerged as relevant mediators or "endogenous damage or danger signals" that recruit inflammatory cells, promote wound healing and associated stromagenesis and angiogenesis, and ultimately modulate immune functions.

Current attempts at cancer therapy focused on vaccination against antigenic targets or application of cytokines have resulted in measurable anti-tumor reactivity in the blood; however, these therapies have mostly failed to show a correlation with tumor outcome or progression. Therefore, a more complete understanding and identification of factors for assessing tumor death could inform and drive the development of more effective biological therapies for cancer patients. Sample acquisition in the blood includes serum/protein collection for SELDI-TOF mass spectrometry, and the collection of cells for microarray, proteomics, and high-content screening via cellomics. Protein chip SELDI-TOF MS has already been successfully used to discriminate serum expression profiles in various cancer types.
Besides proper study design, the models chosen to perform data classification and to estimate classification errors are highly critical for complex data analysis. The identification of diagnostic markers for cancer, or of markers to identify responders vs. non-responders to therapy, requires systematic analysis of healthy vs. diseased samples, and then of benign inflammatory disease vs. malignant cancer. Thus, methods to perform statistical analysis must be powerful, intuitive and provide an objective position from which to assess results. To handle these complex data analysis problems, the University of Pittsburgh has formed the Pittsburgh Supercomputing Center (PSC), headed by Dr. Arthur W. Wetzel, in a joint effort with Carnegie Mellon University and Westinghouse Electric Company; it is to date the most powerful open-resource computing facility available.

There are many examples of the value of weaving molecular imaging into investigational new drug development. At the same time, the scale of the initial investments required vs. the perceived benefits may not gain the necessary support of decision makers for application into development programs. There is a clear need to educate on the power and limitations of nuclear imaging techniques within the context of enhancing new drug development. Within this context, a primary goal for TR is to emphasize the cultural and operational shifts required of various stakeholders, including academia, in order to better partner with industry.

The term imaging covers a range of available techniques, including discovery autoradiography, small animal imaging (PET and MRI), traditional anatomical imaging and functional imaging; many new tracers are available, as are techniques with increased sensitivity to enable micro-dosing studies (AMS).

Firstly, demonstrating drug penetration into the tissue of interest and co-localization or binding with the intended target through receptor occupancy, including describing dose vs. target occupancy curves, remains a key objective and one used frequently in early clinical research. A second objective involves the quantification of a compound's pharmacokinetic (PK) profile using radio-labeled compound, an analysis that can be performed on a region-of-interest basis, e.g. to assess time on target as well as potential therapeutic benefits vs. side effects. Additionally, imaging can be used to quantify pharmacodynamic (PD) effects of drug action and their relationship to administered dose. In combination, the PK/PD information thus derived can be used to select a dose with which to test the clinical hypothesis or to help quantify the therapeutic index. From a TR perspective, all these techniques can be applied in the discovery and preclinical phases to facilitate compound selection and optimization, as well as in the clinical phases.

A key question emerges in applying these technologies: "How best to get it done?" Here, the debate of internal imaging centers vs. external networks and academic relationships quickly emerges. On balance, it is clear that there is not one ideal solution; rather, in general, a collaborative approach between industry and academia is recommended. As a consumer of medical imaging, industry is a critical player in driving innovation and the paradigm shift towards more frequent yet appropriate utilization. However, a partnership approach ultimately generates better value and cost-effectiveness for the imaging discipline as a whole.

TR is an approach to foster communication between the scientific community and clinical practitioners.
To maximize the value this can bring, public and governmental education has to be improved in order to build understanding and advocacy. There are many benefits to be accrued from this, not least of which is for the patient who is waiting for meaningful therapeutic advances. New drugs have to be developed fast and show an effect on the right target at the earliest possible stage of development in order for industry to become more innovative and productive and for medicines to be less expensive.

Amongst other specific aspects required is the strengthening of educational opportunities for physician scientists to help prepare them to conduct effective TR. At the same time, discovery science should be conducted by scientists who have been trained in relevant disciplines including cell biology and pharmacology as well as molecular biology. This in turn requires grant support for TR-related projects. Specifically, young scientific investigators should have more access to grants from governmental bodies and foundations in order to conduct research on clinical samples. This funding is largely in the hands of government leadership. Other points for disseminated education include the availability of a plethora of tools to conduct and advance TR, and development opportunities that include high quality clinical sample collection.

Lastly, since TR is information intensive, considerable efforts are required to provide accessible databases and share knowledge. To help ameliorate this gap, provide access to information derived from human experimentation, and optimize the communication between clinicians and scientists, Dr. Marincola founded the Journal of Translational Medicine, an Open Access, peer-reviewed online journal, so that more therapeutic insights may be derived from new scientific ideas – and vice versa.

In conclusion, TR represents a team effort, since no single constituency can be fluent in all aspects; a concerted effort is thus needed amongst translational researchers to convince stakeholders and legislators of the need to support TR efforts, and so maximize its potential.
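The imaging discussion above notes that PK profiles derived from region-of-interest data can guide dose selection. The sketch below is a hedged illustration, not any author's pipeline: it fits a generic one-compartment absorption/elimination curve to hypothetical tracer concentrations with SciPy, and every time point, value and parameter name is an assumption.

# A hedged illustration: fitting a one-compartment model
# C(t) = scale * (exp(-ke*t) - exp(-ka*t)) to hypothetical region-of-interest
# tracer concentrations to summarize a PK profile.
import numpy as np
from scipy.optimize import curve_fit

def one_compartment(t, ka, ke, scale):
    # scale lumps dose/volume terms; ka = absorption rate, ke = elimination rate
    return scale * (np.exp(-ke * t) - np.exp(-ka * t))

t = np.array([0.25, 0.5, 1, 2, 4, 8, 12])             # h, hypothetical times
conc = np.array([1.2, 2.0, 2.6, 2.1, 1.3, 0.5, 0.2])  # hypothetical ROI values

params, _ = curve_fit(one_compartment, t, conc, p0=[2.0, 0.3, 3.0])
ka, ke, scale = params
print("elimination half-life = %.1f h" % (np.log(2) / ke))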
This study evaluated urinary tract injuries and dysfunction after Radical Hysterectomy (RH) performed in patients with cervical cancer, and compared the cystometric parameters and urinary complications occurring in these patients with those occurring in patients who had undergone Simple Hysterectomy (SH).

A prospective case-control study was conducted to evaluate urinary tract injuries (intra-operative and post-operative) and dysfunction in 50 patients undergoing RH for cervical cancer, and to compare them with the same parameters in 50 patients who underwent SH for benign disease.

Mean age was 46.3 years in the RH group and 50.1 years in the SH group (p = 0.63). There were no bladder or urethral injuries in either group of patients. There was one intra-operative ureteral injury in the RH patients but none in those who underwent SH (p < 0.05). In the two weeks after surgery, 15% of RH patients and 11% of SH patients had experienced a urinary tract infection (p = 0.61). Two weeks after surgery, 62% of RH patients had no urinary symptoms, compared to 84% in the SH group (p < 0.02). Urinary residual volume, first urinary sensation and maximal bladder capacity were higher in the RH group, but the differences were not statistically significant. The only case of a urinary fistula appeared in a patient who had received 5000 cGy radiation therapy pre-operatively, and this healed spontaneously after 3 weeks of catheterization.

Intra-operative and post-operative urinary tract complications are comparable in patients undergoing RH and SH, and an expert gynaecological oncologist might be able to further decrease complications. However, radiation therapy before surgery may increase the risk of complications.

Although the incidence of lower urinary tract complications after RH has been reported at variable rates, up to one half of patients undergoing RH experience at least one lower urinary tract symptom that develops after surgery and at a variable period of time [2].

In this study, we prospectively evaluated intra-operative urinary tract injuries in addition to post-operative urinary tract dysfunction and infection at 2, 6, and 14 weeks following surgery. We also compared these findings with those at the same times in patients who underwent SH for benign disease.

Between October 2000 and December 2002, 50 women who underwent RH and bilateral lymph node dissection (BLND) were considered eligible for inclusion in the study. All patients had squamous cell carcinoma of the cervix (SCC Cx.) and were staged as Stage I or II. The operations were performed by the same gynaecological surgeons, using the same standard technique (class III Piver & Rutledge). Pre-operative management was standardised for all patients. Preoperatively, a detailed medical history, physical examination, routine laboratory tests, pelvic CT-scan, urine analysis, and urine culture were carried out.

The exclusion criteria were: a history of voiding dysfunction, previous pelvic surgery, brain or spinal cord diseases, diabetes mellitus, and contraindications to urodynamic studies. The latter included a history of vesicoureteral reflux, hydronephrosis, frequent or recent urinary tract infection, or urethral stricture. Patients received one pre-operative and three post-operative doses of a second generation cephalosporin (cephazolin).

The duration of surgery, the amount of intra-operative haemorrhage and the occurrence of any organ injuries were recorded.
A Foley's catheter was inserted at the time of surgery and was left in place for two weeks after surgery. The urinary catheter was removed once the post-voiding residual volume was less than 75 ml.

Water cystometry, urinalysis and urine culture were performed at 2, 6 and 14 weeks after operation. Water cystometry was performed with the subject lying in a supine position. A 12F double-lumen catheter was introduced transurethrally into the bladder to withdraw residual urine. The pressure-volume relationship of the bladder was determined by filling the bladder with isotonic saline at a rate of 30–50 ml/min. The cystometry fill phase ceased when the patient experienced an urge to void urine, the first indication being leakage through the urethra, or at a bladder volume of 600 ml (whichever occurred first). The volume at the termination of the fill phase was designated the maximum bladder capacity (MBC). We also assessed the bladder volume of each patient at their first desire to void. Post-void residual urine volume (RU) was determined by transurethral catheterization after voiding had ceased. The presence or absence of any urinary symptoms was determined by both questionnaire and direct interview with the patient.

Fifty patients with benign disease who had undergone SH were evaluated at the same time periods in the same way, for comparison with the RH group of patients. In the SH group of patients, the Foley's catheter was left in place for 24 hours after operation. Data were analysed with SPSS statistical software, using the chi-square test and Student's t-test.

During this study, 50 patients with early stage cervical cancer who underwent RH and 50 patients who had undergone SH for benign disease were evaluated. Two patients in the RH group and 3 from the SH group were lost to follow-up during the study. Disease in the RH group ranged from stage IA to IIB. Patients who had stage IB2 or higher stages of cervical cancer received 4500–5000 cGy of irradiation pre-operatively. None of these patients received adjuvant radiation during the interval between surgery and performance of the urodynamic studies. The mean ages and BMIs (Body Mass Index) of the two groups of patients were not statistically different; however, parity in the RH group was higher (p < 0.05), as shown in the accompanying table.

In the SH group, the most common pathological conditions requiring hysterectomy were as follows: dysfunctional uterine bleeding (47.83%), uterine myoma (21.7%), ovarian cyst (10.8%), chronic pelvic pain (4.3%), adenomyosis (4.3%), endometrial cancer (4.35%), CIN (4.35%) and molar pregnancy (2.17%).

One patient (with stage IB1 cervical cancer) in the RH group had an intra-operative ureteral injury, and ureteral anastomosis was carried out at that time. The urinary catheter and ureteral stent were removed four weeks after operation. The average blood loss and mean operative time for both groups are shown in the accompanying table.

Another patient (with stage IB2 cervical cancer) had received chemo-irradiation (5000 cGy) pre-operatively. She had spontaneous urine leakage from the vagina approximately 2 months after surgery. A retrograde cystography revealed a minute vesico-vaginal fistula. After 3 weeks of continuous bladder drainage, the fistula resolved spontaneously and she had no urinary leakage at her follow-up visits.

Modern surgical techniques have resulted in a decrease in the incidence of lower urinary tract complications occurring as a result of radical hysterectomy.
In particular, in recent times, various surgical strategies have been developed to avoid damaging the inferior segment of the cardinal ligament as well as the terminal bundle in the uterosacral and pubocervicovesical ligaments. This has made it possible for patients' lower urinary function to return more rapidly to its pre-operative state [5].

One series reported a 6.6% rate of intra-operative urinary tract injuries during radical surgery for cervical carcinoma. In a study by Zaino and colleagues, intra-operative complications were reported as 4.5% involving the urinary tract and 8.7% involving other organs. In our study, there was only one intra-operative urinary tract injury.

Ralph et al reported that 67% of patients had impairment or absence of bladder sensation after a RH. In another report, 84% of patients had an increased first desire to void and increased maximal capacity in the post-operative period.

In our study, the urinary tract dysfunctions that followed radical surgery were as follows: 4% of patients had an abnormal post-voiding residual volume at the first post-operative visit; an abnormal first voiding sensation was present in 49% at the third visit; and stress urinary incontinence occurred in 17%. Maximal capacity remained abnormal in 65% of cases by 14 weeks after surgery.

Urinary symptoms in our study occurred in 20% of patients 2 weeks after operation and were more frequent in patients who had received pre-operative radiotherapy. Urinary symptoms remained high at the third post-operative visit, although they declined in patients who had undergone surgery alone.

In our study, the patients' mean age, BMI, parity, operative time, and blood loss were higher in those undergoing RH. The mean age of our patients was higher than that of the patients in the study by Vervest et al (mean 45 years).

In recent years, several studies have supported the role of a gynaecological oncologist who is specifically trained in such aspects of care and who can achieve optimal cytoreductive surgery in patients with ovarian carcinoma [13].

The author(s) declare that they have no competing interests.

NB carried out the surgery and participated in drafting the manuscript. FG carried out follow-ups. HA participated in the design of the study and helped to draft the manuscript. HK helped with follow-ups and performed the statistical analyses. PH participated in the design of the study and helped to draft the manuscript. All authors read and approved the final manuscript.
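The methods above state that group comparisons used the chi-square test and Student's t-test (the study itself used SPSS). As a rough illustration only, the following Python sketch reproduces that style of analysis on placeholder counts and cystometric values; the numbers are not the study's data.

# A minimal sketch of two-group comparisons: chi-square for a categorical
# outcome and an unpaired t-test for a continuous cystometric parameter.
import numpy as np
from scipy import stats

# urinary tract infection: [with UTI, without UTI] per group (hypothetical)
table = np.array([[8, 42],    # RH group
                  [6, 44]])   # SH group
chi2, p_cat, dof, _ = stats.chi2_contingency(table)
print("chi-square p = %.2f" % p_cat)

# maximal bladder capacity in ml (hypothetical values)
mbc_rh = np.array([420, 510, 480, 530, 460])
mbc_sh = np.array([400, 450, 430, 470, 440])
t, p_cont = stats.ttest_ind(mbc_rh, mbc_sh)  # Student's t-test, two-sided
print("t-test p = %.2f" % p_cont)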
HIV/AIDS is the most dramatic epidemic of the century, having claimed more than 3 million deaths over two decades. Sub-Saharan Africa is heavily affected and accounts for nearly 70% of all cases. Despite awareness campaigns, prevention measures and, more recently, promotion of antiviral regimens, the numbers of cases and deaths are still rising, and the prevalence of systematic condom use remains low, especially in rural areas. This study identifies barriers to condom use based on the Health Belief Model (HBM) in Benin, West Africa.

The study was a cross-sectional survey conducted from June to July 2002. Two hundred and fifty-one (251) individuals were interviewed using a structured questionnaire adapted from a standardized WHO/GAP questionnaire. Logistic regression was used to identify factors associated with condom use.

In spite of satisfactory knowledge of HIV/AIDS transmission, participants are still at high risk of contracting the infection. Sixty-three (63) percent of the interviewees reported being able to recognize infected people, and condom use during the last occasional intercourse was declared by only 36.8% of males and 47.5% of females. Based on the HBM, failure to use a condom was related to its perceived lack of efficacy [OR = 9.76 (3.71–30.0)] and perceived quality [OR = 3.61 (1.31–9.91)].

This study identifies perceived efficacy (incomplete protective effect) and perceived utilization-related problems (any reported problem using condoms) as the main barriers to condom use. Hence, prevention strategies based on increasing perceived risk, perceived severity or adequate knowledge about HIV/AIDS may not be sufficient to induce condom use. These data will be useful in designing and improving HIV/AIDS prevention outreach programs in Sub-Saharan Africa.

One of the current challenging tasks faced by health professionals and scientists worldwide is the prevention and control of HIV/AIDS. This disease claims a huge yearly toll in deaths, lost productivity and economic losses, especially in sub-Saharan Africa, where the population is already weakened by poverty, malaria and tuberculosis [2].

Benin is a West-African country with a population of 6.2 million people. The male/female sex ratio is 0.96, 48.5% of the population is under age 15, the rate of literacy is nearly 30.0%, and farming is the main occupation.

The sample size was calculated based on results from the Benin 2001 Demographic and Health Survey. Data were stored using Epi Info 2000 and then analysed.

The age distribution of the study population was similar to that of the Benin population. The overall condom use in this population was low (34%).

However, it is interesting to note that whilst females declared ever using condoms less often than males, they declared having used a condom during the last occasional intercourse more often. In particular, even though 73% of females declared never using condoms, 47% declared having used one during the last occasional intercourse. None of these differences were significant, but they clearly indicate contradictory tendencies in answers to the questionnaire. It may be that females who have ever used condoms use them more frequently than males.
Finally, the median number of sexual partners during the last 12 months varied by gender, by education (2 for some versus 1 for none) and by marital status (2 for single versus 1 for other), but not by age group.

Overall, there was a high perceived risk of contracting HIV infection among interviewees: 94% considered themselves vulnerable to HIV/AIDS. This proportion was higher in females than in males. Similarly, there was a high perceived severity of HIV/AIDS: 99% of females compared to 87% of males perceived HIV/AIDS as a severe and deadly disease. Conversely, there was a relatively low perceived efficacy of the condom as a protective measure: only 37% of the interviewees perceived the condom as an effective means of protection from HIV infection. We identified several socio-cultural barriers to behavioural change, namely reported problems using condoms (88% of the interviewees), the alleged capability to physically recognize an HIV-infected person, and outright denial of the disease (only 19% of participants believe HIV/AIDS exists). Also, cultural practices such as polygamy (20% of the study population), poverty, the belief that there is a cure for the disease (74%) and religion (9% of unfavourable reactions towards condoms were among declared Christians) were all unfavourable to HIV infection control.

This study was the first ever to use the Health Belief Model (HBM) to assess cultural behaviour towards condom use and HIV/AIDS in rural Benin. The HBM has been reported to be one of the most widely used behavioural frameworks for more than five decades, but it has been criticized for its inability to efficiently predict people's behaviour.

Our results showed that there is a high awareness of AIDS in general and that women knew more about the modes of transmission of HIV/AIDS and its impacts than men. Conversely, women were more likely to feel that they could identify HIV-infected individuals from their symptoms. In addition, females were less likely to declare using condoms in general, even though a higher proportion declared having used a condom during the last occasional sexual intercourse. This finding is disturbing and could be explained by differences in the perception of the question "do you use condoms?" It is difficult to judge what the true answer is, but it is likely that rare events are better reported, and thus women may be more prone than men to recall the use of a condom during occasional intercourse, given that they declared on average fewer sexual partners. It is also possible that women who do use condoms use them more regularly than men.

Our measure of perceived vulnerability might not be sensitive enough to capture differences in perceived risks. In fact, all women and most men felt they were at risk of acquiring the infection, yet only a small proportion were using condoms. Another explanation may be that perceived risk is not a driving force in behavioural change in this subset of the population. This is an illustration of the complexity of modeling human behaviour and thus makes a case for further culture-specific HIV-behavioural research. When only considering the percentage of condom use by gender, females appear to be at a higher risk of acquiring HIV, even though they appeared to know more about transmission routes and prevention methods. This might be due to the well-established difficulty facing women in negotiating the terms of sexual intercourse.
In fact, gender inequality is associated with poverty, condoms with distrust, and sexual-economic exchange is not perceived as prostitution.

Despite a relatively acceptable knowledge of modes of transmission and prevention methods, only a few participants declared using condoms, an indication that relatively good knowledge about HIV/AIDS, even though necessary, may not be a key factor in the behavioural change needed to fight the HIV epidemic in the study population. These findings also indicate that programmes which aim only at increasing awareness and knowledge may not succeed.

Using the HBM to analyze the determinants of behavioural change in our study population, we can conclude that there is high perceived vulnerability and perceived severity, and yet this does not encourage condom use. An important proportion of participants do not believe in the efficacy of condoms, and there are barriers to their use.

Our results are comparable to those found in a similar study in the USA.

One limitation of our study was that, for ethical reasons, subjects less than 15 years old were excluded, even though some may already have been sexually active. Also, there was a potential selection bias from not having an equal number of interviewers of each gender, which resulted in an over-sampling of males: there were three male interviewers for one female, and interviewers and participants had to be of the same gender. Our results would be biased if the reason for poor recruitment of women was linked to their behaviours, which is not likely to be the case. For the purposes of the analysis, we assumed that reported knowledge and behavioural risk factors are independent. Finally, there is no evidence for the validity or reliability of the original WHO questionnaire; however, its use allows for comparability of results across settings.

Condom use in our study population depends on its perceived quality and perceived efficacy. There is an indication that behavioural change communication strategies based on increasing the perceived risk or vulnerability of the population, or based on fear by increasing the perceived severity of HIV/AIDS, are less likely to promote condom use, and these require more research. HIV outreach programs must better target the barriers to condom use. Condom outreach programmes should be defined at community level and in association with the community, using problem-solving techniques and selecting the most relevant targets based on their importance and changeability.

The author(s) declare that they have no competing interests.

SHH conceived of the study, designed the protocol, carried out and supervised the field work and data collection, performed and interpreted the statistical analysis and wrote the manuscript. HC participated in analysis and interpretation of the data, and in the writing of the manuscript. NJH contributed to earlier analysis of the data and reviewed the manuscript. All authors read and approved the final manuscript.
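The methods above specify a logistic regression relating condom use to Health Belief Model constructs, with results reported as odds ratios and 95% confidence intervals. The sketch below shows one hedged way to produce such estimates; the variable names and simulated data are hypothetical, not the survey dataset.

# A hedged sketch of a logistic regression reporting odds ratios (OR = exp(beta))
# with 95% confidence intervals, in the style of the analysis described above.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 251
df = pd.DataFrame({
    "no_condom_use": rng.integers(0, 2, n),          # outcome (placeholder)
    "low_perceived_efficacy": rng.integers(0, 2, n), # HBM construct (placeholder)
    "perceived_quality_problem": rng.integers(0, 2, n),
})

X = sm.add_constant(df[["low_perceived_efficacy", "perceived_quality_problem"]])
model = sm.Logit(df["no_condom_use"], X).fit(disp=0)

odds_ratios = np.exp(model.params)   # OR on the exponentiated scale
ci = np.exp(model.conf_int())        # 95% CI on the OR scale
print(pd.concat([odds_ratios, ci], axis=1))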
Since it has been reported that inhibition of proteasome activities may prevent cancer, the effects of proteasome inhibitors on arachidonic acid release from cells and on prostaglandin I2 production were studied.

The proteasome inhibitors epoxomicin, lactacystin and carbobenzoxy-leucyl-leucyl-leucinal (ZLLL) stimulate the release of arachidonic acid from rat glial, human colon carcinoma, human breast carcinoma and rat liver cells. They also stimulate basal and induced prostacyclin production in the rat liver cells. The stimulated arachidonic acid release and basal prostaglandin I2 production in rat liver cells are inhibited by actinomycin D.

Stimulation of arachidonic acid release and arachidonic acid metabolism may be associated with some of the biologic effects observed after proteasome inhibition, e.g. prevention of tumor growth, induction of apoptosis, and stimulation of bone formation.

The proteasome degrades many cellular proteins, several with regulatory functions. It is therefore not surprising that proteasome inhibitors affect many biologic processes [24]. Ca++-dependent proteases, calpain, papain, chymotrypsin, trypsin and cathepsin are not affected by epoxomicin even at a 50 μM concentration.

[3H]AA (91.8 Ci/mmol) was obtained from NEN Life Science Products, Inc. Epoxomicin, lactacystin and ZLLL were purchased from Biomol. All other reagents were from Sigma Chemical Co. Rat liver cells incubated with lactacystin (5.4 μM) or ZLLL (1.0 μM) for 6 h were tested for viability by a tetrazolium-based assay, and the treatments were found not to be toxic.

For release experiments, [3H]AA (0.2 μCi/ml) was added and the cells were incubated for another 24 h. The cells were then washed 4 times with MEM and incubated for various periods of time with 1.0 ml of MEM containing 1.0 mg BSA/ml (MEM/BSA) and different concentrations of each compound. The culture fluids were then decanted, centrifuged at 2000 × g for 10 min, and 200 μl of the supernate was counted for radioactivity. Radioactivity recovered in the washes before the incubation was compared to input radioactivity to calculate the percentage of radioactivity incorporated into the cells.

For prostaglandin measurements, [3H]AA was added after the first 24-h incubation. The cells were incubated for another 24 h, washed three times with MEM, then incubated with the compounds in MEM/BSA for various periods of time. The culture fluids were decanted and analyzed for 6-keto-PGF1α, the stable hydrolytic product of PGI2, by radioimmunoassay.

AA release is presented as a percentage of the radioactivity incorporated by the cells. Except for the time-course experiments, which used duplicate dishes, three to five culture dishes were used for each experimental point. The data are expressed as mean values ± SEM and were evaluated statistically by the unpaired Student's t-test. A P value < 0.05 was considered significant.
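The release calculation described above (released counts as a percentage of incorporated counts, compared between treated and control dishes by an unpaired t-test) can be illustrated with the following minimal sketch; all counts are hypothetical.

# A minimal sketch of the calculation described above (numbers hypothetical):
# [3H]AA release as a percentage of incorporated radioactivity, compared by
# an unpaired Student's t-test.
import numpy as np
from scipy import stats

def percent_release(released_cpm, incorporated_cpm):
    # counts released into the medium as % of counts incorporated by the cells
    return 100.0 * released_cpm / incorporated_cpm

control = percent_release(np.array([800., 760., 820.]), 20000.0)
treated = percent_release(np.array([1900., 2100., 2050.]), 20000.0)

t, p = stats.ttest_ind(treated, control)  # unpaired, two-sided
print("control %.1f%%, treated %.1f%%, P = %.3g"
      % (control.mean(), treated.mean(), p))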
Relatively few U.S.-based studies of chronic kidney disease have focused on Asian/Pacific Islanders. Clinical reports suggest that Asian/Pacific Islanders are more likely to be affected by IgA nephropathy (IgAN), and that the severity of disease is increased in these populations.

To explore whether these observations are borne out in a multi-ethnic, tertiary care renal pathology practice, we examined clinical and pathologic data on 298 patients with primary glomerular lesions at the University of California San Francisco Medical Center from November 1994 through May 2001. Pathologic assessment of native kidney biopsies with IgAN was conducted using Haas' classification system.

Among individuals with IgAN (N = 149), 89 (60%) were male, 57 (38%) white, 53 (36%) Asian/Pacific Islander, 29 (19%) Hispanic, 4 (3%) African American and 6 (4%) of other or unknown ethnicity. The mean age was 37 ± 14 years and the median serum creatinine 1.7 mg/dL. Sixty-six patients (44%) exhibited nephrotic-range proteinuria at the time of kidney biopsy. The distributions of age, gender, mean serum creatinine, and presence or absence of nephrotic proteinuria and/or hypertension at the time of kidney biopsy were not significantly different among the white, Hispanic, and Asian/Pacific Islander groups. Of the 124 native kidney biopsies with IgAN, 10 (8%) cases were classified into Haas subclass I, 12 (10%) subclass II, 23 (18%) subclass III, 30 (25%) subclass IV, and 49 (40%) subclass V. The distribution of Haas subclass did not differ significantly by race/ethnicity. In comparison, among the random sample of patients with non-IgAN glomerular lesions (N = 149), 77 (52%) patients were male, 51 (34%) white, 42 (28%) Asian/Pacific Islander, 25 (17%) Hispanic, and 30 (20%) African American.

With the caveats of referral and biopsy biases, the race/ethnicity distribution of IgAN differs significantly from that of other major glomerulonephritides. However, among individuals undergoing native kidney biopsy, we see no evidence of a race/ethnicity association with severity of disease in IgAN by clinical and IgAN-specific histopathologic criteria. Further studies are needed to identify populations at higher risk for progressive disease in IgAN.

IgA nephropathy (IgAN) is the most common form of glomerulonephritis (GN) worldwide. Clinical reports suggest that individuals of Asian/Pacific Islander heritage are more likely to be affected by IgAN than whites, African Americans, and persons of Hispanic descent. Reports from U.S. centers have generally compared results for white and African American IgAN patients, with little or no available information on U.S. patients of Asian/Pacific Islander heritage. Studies from Japan and China have reported that more individuals with ESRD in these countries had IgAN, implying that IgAN may have a more severe disease course in certain Asian populations [7].

To explore whether IgAN was more common and severe among Asian/Pacific Islanders in our population, we examined clinical and pathologic data on 149 patients with IgAN and a random sample of 149 patients with other primary glomerular lesions at the University of California San Francisco (UCSF) Medical Center.

The records of 183 percutaneous native and transplant kidney biopsies with a diagnosis of IgAN received between November 1994 and May 2001 at the Renal Pathology Laboratory at UCSF were reviewed.
Baseline demographic and clinical data included age, gender, race or ethnicity, history of kidney transplant, date of biopsy, and serum creatinine concentration at the approximate time of biopsy. In addition, the presence or absence of heavy proteinuria and the presence or absence of hypertension at the time of biopsy were recorded. Major ethnic groups included white, Asian/Pacific Islander, Hispanic, African American, and other/unknown. Ethnicity was determined using information from patient health insurance forms and the history provided at the time of biopsy. Any case in which Henoch-Schönlein purpura, systemic lupus erythematosus or chronic liver disease was considered a likely diagnosis was excluded, as were cases of IgAN superimposed on a systemic disease involving the kidney. Two examiners unaware of the clinical data independently reviewed the biopsies. Biopsies displaying fewer than six glomeruli by light microscopy, or with insufficient immunofluorescence staining, were also excluded. Fifteen biopsies were additionally excluded due to incomplete recovery of microscopic slides from files. Biopsies from 149 patients, including 25 kidney transplant recipients, satisfied the criteria for inclusion and provided the basis for the IgAN analytic sample.

Aside from IgAN, the most commonly diagnosed primary glomerular lesions at our institution over the same time period were focal segmental glomerulosclerosis (N = 314), membranous nephropathy (N = 197) and minimal change disease (N = 147). To establish baseline race/ethnicity prevalences for our region and referral population, we collected demographic data on a computer-generated random sample of individuals with non-IgAN glomerular disease (N = 149), stratified by kidney transplant.

Pathologic assessment of the IgAN native kidney biopsies was performed based on Haas' IgA nephropathy classification system.

Demographic and clinical data are reported as means ± standard deviation, medians with interquartile ranges, and proportions with 95% confidence limits. Inter-ethnicity comparisons were performed using the Cochran-Mantel-Haenszel χ2 test for categorical variables, and analysis of variance or the Kruskal-Wallis test for continuous variables. Two-tailed P-values <0.05 were considered statistically significant. SAS version 8.2 was used for all statistical analyses.

Patient clinical characteristics for the IgAN group are summarized in the accompanying table (P = 0.64 for the inter-ethnicity comparison). Sixty-six patients (44%) exhibited heavy (≥ 3 g/d) proteinuria, and 74 (50%) had documented hypertension (systolic blood pressure ≥ 140 or diastolic blood pressure ≥ 90 mm Hg) at the time of kidney biopsy. The fractions of patients with heavy proteinuria and hypertension at the time of kidney biopsy were not significantly different among the white, Hispanic, and Asian/Pacific Islander groups.

The median serum creatinine (SCr) of the IgAN cohort was 1.7 mg/dL (interquartile range 1.1–3.4 mg/dL). The median SCr of the African American group was significantly higher (5.0 mg/dL) than that of the other ethnic groups; however, these calculations were based on a small sample size (N = 4) due to the low prevalence (3%) of African Americans in our IgAN cohort. Median serum creatinine concentrations were not significantly different among the white, Hispanic, and Asian/Pacific Islander groups.

A majority of native kidney biopsies fell into Haas subclasses IV or V, which are known independent predictors of progressive disease and poor renal outcomes. Patients in the non-IgAN comparison group differed significantly (P = 0.006) compared to patients with IgAN.
In addition, the distribution of race/ethnicity differed significantly between the two groups (P < 0.001). This association of IgAN with the distribution of race/ethnicity persisted even when stratified by kidney transplant.

In a biopsy series of 244 patients with IgAN in a major urban setting, Haas found fewer African Americans, similar to that noted in other U.S.-based studies of IgAN [11].

In our study, the fraction of biopsies in subclasses I and II (18%) was similar to that observed by Haas (23%). However, we observed a higher proportion of biopsies in subclasses IV and V (64% vs. 31%), and a lower proportion in subclass III (19% vs. 45%), compared with Haas, possibly reflecting a temporal trend towards a higher biopsy threshold along with intergrader measurement bias.

In a study reviewing the pattern of glomerulonephritis in Singapore over the past two decades, Woo and colleagues reported that IgAN was the most common primary GN occurring in Singapore.

In contrast, based on smaller biopsy series, a striking variation in prevalence rates of IgAN has been reported from Europe and South America. In the UK, for example, Ballardie and colleagues noted comparatively low prevalence rates of IgAN in a predominantly white population in the early 1970's. In the subsequent 15-year period, however, these investigators reported a phenomenal rise in the observed incidence of IgAN, which they felt more likely reflected a higher frequency of detection rather than a true rise in disease incidence. Similar prevalence rates have also been documented in isolated white populations in Finland and southern Italy [17].

These differences may be partially attributed to increased screening and disparities in the indications for kidney biopsy. In Japan and South Korea, for example, school-aged children undergo annual screening for urinary abnormalities; kidney biopsy is subsequently recommended for children with evidence of proteinuria or hematuria.

Although the etiology of IgAN remains unknown, there exists a strong suspicion of an environmental antigen trigger combined with a genetic susceptibility factor. Along these lines, several hypotheses have been proposed to account for the reportedly higher prevalence of IgAN in Asian/Pacific Islanders. With respect to potential dietary antigens, Wakai et al. found that high intake of rice and n-6 polyunsaturated fatty acids (PUFA) was associated with an increased risk of IgAN [23]. Other reports have implicated H. parainfluenzae as a causative agent of IgAN in Japanese children and adults. Such claims are supported by studies showing the glomerular deposition of outer membrane H. parainfluenzae (OMHP) antigens and greater levels of plasma IgA1 antibody against OMHP in Japanese patients with IgAN. Whether IgAN is related to H. parainfluenzae colonization and/or infection has yet to be established.

The presence of either hypertension or proteinuria ≥ 3.0 g/24 hrs at the time of diagnosis significantly correlated with worsened renal survival in IgAN, even when controlling for serum creatinine at the time of kidney biopsy.

Despite ongoing investigative efforts, scant data are available regarding genetic markers that may predispose individuals to progressive disease from IgAN. Recent immunogenetic studies have suggested a potential role for the T-cell receptor (TCR) in the development of immune-mediated diseases.
Deenitchina and colleagues found that a genetic polymorphism of the TCR constant alpha chain was associated with progression of CKD in a cohort of Japanese patients with IgAN. Although promising, such polymorphisms of the TCR gene have yet to be evaluated in large, prospective studies or by genetic analysis of familial IgAN.

Our results contest the assertion that IgAN follows a more severe course in individuals of Asian/Pacific Islander descent. One reason for the similar disease severity of IgAN in our study population may stem in part from the large subpopulation of Filipino patients comprising our Asian/Pacific Islander cohort. It is unclear whether certain subpopulations of Asian/Pacific Islanders, including Filipinos, exhibit IgAN prevalence rates similar to those documented by Koyama and Woo. Anecdotal reports from Thailand and India documenting prevalence rates of 4–9% suggest that IgAN may not have the same epidemiology among all southeast Asians.

There are several important limitations to this report. As with any single-center biopsy series, we may have been underpowered to detect a clinically significant difference due to the limited sample size (type II error). In addition, racial admixture may have confounded the results, as we were unable to subclassify patients in the Asian/Pacific Islander group or to account for the growing population of bi- or multi-ethnic individuals. Furthermore, due to the study's case-control design and the breadth of our referral base, we were unable to control for the criteria for kidney biopsy. As a result, a biopsy bias may have confounded our results. In other words, Asian/Pacific Islander patients in our referral base with mild to moderate proteinuria and/or hematuria might have been given a presumptive diagnosis of IgAN without nephrology referral or confirmatory kidney biopsy. With regard to disease prevalence, these potential referral and biopsy biases based on race/ethnicity are largely conservative in nature and would have biased our results towards the null. Finally, we included data from a modest-sized IgAN transplant population (N = 25), the donor demographics of which were unavailable at the time of the study. However, the association of race/ethnicity with the distribution of glomerular lesion persisted even when stratified by kidney transplant, and thus our overall conclusions remained the same. In addition, a small European study of donor-recipient pairs (average follow-up 7 years) has shown that when a donor kidney with asymptomatic IgA deposits is transplanted into a recipient with ESRD secondary to a disease other than IgAN, the IgA immune deposits in the donor kidney are rapidly removed.

In conclusion, with the caveats of referral bias and biopsy bias, the race/ethnicity distribution of IgAN differs significantly from that of other major glomerulonephritides. However, among individuals undergoing native kidney biopsy, we see no evidence of a race/ethnicity association with severity of disease in IgAN by clinical and IgAN-specific histopathologic criteria. Further studies are needed to identify populations at higher risk for progressive disease in IgA nephropathy.

None declared.

YH designed the study, collected and analyzed the data, and drafted the manuscript. GC supervised the study design, analyzed the data, and edited the manuscript. EF graded the IgAN histopathologic slides. JO collected, reviewed and graded all histopathologic data, and edited the manuscript.
All authors approved the final manuscript.
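As a hedged illustration of the inter-ethnicity comparisons described in the methods of this study (which itself used SAS), the sketch below applies a chi-square test to an illustrative Haas-subclass contingency table and a Kruskal-Wallis test to placeholder serum creatinine values; none of the numbers are the study's data.

# A hedged sketch of the categorical and continuous comparisons described above.
import numpy as np
from scipy import stats

# rows: white, Asian/Pacific Islander, Hispanic; cols: Haas subclasses I-V
haas_counts = np.array([[4, 5, 8, 11, 18],
                        [3, 4, 7, 10, 17],
                        [2, 2, 5,  6, 10]])   # illustrative counts only
chi2, p, dof, _ = stats.chi2_contingency(haas_counts)
print("Haas subclass vs. ethnicity: P = %.2f" % p)

scr_white = [1.2, 1.8, 1.5, 2.4]              # mg/dL, illustrative
scr_api = [1.4, 1.6, 2.0, 1.9]
scr_hisp = [1.3, 2.2, 1.7, 1.6]
h, p_kw = stats.kruskal(scr_white, scr_api, scr_hisp)
print("serum creatinine: Kruskal-Wallis P = %.2f" % p_kw)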
The study aimed to investigate the role of PM10 particles in the release of the pro-inflammatory cytokine TNF-α and in IL-1α gene expression. We also investigated the role of intracellular calcium signalling events and oxidative stress in the control of these cytokines, and the effect of the particles on the functioning of the cell cytoskeleton. We showed that there was an increase in intracellular calcium concentration in J774 cells on treatment with PM10 particles, which could be significantly reduced by concomitant treatment with the calcium antagonist verapamil or the intracellular calcium chelator BAPTA-AM, but not with the antioxidant nacystelyn or the calmodulin inhibitor W-7. In human monocytes, PM10 stimulated an increase in intracellular calcium that was reduced by verapamil, BAPTA-AM and nacystelyn. TNF-α release was increased by particle treatment in human monocytes and reduced only by verapamil and BAPTA-AM. IL-1α gene expression was increased by particle treatment and reduced by all of the inhibitors. There was increased F-actin staining in J774 cells after treatment with PM10 particles, which was significantly reduced to control levels by all the antagonists tested. The present study has shown that PM10 particles may exert their pro-inflammatory effects by modulating intracellular calcium signalling in macrophages, leading to expression of pro-inflammatory cytokines. Impaired motility and phagocytic ability, as shown by changes in the F-actin cytoskeleton, are likely to play a key role in particle clearance from the lung.

At 25 μg/ml, the [Ca2+]c decreased to a value similar to that at the 5 μg/ml particle concentration. The [Ca2+]c following concomitant treatment of cells with particles and calcium antagonists was reduced, as in our previous studies in which a comparable increase in [Ca2+]c had been observed.

A comparison of the gram-for-gram dose effect of PM10 and UfCB particles on TNF-α release by J774 macrophages is shown in the accompanying figure; PM10 particles caused significantly more TNF-α release than the same mass of UfCB particles in the mouse macrophage cell line. There was no significant reduction of PM10-induced TNF-α release with W-7 (W), Trolox (T) or nacystelyn (N) at any particle dose tested.

The release of TNF-α protein by human monocytes after treatment with varying concentrations of PM10 is shown in the accompanying figure. Treatment of human peripheral blood monocytes with 10 μg/ml PM10 for 4 hours produced a significant increase in IL-1α mRNA content compared with unstimulated cells (p < 0.05).

The fluorescence intensity of cells stained for F-actin after treatment with PM10 increased significantly, although the increase in the fluorescence intensity of the PM10-treated cells was modest (a 5% difference).

Components of PM10 such as transition metals and ultrafine particles are considered to drive these effects, and [Ca2+]c changes lead to production of the proinflammatory cytokine TNF-α. PM10 particles can also stimulate entry of extracellular calcium into both J774 macrophages and human monocyte-derived macrophages, and this process is inhibited by a calcium channel blocker, suggesting that PM10, in a similar fashion to UfCB, induces opening of plasma membrane calcium channels leading to a calcium influx. The results obtained using the antioxidant nacystelyn were confusing: in the J774 macrophages nacystelyn was unable to inhibit PM10-induced increases in cytosolic calcium concentration, whereas the same antioxidant was very effective in the human monocyte-derived macrophages. This difference could be due to a species difference or to a comparison between a cell line and primary cells.
A number of cell lines have been demonstrated to exhibit aberrant calcium signalling pathways. Our previous studies using human macrophages suggest that ultrafine particle-induced increases in cytosolic calcium can be mediated by ROS, and since a proportion of the particles in PM10 are ultrafine, it is conceivable that much of the calcium increase is ROS-mediated, at least in part. However, PM10 also contains other substances, such as metals, that could influence this pathway. Metals would in fact be expected to increase ROS production by the PM10 particles.

We have previously shown that ultrafine nanoparticles enhance the calcium influx into cells of a monocytic cell line (MM6) [34], and that these changes lead to production of the proinflammatory cytokine TNF-α.

The present study clearly shows that the same dose of PM10 (10 μg/ml) that induces calcium elevation also stimulates significant increases in both TNF-α protein release and IL-1α mRNA production by macrophages. The calcium channel blocker verapamil and the intracellular calcium chelator BAPTA-AM reduced the calcium increase, TNF-α protein release and IL-1 mRNA expression by human monocytes stimulated with PM10 particles. This is strong evidence that influx of extracellular calcium plays a key role in upregulating the proinflammatory response induced by PM10 that could lead to disease. However, the calmodulin inhibitor W-7 had little effect on TNF-α release, while it did inhibit IL-1 mRNA expression. The antioxidants also had variable abilities to block cytokine expression, inhibiting IL-1 mRNA production but not TNF-α protein release. These differences could be explained either by divergent pathways controlling expression of the two cytokines, or by the fact that TNF-α was measured as protein whereas IL-1 was measured as mRNA. Clearly, however, both calcium and ROS are important in the regulation of IL-1α mRNA expression, while only calcium is important in controlling TNF-α expression in macrophages exposed to PM10.

These studies indicate that on an equal mass basis PM10 is far more potent than UfCB in terms of its ability to induce TNF-α protein release. This is likely to be due to components within PM10 other than the carbon core, such as metals and organic compounds, that can promote inflammation. It is also possible that components such as the ultrafine particles and metals could interact to enhance toxicity, as has been shown for ROS production in vitro and inflammation in vivo. Endotoxin levels associated with the PM10 particles were low; it is therefore unlikely that the cytokine release, changes in intracellular calcium, and IL-1α gene expression can be explained solely by endotoxin.

As explained previously, the cytoskeleton is the scaffold of cells, and in the case of motile cells such as macrophages it is responsible for controlling movement. We have previously shown that PM10 generates ROS, and the PM10-induced increase in F-actin staining in this study demonstrates that particle-derived ROS impact on the macrophage cytoskeleton. Our previous studies also demonstrate that ultrafine particle-induced ROS play a role in elevating the cytosolic calcium concentration of macrophages, leading to increased TNF-α production. As has been shown by other workers [45], PM10 particles increased the F-actin fluorescence signal in cells stained with FITC-labelled phalloidin; although changes in the distribution of actin filaments were not apparent from microscope analysis, there appeared to be more cortical staining. In accord with the role of calcium and ROS in the induction of IL-1 expression, both of these factors appeared to play an important role in modulating the F-actin cytoskeleton.
Disruption of the cytoskeleton, particularly via oxidative stress, is thought to disrupt cellular structure and hence function.

The present study has shown that PM10 particles may exert their increased pro-inflammatory effects by modulating intracellular calcium signalling in macrophages, leading to expression of proinflammatory cytokines. An additional consideration is the effect of the particles on the cytoskeleton of the cell. Impaired cellular motility and phagocytic ability is likely to play a key role in particle clearance from the lung, thus perpetuating the effects of PM10. The role of calcium and ROS in other cellular responses is under investigation.
It is a matter of ongoing debate whether a universal species concept is possible for bacteria. Indeed, it is not clear whether closely related isolates of bacteria typically form discrete genotypic clusters that can be assigned as species. The most challenging test of whether species can be clearly delineated is provided by analysis of large populations of closely-related, highly recombinogenic bacteria that colonise the same body site. We have used concatenated sequences of seven house-keeping loci from 770 strains of 11 named Neisseria species, and phylogenetic trees, to investigate whether genotypic clusters can be resolved among these recombinogenic bacteria and, if so, the extent to which they correspond to named species.

Alleles at individual loci were widely distributed among the named species, but this distorting effect of recombination was largely buffered by using concatenated sequences, which resolved clusters corresponding to the three species most numerous in the sample: N. meningitidis, N. lactamica and N. gonorrhoeae. A few isolates arose from the branch that separated N. meningitidis from N. lactamica, leading us to describe these species as 'fuzzy'.

A multilocus approach using large samples of closely related isolates delineates species even in the highly recombinogenic human Neisseria, where individual loci are inadequate for the task. This approach should be applied by taxonomists to large samples of other groups of closely-related bacteria, and especially to those where species delineation has historically been difficult, to determine whether genotypic clusters can be delineated and to guide the definition of species.

The definition of bacterial species, and a concept of species applicable to all bacteria, are problems that have long exercised systematists and microbiologists [4].

Molecular approaches to assigning bacteria to species began with the introduction of DNA-DNA hybridization, which allowed an objective assessment of the extent of sequence similarity among a set of genomes, and remains the systematicist's gold standard, defining bacterial species as those isolates whose genomes show at least 70% hybridization under standardized conditions. Individual isolates of a named species nevertheless differ in gene content.

A multilocus approach has recently been applied to small numbers of isolates of several relatively distantly related named species of enterobacteria, and to other groups.

In this study we have evaluated the ability of seven individual house-keeping gene sequences, and of the concatenated sequences of these genes, to resolve a large sample of human pathogenic and commensal Neisseria into genotypic clusters. We chose this example because Neisseria are naturally transformable, are among the most recombinogenic bacteria, and there is good evidence for relatively frequent localised recombination between the named Neisseria species [16]. As shown below, individual gene trees fail to resolve the named Neisseria isolates, but the tree based on the concatenated sequences effectively resolves the three major named species within the sample, although the boundaries are fuzzy due to the presence of a small number of intermediate genotypes.

We used the Neisseria MLST database, which includes several thousand sequence types (STs) of N. meningitidis and smaller numbers assigned to several other named human Neisseria species; all STs assigned to N. meningitidis were compared with all STs assigned to the other human Neisseria species.
The sequences of the seven gene fragments were concatenated in-frame and a tree was constructed (using third codon position sites) with Mr Bayes. All STs of N. gonorrhoeae, and all but two of the 171 STs of N. lactamica, descend from single well-supported nodes. The great majority of N. meningitidis STs also formed a single well-resolved cluster, but a few arise from the branch leading to the N. lactamica isolates. Very similar clustering of these three species was observed using other sets of 500 N. meningitidis STs from the database, and in a neighbour-joining tree constructed using all STs in the Neisseria MLST database (data not shown). The high levels of recombination in the Neisseria make the fine structure of the tree meaningless for epidemiological purposes. Thus, while the concatenated tree resolves N. gonorrhoeae, N. meningitidis and N. lactamica, their boundaries are not perfectly defined, and a number of isolates are placed on the branch between N. lactamica and N. meningitidis, representing intermediate genotypes.

Analysis of the individual gene trees shows that these fail to resolve the named species and highlights many examples where interspecies recombination has resulted in anomalous clustering. The small numbers of STs assigned to other human Neisseria species do not cluster clearly. A significant separation is observed between two subtrees in the MLST database using Neighbour-Joining, minimum evolution and UPGMA tree-building approaches (data not shown).

Current molecular definitions of species use rules or cut-off values (e.g. ≥ 70% DNA-DNA hybridization) and rarely take account of the genotypic diversity within and between populations.

The clusters corresponding to N. meningitidis, N. lactamica and N. gonorrhoeae are reasonably secure; the two N. lactamica that clustered highly anomalously probably represent species mis-identification. The most critical test of the multilocus approach is the ability to resolve N. lactamica from N. meningitidis, since these colonise the same body site, the nasopharynx. Resolution of these named species was remarkably good, although the boundaries between N. lactamica and N. meningitidis are somewhat fuzzy, due to the existence of intermediate forms. This is to be expected, as recombinogenic bacteria have mosaic genomes, resulting from the occasional replacement of chromosomal segments with those from related populations. Thus, in any large dataset, there may be isolates in which one or more of the loci used in a multilocus approach to species definition will have been recently introduced from a related population. Single unusually divergent replacements, or replacements at more than one of the multiple loci, may place isolates away from the majority of isolates of the species. However, only seven STs occupied such intermediate positions (being assigned clearly to neither N. meningitidis nor N. lactamica), and there was no overlap between these two named species (i.e. no region containing isolates identified as both species interspersed with one another).

Sorting the human commensal Neisseria into named species has been difficult. Network-based methods, and the multilocus approach applied to large datasets, should clarify whether they fall into distinct clusters, or whether the difficulties in defining species by phenotypic methods reflect an underlying genetic reality in which resolved clusters are not evident.

If necessary, further resolution between apparent clusters may be attempted by increasing the numbers of loci sequenced.
Provided that the alleles at these loci show a degree of specificity to a given species cluster, the resolution of that cluster will be enhanced. If this cannot be demonstrated, then it is likely that the isolates under test do not genuinely form separate populations and should not be considered distinct species. This approach lends itself to "electronic taxonomy", in which systematic classification may be ever more finely elucidated through the accumulation of online sequence databases.

The work described here obviously begs the question of what forces or mechanisms could generate such separation among recombining bacteria. We offer a simple model for recombining organisms as follows: consider two populations freely recombining within themselves and with each other. New mutations arising in one population will readily spread to the other, and to an observer they appear to form one cluster of related strains. If a barrier to recombination should be erected between them, such that isolates are much more likely to undergo recombination with their own population, then the rate of generation of new genotypes within each population may increase beyond the rate at which such genetic innovation is shared, and the two populations begin to diverge. As the populations diverge, decreasing sequence identity will further impede recombination, thus reinforcing the effect of the original genetic barrier and creating a permanent separation [25].

It is not difficult to suggest candidate mechanisms. Niche separation is one example, and almost certainly underlies the tight, well-defined cluster of N. gonorrhoeae. Unlike the other human Neisseria, which colonise the nasopharynx, the primary niche of the gonococcus is the genital tract, and it has been proposed that gonococci arose relatively recently due to the successful invasion of the genital tract by a nasopharyngeal Neisseria lineage.

The point at which such a group is described as a species is a matter more of human interest and attention than of any intrinsic evolutionary process. The properties of the species clusters we observe will be determined by the diversification of those strains sharing the speciation loci (i.e. those that determine gene flow). Because speciation is gradual, we should be able, using estimates of recombination within and between groups derived from multilocus data, to define nascent species which, if they continue to diversify in isolation, are expected to form distinct sequence clusters, i.e. species, in the future.

The bacterial domain of life is not uniform. Instead we see clumps of similar strains that share many characteristics, and with an innate human urge to classify, we have defined these as species. This work shows that by applying a simple approach using sequence data from multiple core housekeeping loci, we can resolve those clusters, provided such clusters exist. However, these species clusters are not ideal entities with sharp and unambiguous boundaries; instead they come in multiple forms, and their fringes, especially in recombinogenic bacteria, may be fuzzy and indistinct. A multilocus approach using large numbers of isolates will provide data that help us to develop theoretical models of how species emerge, and relate these to the observed population genetic structure of bacteria.
This should be enormously helpful to taxonomists, whose foremost duty will remain to provide us with pragmatic species designations which attempt to reflect the underlying genetic reality.

The contents of the publicly accessible Neisseria MLST database [18] were used. The sequences of the seven MLST loci for all STs assigned to species other than N. meningitidis were concatenated as described below, and analysed together with the concatenated sequences of N. meningitidis strains with ST numbers from 1 to 500. Species definitions were as recorded in the database. Sequences from the Neisseria MLST database were also used to construct individual gene trees.

The MLST loci were concatenated in-frame to form a 3267 bp sequence, of which only third position sites were used in subsequent analyses. To illustrate clustering in this dataset, a tree was constructed using Mr Bayes 3.0b4.

Abbreviations: rRNA, ribosomal RNA; MLST, Multi Locus Sequence Typing; ST, Sequence Type; UPGMA, Unweighted Pair Group Method with Arithmetic Mean.

BGS conceived of the study and drafted the manuscript, CF participated in study design and analysis of results, WPH designed the study, carried out the analyses and interpreted the results, and drafted the manuscript.
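A minimal sketch of the concatenation step described in the methods above, assuming seven in-frame allele sequences per ST (this is not the authors' code, and the toy sequences are placeholders; the real fragments sum to 3267 bp).

# Join the in-frame MLST locus sequences for one ST and keep only third codon
# position sites, as described in the methods.
def concatenate_in_frame(locus_seqs):
    """locus_seqs: list of in-frame allele sequences for one ST."""
    for seq in locus_seqs:
        assert len(seq) % 3 == 0, "each fragment must stay in frame"
    return "".join(locus_seqs)

def third_positions(concat):
    # positions 3, 6, 9, ... of the concatenated coding sequence
    return concat[2::3]

# toy example with two short 'loci'; a real ST would have seven fragments
st_loci = ["ATGGCTGAA", "ATGAAATTC"]
concat = concatenate_in_frame(st_loci)
print(third_positions(concat))  # -> 'GTAGAC'

Restricting the analysis to third codon positions, as the authors do, focuses the tree on largely synonymous variation, reducing the influence of selection on the clustering.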
We generated transgenic zebrafish in which expression of the firefly luciferase (luc) gene is driven by the zebrafish per3 promoter. Live larvae from these lines are rhythmically bioluminescent, providing the first vertebrate system for high-throughput measurement of circadian gene expression in vivo. Circadian rhythmicity in constant conditions was observed only after 5–6 d of development, and only if the fish were exposed to LD signals after day 4. Regardless of light exposure, a novel developmental profile was observed, with low expression during the first few days and a rapid increase when active swimming begins. Ambient temperature affected the developmental profile and overall levels of per3 and luc mRNA, as well as the critical days in which LD cycles were needed for robust bioluminescence rhythms. In summary, per3-luc zebrafish have revealed complex interactions among developmental events, light, and temperature in the expression of a clock gene.

The roles of environmental stimuli in initiation and synchronization of circadian oscillation during development appear to vary among different rhythmic processes. In zebrafish, a variety of rhythms emerge in larvae only after exposure to light-dark (LD) cycles. Generation of zebrafish that express the firefly luciferase gene under a clock gene promoter enables the study of molecular circadian rhythms in vivo.

The circadian clock controls biological processes such as behavior, gene expression, and physiology in diverse organisms, ensuring that these processes take place at appropriate times of the day. This is crucial for many organisms, such as plants, which must synchronize photosynthesis with day-night cycles. Animals also synchronize to environmental cycles because of more subtle but nevertheless important needs such as predator avoidance, food availability, and optimal temperatures for various processes. Circadian clocks in all species share the following properties: They persist even in constant environmental conditions with periods near 24 h; they can be reset by environmental stimuli such as light and temperature; and their periods are relatively constant at different temperatures. Work in model organisms, most notably Drosophila, has been extremely valuable in identifying important players of the clockworks and their roles.

Fish from five founder lines were raised as transgenic F1 fish. Three of the five lines emitted bioluminescence above background. The level of bioluminescence varied depending on the line. The strongest-glowing line (#23) was used for this study unless otherwise stated. All the animals used in this study were the progeny of crosses between a transgenic line and the *AB wild-type strain. Therefore, these animals carried the transgene in hemizygous condition.

One application of per3-luc transgenic zebrafish is screening for mutations that affect bioluminescence rhythms. Since it is most convenient to screen the youngest possible animals, bioluminescence from transgenic embryos was monitored first. It was also expected that per3-luc-mediated bioluminescence in embryos should cycle from day 1 of development even in constant conditions, because per3 mRNA expression detected by in situ hybridization has been demonstrated to oscillate from day 1 postfertilization in constant conditions. For bioluminescence recording, animals were kept in Holtfreter solution (prepared in 2 l of ddH2O and aerated overnight) containing 0.5 mM D-luciferin potassium salt and 0.013% Amquel Instant Water Detoxifier.
Embryos or larval fish were placed individually in every other well of a white 96-well Optiplate with 200 μl of Holtfreter solution. Once loaded with animals, four such plates were subjected to automatic monitoring of bioluminescence every 30 min by the Topcount multiplate scintillation counter (Perkin-Elmer) equipped with six detectors and plate stackers. The room temperature was set at 21–22 °C, and the machine at 24 °C; however, due to the heat created by the machine, the temperature at the bottom of the stacker was 1–2 °C higher than the room temperature. In order to minimize high background counts under lighted conditions, each plate was dark-adapted for approximately 5 min before being counted for bioluminescence. Each well was counted for 4.8 s every 30 min. The plates were illuminated with two white fluorescent lamps, each facing the left or right side of the stacker. The approximate intensity of the light that reached the plates was 17–35 lux, depending on the position of the plates within the stacker.

Dying fish typically produce a burst of very high luminescence, which is typically followed by a low background level of luminescence (<50 cps). Furthermore, intermediate levels of spikes were also found in many plots just before the burst of high bioluminescence. Therefore, in order to eliminate data from dead fish, traces that exceeded 5,000 cps, or that went down below 50 cps, at any point of the analyzed portion of the data were first discarded. Then an averaged plot of the remaining data from each clutch of embryos was examined, and the highest count on days 8–9 was determined for the relevant experiments.

Bioluminescence data from the Topcount were imported into Microsoft Excel 2000 by the Import and Analysis macro. In many of the experiments performed on the Topcount, some plates were placed in the machine several days earlier than others in order to monitor fish that experienced different numbers of LD cycles. Period and rhythmicity for each animal were determined by a macro based on published algorithms.
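The dead-fish filter described above translates directly into code. The following sketch is illustrative only: the array layout is an assumption, while the 5,000 cps ceiling and 50 cps floor are the thresholds given in the text.

    import numpy as np

    # traces: one row per well, one column per 30-min timepoint (cps).
    # Wells from dead fish show a high burst (>5,000 cps) and/or decay to
    # background (<50 cps); both patterns are excluded before averaging.
    HIGH_CPS = 5_000
    LOW_CPS = 50

    def keep_live_traces(traces: np.ndarray) -> np.ndarray:
        alive = (traces.max(axis=1) <= HIGH_CPS) & (traces.min(axis=1) >= LOW_CPS)
        return traces[alive]

    def clutch_average(traces: np.ndarray) -> np.ndarray:
        return keep_live_traces(traces).mean(axis=0)

    # usage with fake data: 10 wells, 48 timepoints
    rng = np.random.default_rng(0)
    demo = rng.uniform(100, 1000, size=(10, 48))
    demo[0, 5] = 9_000            # simulated death burst -> that well is excluded
    print(clutch_average(demo).shape)   # (48,)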
Total RNA was extracted from 9–42 embryos or larval fish raised in petri dishes using TRIzol reagent. The number of animals used for each extraction was recorded. Once extracted, total nucleic acid concentration was determined by a spectrophotometer. In order to prevent genomic DNA contamination, RNA samples were treated with Turbo DNA-free, and the concentration was determined again by a spectrophotometer. Total RNA (0.5–1 μg) was subjected to cDNA synthesis by Superscript II Reverse Transcriptase (Invitrogen) using Oligo(dT)12–18 (Invitrogen) as the primer in a 25–40 μl reaction volume.

Real-time PCR was performed in a 25 μl reaction volume containing a probe, forward and reverse primers, and qPCR Mastermix according to the manufacturer's instructions. Each reaction was run in quadruplicate in order to minimize pipetting errors. The primers and TaqMan MGB probes for per3 and luc were designed and synthesized by the Assays-by-Design Gene Expression service: per3 forward, 5′-GCCCTGGCAGCACCA-3′; per3 reverse, 5′-GAAAGCTGGAGGACGAGGAA-3′; probe, 5′-6-FAM-CTAAGAGCTCAAAATCC-NFQ-3′; luc forward, 5′-GCAGGTGTCGCAGGTCTT-3′; luc reverse, 5′-GCGACGTAATCCACGATCTCTTTT-3′; probe, 5′-6-FAM-TCACCGGCGTCATCG-NFQ-3′. The ABI Sequence Detection System 7000 (Applied Biosystems) was programmed to perform the following protocol: 50 °C for 2 min, 95 °C for 10 min, followed by 40 cycles of 95 °C for 15 s and 60 °C for 1 min.

The amount of per3 or luc cDNA per animal was calculated by the standard-curve method rather than the relative quantification method, for the following reasons. In this study, relative amounts of per3 and luc were compared between two different developmental stages as well as among different times of the day. It was also important to calculate the amount of each mRNA species per animal in order to compare these data to the bioluminescence data. The amount of a specific control RNA, as well as the total RNA, may differ among fish of different ages, in which case RNA per animal cannot be calculated by the relative quantification method using a constitutive control. As a concentration standard, a single-stranded DNA oligonucleotide of known concentration was used for each gene. These oligonucleotides span from the 5′ end of the forward primer to the 5′ end of the reverse primer, comprising 75 bp for per3 and 110 bp for luc. The standard concentration was varied from 10^2 to 10^7 copies per reaction in 10-fold increments. For every qPCR experiment, reactions for standards were performed in four replicates along with reactions for cDNA samples.

To test whether percentages of rhythmic fish among different experimental groups were equal, the G-test was performed using Microsoft Excel 2000 according to Sokal and Rohlf. For normally distributed data, the t-test was performed; the nonparametric Wilcoxon/Kruskal-Wallis test was performed on data that were not normally distributed even after various transformations were tried. Where multiple tests were performed on a set of data, the experimentwise error rate (α) was adjusted by the Dunn-Šidák method.

The GenBank (http://www.ncbi.nlm.nih.gov/) accession numbers of the sequences discussed in this paper are per3 cDNA (NM_131584) and BAC clone CH211–138E4 (AL929204). The Ensembl (http://www.ensembl.org/Danio_rerio/) ID of the flanking gene of per3 is given in the text.
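The standard-curve method described above amounts to fitting Ct against log10(copy number) for the oligonucleotide standards and inverting the fit for each sample. Below is a minimal sketch; the example Ct values and the scaling helper are invented for illustration, not taken from the study.

    import numpy as np

    # Standards: known copy numbers (10^2..10^7) and their measured Ct values.
    std_copies = np.array([1e2, 1e3, 1e4, 1e5, 1e6, 1e7])
    std_ct = np.array([33.1, 29.8, 26.4, 23.0, 19.7, 16.3])   # illustrative values

    # Ct is linear in log10(copies): Ct = slope * log10(copies) + intercept
    slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)
    print(f"slope = {slope:.2f} (about -3.32 corresponds to ~100% PCR efficiency)")

    def copies_from_ct(ct: float) -> float:
        return 10 ** ((ct - intercept) / slope)

    def copies_per_animal(ct, fraction_of_extract_in_reaction, n_animals):
        """Scale a per-reaction estimate up to the whole extract, then per fish."""
        return copies_from_ct(ct) / fraction_of_extract_in_reaction / n_animals

    # e.g. a sample Ct of 24.5, where the reaction used 1/20 of cDNA from 30 larvae:
    print(round(copies_per_animal(24.5, 1 / 20, 30)))

Because the standard has known absolute concentration, this yields copies per animal directly, which is what makes the comparison with per-animal bioluminescence possible.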
Microarrays can survey genome-wide expression patterns. Not only can these gene expression profiles be used to identify a few genes of interest, they are now being creatively applied for hypothesis generation and testing. Microarrays are used to survey the expression of thousands of genes in a single experiment. Applied creatively, they can be used to test as well as generate new hypotheses. As the technology becomes more accessible, microarray analysis is finding applications in diverse areas of biology. Microarrays are simply a method for visualizing which genes are likely to be used in a particular tissue at a particular time under a particular set of conditions. The output of a microarray experiment is called a "gene expression profile."

Gene expression profiling has moved well beyond the simple goal of identifying a few genes of interest. The notion that this is the major objective of microarray studies has engendered the oft-repeated criticism that the approach only amounts to "fishing expeditions." The sophistication of microarray analysis very much blurs the distinction between hypothesis testing and data gathering. Hypothesis generation is just as important as testing, and very often expression profiling provides the necessary shift in perspective that will fuel a new round of progress. In many gene expression profiling experiments, the hypotheses being addressed are genome-wide integrative ones rather than single-gene reductionist queries. In general, without a hypothesis only the most obvious features of a complex dataset will be seen, while clear formulation of the scientific question undoubtedly fuels better experimental design. And in some cases, the results of a microarray screen that was initially designed as an effort at cataloguing expression differences are so unexpected that they immediately suggest novel conclusions and areas of enquiry.

All microarray experiments rely on the core principle that transcript abundance can be deduced by measuring the amount of hybridization of labeled RNA to a complementary probe. The idea of a microarray is simply to lay down a field of thousands of these probes in perhaps a 5 sq cm area, where each probe represents the complement of at least a part of a transcript that might be expressed in a tissue. Once the microarray is constructed, the target mRNA population is labeled, typically with a fluorescent dye, so that hybridization to the probe spot can be detected when scanned with a laser. The intensity of the signal produced by 1,000 molecules of a particular labeled transcript should be twice as bright as the signal produced by 500 molecules and, similarly, the signal produced by 10,000 molecules should be half as bright as one produced by 20,000 molecules. So a microarray is a massively parallel way to survey the expression of thousands of genes from different populations of cells. Trivially, if fluorescence is observed for a gene in one population but not another, the gene can be inferred to be on or off, respectively. With appropriate replication, normalization, and statistics, though, quantitative differences in abundance as small as 1.2-fold can readily be detected. The output of all microarray hybridizations is ultimately a series of numbers, which covers a range of almost four orders of magnitude, from perhaps one transcript per ten cells to a few thousand transcripts per cell.
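Because signal scales with transcript abundance, fold changes can be estimated from log ratios, and with replication even the 1.2-fold differences mentioned above become testable. The following sketch is a toy illustration with simulated intensities; the array shapes, thresholds and gene counts are assumptions, not data from any study discussed here.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Normalized intensities (linear scale): rows = genes, columns = replicates.
    cond_a = rng.lognormal(mean=8, sigma=0.1, size=(100, 4))
    cond_b = rng.lognormal(mean=8, sigma=0.1, size=(100, 4))
    cond_b[:10] *= 1.3                    # first 10 genes truly up 1.3-fold in B

    log_a, log_b = np.log2(cond_a), np.log2(cond_b)
    fold = log_b.mean(axis=1) - log_a.mean(axis=1)     # mean log2 ratio per gene
    t, p = stats.ttest_ind(log_b, log_a, axis=1)

    # Flag genes exceeding a 1.2-fold change with p < 0.05 (in practice the
    # p-values would also be corrected for multiple testing).
    changed = (np.abs(fold) > np.log2(1.2)) & (p < 0.05)
    print(f"{changed.sum()} genes flagged; of the 10 true changes: {changed[:10].sum()}")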
It is the comparison of gene expression profiles that is usually of most interest. This is because the visualization is done at the level of transcript abundance, but just seeing a transcript does not guarantee that the protein is produced or functional. If, however, a difference in transcript abundance is observed between two or more conditions, it is natural to infer that the difference might point to an interesting biological phenomenon. A general approach to performing gene expression profiling experiments is indicated as a flow diagram in the accompanying figure.

The ability to survey transcript abundance across an ever-increasing range of conditions gives geneticists a fresh look at their cellular systems, in many cases providing a more holistic view of the biology, but at the same time feeding back into the classical hypothetico-deductive scientific framework. The technology has rapidly advanced beyond the simple application of fishing for candidate genes and now sees applications as diverse as clinical prediction, ecosystem monitoring, quantitative mapping, and dissection of evolutionary mechanisms. Two of the better-known examples of the interplay between microarray profiling and hypothesis testing come from clinical and parasitological studies.

Much excitement has been generated recently by the potential for clinical applications of gene expression profiling in relation to complex diseases such as cancer, diabetes, aging, and response to toxins.

A good example of the ability of microarray analyses to simply surprise us is provided by the study reported in this issue of PLoS Biology by DeRisi and colleagues. They reasoned that profiling the lifecycle of Plasmodium falciparum might identify a handful of genes that are induced at critical times and hence might be novel drug targets. Employing very careful staging, a platform with low experimental noise, and appropriate statistical procedures, they discovered an extremely tight molecular lifecycle within the organism. Families of functionally related genes are induced as a unit, one after another, in a tightly orchestrated rhythm that testifies to incredible integration of the physiology of the parasite. They show that with microarray analysis it is possible to model the physiology and biochemistry of the pathways instead of just targeting a few genes.

In the coming years, expect to see microarrays developed for an extremely diverse range of organisms and applied to an even wider range of questions, from parasitology to nutritional genomics. Consensus on a core set of statistical options will likely emerge, as will agreement on data quality standards. Applications will encompass defining gene function; inferring functional networks and pathways; understanding how variation is distributed among individuals, populations, and species; and developing clinical protocols relating to cancer prognosis and detection of toxin exposure. Similar profiling methods for proteins and metabolites will attract just as much attention as functional genomics, building on the foundations laid by genome sequencing.
The purpose was to examine how General Practitioners (GPs) use clinical information and rules from guidelines in their decisions on drug treatment for high cholesterol values.

Twenty GPs were presented with six case vignettes and were instructed to think aloud while successively more information about a case was presented, and finally to decide if a drug should be prescribed or not. The statements were coded for the clinical information to which they referred and for favouring or not favouring prescription.

The evaluation of clinical information was compatible with decision-making as a search for reasons or arguments. Lifestyle-related information like smoking and overweight seemed to be evaluated from different perspectives. A patient's smoking favoured treatment for some GPs and disfavoured treatment for others.

The method promised to be useful for understanding why doctors differ in their decisions on the same patient descriptions and why rules from the guidelines are not followed strictly.

The medical decision examined in our study is whether or not to initiate drug treatment for high cholesterol values. The topic has been the focus of much debate on the grounds that the proportion of individuals with elevated cholesterol values is high in most Western populations, and that the costs for treating all these people with drugs life-long would be enormous, with a marginal benefit in risk reduction for the majority of them [1-3].

Thus, the decision-making task can be carried out as follows. The first step is to decide whether the patient case is an instance of secondary prevention, diabetes or FH (familial hypercholesterolaemia), and if not, to estimate the numerical risk for coronary disease within ten years. A risk above 20% would justify pharmacological treatment, given that life style intervention has been tried for a sufficiently long period.

In this study we address the question of how GPs, who manage most of the cholesterol testing and treatment in Sweden, make such decisions when guidelines are not physically available to them. We will try to highlight the decision-making by examining how it is affected by clinical variables describing the patient and by medical knowledge and decision rules on the part of the doctors. The reason for studying decisions without access to written guidelines is that as experienced GPs (in the case of three of us), we have found that this is how decisions on cholesterol treatment are usually made. Furthermore, in a previous study concerning the ability of GPs to make numerical estimates of future cardiovascular risks, we asked the GPs if they had access to any tool for making numerical risk estimates at their clinic. Only nine reported that they had.

Our first set of research questions concerned how different kinds of information about the patient relate to the decision to prescribe a drug or not to do so. First, we estimated the importance of the individual information categories by counting the total number of times they had, according to the coding of the verbal protocols, been valued in a positive or negative direction in relation to the decision at hand. Second, in order to get an idea about why different doctors reach different decisions when presented with identical case information, we made the following analyses. For each of the patient cases separately, the subgroup who decided to prescribe was compared with those with an opposite decision regarding how often they valued different information categories.
Third, to further understand how the participants differed in their judgments, we examined which kinds of specified information about a patient are the most likely to lead to disagreement, i.e. to be judged in a positive direction by some participants and in a negative direction by others.

Disagreement about the evaluation of data on a given variable may result from different cut-off values, e.g. for the cholesterol variable. A certain value can be considered high by one participant, thus speaking for drug treatment, while the same cholesterol value may be considered almost normal by another participant, thereby speaking against drug treatment for the same patient. The age variable may also be associated with disagreement due to different cut-off values. A higher age is generally associated with a higher risk, but there is a lack of evidence for the potential benefit of treating the oldest age groups, and this may introduce different cut-off values for different doctors.

Disagreement may also be caused by what might be called different perspectives. If we take smoking as an example, all doctors should recognize smoking as a factor associated with an increase in future cardiovascular risk, and should accordingly make statements with a positive directionality for drug treatment. On the other hand, some doctors in some situations may regard actions aimed at smoking cessation as more beneficial than cholesterol reduction, which may give smoking a negative directionality in relation to drug treatment. Overweight can be regarded in the same way, i.e. as an indicator for drug treatment or as indicating change of life style as preferable to drug treatment. Thus, there are two alternative treatment philosophies – drug treatment or life style change – which in turn may be associated with opposite evaluations of the same data in relation to drug treatment. To the extent that these philosophies in fact are associated with different evaluations, one may regard them as different perspectives where certain data (e.g. smoking) are seen from different angles: as risk indicators, or as entities that could be changed through the patient's own efforts (i.e. by changing life style) as a means to treat his or her health problem. The latter perspective may also be associated with somewhat moralistic evaluations, e.g. that overweight or smoking is the patient's own choice or own "fault", which in turn would decrease the inclination to initiate drug treatment. Some evidence for this conjecture comes from a CJA (Clinical Judgment Analysis) study by Evans et al.

Our second set of research questions concerned the use of rules, and the concept of risk as shown in the verbal protocols. Six patient cases were chosen that included two high-risk patients (secondary prevention or diabetes) for whom the guidelines can be translated into a simple decision rule. Our question was how frequently such decision rules appeared in the verbal protocols and what their content was in relation to practice guidelines for elevated blood lipids. For the remaining four cases (primary prevention) no such simple guideline-based rule can be applied and instead, a numerical risk calculation is suggested. We examined the extent to which references to risk estimates were made in the think-aloud protocols.
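The two-step guideline logic described above can be written as a small rule. The sketch below is a simplified illustration under the stated thresholds (secondary prevention, diabetes or FH as direct indications; otherwise a ten-year risk above 20% with life style intervention already tried); the function and parameter names are invented.

    # Sketch of the guideline logic described above (simplified).
    def guideline_recommends_drug(secondary_prevention: bool,
                                  diabetes: bool,
                                  fh: bool,
                                  ten_year_risk: float,
                                  lifestyle_tried: bool) -> bool:
        # Secondary prevention, diabetes and FH are direct indications.
        if secondary_prevention or diabetes or fh:
            return True
        # Otherwise a numerical risk estimate is compared against 20%.
        return ten_year_risk > 0.20 and lifestyle_tried

    # e.g. a primary prevention patient with an estimated 25% ten-year risk:
    print(guideline_recommends_drug(False, False, False, 0.25, True))   # True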
For both secondary and primary prevention cases we were interested in determining how the decisions corresponded to what is indicated by guidelines and risk algorithms.

In sum, our research questions concerned: (1) importance of information (which categories of information about the patients seem to be most important for the decisions?); (2) patterns of importance for "Yes" and "No" decisions; (3) disagreement (which categories of information give rise to disagreement?); (4) use of rules (their frequency and contents); and (5) risk estimation.

Our approach of analysing think-aloud protocols in a medical decision task for the relative importance of different information categories and the amount of disagreement in the evaluation of these categories has not been tried before as far as we know. We believe that the results can be useful for understanding why doctors reach different decisions in response to the same patient cases and why they often do not act in accordance with guidelines. This knowledge should be useful as an aid in designing guidelines and teaching.

The 20 participants received the same six patient cases, and the order of the cases was the same for all participants. Cases with "Yes" and "No" decisions as the recommended treatment according to the guidelines were mixed as evenly as possible. Ten of the participants were randomly assigned to a condition where, in addition to thinking aloud, they also rated their willingness to prescribe a drug at regular intervals during each case, as described in a previous paper.

Twenty GPs working in the southern Stockholm area participated, 10 males and 10 females. Their ages varied between 34 and 60 years (M = 48.3) and they had practiced between one and 22 years (M = 11.4) as specialists in family medicine. A total of 36 doctors were contacted by telephone. They were selected so as to have a relatively even distribution across different districts in the area and according to gender, but the selection was not random. Twenty-four agreed to participate, but four of these later declined before the session.

Six clinical cases were selected from an original set of 40 authentic cases with cholesterol values above normal (at least 5.5 mmol/L). The original set was used in a Clinical Judgment Analysis design with a different sample of doctors and is described in Backlund et al.

The different kinds of clinical information presented on the six successive screens were divided time-wise in the same way for all six cases. The order in which this information was presented was arranged so as to be as realistic as possible in relation to clinical practice (see the corresponding table).

The study was conducted at the doctor's office or in a room nearby. All visits and recordings were made by one of the authors (LB). The cases were presented on a computer screen using the software Question Asker™ (QA).

The study was approved by the local ethics committee.

Each case ended with a screen with the following question: "Would you prescribe a cholesterol-lowering drug for this patient?" The participant responded by clicking on one of two response alternatives, "Yes" or "No".

The sessions were tape-recorded. A secretary then transcribed the recorded sessions into a written, word-by-word format. The protocols were segmented into statements. The next step was to categorize the statements into one of ten categories concerning the general characteristic of the statement
(cp. Cognition Categories): Attention, Evaluation, Rule, Explanation, Action pharmacological treatment, Action non-pharmacological treatment, Action other, Want of information, Rating, and Other. The set of categories is described in more detail in Backlund et al. Each statement was also coded for its directionality (positive or negative in relation to prescription) and for the information category to which it referred.

Two of the authors (LB and YS) independently coded the protocols from the first six participants. Reliability was computed separately for directionality and for information category (one of 21) as the percentage of statements that were coded into the same directionality/information category. For these first six participants, the inter-judge reliability was 92% for directionality and 94% for information category. The reliability measures were considered to be satisfactory and therefore only one of the authors (LB) performed the remaining coding.

The original set of 21 different information categories was reduced to eleven. When we ranked the information categories with regard to the frequency of positive or negative evaluations, there was a great leap between weight (frequency 14 and rank eleven) and triglycerides (frequency four and rank twelve). We therefore excluded triglycerides and information categories with fewer evaluations. Examples of such excluded categories were information about physical examination of the heart and lungs and test results concerning liver or thyroid function. The remaining eleven information categories were cholesterol, LDL (low density lipoprotein), HDL (high density lipoprotein), weight, smoking, CHD (coronary heart disease), diabetes, hypertension, heredity, sex and age.

The main effect of Information category was significant, F = 8.80, p < .01, as was the Information category × Case interaction, F = 11.80, p < .01. Thus, as could be expected, the different information categories were evaluated unequally often, and the pattern of relative importance differed in the six individual patient cases. All other main effects and interaction effects were also significant with p < .01.

In the following, the six patient cases will be analysed separately. This may allow us to detect possible differences in the pattern of importance between different information categories for participants who decided to prescribe a drug and those who made the opposite decision. For each case the number of evaluative statements was the dependent variable in a 2 (Decision: Yes/No) × 11 (Information category 1–11) × 2 (Direction: positive/negative) ANOVA, with the first as a between-group variable and the latter two as within-group variables. The statistical effects are summarized in the corresponding table.

The main effect of Decision was not significant in any of the cases, indicating that there was no evidence of an association between the number of evaluative statements and decision outcome. For four of the cases the effect of Direction was significant, indicating that positive and negative statements were of unequal frequency. Except for case SH, positive statements were more frequent than negative ones. For all six cases the different information categories were evaluated unequally often (i.e. main effect of Information). A significant Decision × Direction interaction, indicating that the decision to prescribe or not to prescribe was associated with different distributions between positive and negative statements, was found in only two of the cases.
The Direction × Information interaction was significant or nearly significant for all five analysable cases, suggesting that the distribution of positive and negative directionality was unequal across the different information categories.

The most interesting part of these analyses, however, is whether different decisions were associated with different evaluative patterns across the information categories. Statistically, this corresponds to two- or three-way interaction effects including Decision and Information. A significant Decision × Information interaction for a case would indicate that participants with a "Yes" decision had their number of evaluative statements differently distributed across information categories compared to participants with a "No" decision, regardless of whether the direction was positive or negative. The three-way interaction includes the directionality of the statements as well.

Case IS represents a 67-year-old female with hypertension as a central risk factor in addition to her cholesterol elevation. She also had a modest heredity. Case TW represents a patient whose smoking was a prominent risk factor, and case AR a patient with coronary heart disease and overweight (see the corresponding figures). An analysis of the response patterns for these three cases suggests that the "Yes" and "No" groups differed not only in how much they evaluated the central risk factor(s) (in addition to cholesterol elevation) as favouring drug treatment, but also in that the "No" group seemed to have identified at least one information category as evidence against treatment. As regards the remaining cases, the evaluative pattern for case PU (young woman with severe heredity for CHD) could be interpreted in a similar way, with the patient's (young) age as an argument against drug treatment, whereas case GM (diabetic case) did not invite any such interpretation. For case SH, no comparison between response groups could be made, as all participants decided not to prescribe.

We defined agreement as the degree to which the same information about a patient case was evaluated with the same directionality. We hypothesized that disagreement would be more common for information about lifestyle-related factors like smoking and weight than for medical conditions like hypertension and diabetes. There was only one case with a clear overweight and one case where the patient smoked, and the number of evaluative statements concerning these two information categories was therefore rather low. Regarding case TW's smoking, there were 21 statements with a positive direction and ten with a negative direction. For case AR's overweight there were three positive and six negative statements. For hypertension, each of the cases had either only positive or only negative directions. The same pattern was found for diabetes and CHD, except that for diabetes two statements concerning case GM were negative compared to 24 statements with positive directionality, and for CHD (case AR) one statement was negative whereas 31 were positive. Thus, with minor exceptions the participants agreed on the evaluations of these three information categories. The data were consequently in line with our hypothesis.

There were few evaluative statements concerning the sex of the cases. With one single exception all statements were negative and concerned female cases, which is in line with the known lower risk for female patients to suffer from cardiovascular diseases.
A corresponding tendency towards positive evaluations of the male cases was not clearly demonstrated in this material, which could suggest a possible bias in how sex is evaluated as a risk factor. As far as the age variable is concerned, there were both positive and negative statements (i.e. disagreement) for four of the six cases, which was in accord with our expectations. Among the information categories concerning laboratory values, cholesterol was evaluated most often by far, with a fairly even distribution of positive and negative statements. For at least four of the cases, there were approximately the same numbers of positive and negative evaluations of the same cholesterol value. In other words, according to our definition there is evidence of disagreement among participants in the evaluation of the different cholesterol values.

At the level of individual participants, there were eleven instances where doctors made both positive and negative evaluation(s) of one case. Four of these eleven concerned smoking, two cholesterol, two LDL and one each of hypertension, coronary heart disease and diabetes.

A total of 32 statements (i.e. not more than 1.6 per participant) were coded as rules. According to our judgment, 18 of these 32 statements were derived from or were compatible with the guidelines, and twelve of these 18 referred more or less directly to secondary prevention or diabetes (e.g. "He has angina pectoris and should be below 5 in cholesterol"). Examples of other contents for the statements coded as rules were the age limit for cholesterol treatment, the importance of looking for secondary hypercholesterolaemia, the role of the LDL/HDL ratio, the priority of smoking cessation vs. pharmacological treatment, the desired blood pressure value for diabetics, and the cut-off value for ten-year risk in primary prevention. For two of the patient cases, the guidelines allow a simple decision rule to be applied. Of the 32 instances of reference to a rule, 24 were in connection with these two patient cases.

For the four primary prevention cases, IS, TW, SH and PU, a number of statements referring to a numerical risk estimate (the guidelines say 20% within the next ten years) could have been expected. Only two participants referred to numerical risk estimates.

We discuss first how the doctors evaluated the available information in relation to the decision to be made. When each case was analyzed separately, there was some evidence of different patterns of information use shown by prescribers and non-prescribers. The non-prescribers seemed to evaluate central risk factors with a positive directionality less often than prescribers, and they also appeared to identify at least one information category that was given a negative directionality. This is compatible with theories that describe decision-making as a search for arguments or reasons for one or the other decision alternative [17].

Paradoxically, the information categories that some doctors used as arguments against treatment were used as arguments for treatment by other doctors. We interpret this finding as showing that prescribing and non-prescribing doctors evaluate given information from different perspectives, i.e., from different viewing angles that will put different aspects of the given information in the foreground and background, respectively.
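As an illustration of how the directionality tallies above arise, the following sketch counts positive and negative evaluations per information category and flags categories attracting both directions, i.e. disagreement. The smoking counts mirror case TW as reported in the text; the hypertension count is invented.

    from collections import Counter

    # Coded statements for one case as (information_category, direction) pairs.
    statements = (
        [("smoking", "pos")] * 21
        + [("smoking", "neg")] * 10
        + [("hypertension", "pos")] * 12
    )

    counts = Counter(statements)
    for category in sorted({cat for cat, _ in counts}):
        pos = counts[(category, "pos")]
        neg = counts[(category, "neg")]
        disagreement = pos > 0 and neg > 0   # both directions used for the same data
        print(f"{category}: +{pos} / -{neg}  disagreement={disagreement}")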
The use of life style factors as arguments for prescription decisions was further illustrated in a separate analysis based on the same verbal protocols, which also included a task where the doctors were asked to describe freely how they usually reason when they meet patients with high cholesterol values.

Disagreement was also shown for the age variable. Age is generally considered to be positively and monotonically related to the risk for future cardiovascular events. At the same time, the guidelines make the reservation that the benefit of giving drugs to very old people is unclear. As far as young patients are concerned, the perspective of ten-year risk appears to be too narrow. The recommended procedure is to increase or project the age to 60 years in order to estimate the risk [5].

Cholesterol was another variable associated with ambiguity, which could be explained in part by the selection of patient cases. Four of the six cases had cholesterol values in the range of 5.0–6.5 mmol/L, which is often labeled as a mild elevation. This might have formed the basis for negative evaluations, i.e. when a value was close to normal a decision to refrain from drug prescription could have been favored. A few participants also commented that the cholesterol values were lower than they had expected, or lower than those of their own patients.

Ambiguity in the decision situation, due to seeing the situation in terms of different treatment perspectives (relevant for life style factors) or different ideas about the optimal cut-off points, could possibly be reduced by clearer guidelines, which in turn accentuates the need for more research on a number of issues. These issues include the role of life-style factors in coronary heart disease, how patients should be motivated to change their life style, and the cost-benefit outcomes of drug treatment for patients in different age groups and with different cholesterol values.

We will now consider the second set of research questions addressed in the present study, viz., the extent to which the participants used certain rules as a basis for their decisions. Based on the verbal protocols, the frequency of statements classified as a rule was rather low, on average 1.6 per participant, and most of the rules concerned secondary prevention. Part of the explanation for the low number of rules might be that the participants were uncertain about the contents of the guidelines and were therefore less willing to talk about them. However, our separate coding of the participants' knowledge of guideline content, referred to above, does not support this explanation.

The low frequency of statements containing a reference to the risk concept could be explained in the same way, since the participants had no immediate access to an aid for calculating risk and were possibly unsure about the general content of such an aid. Another reason for the low number of rules might be that the instructions did not encourage the participants to explain their decisions, but simply to state aloud their thoughts about the presented information, which is generally considered the best method to ensure that the verbal protocols reflect the cognitive processes of interest.

From the view of evidence-based decisions and quality of care, we can say that many of the cases were difficult and that a considerable spread in the decisions was to be expected. In fact, most of the participants found it difficult to decide about several of the cases, which was evident from interviews after the sessions.
On the other hand, the only case (SH) with a mild risk (5–10%) was correctly judged by every participant as not being a candidate for drug treatment. Case AR with angina pectoris represents a decision situation where the guidelines could justify pharmacological treatment in a straightforward manner, and 17 out of 20 chose to prescribe. The presence of diabetes in case GM could similarly justify drug treatment, but this was the choice for only half the participants. The reason could be that the recommendation to treat diabetes as a risk factor on a par with coronary heart disease was rather new; the Swedish guidelines were published in 1999 and the study was conducted in 2000.

One limitation of the present analyses is that most of the conclusions are based on pooled data from groups of participants, while the principal interest is in strategies at the individual level. For example, the opposing evaluations of the same patient data could only be demonstrated between doctors, due to the low number of patient cases. After completion of the six cases, the participants were asked to relate in their own words how they usually reason regarding pharmacological treatment when confronted with patients with high cholesterol values. In a forthcoming paper these narratives will be analyzed at the individual level as "scripts" for dealing with cholesterol treatment. We will then have a better understanding of how knowledge and guidelines in this area of medicine are represented in memory, and how these cognitive structures are related to actual decisions and to the individual doctor's think-aloud protocols from processing the cases.

In this study we have used a new method to analyse a medical treatment decision. Verbal protocols were coded with respect to how different patient variables seemed to favour or not favour the decision to prescribe a drug. The method promised to be fruitful for understanding why doctors reach different decisions in response to the same patient descriptions and why guidelines are not followed.

The authors declare that they have no competing interests.

All authors participated in the design of the study. LB carried out the data collection. LB and YS performed the coding of the protocols. LB performed the rest of the data analyses and drafted the manuscript. All authors participated in the discussion of the draft. All authors read and approved the final manuscript.
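For completeness, the inter-judge reliability reported in the Methods above is simple percent agreement between two coders' labels. A minimal sketch, with invented codes:

    # Sketch: inter-judge reliability as percent agreement. Each coder
    # assigns one code per statement; reliability is the share of
    # statements receiving identical codes (92% for directionality and
    # 94% for information category in the study). A chance-corrected
    # measure such as Cohen's kappa could be used instead.
    def percent_agreement(coder_a: list[str], coder_b: list[str]) -> float:
        assert len(coder_a) == len(coder_b), "coders must rate the same statements"
        matches = sum(a == b for a, b in zip(coder_a, coder_b))
        return 100 * matches / len(coder_a)

    # invented directionality codes for ten statements
    lb = ["pos", "pos", "neg", "pos", "neg", "pos", "pos", "neg", "pos", "pos"]
    ys = ["pos", "pos", "neg", "pos", "pos", "pos", "pos", "neg", "pos", "pos"]
    print(percent_agreement(lb, ys))   # 90.0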
The objective of the study was to compare the prevalence and severity of musculo-skeletal pain between two socioeconomically contrasting areas in Oslo, Norway, and to explore possible explanatory factors.

Questionnaire survey, carried out as part of The Oslo Health Study in 2000–2001. Data from 821 persons (40 and 45 years old) living in a less affluent inner city area (east) were compared with 854 persons living in an affluent area of the city (west). Bivariate comparisons (chi square test) and multiple regression analyses were performed to investigate differences between the samples.

61% in east and 56% in west (p < 0.05) reported pain/stiffness in muscles/joints during the last four weeks. 30% in east versus 19% in west (p < 0.001) reported extensive pain. The between-area difference in extensive pain was partially explained by physical inactivity, mental health problems and being of non-Western origin.

Musculo-skeletal pain is reported by 55–60% of middle-aged persons in Oslo during a four-week period, and must be considered a normal phenomenon. Poor social conditions, inactivity, mental health problems and being an immigrant imply increased risk of more severe symptoms with a concomitant demand for health care.

In spite of this, social health inequities still exist, and recent analyses from Oslo even indicate that the differences have increased over the last 30 years. Some claim that future research on this topic should concentrate exclusively on interventions.

The data collection was part of the Oslo Health Study, a joint collaboration between the Oslo City Council, the University of Oslo and the Norwegian Institute of Public Health, which was conducted from May 2000 to September 2001. All residents born in 1924/25, 1940/41, 1955, 1960 and 1970 (n = 41353) received the three-page main questionnaire by mail, as an invitation to participate in a health screening. At the screening station a simple clinical examination and a blood test were performed, and the questionnaire was handed in. Two supplementary questionnaires were given out: one identical for all age groups, and one in four different versions. Participants were asked to fill in the supplementary questionnaires at home and return them by mail. Two reminders were sent to non-respondents. An overview of all topics covered in the questionnaires (in English) can be obtained from the study's website.

In the present study we analysed data from persons born in 1955 and 1960, who lived either in the inner eastern part of Oslo or in the outer western part (see below). We used data from the main questionnaire as well as from the age-specific supplementary questionnaire.

The variables included from the main questionnaire were: marital status, educational level, employment status, disability pension, social assistance, country of origin, physical exercise, alcohol intake, smoking habits, general health status, mental health problems, and musculo-skeletal disorders. Country of origin was recoded as Western or non-Western.

Mental health problems were assessed by the following question: "Below is a list of various problems: Have you suffered from any of the following during the last week, including today? Put a cross for every problem." Choices: not troubled, slightly troubled, quite a lot troubled, much troubled. The values were summarised and divided by the number of answers, and a mean value of 1.85 or more was used as a marker of mental health problems.

Musculo-skeletal pain was explored by the following question: "Have you suffered from pain and/or stiffness in muscles and joints in the course of the last four weeks?"
Choices: not troubled, somewhat troubled, very troubled, for each of the alternatives neck/shoulders, arms/hands, upper back, lower back, hips/legs/feet and elsewhere. The values were summarised and divided by the number of answers. A mean value of 2 or more was used as an indicator of extensive pain/stiffness in muscles or joints.

The variables included from the age-specific supplementary questionnaire were: own income, household income, muscular pain/stiffness last 4 weeks, duration of muscular pain/stiffness, satisfaction with health care, and belief in own coping ability.

Oslo's local authority districts can be ranked according to level of income, education, employment, disability pension, housing standard, number of non-Western immigrants, and mortality [12]. On January 1st 2000, west had 67,296 inhabitants and east 80,668. We have chosen to compare these two areas because they are strongly contrasted regarding living conditions.

Statistical analyses were performed using SPSS version 11.0. Bivariate comparisons of categorical variables were examined by the chi square test. Multiple regression analyses (stepwise) were performed to estimate the explanatory power of independent variables. A 5% level of significance was chosen.

The main questionnaire was completed by 821 forty- and 45-year-olds living in east (50.7% women) and 854 living in west (62.9% women), corresponding to a response rate of 39.0% in east and 43.9% in west. Some returned the questionnaire without attending the health screening, meaning that 1348 persons completed the supplementary questionnaires.

There was no significant difference regarding full-time employment, and frequent use of alcohol was more common in west. For all other socioeconomic and lifestyle variables, as well as general and mental health, east came out poorer (see the corresponding table).

The proportion having experienced muscular pain/stiffness during the last four weeks, being very troubled by muscular pain/stiffness in various body parts, or reporting extensive pain/stiffness was higher in east. No difference was found regarding pain duration. Participants in west were more satisfied with health care and more confident in their own coping ability.

Female gender, living in east, low education, low own income, non-Western country of origin, no hard exercise and mental health problems were all correlated with extensive muscular pain/stiffness. Non-Western origin was the most important predictor of extensive pain/stiffness in men, and mental health problems in women. When we performed the analyses for respondents of Western and non-Western origin separately (data not shown), mental health problems were the most important independent predictor of extensive pain/stiffness for both groups.

In both areas around 60% reported pain/stiffness in muscles/joints during the previous four weeks: 61.4% in east and 55.9% in west, a statistically significant difference of little clinical relevance. We do not know the prevalence among non-respondents, but as The Oslo Health Study implied a comprehensive data collection on many topics, it is unlikely that muscular problems in particular should influence response rate extensively. In a questionnaire survey we carried out in the same areas in 1994, approximately 55% in both areas reported musculo-skeletal pain during the last four weeks.

It is thus important that the proportion reporting to be very troubled was significantly higher in east for all body regions.
This might be due to a higher prevalence in east of specific musculo-skeletal diseases, like rheumatoid arthritis, fibromyalgia, etc. The Oslo Health Study asked about fibromyalgia and osteoporosis: in east 49 persons reported fibromyalgia and nine osteoporosis, compared to 19 and five in west. In previous studies we found no difference in the prevalence of rheumatoid arthritis or osteoarthritis.

Blank and Diderichsen found social inequities in both frequency and intensity of a variety of common symptoms in a Swedish population.

The response rate in our material is low. The total response rate in The Oslo Health Study was 46.5% for 45-year-olds, 43.7% for 40-year-olds and 46% for all age groups. Non-attendance does not occur randomly. Analyses of the impact of self-selection on the Oslo Health Study have shown that the following sub-groups were under-represented among the attendees: unmarried or divorced persons, males, persons with low education, low income groups, receivers of disability benefit, inner city dwellers and those not born in Norway.

We consider it a strength to use geographical area as a marker of socioeconomic position, and not for example individual education or income. Residential areas are distinct and easy to handle for authorities and politicians, and the majority of health care resources are allocated at area level. That inhabitants in affluent areas are healthier than those in less attractive areas is hardly a surprise, but what are the mechanisms behind the differences? There may be a certain amount of selection: the financial disadvantage of disabled people makes it more likely that they live in poorer areas. In our material, far more people of non-Western origin lived in east compared to west. As being of non-Western origin showed a strong independent correlation with severe muscular pain, this selection contributed significantly to the between-area difference observed. A less healthy physical environment, a less healthy lifestyle, and the psychological impact of being poorer than other people are also possible explanatory factors.

As our study is cross-sectional, causal interpretations cannot be made; we can only describe associations between socioeconomic measures and the health inequities observed. Several studies have shown that education and income cannot explain the difference in self-reported health between socioeconomically contrasting areas [22].

The present study shows that even in Norway today the perception and impact of a health problem is related to a person's socioeconomic situation. Self-reported health status is known to correlate with mortality, and it is a person's perceived health problems that influence the demand for health care. Significantly more persons living in a non-affluent area of Oslo reported extensive pain, compared to persons in an affluent area. Inactivity, poor mental health, and being a non-Western immigrant implied increased risk of severe symptoms.

The authors declare that they have no competing interests.

The two authors planned and carried out the data collection together. MB carried out the data analyses and drafted the manuscript. Both authors approved the final manuscript.
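Both questionnaire indices defined in the Methods above follow the same recipe: code the responses numerically, average over the answered items, and apply a threshold (a mean of 2 or more for extensive musculo-skeletal pain; 1.85 or more for mental health problems). A minimal sketch with an invented respondent:

    # Scoring sketch for the questionnaire indices described above.
    # Musculo-skeletal items: 1 = not troubled, 2 = somewhat, 3 = very troubled.
    # Mental health items are coded 1..4; None marks an unanswered item.
    def mean_score(items: list[int | None]) -> float:
        answered = [x for x in items if x is not None]
        return sum(answered) / len(answered)

    def extensive_pain(items) -> bool:
        return mean_score(items) >= 2.0      # threshold from the text

    def mental_health_problems(items) -> bool:
        return mean_score(items) >= 1.85     # threshold from the text

    # invented respondent: six body regions, one item unanswered
    pain_items = [3, 2, 2, 3, 1, None]
    print(mean_score(pain_items), extensive_pain(pain_items))   # 2.2 True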
The level of phylogenetic congruence between the individual genes was investigated using Bayes factors. We also explored how changes in the substitution models affected the observed incongruence between partitions of our data set.

The typical antbirds (Thamnophilidae) form a monophyletic and diverse family of suboscine passerines that inhabit neotropical forests. However, the phylogenetic relationships within this assemblage are poorly understood. Herein, we present a hypothesis of the generic relationships of this group based on Bayesian inference analyses of two nuclear introns and the mitochondrial cytochrome b gene. The Terenura antwrens, the wing-banded antbird (Myrmornis torquata), the spot-winged antshrike (Pygiptila stellaris) and the russet antshrike (Thamnistes anabatinus) are sisters to all other typical antbirds. The remaining genera fall into two major clades. The first includes antshrikes, antvireos and the Herpsilochmus antwrens, while the second clade consists of most antwren genera, the Myrmeciza antbirds, the "professional" ant-following antbirds, and allied species. Our results also support the previously suggested polyphyly of the Myrmotherula antwrens and Myrmeciza antbirds. The tests of phylogenetic incongruence using Bayes factors suggest that allowing the gene partitions to have separate topology parameters clearly increased the model likelihood. However, changing a component of the nucleotide substitution model had a much higher impact on the model likelihood.

The phylogenetic analysis supports both novel relationships and traditional groupings, and it indicates that the Myrmeciza antbirds and the Myrmotherula antwrens obviously need taxonomic revision. Although Bayes factors seem promising for evaluating the relative contribution of components to an evolutionary model, the results suggest that even if strong evidence for a model allowing separate topology parameters is found, this might not mean strong evidence for separate gene phylogenies, as long as vital components of the substitution model are still missing.

The phylogenetic results are in broad agreement with traditional classification of the typical antbirds, but some relationships are unexpected based on external morphology. In these cases the true affinities may have been obscured by convergent evolution and morphological adaptations to new habitats or food sources.

The typical antbirds (Thamnophilidae) are a speciose family within the furnariid radiation of suboscine passerines. In traditional classifications, the antpittas and antthrushes (Formicariidae) were grouped together with typical antbirds in an even larger family. However, the support for the expanded antbird family was indeed weak, and both morphological and molecular evidence later argued against it; molecular data have also motivated generic reassignments, such as the transfer of some Myrmotherula antwrens to Formicivora.

However, when independent evidence is lacking and incongruence occurs between individual data partitions, it may be difficult to determine whether particular partitions are better estimates of the species tree than others. Researchers might favor the "total evidence approach" for this particular reason (even though the argument for not combining data partitions with significant levels of incongruence has strong merits). However, the degree of incongruence between individual gene trees could be used to determine whether the phylogenetic conclusions should be based on the combined data set, or only on those parts that are similar among the different partitions.
A commonly used approach for analysing combined data with maximum likelihood is to assume a single (the same) substitution model for all of the combined genes. We analysed the combined data set of cytochrome b and the two nuclear genes (myoglobin and G3PDH), but all combinations of the three genes were examined. However, limitations in the substitution models might be the most important explanation for observed incongruence between data partitions, rather than an intrinsic phylogenetic incongruence.

The antshrikes (Thamnistes and Pygiptila), the antvireos and Herpsilochmus antwrens in clade B, and the "large" antshrikes are examples where our results are congruent with traditional classifications. The suggested relationships between the Hypocnemis and Drymophila antbirds, and between the Herpsilochmus antwrens and the antvireos (Dysithamnus), respectively, have also been proposed previously based on molecular data.

The posterior probabilities of trees and parameters in the substitution models were approximated with MCMC and Metropolis coupling using the program MrBayes. The Bayes factor B12 measures the strength of evidence for one model M1 compared to another M2, given the data X, and is calculated as the ratio of the model likelihoods, B12 = f(X|M1)/f(X|M2). The model likelihoods f(X|Mi) are difficult to calculate analytically but can be estimated by using the output from an MCMC run. The nomenclature follows earlier work for typical antbirds, and Irestedt et al. [1] for families.

Abbreviations: AMNH = American Museum of Natural History, New York; FMNH = Field Museum of Natural History, Chicago; LSUMZ = Louisiana State University, Museum of Natural Science; NRM = Swedish Museum of Natural History; ZMCU = Zoological Museum of the University of Copenhagen. References: (1) Irestedt et al. [1]; (2) Fjeldså et al. [45]; (3) Johansson et al. [65]; (4) Fjeldså et al. [66].
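The model likelihoods f(X|Mi) are usually estimated from MCMC output as the harmonic mean of the sampled likelihoods (the estimator reported, for example, by MrBayes). The sketch below is a minimal illustration with placeholder log-likelihood samples; it computes a numerically stable log harmonic mean and reports 2lnB12, the scale on which Bayes factors are conventionally interpreted.

    import numpy as np

    def log_harmonic_mean(log_likelihoods: np.ndarray) -> float:
        """log of the harmonic mean of likelihoods, from log-likelihood samples.

        HM = n / sum(1/L_i)  =>  log HM = log n - logsumexp(-logL).
        The estimator is known to be unstable (dominated by the smallest
        sampled likelihoods), so results should be treated with caution.
        """
        x = -log_likelihoods
        m = x.max()
        logsumexp = m + np.log(np.exp(x - m).sum())
        return np.log(len(log_likelihoods)) - logsumexp

    # placeholders for post-burn-in sampled log-likelihoods of two models
    logL_m1 = np.random.default_rng(0).normal(-42000, 5, 2000)
    logL_m2 = np.random.default_rng(1).normal(-42010, 5, 2000)

    two_ln_B12 = 2 * (log_harmonic_mean(logL_m1) - log_harmonic_mean(logL_m2))
    print(f"2 ln B12 = {two_ln_B12:.1f}  (>10 is conventionally 'very strong' support for M1)")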
In many of the world's poorest countries, dying is often accompanied by avoidable pain and other distressing symptoms. How can we improve care at the end of life? It must be made a public health priority Unfortunately, governments in these countries usually give care at the end of life a low priority compared with preventive and curative services . This prAs three physicians in Jamaica, Uganda, and Rwanda, we believe that providing quality care at the end of life should be seen as a global public health priority. By using relatively low-cost palliative care approaches and community-based strategies, thousands of terminally ill patients in Africa and the Caribbean could be relieved of their pain and suffering.In the countries where we work, the burden of cancer and HIV/AIDS is overwhelming. In Africa about 2.5 million people die annually from HIV/AIDS, and more than 0.5 million die from cancer ,4. SepulIn Rwanda, as in most other African countries, infectious diseases are still rife. Health professionals are often faced with the terrible dilemma of having to choose between saving lives and easing the suffering of the dying. Indeed the authorities usually believe that any investment in palliative care would be at the expense of providing life-saving treatments for those suffering from curable, often infectious illness.In many Caribbean countries, while the scourge of water- and insect-borne infectious diseases is largely under control, the prevalence rates of HIV in the adult population are some of the highest in the world . In JamaPrevention efforts—including health promotion, education, and screening—and treatments aimed at cure or prolonging life are key strategies needed to reduce the burden of HIV/AIDS and cancer in resource-poor countries . HoweverAt present, access to treatment where we are working is essentially controlled by the ability of the patient to pay. Thus, only about one in 200 people with HIV in Uganda are able to obtain antiretroviral medicines . FurtherGiven that prevention isn't taking effect in many places, and curative services are poorly available or inappropriate, we believe that the provision of palliative care in the CThe WHO has defined palliative care as an approach that improves the quality of life for patients and their families facing the problems associated with life-threatening illness, through the prevention and relief of suffering. This is done through early identification, careful assessment, and treatment of pain and other problems—physical, psychological, and spiritual. Dying is regarded as a normal process, and death is neither hastened nor postponed . The phiEffective and relatively cheap methods exist for controlling pain and other symptoms. For example, the World Health Organization (WHO) has outlined a relatively cheap way of relieving cancer pain in about 90% of patients, which could be extended to patients with HIV/AIDS . Sadly, Several studies in East Africa have looked at the experience of dying, the quality of care at the end of life, and patients' unmet needs ,11,12. RIn the Caribbean, patients' needs at the end of life appear to be similar to those of patients in many East African countries. A qualitative study in Grenada, in the Eastern Caribbean, showed that people preferred to die at home rather than in hospital and—in the absence of pain relief and much-needed counseling, information, and financial support—they took solace in spiritual comfort . 
In JamaThis fictional case scenario gives an impression of the sorts of problems that patients face at the Hope Institute, Kingston—Jamaica's first public hospice.A 50-year-old woman is diagnosed with inoperable lung cancer. Because of brachial plexus involvement, she experiences severe pain and weakness of her arm. She is treated at Kingston Public Hospital with palliative radiotherapy, which helps the pain for a few months. But then the pain returns, and she requires a high dose of slow-release morphine for pain control.She lives in the mountains, and her house is a two-and-a-half-hour bus ride from Kingston, the capital city. Unfortunately, the public pharmacy in Kingston is unwilling to dispense more than a week's supply of morphine at any one time, because they have limited supplies (there is a shortage of the drug in Jamaica) and because they think the patient's dose is unacceptably high. So she has to make the exhausting five-hour round trip every week.Her husband's health has also recently declined, and the woman's sister now has to care for the patient and her husband. The family now has the financial means to afford only one small meal a day, and they rely on donations from their church community in order to survive.Because the family's savings dwindle, and the public pharmacy faces further shortages of morphine, the woman with cancer requires multiple admissions to the hospice in Kingston over the last six months of her life in order to get suitable analgesia.Uganda has made palliative care for patients with AIDS and cancer a priority in its National Health Plan . In 1993Hospice Uganda provides community-based care principally to patients suffering from HIV/AIDS and cancer. Almost all patients coming to the hospice have pain, and a great deal of attention is focused on good pain management. Uganda is only the third African country to have made morphine available and affordable to its patient population. Because of the dearth of legal prescribers , in May 2004, Uganda changed the statute. This allowed midwives to prescribe pethidine, and allowed clinical palliative care nurses and clinical officers who are specially trained and registered to prescribe morphine.How was Uganda—an African country with a relatively under-funded health service—able to provide a palliative care service? A national program using a public health approach to reach those in need was established following principles outlined in the WHO's National Cancer Control Guidelines . These gFour other African countries—Botswana, Ethiopia, Tanzania, and Zimbabwe—have made the development of home-based care a priority in dealing with the HIV/AIDS epidemic . BotswanBy using strategies such as providing access to an essential short list of relatively cheap generic medications, and other methods recommended by WHO, it has now been proven that palliative care in the African context is affordable and achievable ,7,14.We believe that, following the Ugandan and Botswanan models, palliative care should be integrated into national government strategies. In order to begin to show governments the importance and economic justification for developing a palliative care health policy, it is clear that needs assessments are an essential first step. It is likely to be much less expensive to provide community-based care with family and community support at the end of life than to burden already overcrowded hospital wards with patients suffering end-stage disease. 
There is a long tradition, both in Africa and in the Caribbean, of caring for the disabled, the mentally ill, and the young and elderly sick at home. Both start-up and sustainable funding are enormous issues that will need to be addressed by local governments, international funding agencies, and charitable bodies. Advocating palliative care to decision makers, providing training programs for health professionals, and making medications available and affordable are important challenges.

Research in individual countries is needed to assess whether the above recommendations are suitable locally. Hospice Africa Uganda is advocating to other African governments and assessing other African countries where local laws and customs may dictate the most suitable way to provide palliative care together with government support. Partnerships and a public health approach to palliative care must be the way forward.

African Palliative Care Association (representing Kenya, South Africa, Tanzania, Uganda, and Zimbabwe). E-mail: apca@hospiceafrica.or.ug

Hospice Africa (Uganda), Resource and Training Centre, PO Box 7757, Kampala, Uganda. Tel: +256 41 266 867 / 510089; Fax: +256 41 510087 (residence). E-mail: info@hospiceafrica.or.ug; anne@hospiceafrica.or.ug

Centre for Palliative Learning, Hospice Association of the Witwatersrand, PO Box 87600, Houghton, Johannesburg 2041, South Africa

Hospice Information, at http://www.hospiceinformation.info. Click on “Training” to search for courses and conferences in palliative care and bereavement. This part of the website requires a member's password, but membership is free to people in developing countries; contact Hospice Information at +44 (0)870 903 3 903 (telephone), +44 (0)20 8776 9345 (fax), or info@hospiceinformation. Information is also circulated quarterly by e-mail to members under the title of e-Choices.

Palliative Care in Resource-Poor Settings, a freely available overview of HIV/AIDS palliative care, written by Kathleen Foley, Felicity Aulino, and Jan Stjernswärd, at http://hab.hrsa.gov/tools/palliative/chap19.html

Living Well with HIV/AIDS, a freely available manual on nutritional care and support for people with HIV/AIDS, by the Food and Agriculture Organization of the United Nations, at http://www.fao.org/DOCREP/005/Y4168E/Y4168E00.HTM

Cancer Pain Relief: A Guide to Opioid Availability, by the WHO; a section of this guide is freely available at http://www.medsch.wisc.edu/painpolicy/publicat/cprguid.htm
SPG4 (spastin) gene, which encodes an AAA ATPase closely related in sequence to the microtubule-severing protein Katanin. Patients with AD-HSP exhibit degeneration of the distal regions of the longest axons in the spinal cord. Loss-of-function mutations in the Drosophila spastin gene produce larval neuromuscular junction (NMJ) phenotypes. NMJ synaptic boutons in spastin mutants are more numerous and more clustered than in wild-type, and transmitter release is impaired. spastin-null adult flies have severe movement defects. They do not fly or jump, they climb poorly, and they have short lifespans. spastin hypomorphs have weaker behavioral phenotypes. Overexpression of Spastin erases the muscle microtubule network. This gain-of-function phenotype is consistent with the hypothesis that Spastin has microtubule-severing activity, and implies that spastin loss-of-function mutants should have an increased number of microtubules. Surprisingly, however, we observed the opposite phenotype: in spastin-null mutants, there are fewer microtubule bundles within the NMJ, especially in its distal boutons. The Drosophila NMJ is a glutamatergic synapse that resembles excitatory synapses in the mammalian spinal cord, so the reduction of organized presynaptic microtubules that we observe in spastin mutants may be relevant to an understanding of human Spastin's role in maintenance of axon terminals in the spinal cord.The most common form of human autosomal dominant hereditary spastic paraplegia (AD-HSP) is caused by mutations in the Kai Zinn and colleagues use loss- and gain-of function mutations to study the Drosophila homologue of a gene mutated in human autosomal dominant hereditary spastic paraplegia Overexpression of Spastin in muscles erases their microtubule networks, consistent with the idea that Spastin is a microtubule-severing protein.In this study, we describe the phenotypes arising from mutation of the spastin mutations, and found that they produce recessive phenotypes affecting the larval neuromuscular system. The Drosophila neuromuscular junction (NMJ) uses glutamate as its neurotransmitter and employs ionotropic glutamate receptors homologous to vertebrate AMPA receptors (We made loss-of-function (LOF) eceptors . It is oeceptors .During the period from larval hatching through the third instar stage, the number of boutons at each NMJ increases by up to 10-fold in order to keep pace with the growth of its muscle target. New boutons are added by a process of budding . As thesspastin mutant larval NMJs. Boutons are more numerous than in wild-type larvae, and synaptic transmission is impaired. These changes could result from alterations in synaptic microtubule dynamics, because we find that microtubule bundles are depleted from the distal boutons of NMJs in spastin-null mutants. This is surprising, because the fact that Spastin overexpression destroys microtubule networks might lead one to expect that its removal would increase the number of microtubules. Morphological and microtubule phenotypes are seen only for a total gene deletion, indicating that complete loss of Spastin function is required to alter synaptic microtubules in the fly system. The phenotypes we see are quite different from those described in a recently published study of perturbation of Drosophilaspastin using RNAi methods and pan-muscle (24B) GAL4 driver lines (EP element and the driver were saved. 
About 2% of lines (131) exhibited lethality or reduced viability with one of the drivers, and 62 of these were lethal or semilethal with both drivers. The T32 insertion on the third chromosome conferred complete lethality when crossed to either driver, and produced a neuronal-driver-dependent axonal phenotype (see below).We identified one end . An EP eer” line . We geneer lines . Those lT32 element, we cloned a genomic DNA fragment adjacent to the insertion site and used it to identify a full-length cDNA encoding a 758-aa protein that is a member of the AAA ATPase family gene that is mutated in the most common form of AD-HSP . The AAA domains of the two proteins are 67% identical. The other region that is conserved between the Spastins (34% identity) corresponds to aa 233–404 of the fly sequence. The same region is also weakly related (26% identity) to human Spartin, the product of the SPG20 gene mutated in Troyer syndrome, a form of “complicated” HSP in wild-type embryos -GAL4, which is expressed in neuronal precursors and neurons to demonstrate that the phenotypes we describe for the null mutant are due to loss of Spastin.To evaluate Spastin's functions during development, we generated several deletion mutations from the utations . Because10-12spastin and 17-7spastin have behavioral phenotypes, but they eclose at normal frequencies and are fertile (see below). In contrast, most homozygous 5.75spastin pupae do not eclose. 5.75spastin adults have very severe behavioral phenotypes, and both sexes are sterile. These results suggest that the 10-12 and 17-7 alleles are hypomorphic, and that the 5.75spastin phenotype represents the null condition. RT-PCR analysis of cDNA from 10-12spastin and 17.7spastin animals indicated that low levels of truncated spastin transcripts are still produced (data not shown). These may direct synthesis of proteins initiated from internal ATGs that could retain partial function, since they include the entire conserved AAA domain.Flies homozygous for spastin mutations. However, we saw striking morphological changes in the NMJs of 5.75spastin third instar larvae. In 5.75spastin larvae than in Canton Sw− (WCS) control larvae. Other NMJs are affected in a similar manner (10-12spastin and 17-7spastin mutants had bouton numbers that did not differ significantly from controls.The number of Ib boutons per muscle 4 NMJ was increased by 1.6-fold relative to y 23 °C) G, and thy 23 °C) E. This my 23 °C) D. The cly 23 °C) . HypomorUAS-spastin cDNA insertion. This was difficult because of the early lethality produced by expression of Spastin from most drivers. UAS-spastin animals bearing pan-neuronal (Elav-GAL4), motoneuronal (OK6-GAL4), or pan-muscle (24B-GAL4 or G14-GAL4) drivers did not survive to larval stages at 23 °C, and few larvae appeared even at 18 °C. However, third instar larvae in which Spastin expression from the cDNA was conferred by spinster(spin)-GAL4, a weak driver that functions in both neurons and muscles (UAS-spastin to Elav-GeneSwitch (GS)-GAL4, a driver line bearing a neuronally expressed GAL4 derivative that is only active in the presence of the progesterone analog RU486 , to depolarization of the innervating nerves. Average EJP amplitudes in the null were reduced to 78% of the levels in control (WCS) larvae (p < 0.003). We also examined the average amplitude and frequency of responses to single vesicles of spontaneously released neurotransmitter (“mini” EJPs [mEJPs]). mEJP amplitude was increased slightly, to 117% of WCS levels (p < 0.03). 
There was no significant change in mEJP frequency (p = 0.3).To evaluate whether 5.75spastin mutants, QC was reduced to 67% of WCS levels in these larvae (p < 3 × 10−6). 17-7/spastin5.75spastin larvae had EJP amplitude, mEJP amplitude, and QC values intermediate between those of spastin-null and control larvae. QC in these transheterozygotes was also decreased significantly, to 78% of WCS levels (p < 0.002). The changes in EJP amplitude and QC observed in 5.75spastin mutants were completely rescued by spin-GAL4-driven Spastin expression. Average QC in rescued larvae was not significantly different from wild-type (p > 0.1), and was 30% greater than in 5.75spastin mutants , a measure of the number of vesicles released per evoked event, was calculated by dividing the EJP amplitude by the average mEJP amplitude. Because the evoked EJP was decreased and the mEJP increased in 17-7/spastin5.75spastin larvae were temperature sensitive. While the average QC in transheterozygotes was reduced to 78% of controls at 18 °C, this effect was exacerbated at higher temperatures. At 29 °C, QC was 54% of wild-type overexpressing Spastin in neurons, and found that synaptic transmission in these larvae was not significantly different from wild-type (data not shown).We also observed that the electrophysiological phenotypes of the hypomorphic ild-type F. Howeve5.75spastin pupae were able to eclose at room temperature compared to 94% of heterozygotes, and the adults that emerged had severe movement defects flies and flies homozygous for T32 , flew well enough to distribute themselves along the sides of the column. Interestingly, for those hypomorphs that did fly out to the sides, their distribution paralleled that of the controls, suggesting that flight responses in the column were relatively normal in this subpopulation of the mutants .In summary, d longer , indicatDrosophila Spastin affects microtubule networks, we overexpressed it in embryonic muscles using the G14-GAL4 or 24B-GAL4 drivers, and then visualized muscle microtubules in late stage 16 embryos with an anti-β3-tubulin antibody that preferentially stains polymerized tubulin .We quantified Futsch distribution by dividing the patterns of Futsch staining in boutons into three classes: continuous (bundles or splayed bundles), looped, and diffuse or undetectable. In 5 larvae B, there staining D. These spin-GAL4 or the RU486-induced, neural-specific Elav-GS-GAL4 drivers. Rescued larvae had Futsch staining patterns very similar to those seen in WCS . Thus, our results indicate that microtubule bundles are selectively depleted from the distal boutons of NMJs in larvae lacking Spastin protein. boutons B. Loopedspastin gene, which encodes an AAA ATPase, are the most common cause of pure AD-HSP. We identified the Drosophila Spastin ortholog toward diffuse patterns or the absence of detectable staining. This effect is most pronounced at terminal boutons, and is rescued by neuronal expression of Spastin are then moved distally into the boutons of the NMJ as it grows.Based on these findings, we suggest that the depletion of microtubules in the distal boutons of Drosophila does not strongly affect outgrowth, since the embryonic CNS axon ladder develops in a normal manner and motor axons reach their appropriate targets in spastin mutants. Furthermore, axonal and muscle microtubules are not detectably altered in spastin-null embryos. Severing of microtubules in vivo, however, may usually involve the actions of multiple severing proteins. 
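As a concrete illustration of the quantal content calculation described above, the following minimal sketch divides the mean evoked EJP amplitude by the mean mEJP amplitude. The amplitude values are invented, and the correction for nonlinear summation applied in the actual analysis is omitted here.

```python
import numpy as np

# Invented amplitude values (mV) for illustration only.
ejps  = np.array([32.1, 30.4, 33.8, 31.2, 29.9])   # evoked EJPs
mejps = np.array([0.95, 1.10, 0.88, 1.02, 0.97])   # spontaneous minis

# Quantal content: vesicles released per evoked event (no nonlinear
# summation correction applied in this sketch).
qc = ejps.mean() / mejps.mean()
print(f"quantal content ~ {qc:.1f}")
```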
In addition to Spastin, the Drosophila genome encodes three AAA ATPases whose AAA domains are closely related to that of vertebrate Katanin-60. These are Katanin-60, CG1193, and an ortholog of mammalian Fidgetins, CG3326 , (2) spastin RNAi larvae have reduced NMJs and an increase in synaptic transmission, and (3) loss of Spastin from neurons produces an increase rather than a decrease in stable microtubules in the NMJ.After this manuscript was submitted for initial review, a paper appeared on perturbation of chniques . In direspastin mutations that delete part or all of the coding region and on rescue of null mutant phenotypes by neuronal expression from a transgene. Our results show that spastin is not an essential gene: even spastin-null flies can eclose and live for several days, and spastin hypomorphs, which would be expected to more closely resemble most RNAi-perturbed flies, eclose at normal rates and have lifespans and behavior that do not greatly differ from wild-type from the Phylip package, based on alignment to the PFAM AAA consensus.For the overexpression screen, approximately 6,000 new spastin excision lines, alleles 10-12, 17-7, and 5.75 were generated via imprecise excision of EP T32 using SbΔ2-3. All alleles were homozygous viable, and their deletions were mapped by PCR and sequencing of larval or adult genomic DNA. Allele 5.75 causes sterility in both sexes.For the spastin rescue construct, the UAS-spastin cDNA construct was made by subcloning a 2.9-kb BglII fragment from GH11184 into the BglII site of pUAST (spastin cDNA up to 350 bp after the stop codon (excluding 681 bp of the 3′ UTR) and including 28 bp of polylinker sequence from the pOT2 plasmid at the 5′ end. The construct was injected at approximately 300 ng/μl into KiΔ2-3 embryos and several transgenic lines recovered; experiments described here used the Chromosome II insertion line 8-3-5.For the of pUAST . This frspastin-null phenotypes by spin-GAL4-driven expression was assayed by crossing UAS-spastin/CyOKr-GFP;5.75/TM3SerAct-GFPspastin to 5.75/TM3SerAct-GFPspin-GAL4/CyOKr-GFP; spastin. The numbers of Ib boutons in rescued larvae 5.75/spastin5.75). For larval overexpression experiments, the 213-3MHC-GS-GAL4 driver was used to induce muscle-specific expression of UAS-spastin at 23 °C in the absence of RU486. Expression induced by this driver was confirmed in a separate cross using UAS-EGFP.Rescue of scue see G. This wescribed . The numspastin cDNA encoding aa 136–416 (pGEX-T32PvuII), 1–167 (pAcG2T-T32BamRIa), and 380–758 (pAcG2T-T32BBA) were subcloned from GH11184 into bacterial (pGEX) or baculovirus (pAcG2T) expression vectors. Expressed protein (in the form of inclusion bodies for T32PvuII) was injected into guinea pigs and the antiserum tested on en-GAL4/+2/+ embryos. Antisera against T32PvuII (86EX) and T32BBA (1239) both showed strong staining in the Engrailed pattern; thus, both recognized overexpressed Spastin. 86EX was purified by incubation with membrane-bound pGEX-T32PvuII protein and subsequent elution with 100 mM glycine (pH 2.5) followed by neutralization with 3M Tris (pH 8.8). pAb1239 was affinity purified using the immunogen bound to Affi-gel10 beads , followed by preabsorption with 5.75spastin larval fillets.Regions of the z-series projections with a Zeiss 510 inverted confocal microscope and 63×/1.4 n.a. or 100×/1.2 n.a. PlanApo objectives. Only larval segments A2 and A3 were analyzed. 
Individual boutons were defined as a Syt-positive area encircled by Dlg-positive staining ; 5.75spastin mutant, 68 ± 2.4 (38); spin rescue, 50 ± 4.9 (26); mutant sibling with spin driver chromosome, 86 ± 4.6 (8); Elav-GS rescue, 70 ± 3.0 (19); and mutant sibling with Elav-GS driver chromosome, 110 ± 9.3 (21). p < 0.02 by one-way ANOVA for all paired comparisons.Stage 16 embryos were fixed and stained using standard methods with mAbning see , or a Syning see . All typning see were as 2, 20; NaHCO3, 10; HEPES, 5; Sucrose, 115; Trehalose, 5; and CaCl2, 1 (Sigma). Larvae were visualized with a 5×/0.10 n.a. Olympus objective on an Olympus BX50WI microscope. EJPs were evoked by pulling the cut end of the innervating segmental nerve into a heat-polished suction electrode and passing a depolarizing pulse sufficient to depolarize both motoneurons (Grass SD9 stimulator). For each experiment, 10–15 single EJPs evoked at 0.2 Hz were recorded, and then spontaneous mEJPs recorded for 1 min afterwards. Only recordings with resting membrane potential below −60 mV were acquired. The average resting membrane potential for control (WCS) larvae was −72.2 mV, and did not differ significantly from any of the experimental groups. Average muscle input resistance in control larvae was 8.9 MΩ, and differed significantly only from the input resistance determined for 5.75spastin /17-7spastin transheterozygotes (7.5 MΩ p < 0.04). Recordings were performed using an Axon Instruments Axopatch 200B amplifier with CV203BU headstage operating in current clamp mode. The signal was low-pass filtered at 5 kHz, digitized through an Axon Instruments Digidata 1322A 16-bit acquisition system, and recorded using Axon Instruments Clampex 8.2 software. Mean EJP amplitude was determined by averaging all single EJPs with Axon Instruments Clampfit 8.2, and corrected for nonlinear summation according to Intracellular recordings were obtained at 18 °C, using sharp microelectrodes filled with 3M KCl, from body wall muscle 6 (segments A3 or A4) of filleted third instar larvae, following standard methods . Larvae Eclosion rates were determined by counting numbers of empty versus full (dead) pupae on the sides of bottles in which flies had been allowed to lay for comparable time periods. The flight test assay was performed at room temperature using an opaque cylinder coated on the inside with fresh mineral oil. Flies of a given genotype were dumped through a hole in the center of a lid at the top. The cylinder was divided into bins along its height, and the number of flies per bin counted. Flies of different genotypes were age-matched; more than 200 flies were counted for each. The climbing assay was performed on 4–5 d old flies maintained individually in vials. Climbing velocity for each fly was measured by transferring it to an empty vial, banging it to the bottom, and then measuring either the time required to reach the top of the vial or the maximum distance it climbed in 30 s, whichever came first. Three trials were performed, and the best speed was used. For lifespan tests, flies were maintained at 25 °C, transferred every 3 d to fresh food vials, and their lifespan noted.Figure S15.75spastin mutant , (B) neuronally rescued ,spastin and (C) neuronally overexpressing larvae. The clustered, smaller, and more numerous boutons observed in mutant NMJs are absent in neuronally rescued larvae, which resemble controls and Syt (magenta) are shown for (A) WCS; see D. 
(1.4 MB TIF)

Figure S2. Behavioral tests were performed on flies from the four genotypes arising from the spin-GAL4 rescue crosses, raised at 18 °C. These genotypes were (1) spin-GAL4/UAS-spastin;5.75spastin (Rescue), (2) spin-GAL4/CyOKr-GFP;5.75spastin (mutant), (3) spin-GAL4/UAS-spastin;5.75spastin/TM3SerAct-GFP, and (4) spin-GAL4/CyOKr-GFP;5.75spastin/TM3SerAct-GFP.

(A) Climbing behavior. None of the spastin mutants (0%) from these crosses reached the top of the vial in the prescribed 30 s time limit, compared to 8% for Rescue flies (n = 75), and 100% for both spastin/+ controls (n = 39 and 21). Twenty-seven percent of mutants did not climb at all, compared to only 4% of the Rescue flies and 0% of the spastin/+ controls. Thus, although both genotypes in the mutant background (homozygous for 5.75spastin) were much weaker than either 5.75spastin heterozygous control, Rescue flies showed improved climbing ability compared to the mutants.

(B) Similar to the results in (A), mean lifespan in spastin mutants was significantly rescued by spin-driven expression of spastin, although lifespans were much shorter in 5.75spastin homozygotes than in heterozygous spastin/+ controls. (218 KB PDF)

Video S1. Flies are shown moving in a vial.

Segment 1: Wild-type. One female and one male WCS fly are shown. Note the rapid rate at which they walk, as well as their climbing, jumping, and flying behaviors. When still, their legs are controlled, and they are able to walk upside-down (out-of-focus fly near the end of the segment) for prolonged periods without falling.

Segment 2: Mutant. One 5.75spastin female is shown. Leg weakness is obvious, particularly for the mesothoracic and metathoracic legs, both when walking and standing still. She climbs poorly, and when rotated so that she is upside-down, is unable to maintain a hanging position. No wing movement or jumping is observed.

Segment 3: Rescue. In spin-GAL4/UAS-spastin;5.75spastin flies, Spastin expression via the spin-GAL4 driver partially rescues the movement defects seen in 5.75spastin mutants. Two males are shown, followed by one female. Note their improved leg steadiness, velocity, and hanging ability. These flies can also jump spontaneously. The female appears to be less fully rescued; however, she is able to walk upside-down for prolonged periods, and exhibits wing movement. (7.4 MB MOV)
Algorithms and information, fundamental to technological and biological organization, are also an essential aspect of many elementary physical phenomena, such as molecular self-assembly. Here we report the molecular realization, using two-dimensional self-assembly of DNA tiles, of a cellular automaton whose update rule computes the binary function XOR and thus fabricates a fractal pattern—a Sierpinski triangle—as it grows. To achieve this, abstract tiles were translated into DNA tiles based on double-crossover motifs. Serving as input for the computation, long single-stranded DNA molecules were used to nucleate growth of tiles into algorithmic crystals. For both of two independent molecular realizations, atomic force microscopy revealed recognizable Sierpinski triangles containing 100–200 correct tiles. Error rates during assembly appear to range from 1% to 10%. Although imperfect, the growth of Sierpinski triangles demonstrates all the necessary mechanisms for the molecular implementation of arbitrary cellular automata. This shows that engineered DNA self-assembly can be treated as a Turing-universal biomolecular system, capable of implementing any desired algorithm for computation or construction tasks. Engineered DNA self-assembly to produce a fractal pattern demonstrates all the necessary mechanisms for the molecular implementation of arbitrary cellular automata This organization is information-based: DNA sequences refined by evolution encode both the components and the processes that guide their development into an organism—the developmental program. For a language to describe this carefully orchestrated organization, it is tempting to turn to computer science, where the concepts of programming languages, data structures, and algorithms are used to specify complex organization of information and behavior. Indeed, the importance of universal computation for autonomous fabrication tasks was recognized in von Neumann's seminal work on self-reproducing automata, where he postulated a y object . If algoany algorithm can in principle be embedded in, and guide, a potentially aperiodic crystallization process. In this “algorithmic self-assembly” paradigm, a set of molecular Wang tiles is viewed as the program for a particular computation or molecular fabrication task function: at each time step, each cell is computed as the XOR of its two neighbors. Beginning with a row of all ‘0's punctuated by a single central ‘1,' snapshots of the cellular automaton's state at successive time steps may be stacked one on top of the other to produce a space–time history identical to Pascal's triangle modulo 2Whereas execution of a cellular automaton occurs perfectly and synchronously, molecular self-assembly is asynchronous and may have many types of errors. To be successful, an implementation of cellular automata by molecular tiling must address four challenges: (1) The abstract tiles must be translated into molecules (molecular tiles) that readily form 2D crystals. (2) Molecular tiles must be programmed with specific binding domains that match the logic of the chosen abstract tiles. (3) The binding of molecular tiles must be sufficiently cooperative to enforce the correct order of assembly and prevent errors. (4) Assembly of molecular tiles must occur on a specified nucleating structure, and spurious nucleation must be suppressed. These properties are necessary and sufficient for implementing not only the XOR cellular automaton, but also any other 1D cellular automaton. 
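Before considering how each of these requirements is met molecularly, the abstract computation itself is simple to state in code. The sketch below runs the XOR update rule from a single central '1' and prints a Sierpinski-patterned space-time history; the grid dimensions are arbitrary display choices.

```python
# The XOR update rule applied to a row of '0's with a single central '1';
# stacking successive states yields Pascal's triangle mod 2, a Sierpinski
# triangle. WIDTH and STEPS are arbitrary display choices.
WIDTH, STEPS = 31, 16

row = [0] * WIDTH
row[WIDTH // 2] = 1

for _ in range(STEPS):
    print(''.join('#' if c else '.' for c in row))
    # each cell becomes the XOR of its two neighbours (zero boundaries)
    row = [(row[i - 1] if i > 0 else 0) ^ (row[i + 1] if i + 1 < WIDTH else 0)
           for i in range(WIDTH)]
```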
All four have been shown individually: several types of DNA Wang tiles have been designed and shown to grow into micron-scale 2D periodic crystals ; the intPreventing the types of errors mentioned above may seem impossible. For example, if a single binding domain is strong enough to hold a tile in place may associate at a given site at a rate, to zero . For randization , this modization predictsdization B.The effects of non-idealities can also be explored in this model. For example, mcG ≈ 2seG, which corresponds to the melting temperature of the crystals, such untemplated nucleation is inhibited by a kinetic barrier—the existence of a critical nucleus size in principle can be controlled by slowing down the growth processes, making experimental investigations the appropriate next step.Abstract Wang tiles are implemented as DNA tiles according to the scheme described earlier : each moSince untemplated crystals were not expected to produce recognizable Sierpinski triangles, it was necessary to create a proper nucleating structure to provide the initial input for the algorithmic self-assembly. Previous work using DNA tiles to self-assemble an initial boundary had proven to be difficult , so in tThe DAO-E Sierpinski tile set B consistIn order for the nucleating structure for the DAO-E lattice to assemble onto a long PCR-generated nucleating strand, the tiles on the input row must be of the DAE-O variant. Further, we simplified the construction so that all nucleating strands contain the same repetitive sequence, but the input tile strands are doped with a fraction of strands containing a ‘1' sticky-end, and again the nucleating structure contains a few randomly located sites from which a Sierpinski triangle should grow.In principle, two approaches can be taken for initiating algorithmic self-assembly of DNA tiles. In the preformed tile approach, each tile is prepared separately by mixing a stoichiometric amount of each component strand in the hybridization buffer and then annealing from 90 °C to room temperature over the course of several hours. The nucleating structure is similarly prepared by annealing the nucleating strand with input tile and capping strands. Then the rule tiles and nucleating structure are mixed together at a temperature appropriate for crystal growth. In the bulk annealing approach, the nucleating strand, the capping and input tile strands, and the strands for all rule tiles are initially mixed together and then annealed. Since, at the concentrations we use, the tiles themselves have melting temperatures between roughly 60 °C and 70 °C while the crystals have a melting temperature within a few degrees of 40 °C , during Results for the DAE-E tile set are shown in Shown in S of possible states for the memory cells and an update function f : S × S → S, one can create a set of |S|2tiles according to the scheme of S|) to about 20 for the DAO-E and DAE-E tile designs used here, but this is already sufficient to implement several known universal Turing machines and cellular automata A rigid or quickly straightening nucleating structure was simulated by setting iS = 4, so that near the crystal melting temperature where mcG ≈ 2seG, the border growth is strongly favorable. This was used for iS = 0.25 for the border tiles; in this case, near the melting temperature border growth requires stabilization by growth of rule tiles, resulting in faceted crystals. 
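Stepping back to the kinetics discussed above, the reasoning can be sketched numerically. The following uses the standard kTAM parameterization, in which tiles attach at a rate proportional to e^(-Gmc) and a tile held by b correct sticky-end bonds detaches at a rate proportional to e^(-b*Gse). The numerical values of kf, Gmc, and Gse are illustrative assumptions, not measured constants.

```python
import math

# Standard kTAM notation: [tile] = e^(-Gmc); a tile held by b correct
# sticky ends detaches at rate kf * e^(-b*Gse). Values are illustrative.
kf  = 6e5     # forward rate constant, /M/s (assumed)
Gmc = 17.0    # monomer concentration free energy, units of kT
Gse = 8.6     # free energy of one sticky-end bond, units of kT

r_on = kf * math.exp(-Gmc)           # attachment rate, any tile, any site

def r_off(bonds):
    """Detachment rate of a tile held by `bonds` matching sticky ends."""
    return kf * math.exp(-bonds * Gse)

# Near Gmc ~ 2*Gse, correctly placed tiles (2 bonds) stick, while tiles
# held by a single bond -- the potential errors -- usually fall off:
print(f"on rate          : {r_on:.3g} /s")
print(f"off rate, 1 bond : {r_off(1):.3g} /s")
print(f"off rate, 2 bonds: {r_off(2):.3g} /s")
```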
In combination with doubled concentrations of T-00 and T-11 (iS = 2), this case was used for The xgrow program simulates the kTAM for a set of square Wang tiles see , beginniThe strong effect of these variations may be seen in not seen for the T-11 tile, despite its increased concentration. This is because, regardless of what information is presented on the facet, there is no way to create a layer containing more than 50% T-11 tiles and no mismatches; T-01 or T-10 tiles must intervene. Thus the nucleation rate is substantially reduced, relative to T-00 nucleation on all-zero facets. This can be assessed in simulations by measuring the probability, p(L), that a T-00 tile will be found after L layers of growth from a facet. Simulations with parameters similar to p(L) ≈ 0.66−L/27e + 0.34 for all-zero facets, indicating strongly preferential nucleation, but for all other facets p(L) relaxes quickly to the asymptotic distribution. Simulations with parameters similar to p(L) relaxes to the asymptotic distribution immediately for every facet type investigated.Simulations confirm the preferential nucleation of T-00 tiles on all-zero facets when T-00 and T-11 concentrations are doubled . In contDesign of DNA Wang tiles occurs in three steps. First, the tile and lattice geometry must be determined. Here, the sizes (number of base-pairs) of each double-helical domain and sticky end, and other structural adornments such as contrast hairpins, are decided. These decisions impact the stability of each tile molecule, as the natural geometry of the DNA double-helix (10.5 bp for a full turn of B-form DNA) constraiAt the second level, specific sequences must be chosen. The issue here is that we wish to prevent undesired associations between strands that might inhibit formation of the correct molecular structure. We used the heuristic principle of sequence symmetry minimization , 1990 toThe third level of design concerns variations. We conceptualize DNA Wang tiles as consisting of three modules: the sticky ends, the core helical regions, and adornments such as the hairpin structures that provide contrast for AFM imaging. A given double-crossover core can be given different sticky ends (reprogrammed) by replacing just one or two strands, thus allowing reuse of core designs to implement different tile sets. In our experience, the structural and thermodynamic stability of a given core is not significantly affected by changes in the sticky-end sequences. Similarly, using additional strands, a given core can be used with or without the hairpin adornments, which can be inserted at various locations. Although the hairpin adornments can affect the integrity of a DNA tile , we have seldom found the undesired products to exceed 20% of the material.The core sequences for R-00 and S-00 are identical to the A and B tiles from a previous study . We usua2O based on extinction coefficients estimated using a nearest-neighbor model , PAGE purified, and quantitated by UV absorption at 260 nm in Hor model . DNA tilThe single-stranded nucleating strands were synthesized using a procedure based on Stemmer's assembly PCR . In asse+)*. The fraction of NRE subsequences is controlled by the amount of SplintNREUE2 and SplintNUERE2, which mediate the transitions into and out of the NRE sequence. The NRE input tile outputs ‘0' and ‘1,' while the NUE input tile outputs ‘0' and ‘0'. To generate a different language, or a different distribution of sequences in the same language, a new assembly PCR must be run. 
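The claim that assembly PCR can generate any regular language has a direct computational analogue: the splints define the allowed transitions between segment types, and splint stoichiometry sets the transition probabilities. The sketch below is a loose illustration of that idea only; the segment names and probabilities are made up and do not correspond to the actual splint mixtures used.

```python
import random

# States are segment types of the growing strand; each transition stands
# for a splint, with probabilities standing in for splint stoichiometry.
# Segment names and numbers are illustrative, not the actual mixtures.
transitions = {
    'UE':  [('RE', 0.95), ('NRE', 0.05)],   # doped transition into NRE
    'RE':  [('UE', 0.95), ('NUE', 0.05)],   # doped transition into NUE
    'NRE': [('UE', 1.0)],                   # transition back out
    'NUE': [('RE', 1.0)],
}

def sample_strand(n_segments, state='UE'):
    """Sample one nucleating-strand segment sequence from the automaton."""
    out = [state]
    for _ in range(n_segments - 1):
        choices, weights = zip(*transitions[state])
        state = random.choices(choices, weights=weights)[0]
        out.append(state)
    return '-'.join(out)

print(sample_strand(12))
```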
Assembly reactions for DAO-E and DAE-E nucleating strands were designed using slightly different principles. The improved design used for DAO-E nucleating strands is simpler: A single periodic sequence is generated. The fraction of ‘1' sites is determined by the stoichiometry of input tile strands used in subsequent self-assembly reactions—strands A4SV and A4-S00 both assemble in the input tile in the same place, but one carries a ‘1' sticky end while the other carries a ‘0' sticky end. The approach used for DAE-E nucleating strands is more complex but more powerful for generating non-trivial input patterns. By having multiple splints that can overlap a given sequence, the assembly can be directed to non-deterministically choose one of several ways to extend a sequence. Thus, assembly PCR can be used to generate any regular language . In thisN types) is prepared without polymerase . To avoid mispriming events, the splints are annealed in the reaction mixture at 37 °C for 5 min. The polymerase (0.4 μl) is added and the reaction is subjected to an initial 72 °C extension step, followed by 40 cycles . In stage 2, 40 μl of new PCR Mix B , is added to the first reaction volume and the reaction cycled for an additional 25 cycles . In stage 3, the 60 μl reaction volume is split into three 20 μl volumes, an additional 40 μl of Mix B is added to each and an additional 20 cycles are performed. At this point, long double-stranded product should be formed. (We have observed that such products remain in the well of an agarose gel long after a 20 kb marker has entered the gel.) Also the dNTPs in the mixture are presumably nearly exhausted—specifically there is little dGTP left. (Any remaining dGTP will be used up early in stage 4.) In stage 4, to create single-stranded nucleating strands, 5 μl of the stage 3 product are mixed with 55 μl of fresh PCR mixture for additional 60 cycles of the stage 3 program . While addition of asymmetric primers at this stage might yield more single-stranded product, a satisfactory yield of single-stranded product results without doing so. After the final PCR, the reaction mixture is extracted with phenol:chloroform:isoamyl alcohol , ethanol precipitated, and resuspended in purified water; the yield was estimated by UV absorbance. Typically, three 60 μl tubes of stage 4 product were pooled in a single recovery step and DNA was resuspended in 200 μl of water. Absorbance measurements of freshly resuspended material appeared unstable, perhaps because clumps of nucleating material scatter light. Long single-stranded DNAs may be prone to hydrolysis in water or strand-breakage upon freeze–thaw. However, after storage in water at 4 °C for a year, the nucleating structure still works well MX 4000 real-time PCR instrument using a Perkin-Elmer GeneAMP XL kit that uses rTth polymerase. In stage 1, a 20 μl reaction mixture containing 1 pmol total of splints , and began with preannealed samples at 15 °C, increasing to 80 °C over the course of several hours. Single-tile melts were superimposable with the reanneal from 80 °C back to 15 °C, indicating that equilibrium values were measured. Raw absorbance values were normalized. Whereas S-00 has a sharp melting transition near 65 °C, the R-00-23J tile has a somewhat more gradual transition, which we attribute to the presence of the hairpin. Above 40 °C, the absorbance of the mixture equals the average absorbance of the individual tiles, indicating that crystals have completely melted by that point. 
Prior to the crystal melting transition between 36 °C and 40 °C, there is significant noise in the measurement, presumably due to light scattering.Melting temperatures for tiles and crystals were estimated based on UVth tiles . These t260 melts of all tiles; however, several other DAO-E and DAE-E tiles have similar transitions between 50 °C and 70 °C. Therefore we assume that the templated and untemplated Sierpinski crystals also melt at approximately 40 °C and that at that temperature, the DNA tiles are reasonably well formed.We have not performed UV2+ buffer , annealing from 90 °C to 20 °C at a rate of 1 °C/min (taking about 1 h). Longer annealing schedules did not seem to decrease the error rate or the number of untemplated tubes or crystals.Self-assembly was performed by bulk annealing of all relevant rule tile, input tile, capping, and nucleating strands in a 50 μl volume of 1× TAE/MgDAO-E reactions contained nucleating strands sufficient to bind 0.004 μM of input tile (as estimated from binding capacity gels), 0.2 μM of each capping and input tile strand , and 0.2 μM of each rule tile strand (for each of the five or six tiles used). An excess of input tile strands was used to ensure complete coverage of the nucleating strand. The excess partial input tiles appeared not to significantly interfere with the self-assembly of algorithmic crystals.DAE-E reactions contained nucleating strands sufficient to bind 0.002–0.008 μM of input tile (as inferred from the estimated yield of the PCR), 0.2 μM of each capping and input tile strand , and 0.2 μM of each tile strand (for each of the four tiles used). Again, an excess of input tile strands was used to ensure complete coverage of the nucleating strand.2+ buffer on a Digital Instruments Nanoscope III equipped with a nano-Analytics Q-control III and a vertical engage J-scanner, using the roughly 9.4 kHz resonance of the narrow 100 μM, 0.38 N/m force constant cantilever of an NP-S oxide-sharpened silicon nitride tip (Veeco Metrology). After self-assembly is complete, samples were prepared for AFM imaging by deposition of 5 μl onto a freshly cleaved mica surface (Ted Pella) attached by hot melt glue to a 15 mm metal puck; an additional 30 μl of buffer was added to both sample and cantilever (mounted in the standard tapping mode fluid cell) before the sample and fluid cell were positioned in the AFM head. The tapping amplitude setpoint, after engage, was typically 0.2–0.4 V, the drive amplitude was typically 100–150 mV, scan rates ranged from 2 to 5 Hz. Individual tiles are most clearly resolved for low amplitude setpoint and high drive amplitude values. However, under such conditions, the greatest damage is done to the sample and the hairpin labels are less distinct, sometimes disappearing entirely. Thus, to prevent damage to samples, amplitude setpoint was maximized and/or drive amplitude minimized subject to the constraint that tiles and their hairpin labels be visible.AFM imaging was performed in tapping mode under TAE/MgAfter acquisition, most images were flattened by subtracting a low-order polynomial from each scan line, or by adjusting each scan line to match intensity histograms. 
For some images (see figure, bottom), additional processing was applied.

Supporting information is available as additional data files: Figures S1–S18 (PDF), and Figure S19, compiled Figures S1–S18 (1.7 MB PDF).

Video S1 (17.8 MB MPG). Each frame is an average of three raw images. At the center is an amalgamation of many individual algorithmic crystals, each with its own characteristic pattern of tiles. While no large undamaged Sierpinski triangles were seen in this series of images, in some frames it is possible to see both double-helices within the tiles, as well as the major and minor grooves within the helices.
The prediction of ancestral protein sequences from multiple sequence alignments is useful for many bioinformatics analyses. Predicting ancestral sequences is not a simple procedure and relies on accurate alignments and phylogenies. Several algorithms exist based on Maximum Parsimony or Maximum Likelihood methods but many current implementations are unable to process residues with gaps, which may represent insertion/deletion (indel) events or sequence fragments.Here we present a new algorithm, GASP , for predicting ancestral sequences from phylogenetic trees and the corresponding multiple sequence alignments. Alignments may be of any size and contain gaps. GASP first assigns the positions of gaps in the phylogeny before using a likelihood-based approach centred on amino acid substitution matrices to assign ancestral amino acids. Important outgroup information is used by first working down from the tips of the tree to the root, using descendant data only to assign probabilities, and then working back up from the root to the tips using descendant and outgroup data to make predictions. GASP was tested on a number of simulated datasets based on real phylogenies. Prediction accuracy for ungapped data was similar to three alternative algorithms tested, with GASP performing better in some cases and worse in others. Adding simple insertions and deletions to the simulated data did not have a detrimental effect on GASP accuracy.GASP will predict ancestral sequences from multiple protein alignments of any size. Although not as accurate in all cases as some of the more sophisticated maximum likelihood approaches, it can process a wide range of input phylogenies and will predict ancestral sequences for gapped and ungapped residues alike. Many eve.g. ,3). In oe.g. [Nevertheless, predicting ancestral sequences is not a simple procedure. It relies on a quality alignment plus an accurate – and correctly rooted – phylogenetic tree. Strict consensus methods are quick but can suffer from over-representation of larger clades of related sequences, which contribute more sequences to the consensus than more sparsely populated clades. Maximum Parsimony (MP) methods overcomee.g. -8) and ie.g. and FASTe.g. provide e.g. .GASP is an ancestral sequence prediction algorithm that is designed to handle gapped alignments of any size using a combination of MP and likelihood methods. Although probably not as accurate as some of the more sophisticated maximum likelihood approaches, it permits the estimation of ancestral states at residues that are gapped in any sequences of the alignment with comparable accuracy to that of ungapped residues.et al. 1992 [.GASP uses input from three sources: a multiple sequence alignment (MSA); an accompanying phylogenetic tree in Newick format ; and a Pal. 1992 . SequencGASP outputs an alignment in fasta format with both input terminal sequences and predicted ancestral node sequences. Ancestral sequences can either be grouped together at the end of the file or interspersed throughout the terminal sequences to reflect the tree topology Figure . Three tpX is the likelihood for a PAM distance of X , i is the ancestral amino acid at position r,j is the descendant amino acid at position r, pijX is the substitution probability of i to j in a PAMX matrix, and N is the number of residues in the alignment. 
Substitutions involving gaps are ignored in this calculation.where This allows a visual comparison between the branch lengths of the input phylogeny and the predicted branch lengths given the ancestral sequence predictions.r, GASP starts at the tips and works deeper into the tree, assigning a probability of a gap at each node n, which is equal to the mean probability of a gap at the descendant nodes:If the MSA has gaps, GASP will first assign gap status to every residue at every node. Insertions and deletions are assumed to be equally likely, although a gap is assigned in the case of a tied probability (below). For each residue p is the gap probability for residue r at node n. p1 and p2 are the gap probabilities for r at the two descendant nodes.where r is fixed as a gap, otherwise r is fixed as a 'non-gap'. GASP then works back up the tree from the root, this time using the new ancestral gap probability and both descendant gap probabilities to recalculate the gap probability:Terminal branches are given a probability of 1 if a gap is present or 0 if not. Once the root is reached, the gap status is fixed for the root. If the probability of a gap is greater than or equal to 0.5, residue p0 is the gap probability for r at the ancestral node.where r is fixed as a gap if p ≥ 0.5. This continues until all nodes are assigned as 'gap' or 'non-gap'.As with the root, r is assigned a probability for each amino acid at each node n. At the tips, r has a probability of 1 for the amino acid that is present in the MSA. GASP then works down the tree assigning probabilities based on the descendant nodes, branch lengths and a substitution matrix. By default, the PAM matrix of Jones et al. 1992 [1 matrix, which represents the probability that a given amino acid will be substituted by each other amino acid when the mean substitution rate is 1/100 residues. To make a PAMX matrix, which represents a length of evolutionary time where a sequence will have undergone X substitutions per 100 residues, the PAM1 matrix is multiplied by itself X-1 times:Once gaps are assigned, ancestral sequences are predicted in a similar fashion. Each residue al. 1992 is used.i is the ancestral amino acid,j is the descendant amino acid, k is the 20 possible transitory amino acids, pijX is the substitution probability of i to j in a PAMX matrix, pik(X-1) is the substitution probability of i to k in a PAM(X-1) matrix and pkj1 is the substitution probability of j to k in a PAM1 matrix.where r, the ancestral probabilities for each amino acid are calculated for the two descendant branches individually, using a PAMX matrix, where X is 100 times the branch length as substitutions per site, i.e. a branch of 0.1 substitutions per site would use a PAM10 matrix:Unless the ancestral node has a gap at position pi is the probability of amino acid i at residue r of node n, pijX1 and pijX2 are the probabilities of substitution from amino acid i to each amino acid j in the appropriate PAM matrix for the two descendant branches, pdj1 and pdj2 are the probabilities of amino acid j being at position r at the two descendant nodes.where Once the root is reached, the most probable amino acid is fixed as the ancestral sequence. As with gaps, GASP then works back up the tree, using the fixed ancestral node amino acid and the descendant node probabilities to give new probabilities for each amino acid. 
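The equation displays referred to above do not survive in this text, so the sketch below reconstructs them from the stated variable definitions: the mean-of-descendants gap rule, PAM matrix powering, a per-branch log-likelihood with gapped positions ignored, and the down-pass combination of the two descendant distributions. The 20 x 20 'PAM1' used here is a uniform stand-in for the Jones et al. 1992 matrix, and prune() implements the exclusion-threshold filtering (0.05 by default) that GASP applies so that unlikely states do not accumulate towards the root.

```python
import numpy as np

A = 20
PAM1 = np.full((A, A), 0.01 / (A - 1))   # stand-in for Jones et al. 1992
np.fill_diagonal(PAM1, 0.99)             # rows sum to 1: one 'PAM step'

def pam(X):
    """PAM_X: the one-step matrix multiplied by itself X - 1 times."""
    return np.linalg.matrix_power(PAM1, max(1, int(X)))

def branch_log_likelihood(anc, desc, X):
    """log p_X = sum over residues r of log p(i_r -> j_r | PAM_X),
    skipping positions where either sequence has a gap (coded as -1)."""
    P = pam(X)
    return sum(np.log(P[i, j]) for i, j in zip(anc, desc) if i >= 0 and j >= 0)

def gap_probability(p1, p2):
    """Down-pass gap rule: mean of the two descendant gap probabilities."""
    return (p1 + p2) / 2.0

def down_pass(p_desc1, p_desc2, b1, b2):
    """Ancestral amino acid probabilities at a node from its two
    descendants; X is 100 * branch length (substitutions per site)."""
    P1, P2 = pam(round(100 * b1)), pam(round(100 * b2))
    p = (P1 @ p_desc1) * (P2 @ p_desc2)
    return p / p.sum()

def prune(p, threshold=0.05):
    """Zero out amino acids below GASP's exclusion threshold so that very
    unlikely states do not slowly accumulate towards the root."""
    q = np.where(p < threshold, 0.0, p)
    return p if q.sum() == 0 else q / q.sum()

# usage: join two one-hot leaf distributions over branches of 0.05 and 0.12
leaf = np.eye(A)
print(prune(down_pass(leaf[0], leaf[3], 0.05, 0.12)).round(3))
```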
The most probable amino acid is then fixed and the process continues until all residues and all nodes have a fixed sequence.GASP is primarily designed for reasonably small trees (6–30 sequences), although there is no limit on input tree size. For larger trees, probabilities for each amino acid get very small near the root, which can lead to a heavy bias towards the fixed ancestral amino acid when GASP works back up the tree. To counter this GASP arbitrarily reduces any probabilities below a certain exclusion threshold (0.05 by default) to zero, thus reducing the slow accumulation of very unlikely amino acids.To optimise the PAM matrices used for probability calculations, GASP uses the variable branch lengths read from the input phylogeny. There is also an option to fix the PAM distance used for all branches, which would allow the use of trees without branch lengths.n and its direct ancestor n0. This ancestor itself is heavily influenced by the descendants of n but also by the 'outgroup' to n, namely those sequences that are descendant to n0 but not to n. The outgroup information contained by the ancestral node n0 can be vital in determining the correct sequence for n when the descendants of n are variable. For this reason, the GASP algorithm will, by default, fix ancestral sequences as it moves back 'up' the tree from the root, increasing the relative weighting of the outgroup to the two descendants. Because there is a chance of the wrong amino acid sweeping back up the tree , there is an option to use amino acid probabilities from the ancestral node in the last stage of GASP rather than giving the fixed amino acid an ancestral probability of 1. This option should be used with caution.Assignment of ancestral amino acids with the GASP algorithm is achieved by combining data from the descendants of a given node To test the GASP algorithm, a number of artificial phylogenies were simulated. Because there is a practically limitless number of possible tree sizes (in both numbers of sequences and branch lengths) and phylogenies, it was decided to test the algorithm on a set of simulated phylogenies based on real phylogenies that formed a subset of those for which the algorithm was originally written. This set comprised 94 neighbour-joining trees of protein families. Each tree contained at least two subfamilies of at least 3 members each, giving in total between 6 and 127 sequences. Simulations started by creating a random protein sequence 100 amino acids long. Each residue was assigned an amino acid randomly as determined by the amino acid frequencies in all the human sequences of SwissProt-TrEMBL (Release 42) . Sequencet al. 1992 [Three substitution methods were used. In the first 'PAM Equal Rates' model, the PAM1 matrix of Jones al. 1992 was usedBecause one of the main advantages of GASP is its ability to deal with gaps, a second test dataset was generated from the 'PAM Equal Rates' set of trees, this time with gaps added. The addition of gaps was kept simple so that the exact same trees could be used for the gap analysis, allowing direct comparison of the results with gaps and without. To do this, gaps were limited to single insertion/deletion ('indel') events per column of the MSA, allowing them to overlay onto the existing simulated 'PAM Equal Rates' data. 
To make the gaps, each residue r of the simulated sequences was considered in turn and had a probability of 50% of containing an indel. Gaps were all of length 1. Although unrealistic for testing multiple alignment or phylogeny reconstruction programs, such a simplification is not a problem for ancestral sequence prediction, as each residue is treated independently. The short gaps meant that, for the same total number of gapped residues, there is a higher diversity in the phylogenetic positioning of the indels. In addition, no indels occurring next to the root were allowed, as it is impossible to judge without an outgroup whether such an event would be an insertion or a deletion.

Indels were placed randomly with respect to evolutionary time. Each node in the simulated data has an 'age', which is the number of rounds of potential substitution it took to complete the simulation after that node was formed. Each indel occurs at a random age T between the tip (age 0) and the oldest direct descendant node from the root. A random branch (not leading to the root) is then selected for which the ancestral node is older than T and the descendant node is no older. This is the branch on which the indel occurred. The indel is randomly assigned as an insertion or deletion event with equal probability. If it is an insertion, then the ancestral node plus all nodes outside the descendant clade have residue r replaced with a gap. If it is a deletion, then the descendant node and all its descendants have residue r replaced with a gap (a sketch of this placement scheme is given below).
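The following minimal Python sketch illustrates this indel-placement scheme under stated assumptions: each branch is a plain dict carrying the ages of its end nodes plus precomputed node sets, the list of branches is assumed to already exclude branches leading to the root, and all field names are invented for the example rather than taken from the simulation scripts.

```python
import random

# Assumed branch fields (illustrative): 'anc_age' and 'desc_age' are the
# ages of the nodes at either end, 'ancestor' is the ancestral node id,
# 'clade' is the descendant node plus all of its descendants, and
# 'outside' is every node outside that clade. 'alignment' maps node ids
# to mutable per-residue sequence lists.

def place_indel(branches, oldest_age, column, alignment):
    t = random.uniform(0.0, oldest_age)   # random age between tips and root
    # branch on which the indel occurred: ancestor older than T,
    # descendant no older (at least one branch always qualifies, since T
    # is drawn below the oldest direct descendant of the root)
    candidates = [b for b in branches
                  if b['anc_age'] > t >= b['desc_age']]
    branch = random.choice(candidates)
    if random.random() < 0.5:
        # insertion: the ancestor plus everything outside the clade
        targets = set(branch['outside']) | {branch['ancestor']}
    else:
        # deletion: the descendant node and all of its descendants
        targets = set(branch['clade'])
    for node in targets:
        alignment[node][column] = '-'     # replace residue r with a gap
```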
The simulated trees and alignments were run through the GASP algorithm. Because the 'real' sequence of each simulated node was known, it was then possible to determine the accuracy of GASP predictions. To test the different parts of the GASP algorithm, predictions were also made using modified GASP algorithms with parts of the model excluded.

Because prediction for invariant sites is trivial for all methods, the expectation is that accuracy is inversely related to the number of variable sites. Therefore, comparisons of methods are presented as a percentage of the variable sites. In this context, 'variable sites' are defined independently for each node as those sites for which not all descendant nodes (including termini) have the same sequence as the ancestral node.

The simulated phylogenies are of different sizes. Considering all nodes of all trees would bias results towards the larger trees. To avoid this, each tree was arbitrarily reduced to four representative nodes:

1. 'Root' = The root of the tree.
2. 'Near Root' = A direct descendant node of the root.
3. 'Mid Tree' = A random node approx. midway in the tree.
4. 'Near Tip' = A direct ancestral node of a terminal sequence.

To determine whether the GASP algorithm was useful, its performance was compared to a crude consensus sequence at each node. Where two amino acids were present at equal frequencies in a column of the MSA, the most frequent amino acid in the total MSA was selected for the ancestral sequence. GASP may be considered crude compared to some existing Maximum Likelihood approaches, and so its performance was also compared to that of both ML algorithms implemented by the CODEML program from the PAML package, namely those of Yang et al. 1995 and Pupko et al. 2000.

The GASP model marginally out-performs all methods tested for constructing the ancestral sequence at the root of the tree (Figure). Although the ML algorithms of Yang et al. 1995 and Pupko et al. 2000 performed better overall for internal nodes, this difference was not seen for every node of every tree. At each level, GASP is sometimes better and sometimes worse than all three other algorithms.

To test the contribution of individual elements of the GASP model, four modified variants were run:

(a) using fixed PAM matrices rather than matrices derived from observed tree branch lengths.
(b) fixing ancestral sequences on the initial pass towards the root, without a second pass back up the tree.
(c) no filtering of rare amino acid probabilities.
(d) using ancestral probabilities when working back up the tree rather than fixed ancestral amino acids.

Elements (a) and (b) were chosen for testing because the corresponding features of the full model increase computational time, while (c) and (d) may not intuitively give the best results.

For the phylogenies used in these simulations, all four variants performed worse than the standard GASP algorithm (data not shown). Using a fixed PAM distance for all branches rather than approximating the PAM distance using tree branch lengths (a) gives an unfair weighting to long branches and thus increases the probability of substitutions that are, in reality, unlikely. Fixing ancestral sequences on the way 'down' the tree to the root (b) does not use any outgroup information and is therefore significantly worse at distinguishing between two or more amino acids with similar ancestral probabilities. Less intuitive is the effect of reducing low amino acid probabilities to zero (c) and of using fixed ancestral sequences when recalculating amino acid probabilities using all three connected nodes (d). Excluding these two elements has a much smaller effect but still reduces the overall accuracy of the algorithm (data not shown).

Using fixed amino acids when working back up the tree increases the influence of the outgroup sequence. As was seen from the difference in accuracy between predictions at the root and nodes near the root (Figure), outgroup information improves predictions.

A final test was performed to compare the use of 'real' versus 'observed' branch lengths (i.e. not correcting for multiple substitutions). This is not testing the GASP algorithm per se, but does provide information on the importance of using an accurate phylogeny construction algorithm. (The PAML package does not require pre-defined branch lengths and is therefore only affected by errors in supplied topology and not in branch lengths.) In many cases there was no difference. However, nearer to the root, using observed branch lengths rather than the real ones decreased prediction accuracy slightly. This decrease was correlated with total tree age (data not shown). This would imply that branch lengths corrected for multiple substitutions should be used for trees fed into the GASP algorithm, particularly with deep trees containing long branches.

To analyse the effect of gaps on prediction accuracy, pairwise comparisons were made between the gapped datasets and the corresponding ungapped simulations (Figure).

Although explicitly designed for use with protein sequence alignments and trees, it is relatively simple to convert GASP for use with nucleotide datasets. To do this, a new 'PAM matrix' should be created with substitution probabilities for A, C, G and T only. This structure would allow the user to fit fairly complex substitution models, with different substitution probabilities for each pair of nucleotides.
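As a hedged illustration of that conversion, the sketch below builds a 4×4 nucleotide analogue of the PAM1 matrix and reuses the same matrix-power logic as the protein case. The rate values are arbitrary placeholders, not fitted parameters.

```python
import numpy as np

# Hypothetical 4x4 analogue of the PAM1 matrix for nucleotides (A, C, G, T).
# Off-diagonal values are placeholders; a real model would fit separate
# probabilities for each pair (e.g. transitions faster than transversions).
NUC1 = np.array([
    [0.990, 0.002, 0.006, 0.002],   # A -> A, C, G, T
    [0.002, 0.990, 0.002, 0.006],   # C -> ...
    [0.006, 0.002, 0.990, 0.002],   # G -> ...
    [0.002, 0.006, 0.002, 0.990],   # T -> ...
])

def nuc_x(branch_length):
    """Matrix for a branch, by analogy with PAMX = PAM1 raised to X."""
    x = max(1, int(round(100 * branch_length)))
    return np.linalg.matrix_power(NUC1, x)
```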
If the aligned sequence is coding DNA, however, it is highly recommended to use the protein sequences or a different algorithm, such as those in the PAML package.

We have presented an algorithm for predicting ancestral sequences in gapped datasets. At the root of the tree, GASP marginally outperforms three existing algorithms implemented in the PAML package. For other nodes of the tree, however, the ML algorithms of CODEML generally perform better.

For real-life datasets, as for all evolutionary studies, predictions are dependent on the quality of input alignments. Gapped residues are, by their nature, often located in regions of evolutionary instability, and therefore the interpretation of predictions at such sites requires extra care. In many scenarios, however, gaps are introduced into alignments by the missing termini of fragment sequences. In these situations, the complete sequences that form the rest of the alignment may be very well aligned, and so it is highly desirable to have an algorithm that can process the gaps introduced by the truncated sequences.

Project name: GASP
Project home page:
Operating system(s): Platform independent (tested on PC (Windows 98/XP) and UNIX (Red Hat Linux 7.3))
Programming language: Perl.
Other requirements: None.
License: None.
Any restrictions to use by non-academics: Author's permission required.

GASP. Gapped Ancestral Sequence Prediction.
Indel. Insertion or deletion event.
ML. Maximum Likelihood.
MP. Maximum Parsimony.
MSA. Multiple Sequence Alignment.
PAM. Point Accepted Mutation.

RE conceived the algorithm, coded the Perl script, designed and performed the accuracy tests and statistical analyses, designed the phylogeny simulation method, generated the simulated datasets and drafted the manuscript. DS helped in the design of test simulations and in drafting the manuscript.
All organisms have elaborate mechanisms to control rates of protein production. However, protein production is also subject to stochastic fluctuations, or "noise." Several recent studies in Saccharomyces cerevisiae and Escherichia coli have investigated the relationship between transcription and translation rates and stochastic fluctuations in protein levels, or more generally, how such randomness is a function of intrinsic and extrinsic factors. However, the fundamental question of whether stochasticity in protein expression is generally biologically relevant has not been addressed, and it remains unknown whether random noise in the protein production rate of most genes significantly affects the fitness of any organism. We propose that organisms should be particularly sensitive to variation in the protein levels of two classes of genes: genes whose deletion is lethal to the organism and genes that encode subunits of multiprotein complexes. Using an experimentally verified model of stochastic gene expression in S. cerevisiae, we estimate the noise in protein production for nearly every yeast gene, and confirm our prediction that the production of essential and complex-forming proteins involves lower levels of noise than does the production of most other genes. Our results support the hypothesis that noise in gene expression is a biologically important variable, is generally detrimental to organismal fitness, and is subject to natural selection.

Analysis of gene expression data for nearly every gene in yeast provides evidence that random variation in the production rate of proteins could significantly affect the fitness of an organism.

Stochasticity is a ubiquitous characteristic of life. Such apparent randomness, or "noise," can be observed in a wide range of organisms, resulting in phenomena ranging from progressive loss of cell-cycle synchronization in an initially synchronized population of microbes to the pattern of hair coloration in female calico cats. An important source of stochasticity in biological systems is the random noise of transcription and translation, which can result in very different rates of synthesis of a specific protein in genetically identical cells in essentially identical environments.

Understanding how stochasticity contributes to cellular phenotypes is important to developing a more complete picture of how cells work. Accordingly, noise in gene expression and other cellular processes has been a major focus of research for more than a decade. While several cases have been described where stochasticity is advantageous, noise in the expression of most genes is expected to be harmful, because it randomly perturbs protein levels away from their optimum.

To test this prediction, we estimated protein production rates for nearly every yeast gene and analyzed them in two ways: by binning genes by production rate and by partial correlation analysis. In the first of these two methods, we binned yeast genes by their protein production rate, so that all genes in each of 15 bins had approximately equal levels of protein production (see Figure). Similar results were found when using different numbers of bins, when using halves or quartiles instead of thirds, or when separating bins by transcription rate instead of by number of translations per mRNA (data not shown). This result cannot be explained by the overall positive correlation between dispensability and rate of protein synthesis.
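As an illustration of this binning approach, here is a minimal numpy sketch that splits genes into 15 equal-occupancy bins by protein production rate and summarises a per-bin property (here, the fraction of essential genes). The arrays are random placeholders standing in for the study's actual data.

```python
import numpy as np

# Placeholder data: protein production rate per gene (proteins/s) and a
# boolean essentiality flag. Real values would come from the datasets
# described in the Methods.
rng = np.random.default_rng(0)
production = rng.lognormal(mean=0.0, sigma=1.0, size=4746)
essential = rng.random(4746) < 0.18

order = np.argsort(production)
bins = np.array_split(order, 15)     # 15 bins of ~equal occupancy

for i, idx in enumerate(bins):
    # Genes within a bin have approximately equal production levels, so
    # within-bin differences in essentiality cannot be explained by the
    # overall expression level.
    frac = essential[idx].mean()
    print(f"bin {i:2d}: median rate={np.median(production[idx]):8.3f}  "
          f"essential fraction={frac:.3f}")
```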
To quantify gene importance, we used the fitness effect f of deleting each gene, where f = 0 indicates no effect on growth when a gene is deleted, f = 1 indicates that a gene is essential, and 0 < f < 1 indicates a quantitative growth defect. Controlling for protein production rate, fitness effect correlates positively with transcription rate and negatively with the number of translations per mRNA (f versus transcription [txn] rate given protein production rate, Spearman partial r = 0.282, n = 4,746, p = 10^−87; f versus translations [tlns] per mRNA given protein production rate, Spearman partial r = −0.258, n = 4,746, p = 10^−75). We also expected that the relationship between gene importance and implementation of the expression strategy that minimizes noise could additionally be seen by considering transcription rate and translation rate per mRNA together, as a ratio; a large ratio of transcription rate to translations per mRNA would indicate that transcripts are produced quickly but are translated slowly, corresponding to our expression strategy 1. Confirming this, the correlation between fitness effect and the ratio of transcription rate to translations per mRNA (controlling for protein production rate) is highly significant. Partial correlation analysis is thus in accordance with the trend seen in the binned data. Because binning genes still allows for a small amount of variability in protein production within each bin, we also verified that the association holds within bins; the difference was significant (p ≤ 0.02) for all but one bin.

In addition to essential genes, genes whose protein products participate in stable protein complexes ("complex subunits") would also be expected to exhibit sensitivity to randomness in expression: producing too little or too much of a single protein complex subunit can compromise the proper assembly of the entire complex and waste the energy invested in the production of the other complex subunits. In support of this, it has been found that both under- and overexpression of complex subunits is more likely to result in a reduced growth rate or inviability of yeast than is misexpression of other genes, and also that complex subunits tend to be more precisely coexpressed with other genes than noncomplex subunits.

When we repeated the partial correlation analysis for complex subunits, we found similar results. When total protein synthesis was controlled for with the partial correlation, complex subunits were more likely to have a high transcription rate (r = 0.203, n = 4,900, p = 10^−46) and a low number of translations per mRNA. Using the ratio of transcription rate to translations per mRNA also yielded similar results, suggesting that fitness effect and protein complex membership are independently associated with the expression strategy that minimizes stochastic fluctuation. Repeating the partial correlations above with either transcription rate or translations per mRNA in place of their ratio gave significant partial correlations with both fitness effect and protein complex membership as well (data not shown).

Since proteins that participate in many protein–protein interactions are more likely to be essential, it was important to confirm that the complex-subunit result is not merely a restatement of the result for essential genes. A significant negative correlation (r = −0.11, p = 10^−9) has also been reported between an mRNA's rate of decay and the evolutionary rate of the protein it encodes.
This correlation was surprising, as it is precisely the opposite of what one would expect if the relationship between the rates of mRNA decay and protein evolution were mediated by the level of expression: slow decay would result in increased expression, which is known to be associated with slow evolution.

We found that noise in protein production is minimized in genes for which it is likely to be most harmful, specifically essential genes and genes encoding protein complex subunits. This finding supports the hypothesis that noise in gene expression is generally deleterious to yeast.

Yeast appear to control the noise in their gene expression at both transcriptional and translational levels preferentially for some genes; however, this noise minimization is not without a cost, as the high transcription and high mRNA decay rates that are needed to minimize noise are energetically expensive and are thus expected to be advantageous only when the benefit of reducing noise in a particular gene's expression outweighs this cost.

As is the case with many genome-wide studies, it is possible that a hidden variable could bias our results. For example, it is possible that essential genes and genes encoding protein complex subunits tend to have high transcription and low translation for reasons unrelated to noise minimization. However, until such a reason is identified, the most parsimonious interpretation of our results is that yeast adaptively minimize noise in the expression of certain genes.

Transcription rates were calculated from mRNA abundances and decay rates in log-phase yeast growing in rich glucose medium, using the steady-state relationship between production and decay (R ∝ A/T), where R is transcription rate, A is mRNA abundance, and T is mRNA half-life. Translation rates per mRNA in rich glucose medium were calculated from ribosome occupancy data.

Fitness effect ranks were calculated from 12 replicate growth experiments for all viable homozygous yeast deletion strains in rich glucose medium; growth experiments were conducted using the method described previously.

One of the promoters used in the noise measurements (ADH1*) was 10-fold weaker than the other two at full induction, but all three showed very similar relationships between noise strength and percent transcriptional induction. Since we do not have genome-wide data for the percent induction for genes in rich glucose medium (or any other environment), in our analysis we make the assumption that the promoters of more highly transcribed genes tend to be at higher percent induction levels. While this certainly does not hold for all genes, we believe that it is a reasonable approximation for most genes.

Let $r_{XY}$ be the correlation coefficient between variables X and Y. To control for a third variable Z, the partial correlation was calculated as

$$r_{XY \cdot Z} = \frac{r_{XY} - r_{XZ} r_{YZ}}{\sqrt{(1 - r_{XZ}^2)(1 - r_{YZ}^2)}}$$

To assess the significance of the partial correlation, it is transformed to be distributed according to a Student's t distribution by the equation

$$t = r_{XY \cdot Z} \sqrt{\frac{n - 3}{1 - r_{XY \cdot Z}^2}}$$

The two-sided p-value can then be calculated according to where the t-value falls with respect to its expected distribution.

Figure S1. Fitness effect ranks are shown on the y-axis; protein production rate (proteins/s) is shown on the x-axis. The Spearman rank correlation coefficient is r = −0.202 (p = 10^−49). (316 KB PPT)

Table S1. (37 KB DOC)
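To make the Methods formulas above concrete, here is a minimal Python sketch of a Spearman partial correlation with its t-based significance test. scipy is assumed to be available, and the input arrays are random placeholders rather than the study's data.

```python
import numpy as np
from scipy import stats

def spearman_partial(x, y, z):
    """Partial Spearman correlation of x and y controlling for z, with a
    two-sided p-value from the t distribution (df = n - 3)."""
    # Rank-transform so the Pearson formula yields Spearman correlations.
    xr, yr, zr = (stats.rankdata(a) for a in (x, y, z))
    r_xy = np.corrcoef(xr, yr)[0, 1]
    r_xz = np.corrcoef(xr, zr)[0, 1]
    r_yz = np.corrcoef(yr, zr)[0, 1]
    r = (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))
    n = len(x)
    t = r * np.sqrt((n - 3) / (1 - r**2))
    p = 2 * stats.t.sf(abs(t), df=n - 3)
    return r, p

# Placeholder arrays standing in for fitness effect, transcription rate
# and protein production rate:
rng = np.random.default_rng(1)
f, txn, prod = rng.random((3, 4746))
print(spearman_partial(f, txn, prod))
```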
Relatively little is known about interest in pediatric pulmonology among pediatric residents. The purpose of this study, therefore, was to determine at this institution: 1) the level of pediatric resident interest in pursuing a pulmonary fellowship, 2) potential factors involved in development of such interest, and 3) whether the presence of a pulmonary fellowship program affects such interest.

A questionnaire was distributed to all 52 pediatric residents at this institution in 1992 and to all 59 pediatric residents and 14 combined internal medicine/pediatrics residents in 2002, following development of a pulmonary fellowship program.

Response rates were 79% in 1992 and 86% in 2002. Eight of the 43 responders in 1992 (19%) had considered doing a pulmonary fellowship compared to 7 of 63 (11%) in 2002. The highest ranked factors given by the residents who had considered a fellowship included wanting to continue one's education after residency, enjoying caring for pulmonary patients, and liking pulmonary physiology and the pulmonary faculty. Major factors listed by residents who had not considered a pulmonary fellowship included not enjoying the tracheostomy/ventilator population and chronic pulmonary patients in general, and a desire to enter general pediatrics or another fellowship. Most residents during both survey periods believed that they would be in non-academic or academic general pediatrics in 5 years. Only 1 of the 106 responding residents (~1%) anticipated becoming a pediatric pulmonologist.

Although many pediatric residents consider enrolling in a pulmonary fellowship (~10–20% here), few (~1% here) will actually pursue a career in pediatric pulmonology. The presence of a pulmonary fellowship program did not significantly alter resident interest, though other confounding factors may be involved.

The specialty of pediatric pulmonology is relatively new, having been recognized as a pediatric sub-specialty by the American Board of Medical Sub-specialties in 1984. In 1997, there were approximately 500 board certified pediatric pulmonologists in the United States and Canada.

This study involved the distribution of a questionnaire to all pediatric residents. The questionnaire was initially distributed in 1992, prior to institution of a pulmonary fellowship. The questionnaire was placed in the hospital mailbox of each resident. The study was repeated (and the questionnaire redistributed) in 2002, after the fellowship, which began in 1994, had been functioning for several years. To improve the response rate, the questionnaire was distributed twice, one month apart, during each time period. The questionnaire was a three-page, 18-question form that took approximately 15 minutes to complete (see Appendix).

The questionnaire was distributed to 52 pediatric residents (including 2 chief residents) in 1992 and to 59 pediatric residents (including 3 chief residents) and 14 medicine/pediatrics residents (including 1 chief resident) in 2002. A medicine-pediatrics residency program did not exist during the initial distribution period. To avoid compromising confidentiality in the relatively small medicine/pediatrics group, residents were not asked to list their residency program in 2002, and consequently the 2 resident groups during that period were combined.
Forty-three of the 52 residents completed the survey in 1992 (79%) compared to 63 of 73 (86%) during 2002 (p = NS).

Of the 43 respondents in 1992, 30 (70%) had considered doing a fellowship in any pediatric subspecialty and, of those, 15 believed they were "very likely" to do a fellowship. Of the 63 respondents in 2002, 40 (63%) had considered doing a pediatric fellowship and 16 of those were "very likely" to continue with fellowship training. These numbers compare with 9 of the 52 residents in 1992 (17%) who actually completed fellowship training and 16 of the 47 graduating residents from the 2002 survey (34%) who began fellowship training in 2003 or 2004.

Eight residents (19%) in 1992 had considered a pulmonary fellowship compared to 7 residents (11%) in 2002 (p = NS).

The last question in the survey asked residents what they thought they would be doing 5 years in the future. These responses are shown in the accompanying table.

This study found that a significant percentage of pediatric residents considered doing a pulmonary fellowship after their residency training, ranging from 11% in the 2002 group to 19% in the 1992 group. However, these residents also viewed themselves as less likely to enter general pediatrics, perhaps suggesting that they were simply considering several pediatric subspecialties at some time during their residency training. This seems likely, as the highest scored factor by the +PF residents was the desire to continue their education after residency. Despite fairly high percentages of residents considering a pulmonary fellowship, only 1 resident in the entire group (~1%) actually believed that they would be a pediatric pulmonologist 5 years after the survey was completed. This percentage is very similar to that of first-time takers of the 1995 General Pediatrics Certifying Examination who believed they would be a pediatric pulmonologist in the future (1.1%).

The majority of residents who considered doing a pulmonary fellowship (8 of 15) were in their first year of residency. From personal experience, residents often consider various practice options early on in their training and frequently do not tend to narrow their choices until their second or third year of residency. This observation should be kept in mind when trying to recruit residents for pulmonary fellowship positions, by seeking out those residents potentially interested in pulmonology early in residency rather than later.

The ranking of factors that may contribute to an interest in a pulmonary fellowship was remarkably similar during the 2 time periods. The only score that approached a statistically significant difference was for the statement "I enjoy the tracheostomy/ventilator population", and this score tended to increase in 2002. However, a dislike for the tracheostomy/ventilator population also received the highest score among those residents not interested in a pulmonary fellowship, and was even higher than both the desire to enter general pediatrics and another fellowship program. These data may simply be a center phenomenon but might suggest a more global "disinterest" in this patient population that may need to be further studied. Two scores in the -PF resident group decreased from 1992 to 2002: not enough pulmonary patient experience to decide on a pulmonary fellowship, and too few perceived pulmonary job openings.
The first may be a positive reflection on the local resident experience with pulmonary patients in recent years, or on the fellowship program itself, though this is only speculative. The second received very low scores during both periods and may not be very relevant. However, there has been some evidence of a reversal in the prior trend of residents entering general pediatrics in recent years, suggesting greater interest among residents in fellowships.

This study has certain shortcomings. It asked residents to score specific factors that may or may not have been relevant to an individual resident. Although residents were given the opportunity to add personal comments, few did. In addition, certain patient populations, e.g. those with respiratory infections, were not included as options in the survey, and these omissions could cause study bias. Other factors that may contribute to residents' decisions regarding fellowship training were not addressed in this study. These factors include resident teaching by the faculty, resident gender, spouse occupation, mentor encouragement and other personal reasons.

Although many pediatric residents consider enrolling in a PF (~10–20% here), few (~1% here) will actually pursue a career in pediatric pulmonology. The presence of a PF program did not significantly alter resident interest, though other confounding factors may be involved.

BPD: bronchopulmonary dysplasia
PF: pulmonary fellowship
+PF: those residents who considered taking a pulmonary fellowship
-PF: those residents who had not considered taking a pulmonary fellowship

None declared.

The pre-publication history for this paper can be accessed here:

Appendix. Resident questionnaire
The human immunodeficiency virus (HIV) Tat protein is acetylated by the transcriptional coactivator p300, a necessary step in Tat-mediated transactivation. We report here that Tat is deacetylated by human sirtuin 1 (SIRT1), a nicotinamide adenine dinucleotide-dependent class III protein deacetylase, in vitro and in vivo. Tat and SIRT1 coimmunoprecipitate and synergistically activate the HIV promoter. Conversely, knockdown of SIRT1 via small interfering RNAs or treatment with a novel small molecule inhibitor of the SIRT1 deacetylase activity inhibits Tat-mediated transactivation of the HIV long terminal repeat. Tat transactivation is defective in SIRT1-null mouse embryonic fibroblasts and can be rescued by expression of SIRT1. These results support a model in which cycles of Tat acetylation and deacetylation regulate HIV transcription. SIRT1 recycles Tat to its unacetylated form and acts as a transcriptional coactivator during Tat transactivation.

Cycles of Tat acetylation and deacetylation, mediated by human sirtuin 1 (SIRT1), regulate HIV transcription, suggesting that SIRT1 could be a therapeutic target.

The Tat protein of human immunodeficiency virus 1 (HIV-1) is essential for the transcriptional activation of the integrated HIV-1 provirus. Without Tat, HIV transcriptional elongation is inefficient and results in abortive transcripts that cannot support viral replication. Tat acts through the trans-acting responsive element (TAR), an RNA stem-loop structure that forms at the 5′ end of all viral transcripts.

This model was further tested in nuclear microinjection experiments using synthetic full-length Tat and AcTat. Microinjection of increasing amounts of either Tat or AcTat proteins into HeLa cells caused a marked transactivation of the HIV LTR luciferase reporter in a dose-dependent manner.

The histone deacetylation assay with recombinant SIRT1 was performed as described previously for SIRT2, in 100-μl reactions containing an acetylated histone H3 peptide (amino acids 1–24). Reactions were preincubated for 10 min at room temperature prior to initiation by the addition of NAD+ (1 mM).

HEK 293 cells were cotransfected in duplicate with expression vectors for CMV-Tat/FLAG, CMV-Tat/T7 or CMV-TatK50R/FLAG, and the SIRT1/HA or SIRT1-, SIRT2-, and SIRT6-FLAG expression vectors or the respective empty vector controls, using Lipofectamine reagent (Invitrogen).
Cells were lysed after 24 h in 250 mM NaCl, 0.1% NP40, 20 mM NaH2PO4 (pH 7.5), 5 mM EDTA, 30 mM sodium pyrophosphate, 10 mM NaF, and protease inhibitors. Duplicates were pooled, and 1 mg of lysate was immunoprecipitated either with monoclonal α-HA together with protein G-Sepharose, with α-FLAG M2 agarose (Sigma), or with α-T7-agarose (Amersham Biosciences) for 2 h at 4 °C. Beads were washed three times in lysis buffer, boiled in SDS loading buffer, and analyzed by WB with polyclonal α-FLAG (Sigma), monoclonal α-HA (Roche), or monoclonal α-T7 antibodies. For the IP of Tat with endogenous SIRT1, HEK 293 cells were transfected only with CMV-Tat/FLAG or the CMV-empty vector using Lipofectamine reagent. Cell lysates were immunoprecipitated with rabbit α-SIRT1 antibodies (generated against amino acids 506–747) together with protein G-Sepharose (Amersham Biosciences). Immunoprecipitated material was analyzed by WB with the M2 α-FLAG antibody (Sigma) or rabbit α-SIRT1 antibodies.

For in vitro interactions, 10 U of recombinant SIRT1 (Biomol) was incubated with biotinylated synthetic Tat or acetylated Tat proteins together with streptavidin-Sepharose (Amersham Biosciences) in lysis buffer in the presence of 5 mM nicotinamide (Sigma) for 3 h at 4 °C. Pelleted beads were washed three times in lysis buffer, resuspended in SDS loading buffer, and analyzed by WB with polyclonal α-SIRT1 antibodies, rabbit α-AcARM, or SA-HRP (Jackson Immunoresearch Laboratories).

Double-stranded siRNAs directed against nucleotides 408–428 in the SIRT1 mRNA or control GL3 siRNAs were transfected into HeLa cells plated in six-well plates with Oligofectamine reagent according to the manufacturer's guidelines (Invitrogen). The mutant SIRT1 siRNA was identical to the SIRT1 siRNA except for a two-nucleotide mismatch between the target mRNA for SIRT1 and the antisense strand of the siRNA at nucleotides 418 and 419. After 48 h, cells were retransfected with the HIV LTR luciferase construct (200 ng) together with increasing amounts of CMV-Tat expression vectors and corresponding amounts of empty pcDNA3.1 vector (Invitrogen). In the control experiment, CMV-Tat was replaced by the CMV-luciferase construct, and HIV LTR luciferase was replaced by an HIV LTR promoter construct driving the expression of chloramphenicol acetyl transferase (CAT). Cells were harvested and processed for luciferase assays.

In cotransfection experiments, human CMV-SIRT1 or CMV-SIRT1H363Y (600 ng) was cotransfected into HeLa cells plated in six-well plates with the HIV LTR luciferase reporter (200 ng) or the LTRΔNF-κB-luciferase reporter (200 ng) and increasing amounts of RSV-Tat, using the Lipofectamine reagent (Invitrogen). In the control experiment, RSV-Tat was replaced by RSV-luciferase (200 ng), and the HIV LTR luciferase construct was replaced by the HIV LTR CAT reporter. In transfections with HR73, HeLa cells were cotransfected with the HIV LTR luciferase reporter (200 ng) and RSV-Tat expression vectors or the empty vector using Lipofectamine reagent. The RSV-luciferase construct was used as described above. After 4 h of incubation with the DNA/Lipofectamine mix, the culture medium was changed and supplemented with the indicated concentrations of HR73 dissolved in DMSO, or with DMSO alone. Cells were harvested 8 h later and processed for luciferase assays.

Subconfluent MEFs (70%) were grown on Cellocate coverslips, and nuclear microinjections were performed at room temperature with an automated injection system (Eppendorf Micromanipulator 5171 together with Eppendorf Transjector 5246). Samples were prepared as a 20-μl injection mix containing the HIV LTR luciferase reporter or 5xUAS luciferase (each 100 ng/μl), RSV-Tat (10 ng/μl) or Gal4-VP16 (50 ng/μl), CMV-cyclinT1 (100 ng/μl), CMV-SIRT1 (100 or 300 ng/μl), together with CMV-GFP (50 ng/μl) in sterile water. At 6 h after microinjection, cells were examined under a Nikon Eclipse TE300 inverted fluorescent microscope to determine the number of GFP-positive cells, washed in cold phosphate buffer, and stored at −70 °C for luciferase assays (Promega). In HeLa cells, synthetic Tat or AcTat proteins (each 30 or 100 ng/μl) were coinjected with the wild-type or mutant HIV LTR luciferase reporters (each 100 ng/μl) together with CMV-GFP (50 ng/μl), and cells were harvested 4 h after injection. Cells were treated immediately after injection with DRB, TSA (400 nM), or nicotinamide (5 mM).
Microinjections in siRNA-treated cells were performed 48 h after siRNA transfection.

The HIV molecular clone HIV-R7/E−/GFP, containing the GFP open reading frame in place of the nef gene and a frameshift mutation in the env gene, as well as the method used to generate pseudotyped viral particles with VSV-G, were previously described. The titer of the viral stock was measured by flow cytometric analysis of GFP expression 48 h after infection of 10^5 Jurkat cells with different amounts of viral suspension. The pHR′-EF-1α/GFP construct is a minimal nonreplicative HIV-1 genome containing a heterologous promoter, EF-1α, driving GFP expression. Jurkat cells were infected with HIV-R7/E−/GFP or pHR′-EF-1α/GFP viral particles at a theoretical multiplicity of infection of 0.5 in 24-well plates. Cells were repeatedly washed and resuspended in fresh medium containing HR73 (1 μM) or DMSO alone. Viral infection was monitored 36 h later by flow cytometry analysis using a Calibur FACScan.
Iron and copper play an important role in oxidative mechanisms, producing the deleterious hydroxyl radical (•OH) that peroxidizes lipid membranes and damages DNA. Myeloperoxidase (MPO) and nitric oxide (NO) are known sources of free radicals and induce reduction of ferritin-Fe3+ into free Fe2+, contributing to oxidative damage.

The exogenous administration of Insulin-like Growth Factor-I (IGF-I) induces hepatoprotective and antifibrogenic actions in experimental liver cirrhosis. To better understand the possible pathways behind the beneficial effect of IGF-I, the aim of this work was to investigate several parameters involved in oxidative damage in hepatic tissue from cirrhotic animals treated with IGF-I. Liver cirrhosis was induced by CCl4 inhalation in rats. Fe and Cu were assessed by atomic absorption spectrometry, and iron content was also evaluated by Perls' staining. MPO was measured by ELISA, and transferrin and ferritin by immunoturbidimetry. iNOS expression was studied by immunohistochemistry.

Liver cirrhosis was histologically proven and ascites was observed in all cirrhotic rats. Compared to controls, untreated cirrhotic rats showed increased hepatic levels of iron, ferritin, transferrin (p < 0.01), copper, MPO and iNOS expression (p < 0.01). However, IGF-I treatment induced a significant reduction of all these parameters (p < 0.05).

The hepatoprotective and antifibrogenic effects of IGF-I in cirrhosis are thus associated with a diminution of the hepatic contents of several factors, all of them involved in oxidative damage.

Insulin-like growth factor-I (IGF-I) is an anabolic hormone produced in different tissues in response to growth hormone (GH) stimulation; the liver is the major source of circulating IGF-I.

In order to give a better insight into the pathways by which IGF-I seems to exert its hepatoprotective and antifibrogenic actions, this study was aimed at analyzing several parameters involved in oxidative stress or inflammation in the liver, such as metal ions (iron and copper), iron transport and storage proteins (transferrin and ferritin) and enzymes (myeloperoxidase -MPO- and inducible nitric oxide synthase -iNOS-), both in IGF-I treated and untreated cirrhotic rats.

Metal ions, such as iron and copper, exhibit the ability to produce reactive oxygen species, resulting in lipid peroxidation, DNA damage, depletion of sulfhydryls and altered calcium homeostasis. Reduction of ferritin-Fe3+ into free Fe2+ further contributes to oxidative damage.

Cirrhosis was induced as previously described. Briefly, rats were exposed to CCl4 inhalation twice a week for 11 weeks, with a progressively increasing exposure time from 1 to 5 minutes. From that time until the 30th week, rats were exposed to CCl4 once a week for 3 min. During the whole period of cirrhosis induction, animals received Phenobarbital in the drinking water (400 mg/L). Rats were housed in cages placed in a room with a 12-hour light-dark cycle and constant humidity and temperature (20°C). Both food and water were given ad libitum. Healthy, age- and sex-matched control rats were maintained under the same conditions but received neither CCl4 nor Phenobarbital.

All procedures were performed in conformity with The Guiding Principles for Research Involving Animals.

The treatment was administered during the last three weeks (28th–30th) of CCl4 exposure (from day 0 to day 22nd). In the morning of day 0, animals were weighed and blood samples were drawn from the retroocular venous plexus of all rats with capillary tubes and stored at -20°C until used for analytical purposes. Cirrhotic rats were randomly assigned to receive either vehicle or recombinant human IGF-I for three weeks.
Control rats received saline during the same period. The last dose of IGF-I was administered on day 21st at 6 p.m. In the morning of day 22nd, animals were sacrificed and liver samples were collected; the rest of the liver samples were stored at -80°C.

Bouin-fixed tissues were processed and sections (4 μm) were stained with Haematoxylin and Eosin and Masson's trichrome. Liver cirrhosis was diagnosed according to previously described criteria.

Immunohistochemical staining of iNOS in paraffin sections (4 μm) was performed using an avidin-biotin peroxidase technique, as described by Shu et al., with some modifications.

Hepatic samples were homogenized in a Potter homogenizer in 7 volumes of cold buffer containing 5 mM 2-mercaptoethanol, 0.5 μg/mL leupeptin, 0.7 μg/mL pepstatin A and 100 μg/mL PMSF. Fibrous parts and unbroken cell debris were eliminated by centrifugation at 500 g for 5 min. Supernatants were used as the whole homogenate.

MPO was measured by ELISA, using a commercial kit from BIOXYTECH. Transferrin and ferritin were determined by immunoturbidimetry, using a Hitachi 710 autoanalyzer and kits for human clinical use from the same laboratory. MDA was assessed after heating samples at 45°C for 60 minutes in acid medium and was quantitated by a colorimetric assay using LPO-586, which, after reacting with MDA, generates a stable chromophore that can be measured at 586 nm. Total proteins were assessed by Bradford's method.

Representative samples (approximately 1 g of each rat liver) were collected, weighed and later dried in a stove (70°C) to constant weight. Iron and copper concentrations were determined by flame atomic absorption spectrophotometry.

Data were expressed as mean ± SEM. To analyse the homogeneity among groups, the Kruskal-Wallis test was used, followed by multiple post-hoc comparisons using Mann-Whitney U tests with Bonferroni adjustment. A p value < 0.05 was considered statistically significant. Calculations were performed with the SPSS program version 6.0.

Liver cirrhosis was histologically proven and ascites was observed in all rats treated with CCl4.

In order to find a relationship between the studied parameters and oxidative liver damage, MDA levels, an index of lipid peroxidation, were evaluated. Hepatic MDA levels were elevated in CCl4-induced cirrhosis, in association with iron and copper overload and an increase of myeloperoxidase and iNOS expression.

These results show that treatment with low doses of IGF-I induces a reduction of all studied parameters involved in oxidative damage mechanisms in this model of cirrhosis. These findings support the hepatoprotective and antifibrogenic effects previously reported.

It is well known that iron and copper promote oxidant forces.

Free iron (or low-molecular-weight iron, the chelatable iron pool) facilitates the decomposition of lipid hydroperoxides, resulting in lipid peroxidation, induces the generation of •OH radicals, and also accelerates the nonenzymatic oxidation of glutathione to form O2•− radicals.

Most of the body's iron is tightly bound to transferrin, entering cells via receptor-mediated endocytosis. Transferrin avidly binds 2 moles of Fe3+ per mole of protein.
In the present study, we have found that hepatic transferrin and ferritin levels increased in cirrhotic rats, with a parallel rise in iron deposition, whereas in cirrhotic rats treated with IGF-I all the above-mentioned parameters appeared diminished.

After hepatic injury, several kinds of cells are activated in the subsequent inflammatory response and secrete myeloperoxidase (MPO). The O2•− formed during this respiratory burst is converted to the bactericidal oxidant hypochlorous acid (HOCl) via a series of reactions catalyzed by superoxide dismutase and MPO. In our cirrhotic rats, hepatic MPO content was increased and returned toward normal with IGF-I treatment.

Another result which deserves particular mention is that iNOS expression was significantly lower in cirrhotic rats treated with IGF-I compared to untreated cirrhotic animals. This finding is in accordance with those reported by other groups.

In conclusion, these results show that the hepatoprotective and antifibrogenic effect of IGF-I in rats with liver cirrhosis is associated with a significant reduction of the hepatic levels of several parameters, such as Fe, Cu, MPO, iNOS, ferritin and transferrin, all of them involved in oxidative damage. In this work, iron and copper overload have been demonstrated in the liver from rats with CCl4-induced cirrhosis. The hepatic levels of both metals diminished in cirrhotic animals treated with IGF-I. MPO content, iNOS immunohistological expression and hepatic ferritin and transferrin levels were increased in untreated animals and returned to normal in cirrhotic animals treated with IGF-I.

The IGF-I effects described in the present study suggest that a therapeutic approach targeted at lowering oxidative stress marker levels could be effective in chronic liver disease.

IGF-I, insulin-like growth factor-I; Fe, iron; Cu, copper; MDA, malondialdehyde; CO, control healthy group; CI, untreated cirrhotic rats; CI + IGF, IGF-treated cirrhotic rats; O2•−, superoxide radicals; MPO, myeloperoxidase; iNOS, inducible nitric oxide synthase; AU, arbitrary units.

The author(s) declare that they have no competing interests.

MG: Analytical studies, hypothesis and paper elaboration.
ICC: Experimental design and treatment (induction of liver cirrhosis and IGF-I administration), hypothesis, histopathological study and scores.
MDS: Analytical studies and in vivo assay.
IN: Atomic absorption spectrometry assay.
JEP: In vivo assay.
AC: Hypothesis, experimental design and revision.
ADC: Experimental treatment and documentation.
EC: Histopathological study and measurements.
SGB: Revision.

The pre-publication history for this paper can be accessed here:
The character of upper limb disorder in computer operators remains obscure, and its treatment and prevention have had limited success. Symptoms tend to be mostly perceived as relating to pathology in muscles, tendons or insertions. However, the conception of a neuropathic disorder would be supported by objective findings reflecting the common complaints of pain, subjective weakness, and numbness/tingling. By examining characteristics in terms of symptoms, signs, and course, this study aimed at forming a hypothesis concerning the nature and consequences of the disorder.

I have studied a consecutive series of 21 heavily exposed and severely handicapped computer-aided designers. Their history was recorded and questionnaire information was collected, encompassing their status ½–1½ years after the initial clinical contact. The physical examination included an assessment of the following items: isometric strength in ten upper limb muscles; sensibility in five homonymously innervated territories; and the presence of abnormal tenderness along nerve trunks at 14 locations.

Rather uniform physical findings in all patients suggested a brachial plexus neuropathy combined with median and posterior interosseous neuropathy at elbow level. In spite of reduced symptoms at follow-up, the prognosis was serious in terms of work-status and persisting pain.

This small-scale study of a clinical case series suggests the association of symptoms with focal neuropathy at specific locations. The inclusion of a detailed neurological examination would appear to be advantageous with upper limb symptoms in computer operators.

Upper limb pain and dysfunction are frequent complaints associated with computer work. However, the responsible pathology and the pathophysiological mechanisms are insufficiently understood. In addition, there is no consensus with regard to physical findings that may reflect symptoms.

The involvement of the nerves in "non-specific" upper limb disorder, e.g. in computer operators, is suggested by various observations, among them the demonstration of an elevated threshold to vibratory stimulation.

Upper limb pain in computer operators shares the features of a neuropathic pain: common analgesics tend to be ineffective; pain may be evoked spontaneously or may appear to constitute an abnormal response to stimuli, with frequent occurrence of allodynia. In addition, there are often non-painful abnormal spontaneous or evoked sensory phenomena such as numbness/tingling. The common experience of weakness, which may further deteriorate on use, would also be compatible with an upper limb nerve affliction.

A precise and accurate diagnosis is crucial for effective management and rehabilitation, and also for epidemiological studies concerning causation. In order to get a better understanding of the pathophysiological mechanisms, the injured tissue should be precisely located. This might not necessarily be where symptoms predominate.

I have aimed at studying a clinical series of computer operators with upper limb complaints and dysfunction in terms of:

• exposure characteristics;
• symptoms and past treatment;
• physical findings which may reflect an affliction of the peripheral nerves;
• prognosis with regard to symptoms and work-status.

This study comprises a consecutive series of 21 computer-aided designers with pain and functional limitations in the dominant upper limb. All patients were referred to a department of occupational medicine for diagnostic and aetiological assessment and management.
Three patients were males, of median age 27 years (range 25–41), and 19 were females, of median age 35 years (range 25–55).

Patients were interviewed about the character, distribution, initial presentation and development of their symptoms. Special attention was given to the presence of upper limb pain, subjective weakness and numbness/tingling, and to other symptoms included in a standard protocol for work-related upper limb disorders.

A subsequent physical examination included extracts of diagnostic criteria for selected clinical disorders.

Upper limb nerve afflictions were defined from an additional neurological examination consisting of the following components:

• Manual assessment of the isometric strength in a selection of ten upper limb muscles (Figure). Any reduction of strength was registered as abnormal.

• Assessment of the sensibility in five homonymously innervated territories:
• The axillary nerve (the deltoid area);
• The musculocutaneous nerve;
• The radial nerve;
• The median nerve (the tip of the second finger);
• The ulnar nerve (the tip of the fifth finger).
The perception of vibration (tuning fork, 256 Hz) was also assessed. Any sensory deviation from normal was registered as abnormal.

• Assessment of tenderness with slight pressure at 14 locations along the course of nerves:
• The brachial plexus;
• The suprascapular nerve (suprascapular notch);
• The axillary nerve;
• The musculocutaneous nerve;
• The median nerve;
• The radial nerve;
• The posterior interosseous nerve at the arcade of Frohse (supinator tunnel);
• The ulnar nerve.
Any mechanical allodynia was registered as abnormal.

Assessments in patients with unilateral disorder were based on comparison to contra-lateral findings defined as normal. In patients with bilateral disorder, test-results were related to other findings in the same limb assumed to be normal, e.g., strength in adjacent muscles or sensibility in adjacent innervation territories.

The definition and location of a nerve affliction ("neuropathy") was based on a traditional approach with a focus on the topography and innervation patterns of the upper limb nerves. Special consideration was given to the presence of normal strength in certain muscles and of reduced strength in others, and to the locations of mechanical allodynia.

I have operated with two sets of criteria for the definition of focal neuropathy, assuming the second criterion to be more convincing:

• Criterion 1: The presence of a pattern of muscle-weakness suggesting a focal neuropathy at a defined location, at which mechanical allodynia with slight pressure at the nerve is present.

• Criterion 2: Criterion 1 plus sensory deviations from normal in one or several sensory territories located peripherally to the focal neuropathy.

In addition, double crush was defined at the appropriate combinations of locations (below).

Brachial plexus neuropathy at cord level was defined with reduced strength in the deltoid, biceps, and radial flexor of the wrist muscles, when weaknesses were accompanied by brachial plexus tenderness at its passage behind the pectoral muscle. Depending on the extent of brachial plexus involvement, additional muscles may be weak and mechanical allodynia may extend in the proximal or medial direction.

Median neuropathy at elbow level was defined with reduced strength in the radial flexor of the wrist muscle along with mechanical allodynia involving the median nerve at elbow level.
With an isolated median neuropathy, the deltoid, biceps, and ulnar extensor of the wrist muscles must be intact.

Double crush involving the brachial plexus and the median nerve was defined in the following situations:

• Strength in the radial flexor of the wrist muscle was reduced as much as / more than it was in the deltoid or biceps muscles.

• Mechanical allodynia was either the same or more conspicuous at the median nerve at elbow level than it was at plexus level.

Posterior interosseous neuropathy was defined with reduced strength in the ulnar extensor of the wrist muscle along with tenderness at the nerve-passage below the arcade of Frohse in the dorsal proximal forearm. With an isolated posterior interosseous neuropathy, the deltoid, biceps, short radial extensor of the wrist, and radial flexor of the wrist muscles must be intact.

Double crush involving the brachial plexus and the posterior interosseous nerve was defined in the following situations:

• Strength in the ulnar extensor of the wrist muscle was reduced as much as / more than in the deltoid, biceps, or radial flexor of the wrist muscles.

• Mechanical allodynia was either the same or more conspicuous at the arcade of Frohse than it was at plexus level.

Other potential focal neuropathies were defined according to similar criteria; e.g., an isolated carpal tunnel syndrome would require reduced strength in the short abductor of the thumb muscle but preserved strength in the radial flexor of the wrist muscle. An isolated ulnar neuropathy at elbow or wrist level would require reduced strength in the abductor of the fifth digit and intact proximal muscles. In addition, mechanical allodynia should be present at the appropriate locations along nerve trunks.

Patients were recommended to freely move and use the symptomatic upper limb within the limits of immediate and subsequent pain aggravation. All patients were offered physiotherapy based on the concept of adverse neural tension.

½–1½ years after the initial examination, the patients responded to a questionnaire covering: the exposure characteristics; symptoms; past treatment; pain intensity at the first encounter and at follow-up, quantified on a VAS-scale from 0 (no pain) to 10 (extreme pain); and the present status with regard to functional limitations and work.

The change in the level of reported pain between the first consultation at the department and at follow-up was assessed by Friedman's test.

All 21 patients returned the questionnaire. The mean duration of work with computer-aided design was 95 months (16–260 months). The self-reported daily mean of time spent with computer work constituted 81% (50–100%) of the total working time. 86% of the respondents reported aggravating factors during the months prior to the onset of symptoms, including high work intensity, overwork or other work conditions causing an unusual strain.

Pain in the dominant upper limb was common to all patients. It had a mean duration of 24 months (1–60 months) and was the main symptom in 13 patients. All but one patient had a subjective feeling of weakness/fatiguability. Five patients reported this to be the most disturbing symptom. 19 patients experienced numbness/tingling, which constituted the main symptom in three of them. Five patients had bilateral symptoms.

All patients had received treatment prior to admission: a limited and transitory effect of past physiotherapy was reported in four out of 17 patients, of pain killers in one out of 10, and of local steroid injections in two out of three patients.
For the remaining patients the past treatment had no effect.

Findings were also assessed according to the defined criteria for work-related upper limb disorders.

In all 21 patients reduced strength was demonstrated in the following muscles: deltoid, biceps, triceps, and the radial flexor, short radial extensor and ulnar extensor of the wrist. In a smaller number of patients there were additional strength-reductions in the pectoral, infraspinatus, latissimus and abductor of the fifth digit muscles.

At follow-up, a substantial proportion of patients remained on sick leave or unemployed (eight patients). The level of reported pain was significantly reduced at follow-up (χ² = 8.0 and 9.0; p < 0.005 and 0.003, respectively). However, the pain persisted at a disturbing level in the majority of patients. The deltoid, biceps, triceps, and the radial flexor, short radial extensor and ulnar extensor of the wrist muscles were invariably involved. In a few limbs there was an additional involvement of the pectoral, small abductor of the fifth digit, latissimus dorsi, and infraspinatus muscles (Figure).

Caution should be exercised when drawing a comparison between the outcome of this study of patients, referred with a serious disorder, and studies of "healthy" computer operators in occupation. Still, it would be relevant to compare with upper limb findings in computer operators described by others. A study of 533 visual display terminal workers has suggested an array of upper limb disorders in 22%, dominated by tendon-related conditions in 15% and probable nerve entrapment in 4%.

However, the diagnoses depend on the choice and validity of the clinical tests employed and on the diagnostic criteria applied.

My findings are more in accordance with those of Pascarelli, who studied 485 upper limb patients, of whom 70% were computer operators. A detailed and comprehensive physical examination demonstrated protracted shoulders in 78% and head-forward position in 71%. This was also frequent in my study-patients but not systematically registered. A neurogenic thoracic outlet syndrome in 70% was suggested by tests stressing the brachial plexus and by the demonstration of mechanical allodynia.

The limited success of the prevention and management of computer-related upper limb disorders demands new approaches to practice and research in the field. The inclusion in future studies of the presented systematic examination of the upper limb nerves may provide additional diagnostic information. This may lead to future improvement of the prevention and management of computer-related upper limb disorders.

None declared.

JRJ designed the study, conducted all the clinical examinations, and wrote the manuscript.

The pre-publication history for this paper can be accessed here:
What were the critical steps in the development of ATRA and arsenic as treatments for APL? Researchers in Shanghai tell the story and look to the future.

Acute promyelocytic leukemia (APL) was first identified as a distinct subtype of acute myeloid leukemia in 1957 by Leif Hillestad. It is called M3 in the French–American–British classification, with a variant type referred to as microgranular (M3v in the French–American–British nomenclature).

Apoptosis: A genetically determined process of cell death in which the cell uses specialized cellular machinery to kill itself and is then eliminated by phagocytosis or by shedding.
Caspase: A family of cysteine proteases with aspartate specificity that are essential intracellular death effectors.
Disseminated intravascular coagulation: A hemorrhagic disorder that occurs following the uncontrolled activation of clotting factors and fibrinolytic enzymes throughout small blood vessels, resulting in depletion of clotting factors and tissue necrosis and bleeding.
Fibrinogenopenia: A decrease in concentration of fibrinogen in the blood.
Granulocyte: Terminally differentiated myelocyte or polymorphonuclear white blood cell with granule-containing cytoplasm.
Ligand-inducible transcription factors: Transcription factors that structurally have domains associated with DNA binding and ligand (hormone) recognition. When binding to its specific ligand, the transcription factor initiates a series of conformational changes and interacts efficiently with its specific DNA response element to recruit components of the transcriptional machinery.
Nuclear receptor superfamily: One of the most abundant classes of transcriptional regulators, including receptors for steroid hormones, RAs, thyroid hormones, and so on. These transcription factors regulate diverse functions such as homeostasis, reproduction, development, and metabolism in animals.
Promyelocyte: Granule-containing cell in bone marrow that is in an intermediate stage of development between myeloblasts and myelocytes and that gives rise to a granulocyte.
Proteasome: Proteolytic complex that degrades cytosolic and nuclear proteins.
Sumoylation: Post-translational modification of proteins by the small ubiquitin-like modifier SUMO.
Ubiquitin: A chiefly eukaryotic protein that, when covalently bound to other cellular proteins, marks them for proteolytic degradation.

APL accounts for 10%–15% of all cases of acute myeloid leukemia, with several thousand new cases diagnosed worldwide each year. Before the advent of differentiation therapy, APL was treated with anthracycline-based chemotherapy, with a complete remission rate of 60%–76% and a 5-year event-free survival rate of 23%–35%.

Failure to differentiate terminally characterizes most, if not all, cancer cells of every origin. Whether the induction of differentiation could be a treatment strategy for cancers was hotly debated for decades before the advent of differentiation therapy.

An important discovery of the early 1970s was that myeloid leukemic cells could be reprogrammed to resume normal differentiation and to become non-dividing mature granulocytes or macrophages as a result of stimulation by various cytokines. Based on this concept, 13-cis RA was tested in people with refractory or relapsed APL, but 13-cis RA proved ineffective in treating APL.
The remaining one patient achieved CR when chemotherapy was added Beginning in the early 1980s, the Shanghai Institute of Hematology conducted a series of experiments on differentiation therapy for APL. These experiments showed that all-trans RA (ATRA) could induce terminal differentiation of HL-60, a cell line with promyelocytic features, and fresh leukemic cells from patients with APL. These intriguing results were the impetus for a clinical trial. Twenty-four patients with APL were treated with ATRA , yellow arsenic , and white arsenic, or arsenic trioxide (As2O3) Arsenic is a common, naturally occurring substance that exists in organic and inorganic forms in nature. The organic arsenicals consist of an arsenic atom in its trivalent or pentavalent state linked covalently to a carbon atom. There are three inorganic forms of arsenic: red arsenic and induces partial differentiation at low concentrations (0.1 × 10−6 to 0.5 × 10−6 M). The rapid modulation and degradation of the PML-RARa oncoprotein by arsenic trioxide could contribute to these two effects Studies have shown that arsenic trioxide exerts dose-dependent dual effects on APL cells—it induces apoptosis (programmed cell death) preferentially at relatively high concentrations of retinoids and retinoid X receptors (RXRs). These belong to the steroid/thyroid/retinoid nuclear receptor superfamily of ligand-inducible transcription factors. Both RAR and RXR families consist of three subtypes: α, β, and γ etinoids .More than 95% of patients with APL have the tq22;q21) translocation. This results in the fusion of the RARα gene on 17q21 and the promyelocytic leukemia (PML) gene on 15q22, which generates a PML-RARaα fusion transcript [ translocPML-RARa may affect transcription in other pathways including those in which the transcription factor AP1 and interferon-responsive factors are involved. PML-RARα also binds to promyelocytic leukemia zinc finger (PLZF) protein and potentially affects its functions −7–10−6 M) exert their effects through targeting the PML-RARα oncoprotein, triggering both a change in configuration and degradation of the oncoprotein and the activation of transcription, leading to differentiation. Cleavage of the PML-RARα fusion protein by caspases at residue D522 has been shown in APL cells induced to differentiate by ATRA ATRA and arsenic trioxide degrade and cleave the PML-RARα oncoprotein. Although we now have a good understanding of the molecular mechanisms underlying ATRA in differentiation therapy for APL, these mechanisms were shown long after the identification of the efficacy of this drug in treating the disease. Now it is well established that pharmacological concentrations of ATRA , and this difference persisted after consolidation therapy (p < 0.05). Notably, all 20 patients in the combination group remained in CR whereas seven of 37 cases treated with monotherapy relapsed (p < 0.05) after a follow-up of 8–30 months .Since ATRA and arsenic trioxide degrade the PML-RARa oncoprotein via different pathways, and since studies in animal models have shown synergic effects of both drugs in prolonging survival or even eliminating the disease ,35, the It seems that a combination of ATRA and arsenic trioxide for remission and maintenance treatment of APL produces better results than either of the two drugs used alone, in terms of the time required to achieve CR and the length of disease-free survival. 
We hope that the use of three treatments (ATRA, arsenic trioxide, and chemotherapy) will ultimately make APL a curable human acute myeloid leukemia.

The story of ATRA in the treatment of APL shows that by targeting the molecules critical to the pathogenesis of certain diseases, cells can be induced to return to normal. Differentiation therapy is therefore a practical method of treating human cancer that has shown consistent effectiveness in trials. The clarification of the underlying molecular abnormalities of APL is an example of the benefits of a close collaboration between bench and bedside, and is necessary for our understanding of the mechanisms of ATRA in differentiation therapy. It is clearly important to elucidate the molecular and cellular basis of a particular cancer if we are to develop further mechanism-based targeted therapies.

The sequencing of the human genome and ongoing functional genomic research are now accelerating the dissection of disease mechanisms and the identification of therapeutic targets. This in turn may facilitate the screening of promising treatments. What we learn from developing curative treatment approaches to APL may help to conquer other types of leukemia and cancer.
Increasing prevalence of obesity and disorders associated with sedentary living constitute a major global public health problem. While previous evaluations of interventions to increase physical activity have involved communities or individuals with established disease, less attention has been given to interventions for individuals at risk of disease.ProActive aims to evaluate the efficacy of a theoretical, evidence- and family-based intervention programme to increase physical activity in a sedentary population, defined as being at-risk through having a parental family history of diabetes. Primary care diabetes or family history registers were used to recruit 365 individuals aged 30–50 years, screened for activity level. Participants were assigned by central randomisation to three intervention programmes: brief written advice (comparison group), or a psychologically based behavioural change programme, delivered either by telephone (distance group) or face-to-face in the family home over one year. The protocol-driven intervention programme is delivered by trained facilitators, and aims to support increases in physical activity through the introduction and facilitation of a range of self-regulatory skills . The primary outcome is daytime energy expenditure and its ratio to resting energy expenditure, measured at baseline and one year using individually calibrated heart rate monitoring. Secondary measures include self-report of individual and family activity, psychological mediators of behaviour change, physiological and biochemical correlates, acceptability, and costs, measured at baseline, six months and one year. The primary intention to treat analysis will compare groups at one-year post randomisation. Estimation of the impact on diabetes incidence will be modelled using data from a parallel ten-year cohort study using similar measures.ProActive is the first efficacy trial of an intervention programme to promote physical activity in a defined high-risk group accessible through primary care. The intervention programme is based on psychological theory and evidence; it introduces and facilitates the use of self-regulatory skills to support behaviour change and maintenance. The trial addresses a range of methodological weaknesses in the field by careful specification and quality assurance of the intervention programme, precise characterisation of participants, year-long follow-up and objective measurement of physical activity. Due to report in 2005, ProActive will provide estimates of the extent to which this approach could assist at-risk groups who could benefit from changes in behaviours affecting health, and inform future pragmatic trials. This trial addresses the rise in the burden of disease associated with sedentary living: a major public health problem. Physical inactivity accounts for up to 11.7% of all deaths in developed countries [Reversal of this trend will require not only public health programmes to increase activity at societal level, but also interventions to help high-risk individuals increase physical activity and maintain beneficial activity patterns . This is2 [The study targets people with a parental family history of Type 2 diabetes and a sedentary lifestyle, who constitute a clearly identifiable high-risk population . A consi2 , and pro2 ,6.Most trials have evaluated increasing physical activity in the context of established disease. The few published trials of primary prevention in high-risk groups have methodological limitations. 
They have mainly evaluated brief interventions to increase exercise in the general population delivered through primary care practitioners -9, oftenThree trials among individuals with impaired glucose tolerance in China , FinlandProActive intervention programme included a review of psychological theories and evidence, through systematic reviews [Many of the available trials evaluated interventions that were not explicitly based on psychological theory and evidence, and did not specify clearly which behaviour change techniques were applied by the providers -20. In a reviews ,20 and e reviews was seleet al., 2004. A causal modelling approach to the development of theory-based behaviour change programmes for trial evaluation. Submitted).A range of behaviour change techniques with evidence for their effectiveness was used to bridge the gap between intention and action: goal setting and review, action planning, use of prompts, self-monitoring, and reinforcement ,18,23,25ProActive.Major challenges in promoting physical activity are maintenance of behaviour change, and the avoidance of drop out rates that can approach 50% . ReviewsProActive is to determine the effects of a theoretical- and evidence-based intervention programme on objectively measured physical activity after one year, in sedentary individuals at risk of diabetes and related metabolic abnormalities due to their family history. Three questions are posed:The primary objective of 1. Behaviour change: Can an innovative approach to increasing physical activity achieve clinically important change in this behaviour when offered to a group at increased risk of diabetes?2. Disease impact: If so, what is the potential for the changes in behaviour achieved in mid-life to reduce the incidence of diabetes in later life?3. Dose finding: How does delivery of the approach, at two levels of intensity, affect acceptability, efficacy and costs?The trial will estimate the extent to which physical activity and its key psychological mediators are altered by the intervention programme, and assess its acceptability to this high-risk group. It will document the extent to which behaviour change is associated with reduction in weight gain and improvement in physiological and biochemical correlates, and will model the potential impact of the intervention on future risk of diabetes.Intensive, face-to-face interventions may not be a feasible health service model, and there is some evidence that less intensive, continuous support may be as effective ,23. The ProActive is a four-year study with a complex randomised trial design [l design , with ceThe study design and patient flows are shown in Figure The study is explanatory in design, and the quality-assured intervention programmes are delivered by carefully trained and supervised family health facilitators with experience of working in primary care or the community, and backgrounds in health promotion, dietetics and nursing.The study is set in urban, suburban and rural Cambridgeshire, Essex and West Suffolk, England, in the homes of participants and their families. The study population consists of offspring of people with Type 2 diabetes, aged 30–50 years, without a diagnosis of diabetes, and not considered very active based on self-report at the start of the study (see below). This age range defines a group at risk of weight gain ,32. 
Any Once the relevant ethical and PCT approval had been obtained, 53 practice teams in the locality were approached by letter, inviting them to take part in the study, and highlighting the reimbursement of all costs involved. Personalised letters were sent to the practice manager (who we asked to collate responses and reply using a reply slip and Freepost envelope), all partners and nursing staff. Included with each letter was a brief summary of the study and a Research Information Sheet for Practices (RISP) form . If no rInitially participants were recruited through their parents; patients with Type 2 diabetes on the diabetes registers of 20 practices ('recruitment method 1'). Patients were written to by their general practitioner, with a description of the study, and asked to provide contact information for any offspring aged 30–50 years, living locally. Consent was also sought for the practice to pass the contact details of the offspring to the research team so that they could invite the offspring directly into the study. Piloting demonstrated feasibility and acceptability of the method, and one reminder was sent after three weeks if no reply was received.From 20 practices, 2631 patients were approached and 2025 (77%) replied, yielding 1238 potentially eligible offspring who were invited to take part in the study. The ratio of approximately one potentially eligible offspring to two patients with diabetes was half our pilot projections, so to increase recruitment we developed a second recruitment approach ('recruitment method 2'). This approach recruited potential participants with a recorded family history of diabetes directly from practices with family history registers, and was feasible in seven of the 20 practices. General practitioners wrote to all patients aged 30–50 years with a recorded family history of diabetes, enclosing a study information sheet, and asking those willing to complete and return to the practice a questionnaire to determine which family member(s) had diabetes, and of which type. Consent was sought for this information and contact details to be passed on to the research team. Using this method, with again one reminder letter, 1340 patients were written to, and 896 (67%) responses were received, with 283 patients interested and eligible. Both recruitment approaches provided 1521 potential trial participants. Practitioners used their discretion in applying both approaches to the exclusion of patients who were physically or mentally unwell.Potential participants recruited by both methods were next written to by the research team with full information about the study and a screening activity questionnaire, describing occupational and leisure activity, based on published questionnaires ,36, to eTo fulfil measurement requirements participants had to be able to walk briskly, without help, on the flat for 15 minutes. Participants also had to live within reach of the measurement centre and the Family Health Facilitators; defined as a 30-minute average travel time from the study co-ordination centre. Other exclusion criteria included individuals with serious physical or psychiatric illness limiting programme involvement; people with life issues interfering with the study; those known to be pregnant or have diabetes before baseline measurement; and those planning to move away. As shown in Figure ProActive. 
Brief details of the trial were sent, together with a request for feedback if the practitioner had any concerns about the offspring's participation, or about the safety of the facilitators making home visits.Eligible offspring were registered with general practitioners in the Eastern Region of the UK. Prior to both baseline measures and randomisation, these doctors were individually informed about their registered patients' intention to participate in Randomisation was carried out centrally by the trial statistician, using a partial minimisation procedure that dynamically adjusted the randomisation probabilities in order to balance important covariates; body mass index, sex, age, physical activity , family size, and behavioural intentions. Randomisation thus used baseline measures. Thirty-two pairs of siblings and two sibling-triples were cluster randomised to the same study group to avoid contamination, and the remaining 295 participants (81%) were individually randomised. Overall, 365/465 (78%) of those eligible went forward to randomisation.At baseline and the end of the study, all participants attend the study centre at either Ely or Cambridge for questionnaires, physiological and anthropometrical measures, and venesection. At six months, psychological and self-reported physical activity data are collected by postal questionnaires. Measures relating to the intervention programme evaluation are collected by the facilitators during the intervention, and we assess reported use of self-regulatory strategies by participants to increase their activity levels at six and twelve months.In similar primary care based trials we have achieved attrition rates of 30% or less ,39, and Maximising retention is an important issue, particularly as the comparison group do not benefit from regular contact with a facilitator. At recruitment, the introductory leaflets for all three arms emphasised the importance of follow-up, irrespective of treatment group. Participants who drop out of the intervention programme are contacted by a principal investigator, and offered an opportunity to give feedback and to confirm drop out from the intervention programme only, or from trial measurement as well.The distribution of measures across baseline, six-month and one-year follow-up are shown in Table 2/kg/body weight) is measured by indirect calorimetry during a submaximal graded treadmill exercise test, and maximal cardiorespiratory fitness (VO2max) is estimated using predicted maximal heart rate (i.e. 220 minus age) [Oxygen uptake (ml Onus age) ,42.Self-report measures of well-being and quality of life include subjective health and energy (SF-36) , anxietyThe frequency and severity of physical activity related injury is assessed by study questionnaire at one year.Psychological mediators of physical activity include intention to increase activity over the next year, and its predictors . These key measures have been developed for the study following the recommendations of Ajzen . PhysiolProActive intervention programme to have an impact on health service costs in the short term, we are monitoring health service utilisation in the last 20% of participants recruited to the study.The economic analysis will explore the impact of a physical activity intervention programme on NHS costs. As the study is explanatory in design, we will not conduct a full cost-effectiveness analysis, but aim to provide a cost-description of the delivery of the intervention programmes. 
We are measuring the costs of delivering the 'face-to-face' and 'distance' intervention programmes via family health facilitators. These costs primarily comprise the training of facilitators, educational materials, travel, and the time that facilitators spend contacting and visiting families (including cancelled visits). Travel costs and contact time are recorded by the facilitators for every trial participant. The cost of facilitator time will be based on national average salaries, employment costs, qualifications, overheads and indirect costs . AlthougProActive are cardiovascular and musculoskeletal events associated with the laboratory procedures of treadmill exercise testing and injuries sustained as a consequence of increasing physical activity in everyday life. The cardiorespiratory fitness test used in this study is submaximal, and only undertaken following extensive screening procedures. If a participant exhibits a positive Rose angina questionnaire [The primary safety concerns for participants in ionnaire , a positionnaire or an abRanges for acceptable results are set for all clinical measures. If these are exceeded, the information is sent to the general practitioner, and the participant informed and advised to consult.As the intervention programme is based on participants' own preferred activities, and emphasises small achievable goals set by the participants, the risk of excess injury is small. Group information about injury will be reported.Participants previously unaware of their familial risk of diabetes may experience anxiety related to awareness of their increased risk status. This is considered in facilitator training, and measures of anxiety, worry about diabetes and perceived vulnerability are included (see above).Physiological and anthropometric measures are made in two centres by observers unaware of individuals' group allocation. Biochemical measures are made in one laboratory with established quality assurance systems. Randomisation was undertaken by the trial statistician, independently of the trial co-ordination team, and the data entry team are unaware of study group.The administrative database (participant information), dayPAR values and blood test results are managed in-house, with the latter being double entered. Numeric fields have limiters set so that values outside a defined range cannot be entered. Additionally, any blood results outside the 'normal range' are flagged for confirmation of value. Random checks on administrative data are performed regularly, checking the data on the database against paper records and correcting any errors found.Double data entry of all anthropometric and questionnaire measures is undertaken by an experienced, independent agency, blind to study group . In addition, random checks are applied as described above.The family health facilitator contacts participants randomised to the 'face-to-face' and 'distance' interventions, and arranges a home interview including family members. At this introductory interview, personal reasons for increasing physical activity are elicited and reinforced, family participation is encouraged, and the relationships between physical activity, weight gain and prevention of Type 2 diabetes are explained and discussed.In the 'face-to-face' arm this is followed by four visits and two brief support telephone calls over five months. 
During these interactions the participant and willing family members learn strategies to increase physical activity, for instance selecting activities that they enjoy doing, setting achievable goals, defining action plans, self-monitoring, self-reinforcement and relapse prevention. Pedometers are available for self-monitoring among participants who have chosen walking as their goal. A key difference between this intervention and others currently under evaluation (e.g. ACT) is that there is no absolute target for physical activity defined at the outset. Family members are encouraged to make gradual and continuous increases in their activity, as much as they feel able to, on the understanding that all increases, if maintained, are beneficial. Follow-up continues by monthly telephone calls up to one year, to discuss any difficulties in applying the strategies, and to encourage family members to increase activity further.In the 'distance' arm, following the introductory meeting the intervention programme is delivered by six telephone calls over five months, and then monthly by post up to one year, with content similar to the 'face-to-face' arm. During the phone calls the facilitators encourage the participants to involve family members. Visits and telephone calls take approximately one hour and 45 minutes, respectively.An arm-specific introductory leaflet is used, but otherwise materials are the same for the face-to-face and distance arms. All introductory leaflets include text to encourage retention in the trial. In the comparison arm the leaflets offer brief advice on the benefits of physical activity. Participants in the intervention programme arms are given an educational manual describing the strategies that participants are encouraged to use to increase their habitual activity in a step by step fashion.Various mechanisms are used to promote the fidelity of delivery of the intervention programme to the underlying psychological theories and intervention programme protocols. A detailed training manual and protocols for each contact were developed, and a Training Officer appointed. Facilitators attended a five day phased course in psychological theories, behaviour change techniques and experiential training in techniques, with six half-days initially, followed by refresher sessions at six months and continuing supervised practice by a clinical psychologist and through peer-appraisal. Facilitators complete a checklist for the introduction of and mastery of self-regulatory strategies by the participant after each contact, and monitor intervention programme attendance and drop-out for each participant.An assessment of adherence by facilitators to the behaviour change techniques specified in the protocols was conducted among a random sample of 27 participants, using reliable coding frames and transcripts of the sessions. 
The intervention programme evaluation includes: an assessment of the frequency of meetings and telephone calls, proportion of progress reports and postcards sent and progress reports returned, satisfaction with the intervention programme, reported use of self-regulatory strategies by participants at six months and one year, and drop-outs at one year.The sample size calculation was initially based on physical activity level (PAL), the ratio of total energy expenditure to estimated basal metabolic rate ,41, and Main analyses will be at one year, comparing combined 'face-to-face' and 'distance' versions of the intervention programme with 'brief advice', comparing 'face-to-face' with 'distance' modes of the intervention programme, and estimating the difference between each intervention programme group and 'brief advice' to inform a larger pragmatic trial. Analysis by intention-to-treat will retain individuals within their randomised group regardless of participation. Comparisons will involve an adjustment for baseline physical activity and other variables used in the randomisation. We will undertake sensitivity analyses, assuming a range of potential outcomes for non-completers, informed by available baseline and interim data on non-completers. Non-completers will have multiple data imputed with a 'missing at random' assumption and with sensitivity analyses to represent optimistic and pessimistic scenarios for drop out. Clustering effects by family will be estimated for the primary outcome.A secondary 'dose-response' analysis will use all three randomised groups, over baseline, six months and one year. A 'per protocol' analysis will also be undertaken among those completing the intervention programme.The incremental cost of delivering the 'face-to-face' intervention programme will be compared to the 'distance' and 'brief advice' groups.Stage 1) The trial will provide evidence on the relationship between observed behaviour change, weight change, and biochemical and physiological correlates. Modelling is facilitated by reference to the Ely Cohort; a prospective population cohort study that began in 1990 and involved 1122 people without known diabetes [ProActive.diabetes . MeasureStage 2) Using models based on past cohort data, the influence of behaviour change on future diabetes incidence [ncidence ,41,51,52Stage 3) We will undertake sensitivity analyses on the projections at Stage 2, using a range of plausible assumptions about how behaviour change might affect other risk factors and hence indirectly influence future diabetes risk.ProActive is the first efficacy trial of physical activity promotion in a defined high-risk group accessible through primary care, evaluating an intervention programme based on theory and evidence. It supports increases in informal activity, through the introduction and facilitation of self-regulatory strategies with regular reinforcement by the facilitator. Due to report in 2005, ProActive has the potential to make substantial contributions to understanding the extent to which such approaches could assist the wide range of at risk groups who could benefit most from increasing their physical activity.The trial team brings together expertise in the epidemiology of diabetes with intet al., 2004. A causal modelling approach to the development of theory-based behaviour change programmes for trial evaluation. Submitted). 
This will enable replication and further strengthening of effective intervention steps, as well as development of theory. In terms of the intervention itself, careful measurement along the hypothesised causal path from cognition, through self-reported behaviours, to energy expenditure will enable testing of the application of the Theory of Planned Behaviour in this setting, and of the relationship between the everyday activities that the programme has as its focus and objectively measured physical activity.

The authors declare that they have no competing interests.

ALK, NW, SG, SS, WH, DS – Principal Investigators
TP – Trial Statistician
KW – Trial Co-ordinator
Will H – Trial Economist
UE – Physical activity measurement

All authors read and approved the final manuscript. ALK is the paper guarantor.

The pre-publication history for this paper can be accessed here:
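As an illustration of the partial minimisation procedure described in the randomisation section above, the sketch below balances assignment across the named covariates with a biased coin. The three group labels come from the trial; the 0.8 assignment probability, the equal covariate weights, and the banded covariate levels are illustrative assumptions, not the trial's actual settings.

```python
import random
from collections import defaultdict

# Minimal sketch of biased-coin minimisation over the balancing covariates
# named in the protocol (BMI, sex, age, physical activity, family size,
# behavioural intentions). Probabilities and weights are assumptions.
GROUPS = ["face-to-face", "distance", "brief advice"]
counts = defaultdict(int)  # (factor, level, group) -> n already assigned

def imbalance(group, profile):
    """Count of already-assigned participants in `group` sharing each level."""
    return sum(counts[(f, lvl, group)] for f, lvl in profile.items())

def assign(profile):
    scores = {g: imbalance(g, profile) for g in GROUPS}
    best = min(scores, key=scores.get)
    # Biased coin: favour the group that minimises imbalance, otherwise
    # pick uniformly among the remaining groups.
    if random.random() < 0.8:
        group = best
    else:
        group = random.choice([g for g in GROUPS if g != best])
    for f, lvl in profile.items():
        counts[(f, lvl, group)] += 1
    return group

participant = {"sex": "F", "age_band": "40-50", "bmi_band": ">=30",
               "activity": "low", "family_size": "3+", "intention": "high"}
print(assign(participant))
```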
In most recent large efficacy trials of barrier contraceptive methods, a high proportion of participants withdrew before the intended end of follow-up. The objective of this analysis was to explore characteristics of participants who failed to complete seven months of planned participation in a trial of spermicide efficacy.Trial participants were expected to use the assigned spermicide for contraception for 7 months or until pregnancy occurred. In bivariable and multivariable analyses, we assessed the associations between failure to complete the trial and 17 pre-specified baseline characteristics. In addition, among women who participated for at least 6 weeks, we evaluated the relationships between failure to complete, various features of their first 6 weeks of experience with the spermicide, and characteristics of the study centers and population.Of the 1514 participants in this analysis, 635 (42%) failed to complete the study for reasons other than pregnancy. Women were significantly less likely to complete if they were younger or unmarried, had intercourse at least 8 times per month, or were enrolled at a university center or at a center that enrolled fewer than 4 participants per month. Noncompliance with study procedures in the first 6 weeks was also associated with subsequent early withdrawal, but dissatisfaction with the spermicide was not. However, many participants without these risk factors withdrew early.Failure to complete is a major problem in barrier method trials that seriously compromises the interpretation of results. Targeting retention efforts at women at high risk for early withdrawal is not likely to address the problem sufficiently. Retention of participants has been a consistent problem in clinical studies of barrier contraceptive methods. For example, in six large studies of condoms, diaphragms, and spermicides conducted in the past decade, more than 30% of the participants failed for reasons other than pregnancy to complete the intended six months or six menstrual cycles of follow-up -6 Such hIssues regarding the design of barrier method studies have become increasingly important to researchers and public health scientists since the onset of the HIV epidemic because of the urgent need for methods to prevent this disease and other sexually transmitted infections. Numerous new barrier contraceptive methods and microbicides are currently in various stages of development and testing. Devising effective approaches to maximize retention in these studies will be critical.In this analysis, we used data from a large, recently completed randomized trial of the efficacy and safety of five spermicide products to determine whether we could identify specific subgroups of participants who were at particular risk for failure to complete the trial. Our goal was to provide information that might assist in the development of targeted approaches to improve follow-up in future trials.The primary purpose of this randomized trial was to estimate and compare the probability of pregnancy during six months of typical use of five nonoxynol-9 spermicide products. Safety, acceptability, and product use were additional specified outcomes. The trial was conducted at 14 sites in the United States between June 1998 and August 2002. The study was approved by the institutional review boards at each site and at Family Health International. All participants signed written informed consent forms before enrollment.A full description of the trial procedures has been published previously . 
In brieFollow-up visits were scheduled at 4, 17, and 30 weeks after admission. Each participant was also asked to return to the study site if she wished to discontinue use of the spermicide. At each visit, the participant was interviewed, and a urine pregnancy test was done. At the 4-week and final visits, she completed a seven-page acceptability questionnaire. A pelvic examination was performed at the final visit and at other visits as indicated. Colposcopy substudy participants had a vaginal colposcopy at each follow-up visit. Each participant was asked to do a pregnancy test at home 2, 10, and 23 weeks after admission and to telephone the site with the result. If a participant missed a scheduled contact, study procedures required that staff make at least four attempts to contact her by at least two different modalities If they could not contact her directly, staff were to try to reach her through an alternate contact person identified by the participant at admission. Compensation for completion of all scheduled visits in the primary study ranged from $120 to $400 at the 14 study sites; at most sites, the amount was divided evenly among the separate visits.In this analysis, we included all randomized participants except for 22 who were discovered to have been pregnant at admission and who therefore contributed no data to the primary analysis. We classified each of the remaining 1514 participants as having completed the study if she considered the spermicide to be her primary contraceptive method for at least 183 days after randomization, or she became pregnant before she stopped relying on it. Otherwise, she was classified as having failed to complete the study. We assigned each participant's last day in the analysis as the earliest of the following dates: the estimated date of fertilization of a pregnancy; the date she was last known to have been relying primarily on the assigned spermicide for contraception; the latest date her pregnancy status could be reliably determined; and 183 days after randomization. These rules were the same as those used in the prior primary pregnancy analyses .We assessed the associations between failure to complete and 17 baseline factors of interest, which were prespecified before the analysis. Among the subset of participants who were in the analysis for at least 6 weeks, we examined the associations between final status category and various factors that characterized their experience during the first 6 weeks in the study. Factors were categorized in part to ensure substantial numbers of participants in each level. Hypotheses about the effects of factors on completion status were tested using chi square tests, Fisher's exact tests, Mantel Haenszel tests. Parameters estimated by multivariable logistic regressions were tested using Wald tests. We included factors in regression models if they were associated with the outcome in bivariable analyses. None of the included factors were highly correlated. In both bivariable and multivariable analyses, a p-value of <0.05 was considered to indicate a significant association.Of the 1514 participants in this analysis, 635 (42%) failed to complete the study for reasons other than pregnancy. The proportion who withdrew early at each of the 14 study sites ranged from 17% to 83%. Only 3 centers had completion rates ≥65%. 
Forty nine participants (8% of those who withdrew) were discontinued by the site investigator because of a concern about their safety , staff error, or closure of the trial at the study site relative risk of failure to complete the trial Table . FactorsOf the 1095 participants who contributed more than 6 weeks to the analysis, those who in their first 6 weeks were not compliant with follow-up visits, coital diary completion, or use of the spermicide during sex were significantly less likely than others to complete the study Table . HoweverIn analyzing data from longitudinal studies, researchers commonly assume that the experience of participants who withdraw early, had they stayed in the study, would have been similar to the experience of those who completed. However, this assumption is generally impossible to confirm and is often implausible. If the assumption is false, the study findings may substantially misrepresent the likelihood of the outcome in the study population. If the degree of misrepresentation is not consistent across study groups, comparisons could be seriously biased. Indeed, some expert epidemiologists have suggested that a trial with losses of greater than 20% of the participants "would be unlikely to successfully withstand challenges to its validity" ,9.Our study, like other recent barrier contraceptive method studies, did not even approach this standard: 42% of our enrolled participants did not complete the trial. Furthermore, the participants who failed to complete were different in key ways from those who did – they reported significantly more frequent coitus at baseline, and they also were more likely to be younger, unmarried, and poorly compliant with study procedures and method use in the first few weeks after admission. All of these characteristics were associated to some extent with elevated risk of pregnancy in our population , which sClearly, increased attention to preventing this problem in future studies is imperative. In performing this analysis, our intention was to explore the potential impact of focusing retention efforts on participants with characteristics that are associated with failure to complete. However, although we did find some factors that were significantly associated with early withdrawal, none was highly predictive; that is, many participants without these factors failed to complete the study, and many with these factors did complete. Therefore, applying special efforts only to the high-risk participants would not likely have been sufficient to raise completion rates to desirable levels. In future trials, aggressive follow-up measures should be instituted universally. Such efforts might include assigning individual "case-workers" to participants, using novel means for communicating with the participants, such as pagers, conducting visits at participants' homes or at other locations convenient for them, providing specific reimbursement for expenses such as travel, parking, and child care, or providing extra incentives for completing follow-up. Researchers should be mindful, however, that one downside to some of these approaches is that they might influence participants' use of the study product or other behaviors related to the study outcome, which is detrimental if the goal of the trial is to estimate effectiveness during "typical use" of the product.In our study, participants who enrolled at study centers where enrollment was slow were at increased risk of failure to complete the study. The reason for this association is unclear. 
Factors at these centers that hindered enrollment also may have adversely affected participants' interest in remaining in the study. Alternatively, in responding to pressure to hasten recruitment, these centers may have enrolled women who were not good candidates for study completion. This latter possibility emphasizes the need to maintain a careful balance between recruitment and retention goals: rapid recruitment of participants who then drop out of the study is not beneficial to the study as a whole.The amount of reimbursement promised to our participants was strongly associated with final completion status in the bivariable analysis, but this effect was not significant when adjusted for other factors in our multivariable model. Numerous prior studies have shown that modest monetary incentives increase response rates to surveys or short follow-up studies ,11 Some We were surprised that several of the factors that we expected would be associated with early withdrawal did not show significant associations in this analysis. When we began this analysis, we presumed that one reason for both slow enrollment and poor follow-up rates in barrier method trials is the relatively poor efficacy of these products: women may consider them to be temporary or backup methods and thus may be unwilling to use them as their sole or primary contraceptive for the 6–12 month duration of these studies. However, in our study, participants who strongly wished to avoid pregnancy or who had completed their desired family size were not more likely than others to drop out, nor were participants who expressed concerns about contraceptive efficacy early in the trial. Furthermore, neither early medical problems nor other complaints about the spermicides were predictive of withdrawal. These findings differ from that of a previous randomized trial of spermicides conducted mostly in developing countries. In that trial, participants who initially liked the assigned product very much were more likely than others subsequently to complete the study and to use the product for a longer period of time after admission .In one respect, the poor retention rate in our study and in other barrier method trials is a result of the design of these studies, which typically call for censoring data when participants stop relying on the assigned contraceptive method. This design prohibits a true intent-to-treat analysis and is consequently a potential source of bias. Clearly, retention would be higher if the trials were designed at the outset to follow all subjects for the full intended duration of follow-up, even if they switched contraceptive methods. However, data from participants who are not using the method under study are not necessarily relevant to the efficacy and safety of the method. For the results of these trials to be meaningful, as many subjects as possible must not only complete follow-up but also continue to use the method during the full follow-up period. In our study, almost all the women who gave a reason for withdrawing early either cited problems with the spermicide or indicated that they wished to switch to another method after leaving the study. Our results are consistent with the findings of the 1995 National Survey of Family Growth, which showed that more than 47% of US spermicide users stopped relying on the method within the first 6 months of use . 
These fOur results suggest that to reduce bias potentially introduced by a large proportion of participants failing to complete the study, future barrier contraceptive method researchers should consider approaches in addition to those directly aimed at tracking and retaining individual participants. For example, both to reduce the burden on participants and to help the study staff maintain focus on follow-up, limiting data collection to critical variables may be appropriate. Complete collection of key data is clearly preferable to inadequate collection of less important data. Reducing the planned duration of follow-up would also certainly reduce withdrawals; although a larger sample size would be needed to provide the desired levels of precision and power, this disadvantage might be overcome if the shorter study were more attractive to potential participants. Given the large proportion of women who stop using the method earlier than 6 months, it is not clear that 6-month pregnancy probabilities are clinically needed anyway. Adding a run-in period to the trial before randomization might be helpful in excluding participants likely to drop out very early after admission, although such an addition might deter enrollment of other women as well, which is also a problem in these trials. Finally, innovative study designs to measure product efficacy should be evaluated. The design proposed by Steiner et al., which compares the one-month pregnancy probability in a relatively small number of women using a contraceptive method to the probability in women using a placebo, offers an alternative to the traditional 6–12 month trial . It showNo authors have any declared interests except the following:Elizabeth Raymond owns stock in Johnson and Johnson.Mitchell Creinin serves as a speaker for Ortho.Alfred Poindexter has had research grants from Columbia Laboratories and serves as speaker for Ortho.EGR helped design the trial, managed the trial, planned this analysis, and drafted the manuscript.PLC and BPL helped design the trial and/or this analysis, performed the analysis, and contributed to the manuscript.JL designed the trial and contributed to the manuscript.Other authors participated in the design of the trial, conducted the trial, and contributed to the manuscript.The pre-publication history for this paper can be accessed here:
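For concreteness, the "last day in the analysis" rule described in the methods above (the earliest of the estimated fertilization date, the last date of reliance on the assigned spermicide, the last date pregnancy status could be reliably determined, and 183 days after randomization) can be sketched as follows; the field names and dates are hypothetical.

```python
from datetime import date, timedelta

# Sketch of the censoring rule: the earliest of four candidate dates.
# None marks events that did not occur for a given participant.
def last_analysis_day(randomized_on, fertilization=None, last_reliance=None,
                      last_status_known=None):
    cutoff = randomized_on + timedelta(days=183)
    candidates = [d for d in (fertilization, last_reliance,
                              last_status_known, cutoff) if d is not None]
    return min(candidates)

print(last_analysis_day(date(1999, 3, 1),
                        last_reliance=date(1999, 6, 15),
                        last_status_known=date(1999, 7, 1)))
# -> 1999-06-15 (the participant stopped relying on the method first)
```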
S. cerevisiae, C. elegans, E. coli, A. thaliana, D. melanogaster, and H. sapiens. We use genomic sequence information to connect these data and compare global and modular properties of the transcription programs. Linking genes whose expression profiles are similar, we find that for all organisms the connectivity distribution follows a power-law, highly connected genes tend to be essential and conserved, and the expression program is highly modular. We reveal the modular structure by decomposing each set of expression data into coexpressed modules. Functionally related sets of genes are frequently coexpressed in multiple organisms. Yet their relative importance to the transcription program and their regulatory relationships vary among organisms. Our results demonstrate the potential of combining sequence and expression data for improving functional gene annotation and expanding our understanding of how gene expression and diversity evolved.Comparing genomic properties of different organisms is of fundamental importance in the study of biological and evolutionary principles. Although differences among organisms are often attributed to differential gene expression, genome-wide comparative analysis thus far has been based primarily on genomic sequence information. We present a comparative study of large datasets of expression profiles from six evolutionarily distant organisms: Comparative analysis of sequence and gene expression data from bacteria, yeast, worms, flies, weeds, and humans hints at the potential to extract biological and evolutionary principles We performed a statistical analysis comparing the pairwise correlations between genes in one organism to the correlations between their respective homologues. Indeed, a significant fraction of such correlations were similar see . The strSaccharomyces cerevisiae expression data , while in the other organisms they may be required to accelerate folding during cell growth.The observation that groups of functionally related genes are often coexpressed in multiple organisms prompted us to ask whether also the higher-order regulatory relationships between these groups have been conserved . To addrC. elegans), the standard deviations of the correlation coefficients do not exceed 0.1, even when removing half of the expression profiles. Taken together, these results indicate that, despite the sparseness of the data, our findings reflect real properties of the expression networks and not the specific subset of experimental conditions used.In order to test whether the variations in the regulatory relations among functional groups in different organisms are due to the use of unrelated sets of experimental conditions, we restricted both the human and the yeast expression data to the cell cycle experiments. We found that the correlations between modules did not change qualitatively due to this restriction B and 2C.To compare the higher-order regulatory structures more systematically, we decomposed the expression data of each organism into a set of transcription modules using the iterative signature algorithm (ISA) we proposed recently A and 3B.C. elegans tree exhibits a sharp transition between a regime dominated by a single branch (from which only few less-stable modules branch off) to a part of the tree that rapidly bifurcates into many branches at higher thresholds , most modules have either significantly less or significantly more homologues than expected of the connectivity k (the number of edges of a particular gene). 
We find that for all organisms, the connectivity is distributed as a power-law, n(k) ∼ k−γ, with similar exponents γ ≈ 1.1–1.8 that two genes of connectivity k and k′, respectively, are connected with each other . Interestingly, in the three organisms in which large-scale knockout information is available identified by our modular analysis. The decrease of modules E. Thus, Comparing genomic properties of different organisms is of fundamental importance in the study of biological and evolutionary principles. Although much of the differences among organisms is attributed to different gene expression, comparative analysis thus far has been based primarily on genomic sequence information. The potential of including functional genomic properties in a comparison analysis was demonstrated in recent studies that compared protein–protein interaction networks of different organisms .In this paper we presented a comparative analysis of large datasets of expression profiles from six evolutionarily distant organisms. We showed that all expression networks share common topological properties, such as a scale-free connectivity distribution and a high degree of modularity. While these common global properties may reflect universal principles underlying the evolution or robustness of these networks, they do not imply similarity in the details of the regulatory programs. Rather, with a few exceptions, the modular components of each transcription program as well as their higher-order organization appear to vary significantly between organisms and are likely to reflect organism-specific requirements.http://barkai-serv.weizmann.ac.il/ComparativeAnalysis/.Nevertheless, coexpression of functionally linked genes is often conserved among several organisms. Based on this finding, we proposed an efficient method that uses coexpression analysis for improving sequence-based functional annotation. An interactive implementation of this algorithm is available at Our analysis was based on the available expression data, which are still sparse for most organisms. It is likely that the modular decompositions we obtained are partial, so additional modules can be identified as more expression data become available. Nevertheless, by analyzing the sensitivity of our results to the number of conditions, we concluded that the composition of the modules themselves is rather robust. Moreover, the higher-order correlations between modules are only slightly affected by the number of conditions.S. cerevisiae . We excluded genes or conditions with more than 90% missing datapoints, resulting in expression matrices of the dimensions shown in Preprocessed expression data from Database using deDatabase , we onlyFASTA files for amino-acid sequences of coding regions were downloaded from the sources detailed in S. cerevisiae and E. coli) and RNAi experiments (C. elegans) were obtained from the sources indicated in Data for deletion mutants (mG of the genome G) and an associated set of regulating conditions . The defining property of a transcriptional module is self-consistency, which is achieved as follows. First, we assign scores to both genes and conditions that reflect their degree of association with the module. The gene score is the average expression of each gene over the module conditions, weighted by the condition score: g in condition c normalized over genes and conditions, respectively, such that gs, while the module conditions are those conditions in the dataset with the highest scores cs. 
The ISA identifies transcription modules through iterative refinement of a large number of random gene scores. In the present analysis we retained only genes whose score is not less than 70% of that of the most significant gene. The fraction of linked gene pairs is identical in all expression networks and was fixed to <k> = 0.001. Using the top 0.1% of all possible correlations corresponds to a lower limit on the correlation coefficients of between 0.63 for S. cerevisiae and 0.85 for D. melanogaster. The results are insensitive to the precise threshold value.

Each expression network can be described by a symmetric adjacency matrix A; the connectivity of gene i is k_i = Σ_{j≠i} A_{ij}. To obtain the connectivity distributions n(k), we used logarithmic binning: the bin edges were powers of 2, and we counted the number of genes with k_i between two edges, normalized by the bin width. We applied a linear fit to the log values of the bin centers against the normalized counts. We note that the resulting connectivity distributions scale as a power-law for a wide range of thresholds, and the exponents depend only weakly on the choice of the threshold. The clustering coefficient of gene i is C_i = (Σ_{k>j≠i} A_{ik} A_{kj} A_{ji}) / [k_i (k_i − 1)/2].

Interactive applications for the refinement of sets of homologous genes and the exploration of our modular decompositions of the expression data are available online. We also present details about the highly connected genes in each organism, the pairs of genes that are significantly correlated in two organisms, and the eight modules related to core processes in yeast (and their homologue modules before and after refinement) on our website at http://barkai-serv.weizmann.ac.il/ComparativeAnalysis.

Data S1: This note includes Figure S1 and Figure S2 (38 KB PDF).
Data S2: This note includes Figure S3, Figure S4, and Figure S5 (59 KB PDF).
Data S3: This note includes Figure S6 (11 KB PDF).
Data S4 (3 KB PDF).
Data S5: After this work was completed, we succeeded in processing the more than 2,000 human chip experiments deposited at the SMD. Removing genes and conditions with more than 90% missing values resulted in 1,474 expression profiles for 24,795 genes. Our Web tools ("GeneHopping" and "ModuleTree") allow researchers to use also this updated dataset (3 KB PDF).
Figure S7 (137 KB PDF).
Figure S8 (16 KB PDF).
Figure S9 (15 KB PDF).
Figure S10 (33 KB PDF).
Figure S11 (11 KB PDF).
Figure S12 (29 KB PDF).
Figure S13 (19 KB PDF).
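The connectivity and clustering computations described in the methods above can be sketched directly. The toy data below are random, so no real power-law is expected; note also that the analysis fixes the linked fraction at the top 0.1% of correlations, whereas this example keeps the top 1% so that the small random network has enough edges to bin.

```python
import numpy as np

# Build an adjacency matrix from the strongest pairwise correlations, bin
# the connectivity distribution n(k) logarithmically (bin edges at powers
# of 2), fit the power-law exponent, and compute clustering coefficients.
rng = np.random.default_rng(1)
X = rng.standard_normal((300, 40))            # genes x conditions (toy)
C = np.corrcoef(X)
np.fill_diagonal(C, 0.0)

# Keep the top 1% of correlations as edges (the paper uses 0.1%).
cutoff = np.quantile(np.abs(C[np.triu_indices_from(C, k=1)]), 0.99)
A = (np.abs(C) >= cutoff).astype(int)

k = A.sum(axis=1)                             # connectivity k_i = sum_j A_ij
edges = 2 ** np.arange(0, int(np.log2(k.max())) + 2)  # power-of-2 bin edges
counts, _ = np.histogram(k[k > 0], bins=edges)
widths = np.diff(edges)
centers = (edges[:-1] + edges[1:]) / 2
mask = counts > 0
slope, intercept = np.polyfit(np.log(centers[mask]),
                              np.log(counts[mask] / widths[mask]), 1)
print(f"fitted exponent gamma ~ {-slope:.2f}")

# Clustering coefficient C_i = triangles_i / [k_i (k_i - 1) / 2].
triangles = np.diag(A @ A @ A) / 2
with np.errstate(divide="ignore", invalid="ignore"):
    clust = np.where(k > 1, triangles / (k * (k - 1) / 2), 0.0)
print(f"mean clustering coefficient: {clust.mean():.3f}")
```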
Inconsistent findings have been reported on the occurrence and relevance of creatine kinase (CK) isoenzymes in mammalian liver cells. Part of this confusion might be due to induction of CK expression during metabolic and energetic stress.The specific activities and isoenzyme patterns of CK and adenylate kinase (AdK) were analysed in pathological liver tissue of patients undergoing orthotopic liver transplantation.The brain-type, cytosolic BB-CK isoenzyme was detected in all liver specimens analysed. Conversely, CK activity was strongly increased and a mitochondrial CK (Mi-CK) isoenzyme was detected only in tissue samples of two primary hepatocellular carcinomas (HCCs).The findings do not support significant expression of CK in normal liver and most liver pathologies. Instead, many of the previous misconceptions in this field can be explained by interference from AdK isoenzymes. Moreover, the data suggest a possible interplay between p53 mutations, HCC, CK expression, and the growth-inhibitory effects of cyclocreatine in HCC. These results, if confirmed, could provide important hints at improved therapies and cures for HCC. Creatine kinase (CK) isoenzymes catalyse the reversible transfer of the phosphate group of phosphocreatine (PCr) to ADP, to yield ATP and creatine (Cr). The CK/PCr/Cr system is present primarily in tissues with high and fluctuating energy demands such as brain, heart and skeletal muscle, and serves as a temporal and spatial "energy buffer" that helps to maintain a high intracellular phosphorylation potential in situations of increased metabolic demand .de novo. The liver is the main site of Cr production in the body . After ody see -6). Othe. Othede ody . Finallody isoenzyme. On the other hand, besides BB-CK which was suggested to be present in endothelial and Kupffer cells, Vaubourdolle et al. also proIn a number of studies reporting significant levels of CK activity in liver, interference by adenylate kinase (AdK) isoenzymes in the CK activity assays -21 is veBecause of these conflicting data, the goal of the present study was to analyse in detail the CK and AdK activities in pathological liver tissue of patients undergoing orthotopic liver transplantation.The present project was approved by the ethics commission of the University of Innsbruck. In total, 25 liver samples were analysed. Twenty-three samples were obtained from 18 explanted organs of liver transplant recipients, one sample was obtained at autopsy (no. 1), and the last sample was from a normal rat liver. According to pathomorphological criteria, the 25 samples can be divided into 5 groups: (1) Nine samples of cirrhotic liver tissue ; (2) six samples of neoplastic tissue ; (3) three samples of necrotizing liver tissue due to acute or subacute organ rejection ; (4) five samples of macroscopically normal liver parenchymal tissue surrounding focal liver pathologies ; (5) two samples originating from a normal rat liver (no. 12) and from a patient with steatosis hepatis (no. 1).All steps were performed on ice or at 4°C. Approximately 5 g of liver tissue was homogenized in 45 ml buffer A . The homogenate was subjected to centrifugation for 5 min at 800 g. The pellet was discarded, and the supernatant centrifuged for 4 min at 5,100 g (centrifugation C2). The supernatant of C2 was further clarified by centrifugation for 12 min at 12,300 g, thus yielding the cytosolic fraction. The pellet of C2 was resuspended in 10 ml buffer A, followed by centrifugation for 2 min at 12,300 g (C3). 
After resuspension of the C3 pellet in 10 ml of buffer A and centrifugation for a further 10 min at 12,300 g, the sediment was resuspended in 4 ml of buffer A, thus yielding the mitochondrial fraction. One-ml aliquots of the different fractions were immediately frozen in liquid nitrogen and stored at -80°C until analysis.

CK and AdK activities were determined with a coupled enzymatic assay. The assay medium contained MgCl2, 2.1 mM ADP, 2.1 mM NADP, 21 mM N-acetylcysteine, 9 U/ml of hexokinase, and 5.8 U/ml of glucose-6-phosphate dehydrogenase (both from Sigma). Enzymatic activity was measured at 25°C as an increase in NADPH absorbance at 340 nm. For AdK, three separate measurements were made for each sample in the same assay medium. For CK measurements, 5.1 mM AMP was added to the assay medium to inhibit AdK activity. For each sample, three measurements with 10.3 mM PCr and three measurements without PCr were made, and the CK activity was calculated as the difference of the respective means. All values in this paper represent specific activities per mg of homogenate, cytosolic or mitochondrial protein. Protein amounts were measured according to the method of Bradford.

Interestingly, we observed a strong induction of both BB-CK and Mi-CK expression in two samples of primary HCC. Despite the clearly elevated CK/AdK activity ratios in these two samples, in the absence of histochemical data it cannot be concluded with certainty whether the increased levels of CK are due to increased vascularization of the tumour, or to induction of CK expression in the malignant cells.

Induction of CK expression has been observed previously in many types of tumours. A key player in the picture might be the p53 tumour suppressor gene. Mutations in p53 are quite prevalent in HCC, especially in tumours with low cellular differentiation. This may relate to (i) the limited responsiveness of HCC to currently available therapeutic approaches and, thus, (ii) the poor prognosis associated with this disease. Cr analogues (cyclocreatine and beta-guanidinopropionic acid) and also Cr itself were previously shown to have antitumour activity, both in cell culture and in in vivo models, including hepatoma models.

The present findings shed light on some old enigmas and open up fascinating avenues for future research. Our findings do not support significant expression of CK in normal liver and most liver pathologies, but rather indicate that many of the previous misconceptions in this field can be explained by interference from AdK isoenzymes. On the other hand, given the need for improved understanding of the molecular pathogenesis of HCC, and for improved therapies and cures, the induction of CK expression in HCC described here calls for a more in-depth analysis of the interplay between p53 mutations, HCC, CK expression, and the growth-inhibitory effects of cyclocreatine in HCC.

AdK, adenylate kinase; BB-CK, brain-type cytosolic CK isoenzyme; CK, creatine kinase; Cr, creatine; HCC, hepatocellular carcinoma; Mi-CK, mitochondrial CK; MM-CK, muscle-type cytosolic CK isoenzyme; PCr, phosphocreatine.

The authors declare that they have no competing interests.

GM and RM covered the medical part of this study. GM, FNG and MW performed the biochemical experiments. MW drafted the manuscript.
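The conversion from the measured slope of NADPH absorbance to specific activity, as used in the assay described above, is standard Beer-Lambert arithmetic. The sketch below is an illustration of that calculation, not the authors' code; it assumes the usual molar absorption coefficient of NADPH at 340 nm (6.22 per mM per cm), a 1-cm light path, and the with/without-PCr difference described in the Methods. Volume and protein values are placeholders.

EPSILON_NADPH = 6.22   # mM^-1 cm^-1 at 340 nm (standard value)
LIGHT_PATH_CM = 1.0    # assumed cuvette path length

def rate_to_activity(dA340_per_min, assay_volume_ml, sample_volume_ml):
    """Convert an absorbance slope into enzyme activity (U per ml sample).

    1 U = 1 umol NADPH formed per minute; dA/dt / (epsilon * d) gives the
    concentration change in mM/min, i.e. umol/ml/min in the cuvette."""
    umol_per_ml_min = dA340_per_min / (EPSILON_NADPH * LIGHT_PATH_CM)
    return umol_per_ml_min * assay_volume_ml / sample_volume_ml

def ck_specific_activity(rates_with_pcr, rates_without_pcr,
                         assay_volume_ml, sample_volume_ml,
                         protein_mg_per_ml):
    """CK specific activity (U/mg protein): mean(+PCr) minus mean(-PCr)."""
    mean = lambda xs: sum(xs) / len(xs)
    delta = mean(rates_with_pcr) - mean(rates_without_pcr)
    u_per_ml = rate_to_activity(delta, assay_volume_ml, sample_volume_ml)
    return u_per_ml / protein_mg_per_ml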
During Caenorhabditis elegans vulval development, the anchor cell (AC) in the somatic gonad secretes an epidermal growth factor (EGF) to activate the EGF receptor (EGFR) signaling pathway in the adjacent vulval precursor cells (VPCs). The inductive AC signal specifies the vulval fates of the three proximal VPCs P5.p, P6.p, and P7.p. The C. elegans Rhomboid homolog ROM-1 increases the range of EGF, allowing the inductive signal to reach the distal VPCs P3.p, P4.p and P8.p, which are further away from the AC. Surprisingly, ROM-1 functions in the signal-receiving VPCs rather than the signal-sending AC. This observation led to the discovery of an AC-independent activity of EGF in the VPCs that promotes vulval cell fate specification and depends on ROM-1. Of the two previously reported EGF splice variants, the longer one requires ROM-1 for its activity, while the shorter form acts independently of ROM-1. We present a model in which ROM-1 relays the inductive AC signal from the proximal to the distal VPCs by allowing the secretion of the LIN-3L splice variant. These results indicate that, in spite of their structural diversity, Rhomboid proteins play a conserved role in activating EGFR signaling in C. elegans, Drosophila, and possibly also in mammals.

In brief: ROM-1 increases the range of EGF, which specifies the vulval fate of precursor cells, by functioning in the signal-receiving precursor cells rather than the signal-sending anchor cell.

Intercellular signaling pathways control many diverse processes, such as cell proliferation, differentiation, survival, migration, shape changes, and responses to the environment. In most instances, the release of the signaling molecules by the signal-sending cells constitutes a rate-limiting step that determines the spatial distribution and temporal duration of the response.

The epidermal growth factor receptor (EGFR) acts in a highly conserved signal transduction pathway that controls various cell fate decisions in metazoans. EGFR ligands such as the Drosophila growth factor Spitz, which activates the EGFR in multiple developmental processes, are produced as membrane-bound precursors in the signal-sending cell. In Drosophila, site-specific proteases of the Rhomboid family release the active growth factor; related intramembrane proteases, such as the site-2 protease S2P, process other membrane-bound substrates. The C. elegans genome encodes five Rhomboid homologs, but whether they play a similar role in EGFR signaling in this organism had not been established.

The C. elegans hermaphrodite vulva serves as a simple model by which to study signal transduction and cell fate determination during organogenesis. The AC in the somatic gonad induces three out of six equivalent vulval precursor cells in the ventral hypodermis to adopt vulval cell fates.

The five C. elegans Rhomboid homologs are rom-1, rom-2 (C48B4.2), rom-3 (Y116A8C.14), rom-4 (Y116A8C.16), and rom-5 (Y54E10A.14). ROM-1 shows the highest similarity to Drosophila Rhomboid-1, followed by ROM-2 (29% identity) and the more diverged ROM-3 (24% identity), ROM-4 (26% identity), and ROM-5 (29% identity). Mutagenesis experiments with Drosophila Rhomboid-1 have identified a catalytic triad formed by conserved asparagine, serine, and histidine residues that are necessary for the serine protease activity; these residues are conserved in the C. elegans Rhomboid homologs, suggesting that they are active proteases.

To characterize rom-1, we isolated rom-1 cDNA by RT-PCR. An SL1 trans-spliced leader sequence was identified at the 5′ end of the message that was spliced to the second of the six exons predicted by the C. elegans genome project (http://www.wormbase.org). The remaining intron-exon boundaries were confirmed experimentally and corresponded exactly to the predicted boundaries.
The conceptual translation of the 1,071-bp open reading frame (ORF), 357 codons including the stop codon, predicts a protein of 356 amino acids, with very short stretches of hydrophilic amino acids between the seven transmembrane domains, except for a longer loop consisting of 43 amino acids between the first and second transmembrane domains.

As a first step to examine the biological function of the rom genes, we used RNA interference (RNAi) to transiently knock down their expression. Feeding animals with bacteria producing rom-3 dsRNA had no effect on vulval development (unpublished data). Due to the high degree of sequence similarity between rom-3 and rom-4 (69.8% identity), rom-3 RNAi is likely to simultaneously reduce rom-4 function.

We also isolated a deletion allele of the rom-1 gene, rom-1(zh18), referred to below as rom-1(0). The rom-1(0) single mutants exhibited no obvious phenotype; they were healthy and fertile. In addition, we obtained the rom-2(ok966) allele from the C. elegans Gene Knockout Consortium. The rom-2(ok966) animals carry a 530-bp deletion that removes the fifth exon, which contains the predicted catalytic center with the essential histidine residue; we therefore refer to this allele as rom-2(rf). Consistent with the RNAi experiments, both rom-1(0) and rom-2(rf) single mutants exhibited normal vulval development. Also in rom-1(0) rom-2(rf) double mutants, no defects in vulval development were observed, ruling out a possible redundant function of the two genes.

The rom-1(0) mutation as well as rom-1 RNAi partially suppressed the multivulva (Muv) phenotype caused by overexpression of the LIN-3 EGF growth factor [lin-3(+)] (Table 1). The rom-1(0) mutation also suppressed the Muv phenotype of hs::mpk-1 animals that overexpress the wild-type MAPK MPK-1 under control of a heat-shock promoter together with Drosophila MEK-2 under control of the interferon-1alpha promoter (Table 1). In contrast, neither the rom-1(0) mutation nor rom-2 RNAi affected the Muv phenotype caused by the n1046 gain-of-function (gf) mutation in the let-60 ras gene, which renders vulval development partially independent of upstream signaling.

Moreover, the rom-1(0) mutation did not significantly enhance the vulvaless (Vul) phenotype caused by the lin-3(e1417), lin-2(n397), sem-5(n2019), or let-60(n2021) mutations that reduce the activity of the receptor tyrosine kinase (RTK)/RAS/MAPK pathway (unpublished data). Since these Vul mutants affect the cell fates of only the proximal VPCs, ROM-1 plays no role in the induction of the proximal VPCs by the AC. Thus, ROM-1 enhances the activity of the EGFR/RAS/MAPK pathway to allow the induction of the distal VPCs P3.p, P4.p, and P8.p.

In lin-15(rf) mutants, all VPCs adopt vulval cell fates independently of the LIN-3 signal, though induction in lin-15(rf) mutants depends on the activity of LET-23 and the other components of the EGFR/RAS/MAPK pathway. The rom-1(0) mutation did not suppress the Muv phenotype of lin-15(rf) animals, suggesting that loss of rom-1 function affects the LIN-3-dependent induction of vulval cell fates rather than the LIN-3-independent activity of the EGFR/RAS/MAPK pathway.

We also tested for interactions between rom-1 and the Notch and Wnt pathways, since both pathways control vulval cell fate specification in parallel with the RTK/RAS/MAPK pathway. In lin-12 notch(gf) animals, no AC is formed, and all VPCs adopt the 2° cell fate; we therefore examined rom-1(0) lin-12(gf) double mutants as well. Finally, we examined the effect of the rom-1(0) mutation on the egl-17::cfp expression pattern in mid-L2 larvae.
For this purpose, larvae were synchronized in the mid-L1 stage at 13 h of development by letting them hatch in the absence of food, and then development was allowed to proceed by adding food for another 24 h until they reached the mid-L2 stage (approximately 37 h of development). In mid-L2 wild-type larvae, egl-17::cfp was expressed at the highest levels in P6.p and at lower levels in P5.p and P7.p (Figures 2A and 2B). Loss of ROM-1 function had no effect on egl-17::cfp expression in the proximal VPCs, but significantly reduced egl-17::cfp expression in the distal VPCs when compared to wild-type animals (Figures 2C and 2D). The egl-17::cfp expression pattern in rom-1(0) animals is consistent with the epistasis data, which showed that loss of rom-1 function affects the induction of only the distal VPCs (see above).

To analyze the expression pattern of ROM-1, we generated a transcriptional rom-1 reporter by fusing 6.9 kb of the 5′ rom-1 promoter/enhancer region to the green fluorescent protein (gfp) ORF carrying a nuclear localizing signal (zhIs5[rom-1::nls::gfp]). With a translational full-length rom-1::gfp fusion construct, we failed to obtain transgenic lines that consistently expressed ROM-1::GFP. Moreover, a genomic DNA fragment encompassing the entire rom-1 locus failed to produce stable transgenic lines even when injected at relatively low concentrations (1–10 ng/μl), suggesting that elevated levels of ROM-1 are toxic to the animals.

The transcriptional rom-1::nls::gfp reporter was widely expressed in somatic cells throughout development. Surprisingly, we did not detect any rom-1::nls::gfp expression in the gonadal AC before the L4 stage, while consistent expression was observed in the Pn.p cells and the Pn.a-derived neurons from the L1 stage on. In early-L2 zhIs5 larvae, the six VPCs expressed rom-1::nls::gfp at equal levels (Figures 3A and 3B). After vulval induction, rom-1::nls::gfp was down-regulated in the 1° and 2° descendants of P5.p, P6.p and P7.p (Figures 3C and 3D), while the 3° descendants of P3.p, P4.p, and P8.p again expressed high levels of rom-1::nls::gfp after they had fused with hyp7 (Figures 3E and 3F). To test whether rom-1::nls::gfp expression depends on RTK/RAS/MAPK signaling in the VPCs, we introduced the zhIs5 transgene into lin-7(e1413) mutants that exhibit a penetrant Vul phenotype due to reduced LET-23 EGFR activity. In lin-7(e1413); zhIs5 animals, the up-regulation of rom-1::nls::gfp occurred less frequently. Thus, the AC signal up-regulates rom-1::nls::gfp expression in the VPCs that adopt vulval cell fates.

To examine whether ROM-1 acts in cells other than the AC (which is part of the somatic gonad), we tested the effect of loss of rom-1(+) function on vulval induction in gonad-ablated animals. If ROM-1 acts exclusively in the AC, then the rom-1(0) mutation should not affect vulval induction in gonad-ablated animals. On the other hand, if ROM-1 acts in cells other than the AC, then the rom-1(0) mutation should suppress vulval induction even in the absence of the AC.
Since the inductive AC signal is absolutely required to initiate vulval development, we performed these experiments in let-60(gf) or hs::mpk-1 animals, which exhibit a hyperactive EGFR/RAS/MAPK signaling pathway causing AC-independent vulval induction. We also included lin-3(+) animals because, as reported previously, in animals overexpressing lin-3 under control of its own promoter some vulval differentiation could still be observed in the absence of the AC, pointing at an additional source of LIN-3 from the transgene in cells outside of the gonad. Loss of rom-1 function in gonad-ablated lin-3(+), let-60(gf), or hs::mpk-1 animals caused a strong further reduction in vulval induction. In contrast, vulval induction in gonad-ablated lin-15(rf) animals, which exhibit lin-3-independent vulval differentiation, was not affected by the rom-1(0) mutation.

We also examined the egl-17::cfp expression pattern after removal of the AC. In gonad-ablated animals, residual egl-17::cfp expression was observed in all VPCs. The decrease in vulval induction caused by a lin-3 loss-of-function mutation [lin-3(0)] was much stronger than the decrease observed in gonad-ablated lin-3(+); hs::mpk-1 animals (the lethality of the lin-3(0) mutation was suppressed by the hs::mpk-1 transgene). Vulval induction in lin-3(0); hs::mpk-1 animals was not affected by gonad ablation, since the lin-3(0) allele eliminated lin-3 function in the AC. Moreover, a lin-3 reduction-of-function mutation almost completely abolished the expression of the egl-17::cfp marker in the VPCs. Loss of rom-1 function in a lin-3(0); hs::mpk-1 background caused no further decrease in vulval induction, suggesting that ROM-1 does not affect vulval development in the absence of LIN-3.

Expression of rom-1 in the Pn.p cells under control of the lin-31 promoter restored the Muv phenotype of rom-1(0); let-60(gf) and rom-1(0); hs::mpk-1 double mutants to levels comparable to those found in let-60(gf) and hs::mpk-1 single mutants. A lin-31 promoter construct (lin-31::cre) that was used as a negative control had no effect on vulval induction in rom-1(0); let-60(gf) animals. In contrast, expression of rom-1 in the AC under control of the AC-specific enhancer (ACEL) (ACEL::rom-1), which is located in the third intron of the lin-3 locus, did not restore the suppression of the Muv phenotype by rom-1(0). Consistent with the reporter data, these results indicate that vulval induction requires rom-1 function in the VPCs rather than in the AC.

We next expressed lin-3 dsRNA in the Pn.p cells in order to down-regulate by RNAi any possible lin-3 expression in the VPCs. For this purpose, a lin-3 hairpin cDNA fragment under control of the same Pn.p cell-specific lin-31 promoter used above (lin-31::lin-3i) was introduced into wild-type animals. In wild-type L4 larvae, the 1° descendants of P6.p in the vulF toroid ring secrete LIN-3 to specify the ventral uterine (uv1) cell fate in the somatic gonad. The lin-31::lin-3i transgene almost completely suppressed the Muv phenotype caused by the lin-31::lin-3S transgene, while the lin-31::cre transgene that was used as negative control had no effect. The lin-31::lin-3i transgene also reduced vulval induction in hs::mpk-1 animals and in animals lacking a gonad. Although egl-38(rf) single mutants exhibited wild-type levels of vulval induction, the egl-38(rf) mutation reduced the Muv phenotype of hs::mpk-1 animals to a similar degree as the rom-1(0) mutation or the lin-31::lin-3i transgene. Taken together, these results suggest that the VPCs themselves produce LIN-3 to amplify the inductive signal.

The lin-3 locus encodes two splice variants, termed LIN-3S (short) and LIN-3L (long), that are generated by the differential choice of the splice donor of exon 6. Transgenic lines generated by injection of a relatively low (1 ng/μl) or high (100 ng/μl) concentration of the lin-3S minigene exhibited a strong Muv phenotype (with induction indices ranging from 4.1 to 5.6).
In contrast, the lin-3L construct caused a Muv phenotype only when injected at a high concentration. For the lin-3XL construct, we obtained variable results; some lines exhibited a weak Muv and others no or even a Vul phenotype (unpublished data). Since we failed to observe a consistent phenotype with this minigene construct, we did not further pursue the analysis of the lin-3XL minigene.

We then tested the two splice variants in the lin-3(0) background. The lin-3S and lin-3L transgenes both rescued the larval lethality of lin-3(0) mutants, yielding adult Muv animals, although lin-3(0); lin-3S and lin-3(0); lin-3L animals were sterile. Loss of rom-1 function did not affect the viability or the Muv phenotype of lin-3(0); lin-3S animals. In contrast, the viability of lin-3(0); lin-3L mutants was reduced by loss of rom-1 function, and rom-1(0); lin-3(0); lin-3L animals that escaped the larval lethality exhibited a weaker Muv phenotype than lin-3(0); lin-3L animals, suggesting that the function of LIN-3L during vulval induction partially depends on ROM-1 activity. The lin-31::lin-3S and lin-31::lin-3L transgenes both caused a strong Muv phenotype in the presence and absence of the AC. Notably, an unprocessed, membrane-bound form of LIN-3 that is retained on the plasma membrane of the AC induces the 1° fate in the adjacent VPC P6.p through juxtacrine signaling.

It is possible that the reporter constructs used were lacking some of the regulatory sequences necessary to drive strong lin-3 expression in the VPC lineage. Other potential sources of LIN-3 may be the posterior ectoderm or the excretory system in the head. However, it seems unlikely that LIN-3 secreted from cells at the anterior or posterior end of the animal influences vulval induction, since we did not observe a bias favoring the induction of anterior or posterior VPCs in the absence of the AC.

Three lines of evidence indicate that ROM-1 functions in the signal-receiving VPCs rather than the signal-sending AC. First, the rom-1 reporter is expressed in the VPCs but was not detected in the AC before the L4 stage. Second, loss of rom-1 function reduced vulval induction even in animals lacking a gonad. Third, expression of rom-1 in the Pn.p cells, but not in the AC, restored vulval induction in rom-1(0) mutants.

The levels of rom-1::nls::gfp are highest in the proximal VPCs that adopt 1° and 2° vulval cell fates, suggesting that the proximal VPCs are competent to secrete LIN-3 in response to the inductive AC signal. LIN-3 from the proximal VPCs may facilitate the induction of the more distally located VPCs by paracrine signaling.

Loss of rom-1 function does not reproduce the other phenotypes of the lin-3(0) mutation, including the ovulation defects (unpublished data); LIN-3S, which acts independently of ROM-1, may mediate these functions. This may explain why loss of rom-1 function causes neither the larval lethality nor the sterility observed in lin-3 mutants. Furthermore, our data indicate that LIN-3L function in the VPCs almost completely depends on ROM-1 activity. It seems improbable that the 15 amino acid insertion in LIN-3L could change the substrate specificity toward the ROM-1 protease. A more likely explanation for the inherent difference in the dependence of the LIN-3 splice variants on ROM-1 is suggested by the experiments performed with Drosophila Spitz. LIN-3L, as well as the newly identified longer variant (LIN-3XL), differs from LIN-3S by a 15 or 41 amino acid insertion, respectively, in the juxtamembrane region just prior to the predicted Rhomboid cleavage site at the start of the transmembrane domain. Since the secretion of active Spitz depends on Rhomboid-mediated cleavage near the transmembrane domain, sequences in this juxtamembrane region may determine whether a LIN-3 variant requires ROM-1 for its release.

Standard methods were used for maintaining and manipulating Caenorhabditis elegans. The following mutations were used. LGIII: dpy-19(e1259), lin-12(n137gf), rom-1(zh18) (this study), rom-2(ok966) (C.
elegans Gene Knockout Consortium), and unc-119(e2498); LGIV: let-60(n1046), lin-3(n1049), unc-5(e53), unc-44(e362), lin-45(sy96), unc-24(e138), mec-3(e1338), dpy-20(e1282), egl-38(n578), and mec-3(n3197); LGX: sem-5(n2019), lin-15(n309), and bar-1(mu38). Extrachromosomal and integrated arrays: zhEx22, zhEx66, zhEx68, zhEx69, zhEx72, zhEx73, zhEx78, zhEx81, zhEx88, zhEx89, syIs12[hs::lin-3extra], gaIs36[hs::mpk-1; Dmek-2], and an integrated egl-17::cfp reporter array. huIs7 and pTG96 were used as cotransformation markers. The rom-1::nls::gfp array was integrated in the genome following gamma-irradiation with 3,000 rad to generate zhIs5, and the strain was backcrossed six times before analysis. Double and triple mutants were constructed using standard genetic methods. Where cis-linked markers were used, they are indicated in the table legends.

The rom-1::nls::gfp reporter construct (pRH2) was generated by ligating a HindIII-NheI–restricted 6,998-bp genomic fragment spanning the entire 5′ upstream region of F26F4.3, isolated by PCR amplification with the primers OAD49 (5′-GGAAGCTTGCATGCCCAACGAAATCGATA-3′) and OAD59 (5′-GGGCTAGCCATGTTGTGGAGAAGGAGAAC-3′), into the HindIII-XbaI site of pPD96.04. The rom-1::gfp translational reporter construct (pAD31) was generated by PCR amplification of a 3,146-bp genomic fragment containing 1,849 bp of 5′ sequences and the entire rom-1 ORF using the primers OAD47 (5′-GACTCTAGAGTTGTCAAAAGGTCACGGG-3′) and OAD51 (5′-ATCCTCTAGAGTTGAGCAATTTTCGTTGTTCCAC-3′), followed by XbaI restriction and ligation to XbaI-digested vector pPD95.75. The upstream promoter region of this construct was further extended by replacing a 420-bp PstI fragment with a 2,099-bp PstI genomic fragment corresponding to positions −1,432 to −3,531 relative to the predicted translation start codon of F26F4.3. The lin-31::rom-1 construct (pAD16) was generated by ligating a 1,601-bp SalI-NotI fragment spanning the entire rom-1 coding sequence, amplified with the primers OAD44 (5′-TTTTGGTCGACCTCCTTCTCCACAAC-3′) and OAD45 (5′-TTTGGCGGCCGCCTATGAGCAATTTTCG-3′), into the SalI-NotI site of the pB253 vector. An ACEL-driven gfp reporter showed strong and specific GFP expression in the AC beginning in the mid-L2 stage, as reported previously.

To isolate lin-3 cDNAs, fragments were amplified by RT-PCR with a primer pair including OAD32 (5′-CGTATCTGCAGAATCCAACTCGATATTAATTAC-3′), using first-strand cDNA synthesized from mixed-stage total RNA as template. The PCR-amplified products were size-fractionated by agarose gel electrophoresis, cloned into the pGEMT vector (Promega), and sequenced to identify clones encoding individual splice variants. To obtain a full-length lin-3 cDNA construct (pAD27), a 1,996-bp XhoI fragment from the EST clone yk1053b07 (confirmed to encode full-length lin-3XL cDNA by DNA sequencing) was first subcloned into the XhoI site of a modified pBluescriptSK vector (pAD23), in which the PstI site had been destroyed by restriction with EcoRV and SmaI and religation of the resulting blunt ends. To generate full-length lin-3S and lin-3L cDNA constructs, 1,065-bp and 1,110-bp PstI fragments specific for each splice variant, isolated from the partial cDNA clones described above, were used to replace the 1,188-bp PstI fragment in the full-length lin-3XL cDNA construct (pAD27).
The lin-31::lin-3S and lin-31::lin-3L constructs were generated by cloning the 1,133-bp and 1,178-bp XhoI cDNA fragments of the S and L splice variants into the SalI site of pB253 (XhoI and SalI generate compatible cohesive ends). The lin-3 minigenes were generated by cloning a 6.1-kb genomic fragment, spanning the entire ORF of lin-3 together with 574 bp of 5′ and 236 bp of 3′ sequences and amplified with the primers OAH137 (5′-CCAGAAAGTTCATGTGAATCAT-3′) and OAH138 (5′-TCACAGGAACTGAGAGGGAGAGTG-3′), into the pGEMT vector. From this construct, a 6,206-bp ApaI-SacI fragment was subcloned into pAD23 to obtain pAH62. The minigenes encoding each of the splice variants were obtained by replacing the 2,728-bp lin-3 genomic PstI fragment with the 1,065-, 1,110-, and 1,188-bp PstI fragments isolated from cDNAs of the different splice variants. To construct the lin-31::lin-3 hairpin plasmid (pAD35), a 964-bp NdeI-HindIII lin-3S cDNA fragment from pAD25 was cloned into NdeI-HindIII–digested pAD27, using the recA– E. coli SURE strain as host, to obtain pAD32. The resulting 1,918-bp lin-3 hairpin fragment was excised with XhoI from pAD32 and subcloned in E. coli SURE into the SalI site of the pB253 vector to obtain pAD35. To generate the ACEL::rom-1 construct, the ACEL, located in the third intron of the lin-3 locus, was cloned upstream of the rom-1 coding sequence, and transgenic lines were obtained as reported.

The dsRNA used to knock down rom-1 function was generated by in vitro transcription, using as template a 350-bp rom-1 cDNA fragment corresponding to nucleotides −17 to 725 relative to the predicted start codon of the ORF, inserted into pGEM-T (Stratagene). Transcripts were prepared using T7 and SP6 RNA polymerases and annealed prior to injection as described.

The rom-1(zh18) deletion mutant was isolated from an ethyl methanesulfonate–mutagenized library consisting of approximately 10^6 haploid genomes, as previously described; DNA pools were screened by PCR to identify the deletion. The mutant strain was backcrossed six times against N2 before further experiments were done.

Vulval induction was scored by examining worms at the L4 stage under Nomarski optics, as described. The number of induced VPCs per animal was counted to obtain the induction index.

The GenBank (http://www.ncbi.nlm.nih.gov/) accession numbers of the Rhomboid genes discussed in this paper are: C.e. ROM-1 (AAA91218), C.e. ROM-2 (CAA82377), C.e. ROM-3 (CAB55154), C.e. ROM-4 (CAB55122), C.e. ROM-5 (AAF60768), D.m. Rho-1 (CAA36692), D.m. Rho-2 (AAK06752), D.m. Rho-3 (AAK06753), D.m. Rho-4 (AAK06754), D.m. Rho-6 (NP_523557), and H.s. Rho-1 (CAA76629).
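Methods sections like the one above rely on long primer lists. As a small, purely illustrative aid (not part of the original protocol), the following sketch shows how such primers can be sanity-checked programmatically, using primer OAD49 quoted above:

def reverse_complement(seq):
    """Reverse complement of a DNA sequence (ACGT alphabet only)."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq.upper()))

def gc_content(seq):
    """Fraction of G+C bases, a rough proxy for primer melting behaviour."""
    s = seq.upper()
    return (s.count("G") + s.count("C")) / len(s)

# Primer OAD49 from the methods above (5'->3')
oad49 = "GGAAGCTTGCATGCCCAACGAAATCGATA"
print(reverse_complement(oad49))
print("GC content: %.2f" % gc_content(oad49))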
This study describes the 24-h changes in plasma prolactin levels, and in dopamine (DA), serotonin (5HT), gamma-aminobutyric acid (GABA) and taurine concentrations in the median eminence and adenohypophysis, of newborn male rabbits.

Animals were kept under controlled light-dark cycles, housed in individual metal cages, and fed ad libitum with free access to tap water. On day 1 after parturition, litter size was standardized to 8–9 to assure similar lactation conditions during the experiment. Groups of 6–7 suckling male rabbits were killed by decapitation on day 11 of life at six different time points during a 24-h period.

Plasma prolactin levels changed significantly throughout the day, showing a peak at the beginning of the active phase (at 01:00 h) and a second maximum during the first part of the resting phase (at 13:00 h). Median eminence DA concentration also changed significantly during the day, peaking at the same time intervals as plasma prolactin. A single maximum (at 13:00 h) was found for adenohypophysial DA concentration. Individual adenohypophysial DA concentrations correlated significantly with their respective plasma prolactin levels. A maximum in median eminence 5HT concentration occurred at 21:00 h, whereas adenohypophysial 5HT peaked at 13:00 h. Median eminence 5HT concentration and circulating prolactin correlated inversely. In the median eminence, GABA concentration attained maximal values at 21:00 h, whereas it reached a maximum at 13:00 h in the pituitary gland. Median eminence GABA concentration correlated inversely with circulating prolactin. In the median eminence, taurine values varied in a bimodal way, showing two maxima, during the second half of the rest span and of the activity phase, respectively. In the adenohypophysis, minimal taurine levels coincided with the major plasma prolactin peak (at 01:00 h). Circulating prolactin and adenohypophysial taurine levels correlated inversely.

The correlations among the changes in the neurotransmitters analyzed and circulating prolactin levels may explain the circadian secretory pattern of the hormone in newborn male rabbits.

The mechanisms that regulate prolactin secretion are complex. Two major types of regulation, inhibitory and stimulatory, converge on the lactotrophs, with hypothalamic dopamine being the main inhibitory factor.

It is well known that basal secretion of prolactin varies throughout the day, describing a characteristic pattern with maximal values close to the light-dark transition.

The rat is very immature at birth, so that newborn and suckling rats are very sensitive to manipulations that can affect adulthood.

The rabbit is probably the laboratory animal best studied in the wild, due to its abundance, size and importance as an agricultural pest.

In contrast to the large amount of information available on circadian rhythms in adult mammals, studies on circadian phenomena in neonates are few.

This study was performed using 24 multiparous, lactating Californian × New Zealand White crossbred doe rabbits. Animals were housed in research facilities of the Animal Production Department. They were maintained under controlled light-dark cycles, housed in individual metal cages, and fed ad libitum with a commercial pellet diet, with free access to tap water. On day 1 after parturition, litter size was standardized to 8–9 by adding or removing kits, to assure similar lactation conditions during the experiment. This study was performed according to the EEC Council Directive for the care of experimental animals.
Groups of 6–7 suckling male rabbits were killed by decapitation on day 11 of life at six different time points throughout a 24-hour cycle. The brains were quickly removed, and the median eminence and the anterior pituitary were taken out. Anterior pituitaries were weighed and homogenized in chilled (0–1°C) 2 M acetic acid. After centrifugation, the samples were either analyzed for DA and 5HT, or boiled for 10 min and further centrifuged at 14,000 rpm for 20 min to measure GABA and taurine.

Plasma prolactin levels were measured by a specific homologous RIA method using a rabbit prolactin reference preparation (L-RP-1). The hormone was labeled with 125I by the chloramine-T method, and Staphylococcus aureus was used to precipitate the bound fraction. All samples were measured in the same assay.

DA and 5HT concentrations were measured by high-pressure liquid chromatography (HPLC) with electrochemical detection, as described elsewhere, using a C-18 reverse-phase column. The potentials applied against the H2/H+ reference electrode were: conditioning electrode, -0.4 V; preoxidation electrode, +0.10 V; working electrode, +0.35 V. Indoleamine and catecholamine concentrations were calculated from the chromatographic peak heights by using external standards, and were expressed as pg/μg protein. The linearity of the detector response for DA and 5HT was tested within the concentration ranges found in median eminence and adenohypophysial supernatants.

Amino acids were isolated and analyzed by HPLC with fluorescence detection after precolumn derivatization with O-phthalaldehyde (OPA), as described elsewhere. An aliquot of each supernatant was derivatized prior to injection.

Statistical analysis of the results was performed by one-way analysis of variance (ANOVA), followed by post-hoc Tukey-Kramer multiple comparisons tests. Curve estimation in regression analysis was made using SPSS software, version 10.1. P values lower than 0.05 were considered evidence of statistical significance.

Plasma prolactin changed significantly as a function of the time of day. Median eminence DA concentration also changed in a bimodal way, showing two maxima coinciding with those of plasma prolactin at the active and resting phases of the diurnal cycle. In the regression analyses, individual adenohypophysial DA concentrations correlated with plasma prolactin levels (r² = 0.16, b0 = -123.7, b1 = 18.1), and two further, inverse linear relationships with plasma prolactin were obtained (r² = 0.21, b0 = 25.7, b1 = -0.22, F = 6.6, p < 0.01; r² = 0.42, b0 = 11.6, b1 = -0.11, F = 17.4, p < 0.0001), in line with the inverse correlations noted above for median eminence 5HT and GABA and for adenohypophysial taurine.

The present study, performed in neonatal male rabbit pups sacrificed at six different time intervals during a 24-h cycle, describes for the first time significant changes in plasma prolactin levels throughout the day. In concomitant measurements of median eminence and adenohypophysial concentrations of DA, 5HT, GABA and taurine, a clear daily pattern was found in almost every case. Contrasting with neonatal rats, which did not display any circadian pattern of plasma prolactin, a daily rhythm of the hormone was already present in these 11-day-old rabbits.

In adult rabbits, daily patterns of prolactin secretion depend on light/dark phases.

The activity of several nuclei of the rabbit hypothalamus increases with age and with the experience of anticipatory arousal.
Among other possible neuromodulators of prolactin secretion, serotonin must be considered: the arcuate nucleus receives a dense serotonergic innervation from a population of brainstem neurons arising mainly from the midbrain raphe nuclei.

Taurine has also been implicated in the regulation of prolactin release.

A relatively dense innervation of GABA terminals exists in the external layer of the median eminence, and GABA acting on specific receptors in the anterior pituitary has been reported to suppress prolactin secretion.

In suckling male rabbits, plasma prolactin and the median eminence and anterior pituitary concentrations of several neuromodulators change on a daily basis. The existence of significant correlations among several of the neurotransmitters analyzed and plasma prolactin levels may explain the circadian secretory pattern of prolactin at this age in suckling rabbits. Collectively, the present results differ from the reported absence of circadian rhythmicity of prolactin and of median eminence and adenohypophysial neuromodulators in rats at a comparable age.

The author(s) declare that they have no competing interests.

MPA and PC carried out the experiment, the immunoassays, and the analysis of catecholamines, indoleamines and amino acids. DPC and AIE designed the experiments; DPC also performed the statistical analysis. PR took care of the experimental animals. AIE supervised the technical implementation and drafted the manuscript. All authors read and approved the final manuscript.
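Quantification against external standards, as used above for the HPLC measurements, amounts to a linear calibration of peak height against known standard amounts. The sketch below illustrates the arithmetic with invented calibration values; none of the numbers are from the study.

import numpy as np

# Peak heights of external standards (arbitrary detector units) at known
# on-column amounts (pg) -- illustrative values only.
std_amounts = np.array([50.0, 100.0, 200.0, 400.0])
std_heights = np.array([1210.0, 2380.0, 4850.0, 9620.0])

# Least-squares slope and intercept of the calibration line
slope, intercept = np.polyfit(std_heights, std_amounts, 1)

def peak_to_pg(height):
    """Convert a sample peak height into pg of analyte via the calibration."""
    return slope * height + intercept

# Express as pg analyte per ug protein, as in the Methods above
sample_height, protein_ug = 3150.0, 42.0
print(peak_to_pg(sample_height) / protein_ug)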
We assessed the applicability of the Time and Change (T&C) test as an accurate and convenient means to screen for dementia in primary care and community settings.

The study group comprised 59 patients and 405 community participants, all of whom were aged 65 years and over. The time component of the T&C test evaluated the ability of a subject to comprehend clock hands that indicated a time of 11:10, while the change component evaluated the ability of a subject to make 1,000 Won from a group of coins of smaller denominations.

The T&C test had a sensitivity and specificity of 73.0% and 90.9%, respectively, and positive and negative predictive values of 93.1% and 66.7%, respectively. The test-retest and interobserver agreement rates were both 95% (κ = 0.91). The association between the T&C test and the K-MMSE was modest but significant. The T&C test scores were not influenced by educational status.

We conclude that the T&C test is useful as supplemental testing of important domains in addition to traditional measures such as the MMSE. Because the T&C test is simple, rapid, and easy to use, it can be applied conveniently to elderly subjects by trained non-specialist personnel.

Dementia, an acquired persistent impairment of cognitive functioning, is an increasingly common problem in Korea, and is associated with increased morbidity and mortality, functional loss, caregiver burden, and institutionalization.

Screening for dementia has been recommended to increase its detection. The Mini-Mental State Examination (MMSE) is a brief screening test that quantitatively assesses the cognitive status of elderly people, although its results are known to be influenced by educational background.

We have therefore introduced the T&C test, and assessed its applicability as a convenient and accurate means to detect early-stage dementia in primary care and community settings.

One study group initially comprised 60 participants who either visited or were admitted to a hospital located in an urban area, namely Gwangju city (referred to forthwith as area A), between November and December 2001. Of these 60 participants, 37 were diagnosed with Alzheimer's disease or vascular dementia, 11 had mental illnesses such as schizophrenia or depression, 7 had organic brain disease such as cerebral apoplexy or Parkinson's disease, and 5 were alcoholics.

Another study group initially comprised 412 participants who were recruited in 2002 from all residents of Jangseong-county, Jeonnam province, South Korea, aged 65 and over (referred to forthwith as area B). The area consists of 11 towns and had an estimated population of 54,528, of whom about 16.5% were aged 65 and over. The subjects were selected from each stratum (town) using cluster sampling. All subjects in whom vision or hearing was impaired were excluded from the study. We finally selected 59 (of 60) and 405 (of 412) participants from areas A and B, respectively. All participants gave informed consent; when participants with dementia could not provide informed consent, their caregivers were asked to provide it.

The 'time' component of the T&C test evaluated the ability of a subject to comprehend that the hands of a clock indicated 11:10. The diameter of the clock was 15 cm, and the distance between the clock and the participant was 25–35 cm. If the subject responded incorrectly on the first attempt, the interviewer posed the same question again, i.e., a second attempt was permitted. The time (in seconds) that it took for the participant to respond correctly was recorded.
A time limit of 60 s for a response was imposed.

In the 'change' component of the T&C test, the participants were required to make 1,000 Won from a group of coins of several smaller denominations, namely one 500-, seven 100-, and seven 50-Won coins, that were placed on a table in front of the participant. The participants were given 120 s to complete the task, and an additional 120 s was granted if the subject failed to complete the task at the first attempt. The time (in seconds) that it took for the participant to complete the task was recorded by the interviewer.

Participants who completed both of the aforementioned tasks successfully were classified as negative for suspected dementia, whereas participants who failed to complete either or both of the tasks were classified as positive for suspected dementia. The T&C test was performed prior to the other cognitive tests, by an interviewer who was unaware of the results of the other tests.

For the participants from area A, the T&C test was conducted by a nurse, while a physician who was blinded to the results of the T&C test performed a clinical examination and neuropsychiatric inventory. The diagnostic criteria for dementia were based on those of the Diagnostic and Statistical Manual of Mental Disorders (Fourth Edition).

The participants from area B were interviewed by interviewers who had undergone sufficient training to be able to conduct the K-MMSE and the T&C test.

All participants were interviewed for data on social and demographic factors such as address, age, sex, educational status, and the number of family members living together.

To assess the validity of the T&C test as a method to screen for dementia in the participants from area A, we compared the results of the test to a reference standard (diagnosis by a physician), and evaluated sensitivity, specificity, and positive and negative predictive values. To assess the test-retest reliability, the same interviewer conducted the test twice at an interval of 24 h (n = 22 participants). To assess the interobserver reliability, two different interviewers conducted the same test, also at an interval of 24 h (n = 22 participants).

To assess the applicability of the T&C test as a method to screen for dementia in the participants from area B, the association between the T&C and K-MMSE test scores was evaluated.

The sensitivity, specificity, and positive and negative predictive values of the T&C test, with 95% confidence intervals, were calculated. After classifying the participants from area B into a group in which dementia was suspected and another group in which dementia was not suspected, based on the results of the T&C test, Student's t-test was used to compare the total K-MMSE score and the scores for each of the components of the K-MMSE between the two groups, and Spearman's rank correlation was used to analyze the correlation between the T&C and K-MMSE test scores. SPSS for Windows and Stata Software 6.0 were used for the statistical analyses.

For the participants from area A, the average age was 73.2 ± 7.9 years, 55.9% of the group was female, the average number of years of education was 4.2 ± 5.4, and 62.7% lived alone.
For the participants from area B, the average age was 73.1 ± 6.1 years, 58.5% of the group was female, the average number of years of education was 3.1 ± 4.0, and 29.0% lived alone.

There was a significant difference in the total K-MMSE score, and in the scores for each of the components of the K-MMSE, between participants classified as positive and those classified as negative for dementia according to the T&C test (see Table). The association between the T&C and K-MMSE test scores was modest but significant.

A logistic regression, with the T&C test result as the dichotomous dependent variable and educational status as the predictor, adjusted for age and sex, revealed no association between them. The odds ratio for the T&C test associated with educational status was 0.877 (95% CI = 0.766–1.004).

For the time task of the T&C test, 75.8% of the participants produced a correct response on the first attempt, after 6.3 ± 6.7 s, and 45 participants (11.1%) produced a correct response on the second attempt. For the change task, 81.2% of the participants produced a correct response on the first attempt, after 12.7 ± 14.2 s, and 43 participants (10.6%) produced a correct response on the second attempt. Thirty-four participants (8.4%) were tested twice for both the time and the change task. None of the subjects refused to respond during the tests.

Interracial variability in both the etiology of dementia and the accuracy of cognitive testing suggests that there is an urgent need to develop racially appropriate methods of cognitive assessment. The rate of vascular dementia due to cerebrovascular disease is much higher in Koreans than in other races; this is due to insufficient prevention, diagnosis, and treatment of hypertension, diabetes, and hyperlipidemia in Korea. Vascular dementia, unlike Alzheimer's disease, can be prevented by appropriate management of the risk factors for cerebrovascular disease, and can be treated to improve symptoms and inhibit progression of the disease.

In the present study, the original T&C test of Inouye et al. was modified for the Korean setting; in particular, the change task was adapted to Korean currency (making 1,000 Won from smaller coins). Our results can also be compared with those of Kawamato.

In elderly patients, the assessment of cognitive function is affected by psychological factors and by the circumstances under which the tests are conducted. In our measurement of the reliability of the T&C test, the test-retest and interobserver agreement rates were both remarkably high, in line with the high reliability reported by Inouye et al.

In the present study, no association was observed between T&C test scores and educational status. This result may be explained by the fact that interpreting the hands of a clock and making calculations with change are behaviors that are common to all people in daily life, irrespective of race and education. In addition, unlike the language-focused MMSE, the questions in the T&C test cannot easily be misinterpreted or misunderstood, and are effective for assessing calculation ability and attention. The clock test (similar to the T&C test) is likewise less likely to be confounded by educational attainment.

This study has several important limitations. First, while the hospital patients underwent an evaluation of their medical history as well as neurological and physical examinations before a diagnosis of dementia was made, the elderly community sample was evaluated using only the K-MMSE as a reference. Second, the hospital patients would appear to be a rather unusual sample, in that there was a high prevalence of psychiatric morbidity among the non-demented subjects.
Third, dementia in the present study was not classified as vascular or Alzheimer's type, nor did we consider the severity of symptoms. Finally, the T&C test has limitations in assessing the wide range of deficits associated with dementia. However, because the T&C test is simple, rapid, and easy to use, it may offer particular advantages in primary care and community settings where frequent assessment of cognitive functioning is required. The T&C test adds supplemental testing of important domains to traditional measures such as the MMSE. In addition, because the T&C test is less influenced by educational status, it may be particularly useful in populations with diverse educational and cultural backgrounds.

We conclude that the T&C test is useful as supplemental testing of important domains in addition to traditional measures such as the MMSE, since the sensitivity of the T&C test is not high and its association with the MMSE is modest. Because the T&C test is simple, rapid, and easy to use, it can be applied conveniently to elderly subjects by trained non-specialist personnel. In addition, the T&C test is less influenced by educational status.

This work was supported by grants from Chonnam National University Hospital (CUHRI-U-200241).

The author(s) declare that they have no competing interests.

JAR conceived of the study, collected data and drafted the manuscript. EKC participated in its design, performed data collection, and reviewed the manuscript. MHS participated in data analysis and reviewed the manuscript. All authors read and approved the final manuscript.
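The screening statistics reported above follow directly from a 2x2 table. In the sketch below, the counts (tp=27, fp=2, fn=10, tn=20) are our back-calculation, consistent with 37 demented and 22 non-demented participants in area A and the reported 73.0%, 90.9%, 93.1% and 66.7%; they are not the study's published table. The kappa example likewise uses invented rater counts that reproduce κ ≈ 0.91 for 95% agreement on n = 22.

def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

def cohens_kappa(a, b, c, d):
    """Cohen's kappa for two raters; a and d are agreements, b and c
    disagreements, in a 2x2 rater-by-rater table."""
    n = a + b + c + d
    p_obs = (a + d) / n
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

print(screening_metrics(tp=27, fp=2, fn=10, tn=20))
print(cohens_kappa(a=11, b=1, c=0, d=10))   # approx. 0.91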
The DopaNet project should lead to large-scale models of molecular and cellular processes involved in neuronal signaling. A prerequisite is the proper storage of knowledge coming from the literature.

DopaNet Molecular Pages are highly structured descriptions of quantitative parameters related to a specific molecular complex involved in neuronal signal processing. A Molecular Page is built by maintainers who are experts in the field and responsible for the quality of the page content. Each piece of data is identified by a specific ontology code, annotated, and linked to the relevant bibliography. The Molecular Pages are stored as XML files and processed through the DopaNet Web Service, which provides functionalities to edit the Molecular Pages, to cross-link the Pages and generate the public display, and to search them.

DopaNet Molecular Pages are one of the core resources of the DopaNet project, but should be of widespread utility in the field of Systems Neurobiology.

Several large-scale efforts have been launched to gather and structure knowledge about cellular signaling, such as the Genes to Cognition consortium and the Alliance for Cellular Signaling initiated by Gilman and co-workers. Similarly, at the end of 2001 we started the mesotelencephalic dopamine consortium, DopaNet.

A first step, prior to the design of large-scale dedicated experiments, consists of mining the current literature in molecular and cellular neurobiology for existing quantitative knowledge. The resulting data have to be properly stored and annotated.

A DopaNet Molecular Page is a collection of annotated numerical data relative to a "molecular complex" present in one or several DopaNet target cells. The "molecular complex" is taken here in the sense of the DopaNet Neuronal Ontology (see below), as a "stable assembly of molecules", a "molecule" being described as a "set of atoms linked together by covalent bonds". As a consequence, we can have a Molecular Page storing data relative to a molecular complex made up of components that are themselves described in other Molecular Pages. An anticipated example is a heterotrimeric G protein and its alpha and beta-gamma subunits. The information collected deals with the structure of the complex, its anatomical distribution within DopaNet target cells, and its functional properties. Each page is under the responsibility of its maintainer(s), who decide which data are to be included or not, and acknowledge the input of the various contributors. All the data included in a Molecular Page are annotated and linked to bibliographic references. In addition, each single piece of data stored in the databases of DopaNet is attached to one or several terms of the DopaNet Neuronal Ontology. This ontology will therefore act as glue, relating the various pieces of data to one another.

An ontology is defined here in its information-science meaning, as a hierarchical structuring of knowledge. In our case, it is a relational vocabulary, that is, a set of terms linked together, aiming to describe a neuron. Each term has a definition and a unique identifier. Terms are related by "is a" inheritances, which represent sub-classing, and "part of" inheritances, which represent deepening knowledge. For instance, the nicotinic receptor subunit alpha6 "is a" nicotinic receptor subunit, and is "part of" the (alpha6)2(beta2)3 nAChR. Each term can be the child of several others. Therefore the complete picture is not a genealogical tree, but rather a network of relationships.

There are several biological ontologies, the most famous (and most complete) being Gene Ontology.
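To make the "network of relationships" concrete, here is a minimal, hypothetical sketch (not DopaNet code) of terms carrying unique identifiers and typed "is a"/"part of" links. The two DA: codes are taken from the XML examples below; the class identifier is invented for the illustration.

from dataclasses import dataclass, field

@dataclass
class Term:
    """One ontology term: unique identifier, definition, typed parent links."""
    term_id: str            # e.g. "DA:0000188" (code taken from the examples)
    name: str
    definition: str = ""
    is_a: list = field(default_factory=list)      # sub-classing parents
    part_of: list = field(default_factory=list)   # containing structures

subunit_class = Term("DA:9900001", "nicotinic receptor subunit")  # invented id
alpha4 = Term("DA:0000188", "alpha4 nicotinic receptor subunit",
              is_a=[subunit_class])
receptor = Term("DA:0000027", "(alpha4)2(beta2)3 nAChR")
alpha4.part_of.append(receptor)

# A term may have several parents of either kind, so the whole structure is
# a network of relationships rather than a genealogical tree.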
A Molecular Page is made up of a header followed by several lists, each list containing a sequence of identical elements. There are currently twelve main lists, described below. Several other lists of items are used to describe the page data at a finer level. According to the molecular complex described in the page, some of the lists can be empty.

The Molecular Page header contains the name of the molecular complex described in the page, an abbreviation, the unique ontology code used to identify the page, the dates of creation and last modification of the page, and the page status. The possible status values are:

stable: the Molecular Page has been submitted by the maintainers and is ready for public release.

unstable: a new version of the Molecular Page, not yet ready for public release.

forthcoming: a new Molecular Page in construction, that has never been submitted for public release.

For instance, the nicotinic acetylcholine receptor (alpha4)2(beta2)3 is described in the Page for the complex "(alpha4)_2(beta2)_3 nAChR" [DA:0000027], whose header element carries, among other attributes, the name and the status:

  <... name="(alpha4)_2(beta2)_3 nAChR"
       status="stable">

The main lists that compose a Molecular Page are described in turn below.

Maintainers are the only people authorized to directly modify the Molecular Pages. They are responsible for the quality and the completeness of the data included in the Page. However, maintainers are not assumed to systematically gather the information all by themselves. They are encouraged to contact experts to help them. Helpful people should be acknowledged as contributors.

Contributors are all the people who bring new information about a Molecular Page, or correct an existing piece of information. Contributors can be seen as the equivalent of the authors of an article. Except for maintainers (who are contributors by definition), they cannot directly modify a Molecular Page; they have to contact a maintainer instead. Note that the database administration team can directly modify the Molecular Pages to make them comply with the guidelines.

A Molecular Page describes a molecular complex. This complex is made up of components (at least one). The listOfComponents describes those components and their stoichiometry, and lists useful related resources. Each component is annotated by its ontology code.

The complex "(alpha4)_2(beta2)_3 nAChR" [DA:0000027] is made up of two components, the subunit alpha4 and the subunit beta2:

<listOfComponents>
  <component DopaNetontology="DA:0000188"
             name="alpha4 nicotinic receptor subunit"
             stoichiometry="2">
    <listOfResources>
      <resource identifier="ACHa4hosa"
                name="Ligand-Gated Ion Channel database"
                references="1"
                url="...">
        <taxon>Homo sapiens</taxon>
      </resource>
    </listOfResources>
  </component>
</listOfComponents>

The function of a molecular complex is most often modulated by permutations between various states. Accordingly, most of the quantitative data are actually relevant only for one state or a subset of states. Those states should therefore be listed, described, and annotated.
The quantitative data described in the "functional" lists (see below) will refer both to the states of the molecular complex itself, listed here, and to the lists of states of other relevant Molecular Pages.

The complex "(alpha4)_2(beta2)_3 nAChR" may exist under (at least) three different states: "basal", "active", and "desensitized":

<listOfStates>
  <state identifier="basal" name="basal">
    <description>
      In the basal state, the ionic pore is closed. This state displays
      a weak affinity for agonists such as acetylcholine or nicotine.
    </description>
  </state>
</listOfStates>

The listOfGenericProperties holds properties that depend solely on the molecular complex itself, and not on its relationships with other entities such as ligands or substrates. Examples of such properties are the molecular weight or the Stokes radius:

<listOfGenericProperties>
  <property name="MW" stateMolecule="basal">
    <taxon>Homo sapiens</taxon>
    <listOfValues>
      <value mean="310971" unit="Dalton">
        <comment>without covalent modifications.</comment>
      </value>
    </listOfValues>
  </property>
</listOfGenericProperties>

The distribution of the molecular complex and its components is described within the relevant DopaNet target cells: cortical glutamatergic pyramidal neuron, mesencephalic dopaminergic neuron, striatal cholinergic interneuron, striatal enkephalinergic/GABAergic medium spiny neuron, and striatal substance P/GABAergic medium spiny neuron. It is likely that a listOfExtracellular will be necessary at some point.

Each cell is divided into compartments, where the distribution of transcripts and molecules can be described. The approach used to explore the distribution is specified, since both the accuracy and the quantitativeness of the observations strongly depend on the method chosen. As for all the following data, the species in which the study has been conducted is also mandatory.

One entry in the Page of the complex "(alpha4)_2(beta2)_3 nAChR" is the fact that, in the cell soma of the rat mesencephalic dopaminergic neuron, single-cell RT-PCR experiments showed that alpha4 is present in 100% of neurons and beta2 is probably also present in 100% of neurons:

<listOfCells>
  <cell cellName="mesencephalic dopaminergic neuron" DopaNetontology="DA:0000702">
    <listOfCompartments>
      <compartment DopaNetontology="DA:0000137" name="cell soma">
        <listOfTranscripts>
          <transcript method="single cell RT-PCR" references="17">
            <taxon>Rattus norvegicus</taxon>
            <description>
              a4 is present in 100% of neurons. b2 is probably also
              present in 100% of neurons.
            </description>
          </transcript>
        </listOfTranscripts>
      </compartment>
    </listOfCompartments>
  </cell>
</listOfCells>

The ligands of a molecular complex are molecules or ions that bind to it. The size of the ligand relative to the molecular complex is irrelevant. Within the Molecular Page of "transforming growth factor receptor type I", one ligand is "transforming growth factor beta1". Conversely, in the Molecular Page of "transforming growth factor beta1", one ligand is "transforming growth factor receptor type I"! Functional parameters such as kon, koff or Km can be stored in a controlled manner, in order to be easily retrieved later.
Whenever possible, the quantitative values are related to the states of the molecular complexes involved: not only the state of the molecular complex that is the subject of the Molecular Page, but also the state of the ligand. This remark holds for the substrates, the translocators, and the modulated substances as well (see below). For instance, the desensitized "(alpha4)_2(beta2)_3 nAChR" of the rat binds acetylcholine with a Ki versus epibatidine of 8.6 ± 1.98 nM:

<listOfLigands>
  <ligand DopaNetontology="DA:0000184"
          name="acetylcholine"
          origin="endogenous">
    <listOfProperties>
      <property name="Ki_epibatidine"
                references="10 18"
                stateMolecule="desensitized">
        <taxon>Rattus norvegicus</taxon>
        <listOfValues>
          <value mean="8.6" sd="1.98" unit="nanomole per litre"/>
        </listOfValues>
      </property>
    </listOfProperties>
  </ligand>
</listOfLigands>

The substrates are all the substances modified as a result of an interaction with the molecular complex. The parameters stored here are, for instance, Km, kcat or Vmax.

The translocators are substances that go from one subcellular compartment to another, the translocation being mediated by the molecular complex. Typical parameters are the conductance or the relative permeability. The active complex "(alpha4)_2(beta2)_3 nAChR" of the rat translocates cations with a conductance of 13.3 ± 1.5 pS:

<listOfTranslocators>
  <translocator DopaNetontology="DA:0000264"
                name="cation"
                origin="endogenous">
    <listOfProperties>
      <property name="conductance" references="14" stateMolecule="active">
        <taxon>Rattus norvegicus</taxon>
        <listOfValues>
          <value mean="13.3" sd="1.5" unit="picosiemens"/>
        </listOfValues>
      </property>
    </listOfProperties>
  </translocator>
</listOfTranslocators>

In many cases, one knows about the effect of a molecular complex on a substance without knowing the detailed mechanism of action. Such effects are stored as "modulated" entries. However, modulated entries are to be avoided as much as possible, since they generally reflect a set of binding and/or enzymatic events.

The listOfTransitions stores the possible conversions between the states described in the listOfStates, such as a conformational transition or a covalent modification. For instance, the complex "(alpha4)_2(beta2)_3 nAChR" undergoes conformational transitions between the basal and active states:

<listOfTransitions>
  <transition state1="basal" state2="active">
    <comment>
      In the absence of ligand, the equilibrium is strongly displaced
      toward the basal state. Agonists, such as acetylcholine and
      nicotine, stabilise the active state and shift the equilibrium.
      The transition from basal to active corresponds to an opening of
      the ionic pore.
    </comment>
  </transition>
</listOfTransitions>

The last list holds the bibliographic resources used to gather and annotate the data. Each piece of data included in the Molecular Page should be linked to those bibliographic items by internal references.

Molecular Pages are saved as XML files, and the XML files are stored within two different repositories, depending on the status of the Page.
The XML files are stored within two different repositories, depending on the status of the Page. One repository contains only the stable Pages ready for public release, while the other also contains the unstable and forthcoming Pages. In addition to the two XML repositories, there is a third, HTML repository, containing the human-readable HTML versions of the stable Pages, automatically generated from their XML counterparts using XSL Transformations.

As described above, Molecular Pages are continuously modified and updated by the maintainers, with the help of the contributors. In order to automatize, safeguard and simplify as much as possible the work required by a maintainer to create and edit a Molecular Page, an application called the DopaNet Web Service has been designed and implemented, which provides functionalities to:

1. authenticate Page maintainers
2. browse Pages by maintainer
3. grant exclusive Page editing rights to a maintainer
4. create and edit a Page via a rich user interface
5. save or submit the edited Page, setting the "stable/unstable" status

The DopaNet Web Service is made of both server-side and client-side components, all written in Java and communicating via either the SOAP or the HTTP protocol. In addition to supporting the remote creation and editing of Molecular Pages, the DopaNet Web Service also provides functionalities to:

1. register new DopaNet contributors
2. update existing DopaNet contributor information
3. browse Molecular Pages by status
4. search Molecular Pages

Searching of Molecular Pages is implemented using the API provided by the Apache Xindice native XML database (a toy illustration of this kind of query is sketched below).
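The Xindice-backed search API itself is not described in detail here. Purely as an illustration of the kind of question it must answer, the following Python sketch scans a local directory of Page XML files for complexes that list a given ligand; the directory layout and element names are assumptions carried over from the earlier examples, not the actual implementation.

    import glob
    import xml.etree.ElementTree as ET

    def pages_binding(ligand_name, repository="stable/*.xml"):
        """Return the Page files whose listOfLigands mentions ligand_name."""
        hits = []
        for path in glob.glob(repository):
            root = ET.parse(path).getroot()
            if root.findall(".//ligand[@name='%s']" % ligand_name):
                hits.append(path)
        return hits

    print(pages_binding("acetylcholine"))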
Although in an early stage of development, DopaNet Molecular Pages provide a unique source of structured, annotated quantitative data about the molecules involved in neuronal signaling. They will feed both the experimental biologist and the theoretician with the best available estimates for all kinds of knowledge, whether biochemical, anatomical or functional. This will allow them to design better experiments or formal models, and to benchmark their results. As a side-effect triggered by the mandatory annotations, DopaNet Molecular Pages will also constitute a bibliographic resource, each Page being the equivalent of a small review of the literature.

Gene Ontology is now a fully grown project, and is being widely used in several biological domains. Nevertheless, in its present form, Gene Ontology was not found suitable for direct use by the DopaNet project. We hope to collaborate with the Gene Ontology maintainers in the future; in particular, effort will be made to complete Gene Ontology in the area of neurobiology. However, the DopaNet Neuronal Ontology will never actually be a subset of Gene Ontology. Indeed, the purpose of the latter is to classify gene products, and one of its most useful applications so far has been the annotation of sequence database entries. The purpose of the DopaNet Ontology is broader in terms of knowledge, and not limited to the classification of gene products. At the same time it is focused on a specific system, and therefore of interest to a narrower audience.

Gene Ontology comprises three vocabularies: molecular function, biological process, and cellular component. Only the latter is at the moment relevant to DopaNet purposes, that is, to the Molecular Pages. However, it is anticipated that the biological process vocabulary will be needed in the near future, for instance to annotate electrophysiological data. The DopaNet cellular component vocabulary is larger than the Gene Ontology one, since it also contains the different kinds of neuronal cells.

A cellular component may be, for instance, an anatomical structure, e.g. "dendrite" or "synaptic vesicle", but also a cell or a protein. Note that a "molecule" is defined in the Neuronal Ontology as a set of atoms covalently linked. A molecule cannot contain other molecules. Hence, a protein made up of several subunits, or a polypeptide and a co-enzyme, are not "molecules" but "molecular complexes". Although our ontology is built for DopaNet purposes, it can be viewed as a more general "Neuronal Ontology". Therefore, we incorporate terms related to components present (or events taking place) in any neuron, not necessarily DopaNet target cells. In particular, such additions are advised if they clarify some hierarchical relationship.

As described above, a "molecular complex" in the DopaNet Neuronal Ontology contains one or several components, which are also present in the "molecule" branch. It could be considered redundant that every monomeric protein is represented by two terms, as a "molecule" and as part of a "molecular complex". However, the meanings of the two branches are different: the "molecule" can be seen as an ideal entity, while the molecular complex rather represents an actual physical object of the cell. Moreover, the hierarchical structures of the two branches are different. In addition, a lot of proteins have only recently been discovered to function as complexes (e.g. the polymeric G-protein coupled receptors), and more are to be discovered. Finally, the systematic dissociation between the functional molecular complex and its components is handy when it comes to writing the Molecular Pages.

The Alliance for Cellular Signaling was a pioneer in designing Molecule Pages. Contrary to DopaNet Molecular Pages, their focus is truly a "molecule" rather than a "molecular complex". For instance, a heteropolymeric receptor will not be represented by a single Molecule Page, but rather by a collection of Pages, one per subunit. In addition, pieces of quantitative knowledge, like binding or enzymatic reactions, should be provided in a standardized format such as the Systems Biology Markup Language.

DopaNet Molecular Pages are highly structured. While this could appear an obvious choice, it actually comes with a double burden. First, the editing interface has to be sufficiently complex to reflect the underlying structure; this complexity certainly acts as a repellent for the biologist who wishes to build a Molecular Page. Second, the high quality required, in particular concerning the annotations, leads to the rejection of a significant portion of the published knowledge. However, we think that a piece of data that cannot be properly annotated is of limited use for the community. For instance, a large amount of pharmacological properties is published without the species; since those properties vary from one species to another, one cannot easily re-use the value provided. Similarly, a numerical piece of knowledge cannot be used without caution if one does not know the method used to collect it, and the variability attached to it.

Currently, access to the data is only possible through the web interface. Moreover, although the user is able to search the content of the Molecular Pages using various criteria, the result is always presented as one or several Molecular Pages.
However, the DopaNet Web Service should be enriched at a steady pace, and specific pieces of data should be served soon. One can envision interfaces providing precise and meaningful responses to queries like "all the Kd for the ligand X of all molecules that bind it", in the form of a list of Kd values.

The distributed annotation can cause concerns related to peer-validation and quality control. With the help of the Nature Publishing Group, a peer-review process has been set up by the Alliance for Cellular Signaling to survey the editing of its Molecule Pages. Such an infrastructure is currently out of reach for DopaNet. However, we ensure that the maintainers are always recognized experts in the field or, for members of the EBI group, work in close relation with such experts. In addition, we included as much guidance as possible through the constraints imposed by the Page editing environment. That way, any Molecular Page complies with at least a minimal set of quality rules. Such an approach has already been successful in other areas; one of the most striking examples is the Debian operating system project.

The Molecular Pages are maintained in a distributed fashion, with one or several experts in charge of each complex. Such an approach is mandatory for two reasons. Firstly, the knowledge accumulated by the project will soon become much too large to be handled by one individual, or even one team. Secondly, the level of detail and accuracy sought by the resource is such that only experts can fruitfully mine the adequate literature for relevant information. To extract the simple affinity of a receptor for a ligand can be a daunting prospect: not only can that affinity be expressed by various parameters with different meanings (Ka, Kd, Ki, Kp, IC50), but all those quantities can only be analyzed in the light of the knowledge about the various states of the complex and its conformational transitions.

Contrary to the Molecular Pages, the Neuronal Ontology is currently developed only by the EBI team. Everyone can contribute by sending suggestions, but for the sake of coherence the final building is centralized.

DopaNet Molecular Pages make it possible to store annotated numerical data about molecular complexes involved in neuronal signaling. Although the Pages are one of the core resources of the DopaNet project, and therefore focused on the mesotelencephalic dopamine system, the repository should be of widespread utility in the field of Systems Neurobiology. This is also the case for the DopaNet Neuronal Ontology. The resource is in an early stage of development and will benefit much from the feedback of users.

All data contained in the DopaNet Molecular Pages may be copied and redistributed freely, under terms derived from the MIT license. More information about the DopaNet project, the DopaNet ontology and the DopaNet Molecular Pages is available at the respective project URLs.

NLN designed the database DTDs and schemas, wrote the XSL and acted as the final editorial authority on Molecular Pages. MD implemented all the editing and validation software, as well as the user interface, including the servers.
Assessment of the QT interval started to receive increased regulatory attention in the late 1980s. The heightened safety concern was precipitated by repeated reports of torsade de pointes (TdP) and other arrhythmias occurring in patients treated with an antihistamine drug (terfenadine). Similarly, in the early 1990s, attempts to decrease sudden cardiac death with novel antiarrhythmic drugs demonstrated that a certain degree of arrhythmia suppression was paralleled by a proarrhythmic effect, translated into a 3-fold increase in mortality rate among patients treated with encainide or flecainide (figures derived from a large patient population and thereby considered the most rigorous from an epidemiological perspective).

Using subject-specific correction exponents (α), Malik and colleagues demonstrated a large variability of the α exponent (range: 0.233–0.485) in 50 healthy subjects; the same parameter in Fridericia's and Bazett's formulas is 0.33 and 0.50 respectively. Malik and colleagues concluded that correction of the QT interval by heart rate may be misleading, regardless of the method used. The main limitation of the aforementioned formulas is that each of them attempts to correct for heart rate only, while leaving in play a number of other known confounders. Disappointingly, analysis done on ECGs sampled from periods of stable heart rate provided no better results. QT/RR regression models can be used as an alternative.

Reporting of results is most informative if tabular frequency distributions and frequency histograms are used to display PR, QRS and QTc data for individuals and/or groups. For the hypothetical example captured in the figure, the table lists baseline, mean and mean maximum values for all parameters measured/computed for one group, and displays the difference (D1) between the mean value of each parameter on-treatment and the corresponding mean value at baseline. Given that a D2 value is to be computed for the second group (comparator), their difference (D2 – D1) for all parameters, together with the resulting p value (Bonferroni adjusted), could be displayed as well.

Risk-benefit assessment with respect to a drug's propensity to prolong the QT/QTc interval entails a careful judgement of the frequency and magnitude of the QT changes encountered in the preclinical and/or clinical program, and a balancing of the potential risks against the drug's benefit. The large variability in the behaviour of a prolonged QT/QTc with respect to the risk of ensuing TdP makes this task difficult, and requires individual characterisation of a specific drug's effects on repolarization. Amiodarone, for example, is known to prolong repolarization but rarely causes TdP. Sotalol, which prolongs repolarization through the same mechanism of action as amiodarone (blockade of the IKr channel), causes a more frequent occurrence of TdP.

Some agents may cause only slight QTc prolongation on their own, but marked prolongation can occur when they are combined with other drugs that inhibit their metabolism. It is estimated that about 40–50% of the cases of drug-induced QT interval prolongation and/or TdP result from drug-drug interactions with metabolic inhibitors (as in the example of dofetilide-cimetidine), while only 10% are associated with electrolyte imbalance, some 10% with concurrent use of other QT-prolonging drugs, and approximately 10–20% of cases have no obvious risk factors.

As a general rule, it is recommended that any prolongation be considered a potential toxicity.
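For reference, the heart-rate corrections discussed earlier in this section can be written out explicitly. The Bazett and Fridericia exponents are those quoted above, and the subject-specific form is the parameterisation whose exponent α Malik and colleagues found to vary between individuals (QT in the measured time unit, RR in seconds):

\[
\mathrm{QTc_{B}} = \frac{QT}{RR^{1/2}}, \qquad
\mathrm{QTc_{F}} = \frac{QT}{RR^{1/3}}, \qquad
\mathrm{QTc_{\alpha}} = \frac{QT}{RR^{\alpha}} \quad (\alpha \approx 0.233\text{--}0.485)
\]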
Ideally, therapy should be individualized on the basis of the patient's genotype/phenotype, determined through pharmacogenetic studies performed in the early stages of a drug's development, and through application of that information while exploring the drug's pharmacokinetic and pharmacodynamic properties and its drug-interaction potential, as well as when ethnicity-based bridging data are generated.

While genotyping of individual cases (with prior informed consent), based on a strong suspicion that a genetic substrate has caused substantial QT/QTc prolongation, is highly recommendable, large-scale genotyping in the early stages of drug development, or pre-prescription genotyping, is still controversial. Consequently, the clinical and scientific community is facing the need to apply classical "individualizing therapy" approaches in reducing the risk.

Obviously, the most elementary requirement in this respect is that prescribing physicians should fully comply with contraindications regarding co-prescription of interacting drugs and with the recommendations on appropriate monitoring of targeted patients. More specifically, attention should be given to the pharmacokinetic and pharmacodynamic factors that constitute important risk factors. Liver and/or renal diseases act as risk factors at the pharmacokinetic level; likewise, a multitude of metabolic inhibitors (see table) act as pharmacokinetic risk factors when co-administered with a QT-prolonging drug. Pharmacodynamic risk factors include diseases that are associated with QT interval prolongation (see table).

Obviously, appropriate monitoring is a sine qua non condition for preventing serious adverse events in patients known to be treated with QT-prolonging drugs. The QT interval should be monitored in these patients: (i) at baseline; (ii) at steady-state post-dose and at each incremental dose; (iii) when there is an inter-current change in the level of risk; and (iv) if the patient develops symptoms of tachycardia or impaired cerebral circulation.

Occurrence of typical adverse events suggestive of eventual QT prolongation should prompt careful investigation of this possibility, even in cases where the initial QT/QTc assessment was negative. In such instances, it is recommended that screening for risk factors be employed and genotyping performed after receipt of informed consent.
Furthermore, consideration should be given to "re-challenge" with the investigational drug under appropriate monitoring conditions, with the aim of obtaining an accurate assessment of the situation at hand, as well as useful information on the dose- and concentration-response relationships.

Compelling evidence has accrued during the past years on the potential of several cardiac and non-cardiac drugs to prolong cardiac repolarization (reflected as a prolonged QT interval on the surface ECG) and to predispose to life-threatening arrhythmias. This evidence has a major impact on the risk-benefit ratio of any drug, and is now carefully considered from the early stages of clinical drug development by pharmaceutical companies, by ethics committees and by regulatory agencies. The broad spectrum of risk factors that may interplay in the increased propensity of any new chemical entity toward malignant arrhythmias keeps widening, adding to the complexity of the problem. This calls for standardized methodologies to deal with the multifaceted aspects that QT/QTc prolongation poses in practice, meant to ensure that drugs awarded market approval have undergone appropriate quality-assurance scrutiny and that, where necessary, further post-marketing surveillance is systematically planned and reported on in a timely manner.

The author(s) declare that they have no competing interests.

The following additional files accompany this article:

• Frequency distribution of TU morphology changes across two groups.
• TU morphology changes in individual subjects.
• Frequency distribution of the PR/QRS/QTc(B/F/L) data matching PK sampling.
• Summary of PR/QRS/QTc(B/F/L) data.
• Normal ranges for the PR/QRS/QTc(B/F/L) data and for the QTc(B/F/L) relative changes to baseline.
• Frequency distribution of the baseline and on-treatment values pertaining to the PR, QRS, QT, QTcB, QTcF and QTcL parameters, as well as the D1 difference.
• Summary of outcome differences between the two groups regarding key ECG parameters.
• Alert criteria based on ECG findings (measurements) and rationale for subject withdrawal from study.
• Characteristics of the cross-over and parallel study designs.
• Diseases associated with prolonged QT/QTc interval.
• Abbreviations (not mentioned in the text).
An increasing number of researchers have released novel RNA structure analysis and prediction algorithms for comparative approaches to structure prediction. Yet, independent benchmarking of these algorithms is rarely performed, although it is now common practice for protein-folding, gene-finding and multiple-sequence-alignment algorithms. Here we evaluate a number of RNA folding algorithms using reliable RNA data-sets and compare their relative performance. We conclude that comparative data can enhance structure prediction, but that structure-prediction algorithms vary widely in terms of both sensitivity and selectivity across different lengths and homologies. Furthermore, we outline some directions for future research.

RNA, once considered a passive carrier of genetic information, is now known to play a more active role in nature. Many recently discovered RNAs are catalytic; examples are RNase P, which is involved in tRNA maturation, and the self-splicing introns involved in mRNA maturation.

A fundamental tenet of biology is that a stable tertiary structure is essential for biological function. In the case of RNA, the secondary structure (the base-pair set of an RNA molecule) provides a scaffold for the tertiary structure. When a helpful degree of sequence conservation is present, one can align the sequences first and then fold the alignment (plan A). When sequence conservation degrades, this motivates plan B: the use of the "Sankoff algorithm", an algorithm designed for the simultaneous alignment, folding and inference of a protosequence for a set of homologous structural RNA sequences. The full algorithm requires O(n^(3m)) time and O(n^(2m)) space, where n is the sequence length and m is the number of sequences. Current implementations, Foldalign, Dynalign and PMcomp, are restricted versions of the Sankoff algorithm.

The final approach (plan C) applies when no helpful level of sequence conservation is observed. We may exclude the sequence alignment step, predict secondary structures for each sequence (or sub-group of sequences) separately, and directly align the structures. Because of the nested branching nature of RNA structures, these are adequately represented as trees, and the concept of a similarity measurement via edit operations, a standard procedure for string comparisons, has been generalised to trees. In this study we evaluate:

• the viability of plan A, B, or C given the tools available today, and
• the relative performance of the tools used within each plan.

We shall explicitly not evaluate computational efficiency, which (by necessity) differs widely between the tools. We also do not evaluate user-friendliness, except for some remarks in the discussion section. Data-sets, documentation and relevant scripts are freely available from the project website.

RNA secondary structure inference is the prediction of the base-pairs which form the in vivo structure, given only the sequence of bases. Three general considerations apply. (1) The in vivo structure is not only predetermined by the primary structure, but also by cellular components such as chaperones, base modifications, and even by the transcriptional process itself; there are currently no computational tools available that assess these effects. (2) There are 'ribo-switches', whereby two or more functional structures exist for a given sequence. (3) The comparative approach to structure inference is initiated from a set of homologous RNA sequences, and attempts are made to infer the in vivo structure for each of them, as well as a consensus structure that captures the common, relevant structural aspects. The consensus structure per se does not exist in vivo, and so some mathematical rigour should be applied when working with this notion.
An RNA sequence is a string over the RNA alphabet {A, C, G, U}. An RNA sequence B = b1,...,bn contains n bases, but no structural information. For comparative analysis, we are given the RNA sequences B1,...,Bk. A secondary structure can be associated with each sequence B as a string S over the alphabet {"(", ")", "."}, where the parentheses in S must be properly nested, and B and S must be compatible: if si and sj are matching parentheses, then bi and bj must form a legal base-pair. A base-pair is also denoted as bi·bj, si·sj, or simply i·j when the sequence is clear from the context. Both sequences and structures may be padded with a gap symbol "-", in order to align sequences and structures of different lengths. For compatibility of padded sequences and structures, we require that bi = "-" iff si = "-".

A structural alignment is a multiple sequence alignment of the 2 * k sequences B1, S1,..., Bk, Sk, such that each Bi is compatible with Si, and the following consistency criterion is satisfied: for any Si and Sj and any base-pair a·b in Si, if the symbols of Sj at positions a and b are not gaps, then a·b must also be a base-pair in Sj. This means that if one partner of a base-pair in Sj is aligned to one partner of a base-pair in Si, their partners must also be aligned to each other.

A consensus structure exhibits base-pairs shared by the majority of structures under consideration, but has no sequence information associated with it. Each individual structure for a concrete sequence typically has additional base-pairs which are properly nested between those that constitute the consensus. Given a consensus structure C and a compatible sequence B, refold(B, C) is the best thermodynamic folding for B that exhibits the base-pairs specified by C, plus additional ones that do not conflict with the former. Refolding can be achieved by RNAfold with option -C. If B and S contain gaps, we remove them before refolding and reintroduce them in the same positions afterwards.

Given a consistent structural alignment, it is easy to derive a consensus structure, as we can count majorities at individual positions. If the 5' partner of a base-pair passes the majority threshold, consistency implies that its 3' partner also makes it into the consensus.
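Both of the notions just defined, compatibility of a sequence with a dot-bracket structure and majority-based consensus derivation, are mechanical enough to state as code. The following Python sketch assumes the canonical Watson-Crick pairs plus the G·U wobble pair as the legal base-pairs, which the text does not enumerate.

    # Assumed set of legal base-pairs (Watson-Crick plus G.U wobble).
    LEGAL = {("A","U"), ("U","A"), ("G","C"), ("C","G"), ("G","U"), ("U","G")}

    def compatible(B, S):
        """True iff structure S is properly nested and compatible with B.

        B and S may be gap-padded; we require b == "-" iff s == "-".
        """
        assert len(B) == len(S)
        stack = []
        for i, (b, s) in enumerate(zip(B, S)):
            if (b == "-") != (s == "-"):
                return False
            if s == "(":
                stack.append(i)
            elif s == ")":
                if not stack:
                    return False            # unbalanced ")"
                j = stack.pop()
                if (B[j], b) not in LEGAL:
                    return False            # not a legal base-pair
        return not stack                    # no unmatched "("

    def consensus(structures, p=0.5):
        """Column-wise majority consensus of aligned dot-bracket strings.

        A column becomes "(" or ")" when more than a fraction p of the
        structures carry that symbol there; otherwise it is left unpaired.
        For a consistent structural alignment, the two partners of a
        base-pair pass or fail the threshold together.
        """
        k = len(structures)
        out = []
        for column in zip(*structures):
            for symbol in "()":
                if sum(c == symbol for c in column) > p * k:
                    out.append(symbol)
                    break
            else:
                out.append(".")
        return "".join(out)

    print(compatible("GGGAAACCC", "(((...)))"))          # True
    print(consensus(["((...))", "((...))", "(.....)"]))  # ((...))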
Given a consensus structure C and a sequence alignment without structural information, we can approximate a structural alignment by computing Si = refold(Bi, C). We call this structural alignment reconstruction. While all Si will be consistent with C, and with each other as far as the base-pairs of C are concerned, they may be inconsistent for the base-pairs introduced in refolding. This is tolerable: if we trust the consensus to capture the relevant common structural features, there is no need to require that all members of a family agree upon extra-consensus features. We note in passing that it seems worthwhile to study the conditions under which consensus derivation and structural alignment reconstruction are mutually inverse operations, but such theoretical issues are outside our present scope.

While the plans A, B and C we are about to evaluate strive to find a good consensus structure from sequence data, the "truth" available to us comes in a different form. Structural databases only convey a consensus by example: they provide a reference sequence, say B1, with an experimentally proved structure S1, together with a multiple sequence alignment of B1, S1 and additional sequences B2,..., Bn in the family under consideration. The sequence alignment is chosen to exhibit structural similarities between the reference structure and the other family members, but in general, we do not know the precise model for achieving similarity, nor do we know whether this model has been solved to optimality.

One consequence of this situation would be to conclude that the reference structure is the only reliable anchor point available to us for evaluation. Comparative analysis tools would then be evaluated by their capacity to predict this particular structure using family information. This would be a meaningful way to proceed; however, the effect of structural homogeneity within a sequence family would go unmeasured, and so would the difficulty or success of exploiting it. We therefore proceed in a different way, which we call consensus reconstruction.

The reference structure S1 need not be compatible with any Bi except for i = 1. However, we can still compute Si := refold(Bi, S1) by treating bases as unpaired where they violate compatibility. What we obtain in this way is a reconstructed structural alignment, which will be consistent to the extent that the reference structure indeed describes the common structural features, and to the extent that the database sequence alignment reflects these. In all our test cases, this alignment was overall consistent, an indicator that the families and their structural features are in fact well defined. From this alignment, we derive a consensus structure as explained above, using a threshold p = 0.5; this consensus serves as the standard of truth in our evaluation.

One may argue that our approach to reconstructing the truth is somewhat ad-hoc and should be replaced by a more systematic method. However, this is what the tools we evaluate try to achieve, and we should not add one of our own as the standard of truth. Hence, our consensus reconstruction is designed to stay as close as possible to the database information.

Results of observations based on the measures below must be interpreted with care. We list a number of caveats that must be kept in mind when proceeding to the subsequent sections.

In all tests, one could possibly obtain better predictions by tuning the programs' parameters. We felt that it would be inappropriate to do so, since in the evaluation we know the correct result and could use this knowledge in the tuning, whereas in a true application context one does not have such guidance. Hence we used the recommended defaults in all cases.

In some cases we apply a tool to data where we know that the model structure has features not recognised by the tool. An example is a structure with multiloops or pseudoknots, searched for with a tool that explicitly excludes such structures. We permit such cases because, again, in a true application context one does not know whether the tool is appropriate or not, and it is still of interest to see how close to the correct structure one can get.

We take for granted the correctness of the structural alignments taken from the literature, and of the consensus reconstructed thereof. Should one of the tested algorithms produce a result that is actually better, it may be penalised.
Also, we do not consider a large number of data-sets here; it is possible that the performance of some algorithms would improve on a different selection of data-sets.

Our data reflect the state of the art in 2004. Most of the tools tested are very recent, and their authors are still improving them. Hence, not all observations will remain reproducible. In fact, we hope this study helps to obtain better results in the future.

We have compiled RNA sequence alignments consisting of up to 11 sequences derived from reliable sources (see table), each including a reference sequence B1 with a (preferably) experimentally verified secondary structure S1. Experimental verification of a structure may come from a variety of sources: X-ray crystallography, NMR, enzymatic structure probing or phylogenetic inference. A comparison of phylogenetic with X-ray crystallographic structures has shown the phylogenetic predictions of rRNA to be very reliable (sensitivity > 97%).

To avoid biasing the results, we constructed test alignments, with corresponding phylogenies, that wherever possible were free of highly similar clades. In addition, we endeavoured to ensure that the reference sequence was central to the phylogeny, or more specifically, not an out-group. To meet these requirements, sequences from large data-sets were sorted into high-similarity and medium-similarity groups (with respect to the model sequence), from which maximum-likelihood phylogenies were constructed.

Our data-sets are quite diverse and must, for the purposes of this study, be considered difficult to analyse in structural terms. The shape of ribosomal RNA is believed to be influenced by interaction with ribosomal proteins. RNase P shows relatively little sequence and structure conservation and, furthermore, contains pseudoknots, which are generally excluded by prediction algorithms. Transfer RNAs are known to be a hard case for thermodynamic folding, primarily due to the propensity of modified bases, which influence structure formation. All tools tested may perform better upon less complex data-sets, but the purpose of this study is not to show how good the algorithms are, but to compare relative performance when prediction is difficult.

Sensitivity (X) and selectivity (Y) are common measures for determining the accuracy of prediction methods; selectivity is also known as the "positive predictive value". We use the forms X = TP / (TP + FN) and Y = TP / (TP + FP - ξ) for examining RNA secondary structure prediction, where TP is the number of "true positives" (correctly predicted base-pairs), FN is the number of "false negatives" (base-pairs in the reference structure that were not predicted) and FP is the number of "false positives" (incorrectly predicted base-pairs). However, not all FP base-pairs are equally false! We classify FP base-pairs as either inconsistent, contradicting or compatible. Predicted base-pairs which conflict with a base-pair in the reference structure are labelled inconsistent (i.e. i·j is predicted where either i·k and/or h·j are paired in the reference structure, with h ≠ i and j ≠ k). Predicted base-pairs i·j which are non-nested with respect to the reference structure are labelled contradicting (i.e. there exists a base-pair k·l in the reference satisfying k < i < l < j). Note that some base-pairs may both contradict and be inconsistent with the reference structure. Predicted base-pairs which are neither true positives, contradicting nor inconsistent are labelled compatible, and can be considered neutral with respect to algorithm accuracy.
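In display form, the two measures reconstructed above read (ξ being the number of compatible false positives, whose removal is explained next):

\[
X = \frac{TP}{TP + FN}, \qquad Y = \frac{TP}{TP + (FP - \xi)}
\]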
Hence these compatible base-pairs are subtracted in the selectivity evaluation; their number is ξ in the equation above. It is of interest to note that the base-pair metric dBP between the reference and predicted structures is the sum of FN and FP, and hence differs from the measure used here.

A measure combining both selectivity and sensitivity is useful for ranking algorithms. For this we employ the Matthews correlation coefficient, defined (with the compatible base-pairs ξ deducted from FP) as

MCC = (TP × TN − (FP − ξ) × FN) / sqrt((TP + FP − ξ)(TP + FN)(TN + FP − ξ)(TN + FN)).

MCC ranges from -1 for extremely inaccurate predictions (TP = TN = 0) to 1 for very accurate predictions (FP − ξ = FN = 0). When comparing RNA structures, TN = 0 occurs only in extreme examples; hence MCC generally ranges from 0 to 1. Furthermore, for the specific case of RNA structure comparisons, MCC can be approximated by the arithmetic mean or the geometric mean of X and Y.

The accuracy of the MFE single-sequence method has been evaluated elsewhere and was found to be about 73% when averaged over many different RNAs, with "base-pair slippage" tolerated in the evaluation.

Mfold and RNAfold are O(n^3) in time and O(n^2) in memory, where n is the sequence length, and both employ the same thermodynamic parameters. The sensitivity, selectivity and correlation of MFE methods (for the four data-sets considered here) ranged from 22–63%, 20–60% and 0.18–0.61 respectively. Notably, the free energy methods favour an alternative, non-reference structure for S. cerevisiae tRNA-PHE. Mfold infers 'suboptimal' structures by calculating minimum free energy structures with the restriction that every possible base-pair is forced in a one-by-one fashion; unique structures are then ranked by energy. Investigating the top two suboptimal structures from Mfold resulted in an overall increase in the ranges of sensitivity, selectivity and correlation: 22–69%, 20–67% and 0.18–0.68 respectively. The predictions shown here are used to illustrate the potential advantages of using comparative analyses over single-sequence methods.

RNAalifold implements an extension of the MFE method that folds a sequence alignment using a combination of averaged free energies and a covariation score matrix, augmented with penalties for inconsistent sequences, Bij. A standard trace-back procedure is performed to recover a consensus structure with the optimal sum-of-average-energy-and-covariation-score. The algorithm is remarkably efficient: O(N·n^2 + n^3) in time and O(n^2) in memory, where N is the number of sequences. The sensitivity, selectivity and correlation of the RNAalifold predictions ranged from 57–91%, 57–100% and 0.57–0.95 respectively, showing a significant increase in the accuracy measures when compared to the MFE methods.

Pfold implements a "stochastic context free grammar" (SCFG) designed to produce a "prior probability distribution of RNA structures" for an RNA alignment input. The algorithm is generally accurate and efficient. The overall sensitivity, selectivity and correlation of the Pfold predictions ranged from 0–100%, 0–100% and 0.0–1.0 respectively, but removing those points where Pfold predictions were empty structures (LSU rRNA (H & M) and SSU rRNA (M), see figure) markedly narrows these ranges. Note also that sensitivity must decrease when a significant proportion of the true base-pairs are engaged in pseudo-knots.

ILM (iterated loop matching) is one of the few comparative RNA folding algorithms which can return pseudo-knotted structures. The inclusion of pseudo-knot prediction vastly increases the number of possible secondary structures, which is why pseudo-knots are generally excluded from exhaustive folding algorithms; in addition, there is a general lack of experimentally derived thermodynamic parameters which include pseudo-knots. ILM is a method still under development, hence its performance may improve once pseudo-knots can be more accurately modelled.
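Putting the accuracy definitions above together, here is a minimal Python sketch of the evaluation. Predicted and reference structures are given as sets of (i, j) base-pairs with i < j; the geometric-mean approximation to MCC mentioned above is returned, so true negatives need not be counted.

    def evaluate(predicted, reference):
        """Sensitivity X, selectivity Y and the geometric-mean MCC estimate."""
        tp = len(predicted & reference)
        fn = len(reference - predicted)
        false_pairs = predicted - reference
        fp = len(false_pairs)

        partner = {}
        for i, j in reference:
            partner[i], partner[j] = j, i

        def inconsistent(i, j):
            # i or j is paired with a different partner in the reference
            return partner.get(i, j) != j or partner.get(j, i) != i

        def contradicting(i, j):
            # crossing (non-nested) with respect to some reference pair k.l;
            # both orientations of the crossing are tested
            return any(k < i < l < j or i < k < j < l for k, l in reference)

        # compatible false positives are neutral: removed from Y's denominator
        xi = sum(1 for i, j in false_pairs
                 if not inconsistent(i, j) and not contradicting(i, j))

        X = tp / (tp + fn) if tp + fn else 0.0
        Y = tp / (tp + fp - xi) if tp + fp - xi else 0.0
        return X, Y, (X * Y) ** 0.5

    reference = {(0, 20), (1, 19), (2, 18)}
    predicted = {(0, 20), (1, 19), (5, 9)}   # (5, 9) is merely compatible
    print(evaluate(predicted, reference))    # (0.667, 1.0, 0.816)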
The Sankoff algorithm is a dynamic programming approach to obtaining a common base-pair list with maximal sum of base-pair weights; basically, it is a merger of sequence alignment and Nussinov-style folding. The full algorithm is too expensive (O(n^(3m)) in time and O(n^(2m)) in space, for sequence length n and m sequences) to be practical. Foldalign can be interpreted as a restricted version of the Sankoff algorithm that runs in polynomial time (where N is the number of sequences and n is the length of the longest sequence). A simple match-based scoring scheme is used to rank putative conserved structure elements.

The Tool Abuse Caveat generally applies to Foldalign, as all of our data-sets contain multi-loops. The use of Foldalign for the prediction of global, multi-looped secondary structures is not recommended, as Foldalign is specifically designed for the location of short regulatory motifs such as IREs.

Dynalign is a pairwise restriction of the Sankoff approach which requires the distance between positions i and j of aligned nucleotides (where i indexes positions in sequence 1 and j indexes sequence 2) to be less than M. In addition, Dynalign uses the same method employed by Mfold to reduce the conformation space, by limiting the size of internal loops. The resulting complexity is O(n^3 M^3).

The current Dynalign implementation is restricted to pair-wise sequence comparisons. Rather than compute all pairwise foldings, we compared all sequences with the reference. Due to the computational expense of this algorithm, it could only be used to predict tRNA and RNase P structures. Dynalign performed well on the tRNA medium-sequence-homology data-set. With this one high-scoring point removed, averaged sensitivity, selectivity and correlation values ranged from 32–54%, 33–54% and 0.32–0.54 respectively. Comparing the performances of Mfold and Dynalign showed that Mfold was always superior on the RNase P data-set; Dynalign, however, did much better on the shorter and more diverse tRNA sequences. Performance gains could be made by investing more computer time and refolding RNase P with a larger 'maximum insert size', which was set to 10 during this study. The use of Dynalign on the RNase P data-sets in this study is therefore a case of tool abuse, as the parameters recommended by the authors of Dynalign were not used.

The Carnac algorithm, as mentioned previously, is not strictly an implementation of the Sankoff algorithm: a set of filters is employed, through which sets of sequences are passed in a pair-wise fashion. Carnac was remarkably selective at base-pair prediction. However, the sensitivity of the algorithm was generally low, although when evaluated with the correlation coefficient it is comparable to RNAalifold and Pfold. Sensitivity, selectivity and correlation values for Carnac predictions ranged from 45–71%, 92–100% and 0.65–0.82 respectively. The sensitivity of Carnac can be increased by constraining a minimum free energy fold (i.e. with "RNAfold -C") with the Carnac-predicted structure, but this costs selectivity: on average this increased the sensitivity by 22.5 percentage points, decreased the selectivity by 17.2 percentage points, and slightly increased the correlation, by 0.05.

RNAforester implements a tree-alignment model for comparing RNA secondary structures. We used the tRNA and RNase P data-sets and generated single-sequence structure predictions with RNAfold. All predicted structures were aligned pairwise, and a neighbour-joining approach was used to cluster and align high-similarity sequence and structure profiles.
The highest scoring alignment was used to derive a predicted consensus that was evaluated against the consensus tRNA model structures. Sensitivity, selectivity and correlation ranges of consensus structures computed from the highest scoring RNAforester alignments were 29–67%, 27–67% and 0.26–0.66 respectively. It seems likely that much of the inaccuracy of this approach is due to MFE structure prediction; however, the structure-clustering approach frequently separates mis-folded MFE predictions from the accurate folds.

The MARNA algorithm produces a multiple alignment guided by pairwise structural alignments of the input sequences. Sensitivity, selectivity and correlation values of consensus structures computed from MARNA alignments of MFE structures ranged from 29–52%, 32–84% and 0.30–0.65 respectively. We also tried trimming high-entropy base-pairs from the MFE predictions, retaining only base-pairs satisfying a bound Qij > 1, where Qij is computed from the pair-probabilities pij obtained with McCaskill's partition function; the trimmed predictions were noticeably more selective.

We have evaluated three different strategies for comparative structure prediction, and altogether eight tools (not counting the single-sequence methods); the results are summarised in the figures.

For well-aligned short sequences, both Pfold and RNAalifold generally perform well, Pfold marginally better than RNAalifold. It is likely that some moderate refinements to RNAalifold would improve accuracy without altering the efficiency: for example, gaps might not be penalised in the free-energy evaluation, and a more sophisticated model for scoring mutations, perhaps ribosum matrices, could be employed.

Carnac produced highly selective structures for all the test data-sets which, when used to constrain a free energy fold, produced sensitive predictions at a cost to selectivity. The consistency of Carnac's performance is remarkable: on all the data-sets considered here this heuristic approach performed well. It is, however, unclear how Carnac will perform on highly diverse data-sets.

For advocates of plan C, we have an encouraging message: both MARNA and RNAforester perform better on the medium-similarity data than on the high-similarity data. This seems paradoxical at first glance, but one must understand that for an approach purely based on predicted structures, high sequence similarity can be a curse rather than a blessing: if sequences are very similar, they may jointly fold into the wrong MFE structure. With more sequence variation, it becomes more likely that at least some family members have good predictions, which by their mutual similarity can be picked out from the rest. This means that especially in the case of low sequence similarity, where nothing else works, plan C, currently the least explored strategy of all, holds a certain promise.

Finally, let us outline some directions for future research. An implementation of the single-sequence pseudoknot algorithms employing comparative information would be valuable. Again, allowing constrained foldings and alignments would be useful, and the further development of "BLAST-like" folding heuristics for this should be a priority; obviously Carnac is a good start. The MARNA approach to producing structurally enhanced multiple alignments produced rather selective results after trimming high-entropy base-pairs from MFE predictions. This suggests that weighting edit-distances with partition-function-derived probabilities or entropies will produce reasonable RNA alignments. A consensus structure could then be derived from MFE structures, or from Pfold or RNAalifold predictions on the resultant alignment.
This approach would effectively decouple the Sankoff algorithm into manageable structure-enhanced-alignment and folding stages. Two further developments are likely to increase the power of plan C: pure multiple structure alignment, as presented here, still leaves room for improvement, and more training data is essential for this field to progress. For the latter, homology search tools, such as Infernal, which is used to construct the Rfam database, are essential.

PPG carried out the experiments and the analysis and drafted the manuscript. RG suggested comparing comparative structure prediction methods and assisted in the manuscript preparation. All authors read and approved the final manuscript.
The prevalence of mental disorders is so high that members of the public will commonly have contact with someone affected. How they respond to that person may affect outcomes. However, there is no information on what members of the public might do in such circumstances.

In a national survey of 3998 Australian adults, respondents were presented with one of four case vignettes and asked what they would do if that person were someone they had known for a long time and cared about. There were four types of vignette: depression, depression with suicidal thoughts, early schizophrenia, and chronic schizophrenia. Verbatim responses to the open-ended question were coded into categories.

The most common responses to all vignettes were to encourage professional help-seeking and to listen to and support the person. However, a significant minority did not give these responses. Much less common responses were to assess the problem or risk of harm, to give or seek information, to encourage self-help, or to support the family. Few respondents mentioned contacting a professional on the person's behalf or accompanying them to a professional. First aid responses were generally more appropriate in women, in those with less stigmatizing attitudes, and in those who correctly identified the disorder in the vignette.

There is room for improving the range of mental health first aid responses in the community. Lack of knowledge of mental disorders and stigmatizing attitudes are important barriers to effective first aid.

Surveys in many countries have found that mental disorders have a high prevalence and are a major cause of disability in the population. How people initially respond to others with a mental disorder may influence their recovery; for example, it is known that many people with mental disorders get no professional help.

There has been no previous research on mental health first aid knowledge in the population. Previous mental health literacy surveys have assessed knowledge and beliefs about mental disorders and their treatment, but they have not assessed what members of the public would do to help someone affected.

In 2003–2004 a household survey of Australian adults aged 18 or over was carried out by the company AC Nielsen. Households were sampled from 250 census districts covering all states and territories and metropolitan and rural areas. Up to 5 call-backs were made to metropolitan selections and 3 to non-metropolitan selections. To achieve a target sample of 4,000 interviews with adults aged 18 years or over, visits were made to 28,947 households. The outcomes of these visits were: no contact after repeated visits, 14,630; vacant house or lot, 306; refused, 7,815; person sampled within household temporarily unavailable, 1,132; no suitable respondent in household, 287; did not speak English, 383; incapable of responding, 213; and unavailable for the duration of the survey, 181. The achieved sample was 3998 persons, with 1001 receiving the depression vignette, 999 the depression with suicidal thoughts vignette, 997 the early schizophrenia vignette, and 1001 the chronic schizophrenia vignette.

The interview was based on a vignette of a person with a mental disorder. On a random basis, respondents were shown one of four vignettes: a person with major depression, one with major depression together with suicidal thoughts, a person with early schizophrenia, and one with chronic schizophrenia. All vignettes were written to satisfy the diagnostic criteria for either major depression or schizophrenia according to DSM-IV and ICD-10.
The vignette with depression and the one with early schizophrenia were written to satisfy these diagnostic criteria at a minimal level, so that we could ascertain the public's reaction to cases of a developing disorder which had reached the point where intervention was needed. The vignette of the person with depression together with suicidal thoughts was identical to the depression vignette in all respects except the suicidal thoughts, and was designed to assess how this symptom affected the public's response. The chronic schizophrenia vignette was designed to assess the response to someone with a severe long-standing disorder, where acceptance seemed less likely. Respondents were also randomly assigned to receive either male ("John") or female ("Mary") versions of the vignette. These vignettes (John version) are shown in the table.

After being presented with the vignette, respondents were asked a series of questions to assess their recognition of the disorder in the vignette, their beliefs about treatment and long-term outcomes, their beliefs about causes and risk factors, stigmatizing attitudes, awareness of mental disorders in the media, contact with people like those in the vignette, and the health and sociodemographic characteristics of the respondent. The questions relevant to the present paper are described below.

To assess recognition of the problem in the vignette, respondents were asked: "What would you say, if anything, is wrong with John?". Responses of "depression" were counted as correct for the first two vignettes above, and responses of "schizophrenia" or "psychosis" for the second two. To assess mental health first aid responses, respondents were asked the open-ended question: "Imagine John is someone you have known for a long time and care about. You want to help him. What would you do?". Answers were recorded verbatim by the interviewer. To assess contact with people like those in the vignette, respondents were asked: "Has anyone in your family or close circle of friends ever had problems similar to John's?"; "Have they received any professional help or treatment for these problems?"; "Have you ever had problems similar to John's?"; "Have you received any professional help or treatment for these problems?"; and "Have you ever had a job that involved providing treatment or services to a person with a problem like John's?". Those who said "yes" to these questions were respectively labelled in the analyses reported below as "carers", "consumers" or "professionals".

To assess stigma, respondents were asked a series of nine questions designed to elicit their attitudes towards the person in the vignette (personal stigma), and nine items concerning what they thought others in the community would believe about the person in the vignette (perceived stigma). The perceived stigma items were introduced as follows: "Now we would like to know what you think other people believe. Please indicate how strongly you agree or disagree with the following statements. Most people believe that people with a problem like John's could snap out of it if they wanted. Strongly agree, Agree, Neither agree nor disagree, Disagree, Strongly disagree". Sociodemographic characteristics recorded included age group, gender, and education.

Responses to the first aid question were coded according to the categories identified in an earlier study, where the same question was administered as part of a randomized controlled trial of Mental Health First Aid. The categories were:

A. Encourage professional help-seeking
B. Listen to / talk to / support person
C. Listen to / talk to / support family
D. Assess problem / assess risk of harm
E. Give or seek information
F. Encourage self-help
Responses coded into category A, Encourage professional help-seeking, were subcoded into multiple categories to identify the type of professional help recommended. These categories were:

A1. GP / doctor unspecified
A2. Counsellor
A3. Psychiatrist
A4. Psychologist
A5. Mental health team / services
A6. Other mental health professionals
A7. Unspecified professionals and other professionals
A8. Accompany person (e.g. offer to go with him/her)
A9. Contact help on their behalf

Examples of responses coded into category B included "support, understanding and caring, someone to talk to him", "talk to her about it", "listen", "be there for him". Responses coded into category C were analogous to those for category B, but referred to giving support to, listening to, or talking to the sufferer's family; for example, "talk to her family", "contact relatives", "ask advice of parents", "support his parents". Responses coded into category D included "keep an eye on her, make sure she is safe", "make a contract with her so if she wants to harm herself she rings me first", "find out what is the real problem behind the behaviour, focus on the problem". Responses coded into category E included "ring the local health authority to get advice", "talk to other people who have been in the situation", "speak to health professionals and get best advice", "get some brochures from community health and give them to him". Responses coded into category F included "I suggest that he have a holiday/exercise/change jobs", "try and help him get into something he is interested in", "support groups", "do something for herself to get out of the situation".

Inter-rater reliability of the coding was assessed by a second rater, who independently coded 100 responses randomly selected using the SPSS Select Cases procedure. Inter-rater reliability of the content coding was assessed using kappa. Kappa values were interpreted according to Altman as follows: below 0.20, poor; 0.21–0.40, fair; 0.41–0.60, moderate; 0.61–0.80, good; and 0.81–1.00, very good.

The frequency of codings was analysed by pooling across male and female versions of each vignette, and percent frequencies were calculated. Percentages were calculated applying survey weights to give better population estimates. Standard errors of these percentages were estimated using the Complex Samples procedure in SPSS 12.0.

Multiple logistic regressions were then conducted to examine the levels of association between participants suggesting particular treatment options and their sociodemographic and mental-health-experience attributes. The following predictor variables were included: age group; consumer, carer and professional status, including whether or not professional help was obtained; and levels of perceived stigma and personal stigma. Two vignette measures, the type of vignette provided and whether respondents correctly identified the problem portrayed in that vignette, were also included in the analyses. Each logistic regression was also adjusted to take account of the sampling weights and the clustering method applied in this survey. These analyses were undertaken using STATA 8.
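As a reminder of how the kappa statistic reported below is computed, here is a minimal Python sketch of Cohen's kappa for two raters, which corrects raw agreement for the agreement expected by chance. The codings are made up for illustration and are not the study's data.

    def cohens_kappa(rater1, rater2):
        """Cohen's kappa for two equal-length lists of category labels."""
        assert len(rater1) == len(rater2)
        n = len(rater1)
        observed = sum(a == b for a, b in zip(rater1, rater2)) / n
        labels = set(rater1) | set(rater2)
        expected = sum((rater1.count(c) / n) * (rater2.count(c) / n)
                       for c in labels)
        return (observed - expected) / (1 - expected)

    # Whether each of ten responses was coded "A" (help-seeking) or not.
    r1 = ["A", "A", "not A", "A", "not A", "A", "A", "not A", "A", "A"]
    r2 = ["A", "A", "not A", "A", "A",     "A", "A", "not A", "A", "A"]
    print(round(cohens_kappa(r1, r2), 2))   # 0.74, "good" on Altman's scale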
Inter-rater reliability was assessed for a randomly chosen 100 responses. Kappa was very good or good for encourage professional help-seeking (0.89), listen/talk/support person (0.70), listen/talk/support family (1.00), encourage seeing doctor (0.98), encourage seeing counsellor (0.93), encourage seeing psychiatrist (0.94), encourage seeing psychologist (0.88), and accompanying the person to a professional (0.95). It was moderate for give or seek information (0.48), encourage seeing unspecified and other professionals (0.56), and contact professional on their behalf (0.56). Kappa was fair for encourage self-help (0.34) and poor for assess problem/risk of harm (0.15). Kappa could not be calculated, because of zero frequencies from the first rater, for the categories of mental health team/services and other mental health professionals. To better understand the reasons for the fair and poor agreement on two of the codes, positive and negative agreement were examined separately.

The most common first aid responses were found to be encouraging professional help-seeking and listening/talking/supporting the person. Nevertheless, these responses were far from universal, with 32–44% of respondents not mentioning professional help and 27–34% not mentioning listening/talking/supporting. Given the likely helpfulness of these first aid responses, they need greater promotion in the community.

Other first aid responses were mentioned only by a minority. Of particular concern is the low percentage assessing risk of harm for the person in the depression/suicidal vignette. Asking about suicidal intentions is often recommended as a response.

Encouraging self-help was another minority response, but was associated with stigma and lack of recognition of the mental disorder in the vignette. Respondents appear to have suggested self-help as an alternative to professional help, rather than as a complement to it. We have previously reviewed the evidence on self-help interventions for depression and anxiety disorders and found that some have support.

When correlates of first aid responses were examined, most variables had at least one significant association. However, the variables that most often predicted first aid responses were female gender, low personal stigma and correct recognition of the disorder in the vignette. The latter two predictors indicate potential barriers to providing first aid. Respondents who saw the person in the vignette as having negative attributes were less likely to respond by encouraging professional help-seeking or providing personal support; efforts to reduce stigma in the community may therefore facilitate greater first aid. People who did not recognize the disorder showed a similar pattern of responses. These people lack knowledge of mental disorders, at least to the extent of being able to apply a psychiatric label. Therefore community education about how to recognize these disorders may also facilitate helpful first aid responses.

One approach to improving public responses to people with a mental disorder is an individual training course in mental health first aid.

The study has two limitations which must be acknowledged. The major one is that the study assessed intended first aid towards a hypothetical person in a case vignette. Whether these intentions would be implemented in practice is unknown. Intentions might be seen as placing an upper limit on responses, such that if a respondent fails to state an intention, the action is unlikely to be seen in practice. An alternative approach would have been to ask respondents how they had treated actual people they knew with mental health problems. However, the disadvantage of this alternative would have been the lack of standard situations. A second limitation is that the inter-rater reliability of coding some of the first aid responses was low.
Conclusions about these responses must be viewed with caution. On the other hand, the strengths of the study are the large representative sample, the open-ended responses which did not constrain the respondents to particular alternatives, and the ability to compare responses to a series of standard scenarios.There is room for improving the range of mental health first aid responses in the community. Lack of knowledge of mental health and stigmatizing attitudes are important barriers to effective first aid.The author(s) declare that they have no competing interests.AFJ was involved in securing funding for the survey, had a major role in the design of the survey and the interview questionnaire, carried out the descriptive statistical analysis and had a major role in writing the manuscript.KAB provided research assistance with the survey, coded all the responses, and wrote the method relevant to this coding.KMG was involved in the design of the study, developed the stigma scales and wrote some of the manuscript.BAK developed the coding scheme, was the second rater for inter-rater reliability, and wrote some of the manuscript.RAP did the regression analyses and wrote the section of the Method describing this.All authors read and approved the final manuscript.The pre-publication history for this paper can be accessed here:
The clinical course of breast cancer is difficult to predict on the basis of established clinical and pathological prognostic criteria. Given the genetic complexity of breast carcinomas, it is not surprising that correlations with individual genetic abnormalities have also been disappointing. The use of gene expression profiles could result in more accurate and objective prognostication.

To this end, we used real-time quantitative RT-PCR assays to quantify the mRNA expression of a large panel (n = 47) of genes previously identified as candidate prognostic molecular markers in a series of 100 ERα-positive breast tumor samples from patients with known long-term follow-up. We identified a three-gene expression signature (BRCA2, DNMT3B and CCNE1) as an independent prognostic marker. This "poor prognosis" signature was then tested on an independent panel of ERα-positive breast tumors from a well-defined cohort of 104 postmenopausal breast cancer patients treated with primary surgery followed by adjuvant tamoxifen alone: although this "poor prognosis" signature was associated with shorter relapse-free survival in univariate analysis (P = 0.029), it did not persist as an independent prognostic factor in multivariate analysis (P = 0.27). Our results confirm the value of gene expression signatures in predicting the outcome of breast cancer.

Breast carcinoma is the most common female cancer and is showing an alarming year-on-year increase. Most patients do not die as a result of the primary tumor but from metastatic invasion. The mean 5-year relapse-free survival rate is about 60% overall, but differs significantly between patients with forms that rapidly metastasize and those with less aggressive forms. Current clinical, pathological and biological parameters, i.e. age, menopausal status, lymph-node status, macroscopic tumor size, histological grade and estrogen receptor status, fail to accurately predict clinical behavior.

Breast cancer initiation and progression is a process involving multiple molecular alterations, many of which are reflected by changes in gene expression in malignant cells. Many clinical studies have attempted to identify correlations between altered expression of individual genes and breast cancer outcome, but often with contradictory results. Examples of such genes include ERBB2, CCND1, MYC, UPA and PAI1. It is therefore unlikely that the evolution of such a complex disease can be accurately predicted from the expression of any single gene.

The recent development of effective tools for monitoring gene expression on a large scale is providing new insights into the involvement of gene networks and regulatory pathways in various tumor processes.

In this study, we used real-time quantitative RT-PCR assays to quantify the mRNA expression of 47 candidate prognostic molecular markers in a series of 100 ERα-positive breast tumor samples. We identified a three-gene expression signature (BRCA2, DNMT3B and CCNE1) associated with poor clinical outcome. We then tested this "poor prognosis" signature on an independent panel of ERα-positive breast tumor samples from a well-defined cohort of 104 postmenopausal breast cancer patients, with known long-term follow-up, treated with primary surgery followed by adjuvant tamoxifen alone.

The first series consisted of 100 patients, who were pre- or post-menopausal. Sixty patients received adjuvant therapy, consisting of chemotherapy alone in 14 cases, hormone therapy alone in 15 cases, and both treatments in 31 cases. The standard prognostic factors are presented in the accompanying table. The second series consisted of 104 post-menopausal women whose breast tumors were excised at Centre René Huguenin from 1980 to 1994.
The patients all received post-operative adjuvant hormone therapy consisting of tamoxifen (20 mg daily for 3–5 years) and no other treatment. The standard prognostic factors are reported in the accompanying table.

Complete clinical, histological and biological information was available for the two series of breast cancer patients; no radiotherapy or chemotherapy was given before surgery, and full follow-up took place at Centre René Huguenin. The histological type of the tumor and the number of positive axillary nodes were established at the time of surgery. The malignancy of infiltrating carcinomas was scored according to the Scarff, Bloom and Richardson (SBR) histoprognostic system.

Immediately following surgery, both series of tumor samples were placed in liquid nitrogen and stored there until total RNA extraction.

Quantitative values are obtained from the cycle number (Ct) at which the increase in fluorescent signal associated with exponential growth of PCR products starts to be detected by the laser detector of the ABI Prism 7700 Sequence Detection System, using the PE Biosystems analysis software according to the manufacturer's manuals.

The precise amount of total RNA added to each reaction and its quality (i.e. lack of extensive degradation) are both difficult to assess. We therefore also quantified transcripts of the gene TBP (GenBank accession NM_003194), encoding the TATA box-binding protein (a component of the DNA-binding protein complex TFIID), as an endogenous RNA control, and normalized each sample on the basis of its TBP content.

Results, expressed as N-fold differences in target gene expression relative to the TBP gene and termed "Ntarget", were determined by the formula Ntarget = 2^ΔCt_sample, where the ΔCt value of the sample is obtained by subtracting the average Ct value of the target gene from the average Ct value of the TBP gene.

The Ntarget values of the samples were subsequently normalized such that the Ntarget value of the tumor sample containing the smallest amount of target gene mRNA in each tumor series equaled 1.

Primers and probes for TBP and the 47 target genes were chosen with the assistance of the computer program Oligo 5.0. We conducted searches in the dbEST, htgs and nr databases to confirm the total gene specificity of the nucleotide sequences chosen for the primers and probes, and the absence of single nucleotide polymorphisms. In particular, the primer pairs were selected to be unique relative to the sequences of closely related family member genes and of corresponding retropseudogenes. To avoid amplification of contaminating genomic DNA, one of the two primers or the probe was placed at the junction between two exons. Agarose gel electrophoresis allowed us to verify the specificity of the PCR amplicons. The 47 target genes tested in this study are listed in the accompanying table.

Total RNA was extracted from frozen tumor samples by using the acid-phenol guanidinium method. The quality of the RNA samples was determined by electrophoresis through agarose gels and staining with ethidium bromide; the 18S and 28S RNA bands were visualized under ultraviolet light.

Reverse transcription of total RNA was done in a final volume of 20 μL containing 1X RT buffer, 20 units of RNasin RNase inhibitor, 10 mM DTT, 100 units of Superscript II RNase H- reverse transcriptase, 3 μM random hexamers and 1 μg of total RNA. The samples were incubated at 20°C for 10 min and 42°C for 30 min, and reverse transcriptase was then inactivated by heating at 99°C for 5 min and cooling at 5°C for 5 min.
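As a concrete illustration of the TBP normalization described above, the following minimal Python sketch applies the Ntarget formula; the Ct values are illustrative, not data from the study.

def normalized_expression(ct_target, ct_tbp):
    """Ntarget = 2^(ΔCt), with ΔCt = mean Ct(TBP) - mean Ct(target).

    Each Ct is the average over replicate wells for one sample.
    """
    return 2.0 ** (ct_tbp - ct_target)

# Illustrative values for one tumor sample:
n_brca2 = normalized_expression(ct_target=27.1, ct_tbp=30.4)  # ~9.8-fold over TBP

# Scale so the sample with the least target mRNA in the series equals 1:
series = [n_brca2, 2.4, 0.6]
floor = min(series)
scaled = [x / floor for x in series]
print(scaled)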
All PCR reactions were performed with an ABI Prism 7700 Sequence Detection System (Perkin-Elmer Applied Biosystems), using either the TaqMan® PCR Core Reagents kit or the SYBR® Green PCR Core Reagents kit (Perkin-Elmer Applied Biosystems). The thermal cycling conditions comprised an initial denaturation step at 95°C for 10 min and 50 cycles at 95°C for 15 s and 65°C for 1 min.

The distributions of the gene mRNA levels were characterized by their median values and ranges. Relationships between the mRNA levels of the different target genes, and comparisons between target gene mRNA levels and the clinical parameters, were estimated using nonparametric tests: the Mann-Whitney U test and the Spearman rank correlation test (link between two quantitative parameters). Differences between the two populations were judged significant at confidence levels greater than 95% (p < 0.05).

To visualize the efficacy of a molecular marker in discriminating two populations (patients who relapsed versus those who did not), we summarized the data in a ROC (receiver operating characteristic) curve, which plots the sensitivity on the Y axis against 1 – the specificity on the X axis, considering each value as a possible cutoff. The AUC (area under the curve) was calculated as a single measure of the discriminative efficacy of a molecular marker. When a molecular marker has no discriminative value, the ROC curve lies close to the diagonal and the AUC is close to 0.5. When a marker has strong discriminative value, the ROC curve moves up toward the upper left-hand corner (or down toward the lower right-hand corner) and the AUC is close to 1.0 (or 0).

Hierarchical clustering was performed using the GenANOVA software.

Relapse-free survival (RFS) was determined as the interval between diagnosis and detection of the first relapse. Survival distributions were estimated by the Kaplan-Meier method.

The results for the 47 genes are summarized in the accompanying table. Seven genes showed significantly different expression according to relapse status (P < 0.05), namely BRCA2, DNMT3B, CCNE1, HMMR/RHAMM, MKI67, TERT and CCND1. The prognostic performance of these seven genes was also assessed by ROC-AUC analysis, and BRCA2 emerged as the most discriminatory marker of relapse status. The mRNA expression of this gene, as well as of DNMT3B, CCNE1, HMMR/RHAMM, MKI67 and TERT, was higher in patients who relapsed than in patients who did not relapse, while only CCND1 mRNA expression was lower in patients who relapsed.

On the basis of the three genes with the highest ROC-AUC values (BRCA2, DNMT3B and CCNE1), the patient population fell into two subgroups with significantly different relapse-free survival curves in univariate analysis; only this three-gene signature persisted as an independent prognostic factor in multivariate analysis. The results for each of the three genes are summarized in the accompanying table: BRCA2, DNMT3B and CCNE1 mRNA levels were significantly higher in patients who relapsed than in those who did not relapse.

On hierarchical clustering of the samples, the three-gene expression signature dichotomized the 104 patients of the second series into two subgroups of similar sizes to those of the initial patient population. The "poor prognosis" signature was again associated with shorter relapse-free survival in this independent tumor series in univariate analysis; in multivariate analysis, however, only SBR grade retained independent prognostic significance in this series.
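For readers who wish to reproduce the ROC-AUC calculation used above, the area under the curve can be computed directly from the Mann-Whitney U statistic, since AUC = U/(n1·n2): it equals the probability that a randomly chosen relapse sample shows higher expression than a randomly chosen relapse-free sample. A minimal Python sketch, with purely illustrative expression values rather than the study's data:

import numpy as np

def roc_auc(relapse_values, no_relapse_values):
    """AUC as the Mann-Whitney U statistic divided by n1*n2.

    Ties count one half; 0.5 indicates no discrimination, values
    near 1 indicate strong discrimination.
    """
    a = np.asarray(relapse_values, dtype=float)
    b = np.asarray(no_relapse_values, dtype=float)
    wins = (a[:, None] > b[None, :]).sum()
    ties = (a[:, None] == b[None, :]).sum()
    return (wins + 0.5 * ties) / (len(a) * len(b))

# Illustrative normalized mRNA levels (relapse vs. no relapse):
print(roc_auc([5.1, 2.5, 7.2, 3.0], [1.2, 2.8, 3.1, 0.9, 2.2]))  # 0.85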
This "poor prognosis" signature was then tested on an independent set of 104 ERα-positive breast tumors from a well-defined cohort of postmenopausal breast cancer patients treated with primary surgery followed by adjuvant tamoxifen alone. It was found to be significant in univariate analysis (P = 0.029), but not in multivariate analysis (P = 0.27). We have previously published individual data for 18 of these 47 genes, namely ERBB1-4 [MYC [TERT [CCND1 [CGB, CGA, ERα, ERβ, PR, PS2 [AR [DNMT3B [PAI1, PAI2 and UPA [We used real-time quantitative RT-PCR assays to quantify the mRNA expression of 47 genes previously identified as candidate prognostic molecular markers in 100 ERα-positive breast tumor samples. We identified a three-gene expression signature , and for distinguishing among closely related family member genes or alternatively spliced specific transcripts . Finally, real-time quantitative RT-PCR assay is a reference in terms of its performance, accuracy, sensitivity and throughput for nucleic acid quantification, and is more appropriate for routine use in clinical laboratories, being simple, rapid and yielding good inter-laboratory agreement and statistical confidence values.These discrepancies may be due to the clinical, histological and ethnic heterogeneity of breast cancer, but also to the fact that breast tumors consist of many different cell types – not just tumoral epithelial cells, but also additional epithelial cell types, stromal cells, endothelial cells, adipose cells, and infiltrating lymphocytes. Real-time RT-PCR requires smaller starting amounts of total RNA (about 1–2 ng per target gene) than do cDNA microarrays, making it more suitable for analyzing small tumor samples, cytopuncture specimens and microdissected samples. Real-time RT-PCR also has a linear dynamic range of at least four orders of magnitude, meaning that samples do not need to contain equal starting amounts of RNA. Real-time RT-PCR is also more suitable than cDNA microarrays for analyzing weak variations in gene expression and weakly expressed genes , confirming that the ERBB2 mRNA expression level is not a major prognostic factor in breast cancer; (b) ESR1/ERα mRNA levels were not different between the two subgroups , suggesting that the ESR1/ERα mRNA expression level in ERα-positive tumors is not predictive of outcome.The comparison of median target gene mRNA levels between patients who did and did not relapse provided two interesting results: (a) CCNE1), DNA methylation (DNMT3B) and DNA damage repair (BRCA2). This gene expression signature is an interesting candidate for routine clinical use, especially as the three genes encode well-characterized proteins for which specific antibodies are already commercially available. Furthermore, the three protein products are amenable to pharmacological control.The three-gene expression signature predictive of subsequent relapse status comprised genes involved in cell cycle control . We also observed a strong positive link between BRCA2 and MKI67, which encodes the proliferation-related Ki-67 antigen . The observed strong associations between BRCA2, HMMR/RHAMM and MKI67 mRNA expression explain why four- and five-gene expression signatures, comprising HMMR/RHAMM alone or together with MKI67, showed no additional prognostic value relative to the three-gene signature. control . We founBRCA2 expression ex vivo are in keeping with reports from several authors [BRCA2 mRNA expression is upregulated in rapidly proliferating cells in vitro. 
Our results are also in agreement with those of Egawa et al., showing that BRCA2 expression carries a poor prognosis in breast cancer. This link between BRCA2 overexpression and poor outcome should be taken into account when evaluating future BRCA2-based therapeutic approaches to breast cancer.

DNMT3B, the third gene in our expression signature, codes for one of the three functional DNA methyltransferases that catalyze the transfer of methyl groups to the 5-position of cytosine (DNA methylation). We previously showed that, among the three DNA methyltransferases (DNMT1, DNMT3A and DNMT3B), only DNMT3B overexpression is associated with poor outcome in breast cancer. DNMT3B (like DNMT3A) is known to be a de novo methylator of CpG sites. Abnormal DNA methylation is thought to be a major early event in the development of tumors, characterized by widespread genome hypomethylation, leading to chromosome instability, together with localized DNA hypermethylation; the latter may be important in tumorigenesis by silencing tumor suppressor genes.

In conclusion, by studying the expression of 47 genes previously identified as candidate prognostic markers in breast cancer, we identified a three-gene expression signature (BRCA2, DNMT3B and CCNE1) with prognostic significance. The practical value of this signature remains to be validated in large prospective randomized studies.

ERα, estrogen receptor alpha; RT-PCR, reverse transcriptase-polymerase chain reaction.

Real-time RT-PCR was carried out by ST and IG. IB and RL interpreted the results and performed the bioinformatics and statistical analyses.
Non-neuronal cells, including those derived from lung, are reported to express nicotinic acetylcholine receptors (nAChR). We examined nAChR subunit expression in short-term cultures of human airway cells derived from a series of never smokers, ex-smokers, and active smokers.

At the mRNA level, human bronchial epithelial (HBE) cells and airway fibroblasts expressed a range of nAChR subunits. In multiple cultures of both cell types, mRNA was detected for subunits that constitute functional muscle-type and neuronal-type pentameric receptors. Two immortalized cell lines derived from HBE cells also expressed muscle-type and neuronal-type nAChR subunits. Airway fibroblasts expressed mRNA for three muscle-type subunits significantly more often than HBE cells. Immunoblotting of HBE cell and airway fibroblast extracts confirmed that mRNA for many nAChR subunits is translated into detectable levels of protein, and evidence of glycosylation of nAChRs was observed. Some minor differences in nAChR expression were found based on smoking status in fibroblasts or HBE cells. Nicotine triggered calcium influx in the immortalized HBE cell line BEAS2B, which was blocked by α-bungarotoxin and to a lesser extent by hexamethonium. Activation of PKC and MAPK p38, but not MAPK p42/44, was observed in BEAS2B cells exposed to nicotine. In contrast, nicotine could activate p42/44 in airway fibroblasts within five minutes of exposure.

These results suggest that muscle-type and neuronal-type nAChRs are functional in airway fibroblasts and HBE cells, that prior tobacco exposure does not appear to be an important variable in nAChR expression, and that distinct signaling pathways are observed in response to nicotine.

Nicotine, the addictive component of tobacco smoke, signals through its family of receptors, the nicotinic acetylcholine receptors (nAChR). Acetylcholine is the endogenous ligand for these receptors, and has been found in many tissues outside of the nervous system. Non-neuronal nAChR have also been identified in tissues such as the skin, vasculature, and nasal mucosa. The ionic permeability of nAChR is dependent upon the subunit composition of the receptor, with some receptors showing preference for either calcium or sodium.

The primary route of exposure to nicotine is through inhalation, either by active smokers or by non-smokers exposed to environmental tobacco smoke. Through inhalation, the lung in particular is exposed to pharmacological doses of nicotine. In addition, receptor inactivation is likely to occur in sensitive receptors, owing to the extended length of time that smokers use tobacco [5]. In vitro data indicate that airway epithelial cells release GM-CSF upon exposure to nicotine and activate Akt, a signaling molecule important in cell survival [9]. Using radiolabeled agonist, we have shown that saturable nicotine binding sites exist in the lung.

In this study, a series of 37 short-term human bronchial epithelial cultures, 25 airway fibroblast cultures, and 2 immortalized bronchial epithelial cell lines were examined by RT-PCR for nAChR expression. We also examined protein expression by immunoblot to determine which subunits are most highly expressed and whether appropriate combinations are present at the protein level to form functional receptors. We determined that the nAChR present are functional by examining calcium influx after agonist exposure, and its blockade by antagonists.
We also show that exposure of airway cells to nicotine leads to activation of downstream signaling pathways. Finally, we examined the nAChR present in HBE cells and airway fibroblasts derived from smokers, ex-smokers, and never smokers to determine whether alterations in nAChRs based on tobacco exposure can be detected.

HBE cells were cultured from airway biopsies using standard methodology in serum-free medium. BEAS2B cells were purchased from ATCC and cultured according to ATCC instructions. IB3-1 cells were derived from a cystic fibrosis patient and cultured in Hams medium with 10% serum.

All chemicals used were from Sigma and all supplies were from either Fisher Scientific or PGC Scientific, unless otherwise indicated.

RNA was isolated from cultures using the standard guanidinium thiocyanate method. Oligo dT12–18 (Invitrogen) was annealed to 1 μg total RNA and reverse transcribed with Superscript II (Invitrogen). The reaction contained 1 μg RNA, 500 ng Oligo dT12–18, 50 mM Tris-HCl, pH 8.3, 75 mM KCl, 3 mM MgCl2, 10 mM DTT, 1 mM each dNTP, and 200 U Superscript. Briefly, total RNA was incubated with oligo dT12–18 at 70°C for 10 min. The cDNA produced was then used as a template for PCR using specific primers.

Primers were developed to unique regions of each subunit and were tested on human muscle and brain RNA purchased from Clontech. Optimized protocols were then used on RNA from airway cells. All primers span introns and do not amplify DNA. GAPDH or actin was always used as a positive control for RNA integrity.

For immunoblotting, 50 μg of protein with loading buffer was denatured using 50 mM DTT and heated at 80°C for 15 min. Protein was loaded onto 10% Bis-Tris gels (Invitrogen). Brain lysate and lysates from myotube cultures were used as positive controls. Protein was transferred to PVDF or Multiblots. PVDF membranes were blocked for 1 hr with 5% blocker (Biorad) in TBS-T. Primary antibody was diluted in carrier protein (5% blocker for PVDF or 0.5% casein (Pierce) for Multiblots) and incubated at 4°C overnight.

Airway cells were grown in 24-well dishes, with 10,000 cells per well. After 24 hours, medium was replaced with serum-free, prewarmed medium spiked with calcium-45, at a final specific activity of approximately 60 μCi/mM calcium. Drug was added to the wells as indicated, in triplicate. If nAChR antagonists or channel blockers were used, they were added 20 min before the addition of agonist. The calcium ionophore A23187 was used as a positive control for calcium influx. After incubation at 37°C, plates were put on ice and washed 3 times with ice-cold PBS. Lysis buffer was added, lysates were transferred to scintillation vials, scintillation fluid was added, and samples were counted. Results are presented as percent of control, with the untreated control normalized to 100%.

For phospho-PKC, phospho-p42/44, and phospho-p38, a 10-minute exposure to 20 ng/ml EGF was used as the control; these conditions have been published as optimal for signaling by EGF [17].

Differences in expression frequency of nAChR subunits at either the RNA or protein level were analyzed by Fisher's Exact Test. In all other experiments, differences from control were determined using Student's t-test. All p-values reflect two-tailed tests.
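As a concrete illustration of the statistical comparisons just described, the following Python sketch applies Fisher's exact test to subunit-frequency counts and a two-tailed Student's t-test to calcium-45 influx values expressed as percent of control. The fibroblast counts and influx values are hypothetical placeholders, not the study's raw data.

from scipy.stats import fisher_exact, ttest_ind

# Frequency of a subunit combination in the two cell types, given as
# [expressing, non-expressing] cultures; fibroblast counts are invented
# for illustration (HBE counts follow the 7-of-33 example in the text).
hbe = [7, 26]
fibroblast = [17, 6]          # hypothetical
_, p = fisher_exact([hbe, fibroblast])
print(f"Fisher's exact test, two-tailed p = {p:.4f}")

# Calcium-45 influx as percent of untreated control
# (triplicate wells per condition, illustrative values):
control = [100.0, 96.0, 104.0]
nicotine = [182.0, 165.0, 171.0]
t, p = ttest_ind(nicotine, control)   # two-tailed Student's t-test
print(f"t = {t:.2f}, p = {p:.4f}")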
Using RT-PCR, we repeatedly detected mRNA for nAChR subunits in short-term cultures of human airway cells from bronchial biopsies. The subunits expressed by HBE cells could potentially combine to form muscle-type (α1/β1/δ/ε) heteropentamers, neuronal α7 or α9 homopentamers, and neuronal heteropentamer receptors α3/α5/β2 or β4 and α6/β2 or β4. By examining the pattern of mRNA expression in each individual culture, combinations were observed that would produce a functional muscle-type receptor in 7 of 33 (21%) of cultures, a functional α3-containing neuronal-type receptor in 10 of 28 (35%) of cultures, an α6-containing neuronal-type receptor in 13 of 28 (46%), α7 homopentamer receptors in 25 of 37 (68%), and α9 homopentamer receptors in 27 of 29 cultures (93%). Only 2 of 35 (6%) HBE cultures did not express at least one functional nAChR subunit combination.

Two immortalized airway epithelial cell lines also expressed mRNA for many of these nAChR subunits, including muscle-type subunits (Table 3). Airway fibroblasts also expressed nAChR subunits.

The mRNA expression of four nAChR subunits differed significantly between airway fibroblasts and HBE cells. The muscle-type receptor subunits α1, δ, and ε were all expressed more frequently in airway fibroblasts than in HBE cells. In addition, the β3 subunit was also expressed more frequently in fibroblast cultures than in HBE cells. The subunit combinations that could form functional receptors were also examined and compared between cell types. Consistent with the individual subunit data, the combination of all subunits for the muscle-type receptor was expressed significantly more frequently in human airway fibroblasts (74%) than in HBE cultures (21%) (p = 0.0001).

The nAChR subunit α9 was frequently expressed by both HBE cells and airway fibroblasts. Although we find this nAChR frequently and it may have physiological significance, it is unlikely that signaling through this receptor is responsible for the immediate downstream effects seen in our studies, which focus on the effects of nicotine, since nicotine does not act as an agonist for this receptor type.

We next examined nAChR subunit protein expression using immunoblotting. All the cultures examined expressed protein for nAChR subunits, in general agreement with the mRNA expression data.

We examined the functionality of the nAChR on HBE cells by measuring calcium influx. After treatment with nicotine, extracellular radioactive calcium (Ca-45) was internalized by BEAS2B cells and short-term HBE cultures.

To test the specificity of this response, we used the nicotinic antagonists α-bungarotoxin and hexamethonium with BEAS2B cells and nicotine. α-Bungarotoxin blocks muscle-type nAChR as well as α7 homopentamers, and hexamethonium blocks neuronal heteropentamer nAChR such as α3- and α6-containing receptors. We used ionophore alone and with the nicotinic antagonists as a control in these experiments; the antagonists had no effect on ionophore-induced calcium influx (data not shown). In this experiment we found that α-bungarotoxin could completely prevent the calcium-45 influx seen after nicotine treatment, while hexamethonium could only slightly inhibit the effect of nicotine in BEAS2B cells.

We next asked whether PKC responds to nicotine, because PKC is commonly activated by calcium influx, and calcium influx is an immediate effect of nicotine exposure in BEAS2B cells. Using phosphorylation as a marker of activation, we found that PKC and p38, but not p42/44, were activated after treatment with nicotine in BEAS2B cells.
In contrast, signaling experiments done with short-term airway fibroblast cultures showed that nicotine caused phosphorylation of p42/44 within 10 minutes of exposure.

Together, these data indicate that the muscle-type (α1/β1/δ/ε) nAChR is consistently present on airway epithelial cells, while airway fibroblasts consistently demonstrate both a muscle-type and an α7 homomeric nAChR. Normal airway epithelial cells may also sometimes express the neuronal α3/α5/β2 or α6/β2 nAChR. The nicotinic receptors are functional, regulate calcium influx upon ligand binding, and lead to downstream activation of the signaling pathways MAPK or PKC when bound by nicotine. Downstream effects can be blocked by use of nicotinic antagonists.

We examined the relationship of prior smoking to nAChR expression on airway cells. To do so, we compared the subunits expressed at the mRNA and protein level in HBE cultures from never smokers, active smokers, and ex-smokers to determine whether long-term exposure to nicotine was a factor in the type of nAChR expressed. Interestingly, in airway fibroblasts, mRNA patterns for combinations of neuronal heteropentamers containing the α3 subunit were downregulated with smoking.

Recent evidence suggests that endogenous acetylcholine is a local signaling molecule in non-neuronal tissue, and that nicotinic acetylcholine receptors are found outside the nervous system. Our data show that HBE cells can express mRNA for the neuronal α3/α5/β2 or β4, α6/β2 or β4, and α7 and α9 pentamers, as well as the muscle-type α1/β1/δ/ε nAChR. Previous data in the human airway examined small numbers of HBE cultures for a limited number of nAChR subunits; Maus et al., for example, used RT-PCR on a small set of cultures. Our data extend such observations to a larger series of cultures and a broader panel of subunits.

Instead, our results suggest that the muscle-type nAChR present in HBE cells may have a functional role that has not previously been considered. The muscle-type receptor was characterized more recently than the neuronal type, and the previous literature never examined HBE cultures for the muscle-type receptor.

Similarly to HBE cells, airway fibroblast cultures commonly express mRNA and protein for nAChR subunits. Airway fibroblasts have never before been examined for the presence of nicotinic receptors. Dermal fibroblasts have been shown to express mRNA for some receptors, although they were not examined for muscle-type receptors.

Calcium influx is a hallmark of the opening of the nAChR ion channel. An increase in intracellular calcium from the extracellular milieu can occur either by direct influx of calcium through the nAChR channel, as occurs with α7 receptors, or by an influx of sodium that leads to depolarization of the cell and the opening of L-channels, as occurs after agonist binding to heteropentamer nAChR.

Nicotine has previously been shown to affect signaling in human airway cells, and acetylcholine, the endogenous ligand, causes proliferation of HBE cells. In our experiments, nicotine triggered rapid PKC activation in BEAS2B cells. Over the same time period, the MAPK family member p38, but not p42/44, is phosphorylated, a required step for activation. The activation of p38 is associated with regulation of apoptosis in response to cellular stress. The MAPK family kinases, such as p38, are not known to be directly activated by calcium; however, there are several indirect pathways that lead to rapid phosphorylation of these kinases.
These include signaling through the calcium/calmodulin-dependent protein kinases CaMKI, CaMKII, and CaMKIV, as well as the calcium-activated signaling molecule PYK2.

This finding is in contrast to signaling activated by nicotine in airway fibroblasts. These cells phosphorylate p42/44 immediately upon treatment with nicotine. Like p38, p42/44 is not directly activated by calcium, but could be phosphorylated by calcium-activated signaling molecules. However, the pathways that lead to phosphorylation of different MAPK family members in the two cell types in response to nicotine have not been elucidated. It is possible that the differences in nAChR types are responsible for the differential MAPK effects. For example, the α7 receptor is highly expressed on airway fibroblasts but not on HBE cells. The additional nAChR type on these fibroblasts may change the signaling pathways activated in response to nicotine. This is consistent with a previous study by Jull et al.

Finally, we found that smoking had only modest effects on nAChR expression in the airway. Previous studies in the brain indicate that certain nAChR may be increased in frequency in smokers as compared to non-smokers.

In contrast, there was a significant decrease in the frequency of expression of the functional combination of subunits for α3-containing receptors in airway fibroblasts of smokers and ex-smokers compared to never-smokers. This nAChR type is a sodium channel that undergoes inactivation upon long-term exposure to agonist and, in airway fibroblasts, appears to be downregulated with long-term exposure to nicotine. This change in receptor expression remains even after exposure to nicotine ceases, as evidenced by the reduced frequency of expression in ex-smokers.

We have shown that short-term cultures of normal airway fibroblasts as well as normal human bronchial epithelial cells from a number of different human donors consistently express functional nAChR, and that these cell types differ in the type of nAChR they express. It is likely that the muscle-type nAChR plays a major role in the response of HBE cells to nicotine, and that the neuronal heteropentamers play a more minor role. Calcium influx as well as initiation of downstream signaling pathways indicate that the receptors are functional and that both human bronchial epithelial cells and airway fibroblasts respond to nicotine; those signaling responses may differ owing to the different nAChR present on each cell type. Together, these data suggest that exposure of the human airway to nicotine through tobacco smoke may have physiological consequences for airway homeostasis involving both the airway mucosa and the underlying submucosal mesenchymal cells. As such, nicotine may act to promote lung disease by changing cell growth and apoptosis. In airway fibroblasts this may lead to the thickening of the airway wall seen in the pathogenesis of COPD. In the bronchial epithelium this may lead to preneoplasia or the development of frank cancer.

nAChR: nicotinic acetylcholine receptor
HBE: human bronchial epithelial cell
PKC: protein kinase C
MAPK: mitogen-activated protein kinase
BEGM: bronchial epithelial growth medium
EGF: epidermal growth factor
p42/44: extracellular signal-regulated kinase isoforms 1 and 2

DLC designed and performed the majority of the experiments and data analysis, and wrote the manuscript. TMH designed and performed RT-PCR with the assistance of MJS.
JDL and NAC contributed the tissues that were grown into primary cultures and provided information on smoking history. AGD cultured the primary cells. JMS conceived of the study and participated in its design and coordination. All authors read and approved the manuscript.
DNA-directed synthesis represents a powerful new tool for molecular discovery. Its ultimate utility, however, hinges upon the diversity of chemical reactions that can be executed in the presence of unprotected DNA. We present a solid-phase reaction format that makes possible the use of standard organic reaction conditions and common reagents to facilitate chemical transformations on unprotected DNA supports. We demonstrate the feasibility of this strategy by comprehensively adapting solid-phase 9-fluorenylmethoxycarbonyl-based peptide synthesis to be DNA-compatible, and we describe a set of tools for the adaptation of other chemistries. Efficient peptide coupling to DNA was observed for all 33 amino acids tested, and polypeptides as long as 12 amino acids were synthesized on DNA supports. Beyond the direct implications for synthesis of peptide–DNA conjugates, the methods described offer a general strategy for organic synthesis on unprotected DNA. Their employment can facilitate the generation of chemically diverse DNA-encoded molecular populations amenable to in vitro evolution and genetic manipulation.

A method is presented that makes possible the use of standard organic reaction conditions and common reagents in the presence of unprotected DNA, an important step in enabling DNA-directed chemical synthesis and drug discovery.

A number of strategies have been proposed recently to enable the in vitro selection and evolution of chemical libraries. One approach takes advantage of hybridization to induce proximity between reactants covalently attached to oligonucleotides. "Reading" is accomplished by the hybridization of the reactant conjugates to a DNA template, whereas synthetic execution results from the reactants being positioned closely together. The strategy has been demonstrated for several types of chemistries. Rather than tailoring reactions to the narrow window of hybridization conditions, however, DNA reading and chemical transformation can be carried out in chronologically distinct steps.

We first had to choose a solid-phase material that exhibited several critical properties: reversible, efficient binding and release of unprotected DNA; robust solvent integrity; and resistance to chemical modification. The first requirement narrowed our focus to resins that noncovalently bind DNA. A number of resins were tested and excluded due to poor bind–release properties. Others exhibited extensive compression in organic solvent or poor reswelling during organic-to-aqueous solvent transitions (Poros 50 HQ). Reverse-phase resins were excluded because they would presumably not retain DNA in many organic solvents.

The remaining candidate, DEAE Sepharose, retained its integrity in H2O, methanol (MeOH), dimethyl sulfoxide, N,N-dimethylformamide (DMF), ethyl acetate, and dichloromethane. Lastly, Sepharose has been used previously as a material for solid-phase synthesis. In a quantitative bind–release assay, oligonucleotides were immobilized and eluted quantitatively in small volumes. All subsequent work therefore used DEAE Sepharose as the support.

We studied 9-fluorenylmethoxycarbonyl (Fmoc)-based peptide synthesis because it has a well-established solid-phase precedent and offers a challenge in diverse chemical functionality.
Amino acids activated as N-hydroxysuccinimide, 1-hydroxy-7-azabenzotriazole (HOAt), or N-hydroxybenzotriazole esters coupled to the DNA supports, and 1-(3-dimethylaminopropyl)-3-ethylcarbodiimide hydrochloride (EDC) out-performed the other activating reagents examined; other reagents gave poor coupling yields (benzotriazole-1-yl-oxy-tris-pyrrolidino-phosphonium hexafluorophosphate [PyBOP]), resulted in the formation of undesired side products, or led to poor recovery of DNA (dicyclohexylcarbodiimide [DCC] and diisopropylcarbodiimide [DIC]). Thirty-minute EDC coupling reactions were typically less than 50% efficient without the addition of acylation catalysts such as triazoles; of the catalysts examined, HOAt performed best.

To examine whether the coupling conditions generalized to longer DNA fragments, we used an aminated 340-base ssDNA as the support. After coupling, the eluted amino acid–DNA conjugates were digested with nuclease P1, a 3′-to-5′ exonuclease that cleaves all but the 5′ phosphodiester bond of our ssDNA constructs. The 5′-terminal nucleotide, which retains the linker and synthetic peptide product, was separated from the other nucleoside monophosphates by HPLC and verified by electrospray ionization mass spectrometry. Amino acid coupling to the 340-base ssDNA proceeded with efficiencies comparable to those observed with oligonucleotides (data not shown). In some cases, the sensitivity of this HPLC assay was increased using fluorescence detection. For these experiments, we synthesized a fluorescent lysine derivative, Fmoc-Lys(coumarin)-OH.

The carboxylic acid side chains of aspartic and glutamic acid are usually protected as esters during Fmoc peptide synthesis. Sterically bulky esters are required to suppress piperidine-induced imide formation, which leads to undesired side-chain peptide bonds. We discovered that tert-butyl esters, which are normally cleaved with trifluoroacetic acid, could be removed at pH 6.5 in an aqueous solution at 70 °C. This gentle condition offers a convenient approach for acid deprotection.

To verify that the thermolytic tert-butyl ester deprotection did not proceed through intramolecular imide formation, we coupled either Fmoc-Asp(tBu)-OH or Fmoc-Asp-OtBu to Leu-NC20, followed by Fmoc-Phe-OH. The main and side chain isomers of these tripeptide–DNA conjugates were resolved by HPLC after removal of the t-butyl group. Interconversion of the isomers during deprotection was undetectable (less than 5%). When the experiment was repeated using a 10 mM NaOH solution for tert-butyl ester deprotection (where imide formation would be expected), interconversion of the side and main chain isomers was observed. These results indicate that the thermolytic deprotection maintains the regiochemistry of the initial peptide bonds. At this time, we have little other data that speak to the mechanism of deprotection.

Protection of the primary amine side chain of lysine prevents the formation of branched peptides. 2-Nitrobenzenesulfonamide (nosylamide) protection was particularly attractive as a means for lysine protection because the protecting group is base-stable and removed under conditions known to be DNA-compatible.

Arginine does not absolutely require side chain protection. Histidine, in contrast, is conventionally protected with the trityl group, which is compatible with tert-butyl esters. However, the trityl group is not ideal in aqueous conditions because of its hydrophobicity and extreme lability at high temperatures. To offer a more robust solution, we sought an H2O-compatible histidine protecting group.
The 2,4-dinitrophenyl group, widely used in Boc peptide synthesis, is more hydrophilic than trityl and is removed under the nosyl-deprotection conditions. Unfortunately, 2,4-dinitrophenyl is not stable to the piperidine used for Fmoc removal. The trityl group of His(Trt), on the other hand, is rapidly removed under the thermolytic conditions used to deprotect tert-butyl esters. We therefore employed His(CNP) in subsequent syntheses.

Histidine racemization is a well-recognized concern in peptide synthesis. To assay the extent of racemization occurring during histidine coupling, we synthesized the oligonucleotide–dipeptide conjugate His(CNP)-Ala-NC20 using either L- or D-Fmoc-His(CNP)-OH. The diastereomeric dipeptide–oligonucleotide products were resolvable by reverse-phase HPLC. Neither L-His nor D-His coupling resulted in detectable racemization. The experiment was repeated using L- and D-Fmoc-His(Trt)-OH with the same result.

Fmoc-Cys(StBu)-OH was employed as the protected form of cysteine. The tert-butyl thioether coupled efficiently and was removed under the BME-mediated deprotection conditions. Cumulative yields for the assembly of n amino acids exceeded (0.9)^n, and the HPLC data ruled out substantial accumulation of side products.

Beyond the physical and chemical characterization of the peptide–oligonucleotide conjugates, it was important to examine their behavior in a biochemical setting. Thus, the [Leu]enkephalin pentapeptide was synthesized on an aminated 340-base ssDNA support, which was subsequently converted to duplex form. The conjugate exhibited a peptide-dependent electrophoretic mobility gel shift when incubated with the 3-E7 antibody, demonstrating that the DNA-displayed peptide is accessible for specific molecular recognition.

DNA-directed synthesis requires "chemical translation": the information in a DNA sequence must be read and executed as chemical synthesis. The proximity approach to chemical translation uses hybridization to induce proximity-driven chemical transformations. Because the DNA "reading" and chemical execution steps are simultaneous, reactions are necessarily performed in aqueous solutions with solute, pH, and temperature conditions that promote DNA–oligonucleotide hybridization. These conditions limit the generality, efficiency, and speed of possible organic transformations.

In contrast, the solid-phase format described here permits reactions in H2O, MeOH, ethanol, isopropanol, DMF, dimethyl sulfoxide, dichloromethane, and ethyl acetate (data not shown). We also carry out reactions at high temperatures. For example, difficult peptide couplings are facilitated with elevated temperature, and the BME-mediated deprotection of lysine, arginine, histidine, and cysteine is carried out at 60 °C. We have recently used microwave-assisted methods to carry out peptoid submonomer synthesis on unprotected DNA (data not shown). The chemistry used for peptoid synthesis is entirely different from peptide chemistry, illustrating the generality of the strategy. The potential for adapting other chemistries is essentially limitless. Wittig reactions, azide reductions, 1,3-dipolar cycloadditions, reductive aminations, Heck couplings, and a wide variety of other useful chemical transformations have been carried out in the presence of unprotected DNA without modification of the DNA.

Nuclease P1 (#27–0852-01), DEAE Sepharose Fast Flow (#17–0709-01), and Medium Grade G-25 Sephadex (#17–0033-01) were purchased from Pharmacia-LKB Technology. Xba1 was purchased from New England Biolabs. DEAE Sepharose columns were poured in Empty TWIST synthesis columns. Kendall Monoject syringes and a Promega manifold with chemically resistant PFTE stopcocks were used. All other chemical reagents or solvents were purchased from either Sigma-Aldrich or Fisher Scientific International. Fmoc amino acids were purchased from Novabiochem, Chem-Impex International, or Fluka. EDC was purchased from Omega Chemical.
The internal control 10-base oligonucleotide had the sequence CGGACTAGAG. The reactive 20-base oligonucleotides had the sequence H2N-X-AGCAGGCGAATTCGTAAGCC, where X represents a C12 linker (NC20) or a longer PEG linker (NP20). NC20 was synthesized using the Glen Research 5′-Amino-Modifier C12 (#10–1922). NP20 was synthesized using the Glen Research Spacer Phosphoramidite 18 (#10–1918) followed by the 5′-Amino-Modifier 5 (#10–1905).

Coupling reactions were monitored by HPLC mobility shift using a C18 analytical column and UV detection at 260 nm and 280 nm. Linear gradients from 0%–90% acetonitrile in 100 mM triethylammonium acetate (pH 5.5) were employed. Coupling efficiencies and yields were determined by integration of elution peaks from the 260-nm channel. Chromophores added or removed during reactions cause changes in extinction coefficients (oligonucleotide ε260nm ≈ 224.5 mM−1cm−1) smaller than the sensitivity (5%) of our HPLC assay and were not considered in efficiency and yield determination. Reaction products were collected, concentrated to approximately 50 μM using centrifugal evaporation, and desalted over G-25 Sephadex. A mixture of 1 μl of desalted oligonucleotide and 1 μl of a freshly prepared saturated matrix solution was spotted on a matrix-assisted laser desorption/ionization target and allowed to air dry before mass spectrometry analysis. The matrix solution was made from 250 μl of H2O, 250 μl of acetonitrile, 25 mg of THAP, and 10 mg of ammonium tartrate. Peptide sequences (five or more amino acids) were verified by Edman degradation peptide sequencing.

For the 340-base ssDNA supports, the DEAE elute buffer containing the peptide–DNA conjugates was neutralized, brought to 100 mM sodium acetate (pH 5.2) and 400 μM ZnSO4, and digested with nuclease P1. Yields were determined by integration of elution peaks from the 260-nm channel, using the P1 digestion product of unreacted starting material as a reference. Approximately 1 nmol of material was required for accurate UV detection. Products were collected, concentrated by centrifugal evaporation, and applied to a C18 SepPak cartridge in 25 mM triethylammonium acetate (pH 5.5). The cartridge was washed with 3 ml of 25 mM triethylammonium acetate (pH 5.5) and 1 ml of H2O. The products were eluted with 1 ml of 50/50 MeCN/H2O, concentrated to 100 μl by centrifugal evaporation, and analyzed by electrospray ionization mass spectrometry. For coumarin-labeled products, fluorescence was monitored (320 nm excitation/380 nm emission) with a scanning fluorescence detector, and less than 50 pmol of material was necessary for accurate fluorescence detection.

A 5′-aminated 340-base ssDNA support was generated as described. For gel analysis, band intensities were quantified (at a scale of approximately 10^5 counts/pixel). The intensities of the full-length control and [Leu]enkephalin bands were similar to within 1% (S/N approximately 600). Upon peak integration along the entire lane, the full-length band represented a similar percentage of total intensity in the control (83%) and [Leu]enkephalin (81%) samples. The data suggest that, in the worst case, 3% of the DNA could have been modified during the course of peptide synthesis.

After loading, columns were washed with H2O followed by 12 ml of DEAE bind buffer (10 mM acetic acid and 0.005% Triton X-100) using a syringe or a syringe barrel, a male–male luer adapter, and a vacuum manifold. Long DNA molecules (340mers) were eluted with 4 ml of 1.5 M NaCl, 10 mM NaOH, and 0.005% Triton X-100 heated to 80 °C.

For succinimidyl ester couplings, the column was incubated with a solution containing the Fmoc–amino acid succinimidyl ester, H2O, and 7.5 μl of diisopropylethylamine. Fmoc-Asn-OSu required four couplings rather than two to achieve quantitative yields. The following process was carried out twice.
The column, with DNA bound, was washed with 3 ml of DMF. Using two syringes, the column was incubated with the coupling solution. After the second amino acid incubation, the column was washed with 3 ml of DMF. Fmoc deprotection was carried out as follows: 3 ml of 20% piperidine in DMF was applied to a 3-ml syringe barrel attached to the top of the column; 1.5 ml was pushed through the column, followed by a 3-min incubation; an additional 1 ml was pushed through the column, followed by a 17-min incubation. The procedure was completed with a final 3-ml DMF wash.

For carbodiimide-mediated couplings, the column was washed with 500 μl of H2O and 3 ml of MeOH and then incubated for 30 min at room temperature with a freshly prepared 500-μl solution of 50 mM Fmoc–amino acid-OH, 50 mM EDC, and 5 mM HOAt in MeOH. With these conditions, cumulative yields for the assembly of n amino acids exceeded (0.9)^n. Fmoc deprotection was carried out as described for succinimidyl ester coupling. See Supporting Information for a more detailed description of peptide coupling.

For Lys(Ns), Arg(Ns), Cys(StBu), and His(CNP), the column was washed with 3 ml of DMF and subsequently incubated for 30 min with 700 μl of DMF containing 500 mM BME and 250 mM DBU while submerged in a 60 °C H2O bath. The column was then washed with 3 ml of DMF and 12 ml of DEAE bind buffer. Lys(Ns) can also be deprotected quantitatively with a DMF solution containing 5% thiophenol and saturated K2CO3 at 37 °C for 90 min; these conditions deprotect Arg(Ns) inefficiently, and have not been tested for Cys(StBu) or His(CNP).

For Asp(tBu), Glu(tBu), and His(Trt), after HPLC purification, the tert-butyl ester– and/or trityl-containing oligonucleotide–peptide hybrid was incubated in a 20-mM MgCl2 solution at 70 °C, yielding quantitative deprotection in 3 h, or 12 h for Glu. Deprotection can alternatively be carried out before HPLC purification: after elution from the DEAE Sepharose column, NaOAc (pH 5.2) and MgCl2 were added to final concentrations of 30 mM and 200 mM, respectively, and the solution was then incubated at 70 °C for the appropriate time. In contrast, acid deprotection on solid support was inefficient.

Figure S1: MALDI-MS analysis before and after peptide coupling. Calculated masses are noted to the left of the mass peaks.

Figure S2: (A) Reaction scheme for synthesis of GLFYG-NC20. Coupling efficiencies for individual steps are noted in black, and absolute yields from NC20 are noted in red. MALDI-MS results for all species are denoted under each species as "Observed." See Protocol S1 for precise coupling procedures. (B) HPLC analysis of sequential couplings during peptide synthesis monitored at 260 nm. Load and elutes from columns 1–5 are indicated. Sequential coupling efficiencies (black) were calculated by integration of recovered aminated DNA peaks. Absolute yields (red) were calculated by integration of the intended product peak relative to the load. A nonaminated 10-base oligonucleotide (10mer) was included as a control for nonspecific DNA loss and modification; percent recovery of the 10mer is noted in red. The HPLC analysis employed a 60-min gradient of 0%–45% MeCN in 100 mM TEAA (pH 5.5).
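As a worked illustration of the cumulative-yield criterion used above (yields for n coupled amino acids exceeding (0.9)^n), the short Python sketch below computes per-step coupling efficiencies from integrated 260-nm peak areas and compares the running product with the threshold. All peak areas are illustrative placeholders, not measured values.

def coupling_efficiency(product_area, unreacted_area):
    """Per-step efficiency from integrated 260-nm HPLC peak areas."""
    return product_area / (product_area + unreacted_area)

# Illustrative (product, unreacted) peak areas for five sequential couplings:
steps = [(95.0, 3.0), (92.0, 4.0), (90.0, 5.0), (94.0, 3.0), (91.0, 4.0)]

cumulative = 1.0
for product, unreacted in steps:
    cumulative *= coupling_efficiency(product, unreacted)

n = len(steps)
print(f"cumulative yield = {cumulative:.2f}, threshold (0.9)^{n} = {0.9**n:.2f}")
# With these numbers the cumulative yield (~0.82) comfortably exceeds 0.9^5 (~0.59).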
Through major efforts to reduce costs and expand access to antiretroviral therapy worldwide, widespread delivery of effective treatment to people living with HIV/AIDS is now conceivable even in severely resource-constrained settings. However, the potential epidemiologic impact of treatment in the context of a broader strategy for HIV/AIDS control has not yet been examined. In this paper, we quantify the opportunities and potential risks of large-scale treatment roll-out.

We used an epidemiologic model of HIV/AIDS, calibrated to sub-Saharan Africa, to investigate a range of possible positive and negative health outcomes under alternative scenarios that reflect varying implementation of prevention and treatment. In baseline projections, reflecting "business as usual," the numbers of new infections and AIDS deaths are expected to continue rising. In two scenarios representing treatment-centered strategies, with different assumptions about the impact of treatment on transmissibility and behavior, the change in the total number of new infections through 2020 ranges from a 10% increase to a 6% reduction, while the number of AIDS deaths through 2020 declines by 9% to 13%. A prevention-centered strategy provides greater reductions in incidence (36%) and mortality reductions similar to those of the treatment-centered scenarios by 2020, but more modest mortality benefits over the next 5 to 10 years. If treatment enhances prevention in a combined response, the expected benefits are substantial—29 million averted infections (55%) and 10 million averted deaths (27%) through the year 2020. However, if a narrow focus on treatment scale-up leads to reduced effectiveness of prevention efforts, the benefits of a combined response are considerably smaller—9 million averted infections (17%) and 6 million averted deaths (16%). Combining treatment with effective prevention efforts could reduce the resource needs for treatment dramatically in the long term. In the various scenarios, the number of people being treated in 2020 ranges from 9.2 million in a treatment-only scenario with mixed effects to 4.2 million in a combined response scenario with positive treatment–prevention synergies.

These analyses demonstrate the importance of integrating expanded care activities with prevention activities if there are to be long-term reductions in the number of new HIV infections and significant declines in AIDS mortality. Treatment can enable more effective prevention, and prevention makes treatment affordable. Sustained progress in the global fight against HIV/AIDS will be attained only through a comprehensive response. Combining the two approaches of prevention and treatment of HIV/AIDS could avert more than 29 million new HIV infections by 2020.

In June 2001, heads of state and government convened a United Nations Special Session on HIV/AIDS and unanimously adopted the "Declaration of Commitment on HIV/AIDS".

The theme of the 15th International AIDS Conference in Bangkok last summer was timely and relevant. "Access for All" calls for extending to all of those in need both sufficient resources and a set of proven interventions to prevent new infections and save lives through effective treatment. Recent developments in HIV treatment, with simple combination therapies priced at less than US$150 per year—unthinkable just a short time ago—were a major driver of discussions during the conference.
Widespread access to effective antiretroviral therapy (ART) for people living with HIV/AIDS is now conceivable even in countries with severely limited resources. The World Health Organization and its partners in the Joint United Nations Programme on HIV/AIDS have defined an ambitious "3 by 5" target of 3 million people on ART—half of those in most urgent need—by the end of 2005. The potential epidemiologic impact of large-scale roll-out of treatment programs, however, remains uncertain. Experience to date is limited, and comes mostly from Western countries and Brazil. While declines in AIDS mortality in the industrialized world have been impressive [7,8], many questions remain about the population-level effects of treatment roll-out in high-prevalence, resource-constrained settings.

In our previous analysis of the potential benefits of a comprehensive package of preventive interventions, we noted that the epidemiologic impact of treatment within a broader control strategy had yet to be examined.

Baseline projections of HIV epidemics in sub-Saharan Africa have been developed by the Joint United Nations Programme on HIV/AIDS and the World Health Organization based on the most current data available, and in collaboration with epidemiologic experts and analysts within the countries assessed. These "business as usual" projections serve as the baseline for the scenario analyses reported here.

To simulate the effects of prevention and treatment on HIV/AIDS incidence, prevalence, and mortality, we first adapted the analytic approach used in the previously described Goals model to allow explicit representation of treatment alongside prevention. The model includes underlying regional demography, acquisition of HIV and other sexually transmitted infections (STIs), progression from HIV to AIDS, and progression from AIDS to death. Annual risks of HIV infection in each risk group depend on the number of partnerships, the number of sex acts per partnership, HIV prevalence among partners, and condom use. These risks are magnified by the presence of other STIs.

The regional models were calibrated as follows: first, plausible ranges were specified for model parameters governing sexual behavior and biological factors, based on review of published studies and survey results; second, multiple simulations were undertaken by sampling values from each of the ranges and recalculating the model for each set of sampled parameter values; third, model fit was assessed by comparing modeled prevalence for adult males and females separately to baseline projections through 2020; and fourth, the best-fitting parameter set in each regional model was selected for the purpose of scenario analysis.

Potential impacts of prevention efforts at a given coverage level were based on previously published estimates for a comprehensive package of preventive interventions.
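To make the risk structure just described concrete, the following Python sketch shows the kind of force-of-infection bookkeeping a Goals-type model performs for one risk group. The function, parameter names, and values are illustrative assumptions for exposition, not the calibrated model itself.

def annual_infection_risk(partners, acts_per_partner, prevalence,
                          condom_use, p_act=0.002,
                          condom_efficacy=0.9, sti_multiplier=1.0):
    """Annual probability of acquiring HIV for one risk group.

    The per-act transmission probability is reduced by condom use,
    magnified by cofactor STIs, and compounded over acts and partners.
    """
    p_eff = p_act * sti_multiplier * (1 - condom_use * condom_efficacy)
    per_partner = 1 - (1 - p_eff) ** acts_per_partner  # if partner infected
    per_partner *= prevalence                          # chance partner is infected
    return 1 - (1 - per_partner) ** partners

# Illustrative comparison: baseline condom use vs. doubled condom use
base = annual_infection_risk(2, 50, 0.15, condom_use=0.2, sti_multiplier=3.0)
more = annual_infection_risk(2, 50, 0.15, condom_use=0.4, sti_multiplier=3.0)
print(f"baseline risk {base:.3f} -> with higher condom use {more:.3f}")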
We examined a range of alternative scenarios based on various levels and effectiveness of prevention interventions, with and without successful attainment of the 3 by 5 treatment target for sub-Saharan Africa.

Baseline: risk behaviors are maintained at current levels, and no treatment scale-up occurs. This scenario produces a relatively stable prevalence rate over the duration of the projection, with the number of people living with HIV and the number of new infections rising slowly over time because of population growth.

Treatment-centered responses: in two alternative scenarios, the 3 by 5 target of 50% coverage of those in need of treatment by the end of 2005 is attained, and scale-up continues to reach 80% ART coverage of those in need by 2010, maintained at 80% thereafter. In an "optimal ART effects" scenario, we assumed that treatment reduces transmissibility by 99%, and that those under treatment have 50% lower annual partnership numbers and twice the condom use of other adults. With a response that focuses primarily on treatment, it is assumed that behavior in the general community of infected and uninfected adults is unchanged from the baseline. In an alternative "mixed ART effects" scenario, less optimistic assumptions were made: that treatment reduces transmissibility only to the same level as in asymptomatic infected individuals (a two-thirds reduction relative to no treatment), and that behavior in treated patients is the same as in other adults. To capture the possibility of behavioral disinhibition in response to treatment availability, we assumed that condom use declines by 10% in both treated patients and the general community, with other behaviors unchanged. The potential for disinhibition is suggested primarily by experience in some developed countries, where condom use increased dramatically in the populations at highest risk prior to the introduction of ART but then declined; the likelihood and magnitude of reductions in condom use in sub-Saharan Africa, where such prevention-induced changes generally are much less prominent today, might be questioned. We therefore considered in sensitivity analyses a variant of this scenario that excludes disinhibition but preserves all other assumptions.

Prevention-centered response: in the absence of wide availability of treatment, reflecting weaker political and social support for HIV control efforts, we modeled a scenario in which the comprehensive prevention package described previously achieves only part of its full potential impact.

Combined responses: we examined two scenarios combining treatment and prevention efforts, reflecting either optimistic or pessimistic possibilities. In the optimistic scenario, treatment strengthens prevention efforts: ART coverage is the same as in the two treatment-centered scenarios, with optimal assumptions about treatment impact on transmissibility and patient behavior, and widespread availability of treatment is assumed to enable the full impact of prevention efforts to be attained, as described by Stover et al. In a more pessimistic scenario, a narrow focus on treatment scale-up is assumed to reduce the effectiveness of prevention efforts.

Additional scenarios could include pessimistic assumptions about limited ART scale-up levels and timing, emergence of large-scale drug resistance resulting from low adherence, or other possible unintended outcomes of wider treatment. Certainly, large-scale treatment efforts will demand close monitoring of adverse effects. However, experience with treatment programs in developing countries has been encouraging thus far, with reported adherence levels that are at least as high as those in developed countries [19].

In the baseline projections for sub-Saharan Africa, the annual number of new adult HIV infections rises from 2.4 to 3.7 million between 2004 and 2020, and adult AIDS mortality rises from 1.8 to 2.6 million. With scale-up of treatment under optimal assumptions (treatment-centered response/optimal ART effects), new infections and AIDS deaths fall relative to baseline, although the incidence gains are modest. With less optimistic assumptions (treatment-centered response/mixed ART effects), the number of new infections rises to 4.3 million per year by 2020 (a 14% increase); mortality trends are similar to the optimistic scenario in the short term, but worse in the long term, even compared to the baseline. Excluding the assumption of reduced condom use through disinhibition from the treatment-centered/mixed effects scenario has minimal effect on the results, lowering the number of new infections in 2020 by only 2% compared to the scenario that includes disinhibition.

A prevention-centered response would have greater impact on the number of new infections, lowering annual incidence by more than half by 2020.
The long-term mortality trend is more favorable in the prevention-centered scenario than in the treatment-centered scenario because of reduced incidence, but prevention would produce negligible mortality benefits in the near- and mid-term future in comparison to strategies that include ART. Alternative assumptions regarding overall effectiveness in a prevention-centered response produce results that scale as expected, with reductions in annual incidence of 34% to 64% and reductions in annual mortality of 20% to 42% by 2020.

If treatment and effective prevention are scaled up jointly in a combined response, the benefits in terms of both infections and deaths averted could be substantially higher. In an optimistic scenario in which treatment programs support expanded prevention, the annual number of new infections would be 74% lower and annual mortality would be 47% lower by 2020, compared to baseline. It is worth noting that the long-term decline in AIDS deaths is driven more by prevention of new infections than by direct survivorship benefits from ART. In a pessimistic scenario in which a more narrow treatment focus limits effective prevention, the overall benefits are much more modest, with 26% and 16% reductions, respectively, in new infections and mortality by 2020 compared to the baseline.

Prevalence rises by 7% in the optimal and by 27% in the mixed treatment-centered scenario by 2020, as longer survival for treated patients offsets reductions in new infections through reduced transmissibility (and risk reductions among treated patients in the more optimistic scenario). In scenarios that include effective prevention, falling incidence eventually outweighs this survival effect.

The total number of infections averted through a combined response would be 29 million over the period 2004 to 2020 if treatment enhances prevention, a benefit ten times greater than that of a strategy focusing on treatment only, even with optimal assumptions, and 51% greater than that of a strategy focusing on (less effective) prevention alone. If a treatment-centered response limits prevention effectiveness, much of this benefit is forgone.

Combining treatment with prevention efforts will also reduce the resource needs for treatment substantially in the long term. In the absence of effective prevention, by contrast, the number of people needing ART grows throughout the projection period.

In this paper, we have examined the potential epidemiologic impact of global HIV/AIDS control efforts under a range of alternative scenarios reflecting varying implementation of strategies for prevention and treatment. Although we focus in particular on population health outcomes and epidemiologic trends, we recognize that there are numerous other social, economic, and individual health effects of interventions, including ART, that are beyond the scope of this analysis. We also restrict our focus in this paper to sub-Saharan Africa, where the overwhelming majority of people living with and dying from HIV/AIDS reside; however, our findings have broader applicability and more general implications in the worldwide fight against HIV/AIDS, which we highlight here.

Effective prevention requires more than having sufficient funds to offer information and services. It also requires an environment that encourages people to internalize messages about risky behavior and to adopt actual behavior change, and that allows people to utilize services such as testing and counseling without fear of stigma or discrimination. Stoneburner and Low-Beer have argued that the supportive social and political environment in Uganda allowed people to discuss AIDS with family members and close friends, which led to greater behavior change than in Kenya or Zambia, where most people received information from mass media only.
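As a quick arithmetic aid, the percentage reductions quoted above can be converted into the absolute 2020 magnitudes they imply; the numbers below are derived from figures already given in the text, not additional results.

```python
baseline_2020 = {"new_infections": 3.7e6, "aids_deaths": 2.6e6}

# combined response, optimistic scenario: 74% and 47% reductions by 2020
implied = {
    "new_infections": baseline_2020["new_infections"] * (1 - 0.74),
    "aids_deaths": baseline_2020["aids_deaths"] * (1 - 0.47),
}
print(implied)  # roughly 0.96 million new infections and 1.38 million deaths
```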
Involving communities and family members in the delivery of treatment, for example as treatment monitors, offers unique entry points for effective prevention activities and a lever for population-wide behavior change. Experience with community roll-out of treatment programs has shown, for example, that uptake of voluntary counseling and testing increased by 300% in one year of roll-out in Haiti, and by a factor of 12 in Khayelitsha, South Africa, after treatment introduction [22].

During most of the past 15 years, efforts to address the AIDS epidemic in sub-Saharan Africa have focused on prevention. There have been successes in some countries, but overall these efforts have not achieved their goals. The advent of vastly expanding treatment programs in the coming years, if opportunities to capitalize on broadened political support and community mobilization can be seized, offers the potential to enhance prevention effectiveness and avert many new infections and deaths.

While prevention programs are unlikely to achieve full impact in the absence of treatment, so too is the impact of treatment programs reduced if vigorous prevention efforts are absent. Without effective prevention, the number of people requiring care and treatment will grow each year. As more and more people are kept alive with ART, the treatment burden will become enormous unless effective prevention reduces the number of people becoming newly infected.

Without effective prevention programs, we project that the number of people receiving treatment would have to grow to 6.3 million by 2010 and up to 9.2 million by 2020 in Africa alone to achieve 80% coverage of those in urgent need. Meeting this need would require a tremendous increase in financing, human capacity, and infrastructure that might not be attainable.

If effective prevention programs are combined with treatment programs, the same level of 80% ART coverage would be achieved by treating 5.8 million people in 2010 and 4.2 million in 2020. In other words, the same goal could be attained at a far lower treatment cost and with a much greater chance of sustainability.

Over the long term, it is effective prevention that will reduce the burden of illness due to AIDS and the number of people in need of ART. The lessons learned in the industrialized world have to be taken on board: the availability of treatment, and the accompanying shift in focus away from very effective prevention programs, has led to increases in unsafe sexual behavior, STIs, and HIV transmission in some settings [10,11]. That experience must not be repeated where the epidemic is most severe.

Countries in sub-Saharan Africa are faced with the most devastating epidemic of our times. We now have the unique opportunity to derive the maximum impact from available resources. The results from our analyses show how potential synergies between prevention and treatment could be translated into considerable health benefits at the population level. But synergy does not mean simply that prevention and treatment are pursued in parallel. When whole communities become involved in the scale-up of treatment access, as will be necessary to achieve the ambitious treatment targets defined by the 3 by 5 campaign, crucial opportunities can be created for increasing their involvement in prevention activities.
Only if interactions with patients, family, and community members occasioned by the provision of treatment are also used to reinforce prevention, and only if prevention workers have an opportunity to refer those in need to care and treatment, will we move at last from slogans to impact.

Supporting information: Figure S1 (80 KB EPS); Protocol S1, model details (172 KB PDF).

Infections from HIV continue to increase, especially in sub-Saharan Africa. The World Health Organization has a plan to get more than 3 million people on treatment by 2005 (the "3 by 5" initiative); however, the overall effect of this plan on the population's health is uncertain, and will depend on the balance between treatment and prevention efforts.

The researchers predicted the number of new infections and deaths each year in sub-Saharan Africa from now until 2020, depending on whether control efforts focused on prevention, treatment, or both. They found that by far the most effective way of decreasing new infections and deaths was to combine the two approaches, and that by doing so more than 29 million new infections and 10 million deaths might be prevented compared with continuing at current levels of prevention and care.

Despite the huge amounts of money directed at HIV/AIDS, because the problem is so vast the resources are not enough; hence it is important to target these resources effectively. Policy makers around the world could use information like this to decide where best to direct attention and funding to combat HIV/AIDS.

Joint United Nations Programme on HIV/AIDS, AIDS epidemic update, December 2004: http://www.unaids.org/wad2004/report.html
World Health Organization, 3 by 5 Initiative: http://www.who.int/3by5/
Global HIV Prevention Working Group: http://www.kff.org/hivaids/hivghpwgpackage.cfm
Vascular endothelial growth factor (VEGF)-C is implicated in lymphangiogenesis; however, the exact role of VEGF-C in promoting lymphatic spread of cancer cells remains largely unknown.

The expression of VEGF-C was immunohistochemically determined in 97 endoscopic biopsy specimens from 46 patients with submucosal gastric carcinoma (SGC). Nodal metastases, including micrometastasis and isolated tumor cells (ITC), were evaluated by immunohistochemical staining for cytokeratin in 1,650 lymph nodes, and tumor cells in the metastatic nodes were also examined for VEGF-C expression.

In biopsy samples, VEGF-C was positively detected in 21 (46%) patients. Metastases were identified in 46 (2.8%) nodes from 15 (33%) patients: 39 nodes were detected by hematoxylin-eosin (H&E) staining, and an additional 7 nodes, containing only ITC, by immunohistochemical staining. The rate of lymph node metastasis was significantly correlated with VEGF-C expression in biopsy samples (p < 0.05). The positive and negative predictive values of VEGF-C in biopsy specimens for nodal metastasis were 48% (10/21) and 80% (20/25), respectively. Among the 46 metastatic nodes, tumor cells in 29 (63%) expressed VEGF-C, whereas those in 17 (37%) did not. VEGF-C expression was high in macronodular foci in medullary areas, whereas more than half of the ITC or micrometastases located in the peripheral sinus lacked VEGF-C expression.

Despite the significant correlation, immunodetection of VEGF-C in endoscopic biopsy specimens could not accurately predict nodal status, and thus cannot be used to guide treatment decisions in SGC. VEGF-C may not be essential for lymphatic transport, but rather may be important for developing the macronodular lesions in metastatic nodes.

The incidence of early gastric carcinoma, defined as carcinoma confined to the mucosa or submucosal layer, has increased. In Japan, endoscopic mucosal resection (EMR) is now generally accepted for intramucosal cancers, which are associated with a minimal risk of regional lymph node (LN) metastasis [4]. For submucosal tumors, in contrast, gastrectomy with lymph node dissection is generally performed because of the appreciable risk of nodal metastasis; a reliable preoperative predictor of nodal status could spare some of these patients surgery.

VEGF-C is known to bind VEGFR-3, which is specifically expressed on lymphatic vessels, and to stimulate lymphangiogenesis [10]. Many studies have reported an association between VEGF-C expression and nodal metastasis in various solid tumors.

Recently, small metastatic lesions have been detected genetically or immunohistochemically in various cancers, even though they were diagnosed as negative by conventional examination with H&E staining. Such lesions are designated as micrometastasis or isolated tumor cells (ITC). The biological and clinical significance of such minute nodal invasion by carcinoma cells is still controversial [38]. In this study, we therefore evaluated nodal metastases, including micrometastasis and ITC, together with VEGF-C expression in SGC.

Forty-six patients with SGC diagnosed and treated by curative gastrectomy with standard lymph node dissection at the First Department of Surgery, Tokyo University Hospital, Tokyo, between 1994 and 2002 were included in this study. These patients were examined endoscopically prior to surgery; several pieces of tissue were sampled with routine biopsy forceps from various portions of the tumor. Formalin-fixed, paraffin-embedded sections of 97 biopsy specimens and 1,650 dissected lymph nodes derived from these 46 patients were evaluated. Additionally, all resected primary tumors were histologically examined with H&E staining according to the Japanese Classification of Gastric Carcinoma.

The expression of VEGF-C was investigated by immunohistochemical staining using affinity-purified goat polyclonal antibodies against VEGF-C. Sections (3 μm thick) of biopsy samples were deparaffinized in xylene, hydrated through a graded series of ethanol, and immersed in 3% hydrogen peroxide in 100% methanol for 30 min to inhibit endogenous peroxidase activity. To activate the antigens, the sections were boiled in 10 mM citrate buffer, pH 6.0, for 30 minutes. After being rinsed in phosphate-buffered saline (PBS), the sections were incubated with normal rabbit serum for 10 min, and then incubated overnight at 4°C in humid chambers with the primary antibody to VEGF-C at 1/30 dilution. After three washes with PBS, the sections were incubated with biotinylated rabbit anti-goat immunoglobulin for 20 minutes. After washing again with PBS, the slides were treated with peroxidase-conjugated streptavidin for 20 minutes and developed by immersion in 0.01% H2O2 and 0.05% diaminobenzidine tetrahydrochloride for 3 minutes. Light counterstaining with Mayer's hematoxylin was performed. The 46 lymph nodes that showed the presence of carcinoma were also evaluated for VEGF-C expression with the same immunostaining method.

The dissected lymph nodes were fixed in 10% formalin and embedded in paraffin. From each node, one 3-μm-thick section was prepared for H&E staining, and another three serial 5-μm sections were prepared for immunohistochemical staining with CAM 5.2, a mouse monoclonal antibody that reacts with human cytokeratins 8 and 18.

Metastasis was defined as the presence of tumor cells, whether single or in small clusters, detected by H&E or immunohistochemical staining. Metastatic lesions more than 2.0 mm in diameter were defined as macrometastasis; micrometastasis was defined as a tumor deposit larger than 0.2 mm but smaller than 2.0 mm; and ITC was defined as a tumor deposit less than 0.2 mm in maximum diameter.

All statistical calculations were carried out using StatView-J 5.0 statistical software. The relationship between clinical and pathological characteristics of patients and the expression of VEGF-C was examined by Fisher's exact test. Differences with a p value of less than 0.05 were considered statistically significant.

Metastases were observed in 12 patients (26.1%) and 39 lymph nodes (2.4%) by H&E examination. Immunohistochemical staining for cytokeratin identified tumor cells in 7 additional nodes, from 3 further patients, as ITC.

The biopsy specimens were divided into two categories by the staining pattern of VEGF-C, diffuse or focal staining of carcinoma cells, as described previously; tumors with diffuse staining were regarded as VEGF-C positive.

Among the 97 biopsy samples from the 46 patients, carcinoma cells were contained in only one biopsy sample in 14 patients, and in 2, 3, and 4 biopsy samples in 16, 13, and 3 patients, respectively. In all of the latter cases, carcinoma cells in biopsy samples derived from different places showed exactly the same staining pattern of VEGF-C, and thus VEGF-C-positive and -negative tumors could be clearly distinguished.

In biopsy specimens, VEGF-C was positively detected in 21 (46%) cases. Nodal metastasis was detected in 10 (48%) of the 21 VEGF-C-positive tumors, a rate significantly higher than that in VEGF-C-negative tumors as evaluated by biopsy specimens (p = 0.047).
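The headline association and the predictive values can be checked directly from the counts reported here. Below is a minimal sketch using SciPy; the 2x2 table is assembled from the fractions given in the text, and the exact p-value depends on the sidedness of the test.

```python
from scipy.stats import fisher_exact

# Rows: VEGF-C positive (n=21) / negative (n=25) biopsies.
# Columns: node metastasis present / absent.
table = [[10, 11],
         [5, 20]]
odds_ratio, p = fisher_exact(table, alternative="two-sided")

ppv = table[0][0] / sum(table[0])   # 10/21, about 48%
npv = table[1][1] / sum(table[1])   # 20/25 = 80%
print(f"p = {p:.3f}, PPV = {ppv:.0%}, NPV = {npv:.0%}")
```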
However, the positive and negative predictive values of VEGF-C in biopsy for nodal status were 48% (10/21) and 80% (20/25), respectively, and 5 (20%) of the 25 VEGF-C-negative tumors were accompanied by lymph node metastasis.

Among 36 nodes from the 10 patients who were scored as VEGF-C positive in biopsy samples, tumor cells located in 10 (29%) nodes from 4 (40%) patients totally lacked VEGF-C expression. This finding clearly indicates that carcinoma cells that highly express VEGF-C are not always preferentially transported to the regional lymph nodes, even though the expression of this lymphangiogenic factor showed a positive correlation with lymph node metastasis.

More interestingly, the expression of VEGF-C was strongly correlated with the size of the metastatic foci and the location of carcinoma cells within metastatic nodes: carcinoma cells forming macronodular foci in medullary areas expressed VEGF-C, whereas small deposits in the peripheral sinus often did not.

Many cancers metastasize to regional lymph nodes, and a positive nodal status often correlates with a poor prognosis. However, the mechanisms of lymphatic metastasis have not been investigated in detail. Recent studies have demonstrated that the expression of VEGF-C is enhanced in various solid tumors, suggesting a possible contribution of VEGF-C to nodal metastasis, possibly through lymphangiogenesis [42].

In this study, therefore, we evaluated the expression of VEGF-C in biopsy samples in SGC. Our initial hypothesis was that VEGF-C expression could predict the accurate nodal status, including micrometastasis/ITC, and thus might be useful for avoiding unnecessary gastrectomy in some SGC. Our results suggest that VEGF-C expression in biopsy specimens correlates with lymph node metastasis in SGC. However, the positive and negative predictive values were 48% and 80%, respectively, and 20% of VEGF-C-negative tumors were node positive. This suggests that immunodetection of VEGF-C in biopsy samples cannot be used as a clinical indicator to decide the treatment of SGC.

The present study provides some interesting findings on the possible role of VEGF-C in nodal metastasis. Biopsy samples were obtained at preoperative endoscopy and fixed with formalin immediately after biopsy, and thus appear to reflect the in situ expression level of VEGF-C more precisely than surgically resected specimens. In our results, VEGF-C expression in biopsy samples showed a significant correlation with that in surgical specimens, with 57% sensitivity and 91% specificity. However, more than half (60%) of the tumors categorized as VEGF-C negative in biopsy specimens were positive in surgical specimens, although VEGF-C-positive tumors in biopsy samples showed good consistency with those in surgical specimens. This raises the possibility that VEGF-C expression may be somewhat upregulated by surgical manipulation.

VEGF gene expression is regulated by a variety of stimuli, and hypoxia is known to be one of the most potent inducers of VEGF-A [44]; whether VEGF-C responds to the same stimuli is less well established.

Nonetheless, our data showed a significant correlation of VEGF-C expression in biopsy specimens with nodal metastasis, supporting a possible role of VEGF-C in lymphatic metastasis. As with hematogenous metastasis, lymphatic metastasis of cancer cells is considered to proceed in several steps: invasion of lymphatic capillaries, movement into the lymphatic lumen with the lymphatic stream, attachment to the subcapsular sinus of lymph nodes, and invasion into the cortex.
Lymphangiogenesis means the development and proliferation of new lymphatics from host vessels, but the ability of tumor cells to induce lymphangiogenesis, and the presence of intratumoral lymphatic vessels, are controversial. Most malignant tumors are, however, known to be associated with an increased number of lymphatic vessels in the peripheral area. In fact, in vivo experiments using VEGF-C-transfected tumors have shown the same histological findings [50]. Since peritumoral lymphatics may suffice for lymphatic spread, intratumoral lymphangiogenesis may not be a prerequisite for nodal metastasis.

In regional lymph nodes, tumor cells are thought to reach the peripheral sinus from afferent lymphatics. In fact, many metastatic cells were detected around the sinus area unless they had developed into a macronodular lesion. Recently, small lesions have been divided into two categories, micrometastasis and isolated tumor cells (ITC), distinguished on the basis of their size: micrometastases measure 0.2-2.0 mm, whereas ITC are smaller than 0.2 mm.

In our study, the metastatic lesions did not always express VEGF-C, and small metastatic foci often lacked VEGF-C expression. In particular, ITC identified only by immunohistochemical staining were totally negative for VEGF-C. In addition, in 4 tumors with lymphatic invasion, none of the tumor cells located in the lymphatic vessels of the primary tumor expressed VEGF-C (data not presented). These unexpected results suggest that expression of VEGF-C in tumor cells is not relevant to their transport to regional nodes once they have entered lymphatic vessels.

In contrast, most of the macrometastases, and most cancer cells invading the medullary area of metastatic nodes, highly expressed VEGF-C. This phenomenon is quite interesting, though not fully explained by current knowledge. It may suggest that proliferation and invasion in the internal area of metastatic nodes partially require VEGF-C expression in tumor cells. Thus far, there is no definite report on the effects of VEGF-C on tumor cells themselves. VEGF receptor 3 (VEGFR-3), the specific receptor for VEGF-C, has been found on certain tumor cells [54], so an autocrine role is conceivable but unproven.

In summary, our retrospective study demonstrated that VEGF-C expression in tumor cells in biopsy specimens was significantly correlated with lymphatic metastasis in SGC, although the accuracy was not high enough for use as a clinical indicator. Metastatic tumor cells in micrometastases or ITC located in the marginal sinus often lacked VEGF-C expression, whereas macrometastases located in the medullary area of metastatic nodes highly expressed VEGF-C. This suggests that expression of VEGF-C is not essential for lymphatic transport from the primary tumor, but rather is important for developing the macronodular lesion in metastatic lymph nodes.

The author(s) declare that they have no competing interests.

MI conceived of the study and wrote the original version of the manuscript. JK carried out the literature search and helped in drafting the manuscript. SK collected clinical and pathological data and participated in manuscript preparation. HN helped to shape the idea for the study, coordinated the study, and edited the manuscript. All authors have read and approved the final manuscript.
Ranked gene lists from microarray experiments are usually analysed by assigning significance to predefined gene categories. Tools performing such analyses are often restricted to a category score based on a cutoff in the ranked list and to a significance calculation based on random gene permutations as the null hypothesis.

We analysed three publicly available data sets, in each of which samples were divided in two classes and genes were ranked according to their correlation to class labels. We developed a program, Catmap (available for download), to compare different scores and null hypotheses in gene category analysis, using Gene Ontology annotations for category definition. When a cutoff-based score was used, results depended strongly on the choice of cutoff, introducing an arbitrariness into the analysis. Comparing results using random gene permutations and random sample permutations, respectively, we found that the assigned significance of a category depended strongly on the choice of null hypothesis. Compared to sample label permutations, gene permutations gave much smaller p-values for large categories with many coexpressed genes.

In gene category analyses of ranked gene lists, a cutoff-independent score is preferable. The choice of null hypothesis is very important: random gene permutations do not work well as an approximation to sample label permutations.

Gene annotation analyses can unravel new information about pathways and cellular functions responsible for different phenotypes. Computational tools aiding in this process have recently been developed [8], most of which assign a p-value to each gene category. In microarray analyses such as clustering, which provide defined subsets of genes with no internal ranking, it is natural to base the category score on the number of category genes in the relevant subset. However, rankings of genes appear in many techniques for microarray analysis, such as correlation of gene expression to target profiles, and scores defined directly on the ranked list are then needed; a cutoff-based score uses only part of the information in the ranking.

To calculate a p-value for the assigned score, a set of gene lists ranked according to a chosen null hypothesis is needed. The simplest choice of null hypothesis is just random gene permutations, and for some rank-based scores the p-value can then be calculated analytically, without explicitly performing the permutations. However, the random gene permutation null hypothesis assumes independence of gene expression over biological samples, and the p-value is thus a combination of the p-value for how important the category is and the p-value for the genes of the category being coexpressed. When category genes behave similarly over a wide range of experimental conditions, the coexpression does not indicate relevance of the category for the question under study. In many analyses, a more appropriate null hypothesis is therefore sample label permutations, in which a set of ranked gene lists is generated based on the gene expression correlations to randomly permuted target values of the samples. This approach accounts for correlations between category genes and gives p-values that are bounded from below by the number of possible permutations of the samples in the data set. The latter is particularly important in data sets with few samples. Despite this, publicly available tools for gene annotation analysis are restricted to gene permutations [8].

We present a program, Catmap, for gene category analysis based on ranked gene lists. The program uses either the number of genes above a cutoff or the Wilcoxon rank sum as score, and the significance of the score can be calculated from a user-supplied set of ranked lists, thus allowing for sample label permutations. Furthermore, the program calculates corrections for multiple category testing, using permutation results to assess an effective number of independent categories; this enables Catmap to estimate very small multiple-category p-values that would otherwise be computationally infeasible. The input to the program is two files and some arguments.
The first file contains the biologically relevant ranked list of genes and, if needed, additional ranked gene lists drawn from the null hypothesis. The second file contains the categories and their corresponding genes. The input arguments can be specified either on the command line or in a settings file, and are as follows: 1) a choice between the cutoff-based score and the Wilcoxon rank sum score; 2) a choice of null hypothesis, which can be either the above-mentioned user-supplied ranked lists or random gene permutations; 3) the number of permutations used in multiple category testing (if zero, no multiple category testing is performed).

The output of Catmap is two files. The main output file contains all the categories, one on each line, ordered according to their significance. The line of a category contains the p-value, the multiple comparison p-value, the false discovery rate, the ROC area, the number of genes in the category, and the 25th, 50th, and 75th percentiles of the ranks. The other output file, the companion file, contains all the categories, with all their genes and ranks listed below; each line contains a gene and its rank. The program can be downloaded from the project site, where file format specifications are also given.

We analysed the breast cancer data set of van 't Veer et al. with a cutoff-based score, varying the cutoff. The p-values of biologically relevant categories depended strongly on the choice of cutoff. This is further illustrated by the very different cutoffs at which the minimized cutoff-based p-value was obtained for different categories. A table with all categories is provided as a supplement [see Additional data files].

Compared to the variations between the cutoff-based alternatives, the p-values shown in Table 1, obtained using the Wilcoxon rank sum and the minimized cutoff-based p-value, respectively, differ much less. The p-value based on the Wilcoxon rank sum was most often larger than the minimal cutoff-based p-value. Since the latter is biased by a minimization process, it must be interpreted as a score, rather than a p-value, thus requiring additional analyses to establish statistical significance [8].
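The Wilcoxon rank sum score has a convenient property: under the gene-permutation null, the ranks of the category genes are a uniform random subset of all ranks, which is exactly the Mann-Whitney setting, so the p-value can be computed analytically. A minimal sketch follows (an illustration in Python, not Catmap's Perl implementation):

```python
from scipy.stats import mannwhitneyu

def category_p_value(ranked_genes, category):
    """Wilcoxon rank-sum p-value for a category in a ranked gene list.

    Compares the ranks of category genes against the rest; under the
    gene-permutation null these are two random subsets of 1..N.
    """
    in_cat = [r for r, g in enumerate(ranked_genes, 1) if g in category]
    out_cat = [r for r, g in enumerate(ranked_genes, 1) if g not in category]
    # "less": category genes sit nearer the top (smaller ranks) than random
    _, p = mannwhitneyu(in_cat, out_cat, alternative="less")
    return p

# hypothetical toy list, ranked by correlation to class labels
print(category_p_value(["A", "B", "C", "D", "E", "F"], category={"A", "B"}))
```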
As seen in Figure Using the Wilcoxon rank sum, we compared the results of different null hypotheses. Three publicly available data sets were examined ,13,20. AIn Table p-value used in FuncAssociate [individual p-values for categories, but ranks categories based on the chosen score. Nevertheless, they give results similar to those obtained with the Wilcoxon rank sum and gene permutation. This is expected, since the minimized p-value is calculated with gene permutations, and the score adopted in GSEA [p-value, based on gene permutations, would do. It should be noted that GSEA, FuncAssociate, and iGA calculate multiple hypotheses corrected p-values, but these do not change the ranking of categories.Table ssociate and iGA ssociate . These t in GSEA ranks cap-value score on one hand, and the Wilcoxon rank sum on the other, in the treatment of categories for which only a subset of genes have expressions correlating significantly with the question under study. The important genes being in the top of the ranked list will give the category a good score with all three score functions, provided the remaining, seemingly insignificant, genes are distributed in the ranked list as expected by random. However, if these less important genes lie higher in the list than expected by random (though not high enough to affect the Kolmogorov-Smirnov or min-p scores), the category will be considered more important by the Wilcoxon rank sum. Reversely, if the less important category genes prevail in the bottom of the list, the Wilcoxon rank sum score function will deem the category as unimportant, while the other two scores will give the category a high significance, based on the top ranked genes alone. Whether seemingly insignificant genes being ranked better or poorer than explainable by random expectations should be observed or ignored is of course a matter of taste, and a possibility is to use several score functions, that may complement each other. The differences are, however, much smaller than those related to choice of null hypothesis, as revealed in Table There is a possible difference . Furthermore, for all data sets and ontologies studied, Neff was approximately half of the total number of categories. If this is a general feature for GO categories, the simple Bonferroni correction would not be totally unreasonable for small p-values.For all 3 sub-ontologies, the effective number of categories, r et al. the numbp-values were found for 1000 random gene lists.Figure et al. [It should be noted that whenever several ranked lists are examined as part of a project, this additional source of multiple hypotheses testing should also be corrected for. An example of such a correction, for cutoff-based score functions, is presented by Corà et al. .p-values of biologically relevant categories depend strongly on the choice of cutoff. The cutoff independent Wilcoxon rank sum score overcomes the problem, representing an alternative to the Kolmogorov-Smirnov score [p-value [We developed a computer program for calculating the significance of gene categories in a ranked list of genes. Corrections for multiple category testing can be performed by the program. To investigate the properties of different scores and null hypotheses, we analyzed three publicly available data sets ,13,20. Cov score -17 and t[p-value . The ranp-values and the ranking of categories depend strongly on the choice of null hypothesis. 
Compared to sample label permutations, gene permutations gave much smaller p-values for large categories with many coexpressed genes. Though sample label permutations in many situations represent a better null hypothesis than gene permutations, available gene annotation analysis tools are restricted to the latter. Our implementation allows for both null hypotheses.

The implemented algorithm treats the categories sequentially and independently. As score function for category relevance, the program uses either the Wilcoxon rank sum or the number of genes above a given cutoff in the ranked list. The latter is implemented for method comparison and for the case of a defined subset of relevant genes, without internal ranking.

For the case of the Wilcoxon rank sum, the user can supply a set of ranked lists distributed according to an appropriate null hypothesis, or request random permutations of genes as the null hypothesis. In the latter case, the significance of the score is calculated analytically by the program, using either an exact calculation by an iterative method, a Gaussian approximation, or a continuous volume approximation. The program chooses the method based on a balance between accuracy and computation time. Details are presented in the supplementary information [see Additional data files].

For the case of the cutoff-based score function, the p-value of category relevance is determined with Fisher's exact test, based on the number of category genes above the cutoff.

When N independent categories are tested simultaneously, controlling the family-wise error rate simply means calculating the probability that at least one category has a p-value below any given number q by chance:

p_multiple(q) = 1 - (1 - q)^N.     (1)

For correlated categories, we make the assumption that the same functional form describes p_multiple(q), with N replaced by an effective number of independent categories, N_eff. We find N_eff by generating a number, K, of ordered lists under the null hypothesis and calculating the minimal p-value, p_k, of the k-th ordered list. We fit N_eff using the maximum likelihood estimation

N_eff = argmax_N sum_{k=1..K} ln[ N (1 - p_k)^(N-1) ],     (2)

where N (1 - p_k)^(N-1) is the probability density of the minimal p-value p_k when N independent categories are tested.

The false discovery rate for the j highest-ranked categories is found by counting the number of p-values from the K permuted lists that are lower than the p-value of the j-th category, and dividing this number by K·j. For the case of sample label permutations, when a user-supplied set of ranked gene lists represents the null hypothesis, the first K lists are used to find N_eff and the false discovery rates, and the remaining lists are used to calculate the category p-values.

The algorithm is implemented in the Perl program Catmap.pl and is released under the GNU General Public License (GPL). Catmap.pl, together with user instructions, is available for download.

Using Catmap, we analysed three publicly available data sets with gene annotations from the Gene Ontology. The data set of van 't Veer et al. consists of expression profiles of breast tumors from patients with and without distant metastases within five years. The data set of Golub et al. consists of leukemia samples of two diagnostic classes. The data set of Alon et al. consists of colon tumors and normal colon tissues. All genes were first mapped to corresponding UniGene clusters, through which the Gene Ontology annotations were obtained.

TB and MK implemented the algorithms in Catmap. All authors participated in the design of the study, and prepared, read, and approved the final manuscript.

Additional data files available with this article: supplements to Tables 1 and 2, and p-values for the Wilcoxon rank sum score.
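Equation (2) has a closed-form maximizer, which makes the fit trivial to sketch. The snippet below illustrates the estimator and the corrected p-value of equation (1); it is an illustration, not the Catmap.pl implementation.

```python
import numpy as np

def fit_neff(min_pvalues):
    """ML estimate of the effective number of independent categories.

    Models each list's minimal p-value as the minimum of N_eff iid
    uniforms, density N(1-p)^(N-1); setting the derivative of the
    log-likelihood in eq. (2) to zero gives N_eff = -K / sum(ln(1-p_k)).
    """
    p = np.asarray(min_pvalues, float)
    return -len(p) / np.log1p(-p).sum()

def p_multiple(q, neff):
    """Family-wise corrected p-value, eq. (1) with N replaced by N_eff."""
    return 1.0 - (1.0 - q) ** neff

# toy example: minimal p-values from K=5 hypothetical null ranked lists
neff = fit_neff([0.002, 0.0007, 0.004, 0.001, 0.003])
print(neff, p_multiple(1e-4, neff))
```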
A great deal of research on the prefrontal cortex (PF), especially in nonhuman primates, has focused on the theory that it functions predominantly in the maintenance of short-term memories, and neurophysiologists have often interpreted PF's delay-period activity in the context of this theory. Neuroimaging results, however, suggest that PF's function extends beyond the maintenance of memories to include aspects of attention, such as the monitoring and selection of information. To explore alternative interpretations of PF's delay-period activity, we investigated the discharge rates of single PF neurons as monkeys attended to a stimulus marking one location while remembering a different, unmarked location. Both locations served as potential targets of a saccadic eye movement. Although the task made intensive demands on short-term memory, the largest proportion of PF neurons represented attended locations, not remembered ones. The present findings show that short-term memory functions cannot account for all, or even most, delay-period activity in the part of PF explored. Instead, PF's delay-period activity probably contributes more to the process of attentional selection.

Persistent activity of neurons in an area of the frontal lobe (the prefrontal cortex) is often proposed to underlie short-term memory. Mikhail Lebedev and colleagues provide an alternative explanation.

Note that where we use the phrase maintenance memory, many authorities would use "working memory" instead. Once the concept of working memory became established, the term delay-period activity came to be applied to neuronal activity that follows the transient presentation of an instruction cue and persists until a subsequent "go" or "trigger" signal. Descriptions of delay-period activity in PFdl appeared very early in the history of behavioral neurophysiology.

Dimming of the circle signaled that the monkeys should make an eye movement to its current location. The task was designed in this way for the following reasons. As a key feature of the experimental design, the circle's brightness changed only subtly and remained visible in its new form only briefly. Because the monkeys could not predict whether the circle would brighten or dim, and because that subtle, short-lived event provided essential information about the time and target of the response, the monkeys had to attend to the circle intently during the period preceding the trigger signal. As a result of the central fixation requirement, this attention was necessarily covert, although it seems likely that the monkeys would have attended overtly to the circle, had they been allowed to do so. Indeed, the monkeys did so during training. The Discussion takes up the issues of divided attention, multiple motor plans, default motor plans, and other interpretational issues.

Brightening of the circle indicated that the monkeys should make a saccade to the circle's initial location on that trial, which the monkeys had to remember in order to perform the task correctly. Reaction times differed between these trial types. We can only speculate about the cause of this difference, but reaction times on Rem-trials may have been longer because attention had to be disengaged from the circle's location and reoriented to the remembered one prior to the response. For the "no-memory" condition (data not shown), the corresponding reaction-time difference was significant (p < 0.001). These data are consistent with the idea that each of the two marked locations attracted attention in the no-memory condition, whereas the monkeys directed most of their covert attentional resources to the attended location in the standard version of the task.
We acknowledge, however, that there are other interpretations of these data. On control trials, for example, when the saccade was always toward the circle, saccade initiation was approximately 18 ms slower when the circle brightened than on trials when it dimmed. Thus, factors other than the orientation of attention probably contributed to reaction-time differences.

The highest firing rate occurred when the monkey attended to the neuron's preferred location. The lowest firing rate occurred when the monkey attended to the 270° location, termed the least preferred location.

For each neuron we computed an attended-location index (I_Att) and a remembered-location index (I_Rem) (see Materials and Methods), and classed neurons by their significant tuning to the attended and remembered locations. Each data point on the scatter plot represents a single spatially tuned neuron (both monkeys combined). Tuning for the remembered location (red symbols) was both weaker and less frequent than tuning for the attended location (blue symbols). Note that hybrid cells (green symbols) fill most of the space between the other two classes and that relatively few cells represent a single location exclusively. For example, many of the neurons classed as memory cells show some sensitivity to the attended location, albeit not a statistically significant one by the test that we employed. For the entire group of spatially tuned neurons, the mean selectivity indexes (± SEM) for the attended and remembered locations were I_Att = 1.84 ± 0.08 and I_Rem = 1.21 ± 0.02, which differed significantly at the p < 0.001 level (Wilcoxon matched-pairs test).

We compared spatial tuning during the first 800 ms of the delay period (the early period) and during the last 800 ms of the delay period, immediately prior to the trigger signal (the late period), using a tuning index computed over all recorded cells. This measure is devoid of any bias caused by a cell's tuning properties in one task period or the other, but it includes the contribution of the spatially untuned cells. When we restricted the comparison to neurons that had any type of significant spatial tuning, in either the early or late periods, the late tuning index (1.76 ± 0.07) continued to exceed the early one (1.42 ± 0.05) significantly (p < 0.001). Most important, we obtained similar results for neurons with significant tuning to the circle's location, which characterizes attention and hybrid cells. We also examined whether these results merely reflected the presence of a stimulus in the monkey's visual field and found strong evidence to the contrary, by comparing tuning for the circle's location during the 800 ms before the circle started moving with tuning later in the trial.

Based on a cytoarchitectonic analysis conducted on two of the three hemispheres, all of the cells situated ventrolateral to the fundus of the principal sulcus were located within area 46, and none were located in area 12; the area 46/12 architectonic boundary was identified according to earlier descriptions. Neurons ventrolateral to the fundus were predominantly attention cells. Neurons dorsomedial to the fundus (n = 412) fell into the three cell classes approximately equally. These regional differences within PFdl were highly significant for each monkey. Cells with significant memory signals composed 70% of the spatially tuned population in dorsomedial PFdl, but only 20% in ventrolateral PFdl.

Population representations of the attended and remembered locations were further analyzed using a neuron-dropping analysis. Neuron-dropping curves express the strength of spatial tuning as the ability to estimate a spatial variable from the activity of a neuronal ensemble, as a function of ensemble size. We randomly selected an ensemble from the population of recorded PFdl neurons and used a single trial of activity from each cell to estimate both the attended and remembered locations.
The findings of the neuron-dropping analysis agree with those from the analysis of single-cell activity and the population histograms, and thus provide independent support. However, neuron-dropping analysis offers several advantages over the population histograms, in addition to providing confirmation of those results. In neuron-dropping, the estimation of either an attended or remembered location does not depend on any assumptions about the nature of the spatial tuning curve or the relative importance of very active cells versus those showing less activity. It does not ascribe any special significance to increases in activity relative to baseline (excitation) versus decreases (inhibition), or to the most preferred and least preferred locations. Each cell's activity contributes to the population estimation for all locations, regardless of the direction of its modulation relative to baseline and whether that modulation significantly differs from baseline levels. Furthermore, the computation makes no assumption about any relationship between tuning for attended locations and remembered ones. This analysis also has the advantage that its results are expressed as a percentage of correct estimations by the neuronal ensemble, thereby facilitating comparison with the monkeys' performance, which in this experiment always exceeded 75% correct and sometimes approached 100%.

The same analysis was applied to the ventromedial and dorsolateral regions within PFdl, described in the section entitled Histological Analysis, above (not shown), and to subpopulations defined by tuning (for example, neurons with I_Rem > 1.0). We also used a neuron-dropping analysis to examine the ensemble's properties during response selection and execution.

First, the requirements of the task, which obliged the monkeys to maintain central fixation while attending covertly to a stimulus located in peripheral visual space, make it unlikely that attention was further divided. Second, the monkeys were required to remember the place where the circle first appeared on each trial, and their performance shows that they did so. Did they also "remember" the attended location? There is ample precedent for skepticism about claims that monkeys are not remembering some location. However, there is no basis for assuming a "memory" of a currently visible stimulus. It seems especially unlikely that the monkeys "remembered" the attended location in the context of the requirement that they centrally fixate while attending somewhere and remembering somewhere else. Third, we cannot rule out the participation of neurons we class as attention or memory cells in a variety of processes involved in preparing or planning the movement or selecting the response target. Prior to the trigger signal, the monkeys may have prepared to make a movement to the remembered location, to the attended location, to both, or to neither. Fourth, we need to consider the possibility that the neural signals we observed reflect the prediction or anticipation of reward.

The term attention has been used to cover many disparate concepts, including the effects of attention on sensory processing and the mechanisms that mediate those influences. We emphasize that the present finding differs from previous ones describing effects of attention on phasic, sensory-like responses. Often called the enhancement effect, the finding that sensory responses are larger when a stimulus or location is more attended was first described for the superior colliculus, and it has figured prominently in accounts of the orientation of spatial attention.
In the context of the present results, it is also noteworthy that inactivation of parts of PFdl has produced deficits in orienting spatial attention; the present results agree better with findings of that kind than with a purely mnemonic account.

The present study reexamined the interpretation of PFdl's delay-period activity in terms of the maintenance-memory theory. We found that other factors are more important than mnemonic ones. The present results do not argue against a short-term memory function for PF, as one among many contributions to behavior. Nor should they lead to the dismissal of interpretations of some delay-period activity in PF, or some neuroimaging signals from that region, in terms of short-term memory. However, spatial memory signals occur less frequently in PFdl than the maintenance-memory theory predicts. Our data thus accord better with neuroimaging and neuropsychological studies indicating that PF plays a major role in attentional selection, including the monitoring of information and actions.

How do our findings mesh with the fact that damage to PF appears to produce deficits in short-term memory, as often reported? Although attention could account for many findings about PF, we do not aim to replace one monolithic theory of PF function (the maintenance-memory theory) with an equally monolithic "attention theory." Delay-period activity appears to reflect the learning and implementation of behavior-guiding rules and categories, among other functions.

We trained two rhesus monkeys (Macaca mulatta) to perform the task. Each monkey sat in a primate chair in front of a computer monitor placed 57 cm from the monkey's eyes. We recorded eye position with an infrared oculometer, sampled at 250 Hz. The monkeys pressed a waist-high button with their right hand to start each trial and did not release the button until the end of the trial. Once the monkeys pressed the button, a 0.2° fixation point appeared at the center of the screen. After they had fixated this stimulus for 1.0-1.5 s, a 2° solid, gray circle appeared 8° from the center of the screen in one of four places.

After the monkeys learned the task, we implanted recording chambers over the left (monkeys 1 and 2) and right (monkey 1) PFdl. For monkey 1, we used a single-electrode microdrive to obtain single-neuron activity records; for monkey 2, we used a microdrive that independently moved up to seven electrodes. During recordings in monkey 1, we intentionally biased the selection of task-related neurons toward those with delay-period activity. In monkey 2, we recorded the activity of all isolated neurons, regardless of whether they were task related.

For histological reconstruction of recording sites, we examined Nissl-stained sections of 40 μm thickness from the right hemisphere in monkey 1 and the left hemisphere in monkey 2.

For each neuron, single-trial firing rates were arranged in a matrix F_ij(l), with rows (i) corresponding to the remembered location and columns (j) to the attended location; F_ij(l) is the firing rate on the l-th trial for which position i was the remembered location and position j was the attended location. Control trials were excluded from the calculations by not considering the diagonal elements of F_ij. The strength of representation of the remembered location was quantified by comparing across-row variability with within-row variability, one column at a time:

I_Rem = sqrt{ [ (1/N_1) sum_j sum_i sum_l (F_ij(l) - <F_j>)^2 ] / [ (1/N_2) sum_j sum_i sum_l (F_ij(l) - <F_ij>)^2 ] },

where <F_j> is the mean firing rate over all trials with attended location j, <F_ij> is the mean firing rate for the condition with remembered location i and attended location j, and N_1 and N_2 are the total numbers of elements in the respective sums. We evaluated tuning to the attended location similarly, by comparing across-column variability with within-column variability, one row at a time; the strength of representation of the attended location was quantified as

I_Att = sqrt{ [ (1/N_1) sum_i sum_j sum_l (F_ij(l) - <F_i>)^2 ] / [ (1/N_2) sum_i sum_j sum_l (F_ij(l) - <F_ij>)^2 ] },

with <F_i> the mean firing rate over all trials with remembered location i. To classify neurons into those representing remembered versus attended locations, we used an 800-ms period preceding the trigger signal.
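Because the index formulas above are reconstructed from the surrounding description, the sketch below implements that reconstruction rather than the authors' exact code; the array layout and all names are ours.

```python
import numpy as np

def selectivity_indices(trials):
    """I_Rem, I_Att from per-condition trial arrays (reconstruction sketch).

    trials[i][j] is a 1-D array of single-trial firing rates for remembered
    location i and attended location j; pass empty arrays on the diagonal
    to exclude control trials, as in the text.
    """
    n_rem, n_att = len(trials), len(trials[0])

    def index(columnwise):
        across = within = 0.0
        n1 = n2 = 0
        for g in (range(n_att) if columnwise else range(n_rem)):
            cells = ([trials[i][g] for i in range(n_rem)] if columnwise
                     else [trials[g][j] for j in range(n_att)])
            cells = [np.asarray(c, float) for c in cells if len(c) > 0]
            if not cells:
                continue
            grand = np.concatenate(cells).mean()  # column (or row) mean
            for c in cells:
                across += ((c - grand) ** 2).sum()   # across-condition spread
                within += ((c - c.mean()) ** 2).sum()  # within-cell spread
                n1 += c.size
                n2 += c.size  # N1 and N2 coincide here; kept to mirror text
        return float(np.sqrt((across / n1) / (within / n2)))

    return index(columnwise=True), index(columnwise=False)  # (I_Rem, I_Att)
```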
We also evaluated spatial tuning (I) during the final 800-ms period before the circle started to move. Thus, two task periods were used to compute the single-trial firing rates.

The population histograms and neuron-dropping curves were computed as follows. For a neuron-dropping curve, we randomly selected n neurons from the population. Then, for a given condition (a combination of remembered and attended locations), we selected one trial of that condition randomly from each neuron. All the other trials for that neuron contributed to a look-up table of firing rates. This look-up table consisted of a matrix of average firing rates <F_ij> for remembered locations, i, and attended locations, j. The differences between the firing rates in the look-up table and the rate on the selected trial were rank ordered, with a smaller rank signifying a closer match. We then summed the ranks r_ij across the individual neurons and took the remembered and attended locations associated with the lowest combined rank as the population estimation. The estimated remembered location either agreed or disagreed with the actual remembered location of the selected trial, as did the estimated attended location in a separate computation. Repeating this procedure for a given number of neurons, n, more than 2,400 times (each time starting with a randomly selected set of test trials) yielded a percentage of correct estimations of the attended and remembered locations. We then calculated neuron-dropping curves for ensembles of size one to the total number of neurons, but typically the range 1-100 sufficed to capture the main features of the population estimation. To assess the representation of attended and remembered locations during the delays, this analysis was applied to both delay periods.

Supporting figures: Figure S1 (103 KB PPT), Figure S2 (95 KB PPT), and Figure S3 (151 KB PPT) show activity matrices in the same format as the corresponding main-text figures. Figure S4 (56 KB PPT) presents a table of tuning indexes for each of the cell classes, for combinations of those classes, and for other groups of cells, separately for the period before the circle began rotating (early) and the period after it had stopped and the monkey awaited the trigger signal (late); in the accompanying plot, the dashed line shows the median values and the dotted line shows the upper interquartile range. Figure S5 (188 KB PPT) and Figure S6 (156 KB PPT) show the same PFdl neurons as in the corresponding main-text figures. Figure S7 (46 KB PPT) uses the same format.
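The rank-matching estimator just described is easy to sketch. The array layout and names below are our own, and the snippet illustrates the procedure rather than reproducing the authors' code.

```python
import numpy as np

def rank_match_estimate(test_rates, lookup):
    """One population estimate of (remembered, attended) location.

    test_rates: (n_neurons,) firing rates, one held-out trial per neuron.
    lookup: (n_neurons, n_rem, n_att) mean rates from all other trials.
    For each neuron, conditions are ranked by closeness to the test rate;
    ranks are summed across neurons and the best condition is returned.
    """
    n_neurons, n_rem, n_att = lookup.shape
    diffs = np.abs(lookup - test_rates[:, None, None]).reshape(n_neurons, -1)
    ranks = diffs.argsort(axis=1).argsort(axis=1)  # small diff -> small rank
    best = ranks.sum(axis=0).argmin()
    return divmod(best, n_att)  # (remembered index, attended index)

# Repeating this over many random test trials, for ensembles of size
# n = 1 ... N, yields the percentage-correct neuron-dropping curves.
```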
We report the use of the cross-linking drug hexamethylphosphoramide (HMPA), which introduces small deletions, as a mutagen suitable for reverse genetics in the model organism Drosophila melanogaster, identifying two mutants. Pools of mutagenized flies were generated and screened with a compatible mutation-detection method based on resolution of PCR fragment-length polymorphisms on standard DNA sequencers. As the spectrum of HMPA-induced mutations is similar in a variety of organisms, it should be possible to transfer this mutagenesis and detection procedure to other model systems.

The fruitfly Drosophila melanogaster has been the prime genetic model organism for almost a century. This success story is mainly founded on countless so-called forward genetic screens designed to elucidate gene functions on the basis of their mutant phenotypes. Many of those screens reached a scale that has been termed 'saturating', as they identify all nonredundant genes involved in a certain phenotypic trait. However, forward genetic screens are limited in that they are only capable of uncovering functions that are easily measurable or visible. Furthermore, genes having a redundant or nonessential role are less likely to be found by forward genetics.

The reverse genetic approach to unravel gene function starts with the DNA sequence. Mutations within the gene are induced and identified by various techniques, and only subsequently is the mutant phenotype analyzed. Reverse genetic techniques can be either directed at a preselected gene or undirected.

Both undirected and directed reverse genetic techniques have certain advantages and drawbacks. Transposon-based mutagenesis tends to be nonrandom because of the occurrence of hotspots for transposon integration. The use of transposable elements of different origin, such as P-elements and piggyBac, which exhibit a different insertion bias, can partly circumvent this problem. However, despite large-scale efforts, the ultimate goal of covering the whole Drosophila genome by insertion mutagenesis is far from being achieved [11]. Moreover, insertions do not necessarily disrupt the function of the affected gene [12].

RNAi and small interfering RNA (siRNA) screens provide a powerful tool to dissect the function of genes at a genome-wide scale [14], but heritable loss-of-function alleles are not obtained, and a genome-wide in vivo application of the kind established for C. elegans is not feasible for Drosophila.

Targeted gene knockout in Drosophila allows for the generation of both null and hypomorphic mutations. However, the procedure is laborious and must be repeated gene by gene.

Random mutagenesis in reverse genetics generally relies on well-established techniques and commonly used mutagens, such as ethylmethanesulfonate (EMS) [17] and N-ethyl-N-nitrosourea (ENU). Those cause mostly point mutations, which can be detected, for example, by denaturing HPLC, by direct sequencing [17], or by the mismatch-cleavage approach of TILLING [20-22]. Most of these detection schemes require secondary assays on the PCR products.

Fast neutrons have also been used to introduce small DNA lesions, which can simply be resolved by agarose electrophoresis after PCR amplification. This kind of fragment-length detection needs no secondary assay.

We reasoned that it would be worthwhile to establish a generally applicable reverse genetic technique based on an unbiased and practicable random mutagenesis and an efficient mutation detection performed on standard laboratory equipment. Here we introduce a novel mutagenesis protocol utilizing the cross-linking drug hexamethylphosphoramide (HMPA), a streamlined genetic scheme for handling the mutagenized flies, and a detection procedure based on fragment-length analysis with standard DNA sequencers.

There are two ways to handle mutagenized progeny. Either large collections are established and maintained, which then are systematically and continuously screened for mutations of interest, or mutagenized progeny are screened directly and only animals exhibiting a desired trait are kept.
The first method is in practice an F3 screen, which requires balancing of mutagenized chromosomes and maintenance of many stocks. This approach is far more labor-intensive than a simple F1 screen of progeny and thus is more suited to stock centers. Moreover, balancer chromosomes carry many DNA sequence polymorphisms relative to wild-type chromosomes (our unpublished data), which would interfere with detection of mutagen-induced sequence polymorphisms.

To circumvent the inherent problems with balancers, we devised an alternative genetic strategy, which had to fulfill the following criteria. First, mutagenized chromosomes have to be passed on in an unrecombined form, such that mutations cannot be lost. Second, the mutagenized chromosomes should be brought into an isogenic background for mutation detection. Third, for economic reasons, stock-keeping should be kept at an absolute minimum.

We therefore generated a strain, KNF306, isogenic to our yw wild-type laboratory strain but containing a dominant marker on each of the two major autosomes. Both chromosome 2 and chromosome 3 carry white+ marked P-element insertions, which were chosen because their white+ expression is restricted to different subregions of the eye, so that the two marked autosomes can be followed independently in crosses. Because there is no meiotic recombination in Drosophila males, passing the mutagenized chromosomes exclusively through males keeps them intact.

The crossing scheme and analysis procedure were optimized for autosomal genetics: mutagenized males were crossed into the yw strain background, so the marked autosomes remained unrecombined and could be unambiguously assigned because of the dominant character of the white+ transgenes. However, analysis of X-chromosomal loci would require additional handling of F1 females or mutagenesis of F0 females, and hence we did not carry out X-chromosomal screens. We have generated another strain, KNF307, which in addition carries X chromosomes marked by a characteristic enhancer-trap insertion.

EMS has been used as a deletion-inducing chemical in large-scale screens, but the reported rates and sizes of EMS-induced deletions vary considerably between studies. As our detection procedure depends on deletions that change the length of a PCR fragment by a resolvable yet amplifiable amount, we chose HMPA, which has been reported to induce predominantly small deletions. We modified the original HMPA mutagenesis protocol to administer a shorter, but more intense, pulse of HMPA, and we omitted N,N-dimethylbenzylamine, which in our hands potentiated the sterilizing activity of HMPA.

As HMPA is reported to induce deletions as small as 2 bp, and as a mutated allele is diluted 10-fold as a result of our pooling of five flies, we decided to analyze PCR fragments on a sequencer offering maximal resolution and high sensitivity. We also evaluated the 'poison-primer' technique, which is reported to preferentially amplify alleles with a deletion at the poison-primer binding site from large pools; however, it offered no advantage in our hands.

PCR products were analyzed on either a gel-based or a capillary sequencer. To increase the efficiency of mutation detection on gels, we pooled up to three PCR products. These were labeled with different fluorescent tags, partly because they were of similar size.

HMPA efficacy was monitored through white- mutations at the transgenes on chromosomes 2 and 3. Overall, we found 24 mutations in about 62,700 male and female flies. Two flies were mosaic for the mutations. Given that mosaicism can only be scored in eyes, and there only in nonoverlapping expression domains, the mutation rates discussed below may be slightly underestimated. We did not recover any white- mutation from the earliest brood, as expected given that sperm development takes up to 10 days. Overall, we obtained a mutation rate of about 2.25 × 10^-4 at the white+ loci, which are about twice as large as the vermilion locus.
In the screens reported here we recovered two mutations; according to the estimate, we would have expected three. The first mutation detected was a 41-bp deletion in the first exon of CG15000, which during the course of this study turned out to be the second exon of the dNAB locus. The second mutation, in CG17367, was detected on the capillary sequencer.

This study focused on implementing HMPA mutagenesis for reverse genetics. As discussed above, HMPA efficacy has been assessed from mutations at the white+ transgenes, but we have not systematically investigated all of the white- mutations.

While the analysis of PCR fragment-length polymorphisms on our sequencers was very efficient, HMPA mutagenesis turned out to be the limiting parameter. It is about 28-fold less efficient than EMS mutagenesis when it is assumed that all HMPA hits are deleterious (published EMS rates are on the order of 10-3 nucleotide substitutions at the 1 kb awd locus, against roughly 2 × 10-4 deletions per 2 kb white+ locus for HMPA), but the mutagen dose cannot be increased further because of the concomitant increase in male sterility.

HMPA-induced mutations can be detected by fragment-length analysis of primary PCR products on standard sequencers. Hence, screening for small deletions reduces PCR costs by a factor of 2 and spares the effort of secondary assays.

Mutant handling is independent of the mutagenesis protocol and may be combined with either EMS or HMPA mutagenesis. For example, TILLING can be performed both on large mutant collections and on a continuous supply of freshly generated mutants.

Finally, given the genotoxic properties of HMPA in both prokaryotes and higher eukaryotes, both the handling and the disposal of the mutagen demand stringent precautions.

About 150 1-3-day-old F0 KNF306 (y, w; CG31666-white+; CG32111-white+) males were starved for 4 to 6 hours in a plastic bottle containing three layers of water-soaked LS14 filter papers (Schleicher & Schüll). A 1.1 ml sample of HMPA solution was carefully applied to the filters using a syringe with a long needle (21G2) inserted through the foam stopper, and the starved males were exposed to the HMPA solution overnight. Bromophenol blue in the solution does not affect mutagenicity detectably, but stains the guts of the flies blue and thus enables mutagen uptake to be monitored and controlled. Freshly eclosed flies do not ingest enough mutagen. HMPA-contaminated plasticware must be disposed of by thermal waste treatment. After mutagenesis, the F0 males were mated to yw virgins (brood 1).
After 2 days, males were taken out and crossed to yw virgins in new bottles (brood 2A), and this cross was transferred after 3 days (brood 2B). After another 2 days, F0 males were recovered and mated to fresh yw virgins (brood 3A). F1 males of broods 2A, 2B and 3A were collected and mated individually to three yw virgins in about 650 separate crosses per week. Five hundred non-sterile males were removed after 4 to 5 days, and five males were pooled for DNA extraction. Fertilized females were returned, and unsuccessful crosses were discarded. If analysis of PCR fragments indicated a primary positive pool, the crosses were traced back and kept for further analysis; the other crosses were discarded. From each of the five crosses of a primary positive pool, a single F2 male or female containing the chromosome of interest, as manifested by the typical eye-color pattern, was collected for DNA extraction. If PCR analysis yielded a secondary positive result in one of the five F2 flies, a single F2 male containing the chromosome of interest was taken from the respective cross for balancing.

DNA was extracted in bulk by squishing pools of five flies each by mechanical force in a vibration mill (Retsch MM30) programmed to shake for 20 sec at 20 strokes per second. Flies were placed into wells of a 96-well deep-well plate. Each well was then filled with 500 μl squishing buffer and one tungsten carbide bead (Qiagen). The deep-well plate was sealed with a rubber mat (Eppendorf) and clamped into the vibration mill. (After washing in ddH2O, the beads were virtually free of contaminating DNA.) Debris was allowed to settle for about 5 min, and 50 to 100 μl of supernatant were transferred into a 96-well PCR plate. The reactions were incubated in a thermocycler for 30 min at 37°C, and finally for 5 min at 95°C to heat-inactivate proteinase K.

A Tecan pipetting robot was used for PCR setup. To 5 μl of template DNA, master mix was added, and PCR was performed on an MJR thermocycler integrated into the robot. The master mix per reaction was composed of 20.48 μl ddH2O, 0.6 μl dNTPs (10 mM), 0.1 μl fluorescently labeled primer 1 (100 μM), 0.1 μl primer 2 (100 μM), 0.12 μl hot-start Taq polymerase, and 3 μl 10× buffer containing MgCl2 (Qiagen). Cycling conditions were 95°C for 15 min, 35 cycles of amplification, a final extension at 72°C for 2 min, and a hold at 4°C.

Three differently labeled PCR reactions were then pooled. To facilitate sizing of fragments, we also added ROX1000 size marker (Applied Biosystems) to five DNA pools. Samples of 1.5 μl pooled DNA were mixed with 1.5 μl loading buffer (consisting of one part 25 mM EDTA pH 8.0 with 50 mg/ml blue dextran and five parts HiDi formamide (Applied Biosystems)). The reactions were incubated for 3 min at 95°C, cooled down, and 1.5 μl each were loaded onto a 96-lane ABI 377 sequencer. Run conditions were as follows: 1 h pre-run at 1,000 V, 35 mA, 51°C, and a 10 h run at 2,400 V, 50 mA, 51°C. Gel images recorded at four different color channels by the GeneScan software were analyzed visually.

Slight modifications to this protocol were introduced for analysis performed on an ABI 3730 capillary sequencer. First, DNA was diluted 20-fold before PCR. Second, after PCR, reactions were diluted 100-fold, and 2 μl of diluted PCR products were added to each 15 μl HiDi formamide (Applied Biosystems). PCR product was diluted on a Tecan pipetting robot. Diluted DNA was denatured for 2 min at 95°C before analysis. Sample injection (10 sec) and analysis were done according to standard protocols. Identification of deletion fragments was then performed by visual inspection of the gel images generated by the Data Collection software. No internal size standard was used, as deletion fragments were identified relative to the wild-type PCR product.
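The master-mix recipe above scales linearly with the number of reactions. The snippet below is our own illustrative calculator (the per-reaction volumes are from the text, while the 10% overage for pipetting losses is a conventional allowance we have assumed, not a published parameter).

```python
# Illustrative master-mix calculator for the PCR recipe given above.
# Volumes (microliters per reaction) are from the text; the 10% overage is
# our own conventional allowance for pipetting losses, not the authors'.

PER_REACTION_UL = {
    "ddH2O": 20.48,
    "dNTPs (10 mM)": 0.6,
    "labeled primer 1 (100 uM)": 0.1,
    "primer 2 (100 uM)": 0.1,
    "hot-start Taq": 0.12,
    "10x buffer with MgCl2": 3.0,
}

def master_mix(n_reactions, overage=0.10):
    """Total volume of each component for n_reactions plus overage."""
    scale = n_reactions * (1 + overage)
    return {component: round(vol * scale, 1)
            for component, vol in PER_REACTION_UL.items()}

# One 96-well plate; template DNA (5 ul per well) is added separately.
for component, vol in master_mix(96).items():
    print(f"{component:>28}: {vol:7.1f} ul")
```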
The following additional files are available with the online version of this paper: an additional data file giving the time schedule of mutagenesis, fly work and screening, and an additional data file with information on the 10 other genes scored.
Microstimulation, a technique that activates a cluster of nerve cells by zapping them with a weak electrical current, has helped make causal links between neurons and behavior. For instance, when neurons in an area of the visual cortex that are “tuned” to a particular direction of motion are microstimulated, the way monkeys perceive moving dots on a video screen changes. Microstimulation seems to change what they see. Similar work has also been done for neurons that respond to binocular disparity—the depth-of-field information you gain because each eye has a slightly different view of the world.

But many neurons respond, or are tuned, to more than one dimension, leading scientists to wonder how information from these multidimensional neurons contributes to perception—especially when some of that information is irrelevant to a given task. As they report in this issue, Gregory DeAngelis and William Newsome find that neurons tuned to both direction and binocular disparity contribute little to monkeys' perception of motion.

The researchers asked three rhesus monkeys to determine the direction a group of dots was moving on a TV screen—a task that can be done regardless of the perceived depth of the dots. The authors had already located two different types of neurons in each monkey's brain: sites tuned strongly to direction, and multidimensional sites tuned to both direction and binocular disparity. They then determined each site's exact preference: the direction of motion and degree of binocular disparity (if present) that triggered maximum neural activity. The researchers then showed the monkeys several sets of video displays, some with the dots moving in the “preferred” direction and some not, while microstimulating the identified sites on a subset of trials. The microstimulation acts somewhat like adding dots moving in the preferred direction—which confuses the monkeys when the real dots are moving against the preference and aids them in trials where the dots move in the preferred direction. If the behavioral effect of microstimulation—whether it be a help or a hindrance—was significant, it meant that the monkeys were monitoring the activated neurons to perform the task at hand; if there was no change, the stimulated neurons were not being recruited.

DeAngelis and Newsome hypothesized that multidimensional neurons might be ignored during pure motion perception tasks. For two of the three monkeys, this was true. Microstimulation of multidimensional sites had no effect on their behavior, compared to the significant effect of microstimulation of direction-only sites. But for the third monkey, called monkey R, microstimulation of both types of sites had significant effects on his performance. He didn't seem to be ignoring anything. The authors proposed that the monkeys could be using different neural strategies to complete the same task. This conclusion is supported by the fact that monkey R performed better on the task than the other monkeys; he appeared to be recruiting any neuron with applicable information, unlike the others, who seemed to rely on neurons tuned solely to direction of motion. Furthermore, for the few multidimensional sites that affected behavior, their contribution was tempered by how well the depth, or disparity, of the video matched the preference of the stimulated neurons.

The results of this paper show that even if neurons carry information that can aid in perceptual decision making, they may not participate, depending on how they are tuned along other (irrelevant) stimulus dimensions.
Not all directional neurons are created equal—some are more useful than others for a particular task. Whether neurons that respond to a particular stimulus contribute to the task at hand depends on how closely that stimulus hews to the neurons' preference, as well as on the subject's learned strategy for performing the task. This neural flexibility, the authors point out, suggests that the brain uses complex, variable strategies to respond to changing environmental stimuli. Techniques like microstimulation will be helpful in drawing the connections between neural activity and behavior.
The problem of training human resources in health is a real concern in public health in Central Africa. What can be changed in order to train more competent health professionals? This question is of utmost importance in primary health care.

Taking into account the level of training of secondary-level nurses in the Democratic Republic of the Congo (DRC), a systemic approach based on the PRECEDE PROCEED model of analysis led to a better understanding of the educational determinants and of the factors favourable to a better match between training in health sciences and the expected competences of health professionals. This article can be read on two complementary levels: one reading, focused on the methodological process, should allow our findings to be transferred to other problems; the other, revolving around the specific theme and results, should provide a frame of reference and specific avenues for action to improve human resources in the health field.

The results show that it is important to start this training with a global and integrated approach shared by all the actors. The strategies of action entail an approach that takes into account all the relevant aspects: sociological, educational, medical and public health.

The analysis of the results shows that one cannot bring about any change without integrated strategies of action and a multidisciplinary approach that includes all the complex determinants of health behaviour, and without working within the organization of local structures and institutions of the Ministry of Health in the DRC.

A partnership of the Ministry of Health of the Democratic Republic of the Congo (DRC) – more specifically, the directorate that is in charge of health science education – the French-speaking community of Belgium and various education and training associations made it possible to set up and carry out a teaching innovation project to bolster human resources in the health sector. One of the major public health challenges in Africa is to find efficient ways to enhance human resources in the health sector.

The goal of the medical policy in the DRC is to promote the health of the population by providing high-quality health care that is complete and integrated and in which the community participates, within the general context of the fight against poverty.
With this goal in view, the Ministry has adopted the following orientations:

• restructuring the health system according to political, legislative and administrative orientations, as well as updating standards of services;

• increasing the availability of resources by implementing an adequate administrative process;

• establishing an integrated system of preventive and curative care and health promotion for the target groups;

• strengthening the programmes of support to health activities;

• coordinating, promoting intrasectoral and intersectoral collaboration and partnership for health;

• promoting a suitable environmental framework for health.

The Ministry attaches priority importance to delivering high-quality care and health services by: reaffirming the strategy of primary health care (PHC) as a fundamental option of the national policy on health; reaffirming the health zone or district, delivering a minimum package of activities, as the operational unit; and ensuring regular procurement of essential drugs, including biological products and other laboratory reagents.

Training of nurses in the DRC is organized in all provinces of the country and conducted through technical medical institutes (ITM) and medical educational institutes (IEM) for the secondary level, and higher institutes of medical technology (ISTM) for the higher level. In 1998 there were 308 ITM and ISTM in the country; to date there are 254 schools of nurses at the secondary level recognized by the DRC government.

According to the type of management, these institutions are classified as public, officially recognized or private schools. However, the autonomy of management for all these schools is extensive, given the near-absence of government subsidy. The infrastructure and quality of training differ widely from one ITM to another.

As a rule, the solutions that one observes in public health are located in the area of further training for health professionals. However important further training is, it is disappointing to note the low yield of these various training courses in improving the quality of health care and services.

When it comes to basic training, the young nurses who have just graduated from secondary school and make up the critical mass of health professionals in primary health care are commonly required by specific private or church employers to enrol in a full year of training after their academic studies, in order to try to fill the gaps between their basic training and health professionals' actual training needs. Confusion on the part of the ministries and other forces involved is attested to by the absence of vision, the lack of expertise in educational research and the lack of reforms of the educational curricula to keep pace with the fundamental changes that have resulted from the decentralization of primary health care.

In addition, several other problems make this succinct analysis more complex. The unemployment rate for the country's nursing school graduates is extremely high, even for those who graduate from the best schools in Kinshasa, the capital. The situation is compounded by the almost total absence of quality-management mechanisms, especially when it comes to taking stock of the existing health workforce. The situation is a complex one in which the expected changes are not clearly discerned.

The problem of health sector human resources is vast. The context that interests us in this article is that of the human resources who are in the front line when it comes to grappling with the various communities' health problems.
These are the nurses who have completed secondary-school courses. They are the main primary health care professionals in Central Africa, especially in the DRC.

The research question is posed at three levels: on the theoretical level, this article proposes importing a theoretical model from one field to another; on the methodological level, it uses an action-research mode of data collection to better establish results; on the empirical level, the DRC is unusual ground on which to introduce an innovation through research.

This qualitative research pursues two objectives: to present a methodology and to study the results of its application. The qualitative hypothesis that subtends this thinking is that using an analytical, systemic planning and strategic synthesis model, based on a systemic and participatory approach at the various strategic and operational levels, will procure the vision necessary for changing the basic education and training of nurses in secondary schools in the DRC.

There is a lack of literature on experiments and experiences that use analytical or planning methods to understand complex social realities, and consequently on the strategic plans of action that should result from such experiences. It is thus important for all the players in the process to use the outcomes of the various stages of analysis and planning to produce an appropriate and adapted logical framework. It is necessary, however, to be able to set down on paper the methodology that is used, so as to construct a model that by the end of the process appears obvious to the actors involved, i.e. teachers, internship supervisors and school management, personnel within the ministry's Directorate for Health Education, and project officers.

Generally speaking, three relatively distinct stages were necessary.

In the DRC, the Ministry of Health – and more specifically its directorate in charge of health science education – is responsible for the basic training of secondary-school nursing students and the further training of health professionals.

The situation sometimes varies in neighbouring contexts. Thus in Rwanda, for example, the Ministry of Education is in charge of basic education and training in the technical schools and for the medical professions. In France and Belgium, the general choice was to have the Ministry of Education responsible for most of the training and education of health professionals.

We shall not discuss in this article the relevance of the place of oversight for this type of brief for training health professionals. We shall limit our remarks to the need to choose the best place for systemic planning and strategic vision in the existing context. So it is that in the 6th Directorate of the Ministry of Health of the DRC, which is in charge of paramedical secondary education, the need was perceived to develop systemic planning tools that would give a comprehensive, consistent vision of the sector's needs. While the 6th Directorate is indeed the strategic level, there was early involvement of an operational level, in the form of a sample of schools and teachers, and creation of a management unit to guide the implementation of the plans by the 6th Directorate and teachers from the grassroots.

It is important to remember that we are talking about systemic and operational planning, not just strategic planning.
This article starts from the premise that organizations and human beings are complex, and that one way to design public health actions that heed this complexity is to use a systemic approach to analyse them.

Various models for a systemic approach exist. The approach that we chose to develop a logical framework for analysing, planning and synthesising the work of the ministry's directorate in charge of health science education in the DRC is Green and Kreuter's PRECEDE PROCEED model.

The PRECEDE PROCEED planning model is attractive because of its multidisciplinary approach, based on the fields of epidemiology, social sciences, behavioural sciences, education and health administration. In a nutshell, the fundamental principles that gave rise to this approach come from the multifactorial nature of all problems. Once this has been posited, all efforts made to act upon behaviour, the environment and social factors must necessarily be multidimensional and multisectoral.

The acronym PRECEDE means "Predisposing, Reinforcing and Enabling Constructs in Educational/Environment Diagnosis and Evaluation", while the acronym PROCEED means "Policy, Regulatory and Organizational Constructs in Educational and Environmental Development".

The PRECEDE-PROCEED model emphasizes planning interventions by focusing on the expected outcomes of actions, based on epidemiological, social, behavioural, environmental, educational, organizational, administrative and political diagnoses of a socio-health and/or educational situation. The stages in the construction of a systemic model for analysing the problem that interests us – health science teaching – are adapted as the process unfolds. One of this method's great strengths is its flexibility, that is, its ability to adapt to the needs of the specific analyses.

A systemic analysis and planning model is built dynamically, in a process that calls for continuous assessment. The model that the ministry's 6th Directorate came up with, and that is presented below, must change with changing knowledge in the area.

It is important to stress the qualitative process of continual exchanges and constant observations among the players that made it possible to fill the gaps in the information-gathering process. All these elements are much more difficult to organize in one well-defined stage, but are indeed part of a process that stresses the participatory approach and comes under the third strand of the methodology being discussed. The development of the first model proceeded during the workshop held in Kinshasa at the start of the project, in October 2002, with the participation of personnel from three pilot schools and the Ministry of Health. The three-day workshop, with 40 participants from various institutional levels, permitted the establishment in December 2003 of strategic orientations and guidelines for the continued broadening of the programme.

All PRECEDE PROCEED models are built upon the players' actual experiences of the problem to solve. The clarification of the problem itself, which is part of the epidemiological and social diagnosis, comes out of a debate that must be conducted with all the parties concerned. This problem will gradually become clearer as its statement shuttles back and forth among the parties until it eventually satisfies the strategic and institutional level that is in charge of the programme and that the problem concerns directly.
If, thanks to a resolutely participatory approach, all of the players adopt the use of a systemic approach on a real strategic, managerial and operational level, it will become a solid tool for the entire teaching body concerned.

The presentation of results is at two levels: PRECEDE results and PROCEED results. The first are mainly descriptive. PROCEED results are more normative, leading to certain recommendations for practitioners and other actors.

To facilitate presentation of the results and understanding of this coherent, overall vision of the interrelated elements, it was considered pertinent to retranscribe the full model as it exists for the 6th Directorate of the DRC's Ministry of Health.

To structure the presentation of the results, we shall follow the order in which the model's construction progressed. The table must be read from right to left, starting with the epidemiological and social diagnosis, then going on to the behavioural diagnosis, from there to the analysis of the educational and environmental determinants of these behaviours, and ending with the analysis of the institutional diagnosis (see Table 1).

In the Ministry of Health, all the players are concerned by the mortality and morbidity indicators in the country. For health professionals, the lack of quality of the service provision and care provided by their health system is an obvious cause of the people's lack of confidence in that system.

Who are these players, and what behaviour can explain, through a direct link, the diagnosis of inadequacy? In answering these questions with the players themselves, we discover that there are groups of players that are never clearly identified yet are clearly related to this problem of inadequacy. This is the case, for example, of the donors and nongovernmental organizations (NGOs).

Revealing all the groups of players makes it possible to see more easily why importance should be given to a multisectoral approach, especially one that covers teachers and medical and paramedical professionals. If the population is considered a group of players that is separate from the problem at hand, it will not be possible to take it into account in setting up action strategies, to the extent that the aim of such work is to better define people's expectations in terms of the quality of care and arrive at a better understanding of their behaviour.

During the discussions, the teachers felt that priority had to be given to separating the group of teachers from that of intern supervisors in order to better highlight the particularities and role of the field training. The school managements revealed their specific role in this problem of mismatches. Indeed, the teachers' and supervisors' behaviour is strongly linked to the managements' own behaviour in dealing with changes.

The environmental diagnosis identifies the factors that are linked to the environment and are direct causes of the findings of the epidemiological and social diagnosis. In a context such as that of the DRC, geopolitical and socioeconomic factors head the list, along with the health structures' inaccessibility. To take a more constructive approach without denying reality, it is necessary to focus the analysis of this diagnosis on the more targeted problem of the inadequacies in the training sector. This reveals variables that are more controllable for the levels that are concerned and that everyone agrees are connected to the problem.
These are: the learning environment, the teaching environment, class hours that facilitate or hamper certain types of learning, etc.

The educational diagnosis enables one to home in on the educational and motivational determinants, which must not be overlooked when one goes on to an interventional phase. To the extent that the systemic approach also gives significance to each group of players, as is the case in the DRC, it is fully possible to set up a frame of analysis, assessment and action-research that presents the variables and determinants of a PRECEDE model specific to each group (an action-research framework). This is what was done in the DRC to follow the changes in teaching practices, in conjunction with each identified intervention, within the specific group of teachers. The results show that it is relatively easy to separate the educational determinants from each other in order to facilitate subsequent reflection about the strategic action to take.

The predisposing factors, which concern knowledge, experience, attitudes, perceptions and representations, have a key place in relation to the behaviours of the players of interest to the Directorate for Education. This construction shows clearly that the training given is usually concerned with knowledge only and generally does not make use of the learners' life experiences.

The other important result is to be able to visualize the place of representations in a conceptual framework that will likewise be used for the action. For example, there are the various representations of learning theories when it comes to teaching methods, or unfounded beliefs about the quality of care. Specific models exist that enable one to delve much deeper into perceptions and beliefs.

The enabling factors, in terms of actual competences, are too often disregarded and underestimated in interventions. Incorporating them in this model thus enables the directorate in charge of this branch of education to check to what extent the projects, programmes and other support measures consider this priority strand in terms of independent development.

The reinforcing factors, which are sometimes also referred to as facilitating factors, are the determinants that act upon the positive feedback loops. The importance that all the players give to this type of variable in constructing the model confirmed the need, which the directorate had already felt, to find means to set up long-range monitoring mechanisms for the various activities engendered by the programme or by some more specific projects.

The model contains a certain number of variables. It is clear that it can be enriched in the course of the process through its use and through the players' gradual discovery and appropriation of its features.

In terms of results, the institutional diagnosis requires analysing the situation at the organizational level that corresponds to the level of the model's application. This is a national health science education programme under the Ministry of Health. As such, the institutional diagnosis stresses essential strategic variables if one wants to work on a well-knit, comprehensive set of changes. It thus entails the need to analyse the institutional standards when it comes to inspections and assessment, but also those governing health system management and health sector human resource quality management.
This is also the level on which we shall discuss how the programme dovetails with other variables and determinants.

Before strategic thinking can be put into place on the basis of this situational analysis, it is possible to go on to a more dynamic reading of the relations between determinants and variables. So it is that the DRC's Ministry of Health directorate in charge of health science education foresees a certain number of strategic axes for action. The aim of the action is the problem's translation into an objective form. The directorate thus considered its main goal to be to improve the match between what is learnt in schools, health professionals' needs, and the population's expectations.

To better understand how the reading of the conceptual model brought us to the action strategies, it is useful to stress an intermediate step, summarized in the accompanying figure.

A natural adaptation of the PRECEDE model was to define the groups of players by their behaviour. In this way, we obtained a better picture of the division of responsibilities in achieving a common goal, and evidence of the need for interdisciplinary work.

The results bring to the fore a number of behaviours that attest to a lack of independence, absenteeism, lack of collaboration, failure to connect theory and practice, a lack of communication, ignorance of the teacher's role, etc., depending on the group. Examination of these results prompts us to stress the importance that must be given to the learning environment and, when it comes to action strategies, the importance of a learning environment that is in tune with the strategic axes that are selected, in this part just as in the other parts of the analysis of the situation.

Given this finding and the need to link the educational and institutional diagnosis with the health problem (seen as an appropriate form for education), one proposed strategic hypothesis is to favour learning techniques that are based on active teaching methods.

In going consistently through the various diagnoses and organizational levels, this choice led the education directorate to think about changing its programmes and standards so as to base them on novel teaching concepts such as skills-based learning.

The reading of these results in terms of strategic action reveals the need to bolster the analytical and planning process that already exists within the directorate, paying more attention to the educational and environmental determinants for all the target populations concerned.
To our mind, the success of the expected changes in terms of narrowing the gap between the "supply" and the "demand" hinges on this.

The accompanying diagram summarizes the strategic axes that the Directorate for Health Science Education chose, on the basis of the PRECEDE analysis, to achieve this objective. It reveals four axes to be reinforced:

• to reinforce communication and coordination in conjunction with the other reinforcing factors: the pilot schools' teaching method committees, teaching monitoring and feedback, the setting-up of networks, etc.;

• to develop methods to enhance the learner's autonomy: active teaching, a constructivist approach, interdisciplinarity, critical spirit, etc.;

• to foster a learning environment that enables the learners to acquire knowledge: library, teaching materials, computer learning, a computerized documentation centre, etc.;

• to provide institutional and structural support: standards and curricula in tune with teaching and organizational innovations, and skills targets that fit health professionals' needs and meet the community's expectations.

The discussion will take place on two levels – the operational and the conceptual. On the operational level, we feel it is interesting not to dwell on the presentation of the model as it could have been applied, but on its actual application. The results are presented so as to allow the reader to understand how to organize the problems that are felt to exist in health science education in the DRC.

Even if the Directorate for Health Science Education is well aware of its problems, the systemic modelling of the interconnected variables and populations seems to give it a conceptual and operational tool that is useful on various levels, as follows:

• a tool for dynamic analysis of the situation, with regular updates;

• a tool for systemic planning that also enables the directorate to put forward arguments in dealing with donors and NGOs in the sector;

• an assessment tool that gives more importance to assessment criteria such as cohesiveness, consistency, relevance, appropriation and comprehensiveness, i.e. process criteria;

• a research and evaluation tool that can also promote a more quantitative approach to analysing the relations between variables and various diagnoses, or within the same diagnosis;

• a dialogue-enhancing tool, for it gives the groups of players involved a vision of the planned change and a common objective.

To sum up, this is a tool that provides a certain guarantee that the strategy development process is informed, meets the needs and is complete.

The list of these advantages is obviously conditional on some baseline conditions: a participatory process in which the model is developed and operates, and the appropriation of the concepts that subtend the model.

On the conceptual level, the discussion revolves around the model and the strategic axes just presented. We observed, through the PRECEDE analysis and then the PROCEED strategic reflection phase, that many disciplines converged to lead us to this hypothesis and to a common objective of needs-matching. Indeed, when action is carried out it will be a matter of achieving a gradual advance along the strategic axes defined earlier in this article. Moreover, we are confronted with strategic choices that involve at least three dimensions: a public health approach, an education approach and a sociological approach.

These three dimensions contribute to the success of the data collection process, as well as to the success of the strategic choices that follow.
This reinforces the fact that the PRECEDE PROCEED model comes from the development of an approach aimed at meeting the need for education and health promotion tools and methods. So it is that we see numerous applications of this process in technical health education establishments that spring from a true systemic analysis of the problems, with full mastery of a structuring capacity, unlike some other models such as causal analysis (17).

Similarly, we can consider that defining a problem at an institutional and organizational level also requires the identification and involvement of all the parties concerned. We can also consider that the tools that help one to understand the relations between elements, and that insist on a better search for behavioural determinants, are prerequisites for organizational learning that has groups of players interact with each other. This is all the more true if the change that is ultimately expected (a match between the two sides of the equation) is contingent on changes in the players' behaviour and practices, as is the case in health education.

As for the limits of this research, it targets the analysis of one inadequacy within human resource management in health: the training of nurses at the professional technical level. Other inadequacies deserve to be analysed in a complementary way, relating to other health professionals and to the health and education planning sectors. The Green model is complementary to dynamic methodological references such as project cycle management, which focuses on managing interventions or projects whose aim is to contribute to changing a situation from unsatisfactory to satisfactory. Its use within the framework of the project could mobilize more means while enabling developments relating to action research. In this context, the contribution of other disciplines, such as psychology, could be reinforced.

With regard to the three levels from which the research started – the theoretical, the methodological and the empirical – PRECEDE PROCEED analysis is a model that can be applied to varied situations and problems, although it must be used participatively and proactively in order to enhance its utility in specific circumstances as a transferable tool. On the empirical level, the will of all actors – and the Ministry of Health in particular – to have a clear vision of the projected change and of the manner of reaching it, while integrating the complexity, was the element that carried the process. We advance the hypothesis that L. Green's systemic approach may become one of a set of active methods, such as problem-based learning, cooperative learning or even project-based learning, to transfer to learners in nursing schools and sections in the DRC. Indeed, the ability to analyse and synthesize, but also to carry out education and health promotion actions, is essential.

DRC: Democratic Republic of Congo

PRECEDE: Predisposing, Reinforcing and Enabling Constructs in Educational/Environment Diagnosis and Evaluation

PROCEED: Policy, Regulatory and Organizational Constructs in Educational and Environmental Development

The authors declare that they have no competing interests.

FP is responsible for this research. She initiated the project in the DRC. She is a specialist in public health and pedagogy. She participated in the design and coordination of the study and drafted the manuscript.

JK and DB are two teachers in charge of the reform of the nursing programme in the DRC.
They set up the collection of data for this study and finalized the analysis.

YC and AL participated in the design of the study and in the adaptation of Green's model to the field of nursing training. YC participated in writing the manuscript.

MG specializes in management and pedagogy. She conceived the study with FP and participated in its design. DP took part in the elaboration of the methodology; she brought expertise in health promotion and wrote part of the manuscript.

All authors read and approved the final manuscript.

Table 1. PRECEDE model for health science teaching in the DRC (oversized table, provided as an additional file).
The innate immune system is ancient and highly conserved. It is the first line of defense and the only recognizable immune system in the vast majority of metazoans. Signaling events that convert pathogen detection into a defense response are central to innate immunity. Drosophila has emerged as an invaluable model organism for studying this regulation. Activation of the NF-κB family member Relish by the caspase-8 homolog Dredd is a central, but still poorly understood, signaling module in the response to gram-negative bacteria. To identify the genes contributing to this regulation, we produced double-stranded RNAs corresponding to the conserved genes in the Drosophila genome and used this resource in genome-wide RNA interference screens. We identified numerous inhibitors and activators of immune reporters in a cell culture model. Epistatic interactions and phenotypes defined a hierarchy of gene action and demonstrated that the conserved gene sickie is required for activation of Relish. We also showed that a second gene, defense repressor 1, encodes a product with characteristics of an inhibitor of apoptosis protein that inhibits the Dredd caspase to maintain quiescence of the signaling pathway. Molecular analysis revealed that Defense repressor 1 is upregulated by Dredd in a feedback loop. We propose that interruption of this feedback loop contributes to signal transduction.

By silencing all conserved genes in Drosophila, Foley and O'Farrell have identified numerous new regulators of the innate signaling response to gram-negative bacteria.

As a typical metazoan suffers numerous microbial assaults during its lifespan, survival depends on robust defense strategies. Metazoan defenses are classified as either innate or adaptive. Adaptive immunity is characterized by elaborate genetic rearrangements and clonal selection events that produce an extraordinary diversity of antibodies and T-cell receptors that recognize invaders as nonself. While of profound importance, the adaptive responses are slow and limited to higher vertebrates. In contrast, the machinery of innate immunity is germ-line encoded and includes phylogenetically conserved signaling modules that rapidly detect and destroy invading pathogens. Model organisms have accordingly proven invaluable for dissecting these conserved mechanisms.

Signaling in innate immunity consists of three steps: detection of pathogens, activation of signal transduction pathways, and mounting of appropriate defenses. The first step is triggered by the detection of pathogen-associated molecular patterns by host pattern recognition receptors.

Drosophila as a model for analyzing innate immune signal transduction had a serendipitous origin. The Toll signaling pathway was discovered and characterized in Drosophila because of its role in specification of the embryonic dorsal-ventral axis, and only later was its role in immunity recognized. A second pathway, the Imd pathway, mediates responses to gram-negative bacterial infection in Drosophila. Although many of its components have been identified, the regulation of its central transcription factor, Relish, remains incompletely understood. The N-terminal Rel-homology domain of Relish is liberated by signal-induced proteolysis, which requires the caspase-8 homolog Dredd, and then translocates to the nucleus.
Unlike the Toll pathway, which was thoroughly studied in its developmental capacities, analysis of the Imd pathway is relatively recent. Its more complete genetic dissection may well define another conserved and fundamental pathway of immune signaling. Of particular interest, a pivotal step in the Imd pathway—the regulation of Dredd-mediated cleavage of Relish—is not understood. To begin to address this, we developed a powerful RNA interference (RNAi)–based approach to functionally dissect the Imd pathway. In collaboration with others at the University of California, San Francisco, we produced a library of 7,216 double-stranded RNAs (dsRNAs) representing most of the phylogenetically conserved genes of Drosophila. We developed a cell culture assay that allowed application of this library to a high-throughput RNAi evaluation of Imd pathway activity. This screen identified numerous components of signal transduction, defined a hierarchy of gene action, and identified a novel gene, sickie (sick), required for activation of Relish. Focusing on regulation of the Dredd caspase, we identified a novel inhibitor of Dredd, Defense repressor 1 (Dnr1), which is upregulated by Dredd in a feedback loop that maintains quiescence. We propose that interruption of this feedback loop contributes to signal transduction.

Our primary reporter, Dipt-lacZ, places lacZ under the control of the promoter of Dipt, which encodes an antimicrobial peptide. Commercial preparations of LPS contain bacterial cell wall material capable of activating the receptor PGRP-LC and act as gratuitous inducers of antimicrobial peptide genes in Drosophila tissue culture cells. Screening the library against this reporter defined three phenotypic classes: decreased defense by RNAi (DDRi) genes, enhanced defense by RNAi (EDRi) genes, and constitutive defense by RNAi (CDRi) genes.

In an initial visual screen, dsRNAs that altered the induced or constitutive expression of β-galactosidase were selected as candidate innate immunity genes. We subjected all the initial positives to a more stringent retest in which we resynthesized the candidate dsRNAs, retested them under identical conditions, and counted the number of β-galactosidase-positive cells. We defined DDRi dsRNAs as reducing the frequency of Dipt-lacZ-expressing cells to below 40% of LPS-treated controls, EDRi dsRNAs as increasing the frequency of Dipt-lacZ-expressing cells more than 2-fold, and CDRi dsRNAs as inducing Dipt-lacZ-expressing cells to a level equal to or higher than that induced by LPS. About 50% of the initial positives met these criteria, yielding 49 DDRi dsRNAs, 46 EDRi dsRNAs, and 26 CDRi dsRNAs.
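Formally, these retest criteria amount to a small decision rule on the frequency of reporter-positive cells. The sketch below is our own formalization in Python: the thresholds are the ones quoted above, while the function name, argument names, and example numbers are hypothetical.

```python
# Decision rule used to classify retested dsRNAs, as quoted above:
#   DDRi: LPS-induced frequency below 40% of the LPS-treated control
#   EDRi: LPS-induced frequency more than 2-fold the control
#   CDRi: uninduced frequency at or above the LPS-treated control
# A dsRNA scoring as both EDRi and CDRi corresponds to the "EDRiC" class.

def classify_dsrna(freq_lps, freq_untreated, control_lps):
    """Assign screen categories from beta-galactosidase-positive cell
    frequencies; freq_lps and freq_untreated are measured with the dsRNA,
    control_lps without it."""
    categories = []
    if freq_lps < 0.40 * control_lps:
        categories.append("DDRi")
    if freq_lps > 2.0 * control_lps:
        categories.append("EDRi")
    if freq_untreated >= control_lps:
        categories.append("CDRi")
    return categories or ["no phenotype"]

# Example: a dsRNA that derepresses the reporter without LPS scores as CDRi.
print(classify_dsrna(freq_lps=0.30, freq_untreated=0.25, control_lps=0.20))
```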
Genes scoring as both EDRi and CDRi are listed in both categories (EDRiC genes). Inactivation of either actin gene alone, with dsRNA directed to the actin UTRs, demonstrated that both actin transcripts must be inactivated for an observable EDRi or CDRi phenotype. Indeed, unlike the situation in Caenorhabditis elegans, several genes can be inactivated simultaneously by RNAi in Drosophila without an obvious drop in the efficiency of gene inactivation, and we exploited this both for epistasis tests and for following green fluorescent protein (GFP)–tagged Relish. For example, we inactivated sick by RNAi and subsequently tested dsRNAs representing the five CDRi epistatic subgroupings for their ability to activate Dipt-lacZ expression in the absence of Sick.

Our screens distinguished two classes of inhibitors: those whose loss activates the response in unstimulated cells (CDRis) and those that downregulate an ongoing response (EDRis). Interestingly, groups of inhibitors implicate distinct pathways in immune regulation. For example, of the 17 genes that had the strongest EDRi phenotype, four encode splicing factors and four encode products that appear to interact with RNA. This functional cluster suggests that disruptions to some aspect of RNA processing/metabolism can substantially increase the number of S2 cells that activate expression of the Dipt-lacZ reporter in response to LPS exposure. While we do not know how RNA metabolism contributes to this phenotype, the repeated independent isolation of genes lying in a functional cluster reinforces a conclusion that the process is involved. Several other functional clusters were picked up in our screens. Three genes involved in Ras signaling were identified as EDRi genes. In addition, we noted weak EDRi phenotypes with three additional Ras signaling components. These findings argue that Ras signaling downregulates responses to LPS. This might represent a negative feedback circuit. However, the finding that MESR4 also has a CDRi phenotype suggests that the Ras/MAPK pathway may also impinge on the maintenance of quiescence.

Genes encoding an α-tubulin (α-Tub84D), a kinesin motor (Klp10A), and a microtubule-severing function (CG4448/katanin) were also isolated as EDRi genes. Perhaps an event involving microtubule structures helps limit immune responses. The two cellular actin genes (Act5C and Act42A) were individually dispensable, but their joint inactivation produced both EDRi and CDRi phenotypes. A regulator of actin function, SCAR, was also identified as a CDRi, and both the actin and SCAR CDRi phenotypes fell into epistasis Group II. This suggests that disruption of the actin cytoskeleton in quiescent cells can activate the immune response in a Relish-dependent fashion. Since S2 cells are induced to phagocytose bacteria, and changes in cell shape are induced in response to LPS, it would not be surprising if cytoskeletal functions contribute to immune responses. Indeed, microarray studies showed induction of numerous cytoskeletal components in S2 cells upon incubation with LPS.

A previous conventional genetic screen for mutations leading to constitutive action of the Imd pathway implicated a Skp1/Cullin/F-box (SCF) complex in the repression of Relish. This and other tentative indications of involvement of this pathway were either not reproduced or fell below the threshold in retesting. We are left uncertain about SCF contributions to immune induction in our system.

As in the case of the CDRi and EDRi phenotypes, our screen for DDRi phenotypes identified numerous genes falling into functional categories. One potential limitation of our approach for identification of DDRis is that some genes required for ecdysone-dependent maturation may be selected as immune deficient. Additionally, one of the largest functional categories comprised genes involved in translation and included four ribosomal proteins, three initiation factors, two aminoacyl-tRNA synthetases, and an elongation factor. It seems likely that RNAi of genes in this category affects translation of the Dipt-lacZ reporter, as opposed to affecting modulation of signaling events. To cull from our collection of DDRis such indirect modulators of the response, we developed a secondary screen that does not rely on de novo gene expression. Based on the previously described phenotypes of Imd pathway members, we reasoned that inactivation of the core components transducing the signal would compromise activation of the Relish transcription factor. To identify DDRi dsRNAs that prevented Relish activation, we prepared a GFP-Relish reporter cell line and rescreened DDRi dsRNAs for loss of GFP nuclear translocation in response to LPS.

In addition to confirming a requirement for Dredd and PGRP-LC in Relish activation, we implicated a proteasomal regulatory subunit, Dox-A2, and identified a novel gene, sick, as involved in Relish nuclear translocation in response to LPS. Although cells treated with sick dsRNA failed to mount an immune response, the cells were otherwise healthy through the course of the experiment. Dox-A2 RNAi reduced the survival of cells and was effectively lethal within a few days of the scoring of the immune response. We conclude from this that Sick and Dox-A2 contribute to the central signal transduction process, but it is presently unclear whether Dox-A2 has a significant specific input or if its effects are secondary to a global effect on cell viability.

It is notable that only two DDRi genes passed our secondary screen based on GFP-Relish localization.
Does this mean that all the other DDRis are not really involved? While we have not yet analyzed all these genes, we suspect that many of them will modify the Imd pathway, either impinging on the pathway at a point beyond Relish translocation, or quantitatively or kinetically modifying Relish translocation in a manner that we did not detect in our screens. Insight into this issue is likely to be derived from further epistasis tests that might place some of these DDRis in the signaling pathway.

We identified an unprecedentedly large number of immune response inhibitors (CDRi genes) in our screens. As there are diverse steps within and potentially outside the Imd signaling pathway at which the CDRi inhibitors might act, we sought to position their actions with respect to known Imd pathway functions by RNAi epistasis tests. By sequential inactivation of known Imd pathway components and CDRi gene products, we tested whether constitutive activation of immune reporters by CDRi dsRNAs depends on steps in the signal transduction pathway. In this way, we defined five distinct epistatic categories of CDRi gene products. The four CDRi genes that continue to activate immune responses despite inactivation of Imd, Dredd, or Relish are likely to act on signal-transduction-independent factors that maintain transcriptional quiescence of Dipt. The largest group of CDRis (12) depends on Relish function but not on upstream activators of Relish. These are likely to include two types of regulators: one type that sets the threshold of response so that basal activity of Relish does not trigger pathway activity, and a second type that contributes to suppression of Relish activity. The latter type of regulator might include inhibitors that impinge on the late steps in the signal transduction cascade. For example, genes whose normal function inhibits the activity of the full-length Relish transcription factor might be required to make the pathway activator dependent, and these would be found in this category. The remaining upstream epistasis groups that rely on additional signal transduction components are strongly implicated as significant contributors to the immune induction pathway. As all of the CDRis induced robust immune responses in the absence of ecdysone (unpublished data), we propose that the CDRis have their input into the Imd pathway at a level that is the same as or lower than the level of the input from ecdysone. Given that this is true for all five epistatic groups of CDRis, the result suggests that ecdysone has its input at an early level of the Imd pathway.

The identification of five epistasis groups of inhibitors also provides reference points for a second round of epistasis tests that position novel DDRi genes within the Imd pathway. We used this approach to show that the novel DDRi sick is required for the constitutive activation of responses caused by inactivation of CDRi genes in Groups III, IV, and V, but not for the action of CDRi Group II or Group I genes. If we assume a simple linear pathway, this would indicate that Sick functions upstream of Relish and downstream of Imd and Dredd. It is noteworthy that the epistatic data are consistent with molecular data indicating that Sick is required for Dipt-lacZ induction and the nuclear translocation of Relish in response to LPS. This combination of phenotypic, epistatic, and molecular data argues for participation of Sick in the regulated activation of the Relish transcription factor.
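The logic of these sequential-inactivation tests can be captured in a few lines of bookkeeping. The sketch below is our own illustration: the pathway components are those named in the text and ordered according to the simple linear assumption stated above, but the example dependency sets are invented purely to show the logic and are not the published group assignments.

```python
# Bookkeeping sketch for the RNAi epistasis tests described above: a CDRi
# dsRNA is combined with dsRNA against a known pathway component, and the
# constitutive Dipt-lacZ phenotype either persists or is suppressed.
# The ordering assumes the simple linear pathway discussed in the text.

PATHWAY_ORDER = ["PGRP-LC", "imd", "dredd", "sick", "relish"]  # upstream -> downstream

def required_components(suppressed_by):
    """Given the set of components whose co-inactivation suppresses a CDRi
    phenotype, list them in pathway order; the CDRi must act upstream of
    (or through) all of these."""
    return [g for g in PATHWAY_ORDER if g in suppressed_by]

# A CDRi suppressed only by loss of Relish acts just above Relish; one also
# suppressed by loss of Dredd and Sick feeds into the pathway higher up.
print(required_components({"relish"}))
print(required_components({"dredd", "sick", "relish"}))
```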
One epistatic group struck us as particularly interesting. While Dnr1 inactivation caused ectopic Dipt-lacZ expression, simultaneous loss of Dredd or Relish restored cells to their resting state. These data indicate that the wild-type function of Dnr1 is to prevent Dredd-dependent activation of Relish. Consistent with this hypothesis, we identified a C-terminal RING finger in Dnr1 with greatest similarity to the RING finger motifs observed in the C-terminus of IAP proteins. In addition to regulating caspase activity, IAPs also regulate their own stability through ubiquitin-mediated proteolysis. Similarly, we observed that mutation of a critical RING finger residue greatly stabilized Dnr1. These features suggest that Dnr1 is a caspase inhibitor and that it might act directly to inhibit Dredd activity.

We observed that exposure of cells to LPS transiently stabilized Dnr1 and that this stabilization directly paralleled the period of Dredd-dependent Relish processing. This suggested to us that Dnr1 stability and accumulation might be regulated by its target, Dredd, a regulatory connection that could establish a negative feedback loop. We confirmed that Dredd activity is required for accumulation of Dnr1. These results suggest that Dredd modulates a RING-finger-dependent Dnr1 destruction pathway. Our results are thus consistent with a feedback inhibitory loop in which Dredd activity promotes accumulation of its own inhibitor.
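To build intuition for why a Dredd-driven supply of Dredd's own inhibitor yields transient rather than sustained signaling, a toy kinetic model can help. The following sketch is entirely our own illustration; the equations and parameter values are invented for intuition and are not derived from any measurements in this study.

```python
# Toy negative-feedback model (our own illustration; not from the paper).
# Dredd activity rises with stimulus but is repressed by Dnr1; Dnr1
# accumulates in proportion to Dredd activity (Dredd stabilizes Dnr1).
# A transient stimulus therefore produces a pulse of Dredd activity that
# is later damped as Dnr1 catches up.

def simulate(stim_on=(10.0, 20.0), dt=0.01, t_end=60.0):
    dredd, dnr1 = 0.1, 0.1
    trace, t = [], 0.0
    while t < t_end:
        stimulus = 1.0 if stim_on[0] <= t < stim_on[1] else 0.05
        d_dredd = stimulus * 1.0 / (1.0 + 5.0 * dnr1) - 0.5 * dredd
        d_dnr1 = 0.4 * dredd - 0.2 * dnr1
        dredd += d_dredd * dt
        dnr1 += d_dnr1 * dt
        trace.append((t, dredd, dnr1))
        t += dt
    return trace

for t, dredd, dnr1 in simulate()[::500]:   # sample every 5 time units
    print(f"t={t:5.1f}  Dredd activity={dredd:.3f}  Dnr1={dnr1:.3f}")
```

In this caricature, removing the feedback (setting the Dnr1 synthesis term to zero) converts the pulse into sustained Dredd activity, which is the qualitative behavior one would expect if interruption of the feedback loop contributes to signal transduction.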
Stable cell lines were generated following the Drosophila Expression System Protocol using hygromycin B as a selection marker. The Dipt-lacZ plasmid has been described previously. RNAi protocols were as previously described, with 40,000–50,000 cells per well in 200 μl of Schneider's medium supplemented with 10% heat-inactivated fetal calf serum, penicillin, streptomycin, and hygromycin. dsRNA was added to each well at a final concentration of 10 μg/ml. Cells were cultured for 4 d at 25 °C and incubated an additional 24 h in 1 μM 20-hydroxyecdysone. LPS was added at a final concentration of 50 μg/ml for 12 h.

Cells were stained in X-Gal buffer (PBS containing MgCl2, 3.5 mM K3Fe(CN)6, 3.5 mM K4Fe(CN)6, and 0.2% X-Gal in DMF). β-galactosidase activity assays were performed as described previously, and stained cells were viewed on an IMRB microscope. Immunofluorescent images were taken on an Olympus IX70 driven with DeltaVision software. S2 cells were deposited on Superfrost Plus Gold slides for immunofluorescence. Cells were fixed for 10 min in 4% formaldehyde (Sigma). Tubulin was detected with mouse anti-α-tubulin (Sigma). DNA was visualized with Hoechst 33258, and actin was visualized with rhodamine-coupled phalloidin. Images were processed with Adobe Photoshop 5.5, and figures were assembled with Adobe Illustrator 9.0.

Dnr1-expressing vectors were prepared by cloning Dnr1 cDNA into pAc5/V5HisA (Invitrogen). The C365Y mutant form of Dnr1 was prepared with the Stratagene point mutation protocol using a TTCAATCCGTACTGTCACGTC sense primer and a GACGTGACAGTACGGATTGAA antisense primer. The mutation was confirmed by sequencing. For experiments with z-VAD-FMK, S2 cells were incubated in 100 μM z-VAD-FMK for 4 h at room temperature. Cells were harvested by centrifugation at 1,000 g for 3 min, washed in PBS, and lysed on ice for 10 min in lysis buffer. Lysate was spun for 10 min at maximum speed, and the supernatant was added to sample loading buffer. Samples were separated by SDS-PAGE and analyzed by Western blotting. Anti-GFP antibody was purchased from BabCO, and HA and actin antibodies were purchased from Sigma.
In response to activation by WASP-family proteins, the Arp2/3 complex nucleates new actin filaments from the sides of preexisting filaments. The Arp2/3-activating (VCA) region of WASP-family proteins binds both the Arp2/3 complex and an actin monomer, and the Arp2 and Arp3 subunits of the Arp2/3 complex bind ATP. We show that Arp2 hydrolyzes ATP rapidly—with no detectable lag—upon nucleation of a new actin filament. Filamentous actin and VCA together do not stimulate ATP hydrolysis on the Arp2/3 complex, nor do monomeric and filamentous actin in the absence of VCA. Actin monomers bound to the marine macrolide Latrunculin B do not polymerize, but in the presence of phalloidin-stabilized actin filaments and VCA, they stimulate rapid ATP hydrolysis on Arp2. These data suggest that ATP hydrolysis on the Arp2/3 complex is stimulated by interaction with a single actin monomer and that the interaction is coordinated by VCA. We show that capping of filament pointed ends by the Arp2/3 complex (which occurs even in the absence of VCA) also stimulates rapid ATP hydrolysis on Arp2, identifying the actin monomer that stimulates ATP hydrolysis as the first monomer at the pointed end of the daughter filament. We conclude that WASP-family VCA domains activate the Arp2/3 complex by driving its interaction with a single conventional actin monomer to form an Arp2–Arp3–actin nucleus. This actin monomer becomes the first monomer of the new daughter filament. This paper provides the biochemical and biophysical basis for actin filament formation, necessary for cell shape and motility.

The actin cytoskeleton determines the shape, mechanical properties, and motility of most eukaryotic cells. To change shape and to move, cells precisely control the location and timing of actin filament assembly by regulating the number of fast-growing (barbed) filament ends.

The Arp2/3 complex must be activated by both a member of the Wiskott–Aldrich syndrome protein (WASP) family and a preexisting actin filament before it will nucleate a new actin filament. Both the Arp2 and Arp3 subunits of the complex bind ATP.

We show here that the Arp2/3 complex rapidly hydrolyzes ATP on the Arp2 subunit upon filament nucleation. There are several events in the Arp2/3 nucleation reaction that might trigger ATP hydrolysis on Arp2: (1) binding of VCA to the Arp2/3 complex, (2) binding of VCA-Arp2/3 to the side of a preformed filament, (3) binding of a VCA-tethered actin monomer to the Arp2/3 complex, or (4) binding of a second or third actin monomer to form a stable daughter filament. We find that ATPase activity requires the combination of a preformed actin filament, a VCA domain, and an actin monomer, but does not require actin polymerization. This indicates that hydrolysis is triggered relatively early in the nucleation reaction—before completion of a stable daughter filament. Capping the pointed ends of actin filaments also stimulates Arp2 to rapidly hydrolyze ATP in the absence of monomeric actin and VCA and without branch formation. Thus, ATP hydrolysis on Arp2 is stimulated directly by interaction with conventional actin, presented to the complex either as a monomer attached to the VC domain of the WASP-family protein or as one of the subunits making up the pointed end of a preformed filament. To our knowledge this is the first direct evidence that the monomer supplied by the VCA domain is the first monomer of the new daughter filament.
From these observations we propose a model for the mechanism of Arp2/3 complex activation by WASP-family proteins.

α-32P-8-AzidoATP was previously used to show that UV irradiation covalently crosslinks the nucleotide to the Arp2 and Arp3 subunits of the Arp2/3 complex upon exposure to UV light. Here we crosslinked γ-32P-AzidoATP to the complex at concentrations above the KD for ATP.

We mixed 32P-AzidoATP-Arp2/3 with 2 μM monomeric actin in polymerization buffer and initiated polymerization by adding 750 nM VCA, which activates rapid actin filament nucleation by the Arp2/3 complex. We assayed timepoints both by SDS-PAGE and thin-layer chromatography (TLC) during the same reaction to monitor remaining and cleaved 32P, respectively. 32P cleavage has ceased by 90 s (unpublished data). As a control, 32P-ATP hydrolysis is only seen when the Azido-ATP is covalently crosslinked to the Arp2/3 complex; nucleotide bound to Arp2 is otherwise inaccessible to the enzyme and remains unconjugated. We quantified hydrolyzed 32P-ATP and released orthophosphate by TLC to calibrate the stoichiometry of ATP hydrolyzed by Arp2. ATP hydrolysis on Arp2 has been reported to be coincident with filament debranching. When nucleation is rapid (using N-WASP VCA), initiation of the polymerization reaction causes striking and near-complete ATP hydrolysis on Arp2.

The Arp2/3 complex interacts with conventional actin in three ways: (1) the Arp2/3 complex binds the sides of preformed actin filaments; (2) the Arp2/3 complex binds to the pointed ends of filaments, either by remaining associated with the daughter filament following nucleation or by capping preformed pointed ends; and (3) the Arp2/3 complex may interact with an actin monomer bound to the VCA domain of a WASP-family protein. There is abundant experimental evidence for filament side- and pointed-end binding by the complex.

Based on sequence conservation and biochemical similarities, ATP hydrolysis on Arp2 is probably driven by a mechanism similar to that which stimulates ATP hydrolysis on actin. The molecular details of how polymerization activates ATP hydrolysis on conventional actin, however, are not well understood. A leading hypothesis involves a "hydrophobic plug" – a loop between subdomains 3 and 4 of actin – and contacts with the Arc4 (p20) subunit of the complex.

The kinetics of nucleotide release from monomeric actin are slow (ATP release = 330 s), and the 32P signal is unchanged by the addition of free ATP. The 32P signal is only equivalent to hydrolyzed ATP in the absence of free ATP. The addition of free ATP should cause the excess of uncrosslinked Arp2/3 complex to compete with the small fraction of crosslinked 32P-ATP-Arp2/3 complex and thereby significantly reduce the 32P signal. The observation that the 32P signal is not reduced, rather than confirming that removal of free ATP has no effect, instead confirms that contaminating ATP is present for the latter part of the "ATP-free" condition, presumably released slowly from monomeric actin. The lag in polymerization created by the initial absence of ATP would be present in the experimental ATP hydrolysis measurement, but may not have been present in the nucleation data presented, because this was generated by a model-dependent computer simulation.

The rates of ATP hydrolysis we measure are more than an order of magnitude faster than debranching of Arp2/3-generated dendritic networks (approximately 1000 s). This is consistent with the fact that in vitro the binding energy of this interface is sufficient to drive the interaction and promote the active conformation of the complex directly, even in the absence of VCA or a mother filament.
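The stoichiometry calibration mentioned above (paired SDS-PAGE and TLC readouts from the same reaction) reduces to a simple fraction per timepoint. The sketch below illustrates that calculation; the counts are hypothetical placeholders, not data from the paper.

```python
# Illustrative calculation of ATP-hydrolysis stoichiometry from the paired
# SDS-PAGE / TLC readout described above. Counts are hypothetical.

# For each timepoint: 32P counts remaining on Arp2 (SDS-PAGE band) and
# 32P counts released as orthophosphate (TLC spot).
timepoints_s = [0, 10, 30, 60, 90]
remaining = [1000, 700, 400, 250, 200]   # crosslinked 32P-ATP still intact
cleaved = [0, 300, 600, 750, 800]        # released 32P orthophosphate

for t, r, c in zip(timepoints_s, remaining, cleaved):
    total = r + c
    frac_hydrolyzed = c / total if total else 0.0
    print(f"t = {t:3d} s  fraction of Arp2-bound ATP hydrolyzed = {frac_hydrolyzed:.2f}")
```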
Rat N-WASP VCA (398–502) and Human Scar1-VCA (489–559) with N-terminal 6His tags and TEV cleavage sites were bacterially expressed and purified by nickel affinity chromatography. We purified Arp2/3 by conventional chromatography and gel-filtered it before use. We flash-froze aliquots for storage in 1× KMEI (50 mM KCl, 1 mM MgCl2, 1 mM EGTA, 10 mM Imidazole [pH 7.0]).

We prepared phalloidin-stabilized actin filaments by adding 1/10 volume of 10× KMEI to monomeric actin at room temperature for 20 min to initiate polymerization, then added twice the concentration of phalloidin and incubated for a further hour at room temperature. We took care not to unintentionally shear the phalloidin-stabilized actin filaments by using wide-bore pipette tips.

We diluted freshly thawed aliquots of Arp2/3 to 2.0 μM in 1 mM MgCl2 and added 6 μM γ-32P-labeled 8-AzidoATP. After a 2-min incubation to allow nucleotide exchange, we crosslinked for 9 s using a UV hand lamp, added 1 mM ATP and 1 mM DTT to quench the reaction, and buffer exchanged into 1× KMEI plus 100 μM ATP and 1 mM DTT using a NAP5 column. We used the Arp2/3 for assays within 10 min of crosslinking. The same actin (including 7% pyrene–actin) was used for both ATP hydrolysis assays and correlative pyrene–fluorescence polymerization assays. We took ATPase time points by mixing 400 μl of the reaction mixture with premixed 400 μl of methanol and 100 μl of chloroform. We ran the precipitated protein on an SDS-PAGE gel to separate the subunits and quantified 32P labeling using a phosphorimager. For phosphate cleavage assays, we quenched timepoints into 1/10 volume of 26 M formic acid, spotted them on cellulose TLC plates, and separated the components in 0.4 M KH2PO4 (pH 3.4). We separately ran 32P-ATP and 32P-ATP treated with apyrase as standards to confirm the separation of 32P-ATP and cleaved 32P, respectively (unpublished data). As an alternative method of quantifying cleaved 32P, phosphomolybdate was extracted as described previously.

We used Acanthamoeba actin with 7% pyrene–actin to monitor actin polymerization by fluorescence and calculated polymerization rates from these traces.

We prepared filamentous actin as above and stabilized the filaments with stoichiometric Alexa-488 phalloidin. We mixed 2 μM Alexa-488 phalloidin–F-actin with 20 nM Arp2/3, passed the mixture twice through a 30-gauge needle to shear the filaments, and incubated at room temperature. Timepoints were taken by diluting 500-fold and rapidly applying to poly-L-lysine–coated coverslips for visualization. Filament images were quantified for length distribution and branch frequency by a custom MATLAB routine.
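The authors' quantification used a custom MATLAB routine that is not reproduced here. As a rough illustration of the two summary statistics it computed (length distribution and branch frequency), a minimal Python equivalent might look like the following, with entirely hypothetical measurements.

```python
# Minimal sketch of the filament quantification step: length distribution
# and branch frequency. Not the authors' MATLAB routine; measurements are
# hypothetical.

from statistics import mean, median

# (filament length in micrometers, number of branches on that filament)
filaments = [(4.2, 1), (7.8, 3), (2.5, 0), (5.1, 2), (9.0, 4)]

lengths = [l for l, _ in filaments]
total_branches = sum(b for _, b in filaments)

print(f"mean length     : {mean(lengths):.2f} um")
print(f"median length   : {median(lengths):.2f} um")
print(f"branch frequency: {total_branches / sum(lengths):.3f} branches/um")
```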
With the rapid expansion of scientific research, the ability to effectively find or integrate new domain knowledge in the sciences is proving increasingly difficult. Efforts to improve and speed up scientific discovery are being explored on a number of fronts. However, much of this work is based on traditional search and retrieval approaches, and the bibliographic citation presentation format remains unchanged.

Case study.

Based on how scientists conduct research and read the literature, the Telemakus KnowledgeBase System brings together three innovations in analyzing, displaying and summarizing research reports across a domain: (1) research report schema, a document surrogate of extracted research methods and findings presented in a consistent and structured schema format which mimics the research process itself and provides a high-level surrogate to facilitate searching and rapid review of retrieved documents; (2) research findings, used to index the documents, allowing searchers to request, for example, research studies which have studied the relationship between neoplasms and vitamin E; and (3) visual exploration interface of linked relationships for interactive querying of research findings across the knowledgebase and graphical displays of what is known as well as, through gaps in the map, what is yet to be tested. The rationale and system architecture are described and plans for the future are discussed.

The Telemakus KnowledgeBase System provides flexible new tools for creating knowledgebases to facilitate retrieval and review of scientific research reports. In formalizing the representation of the research methods and results of scientific reports, Telemakus offers a potential strategy to enhance the scientific discovery process. While other research has demonstrated that aggregating and analyzing research findings across domains augments knowledge discovery, the Telemakus system is unique in combining document surrogates with interactive concept maps of linked relationships across groups of research reports.

An unfortunate consequence of specialization in the sciences is poor communication across research domains – which can hamper the knowledge discovery process. Research findings in one area may be pertinent to another, researchers may be unaware of relevant work by others that could be integrated into their work, and important findings just outside a researcher's focus can be overlooked. Compounding this problem is the difficulty of keeping current with new research findings that continue to grow at an exponential rate.

Reliance on keywords and/or subject indexing to find relevant literature limits the researcher's ability to precisely search for and locate specific research findings. For example, a typical database query to locate all research articles reporting a statistically significant relationship between caloric restriction and cancer would retrieve articles reporting both concepts as represented by the indexing and keyword search – but not necessarily linked together as a research finding, without information regarding the reported statistical significance of the finding, and, perhaps most importantly, lacking representation of the linkages among the retrieved document sets.

This lack of "interactivity" among retrieved citations is a critical limitation of traditional search and retrieval systems.
As stated by Swanson (1986) in his examination of "mutually isolated literatures":

"Knowledge can be public, yet undiscovered, if independently created fragments are logically related but never retrieved, brought together, and interpreted [..] This essential incompleteness of search and retrieval therefore makes possible, and plausible, the existence of undiscovered public knowledge."

In addition to this limitation of search and retrieval, there are questions about representing a set of documents: What format or display of the retrieval set most enhances users' ability to identify which documents need to be examined in more detail? How can users navigate across document sets to enhance the discovery process? The bibliographic citation format is used by virtually all bibliographic databases today to report the results of database searches. However, it does not provide a way for the user to quickly review retrieved results for research methods and findings or to quickly view the relationships among the documents in the document set. Abstracts, even structured abstracts, simply do not provide a format conducive to rapid review of retrieved citations. In fact, the bibliographic citation format itself has changed little over the past two hundred years – even though it does not present an accurate representation of either the research methods or the research findings in a document.

In spite of great improvements in document retrieval over the past twenty years, most information systems developed to promote scientific discovery remain based on traditional search and retrieval approaches.

A comprehensive approach to these challenges is the goal of the Telemakus research program. Telemakus was named for the son of Odysseus who went searching for his father, the legendary Greek hero of Troy. Similarly, the Telemakus research program is developing approaches and tools for searching, knowledge discovery and mapping domain knowledge. The overall vision is to enhance the knowledge discovery process through retrieval, visual and interactive interfaces and tools. In close collaboration with researchers in the biology of aging, a working knowledgebase system has been designed to present aggregated citation information and research methods and findings for display in a conceptual schema.

The Telemakus KnowledgeBase System provides the user with both a macro- and micro-view of domain knowledge. The macro-view facilitates identification of patterns – both expected and unexpected occurrences of relationships among research concepts – and permits visualization and dynamic navigation of scientific domains. The resulting maps are analogous to the citation mapping work done by Small but, instead of citation linkages, are built on the research findings reported within the documents.

This article describes the theories underlying the Telemakus KnowledgeBase System, provides an overview of its implementation, reports initial user feedback and explores future directions. The Telemakus system builds on prior research in the areas of: (1) schema theory, (2) concept representation and (3) information visualization.

Schemas are generalized mental models that provide a guide for structuring the process of production and comprehension of texts: "...at the simplest level, a schema is a description of a complex object, situation, process or structure. It is a collection of knowledge related to the concept."
According to schema theory:

"The crucial fact is that the cognitive constraints on information processing which require the formation of semantic macro-structures and which organize acts and speech acts in global units, at the same time have social implications: they determine how individuals wish, decide, intend and plan, execute and control, "see" and understand their own and others' speaking and acting in the social context. Without them the individual would be lost among a myriad of detailed incoherent pieces of visual, actional and prepositional information."

Research on the application of schema theory to scientific research includes the schematic representation of psychological reports and clinical studies.

Understanding a written text is a process of fitting it into a larger schema known to the reader as part of their previous knowledge about the world. It is reasonable to expect that presenting written texts in familiar formats can enhance and potentially speed up an individual's ability to review and analyze large document sets. Fuller investigated this proposition for scientific research reports.

The predictability provided by schemas also applies to a document's metastructure. For example, Dillon has proposed that readers use a document's predictable shape to locate information within it.

A second core component of the Telemakus system is based on concept representation. Concept representation is an important component in accurately representing facts from the document. Characterizing the location of concepts in a scientific document can greatly facilitate accurate document representation.

Indexing a document – using a vocabulary or thesaurus of terms to represent the document – is a standard method employed to improve retrieval of relevant documents. Yet traditional approaches to indexing fall short of true document representation: words drawn from the abstract, title or full text of the document may be suggestive of the content but are not truly representative of the methods and research findings. The indexing literature is replete with studies documenting interindexer inconsistency, even among experienced professionals using familiar, well-documented systems.

"The information retrieval (IR) problem can be described as a quest to find the set of relevant information objects corresponding to a given information need, represented by a query Q. The assumption is that the query Q is a good description of the information need N. An often used premise in IR is the following: if a given document D is about the request Q, then there is a high likelihood that D will be relevant with respect to the associated information need. Thus the information retrieval problem is reduced to deciding the aboutness relation between documents and queries."

Another problem with current indexing practice lies in the way the unique structure of the information elements in the document is obscured. Scientific research reports have a highly predictable structure, with an introduction, methods, results and conclusion. Concepts mentioned in the introduction or conclusion section of a scientific article may not be the primary focus of the research described within the document. For example, the Introduction may include discussion of research among several animal models whereas the target of the research study itself is a specific breed of mouse. However, current indexing processes (whether human or automated) rarely discriminate between locations of concepts in the document for indexing purposes.
In addition, index terms do not represent the connections between the various elements in the document; thus, a significant amount of critical information for the scientist is lost.

For example, it is not possible to unambiguously retrieve citations from PubMed® or any other bibliographic database today that will answer the question: "Has anyone ever published data that supports a connection between cancer and caloric restriction? If so, what was the intervention, what type of experiments were done and what were the findings?" A successful response to a query of this type is extremely difficult or impossible in traditional information retrieval systems because: "[..] conventional IR systems that employ isolated term assignments seem inadequate for queries which are specific and empirical in nature. If, on the other hand, retrieval systems provide a link to represent the relationships between the variables of interest as reported in the documents, queries [..] would be better answered. That is, precision might be enhanced for specific and empirical queries when the relationships between the index terms were specified in retrieval systems."

In other words, the researcher asking the questions above can retrieve a set of citations that contain both topics but still must go through the full text of each document to determine if the research specifically answers the question.

Several research studies have explored the utility of relationships captured from data tables and figures in scientific research studies, as described by Fuller et al. The importance of data tables for expert decision-making was underscored by Malogolowkin et al., who found that cancer researchers rely on ideas presented in numerical displays in published research studies for much of their design of new research protocols. Oh [29,30] investigated the retrieval value of such relationships.

Identifying semantic relationships in text involves looking for certain linguistic patterns in the text that indicate the presence of a particular relationship (or research finding), using pattern-matching to identify the segments of the text or the parts of the sentence that match with each pattern: "If semantic relationships can be identified accurately in the text, retrieval results can be improved by eliminating documents containing the required concepts but not the desired relationships between the concepts."

The third component of the Telemakus system is based on visual mapping of reported research findings.

As previously mentioned, indexing strategies rely on "isolated term assignments." This approach leads to the loss of two important sources of information: (1) intra-document information, i.e., the research relationships studied and tested, and (2) cross-document information, which captures and links research relationships across groups of documents or domains. This loss is the result of breaking apart the context of clearly linked research findings in the data tables and figures – concepts typically linked together as the x-y axes of the tables and graphs.

Once the research relationships have been extracted, concept mapping, a means of spatially representing knowledge in a visual format, provides a potential solution to the challenge of maintaining the inter-relationships between documents and reported research findings. Spatial representations can assist in understanding conceptual relationships across a domain.
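Before turning to visualization, the relationship-indexing idea quoted above can be made concrete with a toy query. The sketch below stores research findings as concept pairs and retrieves only documents whose data tables actually link two concepts. The schema and rows are hypothetical illustrations, and sqlite stands in for the PostgreSQL database described later; this is not the actual Telemakus design.

```python
# Toy illustration of indexing by research findings (concept pairs) rather
# than isolated terms. Schema and rows are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE finding (
    doc_id TEXT, concept_a TEXT, concept_b TEXT, significant INTEGER)""")
con.executemany("INSERT INTO finding VALUES (?, ?, ?, ?)", [
    ("pmid:0001", "caloric restriction", "neoplasms", 1),
    ("pmid:0002", "caloric restriction", "body weight", 1),
    ("pmid:0003", "vitamin E", "neoplasms", 0),
])

# Retrieve only documents whose data tables link the two concepts as a
# tested finding - not documents that merely mention both terms.
rows = con.execute("""
    SELECT doc_id, significant FROM finding
    WHERE concept_a = 'caloric restriction' AND concept_b = 'neoplasms'
""").fetchall()
for doc_id, significant in rows:
    print(doc_id, "statistically significant" if significant else "ns")
```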
Spatial representations can also assist in identifying previously overlooked potential research connections.

Numerous approaches to visualizing an information retrieval space have been explored (e.g., [32,33]). While a review of information visualization strategies is outside the scope of this paper, there is a growing body of work related to mapping metaphors and visualizing large document sets and database search results to provide the user with the ability to visualize relationships among documents and their contents [39].

Concept mapping represents knowledge graphically through networks of ideas. Such networks consist of nodes (points) and links (arcs/edges). Nodes represent concepts and links represent connections between concepts. Concept mapping has been used for a variety of purposes, including to communicate complicated ideas and, as in the Telemakus system, to demonstrate connections among research findings.

How might one apply the theories previously described in developing a comprehensive "real world" information retrieval and knowledge discovery system? As reviewed in the previous section, the Telemakus system is built on and extends prior research in the areas of concept representation, schema theory and information visualization. Work on components of what has become the Telemakus system has been underway for many years, with a particular emphasis on the importance and utility of relationships extracted from data tables and figures [42-44].

Based on how scientists use and want to use the research literature, Telemakus brings together three innovations in analyzing, displaying and summarizing research reports across a domain:

1. Research Report Schema: Research methods and findings are extracted and presented in a consistent, coherent and structured schema format which mimics the research process itself and provides a high-level research report surrogate to facilitate searching as well as rapid review of retrieved documents.

2. Research Findings extracted from data tables and figures are used to index the documents, allowing searchers to request research studies which report a relationship between two concepts of interest.

3. Visual Exploration Interface provides a dynamic map of extracted research findings to graphically display what is known as well as, through gaps in the map, what is yet to be tested.

The Telemakus system consists of a database, research report schema and tools to create relationship maps among concepts across documents. The research report schema serves as a surrogate for the study, methods and research findings for each document as well as providing an interactive search interface. The schematic representations include standard bibliographic information, information about the research design and methods and, most importantly, research findings derived from data tables and figures.

The elements extracted by the Telemakus system from full-text documents are listed in Table . The system uses the Unified Medical Language System® (UMLS®) Metathesaurus® as the basis for creating a controlled vocabulary. The UMLS Metathesaurus is a rich database of information on concepts that appear in one or more of a number of different controlled vocabularies and classifications used in the field of biomedicine. It provides a uniform, integrated distribution format of over 95 biomedical vocabularies and classifications and contains syntactic information.
All Metathesaurus concepts are assigned to specific types or categories – e.g., "Disease or Syndrome," "Virus" – and the Semantic Network contains information about the permissible relationships among these types – e.g., "Virus" causes "Disease or Syndrome."

The thesauri are reviewed (curated by expert indexers) in order to create a consistent controlled vocabulary structure, as indicated in Table . At present, data extraction utilizes systems with both manual and automated processes. An evolving thesauri-building and revising approach is an important component of the Telemakus system, ensuring that vocabulary identification and management reflect the specialized needs of the knowledge domain as new research concepts are identified and reported.

The knowledgebase construction process begins with an Internet search of a bibliographic database by the researchers. Database elements are extracted and verified against the relevant thesaurus. As new concepts are identified, the UMLS is checked for the preferred term, which is added to the appropriate Telemakus thesaurus – along with synonyms, narrower and broader terms. Concentrating on the legends from data tables and figures focuses the extraction process and reduces the background noise of the full-text document, making the process tractable. In general, the information content of data tables and figures can be broken into two types: "facts" and "findings." Facts include reporting experimental design and comparative characteristics of animals in the study group. Findings are the results of the study (the research findings). Research findings are extracted from each of the "findings" data tables in a process described in Figure .

A current focus is the application of natural language processing (NLP) techniques to assist in automating the concept extraction process. MetaMap, a program developed by the National Library of Medicine® (NLM®), is being tested as a means of automatically parsing the legends from the data tables and figures to identify preferred UMLS concepts for addition to the Telemakus thesauri. MetaMap maps arbitrary text to concepts in the UMLS Metathesaurus; or, equivalently, it discovers Metathesaurus concepts in text. With this software, text is processed through a series of modules. First it is parsed into components including sentences, paragraphs, phrases, lexical elements and tokens. Variants are generated from the resulting phrases. Candidate concepts from the UMLS Metathesaurus are retrieved and evaluated against the phrases. The best of the candidates are organized into a final mapping in such a way as to best cover the text.

The Telemakus system architecture centers on: a relational database; a set of tools used to populate the knowledgebase with data extracted from bibliographic databases and full-text research reports; and several server-side tools and programs responsible for delivering the content of the database to the public via the WWW. The entire system is built from open-source components, leveraging standard protocols and tools whenever possible.

The document processing system is initiated by an analyst who runs, reviews and edits as necessary the extractions from the document being processed. It currently consists of a number of discrete phases to download, extract and analyze each document.
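The MetaMap pipeline just described (parse, generate variants, retrieve candidates, assemble a best-covering mapping) can be caricatured in a few lines. The sketch below is emphatically not the NLM MetaMap program or its API; the mini-thesaurus and the greedy longest-match rule are illustrative assumptions only.

```python
# Toy longest-match concept mapper in the spirit of the MetaMap pipeline
# described above. The mini-thesaurus and matching rule are illustrative
# assumptions; the real MetaMap is far more sophisticated.

THESAURUS = {
    ("caloric", "restriction"): "Caloric Restriction",
    ("neoplasms",): "Neoplasms",
    ("tumor",): "Neoplasms",  # synonym variant maps to the preferred term
    ("natural", "killer", "cells"): "Killer Cells, Natural",
}

def map_concepts(legend: str):
    tokens = legend.lower().replace(",", " ").split()
    found, i = [], 0
    while i < len(tokens):
        # Greedy longest match against thesaurus entries starting at token i.
        for n in range(len(tokens) - i, 0, -1):
            key = tuple(tokens[i:i + n])
            if key in THESAURUS:
                found.append(THESAURUS[key])
                i += n
                break
        else:
            i += 1
    return found

print(map_concepts("Effect of caloric restriction on tumor incidence"))
# -> ['Caloric Restriction', 'Neoplasms']
```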
The document processing services are built primarily in Java, running behind Tomcat and Apache, and are accessed by the analyst through the browser.

For the public Telemakus website interface, a number of open-source solutions have been selected and configured. An Apache web server intercepts all requests and delegates them to surrogate processes dedicated to each respective task. For requests to display data from the database, the request is delegated to Zope, a content management service. This typically includes running SQL queries against a PostgreSQL database and rendering the results in the conceptual schema that serves as a surrogate for each document. For tasks beyond simple queries and HTML requests, a Java Servlet™ is employed. As plain HTML is insufficient to effectively display and interact with the relationship map, a Java™ applet, TouchGraph, is used.

TouchGraph is an open-source concept-mapping tool for creating and navigating links between information sources. The tool was chosen for Telemakus because of its flexibility, customizing capabilities, high-quality source code and compatibility with most browsers and operating systems (OS). The TouchGraph visualization package serializes maps to and from XML. By using Java, HTTP and XML, TouchGraph makes it easy to dynamically feed content to generate interactive nodes-and-edges maps; a sketch of such a feed appears below.

The research report schema also serves as a convenient interface for searching for related research concepts, offering a rapid way of following research connections through the database. For instance, clicking on "killer cells, natural – ad libitum" would retrieve additional articles that present data tables linking those two concepts. The "map it" function, at the bottom of the retrieval set, provides an interactive map of the linked research findings.

The first completed Telemakus knowledgebase focuses on caloric restriction in aging and is freely available on the Web. Caloric restriction was an ideal starting point for Telemakus because it is an important and rapidly expanding specialized area of the biology of aging that is also highly interdisciplinary. Telemakus is a component of the Science of Aging (SAGE) project funded by the Ellison Medical Foundation. Other SAGE partners include the American Association for the Advancement of Science and HighWire Press, Stanford University (see the SAGE Knowledge Environment web site).

Formal usability testing of Telemakus is underway and will be the subject of a future article. Because a major goal of the Telemakus research program is to study scientists' approaches and preferences for accessing and using the scientific literature in order to create models and approaches for user-centered knowledgebases, researchers have been involved in the iterative design and testing of the system from its inception. The primary goals of this evaluation are to:

1. Determine scientists' preferences for working with the research literature.
2. Model preferred features based on those preferences.
3. Test the completeness of schema elements and structure as a document surrogate.
4. Experiment with and identify optimal visual representations to meet user needs.
5. Iteratively review/evaluate/test for improved performance in response to user feedback.
6. Identify domain(s) for future knowledgebase creation.
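As promised above, here is a minimal sketch of feeding nodes-and-edges content as XML, in the spirit of the TouchGraph map feed. The element and attribute names are hypothetical; the actual TouchGraph XML schema is not reproduced here.

```python
# Minimal sketch of serializing a findings graph to nodes-and-edges XML for
# an applet-style viewer. Element/attribute names are hypothetical, not the
# actual TouchGraph schema.
import xml.etree.ElementTree as ET

findings = [
    ("caloric restriction", "neoplasms"),
    ("caloric restriction", "body weight"),
    ("killer cells, natural", "ad libitum"),
]

graph = ET.Element("graph")
nodes = ET.SubElement(graph, "nodes")
edges = ET.SubElement(graph, "edges")

seen = {}
for a, b in findings:
    for concept in (a, b):
        if concept not in seen:
            seen[concept] = str(len(seen))
            ET.SubElement(nodes, "node", id=seen[concept], label=concept)
    ET.SubElement(edges, "edge", source=seen[a], target=seen[b])

print(ET.tostring(graph, encoding="unicode"))
```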
In general, response to each successive iteration of Telemakus has been positive and included constructive feedback for system enhancements and expansions. User feedback affirms that retrieval based on research findings is a unique and highly desirable core function. Further, the Telemakus schematic document surrogate has been enthusiastically received as a major improvement over the traditional citation format with abstract. As one researcher stated, "The strengths of Telemakus are doing what PubMed does not do, which is to give an outline of the main points and to allow searching off the figure/table legends, organisms/sources and outcome fields."

Additional feedback relates to the labeling of concept relationships as "statistically significant." Some researchers are interested in knowing the level of reported significance and asked for more detailed labeling to document this. In addition, there have been requests to consider labeling the relationships. Early testing of the mapping function resulted in the observation that color-blind individuals would not be able to see lines that were labeled with red or green, which led to a change in the mapping color scheme.

There have been many additional suggestions for improving the visualization, including the addition of three-dimensional representations and allowing more user control of the presentation itself. Some researchers have expressed interest in being able to build maps based on the date a particular research finding was reported. This functionality would create time-sequence maps that show the progression of research over time and, perhaps, will demonstrate paths of research that have been discarded prematurely and may be worth re-visiting. A number of researchers have indicated the utility of this approach for teaching purposes – for a student to quickly get a sense of the research "facts" in a domain. There have also been requests for tools to support downloading subsets of the knowledgebases, as well as tools to allow individuals to manipulate maps and add their own research findings and ideas to the concept maps.

While initial Telemakus development has focused on the research literature related to caloric restriction and the biology of aging, the goal is to expand into additional domains. For example, tables of genetic sequence information, which display reported relationships between gene sequences and diseases, are a natural area of expansion for Telemakus. There is great potential for building linkages between Telemakus knowledgebases and other factual databases, e.g., NCBI Entrez resources. In addition, scientists from domains beyond biomedicine have indicated that a customized schematic representation of research findings could be very useful in their domains.

Speeding up document processing so Telemakus can easily and efficiently scale for comprehensive treatment of domains is a key priority. As discussed previously, the UMLS Metathesaurus resources are proving extremely useful. In addition, the Semantic Network will be tested for enhancing searching and visualization of research findings.

We will continue to utilize an iterative development method so that results of usability evaluation can immediately inform development of additional features.
In particular, we want to test our hypothesis that the mapping feature will promote knowledge discovery by showing graphically what is known as well as, through lack of links, what research linkages have not yet been tested.

Since basic sciences researchers tend to initially focus on the data found within a report's tables and figures, extracting the headings and providing linked research concepts mimics a researcher's traditional approach to reading the research literature [48].

One of the long-term goals of the Telemakus system is not to build knowledgebases "ad infinitum" but rather to create flexible tools for users to quickly and efficiently locate and visualize aggregate research findings from any domain which reports research findings as data. As more and more full-text research reports become available on the Internet, we believe the tools we are developing will provide an important approach for focusing on research findings and providing visual cues for quick review and assimilation.

The Telemakus KnowledgeBase System builds on a good deal of prior research in a variety of domains. It provides a flexible new approach for creating knowledgebases to facilitate retrieval and review of scientific research reports. In formalizing the representation of the research methods and results of scientific reports, Telemakus offers a potential strategy to enhance the scientific discovery process. While other research has demonstrated that aggregating and analyzing research findings across domains augments knowledge discovery, the Telemakus system is unique in combining informative document representations with interactive concept maps of linked relationships across groups of research reports. Telemakus presents a novel approach to creating useful and precise document surrogates and may re-conceptualize the way we currently represent, retrieve and assimilate research findings from the published literature.

Competing interests: none declared.

SF conceived the study and contributed to its design, coordination and evaluation. DR and PB participated in the design of the study. DR led the overall coordination and drafted the manuscript. PB led the technical implementation. GMM contributed to the design, coordination and evaluation. All authors read and approved the final manuscript.
Symptomatic hypogammaglobulinemia in infancy and childhood (SHIC) may be an early manifestation of a primary immunodeficiency or a maturational delay in the normal production of immunoglobulins (Ig). We aimed to evaluate the natural course of SHIC and correlate in vitro lymphoproliferative and secretory responses with recovery of immunoglobulin values and clinical resolution.

Children older than 1 year of age, referred to our specialist clinic because of recurrent infections and serum immunoglobulin (Ig) levels 2 SD below the mean for age, were followed for a period of 8 years. Patients with any known familial, clinical or laboratory evidence of cellular immunodeficiency or other immunodeficiency syndromes were excluded from this cohort. Evaluation at 6- to 12-month intervals continued up to 1 year after resolution of symptoms. In a subgroup of patients, in vitro lymphocyte proliferation and Ig secretion in response to mitogens were measured.

32 children – 24 (75%) males and 8 (25%) females, mean age 3.4 years – fulfilled the inclusion criteria. Clinical presentation: ENT infections 69%, respiratory 81%, diarrhea 12.5%. During follow-up, 17 (53%) normalized serum Ig levels and were diagnosed as transient hypogammaglobulinemia of infancy (THGI). THGI patients did not differ clinically or demographically from non-transient patients, both groups having a benign clinical outcome. In vitro Ig secretory responses were lower in hypogammaglobulinemic children than in normal children and did not normalize concomitantly with serum Igs in THGI patients.

The majority of children with SHIC in the first decade of life have THGI. Resolution of symptoms as well as normalization of Ig values may be delayed, but overall the clinical outcome is good and the clinical course benign.

Pediatric patients with "recurrent infections" within our area are referred to the pediatric immunology clinic at the Kaplan Medical Center. Few fulfill the clinical criteria of the immune deficiency "red flags" (Table ). Infections are typically caused by common pathogens such as Streptococcus pneumoniae, Haemophilus influenzae and Staphylococcus aureus, but they may be of unusual severity, persistence, or frequency.

In the last 10 years, tremendous advances in the fields of molecular medicine and genetics have made possible the definitive diagnosis of most combined immunodeficiency patients, agammaglobulinemia patients and clinical syndrome-associated immunodeficiency patients, on the basis of a recognized genetic aberration leading to a protein product dysfunction [4]. Nevertheless, many symptomatic children remain without a definitive diagnosis.

In this study we aimed to evaluate the natural course of disease in symptomatic hypogammaglobulinemia of infancy and correlate in vitro lymphoproliferative and secretory responses to mitogens in this population with recovery of immunoglobulin values and clinical resolution.

Children more than 1 year of age with recurrent infections – defined as more than three episodes of acute otitis media, and/or more than one episode of acute sinusitis, and/or more than one episode of pneumonia, or the presence of a severe deep-seated infection within the last 6 months – or fulfillment of one of the "red flags" of immunodeficiency (see Table ) were eligible for inclusion. All procedures were performed according to the accepted ethical standards of the Institutional Review Board of Kaplan Medical Center. The parents of all the children were informed accordingly and gave their permission for participation in the study and blood sampling.
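The inclusion rule just described is essentially a small predicate, and writing it out makes the and/or structure explicit. In the sketch below, the field names are hypothetical and the "red flags" criteria of the referenced table are reduced to a single boolean; the separate serum Ig criterion is not modeled.

```python
# The study's "recurrent infections" inclusion rule, encoded as a predicate.
# Field names are hypothetical; the "red flags" of the referenced table are
# collapsed to one boolean, and the serum Ig criterion is handled elsewhere.

def meets_inclusion(age_years: float,
                    otitis_episodes_6mo: int,
                    sinusitis_episodes_6mo: int,
                    pneumonia_episodes_6mo: int,
                    deep_seated_infection_6mo: bool,
                    any_red_flag: bool) -> bool:
    if age_years <= 1:
        return False  # only children more than 1 year of age
    recurrent = (otitis_episodes_6mo > 3
                 or sinusitis_episodes_6mo > 1
                 or pneumonia_episodes_6mo > 1
                 or deep_seated_infection_6mo)
    return recurrent or any_red_flag

# Example: a 3-year-old with four episodes of acute otitis media.
print(meets_inclusion(3.0, 4, 0, 0, False, False))  # True
```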
Total serum Ig levels were measured by nephelometry, and serum IgG subclasses were assayed with a commercial immunodiffusion kit. Specific antibody production was not evaluated.

Peripheral blood mononuclear cells (PBMC) were isolated from heparinized venous blood of patients and age-matched donors on Ficoll-Isopaque gradients. Patient and normal donor cells were cultured in microtiter plates with culture medium: RPMI medium supplemented with 10% FCS, 10 mM Hepes buffer, 100 U/ml penicillin, 100 μg/ml streptomycin, 2 mM L-glutamine, and 100 μg/ml kanamycin (Sigma Israel), and were grown at 37°C with 7.5% CO2 in air. 5 × 10^5 cells were stimulated for four days with 0.01% w/v SAC, 2.5 μg/ml PWM, 20 μg/ml E. coli O55:B5 LPS or with 20 μg/ml PHA. The cells were then pulsed with 1 μCi/well of [3H]-thymidine and incorporated radioactivity was measured by a β scintillation counter. Proliferation was expressed as a stimulation index (SI).

In parallel, cell culture supernatant aliquots were harvested and Ig isotype concentrations were measured in the culture supernatants by a solid-phase immunoassay in Nunc-Immunoplate Maxisorp 96-well plates. The plates were coated with goat anti-human IgM, IgG or IgA antibodies. Biotinylated goat anti-human IgM, IgG or IgA and streptavidin-alkaline phosphatase were used for measurement of IgM, IgG and IgA respectively. The resulting yellow dye intensity was read by an ELISA reader. Dye units were converted to immunoglobulin concentrations by extrapolation from standard curves determined by using purified myeloma proteins of known concentration in every assay.

We used non-parametric tests to compare the means and a standard analysis of variance to compare between groups. Logistic regression analysis was used for comparison of the distribution of dichotomous values between the groups. Analysis was performed using SPSS for Windows ver 9.0.
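Two small computations recur in these methods: the stimulation index and the conversion of ELISA dye readings to Ig concentrations via a standard curve. The sketch below illustrates both. All numbers are hypothetical, and piecewise-linear interpolation is an assumed simplification of whatever curve fit was actually used.

```python
# Illustrative versions of two calculations from the methods: the
# stimulation index (SI) and standard-curve extrapolation of ELISA readings.
# All values are hypothetical.

def stimulation_index(cpm_stimulated: float, cpm_unstimulated: float) -> float:
    """SI = incorporated radioactivity with mitogen / without mitogen."""
    return cpm_stimulated / cpm_unstimulated

# Standard curve from purified myeloma protein of known concentration:
# (optical density, Ig concentration in ug/ml), sorted by OD.
standard_curve = [(0.05, 0.0), (0.20, 0.5), (0.55, 2.0), (1.10, 8.0)]

def od_to_concentration(od: float) -> float:
    """Piecewise-linear interpolation of an ELISA standard curve."""
    pts = standard_curve
    if od <= pts[0][0]:
        return pts[0][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if od <= x1:
            return y0 + (y1 - y0) * (od - x0) / (x1 - x0)
    return pts[-1][1]  # above the top standard: report the top value

print(f"SI = {stimulation_index(18500, 1200):.1f}")
print(f"IgG = {od_to_concentration(0.80):.2f} ug/ml")
```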
32 patients were included in the study, 24 males (75%) and 8 females (25%), with a mean age at diagnosis of 3.4 years (range 1.2–7.0). Clinical presentations included severe and recurrent ear-nose-throat (ENT) infections – 22 patients (69%); pneumonia, bronchopneumonia or severe, recurrent upper respiratory infections – 26 patients (81%); diarrhea – 4 patients (12.5%); and atopy-related complaints – 20 patients (63%). A positive family history of recurrent, unusual or severe infections was obtained in 7 patients (22%). Demographic and clinical data of the patient cohort are summarized in Table .

Out of the initial 32 patients, 17 (53%) spontaneously corrected their Ig abnormalities. This group included 15 boys (88%) and 2 girls. All defects were partial, no patient in this group showing a complete absence of a given Ig isotype. The clinical course was benign, with only 9.4% (3/32) of patients requiring IVIg. 38% (12/32) of patients received prolonged antibiotic prophylaxis, and resolution of clinical symptoms occurred in 84% of patients (14/17 in the THGI group and 13/15 in the non-corrected group). All calculated p values were non-significant for comparisons between the two groups.

Comparative analysis of serum Ig isotype levels at diagnosis showed no significant difference between the transient and non-transient groups.

In vitro lymphocyte proliferation and Ig secretion were measured in 9 patients (5 patients with THGI and 4 with non-corrected IGD) on one or more occasions. Lymphocyte proliferative responses to SAC, PWM and LPS showed no differences between the groups and no significant differences from childhood norms. The proliferative response to PHA was significantly increased (p < 0.005) after the correction of Ig abnormalities, overshooting normal controls (Fig. ).

Quantification of the in vitro immunoglobulin secretion in response to the various mitogens showed significant isotype- and mitogen-dependent variation. The IgM secretory response to PWM and LPS was low in hypogammaglobulinemic patients. After normalization of serum Ig values, the IgM response to LPS stimulation increased (see Table ). For both IgG and IgA, the response in normal children was significantly better than in either group of patients. The IgA secretion index to all 3 stimulatory mitogens was minimal in hypogammaglobulinemic patients, even after serum Ig correction, and differed significantly from age-matched controls (Table ).

Young patients with recurrent infections represent a sizeable portion of the daily practice of all primary care pediatricians and family physicians, with parents clamoring for a solution as lost daycare and school days accumulate. The physician is faced with the dilemma of when parental reassurance will suffice rather than initiating a costly immunological investigation. Often the presenting clinical signs and symptoms, as well as Ig levels in infants below the age of one year, are insufficient for an educated diagnosis. The present investigation was initiated in order to contribute additional understanding of how to differentiate cases of primary immune deficiency from those that are not.

We prospectively studied the outcome of SHIC in 32 patients during a period of 8 years, with a mean follow-up of 3.2 years. During this time more than half corrected their Ig abnormalities. The mean follow-up of the non-corrected IGD group is slightly shorter than that of the THGI group (2.5 years and 3.5 years respectively), which leads us to speculate that some patients in the "non-transient" group may eventually correct. This is consistent with previously published reports from Dalal et al. who, after a follow-up of 10 years, found that 70% of patients had complete resolution of their Ig abnormalities.

Interestingly, the majority of our patients were males (75%), and a higher proportion of males corrected their serum Ig. This finding of male preponderance is not uniquely ours. In the reports of Dalal et al., 24/35 resolved their tendency for recurrent infections irrespective of their Ig values. Only a minority of patients required any medical intervention: 38% received antibiotic prophylaxis and only 9% intravenous Ig replacement therapy. We observed no difference in the clinical presentation or follow-up of the transient group as compared to the non-corrected IGD patients, and resolution of the serum Ig abnormality did not cause a complete clinical remission in all patients. This may be due to a residual inability to mount an adequate antibody response to specific antigen challenge, data that have not been evaluated in our series. The impaired IgG and IgA in vitro secretory responses, seen in hypogammaglobulinemic patients even after serum Ig normalization, may be an expression of such an inability, which warrants further investigation.

Atopy was a prominent associated complaint in 21/37 (57%) of our patients, especially in comparison with the prevalence of asthma in this age group in Israel (7%) [23].

Though no clinical evidence of T-cell functional impairment was observed in our patients, previous reports [24-28] have suggested such impairment.

THGI is a relatively common cause of symptomatic hypogammaglobulinemia in infancy in our area.
Most children will spontaneously correct their Ig abnormalities during the first decade of life. Though tests of cellular or humoral stimulation indices are not as yet capable of differentiating the transient from the non-transient patients upon presentation, significant isotype- and mitogen-specific variability is evident. The relative preservation of the in vitro IgM secretory response and the lack of IgA/IgG response in patients with hypogammaglobulinemia argue for a delay in isotype switching as the molecular basis underlying the clinical entity of transient hypogammaglobulinemia of infancy.

The author(s) declare that they have no competing interests.

SHIC – Symptomatic Hypogammaglobulinemia in Infancy and Childhood
THGI – Transient Hypogammaglobulinemia of Infancy
Ig – Immunoglobulin
IGD – Immunoglobulin deficiency
SAC – Staphylococcus aureus Cowan I
PWM – Pokeweed mitogen
LPS – Lipopolysaccharide
PHA – Phytohemagglutinin
IVIg – Intravenous Immunoglobulin
ENT – Ear, nose & throat

MIK – carried out the patient care and follow-up, was responsible for the database organization and data analysis, and for manuscript coordination and writing
ZT – is one of the research initiators, carried out the patient care and follow-up and contributed to the manuscript writing
IA – carried out the in vitro cell proliferation and Ig secretion studies
RS – carried out the in vitro cell proliferation and Ig secretion studies
MS – carried out the patient care and follow-up, and was responsible for the database initiation
IZ – is one of the research initiators, carried out the laboratory evaluation and contributed to the manuscript writing
All authors read and approved the final manuscript.

The pre-publication history for this paper can be accessed here:
This review article combines four disparate observations about Neural Tube Defects (NTDs): the worldwide decline in birth incidence that began prior to prenatal diagnosis; family recurrence risks; the effect of prenatal diagnosis and termination of affected pregnancies; and the effect of folic acid. NTDs are due to many different causes.

Family studies suggest the recurrence risk for first-degree relatives of affected individuals is approximately 1 in 30. For second-degree relatives (the children of the mother's sisters and brothers) the risk is approximately 1 in 220.

The third aspect, prenatal diagnosis and termination of affected pregnancies, is one that should be discussed with all women in the reproductive age range and, more importantly, with patients who have a family history of an NTD. Folic acid taken orally on a daily basis has been shown to lower the occurrence and recurrence of NTDs in women's own offspring and in their relatives. The Medical Research Council trial was the first to demonstrate this effect, and subsequent studies of supplementation and diet have confirmed it [18].

When recent trends in the birth incidence of NTDs are reported, they focus additionally on the effect that folic acid has on the early second trimester prevalence of affected fetuses. As for the epidemiological studies noted above, these reports include varying types of cases; some report only "spina bifida", others "spina bifida" and anencephaly, and still others mention these two types and encephalocele with or without hydrocephalus. Some studies report only deaths due to complications in these groups of patients as stillborns, or deaths in the neonatal time period; other reports study all affected newborns, and still others cover selectively or spontaneously aborted fetuses. Those that include time intervals after the introduction of intrauterine diagnosis and selective termination do not take into consideration the variations in incidence at different gestational ages and at birth, whether stillborn or live.

Our data and this review clearly demonstrate the effects of intrauterine diagnosis and selective termination prior to the recommendation for supplementation and fortification of foodstuffs with folic acid. Because the reason for termination of a pregnancy is not reportable in our state and the USA, we cannot determine the effect of folic acid on the prevalence of myelomeningocele and anencephaly in first and early second trimester fetuses. Studies of the effect of folic acid in reducing the birth incidence in communities with a low incidence, and active prenatal diagnosis associated with termination of affected fetuses, require longer-term studies than published to date. The differences in data discussed above need to be considered if one is to evaluate the effect of prenatal diagnosis and elective termination as well as the effects of fortification or supplementation with folic acid. We recommend that these variables be discussed with women of reproductive age, particularly if they are relatives of a patient with an NTD. Regardless of the uncertainties, we recommend supplementation of the diet of women, beginning three months prior to an anticipated pregnancy. We recommend all women of childbearing age take at least 400 mcg of folic acid daily when they begin sexual activity. Relatives of a patient with an NTD should take 4.0 mg daily, beginning three months prior to conception.

The author declares that he has no competing interests.
Absenteeism due to communicable illness is a major problem encountered by North American elementary school children. Although handwashing is a proven infection control measure, barriers exist in the school environment which hinder compliance to this routine. Currently, alternative hand hygiene techniques are being considered, and one such technique is the use of antimicrobial rinse-free hand sanitizers.

A systematic review was conducted to examine the effectiveness of antimicrobial rinse-free hand sanitizer interventions in the elementary school setting. MEDLINE, EMBASE, Biological Abstracts, CINAHL, HealthSTAR and the Cochrane Controlled Trials Register were searched for both randomized and non-randomized controlled trials. Absenteeism due to communicable illness was the primary outcome variable.

Six eligible studies, two of which were randomized, were identified. The quality of reporting was low. Due to a large amount of heterogeneity and the low quality of reporting, no pooled estimates were calculated. There was a significant difference reported in favor of the intervention in all 5 published studies.

The available evidence for the effectiveness of antimicrobial rinse-free hand sanitizer in the school environment is of low quality. The results suggest that the strength of the benefit should be interpreted with caution. Given the potential to reduce student absenteeism, teacher absenteeism, school operating costs, healthcare costs and parental absenteeism, a well-designed and analyzed trial is needed to optimize this hand hygiene technique.

The elementary school environment is negatively impacted by outbreaks of disease-causing microorganisms [10,11]. Despite the scientifically proven evidence of the effectiveness of handwashing, and the increasing promotion of proper hand hygiene techniques, observational studies in school settings have indicated that handwashing practices are often lacking [13,14]. Guinan et al. (1997) reported that proper handwashing compliance, with soap and water, in school-aged children ranged from 8 to 28 percent. Reported reasons for the observed inadequacy in compliance included insufficient time during the day, and the use of substandard washing facilities in hard-to-access locations of the school environment [14].

In attempts to overcome the obstacles of routine handwashing in school environments, antimicrobial rinse-free hand sanitizers are being used as an alternative hand hygiene technique. The concern is that such programs may be carried out in the absence of evidence of effectiveness in the school environment. Thus, it is timely to review the evidence currently available for the effectiveness of antimicrobial rinse-free hand sanitizer programs in reducing absenteeism due to communicable illness. The aim of this systematic review was to determine whether antimicrobial rinse-free hand sanitizer interventions are effective in preventing illness-related absenteeism in elementary school children.

A detailed written protocol was prepared and reviewed in advance (the complete protocol can be obtained from the corresponding author).

A comprehensive search was conducted to identify all relevant studies regardless of publication status. Six electronic databases were searched for studies published in any language.
The databases included: Biological Abstracts (1990–May 2003), CINAHL (1982–2003), the Cochrane Controlled Trials Register (1981–2003), EMBASE (1980–May 2003), HealthSTAR (1975–May 2003), and MEDLINE (1966–May 2003). A detailed search strategy was developed for MEDLINE, and an iterative process was used to adapt the MEDLINE strategy to each of the other databases. Descriptions of the database search strategies are presented in Appendix 1.

The interventions of interest were those that administered antimicrobial rinse-free hand hygiene programs compared with a no-intervention or placebo treatment arm in a school setting. The outcome of interest was the comparison of the number of absences due to communicable illness in children who received the antimicrobial rinse-free hand hygiene intervention with the number of such absences in those who received a placebo or no intervention. We included cluster randomized controlled trials (RCTs) and cluster non-randomized controlled trials regardless of publication status.

All relevant citations, titles and abstracts, were imported into a reference database, where duplicates were manually removed; priority in downloading was given to MEDLINE. Reviews were excluded, but the bibliographies of such articles were examined for relevant studies. The screening was completed in an unblinded manner by one reviewer; there is inconclusive evidence that lack of blinding introduces bias into this process. Reports from 2002 and by Moher et al. (2003) indicate that the exclusion of trials in languages other than English (LOE) does not bias measures of effectiveness; however, both are cautionary, advocating language-inclusive search strategies [18,19].

Two reviewers independently abstracted data from all studies meeting the eligibility criteria, using the data collection form (Appendix 3); the exception was one abstract (Thompson 2004), from which EM independently abstracted the pertinent information. Two reviewers likewise independently assessed the quality of each of the included studies, again excluding the abstract by Thompson (2004). Data synthesis and analysis were performed in accordance with the Cochrane Reviewers' Handbook.

A flow diagram of the search results is illustrated in the Figure. After review of the full text of these studies, 21 were excluded for the following reasons: no outcomes of interest (n = 1), inappropriate population (n = 1), inappropriate interventions (n = 9), inappropriate study design (n = 3), irrelevant subject matter (n = 5), and review (n = 2). Thus, a total of 4 trials from the database search fulfilled the inclusion criteria [26,27,31,41]. Of the 6 studies ultimately included, 2 were crossover studies [26,27].

All six trials were conducted in the United States, 4 with reported industrial sponsorship, and were published between 2000 and 2004. The trials varied in size and geographic location, as well as in the intervention administered. White et al. (2001) and Dyer et al. (2000) provided each student with an alcohol-free instant hand sanitizer [26,41], while the remaining trials, including Guinan et al. (2002), Morton et al. (2004), and Thompson et al. (2004), provided each class with alcohol-based instant hand-sanitizer dispensers [15,16,31]; several of the trials paired the sanitizer with hand hygiene education. One trial (2001) reported a significant number of dropouts (857 of 1626 students did not complete the study), for which no explanations were offered. The quality of reporting of the 5 trials that were examined in detail was low.
Only one study was described as being randomized and double-blinded; however, it failed to describe a detailed and appropriate method of randomization, and allocation concealment was unclear. Dropouts and withdrawals were incompletely reported in four of the studies, and other quality criteria were addressed inconsistently.

In one trial (2000), the experimental group had a 20% (95% CI = 19–21%) reduction in absences due to communicable illness; the experimental group in the trial completed by Guinan et al. (2002) had a 49% (95% CI = 42–56%) reduction; White et al. (2001) demonstrated a 33% (95% CI = 17–45%) reduction in the experimental group; Dyer et al. (2000) found a 34% (95% CI = 10–50%) reduction in the first phase and a 56% (95% CI = 31–72%) reduction in the second phase; and Morton et al. (2004) reported a significant odds ratio for McNemar's test. All six studies varied in their definition of communicable illness-related absenteeism (see Table).

Many studies have examined the importance of preventing the transmission of infectious diseases in the school environment; one such study, completed by Cramer et al. (1999), indicated this to be of great concern for the parents of school-aged children.

Can the evidence from the six trials reported here be used to promote this type of program in elementary schools at the present time? This systematic review of antimicrobial rinse-free hand sanitizers for the prevention of illness-related absenteeism in elementary school children is, to the authors' awareness, the first review to assess this issue. Although randomized controlled trials are the study design least likely to provide biased estimates of effect, the nature of school-based interventions required the inclusion of both randomized and non-randomized cluster controlled trials.

Four of the six studies used an alcohol-based product, the other two a benzalkonium chloride-based disinfectant. The FDA in the United States has indicated that insufficient data exist to classify the latter compounds as safe and effective for use as antiseptic handwashes. They are also adversely affected by the presence of organic material such as food residues, which may be an issue in schools.

Several limitations were encountered in completing this review, the major one being the scarcity of high-quality studies. Additionally, although content experts, primary authors and industrial companies were contacted, no grey literature was found; the possible existence of unpublished non-significant trials should not be discounted. The validity of performing a quantitative synthesis was considered, but based on a qualitative inspection of heterogeneity and of the estimates of intervention effectiveness this was not deemed appropriate. Sources of heterogeneity included study designs, population characteristics, intervention characteristics, case definitions and primary outcome measures. Thus, sensitivity and subgroup analyses were not performed, and publication bias was not assessed quantitatively. Another limitation was that a single reviewer completed the broad screen of articles and reviewed the two citations identified between September 2003 and the present time. This may have biased the results; however, it is believed that this reviewer would, if anything, overestimate the citations to be included.
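For orientation, percent reductions with 95% confidence intervals of the kind quoted above can be derived from raw absence counts using a normal approximation on the log rate ratio. The sketch below is purely illustrative: the counts are hypothetical, and it ignores the cluster-randomized structure of these trials, which a proper analysis would have to adjust for.

```python
from math import exp, log, sqrt

def percent_reduction(cases_trt, days_trt, cases_ctl, days_ctl, z=1.96):
    """Percent reduction in the absence rate, with an approximate 95% CI."""
    rr = (cases_trt / days_trt) / (cases_ctl / days_ctl)
    se = sqrt(1 / cases_trt + 1 / cases_ctl)      # SE of log(rate ratio)
    lo, hi = exp(log(rr) - z * se), exp(log(rr) + z * se)
    # Reduction = 1 - RR; the CI bounds swap on conversion.
    return (1 - rr) * 100, (1 - hi) * 100, (1 - lo) * 100

# e.g. percent_reduction(240, 30000, 360, 30000)
# -> roughly a 33% reduction (95% CI about 21-43%), hypothetical numbers
```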
In the wake of the recent worldwide emergence of Severe Acute Respiratory Syndrome (SARS), the importance of proper hand hygiene has been brought into the spotlight. Comprehensive hand hygiene programs with occasional reinforcement are an inexpensive intervention that can potentially serve a broad population with minimal adverse effects. Future research should concentrate on developing study protocols that are scientifically sound with regard to randomization, blinding, allocation concealment and other factors that minimize or avoid bias. Hand hygiene programs are the most important infection control measure in the school environment and have potentially large public health and economic implications; their design, implementation and analysis should therefore be carried out with rigour.

The author(s) declare that they have no competing interests.

EM conceived and designed the study as part of a graduate course in systematic reviews, reviewed trials for inclusion, abstracted data, participated in data analysis, and drafted and revised the manuscript. NLS participated in the initial study design, reviewed trials for inclusion, abstracted data, participated in data analysis, and revised the manuscript. Both EM and NLS agreed upon the final revision.

Additional file 1 – Syntax for searches. Additional file 2 – List of corresponding authors, content experts and industrial companies contacted. Additional file 3 – Data collection form.
Human infections with Sin Nombre virus (SNV) and related New World hantaviruses often lead to hantavirus cardiopulmonary syndrome (HCPS), a sometimes fatal illness. Lungs of patients who die from HCPS exhibit cytokine-producing mononuclear infiltrates and pronounced pulmonary inflammation. Deer mice (Peromyscus maniculatus) are the principal natural hosts of SNV, in which the virus establishes life-long persistence without conspicuous pathology. Little is known about the mechanisms SNV employs to evade the immune response of deer mice, and experimental examination of this question has been difficult because of a lack of methodologies for examining such responses during infection. One such deficiency is our inability to characterize T cell responses, because susceptible syngeneic deer mice are not available.

To solve this problem, we have developed an in vitro method of expanding and generating competent antigen presenting cells (APC) from deer mouse bone marrow using commercially available house mouse (Mus musculus) granulocyte-macrophage colony stimulating factor (GM-CSF). These cells are capable of processing and presenting soluble protein to antigen-specific autologous helper T cells in vitro. Inclusion of antigen-specific deer mouse antibody augments T cell stimulation, presumably through Fc receptor-mediated endocytosis.

The use of these APC has allowed us to dramatically expand deer mouse helper T cells in culture and should permit extensive characterization of T cell epitopes. Considering the evolutionary divergence between deer mice and house mice, it is probable that this method will be useful to other investigators using unconventional models of rodent-borne diseases.

Hantaviruses (family Bunyaviridae) are rodent-borne and can cause hemorrhagic fever with renal syndrome (HFRS) or hantavirus cardiopulmonary syndrome (HCPS) [1-4]. Patients afflicted with HCPS exhibit pronounced pulmonary inflammation due to capillary leak syndrome, with the consequent hypotension often leading to rapid decline and death. Deer mice (Peromyscus maniculatus) are the principal reservoir host of SNV.

Recent advances in hematopoietic stem cell research have identified an important role for granulocyte-macrophage colony stimulating factor (GM-CSF) in the expansion and maturation of bone marrow cells into competent APC [38-40]. Because house mouse GM-CSF is commercially available, we examined whether it could be used to generate competent APC from deer mouse bone marrow.

We used RACE to obtain the complete 5' end of the GM-CSF cDNA. This sequence was translated using the default translation table within MacVector. The polypeptide is predicted to have a 25-residue signal peptide, based upon orthologous sequences from other species [41-45] (Figure).

Deer mouse bone marrow cultures contained mostly cells that appeared dead or dying after 24 hours in culture with GM-CSF. However, at 48 hours clusters of cells were apparent, while control wells without GM-CSF had fewer live cells than at 24 hours. By day 3, adherent stromal cell foci were conspicuous, while semiadherent and nonadherent cells became more evident, and these remained the prominent cells for the duration of culture. Day 12 bone marrow cells incubated for an additional 48 hours were large, ranging from 12 to 18 μm in diameter, and possessed macropinocytic vesicles and processes (Figure).

Day 8 deer mouse bone marrow cells were cultured with various concentrations of house mouse GM-CSF.
Two days later, proliferation was assessed and the concentration giving a maximal response was determined (Figure).

BM-APC and T cells were examined for the expression of orthologous I-Eβ and TCRβC transcripts, respectively, by RT-PCR (Figure).

Deer mice were immunized with keyhole limpet hemocyanin (KLH), and 10 days later the lymph nodes, spleens and bone marrow were processed for in vitro expansion of polyclonal T cells (lymph node cells), while the bone marrow cells and splenocytes were frozen. Sera were tested for antibodies to KLH by ELISA, and in each deer mouse tested the titer was greater than or equal to 8,000 (data not shown). For recall proliferation, KLH, in vitro-propagated T cells (14 days with IL-2), and BM-APC (14 days with GM-CSF) or freshly thawed splenocytes were cultured together for 72 hours, and proliferation was assessed by MTS assay (Figure).

Deer mouse antiserum raised against KLH and incubated with the antigen for one hour prior to addition of cells increased the sensitivity of T cell proliferation (Figure), suggesting that the BM-APC express Fc receptors that mediate antigen uptake.

To our knowledge, no previous efforts have been made to develop long-term cultures of T cells from unconventional laboratory rodents. The principal reason for this is that the highly inbred strains required for conventional long-term T cell work are not available for rodents not routinely used in the laboratory. At least for deer mice, we have developed a method of fulfilling this need by using commercially available house mouse GM-CSF. This cytokine apparently binds to the GM-CSF receptor on deer mouse cells such that it generates competent APC from the bone marrow. These cells are capable of processing and presenting soluble antigen to autologous antigen-specific helper T cells. Our initial suspicion that house mouse GM-CSF might bind to the deer mouse GM-CSF receptor arose from previous work.

We used a method that has been shown to generate dendritic cells in house mice; however, the cells obtained from deer mouse bone marrow more closely resembled macrophages than DC. These cells contained many large macropinocytic vesicles, but conspicuous dendrites typical of DC were not observed. Microscopically, these cells also appeared sensitive to TNF, which decreased macropinocytosis, but TNF had no effect on the capacity of these cells to present antigen to T cells, as has been reported for human DC derived from blood mononuclear cells.

We routinely recover 10⁷ bone marrow cells from a deer mouse, which is sufficient for freezing 5 vials at 2 × 10⁶ cells each. Each vial is used to seed a 100 mm bacterial Petri dish, which produces about 10⁷ BM-APC at 14 days of culture. For deer mice, the most significant cell-number limitation concerns the spleen: although deer mice are only slightly smaller than BALB/c mice, their spleens are disproportionately small (unpublished observations). We routinely recover 7 × 10⁶ splenocytes from a deer mouse, while BALB/c house mice usually provide 10-fold more. Because of this limitation, we have begun to use BM-APC to propagate T cells. This method involves culturing bone marrow cells with GM-CSF for 10 days, then freezing aliquots of 10⁶ cells. Three days before T cell restimulation, the 10-day BM-APC are thawed and cultured with GM-CSF, then used for restimulation with fresh antigen in one well of a 24-well plate. The T cells are fed fresh IL-2/DMM-5 at two-day intervals for expansion. Since the deer mice are outbred, this method requires the immunization and collection of cells from individual animals (Figure).
We have used this method to establish nine T cell lines, six specific for KLH and three specific for SNV nucleocapsid antigen (data not shown). Based upon typical cell yields, it should be possible to assay several thousand wells in 96-well plates, which we estimate to be sufficient for many T cell applications, including cloning, peptide epitope mapping, TCR variable gene segment usage, and cytokine profiling.

We believe the methods described in this work will allow the characterization of antigen presentation and T cell responses in infected deer mice. Many viruses impair pathways involved in APC and T cell functions so that they can evade a sterilizing immune response. With hantaviruses and their rodent hosts, millions of years of evolution have presumably allowed a coadaptation of the viruses and the host immune responses such that pathology does not occur and the virus is not eliminated. It is possible that hantaviruses possess some as yet unidentified mechanism for suppressing an aggressive inflammatory immune response in rodent hosts, a mechanism that is ineffective in human infections and often leads to inflammatory immunopathology.

Because of the substantial evolutionary divergence of deer mice and house mice (about 25–50 million years), it is likely that this approach will also prove useful for other rodent species.

A limitation of this approach is that BM-APC lose their ability to proliferate in the presence of GM-CSF after four to six weeks, similar to what has been observed with house mouse bone marrow-derived cells [53,54].

We have developed a method for generating large numbers of competent antigen presenting cells from deer mouse bone marrow using house mouse GM-CSF. This method resulted in the production of antigen-specific T cell lines from outbred deer mice. Inclusion of antigen-specific antibody in cultures augments T cell proliferation, suggesting that the APC express Fc receptors. This method will allow characterization of APC and T cells in deer mice and may be extended to other rodent species that are important in infectious disease research.

The deer mice used in these experiments were from a colony of animals established with deer mice trapped in western Colorado.

The 5' end of the deer mouse GM-CSF cDNA was obtained by RACE; briefly, a gene-specific primer (Table) was designed for the amplification. The GM-CSF polypeptide sequences of the cotton rat (Sigmodon hispidus, AAL55394) and human (NP_000749) were aligned with the deer mouse sequence using MacVector's clustal algorithm.

Total RNA from activated splenocytes was reverse-transcribed using an oligo-dT primer and Superscript II. TCRβC cDNA sequences from house mouse, rat and human were aligned with MacVector, and PCR primers (Table) were designed from the conserved regions.

Deer mice were bilaterally immunized subcutaneously at the base of the tail with 20 μg of KLH emulsified in CFA (Sigma). Ten days later, draining lymph nodes, spleens and bone marrow were recovered for in vitro experiments. For production of high-titer KLH antiserum, deer mice were immunized i.p. with 20 μg of KLH emulsified in CFA and boosted with 20 μg in IFA one month later; sera were collected 7 days after boosting.

Immunized deer mice were euthanized by cervical dislocation, and the draining lymph nodes, spleens and bone marrow from individual animals were separately collected in Hank's balanced salt solution for processing. The lymph nodes served as a source of antigen-specific T cells, while the splenocytes were treated with ammonium chloride to lyse RBCs and then frozen in aliquots in 10% DMSO/5% FBS deer mouse medium for use as autologous APC in additional rounds of in vitro T cell stimulation.
The bone marrow cells were collected from tibiae and femurs, washed twice in DMM-5, aliquotted at 2 × 10⁶ cells per vial in 1 ml of 10% DMSO/DMM-5, and stored at -70°C.

Bone marrow-derived APC (BM-APC) were generated with modification of a previously described method for dendritic cells (DC). One vial (2 × 10⁶ cells) was quick-thawed in a 37°C water bath and cultured, without washing, in 100 mm bacterial Petri dishes in DMM-10 containing 20 ng/ml recombinant house mouse GM-CSF at 37°C under 7% CO₂. Fresh GM-CSF/DMM-10 was provided on days 3, 6, 8, 10, and 12 for the generation of APC. Cells were collected with a scraper for use in experiments and were processed by cytospin and stained with Wright's stain for morphological examination.

Deer mouse splenocytes depleted of RBC by ammonium chloride treatment were incubated with a suboptimal dose of PHA (2 μg/ml) (Sigma) in DMM-5 and recombinant human IL-2 (R&D Systems) for 48 hours. Proliferation was determined by MTS assay; the means and standard deviations of duplicate samples were calculated, with the mean of cells without IL-2 subtracted from the sample means.

Total RNA was extracted from 14-day T cell and BM-APC cultures and converted into cDNA. Class II expression of the bone marrow cells was assessed by PCR using a forward primer from exon 2 and a reverse primer that overlaps the boundary of exons 2 and 3 of deer mouse I-Eβ (Table).

Sera were collected at euthanasia by cardiac puncture. Plates were coated with 5 μg/ml KLH in PBS overnight and washed 5× with wash buffer (PBS-0.1% TWEEN-20). Plates were then blocked with blocking buffer (5% nonfat powdered milk in wash buffer) for 1 hour at room temperature. The sera and remaining reagents were diluted in blocking buffer. Sera were incubated in duplicate for 2 hours at room temperature, followed by goat anti-P. leucopus IgG (H&L) for 1 hour and then a horse anti-goat IgG-HRP conjugate for 1 hour. ABTS substrate (Sigma) was incubated for 15 min, and plates were read at 414 nm. Means were calculated with the background subtracted.

In vitro stimulation of helper T cells was performed essentially as described elsewhere [57]. Lymph node cells were cultured in 24-well plates with 20 μg/ml KLH in DMM-5. After a 4-day incubation, the lymph node cells were collected and washed twice in DMM-5. The number of recovered lymph node cells varied between animals, but between 2 × 10⁵ and 5 × 10⁵ cells were recovered and plated with fresh antigen and 3 × 10⁶ thawed autologous splenocytes in 1 ml of DMM-5 in 24-well plates. At 2-day intervals, cultures were fed by removing 750 μl of medium and replacing it with DMM-5 containing 20 U/ml of recombinant human IL-2. When cultures were greater than 80% confluent, cells were passaged 1:2 into additional wells or into T25 tissue culture flasks. This process was continued for 14 days to expand the T cells. Cultures were also restimulated at two-week intervals with fresh mitomycin-C-treated autologous splenocytes as above, or with 2 × 10⁶ BM-APC, to continue propagation of the T cell lines.

BM-APC were examined for their functional capacity to process and present KLH to sensitized autologous deer mouse T cells expanded in culture. For these experiments, mitomycin-C-treated BM-APC (10⁴) or splenocytes (2 × 10⁵) were used to stimulate 10⁵ T cells in the presence of KLH in 96-well plates, with proliferation determined by MTS (3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium) assay. In other experiments, the effects of house mouse tumor necrosis factor (20 ng/ml) on APC morphology and on the capacity to stimulate T cell proliferation were evaluated. Lastly, antisera to KLH were produced in deer mice and used to assess the capacity of the BM-APC to capture and process antigen for presentation to T cells. In these experiments, antiserum or normal deer mouse serum was diluted to 1:2,000 (a saturating dilution in the ELISA described above) with KLH and incubated for 1 hour at room temperature; BM-APC and T cells were then added, and the cultures were incubated for 72 hours prior to determining proliferative responses by MTS. Means and standard deviations were calculated from duplicate samples.

BD conducted the bone marrow cell culture work. DGW and TAC performed the RT-PCR experiments. JP cloned and sequenced the TCRβ cDNA. RMF cloned and sequenced the MHC class II cDNA. TS immunized deer mice and generated the T cell lines.
Esophagectomy is considered the gold standard for the treatment of high-grade dysplasia in Barrett's esophagus (BE) and of noninvasive adenocarcinoma (ACA) of the distal esophagus. If all of the metaplastic epithelium is removed, the patient is considered "cured". Despite this, BE has been reported in patients who have previously undergone esophagectomy, and it is often debated whether this is "new" BE or the result of an esophagectomy that did not include a sufficiently proximal margin. Our aim was to determine whether BE recurred in esophagectomy patients in whom the entire segment of BE had been removed.

Records were searched for patients who had undergone esophagectomy for cure at our institution and were reviewed for surgical, endoscopic, and histopathologic findings. The patients in whom we have endoscopic follow-up are the subjects of this report.

Since 1995, 45 patients have undergone esophagectomy for cure for Barrett's dysplasia or localized ACA. Thirty-six of these 45 patients underwent endoscopy after surgery, including 8/45 patients (18%) with recurrent Barrett's metaplasia or neoplasia after curative resection.

Recurrent Barrett's esophagus or adenocarcinoma after esophagectomy was thus common in our patients who underwent at least one endoscopy after surgery. This appears to represent the development of metachronous disease after complete resection of esophageal disease. Half of these patients have required subsequent treatment thus far, either repeat surgery or photodynamic therapy. These results support the use of endoscopic surveillance in patients who have undergone "curative" esophagectomy for Barrett's dysplasia or localized cancer.

The incidence of esophageal adenocarcinoma has increased more rapidly than that of any other form of cancer since the 1970s, and it now represents the majority of esophageal neoplasms in the West.

Detection of dysplastic Barrett's esophagus or mucosal adenocarcinoma is important because it allows the opportunity to intervene prior to the development of invasive neoplasia. Unfortunately, no medical or surgical GERD treatment has been consistently and convincingly demonstrated to prevent the development of adenocarcinoma.

After approval by the Mayo Foundation's Institutional Review Board for Research, the electronic medical records of Mayo Clinic patients in Jacksonville, Florida, were searched to find all patients who had undergone esophagectomy for cure at the Mayo Clinic surgical facility, St. Luke's Hospital, Jacksonville, Florida, since 1995. This time period was chosen to coincide with the routine availability and clinical use of pre-operative staging with endosonography in our institution. The records of these patients were reviewed for pre-operative and post-operative staging results, including computed tomography and endosonography studies; endoscopic, surgical, and histopathologic findings were also reviewed. Specifically, the surgical specimens were re-examined to ensure that the esophagectomy specimen, including lymph node sampling, was adequate and that the proximal margin was completely free of Barrett's metaplasia, dysplasia or carcinoma. The patients in whom we have at least one follow-up endoscopy, with biopsies obtained for histologic confirmation of mucosal disease, are the subjects of this report.
Esophageal disease was staged according to the Tumor-Lymph node-Metastasis (TNM) criteria.

Since 1995, 45 patients have undergone esophagectomy for Barrett's dysplasia or localized adenocarcinoma with curative intent in our institution. At operation, none of these patients was found to have extension of malignant disease to paraesophageal lymph nodes, and all esophageal glandular mucosa was resected, with only normal squamous mucosa remaining at the proximal surgical margin. Subsequently, 36 of these patients (80%) have undergone endoscopy after surgery, including 8/45 patients (18%) who were found to have recurrent Barrett's glandular mucosa after curative resection and who are described in the Table.

Open transthoracic esophagectomy (Ivor Lewis procedure) with pyloroplasty was performed in most patients (39/45), including the patients diagnosed with recurrent Barrett's disease; five different surgeons performed these operations. Most patients had evidence of gastric stasis (retained food) at endoscopy, and anastomotic dilation was performed at endoscopy in 16/36 patients. It is possible that patients with anastomotic strictures are at increased risk of recurrent Barrett's esophagus because of worse reflux, although their swallowing symptoms may alternatively be related to other factors such as anastomotic ischemia or surgical sutures. Patients frequently used aspirin (42%) or COX-2-specific non-steroidal anti-inflammatory agents (25%). Twice-daily proton pump inhibitors were routinely prescribed, although patient compliance is difficult to assess because of high drug costs and limited symptomatic improvement. While the small number of patients limits our analysis, these factors occurred in a proportional number of patients with recurrent Barrett's disease, and no clear trends could be identified.

Five of these patients have been diagnosed with Barrett's metaplasia or low-grade dysplasia and have been followed for more than 12 months in surveillance endoscopy programs monitoring the stability of the glandular epithelium. Two of the 5 were found to have short-segment Barrett's metaplasia, 10 mm and 15 mm in length, after complete esophageal resection for Barrett's high-grade dysplasia in a 72-year-old man and for Barrett's T2N0M0 adenocarcinoma in a 78-year-old man; this metaplastic glandular epithelium was detected at follow-up of 90 months and 17 months, respectively. In the other 3 patients, short-segment Barrett's low-grade dysplasia, in lengths of 10–25 mm, was found after complete resection of the esophagus for Barrett's high-grade dysplasia (1 patient) or Barrett's T3N0M0 carcinoma (2 patients); this recurrent Barrett's glandular dysplasia was detected at follow-up of 42–47 months. Erosive esophagitis was also noted in 4 of these 5 patients, indicating uncontrolled reflux disease. All have subsequently been treated with high doses of proton pump inhibitors (such as esomeprazole 80 mg twice a day, or 40 mg three or four times per day) in an attempt to maximally control reflux of acid and digestive juices from the stomach into the cervical esophagus.

Three other patients developed recurrent Barrett's disease after curative resection of esophageal T2 or T3N0M0 adenocarcinoma. These patients varied in age from 58 to 80 years and were found to have more severe erosive esophagitis, suggesting worse acid reflux and mucosal injury compared with the recurrent Barrett's patients without carcinoma.
Recurrent multifocal Barrett's high-grade dysplasia over a 10 mm segment was detected 88 months after esophagectomy in one patient and was successfully ablated with porfimer sodium photodynamic therapy using methods described elsewhere. In a second patient, recurrent high-grade dysplasia was found and a T1N0M0 adenocarcinoma was confirmed at repeat esophagectomy. Finally, a diminutive polypoid mass proximal to the surgical anastomosis was found in a 58-year-old woman who had undergone esophagectomy for Barrett's mucosal adenocarcinoma 7 months previously. Computed tomography with contrast enhancement noted esophageal wall thickening and suspicious lymphadenopathy, and repeat resection confirmed a T2N1M0 adenocarcinoma.

Over the past four decades, the incidence of esophageal adenocarcinoma has risen dramatically, particularly in older white men.

After esophageal resection, the native squamous mucosa of the cervical esophagus is brought into contact with the acid-secreting mucosa of the gastric body. This reconstruction allows acid and duodenal juice to reflux from the gastric conduit into the remaining cervical esophagus. Reflux of gastric and duodenal content is an important factor in the pathogenesis of Barrett's metaplasia, dysplasia and esophageal adenocarcinoma [18-20].

In the study by Oberg et al, despite the use of potent acid-suppressing medications, severe esophageal acid exposure was noted in most patients, and patients with recurrent Barrett's epithelium were found to have significantly more severe acid exposure, occurring predominantly in the supine position.

Murata and colleagues recently reported the diagnosis of metachronous squamous cell carcinomas in five of 253 patients (2%) who had undergone esophagectomy for thoracic esophageal squamous cell carcinoma more than two years previously. These superficial carcinomas (Tis or T1) were detected at surveillance endoscopy and were treated with endoscopic laser ablation, mucosal resection or surgical resection. While squamous cell carcinomas are not related to gastroesophageal reflux, this report also suggests that esophageal cancer patients (squamous or adenocarcinoma) are predisposed to the development of metachronous carcinomas in the remnant cervical esophagus. This is consistent with DeMeester's experimental model of Barrett's dysplasia and adenocarcinoma occurring after complete gastrectomy with esophago-jejunostomy and reflux of bile and digestive enzymes into the cervical esophagus.

Konishi et al reported finding an adenocarcinoma in Barrett's esophagus following total resection of the gastric remnant in a 52-year-old man who had undergone distal gastrectomy for gastric cancer nearly twenty years previously.

These studies demonstrate that the cervical esophagus is exposed to large amounts of acid and refluxate despite the use of proton pump inhibitor medications, and often in the absence of severe reflux symptoms. Although our group of patients has been observed for a median of only 2 years after esophagectomy, our study confirms that the development of metaplastic columnar mucosa in the cervical esophagus is a common complication related to reflux-associated injury to the squamous epithelium.
Further, our findings suggest that this recurrent glandular mucosa is unstable and predisposed to the development of dysplasia and invasive carcinoma, as has already occurred in most of these patients.

Early detection of this recurrent disease remains vitally important to preserve all possible treatment options, including surveillance endoscopy follow-up, endoscopic ablation with porfimer sodium photodynamic therapy and, if necessary, repeat esophageal resection. Our specific recommendations include surveillance endoscopy every 6–12 months for patients who have undergone "curative" esophagectomy for Barrett's dysplasia or adenocarcinoma. In addition, we routinely recommend indefinite use of proton pump inhibitors, regardless of symptom status, starting at twice-daily dosing and increasing as necessary to control reflux symptoms and mucosal damage due to acid, bile and digestive enzymes. Whether these drug doses should be titrated based on ambulatory pH and impedance test results remains to be determined. We have generally been disappointed by prokinetic agents such as metoclopramide in improving reflux symptoms in these patients. For esophagectomy patients who develop recurrent Barrett's metaplasia, we recommend COX-2 inhibitor or aspirin chemoprevention to protect against the development of metachronous Barrett's carcinoma [29,30].

None declared.

All authors participated in the study design and coordination as well as case collection and review of the histopathologic and endoscopic results. All authors read and approved the final manuscript.
Binding of a bacterium to a eukaryotic cell triggers a complex network of interactions in and between both cells. P. aeruginosa is a pathogen that causes acute and chronic lung infections by interacting with the pulmonary epithelial cells, and we use this example to examine the ways in which the response of the eukaryotic cell is triggered, leading to a better understanding of the details of the inflammatory process in general. Considering a set of genes co-expressed during the antibacterial response of human lung epithelial cells, we constructed a promoter model for the search for additional target genes potentially involved in the same cell response. The model construction is based on the consideration of pairwise combinations of transcription factor binding sites (TFBS).

It has been shown that the antibacterial response of human epithelial cells is triggered by at least two distinct pathways. We therefore supposed that there are two subsets of promoters activated by each of them. Optimally, they should be "complementary" in the sense of appearing in complementary subsets of the (+)-training set. We developed the concept of complementary pairs, i.e., two mutually exclusive pairs of TFBS, each of which should be found in one of the two complementary subsets.

We suggest a simple, but exhaustive method for searching for TFBS pairs which characterize the whole (+)-training set, as well as for complementary pairs. Applying this method, we came up with a promoter model of antibacterial response genes that consists of one TFBS pair which should be found in the whole training set and four complementary pairs. We applied this model to the screening of 13,000 upstream regions of human genes and identified 430 new target genes which are potentially involved in antibacterial defense mechanisms.

In spite of the numerous sophisticated approaches devoted to this subject, the construction of reliable promoter models remains difficult, with false positive predictions as the central problem. We suggest a simple, but exhaustive method for searching for TFBS pairs which characterize the whole training set, and for combinations of mutually exclusive pairs (complementary pairs). The idea of starting the analysis with a "seed" of sequences allows a very biology-driven way of initial filtering of information. To enhance the statistical reliability and to obtain additional evidence in the TFBS combination search, we applied the principal idea of phylogenetic footprinting (using orthologous mouse promoters), while proposing a different view on the applicability of this approach. Finally, we came up with a promoter model which we applied to the screening of 13,000 upstream regions of human genes, identifying 430 new target genes potentially involved in antibacterial defense mechanisms.

In every step of our investigation we tried to combine purely computational approaches with the preexisting experiment-based knowledge, as represented in the corresponding databases and literature, and with our own biological expertise. To develop a promoter model, the first task is to select those transcription factors whose binding sites shall constitute the model. The overwhelming majority of methods and tools estimating the relevance of predicted TF binding sites in promoter regions are based on their over- and underrepresentation in a positive (+) training set in comparison with some negative (-) training set. If, however, a binding site is ubiquitous or very degenerate, so that it can be found frequently in any sequence, the comparison with basically any (-)-training set will not reveal any significance for its occurrence.
Frequent occurrence, on the other hand, tells nothing about functionality in any specific case, which may depend on additional factors and/or other conditions. Therefore, basing the decision about the relevance of a transcription factor for a certain cellular response solely on whether its predicted binding sites are overrepresented in the responding promoters may lead to a loss of important information. Thus, we did not rely on this kind of evidence but rather chose the candidate transcription factors according to available experimental data. We found 5 factors reported in the literature as taking part in antibacterial or similar responses and selected them as candidate TFs [15,18-29]. On the other hand, some factors which have also been mentioned in the literature as potentially relevant, or which might have been expected to contribute, could not be retained as model constituents.

Finally, we constructed our promoter model from the binding sites of these 5 TFs, considering their pairwise combinations and some combinations of higher order.

In several steps of the model construction we had to estimate the overrepresentation of a feature in the (+)-training set compared with the (-)-training set. We operated with the number of sequences that possess the considered feature, in our case a pair of TFBS, at least once; otherwise, mere enrichment of a feature in the (+)-training set might be due to strong clustering in a few members of that set, which would not lead to a useful prediction model. In the first step a T-test was performed, but it turned out to be a weak filter: for example, we could find several pairs which showed, if estimated with the T-test, a remarkable overrepresentation (p < 0.001), but with an occurrence of 97% in the (+)-training set versus 85% in the (-)-training set, which is of no practical use for constructing a predictive model, since it is also important to have minimal occurrence of a discriminating feature in the (-)-training set. In the further work we considered all pairs with p < 0.005; but as this did not reasonably restrict the list of considered pairs, we had to apply an additional filtering approach. For this purpose we used a simple characteristic, the percentage of sequences containing the pair in the (+)- and (-)-training sets. By operating directly with percentages we could easily filter out those pairs which would identify too many false positive sequences, thus getting rid of a substantial part of useless information. This procedure allows one to estimate immediately the applicability of the model for identifying further candidate genes that may be involved in the cellular response under consideration (see Methods).

The main problem of promoter model construction is the large number of false positives. In developing our approaches we therefore applied several anti-false-positive measures:

• distance assumptions
• identification of "seed" sequences
• phylogenetic conservation
• subclassification into complementary sequence sets.

In the following, we will comment on each item in more detail.

The commonly accepted view that functionally cooperating transcription factors may physically interact with each other prompted us to introduce certain assumptions concerning the distances between the considered TFBS. Transcription factors can interact either immediately with each other or through some mediator proteins (co-factors). In principle there can be many ways of taking this into account, since our knowledge about the mechanisms of interaction is limited.
In this work we used two different approaches to consider distances in the promoter model development.

In the first case we based our assumptions on the structure of known composite elements and assumed that the binding sites of interacting TFs should occur within a distance of not more than 150 bp of each other (Figure):

(a) Directly interacting factors should have their binding sites at a close distance.

(b) Factors interacting through some co-factor may have binding sites at a medium distance, depending on the size and other properties of the co-factor (and of the factors themselves).

(c) We can also expect direct interaction of another type, when the two factors are not located in the nearest neighborhood but their interaction requires the DNA to bend or even to loop. In this case the distance is no longer a close one, although we cannot estimate its range; thus, we allowed different ranges of distances, excluding only the closest ones.

In the second case we searched for pairs not at a certain distance but within a certain distance range, considering the pairs occurring in segments of a certain length. We searched in three distance ranges, roughly called "close", "middle" and "far", all with adjustable borders, so that by moving the borders we could obtain the best proportion of percentages in the (+)- and (-)-training sets. We used the search in these distance ranges as a starting point, but some of the found pairs required optimization of the borders, so that they finally did not fit into any of the predefined ranges. The initial "close" range was taken as 5–20 bp, to exclude overlapping of the sites while allowing close interaction; however, the upper border had to be shifted in many cases up to 50 bp. The initial "middle" range was chosen as 21 to 140 bp (the number of nucleotides wrapping around the core particle of the nucleosome); the "far" range had its upper border at 250 bp.

Initially, the idea of "seed" sequences was exploited because of the desire to make use of preexisting biological knowledge about the expressed genes, and also because of doubts about the reliability of the available data set. Different experimental approaches differ in their reliability; microarray analysis, in particular, is not absolutely reliable [34-36]. Therefore, we started our analysis with a group of "seed" sequences which we considered, for distinct reasons, more reliable and preferable. In choosing a seed group, we took two kinds of evidence into consideration. The first was the source of information, i.e., the methods by which the gene had been shown to participate in the response: we took the promoter sequences of those genes which had been reported by methods other than microarray analysis [37-47]. The second kind of evidence was whether we could find any additional biological reasoning for the gene to participate in this kind of response. For instance, a well-known participant of the NF-κB-activating pathway such as IκBα, or participants of different pathways which are likely to be triggered here as well, like c-Jun or PKC, were regarded as first candidates for the "seed" group.

With the "seed"-based search (see Methods), the number of pairs to be considered was reduced by at least two orders of magnitude: depending on the "seed", the number of considered pairs varied from 50 to 400. In the next steps this number was reduced by another order of magnitude (Table). Each "seed" is characterized by its own set of pairs.
To ensure the robustness of the obtained results, we undertook a "leave-one-out" test, removing one sequence of the "seed" set at a time (for the combined "seed" sets which included human and mouse orthologs, both orthologous sequences were excluded simultaneously). This was repeated for each sequence (or ortholog pair), and only the robust pairs were taken into further consideration.

Evolutionary conservation of a TFBS is generally accepted as an additional criterion for a predicted site to be functional. However, functional features are not necessarily bound to conserved regions, as long as we speak about primary sequence conservation. Dealing with such degenerate objects as TF binding sites, one should not expect absolute conservation of their binding sequences. From the functional point of view, it seems more reasonable to expect that not the sequences, but the mere occurrence of binding sites and/or their combinations, as well as (perhaps) their spatial arrangement, would be preserved among evolutionarily related genomes. That is the approach we use in the present work, completely refraining from sequence alignments. We search for those pairs of TFBS which can be found in both the human and the corresponding mouse orthologous promoter regions, considering the promoter as a metastring of TFBS. We took a feature (a pair of TFBS) into account only if we could identify it in both orthologous promoters, regardless of the region of the promoter in which it appeared; we also did not try to align the metastrings of TFBS symbols, since they may be interrupted by many additional predicted TFBS. While this work was in progress, we found a very similar approach in the work of Eisen and coworkers [55].

The idea that combinations or clusters of regulatory sites in upstream regions provide specific transcriptional control is not new [8,56]. Nevertheless, in the following we formalize our approach and describe the logic of our investigation. All procedures are described for the example of pairwise combinations, but in principle all of them can be applied to combinations of higher order; we restricted ourselves to pairs for the sake of computational feasibility.

We consider all possible pairwise combinations of TFBS in each sequence, as described in Methods. A pair is taken into account if it is found in a sequence at least once.

Let us consider two TFBS m and n located in a distance range from r1 to r2 (where r1 ≤ r2) on either strand of DNA (+ or -). We can denote the sets of sequences containing the pair in the four possible relative orientations as S(m+, n+), S(m+, n-), S(m-, n+) and S(m-, n-). To allow for inversions of DNA segments containing pairs, we group these into three classes of combinations (Figure): Si(m, n), i = 1,...,3, represents the set of sequences with a pair of the i-th class. In more general form, a pair type is written as Si(m, n, r1, r2).

Let Pt be the fraction of sequences containing a given pair in the (+)-training set, and Pc the corresponding fraction in the (-)-training (control) set. We have to solve the optimization problem of maximizing the difference Pt − Pc by choosing appropriate values for m, n, i and r1, r2. Also, we are interested only in pairs which are present in at least a minimum fraction of (+)-training sequences (C1) and in at most a defined maximum fraction of (-)-training sequences (C2); such pairs can be filtered in advance. Thus, we search for pairs for which

Pt ≥ C1 and Pc ≤ C2,   (1)

where 0 ≤ C1, C2 ≤ 1 are adjustable parameters. For single pairs we chose C1 = 0.8 and C2 = 0.4; we could not find pairs satisfying more stringent parameters (a higher C1 or a lower C2), while requirement (1) was satisfied by many different combinations giving rise to the same Pt and Pc.
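To make the search concrete, here is a minimal sketch of the exhaustive single-pair search together with the C1/C2 filter of requirement (1). It is an illustration, not the authors' original implementation: the site tuples, the orientation-class definitions and the distance ranges are assumed stand-ins for the settings described in the text.

```python
from itertools import permutations

# Assumed data model: a predicted site is a (factor, position, strand) tuple,
# and each sequence is represented by its list of predicted sites.
CLASSES = {
    1: {("+", "+"), ("-", "-")},  # orientation classes are placeholders;
    2: {("+", "-")},              # the paper's exact class definitions
    3: {("-", "+")},              # are not reproduced here
}

def has_pair(sites, m, n, klass, r1, r2):
    """True if the sequence contains at least one (m, n) pair of the given
    orientation class whose site distance lies within [r1, r2]."""
    return any(
        fa == m and fb == n and (sa, sb) in CLASSES[klass]
        and r1 <= abs(pb - pa) <= r2
        for fa, pa, sa in sites for fb, pb, sb in sites
    )

def fraction_with_pair(seqs, m, n, klass, r1, r2):
    """Pt or Pc: the fraction of sequences containing the pair at least once."""
    return sum(has_pair(s, m, n, klass, r1, r2) for s in seqs) / len(seqs)

def candidate_pairs(pos_seqs, neg_seqs, factors, ranges, c1=0.8, c2=0.4):
    """All pair types satisfying requirement (1), Pt >= C1 and Pc <= C2,
    ranked by the difference Pt - Pc."""
    found = []
    for m, n in permutations(factors, 2):
        for klass in CLASSES:
            for r1, r2 in ranges:
                pt = fraction_with_pair(pos_seqs, m, n, klass, r1, r2)
                pc = fraction_with_pair(neg_seqs, m, n, klass, r1, r2)
                if pt >= c1 and pc <= c2:
                    found.append((pt - pc, m, n, klass, (r1, r2)))
    return sorted(found, reverse=True)
```

With the initial ranges quoted in the text, `ranges` would start out as something like [(5, 20), (21, 140), (141, 250)] before border optimization.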
To make the analysis more specific, we can consider combinations of pairs instead of single pairs. Each possible type of pair is determined by the values of m, n and i (for the sake of simplicity we omit r1, r2 from the expressions from here on). We can list all types of pairs and assign a number j to each pair in this list, so that each type of pair is characterized by mj, nj, ij. For two different pair types j1 and j2 we can then identify the sets S(j1) and S(j2) of sequences in which they appear and require that both appear in the (+)-training set simultaneously: S(j1) ∩ S(j2). A triple or a combination of higher order can be represented in the same way.

The antibacterial response of the cell is triggered by at least two distinct pathways, and it may therefore be supposed that there are subsets of promoters activated by each of them. Optimally, these should be "complementary" in the sense of appearing in complementary subsets of the (+)-training set (Figure).

Complementary pairs were searched for first in a "seed" subset of the (+)-training set (Figure, step 1; the seed sequences are listed in the Table). Since every seed sequence must contain one of the pairs, C1 is here always set to 1, and the candidate pairs must satisfy two conditions: (a) together they cover the whole subset, with an allowed overlap (defined by the parameter C5); and (b) each of them is found in not more and not less than a certain fraction of the sequences (defined by the parameters C3 and C4). The combinations found were then checked in the whole (+)-training set of 33 sequences (Figure, step 4). The percentage of pair occurrence in the (-)-training set had been counted in the first step, with subsequent filtering of the pairs.

A rather large number of combinations satisfied these requirements. However, when we selected those that were robust in the "leave-one-out" test for the "seed" sets, the final list of potential model constituents shortened to only 2 ubiquitous and 12 complementary pairs.

We found one satisfactory pair which should be found in all promoters of target genes: AP-1, NF-κB (class 1, distance from 10 to 93 bp; see Figure). Checking the whole (+)-training set revealed the presence of all pairs under consideration; but the simultaneous usage of all the pairs could overfit the model, so we did not apply them all, sacrificing a bit of specificity for the sake of higher sensitivity. Finally, we came up with 4 complementary pairs (Figure).
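The complementarity conditions (a) and (b) can be expressed directly as set operations. The sketch below is one possible reading of those conditions, with the C3/C4/C5 semantics as stated in the text (per-pair occurrence bounds and allowed overlap); where the text is ambiguous, the exact inequalities are our assumptions.

```python
def is_complementary(s1, s2, n_seed, c3=0.3, c4=0.7, c5=0.2):
    """s1, s2: sets of seed-sequence IDs containing each of the two candidate
    pairs; n_seed: size of the seed subset (C1 = 1 over the seed)."""
    covers_all = len(s1 | s2) == n_seed          # (a): joint coverage of subset
    overlap_ok = len(s1 & s2) <= c5 * n_seed     # (a): allowed overlap, C5
    sizes_ok = all(c3 * n_seed <= len(s) <= c4 * n_seed
                   for s in (s1, s2))            # (b): per-pair bounds C3..C4
    return covers_all and overlap_ok and sizes_ok
```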
In order to avoid overfitting of the model and to demonstrate the significance of our results, we performed a permutation test. For that, we conducted 2000 iterations of random permutation of the (+) and (-) labels in the training sets and tried to rebuild the model using the procedure described above, estimating the rate of correct classification on this random selection. The cases of common and complementary pairs were considered separately; the analysis was made for different C1, C2 for common pairs, while for complementary pairs we considered the case with C3 = 0.3, C4 = 0.7, C5 = 0.2. The probability of finding by chance a "seed" of 12 sequences which would produce at least one pair common to a random selection of 33 sequences (including the "seed") depends on the chosen C1, C2 and was found to vary between p < 0.0005 and p = 0.02. We failed to find any complementary pairs after 1000 iterations of the permutation test with the parameters used for the real (not permuted) model construction. These results suggest that the success of model construction based on the search for combinations of TFBS is strictly dependent on the selected training set and that the significance of the findings, given a correct choice of the adjustable parameters, is high enough to claim their non-randomness. Thus, we can say that in the described case the pairs found in the given (+)-training set with the given parameters are real characteristics of this set.

The model consists of two kinds of combinations of pairs, ubiquitous pairs and complementary pairs, and can accordingly be divided into two modules. Let M1 and M2 be the modules comprising the ubiquitous pairs and the complementary pairs, respectively. Module M1 comprises the pair AP-1, NF-κB (class 1). Module M2 comprises all complementary combinations listed in the Figure; each complementary combination constitutes a submodule (m) of M2.

To apply the model means to search for sequences containing all these combinations. Let S(M) denote the set of sequences which possess the whole model M; likewise we consider S(M1), S(M2), and S(m) for a submodule m. Module M2 consists of submodules; since we consider four submodules, the sequences containing M2 are found as

S(M2) = S(m1) ∩ S(m2) ∩ S(m3) ∩ S(m4),

where the set for each submodule is the union of the sequence sets containing its complementary pairs. The final result of the application of the model M can be presented as

S(M) = S(M1) ∩ S(M2).

The model gives 3.4% false positives and re-identifies 52% of the whole (+)-training set, but these 52% comprise all the most reliable sequences of the set.

Applying our promoter model to the screening of 13,000 upstream regions from a collection of human 5'-flanking sequences, we identified 430 new target genes which are potentially involved in the antibacterial defense mechanisms triggered by P. aeruginosa binding.
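The application of the final model reduces to the set algebra given above. A minimal sketch follows, assuming a mapping S from pair identifiers to the sets of sequence IDs in which site predictions realize that pair; all identifiers below are placeholders, not names from the paper.

```python
def apply_model(S, ubiquitous, complementary_submodules):
    """S: dict mapping a pair ID to the set of sequence IDs containing it.
    ubiquitous: pair IDs of module M1 (combined by intersection).
    complementary_submodules: list of submodules; each submodule is a list
    of mutually complementary pair IDs (union within a submodule,
    intersection across submodules)."""
    s_m1 = set.intersection(*(S[p] for p in ubiquitous))
    s_m2 = set.intersection(
        *(set.union(*(S[p] for p in pairs)) for pairs in complementary_submodules)
    )
    # S(M) = S(M1) ∩ S(M2)
    return s_m1 & s_m2

# Example call with placeholder identifiers for the published model
# (one ubiquitous AP-1/NF-κB pair and four complementary submodules):
# hits = apply_model(S, ["AP1_NFKB_c1"],
#                    [["pA1", "pA2"], ["pB1", "pB2"],
#                     ["pC1", "pC2"], ["pD1", "pD2"]])
```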
Although the "seed" approach is, obviously, a restrictive measure, moreover, a pre-process restriction, which may result in missing potentially relevant additional sequence features, we find it useful and appropriate when the choice of the "seed" is made on a solid biological basis. After having applied the restrictive "seed" technique and distance assumptions, we undertake an exhaustive, complete enumeration of all possible pairs of potential TF binding sites that can be found in the (+)-training set, which in turn reveals a large number of combinations. This list of all found pairs is processed under a new kind of constraints imposed by the search for complementary pairs.The search for complementary pairs is a completely new approach, which supplies us with a new kind of information. It enables to identify subsets of the (+)-training set which possess different regulatory modules, thus suggesting their triggering by different regulatory pathways. This kind of information becomes extremely important in two cases: (i) when two or more pathways are presupposed to be triggered in the cellular response, like in the case considered in this work; (ii) when the (+)-training set consists of not really co-regulated, but of co-expressed genes, without precise information about which of them are regulated by the same mechanism. The identification of complementary pairs and, consequently, groups of sequences enables to better define the co-regulated genes thus providing a partial, although only predicted, confirmation of the co-regulation, and at the same time to better understand the ascending pathways.P. aeruginosa this pathway is not the only one [The final result of our search supported the idea of complementary pairs. There is a lot of evidence in literature that interleukin 8, β-defensin, monocyte chemoattractant protein and different mucins are regulated through LPS-triggered pathway(s) ,15,38. Oonly one , but we Our approach, as any other, has its limits. It has been shown for the genuine composite elements of certain types that oneThe next source of limitations we see in the preselection of factors according to published data. Obviously, we can not expect that the experimental data is exhaustive; some of the transcription factors may be not reported just because their participance in a certain process has not yet been investigated. On the other hand, statistical overrepresentation, as it has already been mentioned before, can not be taken by itself as proof of biological functionality or its lack; some TFBS cannot be overrepresented due to their degenerate nature. We had no other idea of how to take into account those TFBS which are not overrepresented, but to rely on published experimental data. We find that the usual methods based on statistical overrepresentaion are even more restrictive, but maybe the best solution could be found in merging both approaches – i.e., using the experimental evidence along with statistical ones, for instance using Bayesian techniques.P. aeruginosa binding, and further development of the methodological approaches, making them more flexible and applicable to any similar task. The list of predicted target genes has to be evaluated experimentally, but may have its value for further research already on the present step. The future work on reconstructing the intracellular pathways triggering the genetic program of the antibacterial cell response will be well supported with the information picked up from this list. 
It may give some hints for the next steps of experimental research, for instance providing information about the first candidates to be checked. The information about the complementary subsets of regulated genes helps to better understand the triggering pathways, and the complementarity of their function is a subject for further consideration.

The methodological approaches presented in this paper can, of course, be applied to other objects. In this work we focused on an experimentally proven basis for the initial choice of transcription factors. This kind of evidence is stronger than any prediction, but it can work only when such information is available, which may not be the case for some other sets of genes or cellular situations. In the next step of development we would like to allow also an exhaustive computational search through the whole list of known TFs for potential constituents of the models. The usage of Bayesian techniques, as mentioned in the previous paragraph, would also be appropriate for this kind of prediction.

We suggest a methodology for promoter model construction based on the search for TFBS pairs and show how it works in the particular case of the antibacterial response of human lung epithelial cells. We show that the method allows us to identify and predict subsets of target genes potentially triggered by different regulatory pathways and thus possessing different regulatory modules. The methodology is easily applicable to any similar task and does not depend on the number of included TFs and/or the number of investigated sequences, which only should not be too low for statistical reasons.

The following data sources were used:
Eukaryotic Promoter Database, release 77-1
DBTSS, the database of transcription start sites, release 3.0
TRANSCompel® Professional, release 7.1
TRANSFAC® Professional, release 7.1
TRANSPATH® Professional, release 4.1

The positive (+) training set comprises:
1. Promoters of human genes shown to be expressed in epithelial cells after interaction with P. aeruginosa by means of: a. microarray analysis; b. other methods [28,37,38].
2. Orthologous mouse promoters.

The sequences were derived either from the Eukaryotic Promoter Database or from DBTSS, the database of proven transcription start sites. The length of the sequences was 600 bp (-500/+100); a small extraction sketch is given at the end of this subsection. This region comprises most of the known upstream elements and corresponds to the upstream region used by Davuluri et al. as "proximal promoters" for promoter recognition, plus an additional downstream fragment.

The "seed" set is a subset of the positive training set selected for highest experimental reliability (see Table).

The negative (-) training set was composed of randomly chosen 5'-upstream sequences derived from the TRANSGENOME information resource of annotated human genome features. The set contained 2040 sequences.

We based our selection of TFs on experimental evidence. For that we undertook an extended literature search, looking for the TFs which have been shown to take part either directly in the response of epithelial cells to P. aeruginosa binding or in the pathways triggered during similar responses. The search revealed 5 candidate factors, among them NF-κB [23,24,26], AP-1 [24,25,27], C/EBP [24] and Sp1 [29,48].

Including C/EBP and Sp1 in the list was additionally reasoned by the fact that these factors are known to be second constituents of the most frequent NF-κB-containing composite elements, as they are compiled in the TRANSCompel® database.
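The 600 bp (-500/+100) windows referred to above are simple to cut out once TSS coordinates are known. Below is a minimal sketch; the genome and tss_table objects are illustrative stand-ins, not the actual EPD/DBTSS data structures.

# Sketch: cutting the 600 bp (-500/+100) promoter windows around a TSS.

def promoter_window(chrom_seq, tss, strand, upstream=500, downstream=100):
    """Return the -upstream/+downstream window around the TSS (0-based)."""
    if strand == "+":
        start, end = tss - upstream, tss + downstream
        return chrom_seq[max(start, 0):end]
    # On the minus strand the window is mirrored and reverse-complemented.
    start, end = tss - downstream, tss + upstream
    comp = str.maketrans("ACGTacgt", "TGCAtgca")
    return chrom_seq[max(start, 0):end].translate(comp)[::-1]

genome = {"chr1": "ACGT" * 500}                      # toy 2 kb sequence
tss_table = [("geneA", "chr1", 800, "+"), ("geneB", "chr1", 900, "-")]
windows = {g: promoter_window(genome[c], t, s) for g, c, t, s in tss_table}
print({g: len(w) for g, w in windows.items()})       # both 600 bp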
We made this search with the weight matrix approach, using the Match™ tool; the matrices were taken from the TRANSFAC® database. For the selection of the score thresholds we required that at least one hit for every searched transcription factor could be found in every sequence of the (+)-training set. The lower border for the thresholds was predefined as 0.80/0.79 (core similarity/matrix similarity).

We considered all the coordinates (with strand information) of all potential TF binding sites found by Match™ for each transcription factor. Further on, we examined all possible combinations of the coordinates, thus revealing all possible pairs in each sequence.

We worked under two different kinds of distance assumptions, as described in "Formalization of the approach", choosing the most promising results achieved with either of them. We considered all pairs of TFs within these segments. All the pairs of one type found within one distance range were merged. We considered a pair only if it appeared in the sequence at least once (within a certain distance), not taking into account the number of such pairs in each sequence.

ES developed the methodological approaches as well as the statistical analysis and conducted the data analysis. EW conceived the study and participated in its design and coordination. Both authors drafted, read and approved the manuscript.

The question is: if we choose a subset of sequences by chance, will our algorithm be able to define a model specific to such a random subset? In other words, will this algorithm allow us to make a model of anything, without dependence on the preselection of the sets ((+)-training set and/or the "seed" set)? We tried to prove the validity of the algorithm theoretically.

Our algorithm is based on the definition of biologically relevant "seed" sets, in which we search for the candidate pairs. Therefore, in order to answer the question, it is reasonable to estimate the probability of coming across a "seed" set of k sequences, 100% of which possess the required common feature – a pair, a combination of pairs, or complementary pairs – just by chance. Note that this estimation is written not for the whole model construction process, but only for its first step, where we consider only the "seed" sequences.

Let us consider the frequencies of the predicted single sites (f) of the TFs included in the model and the frequencies of all possible pairs (F) constructed of these sites. If the frequencies of the single sites and of the pairs satisfy the equation

Fij = fi·fj,     (1)

we can interpret the Fij as the probabilities of independent events, which is a prerequisite for the following formalism.

We measured the frequencies of the predicted single sites and the frequencies of all possible pairs in the (-)-training set as

fi = mi/N and Fij(measured) = Mij/N,

where N is the number of sequences in the (-)-training set, mi is the number of sequences possessing the i-th site, and Mij is the number of sequences possessing the pair of the i-th and j-th sites. Fij was then calculated as in (1) and compared with the measured value. We did not take distances and orientations into consideration; the probability estimated for the general case will only decrease with the addition of new constraints.

For all cases investigated in this work, the difference between the calculated and measured values did not exceed one standard deviation (σ), in only one case reaching 1.5 σ (data not shown). This confirms the correctness of using the pair frequencies as probabilities in this case.
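The frequency measurement and independence check above are mechanical to implement. A minimal sketch follows; neg_set is an illustrative stand-in (sequence ID -> set of TFs whose sites Match™ detected), and the toy membership rule exists only to generate data.

# Sketch: measuring single-site and pair frequencies in the (-)-training
# set and checking the independence assumption Fij ≈ fi·fj (eq. 1).
from itertools import combinations

def frequencies(neg_set, tfs):
    N = len(neg_set)
    f = {t: sum(t in sites for sites in neg_set.values()) / N for t in tfs}
    F = {(a, b): sum(a in s and b in s for s in neg_set.values()) / N
         for a, b in combinations(tfs, 2)}
    return f, F

tfs = ["NF-kB", "AP-1", "C/EBP", "Sp1"]
neg_set = {f"seq{i}": {t for t in tfs if (i + len(t)) % 3 == 0}
           for i in range(2040)}                     # toy (-)-training set
f, F = frequencies(neg_set, tfs)
for (a, b), measured in F.items():
    expected = f[a] * f[b]                           # eq. (1)
    print(a, b, round(measured, 3), round(expected, 3))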
Let us estimate the probability Ppair of finding a set of k sequences in N with any (at least one) pair that is the same in all k sequences. We can enumerate all possible pairs of sites of the considered TFs, considering only the cases of independent sites (i < j). Let U be the number of all possible pairs; then we can renumber the pair frequencies as

Fij = Fu, u ∈ {1,..., U}.

It is easy to show that the probability Ppair can then be calculated as

Ppair = 1 - (1 - F1^k)·(1 - F2^k)· ... ·(1 - FU^k).     (2)

Let us estimate the probability Ppairs2 of finding k sequences with any common pairwise combination of pairs (a pair of pairs). The pairs of pairs may consist either of 3 sites (when one site is shared) or of 4 different sites; their probabilities therefore are

Q = fi·fj·fl (3 sites) and Q = fi·fj·fl·fo (4 sites),

where fi, fj, fl, fo are the frequencies of the single sites of the considered TFs, i < j < l < o. We can enumerate all possible pairs of pairs (notating their probabilities as Qv, v ∈ {1,..., V}). Let V be the number of all possible pairs of pairs, V = t + s, where t and s are the numbers of 3-site and 4-site combinations, respectively. Analogously to (2), the probability of finding k sequences each possessing a pair of pairs of one type is

Ppairs2 = 1 - (1 - Q1^k)·(1 - Q2^k)· ... ·(1 - QV^k).

Let us estimate the probability of finding k sequences with any complementary pair (complementary combination). We consider two pairs as complementary if each of them is found in the seed set in not more than 60% and not less than 40% of the sequences, with an allowed overlap of 20%; the two complementary pairs together must cover the whole seed set. In the case studied here, comprising the 12 sequences of the seed set, we fixed that each of the pairs should be present in at least 5 but not more than 7 sequences, and that they are allowed to co-occur in 0–2 sequences.

The probability Pcompl that we choose 12 sequences possessing any one pair of complementary pairs in accordance with these requirements is obtained by summing, over all pairs u, w ∈ {1,..., U} and over all admissible count combinations, the products of the corresponding binomial coefficients and pair probabilities (note that this formula implies that Pcompl reaches its maximum when the frequencies of both pairs are 0.5).

All the probabilities were calculated for the (-)-training set of 2040 5'-upstream sequences and for the set of 5 selected transcription factors (see Methods). From the results it can be seen that the simultaneous occurrence of 1 or 2 pairs in 12 randomly chosen sequences has a rather high probability, and thus we cannot base our model construction on the search for only these features. (Ppairs2 describes the probability of finding any combination of 3 or 4 sites and therefore of up to 6 pairs; obviously, the simultaneous search for more than 6 pairs would definitely overfit the model, so we do not consider this case.) The probability of finding 12 sequences sharing complementary pairs is much lower, so the consideration of a complementary combination makes the model much more specific, and the probability of finding a model with a complementary pair "by chance" is sufficiently low for us to claim that the proposed algorithm is valid. Note that this is a very rough estimation, considering only the upper borders; we would like to emphasize once more that the probabilities were calculated without considering orientation and distance constraints, and that this estimation is made only for the very first step of the analysis: the choice of a seed set with the needed properties. Obviously, this value depends on the number of sequences in the "seed". Note that when we extend our requirements for the simultaneous search to the whole (+)-training set (which is the next step of the model construction), the probability of constructing a model "by chance" drops dramatically.
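The complementary-pair probability can also be checked empirically, in the spirit of the 2000-iteration permutation test described earlier. The sketch below is a Monte Carlo estimate under simplifying assumptions (pair occurrences treated as independent Bernoulli events with given frequencies); all numbers are illustrative.

# Sketch: Monte Carlo estimate of Pcompl for one candidate pair of
# complementary pairs in a 12-sequence seed set.
import random

def draw_is_complementary(freq_u, freq_w, k=12, lo=5, hi=7, max_overlap=2):
    """One random draw of k sequences; test the complementary-pair rule."""
    has_u = [random.random() < freq_u for _ in range(k)]
    has_w = [random.random() < freq_w for _ in range(k)]
    n_u, n_w = sum(has_u), sum(has_w)
    overlap = sum(u and w for u, w in zip(has_u, has_w))
    covered = all(u or w for u, w in zip(has_u, has_w))
    return (lo <= n_u <= hi and lo <= n_w <= hi
            and overlap <= max_overlap and covered)

random.seed(1)
trials = 2000
hits = sum(draw_is_complementary(0.5, 0.5) for _ in range(trials))
print(f"estimated Pcompl for one pair of pairs: {hits / trials:.4f}")
# The estimate is largest at frequencies 0.5, in line with the remark
# above; for U candidate pairs, the chance of ANY complementary pair
# is bounded by U*(U-1)/2 times this single-pair value.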
The whole list of genes found with the promoter model when applied to the collection of 13000 human 5'-upstream sequences is provided as a supplementary file; this list is not cleaned of hypothetical genes. A second supplementary file contains the same list cleaned of hypothetical genes.
Gene expression patterns from peripheral blood cells may be useful as biomarkers for monitoring MS progression and response to therapy, argue Kaminski and Achiron.

Despite significant progress in increasing our understanding of the immune mechanisms of multiple sclerosis (MS), in improving clinical classification and brain imaging, and in developing new treatments, the factors that determine the course of the disease are mostly unknown. The most commonly used disease-modifying therapies are interferon β (IFNβ) and glatiramer acetate.

The diagnosis and management of disease could be transformed thanks to the completion of the human genome project, the availability of sequence information for nearly every gene, and the advent of novel high-throughput technologies that make it possible to profile a biological state on a genomic scale.

Glatiramer acetate: A synthetic protein made of four amino acids found in myelin. It is used as an immunomodulator drug in treating MS.
IFNβ: A cytokine that is secreted from fibroblasts in response to stimulation by a live or inactivated virus or by double-stranded RNA. It is used as an immunomodulator drug in treating MS.
Microarray: A technology that allows the simultaneous profiling of the expression of thousands of genes (even whole genomes). Multiple gene detectors (oligonucleotides or cDNAs) are deposited on a slide that is hybridized with fluorescently labeled samples.
PCR (polymerase chain reaction): The exponential amplification of a DNA fragment using repeated activation of a heat-stable DNA polymerase.
Real-time PCR: A method in which the quantitation of the products of PCR is made by measuring fluorescent emission. It is used for accurate quantitation of mRNA.
RT-PCR (reverse transcription–polymerase chain reaction): PCR that is performed on cDNA generated from RNA. It is used for mRNA detection and quantitation.
Supervised classification: A process in which classifiers are learned from user-defined groups (classes).
Unsupervised classification: A process in which classifiers are learned without user-defined groups (classes), i.e., without a predefined training set.

In diseases that do not require tissue resection for diagnosis or therapy, it is rare to obtain tissues for analysis. This problem is even more pronounced in diseases like MS, in which the target organs are the very inaccessible brain and spinal cord. Despite these limitations, several groups used microarrays to analyze brain tissues obtained posthumously from patients who had MS and identified genes that characterized either acute or chronic lesions [10,11].

In MS, looking for markers of disease activity in the much more accessible peripheral blood does not require a significant leap of faith. MS is an autoimmune disease, and it is possible that some of the cells involved in the pathogenesis of the disease will be found in the bloodstream. Abnormal T cell populations have repeatedly been observed in the peripheral blood of patients with MS [12,13,14]. Whether such abnormalities would leave a detectable signature in blood gene expression could reasonably be doubted. Fortunately, recent observations suggest that these doubts are unfounded. Bomprezzi et al. determined peripheral blood expression profiles that distinguished patients with MS from healthy controls. Weinstock-Guttman et al. analyzed the gene expression response of patients with MS to IFNβ.

And now, in a new study published in last month's PLoS Biology, Baranzini et al. provide evidence that expression profiling of peripheral blood can predict the response to IFNβ therapy. Interestingly, individual genes and pairs of genes did not perform that well, and all three genes in a triplet were required for the highest accuracy (about 80%–90%). The minimal combinatorial number of genes that contains the most predictive information is not known, since combinations of more than three genes were not tested.
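An exhaustive triplet search of the kind described above is easy to reproduce in outline. The sketch below uses simulated data and standard scikit-learn calls; it is an illustration of the technique, not the study's actual pipeline.

# Sketch: exhaustive search over gene triplets as classifiers.
from itertools import combinations
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_genes = 50, 10
X = rng.normal(size=(n_patients, n_genes))          # toy expression matrix
y = (X[:, 0] + X[:, 3] - X[:, 7] > 0).astype(int)   # toy responder label

best, best_score = None, 0.0
for trio in combinations(range(n_genes), 3):
    # Cross-validated accuracy of a classifier restricted to 3 genes.
    score = cross_val_score(LogisticRegression(), X[:, list(trio)], y,
                            cv=5).mean()
    if score > best_score:
        best, best_score = trio, score
print(f"best triplet {best}, cross-validated accuracy {best_score:.2f}")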
Although the results were not tested on an independent dataset, as is frequently requested, the findings are nonetheless persuasive. What could Baranzini and colleagues' findings mean? Clearly, the most obvious conclusion is that the lack of response did not result from the deactivation of IFNβ. The effect of IFNβ on MX1, IFNAR1, and STAT2 was observed for two years in all patients, suggesting that the response did not depend on IFNβ bioavailability. Considering that PBMCs represent an admixture of multiple cell types, the most plausible explanation is a simple lack of shift in subcellular populations.

However, the importance of Baranzini and colleagues' study lies not in its mechanistic insights, but in its clinical relevance. The careful design of the experiment, the use of reproducible real-time PCR instead of microarrays, the meticulous analysis, and the previous observations [17,19,20] together make a strong case for peripheral blood expression patterns as clinically useful biomarkers in MS.
Malignant small bowel tumors are very rare and leiomyosarcoma accounts for less than 15% of the cases. Management of these tumors is challenging in view of nonspecific symptoms, unusual presentation and high incidence of metastasis. In this case report, an unusual presentation of jejunal sarcoma and management of liver metastasis with radiofrequency ablation (RFA) is discussed.A 45-year-old male presented with anemia and features of small bowel obstruction. Operative findings revealed a mass lesion in jejunum with intussusception of proximal loop. Resection of bowel mass was performed. Histopathological findings were suggestive of leiomyosarcoma. After 3-years of follow-up, the patient developed recurrence in infracolic omentum and a liver metastasis. The omental mass was resected and liver lesion was managed with radiofrequency ablation.Jejunal leiomyosarcoma is a rare variety of malignant small bowel tumor and a clinical presentation with intussusception is unusual. We suggest that an aggressive management approach using a combination of surgery and a newer technique like RFA can be attempted in patients with limited metastatic spread to liver to prolong the long-term survival in a subset of patients. Malignant tumors of the small bowel are rare and accounts for the <2 % of total gastrointestinal (GI) malignancy ,2. The aA 45-year-old male was referred to our center with the diagnosis of suspected non-Hodgkin's lymphoma (NHL) of the bowel in December 1999. He had generalized weakness for 2-years along with recurrent vomiting, occasional constipation and melaena for last 2 months. The diagnosis was considered after an ultrasound (USG) guided fine needle aspiration cytology (FNAC) from the intra-abdominal mass done elsewhere and showed features suspicious of NHL.At presentation, patient's general condition was poor and he was dehydrated and pale. There was no peripheral lymphaedenopathy. Abdominal examination revealed an ill-defined, mobile, nontendor lump in left paraumblical region extending up to left lumber region. There was no hepato-splenomegaly. Examination of chest and cardiovascular system was normal. After initial resuscitation with crystalloids and blood transfusion, patient was further investigated.At presentation his hemoglobin was low (6.4 gm%) but rest of the routine haematological investigations were within normal limit. The chest X-ray was normal. Abdominal ultrasound (USG) showed a left upper abdominal mass lesion suggestive of bowel mass. The upper GI endoscopy was normal. The barium meal follow through examination was suggestive of intussusception of proximal jejunum and a suspected mass lesion at the leading edge of intussusceptum. An USG guided core needle biopsy of mass was performed because of earlier suspicion of NHL, which showed smooth muscle bundles with areas of necrosis. Based on the above findings and biopsy report, patient was taken up for exploratory laparotomy 3 weeks after initial presentation.Operative findings revealed an intussuscepted proximal jejunum loop 12 cm distal to duodeno-jejunal flexure and a vascular polypoidal growth measuring 6.5 × 5 × 3.5 cms on serosal surface of jejunum. Liver and spleen were normal. There was no evidence of mesenteric or retroperitoneal lymphaedenopathy, ascites or peritoneal disease. The resection of involved segment of jejunum with 5 cm margins and an end-to-end anastomosis was performed. 
The postoperative course was uneventful.

Pathological examination of the specimen revealed two mass lesions, measuring 4.5 cm and 3 cm, in continuity, located on the mucosal and serosal surfaces of the jejunum, respectively. Microscopically, it showed a spindle cell tumor with a mitotic rate of 5/10 high power fields (HPF). On immunohistochemistry (IHC) the tumor was negative for desmin, S-100, CD34 and c-kit (CD117) but focally positive for actin. The resection margins were free. The final diagnosis of malignant spindle cell tumor – leiomyosarcoma of the jejunum – was made. Because of the negative margins and no evidence of disease elsewhere, no adjuvant therapy was planned and the patient was kept on regular follow-up. He was assessed clinically at three-month intervals and an USG of the abdomen was performed six-monthly. The patient remained disease free for 39 months, after which a follow-up USG showed a space-occupying lesion (SOL) in segment VII of the liver and a mass lesion in the left upper abdomen. The patient was asymptomatic and abdominal examination revealed no abnormality. A computerized tomographic (CT) scan of the abdomen was performed, which showed a 2 × 2 cm, brightly enhancing SOL in segment VII of the liver.

The authors declare that they have no competing interests.

SVSD, NKS: Surgical management and review of manuscript. AS, SH, SK: Review of literature and preparation of manuscript. ST, DKP: Radiofrequency ablation. All authors have read and approved the contents of the manuscript.
The delivery of implantable cardioverter defibrillator (ICD) therapy is sophisticated and requires the programming of over 100 settings. Physicians tailor these settings with the intention of optimizing ICD therapeutic efficacy, but the usefulness of this approach has not been studied and is unknown. Empiric programming of settings such as anti-tachycardia pacing (ATP) has been demonstrated to be effective, but an empiric approach to programming all VT/VF detection and therapy settings has not been studied. A single standardized empiric programming regimen was developed based on key strategies with the intention of restricting shock delivery to circumstances when it is the only effective and appropriate therapy. The EMPIRIC trial is a worldwide, multi-center, prospective, one-to-one randomized comparison of empiric to physician tailored programming for VT/VF detection and therapy in a broad group of about 900 dual chamber ICD patients. The trial will provide a better understanding of how particular programming strategies impact the quantity of shocks delivered and facilitate optimization of complex ICD programming. Over the past decade ICD implantation has become increasingly straightforward, yet ICD programming and follow up has become more complex due to device feature and capability enhancements. While sophisticated algorithms provide high sensitivity and improved specificity of arrhythmia detection, allowing delivery of necessary effective therapy with minimization of inappropriate defibrillation shocks, detection and therapy of ventricular tachycardia (VT) / ventricular fibrillation (VF) still requires programming about 100 settings .Good programming choices are crucial as they relate to patient acceptance of ICD therapy. It has been found that patients who receive multiple shocks have greater difficulty adjusting to the ICD implant. These patients may become anxious or depressed, especially if a prior history of these ailments exists . ReducinTo date, there is no proven consensus on how to use information about the patient's complex diseases to program the ICD, and usually little is known about the patient's spontaneous VT rates, their risk of syncope, or therapies to effectively terminate spontaneous ventricular arrhythmias. Furthermore, ICD indications have dramatically changed within the last five years. Physicians may retain old programming habits even with enhanced devices or expanding patient indications, which may result in sub-optimal detection and therapy, such as unnecessary shocks for faster VT, supraventricular tachycardia (SVT), and non-sustained VT. Physicians often adjust many programmable settings that may benefit the patient. For example, physicians may prescribe patient-specific regimens for anti-tachycardia pacing (ATP) or shock energies based on lab testing. While one would expect this tailoring of programming to improve outcomes, it has never been studied.Empiric programming has been shown to be effective for subsets of ICD settings, including subsets of dual chamber detection and ATP therapies ,5-10. WhA proven optimal programming approach would be useful for simplifying therapy prescription, improving therapy outcomes, reducing inadvertent programming errors, and overall reducing shock-related morbidity. The EMPIRIC trial has been designed to evaluate a standardized empiric programming regimen by testing the hypothesis stated below. 
The EMPIRIC trial outcome will provide an understanding of how programming strategies impact defibrillation shock delivery in ICD therapy. This trial tests the hypothesis that the shock-related morbidity of ICD therapy is similar whether patients are treated with a standardized empiric programming regimen for VT/VF detection and therapy or with a patient-specific, physician-tailored approach.

Only sustained VT/VF that cannot be painlessly terminated should result in shock therapy, and it is unusual for supraventricular arrhythmias (SVT) to require shock therapy. Shock morbidity is related to the number and frequency of shocks that patients receive, and therefore morbidity is reduced if shocks are delivered only when necessary for effective arrhythmia termination. Thus, indices that address shock morbidity should reflect both the frequency and appropriateness of shocks for VT/VF and SVT. Shock morbidity is quantifiable by determination of the following:

♦ proportion of true VT/VF episodes that are shocked
♦ proportion of true SVT episodes that are shocked
♦ time to first shock (VT/VF or SVT)
♦ time to first VT/VF shock
♦ time to first SVT shock

These parameters are used to define the EMPIRIC trial's main objectives. The primary objective is to demonstrate that the proportion of shocked VT/VF episodes and the proportion of shocked SVT episodes in a population whose ICDs are programmed using a standardized regimen for VT/VF detection and therapy is either similar to or less than the same proportion in a similar population whose ICDs are programmed using a physician-tailored approach. This primary objective was chosen to independently evaluate the effects of programming on both appropriate and inappropriate ICD shocks (which are likely to have different implications for patient management). The advantage of this approach is that it focuses on the frequency of shock delivery while also allowing an assessment of their appropriateness. However, this assessment could be confounded by a disproportionate number of SVT events in the two study groups. For example, an abundance of non-shocked SVT events in the physician-tailored arm, despite a greater incidence of inappropriate SVT shock therapies in that arm, would nevertheless result in the proportion of SVT episodes shocked being similar in the two arms. The analysis is also heavily dependent on the electrogram data stored in the ICDs. Given the electrogram storage capability of ICDs, differing rates of electrogram storage might occur between study arms or between VT/VF and SVT episodes, which may skew the amount of data available for analysis. Therefore, the key secondary endpoint in this study is considered to be the time to delivery of first shock therapy in any given patient. This endpoint offers the advantage that it enables patient cross-over to occur between the study arms without endpoint compromise, and it is a clinically robust indicator of patient shock-related morbidity. Furthermore, its analysis is not influenced by the appropriateness or otherwise of a shock therapy and therefore cannot be confounded by the differential occurrence of non-shocked SVT events in the study arms.

Other secondary endpoints will further evaluate the impact of the standardized programming regimen on patients by an assessment of detection performance, health care utilization, shock impact on device longevity, and "true VT/VF" episode durations.

The EMPIRIC trial is a worldwide, multi-center, prospective, one-to-one randomized comparison of empiric to physician-tailored programming.
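Computing the shock-morbidity indices above from adjudicated device data is straightforward. The sketch below uses an illustrative episode table (the DataFrame layout and values are assumptions, not the trial's actual data model).

# Sketch: computing the shock-morbidity indices from an episode log.
import pandas as pd

episodes = pd.DataFrame({
    "patient": [1, 1, 2, 3, 3, 3],
    "arm":     ["empiric", "empiric", "tailored", "tailored",
                "tailored", "tailored"],
    "type":    ["VT/VF", "SVT", "VT/VF", "SVT", "SVT", "VT/VF"],
    "shocked": [True, False, True, True, False, False],
    "days_to_event": [30, 45, 12, 20, 60, 90],
})

# Proportion of true VT/VF and true SVT episodes shocked, by study arm.
print(episodes.groupby(["arm", "type"])["shocked"].mean())

# Time to first shock per patient (the key secondary endpoint); in the
# real analysis, patients with no shock are censored at last follow-up.
first_shock = (episodes[episodes["shocked"]]
               .groupby("patient")["days_to_event"].min())
print(first_shock)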
About 900 patients were enrolled worldwide at 52 centers from August 2002 to October 2003. Each patient will be followed for approximately one year.The inclusion criteria require patients to meet all of the following conditions:1. Indicated for an ICD according to internationally accepted criteria.2. Willing to sign informed consent or offer a legal representative who can provide consent.3. Achieved a 10 Joule safety margin at implant.Patients are excluded if they:1. Have permanent atrial fibrillation (AF).2. Had a previous ICD.3. Have a medical condition that precludes the testing required by the protocol or limited trial participation.4. Have a life expectancy less than one year.5. Are unable to complete follow-ups at the trial center.6. Are enrolled or participating in another clinical trial.Patients receiving a Marquis DR ICD are randomized to one of the two programming approaches after meeting a 10 J safety margin. In order to control for physician practice between the two treatment arms, randomization is stratified by treatment center. Further, since the incidence and prevalence of spontaneous VT/VF and SVT among primary prevention patients is not well known, randomization is also stratified by ICD indication (secondary vs. primary). A secondary indication includes patients with a history of spontaneous sustained VT/VF or syncope with suspected VT. A primary prevention indication includes all other patients.The physician tailored approach is based on the standard practice of each physician. All VT/VF programming may be tailored to the patient except that VT detection must be turned to 'On' or 'Monitor' to record episodes of slower VT.The empiric standardized regimen is based on various programming strategies to reduce shocks. In this arm, initial device settings are fixed . In order to protect protocol design integrity, reprogramming will be encouraged for non-justified programming deviations. In this manner the initial treatment strategies are tested using an intention-to-treat analysis with characterization of programming changes.The empiric arm standardized programming regimen is based on the following key strategies to reduce shocks.Multiple ATP attempts for VT≤ 200 bpm: Three sequences of ATP will be attempted for rhythms with ventricular rates ≤ 200 bpm. Empiric ATP has been shown to terminate ≥ 90% of VTs in the VT zone [• VT zone -10. Furt VT zone . Three s VT zone . ATP wil VT zone ,12-14.ATP for VTs 201 – 250 bpm: One sequence of ATP will be delivered for fast VTs (FVT) using the FVT via VF zone, which maintains sensitivity to polymorphic VT (PVT) and VF and delivers ATP if the 8 beats prior to FVT detection are ≤ 250 bpm. Approximately 81% of ICD detected VF is monomorphic VT (MVT). MVT can be pace-terminated approximately 75% of the time with one sequence of ATP, without increased risk of syncope or acceleration [• leration ,7,15.Longer detection duration: The VF initial beats to detect will be set to 18 of 24. Shorter beats to detect are often programmed by physicians, but may increase the unnecessary shocks for non-sustained VT and for SVTs. At least 25% of ICD-detected VF is non-sustained VT/VF [• ed VT/VF -17.st VF and FVT Shock: High Output 1A 30 Joule energy will be used for the first VF and FVT shock. This will allow additional time for spontaneous conversions that frequently occur. A higher shock energy may also improve 1st shock success and therefore reduce the need for multiple shocks within an episode. 
The LESS study found no difference in 1st shock success with 31 J versus DFT++, however it analyzed all VT/VF faster than 200 bpm [st shocks is due to concerns about syncope. Several recent studies have shown very low syncope rates [• 200 bpm ATP shoupe rates ,19 FurthEmpiric SVT Criteria: The PR logic criteria of AF/A. Flutter and Sinus Tach will be programmed 'On' in all patients. These criteria have been shown to have a relative VT/VF sensitivity of 100% and a positive predictive value to 88.4% [• to 88.4% .SVT Criteria applied to faster rates: The SVT limit and VF rate cut-off will be increased to 200 bpm in all patients to provide SVT discrimination at faster rates. Two of the top five reasons for inappropriate detections in the GEM DR Study (933 patients) were a ventricular rate during AF in VF zone and a SVT cycle length faster than programmed SVT limit [• Avoid detecting 1:1 SVTs with Long PRs as VT: 1:1 SVTs with long PR intervals accounted for 38% of inappropriate detections in the Gem DR (7271) Clinical Study [• al Study .Longer detection duration: VF initial beats to detect will be set to 18 of 24. Shorter beats to detect may result in more unnecessary shocks for SVTs or ventricular over-sensing.• ATP attempts: In addition to terminating ventricular arrhythmias without shocks, ATP should eliminate some inappropriate shocks when inappropriate detections occur by terminating SVTs or slowing conduction.• VT rate cut-off is one of the most important ICD settings because it can result in untreated symptomatic VT if set too fast, however it can result in unnecessary therapies for non-sustained VT, SVTs, or sensing issues, if set too slow. Reports have shown that some secondary prevention patients have significant symptoms for VTs outside treated zones [The ed zones . The VT both the proportion of shocked VT/VF episodes and the proportion of shocked SVT episodes are no more than 10 percentage points greater in the empiric arm than the physician tailored arm. The chosen margin 10 percent is considered clinically important.The primary endpoint is the proportion of true episodes that are shocked during the 12-month follow-up period. The standardized empiric programming regimen will be considered non-inferior to the physician tailored programming approach if It is assumed that 24% of patients will have at least one true VT/VF episode and 33% of patients will have at least one true SVT episode during the 12-month follow-up period. Based on unpublished data from other Medtronic trials, the within-patient correlation coefficient for multiple episodes is assumed to be 0.3. Assuming a similar distribution of episode counts per patient as observed in these previous trials and a shock rate of 30% and 14% for VT/VF and SVT episodes respectively, a total of 900 patients (450 in each arm) will give at least 80% power for the VT/VF hypothesis and 90% power for the SVT hypothesis, each tested at the significance level 0.05.The critical secondary endpoint, time to first shock therapy, will be analyzed using the Cox proportional hazards model for 1) any VT/VF or SVT, 2) true VT/VF only and 3) true SVT only. 
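The stated sample-size assumptions can be sanity-checked with a rough normal-approximation calculation. The sketch below is an illustrative simplification using a design-effect adjustment for the within-patient correlation of 0.3; the trial's actual calculation used the full distribution of episode counts per patient, so the numbers will differ.

# Sketch: rough non-inferiority power check for the shocked-episode
# proportion, with a design effect for clustered (per-patient) episodes.
from math import sqrt
from scipy.stats import norm

def ni_power(p, margin, episodes_per_arm, mean_per_patient, icc,
             alpha=0.05):
    # Design effect inflates the variance of clustered episode data.
    deff = 1 + (mean_per_patient - 1) * icc
    n_eff = episodes_per_arm / deff
    se = sqrt(2 * p * (1 - p) / n_eff)
    # One-sided non-inferiority test against the 10-point margin.
    return norm.cdf(margin / se - norm.ppf(1 - alpha))

# Illustrative inputs: 450 patients/arm, 24% with a VT/VF episode,
# ~2 episodes per affected patient, 30% of VT/VF episodes shocked.
episodes = 450 * 0.24 * 2
print(f"approx. power: {ni_power(0.30, 0.10, episodes, 2, 0.3):.2f}")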
The empiric programming approach will be considered non-inferior if the upper confidence limit for the hazard ratio is less than 1.5.To better understand the changing ICD patient populations, we will investigate whether or not the proportion of appropriate and inappropriate shocks delivered is related to the following baseline characteristics: main indication for implant , left ventricular ejection fraction, CAD status, history of Atrial Tach/Atrial Fib/Atrial Flutter, NYHA classification, use of amiodarone, sotalol, or beta-blockers, and inducibility for VT/VF. In addition, to facilitate understanding of the optimal programmable settings for various patient sub-groups, we will consider the impact of programmable settings on outcomes. In particular, we will examine the "treated cut-off" (TC), which is the VT detection cut-off if VT detection is 'On' or the VF detection cut-off if VT detection is 'Off' or 'Monitor'. Outcomes in patients with a faster TC (physician tailored arm) will be compared to patients with slower TC (either physician tailored arm or empiric arm). Other programmable settings that will be investigated include the number of beats to detect VF and the number of ATP attempts based at various rates . The types of arrhythmias, median ventricular cycle length, and therapies delivered will also be characterized relative to the patient's conditions and programming. Furthermore, the incidence of slower VTs in patients without a history of spontaneous, sustained monomorphic VT will be characterized.The EMPIRIC trial is a worldwide, multi-center, prospective, one-to-one randomized comparison of shock- related morbidity in a population of about 900 ICD patients whose ICD therapy is determined either by a standardized programming regimen or by physician tailored programming of VT/VF detection and therapy. Shock-related morbidity is assessed by a primary objective that compares between study arms the proportion of VT/VF episodes that are shocked and the proportion of SVT episodes that are shocked, and by a key secondary endpoint that compares to time to first shock therapy.ICD patient populations have rapidly changed within the last five years but little has been published on optimal programming for the emerging patient subsets . Therefore a standardized regimen of parameters is used in this trial for all patient populations. Today's patient population is quite diverse, so a slightly more sophisticated programming approach may be necessary (e.g. change VT cut-off based on main ICD indication) or perhaps complex physician tailoring is critical to reducing shocks.The EMPIRIC trial will characterize the shock morbidity of a single empiric programming approach compared to patient-specific, physician tailored programming. Empiric programming may be an acceptable strategy if it achieves equivalence with physician tailored programming. The EMPIRIC trial results will also provide a better understanding of how particular programming strategies impact the frequency of shocks delivered and will facilitate a way to optimize complex ICD programming.1. Have you received reimbursements, fees, funding, or salary from an organization that may in any way gain or lose financially from the publication of this paper in the past five years, or is such an organization financing the article-processing charge for this article?Dr. Morgan: Yes, Medtronic has paid me honoraria.Dr. Sterns: Yes, I am a paid investigator in several Medtronic clinical trials and key investigator in the present trial. 
I understand that Medtronic is paying the processing fee for this article.

Dr. Wilkoff: Yes, Medtronic, Guidant, St. Jude Medical.

Hanson, Ousdigian, and Otterness: Yes, employees of Medtronic.

2. Have you held any stocks or shares in an organization that may in any way gain or lose financially from the publication of this paper?

Dr. Morgan, Dr. Sterns, and Dr. Wilkoff: No.

Hanson, Ousdigian, and Otterness: Yes, own Medtronic stock.

3. Do you have any other financial competing interests?

Dr. Morgan, Dr. Sterns, Dr. Wilkoff, Hanson, Ousdigian, and Otterness: No.

4. Are there any non-financial competing interests you would like to declare in relation to this paper?

Dr. Morgan, Dr. Sterns, Dr. Wilkoff, Hanson, Ousdigian, and Otterness: No.

All 6 authors contributed to the study design and writing of this manuscript.
This was particularly disturbing in that we are in the midst of a growing epidemic of obesity and diabetes with very alarming figures and projections from all over the world. Any intervention that has the potential for helping curb this dangerous epidemic which claims thousands of lives every day should be looked at with a great deal of objectivity.Before beginning the review of this book, we had no particular opinion about the role of low carbohydrate diets in diabetes. In order to write a fair and unbiased review, we have done a rather extensive search on the subject. One of the most disturbing findings of our search is the amount of hostility towards low carbohydrate diets that is on the web and in the scientific literature. We found several sites that present no scientific arguments but are, rather, full of Letter on Corpulence Addressed to the Public [The low carbohydrate approach, in fact, is not new and was used in England more than a century ago, made popular by William Harvey , an ENT e Public , the firThe Atkins Diabetes Revolution plan is The Atkins Diabetes Revolution book is an attempt by the authors to present the low carbohydrate diet as a preventive and treatment strategy for patients with type 2 diabetes and those with the metabolic syndrome, who are at high risk for developing diabetes and cardiovascular disease. In doing so, the book, which is very well written, and which clearly presents illustrative cases, explains very complex metabolic concept in a very easy to read and understandable format. The first nine chapters explain the different concepts involved in glucose and lipid metabolism and the interplay of the various cardiovascular risk factors that culminate in cardiovascular disease the number one killer of Americans today. Definitions of metabolic syndrome, pre-diabetes, body mass index, waist to hip ratio, central obesity and their relationship to diabetes, heart attacks and strokes, are eloquently presented with a great deal of accuracy yet in a simple format. Most impressive were the case presentations, especially that of reactive hypoglycemia and carbohydrate craving. This response is associated with hyperinsulinemia in the pre-diabetic phase and sometimes puzzles clinicians unless they know to look for it.The second section of the book is devoted to an in-depth discussion of the various macro and micronutrients and their role in diabetes and obesity. Concepts such as the glycemic index and glycemic load are very well illustrated. The last section consists of meal plans and menus of low carbohydrate diet that the book is advocating.1c, something that has been clearly shown to reduce microvascular disease in both type 1 and type 2 diabetes [The concept of low carbohydrate diet and glycemic control certainly has a pathophysiological merit. First, dietary carbohydrates are the principal source for the initial rise of glucose in the diabetic populations, who generally have a defect in the first phase insulin secretion that is responsible for handling the glucose load . There idiabetes ,8. Secondiabetes ,10. So, diabetes ,10. Alsodiabetes . This fadiabetes . This ladiabetes . 
The facdiabetes -15.In two recent randomized controlled trials published in the New England Journal of Medicine ,17, the 2subfractions [Despite the evidence from these randomized controlled trials, published in the prestigious New England Journal of Medicine, there is a significant amount of reluctance in the scientific community to acknowledge the beneficial effects of low carbohydrate diets. These studies, in fact, provide a striking example of this resistance. A commentary in the same issue of the New England Journal of Medicine states tractions ,22.On the other hand, the American Diabetes Association, despite recommending the traditional low fat diet, has recently reduced the recommended carbohydrate contents in the diet, perhaps reflecting a trend towards a reduced carbohydrate diet to follow .Atkins Diabetes Revolution has a list price of $25.95 and is available at Amazon.com and presumably other sites for half that price. Possibly, a shorter and still more affordable version of the book would be helpful for diabetic patients, their families and for the general reader, to help identify their risk for the disease.Returning to the Atkins book, despite the fact that the book is very well referenced, certain statements such as "high carbohydrate diet leads to diabetes" are not well substantiated, unless of course such a diet leads to weight gain, which it may. Furthermore, the book does not devote a sufficient amount of space discussing the side effects associated with dieting in general and low carbohydrate diet in particular. This is of concern, since it leaves the reader with the impression that the low carbohydrate diet or dieting, in general, has no negative consequences. Nonetheless, the amount of information the book provides in a simple, yet accurate format will benefit patients with diabetes and their families as well as those who are at risk for developing diabetes and the metabolic syndrome. If, after reading this book, the reader is able to identify that he or she is at risk for diabetes and the metabolic syndrome and takes action that could potentially save his or her life the book will be a valuable contribution. Atkins Diabetes Revolution, however, is sufficiently convincing to make us believe that some form of low carbohydrate intervention is worth investigating and should be considered by practitioners. The highly negative un-scientific response of critics, if anything, encourages us in this direction.As clinicians, we would not be comfortable recommending any diet without first hand experience. The
Health disparities are a growing concern. Recently, we conducted a practice-based trial to help primary care physicians improve adherence with 21 quality indicators relevant to the primary and secondary prevention of cardiovascular disease and stroke. Although the primary concern in that study was whether patients in intervention practices outperformed those in control practices, we were also interested in determining whether minority patients were more, less, or just as likely to benefit from the intervention as non-minorities.Baseline (fourth quarter 2000) and follow-up (fourth quarter 2002) data were obtained from 3 intervention practices believed to have at least 10% minority representation. Two practices had a black (non-Hispanic) population sufficient for analysis, while the other had a sufficient Hispanic population. Within each practice, changes in the 21 indicators were compared between the minority patient population and the entire patient population. The proportion of measures in which minority patients exhibited greater improvement was calculated for each practice and for all 3 practices combined, and comparisons were made using non-parametric methods.For all black patients, the observed improvement in 50% of 22 eligible study indicators was better than that observed among all white patients in the same practices. The average changes in the study indicators observed among the black and white patients were not significantly different (p = 0.300) from one another. Likewise for all minority patients in all 3 practices combined, the observed improvement in 14 of 29 (43.3%) eligible study indicators was better than that observed among all white patients. The average changes in the study indicators among all minority patients were not significantly different from the changes observed among the white patients (p = 0.272).Among 3 intervention practices involved in a quality improvement project, there did not appear to be any significant disparity between minority and non-minority patients in the improvement in study indicators. In 2002, the Institute of Medicine (IOM) issued a report suggesting that minorities are more likely than non-minorities to receive a lower quality of healthcare . BecauseDisparities are particularly evident in the area of chronic diseases. Although blacks are more likely than whites to have blood pressure monitoring, cholesterol screening, and smoking counseling, coronary heart disease is more prevalent among blacks than among whites . Additioth report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure outlines specific guidelines for preventing and managing hypertension, hyperlipidemia, and coronary heart disease [In hopes of improving health outcomes and prevention practices for all patients, much focus has recently been given towards the improvement in quality of healthcare. For example, the 7 disease . The 2nd disease . Other qRecently, there have also been a number of practice-based interventions aimed at improving the quality of healthcare for patients. For example, researchers have shown that a practice-based intervention can enhance the quality of care for families of young children . AdditioWhat these earlier interventions have lacked, however, are analyses examining whether the interventions have improved the quality of care for all patients, regardless of ethnicity. 
Because these types of interventions are heavily dependent on physician and/or clinical staff interaction with patients, because ethnic minorities may have less trust in their healthcare providers , and becThe aim of this study was to examine whether or not a multi-method quality improvement (QI) intervention was equally successful among patients of different ethnicities. Some of the findings from this QI intervention have been previously published ,12, and The multi-method QI intervention added practice site visits (for academic detailing and QI facilitation) and network meetings (for sharing of best practices) to the approach of guideline dissemination and audit and feedback, employed in a less intensive intervention. Ten sites received the intensive multi-method QI intervention, and ten sites received the less intensive intervention. The study was conducted in a practice-based research network (PPRNet) among users of a common electronic medical record , which historically provided audit and feedback to its practice members.As a supplement to the original study, we were also interested in whether minority patients were more, less, or just as likely to benefit from the intervention as non-minorities. The study presented here focused on outcome and process measures for minorities within 3 primary care practices, all of which received the intensive intervention. These 3 practices were selected because they each had a significant (i.e. > 10%) proportion of minority patients and had recorded patient ethnicity in their electronic medical record. Practice A is an urban internal medicine practice in the Midwestern U.S. with 5 healthcare providers. Practice B is a rural family medicine practice in the Northeastern U.S. with 8 healthcare providers. Practice C is an urban family medicine practice in the Southeastern U.S.A total of 21 study indicators data sets on standard microcomputers for analyses.To determine practice performance on the study indicators, participating practices ran a computer program to extract patient activity during the previous quarter from their electronic medical record. To protect patient confidentiality, the extract program assigned an anonymous numerical identifier unique to each patient. The extract program obtained demographic information such as age, ethnicity, and gender, and diagnoses, medications, laboratory data, and vital signs. Text of consultation reports, progress notes, and discharge summaries were not extracted. The data were copied to diskettes and mailed to PPRNet or sent electronically via a secure server. In the PPRNet offices, data were bridged to standard data dictionaries and converted to SASIn each patient's electronic medical record, ethnicity was recorded as white, black/African American, American Indian/Alaskan native, Asian, native Hawaiian/other Pacific islander, and "some other ethnicity", while ethnicity was recorded as Hispanic/Latino and non-Hispanic/Latino, all in concordance with the 2000 U.S. Census ethnicity categories. Currently, these physician practices allow the patient to designate their ethnicity categorization. However, because this process for collecting ethnicity data began in the middle of our study, some ethnicity categorizations were made by the office staff within each of the practices. 
Ethnicity data was only available on approximately 42% of patients, due to the fact that the electronic medical record software program did not require physicians to enter patients' ethnicity data until its most recent version was released, which occurred during the study time frame. Improvements in process and outcome measures were compared between minority and non-minority patients. Minority was defined as any ethnic designation other than white non-Hispanic.Changes in the process and outcome measures were of primary interest in this study. Within each practice, these changes were compared between the minority patient population and the white patient population. Measures were deemed eligible for comparison if at least 10 minority patients were included in the rate calculations. For example, if practice A only had 8 minority patients with a diagnosis of having had myocardial infarction (MI), then the measure of the percentage of MI patients who had been prescribed a beta blocker could not be compared between the minority and white patient population. The proportion of eligible measures in which minority patients exhibited greater improvement was calculated for each practice and for all 3 practices combined. A Wilcoxon signed rank test was used to test the hypotheses that minority patients exhibited changes similar to those of the non-minority patients. This study had approximately 80% power to detect a 6.6 percentage point difference between average improvement in the study indicators among all minority and non-minority patients.Baseline characteristics of the patients from the 3 practices are listed in table In these 3 physician practices, all of which were in the intervention arm of a randomized trial aimed at improving primary and secondary prevention of cardiovascular disease and stroke, we found that results for minorities were relatively similar to the results experienced by the overall practice populations. Change from baseline was greater among minority patients than among white patients for 48.3% of the 29 eligible study indicators, and the average changes in the study indicators among all minority patients were not significantly different from the changes observed among the white patients.There are some limitations of this study which should be noted. As noted earlier, the ethnicity status was only available on 42% of patients within the practices of interest; thus the results may not truly represent what occurred in these practices overall during the study. Given the relatively small number of eligible indicators for comparisons across ethnicities, this statistical power to detect subtle differences was not optimal. Nevertheless, the overall findings suggest that any true differences in this intervention's effectiveness across ethnicities are small.These findings are encouraging, and they suggest that the quality improvement strategies that have been developed to date for physician practices that use electronic medical records have a similar impact on minorities and non-minorities. 
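The paired non-parametric comparison described in the Methods can be reproduced with standard tools. A minimal sketch with illustrative numbers follows; each value is the change (follow-up minus baseline) in one eligible study indicator, in percentage points.

# Sketch: Wilcoxon signed-rank test on paired indicator-level changes.
from scipy.stats import wilcoxon

minority_change = [5.0, 2.1, -1.3, 8.4, 0.0, 3.2, 6.1, -2.0, 4.4, 1.5]
white_change    = [4.2, 3.0, -0.5, 7.9, 1.1, 2.8, 5.5, -1.2, 5.0, 2.2]

stat, p = wilcoxon(minority_change, white_change)
print(f"W = {stat}, p = {p:.3f}")

# Proportion of indicators in which minority patients improved more.
greater = sum(m > w for m, w in zip(minority_change, white_change))
print(f"minority improved more in {greater}/{len(white_change)} indicators")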
Future studies should continue to address whether the effectiveness of interventions such as ours is cross-cultural, and whether interventions tailored to be more culturally appropriate can improve the overall effectiveness of interventions.

IOM: Institute of Medicine
HIV: Human Immunodeficiency Virus
AIDS: Acquired Immunodeficiency Syndrome
QI: Quality Improvement
LDL: Low-Density Lipoprotein
MI: Myocardial Infarction

The author(s) declare that they have no competing interests.

PJN helped design the study, perform the analyses, and write the manuscript. SMO helped design the study, perform the site visits, and edit the manuscript. RGJ helped design the study, perform the analyses, and write the manuscript. LFR helped design the study, assisted with data acquisition, and edited the manuscript. LMD helped design the study, perform the site visits, and edit the manuscript. CF helped perform site visits and edit the manuscript. All authors read and approved the final manuscript.

Study indicators as measured at baseline (B) and the end (E) of the study for all patients and minority patients within each of the 3 practices.
The newly published strategic plan for developing an HIV vaccine is crucially important, say the PLoS Medicine editors, but it must be followed by clear milestones and a process for monitoring progress The new global plan is exciting, but now needs clear milestones In 1997, United States President Bill Clinton announced the challenge to develop an AIDS vaccine by 2007. Since 1997, the AIDS Vaccine Advocacy Coalition (AVAC) has published annual reports on the global status of the effort to meet Clinton's deadline. Last year's report, entitled “AIDS Vaccine Trials—Getting the Global House in Order,” officially ends the countdown. Saying that “we are on a long term mission,” AVAC concludes that there will not be a safe and efficient vaccine in 2007, and that we need to “focus on the long haul and set an agenda for sustained and sustainable action that stretches well beyond 2007.” It is not that there are no vaccine candidates in clinical trials, but there is little hope that any of the current candidates will turn out to be a cheap and safe vaccine that affords long-term protection.PLoS Medicine .Among notable developments over the past 12 months, the AVAC report highlights the Global HIV/AIDS Vaccine Enterprise as an effort to improve coordination within the AIDS vaccine field. The Enterprise was announced in June 2003 and now shares its scientific strategic plan with everyone affected by the AIDS pandemic—that is, all of us—by publishing it in In its plan the Enterprise presents itself as a global endeavor and emphasizes the need for integration and capacity building around the world. It is not “a discrete organization with a pool of money” but a “coordinating group of individual funding agencies that will support specific areas of research using their own mechanisms, according to their own practices and policies, and following the Enterprise's principles.” These principles include collaboration, standardization, and coordination among international researchers and agencies. The plan focuses on specific scientific roadblocks that need to be overcome, but also looks ahead and mentions the need to build capacity for product manufacturing and clinical trials, and to address regulatory issues.10.1371/journal.pmed.0020036), mentions the danger of “group think,” and the Enterprise must not fall into that trap.These are noble goals, and the fact that they are stipulated jointly by many of the leaders in the field will generate excitement and expectations, even though much of what is said has been said before. The plan stresses collaboration and coordination; there are clear benefits from a concerted effort. But might a level of competition, rather than collaboration, be healthy, and, if so, what level of competition would work best? The Enterprise members seem to have wrestled with that question. The plan mentions an “appropriate balance between productive competition and effective collaboration,” and suggests that certain incentives could be provided by “the funders with greatest flexibility.” As long as it remains unclear where scientific breakthroughs will come from, diversity and flexibility should be encouraged and not stifled. David Ho, in his Perspective on the plan Moreover, provided it can be done, it is impossible to predict when the necessary scientific advances will happen. 
That said, without a list of specific projects, project leaders, and a time frame for achieving or at least evaluating specific goals, it will be impossible to define success and failure, review progress, and assure internal and external accountability.

There is another reason why a best-guess timeline is essential: realistic expectations about an AIDS vaccine would stress the urgency of combating the AIDS pandemic over the next decade—and maybe longer—in the absence of an effective vaccine. The potential benefits of a vaccine cannot be overestimated, and its development has to be a top priority for the global scientific community. But its success cannot be taken for granted, and it will come too late for millions. Therefore, parallel efforts to prevent or reduce transmission and to treat infected individuals need to be accelerated now.

The Enterprise's plan should be hailed as a crucially important outline for vaccine development, but the goodwill surrounding it won't last unless it is quickly followed up with a set of milestones and a transparent process by which progress will be measured and course corrections implemented.
During pregnancy, the mammary glands from Id2 mutant animals are deficient in lobulo-alveolar development. This failure of development is believed to be due to a proliferation defect. We have asked whether functional Id2 expression is necessary for Wnt induced mammary hyperplasia, side branching, and cancer, by generating mice expressing a Wnt1 transgene in an Id2 mutant background.

We show in this work that forced expression of Wnt1 in the mammary gland is capable of overcoming the block to proliferation caused by the absence of Id2. We also show that Wnt1 expression is able to cause mammary tumors in an Id2 mutant background. We conclude that functional Id2 expression is not required for Wnt1 to induce mammary hyperplasia and mammary tumors.

Basic helix-loop-helix (bHLH) transcription factors such as MyoD, E12, and E47 are key regulators of gene expression and control many differentiation events during development. There are 4 mammalian Id genes, which show differences in their patterns of expression and function. One of these, Id2, is expressed in glandular and ductal epithelium of the mouse mammary gland and has also been implicated in its development. Mammary glands of female mice that are homozygous mutant for Id2 have impaired lobulo-alveolar development. Id2 is regulated by Wnt-β catenin signaling, and it has been proposed that Wnt signaling acts in part by inducing Id2 expression, thereby leading to cancer. We have asked therefore whether functional Id2 expression is necessary for Wnt induced mammary hyperplasia, side branching and cancer, by generating mice expressing a Wnt1 transgene in an Id2 mutant background.

We used heterozygous Id2 males and females on a 129/Sv background. Id2 genotyping was done by PCR using primers Id2-S (5'-tctgagcttatgtcgaatgatagc-3'), Id2-AS (5'-cgtgttctcctggtgaaatggctg-3'), and neo 1 (5'-tcgtgctttacggtatcgccgctc-3'). Hemizygous transgenic MMTV-Wnt1 males on a mixed FVB/BL6/SJL background were obtained from Yi Li in the H. Varmus laboratory. Genotyping was done by PCR using Wnt1 (5'-gaacttgcttctcttctcatagcc-3') and SV40 (5'-ccacacaggcatagagtgtctgc-3') primers that produce a 350 bp product in transgenic mice.

Five mammary glands per mouse were removed, and fat and muscle were dissected away. The glands were flattened between two slides, flooded with Carnoy's fixative and fixed overnight. They were then de-fatted in 3 changes of acetone, rehydrated, stained overnight in 0.2% carmine and 0.5% KSO4, dehydrated, cleared in xylene, and mounted in Permount.

The crosses were made using Id2 loss of function mutant mice. Litters were not raised by Id2 -/- or Wnt1 transgenic mothers, as these animals cannot feed their young. Id2 +/- females were crossed with MMTV-Wnt1 hemizygous transgenic males, producing 7 MMTV-Wnt1 Tg; Id2 +/- males, which were used for further crosses. The subject animals were kept in mixed groups in autoclaved cages because Id2 -/- mice have an immunologic defect; even with this care, 50% die before maturity. Id2 -/- mice were born in sub-Mendelian ratios, they were smaller than litter-mates, and several died of unknown causes.

The glands of Id2 +/- and Id2 -/- females were similar to those of WT females. MMTV-Wnt1 Tg; Id2 +/- and MMTV-Wnt1 Tg; Id2 -/- mice had branching patterns resembling those of MMTV-Wnt1 Tg females at 3 months. The mechanism through which hyperplasia and side branching are promoted by Wnt1 expression in the virgin mammary gland is unknown; our results demonstrate that Wnt1 is not operating solely through Id2, or that it is not operating through Id2 at all.
We have shown that Wnt1 expression induces mammary tumors in Id2 -/- mice. Another known Wnt target with a similar loss of function phenotype is Cyclin D1, a protein required for normal lobulo-alveolar development. Wnt1 has been reported to induce mammary tumors in Cyclin D1 -/- mice as well, a result paralleling the Id2 -/- phenotype and suggesting that Wnt1 signaling is independent of both Cyclin D1 and Id2 in the mammary gland.

By showing that forced expression of Wnt1 in the mammary gland is capable of overcoming the block to proliferation caused by the absence of Id2, we conclude that functional Id2 expression is not required for Wnt1 to induce mammary hyperplasia and mammary tumors.

The author(s) declare that they have no competing interests.

SM and CR carried out the experiments. YY participated in the design of the study. SM and RN conceived of the study, participated in its design and coordination, and wrote the manuscript. All authors read and approved the final manuscript.
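The three-primer genotyping scheme above lends itself to simple bookkeeping. The sketch below (Python) is a minimal illustration: the primer sequences are taken from the Methods, but the scoring rules, and the assumption that each allele gives one allele-specific band, are our reconstruction rather than the authors' published procedure.

```python
# Illustrative genotype calling for the three-primer Id2 PCR and the
# MMTV-Wnt1 transgene PCR described above. Primer sequences are from the
# text; the decision rules are a plausible reconstruction, not the
# authors' published scoring scheme.

PRIMERS = {
    "Id2-S":  "tctgagcttatgtcgaatgatagc",  # shared sense primer
    "Id2-AS": "cgtgttctcctggtgaaatggctg",  # antisense, wild-type allele
    "neo1":   "tcgtgctttacggtatcgccgctc",  # antisense, targeted (neo) allele
    "Wnt1":   "gaacttgcttctcttctcatagcc",  # transgene primer, paired with SV40
    "SV40":   "ccacacaggcatagagtgtctgc",   # yields a 350 bp product in Tg mice
}

def id2_genotype(wt_band: bool, neo_band: bool) -> str:
    """Infer the Id2 genotype from which allele-specific bands amplified."""
    if wt_band and neo_band:
        return "Id2 +/-"
    if wt_band:
        return "Id2 +/+"
    if neo_band:
        return "Id2 -/-"
    return "no product (repeat PCR)"

def wnt1_genotype(band_350bp: bool) -> str:
    """The Wnt1/SV40 pair amplifies a 350 bp product only in transgenics."""
    return "MMTV-Wnt1 Tg" if band_350bp else "non-transgenic"

# Example: an animal showing both Id2 bands and the 350 bp transgene band.
print(id2_genotype(wt_band=True, neo_band=True), "/", wnt1_genotype(True))
```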
The purpose of this study was to investigate the indications for and approach to hysterectomy at Kingston General Hospital (KGH), a teaching hospital affiliated with Queen's University at Kingston, Ontario. In particular, in light of current literature and government standards suggesting the superiority of vaginal over abdominal approaches, and a high number of concurrent oophorectomies, the aim was to examine the circumstances in which concurrent oophorectomies were performed and to compare abdominal and vaginal hysterectomy outcomes.

A retrospective chart audit of 372 consecutive hysterectomies performed in 2001 was completed. Data regarding patient characteristics, process of care and outcomes were collected. Data were analyzed using descriptive statistics, t-tests, and linear and logistic regression.

Average age was 48.5 years and mean body mass index (BMI) was 28.6; the mean length of stay (LOS) was 5.2 days using an abdominal approach and 3.0 days using a vaginal approach without laparoscopy. Fourteen percent of hysterectomies were performed vaginally, 5.9% were laparoscopically assisted vaginal hysterectomies and the rest were abdominal hysterectomies. The most common indication was dysfunctional or abnormal uterine bleeding (37%). The average age of those who had an oophorectomy was 50.8 years versus 44.3 years for those who did not (p < .05). Factors associated with LOS included surgical approach, age and the number of concurrent procedures.

A significant reduction in LOS was found using the vaginal approach. Both the patient and the health care system may benefit from the tendency towards an increased use of vaginal hysterectomies. The audit process demonstrated the usefulness of an on-going review mechanism to examine trends associated with common surgical procedures.

In Canada in 2001, 446 hysterectomies were performed per 100 000 women. In response to the consistent demand for this procedure, recent reports have identified hysterectomy as a key health care indicator used to measure and compare hospital performance. In particular, the Ontario Hospital Association has identified the ratio of vaginal (VH) to abdominal hysterectomy (AH) as a measure of hospital performance. Considerable attention has also been directed towards the high rate of concurrent oophorectomy with this procedure. This rate is of particular concern in premenopausal women because of the early menopause that ensues.

The purpose of this study was to compare abdominal and vaginal approaches to hysterectomy, investigate the rate of concurrent oophorectomy, and identify factors associated with length of surgery, LOS and approach, by auditing all hysterectomies performed over a one-year period at a university teaching hospital.

The study involved all patients who underwent a hysterectomy in 2001 at Kingston General Hospital (KGH), a teaching hospital affiliated with Queen's University at Kingston, Ontario. The Queen's University Health Sciences and Affiliated Teaching Hospitals Research Ethics Board approved the study (OBGY-117-03). There were no exclusion criteria. Patients were identified by medical record tracking using ICD-9 codes, and charts were reviewed to collect patient characteristics, length of stay, length of surgery, indication for surgery and approach. Readmissions, complications, infections and repeat laparotomies were also assessed. Menopause was defined as one year since the last menstrual period.
Up to three indications for surgery were obtained from the chart, including those identified in clinic letters, admission sheets and operative reports. All indications were collected regardless of whether or not the post-operative diagnosis coincided with the preoperative diagnosis. VH included laparoscopically assisted vaginal hysterectomy (LAVH), and AH included VH converted to AH unless otherwise noted. Readmission was defined as a visit to the emergency room or an admission to the same hospital with a related diagnosis (readmission to another facility was unlikely, as KGH is the only tertiary care facility in the region). Post-operative infections were defined as those that occurred within 30 days of surgery. A complication of excessive bleeding was defined as an intra-operative hemorrhage requiring transfusion or laparotomy, post-operative hematoma/seroma formation, or a significant post-operative vaginal bleed that required medical attention. All complications that occurred during the surgery or within 30 days of surgery were recorded, other than problems associated with removal of catheter, urinary retention, hypertension, hypotension, pain control, nausea and vomiting, or headache. Any repeat laparotomy or unplanned laparotomy (other than for conversion of VH to AH) that occurred during the surgery or within 30 days of discharge was also noted.

Follow-up information was tracked using hospital chart and clinic note information from the six-week post-operative visit. All data were analyzed using SPSS statistical software. Between-group comparisons utilized two-sample t-tests and one-way analysis of variance (continuous data) and Chi-square analyses (categorical data). Predictors of LOS and length of surgery were identified using linear regression, while predictors of surgical approach were identified using logistic regression. Variables were entered into the models on the basis of the strength of the bivariate associations with the outcomes (p < 0.20).

Three hundred and seventy-two women underwent a hysterectomy in 2001. The characteristics of these patients are summarized in the accompanying table. The majority of hysterectomies were AH (78%); 14% were VH, 5.9% were LAVH and 2.2% were VH converted to AH. Total hysterectomies accounted for 79.8% of hysterectomies, 16.1% were subtotal, and 4% were radical or modified radical hysterectomies. There were no significant differences between patients who had a subtotal and those who had a total hysterectomy for BMI, age, LOS, length of surgery, number of infections, or number of complications. The patients differed only in terms of parity, in that those who underwent a total hysterectomy had more children.

A concurrent procedure was performed in 26.6% of patients. This included biopsies (10.5%), reparative surgery (5.9%), procedures to establish urinary continence (3.5%), appendectomies (1.9%), and surgery to manage intra-operative events (2.2%). Fifty-eight (15.6%) of the patients had a diagnosis of cancer pre-operatively, which rose to 76 (20.4%) post-operatively. The population with cancer was older, had higher BMIs, longer surgeries, and longer lengths of stay than those without cancer. Twenty-six patients visited the emergency room within 30 days of their discharge, and an additional nineteen patients were readmitted to the hospital. Infections occurred in 15.3% of patients, including urinary tract infections (7.5%), incision site infections (5.6%) and pelvic infections (2.2%).
Those who developed an infection had a higher mean BMI (p = 0.018), longer LOS (p = 0.018) and longer length of surgery (p = 0.036) than those who did not. Four percent of patients had a repeat laparotomy or unplanned laparotomy (not including those for conversion of VH to AH). Other complications occurred in 24.5% of patients, the most common being excessive bleeding (11.3%) and post-operative ileus (5.4%). Other complications, involving the bladder, bowel, pulmonary function, cardiac function or drug reactions, each occurred in less than 2% of patients.

A comparison of the abdominal and vaginal approaches revealed no differences in terms of incidence of infection, readmission to the ER or hospital, incidence of excessive bleeding or complication rate. LAVH and VH converted to AH were excluded from all regression analyses, as they represented subgroups that were clinically different from routine AH and VH. Logistic regression for approach of hysterectomy indicated that a patient was 1.1 times more likely to have an AH for each one-point increase in BMI (p = 0.003), 47.6 times more likely to have an AH if she had a concurrent unilateral or bilateral oophorectomy (p < 0.001) and 1.7 times more likely to have a VH with each additional child (p < 0.001).

The majority of the patients were overweight or obese. These numbers define a population whose obesity level is 21.8 percentage points above the national average, and although there is no known average BMI for all hysterectomy patients in Canada for comparison, the high obesity rate at this centre may have contributed to the reliance on the abdominal approach. A patient was in fact eleven times more likely to have an AH for every 10-point increase in BMI. Although recent studies exclude BMI as a factor in determining the route of hysterectomy, it has been noted that obesity of the buttocks may interfere with the exposure necessary for a VH.

The general trend in determining the route of hysterectomy has been to challenge the validity of the exclusionary criteria for VH, such as nulliparity, larger uterine size, previous cesarean delivery, and pelvic laparotomy. These are no longer considered to be strong contraindications to a vaginal approach.

The overall ratio of abdominal to vaginal surgeries is 5.6:1, but when considering only those surgeries performed for indications other than cancer (cancer found pre- or post-operatively), the ratio falls to 3.9:1. This is consistent with the fact that most malignant indications for surgery require an abdominal approach in order to ensure access to structures and to allow for staging procedures. Our data did not demonstrate a significant difference between AH and VH in terms of outcome variables such as the rate of infection or complication; however, the two-day reduction in LOS for VH may have significant cost reduction potential.

The merit of performing a concurrent oophorectomy during hysterectomy continues to be debated for women not at high risk of developing ovarian cancer. Estimates regarding the number of prophylactic oophorectomies needed to prevent one case of ovarian cancer range from 200 to 300.

The limitations of this study include uneven distribution of patients in each treatment group and lack of randomization, due to the nature of the retrospective chart review process. Furthermore, because the audit process relied entirely on chart documentation, information may have been missed or incorrect as a result of improper or absent documentation.
The broad range of information collected also prevented the researchers from employing more rigorous definitions and verification of outcomes. The retrospective nature of the study precluded an evaluation of the decision-making process leading to oophorectomy, as well as of the influence of pre-operative indications, uterine size, parity, previous c-section and concurrent oophorectomy on surgical approach. This would need to be addressed prospectively, by surveying the surgeons at the time the decision was made.

Both the patient and the health care system may benefit from the trend towards increased use of vaginal hysterectomies. However, the abdominal approach continues to dominate, likely related to patient size, surgeon preference and the need for adnexal surgery. The audit process proved to be an important method by which to assess trends associated with common surgical procedures. This study raises important questions about the relationship between patient characteristics, surgical approach and the indications for surgery, and a prospective approach, designed to address these questions more fully, is now indicated.

The author(s) declare that they have no competing interests.

AT performed data collection, participated in the study design and coordination and participated in drafting the manuscript. WH performed the statistical analysis and participated in drafting the manuscript. RHG conceived the study, participated in its design and coordination and participated in drafting the manuscript. All authors read and approved the final manuscript.
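To make the modelling strategy described above concrete (bivariate screening at p < 0.20, then logistic regression for surgical approach), the sketch below fits such a model in Python with statsmodels on synthetic data, since the study's patient-level data are not available; the data-generating odds ratios are borrowed from the Results purely for illustration. It also shows how per-unit odds ratios compound multiplicatively: under an OR of 1.1 per BMI point, a 10-point increase corresponds to 1.1**10 ≈ 2.6, so the per-point and per-10-point figures quoted in the Discussion presumably come from different underlying models.

```python
# Illustrative reconstruction of the approach model (statsmodels), using
# synthetic data; only the modelling logic mirrors the study.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 372                                  # cohort size from the audit
bmi = rng.normal(28.6, 5.0, n)           # mean BMI reported in the study
parity = rng.poisson(2.0, n)
oophorectomy = rng.integers(0, 2, n)

# Hypothetical data-generating coefficients (for illustration only),
# chosen to echo the reported odds ratios of 1.1, 47.6 and 1/1.7.
logit = (-3.0 + np.log(1.1) * bmi + np.log(47.6) * oophorectomy
         - np.log(1.7) * parity)
abdominal = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = sm.add_constant(np.column_stack([bmi, oophorectomy, parity]))
fit = sm.Logit(abdominal.astype(int), X).fit(disp=False)
print("Per-unit odds ratios:", np.round(np.exp(fit.params[1:]), 2))
print("OR per 10 BMI points:", round(np.exp(fit.params[1] * 10), 2))
```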
Based on the recently sequenced gene coding for the Trypanosoma evansi (T. evansi) RoTat 1.2 Variable Surface Glycoprotein (VSG), a primer pair was designed targeting the DNA region lacking homology to other known VSG genes. A total of 39 different trypanosome stocks were tested using the RoTat 1.2 based Polymerase Chain Reaction (PCR).

This PCR yielded a 205 bp product in all T. evansi strains and in seven out of nine T. equiperdum strains tested. This product was not detected in the DNA from T. b. brucei, T. b. gambiense, T. b. rhodesiense, T. congolense, T. vivax and T. theileri parasites. The RoTat 1.2 PCR detects as few as 10 trypanosomes per reaction with purified DNA from blood samples, i.e. 50 trypanosomes/ml.

PCR amplification of the RoTat 1.2 VSG gene is a specific marker for all T. evansi strains, except T. evansi type B, and is especially useful in dyskinetoplastic strains where kDNA based markers may fail to amplify. Furthermore, our data support previous suggestions that some T. evansi stocks have been previously misclassified as T. equiperdum.

Surra is an animal disease occurring in Africa, Asia and Latin America, caused by T. evansi. T. evansi belongs to the subgenus Trypanozoon, together with T. equiperdum and T. brucei. The parasite can infect different host species and is mechanically transmitted by different biting flies such as Tabanidae and Stomoxys, as well as by vampire bats such as Desmodus rotondus. T. evansi infections of cattle and buffaloes usually lead to a pronounced immunosuppression resulting in an increased susceptibility to other opportunistic diseases such as Pasteurella and anthrax.

Diagnosis of a T. evansi infection usually starts with clinical symptoms or the detection of antibodies to T. evansi. Conclusive evidence of T. evansi infection, however, relies on detection of the parasite in the blood or tissue fluids of infected animals. Unfortunately, parasitological techniques cannot always detect ongoing infections, as the level of parasitaemia is often low and fluctuating, particularly during the chronic stage of the disease.

As an alternative to parasitological tests, DNA detection based on PCR has been proposed. Trypanozoon specific primers have been designed previously: the TBR primers, which target a 177 bp repeat, as well as the pMUTEC and ORPH primers; PCR tests for T. congolense and T. vivax infections exist as well. Discriminating within the Trypanozoon subgenus, however, still remains a challenging issue. For T. evansi infections, the only specific test available so far is based on the detection of a kinetoplast DNA sequence. Recently, Ventura et al. described a PCR for T. evansi based on a Random Amplified Polymorphic DNA (RAPD) fragment. The taxon specificity of this PCR remains uncertain, since it was only tested on nine T. evansi strains, one T. equiperdum, two T. b. gambiense and one T. b. rhodesiense. Following evidence that the variable epitope of RoTat 1.2 VSG is expressed by all T. evansi strains tested so far, and that the gene is present in T. evansi but not in T. brucei isolates, a PCR targeting the RoTat 1.2 VSG gene was developed.

A 205 bp product was amplified from all T. evansi populations tested (lanes 3–8). Moreover, the same fragment was found in seven out of the nine T. equiperdum populations tested. The T. equiperdum BoTat 1.1 strain was PCR negative (lane 10), while the T. equiperdum OVI strain yielded a PCR product shorter than 205 bp (lane 11), probably due to mispriming.
All other tested trypanosome populations, including six T. b. brucei, eight T. b. gambiense, five T. b. rhodesiense, two T. congolense, one T. vivax and one T. theileri, were negative (lanes 18–40). As a negative control, a PCR-mix without template DNA was included (lane 2). Sequencing of the positive samples revealed that all amplicons were identical (data not shown). The weak band in OVI did not yield sufficient material to enable sequencing.

A tenfold dilution series (10^5 trypanosomes down to 1 trypanosome per 200 μl sample) of RoTat 1.2 trypanosomes in mouse blood was prepared to determine the analytical sensitivity of the PCR. To evaluate the RoTat 1.2 diagnostic system alongside other published methods, we compared our method to the PCR-Te664 method as published by Ventura et al., using the same DNA samples; in this comparison, two T. b. gambiense type II strains (ABBA and LIGO) tested positive in the Te664 PCR (data not shown).

This study was initiated to develop a specific PCR test that would be able to distinguish T. evansi from the other members of the Trypanozoon subgenus. The study is an extension of the initial observation that the RoTat 1.2 VSG gene is found only in T. evansi and not in T. brucei strains.

Previously, other research groups have used VSG genes as target sequences for PCR detection of T. b. gambiense infections (sleeping sickness). In these studies, five different primers derived from the VSG genes AnTat 11.17, LiTat 1.3, 117, 2K and U2 were used in PCR screening of different trypanosome populations originating from distinct geographical locations. These primers allowed T. b. gambiense to be distinguished from T. b. brucei parasites from most foci of sleeping sickness in countries such as Nigeria, Cameroon, Côte d'Ivoire, R. P. Congo/Brazza. and Sudan. However, populations originating from the Moyo focus in North-west Uganda and from Cameroon were shown to be negative in the AnTat 11.17 and LiTat 1.3 (2K) PCRs respectively. According to Bromidge et al., this indicates that the target VSG gene is not present in all T. brucei populations tested. In T. evansi, a similar phenomenon may occur in certain Kenyan isolates. A recent study by Ngaira et al. pointed out that some T. evansi stocks in the Isiolo district in Kenya seem to lack the RoTat 1.2 VSG gene. It is believed that these stabilates belong to the T. evansi type B group. So far, this type of T. evansi has only been observed in this specific region in Kenya; to our knowledge, T. evansi isolated elsewhere are from the classical T. evansi type A group. Thus, we assume that, except for these few Kenyan strains belonging to the type B group, our PCR is specific for T. evansi.

Compared to the PCR-Te664 presented by Ventura et al., no cross-reaction with T. b. brucei nor with T. b. gambiense type II was observed with the RoTat 1.2 PCR. However, regarding T. equiperdum, both PCRs test positive for the same seven T. equiperdum strains and are both negative for the BoTat 1.1 and OVI strains. Since the RAPD fragment (AF397194) shares no homology with the RoTat 1.2 VSG gene (AF317914) and is not found within the expression site of trypanosomes, both sequences can be considered as independent molecular markers. Based on the observations with both markers, it appears that on the genomic level the BoTat 1.1 and the OVI strains are different from the other T. equiperdum and T. evansi strains.
The observed analytical sensitivity of the RoTat 1.2 PCR is comparable to what was reported for the Te664 PCR (25 cells per reaction).

The presence of a RoTat 1.2 specific DNA sequence in some T. equiperdum strains corresponds with the serological evidence that rabbits experimentally infected with these strains develop RoTat 1.2 specific lytic antibodies within 30 days post infection. This suggests that the marker is T. evansi specific and that RoTat 1.2 PCR positive T. equiperdum strains are actually T. evansi and not T. equiperdum. Indeed, in a previous molecular characterization study using Random Amplified Polymorphic DNA (RAPD) and the Multiplex-endonuclease Genotyping Approach (MEGA), it appeared that the T. equiperdum collection is not as homogenous as previously believed, and that the generally followed concept that T. equiperdum is very closely related to T. evansi and more distant from T. b. brucei seems incorrect. From the cluster analysis on the available strains, it appeared that only two clusters can be identified: a homogeneous T. evansi/T. equiperdum cluster and a more heterogeneous T. b. brucei/T. equiperdum cluster. Interestingly, the strains in the T. evansi/T. equiperdum cluster are all RoTat 1.2 VSG PCR positive, while the strains found in the more heterogeneous T. b. brucei/T. equiperdum cluster, in casu BoTat 1.1 and OVI, are RoTat 1.2 VSG PCR negative.

In conclusion, PCR amplification of the RoTat 1.2 VSG gene is a specific marker for all T. evansi strains, except T. evansi type B, and is especially useful in dyskinetoplastic strains where kDNA based markers may fail to amplify. Furthermore, our data support previous suggestions that some T. evansi stocks have been previously misclassified as T. equiperdum.

A total of 39 different trypanosome populations were used in this study. They belong to 39 stocks and six species, isolated from a variety of host species at distinct geographical locations (Table 1). Only three T. equiperdum strains, BoTat 1.1, OVI and STIB 818, are well documented, i.e. of known origin and host. The other six are putative T. equiperdum, based on publications or on their use as reference strains in different national dourine reference laboratories.

Procyclic trypanosome populations were grown in vitro in Cunningham's medium. Bloodstream form trypanosomes were expanded in mice and rats and were purified from the blood by di-ethyl-amino-ethyl (DEAE) chromatography using phosphate-glucose buffers (Na2HPO4.2H2O, 2 mM NaH2PO4, 80 mM glucose, with either 100 mM saccharose or 29 mM NaCl, pH 8.0). Trypanosome sediments were subsequently stored at -80°C.

Twenty μl of trypanosome sediment (10^7 cells) were resuspended in 200 μl of Phosphate Buffered Saline (PBS) and the trypanosome DNA was extracted using the commercially available QIAamp DNA mini kit, resulting in pure DNA in 200 μl of TE buffer. The typical yield of DNA extracted from a 20 μl pellet was 150 ng/μl, or 30 μg total DNA. The extracts obtained were diluted 200 times in water and divided into aliquots of 2 ml in microcentrifuge tubes for storage at -20°C. Blood samples from the dilution series were processed with the same kit, resulting in 200 μl of extracted DNA in Millipore water; manipulation was performed according to the manufacturer's instructions.
Primers were derived from the RoTat 1.2 VSG sequence (AF317914), recently cloned and sequenced by Urakawa et al.: RoTat 1.2 Forward (Tann. 59°C) and RoTat 1.2 Reverse, ATT AGT GCT GCG TGT GTT CG (Tann. 59°C).

Twenty μl of extracted DNA were mixed with 30 μl of a PCR-mix containing 1 U recombinant Taq DNA polymerase, PCR buffer, 2.5 mM MgCl2, 200 μM of each of the four dNTPs and 0.8 μM of each primer.

All amplifications were carried out in a Biometra Trio-block thermocycler. Cycling conditions were as follows: denaturation for 4 min at 94°C, followed by 40 amplification cycles of 1 min denaturation at 94°C, 1 min primer-template annealing at 59°C and 1 min polymerization at 72°C. A final elongation step was carried out for 5 min at 72°C.

Twenty μl of the PCR product and ten μl of a 100 bp size marker were subjected to electrophoresis in a 2% agarose gel (25 min at 100 V). Gels were stained with ethidium bromide (0.5 μg/ml) and analyzed on an Imagemaster Video Detection System.

For the comparison experiments, PCR on purified DNA samples was performed using primers and PCR conditions according to Ventura et al. Only the Taq DNA polymerase was purchased from another distributor, i.e. Promega (UK) instead of Gibco BRL (UK).

None declared.

FC carried out the molecular work and drafted the manuscript. MR and TU participated in the molecular analysis. PM, BG and PB participated in the design and co-ordination of the study. All authors read and approved the final manuscript.

Table 1. Data on the different Trypanosoma (T.) populations used in this study.
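To make the protocol above easy to audit, the sketch below (Python) encodes the cycling profile and the sensitivity arithmetic exactly as stated: 4 min at 94°C, 40 cycles of 1 min each at 94/59/72°C, a 5 min final elongation, and a detection limit of 10 trypanosomes per reaction from a 200 μl (0.2 ml) sample, i.e. 50 trypanosomes/ml. Ramp times between temperatures are ignored, so the computed runtime is a lower bound.

```python
# Bookkeeping sketch of the RoTat 1.2 PCR described in the Methods.
# Timings and the detection limit are from the text; everything else
# is illustrative.

CYCLING = {
    "initial_denaturation": (94, 4.0),               # (°C, minutes)
    "cycles": 40,
    "per_cycle": [(94, 1.0), (59, 1.0), (72, 1.0)],  # denature/anneal/extend
    "final_elongation": (72, 5.0),
}

def total_runtime_minutes(profile: dict) -> float:
    """Sum the block hold times; temperature ramps are not modelled."""
    fixed = profile["initial_denaturation"][1] + profile["final_elongation"][1]
    cycling = profile["cycles"] * sum(t for _, t in profile["per_cycle"])
    return fixed + cycling

def sensitivity_per_ml(parasites_per_reaction: float,
                       sample_volume_ml: float) -> float:
    """10 parasites detectable in a 0.2 ml sample -> 50 parasites/ml."""
    return parasites_per_reaction / sample_volume_ml

print(total_runtime_minutes(CYCLING), "min of block time")  # 129.0
print(sensitivity_per_ml(10, 0.2), "trypanosomes/ml")       # 50.0
```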
One problem in the mobilization of patients with neurological diseases, such as spinal cord injury, is the circulatory collapse that occurs on moving from the supine to the vertical position, because the venous pump provided by the leg muscles is lost when those muscles are paralyzed. Therefore, a tilt table with an integrated stepping device (tilt stepper) was developed, which allows passive stepping movements so that locomotion training can be performed in an early stage of rehabilitation. The aim of this pilot study was to investigate whether passive stepping and cycling movements of the legs during tilt table training could stabilize blood circulation and prevent neurally-mediated syncope in healthy young adults.

In the first experiment, healthy subjects were tested on a traditional tilt table; subjects who had a syncope or near-syncope in this condition underwent a second trial on the tilt stepper. In the second experiment, one group of healthy subjects was investigated on a traditional tilt table and a second group on the tilt ergometer, a device that allows cycling movements during tilt table training. We used the chi-square test to compare the occurrence of near-syncope/syncope between devices (tilt table/tilt stepper and tilt table/tilt ergometer) and ANOVA to compare blood pressure and heart rate between the groups at the four time intervals.

Separate chi-square tests performed for each experiment showed significant differences in the occurrence of near-syncope or syncope based on the device used. Comparison of the two groups (tilt stepper/tilt table) in experiment one (ANOVA) showed that blood pressure was significantly higher at the end of head-up tilt on the tilt stepper, and that on the tilt table there was a greater increase in heart rate (2 minutes after head-up tilt). Comparison of the two groups (tilt ergometer/tilt table) in experiment two (ANOVA) showed that blood pressure was significantly higher on the tilt ergometer at the end of head-up tilt, and that on the tilt table the increase in heart rate was significantly larger (at 6 min and at the end of head-up tilt).

Stabilization of blood circulation and prevention of benign syncope can thus be achieved by passive leg movement during a tilt table test in healthy adults.

Several studies have confirmed that lack of movement leads quickly to profound negative physiological and biochemical changes in all organs and systems of the body. Head-up tilt table testing has been used for over 50 years by physiologists and physicians for many purposes. These include the study of the human body's heart rate and blood pressure adaptations to changes in position, the modeling of responses to hemorrhage, the evaluation of orthostatic hypotension, the study of hemodynamic and neuroendocrine responses in congestive heart failure, autonomic dysfunction and hypertension, and drug research.

In addition to the traditional tilt table, a novel apparatus with a stepping device (tilt stepper) was developed in 1998 at the research department of the Paraplegia Centre of the Balgrist University Hospital in Zurich, Switzerland, in collaboration with the Department Orthopaedic II of the Orthopaedic Hospital of Heidelberg, to enable mobilization with stabilized circulation and to allow locomotion training to begin in an early stage of rehabilitation. In the tilt stepper, the patient is strapped by a safety belt to the tilt table while the legs are moved passively in a physiological stepping pattern (Figure).
There are only a few studies that have investigated how passive movement of the legs during a tilt table treatment affects circulation. In these studies, either functional electrical stimulation of the leg muscles or passively induced movement was used.

The aim of our experiments was to investigate whether passive stepping and cycling movements of the legs during a tilt table test can stabilize blood circulation and prevent neurally mediated syncope in healthy young adults.

With the permission of the local Ethics Committee and the informed consent of the volunteers, the response of the blood circulation was analyzed in healthy subjects. The exclusion criteria included: recurrent syncope or near-syncope in the clinical history, regular medication, abuse of nicotine or alcohol, cardiovascular or neurological diseases, acute or chronic infections, psychiatric disorder, and body mass index <18 or >25. All subjects underwent a physical examination, and an ECG was completed one week before the experiment.

In the first experiment, we examined 12 healthy young adults (age 24 ± 5 years) on a traditional tilt table. The subjects who had syncope or near-syncope were treated on the tilt stepper after a waiting period of 4 weeks. Syncope was defined as a transient loss of consciousness associated with a loss of postural tone. Near-syncope was defined as the appearance of pallor, nausea, light-headedness, diaphoresis or blurred vision. Both conditions were associated with the following hemodynamic changes: a decrease in systolic blood pressure > 60% from baseline values or an absolute value < 80 mmHg (vasodepressor response), and/or a decrease in heart rate > 30% from the baseline value or an absolute value < 40 beats/min (cardio-inhibitory response).

In the second experiment, we enrolled 42 healthy subjects (age 27 ± 4 years). They were randomized into two groups: group I (23 subjects) was put on a traditional tilt table, while group II (19 subjects) was put on a tilt ergometer. The age of the subjects was restricted to below 35 years, because the cardiovascular response is strongly dependent on age.

The aim of the first experiment was to investigate whether blood circulation could be stabilized in people who have a disposition for an 'early' appearance of a neurally-mediated syncope on a traditional tilt table. The appearance of a neurally-mediated syncope is physiological and may occur in all subjects; the interpersonal difference lies in how long the subject can remain in a standing posture before syncope or near-syncope occurs. A decrease in systolic blood pressure of up to 15 mmHg and/or an increase in heart rate of up to 20 bpm during the first 6 minutes is considered a normal compensatory reaction to the change in body position.

Blood pressure was measured non-invasively with a tonometric blood pressure device. Subjects who suffered a syncope or near-syncope during the first session on the traditional tilt table were treated in the second session on the tilt stepper. In the second experiment, we investigated the effect on circulation of passively induced cycling movements using a cycle ergometer on a tilt table; we enrolled 42 subjects, 23 on a traditional tilt table and 19 on the tilt ergometer.

In both experiments, after 15 minutes of rest the subjects were tilted head-upright at a 75° angle and were returned to the supine position if a syncope or near-syncope occurred, or on completion of 30 minutes. Heart rate and blood pressure were measured continuously and non-invasively.
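The hemodynamic criteria quoted above translate directly into a small decision rule. The following Python sketch encodes them verbatim (a >60% systolic drop or an absolute value below 80 mmHg for the vasodepressor response; a >30% heart-rate drop or a value below 40 beats/min for the cardio-inhibitory response); the function itself is only illustrative.

```python
# Direct encoding of the study's vasodepressor and cardio-inhibitory
# criteria. Thresholds are from the Methods; the function is a sketch.

def classify_response(sbp_base: float, sbp: float,
                      hr_base: float, hr: float) -> list[str]:
    """Return which syncope response criteria are met at a time point."""
    responses = []
    if sbp < 0.4 * sbp_base or sbp < 80:   # >60% drop, or <80 mmHg absolute
        responses.append("vasodepressor")
    if hr < 0.7 * hr_base or hr < 40:      # >30% drop, or <40 bpm absolute
        responses.append("cardio-inhibitory")
    return responses or ["within compensatory range"]

# Example: baseline 120 mmHg systolic and 70 bpm, falling to 70 mmHg and
# 38 bpm at presyncope -> both criteria met (a mixed response).
print(classify_response(120, 70, 70, 38))
```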
Head-up tilt tests were performed in the morning in a dim room. All subjects were instructed to fast overnight and to relax the muscles of their lower limbs during the trials. This was monitored with EMG measurements on the legs, tested randomly in the first experiment and regularly during the second experiment. The EMG signals were amplified and transferred to a personal computer, where they were recorded by a data acquisition tool.

The tilt stepper is a traditional tilt table combined with an integrated leg drive that allows passive movement of the lower extremities (Figures 1 and 2). The leg drive, which is connected to the thigh by a cuff, induces a hip flexion or extension movement. As the feet of the patient are fixed to footplates, the knee is also flexed or extended, respectively. In those phases where the hip and knee joints are extended, the leg pushes down a spring-dampened footplate against a foot spring mounted within the plate. This footplate generates a loading force on the foot sole of the patient during extension. Applying this cycle of flexion and extension in an alternating way leads to physiological kinetics of the generated motion. A special mechanism is mounted under the hip joint and allows adjustment of hip extension up to 20°.

Depending on the blood circulation condition of the patient, the device can be tilted to different angles up to a vertical position. This makes it possible for the patient to become accustomed, step by step, to the upright position in combination with passive leg movements. The speed of the alternating stepping movements and the range of motion of the hip/knee joints can be adjusted from a control panel. The basic construction consists of a linear drive with a precision ball screw that is driven by a synchronous motor via a toothed belt. The movement frequencies range from 0.2 to 0.5 Hz (i.e. one cycle of flexion and extension takes between 2 and 5 s). To secure subjects on the tilt table, fixation with a special harness was used during all experiments (Figure).

The tilt ergometer consists of a traditional tilt table with an additional ergometer device (Tera Joy, Germany) that allows a passive cycling movement of the lower extremities. From a technical point of view, the tilt ergometer construction is simpler than that of the tilt stepper, but it generates a non-physiological motion with respect to gait-phase-related forces on the foot sole. The cycle frequency was between 0.2 and 0.5 Hz.

Blood pressure was measured continuously and non-invasively by a Colin CBM-7000, a tonometric device that allows measurement of beat-to-beat blood pressure, the continuous arterial blood pressure waveform, and beat-to-beat, continuous electrocardiography.

In both experiments we used the chi-square test to compare the occurrence of near-syncope/syncope between the groups (tilt table/tilt stepper and tilt table/tilt ergometer). We performed 2 × 4 repeated measures ANOVAs, with device group (tilt table vs. tilt stepper, or tilt table vs. tilt ergometer) as the between-subjects factor and time (four levels) as the within-subjects factor, and examined the group × time interaction for blood pressure and heart rate. Pairwise comparisons were made with t-tests with Bonferroni correction.

In the first experiment, 7 of 12 subjects (58%) had a syncope or near-syncope on the traditional tilt table.
There was an obvious increase in heart rate in the first 6 minutes after changing the position from supine to upright. None of these 7 subjects had a syncope or near-syncope during the treatment session on the tilt stepper 4 weeks later. Comparing the occurrence of near-syncope/syncope in both sessions with the chi-square test, there was a significant difference (χ2 = 6.465, p = 0.011).

For blood pressure, there was no significant difference within each group (F = 4.66, p = 0.0743), but there were significant differences between groups (F = 6.33, p = 0.0016) and in the interaction (F = 7.24, p = 0.0022). Blood pressure differed between the two treatments at the end of head-up tilt (p = 0.0029), but not at 2 minutes (p = 1.000) or at 6 minutes (p = 1.000). However, there was a trend for a higher blood pressure at 2 minutes and at 6 minutes after head-up tilt in the group treated on the tilt stepper. There were significant differences for heart rate within each group (F = 12.17, p = 0.0130), between groups (F = 21.16, p < 0.0001) and in the interaction (F = 8.68, p = 0.0009). For the group treated on the traditional tilt table, pairwise comparisons with the t-test with Bonferroni correction showed a significantly higher heart rate at 2 minutes (p = 0.0060), but no significant differences at 6 minutes (p = 0.2051) or at the end of head-up tilt (p = 1.000).

In the second experiment, thirteen of the 23 subjects who were on the traditional tilt table had syncope (3) or near-syncope (10). None of the 19 subjects who were on the tilt ergometer had syncope, but 4 subjects had near-syncope (21%). Comparing the occurrence of near-syncope/syncope in the two groups with the chi-square test (χ2 = 5.443), there was a significant difference (p = 0.021). There were significant differences for blood pressure within each group (F = 34.43, p < 0.0001), between groups (F = 13.42, p < 0.0001) and in the interaction (F = 10.95, p < 0.0001). Pairwise comparisons with the t-test showed no significant differences at 2 minutes (p = 0.5221) or at 6 minutes (p = 0.4429), but a significant difference at the end of head-up tilt (p < 0.0001). However, there was a trend for a higher blood pressure at 2 minutes and at 6 minutes after head-up tilt in the group treated on the tilt ergometer. There were significant differences for heart rate within each group (F = 12.17, p = 0.0130), between groups (F = 21.16, p < 0.0001), and in the interaction (F = 8.68, p = 0.0009). Pairwise comparisons with the t-test showed no significant difference at 2 minutes (p = 0.3317), but a significantly higher heart rate in the group treated on the tilt table at 6 minutes (p = 0.0007) and at the end of head-up tilt (p = 0.0002).

All subjects on the tilt stepper and tilt ergometer completed 30 minutes of head-up tilt. The duration of head-up tilt was shorter in the group on the traditional tilt table whenever an abrupt decrease of blood pressure or symptoms of near-syncope occurred.

In the head-up tilt position the subject stands on the footplates of the tilt stepper, whereas on the tilt ergometer the harness holds the whole body weight. The subjects who were investigated on the tilt stepper felt comfortable during the whole experiment, whereas the subjects examined on the tilt ergometer in experiment two complained of discomfort, which they attributed to the perception of no lower limb support. These statements were subjective; no standardized assessment instrument was used to measure comfort.

The tilt table has become an important part of the evaluation of patients with unexplained syncope or loss of consciousness.
Although the tilt table has become an accepted diagnostic tool, there are no comparable studies with the tilt table in which the effect of passive leg motion on circulation has been investigated.

The aim of these two experiments was to investigate whether passive leg movements during head-up tilt can prevent syncope. The data in the present study show a stabilizing effect on blood circulation and suggest that both devices have an effect in preventing neurally-mediated syncope. In the first experiment, none of the subjects who had syncope/near-syncope on the traditional tilt table had syncope/near-syncope four weeks later on the tilt stepper. In the second experiment, only 4 subjects who were treated on the tilt ergometer had near-syncope. In both experiments the increase in heart rate was larger in the group tested on the traditional tilt table; a correlation between heart rate and the appearance of syncope has been described previously.

In the first experiment we treated the same subjects twice on a tilt table, so it cannot be excluded that an adaptation to the orthostatic change occurred in these subjects. However, there was an interval of four weeks between the first treatment on the traditional tilt table and the second treatment on the tilt stepper. Therefore, a training effect or an effect of habituation, such as described in another study in which patients suffering from syncope were treated each day over 6 weeks, seems unlikely.

The results of both experiments indicate that blood circulation can be stabilized by passive leg movements. However, the movements produced by the two devices are very different: on the tilt stepper there are stepping-like movements, and the legs are loaded during extension and unloaded in flexion; on the tilt ergometer the loading is the other way round. There might therefore be more afferent input from the load receptors on the tilt stepper than on the tilt ergometer. For example, it has been shown that the load moments acting about the bilateral hip, knee and ankle joint axes during cycling are generally lower than those induced during normal level walking.

In conclusion, we showed that both passive cycling and stepping movements of the legs during head-up tilt testing can stabilize blood circulation and prevent syncope in young healthy people. In further studies, we aim to investigate whether the tilt stepper could become a helpful device for patients suffering from neurological diseases.
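For readers who want to reproduce the follow-up analysis pattern used above, the sketch below (Python/scipy) runs device-group comparisons at each of the four time points with a Bonferroni correction. The blood-pressure values are synthetic placeholders; only the testing logic mirrors the study.

```python
# Pairwise follow-up tests between devices at each time point, with a
# Bonferroni adjustment, on synthetic systolic blood pressure data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
timepoints = ["supine", "2 min", "6 min", "end of tilt"]

# Hypothetical group means at the four time points (mmHg), with noise;
# group sizes match experiment two (23 vs. 19 subjects).
tilt_table  = rng.normal([120, 115, 110, 90],  8, size=(23, 4))
tilt_device = rng.normal([120, 118, 116, 112], 8, size=(19, 4))

k = len(timepoints)  # number of comparisons for the Bonferroni factor
for i, label in enumerate(timepoints):
    t, p = stats.ttest_ind(tilt_table[:, i], tilt_device[:, i])
    p_adj = min(1.0, p * k)  # Bonferroni-adjusted p value, capped at 1
    print(f"{label:12s} t = {t:5.2f}, adjusted p = {p_adj:.4f}")
```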
There is evidence that groups of people with schizophrenia have deficits in Theory of Mind (ToM) capabilities. Previous studies have found these to be linked to psychotic symptoms (or psychotic symptom severity), particularly the presence of delusions and hallucinations.

A visual joke ToM paradigm was employed in which subjects were asked to describe two types of cartoon images, those of a purely Physical nature and those requiring inferences of mental states for interpretation, and to grade them for humour and difficulty. Twenty individuals with a DSM-IV diagnosis of schizophrenia and 20 healthy matched controls were studied. Severity of current psychopathology was measured using the Krawiecka standardized scale of psychotic symptoms. IQ was estimated using the Ammons and Ammons quick test.

Individuals with schizophrenia performed significantly worse than controls in both conditions, the difference being most marked in the ToM condition. No relationship was found between poor ToM performance and positive psychotic symptomatology, specifically delusions and hallucinations.

There was evidence for a compromised ToM capability in the schizophrenia group on this visual joke task. In this instance, this could not be linked to particular symptomatology.

Theory of Mind (ToM) describes the ability to recognise that other people have minds containing beliefs and intentions, and to be able to interpret these correctly. The term was first coined by Premack and Woodruff. ToM ability has been conceived as a capacity to represent epistemic mental states comprising an agent and an attitude to the truth of a proposition, e.g. "Peter believes that it is raining".

Numerous behavioural and neuroimaging studies have reported ToM deficits in schizophrenia. Sarfati et al used a strictly pictorial task in which 3-picture cartoon sequences were shown depicting a character producing an action, and the participants had to choose the fourth and final picture from a choice of three images. Sarfati et al then enlarged this experimental protocol by introducing a verbal dimension to the task.

Langdon et al used a task comprised of 4-card black and white cartoon picture sequences of four varieties: social script stories testing logical reasoning about people without needing to infer mental states, mechanical stories testing Physical cause and effect reasoning, false belief stories testing general mind reading abilities, and capture stories testing inhibitory control. Cards were placed face down in a square layout and participants had to turn the cards over and place them in the correct order to show a logical sequence of events. In order to control for possible contributory effects of executive dysfunction, inhibitory control was tested using the capture picture-sequences and executive planning was tested using the Tower of London task. In both studies, it was found that individuals with schizophrenia showed a selective ToM impairment which could not be completely explained by reasoning or planning deficits or by poor inhibitory control.

Brüne showed individuals a muddled 4-picture cartoon sequence depicting a ToM scenario between characters, which had to be put into the correct order. Corcoran et al used visual jokes to look at potential ToM deficits in schizophrenia.
The primary interest of the current study was to examine the associations between specific schizophrenic symptoms and ToM capabilities using the cartoon method devised by Corcoran.

Forty participants aged 19–65 years were recruited for this study. Twenty of these had a diagnosis of DSM-IV schizophrenia. To assess their present symptomatology, the schizophrenia patients were rated on the Krawiecka Standardized Scale for Rating Chronic Psychotic Patients. All participants in this study gave written, informed consent.

Sixty-three single-image cartoon jokes, printed on A4 cards, were generously provided by the authors of previous studies; they comprised a set of purely Physical jokes and a set of ToM jokes. It was explained to the subjects that they would be shown cartoons intended to be funny. The two complete sets of cartoons were then shown to each subject in turn. The order in which they were presented was alternated so that half the participants viewed the ToM cartoons first, and half viewed the ToM cartoons second.

The subjects were shown each joke one by one and instructed to indicate to the observer when they believed they had understood its meaning. This response time was recorded to the nearest second using a stopwatch. The participants then gave a short explanation of their interpretation of the joke's meaning. Responses were scored 1 for a correct answer and 0 for an incorrect answer. For a theory of mind answer to be correct, appropriate mental state language had to be used. Furthermore, participants were asked to subjectively grade each cartoon image for humour and difficulty on a scale of 1–5, where 1 was not funny or very easy and 5 was very funny or very difficult, respectively. Simple Physical descriptions of the scenario were required for the Physical joke responses to be scored correct; examples of acceptable responses can be viewed in the accompanying table. Tests were all performed in quiet, distraction-free rooms.

Data analysis was performed using SPSS for Windows Version 11.0. General linear model repeated measures ANOVA was used to determine the significance of any difference in the Physical versus ToM scores seen between the groups. General linear model ANCOVA controlling for Physical joke score was used to investigate the selectivity of any group difference in ToM capabilities. Linear regression analysis was used to relate Physical and ToM scores to Krawiecka sub-totals for positive, negative and non-specific symptoms, individual Krawiecka symptoms, medication dose and joke block presentation order. Independent two-tailed t-tests were used to compare the group score differences in the two conditions, the average subjective ratings for humour and difficulty assigned to the stimuli by the participants, and the average response times taken to get the jokes.

Using general linear model repeated measures ANOVA, highly significant main effects were found for repeated measure and group, as well as a significant interaction of group by joke. Follow-up t-tests comparing individuals with schizophrenia to controls were highly significant for both the ToM condition (p < 0.0001) and the Physical condition (p < 0.001). Additionally, within both the patient and control groups, scores were significantly worse for ToM jokes than for Physical jokes (p < 0.0001 for both groups). However, the significant interaction showed that the difference of 10.6 for the patient group was greater than that for the controls (5.6).
Using general linear model ANCOVA controlling for Physical joke score, a significant group difference in ToM joke scores was still evident (F = 19.5, p < 0.05).

The two groups were well matched for age, IQ and sex, and any difference between them was shown to be non-significant by independent two-tailed t-test (p > 0.1). It was unnecessary, therefore, to perform regression analyses to co-vary for these factors.

Independent t-test analysis found no significant difference between the schizophrenia patients' and control participants' subjective ratings for humour and difficulty, or between the average response times for correct responses (p > 0.05). Results are summarized in the accompanying table. Furthermore, linear regression indicated that the order of presentation of the joke sets had no significant effect on ToM or Physical joke scores.

Correlations were run to investigate the relationships between performance on ToM and Physical jokes and the different symptom scores. As stated, performance was not significantly reduced in association with increasing severity of positive or negative symptoms as a whole, or of delusions and hallucinations specifically. The features of depression, incoherence and poverty of speech were also analysed to see if they could be having an effect on the patients' ToM and Physical joke performance, but there were no significant findings. The converted equivalent daily chlorpromazine medication doses were correlated with performance and also found to be non-significant for both cartoon conditions.

This study showed that individuals with schizophrenia and normal IQ had a poorer understanding of both types of jokes (and at least a reduced ability to relay their humorous intent) than matched healthy controls. This is to be expected, as schizophrenia patients have previously been reported to show poor appreciation of humour. However, the difference between the Physical and ToM joke scores was significantly greater for schizophrenia patients than for controls. This implies that it is some aspect of the schizophrenia disease process that is associated with ToM impairment in the patient group, rather than a general difficulty with the appreciation of humour.

If the schizophrenia group had had a poorer understanding of the jokes, we would expect this to be reflected in the subjective gradings for humour and difficulty. Both groups found the ToM jokes significantly more difficult than the Physical ones. The former were certainly more detailed and by their very nature comprised characters in ToM scenarios. It could be that these jokes were more difficult to understand, but there was no significant difference between the response times for the two joke types in either group. Poor verbal report of mentalistic terms may be an intrinsic feature of schizophrenia, and this could have resulted in the schizophrenia group's poor performance on this set of jokes. Language and thought are intrinsically linked, and the question arises as to whether disordered verbalisation in schizophrenia is a speech disturbance only or part of a disorder in thinking.

These data suggest that, as predicted, schizophrenia patients have problems in interpreting the thoughts of others, supporting the findings of previous work. There is, however, an alternative interpretation of these results.
The individuals with schizophrenia may not be showing a domain-specific difficulty with ToM function, but rather may be performing differentially more poorly than the control group on the more difficult ToM condition, such that the observed deficit could reflect a differential sensitivity to increased task difficulty.

When the obtained totals for positive Krawiecka symptoms were analysed, it was found that there was not a significant relationship between higher positive symptomatology and poor ToM performance, contrary to what had been predicted. Closer scrutiny of individual positive symptoms also revealed that neither delusions, hallucinations nor speech incoherence were significantly linked to impaired ToM performance. Previous studies have shown paranoid delusions to be significantly related to poor ToM performance, in both first and second order ToM tasks and in both verbal and pictorial paradigms. Our findings might be attributed to several individuals who, despite scoring the maximum Krawiecka score (4) for delusions, hallucinations or both, performed similarly to controls in the ToM condition.

Alternatively, perhaps the nature of our patients' delusions and hallucinations was not that specifically implicated in ToM impairment. Unfortunately, our sample size was too small to allow further investigation of patients with different types of delusion. Unlike the findings of previous research, negative features of schizophrenia were not associated with ToM capabilities. However, the mean Krawiecka scores for these features were low within the subject group, and our number of subjects was relatively small.

This study was limited, especially for symptom sub-group analyses, by its relatively small sample size, although we did find disease effects. With a larger sample, further symptom-specific sub-groups could be made. Furthermore, another control group of non-schizophrenia psychiatric patients may have been useful to explore more closely the role of diagnosis as opposed to symptoms. One of our previous studies used a psychiatric control group of patients with a psychotic affective disorder and found that positive psychotic symptomatology was linked to poor ToM performance and was not diagnosis specific.

We believe that the Physical cartoons themselves acted as an adequate internal control. If the schizophrenia group had performed as poorly on the Physical cartoons as they did on the ToM cartoons, this could imply either a general verbalization deficit or a general cognitive impairment. Since this was not the pattern found, our results count against a domain-general interpretation of this type. Furthermore, as mentioned previously, regression analysis showed no significant effect of language impairment, as assessed using the Krawiecka symptoms of poverty of speech and incoherence of speech, on ToM joke performance. ANCOVA also showed that the group differences on the ToM jokes could not be accounted for by the group differences on the Physical jokes.
This was taken as evidence for an observable and selective compromise of ToM capacity within the schizophrenia group. However, an unrelated cognitive neuropsychological task testing another cognitive domain could have been implemented, and this could have been used to further elaborate whether the observed compromised ToM function was a specific deficit or secondary to general cognitive impairment. Further research is therefore required in ToM and schizophrenia to see whether the presence of schizophrenia itself is enough to impair ToM capabilities or whether ToM impairment is due, instead, to the presence of particular symptoms or of some general neuropsychological deficit. A further question that we did not address at all in the present study was whether the ToM deficits observed in schizophrenia could be state (related to fluctuating symptom severity) or trait in nature.
The schizophrenia group performed significantly worse in both the Physical and ToM conditions on this visual joke task than the matched control group. The performance on the ToM condition was significantly worse and is taken as evidence for a compromised ToM capability in the schizophrenia group, which is in keeping with previous research. In this instance poor ToM performance could not be significantly linked to any particular symptomatology, as had been hypothesised.
The author(s) declare that they have no competing interests. DM conceived and designed the study, collected neuropsychological test data and drafted the manuscript. HT helped implement the study, collected neuropsychological test data and co-wrote the first draft. DMac and DCO were involved in the psychiatric ratings of the patients and revisions of later drafts. PM advised on statistical analysis and helped to write the corresponding sections. SL supervised clinical aspects of the study and revised later drafts, and ECJ revised the final draft and approved this version to be published.
The pre-publication history for this paper can be accessed here:
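To make the covariance analysis reported above concrete, the following is a minimal sketch of an ANCOVA of ToM scores with Physical score entered as a covariate, in the spirit of the analysis described. The data frame, group sizes and score values are entirely hypothetical, and the availability of statsmodels is an assumption; this is an illustration of the technique, not the authors' actual analysis script.

```python
# Hypothetical illustration of an ANCOVA: group effect on ToM joke scores,
# controlling for Physical joke scores. All data below are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 20  # hypothetical participants per group

df = pd.DataFrame({
    "group": ["control"] * n + ["schizophrenia"] * n,
    "physical": np.concatenate([rng.normal(7, 1, n), rng.normal(6.5, 1, n)]),
})
# Simulate a selective ToM deficit in the patient group.
df["tom"] = (0.8 * df["physical"]
             + np.where(df["group"] == "schizophrenia", -2.0, 0.0)
             + rng.normal(0, 1, 2 * n))

# ANCOVA = OLS with the covariate entered alongside the group factor.
model = smf.ols("tom ~ C(group) + physical", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F-test for the group term
```

A significant F for the group term after the covariate is fitted is what licenses the "still evident" conclusion drawn in the results above.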
There is an increasing prevalence of asthma in the Caribbean and patients remain non-compliant to therapy despite the development of guidelines for management and prevention. Some patients may self-medicate with medicinal herbs for symptomatic relief, as there is a long tradition of use for a variety of ailments. The study assessed the prevalence of use and the factors affecting the decision to use herbs in asthmatic patients attending a public specialty care clinic in Trinidad. A descriptive, cross-sectional study was conducted at the Chest Clinic in Trinidad using a de novo, pilot-tested, researcher-administered questionnaire between June and July 2003. Fifty-eight out of 191 patients (30.4%) reported using herbal remedies for symptomatic relief. Gender, age, ethnicity, and asthma severity did not influence the decision to use herbs; however, 62.5% of patients with tertiary level schooling used herbs, p = 0.025. Thirty-four of these 58 patients (58.6%) obtained herbs from their backyards or the supermarket; only 14 patients (24.1%) obtained herbs from an herbalist, herbal shop or pharmacy. Relatives and friends were the sole source of information for most patients (70.7%), and only 10.3% consulted an herbalist. Ginger, garlic, aloes, shandileer, wild onion, pepper and black sage were the most commonly used herbs. Among patients attending the Chest Clinic in Trinidad the use of herbal remedies in asthma is relatively common, on the advice of relatives and friends. It is therefore becoming imperative for healthcare providers to become more knowledgeable on this modality and to keep abreast with the latest developments.
Recent reports from the Caribbean suggest that the incidence of asthma is following the global trend of increasing prevalence. In Jamaica, a prevalence of 20.8% for exercise-induced asthma was estimated in a cross-sectional study in schoolchildren. Inhaled corticosteroids as prophylaxis and an 'as required' bronchodilator for symptomatic relief are established modalities for asthma management and prevention, and the Commonwealth Caribbean Medical Research Council/Global Initiative for Asthma guidelines were adopted in the Caribbean in 1997. Over the last few decades, a global resurgence in the use of herbal remedies has fuelled the growing multi-billion dollar international trade of botanical products. Many patients, dissatisfied with conventional medicines because they expect permanent cures, believe that herbal remedies are 'natural' and sometimes self-medicate without informing their attending physician. Although there is a long history of traditional use of medicinal herbs throughout the Caribbean [7,8], few studies have examined the extent of such use among asthmatic patients. This study was undertaken to assess the extent of use of herbal remedies by asthmatic patients attending a specialty chest clinic in Trinidad for symptomatic relief, and to determine the factors influencing the patient's decision to use herbs.
The study was approved by the Ethics Committee of the Faculty of Medical Sciences, University of the West Indies, St. Augustine campus, and permission to interview patients was granted by the Director of the Chest Clinic of the Ministry of Health, Trinidad and Tobago. The study was conducted over the two-month period June to July 2003. The Chest Clinic was chosen as the source of subjects as this is the only national tertiary level health facility specializing in the management of respiratory diseases.
Patients entering the study were physician-diagnosed asthmatics based on self-reported symptoms of wheezing, chest tightness and nocturnal coughing in the previous year. Patients were recruited by consecutive sampling and the nature and purpose of the study were explained on an individual basis. Those confirming their willingness to participate signed their informed consent and were interviewed using a de novo, pilot-tested, researcher-administered questionnaire. The questionnaire assessed demographic data such as age, gender, ethnicity, residential district, education, employment and socioeconomic status. Subjects reported their disease severity as intermittent, moderate or severe as determined by the Global Initiative for Asthma (GINA) guidelines with respect to symptom frequency. Chi-squared tests were performed to determine whether there were statistically significant associations between the use of herbs and these variables. The p value was set at <0.05 for statistical significance. The data were analyzed using SPSS for Windows. The sample size was calculated as 185 patients, assuming a prevalence of 86%.
During the study period one hundred and ninety-one patients consented to participate. The demographic details of the sample are given in Table . The GINA guidelines were recently adopted in the Caribbean and asthmatic patients are currently treated according to their symptom severity. In our sample population, particularly in patients with moderate and severe symptoms, corticosteroids (controllers) and β2-agonists (relievers) were prescribed at very high rates, Table . This high level of prescription and use of β2-agonists suggests a lack of symptomatic control in our sample population. Theophylline and anticholinergics were prescribed in both categories of patients, but to a lesser extent. Gender, age, ethnicity, residential district, employment status, income and asthma severity had no statistically significant effect on the use of herbal remedies within the sample population, Table . Most patients (70.7%) using herbs were advised by a relative or friend and only 10.3% sought the advice of an herbalist, Table . Most patients (58.6%) obtained their herbs or medicinal plants from either their backyards or the supermarket. Only fourteen (24.1%) obtained their herbal supplies from an herbalist, herbal shop or pharmacy. Seventeen (29.3%) of these patients reported using herbs within the last week, and most of these patients (60.3%) had used herbs within the last six months. Many of these patients were using both physician-prescribed antiasthmatic drugs and herbal remedies, Table . Many patients used garlic (Allium sativum) or ginger for symptomatic relief of asthma, Table . Aloes (Aloe vera), shandileer (Leonotis nepetifolia), wild onion, pepper (Capsicum spp.), tulsi (Ocimum gratissimum), black sage (Cordia curassavica), shadon beni (Eryngium foetidium), lemongrass (Cymbopogon citratus) and nutmeg (Myristica fragrans) were the more popular traditional indigenous West Indian medicinal plants used. Two patients reported using marijuana (leaves and roots). Herbs of European and North American origin, identified as Echinacea (Echinacea purpurea), Golden Seal (Hydrastis canadensis) and Chamomile (Matricaria chamomilla), were less frequently used. Five patients reported using trade-name imported tablets for asthma. Most patients in the sample used more than one medicinal herb simultaneously, and these were usually prepared and administered as mixtures in teas.
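As a consistency check on the sample-size calculation reported above, the standard single-proportion formula reproduces the stated 185 patients if one assumes a 95% confidence level (z = 1.96) and a 5% absolute precision; both parameters are assumptions here, since neither survives in the text:

```latex
n \;=\; \frac{z_{1-\alpha/2}^{2}\, p\,(1-p)}{d^{2}}
  \;=\; \frac{1.96^{2}\times 0.86 \times 0.14}{0.05^{2}}
  \;\approx\; 185
```

With p = 0.86 taken from the assumed prevalence, the arithmetic gives 185.0, matching the figure quoted in the Methods.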
Almost one in four patients using medicinal herbs (22.5%) used either garlic or aloes. Patients using aloes (Aloe vera) and traditional indigenous medicinal herbs such as shandileer (Leonotis nepetifolia) and tulsi (Ocimum gratissimum) were more likely to be earning less than US$12,000 per annum, Table , whereas imported herbs (Echinacea purpurea and Matricaria chamomilla) were more likely to be used by patients earning in excess of US$12,000 per annum. Income did not affect the use of either garlic or cocoa onion. Aloes (Aloe vera), tulsi (Ocimum gratissimum) and golden seal were preferred by patients with at least twelve years of formal education, Table , while patients with less formal education more often used easily accessible herbs such as shandileer (Leonotis nepetifolia), wild onion or ginger. We did not assess whether herbal remedies were more expensive than conventional medicines. We assumed that an additional expense would only have been incurred by those patients purchasing processed, imported herbs obtained from an herbalist, herbal shop or pharmacy (24.1%) and who actually consulted an herbalist (10.3%). We also reasoned that, since all the other herbs used were inexpensive and available from either the backyard garden or the supermarket (58.6%), the cost to patients selecting these remedies was minimal.
The findings of this study are important in that local medicinal plants in Trinidad have been identified in the self-management of asthma in a significant number of patients attending the specialty clinic. These identified herbs can now be targeted for scientific investigation to determine whether their pharmacological efficacy will assist in the development of viable healthcare alternatives in a developing country. These findings are also important for policymakers in the health sector who are given the mandate to regulate issues pertaining to the public's health. We are also becoming more aware of the potential for critical interactions between herbs and drugs taken concomitantly, which can be life-threatening. Since herbs are here to stay and patients will continue to self-medicate with increasing frequency, it is imperative that healthcare providers become more knowledgeable on this modality and keep abreast with the latest developments in herbal therapy.
The author(s) declare that they have no competing interests. YNC was the P.I. in this study. He was responsible for the study concept, development of methodology, coordinating the research activities, analyzing the data, and writing the manuscript. AFW was responsible for data input and analysis. DA was involved in methodological development, data collection, data input and analysis, and presentation at a regional conference. RC was involved in methodological development, data collection, data input and analysis. NW, RM, OS and DW were each involved in methodological development, data collection and input. All authors read and approved the final manuscript.
The pre-publication history for this paper can be accessed here:
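The education association reported above (62.5% herb use among patients with tertiary schooling, p = 0.025) came from the chi-squared testing described in the Methods. A minimal sketch of that kind of test follows; the 2 × 2 counts are invented for illustration (they are merely constructed to be consistent with the published totals) and are not the study's data.

```python
# Hypothetical chi-squared test of association between schooling level
# and herb use; the contingency table below is invented, not study data.
from scipy.stats import chi2_contingency

#                 herb users   non-users
table = [[10, 6],    # tertiary schooling (hypothetical counts)
         [48, 127]]  # primary/secondary schooling (hypothetical counts)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
```

For a 2 × 2 table, scipy applies Yates' continuity correction by default, which is the conventional choice at this sample size.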
Sequence periodicity with a period close to the DNA helical repeat is a very basic genomic property. This genomic feature was demonstrated for many prokaryotic genomes. The Escherichia coli sequences display the period close to 11 base pairs. Here we demonstrate that practically only ApA/TpT dinucleotides contribute to the overall dinucleotide periodicity in Escherichia coli. The noncoding sequences reveal this periodicity much more prominently compared to protein-coding sequences. The sequence periodicity of ApC/GpT, ApT and GpC dinucleotides along the Escherichia coli K-12 genome is found to be located as well mainly within the intergenic regions. The observed concentration of the dinucleotide sequence periodicity in the intergenic regions of E. coli suggests that the periodicity is a typical property of prokaryotic intergenic regions. We suppose that this preferential distribution of dinucleotide periodicity serves many biological functions; first of all, the regulation of transcription.
DNA sequence periodicity with a period of about 10–11 base pairs (bp) has been long known in eukaryotic DNA sequences. It was discovered recently in prokaryotic sequences as well. The periodicity in E. coli is distributed in a non-uniform way, in scattered segments of the size 100–150 bases. It was also known for a long time that quite a few DNA promoter regions of E. coli possess the sequence periodicity of AA and TT dinucleotides. The sequence periodicity of AA/TT dinucleotides is frequently associated with sequence-dependent DNA curvature, which is known to play an important role in the initiation of transcription of many genes. Many E. coli promoters have upstream curved sequences [16,17], and Pedersen and co-workers showed that E. coli DNA curvature peaks are frequently located downstream of the CDS. Since the dinucleotide periodicity with the period close to the helical repeat is associated with DNA intrinsic curvature [21-23], in this work the sequence dinucleotide periodicity in E. coli and its distribution along the genome are systematically analyzed. A strong preference of intergenic regions to express the sequence periodicity of AA, AC, GC, and TT dinucleotides is discovered.
Positional autocorrelation analysis of the nucleotide sequences is an appropriate tool to detect all major characteristic distances in the sequences, the periodicities in particular. The complete genome of E. coli, as well as its coding and noncoding regions, was subjected to this procedure. The resulting autocorrelation profiles for all 16 dinucleotides (data not shown) were further analyzed by Fourier transform. The results of this analysis are shown in Fig. . To screen the genome of E. coli and find out where the periodical regions are located, we chose the period 11.2 bp [25,26]. We used a sine-wave probe with this period (see Methods) to scan the E. coli genome sequence and to detect the periodical sites. This calculation shows that the periodicity is not evenly distributed along the E. coli genome. In Fig. , four sample maps of the E. coli genome are shown. The periodicity is distinctly located in certain regions. Many of the peaks observed are found to correspond to the intergenic regions (indicated by the black bars at the top). For example, two such peaks of periodicity in Fig. coincide with intergenic regions. To verify the apparent strong correlation between the intergenic regions and AA/TT periodicity, we split the intergenic regions into several families by size and analyzed the subsets separately by aligning (centering) the regions and summing up the respective local periodicity distributions.
The combined maps for intergenic regions with a size from 50 to 150 bp, from 150 to 250 bp, from 250 to 350 bp, from 350 to 450 bp, and from 450 to 550 bp are shown in Fig. . To verify the choice of the period 11.2 bases, we calculated the periodicity maps for the highly populated group of regions of size 200 ± 50 bp, assuming different periods in the range 10.5–12.5 bases. The resonance 3D plot in Fig. confirms that 11.2 bases is an appropriate choice of period for describing the E. coli DNA sequence periodicity. The spectral analysis (Fig. ) and examination of the periodicity maps show that the observed concentration of the sequence periodicity in the intergenic regions corroborates earlier results and suggests that the periodicity is a typical property of the intergenic regions.
The sequence of the whole genome of Escherichia coli K-12 MG1655, locus U00096, 4639221 base pairs, was taken from the National Center of Biotechnology Information. Intergenic regions were identified in accordance with the annotation to this genome of E. coli and gathered in a separate dataset. The autocorrelation profile X was calculated for each dinucleotide separately. For the calculation of the ApA autocorrelation, for example, we calculated the number of occurrences of ApA – ApA pairs at a distance k, and designated it by Xk. Spectral analysis of the autocorrelation profile X was performed by Fourier transform, where fp is the normalized wave-function amplitude of period p, X is the autocorrelation profile for one chosen dinucleotide, Xi is its value at position i, X̄ is its average value, and W is the maximal considered autocorrelation distance (in our case 100 bp). As a probe of periodicity, sine waves with period T were taken to describe an idealized periodical distribution of AA and TT dinucleotides within a window W. The probes were correlated with the E. coli sequences by moving the probes along the sequences and calculating the value C for every position, where i is the index of a dinucleotide position in the window W. The value Cmax is introduced for normalization purposes and is calculated over the same window W. Ideally periodical sequence segments would therefore be described by C = 1, while segments with no periodicity would correspond to C = 0. The results of these calculations are presented as maps of the sequence periodicity. Four sample maps are shown in Fig. . The maps around intergenic regions were combined (summed) separately for groups of similar sizes of the intergenic regions. Five such groups were analyzed: 100 ± 50 bp, 200 ± 50 bp, 300 ± 50 bp, 400 ± 50 bp, and 500 ± 50 bp. For each group the maps were synchronized at the respective intergenic centers, and the sums of the maps were calculated and smoothed by a running average within 51 bp. The standard deviations for the combined plots were estimated by generating random sequences of the same size and dinucleotide composition for each group separately and averaging the respective periodicity maps. The resonance 3D plot for the intergenic regions of length 200 ± 50 bp was built from calculations with different periods T in the interval 10–12.5 bp. One-third (202) of the most periodic maps of this group was taken for the calculation. The maps for different periods T were smoothed five times by a running average over 51 bp. The baselines were set to 0. The surface of the 3D plot was smoothed 3 times by a running average over 9-point square elements, on a grid with separations of 0.1 bp for T and 20 bp for sequence position.
None declared.
SH carried out all graphics. ENT and AB participated in the design of the study and analysis of results.
All authors drafted the manuscript. All authors read and approved the final manuscript.
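Since the display equations for fp and C did not survive extraction, the following is a minimal sketch of the periodicity analysis as the Methods describe it: a dinucleotide autocorrelation profile, a Fourier amplitude per period, and a sliding sine-probe score. The exact normalizations (the 2/W factor and the Cmax definition) are assumptions, not recovered from the paper; only the overall scheme follows the text.

```python
# Sketch of the dinucleotide periodicity analysis described above.
# Normalizations are assumptions; only the overall scheme follows the text.
import numpy as np

W = 100        # maximal autocorrelation distance, as in the paper
T = 11.2       # probe period (bp), as in the paper

def dinucleotide_positions(seq, dinucs=("AA", "TT")):
    """0/1 indicator: a tracked dinucleotide starts at position i."""
    return np.array([1.0 if seq[i:i + 2] in dinucs else 0.0
                     for i in range(len(seq) - 1)])

def autocorrelation(indicator, max_dist=W):
    """X_k = number of co-occurrences of the dinucleotide at distance k."""
    return np.array([np.sum(indicator[:-k] * indicator[k:])
                     for k in range(1, max_dist + 1)])

def fourier_amplitude(X, period):
    """Assumed form of f_p: normalized Fourier amplitude of the
    mean-subtracted autocorrelation profile at the given period."""
    k = np.arange(1, len(X) + 1)
    dev = X - X.mean()
    amp = np.abs(np.sum(dev * np.exp(2j * np.pi * k / period)))
    return 2.0 * amp / len(X)

def probe_score(indicator, start, period=T, window=W):
    """Assumed form of C: correlation of the local dinucleotide signal
    with a sine probe, scaled so an ideal signal gives C near 1."""
    i = np.arange(window)
    probe = np.sin(2 * np.pi * i / period)
    segment = indicator[start:start + window]
    c_max = np.sum(np.abs(probe)) / 2.0   # assumed normalization
    return np.sum(segment * probe) / c_max

# Toy usage on a random sequence (replace with the U00096 genome sequence).
rng = np.random.default_rng(1)
seq = "".join(rng.choice(list("ACGT"), size=5000))
ind = dinucleotide_positions(seq)
X = autocorrelation(ind)
print("f_p at p = 11.2:", fourier_amplitude(X, 11.2))
print("C at position 0:", probe_score(ind, 0))
```

Sliding `probe_score` along the genome and plotting it against position is what produces the periodicity maps referred to in the Results.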
Fasciola hepatica primarily involves the liver; however, in some exceptional situations other organs have been reported to be involved. The ectopic involvement is either a result of parasite migration or perhaps an eosinophilic reaction. Here we report a known case of multiple myeloma who was under treatment with prednisolone and melphalan. He was infected by Fasciola hepatica, which involved many organs, and the lesions were mistaken for metastatic ones. Presented here is a very unusual case of the disease, likely the first case involving the pancreas, spleen, and kidney, as well as the liver. Human fascioliasis, a commonplace infection caused by the leaf-shaped trematode Fasciola hepatica, affects a human host by chance.
In Dec 2001, a 52-year-old man, who was a known case of multiple myeloma (MM), presented to one of our affiliated hospitals with persistent right upper quadrant and epigastric pain, and anorexia for a period of 1 month. At the time of admission, the patient had been receiving prednisolone and melphalan for his MM, which was currently in remission. His recent condition began with tongue and facial edema two weeks before the appearance of the abdominal pain. Upon physical examination, mild epigastric tenderness and a palpable liver were found. Neither icterus nor any positive sign of cardiopulmonary abnormalities was noted. Additionally, the patient did not have a fever and his peripheral lymph nodes were not enlarged. Initial laboratory findings were as follows: a hemoglobin of 10.6 g/dL, white blood cell count of 10,800/mm3 with 18% eosinophils, and a sedimentation rate of 90 mm at the end of the first hour. The total bilirubin was 0.5 mg/dL (0.2–0.8 mg/dL), alanine aminotransferase (ALT) 48 IU/L (0–40 IU/L), aspartate aminotransferase (AST) 46 IU/L (0–40 IU/L), and alkaline phosphatase (Alk P) 651 IU/L (60–140 IU/L). The eosinophilia fluctuated between 12 and 55 percent in the various tests performed during the time period in question, with no unique patterns noted. Serum electrophoresis showed a monoclonal spike in the gamma region. A specific enzyme-linked immunosorbent assay (ELISA) produced a positive result for Fasciola hepatica, while the test was negative for Toxocara canis. Serologic tests for the presence of hepatitis A, B, and C viruses were negative. Blood and urine cultures were found to be sterile. Other laboratory studies, including repeated stool examinations for ova and parasites, showed no abnormalities. Chest x-rays did not demonstrate any parenchymal or pleural abnormality. Abdominal ultrasonography showed mild hepatomegaly with multiple hypoechoic lesions in the liver. A CT scan revealed multiple but poorly defined hypodense lesions in the liver, and a diffusely enlarged pancreas with mild bilateral pleural reaction, suggesting metastatic cancer. In the search for a potential malignancy, diagnostic laparoscopy was performed, which revealed the presence of white-colored lesions ranging from 1 to 3 cm in diameter on the surface of both lobes of the liver, with mild ascites. Multiple liver and peritoneal biopsy specimens revealed fibrinoid necrosis associated with granulomatous reaction and a high concentration of eosinophils in the liver, accompanied by markedly inflamed peritoneal tissue with eosinophilic infiltration. No malignant cells were identified and no evidence of extramedullary plasmacytoma was found. Specific staining for fungal organisms and acid-fast bacilli was negative. The ascitic fluid also had a high level of eosinophils.
Endoscopic retrograde cholangiopancreatography (ERCP) failed to show any filling defect within the biliary tree. Furthermore, the patient underwent bone marrow aspiration, which only indicated high eosinophilic infiltration. The patient was placed on albendazole (400 mg twice daily for 1 week). The treatment was well tolerated and the abdominal pain improved rapidly. At the time of discharge, the patient was in good clinical condition. During a follow-up visit two months later, a second CT unexpectedly showed not only an increase in the number and size of the hypodense lesions in the liver, but also the extension of lesions into the pancreas, the spleen and both kidneys. No evidence of peripheral enhancement of the hepatic lesions or ascites was documented. The patient was still experiencing upper quadrant pain on the right side. Laboratory investigations produced a white blood cell count of 7200/mm3 with 16% eosinophilia. Repeated stool examinations failed to identify ova and parasites. The patient was given triclabendazole. As recommended, the patient had another follow-up CT scan three months later. The follow-up CT scans revealed a considerable improvement in the number and size of the lesions, and by that time all of his symptoms were resolved. At this time the WBC was 6000/mm3 with 6% eosinophils. After 5 months, in the last CT, the lesions had almost completely disappeared.
While fascioliasis is a well-known human parasitosis, it sometimes produces unusual features that may lead a clinician to misdiagnose the condition. In the vast majority of cases, the diagnosis is difficult in both the acute and chronic phases, and some important conditions such as liver abscesses and metastasis cannot be easily differentiated from fascioliasis [12,13]. Among imaging tools, ultrasonography is of little diagnostic value during the acute phase, while a contrast-enhanced CT scan can be very useful for diagnosis [16,17]. As clinical and laboratory findings of fascioliasis may easily be confused with many other conditions, a high index of suspicion is required to establish a correct diagnosis [16,17]. The ingested metacercariae of Fasciola hepatica penetrate the intestinal wall and migrate through the peritoneal cavity to reach the liver. However, ectopic migration to other locations is one of the strangest manifestations of the infection [8,9].
The pre-publication history for this paper can be accessed here:
The endometrium prepares for implantation under the control of steroid hormones. It has been suggested that there are complicated interactions between the epithelium and stroma in the endometrium during the menstrual cycle. In this study, we demonstrate a difference in gene expression between the epithelial and stromal areas of the secretory human endometrium using microdissection and macroarray techniques. The epithelial and stromal areas were microdissected from the human endometrium during the secretory phase. RNA was extracted and amplified by PCR. Macroarray analysis of nearly 1000 human genes was carried out in this study. Some genes identified by macroarray analysis were verified using real-time PCR. In this study, changes in expression <2.5-fold in three samples were excluded. A total of 28 genes displayed changes in expression from the array data. Fifteen genes were strongly expressed in the epithelial areas, while 13 genes were strongly expressed in the stromal areas. The strongly expressed genes in the epithelial areas with changes >5-fold were WAP four-disulfide core domain 2 (44.1-fold), matrix metalloproteinase 7 (40.1-fold), homeo box B5 (19.8-fold), msh homeo box homolog (18.8-fold), homeo box B7 (12.7-fold) and protein kinase C, theta (6.4-fold). On the other hand, decorin (55.6-fold), discoidin domain receptor member 2 (17.3-fold), tissue inhibitor of metalloproteinase 1 (9-fold), ribosomal protein S3A (6.3-fold), and tyrosine kinase with immunoglobulin and epidermal growth factor homology domains (5.2-fold) were strongly expressed in the stromal areas. WAP four-disulfide core domain 2 (19.4-fold), matrix metalloproteinase 7 (9.7-fold), decorin (16.3-fold) and tissue inhibitor of metalloproteinase 1 (7.2-fold) were verified by real-time PCR. Some of the genes we identified with differential expression are related to the immune system. These results provide new information for understanding the secretory human endometrium.
Many studies have sought to understand the mechanism of implantation. Recently, the rate of pregnancy in the in vitro fertilization and embryo transfer (IVF-ET) cycle has declined, and this has been attributed to a decrease in the rate of implantation. The recently developed laser microdissection method has gained widespread use throughout the research field. Information about cells can be obtained without contamination by using this method. Moreover, with the macroarray technique, which is already widely used for this purpose, the profiling of the gene expression of specific cell types has become possible. Torres et al. have already reported differences in gene expression between cell types or regions within the monkey endometrium using laser microdissection and differential display [1,2]. Identification of cell-specific proteins expressed in the endometrium during the secretory phase has been performed using a multi-disciplinary approach in the same trial. IGF-II mRNA is expressed in the mid-to-late secretory phase and in early pregnancy. In this study, we demonstrate differential gene expression between the epithelial and stromal areas obtained from secretory human endometrium using laser microdissection and the macroarray method. Confirmation of differential expression of candidate genes was performed by real-time PCR. Human endometrium was obtained from 8 patients (25–38 years old) with normal menstrual cycles (28~30 days) during the mid-secretory phase.
These patients had had at least one intrauterine pregnancy in the past. Part of the endometrial biopsy was obtained with a curetting technique. The day of the menstrual cycle was determined by the patient's history, plasma progesterone levels (9.8~17.3 ng/ml) and the histological criteria of Noyes et al. The endometrium was embedded in OCT compound and frozen immediately in isopentane that had been cooled in liquid nitrogen. The frozen block was sectioned with a cryomicrotome at 8 μm thickness. Frozen sections were fixed in 100% methanol for 3 min and stained with 1% toluidine blue. The sections were laser-microdissected with the PALM MicroBeam system (PALM Microlaser Technologies A.G.) for epithelial and stromal areas and collected in a small tube (Fig. ).
The RNAs obtained were reverse transcribed into cDNA using a modified oligo (dT) primer and the BD SMART™ PCR cDNA Synthesis Kit. cDNA was PCR amplified for 24–29 cycles according to the user manual. 550 ng of each cDNA sample was labeled with α-32P dCTP (3000 Ci/mmol) using a random primer. Labeled probes were hybridized to a nylon array in ExpressHyb solution at 68°C overnight. After hybridization, the nylon membrane was washed once with 2 × standard saline citrate (SSC) + 1% sodium dodecyl sulphate (SDS), then twice with 1.0 × SSC + 0.5% SDS at 68°C. For real-time PCR, RNA was reverse transcribed using oligo (dT) primers with the TaKaRa RNA PCR Kit (AMV) Ver 2.1 according to the manufacturer's instructions. PCR was performed using the ABI PRISM 7700 Sequence Detection System. TaqMan Universal PCR MasterMix and Assays-on-Demand Gene Expression probes (Applied Biosystems) were used for the PCR step. Primer sequences are not publicly available, although their validity has been established by the manufacturer. The expression values obtained were normalized against those of the control human GAPDH.
Secretory endometrium was collected from 8 patients. Each sample was carefully dissected by laser microdissection into epithelial and stromal areas (Fig. 1a to 1d). Total RNA was extracted and subjected to macroarray analysis with nearly 1000 genes on the nylon membrane. Fifteen genes were strongly expressed in the epithelial areas (Table ). Real-time PCR was used to verify the changes in expression of certain candidate genes that were highly expressed in the array. Five samples were used for this study. WFDC2 and MMP7, which are both strongly expressed in the epithelial areas, and decorin and TIMP1, which are both strongly expressed in the stromal areas by the cDNA array, were chosen for verification. Each value was corrected for differences in loading relative to GAPDH mRNA expression. WFDC2 and MMP7 mRNA expression increased by 9.4- and 9.7-fold, respectively, compared to that of stromal cells. Decorin and TIMP1 mRNA expression increased by 16.3- and 7.2-fold, respectively, in stromal cells compared to that of epithelial cells. Statistically significant changes in expression of these genes were observed (p < 0.05) (Fig. 2A to 2D).
It has recently become possible to acquire information about specific cells by the microdissection method. To date, various methods have been used to understand the function of the endometrium. It is a well-known fact that epithelial cells and stromal cells in the endometrium play specific roles in vivo and are influenced by steroid hormones. However, it is very difficult to understand the molecular composition of each cell type as a function of time during the menstrual cycle.
One of the problems of cell culture experiments is that separate cultivation changes the composition of the cells, especially as these cells are influenced by neighboring cells. In this study, laser microdissection was used to isolate epithelial and stromal areas from the human endometrium. RNA was amplified by PCR and global gene expression was demonstrated by cDNA macroarray. Twenty-eight genes were identified in this study. These constitute only 2.8% of the 1000 genes on the array. Although this seems to be a small number, these genes were differentially expressed at least 2.5-fold in all three samples and normalized to two housekeeping genes. A similar percentage (1.2–5.8%) of genes with differential expression was reported using array analysis [17-19].
Recently, some papers focusing on endometrial gene expression have been reported. However, most of them compared expression between phases of the menstrual cycle. While Okulicz et al. demonstrated a difference in gene expression between cell compartments in the monkey endometrium, the genes they identified are not the same as ours [1,2]. Fifteen of 1000 genes were strongly expressed in the epithelial areas compared to the stromal areas. Of these, WFDC2 and MMP7 were strongly expressed in the epithelial areas as confirmed by real-time PCR. WFDC2, originally described as an epididymis-specific protein, is expressed in a number of normal human tissues. A possible role for this gene in sperm maturation is indicated by amino acid similarities to extracellular proteinase inhibitors of genital tract mucous secretions. HOXB gene induction is related to the immune system, and is specifically associated with IL-2-induced NK cell proliferation [27,28]. Of the strongly expressed genes in the stromal areas, decorin and TIMP1 gene expression were verified by real-time PCR. San Martin et al. reported the expression of decorin, a leucine-rich proteoglycan, in the mouse uterus and suggested that it localizes in the undifferentiated interimplantation site stroma.
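The real-time PCR verification above normalizes each target to GAPDH before comparing epithelial and stromal areas. The paper does not state the exact computation, but a comparative-Ct (2^-ΔΔCt) calculation is the usual way such GAPDH-normalized fold changes are derived; the sketch below uses invented Ct values purely for illustration.

```python
# Hypothetical 2^-ddCt fold-change calculation, normalizing a target gene
# to GAPDH in two compartments. All Ct values below are invented.
def fold_change(ct_target_a, ct_ref_a, ct_target_b, ct_ref_b):
    """Fold change of the target in compartment A relative to B."""
    d_ct_a = ct_target_a - ct_ref_a        # normalize A to its GAPDH
    d_ct_b = ct_target_b - ct_ref_b        # normalize B to its GAPDH
    return 2.0 ** -(d_ct_a - d_ct_b)       # comparative-Ct fold change

# e.g. a target ~3.3 cycles earlier (after normalization) in epithelium:
print(fold_change(ct_target_a=22.0, ct_ref_a=18.0,
                  ct_target_b=25.3, ct_ref_b=18.0))  # ~9.8-fold
```

A 3.3-cycle normalized difference corresponds to roughly a 10-fold change, which is the order of magnitude of the MMP7 result reported above.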
Previously we have shown that there are significant differences in empA expression between two strains of V. anguillarum, M93Sm and NB10. It is hypothesized that differences in empA regulation are due to differences in the binding of regulatory elements. The two strains of V. anguillarum, M93Sm and NB10, were examined and compared for the presence of DNA regulatory proteins that bind to and control the empA promoter region. Gel mobility shift assays, using a digoxigenin (DIG)-labeled oligomer containing a lux box-like element and the promoter for empA, were done to demonstrate the presence of a DNA-binding protein. Protein extracts from NB10 cells incubated in Luria Bertani broth + 2% NaCl (LB20), nine salts solution + 200 μg/ml mucus (NSSM), 3M, or NSS resulted in a gel mobility shift. No gel mobility shift was seen when protein extracts from either LB20- or NSSM-grown M93Sm cells were mixed with the DIG-labeled empA oligomer. The azocasein assay detected protease activity in all incubation conditions for NB10 culture supernatants. In contrast, protease activity was detected in M93Sm culture supernatants only when incubated in NSSM. Since the luxR homologue in V. anguillarum, vanT, has been cloned, sequenced, and shown to be required for protease activity, we wanted to determine if vanT mutants of NB10 exhibit the same gel shift observed in the wild-type. Site-directed mutagenesis was used to create vanT mutants in V. anguillarum M93Sm and NB10 to test whether VanT is involved with the gel mobility shift. Both vanT mutants, M02 and NB02, did not produce protease activity under any conditions. However, protein extracts from NB02 incubated in each condition still exhibited a gel shift when mixed with the DIG-labeled empA oligomer.
The data demonstrate that protein extracts of V. anguillarum NB10 cells contain a protein that binds to a 50 bp oligomer containing the empA promoter-lux box-like region. NB10 cells express empA during stationary phase in all growth conditions. The DNA-binding protein is not present in M93Sm extracts. M93Sm cells express protease activity only when incubated at high cell density in fish gastrointestinal mucus. The gel shift observed with NB10 cells is not due to VanT binding. The data also suggest that the DNA-binding protein is responsible for the less restrictive expression of empA in NB10 compared to M93Sm.
Vibrio anguillarum is the causative agent of vibriosis, one of the major bacterial diseases affecting fish, bivalves, and crustaceans. The vanT gene was amplified from V. anguillarum cells as previously described, using primers vanT-F2 and vanT-R2. Sequencing was performed on a Beckman-Coulter CEQ 8000. The Dye Terminator Cycle Sequencing (DTCS) quick start kit was used for the sequence reactions, which were prepared according to the manufacturer's specifications and run in a thermal cycling program. DNA samples were mixed with the appropriate primer (Table ) and then sequenced.
S.M.D. carried out the experimental part of the study and drafted the manuscript. D.R.N. conceived of the study, participated in its design and coordination, and edited the manuscript. P.S. participated in the experiment design and assisted with gel shift analysis. All authors have read and approved the final manuscript.
After the publication of this work it was brought to our attention that the final concentrations listed for this buffer were inaccurate. The correct final concentrations for this buffer are as follows:
HEPES 50 mM
NaCl 100 mM
NaF 10 mM
EDTA 5 mM
Na3VO4 0.5 mM
NEM 2 mM
Triton 0.1%
Complete protease inhibitors (Roche).
We regret any inconvenience that this inaccuracy may have caused, and thank Dr. Bräuer for bringing it to our attention.
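As a practical illustration of the corrected recipe, the sketch below computes the stock volumes needed for a given batch via C1V1 = C2V2. The stock concentrations are assumptions chosen for the example, not values from the correction.

```python
# Hypothetical worked example: volumes of stock needed for 100 ml of the
# corrected buffer (C1*V1 = C2*V2). Stock concentrations are assumed.
final_mM = {"HEPES": 50, "NaCl": 100, "NaF": 10, "EDTA": 5,
            "Na3VO4": 0.5, "NEM": 2}
stock_mM = {"HEPES": 1000, "NaCl": 5000, "NaF": 500, "EDTA": 500,
            "Na3VO4": 100, "NEM": 1000}   # assumed stock concentrations

batch_ml = 100.0
for component, c_final in final_mM.items():
    v_stock = c_final * batch_ml / stock_mM[component]   # V1 = C2*V2/C1
    print(f"{component}: {v_stock:.2f} ml of {stock_mM[component]} mM stock")
# Triton (0.1% final) and Complete inhibitors are added per the correction.
```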
Women report fear of pain in childbirth and often lack complete information on analgesic options prior to labour. Preferences for pain relief should be discussed before labour begins. A woman's antepartum decision to use pain relief is likely influenced by her cultural background, friends, family, the media, literature and her antenatal caregivers. Pregnant women report that information about analgesia was most commonly derived from hearsay and least commonly from health professionals. Decision aids are emerging as a promising tool to assist practitioners and their patients in evidence-based decision making. Decision aids are designed to assist patients and their doctors in making informed decisions using information that is unbiased and based on high quality research evidence. Decision aids are non-directive in the sense that they do not aim to steer the user towards any one option, but rather to support decision making which is informed and consistent with personal values. We aim to evaluate the effectiveness of a Pain Relief for Labour decision aid, with and without an audio-component, compared to a pamphlet in a three-arm randomised controlled trial. Approximately 600 women expecting their first baby and planning a vaginal birth will be recruited for the trial. The primary outcomes of the study are decisional conflict (uncertainty about a course of action), knowledge, anxiety and satisfaction with decision-making, and will be assessed using self-administered questionnaires. The decision aid is not intended to influence the type of analgesia used during labour; however, we will monitor health service utilisation rates and maternal and perinatal outcomes. This study is funded by a competitive peer-reviewed grant from the Australian National Health and Medical Research Council (No. 253635). The Pain Relief for Labour decision aid was developed using the Ottawa Decision Support Framework and systematic reviews of the evidence about the benefits and risks of the non-pharmacological and pharmacological methods of pain relief for labour. It comprises a workbook and worksheet and has been developed in two forms – with and without an audio-component (compact disc). The format allows women to take the decision aid home and discuss it with their partner.
Making evidence-based decisions in clinical practice is not always straightforward: patients and their healthcare providers may need to weigh up the evidence between several comparable options, the evidence for some treatments may be inconclusive, and the information needs to be tailored to each patient's clinical context and personal preferences [1,2]. To assist patients and their doctors in making informed decisions, information must be unbiased and based on current, high quality, quantitative research evidence. However, patient information materials are often outdated, inaccurate, omit relevant data, fail to give a balanced view and ignore uncertainties and scientific controversies [4,5]. Decision aids are "interventions designed to help people make specific and deliberative choices among options by providing (at minimum) information on the options and outcomes relevant to the person's health status". Internationally, decision aids have been evaluated in a variety of health and clinical settings. Although their use in pregnancy and birth has only just begun to be explored, this is an area in which consumers are known to want to participate actively in decision making.
The pain of labour is a central part of women's experience of childbirth and is a constant feature of antenatal discussion groups. However, satisfaction with childbirth is not necessarily contingent upon the absence of pain. Published recommendations on the management of labour pain include the following:
• Continuous caregiver support for a single individual should be available to women in labour
• Midwives must involve women in decisions about analgesia and recognise the value of promoting personal control
• Maternity services should ensure access to written and verbal information on pain relief and should support women in their choices for pain relief
• Maternity services should respect women's wishes to have some control over their pain relief
• Improved public information and data on pain and analgesia
In Australia over 250,000 women give birth annually, and the increasing use of epidural analgesia means some 75,000 women have an epidural in labour each year. Randomised controlled trials have shown epidural analgesia provides the most efficacious pain relief for labour, but the adverse consequences include prolonged labour, restricted mobility, use of oxytocin augmentation and an increased incidence of instrumental delivery [17,18]. Although not as effective as an epidural, randomised trials show inhalational analgesia (e.g. 50% nitrous oxide in oxygen) and systemic opioid analgesics (e.g. pethidine) can provide modest benefit to some patients during labour or supplement an unsatisfactory epidural. A number of women prefer to avoid pharmacological analgesia if possible. Women report fear of pain in childbirth and often lack complete information on analgesic options prior to labour. Dickerson stresses the importance of discussing preferences for pain relief before labour begins. The management of pain in labour is a clinical decision that fulfils Eddy's criteria for a decision in which patients' values and preferences should be included.
During 2003 and 2004, we developed an evidence-based decision aid about the management of pain in labour for women having their first baby. This followed a needs assessment that collected data on the attitudes, preferences and knowledge of nulliparous women who were making plans about pain relief for labour and childbirth. The needs assessment found that women's knowledge of pain relief options was limited and that these women would benefit from a decision aid for labour analgesia. In developing the decision aid we utilised the NHMRC guideline "How to prepare and present information for consumers of health services" and the Ottawa Decision Support Framework. The decision aid was designed for women to use at home or in the clinical setting, and takes about 30 minutes to complete. After working through the decision aid, women should take the completed worksheet to their next antenatal appointment to discuss their preferences with their health care provider. The worksheet is also useful for the practitioner, who can see rapidly from it what evidence the patient has considered, what her values and preferences are and which way she is leaning in her preferences for analgesia during labour. The decision aid was developed, pilot tested and revised with extensive consumer involvement, as outlined in the NHMRC guideline on preparing information for consumers. A number of draft decision aids were developed and each was subjected to pilot testing and revision as we obtained feedback. The process of testing and revising started with the study project group.
The next phase included a review by a group of national and international content experts, including decision aid experts, obstetricians, midwives, perinatal epidemiologists, parent educators and psychologists. Once we were convinced that the content was accurate, the decision aid was pilot-tested amongst consumers. There were several rounds of consumer review and refinement. Initially we aimed to compare the Decision Aid (workbook and audio-component) with usual care and counselling; however, preliminary work led us to alter our original study design. We could find no studies that compared Decision Aids with and without an audio-component. As the audio-component adds considerable complexity to the development and cost of the Decision Aid, we decided to have two intervention arms: a Decision Aid with an audio-component and a Decision Aid without an audio-component. Further, in pilot testing we found that women in the usual care arm were disappointed not to receive any information. Thus, to minimise refusals and losses to follow-up, we decided to issue the women in the control group with a pamphlet called "Pain relief during childbirth – A guide for women". This pamphlet is published by the Royal Australian and New Zealand College of Obstetricians and Gynaecologists, is publicly available and includes information about methods of pain relief during labour.
The aim of the trial is to compare the relative effectiveness of the Pain Relief for Labour Decision Aid with a pamphlet on women's decisional conflict, knowledge, expectations, satisfaction with decision making and anxiety, and to examine its impact on service utilisation and perinatal outcomes (as secondary outcomes).
The primary study hypotheses are that use of the Pain Relief for Labour Decision Aid by women expecting their first baby:
1. Reduces decisional conflict (uncertainty about the course of action)
2. Increases knowledge of labour analgesia
3. Increases satisfaction with their decision making
4. Reduces anxiety.
The secondary hypotheses of the study are that use of the Pain Relief for Labour Decision Aid by women expecting their first baby will not influence:
1. The type of analgesia women use for labour
2. Maternal and infant outcomes.
We will conduct a randomised trial with the following study groups to assess the impact of the decision aid:
Group 1: The pamphlet, "Pain relief during childbirth – A guide for women"
Group 2: Decision aid with an audio-component
Group 3: Decision aid without an audio-component
The study will take place in an Australian tertiary obstetric hospital with a full range of non-drug and anaesthetic options for pain relief in labour. Epidurals are available 24 hours a day from anaesthetic staff designated to the labour ward. All forms of antenatal care will be included in the study. Primiparous women in late pregnancy (≥36 weeks gestation) who are expecting to have a vaginal birth of a single infant will be eligible for the study. Primiparous women were selected because previous pregnancies have a strong impact on decision making and analgesia use in labour [15,16]. The study procedure draws on the usual schedule of weekly antenatal visits in late pregnancy (Figure ). Brief baseline data will be collected to assess comparability of the study groups. The baseline assessment will include age, brief socio-demographic data, highest level of education achieved, and anxiety as assessed by the state component of the short Spielberger anxiety scale. The aim of the decision aid is to assist preference elicitation, and not to influence the direction of the decision taken.
Women in each study group will be given the opportunity to review the intervention they are allocated (decision aid or pamphlet) while in the antenatal clinic and/or to take it home, whichever is most convenient. Many women will also want to discuss their preferences with their partner. At the next antenatal visit, women will be contacted by the research nurse to discuss the information materials and any questions they may have had. All participants will be given a follow-up questionnaire prior to their next antenatal consultation (see Outcome Measures below). After a study participant delivers, the midwife who provided the labour care will complete a brief questionnaire to assess the impact of the decision aid on the management of labour analgesia. Information will also be collected on caregiver support in labour, birthplace (delivery suite or birth centre), use of non-drug analgesic options and stage of labour at admission. At 12–16 weeks postpartum all participants will be mailed a second follow-up questionnaire. This will assess women's satisfaction with the decisions made and the decision-making processes (see Outcome Measures below). Questionnaires will be mailed with reply-paid envelopes, with up to two reminder telephone prompts to non-responders.
We will conduct in-depth interviews to explore the impact of the decision aid on women's experiences in labour and childbirth. A sub-sample of 30 women will be purposively selected, to reflect heterogeneity of experience of labour. The interviews will provide an understanding of the complexities of analgesic preferences, management, expectations, satisfaction, and psychological health following delivery. These data will enable examination of unpredicted and subtle effects of the decision aid on psychosocial outcomes that may not be captured using quantitative methods. Interviews will be face-to-face and conducted in women's homes or at a clinic, according to participants' preferences. Interviews will be recorded and transcribed. Data will be analysed using thematic analysis.
As with many obstetric interventions, blinding is virtually impossible. The main outcomes of this study are self-reported and the women are clearly not blinded to their treatment allocation. However, we will institute a number of measures aimed at keeping antenatal staff blind to the treatment allocation and preventing contamination of the control group:
• Women will review the decision aid with the research nurse and complete the first questionnaire (primary outcome measures) prior to their next antenatal consultation
• Usual antenatal care providers will be blinded to the exact content and format of the decision aid
• Regular in-service training for the antenatal care providers to explain the trial protocol and to make clear the potential effect of unmasking or contamination
• Monitoring decision aid distribution and keeping the decision aids locked up and accessible only by the research nurse
• Asking participants not to reveal their treatment allocation, or share their decision aid material with antenatal staff or other women; if participants do not want to keep their decision aid they will be asked to return it.
The primary outcomes of this study will be as follows. Decisional conflict (uncertainty about which preference to choose) will be assessed by the Decisional Conflict Scale, which has established reliability, good psychometric properties and is short (16 items).
Measures of knowledge and realistic expectations about labour analgesia options and the benefits and risks of these options will be specific to this project. Thus we will need to develop and test these measures as part of the project. Anxiety will be measured by the state component of the short Spielberger anxiety scale, which has been extensively used and validated [31,32]. Satisfaction with analgesia decisions will be assessed using the Satisfaction with Decision Scale, a very brief six-item scale with high reliability that was developed specifically to assess satisfaction with health care decisions. Satisfaction with the decision and anxiety will be measured again at 12–16 weeks postpartum. This interval was chosen to avoid the potential bias arising from questioning women still in the hospital, who may feel a disloyalty to their caregivers by a critical appraisal and whose opinions have been shown to be more positive and short-lived than those obtained further out from the birth itself.
The aim of the decision aid is to assist preference elicitation, and not to influence the direction of the decisions taken. Nevertheless, it is important to collect service utilisation and pregnancy outcome data, so we will record and compare the pain relief methods used by women in all arms of the study, as well as recording and comparing rates of pregnancy complications and perinatal outcomes. The latter will be obtained (with informed consent) from the existing computerised obstetric database and include: medical or obstetric complications, induction or augmentation of labour, mode of delivery, enrolment to delivery interval, gestational age, birthweight, Apgar scores, perinatal deaths, Neonatal Intensive Care Unit admission and length of stay.
The planned sample size is 600 women, with approximately 200 women to be recruited to each arm of the trial. Based on data for 2001 from the tertiary obstetric hospital where the study will be conducted, about 1500 primiparous women give birth to singleton infants after 36 weeks gestation and 92% use some form of analgesia. We anticipate that at least 50% of women will be both eligible and willing to participate. The sample size calculations for the trial are based on the mean difference in the decisional conflict scale between any two arms of the trial. The effect of decision aids on this scale is documented and effect size data are available from meta-analysis. Approximately 20% of primiparous women have a caesarean section (6% before labour and 14% after labour has commenced). Some of these women lose their options for analgesia in labour. If there are no significant differences in outcome for the two decision aid groups (with or without the audio-component), the decision aid groups will be pooled, giving two women with the intervention for each woman in the pamphlet group, thereby increasing the power to detect differences between the decision aid and the pamphlet. Analyses will be by intention to treat, including withdrawals and losses to follow-up, firstly of all women randomised and then excluding women who lose their options for analgesia. Study groups will be compared in terms of baseline characteristics. As this is a randomised trial, we anticipate minimal differences in baseline characteristics. If, however, important differences are found, these potential confounders will be adjusted for in the analysis of outcomes. For the primary outcomes, the mean score for each measure for each group will be compared using t-tests.
If adjustment for confounders is needed, a multiple linear regression model will be used. The secondary outcomes will be compared using chi-squared tests of significance for categorical data and t-tests for continuous data. If adjustment for confounding is necessary, logistic regression and multiple linear regression will be used respectively.
This work involves the development of a decision aid for the management of pain in labour and childbirth. Women must decide between a range of non-pharmacological and pharmacological methods of pain relief. However, this decision must be made in the context of the likely analgesic effects of each option, the risk of complications and adverse obstetric effects, and maternal preference for relief of pain. There are currently no evidence-based materials available. We therefore expect this project to be beneficial for participating women. A systematic review of decision aids found they improved knowledge without increasing anxiety. Nevertheless, we will measure anxiety levels at baseline and follow-up to document any adverse effects. A trained research nurse will interview all women and obtain written informed consent. Women will be encouraged to discuss any concerns or anxiety with the research nurse and/or with their usual antenatal care provider. Women will be reassured that they are able to withdraw from the study at any time with no adverse effects on their pregnancy management. Participation will require women to complete self-report questionnaires during and after pregnancy. Working through the decision aid will take approximately 30 minutes, and review of their preferences or outstanding questions will occur at a routine antenatal visit. Therefore we do not consider this to be an excessive burden on their time. The study has been approved by the Central Sydney Area Health Service Ethics Review Committee (Protocol no. X02-0247) and the University of Sydney Human Ethics Committee (Ref No. 3419). This project is funded by a nationally competitive peer-reviewed grant from the Australian National Health and Medical Research Council (No. 253635).
Participants in the trial will be identified by a study number only, with a master code sheet linking names with numbers held securely and separately from the study data. To ensure that all information is secure, data records will be kept in a secure location at the University of Sydney and be accessible only to research staff. As soon as all follow-up is completed the data records will be de-identified. De-identified data will be used for the statistical analysis and all publications will include only aggregated data. The electronic version of the data will be maintained on a password-protected computer. All hard-copy patient-identifiable data and electronic backup files will be kept in locked cabinets, which are held in a locked room accessed only by security code and limited staff. Data files will be stored for seven years after completion of the project, as recommended by the NHMRC. Disposal of identifiable information will be done through the use of designated bags and/or a shredding machine.
The author(s) declare that they have no competing interests. CR, CRG, LT and KM were involved in the conception and design of the study. CR, NN and CRG were responsible for the drafting of the protocol. All authors have read and given final approval of the final manuscript.
The pre-publication history for this paper can be accessed here:
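For readers wanting to sanity-check the 600-woman target above, the sketch below back-calculates the smallest standardized effect detectable with 200 women per arm in a two-group t-test. The 80% power and two-sided alpha of 0.05 are assumptions, since the protocol's own effect-size figure did not survive extraction.

```python
# Hypothetical back-calculation: smallest standardized difference (Cohen's d)
# detectable with n per arm, assuming 80% power and two-sided alpha = 0.05.
from math import sqrt
from scipy.stats import norm

def detectable_d(n_per_arm, alpha=0.05, power=0.80):
    """Normal-approximation formula: d = (z_{1-a/2} + z_{power}) * sqrt(2/n)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # 0.84 for 80% power
    return (z_alpha + z_beta) * sqrt(2.0 / n_per_arm)

print(detectable_d(200))   # ~0.28 SD between any two arms
```

Under these assumptions, 200 per arm detects a difference of roughly 0.28 standard deviations on the decisional conflict scale, and pooling the two decision aid arms would shrink this threshold further.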
Chaperones (CH) play an important role in tumor biology, but no systematic work on expressional patterns has been reported so far. The aim of the study was therefore to present an analytical method for the concomitant determination of several CH in human tumor cell lines, to generate expressional patterns in the individual cell lines and to search for tumor and non-tumor cell line specific CH expression.

Human tumor cell lines of neuroblastoma, colorectal carcinoma and adenocarcinoma of the ovary, osteosarcoma, rhabdomyosarcoma, malignant melanoma, lung, cervical and breast cancer, and promyelocytic leukaemia were homogenised, proteins were separated by two-dimensional gel electrophoresis with in-gel digestion of proteins, and MALDI-TOF/TOF analysis was carried out for the identification of CH.

A series of CH was identified, including the main CH groups HSP90/HATPas_C, HSP70, Cpn60_TCP1, DnaJ, Thioredoxin, TPR, Pro_isomerase, HSP20, ERP29_C, KE2, Prefoldin, DUF704, BAG, GrpE and DcpS.

The ten individual tumor cell lines showed different expression patterns, which are important for the design of CH studies in tumor cell lines. The results can serve as a reference map and form the basis of a concomitant determination of CH by a protein chemical rather than an immunochemical method, independent of antibody availability or specificity.

The heat shock response was first observed in Drosophila busckii. Since then, efforts from a large number of investigators have shown that the heat shock response is ubiquitous and highly conserved. It is observed in all organisms from bacteria to plants and animals. CH form an essential defense mechanism for protection of cells from a variety of harmful conditions, including temperature elevation or heat shock, decrease in pH, hypersalinity, alcohols, heavy metals, oxidative stress, inhibitors of energy metabolism, fever or inflammation.

For homogenisation, cells were suspended in lysis buffer containing CHAPS (3-[(3-cholamidopropyl)dimethylammonio]-1-propane-sulfonate), 65 mM 1,4-dithioerythritol, 1 mM EDTA (ethylenediaminetetraacetic acid), protease inhibitors (Complete) and 1 mM phenylmethylsulfonyl fluoride (PMSF). The suspension was sonicated for approximately 30 sec in an ice bath. After homogenisation, samples were left at room temperature for 1 h and centrifuged at 14,000 rpm for 1 h. The supernatant was transferred into Ultrafree-4 centrifugal filter units for desalting and concentrating proteins. Protein content of the supernatant was quantified by the Bradford protein assay system.

Samples prepared from each cell line were subjected to 2-DE as described elsewhere; 1 mg protein was applied per gel. pI values were used as given by the supplier of the immobilised pH gradient strips. Excess dye was washed out from the gels with distilled water and gels were scanned with an ImageScanner (Amersham Bioscience). Electronic images of the gels were recorded using Adobe Photoshop and Microsoft PowerPoint software.

Spectra were calibrated using the [M+H]+ ions of angiotensin I, angiotensin II, substance P, bombesin, and adrenocorticotropic hormone fragments (clip 1–17 and clip 18–39). Each spectrum was produced by accumulating data from 200 consecutive laser shots. Samples analysed by PMF from MALDI-TOF were additionally analysed using LIFT-TOF/TOF MS/MS from the same target. A maximum of three precursor ions per sample were chosen for MS/MS analysis. In the TOF1 stage, all ions were accelerated to 8 kV under conditions promoting metastable fragmentation. After selection of jointly migrating parent and fragment ions in a timed ion gate, ions were lifted by 19 kV to high potential energy in the LIFT cell.
After further acceleration of the fragment ions in the second ion source, their masses could be simultaneously analysed in the reflector with high sensitivity. PMF and LIFT spectra were interpreted with the Mascot software. Database searches through Mascot, using combined PMF and MS/MS datasets, were performed via BioTools 2.2 software (Bruker). A mass tolerance of 100 ppm and one missed cleavage site were allowed for PMF, and an MS/MS tolerance of 0.5 Da and one missed cleavage site for the MS/MS search; oxidation of methionine residues was considered. The probability score calculated by the software was used as the criterion for correct identification.

Spots visualised by Colloidal Coomassie Blue staining were excised with a spot picker, placed into 96-well microtiter plates, and in-gel digestion and sample preparation for MALDI analysis were performed by an automated procedure [47].

The algorithm used for determining the probability of a false positive match with a given mass spectrum is described elsewhere.

CH: chaperone
HSP: heat shock protein
2-DE: two-dimensional gel electrophoresis
MALDI-MS: matrix-assisted laser desorption ionisation mass spectrometry
OGP: octyl-β-D-glucopyranoside
CHCA: α-cyano-4-hydroxy-cinnamic acid
PMF: peptide mass fingerprint
GRP: glucose regulated protein
TCP: t-complex protein

The author(s) declare that they have no competing interests.

JKM did data mining and contributed to the preparation of the manuscript. LAS performed protein extraction, two-dimensional electrophoresis and data handling. MFC carried out MALDI-TOF/TOF analyses. IS developed methodology for studying proteins from cell lines. GL initiated and planned the study, developed the concept, supervised 2-DE and mass spectrometry, and contributed to data generation and the manuscript. All authors have read and approved the final manuscript.

Table 1. Identified proteins in different human tumor cell lines: Saos-2, SK-N-SH, HCT 116, CaOv-3, A549, HL-60, A-375, A-673, MCF-7 and HeLa.

Table 1-1. Identified proteins in different normal cell lines: kidney, lymphocyte, fibroblast.

Table 2. Theoretical molecular weight, theoretical pI, observed pI, total score and peptides matched of molecular chaperones in tumor cell lines.

Table 3. The list of tumor cell lines.
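To make the 100 ppm matching criterion concrete, the minimal sketch below matches observed peptide masses against theoretical tryptic masses within a ppm window. The mass values are invented for illustration, and this mimics only the tolerance criterion, not Mascot's probability scoring.

```python
# Minimal sketch: match observed peptide masses to theoretical tryptic
# peptide masses within a 100 ppm window (illustrative values only).

def ppm_error(observed: float, theoretical: float) -> float:
    """Relative mass error in parts per million."""
    return (observed - theoretical) / theoretical * 1e6

def match_masses(observed, theoretical, tol_ppm=100.0):
    """Return (observed, theoretical, ppm) triples within the tolerance."""
    hits = []
    for obs in observed:
        for theo in theoretical:
            err = ppm_error(obs, theo)
            if abs(err) <= tol_ppm:
                hits.append((obs, theo, err))
    return hits

observed_mz = [1045.563, 1234.610, 2211.105]      # hypothetical spectrum peaks
theoretical_mz = [1045.562, 1234.700, 2211.096]   # hypothetical tryptic digest
for obs, theo, err in match_masses(observed_mz, theoretical_mz):
    print(f"{obs:.3f} matches {theo:.3f} ({err:+.1f} ppm)")
```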
Ischemia within the optic nerve head (ONH) may contribute to retinal ganglion cell (RGC) loss in primary open angle glaucoma (POAG). Ischemia has been reported to increase neurotrophin and high affinity Trk receptor expression by CNS neurons and glial cells. We have previously demonstrated neurotrophin and Trk expression within the lamina cribrosa (LC) region of the ONH. To determine if ischemia alters neurotrophin and Trk protein expression in cells from the human LC, cultured LC cells and ONH astrocytes were exposed to 48 hours of oxygen-glucose deprivation (OGD). Cells were also exposed to 48 hours of OGD followed by 24 hours of recovery in normal growth conditions. Cell number, neurotrophin and Trk receptor protein expression, neurotrophin secretion, and Trk receptor activation were examined.

Cell number was estimated using an assay for cell metabolism following 24, 48 and 72 hours of OGD. A statistically significant decrease in LC and ONH astrocyte cell number did not occur until 72 hours of OGD; therefore cellular protein and conditioned media were collected at 48 hours of OGD. Protein expression of NGF, BDNF and NT-3 by LC cells and ONH astrocytes increased following OGD, as did NGF secretion. Recovery from OGD increased BDNF protein expression in LC cells. In ONH astrocytes, recovery from OGD increased NGF protein expression and decreased BDNF secretion. Trk A expression and activation in LC cells were increased following OGD, while expression and activation of all other Trk receptors were decreased. A similar increase in Trk A expression and activation was observed in ONH astrocytes following recovery from OGD.

In vitro conditions that mimic ischemia increase the expression and secretion of neurotrophins by cells from the ONH. Increased Trk A expression and activation in LC cells following OGD and in ONH astrocytes following recovery from OGD suggest autocrine/paracrine neurotrophin signaling could be a response to ONH ischemia in POAG. The increase in NGF, BDNF and NT-3 protein expression and NGF secretion following OGD also suggests LC cells and ONH astrocytes may be a paracrine source of neurotrophins for RGCs.

Glaucoma is an optic neuropathy defined by characteristic optic nerve head and associated visual field changes. Nearly 67 million people worldwide are believed to have glaucoma, including an estimated 2.2 million in the USA [2].

Neurotrophins are polypeptide growth factors involved in the development and maintenance of neurons, as well as non-neuronal cells. Included in this family of trophic factors are nerve growth factor (NGF), brain-derived neurotrophic factor (BDNF), neurotrophin-3 (NT-3) and neurotrophin-4 (NT-4). Neurotrophin signaling occurs via two types of receptors: (a) tyrosine kinase high affinity Trk receptors and (b) the low affinity p75 receptor.

In addition to their localization at axon terminals, Trk receptors have been localized at neuronal cell bodies, at dendritic projections, and along axons [27].

The administration of exogenous NTs has been shown to protect neurons from ischemic damage [39].

In POAG, the laminar plates of the LC are compressed and bow backward from the sclera, producing an excavated and exaggerated optic cup.

Given that ischemia can be a component of ocular hypertension, we used oxygen-glucose deprivation (OGD) as an acute model of in vitro ischemia and examined the expression of NTs and their receptors by cultured LC cells and ONH astrocytes following OGD.
We are aware that the model used (oxygen and glucose deprivation) is not a perfect model for what occurs in the glaucomatous optic nerve head. However, this acute model was an attempt to mimic the end results of a chronic condition, and as a "first step" we felt this model was adequate for examining the response to injury in these cell types.

The LC and ONH astrocyte cell lines used in this study have been previously characterized and described.

To determine an exposure of OGD that would result in cellular changes while allowing a majority of cells to remain viable, we examined cell number at various time points of OGD using an assay that estimates cell number based on cell metabolism. The oxygen level within the anoxic incubator was below detectable levels, as measured using an Oxygen Test Kit. Lamina cribrosa and ONH astrocyte cell metabolism/cell number following OGD is shown in Figure.

The expression of NT protein in LC cells and ONH astrocytes following OGD is shown in Figures. Following 48 hours of OGD, ONH astrocytes demonstrated increased NGF, BDNF and NT-3 protein expression.

The expression of Trk receptor protein by ONH astrocytes following OGD and recovery from OGD is shown in Figure. The expression of phosphorylated Trk receptor protein following OGD and recovery from OGD is shown in Figure.

Overall, exposure to OGD resulted in decreased phospho-Trk protein expression by LC cells and ONH astrocytes. Expression of the 148 kDa phospho-Trk isoform was decreased to a statistically significant level in both LC cells and ONH astrocytes, as were the 120 kDa and 80 kDa isoforms in ONH astrocytes. The only increase in phospho-Trk receptor protein expression following OGD was observed in LC cells: protein expression of the 120 kDa phospho-Trk receptor isoform was increased 60% over controls, which was statistically significant.

Recovery from OGD resulted in increased protein expression of phospho-Trk receptor isoforms in LC cells and ONH astrocytes when compared to OGD alone, with the exception of the 120 kDa isoform in LC cells, which decreased toward control levels. Although its expression increased compared to OGD alone, expression of the 148 kDa isoform was still decreased to a significant level. Interestingly, the 120 kDa phospho-Trk isoform was again the only isoform whose expression was increased above control levels. This 100% increase was observed in ONH astrocytes following recovery from OGD and was statistically significant.

The secretion of NGF and BDNF following OGD and recovery from OGD is shown in Figure.

In this study we examined the protein expression of NTs and Trk receptors by LC cells and ONH astrocytes following OGD, an in vitro model of ischemia. Lamina cribrosa cell and ONH astrocyte responses to OGD and recovery from OGD are summarized in Tables.

Ischemia due to elevated IOP during POAG may cause changes within the ONH and contribute to RGC loss [11].

Neurons, including RGCs, express Trk receptors not only at the axon terminal and cell body, but also along their axons [27].

The expression of Trk receptors (both full length and truncated) by LC cells, ONH astrocytes or other cells within the LC could limit NT availability to RGCs [24].

There is evidence that reperfusion following ischemia is actually more detrimental to cells than ischemia itself [46].

In conclusion, we have demonstrated that LC cells and ONH astrocytes increase the expression of NGF, BDNF and NT-3 protein following OGD, which may be neuroprotective for RGCs.
Neurotrophins expressed by LC cells and ONH astrocytes following OGD or recovery from OGD bind and activate Trk receptors expressed by these cells. Increased NT signaling within LC cells and ONH astrocytes could increase cell survival following ischemic injury, but may compromise RGC survival during reperfusion. Further studies examining the expression of NTs and Trk receptors following hypoxia or transient ischemic insults would provide a better model for ischemic injury in POAG. Using this model, RGCs could be co-cultured with LC cells or ONH astrocytes to determine the neuroprotective effects of NTs during ischemic injury. By better understanding NT signaling within the LC under normal and injurious conditions, new strategies involving these factors could be developed to better treat patients with POAG.

Lamina cribrosa cells and ONH astrocytes respond to conditions that mimic ONH ischemia by increasing NGF, BDNF and NT-3 protein expression and NGF secretion.

Increased protein expression of Trk receptors and phosphorylated Trk receptors by LC cells and ONH astrocytes following OGD and recovery from OGD, respectively, suggests paracrine and/or autocrine NT signaling occurs within the ONH following ischemic injury.

Lamina cribrosa cells and ONH astrocytes may be a paracrine source of NGF, BDNF and/or NT-3 for RGCs, especially during ischemic injury within the ONH throughout POAG progression.

CellTiter 96® Aqueous Non-Radioactive Cell Proliferation Assays and Emax™ ImmunoAssay Systems specific for NGF, BDNF, NT-3 and NT-4 were purchased from Promega Corporation, Madison, WI. DMEM and fetal bovine serum (FBS) were purchased from HyClone Labs, Logan, UT. The following materials were purchased from Gibco BRL Life Technologies, Grand Island, NY: glucose-free DMEM, L-glutamine, penicillin/streptomycin and fungizone (amphotericin B). Costar 96-well plates and Nunc ELISA/EIA 96-well Maxisorp plates were purchased from Fisher Scientific, Pittsburgh, PA. Polyclonal antibodies to Trk A, Trk B, Trk C, truncated Trk B (Trk B.T) and phosphorylated Trk were purchased from Santa Cruz Biotechnology Inc, Santa Cruz, CA.

Lamina cribrosa and ONH astrocyte cell lines were obtained from human LC explants from separate donors as described previously [33–36]. Cells were maintained in 5% CO2/95% air at 37°C and media was changed every 2 to 3 days. Characterization of these cells was performed as described previously [33,36].

Oxygen-glucose deprivation was achieved by culturing cells in glucose-free, serum-free DMEM in an anoxic incubator (95% N2/5% CO2) for 48 hours. The oxygen level within the anoxic incubator was measured using an Oxygen Test Kit and was determined to be below detectable levels. Recovery following OGD was achieved by placing cells in OGD conditions for 48 hours and then allowing them to recover in growth media (Ham's F-10 Media or DMEM plus 10% FBS) and an aerobic environment (95% air/5% CO2) for 24 hours. Cells cultured in growth media and an aerobic environment served as controls.

Preliminary studies examining cell survival following anoxia, hypoxia, hypoglycemia or serum withdrawal demonstrated that LC cells and ONH astrocytes were resistant to hypoxia and serum withdrawal alone (data not shown). To approach what is occurring in vivo, we used the more acute oxygen-glucose deprivation (OGD) model to examine NT and Trk expression and NT signaling in cells from the ONH. Preconfluent, age-matched adult LC cells and ONH astrocytes were treated with serum-free media for 24 hours.
Oxygen-glucose deprivation (OGD) was achieved by culturing LC cells and ONH astrocytes in glucose-free, serum-free DMEM in an anoxic incubator (95% N2) for 24 hours. Cells cultured in growth media and an aerobic environment served as controls.

Adult LC cells and ONH astrocytes were trypsinized, counted using a hemacytometer and plated into Costar 96-well plates at a density of 1,000 cells/well. Cells were allowed to attach overnight and were then placed in serum-free media for 24 hours. Lamina cribrosa cells and ONH astrocytes were exposed to OGD as described above for 24, 48 or 72 hours. Recovery following OGD was achieved by placing cells in OGD conditions for 24 or 48 hours and then allowing them to recover in growth media and an aerobic environment. Cell number was estimated using the CellTiter 96® assay. Metabolically active cells convert MTS into formazan, which is soluble in aqueous solutions. The quantity of the formazan product, measured by the absorbance at 490 nm, is therefore directly proportional to the number of living cells. Cell metabolism/cell number per well was calculated from a standard curve generated using known amounts of cells per well. A standard curve was generated for each cell line assayed. Three LC cell lines and three ONH astrocyte cell lines were assayed. The entire experiment, including standard curves, was repeated twice. Changes in cell metabolism/cell number following OGD and recovery from OGD were reported as a percent of the control ± SEM.

Cellular protein was collected in a 20 mM Tris lysis buffer modified from Watson et al., and protein content was determined using a protein assay system. Cellular lysate (50 μg) was separated on denaturing polyacrylamide gels and then transferred by electrophoresis to nitrocellulose membranes. Blots were processed using primary antibodies and the Western Breeze Chemiluminescent Immunodetection System. Blots were then exposed to Hyperfilm-ECL for various times depending on the amount of target protein present. The density (O.D. × mm2) of unsaturated bands was measured using the Discovery Series scanner and the Diversity One program from pdi and a digital Venturis FP466 computer. Western blots for NTs and Trks were stripped and re-probed for β-actin to ensure equal loading.

Conditioned media was collected and concentrated using Millipore Centriplus YM-3 Centrifugal Filter Devices. Emax™ ImmunoAssay Systems specific for NGF, BDNF, NT-3 and NT-4 (Promega) were performed according to the manufacturer's instructions. Conditioned media was added to Nunc ELISA/EIA 96-well Maxisorp plates coated with anti-NT polyclonal antibodies. Secreted NT was detected by treating the plates with the respective NT monoclonal antibody followed by a horseradish peroxidase conjugated secondary antibody. Enzyme substrate was added to generate a color product whose absorbance was read at 450 nm. An NT standard included in each assay was used to generate a standard curve that was used to calculate the amount of secreted NT per well. The amount of secreted NT per sample was normalized to total protein per sample. Samples were assayed in triplicate. Conditioned media from three LC cell lines and three ONH astrocyte cell lines were assayed. Each immunoassay was repeated twice.
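Both the MTS assay and the immunoassays above convert an absorbance reading into a quantity via a standard curve. The minimal sketch below shows that conversion as a simple linear fit; all numbers are invented for illustration.

```python
# Minimal sketch: estimate cell number from A490 using a linear standard
# curve, as in the MTS assay described above (all values are illustrative).
import numpy as np

# Standard curve: known cells per well vs. measured absorbance at 490 nm.
cells_std = np.array([0, 500, 1000, 2000, 4000])
a490_std = np.array([0.05, 0.18, 0.31, 0.58, 1.10])

slope, intercept = np.polyfit(cells_std, a490_std, 1)  # least-squares line

def cells_from_a490(a490):
    """Invert the standard curve to estimate cell number per well."""
    return (a490 - intercept) / slope

treated = np.array([0.42, 0.45, 0.40])   # hypothetical OGD wells
control = np.array([0.61, 0.63, 0.59])   # hypothetical control wells
pct = cells_from_a490(treated).mean() / cells_from_a490(control).mean() * 100
print(f"Cell number after OGD: {pct:.0f}% of control")
```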
Changes in NT secretion following OGD and recovery from OGD were reported as a percent of the control ± SEM.

Cell metabolism/cell number, western blot densitometry and immunoassay data were analyzed using one-way analysis of variance (ANOVA) followed by validation using Student-Newman-Keuls tests. The MedCalc® statistical package, version 7.4.41, was used.

WL carried out tissue culture, cell proliferation assays, Western blotting, and immunoassays. WL, AC, and RW participated in design of the study, interpretation of the results, and in the writing and revision of the manuscript. All authors read and approved the final manuscript. This study is taken in part from a dissertation submitted to the UNT Health Science Center in partial fulfillment of the requirements for the degree Doctor of Philosophy for WL.
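A minimal sketch of the one-way ANOVA described above follows. The data are invented, and because scipy/statsmodels do not provide a Student-Newman-Keuls procedure, Tukey's HSD is used here as a stand-in post-hoc test.

```python
# Minimal sketch: one-way ANOVA with a post-hoc comparison (illustrative data).
# Note: Tukey's HSD substitutes for Student-Newman-Keuls, which is not
# available in scipy/statsmodels.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = np.array([100, 98, 103, 101])
ogd = np.array([142, 150, 138, 146])       # hypothetical % of control
recovery = np.array([118, 121, 115, 124])

f_stat, p_value = stats.f_oneway(control, ogd, recovery)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate([control, ogd, recovery])
groups = ["control"] * 4 + ["OGD"] * 4 + ["recovery"] * 4
print(pairwise_tukeyhsd(values, groups))
```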
Dendritic Cell (DC) vaccination is a very promising therapeutic strategy in cancer patients. The immunizing ability of DC is critically influenced by their migration activity to lymphatic tissues, where they have the task of priming naïve T-cells. In the present study we compared the migration activity of mature Dendritic Cells (mDC) with that of immature Dendritic Cells (iDC) and also assessed intradermal versus subcutaneous administration.

DC were labelled with 99mTc-HMPAO or 111In-Oxine, and the presence of labelled DC in regional lymph nodes was evaluated at pre-set times up to a maximum of 72 h after inoculation. Determinations were carried out in 8 patients.

It was verified that intradermal administration resulted in about a threefold higher migration to lymph nodes than subcutaneous administration, while mDC showed, on average, a six- to eightfold higher migration than iDC. The first DC were detected in lymph nodes 20–60 min after inoculation and the maximum concentration was reached after 48–72 h.

These data obtained in vivo provide preliminary basic information on DC with respect to their antitumor immunization activity. Further research is needed to optimize the therapeutic potential of vaccination with DC.

Dendritic Cell (DC) vaccination is one of the most promising tools of immunological therapy for cancer. Administration of DC generated ex vivo and loaded with tumor antigens aims to overcome tolerance [2]. These iDC possess functional characteristics typical of this maturation status, such as phagocytosis, macropinocytosis, receptor-mediated endocytosis and antigen processing [4]. PGE2 may considerably increase migration, inducing CCR7 expression on the surface of DC. Penetration may be limited to the peripheral zones of lymphoid tissue when the DC are still immature, or may reach the deeper T-cell zones, where a greater number of naïve T-cells are present, when DC are mature and activated.

Surface antigen CCR7, present on the cell membrane of DC, strongly influences migratory capacity through its interaction with transporter molecules (TREM-2, LTC4, LTD4, etc.) [10].

Recent studies on cancer patients evaluating the efficacy of in vitro-generated vaccines have shown that mature, but not immature, DC induce an effective antitumor response [18].

In the course of a vaccination trial using DC pulsed with autologous tumor lysate (ATL) in cancer patients, we evaluated the in vivo migration ability of DC by labelling them with 99mTc-HMPAO or 111In-Oxine. In particular, migratory activity was assessed in iDC and mDC in terms of time required for migration to lymph nodes, duration of activity, and number of cells that migrated. Migratory capacity was further evaluated by comparing subcutaneous and intradermal administration.

In vivo migration was assessed using a part of the DC obtained for one of the therapy cycles. Three of the 8 patients were evaluated twice.

The case series consisted of a subset of the 19 patients enrolled onto a phase I/II vaccination trial for advanced melanoma and renal cell carcinoma, in which the first 9 patients were treated with iDC and the remaining 10 received mDC, both pulsed with autologous tumor lysate and keyhole limpet hemocyanin (KLH). In the present study 8 patients were analyzed for a total of 11 treatments. Two melanoma patients were treated with iDC (one of whom twice), while 4 other patients with melanoma and 1 with renal cell carcinoma (treated twice) received mDC.
The remaining melanoma patient was treated with iDC and subsequently with mDC.

The clinical trial was approved by the Italian Ministry of Health and by the Ethical Committee of Forlì Health and Social Services. All patients gave written informed consent.

Tumor samples surgically removed from the patients were immediately placed in PBS. Adjacent non-malignant tissue was removed by scalpel and tumor cells were dispersed to create a single-cell suspension. Cells were lysed by incubation in sterile distilled water. Lysis was monitored by light microscope. Larger particles were removed by centrifugation and the supernatant was passed through a 0.2-μm filter. Protein contents were determined and aliquots were stored at -80°C until use.

From days 2–6, IL-2 was administered subcutaneously at a dose of 3 million IU/day. This procedure was repeated after two weeks and once a month until progression occurred.

DC were prepared from peripheral blood monocytes (PBMC) obtained by leukapheresis without previous mobilization. 5–9 liters of blood were processed in each collection. PBMC were purified on Ficoll-Paque. An aliquot of PBMC was utilized immediately for DC generation and the rest was frozen in bags for use at a later date (4–5 bags/1 collection).

PBMC were incubated in tissue culture flasks with CellGro DC Medium at 10 × 10^6 cells/ml for 2 h. The non-adherent cells were discarded and the adherent cells were incubated in CellGro DC Medium containing 1000 IU/ml rhIL-4 (Cell Genix) and 1000 IU/ml rhGM-CSF for 7 days to generate a DC-enriched cell population. On day 6, DC were pulsed with autologous tumor lysate (100 mg/ml) and with KLH (50 mg/ml) and incubated overnight. On day 7, they were defined as iDC. After eliminating the previous culture medium, pulsed iDC were cultured for a further 2 days with a cocktail of cytokines. On day 9 they were defined as mDC. iDC or mDC were removed, washed and suspended in sterile saline for therapeutic infusion into the patient.

Labelling of DC was performed according to the methods described for leucocyte radiolabelling [26]. DC were resuspended in platelet-poor autologous plasma (CFP1) and incubated for 15 min at room temperature with 99mTc-HMPAO (20 mCi) or 111In-Oxine (1 mCi). After two washes to eliminate the unbound isotope, the cells were resuspended in a total volume of 1.5 ml of CFP1. Radiolabelling of the DC and of the culture supernatant was evaluated with a gamma counter, after which DC were inoculated intradermally into the patient near healthy lymph nodes and in the contralateral zone not used for therapeutic vaccination. The patient then underwent serial acquisitions with a gamma-camera positioned at the site of inoculation, with a field of view that included all the lymphatic regions of interest. The first acquisition was performed with a dynamic study of 20 min, followed by 10-min static acquisitions carried out every 30 min for the first 4–6 h and from 18 to 28 h. Other static determinations were made at 36, 48 and 72 h. The maximum duration of observation of DC migratory activity, which depended on the half-life of the radioisotope used, was 72 h for 111In-Oxine and 36 h for 99mTc-HMPAO.
The identification of lymph node stations involved in the migratory activity was initially visual, after which we carried out a semiquantitative evaluation of the percentage of DC that migrated to lymph nodes from the inoculation site and an assessment of the speed of DC migration, expressed by activity/time curves obtained through a compartmental mathematical model.

DC obtained from the culture of frozen PBMC were divided into two parts: one was labelled with 99mTc-HMPAO and the other with 111In-Oxine. The labelled cells were then suspended in CellGro DC Medium, divided into 4–5 culture flasks for each labelling molecule and incubated for 0 h, 4 h, 21 h and 24 h (99mTc-HMPAO) or 0 h, 4 h, 21 h, 24 h and 48 h (111In-Oxine). The DC from one flask were removed and centrifuged at each time point. The supernatant containing the free molecule, and the pellet containing the labelled cells, were then measured with a gamma counter.

iDC and mDC phenotypes were determined by single- or two-color fluorescence analysis. 3–5 × 10^5 cells were suspended in 100 μl of buffer and incubated for 30 min at 4°C with 10 μl of appropriate fluorescein isothiocyanate- or phycoerythrin-labelled monoclonal antibodies (mAbs). The cells were then washed twice and resuspended in 500 μl of assay buffer. The fluorescence was analyzed by a FACS Vantage flow cytometer. mAbs specific for human CD1a, CD14, CD80, CD86 (Becton Dickinson), CD83 and CCR7 were used.

At each pre-set time the supernatant was collected and stored at -80°C until analysis was carried out using commercially available ELISA kits to measure the production of IL-12 + p40 (bioactive heterodimer of IL-12) and IL-10 by DC.

Single cell-based measurement of endocytosis was carried out as described [27]. Dendritic cells were incubated for 30 min at 37°C with 0.5 mg/ml FITC-Dextran. DX-FITC was centrifuged before use to remove aggregates. As a negative control, cells were incubated with DX-FITC at 4°C. The cells were washed with cold PBS containing 2% FCS and 2 nM sodium azide to exclude dead cells and were then analyzed on a FACS Vantage flow cytometer (Becton Dickinson).

All the patients had advanced disease and all but one had undergone previous treatment. Median age was 49 years (range 46–52 years). Three patients were HLA-A1, 3 were HLA-A3, 1 was HLA-A2 and 1 was HLA-A11 (Table).

The in vitro stability of DC labelled with 99mTc-HMPAO and 111In-Oxine was evaluated using DC cultured from frozen PBMC. 99mTc-HMPAO-labelled DC showed a 75% loss of activity 4–24 h after the beginning of in vitro culture. 111In-Oxine-labelled DC showed a higher labelling stability (50%) that lasted for up to 24 h.

The route of administration was evaluated by comparing radioactive uptake in axillary lymph nodes. The intradermal route showed a threefold higher migration than that observed for the subcutaneous route (Table).

iDC and mDC migration was evaluated in all 8 patients (4 iDC and 7 mDC treatments) labelled with 111In-Oxine or 99mTc-HMPAO (Table). A simple numerical analysis shows that the maximum uptake ratio between 99mTc-HMPAO-labelled mDC and iDC varies from 2 to 35, with an average of 8.4. The same ratio for 111In-Oxine-labelled cells varies from 4 to 7, with an average of 6. 99mTc-HMPAO labelling is influenced by the very low iDC uptake, due to its greater binding instability and to the short half-life of the radioisotope, which does not permit the acquisition of reliable counts beyond 24–36 h (maximum uptake was reached after 24–36 h with 99mTc-HMPAO and 48–60 h with 111In-Oxine).
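Quantifying the migrated percentage from gamma-camera counts requires correcting for physical decay of the label. The minimal sketch below applies the standard decay correction, using the known physical half-lives of the two isotopes (about 6.0 h for 99mTc and 67.3 h for 111In); the count values are invented.

```python
# Minimal sketch: decay-corrected percentage of injected activity found in
# a lymph node region of interest (counts and times are illustrative).
import math

HALF_LIFE_H = {"99mTc": 6.0, "111In": 67.3}  # physical half-lives in hours

def decay_correct(counts: float, elapsed_h: float, isotope: str) -> float:
    """Correct measured counts back to injection time."""
    return counts * 2 ** (elapsed_h / HALF_LIFE_H[isotope])

injected_counts = 1.8e6          # counts over the injection site at t = 0
node_counts = 9.5e3              # counts over axillary nodes at t = 48 h
corrected = decay_correct(node_counts, 48.0, "111In")
print(f"Migrated fraction: {corrected / injected_counts * 100:.2f}%")
```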
A curve fitting analysis also seemed to indicate a progressive increase in uptake after the first 60 h, but the number of patients evaluated is too low for any definitive conclusions to be drawn.

After injection of 99mTc-HMPAO alone in the same inoculation sites used for the study, no labelled hot spots were observed. This would seem to suggest that pure tracers move through lymphatic vessels without accumulating inside lymph nodes.

DC-based immunotherapy has undergone a remarkable transformation in its development from basic research to clinical application [28].

Recent published data have shown that the choice of maturation stimulus may be crucial for therapeutic success. In particular, it has been seen that PGE2 is essential for activating DC chemotaxis through the expression of CCR7 on the cell surface [32]. PGE2 may therefore prove to be important for increasing migration activity and DC efficacy. However, it has also been observed that PGE2 inhibits IL-12 production, resulting in a weaker in vivo activation of T-cells [34]. These issues require further in vivo experimentation.

In the present study we aimed to clarify some issues concerning DC migration activity in a clinical vaccination trial utilizing radiolabelled (99mTc-HMPAO and 111In-Oxine) iDC and mDC; it was observed that intradermal administration had a threefold higher migration to lymph nodes than the subcutaneous route. Although it remains to be clarified to what extent this migratory capacity is active or passive, it is clear that DC must be administered intradermally to obtain a higher migration.

A crucial phase of the study was the comparison between the migratory activity of iDC and that of mDC. The result was once again unequivocal, showing a greater progressive concentration of mDC that was, on average, six- to eightfold higher than that of iDC, in accordance with data reported by other authors [23].

Notwithstanding the results obtained from the present study, many issues remain to be clarified. It has yet to be determined whether the increased activity detected in lymph nodes corresponds to an effectively greater migratory capacity or whether it is the result of a more effective adhesion capacity between surface molecules. Both hypotheses could even prove to be correct. We also do not know how long DC remain in lymph nodes. The increase in activity in lymph nodes is high in the first few determinations but tends to diminish or stabilize after around 36 h. It remains to be seen whether this presumed stabilization is the result of a sort of saturation or whether it can be attributed to the attainment of a dynamic equilibrium. The former hypothesis would indicate the need for an optimization of the number of DC to administer, as only a limited number would be functionally active. The latter would highlight the importance of the timing of administration and perhaps also the degree of DC maturation. To further investigate this, we plan to administer in vitro transiently stimulated DC (semimature DC). The therapeutic use of this type of DC, which have already begun the process of maturation and may be capable of reaching lymph nodes before their functional exhaustion, could increase the duration of their activation and stimulation.
If these semimature DC prove to be equipped with a good migratory capacity, further improvement in the therapeutic use of DC may be possible. Finally, we aim to assess the migratory capacity of 111In-Oxine- and 99mTc-HMPAO-labelled DC administered in succession and in the same site, to follow their migratory course.

The migration activity of DC to regional lymph nodes is one of the many critical factors that influence the therapeutic result of antitumor vaccination. In the present study we used radioisotope-labelled DC and demonstrated that a better migration activity is obtained using intradermal rather than subcutaneous administration and that mDC show, on average, a six- to eightfold higher migration than iDC. Numerous other issues on DC functionality have yet to be clarified before antitumor therapeutic efficacy can be improved. The next important step will be to closely monitor the quantity and quality of responses observed in T-cells, and it is hoped that a consensus will be reached on standardized criteria for the definition and validation of clinical results obtained.

DC, dendritic cell; iDC, immature dendritic cell; mDC, mature dendritic cell; ATL, autologous tumor lysate; PBMC, peripheral blood monocytes.

RR and LR participated in the design of the study and were responsible for the clinical side of the study. AR, MP, LF and MS participated in the design of the study and were responsible for the biological part of the study. GM performed the apheresis collections. RG, AM and GF carried out DC labelling and migration evaluation. GG performed the mathematical and statistical analysis. All authors read and approved the final manuscript.

None declared.
Excess Years of Life Lost due to exposure is an important measure of health impact complementary to rate or risk statistics. I show that the total excess Years of Life Lost due to exposure can be estimated unbiasedly by calculating the corresponding excess Years of Potential Life Lost given conditions that describe study validity (like exchangeability of exposed and unexposed) and assuming that exposure is never preventive. I further demonstrate that the excess Years of Life Lost conditional on age at death cannot be estimated unbiasedly by a calculation of conditional excess Years of Potential Life Lost without adopting speculative causal models that cannot be tested empirically. Furthermore, I point out by example that the excess Years of Life Lost for a specific cause of death, like lung cancer, cannot be identified from epidemiologic data without adopting non-testable assumptions about the causal mechanism as to how exposure produces death. Hence, excess Years of Life Lost estimated from life tables or regression models, as presented by some authors for lung cancer or after stratification for age, are potentially biased. These points were already made by Robins and Greenland 1991, reasoning on an abstract level. In addition, I demonstrate by adequate life table examples, designed to critically discuss the Years of Potential Life Lost analysis published by Park et al. 2002, that the potential biases involved may be fairly extreme. Although statistics conveying information about the advancement of disease onset are helpful in exposure impact analysis and especially worthwhile in exposure impact communication, I believe that attention should be drawn to the difficulties involved and that epidemiologists should always be aware of these conceptual limits of the Years of Potential Life Lost method when applying it as a regular tool in cohort analysis.

Cohort analyses usually summarize mortality experience by risk and rate statistics. This information is used to measure the effect of exposure on disease by comparisons of such statistics. Although these measures have been proven by practice and theory to be useful for this purpose, these frequency statistics are unable to reflect all causal effects of exposure in general (Robins and Greenland 1991; Greenland 1998; Morfeld 1998).

Therefore, alternative measures that focus more directly on the time-shift of events or the time-shift of frequency statistics would be most welcome. One such approach aims at Years of Life Lost (YLL). Interestingly, even in the title of one of the very first articles about Years of Life Lost, Dempsey expressed this intention. Recently, this approach was extended by Park et al. 2002, who estimated excess Years of Potential Life Lost in a cohort analysis. However, in a letter to the editor, Morfeld 2003 claimed that such estimates are potentially biased. Here I try to explain the reasoning in detail. I expand the critique, demonstrating additionally that the e-YPLL method is unjustified when applied to specific causes of death like lung cancer – the main topic of Park et al. 2002.

Consider an ideal study of n pairs of exchangeable twins, one twin exposed and one unexposed per pair, and add a further pair x in which the exposed twin dies at dx(1) and the unexposed twin at dx(2) ≥ dx(1). Enlarging the unexposed reference population from n to n+1 deaths rescales the Years of Potential Life Lost assigned to the already included exposed cases; this is the first kind of contribution to the change in e-YPLL I have to consider. The second kind of contribution stems from the new exposed case dying at dx(1). This amount of change is simply the Potential Years of Life Lost for this case:

YPLLx = dx(2) - dx(1) + [n/(n+1)] YPLL*,

where n/(n+1) is the probability to survive dx(2) given the unexposed reference population and YPLL* denotes the mean residual Years of Potential Life Lost beyond dx(2). The third kind of contribution to the difference between e-YPLL(n+1) and e-YPLL(n) stems from the k exposed cases dying before dx(2). Their potential survival is shortened after introducing the new additional death at dx(2).
Note that the reference-based probability to die at dx(2) is 1/(n+1) for each of these exposed cases. Therefore, the Years of Potential Life Lost is reduced for each of these cases by the amount [1/(n+1)] YPLL*. This sums to [k/(n+1)] YPLL* for all k exposed cases dying before dx(2). I assume that k of the n exposed in the study with n pairs may die before dx(2). Now I can calculate how e-YPLL changes when the pair x is added to the n pairs:

e-YPLL(n+1) = e-YPLL(n) + dx(2) - dx(1) + [n/(n+1) - (n-k)/(n+1) - k/(n+1)] YPLL*
            = e-YPLL(n) + dx(2) - dx(1)
            = e-YLL(n) + dx(2) - dx(1)
            = e-YLL(n+1).

The second-to-last equation follows from the induction assumption. The last equality is based on the obvious fact that the true excess Years of Life Lost increases by dx(2) - dx(1) if the pair x is added to the n pairs I started with. Hence, I have proven that e-YPLL equals e-YLL in all ideal studies consisting of pairs of exchangeable twins, given that exposure is never preventive. Note that the measures are always calculated with respect to deaths from all causes. It is remarkable that the equality e-YPLL = e-YLL holds although e-YPLL does not make use of the individual matching information.

I have shown in Chapter 2 that the total e-YLL of the whole study group can be measured accurately by the total e-YPLL, provided the assumptions of an ideal study hold, including optimality criterion 2 (cf. last paragraph of Chapter 1), and provided the analysed response is death from all causes. However, this identity of e-YLL and e-YPLL no longer holds if I am interested in e-YLL conditional on age at death. To see why, consider the scenarios illustrated in Figure.

In both scenarios the totals agree, but the true excess Years of Life Lost at da(1) are smaller in scenario 1 than in scenario 2, whereas it is vice versa at db(1). Irrespective of the scenario, the e-YPLL are always calculated in the same way. Therefore, we get in both scenarios the same e-YPLL at da(1) as well as the same e-YPLL at db(1). Note that the calculation of e-YPLL does not make any use of the individual matching information.

Hence, the true excess Years of Life Lost conditional on age at death are not identifiable without having access to a perfect control twin or without supposing a specific mechanism for how exposure causes death. Note that criterion 1 alone – as defined in the last paragraph of Chapter 1 and hopefully fulfilled in usual cohort data – does not render the excess Years of Life Lost stratified on age at death identifiable. Consequently, estimates based on e-YPLL conditional on age at death are potentially biased in all usual settings.

Next, I analyse a theoretical birth cohort of 100,000 men to demonstrate this potential bias by a realistic life table example. I write the excess Years of Life Lost due to exposure-caused lung cancer deaths as e-YLL(lung cancer) for short. In contrast to the causal effect, e-YLL(lung cancer) comprises exactly that part of e-YLL from all causes of death that is due to the occurrence of lung cancer deaths among the exposed.

The concept presented supposes that each subject has a set of hypothetical (deterministic) death times, one for each combination of level of exposure and cause of death. Just one of these death times is effective; the others are latent. For the exposed twin 1, lung cancer is effective as a cause of death and heart attack is latent. It is vice versa for the unexposed twin 2. Both causes of death compete within each twin for the effective position. Of course, the result of this competition depends on the exposure conditions applied.
This result motivates the introduction of a second (competing) exposure into the scenario, in which I deal with two competing responses (lung cancer and heart attack). The scenario is illustrated in Figure. From the Figure,

e-YLL = e-YLL(lung cancer) = e-YPLL = da(2) - da(1) + db(2) - db(1).

The first equation holds because the only cause of death among all asbestos-exposed is lung cancer, and the second because I have shown that e-YLL = e-YPLL is true in an ideal study given the aforementioned assumptions. The third equation follows from a simple evaluation of e-YLL.

Next, I calculate e-YPLL(lung cancer) according to Park et al. 2002:

e-YPLL(lung cancer) = Σ (observed_lungcancer(t) - expected_lungcancer(t)) YPLL(t).

The Years of Potential Life Lost at the two ages of death are

YPLL(da(1)) = da(2) - da(1) + 0.5 (db(2) - da(2)),
YPLL(db(1)) = db(2) - db(1),

from which I derive

e-YPLL(lung cancer) = da(2) - da(1) + 0.5 (db(2) - da(2)) + db(2) - db(1)
                    = e-YLL(lung cancer) + 0.5 (db(2) - da(2)),

since no expected lung cancer deaths are to be subtracted.

Hence, e-YPLL(lung cancer) is biased upward by 0.5 (db(2) - da(2)), which is half of the causal effect of smoking on age at death from all causes according to Figure.

In summary, the excess Years of Life Lost (e-YLL) for death from all causes can be accurately determined by calculating the corresponding excess Years of Potential Life Lost (e-YPLL), provided conditions hold that are also often cited for general study validity. However, the equality of e-YLL and e-YPLL does not hold in general for the conditional measures of interest, because these require criterion 2: each exposed subject has to have an ideal unexposed control partner so that the effect of exposure can be measured on the individual level. The equality also requires that this information about individual matching is used in the analysis. I pointed out that e-YPLL conditional on age at death, as published in some analyses, is potentially biased.

Statistical measures that focus on the advancement of disease onset are assumed to be helpful in communicating risk factor impact on disease (Fischhoff). Statistics conveying information about the advancement of disease onset are therefore helpful in exposure impact analysis and especially worthwhile in exposure impact communication. However, attention should be drawn to the difficulties involved, and epidemiologists should always be aware of the conceptual limits of the Years of Potential Life Lost method when applying it as a regular tool in cohort analysis.

Aside from Years of Life Lost, other approaches are available to convey information on the impact of a risk factor on the onset of a disease and may thus facilitate communication of epidemiological findings. One such concept is the risk and rate advancement period (RAP) introduced by Brenner et al. 1993.

The main conclusions about non-identifiability of e-YLL are derived as an application of a causal theory based on counterfactuals (for a review see Greenland 2000).

The ideas of counterfactual reasoning can at least be traced back to philosophers in the eighteenth and nineteenth centuries, like Hume and Mill. Even the oldest clearly structured theory of causality, developed by Aristotle, has some similarities to counterfactual reasoning due to its manipulative four-causes concept (Vorländer 1979).
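The two-pair bias derivation above can be traced numerically. In the minimal sketch below the death ages are invented, and YPLL(t) is computed, consistently with the formulas just derived, as the mean residual lifetime among unexposed reference subjects still alive at t.

```python
# Minimal sketch of the two-pair example above (ages are illustrative).
# Exposed twins die of lung cancer at a1 and b1; their unexposed twins die
# (of heart attack) at a2 and b2, with b1 > a2 as in the scenario.

def ypll(t, reference):
    """Mean residual lifetime at age t in the unexposed reference group."""
    survivors = [r for r in reference if r > t]
    return sum(r - t for r in survivors) / len(survivors)

a1, a2 = 60.0, 70.0
b1, b2 = 75.0, 80.0

reference = [a2, b2]                                 # unexposed death ages
e_yll = (a2 - a1) + (b2 - b1)                        # true excess YLL: 15.0
e_ypll = ypll(a1, reference) + ypll(b1, reference)   # all deaths lung cancer

print(f"e-YLL  = {e_yll:.1f} years")
print(f"e-YPLL = {e_ypll:.1f} years")
print(f"bias   = {e_ypll - e_yll:.1f} = 0.5*(b2-a2) = {0.5 * (b2 - a2):.1f}")
```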
Some authors resist a counterfactual approach and argue against the speculative and metaphysical background of counterfactual worlds (Dawid 2000).

Note further that the mathematically consistent analytical treatment of causal questions by counterfactual theory is obviously related to the so-called multiverse approach (Everett 1957). In addition, this link to quantum mechanics disproves the repeatedly made statement by Dawid that counterfactual reasoning is merely metaphysical.

Whereas the counterfactual approach can help to clarify terminology and substance of causal relations, it points simultaneously at some ambiguities when discussing competing causes of death. In this scenario, it is unclear what kind of action should be taken to cause a suppression of competing risks (Greenland 2002).

In conclusion, the excess Years of Potential Life Lost estimates the excess Years of Life Lost due to exposure unbiasedly if we are interested in (a) death from all causes and (b) total excess Years of Life Lost summed up across the whole cohort. However, the method of calculating excess Years of Potential Life Lost due to exposure is potentially biased if it is applied (1) to estimate the impact of exposure on specific causes of death, like lung cancer, in the presence of competing causes, or (2) to estimate the impact of exposure conditional on age at death. These potential biases can be rather severe in published analyses.

The author declares that he has no competing interests.

The following example is based on the counterfactual framework presented in Chapter 1. It demonstrates that neither relative risks nor relative rates can be used in general to estimate the probability of causation unbiasedly. In particular, I show that an estimate of the attributable hazard derived from a Cox model (Breslow and Day 1987) fails to do so.

The cohort is supposed to comprise four subjects: A, A's twin, B, and B's twin. In addition, it is assumed that the cohort is followed up in mortality until the fixed censoring date tend > 0. The time scale can be chosen arbitrarily as age, calendar time or time since start of follow-up without affecting the following arguments. I assume that the exposed subject A may experience the event (death) during the follow-up period at t1 years (t1 > 0); if unexposed counterfactually (A's twin), death occurs at t2 > t1, t2 < tend. No event may occur in subject B and his/her twin during follow-up. Thus, among the two exposed subjects, A and B, only one case occurs and this case is causally affected by exposure since t1 < t2. It follows that the probability of causation is exactly 100%.

In contrast, the incidence proportion (Rothman and Greenland 1998) is 0.5 among the exposed as well as among the unexposed, so that risks cannot reveal any effect. Note that a change from risks to rates does not overcome the problem. The rate among the exposed is 1/(t1 + tend) and among the unexposed is 1/(t2 + tend). Consequently, the rate ratio is (t2 + tend)/(t1 + tend) > 1 because t2 > t1. The attributable rate among the exposed can be determined as (rate ratio - 1)/rate ratio = 1 - 1/rate ratio. Thus, I conclude 0 < 1 - (t1 + tend)/(t2 + tend) < 1, again proving a systematic underestimate of the true probability of causation among the exposed by the attributable rate ratio. Note that the rare disease assumption is of no help. Assuming n exposed subjects (n >> 1) that neither react nor do their n unexposed twins, we get an attributable rate among the exposed of 1 - (t1 + n·tend)/(t2 + n·tend) < 1. If n approaches infinity, the attributable rate among the exposed decreases to zero whereas the true probability of causation remains constant at 100%.
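The limit just described is easy to trace numerically; in the minimal sketch below the values of t1, t2 and tend are illustrative.

```python
# Minimal sketch: the attributable rate fraction shrinks toward 0 as the
# cohort grows, while the probability of causation stays at 100%
# (t1, t2 and t_end are illustrative values with t1 < t2 < t_end).
t1, t2, t_end = 5.0, 10.0, 20.0  # years

for n in (1, 10, 100, 1000):
    rate_ratio = (t2 + n * t_end) / (t1 + n * t_end)
    attributable = 1.0 - 1.0 / rate_ratio
    print(f"n = {n:5d}: rate ratio = {rate_ratio:.4f}, "
          f"attributable fraction = {attributable:.4f}")
print("Probability of causation in the construction: 1.0 (100%)")
```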
Hence, the discrepancy is even sharpened under the rare disease assumption.

The cohort comprises 4 subjects and 2 risk sets. One set is generated by the exposed case (A), the other set by the unexposed case (A's twin). Therefore, a Cox analysis of the cohort yields the following. Assuming a relative hazard (rate) in the Cox model of

λ/λ0 = exp(β · exposure), where exposure is a binary exposure indicator,

we get the partial likelihood

L(β) = exp(β) / [(2 exp(β) + 2) (exp(β) + 2)],

which is maximized at b = (ln 2)/2. Therefore, the estimated hazard ratio is

exp(b) = √2,

yielding an estimated attributable hazard among the exposed of

1 - 1/√2 < 1 = probability of causation.

Note that the rare disease assumption is again of no help to overcome this discrepancy, because the estimated attributable hazard among the exposed approaches 0 when the number of controls rises indefinitely.

Greenland 1999 emphasized that attributable fractions generally understate the probability of causation.

Additional file 1. Life table analysis with calculation of excess Years of Potential Life Lost e-YPLL according to Park et al. 2002. Basic data (unexposed) from BEIR IV (1988), Table 2A-10, p. 133: death rates of the male US population, surviving at least 30 years, applied to a birth cohort of 100,000. Exposure impact: advancement of certain fractions of deaths. For details of the assumed mechanism see Table 2.

Additional file 2. Two different exposure-response mechanisms compatible with the life table analysis in Table 1. Fractions of the unexposed deaths are advanced by 0 yr or 5 yr according to mechanism 1 and by 0 yr, 5 yr or 10 yr according to mechanism 2. The age distribution of all exposed deaths is identical under both mechanisms. The distribution of the true excess Years of Life Lost e-YLL differs between mechanisms and both diverge from e-YPLL, whereas the totals agree. (advcm = advancement)

Additional file 3. Life table analysis with calculation of excess Years of Potential Life Lost e-YPLL and true excess Years of Life Lost e-YLL for overall and lung cancer mortality (ICD9-162). Basic data (unexposed) from BEIR IV (1988), Table 2A-10, p. 133: overall and lung cancer death rates of the male US population, surviving at least 30 years, applied to a birth cohort of 100,000. Exposure impact: advancement of factual and hypothetical lung cancer deaths by 5 years, mixture of advancements among deaths from all causes. It is assumed that the advancement of hypothetical lung cancer deaths leads to an excess of 50% of lung cancer deaths among exposed in each age category. The overall e-YLL for lung cancer are less than the number of exposed lung cancer deaths times 5 years. The e-YPLL for overall death and lung cancer death are determined according to Park et al. 2002. For all deaths e-YPLL must equal e-YLL, but e-YPLL is obviously biased for lung cancer death.

Additional file 4. Excel sheet explaining the calculations in Additional files 1 and 2.

Additional file 5. Excel sheet explaining the calculations in Additional file 3.
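The reconstructed partial likelihood above can be checked numerically; the minimal sketch below maximizes it and confirms b = (ln 2)/2. The two risk-set factors follow the construction given in the text (four subjects at risk at the first failure, three at the second).

```python
# Minimal sketch: numerically confirm that the partial likelihood above is
# maximized at b = ln(2)/2, i.e. an estimated hazard ratio of sqrt(2).
import math
from scipy.optimize import minimize_scalar

def neg_log_partial_likelihood(b: float) -> float:
    u = math.exp(b)
    # Risk set 1 (failure of exposed A): 2 exposed + 2 unexposed at risk.
    # Risk set 2 (failure of unexposed A's twin): 1 exposed + 2 unexposed.
    return -(math.log(u / (2 * u + 2)) + math.log(1 / (u + 2)))

res = minimize_scalar(neg_log_partial_likelihood, bounds=(-5, 5),
                      method="bounded")
print(f"b-hat = {res.x:.4f} (ln 2 / 2 = {math.log(2) / 2:.4f})")
print(f"hazard ratio = {math.exp(res.x):.4f} (sqrt 2 = {math.sqrt(2):.4f})")
print(f"attributable hazard = {1 - 1 / math.exp(res.x):.4f} vs PC = 1.0")
```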
Phospholipase D (PLD) is involved in many signaling pathways. In most systems, the activity of PLD is primarily regulated by members of the ADP-Ribosylation Factor (ARF) family of GTPases, but the mechanism of activation of PLD and ARF by extracellular signals has not been fully established. Here we tested the hypothesis that ARF-guanine nucleotide exchange factors (ARF-GEFs) of the cytohesin/ARNO family mediate the activation of ARF and PLD by insulin.

Wild type ARNO transiently transfected in HIRcB cells was translocated to the plasma membrane in an insulin-dependent manner and promoted the translocation of ARF to the membranes. The ARNO mutants ΔCC-ARNO and CC-ARNO were partially translocated to the membranes, while ΔPH-ARNO and PH-ARNO could not be translocated to the membranes. Sec7 domain mutants of ARNO did not facilitate ARF translocation. Overexpression of wild type ARNO significantly increased insulin-stimulated PLD activity, and mutations in the Sec7 and PH domains, or deletion of the PH or CC domains, inhibited the effects of insulin.

Small ARF-GEFs of the cytohesin/ARNO family mediate the activation of ARF and PLD by the insulin receptor.

The online version of this article (doi:10.1186/1471-2121-4-13) contains supplementary material, which is available to authorized users.

Small GTPases of the ADP-ribosylation factor (ARF) family play a major role in membrane trafficking in eukaryotic cells. PLD activity is also regulated by Ca2+, protein kinase C, tyrosine kinases, and G proteins.

ARF activation was determined by the binding of GTPγS to purified, myristoylated recombinant human ARF1 (mhARF1), as described by Shome and coworkers. Reactions contained radiolabelled GTPγS (1 μCi) in 20 mM Hepes buffer containing 2 mM MgCl2, 0.1% Na-cholate and 1 mM ATP. At the indicated time points, the reaction was quenched by addition of 100 μM ice-cold, unlabeled GTPγS, and the protein-bound nucleotide was determined by filtration through nitrocellulose filters as described.

HIRcB cells were plated on poly-L-lysine coated glass coverslips and transfected with the constructs as indicated above. Cells were serum-starved overnight and stimulated with 100 nM insulin. Live cells were imaged in an LSM5 Zeiss laser scanning confocal microscope equipped with a 63X oil immersion objective.

For ARF and ARNO colocalization experiments, HIRcB cells were plated on poly-L-lysine coated coverslips as described above and co-transfected with myc-ARNO and ARF-GFP constructs using Superfect transfection reagent according to the manufacturer's instructions. Following insulin stimulation, the cells were fixed with 4% fresh paraformaldehyde in PBS at 4°C for 30 min and permeabilized in 0.1% Triton X-100 at room temperature for 2 min. After permeabilization, the cells were blocked with 3% bovine serum albumin in PBS at room temperature for 30 min and immunostained with a monoclonal antibody, 9E10 (Upstate Biotechnology), that recognizes the myc epitope. After extensive washing, the cells were incubated with a Cy5-conjugated donkey anti-mouse secondary antibody (Jackson Immunoresearch) and imaged using a Zeiss laser scanning confocal microscope with filters appropriate for the detection of GFP and Cy5.
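Time-course binding data of the kind produced by the filtration assay are commonly summarized with a one-phase association fit. The sketch below uses scipy's curve_fit with invented data points; it is a generic kinetics fit, not the analysis used in this study.

```python
# Minimal sketch: one-phase association fit for GTPgammaS binding time
# courses (data points are invented for illustration).
import numpy as np
from scipy.optimize import curve_fit

def one_phase(t, bmax, k):
    """Bound nucleotide approaching a plateau bmax with rate constant k."""
    return bmax * (1 - np.exp(-k * t))

t_min = np.array([0.5, 1, 2, 5, 10, 20, 30])           # minutes
bound = np.array([0.9, 1.7, 3.0, 5.6, 7.6, 8.8, 9.1])  # pmol bound

(bmax, k), _ = curve_fit(one_phase, t_min, bound, p0=(10.0, 0.1))
print(f"Bmax = {bmax:.2f} pmol, k = {k:.3f} per min, "
      f"t1/2 = {np.log(2) / k:.1f} min")
```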
Human faces provide important signals in social interactions by conveying two main types of information: individual identity and emotional expression. The ability to readily assess both the variability and the consistency of emotional expressions in different individuals is central to one's interpretation of the imminent environment. A factorial design was used to systematically test the interaction of either constant or variable emotional expressions with constant or variable facial identities in areas involved in face processing, using functional magnetic resonance imaging.

Previous studies suggest a predominant role of the amygdala in the assessment of emotional variability. Here we extend this view by showing that this structure activated to faces with changing identities that display constant emotional expressions. Within this condition, amygdala activation was dependent on the type and intensity of displayed emotion, with significant responses to fearful expressions and, to a lesser extent, to neutral and happy expressions. In contrast, the lateral fusiform gyrus showed a binary pattern of increased activation to changing stimulus features, while it was also differentially responsive to the intensity of displayed emotion when processing different facial identities.

These results suggest that the amygdala might serve to detect constant facial emotions in different individuals, complementing its established role for detecting emotional variability.

Facial expressions and facial identities are important cues for the evaluation of social contexts [2].

Visual analysis of human faces has been suggested to be achieved by a core system comprising the fusiform gyrus together with the inferior occipital gyrus, the superior temporal sulcus, and the amygdala [8].

Recent advances to assess the effects of facial identity and emotional expression include one report of increased left amygdala activation to blocks of multiple novel vs. single identical faces displaying neutral or emotionless expressions.

Four conditions were used: (a) constant identity, constant expression (CICE); (b) variable identity, constant expression (VICE); (c) constant identity, variable expression (CIVE); (d) variable identity, variable expression (VIVE). To control for attentional effects during the procedure, an oddball task was included to avoid confounds by an emotional judgment or a gender differentiation task.

We hypothesized that fearful expressions (shown in different identities) would elicit the strongest amygdala activation, given the evidence from lesion and imaging studies that this expression is a particularly potent activator of the amygdala.

In order to detect voxels that show elevated responses to the same expression displayed in different individuals, we constructed the interaction contrast of our 2 × 2 design for the amygdala and masked it to exclude regions showing higher activations to CIVE than to VICE. For our hypothesis in the fusiform gyrus, we used a contrast that compared conditions with at least one changing stimulus feature with the condition in which the same stimulus was shown for the entire block [(VICE + CIVE + VIVE) > 3 × CICE]. We report corrected p-values, which refer to the probabilistic behavior of Gaussian random fields.
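Block contrasts like these can be written as weight vectors over the four condition regressors. The sketch below encodes the fusiform contrast stated explicitly in the text; the interaction weights are one standard 2 × 2 form assumed here for illustration, since the paper's exact interaction contrast is not recoverable from this excerpt.

```python
# Minimal sketch: contrast weight vectors over the four block regressors,
# ordered (CICE, VICE, CIVE, VIVE). The "change" contrast is given in the
# text; the interaction weights below are an assumed standard 2x2 form.
import numpy as np

conditions = ["CICE", "VICE", "CIVE", "VIVE"]

# (VICE + CIVE + VIVE) > 3 x CICE: any changing feature vs. none.
change_vs_constant = np.array([-3, 1, 1, 1])

# Identity-by-expression interaction, e.g. (VICE - CICE) - (VIVE - CIVE).
interaction = np.array([-1, 1, 1, -1])

betas = np.array([1.2, 2.0, 1.9, 2.1])  # hypothetical condition estimates
for name, c in [("change > constant", change_vs_constant),
                ("interaction", interaction)]:
    print(f"{name}: effect = {c @ betas:+.2f}")
```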
EPI: echo-planar imaging
fMRI: functional magnetic resonance imaging
LOC: lateral occipital complex
RT: reaction time
SPM: statistical parametric map
VIVE: variable identity, variable emotion
VICE: variable identity, constant emotion
CICE: constant identity, constant emotion
CIVE: constant identity, variable emotion
Fear/Happy Max: maximally fearful/happy expression
Fear/Happy Min: minimally fearful/happy expression

J.G. and O.T. designed, coordinated, and conducted data collection, analysis, and interpretation. C.B. and C.W. conceived of the study and participated in its design, analysis and interpretation. All authors read and approved the final manuscript.
Endurance exercise training can promote an adaptive muscle fiber transformation and an increase of mitochondrial biogenesis by triggering scripted changes in gene expression. However, no transcription factor has yet been identified that can direct this process. We describe the engineering of a mouse capable of continuous running of up to twice the distance of a wild-type littermate. This was achieved by targeted expression of an activated form of peroxisome proliferator-activated receptor δ (PPARδ) in skeletal muscle, which induces a switch to form increased numbers of type I muscle fibers. Treatment of wild-type mice with a PPARδ agonist elicits a similar type I fiber gene expression profile in muscle. Moreover, these genetically generated fibers confer resistance to obesity with improved metabolic profiles, even in the absence of exercise. These results demonstrate that complex physiologic properties such as fatigue, endurance, and running capacity can be molecularly analyzed and manipulated.

Engineered expression of the peroxisome proliferator-activated receptor δ in skeletal muscle increases type I muscle fibers, allowing the modified mice to run twice the distance of wild-type littermates.

Skeletal muscle fibers are generally classified as type I (oxidative/slow) or type II (glycolytic/fast) fibers. They display marked differences with respect to contraction, metabolism, and susceptibility to fatigue. Type I fibers are mitochondria-rich and mainly use oxidative metabolism for energy production, which provides a stable and long-lasting supply of ATP, and thus are fatigue-resistant. Type II fibers comprise three subtypes, IIa, IIx, and IIb. Type IIb fibers have the lowest levels of mitochondrial content and oxidative enzymes, rely on glycolytic metabolism as a major energy source, and are susceptible to fatigue, while the oxidative and contraction functions of type IIa and IIx lie between type I and IIb. Adult skeletal muscle retains plasticity and can undergo conversion between fiber types, for example in response to exercise training.

Muscle fiber specification appears to be associated with obesity and diabetes. For instance, rodents that gain the most weight on high-fat diets possess fewer type I fibers.

We have previously established that peroxisome proliferator-activated receptor (PPAR) δ is a major transcriptional regulator of fat burning in adipose tissue through activation of enzymes associated with long-chain fatty-acid β-oxidation.

A role for PPARδ in muscle fiber specification was suggested by its enhanced expression in skeletal muscle—at levels 10-fold and 50-fold greater than the PPARα and γ isoforms, respectively (unpublished data). An examination of PPARδ in different muscle fibers reveals a significantly higher level in type I muscle (soleus) relative to type II–rich muscle (extensor digitorum longus) or type I and type II mixed muscle (gastrocnemius).

To directly assess the role of activation of PPARδ in control of muscle fiber plasticity and mitochondrial biogenesis, we generated mice expressing a transgene in which the 78-amino-acid VP16 activation domain was fused to the N-terminus of full-length PPARδ, under control of the 2.2-kb human α-skeletal actin promoter. In agreement with the previous characterization of this promoter, the VP16-PPARδ transgene was selectively expressed in skeletal muscle.

Type I muscle can be readily distinguished from type II or mixed muscle by its red color, because of its high concentration of myoglobin, a protein typically expressed in oxidative muscle fibers.
We found that muscles in the transgenic mice appeared redder than those of wild-type animals, consistent with an increased proportion of oxidative fibers.

A number of previous studies have shown that obese individuals have fewer oxidative fibers, implying that the presence of oxidative fibers alone may play a part in obesity resistance. To test this possibility, we fed the transgenic mice and their wild-type littermates a high-fat diet for 97 d. Although the initial body weights of the two groups were very similar, the transgenic mice had gained less than 50% at day 47, and only one-third at day 97, of the weight gained by the wild-type animals.

No significant differences (p > 0.35, n = 4) were observed between the transgenic and control mice. Thus, the remarkable increase in endurance is the physiologic manifestation of muscle fiber transformation. This suggests that a genetically directed muscle fiber switch is physiologically and functionally relevant. In addition, we looked at what effect the absence of PPARδ function has on exercise endurance. In the treadmill test, the PPARδ-null mice could sustain only 38% of the running time and 34% of the distance of their age- and weight-matched wild-type counterparts.

The VP16 activation domain was fused in frame with the N-terminus of mouse PPARδ, and the VP16-PPARδ fusion cDNA was placed downstream of the human α-skeletal actin promoter.

Mouse EST clones were obtained from ATCC, verified by sequencing, and used as Northern probes. Antibodies were obtained from Santa Cruz Biotechnology. Total muscle protein extracts and nuclear extracts were prepared for immunoblot analysis.

Prior to the exercise performance test, the mice were accustomed to the treadmill with a 5-min run at 7 m/min once per day for 2 d. The exercise test regimen was 10 m/min for the first 60 min, followed by 1 m/min increment increases at 15-min intervals. Exhaustion was defined as the point at which mice were unable to avoid repetitive electric shocks.

Muscle fiber typing was performed essentially using metachromatic dye–ATPase methods as described.

The number of mice in each group used in experiments is indicated in the figure legends. Values are presented as mean ± SEM. A two-tailed Student's t test was used to calculate p-values.

Video S1. This video shows the exercise performance of a representative of the transgenic mice (right chamber) and a representative of wild-type control littermates (left chamber) on the treadmill 15 min into the exercise challenge. (52.4 MB MOV)

Video S2. This video shows the exercise performance of a representative of the transgenic mice (right chamber) and a representative of wild-type control littermates (left chamber) on the treadmill 90 min into the exercise challenge. (41.7 MB MOV)
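Given the ramp protocol just described, the distance run follows directly from the time to exhaustion. The helper below is a minimal sketch of that arithmetic; the function name and the 90-min example (chosen to match the time point of Video S2) are our own illustrative assumptions, not values reported by the authors.

```python
def treadmill_distance(minutes_to_exhaustion: float) -> float:
    """Total distance (m) under the ramp protocol described above:
    10 m/min for the first 60 min, then speed increases by 1 m/min
    at the start of each subsequent 15-min interval."""
    distance, elapsed, speed = 0.0, 0.0, 10.0
    while elapsed < minutes_to_exhaustion:
        # First segment lasts 60 min, later segments 15 min each
        segment = 60.0 if elapsed == 0.0 else 15.0
        run = min(segment, minutes_to_exhaustion - elapsed)
        distance += speed * run
        elapsed += run
        speed += 1.0
    return distance

# Hypothetical example: exhaustion at 90 min into the challenge
print(treadmill_distance(90.0))  # 60*10 + 15*11 + 15*12 = 945 m
```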
Community recovery following primary habitat alteration can provide tests for various hypotheses in ecology and conservation biology. Prominent among these are questions related to the manner and rate of community assembly after habitat perturbation. Here we use space-for-time substitution to analyse frog and lizard community assembly along two gradients of habitat recovery following slash-and-burn agriculture (jhum). One recovery gradient undergoes natural succession to mature tropical rainforest, while the other involves plantation of jhum fallows with teak (Tectona grandis) monoculture.

Frog and lizard communities accumulated species steadily during natural succession, attaining characteristics similar to those of mature forest communities after 30 years of regeneration. Lizards showed higher turnover and lower augmentation of species relative to frogs. Niche-based classification identified a number of guilds, some of which contained both frogs and lizards. Successional change in species richness was due to an increase in the number of guilds as well as in the number of species per guild. Phylogenetic structure increased with succession for some guilds. Communities along the teak plantation gradient, on the other hand, did not show any sign of change with chronosere age. Factor analysis revealed sets of habitat variables that independently determined changes in community and guild composition during habitat recovery.

The timescale of frog and lizard community recovery was comparable with that reported by previous studies on different faunal groups in other tropical regions. Both communities converged on primary habitat attributes during natural vegetation succession, the recovery being driven by deterministic, nonlinear changes in habitat characteristics. On the other hand, very little faunal recovery was seen even in relatively old teak plantation. In general, tree monocultures are unlikely to support recovery of natural forest communities, and the combined effect of shortened jhum cultivation cycles and plantation forestry could result in landscapes without mature forest. Lack of source pools of genetic diversity will then lead to altered vegetation succession and faunal community reassembly. It is therefore important that the value of habitat mosaics containing even patches of primary forest and successional secondary habitats be taken into account.

Evaluation of the importance of various processes determining community structure and function is an important topic in ecology. Unlike just a decade or so ago, few studies today question whether or not community assembly is strictly random, recognizing the role of both stochastic and deterministic processes. This newfound view of community ecology is an excitingly realistic one, and has the potential to make valuable contributions to conservation biology as well. However, community assembly is rarely observable directly, because it unfolds over timescales much longer than most studies.

Circumventing this problem is obviously very difficult. One possible approach is to study communities along gradients of habitat succession using space-for-time substitution (SFT) to obtain chronosequential communities. Slash-and-burn or shifting cultivation (jhum) agriculture involves clearing and burning of forest patches, so the original rainforest communities are effectively obliterated, and succession involves recovery of communities from scratch. This study takes an SFT approach to compare changes in frog and lizard community structure in two contrasting habitat succession gradients: (a) 1-yr jhum fallows giving way to mature forest, and (b) 1-yr jhum fallows planted over with teak, leading to monoculture stands.
The following questions were addressed in this study:

1. How much does frog and lizard community succession differ between the two gradients of habitat recovery?

2. Does composition of the entire community change in synchrony, or does the recovery pattern differ between subcommunities such as frogs vs. lizards and guilds?

3. What aspects of habitat change influence frog and lizard community recovery, and if habitat parameters are linked to niche axes, do they predict changes in guild composition?

4. Do successional changes in guilds also show trends in phylogenetic structure? This last question is expected to yield interesting insights into possible evolutionary mechanisms underlying changes in community composition, but has so far received little attention.

In this paper, a chronosere is defined as a habitat that has recovered from perturbation for a known length of time, and can be assigned a place in the SFT. An assemblage is the set of all species of a taxonomic group in a landscape of interest. Ecological groups (EGs) are species' subsets of the assemblage with similar niche characteristics. Communities comprise species of the assemblage which share a habitat stratum in the landscape. Guilds are members of the EGs that actually coexist in the same chronosere, i.e., belong to the same community, and are thus likely to have ecological and evolutionary interactions.

To draw inferences about what aspects of habitat change determine sequential communities, habitat and frog-lizard community data were analysed hierarchically. As a first step, species richness and turnover of frog and lizard communities along habitat recovery gradients were summarised, and the entire assemblage classified into ecological groups (EGs) based on niche similarities. Guilds identified from this classification were then examined for phylogenetic structure. Using factor analysis, orthogonal combinations of variables that described biotic and abiotic aspects of habitat transition were extracted. We then tested for correspondence between these composite variables and the composition of frog and lizard communities and guilds. Based upon the relationships between different habitat factors and frog and lizard communities, variables were interpreted as composite adaptive zones, and we tested whether they predicted successional changes at different levels of community organization.

The 1-yr jhum fallows (plots jh1A and B) were dominated by herbaceous plants, tall grass, shrubs and wild bananas, along with saplings and surviving crop plants. The 4 to 5-yr post-jhum plot (jh5) was dominated by almost homogeneous stands of the bamboo Melocanna baccifera, interspersed with a few shrubs and trees. Herbs were rare, and the understorey sparse. The 7 to 10-yr post-jhum plot (jh10) was very similar to jh5. However, here the bamboo culms were more sparsely distributed; along with M. baccifera, two other bamboos, Dendrocalamus longispathus and Bambusa tulda, were in greater abundance, and woody plants were relatively more common. Compared to the other plots, a larger area was included in the 30 to 35-yr jhum plot (jh35) because it contained a greater range of ages and hence perhaps more variability. Also, this was the only accessible site in the study area that represented a chronosere aged between 30 and 50 years. Although M.
baccifera was common, this site had a greater abundance of other bamboos and trees than any of the previous stages. Though most trees were small, woody vegetation formed a significant part of the canopy. Herbs and shrubs were rare, and the understorey generally sparse.

The three mature forest plots were of untraceable age. They were characterized by a diverse tree community.

The 4-yr teak plot (tk4) was a young plantation characterized by a monodominant stand of teak trees. The understorey was sparse, with some tall grass, shrubs (Lantana camara) and occasional herbs. The 22-yr teak plantation site (tk22) had a monotonous, uniform structure characteristic of a mature, managed teak monoculture. Undergrowth was sparse, consisting mostly of tall grass and Lantana sp.

Habitat variables were summarised by factor analysis with Varimax rotation of the factor structure, which explained a cumulative 85.8% of the variation (see methods). Eigenvalues, factor loadings and factor scores are given in the supplementary material. In general, although there is a change towards a tree-dominated habitat in both recovery gradients, the end result is very different, because the 22-yr teak plantation is a monoculture, whereas the mature forest consists of a diverse tree community.

The three sampling techniques used in conjunction during the study (see methods) yielded sixteen frog and seventeen lizard species. Community composition separated the jhum fallows and teak plantation communities on one hand from the mature forest and the 35-yr jhum fallows on the other.

The pattern of recovery is very different for the two gradients. For the mature forest succession gradient, the rate of frog and lizard community recovery is similar to that found for birds by Raman in the same landscape. The succession from 1-yr jhum to mature teak plantations, on the other hand, seems to show little change in species richness or composition even after 22 years of plantation growth.

It is worth noting that there are dissimilarities in the manner of species accumulation for frogs vs. lizards. There is much less augmentation of species number in the case of the latter, the main reason being that younger chronoseres support more lizard than frog species. Species accumulation curves (see supplementary material) illustrate this difference.

Ecological groups defined by non-metric multidimensional scaling (NMDS) of the niche-based dissimilarity matrix for the entire assemblage are shown in the corresponding figures.

These results on successional changes in guild structure and representation indicate a distinctly non-random sequence of community assembly, as certain guilds appear in later stages, followed by increases in their species richness and, in many cases, phylogenetic structure. Habitat attributes that determine these changes are explored in the next section.

Factor 2 correlated strongly with chronosere age along both gradients (R2 = 0.85 and 0.97, respectively) and represents deterministic aspects of vegetation succession. It has high positive loadings for tree species richness and macro-habitat variables such as tree density, canopy cover, and canopy height, most of which increase deterministically along both gradients of habitat change. Among the measured variables, these are primary and independent, and over time they drive changes in secondary (microhabitat) variables such as bamboo density, shrub abundance, and various measures of habitat heterogeneity (see methods). This factor clearly influences species composition at all levels of frog and lizard community structure. The strongest association is between factor 2 and overall species composition (frogs and lizards combined) across chronoseres.
Factor 2 was strongly and non-linearly correlated with age along both the teak and mature forest succession gradients. Two groups, the diurnal-arboreal (DA) and diurnal-terrestrial (DT) groups, were correlated with factor 2 alone. This suggests that, in contrast to other guilds, these two, which are both made up only of lizards, are directly influenced by a hierarchically higher order of habitat attributes. These were also the two groups that showed non-directional trends in species richness as well as phylogenetic trends along the habitat recovery gradients.

Along with factor 2, the crepuscular/nocturnal-terrestrial (CT) group was correlated with factors 1 and 4. Factor 1 scores increase and then decrease with plot age along both habitat recovery gradients (2nd-order polynomial fits, R2 = 0.99 and 0.82, respectively). It has a high positive loading for CV of soil moisture, which, as mentioned above, is highest in chronoseres with spatial and/or temporal variation in insolation. Factor 4 had no strong loadings, but is associated with shrub density, canopy height variability and tree density, all of which also affect ground cover, and can be considered macrohabitat variables.

The nocturnal-arboreal frog group (NA(F)) was correlated with factor 7, which is non-deterministic with respect to chronosere age. This factor has a high loading for soil moisture, which by itself is difficult to interpret as a variable directly affecting this ecomorphological group. It is likely that this factor is a surrogate for an unmeasured or unclassified variable. Lastly, the nocturnal-arboreal lizard group (NA(L)) is correlated with factor 8 along with factor 2. Factor 8 shows a weak negative linear relationship with recovery age along both gradients. Among the measured variables, factor 2 probably subsumes most habitat parameters that affect both NA groups directly (see next section).

Factors 3, 5, and 6 showed no significant association with any level of community composition. The obvious reason for this appears to be that, unlike the other factors, these are completely non-deterministic with respect to age of succession, thus representing temporally and/or spatially stochastic attributes that were unlikely to show any influence on the conspicuously deterministic nature of frog and lizard community and guild (except for the DT and DA groups) succession.

To summarise: (1) the two gradients of habitat recovery are very different and accordingly affect frog and lizard community assembly differently; (2) although both groups increased in species richness with habitat recovery, lizards had higher species turnover, combined with lower species augmentation, within each recovery gradient; (3) at a finer scale of community organization, assembly appears to be driven by changes in guild representation and composition, where some guilds change directionally with age of habitat recovery by species augmentation, while others change by species turnover; (4) guilds that showed a directional increase in species richness also increased in phylogenetic structure; (5) hierarchies of community organisation were affected by composite, nested habitat attributes that correspond to particular niche axes; and (6) the increase in species richness along the mature forest gradient, in contrast to the lack of change along the teak gradient, was due to availability (or lack thereof) of the variables that comprise these complex adaptive zones.
Also, the results show that a niche-based guild classification reveals patterns that would have been hidden in the gross response pattern of the entire community.

Some indication of the qualitative nature of potential evolutionary and ecological processes in community turnover comes from the fact that changes in phylogenetic structure are tied to guild structure in the communities. Using phylogenetic techniques, recent work has demonstrated the importance of evolutionary adaptation in assembling ecological communities.

It is an open question to what extent vegetation succession leads to changes in the number of adaptive peaks, and corresponding changes in the mean fitnesses of species' populations, such that multiple species can persist in the same habitat. In more ecological terms, this is the same as asking how habitat succession leads to changes in niche availability, occupancy, and overlap. Another related question, partly explored using the S/G ratios in this paper, is whether similar adaptive zones (or niches, or adaptive peaks) tend to be occupied by more closely related taxa. The results here do indicate that this may be true for such gradients of community change, as phylogenetic and guild structure increase directionally and in tandem with succession towards mature forest. Whether this change is driven by immigration from the regional gene pool or by local divergent adaptation is an interesting question.

The time scale of recovery on the jhum-rainforest succession gradient is about 30 years for both frogs and lizards, which suggests that recovery of diverse communities can be relatively fast, as has been reported for other fauna. However, the bamboo-dominated jhum chronoseres are gradually replaced by trees only through vegetation succession that is obviously reliant on seed rain/dispersal from nearby mature forests. In this region and many other areas of Southeast Asia, apart from continued pressure from shifting cultivation and shortening cultivation cycles, it has also become popular practice to plant and maintain monocultures of timber species. As the results of this study indicate, such plantations are unlikely to support natural recovery of faunal communities, and will harbour lower biological diversity compared to primary forest.

It is possible that the combined effects of short jhum cycles, plantation forestry and invasion by non-native species such as Lantana and Eupatorium will lead to the local extirpation of even remnant forest patches. This loss of recolonisation pools for flora and fauna will alter natural trajectories of succession, and strongly impact the biological diversity supported by the landscape. It is therefore important that conservation and prioritisation agencies in these areas consider the value of habitat mosaics containing even small patches of primary forest vegetation.

The study was carried out from November 1998 to April 1999 in and around Ngengpui Wildlife Sanctuary in south Mizoram, Northeast India. The study area covers about 200 sq. km (see maps in the supplementary material) and includes mature forest, Tectona grandis plantations and abandoned shifting cultivation (jhum) fallows of varying ages. All primary forest is referred to as "mature forest" throughout the paper, because it is often difficult to determine the age of ostensibly primary tropical forest, especially in areas with a poorly known history of land use and recovery.

Ten sampling plots representing mature and successional vegetation stages of known ages were established (Table 1).
Vegetation composition and habitat structure variables were sampled on randomly located 10 × 25 m belt transects.

Principal Components Analysis (PCA) was used to identify different aspects of habitat change with vegetation succession, and to collapse the list of raw variables into composite factors that could potentially predict frog and lizard community structure. The factor structure was rotated using the Varimax method to obtain clear loading patterns.

As the objective of the analysis was to combine variables into composite, orthogonal factors that could potentially account for community and guild structure, all factors with eigenvalues ≥ 0.8 were extracted, irrespective of the number of factors thus obtained. Although somewhat arbitrary, in essence this eigenvalue threshold ensured that a factor was included only if it extracted approximately as much variation as one raw variable.

The low abundance of amphibians and reptiles and the unstandardised sampling methodology in tropical Asia reduce the reliability of species diversity estimates and hence community structure analyses.

To improve detection and to gather information for the delineation of EGs (see below), the traditional transect method was modified by eliminating pseudoreplication and sampling both nocturnal and diurnal species on the same transect. Although the time taken for the nocturnal sampling (NAW) was more or less constant within a chronosere, it varied considerably across plots. The diurnal sampling (DAW) time, on the other hand, was more or less consistent. This strategy was used because, just as sampling effort needs to be proportional to habitat heterogeneity, higher microhabitat complexity calls for proportionally greater searching effort.

Pitfall trapping was used to supplement species inventorying from the belt transects, and to provide an unbiased measure of the effects of weather on herpetofaunal activity and hence sampling efficiency. Comparisons of trapping frequency across plots over the study period are not used in this paper. Each pitfall array was 'Y'-shaped, with three terminal (30 cm diam. × 60 cm depth) and one central (50 cm diam. × 70 cm depth) cylindrical aluminium funnel pitfall traps buried in the ground. The traps were connected with three opaque plastic-sheet fences (the arms of the 'Y') 0.4 m high and 5 m long, held up by bamboo stakes. In all, 22 arrays were placed, with two in each plot except for the large jh35, which had four. Arrays were at comparable distances from plot edges, and on similar slopes. Systematic trapping was initiated ten days after the traps were established. Traps were opened for 5–10 consecutive days and checked according to habitat characteristics, taking into consideration the level of exposure trapped animals were likely to be subjected to; plots with open habitat, such as jh1A, were checked most frequently, and those with relatively closed habitat, such as matA, were checked least frequently (every third day). Most specimens (95.2%) obtained from pitfall trapping were released a minimum of 100 m away from the array, either in the same site or in similar habitat elsewhere. A few were retained as voucher specimens.

At the end of a sampling session in a plot, far-ranging searches were carried out. These augmented species inventorying, and provided information crucial for EG classification (see below).
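As an illustration of the factor-extraction procedure described above (components retained at eigenvalues ≥ 0.8, then Varimax-rotated), the following sketch applies the same steps to a hypothetical plots × habitat-variables matrix. The data, function names, and this particular Varimax implementation are illustrative assumptions, not the study's actual code.

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Varimax rotation of a loading matrix (variables x factors)."""
    p, k = loadings.shape
    rotation = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        lam = loadings @ rotation
        # Standard varimax update via SVD of the gradient
        u, s, vt = np.linalg.svd(
            loadings.T @ (lam**3 - lam * np.mean(lam**2, axis=0))
        )
        rotation = u @ vt
        if s.sum() < var * (1 + tol):
            break
        var = s.sum()
    return loadings @ rotation

# Hypothetical matrix: 10 plots x 12 standardised habitat variables
X = np.random.default_rng(0).normal(size=(10, 12))
Z = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigvals)[::-1]
keep = eigvals[order] >= 0.8          # the eigenvalue threshold in the text
loadings = eigvecs[:, order[keep]] * np.sqrt(eigvals[order[keep]])
rotated = varimax(loadings)           # loadings used to interpret factors
```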
Periodically, nocturnal searches were also made to collect information about the refuges of diurnally active animals, and to confirm the presence or absence of species in different chronoseres.

Irrespective of the sampling technique, animals detected were caught whenever possible and identified in hand. All those that escaped were identified to a justifiable level or excluded from the analyses. A few individuals of taxonomically problematic species or taxa were preserved for later identification.

The effectiveness of sampling was evaluated by species accumulation curves (see supplementary material).

Overlap between recovering frog and lizard communities was measured with the Bray-Curtis measure between all possible pairs of chronoseres, using presence-absence data of all species.

Life history and behavioural traits were used to group species. Such groups are often called guilds, but are here termed ecological groups (see the definitions above).

Phylogenetic structure was measured as the ratio of the number of species to the number of genera (S/G ratio) in each EG. A similar approach has been used in studies addressing questions about phylogenetic structure in ecological communities.

To test which habitat attributes influenced community structure, Mantel tests of correspondence between dissimilarity (distance) matrices were used.

To test whether the availability of the composite variables (PCA factors) that predicted community and guild structure in the matrix correspondence tests did indeed influence community succession and phylogenetic structure along gradients of habitat recovery, we tested correlations across chronoseres between sums of factor scores and (a) species richness, (b) the ratio of species number to guild number, and (c) S/G ratios.

SSP conceived the study, carried out the fieldwork, performed the data analyses, and drafted the manuscript. BCC and GSR participated in design and coordination of the study. GSR also supervised the vegetation identification and habitat classification. All authors read and approved the final manuscript.

Additional files: representative photographs of habitat types; results of the PCA along with the list of habitat variables used; species accumulation curves and frog and lizard species lists; and a location map of the study area sampling plots with respect to vegetation types.
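A minimal sketch of the two core computations named above—Bray-Curtis dissimilarity on presence-absence data and a permutation-based Mantel test between distance matrices—might look as follows. The data matrices, permutation count, and the Euclidean metric for the factor-score matrix are hypothetical choices, not the study's specification.

```python
import numpy as np

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two species vectors
    (works for presence/absence or abundance data)."""
    return np.abs(a - b).sum() / (a + b).sum()

def mantel(d1, d2, n_perm=9999, seed=1):
    """Mantel test: permutation correlation between two distance matrices."""
    rng = np.random.default_rng(seed)
    n = d1.shape[0]
    iu = np.triu_indices(n, k=1)
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    hits = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        r = np.corrcoef(d1[np.ix_(p, p)][iu], d2[iu])[0, 1]
        hits += r >= r_obs
    return r_obs, (hits + 1) / (n_perm + 1)

# Hypothetical data: 10 chronoseres x 33 species (0/1), 10 x 8 factor scores
rng = np.random.default_rng(0)
comm = rng.integers(0, 2, size=(10, 33)).astype(float)
fact = rng.normal(size=(10, 8))
D_comm = np.array([[bray_curtis(a, b) for b in comm] for a in comm])
D_fact = np.array([[np.linalg.norm(a - b) for b in fact] for a in fact])
r, p = mantel(D_comm, D_fact)
```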
People sometimes solve problems with a unique process called insight, accompanied by an “Aha!” experience. It has long been unclear whether different cognitive and neural processes lead to insight versus noninsight solutions, or if solutions differ only in subsequent subjective feeling. Recent behavioral studies indicate distinct patterns of performance and suggest differential hemispheric involvement for insight and noninsight solutions. Subjects solved verbal problems, and after each correct solution indicated whether they solved with or without insight. We observed two objective neural correlates of insight. Functional magnetic resonance imaging revealed increased activity in the right hemisphere anterior superior temporal gyrus for insight relative to noninsight solutions, and scalp electroencephalography revealed a burst of high-frequency (gamma-band) activity over the same region beginning shortly before insight solutions.

Functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) are used to study neural activity in subjects during a verbal task for which they report solutions achieved by insight.

Insight is pervasive in human (and perhaps many animal) cognition. Perhaps the most famous example is that of Archimedes, who is said to have shouted “Eureka!” when he suddenly discovered that water displacement could be used to calculate density. Since then, “Eureka!,” or “Aha!,” has often been used to express the feeling one gets when solving a problem with insight.

Although many processes are shared by most types of problem solving, insight solutions appear to differ from noninsight solutions in several important ways. The clearest defining characteristic of insight problem solving is the subjective “Aha!” or “Eureka!” experience that follows insight solutions.

Persistent questions about insight concern whether unconscious processing precedes reinterpretation and solution, whether distinct cognitive and neural mechanisms beyond a common problem-solving network are involved in insight, and whether the apparent suddenness of insight solutions reflects truly sudden changes in cognitive processing and neural activity.

Recent work suggests that people are thinking—at an unconscious level—about the solution prior to solving problems with insight. Specifically, while working on a verbal problem they have yet to solve, people presented with a potential solution word read the actual solution word faster than they read an unrelated word.

Problem solving is a complex behavior that requires a network of cortical areas for all types of solving strategies and solutions, so solving problems with and without insight likely invokes many shared cognitive processes and neural mechanisms. One critical cognitive process distinguishing insight solutions from noninsight solutions is that solving with insight requires solvers to recognize distant or novel semantic (or associative) relations; hence, insight-specific neural activity should reflect that process. The most likely area to contribute to this component of insight problem solving is the anterior superior temporal gyrus (aSTG) of the RH. Language comprehension studies demonstrate that the RH is particularly important for recognizing distant semantic relations.

We used functional magnetic resonance imaging (FMRI) while subjects solved compound remote associate problems: they viewed three problem words (e.g., pine, crab, sauce) and attempted to produce a single solution word (apple) that can form a familiar compound word or phrase with each of the three problem words. We relied on solvers' reports to sort solutions into insight solutions and noninsight solutions, avoiding the complication that presumed insight problems can sometimes be solved without insight. Subjects reported “insight” for 57% of their solutions, “no insight” for 41% (s.d. = 18.9) of their solutions, and “other” for 2% of their solutions.
We marked a point about 2 s (rounded to the nearest whole second) prior to each solution button press as the solution event, and examined a time window 4–9 s after this event to isolate the corresponding hemodynamic response. Solving problems and responding to them required a strict sequence of events, but this sequence was identical whether subjects indicated solving with or without insight, so differences in FMRI signal resulted from the degree to which distinct cognitive processes and neural systems led to insight or noninsight solutions.

The involvement of the RH rather than the LH for this verbal task is not due to greater difficulty in producing insight solutions: subjects produced insight solutions at least as quickly as they produced noninsight solutions (t < 1.0, p > 0.3). More importantly, the hemodynamic responses to both insight and noninsight solutions in the homologous area of the LH are about equivalent to the response to noninsight solutions in the RH aSTG—it is the strong response to insight solutions in the RH aSTG that stands out. There is no insight effect anywhere within temporal cortex of the LH. At statistical thresholds below significant levels (p < 0.1 uncorrected), there are as many voxels in LH temporal cortex showing a noninsight effect as showing an insight effect.

After RH aSTG, the second largest area showing an insight effect in FMRI signal was the medial frontal gyrus in the LH. Although this cluster was nearly as large as RH aSTG, the event-related signal within it was weak and the insight–noninsight difference (peak difference = 0.15%) was relatively small. (The insight effect may be attributable as much to a negative response for noninsight solutions as to a positive response for insight solutions.)

There also was an insight effect in small clusters in or near bilateral amygdala or parahippocampal gyrus. Again, regional signal was low (83% of the brainwide average), and the signal difference was small (peak = 0.16%). However, an amygdalar response may be expected, given the emotional sensation of the insight experience. Hippocampal or parahippocampal involvement is also plausible, if memory interacts with insight solutions differently from how it interacts with noninsight solutions. For instance, insight problems may encourage distinct memory encoding.

Several cortical areas showed strong solution-related FMRI signal, but approximately equally for insight and noninsight solutions. Some of these areas relate to the response sequence rather than solution processes; other areas probably reflect component processes of a problem-solving network common to both insight and noninsight solving, such as retrieving potential solutions. Two areas that may be of interest for future studies are AC and posterior middle/superior temporal gyrus. Both these areas, in the RH only, showed strong, negative solution-related signal, approximately equal in the two solution types. AC is an area that might be predicted to be involved in reorienting attention as solvers overcome impasses, given its role in performance monitoring and cognitive control.

A separate group of subjects participated in fundamentally the same paradigm while we continuously recorded EEGs from the scalp. We then compared time-frequency analyses of the EEGs associated with insight solutions versus noninsight solutions. EEG provides temporal resolution greatly superior to that of FMRI and thus can better elucidate the time course and suddenness of the insight effect.
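As a sketch of the event-locked averaging just described (solution events marked ~2 s before the button press; hemodynamic window 4–9 s after the event), the following illustrates the windowing arithmetic on a single region-of-interest time course. All signal values, event times, and the TR are invented for illustration.

```python
import numpy as np

def event_locked_mean(signal, event_times, tr=1.0, window=(4, 9)):
    """Average hemodynamic response in a fixed post-event window.

    signal: 1-D BOLD time course (one voxel/ROI), one sample per `tr` seconds.
    event_times: solution events in seconds (~2 s before each button press).
    window: seconds after the event over which to extract the response.
    """
    lo, hi = (int(round(w / tr)) for w in window)
    segments = []
    for t in event_times:
        i = int(round(t / tr))
        if i + hi < len(signal):
            segments.append(signal[i + lo : i + hi + 1])
    return np.mean(segments, axis=0)

# Hypothetical comparison of insight vs. noninsight solution events
bold = np.random.default_rng(2).normal(size=600)   # placeholder ROI signal
insight_events = [35.0, 120.0, 310.0]              # seconds (hypothetical)
noninsight_events = [80.0, 200.0, 450.0]
insight_effect = (event_locked_mean(bold, insight_events)
                  - event_locked_mean(bold, noninsight_events))
```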
Furthermore, complex EEG oscillations can be parsed into constituent frequency components, some of which have been linked to particular types of neural and cognitive processes.

The high temporal resolution of EEG allows us to address one of the fundamental questions raised earlier: does insight really occur suddenly, as subjective experience suggests? For problems typically solved without insight, solvers report gradually increasing closeness to solution. In contrast, for problems typically solved with insight, solvers report little or no progress until shortly before they actually solve the problem.

We predicted that a sudden change in neural activity associated with insight solutions would produce an EEG correlate. Specifically, we predicted that high-frequency EEG oscillations in the gamma band would reflect this sudden activity, because prior research has associated gamma-band activity with the activation of perceptual, lexical, and semantic representations.

Participants solved 46% (s.d. = 8.2) of the problems correctly within the time limit. Of correctly solved problems, subjects reported more insight solutions than noninsight solutions (t[18] = 3.47, p = 0.003); there was no difference in mean response times.

There was a burst of gamma-band activity associated with correct insight solutions (but not noninsight solutions) beginning approximately 0.3 s before the button-press solution response at anterior right temporal electrodes, with no comparable burst at left temporal electrodes. ANOVAs revealed significant insight × time window × hemisphere interactions. The overall interaction occurred because there was an insight × hemisphere interaction from −0.30 to −0.02 s but no effect in the −1.52 to −0.36 s time window. Within the −0.30 to −0.02 s interval for these two electrodes, there was a significant insight effect at the right temporal (T8) site, but not at the homologous left temporal (T7) site or any other LH temporal electrode. Laplacian mapping localized this effect to the right anterior temporal area.

The gamma burst in the right temporal area cannot be attributed to motor processes involved in making the response because (A) motor activity associated with the bimanual button press would have caused a bilateral gamma burst, not a unilateral one; and (B) the location of the gamma burst, as determined by Laplacian mapping, is not consistent with a motor source.

Other planned statistical tests (ANOVAs) examined possible insight-related frontal theta (5–8 Hz), posterior alpha (8–13 Hz), and fronto-central beta (13–20 Hz) activity. There were no statistically significant theta or beta effects. There was a significant posterior alpha effect, which is discussed below.

Complex problem solving requires a complex cortical network to encode the problem information, search memory for relevant information, evaluate this information, apply operators, and so forth. The FMRI and EEG results reported here conclusively demonstrate that solving verbal problems with insight requires at least one additional component to this cortical network, involving RH aSTG, that is less important to solving without insight. The insight effect in RH aSTG accords with the literature on integrating distant or novel semantic relations during language comprehension. When people comprehend (read or listen to) sentences or stories, neural activity increases in aSTG or temporal pole bilaterally more than when comprehending single words.

Like the results in language processing, the current results are predicted by the theory that the RH performs relatively coarse semantic coding.
This theory proposes that the RH weakly activates broad, overlapping semantic fields, whereas the LH strongly activates narrower fields of closely related information.

We suggest that semantic integration, generally, is important for connecting various problem elements together and connecting the problem to the solution, and that coarsely coded semantic integration, computed in RH aSTG, is especially critical to insight solutions, at least for verbal problems. People come to an impasse on insight problems because their retrieval efforts are misdirected by ambiguous information in the problem or by their usual method for solving similar problems. Large semantic fields allowing for more overlap among distantly related concepts may help overcome this impasse. Because this semantic processing is weak, it may remain unconscious, perhaps overshadowed by stronger processing of the misdirected information.

A persistent question has been whether the cognitive and neural events that lead to insight are as sudden as the subjective experience. The timing and frequency characteristics of the EEG results shed light on this question. We propose that the gamma-band insight effect reflects the sudden transition of solution-related processing from an unconscious to a conscious state.

Suddenly recognizing new connections between problem elements is a hallmark of insight, but it is only one component of a large cortical network necessary for solving problems with insight, and recognizing new connections likely contributes to other tasks, such as understanding metaphors and deriving meaning from distant semantic relations.

We turn now to another result from the EEG time-frequency analysis, which was not predicted but nevertheless suggests a provocative interpretation. The gamma burst thought to reflect the transition of the insight solution from an unconscious to a conscious state was preceded by insight-specific activity in the alpha band (8–13 Hz). Specifically, there was a burst of alpha power (estimated at 9.8 Hz) associated with insight solutions detected over right posterior parietal cortex from approximately 1.4 s until approximately 0.4 s before the solution response, at which point insight alpha power decreased to the level of noninsight alpha power, or below. An ANOVA yielded a significant effect (F = 4.13, p = 0.027, with the Huynh-Feldt correction). Follow-up t-tests in each time window yielded significant effects of insight in the first time window at both electrode sites and in the second time window only at the RH site, with a reversal of the direction of the effect. The third time window yielded no significant effects.

Alpha rhythms are understood to reflect idling or inhibition of cortical areas. Increased alpha power prior to insight may thus reflect transient gating of sensory input, reducing interference while weakly activated, solution-related processing develops.

This interpretation of the early insight-specific alpha effect is consistent with previous behavioral research suggesting that, prior to an insight, the solution to a verbal problem can be weakly activated, especially in the RH.

In sum, when people solve problems with insight, leading to an “Aha!” experience, their solutions are accompanied by a striking increase in neural activity in RH aSTG. Thus, within the network of cortical areas required for problem solving, different components are engaged or emphasized when solving with versus without insight. We propose that the RH aSTG facilitates integration of information across distant lexical or semantic relations, allowing solvers to see connections that had previously eluded them.

In the two millennia since Archimedes shouted “Eureka!,” it has seemed common knowledge that people sometimes solve problems—whether great scientific questions or trivial puzzles—by a seemingly distinct mechanism called insight. This mechanism involves suddenly seeing a problem in a new light, often without awareness of how that new light was switched on.
We have demonstrated that insight solutions are indeed associated with a discrete, distinct pattern of neural activity, supporting unique cognitive processes.

Ten men and eight women were paid to participate. Following practice, subjects attempted 124 compound remote associate problems during FMRI scanning. These problems can be solved with or without insight.

Prior to the experiment subjects were told the following: “A feeling of insight is a kind of ‘Aha!' characterized by suddenness and obviousness. You may not be sure how you came up with the answer, but are relatively confident that it is correct without having to mentally check it. It is as though the answer came into mind all at once—when you first thought of the word, you simply knew it was the answer. This feeling does not have to be overwhelming, but should resemble what was just described.” The experimenter interacted with subjects until this description was clear. This subjective rating could be used differently across subjects, blurring condition boundaries; yet the distinct neural correlates of insight observed across the group demonstrate that there was some consistency.

If subjects failed to solve problems within 30 s, the “Solution?” prompt appeared, and subjects pressed the “no” buttons and verbalized “Don't Know.” Then the “Insight?” prompt appeared, and subjects pressed the “no” buttons again. After the insight rating, subjects performed three line-matching trials (3 s each) to distract them from thinking about the problems, allowing the critical BOLD signal to return to baseline.

Imaging was performed at the Hospital of the University of Pennsylvania, on a 1.5 Tesla GE SIGNA scanner with a fast gradient system for echo-planar imaging and a standard head coil. Head motion was restricted with plastic braces and foam padding. Anatomical high-resolution T1-weighted axial and sagittal images were acquired while subjects performed practice trials. Functional images were acquired in the same axial plane as the anatomical images using gradient-echo echo-planar sequences sensitive to BOLD signal.

Images were coregistered through time with a three-dimensional registration algorithm, and echo-planar images were aligned to the anatomical images. Data were analyzed using a general linear model analysis that extracted average responses to each trial type, correcting for linear drift and removing signal changes correlated with head motion. Each TR was divided into two 1-s images to improve time locking of the solving event and the functional image data. To be considered reliable, activations were required to exceed a minimum cluster volume in which each voxel was reliably different across subjects; Monte Carlo simulations with similar datasets reveal low false positive rates with these criteria. RH aSTG was the only cluster to exceed these criteria, and converging evidence and the a priori prediction about RH aSTG strengthen confidence in this result.

In the EEG experiment, EEGs were recorded from 128 tin electrodes embedded in an elastic cap (linked mastoid reference with forehead ground) placed according to the extended International 10–20 System. Prior to data analysis, EEG channels with excessive noise were replaced with interpolated data from neighboring channels. Eyeblink artifacts were removed from the EEG with an adaptive filter separately constructed for each subject using EMSE 5.0. Induced oscillations were analyzed by segmenting each subject's continuous EEG into 4-s segments beginning 3 s before each solution response.
Time-frequency transforms (performed with EMSE 5.0) were obtained by the application of complex-valued Grossmann-Morlet wavelets, which are Gaussian in both time and frequency. The mother wavelet, ϕ0, in the time domain has the form

ϕ0(t) = π^(−1/4) e^(iω0·t) e^(−t²/2),

where ω0 is a nondimensional frequency. In this case, ω0 is chosen to be 5.336, so that ∫ϕ0(t) dt ≅ 0. The constant π^(−1/4) is a normalization factor such that ∫(ϕ0(t))² dt = 1. For the discrete-time case, a family of wavelets may be obtained as

ϕ[(n′ − n)·δt / s],

where δt is the sample period (in seconds), s is the scale (in seconds), and n is an integer that counts the number of samples from the starting time. The Fourier wavelength λ is given by

λ = 4πs / (ω0 + √(2 + ω0²)).

In the frequency domain, the (continuous) Fourier transform of ϕ0 is

ϕ̂0(sω) = π^(−1/4) H(ω) e^(−(sω − ω0)²/2),

where H(ω) is the Heaviside step function.

One reasonable way to measure the “resolution” of the wavelet transform is to consider the dispersion of the wavelets in both time and frequency. Since the wavelets are Gaussian in both domains, the e-folding time and frequency may serve as quantitative measures of dispersion; the e-folding time is √2·s and the e-folding frequency is √2/(2πs). Note that these dispersions are a function of the scale, s. For a selected center frequency 𝒻c = 1/λ, the scale follows from the wavelength relation above. For a 10-Hz center frequency, the e-folding time is 0.12 s and the e-folding frequency is 2.6 Hz. For a 40-Hz (gamma-band) center frequency, the e-folding time is 0.03 s and the e-folding frequency is 10.5 Hz. These e-folding parameters imply that wavelet scaling preserves the joint time-frequency resolution, with higher temporal resolution but broader frequency resolution as the wavelet scale decreases.

Segments corresponding to trials for which individual subjects produced the correct response were isolated and averaged separately according to whether or not the subject reported the experience of insight. Planned statistical tests (repeated-measure ANOVAs) were performed in order to detect insight-related effects on frontal midline theta (5–8 Hz), posterior alpha (8–13 Hz), fronto-central beta (13–20 Hz), and left and right temporal gamma (20–50 Hz). Response-locked event-related potentials (ERPs) were also computed using the same analysis epoch. Standard ERP analyses yielded no evidence of statistically significant effects, likely because ERPs reflect phase-locked activity rather than the induced activity examined in the wavelet analyses; due to the long response times evident in this experiment, phase locking resulting from problem presentation would not be expected.

EEG effects were topographically mapped by employing spline-based Laplacian mapping with an FMRI-derived realistic head model and digitized electrode positions. Localization of EEG/ERP signals is a form of probabilistic modelling rather than direct neuroimaging. In contrast to other techniques, source estimation by Laplacian mapping indicates the presence of superficial foci of neuroelectric activity with minimal assumptions.

Figure S1. The far left lane shows, for each region, a single slice best depicting the cluster activated above threshold; the middle lane shows the time course of signal following insight (red line) and noninsight (blue line) solutions across the entire active cluster; the right panel shows the “insight effect”. (A) depicts bilateral IFG with lowered threshold (t[12] = 2.83, p < 0.015); (B–D) depict clusters of FMRI signal at the same t-threshold used in the main paper, but the clusters are too small to surpass the cluster criterion: (B) LH medial frontal gyrus; (C) LH PC gyrus; (D) LH amygdala. Spatial coordinates and other details are listed in the supplementary material. (914 KB PDF)
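A minimal Python sketch of the wavelet power computation defined by the formulas above (using ω0 = 5.336 and the wavelength relation to set the scale from a chosen center frequency) might look as follows. The sampling rate, the segment, and the normalization details are our assumptions for illustration, not the EMSE 5.0 implementation.

```python
import numpy as np

def morlet_power(x, fs, f_c, w0=5.336):
    """Power time course at center frequency f_c (Hz), by convolving
    the signal with the complex Morlet wavelet phi_0 defined above."""
    lam = 1.0 / f_c                                        # Fourier wavelength (s)
    s = lam * (w0 + np.sqrt(2.0 + w0**2)) / (4.0 * np.pi)  # scale (s)
    t = np.arange(-4 * s, 4 * s, 1.0 / fs)                 # wavelet support
    eta = t / s                                            # nondimensional time
    phi = np.pi**-0.25 * np.exp(1j * w0 * eta) * np.exp(-eta**2 / 2)
    coef = np.convolve(x, np.conj(phi)[::-1], mode="same") / fs
    return np.abs(coef) ** 2

# Hypothetical single-channel EEG segment: 4 s at 250 Hz (cf. the 4-s epochs)
fs = 250
eeg = np.random.default_rng(3).normal(size=4 * fs)
gamma_40 = morlet_power(eeg, fs, f_c=40.0)  # 40-Hz (gamma-band) power
alpha_10 = morlet_power(eeg, fs, f_c=10.0)  # ~10-Hz (alpha-band) power
```

Note how the scale computed for 40 Hz is shorter than for 10 Hz, reproducing the trade-off described in the text: finer temporal but coarser frequency resolution in the gamma band.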
I find the arguments raised by the PLoS Medicine editors very useful.

It is not only colleagues in research and allied professions who need access, but the global community, including members of the public wherever they live, those who participate in trials, and those who will be on the receiving end of their outcomes.

The annual reports of research ethics committees (RECs) are supposedly in the public domain after approval by Strategic Health Authorities in the UK. But very few members of the public know of their existence or how to access them. Approaches to individual committees even now can meet with varied reactions, from suspicious, defensive, or hostile—reluctantly sending one report, quizzing as to which organisation the enquirer belongs to and why they should want one—to extremely welcoming of interest and discussion.

The annual reports should be easily accessible online by now, surely, but they are not. The activities of RECs, and information on what research is being carried out in the name of society as a whole, largely remain hidden from public view.

There is no information about public access on the COREC (Central Office for Research Ethics Committees; www.corec.org.uk) or OREC Web sites. COREC has not been open about dealing with issues of concern raised with them in the past. They do state that public interest is welcome now, so it would show a real commitment to making research activity more open if they would support totally open access to a register and promote that through their Web site.
For a recipe to become a meal, it's often necessary to embellish or modify the basic instructions—and to keep a note of the changes that work, so that it can be just as delicious next time around. The same is true for a gene, whose basic recipe—its nucleotide sequence—can be heritably annotated to “epigenetically” influence its level of expression without altering its sequence. Among the many epigenetic influences at work in the genome, methylation of cytosine is one of the most versatile and powerful: addition of a methyl group marks the DNA without changing its coding content. In its extreme form, methylation is involved in silencing one of the two X chromosomes in female mammals. Aberrant methylation underlies susceptibilities to several forms of cancer, and is likely to be involved in numerous other human diseases.

A goal of the Human Epigenome Project (HEP) is to map the methylation patterns of human genes, and to determine how they vary: among individuals, among tissues within an individual, and even over time within a single tissue. In this issue, Stephan Beck and colleagues describe the execution and results of a HEP pilot project, in which they analyzed methylation within the major histocompatibility complex (MHC), the set of genes that establish an individual's self-identity within the context of immune surveillance.

The key to any such large-scale project is high throughput—a rapid, efficient set of technologies that produce the needed data with minimal human intervention. The strategy used by Rakyan et al. included bisulfite sequencing of DNA, in which unmethylated cytosines are chemically converted to uracils, while methylated cytosines are not. Software they developed detects the methylated sites and provides an overall measure of the methylation level within any given sequence. They confirmed the accuracy of their method with mass spectrometry, an alternative method also suitable for high-throughput screening.

They initially analyzed 253 sequences within 90 genes in the MHC, about two-thirds of the total, from multiple tissues in multiple individuals. They found that most genes were either completely methylated or completely unmethylated, while relatively few had an intermediate value. The significance of this distribution pattern is not yet clear, although it does confirm similar results in smaller samples from other research groups. The researchers also confirmed that so-called CpG islands, regions rich in CG dinucleotides, are relatively hypomethylated, especially when they occur at the upstream end of a gene.

Rakyan et al. also found differences in methylation levels among tissues, with some suggestion that the variations influence tissue-specific alternative splicing, at least in some genes. Intriguing inter-individual differences were also found, with median methylation levels differing significantly between individuals for at least one tissue at almost half the sites analyzed. For instance, such differences were found in liver for the regulatory region of the tumor necrosis factor gene.

A major goal of the HEP is to identify methylation variable positions, sites whose methylation state is linked with some important biological state, be it tissue type, developmental stage, or disease state. The pilot project described here begins this undertaking, which will be greatly expanded as the HEP progresses. The first phase of the full-scale HEP, an analysis of 5,000 DNA sequences, is currently underway.
We compared two methods of rooting a phylogenetic tree: the stationary and the nonstationary substitution processes. These methods do not require an outgroup.

Given a multiple alignment and an unrooted tree, the maximum likelihood estimates of branch lengths and substitution parameters for each associated rooted tree are found; rooted trees are compared using their likelihood values. Site variation in substitution rates is handled by assigning sites into several classes before the analysis.

In three test datasets where the trees are small and the roots are assumed known, the nonstationary process gets the correct estimate significantly more often, and fits data much better, than the stationary process. Both processes give biologically plausible root placements in a set of nine primate mitochondrial DNA sequences.

The nonstationary process is simple to use and is much better than the stationary process at inferring the root. It could be useful for situations where an outgroup is unavailable.

The beginnings of the alignments for the genes COX1, CYTB, ND1 and ND6 were slightly adjusted. The root positions are assumed to be on the (1) gorilla, (2) orangutan, and (3) frog branch, respectively. The branches on a tree are referred to by the organism names, except for the case of four taxa, where there is an internal branch. We fitted the nonstationary (NONSTA), stationary (STA) and reversible (REV) substitution models to all available mitochondrial protein-coding genes, as well as the nuclear genes albumin and c-myc.

In group 1, the NONSTA and STA processes correctly placed the root in 8 and 6 genes respectively, out of 13 genes.

The nuclear genes albumin and c-myc and three mitochondrial genes, COX1, COX2 and ATP6 from group 3 (with some mouse genes replaced with rat genes) were studied by Huelsenbeck et al. For these genes, the results were similar except for ATP6, with NONSTA again noticeably more discriminative. Brown and Yang studied the set of nine primate mitochondrial DNA sequences considered here.

Our results confirmed earlier findings that the stationary process (STA) is not very good at discriminating among rooted trees corresponding to the same unrooted tree. In contrast, the nonstationary (NONSTA) process seems much more effective, with individual genes and with combined genes. It is quite clear that the difference in log likelihoods between fitting STA and the reversible process (REV) is often small, and statistically insignificant based on the likelihood ratio test, while those between NONSTA and STA, and between NONSTA and REV, are often large, and statistically very significant. Though the chi-square distribution may be inappropriate, differences of this size seem unlikely to be artifacts of the test.

Although Huelsenbeck et al.'s analysis using STA failed to place the root correctly in any of the genes albumin, c-myc, COX1, COX2 and ATP6, there are some differences between the analyses. The raw data were different: the rat albumin and c-myc genes were used by Huelsenbeck et al.; since mouse and rat are very similar, this is not likely to matter much. Secondly, the alignments were probably different, though since the sequences are quite similar, this should not be too important. It is plausible that most of the discrepancies between the results are due to the difference in the estimation procedure (maximum likelihood vs. Bayesian) and to the fact that in Huelsenbeck et al., site variation was modeled by the gamma distribution, whereas here sites were assigned to rate classes in advance.
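The likelihood ratio comparisons described above reduce to a simple computation once the maximized log likelihoods are in hand. The sketch below shows that arithmetic, subject to the caveat stated in the text about the chi-square approximation; the log-likelihood values and degrees of freedom shown are hypothetical.

```python
from scipy.stats import chi2

def likelihood_ratio_test(lnl_null, lnl_alt, df):
    """LRT for nested models: 2*(lnL_alt - lnL_null) ~ chi2(df),
    where df is the difference in the number of free parameters.
    (Approximate only; see the chi-square caveat in the text.)"""
    stat = 2.0 * (lnl_alt - lnl_null)
    return stat, chi2.sf(stat, df)

# Hypothetical maximized log likelihoods for one gene under the
# nested models REV (9 parameters) < STA (12) < NONSTA (15)
stat, p = likelihood_ratio_test(lnl_null=-5432.1, lnl_alt=-5410.8, df=3)
print(f"2*dlnL = {stat:.1f}, p = {p:.2g}")
```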
We also found that the pairwise identity at the third codon positions for all genes in group 3 ranges from 34% to 61%. Base composition being generally nonuniform, the expected pairwise identity at saturation is lower than 25%. This seems to indicate that the third codon position is not saturated, and hence the phylogenetic information from this position is not just the base composition at each taxon. In addition, the base composition at the third codon position for some genes is quite different from that at the other positions. Our model does not fit these genes as well as a model where separate processes are associated with the codon positions; such a model will be investigated in future.

Estimates of the relative rates are quite independent of the model used, and their relative magnitudes are largely within expectations. In particular, for group 3, the relative rates for codon positions 1, 2, and 3 fall between 0.2 and 1.1, 0.1 and 0.6, and 1.5 and 2.7, respectively. For all genes, the third codon position evolved the fastest, followed by the first and second positions. To gauge the contribution from the third codon position, we left out the corresponding bases in group 3 and reran the analysis with NONSTA. This gave the correct root placement in only three genes: albumin, c-myc and ND2, showing the usefulness of the third codon position in this dataset, despite its markedly higher substitution rates.

The NONSTA process is only slightly more complicated to apply than the STA and REV processes. The fact that it works quite well in the verification studies and predicts biologically plausible roots for the nine-primate data demonstrates its utility and perhaps argues for its use in routine phylogenetic analysis. In any case, if no suitable outgroup is available, it could be worthwhile to try it. Though the NONSTA process is the most general time-homogeneous Markov process, it is still simplistic and imposes a severe constraint on the evolution of base composition: if two leaf nodes are at the same distance from the root, then the process stipulates that the corresponding sequences must have the same expected composition. This is patently unrealistic: once lineages split, they should evolve quite independently, and this constraint may explain the failure of the process at estimating the root placement for some genes. However, the process is still valuable even if it does not always work, in that it can serve as a base from which exploration of richer models can be launched. For instance, one could identify lineages where the evolution significantly deviates from expectations, and then allow these lineages to have different rate matrices, which brings us closer to the very rich models described in the literature.

The nonstationary substitution process is simple to use, has much greater power at estimating the root than the stationary process, and also fits data much better than the stationary and reversible processes. It seems feasible to use this process in analyses where a suitable outgroup is not easily available. It is also a good starting point for conducting more sophisticated phylogenetic analysis with richer models.

Substitutions in DNA sequences are assumed to occur independently at each site according to a Markov process, i.e., given the present base, future substitutions are independent of past substitutions. As usual, the substitution rate from base a to base b is the (a, b)-entry of a 4 × 4 rate matrix Q; the diagonal entries are such that each row sums to 0. For any t > 0, the transition probability matrix P(t) is given by P(t) = exp(Qt). Let π be a probability distribution on the DNA bases.
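As a numeric aside, the relation P(t) = exp(Qt) can be checked directly with a matrix exponential. The rate matrix below is a toy Jukes-Cantor-style example chosen for illustration, not one of the matrices estimated in this study.

import numpy as np
from scipy.linalg import expm

Q = np.full((4, 4), 1.0 / 3.0)     # equal rates between all pairs of bases
np.fill_diagonal(Q, -1.0)          # diagonal set so that each row sums to 0

P = expm(Q * 0.5)                  # transition probabilities after branch length t = 0.5
assert np.allclose(P.sum(axis=1), 1.0)  # each row of P(t) is a probability distribution
print(P.round(3))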
The pair (π, Q) defines a substitution process on a rooted tree, as follows: pick a base at the root according to π, then run the substitution process according to Q down the tree, splitting into independent copies whenever a branching is encountered. Throughout, the process is assumed to be time-homogeneous, i.e., substitution rates stay constant in time. The joint probability of the observed bases at the leaf nodes can be computed using almost exactly the same pruning algorithm as in the stationary case.

There are two important special cases of the time-homogeneous process (π, Q). Associated with the rate matrix Q is a unique distribution πQ, called the equilibrium distribution of Q, such that the vector-matrix product πQ × Q is the zero vector. If π = πQ, the process is stationary, i.e., the sequence composition remains unchanged through time and is described by πQ. Q is said to be reversible if it satisfies the detailed balance condition πQ(a)Q(a, b) = πQ(b)Q(b, a) for all bases a and b. The stationary process with reversible Q is then reversible, i.e., statistically the process looks the same in forward and backward time. In particular, as shown in earlier work, the joint probability of the data is then unchanged when the root is moved along the tree, so the root cannot be inferred under a reversible stationary model. The numbers of free parameters in the NONSTA, STA and REV models are 15 (π and the 12 off-diagonal entries of Q), 12 and 9, respectively. Since the models are nested, the likelihood ratio test can be used to assess the relative goodness-of-fit of the MLEs. It is standard practice to allow only calibrated rate matrices, i.e., Q satisfying -Σa πQ(a)Q(a, a) = 1, so that a branch length is the average number of substitution events per site. We adopt this practice, and remark that for the nonstationary process (π, Q) with calibrated Q, since in general π ≠ πQ, it is not true that the expected number of substitutions in 1 time unit is 1, but the difference gets arbitrarily small as time goes to infinity.

The sites in a DNA sequence can have very different substitution rates, the most well-known example being coding sequences, where the third codon positions evolve much faster than the others because of the degeneracy of the genetic code. In cases where the assignment of sites into several classes is known in advance, such as for a coding sequence, the easiest way to deal with this is to associate with class i an unknown positive relative rate ri, with the constraint that Σi ni ri = Σi ni, where ni is the number of sites in class i. The relative rate ri either expands or shrinks the tree depending on whether it is more or less than 1. The constraint gives a new interpretation of a branch length: it is now the average over all sites of their expected number of substitutions. This approach is similar in spirit to previously described partitioned analyses: effectively, each class rescales the same tree by its own factor.

Given a rooted tree relating aligned coding sequences, we seek the ML estimates of the branch lengths, the substitution parameters, and the relative rates; for other sequences, the relative rates are not estimated. Gradient-based methods are perhaps the most efficient at finding the maximum; the EM algorithm is another option. The estimation procedure was implemented in C, and the source code can be requested from the first author.

The idea was conceived by the first author and was inspired and refined by the second author. The first author composed the code and performed the data analysis.

Additional files: text files containing the amino acid sequence alignments for groups 1, 2 and 3.
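As a companion to the calibration and equilibrium-distribution conventions described in the Methods above, the following hedged sketch computes πQ numerically and rescales a rate matrix so that the expected substitution rate at equilibrium is 1; the random matrix is a placeholder, not an estimate from the paper.

import numpy as np

def equilibrium(Q: np.ndarray) -> np.ndarray:
    """Solve pi Q = 0 with the entries of pi summing to 1 (left null vector of Q)."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])          # append the normalisation constraint
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def calibrate(Q: np.ndarray) -> np.ndarray:
    """Rescale Q so that -sum_a pi(a) Q(a, a) = 1: one expected substitution per site per unit branch length."""
    rate = -np.dot(equilibrium(Q), np.diag(Q))
    return Q / rate

rng = np.random.default_rng(0)
Q = rng.random((4, 4))
np.fill_diagonal(Q, 0.0)
np.fill_diagonal(Q, -Q.sum(axis=1))           # rows sum to zero
Qc = calibrate(Q)
pi = equilibrium(Qc)
print(pi.round(3), -np.dot(pi, np.diag(Qc)))  # equilibrium frequencies and a rate of 1.0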
Malaria remains one of the greatest threats to global health, infecting more people than ever before. Confined mainly to the tropical areas of Africa, Asia, and Central America, malaria hits Africa the hardest; the poverty-stricken lands of sub-Saharan Africa account for 90% of malaria infections worldwide. Despite ongoing efforts to battle the disease—by controlling mosquito populations, reducing human contact, and developing drug prevention and treatment—the crisis continues to worsen.The primary variables affecting risk of infection are the rate at which humans are bitten and the proportion of mosquitoes that are infectious. These two factors are often regarded as positively correlated, meaning that if the percentage of infectious mosquitoes increases, so will the human biting rate. But in a new study, David Smith, Jonathan Dushoff, and F. Ellis McKenzie challenge this assumption. Using a mathematical modeling approach to examine the relative contributions of the two factors across different landscapes and seasons, the authors show that the factors are not positively correlated. In fact, their calculations show that the rate humans are bitten and the proportion of infectious mosquitoes peak at different times and places.Their modeling results suggest that the standard metric to estimate risk of infection—the number of times an infectious mosquito bites a person per day, called the entomological inoculation rate (EIR)—is flawed when variable conditions are taken into account. Using the average EIR to estimate average risk of infection in variable environments generates biased estimates because there is not a direct correlation between EIR and the proportion of humans who are infected.The distribution of humans and suitable habitat for mosquito larvae varies across the landscape. And the density of mosquito populations varies seasonally, rising and falling with changes in rainfall, temperature, and humidity. Temporal and spatial variations in mosquito populations affect the rate humans get bitten, the number of infectious mosquitoes, and the risk of infection. To understand how these space- and time-induced variations in mosquito populations shape the epidemiology of human infection, Smith and colleagues developed a set of mathematical models that calculate the relative impact of different parameters, in order to determine which factors most influence where and when risk of infection is highest.First, they evaluated what factors affect the primary components of the EIR: the human biting rate and the proportion of infectious mosquitoes. As expected, the model predicts that fluctuations in mosquito density influence the EIR by changing the human biting rate. As more people are bitten, more people become infected; consequently, more mosquitoes feed on infected humans and so become infectious. Only adult mosquitoes transmit infection, so as mosquito populations age, the proportion of infectious mosquitoes increases. During the dry season, few mosquitoes are born, and so while the human biting rate and EIR decline, the proportion of infectious mosquitoes increases.Because mosquito populations are densest near breeding sites—where younger mosquitoes outnumber adults—the human biting rate and the number of bites by infectious mosquitoes per person per day reflect shifts in mosquito density, not in the proportion of infectious mosquitoes. 
The model predicts that the human biting rate is highest shortly after mosquito population density peaks, typically either near breeding sites or where human density is highest. The proportion of infectious mosquitoes, on the other hand, reflects the age of the mosquito population: it peaks where older mosquitoes are found—farther from breeding sites—and when populations are declining.

By mapping larval habitats against the local risk of mosquito-borne infections, Smith and colleagues conclude, epidemiological models can be developed to predict risk for local populations. Their results make the case that mathematical models can help public health officials calculate risk of infectious diseases in heterogeneous environments—that is, real world conditions—when vector ecology and the parameters of transmission are well characterized. Any plan to prevent and control the spread of mosquito-borne infections would clearly benefit from paying attention to mosquito demography and behavior.
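The synopsis above states that plugging an average EIR into a risk calculation gives biased answers in variable environments. The toy Python calculation below makes that point with a saturating risk curve; the functional form and the numbers are illustrative assumptions, not the authors' model.

import numpy as np

def infection_risk(eir, b=0.05):
    """Daily infection probability for a given EIR (infectious bites/person/day).

    The saturating form 1 - exp(-b*EIR) is a common modelling assumption,
    used here purely for illustration.
    """
    return 1.0 - np.exp(-b * np.asarray(eir))

seasonal_eir = np.array([0.1, 0.1, 0.1, 5.0])  # three low seasons, one transmission peak
print(infection_risk(seasonal_eir.mean()))     # risk evaluated at the average EIR
print(infection_risk(seasonal_eir).mean())     # average of the season-specific risks

Because the curve saturates, the two quantities differ, which is the essence of the bias the authors describe.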
Modeling latent variables such as physical disability is challenging, since their measurement is performed through proxies, and this poses significant methodological challenges. The objective of this article is to present three different methods to predict latent variables, based on classical summed scores, individual item responses, and latent variable models. This is a review of the literature and a data analysis using "layers of information". Data were collected from the North Carolina Back Pain Project, using a modified version of the Roland Questionnaire. The three models are compared in relation to their goals and underlying concepts, previous clinical applications, data requirements, statistical theory, and practical applications. Initial linear regression models demonstrated a difference in disability between genders of 1.32 points on a scale from 0–23. Subsequent item analysis found contradictory results across items, with no clear pattern. Finally, IRT models demonstrated that three items presented differential item functioning. After these items were removed, the difference between genders was reduced to 0.78 points. These results were shown to be robust with re-sampling methods. Purported differences in the levels of a latent variable should therefore be tested using different models, to verify whether the differences are real or simply distorted by model assumptions.

Clinical researchers frequently use statistical models in an attempt to model outcomes that are not directly measured, also known as latent variables. Examples of such latent variables include mental health, quality of life, and physical disability. Although groups of items (questions) known as outcome scales can be assumed to measure latent variables, it is methodologically challenging to aggregate item responses into scores that accurately and reliably represent the latent variable.

The aim of this study is to point out that the choice of models with biased assumptions can lead to different conclusions regarding the associations between latent variables and predictors. Three alternative methods are presented: prediction of latent variables measured as summed scores, using linear regression models; prediction of individual item responses, using logistic regression models with propensity scores to control for differences in item responses; and prediction of latent variables using Item Response Theory models with covariates. Since all three methods are statistically sophisticated, they will be described using the technique of "layers of information", and used to evaluate the purported association between gender and disability. Specifically, we will test whether this association can be explained by different reporting patterns.

The method of "layers of information" was designed to explain complex statistical methods to audiences with a variety of quantitative backgrounds. Each layer is associated with a progressive level of complexity, ensuring that readers with different needs can understand the technique to the level that serves them: at minimum, to understand the statistical method of a clinical study (first layer), and ultimately to apply the statistical method to a new research study (last layer). In the current study, we have used five layers of information: (1) General description, (2) Examples of previous clinical applications, (3) Data requirements, (4) Statistical theory, and (5) Analysis and reporting.

A latent construct is a concept that is not directly measured, but that can be estimated through proxy measures.
Physical disability is an example, since its level is frequently inferred from responses given to a series of items in an outcomes scale measuring patients' ability to perform activities of daily living. Because latent variables cannot be directly measured and predicted, several statistical techniques have been devised to approach this problem (Figure).

The most common approach is simply to add patients' responses to each item, creating a summed score. Summed scores are then used to determine significant predictors in a regression model (Figure). Two assumptions underlie this strategy. First, we assume that the contribution of each item to the latent variable is known. For example, in a disability scale where patients are questioned about their ability to "raise a glass of water" and to "raise a 40-pound bag", researchers assume that they know the exact amount of disability associated with each of the activities stated by these items. In a scale that does not discriminate between the levels of disability associated with each item, the assumption would be that answers to each of these items represent the same amount of disability, when, in fact, they may not. The second assumption when using summed scores is that each item measures the latent construct without any interference from extraneous factors. For example, it is assumed that two individuals with the same neck disability level but different educational levels would have similar answer patterns for an item such as "I feel pain in my neck after reading for more than two hours". This assumption might not be true, since individuals with different educational levels may have different levels of exposure to a two-hour reading session and consequently a different perception of the disability caused by such an activity. Therefore, in spite of having the same disability level, they would probably provide different answers to the same item. This phenomenon is known as Differential Item Functioning, previously known as item bias.

A second approach is to use the answers to each item and to determine how each predictor is associated with individual item responses (Figure).

The last and most recent approach is to use statistical models that concomitantly determine the latent construct level and its association with the predictor of interest (Figure). The main advantage of this approach is that associations are estimated with the latent construct itself rather than with its individual proxies.

In a study designed to predict factors associated with post-treatment disability after lower-extremity soft tissue sarcoma, Davis calculated summed scores and related them to predictors in regression models. In a study evaluating the prediction of visual disability based on individual objective measures of visual impairment, Bandeen-Roche regressed responses to individual items on the objective measures. Although these models bring new insights into the association between individual physical activities and their respective predictors, they cannot clarify whether these were true predictors or whether the items simply presented different reporting patterns. To our knowledge, although multiple previous clinical research projects have used IRT for the determination of scale scores, no previous study has used IRT models with covariates to verify whether a purported association with a latent variable is distorted by differential item functioning.

First, a latent construct has to be measured through a set of proxy variables. These indicators may have responses in various formats, including dichotomous (yes/no), ordinal, or nominal.
IRT models assume that the latent construct is continuous and, in most cases, unidimensional, meaning that a single latent construct is assumed. Predictors can be continuous or categorical variables. Previous studies have estimated that, for logistic regression models, one should have at least 10 events per predicting variable.

Differences in summed scores according to a set of predictors or covariates can be described using linear regression. In these models, the summed score is represented by y and is predicted using a linear combination of predictor variables xj, where j indexes the predicting variables 1, 2, ..., p. It is assumed that no values are missing for any observation. The fitted values, or predicted summed scores, are then the sum of the coefficients βj multiplying each of the xj, plus an intercept β0 (although the latter may be absent in some models). This model can be represented by y = β0 + β1x1 + ... + βpxp. Ordinary least-squares models estimate the coefficients so as to minimize the squared sum of residuals: if the response and predictors corresponding to the ith of n observations are yi, xi1, ..., xip, then the fitting criterion chooses the βj to minimize Σi (yi - β0 - β1xi1 - ... - βpxip)². The standard statistical theory of linear models makes the first formula more explicit by writing the model for the ith observation as yi = β0 + β1xi1 + ... + βpxip + εi. This model makes the following assumptions: the εi are independently and identically distributed; the εi have mean zero and finite variance σ²; and the εi have a normal distribution.

Individual responses to dichotomous items can be predicted by generalized linear models using a binomial distribution and, most commonly, a logit link function that bounds the probability of an answer between 0 (answer = no) and 1 (answer = yes). The logit link can be expressed by logit(π) = log[π/(1 - π)] = β0 + β1x1 + ... + βpxp, where π is the probability of a positive answer and x is a vector of predictors; in latent-variable terms, the quantity being modelled is p(y | θ), where θ is the latent construct. Notice that, in contrast to linear models, the logistic model does not have an error term, since it models the probability of the event directly, and this probability determines the variability of the binary outcome. Logistic models are estimated by maximum likelihood, which chooses the regression coefficients that maximize the likelihood of obtaining the observed data. When modelling the response to one item, it is necessary to adjust for the responses of the same patient to the other items; this adjustment can be accomplished by propensity scores, which reduce those responses to a single summary score. It is important to notice that although the covariates are used as predictors for the item response, it is still impossible to infer whether an observed association was distorted by an association between item responses and extraneous variables, rather than reflecting the association between item responses and the latent trait.

Although multiple models have been described for the regression of latent variables on predictors, we concentrate here on logistic IRT models. The probability of subject i providing a positive answer to an item j is represented by P(yij = 1) = 1/{1 + exp[-(ui - βj)]}, where βj represents the difficulty of item j and ui represents the trait level associated with subject i. This equation holds in the simplest IRT model, known as the Rasch or one-parameter logistic (1PL) model.
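Before moving on to richer IRT variants, here is a hedged sketch of the first two approaches just described, fitted to simulated data rather than the actual Roland Questionnaire cohort; the variable names and effect sizes are invented for the example.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1633                                      # same size as the cohort described below
gender = rng.integers(0, 2, n).astype(float)  # 0 = male, 1 = female (illustrative coding)
latent = rng.normal(10 + 0.8 * gender, 4)     # simulated disability on a 0-23-like scale
summed = np.clip(latent.round(), 0, 23)

X = sm.add_constant(gender)
ols = sm.OLS(summed, X).fit()
print(ols.params)                             # slope estimates the gender difference in points

# Item-level approach: a single dichotomous item modelled with a logit link.
item = rng.binomial(1, 1.0 / (1.0 + np.exp(-(latent - 10) / 4)))
logit = sm.Logit(item, X).fit(disp=False)
print(logit.params)                           # log-odds of a positive answer, women vs. men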
Other models – two-parameter logistic, ordinal logistic, among others – are used according to the types of response alternatives presented by each item. Adding one additional parameter λj, representing the extent to which item j can discriminate between subjects of different trait levels, we obtain P(yij = 1) = 1/{1 + exp[-λj(ui - βj)]}. Finally, if we add a predictor to this equation, we have P(yij = 1) = 1/{1 + exp[-λj(ui + γxi - βj)]}, where γ is the regression coefficient for predictor x. This model has several advantages over the two models previously described in this layer, including freedom from the assumptions of summed scores as well as the summarization of all items into a single latent variable. The most frequent assumptions in IRT models are that a single construct is measured and that observations are independent, conditional on the latent variable. Different IRT models relax the assumptions of summed scores to different extents; for example, one-parameter logistic models assume that each item measures the latent trait with equivalent strength.

One important practical aspect when making use of IRT models with predictors is to check the quadrature point approximation used in the random-effects estimator. As a rule of thumb, if the coefficients do not change by more than a relative difference of 0.01% when the number of quadrature points is varied, the choice of quadrature points does not significantly affect the outcome and the results may be confidently interpreted. Two aspects of random-effects models have the potential to make the quadrature approximation inaccurate: large group sizes and large correlations within groups.

To illustrate a practical application of the previously described models, we use data from a cohort study of patients with low-back pain to evaluate the gender-disability association. Specifically, we evaluate whether female patients have more severe disability or are simply more likely to give positive answers to some items while having equivalent physical disability levels. Several studies have found that, compared to men, women usually have higher initial disability and pain scores after low-back pain episodes; however, it is unclear whether this reflects true differences in disability or different reporting patterns.

A description of the cohort used for this analysis is presented in detail elsewhere. Briefly, our sample is composed of 1,633 individuals with a diagnosis of low-back pain. Most patients are female (52.3%), married (69.9%), white (83.0%), and covered by medical insurance (68.3%). For linear and logistic regression models, the data were placed in wide format, with individual variables representing patient responses to each item; for IRT models, the data were presented in long compressed format (Figure).

When comparing the crude association between summed scores and gender, female patients had scores that were on average 1.46 points higher than their male counterparts on the 0–23 scale. This association was further tested in a linear regression model (Figure) controlling for other covariates. Since the distribution of summed scores of the modified Roland Questionnaire was not normal, we used graphical regression diagnostics to determine that the relationship between predicted and observed values did not display any violations of the regression assumptions.
This was confirmed by a Ramsey regression specification error test (RESET) for omitted variables (p = 0.7371), although the Breusch-Pagan/Cook-Weisberg test demonstrated a trend towards heteroscedasticity (p = 0.0777).

In order to further verify the robustness of this association, an ordinal logistic regression model was used with cut-points at 0–7 (low summed score), 8–15 (medium summed score), and 16–23 (high summed score). This model was considered to comply adequately with the proportionality assumption (p = 0.776). Results for the ordinal regression model demonstrated that the predicted probabilities of a male having low, intermediate, and high scores were progressively decreasing: 0.38, 0.33, and 0.28, respectively. This pattern was in contrast with women, where the probabilities were ascending: 0.32, 0.33, and 0.35, respectively.

In summary, all results from models using summed scores point to a significant association between female gender and high disability scores. It is unclear, however, whether this association is explained by higher disability levels or simply by different reporting patterns between men and women.

As a next step, the association between individual item responses and gender was evaluated using logistic regression models stratified by propensity scores adjusting for responses to the other items (Figure). The analysis across propensity strata demonstrated contradictory results: male patients were significantly associated with positive responses to items 4 and 8 ("I only stand for short periods of time because of my back problem or leg pain (sciatica)"), while female patients were significantly associated with positive responses to items 7, 15, 17 ("I stay in bed most of the time because of my back or leg pain (sciatica)"), and 19. No single item was consistently associated with gender across all propensity score strata. A new model was then built adjusting for scores pooled across strata. The results demonstrated that most items were not associated with either gender, with items 4 and 8 positively associated with male gender and items 7 and 15 associated with female gender (Figure).

Since logistic regression models do not control for the latent variable, one cannot test whether the association between gender and individual item responses is related to an association with disability, or is simply caused by women being more likely to provide a positive response to a certain item in spite of having the same degree of disability.

Finally, IRT models (Figure) were used to examine the association between gender and disability measured as a latent variable. To test the hypothesis that some items might present different reporting patterns, we tested for interaction terms between each item and gender. Our results demonstrated that three items, including items 7 and 15, presented differential reporting patterns (Figure). A new IRT model was then calculated excluding all items with differential reporting patterns. The difference in disability reporting between men and women was reduced, indicating that gender was no longer significantly associated with disability. In fact, when the same items were excluded from the summed score, a multiple linear regression model demonstrated that the difference between female and male patients had been reduced to 0.78 points on the original 0–23 scale (p = 0.06), a reduction of 53.4% compared to the original difference.

Bootstrapping methods were used in the linear regression model to verify whether the association was robust after multiple sampling procedures had been applied to the models.
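The bootstrap check just described can be sketched as follows, again on simulated data with invented parameters; each replicate refits the OLS gender coefficient on a resampled dataset.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 1633
gender = rng.integers(0, 2, n).astype(float)
score = np.clip(rng.normal(10 + 0.8 * gender, 4).round(), 0, 23)

def gender_coef(idx):
    """OLS gender coefficient on the observations selected by idx."""
    X = sm.add_constant(gender[idx])
    return sm.OLS(score[idx], X).fit().params[1]

full = gender_coef(np.arange(n))
boots = np.array([gender_coef(rng.integers(0, n, n)) for _ in range(500)])
print(full, boots.std() / abs(full))   # point estimate and its relative bootstrap variability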
The results demonstrated a variation of only 13.2%, indicating that these results are robust, provided that the sample is representative of the study population. In conclusion, one could infer that although women still have slightly more disability than men, much of the previously reported difference based on the modified Roland Questionnaire was inflated by the presence of items with different reporting patterns in scales measuring disability.

We used three different regression models to investigate the association between gender and disability. Although summed-score models demonstrated a significant association between gender and disability, these models did not allow us to test whether this purported difference was related to the latent construct disability or to items presenting differential item functioning. Analysis of the association within individual items demonstrated inconsistent associations with gender, with some items presenting a strong positive association with male gender while others had a positive association with female gender. Since these associations were made with the item response rather than the latent variable, it was impossible to verify whether they were valid representations of the construct of interest, true associations with disability, or simply the effects of differential item functioning. Last, we examined the association between gender and disability measured as a latent variable. After removing items with differential item functioning, the association with gender was lessened and no longer significant. Therefore, we conclude that although a small difference between genders in the disability associated with low back pain does exist, much of the reported difference is caused by differential item functioning rather than by a true association with the disability construct.

In summary, we advocate that the measurement of associations between latent variables and covariates be systematically performed using a combination of regression models, to ensure that observed associations are not distorted by differential item functioning.

RP: design, analysis, manuscript writing; MT: design, analysis, manuscript revision; UG: design, analysis, manuscript revision; LDH: design, manuscript revision; DOJ: design, manuscript revision; TC: data collection, design, manuscript revision.
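For completeness, the category probabilities quoted for the ordinal model earlier in this section can be reproduced from cumulative logits. The cutpoints and coefficient below were chosen by hand to roughly match the reported numbers; they are not the fitted values from the study.

import numpy as np

def ordered_logit_probs(eta: float, cuts: np.ndarray) -> np.ndarray:
    """Category probabilities from a proportional-odds model: P(Y <= k) = logistic(cut_k - eta)."""
    cdf = 1.0 / (1.0 + np.exp(-(cuts - eta)))
    cdf = np.concatenate([cdf, [1.0]])
    return np.diff(np.concatenate([[0.0], cdf]))

cuts = np.array([-0.49, 0.94])            # two cutpoints for the low/medium/high score bands
print(ordered_logit_probs(0.0, cuts))     # men  (eta = 0):    roughly 0.38, 0.34, 0.28
print(ordered_logit_probs(0.27, cuts))    # women (eta = 0.27): roughly 0.32, 0.34, 0.34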
The Mycoplasma hominis vaa gene encodes a highly variable surface antigen involved in adhesion to host cells. We have analysed the structure of the vaa locus to elucidate the genetic basis for variation of vaa. Mapping of vaa on existing physical maps of five M. hominis isolates by pulsed-field gel electrophoresis revealed that vaa is located in a genomic region containing the majority of other characterized membrane protein genes of M. hominis. Sequencing of an 11 kb region containing the vaa locus of M. hominis isolate 132 showed the presence of conserved housekeeping genes at the borders of the region: uvrA upstream and the hitABL operon downstream of vaa. Analysis of 20 M. hominis isolates revealed that the vaa upstream region was conserved, whereas the downstream region was highly variable. In isolate 132 this region contained an open reading frame (ORF) encoding a putative 160 kDa membrane protein. Homologous ORFs were present in half of the isolates, whereas this ORF, termed vmp (variable membrane protein), was deleted from the locus in the remaining isolates. Compellingly, the conserved upstream region and variable downstream region of vaa correlate with the genetic structure of vaa itself, which consists of a conserved 5' end and a variable 3' end containing a variable number of exchangeable sequence cassettes. Our data demonstrate that the vaa locus contains a divergent genetic islet and indicate pronounced intraspecies recombination. The high variability of the locus indicates that it is a chromosomal 'hot spot', presumably important for sustaining diversity and a high adaptation potential of M. hominis.

The mycoplasmas are wall-less prokaryotes characterized by small genomes (580–2200 kb) and a low G+C content, generally below 30%. They are the smallest self-replicating organisms known, with cell diameters normally in the range of 0.3–0.8 μm. Mycoplasma hominis is an opportunistic human pathogen observed as a commensal of the urogenital tract. Primarily, urogenital infections giving rise to spontaneous abortions, pelvic inflammatory disease, and acute pyelonephritis have been associated with M. hominis, but extragenital infections resulting in infant meningitis, arthritis, and septicemia have also been reported.

M. hominis is a very heterogeneous species, as measured by a pronounced antigenic variation. The genetic basis for the variation of M. hominis surface proteins has been elucidated in some cases. The large membrane protein (lmp) gene family displays size variation by insertion/deletion of intragenic repeats of approximately 500 bp. The lmp genes are arranged in two clusters, lmp1-2 and lmp3-4, in the M. hominis genome of most analysed isolates, with a distance between the clusters of more than 110 kb. At least one member of the lmp family is expressed in each of the M. hominis isolates tested, and a decrease in the number of repeats was found to correlate with the amount of spontaneous agglutination of M. hominis cells.

The vaa (variable adherence-associated) gene encodes a size- and phase-variable M. hominis adhesin. Phase variation occurs at a frequency of 10-3–10-4: in the ON-state, 8 adenines are observed in the poly-A tract, whereas 7 or 9 adenines result in the out-of-frame OFF-state. A single vaa gene is present in each M. hominis isolate. Several vaa gene types, differing in the number and type of exchangeable sequence cassettes, have been observed in more than 100 analysed clinical isolates. To obtain a better understanding of the genomic basis for variation of vaa and of variation mechanisms in M.
hominis in general, the vaa locus was characterized by mapping of its genomic position in five isolates and by sequencing of an 11 kb region containing the vaa gene from isolate 132. Furthermore, the vaa locus of 20 M. hominis isolates was investigated by PCR and sequencing. In contrast to the more conserved vaa upstream region, this analysis revealed that the downstream region exhibits major variation, caused by insertion/deletion and sequence variation of a large ORF encoding a putative membrane protein. Thus the vaa locus seems to constitute a 'hot spot' for variation in the M. hominis genome.

The vaa gene was mapped on existing physical and genetic maps of five M. hominis isolates (132, 4195, 7488, PG21, and 93) by pulsed-field gel electrophoresis and Southern blotting, using (α-32P)dATP-labelled DNA fragments representing different parts of selected vaa genes as probes. All probes hybridized to a single fragment in all digests, corresponding to the same region of the genome in all five isolates. The band differences observed could be explained by the presence or absence of an EcoRI site in the variable vaa gene. Using a probe comprising the cassette region of the vaa category 3 (vaa-3) gene (probe 4), band size variation was observed among the five isolates for both enzymes.

Downstream of vaa lies the hitABL operon, including the protein encoded by hitL; this system was previously characterized in M. hominis by Henrich and coworkers. The hitABL operon was located almost 5 kb downstream of the vaa gene of M. hominis 132. Analysis of the contig harboring the vaa locus (Fig.) showed that an ORF near the opposite border, encoding a hypothetical 35 kDa protein, displayed high similarity to a range of hypothetical proteins of similar size in the database; all of these proteins contained a HAD hydrolase superfamily motif, and the highest similarity was to a hypothetical protein of Mycoplasma pulmonis. No significant homologues were found for ORFs 2 to 5. ORF5 was shown to encode a hypothetical protein containing an N-terminal signal peptide with a signal peptidase II cleavage site typical of prokaryotic prolipoproteins, and may thus encode a lipoprotein with a size of 29 kDa and a pI of 9.6. Interestingly, a transmembrane helix was predicted in the C-terminal part of this putative lipoprotein using the program TMHMM (membrane probability of 1 for aa 243 to 252).

Analysis of the region downstream of vaa revealed the presence of a large open reading frame of 4 kb, ORF6, encoding a hypothetical protein with a molecular weight of 160 kDa that showed similarity to myosin and other myosin-like proteins in the database. In the region between ORF6 and the hitABL operon, a tRNA(His) gene was identified by database searching. Intriguingly, the 5' end of this gene displayed high similarity to the orthologue from Streptococcus pneumoniae (94% identity from bp 10 to 45), even higher than to the corresponding regions in Bacillus subtilis (92% identity) and Mycoplasma pneumoniae (86% identity). The transcriptional direction of ORF6, tRNA(His) and the hitABL operon was opposite to that of vaa, in analogy to the ORFs of the vaa upstream region.

To examine the variability of the vaa upstream region among the 20 isolates, the upstream PCR products were subjected to restriction analysis (Figs.). It was not possible to classify the resulting profiles according to vaa type or other known M. hominis groupings. Thus, despite a highly conserved organization and length of the ORFs in the vaa upstream region, there is an underlying sequence variation, presumably corresponding to the background variation level present in the M. hominis genome. When primers anchored in vaa and hitB, respectively, were used in PCR across the downstream region, a pattern of different product sizes was observed.
The isolates were divided into vmp groups based on the above PCR results, according to the size of the vmp gene observed in the different sized downstream PCR products. The vmp gene having a size of 4 kb was named vmp category 1, or simply vmp-1. Two further gene types, vmp-2 and vmp-3, were distinguished in the other vmp-positive isolates, which showed high homology of the stem regions to that observed for vmp-1. The stem-loop structure was located approximately 500 bp downstream of the stop codon of vmp-2 but, interestingly, the homology between vmp-2 and vmp-3 extends beyond the stop codon in vmp-2. Careful analysis reveals that a poly-A tract in the 3' end of the vmp genes has an extra adenine in vmp-2 compared to vmp-3, which causes premature termination of translation of the vmp-2 gene. If the extra adenine of the poly-A tract were deleted, the ORF would continue for approximately 500 bp, corresponding to vmp-3, and the termination codon would be situated close to the putative rho-independent transcriptional termination stem-loop structure. The remaining 10 isolates, which lack a vmp gene, were divided into two groups based on a genetic fingerprint in the vaa-hitABL intergenic region.

The insertion site of the vmp gene was different for vmp-1 and vmp-2, the latter positioned between tRNA(His) and hitABL, whereas the similar vmp-2 and vmp-3 genes were positioned identically. A conserved sequence box was present between the insertion site in isolate PG21 and the stem region of the stem-loops of vmp-2 and vmp-3. Furthermore, a 165 bp deletion was observed in the vaa-hitABL intergenic region of isolate 7488 compared to isolate PG21; this deletion was located immediately downstream of the region deleted in isolate 93. In contrast, analysis of the vmp-1 insertion site did not show insertion at the stem-loop and poly-T structures when compared to the vaa-hitABL intergenic regions of isolates PG21 and 93. Sequence regions of 60 bp downstream of the stem-loop structure and 90 bp upstream of the poly-T tract, which did not show any homology to the vaa-hitABL intergenic region of isolate PG21, were observed at the borders of the vmp-1 gene. Interestingly, comparison of the insertion site of the vmp-1 region, including the 60 bp and 90 bp bordering sequences, with the vaa-hitABL intergenic region of isolate PG21 revealed that insertion had resulted in a deletion in the intergenic sequence corresponding to the 300 bp deletion. Half of this region was repeated once in Vmp-3 (Fig.).

Sequence analysis revealed that the deduced Vmp proteins have a predominantly alpha-helical structure, and a coiled-coil region extending throughout almost the entire length of the proteins was identified (Fig.). An antiserum raised against the C-terminal region of Vmp-1 (see Methods) reacted with a protein of the expected size in isolates carrying the vmp-1 gene. These data thus demonstrate that Vmp is expressed in M. hominis. Furthermore, the antibody reacted with a 100 kDa protein in all isolates except PG21; this is presumably a protein that has been shown previously to bind Ig molecules unspecifically and that is not present in PG21. Isolates lacking a vmp gene, as well as isolates 4195 and 7488, which carry a Vmp-2 and a Vmp-3 type respectively, did not react with the antibody, as expected from the low homology of the C-terminal region among the Vmp types. The variability of the vmp gene, and the lack of correlation of vmp type or absence of vmp with vaa type, suggests frequent recombination in this locus.

Genetic islets (<10 kb) or genetic islands (>10 kb) are variable sites when comparing genomes of different isolates of a given species. Often, genetic islets/islands carry pathogenesis factors and are specific for virulent clones of the species. Such factors include adhesins, toxins and restriction/modification systems.
Frequently, such genetic elements are inserted into tRNA genes, show a GC content diverging from the surrounding regions, and are flanked by repeated elements. Although the vmp gene was located on either side of the tRNA(His) gene in isolates 132 and 7488, respectively, the GC content of vmp was similar to the remaining part of the vaa locus analyzed and to the M. hominis genome in general (28%), and, despite a thorough analysis, no obvious flanking structures such as direct or inverted repeats were observed which could indicate a site-specific mechanism of insertion. Thus, the vmp gene may be mycoplasma-specific, and insertion/deletion of vmp at the vaa locus seems to be mediated by homologous recombination.

It was possible to amplify vmp fragments by PCR from four out of five isolates categorized as having a vmp-1 gene type, and likewise for four out of five isolates having a vmp-2 or vmp-3 gene type. The pronounced heterogeneity observed between the vmp genes could explain the missing reaction of the remaining two isolates as being caused by sequence variation of the individual gene types, as observed for a number of other M. hominis membrane protein genes, but it is also possible that additional vmp gene types exist.

The size and predicted structure of Vmp are interesting. The coiled-coil motif extending through most of the protein is a highly versatile motif involved in protein-protein interactions. A comparable motif is found in HMW2 of Mycoplasma pneumoniae, a cytoplasmic protein that forms part of the primitive cytoskeleton of this mycoplasma; truncation of its gene resulted in loss of cytadherence. In contrast to the cytoplasmic HMW2, Vmp is predicted to be a membrane protein of M. hominis.

The identification of a novel putative membrane protein displaying sequence variation is intriguing, and the remarkable size and structure displayed by the Vmp protein should prompt investigations of the biological function of this protein in M. hominis.

We have demonstrated that the vaa locus of M. hominis contains a divergent genetic islet encoding a large, putative membrane protein called Variable membrane protein (Vmp). This genetic islet is present in the locus of only half of the 20 isolates tested, and three distinct, homologous Vmp types were observed. The composition of the locus was analysed, and it was found that the vaa gene has a conserved upstream region and a highly variable downstream region, which contains the genetic islet. This locus organization corresponds to the organization of the vaa gene itself, which has a conserved 5' end and a variable 3' end. Thus, the mechanism underlying variation of the vaa gene seems to be intraspecies recombination exchanging variable regions of vaa and regions downstream of vaa, giving rise to a variable and dynamic 'hot spot' in the M. hominis genome.

Twenty M. hominis isolates were analysed. The M. hominis isolates were cultivated in BEa medium (heart infusion broth (Difco), 2.2% (w/v); horse serum, 15% (v/v); fresh yeast extract, 1.9% (w/v); benzylpenicillin, 40 IU ml-1; L-arginine, 0.23% (w/v); phenol red, 0.0023% (w/v)). The pH of the medium was adjusted to 7.2 and the medium was sterilized by filtration. M. hominis isolates were harvested by centrifugation at 15,000 rpm for 45 min for culture volumes greater than 1.5 ml, or at 20,000 rpm for 15 min for culture volumes smaller than 1.5 ml. E. coli OneShot competent cells and the pCRII plasmid vector were used for TA-cloning (Invitrogen).
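The GC-content argument used in the discussion above (a foreign islet would be expected to deviate from the 28% genome background) is straightforward to compute. The sketch below uses randomly generated placeholder sequences rather than the actual vmp or genome sequences.

import numpy as np

def gc_content(seq: str) -> float:
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

rng = np.random.default_rng(3)
# Simulate a 28% GC background, similar to the M. hominis genome.
genome = "".join(rng.choice(list("AT" * 18 + "GC" * 7), size=10_000))
islet = genome[4_000:8_000]   # a candidate islet drawn from the same background

print(round(gc_content(genome), 3), round(gc_content(islet), 3))
# Similar values, as found for vmp, argue against recent acquisition from a high-GC donor.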
PFGE was performed on genomic DNA from the five M. hominis isolates PG21, 4195, 132, 93 and 7488. The M. hominis isolates were grown in BEa medium to log phase and harvested. The cell pellets were washed and resuspended in PBS buffer. Melted NA agarose (Amersham Pharmacia Biotech) was mixed with the cell suspension in a plastic mold on ice. The agarose blocks hereby formed were incubated overnight with 1 mg/ml Proteinase K (Roche) in lysis buffer. Subsequently, each block was washed twice in lysis buffer, followed by one wash in TE buffer, and cut into eight blocks of identical size. The blocks were digested overnight with 40 units of one of five restriction enzymes and 12 μg of BSA. Subsequently, the blocks were inserted into the slots of 1% NA agarose gels and the holes were sealed with melted NA agarose. HindIII-digested λ DNA and a λ DNA ladder (FMC) were used as molecular weight markers. The gels were run using the CHEF-DRII separation system (Bio-Rad).

Genomic DNA from the M. hominis isolates was isolated using a previously described method. Briefly, M. hominis cells were harvested and subsequently lysed on ice in a buffer containing 0.7% (w/v) N-laurylsarcosine, 10 μg RNase ml-1 (Sigma), 20 mM Tris pH 7.5 and 20 mM EDTA. Proteinase K (150 μg ml-1) was added and the cell lysate was incubated at 55°C for 2 h and at 37°C for 1–2 h, followed by phenol, phenol/chloroform and chloroform extractions. DNA for PCR was prepared by Proteinase K treatment of harvested M. hominis at 55°C for 1 h; after the incubation, the solution was heated to 100°C for 5 min to inactivate the enzyme. Plasmids from transformed E. coli were prepared as described previously for sequencing.

PCR products derived from different vaa types, with sizes of 1220 bp (probe 1), 600 bp (probe 2), 800 bp (probe 3), and 660 bp (probe 4), were used for TA-cloning, performed according to the manufacturer's instructions (Invitrogen). Probe 1 contained the vaa-1 gene from M. hominis 7808, including the three cassettes III, IV and V. Probe 2 contained part of the vaa-1 gene of M. hominis PG21, and probe 3 contained most of the vaa-3 gene of M. hominis V2785, including cassettes V and VII. TA-cloned PCR fragments or linear PCR fragments were used as DNA probes and labeled with radioactive (α-32P)dATP by nick-translation, performed as follows: 0.5–1 μg DNA was mixed with 1 × nick-translation buffer (50 mM Tris-HCl (pH 7.2), 10 mM MgSO4, 0.1 mM DTT, 50 μg BSA), 60 μM each of dTTP, dCTP and dGTP, 5 units of DNA polymerase I (Gibco), 0.5 ng DNase I (Roche), 20 μCi (α-32P)dATP (Du Pont) and ddH2O up to 50 μl. The reaction was incubated at 14–16°C for 1 h. Incorporation of radioactive nucleotides was verified by TLC, and the reaction was terminated by addition of TE buffer with 0.5 M EDTA. The radioactive probes were denatured by heating to 100°C for 5 min, and hybridization was performed in 2 × SSC (1 × SSC is 0.15 M NaCl and 0.015 M sodium citrate), 0.5% SDS, 100 μg/ml yeast RNA and 5 × Denhardt's solution at 60°C in a hybridization oven. The membranes were washed in 6 × SSC and 0.5% SDS, placed in sealed plastic bags, and exposed to X-ray films at room temperature or at -20°C.

Genomic DNA samples of the isolates PG21, 4195, 132, 93 and 7488 were cleaved with either HindIII or EcoRI and separated on 0.7% agarose gels. The gels were stained with ethidium bromide and photographed under UV irradiation. Preceding the alkaline denaturation, partial hydrolysis of the DNA in the PFGE gels was performed by soaking in a 0.25 M HCl solution to enhance the transfer of large DNA fragments.
DNA transfer to Hybond-N membranes (Amersham Biosciences) was carried out as described previously.

PCR was performed using the Expand™ High Fidelity PCR System from Roche according to the manufacturer's instructions, except for the amplification of the 1.2 kb vmp-2/3 PCR product, where Taq polymerase was used (PE Biosystems). Custom oligonucleotide primers were purchased from DNA Technology. PCR products were purified using the Wizard kit (Promega) according to the manufacturer's instructions. PCR conditions used for inverse PCR and for amplification of the downstream and upstream regions of the 20 isolates were as follows: 2 min at 92°C; 10 cycles of 10 s at 92°C, 30 s at 55°C, 8 min at 68°C; then 20 cycles of 10 s at 92°C, 30 s at 55°C, 8 min at 68°C with 5 s added to the elongation time per cycle; and a final extension step of 7 min at 68°C. The primers used for amplification of the 5.5 kb upstream product were F1 (CAGTACATGTTAATCCCAGAAGTATAGTTGG) and R1 (GCTGGATAATCGCCGTATGAACCTGC). The R1 primer was also used for amplification of the 4 kb and 0.6 kb PCR products, in combination with the primers F2 (GGATCTTCTTTGTGGTCTTCC) and F3 (GGGATAGTTAGTAAAGTTGGAATAGCC), respectively. For amplification of the downstream region in the 20 isolates, the primers F4 (GCAGGTTCATACGGCGATTATCCAGC) and R4 (GCCACTTGCGGTTCTTCC) were used. For amplification of the 0.6 kb vmp-1 PCR product, the primers F6 (CCACTGATACGTGATTTAAAAAGAAAAG) and R3 (GGTATTGTTTCTTTATCTAAGATGTTTTCAAATTC) were used with the following PCR conditions: 4 min at 94°C; 30 cycles of 15 s at 94°C, 30 s at 50°C, 1 min at 72°C; and a final extension of 5 min at 72°C. For amplification of the 1.2 kb vmp-2/3 PCR product, similar conditions were used with an annealing temperature of 57°C and an elongation time of 2 min, with the primers F5 (GAACAATTAAAAACATTAATTGGCTTAAGTGATG) and R2 (GTTTTATCTACATTGTTTTCGGATAAGG).

The 5.5 kb upstream PCR products from the 20 analysed isolates were subjected to restriction endonuclease analysis employing the enzymes AluI and AseI (New England Biolabs) according to the manufacturer's instructions, and were analysed on 1 × TBE/2% agarose gels.

Sequencing reactions were carried out bidirectionally using the ABI PRISM Dye Terminator Cycle Sequencing Ready Reaction Kit (Perkin Elmer) on purified plasmid DNA (TA-cloned PCR products) or directly on the purified PCR products, according to the instructions supplied by the manufacturer. Sequencing was performed on an ABI PRISM 377 DNA Sequencer from Perkin Elmer.

Oligonucleotide primers were designed in order to amplify by PCR the region of vmp-1 from isolate 132 encoding aa 1281 to 1404 of the Vmp-1 protein. Cloning and expression of the construct were performed using the pET-30 Ek/LIC vector according to the manufacturer's instructions. The His-tagged fusion protein was purified using a nickel-chelated column under denaturing conditions, as previously described. Sera containing polyclonal antibodies against the fusion protein were raised and used for immunoblotting.

Computer analysis of the obtained DNA sequences was performed using the Wisconsin Package Version 9.0 sequence analysis software package (Genetics Computer Group (GCG), Madison, Wisc.). Database searches were performed against the public sequence databases, and dedicated programs were used to predict signal sequences and transmembrane helices, respectively.
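As a rough sanity check on primers such as those listed above, a simple melting-temperature rule can be applied in a few lines of code. The Wallace rule used below (2 degrees per A/T, 4 degrees per G/C) is only a crude approximation that overestimates Tm for longer primers; real primer design would use nearest-neighbour thermodynamics.

def wallace_tm(primer: str) -> int:
    """Crude Tm estimate: 2 degrees per A/T plus 4 degrees per G/C."""
    primer = primer.upper()
    at = primer.count("A") + primer.count("T")
    gc = primer.count("G") + primer.count("C")
    return 2 * at + 4 * gc

for name, seq in [("R1", "GCTGGATAATCGCCGTATGAACCTGC"),
                  ("F4", "GCAGGTTCATACGGCGATTATCCAGC"),
                  ("R4", "GCCACTTGCGGTTCTTCC")]:
    print(name, wallace_tm(seq))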
The DNA sequences obtained in this study were deposited in the EMBL database under the following accession numbers: AJ416752, AJ545046, AJ629113, AJ629114 and AJ629115.

The individual parts of the work presented in the paper were conducted as follows: the ideas and designs of the experiments were developed by all the authors. Pulsed-field gel electrophoresis and Southern blotting were performed by JE and TB. PCR, sequencing, and sequence analysis were performed by TB and AB. The fusion protein, polyclonal antisera and immunoblotting were made by TB. The manuscript was primarily written by TB and discussed with and approved by all authors.

Additional file: multiple sequence alignment of the three Vmp types and Lmp1 and Lmp3 from type strain PG21, using ClustalW.
Helicobacter pylori, occurring throughout the world and causing gastroduodenal diseases, is one of the most common chronic bacterial agents in humans. The purpose of this study was to measure general practitioners' (GPs) knowledge and practices pertaining to H. pylori infection.

A cross-sectional questionnaire survey was conducted in all 19 primary health care centres (PHCC) in Samsun, Turkey, between November 1 and December 31, 2003. The questionnaire was sent to 124 GPs, and 109 (87.9%) of them filled it in. They were requested to answer questions on their knowledge, their sources of medical information, and the diagnostic tests and treatments they use for H. pylori infection.

Medical journals were the most frequently used source of information on H. pylori, cited by 86 (78.9%) of the GPs. Ninety-two (84.4%) of the GPs reported having used one or more tests, and 17 (15.6%) never used any test, for the diagnosis of H. pylori infection. Only 9.8% had used the stool antigen test for diagnosis. Twenty-nine GPs (26.6%) reported that they would prescribe symptomatic treatment without ordering diagnostic tests. 54.1% of the GPs explained that they sent patients with H. pylori infection to a specialist, and most used a triple drug regimen containing a PPI. Treatment duration varied between 7 and 28 days; 80.7% of the GPs treat patients for 14 days.

GPs may not have enough knowledge about the importance of the stool antigen test or the possibility of using this test, and they do not have sufficient knowledge about the difference between symptomatic and asymptomatic individuals. It appears that GPs preferred to treat patients with suspected ulcer empirically, or to send them to a specialist, because of limited diagnostic conditions. Efforts to educate GPs during the post-graduation period about the algorithms for the management of H. pylori infection should be improved in PHCCs.

Helicobacter pylori, occurring throughout the world and causing gastroduodenal diseases, is one of the most common chronic bacterial agents in humans. Although the proportion of ulcers unrelated to H. pylori is increasing, most ulcers are related to H. pylori infection. Strategies of testing for H. pylori infection have been promoted in order to improve early detection and treatment of ulcers in dyspeptic patients.

The successful isolation of H. pylori from patients with chronic gastritis and peptic ulcer disease in 1983 has fundamentally changed concepts of the etiology, pathogenesis and management of upper gastrointestinal (UGI) diseases. This has resulted in an explosion of H. pylori-related information and in the development and publication of international, regional and national guidelines. The majority of patients with UGI complaints are managed by GPs, who need current knowledge of H. pylori infection and of the pathogenesis, diagnosis and treatment of UGI diseases, including when to test for H. pylori infection and treat if positive, and when to refer patients to a specialist.

It is thought that many patients with dyspeptic symptoms present to GPs in Turkey. This study surveyed GPs to assess their knowledge and practices pertaining to H. pylori infection.

A cross-sectional study was conducted in all 19 primary health care centres (PHCC) in Samsun, Turkey, between November 1 and December 31, 2003. The H. pylori knowledge section of the questionnaire contained 6 questions. The items assessed respondents' knowledge of the diagnosis of the infection, case selection for treatment, and treatment options for H. pylori. Participating GPs were asked to indicate the type(s) of diagnostic tests they used, such as ELISA, histology, biopsy urease test (BUT), urea breath test (UBT) or culture of a biopsy specimen.
The questionnaire was sent to all GPs (n = 124); 109 of 124 (87.9%) GPs from different PHCCs completed the survey. The material used was adapted from the questionnaire devised by Sharma et al. A list of 7 different clinical presentations was given, and respondents were asked whether they would offer testing for H. pylori and also treat the infection when the test results were positive for H. pylori. The respondents were also asked to select a regimen for the management of H. pylori infection from a list of four drug combination regimens, which included proton pump inhibitor (PPI)-based triple therapies. GPs received no specific education on H. pylori infection before or after the survey in this study period. Data are given as mean ± standard deviation (SD) and percentages.

The mean age and duration of practice of the GPs were 31.7 ± 5.4 and 7.2 ± 5.0 years, respectively; 59 (54.1%) of the GPs were women.

Medical journals were the most frequently used source of information on H. pylori, cited by 86 (78.9%) of the GPs. Pharmaceutical company-sponsored symposia (70.6%), textbooks (64.2%), conferences (20.2%) and on-line sites (6.4%) were the other major sources of information used by the GPs. These numbers add up to more than 100% because the GPs could check more than one item.

Ninety-two (84.4%) of the GPs reported having used one or more tests, and 17 (15.6%) never used any test, for the diagnosis of H. pylori infection. Of those using tests, 44.1% had used UBT, 34.5% had used BUT, 23.8% had used ELISA and 9.8% had used the stool antigen test. The practitioners included in the survey did not have equal access to all diagnostic tests mentioned in the study.

The proportions of GPs who would test patients for H. pylori infection in the 9 different clinical situations, and the proportions of those who would offer treatment based on a positive test result, are summarized in the Table. 54.1% of the GPs would send patients with H. pylori infection to a specialist. Treatment regimens of choice are listed in the Table.

UGI symptoms are common reasons for patients to visit GPs. In recent years, the development of non-invasive H. pylori detection methods, including ELISA and the UBT, has enabled GPs to diagnose and treat H. pylori infection. The inadequate treatment of peptic ulcer disease results in therapy failures, high recurrence rates, the emergence of resistant bacterial strains, and increased health care costs; the clinical application of current knowledge is therefore crucial.

There are several tests used for the diagnosis of H. pylori infection. In our survey, UBT was the most frequently used test, the same as in Huang et al.'s study; UBT is a reliable, non-invasive indicator of active infection.

Testing for and treatment of H. pylori infection are recommended following resection of early gastric cancer and for low-grade gastric MALT lymphoma, and retesting after treatment may be prudent for patients with bleeding or otherwise complicated peptic ulcer disease. It is therefore worrisome that considerably fewer GPs perceived a need for testing and subsequent treatment in patients with a new diagnosis or a past history of duodenal ulcer (Table), since both of these presentations warrant testing and treatment.

Although it is not recommended to test asymptomatic individuals for H. pylori infection, 64.1% of GPs reported that they would offer testing for H. pylori infection, and 31.5% of them reported that they would treat H. pylori infection based on a positive test result, in asymptomatic individuals. These findings suggest that GPs do not have sufficient knowledge about the difference between symptomatic and asymptomatic individuals.
On the other hand, in any person testing positive for the infection, treatment may be offered after a full discussion about its potential risks and benefits.

Anti-H. pylori therapy was almost never recommended for suspected ulcer disease without the prior use of diagnostic tests [18]. In this study, however, 29 GPs (26.6%) reported that they would prescribe symptomatic treatment without ordering diagnostic tests, and 54.1% of the GPs, whether ordering diagnostic tests or not, stated that they send patients with suspected or diagnosed H. pylori infection to a specialist. In the light of these findings, it is thought that GPs preferred to treat patients with suspected ulcer empirically, or to send them to a specialist, because of limited diagnostic conditions, the lack of rapid diagnostic tests at PHCCs, or a belief that such patients should be treated by a specialist.

H. pylori peptic ulcers are treated with drugs that kill the bacteria, reduce stomach acid, and protect the stomach lining. Antibiotics are used to kill the bacteria. Two types of acid-suppressing drugs might be used: H2 blockers and PPIs. H2 blockers and PPIs have been prescribed alone for years as a treatment for ulcers. When used alone, these drugs do not eradicate H. pylori and, therefore, do not cure H. pylori-related ulcers. Bismuth subsalicylate, a component of Pepto-Bismol, is used to protect the stomach lining from acid; it also kills H. pylori [7,12,19].

In our study, 84.4% of the GPs tested for H. pylori infection, but 15.6% never tested. The urea breath test is a commonly used investigative tool for H. pylori infection. Triple therapy consisting of a proton pump inhibitor, clarithromycin and amoxicillin is the most commonly used treatment combination for H. pylori infection. The highest eradication rates are achieved with the following regimens: a PPI, clarithromycin, and either amoxicillin or metronidazole for 2 weeks; ranitidine bismuth citrate, clarithromycin, and either amoxicillin, metronidazole, or tetracycline for 2 weeks; or a PPI, bismuth, metronidazole, and tetracycline for 1 to 2 weeks [11,12,20].

It was found that most information was being obtained in traditional teaching formats such as medical journals, pharmaceutical company-sponsored symposia, textbooks and conferences, consistent with other studies of sources of information on H. pylori infection among GPs [13,17]. Our data suggested that pharmaceutical company-sponsored symposia were used very frequently by GPs.

Patients with dyspeptic complaints are mostly managed in primary care. Most prescriptions for dyspepsia are empirical, without testing, due to limitations of diagnostic facilities around the world. Similar findings have been reported for H. pylori infection [17].

The diagnosis and treatment of peptic ulcer disease related to H. pylori are not adequate. The choice of the optimal therapeutic decision depends on the appropriate definition of the disease. In order to provide accurate diagnosis and treatment of H. pylori infection, it is suggested that efforts to educate GPs during the post-graduation period about the algorithms regarding the management of H. pylori infection should be improved in PHCCs.

The author(s) declare that they have no competing interests.

SC participated in the design and coordination of the study; ATS drafted the manuscript and performed the statistical analysis; YP drafted the questionnaire and participated in study design and coordination; HL conceived the study, participated in its design and drafted the manuscript.
All authors read and approved the final manuscript.

The pre-publication history for this paper can be accessed here:
The quality control of oral anticoagulant therapy (OAT) during initiation and maintenance treatment is generally poor. Reminders at the point of care, computerized or not, have been demonstrated to be effective in changing physicians' prescription behavior.

However, few studies have addressed the benefit of personalized versus non-personalized reminders, although personalized reminders require more development in order to access patient record data and to integrate with the computerized physician order entry system.

The Hospital Information System of the Georges Pompidou European Hospital integrates an electronic medical record with lab test and drug order entry systems. This system makes it possible to evaluate such reminders and to consider their implementation for routine use, as well as the continuous evaluation of their impact on quality indicators of medical practice.

The objective of this study is to evaluate the impact of two types of reminders on overtreatment by oral anticoagulants: a simple reminder consisting of a text-formatted dose-adjustment table, and a personalized recommendation for the oral anticoagulant dose and the date of the next INR control, adapted to patient data. Both types of reminders appear to the physician at the moment of drug ordering.

The study is an alternating time series experiment with three 6-month periods, each one comprising, in 2-month blocks arranged according to a Latin square scheme, a control period without any reminder, a period with the simple non-personalized reminder, and a period with the personalized reminder. All patients hospitalized in departments using the computerized physician order entry system and prescribed fluindione or warfarin will be included in the study between November 2004 and May 2006.

The main outcome will be the proportion of overcoagulation, expressed as the proportion of observation time with INR over 4.5, assuming the INR changes linearly between measurements. The secondary outcome is the incidence of major haemorrhagic events. Data will be collected from the Hospital Information System databases. Data will be analyzed taking into account patient and physician clustering effects.

According to a study carried out by French pharmacovigilance centres, haemorrhage subsequent to oral anticoagulant treatment (OAT) is the most common drug-related side effect resulting in hospitalisation in public hospitals in France. On the basis of these findings, the AFSSAPS has made the prevention of iatrogenic effects related to OAT one of its priorities. Many of these events are consequences of interactions between different drugs, resulting in inappropriate doses.

Implementation of a system of support at the time prescriptions are made out is likely to improve prescription practices and to decrease the frequency of side effects. It should be possible to integrate a support tool into the drug prescription system, by using nomograms to adjust OAT doses.

The efficiency of reminders issued at the time of prescription has been demonstrated by various studies [8], and several reviews of such systems have been published. We have ourselves evaluated reminders of this kind in earlier work, as described below.
Several types of reminders can be issued at the time of prescription:

• simple, general information concerning the recommendations that should be taken into account,

• "check list": includes questions or a precise list of practices that the doctor must tick to show that each has been done,

• reminders including clinical data concerning a specific patient that must be taken into account for a given procedure.

The advantage of personalised reminders over non-personalised reminders has not been demonstrated in the literature. However, the production of personalised reminders necessitates better integration of existing information and is thus more expensive to develop. It is important to determine whether this personalised tool results in a better quality of care than non-personalised tools.

Several randomised clinical trials have tried to evaluate decision support systems for the prescription of OAT, but failed to draw any conclusions about their efficiency, for several reasons: heterogeneity and complexity of the systems evaluated, experimental designs difficult to apply and not necessarily adapted, and too few patients included [17].

Reminders issued at the time of prescription have been shown to be effective by experimental studies, but the difficulty of maintaining the effectiveness of interventions designed to improve clinical practices remains a major problem. We evaluated the effect of an active decision support system for the prescription of low molecular weight heparin as prophylaxis for venous thrombosis in an orthopaedic surgery department. In that study, the system was alternately activated and deactivated over successive periods. It showed that such programs affect practices without producing a lasting learning effect.

The hospital information system currently collates prescriptions and the results of biological tests and imaging procedures. Eight hundred computers, both laptops and fixed workstations, are used in care procedures.

The Dx-Care® program is at the centre of care delivery. It is used by doctors and nurses:

• to prescribe laboratory examinations and imaging tests for a patient,

• to visualise the results of laboratory tests,

• to establish and to consult nursing schedules,

• to archive a structured observation,

• to prescribe drugs.

Dx-Care® is integrated with other applications to allow the circulation of information between departments, laboratories and the pharmacy. Prescriptions for laboratory tests are transmitted to the Netlab® program, which manages such tests. The laboratories return the results using this same program, which retransmits them to Dx-Care®. Furthermore, prescriptions of drugs are transmitted to the Phedra program, which is used by the pharmacy to manage prescriptions. The prescription is validated by the pharmacy and this validation is then transferred to Dx-Care®. The lab test prescription facility has been available in the hospital information system since 2000 and is used by all departments of the hospital. The drug prescription facility was implemented later, in January 2003, and its use is still increasing.

The hospital information system thus makes it possible to install decision support systems that are activated whenever a prescription is issued, and to routinely collect criteria for the evaluation of prescription practices.
If possible and validated, the use of the hospital information system to evaluate care procedures will make it possible to collect data regularly and to routinely assess methods for the improvement of care practices.

With the aim of improving quality of care and preventing risks, the hospital is developing a system, based on the Intranet network, for the declaration of undesirable events. This system must be able to record all undesirable events and incidents linked to the use of health products and care, as well as those due to the patient environment and the work of health care professionals.

When a health care professional decides to report an undesirable event, he or she must complete a dedicated form available on the Intranet with all relevant information. This incident form includes an item entitled "complications associated with anti-coagulants". When the doctor clicks on this item, a form specific to haemorrhagic accidents following anti-coagulant treatment appears (see form in appendix).

Among the departments which have already started to use the computerised drug order entry system, several (the cardiology and vascular medicine departments) are heavy "consumers" of anti-thrombotic drugs. These departments have procedures covering the use of anticoagulants in arterial thrombosis and venous thrombosis, and the way in which patients receiving anticoagulants are monitored and handled in cases of overdose. The procedure concerning curative OAT includes a nomogram for adjusting doses.

The principal objectives of the study are:

1. To evaluate the effect on the frequency of overanticoagulation of the implementation, in the computerised physician order entry system, of two types of tool to adjust OAT doses.

2. To assess any advantages of using the personalised tool rather than the non-personalised tool.

The secondary objectives are:

1. To evaluate the frequency of haemorrhagic accidents in the context of the study.

2. To evaluate the feasibility of long-term implementation of the intervention.

The study is an alternating time series experiment which consists of three successive six-month periods. Each phase will consist of:

• a two-month period without active support, during which evaluation criteria will be collected (period A),

• a two-month period with non-personalised active support (period B),

• a two-month period with personalised active support (period C).

To limit the impact of a learning effect on appropriate OAT management practice within the departments over time, the order of these three periods was determined by using a Latin square plan. This experimental design can be considered valid for an impact study in this context, whereas a randomised controlled design is difficult to apply within a single hospital.

This study will include all the patients who are prescribed OAT for any indication and are hospitalised in clinical care departments where physicians use the hospital information system to prescribe drugs.

The following table shows the number of INR examinations prescribed by these departments in a three-month period. It provides an estimate of the approximate proportion of overdoses among the INR values exceeding 2, the range expected in patients treated with OAT in these units.

All doctors authorised to prescribe drugs in the participating departments will be included in the study: residents and fellows, registered and non-registered university hospital doctors. Each six-month period will coincide with an internship semester.

To prescribe a drug using Dx-Care®, the doctor selects the required drug from an exhaustive list. This opens a dialogue box in which the doctor types the dose, the frequency of intake and the mode of administration.
From this window, it is possible to add a text comment or to consult particular protocols that have been defined by the departments.

It is planned to integrate two types of decision support system into the computerised prescription program:

1) non-personalised active system: when the drug is selected, a window automatically opens presenting the prescriber with the nomogram for the adjustment of OAT doses in the form of a table (see Tables),

2) personalised active system: when the drug is selected, a window automatically opens suggesting a dose recommended according to the nomogram (taking into account the doses previously received by the patient and the patient's INR), together with a date for the next INR control and an explanation.

The main outcome is the proportion of patient observation time with INR results > 4.5, assuming linear change of the INR between measurements.

The secondary outcome is major haemorrhage: intra-cranial haemorrhage, or spontaneous haemorrhage necessitating surgery or a transfusion or decreasing the haemoglobin concentration by more than 2 g/dl.

The Netlab® application allows biological laboratories to receive prescriptions and to return results. All of the INR results can be extracted from the Netlab® database, accompanied by the information needed to identify the patient, the treatment and dose received, the prescribing doctor, the hospitalisation unit, and the date the test was prescribed. Data about overdoses can therefore be collected systematically by regular database searches.

Furthermore, the storage of this information in a computerised tool will make it possible to determine previous doses and INR results each time a drug is prescribed.

When a health care professional decides to declare an undesirable event, he or she fills in a specific, pre-formatted form available on the Intranet. This form includes a list of events that must be declared at the GPEH. The declaration form includes an item entitled "complication of haemorrhagic accidents". When the doctor clicks on this item, a specific form for the declaration of a haemorrhagic accident associated with anti-coagulant treatment appears (see form in appendix).

The determination of the number of participants necessary requires the definition of the statistical unit of interest, information about the incidence of the evaluation criteria in the study population, and a hypothesis about the effect of the intervention.

In this study, the main aims are to guide each prescription and to reduce the number of anticoagulant overdoses: the simplest statistical unit to study is therefore the INR result. This unit will be used to calculate the sample size. This choice is not, however, perfect, and the efficacy results will also be presented using other indicators of the quality control of anticoagulant treatment.

Given the low incidence of major haemorrhagic accidents (not currently measured at the GPEH, but probably below 1%), it is not possible in this study to estimate the number of subjects necessary to demonstrate an effect of the intervention on the "haemorrhagic accident" endpoint. Recording haemorrhagic accidents will give the frequency of such accidents, which can then be used for realistic estimates of power and sample size if further studies are carried out.

In previous studies evaluating the efficacy of tools to aid the prescription of OAT, the unit considered was not always the same, taking into account the number of INRs per patient and the time between INR measurements to a greater or lesser extent.
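To make the main outcome concrete — the proportion of observation time with INR above 4.5, assuming linear change between successive measurements — the following sketch shows one way to compute it. This is an illustration only (the protocol itself relies on the hospital databases and standard statistical software); the function name and the example data are invented.

```python
from datetime import date

def time_above_threshold(times, inrs, threshold=4.5):
    """Fraction of observation time with INR above `threshold`,
    assuming the INR changes linearly between measurements
    (the idea underlying Rosendaal's patient-days method)."""
    total = above = 0.0
    for (t0, y0), (t1, y1) in zip(zip(times, inrs), zip(times[1:], inrs[1:])):
        dt = (t1 - t0).days
        if dt <= 0:
            continue
        total += dt
        if y0 > threshold and y1 > threshold:
            above += dt                      # whole interval spent above
        elif (y0 > threshold) != (y1 > threshold):
            # the line crosses the threshold once within the interval
            above += dt * abs((max(y0, y1) - threshold) / (y1 - y0))
    return above / total if total else 0.0

# 12 days of observation; the INR rises above 4.5 and comes back down
times = [date(2004, 11, 1), date(2004, 11, 5), date(2004, 11, 9), date(2004, 11, 13)]
inrs = [3.0, 5.0, 4.0, 2.5]
print(f"{time_above_threshold(times, inrs):.0%} of observation time with INR > 4.5")  # 25%
```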
The most recent studies considered the number of patient-days, according to the method described by Rosendaal [16,21].

We may also carry out an analysis for each prescribing doctor, given that the intervention targets doctors directly. This will involve adjusting the effect of the intervention for the fact that intra-physician variability is a priori lower than inter-physician variability.

During a six-month period (January to June 2004), 4,920 INRs were requested by the six departments which already routinely use the computerized physician order entry system. The frequency of overtreatment can be approximately estimated from the percentage of INR > 4.5 among INR > 2. Among the 2,620 INRs > 2, 330 (12%) were higher than 4.5. The numbers of tests needed, for a basal incidence of the judgement criterion of 12% and for the following hypotheses on the relative reduction of the risk (RRR) of overdose, are:

• RRR 30%: 2500

• RRR 40%: 1300

• RRR 50%: 800

• RRR 60%: 500

Carrying out approximately 5000 tests over six months will make it possible to detect an intervention effect of less than 30% in this period. The experimental design includes three six-month periods and should thus ensure adequate power.

Statistical analyses will be performed with the STATA statistical software. Standard statistical tests will be used to compare the baseline characteristics of the departments and patients.

The main analysis concerns the effect of the intervention on the number of dangerously high INRs. The analysis will be carried out using a mixed-effects analysis of variance model, in which the effect linked to the period will be considered fixed and that linked to the prescription tool will be considered random.

Rosendaal's method will be used to analyse the number of patient-days with INR over the target.

According to French policy, this study was exempt from medical ethics committee approval. The anticoagulants being evaluated are prescribed as recommended by clinical studies validated within the GPEH. These recommendations are available on the hospital's Intranet and are thus accessible to all doctors. They conform to standard practices. Neither the patients nor the doctors will be randomised. The interventions are simply different means of giving valid information to physicians.

Using funding from the PHRC (Hospital Clinical Research Program), we carried out two earlier research studies related to this project. In the first (PHRC 95), an intervention aimed at modifying the way in which emergency department doctors handle ankle injuries, the study design was a randomised controlled study and the randomisation unit was the hospital.

OAT: Oral Anticoagulant Therapy

INR: International Normalized Ratio

GPEH: Georges Pompidou European Hospital

PHRC: Hospital Clinical Research Program

IC and PD conceived and wrote the protocol and prepared the manuscript. GC is the statistical expert and performed the power calculations. GC and ABR revised the protocol and the manuscript.

The pre-publication history for this paper can be accessed here:
Over 750 million children have iron-deficiency anemia. A simple powdered sachet may be the key to addressing this global problem.

Recent World Health Organization (WHO)/United Nations Children's Fund estimates suggest that the number of children with iron-deficiency anaemia (IDA) is greater than 750 million.

In the developing world, there are three major approaches available to address iron deficiency: dietary diversification so as to include foods rich in absorbable iron, fortification of staple food items (such as wheat flour), and the provision of iron supplements. When dietary or fortification strategies are not logistically or economically feasible, supplementation of individuals and groups at risk is an alternative strategy. For the past 150 years or more, oral ferrous sulphate syrups have been the primary strategy to control IDA in infants and young children.

Our research group at the Hospital for Sick Children in Toronto conceived the strategy of "home fortification" with "Sprinkles" – single-dose sachets containing micronutrients in a powdered form, which are easily sprinkled onto any foods prepared in the household. We hypothesized that this would be a successful method to deliver iron and other micronutrients to children at risk.

In Sprinkles, the iron (ferrous fumarate) is encapsulated within a thin lipid layer to prevent the iron from interacting with food. This means that there are minimal changes to the taste, color, or texture of the food upon adding Sprinkles. Other micronutrients, including zinc, iodine, vitamins C, D, and A, and folic acid, may be added to Sprinkles sachets. Any homemade food can be fortified with the single-dose sachets, hence the term "home fortification". Two formulations have been developed, including a nutritional anaemia formulation.

To investigate the bioavailability of the iron in Sprinkles, we used a dual stable isotope method and showed that anaemic infants absorbed iron from Sprinkles about twice as efficiently as nonanaemic infants when delivered in a maize-based diet in West Africa. The study was conducted in collaboration with the Kintampo Health Research Centre of the Ministry of Health in Accra, Ghana. The geometric mean iron absorption from two doses of iron was 8.3% (range, 2.9%–17.8%) in infants with anaemia and 4.5% in infants without anaemia.

It has been suggested that zinc may compete with iron for the same receptor sites on intestinal mucosal cells in the proximal duodenum, thereby compromising the absorption of both minerals.

Over the past five years, we have completed seven community-based trials in four different countries [17,18,19]. We further examined the haemoglobin response through quantile-quantile plots of haemoglobin concentrations at the end of the studies for Sprinkles and ferrous sulphate drops.

During our studies we also asked about the caregivers' perception of their infants' responses to Sprinkles as compared to drops, the impact of Sprinkles on the food to which they were added, the use of sachets as a delivery vehicle, and the perceived side effects of Sprinkles [18,19].

As the results of the first studies showing the efficacy of Sprinkles became available, the need for a reliable, high-quality supply became apparent. In 2000, the H. J. Heinz Company of Pittsburgh, Pennsylvania, United States, expressed an interest in the Sprinkles program as a component of their corporate social responsibility program.
Since 2001, the H. J. Heinz Company has provided support and expertise in the evaluation of consumer needs and a supply of Sprinkles for research, while the H. J. Heinz Company Foundation has provided financial support for research activities. Through a formal process of technology transfer, local overseas Sprinkles production has been encouraged. Currently, an independently licensed co-packer is supporting local production for a national program in Guyana, and plans are in place for technology transfer to Bangladesh and Pakistan.

The final stage, the scale-up process, is by far the most challenging. First, this process involves dialogue with the Ministries of Health, the scientific community, civil society, and other private partners. Second, it is important to identify sustainable methods of distribution that are able to reach and provide Sprinkles to the most vulnerable populations in the developing world. From our experience in Mongolia, we have determined that it is feasible to distribute Sprinkles in partnership with a non-governmental organization called World Vision. Sprinkles sachets distributed in Mongolia over a two-year period included both iron and vitamin D. Sprinkles have been successfully distributed by World Vision field staff to over 15,000 children in seven districts. Coverage has been over 80%, at a cost of about US$0.03 per sachet. In the project area, the prevalence of anaemia (haemoglobin < 115 g/l) decreased from 42% to 24%, and that of rickets from 48% to 33%.

Notwithstanding these positive results on anaemia control, without committed, long-term financial input from national governments, international agencies, or nongovernmental organizations, sustainability is not guaranteed. Clearly, sustainability over the long term can most likely be achieved if a program becomes self-financing. This may be achieved through public- and private-sector partnerships that use effective social marketing models, or possibly through programs which include microcredit in order to reach poorer population groups.

When strategizing how to scale up Sprinkles from small-scale research projects to large-scale programs, we quickly realized that our research group did not have the necessary funding, experience, or personnel needed to influence health policy, develop a social marketing strategy, or maintain a distribution network at a countrywide level. We have thus partnered with organizations that specialize in each of these areas to help achieve our goal of sustainable distribution.

For example, the government of Pakistan is planning to distribute Sprinkles through their ongoing Lady Health Worker Program, which is the largest public-sector primary health-care program implemented by the Federal Ministry of Health. In Bangladesh, BRAC, the largest national non-governmental organization in the country, is planning to distribute Sprinkles through their ongoing Female Community Health Worker program (popularly known as Shastha Shebika). In both of these countries, Sprinkles would be produced locally through public–private partnerships via a technology transfer agreement. The cost per sachet of locally produced Sprinkles should range from US$0.010 to US$0.015, depending on the volume of production, as compared to US$0.020 to US$0.025 if imported.

Each stage in the evolution of the Sprinkles intervention has been evaluated in a controlled manner.
We determined that the use of encapsulated iron did not appreciably change the taste or color of the food to which it was added, we showed that the haemoglobin response in anaemic infants was equivalent to the current standard of practice, and we documented the acceptability of Sprinkles among caregivers who used Sprinkles in their homes. Finally, through various partnerships, we have developed a successful model to scale up the intervention for countrywide use. Our challenge for the future is to demonstrate the cost-effectiveness of this new intervention and to advocate for the adoption of Sprinkles in the nutrition policy of developing countries.
Allelic-loss studies record data on the loss of genetic material in tumor tissue relative to normal tissue at various loci along the genome. As the deletion of a tumor suppressor gene can lead to tumor development, one objective of these studies is to determine which, if any, chromosome arms harbor tumor suppressor genes.

We propose a large class of mixture models for describing the data, and we suggest using Bayes factors to select a reasonable model from the class in order to classify the chromosome arms. Bayes factors are especially useful in the case of testing that the number of components in a mixture model is n0 versus n1; in these cases, frequentist test statistics based on the likelihood ratio statistic have unknown distributions and are therefore not applicable. Our simulation study shows that Bayes factors favor the right model most of the time when tumor suppressor genes are present. When no tumor suppressor genes are present and background allelic loss varies, the Bayes factors are often inconclusive, although this results in a markedly reduced false-positive rate compared to that of standard frequentist approaches. Application of our methods to three data sets of esophageal adenocarcinomas yields interesting differences from those results previously published.

Our results indicate that Bayes factors are useful for analyzing allelic-loss data.

The goal of studies of allelic loss is to determine those loci in tumor tissue where genetic material has been lost. A tumor suppressor gene (TSG) is much more likely to lie on a chromosome arm where there has been significant allelic loss than elsewhere [2].

Esophageal adenocarcinoma is a form of cancer involving the cells along the lining of the esophagus. The cause of esophageal adenocarcinoma is not well understood. The incidence of this cancer has been increasing rapidly; in fact, it is one of the fastest growing cancers in the United States over the past 20 years [3,4].

We examine three data sets of allelic loss in esophageal adenocarcinomas that attempt to identify the tumor suppressor genes (TSGs) involved in the development of this disease. These data sets have been previously analyzed and published. We refer to each data set by the last name of the first author of the publication. Some of the data sets record allelic loss at multiple loci per chromosome arm for some of the arms. However, because the number of loci evaluated per chromosome arm is not random, we consider only one locus per chromosome arm. In these cases, we choose data from the most informative locus for that chromosome arm.

Our general approach to analyzing allelic-loss data can be described in two main steps. The first step is to choose an appropriate model for the data using Bayes factors. The second step is to classify the chromosome arms as harboring TSGs or not, according to the selected model. The details involved in these two steps are described below.

A natural way to model allelic-loss data is in terms of a mixture of two distributions: one distribution corresponds to chromosome arms that harbor TSGs and the other corresponds to arms that do not. It is reasonable to expect considerable variability in the loss rates of arms that harbor TSGs, due to the existence of multiple pathways leading to the same tumor type.

Let Xi be the number of tumors with allelic loss for the ith chromosome arm, and let ni be the number of informative tumors for the ith chromosome arm, for i = 1, 2,...,N, where N is the number of chromosome arms in the study.
We propose a class of mixture models that account for the variation inherent in this type of data. Specifically, the class of models we propose is a mixture of two beta-binomial distributions. The density function for Xi is written as follows:

f(xi|θ) = η fBB(xi; ni, π1, ω1) + (1 − η) fBB(xi; ni, π0, ω0),

where fBB(·; n, π, ω) denotes a beta-binomial density, θ ≡ (η, π0, ω0, π1, ω1) is a vector of unknown parameters, η is the mixing probability, πj is the average loss rate, and ωj is the dispersion parameter, for j = 0,1.

The distribution converges to a mixture of two binomial distributions as both dispersion parameters go to 0 (ω0 → 0 and ω1 → 0). If only one of the dispersion parameters goes to 0 (ω0 → 0 or ω1 → 0), the distribution reduces to a mixture of a beta-binomial and a binomial distribution. Note that the model has only one component when the mixing parameter is zero (η = 0).

To compare two models H0 and H1, we write the posterior odds of H1 to H0 as

Pr(H1|X)/Pr(H0|X) = [Pr(X|H1)/Pr(X|H0)] × [Pr(H1)/Pr(H0)].     (1)

Equation (1) shows that the posterior odds is calculated as the product of a term known as the Bayes factor and the prior odds. The Bayes factor is the ratio of the marginal likelihoods of the data under the two models, B10 = Pr(X|H1)/Pr(X|H0). Thus, as Bayes factors are proportional to the posterior odds of one model to another, they are desirable measures to use for model selection. Note that if the prior odds are assumed to be 1, then the Bayes factor is equivalent to the posterior odds.

One can think of the Bayes factor as a Bayesian likelihood ratio statistic. Like the likelihood ratio statistic, the Bayes factor is a ratio of likelihoods under the two models being considered. However, while the likelihood ratio statistic is the ratio of two maximized likelihoods for two competing, nested models, the Bayes factor is the ratio of two likelihoods integrated (averaged) over the entire parameter space, and the models need not be nested. An important consideration with a Bayesian approach is that a prior distribution is assumed for all of the parameters in the model. The advantage of this is that one can incorporate prior information into determining which model is more appropriate. This is a disadvantage, however, if the Bayes factor is sensitive to the prior and the prior has been chosen incorrectly.

Large Bayes factors are evidence in favor of the alternative hypothesis. Kass and Raftery (1995) discuss guidelines for interpreting the measure. Following these guidelines, 2lnB10 > 2 implies positive evidence in favor of the alternative model.

Comparing a uni-component model to a two-component model addresses the question of whether there is one group of chromosome arms or two. Further, comparing a two-component beta-binomial model to a two-component binomial model addresses whether there is overdispersion in either group. The advantage of this is that it provides insight into the number of chromosome arm groups, whereas standard applicable frequentist tests will only indicate whether there is one group or more [9].

Provided there is sufficient evidence to indicate that there are two groups of chromosome arms, it is desirable to identify which chromosome arms belong in which group. Classification of the chromosome arms can be done by calculating the conditional probability of group membership of each arm under a given model. If Xi ~ ηf1 + (1 − η)f0, then it can be shown using Bayes' rule that

Pr(Zi = 1 | xi) = η̂ f1(xi) / [η̂ f1(xi) + (1 − η̂) f0(xi)],     (2)

where the component densities are evaluated at θ̂, the maximum likelihood estimate (MLE) of θ; Zi is the group membership of the ith chromosome arm, and Zi = 1 implies that the ith chromosome arm is in the TSG group. For the analyses here, chromosome arms with conditional probabilities exceeding 0.5 are classified in the TSG group.
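As a concrete illustration of this model class and of the classification rule in equation (2), the sketch below fits a two-component beta-binomial mixture to toy data by direct maximization (a rough analogue of the S-Plus nlminb optimization mentioned next) and computes the conditional membership probabilities. It is written in Python, and it assumes one common mean/dispersion parameterization of the beta-binomial (shape parameters α = πs and β = (1 − π)s with s = (1 − ω)/ω, so that ω is the intraclass correlation and ω → 0 recovers the binomial); the paper's exact parameterization is not reproduced in the text, so this is an assumption.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom, binom

def bb_pmf(x, n, pi, omega):
    """Beta-binomial pmf with mean rate pi and dispersion omega."""
    if omega < 1e-8:                      # limiting binomial case
        return binom.pmf(x, n, pi)
    s = (1.0 - omega) / omega             # alpha + beta
    return betabinom.pmf(x, n, pi * s, (1.0 - pi) * s)

def neg_loglik(params, x, n):
    eta, pi1, om1, pi0, om0 = params
    mix = eta * bb_pmf(x, n, pi1, om1) + (1.0 - eta) * bb_pmf(x, n, pi0, om0)
    return -np.sum(np.log(mix))

def fit_and_classify(x, n):
    """MLE of theta = (eta, pi1, om1, pi0, om0), then Pr(Zi = 1 | xi)."""
    start = np.array([0.1, 0.7, 0.05, 0.2, 0.01])
    bounds = [(1e-3, 0.999), (1e-3, 0.999), (1e-9, 0.5),
              (1e-3, 0.999), (1e-9, 0.5)]
    res = minimize(neg_loglik, start, args=(x, n), bounds=bounds)
    eta, pi1, om1, pi0, om0 = res.x
    f1, f0 = bb_pmf(x, n, pi1, om1), bb_pmf(x, n, pi0, om0)
    return res.x, eta * f1 / (eta * f1 + (1.0 - eta) * f0)

# Toy data: 10 arms, 20 informative tumors each, 3 arms with elevated loss
n = np.full(10, 20)
x = np.array([4, 5, 3, 6, 4, 5, 15, 14, 16, 4])
theta_hat, post = fit_and_classify(x, n)
print(np.round(post, 2))   # arms with post > 0.5 go to the TSG group
```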
Also note that MLEs are computed using the nlminb function in S-Plus.

The accompanying tables report 2ln(Bayes factor) values for data generated under each of the simulation scenarios. The rows of the matrix correspond to models considered under H1 (models appearing in the numerator of the Bayes factor); the columns correspond to models considered under H0 (models appearing in the denominator of the Bayes factor).

For data generated from a two-component binomial model (Scenario 1), the true model is mostly favored over the uni-component models. In fact, when comparing the true model to a uni-component beta-binomial model, the latter model is favored only 5% of the time. This can be viewed as a false-negative rate. Note that the Bayes factors never provide evidence in favor of a uni-component model in comparisons with either of the other two-component models for data from this scenario. Furthermore, the true model is selected 75% of the time over the two-component beta-binomial model. The Bayes factors are ambiguous, however, when comparing the true model to a two-component beta-binomial/binomial model, where neither is favored 69% of the time.

For data that follow a uni-component beta-binomial distribution (Scenario 2), the results are inconclusive 62% of the time when comparing the true model to the two-component binomial model. For twenty-two percent of the data sets the right model is favored but, 16% of the time, the two-component model is selected. Thus, this comparison results in a 16% false-positive rate. Similar results are found when comparing the true model to a two-component beta-binomial/binomial model. The Bayes factors favor the correct model over the two-component beta-binomial model roughly half the time and favor neither model the other half. Comparisons between the two-component models and the one-component binomial model, not surprisingly, show a strong preference for the two-component models, as they better accommodate the variability of the data.

The third quarter of the table reports the corresponding results for Scenario 3.

For data generated under Scenario 4, we expect the two-component beta-binomial model to be chosen over the other models in the class, as this model is closest to the truth. The results show that when this model is compared to the two-component binomial or the one-component beta-binomial, it is mostly favored, and those models are never selected. As the two-component beta-binomial model is fairly similar to the two-component beta-binomial/binomial model, however, most of the time neither model is chosen over the other: the two-component beta-binomial is favored only 35% of the time, while the two-component beta-binomial/binomial is favored 9% of the time. Interestingly, when comparing the one-component beta-binomial to the two-component binomial, the one-component model is chosen 72% of the time and the two-component binomial model is chosen only 5% of the time. This suggests that the measure is fairly sensitive to the overdispersion in the two groups. Another example of this is the comparison between the two-component beta-binomial/binomial model and the one-component beta-binomial model. In this case, the two-component model is favored only 54% of the time, the uni-component model is a better fit to 5% of the data sets, and both models are equally good fits to the data 41% of the time.

This simulation study demonstrates that Bayes factors are an appropriate method of model selection. They perform particularly well for data generated from the two-component models.
In particular, most of the time the correct model is chosen and, furthermore, reasonable false-negative rates are observed for comparisons made on data generated from the two-component binomial model as well as the two-component beta-binomial/binomial model. Data generated from a one-component beta-binomial model produce interesting results. Although the false-positive rates are reasonable when comparing the one-component beta-binomial model to the other two-component models, there is a large percentage of time when neither model is favored. Since both models are often good fits to the data, it would be difficult to decide with confidence whether or not there is a second group of arms in these cases.

In this section, we apply the methods discussed to three allelic-loss data sets. Specifically, we use Bayes factors to choose a reasonable model or set of models for the data in order to address whether TSGs exist on any of the chromosome arms, and we classify the chromosome arms as harboring TSGs or not based on the selected model(s).

For each data set we report a set of selected candidate models: models with 2ln(Bayes factors) exceeding 2 when compared to models outside the set, and with 2ln(Bayes factors) less than 2 when compared to models within the set. Details of the analysis for each data set are described below, with slightly more emphasis placed on the first data set.

The Barrett data set records allelic loss on 20 esophageal adenocarcinomas and two high-grade dysplasias; a histogram of the proportion of tumors with allelic loss is shown in the Figure. The Table reports the 2ln(Bayes factors) for the pairwise comparisons of the models for each of the three data sets. In addition, the posterior probability of each model is presented, assuming prior probabilities for the models such that

P(2-component model) = P(1-component model) = 1/2.

This gives

P(2 bb) = P(2 bb/bin) = P(2 bin) = 1/6 and P(1 bb) = P(1 bin) = 1/4.

For the Barrett data set, the two-component models are strongly favored over the one-component models, clearly indicating a group of arms that exhibit higher-than-background loss rates. In particular, the Bayes factors demonstrate that the two-component beta-binomial/binomial model provides the best fit. Note that the posterior probability of this model is considerably higher than that of the others, providing further evidence of its superiority.

The MLE of the dispersion parameter for the background component was ω̂0 = 0, reducing the two-component beta-binomial model to a two-component beta-binomial/binomial model. The parameter estimates for these two models are identical and imply that the beta-binomial distribution corresponds to the TSG loss and the binomial distribution corresponds to the background loss. The estimate of the probability that a chromosome arm is in the TSG group is 0.097. The estimated background loss rate is 0.228, and the expected loss rate for arms with TSGs is estimated at 0.708, with a loss rate variance of 0.07. The fit from the two-component binomial model gives a slightly lower mixing parameter estimate and a slightly higher estimate of the TSG loss rate.

The conditional probabilities of group membership based on the two-component beta-binomial/binomial model yield the same classification as those based on the other two-component models: chromosome arms 5q, 9p, and 17p are classified in the TSG group. The conditional probabilities of group membership for these chromosome arms are quite similar across the three models.

The Gleeson data set consists of 38 esophageal adenocarcinomas, with allelic-loss data recorded on 39 chromosome arms.
A histogram of the proportion of tumors with allelic loss is presented in the Figure. For the Gleeson data set, the two-component beta-binomial/binomial model, the two-component binomial model and the uni-component beta-binomial model are all favored over the two-component beta-binomial model and the uni-component binomial model. Several of the arms classified here exhibit lower than the average background loss rate in the Barrett data set; however, 9p and 17p are categorized along with 5q in the TSG group. Furthermore, although not classified in the TSG group, chromosome arm 18q exhibits the fourth highest allelic-loss rate in the Barrett data set.

The Hammoud data set consists of 30 esophageal adenocarcinomas typed on 39 chromosome arms (the same arms included in the Gleeson data set). A histogram of the Hammoud data set is presented in the Figure. The pairwise comparisons using the Bayes factors for the Hammoud data set are reported in the Table.

In the original analysis of the Barrett data set (1996), the authors consider a uni-component binomial distribution for the background loss. The analytic approach employed by Gleeson et al. (1997) is to select a chromosome arm with a corresponding allelic-loss rate above an arbitrarily chosen cut-off of 50% as the criterion for potentially harboring a TSG.

Results from the Bayes factors for the Gleeson data set are not completely clear. They cast doubt on whether the true underlying distribution really has two components, or whether the two-component models chosen also provide a reasonable fit to overdispersed data exhibiting only background loss. Recall the simulation study, where we demonstrated that for data arising from a uni-component beta-binomial model, the Bayes factors often indicate that both the true model and the two-component binomial model are reasonable fits to the data. This motivates incorporating Bayesian model averaging (BMA) into the inference process. An alternative to selecting a single best model is to weight the inference from each candidate model Hj by its posterior probability P(Hj|X).

Furthermore, one could use Bayesian model averaging when estimating the conditional probability of group membership for each of the chromosome arms. Maximum likelihood estimates from different high-probability models could lead to different inferences about parameters. Thus, averaging the conditional probability over the various models to classify the arms, or weighting the parameter estimates by the posterior probability of a given model, may be more desirable than choosing a single best model from which to make inference. For example, suppose chromosome arm 13q is suspected of harboring a TSG from past experiments and we desire a probability that Z13q = 1 based on these data. Because of model uncertainty, we may be hesitant to compute the probability based solely on one model. Instead, we could estimate this probability as

P(Z13q = 1 | X) = Σj P(Z13q = 1 | X, Hj) P(Hj|X),

where j indexes over all of the models considered. This is a potential alternative to classifying the chromosome arms using the classical maximum likelihood approach, and it needs to be further explored; a small numerical sketch of this weighting is given below.
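A minimal numerical sketch of this model-averaged membership probability (the model posterior probabilities and per-model membership probabilities below are invented for illustration, not values from the analyses above):

```python
# Hypothetical results for one arm, say 13q: posterior model
# probabilities P(Hj | X) and conditional membership probabilities
# P(Z13q = 1 | X, Hj) under each candidate model.
p_model = {"2 bb/bin": 0.55, "2 bin": 0.30, "1 bb": 0.15}
p_member = {"2 bb/bin": 0.62, "2 bin": 0.71, "1 bb": 0.0}   # a 1-component model has no TSG group

# BMA estimate: P(Z13q = 1 | X) = sum_j P(Z13q = 1 | X, Hj) * P(Hj | X)
p_bma = sum(p_member[m] * p_model[m] for m in p_model)
print(round(p_bma, 3))   # 0.554
```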
It is interesting to note that the two-component beta-binomial mixture model was never chosen for any of the data sets. Although it was certainly favored over the one-component binomial model in all data sets, and over the uni-component beta-binomial model in the Barrett data set, it was never chosen to be in the set of candidate models. The class of models considered here is based on our beliefs about the biology of the data. However, the ability to screen the tumor cell genome for chromosome arms which harbor TSGs lies in a better understanding of the background distribution. Characterizing the background distribution would allow a more definitive identification of arms exhibiting abnormal loss.

The three data sets to which we apply our methods were previously published and analyzed using other techniques [4,11].

Computing Bayes factors can be challenging, as non-trivial integration is often required to estimate the marginal probabilities under each model considered. Specifically, calculating Bayes factors involves integrating the likelihood over the entire parameter space for each model considered, so the integrals tend to be high-dimensional. In general, we need to compute

I = ∫ Pr(X|λ, H) π(λ|H) dλ.

This can be quite computationally intensive, and when the integral is of high dimension (> 6), quadrature methods can be unreliable.

One method of estimating such integrals is simple Monte Carlo, which involves sampling from the prior distribution π(λ). The simple Monte Carlo estimate of the integral is the averaged likelihood at the sampled parameter values, or

Î = (1/m) Σt Pr(X|λ(t), H), where λ(1),...,λ(m) are drawn from π(λ).

This has been shown to be a good estimate for likelihoods that are relatively flat. However, if the posterior is concentrated relative to the prior, the variance of the estimate will be large, and convergence to a Gaussian will be slow. Thus, sampling is often carried out instead from π*(λ), an importance sampling function [14]. The importance sampling estimate is

Î = Σt w(λ(t)) Pr(X|λ(t), H) / Σt w(λ(t)),

where w(λ) = π(λ)/π*(λ) is known as the importance sampling ratio. The simple Monte Carlo estimate is a special case of importance sampling in which π*(·) is chosen to be the prior distribution. However, the importance sampling estimate can be an improvement over the simple Monte Carlo estimate if π*(·) is chosen such that the sampling is more efficient, e.g., if π*(·) is centered around the mass of the posterior. There has been some success with importance sampling in a non-mixture-model setting.

Our solution is to first write the likelihood in its complete-data form. The complete-data likelihood for the mixture of two beta-binomial distributions is written as follows:

Pr(X, z|θ) = Πi [η f1(xi)]^zi [(1 − η) f0(xi)]^(1 − zi),

where z = (z1,...,zN)^T and the zi are unobserved group membership indicators such that zi = 0 if xi is from the background component and zi = 1 if xi is from the TSG component. Then the marginal probability of X becomes

I = Σz ∫ Pr(X, z|θ) g(θ) dθ,

where I denotes the marginal probability of the data (or integrated likelihood) and g is the prior distribution of θ.

We then estimate this integral using a method we developed called the Uniform Distance Method (UDM). This method is a variant on importance sampling and involves a combination of either quadrature or exact integration and sampling of the membership vectors z. The idea behind the method is to use P(z|X, θ̂), where θ̂ is the MLE of θ, to provide information on the important groupings, i.e., which chromosome arms are likely to be clustered together. While the membership vectors are sampled independently, the membership values within a group are sampled dependently, making these groupings more likely to be maintained than if the values were sampled independently.

The development and assessment of UDM is discussed in detail in Desai (2000) and demonstrates solid performance in estimating these integrals.
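The contrast between the simple Monte Carlo and importance sampling estimators described above can be illustrated on a deliberately simple one-parameter problem (this sketch is not the authors' UDM; the prior, proposal and data are toy assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy integrated likelihood: binomial data, uniform (Beta(1,1)) prior
x = np.array([4, 5, 3, 6])
n = np.array([20, 20, 20, 20])

def lik(p):
    """Pr(X | p) evaluated at each sampled rate in the vector p."""
    return np.prod(stats.binom.pmf(x[:, None], n[:, None], p[None, :]), axis=0)

m = 100_000

# Simple Monte Carlo: average the likelihood over draws from the prior
p0 = rng.uniform(0.0, 1.0, m)
I_mc = lik(p0).mean()

# Importance sampling: draw from a density centered on the posterior
# mass (Beta(19, 63), matching the toy data) and reweight by
# w = prior / proposal, the importance sampling ratio
a, b = 1 + x.sum(), 1 + (n - x).sum()      # 19 and 63
p1 = rng.beta(a, b, m)
w = stats.uniform.pdf(p1) / stats.beta.pdf(p1, a, b)
I_is = np.sum(w * lik(p1)) / np.sum(w)

print(I_mc, I_is)   # two estimates of the same integral I
```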
TSG, tumor suppressor gene; MLE, maximum likelihood estimate; 2 bb, two-component beta-binomial model; 2 bb/bin, two-component beta-binomial/binomial model; 2 bin, two-component binomial model; 1 bb, uni-component beta-binomial model; 1 bin, uni-component binomial model; BMA, Bayesian model averaging; UDM, uniform distance method.

Both MD and MJE contributed substantially to the development of the models and the methodology. MD performed the simulation study and the analysis of the three data sets. Both authors have read and approved the final version of the manuscript.
Adhesive capsulitis, or frozen shoulder, is a common condition characterized by shoulder pain and stiffness. In patients in whom conservative measures have failed, more invasive interventions such as arthrographic or arthroscopic distension can be very effective in relieving symptoms and improving range of movement. However, absolute contraindications to these procedures include the presence of neoplasia around the shoulder girdle. We present five cases referred to our institution in which the diagnosis of shoulder joint malignancy was delayed following prolonged, ineffective treatment for frozen shoulder. These cases highlight the importance of careful review of the radiology and the need to reconsider the diagnosis in refractory "frozen shoulder".

Frozen shoulder was first described by Codman in 1934 as an idiopathic painful restriction in the range of shoulder joint movement in the presence of normal plain radiographs.

Tumours around the shoulder girdle are uncommon causes of shoulder pain and stiffness, but often present with symptoms and a clinical history identical to those of a frozen shoulder. A strict contraindication to arthrographic or arthroscopic distension of the shoulder is the presence of a local oncological process: such procedures may change the surgical management from a limb-preserving resection to a forequarter amputation. In the past month, five patients have been referred to us with malignant tumours around the shoulder joint, all previously diagnosed as having a frozen shoulder. All patients had undergone prolonged conservative management and hydrodilatation, with persistence of symptoms. Two of the patients had also undergone arthroscopic surgery. The following cases illustrate the importance of reconsidering the diagnosis in refractory frozen shoulder, and the value of a detailed clinical history and examination and careful consideration of radiologic imaging in assessing recalcitrant "frozen shoulder".

A 60 year old woman presented to her local medical officer with an eighteen month history of worsening right shoulder pain and stiffness. She was initially treated with oral analgesia, followed by a cortisone injection, without improvement. Two months later she had a hydrodilatation of the shoulder, but her symptoms persisted. MRI was then performed, which demonstrated a large permeative tumour arising from the scapula (Figure). She was referred to our institution.

A 42 year old man was referred to an orthopaedic specialist with a history of sudden-onset left shoulder pain following a work-related activity. He was initially diagnosed with rotator cuff tendinopathy and subacromial impingement, and had a course of intensive physiotherapy followed by arthroscopic shoulder surgery, without improvement in symptoms. Two months later, a minor incident involving his left shoulder led to an increase in pain and swelling and a reduction in movement. Hydrodilatation was then performed, but the pain and function of the shoulder continued to worsen. On retrospective review of plain x-rays of the shoulder, a destructive lesion at the metaphysis, with a cortical breach medially in the region of the surgical neck of the humerus, was recognized (Figure).

A 50 year old woman was referred to an orthopaedic specialist with a 6 month history of episodic pain in the right shoulder, with associated decreased range of movement. Initial plain x-rays were unremarkable.
She then underwent a variety of procedures, which included repeated subacromial corticosteroid injections, arthrographic distension, manipulation under anaesthetic, and arthroscopic debridement and acromioplasty. On arthroscopy, a marked synovitis was observed, to which her ongoing symptoms and the development of a palpable mass on the anterior aspect of the shoulder were initially attributed. Repeat plain radiographs two years after the onset of her symptoms demonstrated a large lesion extending from the glenoid cartilage into the base of the coracoid process (Figure). She was referred to our institution.

A 68 year old man, who had previously had a squamous cell carcinoma of the upper back excised, had a three year history of shoulder pain and stiffness. He was treated for a frozen shoulder and received intensive physiotherapy and multiple subacromial corticosteroid injections, followed by hydrodilatation. Initially this seemed to settle his symptoms, although a month later pain and stiffness recurred, with marked reduction in shoulder function, and he was referred to us. An MRI was performed, which showed lesions in the supraspinatus and trapezius muscles consistent with metastatic deposits (Figure).

A 55 year old female, with a past history of a malignant fibrous histiocytoma of the left thigh resected five years previously, presented to her local medical officer with right shoulder pain. Plain films were performed at the time, which appeared normal (Figure).

Adhesive capsulitis, or frozen shoulder, is a common condition that may affect up to 5% of the general population in their lifetime. Although the aetiology of frozen shoulder is unknown, it has been associated with diabetes mellitus, thyroid disease, ischaemic heart disease and various autoimmune conditions.

Tumours of the shoulder girdle are uncommon causes of shoulder pain and restricted movement. In most cases, they are diagnosed based on the presence of a soft tissue mass on clinical examination, as well as characteristic radiographic changes. Robinson et al. have made suggestions regarding the assessment of such lesions, and misdiagnosis can lead to inappropriate and potentially harmful treatment.

MRI: magnetic resonance imaging; CT: computed tomography; TSE: turbo spin echo; STIR: short tau inversion recovery.
The Menopause Rating Scale is a health-related quality of life scale developed in the early 1990s and validated step by step since then. No methodologically detailed work on the utility of the scale for assessing health-related changes after treatment had been published before.

We analysed an open, uncontrolled post-marketing study of over 9000 women, with pre- and post-treatment data on the MRS scale, to critically evaluate the capacity of the scale to measure the health-related effects of hormone treatment independently of the severity of complaints at baseline.

The improvement of complaints during treatment, relative to the baseline score, was 36% on average. Patients with little/no complaints before therapy improved by 11%, those with mild complaints at entry by 32%, those with moderate complaints by 44%, and those with severe symptoms by 55%, compared with the baseline score. We showed that the distribution of complaints in women before therapy returned to norm values after 6 months of hormone treatment. We also provided weak evidence that the MRS results may well predict the assessment of the treating physician. Limitations of the study, however, may have led to overestimating the utility of the MRS scale as an outcome measure.

The MRS scale showed some evidence of its ability to measure treatment effects on quality of life across the full range of severity of complaints in aging women. This, however, needs confirmation in other and better-designed clinical/outcome studies.

The Menopause Rating Scale (MRS) was initially developed in the early 1990s to measure the severity of aging complaints and their impact on health-related quality of life. The validation of the MRS began some years ago. Development and standardization of the scale were published elsewhere.

The scale was defined as a menopause-specific, health-related quality of life (HRQoL) scale, because the profile of complaints it covers importantly determines the HRQoL of women in this age span. Moreover, a good correlation between the results obtained with the MRS scale and a generic QoL scale was observed.

The MRS scale has become internationally well accepted as far as usage in many countries is concerned. The first translation was into English; other translations followed.

As with other health-related QoL scales, it is a challenge to satisfy the demands of clinical utility and outcome sensitivity. A comprehensive overview regarding the conventional psychometric requirements of test reliability and validity was recently published elsewhere.

A multicenter, open post-marketing study was conducted with a product for hormone therapy (2 mg estradiol valerate / 2 mg estradiol valerate + 1 mg cyproterone acetate), using the MRS scale as outcome measure under the routine conditions of office-based gynaecologists. The study was described in detail elsewhere.

The statistical analyses were performed with the commercial statistical package SAS 8.2.

Altogether, data from 9311 women were available for most of our analyses. However, the sample size varied slightly depending on the variables used, because information was missing for a few variables.

The mean age was 49.8 years (SD 6.4). About half of the participating women were still perimenopausal (51.9%); the others were already in the postmenopausal phase (48.1%).
The improvement of health-related quality of life (HRQoL) measured with the MRS scale is described in the corresponding table. Apart from the comparison of means, we calculated the relative improvement compared with the situation before therapy (baseline) to better understand the magnitude of change after therapy. The scale is able to measure an improvement in patients starting with "no/little" (0–4 points), "mild" (5–8), "moderate" (9–15), or "severe" (16+ points) complaints before therapy (= baseline); this stratified analysis is also tabulated, and a small classification sketch follows at the end of this section.

It is interesting to compare HRQoL before and after hormone treatment with the MRS norm values obtained in an average population of aging women, i.e. not patients as in our post-marketing study. To this end, we compared the MRS total scores of our patients with those of the average female population: the distribution of complaints, elevated before therapy, returned to the norm values after 6 months of hormone treatment.

The treating gynaecologist individually assessed the efficiency of the hormone treatment in the above-mentioned intervention study. For the purpose of this analysis, the gynaecologist's expert opinion on treatment efficiency was categorized into two classes, successful (very effective and effective) and not successful. This alternative variable was then compared with a "success variable" based on the MRS: "successful" (a reduction of 5 or more scoring points after therapy compared with the baseline test) and "not successful" (a reduction of fewer than 5 points). The prediction of the expert opinion of the treating gynaecologist from the MRS data seems to be good: sensitivity (correct prediction of a positive assessment by the physician) 70.8% and specificity (correct prediction of a negative assessment by the physician) 73.5%.

The MRS scale was developed (a) to assess symptoms of aging/menopause (independent from those that are disease-related) or HRQoL between groups of women under different conditions, (b) to evaluate the severity of symptoms over time, and (c) to measure changes pre- and post hormone replacement therapy. The aim of this paper was to empirically demonstrate the last of these claims.

Reliability and validity are important for showing the usefulness of the scale as a clinical utility in monitoring treatment effects, once all other methodological requirements have been demonstrated. Reliability measures were found to be good across countries. Regarding validity, it was shown that the internal structure of the MRS across countries was sufficiently similar to conclude that the scale really measures the same phenomenon. The comparison with another scale for aging women, although not a validated HRQoL scale (Kupperman), showed sufficiently good correlations of the total score, which is compatible with the notion of good criterion-oriented validity. The same is true for the comparison with the generic quality-of-life scale SF-36, where high correlation coefficients have also been shown.

Having these psychometric data available, a point was reached to critically evaluate the capacity of the scale to reliably measure health-related effects of hormone treatment independently of the severity of complaints and, in addition, to compare treatment effects measured by the MRS scale with the subjective assessment by the treating physician. In this context, many clinicians use the term "validity" loosely to mean high utility for clinical work or research.
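As a minimal sketch of the stratified analysis above, the snippet below bands baseline MRS totals into the published severity categories and computes the per-patient relative improvement. The function names and patient records are hypothetical, and the 0–4 range of the lowest band is inferred from the adjacent cutpoints rather than stated in the paper:

```python
# Illustrative sketch (not the study's code): band baseline MRS totals into the
# severity categories used in the paper and compute relative improvement.

def severity(total: int) -> str:
    """Map an MRS total score to the severity bands used in the analysis."""
    if total <= 4:
        return "no/little"   # inferred: scores below the 'mild' band (5-8)
    if total <= 8:
        return "mild"
    if total <= 15:
        return "moderate"
    return "severe"

def relative_improvement(baseline: float, post: float) -> float:
    """Improvement after therapy as a percentage of the baseline score."""
    return 100.0 * (baseline - post) / baseline

# Hypothetical patient records: (baseline total, total after 6 months)
patients = [(3, 2), (7, 5), (12, 7), (20, 9)]
for base, post in patients:
    print(f"{severity(base):>10}: {relative_improvement(base, post):5.1f}% improvement")
```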
The only hormone treatment study using the MRS scale as outcome measure in women during the menopausal transition for which we could obtain data for methodological analysis was the post-marketing study described above. We hope to repeat and confirm this analysis with data from a more stringently designed clinical trial, but even on the basis of a methodologically weak dataset, in the absence of other data, we obtained reassuring methodological information about the MRS scale.

It is a well-established experience that women with menopausal complaints respond to hormone therapy with a marked improvement of HRQoL, and this is what the MRS scale should be able to detect. We saw that the elevated mean MRS total score at baseline (before treatment) markedly decreased after 6 months under treatment, indicating a substantial improvement of complaints and HRQoL; the same held for the mean scores of the three subscales. These data cannot disentangle the effect of treatment from the "natural variation" of complaints over time, but that was not the point: it was not the intention of this paper to evaluate the effectiveness of hormone therapy in an uncontrolled post-marketing study.

The absolute improvement of symptoms during treatment was 9.3 points of the MRS total score on average, equivalent to 36% of the baseline score, with similar figures for all three subscales. In other words, the MRS scale was shown to be successful in detecting treatment effects. The impressive magnitude of the therapy-related improvement of HRQoL should obviously be discussed in the context of the participating gynaecologists' selection of women whose complaints were susceptible to this kind of treatment. Another critical remark is that we cannot comment on the extent to which the MRS scale measures true rather than placebo treatment effects; this, however, is a question of efficacy, and by definition of its design the study cannot draw any conclusions in this regard.

To answer the question of whether the sensitivity of the MRS scale is good enough to detect treatment-related changes even in women with only little or mild symptoms, as compared with severe ones, the analysis was stratified. An improvement of complaints/QoL was seen to an increasing degree in patients with little, mild, moderate and severe symptoms at baseline. The relative improvement increased with the severity of symptoms at baseline, which is consistent with the general expectation. It is important to underscore that the MRS scale seems to detect a positive treatment effect even in women with little complaints, although to a lesser degree.

Moreover, we demonstrated the capacity of the MRS scale to determine therapeutic efficiency with another approach: a face-value comparison with the norm values of the population. Finally, the treatment effect was dichotomized into "successful" and "not successful" for both the subjective opinion of the physician and the result of the MRS scale: the sensitivity (correct prediction of a positive assessment by the physician) was 70.8% and the specificity (correct prediction of a negative assessment by the physician) 73.5%. In other words, the MRS scale fits well with the subjective assessment of the treatment effect made by the physician.
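A minimal sketch of this agreement analysis follows, assuming hypothetical records and the "reduction of 5 or more points" MRS success rule quoted above; the variable names and data are ours, not the study's:

```python
# Illustrative sketch (not the study's code): cross-tabulate the physician's
# dichotomized verdict against the MRS-based "success" rule and derive
# sensitivity and specificity.

def mrs_success(baseline: int, post: int) -> bool:
    """Success per the MRS rule: at least 5 scoring points reduction."""
    return (baseline - post) >= 5

# Hypothetical records: (physician_says_success, baseline score, post score)
records = [(True, 20, 9), (True, 12, 10), (False, 7, 5), (False, 15, 8)]

tp = sum(1 for doc, b, p in records if doc and mrs_success(b, p))
fn = sum(1 for doc, b, p in records if doc and not mrs_success(b, p))
tn = sum(1 for doc, b, p in records if not doc and not mrs_success(b, p))
fp = sum(1 for doc, b, p in records if not doc and mrs_success(b, p))

sensitivity = tp / (tp + fn)   # correct prediction of a positive physician verdict
specificity = tn / (tn + fp)   # correct prediction of a negative physician verdict
print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}")
```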
The MRS scale was thereby also tested as to whether it predicts the therapeutic assessment of the treating physician. At face value, the efficiency of hormone treatment as assessed individually by the treating gynaecologists was comparable with the assessment by the MRS scale under this simple dichotomization of the treatment effect into "successful" and "not successful". However, conclusions have to be drawn very carefully because of a possibly inherent bias that may have inflated the positive result: the physicians' subjective assessment of "success" was not as independent of the MRS assessment as would be desirable, because the physician administered the scale to the patient. Even without being able to recall the MRS result from six months earlier, or to calculate and compare the total scores of the two administrations, the interaction with the patients is likely to have biased the comparison towards higher agreement between the two assessments.

Although the result may therefore be too positive compared with a blinded, truly independent assessment, it permits the working hypothesis that the MRS scale predicts the therapeutic effect sufficiently well. This needs to be confirmed with better data, i.e. a blinded, independent comparison using the current self-administered MRS scale. The aim of this exercise was only to demonstrate that the MRS scale may well predict the clinical opinion about the efficiency of hormone therapy, which had not been empirically shown before. We recommend the MRS as a standardized and validated "objective" scale for use in clinical studies, although some aspects discussed above need confirmation in a new study. Moreover, since the scale is already broadly used at the international level, it is important to make users aware of the remaining gaps and weak evidence.

The limitations of this study should be briefly summarized. First, the study was performed on a dataset in which an earlier version of the MRS scale was used: the scale was not self-administered but completed in an interview of the physician with the patient. This could have influenced the magnitude of the absolute scores of the total and sub-scales; as far as pre-/post-treatment changes are concerned, the absolute changes are likely to have been more affected than the relative changes in the patients' HRQoL discussed in this paper. Another problem along the same lines is that we had to transform the old coding system into the new one. This was done with a simple linear transformation and is not likely to have introduced any bias; an illustrative sketch of such a recoding follows at the end of this section. A further limitation is that this is, to our knowledge, the first study to assess the validity of the scale for measuring therapeutic intervention in this way.

It is not likely that the main conclusions of the study are materially biased. Nevertheless, the results should be used cautiously until they are confirmed with data obtained with the current self-administered MRS scale, free of any potential influence of the physician. It can be assumed that a new study with the currently recommended MRS scale, in the sense of a "patient-reported outcome", would also demonstrate positive results, though presumably to a lesser degree.
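A minimal sketch of such a linear recoding, assuming for illustration that the old items ran from 1 to 5 and the current ones run from 0 to 4; the paper does not specify the two coding schemes, so these ranges are our assumption:

```python
# Illustrative sketch: linearly shift an old MRS item score to the new scale.
# The concrete ranges (old 1-5, new 0-4) are assumed, not taken from the paper.

def recode_item(old_value: int) -> int:
    """Linearly shift an old item score (assumed 1-5) to the new scale (0-4)."""
    if not 1 <= old_value <= 5:
        raise ValueError("old coding assumed to run from 1 to 5")
    return old_value - 1

old_items = [1, 3, 5, 2]
new_items = [recode_item(v) for v in old_items]
print(new_items)   # [0, 2, 4, 1]
```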
The MRS scale showed some evidence of its ability to measure treatment effects on quality of life across the full range of severity of complaints in aging women; this, however, needs confirmation in other, better-designed clinical studies.

The authors FS and SG are employees of the company that produces HRT products. We do not, however, see any conflict of interest as far as the methodological aspects of the validation of the MRS scale are concerned.

LAJH: responsible for the collection and evaluation of the data, and involved in writing the paper. DMT: responsible for building the database for this publication and for the statistical evaluation, and contributed to writing the paper. FS: responsible for the post-marketing study and for designing this paper, and contributed to the manuscript. SG: responsible for designing and overseeing the post-marketing study of Climen, and contributed to writing and revising the paper. JS: responsible for the field work of the post-marketing study, setting up the initial database, and preparing the subset of data used for this publication. HPGS: major responsibility in developing the MRS scale; contributed to writing and revision of the manuscript.