query
dict
pos
dict
neg
dict
{ "abstract": "DOI: 10.4328/JCAM.5881 Received: 08.06.2018 Accepted: 25.06.2018 Published Online: 02.07.2018 Printed: 01.03.2019 J Clin Anal Med 2019;10(2): 193-7 Corresponding Author: Hatice Kaplanoglu, Department of Radiology, Diskapi Yildirim Beyazit Training and Research Hospital, TR-06100 Ankara, Turkiye. T.: +90 3125084443 F.: +90 31231866 E-Mail: hatice.altnkaynak@yahoo.com.tr ORCID ID: 0000-0003-1874-8167 Abstract Aim: In this study, we aim to investigate the anterior-posterior length of the lateral orbital wall, and the width and thickness of the posterior base of the sphenoid trigone using multislice computed tomography (MSCT). Material and Method: The lateral orbital distance was found by measuring the distance that starts from the axial lateral orbital rim to the point where the lateral rectus muscle contacted the bone. The lateral wall width was measured at the superior border of the lateral rectus muscle. The sphenoid trigone thickness was measured at the level passing through the superior border of the lateral rectus muscle. Results: In the right eye, orbital lateral wall length was 21.6 mm, trigone thickness was 13.0 mm, and lateral wall width was 13.0 mm, while in the left eye, the orbital lateral wall length was 20.7 mm, trigone thickness was 9.5 mm, and the lateral wall width was 13.4 mm. Discussion: In this study we measured both the mean width and length of the larger part of the sphenoid, and the trigone thickness. These measurements can be used as an anatomical guide in the deep lateral orbital decompression surgery.", "corpus_id": 109740588, "title": "Multislice computed tomography evaluation of lateral orbital wall and sphenoid trigone in Turkey" }
{ "abstract": "INTRODUCTION\nTechniques of orbital decompression for Graves' ophthalmopathy continue to evolve. Recently the deep lateral orbital wall has been proposed as the most effective and safe site for a decompression procedure associated with the least complications. Anatomic variations with structures like the middle cranial fossa render decompression of the lateral wall more logical. We aimed to understand the anatomic localization and appearance of the lateral orbital wall by measuring the width and distance of the lateral wall on computed tomography (CT).\n\n\nMATERIAL AND METHODS\nThe medical records of all patients who underwent orbital CT scans for ocular trauma or for confirmation of orbital disease at the Korea University Hospital between January 2005 and May 2008 were reviewed retrospectively. All patients had been scanned with the same CT scanner (Philips Brilliance 64 channel CT; Philips Healthcare Systems). Patients who had normal orbits bilaterally were included in this study. The cut in which the lateral rectus muscle was longest and the lateral bony orbit was thickest was selected from the axial and coronal slices. The point where the lateral rectus muscle contacted the bone was measured on this axial slice. The width of the lateral wall was measured at the level of the superior border of the lateral rectus muscle on the thickest part of the coronal slice.\n\n\nRESULTS\nA total of 334 orbits (167 patients) were included. Patients ranged in age from 7 years to 78 years (median age 41.1 years). The average distance of the lateral wall was 26.0 mm OD and 25.0 mm OS. The average width of the lateral wall was 16.0 mm OD and 16.2 mm OS. There was no statistically significant difference between right and left. The patients were divided into 8 age groups by decades. There was no statistically significant difference between the groups in either measurement.\n\n\nCONCLUSION\nIn this study, we measured the average width and length of the thickest segment of the greater wing of the sphenoid, which can be used as anatomic guidelines during deep lateral orbital decompression surgery, and the basic standard value of the lateral orbital wall.", "corpus_id": 1897856, "title": "Measurement of width and distance of the posterior border of the deep lateral orbital wall using computed tomography." }
{ "abstract": "Objective/Hypothesis: Surgical management of Graves' ophthalmopathy is an alternative to medical therapy with corticosteroids or external beam radiotherapy. Orbital decompression has commonly been performed via a transantral approach to the medial orbital wall and floor. Although an endoscopic approach to these walls has been described, a balanced approach (incorporating a lateral decompression by an ophthalmology team) is desirable. Study Design: Retrospective review. Methods: Endoscopic medial decompression and extended lateral decompression were accomplished in 18 orbits (11 patients); inferior decompression was performed in 11 of these. Five additional procedures were performed. Results: Exophthalmos improved by a mean of 4.6 mm. All patients who underwent decompression for vision loss had improved vision after surgery. Exposure keratitis improved in six of six orbits. Two of five patients undergoing orbital decompression for vision loss developed postoperative diplopia, which was successfully treated with strabismus surgery or prism glasses. There were no other significant complications. Conclusions: The endoscopic approach to the medial orbital wall is an important component of balanced orbital decompression for patients with Graves' ophthalmopathy. Balancing the decompression and preserving the medial orbital strut between the ethmoid cavity and the orbital floor may minimize the risk of diplopia. Laryngoscope, 108:1648–1653, 1998", "corpus_id": 21017695, "score": -1, "title": "Balanced orbital decompression for graves' ophthalmopathy" }
{ "abstract": "Fog computing is an intermediate computing layer that has emerged to address the latency issues of cloud-based Internet of things (IoT) environments. As a result, new forms of security and privacy threats are emerging. These threats are mainly due to the huge number of sensors, as well as the enormous amount of data generated in IoT environments that needs to be processed in real time. These sensors send data to the cloud through the fog computing layer, creating an additional layer of vulnerabilities. In addition, the cloud by nature is vulnerable because cloud services can be located in different geographical locations and provided by multiple service providers. Moreover, cloud services can be hybrid and public, which exposes them to risks due to their infinite number of anonymous users. Access control (AC) is one of the essential prevention measures to protect data and services in computing environments. Many AC models have been implemented by researchers from academia and industry to address the problems associated with data breaches in pervasive computing environments. However, the question of which AC model(s) should be used to prevent unauthorized access to data remains. The selection of AC models for cloud-based IoT environments is highly dependent on the application requirements and how the AC models can impact the computation overhead. In this paper, we survey the features and challenges of AC models in the fog computing environment. We also discuss the diversity of different AC models. This survey provides the reader with state-of-the-art practices in the field of fog computing AC and helps to identify the existing gaps within the field.", "corpus_id": 218834076, "title": "Access Control in Fog Computing: Challenges and Research Agenda" }
{ "abstract": "With the rapid development of big data and Internet of things (IOT), the number of networking devices and data volume are increasing dramatically. Fog computing, which extends cloud computing to the edge of the network can effectively solve the bottleneck problems of data transmission and data storage. However, security and privacy challenges are also arising in the fog-cloud computing environment. Ciphertext-policy attribute-based encryption (CP-ABE) can be adopted to realize data access control in fog-cloud computing systems. In this paper, we propose a verifiable outsourced multi-authority access control scheme, named VO-MAACS. In our construction, most encryption and decryption computations are outsourced to fog devices and the computation results can be verified by using our verification method. Meanwhile, to address the revocation issue, we design an efficient user and attribute revocation method for it. Finally, analysis and simulation results show that our scheme is both secure and highly efficient.", "corpus_id": 3543367, "title": "A Secure and Verifiable Outsourced Access Control Scheme in Fog-Cloud Computing" }
{ "abstract": "Cloud computing has emerged as one of the most influential paradigms in the IT industry in recent years. Since this new computing technology requires users to entrust their valuable data to cloud providers, there have been increasing security and privacy concerns on outsourced data. Several schemes employing attribute-based encryption (ABE) have been proposed for access control of outsourced data in cloud computing; however, most of them suffer from inflexibility in implementing complex access control policies. In order to realize scalable, flexible, and fine-grained access control of outsourced data in cloud computing, in this paper, we propose hierarchical attribute-set-based encryption (HASBE) by extending ciphertext-policy attribute-set-based encryption (ASBE) with a hierarchical structure of users. The proposed scheme not only achieves scalability due to its hierarchical structure, but also inherits flexibility and fine-grained access control in supporting compound attributes of ASBE. In addition, HASBE employs multiple value assignments for access expiration time to deal with user revocation more efficiently than existing schemes. We formally prove the security of HASBE based on security of the ciphertext-policy attribute-based encryption (CP-ABE) scheme by Bethencourt and analyze its performance and computational complexity. We implement our scheme and show that it is both efficient and flexible in dealing with access control for outsourced data in cloud computing with comprehensive experiments.", "corpus_id": 3305868, "score": -1, "title": "Hierarchical Attribute-Based Solution for Flexible and Scalable Access Control in Cloud Computing" }
{ "abstract": "Background: Angiogenesis is regulated by angiogenic factors such as vascular endothelial growth factor (VEGF) and basic fibroblast growth factor (bFGF) that may be deregulated in lung cancer. The aim of this study was to find out a pattern of VEGF and bFGF protein expression in exhaled breath condensate (EBC) and serum of non-small cell lung cancer (NSCLC) patients and healthy volunteers (smokers and nonsmokers) to obtain early diagnostic values to discriminate initial stages of disease.", "corpus_id": 212618832, "title": "Non-Small Cell Lung Cancer Prognosis Based In A Cut-Off Value For Plasma Basic Fibroblast Growth Factor Expression" }
{ "abstract": "Thrombospondin-2 (TSP-2) is an endogenous negative regulator of vascularization in human cancer. TSP-2 regulates angiogenesis through binding and sequestration of the proangiogenic fibroblast growth factor-2 (FGF-2). However, it is unclear whether TSP-2 and FGF-2 are related to prognosis in non-small cell lung cancer (NSCLC). To study this issue, we measured serum (Elisa) levels of TSP-2 and FGF-2 in 40 NSCLC patients (before chemotherapy) and 22 healthy subjects. Both TSP-2 and FGF-2 concentrations were elevated in the NSCLC group compared with control (TSP-2: 26.72±8.00 vs. 18.64±5.50 ng/ml, p=0.002; FGF-2: 11.90±5.80 vs. 7.26±3.90 pg/ml, p=0.01). Receiver-operating characteristic (ROC) curves were applied to find the cut-off serum levels of TSP-2 and FGF-2 (NSCLC vs. healthy: TSP-2=15.09 ng/ml, FGF-2=2.23 pg/ml). Patients before treatment with the TSP-2 level<24.15 ng/ml had a median survival of 23.7 months, but those with TSP-2>24.15 ng/ml had only 9 months' median survival (p=0.007). Patients with FGF-2 level>11.21 pg/ml had significantly shorter survival than patients with FGF-2<11.21 pg/ml (7.5 months vs. 16 months, p=0.034). We conclude that NSCLC patients have higher serum concentrations of TSP-2 and FGF-2 than healthy people. High levels of TSP-2 and FGF-2 may predict worse survival.", "corpus_id": 2822228, "title": "Circulating Thrombospondin-2 and FGF-2 in Patients with Advanced Non-small Cell Lung Cancer: Correlation with Survival." }
{ "abstract": "OBJECTIVE\nIn patients with chronic kidney disease (CKD), aortic calcification is more frequent and severe and it is also predictive of adverse cardiovascular outcome. The aim of the present study was to characterize aortic calcification in renal compared with non-renal patients.\n\n\nMETHODS\nAortas of 31 patients with advanced CKD and of 31 age-and gender-matched controls were obtained at autopsy. Calcium and phosphorus content in the aorta was quantitated using x-ray analysis. The expression of calcification-promoting and calcification-inhibiting proteins was assessed using immunohistochemistry.\n\n\nRESULTS\nThe calcium and phosphorus content of the aorta was higher in CKD patients than in controls. Even in non-calcified aortic specimens of CKD, staining for Msx-2, BMP-2, bone sialo-protein, TNF-alpha and nitrotyrosine was significantly more marked compared to controls. The same proteins were immunodetected in calcified aortic specimens of both CKD and controls. In contrast, staining for transglutaminase-2 and Fetuin A was significantly reduced in CKD. Higher expression of cbfa-1 and Pit-1 was observed in all calcified aortas with no difference between CKD and controls. The expression of TNF-alpha, phospho-p38 and Msx-2 was correlated to the intensity of upregulation of BMP-2 and osteoblastic transdifferentiation by VSMC even in non-calcified areas of the aortas of CKD.\n\n\nCONCLUSION\nThe expression of markers characteristic for calcification is not different in calcified aorta of CKD patients compared to controls, but in CKD patients, evidence of inflammation, transformation to an osteoblastic phenotype and reduced expression of transglutaminase are also found even in non-calcified aorta.", "corpus_id": 11225470, "score": -1, "title": "Arterial calcification in patients with chronic kidney disease." }
{ "abstract": "We propose a novel general-purpose hybrid method comprising topic modeling and Class Association Rule Mining (CARM) for text classification in tandem. While topic modeling performs dimension reduction, the association rule mining aspect is handled by the Apriori and Frequent Pattern (FP)-growth algorithms, separately. In order to illustrate the effectiveness of the proposed method, malware prediction using two publicly available datasets of API calls has been performed. The proposed model has generated highly accurate class association rules and Area Under the Curve (AUC) compared to the extant models in the literature. With the help of a statistical significance test, it is concluded that the performances of both proposed hybrid models, i.e., topic modeling with FP-growth and Apriori, are the same.", "corpus_id": 52934073, "title": "A Hybrid Approach Using Topic Modeling and Class-Association Rule Mining for Text Classification: the Case of Malware Detection" }
{ "abstract": "Association and classification are two important tasks in data mining. Literature abounds with works that unify these two techniques. This paper presents a new algorithm called Particle Swarm Optimization trained Classification Association Rule Mining (PSOCARM) for associative classification that generates class association rules (CARs) from transactional database by formulating a combinatorial global optimization problem, without having to specify minimal support and confidence unlike other conventional associative classifiers. We devised a new rule pruning scheme in order to reduce the number of rules and increasing the generalization aspect of the classifier. We demonstrated its effectiveness for phishing email and phishing website detection. Our experimental results indicate the superiority of our proposed algorithm with respect to accuracy and the number of rules generated as compared to the state-of-the-art algorithms.", "corpus_id": 16149402, "title": "Particle Swarm Optimization Trained Class Association Rule Mining: Application to Phishing Detection" }
{ "abstract": "In this paper, we propose QUANTMINER, a mining quantitative association rules system. This system is based on a genetic algorithm that dynamically discovers \"good\" intervals in association rules by optimizing both the support and the confidence. The experiments on real and artificial databases have shown the usefulness of QUANTMINER as an interactive data mining tool.", "corpus_id": 2403061, "score": -1, "title": "QuantMiner: A Genetic Algorithm for Mining Quantitative Association Rules" }
{ "abstract": "We report a case of palatal pyogenic granuloma following mucogingival surgery for alveolar socket preservation. A 24-year-old systemically healthy female underwent a pediculated palatal pedicle graft procedure to achieve soft tissue augmentation over a grafted maxillary anterior extraction site. After 1 month, a 15 mm × 20 mm exophytic growth extending from the palatal donor site to distance of 3–4 mm from the extraction socket was observed. After obtaining the subject's consent, local anesthesia was administered and the growth was excised from the base. On histopathological examination, the findings suggestive of pyogenic granuloma were seen. Palatal pyogenic granuloma occurs rarely and the authors were unable to find the reports of pyogenic granuloma originating in the vicinity of a surgical wound after a pediculated connective tissue mucogingival procedure. Healing plays a vital role in mucogingival procedures, and thus, it is very important to know about the complications affecting this important cascade of events. Failing to consider potential sources of irritation or trauma at the surgical site may lead to considerable morbidity even in sites that may heal without any untoward complications.", "corpus_id": 201158493, "title": "Palatal pyogenic granuloma: An unusual complication following mucogingival surgery for alveolar socket preservation" }
{ "abstract": "Pyogenic granuloma (PG) is a benign non-neoplastic mucocutaneous lesion. It is a reactional response to constant minor trauma and might be related to hormonal changes. In the mouth, PG is manifested as a sessile or pedunculated, resilient, erythematous, exophytic and painful papule or nodule with a smooth or lobulated surface that bleeds easily. PG preferentially affects the gingiva, but may also occur on the lips, tongue, oral mucosa and palate. The most common treatment is surgical excision. This paper describes a mucocutaneous PG on the upper lip, analyzing the clinical characteristics and discussing the features that distinguish this lesion from other similar oral mucosa lesions. The diagnosis of oral lesions is complex and leads the dentist to consider distinct lesions with different diagnostic methods. This case report with a 4 year-follow-up calls the attention to the uncommon mucocutaneous labial location of PG and to the fact that surgical excision is the safest method for diagnosis and treatment of PG of the lip, even when involving the mucosa and skin", "corpus_id": 1285594, "title": "Pyogenic granuloma on the upper lip: an unusual location" }
{ "abstract": "We quantify the benefits of intra-channel nonlinear compensation in meshed optical networks, in view of network configuration, fibre design aspect, and dispersion management. We report that for a WDM optical transport network employing flexible 28Gbaud PM-mQAM transponders with no in-line dispersion compensation, intra-channel nonlinear compensation, for PM-16QAM through traffic, offers significant improvements of up to 4dB in nonlinear tolerance (Q-factor) irrespective of the co-propagating modulation format, and that this benefit is further enhanced (1.5dB) by increasing local link dispersion. For dispersion managed links, we further report that advantages of intra-channel nonlinear compensation increase with in-line dispersion compensation ratio, with 1.5dB improvements after 95% in-line dispersion compensation, compared to uncompensated transmission.", "corpus_id": 11348724, "score": -1, "title": "Intra-channel nonlinearity compensation for PM-16 QAM traffic co-propagating with 28 Gbaud m-ary QAM neighbours." }
{ "abstract": "Continuing part 1, the largest claims reinsurance treaties are reconsidered. Two approaches for estimating a certain main part of the loading are given. For the first approach certain bounds are derived; for the second, the Monte-Carlo integration method is adapted. The second, less practicable approach can be used for finding adequate mixing coefficients for the first, quite practicable approach.", "corpus_id": 198995708, "title": "ON THE LOADING OF LARGEST CLAIMS REINSURANCE COVERS" }
{ "abstract": "Abstract The general reinsurance treaty based on ordered claims, as defined in Kremer (1982, 1984a,b), is investigated and general premium formulae are given for a finite collective. Under additional assumptions simple formulae are stated for the net premium. The content of the paper is mainly of theoretical interest.", "corpus_id": 153651922, "title": "Finite formulae for the premium of the general reinsurance treaty based on ordered claims" }
{ "abstract": "Abstract For a general class of reinsurance treaties the author gives an upper bound for the net premium. This result can be seen as the counterpart to a premium bound for the classical stop-loss reinsurance cover (see Bowers, 1969). For some special cases some preliminary work can be found in Kremer (1983).", "corpus_id": 55456001, "score": -1, "title": "A General Bound for the Net Premium of the Largest Claims Reinsurance Covers" }
{ "abstract": "The interest in asteroids is increasing, due to their promising clues on the origin of the Solar System, e.g. their albedos and sizes can provide insight into the protosolar nebula. The new scientific mission scheduled to launch in 2021, the James Webb Space Telescope (JWST), provides an opportunity for observation of asteroids. In this thesis, the feasibility of using the Mid-Infrared Instrument's (MIRI) imager on the JWST for serendipitous characterization of asteroids is evaluated. The imager and the medium resolution spectrometer (MRS) run simultaneously. When MRS is running, imager data could be used for detection of asteroids in the infrared, where the error in size is only 10%, compared to 100% in the optical, because the thermal radiation is independent of the albedo. Combining the optical and infrared, the albedo can be determined. This prospective use is researched by simulating the sensitivity of the imager. A tool, created during this thesis, simulates realistic cases using a proposal (GTO 1282) and determines the known asteroids in the field of view (FOV) of the imager. MIRISim, created by the MIRI European Consortium, simulates their signatures. The results show a 4.1% chance of an asteroid appearing in the FOV and a 96.9% probability of detecting such an asteroid. In a typical MIRI observation (exposure time of 488.4 s, with the preferred filter F1280W), the imager can detect asteroids bigger than 250 m in diameter and closer than 3 AU. This could lead to the detection of 733 256 yet undetected asteroids and 183 314 currently known asteroids in the lifetime of JWST.", "corpus_id": 54993277, "title": "Asteroid characterization through serendipitous detection by the Mid-Infrared Instrument on the James Webb Space Telescope" }
{ "abstract": "In the past, the signal-to-noise of a chromatographic peak determined from a single measurement has served as a convenient figure of merit used to compare the performance of two different MS systems. Design evolution of mass spectrometry instrumentation has resulted in very low noise systems that have made the comparison of performance based upon signal-to-noise increasingly difficult, and in some modes of operation impossible. This is especially true when using ultra-low noise modes such as high resolution mass spectrometry or tandem MS, where there are often no ions in the background and the noise is essentially zero. Statistical methodology commonly used to establish method detection limits for trace analysis in complex matrices as a means of characterizing instrument performance is shown to be valid for high and low background noise conditions.", "corpus_id": 6094948, "title": "Signal, Noise, and Detection Limits in Mass Spectrometry" }
{ "abstract": "The presence of aromatic amines in the environment has been in the focus of research, as many of these compounds are known or suspected mutagens and carcinogens. To facilitate the detection of aromatic amines in complex environmental samples by LC-high resolution mass spectrometry, an on-line-post-column and a pre-column derivatization method to label (in an ideal case) all aromatic amines was evaluated by applying different derivatization reagents. 4-Fluoro-7-nitro-2,1,3-benzoxadiazole (NBD-F) was found to be the most promising labeling reagent due to its high reactivity with both primary and secondary amines and its low signal in positive mode electrospray ionization (ESI+). Post-column on-line derivatization did not result in sufficient signal intensities of derivatives. With pre-column derivatization most of the selected aromatic amines resulted in a derivative that shows common fragments of diagnostic value. The selectivity of NBD-F was studied in depth with a data set of 220 compounds with different functional groups showing that also aliphatic amines and some thiols yield a derivative. The developed method was successfully applied to wastewater effluent samples and several derivatives were confirmed by diagnostic neutral losses.", "corpus_id": 3292433, "score": -1, "title": "Nontargeted detection and identification of (aromatic) amines in environmental samples based on diagnostic derivatization and LC-high resolution mass spectrometry." }
{ "abstract": "We aimed to evaluate the impact of renin–angiotensin system (RAS) inhibitors on outcomes after transcatheter aortic valve replacement (TAVR).", "corpus_id": 216045815, "title": "Impact of renin–angiotensin system inhibitors on outcomes after transcatheter aortic valve replacement: A meta‐analysis" }
{ "abstract": "OBJECTIVES\nThe aim of this study was to assess the incidence, prognostic impact, and predictive factors of readmission for congestive heart failure (CHF) in patients with severe aortic stenosis treated by transcatheter aortic valve replacement (TAVR).\n\n\nBACKGROUND\nTAVR is indicated in patients with severe symptomatic aortic stenosis in whom surgery is considered high risk or is contraindicated. Readmission for CHF after TAVR remains a challenge, and data on prognostic and predictive factors are lacking.\n\n\nMETHODS\nAll patients who underwent TAVR from January 2010 to December 2014 were included. Follow-up was achieved for at least 1 year and included clinical and echocardiographic data. Readmission for CHF was analyzed retrospectively.\n\n\nRESULTS\nThis study included 546 patients, 534 (97.8%) of whom were implanted with balloon-expandable valves preferentially via the transfemoral approach in 87.8% of cases. After 1 year, 285 patients (52.2%) had been readmitted at least once, 132 (24.1%) for CHF. Patients readmitted for CHF had an increased risk for death (p < 0.0001) and cardiac death (p < 0.0001) compared with those not readmitted for CHF. On multivariate analysis, aortic mean gradient (hazard ratio [HR]: 0.88; 95% confidence interval [CI]: 0.79 to 0.99; p = 0.03), post-procedural blood transfusion (HR: 2.27; 95% CI: 1.13 to 5.56; p = 0.009), severe post-procedural pulmonary hypertension (HR: 1.04; 95% CI: 1.00 to 1.07; p < 0.0001), and left atrial diameter (HR: 1.47; 95% CI: 1.08 to 2.01; p = 0.02) were independently associated with CHF readmission at 1 year.\n\n\nCONCLUSIONS\nReadmission for CHF after TAVR was frequent and was strongly associated with 1-year mortality. Low gradient, persistent pulmonary hypertension, left atrial dilatation, and transfusions were predictive of readmission for CHF.", "corpus_id": 2260870, "title": "Incidence, Prognostic Impact, and Predictive Factors of Readmission for Heart Failure After Transcatheter Aortic Valve Replacement." }
{ "abstract": "Objectives: Transcatheter aortic valve implantation (TAVI) is often undertaken in the oldest, frailest cohort of patients undergoing cardiac interventions. We plan to investigate the potential benefit of cardiac rehabilitation (CR) in this vulnerable population. Design: We undertook a pilot randomised trial of CR following TAVI to inform the feasibility and design of a future randomised clinical trial (RCT). Participants: We screened patients undergoing TAVI at a single institution between June 2016 and February 2017. Interventions: Participants were randomised post-TAVI to standard of care (control group) or standard of care plus exercise-based CR (intervention group). Outcomes: We assessed recruitment and attrition rates, uptake of CR, and explored changes in 6-min walk test, Nottingham Activities of Daily Living, Fried and Edmonton Frailty scores and Hospital Anxiety and Depression Score, from baseline (30 days post TAVI) to 3 and 6 months post randomisation. We also undertook a parallel study to assess the use of the Kansas City Cardiomyopathy Questionnaire (KCCQ) in the post-TAVI population. Results: Of 82 patients screened, 52 met the inclusion criteria and 27 were recruited (3 patients/month). In the intervention group, 10/13 (77%) completed the prescribed course of 6 sessions of CR (mean number of sessions attended 7.5, SD 4.25) over 6 weeks. At 6 months, all participants were retained for follow-up. There was apparent improvement in outcome scores at 3 and 6 months in control and CR groups. There were no recorded adverse events associated with the intervention of CR. The KCCQ was well accepted in 38 post-TAVI patients: mean summary score 72.6 (SD 22.6). Conclusions: We have demonstrated the feasibility of recruiting post-TAVI patients into a randomised trial of CR. We will use the findings of this pilot trial to design a fully powered multicentre RCT to inform the provision of CR and support guideline development to optimise health-related quality of life outcomes in this vulnerable population. Trial registration: ClinicalTrials.gov identifier NCT02921880, retrospectively registered 3rd October 2016.", "corpus_id": 56177632, "score": -1, "title": "Cardiac rehabilitation to improve health-related quality of life following trans-catheter aortic valve implantation: a randomised controlled feasibility study" }
{ "abstract": "In animal models increased vagal outflow has been shown to play a major role in the initiation and the maintenance of atrial fibrillation (AF), but the role of the autonomic nervous system in the genesis and maintenance of clinical AF has not been well established. This research was designed to assess the role of the autonomic nervous system in the initiation, maintenance and recurrence of clinical AF episodes by measuring various indexes of heart rate (HR) variability in relation to the occurrence and duration of clinical AF episodes. The study population consisted of patients for whom 24-hour ECG recordings were performed because of clinical reasons, and of 116 consecutive patients who were treated with transthoracic electrical cardioversion due to persistent AF (>3 month). HR variability was initially analyzed in 20-minute intervals before 62 episodes of AF in 22 patients with lone AF, and then in 15-minute periods both in patients with structural heart disease (n=35) and in patients with lone AF (n=28). HR variability was analyzed from the entire recording in 78 patients after restoration of sinus rhythm with cardioversion. HR turbulence after atrial ectopic beats located 0 to 60 min before the onset of AF episodes was compared with the means of HR turbulence after atrial ectopic beats by hour in the rest of the recording in 39 patients with structural heart disease and in 29 patients with lone AF. Traditional time and frequency domain measures of HR variability showed no significant changes before the onset of AF. However, a progressive decrease occurred both in the approximate entropy (ApEn) (p<0.001) and short-term scaling exponent values (α) (p<0.001) before the AF episodes in patients without structural heart diseases. 
In the analysis of possible relationship between the duration of AF and the HR variability preceding the AF, the high-frequency (HF) spectral component of HR variability was observed to be higher (p<0.0001) and low-frequency (LF) component lower (p<0.0001) before long (>200 s, n=41) compared to short (<200 s, n=51) AF episodes in patients with lone AF. After restoration of sinus rhythm with cardioversion in patients with recurrence of AF during one month, all power spectral components except the ultra-low-frequency power were increased. An increased HF spectral component specifically predicted the early recurrence of AF. Turbulence onset was significantly higher during one hour before the AF than during the other hours of the recording, both in patients with structural heart diseases and in patients with lone AF (p<0.0001 for both). In conclusion, specific changes in HR variability patterns are related to spontaneous onset, maintenance and recurrence of clinical AF episodes: 1) a decrease in the complexity of R-R intervals is a common phenomenon preceding the spontaneous onset of clinical AF episodes; 2) altered HR variability, reflecting changes", "corpus_id": 22344600, "title": "Heart rate dynamics and clinical episodes of atrial fibrillation" }
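The approximate entropy (ApEn) index used above can be sketched in a few lines. This is a minimal illustration of the standard Pincus ApEn definition with common default parameters (m = 2, tolerance r = 0.2·SD); it is not the study's actual analysis pipeline, and the signals below are synthetic stand-ins for R-R interval series.

```python
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    """Approximate entropy (ApEn) of a 1-D series.

    Minimal sketch of the Pincus ApEn definition; m and the tolerance
    factor are common defaults, not the study's actual settings."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    r = r_factor * np.std(x)

    def phi(mm):
        # all overlapping mm-length templates of the series
        emb = np.array([x[i:i + mm] for i in range(N - mm + 1)])
        # Chebyshev distance between every pair of templates
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        # fraction of templates within tolerance r of each template
        C = np.mean(d <= r, axis=1)
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)

# a regular signal scores lower (more predictable) than white noise
rr_regular = np.sin(np.linspace(0, 8 * np.pi, 300))
rr_noise = np.random.default_rng(0).standard_normal(300)
```

A fall in ApEn, as reported before AF onset, means the R-R series has become more regular and predictable.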
{ "abstract": "BACKGROUND\nStructural and electrophysiological changes of the atria occur with prolonged rapid rates; however, the effects of sustained atrial fibrillation (AF) on autonomic innervation of the atria are unknown. We hypothesized that electrophysiological remodeling from rapid atrial rates is accompanied by altered atrial autonomic innervation.\n\n\nMETHODS AND RESULTS\nSix dogs (paced group) underwent atrial pacing at 600 bpm; 9 dogs (control animals) were not paced. All paced dogs developed sustained AF by week 4 of pacing. All 15 animals underwent positron emission tomography imaging of the atria with [C-11] hydroxyephedrine (HED) to label sympathetic nerve terminals. HED retention in the atria was significantly greater in paced dogs compared with control animals (P=0.03). Tissue samples from the atrial appendages had a greater concentration of norepinephrine in paced animals than in control animals (P=0.01). The coefficient of variation of HED retention was also greater in paced animals (P=0.05) and was greater in the right atrium than in the left atrium (P=0.004). Epicardial activation maps of AF were obtained in the paced animals at baseline and with autonomic manipulation. Mean AF cycle length was longer in the right atrium (109.2+/-5 ms) than in the left atrium (85.8+/-5.5 ms) at baseline (P=0.005). AF cycle length did not vary significantly from baseline (97.6+/-13.4 ms) with stellate stimulation (100.5+/-6 ms) but lengthened with propranolol (107.5+/-6.1 ms, P=0.03).\n\n\nCONCLUSIONS\nRapid rates of AF produce a heterogeneous increase in atrial sympathetic innervation. These changes parallel disparate effects of rapid pacing-induced AF on atrial electrophysiology.", "corpus_id": 2383194, "title": "Atrial fibrillation produced by prolonged rapid atrial pacing is associated with heterogeneous changes in atrial sympathetic innervation." }
{ "abstract": "Ample experimental evidence exists to support the hypothesis that neural activity to the heart modulates the development of cardiac arrhythmias, and may be related to the genesis of sudden cardiac death. The common intermediary mechanisms relate to the actions of efferent vagal and sympathetic fibers on electrophysiologic properties of the heart. In this report we will review the present animal and clinical evidence to support the above hypothesis. Present knowledge from animal studies. Sympathetic-parasympathetic interactions. The ventricular arrhythmias that may lead to sudden cardiac death are influenced by both divisions of the autonomic system. When the two systems are simultaneously active, the resulting cardiac effects are not algebraically additive. In fact, complex interactions have been found to operate.1 In a study by Kolman et al.,2 for example, stimulation of the cardiac sympathetic nerves alone increased the vulnerability to ventricular fibrillation (VF) substantially. Vagal stimulation alone, on the other hand, had no significant effect on the vulnerability to VF. However, when the same vagal stimulation was given in the presence of tonic sympathetic stimulation, the vagal activity substantially attenuated the increased vulnerability produced by the sympathetic stimulation. The sympathetic-parasympathetic interactions in the heart occur both at prejunctional and at postjunctional levels. Prejunctionally, the acetylcholine (ACh) released at the vagal endings diminishes the release of norepinephrine (NE) from neighboring sympathetic nerve terminals.3,4 Postjunctionally, the neurotransmitters (ACh and NE) interact with specific receptors on the cardiac effector cell membranes. 
The postjunctional effects involve an inhibition of adenylate cyclase, mediated by the interaction between ACh and the muscarinic receptor, and a facilitation of the same enzyme, mediated by the interaction between NE and the β-adrenergic receptor.5 Both prejunctional and postjunctional sympathetic-parasympathetic interactions are important in the control of heart rate and atrioventricular nodal conduction.6 Behavioral modulation of the autonomic nervous system. Substantial evidence indicates that sympathetic-parasympathetic interactions regulate myocardial electrical stability in conscious animals.7,8 Specifically, catecholamine levels are elevated when animals are exposed to an aversive environment. Vagal efferent blockade with atropine in these animals results in a substantial increase in vulnerability to develop VF. In a nonstressful setting, plasma catecholamine levels are low and vagal blockade is then without influence on susceptibility to developing VF. Furthermore, when β-adrenergic blockade is induced with practolol, interruption of parasympathetic input does not influence ventricular vulnerability in the stressed, conscious animal. Thus, as in anesthetized animals,9 tonic vagal activity can significantly alter the propensity to develop VF. This effect is directly related to the prevailing level of adrenergic input. The neural effects of stress on the heart appear to be mediated via a thalamic gating mechanism. This view is based on the findings in pigs that cryogenic blockade of this system or of its output from the frontal cortex to the brainstem delays or prevents the occurrence of VF. However, the precise relationship between central nervous system activity, the pattern of autonomic nervous system activity, and the genesis of cardiac arrhythmias remains to be defined. Influence of myocardial injury on the ventricular innervation. 
The sympathetic and vagal pathways of ventricular innervation may have important implications regarding arrhythmogenesis. Because a strategically placed myocardial lesion may interrupt neural transmission via axons that pass through the lesion and denervate the uninvolved myocardium \"downstream,\"11 12 it is important to understand pathways of innervation. Although some sympathetic nerves innervate specific sites in the ventricle,13 the anterior left ventricle receives its major afferent and efferent sympathetic nerve supply from nerves that course in the subepicardium.14 15 Sympathetic fibers probably dive transmurally to innervate the endocardium. Vagal fibers, in contrast, cross the atrioventricular groove in the superficial subepicardium, but they then penetrate the myocardium and are located intramurally or suben-", "corpus_id": 214739166, "score": -1, "title": "Task Force 2 : Sudden cardiac death Neural-cardiac interactions" }
{ "abstract": "In this paper, we extend the geometrical one-ring multiple-input multiple-output (MIMO) channel model with respect to frequency selectivity. Our approach enables the design of efficient and accurate simulation models for wideband space-time MIMO channels under isotropic scattering conditions. Two methods will be provided to compute the parameters of the simulation model. In particular, the temporal, frequency and spatial correlation properties of the proposed wideband space-time MIMO channel simulator are studied analytically. It is shown that any given specified or measured discrete power delay profile (PDP) can be incorporated into the simulation model. The high accuracy of the simulation model is demonstrated by comparing its statistical properties with those of the underlying reference model with specified correlation properties in the time, frequency and spatial domain. As an application example of the new MIMO frequency-selective fading channel model, we study the influence of various channel model parameters on the system performance of a space-time coded orthogonal frequency division multiplexing (OFDM) system. For example, we investigate the influence of the antenna element spacings of the base station (BS) antenna as well as the mobile station (MS) antenna. It turns out that an increase of the antenna element spacing at the BS side results in a higher diversity gain than an increase of the antenna element spacing at the MS side. Furthermore, the diversity gain brought in by space-time block coding schemes is investigated by simulation. Our results show that transmitter diversity can significantly reduce the symbol error rate (SER) of multiple antenna systems. Finally, the influence of the Doppler effect and the impact of imperfect channel state information (CSI) on the system performance is also investigated. Copyright © 2009 John Wiley & Sons, Ltd. 
\n \nThis work has been presented in part at the 2006 IEEE Semiannual Vehicular Technology Conference, IEEE VTC 2006-Fall, Montreal, Canada, September 2006. \n \nWe extend the geometrical one-ring multiple-input multiple-output (MIMO) channel model with respect to frequency selectivity. Our approach enables the design of efficient and accurate simulation models for wideband space-time MIMO channels under isotropic scattering conditions. Two methods have been provided to compute the parameters of the simulation model. Especially, the temporal, frequency and spatial correlation properties of the proposed wideband space-time MIMO channel simulator are studied analytically. As an application example of the new MIMO frequency-selective fading channel model, we study the influence of various channel model parameters on the system performance of a space-time coded orthogonal frequency division multiplexing (OFDM) system.", "corpus_id": 9485123, "title": "A novel wideband space-time channel simulator based on the geometrical one-ring model with applications in MIMO-OFDM systems" }
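The idea of incorporating a specified discrete power delay profile into a wideband channel simulator can be illustrated with a toy tapped-delay-line model. All parameters below (tap delays, the exponential PDP, probe frequencies) are invented for illustration, and plain complex-Gaussian tap gains stand in for the paper's sum-of-sinusoids processes:

```python
import numpy as np

def freq_response(tap_powers, tap_delays, freqs, rng):
    """One Rayleigh realization per tap; returns H(f) of the tapped-delay line."""
    L = len(tap_powers)
    gains = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
    gains *= np.sqrt(tap_powers)
    # H(f) = sum_l g_l exp(-j 2 pi f tau_l)
    return np.exp(-2j * np.pi * np.outer(freqs, tap_delays)) @ gains

rng = np.random.default_rng(1)
tau = np.arange(6) * 1e-7                   # tap delays 0..500 ns (assumed)
pdp = np.exp(-tau / 2e-7)                   # exponential PDP (assumed)
pdp /= pdp.sum()                            # normalize total power to 1
f = np.linspace(0.0, 5e6, 64)               # probe frequencies up to 5 MHz

H = np.array([freq_response(pdp, tau, f, rng) for _ in range(2000)])
# empirical frequency correlation relative to the first probe frequency:
# it decays with growing frequency separation, i.e. the channel is
# frequency selective with a coherence bandwidth set by the delay spread
corr = np.abs(np.mean(H * np.conj(H[:, :1]), axis=0))
```

The frequency correlation function of the model is the Fourier transform of the PDP, so any measured profile can be plugged into `pdp` and `tau` directly.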
{ "abstract": "This paper deals with the design of a set of multiple uncorrelated Rayleigh fading waveforms. The Rayleigh fading waveforms are mutually uncorrelated, but each waveform is correlated in time. The waveforms are generated by using the sum-of-sinusoids principle. Two new closed-form solutions are presented for the computation of the model parameters. Analytical and numerical results show that the resulting sum-of-sinusoids-based channel simulator fulfills all main requirements imposed by the reference model with given correlation properties derived under two-dimensional isotropic scattering conditions. The proposed methods are useful for the design of simulation models for diversity-combined Rayleigh fading channels, frequency-selective channels, and multiple-input multiple-output (MIMO) channels", "corpus_id": 1473088, "title": "Two New Methods for the Generation of Multiple Uncorrelated Rayleigh Fading Waveforms" }
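The sum-of-sinusoids principle described above can be sketched as follows. This is a generic Monte-Carlo variant with independent random angles of arrival and phases per waveform, not the paper's two closed-form parameter-computation methods; N, f_max, and the sampling settings are illustrative:

```python
import numpy as np

def sos_rayleigh(num_waveforms, N=20, f_max=91.0, fs=1000.0, dur=2.0, seed=0):
    """Generate mutually (nearly) uncorrelated Rayleigh fading waveforms
    by summing N complex sinusoids per waveform. Independent random
    angles of arrival and phases per waveform provide the decorrelation."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, dur, 1.0 / fs)
    out = []
    for _ in range(num_waveforms):
        alpha = 2 * np.pi * (np.arange(N) + rng.random(N)) / N  # arrival angles
        theta = 2 * np.pi * rng.random(N)                       # phases
        f_n = f_max * np.cos(alpha)                             # Doppler shifts
        phase = 2 * np.pi * np.outer(t, f_n) + theta
        out.append(np.exp(1j * phase).sum(axis=1) / np.sqrt(N))
    return t, np.array(out)

t, g = sos_rayleigh(3)
# |g_k(t)| is approximately Rayleigh distributed with E[|g|^2] = 1
```

Each waveform is correlated in time through its deterministic Doppler frequencies, while the per-waveform random parameter sets make distinct waveforms mutually uncorrelated in expectation.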
{ "abstract": "The idea of unintended consequences of social action constitutes one of the core meta-assumptions of new economic sociology. Yet neither its US nor its European branch seem to make a direct statement about this. This state of affairs appears to be the result of various cumulative circumstances, such as the role played by the competition from other meta-assumptions which address similar or related issues and the rather general treatment of the unintended consequences within the field. This article takes a closer look and tries to establish whether the approach to the unintended in the US and European new economic sociologies is indeed so general. It concludes that the source of the low visibility of the unintended consequences as a fundamental problem for new economic sociology is not the fact that this is not granted proper systematization. The problem rather lies in the lack of awareness and cumulative knowledge about the unintended consequences as a main sociological problem that was already taken up in sociology.", "corpus_id": 148861376, "score": -1, "title": "The unintended consequences in new economic sociology: Why still not taken seriously?" }
{ "abstract": "In the present work, questions concerning important homogeneously catalyzed reactions and the synthesis of functionalized nanoparticles were addressed using various spectroscopic methods. X-ray absorption spectroscopy (XAS) served as the principal method. Where possible, the measurements were complemented by other spectroscopic methods such as UV-Vis or Raman spectroscopy. The XAS measurements were carried out at the storage rings in Hamburg (Hamburger Synchrotronstrahlungslabor) and Karlsruhe (Angströmquelle Karlsruhe). \nFrom the field of homogeneous catalysis, the copper-catalyzed Michael addition and the palladium/copper-catalyzed Sonogashira cross-coupling were investigated. Precise knowledge of the reaction mechanism and of possible catalyst deactivation steps can be very helpful in optimizing catalytic processes, since the insights gained can be applied to the synthesis of new, optimized catalysts. \nThe Cu(II) acetate-catalyzed Michael addition of chiral enamines to vinyl ketones investigated here was published in 2002 by Christoffers et al. and is one of the most effective routes to quaternary stereocenters. Intermediates of the catalytic cycle were characterized by a combination of EXAFS, XANES, UV-Vis, and Raman spectroscopy, and an LC-XANES fit revealed why the reaction rate and enantioselectivity are lower when Cu(II) chloride is used: most of the copper is reduced to a linearly coordinated copper(I) complex, which most probably has no catalytic activity. \nIn the second reaction investigated, the Sonogashira cross-coupling of bromoacetophenone and phenylacetylene, intermediates of the reaction and the mechanism of formation of the catalytically active Pd(0) species were elucidated. 
Besides X-ray absorption spectroscopy, UV-Vis and Raman spectroscopy were also employed here. In addition to the palladium measurements, the reaction was investigated at the copper K-edge, which clarified the role of copper in the formation of the transmetalation species, confirmed the mechanism postulated in the literature, and elucidated the structures of the species formed. \n \nIn the second part of the work, the synthesis of amine-functionalized gold, palladium, and silver nanoparticles by the two-phase Leff method was investigated. Each individual step of the synthesis was examined for possible changes in oxidation state and in the local environment of the noble-metal backscatterer. Furthermore, the chain length of the alkylamine used for functionalization was varied systematically to analyze its influence on the resulting coordination complexes and on the size of the final particles. The particle size was determined from the coordination number of the first shell, assuming a cuboctahedron as the geometric model. The ligand size could be correlated with the resulting particle size: for all metals, the size decreased with increasing chain length, the effect being most pronounced for gold. For the hexylamine-, dodecylamine-, and octadecylamine-functionalized particles, the individual metals gave average sizes of 1.5 nm (palladium), 2.9 nm (gold), and 3.3 nm (silver). UV-Vis spectroscopy was used as a further method of particle-size determination, since gold and silver particles show a characteristic plasmon absorption in the UV-Vis spectrum whose position and full width at half maximum depend on the particle diameter. 
\n \nThe present work deals with the investigation of important homogeneously catalyzed reactions and the synthesis of functionalized metal nanoparticles, using various spectroscopic methods. X-ray absorption spectroscopy (XAS) was employed as the main method in these investigations. Additionally, other suitable spectroscopic methods like UV-Vis or Raman spectroscopy were also employed where possible. The XAS measurements were performed at the synchrotron facilities in Hamburg (Hamburger Synchrotronstrahlungslabor) and Karlsruhe (Angströmquelle Karlsruhe). \nIn the field of homogeneous catalysis, the copper-catalyzed Michael addition and the palladium/copper-catalyzed Sonogashira cross-coupling have been investigated. A detailed knowledge of the reaction mechanism and of catalyst deactivation can be very helpful for optimizing catalytic processes, because the knowledge gained can be used in the synthesis of new and optimized catalysts. \nThe copper(II) acetate-catalyzed Michael addition of chiral enamines with vinyl ketones investigated here was first published in 2002 by Christoffers et al. and is one of the most important methods to build up quaternary stereocenters. With a combination of EXAFS (Extended X-Ray Absorption Fine Structure), XANES (X-Ray Absorption Near Edge Structure), UV-Vis (Ultraviolet-Visible) and Raman spectroscopy, it was possible to characterize intermediate steps of the reaction cycle, and an LC-XANES fit made it possible to elucidate the reason for the decreased reaction rate and enantioselectivity when copper(II) chloride is employed: a large amount of the copper(II) is reduced and forms a linearly coordinated copper(I) complex which is most probably catalytically inactive. The second reaction investigated is the Sonogashira cross-coupling of bromoacetophenone and phenylacetylene. Here it was possible to reveal intermediate species and the mechanism of the formation of the catalytically active palladium(0) species with a combination of XAS, UV-Vis and Raman spectroscopy. 
Additionally, the reaction was investigated using X-ray absorption spectroscopy (XAS) at the copper K-edge. With this investigation it was possible to reveal the role of copper in the formation of the transmetalation species, to confirm the mechanism proposed in the literature, and to elucidate the structures of the species formed. \n \nIn the second part of this thesis, the synthesis of amine-functionalized gold, silver, and palladium nanoparticles according to the two-phase Leff method was investigated. Each reaction step was analyzed with regard to possible changes in the oxidation state or the local environment of the noble-metal absorber. Additionally, the chain length of the alkylamine used for functionalization was varied systematically to examine the effect on the resulting complexes and on the size of the reduced particles. The particle size was determined by analyzing the coordination number of the first gold shell, assuming a cuboctahedral structural motif. The size of the ligand could be correlated with the resulting particle size, and a decreasing particle size with increasing chain length was found for all metals studied here. The metals form hexylamine-, dodecylamine-, and octadecylamine-functionalized particles with average sizes of 1.5 nm (palladium), 2.9 nm (gold), and 3.3 nm (silver). As a complementary method for determining the particle size, UV-Vis spectroscopy was employed for the gold and silver particles, which show a characteristic plasmon absorption. The position and the full width at half maximum (FWHM) of the plasmon band depend on the particle diameter.", "corpus_id": 94349498, "title": "Röntgenabsorptionsspektroskopische Untersuchungen von Metallkomplexen in homogen katalysierten Reaktionen und funktionalisierten Edelmetallnanopartikeln in Lösung" }
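The size-from-coordination-number argument can be illustrated numerically. The sketch below builds a spherical fcc cluster (a simplification of the cuboctahedral model assumed in the thesis) and shows how the mean first-shell coordination number rises toward the bulk fcc value of 12 as the cluster grows; all distances are in units of the nearest-neighbour spacing and all numbers are purely illustrative:

```python
import numpy as np
from itertools import product

def fcc_cluster(radius):
    """All fcc lattice sites within `radius` of the origin
    (nearest-neighbour distance = 1)."""
    a = np.sqrt(2.0)                                    # cubic lattice constant
    n = int(radius / a) + 2
    basis = np.array([[0, 0, 0], [0, .5, .5], [.5, 0, .5], [.5, .5, 0]])
    cells = np.array(list(product(range(-n, n + 1), repeat=3)), dtype=float)
    pts = (cells[:, None, :] + basis[None, :, :]).reshape(-1, 3) * a
    return pts[np.linalg.norm(pts, axis=1) <= radius]

def mean_cn(pts):
    """Mean number of neighbours at the nearest-neighbour distance."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    return float(np.mean(np.sum(np.abs(d - 1.0) < 1e-6, axis=1)))

cn_small = mean_cn(fcc_cluster(2.0))   # small particle: many surface atoms
cn_large = mean_cn(fcc_cluster(5.0))   # larger particle: CN approaches 12
```

Inverting this monotonic CN-versus-radius relation is what allows an EXAFS first-shell coordination number to be converted into a particle diameter.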
{ "abstract": "Dialkyl amides of L-valine, L-isoleucine, and L-tert-leucine (2) are excellent chiral auxiliaries for the construction of quaternary stereocenters at ambient temperature. Enaminoesters 3, prepared from these auxiliaries 2 and Michael donors 1, undergo a copper-catalyzed asymmetric Michael reaction with methyl vinyl ketone (MVK, 4) to afford products 5 in 70-90% yield and 90-99% ee (enantiomeric excess). The exclusion of moisture or oxygen is not necessary. The auxiliaries 2 are readily available by standard procedures. After workup they can be recovered almost quantitatively.", "corpus_id": 1084035, "title": "New auxiliaries for copper-catalyzed asymmetric Michael reactions: generation of quaternary stereocenters at room temperature." }
{ "abstract": "Two new types of minor flavonoids, breviflavone A and B, have been recently isolated and identified from Epimedium brevicornu in our previous research. Breviflavone B is a novel flavonoid with potent and specific estrogen receptor (ER) bioactivity. Its positional isomer, breviflavone A, is not ER active. Therefore, it is important to determine the two minor components, breviflavone A and B, in Epimedium herbs. In this report, a robust method for measurement of the two breviflavones in Epimedium ethanolic extracts has been developed by using liquid chromatography tandem mass spectrometry via selected-reaction monitoring (m/z 437-->m/z 367 for breviflavone A and m/z 437-->m/z 351 for breviflavone B) under negative electrospray ionization mode. This method has been successfully used to determine the two breviflavones in ethanolic herbal extracts of five major Epimedium species (E. brevicornu, E. koreanum, E. pubescens, E. sagittatum, and E. wushanese) from various sources. The contents of the two breviflavones range from 0.0181 to 0.1791% for breviflavone A and 0.0026 to 0.0252% for breviflavone B in the dried ethanolic extracts of those Epimedium herbal samples.", "corpus_id": 7747605, "score": -1, "title": "Determination of breviflavone A and B in Epimedium herbs with liquid chromatography-tandem mass spectrometry." }
{ "abstract": "In mammals, cadmium is widely considered as a non-genotoxic carcinogen acting through a methylation-dependent epigenetic mechanism. Here, the effects of Cd treatment on the DNA methylation pattern are examined together with its effect on chromatin reconfiguration in Posidonia oceanica. DNA methylation level and pattern were analysed in actively growing organs under short-term (6 h) and long-term (2 d or 4 d) treatment with low (10 mM) and high (50 mM) doses of Cd, through a Methylation-Sensitive Amplification Polymorphism technique and an immunocytological approach, respectively. The expression of one member of the CHROMOMETHYLASE (CMT) family, a DNA methyltransferase, was also assessed by qRT-PCR. Nuclear chromatin ultrastructure was investigated by transmission electron microscopy. Cd treatment induced DNA hypermethylation, as well as an up-regulation of CMT, indicating that de novo methylation did indeed occur. Moreover, a high dose of Cd led to a progressive heterochromatinization of interphase nuclei, and apoptotic figures were also observed after long-term treatment. The data demonstrate that Cd perturbs the DNA methylation status through the involvement of a specific methyltransferase. Such changes are linked to nuclear chromatin reconfiguration, likely to establish a new balance of expressed/repressed chromatin. Overall, the data show an epigenetic basis to the mechanism underlying Cd toxicity in plants.", "corpus_id": 28208647, "title": "Xylem tissue specification, patterning, and differentiation mechanisms" }
{ "abstract": "Xylem tracheary elements (TEs) form hollow, sap-conducting tubes kept open by thickened ribs of secondary cell wall that provide the major structural element in wood. These ribs are enriched with cellulose and lignin, molecules that utilize more atmospheric CO(2) than any other biopolymer on Earth. The thickenings form characteristic patterns (e.g., spiral and pitted) that depend upon the bundling of underlying microtubules [1, 2]. To identify microtubule-associated proteins (MAPs) involved in patterning microtubules, we optimized an in vitro system for triggering single Arabidopsis cells to differentiate synchronously into TEs. From more than 200 microtubule-implicated proteins, AtMAP70-5 was the only MAP upregulated upon, and specific to, TE differentiation. It lines the borders of each microtubule bundle and forms C-shaped \"spacers\" between adjacent bundles. Manipulating levels of AtMAP70-5 and its binding partner AtMAP70-1 by overexpression or RNA interference (RNAi) silencing shifted the balance between the characteristic patterns. RNAi silencing produced stunted plants with disorganized vascular bundles. In culture, RNAi knockdown caused ribs of secondary cell wall, surrounded by microtubules, to invaginate and fall into the cytoplasm. These results suggest that AtMAP70-5 and AtMAP70-1 are essential for defining where secondary cell wall polymers are applied at the cell cortex in wood-forming cells.", "corpus_id": 2872481, "title": "The Microtubule-Associated Protein AtMAP70-5 Regulates Secondary Wall Patterning in Arabidopsis Wood Cells" }
{ "abstract": "Abstract The record of life on land or in non-marine environments during the Precambrian is sparse, limiting our ability to understand life outside of marine settings before the advent of animals. Stromatolites from such environments are known, but demonstrating stromatolite biogenicity remains difficult, as stromatolite growth can be controlled by a spectrum of biologic, chemical, and biologically-mediated processes. Stromatolites from the Mesoproterozoic (1.09 Ga) Copper Harbor Conglomerate, an alluvial fan, fluvial, and lacustrine deposit, offer an interesting test for the presence and nature of microbial life in shallow, Mesoproterozoic non-marine settings. Stromatolites from a siltstone facies are interpreted as biogenic, as they contain detrital-rich laminae, likely indicative of trapping and binding by microbes and fenestral fabrics suggestive of desiccation or lift-off structures in mats via the presence of gas (perhaps O2 from photosynthesis or other gases from mat decay). The stromatolites formed as microbial mats grew over a mudflat or sandflat with carbonate filled desiccation cracks on an erosive topography, and thus provide evidence for life in a very shallow, predominantly desiccated environment. Stromatolites from a conglomerate facies are microdigitate and record both isopachous laminae with radial-fibrous calcite fans and botryoids, typically considered abiogenic in origin, as well as wavy, conical laminations likely indicative of the former presence of microbial mats. The conglomerate-facies stromatolites are interpreted to have formed in a flooded braidplain setting with restricted circulation. This study supports the suggestion that microbial communities were abundant in non-marine environments in the Midcontinent Rift during the Mesoproterozoic. 
It also highlights how variable environmental factors can influence stromatolite growth, even in similar depositional settings and with a consistent microbial presence.", "corpus_id": 39726679, "score": -1, "title": "Early non-marine life: Evaluating the biogenicity of Mesoproterozoic fluvial-lacustrine stromatolites" }
{ "abstract": "The current development of ambulatory surgery depends on optimizing the patient's length of stay to the time strictly necessary and useful for the surgical procedure. Spinal anesthesia (SA) is a simple and effective technique that provides morphine sparing compared with general anesthesia. Until recently, only long-acting local anesthetics (LAs) were available. Chloroprocaine is a short-acting LA that has recently become available in France. Its value in ambulatory surgery has already been demonstrated, with a shorter recovery time of the sensory-motor block and faster discharge compared with bilateral SA with bupivacaine. The aim of this study was to evaluate whether bilateral SA with chloroprocaine shortens the time to home-readiness after short ambulatory lower-limb surgery compared with unilateral SA with bupivacaine, the technique usually employed in the ambulatory unit of Bordeaux University Hospital, which reduces LA doses and the side effects of SA. We conducted a prospective before/after study between December 2013 and July 2014, including all patients scheduled to receive SA for ambulatory orthopedic surgery of the lower limb with an expected duration of less than 40 min. The first group received unilateral SA with bupivacaine (6 to 8 mg) and the second bilateral SA with 40 mg of chloroprocaine. The primary endpoint was the time to home-readiness. Thirty-one patients were included in the bupivacaine group and 24 in the chloroprocaine group. The time to home-readiness was reduced by 75 min in the chloroprocaine group (272 ± 67 vs 197 ± 55 min, p < 0.001), and the time to resolution of the sensory-motor block was reduced by 99 min (235 ± 53 vs 136 ± 30 min, p < 0.001). Patients in the chloroprocaine group consumed significantly more nefopam postoperatively. 
There was no difference in the side effects or complications of SA. SA with chloroprocaine therefore significantly reduces the time to home-readiness compared with unilateral SA with bupivacaine, without increasing the risk of adverse effects, but it requires good organization of the operating room and anticipation of postoperative analgesia.", "corpus_id": 163176483, "title": "Rachianesthésie bilatérale à la chloroprocaïne versus rachianesthésie unilatérale à la bupivacaïne en chirurgie de courte durée du membre inférieur en ambulatoire" }
{ "abstract": "Background and Objectives Transient neurologic symptoms (TNS) have been reported to occur after 16% to 40% of ambulatory lidocaine spinal anesthetics. Patient discomfort and the possibility of underlying lidocaine neurotoxicity have prompted a search for alternative local anesthetic agents. We compared the incidence of TNS with procaine or lidocaine spinal anesthesia in a 2:1 dose ratio. Methods Seventy outpatients undergoing knee arthroscopy were blindly randomized to receive either 100 mg hyperbaric procaine or 50 mg hyperbaric lidocaine. An interview by a blinded investigator established the presence or absence of TNS, defined as pain in the buttocks or lower extremities beginning within 24 hours of surgery. Onset of sensory and motor block, patient discomfort, supplemental anesthetics, and side effects were recorded by the unblinded managing anesthesia team. Anesthetic adequacy was determined from these data by a single blinded investigator. Hospital discharge time was recorded from the patient record. Groups were compared using appropriate statistics with a P < .05 considered significant. Results TNS occurred in 6% of procaine patients versus 31% of lidocaine patients (P = .007). Sensory block with procaine and lidocaine was similar, while motor block was decreased with procaine (P < .05). A trend toward a higher rate of block inadequacy (17% v 3%, P = .11) and intraoperative nausea (17% v 3%, P = .11) occurred with procaine. Average hospital discharge time with procaine was increased by 29 minutes (P < .05). Conclusions The incidence of TNS was substantially lower with procaine than with lidocaine. However, procaine resulted in a lower overall quality of anesthesia and a prolonged average discharge time. 
If the shortfalls of procaine as studied can be overcome, it may provide a suitable alternative to lidocaine for outpatient spinal anesthesia to minimize the risk of TNS.", "corpus_id": 814746, "title": "Procaine Compared With Lidocaine for Incidence of Transient Neurologic Symptoms" }
{ "abstract": "Objectives: This study examined the patterns and determinants of current smoking and intention to smoke among secondary school students of Han and Tujia nationalities in China. Methods: A cross-sectional survey was conducted in three regions of China, namely, Chongqing, Liaocheng, and Tianjin, in 2015. A structured self-administered questionnaire was used for data collection. Results: Of the total subjects (n = 1805), 78.9% were ethnic Han and 21.1% were ethnic Tujia. Overall, 9.4% (Han: 7.7%; Tujia: 15.5%) of secondary school students were smokers, and 37.28% smoked more than once per day. Of the non-smoker students (n = 1636), 17.4% had an intention to smoke. A total of 81.1% of students reportedly had never been taught in school about smoking or tobacco prevention. Compared to students who were taught in school about smoking or tobacco prevention (18.90%), students who were never taught were more likely to smoke (OR = 2.39; 95% CI = 1.14–5.01). Compared to Han nationality students, those of Tujia nationality were more likely to smoke (OR = 2.76; 95% CI = 1.88–4.04) and more likely to have a higher frequency of smoking (95% CI (0.88, 0.88), p = 0.010). Non-smokers who were high school students (OR = 4.29; 95% CI = 2.12–8.66), or whose academic performance was in the bottom 25% (OR = 2.23; 95% CI = 1.48–3.34) or lower than 50% (OR = 1.50; 95% CI = 1.02–2.20), were more likely to have an intention of smoking. Conclusions: About one in ten secondary school students was a smoker, one in three smokers smoked more than once per day, and a quarter of non-smokers had an intention of smoking in China. The smoking rate was higher among students of Tujia than of Han nationality. 
This study provided some important information for future tobacco control programs among secondary school students in the ethnic minority autonomous region and minority settlements in a multi-ethnic country.", "corpus_id": 4911098, "score": -1, "title": "Prevalence and Determinants of Current Smoking and Intention to Smoke among Secondary School Students: A Cross-Sectional Survey among Han and Tujia Nationalities in China" }
{ "abstract": "Registration of histopathology volumes to Magnetic Resonance Images (MRI) is a crucial step for finding correlations in Prostate Cancer (PCa) and assessing tumor aggressiveness. This paper proposes a two-stage framework aimed at registering both modalities. Firstly, the Speeded-Up Robust Features (SURF) algorithm and a context-based search are used to automatically determine slice correspondences between MRI and histology volumes. This step initializes a multimodal nonrigid registration strategy, which allows histology slices to be propagated to MRI. Evaluation was performed on 5 prospective studies using a slice index score and landmark distances. With respect to a manual ground truth, the first stage of the framework exhibited an average error of 1.54 slice indices and 3.51 mm in the prostate specimen. The reconstruction of a three-dimensional Whole-Mount Histology (WMH) shows promising results for later PCa pattern detection and staging.", "corpus_id": 12306647, "title": "Slice correspondence estimation using SURF descriptors and context-based search for prostate whole-mount histology MRI registration" }
{ "abstract": "Mapping the spatial disease extent in a certain anatomical organ/tissue from histology images to radiological images is important in defining the disease signature in the radiological images. One such scenario is in the context of men with prostate cancer who have had pre-operative magnetic resonance imaging (MRI) before radical prostatectomy. For these cases, the prostate cancer extent from ex vivo whole-mount histology is to be mapped to in vivo MRI. The need for determining radiology-image-based disease signatures is important for (a) training radiologist residents and (b) for constructing an MRI-based computer aided diagnosis (CAD) system for disease detection in vivo. However, a prerequisite for this data mapping is the determination of slice correspondences (i.e. indices of each pair of corresponding image slices) between histological and magnetic resonance images. The explicit determination of such slice correspondences is especially indispensable when an accurate 3D reconstruction of the histological volume cannot be achieved because of (a) the limited tissue slices with unknown inter-slice spacing, and (b) obvious histological image artifacts (tissue loss or distortion). In the clinic practice, the histology-MRI slice correspondences are often determined visually by experienced radiologists and pathologists working in unison, but this procedure is laborious and time-consuming. We present an iterative method to automatically determine slice correspondence between images from histology and MRI via a group-wise comparison scheme, followed by 2D and 3D registration. The image slice correspondences obtained using our method were compared with the ground truth correspondences determined via consensus of multiple experts over a total of 23 patient studies. 
In most instances, the results of our method were very close to the results obtained via visual inspection by these experts.", "corpus_id": 3089225, "title": "Computerized Medical Imaging and Graphics Determining Histology-mri Slice Correspondences for Defining Mri-based Disease Signatures of Prostate Cancer" }
{ "abstract": "We describe a new algorithm for non-rigid registration capable of estimating a constrained dense displacement field from multi-modal image data. We applied this algorithm to capture non-rigid deformation between digital images of histological slides and digital flat-bed scanned images of cryotomed sections of the larynx, and carried out validation experiments to measure the effectiveness of the algorithm. The implementation was carried out by extending the open-source Insight ToolKit software. In diagnostic imaging of cancer of the larynx, imaging modalities sensitive to both anatomy (such as MRI and CT) and function (PET) are valuable. However, these modalities differ in their capability to discriminate the margins of tumor. Gold standard tumor margins can be obtained from histological images from cryotomed sections of the larynx. Unfortunately, the process of freezing, fixation, cryotoming and staining the tissue to create histological images introduces non-rigid deformations and significant contrast changes. We demonstrate that the non-rigid registration algorithm we present is able to capture these deformations and the algorithm allows us to align histological images with scanned images of the larynx. Our non-rigid registration algorithm constructs a deformation field to warp one image onto another. The algorithm measures image similarity using a mutual information similarity criterion, and avoids spurious deformations due to noise by constraining the estimated deformation field with a linear elastic regularization term. The finite element method is used to represent the deformation field, and our implementation enables us to assign inhomogeneous material characteristics so that hard regions resist internal deformation whereas soft regions are more pliant. A gradient descent optimization strategy is used and this has enabled rapid and accurate convergence to the desired estimate of the deformation field. 
A further acceleration in speed without cost of accuracy is achieved by using an adaptive mesh refinement strategy.", "corpus_id": 2446981, "score": -1, "title": "Efficient multi-modal dense field non-rigid registration: alignment of histological and section images" }
{ "abstract": "Introduction Increased plasma homocysteine may be associated with adverse pregnancy outcomes, such as preeclampsia. The aim of this study was to determine the plasma homocysteine, serum folate, and vitamin B12 levels in preeclamptic pregnant women. Methods This case-control study was conducted in 2016 in Ahwaz on 51 pregnant women with preeclampsia and 51 healthy pregnant women of the same gestational age, who served as controls. The case group was also subdivided into severe and non-severe preeclampsia. Patients’ data were collected through a questionnaire and medical records. Serum homocysteine, folic acid, and vitamin B12 were analyzed using chemiluminescent assay. The results were compared between the two groups. Statistical analyses were done using IBM-SPSS 20.0. A Kolmogorov-Smirnov test, independent samples t-test, Mann-Whitney test, and Chi-square test were used for data analysis. Results No differences in demographic characteristics were found between the groups. Pregnant women complicated with preeclampsia displayed significantly higher serum homocysteine levels (p < 0.001) and lower serum folate (p = 0.005) and vitamin B12 levels (p < 0.001) compared to controls. A statistically significant inverse correlation was evident between serum homocysteine and serum folate levels in preeclamptic patients (p = 0.005; r = −0.389). In addition, an inverse correlation was identified between homocysteine and serum vitamin B12, but it was not statistically significant (p = 0.160; r = −0.200). Significant differences occurred in serum homocysteine and folate levels between the severe and non-severe subgroups (p < 0.001, p < 0.001). Conclusion Women complicated with preeclampsia displayed higher maternal serum homocysteine and lower serum folate and vitamin B12. 
Further studies are needed to confirm if the prescription of folic acid and vitamin B12 in women with a deficiency of these vitamins could decrease the level of serum homocysteine and, therefore, reduce the risk of preeclampsia or, if it occurred, its severity.", "corpus_id": 12492402, "title": "The evaluation of serum homocysteine, folic acid, and vitamin B12 in patients complicated with preeclampsia" }
{ "abstract": "ObjectiveTo measure erythrocyte folate content and serum folic acid and homocysteine (Hcy) levels in preeclamptic primigravidae teenagers living at high altitude.MethodsMeasured analytes were compared to those found in normal teen controls.ResultsTeenagers complicated with preeclampsia displayed significantly lower hematocrit and erythrocyte folic acid levels with higher serum Hcy levels as compared to controls (36.40 ± 4.90 vs. 38.99 ± 2.89 %, 493.80 ± 237.30 vs. 589.90 ± 210.60 ng/mL, and 7.29 ± 2.52 vs. 5.97 ± 1.41 μmol/L, respectively, p < 0.05). There was a non-significant trend for lower serum folic acid levels among preeclampsia teenagers. Serum and erythrocyte folic acid levels positively correlated in preeclampsia teenagers, and levels of both analytes inversely correlated with Hcy levels.ConclusionThis pilot study found that teenagers complicated with preeclampsia living at higher altitude displayed lower erythrocyte folate content in addition to higher serum Hcy levels. More research is warranted to determine the clinical implications of these findings.", "corpus_id": 1937095, "title": "Erythrocyte folate content and serum folic acid and homocysteine levels in preeclamptic primigravidae teenagers living at high altitude" }
{ "abstract": "Fortification of wheat flour in 1997 and corn flour in 1999 with folic acid among other micronutrients was implemented in Costa Rica by means of two decrees, resulting in an effective public health impact. A prevalence of 25% of folic acid serum levels deficiency detected in fertile women in 1996 decreased 87% in urban areas two years later, whereas in rural areas diminished by 63%. In addition, a significant reduction of neural tube defects at the national level has been reported, dropping from a rate of 9.7 per 1000 lb during the period 1996-1998 to 6.3 per 1000 lb in the period 1999-2000. Finally, there has been a reported 74% reduction in the number of Neural Tube Defects at Birth (NTB) at the National Children's Hospital, resulting in 105 cases treated in 1995 to 26 cases in 2001.", "corpus_id": 6744622, "score": -1, "title": "The Costa Rican experience: reduction of neural tube defects following food fortification programs." }
{ "abstract": "Extra-cranial metastases of malignant gliomas such as glioblastoma (GBM) are rare. Three cases of spinal leptomeningeal metastasis are reported in patients suffering from cerebral glioma/glioblastoma multiforme who presented with radicular pain in the extremities. Patients and Methods: The first patient (male, 34 years old) suffered from hemiparesis 2 years after craniotomy and micro-neurosurgical extirpation of a right frontotemporal astrocytoma WHO II°, later anaplastic astrocytoma WHO III°. The second patient (female, 43 years old) developed spinal canal metastases of a brain stem GBM approximately 9 months after surgical resection and radio-chemotherapy. In the third case (male, 21 years old), a thoracic spinal intramedullary GBM was initially excised operatively, with postoperative paraplegia. Subsequently, the diagnosis of a cerebral GBM was made by stereotactic biopsy. All specimens in the 3 cases were confirmed histopathologically. Results: In the first patient, initially operated for astrocytoma WHO II° with the recurrence diagnosed after excision as anaplastic astrocytoma WHO III°, a lumbar spine hemilaminectomy was performed after concomitant radio-chemotherapy and stereotactic irradiation, and further irradiation therapy was necessary after surgery. A significant regression/improvement of the neurological symptoms was registered. The intraspinal metastases of the second patient were treated with irradiation therapy, which became necessary due to deterioration of her clinical and neurological state, with significant regression of the symptoms. A partial improvement of the neurological symptoms was observed in the third patient during and after chemotherapy. 
Discussion: Reported cases of intraspinal dissemination from a primary intracerebral glioma have varied, and, as in the cases presented in this series, intraspinal metastases have also been observed in patients with stable intracerebral disease. Typically, the incidence of symptomatic intraspinal metastasis has been lower than the incidence observed post mortem because patients do not survive long enough for small tumor implants to develop into symptomatic lesions. However, with the improved outcomes of newer treatments and improved diagnostics, the incidence is likely to increase in the future. The management of the three patients in this series was optimized through modern diagnostic measures, leading to improvement of their clinical condition despite the poor overall prognosis. Conclusions: Spinal spread of malignant glioma should be considered during care and follow-up investigations in patients with this diagnosis and spinal symptoms. Surgical therapy seems to offer benefits for these patients. Radio-chemotherapy can be helpful in these cases as well. Further examinations and studies are necessary to better understand the etiology, clinical course, and interrelationship of these findings.", "corpus_id": 236909769, "title": "A Series of 3 Cases of Cerebral Glioma with Intraspinal Dissemination: Evaluation and Review of the Literature" }
{ "abstract": "✓ Three cases in which gliomas invaded the meninges are presented. In each instance, one or several aspects of the case suggested the diagnosis of benign or malignant meningioma to the neurosurgeon. The points of confusion included clinical history, location of the tumor, gross appearance of the lesion, radiographic and isotope studies, and, in some cases, the microscopic appearance of the tumor as well.", "corpus_id": 585216, "title": "Meningeal invasion by gliomas." }
{ "abstract": "SummaryIntravenous contrast enhanced dynamic computed tomography of cerebral gliomata reveals a spectrum of patterns which reflect different degrees of neovascularity as well as a variable breakdown in the blood-tumor-barrier both intratumorally as well as between individual tumors. Phenomena not generally associated with gliomas including intrinsic neoplastic and peripheral cerebral hypoperfusion, hyperperfusion, and indications of vascular stealing are also demonstrated with this technique which conceivably explain and are partially responsible for certain aspects of the encephalopathy accompanying cerebral neoplasia. A comparison of the dynamic sequences with conventional selective cerebral angiography further indicates that the more contrast-sensitive dynamic method is potentially superior in the detection of subtle neovascularity.", "corpus_id": 22043412, "score": -1, "title": "Neoplastic encephalopathy: dynamic CT of cerebral gliomata" }
{ "abstract": "Using a sample of over 9,000 buyback announcements from 31 non-U.S. countries, we find support for the results of studies based on U.S. data: On average, share repurchases are associated with significant positive short- and long-term excess returns. However, excess returns depend on the likelihood of undervaluation and the efficiency and liquidity of equity markets. In contrast to findings in U.S. markets, we do not find that these long-term excess returns are simply a compensation for takeover risk or have become less significant in recent years.", "corpus_id": 167465289, "title": "Are Buybacks Good for Long-Term Shareholder Value? Evidence from Buybacks around the World" }
{ "abstract": "We show that the specific regulation in Taiwan requiring firms to repurchase shares explicitly to resolve information ambiguity does not prevent market underreaction, as evidenced by positive short-term and long-term abnormal returns. In addition, firms that retire their buybacks show superior long-term performance compared to those that reissue the buybacks. Our results show that long-term price performance is positively related to firms' operating performance and dividend payouts in the post-repurchase period. Further analysis indicates that the market reacts asymmetrically to changes in firms' operating performance and dividend payout level.", "corpus_id": 154671842, "title": "An analysis of stock repurchase in Taiwan" }
{ "abstract": "Abstract Tourist harassment is one of the major challenging issues influencing the competitiveness of various tourist destinations across the globe. While the topic has received some attention over the past two decades, there is still a dearth of research on the influence of tourist harassment on travelers' perceptions and behaviors. Drawing on qualitative data collected through 27 semi-structured interviews with international travelers visiting Petra, Jordan, the study reveals that the perceived destination image and travelers' behavioral intentions are unlikely to be influenced by harassment experiences. However, there is evidence that harassing tourists to achieve greater sales has an adverse impact on tourists’ expenditure level. That is, when harassed, tourists are less likely to be willing to make purchases. The study adds to a still-maturing stream of research on tourist harassment and provides several theoretical as well as practical implications.", "corpus_id": 159356476, "score": -1, "title": "Exploring the impact of tourist harassment on destination image, tourist expenditure, and destination loyalty" }
{ "abstract": "The continuous downscaling of semiconductor fabrication processes, predicted by Moore in 1965, has had a great impact on the development of today's integrated electronics. The reduction of transistor size has allowed, on one hand, the integration of more devices in the same area, increasing integration density, while, on the other hand, it has led to the reduction of fabrication costs, making the final product cheaper and more accessible. However, this increase in the functionality of a single integrated circuit entails greater complexity in the generation and distribution of the different biasing voltages needed throughout a chip. Thus, as more different systems are integrated in the same chip, more different biasing domains coexist in it, leading to several different requirements of noise, regulation, and/or stability that need to be satisfied simultaneously. Therefore, power management circuits have been acquiring greater significance as technology downscales, reaching their maximum importance today, as the nanoscale has brought these issues to a head. Linear regulators, and more concretely low-dropout linear regulators, are an essential block in any power management system, able to generate precise, extremely stable, low-noise biasing voltages, which makes them the ideal choice for extremely biasing-sensitive circuits such as analog or radio-frequency systems. In addition, low-dropout linear regulators can be completely integrated without needing any external device, which translates to cost and area savings. For all these reasons, low-dropout linear regulators have lately been receiving extensive attention from the scientific community. However, these circuits also have some disadvantages; indeed, the maximum theoretical efficiency that can be achieved with low-dropout linear regulators is lower than that of switched-capacitor or inductor-based solutions. 
In addition, as internal compensation is required, the system’s dominant pole is set by an internal node, leaving the non-dominant pole fixed by the load. This raises a major stability concern, since load variations translate into a frequency displacement of the non-dominant pole that degrades the whole system’s phase margin. In line with the issues described above, this research has focused on the study of minimum-quiescent-consumption, internally compensated low-dropout linear regulators (LDOs). The first objective of this research is the proposal of low-voltage", "corpus_id": 201031594, "title": "Proyecto Fin de Carrera Ingeniería de Telecomunicación Formato de Publicación de la Escuela Técnica Superior de Ingeniería" }
{ "abstract": "This brief presents an ultralow quiescent class-AB error amplifier (ERR AMP) of low dropout (LDO) and a slew-rate (SR) enhancement circuit to minimize compensation capacitance and speed up transient response designed in the 0.11-μm 1-poly 6-metal CMOS process. In order to increase the current capability with a low standby quiescent current under large-signal operation, the proposed scheme has a class-AB-operation operational transconductance amplifier (OTA) that acts as an ERR AMP. As a result, the new OTA achieved a higher dc gain and faster settling time than conventional OTAs, demonstrating a dc gain improvement of 15.8 dB and a settling time six times faster than that of a conventional OTA. The proposed additional SR enhancement circuit improved the response based on voltage-spike detection when the voltage dramatically changed at the output node.", "corpus_id": 5454373, "title": "A Capacitorless LDO Regulator With Fast Feedback Technique and Low-Quiescent Current Error Amplifier" }
{ "abstract": "Designing and implementing efficient, provably correct parallel machine learning (ML) algorithms is challenging. Existing high-level parallel abstractions like MapReduce are insufficiently expressive while low-level tools like MPI and Pthreads leave ML experts repeatedly solving the same design challenges. By targeting common patterns in ML, we developed GraphLab, which improves upon abstractions like MapReduce by compactly expressing asynchronous iterative algorithms with sparse computational dependencies while ensuring data consistency and achieving a high degree of parallel performance. We demonstrate the expressiveness of the GraphLab framework by designing and implementing parallel versions of belief propagation, Gibbs sampling, Co-EM, Lasso and Compressed Sensing. We show that using GraphLab we can achieve excellent parallel performance on large scale real-world problems.", "corpus_id": 61494, "score": -1, "title": "GraphLab: A New Framework For Parallel Machine Learning" }
{ "abstract": "Tilapia fish (Oreochromis niloticus) are commonly consumed and exported in Thailand. Bacterial isolation and drug resistance from farmed tilapia fish in Thailand were previously reported. This study aimed to investigate the distribution of human pathogenic bacteria in tilapia fish collected from Thai farms (n = 180) and fresh markets (n = 160) through identification and antibiotic susceptibility testing, and to identify virulence genes by molecular techniques. Pathogen isolates were collected from internal organs of fish samples for identification and antibiotic susceptibility testing according to Clinical and Laboratory Standards Institute (CLSI) criteria. Detection of the blaCTX-M and Int1 genes in antibiotic-resistant bacteria was performed by molecular-based techniques. Klebsiella pneumoniae, Edwardsiella tarda, and coagulase-negative Staphylococci were the most frequent bacteria isolated from farmed tilapia fish. However, Escherichia coli, coagulase-negative Staphylococci, and K. pneumoniae were most frequently isolated from tilapia fish in markets of the Bangkok area. Klebsiella pneumoniae, E. coli, and Proteus mirabilis were resistant to penicillin and ampicillin. Klebsiella pneumoniae was the most important isolated bacterium due to its distribution in tilapia fish and positive detection of the blaCTX-M and Int1 genes. However, E. coli and P. mirabilis lacked the blaCTX-M and Int1 genes and may harbor other antibiotic resistance genes.", "corpus_id": 199396952, "title": "Beta-lactamase and integron-associated antibiotic resistance genes of Klebsiella pneumoniae isolated from Tilapia fishes (Oreochromis niloticus)" }
{ "abstract": "The environment, and especially freshwater, constitutes a reactor where the evolution and the rise of new resistances occur. In water bodies such as waste water effluents, lakes, and rivers or streams, bacteria from different sources, e.g., urban, industrial, and agricultural waste, probably selected by intensive antibiotic usage, are collected and mixed with environmental species. This may cause two effects on the development of antibiotic resistances: first, the contamination of water by antibiotics or other pollutants lead to the rise of resistances due to selection processes, for instance, of strains over-expressing broad range defensive mechanisms, such as efflux pumps. Second, since environmental species are provided with intrinsic antibiotic resistance mechanisms, the mixture with allochthonous species is likely to cause genetic exchange. In this context, the role of phages and integrons for the spread of resistance mechanisms appears significant. Allochthonous species could acquire new resistances from environmental donors and introduce the newly acquired resistance mechanisms into the clinics. This is illustrated by clinically relevant resistance mechanisms, such as the fluoroquinolones resistance genes qnr. Freshwater appears to play an important role in the emergence and in the spread of antibiotic resistances, highlighting the necessity for strategies of water quality improvement. We assume that further knowledge is needed to better understand the role of the environment as reservoir of antibiotic resistances and to elucidate the link between environmental pollution by anthropogenic pressures and emergence of antibiotic resistances. 
Only an integrated vision of these two aspects can provide elements to assess the risk of spread of antibiotic resistances via water bodies and suggest, in this context, solutions for this urgent health issue.", "corpus_id": 930810, "title": "Origin and Evolution of Antibiotic Resistance: The Common Mechanisms of Emergence and Spread in Water Bodies" }
{ "abstract": "ABSTRACT An approximately 200-kb plasmid has been purified from clinical isolates of Stenotrophomonas maltophilia. This plasmid was found in all of the 10 isolates examined and contains both the L1 and the L2 β-lactamase genes. The location of L1 andL2 on a plasmid makes it more likely that they could spread to other gram-negative bacteria, potentially causing clinical problems. Sequence analysis of the 10 L1 genes revealed three novel genes,L1c, L1d, and L1e, with 8, 12, and 20% divergence from the published strain IID 1275 L1(L1a), respectively. The most unusual L1 enzyme (L1e) displayed markedly different kinetic properties, with respect to hydrolysis of nitrocefin and imipenem, compared to those of L1a (250- and 100-fold lowerkcat/Km ratios respectively). L1c and L1d, in contrast, displayed levels of hydrolysis very similar to that of L1a. Several nonconservative amino acid differences with respect to L1a, L1b, L1c, and L1d were observed in the substrate binding-catalytic regions of L1e, and this could explain the kinetic differences. Three novel L2 genes (L2b, L2c, andL2d) were sequenced from the same isolates, and their sequences diverge from the published sequence of strain IID 1275L2 (L2a) by 4, 9, and 25%, respectively. Differences in L1 and L2 gene sequences were not accompanied by similar divergences in 16S rRNA gene sequences, for which differences of <1% were found. It is therefore apparent that the L1 and L2 genes have evolved relatively quickly, perhaps because of their presence on a plasmid.", "corpus_id": 2466236, "score": -1, "title": "Plasmid Location and Molecular Heterogeneity of the L1 and L2 β-Lactamase Genes of Stenotrophomonas maltophilia" }
{ "abstract": "ABSTRACT In the present study, interleukin-6 (IL-6)-deficient mice were infected with Giardia lamblia clone GS/M-83-H7. Murine IL-6 deficiency did not affect the synthesis of parasite-specific intestinal immunoglobulin A. However, in contrast to wild-type mice, IL-6-deficient animals were not able to control the acute phase of parasite infection. Reverse transcription-PCR-based quantitation of cytokine mRNA levels in peripheral lymph node cells exhibited a short-term up-regulation of IL-4 expression in IL-6-deficient mice that seemed to be associated with failure in controlling the parasite population. This observation suggests a further elucidation of IL-4-dependent, Th2-type regulatory processes regarding their potential to influence the course of G. lamblia infection in the experimental murine host.", "corpus_id": 14152673, "title": "Interleukin-6-Deficient Mice Are Highly Susceptible to Giardia lamblia Infection but Exhibit Normal Intestinal Immunoglobulin A Responses against the Parasite" }
{ "abstract": "Nu/+ mice (ZU.ICR-strain) experimentally infected with Giardia lamblia (clone GS/M-83-H7) cleared the infection by day 45 postinfection (p.i.). Athymic nu/nu mice were reconstituted with immune Peyer's patch lymphocytes obtained from self-healed nu/+ littermates and thus acquired the potential to decrease their intestinal parasite mass. Intestinal B-cells from self-healed nu/+ mice as well as from immune-reconstituted athymic nude mice synthesized in vitro parasite-specific immunoglobulin A (IgA). This IgA was subsequently analyzed by immunoblotting, showing a predominant reaction with the major surface antigen (a 72000-Da polypeptide) characterizing the Giardia clone in question. The hypothesis on the causative role of intestinal IgA and immune lymphocytes in the control of G. lamblia infection thus deserves further attention.", "corpus_id": 405207, "title": "In vitro synthesized immunoglobulin A from nu/+ and reconstituted nu/nu mice against a dominant surface antigen of Giardia lamblia" }
{ "abstract": "A single Giardia lamblia trophozoite can give rise in vitro to G. lamblia with varying surface antigens. To determine whether antigenic variation also occurs in vivo, gerbils were inoculated with defined G. lamblia clones and the surface antigens of the intestinal trophozoites were studied at different times during the infection. The proportion of monoclonal antibody 6E7-reacting trophozoites from WB C1-6E7S-inoculated gerbils had decreased significantly by day 3 postinoculation, indicating the presence of a heterogeneous population. On day 7, the 170-kilodalton antigen was no longer present and was replaced by a variety of antigens, including a major protein of 92 kilodaltons. With the exception of isolates from gerbils inoculated with WB A6-6E7S, the banding patterns of G. lamblia isolated from gerbils on day 7 or later were the same regardless of the clones used for inoculation. These studies show that G. lamblia changes its surface antigen(s) in vivo within 7 days following inoculation and appears to maintain the same set of surface antigens during the course of infection.", "corpus_id": 24088903, "score": -1, "title": "Antigenic variation of Giardia lamblia in vivo" }
{ "abstract": "CdTe is the leading commercial thin film photovoltaic technology with current record laboratory efficiency (22.1%). However, there is much potential for progress toward the Shockley‐Queisser limit (32%). The best CdTe devices have short‐circuit current close to the limit but open‐circuit voltage has much room for improvement. Back contact optimization is likely to play a key role in any improvement. Back contact material choice is also influenced by their applicability in more complex architectures such as bifacial and tandem solar cells, where high visible and/or near‐infrared transparency is required in conjunction with their electrical properties. The CdTe research community has employed many back contact materials and processes to realize them. Excellent reviews of back contacts were published by McCandless and Sites (2011) and Kumar and Rao (2014). There have been numerous publications on CdTe back contacts since 2014. This review includes both recent and older literature to give a comprehensive picture. It includes a categorization of back contact interface materials into groups such as oxides, chalcogenides, pnictides, halides, and organics. The authors attempt to identify the more promising material groups. Attention is drawn to parallels with back contact materials used on other thin film photovoltaics such as perovskites and kesterites.", "corpus_id": 233903467, "title": "Back contacts materials used in thin film CdTe solar cells—A review" }
{ "abstract": "Using a 10 nm thick molybdenum oxide (MoO3−x) layer as a transparent and low barrier contact to p-CdTe, we demonstrate nanowire CdS-CdTe solar cells with a power conversion efficiency of 11% under front side illumination. Annealing the as-deposited MoO3 film in N2 resulted in a reduction of the cell’s series resistance, from 9.97 Ω/cm2 to 7.69 Ω/cm2, and an increase in efficiency from 9.9% to 11%. Under illumination from the back, the MoO3−x/Au side, the nanowire solar cells yielded Jsc of 21 mA/cm2 and efficiency of 8.67%. Our results demonstrate use of a thin layer transition metal oxide as a potential way for a transparent back contact to nanowire CdS-CdTe solar cells. This work has implications toward enabling a novel superstrate structure nanowire CdS-CdTe solar cell on Al foil substrate by a low cost roll-to-roll fabrication process.", "corpus_id": 2232031, "title": "Nanowire CdS-CdTe Solar Cells with Molybdenum Oxide as Contact" }
{ "abstract": "We consider competitive capacity investment for a duopoly of two distinct producers. The producers are exposed to stochastically fluctuating costs and interact through aggregate supply. Capacity expansion is irreversible and modeled in terms of timing strategies characterized through threshold rules. Because the impact of changing costs on the producers is asymmetric, we are led to a nonzero-sum timing game describing the transitions among the discrete investment stages. Working in a continuous-time diffusion framework, we characterize and analyze the resulting Nash equilibrium and game values. Our analysis quantifies the dynamic competition effects and yields insight into dynamic preemption and over-investment in a general asymmetric setting. A case-study considering the impact of fluctuating emission costs on power producers investing in nuclear and coal-fired plants is also presented.", "corpus_id": 54055209, "score": -1, "title": "Capacity Expansion Games with Application to Competition in Power Generation Investments" }
{ "abstract": "Three new indolediterpenoids, namely, 22-hydroxylshearinine F (1), 6-hydroxylpaspalinine (2), and 7-O-acetylemindole SB (3), along with eight related known analogs (4–11), were isolated from the sea-anemone-derived fungus Penicillium sp. AS-79. The structures and relative configurations of these compounds were determined by a detailed interpretation of the spectroscopic data, and their absolute configurations were determined by ECD calculations (1 and 2) and single-crystal X-ray diffraction (3). Some of these compounds exhibited prominent activity against aquatic and human pathogenic microbes.", "corpus_id": 12770561, "title": "Three New Indole Diterpenoids from the Sea-Anemone-Derived Fungus Penicillium sp. AS-79" }
{ "abstract": "A marine-derived strain of Dichotomomyces cejpii produces the new compounds emindole SB beta-mannoside (1) and 27-O-methylasporyzin C (2), as well as the known indoloditerpenes JBIR-03 (3) and emindole SB (4). Indole derivative 1 was found to be a CB2 antagonist, while 2 was identified as the first selective GPR18 antagonist with an indole structure. Compound 4 was found to be a nonselective CB1/CB2 antagonist. The new natural indole derivatives may serve as lead structures for the development of GPR18- and CB receptor-blocking drugs.", "corpus_id": 2211773, "title": "Indoloditerpenes from a marine-derived fungal strain of Dichotomomyces cejpii with antagonistic activity at GPR18 and cannabinoid receptors." }
{ "abstract": "HU-211, a nonpsychotropic cannabinoid and a noncompetitive NMDA antagonist, was tested in a global ischemia model in the Mongolian gerbil. Male Mongolian gerbils underwent a 10-min bilateral common carotid artery occlusion. HU-211, administered i.v. at 4 mg/kg, 30 min postischemia, induced statistically significant neuroprotection of the CA1 subfield of the hippocampus. A dose-response study demonstrated an inverted U curve in which the 4 mg/kg dose induced the best neuroprotection in the CA1 subfield of the hippocampus (p < 0.05 ANOVA followed by Duncan's post-hoc test). The therapeutic window was then investigated, and in another study, HU-211 4 mg/kg were administered i.v. at 30, 60, 120, and 180 min postinsult. A statistically significant neuroprotection was detected at 30 and 60 min administration postinsult.", "corpus_id": 1642431, "score": -1, "title": "Neuroprotective activity of HU-211, a novel NMDA antagonist, in global ischemia in gerbils." }
{ "abstract": "A triggered communication mechanism-based adaptive control strategy is proposed for output consensus of a class of uncertain nonlinear multi-agent systems. A distributed estimator is constructed via intermittent communication with their own neighbours. This estimator provides the desired trajectory for those followers that are not able to access the leader's information directly. Recursive sliding-modes and nonlinear gain functions are applied for performance improvement of traditional dynamic surface control approaches. Also, an adaptive parameter is introduced for the neural networks' weights to reduce the computational complexity of our control strategy. In theory, it is proven that all signals in the multi-agent system are ultimately bounded, that consensus tracking errors converge to a neighbourhood around the origin, and that there exists Zeno-free behaviour. Three simulation examples validate the effectiveness of the proposed strategy.", "corpus_id": 239663230, "title": "Dynamic event-triggered mechanism-based output consensus of nonlinear multi-agent systems via improved dynamic surface control approach" }
{ "abstract": "ABSTRACT This paper considers the output consensus problem in non-minimum phase nonlinear multi-agent systems. The main contribution of the paper is to guarantee achieving consensus in the presence of unstable zero dynamics. To achieve this goal, a consensus protocol consisting of two terms is proposed. The first term is a linear function of the states of each agent employed in order to overcome the non-minimum phase dynamics, and the second term is a function of the output of neighbouring agents which provides coupling among agents and guarantees output consensus in the network. The asymptotic stability of output consensus errors and the boundedness of the states of agents are also studied. A numerical example is presented to show the effectiveness of the proposed approach.", "corpus_id": 3913993, "title": "Output consensus control of multi-agent systems with nonlinear non-minimum phase dynamics" }
{ "abstract": "In this paper, the leader-following consensus problem for second-order multi-agent systems with nonlinear inherent dynamics is investigated. Two distributed control protocols are proposed under fixed undirected communication topology and fixed directed communication topology. Some sufficient conditions are obtained for the states of followers converging to the state of virtual leader globally exponentially. Rigorous proofs are given by using graph theory, matrix theory and Lyapunov theory. Simulations are also given to verify the effectiveness of the theoretical results.", "corpus_id": 5083207, "score": -1, "title": "Distributed leader-following consensus for second-order multi-agent systems with nonlinear inherent dynamics" }
{ "abstract": "This study focuses on the analysis of the distribution, both spatial and temporal, of the PM10 (particulate matter with a diameter of 10 µm or less) concentrations recorded in nine EMEP (European Monitoring and Evaluation Programme) background stations distributed throughout mainland Spain between 2001 and 2019. A study of hierarchical clusters was used to classify the stations into three main groups with similarities in yearly concentrations: GC (coastal location), GNC (north–central location), and GSE (southeastern location). The highest PM10 concentrations were registered in summer. Annual evolution showed statistically significant decreasing trends in PM10 concentration in all the stations covering a range from −0.21 to −0.50 µg m−3/year for Barcarrota and Víznar, respectively. Through the Lamb classification, the weather types were defined during the study period, and those associated with high levels of pollution were identified. Finally, the values exceeding the limits established by the legislation were analyzed for every station assessed in the study.", "corpus_id": 256708970, "title": "Connection between Weather Types and Air Pollution Levels: A 19-Year Study in Nine EMEP Stations in Spain" }
{ "abstract": "Over the last decades, changes in dust storms characteristics have been observed in different parts of the world. The changing frequency of dust storms in the southeastern Mediterranean has led to growing concern regarding atmospheric PM10 levels. A classic time series additive model was used in order to describe and evaluate the changes in PM10 concentrations during dust storm days in different cities in Israel, which is located at the margins of the global dust belt. The analysis revealed variations in the number of dust events and PM10 concentrations during 2001-2015. A significant increase in PM10 concentrations was identified since 2009 in the arid city of Beer Sheva, southern Israel. Average PM10 concentrations during dust days before 2009 were 406, 312, and 364 μg m−3 (median 337, 269, 302) for Beer Sheva, Rehovot (central Israel) and Modi'in (eastern Israel), respectively. After 2009 the average concentrations in these cities during dust storms were 536, 466, and 428 μg m−3 (median 382, 335, 338), respectively. Regression analysis revealed associations between PM10 variations and seasonality, wind speed, as well as relative humidity. The trends and periodicity are stronger in the southern part of Israel, where higher PM10 concentrations are found. Since 2009 dust events became more extreme with much higher daily and hourly levels. The findings demonstrate that in the arid area variations of dust storms can be quantified easier through PM10 levels over a relatively short time scale of several years.", "corpus_id": 5634, "title": "Increase in dust storm related PM10 concentrations: A time series analysis of 2001-2015." }
{ "abstract": "This article describes the development of a novel model for quality assurance of pediatric asthma using administrative data and clinical guidelines. Children for whom drugs for asthma were dispensed during 1998 were recruited from the drug‐dispensing registry of the largest health maintenance organization in the southern region of Israel. The Israeli clinical guidelines were translated into a list of six markers for inadequate treatment. This list was used for a computerized search in the drug registry, and cases with markers were noted as cases in which inappropriate treatment was provided. The model was validated by proving that there was an association between inappropriate treatment (markers) and bad outcomes (emergency room visits, hospitalizations, and healthcare utilization). This model creates an interface between administrative and clinical information and provides an easy‐to‐use tool for quality assurance.", "corpus_id": 33977714, "score": -1, "title": "A Computerized Surveillance System for the Quality of Care in Childhood Asthma" }
{ "abstract": "Functional validation is one of the most complex and expensive tasks in the current processor design methodology. A significant bottleneck in the validation of processors is the lack of a golden reference model. Thus, many existing approaches employ a bottom-up methodology by using a combination of simulation techniques and formal methods. We present a top-down validation approach using a language-based specification. The specification is used to generate the necessary reference models for processor validation using symbolic simulation. We applied our methodology for property checking as well as equivalence checking of microprocessors.", "corpus_id": 14436147, "title": "A methodology for validation of microprocessors using symbolic simulation" }
{ "abstract": "Recent approaches on language-driven Design Space Exploration (DSE) use Architectural Description Languages (ADL) to capture the processor architecture, generate automatically a software toolkit (including compiler, simulator, and assembler) for that processor, and provide feedback to the designer on the quality of the architecture. It is important to verify the ADL description of the processor to ensure the correctness of the software toolkit. We present in this paper an automatic validation framework, driven by an ADL. We present algorithms for automatic validation of ADL specification of the processor pipelines. We applied our methodology to verify several realistic processor cores to demonstrate the usefulness of our approach.", "corpus_id": 36223, "title": "Automatic validation of pipeline specifications" }
{ "abstract": "Data mining techniques are studied to discover knowledge from GIS database and remote sensing image data in order to improve land use classification. Two learning granularities are proposed for inductive learning from spatial data, one is spatial object granularity, the other is pixel granularity. The characteristics and application scope of the two granularities are discussed. We also present an approach to combine inductive learning with conventional image classification methods, which selects the class probabilities of Bayes classification as learning attributes. A land use classification experiment is performed in the Beijing area using SPOT multi-spectral image and GIS data. Rules about spatial distribution patterns and shape features are discovered by the C5.0 inductive learning algorithm and then the image is reclassified by deductive reasoning. Compared with the result produced only by Bayes classification, the overall accuracy increased by 11 percent and the accuracy of some classes, such as garden and forest, increased by about 30 percent. The results indicate that inductive learning can resolve the problem of spectral confusion to a great extent. Combining the Bayes method with inductive learning not only improves classification accuracy greatly, but also extends the classification by subdividing some classes with the discovered knowledge.", "corpus_id": 15542284, "score": -1, "title": "LAND USE CLASSIFICATION OF REMOTE SENSING IMAGE WITH GIS DATA BASED ON SPATIAL DATA MINING TECHNIQUES" }
{ "abstract": "The aim of this study was to establish reference values for selected ophthalmic diagnostic tests in healthy blue-and-yellow macaws. We investigated a total of 35 adult macaws (70 eyes) of undetermined sex and with an average weight of 1 kg, who were living in captivity in the Federal District, Brazil. Tear production using the Schirmer tear test (STT), normal conjunctival flora, intraocular pressure (IOP) using a rebound tonometer and horizontal palpebral fissure length (HPFL) were evaluated. In this study, 84.1% of samples were positive for microbial growth. Bacteria, fungi and yeasts were isolated, and Staphylococcus spp. (21.9%) and Bacillus spp. (26.8%) were the most frequently isolated microorganisms. The mean value for STT was 7.6±4.6mm/min in the right eye (OD) and 6.6±4.4mm/min in the left eye (OS) (median = 7.11±0.76mm/min). Mean IOP was 11.4±2.5mm Hg OD and 11.6±1.8mm Hg OS (median = 11.49±0.22mm Hg), prior to anesthesia, and 7.6±2.4mm Hg OD and 7.8±1.8mm Hg OS (median 7.71±0.08mmHg) after anesthesia. The IOP was significantly lower when the animals were under anesthesia as compared to when they were conscious (p≤0.05). Horizontal palpebral fissure length was 11.7±0.1mm OD and 11.8±0.1mm OS (median = 11.72±0.07mm). The STT showed a positive correlation with palpebral fissure measurement for this species. These selected ophthalmic reference values will be particularly useful in diagnosing pathological changes in the eyes of blue-and-yellow macaws.", "corpus_id": 89962759, "title": "Reference values for selected ophthalmic tests of the blue-and-yellow macaw (Ara ararauna)" }
{ "abstract": "OBJECTIVE\nTo determine the central corneal thickness (CCT) by ultrasonic pachymetry and the effect of these values on the measurements of intraocular pressures (IOP) with rebound tonometry (TonoVet®) in a captive flock of black-footed penguins (Spheniscus dermersus). Variations in CCT by age and weight, and variations in IOP by age were compared.\n\n\nANIMALS STUDIED\nBoth eyes of 18 clinically normal black-footed penguins (Spheniscus dermersus) were used.\n\n\nPROCEDURE\nThe IOP was measured by the TonoVet® in both eyes of all the penguins. CCT measurements were performed 5 min later in all eyes using an ultrasound pachymeter.\n\n\nRESULTS\nThe mean IOP values ± SD were 31.77 ± 3.3 mm Hg (range of mean value: 24-38). The mean CCT values were 384.08 ± 30.9 μm (range of mean value: 319-454). There was no correlation between IOP and CCT values (P = 0.125). There was no difference in CCT measurements by age (P = 0.122) or weight (P = 0.779). A correlation was observed (P = 0.032) between IOP values and age. The coefficient of correlation was negative (ρ = -0.207).\n\n\nCONCLUSIONS\nUltrasound pachymetry has shown to be a reliable and easy technique to measure CCT in penguins. No correlation was observed between IOP and CCT values in this study. IOP showed a significant but weak decrease as age increased in the black-footed penguin.", "corpus_id": 1537992, "title": "Central corneal thickness and intraocular pressure in captive black-footed penguins (Spheniscus dermersus)." }
{ "abstract": "Medetomidine, a highly specific α‐2 adrenergic agonist, has been demonstrated to lower intraocular pressure (IOP) in rabbits and cats when applied topically. The purpose of this study was to assess the influence of intravenously injected medetomidine on the pupil size (PS) and the IOP of non glaucomatous dogs. IOP was measured by applanation tonometry and PS was measured using Jameson calipers at t=0 (or time of IV injection of medetomidine (Domitor®; Orion) at the dose of 1500 μg/m2 body surface area) and again after 5 minutes (t=5). The IV administration of medetomidine caused miosis in all 14 dogs. The mean PS decreased from 9.0 to 4.0 mm (p<0.001). The IOP was lowered in 10 dogs and in 4 dogs there was a rise in IOP. The mean IOP (mmHg) decreased from 22 to 21 (p>0.2). The data presented above confirm that medetomidine at a dose of 1500 μg/m2 body surface area produces miosis in non glaucomatous dogs, without influencing the IOP.", "corpus_id": 249014, "score": -1, "title": "The effect of intravenous medetomidine on pupil size and intraocular pressure in normotensive dogs" }
{ "abstract": "ABSTRACT The history of concentrating solar power (CSP) is characterized by a boom-bust pattern caused by policy support changes. Following the 2014–2016 bust phase, the combination of Chinese support and several low-cost projects triggered a new boom phase. We investigate the near- to mid-term cost, industry, market and policy outlook for the global CSP sector and show that CSP costs have decreased strongly and approach cost-competitiveness with new conventional generation. Industry has been strengthened through the entry of numerous new companies. However, the project pipeline is thin: no project broke ground in 2019 and only four projects are under construction in 2020. The only remaining large support scheme, in China, has been canceled. Without additional support soon creating a new market, the value chain may collapse and recent cost and technological advances may be undone. If policy support is renewed, however, the global CSP sector is prepared for a bright future.", "corpus_id": 225765597, "title": "The near- to mid-term outlook for concentrating solar power: mostly cloudy, chance of sun" }
{ "abstract": "Concentrating solar power (CSP) is one of the few renewable electricity technologies that can offer dispatchable electricity at large scale. Thus, it may play an important role in the future, especially to balance fluctuating sources in increasingly renewables-based power systems. Today, its costs are higher than those of PV and wind power and, as most countries do not support CSP, deployment is slow. Unless the expansion gains pace and costs decrease, the industry may stagnate or collapse, and an important technology for climate change mitigation would be lost. Keeping CSP as a maturing technology for dispatchable renewable power thus requires measures to improve its short-term economic attractiveness and to continue reducing costs in the longer term. We suggest a set of three policy instruments – feed-in tariffs or auctions reflecting the value of dispatchable CSP, and not merely its cost; risk coverage support for innovative designs; and demonstration projects – to be deployed, in regions where CSP has a potentially large role to play. This could provide the CSP industry with a balance of attractive profits and competitive pressure, the incentive to expand CSP while also reducing its costs, making it ready for broad-scale deployment when it is needed.", "corpus_id": 159006772, "title": "Policies to keep and expand the option of concentrating solar power for dispatchable renewable electricity" }
{ "abstract": "We combine an expert elicitation and a bottom-up manufacturing cost model to compare the effects of R&D and demand subsidies. We model their effects on the future costs of a low-carbon energy technology that is not currently commercially available, purely organic photovoltaics (PV). We find that: (1) successful R&D enables PV to achieve a cost target of 4c/kWh, (2) the cost of PV does not reach the target when only subsidies, and not R&D, are implemented, and (3) production-related effects on technological advance (learning-by-doing and economies of scale) are not as critical to the long-term potential for cost reduction in organic PV as is the investment in and success of R&D. These results are insensitive to two levels of policy intensity, the level of a carbon price, the availability of storage technology, and uncertainty in the main parameters used in the model. However, a case can still be made for subsidies: comparisons of stochastic dominance show that subsidies provide a hedge against failure in the R&D program.", "corpus_id": 17295760, "score": -1, "title": "Demand Subsidies Versus R&D: Comparing the Uncertain Impacts of Policy on a Pre-commercial Low-carbon Energy Technology" }
{ "abstract": "Biofuel crops may help achieve the goals of energy‐efficient renewable ethanol production and greenhouse gas (GHG) mitigation through carbon (C) storage. The objective of this study was to compare the aboveground biomass yields and soil organic C (SOC) stocks under four crops (no‐till corn, switchgrass, indiangrass, and willow) 7 years since establishment at three sites in Ohio to determine if high‐yielding biofuel crops are also capable of high levels of C storage. Corn grain had the highest potential ethanol yields, with an average of more than 4100 L ha−1, and ethanol yields increased if both corn grain and stover were converted to biofuel, while willow had the lowest yields. The SOC concentration in soils under biofuels was generally unaffected by crop type; at one site, soil in the top 10 cm under willow contained nearly 13 Mg C ha−1 more SOC (or 29% more) than did soils under switchgrass or corn. Crop type affected SOC content of macroaggregates in the top 10 cm of soil, where macroaggregates in soil under corn had lower C, N and C : N ratios than those under perennial grasses or trees. Overall, the results suggest that no‐till corn is capable of high ethanol yields and equivalent SOC stocks to 40 cm depth. Long‐term monitoring and measurement of SOC stocks at depth are required to determine whether this trend remains. In addition, ecological, energy, and GHG assessments should be made to estimate the C footprint of each feedstock.", "corpus_id": 86112095, "title": "Aboveground productivity and soil carbon storage of biofuel crops in Ohio" }
{ "abstract": "Short-rotation woody crops (SRWC) could potentially displace fossil fuels and thus mitigate CO2 buildup in the atmosphere. To determine how much fossil fuel SRWC might displace in the United States and what the associated fossil carbon savings might be, a series of assumptions must be made. These assumptions concern the net SRWC biomass yields per hectare (after losses); the amount of suitable land dedicated to SRWC production; wood conversion efficiencies to electricity or liquid fuels; the energy substitution properties of various fuels; and the amount of fossil fuel used in growing, harvesting, transporting, and converting SRWC biomass. Assuming the current climate, present production, and conversion technologies and considering a conservative estimate of the U.S. land base available for SRWC (14 × 10⁶ ha), we calculate that SRWC energy could displace 33.2 to 73.1 × 10⁶ Mg of fossil carbon releases, 3–6% of the current annual U.S. emissions. The carbon mitigation potential per unit of land is larger with the substitution of SRWC for coal-based electricity production than for the substitution of SRWC-derived ethanol for gasoline. Assuming current climate, predicted conversion technology advancements, an optimistic estimate of the U.S. land base available for SRWC (28 × 10⁶ ha), and an optimistic average estimate of net SRWC yields (22.4 dry Mg/ha), we calculate that SRWC energy could displace 148 to 242 × 10⁶ Mg of annual fossil fuel carbon releases. Under this scenario, the carbon mitigation potential of SRWC-based electricity production would be equivalent to about 4.4% of current global fossil fuel emissions and 20% of current U.S. fossil fuel emissions.", "corpus_id": 153468706, "title": "The potential for short-rotation woody crops to reduce U.S. CO2 emissions" }
{ "abstract": "This report describes the technical progress of the individual research projects in the Short Rotation Woody Crops Program (SRWCP) as well as synthesizing the results for an overview of the program. The program is sponsored by the US Department of Energy's Biofuels and Municipal Waste Technology Division and has the goal of developing a viable technology for producing renewable feedstocks for biofuels such as gasoline, diesel fuel, alcohol, and medium Btu gas in the United States. The most significant accomplishments have been the productivity rates achieved with Populus hybrids in the Pacific Northwest, the establishment of monoculture viability trials, the bioengineering developments of Populus spp. (hybrid poplar), and the initiation of wood-energy quality definitions in cooperation with biofuel conversion specialists. The most serious challenges are now seen as control of diseases in Populus, lowering cutting and handling costs, increasing productivity on moderate to poor soils in the South and Midwest, local matching and development of clones with sites in monoculture trials, and identifying and learning about the physiological and genetic variability of important growth qualities within model species for genetic improvement. 39 refs.", "corpus_id": 109373871, "score": -1, "title": "Short Rotation Woody Crops Program: Annual progress report for 1987" }
{ "abstract": "On-chip block memories (BRAMs) in SRAM-based FPGAs store critical state information as well as user data which need to be protected against radiation-induced upsets. Therefore, reliability evaluation techniques and upset injection in system components are vital. Previous approaches to fault injection in BRAMs are limited in their abilities to create multiple cell upsets (MCUs) (and, in particular, a kind of MCU called multiple bit upsets) and are vulnerable to unintended state corruption in other memory elements when on-chip injectors are used. This letter proposes an efficient approach for multiple upsets emulation in BRAM contents exploiting the configuration memory cells responsible for initialization. The presented methodology ensures safe fault injection in BRAM contents while preserving the state of other memory elements of the design by using a one-time generated partial bitstream. The approach does not require the time-consuming bitstream generation process for every fault but rather uses run-time single-frame modifications for injection purposes.", "corpus_id": 54454168, "title": "Multiple Cell Upset Injection in BRAMs for Xilinx FPGAs" }
{ "abstract": "This paper presents a new approach to manage data content of memories implemented in FPGAs through the configuration bitstream. The proposed approach is able to read and write the data content from Block RAMs (BRAMs) in FPGA-based designs by reading and processing the information stored in the bitstream. Thanks to this method it is possible to extract, load, copy or compare the information of BRAMs with neither resource overhead nor a performance penalty in the design. It can also be applied to existing designs without the need of re-synthesizing. Due to its advantages it becomes an interesting tool to carry out several applications, such as error detection and recovery or fault injection. It also opens the doors to the design of cutting-edge applications. The approach has been implemented in a Xilinx ZYNQ System-on-Chip (SoC) device, which combines an FPGA and an ARM9 microprocessor. The access to the configuration bitstream has been performed using the ZYNQ's Processor Configuration Access Port (PCAP). Nevertheless, the flow presented in this article can be adapted to devices from other Xilinx families or vendors. The proposed approach has been fully tested and compared with specifically designed memory controllers. The results obtained in the experimental tests confirm that the proposed approach works properly without increasing the resource overhead but at a penalty in terms of processing time.", "corpus_id": 2134010, "title": "A novel BRAM content accessing and processing method based on FPGA configuration bitstream" }
{ "abstract": "Abstract Despite recent advances, women trail men in political participation, especially in developing countries where the long-term economic benefits from empowering women politically have not been well-researched. We use data from 163 villages of 12 main Indian states to explore whether requiring that village leadership positions be held by women (political reservation) affected uptake of economic opportunities via the National Rural Employment Guarantee Scheme. Reservation triggered increases in women's demand for work, program participation, and access to financial services that were sustained beyond the period of female political leadership. Enhanced female participation in program oversight, civic engagement, and electoral participation are plausible channels for such effects and political and economic empowerment seem to be complementary.", "corpus_id": 212875994, "score": -1, "title": "Women's political leadership and economic empowerment: Evidence from public works in India" }
{ "abstract": "The aim of this study was to evaluate the influence of nandrolone decanoate on anxiety levels in rats. Male Wistar rats were treated with nandrolone decanoate (5 mg/kg, two times per week, IM) or vehicle (propylene glycol, 0.2 ml/kg, two times per week, IM) for 6 weeks. Control rats were subject only to procedures related to their routine husbandry. By the end of 6 weeks, all groups (24–29 rats/group) were submitted to the elevated plus maze test in order to evaluate their anxiety level. Some of these animals (12–14/group) were treated with diazepam (1 mg/kg IP) 30 min before the elevated plus maze test. Nandrolone decanoate significantly decreased the percentage of time spent in the open arms (1.46 ± 0.49%) compared with control (3.80 ± 0.97%) and vehicle (3.96 ± 0.85%) groups, with no difference between control and vehicle treatments. The percentage of open arm entries was also reduced in the group treated with nandrolone decanoate in comparison with the vehicle and control. No changes in the number of closed arm entries were detected. Diazepam abolished the effects of nandrolone decanoate on the percentage of time in, and entries into, the open arms. The present study showed that chronic treatment with a high dose of nandrolone decanoate increased the anxiety level in male rats.", "corpus_id": 23017620, "title": "Influence of anabolic steroid on anxiety levels in sedentary male rats" }
{ "abstract": "The effect of anaerobic physical training and nandrolone treatment on the sensitivity to phenylephrine in thoracic aorta and lipoprotein plasma levels of rats was studied. Sedentary and trained male Wistar rats were treated with vehicle or nandrolone (5 mg/kg IM; twice per week) for 6 weeks. Training was performed by jumping into water (4 sets, 10 repetitions, 30-second rest, 50% to 70% body weight load, 5 days/week, 6 weeks). Two days after the last training session, the animals were killed and blood samples for lipoprotein dosage were obtained. Thoracic aorta was isolated and concentration–effect curves of phenylephrine were performed in intact endothelium and endothelium-denuded aortic rings in the absence or presence of NG-l-arginine-methyl ester. No changes were observed in endothelium-denuded aortic rings. However, in endothelium-intact thoracic aorta, anaerobic physical training induced subsensitivity to phenylephrine (pD2=7.11±0.07) compared with sedentary group (7.55±1.74), and this effect was canceled by the inhibition of nitric oxide synthesis. No difference was observed between trained (7.22±0.07) and sedentary (7.28±0.09) groups treated with nandrolone. Anaerobic training induced an increase in high-density lipoprotein levels in vehicle-treated rats, but there were no changes in nandrolone-treated groups. Training associated with nandrolone induced an increase in low-density lipoprotein levels but no change in the other groups. If altering endothelium-dependent vasodilatation is considered to be a beneficial adaptation to anaerobic physical training, it is concluded that nandrolone treatment worsens animals' endothelial function, and this effect may be related to lipoprotein blood levels.", "corpus_id": 392860, "title": "Vascular Sensitivity to Phenylephrine in Rats Submitted to Anaerobic Training and Nandrolone Treatment" }
{ "abstract": "To examine the effects of testosterone administration to older hypogonadal males (bioavailable testosterone less than 70 ng/dL).", "corpus_id": 13503053, "score": -1, "title": "Effects of Testosterone Replacement Therapy in Old Hypogonadal Males: A Preliminary Study" }
{ "abstract": "Low student enrollment in information systems (IS) programs across the U.S. persists, despite an increase in job opportunities for IS graduates. One approach to meet this increased demand for IS employees is the recruitment and retention of underserved populations. Beyond meeting demand for employees, creating more equitable environments is an issue of social justice essential to the vitality of the IS field. Negative stereotypes about IS are one of the major factors contributing to lack of student interest. In this paper, we synthesize relevant literature on gender and racial stereotypes and existing stereotypes about IS, describe the theoretical foundation of our proposed work, and outline our research approach. In this emergent research, we aim to contribute a theoretical understanding of underserved groups in relation to IS stereotypes. Findings from this work will contribute to the design and deployment of curriculum, pedagogy, and recruitment strategies that enhance equitable IS programs.", "corpus_id": 201114794, "title": "A Critical Analysis on the Effects of Negative IS Stereotypes on Underserved Populations" }
{ "abstract": "Declining enrollments have been a major concern for the Information Systems (IS) community over the last decade. While there are many issues to consider, one possible explanation for this decline is the negative stereotypical image students hold of IS professionals. Moreover, the underrepresentation of women in the IS field has also been linked to the negative stereotypical image of IS professionals. There is a lack of empirical research that investigates the image of IS professionals in general and women's perceptions in particular. To address this research gap, this study investigated students' stereotypes of IS professionals and whether female and male students differed in terms of their perceptions of IS professionals. The study also examined the influence of the introductory IS course in shaping female and male students' perceptions. The findings revealed that female and male students attribute some similar and some different characteristics to IS professionals. The study confirmed that the introductory IS course plays an important role in how students view the IS field.", "corpus_id": 152546044, "title": "GENDER DIFFERENCES IN STUDENTS’ PERCEPTIONS OF IS PROFESSIONALS AND THE ROLE OF THE INTRODUCTORY IS COURSE" }
{ "abstract": "1. INTRODUCTION Information Systems (IS) workers play an influential role in our knowledge-based economy. However, despite a robust and growing job market, the demand for IS majors and careers across college students continues to be low (Li, Zhang, and Zheng, 2014). One of the main reasons cited for students' lack of interest in the IS discipline has been the negative image of IS professionals (Colvin, 2007; Firth, Lawrence, and Looney, 2008; Granger et al., 2007; Joshi and Kuhn, 2011; Lomerson and Pollacia, 2006; Zhang, 2007). Popular and academic literature indicate that students' stereotypical image of an IS professional is similar to that of a computer scientist, and students perceive IS professionals as computer nerds sitting in front of the computer all day long, doing mainly technical work. Furthermore, these studies posit that students are concerned about the nature of the IS work being too technical, difficult, boring, and antisocial (Firth, Lawrence, and Looney, 2008; Galletta, 2007; Harris et al., 2009; Lomerson and Pollacia, 2006). Other studies also refer to the gendered view of the IS profession and mention that female students have the perception that men, not women, prefer to pursue majors and careers in the IS field (Cory, Parzinger, and Reeves, 2006; Galletta, 2007; Zhang, 2007). These incorrect perceptions of IS professionals have been tied to students' lack of information about the IS profession and about the typical career opportunities available to IS professionals (Akbulut, 2009; Firth, Lawrence, and Looney, 2008; Lomerson and Pollacia, 2006). Fortunately, research has also found that students' traditional negative stereotypes can be undermined if students inhabit local environments in which they are exposed to counter stereotypic roles (Dasgupta and Asgari, 2004). 
In this respect, at the college level, the introductory level IS course represents an excellent opportunity to clarify any misunderstandings students might have about IS professionals. Research has shown that if the content, instructors, and technologies used in introductory level IS courses are selected correctly, they might have a positive influence on how students view the IS field (Akbulut and Looney, 2007; Akbulut-Bailey, 2012; George, Valacich, and Valor, 2005; Granger et al., 2007). Therefore, the purpose of this study was to understand students' perceptions of IS professionals before and after they were exposed to the IS field and careers through the introductory IS course. More specifically we investigated the following research questions: (a) Do students hold strong stereotypic images towards IS professionals before they are formally introduced to the field of IS, and (b) Do students' initial perceptions of IS professionals shift after taking the introductory IS course and gaining more information about the nature of the IS field and potential career options? The remainder of this article is organized as follows. In the following section a discussion of the background literature is provided. Next, the research method is outlined and the results from the analyses are presented. The paper concludes with a discussion of the findings, implications, limitations, and future research directions. 2. BACKGROUND Stereotypes are defined as cognitive structures containing the perceiver's generalized assumptions about the members of a particular group (Hamilton and Troiler, 1986; Wittenbrink, Gist, and Hilton, 1997). People use stereotypes to describe others, especially in unfamiliar situations. Stereotypes may involve positive or negative beliefs. They may be accurate or inaccurate regarding the average characteristics of the group (Dasgupta and Asgari, 2004; Leyens, Yzerbyt, and Schadron, 1994). 
Understanding stereotypes is important because, as mentioned earlier, students' stereotypes of IS professionals might have an impact on their intentions to major in IS (Kuechler, McLeod, and Simkin, 2009; Nelson, 2014). …", "corpus_id": 59047663, "score": -1, "title": "The Impact of the Introductory IS Course on Students' Perceptions of IS Professionals" }
{ "abstract": "This paper explores the use of autonomous underwater vehicles (AUVs) equipped with sensors to construct water quality models to aid in the assessment of important environmental hazards, for instance related to point‐source pollutants or localized hypoxic regions. Our focus is on problems requiring the autonomous discovery and dense sampling of critical areas of interest in real‐time, for which standard (e.g., grid‐based) strategies are not practical due to AUV power and computing constraints that limit mission duration. To this end, we consider adaptive sampling strategies on Gaussian process (GP) stochastic models of the measured scalar field to focus sampling on the most promising and informative regions. Specifically, this study employs the GP upper confidence bound as the optimization criteria to adaptively plan sampling paths that balance a trade‐off between exploration and exploitation. Two informative path planning algorithms based on (i) branch‐and‐bound techniques and (ii) cross‐entropy optimization are presented for choosing future sampling locations while considering the motion constraints of the sampling platform. The effectiveness of the proposed methods are explored in simulated scalar fields for identifying multiple regions of interest within a three‐dimensional environment. Field experiments with an AUV using both virtual measurements on a known scalar field and in situ dissolved oxygen measurements for studying hypoxic zones validate the approach's capability to quickly explore the given area, and then subsequently increase the sampling density around regions of interest without sacrificing model fidelity of the full sampling area.", "corpus_id": 234579629, "title": "Adaptive sampling with an autonomous underwater vehicle in static marine environments" }
{ "abstract": "Marine phenomena such as algal blooms can be detected using in situ measurements onboard autonomous underwater vehicles (AUVs), but understanding plankton ecology and community structure requires retrieval and analysis of water specimens. This process requires shipboard or manual sample collection, followed by onshore lab analysis which is time-consuming. Better understanding of the relationship between the observable environmental features and organism abundance would allow more precisely targeted sampling and thereby save time. In this work, we present an approach to learn and improve models that predict this relationship. Coupled with recent advances in AUV technology allowing selective retrieval of water samples, this constitutes a new paradigm in biological sampling. We use organism abundance models along with spatial models of environmental features learned immediately after AUV deployments to compute spatial distributions of organisms in the coastal ocean purely from in situ AUV data. We use Gaussian process regression along with the unscented transform to fuse the two models, obtaining both the mean and variance of the organism abundance estimates. The uncertainty in organism abundance predictions is used in a sampling strategy to selectively acquire new water specimens that improves the organism abundance models. Simulation results are presented demonstrating the advantage of performing hierarchical probabilistic regression. After the validation through simulation, we show predictions of organism abundance from models learned on lab-analyzed water sample data, and AUV survey data.", "corpus_id": 1805805, "title": "Hierarchical probabilistic regression for AUV-based adaptive sampling of marine phenomena" }
{ "abstract": "Thin layers of phytoplankton have an important impact on coastal ocean ecology. The high spatial and temporal variability of such layers makes autonomous underwater vehicles (AUVs) ideal for their study. At the Monterey Bay Aquarium Research Institute (MBARI, Moss Landing, CA), the authors have used an AUV for obtaining repeated high-resolution surveys of thin layers in Monterey Bay, CA. The AUV is equipped with ten “gulpers” that can capture water samples when some feature is detected. In this paper, the authors present an adaptive triggering method for an AUV to capture water samples at chlorophyll fluorescence peaks in a thin layer. The algorithm keeps track of the fluorescence background level and the peaks' baseline in real time to ensure that detection is tuned to the ambient conditions. The algorithm crosschecks for concurrent high values of optical backscattering to ensure that sampling targets true particle peaks and not simply physiologically controlled fluorescence peaks. To let the AUV capture the thin layer's peak without delay, the algorithm takes advantage of the vehicle's sawtooth (i.e., yo-yo) trajectory: in one yo-yo cycle, the vehicle makes two crossings of the thin layer. On the first crossing, the vehicle detects the layer's fluorescence peak and saves the peak height; on the second crossing, as the fluorescence measurement reaches the saved peak height (plus meeting additional timing and depth conditions), a sampling is triggered. Based on the thin layer's vertical position in the vehicle's yo-yo profiles, the algorithm selects the pair of detection and triggering crossings so as to minimize the spacing between them. We use the algorithm to postprocess a data set of 20 AUV missions in the 2005 Layered Organization in the Coastal Ocean (LOCO) Experiment in Monterey Bay, CA, and compare its performance with that of a threshold triggering method. 
In October 2009, the presented method was field tested in an AUV mission in northern Monterey Bay, CA.", "corpus_id": 2069048, "score": -1, "title": "Design and Tests of an Adaptive Triggering Method for Capturing Peak Samples in a Thin Phytoplankton Layer by an Autonomous Underwater Vehicle" }
{ "abstract": "Primary Ovarian Insufficiency (POI) is a major cause of infertility, but its etiology remains poorly understood. Using whole-exome sequencing in a family with three cases of POI, we identified the candidate missense variant S167L in HSF2BP, an essential meiotic gene. Functional analysis of the HSF2BP-S167L variant in mouse showed that it behaves as a hypomorphic allele compared to a new loss-of-function (knock-out) mouse model. Hsf2bpS167L/S167L females show reduced fertility with smaller litter sizes. To obtain mechanistic insights, we identified C19ORF57/BRME1 as a strong interactor and stabilizer of HSF2BP and showed that the BRME1/HSF2BP protein complex co-immunoprecipitates with BRCA2, RAD51, RPA and PALB2. Meiocytes bearing the HSF2BP-S167L variant showed a strongly decreased staining of both HSF2BP and BRME1 at the recombination nodules and a reduced number of the foci formed by the recombinases RAD51/DMC1, thus leading to a lower frequency of crossovers. Our results provide insights into the molecular mechanism of HSF2BP-S167L in human ovarian insufficiency and sub(in)fertility.", "corpus_id": 221326215, "title": "A missense in HSF2BP causing primary ovarian insufficiency affects meiotic recombination by its novel interactor C19ORF57/BRME1" }
{ "abstract": "Comparisons among a variety of eukaryotes have revealed considerable variability in the structures and processes involved in their meiosis. Nevertheless, conventional forms of meiosis occur in all major groups of eukaryotes, including early-branching protists. This finding confirms that meiosis originated in the common ancestor of all eukaryotes and suggests that primordial meiosis may have had many characteristics in common with conventional extant meiosis. However, it is possible that the synaptonemal complex and the delicate crossover control related to its presence were later acquisitions. Later still, modifications to meiotic processes occurred within different groups of eukaryotes. Better knowledge on the spectrum of derived and uncommon forms of meiosis will improve our understanding of many still mysterious aspects of the meiotic process and help to explain the evolutionary basis of functional adaptations to the meiotic program.", "corpus_id": 303491, "title": "Conservation and Variability of Meiosis Across the Eukaryotes." }
{ "abstract": "At the final step of homologous recombination, Holliday junction-containing joint molecules (JMs) are resolved to form crossover or noncrossover products. The enzymes responsible for JM resolution in vivo remain uncertain, but three distinct endonucleases capable of resolving JMs in vitro have been identified: Mus81-Mms4(EME1), Slx1-Slx4(BTBD12), and Yen1(GEN1). Using physical monitoring of recombination during budding yeast meiosis, we show that all three endonucleases are capable of promoting JM resolution in vivo. However, in mms4 slx4 yen1 triple mutants, JM resolution and crossing over occur efficiently. Paradoxically, crossing over in this background is strongly dependent on the Bloom's helicase ortholog Sgs1, a component of a well-characterized anticrossover activity. Sgs1-dependent crossing over, but not JM resolution per se, also requires the XPG family nuclease Exo1 and the MutLγ complex Mlh1-Mlh3. Thus, Sgs1, Exo1, and MutLγ together define a previously undescribed meiotic JM resolution pathway that produces the majority of crossovers in budding yeast and, by inference, in mammals.", "corpus_id": 1958015, "score": -1, "title": "Delineation of Joint Molecule Resolution Pathways in Meiosis Identifies a Crossover-Specific Resolvase" }
{ "abstract": "Readers can be drawn into a narrative through emotional engagement with its characters and their prospective fates. The type and extent of this engagement can be manipulated by providing the characters with distinct personalities. For this reason computational storytelling systems can benefit from explicitly representing personality. We present the results of an empirical study that evaluates whether the perceived personality of fictional characters created by our simulation-based narrative generator correlates with those computationally modeled. Motivated by the mimetic narrative theory of fictional minds the system models characters’ action selection using an agent architecture grounded in a cognitive understanding of personality and affect. Results from our study support the claim that our system is capable of depicting narrative personality, that cognitive models are a viable approach to representing characters, and that a search-space of plot can be explored using character-personality as parameter. This can be taken to also provide functional evidence in support of the employed analytical narrative theory.", "corpus_id": 52087641, "title": "An Evaluation of Perceived Personality in Fictional Characters Generated by Affective Simulation" }
{ "abstract": "Measuring the quality of plot is a desirable feature for computational narrative systems. One of the notions of plot quality used in narrative theory is called tellability, which can be derived from certain structural properties, namely the types of events present and the way they are connected. These structures include not only actualized events, but also take into account virtual plans and the affective valencies of events. The present paper introduces Marie-Laure Ryan's tellability principles and suggests to computationally model them using an affective multi-agent simulation system. It discusses how such an approach implies a broader understanding of plot than commonly assumed and analyzes several existing narrative systems under these considerations. Furthermore, it introduces a plot-graph formalism that allows the computational representation and analysis of the extended plot understanding. An approach to automatically generating the plot-graph is suggested in the context of the introduced multi-agent simulation system.", "corpus_id": 3720118, "title": "Towards a Computational Measure of Plot Tellability" }
{ "abstract": "Abstract The generation of extended plots for melodramatic fiction is an interesting task for Artificial Intelligence research, one that requires the application of generalization techniques to carry out fully. UNIVERSE is a story-telling program that uses plan-like units, ‘plot fragments’, to generate plot outlines. By using a rich library of plot fragments and a well-developed set of characters, UNIVERSE can create a wide range of plot outlines. In this paper we illustrate how UNIVERSE's plot fragment library is used to create plot outlines and how it might be automatically extended using explanation-based generalization methods. Our methods are based on analysis of a television melodrama, including comparisons of similar stories.", "corpus_id": 60877695, "score": -1, "title": "Story-telling as planning and learning" }
{ "abstract": "This paper develops a new concept of the meta-regional strategy, drawing on the contingency view of the meta-environment. The paper contributes to the regionalisation perspective and regional versus global strategy research in international business and global strategy. The study analyses European multinational enterprises (MNEs) that have markedly improved their standing in the Fortune Global 500 after a relative fall. We build on and extend the concept of the sharpbending process. Focusing on case studies of Daimler and BMW and their growth beyond their home region (notably in Asia), we incorporate regional integration into the analysis as a trigger of the sharpbending process. This helps us to define the meta-regional strategy: a strategy that maximizes the strategic fit between an organization’s firm-specific advantages and its unique meta-environment including resource munificence, institutions and markets of its home and host countries and regions where it operates. The paper offers answers to our main research question (When and how should MNEs adopt a global strategy?) and implications for future research on international diversification and performance, including lessons for European MNEs (such as that they can and should consider growing aggressively in the Asia-Pacific region, if they have a good strategic fit between their firm-specific advantages and their meta-environment.)", "corpus_id": 233748740, "title": "European Sharpbenders: Meta-regional Strategy and Regionalisation" }
{ "abstract": "Firms operate in a semi-globalized world wherein opportunities and constraints arise at both the country and regional levels; however, extant theories of firm internationalization focus mostly on country-level determinants. We aim to overcome this deficiency by developing a theoretical model that explicates the mechanisms driving firm internationalization in a semi-globalized world. Integrating the organizational learning literature with research on semi-globalization, we argue that firms internationalize through the interplay among three mechanisms: (1) intraregional exploitation; (2) intraregional reconfiguration; and (3) inter-regional exploration. We define and integrate these three mechanisms to derive two ideal typical internationalization trajectories that firms follow in a semi-globalized world: home regionalization and multiregionalization. We then elaborate on how macro-level contingencies moderate these two ideal types and conclude with implications for future research.", "corpus_id": 153730378, "title": "The World is Spiky: An Internationalization Framework for A Semi-Globalized World" }
{ "abstract": "This research examines region-bound headquarters disaggregation in multinational enterprises (MNEs). We link the formation of regional management centers—both dedicated regional headquarters (RHQs) and regional management mandates (RMMs) granted to operating subsidiaries—to the complexity argument underlying organizational information processing theory. We demonstrate how different dimensions of complexity associated with the number and dispersion of an MNE's subsidiary network in a focal region affect whether, and in which form, region-bound headquarters disaggregation takes place. Additionally, we consider boundary conditions affecting RMC formation based on within-region experience, global MNE footprint, and between-region effects. Empirically, we utilize a large global dataset of Japanese MNE foreign investments between 1992 and 2014, which allows us to perform event history analyses. This article is protected by copyright. All rights reserved.", "corpus_id": 157284249, "score": -1, "title": "MNE Headquarters Disaggregation: The Formation Antecedents of Regional Management Centers" }
{ "abstract": "MicroRNAs (miRNAs) constitute a recently discovered class of noncoding RNAs that play key roles in the regulation of gene expression. Despite being only ~20 nucleotides in length, these highly versatile molecules have been shown to play pivotal roles in development, basic cellular metabolism, apoptosis, and disease. While over 24,000 miRNAs have been characterized since they were first isolated in mammals in 2001, the functions of the majority of these miRNAs remain largely undescribed. That said, many now suggest that characterization of the relationships between miRNAs and transposable elements (TEs) can help elucidate miRNA functionality. Strikingly, over 20 publications have now reported the initial formation of thousands of miRNA loci from TE sequences. In this review we chronicle the findings of these reports, discuss the evolution of the field along with future directions, and examine how this information can be used to ascertain insights into miRNA transcriptional regulation and how it can be exploited to facilitate miRNA target prediction.", "corpus_id": 18796141, "title": "Burgeoning evidence indicates that microRNAs were initially formed from transposable element sequences" }
{ "abstract": "The origin and evolution of microRNA (miRNA) genes, which are of significance in tuning and buffering gene expression in a number of critical cellular processes, have long attracted evolutionary biologists. However, genome-wide perspectives on their origins, potential mechanisms of their de novo generation, and subsequent evolution remain largely unresolved in flowering plants. Here, genome-wide analyses of Oryza sativa and Arabidopsis thaliana revealed apparently divergent patterns of miRNA gene origins. A large proportion of miRNA genes in O. sativa were TE-related, and MITE-related miRNAs in particular, whereas the fraction of these miRNA genes was much lower in A. thaliana. Our results show that the majority of TE-related and pseudogene-related miRNA genes have originated through inverted duplication instead of segmental or tandem duplication events. Based on the presented findings, we hypothesize and illustrate four likely molecular mechanisms for the de novo generation of novel miRNA genes from TEs and pseudogenes. Our rice genome analysis demonstrates that non-MITE- and MITE-mediated inverted duplications have played different roles in the de novo generation of miRNA genes. It is confirmed that the previously proposed inverted duplication model may explain non-MITE-mediated duplication events. However, many other miRNA genes, known from the earlier proposed model, arose instead from MITE transpositions into target genes to yield binding sites. We further investigated the evolutionary processes leading from de novo generated to maturely formed miRNA genes and their regulatory systems. We found that miRNAs increase the tunability of some gene regulatory systems with low gene copy numbers. 
The results also suggest that gene balance effects may have largely contributed to the evolution of miRNA regulatory systems.", "corpus_id": 1300268, "title": "Evolution of MicroRNA Genes in Oryza sativa and Arabidopsis thaliana: An Update of the Inverted Duplication Model" }
{ "abstract": "BACKGROUND\nHuman telomeres are coated by the telomere repeat binding proteins TRF1 and TRF2, which are believed to function independently to regulate telomere length and protect chromosome ends, respectively.\n\n\nRESULTS\nHere, we show that TRF1 and TRF2 are linked via TIN2, a previously identified TRF1-interacting protein, and its novel binding partner TINT1. TINT1 localized to telomeres via TIN2, where it functioned as a negative regulator of telomerase-mediated telomere elongation. TIN2 associated with TINT1, and TRF1 or TRF2 throughout the cell cycle, revealing a partially redundant unit in telomeric chromatin that may provide flexibility in telomere length control. Indeed, when TRF1 was removed from telomeres by overexpression of the positive telomere length regulator tankyrase 1, the TIN2/TINT1 complex remained on telomeres via an increased association with TRF2.\n\n\nCONCLUSIONS\nOur findings suggest a dynamic cross talk between TRF1 and TRF2 and provide a molecular mechanism for telomere length homeostasis by TRF2 in the absence of TRF1.", "corpus_id": 19036934, "score": -1, "title": "A Dynamic Molecular Link between the Telomere Length Regulator TRF1 and the Chromosome End Protector TRF2" }
{ "abstract": "Introduction: One of the highly toxic mushrooms common in the northwest region of Iran is Amanita phalloides, which can result in renal or liver failure. Case Presentation: This is a case report of a patient referred to the emergency department with gastrointestinal complaints a few days after consumption of wild mushrooms, whose laboratory tests indicated liver and renal failure. Supportive treatment was given, with N-acetyl cysteine (NAC) and Livergol (silymarin) prescribed along with hemodialysis. A few days after admission to the hospital, the patient died due to severe clinical symptoms. Conclusions: The patient was poisoned by A. phalloides, presenting with gastrointestinal symptoms including nausea, vomiting, and watery diarrhea about six hours after consumption; the amatoxin in the mushroom then damaged hepatocytes and renal cells and finally led to hepatorenal failure. Mortality caused by this type of mushroom is extremely high, and the health system should provide the necessary training so that people do not consume wild mushrooms, especially in spring and summer.", "corpus_id": 22295569, "title": "Acute Hepatorenal Failure in a Patient Following Consumption of Mushrooms: A Case Report" }
{ "abstract": "Wild mushroom poisoning from the genus Amanita is a medical emergency, with Amanita phalloides being the most common offender. Patients may complain of nausea, vomiting, diarrhea and/or abdominal pain. If not aggressively treated, fulminant hepatic failure may develop within several days of ingestion. In this case report, a patient poisoned with Amanita bisporigera is described, along with the typical clinical presentation, patient outcomes, and treatment options for dealing with an Amanita mushroom poisoning.", "corpus_id": 1972124, "title": "Amanita bisporigera-Induced Hepatic Failure: A Fatal Case of Mushroom Ingestion" }
{ "abstract": "R-alpha-Lipoic acid is found naturally occurring as a prosthetic group in alpha-keto acid dehydrogenase complexes of the mitochondria, and as such plays a fundamental role in metabolism. Although this has been known for decades, only recently has free supplemented alpha-lipoic acid been found to affect cellular metabolic processes in vitro, as it has the ability to alter the redox status of cells and interact with thiols and other antioxidants. Therefore, it appears that this compound has important therapeutic potential in conditions where oxidative stress is involved. Early case studies with alpha-lipoic acid were performed with little knowledge of the action of alpha-lipoic acid at a cellular level, but with the rationale that because the naturally occurring protein bound form of alpha-lipoic acid has a pivotal role in metabolism, that supplementation may have some beneficial effect. Such studies sought to evaluate the effect of supplemented alpha-lipoic acid, using low doses, on lipid or carbohydrate metabolism, but little or no effect was observed. A common response in these trials was an increase in glucose uptake, but increased plasma levels of pyruvate and lactate were also observed, suggesting that an inhibitory effect on the pyruvate dehydrogenase complex was occurring. During the same period, alpha-lipoic acid was also used as a therapeutic agent in a number of conditions relating to liver disease, including alcohol-induced damage, mushroom poisoning, metal intoxification, and CCl4 poisoning. Alpha-Lipoic acid supplementation was successful in the treatment for these conditions in many cases. Experimental studies and clinical trials in the last 5 years using high doses of alpha-lipoic acid (600 mg in humans) have provided new and consistent evidence for the therapeutic role of antioxidant alpha-lipoic acid in the treatment of insulin resistance and diabetic polyneuropathy. 
This new insight should encourage clinicians to use alpha-lipoic acid in diseases affecting liver in which oxidative stress is involved.", "corpus_id": 10978676, "score": -1, "title": "Alpha-lipoic acid in liver metabolism and disease." }
{ "abstract": "This paper proposes Reduction to Uniprocessor Transformation (RUNT), which is an optimal multiprocessor real-time scheduling algorithm based on RUN with Real-Time Static Voltage and Frequency Scaling, called S-RUNT, and Real-Time Dynamic Voltage and Frequency Scaling, called D-RUNT. D-RUNT uses Enhanced Cycle-Conserving Earliest Deadline First to make use of slack produced during execution. In addition, we prove the optimality and analyze the overhead of RUNT.", "corpus_id": 16177981, "title": "Optimal Multiprocessor Real-Time Scheduling Based on RUN with Voltage and Frequency Scaling" }
{ "abstract": "SUMMARY For battery based real-time embedded systems, high per- formance to meet their real-time constraints and energy e ffi ciency to ex-tend battery life are both essential. Real-Time Dynamic Voltage Scaling (RT-DVS) has been a key technique to satisfy both requirements. This paper presents EccEDF (Enhanced ccEDF), an e ffi cient algorithm based on ccEDF. ccEDF is one of the most simple but e ffi cient RT-DVS algorithms. Its simple structure enables it to be easily and intuitively coupled with a real-time operating system without incurring any significant cost. ccEDF, however, overlooks an important factor in calculating the available slacks for reducing the operating frequency. It calculates the saved utilization simply by dividing the slack by the period without considering the time needed to run the task. If the elapsed time is considered, the maximum utilization saved by the slack on completion of the task can be found. The proposed EccEDF can precisely calculate the maximum unused utilization with consideration of the elapsed time while keeping the structural simplicity of ccEDF. Further, we analytically establish the feasibility of EccEDF using the fluid scheduling model. Our simulation results show that the proposed algorithm outperforms ccEDF in all simulations. A simulation shows that EccEDF consumes 27% less energy than ccEDF.", "corpus_id": 1889273, "title": "Enhanced Cycle-Conserving Dynamic Voltage Scaling for Low-Power Real-Time Operating Systems" }
{ "abstract": "Abstract This study evaluates the role of 1) low cloud condensation nuclei (CCN) conditions and 2) preferred radiative cooling of large cloud drops as compared to small cloud drops, on cloud droplet spectral broadening and subsequent freezing drizzle formation in stably stratified layer clouds. In addition, the sensitivity of freezing drizzle formation to ice initiation is evaluated. The evaluation is performed by simulating cloud formation over a two-dimensional idealized mountain using a detailed microphysical scheme implemented into the National Center for Atmospheric Research–Pennsylvania State University Mesoscale Model version 5. The height and width of the two-dimensional mountain were designed to produce an updraft pattern with extent and magnitude similar to documented freezing drizzle cases. The results of the model simulations were compared to observations and good agreement was found. The key results of this study are 1) low CCN concentrations lead to rapid formation of freezing drizzle. This ...", "corpus_id": 53471180, "score": -1, "title": "Freezing Drizzle Formation in Stably Stratified Layer Clouds: The Role of Radiative Cooling of Cloud Droplets, Cloud Condensation Nuclei, and Ice Initiation" }
{ "abstract": "Whole‐brain longitudinal diffusion studies are crucial to examine changes in structural connectivity in neurodegeneration. Here, we investigated the longitudinal alterations in white matter (WM) microstructure across the timecourse of Huntington's disease (HD).", "corpus_id": 257534719, "title": "Progressive alterations in white matter microstructure across the timecourse of Huntington's disease" }
{ "abstract": "Objectives To evaluate candidate outcomes for disease-modifying trials in Huntington's disease (HD) over 6-month, 9-month and 15-month intervals, across multiple domains. To present guidelines on rapid efficacy readouts for disease-modifying trials. Methods 40 controls and 61 patients with HD, recruited from four EU sites, underwent 3 T MRI and standard clinical and cognitive assessments at baseline, 6 and 15 months. Neuroimaging analysis included global and regional change in macrostructure (atrophy and cortical thinning), and microstructure (diffusion metrics). The main outcome was longitudinal effect size (ES) for each outcome. Such ESs can be used to calculate sample-size requirements for clinical trials for hypothesised treatment efficacies. Results Longitudinal changes in macrostructural neuroimaging measures such as caudate atrophy and ventricular expansion were significantly larger in HD than controls, giving rise to consistently large ES over the 6-month, 9-month and 15-month intervals. Analogous ESs for cortical metrics were smaller with wide CIs. Microstructural (diffusion) neuroimaging metrics ESs were also typically smaller over the shorter intervals, although caudate diffusivity metrics performed strongly over 9 and 15 months. Clinical and cognitive outcomes exhibited small longitudinal ESs, particularly over 6-month and 9-month intervals, with wide CIs, indicating a lack of precision. Conclusions To exploit the potential power of specific neuroimaging measures such as caudate atrophy in disease-modifying trials, we propose their use as (1) initial short-term readouts in early phase/proof-of-concept studies over 6 or 9 months, and (2) secondary end points in efficacy studies over longer periods such as 15 months.", "corpus_id": 3215001, "title": "Short-interval observational data to inform clinical trial design in Huntington's disease" }
{ "abstract": "Abstract The aim of this study was to compare the suture tension and the extent of distortion according to the continuous and interrupted suture methods. An in vitro eyelid model of 10-cm length and 5 layers was made with a 3-layer skin pad for the skin, muscle, and aponeurosis and silicone sheet and sponge for the tarsal plate and conjunctiva. The thickness of the model was 11.8 mm. All interrupted sutures were used in Khoo’s method, the buried method, and Mikamo’s method, and a continuous suture was applied in the 2-loop en bloc method, the subconjunctival buried method, and Maruo’s method. The thickness of the eyelid was measured with a custom-made micrometer that had tacks attached on a measuring bar. The tension was measured with a force-gauge. The distortion in the interrupted suture methods was 15.2% ± 3.4% of the original thickness, and it was significantly greater than the 3.3% ± 2.8% of the original thickness in the continuous suture methods (P = 0.000, t- test). In the interrupted suture methods, Khoo’s method showed the greatest rate of distortion (16.9% ± 4.5%), and this was followed by Mikamo’s technique (14.5% ± 2.5%) and the buried suture method (13.6% ± 1.4%). For the continuous suture methods, the 2-loop en bloc method showed the least tension (0.33 ± 0.05 N), and this was followed by Maruo’s method (0.41 ± 0.07 N) and the subconjunctival buried suture method (0.45 ± 0.07 N). The tension of the suture at each loop was significantly greater (P = 0.000, t-test) in the interrupted suture methods (0.52 ± 0.07 N) than that in the continuous suture methods (0.41 ± 0.08 N). For the interrupted suture methods, Khoo’s methods showed the greatest rate of tension (0.54 ± 0.06 N) compared with the buried suture technique (0.51 ± 0.08 N) and Mikamo’s technique (0.48 ± 0.07 N). 
We contend that a continuous suture method causes minimum notching, whereas an interrupted suture method causes less incidence of double-fold fading.", "corpus_id": 7301443, "score": -1, "title": "Tension and Distortion of the Upper Double Eyelid by a Nonincision Method" }
{ "abstract": "In this paper a novel brain tumor segmentation scheme using fractional order sobel mask and marker controlled watershed transform is proposed. To obtain the bright tumor region, regional maxima operation is performed on morphological preprocessed input T2-weighted MR image. The output regional maxima image is taken as an internal marker. Distance transform based watershed transform is applied on regional maxima image, the watershed ridge lines are used as external marker. Now, the fractional sobel mask of order a=0.3 is applied on input T2-weighted MR brain image to obtain gradient magnitude image. The segmentation of tumor region is achieved by using the watershed transform of gradient magnitude image with the help of derived internal and external markers. Region of interest (ROI) is selected to get the final segmented tumor image. Simulations are performed on images taken from the BRATS-2013 dataset for different values of a. For a = 0.3 values of accuracy, sensitivity and specificity performance parameters are comparable to other schemes compared. Moreover, fractional order a provides additional degree of freedom in optimizing the segmentation results. Proposed scheme can be used to segment other types of tumors and also for segmentation of CT images.", "corpus_id": 20269369, "title": "Brain tumor segmentation from MRI using fractional sobel mask and watershed transform" }
{ "abstract": "A new texture feature based seeded region growing algorithm is proposed for the automated segmentation of organs in Abdominal MR image. Co-occurrence texture feature and semi-variogram texture feature are extracted from the image and the seeded region growing algorithm is run on these feature spaces. With a given Region of Interest(ROI), a seed point is automatically picked up based on three homogeneity criteria. A threshold is then obtained by taking a lower value just before the one causing 'explosion'. This algorithm is tested on 12 series of 3D abdominal MR images.", "corpus_id": 16983414, "title": "Texture Feature based Automated Seeded Region Growing in Abdominal MRI Segmentation" }
{ "abstract": "There are so many methods used for segmenting the human brain images. The physician are following the invasive method to identify cancer which gives more painful to the patients. CT (computer tomography) scan, MRI (medical reasoning imaging) and CAD (computer aided design) are helpful to analyzing the abnormalities of different parts. For example the Tumor cells, cancer cells and the fractures in the different parts are examined but the accuracy is unpredictable. In this system, a Normalized cut segmentation techniques is used for segmenting medical brain images and then Region Classification algorithm is used in order to isolate the abnormal and normal regions in the medical brain images which is entirely independent and has the capability to separate various kinds of abnormalities. Finally, the abnormal stage is detected for each brain image by staging process.", "corpus_id": 34506791, "score": -1, "title": "An novel approach for segmentation using brain images" }
{ "abstract": "The thermal conductivity of high-quality narrow-bandgap (0.77eV) InN grown on GaN on sapphire substrate by pulsed- MOVPE method was measured and analyzed. To accurately extract the thermal conductivities of GaN and InN films grown on sapphire substrate, 2D multilayer thermal diffusion model and extended 3ω slope technique are employed. The thermal conductivity of sapphire substrate measured is 41 W/(mK). The thermal conductivity of undoped GaN film is measured as 108 W/(mK). High-quality pulsed-MOVPE grown InN film exhibits thermal conductivity of 126 W/(mK), which is higher in comparison to the previously-reported value of porous InN ceramics 45 W/(mK), yet lower than the theoretical value 176 W/(mK) based on phonon scattering.", "corpus_id": 137123769, "title": "Thermal conductivity measurement of pulsed-MOVPE InN alloy grown on GaN/Sapphire by 3ω method" }
{ "abstract": "a b s t r a c t In this study, we report on the pulsed metalorganic vapor phase epitaxy (MOVPE) of InN as well as the optical and electronic properties of the films as a function of V/III ratio and growth temperatures. The growth of InN films was conducted utilizing a vertical reactor with TMIn and NH 3 as the In- and N- precursors, respectively. Metallic droplet-free InN films were achieved on GaN/sapphire template in a pulsed MOVPE mode with low V/III ratio condition. In the pulsed growth mode, NH 3 was constantly flowing while the TMIn was sent into the reactor chamber for a 36-s pulse and then it bypassed the reactor chamber for an 18-s pulse for a total cycle time of 54s. At a growth pressure of 200Torr, the effects of growth temperature (510-575 1C) and V/III ratio (12,460-17,100) on the photolumines- cence (PL) transitions were investigated. Morphological evolution as well as the electrical quality of the overgrown films have also been studied for the given growth conditions.", "corpus_id": 1652907, "title": "Influence of growth temperature and V/III ratio on the optical characteristics of narrow band gap (0.77 eV) InN grown on GaN/sapphire using pulsed MOVPE" }
{ "abstract": "This paper derives a contact-aided inertial navigation observer for a 3D bipedal robot using the theory of invariant observer design. Aided inertial navigation is fundamentally a nonlinear observer design problem; thus, current solutions are based on approximations of the system dynamics, such as an Extended Kalman Filter (EKF), which uses a system's Jacobian linearization along the current best estimate of its trajectory. On the basis of the theory of invariant observer design by Barrau and Bonnabel, and in particular, the Invariant EKF (InEKF), we show that the error dynamics of the point contact-inertial system follows a log-linear autonomous differential equation; hence, the observable state variables can be rendered convergent with a domain of attraction that is independent of the system's trajectory. Due to the log-linear form of the error dynamics, it is not necessary to perform a nonlinear observability analysis to show that when using an Inertial Measurement Unit (IMU) and contact sensors, the absolute position of the robot and a rotation about the gravity vector (yaw) are unobservable. We further augment the state of the developed InEKF with IMU biases, as the online estimation of these parameters has a crucial impact on system performance. We evaluate the convergence of the proposed system with the commonly used quaternion-based EKF observer using a Monte-Carlo simulation. In addition, our experimental evaluation using a Cassie-series bipedal robot shows that the contact-aided InEKF provides better performance in comparison with the quaternion-based EKF as a result of exploiting symmetries present in the system dynamics.", "corpus_id": 44097731, "score": -1, "title": "Contact-Aided Invariant Extended Kalman Filtering for Legged Robot State Estimation" }
{ "abstract": "Background: Self-strangulation with a vehicle-assisted ligature is reported to be very uncommon suicidal method. Related cases are rarely published, and many of them report massive injuries to neck organs, and even complete decapitation. \n \nCase presentation: We report an unusual case of self-strangulation where a body was found dead inside a car with a rope around his neck and tied to an electric pole behind the car. The rope didn’t break while the car moved away, and no decapitation was associated. The manner of death was determined as suicide based on a complete criminal investigation supported by a detailed crime scene investigation and autopsy. \n \nConclusion: This case is reported for its originality due to the unusual method of suicide employed by the victim, and because it was not related to decapitation.", "corpus_id": 233469046, "title": "Self-Strangulation by Vehicle-Assisted Ligature Without Decapitation: A Case Report and A Review of the Literature" }
{ "abstract": "Abstract Self-strangulation is a very uncommon method of suicide. Deaths by vehicle-assisted ligature are rarely published and mainly related to decapitation. We report an unusual case of self-strangulation where a body was found dead inside a car with a rope round his neck and tied to a bridge banister. The rope was broken at 20 meters from the vehicle while the victim was driving his car away. The manner of death was determined as suicide based on objective scene investigation, autopsy and witness testimony. This case is reported for its rarity due to the method of suicide employed by the victim and because it was not related to decapitation.", "corpus_id": 32049685, "title": "An uncommon suicide method: Self-strangulation by vehicle-assisted ligature" }
{ "abstract": "The capacitated p-median problem (CPMP) seeks to obtain the optimal location of p medians considering distances and capacities for the services to be given by each median. This paper presents an efficient hybrid metaheuristic algorithm by combining a proposed cutting-plane neighborhood structure and a tabu search metaheuristic for the CPMP. In the proposed neighborhood structure to move from the current solution to a neighbor solution, an open median is selected and closed. Then, a linear programming (LP) model is generated by relaxing binary constraints and adding new constraints. The generated LP solution is improved using cutting-plane inequalities. The solution of this strong LP is considered as a new neighbor solution. In order to select an open median to be closed, several strategies are proposed. The neighborhood structure is combined with a tabu search algorithm in the proposed approach. The parameters of the proposed hybrid algorithm are tuned using design of experiments approach. The proposed algorithm is tested on several sets of benchmark instances. The statistical analysis shows efficiency and effectiveness of the hybrid algorithm in comparison with the best approach found in the literature.", "corpus_id": 15596973, "score": -1, "title": "A hybrid metaheuristic approach for the capacitated p-median problem" }
{ "abstract": "Ship detection is a challenging task for synthetic aperture radar (SAR) images. Ships have arbitrary directionality and multiple scales in SAR images. Furthermore, there is a lot of clutter near the ships. Traditional detection algorithms are not robust to these situations and easily cause redundancy in the detection area. With the continuous improvement in resolution, the traditional algorithms cannot achieve high-precision ship detection in SAR images. An increasing number of deep learning algorithms have been applied to SAR ship detection. In this study, a new ship detection network, known as the instance segmentation assisted ship detection network (ISASDNet), is presented. ISASDNet is a two-stage detection network with two branches. A branch is called an object branch and can extract object-level information to obtain positioning bounding boxes and classification results. Another branch called the pixel branch can be utilized for instance segmentation. In the pixel branch, the designed global relational inference layer maps the features to interaction space to learn the relationship between ship and background. The global reasoning module (GRM) based on global relational inference layers can better extract the instance segmentation results of ships. A mask assisted ship detection module (MASDM) is behind the two branches. The MASDM can improve detection results by interacting with the outputs of the two branches. In addition, a strategy is designed to extract the mask of SAR ships, which enables ISASDNet to perform object detection training and instance segmentation training at the same time. Experiments carried out two different datasets demonstrated the superiority of ISASDNet over other networks.", "corpus_id": 236521070, "title": "A Deep Detection Network Based on Interaction of Instance Segmentation and Object Detection for SAR Images" }
{ "abstract": "The cell averaging constant false alarm rate technique was applied to the detection of saccades from electro- oculographic signals. The investigated saccade detection method achieves constant false alarm rate by adjusting its sensitivity according to the average noise level in the observed signal. Excluding the choice of certain parameters and initialisation, the method can operate autonomously without user assistance. The method is computationally efficient and capable of sequential detection of saccades. Therefore, it is suitable for both real-time and non-real-time applications.", "corpus_id": 1822414, "title": "Application of the Cell Averaging Constant False Alarm Rate Technique to Saccade Detection in Electro-oculography" }
{ "abstract": "The control of computer functions by eye movements was demonstrated in 14 normal volunteers. Electrical potentials recorded by horizontal and vertical electrooculography (EOG) were transformed into a cursor that represented a moving fixation point on a computer display. Subjects were able to spell words and sentences by using eye movements to place the cursor on target letters in the display of an alphabet matrix. The successful demonstration of computer-controlled syntactic construction by eye movements offers a potentially useful technique for computer-assisted communication in special groups, such as developmentally-disabled individuals who have motor paralysis and who cannot speak.", "corpus_id": 45758798, "score": -1, "title": "Eye movement control of computer functions." }
{ "abstract": "We compared the effects of 6 months of randomly allocated endurance or resistance training on arterial dimensions. Previous research suggests that arterial size increases with exercise, but this is based on cross‐sectional comparisons or interventions that rarely exceeded 12 weeks. Using high‐resolution ultrasound, we demonstrated arterial size adaptations that are specific to the exercise mode. Resistance exercise increased diameter and function in the brachial artery. Femoral diameter and function increased after endurance exercise. Carotid arterial wall thickness decreased with training, while conduit arterial wall thicknesses remained unchanged. This study directly addressed the question of differential impacts of exercise modality on vascular adaptations of conduit arteries in humans in response to a relatively prolonged training intervention period. We conclude that both endurance and resistance modalities have impacts on arterial size, function and wall thickness in vivo, which would be expected to translate to decreased cardiovascular risk.", "corpus_id": 19397955, "title": "A prospective randomized longitudinal study involving 6 months of endurance or resistance exercise. Conduit artery adaptation in humans" }
{ "abstract": "Thickening of the carotid artery wall has been adopted as a surrogate marker of pre-clinical atherosclerosis, which is strongly related to increased cardiovascular risk. The cardioprotective effects of exercise training, including direct effects on vascular function and lumen dimension, have been consistently reported in asymptomatic subjects and those with cardiovascular risk factors and diseases. In the present review, we summarize evidence pertaining to the impact of exercise and physical activity on arterial wall remodelling of the carotid artery and peripheral arteries in the upper and lower limbs. We consider the potential role of exercise intensity, duration and modality in the context of putative mechanisms involved in wall remodelling, including haemodynamic forces. Finally, we discuss the impact of exercise training in terms of primary prevention of wall thickening in healthy subjects and remodelling of arteries in subjects with existing cardiovascular disease and risk factors.", "corpus_id": 2267987, "title": "Impact of exercise training on arterial wall thickness in humans" }
{ "abstract": "Exercise causes oxidative stress only when exhaustive. Strenuous exercise causes oxidation of glutathione, release of cytosolic enzymes, and other signs of cell damage. However, there is increasing evidence that reactive oxygen species (ROS) not only are toxic but also play an important role in cell signaling and in the regulation of gene expression. Xanthine oxidase is involved in the generation of superoxide associated with exhaustive exercise. Allopurinol (an inhibitor of this enzyme) prevents muscle damage after exhaustive exercise, but also modifies cell signaling pathways associated with both moderate and exhaustive exercise in rats and humans. In gastrocnemius muscle from rats, exercise caused an activation of MAP kinases. This in turn activated the NF-kappaB pathway and consequently the expression of important enzymes associated with defense against ROS (superoxide dismutase) and adaptation to exercise (eNOS and iNOS). All these changes were abolished when ROS production was prevented by allopurinol. Thus ROS act as signals in exercise because decreasing their formation prevents activation of important signaling pathways that cause useful adaptations in cells. Because these signals result in an upregulation of powerful antioxidant enzymes, exercise itself can be considered an antioxidant. We have found that interfering with free radical metabolism with antioxidants may hamper useful adaptations to training.", "corpus_id": 4187006, "score": -1, "title": "Moderate exercise is an antioxidant: upregulation of antioxidant genes by training." }
{ "abstract": "Tissue factor (TF) plays an essential role in hemostasis. The tissue-specific pattern of TF expression is consistent with additional hemostatic protection in vital organs. An aberrant expression of TF within the vasculature occurs in a variety of diseases, including atherosclerosis, cancer, and sepsis. TF expression in these diseases is associated with thrombotic events. Future therapeutic strategies may prove beneficial in the treatment of thrombosis. However, these strategies should be designed to avoid compromising hemostasis.", "corpus_id": 260318629, "title": "Tissue Factor in Hemostasis and Thrombosis" }
{ "abstract": "Inactivation of the murine tissue factor (TF) gene or tissue factor pathway inhibitor 1 (TFPI) gene results in embryonic lethality, indicating that both are required for embryonic development. We have shown that expression of low levels of TF from a transgene (hTF) rescues TF-null embryos. However, low-TF mice (mTF(-/-)/hTF+) have hemostatic defects in the uterus, placenta, heart, and lung. In this study, we hypothesized that the death of TFPI-/- embryos was due to unregulated TF/FVIIa activity and that the hemostatic defects in low-TF mice were due to insufficient TF expression. Therefore, we attempted to rescue TFPI-/- embryos by reducing TF expression, and to restore hemostasis in low-TF mice by abolishing TFPI expression. Intercrossing TFPI(+/-)/mTF(+/-)/hTF+/- mice generated close to the expected number of TFPI(-/-)/low-TF mice at weaning age from 128 offspring, indicating rescue of TFPI-/- embryos from embryonic lethality. Conversely, a decrease in TFPI levels dose-dependently prolonged the survival of low-TF mice and rescued the hemorrhagic defects in the lung and placenta but not in the heart or uterus. These results indicate that the correct balance between TF and TFPI in different organs is required to maintain hemostasis during embryonic development and in adult mice.", "corpus_id": 1164379, "title": "A balance between tissue factor and tissue factor pathway inhibitor is required for embryonic development and hemostasis in adult mice." }
{ "abstract": "The coagulation protease thrombin triggers fibrin formation, platelet activation, and other cellular responses at sites of tissue injury. We report a role for PAR1, a protease-activated G protein–coupled receptor for thrombin, in embryonic development. Approximately half of Par1 –/– mouse embryos died at midgestation with bleeding from multiple sites. PAR1 is expressed in endothelial cells, and a PAR1 transgene driven by an endothelial-specific promoter prevented death ofPar1 –/– embryos. Our results suggest that the coagulation cascade and PAR1 modulate endothelial cell function in developing blood vessels and that thrombin's actions on endothelial cells—rather than on platelets, mesenchymal cells, or fibrinogen—contribute to vascular development and hemostasis in the mouse embryo.", "corpus_id": 1670164, "score": -1, "title": "A Role for Thrombin Receptor Signaling in Endothelial Cells During Embryonic Development" }
{ "abstract": "We present a systematic comparison of conditional structure functions in nine turbulent flows. The flows studied include forced isotropic turbulence simulated on a periodic domain, passive grid wind tunnel turbulence in air and in pressurized SF6, active grid wind tunnel turbulence (in both synchronous and random driving modes), the flow between counter-rotating discs, oscillating grid turbulence and the flow in the Lagrangian exploration module (in both constant and random driving modes). We compare longitudinal Eulerian second-order structure functions conditioned on the instantaneous large-scale velocity in each flow to assess the ways in which the large scales affect the small scales in a variety of turbulent flows. Structure functions are shown to have larger values when the large-scale velocity significantly deviates from the mean in most flows, suggesting that dependence on the large scales is typical in many turbulent flows. The effects of the large-scale velocity on the structure functions can be quite strong, with the structure function varying by up to a factor of 2 when the large-scale velocity deviates from the mean by ±2 standard deviations. In several flows, the effects of the large-scale velocity are similar at all the length scales we measured, indicating that the large-scale effects are scale independent. In a few flows, the effects of the large-scale velocity are larger on the smallest length scales.", "corpus_id": 123006738, "title": "Signatures of non-universal large scales in conditional structure functions from various turbulent flows" }
{ "abstract": "A detailed comparison between data from experimental measurements and numerical simulations of Lagrangian velocity structure functions in turbulence is presented. Experimental data, at Reynolds number ranging from Rλ=350 to Rλ=815, are obtained in a swirling water flow between counter-rotating baffled disks. Direct numerical simulations (DNS) data, up to Rλ=284, are obtained from a statistically homogeneous and isotropic turbulent flow. By integrating information from experiments and numerics, a quantitative understanding of the velocity scaling properties over a wide range of time scales and Reynolds numbers is achieved. To this purpose, we discuss in detail the importance of statistical errors, anisotropy effects, and finite volume and filter effects, finite trajectory lengths. The local scaling properties of the Lagrangian velocity increments in the two data sets are in good quantitative agreement for all time lags, showing a degree of intermittency that changes if measured close to the Kolmogorov time...", "corpus_id": 3581826, "title": "Lagrangian structure functions in turbulence : a quantitative comparison between experiment and direct numerical simulation" }
{ "abstract": "Heavy mass ions, Kr and Xe, having energies in the approximately 10 MeV/amu range have been used to produce thick planar optical waveguides at the surface of lithium niobate (LiNbO3). The waveguides have a thickness of 40-50 micrometers, depending on ion energy and fluence, smooth profiles and refractive index jumps up to 0.04 (lambda = 633 nm). They propagate ordinary and extraordinary modes with low losses keeping a high nonlinear optical response (SHG) that makes them useful for many applications. Complementary RBS/C data provide consistent values for the partial amorphization and refractive index change at the surface. The proposed method is based on ion-induced damage caused by electronic excitation and essentially differs from the usual implantation technique using light ions (H and He) of MeV energies. It implies the generation of a buried low-index layer (acting as optical barrier), made up of amorphous nanotracks embedded into the crystalline lithium niobate crystal. An effective dielectric medium approach is developed to describe the index profiles of the waveguides. This first test demonstration could be extended to other crystalline materials and could be of great usefulness for mid-infrared applications.", "corpus_id": 26001633, "score": -1, "title": "Thick optical waveguides in lithium niobate induced by swift heavy ions (approximately 10 MeV/amu) at ultralow fluences." }
{ "abstract": "School of Electrical and Information Engineering Masters in Electrical Engineering Power Quality Analysis of Variable Speed Drives by Amit Abraham A study was conducted to evaluate the effects of harmonics generated by Variable Speed Drives (VSDs). A VSD with a technology known as Reduced Harmonics Technology (RHT) was considered and benchmarked against existing solutions in industry in terms of cost and effectiveness. The RHT VSD, like the standard VSD, uses a three phase rectifier but with significantly lower DC bus capacitor banks and an advanced motor control processor. Simulation results reveal that the RHT VSD model produced current harmonics of approximately 30% when compared to a standard VSD, without any additional mitigation solutions, produced current harmonics above 100%. The RHT VSD was also found to be less expensive than the equivalent rated standard VSD. Laboratory experiments reveal that the input current of the RHT VSD and the standard VSD are similar to the input current waveforms from the simulation of the RHT VSD and the standard VSD respectively. Simulation of the DC bus capacitance and the source impedance reveal that in a lower range of DC capacitance values (below C1 = 600μF), the size of the DC bus capacitance has more effect on the input harmonics than the source impedance. An increase in source impedance does not reduce input harmonics. In the above mentioned range of capacitance values, it was noted that, the DC bus capacitance dominate the source impedance in its ability to reduce input harmonics. When the DC capacitance was increased above C1 = 600μF, the source impedance has more effect on the input harmonics than the size of the DC bus capacitance. The simulation and experimental results show that there are higher order (above 13th order) harmonic frequency components appearing at the input of the RHT VSD when compared to a standard VSD. 
It is clear that there is a trade off, due to the effect of the motor control processor, between there being reduced harmonics at the lower orders (below 13th order) and there being an increase in harmonics at higher orders (above 13th order). It was also noted from the experiment that there is no notable difference in the harmonic content at the outputs of the two VSDs.", "corpus_id": 59042731, "title": "Power Quality Analysis of Variable Speed Drives" }
{ "abstract": "This article reviews and compares the effects of harmonic mitigation methods on line supply and drive. Several methods are used to reduce the line current harmonics caused by pulse width modulated (PWM) ac drives: ac line reactors, dc link chokes, phase-shifting transformers, passive harmonic filters (PHFs), multipulse converters, active filters, and active front ends. Each one does a good job of harmonic reduction and also affects the total current drawn from the supply transformer, the power factor, and the dc bus voltage within the drive. Several passive and active filters and an 18-pulse converter were tested. The comparisons are supported by computer simulations of drive systems and verified by extensive tests that were conducted in the lab.", "corpus_id": 2015508, "title": "Curb the disturbance" }
{ "abstract": "Direct current (DC) electricity distribution systems have been proposed as an alternative to traditional, alternating current (AC) distribution systems for commercial buildings. Partial replacement of AC distribution with DC distribution can improve service to DC loads and overall building energy efficiency. This article develops (i) a mixed-integer, nonlinear, nonconvex mathematical programming problem to determine maximally energy efficient designs for mixed AC–DC electricity distribution systems in commercial buildings, and (ii) describes a tailored global optimization algorithm based on Nonconvex Generalized Benders Decomposition. The results of three case studies demonstrate the strength of the decomposition approach compared to state-of-the-art general-purpose global solvers.", "corpus_id": 3100017, "score": -1, "title": "Optimal design of mixed AC-DC distribution systems for commercial buildings: A Nonconvex Generalized Benders Decomposition approach" }
{ "abstract": "Abstract This paper addresses the question of whether and how easy monetary policy may lead to excesses in financial and real asset markets and ultimately result in financial dislocation. It presents evidence suggesting that periods when short-term interest rates were persistently and significantly below what Taylor rules would prescribe are correlated with increases in asset prices, especially as regards housing, though no systematic effects are identified on equity markets. Significant asset price increases, however, can also occur when interest rates are in line with Taylor rules, possibly associated with periods of financial deregulation and/or innovation. Finding also some support for a link of countries’ pre-crisis monetary stance with the extent to which their financial sectors were hit during the recent crisis, the paper argues that accommodating monetary policy over the period 2002–2005, probably in combination with rapid financial market innovation, would, in retrospect, seem to have been among the factors behind the run-up in asset prices and financial imbalances—the (partial) unwinding of which helped trigger the recent financial market crisis.", "corpus_id": 33810601, "title": "Monetary Ease: A Factor behind Financial Crises? Some Evidence from OECD Countries" }
{ "abstract": "On 18-19 June 2004, the BIS held a conference on \"Understanding Low Inflation and Deflation\". This event brought together central bankers, academics and market practitioners to exchange views on this issue (see the conference programme in this document). This paper was presented at the workshop. The views expressed are those of the author(s) and not those of the BIS.", "corpus_id": 153344185, "title": "Deflation in a Historical Perspective" }
{ "abstract": "The issue of profits in company management is as old as the joint stock company but remains ever topical and somewhat controversial. Accountants have one measure for profit and economists another measure, whilst some others want to do away with the idea of profits entirely to ensure social responsibility by companies. The theory of sustainability calls into question the existing theory of profits, apparently based on subsidization and negative externalities, as a result of its failure to factor into company accounts their true environmental costs.  Not only does the principle of sustainability appear to validate stakeholders’ rights in corporate profits but it also calls into question the current theories of profit creation and distributional equity based on shareholder theory, as well as existing company laws. This paper examines the relevant issues and argues that new legal rules on corporate accounting and profits reflecting generational equity, rather than reliance on voluntary compliance, are imperative for good corporate governance and sustainable development.  \n \n   \n \n Key words: Corporate profits law, CSR/corporate sustainability, sustainability accounting, generational equity, IFRS environmental standards, intra/inter -generational shareholders equity.", "corpus_id": 73520802, "score": -1, "title": "Profit creation, intra and inter-generational equity: Need for new company law" }
{ "abstract": "A few studies have investigated whether the risk of laryngeal cancer depends on the types of alcoholic beverage consumed, providing conflicting results. We investigated this issue using the data from two case–control studies conducted in Italy between 1986 and 2000. These included 672 cases of laryngeal cancer and 3454 hospital controls, admitted for acute, non-neoplastic conditions, unrelated to smoking and alcohol consumption. Significant trends in risk were found for total alcohol intake, with multivariate odds ratios (ORs) of 1.12 for drinkers of 3–4 drinks/day, 2.43 for 5–7, 3.65 for 8–11, and 4.83 for > 12 drinks/day, as compared to abstainers or light drinkers. Corresponding ORs for wine drinkers were 1.12, 2.45, 3.29 and 5.91. After allowance was made for wine intake, the ORs for beer drinkers were 1.65 for 1–2 drinks/day, and 1.36 for ≥ 3 drinks/day, as compared to non-beer drinkers; corresponding values for spirits drinkers were 0.88 and 1.15. This study thus indicates that in the Italian population characterized by frequent wine consumption, wine is the beverage most strongly related to the risk of laryngeal cancer.", "corpus_id": 29247309, "title": "Type of alcoholic beverage and the risk of laryngeal cancer" }
{ "abstract": "The aim of this study is to present risk assessments for the combined effect of alcohol and tobacco in cancer of the larynx. The case control study included all newly diagnosed laryngeal cancer patients under the age of 75 in Denmark during the years 1980-2. Four age and sex matched controls were selected using the municipal person registry in which the case was listed. Ninety six per cent of all cases and 78% of controls participated in the study, which is based on 326 cases and 1134 controls. Information on alcohol consumption and tobacco use was obtained by means of mailed questionnaires. For all laryngeal cancers as well as for the subgroups concerning cancer of the glottis and supraglottis alcohol consumption and tobacco use were found to be important risk factors. The effect of joint exposure was greater than the effect predicted from the sum of effects of each factor acting separately. Thus the combined effect follows a multiplicative rather than additive model.", "corpus_id": 3031286, "title": "Interaction of alcohol and tobacco as risk factors in cancer of the laryngeal region." }
{ "abstract": "The purpose of the study was to examine the occupational history of laryngeal cancer patients, and especially their exposure to welding. The investigation was conducted as a case-control study where all newly diagnosed patients less than 75 yr of age with cancer of the larynx in Denmark during March 1980 to March 1982 were selected as cases. For each case, four age- and sex-matched controls were identified from the municipal person register in which the case was listed. Data were collected partly by means of questionnaires and partly by abstracting information from the medical records of cases. Workers exposed to welding fumes had a slightly increased risk of cancer of the larynx, most predominantly of cancer of the subglottic area.", "corpus_id": 9733143, "score": -1, "title": "Welding and cancer of the larynx: a case-control study." }
{ "abstract": "A computationally efficient method for detecting a chorus section in popular and rock music is presented. The method utilizes a distance matrix representation that is obtained by summing two separate distance matrices calculated using the mel-frequency cepstral coefficient and pitch chroma features. The benefit of computing two separate distance matrices is that different enhancement operations can be applied on each. An enhancement operation is found beneficial only for the chroma distance matrix. This is followed by detection of the off-diagonal segments of small distance from the distance matrix. From the detected segments, an initial chorus section is selected using a scoring mechanism utilizing several heuristics, and subjected to further processing. This further processing involves using image processing filters in a neighborhood of the distance matrix surrounding the initial chorus section. The final position and length of the chorus is selected based on the filtering results. On a database of 206 popular & rock music pieces an average F-measure of 86% is obtained. It takes about ten seconds to process a song with an average duration of three to four minutes on a Windows XP computer with a 2.8 GHz Intel Xeon processor.", "corpus_id": 6783556, "title": "CHORUS DETECTION WITH COMBINED USE OF MFCC AND CHROMA FEATURES AND IMAGE PROCESSING FILTERS" }
{ "abstract": "This paper describes a method for automatically segmenting and labelling sections in recordings of musical audio. We incorporate the user’s expectations for segment duration as an explicit prior probability distribution in a Bayesian framework, and demonstrate experimentally that this method can produce accurate labelled segmentations for popular music.", "corpus_id": 2908762, "title": "A Markov-Chain Monte-Carlo Approach to Musical Audio Segmentation" }
{ "abstract": null, "corpus_id": 49016128, "score": -1, "title": "A principal axis transformation for non-hermitian matrices" }
{ "abstract": "In the context of drug hypersensitivity, our group has recently proposed a new model based on the structural features of drugs (pharmacological interaction with immune receptors; p-i concept) to explain their recognition by T cells. According to this concept, even chemically inert drugs can stimulate T cells because certain drugs interact in a direct way with T-cell receptors (TCR) and possibly major histocompatibility complex molecules without the need for metabolism and covalent binding to a carrier. In this study, we investigated whether mouse T-cell hybridomas transfected with drug-specific human TCR can be used as an alternative to drug-specific T-cell clones (TCC). Indeed, they behaved like TCC and, in accordance with the p-i concept, the TCR recognize their specific drugs in a direct, processing-independent, and dose-dependent way. The presence of antigen-presenting cells was a prerequisite for interleukin-2 production by the TCR-transfected cells. The analysis of cross-reactivity confirmed the fine specificity of the TCR and also showed that TCR transfectants might provide a tool to evaluate the potential of new drugs to cause hypersensitivity due to cross-reactivity. Recombining the α- and β-chains of sulfanilamide- and quinolone-specific TCR abrogated drug reactivity, suggesting that both original α- and β-chains were involved in drug binding. The TCR-transfected hybridoma system showed that the recognition of two important classes of drugs (sulfanilamides and quinolones) by TCR occured according to the p-i concept and provides an interesting tool to study drug-TCR interactions and their biological consequences and to evaluate the cross-reactivity potential of new drugs of the same class.", "corpus_id": 26179683, "title": "Transfection of Drug-Specific T-Cell Receptors into Hybridoma Cells: Tools to Monitor Drug Interaction with T-Cell Receptors and Evaluate Cross-Reactivity to Related Compounds" }
{ "abstract": "Background It has been shown that drugs comprise a group of non‐peptide antigens that can be recognized by human T cells in the context of HLA class II and that this recognition is involved in allergic reactions. Recent studies have demonstrated a MHC‐restricted but processing‐ and metabolism‐independent pathway for the presentation of allergenic drugs such as lidocaine and sulfamethoxazole (SMX) to drug‐specific T cells. However, there is little information so far on the precise molecular mechanisms of this non‐covalent drug presentation.", "corpus_id": 1594534, "title": "Non‐covalent presentation of sulfamethoxazole to human CD4+ T cells is independent of distinct human leucocyte antigen‐bound peptides" }
{ "abstract": "Alendronate is a potent inhibitor of bone resorption. To investigate the relationship between antiresorptive activity and bone-related side effects, we studied the effect of 2 months of daily alendronate (0.04, 0.2, 1.0 or 5.0 mg/kg/day) treatment on the strength of the femoral shaft and neck and on the bone mass of ovariectomized rats. The p.o. administration regimen began immediately after ovariectomy at 6 weeks of age, and the results were compared with pamidronate (0.2, 1.0 or 5.0 mg/kg/day) or etidronate (5.0, 25.0 or 125.0 mg/kg/day) treatment. In the femoral epiphysis and neck, a preventive effect of alendronate on loss of bone mineral density was observed at the dose of 1.0 mg/kg. The alendronate-treated group did not show significant alteration of the breaking load or the cross-sectional shape of the femoral midshaft. Similar results were obtained in the femoral neck strength and femoral neck geometry. In histomorphometric analysis of tibial metaphyses, alendronate inhibited the ratio of osteoid volume to tissue volume and the mineral apposition rate at a dose of 0.2 mg/kg compared with the ovariectomized control. In contrast, etidronate tended to increase osteoid volume/bone volume at 125 mg/kg. From these results, we conclude that p.o. alendronate-treatment prevented the decrease in bone mineral density and maintained the mechanical properties of bone after ovariectomy without impairing of bone mineralization in growing rats.", "corpus_id": 11704581, "score": -1, "title": "Effects of continuous alendronate treatment on bone mass and mechanical properties in ovariectomized rats: comparison with pamidronate and etidronate in growing rats." }
{ "abstract": "Although generalised exchange has been considered to be a key ingredient of organisational social capital, it has attracted limited attention in the organisational behaviour (OB) literature. Drawing upon studies of generalised exchange in a wide range of social science disciplines and social exchange research in the OB literature, I aim to answer a key question about generalised exchange: why do some people and not others engage in generalised exchange? \nIn this thesis, I propose that the rule of collective reciprocity is the fundamental regulating mechanism of generalised exchange and introduce the concept of generalised exchange orientation (GEO) – individuals’ beliefs in favour of the rule – as an individual characteristic that motivates individuals to engage in generalised exchange. I create a theoretical framework on the antecedents and consequences of GEO and conduct three empirical studies to examine the propositions. In the first study, I develop and validate scales to measure GEO and orientations to other forms of social exchange. The results support the new scales’ validity and their measurement invariance between the United States and Japan. The second study is to analyse the antecedents of GEO and indicates that task interdependence and depersonalised trust promote GEO over time. The third study involves analysing the impact of GEO on knowledge-sharing behaviours on an in-house online platform, and it shows that GEO promotes the behaviours, moderated by organisational identification. This evidence unpacks the micro-foundations of the occurrence of generalised exchange in organisations and provide insights into the development of individual orientation towards generalised exchange. Theoretical and practical implications will be discussed.", "corpus_id": 149412775, "title": "Generalised exchange orientation : a new construct and its antecedents and consequences" }
{ "abstract": "Electronic networks of practice are computer-mediated discussion forums focused on problems of practice that enable individuals to exchange advice and ideas with others based on common interests. However, why individuals help strangers in these electronic networks is not well understood: there is no immediate benefit to the contributor, and free-riders are able to acquire the same knowledge as everyone else. To understand this paradox, we apply theories of collective action to examine how individual motivations and social capital influence knowledge contribution in electronic networks. This study reports on the activities of one electronic network supporting a professional legal association. Using archival, network, survey, and content analysis data, we empirically test a model of knowledge contribution. We find that people contribute their knowledge when they perceive that it enhances their professional reputations, when they have the experience to share, and when they are structurally embedded in the network. Surprisingly, contributions occur without regard to expectations of reciprocity from others or high levels of commitment to the network.", "corpus_id": 207357142, "title": "Why Should I Share? Examining Social Capital and Knowledge Contribution in Electronic Networks of Practice" }
{ "abstract": "Understanding the attraction of virtual communities is crucial to organizations that want to tap into their enormous information potential. Existing literature theorizes that people join virtual communities to exchange information and/or social support. Theories of broader Internet use have indicated both entertainment and searching for friendship as motivational forces. This exploratory study empirically examines the importance of these reasons in assessing why people come to virtual communities by directly asking virtual community members why they joined. ::: ::: The responses to the open-ended question “Why did you join?” were categorized based upon the reasons suggested in the literature. Across 27 communities in 5 different broad types, 569 different reasons from 399 people indicated that most sought either friendship or exchange of information, and a markedly lower percent sought social support or recreation. The reasons were significantly dependent on the grouping of the communities into types. In all the community types information exchange was the most popular reason for joining. Thereafter, however, the reason varied depending on community type. Social support was the second most popular reason for members in communities with health/wellness and professional/occupational topics, but friendship was the second most popular reason among members in communities dealing with personal interests/hobbies, pets, or recreation. These findings suggest that virtual community managers should emphasize not only the content but also encourage the friendship and social support aspects as well if they wish to increase the success of their virtual community.", "corpus_id": 21854835, "score": -1, "title": "Virtual Community Attraction: Why People Hang Out Online" }
{ "abstract": "The regulatory sequence analysis tools (RSAT, http://rsat.ulb.ac.be/rsat/) is a software suite that integrates a wide collection of modular tools for the detection of cis-regulatory elements in genome sequences. The suite includes programs for sequence retrieval, pattern discovery, phylogenetic footprint detection, pattern matching, genome scanning and feature map drawing. Random controls can be performed with random gene selections or by generating random sequences according to a variety of background models (Bernoulli, Markov). Beyond the original word-based pattern-discovery tools (oligo-analysis and dyad-analysis), we recently added a battery of tools for matrix-based detection of cis-acting elements, with some original features (adaptive background models, Markov-chain estimation of P-values) that do not exist in other matrix-based scanning tools. The web server offers an intuitive interface, where each program can be accessed either separately or connected to the other tools. In addition, the tools are now available as web services, enabling their integration in programmatic workflows. Genomes are regularly updated from various genome repositories (NCBI and EnsEMBL) and 682 organisms are currently supported. Since 1998, the tools have been used by several hundreds of researchers from all over the world. Several predictions made with RSAT were validated experimentally and published.", "corpus_id": 11163851, "title": "RSAT: regulatory sequence analysis tools" }
{ "abstract": "BackgroundThe detection of conserved motifs in promoters of orthologous genes (phylogenetic footprints) has become a common strategy to predict cis-acting regulatory elements. Several software tools are routinely used to raise hypotheses about regulation. However, these tools are generally used as black boxes, with default parameters. A systematic evaluation of optimal parameters for a footprint discovery strategy can bring a sizeable improvement to the predictions.ResultsWe evaluate the performances of a footprint discovery approach based on the detection of over-represented spaced motifs. This method is particularly suitable for (but not restricted to) Bacteria, since such motifs are typically bound by factors containing a Helix-Turn-Helix domain. We evaluated footprint discovery in 368 Escherichia coli K12 genes with annotated sites, under 40 different combinations of parameters (taxonomical level, background model, organism-specific filtering, operon inference). Motifs are assessed both at the levels of correctness and significance. We further report a detailed analysis of 181 bacterial orthologs of the LexA repressor. Distinct motifs are detected at various taxonomical levels, including the 7 previously characterized taxon-specific motifs. In addition, we highlight a significantly stronger conservation of half-motifs in Actinobacteria, relative to Firmicutes, suggesting an intermediate state in specificity switching between the two Gram-positive phyla, and thereby revealing the on-going evolution of LexA auto-regulation.ConclusionThe footprint discovery method proposed here shows excellent results with E. coli and can readily be extended to predict cis-acting regulatory signals and propose testable hypotheses in bacterial genomes for which nothing is known about regulation.", "corpus_id": 236222, "title": "Evaluation of phylogenetic footprint discovery for predicting bacterial cis-regulatory elements and revealing their evolution" }
{ "abstract": "Comparative sequence analysis addresses the problem of RNA folding and RNA structural diversity, and is responsible for determining the folding of many RNA molecules, including 5S, 16S, and 23S rRNAs, tRNA, RNAse P RNA, and Group I and II introns. Initially this method was utilized to fold these sequences into their secondary structures. More recently, this method has revealed numerous tertiary correlations, elucidating novel RNA structural motifs, several of which have been experimentally tested and verified, substantiating the general application of this approach. As successful as the comparative methods have been in elucidating higher-order structure, it is clear that additional structure constraints remain to be found. Deciphering such constraints requires more sensitive and rigorous protocols, in addition to RNA sequence datasets that contain additional phylogenetic diversity and an overall increase in the number of sequences. Various RNA databases, including the tRNA and rRNA sequence datasets, continue to grow in number as well as diversity. Described herein is the development of more rigorous comparative analysis protocols. Our initial development and applications on different RNA datasets have been very encouraging. Such analyses on tRNA, 16S and 23S rRNA are substantiating previously proposed associations and are now beginning to reveal additional constraints on these molecules. A subset of these involve several positions that correlate simultaneously with one another, implying units larger than a basepair can be under a phylogenetic constraint.", "corpus_id": 15090100, "score": -1, "title": "Identifying constraints on the higher-order structure of RNA: continued development and application of comparative sequence analysis methods." }
{ "abstract": "Fusion and solidification of Al and Ag samples, as well as Fe93–Al3–C4, Fe56–Co37–Al3–C4, and Fe57.5–Co38–Al1–Pb0.5–C3 alloys (in wt%), have been investigated at 6.3 GPa. Heater power jumps due to heat consumption and release on metal fusion and solidification, respectively, were used to calibrate the thermal electromotive force of the thermocouple against the melting points (mp) for Ag and Al. Thus, obtained corrections are +100°C (for sample periphery) and +65°C (center) within the 1070–1320°C range. For small samples positioned randomly in the low-gradient zone of a high pressure cell, the corrections should be +80°C and +84°C at the temperatures 1070°C and 1320°C, respectively. The temperature contrast recorded in the low-gradient cell zone gives an error about ±17°C. The method has been applied to identify the mp of the systems, which is especially important for temperature-gradient growth of large type IIa synthetic diamonds.", "corpus_id": 93464310, "title": "High-temperature calibration of a multi-anvil high pressure apparatus" }
{ "abstract": "IN apparatus of the tetrahedral anvil or of the ‘belt’ type1, where a solid-pressure transmitting medium such as pyrophyllite is used, the relation between load and pressure is fairly complicated2. The pressure calibration at room temperature is frequently made in terms of the usual resistance transitions in bismuth, thallium and barium1. There is no guarantee that such a calibration will be valid for the high-temperature range up to more than 1,000° C in which such apparatus is often used.", "corpus_id": 4294237, "title": "Combined Very High Pressure/High Temperature Calibration of the Tetrahedral Anvil Apparatus, Fusion Curves of Zinc, Aluminium, Germanium and Silicon to 60 kilobars" }
{ "abstract": "The fusion curves of aluminum and thallium to 50 kbars and of gallium to 70 kbars, were investigated by differential thermal analysis. Comparison is made with previous work and with the fusion curve of indium. A break in slope of the melting curve of gallium near 45 °C and 30 kbars led to the discovery of a new polymorph. This phase—Ga III—is slightly less dense than Ga II, both persisting to 75 kbars. Solid-solid phase boundaries for gallium and thallium were determined by changes in volume and resistance as well as with DTA. The Ga II to Ga III transition is rapid and characterized by a substantial heat as well as a drop in resistance. From the investigation of the metastability phenomena, it is tentatively concluded that Ga III is identical with Bridgman's Ga II' and that the metastable phase studied by Defrain at 1 atm. is closely related to Ga II. The existence of three polymorphs of thallium is definitely established, although the undercooling noted by Ponyatovskii for the b.c.c.-h.c.p. transition was verified. The triple point for the solid thallium phases is near 115°C and 39 kbars, with melting of the high pressure polymorph expected above 650°C and 90 kbars.", "corpus_id": 94896894, "score": -1, "title": "Fusion curves and polymorphic transitions of the group III elements—Aluminum, gallium, indium and thallium—At high pressures" }
{ "abstract": "We examined the mechanisms of interaction of crocidolite asbestos fibers with the epidermal growth factor (EGF) receptor (EGFR) and the role of the EGFR-extracellular signal-regulated kinase (ERK) signaling pathway in early-response protooncogene (c-fos/c-jun) expression and apoptosis induced by asbestos in rat pleural mesothelial (RPM) cells. Asbestos fibers, but not the nonfibrous analog riebeckite, abolished binding of EGF to the EGFR. This was not due to a direct interaction of fibers with ligand, inasmuch as binding studies using fibers and EGF in the absence of membranes showed that EGF did not adsorb to the surface of asbestos fibers. Exposure of RPM cells to asbestos caused a greater than twofold increase in steady-state message and protein levels of EGFR (P < 0.05). The tyrphostin AG-1478, which inhibits the tyrosine kinase activity of the EGFR, but not the tyrphostin A-10, which does not affect EGFR activity, significantly ameliorated asbestos-induced increases in mRNA levels of c-fos but not of c-jun. Pretreatment of RPM cells with AG-1478 significantly reduced apoptosis in cells exposed to asbestos. Our findings suggest that asbestos-induced binding to EGFR initiates signaling pathways responsible for increased expression of the protooncogene c-fos and the development of apoptosis. The ability to block asbestos-induced elevations in c-fos mRNA levels and apoptosis by small-molecule inhibitors of EGFR phosphorylation may have therapeutic implications in asbestos-related diseases.", "corpus_id": 6580731, "title": "Asbestos-induced phosphorylation of epidermal growth factor receptor is linked to c-fos and apoptosis." }
{ "abstract": "Asbestos fibers are human carcinogens with undefined mechanisms of action. In studies here, we examined signal transduction events induced by asbestos in target cells of mesothelioma and potential cell surface origins for these cascades. Asbestos fibers, but not their nonfibrous analogues, induced protracted phosphorylation of the mitogen-activated protein (MAP) kinases and extracellular signal-regulated kinases (ERK) 1 and 2, and increased kinase activity of ERK2. ERK1 and ERK2 phosphorylation and activity were initiated by addition of exogenous epidermal growth factor (EGF) and transforming growth factor-alpha, but not by isoforms of platelet-derived growth factor or insulin-like growth factor-1 in mesothelial cells. MAP kinase activation by asbestos was attenuated by suramin, which inhibits growth factor receptor interactions, or tyrphostin AG 1478, a specific inhibitor of EGF receptor tyrosine kinase activity (IC50 = 3 nM). Moreover, asbestos caused autophosphorylation of the EGF receptor, an event triggering the ERK cascade. These studies are the first to establish that a MAP kinase signal transduction pathway is initiated after phosphorylation of a peptide growth factor receptor following exposure to asbestos fibers.", "corpus_id": 2906856, "title": "Asbestos causes stimulation of the extracellular signal-regulated kinase 1 mitogen-activated protein kinase cascade after phosphorylation of the epidermal growth factor receptor." }
{ "abstract": "PoP structures have been used widely in digital consumer electronics products such as digital still cameras and mobile phones. However, the final stack height from the top to the bottom package for these structures is higher than that of the current stacked die packages. To reduce the height of the package, a flip chip technology is used. Since the logic chips of mobile applications use a pad pitch of 80 µm or less, an ultra-fine-pitch flip chip interconnection technique is required. C4 flip chip technology is widely used in area array flip chip packages, but it is not suitable for ultra-fine-pitch flip chips because the C4 solder bumps melt and collapse on the wide-opening Cu pads. Although the industry uses ultra-fine-pitch interconnections between Au stud bumps on a chip and Sn/Ag pre-solder on a carrier, this flip chip technique has two major problems. One is that the need for bumps on both die and carrier drives up material costs. The other is that the long bonding process time required in the individual flip chip bonding process, with its associated heating and cooling steps, demands large investments in equipment. To address these problems, we developed mount-and-reflow with no-clean flux processes, and new interconnection techniques with Cu pillars and Sn/Ag solder bumps on Al pads for wirebonding were developed. It is very easy to control the gap between die and substrate by adjusting the Cu pillar height. Since it is unnecessary to control the collapse of the solder bumps, we call this the C2 process, for direct Chip Connection (C2). The C2 bumps are connected to Cu substrate pads, which are surface-treated with OSP (Organic Solder Preservative), with reflow and no-clean processes. This technology creates the SMT/flip chip hybrid assembly for SoP (System on Package) use. We have produced 50 µm-pitch C2 interconnections and tested their reliability. The interconnection resistance increase caused by the reliability testing is quite small. It is clear that C2 flip chip technology provides robust solder connections at low cost. Also, the C2 structure with a low-k device was evaluated and no failures were observed at 1,500 cycles in the thermal cycle test. This indicates that low-k C2 structures seem robust. For finer-pitch flip chip interconnections, a wafer-level underfill process is needed to overcome the limitations of the standard capillary underfill process for ultra-narrow spaces. To date, a wafer-level underfill process exists for the C2 process with an 80-µm pitch. In addition to fine-pitch interconnections, a die thickness of 70 µm is required to reduce the final stack height. Such thin die cannot be processed by the C2 process because such dies slip too easily during the reflow process. To resolve this issue, a Post-Encapsulation Grinding (PEG) method was developed. In this method the die is ground to less than 70 µm after joining and underfilling. This report presents the PEG method and reliability test results for die thicknesses of 20 µm, 70 µm and 150 µm.", "corpus_id": 13205813, "score": -1, "title": "Ultrafine-pitch C2 flip chip interconnections with solder-capped Cu pillar bumps" }
{ "abstract": "Modern multiprocessor system-on-chips employ network-on-chip (NoC) to efficiently connect different components together. NoCs need global and local interconnects to deliver high on-chip bandwidth and low communication latency to avoid being a performance bottleneck. They must also have high throughput density to reduce the area occupied by wires. This paper presents techniques to implement power-efficient transceivers for on-chip links that can achieve energy-proportional operation. Conventional on-chip links optimized for best energy efficiency at peak data rate suffer from degraded energy efficiency under low utilization conditions. Dynamic voltage and frequency scaling and clock gating can partially alleviate this problem, but become ineffective in applications like mobile devices, where the data traffic can be very sporadic. In this paper, architecture and circuit techniques to improve energy efficiency under all utilization levels are presented. The proposed transceiver uses single-ended signaling with only 0.5-µm width and spacing and achieves 5-Gb/s/µm throughput density. Fast-locking signaling and clocking circuits greatly reduce the power-ON time. Fabricated in 65-nm CMOS technology, the proposed 10-Gb/s transceiver achieves a wake-up time of less than 17 ns. More than 125× effective data rate scaling (10 Gb/s to 80 Mb/s) is obtained with an energy efficiency degradation of only 1.6× (627 to 997 fJ/b/mm). When the supply voltage is scaled from 1 to 0.7 V, the peak data rate scales from 10 to 6 Gb/s and the power-scalable range increases to 208× (10 Gb/s to 48 Mb/s) with an energy efficiency degradation of only 1.2× (627 to 753 fJ/b/mm).", "corpus_id": 3500050, "title": "A 10-Gb/s/ch, 0.6-pJ/bit/mm Power Scalable Rapid-ON/OFF Transceiver for On-Chip Energy Proportional Interconnects" }
{ "abstract": "Modern mobile platforms utilize power cycling to lower power dissipation and increase battery life. By turning off the circuits that are not in use, power cycling provides a viable means to make power dissipation proportional to workload, hence achieving energy-proportional operation. The effectiveness of this approach is governed by the turn-on/off times, off-state power dissipation, and energy overhead due to power cycling. Ideally, the circuits must turn on/off in zero time, consume no off-state power, and incur minimal energy overhead during on-to-off and off-to-on transitions. Conventional clock multipliers implemented using phase-locked loops (PLLs) present the biggest bottleneck in achieving these performance goals due to their long locking times. Even if the PLL is frequency locked, the slow phase acquisition process limits the power-on time [1-2]. Techniques such as dynamic phase-error compensation [3], edge-missing compensation [4], and hybrid PLLs [5] improve the phase acquisition time to, at best, a few hundred reference cycles. However, such improvements are inadequate to make the best use of power cycling. Multiplying injection-locked oscillators (MILOs) are shown to lock faster than PLLs, but suffer from conflicting requirements on injection strength to simultaneously achieve low jitter and fast locking. Increasing the injection strength extends lock range and reduces locking time, but severely degrades the deterministic jitter performance [6]. In view of these drawbacks, we propose a highly digital clock multiplier that seeks to achieve low jitter, fast locking, and near-zero off-state power. By using a highly scalable digital architecture with accurate frequency presetting and instantaneous phase acquisition, the prototype 8×/16× clock multiplier achieves 10-ns (3 reference cycles) power-on time, 2-ps rms long-term absolute jitter, less than 25-µW off-state power, 12-pJ energy overhead per on/off transition, and 2.2-mW on-state power at 2.5-GHz output frequency.", "corpus_id": 7799872, "title": "A 2.5GHz 2.2mW/25µW on/off-state power 2psrms-long-term-jitter digital clock multiplier with 3-reference-cycles power-on time" }
{ "abstract": "A multiplying delay-locked loop (MDLL) for high-speed on-chip clock generation that overcomes the drawbacks of phase-locked loops (PLLs) such as jitter accumulation, high sensitivity to supply, and substrate noise is described. The MDLL design removes such drawbacks while maintaining the advantages of a PLL for multirate frequency multiplication. This design also uses a supply regulator and filter to further reduce on-chip jitter generation. The MDLL, implemented in 0.18-µm CMOS technology, occupies a total active area of 0.05 mm² and has a speed range of 200 MHz to 2 GHz with selectable multiplication ratios of M = 4, 5, 8, 10. The complete synthesizer, including the output clock buffers, dissipates 12 mW from a 1.8-V supply at 2.0 GHz. This MDLL architecture is used as a clock multiplier integrated on a single chip for a 72×72 STS-1 grooming switch and has a jitter of 1.73 ps (rms) and 13.1 ps (pk-pk).", "corpus_id": 16043394, "score": -1, "title": "A low-power multiplying DLL for low-jitter multigigahertz clock generation in highly integrated digital chips" }
{ "abstract": "In order to ease the task of assessing the fire safety level for engineers, and to allow the specialists involved in the domain to use their preferred languages and tools, we propose to create a language dedicated to the fire safety domain that automatically generates a simulation, taking into account the business languages used by the specialists working in the domain. This DSL requires the definition, formalization, composition and integration of several models, with respect to the specific languages used by the specialists involved in the domain. The fire-safety domain-specific language is designed by composing and integrating several other DSLs described by technical and natural languages (as well as natural languages referring to technical languages). The latter are modeled so that their components are precise and rest on mathematical foundations that allow the coherence of the system (people and materials are safe) to be verified before its implementation. In this context, we propose to adopt a formal approach, based on algebraic specifications, to formalize the languages used by the specialists involved in the generation system, focusing on both the syntax and the semantics of the dedicated languages. In the algebraic approach, domain concepts are abstracted as data types and the relations between them. The semantics of the specific languages is described by the relations, the mapping between the defined data types, and their properties. The simulation language is based on a language designed by composing several specific DSLs previously described and formalized. The different DSLs are implemented using functional programming concepts and the functional language Haskell, which is well suited to this approach. The result of this work is a software tool dedicated to the automatic generation of simulations, with the aim of easing the task of assessing the fire safety level for engineers. This tool is the property of the Centre Scientifique et Technique du Bâtiment (CSTB), an organization whose mission is to guarantee the quality and safety of buildings, bringing together multidisciplinary skills to develop and share scientific and technical knowledge, in order to provide the various actors with the answers expected in their professional practice.", "corpus_id": 171674748, "title": "Développement d'un outil d'évaluation performantielle des réglementations incendie en France et dans les pays de l'Union Européenne" }
{ "abstract": "There have been many recent proposals for embedding abstract data types in programming languages. In order to reason about programs using abstract data types, it is desirable to specify their properties at an abstract level, independent of any particular implementation. This paper presents an algebraic technique for such specifications, develops some of the formal properties of the technique, and shows that these provide useful guidelines for the construction of adequate specifications.", "corpus_id": 11241352, "title": "The algebraic specification of abstract data types" }
{ "abstract": "Abstract We review the origins of structural operational semantics. The main publication `A Structural Approach to Operational Semantics,' also known as the `Aarhus Notes,' appeared in 1981 [G.D. Plotkin, A structural approach to operational semantics, DAIMI FN-19, Computer Science Department, Aarhus University, 1981]. The development of the ideas dates back to the early 1970s, involving many people and building on previous work on programming languages and logic. The former included abstract syntax, the SECD machine, and the abstract interpreting machines of the Vienna school; the latter included the λ -calculus and formal systems. The initial development of structural operational semantics was for simple functional languages, more or less variations of the λ -calculus; after that the ideas were gradually extended to include languages with parallel features, such as Milner's CCS. This experience set the ground for a more systematic exposition, the subject of an invited course of lectures at Aarhus University; some of these appeared in print as the 1981 Notes. We discuss the content of these lectures and some related considerations such as `small state' versus `grand state,' structural versus compositional semantics, the influence of the Scott–Strachey approach to denotational semantics, the treatment of recursion and jumps, and static semantics. We next discuss relations with other work and some immediate further development. We conclude with an account of an old, previously unpublished, idea: an alternative, perhaps more readable, graphical presentation of systems of rules for operational semantics.", "corpus_id": 503212, "score": -1, "title": "The origins of structural operational semantics" }
{ "abstract": "Objective: Recently, the ratios of neutrophil to lymphocyte (NL) and platelet to lymphocyte (PL) have been used as indicators of inflammation. We aimed to investigate the relation of recurrent aphthous stomatitis (RAS) to inflammation by analyzing the NL and PL ratios. Methods: We conducted a case-control study on 143 patients with RAS and 134 healthy control cases between February 2015 and March 2016. Age, sex, neutrophil count, platelet count, lymphocyte count, and the NL and PL ratios of the participants were recorded. Results: One hundred and forty-three RAS patients and 134 control cases were included in the study. The NL and PL ratios of the RAS group were significantly higher than those of the control group (p=0.004 and p=0.010, respectively). The NL ratio was the only independent predictor of RAS in multivariate logistic regression analysis (p=0.014). The cut-off value of the NL ratio for predicting RAS was 3.49, with 13.3% sensitivity and 99.9% specificity (p=0.010). Conclusion: We found that the NL and PL ratios were higher in the RAS group than in the control group. The results of our study support the view that inflammation has an important role in the pathogenesis of RAS.", "corpus_id": 80681449, "title": "Neutrophil to lymphocyte and platelet to lymphocyte ratios as an indicator of inflammation in patients with recurrent aphthous stomatitis" }
{ "abstract": "Introduction: The aim of this study was to evaluate the neutrophil-to-lymphocyte ratio (NLR), platelet-to-lymphocyte ratio (PLR), and mean platelet volume (MPV) in patients with recurrent aphthous stomatitis (RAS). Materials and Methods: Eighty patients who were diagnosed with RAS between January 2014 and January 2016 were included in this study. Eighty age- and gender-matched healthy subjects were also enrolled as a control group. Neutrophil, lymphocyte, and platelet counts were compared between groups, in addition to NLR, PLR, and MPV values. Results: There was no significant difference in terms of lymphocyte count, platelet count, PLR, or MPV values between the two groups (P>0.05). However, white blood count, neutrophil count, and NLR were significantly higher in patients with RAS compared with the control group (P<0.05). Conclusion: The present study revealed an increased NLR among RAS patients compared with healthy controls. This suggests that development of RAS involves an inflammatory process. We believe that NLR could be used as a cheap and simple marker of inflammation.", "corpus_id": 1269155, "title": "Status of Neutrophils, Lymphocytes and Platelets in Patients with Recurrent Aphthous Stomatitis: A Retrospective Study" }
{ "abstract": "Oral ulcers observed during the course of HIV infection may be very severe. Such manifestations may interfere with oral functions and alter the patients' quality of life. It is important to stress that when HIV-infected individuals present with ulcerative lesions of the oral cavity, neoplastic processes and rare infections must be included in the differential diagnosis. Nontumefactive oral ulcers in HIV-positive patients may be a source of diagnostic difficulties because of the diverse array of underlying pathologic entities and multiplicity of etiologic agents. Biopsy should always be performed on long-standing ulcers, since either infection or a neoplastic process may be present. In the absence of infection or neoplasm, such lesions are then designated ulcers not otherwise specified.", "corpus_id": 35008742, "score": -1, "title": "Oral ulcers in HIV-infected patients: an update on epidemiology and diagnosis." }
{ "abstract": "We study analytically the equilibrium properties of the spherical hierarchical model in the presence of random fields. The expression for the critical line separating a paramagnetic from a ferromagnetic phase is derived. The critical exponents characterising this phase transition are computed analytically and compared with those of the corresponding D-dimensional short-range model, leading to conclude that the usual mapping between one dimensional long-range models and D-dimensional short-range models holds exactly for this system, in contrast to models with Ising spins. Moreover, the critical exponents of the pure model and those of the random field model satisfy a relationship that mimics the dimensional reduction rule. The absence of a spin-glass phase is strongly supported by the local stability analysis of the replica symmetric saddle-point as well as by an independent computation of the free-energy using a renormalization-like approach. This latter result enlarges the class of random field models for which the spin-glass phase has been recently ruled out.", "corpus_id": 118674273, "title": "Statistical mechanics of the spherical hierarchical model with random fields" }
{ "abstract": "We show rigorously that the spin-glass susceptibility in the random field Ising model is always bounded by the ferromagnetic susceptibility, and therefore that no spin-glass phase can be present at equilibrium out of the ferromagnetic critical line. When the magnetization is, however, fixed to values smaller than the equilibrium value, a spin-glass phase can exist, as we show explicitly on the Bethe lattice.", "corpus_id": 6863191, "title": "Elusive spin-glass phase in the random field Ising model." }
{ "abstract": "The one-dimensional Ising model in a random field is studied with use of a functional recursion relation. For temperatures exceeding a given value, the fixed function of the relation is found and shown to be a devil's staircase. From this result it is possible to evaluate the free energy to arbitrary precision. In the field-strength--temperature plane, a crossover line corresponding to the onset of frustration is found.", "corpus_id": 121390683, "score": -1, "title": "One-dimensional Ising model in a random field" }
{ "abstract": "Meshes with curvilinear elements hold the appealing promise of enhanced geometric flexibility and higher-order numerical accuracy compared to their commonly-used straight-edge counterparts. However, the generation of curved meshes remains a computationally expensive endeavor with current meshing approaches: high-order parametric elements are notoriously difficult to conform to a given boundary geometry, and enforcing a smooth and non-degenerate Jacobian everywhere brings additional numerical difficulties to the meshing of complex domains. In this paper, we propose an extension of Optimal Delaunay Triangulations (ODT) to curved and graded isotropic meshes. By exploiting a continuum mechanics interpretation of ODT instead of the usual approximation theoretical foundations, we formulate a very robust geometry and topology optimization of Bézier meshes based on a new simple functional promoting isotropic and uniform Jacobians throughout the domain. We demonstrate that our resulting curved meshes can adapt to complex domains with high precision even for a small count of elements thanks to the added flexibility afforded by more control points and higher order basis functions.", "corpus_id": 51881708, "title": "Curved optimal delaunay triangulation" }
{ "abstract": "NSFC [61100107, 61100105, 61272019, 61332015]; Natural Science Foundation of Fujian Province of China [2012J01291, 2011J05007]; National Basic Research Program of China [2011CB302400]; Research Grant Council of Hong Kong [718209, 718010, 718311, 717012]; NSF [61222206]; Chinese Academy of Sciences", "corpus_id": 2426754, "title": "Revisiting Optimal Delaunay Triangulation for 3D Graded Mesh Generation" }
{ "abstract": "Isotropic tetrahedron meshes generated by Delaunay refinement algorithms are known to contain a majority of well-shaped tetrahedra, as well as spurious sliver tetrahedra. As the slivers hamper stability of numerical simulations we aim at removing them while keeping the triangulation Delaunay for simplicity. The solution which explicitly perturbs the slivers through random vertex relocation and Delaunay connectivity update is very effective but slow. In this paper we present a perturbation algorithm which favors deterministic over random perturbation. The added value is an improved efficiency and effectiveness. Our experimental study applies the proposed algorithm to meshes obtained by Delaunay refinement as well as to carefully optimized meshes.", "corpus_id": 10984235, "score": -1, "title": "Perturbing Slivers in 3D Delaunay Meshes" }
{ "abstract": "Previous research identified personality and neurophysiological traits that are associated with inter-individual differences in lucid dreaming frequency. The present study investigated the question as to whether sensory processing sensitivity is related to lucid dreaming. Overall, 1,807 persons (1,008 women, 799 men) with a mean age of 47.75 ± 14.41 years completed the German High Sensitive Person Scale, a Big Five personality inventory, and the lucid dream frequency scale. As expected, Aesthetic Sensitivity and Low Sensory Threshold (two of the three sensory processing factors) were positively related to lucid dream frequency. Moreover, extraversion and low agreeableness were also related to lucid dreaming frequency. Although the effect sizes of these relationships are relatively small, this research can shed light on the mechanisms underlying inter-individual differences in lucid dream frequency.", "corpus_id": 248330493, "title": "Lucid Dreaming Frequency and Sensory-Processing Sensitivity" }
{ "abstract": "Lucid dreaming is a state of awareness that one is dreaming, without leaving the sleep state. Dream reports show that self-reflection and volitional control are more pronounced in lucid compared with nonlucid dreams. Mostly on these grounds, lucid dreaming has been associated with metacognition. However, the link to lucid dreaming at the neural level has not yet been explored. We sought for relationships between the neural correlates of lucid dreaming and thought monitoring. Human participants completed a questionnaire assessing lucid dreaming ability, and underwent structural and functional MRI. We split participants based on their reported dream lucidity. Participants in the high-lucidity group showed greater gray matter volume in the frontopolar cortex (BA9/10) compared with those in the low-lucidity group. Further, differences in brain structure were mirrored by differences in brain function. The BA9/10 regions identified through structural analyses showed increases in blood oxygen level-dependent signal during thought monitoring in both groups, and more strongly in the high-lucidity group. Our results reveal shared neural systems between lucid dreaming and metacognitive function, in particular in the domain of thought monitoring. This finding contributes to our understanding of the mechanisms enabling higher-order consciousness in dreams.", "corpus_id": 17057197, "title": "Metacognitive Mechanisms Underlying Lucid Dreaming" }
{ "abstract": "The Cloud infrastructure and its extensive set of Internet-accessible resources has potential to provide significant benefits to robots and automation systems. We consider robots and automation systems that rely on data or code from a network to support their operation, i.e., where not all sensing, computation, and memory is integrated into a standalone system. This survey is organized around four potential benefits of the Cloud: 1) Big Data: access to libraries of images, maps, trajectories, and descriptive data; 2) Cloud Computing: access to parallel grid computing on demand for statistical analysis, learning, and motion planning; 3) Collective Robot Learning: robots sharing trajectories, control policies, and outcomes; and 4) Human Computation: use of crowdsourcing to tap human skills for analyzing images and video, classification, learning, and error recovery. The Cloud can also improve robots and automation systems by providing access to: a) datasets, publications, models, benchmarks, and simulation tools; b) open competitions for designs and systems; and c) open-source software. This survey includes over 150 references on results and open challenges. A website with new developments and updates is available at: http://goldberg.berkeley.edu/cloud-robotics/", "corpus_id": 6988770, "score": -1, "title": "Image Object Label 3 D CAD Model Candidate Grasps Google Object Recognition Engine Google Cloud Storage Select Feasible Grasp with Highest Success Probability Pose EstimationCamera Robots Cloud 3 D Sensor" }
{ "abstract": "This article deals with a new, highly efficient and cost-effective lattice Rogowski-coil sensor for partial discharge monitoring of power transformers. The sensor is a thin, flat printed-circuit-board Rogowski coil, which can be installed on the internal surface of the transformer tank with minimum disturbance to the normal operation of the transformer. Thanks to the accurate and well-defined geometry of the sensor, precise monitoring can be carried out. It was also designed to handle the common trade-off between low resonant frequency and high mutual inductance. An experimental setup for measuring the lumped-model parameters of the sensor was built, and it is shown that the desired resonance frequency of approximately 10 MHz is obtained. In order to evaluate the performance of the sensor, a specially prepared distribution transformer (20 kV/0.4 kV, 500 kVA) was considered. Partial discharge calibration pulses were injected into different locations of the winding of this transformer, and the ability of the sensor was verified in defect localization. Moreover, the effects of some practical aspects, such as the value of the terminating resistor of the proposed special Rogowski coil, its distance from the transformer winding, and the capability of the features extracted from detected PD signals, on the accuracy of PD localization are studied.", "corpus_id": 233196672, "title": "A New Application of Rogowski Coil Sensor for Partial Discharge Localization in Power Transformers" }
{ "abstract": "Precise localization of partial discharge (PD) inside a power transformer winding is a challenging task. Previously, researchers have used internal calibration or reference signals to locate the PD source. The location of any arbitrary (test) PD source is ascertained from the maximum correlation between reference and test signals. However, in practical transformer windings, internal tappings or design details are usually unavailable to generate reference signals. The proposed work employs terminal measurements to construct a physically realizable ladder network. Simulated responses obtained from the ladder network for signals of known pulse-widths at all locations are used as reference data. The terminal responses of the test PD signals are obtained from a laboratory-scale winding by applying signals of arbitrary pulse-widths and shapes at various locations. The PD test signals are generated using a function generator, a PD calibrator, and real discharges. To predict the location of the PD source, the simulated reference data are then correlated with the test data. The position corresponding to the maximum correlation indicates the PD location. The proposed methodology is verified using experimental investigations carried out on two different laboratory-scale transformer windings.", "corpus_id": 4735608, "title": "Localization of Partial Discharges Inside a Transformer Winding Using a Ladder Network Constructed From Terminal Measurements" }
{ "abstract": "A novel procedure to determine the series capacitance of a transformer winding, based on frequency-response measurements, is reported. It is based on converting the measured driving-point impedance magnitude response into a rational function and thereafter exploiting the ratio of a specific coefficient in the numerator and denominator polynomial, which leads to the direct estimation of series capacitance. The theoretical formulations are derived for a mutually coupled ladder-network model, followed by sample calculations. The results obtained are accurate and its feasibility is demonstrated by experiments on model-coil and on actual, single, isolated transformer windings (layered, continuous disc, and interleaved disc). The authors believe that the proposed method is the closest one can get to indirectly measuring series capacitance.", "corpus_id": 37225646, "score": -1, "title": "Estimation of Series Capacitance of a Transformer Winding Based on Frequency-Response Data: An Indirect Measurement Approach" }
{ "abstract": "The baseline allows for a top-down scaling of the global Emergy budget to systems at regional and local levels, assuming that the geobiosphere generates energy flows and resources as co-products of the same annual cycles. Undoubtedly, the baseline is one of the best findings H.T. Odum provided with a uniform, holistic and flexible method for the evaluation of the ‘energy of one kind necessary to produce resources and products’. Nevertheless, its use has been reputed to be source of inaccuracy for downstream Emergy results of technological productions and it has undergone a number of criticisms. Though we acknowledge the usefulness of the baseline and its extensive adoption by the most of the Emergy practitioners, we suggest a preliminary redesigning of the framework behind the resourceUEVs calculation with the baseline. The goal is to use a ‘bottom-up’ approach. Accordingly, the Emergy values of the three primary sources (sun, tides, geo-heat) should not be summed, but the Exergy (as available energy) of each source be separately assigned to the corresponding resource production compartment (i.e. the natural processes involved) which are connected by exchanges of natural products (the equivalent of the commodities for the technosphere). These compartments can be framed into two matrix systems: 1) the matrix β (3×m), where the three independent Exergy flows are assigned to m natural processes (e.g. water evaporation, net primary production, soil formation, coalification), and 2) the square matrix α (m×m), where the same m natural processes produce corresponding m natural products (e.g. rain, wood, land, coal). The UEV of these natural products (ecosystem goods and services) can be obtained by inverting and scaling the two related matrices. We do recognize that the sun, tidal and geo-heat sources show different contributions in terms of magnitude and along representative time and space conditions. 
These are factors essentially neglected by the balance equations used to account for the baseline. The main challenge to tackle for the development of the bottom-up approach is certainly the extensive collection of reliable data able to approximately describe the network of geobiosphere processes. A feature of this bottom-up framework is that UEVs would be vectors no more calculated in seJ/unit but instead including the memory of the amounts of Exergy provided by the three separated sources (sun, tides and geo-heat) to the complex network of processes from which the unit of resource (e.g. 1 kg of soil formed,...) is directly and indirectly generated.", "corpus_id": 12972134, "title": "Quantifying the Emergy of Resources : Challenges for a Bottom-up Approach" }
{ "abstract": "Abstract Solar Emergy is the available solar energy used up directly and indirectly to make a service or product. Although this basic concept is quite straightforward, its implications are potentially profound. H.T. Odum pioneered the development and use of emergy, and presented it as a way of understanding the behavior of self-organized systems, valuing ecological goods and services, and jointly analyzing ecological and economic systems. Unfortunately, like many groundbreaking ideas, emergy has encountered a lot of resistance and criticism, particularly from economists, physicists, and engineers. Some critics have focused on detailed practical aspects of the approach, while others have taken issue with specific parts of the theory and claims. This paper discusses the main features and criticisms of emergy and provides insight into the relationship between emergy and concepts from engineering thermodynamics, such as exergy and cumulative exergy consumption. This reveals the close link between emergy and ecological cumulative exergy consumption, and indicates that most of the criticisms of emergy are either common to all holistic approaches that account for ecosystems and other macrosystems within their systems boundaries, or a result of misunderstandings derived from a lack of communication between various disciplines, or are not relevant for engineering applications. By identifying the main points of criticisms of emergy, this paper attempts to clarify many of the common misconceptions about emergy, inform the community of emergy practitioners about the aspects that need to be communicated better or improved, and suggest solutions. Further research and interaction with other disciplines is essential to bring one of H.T. Odum’s finest contributions into the mainstream to guide humanity on “the prosperous way down.”", "corpus_id": 5785630, "title": "Promise and problems of emergy analysis" }
{ "abstract": "There is considerable interest in computational and experimental flow investigations within abdominal aortic aneurysms (AAAs). This task stipulates advanced grid generation techniques and cross-validation because of the anatomical complexity. The purpose of this study is to examine the feasibility of velocity measurements by particle tracking velocimetry (PTV) in realistic AAA models. Computed tomography and rapid prototyping were combined to digitize and construct a silicone replica of a patient-specific AAA. Three-dimensional velocity measurements were acquired using PTV under steady averaged resting boundary conditions. Computational fluid dynamics (CFD) simulations were subsequently carried out with identical boundary conditions. The computational grid was created by splitting the luminal volume into manifold and nonmanifold subsections. They were filled with tetrahedral and hexahedral elements, respectively. Grid independency was tested on three successively refined meshes. Velocity differences of about 1% in all three directions existed mainly within the AAA sack. Pressure revealed similar variations, with the sparser mesh predicting larger values. PTV velocity measurements were taken along the abdominal aorta and showed good agreement with the numerical data. The results within the aneurysm neck and sack showed average velocity variations of about 5% of the mean inlet velocity. The corresponding average differences increased for all velocity components downstream the iliac bifurcation to as much as 15%. The two domains differed slightly due to flow-induced forces acting on the silicone model. Velocity quantification through narrow branches was problematic due to decreased signal to noise ratio at the larger local velocities. Computational wall pressure and shear fields are also presented. 
The agreement between CFD simulations and the PTV experimental data was confirmed by three-dimensional velocity comparisons at several locations within the investigated AAA anatomy indicating the feasibility of this approach.", "corpus_id": 21877044, "score": -1, "title": "CFD and PTV steady flow investigation in an anatomically accurate abdominal aortic aneurysm." }
{ "abstract": "Selecao de variaveis e um procedimento para selecionar um subconjunto de caracteristicas viaveis em um conjunto de dados, o qual se torna importante quando esse conjunto contem muitas variaveis redundantes. A calibracao multivariada combina selecao de variaveis com tecnicas estatisticas para construir modelos matematicos com o intuito de predizer uma propriedade de interesse. Nesse contexto, tecnicas de selecao tem sido aplicadas na solucao de diversos problemas. Por exemplo, Algoritmos Geneticos (AGs) sao faceis de implementar e consistem em um modelo baseado em populacao, o qual utiliza operadores de selecao e recombinacao para gerar novos individuos. No entanto, geralmente em calibracao multivariada, o conjunto de dados apresenta um grau de correlacao consideravel entre as variaveis e isso nos fornece uma evidencia de que tal problema nao pode ser decomposto adequadamente. Alem disso, alguns estudos da literatura tem afirmado que os operadores geneticos utilizados pelos AGs podem causar o rompimento dos Blocos Construtores (Building Blocks - BBs) das solucoes viaveis. Portanto, este trabalho objetiva demonstrar que a selecao de variaveis em calibracao multivariada e um problema nao-completamente decomponivel (hipotese 1), assim como que operadores de recombinacao afetam a presuncao de nao-decomponibilidade (hipotese 2). Adicionalmente, este trabalho propoe duas heuristicas, um operador de busca local e duas versoes de um Algoritmo para Selecao de Variaveis baseado em Epistasia (EbFSA) para aprimorar a capacidade de predicao do modelo e evitar o rompimento de BBs. Baseando-se na pesquisa realizada e nos resultados obtidos, torna-se possivel confirmar a viabilidade de nossas hipoteses e demonstrar que o EbFSA consegue superar alguns algoritmos tradicionais.", "corpus_id": 127370829, "title": "Variable selection in multivariate calibration considering non-decomposability assumption and building blocks hypothesis" }
{ "abstract": "This paper presents a multi-objective formulation for variable selection in calibration problems. The prediction of protein concentration on wheat is obtained by a linear regression model using variables obtained by a spectrophotometer device. This device measure hundreds of correlated variables related with physicochemical properties and that can be used to estimate the protein concentration. The problem is the selection of a subset informative and uncorrelated variables that help the minimization of prediction error. In this work we propose the use of two objectives in this problem: the prediction error and the number of variables in the model, both related to linear equations system stability. We proposed a multi-objective formulation using two multi-objective algorithms: the NSGA-II and the SPEA-II. Additionally we propose a final decision maker method to choice the final subset of variables from the Pareto front. For the case study is used wheat data obtained by NIR spectrometry where the objective is the determination of a variable subgroup with information about protein concentration. The results of traditional techniques of multivariate calibration as the Successive Projections Algorithm (SPA), Partial Least Square (PLS) and mono-objective genetic algorithm are presents for comparisons. For NIR spectral analysis of protein concentration on wheat, the number of variables selected from 775 spectral variables was reduced for just 10 in the SPEA-II algorithm. The prediction error decreased from 0.2 in the classical methods to 0.09 in proposed approach, a reduction of 45%. The model using variables selected by SPEA-II had better prediction performance than classical algorithms and full-spectrum partial least-squares (PLS).", "corpus_id": 605700, "title": "Multi-objective evolutionary algorithm for variable selection in calibration problems: A case study for protein concentration prediction" }
{ "abstract": "Cross-adaptation, which can improve the stress tolerance of strains, temporarily supplies more matching bases in transcriptome-phenotype matching approaches to reveal novel gene functions in stress responses. Transcriptome-phenotype matching based on RNA sequencing was implemented to reveal the cross-adaptation mechanism of Lactobacillus rhamnosus hsryfm 1301 in response to heat stress and oxidative stress. A total of 242 genes were upregulated and 320 genes were downregulated under heat stress, while 135 genes were upregulated and 206 genes were downregulated under oxidative stress. There were 154 overlapping genes that responded to both stresses, and 97.4% of the overlapping DEGs (differentially expressed genes) were codirectionally regulated. The overlapping DEGs were mainly classified into amino acid or oligopeptide ABC transporters, amino acid metabolism, and quorum sensing pathways. Correspondingly, the heat and oxidative tolerance of L. rhamnosus hsryfm 1301 was stronger in low nitrogen source environment. Thus, the high proportion of transcriptional homogenization, especially the decrease in abundance of nitrogen source transporter and metabolism enzyme genes, was a reason for the cross-adaptation of L. rhamnosus hsryfm 1301 to heat stress and oxidative stress. The survival rate of L. rhamnosus during processes with heat stress and oxidative stress can be improved by reducing the concentration of nitrogen source in the culture medium.", "corpus_id": 210925741, "score": -1, "title": "Transcriptional homogenization of Lactobacillus rhamnosus hsryfm 1301 under heat stress and oxidative stress" }
{ "abstract": "Geometry parameter tuning is an inherent part of the antenna design process. While most often performed in a local sense, it still entails considerable computational expenses when carried out at the level of full-wave electromagnetic (EM) simulation models. Moreover, the optimization outcome may be impaired if a good initial design is not available. This article proposes a novel approach to fast and improved-reliability gradient-based optimization of antenna structures. Our approach employs a frequency-based regularization to facilitate the relocation of antenna operating parameters to their target values, which increases the chances of identifying a satisfactory design under challenging conditions (e.g., poor-quality starting point). At the same time, the computational efficiency of the tuning process is enhanced through the involvement of variable-resolution EM models, and restricting the finite-differentiation (FD) sensitivity updates to selected parameters only. The latter is decided upon based on the analysis of the design relocation between the subsequent iterations of the optimization algorithm. The presented technique is validated using three examples of microstrip antennas optimized under different scenarios (matching improvement, gain enhancement, and size reduction). The results demonstrate superior performance in terms of reliability and design quality as compared to conventional gradient-based and derivative-free search procedures. At the same time, a significant speedup is achieved over the frequency-regularization-based procedure not using the acceleration mechanisms.", "corpus_id": 252654856, "title": "Rapid Variable-Resolution Parameter Tuning of Antenna Structures Using Frequency-Based Regularization and Sparse Sensitivity Updates" }
{ "abstract": "A novel compact single-substrate planar multiband five-element multiple-input multiple-output (MIMO) antenna system is presented in this paper. The tunable two-element folded meandered MIMO antenna covers the long-term evolution frequency bands below 1 GHz (687–813 MHz) and radio frequency identification bands centered around 2.4 and 5.8 GHz. The other two-element compact MIMO antennas operate over 754–971 MHz, 1.65–1.83 GHz, 2–3.66 GHz, and 5.1–5.6 GHz frequency bands. Furthermore, the proposed antenna elements are integrated with a wideband sensing antenna for the spectrum sensing in 0.668–1.94 and 3–4.6 GHz, which also acts as the ground plane for the MIMO elements in the cognitive radio application environment. The antenna is fabricated on a 65 mm $\\times \\,\\, 120$ mm $\\times \\,\\, 1.56$ mm low-cost FR-4 substrate. The antenna’s radiation characteristics are experimentally verified, and the results are in agreement with the full-wave simulation. The 3-D radiation pattern-based envelope correlation coefficient of the MIMO antennas is also experimentally verified which is below the desired value of 0.5. Finally, to show its utility at the Internet-of-Things platform, the antenna is tested in the realistic application environment.", "corpus_id": 49556112, "title": "Compact Planar Multistandard MIMO Antenna for IoT Applications" }
{ "abstract": "A switchable Yagi-Uda antenna prototype with radiation pattern reconfiguration is presented in this letter. The proposed reconfigurable antenna is based on the concept of switching between the reflector and director of a Yagi-Uda antenna using a radio frequency PIN diode. As a result, the minimum/maximum radiation can be steered towards desired signals or away from interfering signals in opposite directions. The measured 10 dB impedance bandwidth and gain are 210 MHz (7%) and 8.02 dBi at 3 GHz, respectively. Details of the antenna design and its performance are described and empirically analyzed.", "corpus_id": 51359994, "score": -1, "title": "Switchable Printed Yagi‐Uda Antenna with Pattern Reconfiguration" }
{ "abstract": "Retinal fundus imaging is a medical procedure used by medical professionals in the discovery and tracking of various retinal abnormalities. Sometimes the analysis of retinal fundus images can be slow and difficult when performed by medical staff, and in response to this many automated, image-processing based methods for the analysis of these images exist. In recent years, deep learning methods have become increasingly popular in machine learning applications, so it is no surprise that they are also being used in the image processing based analysis of retinal fundus images. In this paper we discuss recently proposed methods that use deep learning techniques in the image processing based analysis of digital retinal fundus images. Special attention is given to the analysis of retinal fundus image datasets and various techniques employed to the images from these datasets in order to make them suitable for deep learning based applications.", "corpus_id": 52148868, "title": "A Review of Image Processing and Deep Learning Based Methods for Automated Analysis of Digital Retinal Fundus Images" }
{ "abstract": "This paper presents a new supervised method for segmentation of blood vessels in retinal photographs. This method uses an ensemble system of bagged and boosted decision trees and utilizes a feature vector based on the orientation analysis of gradient vector field, morphological transformation, line strength measures, and Gabor filter responses. The feature vector encodes information to handle the healthy as well as the pathological retinal image. The method is evaluated on the publicly available DRIVE and STARE databases, frequently used for this purpose and also on a new public retinal vessel reference dataset CHASE_DB1 which is a subset of retinal images of multiethnic children from the Child Heart and Health Study in England (CHASE) dataset. The performance of the ensemble system is evaluated in detail and the incurred accuracy, speed, robustness, and simplicity make the algorithm a suitable tool for automated retinal image analysis.", "corpus_id": 7515294, "title": "An Ensemble Classification-Based Approach Applied to Retinal Blood Vessel Segmentation" }
{ "abstract": "paper presents a method for blood vessel detection in digital retinal images. The method uses fuzzy logic approach with block wise gridding. It uses an adaptive approach for vessel detection. The segmentation is produced by classifying each pixel of the image as vessel or nonvessel. The performance of the proposed methodology is evaluated on the publicly available DRIVE database. It also contains manually labeled images by experts. Performance of this method on set of test images shows significant improvement than other solutions present in the literature. The method proves especially accurate results for vessel detection in DRIVE images. The method is simple and has fast implementation. It shows effectiveness and robustness with different image conditions. The vessel detection performance has a sensitivity of 0.8653 with specificity 0.9833. The accuracy of the method is 0.9728 for Drive database. This blood vessel detection and segmentation technique can play a useful clinical role in an automated retinopathy analysis system. Keywordsretinopathy, block wise gridding, retinal image, vessels segmentation.", "corpus_id": 18169709, "score": -1, "title": "Retina Vessels Detection Algorithm for Biomedical Symptoms Diagnosis" }
{ "abstract": "Selecting patients for clinical trials is very labor-intensive. Our goal is to design (semi-)automated techniques that can support clinical researchers in this task. In this paper we summarize our recent advances towards such a system: First, we present the challenges involved when representing electronic health records and eligibility criteria for clinical trials in a formal language. Second, we introduce temporal conjunctive queries with negation as a formal language suitable to represent clinical trials. Third, we describe our methodology for automatic translation of clinical trial eligibility criteria from natural language into our query language. The evaluation of our prototypical implementation shows promising results. Finally, we talk about the parts we are currently working on and the challenges involved.", "corpus_id": 233225828, "title": "Temporalized Ontology-MediatedQuery Answering under Minimal-World Semantics" }
{ "abstract": "The amount of time and money required to screen patients for clinical trial and guideline eligibility presents the need for an automated screening process to streamline clinical trial enrollment and guideline implementation. This paper introduces an ontology-based approach for defining a set of patterns that can be used to represent various types of time-relevant eligibility criteria that may appear in clinical protocols. With a focus only on temporal requirements, we examined the criteria of 600 protocols and extracted a set of 37 representative time-relevant eligibility criteria. 16 patterns were designed to represent these criteria. Using a test set of an additional 100 protocols, it was found that these 16 patterns could sufficiently represent 98.5% of the time-relevant criteria. After the time-relevant criteria are modeled by these patterns, it will allow the potential to (1) use natural language processing algorithms to automatically extract temporal constraints from criteria; and (2) develop computer rules and queries to automate the processing of the criteria.", "corpus_id": 926083, "title": "Designing Ontology-based Patterns for the Representation of the Time-Relevant Eligibility Criteria of Clinical Protocols" }
{ "abstract": "A dynamic detour scheme for intelligent transportation systems (ITSs) to avoid traffic congestion is proposed in this paper. When running into traffic congestion, vehicles usually can do nothing but waiting. The proposed scheme can prevent driver suffer from traffic congestion by balancing traffic flows with n-hop notices method and experience-support method. Congestive information can be received from road side units. The notification range of congestion event is determined by the server. In this study, the range of congestion notification is defined as n-hop from congestion road. Only the vehicles located in the n-hop range will receive the notification. Also, the information about the average time for vehicles to get through each road can be recorded to help reselecting a new path to avoid congestion. According to simulation results, both n-hop notices and experience-support method are proved to improve traffic congestion effectively.", "corpus_id": 16613054, "score": -1, "title": "To Go or Not To Go: N-hop Congestion Notice with Experience Support for Dynamic Detouring" }
{ "abstract": "Magnetic skyrmions are nano-scale magnetic states that could be used in various spintronics devices. A central issue is the mechanism and rate of various possible annihilation processes and the lifetime of metastable skyrmions. While most studies have focused on classical over-the-barrier mechanism for annihilation, it is also possible that quantum mechanical tunneling through the energy barrier takes place. Calculations of the lifetime of magnetic skyrmions in a two-dimensional lattice are presented and the rate of tunneling compared with the classical annihilation rate. A remarkably strong variation in the onset temperature for tunneling and the lifetime of the skyrmion is found as a function of the values of parameters in the extended Heisenberg Hamiltonian, i.e. the out-of-plane anisotropy, Dzyaloshinskii–Moriya interaction and applied magnetic field. Materials parameters and conditions are identified where the onset of tunneling could be observed on a laboratory time scale. In particular, it is predicted that skyrmion tunneling could be observed in the PdFe/Ir(111) system when an external magnetic field on the order of 6T is applied.", "corpus_id": 219026337, "title": "Magnetic skyrmion annihilation by quantum mechanical tunneling" }
{ "abstract": "The skyrmion racetrack is a promising concept for future information technology. There, binary bits are carried by nanoscale spin swirls–skyrmions–driven along magnetic strips. Stability of the skyrmions is a critical issue for realising this technology. Here we demonstrate that the racetrack skyrmion lifetime can be calculated from first principles as a function of temperature, magnetic field and track width. Our method combines harmonic transition state theory extended to include Goldstone modes, with an atomistic spin Hamiltonian parametrized from density functional theory calculations. We demonstrate that two annihilation mechanisms contribute to the skyrmion stability: At low external magnetic field, escape through the track boundary prevails, but a crossover field exists, above which the collapse in the interior becomes dominant. Considering a Pd/Fe bilayer on an Ir(111) substrate as a well-established model system, the calculated skyrmion lifetime is found to be consistent with reported experimental measurements. Our simulations also show that the Arrhenius pre-exponential factor of escape depends only weakly on the external magnetic field, whereas the pre-exponential factor for collapse is strongly field dependent. Our results open the door for predictive simulations, free from empirical parameters, to aid the design of skyrmion-based information technology.", "corpus_id": 3413301, "title": "Lifetime of racetrack skyrmions" }
{ "abstract": "The ionic liquid [Bmim][DCA] is a propellant candidate in a standalone electrospray thruster or in a dual-mode propulsion system consisting of a chemical system and an electrospray system. Since limited published data exists for [Bmim][DCA], the electrospray characteristics are relatively unknown. Emission testing of the ionic liquid has been conducted to characterize the [Bmim][DCA] electrospray plume for both an external flow titanium needle and internal flow capillary. Mass spectrometric, retarding potential, and angle distribution measurements were collected for the positive polarity ions emitted from [Bmim][DCA] wetted emitters with nominal extraction voltages between ~1 kV to ~2.5 kV. The titanium needle operated at a sizably reduced liquid flow rate in comparison to the capillary. As such, only the major species of Bmim + ([Bmim][DCA])n with n=0,1 were identifiable in the quadrupole measurement range of 0-1000 amu and were formed at or near the needle potential. A typical needle angle distribution was found in these measurements. For the capillary emitter, flow rates from 0.27 nL/s to 2.18 nL/s were used to investigate corresponding alterations in the electrospray beam. The aim of the investigation was to ascertain the ability to “tune” or “dial-in” an electrospray thruster to specific ion or droplet sizes and thus specific performance levels. Unlike the limited species observed from the needle emission, the capillary measurements indicated the presence of n=0,1,2,3,4 cation species with large mass droplet contributions. The lowest flow rates indicated the highest levels of ions in the measurement range of 0-1000 amu with a mix of large mass droplets. For increasing flow rate, species < 500 amu ceased to exist leaving only the n=2,3,4 species mixed with large mass droplets in the electrospray beam. All ion species exceeded the quadrupole mass range at the upper flow rates. 
Ions emitted from the capillary were formed at levels below the emitter potential. Ohmic losses in the ionic liquid are likely the cause for the less energetic ions. Angular distribution measurements indicated broadening of the beam current and mass distribution for increasing flow rates.", "corpus_id": 12905691, "score": -1, "title": "Capillary Extraction of the Ionic Liquid (Bmim)(DCA) for Variable Flow Rate Operations" }
{ "abstract": "OBJECTIVES: to analyze the prevalence of satisfaction at work and identify associated factors in Psychosocial Care Centers. METHOD: cross-sectional study involving 546 workers from 40 Psychosocial Care Centers in the South of Brazil. The satisfaction was identified based on the Assessment Scale of Satisfaction in the Mental Health Team and a logistic regression model was used for the adjusted data analysis. RESULTS: the prevalence of satisfaction at work corresponded to 66.4%. Factors directly associated with satisfaction: higher-level function (except physicians and psychologists), work time of six months or less, making a larger number of home visits, good supervision by the team, possibility to make collective choices and take courses. CONCLUSIONS: the satisfaction is associated with the work organization and conditions and demonstrates the need to invest in team supervisions, in process that democratize the services and in the workers' training.", "corpus_id": 680095, "title": "Factors associated with satisfaction at work in Psychosocial Care Centers1" }
{ "abstract": "The NHS Plan required extensive changes in the configuration of mental health services in the UK, including introduction of crisis resolution teams, CRTs. Little is known about the effects of these changes on mental health staff and their recruitment and retention. To assess levels of burnout and sources of satisfaction and stress in CRT staff and compare them with assertive outreach team (AOT) and community mental health team (CMHT) staff. Cross sectional survey using questionnaires, including the Maslach Burnout Inventory, the Minnesota Satisfaction Scale and global job satisfaction item from the Job Diagnostic Survey. All staff in 11 CRTs in 7 London boroughs were included. One hundred and sixty-nine questionnaires were received (response rate 78%). CRT staff were moderately satisfied with their jobs and scores for the three components of burnout were low or average. Their sense of personal accomplishment was greater than in the other types of team. Our results suggest that CRTs may be sustainable from a workforce morale perspective, but longer term effects will need to be assessed.", "corpus_id": 1991201, "title": "Satisfaction and burnout among staff of crisis resolution, assertive outreach and community mental health teams" }
{ "abstract": "Community-based psychiatry has attracted a wide interest in the last 20 years. However, the evidence in the literature on monitoring and evaluating community psychiatric services for a long period of time is scanty. The aim of this monograph is to present the results of a number of evaluative studies, covering a ten-year period, conducted in South-Verona, an area of 75,000 inhabitants in Northern Italy, where a new community-based system of care, the South-Verona Community Psychiatric Service (CPS), has operated since 1978. This system, which is based on the provisions of the Italian psychiatric reform, is alternative to the old hospital-centred system of care, and provides care and support to all types of patients, without back-up from the mental hospital, where only a few old long-stay in-patients continue to reside. In the first part of the monograph, trends in the provision of psychiatric care in the period 1979-1988 are presented, using the South-Verona Psychiatric Case Register (PCR). Both one-day and one-year prevalence figures and incidence rates are lower than in other register areas outside Italy, partly because of the smaller number of specialized out-patient services available in South-Verona and partly because of less use of in-patient care in our area. Moreover, there is a tendency in Italy to care for elderly patients in geriatric institutions outside the psychiatric system. Most of the patients seen in any year are treated without in-patient care. This applies to all diagnostic groups, except affective psychosis. Rates of compulsory admission dropped dramatically after the reform. The total number of admissions to all in-patient psychiatric facilities (including private hospitals) in 1988 is only 8.4% lower than that found in 1977 (one year prior to the reform), while the mean number of occupied beds in 1988 was 47% lower than in 1977. 
In South-Verona point-prevalence of long-stay in-patients has slowly decreased over the years and there is a negligible build-up of new long-stay in-patients. The South-Verona CPS is now taking care of most psychiatric patients who, before the reform, would have been admitted to the mental hospital and become long-stay. These patients, who may be defined as long-term patients in the community, have consistently accumulated since 1981 and are making high use of psychiatric community services.(ABSTRACT TRUNCATED AT 400 WORDS)", "corpus_id": 10035769, "score": -1, "title": "Community-based psychiatry: long-term patterns of care in South-Verona." }
{ "abstract": "Efferent innervation of the inner ear is extensively studied but the whole model revealing the development of efferent synapses is not clear yet. In mammals the lateral and medial olivocochlear systems are known as the source of efferent fibers. The lateral olivocochlear system innervates the ipsilateral cochlea, terminating on the dendrites beneath the inner hair cells (IHCs), the dendrites being spiral ganglion neuron compounds. The medial olivocochlear system is involved in forming synapses directly on the outer hair cells (OHCs). To reach the final targets efferent axons use the afferent fibers as a scaffold. Efferent synaptogenesis occurs just before the onset of hearing. At P0 in rats we observed synaptic-like contacts lacking typical features. At P3 the synapses were immature. At P4-P5 efferent contacts with IHCs were clearly defined. At P6-P7 the efferent terminals were larger with distinct synaptic vesicles. During maturation, at P8-P10, the number of efferent synapses at the base of the IHCs reduced along with a decrease in the synaptic cisternae. After P12 efferent terminals formed axodendritic synapses below IHCs and large axosomatic synapses on OHCs. The innervation of OHCs underwent two stages, i.e. transitional with simultaneous innervation of IHCs and OHCs and a final OHC-targeted innervation. These results support the idea of a waiting period of efferent innervation before its final establishment in the adult organ of Corti. We also summarize the role of neurotrophic factors, specific neurotransmitter systems, their receptors and transporters for refinement of cochlear efferent innervation. Biomedical Reviews 2013; 24: 33-48.", "corpus_id": 84481125, "title": "Postnatal development of the inner ear efferent innervation in mammals" }
{ "abstract": "γ-Aminobutyric acid (GABA)-ergic efferent nerve fibers were studied during the postnatal development of the rat cochlea, using light microscopic immunocytochemical techniques. Antibodies against GABA and its synthesizing enzyme, glutamate decarboxylase (GAD), were used. Immunoreactivity to GAD is already present at birth (postnatal day 1) and could be found below the inner hair cells of the basal turn. Immunoreactivity progressively extends toward the apical turn until day 3. GAD-like immunoreactivity appears under the outer hair cells on postnatal day 15 and is only found in the upper part of the second turn and in the apical turn. The distribution of GABA-like immunoreactivity closely corresponds to that observed with the anti-glutamate decarboxylase antibody. However, the GABA-like immunoreactivity appears about 1–2 days after GAD-like immunoreactivity. At the beginning of the 3rd postnatal week, an adult pattern of GABA- and GAD-like immunoreactivity is established. These results suggest that GABA, which appears under the inner hair cells largely before the onset of hearing, may play a neurotrophic function during cochlear maturation and participate in the regulation of the first cochlear potentials as soon as they appear.", "corpus_id": 546174, "title": "Ontogeny of glutamate decarboxylase and γ-aminobutyric acid immunoreactivities in the rat cochlea" }
{ "abstract": "The auditory pathway of mammals is composed of two complementary ascending afferent and descending efferent independent systems. The brainstem nuclei and cochlear projections for these systems are now well-known. In addition, a highly conspicuous distribution for serotonergic fibers was recently reported. This study focused on these serotonergic fibers and their neurons of origin. We identified several different types of serotonergic brainstem neurons surrounding the superior olivary complex and around the periolivary nuclei. Even though the 5-hydroxytryptamine (5-HT) efferent cochlear innervation originates in the periolivary area of the superior olivary complex system projecting to the cochlea, it is not involved in the transduction of pure tones during auditory processing. However, recent findings, after cochlear blockade of serotonin transporters, strongly suggested that this neuroactive substance has an important turnover within the auditory receptor. The presence of a conspicuous peripheral nerve distribution together with a particular brainstem origin could define a complex role for this innervation. Therefore, 5-HT fibers projecting to the cochlea might be involved, as in other parts of the auditory pathway, in alertness, attention, control of sleep or wakefulness cycles, and state of urgency prior to the transduction processing at the auditory receptor. A lack, or reduction, of the function of these fibers could result in pathological alterations.", "corpus_id": 18787068, "score": -1, "title": "Serotonergic innervation of the inner ear: is it involved in the general physiological control of the auditory receptor?" }
{ "abstract": "This study investigated the biogeography and genetic variation in the antitropically distributed Micromesistius genus. A 579 bp fragment of the mitochondrial coI gene was analysed in 279 individuals of Micromesistius poutassou and 163 of Micromesistius australis. The time since divergence was estimated to be c. 2 million years before present (Mb.p.) with an externally derived clock rate by Bayesian methods. Congruent estimates were obtained with an additional data set of cytochrome b sequences derived from GenBank utilizing a different clock rate. The divergence time of 2 Mb.p. was in disagreement with fossil findings in New Zealand and previous hypotheses which suggested the divergence to be much older. It, therefore, appears likely that Micromesistius has penetrated into the southern hemisphere at least two times. Paleoceanographic records indicate that conditions that would increase the likelihood for transequatorial dispersals were evident c. 2-1·6 Mb.p. Haplotype frequency differences, along with pairwise F(ST) values, indicated that Mediterranean M. poutassou is a genetically isolated population.", "corpus_id": 34233748, "title": "Mitochondrial DNA differentiation between the antitropical blue whiting species Micromesistius poutassou and Micromesistius australis." }
{ "abstract": "Climate predictions produced by numerical climate models, often referred to as general circulation models (GCMs), suggest that by the end of the twenty-first century global mean annual surface air temperatures will increase by 1.1–6.4°C. Trace gas records from ice cores indicate that atmospheric concentrations of CO2 are already higher than at any time during the last 650 000 years. In the next 50 years, atmospheric CO2 concentrations are expected to reach a level not encountered since an epoch of time known as the Pliocene. Uniformitarianism is a key principle of geological science, but can the past also be a guide to the future? To what extent does an examination of the Pliocene geological record enable us to successfully understand and interpret this guide? How reliable are the ‘retrodictions’ of Pliocene climates produced by GCMs and what does this tell us about the accuracy of model predictions for the future? These questions provide the scientific rationale for this Theme Issue.", "corpus_id": 8495494, "title": "Introduction. Pliocene climate, processes and problems" }
{ "abstract": "Microbes are commonly studied as individual species, but they exist as mixed assemblages in nature. At present, we know very little about the spatial organization of the molecules, including natural products that are produced within these microbial networks. Lichens represent a particularly specialized type of symbiotic microbial assemblage in which the component microorganisms exist together. These composite microbial assemblages are typically comprised of several types of microorganisms representing phylogenetically diverse life forms, including fungi, photosymbionts, bacteria, and other microbes. Here, we employed matrix-assisted laser desorption ionization–time of flight (MALDI-TOF) imaging mass spectrometry to characterize the distributions of small molecules within a Peltigera lichen. In order to probe how small molecules are organized and localized within the microbial consortium, analytes were annotated and assigned to their respective producer microorganisms using mass spectrometry-based molecular networking and metagenome sequencing. 
The spatial analysis of the molecules not only reveals an ordered layering of molecules within the lichen but also supports the compartmentalization of unique functions attributed to various layers. These functions include chemical defense (e.g., antibiotics), light-harvesting functions associated with the cyanobacterial outer layer (e.g., chlorophyll), energy transfer (e.g., sugars) surrounding the sun-exposed cyanobacterial layer, and carbohydrates that may serve a structural or storage function and are observed with higher intensities in the non-sun-exposed areas (e.g., complex carbohydrates). IMPORTANCE Microbial communities have evolved over centuries to live symbiotically. The direct visualization of such communities at the chemical and functional level presents a challenge. Overcoming this challenge may allow one to visualize the spatial distributions of specific molecules involved in symbiosis and to define their functional roles in shaping the community structure. In this study, we examined the diversity of microbial genes and taxa and the presence of biosynthetic gene clusters by metagenomic sequencing and the compartmentalization of organic chemical components within a lichen using mass spectrometry. This approach allowed the identification of chemically distinct sections within this composite organism. Using our multipronged approach, various fungal natural products, not previously reported from lichens, were identified and two different fungal layers were visualized at the chemical level.", "corpus_id": 17664564, "score": -1, "title": "Spatial Molecular Architecture of the Microbial Community of a Peltigera Lichen" }
{ "abstract": "Female mate choice decisions are often based on a variety of male characteristics, some of which may reflect male quality via condition-dependent trait expression. Here, we explore the condition dependence of a male secondary sexual trait in a wolf spider and examine its influence on female mate choice. In the wolf spider Schizocosa uetzi, mature males possess a multimodal courtship display (visual + seismic) in which they slowly raise and lower their dark colored forelegs. Foreleg color is highly variable among S. uetzi males with respect to both total amount and darkness. Using diet manipulations in conjunction with color quantifications, we demonstrate condition-dependent foreleg color. High-nutrient diet males had significantly higher body condition indices and possessed more and darker foreleg color than low-nutrient diet males. However, using multiple mate choice designs, we were unable to demonstrate a female preference for male foreleg color. Using both single and 2-choice mating designs as well as using females from a range of ages, we found that copulation success was consistently independent of male foreleg color. Instead, we found courtship intensity to be the only aspect of male courtship that influenced copulation success--males that copulated displayed more leg raises per second than those that did not copulate. Copyright 2009, Oxford University Press.", "corpus_id": 53953447, "title": "Courtship effort is a better predictor of mating success than ornamentation for male wolf spiders" }
{ "abstract": "The courtship behavior of male Schizocosa uetzi wolf spiders incorporates both visual and seismic signals into a multimodal display. These two signals have been shown to interact in such a manner that the seismic signal alters a female's response to the visual signal, leading to a putative increased importance of visual signaling in the presence of a seismic signal. Experiments leading to this attention-focusing hypothesis relied in part on the video playback technique, eliciting the question of its significance under more biologically relevant conditions. Here, we directly examine female mate choice of males with differing visual signals (foreleg pigmentation) both in the presence and absence of a seismic courtship signal. We first quantified the natural variation of male foreleg pigmentation within a population of S. uetzi. The proportion of the tibia covered in pigmentation was found to be positively correlated with male weight, suggesting that this signal may convey reliable information about male size. Visual signals of live males were then manipulated into two treatments: black and brown male foreleg tibias, representing the extreme ends of the natural variation found. The seismic signaling environment was also manipulated into two treatments: seismic signal present and absent. Mating frequency was higher in the presence of a seismic signal than in its absence, but there was no interaction between the seismic and visual signaling treatments. Females mated with black and brown males equally whether a seismic signal was present or absent. This study suggests that inexperienced females do not distinguish between males of different manipulated foreleg pigmentations in mate-choice decisions, even when in the presence of a seismic courtship signal.", "corpus_id": 1400556, "title": "The Role of Visual Ornamentation in Female Choice of a Multimodal Male Courtship Display" }
{ "abstract": "Sahlqvist formulas are a syntactically specified class of modal formulas proposed by Hendrik Sahlqvist in 1975. They are important because of their first-order definability and canonicity, and hence axiomatize complete modal logics. The first-order properties definable by Sahlqvist formulas were syntactically characterized by Marcus Kracht in 1993. The present paper extends Kracht's theorem to the class of ‘generalized Sahlqvist formulas' introduced by Goranko and Vakarelov and describes an appropriate generalization of Kracht formulas.", "corpus_id": 7426253, "score": -1, "title": "An extension of Kracht's theorem to generalized Sahlqvist formulas" }
{ "abstract": "Through-thickness stress self-sensing in a quasi-isotropic carbon fiber epoxy–matrix composite by in-plane electrical resistance measurement is effective. The resistance decreases reversibly upon through-thickness compression conducted up to 67 MPa, due to an increase in the proximity between adjacent laminae. The sensing can be attained by measuring the surface resistance in the direction of the surface fibers or by measuring the volume resistance in essentially any in-plane direction. The sensing is ineffective if the transverse surface resistance is the quantity measured, due to the dominance of the surface fibers in governing the surface resistance. In the case of the longitudinal surface resistance, the decrease in resistance upon compression has a slight irreversible component, due to an irreversible increase in the proximity between adjacent laminae and the consequent increase in the degree of current penetration. This effect is smaller for the longitudinal or transverse volume resistance. The variability of the resistance from area to area in the same laminate is larger for the surface resistance than the volume resistance, due to its higher sensitivity to current spreading. The sensitivity of stress sensing, as described by the fractional change in resistance per unit through-thickness compressive stress, is −10−5 MPa−1. The magnitude of the effectiveness is lower for the resistance away from the stressed region than that at the stressed region.", "corpus_id": 43597887, "title": "Through-thickness stress sensing of a carbon fiber polymer–matrix composite by electrical resistance measurement" }
{ "abstract": "Compression in the through-thickness direction (as in fastening) resulted in reversible and irreversible changes in the microstructure of continuous carbon fiber epoxy-matrix composites, as shown by electrical resistance measurement during dynamic compression. The extent of fiber-fiber contact across the interlaminar interface was increased, with partial irreversibility even at a low stress amplitude of 1 MPa. Within a lamina, fiber squeezing in the through-thickness direction and fiber spreading in the transverse direction occurred upon fastening compression, with partial irreversibility at a stress amplitude of 100 MPa or above. For a single lamina beyond 400 MPa, the lessening of fiber squeezing in the through-thickness direction during unloading dominated over the fiber spreading in the transverse direction during loading.", "corpus_id": 7777219, "title": "Effect of through-thickness compression on the microstructure of carbon fiber polymer-matrix composites, as studied by electrical resistance measurement" }
{ "abstract": "Abstract : The history of conducting polymer research is reviewed and recent results in the area of conducting polymers as corrosion protective coatings are presented and discussed.", "corpus_id": 135739658, "score": -1, "title": "Intrinsically Electrically Conducting Polymers as Corrosion Inhibiting Coatings." }
{ "abstract": "Removing the undesired reflections of images taken through glass is an important problem in digital photography and many other vision applications. The so-called ghosting effect, i.e., pattern repetitiveness in reflection, is an effective cue used by existing techniques to remove reflection from images. Existing methods take a two-stage approach that first estimates the parameters of ghosting effect and then models reflection removal as a two-layer separation problem: reflection layer and latent image layer. This paper aimed at addressing one main challenge in such an approach, i.e., how to distinguish the repetitive patterns on the latent image layer and the ghosting patterns on the reflection layer. Based on the observation that the number of repeats of natural image patterns is often different from that of ghosting patterns, we proposed a wavelet transform based regularization method. Together with a novel weighting scheme, the proposed method is capable of accurately separating two layers, and experimental results justified its advantages over existing ones on both synthetic and real data sets.", "corpus_id": 214663523, "title": "Removing Reflection with Ghosting Effect from a Single Image" }
{ "abstract": "The images taken through glass often capture a target transmitted scene as well as undesired reflected scenes. In this paper, we propose a low-rank matrix completion algorithm to remove reflection artifacts automatically from multiple glass images taken at slightly different camera locations. We assume that the transmitted scenes are more dominant than the reflected scenes in typical glass images. We first warp the multiple glass images to a reference image, where the gradients are consistent in the transmission images while the gradients are varying across the reflection images. Based on this observation, we compute a gradient reliability such that the pixels belonging to the salient edges of the transmission image are assigned high reliability. Then we suppress the gradients of the reflection images and recover the gradients of the transmission images only, by solving a low-rank matrix completion problem in gradient domain. We reconstruct an original transmission image using the resulting optimal gradient map. Experimental results show that the proposed algorithm removes the reflection artifacts from glass images faithfully and outperforms the existing algorithms on typical glass images.", "corpus_id": 31926300, "title": "Reflection Removal Using Low-Rank Matrix Completion" }
{ "abstract": "We study the common problem of approximating a target matrix with a matrix of lower rank. We provide a simple and efficient (EM) algorithm for solving weighted low-rank approximation problems, which, unlike their unweighted version, do not admit a closed-form solution in general. We analyze, in addition, the nature of locally optimal solutions that arise in this context, demonstrate the utility of accommodating the weights in reconstructing the underlying low-rank representation, and extend the formulation to non-Gaussian noise models such as logistic models. Finally, we apply the methods developed to a collaborative filtering task.", "corpus_id": 5815325, "score": -1, "title": "Weighted Low-Rank Approximations" }
{ "abstract": "When designing modern software, care must be taken to allow for applications to scale based on the demands of its users while still accommodating flexibility in development. Recently, microservices architectures have garnered the attention of many organizations—providing higher levels of scalability, availability, and fault isolation. Many organizations choose to host their microservices architectures in cloud data centres to offset costs. Incidentally, data centres become over-encumbered during peak usage hours and underutilized during off-peak hours. Traditional microservice scaling methods perform either horizontal or vertical scaling exclusively. When used in combination, however, these methods offer complementary benefits and compensate for each other's deficiencies. To leverage the high availability of horizontal scaling and the fine-grained resource control of vertical scaling, we developed two novel hybrid autoscaling algorithms and a dedicated network scaling algorithm and benchmarked them against Google's popular Kubernetes horizontal autoscaling algorithm. Results indicated up to 1.49x speedups in response times for our hybrid algorithms, and 1.69x speedups for our network algorithm under high-burst network loads.", "corpus_id": 207756948, "title": "HyScale: Hybrid and Network Scaling of Dockerized Microservices in Cloud Data Centres" }
{ "abstract": "A key advantage of infrastructure-as-a-service (IaaS) clouds is providing users on-demand access to resources. To provide on-demand access, however, cloud providers must either significantly overprovision their infrastructure (and pay a high price for operating resources with low utilization) or reject a large proportion of user requests (in which case the access is no longer on-demand). At the same time, not all users require truly on-demand access to resources. Many applications and workflows are designed for recoverable systems where interruptions in service are expected. For instance, many scientists utilize high-throughput computing (HTC)-enabled resources, such as Condor, where jobs are dispatched to available resources and terminated when the resource is no longer available. We propose a cloud infrastructure that combines on-demand allocation of resources with opportunistic provisioning of cycles from idle cloud nodes to other processes by deploying backfill virtual machines (VMs). For demonstration and experimental evaluation, we extend the Nimbus cloud computing toolkit to deploy backfill VMs on idle cloud nodes for processing an HTC workload. Initial tests show an increase in IaaS cloud utilization from 37.5% to 100% during a portion of the evaluation trace but only 6.39% overhead cost for processing the HTC workload. We demonstrate that a shared infrastructure between IaaS cloud providers and an HTC job management system can be highly beneficial to both the IaaS cloud provider and HTC users by increasing the utilization of the cloud infrastructure (thereby decreasing the overall cost) and contributing cycles that would otherwise be idle to processing HTC jobs.", "corpus_id": 3159642, "title": "Improving Utilization of Infrastructure Clouds" }
{ "abstract": "We have studied the unfolding and refolding pathway of a beta-hairpin fragment of protein G by using molecular dynamics. Although this fragment is small, it possesses several of the qualities ascribed to small proteins: cooperatively formed beta-sheet secondary structure and a hydrophobic \"core\" of packed side chains. At high temperatures, we find that the beta-hairpin unfolds through a series of sudden, discrete conformational changes. These changes occur between states that are identified with the folded state, a pair of partially unfolded kinetic intermediates, and the unfolded state. To study refolding at low temperatures, we perform a series of short simulations starting from the transition states of the discrete transitions determined by the unfolding simulations.", "corpus_id": 41398850, "score": -1, "title": "Molecular dynamics simulations of unfolding and refolding of a beta-hairpin fragment of protein G." }
{ "abstract": "Insect pollination is fundamental for natural ecosystems and agricultural crops. The bumblebee species Bombus terrestris has become a popular choice for commercial crop pollination worldwide due to its effectiveness and ease of mass rearing. Bumblebee colonies are mass produced for the pollination of more than 20 crops and imported into over 50 countries including countries outside their native ranges, and the risk of invasion by commercial non‐native bumblebees is considered an emerging issue for global conservation and biological diversity. Here, we use genome‐wide data from seven wild populations close to and far from farms using commercial colonies, as well as commercial populations, to investigate the implications of utilizing commercial bumblebee subspecies in the UK. We find evidence for generally low levels of introgression between commercial and wild bees, with higher admixture proportions in the bees occurring close to farms. We identify genomic regions putatively involved in local and global adaptation, and genes in locally adaptive regions were found to be enriched for functions related to taste receptor activity, oxidoreductase activity, fatty acid and lipid biosynthetic processes. Despite more than 30 years of bumblebee colony importation into the UK, we observe low impact on the genetic integrity of local B. terrestris populations, but we highlight that even limited introgression might negatively affect locally adapted populations.", "corpus_id": 263622520, "title": "Limited introgression from non‐native commercial strains and signatures of adaptation in the key pollinator Bombus terrestris" }
{ "abstract": "Dispersal ability is a key determinant of the propensity of an organism to cope with habitat fragmentation and climate change. Here we quantify queen dispersal in two common bumblebee species in an arable landscape. Dispersal was measured by taking DNA samples from workers in the spring and summer, and from queens in the following spring, at 14 sites across a landscape. The queens captured in the spring must be full sisters of workers that were foraging in the previous year. A range of sibship reconstruction methods were compared using simulated data sets with or without genotyping errors. The program Colony gave the most accurate reconstruction and was used for our analysis of queen dispersal. Comparison of queen dispersion with worker foraging distances was used to take into account an expected low level of false identification of sister pairs which might otherwise lead to overestimates of dispersal. Our data show that Bombus pascuorum and B. lapidarius queens can disperse by at least 3 and 5 km, respectively. These estimates are consistent with inferences drawn from studies of population structuring in common and rare bumblebee species, and suggest that regular gene flow over several kilometres due to queen dispersal is likely to be sufficient to maintain genetic cohesion of ubiquitous species over large spatial scales whereas rare bumblebee species appear unable to regularly disperse over distances greater than 10 km. Our results have clear implications for conservation strategies for this important pollinator group, particularly when attempting to conserve fragmented populations.", "corpus_id": 351311, "title": "Estimation of bumblebee queen dispersal distances using sibship reconstruction method" }
{ "abstract": "The geographic nature of biological dispersal shapes patterns of genetic variation over landscapes, so that it is possible to infer properties of dispersal from genetic variation data. Here we present an inference tool that uses geographically-referenced genotype data in combination with a convolutional neural network to estimate a critical population parameter: the mean per-generation dispersal distance. Using extensive simulation, we show that our deep learning approach is competitive with or outperforms state-of-the-art methods, particularly at small sample sizes (e.g., n=10). In addition, we evaluate varying nuisance parameters during training—including population density, population size changes, habitat size, and the size of the sampling window relative to the full habitat—and show that this strategy is effective for estimating dispersal distance when other model parameters are unknown. Whereas competing methods depend on information about local population density or accurate identification of identity-by-descent tracts as input, our method uses only single-nucleotide-polymorphism data and the spatial scale of sampling as input. These features make our method, which we call disperseNN, a potentially valuable new tool for estimating dispersal distance in non-model systems with whole genome data or reduced representation data. We apply disperseNN to 12 different species with publicly available data, yielding reasonable estimates for most species. Importantly, our method estimated consistently larger dispersal distances than mark-recapture calculations in the same species, which may be due to the limited geographic sampling area covered by some mark-recapture studies. Thus genetic tools like ours complement direct methods for improving our understanding of dispersal.", "corpus_id": 251911054, "score": -1, "title": "Dispersal inference from population genetic variation using a convolutional neural network" }
{ "abstract": "The aim of this paper is two-fold. Firstly, we present a methodology to measure the novel concept of elite quality (EQ), that is, a country’s elites’ propensity – on aggregate – to create value rather than rent-seek. A four-level architecture allows for both an overall quantification of a country’s EQ, as well as an in-depth analysis of specific political economy dimensions, such as elite power. Secondly, the Elite Quality Index (EQx) is brought to life using data on 72 indicators for 32 countries. Our index negatively correlates with inequality measures, which suggests that more powerful elites less inclined to run value creation business models will exacerbate inequality. A variety of robustness tests suggest that the EQx scores and ranking are robust to ceteris paribus changes in key modelling assumptions. Thus, the EQx offers a reliable framework and new tool to analyze the political economy of countries.", "corpus_id": 231812621, "title": "Measuring Elite Quality" }
{ "abstract": "Most lectures teach the relationship between the CES, Cobb-Douglas, and Leontief functions using the value of elasticity of substitution, namely, in the discrete object model. This lecture note aims at being a reference for algebraic computations of the Leontief and Cobb-Douglas functions by taking limits of CES functions both in discrete and continuum goods models. The argument on the discrete case uses l'Hôpital's rule as usually done. The argument on the continuum case also uses l'Hôpital's rule to show the convergence to the Cobb-Douglas function. To guarantee the convergence to the Leontief function, however, we rely on the squeeze principle.", "corpus_id": 154584670, "title": "How Do We Get Cobb-Douglas and Leontief Functions from CES Function: A Lecture Note on Discrete and Continuum Differentiated Object Models" }
{ "abstract": "This paper deals with the adaptive dynamics associated to a hierarchical non-linear discrete population model with a general transition matrix. In the model, individuals are categorized into n dominance classes, newborns lie in the subordinate class, and it is considered as evolutionary trait a vector eta of probabilities of transition among classes. For this trait, we obtain the evolutionary singular strategy and prove its neutral evolutionary stability. Finally, we obtain conditions for the invading potential of such a strategy, which is sufficient for the convergence stability of the latter. With the help of the previous results, we provide an explanation for the bimodal distribution of badges of status observed in the Siskin (Carduelis spinus). In the Siskin, as in several bird species, patches of pigmented plumage signal the dominance status of the bearer to opponents, and central to the discussion on the evolution of status signalling is the understanding of which should be the frequency distribution of badge sizes. Though some simple verbal models predicted a bimodal distribution, up to now most species display normal distributions and bimodality has only been described for the Siskin. In this paper, we give conditions leading to one of these two distributions in terms of the survival, fecundity and aggression rates in each dominance class.", "corpus_id": 21020178, "score": -1, "title": "Evolutionarily stable transition rates in a stage-structured model. An application to the analysis of size distributions of badges of social status." }
{ "abstract": "This study presents the phonological system exhibited by children (n=59) aged 3;0 to 6;0 and focuses on the role of input frequency. Using a spontaneous child speech corpus of Spanish (CHIEDE) as a data source, as well as computational processing techniques, including an automatic phonological transcriber, data relating to the phonological level were retrieved. This resulted in a phonological inventory of Spanish-speaking children, ordered by frequency of use, which may serve as a model for research on typical and atypical child language development. Additionally, a study was carried out on the stability of the participants’ phonological systems by calculating the variability that the different age groups displayed, and outcomes were compared with other similar corpora. Results obtained from the comparison of the phonological inventory of children and adults show that there is a relationship between frequency of use in adult speech and the order of acquisition of phonemes.", "corpus_id": 259457936, "title": "The role of the input frequency in L1 Spanish phonological acquisition. A corpus-based study" }
{ "abstract": "This research explores the role of phonotactic probability in two-year-olds' production of coda consonants. Twenty-nine children were asked to repeat CVC non-words that were used as labels for pictures of imaginary animals. The CVC non-words were controlled for their phonotactic probabilities, neighbourhood densities, word-likelihood ratings, and contained the identical coda across low and high phonotactic probability pairs. This allowed for comparisons of children's productions of the same coda consonant in low and high phonotactic probability environments. Children were significantly more likely to produce the same coda in high phonotactic probability non-words than in low phonotactic probability non-words. These results are consistent with the hypothesis that phonotactic probability is a predictor of coda production in English. Moreover, this finding provides further evidence for the role of the input and distribution of sound patterns in the ambient language as a basis for phonological acquisition.", "corpus_id": 1070871, "title": "Phonotactic probabilities in young children's speech production" }
{ "abstract": "It has long been postulated that language is not purely learned, but arises from an interaction between environmental exposure and innate abilities. The innate component becomes more evident in rare situations in which the environment is markedly impoverished. The present study investigated the language production of a generation of deaf Nicaraguans who had not been exposed to a developed language. We examined the changing use of early linguistic structures (specifically, spatial modulations) in a sign language that has emerged since the Nicaraguan group first came together. In under two decades, sequential cohorts of learners systematized the grammar of this new sign language. We examined whether the systematicity being added to the language stems from children or adults; our results indicate that such changes originate in children aged 10 and younger. Thus, sequential cohorts of interacting young children collectively possess the capacity not only to learn, but also to create, language.", "corpus_id": 9978841, "score": -1, "title": "Children Creating Language: How Nicaraguan Sign Language Acquired a Spatial Grammar" }
{ "abstract": "Lay Summary ... Preface ... Table of Contents", "corpus_id": 79846829, "title": "Autonomic nervous system parameters to predict the occurrence of ischemic events after transient ischemic attack or minor stroke" }
{ "abstract": "Background and Purpose— The ABCD2 score predicts the early risk of stroke after transient ischemic attack. The early risk of recurrence after minor stroke is as high but the only validated prognostic scores for use in minor stroke predict long-term risk of recurrence: the Essen Stroke Risk Score and the Stroke Prognosis Instrument II. Methods— We determined the prognostic value of the ABCD2 score, Essen Stroke Risk Score, and Stroke Prognosis Instrument II in a prospective population-based study in Oxfordshire, UK, of all incident and recurrent stroke (Oxford Vascular Study). Minor stroke was defined as an National Institutes of Health Stroke Scale score ≤5 at the time of first assessment. The 90-day risks of recurrent stroke were determined in relation to each score. Areas under the receiver operator curves indicated predictive value. Results— Of 1247 first events in the study period, 488 were transient ischemic attacks, 520 were minor strokes, and 239 were major strokes. The ABCD2 score was modestly predictive (area under the receiver operator curve, 0.64; 0.53 to 0.74; P=0.03) of recurrence at 7 days after minor stroke and at 90 days (0.62; 0.54 to 0.70; P=0.004). Neither Essen Stroke Risk Score (0.50; 0.42 to 0.59; P=0.95) nor Stroke Prognosis Instrument II (0.48; 0.39 to 0.60; P=0.92) were predictive of 7-day or 90-day risk of recurrent stroke. Of the traditional vascular risk factors, etiologic classification (Trial of ORG 10172 in Acute Stroke Treatment) and variables in the ABCD2 score, only blood pressure >140/90 mm Hg (hazard ratio, 2.75; 1.18 to 6.38; P=0.02) and large artery disease (hazard ratio, 2.21; 1.00 to 4.88; P=0.05) were predictive of 90-day risk. Conclusions— The predictive power of the ABCD2 score is modest in patients with minor stroke, and neither the Essen Stroke Risk Score nor the Stroke Prognosis Instrument II predicts early recurrence. 
More reliable early risk prediction after minor stroke is required.", "corpus_id": 2483475, "title": "Poor Performance of Current Prognostic Scores for Early Risk of Recurrence After Minor Stroke" }
{ "abstract": "Objectives: We evaluated the frequency and predictive value of ocular fundus abnormalities among patients who presented to the emergency department (ED) with focal neurologic deficits to determine the utility of these findings in the evaluation of patients with suspected TIA and stroke. Methods: In this cross-sectional pilot study, ocular fundus photographs were obtained using a nonmydriatic fundus camera. Demographic, neuroimaging, and ABCD2 score components were collected. Photographs were reviewed for retinal microvascular abnormalities. The results were analyzed using univariate statistics and logistic regression modeling. Results: Two hundred fifty-seven patients presented to the ED with focal neurologic deficits, of whom 81 patients (32%) had cerebrovascular disease (CVD) and 144 (56%; 95% confidence interval: 50%–62%) had retinal microvascular abnormalities. Focal and general arteriolar narrowing increased the odds of clinically diagnosed CVD by 5.5 and 2.6 times, respectively, after controlling for the ABCD2 score and diffusion-weighted imaging. These fundus findings also significantly differentiated TIA from non-CVD, even after controlling for the ABCD2 score. Conclusions: Focal and general arteriolar narrowing were independent predictors of CVD overall, and TIA alone, even after controlling for the ABCD2 score and diffusion-weighted imaging lesions. The inclusion of nonmydriatic ocular fundus photographs in the evaluation of patients presenting to the ED with focal neurologic deficits may assist in the differentiation of stroke and TIA from other causes of focal neurologic deficits.", "corpus_id": 6408089, "score": -1, "title": "Ocular fundus photography of patients with focal neurologic deficits in an emergency department" }
{ "abstract": "Purpose - The purpose of this paper is to analyse the processes involved in the creation and eventual demise of a market for biodiversity offsets in the UK. The reasons for the failure of this market to take hold as a governance mechanism are considered, and its subsequent effects examined. Design/methodology/approach - The research examines a single case study of the creation of a pilot market for biodiversity offsets in the UK. Data include policy and industry papers, complemented with interviews with biodiversity offset practitioners, regulators and non-government organisations. Findings - The case study demonstrates that a market for biodiversity offsets was piloted with the intent to contribute to the reform of the UK planning regime. However, disagreements about this political project, uncertainties in the knowledge base, and continued entanglements with existing biodiversity meant it was impossible to stabilise the assemblages necessary to support the market, leading to its eventual demise. However, the principles and devices of offsetting have proved more resilient, and have started to combine with the existing arrangements for the governance of nature. Practical implications - The paper presents a situation where a political project to reform governance arrangements through the creation of a market was not successful, making it of interest to researchers and policymakers alike. Originality/value - While biodiversity offsetting has been widely discussed from scientific, legal and political perspectives, this paper addresses it as a market, explicitly designed to become a part of a governance regime. It also advances the understanding of the mechanisms by which similar processes of marketisation can fail, and suggests avenues for future research in those contexts.", "corpus_id": 55601219, "title": "The contested instruments of a new governance regime: Accounting for nature and building markets for biodiversity offsets" }
{ "abstract": "Human actions have altered global environments and reduced biodiversity by causing extinctions and reducing the population sizes of surviving species. Increasing human population size and per capita resource use will continue to have direct and indirect ecological and evolutionary consequences. As a result, future generations will inhabit a planet with significantly less wildlife, reduced evolutionary potential, diminished ecosystem services, and an increased likelihood of contracting infectious disease. The magnitude of these effects will depend on the rate at which global human population and/or per capita resource use decline to sustainable levels and the degree to which population reductions result from increased death rates rather than decreased birth rates.", "corpus_id": 3097273, "title": "Biodiversity, Extinction, and Humanity’s Future: The Ecological and Evolutionary Consequences of Human Population and Resource Use" }
{ "abstract": "In Africa, the conservation status of many plants and animals is uncertain, a situation attributed to the limited availability and lack of authentic information concerning genetic diversity. This has led to a considerable compromise of conservation decisions in Africa. As a result, the lack of reliable information continues to have a great effect on the long-term security of plant and animal species. Current advancement in genomics has proved to play a vital role in the conservation of plant and animal biodiversity. It produces genetic data that help researchers to understand the interactions between ecosystems and organisms, as well as among organisms themselves. The information extracted from plants and animals via genomics techniques can be used to develop good approaches for biodiversity conservation. Despite its usefulness, there is limited awareness of the potential application of genomics to plant and animal conservation in many developing countries, especially in Africa. The aim of this review is to raise awareness and catalyse the application of genomics techniques in the rejuvenation and conservation of plants and animals in Africa. Precisely, the paper addresses the efficacy of genomics in plant and animal conservation, and seeks to show how Africa can benefit from genomics technology. About 62 peer-reviewed articles were reviewed. This review has shown that genomics helps to identify good genes for fitness and to develop tools to monitor and conserve plant and animal biodiversity. The review recommends that, despite the limited application of genomics in biodiversity conservation in Africa, African researchers should consider using this technology for better conservation of plant and animal biodiversity.", "corpus_id": 32918607, "score": -1, "title": "Potential of genomic approaches in conservation of plant and animal biodiversity in Africa: A review" }
{ "abstract": "Conjugated linoleic acid (CLA) is a group of positional and geometric isomers of conjugated dienoic derivatives of linoleic acid. The major dietary source of CLA for humans is ruminant meats, such as beef and lamb, and dairy products, such as milk and cheese. The major isomer of CLA in natural food is cis-9,trans-11 (c9,t11). The commercial preparations contain approximately equal amounts of c9,t11 and trans-10,cis-12 (t10,c12) isomers. Studies have shown that CLA, specifically the t10,c12-isomer, can reduce fat tissue deposition and body lipid content but appears to induce insulin resistance and fatty liver and spleen in various animals. A few human studies suggest that CLA supplementation has no effect on body weight and could reduce body fat to a much lesser extent than in animals. To draw conclusions on this form of dietary supplementation and to ultimately make appropriate recommendations, further human studies are required. The postulated antiobesity mechanisms of CLA include decreased energy and food intakes, decreased lipogenesis, and increased energy expenditure, lipolysis, and fat oxidation. This review addresses recent studies of the effects of CLA on lipid metabolism, fat deposition, and body composition in both animals and humans as well as the mechanisms surrounding these effects.", "corpus_id": 13657195, "title": "Dietary conjugated linoleic acid and body composition." }
{ "abstract": "Conjugated linoleic acid (CLA) is a naturally occurring group of dienoic derivatives of linoleic acid found in the fat of beef and other ruminants. CLA is reported to have effects on both tumor development and body fat in animal models. To further characterize the metabolic effects of CLA, male AKR/J mice were fed a high-fat (45 kcal%) or low-fat (15 kcal%) diet with or without CLA (2.46 mg/kcal; 1.2 and 1.0% by weight in high- and low-fat diets, respectively) for 6 wk. CLA significantly reduced energy intake, growth rate, adipose depot weight, and carcass lipid and protein content independent of diet composition. Overall, the reduction of adipose depot weight ranged from 43 to 88%, with the retroperitoneal depot most sensitive to CLA. CLA significantly increased metabolic rate and decreased the nighttime respiratory quotient. These findings demonstrate that CLA reduces body fat by several mechanisms, including a reduced energy intake, increased metabolic rate, and a shift in the nocturnal fuel mix.", "corpus_id": 4409409, "title": "Effects of conjugated linoleic acid on body fat and energy metabolism in the mouse." }
{ "abstract": "An enzyme-linked immunosorbent assay (ELISA) using crude worm extracts (CWE) and mixtures of these as antigens of five Spanish isolates (P, C, B1, B2 and W) was developed for detecting homologous and heterologous experimental infections with these isolates between −14 and 82 days post-infection (p.i.) in white and Iberian pigs. A total of 243 pigs (Iberian or cross-bred with this race) with numerous parasitic infections were also screened for the presence of antibodies to a mixture of CWE of C, B1 and B2 isolates. The test showed a specificity of 93·1–98·9% depending on the cut-off values and a maximum sensitivity of 92·8–100% between days 34 and 82 p.i. A low grade of infectivity was shown in the T3 isolates compared to the T1 isolates (P, C, B1 and B2) but high cross-reactions were observed between all the isolates with minor differences between P and W isolates. The highest antibody response was found in P infections and the lowest in pigs infected with the W isolate. A clear association between the presence of several parasitic infections and false positive reactions was not found, but an important relation was shown between high background levels and the Iberian race in experimentally and conventionally raised pigs.", "corpus_id": 29355674, "score": -1, "title": "Trichinella strain, pig race and other parasitic infections as factors in the reliability of ELISA for the detection of swine trichinellosis" }
{ "abstract": "Safety-critical real-time systems must meet stringent timing and fault-tolerance requirements. This article proposes a methodology for synthesizing an optimal preemptive multiprocessor aperiodic task scheduler using a formal supervisory control framework. The scheduler can tolerate single/multiple permanent processor faults. Further, the synthesis framework has been empowered with a novel BDD-based symbolic computation mechanism to control the exponential state-space complexity of the optimal exhaustive enumeration-oriented synthesis methodology.", "corpus_id": 8019971, "title": "Fault-Tolerant Preemptive Aperiodic RT Scheduling by Supervisory Control of TDES on Multiprocessors" }
{ "abstract": "This work presents a novel slack management technique, the service-rate-proportionate (SRP) slack distribution, for real-time distributed embedded systems to reduce energy consumption. The proposed SRP-based slack distribution technique has been considered with EDF and rate-based scheduling schemes that are most commonly used with embedded systems. A fault-tolerant mechanism has also been incorporated into the proposed technique in order to utilize the available dynamic slack to maintain checkpoints and provide for rollbacks on faults. Results show that, in comparison to contemporary techniques, the proposed SRP slack distribution technique achieves about 29 percent more performance/overhead improvement benefits when validated with random and real-world benchmarks.", "corpus_id": 16119890, "title": "A Dynamic Slack Management Technique for Real-Time Distributed Embedded Systems" }
{ "abstract": "This paper proposes an N-modular redundancy (NMR) technique with low energy-overhead for hard real-time multi-core systems. NMR is well-suited for multi-core platforms as they provide multiple processing units and low-overhead communication for voting. However, it can impose considerable energy overhead and hence its energy overhead must be controlled, which is the primary consideration of this paper. For this purpose the system operation can be divided into two phases: indispensable phase and on-demand phase. In the indispensable phase only half-plus-one copies for each task are executed. When no fault occurs during this phase, the results must be identical and hence the remaining copies are not required. Otherwise, the remaining copies must be executed in the on-demand phase to perform a complete majority voting. In this paper, for such a two-phase NMR, an energy-management technique is developed where two new concepts have been considered: (i) block-partitioned scheduling, which enables parallel task execution during the on-demand phase, thereby leaving more slack for energy saving, and (ii) pseudo-dynamic slack, which results when a task has no faulty execution during the indispensable phase, so that the time reserved for its copies in the on-demand phase is reclaimed for energy saving. The energy-management technique has an off-line part that manages static and pseudo-dynamic slacks at design time and an online part that mainly manages dynamic slacks at run-time. Experimental results show that the proposed NMR technique provides up to 29 percent energy saving and is 6 orders of magnitude more reliable as compared to a recent previous work.", "corpus_id": 16283203, "score": -1, "title": "Two-Phase Low-Energy N-Modular Redundancy for Hard Real-Time Multi-Core Systems" }
{ "abstract": "AX J1845-0258 is a transient X-ray pulsar, with a spin period of 6.97 s, discovered with the ASCA satellite in 1993. Its soft spectrum and the possible association with a supernova remnant suggest that AX J1845-0258 might be a magnetar, but this has not been confirmed yet. A possible counterpart one order of magnitude fainter, AX J184453-025640, has been found in later X-ray observations, but no pulsations have been detected. In addition, some other X-ray sources are compatible with the pulsar location, which is in a crowded region of the Galactic plane. We have carried out a new investigation of all the X-ray sources in the ASCA error region of AX J1845-0258, using archival data obtained with Chandra in 2007 and 2010, and with XMM-Newton in 2010. We set an upper limit of 6% on the pulsed fraction of AX J184453-025640 and confirmed its rather hard spectrum (power law photon index of 1.2 ± 0.3). In addition to the other two fainter sources already reported in the literature, we found other X-ray sources positionally consistent with AX J1845-0258. Although many of them are possibly foreground stars likely unrelated to the pulsar, at least another new source, CXOU J184457.5-025823, could be a plausible counterpart of AX J1845-0258. It has a flux of 6x10^{-14} erg cm^{-2} s^{-1} and a spectrum well fit by a power law with photon index ~1.3 and N_H ~ 10^{22} cm^{-2}.", "corpus_id": 119256934, "title": "A new investigation of the possible X-ray counterparts of the magnetar candidate AX J1845−0258" }
{ "abstract": "We report on Very Large Array observations in the direction of the recently discovered slow X-ray pulsar AX J1845-0258. In the resulting images, we find a 5' shell of radio emission; the shell is linearly polarized with a nonthermal spectral index. We classify this source as a previously unidentified, young (<8000 yr) supernova remnant (SNR), G29.6+0.1, which we propose is physically associated with AX J1845-0258. The young age of G29.6+0.1 is then consistent with the interpretation that anomalous X-ray pulsars (AXPs) are isolated, highly magnetized neutron stars (\"magnetars\"). Three of the six known AXPs can now be associated with SNRs; we conclude that AXPs are young (≲10,000 yr) objects and that they are produced in at least 5% of core-collapse supernovae.", "corpus_id": 1208156, "title": "A New Supernova Remnant Coincident with the Slow X-Ray Pulsar AX J1845–0258" }
{ "abstract": "Three soft gamma-ray bursts from the same source were recorded on 1979 March 24, 25, and 27 by the gamma-ray detectors of the Cone experiment aboard the Venera 11 and Venera 12 space probes.", "corpus_id": 116521900, "score": -1, "title": "Soft gamma-ray bursts from the source B1900+14" }
{ "abstract": "Recommended Citation: Boylston, Jennifer A. \"FHIT inactivation combined with cigarette smoke enhances the oxidative stress response.\" ACKNOWLEDGEMENTS Foremost, I'd like to thank the members of the Brenner Laboratory, each of whom helped to make my research both successful and enjoyable. I would like to highlight the contributions of my mentor, Charles Brenner, who helped to not only develop this dissertation project, but also contributed an unwavering optimism and energy that propelled it forward. I would also like to acknowledge Dr. Brenner for his dedication in moving the laboratory from Dartmouth College to the University of Iowa as seamlessly as possible. This could not have been the success it was without his efforts. While each member of the Brenner Lab contributed in special ways, I would like to specifically recognize Dr. Rebecca Fagan, whose cheerfulness reminded me to approach my research with levity, and Sam Trammell for his propensity for cantankerous debate. Both helped to make my time spent in lab not only productive but also fun. Dr. Omar Jaffer was introduced to basic bench research in our lab, and I'm proud to say that both he and his wife, Dr. Alison Beer, have become great friends. I'd also like to grant special recognition to Mark Schultz and Hayley Mcloughlin: we commiserated over the fact that science is a difficult and often thankless endeavor over more lunches than I can count. Dr. Edgar Rodriguez was an incredible and indispensable source of information and encouragement over the past few years. I hope that all of these people, as well as Edgar's wife, Erica, continue to be present in our lives for years to come. for the contributions that they have made to this dissertation. Their suggestions and ideas have been invaluable, and I'd particularly like to thank them for their encouragement in both my past and my future pursuits. Finally, my partner and best friend, James Geoghegan. 
I may not have completed this project without his constant faith in my ability to do so. Over lunch recently, I declared that I'd only be \"50% as good as I am\" without him. I'll continue to assert that this statement is true. ABSTRACT The FHIT gene is located on the most fragile site in the human genome. FHIT gene deletions are among the earliest and most frequent events in carcinogenesis, particularly in carcinogen-exposed tissue. Previous work in mouse …", "corpus_id": 83365747, "title": "FHIT inactivation combined with cigarette smoke enhances the oxidative stress response" }
{ "abstract": "Thanks to the Nobel Foundation for permission to publish this Lecture (Copyright© The Nobel Foundation 2006). Here we report the transcript of the lecture delivered by Professor Craig C Mello at the Nobel Prize ceremony. Professor Mello vividly describes the years of research that led to the discovery of RNA interference and the molecular mechanisms that regulate this fundamental cellular process. The turning point of discoveries and the role played by all his colleagues and collaborators are described, making this a wonderful report of the adventure of research. The lecture explains in simple language the importance of this discovery that has added a great level of complexity to the way cells regulate protein levels; moreover, it points out the beauty and importance of Caenorhabditis elegans as a model organism and how the use of this model has greatly contributed to the advance of science. Finally, Professor Mello leaves us with a number of questions that his research has raised and that will require years of future research to be answered.", "corpus_id": 1023660, "title": "Return to the RNAi world: rethinking gene expression and evolution" }
{ "abstract": "This book is intended to present the results of research on existing school-to-work systems in the United States. It is a comprehensive review of research on school-to-work programs, and brings those research findings to bear on the strategic choices that confront states and localities in their search for career-related programs in secondary schools and two-year colleges. School-to-work programs are classified here in two ways: school-and-work and school-for-work. It is noted that one major new initiative is the integration of academic and vocational curricula. A summary is presented of five integrated programs and their effects on students to date. There is also a summary of selected studies of programs for young people who are not attending school. It is suggested that research into school-to-work programs is still limited in several respects and new evaluation systems need to be developed.", "corpus_id": 153194793, "score": -1, "title": "School To Work: Research On Programs In The United States" }
{ "abstract": "This study aimed to: 1) determine the appropriate corn flour concentration for a Trichoderma harzianum T10 liquid medium, and 2) determine the effect of applying T. harzianum T10 in various concentrations of corn-flour liquid medium on the suppression of damping-off disease and the growth of cucumber seedlings. The research was conducted at the Plant Protection Laboratory and in the fields of the Faculty of Agriculture, Universitas Jenderal Soedirman, from September 2017 to January 2018. The in vitro test used a Completely Randomized Design with five treatments and five replications, comprising a Potato Dextrose Broth (PDB) liquid-medium formula and corn-flour liquid formulas at concentrations of 5, 10, 15 and 20 g/L. The in planta test used a Randomized Block Design with 6 treatments and 5 replications, comparing a control with plants treated with T. harzianum T10 in each corn-flour concentration of the liquid formula. Observed variables included conidial density, incubation period, disease incidence, area under the disease progress curve (AUDPC), maximum growth potential, germination rate, plant height, root length, root fresh weight and shoot fresh weight. The results showed that the conidial density of T. harzianum T10 was highest in the 20 g/L corn-flour liquid-medium formula, at 3.67x10^6 conidia/mL, but still did not match the PDB medium. The application of T. harzianum T10 that effectively suppressed damping-off was the treatment in the 15 g/L corn-flour liquid formula, which suppressed disease incidence by 71.43% and delayed the incubation period by 35.83%. Applications of T. harzianum T10 at concentrations other than 15 g/L had no effect on the observed and measured variables.", "corpus_id": 234318946, "title": "PENGGUNAAN FORMULA CAIR Trichoderma harzianum T10 BERBAHAN TEPUNG JAGUNG TERHADAP REBAH SEMAI (Pythium sp.) BIBIT MENTIMUN" }
{ "abstract": "Lasiodiplodia theobromae, a common tea (Camellia sinensis) pathogen, usually does not sporulate or sporulates poorly on common media, which makes spore production difficult. In this study the effects of culture media, carbon source, nitrogen source, temperature, pH and light on mycelial growth and sporulation were evaluated. Among several carbon sources tested, glucose and sucrose were found superior for growth. Potassium nitrate supplemented media showed maximum growth among the tested inorganic nitrogen sources, while peptone produced maximum growth among the tested organic nitrogen sources. Tea root extract supplemented potato dextrose agar medium was found to be the most suitable for mycelial growth and sporulation of L. theobromae. The fungus grew at temperatures ranging from 4 to 36 degrees C, with optimum growth at 28 degrees C, and no growth was noted at 40 degrees C. There was no significant effect of different light periods on growth of L. theobromae, but light enhanced sporulation. The fungus grew at pH 3.0-8.0 and optimum growth was observed at pH 6.0. Tea root extract supplemented potato dextrose agar medium at pH 6.0 was the most suitable for production of conidia of L. theobromae at 28 degrees C. Hence this medium may be recommended for inoculum production in further studies.", "corpus_id": 6604621, "title": "Influence of culture media and environmental factors on mycelial growth and sporulation of Lasiodiplodia theobromae (Pat.) Griffon and Maubl." }
{ "abstract": "Study of fungal colonial growth is a basic method to examine their behaviour in different cultivation conditions. The influence of temperature and initial pH on growth radial velocity and growth density of Botryodiplodia theobromae RC1, was studied in order to show the growth characteristics of this fungus. Both temperature and culture medium influenced growth density, but radial velocity of growth was only affected by temperatures above 40 degrees C. In addition, initial pH of culture media did not affect either parameter.", "corpus_id": 26757619, "score": -1, "title": "[A survey of temperature and pH effect on colonial growth of Botryodiplodia theobromae RC1]." }
{ "abstract": "Objective: to evaluate whether professional profile and type of service affect the score of the care-coordination attribute of Primary Health Care in the municipalities of residence of children and adolescents living with HIV who are linked to a specialized service in southern Brazil. Method: a cross-sectional study, conducted from March to August 2014 in 25 municipalities of Rio Grande do Sul with 527 professionals, using the Primary Care Assessment Tool - Brazil, Professionals version. Pearson's chi-squared test, the Mann-Whitney test and Poisson regression were used for the analysis. Results: scores were satisfactory both for care integration (6.96) and for information systems (8.22). The variables associated with a high score were: professional training (p=0.001), position in the service (p=0.003) and type of employment relationship with the service (p=0.018). The basic health unit was associated with receiving information back from the specialized service (p=0.049). Conclusion: general clinical training, not holding a management position and having a statutory employment relationship positively affect the quality of Primary Health Care, and the Family Health Strategy type of service has the potential to coordinate health care for children and adolescents living with HIV.", "corpus_id": 245360163, "title": "Evaluation of care coordination: children and adolescents with the chronic condition of HIV infection" }
{ "abstract": "No abstract available (Published: 28 June 2017). Sohn AH, Journal of the International AIDS Society 2017, 20:21952. http://www.jiasociety.org/index.php/jias/article/view/21952 | http://dx.doi.org/10.7448/IAS.20.1.21952", "corpus_id": 3430562, "title": "Taking a critical look at the UNAIDS global estimates on paediatric and adolescent HIV survival and death" }
{ "abstract": "Purpose of reviewTo present the methodology used to calculate coverage of antiretroviral therapy (ART) and review global and regional trends in ART coverage. Recent findingsThere has been a steady increase in ART coverage over the last decade with a more rapid increase in recent years. Current estimates of ART coverage are 43% for adults and 38% for children (ages 0–14 years). Methods for calculating coverage rely on good-quality patient monitoring systems in countries, and well informed models are needed to estimate the number of people in need of treatment. SummaryThe estimated coverage rates show that ART programs have improved over the past 8 years; however, approximately 58% (53–60%) of those people in need of ART are still not on treatment. High quality data are needed to accurately measure changes in ART coverage.", "corpus_id": 5081966, "score": -1, "title": "Estimation of antiretroviral therapy coverage: methodology and trends" }
{ "abstract": "In recent years, the biomass market has constantly increased. The densification of plant biomass would contribute to improving its efficiency as a fuel by increasing its homogeneity and allowing a wider range of lignocellulosic materials to be used as fuel. Eco-friendly solid fuels, such as pellets, have rapidly become a viable alternative to fossil fuels due to their high energy content, which makes them suitable for use by small households and by industrial consumers. Knowledge of the physical and mechanical properties of biomass is important for the design and efficient operation of equipment for handling, storing and processing such materials. Jerusalem artichoke, Helianthus tuberosus, is native to North America. Its tubers were previously used as raw material, food, folk remedy and animal fodder. Its potential yield and low requirements mean that it could be of interest in the renewable energy sector: tubers can be used for biogas or ethanol production, and the aboveground parts for pellets and briquettes. The objective of this research was to evaluate some physical and mechanical properties of dry biomass and pellets from Jerusalem artichoke stalks and wheat straw collected from the experimental field of the National Botanical Garden (Institute), Chişinău. The physical and mechanical properties were determined according to the European standards accepted in the Republic of Moldova; the production of solid fuels (pellets) was done with the equipment developed at the Institute of Agricultural Technique \"Mecagro\", Chişinău. The pellets were produced from Jerusalem artichoke biomass and mixtures of Jerusalem artichoke and wheat straw with percentages of 0%, 30%, 50%, 70% and 100%. It was determined that the bulk density of the chaffs milled through a 6 mm sieve ranged from 163 to 231 kg/m³, the ash content from 2.12 to 4.93%, and the gross calorific value from 17.4 to 19.1 MJ/kg. 
The biomass of the species Helianthus tuberosus was characterized by a high gross calorific value and moderate ash content. The physical and mechanical properties of the fuel pellets varied depending on the mixture ratio: the moisture content ranged from 13.8 to 14.1%, the bulk density from 582 to 685 kg/m³, the specific density from 880 to 1008 kg/m³ and the net calorific value from 15.6 to 17.71 MJ/kg.", "corpus_id": 222140260, "title": "THE QUALITY OF BIOMASS AND FUEL PELLETS FROM JERUSALEM ARTICHOKE STALKS AND WHEAT STRAW" }
{ "abstract": "The development of renewable energy plays a crucial role for the future, as combustion of plant biomass reduces emissions of sulfur oxides and nitrogen oxides. The purpose of the work was to determine the basic energetic and mechanical properties of pellets produced from Jerusalem artichoke. The mechanical properties and combustion behaviour were studied by means of mechanical strength testing (Zwick/Roell Z010) and thermogravimetric analysis (TGA). The suitability of pellets is determined both by their energy value, which is influenced by biomass moisture, and by their mechanical durability during transport and storage. The analyses were conducted in the laboratory of the Department of Bioenergetics and Food Analysis at the University of Rzeszow in 2017. The following parameters were analyzed: calorific value, moisture content, ash content, carbon (C), nitrogen (N) and hydrogen (H). The analyzed material was characterized by high mechanical resistance. Due to the very high energy value of 18.85 MJ/kg and the high mechanical durability, both estimated in our own studies, it can be stated that Jerusalem artichoke in the form of pellets can be used for heating purposes. When the chemical properties were examined, it was found that the product under consideration had environmentally friendly qualities and did not emit unpleasant odors. Furthermore, it was mechanically stable, clean, safe and comfortable to use.", "corpus_id": 165708372, "title": "Qualitative analysis of pellets produced from Jerusalem artichoke (Helianthus tuberosus L.)" }
{ "abstract": "Abstract In a cytotoxicity-guided study using the MCF-7 human breast cancer cell line, nine known compounds, ent-17-oxokaur-15(16)-en-19-oic acid (1), ent-17-hydroxykaur-15(16)-en-19-oic acid (2), ent-15β-hydroxykaur-16(17)-en-19-oic acid methyl ester (3), ent-15-nor-14-oxolabda-8(17),12E-dien-18-oic acid (4), 4,15-isoatriplicolide angelate (5), 4,15-isoatriplicolide methylacrylate (6), (+)-pinoresinol (7), (−)-loliolide (8), and vanillin (9) were isolated from the chloroform-soluble subfraction of a methanol extract of the whole plant of Helianthus tuberosus collected in Ohio, USA. This is the first time that diterpenes have been isolated and identified from this economically important plant. The bioactivities of all isolates were evaluated using the MCF-7 human breast cancer cell line as well as a soybean isoflavonoid defense activation bioassay. The results showed that two germacrane-type sesquiterpene lactones, 5 and 6, are cytotoxic agents. While compounds 2, 3, 5 and 6 blocked isoflavone accumulation in the soybean, the norisoprenoid (−)-loliolide (8) was somewhat stimulatory of these defense metabolites.", "corpus_id": 83887111, "score": -1, "title": "Bioactive constituents of Helianthus tuberosus (Jerusalem artichoke)" }
{ "abstract": "ABSTRACT In hot environments, collagen, which is normally targeted when radiocarbon (14C) dating bone, rapidly degrades. With little other skeletal material suitable for 14C dating, it can be impossible to obtain dates directly on skeletal materials. A small amount of carbonate occurs in hydroxyapatite, the mineral phase of bone and tooth enamel, and has been used as an alternative to collagen. Unfortunately, the mineral phase is often heavily contaminated with exogenous carbonate causing 14C dates to underestimate the true age of a sample. Although tooth enamel, with its larger, more stable crystals and lower porosity, is likely to be more robust to diagenesis than bone, little work has been undertaken to investigate how exogenous carbonate can be effectively removed prior to 14C dating. Typically, acid is used to dissolve calcite and etch the surface of the enamel, but it is unclear which acid is most effective. This study repeats and extends earlier work using a wider range of samples and acids and chelating agents (hydrochloric, lactic, acetic and propionic acids, and EDTA). We find that weaker acids remove carbonate contaminants more effectively than stronger acids, and acetic acid is the most effective. However, accurate dates cannot always be obtained.", "corpus_id": 235452114, "title": "DO WEAK OR STRONG ACIDS REMOVE CARBONATE CONTAMINATION FROM ANCIENT TOOTH ENAMEL MORE EFFECTIVELY? THE EFFECT OF ACID PRETREATMENT ON RADIOCARBON AND δ13C ANALYSES" }
{ "abstract": "Key trace minerals greatly strengthen teeth. The outer layers of teeth are made up of nanowires of enamel that are prone to decay. Gordon et al. analyzed the composition of tooth enamel from a variety of rodents at the nanometer scale (see the Perspective by Politi). In regular and pigmented enamel, which contain different trace elements at varying boundary regions, two intergranular phases—magnesium amorphous calcium phosphate or a mixed-phase iron oxide—control the rates of enamel demineralization. This suggests that there may be alternative options to fluoridation for strengthening teeth against decay. Science, this issue p. 746; see also p. 712. Differences in strength and stability of various tooth enamels may be due to trace minerals at boundary regions. [Also see Perspective by Politi] Dental enamel, a hierarchical material composed primarily of hydroxylapatite nanowires, is susceptible to degradation by plaque biofilm–derived acids. The solubility of enamel strongly depends on the presence of Mg²⁺, F⁻, and CO₃²⁻. However, determining the distribution of these minor ions is challenging. We show—using atom probe tomography, x-ray absorption spectroscopy, and correlative techniques—that in unpigmented rodent enamel, Mg²⁺ is predominantly present at grain boundaries as an intergranular phase of Mg-substituted amorphous calcium phosphate (Mg-ACP). In the pigmented enamel, a mixture of ferrihydrite and amorphous iron-calcium phosphate replaces the more soluble Mg-ACP, rendering it both harder and more resistant to acid attack. These results demonstrate the presence of enduring amorphous phases with a dramatic influence on the physical and chemical properties of the mature mineralized tissue.", "corpus_id": 8762487, "title": "Amorphous intergranular phases control the properties of rodent tooth enamel" }
{ "abstract": "Truncation mutations in the family with sequence similarity 83, member H (FAM83H) gene are considered the main cause of autosomal dominant hypocalcified amelogenesis imperfecta (ADHCAI); however, its pathogenic mechanism in amelogenesis remains poorly characterized. This study aimed to investigate the effects of truncated FAM83H on developmental defects in enamel. CRISPR/Cas9 technology was used to develop a novel Fam83h c.1186C > T (p.Q396*) knock-in mouse strain, homologous to the human FAM83H c.1192C > T mutation in ADHCAI. The Fam83hQ396⁎/Q396⁎ mice showed poor growth, a sparse and scruffy coat, scaly skin and early mortality compared to control mice. Moreover, the forelimbs of homozygous mice were swollen, exhibiting a significant inflammatory response. Incisors of Fam83hQ396⁎/Q396⁎ mice appeared chalky white, shorter, and less sharp than those of control mice, and energy dispersive X-ray spectroscopy (EDS) analysis and Prussian blue staining helped identify decreased iron and increased calcium (Ca) and phosphorus (P) levels, with an unchanged Ca/P ratio. The expression of iron transportation proteins, transferrin receptor (TFRC) and solute carrier family 40 member 1 (SLC40A1), was decreased in Fam83h-mutated ameloblasts. Micro-computed tomography revealed enamel defects in Fam83hQ396⁎/Q396⁎ mice. Fam83hQ396⁎/Q396⁎ enamel showed decreased Vickers hardness and distorted enamel rod structure and ameloblast arrangement. mRNA sequencing showed that the cell adhesion pathway was most notably clustered in LS8-Fam83h-mutated cells. Immunofluorescence analysis further revealed decreased protein expression of desmoglein 3, a component of desmosomes, in Fam83h-mutated ameloblasts. The FAM83H-casein kinase 1α (CK1α)-keratin 14 (K14)-amelogenin (AMELX) interaction was detected in ameloblasts. K14 and AMELX dissociated from the tetramer in Fam83h-mutated ameloblasts in vitro and in vivo. 
In secretory stage ameloblasts of Fam83hQ396⁎/Q396⁎ mice, AMELX secretion exhibited obvious retention in the cytoplasm. In conclusion, truncated FAM83H exerted dominant-negative effects on gross development, amelogenesis, and enamel biomineralization by disturbing iron transportation, influencing the transportation and secretion of AMELX, and interfering with cell-cell adhesion in ameloblasts.", "corpus_id": 253052451, "score": -1, "title": "Effects of Fam83h truncation mutation on enamel developmental defects in male C57/BL6J mice." }
{ "abstract": "Recent years have witnessed increased research on the role of workplace partnership in promoting positive employment relations. However, there has been little quantitative analysis of the partnership experiences of employees. This article examines how the kinds of attributions employees make regarding indirect (union‐based) and direct (non‐union‐based) employee participation in workplace partnership might influence the process of mutual gains. It uses employee outcomes to reflect partnership gains for all stakeholders involved (i.e. employees, employers and trade unions). The article contributes to existing knowledge of workplace partnership by examining the potential role of the employment relations climate as an enabling mechanism for the process of mutual gains. The findings suggest mutual gains for all stakeholders are varied and mediated through the employment relations climate.", "corpus_id": 67827513, "title": "A mutual gains perspective on workplace partnership: Employee outcomes and the mediating role of the employment relations climate" }
{ "abstract": "The effects of partnership in the workplace on the stakeholders involved remains an area of considerable dispute in the literature. This article uses data collected in a survey of 3,500 employees in the Republic of Ireland to assess the effects of workplace partnership arrangements and practices on outcomes relevant to employers, employees and trade unions. The article also examines the manner in which workplace partnership affects stakeholder outcomes.", "corpus_id": 153435843, "title": "Who gains from workplace partnership?" }
{ "abstract": "The study of work environments is very important because it may differentiate between high and low performers among organizations. However, there is a huge gap in studies on exploring the quality of such work life in Saudi Arabia. This study aims to explore the level of Quality of Work Life in the industry situated in the Yanbu Industrial City, Saudi Arabia. It also examines the relationships between environmental factors and job satisfaction. The result reveals that the level of Quality of Work Life of the population is high. The majority of employees have adequate confidence regarding their skills, their job characteristics, opportunity to participate in decision making and relationships. However, some of them complained about their wage levels. Further, the study finds a significant relationship between environmental factors and job satisfaction. This study contributes to the understanding of quality of work life and job satisfaction in a significant area in Saudi Arabia, that is, among employees of organizations in Yanbu Industrial City.", "corpus_id": 54832389, "score": -1, "title": "A Study on Perception of Quality of Work Life and Job Satisfaction: Evidence from Saudi Arabia" }
{ "abstract": "The cancer stem cell paradigm postulates that dysregulated tissue-specific stem cells or progenitor cells are precursors for cancer biogenesis. Consequently, identifying cancer stem cells is crucial to our understanding of cancer progression and for the development of novel therapeutic agents. In this study, we demonstrate that the overexpression of Twist in breast cells can promote the generation of a breast cancer stem cell phenotype characterized by the high expression of CD44, little or no expression of CD24, and increased aldehyde dehydrogenase 1 activity, independent of the epithelial-mesenchymal transition. In addition, Twist-overexpressing cells exhibit high efflux of Hoechst 33342 and Rhodamine 123 as a result of increased expression of ABCC1 (MRP1) transporters, a property of cancer stem cells. Moreover, we show that transient expression of Twist can induce the stem cell phenotype in multiple breast cell lines and that decreasing Twist expression by short hairpin RNA in Twist-overexpressing transgenic cell lines MCF-10A/Twist and MCF-7/Twist as well as in MDA-MB-231 partially reverses the stem cell molecular signature. Importantly, we show that inoculums of only 20 cells of the Twist-overexpressing CD44(+)/CD24(-/low) subpopulation are capable of forming tumors in the mammary fat pad of severe combined immunodeficient mice. Finally, with respect to mechanism, we provide data to indicate that Twist transcriptionally regulates CD24 expression in breast cancer cells. Taken together, our data demonstrate the direct involvement of Twist in generating a breast cancer stem cell phenotype through down-regulation of CD24 expression and independent of an epithelial-mesenchymal transition.", "corpus_id": 7487403, "title": "Twist modulates breast cancer stem cells by transcriptional regulation of CD24 expression." }
{ "abstract": "Erroneous expression of genetic information is a characteristic of a transformed phenotype in cancer biogenesis [1]. The degree of chromosomal instability in a cell can determine its fate toward proliferation or cell death. Chromosomal instability manifests as aneuploidy (loss or gain of chromosomes) or as a rearrangement of chromosomal structure. For example, in human Burkitt’s lymphoma, the C-MYC oncogene is translocated downstream of the enhancer of the immunoglobulin heavy chain gene, resulting in overexpression of C-MYC, which increases both the rate of cell division and chromosomal instability [2]. The translocation resulting in the production of the chimeric BCR-ABL fusion protein has been demonstrated to transform hematopoietic cells, resulting in chronic myelogenous leukemia in humans [3]. Thus, chromosomal aberrations can result in overexpression of oncogenes or suppression of tumor suppressor genes, which in turn promote oncogenic transformation. \n \nFluorescence cytogenetic methods have been well established as a way of studying chromosomal instability, and include techniques such as spectral karyotyping (SKY) and comparative genomic hybridization (CGH). SKY permits the simultaneous visualization of each mammalian chromosome in a different fluorescent color, facilitating the identification of both structural and numerical chromosomal aberrations [4–6]. CGH reveals the different hybridization patterns of labeled tumor versus control (reference) DNA and generates a map of DNA copy number changes in tumor genomes [7]. CGH has been consistently used to characterize chromosomal aberrations in solid tumors and hematologic malignancies in patients [8,9]. It is very challenging, however, to develop a tumor model system that will allow for the correlation between the function(s)/expression of a particular protein in vivo and its ability to induce chromosomal instability. 
This is primarily due to the fact that tumors are heterogeneous and the phenotype observed results from the interplay of a number of proteins acting in concert. \n \nThe basic helix-loop-helix transcription factor Twist is a major regulator of mesenchymal phenotypes. It has been shown that loss of appropriate levels of expression or mutations of Twist result in developmental defects [10]. More recently, it has been demonstrated that Twist overexpression correlates with high-grade breast carcinomas [11]. To further characterize the functions of Twist in cancer biogenesis, we have generated a human breast cancer cell line that stably over-expresses human Twist (MCF-7/Twist). The overexpression of Twist causes an epithelial to mesenchymal-like transition (EMLT) leading to increased invasiveness and motility of this cell line [11,12]. This phenotypic transformation caused by Twist over-expression is the result of altered gene expression levels and profiles within the cells. \n \nTo characterize the cytogenetic changes induced by Twist overexpression, we analyzed the MCF-7/Twist cell line by SKY. As seen in Fig. 1, we found a significant number of chromosomal abnormalities and structural aberrations in the MCF-7/Twist cell line compared to the MCF-7 vector control cells. Aneuploidy was observed in all chromosomes except 2, 3, 12, 18, and 21. In addition, structural aberrations and translocations were found in all but two chromosomes (4 and 18). This would indicate that Twist, in some capacity, promotes chromosomal instability in the MCF-7 cell line. This finding validated our earlier observations of human breast tumor samples, which exhibited increased chromosomal abnormalities in Twist-expressing tumors compared to nonexpressers [11]. Of the 144 breast tumor samples analyzed by CGH, we found that there were, on average, 14.1 cytogenetic alterations in the Twist-expressing tumors compared to 7.1 alterations in the Twist nonexpressing tumors (P < 0.05). 
We also found chromosomes 1, 7, 15, and 17 to be amplified only in the Twist-expressing tumors. Similar results were also observed in MCF-7/Twist cell line (Fig. 1). The amplification of these chromosomes has been reported in breast cancer and they harbor a variety of oncogenes dysregulated in the process [13,14]. Both these findings (in MCF-7/Twist cells and in patient samples) clearly demonstrate that Twist overexpression plays a role in destabilizing the genome, thus promoting chromosomal instability. The fact that the MCF-7/Twist cell line has more chromosomal instability than the tumors is possibly the result of a selection bias during the generation of a stable clone rather than a tumor progression event. To our knowledge, this is the first such report that shows a direct correlation between overexpression of Twist and increased chromosomal instability in both tissue culture cells and patient breast tumors. The data we have obtained using SKY in a transgenic cell line clearly indicates the importance of associating gene functions observed in patient tumors to that in tissue culture cells. This validation is crucial to the understanding of the functions of a gene (in this instance, Twist) to promote chromosomal instability and augment the breast tumorigenesis process. \n \n \n \nFig. 1 \n \nSpectral karyotyping of MCF-7 vector control and MCF-7/Twist cells", "corpus_id": 53142, "title": "Twist overexpression promotes chromosomal instability in the breast cancer cell line MCF-7." }
{ "abstract": "Aims: TWIST protein has been implicated in neoplastic transformation and development of some cancers. In this study, we aimed to investigate the expression of TWIST in gastric cancer and its clinical significance. Methods: A total of 76 cases of archival gastric cancer tissues were immunohistochemically evaluated for TWIST expression, and its expression was correlated with clinicopathological parameters. Semi‐quantitative reverse transcriptase polymerase chain reaction (RT‐PCR) was used to detect the mRNA of TWIST in four gastric cancer cell lines and a normal immortalised gastric epithelial cell line (GES‐1). The expression of TWIST protein in these cell lines and 14 pairs of fresh gastric carcinoma and adjacent normal tissue samples was detected by Western blotting. Results: TWIST expression increased in diffuse‐type gastric carcinoma compared with intestinal‐type gastric carcinoma (26/42, 61.9% versus 9/34, 26.5%, p<0.05). TWIST expression was significantly increased in 35 (46.1%) of the 76 cancers and correlated with lymph node metastasis (node positive rate 60.4%; node negative rate 21.4%; p<0.05). The expression of TWIST protein was higher in 9/14 (64.3%) fresh cancer tissues compared with adjacent normal tissues. The expression of mRNA and protein of TWIST in gastric cancer cell lines was up‐regulated compared with that in GES‐1. Conclusions: TWIST was highly expressed in gastric cancer. Its up‐regulation was associated with the neoplastic transformation and subsequent development of gastric cancer. Therefore, TWIST may be a useful prognostic marker and target for gastric cancer therapy.", "corpus_id": 2322284, "score": -1, "title": "Expression and significance of TWIST basic helix‐loop‐helix protein over‐expression in gastric cancer" }
{ "abstract": "In this dissertation, we focus on extracting and understanding semantically meaningful relationships between data items of various modalities; especially relations between images and natural language. We explore the ideas and techniques to integrate such cross-media semantic relations for machine understanding of large heterogeneous datasets, made available through the expansion of the World Wide Web. The datasets collected from social media websites, news media outlets and blogging platforms usually contain multiple modalities of data. Intelligent systems are needed to automatically make sense out of these datasets and present them in such a way that humans can find the relevant pieces of information or get a summary of the available material. Such systems have to process multiple modalities of data such as images, text, linguistic features, and structured data in reference to each other. For example, image and video search and retrieval engines are required to understand the relations between visual and textual data so that they can provide relevant answers in the form of images and videos to the users’ queries presented in the form of text. We emphasize the automatic extraction of semantic topics or concepts from the data available in any form such as images, free-flowing text or metadata. These semantic concepts/topics become the basis of semantic relations across heterogeneous data types, e.g., visual and textual data. A classic problem involving image-text relations is the automatic generation of textual descriptions of images. This problem is the main focus of our work. In many cases, large amount of text is associated with images. Deep exploration of linguistic features of such text is required to fully utilize the semantic information encoded in it. A news dataset involving images and news articles is an example of this scenario. 
We devise frameworks for automatic news image description generation based on the semantic relations of images, as well as semantic understanding of linguistic features of the news articles.", "corpus_id": 64521016, "title": "Confluence of Vision and Natural Language Processing for Cross-media Semantic Relations Extraction" }
{ "abstract": "Automatically assigning keywords to images is of great interest as it allows one to index, retrieve, and understand large collections of image data. Many techniques have been proposed for image annotation in the last decade that give reasonable performance on standard datasets. However, most of these works fail to compare their methods with simple baseline techniques to justify the need for complex models and subsequent training. In this work, we introduce a new baseline technique for image annotation that treats annotation as a retrieval problem. The proposed technique utilizes low-level image features and a simple combination of basic distances to find nearest neighbors of a given image. The keywords are then assigned using a greedy label transfer mechanism. The proposed baseline outperforms the current state-of-the-art methods on two standard and one large Web dataset. We believe that such a baseline measure will provide a strong platform to compare and better understand future annotation techniques.", "corpus_id": 13937697, "title": "A New Baseline for Image Annotation" }
{ "abstract": "Multilabel image annotation is one of the most important challenges in computer vision, with many real-world applications. While existing work usually uses conventional visual features for multilabel annotation, features based on Deep Neural Networks have shown potential to significantly boost performance. In this work, we propose to leverage the advantage of such features and analyze key components that lead to better performances. Specifically, we show that a significant performance gain can be obtained by combining convolutional architectures with approximate top-$k$ ranking objectives, as they naturally fit the multilabel tagging problem. Our experiments on the NUS-WIDE dataset outperform the conventional visual features by about 10%, obtaining the best reported performance in the literature.", "corpus_id": 5668935, "score": -1, "title": "Deep Convolutional Ranking for Multilabel Image Annotation" }
{ "abstract": "Identification of the local aspect of a relevant compound stimulus has been found to be delayed by the presence of target-set members at the global aspect of an irrelevant compound stimulus, whereas identification of the global aspect is unaffected by the presence of local target-set members within the irrelevant object (Paquet & Merikle, 1988). This effect has been termed the global category effect, and it suggests that global dominance occurs for objects located outside the attentional focus, as well as within an attended hierarchical object. In the present experiments, attention was directed to the relevant one of two compound stimuli by using either shape information (Experiments 1 and 2) or a 100-msec peripheral rapid onset precue (Experiment 3). Results revealed a global category effect even when the physical features of the displays containing global target-set members within the irrelevant object were closely matched with those of the control displays. Critically, the magnitude of the global category effect was affected by how well attention could be focused on the relevant compound stimulus. These findings suggest (a) that the analysis of global information for irrelevant objects is more elaborate than the simple detection of features; and (b) that both perceptual and attentional mechanisms are involved in global dominance.", "corpus_id": 145238676, "title": "Global Dominance outside the Focus of Attention" }
{ "abstract": "The alphanumeric category effect refers to the finding that letters and digits are identified more efficiently when presented among items from the opposite category (a between-category, BC, search) than among items from the same category (a within-category, WC, search). In Experiment 1, the category effect was demonstrated for some targets, but not for others. The reasons for the selectivity of the effect were clarified by Experiment 2, in which the magnitude of the effect was correlated with the physical similarities between targets and distractors. Experiment 3 showed that, in a BC search, the physical differences between letters and digits allowed the observer to be more accurate in locating a target than in identifying it. This was not the case in a WC search, in which the observer was more accurate in identifying a target than in locating it (Experiment 4). The conclusion of these experiments is that the alphanumeric category effect is dependent upon the physical aspects of the stimulus and not the perceptual categories.", "corpus_id": 2764452, "title": "Some determining factors of the alphanumeric category effect" }
{ "abstract": "We compared multi-dimensional selection on the basis of the color, the global shape and the local shape of alphanumeric (letters) and non-alphanumeric (non-letters) stimuli. We investigated whether letters are selected on the basis of name codes or on the basis of highly familiar local shape codes. Participants responded to a single conjunction of color, global shape and local shape occurring in a randomized stream of other conjunctions of these attributes. Dependent variables were reaction time and measures derived from event-related brain potentials (onset latencies and peak amplitudes of the occipital selection negativity, SN). The SN results showed that, for both letters and non-letters, color and global shape were selected first and local shape was selected later. Reaction times were faster, and SN to the local shape occurred earlier for letters than for non-letters. The SN to the local shape of letters was larger than the SN to the local shape of non-letters. In contrast, the SN to the global shape of letters was smaller than the SN to the global shape of non-letters. Selection of the global shape of letters, but not of non-letters, depended on whether they occurred in the relevant color. Selection of the color of both letters and non-letters was independent of shape relevance, and selection of the local shape of both letters and non-letters was independent of color relevance. These results suggest that, (1) both letter and non-letter shapes are initially analyzed in a feature-specific manner; and (2) letters are selected for task-directed processing on the basis of highly familiar local shape codes and not on the basis of name codes.", "corpus_id": 11518670, "score": -1, "title": "Selective attention to conjunctions of color and shape of alphanumeric versus non-alphanumeric stimuli: a comparative electrophysiological study" }
{ "abstract": "Les lesions cervicales non carieuses ou les lesions cervicales d’usure ont toujours pose des difficultes en ce qui concerne leur prevention et leur restauration. Ce travail a pour objectif de permettre une meilleure comprehension des echecs de traitement, et ainsi ameliorer la prise en charge de ces lesions d'usure en etudiant la bibliographie existante et en exposant un cas clinique d’un nouveau protocole de restauration. L’etude des particularites anatomiques et histologiques des lesions cervicales non carieuses a permis de mettre en evidence l’existence d’une couche superficielle hyper-mineralisee a la surface des lesions cervicales et de la dentine sclerotique sous-jacente. Les etiologies et les facteurs de risque sont classiquement l’erosion, l’abrasion et l’abfraction. Cependant il existe d’autres facteurs de risques comme le flux salivaire, l’alimentation ou les habitudes d’hygiene dentaire. L’interception des facteurs declenchants n’est parfois pas suffisante pour repondre a la demande des patients, la mise en place d’un traitement curatif est alors necessaire. L’examen des difficultes d’adhesion amelo-dentinaire permet d’effectuer un choix de protocole d’adhesion le plus favorable en fonction de la situation clinique. L’analyse des etudes traitant d’efficacite des differents types d’adhesifs incite a privilegier l’utilisation des adhesifs avec rincage et mordancage en 3 etapes et des adhesifs auto-mordancant en 2 etapes. Les differentes therapeutiques preventives et restauratrices disponibles ont pu etre completees par l’elaboration d’un nouveau protocole de traitement par collage des pieces de ceramique a base de disilicate de lithium. Ce protocole est illustre par un cas clinique de cette nouvelle approche adhesive.", "corpus_id": 193434231, "title": "La restauration esthétique des lésions cervicales non carieuses : nouvelle approche de dentisterie adhésive" }
{ "abstract": "Objective: To assess the association of occlusal forces and brushing with non-carious cervical lesions (NCCL). Methodology: It was a Cross-sectional study. The study was conducted in Dental clinics, Department of Surgery, The Aga Khan University Hospital Karachi. The study duration was from 1st January 2009 to 28th Feb 2009. Ninety patients visiting dental clinic were examined clinically. Presence of Non- carious cervical lesions, broken restorations, fractured cusps, presence of occlusal facets, brushing habits, Para functional habits were assessed. All the relevant information and clinical examination were collected on a structured Performa and was analyzed using SPSS version 14.0. . Chi square χ2 test was applied to assess association among different categorical variables. Result: Twenty three (26%) females and 67 (74%) males were included in the study. Thirty five of them (38.9%) were found to have Non-carious cervical lesions. Presence of NCCL has no association with gender (P value 0.458). A significant association was found between NCCL and teeth sensitivity (P value 0.002).The association between use of hard tooth brush and Non-carious cervical lesions was found significant (P value <0.001). However the association among Non-carious cervical lesions and fractured cups, broken restoration, teeth grinding, jaw clenching, pan chalia chewing and frequency of teeth brushing were insignificant. Conclusion: Hard tooth brushing and teeth sensitivity have significant association with Non-carious cervical lesions. The role of occlusal wear in the formation of NCCL is not significant.", "corpus_id": 318806, "title": "Role of Brushing and Occlusal Forces in Non-Carious Cervical Lesions (NCCL)" }
{ "abstract": "PURPOSE\nThe aim of this study was to investigate whether systematic modifications of occlusal features or food consistency are suitable to reduce the loading of implants.\n\n\nMATERIALS AND METHODS\nTen healthy subjects, each of whom had a gap in the chewing center (second premolar or first molar) of one lateral dental arch, were provided with fixed partial dentures (FPD) on two ITI implants. Strain gauges attached to the abutments recorded forces in three dimensions. In each person, the original FPD was successively replaced by three FPDs with different occlusal schemes: The first had steep cusps, the second had flat cusps, and the third had the same cuspal inclination as the first but a narrow occlusal surface. Subjects chewed gummy bears and bread as a tough and a soft bolus, respectively.\n\n\nRESULTS\nIn chewing of gummy bears, the mean vertical forces of the three FPDs ranged between 264 and 284 N and were not significantly different. The mean bending moments amounted to 27 Ncm and 24 Ncm with steep and flat occlusal slopes, respectively. With the narrow occlusal surface, the bending moments were reduced by 48%, to a mean of 11 Ncm. Chewing of bread yielded similar relations with lower mean vertical forces and bending moments.\n\n\nCONCLUSION\nNarrowing the orovestibular width of the occlusal surface by 30% caused a significant reduction of lateral force components. A reduced orovestibular width of the occlusal surface is recommended in unfavorable loading conditions. In addition, the chewing of soft food is suggested during the healing period in cases of immediate loading.", "corpus_id": 22739175, "score": -1, "title": "In vivo forces on implants influenced by occlusal scheme and food consistency." }
{ "abstract": "Customer reviews are useful in providing an indirect, secondhand experience of a product. People often use reviews written by other customers as a guideline prior to purchasing a product. Such behavior signifies the authenticity of reviews in e-commerce platforms. However, fake reviews are increasingly becoming a hassle for both consumers and product owners. To address this issue, we propose You Only Need Gold (YONG), an essential information mining tool for detecting fake reviews and augmenting user discretion. Our experimental results show the poor human performance on fake review detection, substantially improved user capability given our tool, and the ultimate need for user reliance on the tool.", "corpus_id": 233365220, "title": "Can You Distinguish Truthful from Fake Reviews? User Analysis and Assistance Tool for Fake Review Detection" }
{ "abstract": "The usefulness of user-generated online reviews is hampered by fake reviews, often produced by clandestinely sponsored reviewers. Detecting fake reviews is a difficult task even for laypeople, and this has also been the case for previous automatic detection approaches, which have only had a limited success. Earlier studies showed that people who tell lies or write deceptive reviews tend to select words unnaturally. We propose a novel approach to detecting fake reviews by applying a topic modeling method based on Latent Dirichlet Allocation (LDA). A unique contribution of this paper is to explicate some latent aspects of fake and truthful reviews by means of \"topics\" that are not necessarily subject areas but related to the word choice patterns reflecting behavioral and linguistic characteristics of the fake review writers. We constructed a labeled dataset based on Yelp and demonstrated that the proposed approach helps identifying unique aspects of fake and truthful reviews, which has a potential to improving the performance of the fake review detection task. The experimental result shows that our proposed method yields better performance than that of state-of-the-art methods for small size categories in our dataset.", "corpus_id": 6322169, "title": "Capturing Word Choice Patterns with LDA for Fake Review Detection in Sentiment Analysis" }
{ "abstract": "This paper describes our participation in the SemEval-2016 task 5, Aspect Based Sentiment Analysis (ABSA). We participated in two slots in the sentence level ABSA (Subtask 1) namely: aspect category extraction (Slot 1) and sentiment polarity extraction (Slot 3) in English Restaurants and Laptops reviews. For Slot 1, we applied different models for each domain. In the restaurants domain, we used an ensemble classifier for each aspect which is a combination of a Convolutional Neural Network (CNN) classifier initialized with pretrained word vectors, and a Support Vector Machine (SVM) classifier based on the bag of words model. For the Laptops domain, we used only one CNN classifier that predicts the aspects based on a probability threshold. For Slot 3, we incorporated domain and aspect knowledge in one ensemble CNN classifier initialized with fine-tuned word vectors and used it in both domains. In the Restaurants domain, our system achieved the 2 nd and the 3 rd places in Slot 1 and Slot 3 respectively. However, we ranked the 8 th in Slot 1 and the 5 th in Slot 3 in the Laptops domain. Our extended experiments show our system could have ranked 2 nd in the Laptops domain in Slot 1 and Slot 3, had we followed the same approach we followed in the Restaurants domain in slot 1 and trained each domain separately in Slot 3.", "corpus_id": 16818666, "score": -1, "title": "NileTMRG at SemEval-2016 Task 5: Deep Convolutional Neural Networks for Aspect Category and Sentiment Extraction" }
{ "abstract": "With the vast number of government programmes around the world supporting virtually every phase of biofuels, there is a strong commitment towards the development of these fuels. However, our current knowledge base of biofuel production, marketing and environmental impact is filled with uncertainty. To shed light on the uncertainty, especially from a US perspective, this review of the biofuel economic literature attempts to determine the most fruitful areas of economic research. As a foundation, it is currently accepted that the US maize-based ethanol industry is sustainable with present government incentives and regulations, while cellulosic-based ethanol is not. Thus, without major new government incentives it is unlikely the USA will achieve goals set by various energy policy acts. The literature indicates a governmental system approach is required which advances biofuels to markets. Such an approach integrates research, regulatory initiatives and education. In terms of the food versus fuel issue, markets are very responsive to price shocks which will mitigate food inflation. However, market gyrations will occur, which will negatively impact the world’s poor. With government incentives and regulations, the short-run future of biofuels is bright, while in the long run, biofuels will contribute to, but are unlikely to dominate, our future fuel supply.", "corpus_id": 14585542, "title": "Biofuel economics from a US perspective: past and future." }
{ "abstract": "Research indicates that large biorefineries capable of handling 5000-10000MT of biomass per day are necessary to achieve process economies. However, such large biorefineries also entail increased costs of biomass transportation and storage, high transaction costs of contracting with a large number of farmers for biomass supply, potential market power issues, and local environmental impacts. We propose a network of regional biomass preprocessing centers (RBPC) that form an extended biomass supply chain feeding into a biorefinery, as a way to address these issues. The RBPC, in its mature form, is conceptualized as a flexible processing facility capable of pre-treating and converting biomass into appropriate feedstocks for a variety of final products such as fuels, chemicals, electricity, and animal feeds. We evaluate the technical and financial feasibility of a simple RBPC that uses ammonia fiber expansion pretreatment process and produces animal feed along with biorefinery feedstock.", "corpus_id": 154861602, "title": "Technical and Financial Feasibility Analysis of Distributed Bioprocessing Using Regional Biomass Pre-Processing Centers" }
{ "abstract": "A mathematical programming model is built to analyze the economic feasibility of producing ethanol from lignocellulosic feedstocks. The optimal size of an ethanol plant is determined by the trade-off between increasing transportation costs for feedstocks versus decreasing average plant costs as the plant size increases. The ethanol plant is modeled under the assumption that it utilizes recent technological advancements in dilute acid hydrolysis. Potential feedstocks include energy crops, crop residues and woody biomass. It is found that the recent technological advancements appear to make ethanol competitive with gasoline, but only if higher valued chemicals are produced as co-products with the ethanol. The low cost and chemical composition of crop residues make them attractive as a feedstock.", "corpus_id": 95629025, "score": -1, "title": "Economic feasibility of producing ethanol from lignocellulosic feedstocks" }
{ "abstract": "The Candida genus encompasses a diverse group of ascomycete fungi that have captured the attention of the scientific community, due to both their role in pathogenesis and emerging applications in biotechnology; the development of gene editing tools such as CRISPR, to analyze fungal genetics and perform functional genomic studies in these organisms, is essential to fully understand and exploit this genus, to further advance antifungal drug discovery and industrial value. However, genetic manipulation of Candida species has been met with several distinctive barriers to progress, such as unconventional codon usage in some species, as well as the absence of a complete sexual cycle in its diploid members. Despite these challenges, the last few decades have witnessed an expansion of the Candida genetic toolbox, allowing for diverse genome editing applications that range from introducing a single point mutation to generating large-scale mutant libraries for functional genomic studies. Clustered regularly interspaced short palindromic repeats (CRISPR)-Cas9 technology is among the most recent of these advancements, bringing unparalleled versatility and precision to genetic manipulation of Candida species. Since its initial applications in Candida albicans, CRISPR-Cas9 platforms are rapidly evolving to permit efficient gene editing in other members of the genus. The technology has proven useful in elucidating the pathogenesis and host-pathogen interactions of medically relevant Candida species, and has led to novel insights on antifungal drug susceptibility and resistance, as well as innovative treatment strategies. CRISPR-Cas9 tools have also been exploited to uncover potential applications of Candida species in industrial contexts. 
This review is intended to provide a historical overview of genetic approaches used to study the Candida genus and to discuss the state of the art of CRISPR-based genetic manipulation of Candida species, highlighting its contributions to deciphering the biology of this genus, as well as providing perspectives for the future of Candida genetics.", "corpus_id": 231149479, "title": "CRISPR-Based Genetic Manipulation of Candida Species: Historical Perspectives and Current Approaches" }
{ "abstract": "CRISPR simplifies genetic engineering of the human fungal pathogen Candida albicans. Candida albicans is a pathogenic yeast that causes mucosal and systematic infections with high mortality. The absence of facile molecular genetics has been a major impediment to analysis of pathogenesis. The lack of meiosis coupled with the absence of plasmids makes genetic engineering cumbersome, especially for essential functions and gene families. We describe a C. albicans CRISPR system that overcomes many of the obstacles to genetic engineering in this organism. The high frequency with which CRISPR-induced mutations can be directed to target genes enables easy isolation of homozygous gene knockouts, even without selection. Moreover, the system permits the creation of strains with mutations in multiple genes, gene families, and genes that encode essential functions. This CRISPR system is also effective in a fresh clinical isolate of undetermined ploidy. Our method transforms the ability to manipulate the genome of Candida and provides a new window into the biology of this pathogen.", "corpus_id": 309849, "title": "A Candida albicans CRISPR system permits genetic engineering of essential genes and gene families" }
{ "abstract": "Background Technologies for making and manipulating DNA have enabled advances in biology ever since the discovery of the DNA double helix. But introducing site-specific modifications in the genomes of cells and organisms remained elusive. Early approaches relied on the principle of site-specific recognition of DNA sequences by oligonucleotides, small molecules, or self-splicing introns. More recently, the site-directed zinc finger nucleases (ZFNs) and TAL effector nucleases (TALENs) using the principles of DNA-protein recognition were developed. However, difficulties of protein design, synthesis, and validation remained a barrier to widespread adoption of these engineered nucleases for routine use. The Cas9 enzyme (blue) generates breaks in double-stranded DNA by using its two catalytic centers (blades) to cleave each strand of a DNA target site (gold) next to a PAM sequence (red) and matching the 20-nucleotide sequence (orange) of the single guide RNA (sgRNA). The sgRNA includes a dual-RNA sequence derived from CRISPR RNA (light green) and a separate transcript (tracrRNA, dark green) that binds and stabilizes the Cas9 protein. Cas9-sgRNA–mediated DNA cleavage produces a blunt double-stranded break that triggers repair enzymes to disrupt or replace DNA sequences at or near the cleavage site. Catalytically inactive forms of Cas9 can also be used for programmable regulation of transcription and visualization of genomic loci. Advances The field of biology is now experiencing a transformative phase with the advent of facile genome engineering in animals and plants using RNA-programmable CRISPR-Cas9. The CRISPR-Cas9 technology originates from type II CRISPR-Cas systems, which provide bacteria with adaptive immunity to viruses and plasmids. 
The CRISPR-associated protein Cas9 is an endonuclease that uses a guide sequence within an RNA duplex, tracrRNA:crRNA, to form base pairs with DNA target sequences, enabling Cas9 to introduce a site-specific double-strand break in the DNA. The dual tracrRNA:crRNA was engineered as a single guide RNA (sgRNA) that retains two critical features: a sequence at the 5′ side that determines the DNA target site by Watson-Crick base-pairing and a duplex RNA structure at the 3′ side that binds to Cas9. This finding created a simple two-component system in which changes in the guide sequence of the sgRNA program Cas9 to target any DNA sequence of interest. The simplicity of CRISPR-Cas9 programming, together with a unique DNA cleaving mechanism, the capacity for multiplexed target recognition, and the existence of many natural type II CRISPR-Cas system variants, has enabled remarkable developments using this cost-effective and easy-to-use technology to precisely and efficiently target, edit, modify, regulate, and mark genomic loci of a wide array of cells and organisms. Outlook CRISPR-Cas9 has triggered a revolution in which laboratories around the world are using the technology for innovative applications in biology. This Review illustrates the power of the technology to systematically analyze gene functions in mammalian cells, study genomic rearrangements and the progression of cancers or other diseases, and potentially correct genetic mutations responsible for inherited disorders. CRISPR-Cas9 is having a major impact on functional genomics conducted in experimental systems. Its application in genome-wide studies will enable large-scale screening for drug targets and other phenotypes and will facilitate the generation of engineered animal models that will benefit pharmacological studies and the understanding of human diseases. CRISPR-Cas9 applications in plants and fungi also promise to change the pace and course of agricultural research. 
Future research directions to improve the technology will include engineering or identifying smaller Cas9 variants with distinct specificity that may be more amenable to delivery in human cells. Understanding the homology-directed repair mechanisms that follow Cas9-mediated DNA cleavage will enhance insertion of new or corrected sequences into genomes. The development of specific methods for efficient and safe delivery of Cas9 and its guide RNAs to cells and tissues will also be critical for applications of the technology in human gene therapy. The advent of facile genome engineering using the bacterial RNA-guided CRISPR-Cas9 system in animals and plants is transforming biology. We review the history of CRISPR (clustered regularly interspaced palindromic repeat) biology from its initial discovery through the elucidation of the CRISPR-Cas9 enzyme mechanism, which has set the stage for remarkable developments using this technology to modify, regulate, or mark genomic loci in a wide variety of cells and organisms from all three domains of life. These results highlight a new era in which genomic manipulation is no longer a bottleneck to experiments, paving the way toward fundamental discoveries in biology, with applications in all branches of biotechnology, as well as strategies for human therapeutics. CRISPR-cas: A revolution in genome engineering The ability to engineer genomic DNA in cells and organisms easily and precisely will have major implications for basic biology research, medicine, and biotechnology. Doudna and Charpentier review the history of genome editing technologies, including oligonucleotide coupled to genome cleaving agents that rely on endogenous repair and recombination systems to complete the targeted changes, self-splicing introns, and zinc-finger nucleases and TAL effector nucleases. 
They then describe how clustered regularly interspaced palindromic repeats (CRISPRs), and their associated (Cas) nucleases, were discovered to constitute an adaptive immune system in bacteria. They document development of the CRISPR-Cas system into a facile genome engineering tool that is revolutionizing all areas of molecular biology. Science, this issue 10.1126/science.1258096", "corpus_id": 6299381, "score": -1, "title": "The new frontier of genome engineering with CRISPR-Cas9" }
{ "abstract": "A television (TV) character’s actions and the consequences of these actions in TV storylines can shape the audience’s own behavioral intentions, especially if the audience identifies with that character. The current research examines how storylines depicting positive versus negative consequences of drinking affect youths’ drinking intentions, and whether post-narrative intervention messages delivered by story characters alter these influences. Results indicate that a post-narrative intervention can correct drinking intentions shaped by a pro-alcohol storyline, but the effectiveness depends on the source: a peripheral character is more effective than the main character at delivering a corrective message. This research pinpoints the role of identification with the main character as a key driver of stories’ influence and a key focus of health intervention efforts to correct these stories’ potentially undesirable impact on vulnerable audiences", "corpus_id": 262933006, "title": "ScholarWorks ScholarWorks" }
{ "abstract": "Misinformation can influence personal and societal decisions in detrimental ways. Not only is misinformation challenging to correct, but even when individuals accept corrective information, misinformation can continue to influence attitudes: a phenomenon known as belief echoes, affective perseverance, or the continued influence effect. Two controlled experiments tested the efficacy of narrative-based correctives to reduce this affective residual in the context of misinformation about organic tobacco. Study 1 (N = 385) tested within-narrative corrective endings, embedded in four discrete emotions (happiness, anger, sadness, and fear). Study 2 (N = 586) tested the utility of a narrative with a negative, emotional corrective ending (fear and anger). Results provide some evidence that narrative correctives, with or without emotional endings, can be effective at reducing misinformed beliefs and intentions, but narratives consisting of emotional corrective endings are better at correcting attitudes than a simple corrective. Implications for misinformation scholarship and corrective message design are discussed.", "corpus_id": 155750529, "title": "The Potential for Narrative Correctives to Combat Misinformation†." }
{ "abstract": "The authors investigated the predictive utility of people's subjective assessments of whether their evaluations are affect- or cognition driven (i.e., meta-cognitive bases) as separate from whether people's attitudes are actually affect- or cognition based (i.e., structural bases). Study 1 demonstrated that meta-bases uniquely predict interest in affective versus cognitive information above and beyond structural bases and other related variables (i.e., need for cognition and need for affect). In Study 2, meta-bases were shown to account for unique variance in attitude change as a function of appeal type. Finally, Study 3 showed that as people became more deliberative in their judgments, meta-bases increased in predictive utility, and structural bases decreased in predictive utility. These findings support the existence of meta-bases of attitudes and demonstrate that meta-bases are distinguishable from structural bases in their predictive utility.", "corpus_id": 15552053, "score": -1, "title": "Affective and cognitive meta-bases of attitudes: Unique effects on information interest and persuasion." }
{ "abstract": "The ovarian stimulation and the follicular puncture in ART present risks which must be planned in order to better prevent them. These complications are the ovarian hyperstimulation syndrome, the thromboembolic and carcinologic risks; the anaesthetic, hemorrhagic and infectious risks of the punctures. The presence of an endometrioma can generate an increase in the infectious risk. # 2009 Elsevier Masson SAS. All rights reserved.", "corpus_id": 1390274, "title": "Quatorzièmes Journées nationales de la FFER ( Clermont-Ferrand , 18 – 20 novembre 2009 ) Risques de la stimulation ovarienne et du prélèvement ovocytaire Ovarian stimulation and follicular puncture risks" }
{ "abstract": "Certain patients have a tendency for high response to gonadotrophin therapy which is often not ameliorated with prior gonadotrophin-releasing hormone agonist (GnRHa) suppression. As a result, these patients are frequently cancelled and often experience ovarian hyperstimulation syndrome (OHSS) episodes during in-vitro fertilization (IVF)-embryo transfer cycles. Patients with polycystic ovarian syndrome (PCOS) have been noted to be particularly sensitive to exogenous gonadotrophin therapy. We have developed a protocol which is effective in improving IVF outcome in high responder patients, including those with PCOS. Oral contraceptive pills (OCP) are taken for 25 days followed by s.c. leuprolide acetate, 1 mg/day, which is overlapped with the final 5 days of oral contraceptive administration. Low-dose gonadotrophin stimulation is then initiated on the third day of withdrawal bleeding in the form of either human menopausal gonadotrophins or purified urinary follicle-stimulating hormone at a dosage of 150 IU/day. Over a 5 year period, we reviewed our experience utilizing this dual method of suppression in 99 cycles obtained in 73 high responder patients. There were only 13 cancellations prior to embryo transfer (13.1%). The clinical and ongoing pregnancy rates per initiated cycle were 46.5 and 40.4% respectively. Only eight patients experienced mild-moderate OHSS following treatment. For those patients who had undergone previous IVF-embryo transfer cycles at our centre, significant improvements were noted in oocyte fertilization rates, embryo implantation rates and clinical/ongoing pregnancy rates with this protocol. Hormonal analyses revealed that the chief mechanism may be through an improved luteinizing hormone/follicle-stimulating hormone ratio following dual suppression. 
An additional feature of this dual method of suppression is significantly lower serum androgen concentrations, particularly dehydroepiandrosterone sulphate.", "corpus_id": 1821756, "title": "Dual suppression with oral contraceptives and gonadotrophin releasing-hormone agonists improves in-vitro fertilization outcome in high responder patients." }
{ "abstract": "Objective: Prenatal screening has become an increasingly common procedure all over the world. It offers couples useful information relating to the health of their fetus, although it faces us with serious ethical dilemmas as well. This study was conducted to find out the attitudes of Iranian scholars towards prenatal screening and counseling with respect to ethical issues. Methods: Two hundred and one physicians, genetic and religious scholars were interviewed with regard to demographics and attitudes towards the ethical dilemmas in prenatal screening and counseling. Interviews were analyzed using the four-principle approach. Results: Findings showed scholars’ attitudes towards: (1) the right of couples to choose prenatal screening, (2) the role of prenatal screening and counseling concerning termination of an affected fetus, (3) screening results and emotional distress in couples, and (4) the impact of prenatal screening and counseling on disability rate. Conclusion: Iranian scholars were willing to consider prenatal screening to help prevent transmission of diseases to the next generation. This goal is attained through the autonomous choice of the couple to participate in prenatal screening and counseling.", "corpus_id": 33974896, "score": -1, "title": "Prenatal Screening and Counseling in Iran and Ethical Dilemmas" }
{ "abstract": "This article reports the findings from in-depth qualitative interviews with 18 service providers who worked with families facing foreclosure. The interviews’ purpose was to better understand a broad range of families’ experiences and inductive coding focused on quotes that reflected the meaning of those experiences. The analysis extracted three main themes related to foreclosure representing threat: (a) foreclosure threatened children’s education, (b) foreclosure threatened family memories, and (c) foreclosure threatened clients’ sense of self and attaining the American Dream. Providers reported that families fought to keep their homes and hoped to buy again after foreclosure. The findings suggest that social work services could be beneficial in helping families navigate the emotional and financial impact of the foreclosure experience.", "corpus_id": 148579664, "title": "Family, Identity, and the American Dream: Service Providers’ Perspectives on Families’ Experiences With Foreclosure" }
{ "abstract": "While significant attention has been paid to Wall Street investors and families impacted by the current subprime mortgage crisis in the USA, the lives of Sesame Street are minimally discussed. Children and their families are enduring a variety of consequences of foreclosures. The consequences can be hugely disruptive to the approximately 2 million voiceless victims. For the youngest citizens of the USA — its children — the subprime mortgage crisis, particularly home foreclosures, is impacting school attendance, academic performance and achievement, social development and emotional well-being. The authors argue that media and political attention should also include the unintended and often unnoticed repercussions of foreclosures on young children and their education. It is also argued that educators and policy makers should create policies and develop concerted efforts to alleviate the negative impacts on young children.", "corpus_id": 153358450, "title": "The Lives of Sesame Street: The Impact of Foreclosures on Young Children and Families" }
{ "abstract": "Suicide is recognised to be subject to social contagion, with an elevated risk of adverse outcomes amongst those affected. Drawing upon research within the social identity approach, we hypothesised that, for those bereaved by suicide, identifying with similar others could provide ‘a social cure’. A large cross-sectional study and a longitudinal study were carried out at a charity fundraiser for suicide prevention, with participants completing an online survey before and after the event. Results showed that, for those who lost someone they knew (Study 1) or a family member (Study 2) to suicide, there was a significant increase in psychological well-being after the event. This was mediated by identification with the crowd. These findings demonstrate that collective participation in a suicide awareness event can be an effective social intervention for those bereaved by suicide in terms of psychological well-being, with implications for informing best-practice interventions targeting this at-risk group.", "corpus_id": 148950498, "score": -1, "title": "Darkness into Light? Identification with the Crowd at a Suicide Prevention Fundraiser Promotes Well-Being amongst Participants" }
{ "abstract": "In this paper, we revisit the lattice representation of continuous piecewise affine (PWA) function and give a formal proof of its representation ability. Based on this, we derive the irredundant lattice PWA representation through removal of redundant terms and literals. Necessary and sufficient conditions for irredundancy are proposed. Besides, we explain how to remove terms and literals in order to ensure irredundancy. An algorithm is given to obtain an irredundant lattice PWA representation. Both the offline and online complexity as well as the storage requirement of the irredundant lattice PWA representation are analyzed. In the worked examples, the irredundant lattice PWA representation is used to express the optimal solution of the explicit model predictive control, and the results turn out to be much more compact than those given by a state-of-the-art algorithm.", "corpus_id": 23682962, "title": "Irredundant lattice representationsof continuouspiecewise affine functions" }
{ "abstract": "The problem of constructing a general continuous piecewise-linear neural network is considered in this paper. It is shown that every projection domain of an arbitrary continuous piecewise-linear function can be partitioned into convex polyhedra by using difference functions of its local linear functions. Based on these convex polyhedra, a group of continuous piecewise-linear basis functions are formulated. It is proven that a linear combination of these basis functions plus a constant, which we call a standard continuous piecewise-linear neural network, can represent all continuous piecewise-linear functions. In addition, the proposed standard continuous piecewise-linear neural network is applied to solve some function approximation problems. A number of numerical experiments are presented to illustrate that the standard continuous piecewise-linear neural network can be a promising tool for function approximation.", "corpus_id": 1070672, "title": "Configuration of Continuous Piecewise-Linear Neural Networks" }
{ "abstract": "This brief proposes a truncated $\\ell _{1}$ distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but is linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with the parameter tuned by cross validation, implying the TL1 kernel a promising nonlinear kernel for classification tasks.", "corpus_id": 5039817, "score": -1, "title": "Classification With Truncated $\\ell _{1}$ Distance Kernel" }