query: dict
pos: dict
neg: dict
{ "abstract": "Decades before the recent advances in molecular biology and the knowledge of the complete nucleotide sequence of several genomes, cytogenetic analysis provided the first information concerning the genome organization. Since the beginning of cytogenetics, great effort has been applied for understanding the chromosome evolution in a wide range of taxonomic groups. The exploration of molecular biology techniques in the cytogenetic area represents a powerful tool for advancement in the construction of physical chromosome maps of the genomes. The most important contribution of cytogenetics is related to the physical anchorage of genetic linkage maps in the chromosomes through the hybridization of DNA markers onto chromosomes. Several technologies, such as polymerase chain reaction (PCR), enzymatic restriction, flow sorting, chromosome microdissection and BAC library construction, associated with distinct labeling methods and fluorescent detection systems have allowed for the generation of a range of useful DNA probes applied in chromosome physical mapping. Concerning the probes used for molecular cytogenetics, the repetitive DNA is amongst the most explored nucleotide sequences. The recent development of bacterial artificial chromosomes (BACs) as vectors for carrying large genome fragments has allowed for the utilization of BACs as probes for the purpose of chromosome mapping. BACs have narrowed the gap between cytogenetic and molecular genetics and have become important tools for visualizing the organization of genomes and chromosome mapping. Furthermore, the use of chromosome probes has permitted the development of chromosome painting technologies, allowing an understanding of particular chromosomal areas, whole chromosomes or even whole karyotypes. 
Moreover, chromosomal analysis using these specific probes has contributed to the knowledge of supernumerary chromosomes, sex chromosomes, species evolution, and the identification of C Martins, DC Cabral-de-Mello, GT Valente, J Mazzuchelli, SG Oliveira 2 chromosomal rearrangements. Finally, the synergy between chromosomal and molecular biology analysis makes cytogenetics a powerful area in the integration of knowledge in genetics, genomics, taxonomy and evolution. Chromosomes as a Tool for Understanding", "corpus_id": 43981431, "title": "Chapter 1 Cytogenetic Mapping and Contribution to the Knowledge of Animal Genomes" }
{ "abstract": "The polymorphic Sp100-rs repeat cluster in chromosome band 1D of the house mouse, Mus musculus, makes up as much as 0.1–5% of the haploid genome. ‘High-copy’ versions of this long-range repeat cluster are cytogenetically apparent as DAPI-negative chromomycin-A3-positive homogeneously staining regions (HSRs). The cluster is a relatively recent acquisition in the genus Mus; the related species M. caroli possesses neither the Sp100-rs cluster nor even the Sp100-rs gene. Except for chromosomes with high-copy clusters, no major rearrangements are visible in chromosomes 1 from M. musculus and M. caroli: they have the same order of G-bands, DAPI-bands and chromomycin A3-bands. Comparative genomic hybridization (CGH) visualizes the cluster in M. musculus and detects a single region of sequence homology to the cluster in M. caroli chromosome band 1D. This indicates that the M. musculus cluster has evolved in situ from sequences originally present in the same chromosome band.", "corpus_id": 2009044, "title": "Origin of the Chromosome 1 HSR of the House Mouse Detected by CGH" }
{ "abstract": "C-banding, base-specific fluorochrome staining (CMA3/DA/DAPI), and comparative genomic hybridization (CGH) were used to analyze the constitutive heterochromatin in two Israeli Spalax species, S. galili (2n = 52) and S. judai (2n = 60). It was shown that C-positive centromeric heterochromatin and some telomeric sites comprise GC-rich DNA sequences in both species. Comparative genomic in situ hybridization revealed slight qualitative differences in highly repetitive sequences in the two Spalax species. Eight acrocentric pairs in S. judai that are involved in Robertsonian rearrangements, possessed composite heterochromatin with a preference of S. judai highly repetitive sequences in the proximal region. Heterochromatin of the sex chromosomes, two biarmed homologous pairs (4 and 5) in both species, and acrocentric chromosomes from the group with a variable centromere position in S. judai was entirely species-specific. The high level of homology in the composition of heterochromatin may relate to the recent divergence of Israeli Spalax. Interspecies heterochromatin differences are discussed in the context of possible mechanisms in the Spalax chromosome evolution.", "corpus_id": 23559362, "score": -1, "title": "Heterochromatin differentiation shows the pathways of karyotypic evolution in Israeli mole rats (Spalax, Spalacidae, Rodentia)" }
{ "abstract": "Lumbar radiculopathy pain represents a major public health problem, with few effective long-term treatments. Preclinical neuropathic and postsurgical pain studies implicate the kinase adenosine monophosphate activated kinase (AMPK) as a potential pharmacological target for the treatment of chronic pain conditions. Metformin, which acts via AMPK, is a safe and clinically available drug used in the treatment of diabetes. Despite the strong preclinical rationale, the utility of metformin as a potential pain therapeutic has not yet been studied in humans. Our objective was to assess whether metformin is associated with decreased lumbar radiculopathy pain, in a retrospective chart review. We completed a retrospective chart review of patients who sought care from a university pain specialist for lumbar radiculopathy between 2008 and 2011. Patients on metformin at the time of visit to a university pain specialist were compared with patients who were not on metformin. We compared the pain outcomes in 46 patients on metformin and 94 patients not taking metformin therapy. The major finding was that metformin use was associated with a decrease in the mean of “pain now,” by −1.85 (confidence interval: −3.6 to −0.08) on a 0–10 visual analog scale, using a matched propensity scoring analysis and confirmed using a Bayesian analysis, with a significant mean decrease of −1.36 (credible interval: −2.6 to −0.03). Additionally, patients on metformin showed a non-statistically significant trend toward decreased pain on a variety of other pain descriptors. Our proof-of-concept findings suggest that metformin use is associated with a decrease in lumbar radiculopathy pain, providing a rational for larger retrospective trials in different pain populations and for prospective trials, to test the effectiveness of metformin in reducing neuropathic pain.", "corpus_id": 14038465, "title": "The use of metformin is associated with decreased lumbar radiculopathy pain" }
{ "abstract": "Dear Editor,\n\nDercum's disease is a disorder characterized by the development of multiple painful lipomas on the trunk and proximal parts of the extremities. It is a progressive condition that leads to body disfigurement, compressive neuropathy, and a chronic pain syndrome [1]. In this letter, we describe the case of a patient with Dercum's disease in whom metformin, prescribed for newly diagnosed type 2 diabetes mellitus, provided adequate pain control.\n\nA 48-year-old white Caucasian male presented with a history of Dercum's disease, bilateral hearing impairment, and gastroesophageal reflux disease. The patient described the character of his pain as burning, dull, persistent, and exacerbated by changes in body position and physical activity. It had a diurnal variation of 1/10, lessened slightly during the night, localized to the specific lipomas, and was without any radiation or other associated symptoms. On a numerical pain scale, the patient rated the intensity of his pain as 8/10, 6/10, and 7–8/10 during the visit, during the last week, and over the last month, respectively. The intensity of pain varied among the different lipomas, and palpation evoked a more intense pain sensation in the lipomas with …", "corpus_id": 378998, "title": "Controlling newly diagnosed type 2 diabetes mellitus with metformin managed pain symptoms in a patient affected with Dercum's disease." }
{ "abstract": "We reviewed the charts of 10 patients who were admitted to the Massachusetts Eye and Ear Infirmary over a 10-year period with the diagnosis of Bacillus species endophthalmitis. To our knowledge this is the largest single series in the literature and includes the first two reported cases of Bacillus endophthalmitis following glaucoma filtering procedures. Seven cases developed following penetrating ocular trauma. One occurred in an intravenous drug abuser. Five eyes ultimately underwent enucleation; only the two eyes that developed endophthalmitis after elective surgery retained useful vision. Review of the literature indicates that parenteral and intravitreal antibiotic prophylaxis against endophthalmitis after penetrating ocular trauma should include gentamicin, in combination with vancomycin or clindamycin, to provide adequate coverage against infection with Bacillus spp., as prognosis is poor once infection is established. Bacillus spp. cultured from ocular tissues or fluids should not be dismissed as contaminants.", "corpus_id": 23125026, "score": -1, "title": "Bacillus-induced endophthalmitis: new series of 10 cases and review of the literature." }
{ "abstract": "HIERARCHICAL BAYESIAN DATA FUSION USING AUTOENCODERS Yevgeniy V. Reznichenko, B.S. Marquette University, 2018 In this thesis, a novel method for tracker fusion is proposed and evaluated for vision-based tracking. This work combines three distinct popular techniques into a recursive Bayesian estimation algorithm. First, semi supervised learning approaches are used to partition data and to train a deep neural network that is capable of capturing normal visual tracking operation and is able to detect anomalous data. We compare various methods by examining their respective receiver operating conditions (ROC) curves, which represent the trade off between specificity and sensitivity for various detection threshold levels. Next, we incorporate the trained neural networks into an existing data fusion algorithm to replace its observation weighing mechanism, which is based on the Mahalanobis distance. We evaluate different semi-supervised learning architectures to determine which is the best for our problem. We evaluated the proposed algorithm on the OTB-50 benchmark dataset and compared its performance to the performance of the constituent trackers as well as with previous fusion. Future work involving this proposed method is to be incorporated into an autonomous following unmanned aerial vehicle (UAV).", "corpus_id": 57591640, "title": "Hierarchical Bayesian Data Fusion Using Autoencoders" }
{ "abstract": "Data-driven approaches have gained increasing interests in the fault detection of wind turbines (WTs) due to the difficulty in system modeling and the availability of sensor data. However, the nonlinearity of WTs, uncertainty of disturbances and measurement noise, and temporal dependence in time-series data still pose grand challenges to effective fault detection. To this end, this paper proposes a new fault detector based on a recently developed unsupervised learning method, denoising autoencoder (DAE), which offers the learning of robust nonlinear representations from data against noise and input fluctuation. A DAE is used to build a robust multivariate reconstruction model on raw time-series data from multiple sensors, and then, the reconstruction error of the DAE trained with normal data is analyzed for fault detection. In addition, we apply the sliding-window technique to consider temporal information inherent in time-series data by including the current and past information within a small time window. A key advantage of the proposed approach is the ability to capture the nonlinear correlations among multiple sensor variables and the temporal dependence of each sensor variable simultaneously, which significantly enhanced the fault detection performance. Simulated data from a generic WT benchmark and field supervisory control and data acquisition data from a real wind farm are used to evaluate the proposed approach. The results of two case studies demonstrate the effectiveness and advantages of our proposed approach.", "corpus_id": 3388502, "title": "Wind Turbine Fault Detection Using a Denoising Autoencoder With Temporal Information" }
{ "abstract": "Many studies on the prediction of manufacturing results using sensor signals have been conducted in the field of fault detection and classification (FDC) for semiconductor manufacturing processes. However, fault diagnosis used to find clues as to root causes remains a challenging area. In particular, process monitoring using neural networks has been employed to only a limited extent because it is a black box model, making the relationships between input data and output results difficult to interpret in actual manufacturing settings, despite its high classification performance. In this paper, we propose a convolutional neural network (CNN) model, named FDC-CNN, in which a receptive field tailored to multivariate sensor signals slides along the time axis, to extract fault features. This approach enables the association of the output of the first convolutional layer with the structural meaning of the raw data, making it possible to locate the variable and time information that represents process faults. In an experiment on a chemical vapor deposition process, the proposed method outperformed other deep learning models.", "corpus_id": 22772949, "score": -1, "title": "A Convolutional Neural Network for Fault Classification and Diagnosis in Semiconductor Manufacturing Processes" }
{ "abstract": "Preclinical Research", "corpus_id": 46769289, "title": "Fucoidan Induces ROS‐Dependent Apoptosis in 5637 Human Bladder Cancer Cells by Downregulating Telomerase Activity via Inactivation of the PI3K/Akt Signaling Pathway" }
{ "abstract": "Defective oxidative phosphorylation has a crucial role in the attenuation of mitochondrial function, which confers therapy resistance in cancer. Various factors, including endogenous heat shock proteins (HSPs) and exogenous agents such as dichloroacetate, restore respiratory and other physiological functions of mitochondria in cancer cells. Functional mitochondria might ultimately lead to the restoration of apoptosis in cancer cells that are refractory to current anticancer agents. Here, we summarize the key reasons contributing to mitochondria dysfunction in cancer cells and how restoration of mitochondrial function could be exploited for cancer therapeutics.", "corpus_id": 1365704, "title": "Restoration of mitochondria function as a target for cancer therapy." }
{ "abstract": "Nanogap biosensors have fascinated researchers due to their excellent electrical properties. Nanogap biosensors comprise three arrays of electrodes that form nanometer‐size gaps. The sensing gaps have become the major building blocks of several sensing applications, including bio‐ and chemosensors. One of the advantages of nanogap biosensors is that they can be fabricated in nanoscale size for various downstream applications. Several studies have been conducted on nanogap biosensors, and nanogap biosensors exhibit potential material properties. The possibilities of combining these unique properties with a nanoscale‐gapped device and electrical detection systems allow excellent and potential prospects in biomolecular detection. However, their fabrication is challenging as the gap is becoming smaller. It includes high‐cost, low‐yield, and surface phenomena to move a step closer to the routine fabrications. This review summarizes different feasible techniques in the fabrication of nanogap electrodes, such as preparation by self‐assembly with both conventional and nonconventional approaches. This review also presents a comprehensive analysis of the fabrication, potential applications, history, and the current status of nanogap biosensors with a special focus on nanogap‐mediated bio‐ and chemical sonsors.", "corpus_id": 235480531, "score": -1, "title": "Recent advances in techniques for fabrication and characterization of nanogap biosensors: A review" }
{ "abstract": "BACKGROUND\nCytomegalovirus (CMV) is a late-stage opportunistic infection in people living with human immunodeficiency virus (HIV)/AIDS. Lack of ophthalmological diagnostic skills, lack of convenient CMV treatment, and increasing access to antiretroviral therapy have all contributed to an assumption that CMV retinitis is no longer a concern in low- and middle-income settings.\n\n\nMETHODS\nWe conducted a systematic review and meta-analysis of published and unpublished studies reporting prevalence of CMV retinitis in low- and middle-income countries. Eligible studies assessed the occurrence of CMV retinitis by funduscopic examination within a cohort of at least 10 HIV-positive adult patients.\n\n\nRESULTS\nWe identified 65 studies from 24 countries, mainly in Asia (39 studies, 12 931 patients) and Africa (18 studies, 4325 patients). By region, the highest prevalence was observed in Asia with a pooled prevalence of 14.0% (11.8%-16.2%). Almost a third (31.6%, 95% confidence interval [CI], 27.6%-35.8%) had vision loss in 1 or both eyes. Few studies reported immune status, but where reported CD4 count at diagnosis of CMV retinitis was <50 cells/µL in 73.4% of cases. There was no clear pattern of prevalence over time, which was similar for the period 1993-2002 (11.8%; 95% CI, 8%-15.7%) and 2009-2013 (17.6%; 95% CI, 12.6%-22.7%).\n\n\nCONCLUSIONS\nPrevalence of CMV retinitis in resource low- and middle-income countries, notably Asian countries, remains high, and routine retinal screening of late presenting HIV-positive patients should be considered. HIV programs must ensure capacity to manage the needs of patients who present late for care.", "corpus_id": 18920814, "title": "Burden of HIV-related cytomegalovirus retinitis in resource-limited settings: a systematic review." }
{ "abstract": "BackgroundThe Chinese government has provided health services to those infected by the human immunodeficiency virus (HIV) under the acquired immunodeficiency syndrome (AIDS) care policy since 2003. Detailed research on the actual expenditures and costs for providing care to patients with AIDS is needed for future financial planning of AIDS health care services and possible reform of HIV/AIDS-related policy. The purpose of the current study was to determine the actual expenditures and factors influencing costs for untreated AIDS patients in a rural area of China after initiating highly active antiretroviral therapy (HAART) under the national Free Care Program (China CARES).MethodsA retrospective cohort study was conducted in Yunnan and Shanxi Provinces, where HAART and all medical care are provided free to HIV-positive patients. Health expenditures and costs in the first treatment year were collected from medical records and prescriptions at local hospitals between January and June 2007. Multivariate linear regression was used to determine the factors associated with the actual expenditures in the first antiretroviral (ARV) treatment year.ResultsFive ARV regimens are commonly used in China CARES: zidovudine (AZT) + lamivudine (3TC) + nevirapine (NVP), stavudine (D4T) + 3TC + efavirenz (EFV), D4T + 3TC + NVP, didanosine (DDI) + 3TC + NVP and combivir + EFV. The mean annual expenditure per person for ARV medications was US$2,242 (US$1 = 7 Chinese Yuan (CNY)) among 276 participants. The total costs for treating all adverse drug events (ADEs) and opportunistic infections (OIs) were US$29,703 and US$23,031, respectively. The expenses for treatment of peripheral neuritis and cytomegalovirus (CMV) infections were the highest among those patients with ADEs and OIs, respectively. 
On the basis of multivariate linear regression, CD4 cell counts (100-199 cells/μL versus <100 cells/μL, P = 0.02; and ≥200 cells/μL versus <100 cells/μL, P < 0.004), residence in Mangshi County (P < 0.0001), ADEs (P = 0.04) and OIs (P = 0.02) were significantly associated with total expenditures in the first ARV treatment year.ConclusionsThis is the first study to determine the actual costs of HIV treatment in rural areas of China. Costs for ARV drugs represented the major portion of HIV medical expenditures. Initiating HAART in patients with higher CD4 cell count levels is likely to reduce treatment expenses for ADEs and OIs in patients with AIDS.", "corpus_id": 995717, "title": "Expenditures for the care of HIV-infected patients in rural areas in China's antiretroviral therapy programs" }
{ "abstract": "Abstract The purpose of this study was to analyze the cost and cost-effectiveness of methadone maintenance treatment (MMT) program in Dehong prefecture, Yunnan province, China. The cost-effectiveness analysis used process data retrospectively collected from the MMT clinics in Dehong Prefecture, Yunnan Province, from July 2005 to December 2007, a 30-month period available at the time of the study. Alternative estimates of the number of HIV infections prevented were calculated using incidence rate from cohort studies and retrospective studies. Program costs were collected retrospectively following standard methods using an ingredients methodology. The cost for each participant treated in MMT clinics was about $9.1–16.7 per month and the intervention averted 8.4–87.2 HIV infections with a cost-effectiveness of US$ 2509.3–4609.3 per HIV infection averted. This research demonstrates that MMT is a cost-effective intervention for reducing HIV transmission among injecting drug users, but the coverage of MMT intervention should be matched with the designed volume of MMT clinics to make the best use of resources.", "corpus_id": 7825050, "score": -1, "title": "Economic evaluation of methadone maintenance treatment in HIV/AIDS control among injecting drug users in Dehong, China" }
{ "abstract": "Oligodendrocytes, the myelin-forming cells of the central nervous system (CNS), and astrocytes constitute macroglia. This review deals with the recent progress related to the origin and differentiation of the oligodendrocytes, their relationships to other neural cells, and functional neuroglial interactions under physiological conditions and in demyelinating diseases. One of the problems in studies of the CNS is to find components, i.e., markers, for the identification of the different cells, in intact tissues or cultures. In recent years, specific biochemical, immunological, and molecular markers have been identified. Many components specific to differentiating oligodendrocytes and to myelin are now available to aid their study. Transgenic mice and spontaneous mutants have led to a better understanding of the targets of specific dys- or demyelinating diseases. The best examples are the studies concerning the effects of the mutations affecting the most abundant protein in the central nervous myelin, the proteolipid protein, which lead to dysmyelinating diseases in animals and human (jimpy mutation and Pelizaeus-Merzbacher disease or spastic paraplegia, respectively). Oligodendrocytes, as astrocytes, are able to respond to changes in the cellular and extracellular environment, possibly in relation to a glial network. There is also a remarkable plasticity of the oligodendrocyte lineage, even in the adult with a certain potentiality for myelin repair after experimental demyelination or human diseases.", "corpus_id": 1312870, "title": "Biology of oligodendrocyte and myelin in the mammalian central nervous system." }
{ "abstract": "The protein content of CNS myelin is highly simplified when compared to that of other membranes. The three proteins proteolipid protein (PLP), myelin basic protein (MBP), and 2’,3’-cyclic nucleotide 3’phosphodiesterase (CNPase) account for approximatively 40, 30, and 4%, respectively, by weight of total protein. Within PNS myelin, CNPase exists at only about 10% of the level at which it is found in the CNS (Uyemura et al., 1972). CNPase was the first enzyme to be characterized unequivocally as a component of the myelin membrane; previously, myelin was thought to be enzymatically inert (Adams et al., 1963). The enzyme (EC 3.1.4.37) hydrolyses 2’,3‘-cyclic nucleotides to give 2’-nucleotides. The enzymatic activity was first demonstrated in bovine spleen (Whitfeld et al., 1955) and pancreas (Davis and Allen, 1956). Its relationship with myelin was first demonstrated in 1967 (Kurihara and Tsukada, 1967). The evidence that CNPase is highly enriched in myelin and oligodendrocytes is solid and is based on biochemical studies of isolated myelin and oligodendrocytes (reviewed by Sims and Carnegie, 1978; Takahashi, 198 1) and immunohistochemical studies (e.g., Nishizawa et al., 1981, 1985). The present review emphasizes developments in research on CNPase since 1978. A comprehensive review of work prior to 1978 has been published (Sims and Carnegie, 1978).", "corpus_id": 512229, "title": "Molecular Structure, Localization, and Possible Functions of the Myelin‐Associated Enzyme 2′,3′‐Cyclic Nucleotide 3′‐Phosphodiesterase" }
{ "abstract": "Lewis rats were immunized with partially purified 2′,3′‐cyclic nucleotide 3′‐phosphodiesterase (CNPase) from bovine cerebral white matter and the spleen cells were fused with cell of a mouse myeloma cell line (SP‐2). The production of monoclonal antibody was detected by (1) enzyme‐linked immunoadsorbent assay, (2) immunohistochemical staining of bovine cerebrum, (3) Western blotting analysis, and (4) CNPase binding assay. Monoclonal antibody that specifically binds CNPase molecules was obtained. However, the antibody did not suppress the enzyme activity. Western blotting analysis demonstrated that the monoclonal antibody binds both CNa (Wla) and CNb (Wlb). The monoclonal antibody was identified as being of the IgG2c subclass. Immunohistochemical examination revealed that the myelin sheath in the CNS was heavily stained with the monoclonal antibody in several species (bovine, mouse, rat, and human). In contrast, peripheral nervous system myelin was not stained even in bovine tissue. These results suggest that the monoclonal antibody obtained in the present study specifically recognizes the CNPase molecules in the CNS.", "corpus_id": 7503876, "score": -1, "title": "Production of Monoclonal Antibody to 2′,3′‐Cyclic Nucleotide 3′‐Phosphodiesterase from Bovine Cerebral White Matter" }
{ "abstract": "Lipoprotein lipase has long been known to hydrolyse triglycerides from triglycerides-rich lipoproteins. More recently, it has been shown to promote the binding of lipoproteins to various lipoprotein receptors. Evidence is also presented regarding the possible atherogenic role of lipoprotein lipase. In theory, lipoprotein lipase deficiency should help to clarify this question. However, the rarity of this condition means that it has not been possible to conduct epidemiological studies. An alternative approach is to investigate the correlation of lipoprotein lipase with onset of cardiovascular disease in prospective studies in large population-based cohorts. Complementary with this approach, animal models have been used to explore the atherogenicity of lipoprotein lipase expressed by macrophages.", "corpus_id": 5171000, "title": "Lipoprotein lipase and atherosclerosis" }
{ "abstract": "Background Patients with lipoprotein lipase (LPL) deficiency had been generally thought to be spared accelerated atherosclerosis in spite of a marked elevation of plasma triglyceride levels. However, it has been recently reported that some heterozygous and homozygous LPL‐deficient patients are associated with premature atherosclerosis. In this paper, we report a 55‐year‐old type I hyperlipidaemic patient with a novel missense mutation in the LPL gene.", "corpus_id": 1817053, "title": "Novel LPL mutation (L303F) found in a patient associated with coronary artery disease and severe systemic atherosclerosis" }
{ "abstract": "OBJECTIVE\nTo explore mechanisms for hypertriglyceridemia in diabetic patients with microalbuminuria, we examined an association between heparin-releasable lipoprotein lipase (LPL) and the von Willebrand factor (vWF), based on the hypothesis that LPL bound to endothelium is decreased by generalized endothelial damage.\n\n\nRESEARCH DESIGN AND METHODS\nA total of 37 NIDDM patients with microalbuminuria and 69 patients with normoalbuminuria were studied. Plasma LPL mass in post-heparin plasma and plasma vWF antigen were quantified by sandwich-enzyme immunoassay and enzyme-linked immunosorbent assay, respectively.\n\n\nRESULTS\nThe NIDDM patients with microalbuminuria had higher plasma triglyceride (TG) and lower HDL cholesterol concentrations compared with the patients with normoalbuminuria. Heparin-releasable LPL mass was significantly lower in the microalbuminuric than in the normoalbuminuric subjects. Plasma level of vWF, a marker for endothelial damage, was significantly increased in microalbuminuric subjects compared with their normoalbuminuric counterparts. The LPL mass was inversely correlated with plasma vWF level at a high correlation coefficient value. The LPL mass was inversely related to TG and positively to HDL cholesterol concentrations.\n\n\nCONCLUSIONS\nThese results suggest that widespread endothelial damage occurred in NIDDM patients with microalbuminuria, thereby LPL moiety bound to the endothelium is decreased, which results in an impaired catabolism of TG-rich lipoproteins.", "corpus_id": 24537847, "score": -1, "title": "Decreased release of lipoprotein lipase is associated with vascular endothelial damage in NIDDM patients with microalbuminuria." }
{ "abstract": "OBJECTIVE Recent studies have demonstrated the short term efficacy of leflunomide. This study evaluates the efficacy and safety of leflunomide and sulfasalazine in rheumatoid arthritis over a two year follow up period. METHODS 358 patients with rheumatoid arthritis in a double blind trial were randomly allocated to receive either leflunomide 20 mg/day, placebo, or sulfasalazine 2 g/day. Those completing six months of treatment (n=230) were given the option to continue in 12 (n=168) and 24 (n=146) month double blinded extensions; the placebo group switched to sulfasalazine. This report compares efficacy and safety of leflunomide with sulfasalazine in the 6, 12, and 24 month patient cohorts. RESULTS The efficacy seen at six months was maintained at 12 and 24 months. Twenty four month cohorts on leflunomide showed significant improvement compared with sulfasalazine in doctor (−1.46 v−1.11, p=0.03) and patient (−1.61 v−1.04, p<0.001) global assessments, ACR20% response (82%v 60%, p<0.01), and functional ability (Δmean HAQ −0.65 v −0.36, p=0.0149; ΔHAQ disability index −0.89 v −0.60, p=0.059). Improvement in other variables was comparable for the two drugs, including slowing of disease progression. Improved HAQ scores in 6, 12, and 24 month leflunomide cohorts were seen in both non-responders (24%, 29%, 35%, respectivelyv sulfasalazine 8%, 10%, 27%) and ACR20% responders (leflunomide 63%, 62%, 66% vsulfasalazine 50%, 64%, 44%). Leflunomide is well tolerated at doses of 20 mg. No unexpected adverse events or late toxicity were noted during the two year period. Diarrhoea, nausea, and alopecia were less frequent with continued treatment. CONCLUSION These long term data confirm that leflunomide is an efficacious and safe disease modifying antirheumatic drug.", "corpus_id": 22106053, "title": "Treatment of active rheumatoid arthritis with leflunomide: two year follow up of a double blind, placebo controlled trial versus sulfasalazine" }
{ "abstract": "We have investigated the influence of sulphasalazine, a second line antirheumatic drug, on the radiological progression of erosions in rheumatoid arthritis over a two year period in 41 patients. Hand radiograph scores deteriorated significantly over this period, but in a group of 31 patients in whom one year films were also available this deterioration was limited to the first year. This slowing of radiological deterioration was not related to 'normalisation' of the erythrocyte sedimentation rate (ESR). Compared with a 'control' group of 10 patients who had refused offers of second line therapy, sulphasalazine treated patients showed less deterioration over the two year period, and this difference was more marked than in previous studies of gold or penicillamine. No significant change was seen in large joint radiographs in sulphasalazine treated patients over two years, but this probably represents the poor sensitivity of the method of assessment. No significant correlation was seen between changes in inflammatory indices and slowing of radiological deterioration in erosion score. Thus sulphasalazine appears to slow the progression of radiological disease of the hands over the second year of treatment in a representative sample of patients who continue to receive treatment for two years.", "corpus_id": 620698, "title": "Effect of sulphasalazine on the radiological progression of rheumatoid arthritis." }
{ "abstract": "The outcome of attempts to continue treatment indefinitely with either gold, penicillamine, sulphasalazine, or dapsone was studied in 240 patients with rheumatoid arthritis (RA). The usual reason for discontinuing treatment was the occurrence of an adverse effect. This led to 53% of patients stopping gold, 33% sulphasalazine, 32% penicillamine, and 17% dapsone. The next most frequent reason was that the drug was ineffective, leading to discontinuation in 37% of patients having dapsone, 24% sulphasalazine, 19% penicillamine, and 16% gold. Other reasons for stopping treatment were infrequent. The high discontinuation rate of these drugs over 2 years in part accounts for the conflict of opinion on whether they can alter the course of RA; their efficacy must to a large extent be governed by their acceptability.", "corpus_id": 34358219, "score": -1, "title": "Outcome of attempts to treat rheumatoid arthritis with gold, penicillamine, sulphasalazine, or dapsone." }
{ "abstract": "The catalytic mechanism of DNA polymerases involves multiple steps that precede and follow the transfer of a nucleotide to the 3′-hydroxyl of the growing DNA chain. Here we report a single-molecule approach to monitor the movement of E. coli DNA polymerase I (Klenow fragment) on a DNA template during DNA synthesis with single base-pair resolution. As each nucleotide is incorporated, the single-molecule Förster resonance energy transfer intensity drops in discrete steps to values consistent with single-nucleotide incorporations. Purines and pyrimidines are incorporated with comparable rates. A mismatched primer/template junction exhibits dynamics consistent with the primer moving into the exonuclease domain, which was used to determine the fraction of primer-termini bound to the exonuclease and polymerase sites. Most interestingly, we observe a structural change after the incorporation of a correctly paired nucleotide, consistent with transient movement of the polymerase past the preinsertion site or a conformational change in the polymerase. This may represent a previously unobserved step in the mechanism of DNA synthesis that could be part of the proofreading process.", "corpus_id": 19417769, "title": "Single-molecule measurements of synthesis by DNA polymerase with base-pair resolution" }
{ "abstract": "Group II intron ribozymes fold into their native structure by a unique stepwise process that involves an initial slow compaction followed by fast formation of the native state in a Mg2+-dependent manner. Single-molecule fluorescence reveals three distinct on-pathway conformations in dynamic equilibrium connected by relatively small activation barriers. From a most stable near-native state, the unobserved catalytically active conformer is reached. This most compact conformer occurs only transiently above 20 mM Mg2+ and is stabilized by substrate binding, which together explain the slow cleavage of the ribozyme. Structural dynamics increase with increasing Mg2+ concentrations, enabling the enzyme to reach its active state.", "corpus_id": 1936813, "title": "Single-molecule studies of group II intron ribozymes" }
{ "abstract": "Fifteen patients (51–78 yrs) with mild to moderately severe Alzheimer's dementia and 18 healthy subjects of the same age were examined by clinical rating scales and a battery of neuropsychological tests. Levels of the monoamine metabolites homovanillic acid (HVA), 3‐methoxy‐4‐hydroxyphenyl glycol (MHPG) and 5‐hydroxyindoleacetic acid (5‐HIAA) were determined in the lumbar cerebrospinal fluid (CSF). Correlations between clinical, psychological and biochemical measures were calculated in order to elucidate whether monoaminergic mechanisms are of importance for the maintenance of cognitive abilities in normal and pathological aging. The patients' performance was severely impaired in all neuropsychological tests. The mean levels of monoamine metabolites, however, did not differ between patients and volunteers. The correlations between psychological test scores and CSF metabolite levels were generally low, but mostly negative, associating a poor performance to a high activity of brain monoaminergic neurons. Thus, among the volunteers high 5‐HIAA and MHPG levels correlated with poor performance in the Picture completion and the Trail making tests ‐ measures of visuo‐perceptual and visuo‐motor skills. In the demented patients poor performance in the memory tests was associated with high levels of HVA and 5‐HIAA. The results indicate that monoamine neuron activity is not a primary determinant for cognitive abilities in healthy elderly subjects or in demented patients. The slight negative correlation between cognitive function and metabolite concentrations in the patients may reflect a disturbance in a dopaminergic‐cholinergic balance due to degenerative changes of central cholinergic pathways.", "corpus_id": 13637593, "score": -1, "title": "Neuropsychological test performance and CSF levels of monoamine metabolites in healthy volunteers and patients with Alzheimer's dementia" }
{ "abstract": "In this paper, we consider a (p, q)-generalization of the r-Whitney number sequence of the first kind that reduces to it when p = q = 1. We obtain generalizations of some earlier results for the r-Whitney sequence, including recurrence and generating function formulas. We develop a combinatorial interpretation for our generalized numbers in terms of a pair of statistics on the set of r-permutations in which the elements within cycles of a permutation are assigned colors according to certain rules. This allows one to provide combinatorial proofs of various identities, including orthogonality relations. Finally, we consider the (p, q)-Whitney matrix of the first kind and find various factorizations for it.", "corpus_id": 201916217, "title": "Generalized r-Whitney numbers of the first kind" }
{ "abstract": "We define the (q, r)-Whitney numbers of the first and second kinds in terms of the q-Boson operators, and obtain several fundamental properties such as recurrence formulas, orthogonality and inverse relations, and other interesting identities. As a special case, we obtain a q-analogue of the r-Stirling numbers of the first and second kinds. Finally, we define the (q, r)-Dowling polynomials in terms of sums of (q, r)Whitney numbers of the second kind, and obtain some of their properties.", "corpus_id": 2013227, "title": "On q-Boson Operators and q-Analogues of the r-Whitney and r-Dowling Numbers" }
{ "abstract": "The notion of generalized Bell numbers has appeared in several works, but there is no systematic treatise on this topic. In this paper we fill this gap. We discuss the most important combinatorial, algebraic and analytic properties of these numbers, which in many cases generalize the similar properties of the Bell numbers. Most of these properties seem to be new. It turns out that in a paper of Whitehead these numbers appeared in a very different context. In addition, we introduce the so-called", "corpus_id": 9759235, "score": -1, "title": "The r-Bell numbers" }
{ "abstract": "Recent studies advocated the use of active cycling coupled with functional electrical stimulation to induce neuroplasticity and enhance functional improvements in stroke adult patients. The aim of this work was to evaluate whether the benefits induced by such a treatment are superior to standard physiotherapy. A single-blinded randomized controlled trial has been performed on post-acute elderly stroke patients. Patients underwent FES-augmented cycling training combined with voluntary pedaling or standard physiotherapy. The intervention consisted of fifteen 30-minutes sessions carried out within 3 weeks. Patients were evaluated before and after training, through functional scales, gait analysis and a voluntary pedaling test. Results were compared with an age-matched healthy group. Sixteen patients completed the training. After treatment, a general improvement of all clinical scales was obtained for both groups. Only the mechanical efficiency highlighted a group effect in favor of the experimental group. Although a group effect was not found for any other cycling or gait parameters, the experimental group showed a higher percentage of change with respect to the control group (e.g. the gait velocity was improved of 35.4% and 25.4% respectively, and its variation over time was higher than minimal clinical difference for the experimental group only). This trend suggests that differences in terms of motor recovery between the two groups may be achieved increasing the training dose. In conclusion, this study, although preliminary, showed that FES-augmented active cycling training seems to be effective in improving cycling and walking ability in post-acute elderly stroke patients. A higher sample size is required to confirm results.", "corpus_id": 18309627, "title": "Can FES-Augmented Active Cycling Training Improve Locomotion in Post-Acute Elderly Stroke Patients?" }
{ "abstract": "Gait disorders drastically affect the quality of life of stroke survivors, making post-stroke rehabilitation an important research focus. Noninvasive brain stimulation has potential in facilitating neuroplasticity and improving post-stroke gait impairment. However, a large inter-individual variability in the response to noninvasive brain stimulation interventions has been increasingly recognized. We first review the neurophysiology of human gait and post-stroke neuroplasticity for gait recovery, and then discuss how noninvasive brain stimulation techniques could be utilized to enhance gait recovery. While post-stroke neuroplasticity for gait recovery is characterized by use-dependent plasticity, it evolves over time, is idiosyncratic, and may develop maladaptive elements. Furthermore, noninvasive brain stimulation has limited reach capability and is facilitative-only in nature. Therefore, we recommend that noninvasive brain stimulation be used adjunctively with rehabilitation training and other concurrent neuroplasticity facilitation techniques. Additionally, when noninvasive brain stimulation is applied for the rehabilitation of gait impairment in stroke survivors, stimulation montages should be customized according to the specific types of neuroplasticity found in each individual. This could be done using multiple mapping techniques.", "corpus_id": 27317924, "title": "Neuroplasticity in post-stroke gait recovery and noninvasive brain stimulation" }
{ "abstract": "Transcranial direct current stimulation (tDCS) is a neuromodulatory noninvasive brain stimulation tool with potential to increase or reduce regional and remote cortical excitability. Numerous studies have shown the ability of this technique to induce neuroplasticity and to modulate cognition and behavior in adults. Clinical studies have also demonstrated the ability of tDCS to induce therapeutic effects in several central nervous system disorders. However, knowledge about its ability to modulate brain functions in children or induce clinical improvements in pediatrics is limited. The objective of this review is to describe relevant data of some recent studies that may help to understand the potential of this technique in children with specific regard to effective and safe treatment of different developmental disorders in pediatrics. Overall, the results show that standard protocols of tDCS are well tolerated by children and have promising clinical effects. Nevertheless, treatment effects seem to be partially heterogeneous, and a case of a seizure in a child with previous history of infantile spasms and diagnosed epilepsy treated with tDCS for spasticity was reported. Further research is needed to determine safety criteria for tDCS use in children and to elucidate the particular neurophysiological changes induced by this neuromodulatory technique when it is applied in the developing brain.", "corpus_id": 207472080, "score": -1, "title": "Applications of transcranial direct current stimulation in children and pediatrics" }
{ "abstract": "Subdivision of cloaca into urogenital and anorectal passages has remained controversial because of disagreements about the identity and role of the septum developing between both passages. This study aimed to clarify the development of the cloaca using a quantitative 3D morphological approach in human embryos of 4–10 post‐fertilisation weeks. Embryos were visualised with Amira 3D‐reconstruction and Cinema 4D‐remodelling software. Distances between landmarks were computed with Amira3D software. Our main finding was a pronounced difference in growth between rapidly expanding central and ventral parts, and slowly or non‐growing cranial and dorsal parts. The entrance of the Wolffian duct into the cloaca proved a stable landmark that remained linked to the position of vertebra S3. Suppressed growth in the cranial cloaca resulted in an apparent craniodorsal migration of the entrance of the Wolffian duct, while suppressed growth in the dorsal cloaca changed the entrance of the hindgut from cranial to dorsal on the cloaca. Transformation of this ‘end‐to‐end’ into an ‘end‐to‐side’ junction produced temporary ‘lateral (Rathke's) folds’. The persistent difference in dorsoventral growth straightened the embryonic caudal body axis and concomitantly extended the frontally oriented ‘urorectal (Tourneux's) septum’ caudally between the ventral urogenital and dorsal anorectal parts of the cloaca. The dorsoventral growth difference also divided the cloacal membrane into a well‐developed ventral urethral plate and a thin dorsal cloacal membrane proper, which ruptured at 6.5 weeks. The expansion of the pericloacal mesenchyme followed the dorsoventral growth difference and produced the genital tubercle. Dysregulation of dorsal cloacal development is probably an important cause of anorectal malformations: too little regressive development may result in anorectal agenesis, and too much regression in stenosis or atresia of the remaining part of the dorsal cloaca.", "corpus_id": 52933512, "title": "The development of the cloaca in the human embryo" }
{ "abstract": "The division of the embryonic cloaca is the most essential event for the formation of digestive and urinary tracts. The defective development of the cloaca results in anorectal malformations (ARMs; 2–5 per 10,000 live births). However, the developmental and pathogenic mechanisms of ARMs are unclear. In the current study, we visualized the epithelia in the developing cloaca and nephric ducts (NDs). Systemic stereoscopic analyses revealed that the ND-cloaca connection sites shifted from the lateral-middle to dorsal-anterior part of the cloaca during cloacal division from E10.5 to E11.5 in mouse embryos. Genetic cell labeling analyses revealed that the cells in the ventral cloacal epithelium in the early stages rarely contributed to the dorsal part. Moreover, we revealed the possible morphogenetic movement of endodermal cells within the anterior part of the urogenital sinus and hindgut. These results provide the basis for understanding both cloacal development and the ARM pathogenesis.", "corpus_id": 279662, "title": "Systematic stereoscopic analyses for cloacal development: The origin of anorectal malformations" }
{ "abstract": "We report a water-based optical clearing agent, SeeDB, which clears fixed brain samples in a few days without quenching many types of fluorescent dyes, including fluorescent proteins and lipophilic neuronal tracers. Our method maintained a constant sample volume during the clearing procedure, an important factor for keeping cellular morphology intact, and facilitated the quantitative reconstruction of neuronal circuits. Combined with two-photon microscopy and an optimized objective lens, we were able to image the mouse brain from the dorsal to the ventral side. We used SeeDB to describe the near-complete wiring diagram of sister mitral cells associated with a common glomerulus in the mouse olfactory bulb. We found the diversity of dendrite wiring patterns among sister mitral cells, and our results provide an anatomical basis for non-redundant odor coding by these neurons. Our simple and efficient method is useful for imaging intact morphological architecture at large scales in both the adult and developing brains.", "corpus_id": 1814349, "score": -1, "title": "SeeDB: a simple and morphology-preserving optical clearing agent for neuronal circuit reconstruction" }
{ "abstract": "From 1983 to 1991, iron caused over 30% of the deaths from accidental ingestion of drug products by children. An evidence-based expert consensus process was used to create this guideline. Relevant articles were abstracted by a trained physician researcher. The first draft of the guideline was created by the primary author. The entire panel discussed and refined the guideline before its distribution to secondary reviewers for comment. The panel then made changes in response to comments received. The objective of this guideline is to assist poison center personnel in the appropriate out-of-hospital triage and initial management of patients with suspected ingestions of iron by 1) describing the manner in which an ingestion of iron might be managed, 2) identifying the key decision elements in managing cases of iron ingestion, 3) providing clear and practical recommendations that reflect the current state of knowledge, and 4) identifying needs for research. This guideline applies to ingestion of iron alone and is based on an assessment of current scientific and clinical information. The expert consensus panel recognizes that specific patient care decisions may be at variance with this guideline and are the prerogative of the patient and the health professionals providing care, considering all of the circumstances involved. The panel's recommendations follow; the grade of recommendation is in parentheses. 1) Patients with stated or suspected self-harm or who are victims of malicious administration of an iron product should be referred to an acute care medical facility immediately. This activity should be guided by local poison center procedures. In general, this should occur regardless of the amount ingested (Grade D). 2) Pediatric or adult patients with a known ingestion of 40 mg/kg or greater of elemental iron in the form of adult ferrous salt formulations or who have severe or persistent symptoms related to iron ingestion should be referred to a healthcare facility for medical evaluation. Patients who have ingested less than 40 mg/kg of elemental iron and who are having mild symptoms can be observed at home. Mild symptoms such as vomiting and diarrhea occur frequently. These mild symptoms should not necessarily prompt referral to a healthcare facility. Patients with more serious symptoms, such as persistent vomiting and diarrhea, alterations in level of consciousness, hematemesis, and bloody diarrhea require referral. The same dose threshold should be used for pregnant women; however, when calculating the mg/kg dose ingested, the pre-pregnancy weight of the woman should be used (Grade C). 3) Patients with ingestions of children's chewable vitamins plus iron should be observed at home with appropriate follow-up. The presence of diarrhea should not be the sole indicator for referral as these products are often sweetened with sorbitol. Children may need referral for the management of dehydration if vomiting or diarrhea is severe or prolonged (Grade C). 4) Patients with unintentional ingestions of carbonyl iron or polysaccharide-iron complex formulations should be observed at home with appropriate follow-up (Grade C). 5) Ipecac syrup, activated charcoal, cathartics, or oral complexing agents, such as bicarbonate or phosphate solutions, should not be used in the out-of-hospital management of iron ingestions (Grade C). 6) Asymptomatic patients are unlikely to develop symptoms if the interval between ingestion and the call to the poison center is greater than 6 hours. These patients should not need referral or prolonged observation. Depending on the specific circumstances, follow-up calls might be indicated (Grade C).", "corpus_id": 27645913, "title": "Iron Ingestion: an Evidence-Based Consensus Guideline for Out-of-Hospital Management" }
{ "abstract": "We retrospectively studied the records of all patients with poisoning due to excessive iron ingestion admitted to a children's hospital during a 7 1/2-year period. There were 80 such children, aged between 0.6 and 3.93 years. Almost half were severely poisoned. Most children took iron tablets intended for their mothers or aunts as a supplement during pregnancy. These were packed in easy-to-open plastic packets. Estimates of the number of tablets taken were unreliable. All 29 children who received parenteral desferrioxamine (Desferal; Ciba) on presentation survived, whereas 3 of the 51 children in whom desferrioxamine therapy was delayed died. Late morbidity from brain damage and intestinal strictures was not assessed. Many cases of iron poisoning in childhood could be prevented by strip-packaging of iron tablets. Parenteral desferrioxamine should be given without delay whenever a child is suspected of having swallowed excessive iron tablets.", "corpus_id": 126947, "title": "Iron poisoning--a preventable hazard of childhood." }
{ "abstract": "The accidental ingestion of iron-containing preparations is relatively common in childhood, and intentional overdosage with iron is occasionally seen in adults. Though rarely fatal, the consequences of a substantial iron ingestion can result in profound mental retardation or death. The availability of deferoxamine mesylate, a specific and tenacious chelator of iron, and the necessity for its early administration demand that the physician be aware of a rational approach to the therapy for iron poisoning. Owing to our current understanding of the pathophysiology of iron poisoning, we believe that the simultaneous oral and continuous intravenous (IV) administration of deferoxamine offers the most rational specific therapy for this condition. In this review we shall outline the clinical description, pathophysiology, and therapeutic regimens for acute iron intoxication. We have used this information to derive an approach we believe to be reasonable. CLINICAL DESCRIPTION The oral lethal dose (LD) of elemental iron is generally", "corpus_id": 461127, "score": -1, "title": "Acute iron poisoning. A review." }
{ "abstract": "There is a pressing need for long-term neuroprotective and neuroregenerative therapies to promote full function recovery of injuries in the human nervous system resulting from trauma, stroke or degenerative diseases. Although cell-based therapies are promising in supporting repair and regeneration, direct introduction to the injury site is plagued by problems such as low transplanted cell survival rate, limited graft integration, immunorejection, and tumor formation. Neural tissue engineering offers an integrative and multifaceted approach to tackle these complex neurological disorders. Synergistic therapeutic effects can be obtained from combining customized biomaterial scaffolds with cell-based therapies. Current scaffold-facilitated cell transplantation strategies aim to achieve structural and functional rescue via offering a three-dimensional permissive and instructive environment for sustainable neuroactive factor production for prolonged periods and/or cell replacement at the target site. In this review, we intend to highlight important considerations in biomaterial selection and to review major biodegradable or non-biodegradable scaffolds used for cell transplantation to the central and peripheral nervous system in preclinical and clinical trials. Expanded knowledge in biomaterial properties and their prolonged interaction with transplanted and host cells have greatly expanded the possibilities for designing suitable carrier systems and the potential of cell therapies in the nervous system.", "corpus_id": 1290144, "title": "Carriers in Cell-Based Therapies for Neurological Disorders" }
{ "abstract": "Large nerve defects require nerve grafts to allow regeneration. To avoid donor nerve problems the concept of tissue engineering was introduced into nerve surgery. However, non-neuronal grafts support axonal regeneration only to a certain extent. They lack viable Schwann cells which provide neurotrophic and neurotopic factors and guide the sprouting nerve. This experimental study used the rat sciatic nerve to bridge 2 cm nerve gaps with collagen (type I/III) tubes. The tubes were different in their physical structure (hollow versus inner collagen skeleton, different inner diameters). To improve regeneration Schwann cells were implanted. After 8 weeks the regeneration process was monitored clinically, histologically and morphometrically. Autologous nerve grafts and collagen tubes without Schwann cells served as control. In all parameters autologous nerve grafts showed best regeneration. Nerve regeneration in a noteworthy quality was also seen with hollow collagen tubes and tubes with reduced lumen, both filled with Schwann cells. The inner skeleton, however, impaired nerve regeneration independent of whether Schwann cells were added or not. This indicates that not only viable Schwann cells are an imperative prerequisite but also structural parameters determine peripheral nerve regeneration.", "corpus_id": 572112, "title": "Structural parameters of collagen nerve grafts influence peripheral nerve regeneration." }
{ "abstract": "There is a pressing need for long-term neuroprotective and neuroregenerative therapies to promote full function recovery of injuries in the human nervous system resulting from trauma, stroke or degenerative diseases. Although cell-based therapies are promising in supporting repair and regeneration, direct introduction to the injury site is plagued by problems such as low transplanted cell survival rate, limited graft integration, immunorejection, and tumor formation. Neural tissue engineering offers an integrative and multifaceted approach to tackle these complex neurological disorders. Synergistic therapeutic effects can be obtained from combining customized biomaterial scaffolds with cell-based therapies. Current scaffold-facilitated cell transplantation strategies aim to achieve structural and functional rescue via offering a three-dimensional permissive and instructive environment for sustainable neuroactive factor production for prolonged periods and/or cell replacement at the target site. In this review, we intend to highlight important considerations in biomaterial selection and to review major biodegradable or non-biodegradable scaffolds used for cell transplantation to the central and peripheral nervous system in preclinical and clinical trials. Expanded knowledge in biomaterial properties and their prolonged interaction with transplanted and host cells have greatly expanded the possibilities for designing suitable carrier systems and the potential of cell therapies in the nervous system.", "corpus_id": 1290144, "score": -1, "title": "Carriers in Cell-Based Therapies for Neurological Disorders" }
{ "abstract": "The increasing prevalence of data-intensive applications has made large-scale data transfers more important in datacenter networks. Excessive traffic demand in oversubscribed networks has caused serious performance bottlenecks. Data replicas, with the advantage of source diversity, can potentially improve the transmission performance, but current work focuses heavily on best replica selection rather than multi-source transmission. In this paper, we present JMS, a novel traffic management system that optimizes bulk multi-source transfers in software-defined datacenter networks. With a global network view and consistent data access, JMS conveys data in parallel from multiple distributed sources and dynamically adjusts the flow volumes to maximize network utilization. The joint bandwidth allocation and flow assignment optimization problem poses a major challenge with respect to nonlinearity and multiple objectives. To cope with this, we design a fair allocation algorithm that derives a novel transformation with simple equivalent canonical linear programming to efficiently achieve global optimality. Simulation results demonstrate that JMS outperforms other transmission approaches with substantial gains, where JMS improves the network throughput by up to 52% and reduces the transfer completion time by up to 44%.", "corpus_id": 49891354, "title": "JMS: Joint Bandwidth Allocation and Flow Assignment for Transfers with Multiple Sources" }
{ "abstract": "In this paper, we introduce Mayflower, a new distributed filesystem that is co-designed from the ground up to work together with a network control plane. In addition to the standard distributed filesystem components, Mayflower has a flow monitor and manager running alongside a software-defined networking controller. This tight coupling with the network controller enables Mayflower to make intelligent replica selection and flow scheduling decisions based on both filesystem and network information. It further enables Mayflower to perform global optimizations that are unavailable to conventional distributed filesystems and network control planes. Our evaluation results from both simulations and a prototype implementation show that Mayflower reduces average read completion time by more than 25% compared to current state-of-the-art distributed filesystems with an independent network flow scheduler, and more than 75% compared to HDFS with ECMP.", "corpus_id": 1040811, "title": "Mayflower: Improving Distributed Filesystem Performance Through SDN/Filesystem Co-Design" }
{ "abstract": "Abstract A girl with hepatomegaly had increased glycogen and deactivated phosphorylase in liver and muscle. Her muscle homogenate did not activate either its own phosphorylase or rabbit muscle phosphorylase b except at 10 to 20 % of normal rate under conditions where phosphorylase kinase is active without prior phosphorylation by cyclic 3′5′-AMP dependent kinase. The latter enzyme's activity was restored to the girl's muscle homogenate by mouse muscle lacking phosphorylase kinase activity. We conclude that the patient's muscle had (1) no detectable activity of cyclic 3'5'-AMP dependent kinase and (2) reduced activity of phosphorylase kinase. We speculate that (1) might lead to (2) if phosphorylase kinase was less stable in its non-phosphorylated than in its phosphorylated form.", "corpus_id": 7619434, "score": -1, "title": "Loss of cyclic 3'5'-AMP dependent kinase and reduction of phosphorylase kinase in skeletal muscle of a girl with deactivated phosphorylase and glycogenosis of liver and muscle." }
{ "abstract": "Aneuploidy and polyploidy are commonly observed in transformed cells. These states arise from failures during mitotic chromosome segregation, some of which can be traced to defects in the function or duplication of the centrosome. The centrosome is the organizing center for the mitotic spindle, and the equivalent organelle in the budding yeast, Saccharomyces cerevisiae, is the spindle pole body. We review how defects in spindle pole body duplication or function lead to genetic instability in yeast. There are several well documented instances of genetic instability in yeast that can be traced to the spindle pole body, all of which serve as models for genetic instability in transformed cells.", "corpus_id": 221530538, "title": "Mechanisms of genetic instability revealed by analysis of yeast spindle pole body duplication" }
{ "abstract": "We describe the phenotypes caused by a cold-sensitive lethal mutation (ndc1-1) that defines the NDC1 gene of yeast. Incubation of ndc1-1 at a nonpermissive temperature causes failure of chromosome separation in mitosis but does not block the cell cycle. This defect results in an asymmetric cell division in which one daughter cell doubles in ploidy and the other inherits no chromosomes. The spindle poles are properly segregated to the two daughter cells. The primary visible defect is that the chromosomes remain associated with only one pole, and are thus delivered to one daughter cell. Meiosis II, but not meiosis I, is sensitive to the ndc1-1 defect, suggesting that NDC1 is required for some feature common to mitosis and meiosis II. ndc1-1 appears to define a new class of cell cycle gene required for the attachment of chromosomes to the spindle pole.", "corpus_id": 455576, "title": "A gene required for the separation of chromosomes on the spindle apparatus in yeast" }
{ "abstract": "Electromagnetic waves are very strong waves which are used for transmission of signals from one place to other place. The main concern for the transmission is the selection of waveguide and its physical structure, material and dimensions etc. Generally Rectangular and Circular waveguide are used for transmission of EM waves. In rectangular waveguide attenuation losses, return loss and insertion loss are present due to a small corner at the ends. Due to the presence of such losses, the transmission was not as expected and reflection may occur. Attenuation losses, Return loss and Insertion loss are less in dielectric circular waveguide compares Rectangular and Circular Waveguide. In this paper dielectric circular waveguide plays the important role in a waveguide to obtaining the desired", "corpus_id": 209504387, "score": -1, "title": "IJSRD-International Journal for Scientific Research & Development| Vol. 2, Issue 11, 2015 | ISSN (online): 2321-0613" }
{ "abstract": "Despite substantial improvements in survival from childhood cancer during the last decades, there are indications that survival rates for several cancer types are no longer improving. Moreover, evidence accumulates suggesting that socioeconomic and sociodemographic factors may have an impact on survival also in high-income countries. The aim of this review is to summarize the findings from studies on social factors and survival in childhood cancer. Several types of cancer and social factors are included in order to shed light on potential mechanisms and identify particularly affected groups. A literature search conducted in PubMed identified 333 articles published from December 2012 until June 2018, of which 24 fulfilled the inclusion criteria. The findings are diverse; some studies found no associations but several indicated a social gradient with higher mortality among children from families of lower socioeconomic status (SES). There were no clear suggestions of particularly vulnerable subgroups, but hematological malignancies were most commonly investigated. A wide range of social factors have been examined and seem to be of different importance and varying between studies. However, potential underlying mechanisms linking a specific social factor to childhood cancer survival was seldom described. This review provides some support for a relationship between lower parental SES and worse survival after childhood cancer, which is a finding that needs further attention. Studies investigating predefined hypotheses involving specific social factors within homogenous cancer types are lacking and would increase the understanding of mechanisms involved, and allow targeted interventions to reduce health inequalities.", "corpus_id": 53111396, "title": "Survival After Childhood Cancer–Social Inequalities in High-Income Countries" }
{ "abstract": "Deaths during induction chemotherapy for pediatric acute lymphoblastic leukemia (ALL) account for one‐tenth of ALL‐associated mortality and half of ALL treatment‐related mortality. We sought to ascertain patient‐ and hospital‐level factors associated with induction mortality.", "corpus_id": 1122823, "title": "Patient and hospital factors associated with induction mortality in acute lymphoblastic leukemia" }
{ "abstract": "The Affordable Care Act of 2010 launch of Medicare Value-Based Purchasing has become the platform for payment reform. It is a mechanism by which buyers of health care services hold providers accountable for high-quality and cost-effective care. The objective of the study was to examine the relationship between quality of hospital care and hospital competition using the quality-quantity behavioral model of hospital behavior. The quality-quantity behavioral model of hospital behavior was used as the conceptual framework for this study. Data from the American Hospital Association database, the Hospital Compare database, and the Area Health Resources Files database were used. Multivariate regression analysis was used to examine the effect of hospital competition on patient mortality. Hospital market competition was significantly and negatively related to the 3 mortality rates. Consistent with the literature, hospitals located in more competitive markets had lower mortality rates for patients with acute myocardial infarction, heart failure, and pneumonia. The results suggest that hospitals may be more readily to compete on quality of care and patient outcomes. The findings are important because policies that seek to control and negatively influence a competitive hospital environment, such as Certificate of Need legislation, may negatively affect patient mortality rates. Therefore, policymakers should encourage the development of policies that facilitate a more competitive and transparent health care marketplace to potentially and significantly improve patient mortality.", "corpus_id": 35794264, "score": -1, "title": "The Influence of Hospital Market Competition on Patient Mortality and Total Performance Score" }
{ "abstract": "Background: Evidence on beneficial associations of green space with cognitive function in older adults is very scarce and mainly limited to cross-sectional studies. Objectives: We aimed to investigate the association between long-term residential surrounding greenness and cognitive decline. Methods: This longitudinal study was based on three waves of data from the Whitehall II cohort, providing a 10-y follow-up (1997–1999 to 2007–2009) of 6,506 participants (45–68 y old) from the United Kingdom. Residential surrounding greenness was obtained across buffers of 500 and 1,000m around the participants’ residential addresses at each follow-up using satellite images on greenness (Normalized Difference Vegetation Index; NDVI) from a summer month in every follow-up period. Cognitive tests assessed reasoning, short-term memory, and verbal fluency. The cognitive scores were standardized and summarized in a global cognition z-score. To quantify the impact of greenness on repeated measurements of cognition, linear mixed effect models were developed that included an interaction between age and the indicator of greenness, and controlled for covariates including individual and neighborhood indicators of socioeconomic status (SES). Results: In a fully adjusted model, an interquartile range (IQR) increase in NDVI was associated with a difference in the global cognition z-score of 0.020 [95% confidence interval (CI): 0.003, 0.037; p=0.02] in the 500-m buffer and of 0.021 (95% CI: 0.003, 0.039; p=0.02) in the 1,000-m buffer over 10 y. The associations with cognitive decline over the study period were stronger among women than among men. Conclusions: Higher residential surrounding greenness was associated with slower cognitive decline over a 10-y follow-up period in the Whitehall II cohort of civil servants. 
https://doi.org/10.1289/EHP2875", "corpus_id": 51701972, "title": "Residential Surrounding Greenness and Cognitive Decline: A 10-Year Follow-up of the Whitehall II Cohort" }
{ "abstract": "Objective: The inverse association between socioeconomic status (SES) and cardiovascular disease (CVD) risk is well documented. Aortic stiffness assessed by aortic pulse wave velocity (PWV) is a strong predictor of CVD events. However, no previous study has examined the effect of SES on arterial stiffening over time. The present study examines this association, using several measures of SES, and attained education level in a large ageing cohort of British men and women. Methods: Participants were drawn from the Whitehall II study. The sample was composed of 3836 men and 1406 women who attended the 2008–2009 clinical examination (mean age = 65.5 years). Aortic PWV was measured in 2008–2009 and in 2012–2013 by applanation tonometry. A total of 3484 participants provided PWV measurements on both occasions. The mean difference in 5-year PWV change was examined according to household income, education, employment grade, and father's social class, using linear mixed models. Results: PWV increase [mean: confidence interval (m/s)] over 5 years was higher among participants with lower employment grade (0.38: 0.11–0.65), household income (0.58, 95%: 0.32–0.85), and education (0.30: 0.01, 0.58), after adjusting for sociodemographic variables, BMI, alcohol consumption, smoking, and other cardiovascular risk factors, namely SBP, mean arterial pressure, heart rate, cholesterol, diabetes, and antihypertensive use. Conclusion: The present study supports the presence of robust socioeconomic disparities in aortic stiffness progression. Our findings suggest that arterial aging could be an important pathophysiological pathway explaining the impact of lower SES on CVD risk.", "corpus_id": 556526, "title": "Socioeconomic status, education, and aortic stiffness progression over 5 years: the Whitehall II prospective cohort study" }
{ "abstract": "Although the Gross Domestic Product of the United States has been steadily rising since the 1950s, the gap between rich and poor is increasing (see Figure 1). John Kenneth Galbraith explained the importance of inequality when he stated, “People are poverty-stricken when their income, even if adequate for survival, falls markedly behind that of the community. Then they cannot have what the larger community regards as the minimum necessary for decency” (Hernadez 1999 p.36). Income inequality has increased over the past thirty years. However, the 1980s marked a disturbing trend in inequality. In the 1970s, inequality existed because the wealth of the upper classes was increasing at a faster rate than the wealth of the poor; in the 1980s, the rich were becoming richer while the poor were becoming poorer (Levy and Murname 1992).", "corpus_id": 43260383, "score": -1, "title": "All Men Created Unequal: Trends and Factors of Inequality in the United States" }
{ "abstract": "Memory leaks are tedious to detect and require significant debugging effort to be reproduced and localized. In particular, many of such bugs escape classical testing processes used in software development. One of the reasons is that unit and integration tests run too short for leaks to manifest via memory bloat or degraded performance. Moreover, many of such defects are environment-sensitive and not triggered by a test suite. Consequently, leaks are frequently discovered in the production scenario, causing elevated costs. In this paper we propose an approach for automated diagnosis of memory leaks during the development phase. Our technique is based on regression testing and exploits existing test suites. The key idea is to compare object (de-)allocation statistics (collected during unit/integration test executions) between a previous and the current software version. By grouping these statistics according to object creation sites we can detect anomalies and pinpoint the potential root causes of memory leaks. Such diagnosis can be completed before a visible memory bloat occurs, and in time proportional to the execution of test suite. We evaluate our approach using real leaks found in 7 Java applications. Results show that our approach has sufficient detection accuracy and is effective in isolating the leaky allocation site: true defect locations rank relatively high in the lists of suspicious code locations if the tests trigger the leak pattern. Our prototypical system imposes an acceptable instrumentation and execution overhead for practical memory leak detection even in large software projects.", "corpus_id": 12836111, "title": "Automated memory leak diagnosis by regression testing" }
{ "abstract": "Memory-related software defects manifest after a long incubation time and are usually discovered in a production scenario. As a consequence, this frequently encountered class of so-called software aging problems incur severe follow-up costs, including performance and reliability degradation, need for workarounds (usually controlled restarts) and effort for localizing the causes. While many excellent tools for identifying memory leaks exist, they are inappropriate for automated leak detection or isolation as they require developer involvement or slow down execution considerably. In this work we propose a lightweight approach which allows for automated leak detection during the standardized unit or integration tests. The core idea is to compare at the byte-code level the memory allocation behavior of related development versions of the same software. We evaluate our approach by injecting memory leaks into the YARN component of the popular Hadoop framework and comparing the accuracy of detection and isolation in various scenarios. The results show that the approach can detect and isolate such defects with high precision, even if multiple leaks are injected at once.", "corpus_id": 4852, "title": "Detection and Root Cause Analysis of Memory-Related Software Aging Defects by Automated Tests" }
{ "abstract": "This paper describes httperf, a tool for measuring web server performance. It provides a flexible facility for generating various HTTP workloads and for measuring server performance. The focus of httperf is not on implementing one particular benchmark but on providing a robust, high-performance tool that facilitates the construction of both micro- and macro-level benchmarks. The three distinguishing characteristics of httperf are its robustness, which includes the ability to generate and sustain server overload, support for the HTTP/1.1 protocol, and its extensibility to new workload generators and performance measurements. In addition to reporting on the design and implementation of httperf this paper also discusses some of the experiences and insights gained while realizing this tool.", "corpus_id": 207249768, "score": -1, "title": "httperf—a tool for measuring web server performance" }
{ "abstract": "Cortical neurons in vivo fire quite irregularly. Previous studies about the origin of such irregular neural dynamics have given rise to two major models: a balanced excitation and inhibition model, and a model of highly synchronized synaptic inputs. To elucidate the network mechanisms underlying synchronized synaptic inputs and account for irregular neural dynamics, we investigate a spatially extended, conductance-based spiking neural network model. We show that propagating wave patterns with complex dynamics emerge from the network model. These waves sweep past neurons, to which they provide highly synchronized synaptic inputs. On the other hand, these patterns only emerge from the network with balanced excitation and inhibition; our model therefore reconciles the two major models of irregular neural dynamics. We further demonstrate that the collective dynamics of propagating wave patterns provides a mechanistic explanation for a range of irregular neural dynamics, including the variability of spike timing, slow firing rate fluctuations, and correlated membrane potential fluctuations. In addition, in our model, the distributions of synaptic conductance and membrane potential are non-Gaussian, consistent with recent experimental data obtained using whole-cell recordings. Our work therefore relates the propagating waves that have been widely observed in the brain to irregular neural dynamics. These results demonstrate that neural firing activity, although appearing highly disordered at the single-neuron level, can form dynamical coherent structures, such as propagating waves at the population level.", "corpus_id": 24534363, "title": "Propagating Waves Can Explain Irregular Neural Dynamics" }
{ "abstract": "The dynamics of subthreshold membrane potential provide insight into the organization of activity in neural circuits. In many brain areas, membrane potential is bistable, transiting between a relatively hyperpolarized down state and a depolarized up state. These up and down states, which have been proposed to play a number of computational roles, have mainly been studied in anesthetized and in vitro preparations. Here, we have used intracellular recordings to characterize the dynamics of membrane potential in the auditory cortex of awake rats. We find that long up states are rare in the awake auditory cortex, with only 0.4% of up states >500 ms. Most neurons displayed only brief up states (bumps) and spent on average ∼1% of recording time in up states >500 ms. We suggest that the near absence of long up states in awake auditory cortex may reflect an adaptation to the rapid processing of auditory stimuli.", "corpus_id": 1294848, "title": "Up states are rare in awake auditory cortex." }
{ "abstract": "Recent experimental studies show cortical circuit responses to external stimuli display varied dynamical properties. These include stimulus strength-dependent population response patterns, a shift from synchronous to asynchronous states and a decline in neural variability. To elucidate the mechanisms underlying these response properties and explore how they are mechanistically related, we develop a neural circuit model that incorporates two essential features widely observed in the cerebral cortex. The first feature is a balance between excitatory and inhibitory inputs to individual neurons; the second feature is distance-dependent connectivity. We show that applying a weak external stimulus to the model evokes a wave pattern propagating along lateral connections, but a strong external stimulus triggers a localized pattern; these stimulus strength-dependent population response patterns are quantitatively comparable with those measured in experimental studies. We identify network mechanisms underlying this population response, and demonstrate that the dynamics of population-level response patterns can explain a range of prominent features in neural responses, including changes to the dynamics of neurons' membrane potentials and synaptic inputs that characterize the shift of cortical states, and the stimulus-evoked decline in neuron response variability. Our study provides a unified population activity pattern-based view of diverse cortical response properties, thus shedding new insights into cortical processing.", "corpus_id": 4998549, "score": -1, "title": "Dynamical patterns underlying response properties of cortical circuits" }
{ "abstract": "Background The objective of this study was to compare indinavir peak plasma (Cmax) values after administration of indinavir/ritonavir 800/100 mg on an empty stomach or with food. High indinavir Cmax values have been associated with indinavir-related nephrotoxicity. Methods This was an open-label, randomized, two-treatment, two-period, cross-over pharmacokinetic study performed at steady state. HIV-infected patients who had been using indinavir/ritonavir 800/100 mg twice daily for at least 4 weeks were randomized to take this combination with a light breakfast (two filled rolls and 130 ml of fluid) on a first study day, and without food on a second day, or in the reverse order. The pharmacokinetics of indinavir and ritonavir were assessed after plasma and urine sampling during 12 h. Results Data for nine patients were evaluated. Administration of indinavir/ritonavir 800/100 mg on an empty stomach resulted in a higher indinavir Cmax [geometric mean (GM) ratio – fasting/fed and 95% confidence interval (CI): 1.28 (1.08–1.52), P=0.01] and a trend to a shorter indinavir tmax (P=0.07) compared to administration with food. The mode of administration of indinavir/ritonavir did not affect plasma indinavir Cmin and AUC values, parameters that have been associated with the antiviral efficacy of indinavir, nor the urinary excretion of indinavir. Conclusions Administration of indinavir/ritonavir 800/100 mg on an empty stomach results in a higher indinavir Cmax compared to ingestion with a light meal. Stated the other way round, intake with a light meal reduces indinavir Cmax, which probably reflects a food-induced delay in the absorption of indinavir. 
It is recommended to administer indinavir/ritonavir 800/100 mg with food, as a possible means to prevent indinavir-related nephrotoxicity in patients who start or continue with this regimen.", "corpus_id": 13559275, "title": "Administration of Indinavir and Low-Dose Ritonavir (800/100 Mg Twice Daily) with Food Reduces Nephrotoxic Peak Plasma Levels of Indinavir" }
{ "abstract": "Objective: To determine the probable site of the nephron and the plasma indinavir (IDV) concentration at which intrarenal IDV crystallization occurs. Design: We performed in vitro crystallization experiments in IDV solutions simulating conditions found in the nephron. Methods: To determine intrarenal IDV concentrations at which conditions in the nephron allow crystallization, several concentrations of IDV basic solutions (0‐800 mM) were titrated from pH 4.0 to higher pH values until crystals formed within 1 minute. Based on the combination of pH and ionic strength at which crystals formed, we determined the site of the nephron at which this combination was first attained. Based on the capacity for concentration at that site, we were able to measure the corresponding plasma IDV concentration. Results: Under conditions normally found at the proximal tubule (i.e., pH 6.7 and ionic strength of 200 mM), IDV crystallized at 200 mg/L. Under conditions applying to the loop of Henle, pH 7.4 and ionic strength of 200 mM, IDV crystallized at 125 mg/L, which would correspond to a plasma IDV concentration of 8 mg/L. Conclusions: IDV crystallization is most likely in the loop of Henle and may already start at plasma IDV concentrations as low as 8 mg/L. Increasing hydration does not reduce the risk of IDV crystallization in the loop of Henle but instead prevents IDV crystallization and aggregation in the lower urinary tract. It remains to be confirmed whether prevention of high IDV plasma concentrations will reduce the risk of IDV crystallization in the loop of Henle.", "corpus_id": 19909266, "title": "Indinavir Crystallization Around the Loop of Henle: Experimental Evidence" }
{ "abstract": "This study follows a quantitative research design to investigate the perception of students from English medium schools and universities towards English language learning and cultural manipulation in Bangladesh. A total of 300 students from three English medium schools, two private universities and two public universities participate in the survey. A simple random sampling technique is followed to define the sample size. Further, the study uses a questionnaire as a tool for collecting data. Then, Statistical Package for the Social Sciences (SPSS) 21.0 software is used for analyzing the data. The findings reveal that majority of the respondents are practicing western culture, and therefore, our Bangladeshi culture is gradually being replaced. Though English language has been playing a great role for the communication, it has become a threat to our own culture. The students are much more attracted to western culture and lifestyle neglecting Bangladeshi ones. They start adopting western culture to the detriment of Bangladeshi tradition and culture. Finally, the paper concludes confirming that such kind of excessive indulgence in western culture undermines Bangladeshi traditional values and ways of life. The government along with the language policy makers should emphasize how the native culture and the target culture can be represented in a more sensible and balanced way.", "corpus_id": 251782112, "score": -1, "title": "Investigating English Language as a Tool of Cultural Manipulation in English Medium Schools and Universities in Khulna City, Bangladesh" }
{ "abstract": "Lung cancer is a heterogeneous group of diseases with multifactorial aetiology. Smoking has been undeniably recognized as the main aetiological factor in lung cancer, but it should be emphasized that it is not the only factor. It is worth noting that a number of nonsmokers also develop this disease. Radon exposure is the second greatest risk factor for lung cancer among smokers—after smoking—and the first one for nonsmokers. The knowledge about this element amongst specialist oncologists and pulmonologists seems to be very superficial. We discuss the impact of radon on human health, with particular emphasis on respiratory diseases, including lung cancer. A better understanding of the problem will increase the chance of reducing the impact of radon exposure on public health and may contribute to more effective prevention of a number of lung diseases.", "corpus_id": 229300924, "title": "Radon—The Element of Risk. The Impact of Radon Exposure on Human Health" }
{ "abstract": "The method for the calculation of correction factors is presented, which can be used for the assessment of the mean annual radon concentration on the basis of 1-month or 3-month indoor measurements. Annual radon concentration is an essential value for the determination of the annual dose due to radon inhalation. The measurements have been carried out in 132 houses in Poland over a period of one year. The passive method of track detectors with CR-39 foil was applied. Four thermal-precipitation regions in Poland were established and correction factors were calculated for each region, separately for houses with and without basements.", "corpus_id": 81662, "title": "Correction factors for determination of annual average radon concentration in dwellings of Poland resulting from seasonal variability of indoor radon." }
{ "abstract": "Abstract In this paper, we consider the macroeconomic models with policy lag, and study how lags in policy response affect the macroeconomic stability. The local stability of the nonzero equilibrium of this equation is investigated by analyzing the corresponding transcendental characteristic equation of its linearized equation. Some general stability criteria involving the policy lag and the system parameter are derived. By choosing the policy lag as a bifurcation parameter, the model is found to undergo a sequence of Hopf bifurcation. The direction and stability of the bifurcating periodic solutions are determined by using the normal form theory and the center manifold theorem. Moreover, we show that the government can stabilize the intrinsically unstable economy if the policy lag is sufficiently short, but the system become locally unstable when the policy lag is too long. We also find the chaotic behavior in some range of the policy lag.", "corpus_id": 119935559, "score": -1, "title": "Hopf bifurcation and chaos in macroeconomic models with policy lag" }
{ "abstract": "We present a rare case of metastatic pancreatic adenocarcinoma diagnosed antepartum. A high index of suspicion must be maintained to diagnose pancreatic cancer during pregnancy. We recommend a thorough history and physical and aggressive pursuit of sensitive imaging in patients with persistent symptoms. If pancreatic adenocarcinoma is diagnosed, a multidisciplinary approach that focuses on patient goals should be undertaken. The effect of pregnancy on tumor growth rates is unknown.", "corpus_id": 4991762, "title": "Metastatic Pancreatic Adenocarcinoma During Pregnancy" }
{ "abstract": "BACKGROUND: Acute, persistent abdominal pain due to ruptured pancreatic carcinoma and perforated stomach is extremely rare during pregnancy. CASE: We evaluated a woman at 34 weeks of gestation presenting with uterine contractions. Computed tomography scanning revealed a large retroperitoneal mass, and her blood carbohydrate antigen 19–9 level was elevated. Immediately after an emergency cesarean delivery, pancreatic cancer was detected, and pancreatoduodenectomy was performed. The patient underwent chemotherapy and remains disease-free at 2 years. CONCLUSION: Delayed diagnosis and treatment are associated with high morbidity of both neonate and mother in cases of pancreatic cancer during pregnancy. Computed tomography scanning and carbohydrate antigen 19–9 levels are useful for diagnosis, after which radical surgery should be performed immediately in late pregnancy.", "corpus_id": 3256126, "title": "Diagnosis and Management of Pancreatic Carcinoma During Pregnancy" }
{ "abstract": "Abstract Objective: To report a case of pancreatic adenocarcinoma complicating pregnancy with a review of literature. Methods: A literature search of all English articles on pancreatic adenocarcinoma in pregnancy till December 2014. Results: A 35-year-old patient presented at 22 weeks of gestation for back pain and weight loss. Subsequent she was confirmed to have metastatic pancreatic adenocarcinoma. There were in total eleven case reports identified. Abdominal pain and back pain were the presenting symptoms in 75% and 33.3% of patients respectively. Conclusions: Pancreatic adecnocarcinoma is a rare cancer in pregnancy. A high index of suspicion is required in case of atypical symptoms.", "corpus_id": 20991036, "score": -1, "title": "Metastatic pancreatic adenocarcinoma presented as back pain in pregnancy: case report and review of literature" }
{ "abstract": "We propose a new model for relational VAE semi-supervision capable of balancing disentanglement and low complexity modelling of relations with different symbolic properties. We compare the relative benefits of relation-decoder complexity and latent space structure on both inductive and transductive transfer learning. Our results depict a complex picture where enforcing structure on semi-supervised representations can greatly improve zero-shot transductive transfer, but may be less favourable or even impact negatively the capacity for inductive transfer.", "corpus_id": 226965046, "title": "On the Transferability of VAE Embeddings using Relational Knowledge with Semi-Supervision" }
{ "abstract": "In this work we explore the generalization characteristics of unsupervised representation learning by leveraging disentangled VAE's to learn a useful latent space on a set of relational reasoning problems derived from Raven Progressive Matrices. We show that the latent representations, learned by unsupervised training using the right objective function, significantly outperform the same architectures trained with purely supervised learning, especially when it comes to generalization.", "corpus_id": 53283344, "title": "Improving Generalization for Abstract Reasoning Tasks Using Disentangled Feature Representations" }
{ "abstract": "We introduce a data-driven approach to complete partial 3D shapes through a combination of volumetric deep neural networks and 3D shape synthesis. From a partially-scanned input shape, our method first infers a low-resolution – but complete – output. To this end, we introduce a 3D-Encoder-Predictor Network (3D-EPN) which is composed of 3D convolutional layers. The network is trained to predict and fill in missing data, and operates on an implicit surface representation that encodes both known and unknown space. This allows us to predict global structure in unknown areas at high accuracy. We then correlate these intermediary results with 3D geometry from a shape database at test time. In a final pass, we propose a patch-based 3D shape synthesis method that imposes the 3D geometry from these retrieved shapes as constraints on the coarsely-completed mesh. This synthesis process enables us to reconstruct fine-scale detail and generate high-resolution output while respecting the global mesh structure obtained by the 3D-EPN. Although our 3D-EPN outperforms state-of-the-art completion method, the main contribution in our work lies in the combination of a data-driven shape predictor and analytic 3D shape synthesis. In our results, we show extensive evaluations on a newly-introduced shape completion benchmark for both real-world and synthetic data.", "corpus_id": 1003795, "score": -1, "title": "Shape Completion Using 3D-Encoder-Predictor CNNs and Shape Synthesis" }
{ "abstract": "Modern theories of moral judgment predict that both conscious reasoning and unconscious emotional influences affect the way people decide about right and wrong. In a series of experiments, we tested the effect of subliminal and conscious priming of disgust facial expressions on moral dilemmas. “Trolley-car”-type scenarios were used, with subjects rating how acceptable they found the utilitarian course of action to be. On average, subliminal priming of disgust facial expressions resulted in higher rates of utilitarian judgments compared to neutral facial expressions. Further, in replication, we found that individual change in moral acceptability ratings due to disgust priming was modulated by individual sensitivity to disgust, revealing a bi-directional function. Our second replication extended this result to show that the function held for both subliminally and consciously presented stimuli. Combined across these experiments, we show a reliable bi-directional function, with presentation of disgust expression primes to individuals with higher disgust sensitivity resulting in more utilitarian judgments (i.e., number-based) and presentations to individuals with lower sensitivity resulting in more deontological judgments (i.e., rules-based). Our results may reconcile previous conflicting reports of disgust modulation of moral judgment by modeling how individual sensitivity to disgust determines the direction and degree of this effect.", "corpus_id": 13056035, "title": "Moral judgment modulation by disgust is bi-directionally moderated by individual sensitivity" }
{ "abstract": "Emotions seem to play a critical role in moral judgment. However, the way in which emotions exert their influence on moral judgments is still poorly understood. This study proposes a novel theoretical approach suggesting that emotions influence moral judgments based on their motivational dimension. We tested the effects of two types of induced emotions with equal valence but with different motivational implications (anger and disgust), and four types of moral scenarios (disgust-related, impersonal, personal, and beliefs) on moral judgments. We hypothesized and found that approach motivation associated with anger would make moral judgments more permissible, while disgust, associated with withdrawal motivation, would make them less permissible. Moreover, these effects varied as a function of the type of scenario: the induced emotions only affected moral judgments concerning impersonal and personal scenarios, while we observed no effects for the other scenarios. These findings suggest that emotions can play an important role in moral judgment, but that their specific effects depend upon the type of emotion induced. Furthermore, induced emotion effects were more prevalent for moral decisions in personal and impersonal scenarios, possibly because these require the performance of an action rather than making an abstract judgment. We conclude that the effects of induced emotions on moral judgments can be predicted by taking their motivational dimension into account. This finding has important implications for moral psychology, as it points toward a previously overlooked mechanism linking emotions to moral judgments.", "corpus_id": 2482632, "title": "The role of emotions for moral judgments depends on the type of emotion and moral scenario." }
{ "abstract": "Automatic dependent surveillance-broadcast (ADS-B) is one of the fundamental surveillance technologies to improve the safety, capacity, and efficiency of the national airspace system. ADS-B shares its frequency band with current radar systems that use the same 1,090 MHz band. The coexistence of radar systems and ADS-B systems is a key issue to detect and resolve conflicts in the next generation air transportation system (NextGen). This paper focuses on the performance evaluation of ADS-B with existing radar systems and performance optimization of ADS-B systems to improve the safety and efficiency of conflict detection and resolution in NextGen. We have developed a simulation environment which models the complex interplay among the air traffic load, the radar systems, the ADS-B systems, and the wireless channel. A simple model is used to derive an analytical expression for a performance metric of ADS-B. This model is then used to design an adaptive ADS-B protocol for maximizing the information coverage while guaranteeing reliable and timely communication in air traffic surveillance networks. Simulation results show that the effect of ADS-B interference on the current radar system is negligible. The operational ability of ADS-B meets the performance requirements of conflict detection and resolution in air traffic control. However, upgrades are required in the current radar system for operation within an ADS-B environment since the current radars can significantly degrade the ADS-B performance. Numerical results indicate that the proposed adaptive protocol has the potential to improve the performance of conflict detection and resolution in air traffic control.", "corpus_id": 18861544, "score": -1, "title": "Performance Evaluation and Optimization of Communication Infrastructure for the Next Generation Air Transportation System" }
{ "abstract": "This document was prepared by the community that is active in Italy, within INFN (Istituto Nazionale di Fisica Nucleare), in the field of ultra-relativistic heavy-ion collisions. The experimental study of the phase diagram of strongly-interacting matter and of the Quark-Gluon Plasma (QGP) deconfined state will proceed, in the next 10-15 years, along two directions: the high-energy regime at RHIC and at the LHC, and the low-energy regime at FAIR, NICA, SPS and RHIC. The Italian community is strongly involved in the present and future programme of the ALICE experiment, the upgrade of which will open, in the 2020s, a new phase of high-precision characterisation of the QGP properties at the LHC. As a complement of this main activity, there is a growing interest in a possible future experiment at the SPS, which would target the search for the onset of deconfinement using dimuon measurements. On a longer timescale, the community looks with interest at the ongoing studies and discussions on a possible fixed-target programme using the LHC ion beams and on the Future Circular Collider.", "corpus_id": 119178045, "title": "INFN What Next: Ultra-relativistic Heavy-Ion Collisions" }
{ "abstract": "Measurements of inclusive jet suppression in heavy ion collisions at the LHC provide direct sensitivity to the physics of jet quenching. In a sample of lead-lead collisions at root S-NN = 2.76 TeV ...", "corpus_id": 120452, "title": "Measurement of the jet radius and transverse momentum dependence of inclusive jet suppression in lead–lead collisions at √ s NN = 2 . 76 TeV with the ATLAS detector" }
{ "abstract": "In this short paper we provide an overview of new results from the ATLAS physics program at the LHC as of spring 2015. We separately summarize the results from p+Pb collisions and Pb+Pb collisions along with some of their interpretations.", "corpus_id": 1814019, "score": -1, "title": "Overview of new results from ATLAS heavy ion physics program" }
{ "abstract": "The Sleep Condition Indicator (SCI) is an eight‐item rating scale that was developed to screen for insomnia disorder based on DSM‐5 criteria. It has been shown previously to have good psychometric properties among several language translations. We developed age‐ and sex‐referenced values for the SCI to assist the evaluation of insomnia in everyday clinical practice. A random sample of 200 000 individuals (58% women, mean age: 31 ± 13 years) was selected from those who had completed the SCI via several internet platforms. Descriptive and inferential methods were applied to generate reference data and indices of reliable change for the SCI for men and women across the age deciles 16–25, 26–35, 36–45, 46–55, 56–65 and 66–75 years. The mean SCI score for the full sample was 14.97 ± 5.93. Overall, women scored worse than men (14.29 ± 5.83 versus 15.90 ± 5.94; mean difference: −1.60, η2 = 0.018, Cohen's d = 0.272) and those of older age scored worse than those younger (−0.057 points per year, 95% confidence interval (CI): −0.059 to −0.055) relative to age 16–25 years. The Reliable Change Index was established at seven scale points. In conclusion, the SCI is a useful instrument for clinicians and researchers that can help them to screen for insomnia, compare completers to individuals of similar age and sex and establish whether a reliable change was achieved following treatment.", "corpus_id": 23154432, "title": "The Sleep Condition Indicator: reference values derived from a sample of 200 000 adults" }
{ "abstract": "Insomnia disorder is frequent in the population, yet there is no French screening instrument available that is based on the updated DSM‐5 criteria. We evaluated the validity and reliability of the French version of an insomnia screening instrument based on DSM‐5 criteria, the Sleep Condition Indicator, in a population‐based sample of adults. A total of 366 community‐dwelling participants completed a face‐to‐face clinical interview to determine insomnia disorder against DSM‐5 criteria and several questionnaires including the French Sleep Condition Indicator version. Three‐hundred and twenty‐nine participants completed the Sleep Condition Indicator again after 1 month. Statistical analyses were performed to determine the reliability, construct validity, divergent validity and temporal stability of the French translation of the Sleep Condition Indicator. In addition, an explanatory factor analysis was performed to assess the underlying structure. The internal consistency (α = 0.87) and temporal stability (r = 0.86, P < 0.001) of the French Sleep Condition Indicator were high. When using the previously defined cut‐off value of ≤ 16, the area under the receiver operating characteristic curve was 0.93 with a sensitivity of 95% and a specificity of 75%. Additionally, good construct and divergent validity were demonstrated. The factor analyses showed a two‐factor structure with a focus on sleep and daytime effects. The French version of the Sleep Condition Indicator demonstrates satisfactory psychometric properties while being a useful instrument in detecting cases of insomnia disorder, consistent with features of DSM‐5, in the general population.", "corpus_id": 4258, "title": "Validation of a French version of the Sleep Condition Indicator: a clinical screening tool for insomnia disorder according to DSM‐5 criteria" }
{ "abstract": "BACKGROUND AND PURPOSE\nTo estimate the prevalence of insomnia symptoms and syndrome in the general population, describe the types of self-help treatments and consultations initiated for insomnia, and examine help-seeking determinants.\n\n\nPATIENTS AND METHODS\nA randomly selected sample of 2001 French-speaking adults from the province of Quebec (Canada) responded to a telephone survey about sleep, insomnia, and its treatments.\n\n\nRESULTS\nOf the total sample, 25.3% were dissatisfied with their sleep, 29.9% reported insomnia symptoms, and 9.5% met criteria for an insomnia syndrome. Thirteen percent of the respondents had consulted a healthcare provider specifically for insomnia in their lifetime, with general practitioners being the most frequently consulted. Daytime fatigue (48%), psychological distress (40%), and physical discomfort (22%) were the main determinants prompting individuals with insomnia to seek treatment. Of the total sample, 15% had used at least once herbal/dietary products to facilitate sleep and 11% had used prescribed sleep medications in the year preceding the survey. Other self-help strategies employed to facilitate sleep included reading, listening to music, and relaxation.\n\n\nCONCLUSIONS\nThese findings confirm the high prevalence of insomnia in the general population. While few insomnia sufferers seek professional consultations, many individuals initiate self-help treatments, particularly when daytime impairments such as fatigue become more noticeable. Improved knowledge of the determinants of help-seeking behaviors could guide the development of effective public health prevention and intervention programs to promote healthy sleep.", "corpus_id": 12908473, "score": -1, "title": "Epidemiology of insomnia: prevalence, self-help treatments, consultations, and determinants of help-seeking behaviors." }
{ "abstract": "Data were available for 160 sheep (50 Suffolk males, 50 Suffolk females, 40 Texel males and 20 Charollais males). One-fifth of animals within each breed and sex were slaughtered at each of 14, 18 or 22 weeks of age and two-fifths slaughtered at 26 weeks. After slaughter linear measurements were taken on the carcass. The left side of each carcass was then separated into eight joints and each joint dissected into lean, bone and fat. Five muscularity measures (three for the longissimus thoracis et lumborum (LTL) muscle, one for the hind leg and one for the whole carcass) and one of the shape of the LTL cross-section (depth : width) were calculated. With the exception of one measure for the LTL, muscularity increased with growth. Rates of increase in most measures were higher in Texels than in each of the other breeds, but were not different between the male and female Suffolks or between the Suffolk and Charollais lambs. Increases in most muscularity measures at a constant live weight were associated with increases in lean to bone ratio and carcass lean content. Associations with fat content were either non-significant or negative. Relationships with lean distribution were non-significant or weak. Correlations between the three measures of muscularity for the LTL were high. Correlations between the whole carcass measure and those within different regions were moderate to high in the Texels but lower in the Suffolk and Charollais breeds. The same was true for correlations between the LTL measures and hind leg muscularity. If muscularity throughout the carcass is to be described effectively, measures in more than one region may be required, particularly in the Suffolk and Charollais breeds.", "corpus_id": 55242765, "title": "Changes in muscularity with growth and its relationship with other carcass traits in three terminal sire breeds of sheep" }
{ "abstract": "A measure of muscularity, based on objective measurements, and expressed in terms of muscle depth relative to skeletal dimensions, is proposed and investigated using a simulation model. Average muscle depth is assessed as the square root of the muscle weight per unit length of a bone adjacent to the muscle. Muscularity is then defined as average muscle depth divided by bone length. Evidence based on a theoretical model, results from the literature and data from backfat selection lines of Southdown sheep is used to illustrate how muscularity defined in this way changes with growth, and the extent to which it parallels changes in muscle to bone ratio. It is concluded that although these two characteristics often change together there are situations where differences in muscularity are not accompanied by differences in muscle to bone ratio and vice versa.", "corpus_id": 1194384, "title": "An objective measure of muscularity: Changes with animal growth and differences between Genetic lines of southdown sheep." }
{ "abstract": "Material property models for poly(etheretherketone) (PEEK) have been combined with a residual stress model to provide a means for investigating the effect of crystallization process on the residual stress development in semicrystalline materials. The analysis shows that crystallization causes an increase in the residual stress levels. This increase is affected through an increase in the resin modulus values and through the resin modulus build-up at higher temperatures. The shrinkage due to crystallization was found to have no effect on the residual stress development in neat PEEK.", "corpus_id": 135842164, "score": -1, "title": "Residual stress development in neat poly(etheretherketone)" }
{ "abstract": "Breast cancer is found to be the most pervasive type of cancer among women. Computer aided detection and diagnosis of cancer at the initial stages can increase the chances of recovery and thus reduce the mortality rate through timely prognosis and adequate treatment planning. The nuclear atypia scoring or histopathological breast tumor grading remains a challenging problem due to the various artifacts and variabilities introduced during slide preparation and also because of the complexity in the structure of the underlying tissue patterns. Inspired by the success of symmetric positive definite (SPD) matrices in many of the challenging tasks in machine learning and computer vision, a sparse coding and dictionary learning on SPD matrices is proposed in this paper for breast tumor grading. The proposed covariance-based SPD matrices form a Riemannian manifold and are represented as the sparse combination of Riemannian dictionary atoms. Non-linearity of the SPD manifold is tackled by embedding into the reproducing kernel Hilbert space using kernels derived from the log-Euclidean metric, Jeffrey and Stein divergences, and compared with the non-kernel-based affine invariant Riemannian metric. The novelty of the work lies in exploiting the kernel approach for the Hilbert space embedding of the Riemannian manifold, which can achieve a better discrimination of the breast cancer tissues following a sparse representation over learned dictionaries, and hence it outperforms many of the state-of-the-art algorithms in breast cancer grading in terms of quantitative and qualitative analysis.", "corpus_id": 53042498, "title": "Sparse Representation Over Learned Dictionaries on the Riemannian Manifold for Automated Grading of Nuclear Pleomorphism in Breast Cancer" }
{ "abstract": "Automated detection and segmentation of histologic primitives are critical steps for developing computer-aided diagnosis and prognosis systems on histopathological tissue specimens. For a number of cancers, the clinical cancer grading system is highly correlated with the pathomic features of histologic primitives that are appreciated from histopathological images. However, automated detection and segmentation of histologic primitives is challenging because of the complexity and high density of histologic data. Therefore, there is a high demand for developing intelligent and computational image analysis tools for digital pathology images. Recently there has been interest in the application of “Deep Learning” strategies for classification and analysis of big image data. Histopathology, given its size and complexity, represents an excellent use case for the application of deep learning strategies. In this chapter, we present deep learning based approaches for two challenging tasks in histological image analysis: (1) Automated nuclear atypia scoring (NAS) on breast histopathology. We present a Multi-Resolution Convolutional Network (MR-CN) with Plurality Voting (MR-CN-PV) model for automated NAS. MR-CN-PV consists of three Single-Resolution Convolutional Network (SR-CN) with Majority Voting (SR-CN-MV) models that produce independent NAS results. MR-CN-PV combines the three scores via plurality voting to obtain the final score. (2) Epithelial (EP) and stromal (ST) tissue discrimination. The work utilized a pixel-wise Convolutional Network (CN-PI) based segmentation model for automated EP and ST tissue discrimination. We present experiments on two challenging datasets. For automated NAS, the MR-CN-PV model was evaluated on the MITOS-ATYPIA-14 Challenge dataset, where it achieved a score of 67, placing second compared with the scores of the other five teams. The proposed CN-PI model outperformed patch-wise CN (CN-PA) models in discriminating EP and ST tissues on breast histological images.", "corpus_id": 2312141, "title": "Deep Learning for Histopathological Image Analysis: Towards Computerized Diagnosis on Cancers" }
{ "abstract": "The underlying paradigm of big data-driven machine learning reflects the desire of deriving better conclusions from simply analyzing more data, without the necessity of looking at theory and models. Is having simply more data always helpful? In 1936, The Literary Digest collected 2.3M filled in questionnaires to predict the outcome of that year's US presidential election. The outcome of this big data prediction proved to be entirely wrong, whereas George Gallup only needed 3K handpicked people to make an accurate prediction. Generally, biases occur in machine learning whenever the distributions of training set and test set are different. In this work, we provide a review of different sorts of biases in (big) data sets in machine learning. We provide definitions and discussions of the most commonly appearing biases in machine learning: class imbalance and covariate shift. We also show how these biases can be quantified and corrected. This work is an introductory text for both researchers and practitioners to become more aware of this topic and thus to derive more reliable models for their learning problems.", "corpus_id": 3689710, "score": -1, "title": "Impact of Biases in Big Data" }
{ "abstract": "Wellbeing, or how people think and feel about their lives, predicts important life outcomes from happiness to health to longevity. Montessori pedagogy has features that enhance wellbeing contemporaneously and predictively, including self-determination, meaningful activities, and social stability. Here, 1905 adults, ages 18–81 (M = 36), filled out a large set of wellbeing scales followed by demographic information including type of school attended each year from 2 to 17. About half the sample had only attended conventional schools and the rest had attended Montessori for between 2 and 16 years (M = 8 years). To reduce the variable set, we first developed a measurement model of wellbeing using the survey data with exploratory then confirmatory factor analyses, arriving at four factors: general wellbeing, engagement, social trust, and self-confidence. A structural equation model that accounted for age, gender, race, childhood SES, and years in private school revealed that attending Montessori for at least two childhood years was associated with significantly higher adult wellbeing on all four factors. A second analysis found that the difference in wellbeing between Montessori and conventional schools existed even among the subsample that had exclusively attended private schools. A third analysis found that the more years one attended Montessori, the higher one’s wellbeing as an adult. Unmeasured selection effects could explain the results, in which case research should determine what third variable associated with Montessori schooling causes adult wellbeing. Several other limitations to the study are also discussed. Although some of these limitations need to be addressed, coupled with other research, including studies in which children were randomly assigned to Montessori schools, this study suggests that attending Montessori as a child might plausibly cause higher adult wellbeing.", "corpus_id": 244742937, "title": "An Association Between Montessori Education in Childhood and Adult Wellbeing" }
{ "abstract": "BACKGROUND\nThe utility of proxy reporting within the life course framework has not been adequately assessed; therefore we sought to assess the magnitude and type of agreement that exists between index and proxy reports for bodyweight, health, and socio-economic position (SEP) in childhood.\n\n\nMETHODS\nParticipants were enrolled as part of an ongoing study of preterm birth in African American women in Metro Detroit. Post-partum women and their mothers (n = 333 pairs) provided retrospective reports about the woman's childhood bodyweight, health, and SEP. Agreement was assessed using kappa, weighted kappa (κ), and intraclass correlation coefficients (ICC). Log-linear models were used to describe the pattern of agreement for ordinal data.\n\n\nRESULTS\nBirthweight and weight at age 18 were reported with a high level of agreement (ICC = 0.86 and 0.71, respectively). Kappa indicated moderate agreement for early and late childhood/adolescent weight. Log-linear models suggested that there was diagonal agreement plus linear by linear association for early childhood weight and linear by linear association in late childhood/adolescence. Reports of childhood medical problems and hospitalisations had only moderate agreement. Agreement for SEP in both early (κ = 0.14) and late childhood/adolescence (κ = 0.20) was poor. Log-linear models suggest a linear by linear association, indicating a positive association between the responses.\n\n\nCONCLUSIONS\nResults suggest that proxy reports may be utilised in conjunction with an index report to provide an estimate of the accuracy of report or to more fully capture experiences over the life course. This may be particularly useful when multiple developmental periods are examined.", "corpus_id": 2821212, "title": "Direct and proxy recall of childhood socio-economic position and health." }
{ "abstract": "Recent evidence suggests potential associations between birthweight and disease in later life. For resource or other reasons recorded birthweight may be unavailable to researchers who have access to uniquely relevant outcome data. The present study examined the validity of parental recall of birthweight. Parents of 1015 males and females aged 12 and 15 years participating in the Young Hearts Study (a cluster random sample of 1015 males and females aged 12 and 15 years from post-primary schools in Northern Ireland) completed a questionnaire which included a question about their child's birthweight. The answer provided was compared with recorded birthweight obtained from archived computerised child health records with a cut-off point for inaccurate reporting set at ±227 g (1/2 lb). The influence of social class and weight at birth on accuracy of recall was also determined. A total of 84.8% of parents accurately recalled their child's birthweight to within 227 g. Parents from non-manual occupation social classes recalled birthweight more accurately than those from manual occupation social classes (88.0 vs. 82.6% accurate: χ2 = 4.81, p = 0.03). Parents of low birthweight infants tended to recall their birthweight less accurately than parents of normal weight infants: 76.1% accurate compared to 86.1% accurate: χ2 = 3.54, p = 0.06. Parents of high birthweight infants recalled their birthweight less accurately than parents of normal weight infants: 78.5% accurate: χ2 = 3.94, p = 0.05. In conclusion, parentally recalled birthweight may be a suitable proxy for recorded birthweight for population based research into disease in childhood and adolescence.", "corpus_id": 20711270, "score": -1, "title": "Parental recall of birthweight: A good proxy for recorded birthweight?" }
{ "abstract": "This paper presents the simple yet effective phase shift control to attain constant current/constant current (CC/CV) charging for Electric Vehicle (EV) battery packs through series-series compensated resonant inductive wireless power transfer (RIWPT). The Series-Series (SS) compensation is mainly used in the proposed system to improve power transfer capability, reduce the leakage magnetic flux, and thereby maximize the power transfer efficiency. The battery pack of EVs is characterized as an equivalent variable resistance during CC/CV charging based on a real charging profile of a Chevy bolt EV battery pack. The primary side control is utilized in the proposed system to reduce the weight of the onboard power electronics converter and components requirements on the secondary side. An effective phase shift control strategy for RIWPT-based level 2 charger which only requires battery voltage and current data is implemented to achieve CC/CV charging of the EV battery pack. The effectiveness and practicality of the proposed control strategy are verified through a 7.7 kW RIWPT-based charger simulation as well as its experimental validation with a 3.7 kW RIWPT-based charger prototype.", "corpus_id": 260172263, "title": "Constant Current/Constant Voltage Charging Via Series-Series Compensated Resonant Inductive Wireless Charging for Electric Vehicle" }
{ "abstract": "For wireless charging of electric vehicle (EV) batteries, high-frequency magnetic fields are generated from magnetically coupled coils. The large air-gap between two coils may cause high leakage of magnetic fields and it may also lower the power transfer efficiency (PTE). For the first time, in this paper, we propose a new set of coil design formulas for high-efficiency and low harmonic currents and a new design procedure for low leakage of magnetic fields for high-power wireless power transfer (WPT) system. Based on the proposed design procedure, a pair of magnetically coupled coils with magnetic field shielding for a 1-kW-class golf-cart WPT system is optimized via finite-element simulation and the proposed design formulas. We built a 1-kW-class wireless EV charging system for practical measurements of the PTE, the magnetic field strength around the golf cart, and voltage/current spectrums. The fabricated system has achieved a PTE of 96% at the operating frequency of 20.15 kHz with a 156-mm air gap between the coils. At the same time, the highest magnetic field strength measured around the golf cart is 19.8 mG, which is far below the relevant electromagnetic field safety guidelines (ICNIRP 1998/2010). In addition, the third harmonic component of the measured magnetic field is 39 dB lower than the fundamental component. These practical measurement results prove the effectiveness of the proposed coil design formulas and procedure of a WPT system for high-efficiency and low magnetic field leakage.", "corpus_id": 15485118, "title": "Coil Design and Measurements of Automotive Magnetic Resonant Wireless Charging System for High-Efficiency and Low Magnetic Field Leakage" }
{ "abstract": "This paper describes a method to suppress the leakage magnetic field from a wireless power transfer (WPT) system through the use of a ferrimagnetic material and metallic shielding. To demonstrate the advantages of the coil structure with the ferrimagnetic material and metallic shielding, magnetic field distributions and the electrical performance of three different coil structures are investigated via 3D electromagnetic (EM) field solver and SPICE simulation. Results show that the suggested method considerably reduces the leakage magnetic field in the vicinity of the WPT system without significant loss of electrical performance. The simulation results of the suggested coil structure are experimentally verified with a 100 W-class WPT system for an LED TV.", "corpus_id": 24921365, "score": -1, "title": "Suppression of leakage magnetic field from a wireless power transfer system using ferrimagnetic material and metallic shielding" }
{ "abstract": "Water pollution is a serious environmental problem caused by human activities. A group of pollutants that are not controlled in the environment but that cause harmful effects on the ecosystem are known as emerging pollutants. One of these groups of emerging pollutants detected in water bodies is pharmaceutical compounds. One of the main problems caused by pharmaceutical compounds as pollutants is bacterial resistance. Tetracyclines are a family of antibiotics frequently used. Due to their poor absorption they are released into the environment through feces and urine as active ingredients. Wastewater treatment consists of three stages: primary, secondary, and tertiary treatment. Tertiary treatment employs methods such as reverse osmosis, oxidation-reduction, ultraviolet irradiation, and adsorption. Adsorption is used because it is a simple and effective method. For the choice of an effective adsorbent material, surface area, porosity, adsorption capacity, mechanical stability, and factors such as profitability, regeneration, sustainability, and selectivity are considered. In the present review, the adsorbents commonly used in the treatment of water contaminated with tetracyclines were analyzed. The adsorbents used have been classified in a general way as metallic materials, polymers, ceramics, composites, and materials based on biomass.", "corpus_id": 236551291, "title": "ADSORBENT MATERIALS FOR EMERGING CONTAMINANT (TETRACYCLINE) REMOVAL" }
{ "abstract": "The presence of antibiotics in the water and wastewater has raised problems due to potential impacts on the environment and consequently their removal is of great importance. For this reason, this article aims to perform a study on the possibility of oxytetracycline (OTC) adsorption from aqueous medium by using the hydroxyapatite (HA) nanopowders as adsorbent materials. The hydroxyapatite nanopowders were synthesized by wet precipitation method by using orthophosphoric acid and calcium hydroxide as raw materials and investigated by XRD, SEM-EDX, FTIR and BET methods. The uncalcined and calcined hydroxyapatite samples have hexagonal crystal structure with crystal sizes smaller than 100 nm and a specific surface area of 316 m2/g and 139 m2/g, respectively. The adsorption behavior of oxytetracycline, a zwitterionic antibiotic, on nanohydroxyapatite was investigated as a function of pH, contact time, adsorbent dosage and drug concentration by means of batch adsorption experiments. High oxytetracycline removal rates of about 97.58% and 89.95% for the uncalcined and calcined nanohydroxyapatites, respectively, were obtained at pH 8 and ambient temperature. The adsorption process of oxytetracycline onto nanohydroxyapatite samples was found to follow pseudo-second order and intraparticle diffusion kinetic models. The maximum adsorption capacities of 291.32 mg/g and 278.27 mg/g for uncalcined and calcined nanohydroxyapatite samples, respectively, have been found. The adsorption mechanism of OTC on the hydroxyapatite surface at pH 8 can be established via surface complexation. The obtained results are indicative of good hydroxyapatite adsorption ability towards oxytetracycline drug.", "corpus_id": 4457008, "title": "Studies on adsorption of oxytetracycline from aqueous solutions onto hydroxyapatite." }
{ "abstract": "BACKGROUND Congenital transmesenteric hernia in children is a rare and potentially fatal form of internal abdominal hernia, and no specific clinical symptoms can be observed preoperatively. Therefore, this condition is not widely known among clinicians, and it is easily misdiagnosed, resulting in disastrous effects. CASE SUMMARY This report presents the case of a 13-year-old boy with a chief complaint of abdominal pain and vomiting and a history of duodenal ulcer. The patient was misdiagnosed with gastrointestinal bleeding and treated conservatively at first. Then, the patient’s symptoms were aggravated and he presented in a shock-like state. Computed tomography revealed a suspected internal hernia, extensive small intestinal obstruction, and massive effusion in the abdominal and pelvic cavity. Intraoperative exploration found a small mesenteric defect approximately 3.5 cm in diameter near the ileocecal valve, and there was about 1.8 m of herniated small intestine that was treated by resection and anastomosis. The patient recovered well and was followed for more than 5 years without developing short bowel syndrome. CONCLUSION In this report, we review the pathogenesis, presentation, diagnosis, and treatment of congenital transmesenteric hernia in children.", "corpus_id": 236208503, "score": -1, "title": "Intestinal gangrene secondary to congenital transmesenteric hernia in a child misdiagnosed with gastrointestinal bleeding: A case report" }
{ "abstract": "Seed dormancy is an important adaptive mechanism to protect seeds under the unfavorable environments. Unlike to wild type species, the seed dormancy trait of cultivated crops has been weakened by breeding programs during the domestication period. Weak seed dormancy often causes preharvest sprouting (PHS) problem in many cereal crops that result in significant economic loss. The seed dormancy is a quantitative trait loci (QTL) controlled by multiple genetic and environmental factors. So far, many QTLs for seed dormancy have been identified from rice and wheat as well as in the model plant Arabidopsis. Unveiling of QTL genes and complex mechanisms underlying seed dormancy is accelerated by the rapid progress of crop genomics. In the present study, we reviewed current status of research progress on the seed dormancy QTLs and correlated genes in Arabidopsis and cereal crops.", "corpus_id": 90232888, "title": "Quantitative trait loci (QTL) genes related to seed dormancy and preharvest sprouting." }
{ "abstract": "Two weak dormancy mutants, designated Q4359 and Q4646, were obtained from the rice cultivar N22 after treatment with 400 Gy (60) Co gamma-radiation. Compared to the N22 cultivar, the dormancy of the mutant seeds was more readily broken when exposed to a period of room temperature storage. The mutants also showed a reduced level of sensitivity to abscisic acid compared to the N22 cultivar, although Q4359 was more insensitive than Q4646. A genetic analysis indicated that in both mutants, the reduced dormancy trait was caused by a single recessive allele of a nuclear gene, but that the mutated locus was different in each case. The results of quantitative trait locus (QTL) mapping, based on the F(2) population from Q4359 x Nanjing35, suggested that Q4359 lacks the QTL qSdn-1 and carries a novel allele at QTL qSdn-9, while a similar analysis of the Q4646 x Nanjing35 F(2) population suggested that Q4646 lacks QTL qSdn-5, both qSdn-1 and qSdn-5 are major effect seed dormancy QTL in N22. Therefore, these two mutants were helpful to understand the mechanism of seed dormancy in N22.", "corpus_id": 6872190, "title": "Genetic analysis of two weak dormancy mutants derived from strong seed dormancy wild type rice N22 (Oryza sativa)." }
{ "abstract": "Abstract The volatile flavour released from red kidney beans was evaluated in vitro (in a model mouth system) and in vivo (in-nose). The dynamic release of the volatile flavour compounds was analysed by proton transfer reaction–mass spectrometry. The flavour compounds were identified by gas chromatography–mass spectrometry. Four masses (m/z 33, 45, 59 and 73; mass flavour compound + 1) were predominantly measured in the headspace of the beans and selected for dynamic flavour release studies. Comparison of the four masses, identified compounds and their quantities present showed that the four masses probably correspond to methanol (m/z 33), 2-methylbutanal (m/z 45), 2,3-butanedione (m/z 59) and 2-methylpropanal/2-butanone (m/z 73). Three mastication rates were employed in in vitro analysis (0, 26 and 52 rpm) and two mastication rates in in vivo analysis (52 rpm and free chewing). In in vitro analysis, dynamic release patterns varied significantly among the compounds and the mastication rates (MANOVA, P", "corpus_id": 94009772, "score": -1, "title": "In vitro and in vivo volatile flavour analysis of red kidney beans by proton transfer reaction-mass spectrometry" }
{ "abstract": "The past 20 years have witnessed extraordinary advances in the field of cytogenetics, with the discovery that a multitude of neoplasms is characterized by identifiable chromosomal changes. The ability of Cytogenetics to aid in the identification and precise classification of a variety of neoplasms has not gone unnoticed by Cytology. In particular, Cytology has recognized Cytogenetics as a welcome companion in the evaluation of soft tissue tumors, lymphomas, renal and urothelial tumors, and mesothelioma. This relationship requires a good understanding of the proper handling of specimens for optimal evaluation by Cytogenetics. The marriage of Cytology and Cytogenetics will likely grow stronger as more solid tumors (eg, salivary gland neoplasms) are discovered that harbor characteristic chromosomal abnormalities. Cancer (Cancer Cytopathol) 2013;121:279–90. © 2013 American Cancer Society.", "corpus_id": 5534911, "title": "The marriage of Cytology and Cytogenetics" }
{ "abstract": "A 34‐year‐old previously healthy Hispanic man presented with lower back pain. CT scan revealed an 8‐cm space‐occupying lesion in the superior pole of the left kidney with numerous small lytic lesions in the skull, vertebrae, ribs, and pelvic bones. CT‐guided fine‐needle aspiration biopsy revealed a high‐grade primitive small round cell tumor with the tumor cells being strongly positive for CD99 and vimentin. The patient subsequently underwent a left nephrectomy. Fluorescence in situ hybridization analysis using a DNA probe for the Ewing Sarcoma breakpoint region 1 (EWSR1) on chromosome 22g12 revealed a rearrangement of the EWSR1 locus. The diagnosis of primary Ewing sarcoma/primitive neuroectodermal tumor of the kidney was established. Diagn. Cytopathol. 2007;35:353–357. © 2007 Wiley‐Liss, Inc.", "corpus_id": 885459, "title": "Primary Ewing sarcoma/PNET of the kidney: Fine‐needle aspiration, histology, and dual color break apart FISH Assay" }
{ "abstract": "A new classification scheme is proposed for the differential diagnosis of Ewing's sarcoma and malignant peripheral neuroectodermal tumor (MPNT) based on conventional light microscopic and immunohistochemical findings. The presence of Homer‐Wright rosettes and/or the expression of at least two neural markers is diagnostic of MPNT Ewing's sarcoma. Ewing's sarcoma was diagnosed in cases lacking Homer‐Wright rosettes and expressing no neural marker or only one in immunohistochemistry. Using this “new” approach considerable differences were found between both tumor types. Although most MPNT were located in the thoracopulmonary region, Ewing's sarcoma was located predominantly in the pelvis and extremities. The mean age of MPNT patients was greater than that of Ewing's sarcoma patients. Most importantly, however, was a statistically significant difference in prognosis: disease‐free survival in Ewing's sarcoma patients at 7.5 years follow‐up was 60% compared with 45% in MPNT patients (P = 0.026). The detection of HNK‐1 in MPNT indicated a more aggressive biologic behavior, and the expression of protein S‐100 appeared to be correlated with a more favorable clinical course. Cancer 68:2251–2259, 1991.", "corpus_id": 19198754, "score": -1, "title": "Malignant peripheral neuroectodermal tumor and its necessary distinction from ewing's Sarcoma. A report from the kiel pediatric tumor registry" }
{ "abstract": "Obesity is an increasing trend within the United States and the importance of addressing both causation and effects of obesity are becoming more important. It has been shown that environment, genetics, and social behavior factors can lead to an increased risk of obesity. Obesity has also been associated with several negative health concerns including increased risk for heart disease, cancer, poor nutrition, and diabetes, among others. Beyond identifying individual factors that may lead to obesity, and be associated with it, it is important to take into account complex obese biological systems which may have multiple factors compounding any health problems. Evidence has shown that obese adipose tissue can develop a state of chronic low grade inflammation with the presence of pro-inflammatory cytokines. Normal physiological agents, such as β-adrenergic agonists (for example epinephrine), can induce lipolytic function, though it has now also been shown that these pro-inflammatory cytokines can also stimulate lipolysis. To begin addressing the more complex issue of multiple obesity-related factors that contribute to health problems, we looked at a direct multi-factor based compounding system. This", "corpus_id": 4691597, "title": "INFLAMMATORY CYTOKINES ALTER NORMAL LIPID" }
{ "abstract": "Mammals must take in large quantities of food, sometimes equivalent to their own body weight each day, in order to meet the energy requirements of processes such as maintenance, growth, activity, thermoregulation, pregnancy, and lactation. It is therefore remarkable to observe that in adults of most species energy intake is equal to expenditure, and thus energy balance and body weight are maintained over long periods of time. Even in young animals, in which body weight is continuously increasing, the rate of energy deposition is often relatively unaffected by external factors. It seems that both intake and output can vary and that either one of these parameters may change to compensate for variations in the other, so that energy balance is restored or maintained. For example, metabolic rate varies with environmental temperature and this produces compensatory changes in food intake and conversely, food restriction causes a fall in energy expenditure and an increased efficiency of food utilization. An extension of this concept would suggest that increases in food intake might elicit a rise in expenditure, but although many studies over the past 80 years have provided support for this suggestion, it has become widely accepted only recently.", "corpus_id": 2108046, "title": "Diet-induced thermogenesis." }
{ "abstract": "In the thoracolumbar region, between 7% and 30% of spinal fusion failures are at risk for pseudarthrosis. From a biomechanical perspective, the nonconformity of the intervertebral graft to the endplate surface could contribute to pseudarthrosis, given suboptimal stress distributions. The objective of this study was to quantify the effect of endplate-graft conformation on endplate stress distribution, maximum Von Mises stress development, and stability. The study design used an experimentally validated finite element (FE) model of the L4-L5 functional spinal unit to simulate two types of interbody grafts (cortical bone and polycaprolactone (PCL)-hydroxyapatite (HA) graft), with and without endplate-conformed surfaces. Two case studies were completed. In Case Study I, the endplate-conformed grafts and nonconformed grafts were compared under without posterior instrumentation condition, while in Case Study II, the endplate-conformed and nonconformed grafts were compared with posterior instrumentation. In both case studies, the results suggested that the increased endplate-graft conformity reduced the maximum stress on the endplate, created uniform stress distribution on endplate surfaces, and reduced the range of motion of L4-L5 segments by increasing the contact surface area between the graft and the endplate. The stress distributions in the endplate suggest that the load sharing is greater with the endplate-conformed PCL-HA graft, which might reduce the graft subsidence possibility.", "corpus_id": 17425196, "score": -1, "title": "Biomechanical evaluation of an endplate-conformed polycaprolactone-hydroxyapatite intervertebral fusion graft and its comparison with a typical nonconformed cortical graft." }
{ "abstract": "More than a decade ago the World Health Organization (WHO) declared tuberculosis (TB) a global emergency and called on the biomedical community to strengthen its efforts to combat this scourge. The WHO predicts that by 2020 almost one billion people will be infected, with 35 million dying from the disease if research for new approaches to the management of this disease is unsuccessful (1). Designing a better TB vaccine is a high priority research goal. This chapter will review the various strategies currently being used to prevent and treat TB. In spite of the numerous new vaccine candidates in clinical trials, and several others in the preclinical pipeline, no clear TB vaccine development strategy has emerged.", "corpus_id": 58916388, "title": "Vaccines Against Mycobacterium tuberculosis: An Overview from Preclinical Animal Studies to the Clinic" }
{ "abstract": "ABSTRACT We have studied CD4+ T cells that mediate immunological memory to an intravenous infection with Mycobacterium tuberculosis. The studies were conducted with a mouse model of memory immunity in which mice are rendered immune by a primary infection followed by antibiotic treatment and rest. Shortly after reinfection, tuberculosis-specific memory cells were recruited from the recirculating pool, leading to rapidly increasing precursor frequencies in the liver and a simultaneous decrease in the blood. A small subset of the infiltrating T cells was rapidly activated (<20 h) and expressed high levels of intracellular gamma interferon and the T-cell activation markers CD69 and CD25. These memory effector T cells expressed intermediate levels of CD45RB and were heterogeneous with regard to the L-selectin and CD44 markers. By adoptive transfer into nude mice, the highest level of resistance to a challenge with M. tuberculosis was mediated by CD45RBhigh,l-selectinhigh, CD44low cells. Taken together, these two lines of evidence support an important role for memory cells which have reverted to a naive phenotype in the long-term protection against M. tuberculosis.", "corpus_id": 1787123, "title": "CD4+ T-Cell Subsets That Mediate Immunological Memory to Mycobacterium tuberculosis Infection in Mice" }
{ "abstract": "Immunological memory is a fundamental feature of vertebrate immune systems, providing enhanced protection against previously encountered antigens. The established view has been that immunological memory results from clonal expansion and long-term survival of specialized memory cells. Recently, the nature of memory T cells has come under closer scrutiny because of the ability to distinguish naive and memory T cells phenotypically, particularly in humans. In this article, Charles Mackay discusses three features of memory T cells that help to explain the nature and function of these cells: the increased expression of adhesion and activation molecules on memory T cells, their potent functional status and their specific pathways of recirculation.", "corpus_id": 1523969, "score": -1, "title": "T-cell memory: the connection between function, phenotype and migration pathways." }
{ "abstract": "Introduction Conflict, fragility and political violence, that are taking place in many countries in the Middle East and North Africa (MENA) region have devastating effects on health. Digital health technologies can contribute to enhancing the quality, accessibility and availability of health care services in fragile and conflict-affected states of the MENA region. To inform future research, investments and policy processes, this scoping review aims to map out the evidence on digital health in fragile states in the MENA region. Method We conducted a scoping review following the Joanna Briggs Institute (JBI) guidelines. We conducted descriptive analysis of the general characteristics of the included papers and thematic analysis of the key findings of included studies categorized by targeted primary users of different digital health intervention. Results Out of the 10,724 articles identified, we included 93 studies. The included studies mainly focused on digital health interventions targeting healthcare providers, clients and data services, while few studies focused on health systems or organizations managers. Most of the included studies were observational studies (49%). We identified no systematic reviews. Most of the studies were conducted in Lebanon (32%) followed by Afghanistan (13%) and Palestine (12%). The first authors were mainly affiliated with institutions from countries outside the MENA region (57%), mainly United Kingdom and United States. Digital health interventions provided a platform for training, supervision, and consultation for health care providers, continuing education for medical students, and disease self-management. The review also highlighted some implementation considerations for the adoption of digital health such as computer literacy, weak technological infrastructure, and privacy concerns. 
Conclusion This review showed that digital health technologies can provide promising solutions in addressing health needs in fragile and conflict-affected states. However, rigorous evaluation of digital technologies in fragile settings and humanitarian crises are needed to inform their design and deployment.", "corpus_id": 258375353, "title": "Digital health in fragile states in the Middle East and North Africa (MENA) region: A scoping review of the literature" }
{ "abstract": "Abstract Objective: To determine the feasibility and acceptability of mobile health technology and its potential to improve antenatal care (ANC) services in Iraq. Methods: This was a controlled experimental study conducted at primary health care centers. One hundred pregnant women who attended those centres for ANC were exposed to weekly text messages varying in content, depending on the week of gestation, while 150 women were recruited for the unexposed group. The number of ANC visits in the intervention and control groups, was the main outcome measure. The Mann-Whitney test and the Poisson regression model were the two main statistical tests used. Results: More than 85% of recipients were in agreement with the following statements: “the client recommends this program for other pregnant women”, “personal rating for the message as a whole” and “obtained benefit from the messages”. There was a statistically significant increase in the median number of antenatal clinic visits from two to four per pregnancy, in addition to being relatively of low cost, and could be provided for a larger population with not much difference in the efforts. Conclusions: Text messaging is feasible, low cost and reasonably acceptable to Iraqi pregnant women, and encourages their ANC visits.", "corpus_id": 493182, "title": "Feasibility and acceptability of text messaging to support antenatal healthcare in Iraqi pregnant women: a pilot study" }
{ "abstract": "Hurricane forecasts are intended to convey information that is useful in helping individuals and organizations make decisions. For example, decisions include whether a mandatory evacuation should be issued, where emergency evacuation shelters should be located, and what are the appropriate quantities of emergency supplies that should be stockpiled at various locations. This paper incorporates one of the National Hurricane Center's official prediction models into a Bayesian decision framework to address complex decisions made in response to an observed tropical cyclone. The Bayesian decision process accounts for the trade-off between improving forecast accuracy and deteriorating cost efficiency (with respect to implementing a decision) as the storm evolves, which is characteristic of the above-mentioned decisions. The specific application addressed in this paper is a single-supplier, multi-retailer supply chain system in which demand at each retailer location is a random variable that is affected by the trajectory of an observed hurricane. The solution methodology is illustrated through numerical examples, and the benefit of the proposed approach compared to a traditional approach is discussed.", "corpus_id": 8448505, "score": -1, "title": "A Bayesian decision model with hurricane forecast updates for emergency supplies inventory management" }
{ "abstract": "Celiac disease (CD) is a chronic autoimmune illness triggered by gluten consumption in genetically predisposed individuals. Worldwide, CD prevalence is approximately 1%. Several studies suggest a higher prevalence of undiagnosed CD in patients with infertility. We described reproductive disorders and assessed the frequency of hospital admissions for infertility among celiac women aged 15–49. We conducted two surveys enrolling a convenient sample of celiac women, residing in Apulia or in Basilicata (Italy). Moreover, we selected hospital discharge records (HDRs) of celiac women and women with an exemption for CD, and matched the lists with HDRs for reproductive disorders. In the surveys we included 91 celiac women; 61.5% of them reported menstrual cycle disorders. 47/91 reported at least one pregnancy and 70.2% of them reported problems during pregnancy. From the HDRs and the registry of exemption, we selected 4,070 women with CD; the proportion of women hospitalized for infertility was higher among celiac women than among resident women in childbearing age (1.2% versus 0.2%). Our findings highlight a higher prevalence of reproductive disorders among celiac women than in the general population suggesting that clinicians might consider testing for CD women presenting with pregnancy disorders or infertility.", "corpus_id": 330468, "title": "Results from Ad Hoc and Routinely Collected Data among Celiac Women with Infertility or Pregnancy Related Disorders: Italy, 2001–2011" }
{ "abstract": "BACKGROUND: Coeliac women may suffer from gynaecological and obstetric complications. It is possible that these complications are the first symptom of coeliac disease. AIMS: To investigate the occurrence of subclinical coeliac disease in patients with infertility or recurrent miscarriages. SUBJECTS: Women of reproductive age who were attending the hospital because of either primary or secondary infertility, or two or more miscarriages. Women undergoing sterilisation served as control subjects. METHODS: The diagnostic investigation for infertility included the endocrine status, diagnostic laparoscopy, investigation of tubal patency, postcoital test, and semen analysis of the partner. Circulating antibodies against IgA class reticulin and gliadin were used in screening for coeliac disease. In positive cases, the diagnosis was confirmed by small bowel biopsy specimens. RESULTS: Four (2.7%) of 150 women in the infertility group, and none of the 150 control subjects were found to have coeliac disease (p = 0.06). All four women with coeliac disease suffered from infertility of unexplained origin. Altogether 98 women had no discoverable reason for infertility. Thus, in this subgroup the frequency of coeliac disease was 4.1% (four of 98), the difference from the control group being statistically significant (p = 0.02). None of the coeliac women had extensive malabsorption, but two had iron deficiency anaemia. One women with coeliac disease has had a normal delivery. None of the 50 women with miscarriage had coeliac disease. CONCLUSION: Patients having fertility problems may have subclinical coeliac disease, which can be detected by serological screening tests. Silent coeliac disease should be considered in the case of women with unexplained infertility.", "corpus_id": 139450, "title": "Infertility and coeliac disease." }
{ "abstract": "We examined humoral immunity in coeliac disease as expressed in serum (systemic immunity), and in saliva, jejunal aspirate, and whole gut lavage fluid (mucosal immunity). The aims were to define features of the secretory immune response (IgA and IgM concentrations and antibody values to gliadin and other food proteins measured by enzyme linked immunosorbent assay (ELISA)) in active disease and remission, and to establish whether secretions obtained by relatively non-invasive techniques (saliva and gut lavage fluid) can be used for indirect measurements of events in the jejunum. Serum, saliva, and jejunal aspirate from 26 adults with untreated coeliac disease, 22 treated patients, and 28 immunologically normal control subjects were studied, together with intestinal secretions obtained by gut lavage from 15 untreated and 19 treated patients with coeliac disease and 25 control subjects. Jejunal aspirate IgA and IgM and gut lavage fluid IgM concentrations were significantly raised in patients with untreated coeliac disease; the lavage fluid IgM concentration remained higher in patients with treated coeliac disease than in controls. Serum and salivary immunoglobulin concentrations were similar in the three groups. Patients with untreated coeliac disease had higher values of antibodies to gliadin compared with treated patients and control subjects in all body fluids tested; these were predominantly of IgA and IgG classes in serum, and of IgA and IgM classes in jejunal aspirate and gut lavage fluid. Values of salivary IgA antibodies to gliadin were significantly higher in untreated coeliacs, though antibody values were generally low, with a large overlap between coeliac disease patients and control subjects. In treated patients, with proved histological recovery on gluten free diet, serum IgA antigliadin antibody values fell to control values, though serum IgG antigliadin antibody values remained moderately raised. 
In contrast, there was persistence of secretory antigliadin antibodies in treated patients (particularly IgM antibody) in both jejunal aspirate and gut lavage fluid. Antibody responses to betalactoglobulin and ovalbumin were similar to those for gliadin, including persistence of high intestinal antibody values in patients with treated coeliac disease. There was a positive correlation between antibody values in jejunal aspirate and gut lavage fluid, but not between saliva and jejunal aspirate; thus salivary antibodies do not reflect intestinal humoral immunity.", "corpus_id": 20716469, "score": -1, "title": "Dissociation between systemic and mucosal humoral immune responses in coeliac disease." }
{ "abstract": "Acknowledgments Many thanks to the Thom Jayne of Michigan State University for detailed comments on previous drafts of this review. Nevertheless, the opinions and judgments in this paper, as well as any errors and omissions, are solely the responsibility of the authors.", "corpus_id": 166385434, "title": "Small farm commercialisation in Africa: Reviewing the issues" }
{ "abstract": "T h e editors have asked me to be brief in this rejoinder, and I shall try to comply, though I am sure they will not think I have succeeded.’ Unfortunately, this requires me to say almost nothing about Small’s helpful and appropriate comments. I agree with most of them, am gratified that he found my critique a contribution, and would like to reserve any further defense or explanation of my position for private correspondence. The Alexandroff-Rosecrance-Stein reply, however, misunderstands and misrepresents my position so seriously that I feel compelled to reply to at least a good deal of it so as to clear away the confusion. The fundamental error lies in ascribing to me, as the central element in my critique, an idealist Collingwoodian philosophy of history, according to which the historian’s task is to think the thoughts of past men after them. My critique, they repeatedly insist, calls essentially for abandoning the study of concrete facts and events in the vain pursuit of understanding motives, ultimate intentions, broad intentional processes, and mental states-a retrograde step tending toward obscurantism and rendering the objective, reliajle study of international affairs impossible. I do not hold this idealist view of history. 1 am elsewhere clearly on record on this score (Schroeder, 1972: iv-ix). This view is neither stated nor implied in my critique, nor logically connected with it. In certain", "corpus_id": 153452032, "title": "A Final Rejoinder" }
{ "abstract": "This paper has two closely related purposes, both of which, if accomplished, may help to accelerate the development of international relations as an empirically based discipline. One is to identify the shifting and expanding membership of the international system during the 125 years between the end of the Napoleonic Wars and the outbreak of World War II. The other is to classify all such members of the system according to their attributed importance or status during the same period. In each case, the intent is not only to provide certain data which may be useful to the discipline, but to describe in considerable detail the procedures by which such data were gathered or, more accurately, “made.”", "corpus_id": 155015689, "score": -1, "title": "The Composition and Status Ordering of the International System: 1815–1940" }
{ "abstract": "Although the biochemical and genetic basis of lipid metabolism is clear in Arabidopsis, there is limited information concerning the relevant genes in soybean. To address this issue, here we constructed three-dimension genetic networks using six seed oil-related traits, fifty-two lipid-metabolism-related metabolites and 54,294 SNPs in at most 286 soybean accessions. As a result, 284 and 279 candidate genes were found by phenotypic and metabolic genome-wide association studies and multi-omics analyses, respectively, to be significantly associated with seed oil-related traits and metabolites; six seed oil-related traits were found by MCP and SCAD analyses to be significantly related to thirty-one metabolites. Among the above candidate genes, 36 genes were found to be associated with oil synthesis (27), amino acid synthesis (4) and TCA cycle (5), and four genes GmFATB1a, GmPDAT, GmPLDα1 and GmDAGAT1 are known oil-synthesis-related genes. Using the above information, 133 three-dimension genetic networks were constructed, in which 24 are known, e.g., pyruvate-GmPDAT-GmFATA2-oil content. Using these networks, GmPDAT, GmAGT and GmACP4 reveal the genetic relationships between pyruvate and the three major nutrients, and GmPDAT, GmZF351 and GmPgs1 reveal the genetic relationships between amino acids and seed oil content. In addition, GmCds1, along with average temperature in July and rainfall, influence seed oil content across years. This study provides a new approach for three-dimension network construction and new information for soybean seed oil improvement and gene function identification.", "corpus_id": 216645749, "title": "Three-dimension genetic networks among seed oil-related traits, metabolites and genes reveal the genetic foundations of oil synthesis in soybean." }
{ "abstract": "Significance One of the most important agronomic traits in crop breeding is yield, which includes increased seed size and weight in grain crops and leaf biomass in forage crops. In this work, we demonstrate that a transcription regulator encoded by the BIG SEEDS1 (BS1) gene from the model legume Medicago truncatula, negatively regulates primary cell proliferation in plants. The deletion of this gene in M. truncatula and down-regulation of its orthologs in soybean (Glycine max) lead to significant increases in the size of plant organs, including leaf and seed. Understanding the BS1 gene function and its regulatory mechanism offers an opportunity for increasing plant yield in legumes and other grain crops. Plant organs, such as seeds, are primary sources of food for both humans and animals. Seed size is one of the major agronomic traits that have been selected in crop plants during their domestication. Legume seeds are a major source of dietary proteins and oils. Here, we report a conserved role for the BIG SEEDS1 (BS1) gene in the control of seed size and weight in the model legume Medicago truncatula and the grain legume soybean (Glycine max). BS1 encodes a plant-specific transcription regulator and plays a key role in the control of the size of plant organs, including seeds, seed pods, and leaves, through a regulatory module that targets primary cell proliferation. Importantly, down-regulation of BS1 orthologs in soybean by an artificial microRNA significantly increased soybean seed size, weight, and amino acid content. Our results provide a strategy for the increase in yield and seed quality in legumes.", "corpus_id": 3827598, "title": "Increasing seed size and quality by manipulating BIG SEEDS1 in legume species" }
{ "abstract": "Losses of soil N through leaching and N2 fixation by legumes often are related to soil nitrate concentration. The seasonal distribution of soil ammonium and nitrate concentrations under ungrazed legume-grass and grass swards were evaluated on two experiments that were established in 1983 (Exp. 1) and in 1984 (Exp. 2). Treatments were white clover (Trifolium repens L.) (WC), red clover (Trifolium pratense L.) (RC), and birdsfoot trefoil (Lotus corniculatus L.) (BT), each grown with tall fescue (Festuca arundicacea Schreb.) (TF) at two legume proportions, and a pure stand of TF. The concentrations of both forms of N were measured in the top 20-cm layer during 2 years in Exp. 1 and for 1 year in Exp. 2. The concentrations of nitrate and ammonium were least in winter and spring, and greatest in summer. The concentration of nitrate for the mixtures decreased in the order WC-TF, RC-TF, and BT-TF in both summers of Exp. 1 but there were no mixture differences in Exp. 2. The concentration of soil ammonium was not affected by the treatments applied. We conclude that the concentration of soil nitrate usually was small for these swards but became greater and often dependent on species and legume proportion during summer. The concentration of soil ammonium also was greater in summer but was not affected by species or legume proportion.", "corpus_id": 40571588, "score": -1, "title": "Seasonal distribution of topsoil ammonium and nitrate under legume-grass and grass swards" }
{ "abstract": "The efficiency of Wireless Sensor Network (WSN) is measured in terms of deployment schemes of heterogeneous nodes for object capturing as data and routing protocol is utilized for transmitting the data to Base Station (BS). The battery power is constrained in the WSN. The paper proposed a multitier clustering multi-hop routing protocol for energy conservation. The WSN area is divided in different zones and sub-zones so that more Cluster Heads (CH) are formed to gather the data to its member nodes and aggregate the data and send to BS. The simulation result shows that network lifetime is improved. The protocol is used for monitoring the agriculture crop field and prevents the crops from animals.", "corpus_id": 228093390, "title": "Multi-Tier Cluster Based Smart Farming Using Wireless Sensor Network" }
{ "abstract": "Extensive research happening across the globe witnessed the importance of Wireless Sensor Network in the present day application world. In the recent past, various routing algorithms have been proposed to elevate WSN network lifetime. Clustering mechanism is highly successful in conserving energy resources for network activities and has become promising field for researches. However, the problem of unbalanced energy consumption is still open because the cluster head activities are tightly coupled with role and location of a particular node in the network. Several unequal clustering algorithms are proposed to solve this wireless sensor network multihop hot spot problem. Current unequal clustering mechanisms consider only intra- and intercluster communication cost. Proper organization of wireless sensor network into clusters enables efficient utilization of limited resources and enhances lifetime of deployed sensor nodes. This paper considers a novel network organization scheme, energy-efficient edge-based network partitioning scheme, to organize sensor nodes into clusters of equal size. Also, it proposes a cluster-based routing algorithm, called zone-based routing protocol (ZBRP), for elevating sensor network lifetime. Experimental results show that ZBRP out-performs interims of network lifetime and energy conservation with its uniform energy consumption among the cluster heads.", "corpus_id": 17315687, "title": "Zone-Based Routing Protocol for Wireless Sensor Networks" }
{ "abstract": "The naive Bayes classifier, currently experiencing a renaissance in machine learning, has long been a core technique in information retrieval. We review some of the variations of naive Bayes models used for text retrieval and classification, focusing on the distributional assumptions made about word occurrences in documents.", "corpus_id": 32800624, "score": -1, "title": "Naive (Bayes) at Forty: The Independence Assumption in Information Retrieval" }
{ "abstract": "Image retargeting aims to adjust the resolution and aspect ratio to an arbitrary size while preserving important content of the image. Usually multi-operator image retargeting demonstrates better generalization than single operator scheme due to heterogeneous characteristics of different regions in the image. Most existing multi-operator retargeting methods search the optimal operator at each step with exponential complexity and with the possibility of falling into local optimum. Therefore, in order to produce better results with lower computational costs, we formulate the multi-operator retargeting as a Markov decision-making process and apply Reinforcement Learning (RL) to achieve global optimum. Instead of using traditional image-level measures, we design a high-level semantic and aesthetic reward function to better match human visual perception. With the priori in reward, we further propose a weakly supervised Semantics and Aesthetics aware Multi-operator Image Retargeting (SAMIR) framework. Particularly, the semantic part of the reward helps to constrain the severe deformations that may occur during retargeting process, while the aesthetic part guarantees the sensory quality, which can effectively measure the perceptual effects of different operators on various image content. The operator of each step is learned in an end-to-end manner. In addition, retargeting can be performed in arbitrary target size, step size, and direction. The experiment results on both representative aesthetic datasets and retargeting datasets consistently show that our model outperforms the state-of-the-art methods.", "corpus_id": 215984094, "title": "Weakly Supervised Reinforced Multi-Operator Image Retargeting" }
{ "abstract": "Image retargeting has been applied to display images of any size via devices with various resolutions (e.g., cell phone and TV monitors). To fit an image with the target resolution, certain unimportant regions need to be deleted or distorted, and the key problem is to determine the importance of each pixel. Existing methods predict pixel-wise importance in a bottom-up manner via eye fixation estimation or saliency detection. In contrast, the proposed algorithm estimates the pixel-wise importance based on a top-down criterion where the target image maintains the semantic meaning of the original image. To this end, several semantic components corresponding to foreground objects, action contexts, and background regions are extracted. The semantic component maps are integrated by a classification guided fusion network. Specifically, the deep network classifies the original image as object or scene oriented, and fuses the semantic component maps according to classification results. The network output, referred to as the semantic collage with the same size as the original image, is then fed into any existing optimization method to generate the target image. Extensive experiments are carried out on the RetargetMe data set and S-Retarget database developed in this paper. Experimental results demonstrate the merits of the proposed algorithm over the state-of-the-art image retargeting methods.", "corpus_id": 49727224, "title": "Composing Semantic Collage for Image Retargeting" }
{ "abstract": "This book is an introduction to both offensive and defensive techniques of cyberdeception. Unlike most books on cyberdeception, this book focuses on methods rather than detection. It treats cyberdeception techniques that are current, novel, and practical, and that go well beyond traditional honeypots. It contains features friendly for classroom use: (1) minimal use of programming details and mathematics, (2) modular chapters that can be covered in many orders, (3) exercises with each chapter, and (4) an extensive reference list. Cyberattacks have grown serious enough that understanding and using deception is essential to safe operation in cyberspace. The deception techniques covered are impersonation, delays, fakes, camouflage, false excuses, and social engineering. Special attention is devoted to cyberdeception in industrial control systems and within operating systems. This material is supported by a detailed discussion of how to plan deceptions and calculate their detectability and effectiveness. Some of the chapters provide further technical details of specific deception techniques and their application. Cyberdeception can be conducted ethically and efficiently when necessary by following a few basic principles. This book is intended for advanced undergraduate students and graduate students, as well as computer professionals learning on their own. It will be especially useful for anyone who helps run important and essential computer systems such as critical-infrastructure and military systems.", "corpus_id": 1680075, "score": -1, "title": "Introduction to Cyberdeception" }
{ "abstract": "Abstract After careful consideration of the semantics of status categories for mineral species names, minor corrections and disambiguations are presented for a recent report on the nomenclature of the pyrochlore supergroup. The names betafite, elsmoreite, microlite, pyrochlore and roméite are allocated as group names within the pyrochlore supergroup. The status of the names bindheimite, bismutostibiconite, jixianite, monimolite, partzite, stetefeldtite and stibiconite is changed from ‘discredited’ to ‘questionable’ pending further research.", "corpus_id": 65081781, "title": "Clarification of status of species in the pyrochlore supergroup" }
{ "abstract": "A new scheme of nomenclature for the pyrochlore supergroup, approved by the CNMNC-IMA, is based on the ions at the A, B and Y sites. What has been referred to until now as the pyrochlore group should be referred to as the pyrochlore supergroup, and the subgroups should be changed to groups. Five groups are recommended, based on the atomic proportions of the B atoms Nb, Ta, Sb, Ti, and W. The recommended groups are pyrochlore, microlite, rom£ite, betafite, and elsmoreite, respectively. The new names are composed of two prefixes and one root name (identical to the name of the group). The first prefix refers to the dominant anion (or cation) of the dominant valence [or HO or □] at the Y site. The second prefix refers to the dominant cation of the dominant valence [or HO or □] at the A site. The prefix \"keno-\" represents \"vacancy\". Where the first and second prefixes are equal, then only one prefix is applied. Complete descriptions are missing for the majority of the pyrochlore-supergroup species. Only seven names refer to valid species on the grounds of their complete descriptions: oxycalciopyrochlore, hydropyrochlore, hydroxykenomicrolite, oxystannomicrolite, oxystibiomicrolite, hydroxycalciorom6ite, and hydrokenoelsmoreite. Fluornatromicrolite is an IMA-approved mineral, but the complete description has not yet been published. The following 20 names refer to minerals that need to be completely described in order to be approved as valid species: hydroxycalciopyrochlore, fiuornatropyrochlore, fluorcalciopyrochlore, fluorstrontiopyrochlore, fluorkenopyrochlore, oxynatropyrochlore, oxyplumbopyrochlore, oxyyttropyrochlore-(Y), kenoplumbopyrochlore, fluorcalciomicrolite, oxycalciomicrolite, kenoplumbomicrolite, hydromicrolite, hydrokenomicrolite, oxycalciobetafite, oxyuranobetafite, fluornatroromite, fluorcalcioromeite, oxycalcioromdite, and oxyplumborom£ite. For these, there are only chemical or crystal-structure data. 
Type specimens need to be defined. Potential candidates for several other species exist, but are not sufficiently well characterized to grant them any official status. Ancient chemical data refer to wet-chemical analyses and commonly represent a mixture of minerals. These data were not used here. All data used represent results of electron-microprobe analyses or were obtained by crystal-structure refinement. We also verified the scarcity of crystal-chemical data in the literature. There are crystal-structure determinations published for only nine pyrochlore-supergroup minerals: hydropyrochlore, hydroxykenomicrolite, hydroxycalcioromite, hydrokenoelsmoreite, hydroxycalciopyrochlore, fluorcalciopyrochlore, kenoplumbomicrolite, oxycalciobetafite, and fluornatrorom£ite. The following mineral names are now discarded: alumotungstite, bariomicrolite, bariopyrochlore, bindheimite, bismutomicrolite, bismutopyrochlore, bismutostibiconite, calciobetafite, ceriopyrochlore-(Ce), cesstibtantite, ferritungstite, jixianite, kalipyrochlore, monimolite, natrobistantite, partzite, plumbobetafite, plumbomicrolite, plumbopyrochlore, stannomicrolite, stetefeldtite, stibiconite, stibiobetafite, stibiomicrolite, strontiopyrochlore, uranmicrolite, uranpyrochlore, yttrobetafite-(Y), and yttropyrochlore-(Y).", "corpus_id": 6332717, "title": "THE PYROCHLORE SUPERGROUP OF MINERALS: NOMENCLATURE" }
{ "abstract": "In organic photovoltaics, the mechanism by which free electrons and holes are generated, overcoming the Coulomb attraction, is a currently much debated topic. To elucidate this mechanism at a molecular level, we carried out a combined electronic structure and quantum dynamical analysis that captures the elementary events from the exciton dissociation to the free carrier generation at polymer/fullerene donor/acceptor heterojunctions. Our calculations show that experimentally observed efficient charge separations can be explained by a combination of two effects: First, the delocalization of charges which substantially reduces the Coulomb barrier, and second, the vibronically hot nature of the charge-transfer state which promotes charge dissociation beyond the barrier. These effects facilitate an ultrafast charge separation even at low-band-offset heterojunctions.", "corpus_id": 9766306, "score": -1, "title": "Ultrafast charge separation in organic photovoltaics enhanced by charge delocalization and vibronically hot exciton dissociation." }
{ "abstract": "Aims:  Survival of Erwinia amylovora, causal agent of fire blight in pome fruits and other rosaceous plants, was monitored inside mature apples calyces under some storage conditions utilized in fruit.", "corpus_id": 32409720, "title": "Survival of Erwinia amylovora in mature apple fruit calyces through the viable but nonculturable (VBNC) state" }
{ "abstract": "The phytosanitary risk associated with the movement of export-quality apple fruit to countries where fire blight does not occur is reassessed based upon additional data available since 1998 and clarification or correction of previously misinterpreted data present in the literature. The low epiphytic fitness of Erwinia amylovora (Ea) on apple fruit, the documented low incidence of viable Ea populations on mature apple fruit and the lack of a documented pathway by which susceptible host material could become infected from fruit-borne inoculum remain unchanged, and support the view that movement of Ea via commercial apple fruit is highly unlikely. With this new information, we updated a previously published model to re-estimate the likelihood of fire blight outbreaks in new areas because of commercial fruit shipment. This likelihood decreased in every scenario, and ranged from one outbreak in 5217 years to one in 753,144 years. By using the corrected and newly published data and by making assumptions based upon documented pathogen biology, the model gives more robust statistical support to the opinion that the risk of importing Ea on commercial apple fruit and the concomitant risk of establishing new outbreaks of fire blight is so small as to be insignificant. Published by Elsevier Ltd.", "corpus_id": 2268690, "title": "An updated pest risk assessment for spread of Erwinia amylovora and fire blight via commercial apple fruit" }
{ "abstract": "The dairy cows at the Estonian Agricultural University appeared to have an extremely low selenium status. The selenium level was 5.6 micrograms/l in whole blood and 3.2 micrograms/l in milk, on average. The blood glutathione peroxidase was consequently extremely low. The effects of organic selenium (selenized yeast) and sodium selenite were compared in a feeding experiment on 100 dairy cows. Selenium incorporation, udder health and the in vitro function of blood neutrophils were monitored. Supplementation of the feed either with 0.2 ppm organic selenium or sodium selenite for 8 weeks, increased the blood selenium level (geometric mean) within this period from the back-ground level (about 5.6 micrograms/l) to 167 (Se-yeast) and to 91 micrograms/l (selenite). The respective change in whole blood glutathione peroxidase (GSH-PX) was from 0.22 to 3.0 (Se-yeast) and to 2.3 (selenite) microKat/g Hb. Blood GSH-PX continued to increase up to 10 weeks after the supplementation was stopped. The bioavailability of yeast selenium was superior to selenite: the relative bioavailability (selenite = 1) of yeast selenium was 1.4 if blood GSH-PX, 1.9 if blood selenium, and 2.7 if milk selenium was used as the response criterion. Selenium-supplementation showed a positive effect on udder health. The percentage of quarters harbouring mastitis pathogens dropped from 22.9 to 13.0 in the Se-yeast group and from 18.4 to 7.4 in the selenite group during the supplementation period. The effect of selenium on mastitis was also reflected as a decrease in the output of milk somatic cells and N-acetyl-beta-D-glucosaminidase (NAGase). The time-luminescence profile of zymosan-induced activity of blood neutrophils became skewed to the left in Se-supplemented cows.", "corpus_id": 22599804, "score": -1, "title": "Comparisons of selenite and selenium yeast feed supplements on Se-incorporation, mastitis and leucocyte function in Se-deficient dairy cows." }
{ "abstract": "Background: FT program (FT) is a multimodal approach used to enhance postoperative rehabilitation and accelerate recovery. It was 1st described in open heart surgery, then modified and applied successfully in colorectal surgery. FT program was described in liver resection for the 1st time in 2008. Although the program has become widely accepted, it has not yet been considered the standard of care in liver surgery. Objectives: we performed this systematic review and meta-analysis to evaluate the impact of using the FT program compared to the traditional care (TC), on the main clinical and surgical outcomes for patients who underwent elective liver resection. Methods: PubMed/Medline, Scopus, and Cochran databases were searched to identify eligible articles that compared FT with TC in elective liver resection to be included in this study. Subgroup meta-analysis between laparoscopic and open surgical approaches to liver resection was also conducted. Quality assessment was performed for all the included studies. Odds ratios (ORs) and mean differences (MDs) were considered as a summary measure of evaluating the association in this meta-analysis for dichotomous and continuous data, respectively. A 95% confidence interval (CI) was reported for both measures. I 2 was used to assess the heterogeneity across studies. Results: From 2008 to 2015, 3 randomized controlled trials (RCTs) and 5 cohort studies were identified, including 394 and 416 patients in the FT and TC groups, respectively. The length of hospital stay (LoS) was markedly shortened in both the open and laparoscopic approaches within the FT program (P < 0.00001). The reduced LoS was accompanied by accelerated functional recovery (P = 0.0008) and decreased hospital costs, with no increase in readmission, morbidity, or mortality rates. 
Moreover, significant results were found within the FT group such as reduced operative time (P = 0.03), lower intensive care unit admission rate (P < 0.00001), early bowel opening (P ⩽ 0.00001), and rapid normal diet restoration (P ⩽ 0.00001). Conclusion: FT program is safe, feasible, and can be applied successfully in liver resection. Future RCTs on controversial issues such as multimodal analgesia and adherence rate are needed. Specific FT guidelines should be developed for liver resection.", "corpus_id": 8380425, "title": "Fast track program in liver resection: a PRISMA-compliant systematic review and meta-analysis" }
{ "abstract": "The measurement of quality of life is becoming more important in the evaluation of medical technologies and pharmaceuticals. Particularly when the several available therapies have similar effects on survival, quality of life measures may help decide which should be the therapy of choice. The Recovery Study utilized a multidisciplinary array of indicators of health-related quality of life and recovery. This paper reports factor analyses of 58 outcome measures on a study group of 469 persons who had undergone coronary artery bypass or cardiac valve surgery 6-months previously. The factor analyses revealed 5 orthogonal dimensions. We have named them: low morale, symptoms of illness, neuropsychological function, interpersonal relationships, and economic-employment. The data argue that health-related quality of life is a multidimensional construct, and that these dimensions can be measured quantitatively with relatively simple interview and questionnaire approaches. The next research step is to determine whether the five dimensions of post-operative quality of life have different pre-operative predictors, and whether intervention on these predictors can improve the recovery and rehabilitation process.", "corpus_id": 2883932, "title": "The measurement of health-related quality of life: major dimensions identified by factor analysis." }
{ "abstract": "Quality of life, as a concept pertaining to patients undergoing coronary artery bypass graft surgery, is explored. A theory of life quality, based on the capacity of the patient to realize his own life plans, is proposed, explaining the role of differing factors, general and individualized. Using the proposed theory, three avenues of investigation are suggested, each aimed at the more effective use of surgery in improving life qualit.", "corpus_id": 5871738, "score": -1, "title": "On the Quality of Life: Some Philosophical Reflections" }
{ "abstract": "Abstract Based on the integration of the group socialization theory and the individual–context interaction model, we examined whether moral disengagement mediated the association between deviant peer affiliation and bullying perpetration and whether this mediation model was moderated by moral identity. A total of 438 adolescents participated in the current study. They completed measures regarding deviant peer affiliation, bullying perpetration, moral disengagement, and moral identity. Deviant peer affiliation positively predicted adolescents’ bullying perpetration at six months later and this relationship was partially mediated by moral disengagement. Moral identity did not moderate the direct relationship between deviant peer affiliation and adolescents’ bullying perpetration. Moral identity moderated the relationship between moral disengagement and adolescents’ bullying perpetration and in turn moderated the indirect relationship between deviant peer affiliation and bullying perpetration. Specifically, the relationship between moral disengagement and bullying perpetration and the indirect relationship between deviant peer affiliation and bullying perpetration via moral disengagement both became nonsignificant for adolescents with high moral identity.", "corpus_id": 209166168, "title": "Deviant Peer Affiliation and Bullying Perpetration in Adolescents: The Mediating Role of Moral Disengagement and the Moderating Role of Moral Identity" }
{ "abstract": "Moral identity has been positively linked to prosocial behaviors and negatively linked to antisocial behaviors; but, the processes by which it is linked to such outcomes are unclear. The purpose of the present study was to examine moral identity not only as a predictor, but also as a moderator of relationships between other predictors (moral disengagement and self-regulation) and youth outcomes (prosocial and antisocial behaviors). The sample consisted of 384 adolescents (42 % female), ages 15–18 recruited from across the US using an online survey panel. Latent variables were created for moral identity, moral disengagement, and self-regulation. Structural equation models assessed these latent variables, and interactions of moral identity with moral disengagement and self-regulation, as predictors of prosocial (charity and civic engagement) and antisocial (aggression and rule breaking) behaviors. None of the interactions were significant predicting prosocial behaviors. For antisocial behaviors, the interaction between moral identity and moral disengagement predicted aggression, while the interaction between moral identity and self-regulation was significant in predicting aggression and rule breaking. Specifically, at higher levels of moral identity, the positive link between moral disengagement and aggression was weaker, and the negative link between self-regulation and both antisocial behaviors was weaker. Thus, moral identity may buffer against the maladaptive effects of high moral disengagement and low self-regulation.", "corpus_id": 14487789, "title": "Moral Identity and Adolescent Prosocial and Antisocial Behaviors: Interactions with Moral Disengagement and Self-regulation" }
{ "abstract": "Performance bugs are programming errors that slow down program execution. While existing techniques can detect various types of performance bugs, a crucial and practical aspect of performance bugs has not received the attention it deserves: how likely are developers to fix a performance bug? In practice, fixing a performance bug can have both benefits and drawbacks, and developers fix a performance bug only when the benefits outweigh the drawbacks. Unfortunately, for many performance bugs, the benefits and drawbacks are difficult to assess accurately. This paper presents C aramel , a novel static technique that detects and fixes performance bugs that have non-intrusive fixes likely to be adopted by developers. Each performance bug detected by C aramel is associated with a loop and a condition. When the condition becomes true during the loop execution, all the remaining computation performed by the loop is wasted. Developers typically fix such performance bugs because these bugs waste computation in loops and have non-intrusive fixes: when some condition becomes true dynamically, just break out of the loop. Given a program, C aramel detects such bugs statically and gives developers a potential source-level fix for each bug. We evaluate C aramel on real-world applications, including 11 popular Java applications (e.g., Groovy, Log4J, Lucene, Struts, Tomcat, etc) and 4 widely used C/C++ applications (Chromium, GCC, Mozilla, and MySQL). C aramel finds 61 new performance bugs in the Java applications and 89 new performance bugs in the C/C++ applications. Based on our bug reports, developers so far have fixed 51 and 65 performance bugs in the Java and C/C++ applications, respectively. Most of the remaining bugs are still under consideration by developers.", "corpus_id": 1215693, "score": -1, "title": "CARAMEL: Detecting and Fixing Performance Problems That Have Non-Intrusive Fixes" }
{ "abstract": "Two orthogonal standing acoustic waves, generated by piezoelectric excitation, can form a two-dimensional pressure field in microfluidic devices. A phase difference of the excitation waves can be employed to rotate spherical µm-sized silica particles by a torque mediated through the viscous boundary δ around the particle. The measurement of the rotational rate is, so far, limited to high-speed cameras and their frame rate, and gets increasingly difficult when the sphere gets smaller. We report here a new method for measuring the rotational rate of µm sized spherical particles. We utilize an optical trap with high-speed position detection to overcome the frame rate limitation of wide field image recording. The power spectrum of an optically trapped, rotating particle reveals additional peaks corresponding to the rotational frequencies—compared to a non-rotating particle. We validate our method at low rotational rates against high-speed video observation. To demonstrate the potential of this method we addressed a recent controversy about the rotation of particles with a relatively large viscous boundary layer δ. We measured steady-state rotational rates up to 229 Hz (13.8 × 103 rpm) for a particle with a radius R ≈ δ. Recent numerical research suggests that in this regime the existing theoretical approach (valid for R≫δ ) overpredicts the steady-state rotational rate by a factor of 10. With our new method we also confirm the numerical results experimentally.", "corpus_id": 233813077, "title": "Rotational speed measurements of small spherical particles driven by acoustic viscous torques utilizing an optical trap" }
{ "abstract": "We present the first numerical simulation setup for the calculation of the acoustic viscous torque on arbitrarily shaped micro-particles inside general acoustic fields. Under typical experimental conditions, the particle deformation plays a minor role. Therefore, the particle is modeled as a rigid body which is free to perform any time-harmonic and time-averaged translation and rotation. Applying a perturbation approach, the viscoacoustic field around the particle is resolved to obtain the time-averaged driving forces for a subsequent acoustic streaming simulation. For some acoustic fields, the near-boundary streaming around the fluid-suspended particle induces surface forces on the nonrotating particle that integrate into a non-zero acoustic viscous torque. In the equilibrium state, this torque is compensated by an equal and opposite drag torque due to the particle rotation. The rotation-induced flow field is superimposed on the acoustic streaming field to obtain the total fluid motion around the rotating particle. In this work, we only consider cases within the Rayleigh limit even though the presented numerical model is not strictly limited to this regime. After a validation by analytical solutions, the numerical model is applied to challenging experimental cases. For an arbitrary particle density, we consider particle sizes that can be comparable to the viscous boundary layer thickness. This important regime has not been studied before because it lies beyond the validity limits of the available analytical solutions. The detailed numerical analysis in this work predicts nonintuitive phenomena, including an inversion of the rotation direction. Our numerical model opens the door to explore a wide range of experimentally relevant cases, including non-spherical particle rotation. 
As a step toward application fields such as micro-robotics, the rotation of a prolate ellipsoid is studied.", "corpus_id": 5111770, "title": "Numerical simulation of micro-particle rotation by the acoustic viscous torque." }
{ "abstract": "We present a numerical study of thermoviscous effects on the acoustic streaming flow generated by an ultrasound standing-wave resonance in a long straight microfluidic channel containing a Newtonian fluid. These effects enter primarily through the temperature and density dependence of the fluid viscosity. The resulting magnitude of the streaming flow is calculated and characterized numerically, and we find that even for thin acoustic boundary layers, the channel height affects the magnitude of the streaming flow. For the special case of a sufficiently large channel height, we have successfully validated our numerics with analytical results from 2011 by Rednikov and Sadhal for a single planar wall. We analyzed the time-averaged energy transport in the system and the time-averaged second-order temperature perturbation of the fluid. Finally, we have made three main changes in our previously published numerical scheme to improve the numerical performance: (i) The time-averaged products of first-order variables in the time-averaged second-order equations have been recast as flux densities instead of as body forces. (ii) The order of the finite-element basis functions has been increased in an optimal manner. (iii) Based on the International Association for the Properties of Water and Steam (IAPWS 1995, 2008, and 2011), we provide accurate polynomial fits in temperature for all relevant thermodynamic and transport parameters of water in the temperature range from 10 to 50 °C.", "corpus_id": 13327062, "score": -1, "title": "Numerical study of thermoviscous effects in ultrasound-induced acoustic streaming in microchannels." }
{ "abstract": "Abstract The yeast Nhp6A protein (yNhp6A) is a member of the eukaryotic HMGB family of chromatin factors that enhance apparent DNA flexibility. yNhp6A binds DNA nonspecifically with nM affinity, sharply bending DNA by >60°. It is not known whether the protein binds to unbent DNA and then deforms it, or if bent DNA conformations are ‘captured’ by protein binding. The former mechanism would be supported by discovery of conditions where unbent DNA is bound by yNhp6A. Here, we employed an array of conformational probes (FRET, fluorescence anisotropy, and circular dichroism) to reveal solution conditions in which an 18-base-pair DNA oligomer indeed remains bound to yNhp6A while unbent. In 100 mM NaCl, yNhp6A-bound DNA unbends as the temperature is raised, with no significant dissociation of the complex detected up to ∼45°C. In 200 mM NaCl, DNA unbending in the intact yNhp6A complex is again detected up to ∼35°C. Microseconds-resolved laser temperature-jump perturbation of the yNhp6a–DNA complex revealed relaxation kinetics that yielded unimolecular DNA bending/unbending rates on timescales of 500 μs−1 ms. These data provide the first direct observation of bending/unbending dynamics of DNA in complex with yNhp6A, suggesting a bind-then-bend mechanism for this protein.", "corpus_id": 59410884, "title": "Evidence for a bind-then-bend mechanism for architectural DNA binding protein yNhp6A" }
{ "abstract": "The complete catalytic cycle of EcoRV endonuclease has been observed by combining fluorescence anisotropy with fluorescence resonance energy transfer (FRET) measurements. Binding, bending, and cleavage of substrate oligonucleotides were monitored in real time by rhodamine-x anisotropy and by FRET between rhodamine and fluorescein dyes attached to opposite ends of a 14-mer DNA duplex. For the cognate GATATC site binding and bending are found to be nearly simultaneous, with association and bending rate constants of (1.45-1.6) x 10(8) M(-1) s(-1). On the basis of the measurement of k(off) by a substrate-trapping approach, the equilibrium dissociation constant of the enzyme-DNA complex in the presence of inhibitory calcium ions was calculated as 3.7 x 10(-12) M from the kinetic constants. Further, the entire DNA cleavage reaction can be observed in the presence of catalytic Mg(2+) ions. These measurements reveal that the binding and bending steps occur at equivalent rates in the presence of either Mg(2+) or Ca(2+), while a slow decrease in fluorescence intensity following bending corresponds to k(cat), which is limited by the cleavage and product dissociation steps. Measurement of k(on) and k(off) in the absence of divalent metals shows that the DNA binding affinity is decreased by 5000-fold to 1.4 x 10(-8) M, and no bending could be detected in this case. Together with crystallographic studies, these data suggest a model for the induced-fit conformational change in which the role of divalent metal ions is to stabilize the sharply bent DNA in an orientation suitable for accessing the catalytic transition state.", "corpus_id": 746431, "title": "Simultaneous DNA binding and bending by EcoRV endonuclease observed by real-time fluorescence." }
{ "abstract": "The fertility level in China is a matter of uncertainty and controversy. This paper applies Preston and Coale’s (1982) variable-r method to assess the fertility level in China. By using data from China’s 1990 and 2000 censuses as well as annual population change surveys, the variable-r method confirms that Chinese fertility has reached a level well below replacement.", "corpus_id": 5554280, "score": -1, "title": "An assessment of China’s fertility level using the variable-r method" }
{ "abstract": "Recording the sequence of events that lead to a failure of a web application can be an effective aid for debugging. Nevertheless, a recording of an event sequence may include many events that are not related to a failure, and this may render debugging more difficult. To address this problem, we have adapted Delta Debugging to function on recordings of web applications, in a manner that lets it identify and discard portions of those recordings that do not influence the occurrence of a failure. We present the results of three empirical studies that show that (1) recording reduction can achieve significant reductions in recording size and replay time on actual web applications obtained from developer forums, (2) reduced recordings do in fact help programmers locate faults significantly more efficiently than, and no less effectively than, non-reduced recordings, and (3) recording reduction produces even greater reductions on larger, more complex applications.", "corpus_id": 9212076, "title": "On the use of delta debugging to reduce recordings and facilitate debugging of web applications" }
{ "abstract": "We present Sikuli, a visual approach to search and automation of graphical user interfaces using screenshots. Sikuli allows users to take a screenshot of a GUI element (such as a toolbar button, icon, or dialog box) and query a help system using the screenshot instead of the element's name. Sikuli also provides a visual scripting API for automating GUI interactions, using screenshot patterns to direct mouse and keyboard events. We report a web-based user study showing that searching by screenshot is easy to learn and faster to specify than keywords. We also demonstrate several automation tasks suitable for visual scripting, such as map navigation and bus tracking, and show how visual scripting can improve interactive help systems previously proposed in the literature.", "corpus_id": 6944008, "title": "Sikuli: using GUI screenshots for search and automation" }
{ "abstract": "Cross-device interactions involve input and output on multiple computing devices. Implementing and reasoning about interactions that cover multiple devices with a diversity of form factors and capabilities can be complex. To assist developers in programming cross-device interactions, we created DemoScript, a technique that automatically analyzes a cross-device interaction program while it is being written. DemoScript visually illustrates the step-by-step execution of a selected portion or the entire program with a novel, automatically generated cross-device storyboard visualization. In addition to helping developers understand the behavior of the program, DemoScript also allows developers to revise their program by interactively manipulating the cross-device storyboard. We evaluated DemoScript with 8 professional programmers and found that DemoScript significantly improved development efficiency by helping developers interpret and manage cross-device interactions; it also encouraged developers to think through the script during the development process.", "corpus_id": 14926503, "score": -1, "title": "Enhancing Cross-Device Interaction Scripting with Interactive Illustrations" }
{ "abstract": "Authorship attribution supported by statistical or computational methods has a long history starting from the 19th century and is marked by the seminal study of Mosteller and Wallace (1964) on the ...", "corpus_id": 215856076, "title": "A survey of modern authorship attribution methods" }
{ "abstract": "The identification of authorship falls into the category of style classification, an interesting sub-field of text categorization that deals with properties of the form of linguistic expression as opposed to the content of a text. Various feature sets and classification methods have been proposed in the literature, geared towards abstracting away from the content of a text, and focusing on its stylistic properties. We demonstrate that in a realistically difficult authorship attribution scenario, deep linguistic analysis features such as context free production frequencies and semantic relationship frequencies achieve significant error reduction over more commonly used \"shallow\" features such as function word frequencies and part of speech trigrams. Modern machine learning techniques like support vector machines allow us to explore large feature vectors, combining these different feature sets to achieve high classification accuracy in style-based tasks.", "corpus_id": 2968704, "title": "Linguistic correlates of style: authorship classification with deep linguistic analysis features" }
{ "abstract": "This work examines and attempts to overcome issues caused by the lack of formal standardisation when defining text categorisation techniques and detailing how they might be appropriately integrated with each other. Despite text categorisation’s long history the concept of automation is relatively new, coinciding with the evolution of computing technology and subsequent increase in quantity and availability of electronic textual data. Nevertheless insufficient descriptions of the diverse algorithms discovered have led to an acknowledged ambiguity when trying to accurately replicate methods, which has made reliable comparative evaluations impossible. \n \nExisting interpretations of general data mining and text categorisation methodologies are analysed in the first half of the thesis and common elements are extracted to create a distinct set of significant stages. Their possible interactions are logically determined and a unique universal architecture is generated that encapsulates all complexities and highlights the critical components. A variety of text related algorithms are also comprehensively surveyed and grouped according to which stage they belong in order to demonstrate how they can be mapped. \n \nThe second part reviews several open-source data mining applications, placing an emphasis on their ability to handle the proposed architecture, potential for expansion and text processing capabilities. Finding these inflexible and too elaborate to be readily adapted, designs for a novel framework are introduced that focus on rapid prototyping through lightweight customisations and reusable atomic components. \n \nBeing a consequence of inadequacies with existing options, a rudimentary implementation is realised along with a selection of text categorisation modules. Finally a series of experiments are conducted that validate the feasibility of the outlined methodology and importance of its composition, whilst also establishing the practicality of the framework for research purposes. The simplicity of experiments and results gathered clearly indicate the potential benefits that can be gained when a formalised approach is utilised.", "corpus_id": 129153, "score": -1, "title": "A modular architecture for systematic text categorisation" }
{ "abstract": "The advances in thoracic procedures require optimum lung separation to provide adequate room for surgical access. This can be achieved using either a double-lumen tube (DLT) or a bronchial blocker (BB). Most thoracic anesthesiologists prefer the use of DLT. However, lung separation in patients with potential difficult airway can be achieved using either BB through a single lumen tube or placement of a DLT over a tube exchanger or a fiberoptic bronchoscope. Numerous videolaryngoscopes (VL) have been introduced offering both optical and video options to visualize the glottis. Many studies reported improved glottis visualization and easier DLT intubation in patients with normal and potential difficult airway. However, these studies have a wide diversity of outcomes, which may be attributed to the differences in their designs and the prior experience of the operators in using the different devices. In the present review, we present the main outcomes of the available publications, which have addressed the use of VL-guided DLT intubation. Currently, there is enough evidence supporting using VL for DLT intubation in patients with predicted and unanticipated difficult airway. In conclusion, the use of VL could offer an effective method of DLT placement for lung separation in patients with the potential difficult airway.", "corpus_id": 29056614, "title": "Videolaryngoscopes for placement of double lumen tubes: Is it time to say goodbye to direct view?" }
{ "abstract": "The role of various airway adjuncts in the management of difficult airway has been described in the literature. Bonfils rigid fiberscope is one of the airway assist devices widely used for endotracheal intubation in individuals with cervical instability warranting limited neck movements. With our experience in the utilization of Bonfils for single lumen endotracheal tube placement, we are increasingly using it for double lumen endobronchial (DLT) intubation as well. We would like to describe our experience in the use of Bonfils for DLT placement and outline the merits and limitations of the other suitable airway assist devices in this report. The double lumen tube has to be modified by decreasing the length of the DLT to accommodate the Bonfils fiberscope, and this is applicable only to certain types of double lumen tubes, e.g., the Bronchocath.", "corpus_id": 3032839, "title": "Bonfils assisted double lumen endobronchial tube placement in an anticipated difficult airway" }
{ "abstract": "Preface. Acknowledgements. Readings Logistics. Key to Symbols. Part I: Phonetics and Phonology: 1. How Are Sounds Made? The Production of Obstruents. 2. Introducing Phonology. Assimilation. 3. Sonorant Consonants. 4. Natural Classes of Sounds: Distinctive Features. 5. Vowel Sounds: Cardinal Vowels. 6. Phonological Processes Involving Vowel Features. 7. The Vowels of English. 8. The Timing Tier. The Great Vowel Shift. Part II: Suprasegmental Structure: 9. The Syllable. 10. Syllable Complexity: English Phonotactics. 11. The Phenomenon of Stress: Rhythm. 12. Metrical Principles and Parameters. 13. Syllable Weight. Further Metrical Machinery. 14. Tonal Phonology. Part III: Advanced Theory: 15. Modes of Rule Application. The Cycle. 16. Domains of Rule Application: Lexical and Prosodic Phonology. 17. Aspects of Lexical Representations: Underspecification, Markedness and Feature Geometry. 18. Rules and Derivations. 19. Constraints: Optimality Theory. 20. Looking Back and Moving On. References. Glossary. Index of Languages. Index of Names. Index of Subjects.", "corpus_id": 60084723, "score": -1, "title": "A course in phonology" }
{ "abstract": "This paper presents a hybrid frequency-time domain methodology for the modeling of grounding systems considering the nonlinear effect of soil ionization. The method is validated by analyzing two typical grounding systems under high soil ionization effects. It is clear from the gotten results the reduction effect of the so-called grounding system ldquoequivalent impedancerdquo when soil ionization takes place.", "corpus_id": 613613, "title": "Grounding Systems Modeling Including Soil Ionization" }
{ "abstract": "This paper presents the main characteristics of a frequency-domain methodology developed for electromagnetic transients calculation. The method incorporates the simultaneous use of some linear and nonlinear elements typically employed in network analysis, such as RLC elements, transmission lines, quadripoles, single-phase transformers, switches, arresters, etc., with elements of the type \"electromagnetic field sources,\" with the format of cylindrical electrodes. This combination of models is appropriate for many studies (e.g., for the accurate and optimized joint simulation of towers, aerial cables, grounding systems, and some utilities and facilities, interconnected or electromagnetically coupled). The method is applied for computing the consequent overvoltages in a typical 138-kV three-phase transmission line stroked by an atmospheric discharge", "corpus_id": 1974723, "title": "A Methodology for Electromagnetic Transients Calculation—An Application for the Calculation of Lightning Propagation in Transmission Lines" }
{ "abstract": "The paper presents the first investigation results on the effects of lightning stroke on medium voltage (MV) installations' earthing systems, connected together with the metal shields of the MV distribution grid cables. The study enables evaluation of the distribution of the lightning current among interconnected earth electrodes in order to assess whether the interconnection, usually done for reducing earth potential rise during an earth fault, can give rise to dangerous situations far from the installation hit by the lightning stroke. Two case studies of direct lightning stroke are presented and discussed: two interconnected MV substations of the MV grid; a high voltage/medium voltage (HV/MV) station connected with a MV substation.", "corpus_id": 1909765, "score": -1, "title": "Lightning-current distribution in MV grids interconnected earthing systems" }
{ "abstract": "The purpose of this paper is to study the effect of nano-bismuth ferrite (BiFeO3) on the electrical properties of low-density polyethylene (LDPE) under magnetic-field treatment at different temperatures. BiFeO3/LDPE nanocomposites with 2% mass fraction were prepared by the melt-blending method, and their electrical properties were studied. The results showed that compared with LDPE alone, nanocomposites increased the crystal concentration of LDPE and the spherulites of LDPE. Filamentous flake aggregates could be observed. The spherulite change was more obvious under high-temperature magnetization. An agglomerate phenomenon appeared in the composite, and the particle distribution was clear. Under high-temperature magnetization, BiFeO3 particles were increased and showed a certain order, but the change for room-temperature magnetization was not obvious. The addition of BiFeO3 increased the crystallinity of LDPE. Although the crystallinity decreased after magnetization, it was higher than that of LDPE. An AC test showed that the breakdown strength of the composite was higher than that of LDPE. The breakdown strength increased after magnetization. The increase of breakdown strength at high temperature was less, but the breakdown field strength of the composite was higher than that of LDPE. Compared with LDPE, the conductive current of the composite was lower. So, adding BiFeO3 could improve the dielectric properties of LDPE. The current of the composite decayed faster with time. The current decayed slowly after magnetization.", "corpus_id": 237339844, "title": "Investigation of Electrical Properties of BiFeO3/LDPE Nanocomposite Dielectrics with Magnetization Treatments" }
{ "abstract": "Iron Oxide (Fe3O4) nanoparticles were deposited on the surface of low density polyethylene (LDPE) particles by solvothermal method. A magnetic field was introduced to the preparation of Fe3O4/LDPE composites, and the influences of the magnetic field on thermal conductivity and dielectric properties of composites were investigated systematically. The Fe3O4/LDPE composites treated by a vertical direction magnetic field exhibited a high thermal conductivity and a large dielectric constant at low filler loading. The enhancement of thermal conductivity and dielectric constant is attributed to the formation of the conductive chains of Fe3O4 in LDPE matrix under the action of the magnetic field, which can effectively enhance the heat flux and interfacial polarization of the Fe3O4/LDPE composites. Moreover, the relatively low dielectric loss and low conductivity achieved are attributed to the low volume fraction of fillers and excellent compatibility between Fe3O4 and LDPE. Of particular note is the dielectric properties of Fe3O4/LDPE composites induced by the magnetic field also retain good stability across a wide temperature range, and this contributes to the stability and lifespan of polymer capacitors. All the above-mentioned properties along with the simplicity and scalability of the preparation for the polymer nanocomposites make them promising for the electronics industry.", "corpus_id": 8956073, "title": "Enhanced Thermal Conductivity and Dielectric Properties of Iron Oxide/Polyethylene Nanocomposites Induced by a Magnetic Field" }
{ "abstract": "Vibration is increasingly becoming a problem as machine speeds have increased and paper quality requirements have risen along with increased competition. Balance grade G1 is requested and more time is spent on balancing the rolls in the machine. However, by balancing, mills typically measure only the MD vibration, especially with rigid assembled rolls (suction rolls, deflection compensated rolls, press rolls). Without measuring the vibration in all three directions, you don't know the exact contribution of the rotating rolls to paper machine vibration. This is why a complete vibration study must be performed.", "corpus_id": 47244582, "score": -1, "title": "Vibration Analysis" }
{ "abstract": "The behaviour of a real-time system that interacts repeatedly with its environment is most succinctly specified by its possible traces, or histories. We present a way of using the refinement calculus for developing real-time programs from requirements expressed in this form. Our trace-based specification statements and target language constructs constrain the traces of system variables, rather than updating them destructively like the usual state-machine model. The only variable that is updated is a special current-time variable. The resulting calculus allows refinement from formal specifications with hard real-time requirements, to high-level language programs annotated with precise timing constraints.", "corpus_id": 18167504, "title": "A Real-Time Refinement Calculus that Changes only Time" }
{ "abstract": "A refinement calculus for the development of real-time systems is presented. The calculus is based upon a wide-spectrum language called the temporal agent model (TAM), within which both functional and timing properties can be expressed in either abstract or concrete terms. A specification-oriented semantics for the language is given. Program development is considered as a refinement process, i.e. calculation of a structured program from an unstructured specification. A calculus of decomposition is defined. An example program is developed.", "corpus_id": 1695662, "title": "A Specification-Oriented Semantics for the Refinement of Real-Time Systems" }
{ "abstract": "The authors describe the Maintainable Real-Time System, a fault-tolerant distributed system for process control, developed under the Mars project started in 1980 at the Technische Universitat Berlin. They explore the characteristics of distributed real-time systems and then present the Mars approach to real-time process control, its architectural design and implementation, and one of its applications. The authors focus on the maintainability of the Mars architecture, describe the Mars operating system, and discuss timing analysis. The control of a rolling mill that produces metal plates and bars is examined.<<ETX>>", "corpus_id": 1877449, "score": -1, "title": "Distributed fault-tolerant real-time systems: the Mars approach" }
{ "abstract": "The physical condition of the motor function of a musical performer is determined by the habits that musicians acquire right at the beginning of their professional training. A large percentage of instrumental musicians' health problems are caused by their occupational activities. This research work aims then to identify musculoskeletal disorders in pianists and guitarists and determine their association to anxiety levels. The study was conducted on 36 pianists and guitarists of both sexes, using the Nordic Musculoskeletal Questionnaire for wrists and hands to make a medical diagnosis, and the Adult Manifest Anxiety Scale™ (AMAS™) to carry on psychological assessment. The mean age of participants was 24.5 (SD ± 7.6) years. Twenty six musicians had at least one symptom: tendinitis, carpal tunnel syndrome, muscle cramps, and rheumatoid arthritis among others. Anxiety levels were as follows: low (14%), expected (39%), slightly elevated (30%), and clinically significant anxiety (17%). Nonetheless, the presence of any of those musculoskeletal disorders was not associated with anxiety levels. In conclusion, anxiety, sensitivity, or social concerns do not seem to cause the appearance and development of typical diseases of musicians. According to the orthopedic evaluation, the presence of musculoskeletal abnormalities is related to instrumental performance.", "corpus_id": 73540849, "title": "Multidisciplinary study of illnesses in professional pianists and guitarists and their association with anxiety levels in a Mexican university" }
{ "abstract": "Certain medical ailments occur with increased frequency among musicians and can affect musicians of all ages and ability. These maladies range in severity from incidental, asymptomatic findings among casual and occasional players to serious injuries that significantly disable professional musicians from practicing or performing. The most prevalent problems involve overuse of muscles resulting from repetitive movements of playing, often in combination with prolonged weight bearing in an awkward position. Other common problems include dermatologic irritation, peripheral neuropathies, focal dystonias, and otolaryngologic disorders. This review organizes the musical maladies according to section of the orchestra with further subclassification by pathologic process. By becoming familiar with the disorders associated with specific instruments, physicians will be better able to make the correct diagnosis in musicians with medical complaints.", "corpus_id": 1812146, "title": "Maladies in Musicians" }
{ "abstract": "Spontaneous spinal epidural hematoma is an uncommon clinical entity. Patients with this disease may present with devastating neurological deficits that can mimic other diseases. Emergency physicians should be familiar with this condition to assure appropriate therapy in a timely manner. A typical case of spontaneous spinal epidural hematoma is presented with review of appropriate differential diagnosis and management.", "corpus_id": 23341686, "score": -1, "title": "Spontaneous cervicothoracic epidural hematoma following prolonged valsalva secondary to trumpet playing." }
{ "abstract": "Unsupervised clustering plays a dominant role in detailed landcover identification, specifically in agricultural and environmental monitoring of high spatial resolution remote sensing images. A method called Approximate Spectral Clustering enables spectral partitioning for big datasets to extract clusters with different characteristics without a parametric model. Various information types are used through advanced similarity criteria. Selection of the similarity criterion optimal for the corresponding application is required. To solve this issue a Spectral Clustering Method is proposed which fuses partitionings obtained by distinct similarity representations. This Spectral Clustering Ensemble adopts neural quantization in the place of random sampling, advanced similarity criteria in the place of Gaussian kernel distance with distinct decaying parameters, and a two level ensemble. The built up areas in the high resolution images can be detected using unsupervised detection. In this process, first, a large set of corners from each of the input images is extracted by an improved Harris corner detector. Then, the extracted corners are incorporated into a likelihood function to discover candidate regions in each input image. Given a set of candidate built-up regions, in the second stage, the problem of built-up area detection is cast as an unsupervised grouping problem. The performance of these algorithms is evaluated by Accuracy, Adjusted Rand Index (ARI) and Normalized Mutual Information (NMI). Experimental results show a significant improvement of the resulting partitioning obtained by the proposed ensemble, with respect to the evaluation measures in the applications.", "corpus_id": 212607775, "title": "Spectral Clustering Ensemble and Unsupervised Clustering for Land cover Identification in High Spatial Resolution Satellite Images" }
{ "abstract": "Spectral clustering has been successfully used in various applications, thanks to its properties such as no requirement of a parametric model, ability to extract clusters of different characteristics and easy implementation. However, it is often infeasible for large datasets due to its heavy computational load and memory requirement. To utilize its advantages for large datasets, it is applied to the dataset representatives (either obtained by quantization or sampling) rather than the data samples, which is called approximate spectral clustering. This necessitates novel approaches for defining similarities based on representatives exploiting the data characteristics, in addition to the traditional Euclidean distance based similarities. To address this challenge, we propose similarity measures based on geodesic distances and local density distribution. Our experiments using datasets with varying cluster statistics show that the proposed geodesic based similarities are successful for approximate spectral clustering with high accuracies.", "corpus_id": 966024, "title": "Geodesic Based Similarities for Approximate Spectral Clustering" }
{ "abstract": "The filamentous fungus Aspergillus flavus causes an ear rot on maize and produces a mycotoxin (aflatoxin) in colonized maize kernels. Aflatoxins are carcinogenic to humans and animals upon ingestion. Aflatoxin contamination results in a large loss of profits and marketable yields for farmers each year. Several research groups have worked to pinpoint sources of resistance to A. flavus and the resulting aflatoxin contamination in maize. Some maize genotypes exhibit greater resistance than others. A proteomics approach has recently been used to identify endogenous maize proteins that may be associated with resistance to the fungus. Research has been conducted on cloning, expression, and partial characterization of one such protein, which has a sequence similar to that of cold-regulated proteins. The expressed protein, ZmCORp, exhibited lectin-like hemagglutination activity against fungal conidia and sheep erythrocytes. Quantitative real-time PCR assays revealed that ZmCOR is expressed 50% more in maize kernels from the Mp420 line, a type of maize resistant to A. flavus, compared with the expression level of the gene in the susceptible B73 line. ZmCORp exhibited fungistatic activity when conidia from A. flavus were exposed to the protein at a final concentration of 18 mM. ZmCORp inhibited the germination of conidia by 80%. A 50% decrease in mycelial growth resulted when germinated conidia were incubated with the protein. The partial characterization of ZmCORp suggests that this protein may play an important role in enhancing kernel resistance to A. flavus infection and aflatoxin accumulation.", "corpus_id": 36030376, "score": -1, "title": "A maize lectin-like protein with antifungal activity against Aspergillus flavus." }
{ "abstract": "The development of larynx simulators as platforms for clinical investigations has been identified as a useful tool for understanding the pathophysiology of vocal folds. The primary goal of this study was the realization of electrically conductive silicone vocal folds able to replicate an electroglottography (EGG) signal under pathophysiological conditions, in order to provide a quantitative method for monitoring the vocal folds vibratory characteristics. Both simulators showed an oscillatory behavior similar to human counterpart, thanks to the materials used for their realization. In addition, the synthetic simulators are made conductive by a silicone-based conductive solution applied to the surface of the synthetic vocal folds, in order to acquire an electrical signal to be compared to an EGG signal. Results showed a direct correlation between conductance variation and the occurrence of vocal folds contact, as it happens for the real EGG signal. In addition, results suggested that both simulators are able to replicate the vibratory characteristics of healthy and pathological vocal folds and to reproduce an electrical signal that is comparable to a real EGG. This will represent a powerful tool to characterize and cluster different vocal folds pathologies, which can lead to a significant improvement of prevention programs and an early diagnosis for laryngeal diseases.", "corpus_id": 235077757, "title": "Conductive Silicone Vocal Folds Reproducing Electroglottographic Signal in Pathophysiological Conditions" }
{ "abstract": "Abstract Background Until now, it has been impossible to discriminate a pathology of the vocal folds and, in many instances, even to distinguish normal from pathological voices with an electroglottographic signal (EGG). Objectives To introduce a method for analyzing electroglottographic signals and for extracting features able to characterize phonation quantitatively. Methods The EGG signal recorded during a continuous vocal phonation is processed in order to obtain the first derivative, which is related to the velocity of movements and contact of the vocal folds. The average fundamental frequency is computed and its corresponding period is taken as the typical duration of the EGG cycle. After each glottal cycle has been identified, the EGG signal and its derivative are locally normalized in time. For each glottal cycle, the amplitude and related velocity signals are plotted in an X-Y graph thus forming a multi-layer display where each EGG cycle appears as a circular trace. This X-Y representation can be viewed as a polar graph: by increasing the angle from 0 to 360° with incremental steps corresponding to the time normalization re-sampling of the EGG cycle, mean value and variance are computed. The results are the curve of the amplitude-velocity mean cycle and the related variance curve. The shape of the mean loop is strictly associated with the relationships between amplitude-velocity changes and phonation phases. The surrounding area represents the variability of local vocal phenomena around the above mean curve. The phonation process can be characterized in more detail by computing couples of indices (mean and variance) as obtained by dividing the polar graph in 4 quadrants, roughly associated with the different phases of the glottal cycle. In our study we carried out the EGG analysis of 21 cases of normal voice and 21 cases of pathological voice, considering the variability based on the combined amplitude-velocity analysis. 
Results In normal subjects, the global variability indices (VI) (expression of Amplitude and Velocity variation) and the four VI of different physiological phases of the glottal wave (VI1, VI2, VI3 and VI4) were definitely lower than in pathological subjects. Such difference was statistically significant. Conclusions The above method for analyzing the EGG signal proved to be efficient in discriminating normal subjects from pathological ones. Additional trials with more subjects are needed to confirm these preliminary data and to evaluate possible differences between different pathologies.", "corpus_id": 324159, "title": "Evaluation of the Electroglottographic signal variability by amplitude-speed combined analysis" }
{ "abstract": "When observing the vocal fold movements in their laryngoscopic examination, most laryngologists seem to be trained to consider only the gross respiratory movements of the folds, i.e. abduction and adduction. these movements constitute an essential part of the vitally important valve function of the larynx, preventing aspiration and providing parts of the mechanisms for normal swallowing, coughing, and straining. The second important function of the larynx is to serve as a transducer of aerodynamic to acoustic energy; the voice function. Probably for reasons of tradition, the examination of the voice function is generally left to the speech pathologists, who can make an auditory perceptual evaluation of the voice qualities, possibly supplemented by electro-acoustic analyses. By focussing also on the small vibratory movements of the vocal folds during phonation, using laryngeal stroboscopy, the laryngologist can contribute considerably to the diagnosis of voice disorders. For the laryngeal surgeon stroboscopy should be of particular interest, as it is a useful tool for early detection of (cancerous) invasion and for the evaluation of laryngeal paresis. This paper describes the clinical procedure of laryngeal stroboscopy, based on some introductory remarks on vocal anatomy and function.", "corpus_id": 25119601, "score": -1, "title": "Stroboscopy--a pertinent laryngological examination." }
{ "abstract": "Functions of one or more variables are usually approximated with a basis; a complete, linearly independent set of functions that spans an appropriate function space. The topic of this paper is the numerical approximation of functions using the more general notion of frames; that is, complete systems that are generally redundant but provide stable infinite representations. While frames are well-known in image and signal processing, coding theory and other areas of applied mathematics, their use in numerical analysis is far less widespread. Yet, as we show via a series of examples, frames are more flexible than bases, and can be constructed easily in a range of problems where finding orthonormal bases with desirable properties is difficult or impossible. By means of example, we exhibit a frame which yields simple, high-order approximations of smooth, multivariate functions in complicated geometries. \nA major concern when using frames is that computing a best approximation typically requires solving an ill-conditioned linear system. Nonetheless, we show that an accurate frame approximation of a function $f$ can be computed numerically up to an error of order $\\sqrt{\\epsilon}$ with a simple algorithm, or even of order $\\epsilon$ with modifications to the algorithm. Here, $\\epsilon$ is a user-controlled parameter. Crucially, the order of convergence down to this limit is determined by the existence of accurate, approximate representations of $f$ in the frame that have small-norm coefficients. We demonstrate the existence of such representations in our examples. Overall, our analysis suggests that frames are a natural generalization of bases in which to develop numerical approximation. In particular, even in the presence of severe ill-conditioning, frames impose sufficient mathematical structure so as to give rise to good accuracy in finite precision calculations.", "corpus_id": 4314117, "title": "Frames and numerical approximation" }
{ "abstract": "The application of adaptive, time-frequency based signal analysis has recently attracted increasing attention. While using adapted time-frequency atoms has shown promising results for example in audio processing, the reconstruction from the corresponding analysis coefficients usually exhibits significant error. In this contribution we propose a method to reduce the reconstruction error by using modified time-frequency atoms in the transition region between adjacent areas of time-frequency adaptation. The modification is obtained by projecting the relevant atoms onto a system of weighted vectors which are optimally concentrated inside the desired regions of adaptation. We give a theoretical derivation of the improvement of error and illustrate our method with numerical examples.", "corpus_id": 7733313, "title": "Adaptive Gabor frames by projection onto time-frequency subspaces" }
{ "abstract": "Nuclear receptors are a family of transcription factors that can be activated by lipophilic ligands. They are fundamental regulators of development, reproduction, and energy metabolism. In bone, nuclear receptors enable bone cells, including osteoblasts, osteoclasts, and osteocytes, to sense their dynamic microenvironment and maintain normal bone development and remodeling. Our views of the molecular mechanisms in this process have advanced greatly in the past decade. Drugs targeting nuclear receptors are widely used in the clinic for treating patients with bone disorders such as osteoporosis by modulating bone formation and resorption rates. Deficiency in the natural ligands of certain nuclear receptors can cause bone loss; for example, estrogen loss in postmenopausal women leads to osteoporosis and increases bone fracture risk. In contrast, excessive ligands of other nuclear receptors, such as glucocorticoids, can also be detrimental to bone health. Nonetheless, the ligand-induced osteoprotective effects of many other nuclear receptors, e.g., vitamin D receptor, are still in debate and require further characterizations. This review summarizes previous studies on the roles of nuclear receptors in bone homeostasis and incorporates the most recent findings. The advancement of our understanding in this field will help researchers improve the applications of agonists, antagonists, and selective modulators of nuclear receptors for therapeutic purposes; in particular, determining optimal pharmacological drug doses, preventing side effects, and designing new drugs that are more potent and specific.", "corpus_id": 4460422, "score": -1, "title": "Nuclear Receptors in Skeletal Homeostasis." }
{ "abstract": "Padsevonil is an antiepileptic drug (AED) candidate synthesized in a medicinal chemistry program initiated to rationally design compounds with high affinity for synaptic vesicle 2 (SV2) proteins and low-to-moderate affinity for the benzodiazepine binding site on GABAA receptors. The pharmacological profile of padsevonil was characterized in binding and electrophysiological experiments. At recombinant SV2 proteins, padsevonil’s affinity for SV2A was greater than that of levetiracetam and brivaracetam (pKi 8.5, 5.2, and 6.6, respectively). Unlike the latter AEDs, both selective SV2A ligands, padsevonil also displayed high affinity for the SV2B and SV2C isoforms (pKi 7.9 and 8.5, respectively). Padsevonil’s interaction with SV2A differed from that of levetiracetam and brivaracetam; it exhibited slower binding kinetics: dissociation t1/2 30 minutes from the human protein at 37°C compared with <0.5 minute for levetiracetam and brivaracetam. In addition, its binding was not potentiated by the allosteric modulator UCB1244283. At recombinant GABAA receptors, padsevonil displayed low to moderate affinity (pIC50≤6.1) for the benzodiazepine site, and in electrophysiological studies, its relative efficacy compared with zolpidem (full-agonist reference drug) was 40%, indicating partial agonist properties. In in vivo (mice) receptor occupancy studies, padsevonil exhibited SV2A occupancy at low ED50 (0.2 mg/kg) and benzodiazepine site occupancy at higher doses (ED50 36 mg/kg), supporting in vitro results. Padsevonil’s selectivity for its intended targets was confirmed in profiling studies, where it lacked significant effects on a wide variety of ion channels, receptors, transporters, and enzymes. Padsevonil is a first-in-class AED candidate with a unique target profile allowing for presynaptic and postsynaptic activity. 
SIGNIFICANCE STATEMENT Padsevonil is an antiepileptic drug candidate developed as a single molecular entity interacting with both presynaptic and postsynaptic targets. Results of in vitro and in vivo radioligand binding assays confirmed this target profile: padsevonil displayed nanomolar affinity for the three synaptic vesicle 2 protein isoforms (SV2A, B, and C) and micromolar affinity for the benzodiazepine binding site on GABAA receptors. Furthermore, padsevonil showed greater affinity for and slower binding kinetics at SV2A than the selective SV2A ligands levetiracetam and brivaracetam.", "corpus_id": 204756965, "title": "Pharmacological Profile of the Novel Antiepileptic Drug Candidate Padsevonil: Interactions with Synaptic Vesicle 2 Proteins and the GABAA Receptor" }
{ "abstract": "The synaptic vesicle glycoprotein 2C (SV2C) is an undercharacterized protein with enriched expression in phylogenetically old brain regions. Its precise role within the brain is unclear, though various lines of evidence suggest that SV2C is involved in the function of synaptic vesicles through the regulation of vesicular trafficking, calcium-induced exocytosis, or synaptotagmin function. SV2C has been linked to multiple neurological disorders, including Parkinson's disease and psychiatric conditions. SV2C is expressed in various cell types-primarily dopaminergic, GABAergic, and cholinergic cells. In mice, it is most highly expressed in nuclei within the basal ganglia, though it is unknown if this pattern of expression is consistent across species. Here, we use a custom SV2C-specific antiserum to describe localization within the brain of mouse, nonhuman primate, and human, including cell-type localization. We found that the immunoreactivity with this antiserum is consistent with previously-published antibodies, and confirmed localization of SV2C in the basal ganglia of rodent, rhesus macaque, and human. We observed strongest expression of SV2C in the substantia nigra, ventral tegmental area, dorsal striatum, pallidum, and nucleus accumbens of each species. Further, we demonstrate colocalization between SV2C and markers of dopaminergic, GABAergic, and cholinergic neurons within these brain regions. SV2C has been increasingly linked to dopamine and basal ganglia function. These antisera will be an important resource moving forward in our understanding of the role of SV2C in vesicle dynamics and neurological disease.", "corpus_id": 3628746, "title": "Immunochemical analysis of the expression of SV2C in mouse, macaque and human brain" }
{ "abstract": "AbstractHuntington’s disease (HD) is a neurodegenerative disease caused by a polyglutamine expansion in the huntingtin (Htt) protein. Mutant Htt causes synaptic transmission dysfunctions by interfering in the expression of synaptic proteins, leading to early HD symptoms. Synaptic vesicle proteins 2 (SV2s), a family of synaptic vesicle proteins including 3 members, SV2A, SV2B, and SV2C, plays important roles in synaptic physiology. Here, we investigated whether the expression of SV2s is affected by mutant Htt in the brains of HD transgenic (TG) mice and Neuro2a mouse neuroblastoma cells (N2a cells) expressing mutant Htt. Western blot analysis showed that the protein levels of SV2A and SV2B were not significantly changed in the brains of HD TG mice expressing mutant Htt with 82 glutamine repeats. However, in the TG mouse brain there was a dramatic decrease in the protein level of SV2C, which has a restricted distribution pattern in regions particularly vulnerable in HD. Immunostaining revealed that the immunoreactivity of SV2C was progressively weakened in the basal ganglia and hippocampus of TG mice. RT-PCR demonstrated that the mRNA level of SV2C progressively declined in the TG mouse brain without detectable changes in the mRNA levels of SV2A and SV2B, indicating that mutant Htt selectively inhibits the transcriptional expression of SV2C. Furthermore, we found that only SV2C expression was progressively inhibited in N2a cells expressing a mutant Htt containing 120 glutamine repeats. These findings suggest that the synaptic dysfunction in HD results from the mutant Htt-mediated inhibition of SV2C transcriptional expression. These data also imply that the restricted distribution and decreased expression of SV2C contribute to the brain region-selective pathology of HD.\n", "corpus_id": 14000133, "score": -1, "title": "Mutant Huntingtin Causes a Selective Decrease in the Expression of Synaptic Vesicle Protein 2C" }
{ "abstract": "Health literacy (HL) is associated with short- and long-term health outcomes, and this is particularly relevant in Hispanics, who are disproportionally affected by lower HL. Hispanics have become the largest minority population in the United States. Also, Hispanics experience higher burdens of chronic diseases such as type 2 diabetes mellitus (T2DM) than non-Hispanic whites. Thus, effectively choosing culturally appropriate validated instruments that measure a marker found in health assessments should be a serious consideration. Using a systemized approach, we identified and reviewed 33 publications and found eight different HL and numeracy (separate or combined) instruments. We assessed the study designs and instrument structures to determine how HL was measured across these studies. We categorized the results into direct and indirect measurements of HL. The Test of Functional Health Literacy in Adults (TOFHLA) family of HL instruments was favored for direct measures of HL, while the Brief Health Literacy Screen (BHLS) instrument was favored for indirect measures. Despite identified trends in instruments used, more comprehensive measurement tools have been developed but not validated in Hispanic populations. In conclusion, further validation of more comprehensive HL instruments in adult Hispanic populations with T2DM could better assess HL levels and improve health promotion efforts.", "corpus_id": 252856391, "title": "Tools to Measure Health Literacy among Adult Hispanic Populations with Type 2 Diabetes Mellitus: A Review of the Literature" }
{ "abstract": "Background. Hispanics with diabetes often have deficits in health literacy (HL). We examined the association among HL, psychosocial factors, and diabetes-related self-care activities. Methods. Cross-sectional analysis of 149 patients. Data included patient demographics and validated measures of HL, physician trust, self-efficacy, acculturation, self-care behaviors, and A1c. Results. Participants (N=60) with limited HL were older and less educated, and had more years with diabetes compared with adequate HL participants (N=89). Limited HL participants reported greater trust in their physician, greater self-efficacy, and better diet, foot care, and medication adherence. Health literacy status was not associated with acculturation or A1c. In adjusted analyses, HL status remained associated with physician trust, and we observed a notable but nonsignificant trend between HL status and medication adherence. Discussion. Lower HL was associated with greater physician trust and better medication adherence. Further research is warranted to clarify the role of HL and physician trust in optimizing self-care for Hispanics.", "corpus_id": 2789754, "title": "Health Literacy, Physician Trust, and Diabetes-related Self-care Activities in Hispanics with Limited Resources" }
{ "abstract": "The collimation of high energy electron beams for radiation therapy is treated with special attention on the contamination of the beam by electrons which ideally should be stopped, but instead are scattered back into the beam through the edge of the collimator. It is demonstrated that the mean energy of these electrons is only 40 per cent of the mean energy in the beam and that the smallest electron contamination is obtained if the material closest to the beam is of high density.", "corpus_id": 9058695, "score": -1, "title": "Collimation of high energy electron beams." }
{ "abstract": "Enormous characteristics exhibited by two-dimensional carbon-based nanomaterial, graphene attract current researchers in integrating this advanced material into the development of nextgeneration electronic, optoelectronic, photonic, and photovoltaic devices. The ultimate aim was to synthesis a single layer of graphene with large-size domain with less defect formation. The solid state of the graphene promises ultra-high performance in the devices due to ultra-high electron mobility. Within a decade, previous researchers have narrowed down their studies by applying different types of metal species as catalyst substrate in chemical vapor deposition method. The crucial part was to determine the characteristics of carbon precipitation and diffusion onto the metal surfaces. Each metal-based catalyst and its alloy revealed different behavior according to its carbon solubility and intrinsic properties. Until now, copper, nickel, and its alloy combination provide tremendous finding in the synthetization of graphene. Currently, researchers are still exploring the ideal parameters related to feeding gases, growth temperatures, and working pressures which are essential to each catalyst metals characteristic such as copper, nickel, and its alloy.", "corpus_id": 204807662, "title": "Effects of copper , nickel , and its alloy as catalysts for graphene growth via chemical vapor deposition method : A review" }
{ "abstract": "Large-area graphene growth is required for the development and production of electronic devices. Recently, chemical vapor deposition (CVD) of hydrocarbons has shown some promise in growing large-area graphene or few-layer graphene films on metal substrates such as Ni and Cu. It has been proposed that CVD growth of graphene on Ni occurs by a C segregation or precipitation process whereas graphene on Cu grows by a surface adsorption process. Here we used carbon isotope labeling in conjunction with Raman spectroscopic mapping to track carbon during the growth process. The data clearly show that at high temperatures sequentially introduced isotopic carbon diffuses into the Ni first, mixes, and then segregates and precipitates at the surface of Ni forming graphene and/or graphite with a uniform mixture of (12)C and (13)C as determined by the peak position of the Raman G-band peak. On the other hand, graphene growth on Cu is clearly by surface adsorption where the spatial distribution of (12)C and (13)C follows the precursor time sequence and the linear growth rate ranges from about 1 to as high as 6 mum/min depending upon Cu grain orientation. This data is critical in guiding the graphene growth process as we try to achieve the highest quality graphene for electronic devices.", "corpus_id": 5056875, "title": "Evolution of graphene growth on Ni and Cu by carbon isotope labeling." }
{ "abstract": "Abstract The behaviour of liquid nickel in contact with vitreous carbon and graphite has been investigated. At a temperature of 1743 K, vitreous carbon is attacked by pure nickel more strongly than graphite, i.e. the metal penetrates the substrate more deeply, and, after cooling, a nickel matrix containing a larger amount of graphite (which has precipitated) can be observed. Considerations about the depth of penetration of the metal into the substrate and about the microstructures of the two systems after cooling lead to the hypothesis of a carbon supersaturation of nickel in the presence of vitreous carbon. However, vitreous carbon, like the graphite substrate, is not attacked by nickel previously saturated with carbon. In this case, the melt does not penetrate and, after cooling, its microstructure is analogous to that of the pure nickel-graphite system. Near the eutectic temperature (1618 K), vitreous carbon is attacked by pure nickel according to the nickel-graphite phase diagram. The mechanism accounting for the observed behaviours may be possible formation of the metastable Ni3C carbide at 1743 K.", "corpus_id": 137156054, "score": -1, "title": "Interactions between liquid nickel and vitreous carbon" }
{ "abstract": "Rumex palustris responds to total submergence by increasing the elongation rate of young petioles. This favours survival by shortening the duration of submergence. Underwater elongation is stimulated by ethylene entrapped within the plant by surrounding water. However, abnormally fast extension rates were found to be maintained even when leaf tips emerged above the floodwater. This fast post-submergence growth was linked to a promotion of ethylene production that is presumed to compensate for losses brought about by ventilation. Three sources of ACC contributed to post-submergence ethylene production in R. palustris: (i) ACC that had accumulated in the roots during submergence and was transported in xylem sap to the shoot when stomata re-opened and transpiration resumed, (ii) ACC that had accumulated in the shoot during the preceding period of submergence and (iii) ACC produced de novo in the shoot following de-submergence. This new production of ethylene was associated with increased expression of an ACC synthase gene (RP-ACS1) and an ACC oxidase gene (RP-ACO1), increased ACC synthase activity and a doubling of ACC oxidase activity, measured in vitro. Out of seven species of Rumex examined, a de-submergence upsurge in ethylene production was seen only in shoots of those that had the ability to elongate fast when submerged.", "corpus_id": 25851349, "title": "De-submergence-induced ethylene production in Rumex palustris: regulation and ecophysiological significance." }
{ "abstract": "Rumex palustris, a flooding-tolerant plant, elongates its petioles in response to complete submergence. This response can be partly mimicked by enhanced ethylene levels and low O2 concentrations. High levels of CO2 do not markedly affect petiole elongation in R. palustris. Experiments with ethylene synthesis and action inhibitors demonstrate that treatment with low O2 concentrations enhances petiole extension by shifting sensitivity to ethylene without changing the rate of ethylene production. The expression level of the R. palustris gene coding for the putative ethylene receptor (RP-ERS1) is up-regulated by 3% O2 and increases after 20 min of exposure to a low concentration of O2, thus preceding the first significant increase in elongation observable after 40 to 50 min. In the flooding-sensitive species Rumex acetosa, submergence results in a different response pattern: petiole growth of the submerged plants is the same as for control plants. Exposure of R. acetosa to enhanced ethylene levels strongly inhibits petiole growth. This inhibitory effect of ethylene on R. acetosa can be reduced by both low levels of O2 and/or high concentrations of CO2.", "corpus_id": 84532, "title": "Ethylene Sensitivity and Response Sensor Expression in Petioles of Rumex Species at Low O2 and High CO2 Concentrations" }
{ "abstract": "Governmental programmes and international agreements to counteract eutrophication have largely not attained agreed objectives (e.g. reduction by half of the anthropogenic nitrogen load on Swedish coastal waters). Important components of such programmes are improved removal of nitrogen in municipal treatment plants and changed agricultural practices. In addition, increased N-removal during runoff, i.e. restoration of ponds and wetlands, is an important strategy. One explanation of the fact that the objectives have yet not been achieved might be that the most effective step to counteract diffuse pollution has not been fully implemented. It is therefore important to stress the potential of effective measures and find ways to fully implement them at the watershed level. It is important to avoid excessive applications of fertilizers because this leads to an exponential increase in leaching. Field experiments indicate that the use of winter crops or an undersown catch crop outside the main cropping season has reduced nitrate losses by up to 75% in single years, and by nearly 50% over successive years. In southern Sweden, the area of wetlands has been reduced considerably (more than 90%) by melioration activities. In a recent project, budget studies with restored ponds verified the importance of ponds and wetlands in nitrogen retention. Per unit area, increased nitrogen loading implied increased nitrogen retention, but often a decrease in the percent retained. Ponds with depths of 0.4–2.0 m and hydrological loads of 0.14–5.2 m3 m−2 day−1 were created. One hundred and fifty to seven thousand kg N ha−1 year−1 was removed in ponds loaded by streams dominated by agricultural run off. A pond receiving pre-treated municipal wastewater removed 8000 kg N ha−1 year−1. The upper limit for N-removal is set by the hydrological conditions. Sedimentation of organic material must be favoured in order to obtain adequate conditions for denitrification. 
To achieve the governmental objective in nitrogen load reduction changed cultivation practices within the agricultural sector must be combined with restoration of ponds/wetlands.", "corpus_id": 9447429, "score": -1, "title": "A catchment-oriented and cost-effective policy for water protection." }
{ "abstract": "Fueled by the need to surpass the limitations of conventional materials, recent years have seen a large increase in engineering applications of advanced fiber reinforced polymer (FRP) composite materials in many major industries, such as aerospace and defense, automotive, construction, marine, and oil and gas. FRP composites are very attractive for these applications due to their highly favorable material properties, including high strength-to-weight and stiffness-to-weight ratios, and corrosion resistance. Studies conducted to date have demonstrated the numerous advantages offered by FRP composites in various engineering applications. However, there are still a number of technical and implementation issues that need to be resolved prior to broader uptake of the application of FRP composites by some engineering communities such as civil construction. This special issue is aimed at disseminating the most recent advances and developments in this exciting field. A total of 17 paperswere submitted to the special issue and after a rigorous peer-review process six of themwere accepted to appear in the issue as original research articles. These six papers deal with a range of topics on the structural behavior of composite members/structures and mechanical properties and development of composite materials. The studies on the former investigate the dynamic behavior of composite FRP bridge systems and flexural behavior of previously damaged steel beams repaired with FRP. The studies on the latter are concerned with the development of ecoefficient engineered cementitious composites, mechanical properties of carbon fiber composites obtained using different molding techniques, and properties of high-density and ultra-high molecular weight polyethylene and heat-treated bamboo fiber composites. 
We hope that some of the most recent advances on the development and applications of FRP composites that have been disseminated in this special issue will be of interest to readers and contribute toward the advancement of research in the field.", "corpus_id": 137928733, "title": "Applications of Fiber Reinforced Polymer Composites" }
{ "abstract": "Polymer based materials are widely used in electronic packaging. The molding compound, in particular, comprises a significant portion of the package with the purpose of protecting the chips from the environment. Material characterization of molding compounds, therefore, has been a critical issue in predicting the thermo-mechanical behavior and reliability of electronic packaging. One of the distinctive features of polymers is viscoelasticity, which refers to an intermediate behavior between a solid and a liquid. To characterize time and temperature dependent characteristics of polymers, various test methods have been utilized. Among those methods, the stress relaxation test using dynamic mechanical analysis (DMA) is widely used. However, there are no standards or guidelines for performing stress relaxation test on molding compounds with DMA. In this study, DMA stress relaxation tests have been performed with the molding compound. The initial value of relaxation modulus from DMA was compared with the Young's modulus from tensile test. The temperature effect on the stress relaxation test was studied to determine the appropriate temperature profile. The sample thickness and strain dependency were also investigated. Finally, recommendations for proper future testing are proposed.", "corpus_id": 2747618, "title": "Stress relaxation test of molding compound for MEMS packaging" }
{ "abstract": "The photocatalytic degradation of phenol, 4-chlorophenol, 2,4-dichlorophenol, and 2,4,5-trichlorophenol over TiO/sub 2/ (anatase) has been investigated by using three photochemical reactors. TiO/sub 2/ was used as a thin film, coating the internal surface of a glass coil (reactors I and II) or the external surface of glass beads (reactor III). The degradation of the four phenolic compounds, in a continuous recirculation mode in all three reactors, approximates first-order kinetics to near-complete degradation. The Langmuir-Hinshelwood kinetics have been modified slightly to rationalize the first-order behavior in solid-liquid reactions and to argue in favor of a surface reaction; the degradation reactions occur on the TiO/sub 2/ particle surface. In the multipass mode experiments, both reactors I and II exhibit higher degradation rates for phenol at the higher flow rates. By contrast, the greater degradation is associated with the lower flow rates in the single-pass mode experiments.", "corpus_id": 98020332, "score": -1, "title": "Kinetics studies in heterogeneous photocatalysis. I. Photocatalytic degradation of chlorinated phenols in aerated aqueous solutions over titania supported on a glass matrix" }
{ "abstract": "In this paper, selection of the optimum DC link capacitor for Integrated Modular Motor Drives (IMMD) is presented. First, a review of IMMD technologies is given and current research and future prospects are studied. Inverter topologies and gate drive techniques are evaluated in terms of DC link performance. The urge for volume reduction in IMMD poses a challenge for the selection of the optimum DC link capacitor. DC link capacitor types are discussed and critical aspects in selecting the DC link capacitor are listed. Analytical modeling of DC link capacitor parameters is performed and it is verified by simulations conducted using MATLAB/Simulink. Optimum selection of the DC link capacitor is achieved based on the electrical, thermal and economic model.", "corpus_id": 13668221, "title": "DC link capacitor optimization for integrated modular motor drives" }
{ "abstract": "This paper explores the use of GaN power FETs to realize an integrated modular motor drive (IMMD) with an induction motor. A structure in which inverter modules are connected in series is proposed to reduce the module maximum voltages and to offer an opportunity to utilize low-voltage wide-band-gap GaN devices. With the superb switching performance of GaN power FETs, a reduction in IMMD size is achieved by eliminating inverter heat sink and optimizing dc-link capacitors. Gate signals of the IMMD modules are interleaved to suppress the total voltage ripple of dc-link capacitors and to further reduce the capacitor size. Motor winding configurations and their coupling effect are also investigated as a part of the IMMD design. The proposed structure and design methods are verified by experimental results.", "corpus_id": 18464479, "title": "Integrated Modular Motor Drive Design With GaN Power FETs" }
{ "abstract": "In this paper, a full-bridge resonant-type IGBT inverter suitable for heating magnetic and nonmagnetic materials at high frequency is experimentally described. A series arrangement of capacitors is adopted and an optimum mode of operation is proposed. The actual performance was tested on an experimental prototype of full bridge series resonance inverter for induction-heating cooking appliances. The low-cost developed hybrid inverter is characterized by simplicity of design and operation. Some analysis and detailed experimental results are presented.", "corpus_id": 17303999, "score": -1, "title": "Experimental investigation of full bridge series resonant inverters for induction-heating cooking appliances" }
{ "abstract": "New biomarkers have to be developed in order to increase the performance of current antigen-based malaria rapid diagnosis. Antibody production often involves the use of laboratory animals and is time-consuming and costly, especially when the target is Plasmodium, whose variable antigen expression complicates the development of long-lived biomarkers. To circumvent these obstacles, we have applied the Systematic Evolution of Ligands by EXponential enrichment method to the rapid identification of", "corpus_id": 251212710, "title": "Development of DNA Aptamers against Plasmodium falciparum Blood Stages Using Cell-SELEX" }
{ "abstract": "DNA aptamers have potential for disease diagnosis and as therapeutics, particularly when interfaced with programmable molecular technology. Here we have combined DNA aptamers specific for the malaria biomarker Plasmodium falciparum lactate dehydrogenase (PfLDH) with a DNA origami scaffold. Twelve aptamers that recognise PfLDH were integrated into a rectangular DNA origami and atomic force microscopy demonstrated that the incorporated aptamers preserve their ability to specifically bind target protein. Captured PfLDH retained enzymatic activity and protein-aptamer binding was observed dynamically using high-speed AFM. This work demonstrates the ability of DNA aptamers to recognise a malaria biomarker whilst being integrated within a supramolecular DNA scaffold, opening new possibilities for malaria diagnostic approaches based on DNA nanotechnology.", "corpus_id": 135948, "title": "A DNA aptamer recognising a malaria protein biomarker can function as part of a DNA origami assembly" }
{ "abstract": "A cysteine-substituted mutant of the ring-shaped protein TRAP (trp-RNA binding attenuation protein) can be induced to self-assemble into large, monodisperse hollow spherical cages in the presence of 1.4 nm diameter gold nanoparticles. In this study we use high-speed atomic force microscopy (HS-AFM) to probe the dynamics of the structural changes related to TRAP interactions with the gold nanoparticle as well as the disassembly of the cage structure. The dynamic aggregation of TRAP protein in the presence of gold nanoparticles was observed, including oligomeric rearrangements, consistent with a role for gold in mediating intermolecular disulfide bond formation. We were also able to observe that the TRAP-cage is composed of multiple, closely packed TRAP rings in an apparently regular arrangement. A potential role for inter-ring disulfide bonds in forming the TRAP-cage was shown by the fact that ring-ring interactions were reversed upon the addition of reducing agent dithiothreitol. A dramatic disassembly of TRAP-cages was observed using HS-AFM after the addition of dithiothreitol. To the best of our knowledge, this is the first report to show direct high-resolution imaging of the disassembly process of a large protein complex in real time.", "corpus_id": 20744768, "score": -1, "title": "Probing structural dynamics of an artificial protein cage using high-speed atomic force microscopy." }
{ "abstract": "BACKGROUND\nThe overall incidence of childhood malignancies is rather low. Central nervous system tumours constitute the largest group of solid tumours among children. In contrast to the adult population, a genetic predisposition is frequently associated with these malignancies (it is assumed to occur in approximately 15-25% of all childhood tumours) and there are also a number of monogenic hereditary syndromes known to be associated with brain tumours.\n\n\nAIM\nThe purpose of this article is to present an overview of genetic syndromes reported to increase the risk of childhood central nervous system tumours. The outlined tumour predispositions are divided into two groups. Firstly, there are syndromes with multisystem manifestation, where neoplasia is one of the components and the distinguishing symptom is usually non-oncological. Secondly, there are syndromes that are diagnosed by the associated neoplasm without any other noticeable phenotypic manifestation. A brief description of particular diseases is provided with a focus on associated central nervous system tumours. Detection of a tumour predisposition in a child is important not only for the child itself, but also for the child's relatives. Often, a modification of treatment is necessary in regard to a genetic diagnosis. With the evolution of personalised medicine, the possibility of \"tailored\" therapy will probably be a sought-after solution. Last but not least, it is crucial to provide the child with specialised preventive care owing to the risk of another potential malignancy. The diagnosis of hereditary cancer predisposition also has a big impact on the relatives of the patient. It enables their oncological risk to be specified and a specialised preventive care program to be arranged, if needed. 
For high-risk parents planning another pregnancy, it is possible to prevent the transmission of a given disposition with the aid of preimplantation and prenatal genetic testing.", "corpus_id": 43758522, "title": "[Genetic Syndromes Predisposing to Tumors of Central Nervous System in Children]." }
{ "abstract": "Partner and localizer of BRCA2 (PALB2) was originally identified as a BRCA2-interacting protein that is essential for key BRCA2 genome caretaker functions. It subsequently became clear that PALB2 was another Fanconi anemia (FA) gene (FANCN), and that monoallelic PALB2 mutations are associated with increased risk of breast and pancreatic cancer. Mutations in PALB2 have been identified in breast cancer families worldwide, and recent studies have shown that PALB2 also interacts with BRCA1. Here, we summarize the molecular and cellular functions and clinical phenotypes of this key DNA repair pathway component and discuss how its discovery has advanced our knowledge of both FA and adult cancer predisposition. Cancer Res; 70(19); 7353–9. ©2010 AACR.", "corpus_id": 668832, "title": "PALB2/FANCN: Recombining Cancer and Fanconi Anemia" }
{ "abstract": "Most of the studies carried out on Fe deficiency condition in arboreous plants have been performed, with the exception of those carried out on plants grown in the field, in hydroponic culture utilizing a total iron depletion growth condition. This can cause great stress to plants. By introducing Fe deficiency induced by the presence of bicarbonate, we found significant differences between Pyrus communis L. cv. Conference and Cydonia oblonga Mill. BA29 and MA clones, characterized by different levels of tolerance to chlorosis. Pigment content and the main protein-pigment complexes were investigated by HPLC and protein gel blot analysis, respectively. While similar changes in the structural organization of photosystems (PSs) were observed in both species under Fe deficiency, a different reorganization of the photosynthetic apparatus was found in the presence of bicarbonate between tolerant and susceptible genotypes, in agreement with the photosynthetic electron transport rate measured in isolated thylakoids. In order to characterize the intrinsic factors determining the efficiency of iron uptake in a tolerant genotype, the main mechanisms induced by Fe deficiency in Strategy I species, such as Fe3+-chelate reductase (EC 1.16.1.7) and H+-ATPase (EC 3.6.3.6) activities, were also investigated. We demonstrate that physiological and biochemical root responses in quince and pear are differentially affected by iron starvation and bicarbonate supply, and we show a high correlation between tolerance and Strategy I activation.", "corpus_id": 6028363, "score": -1, "title": "Differential responses in pear and quince genotypes induced by Fe deficiency and bicarbonate." }
{ "abstract": "Experimental work on captive Goffin’s cockatoos (Cacatua goffiniana) has highlighted the remarkable cognitive abilities of this species. However, little is known about its behavior in the natural habitat on the Tanimbar Archipelago in Indonesia. In order to fully understand the evolutionary roots leading to cognitively advanced skills, such as multi-step problem solving or flexible tool use and manufacture, it is crucial to study the ecological challenges faced by the respective species in the wild. The three-month expedition presented here aimed at gaining first insights into the cockatoos’ feeding ecology and breeding behavior. We could confirm previous predictions that Goffin’s cockatoos are opportunistic foragers and consume a variety of resources (seeds, fruit, inflorescence, roots). Their breeding season may be estimated to start between June and early July and they face potential predation from ground and aerial predators. Additionally, the observational data provide indications that Goffin’s cockatoos are extractive foragers, which together with relying on multiple food sources might be considered a prerequisite of tool use.", "corpus_id": 91448998, "title": "Notes on ecology of wild goffin’s cockatoo in the late dry season with emphasis on feeding ecology" }
{ "abstract": "In primates, complex object combinations during play are often regarded as precursors of functional behavior. Here we investigate combinatory behaviors during unrewarded object manipulation in seven parrot species, including kea, African grey parrots and Goffin cockatoos, three species previously used as model species for technical problem solving. We further examine a habitually tool using species, the black palm cockatoo. Moreover, we incorporate three neotropical species, the yellow- and the black-billed Amazon and the burrowing parakeet. Paralleling previous studies on primates and corvids, free object-object combinations and complex object-substrate combinations such as inserting objects into tubes/holes or stacking rings onto poles prevailed in the species previously linked to advanced physical cognition and tool use. In addition, free object-object combinations were intrinsically structured in Goffin cockatoos and in kea.", "corpus_id": 464440, "title": "Unrewarded Object Combinations in Captive Parrots" }
{ "abstract": "A novel chirped pulse photothermal (PT) radiometric radar with improved sensitivity over the conventional harmonically modulated thermal-wave radar technique and alternative pulsed laser photothermal radiometry is introduced for the diagnosis of biological samples, especially bones with tissue and skin overlayers. The constraints imposed by the laser safety (maximum permissible exposure) ceiling on pump laser energy and the strong attenuation of thermal-wave signals in tissues significantly limit the photothermally active depth in most biological specimens to a level which is normally insufficient for practical applications (a few mm below the skin surface). A theoretical approach for improvement of signal-to-noise ratio (SNR), minimizing the static (dc) component of the photothermal signal and making use of the photothermal radiometric nonlinearity has been introduced and verified by comparing the SNR of four distinct excitation wave forms (sine-wave, square-wave, constant-width and constant duty-cycle pulses) for chirping the pump laser, under constant exposure energy. At low frequencies fixed-pulsewidth chirps of large peak power were found to be superior to all other equal-energy modalities, with an SNR improvement up to two orders of magnitude. Distinct thickness-dependent characteristic delay times in a goat bone were obtained, establishing an active depth resolution range of ~2.8 mm in a layered skin-fat-bone structure, a favorable result compared to the maximum reported pulsed photothermal radiometric depth resolution <1 mm in turbid biological media.", "corpus_id": 1960547, "score": -1, "title": "Highly depth-resolved chirped pulse photothermal radar for bone diagnostics." }
{ "abstract": "A 3‐month‐old girl with Sturge‐Weber syndrome presented with a morbilliform rash, eosinophilia, and fulminant liver failure to our tertiary pediatric hospital. She was diagnosed with drug reaction with eosinophilia and systemic symptoms complicated by viremia and evidence of viral hepatitis on liver biopsy. We discuss the role of viral reactivation in drug reaction with eosinophilia and systemic symptoms and the relevance of antiviral therapy in management.", "corpus_id": 4334618, "title": "Use of antiviral medications in drug reaction with eosinophilia and systemic symptoms (DRESS): A case of infantile DRESS" }
{ "abstract": "Drug reaction with eosinophilia and systemic symptoms (DRESS) or drug-induced hypersensitivity syndrome (DIHS) is a severe and possibly life-threatening drug reaction. The role of human herpesvirus (HHV) reactivation in its development is now well recognized. In a prospective study of 40 patients we demonstrated reactivation of HHVs, including HHV6, HHV7 and Epstein–Barr virus (EBV) in 76% of the patients. In this issue of the BJD Ahluwalia et al. studied the HHV-6 involvement in a retrospective case series of 29 paediatric patients with DRESS. This study is of great interest because very few data are available on paediatric DRESS. They evaluated the prevalence of HHV6 reactivation by a reliable method, a quantitative real-time polymerase chain reaction analysis in whole blood, and the response to systemic corticosteroids. They demonstrated that HHV6 positivity was associated with a more severe disease course. Patients treated with systemic corticosteroids had a reduced number of days until cessation of progression compared with patients without systemic corticosteroids. Additionally, the authors underlined the frequent pulmonary involvement (50% in HHV6-positive patients) in this paediatric population. This study raised many questions. Only four patients among the 29 patients tested positive for HHV6. HHV6 reactivation is among the criteria proposed by a Japanese consensus group for the diagnosis of DIHS. But some instances of DRESS may be associated with reactivation of other HHVs, including HHV7, EBV and cytomegalovirus (CMV). The authors excluded six patients among their 35 cases (17%) who had a positive workup for co-infection with other HHVs. These co-reactivations were observed in our series and in a Japanese series of 30 patients with DIHS, in 33% and 26.6% of the patients, respectively. Are DRESS pictures without demonstrated HHV reactivation truly DRESS? In some cases HHV reactivation could not be detected. 
In a recent report Ushigome et al. validated the DRESS score (as defined by the RegiSCAR Study group and used in this study) when HHV6 reactivation was not demonstrated in their series of 30 patients with DIHS for the diagnosis of definite/probable DRESS. HHV detection may be missed. In a recent case report we studied the course of CMV reactivation in a patient infected with human immunodeficiency virus who developed DRESS after the introduction of antiretroviral therapy and antitoxoplasmic drugs. An increase of CMV viral loads preceded the development of DRESS and subsequent flare during the course of the disease. But, interestingly, at the time of diagnosis of DRESS the CMV was undetectable. Whereas HHV may be detected at the very beginning of DRESS it could not be detected during the first week corresponding to the time of the strongest antiviral immune response. This may explain in part the absence of detection of HHV in some definite cases of DRESS. This study confirms the importance of detection of HHV6 viral load as a possible prognostic marker in paediatric patients with DRESS. But unfortunately the authors did not give the value of the HHV6 viral load. This value seems to be more interesting than its positivity. In a recent longitudinal study Ishida et al. demonstrated that patients with DRESS/DIHS and receiving corticosteroids had higher HHV6 and CMV viral loads. The administration of corticosteroids was probably related to the severity of DRESS/DIHS, as observed in this study where the only four HHV6-positive cases were treated by corticosteroids. But in return corticosteroids may interfere with HHV reactivation. The influence of the administration of corticosteroids on the course of DRESS is also of importance. The results of Ahluwalia et al. suggest a benefit of systemic corticosteroids. In the same way, Ushigome et al. demonstrated that long-term sequelae such as autoimmune disorders were reduced in patients treated by corticosteroids. 
But the consequence of systemic corticosteroids on HHV reactivation needs to be evaluated in a large prospective study. The future management of severe DRESS will probably be the administration of corticosteroids along with an antiviral treatment.", "corpus_id": 1490203, "title": "Human herpesvirus 6 involvement in paediatric drug hypersensitivity syndrome" }
{ "abstract": "In this paper, we obtain characterizations of higher order Markov processes in terms of copulas corresponding to their finite-dimensional distributions. The results are applied to establish necessary and sufficient conditions for Markov processes of a given order to exhibit m-dependence, r-independence, or conditional symmetry. The paper also presents a study of applicability and limitations of different copula families in constructing higher order Markov processes with the preceding dependence properties. We further introduce new classes of copulas that allow one to combine Markovness with m-dependence or r-independence in time series.", "corpus_id": 2807645, "score": -1, "title": "COPULA-BASED CHARACTERIZATIONS FOR HIGHER ORDER MARKOV PROCESSES" }
{ "abstract": "Computational models of neural networks can be based on a variety of different parameters. These parameters include, for example, the 3d shape of neuron layers, the neurons' spatial projection patterns, spiking dynamics and neurotransmitter systems. While many well-developed approaches are available to model, for example, the spiking dynamics, there is a lack of approaches for modeling the anatomical layout of neurons and their projections. We present a new method, called Parametric Anatomical Modeling (PAM), to fill this gap. PAM can be used to derive network connectivities and conduction delays from anatomical data, such as the position and shape of the neuronal layers and the dendritic and axonal projection patterns. Within the PAM framework, several mapping techniques between layers can account for a large variety of connection properties between pre- and post-synaptic neuron layers. PAM is implemented as a Python tool and integrated in the 3d modeling software Blender. We demonstrate on a 3d model of the hippocampal formation how PAM can help reveal complex properties of the synaptic connectivity and conduction delays, properties that might be relevant to uncover the function of the hippocampus. Based on these analyses, two experimentally testable predictions arose: (i) the number of neurons and the spread of connections is heterogeneously distributed across the main anatomical axes, (ii) the distribution of connection lengths in CA3-CA1 differ qualitatively from those between DG-CA3 and CA3-CA3. Models created by PAM can also serve as an educational tool to visualize the 3d connectivity of brain regions. 
The low-dimensional, but yet biologically plausible, parameter space renders PAM suitable to analyse allometric and evolutionary factors in networks and to model the complexity of real networks with comparatively little effort.", "corpus_id": 17741944, "title": "Parametric Anatomical Modeling: a method for modeling the anatomical layout of neurons and their projections" }
{ "abstract": "The basic structure of the cortico-hippocampal system is highly conserved across mammalian species. Comparatively few hippocampal neurons can represent and address a multitude of cortical patterns, establish associations between cortical patterns and consolidate these associations in the cortex. In this study, we investigate how elementary anatomical properties in the cortex-hippocampus loop along with synaptic plasticity contribute to these functions. Specifically, we focus on the high degree of connectivity between cortex and hippocampus leading to converging and diverging forward and backward projections and heterogeneous synaptic transmission delays that result from the detached location of the hippocampus and its multiple loops. We found that in a model incorporating these concepts, each cortical pattern can evoke a unique spatio-temporal spiking pattern in hippocampal neurons. This hippocampal response facilitates a reliable disambiguation of learned associations and a bridging of a time interval larger than the time window of spike-timing dependent plasticity in the cortex. Moreover, we found that repeated retrieval of a stored association leads to a compression of the interval between cue presentation and retrieval of the associated pattern from the cortex. Neither a high degree of connectivity nor heterogeneous synaptic delays alone is sufficient for this behavior. We conclude that basic anatomical properties between cortex and hippocampus implement mechanisms for representing and consolidating temporal information. Since our model reveals the observed functions for a range of parameters, we suggest that these functions are robust to evolutionary changes consistent with the preserved function of the hippocampal loop across different species.", "corpus_id": 780787, "title": "Pattern Association and Consolidation Emerges from Connectivity Properties between Cortex and Hippocampus" }
{ "abstract": "Synfire chains have long been proposed to generate precisely timed sequences of neural activity. Such activity has been linked to numerous neural functions including sensory encoding, cognitive and motor responses. In particular, it has been argued that synfire chains underlie the precise spatiotemporal firing patterns that control song production in a variety of songbirds. Previous studies have suggested that the development of synfire chains requires either initial sparse connectivity or strong topological constraints, in addition to any synaptic learning rules. Here, we show that this necessity can be removed by using a previously reported but hitherto unconsidered spike-timing-dependent plasticity (STDP) rule and activity-dependent excitability. Under this rule the network develops stable synfire chains that possess a non-trivial, scalable multi-layer structure, in which relative layer sizes appear to follow a universal function. Using computational modeling and a coarse grained random walk model, we demonstrate the role of the STDP rule in growing, molding and stabilizing the chain, and link model parameters to the resulting structure.", "corpus_id": 2128767, "score": -1, "title": "Triphasic spike-timing-dependent plasticity organizes networks to produce robust sequences of neural activity" }
{ "abstract": "ABSTRACT Introduction: Mesoporous silica nanoparticles (MSNs) feature a high surface area and large pore volume, uniform and tunable pore size, and stable framework; thus, they have been used extensively as drug carriers. Areas covered: The synthesis, classification, and the latest generation of MSNs, drug loading methods, modification of MSNs, pharmacokinetic studies, biocompatibility, and toxicity of MSNs, and their application in drug delivery systems (DDS) are covered in this review. Expert opinion: It is crucial to uncover the mechanism for the formation of MSNs. Before drug loading, the characteristics of MSNs should be taken into consideration. In addition, the porosity, particle size and morphology, surface oxidation and surface functionalization can also influence the in vivo fate of MSNs, which is worthy of further study. Coating MSNs with novel materials may improve their biocompatibility, control the release of drugs loaded into the MSNs or enhance the uptake of the coated MSNs by tumor cells. MSNs can also be used as carriers for combination therapy in the treatment of cancer. Despite the rapid development of MSNs, the biological effects of these biomaterials remain relatively less understood.", "corpus_id": 59306429, "title": "Mesoporous silica nanoparticles: synthesis, classification, drug loading, pharmacokinetics, biocompatibility, and application in drug delivery" }
{ "abstract": "The aim of this study was to load amorphous hydrophobic drug into ordered mesoporous silica (SBA-15) by supercritical carbon dioxide technology in order to improve the dissolution and bioavailability of the drug. Asarone was selected as a model drug due to its lipophilic character and poor bioavailability. In vitro dissolution and in vivo bioavailability of the obtained Asarone-SBA-15 were significantly improved as compared to the micronized crystalline drug. This study offers an effective, safe, and environmentally benign means of solving the problems relating to the solubility and bioavailability of hydrophobic molecules.", "corpus_id": 1453328, "title": "Loading amorphous Asarone in mesoporous silica SBA-15 through supercritical carbon dioxide technology to enhance dissolution and bioavailability." }
{ "abstract": "Highlights: A method for quantifying α‐asarone in mouse plasma is reported. A simple protein precipitation method was utilized. Small‐volume serial blood sampling in mice was employed. The method was successfully applied to pharmacokinetic studies of α‐asarone‐loaded SLNs. ABSTRACT A simple, sensitive and selective liquid chromatography‐tandem mass spectrometric method was developed and validated for the quantification of α‐asarone in mouse plasma with its application to pharmacokinetic studies. An electrospray ionization (ESI) with multiple reaction monitoring (MRM) mode was used to monitor the precursor‐product ion transitions of 209.1 > 193.9 m/z for α‐asarone and 157.8 > 114.0 m/z for allantoin. Chromatographic separation was acquired on a Sepax BR‐C18 (5 μm, 120 Å, 1.0 × 100 mm) column with an isocratic mobile phase consisting of methanol and 0.1% formic acid (80:20, v/v). The developed bioanalytical method was successfully validated according to the United States Food and Drug Administration (US FDA) guidelines for linearity, selectivity, accuracy, precision, recovery, matrix effect, and stability. The validated method was then applied to a pharmacokinetic study of α‐asarone. The combination of pharmacokinetic techniques employed, including small‐volume serial blood sampling in mice (reducing drug doses and the number of animals used), a simple protein precipitation method and low solvent consumption, will enable its use in further bioequivalence studies.", "corpus_id": 22617834, "score": -1, "title": "Development of a selective and sensitive LC–MS/MS method for the quantification of α‐asarone in mouse plasma and its application to pharmacokinetic studies" }
{ "abstract": "It is an undeniable fact that information is now a highly significant asset for every company and organization. Protecting its security is therefore crucial, and security models driven by real datasets have become quite important. Military, government, commercial and civilian operations depend on the security and availability of computer systems and networks. From this standpoint, network security is a significant issue because the scale of attacks has risen unceasingly over the years and attacks have become more sophisticated and distributed. The objective of this review is to explain and compare the most commonly used datasets. This paper focuses on the datasets used in artificial intelligence and machine learning techniques, which are the primary tools for analyzing network traffic and detecting abnormalities.", "corpus_id": 3635475, "title": "A review on cyber security datasets for machine learning algorithms" }
{ "abstract": "Nowadays, many companies and/or governments require a secure system and/or an accurate intrusion detection system (IDS) to defend their network services and the user’s private information. In network security, developing an accurate detection system for distributed denial of service (DDoS) attacks is one of challenging tasks. DDoS attacks jam the network service of the target using multiple bots hijacked by crackers and send numerous packets to the target server. Servers of many companies and/or governments have been victims of the attacks. In such an attack, detecting the crackers is extremely difficult, because they only send a command by multiple bots from another network and then leave the bots quickly after command execute. The proposed strategy is to develop an intelligent detection system for DDoS attacks by detecting patterns of DDoS attack using network packet analysis and utilizing machine learning techniques to study the patterns of DDoS attacks. In this study, we analyzed large numbers of network packets provided by the Center for Applied Internet Data Analysis and implemented the detection system using a support vector machine with the radial basis function (Gaussian) kernel. The detection system is accurate in detecting DDoS attacks.", "corpus_id": 36621728, "title": "An Intelligent DDoS Attack Detection System Using Packet Analysis and Support Vector Machine" }
{ "abstract": "Distributed denial-of-service (DDoS) attacks present an Internet-wide threat. We propose D-WARD, a DDoS defense system deployed at source-end networks that autonomously detects and stops attacks originating from these networks. Attacks are detected by the constant monitoring of two-way traffic flows between the network and the rest of the Internet and periodic comparison with normal flow models. Mismatching flows are rate-limited in proportion to their aggressiveness. D-WARD offers good service to legitimate traffic even during an attack, while effectively reducing DDoS traffic to a negligible level. A prototype of the system has been built in a Linux router. We show its effectiveness in various attack scenarios, discuss motivations for deployment, and describe associated costs.", "corpus_id": 5375148, "score": -1, "title": "Attacking DDoS at the source" }
{ "abstract": "Power system harmonics are always an important issue in power networks as they can cause many negative impacts, such as equipment thermal stress, on installations within power networks. Recently, with the increasing connection of power-electronics-based Renewable Energy Source (RES) and High Voltage Direct Current (HVDC) transmission applications, harmonics in power networks, especially high frequency harmonics (>50th order or 2.5 kHz), are on the rise. Currently, the majority of conventional VTs, such as Wound-type Voltage Transformers (WVT) and Capacitor Voltage Transformers (CVT), are widely installed and used in High Voltage (HV) and Extra High Voltage (EHV) power networks for voltage measurement. Since most of them were mainly designed to measure voltage with the required accuracy at the fundamental frequency (i.e. 50Hz in the UK), they are limited in measuring high frequency harmonics due to the coupling of their internal inductive and capacitive elements. To achieve high frequency harmonic measurements, voltage measurement devices with wide frequency bandwidths are required. Recently, non-conventional VTs, such as optical voltage transducers, have become commercially available, which could provide accurate voltage measurements over a wide range of frequencies. However, before they can be considered by any power utility, their frequency response performance must be tested at the rated fundamental voltage with the required minimum harmonic injections from 100Hz to 5 kHz. This requires a test system capable of providing a rated fundamental voltage up to 400kV with controllable harmonic injections at the required levels from 100Hz to 5 kHz. Therefore, the objective of this project is to design and implement such a test system in the National Grid (NG) HV laboratory at the University of Manchester. 
However, the design and implementation of such a test system bring many challenges, for instance a lack of adequate equipment and of the considerable power needed to provide the required harmonic injections above 0.5% to the test object. In this thesis, an Instrument Voltage Transformer Frequency Response (VTFR) test system with three different voltage power source designs is presented. The voltage power source designs are: (i) Design 1, based on a single power source with inductive coupling to provide both the rated fundamental voltage and controllable harmonics; (ii) Design 2, based on two separate voltage power sources with inductive coupling to provide both the rated fundamental voltage and controllable harmonics; and (iii) Design 3, based on two separate voltage power sources with capacitive coupling to provide both the rated fundamental voltage and controllable harmonics. A hybrid approach, which combines the VTFR test system with both voltage power source Designs 2 and 3, is proposed for testing the frequency response of any type of VT at its rated fundamental voltage with 1% harmonic injections from 100Hz to 5 kHz. The proposed VTFR test system with its voltage power source designs was first validated at a relatively low voltage of 33kV in the HV laboratory. Then three different VTFR test systems were constructed from available equipment for testing VTs from 11kV to 400kV. An 11kV and a 33kV WVT, a 400kV WVT and a 275kV CVT were tested. The test results were analyzed, compared and discussed. Models of the test systems were also established and simulated, and the simulation results were likewise analyzed, compared and discussed.", "corpus_id": 107580562, "title": "Design and Implementation of a Frequency Response Test System for Instrument Voltage Transformer Performance Studies" }
{ "abstract": "The results of tests on a 275 kV and a 400 kV capacitor voltage transformer (CVT) are presented. The objective has been to determine the transfer function of the CVTs in the Scottish Power EHV system, which in turn can be used to compensate for the errors at harmonic frequencies. As a primary requirement, the magnetic elements of the CVTs must be excited at nearly their nominal magnetic states. A test procedure is presented that does not require CVT disassembling and/or modeling and ensures that the above essential requirement is satisfied. The effects of burden on the frequency response will also be presented. It will be confirmed that the size and nature of burden affect the CVT frequency characteristic.", "corpus_id": 1158843, "title": "Method to Measure CVT Transfer Function" }
{ "abstract": "Contents:Volume IAcknowledgementsIntroduction Bruce H. Kobayashi and Larry E. RibsteinPART I BASICSA Multiple Jurisdictions Are a Solution to the Public Good Problem1. Charles M. Tiebout (1956), 'A Pure Theory of Local Expenditures'B Exit and Federalism2. Richard A. Epstein (1992), 'Exit Rights Under Federalism'C Optimal Jurisdiction Size3. Gordon Tullock (1969), 'Federalism: Problems of Scale'D Twin Dilemmas of Federalism: Free Riding, Spillovers, and Agency Costs4. William H. Riker (1964), 'The Origin and Purposes of Federalism' and 'The Maintenance of Federalism: The Administrative Theory'E Conditions for Federalism5. Edmund W. Kitch (1980), 'Regulation and the American Common Market'F Public Choice and Federalism6. Jonathan R. Macey (1990), 'Federal Deference to Local Regulators and the Economic Theory of Regulation: Toward a Public-Choice Explanation of Federalism'PART II FISCAL FEDERALISM AND THE OPTIMAL STRUCTURE OF THE PUBLIC SECTORA Tests of the Tiebout Model7. Edward M. Gramlich and Daniel L. Rubinfeld (1982), 'Micro Estimates of Public Spending Demand Functions and Tests of the Tiebout and Median-Voter Hypotheses'8. Paul W. Rhode and Koleman S. Strumpf (2003), 'Assessing the Importance of Tiebout Sorting: Local Heterogeneity from 1850 to 1990'B Does Structure Matter?9. Susan Rose-Ackerman (1981), 'Does Federalism Matter? Political Choice in a Federal Republic'10. Dennis Epple and Alan Zelentiz (1981), 'The Implications of Competition Among Jurisdictions: Does Tiebout Need Politics?'C Vertical and Horizontal Competition11. Albert Breton (1996), 'A Retrospective Overview' and 'The Organization of Governmental Systems'D Federalism, Development, and Self-Enforcing Federalism12. Barry R. Weingast (1995), 'The Economic Role of Political Institutions: Market-Preserving Federalism and Economic Development'E Cooperative Federalism13. Robert P. Inman and Daniel L. 
Rubinfeld (1997), 'Rethinking Federalism'F Optimal Taxation and Fiscal Instruments and Intergovernmental Grants14. Robert P. Inman and Daniel L. Rubinfeld (1996), 'Designing Tax Policy in Federalist Economies: An Overview'G Leviathan and the Size of Government15. Geoffrey Brennan and James M. Buchanan (1980), 'Open Economy, Federalism, and Taxing Authority'16. Jonathan Rodden (2003), 'Reviving Leviathan: Fiscal Federalism and the Growth of Government'H Distribution17. John Donahue (1997), 'Tiebout? Or Not Tiebout? The Market Metaphor and America's Devolution Debate'18. Dennis Epple and Thomas Romer (1991), 'Mobility and Redistribution'Name IndexVolume IIAcknowledgementsAn introduction by the editors to both volumes appears in Volume IPART I LAWA Commerce Clause1. Saul Levmore (1983), 'Interstate Exploitation and Judicial Intervention'B Uniform State Laws2. Larry E. Ribstein and Bruce H. Kobayashi (1996), 'An Economic Analysis of Uniform State Laws'C The Choice of State Versus Federal Law3. William F. Baxter (1963), 'Choice of Law and the Federal System'D Contractual Choice of Law and Forum4. Erin O'Hara and Larry E. Ribstein (2000), 'From Politics to Efficiency in Choice of Law'PART II SPECIFIC APPLICATIONSA Corporate Law and the Race to the Top5. Roberta Romano (1985), 'Law as a Product: Some Pieces of the Incorporation Puzzle'6. Lucian Ayre Bebchuk (1992), 'Federalism and the Corporation: The Desirable Limits on State Competition in Corporate Law'B Antitrust and the Economics of Federalism7. Frank H. Easterbrook (1983), 'Antitrust and the Economics of Federalism'C Environmental Regulation8. Richard L. Revesz (2001), 'Federalism and Environmental Regulation: A Public Choice Analysis'D Taxation9. Daniel Shaviro (1992), 'An Economic and Political Look at Federalism in Taxation'E Welfare Reform10. Charles C. Brown and Wallace E. Oates (1987), 'Assistance to the Poor in a Federal System'F Crime11. 
Doron Teichman (2005), 'The Market for Criminal Justice: Federalism, Crime Control, and Jurisdictional Competition'Name Index", "corpus_id": 152533656, "score": -1, "title": "The Economics of Federalism" }
{ "abstract": "The purpose of this systematic grounded theory study is to explain the process that teachers experience to transform their mindset regarding student intelligence from fixed towards growth, including effective transformation approaches and obstacles. This study focuses on the transformation experiences of 14 teachers in grades 9-12 from schools in the Midwest region of the United States. Dweck’s mindset theory, Wenger’s communities of practice, Mezirow’s Transformative Learning Theory, and Bandura’s Social Cognitive Theory guided the conceptual framework for developing a theoretical model to explain the process of teacher mindset transformation. Data collected using Dweck’s Mindset Instrument, King’s Learning Activities Survey, interviews, and activities including a metaphor tool were analyzed systematically and a model of transformation emerged. Themes of the model include: a moment of realization, experiences including experimenting and reflection, equipping activities, empowerment, application, extending, and a core category of relationships throughout the model. The model is visualized through metaphor. Implications for further research include expanded populations and use of metaphor in grounded theory studies.", "corpus_id": 158087189, "title": "The Power of Transformation: A Grounded Theory Study of Cultivating Teacher Growth Mindset towards Student Intelligence" }
{ "abstract": "The present study examined how beliefs about intelligence, as mediated by ability-validation goals, predicted whether students lost or maintained levels of intrinsic motivation over the course of a single academic year. 978 third- through eighth-grade students were surveyed in the fall about their theories concerning the malleability of intelligence, need to validate their academic ability through schoolwork, and intrinsic motivation. At the end of the school year, they were surveyed again about their intrinsic motivation and subsequently characterized as either decliners (those who lost intrinsic motivation over the year) or maintainers (those who maintained or gained intrinsic motivation over the year). As predicted, decliners were more likely to endorse an entity theory of intelligence than maintainers and this relationship was fully mediated by the adoption of ability-validation goals. Implications for intervention efforts and future research are discussed.", "corpus_id": 107432, "title": "Dangerous mindsets: How beliefs about intelligence predict motivational change" }
{ "abstract": "Abstract African American college students tend to obtain lower grades than their White counterparts, even when they enter college with equivalent test scores. Past research suggests that negative stereotypes impugning Black students' intellectual abilities play a role in this underperformance. Awareness of these stereotypes can psychologically threaten African Americans, a phenomenon known as “stereotype threat” (Steele & Aronson, 1995), which can in turn provoke responses that impair both academic performance and psychological engagement with academics. An experiment was performed to test a method of helping students resist these responses to stereotype threat. Specifically, students in the experimental condition of the experiment were encouraged to see intelligence—the object of the stereotype—as a malleable rather than fixed capacity. This mind-set was predicted to make students' performances less vulnerable to stereotype threat and help them maintain their psychological engagement with academics, both of which could help boost their college grades. Results were consistent with predictions. The African American students (and, to some degree, the White students) encouraged to view intelligence as malleable reported greater enjoyment of the academic process, greater academic engagement, and obtained higher grade point averages than their counterparts in two control groups.", "corpus_id": 17797159, "score": -1, "title": "Reducing the Effects of Stereotype Threat on African American College Students by Shaping Theories of Intelligence" }
{ "abstract": "Switchgrass (Panicum virgatum L.) has been recognized as a new energy plant, which makes it highly promising for the phytoremediation of heavy metal contamination in soils. This study aimed to screen the best internal reference genes for real-time quantitative PCR (RT-qPCR) in leaves and roots of switchgrass for investigating its response to various heavy metals, such as cadmium (Cd), lead (Pb), mercury (Hg), chromium (Cr), and arsenic (As). The stability of fourteen candidate reference genes was evaluated by BestKeeper, GeNorm, NormFinder, and RefFinder software. Our results identified U2AF as the best reference gene in Cd, Hg, Cr, and As treated leaves as well as in Hg, Pb, As, and Cr stressed root tissues. In Pb treated leaf tissues, 18S rRNA was demonstrated to be the best reference gene. CYP5 was determined to be the optimal reference gene in Cd treated root tissues. The least stable reference gene was identified to be CYP2 in all tested samples except for root tissues stressed by Pb. To further validate the initial screening results, we used different sets of combined internal reference genes to analyze the expression of two metal transport associated genes (PvZIP4 and PvPDB8) in young leaves and roots of switchgrass. Our results demonstrated that the relative expression of the target genes consistently changed during the treatment when CYP5/UBQ1, U2AF/ACT12, eEF1a/U2AF, or 18S rRNA/ACT12 were combined as the internal reference genes. However, the time-dependent change pattern of the target genes was significantly altered when CYP2 was used as the internal reference gene. Therefore, the selection of internal reference genes appropriate for specific experimental conditions is critical to ensure the accuracy and reliability of RT-qPCR. 
Our findings established a solid foundation to further study the gene regulatory network of switchgrass in response to heavy metal stress.", "corpus_id": 218533919, "title": "Identification and Validation of Reference Genes for RT-qPCR Analysis in Switchgrass under Heavy Metal Stresses" }
{ "abstract": "Reference gene evaluation and selection are necessary steps in gene expression analysis, especially in new plant varieties, through reverse transcription quantitative real-time PCR (RT-qPCR). Hedera helix L. is an important traditional medicinal plant recorded in the European Pharmacopoeia. Gene expression in H. helix has not been widely explored, and no RT-qPCR studies have been reported. Thus, it is necessary to identify and validate suitable reference genes for normalizing RT-qPCR results. In our study, 14 candidate protein-coding reference genes were selected. Their expression stability in five tissues (root, stem, leaf, petiole and shoot tip) and under seven abiotic stress conditions (cold, heat, drought, salinity, UV-C irradiation, abscisic acid and methyl jasmonate) was evaluated using geNorm and NormFinder. This study is the first to evaluate the stability of reference genes in H. helix. The results show that different reference genes should be chosen for normalization on the basis of the experimental conditions. F-box was more stable than the other selected genes under all analysis conditions except ABA treatment; 40S was the most stable reference gene under ABA treatment; in contrast, EXP and UBQ were the most unstable reference genes. The expression of HhSE and Hhβ-AS, two genes related to the biosynthetic pathway of triterpenoid saponins, was also examined to validate the reference genes in different tissues and under various cold stress conditions. The validation results confirmed the applicability and accuracy of the reference genes. Additionally, this study provides a basis for the accurate and widespread use of RT-qPCR in selecting genes from the genome of H. helix.", "corpus_id": 6996769, "title": "Identification and validation of reference genes for quantitative real-time PCR studies in Hedera helix L." }
{ "abstract": "The isoflavonoid genistein (4',5,7-trihydroxyisoflavone), the aglycone of the heteroside genistin, represents the major active compound from soybean. It is soluble in organic solvents such as DMSO, dimethylformamide, acetone and ethanol. Due to its chemical structure, however, it shows poor solubility in water, which of course drastically reduces its bioavailability. The aim of this study is to demonstrate that genistein can be incorporated in different types of ramified cyclodextrins, compounds that increase water solubility: hydroxy-propyl-beta-cyclodextrin (HPBCD), randomly-methylated-beta-cyclodextrin (RAMEB) and 6-O-maltosyl-beta-cyclodextrin (G2BCD). The scanning electron microscopy images show a difference between the structure of the pure substance, genistein, and the structure of genistein after kneading with the three cyclodextrins. Another analysis performed in order to prove that complexation took place was differential scanning calorimetry. Genistein has an endothermic peak which reflects its melting point around 300 °C. HPBCD, RAMEB and G2BCD are amorphous materials; in these cases complexation is presumed to have taken place, because the melting point disappeared. The presented data suggest that the incorporation of genistein in hydroxy-propyl-beta-cyclodextrin, randomly-methylated-beta-cyclodextrin and 6-O-maltosyl-beta-cyclodextrin took place, owing to the changes in the physico-chemical properties of the compounds. Keywords: genistein, hydroxy-propyl-beta-cyclodextrin, randomly-methylated-beta-cyclodextrin, 6-O-maltosyl-beta-cyclodextrin, SEM, DSC", "corpus_id": 31997352, "score": -1, "title": "INCORPORATION OF ISOFLAVONOID GENISTEIN IN BETA RAMIFIED CYCLODEXTRINS - AN OPTION FOR IMPROVING WATER SOLUBILITY" }
{ "abstract": "This study focuses on antecedents of training program effectiveness in public sector organizations in Bahrain. The Kirkpatrick model is utilized as a partial research framework, with training effectiveness tested as the dependent variable. The study further examines the relationship between the independent variables, trainer and social support, and training effectiveness. A survey instrument was developed for data collection and questionnaires were distributed to staff working in the public sector in Bahrain; 128 usable questionnaires were returned. The study adopts a quantitative approach using SPSS. The results show that both antecedents have a positive and significant relationship with training effectiveness at the various Kirkpatrick levels.", "corpus_id": 55478045, "title": "Antecedents of Training Effectiveness in Bahrain" }
{ "abstract": "Personalization agents are incorporated in many Web sites to tailor content and interfaces for individual users. In contrast to the proliferation of personalized Web services worldwide, empirical research on the effects of Web personalization is scant. How does exposure to personalized offers affect subsequent product consideration and choice outcome? Drawing on literature in human-computer interaction (HCI) and user behavior, this research examines the effect of three major elements of Web personalization strategies on users' information processing through different decision-making stages: personalized content quality, feature overlapping among alternatives, and personalized message framing. These elements can be manipulated by a firm during implementation of its personalization strategy. A study using a personalized ringtone download Web site was conducted. The findings provide empirical evidence of the effects of Web personalization. In particular, when users are forming their consideration sets, the age...", "corpus_id": 2867465, "title": "An Empirical Examination of the Effects of Web Personalization at Different Stages of Decision Making" }
{ "abstract": null, "corpus_id": 51844216, "score": -1, "title": "Quantifying Product Favorability" }
{ "abstract": "An exonic missense mutation, c.436C>G, in the PLP1 gene of a patient affected by the hypomyelinating leukodystrophy, Pelizaeus–Merzbacher disease, has previously been found to be responsible for the alteration of the canonical alternative splicing profile of the PLP1 gene leading to the loss of the longer PLP isoform. Here we show that the presence of the c.436C>G mutation served to introduce regulatory motifs that appear to be responsible for the perturbed splicing pattern that led to loss of the major PLP transcript. With the aim of disrupting the interaction between the PLP1 splicing regulatory motifs and their cognate splicing factors, we designed an antisense oligonucleotide-based in vitro correction protocol that successfully restored PLP transcript production in oligodendrocyte precursor cells.", "corpus_id": 588398, "title": "Restoration of the Normal Splicing Pattern of the PLP1 Gene by Means of an Antisense Oligonucleotide Directed against an Exonic Mutation" }
{ "abstract": "Thousands of mutations are identified yearly. Although many directly affect protein expression, an increasing proportion of mutations is now believed to influence mRNA splicing. They mostly affect existing splice sites, but synonymous, non-synonymous or nonsense mutations can also create or disrupt splice sites or auxiliary cis-splicing sequences. To facilitate the analysis of the different mutations, we designed Human Splicing Finder (HSF), a tool to predict the effects of mutations on splicing signals or to identify splicing motifs in any human sequence. It contains all available matrices for auxiliary sequence prediction as well as new ones for binding sites of the 9G8 and Tra2-β Serine-Arginine proteins and the hnRNP A1 ribonucleoprotein. We also developed new Position Weight Matrices to assess the strength of 5′ and 3′ splice sites and branch points. We evaluated HSF efficiency using a set of 83 intronic and 35 exonic mutations known to result in splicing defects. We showed that the mutation effect was correctly predicted in almost all cases. HSF could thus represent a valuable resource for research, diagnostic and therapeutic (e.g. therapeutic exon skipping) purposes as well as for global studies, such as the GEN2PHEN European Project or the Human Variome Project.", "corpus_id": 397031, "title": "Human Splicing Finder: an online bioinformatics tool to predict splicing signals" }
{ "abstract": "Complex tasks of motor control in humans, such as locomotion or postural control, exhibit patterns of variability that until recently have been indiscernible from random noise. Tools from the field of non-linear dynamical systems have been increasingly applied to measurements of these tasks and changes in these complex patterns have been identified. A particular tool, control entropy (CE), is a measure of the regularity, or conversely, the complexity of a signal and is used to infer the constraints present on a system. More importantly, CE can be used under nonstationary conditions, and can therefore identify changes in the complexity or constraints on a system under dynamic exercise conditions. In this review, we summarize the insight that has been gained from application of CE to signals from studies involving walking, running and postural control. We show that changing constraints can be identified during dynamic exercise and that these are reflected in changing CE. We also discuss how CE can identify increased complexity of tasks such as postural control in the fatigued state.", "corpus_id": 16111664, "score": -1, "title": "Control Entropy: What Is It and What Does It Tell Us?" }
{ "abstract": "Dehumanization is an everyday, pervasive phenomenon in health contexts. Given its detrimental consequences to health care, much research has been dedicated to understanding and promoting the humanization of health services. However, health care service research has neglected the sociopsychological processes involved in the dehumanization of self and others, in formal but also informal health-related contexts. Drawing upon sociopsychological models of dehumanization, this article will bridge this gap by presenting a critical review of studies on everyday meaning-making and person perception processes of dehumanization in health-related contexts. A database search was conducted in PsycINFO, Web of Science, Scopus, and PubMed, using a combination of keywords on dehumanization and health/illness/body; 3,229 references were screened; 95 full texts were assessed for eligibility; 59 studies were included. Most studies focused on informal contexts, reflecting a decontextualized and one-sided view of dehumanization (i.e., not integrating actors’ and victims’ perspectives). Despite the dominant focus on self-dehumanization, emerging perspectives uncover the role of processes that deny human uniqueness to others, and their individual determinants and consequences for mental health. A few studies bring to light the functions of a variety of dehumanizing body metaphors on self- and other-dehumanization. These trends in the literature leave several gaps, which are here critically analyzed to inform future research.", "corpus_id": 210463811, "title": "Self- and Other-Dehumanization Processes in Health-Related Contexts: A Critical Review of the Literature" }
{ "abstract": "Abstract Self-objectification is related to maladaptive mental health variables, but little is known about what could ameliorate these associations. Self-compassion, a construct associated with mindfulness, involves taking a non-judgmental attitude toward the self. In this study, 306 college-aged women were recruited; those who were highest ( n = 106) and lowest ( n = 104) in self-compassion were retained for analyses. Levels of body surveillance, body shame, depression, and negative eating attitudes were lower in the high self-compassion group. Furthermore, the fit of a path model wherein body surveillance related to body shame, which, in turn, related to negative eating attitudes and depressive symptomatology was compared for each group, controlling for body mass index. The model fit significantly differently such that the connections between self-objectification and negative body and eating attitudes were weaker in the high self-compassion group. Treatment implications of self-compassion as a potential means to interrupt the self-objectification process are discussed.", "corpus_id": 12423279, "title": "Not hating what you see: Self-compassion may protect against negative mental health variables connected to self-objectification in college women." }
{ "abstract": "U.S. governmental agencies are striving to do more with less. Controlling the costs of delivering healthcare services such as Medicaid is especially critical at a time of increasing program enrollment and decreasing state budgets. Fraud is estimated to divert up to ten percent of the taxpayer dollars used to fund government-supported healthcare, making it critical for government authorities to find cost-effective methods to detect fraudulent transactions. This paper explores the use of a business intelligence system relying on statistical methods to detect fraud in one state’s existing Medicaid claim payment data. This study shows that Medicaid claim transactions that have been collected for payment purposes can be reformatted and analyzed to detect fraud and provide input for decision makers charged with making the best use of available funding. The results illustrate the efficacy of using unsupervised statistical methods to detect fraud in healthcare-related data.", "corpus_id": 48016585, "score": -1, "title": "Applying Business Intelligence Concepts to Medicaid Claim Fraud Detection" }
{ "abstract": "ABSTRACT Wild birds interconnect all parts of the globe through annual cycles of migration with little respect for country or continental borders. Although wild birds are reservoir hosts for a high diversity of gamma- and deltacoronaviruses, we have little understanding of the ecology or evolution of any of these viruses. In this review, we use genome sequence and ecological data to disentangle the evolution of coronaviruses in wild birds. Specifically, we explore host range at the levels of viral genus and species, and reveal the multi-host nature of many viral species, albeit with biases to certain types of avian host. We conclude that it is currently challenging to infer viral ecology due to major sampling and technical limitations, and suggest that improved assay performance across the breadth of gamma- and deltacoronaviruses, assay standardization, as well as better sequencing approaches, will improve both the repeatability and interpretation of results. Finally, we discuss cross-species virus transmission across both the wild bird – poultry interface as well as from birds to mammals. Clarifying the ecology and diversity in the wild bird reservoir has important ramifications for our ability to respond to the likely future emergence of coronaviruses in socioeconomically important animal species or human populations.", "corpus_id": 220587290, "title": "Wild birds as reservoirs for diverse and abundant gamma- and deltacoronaviruses" }
{ "abstract": "The genetic diversity, evolution, distribution, and taxonomy of some coronaviruses dominant in birds other than chickens remain enigmatic. In this study we sequenced the genome of a newly identified coronavirus dominant in ducks (DdCoV), and performed a large-scale surveillance of coronaviruses in chickens and ducks using a conserved RT-PCR assay. The viral genome harbors a tandem repeat which is rare in vertebrate RNA viruses. The repeat is homologous to some proteins of various cellular organisms, but its origin remains unknown. Many substitutions, insertions, deletions, and some frameshifts and recombination events have occurred in the genome of the DdCoV, as compared with the coronavirus dominant in chickens (CdCoV). The distances between DdCoV and CdCoV are large enough to separate them into different species within the genus Gammacoronavirus. Our surveillance demonstrated that DdCoVs and CdCoVs belong to different lineages and occupy different ecological niches, further supporting that they should be classified into different species. Our surveillance also demonstrated that DdCoVs and CdCoVs are prevalent in live poultry markets in some regions of China. In conclusion, this study shed novel insight into the genetic diversity, evolution, distribution, and taxonomy of the coronaviruses circulating in chickens and ducks.", "corpus_id": 1583281, "title": "Genomic Analysis and Surveillance of the Coronavirus Dominant in Ducks in China" }
{ "abstract": "A novel technique is presented for obtaining a single in-vivo image containing both functional and anatomical information in a small animal model such as a mouse. This technique, which incorporates appropriate image neutron-scatter rejection and uses a neutron-opaque contrast agent, is based on neutron radiographic technology and was demonstrated through a series of Monte Carlo simulations. With respect to functional imaging, this technique can be useful in biomedical and biological research because it could achieve a spatial resolution orders of magnitude better than what presently can be achieved with current functional imaging technologies such as nuclear medicine (PET, SPECT) and fMRI. For these studies, Monte Carlo simulations were performed with thermal (0.025 eV) neutrons in a 3 cm thick phantom using the MCNP5 simulation software. The goals of these studies were to determine: 1) the extent that scattered neutrons degrade image contrast; 2) the contrasts of various normal and diseased tissues under conditions of complete scatter rejection; 3) the concentrations of Boron-10 and Gadolinium-157 required for contrast differentiation in functional imaging; and 4) the efficacy of collimation for neutron scatter image rejection. Results demonstrate that with proper neutron-scatter rejection, a neutron fluence of 2 × 10^7 n/cm^2 will provide a signal-to-noise ratio of at least one (S/N ≥ 1) when attempting to image various 300 μm thick tissues placed in a 3 cm thick phantom. Similarly, a neutron fluence of only 1 × 10^7 n/cm^2 is required to differentiate a 300 μm thick diseased tissue relative to its normal tissue counterpart. The utility of a B-10 contrast agent was demonstrated at a concentration of 50 μg/g to achieve S/N ≥ 1 in 0.3 mm thick tissues while Gd-157 requires only slightly more than 10 μg/g to achieve the same level of differentiation. Lastly, neutron collimators with L/D ratios from 50 to 200 were calculated to provide appropriate scatter rejection for thick-tissue biological imaging with neutrons.", "corpus_id": 23308212, "score": -1, "title": "Feasibility of Small Animal Anatomical and Functional Imaging with Neutrons: A Monte Carlo Simulation Study" }
{ "abstract": "Type-2 diabetes (T2D) is characterized by hyperglycemia and hyperlipidemia, resulting in impaired insulin production and insulin resistance in peripheral tissues. Several studies have demonstrated an association between diabetes and central nervous system complications such as stroke and Alzheimer’s disease. Due to the fact that T2D is one of the fastest growing chronic illnesses, there is an urgent need to improve our knowledge of the pathogenic mechanisms by which diabetes leads to brain complications as well as to identify novel drugable targets for therapeutic use. Project 1: studies I-II Pre-clinical studies have shown that adult neurogenesis is impaired in diabetic animal models. We hypothesized that diabetes leading to neurogenesis impairment plays a role in the development of neurological complications. If so, normalizing neurogenesis in diabetes/obesity could be therapeutically useful in counteracting neurological dysfunction. The aim of studies I-II was to establish an in vitro system in which to study the effect of a diabetic milieu on adult neurogenesis. Furthermore, we determined the potential role of pituitary adenylate cyclase-activating polypeptide (PACAP) and galanin to protect adult neural stem cells (NSCs) from these diabetic-like conditions. Moreover, we determined whether apoptosis and the unfolded protein response (UPR) were induced by diabetic-like conditions and whether their regulation was involved in the PACAP/galanin-mediated protective effect. Finally, we studied the potential regulation of PACAP and galanin receptors in NSCs in response to diabetic-like conditions in vitro and ex vivo. The viability of NSCs isolated from the mouse brain subventricular zone (SVZ) was assessed in the presence of a diabetic milieu, as mimicked by high palmitate and glucose, which characterize diabetic glucolipotoxicity. The results show that high palmitate and glucose impair NSC viability in correlation with increased apoptosis (Bcl-2, cleaved caspase-3) and UPR signaling (CHOP, BIP, XBP1, JNK phosphorylation). We also show that PACAP and galanin counteract glucolipotoxicity via PAC1 receptor and GalR3 activation, respectively. Furthermore, we also report that PACAP and galanin receptors are regulated by diabetes in NSCs in vitro and in the SVZ ex vivo. Project 2: study III T2D is a strong risk factor for stroke and no therapy based on neuroprotection is currently available. Exendin-4 (Ex-4) is a glucagon-like peptide-1 receptor (GLP-1R) agonist in clinical use for the treatment of T2D, which has also been shown to mediate neuroprotection against stroke pre-clinically. However, the applicability of a therapy based on Ex-4 has not been investigated in a pre-clinical setting with clinical relevance. The aim of this study was to determine the potential efficacy of Ex-4 against stroke in T2D rats by using a drug administration paradigm and a dose that mimics a diabetic patient on Ex-4 therapy. Moreover, we investigated inflammation and neurogenesis as potential cellular mechanisms at the basis of Ex-4 efficacy. T2D Goto-Kakizaki (GK) rats were treated peripherally for 4 weeks with daily clinical doses of Ex-4 (0.1, 1, 5 µg/kg body weight) before inducing stroke by transient middle cerebral artery occlusion. The Ex-4 treatment was continued for 2-4 weeks thereafter. The severity of ischemic damage was measured by evaluation of stroke volume and by stereological counting of neurons in the striatum and cortex. Evaluation of stroke-induced inflammation, stem cell proliferation and neurogenesis was also quantitatively assessed by immunohistochemistry. We show that peripheral administration of Ex-4 counteracts ischemic brain damage in T2D GK rats. The results also show that Ex-4 decreased microglia infiltration and increased stroke-induced neural stem cell proliferation and neuroblast formation, while stroke-induced neurogenesis was not affected by Ex-4 treatment. Together, our data in project 1 show that we have established an in vitro assay in which to study the molecular mechanisms of how diabetes impacts adult neurogenesis. Furthermore, our results show that this assay has the potential to be developed into a screening platform for the identification of molecules that can regulate adult neurogenesis under diabetes. In project 2, we show neuroprotective efficacy against stroke by Ex-4 in a T2D rat model, by using a pre-clinical setting with clinical relevance. Ex-4 is an anti-diabetic drug in clinical use that has been reported to show limited side effects. Thus, at least in theory, stroke patients should be able to easily receive this treatment, probably with minimal risks. LIST OF PUBLICATIONS This thesis is based on the following original papers, which will be referred to in the text by their Roman numerals: I. Mansouri. S, Ortsater. H, Pintor Gallego. O, Darsalia. V, Sjoholm. A, Patrone. C. Pituitary adenylate cyclase-activating polypeptide counteracts the impaired adult neural stem cell viability induced by palmitate, J Neurosci Res,", "corpus_id": 46913567, "title": "Development of therapeutics for the treatment of diabetic brain complications" }
{ "abstract": "Neural tube patterning in vertebrates is controlled in part by locally secreted factors that act in a paracrine manner on nearby cells to regulate proliferation and gene expression. We show here by in situ hybridization that genes for the neuropeptide pituitary adenylate cyclase-activating peptide (PACAP) and one of its high-affinity receptors (PAC1) are widely expressed in the mouse neural tube on embryonic day (E) 10.5. Transcripts for the ligand are present in differentiating neurons in much of the neural tube, whereas the receptor gene is expressed in the underlying ventricular zone, most prominently in the alar region and floor plate. PACAP potently increased cAMP levels more than 20-fold in cultured E10.5 hindbrain neuroepithelial cells, suggesting that PACAP activates protein kinase A (PKA) in the neural tube and might act in the process of patterning. Consistent with this possibility, PACAP down-regulated expression of the sonic hedgehog- and PKA-dependent target gene gli-1 in cultured neuroepithelial cells, concomitant with a decrease in DNA synthesis. PACAP is thus an early inducer of cAMP levels in the embryo and may act in the neural tube during patterning to control cell proliferation and gene expression.", "corpus_id": 650521, "title": "Neural tube expression of pituitary adenylate cyclase-activating peptide (PACAP) and receptor: potential role in patterning and neurogenesis." }
{ "abstract": "Metabolic flux analysis (MFA) has so far been restricted to lumped networks lacking many important pathways, partly due to the difficulty in automatically generating isotope mapping matrices for genome‐scale metabolic networks. Here we introduce a procedure that uses a compound matching algorithm based on the graph theoretical concept of pattern recognition along with relevant reaction information to automatically generate genome‐scale atom mappings which trace the path of atoms from reactants to products for every reaction. The procedure is applied to the iAF1260 metabolic reconstruction of Escherichia coli yielding the genome‐scale isotope mapping model imPR90068. This model maps 90,068 non‐hydrogen atoms that span all 2,077 reactions present in iAF1260 (previous largest mapping model included 238 reactions). The expanded scope of the isotope mapping model allows the complete tracking of labeled atoms through pathways such as cofactor and prosthetic group biosynthesis and histidine metabolism. An EMU representation of imPR90068 is also constructed and made available. Biotechnol. Bioeng. 2011; 108:1372–1382. © 2011 Wiley Periodicals, Inc.", "corpus_id": 32859514, "score": -1, "title": "Construction of an E. Coli genome‐scale atom mapping model for MFA calculations" }
{ "abstract": "A rapid expansion from supercritical solution into aqueous solution (RESSAS) technology was presented for the micronization of Chinese medicinal material. Magnolia bark extract (MBE) obtained by supercritical carbon dioxide (scCO2) extraction technology was chosen as the experimental material. RESSAS process produced 303.0 nm nanoparticles (size distribution, 243.6–320.5 nm), which was significantly smaller than the 55.3 µm particles (size distribution, 8.3–102.4 µm) prepared by conventional mechanical milling. The effect of process parameters, including extraction temperature (30°C, 40°C, 50°C), extraction pressure (200, 250, 300 bar) and nozzle size (50, 100, 200 µm), on the size distribution of nanoparticles was investigated. The characteristics of nanoparticles and materials were also studied by scanning electron microscopy (SEM) and laser light scattering (LLS). This study demonstrates that RESSAS is applicable for preparing nanoparticles of MBE at low operating temperature; the process is simple without any residual solvent.", "corpus_id": 23107755, "title": "Preparation of nanoparticles of Magnolia bark extract by rapid expansion from supercritical solution into aqueous solutions" }
{ "abstract": "The basic objective of this work was to form stable suspensions of submicron particles of phytosterol, a water-insoluble drug, by rapid expansion of supercritical solution into aqueous solution (RESSAS). A supercritical phytosterol/CO2 mixture was expanded into an aqueous surfactant solution. In these experiments 4 different surfactants were used to impede growth and agglomeration of the submicron particles resulting from collisions in the free jet. The concentration of the drug in the aqueous surfactant solution was determined by high-performance liquid chromatography, while the size of the stabilized particles was measured by dynamic light scattering. Submicron phytosterol particles (<500 nm) were stabilized and in most cases a bimodal particle size distribution was obtained. Depending on surfactant and concentration of the surfactant solution, suspensions with drug concentrations up to 17 g/dm3 could be achieved, which is 2 orders of magnitude higher than the equilibrium solubility of phytosterol. Long-term stability studies indicate modest particle growth over 12 months. Thus, the results demonstrate that RESSAS can be a promising process for stabilizing submicron particles in aqueous solutions.", "corpus_id": 19085565, "title": "Stabilized nanoparticles of phytosterol by rapid expansion from supercritical solution into aqueous solution" }
{ "abstract": "Abstract Various plant sterols have been tested for their ability to order soybean phosphatidylcholine bilayers as monitored by 2H-NMR spectroscopy. Sitosterol and 24ξ-methylpollinastanol, a 9β,19-cyclopropylsterol, appear to be the most effective sterols. The presence of either a trans-oriented double bond at C-22 in the C-24 alkylated sterol side chain or a Δ8(9) in the tetracyclic ring system is shown to reduce significantly the ordering efficiency of the sterol molecule.", "corpus_id": 84473624, "score": -1, "title": "Deuterium-NMR investigation of plant sterol effects on soybean phosphatidylcholine acyl chain ordering" }
{ "abstract": "Music creation is typically composed of two parts: composing the musical score, and then performing the score with instruments to make sounds. While recent work has made much progress in automatic music generation in the symbolic domain, few attempts have been made to build an AI model that can render realistic music audio from musical scores. Directly synthesizing audio with sound sample libraries often leads to mechanical and deadpan results, since musical scores do not contain performance-level information, such as subtle changes in timing and dynamics. Moreover, while the task may sound like a text-to-speech synthesis problem, there are fundamental differences since music audio has rich polyphonic sounds. To build such an AI performer, we propose in this paper a deep convolutional model that learns in an end-to-end manner the score-to-audio mapping between a symbolic representation of music called the pianorolls and an audio representation of music called the spectrograms. The model consists of two subnets: the ContourNet, which uses a U-Net structure to learn the correspondence between pianorolls and spectrograms and to give an initial result; and the TextureNet, which further uses a multi-band residual network to refine the result by adding the spectral texture of overtones and timbre. We train the model to generate music clips of the violin, cello, and flute, with a dataset of moderate size. We also present the result of a user study that shows our model achieves higher mean opinion score (MOS) in naturalness and emotional expressivity than a WaveNet-based model and two off-the-shelf synthesizers. We open our source code at https://github.com/bwang514/PerformanceNet", "corpus_id": 53277718, "title": "PerformanceNet: Score-to-Audio Music Generation with Multi-Band Convolutional Residual Network" }
{ "abstract": "Can we make a famous rap singer like Eminem sing whatever our favorite song? Singing style transfer attempts to make this possible, by replacing the vocal of a song from the source singer to the target singer. This paper presents a method that learns from unpaired data for singing style transfer using generative adversarial networks.", "corpus_id": 49652611, "title": "Singing Style Transfer Using Cycle-Consistent Boundary Equilibrium Generative Adversarial Networks" }
{ "abstract": "Domain adaptation plays an important role for speech recognition models, in particular, for domains that have low resources. We propose a novel generative model based on cyclic-consistent generative adversarial network (CycleGAN) for unsupervised non-parallel speech domain adaptation. The proposed model employs multiple independent discriminator on the power spectrogram, each in charge of different frequency bands. As a result we have 1) better discriminators that focuses on fine-grained details of the frequency features, and 2) a generator that is capable of generating more realistic domain adapted spectrogram. We demonstrate the effectiveness of our method on speech recognition with gender adaptation, where the model only have access to supervised data from one gender during training, but is evaluated on the other at testing time. Our model is able to achieve an average of $7.41\\%$ on phoneme error rate, and $11.10\\%$ word error rate relative performance improvement as compared to the baseline on TIMIT and WSJ dataset, respectively. Qualitatively, our model also generate more realistic sounding speech synthesis when conditioned on data from the other domain.", "corpus_id": 4590127, "score": -1, "title": "A Multi-Discriminator CycleGAN for Unsupervised Non-Parallel Speech Domain Adaptation" }
{ "abstract": "Objective This study was designed to assess superparamagnetic iron oxide (SPIO)-enhanced MRI findings of well-differentiated hepatocellular carcinomas (HCCs) correlated with their multidetector-row CT (MDCT) findings. Materials and Methods Seventy-two patients with 84 pathologically proven well-differentiated HCCs underwent triple-phase MDCT and SPIO-enhanced MRI at a magnetic field strength of 1.5 Tesla (n = 49) and 3.0 Tesla (n = 23). Two radiologists in consensus retrospectively reviewed the CT and MR images for attenuation value and the signal intensity of each tumor. The proportion of hyperintense HCCs as depicted on SPIO-enhanced T2- or T2*-weighted images were compared in terms of tumor size (< 1 cm and > 1 cm), five CT attenuation patterns based on arterial and equilibrium phases and magnetic field strength, by the use of univariate and multivariate analyses. Results Seventy-eight (93%) and 71 (85%) HCCs were identified by CT and on SPIO-enhanced T2- and T2*-weighted images, respectively. For the CT attenuation pattern, one (14%) of seven isodense-isodense, four (67%) of six hypodense-hypodense, four (80%) of five isodense-hypodense, 14 (88%) of 16 hyperdense-isodense and 48 (96%) of 50 hyperdense-hypodense HCCs were hyperintense (Cochran-Armitage test for trend, p < 0.001). Based on the use of multivariate analysis, the CT attenuation pattern was the only factor that affected the proportion of hyperintense HCCs as depicted on SPIO-enhanced T2- or T2*-weighted images (p < 0.001). Tumor size or magnetic field strength was not a factor that affected the proportion of hyperintense HCCs based on the use of univariate and multivariate analysis (p > 0.05). Conclusion Most well-differentiated HCCs show hyperintensity on SPIO-enhanced MRI, although the lesions show various CT attenuation patterns. The CT attenuation pattern is the main factor that affects the proportion of hyperintense well-differentiated HCCs as depicted on SPIO-enhanced MRI.", "corpus_id": 11467462, "title": "SPIO-Enhanced MRI Findings of Well-Differentiated Hepatocellular Carcinomas: Correlation with MDCT Findings" }
{ "abstract": "Abstract: The tumor-detecting capacity and clinical usefulness of superparamagnetic iron oxide (SPIO) magnetic resonance imaging (MRI) were examined in patients with hepatocellular carcinoma. The tumor detection rate of SPIO-MRI (64.5%) was comparable to those of dynamic computed tomography (CT) and plain MRI, but lower than that for Gd dynamic MRI (93.5%; P < 0.01%). A combination of Gd dynamic MRI and SPIO-MRI improved the detection rate; further, the tumor stage with respect to tumor blood-flow pattern was predicted by combining plain MRI with SPIO-MRI. This combination procedure may also be useful for selecting therapeutic strategies.", "corpus_id": 1037632, "title": "Tumor-detecting capacity and clinical usefulness of SPIO-MRI in patients with hepatocellular carcinoma" }
{ "abstract": "LIKE the lungs, the liver is composed of two functionally separate, plethoric, spongy, somewhat fragile organs that are separated by a narrow anatomical plane. Although methods for removing parts o...", "corpus_id": 42209456, "score": -1, "title": "Surgery for hepatic neoplasms." }
{ "abstract": "Sea anemones produce pore-forming toxins, actinoporins, which are interesting as tools for cytoplasmic membranes study, as well as being potential therapeutic agents for cancer therapy. This investigation is devoted to structural and functional study of the Heteractis crispa actinoporins diversity. Here, we described a multigene family consisting of 47 representatives expressed in the sea anemone tentacles as prepropeptide-coding transcripts. The phylogenetic analysis revealed that actinoporin clustering is consistent with the division of sea anemones into superfamilies and families. The transcriptomes of both H. crispa and Heteractis magnifica appear to contain a large repertoire of similar genes representing a rapid expansion of the actinoporin family due to gene duplication and sequence divergence. The presence of the most abundant specific group of actinoporins in H. crispa is the major difference between these species. The functional analysis of six recombinant actinoporins revealed that H. crispa actinoporin grouping was consistent with the different hemolytic activity of their representatives. According to molecular modeling data, we assume that the direction of the N-terminal dipole moment tightly reflects the actinoporins’ ability to possess hemolytic activity.", "corpus_id": 44145144, "title": "Multigene Family of Pore-Forming Toxins from Sea Anemone Heteractis crispa" }
{ "abstract": "The multigene family of equinatoxins, pore-forming proteins from sea anemone Actinia equina, has been studied at the protein and gene levels. We report the cDNA sequence of a new, sphingomyelin inhibited equinatoxin, EqtIV. The N-terminal sequences of natural Eqt I and III were also determined, confirming two isoforms of EqtI, differing at position 13. The number of Eqt genes determined by Southern blot hybridization was found to be more than five, indicating that Eqts belong to a multigene family.", "corpus_id": 919061, "title": "Equinatoxins, pore-forming proteins from the sea anemone Actinia equina, belong to a multigene family." }
{ "abstract": "We present high-contrast observations of 68 young stellar objects (YSOs) that have been explored as part of the Strategic Exploration of Exoplanets and Disks with Subaru (SEEDS) survey on the Subaru telescope. Our targets are very young (<10 Myr) stars, which often harbor protoplanetary disks where planets may be forming. We achieve a typical contrast of ∼10−4–10−5.5 at an angular distance of 1″ from the central star, corresponding to typical mass sensitivities (assuming hot-start evolutionary models) of ∼10 MJ at 70 au and ∼6 MJ at 140 au. We detected a new stellar companion to HIP 79462 and confirmed the substellar objects GQ Lup b and ROXs 42B b. An additional six companion candidates await follow-up observations to check for common proper motion. Our SEEDS YSO observations probe the population of planets and brown dwarfs at the very youngest ages; these may be compared to the results of surveys targeting somewhat older stars. Our sample and the associated observational results will help enable detailed statistical analyses of giant planet formation.", "corpus_id": 119205307, "score": -1, "title": "The SEEDS High-Contrast Imaging Survey of Exoplanets Around Young Stellar Objects" }
{ "abstract": "Motivation: Chinese state-owned enterprises (SOEs) play an increasingly important role in the Chinese economy, although their management of suppliers is not yet effective in the situation menti ...", "corpus_id": 167009680, "title": "Supplier Management in Chinese State-owned enterprises : A case study of bounded relationships from the perspective of buyer" }
{ "abstract": "No matter what type of relationship a buyer has with a particular supplier, the buyer faces the decision of whether to either stay with the supplier or to switch to another supplier. This paper introduces a model of the buyer's switching decision that integrates tenets of both transaction cost economics and relationship marketing. The model analyzes how the switching decision is affected by parameters such as transaction-specific assets, information quality and the time dimension. The resulting Nash equilibria reflect strategies in which each player makes its optimal decision, taking into account the optimal decision of the other players. A sensitivity analysis of the effects of the parameters on the performance measures of price and profit provide intuitively sound results, and demonstrates how a common ground can be found between two schools of thought on buyer-supplier relations.", "corpus_id": 153721836, "title": "From transaction cost economics to relationship marketing: a model of buyer-supplier relations" }
{ "abstract": "Although there is good reason to expect that the growth of information work and information technology will significantly affect the trade-offs inherent in different structures for organizing work, the theoretical basis for these changes remains poorly understood. This paper seeks to address this gap by analyzing the incentive effects of different ownership arrangement in the spirit of the Grossman-Hart-Moore (GHM) incomplete contracts theory of the firm. A key departure from earlier approaches is the inclusion of a role for an \"information asset\", analogous to the GHM treatment of property. This approach highlights the organizational significance of information ownership and information technology. For instance, using this framework, one can determine when 1) informed workers are more likely to be owners than employees of firms, 2) increased flexibility of assets will facilitate decentralization, and 3) the need for centralized coordination will lead to centralized ownership. The framework developed sheds light on some of the empirical findings regarding the relationship between information technology and firm size and clarifies the relationship between coordination mechanisms and the optimal distribution of asset ownership. While many implications are still unexplored and untested, building on the incomplete contracts approach appears to be a promising avenue for the careful, methodical analysis of human organizations and the impact of new technologies.", "corpus_id": 7502360, "score": -1, "title": "An incomplete contracts theory of information, technology and organization" }
{ "abstract": "Continuing research efforts in robot-assisted rehabilitation demand more adaptable and inherently soft wearable devices. A wearable rehabilitative device is required to follow the motion of the body and to provide assistive or corrective motions to restore natural movements. Providing the required level of fluidity in wearable devices becomes a challenge for rehabilitation of more sensitive and fragile body parts, such as the face. To address this challenge, we propose a soft actuation method based on a tendon-driven robotic origami (robogami) and a soft sensing method based on a strain gauge with customized stretchable mesh design. The proposed actuation and sensing methods are compatible with the requirements in a facial rehabilitative device. The conformity of robogamis originates from their multiple and redundant degrees of freedom and the controllability of the joint stiffness, which is provided by adjusting the elasticity modulus of an embedded shape memory polymer (SMP) layer. The reconfiguration of the robogami and the trajectory and directional compliance of its end-effector are controlled by modulating the temperatures, hence the stiffness, of the SMP layers. Here we demonstrate this correlation using simulation and experimental results. In this paper, we introduce a thin and highly compliant sensing method for measuring facial movements with a minimal effect on the natural motions. The measurements of the sensors on the healthy side can be used to calculate the required tendon displacement for replicating the natural motion on the paralyzed side of the face in patients suffering from facial palsy.", "corpus_id": 12940381, "title": "Soft actuation and sensing towards robot-assisted facial rehabilitation" }
{ "abstract": "We have been developing the Robot Mask with shape memory alloy based actuators that follows an approach of manipulating the skin through a minimally obtrusive wires, transparent strips and tapes based pulling mechanism to enhance the expressiveness of the face. For achieving natural looking facial expressions by taking the advantage of specific characteristics of the skin, the Robot Mask follows a human anatomy based criteria in selecting these manipulation points and directions. In this paper, we describe a case study of using the Robot Mask to assist physiotherapy of a hemifacial paralyzed patient. The significant differences in shape and size of the human head between different individuals demands proper customizations of the Robot Mask. This paper briefly describes the adjusting and customizing stages employed from the design level to the implementation level of the Robot Mask. We will also introduce a depth image sensor data based analysis, which can remotely evaluate dynamic characteristics of facial expressions in a continuous manner. We then investigate the effectiveness of the Robot Mask by analyzing the range sensor data. From the case study, we found that the Robot Mask could automate the physiotherapy tasks of rehabilitation of facial paralysis. We also verify that, while providing quick responses, the Robot Mask can reduce the asymmetry of a smiling face and manipulate the facial skin to formations similar to natural facial expressions.", "corpus_id": 1777259, "title": "Robot Assisted Physiotherapy to Support Rehabilitation of Facial Paralysis" }
{ "abstract": "Abstract Both scientists and roboticists widely agree that the musculoskeletal system of the human foot plays an important role in locomotion. Nevertheless, the contribution of the foot musculoskeletal system has not been fully uncovered because currently it is impossible to modify and evaluate musculoskeletons in living animals. Here, to understand the effects of foot windlass mechanism, we construct a bipedal robot, which has similar musculoskeleton and dynamics to those of human. By implementing experiments on this robot, we investigate the effects (e.g. jumping height) of foot windlass mechanism on drop jumping, a simple and representative bouncing gait comprising landing and push-off. Through a significant number of drop jumping trials, the results demonstrated that (1) the windlass mechanism is passively activated in the push-off phase and that (2) it contributes to the height of jumping. Our results suggest that the foot windlass mechanism contributes to the energy efficiency and performance in locomotion.", "corpus_id": 53566204, "score": -1, "title": "Using the foot windlass mechanism for jumping higher: A study on bipedal robot jumping" }
{ "abstract": "Pedunculate oak (Quercus robur L.) is one of the most important tree components of Europe’s forest ecosystems, possessing both ecological and economic value. Development of genomic resources, such as genetic markers, is needed to support gene conservation and tree improvement activities. Experimental methods to develop SSR markers are laborious, time consuming and expensive, while in silico approaches have become a practicable and inexpensive alternative in genetic studies. The aim of this study was to characterize simple sequence repeat (EST-SSR) markers and functional annotation of SSR-containing sequences in Q. robur unigene sequences. 7170 unigene sequences (5147.315 kb) of Q. robur were downloaded from the National Center for Biotechnology Information (NCBI). A total of 475 (6.62%) unigene sequences containing 525 SSRs (microsatellites) were identified by using MISA software. Microsatellites occurred at an average frequency of one in every 9.8 kb of sequence. The analysis revealed that tri-nucleotide repeats (42.6%) were most abundant, followed by di-nucleotide (36.9%), hexa-nucleotide (11.8%), penta-nucleotide (4.9%) and tetra-nucleotide repeats (3.8%), respectively. Flanking sequences of the 525 SSRs generated 500 primers (95.2%) with forward and reverse strands by using Primer3 software. Gene-based SSR markers can be used for studies of genetic diversity, population genetics, genetic mapping, gene tagging and more. For the large number of SSR-containing unigenes (77.4%), annotations were available, of which 46.75% were predicted, 23.91% were hypothetical, 8.83% were putative and 20.51% belonged to other protein types. Only 22.5% of sequences could not be assigned to any specific protein class.", "corpus_id": 85851317, "title": "In silico EST-SSRs Analysis in UniGene of Quercus robur L." }
{ "abstract": "Turmeric (Curcuma longa L.) (Family: Zingiberaceae) is a perennial rhizomatous herbaceous plant often used as a spice since time immemorial. Turmeric plants are also widely known for their medicinal applications. EST-derived SSRs (simple sequence repeats) are a free by-product of the currently expanding EST (expressed sequence tag) databases, and SSRs have been widely applied as molecular markers in genetic studies. The development of high-throughput methods for SSR detection has given a new dimension to their use as molecular markers. The software tool SciRoKo was used to mine class I SSRs in a Curcuma EST database comprising 12953 sequences. A total of 568 non-redundant SSR loci were detected, with an average of one SSR per 14.73 kb of EST. Furthermore, trinucleotide repeats were found to be the most abundant repeat type among 1-6-nucleotide repeat types, accounting for 41.19% of the total, followed by mononucleotide (20.07%) and hexanucleotide repeats (15.14%). Among all the repeat motifs, (A/T)n accounted for the highest proportion, followed by (AGG)n. These detected SSRs can be used for designing primers that serve as markers for constructing saturated genetic maps and conducting comparative genomic studies in different Curcuma species.", "corpus_id": 2209829, "title": "Mining and characterization of EST derived microsatellites in Curcuma longa L." }
{ "abstract": "Fragile X syndrome is caused by an expansion of a polymorphic CGG triplet repeat that results in silencing of FMR1 expression. This expansion triggers methylation of FMR1's CpG island, hypoacetylation of associated histones, and chromatin condensation, all characteristics of a transcriptionally inactive gene. Here, we show that there is a graded spectrum of histone H4 acetylation that is proportional to CGG repeat length and that correlates with responsiveness of the gene to DNA demethylation but not with chromatin condensation. We also identify alterations in patient cells of two recently identified histone H3 modifications: methylation of histone H3 at lysine 4 and methylation of histone H3 at lysine 9, which are marks for euchromatin and heterochromatin, respectively. In fragile X cells, there is a decrease in methylation of histone H3 at lysine 4 with a large increase in methylation at lysine 9, a change that is consistent with the model of FMR1's switch from euchromatin to heterochromatin in the disease state. The high level of histone H3 methylation at lysine 9 may account for the failure of H3 to be acetylated after treatment of fragile X cells with inhibitors of histone deacetylases, a treatment that fully restores acetylation to histone H4. Using 5-aza-2'-deoxycytidine, we show that DNA methylation is tightly coupled to the histone modifications associated with euchromatin but not to the heterochromatic mark of methylation of histone H3 at lysine 9, consistent with recent findings that this histone modification may direct DNA methylation. Despite the drug-induced accumulation of mRNA in patient cells to 35% of the wild-type level, FMR1 protein remained undetectable. 
The identification of intermediates in the heterochromatinization of FMR1 has enabled us to begin to dissect the epigenetics of silencing of a disease-related gene in its natural chromosomal context.", "corpus_id": 15925449, "score": -1, "title": "Histone modifications depict an aberrantly heterochromatinized FMR1 gene in fragile x syndrome." }
{ "abstract": "As a subproduct of the Schoof-Elkies-Atkin algorithm to count points on elliptic curves defined over finite fields of characteristic p, there exists an algorithm that computes, for ℓ an Elkies prime, ℓ-torsion points in an extension of degree ℓ − 1 at cost Õ(ℓ max(ℓ, log q)²) bit operations in the favorable case where ℓ ≤ p/2. We combine in this work a fast algorithm for computing isogenies due to Bostan, Morain, Salvy and Schost with the p-adic approach followed by Joux and Lercier to get an algorithm valid without any limitation on ℓ and p but of similar complexity. For the sake of simplicity, we precisely state here the algorithm in the case of finite fields with characteristic p > 5. We also report experimental results.", "corpus_id": 55094359, "title": "On Elkies subgroups of ℓ-torsion points in elliptic curves defined over a finite field" }
{ "abstract": "We survey algorithms for computing isogenies between elliptic curves defined over a field of characteristic either 0 or a large prime. We introduce a new algorithm that computes an isogeny of degree l (l different from the characteristic) in time quasi-linear with respect to l. This is based in particular on fast algorithms for power series expansion of the Weierstrass ℘-function and related functions.", "corpus_id": 1864491, "title": "Fast algorithms for computing isogenies between elliptic curves" }
{ "abstract": "The ordinal sum of triangular norms on the unit interval has been proposed as a way to construct new triangular norms. However, on general bounded lattices, the ordinal sum of triangular norms and conorms may not generate triangular norms and conorms. In this paper, we study and propose some new construction methods yielding triangular norms and conorms on general bounded lattices. Moreover, we generalize these construction methods by induction to an ordinal sum construction for triangular norms and conorms applicable on any bounded lattice, and add some illustrative examples for clarity.", "corpus_id": 38505695, "score": -1, "title": "Characterizing Ordinal Sum for t-norms and t-conorms on Bounded Lattices" }
{ "abstract": "The empirical model explaining microsolvation of molecules in superfluid helium droplets proposes a non-superfluid helium solvation layer enclosing the dopant molecule. This model warrants an empirical explanation of any helium induced substructure resolved for electronic transitions of molecules in helium droplets. Despite a wealth of such experimental data, quantitative modeling of spectra is still in its infancy. The theoretical treatment of such many-particle systems dissolved into a quantum fluid is a challenge. Moreover, the success of theoretical activities relies also on the accuracy and self-critical communication of experimental data. This will be elucidated by a critical resume of our own experimental work done within the last ten years. We come to the conclusion that spectroscopic data and among others in particular the spectral resolution depend strongly on experimental conditions. Moreover, despite the fact that none of the helium induced fine structure speaks against the empirical model for solvation in helium droplets, in many cases an unequivocal assignment of the spectroscopic details is not possible. This ambiguity needs to be considered and a careful and critical communication of experimental results is essential in order to promote success in quantitatively understanding microsolvation in superfluid helium nanodroplets.", "corpus_id": 241878, "title": "Microsolvation of molecules in superfluid helium nanodroplets revealed by means of electronic spectroscopy" }
{ "abstract": "3-Hydroxyflavone is a prototype system for excited state intramolecular proton transfer which is one step of a closed loop photocycle. It was intensively studied for the bare molecule and for the influence of solvents. In the present paper this photocycle is investigated for 3-hydroxyflavone and some hydrated complexes when doped into superfluid helium droplets by the combined measurement of fluorescence excitation spectra and dispersed emission spectra. Significant discrepancies in the proton transfer behavior to gas phase experiments provide evidence for the presence of different complex configurations of the hydrated complexes in helium droplets. Moreover, for bare 3-hydroxyflavone and its hydrated complexes the proton transfer appears to be promoted by the helium environment.", "corpus_id": 682806, "title": "Photochemistry of 3-hydroxyflavone inside superfluid helium nanodroplets." }
{ "abstract": "We characterized the entrance channel, reaction threshold, and mechanism of an excited-state H atom transfer reaction along a unidirectionally hydrogen-bonded “wire” –O–H···NH3···NH3···NH3···N. Excitation of supersonically cooled 7-hydroxyquinoline·(NH3)3 to its vibrationless S1 state produces no reaction, whereas excitation of ammonia-wire vibrations induces H atom transfer with a reaction threshold ≈ 200 wave numbers. Further translocation steps along the wire produce the S1 state 7-ketoquinoline·(NH3)3 tautomer. Ab initio calculations show that proton and electron movement along the wire are closely coupled. The rate-controlling S1 state barriers arise from crossings of a ππ* with a Rydberg-type πσ* state.", "corpus_id": 34463442, "score": -1, "title": "Probing the Threshold to H Atom Transfer Along a Hydrogen-Bonded Ammonia Wire" }
{ "abstract": "Scarcity appeals in marketing have long captured the attention of scholars and practitioners, yet we know little about their effectiveness across different cultures. Drawing on cultural differences (i.e., self-concept, need for uniqueness, and susceptibility to normative influence), the authors investigate the impact of culture on the effectiveness of (demand- vs. supply-based) scarcity appeals. The authors also study the impact of product visibility while considering the moderating effect of culture on the effectiveness of scarcity appeals (demand- vs. supply-based). To do so, the authors conducted experimental research with participants from Pakistan and France. The authors find that (1) demand-based scarcity appeals were more effective than supply-based scarcity appeals in Eastern cultures, whereas the reverse was found in Western cultures; (2) such moderating role of culture was stronger for high-visibility products than for low-visibility products; and (3) the respective prevalence of interdependent (vs. independent) self-construal and its subsequent impact on susceptibility to normative influence and need for uniqueness mediated the moderating role of culture. The authors conclude by discussing the key theoretical contributions and managerial implications of these findings and suggesting future research directions.", "corpus_id": 259969166, "title": "Scarcity Appeals in Cross-Cultural Settings: A Comprehensive Framework" }
{ "abstract": "EXTENDED ABSTRACT The value of products is not only determined by the utility that consumers derive from the products' attributes and their functional consequences, but has an important social component as well. Specifically, scarce products are generally deemed valuable, independent of the utility that their intrinsic attributes deliver. This effect has been found in several studies and appears robust (Lynn 1991). This paper identifies two distinct routes through which scarcity can increase product choice. These routes are expected to have distinct effects in the product valuation process, which have until now not been examined in detail.", "corpus_id": 153260930, "title": "How product scarcity impacts on choice: Snob and bandwagon effects" }
{ "abstract": "In a conventional wireless cellular system, signal processing is performed on a per-cell basis; out-of-cell interference is treated as background noise. This paper considers the benefit of coordinating base-stations across multiple cells in a multi-antenna beamforming system, where multiple base-stations may jointly optimize their respective beamformers to improve the overall system performance. This paper focuses on a downlink scenario where each remote user is equipped with a single antenna, but where multiple remote users may be active simultaneously in each cell. The design criterion is the minimization of the total weighted transmitted power across the base-stations subject to signal-to-interference-and-noise-ratio (SINR) constraints at the remote users. The main contribution is a practical algorithm that is capable of finding the joint optimal beamformers for all base-stations globally and efficiently. The proposed algorithm is based on a generalization of uplink-downlink duality to the multi-cell setting using the Lagrangian duality theory. The algorithm also naturally leads to a distributed implementation. Simulation results show that a coordinated beamforming system can significantly outperform a conventional system with per-cell signal processing.", "corpus_id": 1161819, "score": -1, "title": "Coordinated beamforming for the multi-cell multi-antenna wireless system" }
{ "abstract": "Automatic detection of estrus in cows is important in cattle management. This paper proposes a method of estrus detection by automatically checking cattle mounting. We use a side-view video camera and apply computer vision techniques to detect mounting behavior. In particular, we extract motion information to select a potential mount-up and mount-down motion and then verify the true mounting behavior by considering the direction, magnitude, and history of the mount motion. From experimental results using video data obtained from a Korean native cattle farm, we believe that the proposed method based on the abrupt change of a mounting cow’s height and motion history information can be utilized for detecting mounting behavior automatically, even in the case of fence occlusion.", "corpus_id": 9262951, "title": "Automated Detection of Cattle Mounting using Side-View Camera" }
{ "abstract": "A view-based approach to the representation and recognition of human movement is presented. The basis of the representation is a temporal template-a static vector-image where the vector value at each point is a function of the motion properties at the corresponding spatial location in an image sequence. Using aerobics exercises as a test domain, we explore the representational power of a simple, two component version of the templates: The first value is a binary value indicating the presence of motion and the second value is a function of the recency of motion in a sequence. We then develop a recognition method matching temporal templates against stored instances of views of known actions. The method automatically performs temporal segmentation, is invariant to linear changes in speed, and runs in real-time on standard platforms.", "corpus_id": 2006961, "title": "The Recognition of Human Movement Using Temporal Templates" }
{ "abstract": "Local space-time features have recently become a popular video representation for action recognition. Several methods for feature localization and description have been proposed in the literature and promising recognition results were demonstrated for a number of action classes. The comparison of existing methods, however, is often limited given the different experimental settings used. The purpose of this paper is to evaluate and compare previously proposed space-time features in a common experimental setup. In particular, we consider four different feature detectors and six local feature descriptors and use a standard bag-of-features SVM approach for action recognition. We investigate the performance of these methods on a total of 25 action classes distributed over three datasets with varying difficulty. Among interesting conclusions, we demonstrate that regular sampling of space-time features consistently outperforms all tested space-time interest point detectors for human actions in realistic settings. We also demonstrate a consistent ranking for the majority of methods over different datasets and discuss their advantages and limitations.", "corpus_id": 6367640, "score": -1, "title": "Evaluation of Local Spatio-temporal Features for Action Recognition" }
{ "abstract": "Conventional robot perception and navigation pipelines are built using traditional sensors such as RGB cameras, stereo depth sensors and LiDARs. These sensors scan the entire scene in a fixed and uniform way. In contrast, programmable light curtains are a recently-invented, resource-efficient sensor that measure the depth of any vertically-ruled surface (“curtain”) specified by the user. Compared to LiDARs, light curtains are relatively inexpensive, significantly faster (45-60 Hz) and capture depth at a much higher resolution (640 scan lines). However, they require user control. The main contributions of this thesis are to (1) integrate programmable light curtains with an existing, state-of-the-art navigation and autonomy stack, (2) develop algorithms for enabling light curtains to detect and avoid obstacles for safe navigation, and (3) perform high resolution mapping and accurate robot localization using intelligent curtain placements. Our overall system consists of parallelized components that interact naturally and continuously while running at their own independent speeds. This work is a step towards full-stack autonomous robot navigation using fast, high-resolution, controllable sensing. We demonstrate our integration on a wheelchair robot.", "corpus_id": 252498054, "title": "Programmable light curtains for Safety Envelopes, SLAM and Navigation" }
{ "abstract": "In this work, we develop a planner for high-speed navigation in unknown environments, for example reaching a goal in an unknown building in minimum time, or flying as fast as possible through a forest. This planning task is challenging because the distribution over possible maps, which is needed to estimate the feasibility and cost of trajectories, is unknown and extremely hard to model for real-world environments. At the same time, the worst-case assumptions that a receding-horizon planner might make about the unknown regions of the map may be overly conservative, and may limit performance. Therefore, robots must make accurate predictions about what will happen beyond the map frontiers to navigate as fast as possible. To reason about uncertainty in the map, we model this problem as a POMDP and discuss why it is so difficult given that we have no accurate probability distribution over real-world environments. We then present a novel method of predicting collision probabilities based on training data, which compensates for the missing environment distribution and provides an approximate solution to the POMDP. Extending our previous work, the principal result of this paper is that by using a Bayesian non-parametric learning algorithm that encodes formal safety constraints as a prior over collision probabilities, our planner seamlessly reverts to safe behavior when it encounters a novel environment for which it has no relevant training data. This strategy generalizes our method across all environment types, including those for which we have training data as well as those for which we do not. In familiar environment types with dense training data, we show an 80% speed improvement compared to a planner that is constrained to guarantee safety. In experiments, our planner has reached over 8 m/s in unknown cluttered indoor spaces. 
Video of our experimental demonstration is available at http://groups.csail.mit.edu/rrg/bayesian_learning_high_speed_nav.", "corpus_id": 17642498, "title": "Bayesian Learning for Safe High-Speed Navigation in Unknown Environments" }
{ "abstract": "A compact high directive EBG resonator antenna operating in two frequency bands is described. Two major contributions to this compact design are using single layer superstrate and using artificial surface as ground plane. To obtain only the lower operating frequency band using superstrate layer is enough, but to extract the upper operating frequency band both superstrate layer and artificial surface as ground plane are necessary. Therefore, design of a superstrate to work in two frequency bands is very important. Initially, using appropriate frequency selective surface (FSS) structure with square loop elements, we design an optimum superstrate layer for each frequency band separately to achieve maximum directivity. Also, to design an artificial surface to work in the upper frequency band we use a suitable FSS structure over dielectric layer backed by PEC. Next, by using the idea of FSS structure with double square loop elements we propose FSS structure with modified double square loop elements, so that it operates in both of the desired operating frequency bands simultaneously. Finally, the simulation results for two operating frequency bands are shown to have good agreement with measurements.", "corpus_id": 25339329, "score": -1, "title": "Design of Compact Dual Band High Directive Electromagnetic Bandgap (EBG) Resonator Antenna Using Artificial Magnetic Conductor" }
{ "abstract": "In this paper, we review the development, at the STFC’s Central Laser Facility (CLF), of high energy, high repetition rate diode-pumped solid-state laser (DPSSL) systems based on cryogenically-cooled multi-slab ceramic Yb:YAG. To date, two systems have been completed, namely the DiPOLE prototype and the DiPOLE100 system. The DiPOLE prototype has demonstrated amplification of nanosecond pulses in excess of 10 J at 10 Hz repetition rate with an optical-to-optical efficiency of 22%. The larger scale DiPOLE100 system, designed to deliver 100 J temporally-shaped nanosecond pulses at 10 Hz repetition rate, has been developed at the CLF for the HiLASE project in the Czech Republic. Recent experiments conducted on the DiPOLE100 system demonstrated the energy scalability of the DiPOLE concept to the 100 J pulse energy level. Furthermore, second harmonic generation experiments carried out on the DiPOLE prototype confirmed the suitability of DiPOLE-based systems for pumping high repetition rate PW-class laser systems based on Ti:sapphire or optical parametric chirped pulse amplification (OPCPA) technology.", "corpus_id": 125303245, "title": "A 100J-level nanosecond pulsed DPSSL for pumping high-efficiency, high-repetition rate PW-class lasers" }
{ "abstract": "We present a numerical model of a pulsed, diode-pumped Yb:YAG laser amplifier for the generation of high energy ns-pulses. This model is used to explore how optical-to-optical efficiency depends on factors such as pump duration, pump spectrum, pump intensity, doping concentration, and operating temperature. We put special emphasis on finding ways to achieve high efficiency within the practical limitations imposed by real-world laser systems, such as limited pump brightness and limited damage fluence. We show that a particularly advantageous way of improving efficiency within those constraints is operation at cryogenic temperature. Based on the numerical findings we present a concept for a scalable amplifier based on an end-pumped, cryogenic, gas-cooled multi-slab architecture.", "corpus_id": 2800639, "title": "Optimising the efficiency of pulsed diode pumped Yb:YAG laser amplifiers for ns pulse generation." }
{ "abstract": "Rapid progress in the development of high-intensity laser systems has extended our ability to study light–matter interactions far into the relativistic domain, in which electrons are driven to velocities close to the speed of light. As well as being of fundamental interest in their own right, these interactions enable the generation of high-energy particle beams that are short, bright and have good spatial quality. Along with steady improvements in the size, cost and repetition rate of high-intensity lasers, the unique characteristics of laser-driven particle beams are expected to be useful for a wide range of contexts, including proton therapy for the treatment of cancers, materials characterization, radiation-driven chemistry, border security through the detection of explosives, narcotics and other dangerous substances, and of course high-energy particle physics. Here, we review progress that has been made towards realizing such possibilities and the principles that underlie them.", "corpus_id": 17098915, "score": -1, "title": "Principles and applications of compact laser–plasma accelerators" }
{ "abstract": "TO THE EDITOR: In their editorial (1) on our article (2), Feeley and Shine pose questions to consider as health systems enable patients to access and share their electronic health records. The issues that they raise are relevant and timely, and data emerging from the Veterans Affairs (VA) health care system provide some initial answers. Patients in the VA health system are embracing opportunities to take ownership of their medical records. In August 2010, President Obama announced the creation of a new “Blue Button” feature on the VA’s personal health record, My HealtheVet (3). The Blue Button enables patients to easily download their health information and share it with providers and caregivers. During the Blue Button’s first year, 311 863 (21%) registered My HealtheVet users downloaded their information, suggesting that patients’ interest in sharing their health information is matched by their use of features that facilitate such sharing. Although the VA does not yet enable patients to access encounter notes (contrary to the description in Feeley and Shine’s editorial), this feature has been piloted and there are plans to incorporate it into My HealtheVet in 2012. As with medical care in general, the neediest patients have the most to gain from the coming revolution in personal health records. Sicker veterans are using My HealtheVet and—perhaps contrary to the expectations of some—are eager to share their information. A disproportionate number of My HealtheVet users are in poor or fair health (40% according to our survey, compared with 24% in the general veteran population) (2, 4), and approximately 79% of these patients expressed interest in sharing their records with a caregiver or provider outside of the VA system (2). Finally, patients desire control over the specific components of their record that are available to persons with whom they wish to share information. 
For example, among 4541 patients who expressed interest in sharing their information with non-VA providers in our survey, 57% were interested in sharing their medication lists; however, only 15% were interested in sharing their communications with VA providers (2). This finding suggests a need for tailored applications that allow patients to designate specific portions of their record that selected persons can access. These early experiences from the VA’s personal health record system provide insight into information-sharing preferences of patients who often have multiple chronic illnesses and psychosocial comorbid conditions—the very patients who may benefit most from a care network enhanced through information-sharing technology.", "corpus_id": 139085789, "title": "Acute and subacute neck pain." }
{ "abstract": "Join the dialogue on health care reform. Comment on the perspectives published in Annals and offer ideas of your own. All thoughtful voices should be heard. While advances in medical science have led to continued improvements in medical care and health outcomes, evidence of the comparative effectiveness of alternative management options remains inadequate for informed medical care and health policy decision making. The result is frequently suboptimal and inefficient care as well as unsustainable costs. To enhance or at least maintain quality of care as health reform and cost containment occurs, better evidence of comparative clinical and cost-effectiveness is required (1). The American Recovery and Reinvestment Act of 2009 allocated a $1.1 billion down payment to support comparative effectiveness research (CER) (2). Although comparative effectiveness can be informed by synthesis of existing clinical information (systematic reviews, meta-analysis, and decision modeling) and analysis of observational data (administrative claims, electronic medical records, registries and other clinical cohorts, and case-control studies), randomized clinical trials (RCTs) are the most rigorous method of generating comparative effectiveness evidence and will necessarily occupy a central role in an expanded national CER agenda. However, as currently designed and conducted, many RCTs are ill suited to meet the evidentiary needs implicit in the IOM definition of CER: comparison of effective interventions among patients in typical patient care settings, with decisions tailored to individual patient needs (3). Without major changes in how we conceive, design, conduct, and analyze RCTs, the nation risks spending large sums of money inefficiently to answer the wrong questions or the right questions too late. 
This article addresses several fundamental limitations of traditional RCTs for meeting CER objectives and offers 3 potentially transformational approaches to enhance their operational efficiency, analytical efficiency, and generalizability for CER. Enhancing Structural and Operational Efficiency As currently conducted, RCTs are inefficient and have become more complex, time consuming, and expensive. More than 90% of industry-sponsored clinical trials experience delayed enrollment (4). In a study comparing 28 industry-sponsored trials started between 1999 and 2002 with 29 trials started between 2003 and 2006, the time from protocol approval to database lock increased by a median of 70% (4). Several organizations have sought to streamline study start-up. In response to an analysis in Cancer and Leukemia Group B that found a median of 580 days from concept approval to phase 3 study activation (5), the National Cancer Institute established an operational efficiency working group to reduce study activation time by at least 50%, increase the proportion of studies reaching accrual targets, and improve timely study completion (6). The National Institutes of Health's Clinical and Translational Science Award recipients are documenting study start-up metrics as a first step to fostering improvements (7). The National Cancer Institute, the CEO Roundtable, Cancer Centers, and Cooperative Groups developed standard terms for clinical trial agreements as a starting point for negotiations between study sponsors and clinical sites (8). The Institute of Medicine's Drug Forum also commissioned development of a template clinical research agreement (9). Through its Critical Path Program, the U.S. Food and Drug Administration (FDA) established the Clinical Trials Transformation Initiative (CTTI), a public-private partnership whose goal is to improve the quality and efficiency of clinical trials (10). 
The CTTI is hosted by Duke University and has broad representation from more than 50 member organizations, including academia, government, industry, clinical investigators, and patient advocates (11). The CTTI works by generating empirical data on how clinical trials are currently conducted and how they may be improved. Initial priorities for study include design principles, data quality and quantity (including monitoring), study start-up, and adverse event reporting. One of CTTI's projects is addressing site monitoring, an area that has been estimated to absorb 25% to 30% of phase 3 trial costs (12) and for which there is widespread agreement that improved efficiency is needed. The CTTI is determining the current range of monitoring practices for RCTs used by the National Institutes of Health, academic institutions, and industry; assessing the quality objectives of monitoring; and determining the performance of various monitoring practices in meeting quality objectives. This project will provide criteria to help sponsors select the most appropriate monitoring methods for a trial, thereby improving quality while optimizing resources. Collectively, these efforts are generating empirical evidence and developing the mechanisms to improve clinical trial efficiency. In conjunction with other improvements, including those described below, the resulting changes in clinical trial practices will increase the feasibility of mounting the scale and scope of RCTs required to evaluate the comparative effectiveness of medical care. Analytical Efficiency: The Potential Role of Bayesian and Adaptive Approaches The traditional frequentist school has provided a solid foundation for medical statistics. 
But the artificial division of results into significant and nonsignificant is better suited for one-time dichotomous decisions, such as regulatory approval, and is not the best model for comparing interventions as evidence accumulates over time, as occurs in a dynamic medical care system. With traditional trials and analytical methods, it is difficult to make optimal use of relevant existing, ancillary, or new evidence as it arises during a trial, and thus such methods often are not well suited to facilitate clinical and policy decision making. Furthermore, real-world CER can be noisier than a standard RCT. Standard statistical techniques require increased sample sizes, in part because of the resulting additional variability and in part when trials compare several active treatments whose effectiveness differs by relatively small amounts. Designs that use features that change or adapt in response to information generated during the trial can be more efficient than standard approaches. Although many standard RCTs are adaptive in limited ways (for example, those with interim monitoring and stopping rules), the frequentist paradigm inhibits adaptation because of the requirement to prespecify all possible study outcomes, which in turn requires some rigidity in design. The Bayesian approach, using formal, probabilistic statements of uncertainty based on the combination of all sources of information both from within and outside a study, prespecifies how information from various sources will be combined and how the design will change while controlling the probability of false-positive and false-negative conclusions (13). 
Bayesian and adaptive analytical approaches can reduce the sample size, time, and cost required to obtain decision-relevant information by incorporating existing high-quality external evidence (such as information from pivotal trials, systematic reviews, models, and rigorously conducted observational studies) into CER trial design and drawing on observed within-trial end point relationships. If new interventions become available, adaptive RCT designs can allow these interventions to be added and less effective ones dropped without restarting the trial; therefore, at any given time, the trial is comparing the alternatives most relevant to current clinical practice. This dynamic learning adaptive feature (analogous to the Institute of Medicine Evidence-Based Medicine Roundtable's learning health care system [14]) improves both the timeliness and clinical relevance of trial results. The following example shows how this model operates. A standard comparative effectiveness trial design of 4 alternative strategies for HIV infection treatment starts with the hypothesis of equal effectiveness of all 4 treatments. In contrast, as the trial progresses, the Bayesian approach answers the pragmatic questions: What is the probability that the favored therapy is the best of the 4 therapies? and What is the probability that the currently worst therapy will turn out to be best? (15). If this latter probability is low enough, the trialists can drop that treatment even if it is not, by conventional statistical testing, worse than other treatments. Newly developed HIV treatment strategies also can enter the trial, thus focusing patient resources on the most relevant treatment comparison. Bayesian and adaptive designs are particularly useful for rapidly evolving interventions (such as devices, procedures, practices, and systems interventions), especially when outcomes occur soon enough to permit adaptation of the trial design. 
They should also prove useful for clinical studies generated by such conditional coverage schemes as Medicare's Coverage with Evidence Development policy by adding onto an existing evidence base and adapting studies into community care settings of interest to payers and patients (16, 17). Random allocation need not be equal between trial arms or patient subgroups. Probabilities of each intervention being the best can be updated and random allocation probabilities revised, so that more patients are allocated to the most promising strategies as evidence accumulates. This flexibility can also permit Bayesian trials to focus experimentation on clinically relevant subgroups, which could facilitate tailoring strategies to particular patients, a key element of CER. Experience with Bayesian adaptive approaches has been growing in recent years. Early-phase cancer trials are commonly performed using Bayesian designs (18). In 2005, the FDA released a draft guidance document for the u", "corpus_id": 2073379, "title": "Rethinking Randomized Clinical Trials for Comparative Effectiveness Research: The Need for Transformational Change" }
{ "abstract": "Commuting exists because an important fraction of workers in developed countries do not live close to their workplaces, but at long distances from them, so that they must travel to their jobs and then back home daily. This paper studies commuting in Catalonia (a Spanish region) for the 1986-91 period. We introduce the main facts of commuting in Catalonia by a descriptive analysis using several statistical methods, first on its sectoral side (decomposing the Catalonian economy into 24 sectors), then on its territorial side (dividing Catalonia into 16 homogeneous territorial units called regions) and an analysis of the professional categories. Then, we comment briefly on the theory of residential location, which provides us with the theoretical framework needed for the study of commuting. The last part of the paper consists of an estimation of commuting using a logit model with individual data from the 1991 Spanish Population Census, in order to select the most relevant variables and estimate their effect on commuting.", "corpus_id": 154347909, "score": -1, "title": "Determinants of Individual Commuting in Catalonia, 1986-91: Theory and Empirical Evidence" }
{ "abstract": "BACKGROUND. Secondary lymphedema of the arm is a relatively common complication after breast cancer surgery. Although complex decongestive therapy is considered the “gold standard”, there is still a controversy as to whether adding pressotherapy is of any value. Thus, the aim of this study was to compare the efficacy of complex decongestive therapy (CDT) against complex decongestive therapy combined with pressotherapy on functional status, pain, and quality of life in patients with secondary lymphedema of the arm after breast cancer treatment. METHODS. In this prospective, randomized, parallel, nonblind study, we recruited 108 women, mean age 56.8±8.5 years, with secondary arm lymphedema who completed breast cancer surgery 57.4±46.2 months earlier. They were randomly assigned to a CDT group (control) or CDT+pressotherapy group (experimental). The CDT protocol consisted of skin care, manual lymphatic drainage, short stretch multi-layer compression bandages, and exercises provided by therapists. In addition to that, the experimental group received pressotherapy (intermittent pneumatic compression) for 30 minutes per day at a pressure of 40 mmHg. The treatments were administered once a day, five days a week, for 3 weeks. The subjects were instructed to continue administering the skin care, manual lymphatic drainage, compression sleeve and exercises on their own for 3 months after the end of treatment.", "corpus_id": 81816723, "title": "Efficacy of decongestive therapy and pressotherapy in female patients with arm lymphedema after breast cancer surgery" }
{ "abstract": "Summary. Background. The aim of this study was to compare two different physiotherapy methods in the treatment of lymphedema after breast surgery. Methods. This study was performed on 53 patients who had developed unilateral lymphedema after breast cancer treatment. Twenty-seven patients served as the experimental group and were treated with complex decongestive physiotherapy (CDP) applications including lymph drainage, multi-layer compression bandage, elevation, remedial exercises and skin care. Twenty-six patients in the control group were treated with standard physiotherapy (SP) applications including bandage, elevation, head–neck and shoulder exercises and skin care. Both groups were recommended a home program consisting of compression bandage exercises, skin care and walking. Patients were taken to a therapy program once a day, 3 days a week, for 4 weeks. The range of motion, circumferential measurement, and volumetric measurement were assessed before and after treatment. Results. The overall improvement in the CDP group was shown to be greater than in the SP group, and when the evaluation results of both groups were compared before and after treatment, a significant statistical difference in edema according to circumferential and volumetric measurement results was found in favor of the CDP group (p < 0.05). Conclusion. In the patients with upper extremity lymphedema, the shoulder mobility can be increased and edema can be decreased by the use of complex physiotherapy programs.", "corpus_id": 1794499, "title": "The Comparison of Two Different Physiotherapy Methods in Treatment of Lymphedema after Breast Surgery" }
{ "abstract": "A conservative management technique for lymphoedema, known as Complex Physical Therapy, which comprises massage, compression bandaging, skin care and exercises, appears to be effective in the management of this chronic condition. However, it is extremely time consuming, requiring daily treatments of more than one hour in duration for a period of four weeks. A modified program, which combines all the elements of the treatment technique, was designed. This program requires treatments only twice weekly and uses pressure garments instead of compression bandaging. In this clinical trial on 25 patients, the results of the two treatment programs were found to be similar.", "corpus_id": 6770835, "score": -1, "title": "Effectiveness of modified Complex Physical Therapy for lymphoedema treatment." }
{ "abstract": "In this paper we present a method to remove the noise by applying the Perona–Malik algorithm working on an irregular computational grid. This grid is obtained with a quad-tree technique and is adapted to the image intensities—pixels with similar intensities can form large elements. We apply this algorithm to remove the speckle noise present in SAR images, i.e., images obtained by radars with a synthetic aperture enabling their resolution to be increased in an electronic way. The presence of the speckle in an image degrades the quality of the image and makes interpretation of features more difficult. Our purpose is to remove this noise to such a degree that the edge detection or landscape elements detection can be performed with relatively simple tools. The progress of smoothing leads to grids with significantly fewer elements than the original number of pixels. The results are compared with measurements performed on an inspected area of interest. At the end we show the possibility to modify the scheme to the adaptive mean curvature flow filter which can be used to smooth the boundaries.", "corpus_id": 123726036, "title": "Quad-tree Based Finite Volume Method for Diffusion Equations with Application to SAR Imaged Filtering" }
{ "abstract": "We propose the coarsening strategy for the finite volume computational method given by K. Mikula and N. Ramarosy (Numer. Math. 89, 2001, 561–590) for the numerical solution of the (modified in the sense of F. Catte et al. (SIAM J. Numer. Anal. 29, 1992, 182–193)) Perona–Malik nonlinear image selective smoothing equation (called anisotropic diffusion in image processing). The adaptive approach is directly at hand because a solution tends to be flat in large subregions of the image, and thus it is not necessary to consider the same fine resolution of computations in the whole spatial domain. This approach reduces computational effort, because the coarsening of the computational grid rapidly reduces the number of unknowns in the linear systems to be solved at discrete scale steps of the method.", "corpus_id": 3846585, "title": "An Adaptive Finite Volume Scheme for Solving Nonlinear Diffusion Equations in Image Processing" }
{ "abstract": "Most research on the diffusion of policy innovations focuses on the date of adoption and its correlates. This research examines an aspect of innovation which has received little attention: policy reinvention during the initial diffusion process and through amendment. The central proposition is that even though a set of laws or policies may be grouped into one broad, general category, states create substantively different policies through reinvention, which has important consequences for groups affected by the legislation. Hypotheses concerning the relationship between date of adoption and policy content and the effect of particular controversial policy provisions on reinventions are examined. The study has general implications for the study of the diffusion of innovations and policy in state politics.", "corpus_id": 41649668, "score": -1, "title": "Innovation and Reinvention in State Policymaking: Theory and the Evolution of Living Will Laws" }
{ "abstract": "Lyambezi is a natural lake found in the Caprivi region of Namibia. It provides the link between the Kwando-Linyanti and Chobe Rivers. The lake dried out for nearly two decades since the mid-1980s. However, Lyambezi re-emerged and went through 7 phases of drying out and reforming but has remained robust for the past half-decade. The study focused on enhancing the understanding of the variations in the lake’s areal extent from a time series analysis of Landsat imagery. The area was quantified at intra- and inter-annual temporal scales based on the analysis of satellite imagery, i.e. Landsat TM, ETM+ and Landsat 8 OLI sensors. Water features were delineated using the modified normalized difference water index (MNDWI). The lake spectral classes were segmented based on five dynamic thresholds, from which the area covered by water was calculated. Results show that the lake exhibits a wide range of area variations at both inter-annual and intra-annual temporal scales. However, due to the presence of the thick vegetation in the open water body, the segmentation of spectral classes yielded poor to moderate accuracy levels (kappa=28-53). The application and success of DEM were tested and the results illustrated that the use of SRTM 3 DEM led to the successful extraction of topography-based hydrological parameters that served as inputs into various GIS-based hydrological models. These results imply that lake volume can successfully be estimated, leading to a better understanding of the effects of floods on water resources availability. On the other hand, hydrological analysis results show that flood events of a magnitude above the long-term average maximum have a 2 to 3 year return period and a probability of occurrence in any given year ranging between 0.29 and 0.58. This implies that floods have the potential to inundate the lake every second year, which represents a flooding occurrence at a regular enough interval to prevent the lake from going into a drying-up phase. 
Therefore, rainfall variability is more closely linked to the inundation, and hence the appearance, of the lake. It also represents the most important challenge for the water resources management of the lake. Overall, the results of this study provide a better understanding of Lake Lyambezi dynamics and therefore enhance the understanding of the behaviour and response of previously-desiccated environments to prevailing hydrological conditions.", "corpus_id": 128840725, "title": "An understanding of variations in the area extent of Lake Lyambezi: perspective for water resources management" }
{ "abstract": "The Tibetan Plateau is a typical study area of global environmental change, and lakes are an important ecological factor in revealing eco-environmental evolution. Using remote sensing technology to monitor the succession law of lakes on the plateau is of great significance to global environment change research. Based on a water index computed by the spectral feature fitting (SFF) method, this paper uses a “whole-local” spatial scale transformation mechanism, along with an iterative algorithm, to obtain high-precision extraction of modern lakes on the Tibetan Plateau. Moreover, it uses integrated data of LANDSAT ETM+ images and SRTM data to further detect and recover paleo shorelines. By comparing paleo and modern lakes, it shows that lakes on the Tibetan Plateau have shrunk significantly since the great lake period, which provides fundamental information support to research on global paleo-climatology and paleo-hydrology change.", "corpus_id": 1600425, "title": "Lake shrinkage analysis using spectral-spatial coupled remote sensing on Tibetan Plateau" }
{ "abstract": "A comparison between climate records from the Chinese Loess Plateau, which show a strengthening of the summer monsoon in the last 0.6 Ma and a strengthening of the Asian summer monsoon during the 35-25 and 4-2.5 ka B.P. intervals, and Australian records, which show a strengthening of aridity or desertification, suggests that the Australian high leading to the desertification strengthened the Asian summer monsoon in the past through the cross-equator circulation. The synchronous variation in the Holocene Optimum as indicated by Asian and Australian climate records, on the other hand, suggests that the cross-equator East Asian winter monsoon circulation related to the Mongolian high might have influenced the Australian summer monsoon. The interaction of the monsoon climate between the southern and northern hemispheres through cross-equator circulation probably started to become obvious around 0.6 Ma B.P.", "corpus_id": 128899615, "score": -1, "title": "A correlation between southern and northern hemispheres during the last 0.6 Ma" }
{ "abstract": "In this work, Poly(Acrylic Acid) (PAA) was employed as a nanoparticle stabilizer for TiO2. Adsorption and encapsulation of nanoparticles in polyelectrolytes impart stability due to steric and electrostatic effects. Crosslinking of the polymer through UV-irradiation permanently encapsulates the metal as well as reinforces the polymer cage. The optimal pH and ratio of reactants were determined and then assessed through Dynamic Light Scattering (DLS) for particle size and zeta potential measurements for stability in aqueous solutions. Results showed that among the various TiO2/PAA ratios, the 1:3 ratio showed minimal changes in size and zeta potential values even when exposed to various pH conditions. Meanwhile, at pH 5, TiO2 attained a positive surface charge, while PAA exists in its deprotonated form, thus maximizing the electrostatic interaction between the two materials. Analysis revealed that at that particular ratio and pH, a particle size of 61.79 nm and a zeta potential of -36 mV were obtained, respectively. The physical morphology of the nanocomposites was characterized through Scanning Electron Microscopy, showing agglomerates of small particles resulting in larger particles. Further studies shall be done to utilize the potential of the polymer-coated nanoparticles in dry form.", "corpus_id": 100107474, "title": "Investigating the pH Dependence of Ultraviolet Radiation Induced Synthesis of TiO2/Poly(Acrylic Acid) Nanocomposites" }
{ "abstract": "The adsorption of poly(acrylic acid) (PAA) in aqueous suspension onto the surface of TiO2 nanoparticles was investigated. FTIR spectroscopic data provided evidence in support of hydrogen bonding and chemical interaction in the case of the PAA-TiO2 system. Adsorption isotherms demonstrated that part of the PAA initially added to the suspension was adsorbed onto the TiO2 surface, after which there was a gradual attainment of an adsorption plateau. The adsorption density of PAA was found to increase with an increase of PAA molecular weight, while it decreased with an increase of pH. The thickness of the PAA adsorption layer was calculated based on measurements of suspension viscosities in the absence and presence of PAA. It was shown that the thickness of the adsorption layer increased with the increase of pH, PAA molecular weight, and its concentration. The surface charge density, the diffuse charge density, and the zeta potential of TiO2 varied distinctly after PAA adsorption. The shift of pHiep toward a lower pH value was observed in the presence of PAA. PAA was found to stabilize the suspension of TiO2 nanoparticles through electrosteric repulsion. The influence of factors such as PAA molecular weight and its concentration on the colloidal stability of the aqueous suspension was also investigated.", "corpus_id": 12384663, "title": "Adsorption of poly(acrylic acid) onto the surface of titanium dioxide and the colloidal stability of aqueous suspension." }
{ "abstract": "Partial table of contents: Ceramics Processing and Ceramic Products. Surface Chemistry. CERAMIC RAW MATERIALS. Special Inorganic Chemicals. MATERIALS CHARACTERIZATION. Particle Size and Shape. Density, Pore Structure, and Specific Surface Area. PROCESSING ADDITIVES. Liquids and Wetting Agents. Flocculants, Binders, and Bonds. PARTICLE PACKING, CONSISTENCY, AND BATCH CALCULATIONS. Batch Consistency and Formulation. PARTICLE MECHANICS AND RHEOLOGY. Mechanics of Unsaturated Bodies. BENEFICIATION. Comminution. Granulation. FORMING. Pressing. Injection Molding. POSTFORMING PROCESSES. Drying. Firing. Appendices. Index.", "corpus_id": 136598959, "score": -1, "title": "Principles of ceramics processing" }
{ "abstract": "Summary. Spectral analysis techniques have successfully been applied for near real-time monitoring of power system small-signal electromechanical oscillations using synchronized phasor data from PMUs. The methods used for this purpose commonly assume that random load variations adopt the distribution function of Gaussian noise. Hence, careful attention has to be paid so that the preprocessing of synchronized phasor measurements is capable of providing data with the characteristics expected for these methods to work properly. This can be viewed as a “conditioning” step in the data handling process that has an impact on the results from the spectral analysis methods that consume these data. This article aims to revisit the crucial step of preprocessing of PMU data in a tutorial fashion. The goal is to share the authors' experience when dealing with real PMU data originating from the Nordic Grid. This article offers a systematic and detailed methodology developed by the authors which has been successfully used in studies on estimation of electromechanical modes in the Nordic Grid. Copyright © 2013 John Wiley & Sons, Ltd.", "corpus_id": 108873152, "title": "Preprocessing synchronized phasor measurement data for spectral analysis of electromechanical oscillations in the Nordic Grid" }
{ "abstract": "For power networks such as the Nordic Grid, which have operating constraints imposed by the existence of low-damped electromechanical oscillations, the estimation of electromechanical mode properties is of crucial importance for providing power system control room operators with adequate indicators of the stress of their network. This article addresses the practical application of different spectral analysis techniques that can be used for the estimation of electromechanical mode properties using data emerging from real synchronized phasor measurement units (PMUs) located at both the low-voltage distribution and high-voltage transmission networks of the Nordic grid. Emphasis is placed on providing systematic approaches to deal with imperfect data found in practice so that accurate estimates can be computed.", "corpus_id": 4025653, "title": "Applications of spectral analysis techniques for estimating the nordic grid's low frequency electromechanical oscillations" }
{ "abstract": "This paper presents three algorithms for identification of dominant inter-area oscillation paths: the series of interconnected corridors through which the highest content of the inter-area modes propagates. The algorithms are developed to treat different sets of data: 1) a known system model; 2) transient measurements; and 3) ambient measurements from phasor measurement units (PMUs). These algorithms take feasibility into consideration by associating the network variables made available by PMUs, i.e., voltage and current phasors. All algorithms are demonstrated and implemented on a conceptualized Nordic Grid model. The results and a comparison among the three algorithms are provided. The applications of the algorithms not only facilitate revealing the critical corridors that are most stressed but also help indicate relevant feedback input signals and inputs to mode meters, which can be determined from the properties of the dominant paths.", "corpus_id": 9240032, "score": -1, "title": "Identification of Power System Dominant Inter-Area Oscillation Paths" }
{ "abstract": "In this paper we study the Weihrauch complexity of projection operators onto closed subsets of the Euclidean space. We show that some fundamental degrees of the Weihrauch lattice can be characterized in terms of such operators.", "corpus_id": 119695101, "title": "Projection operators in the Weihrauch lattice" }
{ "abstract": "We introduce two new operations (compositional products and implication) on Weihrauch degrees, and investigate the overall algebraic structure. The validity of the various distributivity laws is studied and forms the basis for a comparison with similar structures such as residuated lattices and concurrent Kleene algebras. Introducing the notion of an ideal with respect to the compositional product, we can consider suitable quotients of the Weihrauch degrees. We also prove some specific characterizations using the implication. In order to introduce and study compositional products and implications, we introduce and study a function space of multi-valued continuous functions. This space turns out to be particularly well-behaved for effectively traceable spaces that are closely related to admissibly represented spaces.", "corpus_id": 3783238, "title": "On the algebraic structure of Weihrauch degrees" }
{ "abstract": "Early life stage (ELS) toxicity experiments were carried out with zebra fish (Brachydanio rerio) and rainbow trout (Salmo gairdneri) and 10 chemicals used in the rubber industry. Several of these chemicals appeared to be teratogenic. A good correlation (r = 0.95) was found between the 7-day EC50 for zebra fish and the 60-day EC50 for rainbow trout for total embryotoxicity (embryolethality and malformations). The S. gairdneri test appeared to be slightly more sensitive than the test with B. rerio. It is therefore concluded that this short-term test is a good alternative for the long-term test with S. gairdneri. A remarkably good correlation (r = 0.90) was found between the ED50 for chicken embryotoxicity reported in the literature and the EC50 for embryotoxicity for both zebra fish and rainbow trout. This may, among other things, be explained by similarities in embryonic development and the absence of maternal and placental metabolism of the toxicants in tests with eggs of both fish and birds. It may therefore be concluded that both the short-term ELS test with B. rerio and the chicken egg test have the same predictive power for mammalian teratogenicity; i.e., both are suitable screening tests for direct-acting teratogens.", "corpus_id": 24875691, "score": -1, "title": "Fish embryos as teratogenicity screens: a comparison of embryotoxicity between fish and birds." }
{ "abstract": "Chicken erythroblasts transformed with avian erythroblastosis virus or S13 virus provide suitable model systems with which to analyze the maturation of immature erythroblasts into erythrocytes. The transformed cells are blocked in differentiation at around the colony-forming unit-erythroid stage of development but can be induced to differentiate in vitro. Analysis of the expression and assembly of components of the membrane skeleton indicates that these cells simultaneously synthesize alpha-spectrin, beta-spectrin, ankyrin, and protein 4.1 at levels that are comparable to those of mature erythroblasts. However, they do not express any detectable amounts of anion transporter. The peripheral membrane skeleton components assemble transiently and are subsequently rapidly catabolized, resulting in 20-40-fold lower steady-state levels than are found in maturing erythrocytes. Upon spontaneous or chemically induced terminal differentiation of these cells, expression of the anion transporter is initiated with a concomitant increase in the steady-state levels of the peripheral membrane-skeletal components. These results suggest that during erythropoiesis, expression of the peripheral components of the membrane skeleton is initiated earlier than that of the anion transporter. Furthermore, they point to a key role for the anion transporter in conferring long-term stability to the assembled erythroid membrane skeleton during terminal differentiation.", "corpus_id": 17952358, "title": "Control of erythroid differentiation: asynchronous expression of the anion transporter and the peripheral components of the membrane skeleton in AEV- and S13-transformed cells" }
{ "abstract": "Protein 4.1 is a peripheral membrane protein that strengthens the actin–spectrin-based membrane skeleton of the red blood cell and also serves to attach this structure to the plasma membrane. In avian erythrocytes it exists as a family of closely related polypeptides that are differentially expressed during erythropoiesis. We have analyzed the synthesis and assembly onto the membrane skeleton of protein 4.1 and in this paper we show that its assembly is extremely rapid and highly efficient, since greater than 95% of the molecules synthesized are assembled in less than 1 min. The remaining minor fraction of unassembled protein 4.1 differs kinetically and is either degraded or assembled with slower kinetics. All protein 4.1 variants exhibit a similar kinetic behavior irrespective of the stage of erythroid differentiation. Thus, the amount and the variant ratio of protein 4.1 assembled are determined largely at the transcriptional or at the translational level and not posttranslationally. During erythroid terminal differentiation the molar amounts of protein 4.1 and spectrin assembled change. In postmitotic cells, as compared with proliferative cells, far more protein 4.1 than spectrin is assembled onto the membrane skeleton. This modulation may permit the assembly of an initially flexible membrane skeleton in mitotic erythroid cells. As cells become postmitotic and undergo the final steps of maturation, the membrane skeleton may be gradually stabilized by the assembly of protein 4.1.", "corpus_id": 3165802, "title": "Assembly of protein 4.1 during chicken erythroid differentiation" }
{ "abstract": "Multiscale and multitool advanced characterisation of pyrophosphate-stabilised amorphous calcium carbonates allowed building a cluster-based model paving the way for tunable biomaterials.", "corpus_id": 253365622, "score": -1, "title": "Pyrophosphate-stabilised amorphous calcium carbonate for bone substitution: toward a doping-dependent cluster-based model" }