| query dict | pos dict | neg dict |
|---|---|---|
{
"abstract": "Propositional model enumeration, or All-SAT, is the task to record all models of a propositional formula. It is a key task in software and hardware verification, system engineering, and predicate abstraction, to mention a few. It also provides a means to convert a CNF formula into DNF, which is relevant in circuit design. While in some applications enumerating models multiple times causes no harm, in others avoiding repetitions is crucial. We therefore present two model enumeration algorithms, which adopt dual reasoning in order to shorten the found models. The first method enumerates pairwise contradicting models. Repetitions are avoided by the use of so-called blocking clauses, for which we provide a dual encoding. In our second approach we relax the uniqueness constraint. We present an adaptation of the standard conflict-driven clause learning procedure to support model enumeration without blocking clauses. Our procedures are expressed by means of a calculus and proofs of correctness are provided.",
"corpus_id": 239769092,
"title": "On Enumerating Short Projected Models"
} | {
"abstract": "All solutions SAT (AllSAT for short) is a variant of the propositional satisfiability problem. AllSAT has been relatively unexplored compared to other variants despite its significance. We thus survey and discuss major techniques of AllSAT solvers. We accurately implemented them and conducted comprehensive experiments using a large number of instances and various types of solvers including a few publicly available software. The experiments revealed the solvers’ characteristics. We made our implemented solvers publicly available so that other researchers can easily develop their solvers by modifying our code and comparing it with existing methods.",
"corpus_id": 445580,
"title": "Implementing Efficient All Solutions SAT Solvers"
} | {
"abstract": "The situation in which a choice is made is an important information for recommender systems. Context-aware recommenders take this information into account to make predictions. So far, the best performing method for context-aware rating prediction in terms of predictive accuracy is Multiverse Recommendation based on the Tucker tensor factorization model. However this method has two drawbacks: (1) its model complexity is exponential in the number of context variables and polynomial in the size of the factorization and (2) it only works for categorical context variables. On the other hand there is a large variety of fast but specialized recommender methods which lack the generality of context-aware methods. We propose to apply Factorization Machines (FMs) to model contextual information and to provide context-aware rating predictions. This approach results in fast context-aware recommendations because the model equation of FMs can be computed in linear time both in the number of context variables and the factorization size. For learning FMs, we develop an iterative optimization method that analytically finds the least-square solution for one parameter given the other ones. Finally, we show empirically that our approach outperforms Multiverse Recommendation in prediction quality and runtime.",
"corpus_id": 207189080,
"score": -1,
"title": "Fast context-aware recommendations with factorization machines"
} |
{
"abstract": "background Osteoporosis is an important, frequently unrecognized consequence of hypercortisolism.",
"corpus_id": 38978915,
"title": "Severe impairment of bone mass and turnover in Cushing’s disease: comparison between childhood‐onset and adulthood‐onset disease"
} | {
"abstract": null,
"corpus_id": 24951666,
"title": "Skeletal development and bone turnover revisited."
} | {
"abstract": "Endochondral ossification, the mechanism responsible for the development of the long bones, is dependent on an extremely stringent coordination between the processes of chondrocyte maturation in the growth plate, vascular expansion in the surrounding tissues, and osteoblast differentiation and osteogenesis in the perichondrium and the developing bone center. The synchronization of these processes occurring in adjacent tissues is regulated through vigorous crosstalk between chondrocytes, endothelial cells and osteoblast lineage cells. Our knowledge about the molecular constituents of these bidirectional communications is undoubtedly incomplete, but certainly some signaling pathways effective in cartilage have been recognized to play key roles in steering vascularization and osteogenesis in the perichondrial tissues. These include hypoxia-driven signaling pathways, governed by the hypoxia-inducible factors (HIFs) and vascular endothelial growth factor (VEGF), which are absolutely essential for the survival and functioning of chondrocytes in the avascular growth plate, at least in part by regulating the oxygenation of developing cartilage through the stimulation of angiogenesis in the surrounding tissues. A second coordinating signal emanating from cartilage and regulating developmental processes in the adjacent perichondrium is Indian Hedgehog (IHH). IHH, produced by pre-hypertrophic and early hypertrophic chondrocytes in the growth plate, induces the differentiation of adjacent perichondrial progenitor cells into osteoblasts, thereby harmonizing the site and time of bone formation with the developmental progression of chondrogenesis. Both signaling pathways represent vital mediators of the tightly organized conversion of avascular cartilage into vascularized and mineralized bone during endochondral ossification.",
"corpus_id": 5276854,
"score": -1,
"title": "Signaling pathways effecting crosstalk between cartilage and adjacent tissues: Seminars in cell and developmental biology: The biology and pathology of cartilage."
} |
{
"abstract": "We present a new instance segmentation approach tailored to biological images, where instances may correspond to individual cells, organisms or plant parts. Unlike instance segmentation for user photographs or road scenes, in biological data object instances may be particularly densely packed, the appearance variation may be particularly low, the processing power may be restricted, while, on the other hand, the variability of sizes of individual instances may be limited. The proposed approach successfully addresses these peculiarities. Our approach describes each object instance using an expectation of a limited number of sine waves with frequencies and phases adjusted to particular object sizes and densities. At train time, a fully-convolutional network is learned to predict the object embeddings at each pixel using a simple pixelwise regression loss, while at test time the instances are recovered using clustering in the embedding space. In the experiments, we show that our approach outperforms previous embedding-based instance segmentation approaches on a number of biological datasets, achieving state-of-the-art on a popular CVPPP benchmark. This excellent performance is combined with computational efficiency that is needed for deployment to domain specialists. The source code of the approach is available at https://github.com/kulikovv/harmonic .",
"corpus_id": 131773847,
"title": "Instance Segmentation of Biological Images Using Harmonic Embeddings"
} | {
"abstract": "We propose a new and, arguably, a very simple reduction of instance segmentation to semantic segmentation. This reduction allows to train feed-forward non-recurrent deep instance segmentation systems in an end-to-end fashion using architectures that have been proposed for semantic segmentation. Our approach proceeds by introducing a fixed number of labels (colors) and then dynamically assigning object instances to those labels during training (coloring). A standard semantic segmentation objective is then used to train a network that can color previously unseen images. At test time, individual object instances can be recovered from the output of the trained convolutional network using simple connected component analysis. In the experimental validation, the coloring approach is shown to be capable of solving diverse instance segmentation tasks arising in autonomous driving (the Cityscapes benchmark), plant phenotyping (the CVPPP leaf segmentation challenge), and high-throughput microscopy image analysis. The source code is publicly available: this https URL.",
"corpus_id": 50784912,
"title": "Instance Segmentation by Deep Coloring"
} | {
"abstract": "This study examines the use of deep convolutional neural network in the classification of rice plants according to health status based on images of its leaves. A three-class classifier was implemented representing normal, unhealthy, and snail-infested plants via transfer learning from an AlexNet deep network. The network achieved an accuracy of 91.23%, using stochastic gradient descent with mini batch size of thirty (30) and initial learning rate of 0.0001. Six hundred (600) images of rice plants representing the classes were used in the training. The training and testing dataset-images were captured from rice fields around the district and validated by technicians in the field of agriculture.",
"corpus_id": 26722332,
"score": -1,
"title": "A Multiclass Deep Convolutional Neural Network Classifier for Detection of Common Rice Plant Anomalies"
} |
{
"abstract": "Purpose: To investigate the safety, feasibility and perioperative outcome of laparoscopic simultaneous resection of primary colorectal cancer with synchronous liver metastases. Methods: We conducted a systematic search of all articles published on PubMed until August 2020. Search terms included: hepatectomy, liver resection, laparoscopy, hand-assisted laparoscopy, minimally invasive, colectomy, colorectal neoplasms, colorectal resections, combined resection and simultaneous resection. No randomized trials are available, all the data have been reported as case reports, case series or case–control studies. Results: Six hundred and sixty-one laparoscopic simultaneous resections were identified in 22 reviewed articles. There were 93 (15 %) major hepatic resections. The most performed liver resections were parenchymal sparing non-anatomical resections. Colorectal resections included right colectomy, left colectomy, anterior resection and low anterior resection; majority of colorectal surgeries were rectal resections. According to the proposed reviewed data, the laparoscopic simultaneous resections appeared to be feasible and safe, even with major hepatectomies. Good experience of the surgeon and proper patient selection are the keys to successful results. Minor liver resections associated with colorectal resection can be routinely considered.",
"corpus_id": 226400598,
"title": "Simultaneous Laparoscopic Resection of Primary Colorectal Cancer and Associated Liver Metastases: A Systematic Review"
} | {
"abstract": "Abstract Purpose: To investigate the efficacy and safety of laparoscopic simultaneous resections of colorectal cancer and synchronous colorectal liver metastases (SCRLM), relative to open surgery. Methods: Between 1 January 2009 and 20 April 2014, 20 of 25 patients who underwent laparoscopic simultaneous colorectal cancer and SCRLM resections were matched with 20 of 29 patients who underwent an open approach, based on prognostic propensity scores. Perioperative results and survival outcomes were compared. Results: The laparoscopic and open groups were comparable in demographics, cancer characteristics, surgery characteristics, and chemotherapy treatment. No postoperative mortality occurred in either group. The estimated blood loss and postoperative stay were significantly greater in the open group than in the laparoscopic group (all, p < .05). All other perioperative results and postoperative complications were similar between the two groups, as well as three-year overall and disease-free survival rates. Conclusions: The postoperative complications and survival rates of patients given laparoscopic simultaneous colorectal cancer and SCRLM resections were similar to those treated with an open approach, but with greater short-term benefits. Laparoscopy in this setting by an experienced surgical team appears safe and effective, and is a feasible alternative to an open approach for selected patients.",
"corpus_id": 1246041,
"title": "Laparoscopic resections of colorectal cancer and synchronous liver metastases: a case controlled study"
} | {
"abstract": "OBJECTIVE\nTo compare early and long-term outcomes in patients undergoing resection for colorectal liver metastases (CLM) by either a laparoscopic (LA) or an open (OA) approach.\n\n\nBACKGROUND\nThe LA is still a matter of debate regarding the surgical management of CLM.\n\n\nMETHODS\nData of all patients from 32 French surgical centers who underwent liver resection for CLM from January 2006 to December 2013 were collected. Aiming to obtain 2 well-balanced cohorts for available variables influencing early outcome and survival, the LA group was matched 1:1 with the OA group by using a propensity score (PS)-based method.\n\n\nRESULTS\nThe unmatched initial cohort consisted of 2620 patients (LA: 176, OA: 2444). In the matched cohort for operative risk factors (LA: 153, OA: 153), the LA group had shorter hospitalization stays [11.1 (±9) days vs 13.9 (±10) days; P = 0.01] and was associated with lower rates of grade III to V complications [odds ratio (OR): 0.27, 95% confidence interval (CI) 0.14-0.51; P = 0.0002] and inhospital transfusions (OR: 0.33 95% CI 0.18-0.59; P < 0.0001). On a prognostic factors well-balanced population (LA: 73, OA: 73), the LA group and the OA group experienced similar overall (OS) and disease-free (DFS) survival rates [OS rates of 88% and 78% vs 84% and 75% at 3 and 5 years, respectively (P = 0.72) and DFS rates of 40% and 32% vs 52% and 36% at 3 and 5 years, respectively (P = 0.60)].\n\n\nCONCLUSIONS\nIn the patients who are suitable for LA, laparoscopy yields better operative outcomes without impairing long-term survival.",
"corpus_id": 25118947,
"score": -1,
"title": "Early and Long-term Oncological Outcomes After Laparoscopic Resection for Colorectal Liver Metastases: A Propensity Score-based Analysis."
} |
{
"abstract": "Background Studies have shown that a low serum uric acid (SUA) level associates with Parkinson’s disease (PD), but many of them did not exclude patients with impaired renal function. Studies on the association between serum bilirubin level and PD also are limited. This study determined the association between SUA level, SUA/serum creatinine (SCr) ratio and serum bilirubin levels in PD patients with normal renal and liver functions. Methods The PD patients from a neurological clinic, and the controls from the club for the elderly, were recruited into this study. The PD stage and motor and non-motor function were determined by the Hoehn-Yahr (H&Y) scale and unified Parkinson’s disease rating scale (UPDRS), respectively. Results Sixty-one PD patients and 135 controls participated. The SUA/SCr ratio, but not SUA, was significantly lower in the PD patients than in the controls (4.12 ± 0.90 vs. 4.59 ± 1.04, P = 0.003). Serum total bilirubin (TB) and indirect bilirubin (IDB) were significantly higher in the PD patients (7.92 ± 3.67 µmol/L vs. 6.59 ± 2.78 µmol/L, P = 0.003 and 4.52 ± 2.48 µmol/L vs. 3.26 ± 1.82 µmol/L, P < 0.001), respectively. Serum TB and IDB, but not SUA or SUA/SCr ratio, were associated negatively with PD stages (P = 0.010 and P = 0.014, respectively). There was no association between TB, IDB, SUA or SUA/SCr ratio and PD disease duration or motor subtypes. No significant correlation was found between SUA or SUA/SCr ratio, serum TB and IDB. Conclusion The SUA/SCr ratio is more sensitive than SUA in determining their association with PD. The high serum TB and IDB levels in PD patients compared with the controls suggest that serum bilirubin might play a role in the pathogenesis of PD. However, the lack of association between SUA or the SUA/SCr ratio and serum TB or IDB suggests that these two biomarkers play a different role in the etiopathogenesis of PD.",
"corpus_id": 213184507,
"title": "Serum Uric Acid, Serum Uric Acid to Serum Creatinine Ratio and Serum Bilirubin in Patients With Parkinson’s Disease: A Case-Control Study"
} | {
"abstract": "The objective of the study is to investigate the correlation between bilirubin and uric acid (UA) concentrations and symptoms of Parkinson’s disease (PD) in Chinese population. A total of 425 PD patients and 460 controls were included in the current study. Patients were diagnosed by a neurologist and assessed using the Hoehn & Yahr (H&Y) scale. Venous blood samples were collected, and bilirubin and UA concentrations were analyzed. Compared to controls, indirect bilirubin (IBIL) and UA concentrations were lower in PD patients (PIBIL = 0.015, PUA = 0.000). Serum IBIL in different age subgroups and H&Y stage subgroups were also lower compared to the control group (PIBIL = 0.000, PUA = 0.000) but were not significantly different among these subgroups. Females in the control group had significantly lower serum IBIL and UA concentrations than males (PIBIL = 0.000, PUA = 0.000) and the PD group (PIBIL = 0.027, PUA = 0.000). In early PD (patients with <2-year medical history and no treatment), serum IBIL and UA concentrations were also lower than the controls (PIBIL = 0.013, PUA = 0.000). Although IBIL concentration was positively correlated with UA concentration in controls (RIBIL = 0.229, PIBIL = 0.004), this positive association was not observed in the PD group (RIBIL = −0.032, PIBIL = 0.724). Decreased levels of serum IBIL and UA were observed in PD patients. It is possible that individuals with decreased serum bilirubin and UA concentrations lack the endogenous defense system to prevent peroxynitrite and other free radicals from damaging and destroying dopaminergic cells in the substantia nigra. Our results provide a basis for further investigation into the role of bilirubin in PD.",
"corpus_id": 1050437,
"title": "Lower Serum Bilirubin and Uric Acid Concentrations in Patients with Parkinson’s Disease in China"
} | {
"abstract": "Hydrophobic bile acids may cause hepatocellular necrosis and apoptosis during cholestatic liver diseases. The mechanism for this injury may involve mitochondrial dysfunction and the generation of oxidant stress. The purpose of this study was to determine the relationship of oxidant stress and the mitochondrial membrane permeability transition (MMPT) in hepatocyte necrosis induced by bile acids. The MMPT was measured spectrophotometrically and morphologically in rat liver mitochondria exposed to glycochenodeoxycholic acid (GCDC). Freshly isolated rat hepatocytes were exposed to GCDC and hepatocellular necrosis was assessed by lactate dehydrogenase release, hydroperoxide generation by dichlorofluorescein fluorescence, and the MMPT in cells by JC1 and tetramethylrhodamine methylester fluorescence on flow cytometry. GCDC induced the MMPT in a dose- and Ca2+-dependent manner. Antioxidants significantly inhibited the GCDC-induced MMPT and the generation of hydroperoxides in isolated mitochondria. Other detergents failed to induce the MMPT and a calpain-like protease inhibitor had no effect on the GCDC-induced MMPT. In isolated rat hepatocytes, GCDC induced the MMPT, which was inhibited by antioxidants. Blocking the MMPT in hepatocytes reduced hepatocyte necrosis and oxidant stress caused by GCDC. Oxidant stress, and not detergent effects or the stimulation of calpain-like proteases, mediates the GCDC-induced MMPT in hepatocytes. We propose that reducing mitochondrial generation of reactive oxygen species or preventing increases in mitochondrial Ca2+ may protect the hepatocyte against bile acid-induced necrosis.",
"corpus_id": 12362137,
"score": -1,
"title": "Role of Oxidant Stress in the Permeability Transition Induced in Rat Hepatic Mitochondria by Hydrophobic Bile Acids"
} |
{
"abstract": "The realization of musical ideas at a keyboard instrument depends on the ability to convert sound into movement, a process called audiomotor transformation. Using fMRI, brain activations were studied in classically trained improvising and non-improvising musicians while they mentally played along with recordings of familiar and unfamiliar pieces of music. Our hypothesis was that audiomotor transformation would be associated with activation of dedicated brain networks that facilitate playing by ear. The results indicate that although all classically trained musicians activate a left-hemisphere network involved in motor skill and action recognition, only improvising musicians additionally activate a right dorsal frontoparietal network involved in spatially guided motor control. Mobilization of this network, which plays a crucial role in the real-time transformation of imagined or perceived music into goal-directed action, may account not only for the stronger activation of auditory cortex in improvising musicians in response to the auditory perception of music, but also for the superior ability to play music by ear that they demonstrated in a follow-up study. Our results suggest that improvisation promotes the implicit acquisition of hierarchical musical syntax, which is subsequently recruited top-down via the dorsal route during music performance. In a study of audiomotor transformation in patients with Parkinson's disease, a dissociation between speech and music dysprosody was demonstrated. Whereas the patients' speech could be reliably distinguished from that of healthy controls purely on the basis of auditory perception, no difference between patients and healthy controls was observed in the ability to sing improvised melodies.",
"corpus_id": 151608938,
"title": "The cerebral organization of audiomotor transformations in music"
} | {
"abstract": "We compared activation maps of professional and amateur violinists during actual and imagined performance of Mozart's violin concerto in G major (KV216). Execution and imagination of (left hand) fingering movements of the first 16 bars of the concerto were performed. Electromyography (EMG) feedback was used during imagery training to avoid actual movement execution and EMG recording was employed during the scanning of both executed and imagined musical performances. We observed that professional musicians generated higher EMG amplitudes during movement execution and showed focused cerebral activations in the contralateral primary sensorimotor cortex, the bilateral superior parietal lobes, and the ipsilateral anterior cerebellar hemisphere. The finding that professionals exhibited higher activity of the right primary auditory cortex during execution may reflect an increased strength of audio-motor associative connectivity. It appears that during execution of musical sequences in professionals, a higher economy of motor areas frees resources for increased connectivity between the finger sequences and auditory as well as somatosensory loops, which may account for the superior musical performance. Professionals also demonstrated more focused activation patterns during imagined musical performance. However, the auditory-motor loop was not involved during imagined performances in either musician group. It seems that the motor and auditory systems are coactivated as a consequence of musical training but only if one system (motor or auditory) becomes activated by actual movement execution or live musical auditory stimuli.",
"corpus_id": 1703363,
"title": "The musician's brain: functional imaging of amateurs and professionals during performance and imagery"
} | {
"abstract": "Novel experience and learning new skills are known as modulators of brain function. Advances in non-invasive brain imaging have provided new insight into structural and functional reorganization associated with skill learning and expertise. Especially, significant imaging evidences come from the domains of sports and music. Data from in vivo imaging studies in sports and music have provided vital information on plausible neural substrates contributing to brain reorganization underlying skill acquisition in humans. This mini review will attempt to take a narrow snapshot of imaging findings demonstrating functional and structural plasticity that mediate skill learning and expertise while identifying converging areas of interest and possible avenues for future research.",
"corpus_id": 348377,
"score": -1,
"title": "Reorganization and plastic changes of the human brain associated with skill learning and expertise"
} |
{
"abstract": "We are often interested in clustering objects that evolve over time and identifying solutions to the clustering problem for every time step. Evolutionary clustering provides insight into cluster evolution and temporal changes in cluster memberships while enabling performance superior to that achieved by independently clustering data collected at different time points. In this article we introduce evolutionary affinity propagation (EAP), an evolutionary clustering algorithm that groups data points by exchanging messages on a factor graph. EAP promotes temporal smoothness of the solution to clustering time-evolving data by linking the nodes of the factor graph that are associated with adjacent data snapshots, and introduces consensus nodes to enable cluster tracking and identification of cluster births and deaths. Unlike existing evolutionary clustering methods that require additional processing to approximate the number of clusters or match them across time, EAP determines the number of clusters and tracks them automatically. A comparison with existing methods on simulated and experimental data demonstrates effectiveness of the proposed EAP algorithm.",
"corpus_id": 41612416,
"title": "Evolutionary Clustering via Message Passing"
} | {
"abstract": "Topic models have proven to be a useful tool for discovering latent structures in document collections. However, most document collections often come as temporal streams and thus several aspects of the latent structure such as the number of topics, the topics' distribution and popularity are time-evolving. Several models exist that model the evolution of some but not all of the above aspects. In this paper we introduce infinite dynamic topic models, iDTM, that can accommodate the evolution of all the aforementioned aspects. Our model assumes that documents are organized into epochs, where the documents within each epoch are exchangeable but the order between the documents is maintained across epochs. iDTM allows for unbounded number of topics: topics can die or be born at any epoch, and the representation of each topic can evolve according to a Markovian dynamics. We use iDTM to analyze the birth and evolution of topics in the NIPS community and evaluated the efficacy of our model on both simulated and real datasets with favorable outcome.",
"corpus_id": 1341872,
"title": "Timeline: A Dynamic Hierarchical Dirichlet Process Model for Recovering Birth/Death and Evolution of Topics in Text Stream"
} | {
"abstract": "On many social networking web sites such as Facebook and Twitter, resharing or reposting functionality allows users to share others' content with their own friends or followers. As content is reshared from user to user, large cascades of reshares can form. While a growing body of research has focused on analyzing and characterizing such cascades, a recent, parallel line of work has argued that the future trajectory of a cascade may be inherently unpredictable. In this work, we develop a framework for addressing cascade prediction problems. On a large sample of photo reshare cascades on Facebook, we find strong performance in predicting whether a cascade will continue to grow in the future. We find that the relative growth of a cascade becomes more predictable as we observe more of its reshares, that temporal and structural features are key predictors of cascade size, and that initially, breadth, rather than depth in a cascade is a better indicator of larger cascades. This prediction performance is robust in the sense that multiple distinct classes of features all achieve similar performance. We also discover that temporal features are predictive of a cascade's eventual shape. Observing independent cascades of the same content, we find that while these cascades differ greatly in size, we are still able to predict which ends up the largest.",
"corpus_id": 990435,
"score": -1,
"title": "Can cascades be predicted?"
} |
{
"abstract": "The potential relationship between hypovitaminosis D and non-skeletal health outcomes is a growing public health concern. There is suggestion of a relationship between 25-hydroxyvitamin D (25(OH)D) and brain function, with equivocal epidemiological evidence for an association with common mental disorders (CMD) and cognitive function. The aim of the thesis was to investigate the association of 25(OH)D with CMDs and cognitive function in mid-adulthood. Observational and genetic studies were used to gain better insight into the causal nature of the relationship between 25(OH)D and cognitive function. During observational studies, the association of 25(OH)D with CMDs and cognitive function was assessed in the 1958 British birth cohort (1958BC). A genetic study investigated the potential for a gene-environment interaction (GxE) by APOE e4 on cognitive function using participants from the 1958BC. This GxE study was replicated in an older European cohort. The causal relationship between 25(OH)D and cognitive function was assessed using a Mendelian randomisation (MR) approach in a meta-analyses using participants from nine European cohorts. Using observational data from 1958BC, there was evidence that both low and high 25(OH)D concentrations were associated with increased risk of CMDs and lower memory function. There was also evidence of a GxE interaction for memory function; where increasing 25(OH)D concentrations may be particularly beneficial for those with APOE e4 genotype. However, results from a MR study provided no evidence for 25(OH)D concentrations acting as a causal factor for cognitive performance in mid- to later-life. Since there was evidence of a non-linear observational association, the MR study may have been underpowered to detect small causal effects at the extremes of the 25(OH)D distribution. Overall, there is some evidence of a potential non-linear association of 25(OH)D with CMDs and cognitive function. However, the causal nature of this relationship requires confirmation from large long-term randomised controlled trials.",
"corpus_id": 142227719,
"title": "Vitamin D, common mental disorders and cognition : insights from genetic and observational epidemiology"
} | {
"abstract": "Some authorities view the history of science as a sort of saltatory process in which periods of modest gain and of plodding ‘normal science’ are interrupted by dramatic leaps forward and episodes of ‘revolution’ (Kuhn, 1962). If this is so then genetics has, for the past several years, been in a phase of remarkably sustained and continuous revolution. The advent of the ‘new genetics' of recombinant DNA has resulted in new discoveries occurring at a breath-taking pace, many of which have important clinical implications. Recent findings of psychiatric relevance have included the localisation of the gene for Huntington's chorea on the short arm of chromosome 4 (Gusella et al, 1983) and the use of DNA probes in predictive testing (Harper, 1986). Advances have been achieved in the understanding of the molecular biology of Alzheimer's disease, and at least some of the familial forms of the condition appear to be linked to a gene on chromosome 21 (St George-Hyslop et al, 1987). However, perhaps the most exciting development for most psychiatrists has been the report (Egeland et al, 1987) of a major gene for manic-depressive illness linked to a marker on the short arm of chromosome 11. Could this signal the leap of biological psychiatry into a revolutionary phase? It is perhaps appropriate before attempting to answer this that we give some consideration to the recent historical background.",
"corpus_id": 2707164,
"title": "Major Genes for Major Affective Disorder?"
} | {
"abstract": "BACKGROUND\nDecisions about chemotherapy for NSCLC are complex and involve trade-offs between its benefits, harms and inconveniences. We sought to find, evaluate and summarise studies quantifying the survival benefits that cancer patients judged sufficient to make chemotherapy for NSCLC worthwhile.\n\n\nMETHODS\nA search of MEDLINE identified 5 papers reporting four studies including 270 patients. Two investigators independently extracted and tabulated relevant findings from each study.\n\n\nRESULTS\nMost cancer patients were male, aged over 65 years, had primary lung cancer (65%) and had experienced chemotherapy (62%). Preferences were determined for chemotherapy in metastatic NSCLC (3 papers) and in locally advanced NSCLC (2 papers), but no studies determined preferences for adjuvant chemotherapy. Most cancer patients (>50%) judged moderate survival benefits sufficient to make chemotherapy worthwhile, for example, absolute increases of 10% in survival rates or 6 months in life expectancies. Individual patients' preferences varied widely: benefits judged sufficient ranged from very small (e.g. survival rate of 1%) to very large (e.g. survival rate of 50%). Smaller benefits were judged sufficient to make chemotherapy worthwhile for metastatic rather than locally advanced disease, for less toxic rather than more toxic chemotherapy, and in North American rather than Japanese studies. Four baseline characteristics were weakly associated with judging smaller benefits sufficient: younger age, having dependents, tertiary education and worse quality of life.\n\n\nCONCLUSIONS\nThe survival benefits patients judged sufficient to make chemotherapy for NSCLC worthwhile were moderate, widely variable, and difficult to predict. Doctors should encourage patients to express their preferences when facing decisions about chemotherapy for NSCLC.",
"corpus_id": 2700875,
"score": -1,
"title": "Patients' preferences for chemotherapy in non-small-cell lung cancer: a systematic review."
} |
{
"abstract": "In multipartite entanglement theory, the partial separability properties have an elegant, yet complicated structure, which becomes simpler in the case when multipartite correlations are considered. In this work, we elaborate this, by giving necessary and sufficient conditions for the existence and uniqueness of the class of a given class-label, by the use of which we work out the structure of the classification for some important particular cases, namely, for the finest classification, for the classification based on k-partitionability and k-producibility, and for the classification based on the atoms of the correlation properties.",
"corpus_id": 51690690,
"title": "The classification of multipartite quantum correlation"
} | {
"abstract": "The quantum mechanical description of the chemical bond is generally given in terms of delocalized bonding orbitals, or, alternatively, in terms of correlations of occupations of localised orbitals. However, in the latter case, multiorbital correlations were treated only in terms of two-orbital correlations, although the structure of multiorbital correlations is far richer; and, in the case of bonds established by more than two electrons, multiorbital correlations represent a more natural point of view. Here, for the first time, we introduce the true multiorbital correlation theory, consisting of a framework for handling the structure of multiorbital correlations, a toolbox of true multiorbital correlation measures, and the formulation of the multiorbital correlation clustering, together with an algorithm for obtaining that. These make it possible to characterise quantitatively, how well a bonding picture describes the chemical system. As proof of concept, we apply the theory for the investigation of the bond structures of several molecules. We show that the non-existence of well-defined multiorbital correlation clustering provides a reason for debated bonding picture.",
"corpus_id": 3276564,
"title": "The correlation theory of the chemical bond"
} | {
"abstract": "A new anisotropic gravity‐wave‐drag parametrization‐scheme which represents high‐drag states modelled on hydraulic jump, flow blocking and internal‐wave‐reflection theory, including trapped lee‐waves, is presented. The scheme is shown to represent the breaking of waves over mountains better than previous schemes by comparison with mesoscale simulations of PYREX case studies. Extended simulations of the Meteorological Office Unified Model at climate resolution are presented, showing the impact of the new scheme and a combination of the scheme with a new orographic‐roughness parametrization. Results show a changed distribution of surface‐gravity‐wave surface‐stress, mostly due to mountain anisotropy, and a greater tendency for lower‐troposphere wave‐drag. Inclusion of the orographic‐roughness parametrization halves the gravity‐wave stress, the combined effect of these schemes for climate integrations being a slight improvement to the tropospheric flow.",
"corpus_id": 122734748,
"score": -1,
"title": "A new gravity‐wave‐drag scheme incorporating anisotropic orography and low‐level wave breaking: Impact upon the climate of the UK Meteorological Office Unified Model"
} |
{
"abstract": "Silver carp is one of the most important freshwater fish species in China, and is popular when making soup in the Chinese dietary culture. In order to investigate the profile of fish soup tastes and flavours cooked using different regions of the same fish, the silver carp was cut into four different regions: head, back, abdomen, and tail. The differences in taste and flavour of the four kinds of homemade fish soup were investigated by an electronic nose and electronic tongue. The basic chemical components of the different fish regions and the SDS-PAGE profile of the fish soup samples were investigated. Two chemometrics methods (principal component analysis and discriminant factor analysis) were used to classify the odour and taste of the fish soup samples. The results showed that the electronic tongue and nose performed outstandingly in discriminating the four fish soups even though the samples were made from different regions of the same fish. The taste and flavour information of different regions of the silver carp fish could provide the theoretical basis for food intensive processing.",
"corpus_id": 218484468,
"title": "Effective discrimination of flavours and tastes of Chinese traditional fish soups made from different regions of the silver carp using an electronic nose and electronic tongue"
} | {
"abstract": "The bioavailability of vitamin C from pulsed electric fields (PEF)-treated vegetable soup in comparison with freshly made (FM) vegetable soup—gazpacho—and its impact on 8-epiPGF2α and uric acid concentrations in a human population were assessed. For this purpose six subjects consumed 500 ml PEF-treated vegetable soup/day, and six subjects consumed 500 ml FM vegetable soup/day for 14 days. On the first day of the study, the subjects drank the vegetable soup in one dose (dose–response study), and on days 2–14 they consumed 250 ml in the morning and 250 ml in the afternoon (multiple-dose–response study). Blood was collected every hour for 6 h on the first day and again on days 7 and 14. All blood samples were analyzed for vitamin C, 8-epiPGF2α, and uric acid. The maximum increase in plasma vitamin C occurred 3 h post-dose in both the PEF and the FM groups. Vitamin C remained significantly higher (P≤0.05) on days 7 and 14. The plasma 8-epiPGF2α concentration was significantly lower at the end of the study in both the PEF group (P=0.002) and the FM group (P=0.05). Plasma levels of vitamin C and 8-epiPGF2α were inversely correlated in both groups (r= − 0.549, P=0.018; and r= − 0.743, P=0.0004, respectively). To summarize, drinking two servings (500 ml) of PEF-treated or FM gazpacho daily increases plasma vitamin C and significantly decreases 8-epiPGF2α concentrations in healthy humans.",
"corpus_id": 1862499,
"title": "Intake of Mediterranean vegetable soup treated by pulsed electric fields affects plasma vitamin C and antioxidant biomarkers in humans"
} | {
"abstract": "Observational epidemiologic studies have shown that a high consumption of fruits and vegetables is associated with a decreased risk of chronic diseases. Little is known about the bioavailability of constituents from vegetables and fruits and the effect of these constituents on markers for disease risk. Currently, the recommendation is to increase intake of a mix of fruits and vegetables (\"five a day\"). We investigated the effect of this recommendation on plasma carotenoids, vitamins and homocysteine concentrations in a 4-wk dietary controlled, parallel intervention study. Male and female volunteers (n = 47) were allocated randomly to either a daily 500-g fruit and vegetable (\"high\") diet or a 100-g fruit and vegetable (\"low\") diet. Analyzed total carotenoid, vitamin C and folate concentrations of the daily high diet were 13.3 mg, 173 mg and 228.1 microg, respectively. The daily low diet contained 2.9 mg carotenoids, 65 mg vitamin C and 131.1 microg folate. Differences in final plasma levels between the high and low group were as follows: lutein, 46% [95% confidence interval (CI) 28-64]; beta-cryptoxanthin, 128% (98-159); lycopene, 22% (8-37); alpha-carotene, 121% (94-149); beta-carotene, 45% (28-62); and vitamin C, 64% (51-77) (P < 0.05). The high group had an 11% (-18 to -4) lower final plasma homocysteine and a 15% (0.8-30) higher plasma folate concentration compared with the low group (P < 0.05). This is the first trial to show that a mix of fruits and vegetables, with a moderate folate content, decreases plasma homocysteine concentrations in humans.",
"corpus_id": 4447432,
"score": -1,
"title": "Fruits and vegetables increase plasma carotenoids and vitamins and decrease homocysteine in humans."
} |
{
"abstract": "The anesthesia for the laser treatment of the premature retinopathy is a challenge for the anesthesiologist due to the anatomic and physiologic characteristic of these patients, to the pharmacokinetic and pharmacodynamic behavior of the anesthetics in them and the diseases that can be associated to them. For that reason we review",
"corpus_id": 74483042,
"title": "Anesthesia for the laser treatment of the premature retinopathy in the prematurity"
} | {
"abstract": "AimsTo report the use of ketamine sedation as an alternative anaesthetic method for babies undergoing treatment for retinopathy of prematurity (ROP).MethodsAll babies who underwent treatment for ROP over a 2-year period were included in this study. The babies' preoperative weight, medical condition, and ventilation status were recorded. Data were collected on their ventilation status pre-, intra-, and postprocedure. Any change in their cardiac or respiratory status during or in the subsequent 3 days following the treatment was noted.ResultsEleven babies, 22 eyes, required treatment over this period. The procedure was well tolerated with only three babies having intraoperative complications, which all resolved spontaneously. Two babies had postoperative complications requiring additional ventilation. In no case was the procedure abandoned owing to anaesthetic complications.ConclusionsThe use of ketamine sedation allows the laser to be performed in a ward setting and avoids the potential risk of general anaesthesia and inter- and intra-hospital transfer. It has been found to produce few intra- or postoperative complications for the infant, while providing satisfactory conditions for the treatment of ROP.",
"corpus_id": 475944,
"title": "Ketamine sedation during the treatment of retinopathy of prematurity"
} | {
"abstract": "Posterior ischemic optic neuropathy (PION) is an uncommon cause of perioperative visual loss. Perioperative PION has been most frequently reported after spinal surgery and radical neck dissection. The visual loss typically presents immediately after recovery from anesthesia, although it may be delayed by several days. Visual loss is often bilateral and profound with count fingers vision or worse. The examination findings are consistent with an optic neuropathy; however the funduscopic examination is initially normal. The cause is unknown, although patient-specific susceptibility to perioperative hemodynamic derangements is likely. No treatment has proven to be effective. The prognosis for visual recovery is generally poor.",
"corpus_id": 73112619,
"score": -1,
"title": "Perioperative posterior ischemic optic neuropathy: review of the literature."
} |
{
"abstract": "Ejaculatory dysfunction impacts large numbers of men of all ages and around the world. In addition, a great majority of men with chronic spinal cord injury (SCI) experience ejaculatory dysfunction, which negatively impacts the quality of life of these individuals and their partners. SCI men emphasize the significance of regaining sexual function as their main goal. Currently, there is a marked absence of literature reporting the alterations to sexual function and ejaculation in particular in animal models of chronic SCI. In addition, there are many unanswered questions pertaining to the spinal cord control of ejaculation in healthy, intact men. It is known that ejaculation is controlled by a population of lumbar spinothalamic (LSt) cells in the lumbar spinal cord through their direct projections to preganglionic autonomic and motor neurons in the lumbosacral spinal cord. It is hypothesized that LSt cells control ejaculatory reflexes through the release of their neuropeptides galanin, cholecystokinin (CCK), gastrin-releasing peptide (GRP), and enkephalin onto receptors in autonomic and motor areas of the lumbosacral spinal cord. This hypothesis was tested in this thesis utilizing a paradigm in anesthetized and spinalized male rats, with stimulation of the sensory inputs via the dorsal penile nerve. Consistent with the hypothesis, mu and delta opioid receptor, galanin, CCK, and GRP receptor activation in LSt target areas in the lumbosacral spinal cord was demonstrated to be critical for ejaculatory reflexes. Next, the hypothesis that intrathecal infusions of the LSt neuropeptides can improve ejaculatory reflexes in male rats with chronic SCI was tested. Results indicated that intrathecal infusions of GRP and the mu opioid receptor agonist DAMGO improved ejaculatory reflexes in male rats with chronic contusion SCI. 
Finally, the hypothesis that the D3 receptor agonist 7-OH-DPAT will recover ejaculatory function in male rats with chronic spinal cord injury was tested. Indeed, systemic infusions of 7-OH-DPAT greatly improved ejaculatory reflexes in SCI males. Together, the studies in this thesis further clarified the mechanisms involved in the spinal cord control of ejaculation in male rats and represent an initial but pivotal first step towards the recovery of ejaculatory function after chronic spinal cord injury.",
"corpus_id": 68576127,
"title": "Spinal Cord Control of Ejaculatory Reflexes in Male Rats"
} | {
"abstract": "INTRODUCTION\nOrgasm is less frequent in men with spinal cord injury (SCI) than in able-bodied subjects, and is poorly understood.\n\n\nAIM\nTo assess the effect of autonomic stimulation on orgasm in SCI men using midodrine, an alpha1-adrenergic agonist agent.\n\n\nMATERIALS AND METHODS\nPenile vibratory stimulation (PVS) was performed in 158 SCI men on midodrine as part of a treatment for anejaculation, after they failed a baseline PVS. A maximum of four trials were performed, weekly, with increasing doses of midodrine.\n\n\nMAIN OUTCOME MEASURE\nThe presence and type of ejaculation, orgasm experiences, and cardiovascular data were collected.\n\n\nRESULTS\nEjaculation either antegrade or retrograde was obtained in 102 SCI men (65%). Orgasm without ejaculation was reported by 14 patients (9%) on baseline PVS. Ninety-three patients (59%) experienced orgasm during PVS on midodrine. Orgasm was significantly related to the presence of ejaculation in 86 patients (84%), and more strikingly to antegrade ejaculation (pure or mixed with retrograde), i.e., in 98% of 70 patients. Orgasm was significantly more frequent in patients with upper motor neuron and incomplete lesions who present somatic responses during PVS. There was no effect of the presence of psychogenic erection. There was a significant increase in both systolic and diastolic blood pressure. Sixteen patients, mainly tetraplegics, developed intense autonomic dysreflexia (AD) that required an oral nicardipine chlorhydrate.\n\n\nCONCLUSIONS\nOrgasm is the brain's cognitive interpretation of genital sensations and somatic responses, AD, and ejaculation. Intact sacral and T10-L2 cord segments are mandatory, allowing coordination between internal and external sphincters. Autonomic stimulation with midodrine enhances orgasm rate, mainly by creating antegrade ejaculation.",
"corpus_id": 1247026,
"title": "Midodrine improves orgasm in spinal cord-injured men: the effects of autonomic stimulation."
} | {
"abstract": "OBJECTIVES\nThe aim of this study was to evaluate and compare the effects of physiologic and pharmacologic sympathetic stimulation on time and frequency domain indexes of heart rate variability.\n\n\nBACKGROUND\nMeasurements of heart rate variability have been used as indexes of sympathetic tone. To date, the effects of circulating catecholamines on heart rate variability have not been evaluated.\n\n\nMETHODS\nFourteen normal subjects (eight men, six women, mean [+/- SD] age 28.5 +/- 4.8 years) were evaluated. Five-minute electrocardiographic recordings were obtained in triplicate after physiologic and pharmacologic sympathetic stimulation: during upright tilt, after maximal exercise, during epinephrine and isoproterenol infusions at 50 ng/kg body weight per min, during beta-adrenergic blockade and during combined beta-adrenergic and parasympathetic blockade.\n\n\nRESULTS\nBeta-adrenergic stimulation resulted in a significant decrease in time domain measures of heart rate variability. The frequency domain indexes showed variable responses, depending on the individual stimulus. Tilt caused an increase in low frequency power and in the ratio of low to high frequency power. These changes were not necessarily observed with other conditions of beta-adrenergic stimulation. Double blockade suppressed baseline heart rate variability, but beta-adrenergic blockade had no significant effect. Time domain measures of heart rate variability demonstrated excellent reproducibility over the three recordings, but the frequency domain variables demonstrated fair to excellent reproducibility.\n\n\nCONCLUSIONS\nThese findings suggest that different modes of beta-adrenergic stimulation may result in divergent heart rate variability responses. Thus, current heart rate variability techniques cannot be used as general indexes of \"sympathetic\" tone. Studies utilizing heart rate variability to quantify sympathetic tone need to consider this.",
"corpus_id": 25818291,
"score": -1,
"title": "Effect of physiologic and pharmacologic adrenergic stimulation on heart rate variability."
} |
{
"abstract": "Pharmacogenetics is the study of how interindividual variations in the DNA sequence of specific genes affect drug response. This article highlights current pharmacogenetic knowledge on important human drug-metabolizing cytochrome P450s (CYPs) to understand the large interindividual variability in drug clearance and responses in clinical practice. The human CYP superfamily contains 57 functional genes and 58 pseudogenes, with members of the 1, 2, and 3 families playing an important role in the metabolism of therapeutic drugs, other xenobiotics, and some endogenous compounds. Polymorphisms in the CYP family may have had the most impact on the fate of therapeutic drugs. CYP2D6, 2C19, and 2C9 polymorphisms account for the most frequent variations in phase I metabolism of drugs, since almost 80% of drugs in use today are metabolized by these enzymes. Approximately 5–14% of Caucasians, 0–5% Africans, and 0–1% of Asians lack CYP2D6 activity, and these individuals are known as poor metabolizers. CYP2C9 is another clinically significant enzyme that demonstrates multiple genetic variants with a potentially functional impact on the efficacy and adverse effects of drugs that are mainly eliminated by this enzyme. Studies into the CYP2C9 polymorphism have highlighted the importance of the CYP2C9*2 and *3 alleles. Extensive polymorphism also occurs in other CYP genes, such as CYP1A1, 2A6, 2A13, 2C8, 3A4, and 3A5. Since several of these CYPs (e.g., CYP1A1 and 1A2) play a role in the bioactivation of many procarcinogens, polymorphisms of these enzymes may contribute to the variable susceptibility to carcinogenesis. The distribution of the common variant alleles of CYP genes varies among different ethnic populations. Pharmacogenetics has the potential to achieve optimal quality use of medicines, and to improve the efficacy and safety of both prospective and currently available drugs. 
Further studies are warranted to explore the gene-dose, gene-concentration, and gene-response relationships for these important drug-metabolizing CYPs.",
"corpus_id": 205554875,
"title": "Polymorphism of human cytochrome P450 enzymes and its clinical impact"
} | {
"abstract": "There are a considerable number of reports identifying and characterizing genetic variants within the CYP2C9 coding region. Much less is known about polymorphic promoter sequences that also might contribute to interindividual differences in CYP2C9 expression. To address this problem, approximately 10,000 base pairs of CYP2C9 upstream information were resequenced using 24 DNA samples from the Coriell Polymorphism Discovery Resource. Thirty-one single-nucleotide polymorphisms (SNPs) were identified; nine SNPs were novel, whereas 22 were reported previously. Using both sequencing and multiplex single-base extension, individual SNP frequencies were determined in 193 DNA samples obtained from unrelated, self-reported Hispanic Americans of Mexican descent, and they were compared with similar data obtained from a non-Latino white cohort. Significant interethnic differences were observed in several SNP frequencies, some of which seemed unique to the Hispanic population. Analysis using PHASE 2.1 inferred nine common (>1%) variant haplotypes, two of which included the g.3608C>T (R144C) CYP2C9*2 and two the g.42614A>C (I359L) CYP2C9*3 SNPs. Haplotype variants were introduced into a CYP2C9/luciferase reporter plasmid using site-directed mutagenesis, and the impact of the variants on promoter activity assessed by transient expression in HepG2 cells. Both constitutive and pregnane X receptor-mediated inducible activities were measured. Haplotypes 1B, 3A, and 3B each exhibited a 65% decrease in constitutive promoter activity relative to the reference haplotype. Haplotypes 1D and 3B exhibited a 50% decrease and a 40% increase in induced promoter activity, respectively. These data suggest that genetic variation within CYP2C9 regulatory sequences is likely to contribute to differences in CYP2C9 phenotype both within and among different populations.",
"corpus_id": 2416783,
"title": "Novel CYP2C9 Promoter Variants and Assessment of Their Impact on Gene Expression"
} | {
"abstract": "CYP2C9 is the most abundant CYP2C subfamily enzyme in human liver and the most important contributor from this subfamily to drug metabolism. Polymorphisms resulting in decreased enzyme activity are common in the CYP2C9 gene and this, combined with narrow therapeutic indices for several key drug substrates, results in some important issues relating to drug safety and efficacy. CYP2C9 substrate selectivity is detailed and, based on crystal structures for the enzyme, we describe how CYP2C9 catalyzes these reactions. Factors relevant to clinical response to CYP2C9 substrates including inhibition, induction and genetic polymorphism are discussed in detail. In particular, we consider the issue of ethnic variation in pattern and frequency of genetic polymorphisms and clinical implications. Warfarin is the most well studied CYP2C9 substrate; recent work on use of dosing algorithms that include CYP2C9 genotype to improve patient safety during initiation of warfarin dosing are reviewed and prospects for their clinical implementation considered. Finally, we discuss a novel approach to cataloging the functional capabilities of rare ‘variants of uncertain significance’, which are increasingly detected as more exome and genome sequencing of diverse populations is conducted.",
"corpus_id": 4472217,
"score": -1,
"title": "Pharmacogenomics of CYP2C9: Functional and Clinical Considerations†"
} |
{
"abstract": "Streaming applications have become increasingly important and widespread, and they will be running on soon- to-be-prevalent chip multiprocessors (CMPs). We address the problem of energy-aware scheduling of streaming applications, which are represented by task graphs, on a CMP using on/off and dynamic voltage scaling (DVS) on a per-processor basis. The goal is to minimize the energy consumption of streaming applications while satisfying two typical quality-of-service (QoS) requirements, namely, throughput and response time. To the best of our knowledge, this paper is the first work to tackle this problem. We make a key observation: the trade-off between static power and dynamic power should play a critical role in both parallel processing and pipelining that are used to reduce energy consumption in the scheduling process. Based on this observation, we propose two scheduling algorithms, Scheduling 1D and Scheduling 2D, for linear and general task graphs, respectively. The proposed algorithms exploit the difference between the two QoS requirements and perform processor allocation, task mapping and task speed scheduling simultaneously. Experimental results show that the proposed algorithms can achieve significant energy savings (e.g., 24% on average for 70 nm technology) over the baseline that only considers the response time requirement.",
"corpus_id": 6393470,
"title": "Energy-Aware Scheduling for Streaming Applications on Chip Multiprocessors"
} | {
"abstract": "Integrating soft and hard activities in a real-time environment has been an active area of research both under fixed priority scheduling and dynamic priority scheduling. Most of the existing work, however, has been done under the assumption that soft real-time tasks and hard real-time tasks are independent. The paper presents an efficient method that allows soft real-time aperiodic tasks and hard real-time tasks to share resources.",
"corpus_id": 2800709,
"title": "Aperiodic servers with resource constraints"
} | {
"abstract": "This paper investigates the problem of server parameter selection in hierarchical fixed priority preemptive systems. A set of algorithms are provided that determine the optimal values for a single server parameter (capacity, period, or priority) when the other two parameters are fixed. By contrast, the general problem of server parameter selection is shown to be a holistic one: typically the locally optimal solution for a single server does not form part of the globally optimal solution. Empirical investigations show that improvements in remaining utilisation (spare capacity) can be achieved by choosing server periods that are exact divisors of their task periods; enabling tasks to be bound to the release of their server, enhancing task schedulability and reducing server capacity requirements.",
"corpus_id": 711907,
"score": -1,
"title": "An Investigation into Server Parameter Selection for Hierarchical Fixed Priority Pre-emptive Systems"
} |
{
"abstract": "We present a novel method for interactive retrieval of 3D shapes using physical objects. Our method is based on simple physical 3D interaction with a set of tangible blocks. As the user connects blocks, the system automatically recognizes the shape of the constructed physical structure and picks similar 3D shape models from a preset model database, in real time. Our system fully supports interactive retrieval of 3D shape models in an extremely simple fashion, which is completely non-verbal and cross-cultural. These advantages make it an ideal interface for inexperienced users, previously barred from many applications that include 3D shape retrieval tasks.",
"corpus_id": 602164,
"title": "Interactive retrieval of 3D shape models using physical objects"
} | {
"abstract": "New acquisition and modeling tools make it easier to create 3D models, and affordable and powerful graphics hardware makes it easier to use them. As a result, the number of 3D models available on the web is increasing rapidly. However, it is still not as easy to find 3D models as it is to find, for example, text documents and images. What is needed is a \"3D model search engine,\" a specialized search engine that targets 3D models. We created a prototype 3D model search engine to investigate the design and implementation issues. Our search engine can be partitioned into three main components: (1) acquisition: 3D models have to be collected from the web, (2) analysis: they have to be analyzed for later matching, and (3) query processing and matching: an online system has to match user queries to the collected 3D models. Our site currently indexes over 36,000 models, of which about 31,000 are freely available. In addition to a text search interface, it offers several 3D and 2D shape-based query interfaces. Since it went online one year ago (in November 2001), it has processed over 148,000 searches from 37,800 hosts in 103 different countries. Currently 20--25% of the about 1,000 visitors per week are returning users. This paper reports on our initial experiences designing, building, and running the 3D model search engine.",
"corpus_id": 6759445,
"title": "Early experiences with a 3D model search engine"
} | {
"abstract": "The advances in 3D data acquisition techniques, graphics hardware, and 3D data modeling and visualizing techniques have led to the proliferation of 3D models. This has made the searching for specific 3D models a vital issue. Techniques for effective and efficient content-based retrieval of 3D models have therefore become an essential research topic. In this paper, a novel feature, called elevation descriptor, is proposed for 3D model retrieval. The elevation descriptor is invariant to translation and scaling of 3D models and it is robust for rotation. First, six elevations are obtained to describe the altitude information of a 3D model from six different views. Each elevation is represented by a gray-level image which is decomposed into several concentric circles. The elevation descriptor is obtained by taking the difference between the altitude sums of two successive concentric circles. An efficient similarity matching method is used to find the best match for an input model. Experimental results show that the proposed method is superior to other descriptors, including spherical harmonics, the MPEG-7 3D shape spectrum descriptor, and D2.",
"corpus_id": 14506370,
"score": -1,
"title": "A new 3D model retrieval approach based on the elevation descriptor"
} |
{
"abstract": "Recent works on semantic Simultaneous Localization and Mapping (SLAM) utilizing object landmarks have shown superiority in terms of robustness and accuracy in tracking and localization. 3D object landmarks represented by a cubic or quadric surface are inferred from 2D object bounding boxes which are typically captured from multiple views by an object detector. Nevertheless, bounding box noises and small camera baseline may lead to an inaccurate 3D object landmark inference. Inspired by the dual quadric enveloping property, in this work, we introduce the horizontal support assumption to constrain rotation w.r.t. roll and pitch for a quadric representation. As the result, we reduce the number of quadric parameters and narrow down the solution space, and ultimately produce a relatively accurate inference. Extensive experimental evaluations under both simulated and real scenarios are conducted in this paper. Quantitative results demonstrate that our approach outperforms the state-of-the-art.",
"corpus_id": 239037663,
"title": "Robust Improvement in 3D Object Landmark Inference for Semantic Mapping"
} | {
"abstract": "In this work we present a novel approach to recover objects 3D position and occupancy in a generic scene using only 2D object detections from multiple view images. The method reformulates the problem as the estimation of a quadric (ellipsoid) in 3D given a set of 2D ellipses fitted to the object detection bounding boxes in multiple views. We show that a closed-form solution exists in the dual-space using a minimum of three views while a solution with two views is possible through the use of non-linear optimisation and object constraints on the size of the object shape. In order to make the solution robust toward inaccurate bounding boxes, a likely occurrence in object detection methods, we introduce a data preconditioning technique and a non-linear refinement of the closed form solution based on implicit subspace constraints. Results on synthetic tests and on different real datasets, involving challenging scenarios, demonstrate the applicability and potential of our method in several realistic scenarios.",
"corpus_id": 8708303,
"title": "3D Object Localisation from Multi-View Image Detections"
} | {
"abstract": "The problem of generating maps with mobile robots has received considerable attention over the past years. Most of the techniques developed so far have been designed for situations in which the environment is static during the mapping process. Dynamic objects, however, can lead to serious errors in the resulting maps such as spurious objects or misalignments due to localization errors. In this paper we consider the problem of creating maps with mobile robots in dynamic environments. We present a new approach that interleaves mapping and localization with a probabilistic technique to identify spurious measurements. In several experiments we demonstrate that our algorithm generates accurate 2D and 3D in different kinds of dynamic indoor and outdoor environments. We also use our algorithm to isolate the dynamic objects and generate 3D representation of them.",
"corpus_id": 508432,
"score": -1,
"title": "Map building with mobile robots in dynamic environments"
} |
{
"abstract": "Shared control is an increasingly popular approach to facilitate control and communication between humans and intelligent machines. However, there is little consensus in guidelines for design and evaluation of shared control, or even in a definition of what constitutes shared control. This lack of consensus complicates cross fertilization of shared control research between different application domains. This paper provides a definition for shared control in context with previous definitions, and a set of general axioms for design and evaluation of shared control solutions. The utility of the definition and axioms are demonstrated by applying them to four application domains: automotive, robot-assisted surgery, brain–machine interfaces, and learning. Literature is discussed for each of these four domains in light of the proposed definition and axioms. Finally, to facilitate design choices for other applications, we propose a hierarchical framework for shared control that links the shared control literature with traded control, co-operative control, and other human–automation interaction methods. Future work should reveal the generalizability and utility of the proposed shared control framework in designing useful, safe, and comfortable interaction between humans and intelligent machines.",
"corpus_id": 52281602,
"title": "A Topology of Shared Control Systems—Finding Common Ground in Diversity"
} | {
"abstract": "The use of a BCI in a practical application depends on the effective implementation of three adaptation levels. The BCI must adapt to the characteristic features of the user’s EEG, periodically adjust for reducing the impact of EEG variations, and engage the adaptive capabilities of the user’s brain through feedback. Implementing the adapting levels, requires the BCI to support several feature extraction methods, to use of flexible classifiers whose parameters can be dynamically updated according to an ”online” modality, and to provide sensible feedback to the user so that he can modulate his brain activity to make the BCI accomplish his intents.",
"corpus_id": 80358,
"title": "Towards a Practical Brain-Computer Interface"
} | {
"abstract": "Brain-computer interface (BCI) systems allow the user to interact with a computer by merely thinking. Successful BCI operation depends on the continuous adaptation of the system to the user. This paper presents an implementation of this adaptation using incremental support vector machines (SVM). This approach is tested on three subjects and three types of mental activities across ten sessions. The results show that the continuous adaptation of the BCI to the user's brain activity brings clear advantages over a non-adapting approach.",
"corpus_id": 7567385,
"score": -1,
"title": "BCI adaptation using incremental-SVM learning"
} |
{
"abstract": "Cooperative multi-agent planning (MAP) is a relatively recent research field that combines technologies, algorithms, and techniques developed by the Artificial Intelligence Planning and Multi-Agent Systems communities. While planning has been generally treated as a single-agent task, MAP generalizes this concept by considering multiple intelligent agents that work cooperatively to develop a course of action that satisfies the goals of the group. This article reviews the most relevant approaches to MAP, putting the focus on the solvers that took part in the 2015 Competition of Distributed and Multi-Agent Planning, and classifies them according to their key features and relative performance.",
"corpus_id": 779672,
"title": "Cooperative Multi-Agent Planning"
} | {
"abstract": "Almost every planner needs good heuristics to be efficient. Heuristic planning has experienced an impressive progress over the last years thanks to the emergence of more and more powerful estimators. However, this progress has not been translated to multi-agent planning (MAP) due to the difficulty of applying classical heuristics in distributed environments. The application of local search heuristics in each agent has been the most widely adopted approach in MAP but there exist some recent attempts to use global heuristics. In this paper we show that the success of global heuristics in MAP depends on a proper selection of heuristics for a distributed environment as well as on their adequate combination.",
"corpus_id": 15514231,
"title": "Global Heuristics for Distributed Cooperative Multi-Agent Planning"
} | {
"abstract": "Many day-to-day situations involve decision making: for example, a taxi company has some transportation tasks to be carried out, a large firm has to distribute a lot of complicated tasks among its subdivisions or subcontractors, and an air-traffic controller has to assign time slots to planes that are landing or taking off. Intelligent agents can aid in this decision-making process. Agents are often classified into two categories according to the techniques they employ in their decision making: reactive agents (cf. (Ferber and Drogoul, 1992)) base their next decision solely on their current sensory input; planning agents, on the other hand, take into account anticipated future developments — for instance as a result of their own actions — to decide on the most favourable course of action. When an agent should plan and when it should be reactive depends on the particular situation it finds itself in. Consider the example where an agent has to plan a route from one place to another. A reactive agent might use a compass to plot its course, whereas a planning agent would consult a map. Clearly, the planning agent will come up with the shortest route in most cases, as it won’t be confounded by uncrossable rivers, one-way streets, and labyrinthine city layouts. On the other hand, there are also situations where a reactive agent can at least be equally effective, for instance if there are no maps to consult, for instance in a domain of (Mars) exploration rovers. Nevertheless, the ability to plan ahead is invaluable in many domains, so in this paper we will focus on planning agents. The general structure of a planning problem is easy to explain: (the relevant part of) the world is in a certain state, but managers or directors would like it to be in another state. The (abstract) problem of how one should get from the current state of the world through a sequence of actions to the desired goal state is a planning problem. 
Ideally, to solve such planning problems, we would like to have a general planning-problem solver. However, an algorithm solving all planning problems can be proven not to exist. We therefore concentrate on a simplification of the general planning problem called ‘the classical planning problem’. Although not all realistic problems can be modeled as a classical planning problem, they can help to solve more",
"corpus_id": 10249932,
"score": -1,
"title": "Multi-agent Planning An introduction to planning and coordination"
} |
{
"abstract": "Self-driving vehicles (SDVs) hold great potential for improving traffic safety and are poised to positively affect the quality of life of millions of people. To unlock this potential one of the critical aspects of the autonomous technology is understanding and predicting future movement of vehicles surrounding the SDV. This work presents a deep-learning-based method for kinematically feasible motion prediction of such traffic actors. Previous work did not explicitly encode vehicle kinematics and instead relied on the models to learn the constraints directly from the data, potentially resulting in kinematically infeasible, suboptimal trajectory predictions. To address this issue we propose a method that seamlessly combines ideas from the AI with physically grounded vehicle motion models. In this way we employ best of the both worlds, coupling powerful learning models with strong feasibility guarantees for their outputs. The proposed approach is general, being applicable to any type of learning method. Extensive experiments using deep convnets on real-world data strongly indicate its benefits, outperforming the existing state-of-the-art.",
"corpus_id": 199064393,
"title": "Deep Kinematic Models for Physically Realistic Prediction of Vehicle Trajectories"
} | {
"abstract": "Self-driving cars have been a dream as long automobiles have existed. The automobile is ubiquitous in the developed world and is becoming so in the developing world. In 2007, the world's two largest automakers sold over 18 million vehicles worldwide. As we consider domains to which we can apply intelligent systems, the automotive industry stands out as having the most potential for impact.",
"corpus_id": 206468666,
"title": "Self-Driving Cars and the Urban Challenge"
} | {
"abstract": "Road Divider is generically used for dividing the Road for ongoing and incoming traffic. This helps keeping the flow of traffic. Generally, there is equal number of lanes for both ongoing and incoming traffic. For example, in any city, there is industrial area or shopping area where the traffic generally flows in one direction in the morning or evening. The other side of Road divider is mostly either empty or under-utilized. This is true for peak morning and evening hours. This results in loss of time for the car owners, traffic jams as well as underutilization of available resources. Our idea is to formulate a mechanism of automated movable road divider that can shift lanes, so that we can have more number of lanes in the direction of the rush. The cumulative impact of the time and fuel that can be saved by adding even one extra lane to the direction of the rush will be significant. With the smart application proposed below, we will also eliminate the dependency on manual intervention and manual traffic coordination so that we can have a smarter traffic all over the city. An Automated movable road divider can provide a solution to the above-mentioned problem effectively. This is possible through IOT. IOT refers to Internet of Things where the actual digitalization comes into picture. Here sensors play a major role. We can achieve this using Arduino board. The sensors placed on the dividers sense the flow of traffic whether flow of traffic is smooth or not? If the flow is smooth on either side then there is nothing to worry but the lane which is having more traffic, the divider is moved to a certain distance to the smoother lane in order to smoothen the busy lane.",
"corpus_id": 49350735,
"score": -1,
"title": "Design and implementation of smart movable road divider using IOT"
} |
{
"abstract": "Internet of Things (IoT) is growing as one of the fastest developing technologies around the world. With IPv6 settling down, people have a lot of addressing spaces left that even allows sensors to communicate with each other while collecting data, leaving alone cars that communicate while travelling. IoT (Internet of Things) has changed how humans, machines and devices communicate with one another. However, with its growth, a very alarming topic is the security and privacy issues that are encountered regularly. As many devices exchange their data through internet, there is a high possibility that a device may be attacked with a malicious packet of data. In such cases, the security of the network of communication should be strong enough to identify malicious data. In other words, it is very important to create an intrusion detection system for the network. In our research, we propose a comparison between different machine learning algorithms that can be used to identify any malicious or anomalous data and provide the best algorithm for two data-sets. One dataset is on the environmental characteristics collected from sensors and another one is network dataset. The first data-set is developed from the data exchanged between the sensors in an IoT environment and the second dataset is UNSW-NB15 data which is available online.",
"corpus_id": 207815853,
"title": "Anomaly detection In IoT using machine learning algorithms"
} | {
"abstract": "Nowadays, computer security has become very familiar question in the society as nearly everyone has connected their computers to internet to get access to information from various informative sources and send or transmit messages in today‟s much complex computer networking world. The most common security threats are intruder which is generally referred as hacker or cracker and the other is virus. To protect the computer on network from intruders, Intrusion Detection Systems are very much important defensive measure component. In this paper we propose a hybrid approach which is the combination two algorithms for clustering and classification that are K-Means and Naïve Bayes respectively. Using KDD Cup‟99 dataset we‟ll be evaluating the performance of our proposed approach. The evaluation will show that new type of attack can be detected effectively in the system and efficiency and accuracy of IDS will improve in terms of detection rate along with its reasonable prediction time.",
"corpus_id": 1665486,
"title": "Intrusion Detection System in Data Mining using Hybrid Approach"
} | {
"abstract": "In this paper, a novel underwater robot with sixrotor mechanical structure is proposed different with existing mechanical structure and control method. The new designed underwater helicopter is called Underwater Six-Rotor Unmanned Helicopter, and it can perform full freedom underwater actions. Therefore, the proposed underwater helicopter has its own advantages compared with existing AUVs (Autonomous underwater helicopters) and ROVs (Remotely Operated Vehicles). The hardware and software structures of the proposed system are analyzed in deep. Moreover, the kinematic and dynamic model of proposed new vehicle is established. Finally, spiral shape trajectory tracking effect is simulated to verify theoretical model.",
"corpus_id": 247456285,
"score": -1,
"title": "Simulation and Design of Underwater Six-Rotor Unmanned Helicopter"
} |
{
"abstract": "The algebraic soft-decision decoding algorithm (ASD) requires a reliability matrix as its input. In this paper, a new method to construct the reliability matrix over partial response (PR) channels of interest in magnetic recording is proposed by using recently introduced pattern-output Viterbi algorithm (POVA). A modified bit-level generalized minimum distance (BGMD) algorithm is also proposed with the POVA to achieve performance gains over PR channels that are as large as gains as over AWGN channels.",
"corpus_id": 27350989,
"title": "Application of pattern-output viterbi algorithm to algebraic soft-decision decoding over partial response channels"
} | {
"abstract": "The performance of algebraic soft-decision decoding of Reed-Solomon codes using bit-level soft information is investigated. Optimal multiplicity assignment strategies for algebraic soft-decision decoding (SDD) with infinite cost are first studied over erasure channels and the binary-symmetric channel. The corresponding decoding radii are calculated in closed forms and tight bounds on the error probability are derived. The multiplicity assignment strategy and the corresponding performance analysis are then generalized to characterize the decoding region of algebraic SDD over a mixed error and bit-level erasure channel. The bit-level decoding region of the proposed multiplicity assignment strategy is shown to be significantly larger than that of conventional Berlekamp-Massey decoding. As an application, a bit-level generalized minimum distance decoding algorithm is proposed. The proposed decoding compares favorably with many other Reed-Solomon SDD algorithms over various channels. Moreover, owing to the simplicity of the proposed bit-level generalized minimum distance decoding, its performance can be tightly bounded using order statistics.",
"corpus_id": 2264,
"title": "Algebraic Soft-Decision Decoding of Reed–Solomon Codes Using Bit-Level Soft Information"
} | {
"abstract": "algorithmics and modular computations, Theory of Codes and Cryptography (3).From an analytical 1. RE Blahut. Theory and practice of error control codes. eecs.uottawa.ca/∼yongacog/courses/coding/ (3) R.E. Blahut,Theory and Practice of Error Control Codes, Addison Wesley, 1983. QA 268. Cached. Download as a PDF 457, Theory and Practice of Error Control CodesBlahut 1984 (Show Context). Citation Context..ontinued fractions.",
"corpus_id": 46054218,
"score": -1,
"title": "Theory and practice of error control codes"
} |
{
"abstract": "Purpose – The purpose of this paper is to integrate the empirical and game theoretical approaches to address the strategic interactions among countries in choosing their optimal levels of intellectual property rights (IPRs), and to identify how these countries can reach an efficient and equitable equilibrium.Design/methodology/approach – Because countries' decisions on which IPR standards and protections to implement are interrelated, the authors apply game theory to characterize the scenarios before and after the 1994 Agreement on Trade‐related Intellectual Property Rights (TRIPS) involving developed and developing countries.Findings – The model shows that the pre‐TRIPS equilibrium is comprised of high‐income (H‐I) developed countries which choose a strong IPR protection while the middle‐income (M‐I) and low‐income (L‐I) developing countries choose a weak IPR standard. For countries to move from such an equilibrium to the uniformly strong IPR regime under TRIPS, it is necessary for the H‐I countries to c...",
"corpus_id": 39580408,
"title": "Intellectual property rights and knowledge sharing across countries"
} | {
"abstract": "Have developing countries gained from the incorporation of IPR standards into the WTO framework? We use historical, theoretical, and empirical methods to answer this question and reach several conclusions. First, U.S. history provides a clear case of a developing country which used strong patent rights and weak copyrights in the 19th century to enhance its growth prospects. Second, recent theoretical literature presents a strong case for welfare gains to developing countries from patent harmonization if developed countries pay lump-sums to offset higher royalty payments by developing countries. Third, the creation of intellectual property in new types of inventions is necessary, but the scope, depth, and enforcement of IPRs is likely to differ across countries according to their economic and political institutions, their per capita income, and their capability to engage in and disseminate the fruits of R&D.",
"corpus_id": 152650348,
"title": "Have Developing Countries Gained From the Marriage Between Trade Agreements and Intellectual Property Rights"
} | {
"abstract": "We study the online market for peer-to-peer P2P lending, in which individuals bid on unsecured microloans sought by other individual borrowers. Using a large sample of consummated and failed listings from the largest online P2P lending marketplace, Prosper.com, we find that the online friendships of borrowers act as signals of credit quality. Friendships increase the probability of successful funding, lower interest rates on funded loans, and are associated with lower ex post default rates. The economic effects of friendships show a striking gradation based on the roles and identities of the friends. We discuss the implications of our findings for the disintermediation of financial markets and the design of decentralized electronic markets. \n \nThis paper was accepted by Sandra Slaughter, information systems.",
"corpus_id": 36846675,
"score": -1,
"title": "Judging Borrowers by the Company They Keep: Friendship Networks and Information Asymmetry in Online Peer-to-Peer Lending"
} |
{
"abstract": "Wireless capsule endoscopy (WCE) has been widely used in gastrointestinal (GI) diagnosis that allows the physicians to examine the interior wall of the human GI tract through a pain-free procedure. However, there are still several limitations of the technology, which limits its functionality, ultimately limiting its wide acceptance. Its counterpart, the wired endoscopic system is a painful procedure that demotivates patients from going through the procedure, and adversely affects early diagnosis. Furthermore, the current generation of capsules is unable to automate the detection of abnormality. As a result, physicians are required to spend longer hours to examine each image from the endoscopic capsule for abnormalities, which makes this technology tiresome and error-prone. Early detection of cancer is important to improve the survival rate in patients with colorectal cancer. Hence, a fluorescence-imaging-based endoscopic capsule that automates the detection process of colorectal cancer was designed and developed in our lab. The proof of concept of this endoscopic capsule was tested on porcine intestine and liquid phantom. The proposed WCE system offers great possibilities for future applicability in selective and specific detection of other fluorescently labelled cancers.",
"corpus_id": 215606043,
"title": "A Fluorescence-Based Wireless Capsule Endoscopy System for Detecting Colorectal Cancer"
} | {
"abstract": "A conformal circularly polarized (CP) capsule antenna, designed for ingestible wireless capsule endoscope (WCE) systems at industrial, scientific, and medical (ISM) band (2.4–2.48 GHz), is presented. The antenna consists of a rectangular loop, an asymmetric U-shaped strip, and a protruding L-shaped stub printed on the top of the dielectric substrate. The CP wave is generated by the corner-fed U-shaped strip protruding into the rectangular slot. Further, by adjusting the size of the U-shaped strip and the L-shaped stub, the 3 dB axial-ratio (AR) band can be fully covered by the 10 dB impedance band. The simulated 10 dB impedance bandwidth (BW) and 3 dB ARBW are 31.58% and 13.11%, respectively. The simulated results show that the overlapped impedance BW and ARBW are from 2.28 to 2.6 GHz, which completely covers the 2.4 GHz ISM band. The measured 10 dB impedance BW is 39.21%. Finally, the radiation performance, safety consideration, and link budget of the antenna are examined and characterized. Owing to the broad overlapped impedance BW and ARBW, the minimized in-capsule foot print, and ease of fabrication, the designed antenna is a capable candidate for a WCE system.",
"corpus_id": 4607876,
"title": "A Conformal Circularly Polarized Antenna for Wireless Capsule Endoscope Systems"
} | {
"abstract": "In this work, we present an integrated planner for collision-free single and dual arm grasping motions. The proposed Grasp-RRT planner combines the three main tasks needed for grasping an object: finding a feasible grasp, solving the inverse kinematics and searching a collision-free trajectory that brings the hand to the grasping pose. Therefore, RRT-based algorithms are used to build a tree of reachable and collision-free configurations. During RRT-generation, potential grasping positions are generated and approach movements toward them are computed. The quality of reachable grasping poses is scored with an online grasp quality measurement module which is based on the computation of applied forces in order to diminish the net torque.We also present an extension to a dual arm planner which generates bimanual grasps together with corresponding dual arm grasping motions. The algorithms are evaluated with different setups in simulation and on the humanoid robot ARMAR-III.",
"corpus_id": 1353916,
"score": -1,
"title": "Integrated Grasp and motion planning"
} |
{
"abstract": "Data quality assessment and data cleaning are context-dependent activities. Motivated by this observation, we propose the Ontological Multidimensional Data Model (OMD model), which can be used to model and represent contexts as logic-based ontologies. The data under assessment are mapped into the context for additional analysis, processing, and quality data extraction. The resulting contexts allow for the representation of dimensions, and multidimensional data quality assessment becomes possible. At the core of a multidimensional context, we include a generalized multidimensional data model and a Datalog± ontology with provably good properties in terms of query answering. These main components are used to represent dimension hierarchies, dimensional constraints, and dimensional rules and define predicates for quality data specification. Query answering relies on and triggers navigation through dimension hierarchies and becomes the basic tool for the extraction of quality data. The OMD model is interesting per se beyond applications to data quality. It allows for a logic-based and computationally tractable representation of multidimensional data, extending previous multidimensional data models with additional expressive power and functionalities.",
"corpus_id": 4391957,
"title": "Ontological Multidimensional Data Models and Contextual Data Quality"
} | {
"abstract": "The quality of data is context dependent. Starting from this intuition and experience, we propose and develop a conceptual framework that captures in formal terms the notion of \"context-dependent data quality\". We start by proposing a generic and abstract notion of context, and also of its uses, in general and in data management in particular. On this basis, we investigate \"data quality assessment\" and \"quality query answering\" as context-dependent activities. A context for the assessment of a database D at hand is modeled as an external database schema, with possibly materialized or virtual data, and connections to external data sources. The database D is put in context via mappings to the contextual schema, which produces a collection C of alternative clean versions of D. The quality of D is measured in terms of its distance to C. The class C} is also used to define and do \"quality query answering\". The proposed model allows for natural extensions, like the use of data quality predicates, the optimization of the access by the context to external data sources, and also the representation of contexts by means of more expressive ontologies.",
"corpus_id": 6621968,
"title": "Contexts and Data Quality Assessment"
} | {
"abstract": "The problem of data cleaning, which consists of emoving inconsistencies and errors from original data sets, is well known in the area of decision support systems and data warehouses. However, for non-conventional applications, such as the migration of largely unstructured data into structured one, or the integration of heterogeneous scientific data sets in inter-discipl- inary fields (e.g., in environmental science), existing ETL (Extraction Transformation Loading) and data cleaning tools for writing data cleaning programs are insufficient. The main challenge with them is the design of a data flow graph that effectively generates clean data, and can perform efficiently on large sets of input data. The difficulty with them comes from (i) a lack of clear separation between the logical specification of data transformations and their physical implementation and (ii) the lack of explanation of cleaning results and user interaction facilities to tune a data cleaning program. This paper addresses these two problems and presents a language, an execution model and algorithms that enable users to express data cleaning specifications declaratively and perform the cleaning efficiently. We use as an example a set of bibliographic references used to construct the Citeseer Web site. The underlying data integration problem is to derive structured and clean textual records so that meaningful queries can be performed. Experimental results report on the assessement of the proposed framework for data cleaning.",
"corpus_id": 8296909,
"score": -1,
"title": "Declarative Data Cleaning: Language, Model, and Algorithms"
} |
{
"abstract": "Bioactive lipids serve as intracellular and extracellular mediators in cell signaling in normal and pathological conditions. Here we describe that an important regulator of some of these lipids, the lipid phosphate phosphatase‐3 (LPP3), is abundantly expressed in specific plasma membrane domains of Bergmann glia (BG), a specialized type of astrocyte with key roles in cerebellum development and physiology. Mice selectively lacking expression of LPP3/Ppap2b in the nervous system are viable and fertile but exhibit defects in postnatal cerebellum development and modifications in the cytoarchitecture and arrangement of BG with a mild non‐progressive motor coordination defect. Lipid and gene profiling studies in combination with pharmacological treatments suggest that most of these effects are associated with alterations in sphingosine‐1‐phosphate (S1P) metabolism and signaling. Altogether our data indicate that LPP3 participates in several aspects of neuron‐glia communication required for proper cerebellum development. © 2011 Wiley‐Liss, Inc.",
"corpus_id": 32313944,
"title": "Expression of LPP3 in Bergmann glia is required for proper cerebellar sphingosine‐1‐phosphate metabolism/signaling and development"
} | {
"abstract": "Our knowledge of how bioactive lipids participate during development has been limited principally due to the difficulties of working with lipids. The availability of some of these lipids is regulated by the Lipid phosphate phosphatases (LPPs). The targeted inactivation of Ppap2b, which codes for the isoenzyme Lpp3, has profound developmental defects. Lpp3 deficient embryos die around E9.5 due to extraembryonic vascular defects, making difficult to analyze its participation in later stages of mouse development. To gain some predictive information regarding the possible participation of Lpp3 in later stages of development, we generated a Ppap2b null reporter allele and it was used to establish its expression pattern in E8.5-13.5 embryos. We found that Ppap2b expression during these stages was highly dynamic with significant expression in structures where multiple inductive interactions occur such as the limb buds, mammary gland primordia, heart cushions and valves among others. These observations suggest that Lpp3 expression may play a key role in modulating/integrating multiple signaling pathways during development.",
"corpus_id": 97673,
"title": "Generation of a reporter-null allele of Ppap2b/Lpp3and its expression during embryogenesis."
} | {
"abstract": "Temporal and spatial controls of cell migration are crucial during normal development and in disease. Our understanding, though, of the mechanisms that guide cells along a specific migratory path remains largely unclear. We have identified wunen 2 as a repellant for migrating primordial germ cells. We show that wunen 2 maps next to and acts redundantly with the previously characterized gene wunen, and that known wunen mutants affect both transcripts. Both genes encode Drosophila homologs of mammalian phosphatidic acid phosphatase. Our work demonstrates that the catalytic residues of Wunen 2 are necessary for its repellant effect and that it can affect germ cell survival. We propose that spatially restricted phospholipid hydrolysis creates a gradient of signal necessary and specific for the migration and survival of germ cells.",
"corpus_id": 13484920,
"score": -1,
"title": "Spatially restricted activity of a Drosophila lipid phosphatase guides migrating germ cells."
} |
{
"abstract": "In this paper ceiling affixed ARToolKitPlus 2D code artificial landmarks are evaluated for purposes of robot localization and navigation. Ceiling affixed codes rarely come in contact with people, equipment or robots, and for this reason they are more likely to stay detectable over a longer period of time. Multi threshold averaging, light gradient compensation and neighbourhood search techniques further enhanced AR-ToolKitPlus performance. Multi threshold averaging collects positioning results at each gray scale threshold level. After all of the threshold levels are analyzed, the related results are averaged into a final result. Light gradient compensation eliminates effects of uneven lighting in the neighbourhood of a 2D code. Neighbourhood search for a 2D code requires fewer computational resources than a global search. Further repeatability improvements are achieved by means of averaging localizations. Localization performance is evaluated at varying distances of the 2D code from the camera. Experimental results show substantial improvements in repeatability and reliability over the baseline ARToolKitPlus performance. Improved performance will allow for realtime 2D code based localization for indoor robot navigation.",
"corpus_id": 206529747,
"title": "Realtime 2D code based localization for indoor robot navigation"
} | {
"abstract": "A method for robot indoor automatic positioning and orientating based on two-dimensional (2D) barcode landmark is proposed. By using the scheme of the 2D barcode for reference, a special landmark is designed which is convenient to operate and easy to recognize , contain coordinates of their absolute positions and have some ability to automatically correct errors . Landmarks are placed over the “ceiling” and photographed by a camera mounted on the robot with its optical axis vertical to the ceiling plane. The coordinates and angle of the landmark is acquired through image segmentation, contour extracting, characteristic curves matching and landmark properties identifying, and then the robot’s current absolute position and heading angle is computed. The experiments proved the effectiveness of the method and shows that the method can meet accuracy requirements of indoor position and orientation.",
"corpus_id": 2608087,
"title": "A Robot Indoor Position and Orientation Method based on 2D Barcode Landmark"
} | {
"abstract": "To reduce the requirement of hardware in wireless sensor node location,a new and adaptable node relative location approach based on link quality indication(LQI) or receive signal strength indication(RSSI) is investigated.After analyzing a vast amount of data,a two-step curve fitting method is studied to achieve the corresponding relationship between internode distance and RSSI as well as the sender voltage.Through applying the sensor network based on CC2430 and ZigBee protocol,the effects on the distance estimation of two types of fitting method for RF signal attenuation characteristics are discussed.The result shows that the given approach could estimate the diatance between two nodes and locate unknown nodes better.",
"corpus_id": 112252385,
"score": -1,
"title": "Wireless sensor node location approach based on transmission distance estimation"
} |
{
"abstract": ".............................................................................................................................. ii Dedication .......................................................................................................................... iii Acknowledgments.............................................................................................................. iv Table of",
"corpus_id": 81970745,
"title": "The Role Of The RNA-Binding Protein Rho Guanine Nucleotide Exchange Factor In The Cellular Stress Response"
} | {
"abstract": "RNA-binding protein pathology now represents one of the best characterized pathologic features of amyotrophic lateral sclerosis (ALS) and frontotemporal lobar degeneration patients with TDP-43 or FUS pathology (FTLD-TDP and FTLD-FUS). Using liquid chromatography tandem mass spectrometry, we identified altered levels of the RNA-binding motif 45 (RBM45) protein in the cerebrospinal fluid (CSF) of ALS patients. This protein contains sequence similarities to TAR DNA-binding protein 43 (TDP-43) and fused-in-sarcoma (FUS) that are contained in cytoplasmic inclusions of ALS and FTLD-TDP or FTLD-FUS patients. To further characterize RBM45, we first verified the presence of RBM45 in CSF and spinal cord tissue extracts of ALS patients by immunoblot. We next used immunohistochemistry to examine the subcellular distribution of RBM45 and observed in a punctate staining pattern within nuclei of neurons and glia in the brain and spinal cord. We also detected RBM45 cytoplasmic inclusions in 91 % of ALS, 100 % of FTLD-TDP and 75 % of Alzheimer’s disease (AD) cases. The most extensive RBM45 pathology was observed in patients that harbor the C9ORF72 hexanucleotide repeat expansion. These RBM45 inclusions were observed in spinal cord motor neurons, glia and neurons of the dentate gyrus. By confocal microscopy, RBM45 co-localizes with ubiquitin and TDP-43 in inclusion bodies. In neurons containing RBM45 cytoplasmic inclusions we often detected the protein in a punctate pattern within the nucleus that lacked either TDP-43 or ubiquitin. We identified RBM45 using a proteomic screen of CSF from ALS and control subjects for candidate biomarkers, and link this RNA-binding protein to inclusion pathology in ALS, FTLD-TDP and AD.",
"corpus_id": 3009517,
"title": "The RNA-binding motif 45 (RBM45) protein accumulates in inclusion bodies in amyotrophic lateral sclerosis (ALS) and frontotemporal lobar degeneration with TDP-43 inclusions (FTLD-TDP) patients"
} | {
"abstract": "TDP-43 is a predominantly nuclear DNA/RNA binding protein involved in transcriptional regulation and RNA processing. TDP-43 is also a component of the cytoplasmic inclusion bodies characteristic of amyotrophic lateral sclerosis (ALS) and of frontotemporal lobar degeneration with ubiquitinated inclusions (FTLD-U). We have investigated the premise that abnormalities of TDP-43 in disease would be reflected by changes in processing of its target RNAs. To this end, we have firstly identified RNA targets of TDP-43 using UV-Cross-Linking and Immunoprecipitation (UV-CLIP) of SHSY5Y cells, a human neuroblastoma cell line. We used conventional cloning strategies to identify, after quality control steps, 127 targets. Results show that TDP-43 binds mainly to introns at UG/TG repeat motifs (49%) and polypyrimidine rich sequences (17.65%). To determine if the identified RNA targets of TDP-43 were abnormally processed in ALS versus control lumbar spinal cord RNA, we performed RT-PCR using primers designed according to the location of TDP-43 binding within the gene, and prior evidence of alternative splicing of exons adjacent to this site. Of eight genes meeting these criteria, five were differentially spliced in ALS versus control. This supports the premise that abnormalities of TDP-43 in ALS are reflected in changes of RNA processing.",
"corpus_id": 5410643,
"score": -1,
"title": "RNA targets of TDP-43 identified by UV-CLIP are deregulated in ALS"
} |
{
"abstract": "We present an analysis of olivine‐rich exposures at Bellicia and Arruntia craters using Dawn Framing Camera (FC) color data. Our results confirm the existence of olivine‐rich materials at these localities as described by Ammannito et al. ( ) using Visual Infrared Spectrometer (VIR) data. Analyzing laboratory spectra of various howardite–eucrite–diogenite meteorites, high‐Ca pyroxenes, olivines, and olivine‐orthopyroxene mixtures, we derive three FC spectral band parameters that are indicators of olivine‐rich materials. Combining the three band parameters allows us, for the first time, to reliably identify sites showing modal olivine contents >40%. The olivine‐rich exposures at Bellicia and Arruntia are mapped using higher spatial resolution FC data. The exposures are located on the slopes of outer/inner crater walls, on the floor of Arruntia, in the ejecta, as well as in nearby fresh small impact craters. The spatial extent of the exposures ranges from a few hundred meters to few kilometers. The olivine‐rich exposures are in accordance with both the magma ocean and the serial magmatism model (e.g., Righter and Drake ; Yamaguchi et al. ). However, it remains unsolved why the olivine‐rich materials are mainly concentrated in the northern hemisphere (approximately 36–42°N, 46–74°E) and are almost absent in the Rheasilvia basin.",
"corpus_id": 119260024,
"title": "Olivine‐rich exposures at Bellicia and Arruntia craters on (4) Vesta from Dawn FC"
} | {
"abstract": "Asteroid 4 Vesta seems to be a major intact protoplanet, with a surface composition similar to that of the HED (howardite–eucrite–diogenite) meteorites. The southern hemisphere is dominated by a giant impact scar, but previous impact models have failed to reproduce the observed topography. The recent discovery that Vesta’s southern hemisphere is dominated by two overlapping basins provides an opportunity to model Vesta’s topography more accurately. Here we report three-dimensional simulations of Vesta’s global evolution under two overlapping planet-scale collisions. We closely reproduce its observed shape, and provide maps of impact excavation and ejecta deposition. Spiral patterns observed in the younger basin Rheasilvia, about one billion years old, are attributed to Coriolis forces during crater collapse. Surface materials exposed in the north come from a depth of about 20 kilometres, according to our models, whereas materials exposed inside the southern double-excavation come from depths of about 60–100 kilometres. If Vesta began as a layered, completely differentiated protoplanet, then our model predicts large areas of pure diogenites and olivine-rich rocks. These are not seen, possibly implying that the outer 100 kilometres or so of Vesta is composed mainly of a basaltic crust (eucrites) with ultramafic intrusions (diogenites).",
"corpus_id": 4410838,
"title": "The structure of the asteroid 4 Vesta as revealed by models of planet-scale collisions"
} | {
"abstract": "We present new reflectance spectra of 12 V-type asteroids obtained at the 3.6 m Telescopio Nazionale Galileo (TNG) covering the spectral range 0.7 to 2.5 μm. This spectral range, encompassing the 1 and 2 μm, pyroxene features, allows a precise mineralogical characterization of the asteroids. The spectra of these asteroids are examined and compared to spectra for the Howardite, Eucrite and Diogenite (HED) meteorites, of which Vesta is believed to be the parent body. The observed objects were selected from different dynamical populations with the aim to verify if there exist spectral parameters that can shed light on the origin of the objects. A reassessment of data previously published has also been performed using a new methodology. We derive spectral parameters from NIR spectra to infer mineralogical information of the observed asteroids. \n \n \n \nThe V-type asteroids here discussed show mainly orthopyroxene mineralogy although some of them seem to have a mineralogical composition containing cations that are smaller than Mg cations. Most of the observed Vestoids show a low abundance of Ca (<10 per cent Wo). This result implies that no one of the Vestoids studied consists of just eucritic material, but they must additionally have a diogenitic component. However, we must remember that the ground-based data are subject to larger errors than the laboratory data used as reference spectra for interpretation. \n \n \n \nFinally, we note that the intermediate belt asteroid (21238) 1995WV7 has spectral parameters quite different from the observed V-type objects of the inner belt, so it could be a basaltic asteroid not related to Vesta. \n \n \n \nThis mineralogical analysis of asteroids related to Vesta is done in support of NASA’s Dawn mission, which will enter into orbit around Vesta in the summer of 2011. 
This work extends the scientific context of the mission to include processes contributing to the nature of smaller V-type asteroids that may be related to Vesta.",
"corpus_id": 120573014,
"score": -1,
"title": "Mineralogical characterization of some V‐type asteroids, in support of the NASA Dawn mission★"
} |
{
"abstract": "Background and Objectives: The aim of this study was to determine the temperature depth profiles induced in human skin in vivo by using a pulsed 975 nm diode laser (with 5ms pulse duration) and compare them with those induced by the more common 532 nm (KTP) and 1,064 nm (Nd:YAG) lasers. Quantitative assessment of the energy deposition characteristics in human skin at 975 nm should help design of safe and effective treatment protocols when using such lasers. Study Design/Materials and Methods: Temperature depth profiles induced in the human skin by the three lasers were determined using pulsed photothermal radiometry (PPTR). This technique involves time‐resolved measurement of mid‐infrared emission from the irradiated test site and reconstruction of the laser‐induced temperature profiles using an earlier developed optimization algorithm. Measurements were performed on volar sides of the forearms in seven volunteers with healthy skin. At irradiation spot diameters of 3–4mm, the radiant exposures were 0.24, 0.36, and 5.7 J/cm for the 975, 532, and 1,064nm lasers, respectively. Results: Upon normalization to the same radiant exposure of 1 J/cm, the assessed maximum temperature rise in the epidermis averaged 0.8 °C for the 975 nm laser, 7.4 °C for the 532 nm, and 0.6 °C for the 1,064 nm laser. The characteristic subsurface depth to which 50% of the absorbed laser energy was deposited was on average 0.31mm at 975 nm irradiation, and slightly deeper at 1,064 nm, and 0.15mm at 532 nm. The experimentally obtained relations were reproduced in a dedicated numerical simulation. Conclusions: The assessed energy deposition characteristics show that the pulsed 975nm diode laser is very suitable for controlled heating of the upper dermis as required, for example, for nonablative skin rejuvenation. The risks of nonselective overheating of the epidermis and subcutis are significantly reduced in comparison with irradiation at 532 and 1,064nm, respectively. Lasers Surg. 
Med. © 2019 Wiley Periodicals, Inc.",
"corpus_id": 189818145,
"title": "Lasers in Surgery and Medicine"
} | {
"abstract": "We report on the first experimental evaluation of pulsed photothermal radiometry (PPTR) using a spectrally composite kernel matrix in signal analysis. Numerical studies have indicated that this approach could enable PPTR temperature profiling in watery tissues with better accuracy and stability as compared to the customary monochromatic approximation. By using an optimized experimental set-up and image reconstruction code (involving a projected ν-method and adaptive regularization), we demonstrate accurate localization of thin absorbing layers in agar tissue phantoms with pronounced spectral variation of a mid-infrared absorption coefficient. Moreover, the widths of reconstructed temperature peaks reach 14–17% of their depth, significantly less than in earlier reports on PPTR depth profiling in watery tissues. Experimental results are replicated by a detailed numerical simulation, which enables analysis of the broadening effect as a function of temperature profile amplitude and depth.",
"corpus_id": 7145766,
"title": "A spectrally composite reconstruction approach for improved resolution of pulsed photothermal temperature profiling in water-based samples"
} | {
"abstract": "BACKGROUND\nThe flashlamp-pumped pulsed dye laser (577,585 nm) with 300 to 450 microseconds pulsewidths has been demonstrated to effectively and safely treat port-wine stains, telangiectases, and superficial hemangiomas in children.\n\n\nOBJECTIVE\nThe objective of this manuscript is to review the indications of the pulsed dye laser in the treatment of vascular lesions in children.\n\n\nCONCLUSION\nPulsed dye laser treatment of port-wine stains can remove or lighten the lesions with multiple treatment sessions. Spider telangiectases respond with complete resolution, usually within one to two treatment sessions. Superficial hemangiomas respond quite easily and effectively with the pulsed dye laser, while a more variable response is noted in deeper hemangiomas, early proliferative lesions, and ulcerated hemangiomas. This procedure is safe with a low incidence of scarring and pigmentary alteration.",
"corpus_id": 2930245,
"score": -1,
"title": "Pulsed dye laser treatment of vascular lesions in children."
} |
{
"abstract": "Summary As an assignment from the Swedish Environmental Protection Agency, IVL has during 2006/2007 performed a \"Screening Study\" of 1,5,9-cyclododecatriene. The screening programme included measurements in background areas and in the vicinity of potential point sources. Measurements were also done in urban areas reflecting diffuse emission pathways from society. Sample types were air, soil, sediment, sludge and biota (fish). A total of 55 samples were included. CDDT was not found in any of the samples. The reported detection limits were 0.04 - 0.05 ng/m 3 in air, 10 ng/g DW in sediment and soil, 20 ng/g DW in sludge and 1-4 ng/g WW in fish. The overall conclusion is that 1,5,9-cyclododecatriene is generally not present in the Swedish environment in concentration that is of environmental concern. The substance is thus not recommended as a candidate to be included in regular monitoring. Keyword screening 1,5,9-cyklododecatriene CDDT Bibliographic data IVL Report B1747 The report can be ordered via",
"corpus_id": 264234499,
"title": "Results from the Swedish National Screening Programme 2004"
} | {
"abstract": "Extrapolating toxicant effects with a fixed application factor (AF) approach or one of the species sensitivity distribution (SSD) models presumes that toxicant effects on single, individual-level endpoints reflect effects at the ecosystem level. Measured effect concentrations on plankton from multispecies field tests using tributyltin (TBT) and linear alkylbenzene sulfonates (LAS) were compared with published laboratory single-species test results and measured in situ concentrations. Extrapolation methods were evaluated by comparing predicted no-effect concentrations (PNECs), calculated by AF and SSD models with NOECs and E(L)C(50)s obtained from field studies. Overall, structural parameters were more sensitive than functional ones. Measured effect concentrations covered approximately the same range between laboratory and field experiments. Both SSD and AF approaches provide PNECs that appear to be protective for ecosystems. The AF approach is simpler to apply than the SSD models and results in PNECs that are no less conservative. Calculated PNEC values and the lowest field effect concentrations were lower than measured environmental concentrations for both substances, indicating that they may pose a risk to marine ecosystems.",
"corpus_id": 857377,
"title": "Comparing sensitivity of ecotoxicological effect endpoints between laboratory and field."
} | {
"abstract": "Abstract Tolerance levels to zinc ions of three diatoms ( Skeletonema costatum (Grev.) Cleve, Thalassiosira pseudonana (Hust.) Hasle and Phaeodactylum tricornutum (Bohlin) grown in dialysis culture in the local fjord water were studied. Declining relative growth rates were observed by addition of 50, 250 and 25,000μg/l of zinc ions, respectively, for the three algae. Reduced final cell concentrations were found at lower zinc levels. At least for one species a significant increase in zinc uptake by the cells took place at zinc levels which did not seem to influence the growth and development of the alga. Two clones of Skeletonema costatum studied showed significant intraspecific differences regarding the tolerance to zinc pollution. Dialysis bioassay was found suitable for monitoring heavy metal pollution of aquatic recipients.",
"corpus_id": 83956476,
"score": -1,
"title": "Heavy metal tolerance of marine phytoplankton. I. The tolerance of three algal species to zinc in coastal sea water"
} |
{
"abstract": "With the IMS 4F, a scanning ion microscope and mass spectrometer (SIMS), it is possible to map chemical elements with a lateral resolution of about 250 nm over a field of view of 50 × 50 μm2. Such conditions should enable the imaging of subcellular structures with constitutive ionic species such as CN−, P−, S−. The study was performed on heart and renal tissues prepared either by chemical procedure or cryofixation‐freeze substitution (CF‐FS) prior to embedding. Heart tissue was chosen because cardiocytes display a simple structural organization whereas the structural organization of kidney tubular cells is more complex. Whatever the preparation procedure, nuclei were easily identified due to their high P− content. The CN−, P−, and S− ion images obtained on heart and renal tissues prepared by chemical procedure showed weak contrasts inside the cytoplasm so that it was difficult to recognize the organelles. After CF‐FS, enhanced contrasted images allow organelle (mitochondria, myofibrils, lysosomes, vacuoles, basal lamina, etc) characterization. This work demonstrated that CF‐FS is a more suitable preparation procedure than chemical method to reveal organelle structures by their chemical composition. The improvements in the imaging of these structures is an essential step to establish the correlation between the localization of a trace element (or a molecule tagged with isotopes or particular atoms) and its subcellular targets.",
"corpus_id": 221527290,
"title": "Imaging of subcellular structures by scanning ion microscopy and mass spectrometry. Advantage of cryofixation and freeze substitution procedure over chemical preparation"
} | {
"abstract": "An X‐ray microanalytical preparation technique using continuous specimen cooling and consisting of cryotransfer of frozen sections into the electron microscope, freeze‐drying of the sections within the microscope and analysis at liquid nitrogen temperature is compared with a more conventional technique characterized by freeze‐drying of sections in a vacuum evaporator with subsequent carbon coating, transfer of frozen‐dried sections through the room air into the electron microscope and analysis at ambient temperature. For this comparison elemental concentrations in mitochondria, in areas of the rough endoplasmic reticulum and in the cytoplasm of rat hepatocytes, were measured.",
"corpus_id": 1362747,
"title": "X‐ray microanalysis with continuous specimen cooling: is it necessary?"
} | {
"abstract": "ERK1/2 is involved in a variety of cellular processes during development, but the functions of these isoforms in brain development remain to be determined. Here, we generated double knockout (DKO) mice to study the individual and combined roles of ERK1 and ERK2 during cortical development. Mice deficient in Erk2, and more dramatically in the DKOs, displayed proliferation defects in late radial glial progenitors within the ventricular zone, and a severe disruption of lamination in the cerebral cortex. Immunohistochemical analyses revealed that late‐generated cortical neurons were misplaced and failed to migrate the upper cortical layers in DKO mice. Moreover, these mice displayed fewer radial glial fibers, which provide architectural guides for radially migrating neurons. These results suggest that extracellular signal‐regulated kinase signaling is essential for the expansion of the radial glial population and for the maintenance of radial glial scaffolding. Tangential migration of interneurons and oligodendrocytes from the ganglionic eminences (GE) to the dorsal cortex was more severely impaired in DKO mice than in mice deficient for Erk2 alone, because of reduced progenitor proliferation in the GE of the ventral telencephalon. These data demonstrate functional overlaps between ERK1 and ERK2 and indicate that extracellular signal‐regulated kinase signaling plays a crucial role in cortical development.",
"corpus_id": 12339376,
"score": -1,
"title": "ERK1 and ERK2 are required for radial glial maintenance and cortical lamination"
} |
{
"abstract": "Assimilation of lidar observations for air quality modelling is investigated via the development of a new model, which assimilates ground-based lidar network measurements using optimal interpolation (OI) in a chemistry transport model. First, a tool for assimilating PM10 (particulate matter with a diameter lower than 10 um) concentration measurements on the vertical is developed in the air quality modelling platform POLYPHEMUS. It is applied to western Europe for one month from 15 July to 15 August 2001 to investigate the potential impact of future ground-based lidar networks on analysis and short-term forecasts (the description of the future) of PM10. The efficiency of assimilating lidar network measurements is compared to the efficiency of assimilating concentration measurements from the AirBase ground network, which includes about 500 stations in western Europe. A sensitivity study on the number and location of required lidars is also performed to help define an optimal lidar network for PM10 forecasts. Secondly, a new model for simulating normalised lidar signals (PR2) is developed and integrated in POLYPHEMUS. Simulated lidar signals are compared to hourly ground-based mobile and in-situ lidar observations performed during the MEGAPOLI (Megacities : Emissions, urban, regional and Global Atmospheric POLlution and climate effects, and Integrated tools for assessment and mitigation) summer experiment in July 2009. It is found that the model correctly reproduces the vertical distribution of aerosol optical properties and their temporal variability. Additionally, two new algorithms for assimilating lidar signals are presented and evaluated during MEGAPOLI. The aerosol simulations without and with lidar data assimilation are evaluated using the AIRPARIF (a regional operational network in charge of air quality survey around the Paris area) database to demonstrate the feasibility and the usefulness of assimilating lidar profiles for aerosol forecasts. 
Finally, POLYPHEMUS with the model for assimilating lidar signals is applied to the Mediterranean basin, where 9 ground-based lidar stations from the ACTRIS/EARLINET network and 1 lidar station in Corsica performed a 72-hour period of intensive and continuous measurements in July 2012. Several parameters of the assimilation system are also studied to better estimate the spatial and temporal influence of the assimilation of lidar signals on aerosol forecasts.",
"corpus_id": 127675062,
"title": "A new air quality modelling approach at the regional scale using lidar data assimilation"
} | {
"abstract": "Monitoring aerosols over wide areas is important for the assessment of the population's exposure to health relevant particulate matter (PM). Satellite observations of aerosol optical depth (AOD) can contribute to the improvement of highly needed analyzed and forecasted distributions of PM when combined with a model and ground-based observations. In this paper, we evaluate the contribution of column AOD observations from a future imager on a geostationary satellite by performing an Observing System Simulation Experiment (OSSE). In the OSSE simulated imager, AOD observations and ground-based PM observations are assimilated in the chemistry transport model LOTOS-EUROS to assess the added value of the satellite observations relative to the value of ground-based observations. Results show that in highly polluted situations, the imager AOD observations improve analyzed and forecasted PM2.5 concentrations even in the vicinity of simultaneously incorporated ground-based PM observations. The added value of the proposed imager is small when considering monthly averaged PM distributions. This is attributed to relatively large errors in the imager AODs in case of background aerosol loads coupled to the fact that the imager AODs are column values and an indirect estimate of PM. In the future, model improvements and optimization of the assimilation system should be achieved for better handling of situations with aerosol plumes above the boundary layer and satellite observations containing aerosol profile information. With the suggested improvements, the developed OSSE will form a powerful tool for determining the added value of future missions and defining requirements for planned satellite observations.",
"corpus_id": 3162223,
"title": "The Added Value of a Proposed Satellite Imager for Ground Level Particulate Matter Analyses and Forecasts"
} | {
"abstract": "Recognition of the extent and magnitude of night-time light pollution impacts on natural ecosystems is increasing, with pervasive effects observed in both nocturnal and diurnal species. Municipal and industrial lighting is on the cusp of a step change where energy-efficient lighting technology is driving a shift from “yellow” high-pressure sodium vapor lamps (HPS) to new “white” light-emitting diodes (LEDs). We hypothesized that white LEDs would be more attractive and thus have greater ecological impacts than HPS due to the peak UV-green-blue visual sensitivity of nocturnal invertebrates. Our results support this hypothesis; on average LED light traps captured 48% more insects than were captured with light traps fitted with HPS lamps, and this effect was dependent on air temperature (significant light × air temperature interaction). We found no evidence that manipulating the color temperature of white LEDs would minimize the ecological impacts of the adoption of white LED lights. As such, large-scale adoption of energy-efficient white LED lighting for municipal and industrial use may exacerbate ecological impacts and potentially amplify phytosanitary pest infestations. Our findings highlight the urgent need for collaborative research between ecologists and electrical engineers to ensure that future developments in LED technology minimize their potential ecological effects.",
"corpus_id": 13633739,
"score": -1,
"title": "LED lighting increases the ecological impact of light pollution irrespective of color temperature."
} |
{
"abstract": "tRNA molecules have well-defined sequence conservations that reflect the conserved tertiary pairs maintaining their architecture and functions during the translation processes. An analysis of aligned tRNA sequences present in the GtRNAdb database (the Lowe Laboratory, University of California, Santa Cruz) led to surprising conservations on some cytosolic tRNAs specific for alanine compared to other tRNA species, including tRNAs specific for glycine. First, besides the well-known G3oU70 base pair in the amino acid stem, there is the frequent occurrence of a second wobble pair at G30oU40, a pair generally observed as a Watson–Crick pair throughout phylogeny. Second, the tertiary pair R15/Y48 occurs as a purine–purine R15/A48 pair. Finally, the conserved T54/A58 pair maintaining the fold of the T-loop is observed as a purine–purine A54/A58 pair. The R15/A48 and A54/A58 pairs always occur together. The G30oU40 pair occurs alone or together with these other two pairs. The pairing variations are observed to a variable extent depending on phylogeny. Among eukaryotes, insects display all variations simultaneously, whereas mammals present either the G30oU40 pair or both R15/A48 and A54/A58. tRNAs with the anticodon 34A(I)GC36 are the most prone to display all those pair variations in mammals and insects. tRNAs with anticodon Y34GC36 have preferentially G30oU40 only. These unusual pairs are not observed in bacterial, nor archaeal, tRNAs, probably because of the avoidance of A34-containing anticodons in four-codon boxes. Among eukaryotes, these unusual pairing features were not observed in fungi and nematodes. These unusual structural features may affect, besides aminoacylation, transcription rates (e.g., 54/58) or ribosomal translocation (30/40).",
"corpus_id": 220908159,
"title": "Unusual tertiary pairs in eukaryotic tRNAAla"
} | {
"abstract": "During translation, aminoacyl-tRNA synthetases recognize the identities of the tRNAs to charge them with their respective amino acids. The conserved identities of 58,244 eukaryotic tRNAs of 24 invertebrates and 45 vertebrates in genomic tRNA database were analyzed and their novel features extracted. The internal promoter sequences, namely, A-Box and B-Box, were investigated and evidence gathered that the intervention of optional nucleotides at 17a and 17b correlated with the optimal length of the A-Box. The presence of canonical transcription terminator sequences at the immediate vicinity of tRNA genes was ventured. Even though non-canonical introns had been reported in red alga, green alga, and nucleomorph so far, fairly motivating evidence of their existence emerged in tRNA genes of other eukaryotes. Non-canonical introns were seen to interfere with the internal promoters in two cases, questioning their transcription fidelity. In a first of its kind, phylogenetic constructs based on tRNA molecules delineated and built the trees of the vast and diverse invertebrates and vertebrates. Finally, two tRNA models representing the invertebrates and the vertebrates were drawn, by isolating the dominant consensus in the positional fluctuations of nucleotide compositions.",
"corpus_id": 1263068,
"title": "Eukaryotic tRNAs fingerprint invertebrates vis-à-vis vertebrates"
} | {
"abstract": "Liver cell transplantation may provide a means to replace lost or deficient liver tissue, but devices capable of delivering hepatocytes to a desirable anatomic location and guiding the development of a new tissue from these cells and the host tissue are needed. We have investigated whether sponges fabricated from poly-L-lactic acid (PLA) infiltrated with polyvinyl alcohol (PVA) would meet these requirements. Highly porous sponges (porosity = 90-95%) were fabricated from PLA using a particulate leaching technique. To enable even and efficient cell seeding, the devices were infiltrated with the hydrophilic polymer polyvinyl alcohol (PVA). This reduced their contact angle with water from 79 to 23 degrees, but did not inhibit the ability of hepatocytes to adhere to the polymer. Porous sponges of PLA infiltrated with PVA readily absorbed aqueous solutions into 98% of their pore volume, and could be evenly seeded with high densities (5 x 10(7) cells/mL) of hepatocytes. Hepatocyte-seeded devices were implanted into the mesentery of laboratory rats, and 6 +/- 2 x 10(5) of the hepatocytes engrafted per sponge. Fibrovascular tissue invaded through the devices' pores, leading to a composite tissue consisting of hepatocytes, blood vessels and fibrous tissue, and the polymer sponge.",
"corpus_id": 44554960,
"score": -1,
"title": "Biodegradable sponges for hepatocyte transplantation."
} |
{
"abstract": "The Jacobi-Trudi formulas imply that the minors of the banded Toeplitz matrices can be written as certain skew Schur polynomials. In 2012, Alexandersson expressed the corresponding skew partitions in terms of the indices of the struck-out rows and columns. In the present paper, we develop the same idea and obtain some new applications. First, we prove a slight generalization and modification of Alexandersson's formula. Second, we deduce corollaries about the cofactors and eigenvectors of banded Toeplitz matrices, and give new simple proofs to the corresponding formulas published by Trench in 1985.",
"corpus_id": 37940137,
"title": "Cofactors and eigenvectors of banded Toeplitz matrices: Trench formulas via skew Schur polynomials"
} | {
"abstract": "We prove that for arbitrary partitions $\\mathbf{\\lambda} \\subseteq \\mathbf{\\kappa},$ and integers $0\\leq c<r\\leq n,$ the sequence of Schur polynomials $S_{(\\mathbf{\\kappa} + k\\cdot\\mathbf{1}^c)/(\\mathbf{\\lambda} + k\\cdot\\mathbf{1}^r)}(x_1,\\dots,x_n)$ for $k$ sufficiently large, satisfy a linear recurrence. The roots of the characteristic equation are given explicitly. These recurrences are also valid for certain sequences of minors of banded Toeplitz matrices. In addition, we show that Widom's determinant formula from 1958 is a special case of a well-known identity for Schur polynomials.",
"corpus_id": 1364687,
"title": "Schur Polynomials, Banded Toeplitz Matrices and Widom's Formula"
} | {
"abstract": "Given an arbitrary complex-valued infinite matrix $\\infmatA=(a_{ij})$, $i=1,\\dotsc,\\infty$; $j=1,\\dotsc,\\infty$ and a positive integer $n$ we introduce a naturally associated polynomial basis $\\pol ...",
"corpus_id": 13607630,
"score": -1,
"title": "Around a multivariate Schmidt–Spitzer theorem"
} |
{
"abstract": "Talus flatirons (TFs) are morphostratigraphic markers of prior talus deposition that are now disconnected from the active hillslope. Three generations of TFs (TF1, TF2, TF3) exist flanking a Sonoran Desert inselberg, Rock Peak, in a welded tuff caprocks-controlled landscape bounded by pediments. TFs at Rock Peak enable estimation of slope retreat rates through the application of cosmogenic 10Be, optically stimulated luminescence dating, and catchment-wide denudation rates (CWDR). We estimate disconnection of TF1 on Rock Peak at 88.9 ± 7.8 ka (northern slope) and 29.1 ± 2.5 ka (southern slope). Rates of hillslope retreat measure between 311.6 mm·ka−1 (northern slope) and 728.5 mm·ka−1 (southern slope). Asymmetry in retreat rates is consistent with CWDR, with southern slopes denuding ∼1.5 times faster. The asymmetry is interpreted as the result of the southward structural dip of strata present (>10°). Denudation rates on the summit of Rock Peak (54.3 ± 19.4 mm·ka−1 welded tuff; 111.2 ± 15.3 mm·ka−1 sandstone conglomerate) support interpretation that removal of welded tuff caprock accelerates denudation of this landscape and amplifies the impact of the structural dip. Given this, we interpret that Rock Peak will evolve into a rounded residual hill as pediments flanking the inselberg lengthen through time, similar to landforms observed in the surrounding landscape where the welded tuff and underlying sedimentary caprocks are no longer present. Using the range of slope retreat rates from Rock Peak, we provide a first estimate for the length of time necessary for pediments to form via hillslope retreat in the Sonoran Desert. Key Words: caprock, landscape evolution, pediment association, talus flatiron, 10Be exposure dating.",
"corpus_id": 201331107,
"title": "Asymmetric Hillslope Retreat Revealed from Talus Flatirons on Rock Peak, San Tan Mountains, Arizona, United States: Assessing Caprock Lithology Control on Landscape Evolution"
} | {
"abstract": "Publisher Summary This chapter provides an overview of some historical perspectives on landscape evolution, identifies the key qualitative studies that have moved the science of large-scale geomorphology forward, explores some of the new numeric models that simulate real landscapes and real processes, and provides a glimpse of future landscape evolution studies. Landscape evolution models come in two basic types, qualitative and quantitative, that can be applied across a wide range of spatial and temporal scales. The chapter focuses on models that address large-scale landforms and processes over the graded and cyclic scales of Schumm & Licthy (1965). The geomorphologic equivalent to the much sought unified theory of physical forces is a single landscape evolution model that can successfully explain the bewildering display of landforms and landscapes at all spatial and temporal scales and successfully predict the time-dependent changes in that landscape. One of the frontiers of landscape evolution research may include the direct reconstruction of the mean elevation of topography, as a global response to glacial climates, and the reconciliation of the present disparity of erosion rates from landscape of markedly similar relief and mean elevation.",
"corpus_id": 5758758,
"title": "Landscape evolution models"
} | {
"abstract": "Landscape evaluation has an important role in developing and maintaining sustainability on local, regional and global scales. For this purpose different mathematical, empirical and traditional approaches were applied including Feng-shui theory. The basis of this theory refers to the Chinese traditional knowledge, examination and evaluation of the intrinsic energies of people and places. At the present study the site analysis and the landscape evaluation with Feng-shui theory were applied using the shapes and imaginaries of the landforms in the Shandiz urban region, northeast of Iran. The unsustainable development plans along the land use changes in this region has resulted in land degradations and natural hazards. Therefore, it is necessary to explain the traditional or ecological landscape evaluation at the study area. Based on the Feng-shui method we investigated the assigning nine-lattice zones map of the study area to achieve the ecological landscape evaluation. We think this method is a useful fresh traditional method for application in Middle East area. Our results exhibited that the central cave spot and four directions at the study area had spatially adaptation on the Lo-shu tablet in the nine landscape zones. These zones revealed that the new optimal strategies based on Feng-shui variant characteristics which are the brief research outcomes of landscape evaluation.",
"corpus_id": 53407089,
"score": -1,
"title": "Ecological Evaluation of Landscape Using Feng-Shui Theory at Shandiz Urban Region, NE Iran"
} |
{
"abstract": "Effectiveness of a Critical Care Nurse Residency Program by Pamela A. Redman MSN, Walden University 2011 BA, Spalding University 2001 Project Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Nursing Practice Walden University November 2016",
"corpus_id": 56647185,
"title": "Effectiveness of a Critical Care Nurse Residency Program"
} | {
"abstract": "Reports that new nurse graduates are not sufficiently prepared to enter the workforce are of concern to educators, employers, and other stakeholders. Often, this lack of 'practice readiness' is defined in relation to an inability to 'hit the ground running' and is attributed to a 'gap' between theory and practice and the nature of current work environments. To gain a deeper understanding of the process of making the transition from student to graduate nurse, discussion groups were held across Alberta with 14 new graduates and 133 staff nurses, employers, and educators. Five additional new graduates and 34 staff nurses, employers, and educators provided input by fax or e-mail. The findings of this initiative speak to the need to examine assumptions underlying 'practice readiness' and what constitutes an effective transition to the workplace. The problems to be addressed are complex and a wide range of sustainable, evidence-based approaches are required to resolve them.",
"corpus_id": 890245,
"title": "Successful Transition of the New Graduate Nurse"
} | {
"abstract": "New nurses continue to face challenging work environments and high expectations for professional competence as they enter practice. Nurse residency programs are gaining prominence as a mechanism to ease new graduates' transition to practice. This study examined new graduates' perceptions of their professional practice competence and work environment throughout a yearlong nurse residency program. Employing a repeated measures design, data were collected at baseline, at 6 months, and at 12 months. Results showed that job satisfaction was significantly lowest at 6 months and highest at 12 months. Job stress was found to be lowest at 12 months and organizational commitment was highest at baseline. Of the variables related to professional practice, clinical decision-making was highest at 12 months and quality of nursing performance significantly increased at each measurement point. These data add to the growing evidence supporting the efficacy of nurse residency programs.",
"corpus_id": 4696993,
"score": -1,
"title": "Perceptions of professional practice and work environment of new graduates in a nurse residency program."
} |
{
"abstract": "Sorafenib has emerged as an effective therapeutic option for radioactive iodine (RAI)-refractory, locally advanced or metastatic differentiated thyroid cancer (DTC). We investigated the efficacy and safety of sorafenib treatment in a real-world setting and unveil predictive markers of responsiveness to sorafenib. The treatment response, progression-free survival (PFS), overall survival, and adverse events (AEs) of sorafenib-treated RAI-refractory, locally advanced or metastatic DTC patients at three institutes were retrospectively reviewed, and their tumor doubling time was calculated by three investigators. Total eighty-five patients were treated with sorafenib, and seven patients discontinued sorafenib due to AEs before the first tumor assessment. The median PFS was 14.4 months, and the objective response rate was 10.3% in 78 patients who were able to evaluate the tumor response. Age, sex, histologic type, tumor location, RAI avidity, or the presence of FDG-PET uptake did not affect PFS. However, smaller tumor size (≤1.5 cm) of the target lesions in lung showed better PFS (hazard ratio [HR] 0.39, p = 0.01), and tumors with the shortest doubling time (≤6 months) had worse outcome (HR 2.70, p < 0.01). Because of AEs, dose reductions or drug interruptions were required in 64% of patients, and eventually, 23% of patients discontinued sorafenib permanently. The most common AE was hand-foot skin reaction (HFSR). Patients with severe HFSR showed better PFS, but there were no statistical significance (HR 0.65, p = 0.05). In conclusion, small tumor size and long doubling time of each target lesion can be a prognostic marker to predict the responsiveness to sorafenib in RAI-refractory DTC patients.",
"corpus_id": 128352552,
"title": "Tumor doubling time predicts response to sorafenib in radioactive iodine-refractory differentiated thyroid cancer."
} | {
"abstract": "Objective The aim of the present study was to assess patient compliance with tyrosine kinase inhibitor (TKI) treatment used for refractory and progressive thyroid cancer, in addition to the efficacy and serious adverse events associated with these agents. Methods We retrospectively analyzed data from adult patients with metastatic differentiated or medullary thyroid cancer unresponsive to conventional treatment and treated with TKIs. Patients received treatment until disease progression or onset of serious adverse events, or until they expressed an intention to stop treatment. Results Twenty-four patients received TKIs. The median duration of treatment was four (range: 1–19) cycles. The most frequent adverse events were fatigue, nausea, diarrhea, hypertension, and stomatitis, and the most severe were nasal bleeding, diarrhea, heart failure, rhabdomyolysis, renal failure, QT prolongation, neutropenia, and severe fatigue. Dose reduction was required in eight patients, while five decided to terminate TKI therapy because adverse events impaired their everyday activities. During therapy, two patients showed a partial response and three showed stable disease. The lungs were the metastatic sites favoring a response to treatment. Conclusion Patient selection and meticulous pretreatment education are necessary in order to ensure adherence with TKI therapy. If adverse events appear, dose reduction or temporary treatment interruption may be offered because some adverse events resolve with continuation of treatment. In the event of serious adverse events, treatment discontinuation is necessary.",
"corpus_id": 1308433,
"title": "Oncotargets and Therapy Dovepress Dovepress"
} | {
"abstract": "Tyrosine kinase inhibitors (TKIs) are multi-targeted anti-cancer agents effective in the treatment of renal cell carcinoma (RCC), imatinib-resistant gastrointestinal stromal tumor (GIST) and pancreatic cancer (PC). Targeting and inhibiting a wide range of oncogenically relevant receptor tyrosine kinases (RTKs), TKIs have been the golden standard treatment of several types of cancer. The cardiotoxicity of TKIs, however, has also emerged alongside their anti-cancer potencies and has attracted research attention. Over the past few years significant progress has been made in developing a deeper understanding of aspects such as extent of cardiotoxicity, prognostic implications and survival predictions, toxicological mechanisms, and potential cardioprotective therapies. In this review we focus on a typical TKI sunitinib and summarize the up-to-date knowledge of sunitinib-induced cardiac abnormalities reported in clinical studies, weighing their implications of prognostic values. We also examine recent findings in underlying mechanisms, and development of potential cardioprotective agents.",
"corpus_id": 45233028,
"score": -1,
"title": "Progress on the cardiotoxicity of sunitinib: Prognostic significance, mechanism and protective therapies."
} |
{
"abstract": "Reliable broadcast can be a very useful primitive for many distributed applications, especially in the context of sensoractuator networks. Recently, the issue of reliable broadcast has been addressed in the context of the radio network model that is characterized by a shared channel, and where a transmission is heard by all nodes within the sender’s neighborhood. This basic defining feature of the radio network model can be termed as the reliable local broadcast assumption. However, in actuality, wireless networks do not exhibit such perfect and predictable behavior. Thus any attempt at distributed protocol design for multi-hop wireless networks based on the idealized radio network model requires the availability of a reliable local broadcast primitive that can provide guarantees of such idealized behavior. We present a simple proof-of-concept approach toward the implementation of a reliable local broadcast primitive with probabilistic guarantees, with the intent to highlight the potential for lightweight scalable solutions to achieve probabilistic reliable local broadcast in a wireless network.",
"corpus_id": 15753649,
"title": "Reliable Local Broadcast in a Wireless Network Prone to Byzantine Failures"
} | {
"abstract": "Theorists and practitioners have fairly different perspectives on how wireless broadcast works. Theorists think about synchrony; practitioners think about backoff. Theorists assume reliable communication; practitioners worry about collisions. The examples are endless. Our goal is to begin to reconcile the theory and practice of wireless broadcast, in the presence of failures. We propose new models for wireless broadcast and use them to examine what makes a broadcast model good. In the process, we pose some interesting questions that help to bridge the gap.",
"corpus_id": 1750649,
"title": "Usability: reconciling theory and practice"
} | {
"abstract": "PURPOSE\nThe paper's aim is to compare experienced and potential US medical tourists' foreign health service-quality expectations.\n\n\nDESIGN/METHODOLOGY/APPROACH\nData were collected via an online survey involving 1,588 US consumers engaging or expressing an interest in medical tourism. The sample included 219 experienced and 1,369 potential medical tourists. Respondents completed a SERVQUAL questionnaire. Mann-Whitney U-tests were used to determine significant differences between experienced and potential US medical tourists' service-quality expectations.\n\n\nFINDINGS\nFor all five service-quality dimensions (tangibles, reliability, responsiveness, assurance and empathy) experienced medical tourists had significantly lower expectations than potential medical tourists. Experienced medical tourists also had significantly lower service-quality expectations than potential medical tourists for 11 individual SERVQUAL items.\n\n\nPRACTICAL IMPLICATIONS\nResults suggest using experience level to segment medical tourists. The study also has implications for managing medical tourist service-quality expectations at service delivery point and via external marketing communications.\n\n\nORIGINALITY/VALUE\nManaging medical tourists' service quality expectations is important since expectations can significantly influence choice processes, their experience and post-consumption behavior. This study is the first to compare experienced and potential US medical tourist service-quality expectations. The study establishes a foundation for future service-quality expectations research in the rapidly growing medical tourism industry.",
"corpus_id": 20897358,
"score": -1,
"title": "Experienced and potential medical tourists' service quality expectations."
} |
{
"abstract": "K. Adaricheva and M. Bolat have recently proved that if $U_0$ and $U_1$ are circles in a triangle with vertices $A_0,A_1,A_2$, then there exist $j\\in \\{0,1,2\\}$ and $k\\in\\{0,1\\}$ such that $U_{1-k}$ is included in the convex hull of $U_k\\cup(\\{A_0,A_1, A_2\\}\\setminus\\{A_j\\})$. One could say disks instead of circles. Here we prove the existence of such a $j$ and $k$ for the more general case where $U_0$ and $U_1$ are compact sets in the plane such that $U_1$ is obtained from $U_0$ by a positive homothety or by a translation. Also, we give a short survey to show how lattice theoretical antecedents, including a series of papers on planar semimodular lattices by G. Gratzer and E. Knapp, lead to our result.",
"corpus_id": 59568844,
"title": "A convex combinatorial property of compact sets in the plane and its roots in lattice theory"
} | {
"abstract": "A recent result of G. Czédli and E.T. Schmidt gives a construction of slim (planar) semimodular lattices from planar distributive lattices by adding elements, adding “forks”. We give a construction that accomplishes the same by deleting elements, by “resections”.",
"corpus_id": 293288,
"title": "Notes on Planar Semimodular Lattices. VII. Resections of Planar Semimodular Lattices"
} | {
"abstract": "Abstract.The purpose of this paper is to explore the supersymmetry invariance of a particular supergravity theory, which we refer to as D = 4 generalized AdS-Lorentz deformed supergravity, in the presence of a non-trivial boundary. In particular, we show that the so-called generalized minimal AdS-Lorentz superalgebra can be interpreted as a peculiar torsion deformation of $\\mathfrak{osp} (4 \\vert 1)$𝔬𝔰𝔭(4|1), and we present the construction of a bulk Lagrangian based on the aforementioned generalized AdS-Lorentz superalgebra. In the presence of a non-trivial boundary of space-time, that is when the boundary is not thought of as set at infinity, the fields do not asymptotically vanish, and this has some consequences on the invariances of the theory, in particular on supersymmetry invariance. In this work, we adopt the so-called rheonomic (geometric) approach in superspace and show that a supersymmetric extension of a Gauss-Bonnet-like term is required in order to restore the supersymmetry invariance of the theory. The action we end up with can be recast as a MacDowell-Mansouri-type action, namely as a sum of quadratic terms in the generalized AdS-Lorentz covariant super field-strengths.",
"corpus_id": 54912902,
"score": -1,
"title": "Generalized AdS-Lorentz deformed supergravity on a manifold with boundary"
} |
{
"abstract": "Playing a significant role in the operation of the modern electric grid, Demand Response Programs (DRPs) balance the demand on the consumer level with the supply through the deployment of specific measures. An important undesirable phenomenon may occur when simultaneously launched DRPs on a same network interact negatively between each other. Therefore, an optimal coordination is necessary in order to optimize a preset objective function. Within this context, and as a new approach for solving this issue, this paper suggests to perform an optimal clustering applied on the network gathered data. Based on published results, the importance of synchronization between different DRPs is proved. It is shown how the suggested clustering would improve the synchronization process. As a demonstration for this new approach, clustering simulations are applied on a set of a real data built through measurements performed on the distribution network of a university campus.",
"corpus_id": 57362223,
"title": "An Optimal Approach for Offering Multiple Demand Response Programs Over a Power Distribution Network"
} | {
"abstract": "In this paper, the distributionally robust optimization approach (DROA) is proposed to schedule the energy consumption of the heating, ventilation and air conditioning (HVAC) system with consideration of the weather forecast error. The maximum interval of the outdoor temperature is partitioned into subintervals, and the proposed DROA constructs the ambiguity set of the probability distribution of the outdoor temperature based on the probabilistic information of these subintervals of historical weather data. The actual energy consumption will be adjusted according to the forecast error and the scheduled consumption in real time. The energy consumption scheduling of HVAC through the proposed DROA is formulated as a nonlinear problem with distributionally robust chance constraints. These constraints are reformulated to be linear and then the problem is solved via linear programming. Compared with the method that takes into account the weather forecast error based on the mean and the variance of historical data, simulation results demonstrate that the proposed DROA effectively reduces the electricity cost with less computation time, and the electricity cost is reduced compared with the traditional robust method.",
"corpus_id": 3710533,
"title": "Energy Consumption Scheduling of HVAC Considering Weather Forecast Error Through the Distributionally Robust Approach"
} | {
"abstract": "We develop tractable semidefinite programming based approximations for distributionally robust individual and joint chance constraints, assuming that only the first- and second-order moments as well as the support of the uncertain parameters are given. It is known that robust chance constraints can be conservatively approximated by Worst-Case Conditional Value-at-Risk (CVaR) constraints. We first prove that this approximation is exact for robust individual chance constraints with concave or (not necessarily concave) quadratic constraint functions, and we demonstrate that the Worst-Case CVaR can be computed efficiently for these classes of constraint functions. Next, we study the Worst-Case CVaR approximation for joint chance constraints. This approximation affords intuitive dual interpretations and is provably tighter than two popular benchmark approximations. The tightness depends on a set of scaling parameters, which can be tuned via a sequential convex optimization algorithm. We show that the approximation becomes essentially exact when the scaling parameters are chosen optimally and that the Worst-Case CVaR can be evaluated efficiently if the scaling parameters are kept constant. We evaluate our joint chance constraint approximation in the context of a dynamic water reservoir control problem and numerically demonstrate its superiority over the two benchmark approximations.",
"corpus_id": 11547182,
"score": -1,
"title": "Distributionally robust joint chance constraints with second-order moment information"
} |
{
"abstract": "Vehicle ego-localization is an essential process for many driver assistance and autonomous driving systems. The traditional solution of GPS localization is often unreliable in urban environments where tall buildings can cause shadowing of the satellite signal and multipath propagation. Typical visual feature based localization methods rely on calculation of the fundamental matrix which can be unstable when the baseline is small. In this paper we propose a novel method which uses the scale of matched SURF image features and Dynamic Time Warping to perform stable localization. By comparing SURF feature scales between input images and a pre-constructed database, stable localization is achieved without the need to calculate the fundamental matrix. In addition, 3D information is added to the database feature points in order to perform lateral localization, and therefore lane recognition. From experimental data captured from real traffic environments, we show how the proposed system can provide high localization accuracy relative to an image database, and can also perform lateral localization to recognize the vehicle's current lane.",
"corpus_id": 15955225,
"title": "Single camera vehicle localization using SURF scale and dynamic time warping"
} | {
"abstract": "This paper focuses on ego-localization using in-vehicle cameras. We propose a 2D ego-localization method using streetscape appearance as a feature of image matching and the triangulation of matching results. The image sequences of two in-vehicle cameras are matched to a database that contains a sequence of streetscape images and their corresponding positions. First, the proposed method searches for images similar to the input image from the database. Second, vehicle position is calculated based on triangulation using the positions stored in the database and the viewing directions of the two cameras. By assuming that the streetscape appearance changes continuously, a sequential image matching algorithm is used to improve the ego-localization accuracy. From experimental results, we confirmed that the proposed method surpasses the accuracy of a general GPS and achieved sufficient accuracy to be used for driving lane recognition.",
"corpus_id": 1239254,
"title": "Ego-localization using streetscape image sequences from in-vehicle cameras"
} | {
"abstract": "Abstract A technique has been described which allows a quantitative, topographic evaluation of ventilation and perfusion distribution in the human lung, using the Anger scintillation camera to detect the distribution of the radioactive isotope, 135 Xenon. The data obtained from four normal right and left lungs in seated, resting subjects, confirms the large (three to fourfold) apex to base perfusion gradient and the smaller (30%) ventilation gradient previously described by several authors. No significant horizontal gradient of either ventilation or perfusion could be detected.",
"corpus_id": 29442925,
"score": -1,
"title": "Use of scintillation camera and 135-xenon for study of topographic pulmonary function."
} |
{
"abstract": "Automated identification of individuals is one of the most popular works today. Human verification, especially the iris pattern recognition is widely applied as a robust method for applications that demands high security. Many reasons are considered for choosing the iris in human verification, for example, the stability of iris biometric features throughout human life and unaffected of its patterns by human genes (genetic independency). This paper presents a new contribution in iris recognition biometric system. It uses Fourier descriptors method to extract the iris significant feature for iris signature representation. The biometric system is proposed and implemented using four comparative classifiers. It involves four sequential processes: the image enhancement process; iris feature extraction and patterns creation process; template construction process; and finally, the recognition and classification process. The mathematical morphology operations and canny edge detector are both applied for best outer/inner boundary localization procedure. The system satisfied 100% accuracy result regarding iris-pupil boundary localization for CASIA-v1and CASIA-v4 dataset. Also, when the identity of 30 persons were verified, the maximum matching result was %96.67 for Back propagation and lowest rate was %83.5 for Radial basic function.",
"corpus_id": 55924373,
"title": "Fourier Descriptors for Iris Recognition"
} | {
"abstract": "With over a decade of intensive research in the field of biometrics, security-based applications have been developed. There are many biometric security systems for person identification based on palm print, face, voice, iris, etc. Many researchers have recommended PCA as an efficient algorithm for such applications due to its simplicity, accuracy, and dimensionality reduction on large datasets while retaining as much of the original information as possible. This paper presents the details of the PCA tool for analyzing patterns in images. This paper focuses on choosing the iris as a biometric for identification since it is unique to a person and remains unchanged over many years (throughout the life of a person). The CASIA v1 database has been used in the studies of PCA for personal identification. PCA gives 85% accuracy using Euclidean distance as a classifier.",
"corpus_id": 1303311,
"title": "Iris Recognition based on PCA for Person Identification"
} | {
"abstract": "This article summarizes the results of a three‐dimensional study of changes in the morphology of the L6 rat vertebra at 120 days after ovariectomy (OVX), with estrogen replacement therapy used as a positive control. Synchrotron radiation microtomography was used to quantify the structural parameters defining trabecular bone architecture, while finite‐element methods were used to explore the relationships between these parameters and the compressive elastic behavior of the vertebrae. There was a 22% decrease in trabecular bone volume (TBV) and a 19% decline in mean trabecular thickness (Tb.Th) with OVX. This was accompanied by a 150% increase in trabecular connectivity, a result of the perforation of trabecular plates. Finite‐element analysis of the trabecular bone removed from the cortical shell showed a 37% decline in the Young's modulus in compression after OVX with no appreciable change in the estrogen‐treated group. The intact vertebrae (containing its trabecular bone) exhibited a 15% decrease in modulus with OVX, but this decline lacked statistical significance. OVX‐induced changes in the trabecular architecture were different from those that have been observed in the proximal tibia. This difference was a consequence of the much more platelike structure of the trabecular bone in the vertebra.",
"corpus_id": 38779750,
"score": -1,
"title": "Three‐Dimensional Morphometry of the L6 Vertebra in the Ovariectomized Rat Model of Osteoporosis: Biomechanical Implications"
} |
{
"abstract": "The cluster formation in three-dimensional wireless sensor networks (3D-WSN) gives rise to overlapping of signals due to spherical sensing range which leads to information redundancy in the network. To address this problem, we develop a sensing algorithm for 3D-WSN based on dodecahedron topology which we call three-dimensional distributed clustering (3D-DC) algorithm. Using 3D-DC algorithm in 3D-WSN, accurate information extraction appears to be a major challenge due to the environmental noise where a cluster head (CH) node gathers and estimates information in each dodecahedron cluster. Hence, to extract precise information in each dodecahedron cluster, we propose three-dimensional information estimation (3D-IE) algorithm. Moreover, node deployment strategy also plays an important factor to maximize information accuracy in 3D-WSN. In most cases, sensor nodes are deployed deterministically or randomly. But both the deployment scenarios are not aware of where to exactly place the sensor nodes to extract more information in terms of accuracy. Therefore, placing nodes in its appropriate positions in 3D-WSN is a challenging task. We propose a three-dimensional node placement (3D-NP) algorithm which can find the possible nodes and their deployment strategy to maximize information accuracy in the network. We perform simulations using MATLAB to validate the 3D-DC, 3D-IE and 3D-NP, algorithms, respectively.",
"corpus_id": 12511948,
"title": "Information Estimation with Node Placement Strategy in 3D Wireless Sensor Networks"
} | {
"abstract": "Due to the deployment of a large number of sensor nodes in three-dimensional space, observed data are highly correlated among sensor nodes. Since the data are highly correlated, they produce a large quantity of redundant data in the network. To reduce data redundancy, we propose a clustering algorithm called Three Dimensional Event based Spatially Correlated Clustering (3D-ESCC). Moreover, to extract more accurate data in each distributed cluster of the 3D-ESCC algorithm, we propose an Event based Data Estimation (EDE) model in three-dimensional space and compare it with other data estimation models. In distributed wireless sensor networks, it may be possible that due to extreme physical conditions (e.g. heavy rainfall, high temperature and battery discharge) the sensor nodes fail to operate. For such situations, we develop a data prediction model in each distributed cluster in case of node failure. Computer simulations and validations are performed to validate the 3D-ESCC algorithm and the EDE model.",
"corpus_id": 50382,
"title": "Spatial data estimation in three dimensional distributed wireless sensor networks"
} | {
"abstract": "In this paper, a new fuzzy space vector modulation direct torque control strategy for induction machine based on indirect matrix converter is proposed. In the rectifier stage a space vector modulation strategy is employed. In the inverter stage two fuzzy logic regulators used to replace the classical PI regulators in PWM direct torque control method. In this method, the input current is nearly sinusoidal and the input displacement angle is adjustable. Using this control strategy, the advantages of indirect matrix converter and direct torque control method are combined. The performance of the proposed drive system is evaluated through digital simulation using MATLAB-SIMULINK package and simulation results are used to verify the effectiveness of the proposed strategy and support the analytical results.",
"corpus_id": 1642707,
"score": -1,
"title": "A New Fuzzy Direct Torque Control Strategy for Induction Machine Based on Indirect Matrix Converter"
} |
{
"abstract": "During the last decades, franchising as an organizational form has received a lot of attention from researchers and practitioners alike. While many studies have examined various aspects of franchis ...",
"corpus_id": 167644552,
"title": "Performance in Franchise Systems : The Franchisee Perspective"
} | {
"abstract": "This article examined franchisee satisfaction as a mediator and franchisee characteristics as moderators of the relationship between franchisee perceived relationship value and loyalty. Using data from 218 franchisees in 5 Chinese convenience store franchise companies, the findings revealed a partially mediating role for franchisee satisfaction in the relationship between perceived relationship value and loyalty. Furthermore, results showed that the relationship was stronger for franchisees who were older and more highly educated, but weaker for franchisees with a shorter relationship length. Key words: Relationship value, loyalty, mediator, moderators, franchising, China.",
"corpus_id": 154516994,
"title": "Franchisee perceived relationship value and loyalty in a franchising context: assessing the mediating role of franchisee satisfaction and the moderating role of franchisee characteristics"
} | {
"abstract": "We investigate the high spectral efficiency capabilities of a cellular data system that combines the following: 1) multiple transmit signals, each using a separately adaptive modulation; 2) adaptive array processing at the receiver; and 3) aggressive frequency reuse (reuse in every cell). We focus on the link capacity between one user and its serving base station, for both uncoded and ideally coded transmissions. System performance is measured in terms of average data throughput, where the average is over user location, shadow fading, and fast fading. We normalize this average by the total bandwidth, call it the mean spectral efficiency, and show why this metric is a useful representation of system capability. We then quantify it, using simulations, to characterize multiple-input multiple-output systems performance for a wide variety of channel conditions and system design options.",
"corpus_id": 18063471,
"score": -1,
"title": "Attainable throughput of an interference-limited multiple-input multiple-output (MIMO) cellular system"
} |
{
"abstract": "A bone morphogenetic protein (BMP) signaling pathway is implicated in dorsoventral patterning in Xenopus. Here we show that three genes in the zebrafish, swirl, snailhouse, and somitabun, function as critical components within a BMP pathway to pattern ventral regions of the embryo. The dorsalized mutant phenotypes of these genes can be rescued by overexpression of bmp4, bmp2b, an activated BMP type I receptor, and the downstream functioning Smad1 gene. Consistent with a function as a BMP ligand, swirl functions cell nonautonomously to specify ventral cell fates. Chromosomal mapping of swirl and cDNA sequence analysis demonstrate that swirl is a mutation in the zebrafish bmp2b gene. Interestingly, our analysis suggests that the previously described nonneural/neural ectodermal interaction specifying the neural crest occurs through a patterning function of swirl/bmp2b during gastrulation. We observe a loss in neural crest progenitors in swirl/bmp2b mutant embryos, while somitabun mutants display an opposite, dramatic expansion of the prospective neural crest. Examination of dorsally and ventrally restricted markers during gastrulation reveals a successive reduction and reciprocal expansion in nonneural and neural ectoderm, respectively, in snailhouse, somitabun, and swirl mutant embryos, with swirl/bmp2b mutants exhibiting almost no nonneural ectoderm. Based on the alterations in tissue-specific gene expression, we propose a model whereby swirl/bmp2b acts as a morphogen to specify different cell types along the dorsoventral axis.",
"corpus_id": 14155358,
"title": "Ventral and lateral regions of the zebrafish gastrula, including the neural crest progenitors, are established by a bmp2b/swirl pathway of genes."
} | {
"abstract": "The temporal and spatial transcription patterns of the Xenopus laevis Bone morphogenetic protein 2 (BMP-2) gene have been investigated. Unlike the closely related BMP-4 gene, the BMP-2 gene is strongly transcribed during oogenesis. Besides some enrichment within the animal half, maternal BMP-2 transcripts are ubiquitously distributed in the early cleavage stage embryos but rapidly decline during gastrulation. Zygotic transcription of this gene starts during early neurulation and transcripts are subsequently localized to neural crest cells, olfactory placodes, pineal body and heart anlage. Microinjection of BMP-2 RNA into the two dorsal blastomeres of 4-cell stage embryos leads to ventralization of developing embryos. This coincides with a decrease of transcripts from dorsal marker genes (beta-tubulin, alpha-actin) but not from ventral marker genes (alpha-globin). BMP-2 overexpression inhibits transcription of the early response gene XFD-1, a fork head/HNF-3 related transcription factor expressed in the dorsal lip, but stimulates transcription of the posterior/ventral marker gene Xhox3, a member of the helix-turn-helix family. Activin A incubated animal caps from BMP-2 RNA injected embryos show transcription of ventral but an inhibition of dorsal marker genes; thus, BMP-2 overrides the dorsalizing activity of activin A. The results demonstrate that BMP-2 overexpression exerts very similar effects as have previously been described for BMP-4, and they suggest that BMP-2 may act already as a maternal factor in ventral mesoderm formation.",
"corpus_id": 136963,
"title": "Bone morphogenetic protein 2 in the early development of Xenopus laevis"
} | {
"abstract": "Background: Prevention programs often promote HIV testing as one possible strategy of combating the spread of the disease. Objective: To examine levels of HIV testing practices among a large sample of university students and the relationship among HIV testing, sociodemographic variables, and HIV-related behaviors. Methods: A total of 1252 students were surveyed between June 2001 and February 2002 using a 193-item questionnaire measuring a variety of HIV-related knowledge and attitudinal and behavioral items. Results: Hierarchical logistic regression analyses revealed that youths, married persons, persons who had attended an HIV education forum, and those who knew someone with HIV/AIDS were more likely to report a previous HIV test. However, HIV testing was not associated with condom use or number of sex partners. Conclusion: The lack of significant findings between testing and risky sexual behaviors should not negate the importance of HIV testing. Being informed regarding personal HIV serostatus is one of the first steps in self-protection. Effective messages and programs need to be developed and implemented in Jamaica to promote HIV testing and help persons to adequately assess their level of risk with respect to contracting HIV.",
"corpus_id": 4883362,
"score": -1,
"title": "Prevalence and Correlates of HIV Testing: An Analysis of University Students in Jamaica"
} |
{
"abstract": "Speedup measures how much faster we can solve the same problem using many cores. If we can afford to keep the execution time fixed, then quality up measures how much better the solution will be computed using many cores. In this paper we describe our multithreaded implementation to track one solution path defined by a polynomial homotopy. Limiting quality to accuracy and confusing accuracy with precision, we strive to offset the cost of multiprecision arithmetic running multithreaded code on many cores.",
"corpus_id": 12609064,
"title": "Quality Up in Polynomial Homotopy Continuation by Multithreaded Path Tracking"
} | {
"abstract": "Homotopy continuation methods to solve polynomial systems scale very well on parallel machines. We examine its parallel implementation on multiprocessor multicore workstations using threads. With more cores we speed up pleasingly parallel path tracking jobs. In addition, we compute solutions more accurately in about the same amount of time with threads, and thus achieve quality up. Focusing on polynomial evaluation and linear system solving (key ingredients of Newton's method) we can double the accuracy of the results with the quad doubles of QD-2.3.9 in less than double the time, if all available eight cores are used.",
"corpus_id": 265846,
"title": "Polynomial homotopies on multicore workstations"
} | {
"abstract": "Delusions are defined as irrational beliefs that compromise good functioning. However, in the empirical literature, delusions have been found to have some psychological benefits. One proposal is that some delusions defuse negative emotions and protect one from low self-esteem by allowing motivational influences on belief formation. In this paper I focus on delusions that have been construed as playing a defensive function (motivated delusions) and argue that some of their psychological benefits can convert into epistemic ones. Notwithstanding their epistemic costs, motivated delusions also have potential epistemic benefits for agents who have faced adversities, undergone physical or psychological trauma, or are subject to negative emotions and low self-esteem. To account for the epistemic status of motivated delusions, costly and beneficial at the same time, I introduce the notion of epistemic innocence. A delusion is epistemically innocent when adopting it delivers a significant epistemic benefit, and the benefit could not be attained if the delusion were not adopted. The analysis leads to a novel account of the status of delusions by inviting a reflection on the relationship between psychological and epistemic benefits.",
"corpus_id": 10630912,
"score": -1,
"title": "The epistemic innocence of motivated delusions"
} |
{
"abstract": "The chronic and progressive nature of diabetes is usually associated with micro‐ and macrovascular complications where failure of pancreatic β‐cell function and a general condition of hyperglycaemia is created. One possible factor is failure of the patient to comply with and adhere to the prescribed insulin due to the inconvenient administration route. This review summarizes the rationale for oral insulin administration, existing barriers and some counter‐strategies trialled.",
"corpus_id": 12848146,
"title": "Oral insulin delivery: existing barriers and current counter‐strategies"
} | {
"abstract": "Introduction: Lipid-based drug delivery systems (LBDDS) are the most promising technique to formulate poorly water-soluble drugs. Nanotechnology strongly influences the therapeutic performance of hydrophobic drugs and has become an essential approach in drug delivery research. Self-nanoemulsifying drug delivery systems (SNEDDS) are a vital strategy that combines the benefits of LBDDS and nanotechnology. SNEDDS are now preferred to improve the formulation of drugs with poor aqueous solubility. Areas covered: The review in its first part shortly describes the LBDDS, nanoemulsions and clarifies the ambiguity between nanoemulsions and microemulsions. In the second part, the review discusses SNEDDS and elaborates on the current developments and modifications in this area without discussing their associated preparation techniques and excipient properties. Expert opinion: SNEDDS have exhibited the potential to increase the bioavailability of poorly water-soluble drugs. The stability of SNEDDS is further increased by solidification. Controlled release and supersaturation can be achieved, and are associated with increased patient compliance and improved drug loads, respectively. The presence of biodegradable ingredients and ease of large-scale manufacturing, combined with many 'drug-targeting opportunities', gives SNEDDS a clear distinction and prominence over other solubility enhancement techniques.",
"corpus_id": 4021260,
"title": "From nanoemulsions to self-nanoemulsions, with recent advances in self-nanoemulsifying drug delivery systems (SNEDDS)"
} | {
"abstract": "Reversible, fast, all-optical switching of the reflection of a cholesteric liquid crystal (CLC) is demonstrated in a formulation doped with push-pull azobenzene dyes. The reflection of the photosensitive CLC compositions is optically switched by exposure to 488 and 532 nm CW lasers as well as ns pulsed 532 nm irradiation. Laser-directed optical switching of the reflection of the CLC compositions occurs rapidly, within a few hundred milliseconds for the CW laser lines examined here. Also observed is optical switching on the order of tens of nanoseconds when the CLC is exposed to a single nanosecond pulse with 0.2 J/cm(2) energy density. The rapid cis-trans isomerization typical of push-pull azobenzene dye is used for the first time to rapidly restore the reflection of the CLC from a photoinduced isotropic state within seconds after cessation of light exposure.",
"corpus_id": 207314932,
"score": -1,
"title": "Optically switchable, rapidly relaxing cholesteric liquid crystal reflectors."
} |
{
"abstract": "A methodology for synthesizing robust optimal input trajectories for constrained linear hybrid systems subject to bounded additive disturbances is presented. The computed control sequence optimizes nominal performance while robustly guaranteeing that safety/performance constraints are respected. Specifically, for hybrid systems representable in the piecewise affine form, robustness is achieved with an open-loop optimization strategy based on the mixed logical dynamical modelling framework.",
"corpus_id": 59906110,
"title": "Robust optimal control of linear hybrid systems: An MLD approach"
} | {
"abstract": "This paper proposes an approach to extend the mixed logical dynamical modelling framework for synthesizing robust optimal control actions for constrained piecewise affine systems subject to bounded additive input disturbances. Rather than using closed-loop dynamic programming arguments, robustness is achieved here with an open-loop optimization strategy, such that the optimal control sequence optimizes nominal performance while robustly guaranteeing that safety/performance constraints are respected. The proposed approach is based on the robust mode control concept, which enforces the control input to generate trajectories such that the mode of the system, at each time instant, is independent of the disturbances.",
"corpus_id": 9554018,
"title": "Optimal control of uncertain piecewise affine/mixed logical dynamical systems"
} | {
"abstract": "Flexibility, ease of deployment and of spatial reconfiguration, and low cost make wireless sensor networks (WSNs) fundamental component of modern networked control systems. However, due to the energy-constrained nature of WSNs, the transmission rate of the sensor nodes is a critical aspect to take into account in control design. Two are the main contributions of this paper. First, a general transmission strategy for communication between controller and sensors is proposed. Then, a scenario with a controller and a wireless node providing measures is investigated, and two energy-aware control schemes based on explicit model predictive control (MPC) are presented. We consider both nominal and robust control in the presence of disturbances, and convergence properties are given for the latter. The proposed control schemes are tested and compared to traditional MPC techniques. The results show the effectiveness of the proposed energy-aware approach, which achieves a profitable trade-off between energy consumption of wireless sensors and loss in system performance.",
"corpus_id": 19000256,
"score": -1,
"title": "Energy-aware robust Model Predictive Control based on wireless sensor feedback"
} |
{
"abstract": "A previously unannotated, putative fliK gene was identified in the Campylobacter jejuni genome based on sequence analysis; deletion mutants in this gene had a 'polyhook' phenotype characteristic of fliK mutants in other genera. The mutants greatly overexpressed the sigma(54)-dependent flagellar hook protein FlgE, to form unusual filamentous structures resembling straight flagella in addition to polyhooks. The genome sequence reveals only one gene predicted to encode an orthologue of the NtrC-family activator required for sigma(54)-dependent transcription. Hence, all sigma(54)-dependent genes in the genome would be overexpressed in the fliK mutant together with flgE. Microarray analysis of genome-wide transcription in the mutant showed increased transcription of a subset of genes, often downstream of sigma(54)-dependent promoters identified by a quality-predictive algorithm applied to the whole genome. Assessment of genome-wide transcription in deletion mutants in rpoN, encoding sigma(54), and in the sigma(54)-activator gene flgR, showed reciprocally reduced transcription of genes that were overexpressed in the fliK mutant. The fliA (sigma(28))-dependent regulon was also analysed. Together the data clearly define the roles of the alternative sigma factors RpoN and FliA in flagellar biogenesis in C. jejuni, and identify additional putative members of their respective regulons.",
"corpus_id": 13933971,
"title": "Deletion of a previously uncharacterized flagellar-hook-length control gene fliK modulates the sigma54-dependent regulon in Campylobacter jejuni."
} | {
"abstract": "The human pathogen Campylobacter jejuni is a highly motile organism that carries a flagellum on each pole. The flagellar motility is regarded as an important trait in C. jejuni colonization of the intestinal tract; however, knowledge of the regulation of this important colonization factor is rudimentary. We demonstrate by phosphorylation assays that the sensor FlgS and the response regulator FlgR form a two-component system that sits at the top of the Campylobacter flagellum hierarchy. Phosphorylated FlgR is needed to activate RpoN-dependent genes, whose products form the hook-basal body filament complex. By real-time reverse transcriptase-PCR we identified that FlgS, FlgR, RpoN, and FliA belong to the early flagellar genes and are regulated by σ70. FliD and the putative anti-σ-factor FlgM are regulated by σ54- and σ28-dependent promoters. Activation of the fla regulon is growth phase-dependent; a 100-fold rpoN mRNA reduction is seen in the early stationary phase compared with the early logarithmic phase. Whereas flaB transcription decreases, flaA transcription increases in early stationary phase. Our data show that the C. jejuni flagellar hierarchy largely differs from that of other bacteria. Phenotypical analysis revealed that unflagellated C. jejuni mutants grow three times faster in broth medium compared with wild-type bacteria. In vivo the C. jejuni flagella are needed to pass the gastrointestinal tract of chickens, but not to colonize the ceca of the chicken.",
"corpus_id": 1816214,
"title": "The FlgS/FlgR Two-component Signal Transduction System Regulates the fla Regulon in Campylobacter jejuni*"
} | {
"abstract": "The AES Electrophoresis Society—formerly the American Electrophoresis Society—has over four decades of history that began, in the 1970s, with meetings of biochemists, chemists, and chemical engineers to discuss new advances in the electrophoretic separation technologies of the day. Over repeated meetings, the idea to found a society was born, and the AES has gone from strength to strength. Since 2000, the annual meeting has been held in conjunction with the American Institute of Chemical Engineers' annual meeting, but attracts a broad spectrum of academics and industry from medicine and the life sciences through to electronic and mechanical engineers, bioengineers and physicists, as well as chemical engineers and biochemists. The 2015 meeting was held in the beautiful location of Salt Lake City, and as befits a town named after a large body of dissolved ions, the group \u201ccharged\u201d to the city to deliver their latest results. This Special Issue represents a cross-section of papers selected from those presenting. \n \nExamining the breadth of these papers, it is interesting to note that the most common sub-discipline to appear, forming a plurality if not a majority, is a subject which would almost certainly have received scant, if any, discussion in the original meetings in the 1970s. Dielectrophoresis (DEP) is a close relative of electrophoresis, but one where the action of electric field upon a dipole (permanent or, much more commonly, induced) allows the use of both AC and DC fields to manipulate cells and other suspended particles. Indeed, this year—2016—marks the 50th anniversary of the first paper to be published showing that not only could living and dead cells be manipulated, but that they could be separated based on differences in their response to an applied field of similar frequency. \n \nThe five papers on dielectrophoresis presented here cover a broad range, with many focussing on new approaches to electrode design and manufacture, through to increasing the throughput and selectivity of the separations. Examples include electrodes manufactured from platinum black threads1 to carbonized SU8,2 novel geometries such as the nanoslit,3 and new applications such as the detection of Babesiosis in blood cells.4 Finally, Mata-Gomez and colleagues5 report on a DEP-based device for capturing macromolecules. Interestingly, many of these papers use yeast as a model organism—just as in the 1966 paper. \n \nAdvances across the other sub-disciplines of Electrophoresis are all represented; Saucedo-Espinosa and Lapizco-Encinas advance the understanding of electro-osmotic flow in low ionic strength media;6 capillary electrophoresis is represented by Paracha and Hestekin who use the technique to increase beta amyloid aggregation;7 whilst the colleagues of Victor Ugaz describe two processes with implication for microfabrication, one additive and one subtractive; Shi and Ugaz8 describe electro-polymerization of hydrogels constructed using microelectrodes, whilst Huang et al.9 describe a highly novel enzyme-based fabrication process for micromachining of channels. \n \nThis year's AES Electrophoresis Society Annual Meeting also included several entries to the \u201cArt in Microfluidic Science\u201d competition that was jointly sponsored by the AES Electrophoresis Society and AIP's Biomicrofluidics journal. Among the winning video entries were Tayloria Adams, a postdoc at UC, Irvine, showing dielectrophoretic patterning of sickle cells (Best Video—https://www.youtube.com/watch?v=UFJGmHnHtU0), Renny E. Fernandez of Southern Methodist University showing dielectrophoretic profiles of cells created by mesoscopic threads coated with Pt black1 (Runner up Video—https://www.youtube.com/watch?v=XwoJg3YV0pU&feature=youtu.be) and Aashish Priye, a scientist at Sandia profiling advances in convective PCR (Honorable Mention—https://www.youtube.com/watch?v=-zWgg7yo1ak&feature=youtu.be). The best image entitled: \u201cMiscarried Separation in a Spider Channel Microdevice\u201d was presented by Mario Saucedo of Rochester Institute of Technology. The runner-up image award entitled: \u201cA Bacteria Flower,\u201d was presented by Avanish Mishra, a student at Purdue University. The image winning an honourable mention entitled: \u201cConductivity gradient-enhanced dielectrophoresis\u201d3 was presented by Ali Rohani, a graduate student at University of Virginia (see Supplementary Material, Ref. 13, for images). \n \nFinally, we are thrilled to present two significant reviews of their fields: one by Egatz-Gomez et al.10 on porous materials in microfluidics, and the other on microfluidic approaches to nucleic acid detection in liquid biopsy by Knob et al.11 And to coincide with the 50th anniversary of the publication of the first paper on dielectrophoretic separation, there is a Perspective article on the path that dielectrophoretic cell separation has taken in the last 50 years,12 including the recollections of the student—now a retired professor—who performed the first experiments that started a discipline. In his comments he reflected that \u201cI had no idea the paper by Pohl and I would have much of an impact\u201d; who knows where the papers here, or the future of Electrophoresis, may take us?",
"corpus_id": 30207210,
"score": -1,
"title": "Preface to Special Topic: Selected Papers from the 2015 Annual Meeting of the AES Electrophoresis Society in Salt Lake City, Utah."
} |
{
"abstract": "This review summarizes the effects of more than 20 metals that, research has indicated, may influence male reproductive health. Though males lack an apparent, easily measurable reproductive cycle, progress has been made in evaluating tests to identify chemical hazards and estimate reproductive health risks. Some agents discussed in this review are well known to have potential toxic effects on the male reproductive system, whereas some are not so well established in toxicology. This review attempts to cover most of the known toxicants and their effects on male fertility. The literature suggests a need for further research in those chemicals that are reactive and capable of covalent interactions in biological systems, as well as those defined as mutagens and/or carcinogens, to cause aneuploidy or other chromosomal aberrations, affect sperm motility in vitro, share hormonal activity or affect hormone action, and those that act directly or indirectly to affect the hypothalamo-pituitary-gonadal axis.",
"corpus_id": 26266320,
"title": "Environmental and occupational exposure of metals and their role in male reproductive functions"
} | {
"abstract": "Calcium is essential for the functioning of different systems, including male reproduction. However, it has also been reported as a chemo-castrative agent. This study was undertaken to elucidate the effect of excessive dietary calcium on the male reproductive system in animals, with its possible mode of action. Adult male healthy rats were fed CaCl2 at different doses (0.5, 1.0 and 1.5 g%) in the diet for 13 and 26 days to investigate reproductive parameters as well as markers of oxidative stress. Significant alteration was found (P < 0.05) in testicular and accessory sex organ weights, epididymal sperm count, testicular steroidogenic enzyme (Δ5 3β-HSD and 17β-HSD) activities, serum testosterone, LH, FSH, LPO, activities of antioxidant enzymes, and testicular histoarchitecture, along with adrenal Δ5 3β-HSD activity and corticosterone level, in a dose- and time-dependent manner. Overall, the observations suggest that excessive dietary calcium enhances the generation of free radicals, resulting in structural and functional disruption of male reproduction.",
"corpus_id": 2716748,
"title": "Excessive dietary calcium in the disruption of structural and functional status of adult male reproductive system in rat with possible mechanism"
} | {
"abstract": "Mineral utilization was studied by metabolic balance techniques in 10 healthy male volunteers fed diets containing 65 and 94 g protein. Both diets contained approximately 650 mg calcium, 1 mg copper, 16 mg iron, 250 mg magnesium 1000 mg phosphorus, and 7 mg zinc. The diet consisted of conventional foods; the additional 29 g protein was egg white protein mixed into a beverage and fed twice per day. Plasma mineral levels were not affected by the increase in dietary protein. When the diet provided 94 g of protein, urinary calcium and zinc were slightly, but significantly, increased by an average of 35 mg (p less than 0.05) and 0.15 mg (p less than 0.001), respectively. Apparent mineral absorption and balance were unchanged by this modest increase in dietary protein.",
"corpus_id": 4462319,
"score": -1,
"title": "Effect of a moderate increase in dietary protein on the retention and excretion of Ca, Cu, Fe, Mg, P, and Zn by adult males."
} |
{
"abstract": "We propose perceptually guided photo retargeting, which shrinks a photo by simulating a human’s process of sequentially perceiving visually/semantically important regions in a photo. In particular, we first project the local features (graphlets in this paper) onto a semantic space, wherein visual cues such as global spatial layout and rough geometric context are exploited. Thereafter, a sparsity-constrained learning algorithm is derived to select semantically representative graphlets of a photo, and the selecting process can be interpreted by a path which simulates how a human actively perceives semantics in a photo. Furthermore, we learn the prior distribution of such active graphlet paths (AGPs) from training photos that are marked as esthetically pleasing by multiple users. The learned priors enforce the corresponding AGP of a retargeted photo to be maximally similar to those from the training photos. On top of the retargeting model, we further design an online learning scheme to incrementally update the model with new photos that are esthetically pleasing. The online update module makes the algorithm less dependent on the number and contents of the initial training data. Experimental results show that: 1) the proposed AGP is over 90% consistent with human gaze shifting path, as verified by the eye-tracking data, and 2) the retargeting algorithm outperforms its competitors significantly, as AGP is more indicative of photo esthetics than conventional saliency maps.",
"corpus_id": 4370254,
"title": "Perceptually Guided Photo Retargeting"
} | {
"abstract": "Image retargeting aims to adapt images to displays of small sizes and different aspect ratios. Effective retargeting requires emphasizing the important content while retaining surrounding context with minimal visual distortion. In this paper, we present such an effective image retargeting method using saliency-based mesh parametrization. Our method first constructs a mesh image representation that is consistent with the underlying image structures. Such a mesh representation enables easy preservation of image structures during retargeting since it captures underlying image structures. Based on this mesh representation, we formulate the problem of retargeting an image to a desired size as a constrained image mesh parametrization problem that aims at finding a homomorphous target mesh with desired size. Specifically, to emphasize salient objects and minimize visual distortion, we associate image saliency into the image mesh and regard image structure as constraints for mesh parametrization. Through a stretch-based mesh parametrization process we obtain the homomorphous target mesh, which is then used to render the target image by texture mapping. The effectiveness of our algorithm is demonstrated by experiments.",
"corpus_id": 8295244,
"title": "Image Retargeting Using Mesh Parametrization"
} | {
"abstract": null,
"corpus_id": 7289997,
"score": -1,
"title": "Image Retargeting Quality Assessment"
} |
{
"abstract": "Abstract Synthetic cannabinoids are one of the most significant groups within the category new psychoactive substances (NPS) and in recent years new compounds have continuously been introduced to the market of recreational drugs. A sensitive and quantitative screening method in urine with metabolites of frequently seized compounds in Norway (AB‐FUBINACA, AB‐PINACA, AB‐CHMINACA, AM‐2201, AKB48, 5F‐AKB48, BB‐22, JWH‐018, JWH‐073, JWH‐081, JWH‐122, JWH‐203, JWH‐250, PB‐22, 5F‐PB‐22, RCS‐4, THJ‐2201, and UR‐144) using ultra‐high pressure liquid chromatography–quadrupole time of flight–mass spectrometry (UHPLC–QTOF–MS) has been developed. The samples were treated with ß‐glucuronidase prior to extraction and solid‐phase extraction was used. Liquid handling was automated using a robot. Chromatographic separation was achieved using a C18‐column and a gradient of water and acetonitrile, both with 0.1% formic acid. Each sample was initially screened for identification and quantification followed by a second injection for confirmation. The concentrations by which the compounds could be confirmed varied between 0.1 and 12 ng/mL. Overall the validation showed that the method fulfilled the set criteria and requirements for matrix effect, extraction recovery, linearity, precision, accuracy, specificity, and stability. One thousand urine samples from subjects in drug withdrawal programs were analyzed using the presented method. The metabolite AB‐FUBINACA M3, hydroxylated metabolite of 5F‐AKB48, hydroxylated metabolite of AKB48, AKB48 N‐pentanoic acid, 5F‐PB‐22 3‐carboxyindole, BB‐22 3‐carboxyindole, JWH‐018 N‐(5‐hydroxypentyl), JWH‐018 N‐pentanoic acid, and JWH‐073 N‐butanoic acid were quantified and confirmed in 2.3% of the samples. The method was proven to be sensitive, selective and robust for routine use for the investigated metabolites.",
"corpus_id": 51617913,
"title": "Screening, quantification, and confirmation of synthetic cannabinoid metabolites in urine by UHPLC–QTOF–MS"
} | {
"abstract": "Over the past years, use of synthetic cannabinoids has become increasingly popular. To draw the right conclusions regarding new intake of these substances in situations of repeated urinary drug testing, knowledge of their elimination rate in urine is essential. We report data from consecutive urine specimens from five subjects after ingestion of synthetic cannabinoids. Urinary concentrations of the carboxylic acid metabolites JWH-018-COOH and JWH-073-COOH were measured by ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS-MS) with a limit of quantification of 0.1 ng/mL. In these subjects, specimens remained positive over a period of 20-43 (mean 27) days for JWH-018-COOH and over a period of 11-25 (mean 19) days for JWH-073-COOH. Detection times were shorter for subjects that appeared to have ingested only one, or a few, doses prior to urine collection in the study. Creatinine-normalized concentrations (CN-concentrations) slowly declined throughout the follow-up period in all subjects, suggesting that no new intake had taken place during this period. Mean elimination half-lives in urine were 14.0 (range 4.4-23.8) days for CN-JWH-018-COOH and 9.3 (range 3.6-16.8) days for CN-JWH-073-COOH. These data show that urine specimens could be positive for JWH-018-COOH for more than 6 weeks and JWH-073-COOH for more than 3 weeks after ingestion. However, such long detection periods require a low limit of quantification.",
"corpus_id": 1393276,
"title": "Detection Times of Carboxylic Acid Metabolites of the Synthetic Cannabinoids JWH-018 and JWH-073 in Human Urine."
} | {
"abstract": "Marijuana is the most widely used drug of abuse all over the world. The major active constituent of the drug is Δ⁹- tetrahydrocannabinol (Δ⁹-THC). Δ⁹-THC exerts its psychological activities by interacting with the cannabinoid receptors (CB₁ and CB₂) in the brain. JWH-018, HU-210, and CP-47497, with CB₁ agonist activity (similar to Δ⁹-THC), have been used by the drug culture to spike smokable herbal products to attain psychological effects similar to those obtained by smoking marijuana. The products spiked with these CB₁ agonists are commonly referred to as \"Spice\" or \"K2\". The most common compound used in these products is JWH-018 and related compounds (JWH-073 and JWH-250). Little work has been done on the detection of these synthetic cannabimimetic compounds in biological specimens. This report investigated the metabolism of JWH-018 by human liver microsomes, identification of the metabolites of JWH-018 in urine specimen of an individual who admitted use of the drug, and reports on the quantitation of three of its urinary metabolites, namely the 6-OH-, the N-alkyl OH (terminal hydroxyl)-, and the N-alkyl terminal carboxy metabolites using liquid chromatography-tandem mass spectrometry. The concentrations of these metabolites are determined in several forensic urine specimens.",
"corpus_id": 22037665,
"score": -1,
"title": "Liquid chromatography-tandem mass spectrometry analysis of urine specimens for K2 (JWH-018) metabolites."
} |
{
"abstract": "Online prediction of key parameters (e.g., process indices) is essential in many industrial processes because online measurement is not available. Data-based modeling is widely used for parameter prediction. However, model mismatch usually occurs owing to the variation of the feed properties, which changes the process dynamics. The current neural network online prediction models usually use fixed activation functions, and it is not easy to perform dynamic modification. Therefore, a few methods are proposed here. Firstly, an extreme learning machine (ELM)-based single-layer feedforward neural network with activation-function learning (AFL–SLFN) is proposed. The activation functions of the ELM are adjusted to enhance the ELM network structure and accuracy. Then, a hybrid model with adaptive weights is established by using the AFL–SLFN as a sub-model, which improves the prediction accuracy. To track the process dynamics and maintain the generalization ability of the model, a multiscale model-modification strategy is proposed. Here, small-, medium-, and large-scale modification is performed in accordance with the degree and the causes of the decrease in model accuracy. In the small-scale modification, an improved just-in-time local modeling method is used to update the parameters of the hybrid model. In the medium-scale modification, an improved elementary effect (EE)-based Morris pruning method is proposed for optimizing the sub-model structure. Remodeling is adopted in the large-scale modification. Finally, a simulation using industrial process data for tailings grade prediction in a flotation process reveals that the proposed method has better performance than some state-of-the-art methods. The proposed method can achieve rapid online training and allows optimization of the model parameters and structure for improving the model accuracy.",
"corpus_id": 210966715,
"title": "ELM-Based AFL–SLFN Modeling and Multiscale Model-Modification Strategy for Online Prediction"
} | {
"abstract": "Abstract Accurate and reliable forecasting models for electricity demand (G) are critical in engineering applications. They assist renewable and conventional energy engineers, electricity providers, end-users, and government entities in addressing energy sustainability challenges for the National Electricity Market (NEM) in Australia, including the expansion of distribution networks, energy pricing, and policy development. In this study, data-driven techniques for forecasting short-term (24-h) G-data are adopted using 0.5 h, 1.0 h, and 24 h forecasting horizons. These techniques are based on the Multivariate Adaptive Regression Spline (MARS), Support Vector Regression (SVR), and Autoregressive Integrated Moving Average (ARIMA) models. This study is focused in Queensland, Australia’s second largest state, where end-user demand for energy continues to increase. To determine the MARS and SVR model inputs, the partial autocorrelation function is applied to historical (area aggregated) G data in the training period to discriminate the significant (lagged) inputs. On the other hand, single input G data is used to develop the univariate ARIMA model. The predictors are based on statistically significant lagged inputs and partitioned into training (80%) and testing (20%) subsets to construct the forecasting models. The accuracy of the G forecasts, with respect to the measured G data, is assessed using statistical metrics such as the Pearson Product-Moment Correlation coefficient (r), Root Mean Square Error (RMSE), and Mean Absolute Error (MAE). Normalized model assessment metrics based on RMSE and MAE relative to observed means ( RMSE G ¯ and MAE G ¯ ), Willmott’s Index (WI), Legates and McCabe Index ( E LM ) , and Nash–Sutcliffe coefficients ( E NS ) are also utilised to assess the models’ preciseness. 
For the 0.5 h and 1.0 h short-term forecasting horizons, the MARS model outperforms the SVR and ARIMA models displaying the largest WI (0.993 and 0.990) and lowest MAE (45.363 and 86.502 MW), respectively. In contrast, the SVR model is superior to the MARS and ARIMA models for the daily (24 h) forecasting horizon demonstrating a greater WI (0.890) and MAE (162.363 MW). Therefore, the MARS and SVR models can be considered more suitable for short-term G forecasting in Queensland, Australia, when compared to the ARIMA model. Accordingly, they are useful scientific tools for further exploration of real-time electricity demand data forecasting.",
"corpus_id": 3845501,
"title": "Short-term electricity demand forecasting with MARS, SVR and ARIMA models using aggregated demand data in Queensland, Australia"
} | {
"abstract": "Male Anopheles mosquitoes erect their antennal hairs prior to mating. The erectile mechanism resides in a unique annulus at the base of each hair whorl. It appears that the insect regulates the degree of hydration of this annulus. When the annulus is made to swell the attached hairs are pushed to their erect position.",
"corpus_id": 9660686,
"score": -1,
"title": "Antennal hair erection in male mosquitoes: a new mechanical effector in insects."
} |
{
"abstract": ".................................................................................................................... v Table of",
"corpus_id": 134192676,
"title": "Quantitative modelling for assessing system trade-offs in environmental flow management"
} | {
"abstract": "Abstract. The objective is to present a reservoir management system which is capable of determining optimal operating rules both for flood event based and normal operation while at the same time attempting to achieve ecologically oriented operation. In order to maintain the variability of the natural flow regime, a new dynamic operating policy is introduced for normal operation. Flood event based operation is managed by a two-part step function. Both operating policies are optimized using a state-of-the-art multi-objective evolution strategy algorithm.",
"corpus_id": 8006869,
"title": "Optimum multi-objective reservoir operation with emphasis on flood control and ecology"
} | {
"abstract": "Modellbasierte Managementsysteme fur Flussgebiete mit Mehrzweckspeichern sind heutzutage unverzichtbar fur eine optimale Bewirtschaftung. Es wird ein Managementsystem vorgestellt, das mit Hilfe evolutionarer Algorithmen sowohl fur den Normalbetrieb als auch fur den ereignisbezogenen Betrieb eines Mehrzweckspeichers mehrere Bewirtschaftungsziele gleichzeitig berucksichtigt. Das Ergebnis ist eine Menge von so genannten Pareto-optimalen Losungen, die die effektivsten Kompromisse darstellen und als transparente Grundlage fur Entscheidungstrager dienen konnen. Zielkonflikte und -synergien konnen erkannt und analysiert werden. Um die naturliche Abflussdynamik im Normalbetrieb zu berucksichtigen und somit die negativen okologischen Auswirkungen im Unterlauf eines Mehrzweckspeichers zu minimieren, wird ein dynamisches Betriebsregelkonzept verwendet. Das hier vorgestellte Managementsystem eignet sich ebenfalls zum Einsatz einer adaptiven Steuerung, die auf einer Nachfuhrung aktualisierter Vorhersagen basiert.",
"corpus_id": 179954126,
"score": -1,
"title": "Optimierung von Mehrzweckspeichern im Hinblick auf Hochwasserrisiko und Ökologie"
} |
{
"abstract": "The occurrence of asthma or wheezing, and other allergic diseases, in 3808 pairs of twins aged 18 to 88 years was recorded by mailed questionnaire in 1980 (a pairwise response rate of 64%; individual, 69%). This sample (Cohort 1) was resurveyed in 1988 (78% pairwise followup), and a further 2159 pairs aged 18 to 25 years (Cohort 2) responded usefully to a similar item on asthma on another instrument in 1989 sent to 4078 pairs (pairwise 53%). The crude cumulative incidence of wheezing was 13.2% in 1980, 18.9% in 1988, and 21.8% in 1989. Genetic analyses performed using this screening data suggested a strong genetic component to wheezing, hayfever and allergy, and sizeable genetic correlations between different atopic conditions. Genetic influences specific to particular traits such as wheezing were also detectable. A secular increase in incidence of wheeze experienced by consecutive birth cohorts seemed to be due to nonfamilial environmental factors. A more detailed respiratory symptoms questionnaire was mailed to 1989 pairs (3978 individuals) where one or both of the twins reported ever wheezing. This was returned by 3193 individuals (80%). A detailed history of wheeze frequency, recency, aggravating factors and treatment was recorded for 73% of Cohort 1 wheezing probands, and 69% of those from Cohort 2. Of these individuals, one third reported wheeze in the previous month, and a further third within the previous two years. The median age at onset of wheezing was 12 years. This questionnaire also invited the twins to take part in clinical testing around Australia. A total of 863 individuals including 419 complete twin pairs underwent histamine inhalation challenge, allergic skin prick testing, and venesection in seven cities. Bronchial hyperresponsiveness was present in 67% of the twins reporting wheeze in the previous twelve months, in 41% of those reporting more distant symptoms, and 24% of those who had never wheezed. 
Total serum IgE and bronchial responsiveness were highest in those who had wheezed most recently, and whose skin tests demonstrated allergy to house dust, cockroach, and rye grass. The heritability of total serum IgE and of bronchial responsiveness were both approximately 60%. Monozygotic cotwin-control analyses suggested that house dust mite sensitisation was the single strongest environmentally controlled risk factor for developing wheezing. In dizygotic (DZ) cotwin-control analysis, sensitisation to grasses was also an important predictor, suggesting pollinosis is genetically correlated with wheezing, rather than causative. A sib-pair linkage analysis using all DZ twins did not support linkage to the high-affinity immunoglobulin E receptor β-subunit gene on chromosome 11q of atopy or bronchial hyper-responsiveness. Proxy reports by the twins on first degree relatives were supplemented using the responses of 1110 mothers of the twins to a telephone interview, who gave reports on themselves, spouse and offspring. This allowed validation of proxy reports and selection of the most informative diagnostic algorithm to apply to the combined reports. The diagnoses in this fashion were incorporated into recurrence risk calculations and segregation analysis. The classical and limited complex segregation analyses were suggestive of the action of a major gene on asthma. The recurrence risks of doctor-diagnosed asthma to a relative of an asthmatic proband born 1940-59 were: mother, 0.17; father, 0.14; MZ female cotwin, 0.49; MZ male cotwin, 0.43; DZ female cotwin, 0.29; DZ male cotwin, 0.24; sibling, 0.12",
"corpus_id": 256684122,
"title": "Asthma and allergic diseases in Australian twins and their families"
} | {
"abstract": "This paper reviews dependence models for bivariate survival data, classifying them into the four groups: the shock model, the Freund model, the Clayton model, and the mixture model. The paper then concentrates on the mixture model, discussing the testing problem for the equality of marginal distributions under the Weibull type baseline hazard assumption. The new test proposed recently by Fujii is introduced, and its characteristic is studied with respect to the test proposed by Nayak and the sign test by simulation study.",
"corpus_id": 5719441,
"title": "Models for association in bivariate survival data."
} | {
"abstract": "Recently, S. Bravyi and R. Konig [Phys. Rev. Lett. 110, 170503 (2013)] have shown that there is a trade-off between fault-tolerantly implementable logical gates and geometric locality of stabilizer codes. They consider locality-preserving operations which are implemented by a constant-depth geometrically local circuit and are thus fault tolerant by construction. In particular, they show that, for local stabilizer codes in D spatial dimensions, locality-preserving gates are restricted to a set of unitary gates known as the Dth level of the Clifford hierarchy. In this paper, we explore this idea further by providing several extensions and applications of their characterization to qubit stabilizer and subsystem codes. First, we present a no-go theorem for self-correcting quantum memory. Namely, we prove that a three-dimensional stabilizer Hamiltonian with a locality-preserving implementation of a non-Clifford gate cannot have a macroscopic energy barrier. This result implies that non-Clifford gates do not admit such implementations in Haah's cubic code and Michnicki's welded code. Second, we prove that the code distance of a D-dimensional local stabilizer code with a nontrivial locality-preserving mth-level Clifford logical gate is upper bounded by O(L^(D+1−m)). For codes with non-Clifford gates (m>2), this improves the previous best bound by S. Bravyi and B. Terhal [New. J. Phys. 11, 043029 (2009)]. Topological color codes, introduced by H. Bombin and M. A. Martin-Delgado [Phys. Rev. Lett. 97, 180501 (2006); Phys. Rev. Lett. 98, 160502 (2007); Phys. Rev. B 75, 075103 (2007)], saturate the bound for m=D. Third, we prove that the qubit erasure threshold for codes with a nontrivial transversal mth-level Clifford logical gate is upper bounded by 1/m. This implies that no family of fault-tolerant codes with transversal gates in increasing level of the Clifford hierarchy may exist. 
This result applies to arbitrary stabilizer and subsystem codes and is not restricted to geometrically local codes. Fourth, we extend the result of Bravyi and Konig to subsystem codes. Unlike stabilizer codes, the so-called union lemma does not apply to subsystem codes. This problem is avoided by assuming the presence of an error threshold in a subsystem code, and a conclusion analogous to that of Bravyi and Konig is recovered.",
"corpus_id": 15105507,
"score": -1,
"title": "Fault-tolerant logical gates in quantum error-correcting codes"
} |
{
"abstract": "The aim of the present work was to evaluate with different statistical criteria the suitability of nine equations for describing and optimizing the simultaneous effect of temperature and pH on glucanex activity using two characteristic polysaccharides (curdlan and laminarin) as substrates. The most satisfactory solutions were found with an empirical equation constituted with parameters of practical interest (Rosso model), and a hybrid model between the Arrhenius equation and the mathematical expression generated by the protonation‐hydroxylation mechanism (Tijskens model). The joint optimal values of pH and temperature calculated with the Rosso model were obtained at 4.64 and 50°C with curdlan and 4.64 and 48°C using laminarin as substrate. © 2011 American Institute of Chemical Engineers Biotechnol. Prog., 2012",
"corpus_id": 11584407,
"title": "Comparison of several mathematical models for describing the joint effect of temperature and ph on glucanex activity"
} | {
"abstract": "In this study, the behavior of enzyme activity as a function of pH and temperature is modeled on the basis of fundamental considerations. A formulation is developed that includes the activation of enzymes with increasing temperatures and the deactivation of enzymes at higher temperature, together with the effect of protonation and hydroxylation on activity at various constant pH levels. The model is calibrated and validated against an extensive set of experimental data on phytases from seven different origins. The percentage variance accounted for (R(2)(adj)), obtained by statistical nonlinear regression analysis on all data sets, was shown to range from 97.6% to 99.5%. The equilibrium constant of protonation and hydroxylation proved to be independent of temperature.",
"corpus_id": 2026086,
"title": "Modeling the effect of temperature and pH on activity of enzymes: the case of phytases."
} | {
"abstract": "The dried and wet chitosan-clay composite beads were prepared by mixing equal weights of cuttlebone chitosan and activated clay and then spraying drop-wise through a syringe, with and without freeze-drying, respectively. These beads were then immersed in 5 g/L of glutaraldehyde solution at a dosage of 0.5 g/L and were cross-linked, which were finally used as supports for beta-glucosidase immobilization. The properties of the enzyme immobilized on wet- and dried-composite beads were compared. Kinetic modeling of thermal inactivation of free and immobilized enzymes was also investigated. For a given enzymatic reaction, the rate constant related to the decomposition of the enzyme-substrate complex to final product and the uncomplexed enzyme using dried-composite immobilized enzyme was larger than those using both free and wet-composite immobilized enzymes.",
"corpus_id": 8014172,
"score": -1,
"title": "Thermal inactivation and reactivity of beta-glucosidase immobilized on chitosan-clay composite."
} |
{
"abstract": "ABSTRACT Structural priming in comprehension seems to be more variable than in production. Sometimes it occurs without lexical overlap, sometimes it does not. This raises questions about the use of abstract syntactic structure and how it varies across tasks. We use a visual-world eye-tracking judgment task and observe two kinds of priming effects. First, participants were more likely to switch to looking at the target referent immediately after the word when the syntactic structure of the target matched that of the prime. Second, participants also looked more to referents that could take on the thematic role that was in sentence-final position in the prime sentence, and thus in discourse focus. Critically, neither effect depended upon lexical overlap. Our results suggest that structural priming in comprehension manifests itself differently depending on situational demands, reflecting the activation of different levels of representation under different pressures.",
"corpus_id": 84839332,
"title": "The use of syntax and information structure during language comprehension: Evidence from structural priming"
} | {
"abstract": "Abstract We report two sets of experiments that demonstrate syntactic priming from comprehension to comprehension in young children. Children acted out double-object and prepositional-object dative sentences while we monitored their eye movements. We measured whether hearing one type of dative as a prime influenced children’s online interpretation of subsequent dative utterances. In target sentences, the onset of the direct object noun was consistent with both an animate recipient and an inanimate theme, creating a temporary ambiguity in the argument structure of the verb (double-object e.g., Show the hor se the book ; prepositional-object e.g., Show the hor n to the dog ). The first set of experiments demonstrated priming in four-year-old children (M = 4.1), both when the same verb was used in prime and target sentences (Experiment 1a) and when different verbs were used (Experiment 1b). The second set found parallel priming in three-year-old children (M = 3.1). These results indicate that young children employ abstract structural representations during online sentence comprehension.",
"corpus_id": 1047000,
"title": "Syntactic priming during language comprehension in three- and four-year-old children"
} | {
"abstract": "Sex differences often call sexual selection to mind; however, a new damselfly study cautions on being too hasty, and implicates viability selection in the evolution of male and female colouration.",
"corpus_id": 177782,
"score": -1,
"title": "Sexual Dimorphism: Why the Sexes Are (and Are Not) Different"
} |
{
"abstract": "To enable highly automated driving and the associated comfort services for the driver, vehicles require a reliable and constant cellular data connection. However, due to their mobility vehicles experience significant fluctuations in their connection quality in terms of bandwidth and availability. To maintain constantly high quality of service, these fluctuations need to be anticipated and predicted before they occur. To this end, different techniques such as connectivity maps and online throughput estimations exist. In this paper, we investigate the possibilities of a large-scale future deployment of such techniques by relying solely on lowcost hardware for network measurements. Therefore, we conducted a measurement campaign over three weeks in which more than 74,000 throughput estimates with correlated network quality parameters were obtained. Based on this data set—which we make publicly available to the community—we provide insights in the challenging task of network quality prediction for vehicular scenarios. More specifically, we analyse the potential of machine learning approaches for bandwidth prediction and assess their underlying assumptions.",
"corpus_id": 14045304,
"title": "Cellular Bandwidth Prediction for Highly Automated Driving - Evaluation of Machine Learning Approaches based on Real-World Data"
} | {
"abstract": "With the advent of high-speed cellular access and the overwhelming popularity of smartphones, a large percent of today's Internet content is being delivered via cellular links. Due to the nature of long-range wireless signal propagation, the capacity of the last hop cellular link can vary by orders of magnitude within a short period of time (e.g., a few seconds). Unfortunately, TCP does not perform well in such fast-changing environments, potentially leading to poor spectrum utilization and high end-to-end packet delay. In this paper we revisit seminal work in cross-layer optimization in the context of 4G cellular networks. Specifically, we leverage the rich physical layer information exchanged between base stations (NodeB) and mobile phones (UE) to predict the capacity of the underlying cellular link, and propose nCQIC, a cross-layer congestion control design. Experiments on real cellular networks confirm that our capacity estimation method is both accurate and precise. A CQIC sender uses these capacity estimates to adjust its packet sending behavior. Our preliminary evaluation reveals that CQIC improves throughput over TCP by 1.08-2.89x for small and medium flows. For large flows, CQIC attains throughput comparable to TCP while reducing the average RTT by 2.38-2.65x.",
"corpus_id": 1058430,
"title": "CQIC: Revisiting Cross-Layer Congestion Control for Cellular Networks"
} | {
"abstract": "The volume of best effort traffic is exploded by rapid adoption of peer-to-peer and content applications. Smart phone consumers are spending more times on applications which include video and music streaming, playing games, video chatting, social media like uploading photos to Facebook, Twitter etc. Many such applications are always running in background and sometimes come in foreground based on user preferences. In this work we propose an approach to improve the user experience by giving more bandwidth to preferred applications. We describe a preliminary model explaining our technique in detail. Further, we validate our proposal using real time test setup with Wireshark traffic analyzer, and results are detailed with respect to (1) Percentage of network share (2) Jitter experience and (3) Time taken for the algorithm to adapt. Proposed algorithm has been tested in two different platforms such as Android and Tizen. Our preliminary observations show that our proposed algorithm allocates more bandwidth to high priority applications while maintaining the low priority APPs are intact with above minimum bandwidth. Our approach gives users better jitter free experience for video streaming (high priority) applications in both Android (KitKat) and Tizen (Z1) platforms.",
"corpus_id": 3438205,
"score": -1,
"title": "Minimum complexity APP prioritization by bandwidth apportioning in smart phones"
} |
{
"abstract": "A natural way of communicating an audio concept is to imitate it with one's voice. This creates an approximation of the imagined sound (e.g. a particular owl's hoot), much like how a visual sketch approximates a visual concept (e.g. a drawing of the owl). If a machine could understand vocal imitations, users could communicate with software in this natural way, enabling new interactions (e.g. programming a music synthesizer by imitating the desired sound with one's voice). In this work, we collect thousands of crowd-sourced vocal imitations of a large set of diverse sounds, along with data on the crowd's ability to correctly label these vocal imitations. The resulting data set will help the research community understand which audio concepts can be effectively communicated with this approach. We have released the data set so the community can study the related issues and build systems that leverage vocal imitation as an interaction modality.",
"corpus_id": 240686,
"title": "VocalSketch: Vocally Imitating Audio Concepts"
} | {
"abstract": "Describing unidentified sounds with words is a frustrating task and vocally imitating them is often a convenient way to address the issue. This article reports on a study that compared the effectiveness of vocal imitations and verbalizations to communicate different referent sounds. The stimuli included mechanical and synthesized sounds and were selected on the basis of participants' confidence in identifying the cause of the sounds, ranging from easy-to-identify to unidentifiable sounds. The study used a selection of vocal imitations and verbalizations deemed adequate descriptions of the referent sounds. These descriptions were used in a nine-alternative forced-choice experiment: Participants listened to a description and picked one sound from a list of nine possible referent sounds. Results showed that recognition based on verbalizations was maximally effective when the referent sounds were identifiable. Recognition accuracy with verbalizations dropped when identifiability of the sounds decreased. Conversely, recognition accuracy with vocal imitations did not depend on the identifiability of the referent sounds and was as high as with the best verbalizations. This shows that vocal imitations are an effective means of representing and communicating sounds and suggests that they could be used in a number of applications.",
"corpus_id": 166068,
"title": "On the effectiveness of vocal imitations and verbal descriptions of sounds."
} | {
"abstract": "The influence of listener's expertise and sound identification on the categorization of environmental sounds is reported in three studies. In Study 1, the causal uncertainty of 96 sounds was measured by counting the different causes described by 29 participants. In Study 2, 15 experts and 15 nonexperts classified a selection of 60 sounds and indicated the similarities they used. In Study 3, 38 participants indicated their confidence in identifying the sounds. Participants reported using either acoustical similarities or similarities of the causes of the sounds. Experts used acoustical similarity more often than nonexperts, who used the similarity of the cause of the sounds. Sounds with a low causal uncertainty were more often grouped together because of the similarities of the cause, whereas sounds with a high causal uncertainty were grouped together more often because of the acoustical similarities. The same conclusions were reached for identification confidence. This measure allowed the sound classification to be predicted, and is a straightforward method to determine the appropriate description of a sound.",
"corpus_id": 18891785,
"score": -1,
"title": "Listener expertise and sound identification influence the categorization of environmental sounds."
} |
{
"abstract": "In the present study we examined the psychometric properties of the Serbian translation of the Empathy Quotient scale (S-EQ). The translated version of the EQ was applied to a sample of 694 high-school students. A sub-sample of 375 high-school students also completed the Interpersonal Reactivity Index (IRI), another widely used empathy measure. The following statistical analyses were applied: internal consistency analysis, exploratory (EFA) and confirmatory (CFA) factor analyses, and factor congruence analysis. Correlation with the IRI and gender differences were calculated to demonstrate the validity of the instrument. Results show that the Serbian 40-item version of the EQ has lower reliability (Cronbach's alpha = .782) than the original. The originally proposed one-factor structure of the instrument was not confirmed. The short version with 28 items showed better reliability (alpha = .807). The three-factor solution (cognitive empathy, emotional reactivity, and social skills) showed good cross-sample stability (Tucker congruence coefficient over .8), but the results of the CFA confirmed the solution proposed in the reviewed literature only partially. The mean scores are similar to those obtained in other studies, and, as expected, women have significantly higher scores than men. Correlations with all subscales of the IRI are statistically significant for the first two subscales of the EQ, but not for 'social skills.' We concluded that the Serbian version of the 'Empathy Quotient' is a useful research tool which can contribute to cross-cultural studies of empathy, although its psychometric characteristics are not as good as those obtained in the original study. We also suggest that the 28-item version should be preferred to the original 40-item version.",
"corpus_id": 55732114,
"title": "Psychometric properties of the Serbian version of the Empathy Quotient (S-EQ)"
} | {
"abstract": "Introduction. The Empathy Quotient (EQ) is a self-report questionnaire that was developed to measure the cognitive, affective, and behavioural aspects of empathy. We evaluated its cross-cultural validity in an Italian sample. Methods. A sample of 18- to 30-year-old undergraduate students of both sexes (N=256, males=118) were invited to fill in the Italian version of the EQ, as well as other measures of emotional competence and psychological distress. Results. The EQ had an excellent reliability (Cronbach's alpha=.79; test–retest at 1 month: Pearson's r=.85), and was normally distributed. Females scored higher than males, and more males (n=14, 11.9%) than females (n=4, 2.9%) scored lower than 30, the cutoff score that best differentiates autism spectrum conditions from controls. EQ was negatively related to the Toronto Alexithymia Scale (TAS) and positively related to the Marlowe-Crowne Social Desirability Scale (SDS). Principal component analysis retrieved the three-factor structure of the EQ. Lower emotional reactivity correlated with higher scores in measures of risk in both the schizophrenia-like (Peters et al. Delusions Inventory) and the bipolar (Hypomanic Personality Scale) spectra. Conclusions. The Italian version of the EQ has good validity, with an acceptable replication of the original three-factor solution, yielding three subscales with high internal and test–retest reliability.",
"corpus_id": 1157699,
"title": "The Empathy Quotient: A cross-cultural comparison of the Italian version"
} | {
"abstract": "The present investigation evaluates the effectiveness of students’ evaluations of teaching effectiveness (SETs) as a means for enhancing university teaching. We emphasize the multidimensionality of SETs, an Australian version of the Students’ Evaluations of Educational Quality (Marsh, 1987) instrument (ASEEQ), and Wilson’s (1986) feedback/consultation intervention. All teachers (N = 92) completed self-evaluation surveys and were evaluated by students at the middle of Semester 1 and at the ends of Semesters 1 and 2. Three randomly assigned groups received the feedback/consultation intervention at midterm of Semester 1 (MT), at the end of Semester 1 (ET), or received no intervention (control). Each MT and ET teacher ‘‘targeted” specific ASEEQ dimensions that were the focus of his or her individually structured intervention. The ratings for all groups improved over time, but only ratings for the ET group improved significantly more than those in the control group. For both ET and MT groups, targeted dimensions improved more than nontargeted dimensions. The results suggest that SET feedback coupled with consultation is an effective means to improve teaching effectiveness, and the study provides one model for feedback/consultation.",
"corpus_id": 54836817,
"score": -1,
"title": "The Use of Students’ Evaluations and an Individually Structured Intervention to Enhance University Teaching Effectiveness"
} |
{
"abstract": "From time to time, eminent physicists have asked: What is the reality behind quantum mechanical predictions? Is there a realism interpretation of quantum physics? This paper is intended to explore the possibility of a realism interpretation of QM, based on a derivation of Maxwell equations in quaternion space. In this regard, we begin with quaternion space and its respective Quaternion Relativity (which may also be called Rotational Relativity) as discussed in several papers including [1]. The purpose of the present paper is to review our previous derivation of Maxwell equations in Q-space [17], with a discussion of some implications. First, we review our previous results in deriving Maxwell equations using the Dirac decomposition introduced by Gersten (1999). Then we briefly remark on helical solutions of Maxwell equations, Smarandache's Hypothesis, and possible cosmological entanglement. Further observations are of course recommended to refute or verify some implications of this proposition.",
"corpus_id": 219326335,
"title": "Towards realism interpretation of wave mechanics based on Maxwell equations in quaternion space and some implications, including Smarandache’s hypothesis"
} | {
"abstract": "Quaternion space and its respective Quaternion Relativity (it may also be called Rotational Relativity) have been defined in a number of papers including [1], and this new theory is capable of describing relativistic motion in an elegant and straightforward way. Nonetheless there are subsequent theoretical developments which remain an open question, for instance how to derive Maxwell equations in Q-space. The purpose of the present paper is to derive a consistent description of Maxwell equations in Q-space. First we consider a simplified method similar to Feynman's derivation of Maxwell equations from the Lorentz force. Then we present another derivation method using the Dirac decomposition introduced by Gersten (1999). Further observation is of course recommended in order to refute or verify some implications of this proposition.",
"corpus_id": 398603,
"title": "A derivation of Maxwell equations in quaternion space"
} | {
"abstract": "We present a system of a self-dual Yang-Mills field and a self-dual vector-spinor field with nilpotent fermionic symmetry (but not supersymmetry) in 2+2 dimensions, that generates supersymmetric integrable systems in lower dimensions. Our field content is (A_\\mu{}^I, \\psi_\\mu{}^I, \\chi^{I J}), where I and J are the adjoint indices of arbitrary gauge group. The \\chi^{I J} is a Stueckelberg field for consistency. The system has local nilpotent fermionic symmetry with the algebra \\{N_\\alpha{}^I, N_\\beta{}^J \\} = 0. This system generates supersymmetric Kadomtsev-Petviashvili equations in D=2+1, and supersymmetric Korteweg-de Vries equations in D=1+1 after appropriate dimensional reductions. We also show that a similar self-dual system in seven dimensions generates self-dual system in four dimensions. Based on our results we conjecture that lower-dimensional supersymmetric integral models can be generated by non-supersymmetric self-dual systems in higher dimensions only with nilpotent fermionic symmetries.",
"corpus_id": 119110752,
"score": -1,
"title": "Self-dual Yang-Mills and vector-spinor fields, nilpotent fermionic symmetry, and supersymmetric integrable systems"
} |
{
"abstract": "In this paper, a conceptual multi-zone model for climate control of a livestock building is elaborated. The main challenge of this research is to estimate the parameters of a nonlinear hybrid model. A recursive estimation algorithm, the Extended Kalman Filter (EKF), is implemented for estimation. Since the EKF is sensitive to the initial guess, the estimation process is split into simple parts and approximate parameters are found with a non-recursive least squares method in order to provide good initial values. Results based on experiments from a real-life stable facility are presented at the end.",
"corpus_id": 34696144,
"title": "Multi-Zone hybrid model for failure detection of the stable ventilation systems"
} | {
"abstract": "In this paper, a multi-zone modeling concept is proposed based on a simplified energy balance formulation to provide a better prediction of the indoor horizontal temperature variation inside the livestock building. The developed mathematical models reflect the influences from the weather, the livestock, the ventilation system and the building on the dynamic performance of indoor climate. Some significant parameters employed in the climate model as well as the airflow interaction between each conceptual zone are identified with the use of experimental time series data collected during spring and winter at a real scale livestock building in Denmark. The obtained comparative results between the measured data and the simulated output confirm that a very simple multi-zone model can capture the salient dynamical features of the climate dynamics which are needed for control purposes.",
"corpus_id": 16062398,
"title": "Parameter Estimation of Dynamic Multi-zone Models for Livestock Indoor Climate Control"
} | {
"abstract": "System identification is a well-established field. It is concerned with the determination of particular models for systems that are intended for a certain purpose such as control. Although dynamical systems encountered in the physical world are native to the continuous-time domain, system identification has been based largely on discrete-time models for a long time in the past, ignoring certain merits of the native continuous-time models. Continuous-time-model-based system identification techniques were initiated in the middle of the last century, but were overshadowed by the overwhelming developments in discrete-time methods for some time. This was due mainly to the 'go completely digital' trend that was spurred by parallel developments in digital computers. The field of identification has now matured and several of the methods are now incorporated in the continuous time system identification (CONTSID) toolbox for use with Matlab. The paper presents a perspective of these techniques in a unified framework.",
"corpus_id": 191992914,
"score": -1,
"title": "Identification of Continuous-Time Systems"
} |
{
"abstract": "Over recent years the historic environment has come to be seen as a "heritage asset" to be managed and developed for economic benefit rather than (as was the case in the past) as a public good to be preserved and protected for future generations. Today, the contemporary heritage sector (of which the historic environment is a key element) is often perceived by policy makers as a significant contributor to the economic and social vitality of places at multiple scales, and not insignificantly at the level of the region. In England, this shift has occurred along with the rapid expansion of the processes and structures of regional governance. This coincidence has given opportunities for a ‘heritage modernization agenda’ to take root within the regions and has allowed heritage and the management of the issues associated with the historic environment to shift from the margins of public policy to become more integral components in a range of regional plans, including those for tourism. This paper will explore the ways in which this process has occurred generally across the English regions and more specifically in Yorkshire and the Humber. In this latter aspect, the paper focuses on the work of the Yorkshire and the Humber Historic Environment Forum and its role as a key agent in seeking to embed issues around the management and development of Yorkshire's historic environment into broader public policy agendas. To do this the paper will consider the development of the devolved government agenda under New Labour in England since 1997; examine the constructivist nature of Yorkshire's Heritage asset and the role of the Historic Environment Forum in this process; explore the emergence of heritage within the domain of regional planning; and finally, consider the integration of ‘heritage’ and historic environment policy within the plans and policies for development of the regional visitor economy. 
In so doing it explores some of the relations and tensions residing in the selection of aspects of the past as ways of promoting both regional tourism and regional identity. Throughout its analysis, the paper touches on three key themes within the Regional Identities and Imaginations gateway: ‘using the past to imagine marketable representations of regions; the way ‘policies and strategies to differentiate between regional place products’; and the extent to which the policy process developing around the historic environment and its relation to regional tourism demonstrates preferred narratives for the ‘consumption of regional identities and imaginations’. Growing Design: Design Consultancies In Three English Industrial City-Regions Presenting Author: Peter Sunley University Of Southampton, UNITED KINGDOM Co-authors: Suzanne Reimer, Steven Pinch, James Macmillen There is widespread recognition that design is an important and distinctive component of the knowledge economy. Design consultancies constitute a creative and business-facing service sector that is crucial to diffusing the benefits of innovation throughout the economy. This paper uses the results of firm interviews to examine the evolution and experiences of the design firms in key design disciplines in three old industrial cities in the UK: Manchester; Newcastle and Birmingham. It seeks to understand the importance of these urban contexts to the development and growth of these firms. It highlights the major constraints on growth reported by a sample of firms in each city and, more specifically, it emphasises the quantity and quality of demand for their services and the availability of trained and experienced labour. The paper stresses the importance of the regional distribution of educational institutions with strong design courses, and the location decisions of design graduates, to the geographical development of the industry. The paper then considers some of the policy implications of its findings. 
Some types of design firms in these cities perceive that they benefit from proximity to local creative quarters and initiatives, but that these benefits are limited and should not be over-estimated. Policy support for creative quarters and local creative buzz appears to have had a beneficial, but hard to measure, impact on some design firms. The paper concludes that, from a design perspective, stimulating local creative buzz in industrial cities is insufficient as a policy aim if it neglects the need to address the key constraints: the nature of business demand for design and the availability of skilled labour. The Emergence Of New High-Technology Agglomerations Presenting Author: Nina Suvinen, University Of Tampere, FINLAND The regions which seem to produce remarkable innovativeness and industrial competitiveness have long been the subject of intensive research (Moulaert and Sekia 2003). The research has often focused on economic issues, innovations, and interactions between different actors such as universities and firms, but less attention has been paid to the factors which affect the emergence of new high-technology agglomerations. There are different views about these factors.",
"corpus_id": 236505355,
"title": "Regional Studies Association Regions : The Dilemmas of Integration and Competition ?"
} | {
"abstract": "Abstract The paper analyses the changing patterns of migration in the transition process from a “Socialist” into a “Capitalist” city against the background of “Modernization Theory”. Based on the example of Erfurt, a former district capital and now capital of the new federal state of Thuringia, an overview is given of basic population development trends before and after the unification of the two Germanys. A more detailed analysis of different urban areas (historic centre, city extensions of the industrial period, Socialist housing areas, new suburban housing areas) reveals a general turn-around of migration streams: Before 1990 the urban population showed a steady increase, while after reunification a dramatic loss could be observed. This decrease is caused both by emigration to areas in former West Germany, as well as by the beginning suburbanisation process. The latter is also fostered by the growing number of West – East migrants across the former internal German border. The findings are summarised in a model of the post-Socialist urban area.",
"corpus_id": 153331659,
"title": "From Concentration to De-concentration — Migration Patterns in the Post-socialist City☆"
} | {
"abstract": "Previous research on market economies characterized by stable framework conditions shows that several regional factors determine start-up activity. Not much is known about what drives entrepreneurship in unstable environments characterized by significant institutional changes that affect the availability of entrepreneurial opportunities. To fill this gap, this paper focuses on post-communist regions in which start-up activity was basically nonexistent under socialism, but significantly more in evidence after the institutional shock of introducing a market economy. It is argued and shown that the allocation of talent into productive entrepreneurship is higher in areas abundantly endowed with individuals who have a relatively high ability to detect viable entrepreneurial opportunities, as indicated by their qualification, and in regions home to a population that is characterized by a high alertness toward opportunities, as indicated by remnants of an entrepreneurial culture that predates socialism. How institutional context affects entrepreneurship over the course of transition is reflected by the negative relationship between urbanization and entrepreneurship that presumably has to do with ill-devised socialist urban planning policies. The regional application of the theory on institutions and entrepreneurship outlined in this paper shows that an entrepreneurial rebound after an adverse large-scale shock accompanied by massive structural change and economic dislocation is most pronounced in areas with a strong human capital basis and a regional culture that favors entrepreneurship.",
"corpus_id": 39397905,
"score": -1,
"title": "Ready, set, go!: Why are some regions entrepreneurial jump-starters?"
} |
{
"abstract": "The concept of a technology strategy (TS) has been developing in the literature of technology management since the 1970s. TS has been defined as a set of technology and related objectives, variant scenarios, technology roadmaps, targeting practices, and know-how aimed at an adequate specification of the desired long-term development of a technological system and related processes (R&D, supply, sales, control, service, etc.). This article has two main scientific goals. The first goal is to describe methodically the main specifics and forms of technology planning/TS through a comprehensive study of the available professional literature. The second goal is to analyze the development of technology planning methods, based on a bibliometric analysis of the ScienceDirect database (1823-2013). Main goals, individual explanations, practical examples, statistics and graphical information should help explain what technology planning currently looks like and what its main priorities and problems are.",
"corpus_id": 62379789,
"title": "Systemic Introduction to Technology Planning in the Context of Technology Competitiveness"
} | {
"abstract": "A study of the evolution of competitive capabilities in exemplar New Zealand firms identified that technology strategy played the key role in motivating the firms' transition to positions of global prominence. Adequate description of these transitions required a view of technology strategy that is more dynamic than those typically available. We use complexity theory to identify, first, a number of positive feedback loops that have driven the technological progression of these firms, second to identify the complex webs of strategic development within which technology has progressed, and finally to explain why these trajectories carry firms to positions of distinctive advantage. These loops come together to impel firms through a radical transition from broad technology dabblers to focussed technology specialists. We view the study as exploratory to a class of studies aimed at understanding the evolution of technology strategy over time.",
"corpus_id": 153368402,
"title": "The Dynamics of Technology Strategy: An Exploratory Study"
} | {
"abstract": "The authors' purpose is to improve the coupling between technology development and corporate strategic planning in multinational firms by providing a much-needed technology planning framework. The framework, which is developed in some detail, divides the planning process into three stages: technology scanning, strategy development (product level) and implementation (country level). In the first stage an answer is sought to the question, “What technologies (as distinct from businesses) are we, or should we be in?”. In the second, the aim is to develop a strategy for each of the products from the chosen technologies; in the third stage, details of implementation on a country-by-country basis are worked out. Although presented as a sequence of three stages, the framework is to be applied iteratively. The authors argue that technology, for all its vital importance to a global company, cannot be treated as a profit centre. This is part of the difficulty in implementing the technology management function, especially in multidivisional and global firms. They believe that use of this framework will make it easier to integrate technology development into the strategic planning process. In addition it will serve to integrate managers from different parts of the company into a formalized technology planning exercise.",
"corpus_id": 153886860,
"score": -1,
"title": "Technology development in the multinational firm: a framework for planning and strategy"
} |
{
"abstract": "Judicious control of indoor wireless coverage is crucial in built environments. It enhances signal reception, reduces harmful interference, and raises the barrier for malicious attackers. Existing methods are either costly, vulnerable to attacks, or hard to configure. We present a low-cost, secure, and easy-to-configure approach that uses an easily-accessible, 3D-fabricated reflector to customize wireless coverage. With input on the coarse-grained environment setting and preferred coverage (e.g., areas with signals to be strengthened or weakened), the system computes an optimized reflector shape tailored to the given environment. The user simply 3D prints the reflector and places it around a Wi-Fi access point to realize the target coverage. We conduct experiments to examine the efficacy and limits of optimized reflectors in different indoor settings. Results show that optimized reflectors coexist with a variety of Wi-Fi APs and correctly weaken or enhance signals in target areas by up to 10 or 6 dB, resulting in throughput changes of up to -63.3% or 55.1%.",
"corpus_id": 3707090,
"title": "Customizing indoor wireless coverage via 3D-fabricated reflectors"
} | {
"abstract": "Directing wireless signals and customizing wireless coverage is of great importance in residential, commercial, and industrial environments. It can improve the wireless reception quality, reduce the energy consumption, and achieve better security and privacy. To this end, we propose a new computational approach to control wireless coverage by mounting signal reflectors in carefully optimized shapes on wireless routers. Leveraging 3D reconstruction, fast-wave simulations in acoustics, computational optimization, and 3D fabrication, our method is low-cost, adapts to different wireless routers and physical environments, and has a far-reaching impact by interweaving computational techniques to solve key problems in wireless communication.",
"corpus_id": 8100985,
"title": "3D Printing Your Wireless Coverage"
} | {
"abstract": "In this paper we demonstrate how to extend an indoor personal navigation system based upon fusing pedestrian dead reckoning data and WiFi fingerprints by using simple, unobtrusive visual landmarks perceived by the user's smartphone camera. The proposed navigation system employs a factor graph to represent the localization constraints stemming from measurements obtained using the sensors available in a mobile device. The novelty of this work lies in integration of the constraints imposed by opportunistic observations of simple landmarks based on QR codes in the graph-based formulation of the localization problem. The experiments concern feasibility of detecting and reading QR codes under real-life conditions, and the accuracy of user position estimation. The experimental results confirm that QR codes can provide valuable localization information in real-time, especially when the site lacks a rich landscape of WiFi signals.",
"corpus_id": 14614216,
"score": -1,
"title": "Indoor navigation using QR codes and WiFi signals with an implementation on mobile platform"
} |
{
"abstract": "The sensitivity of pure cultures of Staphylococcus epidermidis and Klebsiella pneumoniae towards arsenic was studied with particular reference to biochemical changes induced by the heavy metal in these organisms. Arsenic strongly inhibited the growth and viability of both organisms. Addition of arsenic prolonged the lag phase, and this was found to be a concentration-dependent phenomenon. The minimum inhibitory concentration (MIC) was 200 ppm in S. epidermidis and 20 ppm in K. pneumoniae; at these concentrations growth, the synthesis of protein, DNA and RNA, and the activity of dehydrogenases of the TCA cycle were completely inhibited. In S. epidermidis, 24.5%, 32.5% and 43% of the arsenic was incorporated into the cell wall, membrane and cytoplasm respectively, while in K. pneumoniae the corresponding values were 20%, 35% and 45%. As the activity of dehydrogenases was inhibited by arsenic, cells were incapable of oxidizing substrate. This resulted in a limited supply of energy-rich compounds such as ATP, which affected the synthesis of macromolecules. Ultimately, multiplication and growth of the organisms ceased.",
"corpus_id": 86463382,
"title": "Arsenic toxicity in pathogenic Staphylococcus epidermidis and Klebsiella pneumoniae"
} | {
"abstract": "Since environmental exposure to arsenicals has been correlated with a high skin cancer risk among populations exposed to sunlight, it is possible that arsenicals might interfere with the repair of damage to DNA (mostly thymine dimers) resulting from the ultraviolet rays in sunlight. To test this hypothesis, strains of E. coli, differing from each other only in one or more repair functions, were exposed to UV light and then plated in the presence or absence of sodium arsenite. Survival after irradiation of wild type E. coli (WP2) was significantly decreased by 0.5mM arsenite. This effect was also seen in strains which are unable to carry out excision repair, suggesting that arsenite inhibits one or more steps in the post-replication repair pathways. This is confirmed by the finding that arsenite has no effect on the post-irradiation survival of a recA mutant, which does not carry out post-replication repair. Mutagenesis after ultraviolet irradiation depends on the rec+ and lex+ genes. Arsenite decreases mutagenesis in strains containing these genes. In order to determine its mechanism of action, dose-response relationships of arsenite on a number of cellular functions were carried out. The most sensitive cellular functions found were the induction of β-galactosidase and the synthesis of RNA. Since error-prone repair in E. coli is an inducible process, the inhibition of mutagenesis after UV irradiation may be the result of inhibition of messenger RNA synthesis.",
"corpus_id": 2783336,
"title": "Effects of arsenite on DNA repair in Escherichia coli"
} | {
"abstract": "We studied, in male Sprague Dawley rats, the role of the cognate hyaluronan receptor, CD44 signaling in the antihyperalgesia induced by high molecular weight hyaluronan (HMWH). Low molecular weight hyaluronan (LMWH) acts at both peptidergic and nonpeptidergic nociceptors to induce mechanical hyperalgesia that is prevented by intrathecal oligodeoxynucleotide antisense to CD44 mRNA, which also prevents hyperalgesia induced by a CD44 receptor agonist, A6. Ongoing LMWH and A6 hyperalgesia are reversed by HMWH. HMWH also reverses the hyperalgesia induced by diverse pronociceptive mediators, prostaglandin E2, epinephrine, TNFα, and interleukin-6, and the neuropathic pain induced by the cancer chemotherapy paclitaxel. Although CD44 antisense has no effect on the hyperalgesia induced by inflammatory mediators or paclitaxel, it eliminates the antihyperalgesic effect of HMWH. HMWH also reverses the hyperalgesia induced by activation of intracellular second messengers, PKA and PKCε, indicating that HMWH-induced antihyperalgesia, although dependent on CD44, is mediated by an intracellular signaling pathway rather than as a competitive receptor antagonist. Sensitization of cultured small-diameter DRG neurons by prostaglandin E2 is also prevented and reversed by HMWH. These results demonstrate the central role of CD44 signaling in HMWH-induced antihyperalgesia, and establish it as a therapeutic target against inflammatory and neuropathic pain. SIGNIFICANCE STATEMENT We demonstrate that hyaluronan (HA) with different molecular weights produces opposing nociceptive effects. While low molecular weight HA increases sensitivity to mechanical stimulation, high molecular weight HA reduces sensitization, attenuating inflammatory and neuropathic hyperalgesia. Both pronociceptive and antinociceptive effects of HA are mediated by activation of signaling pathways downstream CD44, the cognate HA receptor, in nociceptors. 
These results contribute to our understanding of the role of the extracellular matrix in pain, and indicate CD44 as a potential therapeutic target to alleviate inflammatory and neuropathic pain.",
"corpus_id": 5323093,
"score": -1,
"title": "CD44 Signaling Mediates High Molecular Weight Hyaluronan-Induced Antihyperalgesia"
} |
{
"abstract": "We present a pseudoparticle nonequilibrium Green function formalism as a tool to study the coupling between plasmons and excitons in nonequilibrium molecular junctions. The formalism treats plasmon-exciton couplings and intra-molecular interactions exactly, and is shown to be especially convenient for exploration of plasmonic absorption spectrum of plexitonic systems, where combined electron and energy transfers play an important role. We demonstrate the sensitivity of the molecule-plasmon Fano resonance to junction bias and intra-molecular interactions (Coulomb repulsion and intra-molecular exciton coupling). The electromagnetic theory is used in order to derive self-consistent ¯eld-induced coupling terms between the molecular and the plasmon excitations. Our study opens a way to deal with strongly interacting plasmon-exciton systems in nonequilibrium molecular devices.",
"corpus_id": 121088639,
"title": "Non-Markovian theory of collective plasmon-molecule excitations in nanojunctions combined with classical electrodynamic simulations"
} | {
"abstract": "Self-assembled quasi one-dimensional nanostructures of pi-conjugated molecules may find a use in devices owing to their intriguing optoelectronic properties, which include sharp exciton transitions, strong circular dichroism, high exciton mobilities and photoconductivity. However, many applications require immobilization of these nanostructures on a solid substrate, which is a challenge to achieve without destroying their delicate supramolecular structure. Here, we use a drop-flow technique to immobilize double-walled tubular J-aggregates of amphiphilic cyanine dyes without affecting their morphological or optical properties. High-resolution images of the topography and exciton fluorescence of individual J-aggregates are obtained simultaneously with polarization-resolved near-field scanning optical microscopy. These images show remarkably uniform supramolecular structure, both along individual nanotubes and between nanotubes in an ensemble, demonstrating their potential for light harvesting and energy transport.",
"corpus_id": 1334332,
"title": "Uniform exciton fluorescence from individual molecular nanotubes immobilized on solid substrates."
} | {
"abstract": "The concentration-dependent absorption and temperature-dependent fluorescence of the perylene bisimide dye PBI 1 in methylcyclohexane point to a biphasic aggregation behavior. At intermediate concentrations and temperatures, respectively, a dimer with low fluorescence yield dominates, which cannot be extended to longer aggregates. Those are formed at high concentrations and low temperatures, respectively, via a second, energetically unfavorable dimer species that acts as a nucleus. A corresponding aggregation model reproduces accurately the concentration dependence and allows extracting the equilibrium constants and spectra of the distinct species. The differences in the photophysical properties indicate H-type excitonic coupling for the favored dimer and J-type characteristics for the extended aggregates which could be related to structural models based on DFT calculations. The energetics can be understood by considering hydrogen-bonding and π-π-stacking interactions.",
"corpus_id": 2282863,
"score": -1,
"title": "Biphasic self-assembly pathways and size-dependent photophysical properties of perylene bisimide dye aggregates."
} |
{
"abstract": "Lakeshores are under increasing pressure from human activities, which cause extensive hydromorphological alterations. In this article, a method is described for assessing those alterations by using physical criteria developed in relation to the response by the lakeshore ecosystem, using benthic invertebrates as indicators. Two alpine lakes (Lake Bled and Lake Bohinj, Slovenia) were used as a case study. Both lakes are subjected to varying levels of physical alterations and lakeshore uses, which are described using alteration variables for four lakeshore zones: littoral zone, shoreline zone, riparian zone and lakeshore region. On the basis of these four variables, a Lakeshore Modification Index (LMI) was developed as a weighted sum of all variables. The weights were based on each variable's explanatory power regarding the distribution of benthic invertebrate taxa using canonical correspondence analyses. Both the LMI and all four lakeshore zone alteration variables showed significant (p < 0.01) negative correlations with species richness, but the LMI showed the strongest correlation (Pearson r = −0.086, p < 0.01). Differences existed in the level of alteration of the two lakes, with Lake Bled being more altered than Lake Bohinj; Lake Bled also exhibited the highest values for all four alteration variables and the highest LMI score. With the use of a classification system with five equidistant LMI classes, a difference was observed between the lakes in the distribution of LMI classes (two‐sample Kolmogorov–Smirnov test Z = 5.714, p < 0.01). An assessment and classification of lakeshore modifications based on physical criteria, similar to that given by the LMI, can provide an important tool for lake management in practice, where a reliable method for assessing pressures is needed to support decision‐making. Copyright © 2012 John Wiley & Sons, Ltd.",
"corpus_id": 129913581,
"title": "A Lakeshore Modification Index and its association with benthic invertebrates in alpine lakes"
} | {
"abstract": "The EU Water Framework Directive (WFD) requires the ecological assessment of water bodies. Since the littoral zones and the lakeshores are part of lakes as water bodies as defined by the WFD, a new scheme for ecological quality assessment of lakeshores should be established. It is proposed that this scheme should go beyond the formal requirements of the WFD, as it includes aspects of nature conservancy, landscape protection, and regional planning and development. Some of these aspects are subject to other EU legislation (e.g. Habitats Directive) and some are subject to national legislation. Ten general Quality Elements (QEs) are proposed, which can be refined and reified through several levels of detail, depending on the specific aims of a study. A list of eleven topics, which should be discussed in the establishment of the lakeshore quality assessment scheme, is given. The more complex ones are the implementation of other EU legislation, the definition of lakeshore types and reference conditions, the stipulation of best aggregation procedures, and a better understanding of the significance of hydrological and morphological impacts on the biota.",
"corpus_id": 2218661,
"title": "New approaches to integrated quality assessment of lakeshores"
} | {
"abstract": "In 1977–81 the river Rhine near the lake of Constance held the highest densities of Zebra Mussels (Dreissena polymorpha) ever found in Central and Western Europe with up to 12 kg/m2 fresh biomass. Wintering diving ducks and Coots consumed every year 97% of the standing crop. The population maintained itself by mass immigration of mainly 1-year-old mussels during the summer. This leads to large temporal and spatial (within 4 km) fluctuations of biomass.",
"corpus_id": 13564179,
"score": -1,
"title": "Der Einfluss von Wasservögeln auf Populationen der Wandermuschel (Dreissena polymorpha Pall.) am Untersee/Hochrhein (Bodensee)"
} |
{
"abstract": "In education and research, references play a key role. However, extracting and parsing references are difficult problems. One concern is that there are many styles of references; hence, given a surface form, identifying what style was employed is problematic, especially in heterogeneous collections of theses and dissertations, which cover many fields and disciplines, and where different styles may be used even in the same publication. We address these problems by drawing upon suitable knowledge found in the WWW. In particular, we research a two-stage classifier approach, involving multi-class classification with respect to reference styles, and partially solve the problem of parsing surface representations of references. We describe empirical evidence for the effectiveness of our approach and plans for improvement of our methods.",
"corpus_id": 1473297,
"title": "A hybrid two-stage approach for discipline-independent canonical representation extraction from references"
} | {
"abstract": "Machine reading is a long-standing goal of AI and NLP. In recent years, tremendous progress has been made in developing machine learning approaches for many of its subtasks such as parsing, information extraction, and question answering. However, existing end-to-end solutions typically require substantial amount of human efforts (e.g., labeled data and/or manual engineering), and are not well poised for Web-scale knowledge acquisition. In this paper, we propose a unifying approach for machine reading by bootstrapping from the easiest extractable knowledge and conquering the long tail via a self-supervised learning process. This self-supervision is powered by joint inference based on Markov logic, and is made scalable by leveraging hierarchical structures and coarse-to-fine inference. Researchers at the University of Washington have taken the first steps in this direction. Our existing work explores the wide spectrum of this vision and shows its promise.",
"corpus_id": 6291204,
"title": "Machine Reading at the University of Washington"
} | {
"abstract": "In the task of question answering, Memory Networks have recently shown to be quite effective towards complex reasoning as well as scalability, in spite of limited range of topics covered in training data. In this paper, we introduce Factual Memory Network, which learns to answer questions by extracting and reasoning over relevant facts from a Knowledge Base. Our system generate distributed representation of questions and KB in same word vector space, extract a subset of initial candidate facts, then try to find a path to answer entity using multi-hop reasoning and refinement. Additionally, we also improve the run-time efficiency of our model using various computational heuristics.",
"corpus_id": 18780529,
"score": -1,
"title": "Question Answering over Knowledge Base using Factual Memory Networks"
} |
{
"abstract": "Abstract We studied, over a 6-yr period, temporal dynamics of plum curculio, Conotrachelus nenuphar (Herbst.), immigration into an unsprayed section of a commercial apple orchard with the main aim of establishing the relationships between the timing of immigration, weather factors, and phenological tree stage. By using panel and pyramid traps baited with attractive synthetic odor and deployed near woods adjacent to orchard trees, we exploited the chemical cues potentially directing the spring immigration by plum curculios. On each of the 6 yr, traps were inspected on a daily basis over the entire period of plum curculio immigration, which ranged from 51 to 85 d. Across all 6 yr, most immigrant plum curculios (on average 57% of the total) potentially colonizing host trees were captured by traps by the time of petal fall. Based on our combined trapping and weather data, we propose the occurrence of pre- and postpetal fall periods of plum curculio immigration, each of which is influenced to a different extent by temperatures prevailing in spring. Only during the prepetal fall period, but not afterward, was there a strong influence of air temperature on captures by both panel and pyramid traps. Thermal constants (expressed in degree days) estimated reflected more accurately onset of plum curculio immigration than tree phenology. Our combined results indicate that the odor-baited traps evaluated can be used to predict initiation of plum curculio immigration using thermal constants and also to monitor accurately the magnitude of plum curculio immigration into orchard blocks. Findings are discussed with respect to the ecology and management of plum curculio.",
"corpus_id": 86352537,
"title": "Temporal Dynamics of Plum Curculio, Conotrachelus nenuphar (Herbst.) (Coleoptera: Curculionidae), Immigration into an Apple Orchard in Massachusetts"
} | {
"abstract": "The attractiveness of different synthetic host odors and a synthetic aggregation pheromone (grandisoic acid [GA]) to overwintered adult plum curculios (PCs), Conotrachelus nenuphar(Herbst) (Coleoptera: Curculionidae), was examined using two types of traps (sticky panels and black pyramids) placed in border areas surrounding an unsprayed section of an apple orchard in Massachusetts. In 2001, we evaluated the response of PCs to three synthetic fruit volatiles (benzaldehyde [BEN], ethyl isovalerate [EIV], and limonene [LIM]) assessed alone and in combination with GA, as well as the response to GA alone and a no-odor (control) treatment. BEN was the only host volatile that synergized the response of PCs to GA for both trap types. For both trap types, GA was as attractive to PCs as a single component as when in combination with either EIV or LIM. In 2002, four release rates of BEN (0, 2.5, 10, and 40 mg/day) and two release rates of GA (1 and 2 mg/day) were evaluated for attractiveness to PCs using panel and pyramid traps. For panel traps, an increase in amount of GA released (from 1 to 2 mg/day) was associated with a 35% increase in captures. However, PC captures by pyramid traps were similar regardless of the amount of GA released. For panel traps, 10 and 40 mg/day of BEN were the most attractive release rates regardless of the amount of GA released. For pyramid traps baited with GA, PC captures were enhanced by the presence of BEN, regardless of release rate. In 2003, GA at 1 mg/day + BEN at 80 mg/day of release did not enhance PC captures by panel traps relative to lower release rates of BEN. Pyramid traps releasing GA at 1 mg/day performed best when baited with BEN at 10 mg/day of release; a release rate of 80 mg/day of BEN decreased the attractiveness of the binary combination of BEN + GA. 
Combined results suggest that BEN at 10 mg/day + GA at 1 mg/day of release constitutes an attractive lure that may improve the effectiveness of monitoring traps for PCs.",
"corpus_id": 6182119,
"title": "Field Evaluation of Plant Odor and Pheromonal Combinations for Attracting Plum Curculios"
} | {
"abstract": "Three isolates of Newcastle disease virus (NDV) were isolated from tracheal samples of dead village chickens in two provinces (Phnom Penh and Kampong Cham) in Cambodia during 2011–2012. All of these Cambodian NDV isolates were categorized as velogenic pathotype, based on in vivo pathogenicity tests and F cleavage site motif sequence (112RRRKRF117). The phylogenetic analysis and the evolutionary distances based on the sequences of the F gene revealed that all the three field isolates of NDV from Cambodia form a distinct cluster (VIIh) together with three Indonesian strains and were assigned to the genotype VII within the class II. Further phylogenetic analysis based on the hyper-variable region of the F gene revealed that some of NDV strains from Malaysia since the mid-2000s were also classified into the VIIh virus. This indicates that the VIIh NDVs are spreading through Southeast Asia. The present investigation, therefore, emphasizes the importance of further surveillance of NDV in neighboring countries as well as throughout Southeast Asia to contain further spreading of these VIIh viruses.",
"corpus_id": 7657857,
"score": -1,
"title": "Molecular epidemiological investigation of velogenic Newcastle disease viruses from village chickens in Cambodia"
} |
{
"abstract": "Mucosal surfaces represent a major gateway to microorganisms which may be harmful to health. The humoral immune response has an important action in the defense of these surfaces, as it is able to prevent the entry of pathogens in the body. Vaccines with local application have been evaluated in order to stimulate an efficient immune response in the mucous membranes, since conventional vaccines, for parenteral application, tend to stimulate a mostly systemic response. Vaccines that use the mucosa as an inoculation route are able to generate an immune response directly in the application mucosa and corresponding mucosa, since the mucosal system is integrated, which represents an important advantage in choosing the inoculation route. This paper aims to illustrate some concepts related to mucosal immunity in general, as well as to gather information about what has been studied in relation to mucosal routes of administration of vaccines, immunomodulators and antigen delivery systems.",
"corpus_id": 237391349,
"title": "Development and use of mucosal vaccines: Potential and limitations"
} | {
"abstract": "The vast surfaces of the gastrointestinal, respiratory, and genitourinary tracts represent major sites of potential attack by invading micro‐organisms. Immunoglobulin A (IgA), as the principal antibody class in the secretions that bathe these mucosal surfaces, acts as an important first line of defence. IgA, also an important serum immunoglobulin, mediates a variety of protective functions through interaction with specific receptors and immune mediators. The importance of such protection is underlined by the fact that certain pathogens have evolved mechanisms to compromise IgA‐mediated defence, providing an opportunity for more effective invasion. IgA function may also be perturbed in certain disease states, some of which are characterized by deposition of IgA in specific tissues. This review details current understanding of the roles played by IgA in both health and disease. Copyright © 2006 Pathological Society of Great Britain and Ireland. Published by John Wiley & Sons, Ltd.",
"corpus_id": 2273986,
"title": "Function of immunoglobulin A in immunity"
} | {
"abstract": "Monoclonal antibody PAb1620 recognizes a conformational epitope on the transcription factor p53 and, upon binding, allosterically inhibits p53 binding to DNA. A highly diverse (1.5×1010 members) phage-displayed library of peptides containing 40 random amino acids was used to identify the PAb1620 binding site on p53. Panning this library against PAb1620 resulted in three unique peptides which have statistically significant sequence identities with p53 sufficient to identify the binding site as being composed of amino acids 106–113 and 146–156. Based on these results, we propose a mechanism by which PAb1620 can allosterically inhibit p53 binding to DNA through an indirect interaction between the antibody binding site and the L1 loop (amino acids 112–124) of p53, which is a component of the DNA binding region.",
"corpus_id": 7794795,
"score": -1,
"title": "Identification of an allosteric binding site on the transcription factor p53 using a phage-displayed peptide library"
} |
{
"abstract": "Tumors are neogrowths formed by the growth of normal cells or tissues through complex mechanisms under the influence of many factors. The occurrence and development of tumors are affected by many factors. Pescadillo ribosomal biogenesis factor 1 (PES1) has been identified as a cancer-related gene. The study of these genes may open up new avenues for early diagnosis, treatment and prognosis of tumors. As a nucleolar protein and part of the Pes1/Bop1/WDR12 (PeBoW) complex, PES1 is involved in ribosome biogenesis and DNA replication. Many studies have shown that high expression of PES1 is often closely related to the occurrence, proliferation, invasion, metastasis, prognosis and sensitivity to chemotherapeutics of various human malignant tumors through a series of molecular mechanisms and signaling pathways. The molecules that regulate the expression of PES1 include microRNA (miRNA), circular RNA (circRNA), c-Jun, bromodomain-containing protein 4 (BRD4) and nucleolar phosphoprotein B23. However, the detailed pathogenic mechanisms of PES1 overexpression in human malignancies remains unclear. This article summarizes the role of PES1 in the carcinogenesis, prognosis and treatment of multiple tumors, and introduces the molecular mechanisms and signal transduction pathways related to PES1.",
"corpus_id": 245222881,
"title": "The functional role of Pescadillo ribosomal biogenesis factor 1 in cancer"
} | {
"abstract": "Pescadillo is a nucleolar protein that has been suggested to be involved in embryonic development and ribosome biogenesis. Deregulated expression of human pescadillo (PES1) was described in some tumors, but its precise roles in tumorigenesis remains unclear. In this study, we generated three monoclonal antibodies recognizing PES1 with high specificity and sensitivity, with which PES1 expression in human colon cancer was analyzed immunohistochemically. Out of 265 colon cancer tissues, 89 (33.6%) showed positive PES1 expression, which was significantly higher than in non-cancerous tissues (P<0.001). Silencing of PES1 in colon cancer cells resulted in decreased proliferation, reduced growth of xenografts, and cell cycle arrest in G1 phase, indicating PES1 functions as an oncogene. We then explored the mechanism by which PES1 expression is controlled in human colon cancers and demonstrated that c-Jun, but not JunB, JunD, c-Fos, or mutant c-Jun, positively regulated PES1 promoter transcription activity. In addition, we mapped −274/−264 region of PES1 promoter as the c-Jun binding sequence, which was validated by chromatin immunoprecipitation and electrophoretic mobility shift assays. Moreover, we demonstrated a positive correlation between c-Jun and PES1 expression in colon cancer cells and colon cancer tissues. Upstream of c-Jun, it was revealed that c-Jun NH2-terminal kinases (JNK) is essential for controlling PES1 expression. Our study, in the first place, uncovers the oncogenic role of PES1 in colon cancer and elucidates the molecular mechanism directing PES1 expression.",
"corpus_id": 3026797,
"title": "Transcriptional Regulation of PES1 Expression by c-Jun in Colon Cancer"
} | {
"abstract": "Background: Peter Pan (PPAN) localizes to nucleoli and functions in ribosome biogenesis. Results: PPAN localizes also to mitochondria, and PPAN knockdown triggers p53-independent mitochondrial apoptosis and nucleolar stress as observed by de-stabilization of nucleophosmin. Conclusion: PPAN orchestrates a p53-independent stress-response pathway by coupling nucleolar stress induction to the mitochondrial apoptosis. Significance: Novel insight into the anti-apoptotic role of the ribosome processing factor PPAN is provided. Proper ribosome formation is a prerequisite for cell growth and proliferation. Failure of this process results in nucleolar stress and p53-mediated apoptosis. The Wnt target Peter Pan (PPAN) is required for 45 S rRNA maturation. So far, the role of PPAN in nucleolar stress response has remained elusive. We demonstrate that PPAN localizes to mitochondria in addition to its nucleolar localization and inhibits the mitochondrial apoptosis pathway in a p53-independent manner. Loss of PPAN induces BAX stabilization, depolarization of mitochondria, and release of cytochrome c, demonstrating its important role as an anti-apoptotic factor. Staurosporine-induced nucleolar stress and apoptosis disrupt nucleolar PPAN localization and induce its accumulation in the cytoplasm. This is accompanied by phosphorylation and subsequent cleavage of PPAN by caspases. Moreover, we show that PPAN is a novel interaction partner of the anti-apoptotic protein nucleophosmin (NPM). PPAN depletion induces NPM and upstream-binding factor (UBF) degradation, which is independent of caspases. In summary, we provide evidence for a novel nucleolar stress-response pathway involving PPAN, NPM, and BAX to guarantee cell survival in a p53-independent manner.",
"corpus_id": 7929487,
"score": -1,
"title": "The Wnt Target Protein Peter Pan Defines a Novel p53-independent Nucleolar Stress-Response Pathway*"
} |
{
"abstract": "Expert Finding has been a widely studied area of research. However, most of the work in this area has focused solely on analyzing networks representing people in academia. In this work, we will present an approach for two types of heterogeneous news sources (i.e., Traditional Network Sources (TNS) and Policy Network Sources (PNS)) for experts on a set of topics. Our overall objective is to discover who are the expert journalists and policy analysts on specific topics. This work is based on our intuition that the PNS and TNS could complement each other, thus leveraging information for the learning task. We propose a probabilistic generative model named Context-based Latent Dirichlet Allocation (CBLDA) that performs the task of co-ranking authors in the heterogeneous networks of TNS and PNS. We will demonstrate that our proposed approach outperforms baselines in terms of precision, mean average precision, and discounted cumulative gain.",
"corpus_id": 15400327,
"title": "Co-Ranking Authors in Heterogeneous News Networks"
} | {
"abstract": "In this paper, we present a topic level expertise search framework for heterogeneous networks. Different from the traditional Web search engines that perform retrieval and ranking at document level (or at object level), we investigate the problem of expertise search at topic level over heterogeneous networks. In particular, we study this problem in an academic search and mining system, which extracts and integrates the academic data from the distributed Web. We present a unified topic model to simultaneously model topical aspects of different objects in the academic network. Based on the learned topic models, we investigate the expertise search problem from three dimensions: ranking, citation tracing analysis, and topical graph search. Specifically, we propose a topic level random walk method for ranking the different objects. In citation tracing analysis, we aim to uncover how a piece of work influences its follow-up work. Finally, we have developed a topical graph search function, based on the topic modeling and citation tracing analysis. Experimental results show that various expertise search and mining tasks can indeed benefit from the proposed topic level analysis approach.",
"corpus_id": 3796827,
"title": "Topic level expertise search over heterogeneous networks"
} | {
"abstract": "Transcribing speech in properly formatted written language presents some challenges for automatic speech recognition systems. The difficulty arises from the conversion ambiguity between verbal and written language in both directions. Non-lexical vocabulary items such as numeric entities, dates, times, abbreviations and acronyms are particularly ambiguous. This paper describes a finite-state transducer based approach that improves proper transcription of these entities. The approach involves training a language model in the written language domain, and integrating verbal expansions of vocabulary items as a finite-state model into the decoding graph construction. We build an inverted finite-state transducer to map written vocabulary items to alternate verbal expansions using rewrite rules. Then, this verbalizer transducer is composed with the n-gram language model to obtain a verbalized language model, whose input labels are in the verbal language domain while output labels are in the written language domain. We show that the proposed approach is very effective in improving the recognition accuracy of numeric entities.",
"corpus_id": 13567148,
"score": -1,
"title": "Language model verbalization for automatic speech recognition"
} |
{
"abstract": "ABSTRACT This article is concerned with a group of language teachers’ reading and interpretation of a peer-reviewed journal article. It draws empirical materials from a larger study, which explored how teachers addressed the ‘crises’ of representation, legitimation and praxis in educational research. In this article, I present a subset of data to illustrate how some of the participants responded to the crisis of praxis. I describe the participants’ interpretive work as ways of reading, and discuss how these ways of reading constitute an imaginative-ethical approach to reading research. Such an approach, it is argued, holds the potential to interrupt the dominant instrumental model of research use. Throughout the article, a case is made for understanding how teachers read and interpret research-based recommendations in keeping with their local contexts and professional commitment.",
"corpus_id": 257718244,
"title": "How language teachers address the crisis of praxis in educational research"
} | {
"abstract": "This paper shares the findings from a study that assessed the impact of a graduate level curriculum that engaged fifty-seven k-12 teachers in community-based critical literacy practices. The findings from the participants‘ written critical reflections following two community exploration activities showed that they gained enhanced awareness of social inequalities. In addition, some of the participants made connections between the observed community disparities and their civic responsibilities to work towards social justice. A high school teacher reflects: ―This year, our [community] walk",
"corpus_id": 153896327,
"title": "Reading the World: Supporting Teachers' Professional Development Using Community- Based Critical Literacy Practices"
} | {
"abstract": "Advocates of school \"restructuring\" argue that organizational design features such as teacher involvement in decision making, staff collaboration, and supportive leadership by school administrators can enhance the effectiveness of secondary schools. Little is known, however, about the contextual and organizational conditions that support implementation of these \"organic\" design features in schools. This article investigates this issue. Following past research on the social organization of schools, we aggregate the perceptions of teachers within schools to form school-level measures of organic design features. The article then investigates the psychometric properties of these measures and tests various propositions about factors that affect their implementation in high schools. The results show that school-to-school differences in organic design features are strongly related to the public or private status of high schools. But the results also show that there is much within-school variance in perceptions of school organizational design, with teachers in different academic departments and curriculum tracks, as well as teachers with different social backgrounds, having varying perceptions of the structure of the schools in which they work. The implications of these findings for research on school organization and for debates about school restructuring are discussed.",
"corpus_id": 143781147,
"score": -1,
"title": "Organizational Design in High Schools: A Multilevel Analysis"
} |
{
"abstract": "Background: Leukemia is a life-threatening chronic disease for children. The recurrence of the disease causes tension and reduces the quality of life for the family, especially for mothers. Religion is an important humanitarian aspect of holistic care that can be very effective in determining the health level of the patient and the family members. The present study aims at investigating the role of religious coping (RCOPE) in the quality of life for mothers of children with recurrent leukemia. Methods: This is a cross-sectional study of the descriptive-correlational type. Two-hundred mothers with children aging 1–15 years suffering from leukemia were selected using a continuous sampling method. The data were collected using questionnaires eliciting information about personal information, Persian version of the Caregiver Quality of Life Index-Cancer, and RCOPE. The collected data were analyzed in SPSS using descriptive tests and independent samples t-test. Results: The result of examining the relation between life quality and demographic features of mothers showed that education level, income, and occupation had a significant statistical relationship with general quality of life mothers. The results of examining the relationship between quality of life and RCOPE of mothers showed that RCOPE was positively correlated only with the positive coping dimension quality of life (P < 0/001). Negative RCOPE had a significant reverse statistical correlation with general quality of life and all its aspects. Conclusion: The quality of life for the participants in this study was significantly related to RCOPE. Mothers with negative RCOPE faced low scores for quality of life, and religious support can improve their life quality. Further longitudinal studies are required to investigate the effects of establishing support communities.",
"corpus_id": 49274813,
"title": "Investigating the relationship between the quality of life and religious coping in mothers of children with recurrence leukemia"
} | {
"abstract": "AIM\nTo examine the psychosocial impact of recurrence on survivors of cancer and their family members.\n\n\nBACKGROUND\nCancer recurrence is described as one of the most stressful phases of cancer. Recurrence brings back many negative emotions, which are different and may be more intense than those after first diagnosis of cancer. Survivors and their family members have to deal with new psychological distress.\n\n\nDESIGN\nA qualitative descriptive study was conducted in four cancer units of two hospitals in North of Spain.\n\n\nMETHODS\nFifteen survivors of cancer with a recent diagnosis of recurrence, 13 family members and 14 nurses were interviewed. Data collection and analysis were based on the constant comparative method of grounded theory.\n\n\nRESULTS\nFour major categories were found: (1) 'Again': when cancer comes back, (2) the shock of recurrence, (3) the impact of the diagnosis on family life, and (4) factors that influence the impact of recurrence. Learning that cancer had come back was, for most of the families, more devastating than hearing that they had cancer for the first time. Signs of shock and suffering were experienced by families as an initial response to recurrence. The new diagnosis often entailed a change in the family life. Survivorship period and age seemed also significant in the psychosocial experience of recurrence.\n\n\nCONCLUSIONS\nThe term 'again', used by all the participants to describe a recurrence of the disease, symbolised a beginning and a continuation with cancer; it implied a re-encounter with health services, and it represented new suffering for the families.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nTherapeutic nursing interventions should be planned and provided to both patients with recurrent cancer and their family members. Family nursing can play an important role in helping families master the impact of the recurrent illness.",
"corpus_id": 1489315,
"title": "'Again': the impact of recurrence on survivors of cancer and family members."
} | {
"abstract": "Diagnosis of a life-threatening disease is a majorfamily stressor How family members communicate with each other about the situation and their fears has received little study. The communication patterns of 41 couples where the woman was newly diagnosed with Stage 1 or 2 breast cancer were investigated. Family interviews were done atfive points, from the time of diagnosis to I year later Qualitative grounded theory methods were triangulated with responses to the Couple Communication Scale and State Trait Anxiety Inventory. Three major types of couple discussion patterns about fears, doubts, and emotional issues were seen, based on whether they shared similar or different views about the importance of talking. Some couples talked openly or reasonably openly. Others did not talk to each other, although afew of these talked to other people. Another group, who held divergent views, demonstrated more problems in their communication Selective open disclosure was generally perceived as the most satisfactory of the patterns. Quantitative findings generally supported the talking themes that emerged.",
"corpus_id": 22838373,
"score": -1,
"title": "Family Communication Patterns in Coping with Early Breast Cancer"
} |
{
"abstract": "DRERUP, SAMUEL A., Ph.D., April 2016, Environmental and Plant Biology Functional Responses of Stream Communities to Acid Mine Drainage Remediation Director of Dissertation: Morgan L. Vis Acid mine drainage (AMD) is a consequence of historical and present day mining activities. Remediation efforts are frequently successful in improving water quality with elevated pH and decreased dissolved metals. In many streams, there has been chemical and biological recovery. The goal of restoration is to improve both biological communities and processes within the stream. I compared biofilm community structure (using fatty acid profiles), function (primary production, extracellular enzyme activity), and food web structure from three stream categories in southeast Ohio: streams impaired by acid mine drainage, streams that have undergone remediation of AMD impairment, and streams that have not been impaired by AMD. I hypothesize that remediated streams will be more reliant on terrestrial sources of energy due to nutrient limitation of benthic biofilms. Fatty acid profiles (PLFA and total fatty acids) identified distinct biofilm communities associated with AMD-impaired streams or AMD-remediated and AMDunimpaired streams and showed that these biofilm communities were not different throughout the sampling season. I found that the lowest rates of benthic biofilm gross primary productivity and primary producer biomass (chlorophyll a) were in the impaired streams while AMD-unimpaired streams had the highest. Biofilm production and primary producer biomass in streams that were classified as remediated were in between impaired and unimpaired and not statistically different from either. Results of carbon and",
"corpus_id": 134834166,
"title": "Functional Responses of Stream Communities to Acid Mine Drainage Remediation"
} | {
"abstract": "The spatial congruence of chemical and biological recovery along an 18-km acid mine impaired stream was examined to evaluate the efficacy of treatment with an alkaline doser. Two methods were used to evaluate biological recovery: the biological structure of the benthic macroinvertebrate community and several ecosystem processing measures (leaf litter breakdown, microbial respiration rates) along the gradient of improved water chemistry. We found that the doser successfully reduced the acidity and lowered dissolved metals (Al, Fe, and Mn), but downstream improvements were not linear. Water chemistry was more variable, and precipitated metals were elevated in a 3–5-km “mixing zone” immediately downstream of the doser, then stabilized into a “recovery zone” 10–18 km below the doser. Macroinvertebrate communities exhibited a longitudinal pattern of recovery, but it did not exactly match the water chemistry gradient Taxonomic richness (number of families) recovered about 6.5 km downstream of the doser, while total abundance and % EPT taxa recovery were incomplete except at the most downstream site, 18 km away. The functional measures of ecosystem processes (leaf litter breakdown, microbial respiration of conditioned leaves, and shredder biomass) closely matched the measures of community structure and also showed a more modest longitudinal trend of biological recovery than expected based on pH and alkalinity. The measures of microbial respiration had added diagnostic value and indicated that biological recovery downstream of the doser is limited by factors other than habitat and acidity/alkalinity, perhaps episodes of AMD and/or impaired energy/nutrient inputs. A better understanding of the factors that govern spatial and temporal variations in acid mine contaminants, especially episodic events, will improve our ability to predict biological recovery after remediation.",
"corpus_id": 3302799,
"title": "Use of leaf litter breakdown and macroinvertebrates to evaluate gradient of recovery in an acid mine impacted stream remediated with an active alkaline doser"
} | {
"abstract": "Restoration of streams impacted by acid mine drainage (AMD) focuses on improving water quality, however precipitates of metals on the substrata can remain and adversely affect the benthos. To examine the effects of AMD precipitates independently of aqueous effects, four substrata treatments, clean sandstone, clean limestone, AMD precipitate-coated sandstone and coated limestone, were placed in a circumneutral stream of high water quality for 4 weeks. Iron and Al were the most abundant metals on rocks with AMD precipitate. and significantly decreased after the exposure. Precipitate on the substrata did not significantly affect macroinvertebrate or periphyton density and species composition. In an additional experiment, percent survival of caged live caddisflies was significantly lower when exposed in situ for 5 days in an AMD affected stream than in a reference stream. Caddisfly whole-body concentrations of all combined metals and Fe alone were significantly higher in the AMD stream. Whole-body metal concentrations were higher in killed caddisflies than in live, indicating the importance of passive uptake. The results suggest the aqueous chemical environment of AMD had a greater affect on organisms than a coating of recent AMD precipitate on the substrata (ca. 0.5 mm thick), and treatment that improves water quality in AMD impacted streams has the potential to aid in recovery of the abiotic and biotic benthic environment.",
"corpus_id": 11460731,
"score": -1,
"title": "Impact of acid mine drainage on benthic communities in streams: the relative roles of substratum vs. aqueous effects."
} |
{
"abstract": "Image dehazing is challenging due to the problem of ill-posed parameter estimation. Numerous prior-based and learning-based methods have achieved great success. However, most learning-based methods use the changes and connections between scale and depth in convolutional neural networks for feature extraction. Although the performance is greatly improved compared with the prior-based methods, the performance in extracting detailed information is inferior. In this paper, we proposed an image dehazing model built with a convolutional neural network and Transformer, called Transformer for image dehazing (TID). First, we propose a Transformer-based channel attention module (TCAM), using a spatial attention module as its supplement. These two modules form an attention module that enhances channel and spatial features. Second, we use a multiscale parallel residual network as the backbone, which can extract feature information of different scales to achieve feature fusion. We experimented on the RESIDE dataset, and then conducted extensive comparisons and ablation studies with state-of-the-art methods. Experimental results show that our proposed method effectively improves the quality of the restored image, and it is also better than the existing attention modules in performance.",
"corpus_id": 248597191,
"title": "A Novel Transformer-Based Attention Network for Image Dehazing"
} | {
"abstract": "In this tutorial, we review recent wavelet denoising techniques for medical ultrasound and for magnetic resonance images. We evaluate their implementation via MATLAB package and discuss their performances in terms of SNR (signal-to-noise ratio) or PSNR (peak signal-to-noise ratio) and visual aspects of image quality. Image denoising using wavelet-based multiresolution analysis requires a delicate compromise between noise reduction and preserving significant image details. Hence, some subtleties associated with these denoising techniques will be explained in detail.",
"corpus_id": 363341,
"title": "A review of wavelet denoising in medical imaging"
} | {
"abstract": "Anisotropic diffusion-based filters are a widespread used resource for medical image denoising because they are designed to preserve the image details during noise removal. This paper aims at providing a quantitative evaluation of this important feature without the inaccuracies of the commonly adopted full-reference metrics. For the first time, the true value of detail preservation yielded by an anisotropic diffusion filter is formally derived from the filter theory. Many computer simulations are reported in the paper in order to study how values and locations of errors representing filtering distortion depend upon the parameter settings.",
"corpus_id": 52046225,
"score": -1,
"title": "On the Accuracy of Denoising Algorithms in Medical Imaging: A Case Study"
} |
{
"abstract": "Throughout many years of systems engineering development, a plethora of research has been conducted in an academia regarding legacy systems and legacy modernization. Their works have been resulted in papers, journals, and other products. According to some authors, legacy systems can be defined as a complex and critical system that work well, although it was developed with an outdated technology (software and hardware). They resist of modification and evolution, difficult to understand, and there is scarcity of experts/knowledge, and are inflexible towards new business requirements. The reports from the academic field clearly indicate the legacy systems as the systems that bring difficulties. The problems such as difficult to maintain, limited supplier/vendor, lack of experts/knowledge and integration issues are common in legacy systems. In addition, the old system also has their lifetime and at some point they cannot be expanded anymore. Hence, there is momentum to modernize legacy systems in order to support organizations’ business requirements. From industrial perspective, business requirements are also evolving and they need more flexible, robust, and agile systems. Organizations cannot depend on their old systems any longer since they are difficult to maintain and the knowledge around it are diminishing. The problems mentioned above are having serious impact on the organization and hence, contributing towards higher maintenance costs. However, many multinational organizations now are still running their business in their legacy systems for so many reasons. A survey in 2008 in the United States revealed that more than 50% of their IT systems are classified as legacy systems. Furthermore, Gartner Group in 1997 reported that 80% of the world's business ran on COBOL with over 200 billion lines of code in existence and with an estimated 5 billion lines of new code annually. 
In addition, the TIOBE index also reports that COBOL as one of the most popular languages ever used. Based on this fact, it is clear that there is a different way of perceiving the legacy systems in academia and in industry. Therefore, this research aims at finding the different perception of legacy systems between academia and industry. The Grounded Theory method has been used to interview legacy experts from the industry and the results were validated through survey with 104 participants through online surveys during 3,5 weeks. The study revealed that the legacy systems are not merely about IT, but also involve business and organization aspect. Academies tend to see the legacy systems from a technical point of view which leads to bad impression of the systems. However, professional in the industry see the legacy system more from the business value of the system. Consequently, problems from the technical side of the legacy systems are not really the problems for professionals in industry unless the problems disturb their business process. 4 Revisiting legacy systems and legacy modernization from the industrial perspective",
"corpus_id": 107447123,
"title": "Revisiting legacy systems and legacy modernization from the industrial perspective"
} | {
"abstract": "This paper presents the findings of a case study of a large scale legacy to service-oriented architecture migration process in the payments domain of a Dutch bank. The paper presents the business drivers that initiated the migration, and describes a 4-phase migration process. For each phase, the paper details benefits of using the techniques, best practices that contribute to the success, and possible challenges that are faced during migration. Based on these observations, the findings are discussed as lessons learned, including the implications of using reverse engineering techniques to facilitate the migration process, adopting a pragmatic migration realization approach, emphasizing the organizational and business perspectives, and harvesting knowledge of the system throughout the system's life cycle.",
"corpus_id": 6912536,
"title": "Migrating a large scale legacy application to SOA: Challenges and lessons learned"
} | {
"abstract": "REST architectural style has become a prevalent choice for distributed resources, such as the northbound API of software-defined networking (SDN). As services often undergo frequent changes and updates, the corresponding REST APIs need to change and update accordingly. To allow REST APIs to change and evolve without breaking its clients, a REST API can be designed to facilitate hypertext-driven navigation and its related mechanisms to deal with structure changes in the API. This paper addresses the issues in hypertext-driven navigation in REST APIs from three aspects. First, we present REST Chart, a Petri-Net-based REST service description framework and language to design extensible REST APIs, and it is applied to cope with the rapid evolution of SDN northbound APIs. Second, we describe some important design patterns, such as backtracking and generator, within the REST Chart framework to navigate through large scale APIs in the RESTful architecture. Third, we present a client side differential cache mechanism to reduce the overhead of hypertext-driven navigation, addressing a major issue that affects the application of REST API. The proposed approach is applied to applications in SDN, which is integrated with a generalized SDN controller, SOX. The benefits of the proposed approach are verified in different conditions. Experimental results on SDN applications show that on average, the proposed cache mechanism reduces the overhead of using the hypertext-driven REST API by 66%, while fully maintaining the desired flexibility and extensibility of the REST API.",
"corpus_id": 8146367,
"score": -1,
"title": "Design Patterns and Extensibility of REST API for Networking Applications"
} |
{
"abstract": "In this study, we used data from the Health and Retirement Study (HRS) to investigate factors associated with older adults’ engagement with advance care planning (ACP) across varying levels of cognitive functioning status. Our analysis used a sample of 17,698 participants in the HRS 2014 survey. Survey descriptive procedures (Proc SurveyMeans, Proc SurveyFreq) and logistic regression procedures (Proc SurveyLogistic) were used. Race, ethnicity, level of cognition, education, age, and number of chronic diseases consistently predicted ACP. Participants with lower levels of cognition were less likely to have a living will and durable power of attorney for healthcare (DPOAH). African American and Hispanic participants, younger participants, and those with lower cognition and education levels were less likely to engage in ACP. Marital status and loneliness predicted ACP engagement. Some results varied across the cognition cohorts. Our results indicated that sociodemographic status, together with health and cognitive status, has a significant role in predicting ACP. The results can provide valuable insights on ACP for older adults with or at risk of Alzheimer’s disease and related dementia and other cognitive impairments, caregivers, families, and healthcare providers.",
"corpus_id": 255037694,
"title": "Advance Care Planning Among Older Adults with Cognitive Impairment"
} | {
"abstract": "CONTEXT\nEnd-of-life care for people with dementia can be poor, involving emergency hospital admissions, burdensome treatments of uncertain value, and undertreatment of pain and other symptoms. Advance care planning (ACP) is identified, in England and elsewhere, as a means of improving end-of-life outcomes for people with dementia and their carers.\n\n\nOBJECTIVE\nTo systematically and critically review empirical evidence concerning the effectiveness of ACP in improving end-of-life outcomes for people with dementia and their carers.\n\n\nMETHODS\nSystematic searches of academic databases (CINAHL Plus with full text, PsycINFO, SocINDEX with full text, and PubMed) were conducted to identify research studies, published between January 2000-January 2017 and involving statistical methods, in which ACP is an intervention or independent variable, and in which end-of-life outcomes for people with dementia and/or their carers are reported.\n\n\nRESULTS\nA total of 18 relevant studies were identified. Most found ACP to be associated with some improved end-of-life outcomes. Studies were predominantly, but not exclusively, from the U.S. and care home-based. Type of ACP and outcome measures varied. Quality was assessed using National Institute of Health and Care Excellence quality appraisal checklists. Over half of the studies were of moderate to high quality. Three were randomized controlled trials, two of which were low quality.\n\n\nCONCLUSION\nThere is a need for more high-quality outcome studies, particularly using randomized designs to control for confounding. These need to be underpinned by sufficient development work and process evaluation to clarify the appropriateness of outcome measures, explore implementation issues and identify \"active elements.\"",
"corpus_id": 2697975,
"title": "The Effectiveness of Advance Care Planning in Improving End-of-Life Outcomes for People With Dementia and Their Carers: A Systematic Review and Critical Discussion."
} | {
"abstract": "PURPOSE\nTo investigate self-reported barriers to medication adherence among chronically ill adolescents, and to investigate whether barriers are unique to specific chronic diseases or more generic across conditions.\n\n\nMETHODS\nA systematic search of Web of Science, PubMed, Embase, PsycINFO, and CINAHL from January 2000 to May 2012 was conducted. Articles were included if they examined barriers to medication intake among chronically ill adolescents aged 13-19 years. Articles were excluded if adolescent's views on barriers to adherence were not separated from younger children's or caregiver's views. Data was analyzed using a thematic synthesis approach.\n\n\nRESULTS\nOf 3,655 records 28 articles with both quantitative, qualitative, and q-methodology study designs were included in the review. The synthesis led to the following key themes: Relations, adolescent development, health and illness, forgetfulness, organization, medicine complexity, and financial costs. Most reported barriers to adherence were not unique to specific diseases.\n\n\nCONCLUSION\nSome barriers seem to be specific to adolescence; for example, relations to parents and peers and adolescent development. Knowledge and assessment of barriers to medication adherence is important for both policy-makers and clinicians in planning interventions and communicating with adolescents about their treatment.",
"corpus_id": 25794974,
"score": -1,
"title": "Self-reported barriers to medication adherence among chronically ill adolescents: a systematic review."
} |
{
"abstract": "To mimic reality through a computational system is not a trivial task. It requires that one manages resources, displays coherent imagery, as well as allowing real-time interaction. This work describes the implementation of a system which allows the display of content in a CAVE system. The system provides support to non conventional devices, such as Data Gloves, Position Tracker as well as devices such as a joystick and 3D Mouse. To display the functionality of the system we developed a prototype in which a user can handle a virtual object using his/her own hand. We discuss the system implementation based on the Instant Reality architecture and the X3D standard. We describe the development of a set of plugins for a Data Glove and Position Tracker. Through such plugins the user actions are detected and processed, which allows that such a user handles virtual content with his own hands.",
"corpus_id": 23669772,
"title": "3D Object Handling Support System in a CAVE Setup"
} | {
"abstract": "Nowadays medical training simulators play an important role in education and further training of surgeons. With Virtual Reality based training systems it is possible to simulate a surgery under realistic conditions. Input data for the visualization of anatomic structures is tomographic image data. For the visualization of medical datasets direct volume rendering is the method of choice. In this paper we introduce a system based on the new X3D extension proposal of the Medical Working Group for a volume rendering component, with some extensions for controlling the speed vs. quality trade-off. For a convincing and instructive training simulation, our implementation delivers high performance combined with high quality visualization of medical datasets. Furthermore, it integrates haptic force-feedback devices to assure realistic interactions.",
"corpus_id": 14792180,
"title": "Using X3D for medical training simulations"
} | {
"abstract": "Publisher Summary This chapter discusses data concerning the time course of word identification in a discourse context. A simulation of arithmetic word-problem understanding provides a plausible account for some well-known phenomena. The current theories use representations with several mutually constraining layers. There is typically a linguistic level of representation, conceptual levels to represent both the local and global meaning and structure of a text, and a level at which the text itself has lost its individuality and its information content. Knowledge provides part of the context within which a discourse interpreted. The integration phase is the price the model pays for the necessary flexibility in the construction process.",
"corpus_id": 15246663,
"score": -1,
"title": "The role of knowledge in discourse comprehension: a construction-integration model."
} |
{
"abstract": "This paper provides steady-state and transient analysis of the equivalent circuit of the 1 MWh battery tied to the grid for wind integration. It also discusses the installation of a 1 MWh battery system at Reese Technology Center (RTC) in Lubbock, Texas. The research involves deploying energy storage devices for application with wind turbine model to understand the transient behavior of the system under three phase fault conditions. A 1 MW/1 MWh battery storage system at the RTC is connected to the South Plains Electric Cooperative (SPEC) grid. The batteries are used for energy storage and for mitigation of transient conditions grid dynamics. In this paper, the equivalent circuit of the 1 MWh battery is modeled in PSCAD and analyzed for its charge and discharge characteristics under transient fault conditions when it is tied to the grid for wind integration.",
"corpus_id": 15380460,
"title": "Analysis of Equivalent Circuit of the Utility Scale Battery for Wind Integration"
} | {
"abstract": "To improve the use of lithium-ion batteries in electric vehicle (EV) applications, evaluations and comparisons of different equivalent circuit models are presented in this paper. Based on an analysis of the traditional lithium-ion battery equivalent circuit models such as the Rint, RC, Thevenin and PNGV models, an improved Thevenin model, named dual polarization (DP) model, is put forward by adding an extra RC to simulate the electrochemical polarization and concentration polarization separately. The model parameters are identified with a genetic algorithm, which is used to find the optimal time constant of the model, and the experimental data from a Hybrid Pulse Power Characterization (HPPC) test on a LiMn 2 O 4 battery module. Evaluations on the five models are carried out from the point of view of the dynamic performance and the state of charge (SoC) estimation. The dynamic performances of the five models are obtained by conducting the Dynamic Stress Test (DST) and the accuracy of SoC estimation with the Robust Extended Kalman Filter (REKF) approach is determined by performing a Federal Urban Driving Schedules (FUDS) experiment. By comparison, the DP model has the best dynamic performance and provides the most accurate SoC estimation. Finally, sensitivity of the different SoC initial values is investigated based on the accuracy of SoC estimation with the REKF approach based on the DP model. It is clear that the errors resulting from the SoC initial value are significantly reduced and the true SoC is convergent within an acceptable error.",
"corpus_id": 15980747,
"title": "Evaluation of Lithium-Ion Battery Equivalent Circuit Models for State of Charge Estimation by an Experimental Approach"
} | {
"abstract": "Embedded systems are at the core of many security-sensitive and safety-critical applications, including automotive, industrial control systems, and critical infrastructures. Existing protection mechanisms against (software-based) malware are inflexible, too complex, expensive, or do not meet real-time requirements. We present TyTAN, which, to the best of our knowledge, is the first security architecture for embedded systems that provides (1) hardware-assisted strong isolation of dynamically configurable tasks and (2) real-time guarantees. We implemented TyTAN on the Intel® Siskiyou Peak embedded platform and demonstrate its efficiency and effectiveness through extensive evaluation.",
"corpus_id": 14748680,
"score": -1,
"title": "TyTAN: Tiny trust anchor for tiny devices"
} |
{
"abstract": "Abstract Over-accumulation of reactive oxygen species (ROS) causes mitochondrial dysfunction and impairs the osteogenic potential of bone marrow-derived mesenchymal stem cells (BMMSCs). Selenium (Se) protects BMMSCs from oxidative stress-induced damage; however, it is unknown whether Se supplementation can promote the repair of osteoporotic bone defects by rescuing the impaired osteogenic potential of osteoporotic BMMSCs (OP-BMMSCs). In vitro treatment with sodium selenite (Na2SeO3) successfully improved the osteogenic differentiation of OP-BMMSCs, as demonstrated by increased matrix mineralization and up-regulated osteogenic genes expression. More importantly, Na2SeO3 restored the impaired mitochondrial functions of OP-BMMSCs, significantly up-regulated glutathione peroxidase 1 (GPx1) expression and attenuated the intracellular ROS and mitochondrial superoxide. Silencing of Gpx1 completely abrogated the protective effects of Na2SeO3 on mitochondrial functions of OP-BMMSCs, suggesting the important role of GPx1 in protecting OP-BMMSCs from oxidative stress. We further fabricated Se-modified bone cement based on silk fibroin and calcium phosphate cement (SF/CPC). After 8 weeks of implantation, Se-modified bone cement significantly promoted bone defect repair, evidenced by the increased new bone tissue formation and enhanced GPx1 expression in ovariectomized rats. These findings revealed that Se supplementation rescued mitochondrial functions of OP-BMMSCs through activation of the GPx1-mediated antioxidant pathway, and more importantly, supplementation with Se in SF/CPC accelerated bone regeneration in ovariectomized rats, representing a novel strategy for treating osteoporotic bone fractures or defects.",
"corpus_id": 256890314,
"title": "Selenium-modified bone cement promotes osteoporotic bone defect repair in ovariectomized rats by restoring GPx1-mediated mitochondrial antioxidant functions"
} | {
"abstract": "Selenium (Se), an essential mineral, plays a major role in cellular redox status and may have beneficial effects on bone health. The objective of this study was to determine whether Se deficiency affects redox status and bone microarchitecture in a mouse model. Thirty-three male C57BL/6J mice, 18 wk old, were randomly assigned to 3 groups. Mice were fed either a purified, Se-deficient diet (SeDef) containing ∼0.9 μg Se/kg diet, or Se-adequate diets containing ∼100 μg Se/kg diet from either selenomethionine (SeMet) or pinto beans (SeBean) for 4 mo. The Se concentration, glutathione peroxidase (GPx1) activity, and GPx1 mRNA in liver were lower in the SeDef group than in the SeMet or SeBean group. The femoral trabecular bone volume/total volume and trabecular number were less, whereas trabecular separation was greater, in the SeDef group than in either the SeMet or SeBean group (P < 0.05). Bone structural parameters between the SeMet and SeBean groups did not differ. Furthermore, Serum concentrations of C-reactive protein, tartrate-resistant acid phosphatase, and intact parathyroid hormone were higher in the SeDef group than in the other 2 groups. These findings demonstrate that Se deficiency is detrimental to bone microarchitecture by increasing bone resorption, possibly through decreasing antioxidative potential.",
"corpus_id": 71919,
"title": "Selenium deficiency decreases antioxidative capacity and is detrimental to bone microarchitecture in mice."
} | {
"abstract": "An optimal diet for rodents in chemical carcinogenicity studies should be nutritionally adequate for growth and maintenance without excesses of high energy and growth-enhancing nutrients. Purified diets are expensive, and standardized purified diets for long-term studies are not yet established. Purified diets caused periportal lipidosis, hemorrhagic diseases and calcification of tissues in rodents. Diet restriction will result in consumption of most food during the resting phase. This will cause increased activity during the resting phase with a shift of nocturnal cycle and associated changes in physiological processes. Diet restriction may modify the carcinogenic responses to chemicals, and the practice is labor intensive. Decreasing the fat and protein content to adequate levels with a slight increase in fiber content and making the diet available only during the normal feeding period (night) may decrease the energy consumption, slow the growth and lower the body weight gain by 10-20%, with a substantial decrease in the prevalence of spontaneous tumors in the pituitary and mammary glands. We should take advantage of the biological similarities between rodents and humans to enhance the utility of rodent studies; however, mimicking the diet and feeding procedures of humans without a thorough understanding of the physiology of the altered rodent may not be useful. Contaminant concentrations of the diets should be as low as is practical. Each lot of diet should be analyzed for macronutrients and labile micronutrients with complete micronutrient analyses of randomly selected lots.",
"corpus_id": 4442892,
"score": -1,
"title": "Rodent diets for carcinogenesis studies."
} |
{
"abstract": "The Experience of Older Homeless Females with Type 2 Diabetes by Joan Downes MA, Walden University, 2015 BS, University of Phoenix, 2012 Dissertation Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy",
"corpus_id": 196573458,
"title": "The Experience of Older Homeless Females with Type 2 Diabetes"
} | {
"abstract": "In the United States, approximately 28 million persons have diabetestriple the number from 3 decades agoand one third of Americans with type 2 diabetes mellitus are unaware of their condition (1). The complications from diabetes, including heart disease, stroke, amputations, kidney disease, eye disease, low quality of life, and poor mental health, are manifold and prolific. In addition to its toll on health, diabetes costs the United States $245 billion annually, including $176 billion from direct health care costs (2). There have been positive improvements in the quality of diabetes care in the United States (3), and consequently, population-level decreases in the rates of complications among persons with diabetes (notably, myocardial infarction, deaths from hyperglycemic crisis, stroke, amputations, and end-stage renal disease) (4). Although the rates of these complications may be decreasing, the residual rates are still high. More importantly, the absolute numbers of persons with diabetes-related complications have increased over time, driven by the growing prevalence of diabetes, and even conservative scenarios project that the number of Americans with diabetes will nearly triple by 2050 (5). This is especially concerning because roughly 1 in 5 health care dollars already go to treating diabetes; 25% of Medicare's annual budget is used on persons with diabetes; and from 1997 to 2006, diabetes accounted for the single biggest contributor to inflation-adjusted health care spending growth among Medicare beneficiaries (6). In this context, attention to diabetes prevention should be a high priority because even small reductions in the incidence of chronic diseases, such as diabetes, can have a substantial effect on future prevalence of disease (5). 
As summarized in the systematic review by Selph and colleagues (7) in this issue, considerable science exists for prevention or delay of type 2 diabetes in persons with prediabetes (impaired glucose tolerance or impaired fasting glucose) through treatment with lifestyle intervention (6 studies), pharmacologic intervention (8 studies), or multifactorial intervention (2 studies). Treatment duration ranged from 6 months to 6 years with follow-up extending up to 23 years, and lifestyle intervention reduced risk for progression to type 2 diabetes by an average of 45%. Most trials of treatment of impaired glucose tolerance or impaired fasting glucose were not sufficiently powered to find effects on all-cause or cardiovascular disease (CVD) mortality, although lifestyle modification was associated with a decreased risk for both outcomes after 23 years in 1 trial. Although lifestyle interventions were not highlighted by Selph and colleagues, they also have been shown to have positive effects on regression from prediabetes to normoglycemia, CVD risk factors (such as weight, blood pressure, lipids, and inflammatory markers), incidence of the metabolic syndrome, urinary incontinence in women, and quality of life. The strong evidence backing diabetes prevention unequivocally calls for aggressive implementation, and adequate integration of community and clinic resources and infrastructure for delivery of effective lifestyle interventions are imperatively needed. However, 90% of the 86 million Americans with prediabetes are not aware of their condition (8), and the first step to resolving this should be a national policy on screening and detection of prediabetes. Recommendations for prediabetes and diabetes screening have remained unresolved and vary from the position of the American Diabetes Association, which recommends broader screening by targeting everyone aged 45 years or older or persons at high risk for diabetes, to the highly conservative position of the U.S. Preventive Services Task Force (USPSTF), which recommends screening only adults with sustained treated or untreated hypertension. However, the review by Selph and colleagues, done to support an upcoming update of USPSTF recommendations, has concluded that there is moderate certainty that measuring blood glucose to detect prediabetes or diabetes has net benefits and no significant harms in adults at high risk for diabetes. In a draft of these recommendations, the USPSTF broadened its criteria for screening but has not yet finalized recommendations after a public comment period that ended in early November 2014 (www.uspreventiveservicestaskforce.org/Page/Document/RecommendationStatementDraft/screening-for-abnormal-glucose-and-type-2-diabetes-mellitus). A national policy to screen all persons at high risk for diabetes (closer to the American Diabetes Association policy) would help identify those with undetected diabetes and prediabetes. The initial treatment of these conditions is lifestyle intervention followed by metformin, and evidence for benefits of these treatments exists. Much of the debate around screening for prediabetes and diabetes focuses on the lack of direct evidence from randomized, controlled trials comparing screened with unscreened persons on a hard outcome, such as CVD or mortality. However, such a definitive trial of hyperglycemia screening is infeasible, unrealistic, and arguably unethical given the strong evidence for diabetes prevention among persons with impaired glucose tolerance or impaired fasting glucose. Intriguingly, the USPSTF used evidence from diabetes prevention trials to recommend broad screening for obesity but has been cautious about recommending screening for diabetes. Furthermore, the effect of early identification and treatment on multiple diabetes complications, above and beyond CVD and mortality (such as retinopathy, quality of life, and health care costs), should be considered in the big picture.
Evidence from studies other than randomized, controlled trials should also play an important role in helping to resolve the screening debate. Systematic reviews of the economics of screening for diabetes or dysglycemia and several published models have concluded that screening for hyperglycemia followed by treatment with lifestyle intervention or metformin among persons with diabetes and prediabetes would be cost-effective (9). Evidence-based medicine is a set of principles and methods intended to ensure that, to the greatest extent possible, population-based policies and individual medical decisions are consistent with evidence of effectiveness and benefit (10). To the greatest extent possible, short of a direct randomized, controlled trial testing of screening on a hard outcome, the overall body of data supports a broad policy, closer to the American Diabetes Association position, for early detection of prediabetes and diabetes. Furthermore, reliable tests exist, the risks associated with screening are small or none, and the cost-effectiveness of screening is well within the justifiable range (9). Detection of prediabetes and diabetes would offer a strategic window of opportunity to intervene on other CVD risk factors in an integrated manner (1). Without screening, 90% of prediabetes cases will remain undetected, and we will continue to miss the opportunity to aggressively implement strategies to prevent diabetes and remain unable to slow the growing costs of managing diabetes and its complications.",
"corpus_id": 2264282,
"title": "Screening for Hyperglycemia: The Gateway to Diabetes Prevention and Management for All Americans"
} | {
"abstract": "Aims Understanding type 2 diabetes mellitus is critical for designing effective diabetes prevention policies in Qatar and the Middle East. Methods Using the Qatar 2012 WHO STEPwise approach to surveillance survey, a subsample of 1224 Qatari participants aged 18–64 years was selected. Subjects had their fasting blood glucose levels tested, had not been diagnosed with or treated for diabetes, had a fasting time >12 hours and were not pregnant. We applied a hypothesized structural equation model (SEM) to assess sociodemographic, behavioral, anthropometric and metabolic variables affecting persons with type 2 diabetes mellitus. Results There is a direct effect of triglyceride levels (0.336) and body mass index (BMI) (0.164) on diabetes status. We also found that physical activity levels negatively affect BMI (−0.148) and positively affect high-density lipoprotein (HDL) (0.106); sociodemographic background negatively affects diet (−0.522) and BMI (−0.352); HDL positively affects total cholesterol (0.230) and has a negative effect on BMI (−0.108), triglycerides (−0.128) and waist circumference (−0.104). Diet has a positive effect on triglycerides (0.281) while family history of diabetes negatively affects total cholesterol (−0.104). BMI has a positive effect on waist circumference (0.788) and mediates the effects of physical activity over diabetes status (−0.028). BMI also mediates the effects that sociodemographic factors (−0.058) and physical activity (−0.024) have on diabetes status. BMI and HDL (−0.002) together mediate the effect of physical activity on diabetes status and similarly HDL and tryglycerides (−0.005) also mediate the effect of physical activity on diabetes status. Finally diet and tryglycerides mediate the effects that sociodemographic factors have on diabetes status (−0.049). Conclusions This study's main finding is that triglyceride levels and BMI are the main variables directly affecting diabetes status in the Qatari population.",
"corpus_id": 6985584,
"score": -1,
"title": "Structural equation model for estimating risk factors in type 2 diabetes mellitus in a Middle Eastern setting: evidence from the STEPS Qatar"
} |
{
"abstract": "any maturity, and better in-sample fit for options with maturity equal to the maturity span of the implied trees. Deltas calculated from IVT are consistently lower (higher) than Black-Scholes deltas for both European and American calls (puts) in absolute term. The reverse holds true for GBT deltas. These empirical findings about the relative performance of GBT, IVT, and Standard Black-Scholes models are important to practitioners as they indicate that different methods should be used for different applications, and some cautions should be exercised. © 2002 Wiley Periodicals, Inc. Jrl Fut Mark 22:601–626, 2002",
"corpus_id": 251220096,
"title": "Institutional Knowledge at Singapore Management University Institutional Knowledge at Singapore Management University Pricing Options Using Implied Trees Pricing Options Using Implied Trees"
} | {
"abstract": "In this paper, the boundary conditions for put-call parity are extended to take into account the potential rational early exercise of an option and the possibility that dividends and capitalisation changes will differ from expectations. A series of statistical tests provide the basis for a conclusion in favour of put-call parity and the hypothesised risk-return relationships in the Australian exchange traded options market over the sample period.",
"corpus_id": 153968643,
"title": "Put Call Parity: An Extension of Boundary Conditions"
} | {
"abstract": "Abstract Put-call parity theory in the presence of dividends is extended to take account of transaction costs and new, testable models are derived. These models are used to test the efficiency of the London Traded Options Market using synchronous option and share prices. When account is taken only of option spread, significant numbers of deviations from put-call parity are identified. When commission costs on options and shares are considered however, almost none of these deviations prove to be exploitable.",
"corpus_id": 154052680,
"score": -1,
"title": "Put-call parity theory and an empirical test of the efficiency of the London Traded Options Market"
} |
{
"abstract": "The quality of a 3D reconstruction obtained from stereo images depends on the accuracy of the stereo camera calibration. Continuous online camera calibration is key to enable a long maintenance-free operation of autonomous vehicles. Conventional algorithms for online calibration usually assume a static world. This, however, is often violated in urban scenarios. In this paper an algorithm is presented that determines the twelve degrees-of-freedom (12-DoF) extrinsic calibration of a stereo camera system in dynamic, urban scenarios. An Extended Kalman Filter (EKF) continuously estimates the extrinsic stereo camera calibration by tracking the position of salient points in 3D space that are observed in both cameras. The EKF predicts the 3D position of tracked points based on the estimated calibration and the vehicle's motion, which is estimated using an inertial navigation system (INS) and odometry sensors. However, only static points can be reliable predicted. A convolutional neural network (CNN) is applied to segment camera images on a pixel level. These segmented images are further converted to binary images labeling static and potentially dynamic pixels. Only salient points which are labeled as static are retained for estimation of the stereo camera calibration. While the stereo camera's extrinsic parameters are only observable under transformation and two independent rotations of the vehicle, we present a robust filter update scheme, that enables estimation of the 12-DoF extrinsic stereo camera calibration even in the absence of significant rotations. Test on urban roads show that minor road imperfections are sufficient to estimate all 12-DoF extrinsic parameters over time.",
"corpus_id": 3933198,
"title": "Continuous stereo camera calibration in urban scenarios"
} | {
"abstract": "Accurate stereo camera calibration is crucial for 3D reconstruction from stereo images. In this paper, we propose an algorithm for continuous online recalibration of all extrinsic parameters of a stereo camera, which is rigidly mounted on an autonomous vehicle. The algorithm estimates the six degrees-of-freedom (6-DoF) of the transformation from the vehicle coordinate system to the coordinate system of the stereo camera and at the same time the relative 6-DoF transformation between the two camera sensors. Salient points in the environment that are observed by both cameras are tracked over time in 3D space. An Unscented Kalman Filter (UKF) is applied to recursively estimate the extrinsic stereo camera calibration and the 3D position of all observed points. The projections of the points and the measured vehicle motion, which is estimated using an inertial measurement unit (IMU), are given as input. The observability of the stereo camera calibration states is analyzed to identify critical vehicle motion sequences. Results with real world data show that the algorithm is capable of continuously estimating the stereo camera calibrations in spite of large initial errors and varying extrinsic parameters.",
"corpus_id": 16219025,
"title": "Continuous extrinsic online calibration for stereo cameras"
} | {
"abstract": "Bundle adjustment constitutes a large, nonlinear least-squares problem that is often solved as the last step of feature-based structure and motion estimation computer vision algorithms to obtain optimal estimates. Due to the very large number of parameters involved, a general purpose least-squares algorithm incurs high computational and memory storage costs when applied to bundle adjustment. Fortunately, the lack of interaction among certain subgroups of parameters results in the corresponding Jacobian being sparse, a fact that can be exploited to achieve considerable computational savings. This article presents sba, a publicly available C/C++ software package for realizing generic bundle adjustment with high efficiency and flexibility regarding parameterization.",
"corpus_id": 474253,
"score": -1,
"title": "SBA: A software package for generic sparse bundle adjustment"
} |
{
"abstract": "The rise of social media services has changed the ways in which users can communicate and consume content online. Whilst online social networks allow for fast and convenient delivery of knowledge, users are prone to information overload when too much information is presented for them to read and process. \n \nAutomatic text summarisation is a tool to help mitigate information overload. In automatic text summarisation, short summaries are generated algorithmically from extended text, such as news articles or scientific papers. This thesis addresses the challenges in applying text summarisation to the Twitter social network. It also goes beyond text, exploiting additional information that is unique to social networks to create summaries which are personal to an intended reader. \n \nUnlike previous work in tweet summarisation, the experiments here address the home timelines of readers, which contain the incoming posts from authors to whom they have explicitly subscribed. \n \nA novel contribution is made in this work the form of a large gold standard ($19,350$ tweets), the majority of which will be shared with the research community. The gold standard is a collection of timelines that have been subjectively annotated by the readers to whom they belong, allowing fair evaluation of summaries which are not limited to tweets of general interest, but which are specific to the reader. \n \nWhere the home timeline is used by professional users for social media analysis, automatic text summarisation can be applied to give results which beat all baselines. In the general case, where no limitation is placed on the types of readers, personalisation features which exploit the relationship between author and reader and the reader's own previous posts, were shown to outperform both automatic text summarisation and all baselines.",
"corpus_id": 29184296,
"title": "A ranking approach to summarising Twitter home timelines"
} | {
"abstract": "This paper describes an approach to improve summaries for a collection of Twitter posts created using the Phrase Reinforcement (PR) Algorithm (Sharifi et al., 2010a). The PR algorithm often generates summaries with excess text and noisy speech. We parse these summaries using a dependency parser and use the dependencies to eliminate some of the excess text and build better-formed summaries. We compare the results to those obtained using the PR Algorithm.",
"corpus_id": 394071,
"title": "Better Twitter Summaries?"
} | {
"abstract": "I. The Administrative State, Democratic Constitutionalism, and the Rule of Law The Problem: Retrofitting the American Administrative State into the Constitutional Scheme Public Administration and American Constitutionalism The American Public Administrative \"Orthodoxy\" \"Reinvented\" Public Administration: Toward a New Public Management US Constitutionalism Controlling Administrative Discretion: The Role of Law Judicial Responses to the Administrative State Conclusion: \"Retrofitting\" as an Incremental Project Administrative Law and the Judiciary Today The Commerce Clause Delegated Power The Federal Government's Administrative Law Framework Judicial Review of Agency Action Review of Informational Activity Adjudications Rulemaking Review of Executive Orders Alternatives to Litigation Regulatory Negotiation Environmental Law: Changing Public Administration Practices Judicial Review of Agency Actions Interpretation of Environmental Laws The Growth of Environmental Conflict Resolution II. 
The Constitutionalization of Public Administrative Action The Individual as Client and Customer of Public Agencies The Public Administration of Services Constraining Clients: The Problem of Conditional Benefits Clients and Customers in Court: The Traditional Response The Demise of the Doctrine of Privilege A Constitutional Limit to Clients' and Customers' Interests in Public Benefits The Case Law in Sum Impact on Public Administration Street-Level Encounters The Need for Street-Level Intuition versus the Fear of Arbitrary or Discriminatory Administration and Law Enforcement The Fourth Amendment Impact on Public Administration The Individual as Government Employee or Contractor Public Administrative Values and Public Employment Constitutional Values in Public Employment Considering Whether the Constitution Should Apply to Public Employment Judicial Doctrines The Structure of Public Employees' Constitutional Rights Today Conclusion: The Courts, Public Personnel Management, and Contracting The Individual as Inmate in Administrative Institutions Administrative Values and Practices Total Institutions and Public Administrative Values Theory and Practice in Public Total Institutions Prior to Reform in the 1970s Transformational Cases Subsequent Developments: The Right to Treatment and Prisoners' Rights Today Implementation and Impact Conclusion: Consequences for Public Administrators The Individual as Antagonist of the Administrative State The Antagonist of the Administrative State The Antagonist in Court: Traditional Approaches Public Administrators' Liability and Immunity Suing States and Their Employees Failure to Train of to Warn Public Law Litigation and Remedial Law Standing State Action Doctrine, Outsourcing, and Private Entities' Liability for Constitutional Torts Law, Courts, and Public Administration Judicial Supervision of Public Administration Administrative Values and Constitutional Democracy Assessing the Impact of Judicial Supervision on Public 
Administration The Next Steps: Public Service Education and Training in Law",
"corpus_id": 153049188,
"score": -1,
"title": "Public Administration and Law"
} |
{
"abstract": "Selenium-binding protein 1 (Selenbp1) is a 2,3,7,8-tetrechlorodibenzo-p-dioxin inducible protein whose function is yet to be comprehensively elucidated. As the highly homologous isoform, Selenbp2, is expressed at low levels in the kidney, it is worthwhile comparing wild-type C57BL mice and Selenbp1-deficient mice under dioxin-free conditions. Accordingly, we conducted a mouse metabolomics analysis under non-dioxin-treated conditions. DNA microarray analysis was performed based on observed changes in lipid metabolism-related factors. The results showed fluctuations in the expression of numerous genes. Real-time RT-PCR confirmed the decreased expression levels of the cytochrome P450 4a (Cyp4a) subfamily, known to be involved in fatty acid ω- and ω-1 hydroxylation. Furthermore, peroxisome proliferator-activated receptor-α (Pparα) and retinoid-X-receptor-α (Rxrα), which form a heterodimer with Pparα to promote gene expression, were simultaneously reduced. This indicated that reduced Cyp4a expression was mediated via decreased Pparα and Rxrα. In line with this finding, increased levels of leukotrienes and prostaglandins were detected. Conversely, decreased hydrogen peroxide levels and reduced superoxide dismutase (SOD) activity supported the suppression of the renal expression of Sod1 and Sod2 in Selenbp1-deficient mice. Therefore, we infer that ablation of Selenbp1 elicits oxidative stress caused by increased levels of superoxide anions, which alters lipid metabolism via the Pparα pathway.",
"corpus_id": 235228848,
"title": "Ablation of Selenbp1 Alters Lipid Metabolism via the Pparα Pathway in Mouse Kidney"
} | {
"abstract": "Dioxin and related chemicals alter the expression of a number of genes by activating the aryl hydrocarbon receptors (AHR) to produce a variety of disorders including hepatotoxicity. However, it remains largely unknown how these changes in gene expression are linked to toxicity. To address this issue, we initially examined the effect of 2,3,7,8-tetrachrolodibenzo-p-dioxin (TCDD), a most toxic dioxin, on the hepatic and serum metabolome in male pubertal rats and found that TCDD causes many changes in the level of fatty acids, bile acids, amino acids, and their metabolites. Among these findings was the discovery that TCDD increases the content of leukotriene B4 (LTB4), an inducer of inflammation due to the activation of leukocytes, in the liver of rats and mice. Further analyses suggested that an increase in LTB4 comes from a dual mechanism consisting of an induction of arachidonate lipoxygenase-5, a rate-limiting enzyme in LTB4 synthesis, and the down-regulation of LTC4 synthase, an enzyme that converts LTA4 to LTC4. The above changes required AHR activation, because the same was not observed in AHR knock-out rats. In agreement with LTB4 accumulation, TCDD caused the marked infiltration of neutrophils into the liver. However, deleting LTB4 receptors (BLT1) blocked this effect. A TCDD-produced increase in the mRNA expression of inflammatory markers, including tumor-necrosis factor and hepatic damage, was also suppressed in BLT1-null mice. The above observations focusing on metabolomic changes provide novel evidence that TCDD accumulates LTB4 in the liver by an AHR-dependent induction of LTB4 biosynthesis to cause hepatotoxicity through neutrophil activation.",
"corpus_id": 2318918,
"title": "Dioxin-induced increase in leukotriene B4 biosynthesis through the aryl hydrocarbon receptor and its relevance to hepatotoxicity owing to neutrophil infiltration"
} | {
"abstract": "Graphical abstract Figure. No Caption available. ABSTRACT Many forms of the toxic effects produced by dioxins and related chemicals take place following activation of the aryl hydrocarbon receptor (AHR). Our previous studies have demonstrated that treating pregnant rats with 2,3,7,8‐tetrachlorodibenzo‐p‐dioxin (TCDD), a highly toxic dioxin, attenuates the pituitary expression of gonadotropins to reduce testicular steroidogenesis during the fetal stage, resulting in the impairment of sexually‐dimorphic behaviors after the offspring reach maturity. To investigate the contribution of AHR to these disorders, we examined the effects of TCDD on AHR‐knockout (AHR‐KO) Wistar rats. When pregnant AHR‐heterozygous rats were given an oral dose of 1 &mgr;g/kg TCDD at gestational day (GD) 15, TCDD reduced the expression of pituitary gonadotropins and testicular steroidogenic proteins in male wild‐type fetuses at GD20 without affecting body weight, sex ratio and litter size. However, the same defect did not occur in AHR‐KO fetuses. Further, fetal exposure to TCDD impaired the activity of masculine sexual behavior after reaching adulthood only in the wild‐type offspring. Also, in female offspring, not only the fetal gonadotropins production but also sexual dimorphism, such as saccharin preference, after growing up were suppressed by TCDD only in the wild‐type. Interestingly, in the absence of TCDD, deleting AHR reduced masculine sexual behavior, as well as fetal steroidogenesis of the pituitary‐gonadal axis. These results provide novel evidence that 1) AHR is required for TCDD‐produced defects in sexually‐dimorphic behaviors of the offspring, and 2) AHR signaling plays a role in gonadotropin synthesis during the developmental stage to acquire sexual dimorphism after reaching adulthood.",
"corpus_id": 21677736,
"score": -1,
"title": "The aryl hydrocarbon receptor is indispensable for dioxin‐induced defects in sexually‐dimorphic behaviors due to the reduction in fetal steroidogenesis of the pituitary‐gonadal axis in rats"
} |
{
"abstract": "This paper describes a method for hierarchical reinforcement learning in which high-level policies automatically discover subgoals, and low-level policies learn to specialize for different subgoals. Subgoals are represented as desired abstract observations which cluster raw input data. High-level value functions cover the state space at a coarse level; low-level value functions cover only parts of the state space at a fine-grained level. An experiment shows that this method outperforms several flat reinforcement learning methods. A second experiment shows how problems of partial observability due to observation abstraction can be overcome using high-level policies with memory.",
"corpus_id": 1530606,
"title": "Hierarchical reinforcement learning with subpolicies specializing for learned subgoals"
} | {
"abstract": "This paper presents reinforcement learning with a Long Short-Term Memory recurrent neural network: RL-LSTM. Model-free RL-LSTM using Advantage (λ) learning and directed exploration can solve non-Markovian tasks with long-term dependencies between relevant events. This is demonstrated in a T-maze task, as well as in a difficult variation of the pole balancing task.",
"corpus_id": 6627108,
"title": "Reinforcement Learning with Long Short-Term Memory"
} | {
"abstract": "Multi-agent learning provides a potential solution for frameworks to learn and simulate traffic behaviors. This paper proposes a novel architecture to learn multiple driving behaviors in a traffic scenario. The proposed architecture can learn multiple behaviors independently as well as simultaneously. We take advantage of the homogeneity of agents and learn in a parameter sharing paradigm. To further speed up the training process asynchronous updates are employed into the architecture. While learning different behaviors simultaneously, the given framework was also able to learn cooperation between the agents, without any explicit communication. We applied this framework to learn two important behaviors in driving: 1) Lane-Keeping and 2) Over-Taking. Results indicate faster convergence and learning of a more generic behavior, that is scalable to any number of agents. When compared the results with existing approaches, our results indicate equal and even better performance in some cases.",
"corpus_id": 53718535,
"score": -1,
"title": "Parameter Sharing Reinforcement Learning Architecture for Multi Agent Driving Behaviors"
} |
{
"abstract": "..................................................................................................................... vii CHAPTER I ........................................................................................................................",
"corpus_id": 210438743,
"title": "Two of the Same? Infants' Conceptual Representation of Faces Based Upon Gender, Race, and Kind Information"
} | {
"abstract": "Early in the first year of life infants exhibit equivalent performance distinguishing among people within their own race and within other races. However, with development and experience, their face recognition skills become tuned to groups of people they interact with the most. This developmental tuning is hypothesized to be the origin of adult face processing biases including the other-race bias. In adults the other-race bias has also been associated with impairments in facial emotion processing for other-race faces. The present investigation aimed to show perceptual narrowing for other-race faces during infancy and to determine whether the race of a face influences infants' ability to match emotional sounds with emotional facial expressions. Behavioral (visual-paired comparison; VPC) and electrophysiological (event-related potentials; ERPs) measures were recorded in 5-month-old and 9-month-old infants. Behaviorally, 5-month-olds distinguished faces within their own race and within another race, whereas 9-month-olds only distinguish faces within their own race. ERPs were recorded while an emotion sound (laughing or crying) was presented prior to viewing an image of a static African American or Caucasian face expressing either a happy or a sad emotion. Consistent with behavioral findings, ERPs revealed race-specific perceptual processing of faces and emotion/sound face congruency at 9 months but not 5 months of age. In addition, from 5 to 9 months, the neural networks activated for sound/face congruency were found to shift from an anterior ERP component (Nc) related to attention to posterior ERP components (N290, P400) related to perception.",
"corpus_id": 4510235,
"title": "Building biases in infancy: the influence of race on face and voice emotion matching."
} | {
"abstract": "part one Social work roles in medication management: history and overview of social work roles in medication management defining effective collaboration. Part two A primer on psychopharmacology basic principles neurotransmission (Part Contents)",
"corpus_id": 57615837,
"score": -1,
"title": "The social worker & psychotropic medication : toward effective collaboration with mental health clients, families, and providers"
} |
{
"abstract": "Cognitive Radio (CR) is an emerging and promising technology which is predicted to solve the problem of spectrum shortage by utilization of the spectrum efficiently by exploiting the licensed spectrum white spaces. Video on demand (VoD) is a very popular present day service. It is a well-known fact, that video on demand internet service needs a large amount of bandwidth. CR WMNs is envisaged to provide the much needed bandwidth for video streaming as CR WMNs has the ability to access a large part of the under-utilized licensed spectrum. In CR WMNs interference is a critical issue which degrades cognitive radio wireless mesh networks (CR WMNs) performance. As the number of users rises, interference also increases resulting in low throughput. In this paper first an analytical model for CR WMNs has been presented. We have also presented an efficient VoD model and an optimizing approach which minimizes interference to provide a much needed higher network capacity to VoD services. Simulation results show effectiveness of our proposed approach as there is an increase in the number of concurrent VoD sessions.",
"corpus_id": 15213612,
"title": "An Efficient Video on Demand System over Cognitive Radio Wireless Mesh Networks"
} | {
"abstract": "Wireless mesh networks (WMNs) are one of the emerging technologies. Their capability for self-organization significantly reduces the complexity of network deployment and maintenance, and thus, requires minimal upfront investment. These networks consist of simple mesh routers and mesh clients, where mesh routers have minimal mobility and form the backbone of WMNs. They provide network access for both mesh and conventional clients. IEEE 802.16 standard (www.ieee 802.org/16) is a recent standard for broadband wireless access networks, which includes a mesh mode operation for distributed channel access of peering nodes. In accordance with the IEEE 802.16 MAC protocol, time is partitioned into frames of fixed duration, each one divided into two sub-frames, for control and data transmission, respectively. Slots in the control sub-frame are used by nodes to negotiate the schedule of transmissions in data sub-frames, and are accessed by means of a collision-free distributed procedure, namely the mesh election procedure. In this paper, we have analyzed the performance of the mesh election procedure by means of simulations, and identify the system configuration parameters that have the most impact on the performance of control message transmission using distributed scheduling algorithm. \n \n \n \n Key words: Mesh, 802.16, distributed scheduling, performance.",
"corpus_id": 508190,
"title": "Link establishment and performance evaluation in IEEE 802.16 wireless mesh networks"
} | {
"abstract": "Several distributed scheduling policies for wireless networks that achieve a provable efficiency ratio have been developed recently. These policies are characterized by the selection, at the onset of every time slot, of a subset of links according to the interference model of the network and the length of the links' queues. The selected subset of links is for the next time slot only. In this paper, we propose a new framework for the stability analysis of distributed scheduling policies that allow links to transmit data packets in any future time slots by means of slot reservations. Within this framework, we propose and analyze a reservation-based distributed scheduling policy for IEEE 802.16 mesh networks. We find sufficient conditions for the stability of the network when the traffic uses one hop. Specifically, we prove a lower bound for its efficiency ratio by evaluating the stability conditions obtained from our proposed framework. Finally, we compare this lower bound with the capacity achieved in simulation results.",
"corpus_id": 2248396,
"score": -1,
"title": "Reservation-based distributed scheduling in wireless networks"
} |
{
"abstract": "This tutorial describes the geometry and algorithms for generating line drawings from 3D models, focusing on occluding contours. \nThe geometry of occluding contours on meshes and on smooth surfaces is described in detail, together with algorithms for extracting contours, computing their visibility, and creating stylized renderings and animations. Exact methods and hardware-accelerated fast methods are both described, and the trade-offs between different methods are discussed. The tutorial brings together and organizes material that, at present, is scattered throughout the literature. It also includes some novel explanations, and implementation tips. \nA thorough survey of the field of non-photorealistic 3D rendering is also included, covering other kinds of line drawings and artistic shading.",
"corpus_id": 52912187,
"title": "Line Drawings from 3D Models"
} | {
"abstract": "We present an approach for clutter control in NPR line drawing where measures of view and drawing complexity drive the simplification or omission of lines. We define two types of density information: the a-priori density and the causal density, and use them to control which parts of a drawing need simplification. The a-priori density is a measure of the visual complexity of the potential drawing and is computed on the complete arrangement of lines from the view. This measure affords a systematic approach for characterizing the structure of cluttered regions in terms of geometry, scale, and directionality. The causal density measures the spatial complexity of the current state of the drawing as strokes are added, allowing for clutter control through line omission or stylization. We show how these density measures permit a variety of pictorial simplification styles where complexity is reduced either uniformly, or in a spatially-varying manner through indication.",
"corpus_id": 2422831,
"title": "Density measure for line-drawing simplification"
} | {
"abstract": "The Selective Search approach processes large document collections efficiently by partitioning the collection into topically homogeneous groups (shards), and searching only a few shards that are estimated to contain relevant documents for the query. The ability to identify the relevant shards for the query, directly impacts Selective Search performance. We thus investigate three new approaches for the shard ranking problem, and three techniques to estimate how many of the top shards should be searched for a query (shard rank cutoff estimation). We learn a highly effective shard ranking model using the popular learning-to-rank framework. Another approach leverages the topical organization of the collection along with pseudo relevance feedback (PRF) to improve the search performance further. Empirical evaluation using a large collection demonstrates statistically significant improvements over strong baselines. Experiments also show that shard cutoff estimation is essential to balance search precision and recall.",
"corpus_id": 29253845,
"score": -1,
"title": "Improving Shard Selection for Selective Search"
} |
{
"abstract": "The genetic structure of Caiman crocodilus was investigated using a 1085 bp mtDNA fragment of the cytochrome b gene. Inferences were based on 125 individuals from nine localities in Peru, Brazil and French Guiana. With the exception of Mamiraua Lake, Anavilhanas Archipelago and the Tapara Community which show a signal of demographic expansion, the sampled localities are in a mutation-drift genetic equilibrium. Divergence between the Amazon basin and extra-Amazon basin localities is significant; however, inference from Nested Clade Analysis cannot distinguish between continuous range expansion, long distance colonization or past fragmentation; however, past fragmentation is unlikely due to low number of mutational steps separating these two regions. The divergence is probably maintained by the reduced ability of C. crocodilus to cross salt water barriers. Within the Amazon basin, continuous range expansion without isolation-by-distance is the most likely process causing genetic structuring. The observed genetic patterns are compatible with the ecology of C. crocodilus, and history of human exploitation. As commercial hunting depleted more valuable species, C. crocodilus expanded its range and ecological niche, prompting hunters to harvest it. Following a period of intense hunting, C. crocodilus is now experiencing recovery and a second population expansion especially in protected areas.",
"corpus_id": 53962162,
"title": "Population genetic analysis of Caiman crocodilus (Linnaeus, 1758) from South America"
} | {
"abstract": "Jaú National Park is a large rain forest reserve that contains small populations of four caiman species. We sampled crocodilian populations during 30 surveys over a period of four years in five study areas. We found the mean abundance of caiman species to be very low (1.0 +/- 0.5 caiman/km of shoreline), independent of habitat type (river, stream or lake) and season. While abundance was almost equal, the species' composition varied in different waterbody and study areas. We analysed the structure similarity of this assemblage. Lake and river habitats were the most similar habitats, and inhabited by at least two species, mainly Caiman crocodilus and Melanosuchus niger. However, those species can also inhabit streams. Streams were the most dissimilar habitats studied and also had two other species: Paleosuchus trigonatus and P. palpebrosus. The structure of these assemblage does not suggest a pattern of species associated and separated by habitat. Trends in species relationships had a negative correlation with species of similar size, C. crocodilus and P. trigonatus, and an apparent complete exclusion of M. niger and P. trigonatus. Microhabitat analysis suggests a slender habitat partitioning. P. trigonatus was absent from river and lake Igapó (flooded forest), but frequent in stream Igapó. This species was the most terrestrial and found in microhabitats similar to C. crocodilus (shallow waters, slow current). Melanosuchus niger inhabits deep, fast moving waters in different study areas. Despite inhabiting the same waterbodies in many surveys, M. niger and C. crocodilus did not share the same microhabitats. Paleosuchus palpebrosus was observed only in running waters and never in stagnant lake habitats. Cluster analysis revealed three survey groups: two constitute a mosaic in floodplains, (a) a cluster with both M. niger and C. crocodilus, and another (b) with only C. crocodilus. 
A third cluster (c) included more species, and the presence of Paleosuchus species. There was no significant difference among wariness of caimans between disturbed and undisturbed localities. However, there was a clear trend to increase wariness during the course of consecutive surveys at four localities, suggesting that we, more than local inhabitants, had disturbed caimans. The factors that are limiting caiman populations can be independent of human exploitation. Currently in Amazonia, increased the pressure of hunting, habitat loss and habitat alteration, and there is no evidence of widespread recovery of caiman populations. In large reserves as Jaú without many disturbance, most caiman populations can be low density, suggesting that in blackwater environments their recovery from exploitation should be very slow.",
"corpus_id": 318059,
"title": "Distribution and abundance of four caiman species (Crocodylia: Alligatoridae) in Jaú National Park, Amazonas, Brazil."
} | {
"abstract": "ABSTRACT Background: Cell-derived plasma microparticles (<1.5 μm) originating from various cell types have the potential to regulate thrombogenesis and inflammatory responses. The aim of this study was to test the hypothesis that microparticles generated during hepatic surgery co-regulate postoperative procoagulant and proinflammatory events. Methods: In 30 patients undergoing liver resection, plasma microparticles were isolated, quantitated, and characterized as endothelial (CD31+, CD41−), platelet (CD41+), or leukocyte (CD11b+) origin by flow cytometry and their procoagulant and proinflammatory activity was measured by immunoassays. Results: During liver resection, the total numbers of microparticles increased with significantly more Annexin V-positive, endothelial and platelet-derived microparticles following extended hepatectomy compared to standard and minor liver resections. After liver resection, microparticle tissue factor and procoagulant activity increased along with overall coagulation as assessed by thrombelastography. Levels of leukocyte-derived microparticles specifically increased in patients with systemic inflammation as assessed by C-reactive protein but are independent of the extent of liver resection. Conclusions: Endothelial and platelet-derived microparticles are specifically elevated during liver resection, accompanied by increased procoagulant activity. Leukocyte-derived microparticles are a potential marker for systemic inflammation. Plasma microparticles may represent a specific response to surgical stress and may be an important mediator of postoperative coagulation and inflammation.",
"corpus_id": 4959171,
"score": -1,
"title": "Endothelial- and Platelet-Derived Microparticles Are Generated During Liver Resection in Humans"
} |
{
"abstract": "The aim of the project is to improve our knowledge on the multiplicity of planet-host stars at wide physical separations. \nWe cross-matched approximately 6200 square degree area of the Southern sky imaged by the Visible Infrared Survey Telescope for Astronomy (VISTA) Hemisphere Survey (VHS) with the Two Micron All Sky Survey (2MASS) to look for wide common proper motion companions to known planet-host stars. We complemented our astrometric search with photometric criteria. \nWe confirmed spectroscopically the co-moving nature of seven sources out of 16 companion candidates and discarded eight, while the remaining one stays as a candidate. Among these new wide companions to planet-host stars, we discovered a T4.5 dwarf companion at 6.3 arcmin (~9000 au) from HIP70849, a K7V star which hosts a 9 Jupiter mass planet with an eccentric orbit. We also report two new stellar M dwarf companions to one G and one metal-rich K star. We infer stellar and substellar binary frequencies for our complete sample of 37 targets of 5.4+/-3.8% and 2.7+/-2.7% (1 sigma confidence level), respectively, for projected physical separations larger than ~60-160 au assuming the range of distances of planet-host stars (24-75 pc). These values are comparable to the frequencies of non planet-host stars. We find that the period-eccentricity trend holds with a lack of multiple systems with planets at large eccentricities (e > 0.2) for periods less than 40 days. However, the lack of planets more massive than 2.5 Jupiter masses and short periods (<40 days) orbiting single stars is not so obvious due to recent discoveries by ground-based transit surveys and space missions.",
"corpus_id": 118516214,
"title": "Binary frequency of planet-host stars at wide separations: A new brown dwarf companion to a planet-host star"
} | {
"abstract": "Keck/HIRES precision radial velocities of HD 207832 indicate the presence of two Jovian-type planetary companions in Keplerian orbits around this G star. The planets have minimum masses of Msin i = 0.56 MJup and 0.73 MJup, with orbital periods of ∼162 and ∼1156 days, and eccentricities of 0.13 and 0.27, respectively. Strömgren b and y photometry reveals a clear stellar rotation signature of the host star with a period of 17.8 days, well separated from the period of the radial velocity variations, reinforcing their Keplerian origin. The values of the semimajor axes of the planets suggest that these objects have migrated from the region of giant planet formation to closer orbits. In order to examine the possibility of the existence of additional (small) planets in the system, we studied the orbital stability of hypothetical terrestrial-sized objects in the region between the two planets and interior to the orbit of the inner body. Results indicated that stable orbits exist only in a small region interior to planet b. However, the current observational data offer no evidence for the existence of additional objects in this system.",
"corpus_id": 1137984,
"title": "THE LICK–CARNEGIE SURVEY: A NEW TWO-PLANET SYSTEM AROUND THE STAR HD 207832"
} | {
"abstract": "The light curve of an eclipsing system shows anomalies whenever the eclipsing body passes in front of active regions on the eclipsed star. In some cases, the pattern of anomalies can be used to determine the obliquity Ψ of the eclipsed star. Here we present a method for detecting and analyzing these patterns, based on a statistical test for correlations between the anomalies observed in a sequence of eclipses. Compared to previous methods, ours makes fewer assumptions and is easier to automate. We apply it to a sample of 64 stars with transiting planets and 24 eclipsing binaries for which precise space-based data are available, and for which there was either some indication of flux anomalies or a previously reported obliquity measurement. We were able to determine obliquities for 10 stars with hot Jupiters. In particular we found Ψ ≲ 10° for Kepler-45, which is only the second M dwarf with a measured obliquity. The other eight cases are G and K stars with low obliquities. Among the eclipsing binaries, we were able to determine obliquities in eight cases, all of which are consistent with zero. Our results also reveal some common patterns of stellar activity for magnetically active G and K stars, including persistently active longitudes.",
"corpus_id": 59454470,
"score": -1,
"title": "Stellar Obliquity and Magnetic Activity of Planet-hosting Stars and Eclipsing Binaries Based on Transit Chord Correlation"
} |
{
"abstract": "The Seiberg-Witten map links non-commutative gauge theories to ordinary gauge theories, and allows to express the non-commutative variables in terms of the commutative ones. Its explicit form can be found order by order in the non-commutative parameter θ and the gauge potential A by the requirement that gauge orbits are mapped on gauge orbits. This of course leaves ambiguities, corresponding to gauge transformations, and there is an infinity of solutions. Is there one better, clearer than the others? In the abelian case, we were able to find a solution, linked by a gauge transformation to already known formulas, which has the property of admitting a recursive formulation, uncovering some pattern in the map. In the special case of a pure gauge, both abelian and non abelian, these expressions can be summed up, and the transformation is expressed using the parametrisation in terms of the gauge group.",
"corpus_id": 17253379,
"title": "Towards an explicit expression of the Seiberg-Witten map at all orders"
} | {
"abstract": "We show that the higher-order derivative α′ corrections to the Dirac–Born–Infeld (DBI) and Chern–Simon actions is derived from noncommutativity in the Seiberg–Witten limit, and is shown to agree with Wyllard's (hep-th/0008125) result, as conjectured by Das et al., (hep-th/0106024). In calculating the corrections, we have expressed in terms of F, Â in terms of A up to order , and made use of it.",
"corpus_id": 985334,
"title": "DERIVATIVE CORRECTIONS TO DIRAC–BORN–INFELD AND CHERN–SIMON ACTIONS FROM NONCOMMUTATIVITY"
} | {
"abstract": "Multivariate outcomes are ubiquitous. Joint analysis of multivariate outcomes provides several benfits over separate analysis of each outcome. However, joint analysis of multivariate outcomes that are mixed, i.e., not on the same scale of measurement, can be challenging. This dissertation provides novel methods to analyze bivariate mixed outcomes, where we have exactly one continuous outcome and one binary outcome. A penalized generalized estimating equations framework to perform simultaneous estimation and variable selection for bivaraite mixed outcomes in the presence of a large number of covariates is provided. Next, fully Bayesian and empirical Bayes approaches to estimating the association between the two outcomes using a copula-based model are provided. Finally, methods for estimating and testing genomic effects in bivariate mixed secondary outcome models under case-control designs are presented. Statistical Methods for Analyzing Bivariate Mixed Outcomes",
"corpus_id": 125873240,
"score": -1,
"title": "Statistical Methods for Analyzing Bivariate Mixed Outcomes"
} |
{
"abstract": "The current global need for clean, renewable energy sources has led to a high penetration of distributed generation on distribution networks. This produces side effects on the power systems due to the variable characteristics of the primary energy sources (i.e. wind and solar). Energy storage systems (ESS) play a key role in providing additional system security, reliability and flexibility in response to changes in generation, which are still difficult to forecast. However, ESS in power networks present open questions around the benefits that these technologies can bring to the different actors involved in the energy supply chain. The main contributions of this paper are: (1) it gives a thorough review of the current research on ESS allocation (including ESS siting and sizing) methods in power networks; (2) it highlights the factors, challenges and problems for the sustainable development of ESS technologies; (3) the importance of designing the energy storage system for particular networks, taking into account the interaction of storage with other system flexibility options. Hence, this review points out current ESS design methodologies in power networks and provides framework guidelines for future ESS research.",
"corpus_id": 56171947,
"title": "Energy storage allocation in power networks – A state-of-the-art review"
} | {
"abstract": "Due to the increasingly serious energy crisis and environmental pollution problem, traditional fossil energy is gradually being replaced by renewable energy in recent years. However, the introduction of renewable energy into power systems will lead to large voltage fluctuations and high capital costs. To solve these problems, an energy storage system (ESS) is employed into a power system to reduce total costs and greenhouse gas emissions. Hence, this paper proposes a two-stage method based on a back-propagation neural network (BPNN) and hybrid multi-objective particle swarm optimization (HMOPSO) to determine the optimal placements and sizes of ESSs in a transmission system. Owing to the uncertainties of renewable energy, a BPNN is utilized to forecast the outputs of the wind power and load demand based on historic data in the city of Madison, USA. Furthermore, power-voltage (P-V) sensitivity analysis is conducted in this paper to improve the converge speed of the proposed algorithm, and continuous wind distribution is discretized by a three-point estimation method. The Institute of Electrical and Electronic Engineers (IEEE) 30-bus system is adopted to perform case studies. The simulation results of each case clearly demonstrate the necessity for optimal storage allocation and the efficiency of the proposed method.",
"corpus_id": 6542764,
"title": "Electrical Energy Forecasting and Optimal Allocation of ESS in a Hybrid Wind-Diesel Power System"
} | {
"abstract": "In the face of climate change and resource scarcity, energy supply systems are on the verge of a major transformation, which mainly includes the introduction of new components and their integration into the existing infrastructures, new network configurations and reliable topologies, optimal design and novel operation schemes, and new incentives and business models. This revolution is affecting the current paradigm and demanding that energy systems be integrated into multi-carrier energy hubs [1]. [...]",
"corpus_id": 116381643,
"score": -1,
"title": "Special Issue on Advances in Integrated Energy Systems Design, Control and Optimization"
} |
{
"abstract": "Objective To conduct a systematic literature review of imaging techniques and findings in patients with peribiliary liver metastasis. Methods Several electronic datasets were searched from January 1990 to June 2017 to identify studies assessing the use of different imaging techniques for the detection and staging of peribiliary metastases. Results The search identified 44 studies, of which six met the inclusion criteria and were included in the systematic review. Multidetector computed tomography (MDCT) is the technique of choice in the preoperative setting and during the follow-up of patients with liver tumors. However, the diagnostic performance of MDCT for the assessment of biliary tree neoplasms was low compared with magnetic resonance imaging (MRI). Ultrasound (US), without and with contrast enhancement (CEUS), is commonly employed as a first-line tool for evaluating focal liver lesions; however, the sensitivity and specificity of US and CEUS for both the detection and characterization are related to operator expertise and patient suitability. MRI has thus become the gold standard technique because of its ability to provide morphologic and functional data. MRI showed the best diagnostic performance for the detection of peribiliary metastases. Conclusions MRI should be considered the gold standard technique for the radiological assessment of secondary biliary tree lesions.",
"corpus_id": 220255089,
"title": "Radiological assessment of secondary biliary tree lesions: an update"
} | {
"abstract": "AbstractBackground: We describe the thin-section helical computed tomographic (CT) findings of biliary obstruction caused by metastasis.\nMethods: Thin-section helical CT (5 mm slice thickness, 1:1 pitch, portal phase) and direct cholangiography in 50 consecutive patients with biliary obstruction caused by metastases were reviewed retrospectively by three radiologists. The primary sites were the stomach (n = 36), colon (n = 12), jejunum (n = 1), and uterus (n = 1). The level of biliary obstruction was analyzed with the Bismuth classification, and the CT findings of biliary obstruction were classified into six types: small (<2 cm) periductal masses, large (≥2 cm) periductal masses, extrinsic compression by a metastatic liver mass, high-attenuation intraductal mass, intrapancreatic mass, and no demonstrable lesion.\nResults: The level of biliary obstruction was the hilum in 18 patients (36%), the proximal common duct in 20 (40%), the distal common duct in five (10%), and the periampullary area in seven (14%). Of 18 hilar obstructions, tumor involvement of the secondary confluence of intrahepatic bile ducts was seen in 10 (right in six, left in one, and bilateral in three). Periductal masses were seen in 68% (small in 18, large in 16). In one patient (2%), a large metastatic mass of the liver resulted in extrinsic compression and biliary obstruction. Lesions mimicking primary biliary or pancreatic tumor were seen in four, respectively. In seven, we found no obstructing lesion on CT.\nConclusion: Biliary obstruction in patients with known primary malignancies can show atypical patterns mimicking primary pancreatobiliary malignancies on thin-section helical CT. \n",
"corpus_id": 3047370,
"title": "\nBiliary obstruction in metastatic disease: thin-section helical CT findings"
} | {
"abstract": "We report on a patient with obstructive jaundice caused by recurrence of gastric carcinoma in the wall of an extrahepatic bile duct more than 5 years after gastrectomy who was treated with pancreaticoduodenectomy. Histopathologic examination of the surgically resected specimen revealed a poorly differentiated adenocarcinoma with focal signet ring cells in the wall of the common bile duct which was histologically similar to the primary gastric carcinoma. To confirm the diagnosis, immunohistochemical staining was performed with antibodies against cytokeratins (CK7, CK20) and mucin peptide core antigens (MUC5AC, MUC6, MUC2). Based on the expression patterns of this monoclonal antibody panel, the final diagnosis of the common bile duct tumor was an isolated local recurrence of the gastric carcinoma. The patient has survived for more than 26 months after pancreaticoduodenectomy without recurrence.",
"corpus_id": 5739352,
"score": -1,
"title": "A case of extrahepatic bile duct wall recurrence of gastric carcinoma that was treated with pancreaticoduodenectomy."
} |
{
"abstract": "Loss of chromosome Y (LOY) is a mosaic aneuploidy that can be detected mainly in blood samples of male individuals. Usually, LOY occurrence increases with chronological age in healthy men. Moreover, recently LOY has been reported in association with several diseases, such as cancer, where its frequency is even higher. The Y chromosome is one of the shortest chromosomes of the human karyotype, and it is crucial for correct male development. This chromosome has functions beyond the male reproductive system, and loss of its genes or even LOY can have consequences for the male body that are yet to be elucidated. Analyses of the Y chromosome are largely applied in forensic contexts such as paternity testing, ancestry studies, and sexual assault cases, among others. Thus, LOY can be a disadvantage, limiting laboratory methods and result interpretation. However, as an advantage, LOY detection could be used as a biological age biomarker due to its association with the aging process. The potential application of LOY as biomarker highlights the necessity to clarify the molecular mechanism behind its occurrence and its possible applications in both health and forensic studies.",
"corpus_id": 220518130,
"title": "Loss of Chromosome Y and Its Potential Applications as Biomarker in Health and Forensic Sciences"
} | {
"abstract": "It has been observed for centuries that men have a shorter lifespan than women. The current difference globally is on average 4 years, and the difference is even larger in populations with longer life expectancy, for example, ≈6 years in the European Union and 7 years in Japan.1 A larger difference in populations with higher longevity suggests that the underlying factors are stronger in populations with a large part of the mortality related to age-associated diseases. Cardiovascular diseases are the leading causes of death globally and are increasing.2 The share of total mortality that is because of cardiovascular diseases is similar in both sexes, but men fall ill and die from it at a younger age. Cardiovascular disease risk factors are equally important for men and women.3 Hence, the age differences in incidence and mortality between men and women are because of other reasons than differential environmental risk factor exposures. Recent discoveries on pathological effects from a male-specific genetic risk factor—loss of chromosome Y (LOY) in blood cells—can partly explain the observed sex difference in longevity. Analyses by Haitjema et al4 in this issue of Circulation: Cardiovascular Genetics describe a previously unknown association between LOY in blood cells and major cardiovascular events.\n\nSee Article by Haitjema et al \n\nA high prevalence …",
"corpus_id": 798064,
"title": "Loss of Chromosome Y in Leukocytes and Major Cardiovascular Events."
} | {
"abstract": "There are numerous observations reporting that phagocytes expressing major histocompatibility complex (MHC) Class II molecules are associated with the central nervous system (CNS) in normal and pathological conditions. Although MHC Class II expression is necessary for antigen presentation to CD4 + T-cells, it is not sufficient and co-stimulatory molecules are also required. We review here recent in vivo studies demonstrating that the microglia and perivascular macrophages are unable to initiate a primary immune response in the CNS microenvironment, but may support secondary immune responses. Although in vitro studies show that microglia do not support a primary immune response leading to T-cell proliferation, they do show that microglia may protect the CNS from the unwanted attentions of autoreactive T-cells by inducing their apoptosis. The lack of cells in the CNS parenchyma with the ability to initiate a primary immune response has a cost, namely that pathogens may persist in the CNS undetected by the immune system.",
"corpus_id": 1349369,
"score": -1,
"title": "A revised view of the central nervous system microenvironment and major histocompatibility complex class II antigen presentation"
} |
{
"abstract": "In this paper, we report the cloning and characterization of the STAT6 gene from the pufferfish, Tetraodon nigroviridis. The TnSTAT6 gene is composed of 20 exons and 19 introns. The exon–intron organization of this gene is similar to that of HsSTAT6 except for the exons encoding the C-terminal transactivation domain. The full-length complementary (c)DNA of TnSTAT6 encodes a 794-amino acid protein that is 31% identical to human STAT6. We generated a constitutively active TnSTAT6-JH1 by fusing the kinase domain of carp JAK1 to the C-terminal end of TnSTAT6 and demonstrated that the fusion protein has specific DNA-binding ability and can activate a reporter construct carrying multiple copies of mammalian IL-4 response elements. Interestingly, TnSTAT6-JH1 associated with and phosphorylated TnSTAT6 on Tyr661. Mutation of this residue, Y661W, in TnSTAT6 abolished its association with TnSTAT6-JH1. This is consistent with the importance of the corresponding Tyr641 of HsSTAT6 in tyrosine phosphorylation and dimer formation. On the other hand, treatment of mammalian IL-4 did not induce tyrosine phosphorylation of wild-type TnSTAT6, suggesting that both the divergent N-terminal domain and coiled-coil domain of TnSTAT6 may affect the interaction of TnSTAT6 with mammalian IL-4 receptor complexes.",
"corpus_id": 115145529,
"title": "Expression and characterization of a constitutively active STAT6 from Tetraodon"
} | {
"abstract": "The STAT5 (signal transducer and activator of transcription 5) gene was isolated and characterized from a round-spotted pufferfish genomic library. This gene is composed of 19 exons spanning 11 kb. The full-length cDNA of Tetraodon fluviatilis STAT5 (TfSTAT5) contains 2461 bp and encodes a protein of 785 amino acid residues. From the amino acid sequence comparison, TfSTAT5 is most similar to mouse STAT5a and STAT5b with an overall identity of 76% and 78%, respectively, and has < 35% identity with other mammalian STATs. The exon/intron junctions of the TfSTAT5 gene were almost identical to those of mouse STAT5a and STAT5b genes, indicating that these genes are highly conserved at the levels of amino acid sequence and genomic structure. To understand better the biochemical properties of TfSTAT5, a chimeric STAT5 was generated by fusion of the kinase-catalytic domain of carp Janus kinase 1 (JAK1) to the C-terminal end of TfSTAT5. The fusion protein was expressed and tyrosine-phosphorylated by its kinase domain. The fusion protein exhibits specific DNA-binding and transactivation potential toward an artificial fish promoter as well as authentic mammalian promoters such as the beta-casein promoter and cytokine inducible SH2 containing protein (CIS) promoter when expressed in both fish and mammalian cells. However, TfSTAT5 could not induce the transcription of beta-casein promoter via rat prolactin and Nb2 prolactin receptor. To our knowledge, this is the first report describing detailed biochemical characterization of a STAT protein from fish.",
"corpus_id": 341410,
"title": "Genomic structure, expression and characterization of a STAT5 homologue from pufferfish (Tetraodon fluviatilis)."
} | {
"abstract": "Efforts to develop neurotrophic factors to restore function and protect dying neurons in chronic neurodegenerative diseases like Alzheimer’s (AD) and Parkinson’s (PD) have been attempted for decades. Despite abundant data establishing nonclinical proof-of-concept, significant delivery issues have precluded the successful translation of this concept to the clinic. The development of AAV2 viral vectors to deliver therapeutic genes has emerged as a safe and effective means to achieve sustained, long-term, targeted, bioactive protein expression. Thus, it potentially offers a practical means to solve those long-standing delivery/translational issues associated with neurotrophic factors. Data are presented for two AAV2 viral vector constructs expressing one of two different neurotrophic factors: nerve growth factor (NGF) and neurturin (NRTN). One (AAV2-NGF; aka CERE-110) is being developed as a treatment to improve the function and delay further degeneration of cholinergic neurons in the nucleus basalis of Meynert, the degeneration of which has been linked to cognitive deficits in AD. The other (AAV2-NRTN; aka CERE-120) is similarly being developed to treat the degenerating nigrostriatal dopamine neurons and major motor deficits in PD. The data presented here demonstrate: (1) 2-year, targeted, bioactive-protein in monkeys, (2) persistent, bioactive-protein throughout the life-span of the rat, and (3) accurately targeted bioactive-protein in aged rats, with (4) no safety issues or antibodies to the protein detected. They also provide empirical guidance to establish parameters for human dosing and collectively support the idea that gene transfer may overcome key delivery obstacles that have precluded successful translation of neurotrophic factors to the clinic. More specifically, they also enabled the AAV-NGF and AAV-NRTN programs to advance into ongoing multi-center, double-blind clinical trials in AD and PD patients.",
"corpus_id": 6435881,
"score": -1,
"title": "Gene transfer provides a practical means for safe, long-term, targeted delivery of biologically active neurotrophic factor proteins for neurodegenerative diseases"
} |
{
"abstract": "Despite Arabidopsis thaliana's pre‐eminence as a model organism, major questions remain regarding the geographic structure of its genetic variation due to the geographically incomplete sample set available for previous studies. Many of these questions are addressed here with an analysis of genome‐wide variation at 10 loci in 475 individuals from 167 globally distributed populations, including many from critical but previously un‐sampled regions. Rooted haplotype networks at three loci suggest that A. thaliana arose in the Caucasus region. Identification of large‐scale metapopulations indicates clear east–west genetic structure, both within proposed Pleistocene refugia and post‐Pleistocene colonized regions. The refugia themselves are genetically differentiated from one another and display elevated levels of within‐population genetic diversity relative to recolonized areas. The timing of an inferred demographic expansion coincides with the Eemian interglacial (approximately 120 000 years ago). Taken together, these patterns are strongly suggestive of Pleistocene range dynamics. Spatial autocorrelation analyses indicate that isolation by distance is pervasive at all hierarchical levels, but that it is reduced in portions of Europe.",
"corpus_id": 36474032,
"title": "Native range genetic variation in Arabidopsis thaliana is strongly geographically structured and reflects Pleistocene glacial dynamics"
} | {
"abstract": "Arabidopsis thaliana is the preeminent plant model organism. However, significant advances in evolution and ecology are being made by expanding the scope of research beyond this single species into the broader genus Arabidopsis. Surprisingly, few studies have rigorously investigated phylogenetic relationships between the nine Arabidopsis species, and this study evaluates both these and hypotheses related to two instances of intra-generic hybridization. DNA sequences from the 5' flanking region of the nuclear Atmyb2 gene from 12 of the 14 Arabidopsis taxa were used to reconstruct the generic phylogeny. The strict consensus tree was highly concordant with previous studies, identifying lineages corresponding to widespread species but exhibiting a large basal polytomy. Our data indicates that the paternal parent of the allopolyploid A. suecica is A. neglecta rather than A. arenosa s.l., although the need for a detailed phylogeographical study of these three species is noted. Finally, our data provided additional phylogenetic evidence of hybridization between Arabidopsis lyrata s.l. and A. halleri s.l. Taken together, the well-defined lineages within the genus and the potential for hybridization between them highlight Arabidopsis as a promising group for comparative and experimental studies of hybridization.",
"corpus_id": 2013024,
"title": "Further insights into the phylogeny of Arabidopsis (Brassicaceae) from nuclear Atmyb2 flanking sequence."
} | {
"abstract": "Whole genome duplication (WGD), which gives rise to polyploids, is a unique type of mutation that duplicates all the genetic material in a genome. WGD provides an evolutionary opportunity by generating abundant genetic “raw material,” and has been implicated in diversification, speciation, adaptive radiation, and invasiveness, and has also played an important role in crop breeding. However, WGD at least initially challenges basic biological functions by increasing cell size, altering relationships between cell volume and DNA content, and doubling the number of homologous chromosome copies that must be sorted during cell division. Newly polyploid lineages often have extensive changes in gene regulation, genome structure, and may suffer meiotic or mitotic chromosome mis-segregation. The abundance of species that persist in nature as polyploids shows that these problems are surmountable and/or that advantages of WGD might outweigh drawbacks. The molecularly especially tractable Arabidopsis genus has several ancient polyploidy events in its history and contains several independent more recent polyploids. This genus can thus provide important insights into molecular aspects of polyploid formation, establishment, and genome evolution. The ability to integrate ecological and evolutionary questions with molecular and genetic understanding makes comparative analyses in this genus particularly attractive and holds promise for advancing our general understanding of polyploid biology. Here, we highlight some of the findings from Arabidopsis that have given us insights into the origin and evolution of polyploids.",
"corpus_id": 12575943,
"score": -1,
"title": "Polyploidy in the Arabidopsis genus"
} |
{
"abstract": "Fibromyalgia appears to present in subgroups with regard to biological pain induction, with primarily inflammatory, neuropathic/neurodegenerative, sympathetic, oxidative, nitrosative, or muscular factors and/or central sensitization. Recent research has also discussed glial activation or interrupted dopaminergic neurotransmission, as well as increased skin mast cells and mitochondrial dysfunction. Therapy is difficult, and the treatment options used so far mostly just have the potential to address only one of these aspects. As ambroxol addresses all of them in a single substance and furthermore also reduces visceral hypersensitivity, in fibromyalgia existing as irritable bowel syndrome or chronic bladder pain, it should be systematically investigated for this purpose. Encouraged by first clinical observations of two working groups using topical or oral ambroxol for fibromyalgia treatments, the present paper outlines the scientific argument for this approach by looking at each of the aforementioned aspects of this complex disease and summarizes putative modes of action of ambroxol. Nevertheless, at this point the evidence basis for ambroxol is not strong enough for clinical recommendation.",
"corpus_id": 35020881,
"title": "Ambroxol for the treatment of fibromyalgia: science or fiction?"
} | {
"abstract": "Fibromyalgia is a chronic pain syndrome with unknown etiology. Recent studies have shown some evidence demonstrating that oxidative stress, mitochondrial dysfunction and inflammation may have a role in the pathophysiology of fibromyalgia. Despite several skin-related symptoms accompanied by small fiber neuropathy have been studied in FM, these mitochondrial changes have not been yet studied in this tissue. Skin biopsies from patients showed a significant mitochondrial dysfunction with reduced mitochondrial chain activities and bioenergetics levels and increased levels of oxidative stress. These data were related to increased levels of inflammation and correlated with pain, the principal symptom of FM. All these parameters have shown a role in peripheral nerve damage which has been observed in FM as a possible responsible to allodynia. Our findings may support the role of oxidative stress, mitochondrial dysfunction and inflammation as interdependent events in the pathophysiology of FM with a special role in the peripheral alterations.",
"corpus_id": 1621991,
"title": "Oxidative stress, mitochondrial dysfunction and, inflammation common events in skin of patients with Fibromyalgia."
} | {
"abstract": "Fibromyalgia (FM) is a complex disorder that affects up to 5% of the general population worldwide. Its pathophysiological mechanisms are difficult to identify and current drug therapies demonstrate limited effectiveness. Both mitochondrial dysfunction and coenzyme Q10 (CoQ10) deficiency have been implicated in FM pathophysiology. We have investigated the effect of CoQ10 supplementation. We carried out a randomized, double-blind, placebo-controlled trial to evaluate clinical and gene expression effects of forty days of CoQ10 supplementation (300 mg/day) on 20 FM patients. This study was registered with controlled-trials.com (ISRCTN 21164124). An important clinical improvement was evident after CoQ10 versus placebo treatment showing a reduction of FIQ (p<0.001), and a most prominent reduction in pain (p<0.001), fatigue, and morning tiredness (p<0.01) subscales from FIQ. Furthermore, we observed an important reduction in the pain visual scale (p<0.01) and a reduction in tender points (p<0.01), including recovery of inflammation, antioxidant enzymes, mitochondrial biogenesis, and AMPK gene expression levels, associated with phosphorylation of the AMPK activity. These results lead to the hypothesis that CoQ10 have a potential therapeutic effect in FM, and indicate new potential molecular targets for the therapy of this disease. AMPK could be implicated in the pathophysiology of FM.",
"corpus_id": 21465391,
"score": -1,
"title": "Can coenzyme q10 improve clinical and molecular parameters in fibromyalgia?"
} |