Characterization of Arabidopsis enhanced disease susceptibility mutants that are affected in systemically induced resistance.
In Arabidopsis, the rhizobacterial strain Pseudomonas fluorescens WCS417r triggers jasmonate (JA)- and ethylene (ET)-dependent induced systemic resistance (ISR) that is effective against different pathogens. Arabidopsis genotypes unable to express rhizobacteria-mediated ISR against the bacterial pathogen Pseudomonas syringae pv. tomato DC3000 (Pst DC3000) exhibit enhanced disease susceptibility towards this pathogen. To identify novel components controlling induced resistance, we tested 11 Arabidopsis mutants with enhanced disease susceptibility (eds) to pathogenic P. syringae bacteria for WCS417r-mediated ISR and pathogen-induced systemic acquired resistance (SAR). Mutants eds4-1, eds8-1 and eds10-1 failed to develop WCS417r-mediated ISR, while mutants eds5-1 and eds12-1 failed to express pathogen-induced SAR. Whereas eds5-1 is known to be blocked in salicylic acid (SA) biosynthesis, analysis of eds12-1 revealed that its impaired SAR response is caused by reduced sensitivity to this molecule. Analysis of the ISR-impaired eds mutants revealed that they are non-responsive to induction of resistance by methyl jasmonate (MeJA) (eds4-1, eds8-1 and eds10-1) or the ET precursor 1-aminocyclopropane-1-carboxylate (ACC) (eds4-1 and eds10-1). Moreover, eds4-1 and eds8-1 showed reduced expression of the plant defensin gene PDF1.2 after MeJA and ACC treatment, which was associated with reduced sensitivity to either ET (eds4-1) or MeJA (eds8-1). Although blocked in WCS417r-, MeJA- and ACC-induced ISR, eds10-1 behaved normally for several other responses to MeJA and ACC. The results indicate that EDS12 is required for SAR and acts downstream of SA, whereas EDS4, EDS8 and EDS10 are required for ISR, acting either in JA signalling (EDS8), in ET signalling (EDS4), or downstream of JA and ET signalling (EDS10) in the ISR pathway.
Emerging Technologies to Conserve Biodiversity.
Technologies to identify individual animals, follow their movements, identify and locate animal and plant species, and assess the status of their habitats remotely have become better, faster, and cheaper as threats to the survival of species are increasing. New technologies alone do not save species, and new data create new problems. For example, improving technologies alone cannot prevent poaching: solutions require providing appropriate tools to the right people. Habitat loss is another driver: the challenge here is to connect existing sophisticated remote sensing with species occurrence data to predict where species remain. Other challenges include assembling a wider public to crowdsource data, managing the massive quantities of data generated, and developing solutions to rapidly emerging threats.
Reference values of lymphocyte subsets in healthy, HIV-negative children in Cameroon.
Lymphocyte subset reference values used to monitor infectious diseases, including HIV/AIDS, tuberculosis, malaria, and other immunological disorders, are lacking for healthy children in Cameroon. Values from Caucasian cohorts are already being used for clinical decisions but could be inappropriate for African populations. We report here the immunological profile of children aged from birth through 6 years in Cameroon and compare our values to data from other African and Caucasian populations. In a cohort of 352 healthy children aged 0 to 6 years, the relative and absolute numbers of T-cell subsets, B cells, and NK lymphocytes were determined from peripheral blood collected in EDTA tubes. Samples were stained with BD Multitest reagents in Trucount tubes and analyzed using CellQuest-Pro and FlowJo software. We evaluated about 23 different lymphocyte subsets, whose absolute numbers and percentages differed significantly (P < 0.05) with age and peaked between 6 and 12 months. B-cell values were higher than reported values from developed countries. Differences in activated and differentiated T cells were observed in subjects between 1 and 6 years of age. The absolute CD8(+) T-cell count and the CD4(+)/CD8(+) ratio appear to depend on gender. Normal lymphocyte subset values among children from Cameroon differ from reported values in Caucasian and some African populations. The differences observed could be due to genetic and environmental factors coupled with the methodology used. These values could serve as initial national reference guidelines as more data are assembled.
Comparison of gait in patients following a computer-navigated minimally invasive anterior approach and a conventional posterolateral approach for total hip arthroplasty: a randomized controlled trial.
Minimally invasive total hip arthroplasty (MIS THA) aims at minimizing damage to muscles and tendons to accelerate postoperative recovery. Computer navigation allows a precise prosthesis alignment without complete visualization of the bony landmarks during MIS THA. A randomized controlled trial (RCT) was conducted to determine the effectiveness of a computer-navigated MIS anterior approach for THA compared to a conventional posterolateral THA technique on the restoration of physical functioning during recovery following surgery. Thirty-five patients underwent computer-navigated MIS THA via the anterior approach, and 40 patients underwent conventional THA using the conventional posterolateral approach. Gait analysis was performed preoperatively, 6 weeks, and 3 and 6 months postoperatively using a body-fixed-sensor based gait analysis system. Walking speed, step length, cadence, and frontal plane angular movements of the pelvis and thorax were assessed. The same data were obtained from 30 healthy subjects. No differences were found in the recovery of spatiotemporal parameters or in angular movements of the pelvis and thorax following the computer-navigated MIS anterior approach or the conventional posterolateral approach. Although gait improved after surgery, small differences in several spatiotemporal parameters and angular movements of the trunk remained at 6 months postoperatively between both patient groups and healthy subjects.
CAVE: Configuration Assessment, Visualization and Evaluation
To achieve peak performance of an algorithm (in particular for problems in AI), algorithm configuration is often necessary to determine a well-performing parameter configuration. So far, most studies in algorithm configuration have focused on proposing better configuration procedures or on improving a particular algorithm's performance. In contrast, we use all the empirical performance data gathered during algorithm configuration runs to generate extensive insights into an algorithm, the given problem instances, and the configurator used. To this end, we provide a tool, called CAVE, that automatically generates comprehensive reports and insightful figures from all available empirical data. CAVE aims to help algorithm and configurator developers better understand their experimental setup in an automated fashion. We showcase its use by thoroughly analyzing the well-studied SAT solver Spear on a benchmark of software verification instances and by empirically verifying two long-standing assumptions in algorithm configuration and parameter importance: (i) parameter importance changes depending on the instance set at hand, and (ii) local and global parameter importance analyses do not necessarily agree with each other.
Active Network Alignment: A Matching-Based Approach
Network alignment is the problem of matching the nodes of two graphs, maximizing the similarity of the matched nodes and the edges between them. This problem is encountered in a wide array of applications---from biological networks to social networks to ontologies---where multiple networked data sources need to be integrated. Due to the difficulty of the task, an accurate alignment can rarely be found without human assistance. Thus, it is of great practical importance to develop network alignment algorithms that can optimally leverage experts who are able to provide the correct alignment for a small number of nodes. Yet, only a handful of existing works address this active network alignment setting. The majority of the existing active methods focus on absolute queries ("are nodes a and b the same or not?"), whereas we argue that it is generally easier for a human expert to answer relative queries ("which node in the set b1,...,bn is the most similar to node a?"). This paper introduces two novel relative-query strategies, TopMatchings and GibbsMatchings, which can be applied on top of any network alignment method that constructs and solves a bipartite matching problem. Our methods identify the most informative nodes to query by sampling the matchings of the bipartite graph associated to the network-alignment instance. We compare the proposed approaches to several commonly-used query strategies and perform experiments on both synthetic and real-world datasets. Our sampling-based strategies yield the highest overall performance, outperforming all the baseline methods by more than 15 percentage points in some cases. In terms of accuracy, TopMatchings and GibbsMatchings perform comparably. However, GibbsMatchings is significantly more scalable, but it also requires hyperparameter tuning for a temperature parameter.
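The sampling idea at the heart of TopMatchings and GibbsMatchings can be illustrated compactly. The Python sketch below is our own reconstruction, not the authors' code: it samples matchings by Gumbel-perturbing a node-similarity matrix and re-solving the assignment problem, then queries the node whose matched partner is most uncertain across samples; the perturbation scheme and all names are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def sample_matchings(sim, n_samples=100, noise=0.1, rng=None):
    """Sample matchings by perturbing similarities and re-solving the assignment."""
    rng = rng or np.random.default_rng(0)
    matchings = []
    for _ in range(n_samples):
        perturbed = sim + noise * rng.gumbel(size=sim.shape)
        rows, cols = linear_sum_assignment(perturbed, maximize=True)
        matchings.append(cols)  # cols[a] = partner of node a in this matching
    return np.array(matchings)

def most_informative_node(matchings):
    """Pick the node whose matched partner has the highest entropy across samples."""
    n_samples, n_nodes = matchings.shape
    entropies = []
    for a in range(n_nodes):
        _, counts = np.unique(matchings[:, a], return_counts=True)
        p = counts / n_samples
        entropies.append(-(p * np.log(p)).sum())
    return int(np.argmax(entropies))

sim = np.random.default_rng(1).random((6, 6))   # toy node-similarity matrix
m = sample_matchings(sim)
print("ask a relative query about node", most_informative_node(m))
```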
Design Capital and Design Moves: The Logic of Digital Business Strategy
As information technology becomes integral to the products and services in a growing range of industries, there has been a corresponding surge of interest in understanding how firms can effectively formulate and execute digital business strategies. This fusion of IT within the business environment gives rise to a strategic tension between investing in digital artifacts for long-term value creation and exploiting them for short-term value appropriation. Further, relentless innovation and competitive pressures dictate that firms continually adapt these artifacts to changing market and technological conditions, but sustained profitability requires scalable architectures that can serve a large customer base and stable interfaces that support integration across a diverse ecosystem of complementary offerings. The study of digital business strategy needs new concepts and methods to examine how these forces are managed in pursuit of competitive advantage. We conceptualize the logic of digital business strategy in terms of two constructs: design capital (i.e., the cumulative stock of designs owned or controlled by a firm), and design moves (i.e., the discrete strategic actions that enlarge, reduce, or modify a firm’s stock of designs). We also identify two salient dimensions of design capital, namely option value and technical debt. Using embedded case studies of four firms, we develop a rich conceptual model and testable propositions to lay out a design-based logic of digital business strategy. This logic highlights the interplay between design moves and design capital in the context of digital business strategy and contributes to a growing body of insights that link the design of digital artifacts to competitive strategy and firm-level performance.
Parallel Restarted SGD with Faster Convergence and Less Communication: Demystifying Why Model Averaging Works for Deep Learning
In distributed training of deep neural networks, parallel mini-batch SGD is widely used to speed up training with multiple workers. Each worker samples local stochastic gradients in parallel, all gradients are aggregated at a single server to obtain their average, and each worker's local model is updated with an SGD step using the averaged gradient. Ideally, parallel mini-batch SGD achieves a linear speed-up of training time (with respect to the number of workers) compared with SGD on a single worker. In practice, however, such linear scalability is significantly limited by the growing demand for gradient communication as more workers are involved. Model averaging, which periodically averages the individual models trained on parallel workers, is another common practice for distributed training of deep neural networks dating back to (Zinkevich et al. 2010) and (McDonald, Hall, and Mann 2010). Compared with parallel mini-batch SGD, the communication overhead of model averaging is significantly reduced. A large body of experimental work has verified that model averaging can still achieve a good training speed-up as long as the averaging interval is carefully controlled, yet it remains a theoretical mystery why such a simple heuristic works so well. This paper provides a thorough and rigorous theoretical study of why model averaging can work as well as parallel mini-batch SGD with significantly less communication overhead. Consider the distributed training of deep neural networks over multiple workers (Dean et al. 2012), where all workers can access all or part of the training data and aim to find a common model that yields the minimum training loss. Such a scenario can be modeled as the distributed parallel non-convex optimization problem $\min_{x \in \mathbb{R}^m} f(x) \triangleq \frac{1}{N} \sum_{i=1}^{N} f_i(x)$, where $f_i$ is the local loss of worker $i$.
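The contrast between the two schemes is easy to simulate. Below is a minimal sketch, assuming toy quadratic local losses of our own choosing: each worker runs I local SGD steps without communication, and models are averaged once per round (one communication instead of I).

```python
import numpy as np

rng = np.random.default_rng(0)
n_workers, dim = 4, 5
A = [rng.random((dim, dim)) + dim * np.eye(dim) for _ in range(n_workers)]
b = [rng.random(dim) for _ in range(n_workers)]  # worker i's loss: 0.5 x'A_i x - b_i'x

def local_grad(i, x):
    return A[i] @ x - b[i] + 0.01 * rng.normal(size=dim)  # stochastic gradient

x = np.zeros(dim)
lr, I, rounds = 0.01, 10, 50          # I = averaging interval
for _ in range(rounds):
    locals_ = [x.copy() for _ in range(n_workers)]
    for i in range(n_workers):        # I local SGD steps per worker, no communication
        for _ in range(I):
            locals_[i] -= lr * local_grad(i, locals_[i])
    x = np.mean(locals_, axis=0)      # one communication round: model averaging
print("gradient norm at consensus model:", np.linalg.norm(np.mean(A, 0) @ x - np.mean(b, 0)))
```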
On Convergence and Stability of GANs
We analyze the convergence of GANs through the lens of online learning and game theory to understand what makes consistent, stable training hard to achieve in practice. We identify that the underlying game can be ill-posed and poorly conditioned, and propose a simple regularization scheme based on local perturbations of the input data to address these issues. Existing methods that improve stability either impose additional computational costs or require specific architectures or modeling objectives. Further, we show that WGAN-GP, the state-of-the-art stable training procedure, is similar to LS-GAN, does not follow from KR-duality, and can be too restrictive in general. In contrast, our proposed algorithm is fast, simple to implement, and achieves competitive performance in a stable fashion across a variety of architectures and objective functions with minimal hyperparameter tuning. We show significant improvements over WGAN-GP across these conditions.
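The paper's regularizer penalizes the discriminator around local perturbations of the real data. The PyTorch sketch below shows one common instantiation of such a gradient penalty; it is our reconstruction under an assumed noise scale and penalty weight, not the authors' exact formulation.

```python
import torch

def perturbation_penalty(disc, x_real, sigma=0.5, weight=10.0):
    """Penalize sharp discriminator gradients around perturbed real samples."""
    noise = sigma * x_real.std() * torch.rand_like(x_real)
    x_hat = (x_real + noise).requires_grad_(True)
    d_out = disc(x_hat).sum()
    grads, = torch.autograd.grad(d_out, x_hat, create_graph=True)
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return weight * ((grad_norm - 1.0) ** 2).mean()

# toy usage with a small discriminator; add this penalty to the discriminator loss
disc = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
x = torch.randn(32, 8)
loss = perturbation_penalty(disc, x)
loss.backward()   # create_graph=True lets gradients flow into disc's parameters
print(float(loss))
```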
Analysis of Common-Mode Noise for Weakly Coupled Differential Serpentine Delay Microstrip Line in High-Speed Digital Circuits
This study investigates the mechanisms generating transient transmission common-mode noise in a differential serpentine delay line under weak-coupling conditions. The generation mechanism and the frequency of the common-mode noise are investigated with reference to the time-domain transmission waveform and the differential-to-common-mode-conversion mixed-mode S-parameters, using the circuit solver HSPICE and the 3-D full-wave simulator HFSS, respectively. The generation mechanisms of common-mode noise include length mismatch between vertical-turn-coupled traces, the length effect of parallel-coupled traces, and the crosstalk noise effect. Moreover, a graphical method based on wave tracing is presented to illustrate the cancellation mechanism of near-end common-mode noise for the symmetrical differential serpentine delay line. Several practical, commonly used layout routings of the differential serpentine delay line are investigated, and important design guidelines are provided to help design differential serpentine delay lines with low common-mode noise. A comparison between simulated and measured results validates the equivalent circuit model and the analytical approach.
Archaea: very diverse, often different but never bad?
Christa Schleper is Professor and Head of the Department of Genetics in Ecology at the University of Vienna. Her research combines microbial physiology, genetics and metagenomics and is now focused primarily on anaerobic ammonia-oxidizing Archaea and on the study of genetic elements in Sulfolobus. This is the second Current Opinion in Microbiology volume dedicated to the Archaea. Comparing the reviews published here with those published in December 2005 underlines the impressive progress made in several areas of ongoing archaeal research (evolution and phylogenetics, metabolic diversity and pathway biochemistry, transcription, cell cycle and pyrrolysine studies) and identifies areas in which archaeal research has more recently expanded (anaerobic methane and aerobic ammonia oxidation, CRISPR, tRNAs, cell surface structures, viruses and symbiotic interactions). The impacts of rapid genome sequencing, metagenomics and microbial cultivation techniques, all much expanded since 2005, are also very apparent. Perhaps most notably, they have documented the presence of Archaea in almost all natural environments, and confirmed that Archaea contribute substantially to the global carbon and nitrogen cycles, in both marine and terrestrial ecosystems. The accumulation of microbial genome sequences has also fully established that the Archaea and Bacteria constitute fundamentally different evolutionary Domains [1]. As a taxon, the term prokaryote is therefore seriously questioned [2] but, with no nuclear membrane, Archaea and Bacteria may share features not found in Eukarya. For example, translation can be tightly coupled with transcription and, although transmitted horizontally, adaptive immunity CRISPR systems (see below) may exist only in Archaea and Bacteria. Prokaryote is also still used sometimes as a synonym for Bacteria, ignoring the Archaea, and many of the 2005 reviews addressed this with features described as being (or not being) bacterial-like or eukaryal-like. Evolutionary relationships are often still noted, but the reviews now published definitively describe archaeal features and functions. A reader must therefore recognize the diversity of the Archaea; an archaeal feature is not necessarily present in all Archaea, just as not all Bacteria are Gram-negative.
Differences in frequency of violence and reported injury between relationships with reciprocal and nonreciprocal intimate partner violence.
OBJECTIVES We sought to examine the prevalence of reciprocal (i.e., perpetrated by both partners) and nonreciprocal intimate partner violence and to determine whether reciprocity is related to violence frequency and injury. METHODS We analyzed data on young US adults aged 18 to 28 years from the 2001 National Longitudinal Study of Adolescent Health, which contained information about partner violence and injury reported by 11,370 respondents on 18,761 heterosexual relationships. RESULTS Almost 24% of all relationships had some violence, and half (49.7%) of those were reciprocally violent. In nonreciprocally violent relationships, women were the perpetrators in more than 70% of the cases. Reciprocity was associated with more frequent violence among women (adjusted odds ratio [AOR]=2.3; 95% confidence interval [CI]=1.9, 2.8), but not men (AOR=1.26; 95% CI=0.9, 1.7). Regarding injury, men were more likely to inflict injury than were women (AOR=1.3; 95% CI=1.1, 1.5), and reciprocal intimate partner violence was associated with greater injury than was nonreciprocal intimate partner violence regardless of the gender of the perpetrator (AOR=4.4; 95% CI=3.6, 5.5). CONCLUSIONS The context of the violence (reciprocal vs nonreciprocal) is a strong predictor of reported injury. Prevention approaches that address the escalation of partner violence may be needed to address reciprocal violence.
Tuning metaheuristics: A data mining based approach for particle swarm optimization
The paper is concerned with practices for tuning the parameters of metaheuristics. Settings such as the cooling factor in simulated annealing may greatly affect a metaheuristic's efficiency as well as its effectiveness in solving a given decision problem. However, procedures for organizing parameter calibration are scarce and commonly limited to particular metaheuristics. We argue that the parameter selection task can appropriately be addressed by means of a data-mining-based approach. In particular, a hybrid system is devised, which employs regression models to learn suitable parameter values from past moves of a metaheuristic in an online fashion. In order to identify a suitable regression method and, more generally, to demonstrate the feasibility of the proposed approach, a case study of particle swarm optimization is conducted. Empirical results suggest that characteristics of the decision problem as well as search history data indeed embody information that allows suitable parameter values to be determined, and that this type of information can successfully be extracted by means of nonlinear regression models.
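The hybrid idea can be sketched as follows, with all feature and model choices being our assumptions rather than the paper's: log (search-state features, parameter value, observed improvement) tuples as the metaheuristic runs, fit a nonlinear regressor online, and pick the parameter value (here, an inertia weight) with the best predicted improvement.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
history_X, history_y = [], []   # (features + parameter) -> observed improvement

def features(swarm_best, diversity):
    return [swarm_best, diversity]

def choose_inertia(swarm_best, diversity, candidates=np.linspace(0.2, 0.9, 8)):
    """Predict improvement for each candidate inertia weight; pick the best."""
    if len(history_y) < 20:                     # not enough data yet: explore randomly
        return float(rng.choice(candidates))
    model = RandomForestRegressor(n_estimators=50, random_state=0)
    model.fit(np.array(history_X), np.array(history_y))
    preds = [model.predict([features(swarm_best, diversity) + [w]])[0]
             for w in candidates]
    return float(candidates[int(np.argmax(preds))])

def record(swarm_best, diversity, inertia, improvement):
    """Call after every PSO move to grow the training data online."""
    history_X.append(features(swarm_best, diversity) + [inertia])
    history_y.append(improvement)

print(choose_inertia(swarm_best=1.0, diversity=0.2))
```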
When the hammer meets the nail: Multi-server PIR for database-driven CRN with location privacy assurance
We show that it is possible to achieve information-theoretic location privacy for secondary users (SUs) in database-driven cognitive radio networks (CRNs) with an end-to-end delay of less than a second, which is significantly better than the existing alternatives offering only computational privacy. This is achieved based on a keen observation that, by the requirement of the Federal Communications Commission (FCC), all certified spectrum databases synchronize their records, so the same copy of the spectrum database is available through multiple (distinct) providers. We harness the synergy between multi-server private information retrieval (PIR) and the database-driven CRN architecture to offer an optimal level of privacy with high efficiency by exploiting this observation. We demonstrate, analytically and experimentally with deployments on actual cloud systems, that our adaptations of multi-server PIR outperform the (currently) fastest single-server PIR many times over while providing information-theoretic security, collusion resiliency, and fault tolerance. Our analysis indicates that multi-server PIR is an ideal cryptographic tool for location privacy in database-driven CRNs, in which replicated databases are a natural part of the system architecture, so SUs can enjoy all the advantages of multi-server PIR without additional architectural or deployment costs.
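The mechanism the paper builds on is easiest to see in the classic two-server XOR-based PIR of Chor et al.; the toy sketch below illustrates that baseline, not the paper's optimized adaptations. Each server sees only a uniformly random bit vector, so neither learns the queried index as long as the two do not collude.

```python
import secrets

def pir_query(db_size, index):
    """Client: random subset for server 1; the same subset with `index` flipped for server 2."""
    q1 = [secrets.randbelow(2) for _ in range(db_size)]
    q2 = list(q1)
    q2[index] ^= 1
    return q1, q2

def pir_answer(database, query):
    """Server: XOR of all records selected by the query bits (reveals nothing about index)."""
    ans = 0
    for rec, bit in zip(database, query):
        if bit:
            ans ^= rec
    return ans

db = [0x11, 0x22, 0x33, 0x44, 0x55]        # replicated spectrum-database records
q1, q2 = pir_query(len(db), index=3)
record = pir_answer(db, q1) ^ pir_answer(db, q2)   # client combines the two answers
assert record == db[3]                      # correct record, index never disclosed
print(hex(record))
```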
Discriminatively trained recurrent neural networks for single-channel speech separation
This paper describes an in-depth investigation of training criteria, network architectures and feature representations for regression-based single-channel speech separation with deep neural networks (DNNs). We use a generic discriminative training criterion corresponding to optimal source reconstruction from time-frequency masks, and introduce its application to speech separation in a reduced feature space (Mel domain). A comparative evaluation of time-frequency mask estimation by DNNs, recurrent DNNs and non-negative matrix factorization on the 2nd CHiME Speech Separation and Recognition Challenge shows consistent improvements by discriminative training, whereas long short-term memory recurrent DNNs obtain the overall best results. Furthermore, our results confirm the importance of fine-tuning the feature representation for DNN training.
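The discriminative criterion can be stated compactly: instead of minimizing the error of the mask itself, minimize the reconstruction error of the masked mixture against the clean source. A numpy sketch under our own notation and toy data:

```python
import numpy as np

def signal_approximation_loss(mask, mixture_mag, source_mag):
    """Discriminative objective: error of the masked mixture w.r.t. the clean source,
    rather than the error of the mask itself (mask approximation)."""
    return np.mean((mask * mixture_mag - source_mag) ** 2)

rng = np.random.default_rng(0)
frames, mel_bins = 100, 40                  # reduced Mel-domain feature space
speech = rng.random((frames, mel_bins))
noise = rng.random((frames, mel_bins))
mixture = speech + noise

ideal_ratio_mask = speech / (speech + noise)      # oracle reference for comparison
predicted_mask = np.clip(ideal_ratio_mask + 0.1 * rng.normal(size=mixture.shape), 0, 1)

print("mask-approximation MSE:  ", np.mean((predicted_mask - ideal_ratio_mask) ** 2))
print("signal-approximation loss:", signal_approximation_loss(predicted_mask, mixture, speech))
```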
Constraint-induced movement therapy for the lower extremities in multiple sclerosis: case series with 4-year follow-up.
OBJECTIVE To evaluate in a preliminary manner the feasibility, safety, and efficacy of Constraint-Induced Movement therapy (CIMT) of persons with impaired lower extremity use from multiple sclerosis (MS). DESIGN Clinical trial with periodic follow-up for up to 4 years. SETTING University-based rehabilitation research laboratory. PARTICIPANTS A referred sample of ambulatory adults with chronic MS (N=4) with at least moderate loss of lower extremity use (average item score ≤6.5/10 on the functional performance measure of the Lower Extremity Motor Activity Log [LE-MAL]). INTERVENTIONS CIMT was administered for 52.5 hours over 3 consecutive weeks (15 consecutive weekdays) to each patient. MAIN OUTCOME MEASURES The primary outcome was the LE-MAL score at posttreatment. Secondary outcomes were posttreatment scores on laboratory assessments of maximal lower extremity movement ability. RESULTS All the patients improved substantially at posttreatment on the LE-MAL, with smaller improvements on the laboratory motor measures. Scores on the LE-MAL continued to improve for 6 months afterward. By 1 year, patients remained on average at posttreatment levels. At 4 years, half of the patients remained above pretreatment levels. There were no adverse events, and fatigue ratings were not significantly changed by the end of treatment. CONCLUSIONS This initial trial of lower extremity CIMT for MS indicates that the treatment can be safely administered, is well tolerated, and produces substantially improved real-world lower extremity use for as long as 4 years afterward. Further trials are needed to determine the consistency of these findings.
Safety and efficacy of the PrePex device for rapid scale-up of male circumcision for HIV prevention in resource-limited settings.
OBJECTIVE To assess the safety and efficacy of the PrePex device for nonsurgical circumcision in adult males as part of a comprehensive HIV prevention program in Rwanda. METHODS Single-center 6-week noncontrolled study in which healthy men underwent circumcision using the PrePex device, which employs fitted rings to clamp the foreskin, leading to distal necrosis. In the first phase of the study, the feasibility of the procedure was tested on 5 subjects in a sterile environment; in the main phase, an additional 50 subjects were circumcised in a nonsterile setting by physicians or a nurse. Outcome measures included the rate of successful circumcision, time to complete healing, pain, and adverse events. RESULTS In the feasibility phase, all 5 subjects achieved complete circumcision without adverse events. In the main phase, all 50 subjects achieved circumcision with 1 case of diffuse edema after device removal, which resolved with minimal intervention. Pain was minimal except briefly during device removal (day 7 after placement in most cases). The entire procedure was bloodless, requiring no anesthesia, no suturing, and no sterile settings. Subjects had no sick/absent days associated with the procedure. Median time for complete healing was 21 days after device removal. There were no instances of erroneous placement and no mechanical problems with the device. CONCLUSION The PrePex device was safe and effective for nonsurgical adult male circumcision without anesthesia or sterile settings and may be useful in mass circumcision programs to reduce the risk of HIV infection, particularly in resource-limited settings.
An Efficient Face Normalization Algorithm Based on Eyes Detection
This paper presents an effective and efficient face normalization method based on eye location. The face is first rapidly detected using a boosted cascade of simple Haar-like features. The algorithm then detects the positions of the pupils in the face image using the geometric relation between the face and the eyes. Finally, it normalizes the orientation, scale, and grayscale of the face image. Experimental results demonstrate that the algorithm can detect and normalize face images efficiently and accurately. The algorithm can be used in face recognition, as the normalized faces improve the recognition rate.
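The orientation and scale normalization reduces to a similarity transform fixed by the two pupil positions. The OpenCV sketch below is a hedged reconstruction; the output layout and canonical eye positions are our assumptions, not the paper's parameters.

```python
import cv2
import numpy as np

def normalize_face(gray, left_eye, right_eye, out_size=(96, 112), eye_frac=(0.3, 0.35)):
    """Rotate so the eyes are horizontal, scale so the inter-ocular distance is fixed,
    then equalize the grayscale histogram."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))       # in-plane rotation angle
    desired_dist = (1 - 2 * eye_frac[0]) * out_size[0]
    scale = desired_dist / np.hypot(rx - lx, ry - ly)
    center = ((lx + rx) / 2.0, (ly + ry) / 2.0)
    M = cv2.getRotationMatrix2D(center, angle, scale)
    # shift the eye midpoint to its canonical position in the output image
    M[0, 2] += out_size[0] * 0.5 - center[0]
    M[1, 2] += out_size[1] * eye_frac[1] - center[1]
    face = cv2.warpAffine(gray, M, out_size, flags=cv2.INTER_CUBIC)
    return cv2.equalizeHist(face)                          # grayscale normalization

img = np.random.randint(0, 255, (200, 200), dtype=np.uint8)  # stand-in face image
print(normalize_face(img, (70, 90), (130, 92)).shape)
```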
Contrastive Estimation: Training Log-Linear Models on Unlabeled Data
Conditional random fields (Lafferty et al., 2001) are quite effective at sequence labeling tasks like shallow parsing (Sha and Pereira, 2003) and named-entity extraction (McCallum and Li, 2003). CRFs are log-linear, allowing the incorporation of arbitrary features into the model. To train on unlabeled data, we require unsupervised estimation methods for log-linear models; few exist. We describe a novel approach, contrastive estimation. We show that the new technique can be intuitively understood as exploiting implicit negative evidence and is computationally efficient. Applied to a sequence labeling problem—POS tagging given a tagging dictionary and unlabeled text—contrastive estimation outperforms EM (with the same feature set), is more robust to degradations of the dictionary, and can largely recover by modeling additional features.
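The key idea is to normalize each observed example's score over a small neighborhood of implicit negative examples rather than over all possible outputs. A toy numpy sketch, with a deliberately trivial feature function and a one-token-substitution neighborhood of our own choosing:

```python
import numpy as np

def score(theta, x):
    """Toy log-linear score: one weight per token id, summed over the sequence."""
    return sum(theta[t] for t in x)

def neighborhood(x, vocab_size):
    """Implicit negative evidence: all single-token substitutions of x."""
    neighbors = []
    for i in range(len(x)):
        for v in range(vocab_size):
            if v != x[i]:
                neighbors.append(x[:i] + (v,) + x[i + 1:])
    return neighbors

def ce_objective(theta, data, vocab_size):
    """Sum over examples of log p(x | N(x)) = s(x) - logsumexp over the neighborhood."""
    total = 0.0
    for x in data:
        cand = [x] + neighborhood(x, vocab_size)
        scores = np.array([score(theta, c) for c in cand])
        m = np.max(scores)
        total += scores[0] - (m + np.log(np.exp(scores - m).sum()))
    return total

theta = np.zeros(5)                 # one weight per vocabulary item
data = [(0, 1, 2), (2, 1, 0)]
print(ce_objective(theta, data, vocab_size=5))
```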
Importance of the high-molecular-mass isoform of adiponectin in improved insulin sensitivity with rosiglitazone treatment in HIV disease.
The present study was designed to investigate the relationship of isoforms of adiponectin to insulin sensitivity in subjects with HIV-associated insulin resistance in response to treatment with the thiazolidinedione, rosiglitazone. The two isoforms of adiponectin, HMW (high-molecular-mass) and LMW (low-molecular-mass), were separated by sucrose-gradient-density centrifugation. The amount of adiponectin in gradient fractions was determined by ELISA. Peripheral insulin sensitivity (Rd) was determined with hyperinsulinaemic-euglycaemic clamp, whereas hepatic sensitivity [HOMA (Homoeostasis Model Assessment) %S] was based on basal glucose and insulin values. Treatment with rosiglitazone for 3 months resulted in a significant improvement in the index of hepatic insulin sensitivity (86.4+/-15% compared with 139+/-23; P=0.007) as well as peripheral insulin sensitivity (4.04+/-0.23 compared with 6.17+/-0.66 mg of glucose/kg of lean body mass per min; P<0.001). Improvement in HOMA was associated with increased levels of HMW adiponectin (r=0.541, P=0.045), but not LMW adiponectin. The present study suggests that the HMW isoform of adiponectin is important in the regulation of rosiglitazone-mediated improvement in insulin sensitivity in individuals with HIV-associated insulin resistance, particularly in the liver.
Mind the semantic gap
Hypertext can be seen as a logic representation, where semantics are encoded in both the textual nodes and the graph of links. Systems that have a very formal representation of these semantics are able to manipulate the hypertexts in sophisticated ways, for example by adapting them or sculpting them at run-time. However, hypertext systems which require the author to write in terms of structures with explicit semantics are difficult and costly to write in, and can be seen as too restrictive by certain authors because they do not allow the playful ambiguity often associated with literary hypertext. In this paper we present a vector-based model of the formality of semantics in hypertext systems, where the vectors represent the translation of semantics from author to system and from system to reader. We categorise a variety of existing systems and draw out some general conclusions about the profiles they share. We believe that our model will help hypertext system designers analyse how their own systems formalise semantics, and will warn them when they need to mind the Semantic Gap between authors and readers.
Old School vs. New School: Comparing Transition-Based Parsers with and without Neural Network Enhancement
In this paper, we attempt a comparison between "new school" transition-based parsers that use neural networks and their classical "old school" counterparts. We carry out experiments on treebanks from the Universal Dependencies project. To facilitate the comparison and analysis of results, we only work on a subset of those treebanks, carefully selected in the hope that the results are representative of the whole set. We select two parsers that are hopefully representative of the two schools, MaltParser and UDPipe, and we look at the impact of training-set size on the two models. We hypothesize that neural-network-enhanced models have a steeper learning curve with increased training size. We observe, however, that, contrary to expectations, neural-network-enhanced models need only a small amount of training data to outperform the classical models, and the learning curves of both models increase at a similar pace after that. We carry out an error analysis on the development sets parsed by the two systems and observe that overall MaltParser suffers more than UDPipe from longer dependencies, and is only marginally better than UDPipe on a restricted set of short dependencies.
Auricular Acupressure on Specific Points for Hemodialysis Patients with Insomnia: A Pilot Randomized Controlled Trial
OBJECTIVES To assess the feasibility and acceptability of a randomized controlled trial comparing auricular acupressure (AA) on specific acupoints with AA on non-specific acupoints for treating maintenance hemodialysis (MHD) patients with insomnia. METHODS Sixty-three (63) eligible subjects were randomly assigned to either an AA group receiving AA on specific acupoints (n=32) or a sham AA (SAA) group receiving AA on points irrelevant to insomnia treatment (n=31) for eight weeks. All participants were followed up for 12 weeks after treatment. The primary outcome was clinical response at eight weeks after randomization, defined as a reduction of the Pittsburgh Sleep Quality Index (PSQI) global score by 3 points or more. RESULTS Fifty-eight (58) participants completed the trial and five dropped out. Twenty participants in the AA group (62.5%) and ten in the SAA group (32.3%) responded to the eight-week interventions (χ2 = 5.77, P = 0.02). The PSQI global score declined by 3.75 ± 4.36 (95% CI -5.32, -2.18) and 2.26 ± 3.89 (95% CI -3.68, -0.83) in the AA and SAA groups, respectively. Three participants died during the follow-up period; no evidence suggested that their deaths were related to the AA intervention, and no other adverse events were observed. CONCLUSION The feasibility and logistics of patient recruitment, the randomization procedure, the blinding approach, the application of interventions, and outcome assessment were tested in this pilot trial. The preliminary data appeared to show a favorable result for AA treatment. A full-scale trial is warranted. TRIAL REGISTRATION Chinese Clinical Trial Registry ChiCTR-TRC-12002272.
Clothing genre classification by exploiting the style elements
This paper presents a novel approach to automatically classifying upperwear genre from a full-body input image with no restrictions on model poses, image backgrounds, or image resolutions. Five style elements that are crucial for clothing recognition are identified based on clothing design theory, and corresponding features for each style element are designed. We illustrate the effectiveness of our approach by showing that the proposed algorithm achieves an overall precision of 92.04%, recall of 92.45%, and F-score of 92.25% on 1,077 clothing images crawled from popular online stores.
Assessment of celecoxib pharmacodynamics in pancreatic cancer.
Cyclooxygenase-2 (COX-2) inhibitors are being developed as chemopreventive and anticancer agents. This study aimed to determine the biological effect of the COX-2 inhibitor celecoxib in pancreatic cancer as an early step to the further development of the agent in this disease. Eight patients scheduled for resection of an infiltrating adenocarcinoma of the pancreas were randomized to receive celecoxib at a dose of 400 mg twice daily or placebo for 5 to 15 days before the surgery. In addition, carcinomas from nine additional patients were xenografted in nude mice, expanded, and treated with vehicle or celecoxib for 28 days. Celecoxib markedly decreased the intra-tumor levels of prostaglandin E2 in patient carcinomas and in the heterotransplanted xenografts. However, this effect did not result in inhibition of cell proliferation or microvessel density (as assessed by Ki67 and CD31 staining). In addition, a panel of markers, including bcl-2, COX-1, COX-2, and VEGF, did not change with treatment in a significant manner. Furthermore, there was no evidence of antitumor effects in the xenografted carcinomas. In summary, celecoxib efficiently inhibited the synthesis of prostaglandin E2 both in pancreatic cancer surgical specimens and in xenografted carcinomas but did not exert evident antitumor, antiproliferative, or antiangiogenic effect as a single agent. The direct pancreatic cancer xenograft model proved to be a valuable tool for drug evaluation and biological studies and showed similar results to those observed in resected pancreatic cancer specimens.
Visual domain-specific modelling: Benefits and experiences of using metaCASE tools
Jackson (Jackson 95) recognises the vital difference between an application's domain and its code: two different worlds, each with its own language, experts, and ways of thinking. A finished application forms the intersection between these worlds. The difficult job of the software engineer is to build a bridge between these worlds while solving problems in both of them.
Cloud Computing: Characteristics and Deployment Approaches
Cloud Computing is a generic term for anything that involves delivering hosted services over the Internet. It is a new paradigm based on a pay-as-you-go approach. While many large enterprises have embraced cloud technologies and infrastructures, numerous vendors have developed application, platform, and infrastructure services for other organizations to consume. This paper introduces the Cloud Computing concepts, explores the benefits it promises, discusses the inherent issues and challenges, and explains, in some detail, the deployment and delivery models for Cloud Computing. The aim is to provide general information for enterprises wishing to integrate their existing IT processes and systems with Cloud services and infrastructures that are readily available for their consumption.
Survey of Maneuvering Target Tracking. Part I: Dynamic Models
The key to successful target tracking lies in the effective extraction of useful information about the target's state from observations. A good model of the target will certainly facilitate this information extraction to a great extent. In general, one can say without exaggeration that a good model is worth a thousand pieces of data. This statement has an even stronger positive connotation in target tracking, where observation data are rather limited. Most tracking algorithms are model based because knowledge of target motion is available, and a good model-based tracking algorithm will greatly outperform any model-free tracking algorithm if the underlying model turns out to be a good one. As such, it is hard to overstate the importance of a good model here. Various mathematical models of target motion have been developed over the past three decades. They are, however, scattered in the literature, and many of them have never appeared in any periodical in the public domain. As a result, few people have a good knowledge of these models, partly due to the lack of a comprehensive survey. The importance of such a survey for both practitioners and researchers in the tracking community is evident. The single best source so far is, in our opinion, the recent book by Blackman and Popoli [1], which is nonetheless far from complete. Some more or less standard models for target motion can be found in established books on target tracking and/or estimation, such as [2–12]. This paper is the first part of a comprehensive and up-to-date survey of techniques for maneuvering target tracking; the survey is an ongoing project, and conference versions of its first several parts have appeared in [13–17]. It is well known that the so-called measurement-origin uncertainty and target motion uncertainty are the two major challenges in target tracking. To limit the scope of the work, this survey deals only with the second uncertainty, leaving the techniques unique to data-association problems untouched. Target detection, tracking, and recognition are closely interrelated areas with significant overlaps, and it is not easy to draw a clear line to separate them. To be relatively more focused, this part covers mainly dynamic models of a "point target," that is, those of the dynamic (temporal) behaviors, rather than spatial characteristics, of a target. While many of these models are also useful for target detection and recognition, this survey is concerned only with their value for target tracking. This of course does not prevent us from developing or applying a model that describes both the temporal evolution and spatial characteristics of a target. Needless to say, target dynamic models and tracking algorithms have intimate ties.
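As a taste of the models the survey catalogues, the simplest member of the family is the (nearly) constant-velocity model; the discretization below is standard, while the numbers are our own illustrative choices.

```python
import numpy as np

T, q = 1.0, 0.05                 # sampling period; power spectral density of accel. noise
F = np.array([[1.0, T],
              [0.0, 1.0]])       # constant-velocity transition (one coordinate)
Q = q * np.array([[T**3 / 3, T**2 / 2],
                  [T**2 / 2, T]])  # process noise of the white-noise-acceleration model

rng = np.random.default_rng(0)
L = np.linalg.cholesky(Q)        # to draw w_k ~ N(0, Q)
x = np.array([0.0, 1.0])         # initial position and velocity
track = []
for _ in range(20):
    x = F @ x + L @ rng.normal(size=2)   # x_{k+1} = F x_k + w_k
    track.append(x.copy())
print(np.array(track)[:3])       # first few simulated (position, velocity) states
```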
Using Phishing Experiments and Scenario-based Surveys to Understand Security Behaviours in Practice
Threats from social engineering can cause organisations severe damage if they are not considered and managed. In order to understand how to manage those threats, it is important to examine why organisational employees fall victim to social engineering. In this paper, the objective is to understand security behaviours in practice by investigating factors that may cause an individual to comply with a request posed by a perpetrator. To attain this objective, we collect data through a scenario-based survey and conduct phishing experiments in three organisations. The results from the experiments reveal that the degree of target information in an attack increases the likelihood that an organisational employee falls victim to an actual attack. Further, an individual's trust and risk behaviour significantly affect actual behaviour during the phishing experiment. Computer experience at work, helpfulness, and gender (females tend to be less susceptible to a generic attack than men) correlate significantly with the behaviour reported by respondents in the scenario-based survey. No correlation between performance in the scenario-based survey and in the experiment was found. We argue that this result does not imply that one or the other method should be ruled out, as both have advantages and disadvantages that should be considered in the context of collecting data in the critical domain of information security. Discussions of the findings, implications, and recommendations for future research are provided.
Formation of reactive halide species by myeloperoxidase and eosinophil peroxidase.
The formation of chloro- and bromohydrins from 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine following incubation with myeloperoxidase or eosinophil peroxidase in the presence of hydrogen peroxide, chloride and/or bromide was analysed by matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry. These products were only formed below a certain pH threshold value, that increased with increasing halide concentration. Thermodynamic considerations on halide and pH dependencies of reduction potentials of all redox couples showed that the formation of a given reactive halide species in halide oxidation coupled with the reduction of compound I of heme peroxidases is only possible below a certain pH threshold that depends on halide concentration. The comparison of experimentally derived and calculated data revealed that Cl(2), Br(2), or BrCl will primarily be formed by the myeloperoxidase-H(2)O(2)-halide system. However, the eosinophil peroxidase-H(2)O(2)-halide system forms directly HOCl and HOBr.
[The corrugator supercilii muscle. A review].
The corrugator supercilii is a facial muscle of the forehead and supra-orbital region. The glabellar frown wrinkles are mainly formed by repeated contractions of this muscle, and they can produce an appearance of premature ageing even in a young person. Many treatments reduce or abolish the action of this muscle, enhancing the appearance of the glabellar area. We review recent material on the anatomical characteristics of this muscle in order to build the knowledge needed to optimize the results of these different treatments.
SAMU: Design and implementation of selectivity-aware MU-MIMO for wideband WiFi
In anticipation of the increasing demand of wireless traffic, WiFi standardization efforts have recently focused on two key technologies for capacity improvement: multi-user MIMO and wider bandwidth. However, users experience heterogeneous channel orthogonality characteristics across sub-carriers in the same channel bandwidth, which prevents ideal multi-user gain. Moreover, frequency selectivity increases as bandwidth scales and correspondingly severely deteriorates multi-user MIMO performance. In this work, we consider the frequency selectivity of current and emerging WiFi channel bandwidths to optimize multi-user MIMO by dividing the occupied channel bandwidth into equally-sized sub-channels according to the level of frequency selectivity. In our selectivity-aware multi-user MIMO design, SAMU, each sub-channel is allocated according to the largest bandwidth that can be considered frequency-flat, and an optimal subset of users is chosen to serve in each sub-channel according to spatial orthogonality, achieving a significant performance improvement for all users in the network. Additionally, we propose a selectivity-aware very high throughput (SA-VHT) mode, which builds on and extends the existing IEEE 802.11ac standard. Over emulated and real indoor channels, even with minimal mobility, SAMU achieves as much as 80 percent throughput improvement compared to existing multi-user MIMO schemes, which could serve as a lower bound as bandwidth scales.
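The per-sub-channel user selection can be sketched as a greedy search for spatially orthogonal users. The snippet below illustrates the general principle, not SAMU's exact algorithm; the channel model and selection criterion are our assumptions.

```python
import numpy as np

def greedy_orthogonal_users(H, n_select):
    """Greedily pick users whose channel vectors are most mutually orthogonal.
    H: (n_users, n_antennas) channel matrix for one frequency-flat sub-channel."""
    norms = np.linalg.norm(H, axis=1)
    chosen = [int(np.argmax(norms))]                 # start from the strongest user
    while len(chosen) < n_select:
        best, best_score = None, -1.0
        for u in range(H.shape[0]):
            if u in chosen:
                continue
            # residual channel energy after projecting onto the chosen users' subspace
            B = H[chosen].T                          # (n_antennas, |chosen|)
            proj = B @ np.linalg.pinv(B) @ H[u]
            score = np.linalg.norm(H[u] - proj)
            if score > best_score:
                best, best_score = u, score
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
H = (rng.normal(size=(8, 4)) + 1j * rng.normal(size=(8, 4))) / np.sqrt(2)  # Rayleigh
print("serve users", greedy_orthogonal_users(H, n_select=4), "on this sub-channel")
```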
The Major Transitions in Evolution
Renewable Energy Pricing Driven Scheduling in Distributed Smart Community Systems
A smart community is a distributed system consisting of a set of smart homes that use smart home scheduling techniques to let customers automatically schedule their energy loads for purposes such as electricity bill reduction. Smart home scheduling is usually implemented in a decentralized fashion inside a smart community, where customers compete for community-level renewable energy due to its relatively low price. Typically there exists an aggregator acting as a community-wide electricity policy maker aiming to minimize the total electricity bill among all customers. This paper develops a new renewable-energy-aware pricing scheme to achieve this target. We prove that, under certain assumptions, the optimal solution of decentralized smart home scheduling is equivalent to that of the centralized technique, reaching the theoretical lower bound of the community-wide total electricity bill. In addition, an advanced cross-entropy optimization technique is proposed to compute the pricing scheme for renewable energy, which is then integrated into smart home scheduling. The simulation results demonstrate that our pricing scheme reduces both the community-wide electricity bill and individual electricity bills compared to uniform pricing. In particular, the community-wide electricity bill can be reduced to only 0.06 percent above the theoretical lower bound.
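The cross-entropy step admits a compact sketch: sample candidate renewable-energy prices from a Gaussian, score each by the community-wide bill the resulting schedules would produce, and refit the distribution to the elite samples. The bill function here is a stand-in of our own for the full decentralized scheduling simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def community_bill(price):
    """Stand-in for the scheduling simulation: the community-wide bill obtained
    when renewable energy is offered at `price` (assumed toy landscape)."""
    return (price - 0.7) ** 2 + 0.05 * np.sin(8 * price)

mu, sigma = 0.5, 0.3                    # initial sampling distribution over prices
n_samples, n_elite = 50, 10
for _ in range(30):                     # cross-entropy optimization loop
    prices = rng.normal(mu, sigma, n_samples)
    elite = prices[np.argsort([community_bill(p) for p in prices])[:n_elite]]
    mu, sigma = elite.mean(), elite.std() + 1e-6   # refit to the best samples
print(f"selected renewable price = {mu:.3f}, bill = {community_bill(mu):.4f}")
```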
Trans-gram, Fast Cross-lingual Word-embeddings
We introduce Trans-gram, a simple and computationally efficient method to simultaneously learn and align word embeddings for a variety of languages, using only monolingual data and a smaller set of sentence-aligned data. We use our new method to compute aligned word embeddings for twenty-one languages using English as a pivot language. We show that some linguistic features are aligned across languages for which we do not have aligned data, even though those properties do not exist in the pivot language. We also achieve state-of-the-art results on standard cross-lingual text classification and word translation tasks.
Efficient and Robust Retrieval by Shape Content through Curvature Scale Space
We introduce a very fast and reliable method for shape similarity retrieval in large image databases which is robust with respect to noise, scale and orientation changes of the objects. The maxima of curvature zero-crossing contours of the Curvature Scale Space (CSS) image are used to represent the shapes of object boundary contours. While a complex boundary is represented by about five pairs of integer values, an effective indexing method based on the aspect ratio of the CSS image, eccentricity and circularity is used to narrow down the range of searching. Since the matching algorithm has been designed to use global information, it is sensitive to major occlusion, but some minor occlusion will not cause any problems. We have tested and evaluated our method on a prototype database of 450 images of marine animals with a vast variety of shapes, with very good results. The method can either be used in real applications or produce a reliable shape description for more complicated images when other features such as color and texture should also be considered. Since shape similarity is a subjective issue, in order to evaluate the method, we asked a number of volunteers to perform similarity retrieval based on shape on a randomly selected small database. We then compared the results of this experiment to the outputs of our system for the same queries on the same database. The comparison indicated a promising performance of the system.
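The CSS representation is built by tracking curvature zero crossings of the boundary as it is smoothed at increasing scales; the maxima of the resulting contours form the shape descriptor. A short numpy/scipy sketch with our own parameter choices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def css_zero_crossings(x, y, sigmas):
    """For each scale, smooth the closed contour and locate curvature zero crossings."""
    rows = []
    for s in sigmas:
        xs = gaussian_filter1d(x, s, mode="wrap")   # periodic smoothing of the contour
        ys = gaussian_filter1d(y, s, mode="wrap")
        dx, dy = np.gradient(xs), np.gradient(ys)
        ddx, ddy = np.gradient(dx), np.gradient(dy)
        kappa = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
        zc = np.where(np.sign(kappa[:-1]) != np.sign(kappa[1:]))[0]
        rows.append((s, zc))            # one row of the CSS image
    return rows

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
x = np.cos(t) * (1 + 0.3 * np.cos(5 * t))   # star-like closed boundary
y = np.sin(t) * (1 + 0.3 * np.cos(5 * t))
for sigma, zc in css_zero_crossings(x, y, sigmas=[1, 4, 16, 32]):
    print(f"sigma={sigma:>3}: {len(zc)} curvature zero crossings")
```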
What's Going On in Neural Constituency Parsers? An Analysis
A number of differences have emerged between modern and classic approaches to constituency parsing in recent years, with structural components like grammars and feature-rich lexicons becoming less central while recurrent neural network representations rise in popularity. The goal of this work is to analyze the extent to which information provided directly by the model structure in classical systems is still being captured by neural methods. To this end, we propose a high-performance neural model (92.08 F1 on PTB) that is representative of recent work and perform a series of investigative experiments. We find that our model implicitly learns to encode much of the same information that was explicitly provided by grammars and lexicons in the past, indicating that this scaffolding can largely be subsumed by powerful general-purpose neural machinery.
Near-regular texture analysis and manipulation
A near-regular texture deviates geometrically and photometrically from a regular congruent tiling. Although near-regular textures are ubiquitous in the man-made and natural world, they present computational challenges for state of the art texture analysis and synthesis algorithms. Using regular tiling as our anchor point, and with user-assisted lattice extraction, we can explicitly model the deformation of a near-regular texture with respect to geometry, lighting and color. We treat a deformation field both as a function that acts on a texture and as a texture that is acted upon, and develop a multi-modal framework where each deformation field is subject to analysis, synthesis and manipulation. Using this formalization, we are able to construct simple parametric models to faithfully synthesize the appearance of a near-regular texture and purposefully control its regularity.
PHILOSOPHY AND THE MIRROR OF NATURE
Richard Rorty's Philosophy and the Mirror of Nature brings to light the deep sense of crisis within the profession of academic philosophy, which is similar to the paralyzing pluralism in contemporary theology and the inveterate indeterminacy of literary criticism. Richard Rorty's provocative and profound meditations impel philosophers to examine the problematic status of their discipline—only to discover that modern European philosophy has come to an end. Rorty strikes a deathblow to modern European philosophy by telling a story about the emergence, development and decline of its primary props: the correspondence theory of truth, the notion of privileged representations and the idea of a self-reflective transcendental subject. Rorty's fascinating tale—his-story—is regulated by three fundamental shifts which he delineates in detail and promotes in principle: the move toward anti-realism or conventionalism in ontology, the move toward the demythologizing of the Myth of the Given or anti-foundationalism in epistemology, and the move toward detranscendentalizing the subject or dismissing the mind as a sphere of inquiry. The chief importance of Rorty's book is that it brings together in an original and intelligible narrative the major insights of the patriarchs of postmodern American philosophy—W. V. Quine, Wilfrid Sellars, and Nelson Goodman—and persuasively presents the radical consequences of their views for contemporary philosophy. Rorty credits Wittgenstein, Heidegger and Dewey for having "brought us into a period of 'revolutionary' philosophy" by undermining the prevailing Cartesian and Kantian paradigms and advancing new conceptions of philosophy. And these monumental figures surely inspire Rorty. Yet Rorty's philosophical debts—the actual sources of his particular anti-Cartesian and anti-Kantian arguments—are Quine's holism, Sellars' anti-foundationalism, and Goodman's pluralism. In short, despite his adamant attack on analytical philosophy—the last stage of modern European philosophy—Rorty feels most comfortable with the analytical form of philosophical argumentation (shunned by Wittgenstein and Heidegger). From the disparate figures of Wittgenstein, Heidegger, and Dewey, Rorty gets a historicist directive: to eschew the quest for certainty and the search for foundations.
The pandemic of physical inactivity: global action for public health
Physical inactivity is the fourth leading cause of death worldwide. We summarise present global efforts to counteract this problem and point the way forward to address the pandemic of physical inactivity. Although evidence for the benefits of physical activity for health has been available since the 1950s, promotion to improve the health of populations has lagged in relation to the available evidence and has only recently developed an identifiable infrastructure, including efforts in planning, policy, leadership and advocacy, workforce training and development, and monitoring and surveillance. The reasons for this late start are myriad, multifactorial, and complex. This infrastructure should continue to be formed, intersectoral approaches are essential to advance, and advocacy remains a key pillar. Although there is a need to build global capacity based on the present foundations, a systems approach that focuses on populations and the complex interactions among the correlates of physical inactivity, rather than solely a behavioural science approach focusing on individuals, is the way forward to increase physical activity worldwide.
Gait Generation With Smooth Transition Using CPG-Based Locomotion Control for Hexapod Walking Robot
This paper presents a locomotion control method based on a central pattern generator (CPG) for a hexapod walking robot to achieve gait generation with smooth transition. By deriving an analytical limit cycle approximation of the Van der Pol oscillator, a simple diffusive coupling scheme is proposed to construct a ring-shaped CPG network with phase-locked behavior. The stability of the proposed network is proved using synchronization analysis with guaranteed uniform ultimate boundedness of the synchronous errors. In contrast to conventional numerical methods for tuning the parameters of the CPG network, our method provides an explicit result that analytically determines the network parameters to yield rhythmic waveforms with prescribed frequency, amplitude, and phase relationship among neurons. Employing the proposed network to govern the swing/stance phase according to the profile of the resulting CPG signals, a locomotion control strategy for the hexapod robot is further developed to manipulate the leg movements during the gait cycle. By simply adjusting the phase lags of the CPG network, the proposed control strategy is capable of generating stable walking gaits for hexapod robots and achieving smooth transition among the generated gaits. The simulation and experimental results have demonstrated the effectiveness of the proposed locomotion control method.
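The network's building block can be sketched as N Van der Pol oscillators on a ring, each diffusively coupled toward a phase-shifted copy of its neighbour. The Euler-integrated snippet below is an illustrative reconstruction with our own gains and coupling form, not the paper's analytically derived parameters.

```python
import numpy as np

N, mu, k = 6, 1.0, 0.5          # six oscillators (one per leg), VdP damping, coupling gain
phase_lag = np.pi               # desired lag between neighbours (tripod-like pattern)
dt, steps = 0.01, 20000

rng = np.random.default_rng(0)
x = rng.normal(size=N) * 0.1    # oscillator positions
v = rng.normal(size=N) * 0.1    # oscillator velocities

def rotate(xi, vi, phi):
    """Rotate a neighbour's state by the desired phase lag before diffusive coupling."""
    return np.cos(phi) * xi - np.sin(phi) * vi

for _ in range(steps):
    # ring topology: each oscillator couples to its predecessor, shifted by phase_lag
    target = rotate(np.roll(x, 1), np.roll(v, 1), phase_lag)
    a = mu * (1 - x**2) * v - x + k * (target - x)   # Van der Pol + diffusive term
    x, v = x + dt * v, v + dt * a

print("steady-state amplitudes:", np.round(np.abs(x), 2))
```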
ADvanced IMage Algebra (ADIMA): a novel method for depicting multiple sclerosis lesion heterogeneity, as demonstrated by quantitative MRI
BACKGROUND There are modest correlations between multiple sclerosis (MS) disability and white matter lesion (WML) volumes, as measured by T2-weighted (T2w) magnetic resonance imaging (MRI) scans (T2-WML). This may partly reflect pathological heterogeneity in WMLs, which is not apparent on T2w scans. OBJECTIVE To determine if ADvanced IMage Algebra (ADIMA), a novel MRI post-processing method, can reveal WML heterogeneity from proton-density weighted (PDw) and T2w images. METHODS We obtained conventional PDw and T2w images from 10 patients with relapsing-remitting MS (RRMS) and ADIMA images were calculated from these. We classified all WML into bright (ADIMA-b) and dark (ADIMA-d) sub-regions, which were segmented. We obtained conventional T2-WML and T1-WML volumes for comparison, as well as the following quantitative magnetic resonance parameters: magnetisation transfer ratio (MTR), T1 and T2. Also, we assessed the reproducibility of the segmentation for ADIMA-b, ADIMA-d and T2-WML. RESULTS Our study's ADIMA-derived volumes correlated with conventional lesion volumes (p < 0.05). ADIMA-b exhibited higher T1 and T2, and lower MTR than the T2-WML (p < 0.001). Despite the similarity in T1 values between ADIMA-b and T1-WML, these regions were only partly overlapping with each other. ADIMA-d exhibited quantitative characteristics similar to T2-WML; however, they were only partly overlapping. Mean intra- and inter-observer coefficients of variation for ADIMA-b, ADIMA-d and T2-WML volumes were all < 6 % and < 10 %, respectively. CONCLUSION ADIMA enabled the simple classification of WML into two groups having different quantitative magnetic resonance properties, which can be reproducibly distinguished.
Management of Acral Lentiginous Melanoma
Cutaneous malignant melanoma is the most common cause of mortality from skin cancers in Caucasian populations. The incidence rates of malignant melanoma show considerable variation worldwide. Annual incidence rates per 100,000 people vary between about 40 in Australia and New Zealand and about 20 in the United States [1,2]. In contrast, a significantly lower incidence rate has been reported in Asian populations, with rates of 0.65 to 1 per 100,000 [3-5]. In addition, the most common sites of melanoma occurrence in Asians are the extremities, at a rate of about 50% of all cases [6,7], compared to only 2-3% in Caucasian populations [8].
Which children benefit from letter names in learning letter sounds?
Typical U.S. children use their knowledge of letters' names to help learn the letters' sounds. They perform better on letter sound tests with letters that have their sounds at the beginnings of their names, such as v, than with letters that have their sounds at the ends of their names, such as m, and letters that do not have their sounds in their names, such as h. We found this same pattern among children with speech sound disorders, children with language impairments as well as speech sound disorders, and children who later developed serious reading problems. Even children who scored at chance on rhyming and sound matching tasks performed better on the letter sound task with letters such as v than with letters such as m and h. Our results suggest that a wide range of children use the names of letters to help learn the sounds and that phonological awareness, as conventionally measured, is not required in order to do so.
Vertex Domination in t-Norm Fuzzy Graphs
For the first time, we fuzzify the concept of domination from crisp graphs on a generalization of fuzzy graphs, using the membership values of vertices, α-strong edges, and edges. In this paper, we introduce the first variation on the domination theme, which we call vertex domination. We determine the vertex domination number γv for several classes of t-norm fuzzy graphs, including complete t-norm fuzzy graphs, complete bipartite t-norm fuzzy graphs, star t-norm fuzzy graphs, and empty t-norm fuzzy graphs. The relationship between effective edges and α-strong edges is obtained. Finally, we discuss the vertex dominating set of a fuzzy tree with respect to a t-norm ⊗, using bridges and the equivalence with α-strong edges.
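For intuition, a vertex domination number can be computed by brute force on a small example: restrict attention to strong edges and find a dominating set of minimum total vertex membership. The strength test and the minimum-weight convention below are simplifying assumptions of ours, standing in for the full t-norm machinery.

```python
from itertools import combinations

# toy fuzzy graph: vertex membership values and edge membership values
sigma = {"a": 0.8, "b": 0.6, "c": 0.9, "d": 0.5}
mu = {("a", "b"): 0.55, ("b", "c"): 0.6, ("c", "d"): 0.45, ("a", "d"): 0.2}

def strong(u, v):
    """Illustrative strength test (a stand-in for the alpha-strong condition)."""
    e = mu.get((u, v)) or mu.get((v, u)) or 0.0
    return e >= 0.5 * min(sigma[u], sigma[v])

def dominates(D):
    """Every vertex is in D or strongly adjacent to some member of D."""
    return all(v in D or any(strong(u, v) for u in D) for v in sigma)

best = None
for r in range(1, len(sigma) + 1):          # enumerate all candidate vertex sets
    for D in combinations(sigma, r):
        if dominates(D):
            weight = sum(sigma[v] for v in D)   # weight by vertex memberships
            if best is None or weight < best[0]:
                best = (weight, D)
print("vertex domination number gamma_v = %.2f via %s" % best)
```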
Deep Decentralized Multi-task Multi-Agent Reinforcement Learning under Partial Observability
Many real-world tasks involve multiple agents with partial observability and limited communication. Learning is challenging in these settings due to the local viewpoints of agents, which perceive the world as non-stationary due to concurrently exploring teammates. Approaches that learn specialized policies for individual tasks face problems when applied to the real world: not only do agents have to learn and store distinct policies for each task, but in practice the identities of tasks are often non-observable, making these approaches inapplicable. This paper formalizes and addresses the problem of multi-task multi-agent reinforcement learning under partial observability. We introduce a decentralized single-task learning approach that is robust to concurrent interactions of teammates, and present an approach for distilling single-task policies into a unified policy that performs well across multiple related tasks, without explicit provision of task identity.
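The distillation step can be sketched independently of the multi-agent machinery: a single student network is trained to match the action distributions of several task-specific teachers on each task's own data, without ever being given the task identity. All architectures and data in this PyTorch sketch are our assumptions.

```python
import torch
import torch.nn.functional as F

obs_dim, n_actions, n_tasks = 10, 4, 3

def make_net():
    return torch.nn.Sequential(torch.nn.Linear(obs_dim, 32), torch.nn.Tanh(),
                               torch.nn.Linear(32, n_actions))

teachers = [make_net() for _ in range(n_tasks)]   # stand-ins for trained task policies
student = make_net()                              # one unified multi-task policy
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(200):
    task = step % n_tasks
    obs = torch.randn(64, obs_dim)                # stand-in for task-specific observations
    with torch.no_grad():
        teacher_logp = F.log_softmax(teachers[task](obs), dim=-1)
    student_logp = F.log_softmax(student(obs), dim=-1)
    # KL(teacher || student): the student imitates each teacher on its own task's data
    loss = F.kl_div(student_logp, teacher_logp, log_target=True, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final distillation loss:", float(loss))
```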