Sliding mode control strategy for three-phase DVR employing twelve-switch voltage source converter
In this study, a new method is proposed for the suppression of voltage amplitude fluctuations on the load terminals. Moreover, the voltage harmonics on the load terminals are effectively reduced below the defined limits in the IEEE-519 standard. In the studied topology, a twelve-switch three-phase voltage source converter is used for effective control on the zero-sequence components due to the unbalanced voltages or single phase voltage sags. In order to obtain a fast dynamic response, sliding mode control strategy is used to control the system. The existence condition of the sliding mode is presented. The performance of the closed-loop system is verified with cases of voltage sag and utility voltage harmonics. The overall system is explained in detail and verified by using MATLAB/Simulink.
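As an illustration of the control-law family the abstract refers to, here is a minimal sliding mode control sketch in Python for a generic first-order voltage-tracking plant. The plant constants, error gain, and reaching gain are invented for illustration and are not the paper's twelve-switch converter model:

```python
# Sliding-mode tracking of a sinusoidal voltage reference for a toy
# first-order plant dv/dt = a*v + b*u + d(t). Plant and gain values are
# illustrative only, not taken from the paper's converter model.
import numpy as np

a, b = -50.0, 400.0            # hypothetical plant parameters
lam, eta = 2000.0, 5.0         # error feedback gain and reaching gain
dt = 1e-5
t = np.arange(0.0, 0.02, dt)
v_ref = 311.0 * np.sin(2 * np.pi * 50.0 * t)        # 50 Hz, 220 Vrms peak
dv_ref = np.gradient(v_ref, dt)                     # reference derivative

v, err = 0.0, []
for k in range(len(t)):
    s = v - v_ref[k]                                # sliding surface s = e
    # equivalent control plus discontinuous reaching term; the existence
    # (reaching) condition s*ds/dt < 0 holds while eta*b exceeds the
    # disturbance bound
    u = (dv_ref[k] - a * v - lam * s - eta * b * np.sign(s)) / b
    d = 20.0 * np.sin(2 * np.pi * 250.0 * t[k])     # bounded disturbance
    v += dt * (a * v + b * u + d)
    err.append(abs(s))

print(f"max |e| over last 4 ms: {max(err[-400:]):.3f} V")
```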
Immunological functions of the human prepuce.
The demonisation of the human male prepuce has been an unscientific process, even though some research, on the surface, might seem to support it. In the late 19th century, when male circumcision came into vogue in medicine in the United States, there was near universal acceptance among American medical professionals that circumcision was an effective treatment for such “diseases” as masturbation, headache, insanity, epilepsy, paralysis, strabismus, rectal prolapse, hydrocephalus, and clubfoot. Leading medical journals published thousands of case reports demonstrating these and other miraculous therapeutic benefits from preputial amputation. The notion that circumcision improves hygiene and prevents sexually transmitted diseases (STDs) originated at the same time in the context of the discourse over racial and moral hygiene. The peculiar American phenomenon of mass newborn (that is, involuntary) circumcision is a product of the cold war era. United States doctors readily embraced the concept of mass, involuntary circumcision just as they had embraced involuntary sterilisation and other eugenic measures, practices rejected by almost all other Western nations. Mass circumcision peaked in the 1970s, when almost 90% of male neonates in the United States were circumcised. Since then, the rate has declined, but circumcision industry spokesmen have added to the list of diseases that circumcision allegedly prevents and cures. Historically, the most common reason given for circumcision has been that it prevents masturbation. Today, the most common reason given is that it inhibits the transmission of STDs, even though rigorously controlled studies have consistently shown that circumcised males are at greater risk for all major STDs than males whose penises are intact. Circumcision advocates are now claiming that circumcision prevents AIDS. A review of the scientific literature, however, reveals that the actual effect of circumcision is the destruction of the clinically demonstrated hygienic and immunological properties of the prepuce and intact penis. The sphincter action of the preputial orifice functions like a one-way valve, blocking the entry of contaminants while allowing the passage of urine.[8] Ectopic sebaceous glands concentrated near the frenulum produce smegma. This natural emollient contains prostatic and seminal secretions, desquamated epithelial cells, and the mucin content of the urethral glands of Littré.[14] It protects and lubricates the glans and inner lamella of the prepuce, facilitating erection, preputial eversion, and penetration during sexual intercourse. The inner prepuce contains apocrine glands, which secrete cathepsin B, lysozyme, chymotrypsin, neutrophil elastase, cytokine (a non-antibody protein that generates an immune response on contact with specific antigens), and pheromones such as androsterone. Lysozyme, which is also found in tears, human milk, and other body fluids, destroys bacterial cell walls. The natural composition of preputial bacterial flora is age dependent and similar to that of the eyes, mouth, skin, and female genitals. Washing the preputial sac was once thought to aid hygiene. Washing a stallion's preputial sac with soap, however, encourages the growth of pathogenic organisms. Washing the human prepuce with soap is a common cause of balanoposthitis.
Fussell et al have claimed that the prepuce is predisposed to colonisation by pathogenic bacteria, but they did not measure naturally occurring bacterial flora in living cohorts with undisturbed preputial microenvironments. They measured bacterial rates in dead, amputated, chemically treated prepuces inoculated with virulent strains of pathogenic bacteria, conditions that represent no known biological or behavioural reality. Animal experiments reveal that in the presence of hydrogen peroxide and halides or pseudohalides, soluble peroxidase in the prepuce has an antimicrobial activity. Plasma cells in the mucosal lining of the bovine prepuce secrete immunoglobulin under the epidermis that diffuses across the epidermis into the preputial cavity. In response to pathogenic bacterial infection, preputial plasma cells increase. Antibodies in breast milk supplement genital mucosal immunity in infants. Oligosaccharides in breast milk are ingested, then excreted in the urine, where they prevent Escherichia coli from adhering to the urinary tract and inner lining of the prepuce. An 8 year prospective study that controlled for genitourinary abnormalities found no difference in the rate of upper urinary tract infections between circumcised and intact boys. There are no histological studies that validate the claim that the sclerotic keratinisation of the epithelium of the surgically externalised, desiccated glans penis, meatus, or scar of the circumcised penis creates a barrier against infection. The higher rate of STDs in circumcised males might well be the result of the loss of these protective mechanisms.
Speaking Difficulties Encountered by Young EFL Learners
Speaking is the active use of language to express meaning, and for young learners, the spoken language is the medium through which a new language is encountered, understood, practiced, and learnt. Rather than oral skills being simply one aspect of learning language, the spoken form in the young learner's classroom acts as the prime source of language learning. However, speaking problems can be major challenges to effective foreign language learning and communication. English as a foreign language (EFL) learners, no matter how much they know about the English language, still face many speaking difficulties. Many studies have indicated that oral language development has largely been neglected in the classroom, and most of the time, oral language in the classroom is used more by teachers than by students. However, oral language, even as used by the teacher, hardly ever functions as a means for students to gain knowledge and explore ideas. To develop the knowledge to deal with oral communication problems in an EFL context, researchers first need to know the real nature of those problems and the circumstances in which ‘problems’ are constructed.
Hyperpolarised organic phosphates as NMR reporters of compartmental pH.
Organic phosphate metabolites contain functional groups with pKa values near the physiologic pH range, yielding pH-dependent ¹³C chemical shift changes of adjacent quaternary carbon sites. When formed in defined cellular compartments from exogenous hyperpolarised ¹³C substrates, metabolites can thus yield localised pH values and correlations of organelle pH and catalytic activity.
MATHSM: medial axis transform toward high speed machining of pockets
The pocketing operation is a fundamental procedure in NC machining. Typical pocketing schemes compute uniform successive offsets or parallel cuts of the outline of the pocket, resulting in a toolpath with C¹ discontinuities. These discontinuities render the toolpath quite impractical in the context of high speed machining. This work addresses and fully resolves the need for a C¹ continuous toolpath in high speed machining and offers MATHSM, a C¹ continuous toolpath for arbitrary C¹ continuous pockets. MATHSM generates a C¹ continuous toolpath that consists of primarily circular arcs while maximizing the radii of the generated arcs and, therefore, minimizing the exerted radial acceleration.
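The link between arc radius and radial acceleration that motivates MATHSM follows from the circular-motion relation a_r = v²/r at a fixed feed rate; a tiny sketch (feed value assumed) makes the trade-off concrete:

```python
# Radial acceleration of a tool following a circular arc at constant feed rate:
# a_r = v**2 / r, so enlarging the arc radius proportionally reduces the
# radial acceleration the machine must exert. The feed value is illustrative.
v = 0.5                                  # feed rate in m/s (30 m/min)
for r in (0.001, 0.005, 0.020):          # arc radii in metres
    print(f"r = {r*1000:5.1f} mm -> a_r = {v**2 / r:8.1f} m/s^2")
```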
Static UML Model Generator from Analysis of Requirements (SUGAR)
In this paper, we propose a tool, named Static UML Model Generator from Analysis of Requirements (SUGAR), which generates both use-case and class models with an emphasis on natural language requirements. SUGAR aims at integrating the requirement analysis and design phases by identifying use-cases, actors, and classes, along with their attributes and methods, and establishing proper associations among classes. The tool extends the ideas of previously existing tools and is implemented with the help of efficient natural language processing tools from the Stanford NLP Group, WordNet, and JavaRAP, using a modified Rational Unified Process approach with better accuracy. SUGAR adds new features and incorporates solutions to problems found in previous tools by developing both analysis and design class models. SUGAR generates all static UML models in Java in conjunction with Rational Rose and provides all functionalities of the system even when the developer has little domain knowledge.
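As a rough stand-in for the extraction step such tools perform, the sketch below tags one requirement sentence and treats nouns as class candidates and verbs as method candidates. SUGAR itself relies on the Stanford NLP tools, WordNet and JavaRAP, so this NLTK snippet is only an illustrative approximation:

```python
# Toy candidate extraction from a requirement sentence: nouns -> class
# candidates, verbs -> method candidates. Requires the NLTK data packages
# "punkt" and "averaged_perceptron_tagger" (nltk.download(...)).
import nltk

req = "The librarian issues a book to a member and records the due date."
tokens = nltk.pos_tag(nltk.word_tokenize(req))
classes = [w for w, t in tokens if t in ("NN", "NNS")]
methods = [w for w, t in tokens if t.startswith("VB")]
print("class candidates:", classes)
print("method candidates:", methods)
```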
Learning using privileged information: SVM+ and weighted SVM
Prior knowledge can be used to improve predictive performance of learning algorithms or reduce the amount of data required for training. The same goal is pursued within the learning using privileged information paradigm which was recently introduced by Vapnik et al. and is aimed at utilizing additional information available only at training time-a framework implemented by SVM+. We relate the privileged information to importance weighting and show that the prior knowledge expressible with privileged features can also be encoded by weights associated with every training example. We show that a weighted SVM can always replicate an SVM+ solution, while the converse is not true and we construct a counterexample highlighting the limitations of SVM+. Finally, we touch on the problem of choosing weights for weighted SVMs when privileged features are not available.
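The paper's central claim, that per-example weights can encode what privileged features express, can be exercised directly in scikit-learn, whose SVC accepts a sample_weight argument in fit. The weights below are synthetic placeholders for weights one would derive from privileged information:

```python
# A weighted SVM in scikit-learn: per-example weights passed to fit() play
# the role the paper assigns to privileged information. Weights are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
# Hypothetical "privileged" signal: presumed-easy examples get low weight,
# presumed-hard ones high weight (in practice derived from privileged features).
rng = np.random.default_rng(0)
w = rng.uniform(0.1, 2.0, size=len(y))

clf = SVC(kernel="rbf", C=1.0)
clf.fit(X, y, sample_weight=w)
print("train accuracy:", clf.score(X, y))
```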
Hilbertian Repulsive Effect and Dark Energy
A repulsive gravitational effect of general relativity (without cosmological term), which was pointed out by Hilbert many years ago, could play a decisive role in the explanation of the observational data concerning the accelerated expansion of the universe.
The informed consent process in a cross-cultural setting: is the process achieving the intended result?
This report is based on the experiences of Navajo interpreters working in a diabetes clinical trial and describes the problems encountered in translating the standard research consent across cultural and linguistic barriers. The interpreters and a Navajo language consultant developed a translation of the standard consent form, maintaining the sequence of information and exactly translating English words and phrases. After four months of using the translated consent, the interpreters met with the language expert and a diabetes expert to review their experiences in presenting the translation in the initial phases of recruitment. Their experiences suggest that the consent process often leads to embarrassment, confusion, and misperceptions that promoted mistrust. The formal processes that have been mandated to protect human subjects may create barriers to research in cross-cultural settings and may discourage participation unless sufficient attention is given to ensuring that both translations and cross-cultural communications are effective.
Using Machine Learning Algorithms for Breast Cancer Risk Prediction and Diagnosis
Breast cancer is one of the diseases responsible for a high number of deaths every year. It is the most common type of all cancers and the main cause of women's deaths worldwide. Classification and data mining methods are an effective way to classify data, especially in the medical field, where such methods are widely used in diagnosis and analysis for decision making. In this paper, a performance comparison between different machine learning algorithms: Support Vector Machine (SVM), Decision Tree (C4.5), Naive Bayes (NB) and k Nearest Neighbors (k-NN) on the Wisconsin Breast Cancer (original) dataset is conducted. The main objective is to assess the correctness in classifying data with respect to the efficiency and effectiveness of each algorithm in terms of accuracy, precision, sensitivity and specificity. Experimental results show that SVM gives the highest accuracy (97.13%) with the lowest error rate. All experiments are executed within a simulation environment and conducted in the WEKA data mining tool.
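A comparison in the same spirit can be sketched with scikit-learn's bundled Wisconsin diagnostic dataset; note the paper used the original dataset in WEKA, so these cross-validated numbers will not reproduce its 97.13% figure exactly:

```python
# Cross-validated comparison of the paper's four classifier families on the
# Wisconsin *diagnostic* dataset bundled with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
models = {
    "SVM":            make_pipeline(StandardScaler(), SVC()),
    "C4.5-like tree": DecisionTreeClassifier(criterion="entropy", random_state=0),
    "Naive Bayes":    GaussianNB(),
    "k-NN (k=5)":     make_pipeline(StandardScaler(), KNeighborsClassifier()),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=10).mean()
    print(f"{name:15s} 10-fold accuracy = {acc:.4f}")
```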
The role of estrogen receptors and androgen receptors in sex steroid regulation of B lymphopoiesis.
Several observations suggest that sex steroids might participate in steady state regulation of B lymphopoiesis. B cell precursors decline dramatically in bone marrow of pregnant or estrogen-treated mice. Reciprocally, the same cell populations are increased in hypogonadal mice or male castrates. Estrogen treatment of hypogonadal mice reduced precursors to normal. However, questions remain about which hormones and receptors are the most important. Furthermore, these observations need to be reconciled with advances regarding new sex steroid receptors. We have now characterized B lymphopoiesis in androgen receptor-deficient testicular feminization (Tfm) mice. Testicular feminization mice had substantially elevated numbers of B cell precursors in the bone marrow and B cells in the spleen as compared with wild-type mice. The importance of one estrogen receptor (ER alpha) was evaluated in gene-targeted mice, and B cell precursors were found to be within the normal range. Our previous studies indicated that hormone receptors in stromal cells may be important for estrogen-mediated suppression of B lymphopoiesis. We now show that estrogen-mediated inhibition of B cell precursor expansion in culture was blocked by a specific estrogen receptor antagonist (ICI 182,780). Stromal cells derived from ER alpha-targeted bone marrow were fully estrogen responsive. RT-PCR analyses of these stromal cells revealed splice-variant transcripts of ER alpha, as well as message for a recently discovered estrogen-binding receptor, ER beta. Thus, androgens may normally inhibit B lymphopoiesis through the androgen receptor, whereas estrogens might utilize one or more receptors to achieve the same physiologic response.
Wearable Activity Tracking in Car Manufacturing
A context-aware wearable computing system could support a production or maintenance worker by recognizing the worker's actions and delivering just-in-time information about activities to be performed.
Modeling the global freight transportation system: A multi-level modeling perspective
The interconnectedness of different actors in the global freight transportation industry has turned it into a large complex system in which different sub-systems are interrelated, and policy-related exploratory analyses with predictive capacity are difficult to perform on such a system. Although there are many global simulation models for various large complex systems, unfortunately very little research has aimed to develop a global freight transportation model. In this paper, we present a multi-level framework to develop an integrated model of the global freight transportation system. We employ a system view to incorporate the different relevant sub-systems and categorize them into different levels. The four-step model of freight transport is used as the basic foundation of the proposed framework. In addition, we also present a computational framework which adheres to the high-level modeling framework to provide a conceptualization of the discrete-event simulation model to be developed.
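Of the four steps the framework builds on, the distribution step is the easiest to illustrate; below is a toy gravity model with invented regions, productions, attractions and an assumed impedance exponent:

```python
# A toy gravity model for the distribution step of the four-step framework:
# flows proportional to production * attraction / impedance, row-balanced so
# each region's outflow matches its production. All numbers are invented.
import numpy as np

production = np.array([100.0, 60.0, 40.0])   # freight produced per region
attraction = np.array([80.0, 90.0, 30.0])    # freight attracted per region
cost = np.array([[1.0, 2.0, 4.0],
                 [2.0, 1.0, 3.0],
                 [4.0, 3.0, 1.0]])            # generalized transport cost

raw = production[:, None] * attraction[None, :] * cost ** -2.0  # beta = 2
flows = raw * (production / raw.sum(axis=1))[:, None]           # row-balance
print(np.round(flows, 1))
```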
Development of a unit-type robot "KOHGA2" with stuck-avoidance ability
To search for victims in narrow spaces at disaster sites, we have developed the snake-like rescue robot "KOHGA", constructed by serially connecting multiple crawler vehicles with active joints. KOHGA has a problem: obstacles catch on the joints and the robot becomes stuck. To solve this problem, we developed a unit-assembled robot, "KOHGA2", whose units can be rearranged. The robot can swing its crawler-arms to avoid getting stuck. In this paper, we report the construction of the hardware and control system of KOHGA2, its basic mobility performance, and its stuck-avoidance strategy.
Knowledge discovery through directed probabilistic topic models: a survey
Graphical models have become the basic framework for topic-based probabilistic modeling. Models with latent variables, especially, have proved to be effective in capturing hidden structures in the data. In this paper, we survey an important subclass, Directed Probabilistic Topic Models (DPTMs), with soft clustering abilities, and their applications for knowledge discovery in text corpora. From an unsupervised learning perspective, “topics are semantically related probabilistic clusters of words in text corpora; and the process for finding these topics is called topic modeling”. In topic modeling, a document consists of different hidden topics, and the topic probabilities provide an explicit representation of a document to smooth data at the semantic level. It has been an active area of research during the last decade, and many models have been proposed for handling the problems of modeling text corpora with different characteristics, for applications such as document classification, hidden association finding, expert finding, community discovery and temporal trend analysis. We present basic concepts, the advantages and disadvantages of existing models in chronological order, a classification of these models into different categories, their parameter estimation and inference algorithms, and performance evaluation measures. We also discuss their applications, open challenges and future directions in this dynamic area of research.
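The canonical DPTM is Latent Dirichlet Allocation, which can be run on a toy corpus in a few lines with scikit-learn; corpus and topic count here are illustrative:

```python
# Latent Dirichlet Allocation, the canonical directed probabilistic topic
# model, fitted on a tiny two-theme corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["stocks fall as markets react to rates",
        "team wins the match in extra time",
        "central bank raises interest rates again",
        "striker scores twice as team tops league"]
X = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
print(lda.transform(X).round(2))   # per-document topic proportions
```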
Boosting Domain Adaptation by Discovering Latent Domains
Current Domain Adaptation (DA) methods based on deep architectures assume that the source samples arise from a single distribution. However, in practice most datasets can be regarded as mixtures of multiple domains. In these cases exploiting single-source DA methods for learning target classifiers may lead to sub-optimal, if not poor, results. In addition, in many applications it is difficult to manually provide the domain labels for all source data points, i.e. latent domains should be automatically discovered. This paper introduces a novel Convolutional Neural Network (CNN) architecture which (i) automatically discovers latent domains in visual datasets and (ii) exploits this information to learn robust target classifiers. Our approach is based on the introduction of two main components, which can be embedded into any existing CNN architecture: (i) a side branch that automatically computes the assignment of a source sample to a latent domain and (ii) novel layers that exploit domain membership information to appropriately align the distribution of the CNN internal feature representations to a reference distribution. We test our approach on publicly-available datasets, showing that it outperforms state-of-the-art multi-source DA methods by a large margin.
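A minimal PyTorch sketch of the two components described, under assumed layer sizes that are not the paper's architecture: a side branch producing soft latent-domain assignments, and a domain-weighted normalization that aligns per-domain feature statistics:

```python
# Sketch of (i) a side branch predicting soft latent-domain assignments and
# (ii) a domain-weighted normalization of features. Sizes are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainBranch(nn.Module):
    def __init__(self, feat_dim, num_domains):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_domains)

    def forward(self, x):                       # x: (B, feat_dim)
        return F.softmax(self.fc(x), dim=1)     # soft domain assignment

def domain_weighted_norm(x, assign, eps=1e-5):
    # Normalize each sample with the statistics of its (soft) latent domain.
    out = torch.zeros_like(x)                   # x: (B, C), assign: (B, D)
    for d in range(assign.shape[1]):
        w = assign[:, d:d + 1]                  # (B, 1) soft membership
        mean = (w * x).sum(0) / w.sum().clamp_min(eps)
        var = (w * (x - mean) ** 2).sum(0) / w.sum().clamp_min(eps)
        out += w * (x - mean) / torch.sqrt(var + eps)
    return out

feats = torch.randn(8, 32)
branch = DomainBranch(32, num_domains=2)
print(domain_weighted_norm(feats, branch(feats)).shape)   # (8, 32)
```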
Amulet: Aggregating Multi-level Convolutional Features for Salient Object Detection
Fully convolutional neural networks (FCNs) have shown outstanding performance in many dense labeling problems. One key pillar of these successes is mining relevant information from features in convolutional layers. However, how to better aggregate multi-level convolutional feature maps for salient object detection is underexplored. In this work, we present Amulet, a generic aggregating multi-level convolutional feature framework for salient object detection. Our framework first integrates multi-level feature maps into multiple resolutions, which simultaneously incorporate coarse semantics and fine details. Then it adaptively learns to combine these feature maps at each resolution and predict saliency maps with the combined features. Finally, the predicted results are efficiently fused to generate the final saliency map. In addition, to achieve accurate boundary inference and semantic enhancement, edge-aware feature maps in low-level layers and the predicted results of low-resolution features are recursively embedded into the learning framework. By aggregating multi-level convolutional features in this efficient and flexible manner, the proposed saliency model provides accurate salient object labeling. Comprehensive experiments demonstrate that our method performs favorably against state-of-the-art approaches on nearly all compared evaluation metrics.
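The core aggregation step, resizing feature maps from several convolutional stages to a shared resolution and learning a fused prediction, can be sketched as follows; channel sizes and the single fusion convolution are placeholders rather than Amulet's exact layers:

```python
# Resize multi-level feature maps to one resolution, concatenate, and learn a
# fused saliency prediction. A schematic reduction, not Amulet itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AggregateLevel(nn.Module):
    def __init__(self, in_channels, out_size):
        super().__init__()
        self.out_size = out_size
        self.fuse = nn.Conv2d(sum(in_channels), 1, kernel_size=1)

    def forward(self, feats):                   # list of (B, C_i, H_i, W_i)
        resized = [F.interpolate(f, size=self.out_size, mode="bilinear",
                                 align_corners=False) for f in feats]
        return torch.sigmoid(self.fuse(torch.cat(resized, dim=1)))

feats = [torch.randn(1, c, s, s) for c, s in [(64, 56), (128, 28), (256, 14)]]
agg = AggregateLevel([64, 128, 256], out_size=(56, 56))
print(agg(feats).shape)                         # (1, 1, 56, 56) saliency map
```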
Burnout among U.S. medical students, residents, and early career physicians relative to the general U.S. population.
PURPOSE To compare the prevalence of burnout and other forms of distress across career stages and the experiences of trainees and early career (EC) physicians versus those of similarly aged college graduates pursuing other careers. METHOD In 2011 and 2012, the authors conducted a national survey of medical students, residents/fellows, and EC physicians (≤ 5 years in practice) and of a probability-based sample of the general U.S. population. All surveys assessed burnout, symptoms of depression and suicidal ideation, quality of life, and fatigue. RESULTS Response rates were 35.2% (4,402/12,500) for medical students, 22.5% (1,701/7,560) for residents/fellows, and 26.7% (7,288/27,276) for EC physicians. In multivariate models that controlled for relationship status, sex, age, and career stage, being a resident/fellow was associated with increased odds of burnout and being a medical student with increased odds of depressive symptoms, whereas EC physicians had the lowest odds of high fatigue. Compared with the population control samples, medical students, residents/fellows, and EC physicians were more likely to be burned out (all P < .0001). Medical students and residents/fellows were more likely to exhibit symptoms of depression than the population control samples (both P < .0001) but not more likely to have experienced recent suicidal ideation. CONCLUSIONS Training appears to be the peak time for distress among physicians, but differences in the prevalence of burnout, depressive symptoms, and recent suicidal ideation are relatively small. At each stage, burnout is more prevalent among physicians than among their peers in the U.S. population.
Development and validation of the stroke action test.
BACKGROUND AND PURPOSE Accurately assessing the public's readiness to respond to stroke is important. Most published measures are based on recall or recognition of stroke symptoms, or knowledge of the best action for stroke when the diagnosis is provided. The purpose of this study was to develop and evaluate a new written instrument whose items require the respondent to associate individual symptoms with the most appropriate action. METHODS The Stroke Action Test (STAT) contains 21 items that name or describe stroke symptoms from all 5 groups of warning signs and 7 items that are nonstroke symptoms. For each item, the respondent selects 1 of 4 options: call 911, call doctor, wait 1 hour, or wait 1 day. The instrument validation sample included 249 subjects from community-based organizations. Score reliability and validity were analyzed using multiple data and information sources. RESULTS The mean overall STAT score (all 28 items) for the lay people was 36.8%. On average, they chose call 911 for 34.1% of the stroke symptoms. They chose call doctor for 39.4% of the stroke symptoms, wait 1 hour for 20.1%, and wait 1 day for 6.0%. Score reliability is good (alpha=0.83). Evidence confirming score validity is presented based on analysis of item content and response patterns, and examination of the relationships between test scores and key variables related to stroke knowledge. CONCLUSIONS STAT directly assesses a critical aspect of practical stroke knowledge that has been largely overlooked and provides scores with good reliability and validity.
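The reported reliability (alpha = 0.83) is Cronbach's alpha; a small sketch with a simulated 28-item response matrix shows how such a coefficient is computed:

```python
# Cronbach's alpha from an item-response matrix of 0/1-scored answers.
# The response data here are simulated, not the study's.
import numpy as np

def cronbach_alpha(items):        # items: (respondents, items) score matrix
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
ability = rng.random((100, 1))                            # latent skill
scores = (rng.random((100, 28)) < ability).astype(int)    # 28 STAT-like items
print(round(cronbach_alpha(scores), 2))
```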
An Integrated Bayesian Approach for Effective Multi-Truth Discovery
Truth-finding is the fundamental technique for corroborating reports from multiple sources in both data integration and collective intelligence applications. Traditional truth-finding methods assume a single true value for each data item and therefore cannot deal with multiple true values (i.e., the multi-truth-finding problem). So far, existing approaches handle the multi-truth-finding problem in the same way as the single-truth-finding problem. Unfortunately, the multi-truth-finding problem has unique features, such as the involvement of sets of values in claims, different implications of inter-value mutual exclusion, and larger source profiles. Considering these features provides new opportunities for obtaining more accurate truth-finding results. Based on this insight, we propose an integrated Bayesian approach to the multi-truth-finding problem that takes these features into account. To improve truth-finding efficiency, we reformulate the multi-truth-finding problem model based on the mappings between sources and (sets of) values. New mutual exclusion relations are defined to reflect the possible co-existence of multiple true values. A finer-grained copy detection method is also proposed to deal with sources with large profiles. Experimental results on three real-world datasets show the effectiveness of our approach.
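A toy version of the Bayesian intuition, scoring each candidate value independently so several can be true at once, is sketched below; the source reliabilities are assumed constants rather than quantities learned as in the paper:

```python
# Toy Bayesian vote for multi-truth finding: each source claims a *set* of
# values; each value is scored independently so multiple values can be true.
import math

claims = {                        # source -> claimed set (e.g. book authors)
    "s1": {"Alice", "Bob"},
    "s2": {"Alice"},
    "s3": {"Alice", "Carol"},
}
reliability = {"s1": 0.8, "s2": 0.7, "s3": 0.6}   # assumed P(claim correct)
candidates = set().union(*claims.values())

for v in sorted(candidates):
    log_odds = 0.0
    for s, vals in claims.items():
        r = reliability[s]
        # a source supports v if it claims it, opposes it otherwise
        log_odds += math.log(r / (1 - r)) if v in vals else math.log((1 - r) / r)
    print(f"{v}: posterior log-odds = {log_odds:+.2f}")
```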
Homogeneous ice nucleation at moderate supercooling from molecular simulation.
Among all of the freezing transitions, that of water into ice is probably the most relevant to biology, physics, geology, or atmospheric science. In this work, we investigate homogeneous ice nucleation by means of computer simulations. We evaluate the size of the critical cluster and the nucleation rate for temperatures ranging between 15 and 35 K below melting. We use the TIP4P/2005 and the TIP4P/ice water models. Both give similar results when compared at the same temperature difference with the model's melting temperature. The size of the critical cluster varies from ∼8000 molecules (radius = 4 nm) at 15 K below melting to ∼600 molecules (radius = 1.7 nm) at 35 K below melting. We use Classical Nucleation Theory (CNT) to estimate the ice-water interfacial free energy and the nucleation free-energy barrier. We obtain an interfacial free energy of 29(3) mN/m from an extrapolation of our results to the melting temperature. This value is in good agreement both with experimental measurements and with previous estimates from computer simulations of TIP4P-like models. Moreover, we obtain estimates of the nucleation rate from simulations of the critical cluster at the barrier top. The values we get for both models agree within statistical error with experimental measurements. At temperatures higher than 20 K below melting, we get nucleation rates slower than the appearance of a critical cluster in all water of the hydrosphere during the age of the universe. Therefore, our simulations predict that water freezing above this temperature must necessarily be heterogeneous.
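The CNT quantities mentioned can be checked on the back of an envelope: with the paper's 29 mN/m interfacial free energy and textbook constants for ice, the critical radius r* = 2γ/(ρ|Δμ|) at 15 K supercooling lands on the same scale the simulations report. A sketch:

```python
# Classical Nucleation Theory estimate of the critical ice cluster at 15 K
# supercooling, using the paper's interfacial free energy (29 mN/m) and
# textbook constants for ice; a rough check, not the paper's calculation.
import math

gamma = 0.029                    # ice-water interfacial free energy, J/m^2
T_m, dT = 273.15, 15.0           # melting point and supercooling, K
dH = 6010.0 / 6.022e23           # heat of fusion per molecule, J
rho = 3.07e28                    # number density of ice, molecules/m^3

dmu = dH * dT / T_m              # chemical potential difference per molecule
r_c = 2 * gamma / (rho * dmu)    # critical radius
n_c = (4 / 3) * math.pi * r_c**3 * rho
print(f"r* = {r_c*1e9:.1f} nm, N* = {n_c:.0f} molecules")
# -> roughly 3-4 nm and a few thousand molecules, the same scale the
#    simulations report (~8000 molecules, radius 4 nm).
```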
Portable video supercomputing
As inexpensive imaging chips and wireless telecommunications are incorporated into an increasing array of portable products, the need for high-efficiency, high-throughput embedded processing will become an important challenge in computer architecture. Video-centric applications, such as wireless videoconferencing, real-time video enhancement and analysis, and new, immersive modes of distance education, will exceed the computational capabilities of current microprocessor and digital signal processor (DSP) architectures. A new class of embedded computers, portable video supercomputers, will combine supercomputer performance with the energy efficiency required for deployment in portable systems. We examine one candidate portable video supercomputer, a low-memory, monolithically integrated SIMD architecture (SIMPil) that exploits the substantial data parallelism that exists in a suite of implemented video processing applications. The processing element microarchitecture is optimized using a novel technique that combines application simulation and technology modeling to provide a desired combination of performance, area, and energy consumption. Analysis results show that, for MPEG encoding, a SIMPil array implemented in 100 nm CMOS provides 100x greater performance and 10x higher energy efficiency than today's DSPs implemented in 150 nm CMOS. This is accomplished using execution parallelism and a carefully selected processing element design. This research demonstrates that appropriately designed SIMD arrays, implemented monolithically in today's technology, can provide high performance and high efficiency for embedded video processing.
The Epistemological Basis of Cognitive Science in the Perspective of Karl Raimund Popper's Three Worlds
The title of this thesis is "The Epistemological Background of Cognitive Science in the Perspective of Karl Raimund Popper's Three Worlds". The Three Worlds thought is one of the most important works of Karl R. Popper, because the Three Worlds form the ontological, and also the epistemological, basis of Karl R. Popper's philosophy. This Three Worlds thought is then used to examine the epistemological basis of cognitive science. The aims of this research are to: (1) describe the definition and the main problems of cognitive science, and (2) find the epistemological background of cognitive science. This research is a library research using several methods, i.e.: interpretation - to understand the contents of the data found in Karl R. Popper's Three Worlds
A Cognitive Science Based Machine Learning Architecture
In an attempt to illustrate the application of cognitive science principles to hard AI problems in machine learning, we propose the LIDA technology, a cognitive science based architecture capable of more human-like learning. A LIDA based software agent or cognitive robot will be capable of three fundamental, continuously active, human-like learning mechanisms: 1) perceptual learning, the learning of new objects, categories, relations, etc., 2) episodic learning of events, the what, where, and when, 3) procedural learning, the learning of new actions and action sequences with which to accomplish new tasks. The paper argues for the use of modular components, each specializing in implementing individual facets of human and animal cognition, as a viable approach towards achieving general intelligence. Relevance of Cognitive Science to AI Dating back to Samuel's checker player (1959), machine learning is among the oldest of the sub-branches of AI with many practitioners and many successes to its credit. Still, after fifty years of effort there are remaining difficulties. Machine learning often requires large, accurate training sets, shows little awareness of what's known or not known, integrates new knowledge poorly into old, learns only one task at a time, allows little transfer of learned knowledge to new tasks, and is poor at learning from human teachers. Clearly, machine learning presents a number of hard AI problems. Can cognitive science help? In contrast, human learning has solved many of these problems, and is typically continual, quick, efficient, accurate, robust, flexible, and effortless. As an example consider perceptual learning, the learning of new objects, categories, relations, etc. Traditional machine learning approaches such as object detection, classification, clustering, etc., are highly susceptible to the problems raised above. However, perceptual learning in humans and animals seems to have no such restrictions. Perceptual learning in humans occurs incrementally, so there is no need for a large training set. Learning and knowledge extraction are achieved simultaneously through a dynamical system that can adapt to changes in the nature of the stimuli perceived in the environment. Additionally, human-like learning is based on reinforcement rather than fitting to a dataset or model. Therefore, in addition to learning, humans can also forget. Initially, many associations are made between entities; the ones that are sufficiently reinforced persist, while the ones that aren't decay. All this suggests a possible heuristic: if you want smart software, copy it after humans. We've done just that. The Learning Intelligent Distribution Agent (LIDA) architecture that we propose here was designed to be consistent with what is known from cognitive science and neuroscience. In addition to being a computational architecture, it's intended to model human cognition. We'll go on to describe the LIDA architecture and its human-like learning capabilities. Modularity as an Approach for Intelligence LIDA provides a conceptual and computational model of cognition. It is the learning extension of the original IDA system implemented as a software agent. IDA 'lives' on a computer system with connections to the Internet and various databases, and does personnel work for the US Navy, performing all the specific personnel tasks of a human (Franklin 2001).
The LIDA architecture is partly symbolic and partly connectionist with all symbols being grounded in the physical world in the sense of Brooks (1986; 1990). We argue for unique, specialized mechanisms to computationally implement the various facets of human cognition such as perception, episodic memories, functional consciousness, and action selection. We offer an evolutionary argument and a functional argument to justify the specialized, modular component approach to the design of an intelligent system. The evolutionary argument draws support from the sustained efforts by neuroscientists and brain physiologists in mapping distinct functions to different areas in the brain. In many ways the brain can be viewed as a kluge of different mechanisms. For example, parts of perceptual associative memory are believed to be in the perirhinal cortex (Davachi, Mitchell & Wagner 2003), while some of the neural correlates of autobiographical memory have been identified as the medial frontal cortex and left hippocampus (Conway & Fthenaki 2000; Maguire 2001). In addition to the neuroscience evidence, there are developmental arguments for a distinct mechanism for perceptual memory. Infants who have not yet developed object permanence (any episodic memory) are quite able to recognize and categorize (Mandler 2000). Other arguments come from studies of human amnesiacs with significant loss of declarative memory, but mostly intact perceptual associative memory and learning (Gabrieli et al. 1990, Fahle & Daum 2002). Perhaps the most convincing argument comes from experiments with rats in a radial arm maze. With four arms baited and four not (with none restocked), normal rats learn to recognize which arms to search (perceptual associative memory) and remember which arms they have already fed in (episodic memory) so as not to search there a second time. Rats with their hippocampal systems excised lose their episodic memory but retain perceptual associative memory, again arguing for distinct mechanisms (Olton, Becker, & Handelman 1979). Similarly, arguments for finer distinctions between the various episodic memory systems have been made. Episodic memories are memories for events of the what, where, and when. Conway (2001) argues for a memory system for recent, highly specific, sensory-perceptual information that is distinct from autobiographical memory, on the basis of different functions, knowledge stored, access techniques, phenomenology, and neurology. Additionally, Moscovitch et al. (2005) offer neuroimaging evidence for a finer-grained component analysis for semantic and spatial memories. The functional arguments in support of specialized modular components for general intelligence are derived from the need for primitives for any agent capable of robust autonomy. An initial set of primitive feature detectors for perception and a set of primitive effectors for action execution are a computational necessity for any autonomous agent, natural or artificial, software or robotic (Franklin 1997). Additionally, as widely recognized in humans, the need for primitive motivators implemented as feelings and emotions may also be required for any cognitive system that attempts to simulate general intelligence.
In humans, primitive feature detectors for vision include neurons in the primary visual cortex (V1) detecting line segments at various orientations, while primitive effectors include neuronal groups controlling individual muscles. Higher-level visual perception such as categorization, object detection, etc., is realized from associations between the various primitive feature detectors. Similarly, more complex actions and action sequences are learnt by associating the primitive effectors. These functional requirements that differ between perception and action strengthen the argument for specialized modular components. The LIDA Architecture On the basis of the arguments for specialized, modular components as an approach for intelligence, the LIDA architecture operates by the interplay of unique mechanisms implementing the major facets of human cognition. The mechanisms used in implementing the several modules have been inspired by a number of different 'new AI' techniques (Drescher 1991; Hofstadter & Mitchell 1994; Jackson 1987; Kanerva 1988; Maes 1989; Brooks 1986). We now describe LIDA's primary mechanisms. Perceptual Associative Memory. LIDA perceives both exogenously and endogenously with Barsalou's perceptual symbol systems serving as a guide (1999). The perceptual knowledge-base of this agent, called perceptual associative memory, takes the form of a semantic net with activation called the slipnet, à la Hofstadter and Mitchell's Copycat architecture (1994). Nodes of the slipnet constitute the agent's perceptual symbols, representing individuals, categories, and perhaps higher-level ideas and concepts. Pieces of the slipnet containing nodes and links, together with perceptual codelets (a codelet is a small piece of code running independently; perceptual codelets are a special type of codelet designed for perceptual tasks such as recognition) with the task of copying the piece to working memory, constitute Barsalou's perceptual symbol simulators (1999). Together they constitute an integrated perceptual system for LIDA, allowing the system to recognize, categorize and understand. Workspace. LIDA's workspace is analogous to the preconscious buffers of human working memory. Perceptual codelets write to the workspace as do other, more internal codelets. Attention codelets (codelets that form coalitions with other codelets to compete for functional consciousness) watch what is written in the workspace in order to react to it. Items in the workspace decay over time, and may be overwritten. Another pivotal role of the workspace is the building of temporary structures over multiple cognitive cycles (see below). Perceptual symbols from the slipnet are assimilated into existing relational and situational templates while preserving spatial and temporal relations between the symbols. The structures in the workspace also decay rapidly. Episodic Memory. Episodic memory in the LIDA architecture is composed of a declarative memory (DM) for the long-term storage of autobiographical and semantic information as well as a short-term transient episodic memory (TEM) similar to Conway's (2001) sensory-perceptual episodic memory with a retention rate measured in hours. LIDA employs variants of sparse distributed memory
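The workspace decay just described lends itself to a small illustration. The sketch below is a toy with invented class names and decay constants; it only mirrors the idea that unreinforced workspace items fade over cognitive cycles:

```python
# Toy LIDA-style workspace: codelets write items whose activations decay each
# cognitive cycle and are dropped below a threshold. Constants are invented.
class Workspace:
    def __init__(self, decay=0.8, floor=0.1):
        self.items, self.decay, self.floor = {}, decay, floor

    def write(self, key, activation):
        self.items[key] = max(self.items.get(key, 0.0), activation)

    def cycle(self):
        self.items = {k: v * self.decay for k, v in self.items.items()
                      if v * self.decay >= self.floor}

ws = Workspace()
ws.write("red-object", 1.0)      # a perceptual codelet's contribution
for _ in range(12):
    ws.cycle()
print(ws.items)                  # the unreinforced percept has decayed away
```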
LAYER RECURRENT NEURAL NETWORKS
In this paper, we propose a Layer-RNN (L-RNN) module that is able to learn contextual information adaptively using within-layer recurrence. Our contributions are three-fold: (i) we propose a hybrid neural network architecture that interleaves traditional convolutional layers with L-RNN modules for learning long-range dependencies at multiple levels; (ii) we show that an L-RNN module can be seamlessly inserted into any convolutional layer of a pre-trained CNN, with the entire network then fine-tuned, leading to a boost in performance; (iii) we report experiments on the CIFAR-10 classification task, showing that a network with interleaved convolutional layers and L-RNN modules achieves results (5.39% top-1 error) comparable to ResNet-164 (5.46%) using only 15 layers and fewer parameters; and on the PASCAL VOC2012 semantic segmentation task, we show that the performance of a pre-trained FCN network can be boosted by 5% (mean IOU) by simply inserting Layer-RNNs.
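A simplified, single-direction variant of within-layer recurrence can be written in a few lines of PyTorch: sweep an RNN along the rows of a feature map and then along the columns, so every position accumulates long-range context. This is an illustrative reduction, not the paper's exact module:

```python
# Within-layer recurrence over a feature map: a GRU sweep along the width,
# then along the height. A single-direction sketch of the L-RNN idea.
import torch
import torch.nn as nn

class LayerRNN(nn.Module):
    def __init__(self, channels, hidden):
        super().__init__()
        self.row_rnn = nn.GRU(channels, hidden, batch_first=True)
        self.col_rnn = nn.GRU(hidden, hidden, batch_first=True)

    def forward(self, x):                     # x: (B, C, H, W)
        b, c, h, w = x.shape
        rows = x.permute(0, 2, 3, 1).reshape(b * h, w, c)
        rows, _ = self.row_rnn(rows)          # sweep along width
        hd = rows.shape[-1]
        cols = (rows.reshape(b, h, w, hd).permute(0, 2, 1, 3)
                    .reshape(b * w, h, hd))
        cols, _ = self.col_rnn(cols)          # sweep along height
        return cols.reshape(b, w, h, hd).permute(0, 3, 2, 1)  # (B, hd, H, W)

x = torch.randn(2, 16, 8, 8)
print(LayerRNN(16, 32)(x).shape)              # torch.Size([2, 32, 8, 8])
```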
A 2.02–5.16 fJ/Conversion Step 10 Bit Hybrid Coarse-Fine SAR ADC With Time-Domain Quantizer in 90 nm CMOS
This paper presents an ultra-low-voltage and power-efficient 10 bit hybrid successive approximation register (SAR) analog-to-digital converter (ADC). To reduce the digital-to-analog converter (DAC) capacitance and the comparator requirement, we propose a hybrid architecture comprising a coarse 7 bit SAR ADC and a fine 3.5 bit time-to-digital converter (TDC). The Vcm-based switching method is adopted for the coarse conversion to reduce DAC power and maintain the common mode. The residual voltage after the coarse conversion is converted to the time domain, and the fine TDC detects the least significant three bits with 0.5 bit redundancy by using a Vernier delay structure. Offset calibration and delay time locking are implemented to guarantee the ADC performance under process variation. The test chip, fabricated in 90 nm CMOS technology, occupied a core area of 0.04 mm2. With a 0.4 V supply and a Nyquist rate input, the prototype consumed 200 nW at 250 kS/s and achieved an ENOB of 8.63 bits and an SFDR of 78.5 dB. The operation frequency was scalable from 250 kS/s to 4 MS/s. The converter had a power supply range of 0.4-0.7 V, and the figure of merit (FoM) was 2.02-5.16 fJ/conversion step.
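The successive-approximation search the coarse stage performs is the classic SAR binary search of a comparator against a DAC level; a behavioural model of a plain 10-bit SAR (not the hybrid coarse/fine converter with its TDC fine stage) looks like this:

```python
# Behavioural model of a plain 10-bit SAR ADC: binary search of the
# comparator decision against an ideal DAC level.
def sar_convert(v_in, v_ref=0.4, bits=10):
    code = 0
    for b in reversed(range(bits)):
        trial = code | (1 << b)               # tentatively set this bit
        dac = v_ref * trial / (1 << bits)     # ideal DAC output
        if v_in >= dac:                       # comparator decision
            code = trial                      # keep the bit
    return code

v = 0.123
code = sar_convert(v)
print(code, code / 1024 * 0.4)                # reconstructed voltage ~= v
```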
The effect of selective and non-selective phosphodiesterase inhibitors on allergen- and leukotriene C4-induced contractions in passively sensitized human airways.
Non-selective inhibitors of cyclic nucleotide phosphodiesterase (PDE) block allergen-induced contraction of passively sensitized human airways in vitro by a dual mechanism involving a direct relaxant effect on smooth muscle and inhibition of histamine and cysteinyl leukotriene (LT) release from airways. We investigated the effects of non-selective PDE inhibitors and selective inhibitors of PDE3 and PDE4 in order to determine the involvement of PDE isoenzymes in the suppression of allergic bronchoconstriction. Macroscopically normal airways from 76 patients were sensitized with IgE-rich sera (>250 u ml⁻¹) containing specific antibodies against allergen (Dermatophagoides farinae). Contractile responses of bronchial rings were assessed using standard organ bath techniques. Passive sensitization caused increased contractile responses to allergen, histamine and LTC4. Non-selective PDE inhibitors (theophylline, 3-isobutyl-1-methylxanthine [IBMX]), a PDE3-selective inhibitor (motapizone), PDE4-selective inhibitors (RP73401, rolipram, AWD 12-281) and a mixed PDE3/4 inhibitor (zardaverine) all significantly relaxed inherent bronchial tone at resting tension and to a similar degree. Theophylline, IBMX, zardaverine and the combination of motapizone and RP73401 inhibited the contractile responses to allergen and LTC4. Pre-treatment with motapizone, RP73401, rolipram or the methylxanthine adenosine receptor antagonist, 8-phenyltheophylline, did not significantly decrease responses to either allergen or LTC4. We conclude that combined inhibition of PDE3 and PDE4, but not selective inhibition of either isoenzyme or antagonism of adenosine receptors, is effective in suppressing allergen-induced contractions of passively sensitized human airways. The relationship between allergen- and LTC4-induced responses suggests that PDE inhibitors with PDE3 and PDE4 selectivity are likely to act in part through inhibition of mediator release and not simply through direct relaxant actions on airway smooth muscle.
Switching from zidovudine/lamivudine to tenofovir/emtricitabine improves fat distribution as measured by fat mass ratio.
OBJECTIVES Fat mass ratio (FMR) has been suggested as an objective indicator of abnormal body fat distribution in HIV infection. Although it could provide more comprehensive information on body fat changes than limb fat mass, FMR has scarcely been used in clinical trials examining body fat distribution in HIV-infected patients. METHODS A subanalysis of a controlled, randomized clinical trial in virologically suppressed HIV-1-infected men switching from zidovudine (ZDV)/lamivudine (3TC) to emtricitabine (FTC)/tenofovir (TDF) versus continuing on ZDV/3TC was carried out. FMR was assessed by dual X-ray absorptiometry (DEXA) for a period of 72 weeks. Lipoatrophy was defined as FMR ≥ 1.5. Multivariate linear regression models for the change in FMR from baseline were fitted. RESULTS Sixty-five men were randomized and treated (28 in the FTC/TDF arm and 37 in the ZDV/3TC arm), and 57 completed the study (25 and 32 in each arm, respectively). In the FTC/TDF arm, adjusted mean FMR decreased by 0.52 at week 72 (P = 0.014), and in the ZDV/3TC arm it increased by 0.13 (P = 0.491; P between arms = 0.023). Among subjects with lipoatrophy (baseline FMR ≥ 1.5), adjusted FMR decreased by 0.76 (P = 0.003) in the FTC/TDF arm and increased by 0.21 (P = 0.411; P between arms = 0.009) in the ZDV/3TC arm. Baseline FMR and treatment group were significant predictors (P < 0.05) of post-baseline changes in FMR. CONCLUSIONS Switching from ZDV/3TC to FTC/TDF led to an improvement in FMR, compared with progressive worsening of FMR in subjects receiving ZDV/3TC, showing that fat mass not only increased but was also distributed in a healthier way after the switch.
Concept learning as motor program induction: A large-scale empirical study
Human concept learning is particularly impressive in two respects: the internal structure of concepts can be representationally rich, and yet the very same concepts can also be learned from just a few examples. Several decades of research have dramatically advanced our understanding of these two aspects of concepts. While the richness and speed of concept learning are most often studied in isolation, the power of human concepts may be best explained through their synthesis. This paper presents a large-scale empirical study of one-shot concept learning, suggesting that rich generative knowledge in the form of a motor program can be induced from just a single example of a novel concept. Participants were asked to draw novel handwritten characters given a reference form, and we recorded the motor data used for production. Multiple drawers of the same character not only produced visually similar drawings, but they also showed a striking correspondence in their strokes, as measured by their number, shape, order, and direction. This suggests that participants can infer a rich motor-based concept from a single example. We also show that the motor programs induced by individual subjects provide a powerful basis for one-shot classification, yielding far higher accuracy than state-of-the-art pattern recognition methods based on just the visual form.
Influence of Phenol-Enriched Olive Oils on Human Intestinal Immune Function.
Olive oil (OO) phenolic compounds (PC) are able to influence gut microbial populations and metabolic output. Our aim was to investigate whether these compounds and changes affect the mucosal immune system. In a randomized, controlled, double-blind cross-over human trial, for three weeks, preceded by two-week washout periods, 10 hypercholesterolemic participants ingested 25 mL/day of three raw virgin OOs differing in their PC concentration and origin: (1) an OO containing 80 mg PC/kg (VOO); (2) a PC-enriched OO containing 500 mg PC/kg from OO (FVOO); and (3) a PC-enriched OO containing a mixture of 500 mg PC/kg from OO and thyme (1:1, FVOOT). Intestinal immunity (fecal immunoglobulin A (IgA) and IgA-coated bacteria) and inflammation markers (C-reactive protein (CRP) and fecal interleukin 6 (IL-6), tumor necrosis factor α (TNFα) and calprotectin) were analyzed. The ingestion of high amounts of OO PC, as contained in FVOO, tended to increase the proportions of IgA-coated bacteria and increased plasma levels of CRP. However, lower amounts of OO PC (VOO) and the combination of two PC sources (FVOOT) did not show significant effects on the variables investigated. Results indicate a potential stimulation of the immune system with very high doses of OO PC, which should be further investigated.
Impact of coronary chronic total occlusions on long-term mortality in patients undergoing coronary artery bypass grafting.
OBJECTIVES The presence of a coronary chronic total occlusion (CTO) is a common consideration in favour of surgical revascularization. However, studies have shown that not all patients undergoing coronary artery bypass grafting (CABG) have a bypass graft placed on the CTO vessel. The aim of this study was to determine the prevalence of CTO among patients referred for CABG and the significance of incomplete CTO revascularization in these patients. METHODS The study included 405 consecutive patients undergoing CABG during a 2-year period. Clinical, echocardiographic and angiographic data were collected. Determination of whether or not a CTO was bypassed was made by correlating data from the surgical reports and preprocedural angiograms. The primary end point of this study was 5-year all-cause mortality. RESULTS Two hundred and twenty-one CTOs were found in 174 patients: 132 patients (76%) had 1 CTO; 37 (21%) had 2 CTOs and 5 (3%) had 3 CTOs. Of the 221 CTOs, 191 (86%) were bypassed. All left anterior descending (LAD) CTOs were grafted; however, 12% of left circumflex and 22% of right coronary artery CTOs did not receive bypass grafts. Incomplete CTO revascularization was associated with older age and more comorbidities, including stroke, renal impairment and lower ejection fraction. However, incomplete CTO revascularization was not associated with increased 5-year mortality. CONCLUSIONS Coronary CTOs are a common finding in patients referred for bypass surgery. The presence of a CTO is not independently associated with an adverse long-term outcome. While most CTOs are successfully bypassed, failure to revascularize a non-LAD CTO is not associated with adverse long-term outcome.
PCM and Memristor based nanocrossbars
This paper presents a performance comparison between two emerging resistive Non-Volatile Memory (NVM) technologies, namely Memristors and Phase Change Memory (PCM), using a nanocrossbar architecture. A comparison in terms of leakage current, reading and writing delay, and energy consumption between both non-volatile memory devices, with an SRAM-based nanocrossbar as benchmark, was carried out. It was found that Memristive crossbars offer 3 orders of magnitude improvement in the average read cycle compared to SRAM-based crossbars. On the other hand, both PCM-based and Memristor-based crossbars offered more than 2 orders of magnitude improvement in leakage energy compared to SRAM-based crossbars. The aim of this comparison is to provide a fair simulation platform to study and compare PCM crossbars and Memristive crossbars.
Prediction of pneumonia 30-day readmissions: a single-center attempt to increase model performance.
BACKGROUND Existing models developed to predict 30-day readmissions for pneumonia lack discriminative ability. We attempted to increase model performance with the addition of variables found to be of benefit in other studies. METHODS From 133,368 admissions to a tertiary-care hospital from January 2009 to March 2012, the study cohort consisted of 956 index admissions for pneumonia, using the Centers for Medicare and Medicaid Services definition. We collected variables previously reported to be associated with 30-day all-cause readmission, including vital signs, comorbidities, laboratory values, demographics, socioeconomic indicators, and indicators of hospital utilization. Separate logistic regression models were developed to identify the predictors of all-cause hospital readmission 30 days after discharge from the index pneumonia admission, for pneumonia-related readmissions, and for pneumonia-unrelated readmissions. RESULTS Of the 956 index admissions for pneumonia, 148 (15.5%) subjects were readmitted within 30 days. The variables in the multivariate model that were significantly associated with 30-day all-cause readmission were male sex (odds ratio 1.59, 95% CI 1.03-2.45), 3 or more previous admissions (odds ratio 1.84, 95% CI 1.22-2.78), chronic lung disease (odds ratio 1.63, 95% CI 1.07-2.48), cancer (odds ratio 2.18, 95% CI 1.24-3.84), median income < $43,000 (odds ratio 1.82, 95% CI 1.18-2.81), history of anxiety or depression (odds ratio 1.62, 95% CI 1.04-2.52), and hematocrit < 30% (odds ratio 1.86, 95% CI 1.07-3.22). The model performance, as measured by the C statistic, was 0.71 (0.66-0.75), with minimal optimism according to bootstrap re-sampling (optimism-corrected C statistic 0.67). CONCLUSIONS The addition of socioeconomic status and healthcare utilization variables significantly improved model performance, compared to the model using only the Centers for Medicare and Medicaid Services variables.
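The modelling step itself is ordinary logistic regression scored by the C statistic (the ROC AUC); the sketch below simulates admission-level data using several of the paper's published odds ratios as ground truth. All data are simulated:

```python
# Logistic regression with the C statistic (ROC AUC) as performance measure.
# Covariates echo the paper's predictors; the data are simulated from its
# published odds ratios, not the study data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 956
X = np.column_stack([
    rng.integers(0, 2, n),        # male sex
    rng.integers(0, 2, n),        # >= 3 previous admissions
    rng.integers(0, 2, n),        # chronic lung disease
    rng.integers(0, 2, n),        # median income < $43,000
    rng.integers(0, 2, n),        # hematocrit < 30%
])
logit = -2.0 + X @ np.log([1.59, 1.84, 1.63, 1.82, 1.86])  # paper's odds ratios
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
print("C statistic:", round(roc_auc_score(y, model.predict_proba(X)[:, 1]), 3))
```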
Thermoplastic Forming of Bulk Metallic Glass— A Technology for MEMS and Microstructure Fabrication
A technology for microelectromechanical systems (MEMS) and microstructure fabrication is introduced in which bulk metallic glass (BMG) is formed, at a temperature where the BMG exists as a viscous liquid, under an applied pressure into a mold. This thermoplastic forming is carried out under forming pressures and temperatures comparable to those used for plastics. The range of possible sizes in all three dimensions of this technology allows the replication of high-strength features ranging from about 30 nm to centimeters with aspect ratios of 20 to 1, which are homogeneous, isotropic, and free of stresses and porosity. Our processing method includes a hot-cutting technique that enables a clean planar separation of the parts from the BMG reservoir. It also allows net-shaping of three-dimensional parts on the micron scale. The technology can be implemented into conventional MEMS fabrication processes. The properties of BMG as well as its thermoplastic formability enable new applications and performance improvements of existing MEMS devices and nanostructures.
Incidence of congenital rubella syndrome at a hospital serving a predominantly Hispanic population, El Paso, Texas.
OBJECTIVE The current epidemiology of rubella reveals an increase in the number of cases among adult Hispanics and an increase in the number of congenital rubella syndrome (CRS) cases among infants of Hispanic mothers. Recent rubella outbreaks have occurred primarily among adult Hispanics, many of whom are foreign-born natives of countries where rubella vaccination is not routine or has only recently been implemented. The objective of this study was to estimate the incidence of CRS in a hospital serving a predominantly Hispanic population. METHODS Hospital charts of infants <1 year old discharged between January 1, 1994 and December 31, 1996 with International Classification of Diseases, Ninth Revision (ICD-9) discharge codes consistent with CRS were reviewed; we looked for cataracts, deafness, congenital heart defects, dermal erythropoiesis, microcephaly, meningoencephalitis, and other defects associated with CRS. We abstracted data on maternal and infant ethnicity, maternal age, gestational age, infants' birth weight, infants' clinical characteristics, and laboratory evaluation. Cases were categorized according to the Council of State and Territorial Epidemiologists' case classification for CRS. RESULTS Of the 182 infants with 1 or more ICD-9 codes consistent with CRS, 6 (3.3%) met either the confirmed or probable case definition for CRS. Two infants met the definition for confirmed CRS. Although laboratory tests for rubella immunoglobulin M antibodies were positive for both of these infants, only 1 of the cases had been reported to the state health department. Four other infants had clinical presentations that met the definition for a probable case. One of these had been tested for rubella immunoglobulin M antibodies, and the test was negative. The other 3 had not been tested. The rate of infants meeting the definition of confirmed and probable CRS was 3.1 per 10 000 hospital births. All confirmed and probable cases were among infants born to Hispanic mothers. Maternal country of origin was Mexico for the 2 confirmed cases and 1 of the probable cases, and unknown for the remaining 3 probable cases. CONCLUSION The rate of confirmed and probable CRS among infants in this predominantly Hispanic population is higher than the reported rate in the United States in the vaccine era, which has been reported to range from approximately 0.01-0.08 per 10 000 live births. These findings indicate a need for heightened awareness of CRS among physicians who serve populations at risk for rubella. Physicians should report all confirmed and probable CRS cases to the state health department. The lack of appropriate laboratory testing in 3 infants with probable CRS indicates that physicians should consider a diagnosis of CRS in infants with some signs consistent with CRS, particularly in areas serving high numbers of individuals at risk for rubella.
Design and Implementation of IoT-Based Automation System for Smart Home
Home Automation Systems (HAS) have gained popularity due to advances in communication technology. The smart home is an Internet of Things (IoT) application that facilitates control of home appliances over the Internet using an automation system. This paper proposes a low-cost Wi-Fi-based automation system for the Smart Home (SH) that monitors and controls home appliances remotely through an Android-based application. An Arduino Mega microcontroller equipped with a Wi-Fi module is used to build the automation system. In addition, several sensors monitor the temperature, humidity, and motion in the home. A relay board connects the HAS to the controlled home appliances. The proposed automation system can easily and efficiently control electrical appliances via Wi-Fi and the Virtuino mobile application.
Nortriptyline for treating enuresis in ADHD—a randomized double-blind controlled clinical trial
Treating enuresis in children with attention deficit hyperactivity disorder (ADHD) has not been previously reported. This study investigates the efficacy, tolerability, and adverse effects of nortriptyline for treating enuresis in children with ADHD. Forty-three children aged 5 to 14 years were randomized into two groups. The treatment group received methylphenidate plus nortriptyline, while the placebo group received methylphenidate plus placebo. Nortriptyline and placebo were administered for 30 days and methylphenidate for 45 days. The primary outcome measure was the parent-reported frequency of enuresis for the 2 weeks prior to the intervention, during the intervention, and for the 2 weeks after stopping the adjuvant therapy. Adverse effects were also recorded. While nortriptyline significantly decreased the incidence of nocturnal enuresis during the intervention, the number of enuresis events did not change significantly in the placebo group. In addition, the frequency of enuresis did not differ from baseline after stopping nortriptyline or placebo. Both nortriptyline and placebo were well tolerated. Nortriptyline was statistically superior to placebo; however, enuresis relapses after stopping nortriptyline in children with ADHD who continue taking methylphenidate.
On the Computational Efficiency of Training Neural Networks
Neural networks are formally hard to train. How can we circumvent these hardness results? • Over-specified networks: while over-specification seems to speed up training, the hardness results remain valid in the improper model. • Changing the activation function: while switching from sigmoid to ReLU has led to faster convergence of SGD methods, these networks formally remain hard to train.
Antimicrobial and Cytotoxic Assessment of Marine Cyanobacteria - Synechocystis and Synechococcus
Aqueous and organic solvent extracts of isolated marine cyanobacteria strains were tested for antimicrobial activity against a fungus and Gram-positive and Gram-negative bacteria, and for cytotoxic activity against primary rat hepatocytes and HL-60 cells. Antimicrobial activity was assessed with the agar diffusion assay. Cytotoxic activity was measured as apoptotic cell death, scored by cell surface evaluation and nuclear morphology. A high percentage of apoptotic cells was observed in HL-60 cells treated with cyanobacterial organic extracts. Slight apoptotic effects were observed in primary rat hepatocytes exposed to aqueous cyanobacterial extracts. Nine cyanobacteria strains showed antibiotic activity against two Gram-positive bacteria, Clavibacter michiganensis subsp. insidiosum and Cellulomonas uda. No inhibitory effects were found against the fungus Candida albicans or Gram-negative bacteria. Marine Synechocystis and Synechococcus extracts thus induce apoptosis in eukaryotic cells and inhibit Gram-positive bacteria. The differing activities of the different extracts suggest distinct compounds with different polarities.
Rumor detection for Persian Tweets
Nowadays, the striking growth of online social media has made it easier and faster for rumors to spread in cyberspace, in addition to traditional channels. In this paper, rumor detection in the Persian Twitter community is addressed for the first time by exploring and analyzing the significance of two categories of rumor features: structural and content-based features. Applying both feature sets yields a precision of more than 80%, while using only structural features yields a precision of around 70%. Moreover, we show how the features of users who tend to produce and spread rumors are effective in the rumor detection process. The experiments also yielded a language model of the collected rumors. Finally, all features are ranked and the most discriminative ones are discussed.
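To make the two feature families concrete, here is a minimal sketch, assuming a toy tweet schema; the field names, feature choices, and classifier are illustrative, not the paper's exact configuration:

```python
# Combining structural (user/propagation) and content-based (TF-IDF) features
# for rumor classification; schema and classifier are illustrative.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

tweets = [
    {"text": "breaking: dam has collapsed, share now", "retweets": 120, "followers": 40, "verified": 0},
    {"text": "official report on yesterday's match", "retweets": 3, "followers": 9000, "verified": 1},
]
labels = [1, 0]  # 1 = rumor, 0 = non-rumor

def structural(t):
    # propagation/user signals of the kind grouped as "structural" features
    return [t["retweets"], t["followers"], t["verified"]]

vec = TfidfVectorizer()
content = vec.fit_transform(t["text"] for t in tweets)            # content-based
struct = csr_matrix(np.array([structural(t) for t in tweets], float))
X = hstack([content, struct])                                     # both feature sets

clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```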
Fuzzy optimization for supply chain planning under supply, demand and process uncertainties
In today's global marketplace, individual firms do not compete as independent entities but rather as integral parts of supply chains. This paper proposes a fuzzy mathematical programming model for supply chain planning that considers supply, demand, and process uncertainties. The model is formulated as a fuzzy mixed-integer linear program in which ill-known data are modelled by triangular fuzzy numbers. The fuzzy model provides the decision maker with alternative decision plans for different degrees of satisfaction. The proposal is tested using data from a real automobile supply chain.
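As a rough illustration of how triangular fuzzy data can yield one crisp plan per degree of satisfaction, here is a toy single-product sketch; the cost figures, capacity, and the alpha-cut treatment of demand are invented for this example, not taken from the paper's model:

```python
# One crisp production plan per satisfaction degree (alpha), with demand
# given as a triangular fuzzy number. All numbers are invented.
from scipy.optimize import linprog

demand = (90.0, 100.0, 120.0)   # triangular fuzzy demand: (low, mode, high)

def alpha_cut(tfn, alpha):
    """Interval of demand values whose membership is at least alpha."""
    lo, mode, hi = tfn
    return lo + alpha * (mode - lo), hi - alpha * (hi - mode)

for alpha in (0.2, 0.5, 0.8, 1.0):
    d_lo, _ = alpha_cut(demand, alpha)
    # minimize 4*regular + 7*overtime, subject to covering the alpha-cut's
    # lower demand bound, with regular production capped at 95 units
    res = linprog(c=[4.0, 7.0],
                  A_ub=[[-1.0, -1.0]], b_ub=[-d_lo],
                  bounds=[(0, 95), (0, None)])
    print(f"alpha={alpha:.1f}  regular={res.x[0]:.1f}  overtime={res.x[1]:.1f}")
```

As alpha rises, the demand that must be covered grows and the plan shifts from regular to overtime production, mirroring the paper's idea of alternative plans at different satisfaction levels.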
Insecurity of Property Rights and Social Matching in the Tenancy Market
This paper shows that insecurity of property rights over agricultural land can have large efficiency and equity costs because of the way it affects matching in the tenancy market. A principal-agent framework is used to model the landlord's decision to rent when he takes into account the risk of losing the land to the tenant and when contract enforcement is decreasing in social distance with the tenant. These effects are quantified for the case of local land rental markets in the Dominican Republic. Results show that insecure property rights lead to matching in the tenancy market along socio-economic lines, severely limiting the size of the rental market and the choice of tenants for landlords, both with efficiency costs. Social segmentation reduces access to land for the rural poor, with high equity costs. Simulations suggest that improving tenure security would increase rental transactions by 21% and the area rented to the poor by 63%. Increased property rights security is hence beneficial not only to asset owners, but also to those with whom they might interact in the market.
ParBlockchain: Leveraging Transaction Parallelism in Permissioned Blockchain Systems
Many existing blockchains do not adequately address all the characteristics of distributed applications and suffer from serious architectural limitations that result in performance and confidentiality issues. While recent permissioned blockchain systems have tried to overcome these limitations, their focus has mainly been on workloads with no contention, i.e., no conflicting transactions. In this paper, we introduce OXII, a new paradigm for permissioned blockchains that supports distributed applications executing concurrently. OXII is designed for workloads with different degrees of contention. We then present ParBlockchain, a permissioned blockchain designed specifically within the OXII paradigm. An evaluation of ParBlockchain using a series of benchmarks shows that its performance on workloads with any degree of contention is better than that of state-of-the-art permissioned blockchain systems.
An ahead pipelined alloyed perceptron with single cycle access time
The increasing pipeline depth, aggressive clock rates, and execution width of modern processors require ever more accurate dynamic branch predictors to fully exploit their potential. Recent research on ahead pipelined branch predictors [11, 19] and on perceptron-based branch predictors [10, 11] has offered either increased accuracy or effective single-cycle access times, at the cost of large hardware budgets and additional complexity in the branch predictor recovery mechanism. Here we show that a pipelined perceptron predictor can be constructed so that it has an effective latency of one cycle with a minimal loss of accuracy. We then introduce the concept of a precomputed local perceptron, which allows the use of both local and global history in an ahead pipelined perceptron. Together, these two techniques allow the new perceptron predictor to match or exceed the accuracy of previous designs except at very small hardware budgets, and they eliminate most of the complexity in the rest of the pipeline associated with overriding predictors.
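For readers unfamiliar with perceptron prediction, the following is a minimal software model of a plain (non-pipelined, global-history) perceptron predictor, showing the prediction and training rule such designs build on; the table size, history length, and threshold are arbitrary choices:

```python
# Software model of a global-history perceptron branch predictor.
HIST = 12                      # global history length
TABLE = 256                    # number of perceptrons
THETA = int(1.93 * HIST + 14)  # training threshold from the perceptron literature

weights = [[0] * (HIST + 1) for _ in range(TABLE)]  # w[0] is the bias weight
history = [1] * HIST                                # +1 taken, -1 not taken

def predict(pc):
    w = weights[pc % TABLE]
    y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], history))
    return y, y >= 0                                # output and taken/not-taken

def update(pc, y, taken):
    w = weights[pc % TABLE]
    t = 1 if taken else -1
    if (y >= 0) != taken or abs(y) <= THETA:        # train on mispredict or low margin
        w[0] += t
        for i, hi in enumerate(history):
            w[i + 1] += t * hi
    history.pop(0); history.append(t)               # shift outcome into history

# toy loop: a branch taken 3 of every 4 times
for i in range(1000):
    taken = (i % 4 != 0)
    y, guess = predict(0x400123)
    update(0x400123, y, taken)
print("final prediction:", predict(0x400123)[1])
```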
APPLYING MACHINE LEARNING METHODS FOR TIME SERIES FORECASTING
This paper describes a strategy for learning from time series data and for using the learned model for forecasting. Time series forecasting, which analyzes and predicts a variable changing over time, has received much attention due to its use in forecasting stock prices, but it can also be used for pattern recognition and data mining. Our method for learning from time series data consists of detecting patterns within the data, describing the detected patterns, clustering the patterns, and creating a model to describe the data. It uses a change-point detection method to partition a time series into segments; each segment is then described by an autoregressive model. It then partitions all the segments into clusters, each of which is treated as a state of a Markov model, and creates the transitions between states based on the transitions between segments as the time series progresses. Our method for using the learned model for forecasting consists of identifying the current state, forecasting trends, and adapting to changes. It uses a moving window to monitor real-time data and creates an autoregressive model for the recently observed data, which is then matched to a state of the learned Markov model. Following the transitions of the model, it forecasts future trends. It also continues to monitor real-time data and makes corrections where necessary to adapt to changes. We implemented and successfully tested the methods in a load-balancing application on a parallel computing system.
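A compact sketch of that learning pipeline follows, with deliberate simplifications: a naive mean-shift change-point rule, AR(1) segment models, and two k-means states (the paper's actual detection and clustering methods may differ):

```python
# Segment a series at change points, fit AR(1) per segment, cluster the AR
# parameters into Markov states, and count state transitions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0, 1, 200), rng.normal(5, 1, 200),
                         rng.normal(0, 1, 200)])

# 1. naive change-point detection: cut where the windowed mean shifts
w = 20
means = np.convolve(series, np.ones(w) / w, mode="valid")
cuts = [0]
for i in range(w, len(means)):
    if abs(means[i] - means[i - w]) > 2.5 and i - cuts[-1] > 2 * w:
        cuts.append(i)
cuts.append(len(series))
segments = [series[a:b] for a, b in zip(cuts, cuts[1:]) if b - a > w]

# 2. describe each segment by AR(1) parameters (coefficient, intercept)
def ar1(x):
    X = np.column_stack([x[:-1], np.ones(len(x) - 1)])
    return np.linalg.lstsq(X, x[1:], rcond=None)[0]

params = np.array([ar1(s) for s in segments])

# 3. cluster the segment descriptions; each cluster is a Markov state
states = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(params)

# 4. count transitions between consecutive segments' states
trans = np.zeros((2, 2))
for a, b in zip(states, states[1:]):
    trans[a, b] += 1
trans /= np.maximum(trans.sum(axis=1, keepdims=True), 1)
print("states per segment:", states, "\ntransition matrix:\n", trans)
```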
Review on Text Mining Algorithms
Nowadays, the Twitter microblog has become very popular for conversation and for spreading awareness about various issues among people. People share short messages, or tweets, within their private or public social networks. These messages are valuable for a number of tasks that identify hidden knowledge patterns in the discussions. Much research has been conducted on text classification. Text classification uses terms as features, which can be grouped to vote for the membership of a class. Text classification can be carried out on Twitter data, and various machine learning algorithms can be used for feature-based performance evaluation. In this context, we review a few papers drawn from sources such as IEEE Xplore, ACM, and Elsevier.
Modeling Relational Information in Question-Answer Pairs with Convolutional Neural Networks
In this paper, we propose convolutional neural networks for learning an optimal representation of question and answer sentences. Their main aspect is the use of relational information given by the matches between words from the two members of the pair. The matches are encoded as embeddings with additional parameters (dimensions), which are tuned by the network. This allows interactions between questions and answers to be captured better, resulting in a significant boost in accuracy. We test our models on two widely used answer sentence selection benchmarks. The results clearly show the effectiveness of our relational information, which allows our relatively simple network to approach the state of the art.
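One way to picture the relational encoding: each word vector is extended with extra dimensions that mark whether the word also occurs in the paired sentence. In the paper these added dimensions are parameters tuned by the network; the sketch below uses fixed indicator values and made-up embeddings purely for illustration:

```python
# Append a word-overlap "match embedding" to each word vector of a
# question/answer pair; embeddings and indicator values are toy stand-ins.
import numpy as np

EMB = {"who": [0.1, 0.3], "wrote": [0.7, 0.2], "hamlet": [0.9, 0.8],
       "shakespeare": [0.8, 0.9], ".": [0.0, 0.0]}

def encode(sentence, other, overlap_on=(1.0,), overlap_off=(0.0,)):
    other_set = set(other)
    rows = []
    for word in sentence:
        extra = overlap_on if word in other_set else overlap_off
        rows.append(EMB[word] + list(extra))   # word embedding ++ match embedding
    return np.array(rows)

q = ["who", "wrote", "hamlet"]
a = ["shakespeare", "wrote", "hamlet", "."]
print(encode(q, a))   # "wrote" and "hamlet" carry the overlap flag
```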
Role of cortisol in the pathogenesis of deficient counterregulation after antecedent hypoglycemia in normal humans.
The aim of this study was to determine the role of increased plasma cortisol levels in the pathogenesis of hypoglycemia-associated autonomic failure. Experiments were carried out on 16 lean, healthy, overnight-fasted male subjects. One group (n = 8) underwent two separate, 2-d randomized experiments separated by at least 2 mo. On day 1, insulin was infused at a rate of 1.5 mU/kg per min and 2 h of clamped hypoglycemia (53 +/- 2 mg/dl) or euglycemia (93 +/- 3 mg/dl) was obtained during the morning and afternoon. The next morning, subjects underwent a 2-h hyperinsulinemic (1.5 mU/kg per min) hypoglycemic (53 +/- 2 mg/dl) clamp study. In the other group (n = 8), day 1 consisted of morning and afternoon 2-h clamped hyperinsulinemic euglycemia with cortisol infused to simulate the levels of plasma cortisol occurring during clamped hypoglycemia (53 mg/dl). The next morning (day 2), subjects underwent a 2-h hyperinsulinemic hypoglycemic clamp identical to that of the first group. Despite equivalent day 2 plasma glucose and insulin levels, steady-state epinephrine, norepinephrine, pancreatic polypeptide, glucagon, ACTH, and muscle sympathetic nerve activity (MSNA) values were significantly (P < 0.01) blunted after day 1 cortisol infusion compared with antecedent euglycemia. Antecedent hypoglycemia produced similarly blunted day 2 responses of epinephrine, norepinephrine, pancreatic polypeptide, and MSNA compared with day 1 cortisol, but a more pronounced blunting of plasma glucagon, ACTH, and hepatic glucose production. We conclude that in healthy overnight-fasted men (a) antecedent physiologic increases of plasma cortisol can significantly blunt epinephrine, norepinephrine, glucagon, and MSNA responses to subsequent hypoglycemia, and (b) these data suggest that increased plasma cortisol is the mechanism by which antecedent hypoglycemia causes hypoglycemia-associated autonomic failure.
High bandwidth fast steering mirror
A high-bandwidth, gimbaled, fast steering mirror (FSM) assembly has been designed and tested at the Lockheed Martin Space Systems Company (LMSSC) Advanced Technology Center (ATC). The design requirements were to gimbal a 5 cm diameter mirror about its reflective surface and to provide 1 kHz tip/tilt/piston control while maintaining λ/900 flatness of the mirror. The simple, yet very compact and rugged device also has manual tip/tilt/piston alignment capability. The off-the-shelf piezoelectric translator (PZT) actuators enable reliable and repeatable closed-loop control. The adopted solution achieves a good mass balance and gimbaled motion about the center of the mirror front surface. Special care was taken to ensure the best positioning, with the mounted mirror assembly held kinematically in place. The manual adjusters have very good resolution and can be locked in place. All solutions were thoroughly modeled and analyzed. This paper covers the design, analysis, fabrication, assembly, and testing of this device. The FSM was designed for ground test only.
A Meta-Analytic Study of Social Desirability Distortion in Computer-Administered Questionnaires, Traditional Questionnaires, and Interviews
A meta-analysis of social desirability distortion compared computer questionnaires with traditional paper-and-pencil questionnaires and face-to-face interviews in 61 studies (1967-1997; 673 effect sizes). Controlling for correlated observations, a near-zero overall effect size was obtained for computer versus paper-and-pencil questionnaires. With moderators, there was less distortion on computerized measures of social desirability responding than on the paper-and-pencil measures, especially when respondents were alone and could backtrack. There was more distortion on the computer on other scales, but distortion was small when respondents were alone, anonymous, and could backtrack. There was less distortion on computerized versions of interviews than on face-to-face interviews. Research is needed on nonlinear patterns of distortion, and on the effects of context and interface on privacy perceptions and on responses to sensitive questions.
View Independent Vehicle Make, Model and Color Recognition Using Convolutional Neural Network
This paper describes the details of Sighthound’s fully automated vehicle make, model and color recognition system. The backbone of our system is a deep convolutional neural network that is not only computationally inexpensive, but also provides state-of-the-art results on several competitive benchmarks. Additionally, our deep network is trained on a large dataset of several million images which are labeled through a semi-automated process. Finally we test our system on several public datasets as well as our own internal test dataset. Our results show that we outperform other methods on all benchmarks by significant margins. Our model is available to developers through the Sighthound Cloud API at https://www.sighthound.com/products/cloud
An Architecture for Parallel Topic Models
This paper describes a high-performance sampling architecture for inference of latent topic models on a cluster of workstations. Our system is faster than previous work by over an order of magnitude and is capable of dealing with hundreds of millions of documents and thousands of topics. The algorithm relies on a novel communication structure, namely the use of a distributed (key, value) store for synchronizing the sampler state between computers. Our architecture entirely obviates the need for separate computation and synchronization phases. Instead, disk, CPU, and network are used simultaneously to achieve high performance. We show that this architecture is entirely general and that it can be extended easily to more sophisticated latent variable models such as n-grams and hierarchies.
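The sampler state that gets synchronized through the distributed (key, value) store is essentially the topic-count tables of collapsed Gibbs sampling for LDA. Below is a single-machine sketch of that update, with toy hyperparameters and corpus, assuming standard collapsed Gibbs rather than the paper's exact variant:

```python
# Collapsed Gibbs sampling for LDA on a toy corpus; the word-topic counts
# (nkw, nk) are the state a distributed implementation would share.
import numpy as np

rng = np.random.default_rng(0)
docs = [[0, 1, 2, 1], [2, 3, 3, 0]]   # documents as word-id lists
V, K, alpha, beta = 4, 2, 0.1, 0.01

z = [[rng.integers(K) for _ in d] for d in docs]       # topic assignments
ndk = np.zeros((len(docs), K)); nkw = np.zeros((K, V)); nk = np.zeros(K)
for d, doc in enumerate(docs):
    for i, word in enumerate(doc):
        ndk[d, z[d][i]] += 1; nkw[z[d][i], word] += 1; nk[z[d][i]] += 1

for _ in range(200):                                    # Gibbs sweeps
    for d, doc in enumerate(docs):
        for i, word in enumerate(doc):
            k = z[d][i]                                 # remove current assignment
            ndk[d, k] -= 1; nkw[k, word] -= 1; nk[k] -= 1
            p = (ndk[d] + alpha) * (nkw[:, word] + beta) / (nk + V * beta)
            k = rng.choice(K, p=p / p.sum())            # resample topic
            z[d][i] = k
            ndk[d, k] += 1; nkw[k, word] += 1; nk[k] += 1

print(np.round(nkw / nkw.sum(axis=1, keepdims=True), 2))  # topic-word estimates
```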
e-Monitoring of Asthma Therapy to Improve Compliance in children using a real-time medication monitoring system (RTMM): the e-MATIC study protocol
BACKGROUND Many children with asthma do not have sufficient asthma control, which leads to increased healthcare costs and productivity loss for parents. One of the causative factors is poor adherence. Effective interventions improving medication adherence may therefore improve asthma control and reduce costs. A promising solution is sending real-time text messages via the mobile phone network when a medicine is about to be forgotten. As the effect of real-time text messages in children with asthma is unknown, the primary aim of this study is to determine the effect of a Real Time Medication Monitoring system (RTMM) with text messages on adherence to inhaled corticosteroids (ICS). The secondary objective is to study the effects of RTMM on asthma control, quality of life, and the cost-effectiveness of treatment. METHODS A multicenter, randomized controlled trial involving 220 children (4-11 years) using ICS for asthma. All children receive an RTMM device for one year, which registers the time and date of ICS doses. Children in the intervention group also receive tailored text messages, sent only when a dose is at risk of omission. The primary outcome measure is the proportion of ICS doses taken within the individually predefined time interval. Secondary outcome measures include asthma control (monthly Asthma Control Tests), asthma exacerbations, healthcare use (collected from hospital records, patient reports, and pharmacy record data), and disease-specific quality of life (PAQLQ questionnaire). Parental and children's acceptance of RTMM is evaluated with online focus groups and patient questionnaires. An economic evaluation is performed from a societal perspective, including relevant healthcare costs and parental productivity loss. Furthermore, a decision-analytic model is developed in which different levels of adherence are associated with clinical and financial outcomes. Sensitivity analyses are also carried out on different price levels for RTMM. DISCUSSION If RTMM with tailored text-message reminders proves to be effective, this technique can be used in daily practice, supporting children with suboptimal adherence in their asthma (self-)management and in achieving better asthma control and better quality of life. TRIAL REGISTRATION Netherlands Trial Register NTR2583.
Surface plasmon resonance sensors for detection of chemical and biological species.
5.1. Detection Formats. 5.2. Food Quality and Safety Analysis: 5.2.1. Pathogens; 5.2.2. Toxins; 5.2.3. Veterinary Drugs; 5.2.4. Vitamins; 5.2.5. Hormones; 5.2.6. Diagnostic Antibodies; 5.2.7. Allergens; 5.2.8. Proteins; 5.2.9. Chemical Contaminants. 5.3. Medical Diagnostics: 5.3.1. Cancer Markers; 5.3.2. Antibodies against Viral Pathogens; 5.3.3. Drugs and Drug-Induced Antibodies; 5.3.4. Hormones; 5.3.5. Allergy Markers; 5.3.6. Heart Attack Markers; 5.3.7. Other Molecular Biomarkers. 5.4. Environmental Monitoring: 5.4.1. Pesticides; 5.4.2. 2,4,6-Trinitrotoluene (TNT); 5.4.3. Aromatic Hydrocarbons; 5.4.4. Heavy Metals; 5.4.5. Phenols; 5.4.6. Polychlorinated Biphenyls; 5.4.7. Dioxins. 5.5. Summary. 6. Conclusions. 7. Abbreviations. 8. Acknowledgment. 9. References.
New trust metric for the RPL routing protocol
Establishing trust relationships between routing nodes is a vital security requirement for reliable routing processes that exclude infected or selfish nodes. In this paper, we propose a new security scheme for the Internet of Things, and mainly for RPL (Routing Protocol for Low-power and Lossy Networks), called the Metric-based RPL Trustworthiness Scheme (MRTS). The primary aim is to enhance RPL security and deal with the trust inference problem. MRTS addresses the trust issue during the construction and maintenance of routing paths from each node to the BR (Border Router). To handle this issue, we extend the DIO (DODAG Information Object) message by introducing a new trust-based metric, ERNT (Extended RPL Node Trustworthiness), and a new objective function, TOF (Trust Objective Function). ERNT represents the trust value of each node within the network, and TOF describes how ERNT is mapped to path cost. In MRTS, all nodes collaborate to calculate ERNT by taking into account each node's behavior, including selfishness, energy, and honesty components. We implemented our scheme by extending the distributed Bellman-Ford algorithm. Evaluation results demonstrate that the new scheme improves the security of RPL.
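A sketch of the flavor of such a trust-weighted route computation follows; the component weights, scores, topology, and the way the trust value maps to path cost are invented stand-ins, not MRTS's published formulas:

```python
# Trust-weighted routing toward a border router: per-node trust is a weighted
# mix of honesty, energy, and unselfishness, and Bellman-Ford picks the path
# with the lowest accumulated (1 - trust) cost. All values are illustrative.
trust_scores = {                      # per-node (honesty, energy, unselfishness)
    "BR": (1.0, 1.0, 1.0),
    "A": (0.9, 0.8, 0.9), "B": (0.5, 0.9, 0.4), "C": (0.8, 0.6, 0.9),
}
links = [("A", "BR"), ("B", "BR"), ("C", "A"), ("C", "B")]

def trust(node, w=(0.5, 0.25, 0.25)):
    """Scalar trust value in [0, 1]; higher is better."""
    h, e, s = trust_scores[node]
    return w[0] * h + w[1] * e + w[2] * s

# Bellman-Ford with hop cost = 1 - trust(next hop)
cost = {n: float("inf") for n in trust_scores}; cost["BR"] = 0.0
parent = {}
for _ in range(len(trust_scores) - 1):
    for u, v in links:                # u routes through v toward BR
        c = cost[v] + (1.0 - trust(v))
        if c < cost[u]:
            cost[u], parent[u] = c, v

node, path = "C", ["C"]
while node != "BR":
    node = parent[node]; path.append(node)
print("C's most trusted route:", " -> ".join(path), f"(cost {cost['C']:.2f})")
```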
Pyramid Methods in Image Processing
The data structure used to represent image information can be critical to the successful completion of an image processing task. One structure that has attracted considerable attention is the image pyramid. This consists of a set of lowpass or bandpass copies of an image, each representing pattern information at a different scale. Here we describe a variety of pyramid methods that we have developed for image data compression, enhancement, analysis and graphics. A long-standing goal of this work is machines that can perform most of the routine visual tasks that humans do effortlessly. It is becoming increasingly clear that the format used to represent image data can be as critical in image processing as the algorithms applied to the data. A digital image is initially encoded as an array of pixel intensities, but this raw format is not suited to most tasks. Alternatively, an image may be represented by its Fourier transform, with operations applied to the transform coefficients rather than to the original pixel values. This is appropriate for some data compression and image enhancement tasks, but inappropriate for others. The transform representation is particularly unsuited to machine vision and computer graphics, where the spatial location of pattern elements is critical. Recently there has been a great deal of interest in representations that retain spatial localization as well as localization in the spatial-frequency domain. This is achieved by decomposing the image into a set of spatial-frequency bandpass component images. Individual samples of a component image represent image pattern information that is appropriately localized, while the bandpassed image as a whole represents information about a particular fineness of detail or scale. There is evidence that the human visual system uses such a representation, and multiresolution schemes are becoming increasingly popular in machine vision and in image processing in general. The importance of analyzing images at many scales arises from the nature of images themselves. Scenes in the world contain objects of many sizes, and these objects contain features of many sizes. Moreover, objects can be at various distances from the viewer. As a result, any analysis procedure that is applied only at a single scale may miss information at other scales. The solution is to carry out analyses at all scales simultaneously. Convolution is the basic operation of most image analysis systems, and convolution with large weighting functions is a notoriously expensive computation. In a multiresolution system one wishes to perform convolutions with kernels of many sizes, ranging from very small to very large, and the computational problems appear forbidding. Therefore one of the main problems in working with multiresolution representations is to develop fast and efficient techniques. Members of the Advanced Image Processing Research Group have been actively involved in the development of multiresolution techniques for some time. Most of the work revolves around a representation known as a "pyramid," which is versatile, convenient, and efficient to use. We have applied pyramid-based methods to some fundamental problems in image analysis, data compression, and image manipulation.
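A compact sketch of the Gaussian and Laplacian pyramids described here, using the standard separable 5-tap kernel (the specific kernel weights and boundary handling are common choices, not necessarily the article's exact ones):

```python
# Gaussian/Laplacian pyramid: REDUCE blurs and downsamples; the Laplacian
# levels are the bandpass differences between successive Gaussian levels.
import numpy as np
from scipy.ndimage import convolve1d

KERNEL = np.array([1, 4, 6, 4, 1], float) / 16.0   # separable 5-tap filter

def reduce_(img):
    blurred = convolve1d(convolve1d(img, KERNEL, axis=0), KERNEL, axis=1)
    return blurred[::2, ::2]                       # drop every other row/column

def expand(img, shape):
    up = np.zeros(shape)
    up[::2, ::2] = img                             # upsample with zeros, then blur
    return 4 * convolve1d(convolve1d(up, KERNEL, axis=0), KERNEL, axis=1)

def pyramids(img, levels=3):
    gauss = [img]
    for _ in range(levels):
        gauss.append(reduce_(gauss[-1]))
    laplace = [g - expand(gsmall, g.shape)         # bandpass residuals
               for g, gsmall in zip(gauss, gauss[1:])]
    return gauss, laplace

img = np.random.default_rng(0).random((64, 64))
gauss, laplace = pyramids(img)
print([g.shape for g in gauss], [l.shape for l in laplace])
```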
Stupid Tutoring Systems, Intelligent Humans
The initial vision for intelligent tutoring systems involved powerful, multi-faceted systems that would leverage rich models of students and pedagogies to create complex learning interactions. But the intelligent tutoring systems used at scale today are much simpler. In this article, I present hypotheses on the factors underlying this development, and discuss the potential of educational data mining driving human decision-making as an alternate paradigm for online learning, focusing on intelligence amplification rather than artificial intelligence.
Analysis and design of monolithic, high PSR, linear regulators for SoC applications
Linear regulators are critical analog blocks that shield a system from fluctuations in the supply rails, and the importance of determining their power supply rejection (PSR) performance is magnified in SoC systems, given their inherently noisy environments. In this work, a simple, intuitive voltage divider model is introduced to analyze the PSR of linear regulators, from which design guidelines for obtaining high PSR performance are derived. The PSR of regulators that use PMOS output stages for low drop-out (LDO) operation, crucial for modern low-voltage systems, is enhanced by error amplifiers that present a supply-correlated ripple at the gate of the PMOS pass device. Conversely, amplifiers that suppress the supply ripple at their output are optimal for NMOS output stages, since the source is now free from output ripple. A better PSR bandwidth, at the cost of dc PSR, can be obtained by interchanging the amplifiers in the two cases. It is also shown that the dc PSR, its dominant frequency breakpoint (where performance starts to degrade), and three subsequent breakpoints are determined by the dc open-loop gain, the error amplifier bandwidth, the unity-gain frequency (UGF) of the system, the output pole, and the ESR zero, respectively. These results were verified with SPICE simulations using BSIM3 models for the TSMC 0.35 μm CMOS process from MOSIS.
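The divider intuition can be stated compactly; what follows is our paraphrase with our own symbols, not the paper's derivation. If \(Z_{dd}\) denotes the impedance from the output node up to the supply (e.g., through the pass device) and \(Z_o\) the impedance from the output down to ground, the fraction of supply ripple reaching the output is roughly

\[ \frac{v_{out}}{v_{dd}} \;\approx\; \frac{Z_o}{Z_o + Z_{dd}}, \qquad \mathrm{PSR\,(dB)} \;=\; 20\log_{10}\left|\frac{v_{out}}{v_{dd}}\right| \]

so obtaining high PSR amounts to keeping \(Z_{dd}\) large and/or \(Z_o\) small at the frequencies of interest.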
ACCELERATED VIBRATION TESTING BASED ON FATIGUE DAMAGE SPECTRA
Failure of an aerospace component can arise through the long term exposure to fatigue damaging events such as large numbers of low amplitude random events and/or relatively fewer high amplitude events. Mission profiling and test synthesis is a process for deriving a simple laboratory test that has at least the same damage potential as the real environment but in a fraction of the real time. In this paper we introduce the technical concepts and present a case study showing how new technology has dramatically reduced the time it takes to prepare and reduce the original test data.
Competing Loyalty Programs: Impact of Market Saturation, Market Share, and Category Expandability
Loyalty programs have become an important component of firms' relationship management strategies. There are now industries in which numerous rival loyalty programs are offered, inducing intense competition among these programs. However, existing research on loyalty programs has often studied such programs in a noncompetitive setting and has often focused on a single program in isolation. Addressing this gap, this research examines the effect of a firm's competitive positioning and market saturation on the performance of the firm's loyalty program. Based on the analysis of firm- and individual-level data from the airline industry, the results indicate that larger firms tend to benefit more from their loyalty program offerings than smaller firms. Moreover, when the product category demand is rigid, the impact of an individual loyalty program decreases as the marketplace becomes more saturated with competing programs. However, when the product category is highly expandable, the saturation effect disappears. Under such conditions, loyalty programs can help an industry gain competitive advantage over substitute offerings outside the industry, and multiple programs can effectively coexist even under a high level of market saturation.
A systematic review of implementation strategies for assessment, prevention, and management of ICU delirium and their effect on clinical outcomes
INTRODUCTION Despite recommendations from professional societies and patient safety organizations, the majority of ICU patients worldwide are not routinely monitored for delirium, preventing timely prevention and management. The purpose of this systematic review is to summarize the types of implementation strategies that have been tested to improve ICU clinicians' ability to effectively assess, prevent, and treat delirium, and to evaluate the effect of these strategies on clinical outcomes. METHOD We searched PubMed, Embase, PsychINFO, Cochrane, and CINAHL (January 2000 to April 2014) for studies on implementation strategies that included delirium-oriented interventions in adult ICU patients. Studies were suitable for inclusion if the efficacy of the implementation strategies was described in terms of a clinical outcome or a process outcome. RESULTS We included 21 studies, all reporting process measures, while 9 reported both process measures and clinical outcomes. Some individual strategies, such as "audit and feedback" and "tailored interventions", may be important for establishing clinical outcome improvements, but otherwise robust data on the effectiveness of specific implementation strategies were scarce. Successful implementation interventions frequently changed process measures, such as improvements in adherence to delirium screening of up to 92%, but relating process measures to outcome changes was generally not possible. In meta-analyses, reduced mortality and reduced ICU length of stay were statistically more likely with implementation programs that employed more (six or more) rather than fewer implementation strategies, and when a framework was used that either integrated current evidence on pain, agitation, and delirium management (PAD) or employed a strategy of early awakening, breathing, delirium screening, and early exercise (the ABCDE bundle). Using implementation strategies aimed at organizational change, in addition to behavioral change, was also associated with reduced mortality. CONCLUSION Our findings may indicate that multi-component implementation programs with a higher number of strategies targeting ICU delirium assessment, prevention, and treatment, integrated within PAD or the ABCDE bundle, have the potential to improve clinical outcomes. However, prospective confirmation of these findings is needed to inform the most effective implementation practice with regard to integrated delirium management, and such research should clearly delineate effective practice change from improvements in clinical outcomes.
Dry needling: a literature review with implications for clinical practice guidelines
BACKGROUND Wet needling uses hollow-bore needles to deliver corticosteroids, anesthetics, sclerosants, botulinum toxins, or other agents. In contrast, dry needling requires the insertion of thin monofilament needles, as used in the practice of acupuncture, without the use of injectate, into muscles, ligaments, tendons, subcutaneous fascia, and scar tissue. Dry needles may also be inserted in the vicinity of peripheral nerves and/or neurovascular bundles in order to manage a variety of neuromusculoskeletal pain syndromes. Nevertheless, some position statements by several US State Boards of Physical Therapy have narrowly defined dry needling as an 'intramuscular' procedure involving the isolated treatment of 'myofascial trigger points' (MTrPs). OBJECTIVES To operationalize an appropriate definition for dry needling based on the existing literature and to further investigate the optimal frequency, duration, and intensity of dry needling for both spinal and extremity neuromusculoskeletal conditions. MAJOR FINDINGS According to recent findings in the literature, the needle tip touches, taps, or pricks tiny nerve endings or neural tissue (i.e. 'sensitive loci' or 'nociceptors') when it is inserted into an MTrP. To date, there is a paucity of high-quality evidence to underpin the use of direct dry needling into MTrPs for the purpose of short- and long-term pain and disability reduction in patients with musculoskeletal pain syndromes. Furthermore, there is a lack of robust evidence validating the clinical diagnostic criteria for trigger point identification or diagnosis. High-quality studies have also demonstrated that manual examination for the identification and localization of a trigger point is neither valid nor reliable between examiners. CONCLUSIONS Several studies have demonstrated immediate or short-term improvements in pain and/or disability by targeting trigger points (TrPs) using in-and-out techniques such as 'pistoning' or 'sparrow pecking'; however, to date, no high-quality, long-term trials supporting in-and-out needling techniques at exclusively muscular TrPs exist, and the practice should therefore be questioned. The insertion of dry needles into asymptomatic body areas proximal and/or distal to the primary source of pain is supported by the myofascial pain syndrome literature. Physical therapists should not ignore the findings of the Western or biomedical 'acupuncture' literature that has used the very same 'dry needles' to treat patients with a variety of neuromusculoskeletal conditions in numerous large-scale randomized controlled trials. Although the optimal frequency, duration, and intensity of dry needling have yet to be determined for many neuromusculoskeletal conditions, the vast majority of dry needling randomized controlled trials have manually stimulated the needles and left them in situ for durations of between 10 and 30 minutes. Position statements and clinical practice guidelines for dry needling should be based on the best available literature, not a single paradigm or school of thought; therefore, physical therapy associations and state boards of physical therapy should consider broadening the definition of dry needling to encompass the stimulation of neural, muscular, and connective tissues, not just 'TrPs'.
Unsupervised Semantic Role Labelling
We present an unsupervised method for labelling the arguments of verbs with their semantic roles. Our bootstrapping algorithm makes initial unambiguous role assignments, and then iteratively updates the probability model on which future assignments are based. A novel aspect of our approach is the use of verb, slot, and noun class information as the basis for backing off in our probability model. We achieve 50–65% reduction in the error rate over an informed baseline, indicating the potential of our approach for a task that has heretofore relied on large amounts of manually generated training data.
Smart parking solutions for urban areas
Finding a parking place in a busy city centre is often a frustrating task for many drivers; time and fuel are wasted in the quest for a vacant spot, and traffic in the area increases due to slow-moving vehicles circling around. In this paper, we present the results of a survey on drivers' needs from parking infrastructures, viewed from a smart-services perspective. As smart parking systems are becoming a necessity in today's urban areas, we discuss the latest trends in parking availability monitoring, parking reservation, and dynamic pricing schemes. We also examine how these schemes can be integrated to form technologically advanced parking infrastructures that aim to benefit both drivers and parking operators alike.
Little evidence of association between severity of trigonocephaly and cognitive development in infants with single-suture metopic synostosis.
OBJECTIVES To measure severity of trigonocephaly among infants with single-suture metopic craniosynostosis by using a novel shape descriptor, the trigonocephaly severity index (TSI), and to evaluate whether degree of trigonocephaly correlates with their neurodevelopmental test scores. METHODS We conducted a multicenter cross-sectional and longitudinal study, identifying and recruiting 65 infants with metopic synostosis before their corrective surgery. We obtained computed tomography images for 49 infants and measured the presurgical TSI, a 3-dimensional outline-based cranial shape descriptor. We evaluated neurodevelopment by administering the Bayley Scales of Infant Development, Second Edition, and the Preschool Language Scale, Third Edition, before surgery and at 18 and 36 months of age. We fit linear regression models to estimate associations between test scores and TSI values adjusted for age at testing and race/ethnicity. We fit logistic regression models to estimate whether the odds of developmental delay were increased among children with more severe trigonocephaly. RESULTS We observed little adjusted association between neurodevelopmental test scores and TSI values, and no associations that persisted at 3 years. Trigonocephaly was less severe among children referred at older ages. CONCLUSION We observed little evidence of an association between the severity of trigonocephaly among metopic synostosis patients and their neurodevelopmental test scores. Detecting such a relationship with precision may require larger sample sizes or alternative phenotypic quantifiers. Until studies are conducted to explore these possibilities, it appears that although associated with the presence of metopic synostosis, the risk of developmental delays in young children is unrelated to further variation in trigonocephalic shape.
Exploring User-Specific Information in Music Retrieval
With the advancement of mobile computing technology and cloud-based music streaming services, user-centered music retrieval has become increasingly important. User-specific information has a fundamental impact on personal music preferences and interests. However, existing research pays little attention to modeling and integrating user-specific information in music retrieval algorithms and models to facilitate music search. In this paper, we propose a novel model, named the User-Information-Aware Music Interest Topic (UIA-MIT) model. The model effectively captures the influence of user-specific information on music preferences, and further associates users' music preferences and search terms within the same latent space. Based on this model, a user-information-aware retrieval system is developed, which can search and re-rank results based on age- and/or gender-specific music preferences. A comprehensive experimental study demonstrates that our methods can significantly improve search accuracy over existing text-based music retrieval methods.
Low-Swing Differential Conditional Capturing Flip-Flop for LC Resonant Clock Distribution Networks
In this paper we introduce a new flip-flop for use in a low-swing LC resonant clocking scheme. The proposed low-swing differential conditional capturing flip-flop (LS-DCCFF) operates with a low-swing sinusoidal clock through the utilization of reduced-swing inverters at the clock port. The functionality of the proposed flip-flop was verified at extreme corners through simulations with parasitics extracted from layout. The LS-DCCFF enables a 6.5% reduction in power compared to the full-swing flip-flop, with a 19% area overhead. In addition, a frequency-dependent delay associated with driving pulsed flip-flops with a low-swing sinusoidal clock has been characterized. The LS-DCCFF has an 870 ps longer data-to-output delay than the full-swing flip-flop at the same setup time for a 100 MHz sinusoidal clock. The functionality of the proposed flip-flop was tested and verified by using the LS-DCCFF in a dual-mode multiply-and-accumulate (MAC) unit fabricated in TSMC 90-nm CMOS technology. Low-swing resonant clocking achieved around a 5.8% reduction in total power with a 5.7% area overhead for the MAC.
Citizenship, nationality, and ethnicity : reconciling competing identities
Part I: The Conceptual Kit: The Search for Clarity. 1. Introducing the Argument. 2. Rethinking Citizenship, Nationality and Ethnicity. 3. Avoiding Conflations and Subsumptions. 4. Race and Religion: Untenable Factors in Nation Formation. Part II: The Empirical Process: The Trajectory of Ethnification. 5. Colonialism and European Expansion. 6. Proletarian Internationalism and the Socialist State. 7. The Nation-State and Project Homogenization. 8. Immigration and the Chauvinism of Prosperity. Part III: Towards a Rapprochement: Concepts and Reality. 9. Reconceptualizing Nation and Nationality: The Cruciality of Territory and Language. 10. Class, Nation, Ethnie and Race: Interlinkages. 11. Reconciling Nationality and Ethnicity: The Role of Citizenship.
Gated Recurrent Capsules for Visual Word Embeddings
The caption retrieval task can be defined as follows: given a set of images I and a set of describing sentences S, for each image i in I we must find the sentence in S that best describes i. The most commonly applied method for solving this problem is to build a multimodal space and map each image and each sentence into that space, so that they can be compared easily. A non-conventional model called Word2VisualVec has been proposed recently: instead of mapping images and sentences to a multimodal space, it maps sentences directly into a space of visual features. Advances in the computation of visual features suggest that such an approach is promising. In this paper, we propose a new recurrent neural network model following that unconventional approach, based on Gated Recurrent Capsules (GRCs), designed as an extension of Gated Recurrent Units (GRUs). We show that GRCs outperform GRUs on the caption retrieval task. We also argue that GRCs hold great potential for other applications.
Scheduler technologies in support of high performance data analysis
Job schedulers are a key component of scalable computing infrastructures. They orchestrate all of the work executed on the computing infrastructure and directly impact the effectiveness of the system. Recently, job workloads have diversified from long-running, synchronously-parallel simulations to include short-duration, independently parallel high performance data analysis (HPDA) jobs. Each of these job types requires different features and scheduler tuning to run efficiently. A number of schedulers have been developed to address both job workload and computing system heterogeneity. High performance computing (HPC) schedulers were designed to schedule large-scale scientific modeling and simulations on supercomputers. Big Data schedulers were designed to schedule data processing and analytic jobs on clusters. This paper compares and contrasts the features of HPC and Big Data schedulers with a focus on accommodating both scientific computing and high performance data analytic workloads. Job latency is critical for the efficient utilization of scalable computing infrastructures, and this paper presents the results of job launch benchmarking of several current schedulers: Slurm, Son of Grid Engine, Mesos, and Yarn. We find that all of these schedulers have low utilization for short-running jobs. Furthermore, employing multilevel scheduling significantly improves the utilization across all schedulers.
The anti-inflammatory activity of licorice, a widely used Chinese herb.
CONTEXT The increasing incidence and impact of inflammatory diseases have encouraged the search for new pharmacological strategies to face them. Licorice has been used to treat inflammatory diseases since ancient times in China. OBJECTIVE To summarize the current knowledge on the anti-inflammatory properties and mechanisms of compounds isolated from licorice, to introduce its traditional use, modern clinical trials and officially approved drugs, to evaluate its safety, and to obtain new insights for further research on licorice. METHODS PubMed, Web of Science, Science Direct and ResearchGate were the information sources for the search terms 'licorice', 'licorice metabolites', 'anti-inflammatory', 'triterpenoids', 'flavonoids' and their combinations, mainly from 2010 to 2016, without language restriction. Studies were selected from Science Citation Index journals; in vitro studies with a Jadad score of less than 2 points, and in vivo and clinical studies with experimental flaws, were excluded. RESULTS Two hundred and ninety-five papers were retrieved and 93 papers were reviewed. Licorice extract, 3 triterpenes and 13 flavonoids exhibit evident anti-inflammatory properties, mainly by decreasing TNF, MMPs, PGE2 and free radicals, which also explains licorice's traditional applications in stimulating digestive system functions, eliminating phlegm, relieving coughing, nourishing qi and alleviating pain in TCM. Five hundred and fifty-four drugs containing licorice have been approved by the CFDA. The side effects may be due to its cortical-hormone-like action. CONCLUSION Licorice and its natural compounds have demonstrated anti-inflammatory activities. More pharmacokinetic studies using different models and dosages should be carried out, and the maximum tolerated dose is also critical for the clinical use of licorice extract and purified compounds.
Who Gets What in British Politics – and How? An Analysis of Media Reports on Lobbying around Government Policies, 2001–7
Questions about the political influence of organised interests are at the heart of democratic theory and political science. Yet comparatively little is known empirically about the effectiveness of different power resources in policy struggles and how organised interests succeed or fail to employ these resources to achieve desired political outcomes. The main factors behind the empirical neglect of political influence include problems of measurement and a scarcity of relevant data. To address this problem, a newspaper analysis was conducted to compile a data set of 163 policy proposals advanced by UK governments between 2001 and 2007 and to record the reported policy position of organised interests. The data are used to assess frequently voiced expectations in the literature about organised interest politics and political influence in a new light. The results show that support from interest groups is positively related to a proposal becoming policy. The positions of business groups are no better reflected ...
Rosiglitazone Evaluated for Cardiac Outcomes and Regulation of Glycaemia in Diabetes (RECORD): study design and protocol
Studies suggest that in addition to blood glucose concentrations, thiazolidinediones such as rosiglitazone improve some cardiovascular (CV) risk factors and surrogate markers, that are abnormal in type 2 diabetes. However, fluid retention might lead to cardiac failure in a minority of people. The aim of the Rosiglitazone Evaluated for Cardiac Outcomes and Regulation of Glycaemia in Diabetes (RECORD) study is to evaluate the long-term impact of these effects on CV outcomes, as well as on long-term glycaemic control, in people with type 2 diabetes. RECORD is a 6-year, randomised, open-label study in type 2 diabetic patients with inadequate blood glucose control (HbA1c 7.1–9.0%) on metformin or sulphonylurea alone. The study is being performed in 327 centres in Europe and Australasia. After a 4-week run-in, participants were randomised by current treatment stratum to add-on rosiglitazone, metformin or sulphonylurea, with dose titration to a target HbA1c of ≤7.0%. If confirmed HbA1c rises to ≥8.5%, either a third glucose-lowering drug is added (rosiglitazone-treated group) or insulin is started (non-rosiglitazone group). The same criterion for failure of triple oral drug therapy in the rosiglitazone-treated group is used for starting insulin in this group. The primary endpoint is the time to first CV hospitalisation or death, blindly adjudicated by a central endpoints committee. The study aim is to evaluate non-inferiority of the rosiglitazone group vs the non-rosiglitazone group with respect to CV outcomes. Safety, tolerability and study conduct are monitored by an independent board. All CV endpoint and safety data are held and analysed by a clinical trials organisation, and are not available to the study investigators while data collection is open. Over a 2-year period a total of 7,428 people were screened in 25 countries. Of these, 4,458 were randomised; 2,228 on background metformin, 2,230 on background sulphonylurea. Approximately half of the participants are male (52%) and almost all are Caucasian (99%). The RECORD study should provide robust data on the extent to which rosiglitazone, in combination with metformin or sulphonylurea therapy, affects CV outcomes and progression of diabetes in the long term.
Single centre 20 year survey of antiepileptic drug-induced hypersensitivity reactions.
BACKGROUND Epilepsy is a chronic neurological disease which affects about 1% of the human population. There are 50 million patients in the world suffering from this disease, with 2 million new cases observed per year. The necessary treatment with antiepileptic drugs (AEDs) increases the risk of adverse reactions. In about 15% of people receiving AEDs, cutaneous reactions, such as a maculopapular or erythematous pruritic rash, may appear within four weeks of initiating therapy. METHODS This study involved 300 epileptic patients in the period between September 1989 and September 2009. A cutaneous adverse reaction was defined as a diffuse rash which had no obvious reason other than a drug effect and which resulted in contacting a physician. RESULTS Among the 300 epileptic patients of the Neurological Practice in Kielce (132 males and 168 females), a skin reaction to at least one AED was found in 30 patients. As many as 95% of the reactions occurred during therapy with carbamazepine, phenytoin, lamotrigine or oxcarbazepine. One of the patients developed Stevens-Johnson syndrome. CONCLUSION Some hypersensitivity problems of epileptic patients were clearly related to antiepileptic treatment. Among the AEDs, gabapentin, topiramate, levetiracetam, vigabatrin and phenobarbital were not associated with skin lesions, although the number of patients taking the latter was small.
Staging for vulvar cancer.
Vulvar cancer has been staged by the International Federation of Gynaecology and Obstetrics (FIGO) since 1969, and the original staging system was based on clinical findings only. This system provided a very good spread of prognostic groupings. Because vulvar cancer is virtually always treated surgically, because the status of the lymph nodes is the most important prognostic factor, and because nodal status can only be determined with certainty by histological examination of resected lymph nodes, FIGO introduced a surgical staging system in 1988. This was modified in 1994 to include a category of microinvasive vulvar cancer (stage IA), because such patients have virtually no risk of lymph node metastases. This system did not, however, give a reasonably even spread of prognostic groupings. In addition, patients with stage III disease were shown to be a prognostically heterogeneous group, and the number of positive nodes and their morphology were not taken into account. A new surgical staging system for vulvar cancer was therefore introduced by FIGO in 2009. Initial retrospective analyses suggest that this new staging system has overcome the major deficiencies of the 1994 system.
Efficient Edge Detection on Low-Cost FPGAs
Improving the efficiency of edge detection in embedded applications, such as UAV control, is critical for reducing system cost and power dissipation. Field-programmable gate arrays (FPGAs) are a good platform for such improvements because of their specialised internal structure; however, current FPGA edge detectors do not exploit this structure well. A new edge detection architecture is proposed that is better optimised for FPGAs. The basis of the architecture is the Sobel edge kernels, which are shown to be the most suitable because of their separability and absence of multiplications. Edge intensities are calculated with a new 4:2 compressor that consists of two custom-designed 3:2 compressors. Addition speed is increased by breaking carry propagation chains with look-ahead logic. Testing of the design showed a 28% increase in speed and a 4.4% reduction in area over previous equivalent designs, demonstrating that it will lower the cost of edge detection systems and dissipate less power while still maintaining high-speed control. Keywords: edge detection; FPGA; compressor; low-cost; UAV
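The separability that makes the Sobel kernels hardware-friendly is easy to verify in software; the sketch below factors each 3x3 kernel into 1-D passes and uses the common |gx|+|gy| magnitude approximation, a usual hardware simplification rather than necessarily this paper's exact datapath:

```python
# Each 3x3 Sobel kernel is the outer product of a 1-D smoothing filter and a
# 1-D difference filter, so gradients need only adds, subtracts, and shifts
# (multiplying by 2), never general multiplications.
import numpy as np
from scipy.ndimage import convolve1d

smooth = np.array([1, 2, 1])     # binomial smoothing
diff = np.array([1, 0, -1])      # central difference

# separability check: outer product reproduces the 3x3 horizontal-gradient kernel
assert (np.outer(smooth, diff) == [[1, 0, -1], [2, 0, -2], [1, 0, -1]]).all()

def sobel(img):
    gx = convolve1d(convolve1d(img, smooth, axis=0), diff, axis=1)
    gy = convolve1d(convolve1d(img, diff, axis=0), smooth, axis=1)
    return np.abs(gx) + np.abs(gy)   # hardware-friendly magnitude approximation

img = np.zeros((6, 6)); img[:, 3:] = 1.0   # vertical step edge
print(sobel(img))
```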
Hamming Clustering: A New Approach to Rule Extraction
A new algorithm, called Hamming Clustering (HC), is proposed to extract a set of rules underlying a given classification problem. It is able to reconstruct the AND-OR expression associated with any Boolean function from a training set of samples.
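In the spirit of the approach (a loose illustration, not the published HC algorithm), one can cluster same-class binary samples by Hamming distance and turn each cluster into an AND term, with disagreeing bits becoming don't-cares:

```python
# Group nearby positive samples by Hamming distance, then keep a literal
# wherever a cluster agrees ('-' marks a don't-care). Radius is arbitrary.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def cluster(samples, radius=1):
    groups = []
    for s in samples:
        for g in groups:
            if all(hamming(s, m) <= radius for m in g):
                g.append(s); break
        else:
            groups.append([s])
    return groups

def implicant(group):
    # keep a literal where all members agree; '-' marks a don't-care
    return "".join(bits[0] if len(set(bits)) == 1 else "-" for bits in zip(*group))

positives = ["1101", "1100", "0111", "0110"]    # samples where f(x) = 1
terms = [implicant(g) for g in cluster(positives)]
print(" OR ".join(f"[{t}]" for t in terms))      # prints: [110-] OR [011-]
```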
Multiple-event direct to histogram TDC in 65nm FPGA technology
A novel multiple-event time-to-digital converter (TDC) with direct-to-histogram output is implemented in a 65 nm Xilinx Virtex 5 FPGA. The delay-line-based architecture achieves 16.3 ps temporal accuracy over a 2.86 ns dynamic range. The measured maximum conversion rate of 6.17 Gsamples/s and the sampling rate of 61.7 Gsamples/s are the highest published in the literature. The system achieves a linearity of -0.9/+3 LSB DNL and -1.5/+5 LSB INL. The TDC is demonstrated in a direct time-of-flight optical ranging application with 12 mm error over a 350 mm range.
Approximate logic synthesis for error tolerant applications
Error tolerance formally captures the notion that, for a wide variety of applications including audio, video, graphics, and wireless communications, a defective chip that produces erroneous values at its outputs may be acceptable, provided the errors are of certain types and their severities are within application-specified thresholds. All previous research on error tolerance has focused on identifying such defective but acceptable chips during post-fabrication testing to improve yield. In this paper, we explore a completely new approach to exploiting error tolerance, based on the following observation: if certain deviations from the nominal output values are acceptable, then we can exploit this flexibility during circuit design to reduce circuit area and delay as well as to increase yield. The specific error tolerance metric we focus on is error rate, i.e., how often the circuit produces erroneous outputs. We propose a new logic synthesis approach for the new problem of identifying how to exploit a given error rate threshold to maximally reduce the area of the synthesized circuit. Experimental results show that for an error rate threshold within 1%, our approach provides a 9.43% literal reduction on average across all the benchmarks we target.
A Survey on Data Center Networking (DCN): Infrastructure and Operations
Data centers (DCs), owing to the exponential growth of Internet services, have emerged as an irreplaceable and crucial infrastructure powering this ever-growing trend. A DC typically houses a large number of computing and storage nodes, interconnected by a specially designed network, namely, the DC network (DCN). The DCN serves as a communication backbone and plays a pivotal role in optimizing DC operations. However, compared to traditional networks, the unique requirements of the DCN, for example, large scale, vast application diversity, high power density, and high reliability, pose significant challenges to its infrastructure and operations. Premier publication venues (e.g., journals and systems conferences) show that increasing research effort is being devoted to optimizing the design and operations of the DCN. In this paper, we aim to present a systematic taxonomy and survey of recent research efforts on the DCN. Specifically, we classify these research efforts into two areas: 1) DCN infrastructure and 2) DCN operations. For the former, we review and compare the transmission technologies and network topologies used or proposed in the DCN infrastructure. For the latter, we summarize the existing traffic control techniques in DCN operations, and survey optimization methods for achieving diverse operational objectives, including high network utilization, fair bandwidth sharing, low service latency, low energy consumption, and high resiliency. We conclude this survey by envisioning a few open research opportunities in DCN infrastructure and operations.
Modeling and simulation of the power transformer faults and related protective relay behavior
The objective of this study is the modeling of power transformer faults and its application to the performance evaluation of a commercial digital power transformer relay. A new method to build an EMTP/ATP power transformer model is proposed in this paper. Detailed modeling of the transformer relay is also discussed. The transient waveforms generated by ATP under different operating conditions are used to evaluate the performance of the transformer relay. The computer simulation results presented in this paper are consistent with the laboratory test results obtained using an analog power system model.
Metric Learning
Similarity between objects plays an important role in both human cognitive processes and artificial systems for recognition and categorization. How to appropriately measure such similarities for a given task is crucial to the performance of many machine learning, pattern recognition and data mining methods. This book is devoted to metric learning, a set of techniques for automatically learning similarity and distance functions from data, which has attracted a lot of interest in machine learning and related fields in the past ten years. In this book, we provide a thorough review of the metric learning literature that covers algorithms, theory and applications for both numerical and structured data. We first introduce relevant definitions and classic metric functions, as well as examples of their use in machine learning and data mining. We then review a wide range of metric learning algorithms, starting with the simple setting of linear distance and similarity learning. We show how one may scale up these methods to very large amounts of training data. To go beyond the linear case, we discuss methods that learn nonlinear metrics or multiple linear metrics throughout the feature space, and review methods for more complex settings such as multi-task and semi-supervised learning. Although most of the existing work has focused on numerical data, we cover the literature on metric learning for structured data like strings, trees, graphs and time series. In the more technical part of the book, we present some recent statistical frameworks for analyzing the generalization performance in metric learning and derive results for some of the algorithms presented earlier. Finally, we illustrate the relevance of metric learning in real-world problems through a series of successful applications to computer vision, bioinformatics and information retrieval.
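As a concrete anchor for the simple linear setting the book starts from, here is a toy sketch of learning a Mahalanobis-style metric d(x, y) = ||Lx - Ly||^2 from similar/dissimilar pairs by gradient descent. It is a minimal illustration under assumed loss and margin choices, not any specific algorithm from the book.

```python
import numpy as np

def learn_metric(X, pairs, same, lr=0.05, epochs=200):
    # Linear metric learning: d(x,y) = ||Lx - Ly||^2 with M = L^T L,
    # so the learned metric is a valid pseudo-metric by construction.
    L = np.eye(X.shape[1])
    for _ in range(epochs):
        for (i, j), s in zip(pairs, same):
            diff = L @ (X[i] - X[j])
            d2 = diff @ diff
            g = 2 * np.outer(diff, X[i] - X[j])  # gradient of d2 w.r.t. L
            if s:                                # pull similar pair closer
                L -= lr * g
            elif d2 < 1.0:                       # push dissimilar pair past margin
                L += lr * g
    return L

X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0]])
L = learn_metric(X, pairs=[(0, 1), (0, 2)], same=[True, False])
```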
GLTM: A Global and Local Word Embedding-Based Topic Model for Short Texts
Short texts have become a prevalent source of information, and discovering topical information from short text collections is valuable for many applications. Due to the length limitation, conventional topic models based on document-level word co-occurrence information often fail to distill semantically coherent topics from short text collections. On the other hand, word embeddings have been successfully applied as a powerful tool in natural language processing. Word embeddings trained on a large corpus encode general semantic and syntactic information about words, and hence can be leveraged to guide topic modeling for short text collections as supplementary information for sparse co-occurrence patterns. However, word embeddings are trained on a large external corpus, and the encoded information is not necessarily suited to the training data set of the topic model, which most existing models ignore. In this article, we propose a novel global and local word embedding-based topic model (GLTM) for short texts. In the GLTM, we train global word embeddings from a large external corpus and employ the continuous skip-gram model with negative sampling (SGNS) to obtain local word embeddings. Utilizing both the global and local word embeddings, the GLTM can distill semantic relatedness information between words, which can be further leveraged by the Gibbs sampler in the inference process to strengthen the semantic coherence of topics. Compared with five state-of-the-art short text topic models on four real-world short text collections, the proposed GLTM exhibits superior performance in most cases.
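The abstract describes two embedding sources; a sketch of how they might be produced and blended is below. The gensim API calls are real, but the blending function, the weighting, and the file path are assumptions for illustration; the GLTM's actual Gibbs sampler is not shown here.

```python
from gensim.models import Word2Vec, KeyedVectors

docs = [["cheap", "flights", "deal"], ["flight", "delay", "news"]]  # toy corpus

# Local embeddings: skip-gram with negative sampling (SGNS) on the corpus itself.
local = Word2Vec(sentences=docs, vector_size=50, sg=1, negative=5,
                 min_count=1, epochs=50).wv

# Global embeddings: pretrained on a large external corpus (path is illustrative).
# global_wv = KeyedVectors.load_word2vec_format("external-vectors.bin", binary=True)

def relatedness(w1, w2, global_wv, local_wv, alpha=0.5):
    # Blend global and local cosine similarity: the kind of word-word semantic
    # relatedness a Gibbs sampler could use to bias topic assignments.
    return (alpha * global_wv.similarity(w1, w2)
            + (1 - alpha) * local_wv.similarity(w1, w2))
```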
Molecular and cell-based approaches for neuroprotection in glaucoma.
A hallmark of glaucomatous optic nerve damage is retinal ganglion cell (RGC) death. RGCs, like other central nervous system neurons, have a limited capacity to survive or regenerate an axon after injury. Strategies that prevent or slow down RGC degeneration, in combination with intraocular pressure management, may be beneficial to preserve vision in glaucoma. Recent progress in neurobiological research has led to a better understanding of the molecular pathways that regulate the survival of injured RGCs. Here we discuss a variety of experimental strategies including intraocular delivery of neuroprotective molecules, viral-mediated gene transfer, cell implants and stem cell therapies, which share the ultimate goal of promoting RGC survival after optic nerve damage. The challenge now is to assess how this wealth of knowledge can be translated into viable therapies for the treatment of glaucoma and other optic neuropathies.
A Practical Evaluation of Surge Arrester Placement for Transmission Line Lightning Protection
The use of metal-oxide varistor (MOV) surge arresters in lightning protection of overhead transmission lines to improve reliability is of great interest to electric utilities. However, for economic reasons, it is not possible to equip every transmission structure of an overhead line with surge arresters. In this paper, an evaluation of lightning protection design on a 115 kV transmission line using surge arresters, based on a model built from field data, is presented. The model is used for computer simulation with the Alternative Transients Program (ATP). Various design procedures aimed at maximizing the reliability of service on the transmission line using a minimal number of surge arresters are analyzed. The designs considered include the number of surge arresters per tower, the distance between towers equipped with surge arresters, and the dependence of these configurations on tower footing resistance. The lightning protection designs are analyzed using 'lightning flashover charts' proposed in this paper. In addition, an analytical model of two 115 kV transmission lines in the Southwest U.S. has been developed, and the different surge arrester location strategies used on these lines have been analyzed. Practical experiences and the effectiveness of various lightning protection designs used on these transmission lines are discussed.
Transdiaphragmatic pressure and neural respiratory drive measured during inspiratory muscle training in stable patients with chronic obstructive pulmonary disease
PURPOSE Inspiratory muscle training (IMT) is a rehabilitation therapy for stable patients with COPD. However, its therapeutic effect remains undefined due to the unclear nature of diaphragmatic mobilization during IMT. Diaphragmatic mobilization, represented by transdiaphragmatic pressure (Pdi), and neural respiratory drive, expressed as the corrected root mean square (RMS) of the diaphragmatic electromyogram (EMGdi), both provide vital information for selecting the proper IMT device and load in COPD, thereby contributing to the curative effect of IMT. Pdi and the RMS of EMGdi (RMSdi%) were measured and compared during inspiratory resistive training and threshold load training in stable patients with COPD. PATIENTS AND METHODS Pdi and neural respiratory drive were measured continuously during inspiratory resistive training and threshold load training in 12 stable patients with COPD (forced expiratory volume in 1 s ± SD was 26.1%±10.2% predicted). RESULTS Pdi was significantly higher during high-intensity threshold load training (91.46±17.24 cmH2O) than during inspiratory resistive training (27.24±6.13 cmH2O) in stable patients with COPD (P<0.01). A significant difference was also found in RMSdi% between high-intensity threshold load training and inspiratory resistive training (69.98%±16.78% vs 17.26%±14.65%, P<0.01). CONCLUSION Threshold load training produces greater mobilization of Pdi and neural respiratory drive than inspiratory resistive training in stable patients with COPD.
A Seismic Strengthening Technique for Reinforced Concrete Columns Using Sprayed FRP
Conventional methods for seismic retrofitting of concrete columns include reinforcement with steel plates or steel frame braces, as well as cross-sectional increments and in-filled walls. However, these methods have some disadvantages, such as increased mass and the need for precise construction. Fiber-reinforced polymer (FRP) sheets for seismic strengthening of concrete columns, made of new light-weight composite materials such as carbon fiber or glass fiber, have been developed; they offer excellent durability and performance and are being widely applied to overcome the shortcomings of conventional seismic strengthening methods. Nonetheless, the FRP-sheet reinforcement method also has some drawbacks, such as the need for prior surface treatment, problems at joints, and relatively expensive material costs. In the current research, the structural and material properties associated with a new method for seismic strengthening of concrete columns using FRP were investigated. The new technique is a sprayed FRP system, achieved by mixing chopped glass and carbon fibers with epoxy and vinyl ester resin in the open air and spraying the resulting mixture onto the uneven surface of the concrete columns. This paper reports on the seismic resistance of shear-controlled reinforced concrete columns strengthened using the sprayed FRP system. Five shear column specimens were designed and then strengthened with sprayed FRP using different combinations of short carbon or glass fibers and epoxy or vinyl ester resins; there was also a non-strengthened control specimen. Cyclic loading tests were carried out, and the ultimate load carrying capacity and deformation were investigated, as well as hysteresis in the lateral load-drift relationship. The results showed that the shear strengths and deformation capacities of columns strengthened using sprayed FRP improved markedly compared with those of the control column. The sprayed FRP technique developed in this study can be practically and effectively used for the seismic strengthening of existing concrete columns.
Tables, Counters, and Shelves: Semantic Mapping of Surfaces in 3D
Semantic mapping aims to create maps that include meaningful features, both to robots and humans. We present an extension to our feature-based mapping technique that includes information about the locations of horizontal surfaces such as tables, shelves, or counters in the map. The surfaces are detected in 3D point clouds, the locations of which are optimized by our SLAM algorithm. The resulting scans of surfaces are then analyzed to segment them into distinct surfaces, which may include measurements of a single surface across multiple scans. Preliminary results are presented in the form of a feature-based map augmented with a set of 3D point clouds in a consistent global map frame that represent all detected surfaces within the mapped area.
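The paper's surface detector runs inside a SLAM pipeline; as a standalone sketch of the core step, here is a RANSAC-style search for near-horizontal planes in a gravity-aligned point cloud. The thresholds are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def horizontal_planes(points, n_iter=500, tol=0.02, min_inliers=200):
    # RANSAC for planes whose normal is close to the gravity axis (z),
    # i.e. candidate tables / counters / shelves in a gravity-aligned cloud.
    rng = np.random.default_rng(0)
    planes = []
    for _ in range(n_iter):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        if np.linalg.norm(n) < 1e-9:      # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        if abs(n[2]) < 0.95:              # keep only near-horizontal planes
            continue
        dist = np.abs((points - p1) @ n)  # point-to-plane distances
        inliers = dist < tol
        if inliers.sum() >= min_inliers:
            planes.append((n, p1, inliers))
    return planes
```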
Torasemide inhibits transcardiac extraction of aldosterone in patients with congestive heart failure.
A Survey on OFDM-Based Elastic Core Optical Networking
Orthogonal frequency-division multiplexing (OFDM) is a modulation technology that has been widely adopted in many new and emerging broadband wireless and wireline communication systems. Due to its capability to transmit a high-speed data stream using multiple spectral-overlapped lower-speed subcarriers, OFDM technology offers superior advantages of high spectrum efficiency, robustness against inter-carrier and inter-symbol interference, adaptability to severe channel conditions, etc. In recent years, there have been intensive studies on optical OFDM (O-OFDM) transmission technologies, and it is considered a promising technology for future ultra-high-speed optical transmission. Based on O-OFDM technology, a novel elastic optical network architecture with immense flexibility and scalability in spectrum allocation and data rate accommodation could be built to support diverse services and the rapid growth of Internet traffic in the future. In this paper, we present a comprehensive survey on OFDM-based elastic optical network technologies, including basic principles of OFDM, O-OFDM technologies, the architectures of OFDM-based elastic core optical networks, and related key enabling technologies. The main advantages and issues of OFDM-based elastic core optical networks that are under research are also discussed.
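As background for the OFDM principle the survey builds on, a minimal baseband sketch of one OFDM symbol: an IFFT places QPSK symbols on orthogonal subcarriers, a cyclic prefix absorbs delay spread, and an FFT at the receiver recovers the data. This is textbook OFDM under an ideal channel, not any specific O-OFDM transceiver from the survey.

```python
import numpy as np

N, CP = 64, 16                       # subcarriers and cyclic-prefix length
bits = np.random.randint(0, 2, 2 * N)
qpsk = (1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])  # map bit pairs to QPSK

tx = np.fft.ifft(qpsk, N)            # each symbol rides its own subcarrier
tx = np.concatenate([tx[-CP:], tx])  # cyclic prefix absorbs channel delay spread

rx = tx                              # ideal channel for this sketch
symbols = np.fft.fft(rx[CP:], N)     # strip CP; FFT recovers the subcarriers

assert np.allclose(symbols, qpsk)    # ideal channel: perfect recovery
```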
15 GHz 25 dBm multigate-cell stacked CMOS power amplifier with 32 % PAE and ≥ 30 dB gain for 5G applications
A three-stage stacked-FET CMOS power amplifier (PA) for the 12.7 to 15.3 GHz frequency range is presented. The PA achieves more than 30 dB linear gain with a saturated output power of 25.1 dBm (320 mW) and a peak power added efficiency (PAE) of 32.4 % at 13.5 GHz. The PA is implemented in 45 nm CMOS SOI technology. High gain is achieved with two cascode pre-driver stages and a final high power stage. The output stage comprises 512 four-stack multigate-cell devices to allow high voltage swings and correspondingly high output power. The effective gate width of the output device is 614 μm. To the authors' knowledge, the combination of power and efficiency achieved in this work is the highest reported for CMOS PAs in the 15 GHz band. The amplifier occupies an area of 1 × 1 mm2 including pads.
Learning Personalized Models for Facial Expression Analysis and Gesture Recognition
Facial expression and gesture recognition algorithms are key enabling technologies for human-computer interaction (HCI) systems. State-of-the-art approaches for automatic detection of body movements and analyzing emotions from facial features rely heavily on advanced machine learning algorithms. Most of these methods are designed for the average user, but the "one-size-fits-all" assumption ignores diversity in cultural background, gender, ethnicity, and personal behavior, and limits their applicability in real-world scenarios. A possible solution is to build personalized interfaces, which practically implies learning person-specific classifiers and usually collecting a significant amount of labeled samples for each novel user. As data annotation is a tedious and time-consuming process, in this paper we present a framework for personalizing classification models which does not require labeled target data. Personalization is achieved by devising a novel transfer learning approach. Specifically, we propose a regression framework which exploits auxiliary (source) annotated data to learn the relation between person-specific sample distributions and the parameters of the corresponding classifiers. Then, when considering a new target user, the classification model is computed by simply feeding the associated (unlabeled) sample distribution into the learned regression function. We evaluate the proposed approach in different applications: pain recognition and action unit detection using visual data, and gesture classification using inertial measurements, demonstrating the generality of our method with respect to different input data types and basic classifiers. We also show the advantages of our approach in terms of accuracy and computational time, both with respect to user-independent approaches and to previous personalization techniques.
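A minimal sketch of the paper's central idea follows, under the assumption that a user's sample distribution can be summarized by feature means and variances (the paper's actual distribution representation and regressor may differ): learn per-user classifiers on labeled source users, regress from distribution signatures to classifier weights, then predict a new user's classifier from unlabeled data alone.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

def summarize(X):
    # Distribution signature for a user: feature means and variances.
    return np.concatenate([X.mean(axis=0), X.var(axis=0)])

# Source users: labeled data -> per-user classifier weights.
signatures, weights = [], []
rng = np.random.default_rng(0)
for _ in range(20):                            # 20 synthetic source users
    shift = rng.normal(size=5)
    X = rng.normal(size=(100, 5)) + shift
    y = (X @ np.ones(5) + rng.normal(size=100) > shift.sum()).astype(int)
    clf = LogisticRegression().fit(X, y)
    signatures.append(summarize(X))
    weights.append(np.concatenate([clf.coef_.ravel(), clf.intercept_]))

# Regression from sample distributions to classifier parameters.
reg = Ridge().fit(np.array(signatures), np.array(weights))

# New target user: unlabeled data only -> predicted personal classifier.
X_target = rng.normal(size=(100, 5)) + rng.normal(size=5)
w = reg.predict(summarize(X_target)[None, :])[0]
scores = X_target @ w[:-1] + w[-1]             # personalized decision scores
```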
Chatbot for IT Security Training: Using Motivational Interviewing to Improve Security Behaviour
We conduct a pre-study with 25 participants on Mechanical Turk to find out which security behaviour problems are most important to online users. The study questions are based on motivational interviewing (MI), an evidence-based treatment methodology for training people in different kinds of behavioural change. Based on the results, a chatbot is developed using the Artificial Intelligence Markup Language (AIML). The chatbot is trained to speak about three topics: passwords, privacy and secure browsing, the three topics most wanted by the users in the pre-study. Three training sessions with participants are then conducted using the chatbot.
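The paper states only that AIML was used. As an illustration, here is a minimal AIML category loaded with the python-aiml interpreter; the library choice, the file name, and the MI-style prompt wording are all assumptions, not the paper's actual knowledge base.

```python
# pip install python-aiml   (a common AIML interpreter; assumed, not confirmed)
import aiml

CATEGORY = """<?xml version="1.0" encoding="UTF-8"?>
<aiml version="1.0.1">
  <!-- MI-style opener: elicit the user's own motivation rather than lecture -->
  <category>
    <pattern>HOW DO I PICK A PASSWORD</pattern>
    <template>On a scale of 1 to 10, how important is account safety to you?</template>
  </category>
</aiml>"""

with open("passwords.aiml", "w") as f:
    f.write(CATEGORY)

kernel = aiml.Kernel()
kernel.learn("passwords.aiml")
print(kernel.respond("how do i pick a password"))
```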
Joint Embedding Models for Textual and Social Analysis
In online social networks, users openly interact, share content, and endorse each other. Although the data is interconnected, previous research has primarily modeled social network behavior separately from the textual content. Here we model the data in a holistic way, taking into account connections between social behavior and content. Specifically, we define multiple decision tasks over the relationships between users and the content generated by them. We show, on a real-world dataset, that learning a joint embedding (over user characteristics and language) and using joint prediction (based on intra- and inter-task constraints) produces consistent gains over (1) learning specialized embeddings, and (2) predicting locally w.r.t. a single task, with or without constraints.
Improving software developers' fluency by recommending development environment commands
Software developers interact with the development environments they use by issuing commands that execute various programming tools, from source code formatters to build tools. However, developers often only use a small subset of the commands offered by modern development environments, reducing their overall development fluency. In this paper, we use several existing command recommender algorithms to suggest new commands to developers based on their existing command usage history, and also introduce several new algorithms. By running these algorithms on data submitted by several thousand Eclipse users, we describe two studies that explore the feasibility of automatically recommending commands to software developers. The results suggest that, while recommendation is more difficult in development environments than in other domains, it is still feasible to automatically recommend commands to developers based on their usage history, and that using patterns of past discovery is a useful way to do so.
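The specific recommender algorithms are not detailed in the abstract; a common baseline in this setting is user-based collaborative filtering over a user x command usage matrix, sketched here with toy data.

```python
import numpy as np

def recommend(usage, user, k=2, top_n=2):
    # User-based collaborative filtering over a binary user x command matrix:
    # score unused commands by how often similar users issue them.
    sims = usage @ usage[user]
    sims = sims / (np.linalg.norm(usage, axis=1)
                   * np.linalg.norm(usage[user]) + 1e-9)   # cosine similarity
    sims[user] = -1                                        # exclude the user
    neighbours = np.argsort(sims)[-k:]                     # k most similar users
    scores = usage[neighbours].sum(axis=0)
    scores[usage[user] > 0] = -1            # never recommend known commands
    return np.argsort(scores)[::-1][:top_n]

# rows: developers; columns: IDE commands (e.g. format, rename, run-tests, ...)
usage = np.array([[1, 1, 0, 0, 1],
                  [1, 1, 1, 0, 0],
                  [0, 1, 1, 1, 0]])
print(recommend(usage, user=0))   # commands 2 and 3, discovered via neighbours
```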
The structural biology of CRISPR-Cas systems.
Prokaryotic CRISPR-Cas genomic loci encode RNA-mediated adaptive immune systems that bear some functional similarities with eukaryotic RNA interference. Acquired and heritable immunity against bacteriophage and plasmids begins with integration of ∼30 base pair foreign DNA sequences into the host genome. CRISPR-derived transcripts assemble with CRISPR-associated (Cas) proteins to target complementary nucleic acids for degradation. Here we review recent advances in the structural biology of these targeting complexes, with a focus on structural studies of the multisubunit Type I CRISPR RNA-guided surveillance and the Cas9 DNA endonuclease found in Type II CRISPR-Cas systems. These complexes have distinct structures that are each capable of site-specific double-stranded DNA binding and local helix unwinding.
Early molecular response to posttransplantation imatinib determines outcome in MRD+ Philadelphia-positive acute lymphoblastic leukemia (Ph+ ALL).
In adult Philadelphia chromosome-positive acute lymphoblastic leukemia (Ph+ ALL), minimal residual disease (MRD) after stem cell transplantation (SCT) is associated with a relapse probability exceeding 90%. Starting imatinib in the setting of MRD may decrease this high relapse rate. In this prospective multicenter study, 27 Ph+ ALL patients received imatinib upon detection of MRD after SCT. Bcr-abl transcripts became undetectable in 14 (52%) of 27 patients, after a median of 1.5 months (0.9-3.7 months) ((early)CR(mol)). All patients who achieved an (early)CR(mol) remained in remission for the duration of imatinib treatment; 3 patients relapsed after imatinib was discontinued. Failure to achieve polymerase chain reaction (PCR) negativity shortly after starting imatinib predicted relapse, which occurred in 12 (92%) of 13 patients after a median of 3 months. Disease-free survival (DFS) in (early)CR(mol) patients is 91% +/- 9% and 54% +/- 21% after 12 and 24 months, respectively, compared with 8% +/- 7% after 12 months in patients remaining MRD+ (P < .001). In conclusion, approximately half of patients with Ph+ ALL receiving imatinib for MRD positivity after SCT experience prolonged DFS, which can be anticipated by the rapid achievement of a molecular complete remission (CR). Continued detection of bcr-abl transcripts after 2 to 3 months on imatinib identifies patients who will ultimately experience relapse and in whom additional or alternative antileukemic treatment should be initiated.
Exploring Miner Evolution in Bitcoin Network
In recent years, Bitcoin, a peer-to-peer cryptographic digital currency, has attracted a great deal of attention from the media, academia, and the general public. A user in the Bitcoin network can create bitcoins by packing and verifying new transactions in the network using their computation power. Driven by the price surge of Bitcoin, users are increasingly investing in expensive specialized hardware for Bitcoin mining. To obtain steady payouts, users also pool their computation resources to conduct pool mining. In this paper, we study the evolution of Bitcoin miners by analyzing the complete transaction blockchain. We characterize how the productivity, computation power and transaction activity of miners evolve over time. We also conduct an in-depth study of the largest mining pool, F2Pool. We show how it has grown over time and how computation power is distributed among its miners. Finally, we build a simple economic model to explain the evolution of Bitcoin miners.
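The abstract mentions "a simple economic model" without giving it; one standard form such a model takes is an expected payout proportional to a miner's share of network hash power, sketched below. The constants reflect the 25 BTC subsidy era and are assumptions for illustration, not a statement of the paper's model.

```python
def expected_daily_reward(my_hashrate, network_hashrate,
                          block_reward=25.0, blocks_per_day=144):
    # Expected BTC/day for a miner: their share of network hash power
    # times the daily block subsidy. 25 BTC was the subsidy in 2012-2016;
    # the paper's actual model is not specified in the abstract.
    return my_hashrate / network_hashrate * block_reward * blocks_per_day

# A miner holding 0.1% of the network's hash power:
print(expected_daily_reward(my_hashrate=1e15, network_hashrate=1e18))  # 3.6 BTC
```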
Word segmentation for the Myanmar language
This study reports the development of a Myanmar word segmentation method using Unicode standard encoding. Word segmentation is an essential step prior to natural language processing in the Myanmar language, because a Myanmar text is a string of characters without explicit word boundary delimiters. The proposed method has two phases: syllable segmentation and syllable merging. A rule-based heuristic approach was adopted for syllable segmentation, and a dictionary-based statistical approach for syllable merging. Evaluation of test results showed that the method is very effective for the Myanmar language.
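The two-phase design can be sketched as a rule-based pass that splits text into syllables followed by greedy dictionary-based longest-match merging of syllables into words. The regex below is a drastically simplified stand-in for the paper's Myanmar syllable rules, and the dictionary is supplied by the caller.

```python
import re

# Phase 1: rule-based syllable segmentation. Illustrative pattern only:
# a real Myanmar rule set is far richer than this consonant+marks sketch.
SYLLABLE = re.compile(r"[\u1000-\u1021][\u102B-\u103E\u1036-\u103A]*")

def syllables(text):
    return SYLLABLE.findall(text)

# Phase 2: dictionary-based merging, greedy longest match over syllables.
def merge(sylls, dictionary, max_len=4):
    words, i = [], 0
    while i < len(sylls):
        for n in range(min(max_len, len(sylls) - i), 0, -1):
            cand = "".join(sylls[i:i + n])
            if n == 1 or cand in dictionary:   # fall back to a single syllable
                words.append(cand)
                i += n
                break
    return words
```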
A fully-adaptive wideband 0.5–32.75Gb/s FPGA transceiver in 16nm FinFET CMOS technology
This paper describes the design of a low power, fully-adaptive, wideband, flexible reach transceiver in 16nm FinFET CMOS embedded within an FPGA. The receiver utilizes a 3-stage CTLE with a segmented AGC to minimize parasitic peaking and a 15-tap DFE to operate over both short and long channels. The transmitter uses a swing-boosted CML driver architecture. Low noise wideband fractional-N LC PLLs combined with linear active-inductor based phase interpolators and high speed clocking are utilized for low jitter clock generation. The transceiver achieves >1200 mVdpp TX swing with <190 fs RJ and 5.39 ps TJ, achieving BER < 10^-15 over a 30 dB loss backplane at 32.75 Gb/s, while consuming 577 mW.