RAP-CLA: A Reconfigurable Approximate Carry Look-Ahead Adder
In this brief, we propose a fast yet energy-efficient reconfigurable approximate carry look-ahead adder (RAP-CLA). This adder can switch between approximate and exact operating modes, making it suitable for both error-resilient and exact applications. The structure, which is more area- and power-efficient than state-of-the-art reconfigurable approximate adders, is achieved by some modifications to the conventional carry look-ahead adder (CLA). The efficacy of the proposed RAP-CLA adder is evaluated by comparing its characteristics to those of two state-of-the-art reconfigurable approximate adders as well as the conventional (exact) CLA in a 15 nm FinFET technology. The results reveal that, in the approximate operating mode, the proposed 32-bit adder provides up to 55% and 28% delay and power reductions, respectively, compared to the exact CLA, at the cost of up to a 35.16% error rate. It also provides up to 49% and 19% lower delay and power consumption, respectively, compared to the other approximate adders considered in this brief. Finally, the effectiveness of the proposed adder is demonstrated on two image processing applications, smoothing and sharpening.
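The abstract does not spell out the adder's gate-level structure, but the core idea of approximate carry look-ahead addition is easy to model in software. The sketch below is illustrative only, assuming the common approximation in which each carry is speculated from a fixed window of lower-order bits rather than the full prefix; the window size, bit width, and function names are hypothetical stand-ins, not RAP-CLA's actual circuit.

```python
# Minimal sketch of windowed approximate carry look-ahead addition.
# Assumption: the carry into each bit is speculated from only the previous
# `window` lower-order bits. With window = n this degenerates to an exact
# CLA, mirroring a reconfigurable exact mode.

def approx_add(a: int, b: int, n: int = 32, window: int = 4) -> int:
    """Add two n-bit integers, speculating each carry from `window` bits."""
    result = 0
    for i in range(n):
        carry = 0
        # Evaluate generate/propagate only over the limited look-ahead window.
        for j in range(max(0, i - window), i):
            aj, bj = (a >> j) & 1, (b >> j) & 1
            g, p = aj & bj, aj ^ bj
            carry = g | (p & carry)
        s = ((a >> i) & 1) ^ ((b >> i) & 1) ^ carry
        result |= s << i
    return result

def exact_add(a: int, b: int, n: int = 32) -> int:
    return (a + b) & ((1 << n) - 1)

if __name__ == "__main__":
    a, b = 0x0FFF_FFFF, 0x0000_0001   # long carry chain: worst case
    print(hex(exact_add(a, b)), hex(approx_add(a, b)))  # results differ
```

Worst-case carry chains like the one above are exactly where windowed speculation errs, which is why such adders trade an error rate for shorter critical paths.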
Machine Learning Techniques for Intrusion Detection
An Intrusion Detection System (IDS) is software that monitors a single computer or a network of computers for malicious activities (attacks) aimed at stealing or censoring information or corrupting network protocols. Most techniques used in today's IDSs are not able to deal with the dynamic and complex nature of cyber attacks on computer networks. Hence, efficient adaptive methods, such as various machine learning techniques, can result in higher detection rates, lower false alarm rates, and reasonable computation and communication costs. In this paper, we study several such schemes and compare their performance. We divide the schemes into methods based on classical artificial intelligence (AI) and methods based on computational intelligence (CI). We explain how various characteristics of CI techniques can be used to build efficient IDSs.
Development of 6kV SiC hybrid power switch based on 1200V SiC JFET and MOSFET
Series-connected power switches provide a viable solution for implementing high-voltage, high-frequency converters. Using commercially available 1200 V Silicon Carbide (SiC) Junction Field-Effect Transistors (JFETs) and Metal-Oxide-Semiconductor Field-Effect Transistors (MOSFETs), a 6 kV SiC hybrid power switch concept and its application are demonstrated. To solve the parameter deviation issue in the series device structure, an optimized voltage control method is introduced, which guarantees equal voltage sharing in both static and dynamic states. Without Zener diode arrays, this strategy can significantly reduce the turn-off switching loss. Moreover, this hybrid MOSFET-JFET concept also suppresses the silicon MOSFET parasitic capacitance effect. In addition, the positive gate drive voltage greatly accelerates turn-on speed and decreases the switching loss. Compared with conventional super-JFETs, the proposed scheme is suitable for series-connected devices and can achieve better performance. The effectiveness of this method is validated by simulations and experiments, and promising results are obtained.
Concentrated Differential Privacy
The Fundamental Law of Information Recovery states, informally, that "overly accurate" estimates of "too many" statistics completely destroy privacy ([DN03] et sequelae). Differential privacy is a mathematically rigorous definition of privacy tailored to analysis of large datasets and equipped with a formal measure of privacy loss [DMNS06, Dwo06]. Moreover, differentially private algorithms take as input a parameter, typically called ε, that caps the permitted privacy loss in any execution of the algorithm and offers a concrete privacy/utility tradeoff. One of the strengths of differential privacy is the ability to reason about cumulative privacy loss over multiple analyses, given the values of ε used in each individual analysis. By appropriate choice of ε it is possible to stay within the bounds of the Fundamental Law while releasing any given number of estimated statistics; however, before this work the bounds were not tight. Roughly speaking, differential privacy ensures that the outcome of any analysis on a database x is distributed very similarly to the outcome on any neighboring database y that differs from x in just one row (Definition 2.3). That is, differentially private algorithms are randomized, and in particular the max divergence between these two distributions (a sort of maximum log odds ratio for any event; see Definition 2.2 below) is bounded by the privacy parameter ε. This absolute guarantee on the maximum privacy loss is now sometimes referred to as "pure" differential privacy. A popular relaxation, (ε, δ)-differential privacy (Definition 2.4) [DKM+06], guarantees that with probability at least 1−δ the privacy loss does not exceed ε. Typically δ is taken to be "cryptographically" small, that is, smaller than the inverse of any polynomial in the size of the dataset, and pure differential privacy is simply the special case in which δ = 0. The relaxation frequently permits asymptotically better accuracy than pure differential privacy for the same value of ε, even when δ is very small. What happens in the case of multiple analyses? While the composition of k (ε, 0)-differentially private algorithms is at worst (kε, 0)-differentially private, it is also simultaneously (√(2k ln(1/δ))·ε + kε(e^ε − 1), δ)-differentially private for every δ > 0 [DRV10].
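The two composition bounds quoted at the end of the abstract can be evaluated numerically. The following is a worked example, not code from the paper; it compares basic composition with the [DRV10] advanced composition bound for illustrative values of ε, k, and δ.

```python
import math

# Basic composition: k runs of an (eps, 0)-DP algorithm are (k*eps, 0)-DP.
# Advanced composition [DRV10]: the same k runs are simultaneously
# (sqrt(2k ln(1/delta))*eps + k*eps*(e^eps - 1), delta)-DP for any delta > 0.

def basic_composition(eps: float, k: int) -> float:
    return k * eps

def advanced_composition(eps: float, k: int, delta: float) -> float:
    return (math.sqrt(2 * k * math.log(1 / delta)) * eps
            + k * eps * (math.exp(eps) - 1))

eps, k, delta = 0.1, 100, 1e-6
print(basic_composition(eps, k))            # 10.0
print(advanced_composition(eps, k, delta))  # ~6.31; grows like sqrt(k), not k
```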
Probabilistic Analysis of Plug-In Electric Vehicles Impact on Electrical Grid Through Homes and Parking Lots
Plug-in electric vehicles will possibly emerge widely in city areas in the future. Fleets of such vehicles in large numbers could be regarded as considerable stochastic loads from the viewpoint of the electrical grid. Moreover, they are not fixed in single positions, which makes it difficult to define their impact on the grid. Municipal parking lots could be considered important aggregators that let these vehicles interact with the utility grid at known positions. A bidirectional power interface in a parking lot could link electric vehicles with the utility grid or any storage and dispersed generation. Such vehicles, depending on their needs, could transact power with parking lots. Considering parking lots equipped with power interfaces, in more general terms, parking-to-vehicle and vehicle-to-parking concepts are proposed here instead of the conventional grid-to-vehicle and vehicle-to-grid concepts. Based on statistical data and adopting general regulations on vehicle (dis)charging, a novel stochastic methodology is presented to estimate the total daily impact of vehicles aggregated in parking lots on the grid. Different scenarios of plug-in vehicle penetration are suggested in this paper and, finally, the scenarios are simulated on standard grids that include several parking lots. The results show acceptable penetration level margins in terms of bus voltages and grid power loss.
Patterns and correlates of physical activity: a cross-sectional study in urban Chinese women
BACKGROUND Inactivity is a modifiable risk factor for many diseases. Rapid economic development in China has been associated with changes in lifestyle, including physical activity. The purpose of this study was to investigate the patterns and correlates of physical activity in middle-aged and elderly women from urban Shanghai. METHODS Study population consisted of 74,942 Chinese women, 40-70 years of age, participating in the baseline survey of the Shanghai Women's Health Study (1997-2000), an ongoing population-based cohort study. A validated, interviewer-administered physical activity questionnaire was used to collect information about several physical activity domains (exercise/sports, walking and cycling for transportation, housework). Correlations between physical activity domains were evaluated by Spearman rank-correlation coefficients. Associations between physical activity and socio-demographic and lifestyle factors were evaluated by odds ratios derived from logistic regression. RESULTS While more than a third of study participants engaged in regular exercise, this form of activity contributed only about 10% to daily non-occupational energy expenditure. About two-thirds of women met current recommendations for lifestyle activity. Age was positively associated with participation in exercise/sports and housework. Dietary energy intake was positively associated with all physical activity domains. High socioeconomic status, unemployment (including retirement), history of chronic disease, small household, non-smoking status, alcohol and tea consumption, and ginseng intake were all positively associated with exercise participation. High socioeconomic status and small household were inversely associated with non-exercise activities. CONCLUSION This study demonstrates that physical activity domains other than sports and exercise are important contributors to total energy expenditure in women. Correlates of physical activity are domain-specific. These findings provide important information for research on the health benefits of physical activity and have public health implications for designing interventions to promote participation in physical activity.
Semantic Similarity in a Taxonomy: An Information-Based Measure and its Application to Problems of Ambiguity in Natural Language
This article presents a measure of semantic similarity in an is-a taxonomy based on the notion of shared information content. Experimental evaluation against a benchmark set of human similarity judgments demonstrates that the measure performs better than the traditional edge-counting approach. The article presents algorithms that take advantage of taxonomic similarity in resolving syntactic and semantic ambiguity, along with experimental results demonstrating their effectiveness.
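The measure described is the information-content similarity sim(c1, c2) = max over common subsumers c of −log p(c). The toy sketch below illustrates it on a hand-built is-a taxonomy; the taxonomy, counts, and probabilities are invented for illustration and are not the article's benchmark data.

```python
import math

# Information-content similarity in a tiny is-a taxonomy. The counts are
# assumed to already be cumulative over subsumed concepts, so p(c) is just
# count(c) / total; everything here is a made-up example.

parents = {
    "dime": "coin", "nickel": "coin", "coin": "cash",
    "cash": "medium_of_exchange", "credit": "medium_of_exchange",
    "medium_of_exchange": "entity",
}
counts = {"dime": 5, "nickel": 5, "coin": 20, "cash": 50,
          "credit": 30, "medium_of_exchange": 100, "entity": 200}
total = counts["entity"]

def ancestors(c):
    out = {c}
    while c in parents:
        c = parents[c]
        out.add(c)
    return out

def ic(c):
    return -math.log(counts[c] / total)   # information content of concept c

def sim(c1, c2):
    common = ancestors(c1) & ancestors(c2)
    return max(ic(c) for c in common)     # most informative common subsumer

print(sim("dime", "nickel"))   # IC of "coin": -log(20/200) ~ 2.30
print(sim("dime", "credit"))   # IC of "medium_of_exchange" ~ 0.69
```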
Understanding Search-Engine Optimization
Because users rarely click on links beyond the first search results page, boosting search-engine ranking has become essential to business success. With a deeper knowledge of search-engine optimization best practices, organizations can avoid unethical practices and effectively monitor strategies approved by popular search engines.
Predicting Symptom Trajectories of Schizophrenia using Mobile Sensing
Continuously monitoring schizophrenia patients’ psychiatric symptoms is crucial for in-time intervention and treatment adjustment. The Brief Psychiatric Rating Scale (BPRS) is a survey administered by clinicians to evaluate symptom severity in schizophrenia. The CrossCheck symptom prediction system is capable of tracking schizophrenia symptoms based on BPRS using passive sensing from mobile phones. We present results from an ongoing randomized control trial, where passive sensing data, self-reports, and clinician administered 7-item BPRS surveys are collected from 36 outpatients with schizophrenia recently discharged from hospital over a period ranging from 2-12 months. We show that our system can predict a symptom scale score based on a 7-item BPRS within ±1.45 error on average using automatically tracked behavioral features from phones (e.g., mobility, conversation, activity, smartphone usage, the ambient acoustic environment) and user supplied self-reports. Importantly, we show our system is also capable of predicting an individual BPRS score within ±1.59 error purely based on passive sensing from phones without any self-reported information from outpatients. Finally, we discuss how well our predictive system reflects symptoms experienced by patients by reviewing a number of case studies.
Neural Decomposition of Time-Series Data for Effective Generalization
We present a neural network technique for the analysis and extrapolation of time-series data called neural decomposition (ND). Units with a sinusoidal activation function are used to perform a Fourier-like decomposition of training samples into a sum of sinusoids, augmented by units with nonperiodic activation functions to capture linear trends and other nonperiodic components. We show how careful weight initialization can be combined with regularization to form a simple model that generalizes well. Our method generalizes effectively on the Mackey–Glass series, a data set of unemployment rates as reported by the U.S. Department of Labor Statistics, a time series of monthly international airline passengers, the monthly ozone concentration in downtown Los Angeles, and an unevenly sampled time series of oxygen isotope measurements from a cave in north India. We find that ND outperforms popular time-series forecasting techniques, including long short-term memory networks, echo-state networks, autoregressive integrated moving average (ARIMA), seasonal ARIMA, support vector regression with a radial basis function, and Gashler and Ashmore's model.
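As a rough illustration of the decomposition idea (not the authors' model), the sketch below fits a sum of sinusoids on a fixed Fourier grid plus a linear trend by least squares, then extrapolates. The real ND model also trains frequencies and phases by gradient descent with careful initialization and regularization, which is omitted here; the series is synthetic.

```python
import numpy as np

# Decompose a series into sinusoids + linear trend, then extrapolate.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * 3 * t) + 0.5 * t + 0.05 * rng.standard_normal(t.size)

K = 8
freqs = 2 * np.pi * np.arange(1, K + 1)   # fixed Fourier-grid frequencies

def design(tt):
    # sin/cos pairs capture the periodic part; (t, 1) captures the trend.
    return np.column_stack([np.sin(np.outer(tt, freqs)),
                            np.cos(np.outer(tt, freqs)),
                            tt, np.ones_like(tt)])

coef, *_ = np.linalg.lstsq(design(t), y, rcond=None)
print("train MSE:", np.mean((design(t) @ coef - y) ** 2))

# Extrapolate beyond the training interval using the fitted components.
t_new = np.linspace(1.0, 1.5, 100)
y_new = design(t_new) @ coef
```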
Health Facility Utilisation Changes during the Introduction of Community Case Management of Malaria in South Western Uganda: An Interrupted Time Series Approach
BACKGROUND Malaria endemic countries have scaled-up community health worker (CHW) interventions to diagnose and treat malaria in communities with limited access to public health systems. Evaluations of these programmes have centred on CHWs' compliance with guidelines, but the broader changes at public health centres, including utilisation and the diagnoses made, have received limited attention. METHODS This analysis was conducted during a CHW-intervention for malaria in Rukungiri District, Western Uganda. Outpatient department (OPD) visit data were collected for children under 5 attending three health centres for one year before the CHW-intervention started (pre-intervention period) and for 20 months during the intervention (intervention-period). An interrupted time series analysis with segmented regression models was used to compare the trends in malaria, non-malaria and overall OPD visits between the pre-intervention and intervention-period. RESULTS Following the introduction of the CHW-intervention, the frequency of diagnoses of diarrhoeal diseases, pneumonia and helminths appeared to increase, whilst the frequency of malaria diagnoses declined at health centres. In May 2010, when the intervention began, overall health centre utilisation decreased by 63% compared to the pre-intervention period, and the health centres saw 32 fewer overall visits per month compared to the pre-intervention period (p<0.001). Malaria visits also declined shortly after the intervention began, with 27 fewer visits per month during the intervention-period compared with the pre-intervention period (p<0.05). The declines in overall and malaria visits were sustained for the entire intervention-period. In contrast, there were no observable changes in the trends of non-malarial visits between the pre-intervention and intervention-period. CONCLUSIONS This analysis suggests that introducing a CHW-intervention can reduce the number of child malaria visits and change the profile of cases presenting at health centres. The reduction in health workers' workload may allow them to spend more time with patients or undertake additional curative or preventative roles.
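The segmented regression model typically used in such interrupted time series analyses can be written as visits_t = β0 + β1·time + β2·intervention + β3·time_since_intervention + ε_t. The sketch below fits this form to synthetic monthly counts; the data and effect sizes are invented for illustration, and the paper's exact model specification is not reproduced.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic interrupted time series: 12 pre-intervention months, 20 after.
months = np.arange(32)
intervention = (months >= 12).astype(float)          # level-change indicator
time_since = np.where(months >= 12, months - 12, 0.0)  # slope-change term

rng = np.random.default_rng(1)
visits = (120 + 0.5 * months - 35 * intervention - 1.0 * time_since
          + rng.normal(0, 5, months.size))

X = sm.add_constant(np.column_stack([months, intervention, time_since]))
fit = sm.OLS(visits, X).fit()
print(fit.params)    # [baseline level, pre-trend, level change, trend change]
print(fit.pvalues)
```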
Consumer-to-Consumer Electronic Commerce: A Distinct Research Stream
Consumer-to-consumer (C2C) e-commerce is a growing area of e-commerce. However, according to a meta-analysis of critical themes of e-commerce, C2C e-commerce was only represented in the area of online auctions (Wareham, Zheng, & Straub, 2005). C2C e-commerce can encompass much more than just auctions. The question then becomes: "Is C2C e-commerce a different research area that deserves its own stream of research?" This study adapts constructs from a business-to-consumer (B2C) e-commerce study of satisfaction (Devaraj, Fan, & Kohli, 2002) to determine what, if any, differences exist in the C2C e-commerce arena. The constructs include elements of the technology acceptance model (TAM), which includes perceived ease of use and usefulness; transaction cost analysis (TCA), which includes uncertainty, asset specificity, and time; and service quality (SERVQUAL), which includes reliability, responsiveness, assurance, and empathy. Participants in the study answered questions regarding these various constructs in relation to their experiences with C2C e-commerce. The findings indicate that TAM, TCA, and SERVQUAL all impact satisfaction in C2C e-commerce. Reliability and responsiveness (areas of service quality) were found to influence C2C e-commerce satisfaction, whereas they were not found to be an influence in the B2C study. These findings warrant further research in the C2C e-commerce arena. The study provides implications for future research and practice.
The dynamical point of view of low-discrepancy sequences
In this overview we show by examples how to associate certain sequences in the higher-dimensional unit cube to suitable dynamical systems. We present methods and notions from ergodic theory that serve as tools for the study of low-discrepancy sequences and discuss an important technique, cutting-and-stacking of intervals.
Man-in-the-middle attacks on Secure Simple Pairing in Bluetooth standard V5.0 and its countermeasure
Bluetooth devices are widely employed in home network systems. It is important to secure household members' Bluetooth devices, because they always store and transmit personal sensitive information. In the Bluetooth standard, Secure Simple Pairing (SSP) is an essential security mechanism for Bluetooth devices. We examine the security of SSP in the recent Bluetooth standard V5.0. The passkey entry association model in SSP is analyzed under man-in-the-middle (MITM) attacks. Our contribution is twofold. (1) We demonstrate that the passkey entry association model is vulnerable to the MITM attack once the host reuses the passkey. (2) An improved passkey entry protocol is therefore designed to fix the passkey reuse defect in the passkey entry association model. The improved passkey entry protocol can be easily adapted to the Bluetooth standard, because it only uses basic cryptographic components that already exist in the Bluetooth standard. Our research results are beneficial to the security enhancement of Bluetooth devices in home network systems.
KNN Model-Based Approach in Classification
The k-Nearest-Neighbours (kNN) method is simple but effective for classification. Its major drawbacks are (1) its low efficiency: being a lazy learning method prohibits its use in many applications, such as dynamic web mining for a large repository; and (2) its dependency on the selection of a "good value" for k. In this paper, we propose a novel kNN-type method for classification that aims to overcome these shortcomings. Our method constructs a kNN model for the data, which replaces the data to serve as the basis of classification. The value of k is automatically determined, varies for different data, and is optimal in terms of classification accuracy. The construction of the model reduces the dependency on k and makes classification faster. Experiments were carried out on public datasets collected from the UCI machine learning repository to test our method. The experimental results show that the kNN model-based method compares well with C5.0 and kNN in terms of classification accuracy, but is more efficient than standard kNN.
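The abstract summarizes the model construction only briefly, so the sketch below is a simplified reading of it: greedily replace the training data with representatives, each covering the largest single-class neighbourhood around some point. Tie handling, error tolerance, and the paper's automatic choice of k are not reproduced; the toy data are invented.

```python
import numpy as np

# Simplified kNN "model" construction: keep (centre, radius, label)
# representatives instead of all training points.

def build_model(X, y):
    n = len(X)
    covered = np.zeros(n, dtype=bool)
    model = []
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)  # pairwise distances
    while not covered.all():
        best = None
        for i in np.where(~covered)[0]:
            order = np.argsort(D[i])
            m = 0                      # largest same-class neighbour prefix
            for j in order:
                if y[j] != y[i]:
                    break
                m += 1
            cover = order[:m]
            if best is None or m > best[3]:
                best = (X[i], D[i, cover].max(), y[i], m, cover)
        model.append(best[:3])         # (centre, radius, label)
        covered[best[4]] = True
    return model

def predict(model, x):
    # Pick the representative whose ball contains x (or the nearest ball).
    best = min(model, key=lambda r: max(0.0, np.linalg.norm(x - r[0]) - r[1]))
    return best[2]

X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], float)
y = np.array([0, 0, 0, 1, 1, 1])
m = build_model(X, y)
print(len(m), predict(m, np.array([0.2, 0.3])), predict(m, np.array([5.5, 5.2])))
```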
Futures and spot prices – an analysis of the Scandinavian electricity market
In this paper we first give a presentation of the history and organisation of the electricity market in Scandinavia, which has been gradually restructured over the last decade. A futures market has been in operation there since September 1995. We analyse the historical prices in the spot and futures markets, using general theory for pricing of commodities futures contracts. We find that the futures prices on average exceeded the actual spot price at delivery. Hence, we conclude that there is a negative risk premium in the electricity futures market. This result contradicts the findings in most other commodities markets, where the risk premium from holding a futures contract tend to be zero or positive. Physical factors like unexpected precipitation can contribute to explain parts of the observations. However, we also identify the difference in flexibility between the supply and demand sides of the electricity market, leaving the demand side with higher incentive to hedge their positions in the futures market, as a possible explanation for the negative risk premium. The limited data available might not be sufficient to draw fully conclusive results. However, the analysis described in the paper can be repeated with higher significance in a few years from now.
Exponential Stability of Linear Delay Impulsive Differential Equations
For an impulsive delay differential equation with bounded delay and bounded coefficients, the following result is established: if, for every bounded right-hand side, each solution is bounded on [0,∞) together with its derivative, then the equation is exponentially stable. A coefficient stability theorem is also presented.
BRAIN TUMOR MRI IMAGE SEGMENTATION AND DETECTION IN IMAGE PROCESSING
Image processing is an active research area, within which medical image processing is a highly challenging field. Medical imaging techniques are used to image the inner portions of the human body for medical diagnosis. A brain tumor is a serious, life-altering disease condition. Image segmentation plays a significant role in image processing, as it helps extract suspicious regions from medical images. In this paper we propose segmentation of brain MRI images using the K-means clustering algorithm, followed by morphological filtering, which removes the mis-clustered regions that inevitably form after segmentation, for detection of the tumor location.
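A minimal sketch of the described pipeline, with a synthetic image standing in for a real brain MRI slice: cluster intensities with K-means, take the brightest cluster as the tumour candidate, and apply morphological opening to remove small mis-clustered regions. The cluster count and structuring element are illustrative choices, not the paper's.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy import ndimage

# Synthetic "MRI slice": a bright rectangular region over a noisy background.
img = np.zeros((128, 128))
img[40:70, 50:85] = 0.9
img += 0.1 * np.random.default_rng(0).random(img.shape)

k = 3
labels = KMeans(n_clusters=k, n_init=10, random_state=0) \
    .fit_predict(img.reshape(-1, 1)).reshape(img.shape)

# Take the cluster with the highest mean intensity as the tumour candidate.
means = [img[labels == c].mean() for c in range(k)]
mask = labels == int(np.argmax(means))

# Morphological opening removes isolated mis-clustered pixels.
clean = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
print("candidate tumour pixels:", int(clean.sum()))
```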
Hyperglycemia and prostate cancer recurrence in men treated for localized prostate cancer
Background: Obesity is consistently linked with prostate cancer (PCa) recurrence and mortality, though the mechanism is unknown. Impaired glucose regulation, which is common among obese individuals, has been hypothesized as a potential mechanism for PCa tumor growth. In this study, we explore the relationship between serum glucose at the time of treatment and risk of PCa recurrence following initial therapy. Methods: The study group comprised 1734 men treated with radical prostatectomy (RP) or radiation therapy (RT) for localized PCa between 2001 and 2010. Serum glucose levels closest to the date of diagnosis were determined. PCa recurrence was determined based on PSA progression (nadir PSA+2 for RT; PSA⩾0.2 for RP) or secondary therapy. Multivariate Cox regression was performed to determine whether glucose level was associated with biochemical recurrence after adjusting for age, race, body mass index, comorbidity, diagnosis of diabetes, Gleason sum, PSA, treatment and treatment year. Results: Recurrence was identified in 16% of men over a mean follow-up period of 41 months (range 1–121 months). Those with elevated glucose (⩾100 mg/dl) had a 50% increased risk of recurrence (HR 1.5, 95% CI: 1.1–2.0) compared with those with a normal glucose level (<100 mg/dl). This effect was seen both in those undergoing RP (HR 1.9, 95% CI: 1.0–3.6) and in those treated with RT (HR 1.4, 95% CI: 1.0–2.0). Conclusions: Glucose levels at the time of PCa diagnosis are an independent predictor of PCa recurrence for men undergoing treatment for localized disease.
Polyglot: An Extensible Compiler Framework for Java
Polyglot is an extensible compiler framework that supports the easy creation of compilers for languages similar to Java, while avoiding code duplication. The Polyglot framework is useful for domain-specific languages, exploration of language design, and for simplified versions of Java for pedagogical use. We have used Polyglot to implement several major and minor modifications to Java; the cost of implementing language extensions scales well with the degree to which the language differs from Java. This paper focuses on the design choices in Polyglot that are important for making the framework usable and highly extensible. Polyglot source code is available.
Person re-identification based on hierarchical bipartite graph matching
This work proposes a novel person re-identification method based on hierarchical bipartite graph matching. Because human eyes observe person appearance roughly at first and then go into the details gradually, our method abstracts a person image from coarse to fine granularity, finally forming a three-layer tree structure. Three bipartite graph matching methods are then proposed for matching each layer between the trees. At the bottom layer, non-complete bipartite graph matching is proposed to collect matching pairs among small local regions. At the middle layer, semi-complete bipartite graph matching is used to deal with the problem of spatial misalignment between two person bodies. Complete bipartite graph matching is presented to refine the ranking result at the top layer. The effectiveness of our method is validated on the CAVIAR4REID and VIPeR datasets, and competitive results are achieved on both datasets.
Generalization Performance of Some Learning Problems in Hilbert Functional Spaces
We investigate the generalization performance of some learning problems in Hilbert functional spaces. We introduce a notion of convergence of the estimated functional predictor to the best underlying predictor and obtain an estimate of the rate of convergence. This estimate allows us to derive generalization bounds for some learning formulations.
Comparison of antianginal actions of verapamil and propranolol.
Verapamil is a slow calcium channel blocker that has been in clinical use for almost 20 years as an antiarrhythmic agent. It also has an antianginal action,1 but inconsistent results have been obtained in clinical trials owing to insufficient dosage.2 We found that the drug produced a significant decrease in the number of anginal attacks and in the consumption of glyceryl trinitrate, with improvement in exercise tolerance and ST changes, when used in a dose of 360 mg daily.3 We have now compared the potency and mode of action of verapamil with those of propranolol, a standard beta-adrenergic-receptor-blocking drug, using fixed high doses of each.
Intellectual Capital and Its Measurement
Intellectual capital is becoming the preeminent resource for creating economic wealth. Tangible assets such as property, plant, and equipment continue to be important factors in the production of both goods and services. However, their relative importance has decreased through time as the importance of intangible, knowledge-based assets has increased. This shift in importance has raised a number of accounting questions critical for managing assets such as brand names, trade secrets, production processes, distribution channels, and work-related competencies. This paper develops a working definition of intellectual capital and a framework for identifying and classifying the various components of intellectual capital. In addition, methods of measuring intellectual capital at both the individual-component and organization levels are presented. This provides an exploratory foundation for accounting systems and processes useful for meaningful management of intellectual assets.
The kitchen as a graphical user interface
Everyday objects can become computer interfaces by the overlay of digital information. This paper describes scenarios and implementations in which imagery is digitally painted on the objects and spaces of a kitchen. Five augmented physical interfaces were designed to orient and inform people in the tasks of cleaning, cooking, and accessing information: Information Table, Information Annotation of Kitchen, HeatSink, Spatial Definition, and Social Floor. Together, these interfaces augment the entire room into a single graphical user interface.
Parental discipline behaviours and beliefs about their child: associations with child internalizing and mediation relationships.
INTRODUCTION Internalizing disorders of childhood are a common and disabling problem, with sufferers at increased risk of subsequent psychiatric morbidity. Several studies have found associations between parenting styles and children's internalizing, although few have considered the role of parental discipline. Parental discipline style may exert an effect on children's internalizing symptoms. Anxiety and depression are reliably found to run in families, and parental anxiety has been shown to affect parenting behaviour. This study set out to examine the links between parental anxiety, parental discipline style and child internalizing symptoms. METHOD Eighty-eight parents of children aged 4-10 years were recruited through primary schools. All parents completed questionnaires including measures relating to: adult anxiety (State-Trait Anxiety Inventory - Trait version, Penn State Worry Questionnaire), parental depression (Beck Depression Inventory - Fastscreen), parental discipline (The Parenting Scale), parenting-related attributions (Parenting Attitudes, Beliefs and Cognitions Scale) and child psychological morbidity (Child Behaviour Checklist 4-18 version). RESULTS Both parental anxiety and child internalizing symptoms were significantly correlated with ineffective discipline and with negative beliefs about parenting. Particularly strong correlations were found between harsh discipline and both parental anxiety and child internalizing symptoms. Parents of anxious/withdrawn children were more likely to hold negative beliefs about their child. The link between parental anxiety and child internalizing symptoms was mediated by harsh discipline. The link between parental anxiety and harsh discipline was mediated by parental beliefs about the child. CONCLUSION Discipline style may be an important factor in the relationship between parental anxiety and child internalizing symptoms.
Fuzzy logic based traffic light controller
Traffic congestion is a major concern for many cities throughout the world. Developing a sophisticated traffic monitoring and control system would provide an effective solution to this problem. In a conventional traffic light controller, the traffic lights change at constant cycle times, and hence do not provide an optimal solution. Many traffic light controllers in current practice are based on the 'time-of-the-day' scheme, which uses a limited number of predetermined traffic light patterns and implements them depending upon the time of day. These automated systems do not provide optimal control for fluctuating traffic volumes. A traffic light controller based on fuzzy logic can be used for optimum control of fluctuating traffic volumes, such as oversaturated or unusual load conditions. The objective is to improve vehicular throughput and minimize delays. The rules of the fuzzy logic controller are formulated by following the same protocols that a human operator would use to control the time intervals of the traffic light. The length of the current green phase is extended or terminated depending upon the 'arrival', i.e., the number of vehicles approaching the green phase, and the 'queue', which corresponds to the number of queuing vehicles in the red phases. A prototype system for controlling traffic at an intersection is designed using VB6 and Matlab. The traffic intersection is simulated in VB6, and data regarding the traffic parameters are collected in the VB6 environment. The decision on the duration of the extension is taken using Matlab, based on the Arrival and Queue of vehicles, which are imported into Matlab from the VB6 environment. The time delay experienced by vehicles under the fixed-time controller and the fuzzy traffic controller is then compared to observe the effectiveness of the fuzzy traffic controller.
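The fuzzy decision step described above can be sketched compactly: given Arrival and Queue, fire a small rule table and defuzzify to a green-phase extension. The membership functions and rule consequents below are illustrative guesses, not the paper's calibrated ones.

```python
# Toy fuzzy inference for the green-phase extension decision.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_extension(arrival: float, queue: float) -> float:
    arr = {"few": tri(arrival, -1, 0, 6), "many": tri(arrival, 3, 10, 17)}
    qu = {"short": tri(queue, -1, 0, 8), "long": tri(queue, 4, 12, 20)}
    # Rule table: (arrival term, queue term) -> green extension in seconds.
    rules = {("few", "short"): 5, ("few", "long"): 0,
             ("many", "short"): 15, ("many", "long"): 8}
    num = den = 0.0
    for (a_t, q_t), ext in rules.items():
        w = min(arr[a_t], qu[q_t])       # rule firing strength
        num += w * ext
        den += w
    return num / den if den else 0.0     # weighted-average defuzzification

print(fuzzy_extension(arrival=8, queue=2))  # many arrivals, short queue -> 15.0
```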
EFFICIENT CONSTRAINED PATH PLANNING VIA SEARCH IN STATE LATTICES
We propose a novel approach to constrained path planning that is based on a special search space which efficiently encodes feasible paths. The paths are encoded implicitly as connections between states, but only feasible and local connections are included. Once this search space is developed, we systematically generate a near-minimal set of spatially distinct path primitives. This set expresses the local connectivity of constrained motions and also eliminates redundancies. The set of primitives is used to define a heuristic search, thereby creating a very efficient path planner at the chosen resolution. We also discuss a wide variety of space and terrestrial robotics applications where this motion planner can be especially useful.
New Multibase Non-Adjacent Form Scalar Multiplication and its Application to Elliptic Curve Cryptosystems (extended version)
In this paper we present a new method for scalar multiplication that uses a generic multibase representation to reduce the number of required operations. Further, a multibase NAF-like algorithm that efficiently converts numbers to such a representation, without impacting memory or speed performance, is developed and shown to be sublinear in terms of the number of nonzero terms. Additional representation reductions are discussed with the introduction of window-based variants that use an extended set of precomputations. To realize the proposed multibase scalar multiplication with or without precomputations in the setting of Elliptic Curve Cryptosystems (ECC) over prime fields, we also present a methodology to derive fast composite operations, such as tripling or quintupling of a point, that require less memory than previous point formulae. Point operations are then protected against simple side-channel attacks using a highly efficient atomic structure. Extensive testing is carried out to show that our multibase scalar multiplication is the fastest method to date in the setting of ECC and exhibits a small footprint, which makes it ideal for implementation on constrained devices.
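For orientation, the sketch below shows the standard single-base non-adjacent form (NAF) recoding that the paper's multibase representation generalizes; multibase recoding over bases such as {2, 3, 5} and the composite point formulae are not reproduced here. The group operations are passed in, so the toy check uses plain integer addition in place of elliptic curve point arithmetic.

```python
# Standard NAF recoding: no two adjacent nonzero digits, which cuts the
# expected number of additions in double-and-add from ~n/2 to ~n/3.

def naf(k: int) -> list[int]:
    digits = []
    while k > 0:
        if k & 1:
            d = 2 - (k % 4)      # +/-1 chosen so the next bit becomes 0
            k -= d
        else:
            d = 0
        digits.append(d)
        k //= 2
    return digits                 # least-significant digit first

def scalar_mul(k, P, add, neg, identity):
    """Generic double-and-add over NAF digits (group ops passed in)."""
    Q = identity
    for d in reversed(naf(k)):
        Q = add(Q, Q)             # double
        if d == 1:
            Q = add(Q, P)
        elif d == -1:
            Q = add(Q, neg(P))
    return Q

# Toy check in the additive group of integers: scalar_mul is just k*P.
print(naf(255))                                                 # [-1,0,...,0,1]
print(scalar_mul(255, 7, lambda a, b: a + b, lambda a: -a, 0))  # 1785
```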
The state of CRM adoption by the financial services in the UK: an empirical investigation
In recent years, organisations have begun to realise the importance of knowing their customers better. Customer relationship management (CRM) is an approach to managing customer-related knowledge of increasing strategic significance. The successful adoption of IT-enabled CRM redefines the traditional models of interaction between businesses and their customers, both nationally and globally. It is regarded as a source of competitive advantage because it enables organisations to explore and use knowledge of their customers and to foster profitable and long-lasting one-to-one relationships. This paper discusses the results of an exploratory survey conducted in the UK financial services sector; it discusses CRM practice and expectations, the motives for implementing it, and evaluates post-implementation experiences. It also investigates the functionality of CRM tools in the strategic, process, communication, and business-to-customer (B2C) organisational context, and reports the extent of their use. The results show that despite the anticipated potential, the benefits from such tools are rather small.
The use of allograft (and avoidance of autograft) in anterior lumbar interbody fusion: a critical analysis
The aim of this report is to analyze the validity of allograft in anterior lumbar interbody fusion. Forty-three patients underwent anterior lumbar interbody fusion using allograft between 1995 and 1998. All suffered from crippling chronic low back pain with or without sciatica. Discogenic disease was verified in 40 cases by discography. All patients were investigated preoperatively with magnetic resonance imaging (MRI). The surgical technique is described. Follow-up radiographs were performed postoperatively, then at 1.5, 3, 6 and 12 months, as required. Radiological fusion was confirmed in all single-level fusions (100%, n=24). In two-level fusions the rate was 93% (n=28/30). However, radiological union could only be confirmed in 11 of the 12 levels in the three-level fusions. Allograft offers a better alternative than autograft for anterior lumbar interbody fusion: donor site morbidity is avoided, hospital stay is shorter and fusion rates are satisfactory.
A Novel Programmable Parallel CRC Circuit
A new hardware scheme for computing the transition and control matrix of a parallel cyclic redundancy checksum is proposed. This opens possibilities for parallel high-speed cyclic redundancy checksum circuits that reconfigure very rapidly to new polynomials. The area requirements are lower than those for a realization storing a precomputed matrix. An additional simplification arises as only the polynomial needs to be supplied. The derived equations allow the width of the data to be processed in parallel to be selected independently of the degree of the polynomial. The new design has been simulated and outperforms a recently proposed architecture significantly in speed, area, and energy efficiency.
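The underlying algebra can be illustrated in software: over GF(2) the serial CRC update is linear, so w input bits can be absorbed at once via state' = F^w·state ⊕ G·data, and both matrices can be derived by pushing unit vectors through the serial step. This is only a software model of the matrices the proposed hardware computes on the fly; the CRC-8 polynomial and widths are illustrative.

```python
import numpy as np

# CRC-8, x^8 + x^2 + x + 1. TAPS are the low 8 coefficients, MSB first.
TAPS = [0, 0, 0, 0, 0, 1, 1, 1]
n, w = 8, 8   # register width, bits processed per parallel step

def serial_step(state, bit):
    """One MSB-first serial CRC step (linear over GF(2))."""
    fb = state[0] ^ bit
    nxt = state[1:] + [0]
    if fb:
        nxt = [s ^ p for s, p in zip(nxt, TAPS)]
    return nxt

def steps(state, bits):
    for b in bits:
        state = serial_step(state, b)
    return state

# Column j of F^w: run w steps from unit state e_j with zero input.
Fw = np.array([steps([int(i == j) for i in range(n)], [0] * w)
               for j in range(n)]).T
# Column j of G: run from the zero state with a one in input position j.
G = np.array([steps([0] * n, [int(i == j) for i in range(w)])
              for j in range(w)]).T

# Check: the parallel update matches w serial steps on random data.
rng = np.random.default_rng(0)
s, d = rng.integers(0, 2, n), rng.integers(0, 2, w)
par = (Fw @ s + G @ d) % 2
print(np.array_equal(par, np.array(steps(list(s), list(d)))))  # True
```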
Towards Generating Real-life Datasets for Network Intrusion Detection
With exponential growth in the number of computer applications and the sizes of networks, the potential damage that can be caused by attacks launched over the Internet keeps increasing dramatically. A number of network intrusion detection methods have been developed, with respective strengths and weaknesses. The majority of network intrusion detection research and development is still based on simulated datasets due to the non-availability of real datasets. A simulated dataset cannot represent a real network intrusion scenario. It is important to generate real and timely datasets to ensure accurate and consistent evaluation of detection methods. In this paper, we propose a systematic approach to generate unbiased full-feature real-life network intrusion datasets to compensate for the crucial shortcomings of existing datasets. We establish the importance of an intrusion dataset in the development and validation process of detection mechanisms, identify a set of requirements for effective dataset generation, and discuss several attack scenarios and their incorporation in generating datasets. We also establish the effectiveness of the generated dataset in the context of several existing datasets.
An Efficient Representation for Filtrations of Simplicial Complexes
A filtration over a simplicial complex K is an ordering of the simplices of K such that all prefixes in the ordering are subcomplexes of K. Filtrations are at the core of Persistent Homology, a major tool in Topological Data Analysis. To represent the filtration of a simplicial complex, the entire filtration can be appended to any data structure that explicitly stores all the simplices of the complex such as the Hasse diagram or the recently introduced Simplex Tree [Algorithmica’14]. However, with the popularity of various computational methods that need to handle simplicial complexes, and with the rapidly increasing size of the complexes, the task of finding a compact data structure that can still support efficient queries is of great interest. This direction has been recently pursued for the case of maintaining simplicial complexes. For instance, Boissonnat et al. [Algorithmica’17] considered storing the simplices that are maximal with respect to inclusion and Attali et al. [IJCGA’12] considered storing the simplices that block the expansion of the complex. Nevertheless, so far there has been no data structure that compactly stores the filtration of a simplicial complex, while also allowing the efficient implementation of basic operations on the complex. In this article, we propose a new data structure called the Critical Simplex Diagram (CSD), which is a variant of the Simplex Array List [Algorithmica’17]. Our data structure allows one to store in a compact way the filtration of a simplicial complex and allows for the efficient implementation of a large range of basic operations. Moreover, we prove that our data structure is essentially optimal with respect to the requisite storage space. Finally, we show that the CSD representation admits fast construction algorithms for Flag complexes and relaxed Delaunay complexes.
Partial nephrectomy for T2 renal masses: contemporary trends and oncologic efficacy
The increasing popularity and improved technical feasibility of partial nephrectomy (PN) have encouraged urologists to treat larger renal masses with nephron-sparing surgery. We used a national database to characterize practice patterns for the surgical management of patients with T2 renal tumors and examined the effect of PN on cancer-specific survival in such patients. Between 2001 and 2011, 10,259 patients with a primary tumor size >7 cm confined to the kidney (T2) were treated surgically for kidney cancer. PN trends were examined using annual percentage change (APC). Multivariate survival models were developed to identify independent determinants of PN use and cancer-specific survival (CSS) following surgical treatment of kidney cancer. Overall, 543 patients (5.29%) were treated with PN versus 9716 (94.71%) who underwent radical nephrectomy (RN). The use of PN increased progressively between 2001 and 2011 (APC +11.1%, p < 0.05). Male gender, geographic location, year of diagnosis, and disease stage were independent determinants of increased PN use (all p values <0.05). Cancer-specific mortality was not inferior for patients treated with PN versus RN (HR 0.68, 95% CI 0.50–0.94). Male gender, younger age, white race, tumor size >10 cm, localized disease, and papillary histology were all associated with improved CSS with PN (all p values <0.05). PN is increasingly utilized to treat T2 renal masses. Our analysis demonstrates that PN for T2 renal masses has no detrimental effect on CSS.
Relationship Between Soil Properties and Patterns of Bacterial β-diversity Across Reclaimed and Natural Boreal Forest Soils
Productivity gradients in the boreal forest are largely determined by regional-scale changes in soil conditions, and bacterial communities are likely to respond to these changes. Few studies, however, have examined how variation in specific edaphic properties influences the composition of soil bacterial communities along environmental gradients. We quantified bacterial compositional diversity patterns in ten boreal forest sites of contrasting fertility. Bulk soil (organic and mineral horizons) was sampled from sites representing two extremes of a natural moisture-nutrient gradient and two distinct disturbance types, one barren and the other vegetation-rich. We constructed 16S rRNA gene clone libraries to characterize the bacterial communities under phylogenetic- and species-based frameworks. Using a nucleotide analog to label DNA-synthesizing bacteria, we also assessed the composition of active taxa in disturbed sites. Most sites were dominated by sequences related to the α-Proteobacteria, followed by acidobacterial and betaproteobacterial sequences. Non-parametric multivariate regression indicated that pH, which was lowest in the natural sites, explained 34% and 16% of the variability in community structure as determined by phylogenetic-based (UniFrac distances) and species-based (Jaccard similarities) metrics, respectively. Soil pH was also a significant predictor of richness (Chao1) and diversity (Shannon) measures. Within the natural edaphic gradient, soil moisture accounted for 32% of the variance in phylogenetic (but not species) community structure. In the boreal system we studied, bacterial β-diversity patterns appear to be largely related to “master” variables (e.g., pH, moisture) rather than to observable attributes (e.g., plant cover) leading to regional-scale fertility gradients.
Embedded Assistive Stick for Visually Impaired Persons
In this paper, a smart stick is designed and implemented to aid blind persons so that they can walk independently without much difficulty. Firstly, a pothole detection and avoidance system is implemented by setting an ultrasonic sensor at a 30-degree angle on a suitable blind stick to sense whether there is a hole or staircase about 30 cm in front of the user, to prevent the person from falling and suffering injury. Secondly, a moisture sensor is placed at the bottom of the stick to measure the degree of soil moisture on the ground ahead of the user and warn him as soon as that degree exceeds a level that could submerge his foot. Thirdly, a knee-above obstacle detection and avoidance system is implemented using an additional ultrasonic sensor at the top of the stick, which turns on an alarm and vibration when there is a person, obstacle or wall at a distance of 50 cm in front, to avoid an accident and thus help the person move independently. Fourthly, an ultrasonic sensor is placed low on the stick, at about 20 cm from ground level, to detect and avoid knee-below obstacles and stairs at a distance of 70 cm in front of the user. Fifthly, a wireless remote consisting of RF modules (transmitter and receiver) is implemented, so that if a person drops the stick or forgets it somewhere, he can press a switch on the remote containing the transmitter, and an alarm with vibration will turn on, letting the user locate the stick. The stick is implemented practically using a single-wheel walking cane, an Arduino microcontroller, three ultrasonic sensors and RF modules. Two buzzers and two vibration motors are also fitted on the stick, to activate when any difficulty occurs.
Temperature Drift Compensation for Hemispherical Resonator Gyro Based on Natural Frequency
Temperature changes have a strong effect on Hemispherical Resonator Gyro (HRG) output; therefore, it is of vital importance to observe their influence and then make the necessary compensations. In this paper, a temperature compensation model for the HRG based on the natural frequency of the resonator is established, and temperature drift compensations are then accomplished. To begin with, a mathematical model of the relationship between the temperature and the natural frequency of the HRG is set up. Then, the model is written as a Taylor expansion and the expansion coefficients are calibrated through temperature experiments. The experimental results show that frequency changes correspond to temperature changes and that each temperature corresponds to exactly one natural frequency, so the output of the HRG can be compensated through the natural frequency of the resonator instead of the temperature itself. As a result, compensations are made for the output drift of the HRG based on natural frequency through a stepwise linear regression method. The compensation results show that the temperature-frequency method is valid and suitable for gyroscope drift compensation, which would enable the HRG's application over a larger temperature range in the future.
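The compensation idea lends itself to a short numerical sketch: regress the output drift directly on the resonator's natural frequency with a truncated Taylor-style polynomial, then subtract the fitted drift. The data below are synthetic, and a plain second-order polynomial fit stands in for the paper's stepwise linear regression.

```python
import numpy as np

# Synthetic calibration data: natural frequency (Hz) varying with temperature,
# and a drift (deg/h) that is a smooth function of frequency plus noise.
rng = np.random.default_rng(0)
f = np.linspace(4999.0, 5001.0, 50)
drift = (0.8 + 0.3 * (f - 5000) + 0.05 * (f - 5000) ** 2
         + 0.01 * rng.standard_normal(f.size))

# Truncated Taylor-expansion-style fit of drift vs. frequency offset.
coeffs = np.polyfit(f - 5000, drift, deg=2)
compensated = drift - np.polyval(coeffs, f - 5000)
print("raw std:", drift.std(), "compensated std:", compensated.std())
```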
Knowledge discovery from road traffic accident data in Ethiopia: Data quality, ensembling and trend analysis for improving road safety
Descriptive analysis of the magnitude and situation of road safety in general, and road accidents in particular, is important, but understanding data quality, the factors associated with dangerous situations, and various interesting patterns in the data is of even greater importance. Under the umbrella of information architecture research for road safety in developing countries, the objective of this machine learning experimental research is to explore data quality issues, analyze trends and predict the role of road users in possible injury risks. The research employed TreeNet, Classification and Regression Trees (CART), Random Forest (RF) and a hybrid ensemble approach. To identify relevant patterns and illustrate the performance of the techniques for the road safety domain, road accident data collected from the Addis Ababa Traffic Office were subjected to several analyses. Empirical results illustrate that data quality is a major problem that needs an architectural guideline, and that the prototype models could classify accidents with promising accuracy. In addition, an ensemble technique proves to be better in terms of predictive accuracy in the domain under study.
A bayesian network approach to traffic flow forecasting
A new approach based on Bayesian networks for traffic flow forecasting is proposed. In this paper, traffic flows among adjacent road links in a transportation network are modeled as a Bayesian network. The joint probability distribution between the cause nodes (data utilized for forecasting) and the effect node (data to be forecasted) in a constructed Bayesian network is described as a Gaussian mixture model (GMM) whose parameters are estimated via the competitive expectation maximization (CEM) algorithm. Finally, traffic flow forecasting is performed under the criterion of minimum mean square error (MMSE). The approach departs from many existing traffic flow forecasting models in that it explicitly includes information from adjacent road links to analyze the trends of the current link statistically. Furthermore, it also addresses traffic flow forecasting when incomplete data exist. Comprehensive experiments on urban vehicular traffic flow data from Beijing, and comparisons with several other methods, show that the Bayesian network is a very promising and effective approach for traffic flow modeling and forecasting, for both complete and incomplete data.
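A sketch of the forecasting step under the stated model: fit a GMM to the joint vector of cause and effect flows, then forecast with the MMSE estimator E[effect | cause], which for a GMM is a responsibility-weighted sum of per-component linear regressions. sklearn's standard EM is used here in place of the paper's competitive EM (CEM), and the link flows are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic flows: two adjacent links (cause) drive the target link (effect).
rng = np.random.default_rng(0)
cause = rng.normal(100, 20, size=(500, 2))
effect = 0.4 * cause[:, 0] + 0.5 * cause[:, 1] + rng.normal(0, 5, 500)
Z = np.column_stack([cause, effect])

gmm = GaussianMixture(n_components=3, random_state=0).fit(Z)

def mmse_forecast(x):
    d = 2   # dimension of the cause block
    num = den = 0.0
    for w, mu, S in zip(gmm.weights_, gmm.means_, gmm.covariances_):
        Sxx, Sxy = S[:d, :d], S[:d, d]
        diff = x - mu[:d]
        # Component responsibility for the observed cause x.
        rw = (w * np.exp(-0.5 * diff @ np.linalg.solve(Sxx, diff))
              / np.sqrt(np.linalg.det(2 * np.pi * Sxx)))
        cond = mu[d] + Sxy @ np.linalg.solve(Sxx, diff)  # E[y | x, component]
        num += rw * cond
        den += rw
    return num / den

print(mmse_forecast(np.array([110.0, 95.0])))  # close to 0.4*110 + 0.5*95 = 91.5
```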
Use alone or in Combination of Red and Infrared Laser in Skin Wounds.
A systematic review was conducted covering the action of red laser, infrared laser and the combination of both, with emphasis on cutaneous wound therapy, examining the different settings of parameters such as fluence, power, energy density, application time, frequency mode, and even the type of low-power laser and its wavelength. It was observed that, in general, these lasers bring good clinical and, especially, histological results, but there is no protocol that defines a dosage of use with predictable therapeutic success in repairing these wounds.
Microtia repair with rib cartilage grafts: a review of personal experience with 1000 cases.
Surgical construction of the auricle with autogenous tissues is a unique marrying of science and art. Although the surgeon's facility with both sculpture and design is imperative, the surgical result is equally influenced by adherence to sound principles of plastic surgery and tissue transfer. The material reviewed in this article is derived from clinical experience with congenital microtia: 1094 completed ears in 1000 patients (94 cases were bilateral). This article focuses on total repair of major congenital ear defects, but includes relevant supplementary input from experience gained by managing more than 125 traumatic auricular deformities.
A Circuit-Level Substrate Current Model for Smart-Power ICs
This paper presents a new modeling methodology accounting for generation and propagation of minority carriers that can be used directly in circuit-level simulators in order to estimate coupled parasitic currents. The method is based on a new compact model of basic components (p-n junction and resistance) and takes into account minority carriers at the boundary. An equivalent circuit schematic of the substrate is built by identifying these basic elements in the substrate and interconnecting them. Parasitic effects such as bipolar or latch-up effects result from the continuity of minority carriers guaranteed by the components' models. A structure similar to a half-bridge perturbing sensitive n-wells has been simulated. It is composed by four p-n junctions connected together by their common p-doped sides. The results are in good agreement with those obtained from physical device simulations.
BISTRO: An Efficient Relaxation-Based Method for Contextual Bandits
We present efficient algorithms for the problem of contextual bandits with i.i.d. covariates, an arbitrary sequence of rewards, and an arbitrary class of policies. Our algorithm BISTRO requires d calls to the empirical risk minimization (ERM) oracle per round, where d is the number of actions. The method uses unlabeled data to make the problem computationally simple. When the ERM problem itself is computationally hard, we extend the approach by employing multiplicative approximation algorithms for the ERM. The integrality gap of the relaxation only enters in the regret bound rather than the benchmark. Finally, we show that the adversarial version of the contextual bandit problem is learnable (and efficient) whenever the full-information supervised online learning problem has a non-trivial regret guarantee (and efficient).
Set Point Identification and Robustness Testing of Electrostatic Separation Processes
Identification of the optimal operating conditions and evaluation of their robustness are critical issues for the industrial application of electrostatic separation techniques. In spite of extensive investigations performed in recent years, no standard procedure is available for guiding the search for the set point and for minimizing the process sensitivity to changes in certain critical factors. The aim of this paper is to formulate a set of recommendations regarding the choice of high-voltage, roll-speed, and feed-rate values for an important class of electrostatic separation applications: the selective sorting of conductive and nonconductive constituents of granular industrial wastes. The experiments were carried out on a laboratory separator, built by one of the authors, with various samples of chopped wire wastes furnished by l'Entreprise des Industries des Câbles, Biskra, Algeria. Several one-factor-at-a-time experiments, followed by two factorial designs (one composite, the other fractional), were performed based on the following three-step strategy: 1) identifying the domain of variation of the controlled variables; 2) finding the best choice of the set point; and 3) assessing the robustness of the process, i.e., testing whether the performance of the system remains satisfactory even when the factors vary slightly around that point. The results presented in this paper are strictly valid only for a well-defined category of processed materials, but a similar approach could be adopted for a wider range of electrostatic separation applications.
Optimality of Frequency Flat Precoding in Frequency Selective Millimeter Wave Channels
Millimeter wave (mmWave) MIMO communication is a key feature of next generation wireless systems. The selection of precoders and combiners for wideband mmWave channels has involved frequency selective designs based on channel state information. In this letter, we show that under some assumptions, the dominant subspaces of the frequency domain channel matrices are equivalent. This means that semi-unitary frequency flat precoding and combining are sufficient to achieve the maximum spectral efficiency when there is not too much scattering in the channel. It also motivates the use of techniques such as compressive subspace estimation as an alternative to estimating the full channel.
Adoption of E-Learning in Higher Education : Expansion of UTAUT Model
This research is aimed at identifying the determinants that influence higher educational students' behavioral intention to utilize e-learning systems. The study therefore proposed an extension of the Unified Theory of Acceptance and Use of Technology (UTAUT) model by integrating it with four other variables. Data collected from 264 higher educational students using e-learning systems in Ghana through a survey questionnaire were used to test the proposed research model. The study indicated that six variables, Performance Expectancy (PE), Effort Expectancy (EE), Social Influence (SI), Facilitating Factor (FF), Personal Innovativeness (PI) and Study Modes (SM), had a significant impact on students' behavioral intention toward e-learning systems. The empirical outcome reflects both theoretical and practical considerations in promoting e-learning systems in higher education in Ghana.
A Neural Probabilistic Language Model
A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that it allows the model to take advantage of longer contexts.
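A toy version of the architecture described above can be written in a few dozen lines: embed each context word, concatenate the embeddings, pass them through a tanh hidden layer, and softmax over the vocabulary. The dimensions, corpus, and learning rate below are toy-sized, and the model's direct embedding-to-output connections are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
V, m, h, n = len(vocab), 8, 16, 3          # vocab size, embed, hidden, n-gram
idx = {w: i for i, w in enumerate(vocab)}

C = 0.1 * rng.standard_normal((V, m))      # word embedding table
H = 0.1 * rng.standard_normal(((n - 1) * m, h))
U = 0.1 * rng.standard_normal((h, V))

data = [([idx[corpus[i - 2]], idx[corpus[i - 1]]], idx[corpus[i]])
        for i in range(2, len(corpus))]

lr = 0.1
for _ in range(2000):
    for ctx, tgt in data:
        x = C[ctx].reshape(-1)             # concatenated context embeddings
        a = np.tanh(x @ H)
        logits = a @ U
        p = np.exp(logits - logits.max()); p /= p.sum()
        # Backpropagate the cross-entropy loss through softmax, tanh, embeddings.
        dlog = p.copy(); dlog[tgt] -= 1.0
        dU = np.outer(a, dlog)
        da = U @ dlog
        dx = H @ (da * (1 - a ** 2))
        dH = np.outer(x, da * (1 - a ** 2))
        U -= lr * dU; H -= lr * dH
        C[ctx] -= lr * dx.reshape(n - 1, m)

ctx = [idx["sat"], idx["on"]]
logits = np.tanh(C[ctx].reshape(-1) @ H) @ U
print(vocab[int(logits.argmax())])         # likely "the"
```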
Executive and visuo-motor function in adolescents and adults with autism spectrum disorder.
This study broadly examines executive (EF) and visuo-motor function in 30 adolescent and adult individuals with high-functioning autism spectrum disorder (ASD) in comparison to 28 controls matched for age, gender, and IQ. ASD individuals showed impaired spatial working memory, whereas planning, cognitive flexibility, and inhibition were spared. Pure movement execution during visuo-motor information processing also was intact. In contrast, execution time of reading, naming, and of visuo-motor information processing tasks including a choice component was increased in the ASD group. Results of this study are in line with previous studies reporting only minimal EF difficulties in older individuals with ASD when assessed by computerized tasks. The finding of impaired visuo-motor information processing should be accounted for in further neuropsychological studies in ASD.
Spontaneous network formation among cooperative RNA replicators
The origins of life on Earth required the establishment of self-replicating chemical systems capable of maintaining and evolving biological information. In an RNA world, single self-replicating RNAs would have faced the extreme challenge of possessing a mutation rate low enough both to sustain their own information and to compete successfully against molecular parasites with limited evolvability. Thus theoretical analyses suggest that networks of interacting molecules were more likely to develop and sustain life-like behaviour. Here we show that mixtures of RNA fragments that self-assemble into self-replicating ribozymes spontaneously form cooperative catalytic cycles and networks. We find that a specific three-membered network has highly cooperative growth dynamics. When such cooperative networks are competed directly against selfish autocatalytic cycles, the former grow faster, indicating an intrinsic ability of RNA populations to evolve greater complexity through cooperation. We can observe the evolvability of networks through in vitro selection. Our experiments highlight the advantages of cooperative behaviour even at the molecular stages of nascent life.
The electronic couplings in electron transfer and excitation energy transfer.
The transport of charge via electrons and the transport of excitation energy via excitons are two processes of fundamental importance in diverse areas of research. Characterization of electron transfer (ET) and excitation energy transfer (EET) rates is essential for a full understanding of, for instance, biological systems (such as respiration and photosynthesis) and opto-electronic devices (which interconvert electric and light energy). In this Account, we examine one of the parameters, the electronic coupling factor, for which reliable values are critical in determining transfer rates. Although ET and EET are different processes, many strategies for calculating the couplings share common themes. We emphasize the similarities in basic assumptions between the computational methods for the ET and EET couplings, examine the differences, and summarize the properties, advantages, and limits of the different computational methods. The electronic coupling factor is an off-diagonal Hamiltonian matrix element between the initial and final diabatic states in the transport processes. ET coupling is essentially the interaction of the two molecular orbitals (MOs) where the electron occupancy is changed. Singlet excitation energy transfer (SEET), however, contains a Förster dipole-dipole coupling as its most important constituent. Triplet excitation energy transfer (TEET) involves an exchange of two electrons of different spin and energy; thus, it is like an overlap interaction of two pairs of MOs. Strategies for calculating ET and EET couplings can be classified as (1) energy-gap-based approaches, (2) direct calculation of the off-diagonal matrix elements, or (3) use of an additional operator to describe the extent of charge or excitation localization and to calculate the coupling value. Some of the difficulties in calculating the couplings were recently resolved. Methods were developed to remove the nondynamical correlation problem from the highly precise coupled cluster models for ET coupling. It is now possible to obtain reliable ET couplings from entry-level excited-state Hamiltonians. A scheme to calculate the EET coupling in a general class of systems, regardless of the contributing terms, was also developed. In the past, empirically derived parameters were heavily invoked in model descriptions of charge and excitation energy drifts in a solid-state device. Recent advances, including the methods described in this Account, permit the first-principle quantum mechanical characterization of one class of the parameters in such descriptions, enhancing the predictive power and allowing a deeper understanding of the systems involved.
The potential of Virtual Reality as anxiety management tool: a randomized controlled study in a sample of patients affected by Generalized Anxiety Disorder
BACKGROUND Generalized anxiety disorder (GAD) is a psychiatric disorder characterized by constant and unspecific anxiety that interferes with daily-life activities. Its high prevalence in the general population and the severe limitations it causes point out the necessity of finding new, efficient strategies to treat it. Together with cognitive-behavioural treatments, relaxation represents a useful approach for the treatment of GAD, but it has the limitation of being hard to learn. To overcome this limitation we propose the use of virtual reality (VR) to facilitate the relaxation process by visually presenting key relaxing images to the subjects. The visual presentation of a virtual calm scenario can facilitate patients' practice and mastery of relaxation, making the experience more vivid and real than the one that most subjects can create using their own imagination and memory, and triggering a broad empowerment process within the experience induced by a high sense of presence. According to these premises, the aim of the present study is to investigate the advantages of using a VR-based relaxation protocol in reducing anxiety in patients affected by GAD. METHODS/DESIGN The trial is based on a randomized controlled study, including three groups of 25 patients each (for a total of 75 patients): (1) the VR group, (2) the non-VR group and (3) the waiting list (WL) group. Patients in the VR group will be taught to relax using a VR relaxing environment and audio-visual mobile narratives; patients in the non-VR group will be taught to relax using the same relaxing narratives proposed to the VR group, but without the VR support; and patients in the WL group will not receive any kind of relaxation training. Psychometric and psychophysiological outcomes will serve as quantitative dependent variables, while subjective reports of participants will be used as qualitative dependent variables. CONCLUSION We argue that the use of VR for relaxation represents a promising approach in the treatment of GAD since it enhances the quality of the relaxing experience through the elicitation of the sense of presence. This controlled trial will be able to evaluate the effects of the use of VR in relaxation while preserving the benefits of randomization to reduce bias. TRIAL REGISTRATION NCT00602212 (ClinicalTrials.gov).
YGGDRASIL - A Statistical Package for Learning Split Models
There are two main objectives of this paper. The first is to present a statistical framework for models with context specific independence structures, i.e. conditional independencies holding only for specific values of the conditioning variables. This framework is constituted by the class of split models. Split models are an extension of graphical models for contingency tables and allow for more sophisticated modelling than graphical models. The treatment of split models includes estimation, representation and a Markov property for reading off those independencies holding in a specific context. The second objective is to present a software package named YGGDRASIL which is designed for statistical inference in split models, i.e. for learning such models on the basis of data.
Copy Detection Mechanisms for Digital Documents
In a digital library system, documents are available in digital form and therefore are more easily copied and their copyrights are more easily violated. This is a very serious problem, as it discourages owners of valuable information from sharing it with authorized users. There are two main philosophies for addressing this problem: prevention and detection. The former actually makes unauthorized use of documents difficult or impossible while the latter makes it easier to discover such activity. In this paper we propose a system for registering documents and then detecting copies, either complete copies or partial copies. We describe algorithms for such detection, and metrics required for evaluating detection mechanisms (covering accuracy, efficiency, and security). We also describe a working prototype, called COPS, describe implementation issues, and present experimental results that suggest the proper settings for copy detection parameters.
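One standard building block in the spirit of such detection schemes is chunk-based fingerprinting. The sketch below hashes overlapping word k-grams and scores the fraction of a candidate document's chunks that appear in a registered one; the chunk size and decision threshold are illustrative choices, not COPS's actual settings.

```python
# Sketch of chunk-overlap copy detection: hash overlapping word k-grams and
# measure how many of a candidate's chunks occur in a registered document.
import hashlib

def chunks(text, k=5):
    words = text.lower().split()
    grams = {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}
    return {hashlib.sha1(g.encode()).hexdigest() for g in grams}

def overlap(candidate, registered, k=5):
    """Fraction of the candidate's chunks found in the registered document."""
    a, b = chunks(candidate, k), chunks(registered, k)
    return len(a & b) / len(a) if a else 0.0

registered = "the quick brown fox jumps over the lazy dog near the river bank"
candidate = "yesterday the quick brown fox jumps over the lazy dog again"
score = overlap(candidate, registered)
print(f"overlap = {score:.2f}", "-> possible copy" if score > 0.3 else "-> ok")
```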
An Adaptive Eigenshape Model
There has been a great deal of recent interest in statistical models of 2D landmark data for generating compact deformable models of a given object. This paper extends this work to a class of parametrised shapes where there are no landmarks available. A rigorous statistical framework for the eigenshape model is introduced, which is an extension to the conventional Linear Point Distribution Model. One of the problems associated with landmark free methods is that a large degree of variability in any shape descriptor may be due to the choice of parametrisation. An automated training method is described which utilises an iterative feedback method to overcome this problem. The result is an automatically generated compact linear shape model. The model has been successfully applied to a problem of tracking the outline of a walking pedestrian in real time.
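The conventional linear point distribution model that the eigenshape model extends fits in a few lines of numpy: stack aligned training shapes, take the PCA of their coordinates, and synthesize new shapes from the mean plus weighted eigenvectors. The toy elliptical "shapes" below are illustrative stand-ins for aligned landmark or parametrised contour data.

```python
# Minimal linear point distribution model: PCA over stacked shape vectors,
# then shape synthesis as mean + weighted modes. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_shapes, n_points = 40, 25

# Toy training set: noisy ellipses with varying aspect ratio.
t = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
shapes = np.stack([
    np.concatenate([np.cos(t), (0.5 + 0.3 * rng.random()) * np.sin(t)])
    + 0.01 * rng.standard_normal(2 * n_points)
    for _ in range(n_shapes)
])

mean = shapes.mean(axis=0)
X = shapes - mean
# Eigen-decomposition of the sample covariance via SVD.
_, s, Vt = np.linalg.svd(X, full_matrices=False)
eigvals = s**2 / (n_shapes - 1)
k = 3                                   # retained modes of variation

# New shape = mean + sum_i b_i * p_i, with |b_i| kept within 3 sqrt(lambda_i).
b = np.array([2.0, -1.0, 0.5]) * np.sqrt(eigvals[:k])
new_shape = mean + b @ Vt[:k]
print("variance explained by 3 modes:", eigvals[:3].sum() / eigvals.sum())
```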
Q-methodology as a research and design tool for HCI
A "discount" version of Q-methodology for HCI, called "HCI-Q", can be used in iterative design cycles to explore, from the point of view of users and other stakeholders, what makes technologies personally significant. Initially, designers critically reflect on their own assumptions about how a design may affect social and individual behavior. Then, designers use these assumptions as stimuli to elicit other people's points of view. This process of critical self-reflection and evaluation helps the designer to assess the fit between a design and its intended social context of use. To demonstrate the utility of HCI-Q for research and design, we use HCI-Q to explore stakeholders' responses to a prototype Alternative and Augmentative Communication (AAC) application called Vid2Speech. We show that our adaptation of Q-methodology is useful for revealing the structure of consensus and conflict among stakeholder perspectives, helping to situate design within the context of relevant value tensions and norms.
Spin: lexical semantics, transitivity, and the identification of implicit sentiment
Current interest in automatic sentiment analysis is motivated by a variety of information requirements. The vast majority of work in sentiment analysis has been specifically targeted at detecting subjective statements and mining opinions. This dissertation focuses on a different but related problem that to date has received relatively little attention in NLP research: detecting implicit sentiment, or spin, in text. This text classification task is distinguished from other sentiment analysis work in that there is no assumption that the documents to be classified with respect to sentiment are necessarily overt expressions of opinion; rather, they are documents that might reveal a perspective. This dissertation describes a novel approach to the identification of implicit sentiment, motivated by ideas drawn from the literature on lexical semantics and argument structure, supported and refined through psycholinguistic experimentation. A relationship predictive of sentiment is established for components of meaning that are thought to be drivers of verbal argument selection and linking, and to be arbiters of what is foregrounded or backgrounded in discourse. In computational experiments employing targeted lexical selection for verbs and nouns, a set of features reflective of these components of meaning is extracted for the terms. As observable proxies for the underlying semantic components, these features are exploited using machine learning methods for text classification with respect to perspective. After initial experimentation with manually selected lexical resources, the method is generalized to require no manual selection or hand tuning of any kind. The robustness of this linguistically motivated method is demonstrated by successfully applying it to three distinct text domains under a number of different experimental conditions, obtaining the best classification accuracies yet reported for several sentiment classification tasks. A novel graph-based classifier combination method is introduced which further improves classification accuracy by integrating statistical classifiers with models of inter-document relationships.
Facial recognition using histogram of gradients and support vector machines
Face recognition is widely used in computer vision and in many other biometric applications where security is a major concern. The most common problem in recognizing a face arises due to pose variations, different illumination conditions and so on. The main focus of this paper is to recognize whether a given face input corresponds to a registered person in the database. Face recognition is done using Histogram of Oriented Gradients (HOG) technique in AT & T database with an inclusion of a real time subject to evaluate the performance of the algorithm. The feature vectors generated by HOG descriptor are used to train Support Vector Machines (SVM) and results are verified against a given test input. The proposed method checks whether a test image in different pose and lighting conditions is matched correctly with trained images of the facial database. The results of the proposed approach show minimal false positives and improved detection accuracy.
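A hedged sketch of the described pipeline using scikit-image's HOG descriptor and scikit-learn's SVM follows; loading the actual AT&T (ORL) face images is assumed and replaced here by placeholder arrays of the same size.

```python
# HOG features + linear SVM for face recognition, a sketch of the pipeline
# above. Real image loading is assumed; random arrays stand in for faces.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def hog_features(images):
    # 9 orientation bins, 8x8-pixel cells, 2x2-cell blocks: common HOG settings.
    return np.array([
        hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for img in images
    ])

# images: grayscale face arrays (112x92, the ORL size); labels: subject ids.
images = [np.random.rand(112, 92) for _ in range(40)]   # placeholder data
labels = np.repeat(np.arange(10), 4)

X = hog_features(images)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25,
                                          stratify=labels, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
print("toy accuracy:", clf.score(X_te, y_te))
```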
Carotid artery diameter correlates with risk factors for cardiovascular disease in a population of 55-year-old subjects.
BACKGROUND AND PURPOSE We investigated whether, in a randomly selected population of 55-year-old men and women, there is a relationship between common carotid artery (CCA) diameter and intima-media (IM) thickness and conventional risk factors for cardiovascular disease such as gender, smoking, elevated blood lipids, and high blood pressure. METHODS CCA diameter and IM thickness of the distal right and left CCAs were measured by high-frequency ultrasound methods. Fifty-seven men (73% of the invited men) and 47 women (62% of the invited women) participated. RESULTS In the whole group the CCA diameter was correlated with gender (P<0.001), cholesterol (P=0.007), triglycerides (P<0.001), apoB (P<0.001), apoB/A-1 (P<0.001), systolic blood pressure (P=0.001), and glucose (P=0.006). HDL was inversely correlated with mean CCA diameter (P=0.003). In men the CCA diameter was correlated with a combined risk factor score (P=0.005), systolic blood pressure (P=0.011), platelet count (P=0.033), apoB (P=0.025), and occurrence of plaque (P=0.003). In women the CCA diameter was correlated with a combined risk factor score (P=0.010), systolic blood pressure (P=0.033), body mass index (P<0.001), cholesterol (P=0.009), triglycerides (P=0.14), apoB (P=0.002), and apoB/A1 (P=0.003). IM thickness was correlated with systolic blood pressure (P<0.001). CONCLUSIONS There are correlations between risk factors for cardiovascular disease and carotid artery diameter and IM thickness in both women and men in a population of 55-year-old subjects. The increased vessel diameter in subjects with cardiovascular risk factors may be a sign of attenuated vasoregulation, which could be an important factor during the development of atherosclerosis.
Focused Activities and the Development of Social Capital in a Distributed Learning "Community"
This study examined the development of individual social capital in a distributed learning community. Feld’s theory of focused choice predicts that the formation of network ties is constrained by contextual factors that function as foci of activities. In our research, we examined how group assignment and location could function as such foci to influence the development of individual social capital in a distributed learning community. Given that networks with different content flows may possess different properties, we examined two different types of networks—task-related instrumental networks and non-task-related expressive networks. A longitudinal research design was used to evaluate the evolution of networks over time. Hypotheses were tested using a sample of 32 students enrolled in a distributed learning class. The results show strong support for Feld’s theory. While serving as foci of activities to organize social interactions, both group assignment and geographic separation can also function to fragment a learning community.
Effect of azelastine on bronchoconstriction induced by histamine and leukotriene C4 in patients with extrinsic asthma.
Azelastine, a new oral agent with antiallergic and antihistamine properties, has been shown to inhibit the effects of histamine and leukotriene (LT) in vitro, though it is not a specific leukotriene receptor antagonist. The effect of both a single dose (8.8 mg) and 14 days' treatment (8.8 mg twice daily) with azelastine on bronchoconstriction induced by LTC4 and histamine has been examined in 10 patients with mild asthma in a placebo controlled, double blind, crossover study. LTC4 and histamine were inhaled in doubling concentrations from a dosimeter and the results expressed as the cumulative dose (PD) producing a 20% fall in FEV1 (PD20FEV1) and 35% fall in specific airways conductance (PD35sGaw). The single dose of azelastine produced significantly greater FEV1 and sGaw values than placebo at 3 hours, but this bronchodilator effect was not present after 14 days of treatment. Azelastine was an effective H1 antagonist; after a single dose and 14 days' treatment with placebo the geometric mean PD20FEV1 histamine values (mumol) were 0.52 (95% confidence interval 0.14-1.83) and 0.54 (0.12-2.38), compared with 22.9 (11.5-38.3) and 15.2 (6.47-35.6) after azelastine (p less than 0.01 for both). LTC4 was on average 1000 times more potent than histamine in inducing bronchoconstriction. Azelastine did not inhibit the effect of inhaled LTC4; the geometric mean PD20FEV1 LTC4 (nmol) after a single dose and 14 days' treatment was 0.60 and 0.59 with placebo, compared with 0.65 and 0.75 with azelastine. The PD35sGaw LTC4 was also unchanged at 0.66 and 0.73 for placebo compared with 0.83 and 0.74 for azelastine. Thus prolonged blockade of H1 receptors did not attenuate the response to LTC4, suggesting that histamine and LTC4 act on bronchial smooth muscle through different receptors. Four patients complained of drowsiness while taking azelastine, but only one while taking placebo; three patients complained of a bitter, metallic taste while taking azelastine.
Paradigm Whose Time Has Come
Realtime Edge-Based Visual Odometry for a Monocular Camera
In this work we present a novel algorithm for realtime visual odometry for a monocular camera. The main idea is to develop an approach between classical feature-based visual odometry systems and modern direct dense/semi-dense methods, trying to benefit from the best attributes of both. Similar to feature-based systems, we extract information from the images, instead of working with raw image intensities as direct methods do. In particular, the information extracted consists of the edges present in the image, while the rest of the algorithm is designed to take advantage of the structural information provided when pixels are treated as edges. Edge extraction is an efficient and highly parallelizable operation. The edge depth information extracted is dense enough to allow acceptable surface fitting, similar to modern semi-dense methods. This is a valuable attribute that feature-based odometry lacks. Experimental results show that the proposed method exhibits drift similar to state-of-the-art feature-based and direct methods, and is a simple algorithm that runs in realtime and can be parallelized. Finally, we have also developed an inertial-aided version that successfully stabilizes an unmanned air vehicle in complex indoor environments using only a frontal camera, while running the complete solution on the embedded hardware on board the vehicle.
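The edge-extraction front end such a method relies on can be illustrated with OpenCV; the thresholds below are arbitrary, and the paper's tracking and depth-estimation back end is not reproduced.

```python
# Sketch of the edge-extraction step only: Canny produces the binary edge
# map whose pixels become measurements for tracking/depth estimation.
import cv2
import numpy as np

frame = (np.random.rand(480, 640) * 255).astype(np.uint8)  # stand-in camera frame
edges = cv2.Canny(frame, threshold1=100, threshold2=200)   # binary edge map

ys, xs = np.nonzero(edges)
print(f"{len(xs)} edge pixels extracted")
```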
The elimination of severe slugging—experiments and modeling
Severe slugging can occur in a pipeline-riser system operating at low liquid and gas rates. The flow of gas into the riser can be blocked by liquid accumulation at the base of the riser. This can cause formation of liquid slugs of a length equal to or longer than the height of the riser. A cyclic process results in which a period of no liquid production into the separator occurs, followed by a period of very high liquid production. This study is an experimental and theoretical investigation of two methods for eliminating this undesirable phenomenon, using choking and gas lift. Choking was found to effectively eliminate or reduce the severity of the slugging. However, the system pressure might increase to some extent. Gas lift can also eliminate severe slugging. While choking reduces the velocities in the riser, gas lift increases the velocities, approaching annular flow. It was found that a relatively large amount of gas was needed before gas injection would completely stabilize the flow through the riser. However, gas injection reduces the slug length and cycle time, causing a more continuous production and a lower system pressure. Theoretical models for the elimination of severe slugging by gas lift and choking have been developed. The models enable the prediction of the flow behavior in the riser. One model is capable of predicting the unstable flow conditions for severe slugging based on a static force balance. The second method is a simplified transient model based on the assumption of a quasi-equilibrium force balance. This model can be used to estimate the characteristics of the flow, such as slug length and cycle time. The models were tested against new severe slugging data acquired in this study. An excellent agreement between the experimental data and the theoretical models was found.
LCDet: Low-Complexity Fully-Convolutional Neural Networks for Object Detection in Embedded Systems
Deep Convolutional Neural Networks (CNN) are the state-of-the-art performers for the object detection task. It is well known that object detection requires more computation and memory than image classification. In this work, we propose LCDet, a fully-convolutional neural network for generic object detection that aims to work in embedded systems. We design and develop an end-to-end TensorFlow(TF)-based model. The detection works by a single forward pass through the network. Additionally, we employ 8-bit quantization on the learned weights. As a use case, we choose face detection and train the proposed model on images containing a varying number of faces of different sizes. We evaluate the face detection performance on the publicly available datasets FDDB and Widerface. Our experimental results show that the proposed method achieves comparable accuracy to state-of-the-art CNN-based face detection methods while reducing the model size by 3x and memory-BW by 3-4x compared with one of the best real-time CNN-based object detectors, YOLO [23]. Our 8-bit fixed-point TF-model provides an additional 4x memory reduction while keeping the accuracy nearly as good as the floating-point model, and achieves a 20x performance gain compared to the floating-point model. Thus the proposed model is amenable to embedded implementations and is generic enough to be extended to any number of object categories.
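The 8-bit weight quantization mentioned above might look like the following post-training symmetric scheme; this is one plausible reading, since the exact quantizer is not specified here.

```python
# Hedged sketch of post-training symmetric 8-bit weight quantization:
# per-tensor scale, round-to-nearest, int8 storage, float dequantization.
import numpy as np

def quantize_int8(w):
    """Map float weights to int8 plus a per-tensor scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(256, 128).astype(np.float32)   # toy conv/fc weight tensor
q, s = quantize_int8(w)
err = np.abs(w - dequantize(q, s)).max()
print(f"4x smaller storage, max abs error {err:.5f}")
```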
Classroom Applications of Research on Self-Regulated Learning
This article describes how self-regulated learning (SRL) has become a popular topic in research in educational psychology and how the research has been translated into classroom practices. Research during the past 30 years on students’ learning and achievement has progressively included emphases on cognitive strategies, metacognition, motivation, task engagement, and social supports in classrooms. SRL emerged as a construct that encompassed these various aspects of academic learning and provided more holistic views of the skills, knowledge, and motivation that students acquire. The complexity of SRL has been appealing to educational researchers who seek to provide effective interventions in schools that benefit teachers and students directly. Examples of SRL in classrooms are provided for three areas of research: strategies for reading and writing, cognitive engagement in tasks, and self-assessment. The pedagogical principles and underlying research are discussed for each area. Whether SRL is viewed as a set of skills that can be taught explicitly or as developmental processes of self-regulation that emerge from experience, teachers can provide information and opportunities to students of all ages that will help them become strategic, motivated, and independent learners.
Eye Movement-Based Human-Computer Interaction Techniques : Toward Non-Command Interfaces
User-computer dialogues are typically one-sided, with the bandwidth from computer to user far greater than that from user to computer. The movement of a user’s eyes can provide a convenient, natural, and high-bandwidth source of additional user input, to help redress this imbalance. We therefore investigate the introduction of eye movements as a computer input medium. Our emphasis is on the study of interaction techniques that incorporate eye movements into the user-computer dialogue in a convenient and natural way. This chapter describes research at NRL on developing such interaction techniques and the broader issues raised by non-command-based interaction styles. It discusses some of the human factors and technical considerations that arise in trying to use eye movements as an input medium, describes our approach and the first eye movement-based interaction techniques that we have devised and implemented in our laboratory, reports our experiences and observations on them, and considers eye movement-based interaction as an exemplar of a new, more general class of non-command-based user-computer interaction.
Pharmacokinetics of bendroflumethiazide in hypertensive patients
After four weeks on placebo treatment, 8 hypertensive patients (WHO stage I) were treated for 2 weeks with bendroflumethiazide (bft) 2.5 mg and KCl 1.5 g daily. Subsequently they received bft 5 mg and KCl 1.5 g daily for a further fortnight. At the end of each period of treatment blood pressure was recorded and blood samples and urine were collected for analysis of bft by GLC. Before taking the daily dose of bft, no trace of the drug was found in plasma. Peak levels of bft were seen after 2.3 h and averaged 23 and 50 ng·ml⁻¹ after 2.5 and 5 mg, respectively. After bft 2.5 mg the plasma level was too low for kinetic analysis. The plasma half-life after 5 mg averaged 4.1 h. The mean apparent volume of distribution was 1.18 l·kg⁻¹. Non-renal clearance averaged 200 ml·min⁻¹. The renal clearance of bft was significantly lower (p<0.05) after 5 mg (48 ml·min⁻¹) than after 2.5 mg bft (93 ml·min⁻¹), although the creatinine clearance remained unchanged. No correlation was found between the plasma level of bft and its effect on blood pressure.
New low-field extremity MRI, compacTscan: comparison with whole-body 1.5 T conventional MRI.
Low-field extremity magnetic resonance imaging (lfMRI) is currently commercially available and has been used clinically to evaluate rheumatoid arthritis (RA). However, one disadvantage of this new modality is that the field of view (FOV) is too small to assess hand and wrist joints simultaneously. Thus, we have developed a new lfMRI system, compacTscan, with a FOV that is large enough to simultaneously assess the entire wrist to proximal interphalangeal joint area. In this work, we examined its clinical value compared to conventional 1.5 tesla (T) MRI. The comparison involved evaluating three RA patients by both 0.3 T compacTscan and 1.5 T MRI on the same day. Bone erosion, bone edema, and synovitis were estimated by our new compact MRI scoring system (cMRIS) and the kappa coefficient was calculated on a joint-by-joint basis. We evaluated a total of 69 regions. Bone erosion was detected in 49 regions by compacTscan and in 48 regions by 1.5 T MRI, while the total erosion score was 77 for compacTscan and 76.5 for 1.5 T MRI. These findings point to excellent agreement between the two techniques (kappa = 0.833). Bone edema was detected in 14 regions by compacTscan and in 19 by 1.5 T MRI, and the total edema score was 36.25 by compacTscan and 47.5 by 1.5 T MRI. Pseudo-negative findings were noted in 5 regions. However, there was still good agreement between the techniques (kappa = 0.640). Total number of evaluated joints was 33. Synovitis was detected in 13 joints by compacTscan and 14 joints by 1.5 T MRI, while the total synovitis score was 30 by compacTscan and 32 by 1.5 T MRI. Thus, although 1 pseudo-positive and 2 pseudo-negative findings resulted from the joint evaluations, there was again excellent agreement between the techniques (kappa = 0.827). Overall, the data obtained by our compacTscan system showed high agreement with those obtained by conventional 1.5 T MRI with regard to diagnosis and the scoring of bone erosion, edema, and synovitis. We conclude that compacTscan is useful for diagnosis and estimation of disease activity in patients with RA.
Social networking site use by mothers of young children
In this paper, we present the first formal study of how mothers of young children (aged three and under) use social networking sites, particularly Facebook and Twitter, including mothers' perceptions of which SNSes are appropriate for sharing information about their children, changes in post style and frequency after birth, and the volume and nature of child-related content shared in these venues. Our findings have implications for improving the utility and usability of SNS tools for mothers of young children, as well as for creating and improving sociotechnical systems related to maternal and child health.
Digital forensics investigations in the Cloud
The essentially infinite storage space offered by cloud computing is quickly becoming a problem for forensics investigators with regard to evidence acquisition, forensic imaging and extended time for data analysis. It is apparent that the amount of stored data will at some point become impossible to image in practice, preventing forensic investigators from completing a full investigation. In this paper, we address these issues by determining the relationship between acquisition times and different storage capacities, using remote acquisition to obtain data from virtual machines in the cloud. A hypothetical case study is used to investigate the importance of using partial and full approaches for acquisition of data from the cloud and to determine how each approach affects the duration and accuracy of the forensics investigation and outcome. Our results indicate that the relation between the time taken for image acquisition and different storage volumes is not linear, owing to several factors affecting remote acquisition, especially over the Internet. Performing the acquisition using cloud resources showed a considerable reduction in time when compared to the conventional imaging method. For a 30 GB storage volume, the least time was recorded for the snapshot functionality of the cloud and the dd command; the time using this method is reduced by almost 77 percent. FTK Remote Agent proved to be most efficient, showing an almost 12 percent reduction in time over other methods of acquisition. Furthermore, the timelines produced with the help of the case study showed that the hybrid approach should be preferred to the complete approach for performing acquisition from the cloud, especially in time-critical scenarios.
HICCUPS : Hidden Communication System for Corrupted Networks
This article presents HICCUPS (HIdden Communication system for CorrUPted networkS), a steganographic system dedicated to shared-medium networks, including wireless local area networks. The novelty of HICCUPS lies in its use of a secure telecommunications network armed with cryptographic mechanisms to provide a steganographic system, and in the proposal of a new protocol with bandwidth allocation based on corrupted frames. All functional parts of the system and the possibility of its implementation in existing public networks are discussed. An example implementation framework for IEEE 802.11 wireless local area networks is also presented.
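The core idea, frames with deliberately corrupted checksums acting as a covert channel, can be illustrated with the toy sketch below; this illustrates the principle only and is not the HICCUPS protocol itself.

```python
# Toy illustration: a covert sender emits frames whose checksum is
# deliberately wrong; the covert receiver reads the payload of exactly those
# frames, while ordinary stations discard them as corrupted.
import zlib

def make_frame(payload: bytes, covert: bool) -> bytes:
    crc = zlib.crc32(payload)
    if covert:
        crc ^= 0xFFFFFFFF          # intentionally invalid checksum
    return payload + crc.to_bytes(4, "big")

def covert_receive(frames):
    hidden = b""
    for f in frames:
        payload, crc = f[:-4], int.from_bytes(f[-4:], "big")
        if zlib.crc32(payload) != crc:      # "corrupted" frame -> covert data
            hidden += payload
    return hidden

frames = [make_frame(b"normal traffic ", covert=False),
          make_frame(b"secret ", covert=True),
          make_frame(b"message", covert=True)]
print(covert_receive(frames))   # b'secret message'
```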
The lateral transpsoas approach to the lumbar and thoracic spine: A review
BACKGROUND In the last several years, the lateral transpsoas approach to the thoracic and lumbar spine, also known as extreme lateral interbody fusion (XLIF) or direct lateral interbody fusion (DLIF), has become an increasingly common method to achieve fusion. Several recent large series describe several advantages to this approach, including less tissue dissection, smaller incisions, decreased operative time, blood loss, shorter hospital stay, reduced postoperative pain, enhanced fusion rates, and the ability to place instrumentation through the same incision. Indications for this approach have expanded and now include degenerative disease, tumor, deformity, and infection. METHODS A lateral X-ray confirms that the patient is in a truly lateral position. Next, a series of tubes and dilators are used, along with fluoroscopy, to identify the mid-position of the disk to be incised. After continued dilation, the optimal site to enter the disk space is the midpoint of the disk, or a position slightly anterior to the midpoint of the disk. XLIF typically allows for a larger implant to be inserted compared to TLIF or PLIF, and, if necessary, instrumentation can be inserted percutaneously, which would allow for an overall minimally invasive procedure. RESULTS Fixation techniques appear to be equal between XLIF and more traditional approaches. Some caution should be exercised because common fusion levels of the lumbar spine, including L4-5 and L4-S1, are often inaccessible. In addition, XLIF has a unique set of complications, including neural injuries, psoas weakness, and thigh numbness. CONCLUSION Additional studies are required to further evaluate and monitor the short and long-term safety, efficacy, outcomes, and complications of XLIF procedures.
Sinusoidally Modulated Leaky-Wave Antenna for Millimeter-Wave Application
A millimeter-wave sinusoidally modulated (SM) leaky-wave antenna (LWA) based on inset dielectric waveguide (IDW) is presented in this paper. The proposed antenna, radiating at 10° from broadside at 60 GHz, consists of a SM IDW, a rectangular waveguide for excitation and a transition for impedance matching. The fundamental TE01 mode is excited by the IDW, with the leaky wave generated by the SM inset groove depth. The electric field is normal to the metallic waveguide wall, which reduces the conductor loss. As a proof of concept, the modulated dielectric inset as well as the dielectric transition are conveniently fabricated by 3-D printing (tan δ = 0.02). Measurements of the antenna prototype show that the main beam can be scanned from -9° to 40° in a frequency range from 50 to 85 GHz with a gain variation between 9.1 and 14.2 dBi. Meanwhile, the reflection coefficient |S11| is kept below -13.4 dB over the whole frequency band. The measured results agree reasonably well with simulations. Furthermore, the gain of the proposed antenna can be enhanced by extending its length and using low-loss dielectric materials such as Teflon (tan δ < 0.002).
Transmission Expansion Planning of Systems With Increasing Wind Power Integration
This paper proposes an efficient approach for probabilistic transmission expansion planning (TEP) that considers load and wind power generation uncertainties. The Benders decomposition algorithm in conjunction with Monte Carlo simulation is used to tackle the proposed probabilistic TEP. An upper bound on total load shedding is introduced in order to obtain network solutions that have an acceptable probability of load curtailment. The proposed approach is applied on Garver six-bus test system and on IEEE 24-bus reliability test system. The effect of contingency analysis, load and mainly wind production uncertainties on network expansion configurations and costs is investigated. It is shown that the method presented can be used effectively to study the effect of increasing wind power integration on TEP of systems with high wind generation uncertainties.
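The Monte Carlo ingredient of such an approach can be illustrated on a toy single-line system: sample load and wind scenarios, then estimate the probability of load curtailment under a candidate line capacity. All distributions and network numbers below are illustrative assumptions, far simpler than the Benders-based TEP of the paper.

```python
# Toy Monte Carlo sketch: sample uncertain load and wind, check a candidate
# line capacity, and estimate the probability of load shedding.
import numpy as np

rng = np.random.default_rng(1)
n_scenarios = 100_000
line_capacity = 120.0          # MW, candidate expansion plan (assumed)
local_generation = 60.0        # MW available at the load bus (assumed)

load = rng.normal(150.0, 15.0, n_scenarios)            # uncertain demand
wind = np.clip(rng.weibull(2.0, n_scenarios) * 40.0,   # uncertain wind infeed
               0.0, 80.0)

need = np.maximum(load - local_generation - wind, 0)   # power to import
imports = np.minimum(line_capacity, need)
shedding = need - imports                              # unserved load

prob_curtailment = (shedding > 0).mean()
print(f"P(load curtailment) = {prob_curtailment:.4f}, "
      f"mean shed = {shedding.mean():.2f} MW")
```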
Worth a Glance: Using Eye Movements to Investigate the Cognitive Neuroscience of Memory
Results of several investigations indicate that eye movements can reveal memory for elements of previous experience. These effects of memory on eye movement behavior can emerge very rapidly, changing the efficiency and even the nature of visual processing without appealing to verbal reports and without requiring conscious recollection. This aspect of eye movement based memory investigations is particularly useful when eye movement methods are used with special populations (e.g., young children, elderly individuals, and patients with severe amnesia), and also permits use of comparable paradigms in animals and humans, helping to bridge different memory literatures and permitting cross-species generalizations. Unique characteristics of eye movement methods have produced findings that challenge long-held views about the nature of memory, its organization in the brain, and its failures in special populations. Recently, eye movement methods have been successfully combined with neuroimaging techniques such as fMRI, single-unit recording, and magnetoencephalography, permitting more sophisticated investigations of memory. Ultimately, combined use of eye-tracking with neuropsychological and neuroimaging methods promises to provide a more comprehensive account of brain-behavior relationships and adheres to the "converging evidence" approach to cognitive neuroscience.
Classification using partial least squares with penalized logistic regression
MOTIVATION One important aspect of data-mining of microarray data is to discover the molecular variation among cancers. In microarray studies, the number n of samples is relatively small compared to the number p of genes per sample (usually in thousands). It is known that standard statistical methods in classification are efficient (i.e. in the present case, yield successful classifiers) particularly when n is (far) larger than p. This naturally calls for the use of a dimension reduction procedure together with the classification one. RESULTS In this paper, the question of classification in such a high-dimensional setting is addressed. We view the classification problem as a regression one with few observations and many predictor variables. We propose a new method combining partial least squares (PLS) and Ridge penalized logistic regression. We review the existing methods based on PLS and/or penalized likelihood techniques, outline their interest in some cases and theoretically explain their sometimes poor behavior. Our procedure is compared with these other classifiers. The predictive performance of the resulting classification rule is illustrated on three data sets: Leukemia, Colon and Prostate.
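A rough scikit-learn analogue of the proposed procedure follows: reduce the p >> n expression matrix with PLS components and fit an L2 (Ridge-penalized) logistic regression on them. This approximates, but is not identical to, the paper's combined PLS/Ridge method; the data here are synthetic.

```python
# Sketch: PLS dimension reduction followed by Ridge-penalized logistic
# regression, for n << p classification. Synthetic data stand in for arrays.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 60, 2000                          # few samples, thousands of genes
X = rng.standard_normal((n, p))
y = (X[:, :10].sum(axis=1) + 0.5 * rng.standard_normal(n) > 0).astype(int)

model = Pipeline([
    # PLSRegression acts as a supervised transformer yielding latent scores.
    ("pls", PLSRegression(n_components=3)),
    ("ridge_logit", LogisticRegression(penalty="l2", C=1.0, max_iter=1000)),
])
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```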
Chaotic mixing in three-dimensional microvascular networks fabricated by direct-write assembly.
The creation of geometrically complex fluidic devices is a subject of broad fundamental and technological interest. Here, we demonstrate the fabrication of three-dimensional (3D) microvascular networks through direct-write assembly of a fugitive organic ink. This approach yields a pervasive network of smooth cylindrical channels (approximately 10-300 microm) with defined connectivity. Square-spiral towers, isolated within this vascular network, promote fluid mixing through chaotic advection. These vertical towers give rise to dramatic improvements in mixing relative to simple straight (1D) and square-wave (2D) channels while significantly reducing the device planar footprint. We envisage that 3D microvascular networks will provide an enabling platform for a wide array of fluidic-based applications.
User-adaptive information visualization: using eye gaze data to infer visualization tasks and user cognitive abilities
Information Visualization systems have traditionally followed a one-size-fits-all model, typically ignoring an individual user's needs, abilities and preferences. However, recent research has indicated that visualization performance could be improved by adapting aspects of the visualization to each individual user. To this end, this paper presents research aimed at supporting the design of novel user-adaptive visualization systems. In particular, we discuss results on using information on user eye gaze patterns while interacting with a given visualization to predict the user's visualization tasks, as well as user cognitive abilities including perceptual speed, visual working memory, and verbal working memory. We show that such predictions are significantly better than a baseline classifier even during the early stages of visualization usage. These findings are discussed in view of designing visualization systems that can adapt to each individual user in real-time.
Traffic Surveillance using Multi-Camera Detection and Multi-Target Tracking
Non-intrusive video-detection for traffic flow observation and surveillance is the primary alternative to conventional inductive loop detectors. Video Image Detection Systems (VIDS) can derive traffic parameters by means of image processing and pattern recognition methods. Existing VIDS emulate the inductive loops. We propose a trajectory based recognition algorithm to expand the common approach and to obtain new types of information (e.g. queue length or erratic movements). Different views of the same area by more than one camera sensor is necessary, because of the typical limitations of single camera systems, resulting from the occlusion effect of other cars, trees and traffic signs. A distributed cooperative multi-camera system enables a significant enlargement of the observation area. The trajectories are derived from multi-target tracking. The fusion of object data from different cameras is done using a tracking method. This approach opens up opportunities to identify and specify traffic objects, their location, speed and other characteristic object information. The system provides new derived and consolidated information about traffic participants. Thus, this approach is also beneficial for a description of individual traffic participants.
Minimum Cost Fault Tolerant Adder Circuits in Reversible Logic Synthesis
Conventional circuits dissipate energy to reload missing information because of the overlapped mapping between input and output vectors. Reversibility recovers this energy loss and prevents bit errors by including a fault-tolerant mechanism. Reversible computing is gaining popularity in various fields such as Quantum Computing, DNA Informatics and CMOS Technology. In this paper, we propose a fault-tolerant design of a Reversible Full Adder (RFT-FA) with minimum quantum cost. We also propose cost-effective designs of Carry Skip Adder (CSA) and Carry Look-Ahead Adder (CLA) circuits using the proposed fault-tolerant full adder. Regular structures for the n-bit Reversible Fault Tolerant Carry Skip Adder (RFT-CSA) and Carry Look-ahead Adder (RFT-CLA) are derived by composing several theorems. The proposed designs are evaluated by jointly minimizing the total gate count, garbage outputs, quantum cost and critical path delay, and by comparison with existing designs.
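The kind of property such designs must satisfy can be illustrated with generic Toffoli/CNOT gates (not the paper's proposed fault-tolerant gates): the sketch below builds a reversible full adder and checks both its adder semantics and its bijectivity on four bits, the property that avoids information loss.

```python
# Reversible full adder from Toffoli (CCNOT) and CNOT gates, with a check
# that the 4-bit mapping is a bijection and computes Sum/Cout correctly.
from itertools import product

def toffoli(a, b, c):          # c ^= a & b
    return a, b, c ^ (a & b)

def cnot(a, b):                # b ^= a
    return a, b ^ a

def reversible_full_adder(a, b, cin, anc=0):
    a, b, anc = toffoli(a, b, anc)       # anc = A.B
    a, b = cnot(a, b)                    # b   = A xor B
    b, cin, anc = toffoli(b, cin, anc)   # anc = Cout
    b, cin = cnot(b, cin)                # cin = Sum
    return a, b, cin, anc                # (A, A^B, Sum, Cout)

outputs = set()
for a, b, c, g in product((0, 1), repeat=4):
    out = reversible_full_adder(a, b, c, g)
    outputs.add(out)
    if g == 0:  # verify adder semantics on the clean ancilla
        assert out[2] == a ^ b ^ c and out[3] == (a + b + c) // 2
print("bijective on 4 bits:", len(outputs) == 16)
```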
Expropriation and Control Rights: A Dynamic Model of
This paper studies the strategic interaction between a foreign direct investor and a host country. We analyze how the investor can use his control rights to protect his investment if he faces the risk of "creeping expropriation" once his investment is sunk. It is shown that this hold-up problem may cause underinvestment if the outside option of the investor is too weak, and overinvestment if it is too strong. We also analyze the impact of spillover effects, we give a rationale for "tax holidays" and we examine how stochastic returns affect the strategic interaction of investor and host country.
Enhancing SOM Based Data Visualization
The Self-Organizing Map (SOM) is an effective data exploration tool. One of the reasons for this is that it is conceptually very simple and its visualization is easy. In this paper, we propose new ways to enhance the visualization capabilities of the SOM in three areas: clustering, correlation hunting, and novelty detection. These enhancements are illustrated by various examples using real-world data.
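For reference, the SOM training loop underlying such visualizations fits in a short numpy sketch; grid size, learning rates, and data below are illustrative.

```python
# Compact SOM training loop: find the best-matching unit for each sample and
# pull its grid neighbourhood toward the sample with decaying rate/radius.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((500, 3))                 # toy data, e.g. RGB colours
rows, cols = 10, 10
W = rng.random((rows, cols, X.shape[1]))  # codebook vectors
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), -1)

n_iter, lr0, sigma0 = 3000, 0.5, max(rows, cols) / 2
for t in range(n_iter):
    x = X[rng.integers(len(X))]
    d = ((W - x) ** 2).sum(axis=2)                       # distances to units
    bmu = np.unravel_index(d.argmin(), d.shape)          # best-matching unit
    frac = t / n_iter
    lr, sigma = lr0 * (1 - frac), sigma0 * np.exp(-4 * frac)
    h = np.exp(-((grid - bmu) ** 2).sum(axis=2) / (2 * sigma ** 2))
    W += lr * h[..., None] * (x - W)                     # neighbourhood update

# The trained codebook W is the basis for U-matrix-style cluster displays.
print("codebook shape:", W.shape)
```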
Nerve transfers in tetraplegia I: Background and technique
BACKGROUND The recovery of hand function is consistently rated as the highest priority for persons with tetraplegia. Recovering even partial arm and hand function can have an enormous impact on independence and quality of life of an individual. Currently, tendon transfers are the accepted modality for improving hand function. In this procedure, the distal end of a functional muscle is cut and reattached at the insertion site of a nonfunctional muscle. The tendon transfer sacrifices the function at a lesser location to provide function at a more important location. Nerve transfers are conceptually similar to tendon transfers and involve cutting and connecting a healthy but less critical nerve to a more important but paralyzed nerve to restore its function. METHODS We present a case of a 28-year-old patient with a C5-level ASIA B (international classification level 1) injury who underwent nerve transfers to restore arm and hand function. Intact peripheral innervation was confirmed in the paralyzed muscle groups corresponding to finger flexors and extensors, wrist flexors and extensors, and triceps bilaterally. Volitional control and good strength were present in the biceps and brachialis muscles, the deltoid, and the trapezius. The patient underwent nerve transfers to restore finger flexion and extension, wrist flexion and extension, and elbow extension. Intraoperative motor-evoked potentials and direct nerve stimulation were used to identify donor and recipient nerve branches. RESULTS The patient tolerated the procedure well, with a preserved function in both elbow flexion and shoulder abduction. CONCLUSIONS Nerve transfers are a technically feasible means of restoring the upper extremity function in tetraplegia in cases that may not be amenable to tendon transfers.
Inverse Optimization: A New Perspective on the Black-Litterman Model
The Black-Litterman (BL) model is a widely used asset allocation model in the financial industry. In this paper, we provide a new perspective. The key insight is to replace the statistical framework in the original approach with ideas from inverse optimization. This insight allows us to significantly expand the scope and applicability of the BL model. We provide a richer formulation that, unlike the original model, is flexible enough to incorporate investor information on volatility and market dynamics. Equally importantly, our approach allows us to move beyond the traditional mean-variance paradigm of the original model and construct "BL"-type estimators for more general notions of risk such as coherent risk measures. Computationally, we introduce and study two new "BL"-type estimators and their corresponding portfolios: a Mean Variance Inverse Optimization (MV-IO) portfolio and a Robust Mean Variance Inverse Optimization (RMV-IO) portfolio. These two approaches are motivated by ideas from arbitrage pricing theory and volatility uncertainty. Using numerical simulation and historical backtesting, we show that both methods often demonstrate a better risk-reward tradeoff than their BL counterparts and are more robust to incorrect investor views.
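For contrast with the inverse-optimization view, the classical BL posterior-mean estimator that the paper reinterprets can be written down directly; this is the textbook formula, not the paper's MV-IO or RMV-IO construction, and all inputs below are illustrative.

```python
# Classical Black-Litterman posterior returns: blend equilibrium-implied
# returns with investor views, then form the unconstrained MV portfolio.
import numpy as np

Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])   # asset covariance (assumed)
w_mkt = np.array([0.6, 0.4])                     # market-cap weights (assumed)
delta, tau = 2.5, 0.05                           # risk aversion, uncertainty scale

pi = delta * Sigma @ w_mkt                       # implied equilibrium returns

# One view: asset 1 will outperform asset 2 by 2%, with confidence Omega.
P = np.array([[1.0, -1.0]])
q = np.array([0.02])
Omega = np.array([[0.001]])

A = np.linalg.inv(tau * Sigma)
mu_bl = np.linalg.solve(A + P.T @ np.linalg.inv(Omega) @ P,
                        A @ pi + P.T @ np.linalg.inv(Omega) @ q)
w_bl = np.linalg.solve(delta * Sigma, mu_bl)     # unconstrained MV portfolio
print("posterior returns:", mu_bl, "weights:", w_bl)
```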
A master-slave synchronization model for enhanced servo clock design
Slave servo clocks have an essential role in hardware and software synchronization techniques based on the Precision Time Protocol (PTP). The objective of servo clocks is to remove the drift between slave and master nodes, while keeping the output timing jitter within given uncertainty boundaries. Up to now, no generally accepted criteria exist for servo clock design; in fact, the relationship between controller design, performance, and uncertainty sources remains poorly characterized. In this paper, we propose a simple but comprehensive linear model intended to support the design of enhanced servo clock architectures.
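A discrete-time PI servo, a common choice for such slave clocks, can be simulated in a few lines; the gains and noise levels below are illustrative and not taken from the paper.

```python
# Toy PI servo clock: a proportional-integral controller steers the slave's
# rate from noisy offset measurements; the integral term learns the drift.
import numpy as np

rng = np.random.default_rng(0)
dt = 1.0                    # sync interval (s), assumed
drift = 50e-6               # true slave frequency error (50 ppm), assumed
kp, ki = 0.7, 0.3           # PI gains, illustrative

offset, integ, freq_corr = 0.0, 0.0, 0.0
for step in range(60):
    offset += (drift - freq_corr) * dt        # offset grows with residual rate error
    meas = offset + rng.normal(0, 1e-6)       # noisy PTP offset measurement
    integ += ki * meas                        # integral term accumulates drift
    freq_corr = kp * meas + integ             # PI frequency correction
    if step % 10 == 0:
        print(f"step {step:2d}: offset {offset * 1e6:9.3f} us, "
              f"freq corr {freq_corr * 1e6:7.2f} ppm")
```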
Exercise pressor reflex in humans with end-stage renal disease.
Previous work has suggested that end-stage renal disease (ESRD) patients may have an exaggerated sympathetic nervous system (SNS) response during exercise. We hypothesized that ESRD patients have an exaggerated blood pressure (BP) response during moderate static handgrip exercise (SHG 30%) and that the exaggerated BP response is mediated by SNS overactivation, characterized by augmented mechanoreceptor activation and blunted metaboreceptor control, as has been described in other chronic diseases. We measured hemodynamics and muscle sympathetic nerve activity (MSNA) in 13 ESRD and 16 controls during: 1) passive hand movement (PHM; mechanoreceptor isolation); 2) low-level rhythmic handgrip exercise (RHG 20%; central command and mechanoreceptor activation); 3) SHG 30%, followed by posthandgrip circulatory arrest (PHGCA; metaboreceptor activation); and 4) cold pressor test (CPT; nonexercise stimulus). ESRD patients had exaggerated increases in systolic BP during SHG 30%; however, the absolute and relative increase in MSNA was not augmented, excluding SNS overactivation as the cause of the exaggerated BP response. Increase in MSNA was not exaggerated during RHG 20% and PHM, demonstrating that mechanoreceptor activation is not heightened in ESRD. During PHGCA, MSNA remained elevated in controls but decreased rapidly to baseline levels in ESRD, indicative of markedly blunted metaboreceptor control of MSNA. MSNA response to CPT was virtually identical in ESRD and controls, excluding a generalized sympathetic hyporeactivity in ESRD. In conclusion, ESRD patients have an exaggerated increase in SBP during SHG 30% that is not mediated by overactivation of the SNS directed to muscle. SBP responses were also exaggerated during mechanoreceptor activation and metaboreceptor activation, but without concomitant augmentation in MSNA responses. Metaboreceptor control of MSNA was blunted in ESRD, but the overall ability to mount a SNS response was not impaired. Other mechanisms besides SNS overactivation, such as impaired vasodilatation, should be explored to explain the exaggerated exercise pressor reflex in ESRD.
Security and Privacy Challenges in Cloud Computing Environments
Cloud computing is an evolving paradigm with tremendous momentum, but its unique aspects exacerbate security and privacy challenges. This article explores the roadblocks and solutions to providing a trustworthy cloud computing environment.
Multi words quran and hadith searching based on news using TF-IDF
Each week, religious leaders need to give advice to their community. Religious advice should ideally contain discussion and solutions of the problems arising in society. But the large number of religious resources that must be considered against the many arising problems makes this religious task far from easy. In Muslim communities especially, the religious resources are the Quran and the Kutubus Sitah, the six most referenced collections of reports (hadith) about Muhammad (pbuh). The problems arising in society can be read from various online mass media. Doing this manually, scholars must know the Arabic words for the problem, search manually through the Mu'jam (Quran and hadith index), and then write out the verses or hadith found. The TF-IDF method is often used for term weighting in information retrieval and text mining. This research builds a tool that takes mass media news as input, performs multi-word searching over the database using TF-IDF (Term Frequency - Inverse Document Frequency), and returns relevant verses of the Quran and hadith. The top five most relevant verses and hadith are displayed. As judged by religious leaders, the application achieves 60% precision for Quranic verses and 53% for hadith, with an average query time of 2.706 seconds.
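The retrieval step might be sketched with scikit-learn as follows: build a TF-IDF index over the verse/hadith texts and rank them against a multi-word, news-derived query by cosine similarity, keeping the top five. The corpus snippets below are placeholders, not the actual Quran/hadith database.

```python
# TF-IDF index plus cosine-similarity ranking for a multi-word query,
# returning the top-5 documents. Corpus strings are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "verse text about charity and helping the poor",
    "hadith text about honesty in trade",
    "verse text about patience in hardship",
    "hadith text about kindness to neighbours",
    "verse text about justice between people",
    "hadith text about seeking knowledge",
]

vectorizer = TfidfVectorizer()
index = vectorizer.fit_transform(corpus)            # one TF-IDF row per document

query = "news report on dishonest traders cheating in the market"
scores = cosine_similarity(vectorizer.transform([query]), index).ravel()

top5 = scores.argsort()[::-1][:5]
for rank, i in enumerate(top5, 1):
    print(f"{rank}. score={scores[i]:.3f}  {corpus[i]}")
```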
Dissecting games engines: The case of Unity3D
Recent trends in how video games are played have created a need to revise the game engine architecture. Indeed, game players are increasingly mobile, using smartphones and tablets that lack CPU resources compared to PCs and dedicated consoles. Two emerging solutions, cloud gaming and computation offloading, represent the next steps toward improving the game player experience. Consequently, dissecting and analyzing game engine performance would help us understand how to move in these new directions, an analysis so far missing from the literature. In this paper, we fill this gap by analyzing and evaluating one of the most popular game engines, namely Unity3D. First, we dissect the Unity3D architecture and modules. A benchmark is then used to evaluate the CPU and GPU performance of the different modules constituting Unity3D, for five representative games.
A classification of glycosyl hydrolases based on amino acid sequence similarities.
The amino acid sequences of 301 glycosyl hydrolases and related enzymes have been compared. A total of 291 sequences corresponding to 39 EC entries could be classified into 35 families. Only ten sequences (less than 5% of the sample) could not be assigned to any family. With the sequences available for this analysis, 18 families were found to be monospecific (containing only one EC number) and 17 were found to be polyspecific (containing at least two EC numbers). Implications on the folding characteristics and mechanism of action of these enzymes and on the evolution of carbohydrate metabolism are discussed. With the steady increase in sequence and structural data, it is suggested that the enzyme classification system should perhaps be revised.
Bilinear classifiers for visual recognition
We describe an algorithm for learning bilinear SVMs. Bilinear classifiers are a discriminative variant of bilinear models, which capture the dependence of data on multiple factors. Such models are particularly appropriate for visual data that is better represented as a matrix or tensor, rather than a vector. Matrix encodings allow for more natural regularization through rank restriction. For example, a rank-one scanning-window classifier yields a separable filter. Low-rank models have fewer parameters and so are easier to regularize and faster to score at run-time. We learn low-rank models with bilinear classifiers. We also use bilinear classifiers for transfer learning by sharing linear factors between different classification tasks. Bilinear classifiers are trained with biconvex programs. Such programs are optimized with coordinate descent, where each coordinate step requires solving a convex program; in our case, we use a standard off-the-shelf SVM solver. We demonstrate bilinear SVMs on the difficult problems of people detection and action classification in video sequences, achieving state-of-the-art results in both.
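The biconvex training scheme can be sketched as follows: with score tr((U V^T)^T X), fixing V turns learning U into a linear SVM on the features X V (and symmetrically for V), so training alternates two off-the-shelf linear SVM fits. Data shapes and the rank below are illustrative assumptions.

```python
# Sketch of coordinate-descent training for a bilinear SVM: alternate linear
# SVM fits for the two low-rank factors U and V. Data are synthetic.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n, d1, d2, r = 200, 12, 8, 2                # samples, matrix dims, rank
X = rng.standard_normal((n, d1, d2))        # matrix-shaped examples
W_true = rng.standard_normal((d1, r)) @ rng.standard_normal((r, d2))
y = np.sign(np.einsum("nij,ij->n", X, W_true))

V = rng.standard_normal((d2, r))            # random init of one factor
for it in range(10):
    # Fix V: score = vec(U)^T vec(X V); learn U with a linear SVM.
    F_u = np.einsum("nij,jk->nik", X, V).reshape(n, -1)
    U = LinearSVC(fit_intercept=False).fit(F_u, y).coef_.reshape(d1, r)
    # Fix U: score = vec(V)^T vec(X^T U); learn V likewise.
    F_v = np.einsum("nij,ik->njk", X, U).reshape(n, -1)
    V = LinearSVC(fit_intercept=False).fit(F_v, y).coef_.reshape(d2, r)

W = U @ V.T                                  # final low-rank classifier
acc = (np.sign(np.einsum("nij,ij->n", X, W)) == y).mean()
print(f"training accuracy of rank-{r} bilinear SVM: {acc:.3f}")
```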
A fuzzy inhomogeneous multiattribute group decision making approach to solve outsourcing provider selection problems
Considering the various situations and characteristics of supply chain management, we regard outsourcing provider selection as a type of fuzzy inhomogeneous multiattribute group decision making (MAGDM) problem with fuzzy comparisons of alternatives and incomplete weight information. We therefore focus on developing a new fuzzy linear programming method for solving such MAGDM problems. In this method, the decision makers' preferences are given through pair-wise comparisons of alternatives with fuzzy truth degrees represented as trapezoidal fuzzy numbers (TrFNs). Intuitionistic fuzzy sets, TrFNs, intervals and real numbers are used to express the inhomogeneous decision information. Under the condition that the fuzzy positive ideal solution (PIS) and the fuzzy negative ideal solution (NIS) are known, fuzzy consistency and inconsistency indices are defined on the basis of relative closeness degrees and expressed as TrFNs. The attribute weights are estimated by constructing a new fuzzy linear programming model, which is solved with the developed method of fuzzy linear programming with TrFNs. By solving the constructed linear goal programming model, we obtain the collective comprehensive relative closeness degrees of the alternatives to the fuzzy PIS, which are used to rank the alternatives. The effectiveness of the proposed method is verified with an example of IT outsourcing provider selection.
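The abstract does not define TrFN operations; the following is a minimal sketch of trapezoidal fuzzy numbers with a standard centroid defuzzification used to rank alternatives by fuzzy closeness degrees. The centroid rule and the example values are assumptions for illustration; the paper's fuzzy consistency/inconsistency indices and linear programs are more elaborate.

```python
# Minimal sketch: trapezoidal fuzzy numbers (TrFNs) and centroid
# defuzzification used to rank alternatives by their closeness degrees.
# Values and the ranking rule are illustrative, not the paper's exact method.
from dataclasses import dataclass

@dataclass
class TrFN:
    a: float  # left end of the support
    b: float  # left shoulder of the plateau
    c: float  # right shoulder of the plateau
    d: float  # right end of the support

    def centroid(self) -> float:
        # Centroid of a trapezoidal membership function (standard formula);
        # assumes a < d, i.e. the TrFN is not a crisp number.
        num = self.c**2 + self.c * self.d + self.d**2 \
            - self.a**2 - self.a * self.b - self.b**2
        return num / (3 * (self.c + self.d - self.a - self.b))

# Hypothetical collective relative closeness degrees of three providers
closeness = {
    "provider_A": TrFN(0.40, 0.50, 0.60, 0.70),
    "provider_B": TrFN(0.55, 0.60, 0.65, 0.80),
    "provider_C": TrFN(0.30, 0.45, 0.50, 0.55),
}

# Rank alternatives by defuzzified closeness to the fuzzy PIS (higher is better)
ranking = sorted(closeness, key=lambda k: closeness[k].centroid(), reverse=True)
print(ranking)
```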
Peer-supported storytelling for grieving pediatric oncology nurses.
Telling stories about deceased patients to supportive peers is frequently mentioned as an activity used for meaning-making in anecdotal reports of clinical practice and the literature addressing nurses' experiences caring for dying children. This study examines peer-supported storytelling for grieving pediatric oncology nurses using a mixed methods single-group descriptive repeated measures design. Participants were 6 registered nurses from a tertiary care pediatric hospital inpatient oncology unit who self-identified as experiencing grief. Participants met in self-selected dyads for 2 storytelling sessions. Questionnaires were completed at baseline, midpoint, and study end. Sessions were audio-recorded. Participants reported (1) receiving and providing support during sessions; (2) that sessions had an impact on their grief; (3) that sessions had an impact on their meaning-making, and the explicit session focus on making sense of and identifying benefit in their experiences was particularly helpful. There was a significant positive correlation between participant report of number of special patient deaths during career and impact of sessions on grief.
An empirical assessment of households sorting into private schooling under public education provision
We estimate structural quantile treatment effects to analyze the relationship between household income and sorting into private or public education, using Italian data. Public education provision is redistributive when rich families, who contribute to its financing, find it optimal to sort out of the public system and buy educational services in the private market. This may occur when education quality is lower in the public sector than in the private sector, meaning that households with higher income capacity face lower opportunity costs from sorting out of the public system. We exploit heterogeneity in expected tax deductions to exogenously manipulate the distribution of net-of-taxes income, equalized by family needs, and explore the consequences of this manipulation for various quantiles of the distribution of educational transfers in-kind accruing to households that value public education. We find that an increase in income reduces the amount of educational transfers in-kind (i) more at higher quantiles of the educational transfers in-kind, indicating that households with higher preferences for quality sort out of the public education system; and (ii) more at lower quantiles of the households' income capacity, indicating that richer households receive lower transfers for a given quality preference.
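The structural estimator itself is not spelled out in the abstract; the sketch below shows only the basic ingredient, a quantile regression of in-kind transfers on income at several quantiles. The data, variable names, and functional form are placeholders, and the paper's instrumenting via expected tax deductions is omitted.

```python
# Minimal sketch: quantile regression of in-kind educational transfers on
# household income at several quantiles. Plain quantile regression, not the
# paper's structural estimator; data and variable names are made up.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
income = rng.lognormal(mean=10, sigma=0.5, size=n)
# Placeholder relationship: transfers fall with income, noisily
transfer = 5000 - 0.02 * income + rng.normal(0, 300, size=n)
df = pd.DataFrame({"transfer": transfer, "income": income})

# The income effect can differ across quantiles of the transfer distribution
for q in (0.25, 0.50, 0.75):
    fit = smf.quantreg("transfer ~ income", df).fit(q=q)
    print(f"q={q:.2f}  income effect: {fit.params['income']:.4f}")
```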
Did a quality improvement collaborative make stroke care better? A cluster randomized trial
BACKGROUND Stroke can result in death and long-term disability. Fast and high-quality care can reduce the impact of stroke, but UK national audit data have demonstrated variability in compliance with recommended processes of care. Though quality improvement collaboratives (QICs) are widely used, whether a QIC could improve the reliability of stroke care was unknown. METHODS Twenty-four NHS hospitals in the Northwest of England were randomly allocated either to participate in Stroke 90:10, a QIC based on the Breakthrough Series (BTS) model, or to a control group providing normal care. The QIC focused on nine processes of quality care for stroke already used in the national stroke audit. The nine processes were grouped into two distinct care bundles: one relating to early-hours care and one relating to rehabilitation following stroke. Using an interrupted time series design and difference-in-differences analysis, we aimed to determine whether hospitals participating in the QIC improved more than the control group on bundle compliance. RESULTS Data were available from nine intervention hospitals (3,533 patients) and nine control hospitals (3,059 patients). Hospitals in the QIC showed a modest improvement from baseline in the odds of average compliance, equivalent to a relative improvement of 10.9% (95% CI 1.3%, 20.6%) in the Early Hours Bundle and 11.2% (95% CI 1.4%, 21.5%) in the Rehabilitation Bundle. Secondary analysis suggested that some specific processes were more sensitive to an intervention effect. CONCLUSIONS Some aspects of stroke care improved during the QIC, but the effects of the QIC were modest and further improvement is needed. The extent to which a BTS QIC can improve the quality of stroke care remains uncertain. Some aspects of care may respond better to collaboratives than others. TRIAL REGISTRATION ISRCTN13893902.
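For readers unfamiliar with the analysis named above, the sketch below shows the core of a difference-in-differences regression: the coefficient on the group-by-period interaction estimates the change attributable to the collaborative. The linear model on a continuous compliance score, the variable names, and the data are assumptions; the study itself modeled odds of compliance in an interrupted time series.

```python
# Minimal sketch of a difference-in-differences regression: the coefficient
# on qic:post is the DiD estimate of the collaborative's effect on bundle
# compliance. Data and model form are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000
qic = rng.integers(0, 2, n)            # 1 = QIC hospital, 0 = control
post = rng.integers(0, 2, n)           # 1 = after the intervention started
compliance = 0.5 + 0.05 * post + 0.08 * qic * post + rng.normal(0, 0.1, n)
df = pd.DataFrame({"compliance": compliance, "qic": qic, "post": post})

fit = smf.ols("compliance ~ qic * post", df).fit()
print(fit.params["qic:post"])          # the difference-in-differences estimate
```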
Deep Embedding Network for Clustering
Clustering is a fundamental technique widely used for exploring the inherent structure of data in pattern recognition and machine learning. Most existing methods, such as k-means and spectral clustering, focus on modeling the similarity/dissimilarity relationships among instances and neglect to extract more effective representations for clustering. In this paper, we propose a deep embedding network for representation learning that is more beneficial for clustering because it places two constraints on the learned representations. We first utilize a deep autoencoder to learn reduced representations from the raw data. To make the learned representations suitable for clustering, we impose a locality-preserving constraint, which aims to embed the original data into its underlying manifold space. Then, unlike spectral clustering, which extracts representations from a block-diagonal similarity matrix, we apply a group sparsity constraint to the learned representations, aiming to learn block-diagonal representations in which the nonzero groups correspond to the clusters. After obtaining the learned representations, we cluster them with k-means. To evaluate the proposed deep embedding network, we compare its performance with k-means and spectral clustering on three commonly used datasets. The experiments demonstrate that the proposed method achieves promising performance.
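A minimal sketch of the described pipeline follows: an autoencoder whose bottleneck is regularized by (i) a locality-preserving term pulling embeddings of neighboring inputs together and (ii) a group-sparsity term over groups of embedding coordinates, followed by k-means on the embeddings. The architecture sizes, penalty weights, group layout, and data are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: autoencoder + locality-preserving and group-sparsity
# penalties on the embedding, then k-means on the learned representations.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

d, h, k = 50, 10, 5                        # input dim, embedding dim, clusters
X = torch.randn(512, d)                    # placeholder data

encoder = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, h))
decoder = nn.Sequential(nn.Linear(h, 32), nn.ReLU(), nn.Linear(32, d))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

# Precompute nearest neighbors in input space for the locality term
nn_idx = torch.cdist(X, X).topk(k=6, largest=False).indices[:, 1:]  # 5 each

lam1, lam2 = 0.1, 0.01                     # penalty weights (assumed)
groups = torch.arange(h).reshape(k, -1)    # embedding split into k groups

for epoch in range(50):
    z = encoder(X)
    recon = decoder(z)
    loss = ((recon - X) ** 2).mean()                      # reconstruction

    # Locality-preserving: embeddings of neighboring inputs stay close
    loss = loss + lam1 * ((z.unsqueeze(1) - z[nn_idx]) ** 2).mean()

    # Group sparsity: sum of L2 norms over groups of embedding coordinates
    loss = loss + lam2 * sum(z[:, g].norm(dim=1).mean() for g in groups)

    opt.zero_grad(); loss.backward(); opt.step()

labels = KMeans(n_clusters=k, n_init=10).fit_predict(encoder(X).detach().numpy())
```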
P450s and UGTs: Key Players in the Structural Diversity of Triterpenoid Saponins.
The recent spread of next-generation sequencing techniques has facilitated transcriptome analyses of non-model plants. As a result, many of the genes encoding enzymes related to the production of specialized metabolites have been identified. Compounds derived from 2,3-oxidosqualene (the common precursor of sterols, steroids and triterpenoids), a linear compound of 30 carbon atoms produced through the mevalonate pathway, are called triterpenes. These include the essential sterols, which are structural components of biomembranes; steroids such as the plant hormones brassinolides and the potato toxin solanine; and the structurally diverse triterpenoids. Triterpenoids bearing one or more sugar moieties attached to triterpenoid aglycones are called triterpenoid saponins. Triterpenoid saponins have been shown to have various medicinal properties, such as anti-inflammatory, anticarcinogenic and antiviral effects. This review summarizes recent progress in gene discovery and in elucidating the biochemical functions of biosynthetic enzymes in triterpenoid saponin biosynthesis. Special focus is placed on the key players in generating the structural diversity of triterpenoid saponins: the cytochrome P450 monooxygenases (P450s) and the UDP-dependent glycosyltransferases (UGTs). Perspectives on further gene discovery and on the use of biosynthetic genes for the microbial production of plant-derived triterpenoid saponins are also discussed.
Network Intrusion Detection System for Denial of Service Attack based on Misuse Detection
In a wireless network system, security is a main concern for users. Such systems suffer mainly from two security threats: (i) virus attacks and (ii) intruders. An intruder does not only aim to steal private information over the network; it may also consume a node's bandwidth and increase the delay of service for other hosts on the network. This paper addresses this type of attack. It reviews and compares different intrusion detection systems. Based on the reviewed work, we propose a new network intrusion detection system that detects the most prominent attack on wireless networks, the DoS attack. The proposed system is an intelligent system that detects intrusions dynamically on the basis of misuse detection, which yields very few false negatives. The system detects intruders not only by their IP addresses but also by inspecting packet contents.
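The abstract gives no algorithmic detail; the sketch below illustrates the general idea of misuse (signature-based) detection for DoS-style traffic by combining a per-source packet-rate threshold with payload signature matching. The signatures, threshold, and packet interface are assumptions, not the paper's rules.

```python
# Minimal sketch of misuse (signature-based) detection: a per-source rate
# threshold flags flooding, and payload signatures flag known malicious
# content. Signatures, the threshold, and the packet format are assumed.
from collections import defaultdict, deque
import time

SIGNATURES = [b"\x90\x90\x90\x90", b"GET /../../"]   # hypothetical patterns
RATE_LIMIT = 100              # max packets per source IP per second (assumed)

window = defaultdict(deque)   # source IP -> timestamps of recent packets

def inspect(src_ip, payload, now=None):
    """Return an alert string if the packet matches a misuse rule."""
    now = time.monotonic() if now is None else now

    # Rule 1: flooding -- too many packets from one source within 1 second
    ts = window[src_ip]
    ts.append(now)
    while ts and now - ts[0] > 1.0:
        ts.popleft()
    if len(ts) > RATE_LIMIT:
        return f"DoS flood suspected from {src_ip}"

    # Rule 2: content signature match (detection by contents, not just IP)
    for sig in SIGNATURES:
        if sig in payload:
            return f"Signature {sig!r} matched from {src_ip}"
    return None

# Example: a burst of packets from one host triggers the flood rule
for i in range(150):
    alert = inspect("10.0.0.5", b"ping", now=i * 0.001)
print(alert)
```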
Ascendance, resistance, resilience: concepts and analyses for designing energy and water systems in a changing climate