_id: string (length 40)
title: string (length 8 to 300)
text: string (length 0 to 10k)
6b557c35514d4b6bd75cebdaa2151517f5e820e2
Prediction, operations, and condition monitoring in wind energy
Recent developments in wind energy research including wind speed prediction, wind turbine control, operations of hybrid power systems, as well as condition monitoring and fault detection are surveyed. Approaches based on statistics, physics, and data mining for wind speed prediction at different time scales are reviewed. Comparative analysis of prediction results reported in the literature is presented. Studies of classical and intelligent control of wind turbines involving different objectives and strategies are reported. Models for planning operations of different hybrid power systems including wind generation for various objectives are addressed. Methodologies for condition monitoring and fault detection are discussed. Future research directions in wind energy are proposed.
e81f115f2ac725f27ea6549f4de0a71b3a3f6a5c
NEUROPSI: a brief neuropsychological test battery in Spanish with norms by age and educational level.
The purpose of this research was to develop, standardize, and test the reliability of a short neuropsychological test battery in the Spanish language. This neuropsychological battery was named "NEUROPSI," and was developed to assess briefly a wide spectrum of cognitive functions, including orientation, attention, memory, language, visuoperceptual abilities, and executive functions. The NEUROPSI includes items that are relevant for Spanish-speaking communities. It can be applied to illiterates and low educational groups. Administration time is 25 to 30 min. Normative data were collected from 800 monolingual Spanish-speaking individuals, ages 16 to 85 years. Four age groups were used: (1) 16 to 30 years, (2) 31 to 50 years, (3) 51 to 65 years, and (4) 66 to 85 years. Data also are analyzed and presented within 4 different educational levels that were represented in this sample: (1) illiterates (zero years of school); (2) 1 to 4 years of school; (3) 5 to 9 years of school; and (4) 10 or more years of formal education. The effects of age and education, as well as the factor structure of the NEUROPSI, are analyzed. The NEUROPSI may fulfill the need for brief, reliable, and objective evaluation of a broad range of cognitive functions in Spanish-speaking populations.
382c057c0be037340e7d6494fc3a580b9d6b958c
Should TED talks be teaching us something?
The nonprofit phenomenon “TED,” the brand name for the concepts of Technology, Entertainment, and Design, was born in 1984. It launched into pop culture stardom in 2006 when the organization’s curators began offering short, free, unrestricted, and educational video segments. Known as “TED Talks,” these informational segments are designed to be no longer than 18 minutes in length and provide succinct, targeted enlightenment on various topics or ideas that are deemed “worth spreading.” TED Talks, often delivered in sophisticated studios with trendy backdrops, follow a format that focuses learners on the presenter and limited, extremely purposeful visual aids. Topics range from global warming to running to the developing world. Popular TED Talks, such as Sir Ken Robinson’s “Schools Kill Creativity” or Dan Gilbert’s “Why Are We Happy?” can easily garner well over a million views. TED Talks are a curious phenomenon for educators to observe. They are in many ways the antithesis of traditional lectures, which are typically 60-120 minutes in length and delivered in cavernous halls by faculty members engaged in everyday academic lives. Perhaps the formality of the lecture is the biggest superficial difference in comparison to casual TED Talks (Table 1). However, TED Talks are not as unstructured as they may appear. Presenters are well coached and instructed to follow a specific presentation formula, which maximizes storyboarding and highlights passion for the subject. While learning is not formally assessed, TED Talks do seem to accomplish their goals of spreading ideas while sparking curiosity within the learner. The fact that some presentations have been viewed more than 16 million times points to the effectiveness of the platform in at least reaching learners and stimulating a desire to click, listen, and learn. Moreover, the TED Talks website is the fourth most popular technology website and the single most popular conference and events website in the world. The TED phenomenon may have both direct and subliminal messages for academia. Perhaps an initial question to ponder is whether the TED phenomenon is a logical grassroots educational evolution or a reaction to the digital generation and their preference for learning that occurs “wherever, whenever.” The diverse cross-section of TED devotees ranging in background and age would seem to provide evidence that the platform does not solely appeal to younger generations of learners. Instead, it suggests that adult learners are either more drawn to digital learning than they think they are or than they are likely to admit. The perceived efficacy of TED once again calls into question the continued reliance of academia on the lecture as the primary currency of learning. TED Talks do not convey large chunks of information but rather present grander ideas. Would TED-like educational modules or blocks of 18-20 minutes be more likely to pique student curiosity across a variety of pharmacy topics, maintain attention span, and improve retention? Many faculty members who are recognized as outstanding teachers or lecturers might confess that they already teach through a TED-like lens. Collaterally, TED Talks or TED-formatted learning experiences might be ideal springboards for incorporation into inverted or flipped classroom environments where information is gathered and learned at home, while ideas are analyzed, debated, and assimilated within the classroom.
Unarguably, TED Talks have given scientists and other researchers a real-time, mass media-driven opportunity to disseminate their research, ideas, and theories that might otherwise have gone unnoticed. Similar platforms or approaches may be able to provide opportunities for the academy to further transmit research to the general public. The TED approach to idea dissemination is not without its critics. Several authors have criticized TED for flattening or dumbing down ideas so they fit into a preconceived, convenient format that is primarily designed to entertain. Consequently, the oversimplified ideas and concepts may provoke little effort from the learner to analyze data, theory, or controversy.
36eff99a7f23cec395e4efc80ff7f937934c7be6
Geometry and Meaning
Geometry and Meaning is an interesting book about a relationship between geometry and logic defined on certain types of abstract spaces and how that intimate relationship might be exploited when applied in computational linguistics. It is also about an approach to information retrieval, because the analysis of natural language, especially negation, is applied to problems in IR, and indeed illustrated throughout the book by simple examples using search engines. It is refreshing to see IR issues tackled from a different point of view than the standard vector space (Salton, 1968). It is an enjoyable read, as intended by the author, and succeeds as a sort of tourist guide to the subject in hand. The early part of the book concentrates on the introduction of a number of elementary concepts from mathematics: graph theory, linear algebra (especially vector spaces), lattice theory, and logic. These concepts are well motivated and illustrated with good examples, mostly of a classificatory or taxonomic kind. One of the major goals of the book is to argue that non-classical logic, in the form of a quantum logic, is a candidate for analyzing language and its underlying logic, with a promise that such an approach could lead to improved search engines. The argument for this is aided by copious references to early philosophers, scientists, and mathematicians, creating the impression that when Aristotle, Descartes, Boole, and Grassmann were laying the foundations for taxonomy, analytical geometry, logic, and vector spaces, they had a more flexible and broader view of these subjects than is current. This is especially true of logic. Thus the historical approach taken to introducing quantum logic (chapter 7) is to show that this particular kind of logic and its interpretation in vector space were inherent in some of the ideas of these earlier thinkers. Widdows claims that Aristotle was never respected for his mathematics and that Grassmann’s Ausdehnungslehre was largely ignored and left in obscurity. Whether Aristotle was never admired for his mathematics I am unable to judge, but certainly Alfred North Whitehead (1925) was not complimentary when he said:
f0d82cbac15c4379677d815c9d32f7044b19d869
Emerging Frontiers of Neuroengineering: A Network Science of Brain Connectivity.
Neuroengineering is faced with unique challenges in repairing or replacing complex neural systems that are composed of many interacting parts. These interactions form intricate patterns over large spatiotemporal scales and produce emergent behaviors that are difficult to predict from individual elements. Network science provides a particularly appropriate framework in which to study and intervene in such systems by treating neural elements (cells, volumes) as nodes in a graph and neural interactions (synapses, white matter tracts) as edges in that graph. Here, we review the emerging discipline of network neuroscience, which uses and develops tools from graph theory to better understand and manipulate neural systems from micro- to macroscales. We present examples of how human brain imaging data are being modeled with network analysis and underscore potential pitfalls. We then highlight current computational and theoretical frontiers and emphasize their utility in informing diagnosis and monitoring, brain-machine interfaces, and brain stimulation. A flexible and rapidly evolving enterprise, network neuroscience provides a set of powerful approaches and fundamental insights that are critical for the neuroengineer's tool kit.
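As a rough illustration of the graph representation described above (neural elements as nodes, their interactions as weighted edges), the following Python sketch builds a synthetic adjacency matrix and computes two elementary network measures; the parcellation size and edge threshold are arbitrary assumptions, not values from the review.

```python
import numpy as np

# Minimal sketch: a (synthetic) brain network as a weighted adjacency
# matrix, with two basic measures used in network neuroscience.
rng = np.random.default_rng(0)

n_regions = 8                              # hypothetical parcellation size
A = rng.random((n_regions, n_regions))
A = (A + A.T) / 2                          # undirected: symmetrize
np.fill_diagonal(A, 0.0)                   # no self-connections

strength = A.sum(axis=1)                   # weighted degree of each node
# fraction of ordered node pairs whose connection weight exceeds 0.5
density = (A > 0.5).sum() / (n_regions * (n_regions - 1))

print("node strengths:", np.round(strength, 2))
print("density of strong edges:", round(density, 3))
```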
f7d5f8c60972c18812925715f685ce8ae5d5659d
A new exact method for the two-dimensional orthogonal packing problem
The two-dimensional orthogonal packing problem (2OPP) consists of determining if a set of rectangles (items) can be packed into one rectangle of fixed size (bin). In this paper we propose two exact algorithms for solving this problem. The first algorithm is an improvement on a classical branch&bound method, whereas the second algorithm is based on a two-step enumerative method. We also describe reduction procedures and lower bounds which can be used within the branch&bound method. We report computational experiments for randomly generated benchmarks, which demonstrate the efficiency of both methods.
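The abstract does not spell out either algorithm; as a point of reference only, the Python sketch below shows a naive exact feasibility search for the same decision problem, enumerating integer positions depth-first with overlap checks. It is exponential and purely illustrative, not the branch&bound or two-step enumerative method proposed in the paper.

```python
from itertools import product

def fits(placed, x, y, w, h, W, H):
    """Check that an item at (x, y) with size (w, h) stays inside the bin
    and does not overlap any already placed item."""
    if x + w > W or y + h > H:
        return False
    for (px, py, pw, ph) in placed:
        if x < px + pw and px < x + w and y < py + ph and py < y + h:
            return False
    return True

def pack(items, W, H, placed=None):
    """Return a feasible placement of all items in a W x H bin, or None.
    Exhaustive depth-first search over integer coordinates (exact for
    integer data, exponential in the worst case)."""
    placed = [] if placed is None else placed
    if len(placed) == len(items):
        return placed
    w, h = items[len(placed)]
    for x, y in product(range(W - w + 1), range(H - h + 1)):
        if fits(placed, x, y, w, h, W, H):
            result = pack(items, W, H, placed + [(x, y, w, h)])
            if result is not None:
                return result
    return None

print(pack([(3, 2), (2, 2), (1, 4)], 4, 4))   # small toy instance
```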
90c1104142203c8ead18882d49bfea8aec23e758
Sensitivity and diagnosticity of NASA-TLX and simplified SWAT to assess the mental workload associated with operating an agricultural sprayer.
The objectives of the present study were: a) to investigate three continuous variants of the NASA-Task Load Index (TLX) (standard NASA (CNASA), average NASA (C1NASA) and principal component NASA (PCNASA)) and five different variants of the simplified subjective workload assessment technique (SSWAT) (continuous standard SSWAT (CSSWAT), continuous average SSWAT (C1SSWAT), continuous principal component SSWAT (PCSSWAT), discrete event-based SSWAT (D1SSWAT) and discrete standard SSWAT (DSSWAT)) in terms of their sensitivity and diagnosticity to assess the mental workload associated with agricultural spraying; b) to compare and select the best variants of NASA-TLX and SSWAT for future mental workload research in the agricultural domain. A total of 16 male university students (mean age 30.4 +/- 12.5 years) participated in this study. All the participants were trained to drive an agricultural spraying simulator. Sensitivity was assessed by the ability of the scales to report the maximum change in workload ratings due to the change in illumination and difficulty levels. In addition, the factor loading method was used to quantify sensitivity. The diagnosticity was assessed by the ability of the scale to diagnose the change in task levels from single to dual. Among all the variants of NASA-TLX and SSWAT, PCNASA and discrete variants of SSWAT showed the highest sensitivity and diagnosticity. Moreover, among all the variants of NASA and SSWAT, the discrete variants of SSWAT showed the highest sensitivity and diagnosticity but also high between-subject variability. The continuous variants of both scales had relatively low sensitivity and diagnosticity and also low between-subject variability. Hence, when selecting a scale for future mental workload research in the agricultural domain, a researcher should decide what to compromise: 1) between-subject variability or 2) sensitivity and diagnosticity. STATEMENT OF RELEVANCE: The use of subjective workload scales is very popular in mental workload research. The present study investigated the different variants of two popular workload rating scales (i.e. NASA-TLX and SSWAT) in terms of their sensitivity and diagnosticity and selected the best variants of each scale for future mental workload research.
b1cbfd6c1e7f8a77e6c1e6db6cd0625e3bd785ef
Stadium Hashing: Scalable and Flexible Hashing on GPUs
Hashing is one of the most fundamental operations that provides a means for a program to obtain fast access to large amounts of data. Despite the emergence of GPUs as many-threaded general purpose processors, high performance parallel data hashing solutions for GPUs are yet to receive adequate attention. Existing hashing solutions for GPUs not only impose restrictions (e.g., inability to concurrently execute insertion and retrieval operations, limitation on the size of key-value data pairs) that limit their applicability, but their performance also does not scale to large hash tables that must be kept out-of-core in the host memory. In this paper we present Stadium Hashing (Stash) that is scalable to large hash tables and practical as it does not impose the aforementioned restrictions. To support large out-of-core hash tables, Stash uses a compact data structure named ticket-board that is separate from hash table buckets and is held inside GPU global memory. The ticket-board locally resolves a significant portion of insertion and lookup operations and hence, by reducing accesses to the host memory, it accelerates the execution of these operations. The split design of the ticket-board also enables arbitrarily large keys and values. Unlike existing methods, Stash naturally supports concurrent insertions and retrievals due to its use of double hashing as the collision resolution strategy. Furthermore, we propose Stash with collaborative lanes (clStash) that enhances GPU's SIMD resource utilization for batched insertions during hash table creation. For concurrent insertion and retrieval streams, Stadium Hashing can be up to 2 and 3 times faster than GPU Cuckoo hashing for in-core and out-of-core tables, respectively.
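The GPU data layout cannot be reconstructed from the abstract, but the collision-resolution idea can be sketched on the CPU. The Python class below uses double hashing for probing and keeps a separate occupancy array loosely analogous to the ticket-board; the class and field names are invented for illustration and this is not the paper's implementation.

```python
class DoubleHashTable:
    """CPU sketch of open addressing with double hashing, plus a separate
    occupancy array loosely analogous to Stash's ticket-board (the real
    ticket-board is a GPU-resident structure; this is only an illustration)."""

    def __init__(self, capacity=101):          # prime capacity keeps probes full-cycle
        self.capacity = capacity
        self.keys = [None] * capacity
        self.values = [None] * capacity
        self.occupied = bytearray(capacity)     # one "ticket" byte per slot

    def _probe(self, key):
        h1 = hash(key) % self.capacity
        h2 = 1 + (hash((key, "salt")) % (self.capacity - 1))   # step size
        for i in range(self.capacity):
            yield (h1 + i * h2) % self.capacity

    def insert(self, key, value):
        for slot in self._probe(key):
            if not self.occupied[slot] or self.keys[slot] == key:
                self.keys[slot], self.values[slot] = key, value
                self.occupied[slot] = 1
                return True
        return False                            # table full

    def lookup(self, key):
        for slot in self._probe(key):
            if not self.occupied[slot]:
                return None                     # empty slot ends the probe sequence
            if self.keys[slot] == key:
                return self.values[slot]
        return None

t = DoubleHashTable()
t.insert("a", 1); t.insert("b", 2)
print(t.lookup("a"), t.lookup("b"), t.lookup("c"))
```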
20f5b475effb8fd0bf26bc72b4490b033ac25129
Real time detection of lane markers in urban streets
We present a robust and real time approach to lane marker detection in urban streets. It is based on generating a top view of the road, filtering using selective oriented Gaussian filters, using RANSAC line fitting to give initial guesses to a new and fast RANSAC algorithm for fitting Bezier Splines, which is then followed by a post-processing step. Our algorithm can detect all lanes in still images of the street in various conditions, while operating at a rate of 50 Hz and achieving comparable results to previous techniques.
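The RANSAC line-fitting stage mentioned above can be sketched in Python/NumPy as follows; the thresholds, iteration count, and toy data are arbitrary assumptions, and the top-view generation, spline fitting, and post-processing steps are omitted.

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=2.0, seed=0):
    """Fit a 2D line to noisy points with RANSAC: repeatedly sample two
    points, build the candidate line, count inliers, keep the best model."""
    rng = np.random.default_rng(seed)
    best_inliers, best_line = None, None
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.hypot(*d)
        if norm < 1e-9:
            continue
        normal = np.array([-d[1], d[0]]) / norm      # unit normal of the line
        dist = np.abs((points - p) @ normal)         # point-to-line distances
        inliers = np.where(dist < inlier_tol)[0]
        if best_inliers is None or len(inliers) > len(best_inliers):
            best_inliers, best_line = inliers, (p, d / norm)
    return best_line, best_inliers

# toy data: a noisy line plus scattered outliers
rng = np.random.default_rng(1)
x = np.linspace(0, 100, 60)
line_pts = np.stack([x, 0.5 * x + 10 + rng.normal(0, 1, x.size)], axis=1)
outliers = rng.uniform(0, 100, size=(20, 2))
line, inliers = ransac_line(np.vstack([line_pts, outliers]))
print("inlier count:", len(inliers))
```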
27edbcf8c6023905db4de18a4189c2093ab39b23
Robust Lane Detection and Tracking in Challenging Scenarios
A lane-detection system is an important component of many intelligent transportation systems. We present a robust lane-detection-and-tracking algorithm to deal with challenging scenarios such as lane curvature, worn lane markings, lane changes, and emerging, ending, merging, and splitting lanes. We first present a comparative study to find a good real-time lane-marking classifier. Once detection is done, the lane markings are grouped into lane-boundary hypotheses. We group left and right lane boundaries separately to effectively handle merging and splitting lanes. A fast and robust algorithm, based on random-sample consensus and particle filtering, is proposed to generate a large number of hypotheses in real time. The generated hypotheses are evaluated and grouped based on a probabilistic framework. The suggested framework effectively combines a likelihood-based object-recognition algorithm with a Markov-style process (tracking) and can also be applied to general part-based object-tracking problems. An experimental result on local streets and highways shows that the suggested algorithm is very reliable.
4d2cd0b25c5b0f69b6976752ebca43ec5f04a461
Lane detection and tracking using B-Snake
In this paper, we propose a B-Snake based lane detection and tracking algorithm without any camera parameters. Compared with other lane models, the B-Snake based lane model is able to describe a wider range of lane structures, since a B-Spline can form any arbitrary shape with a set of control points. The problems of detecting both sides of lane markings (or boundaries) have been merged here into the problem of detecting the mid-line of the lane, by using the knowledge of the perspective parallel lines. Furthermore, a robust algorithm, called CHEVP, is presented for providing a good initial position for the B-Snake. Also, a minimum error method based on the Minimum Mean Square Error (MMSE) is proposed to determine the control points of the B-Snake model from the overall image forces on the two sides of the lane. Experimental results show that the proposed method is robust against noise, shadows, and illumination variations in the captured road images. It is also applicable to marked and unmarked roads, as well as to roads with dashed or solid paint lines.
1c0f7854c14debcc34368e210568696a01c40573
Using vanishing points for camera calibration
In this article a new method for the calibration of a vision system which consists of two (or more) cameras is presented. The proposed method, which uses simple properties of vanishing points, is divided into two steps. In the first step, the intrinsic parameters of each camera, that is, the focal length and the location of the intersection between the optical axis and the image plane, are recovered from a single image of a cube. In the second step, the extrinsic parameters of a pair of cameras, that is, the rotation matrix and the translation vector which describe the rigid motion between the coordinate systems fixed in the two cameras are estimated from an image stereo pair of a suitable planar pattern. Firstly, by matching the corresponding vanishing points in the two images the rotation matrix can be computed, then the translation vector is estimated by means of a simple triangulation. The robustness of the method against noise is discussed, and the conditions for optimal estimation of the rotation matrix are derived. Extensive experimentation shows that the precision that can be achieved with the proposed method is sufficient to efficiently perform machine vision tasks that require camera calibration, like depth from stereo and motion from image sequence.
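A minimal NumPy sketch of the core rotation-from-vanishing-points step, assuming known intrinsics K and two homogeneous vanishing points of orthogonal scene directions; the numbers below are made up, and the translation/triangulation step of the method is not shown.

```python
import numpy as np

def rotation_from_vanishing_points(K, v1, v2):
    """Recover a camera rotation from two image vanishing points of
    orthogonal scene directions, given the intrinsic matrix K.
    v1 and v2 are homogeneous image points (3-vectors)."""
    Kinv = np.linalg.inv(K)
    r1 = Kinv @ v1
    r1 /= np.linalg.norm(r1)
    r2 = Kinv @ v2
    r2 -= (r1 @ r2) * r1          # re-orthogonalize against r1 (noise)
    r2 /= np.linalg.norm(r2)
    r3 = np.cross(r1, r2)         # third axis completes the frame
    return np.column_stack([r1, r2, r3])

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # assumed intrinsics
R = rotation_from_vanishing_points(K, np.array([900.0, 260, 1]),
                                       np.array([-400.0, 250, 1]))
print(np.round(R.T @ R, 3))        # ~identity: R is orthonormal
```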
235aff8bdb65654163110b35f268de6933814c49
Realtime lane tracking of curved local road
A lane detection system is an important component of many intelligent transportation systems. We present a robust realtime lane tracking algorithm for a curved local road. First, we present a comparative study to find a good realtime lane marking classifier. Once lane markings are detected, they are grouped into many lane boundary hypotheses represented by constrained cubic spline curves. We present a robust hypothesis generation algorithm using a particle filtering technique and a RANSAC (random sample consensus) algorithm. We introduce a probabilistic approach to group lane boundary hypotheses into left and right lane boundaries. The proposed grouping approach can be applied to general part-based object tracking problems. It incorporates a likelihood-based object recognition technique into a Markov-style process. An experimental result on local streets shows that the suggested algorithm is very reliable.
514ee2a4d6dec51d726012bd74b32b1e05f13271
The Ontological Foundation of REA Enterprise Information Systems
Philosophers have studied ontologies for centuries in their search for a systematic explanation of existence: “What kind of things exist?” Recently, ontologies have emerged as a major research topic in the fields of artificial intelligence and knowledge management where they address the content issue: “What kind of things should we represent?” The answer to that question differs with the scope of the ontology. Ontologies that are subject-independent are called upper-level ontologies, and they attempt to define concepts that are shared by all domains, such as time and space. Domain ontologies, on the other hand, attempt to define the things that are relevant to a specific application domain. Both types of ontologies are becoming increasingly important in the era of the Internet where consistent and machine-readable semantic definitions of economic phenomena become the language of e-commerce. In this paper, we propose the conceptual accounting framework of the Resource-Event-Agent (REA) model of McCarthy (1982) as an enterprise domain ontology, and we build upon the initial ontology work of Geerts and McCarthy (2000) which explored REA with respect to the ontological categorizations of John Sowa (1999). Because of its conceptual modeling heritage, REA already resembles an established ontology in many declarative (categories) and procedural (axioms) respects, and we also propose here to extend formally that framework both (1) vertically in terms of entrepreneurial logic (value chains) and workflow detail, and (2) horizontally in terms of type and commitment images of enterprise economic phenomena. A strong emphasis throughout the paper is given to the microeconomic foundations of the category definitions.
944692d5d33fbc5f42294a8310380e0b057a1320
Dual- and Multiband U-Slot Patch Antennas
A wide band patch antenna fed by an L-probe can be designed for dual- and multi-band application by cutting U-slots on the patch. Simulation and measurement results are presented to illustrate this design.
6800fbe3314be9f638fb075e15b489d1aadb3030
Advances in Collaborative Filtering
The collaborative filtering (CF) approach to recommenders has recently enjoyed much interest and progress. The fact that it played a central role within the recently completed Netflix competition has contributed to its popularity. This chapter surveys the recent progress in the field. Matrix factorization techniques, which became a first choice for implementing CF, are described together with recent innovations. We also describe several extensions that bring competitive accuracy into neighborhood methods, which used to dominate the field. The chapter demonstrates how to utilize temporal models and implicit feedback to extend model accuracy. In passing, we include detailed descriptions of some of the central methods developed for tackling the challenge of the Netflix Prize competition.
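As a concrete reference for the matrix factorization techniques surveyed, here is a minimal NumPy sketch of biased SGD matrix factorization on explicit ratings; the hyperparameters and toy data are arbitrary, and the temporal and implicit-feedback extensions discussed in the chapter are not included.

```python
import numpy as np

def factorize(ratings, n_factors=8, lr=0.01, reg=0.05, n_epochs=50, seed=0):
    """Plain SGD matrix factorization with user/item biases, the kind of
    baseline model popularized by the Netflix Prize work."""
    rng = np.random.default_rng(seed)
    users = 1 + max(u for u, _, _ in ratings)
    items = 1 + max(i for _, i, _ in ratings)
    P = 0.1 * rng.standard_normal((users, n_factors))   # user factors
    Q = 0.1 * rng.standard_normal((items, n_factors))   # item factors
    bu, bi = np.zeros(users), np.zeros(items)            # user/item biases
    mu = np.mean([r for _, _, r in ratings])              # global mean
    for _ in range(n_epochs):
        for u, i, r in ratings:
            e = r - (mu + bu[u] + bi[i] + P[u] @ Q[i])    # prediction error
            bu[u] += lr * (e - reg * bu[u])
            bi[i] += lr * (e - reg * bi[i])
            P[u], Q[i] = (P[u] + lr * (e * Q[i] - reg * P[u]),
                          Q[i] + lr * (e * P[u] - reg * Q[i]))
    return mu, bu, bi, P, Q

data = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0), (2, 1, 2.0)]
mu, bu, bi, P, Q = factorize(data)
print(round(mu + bu[0] + bi[2] + P[0] @ Q[2], 2))   # predicted rating for (user 0, item 2)
```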
12bbec48c8fde83ea276402ffedd2e241e978a12
VirtualTable: a projection augmented reality game
VirtualTable is a projection augmented reality installation where users are engaged in an interactive tower defense game. The installation runs continuously and is designed to attract people to a table, onto which the game is projected. Any number of players can join the game for an optional period of time. The goal is to prevent the virtual stylized soot balls, spawning on one side of the table, from reaching the cheese. To stop them, the players can place any kind of object on the table, which then becomes part of the game. Depending on the object, it will become either a wall, an obstacle for the soot balls, or a tower that eliminates them within a physical range. The number of enemies depends on the number of objects in the field, forcing the players to use strategy and collaboration, and not the sheer number of objects, to win the game.
ffd76d49439c078a6afc246e6d0638a01ad563f8
A Context-Aware Usability Model for Mobile Health Applications
Mobile healthcare is a fast growing area of research that capitalizes on mobile technologies and wearables to provide real-time and continuous monitoring and analysis of vital signs of users. Yet, most of the current applications are developed for the general population without taking into consideration the context and needs of different user groups. Designing and developing mobile health applications and diaries according to the user context can significantly improve the quality of user interaction and encourage the application use. In this paper, we propose a user context model and a set of usability attributes for developing mobile applications in healthcare. The proposed model and the selected attributes are integrated into a mobile application development framework to provide user-centered and context-aware guidelines. To validate our framework, a mobile diary was implemented for patients undergoing Peritoneal Dialysis (PD) and tested with real users.
8deafc34941a79b9cfc348ab63ec51752c7b1cde
New approach for clustering of big data: DisK-means
The exponential growth in the amount of data gathered from various sources has resulted in the need for more efficient algorithms to quickly analyze large datasets. Clustering techniques, like K-Means, are useful in analyzing data in a parallel fashion. K-Means largely depends upon a proper initialization to produce optimal results. The K-Means++ initialization algorithm provides a solution by supplying an initial set of centres to the K-Means algorithm. However, its inherent sequential nature makes it suffer from various limitations when applied to large datasets. For instance, it requires k iterations to find k centres. In this paper, we present an algorithm that attempts to overcome the drawbacks of previous algorithms. Our work provides a method to select a good initial seeding in less time, facilitating fast and accurate cluster analysis over large datasets.
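For context, the sequential K-Means++ seeding that the paper aims to accelerate looks roughly like the NumPy sketch below (the paper's own faster seeding method is not reproduced here; the toy data are arbitrary).

```python
import numpy as np

def kmeans_pp_init(X, k, rng=None):
    """Standard K-Means++ seeding: each new centre is sampled with
    probability proportional to its squared distance from the nearest
    centre chosen so far -- the k sequential steps mentioned above."""
    rng = np.random.default_rng() if rng is None else rng
    centres = [X[rng.integers(len(X))]]                  # first centre: uniform
    for _ in range(k - 1):
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centres], axis=0)
        probs = d2 / d2.sum()                            # D^2 weighting
        centres.append(X[rng.choice(len(X), p=probs)])
    return np.array(centres)

# three well-separated synthetic blobs
X = np.vstack([np.random.default_rng(0).normal(m, 0.3, size=(50, 2))
               for m in (0.0, 3.0, 6.0)])
print(kmeans_pp_init(X, k=3, rng=np.random.default_rng(1)))
```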
455d562bf02dcb5161c98668a5f5e470d02b70b8
A probabilistic constrained clustering for transfer learning and image category discovery
Neural network-based clustering has recently gained popularity, and in particular a constrained clustering formulation has been proposed to perform transfer learning and image category discovery using deep learning. The core idea is to formulate a clustering objective with pairwise constraints that can be used to train a deep clustering network; therefore the cluster assignments and their underlying feature representations are jointly optimized end-to-end. In this work, we provide a novel clustering formulation to address scalability issues of previous work in terms of optimizing deeper networks and larger amounts of categories. The proposed objective directly minimizes the negative log-likelihood of cluster assignment with respect to the pairwise constraints, has no hyper-parameters, and demonstrates improved scalability and performance on both supervised learning and unsupervised transfer learning.
e6bef595cb78bcad4880aea6a3a73ecd32fbfe06
Domain Adaptation for Large-Scale Sentiment Classification: A Deep Learning Approach
The exponential increase in the availability of online reviews and recommendations makes sentiment classification an interesting topic in academic and industrial research. Reviews can span so many different domains that it is difficult to gather annotated training data for all of them. Hence, this paper studies the problem of domain adaptation for sentiment classifiers, whereby a system is trained on labeled reviews from one source domain but is meant to be deployed on another. We propose a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion. Sentiment classifiers trained with this high-level feature representation clearly outperform state-of-the-art methods on a benchmark composed of reviews of 4 types of Amazon products. Furthermore, this method scales well and allowed us to successfully perform domain adaptation on a larger industrial-strength dataset of 22 domains.
d77d2ab03f891d8f0822083020486a6de1f2900f
EEG Classification of Different Imaginary Movements within the Same Limb
The task of discriminating the motor imagery of different movements within the same limb using electroencephalography (EEG) signals is challenging because these imaginary movements have close spatial representations on the motor cortex area. There is, however, a pressing need to succeed in this task. The reason is that the ability to classify different same-limb imaginary movements could increase the number of control dimensions of a brain-computer interface (BCI). In this paper, we propose a 3-class BCI system that discriminates EEG signals corresponding to rest, imaginary grasp movements, and imaginary elbow movements. Besides, the differences between simple motor imagery and goal-oriented motor imagery in terms of their topographical distributions and classification accuracies are also being investigated. To the best of our knowledge, both problems have not been explored in the literature. Based on the EEG data recorded from 12 able-bodied individuals, we have demonstrated that same-limb motor imagery classification is possible. For the binary classification of imaginary grasp and elbow (goal-oriented) movements, the average accuracy achieved is 66.9%. For the 3-class problem of discriminating rest against imaginary grasp and elbow movements, the average classification accuracy achieved is 60.7%, which is greater than the random classification accuracy of 33.3%. Our results also show that goal-oriented imaginary elbow movements lead to a better classification performance compared to simple imaginary elbow movements. This proposed BCI system could potentially be used in controlling a robotic rehabilitation system, which can assist stroke patients in performing task-specific exercises.
de0f84359078ec9ba79f4d0061fe73f6cac6591c
Single-Stage Single-Switch Four-Output Resonant LED Driver With High Power Factor and Passive Current Balancing
A resonant single-stage single-switch four-output LED driver with high power factor and passive current balancing is proposed. By controlling one output current, the other output currents of four-output LED driver can be controlled via passive current balancing, which makes its control simple. When magnetizing inductor current operates in critical conduction mode, unity power factor is achieved. The proposed LED driver uses only one active switch and one magnetic component, thus it benefits from low cost, small volume, and light weight. Moreover, high-efficiency performance is achieved due to single-stage power conversion and soft-switching characteristics. The characteristics of the proposed LED driver are studied in this paper and experimental results of two 110-W four-output isolated LED drivers are provided to verify the studied results.
1924ae6773f09efcfc791454d42a3ec53207a815
Flexible Ambiguity Resolution and Incompleteness Detection in Requirements Descriptions via an Indicator-Based Configuration of Text Analysis Pipelines
Natural language software requirements descriptions enable end users to formulate their wishes and expectations for a future software product without much prior knowledge in requirements engineering. However, these descriptions are susceptible to linguistic inaccuracies such as ambiguities and incompleteness that can harm the development process. There are a number of software solutions that can detect deficits in requirements descriptions and partially solve them, but they are often hard to use and not suitable for end users. For this reason, we develop a software system that helps end users to create unambiguous and complete requirements descriptions by combining existing expert tools and controlling them using automatic compensation strategies. In order to recognize the necessity of individual compensation methods in the descriptions, we have developed linguistic indicators, which we present in this paper. Based on these indicators, the whole text analysis pipeline is ad-hoc configured and thus adapted to the individual circumstances of a requirements description.
727774c3a911d45ea6fe2d4ad66fd3b453a18c99
Correlating low-level image statistics with users' rapid aesthetic and affective judgments of web pages
In this paper, we report a study that examines the relationship between image-based computational analyses of web pages and users' aesthetic judgments about the same image material. Web pages were iteratively decomposed into quadrants of minimum entropy (quadtree decomposition) based on low-level image statistics, to permit a characterization of these pages in terms of their respective organizational symmetry, balance and equilibrium. These attributes were then evaluated for their correlation with human participants' subjective ratings of the same web pages on four aesthetic and affective dimensions. Several of these correlations were quite large and revealed interesting patterns in the relationship between low-level (i.e., pixel-level) image statistics and design-relevant dimensions.
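Below is a simplified Python sketch of an entropy-driven quadtree decomposition of a page screenshot, in the spirit of the analysis described above; the entropy threshold, minimum block size, and random test image are assumptions, and the symmetry, balance, and equilibrium measures are not computed here.

```python
import numpy as np

def entropy(img):
    """Shannon entropy (bits) of an 8-bit grayscale patch."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def quadtree(img, min_size=16, threshold=1.0):
    """Recursively split a patch into four quadrants until its entropy
    drops below a threshold or the patch gets too small; returns leaf
    boxes as (row, col, height, width)."""
    def rec(r, c, h, w):
        patch = img[r:r + h, c:c + w]
        if h <= min_size or w <= min_size or entropy(patch) < threshold:
            return [(r, c, h, w)]
        h2, w2 = h // 2, w // 2
        return (rec(r, c, h2, w2) + rec(r, c + w2, h2, w - w2) +
                rec(r + h2, c, h - h2, w2) + rec(r + h2, c + w2, h - h2, w - w2))
    return rec(0, 0, *img.shape)

page = np.random.default_rng(0).integers(0, 256, (128, 128)).astype(np.uint8)
leaves = quadtree(page)
print(len(leaves), "leaf regions")
```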
21c76cc8ebfb9c112c2594ce490b47e458b50e31
American Sign Language Recognition Using Leap Motion Sensor
In this paper, we present an American Sign Language recognition system using a compact and affordable 3D motion sensor. The palm-sized Leap Motion sensor provides a much more portable and economical solution than the CyberGlove or Microsoft Kinect used in existing studies. We apply k-nearest neighbor and support vector machine to classify the 26 letters of the English alphabet in American Sign Language using the derived features from the sensory data. The experiment results show that the highest average classification rates of 72.78% and 79.83% were achieved by k-nearest neighbor and support vector machine, respectively. We also provide detailed discussions on the parameter setting in machine learning methods and accuracy of specific alphabet letters in this paper.
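A minimal scikit-learn sketch of the classification setup (k-nearest neighbor and an SVM over per-sample feature vectors); the synthetic features below merely stand in for Leap Motion-derived hand features and do not reflect the paper's data or accuracy figures.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for Leap Motion hand features (e.g. fingertip
# positions/angles); real features would come from the sensor SDK.
rng = np.random.default_rng(0)
n_classes, n_per_class, n_features = 26, 40, 15
X = np.vstack([rng.normal(loc=c, scale=2.0, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", round(clf.score(X_te, y_te), 3))
```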
519f5892938d4423cecc999b6e489b72fc0d0ca7
Cognitive, emotional, and behavioral considerations for chronic pain management in the Ehlers-Danlos syndrome hypermobility-type: a narrative review.
BACKGROUND Ehlers-Danlos syndrome (EDS) hypermobility-type is the most common hereditary disorder of the connective tissue. The tissue fragility characteristic of this condition leads to multi-systemic symptoms in which pain, often severe, chronic, and disabling, is the most experienced. Clinical observations suggest that the complex patient with EDS hypermobility-type is refractory toward several biomedical and physical approaches. In this context and in accordance with the contemporary conceptualization of pain (biopsychosocial perspective), the identification of psychological aspects involved in the pain experience can be useful to improve interventions for this under-recognized pathology. PURPOSE Review of the literature on joint hypermobility and EDS hypermobility-type concerning psychological factors linked to pain chronicity and disability. METHODS A comprehensive search was performed using scientific online databases and references lists, encompassing publications reporting quantitative and qualitative research as well as unpublished literature. RESULTS Despite scarce research, psychological factors associated with EDS hypermobility-type that potentially affect pain chronicity and disability were identified. These are cognitive problems and attention to body sensations, negative emotions, and unhealthy patterns of activity (hypo/hyperactivity). CONCLUSIONS As in other chronic pain conditions, these aspects should be more explored in EDS hypermobility-type, and integrated into chronic pain prevention and management programs. Implications for Rehabilitation Clinicians should be aware that joint hypermobility may be associated with other health problems, and in its presence suspect a heritable disorder of connective tissue such as the Ehlers-Danlos syndrome (EDS) hypermobility-type, in which chronic pain is one of the most frequent and invalidating symptoms. It is necessary to explore the psychosocial functioning of patients as part of the overall chronic pain management in the EDS hypermobility-type, especially when they do not respond to biomedical approaches as psychological factors may be operating against rehabilitation. Further research on the psychological factors linked to pain chronicity and disability in the EDS hypermobility-type is needed.
7d2fda30e52c39431dbb90ae065da036a55acdc7
A brief review: factors affecting the length of the rest interval between resistance exercise sets.
Research has indicated that multiple sets are superior to single sets for maximal strength development. However, whether maximal strength gains are achieved may depend on the ability to sustain a consistent number of repetitions over consecutive sets. A key factor that determines the ability to sustain repetitions is the length of rest interval between sets. The length of the rest interval is commonly prescribed based on the training goal, but may vary based on several other factors. The purpose of this review was to discuss these factors in the context of different training goals. When training for muscular strength, the magnitude of the load lifted is a key determinant of the rest interval prescribed between sets. For loads less than 90% of 1 repetition maximum, 3-5 minutes rest between sets allows for greater strength increases through the maintenance of training intensity. However, when testing for maximal strength, 1-2 minutes rest between sets might be sufficient between repeated attempts. When training for muscular power, a minimum of 3 minutes rest should be prescribed between sets of repeated maximal effort movements (e.g., plyometric jumps). When training for muscular hypertrophy, consecutive sets should be performed prior to when full recovery has taken place. Shorter rest intervals of 30-60 seconds between sets have been associated with higher acute increases in growth hormone, which may contribute to the hypertrophic effect. When training for muscular endurance, an ideal strategy might be to perform resistance exercises in a circuit, with shorter rest intervals (e.g., 30 seconds) between exercises that involve dissimilar muscle groups, and longer rest intervals (e.g., 3 minutes) between exercises that involve similar muscle groups. In summary, the length of the rest interval between sets is only 1 component of a resistance exercise program directed toward different training goals. Prescribing the appropriate rest interval does not ensure a desired outcome if other components such as intensity and volume are not prescribed appropriately.
fe0643f3405c22fe7ca0b7d1274a812d6e3e5a11
Silicon carbide power MOSFETs: Breakthrough performance from 900 V up to 15 kV
Since Cree, Inc.'s 2nd generation 4H-SiC MOSFETs were commercially released with a specific on-resistance (R_ON,SP) of 5 mΩ·cm² for a 1200 V-rating in early 2013, we have further optimized the device design and fabrication processes as well as greatly expanded the voltage ratings from 900 V up to 15 kV for a much wider range of high-power, high-frequency, and high-voltage energy-conversion and transmission applications. Using these next-generation SiC MOSFETs, we have now achieved new breakthrough performance for voltage ratings from 900 V up to 15 kV with an R_ON,SP as low as 2.3 mΩ·cm² for a breakdown voltage (BV) of 1230 V and 900 V-rating, 2.7 mΩ·cm² for a BV of 1620 V and 1200 V-rating, 3.38 mΩ·cm² for a BV of 1830 V and 1700 V-rating, 10.6 mΩ·cm² for a BV of 4160 V and 3300 V-rating, 123 mΩ·cm² for a BV of 12 kV and 10 kV-rating, and 208 mΩ·cm² for a BV of 15.5 kV and 15 kV-rating. In addition, due to the lack of current tailing during bipolar device switching turn-off, the SiC MOSFETs reported in this work exhibit incredibly high frequency switching performance over their silicon counterparts.
011d4ccb74f32f597df54ac8037a7903bd95038b
The evolution of human skin coloration.
Skin color is one of the most conspicuous ways in which humans vary and has been widely used to define human races. Here we present new evidence indicating that variations in skin color are adaptive, and are related to the regulation of ultraviolet (UV) radiation penetration in the integument and its direct and indirect effects on fitness. Using remotely sensed data on UV radiation levels, hypotheses concerning the distribution of the skin colors of indigenous peoples relative to UV levels were tested quantitatively in this study for the first time. The major results of this study are: (1) skin reflectance is strongly correlated with absolute latitude and UV radiation levels. The highest correlation between skin reflectance and UV levels was observed at 545 nm, near the absorption maximum for oxyhemoglobin, suggesting that the main role of melanin pigmentation in humans is regulation of the effects of UV radiation on the contents of cutaneous blood vessels located in the dermis. (2) Predicted skin reflectances deviated little from observed values. (3) In all populations for which skin reflectance data were available for males and females, females were found to be lighter skinned than males. (4) The clinal gradation of skin coloration observed among indigenous peoples is correlated with UV radiation levels and represents a compromise solution to the conflicting physiological requirements of photoprotection and vitamin D synthesis. The earliest members of the hominid lineage probably had a mostly unpigmented or lightly pigmented integument covered with dark black hair, similar to that of the modern chimpanzee. The evolution of a naked, darkly pigmented integument occurred early in the evolution of the genus Homo. A dark epidermis protected sweat glands from UV-induced injury, thus insuring the integrity of somatic thermoregulation. Of greater significance to individual reproductive success was that highly melanized skin protected against UV-induced photolysis of folate (Branda & Eaton, 1978, Science 201, 625-626; Jablonski, 1992, Proc. Australas. Soc. Hum. Biol. 5, 455-462; 1999, Med. Hypotheses 52, 581-582), a metabolite essential for normal development of the embryonic neural tube (Bower & Stanley, 1989, The Medical Journal of Australia 150, 613-619; Medical Research Council Vitamin Research Group, 1991, The Lancet 338, 31-37) and spermatogenesis (Cosentino et al., 1990, Proc. Natn. Acad. Sci. U.S.A. 87, 1431-1435; Mathur et al., 1977, Fertility Sterility 28, 1356-1360). As hominids migrated outside of the tropics, varying degrees of depigmentation evolved in order to permit UVB-induced synthesis of previtamin D(3). The lighter color of female skin may be required to permit synthesis of the relatively higher amounts of vitamin D(3) necessary during pregnancy and lactation. Skin coloration in humans is adaptive and labile. Skin pigmentation levels have changed more than once in human evolution. Because of this, skin coloration is of no value in determining phylogenetic relationships among modern human groups.
d87d70ecd0fdf0976cebbeaeacf25ad9872ffde1
Robust and false positive free watermarking in IWT domain using SVD and ABC
Watermarking is used to protect copyrighted materials from being misused and helps us to establish lawful ownership. The security of any watermarking scheme is always a prime concern for the developer. In this work, the robustness and security issues of IWT (integer wavelet transform) and SVD (singular value decomposition) based watermarking are explored. Generally, SVD-based watermarking techniques suffer from the false positive problem, which can even lead to authenticating the wrong owner. We propose a novel solution to the false positive problem that arises in the SVD-based approach. Firstly, IWT is employed on the host image and then SVD is performed on this transformed host. The properties of IWT and SVD help in achieving a high degree of robustness. Singular values are used for the watermark embedding. In order to further improve the quality of watermarking, the optimization of the scaling factor (mixing ratio) is performed with the help of the artificial bee colony (ABC) algorithm. A comparison with other schemes is performed to show the superiority of the proposed scheme.
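A bare-bones NumPy sketch of singular-value embedding and extraction with a scaling factor; the IWT step, the ABC optimization of the scaling factor, and the paper's specific false-positive fix are not reproduced, and the key handling shown here (storing U and V) is precisely the naive practice that makes such schemes false-positive prone.

```python
import numpy as np

def embed_svd(host_band, watermark, alpha=0.05):
    """Embed a watermark sequence into the singular values of a
    (transform-domain) host band: S' = S + alpha * W, then rebuild."""
    U, S, Vt = np.linalg.svd(host_band, full_matrices=False)
    S_marked = S + alpha * watermark
    return U @ np.diag(S_marked) @ Vt, (U, S, Vt)

def extract_svd(marked_band, keys, alpha=0.05):
    """Recover the embedded sequence using the stored U, S, V "keys"."""
    U, S, Vt = keys
    S_marked = np.diag(U.T @ marked_band @ Vt.T)   # modified singular values
    return (S_marked - S) / alpha

rng = np.random.default_rng(0)
band = rng.random((64, 64))        # stand-in for a wavelet sub-band
wm = rng.random(64)                # watermark sequence
marked, keys = embed_svd(band, wm)
err = np.max(np.abs(extract_svd(marked, keys) - wm))
print("max extraction error:", float(err))
```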
ae3ebe6c69fdb19e12d3218a5127788fae269c10
A Literature Survey of Benchmark Functions For Global Optimization Problems
Test functions are important to validate and compare the performance of optimization algorithms. There have been many test or benchmark functions reported in the literature; however, there is no standard list or set of benchmark functions. Ideally, test functions should have diverse properties so that they can be truly useful to test new algorithms in an unbiased way. For this purpose, we have reviewed and compiled a rich set of 175 benchmark functions for unconstrained optimization problems with diverse properties in terms of modality, separability, and valley landscape. This is by far the most complete set of functions so far in the literature, and it can be expected that this complete set of functions can be used for validation of new optimization algorithms in the future.
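Three of the classic benchmark functions such a collection typically includes, written as short NumPy functions to show the kinds of properties (modality, separability, valley landscape) being catalogued; these particular three are standard examples, not necessarily the survey's selection.

```python
import numpy as np

def sphere(x):
    """Unimodal, separable baseline: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Highly multimodal, separable: many regularly spaced local minima."""
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def rosenbrock(x):
    """Unimodal but with a narrow curved valley; non-separable."""
    return float(np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2))

print(sphere(np.zeros(5)), rastrigin(np.zeros(5)), rosenbrock(np.ones(5)))  # all 0.0 at their minima
```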
d28235adc2c8c6fdfaa474bc2bab931129149fd6
Approaches to Measuring the Difficulty of Games in Dynamic Difficulty Adjustment Systems
In this article, three approaches are proposed for measuring difficulty that can be useful in developing Dynamic Difficulty Adjustment (DDA) systems in different game genres. Our analysis of the existing DDA systems shows that there are three ways to measure the difficulty of the game: using a formal model of gameplay, using the features of the game, and directly examining the player. These approaches are described in this article and supplemented by appropriate examples of DDA implementations. In addition, the article describes the distinction between task complexity and task difficulty in DDA systems. It is suggested to separate task complexity (especially structural complexity), which is an objective characteristic of the task, from task difficulty, which is related to the interaction between the task and the task performer.
5c881260bcc64070b2b33c10d28f23f793b8344f
A low-voltage, low quiescent current, low drop-out regulator
The demand for low voltage, low drop-out (LDO) regulators is increasing because of the growing demand for portable electronics, e.g., cellular phones, pagers, laptops, etc. LDOs are used coherently with dc-dc converters as well as standalone parts. In power supply systems, they are typically cascaded onto switching regulators to suppress noise and provide a low noise output. The need for low voltage is innate to portable low power devices and corroborated by lower breakdown voltages resulting from reductions in feature size. Low quiescent current in a battery operated system is an intrinsic performance parameter because it partially determines battery life. This paper discusses some techniques that enable the practical realizations of low quiescent current LDOs at low voltages and in existing technologies. The proposed circuit exploits the frequency response dependence on load-current to minimize quiescent current flow. Moreover, the output current capabilities of MOS power transistors are enhanced and drop-out voltages are decreased for a given device size. Other applications, like dc-dc converters, can also reap the benefits of these enhanced MOS devices. An LDO prototype incorporating the aforementioned techniques was fabricated. The circuit was operable down to input voltages of 1 V with a zero-load quiescent current flow of 23 μA. Moreover, the regulator provided 18 and 50 mA of output current at input voltages of 1 and 1.2 V respectively.
950ff860dbc8a24fc638ac942ce9c1f51fb24899
Where to Go Next: A Spatio-temporal LSTM model for Next POI Recommendation
Next Point-of-Interest (POI) recommendation is of great value for both location-based service providers and users. Recently, Recurrent Neural Networks (RNNs) have been proven to be effective on sequential recommendation tasks. However, existing RNN solutions rarely consider the spatio-temporal intervals between neighboring check-ins, which are essential for modeling user check-in behaviors in next POI recommendation. In this paper, we propose a new variant of LSTM, named ST-LSTM, which introduces time gates and distance gates into the LSTM cell to capture the spatio-temporal relation between successive check-ins. Specifically, one time gate and one distance gate are designed to control the short-term interest update, and another time gate and distance gate are designed to control the long-term interest update. Furthermore, to reduce the number of parameters and improve efficiency, we further integrate coupled input and forget gates with our proposed model. Finally, we evaluate the proposed model using four real-world datasets from various location-based social networks. Our experimental results show that our model significantly outperforms the state-of-the-art approaches for next POI recommendation.
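A schematic NumPy step of an LSTM cell augmented with scalar time and distance gates, to make the gating idea concrete; this is an illustrative parameterization with made-up shapes and initializations, not the exact ST-LSTM equations or the coupled-gate variant described above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def st_lstm_step(x, h, c, dt, dd, P):
    """One schematic LSTM step where the time interval dt and distance dd
    between successive check-ins modulate how much new input enters the cell."""
    z = np.concatenate([x, h])
    i = sigmoid(P["Wi"] @ z + P["bi"])                  # input gate
    f = sigmoid(P["Wf"] @ z + P["bf"])                  # forget gate
    o = sigmoid(P["Wo"] @ z + P["bo"])                  # output gate
    g = np.tanh(P["Wg"] @ z + P["bg"])                  # candidate state
    t_gate = sigmoid(P["wt"] * dt + P["bt"])            # time gate (scalar)
    d_gate = sigmoid(P["wd"] * dd + P["bd"])            # distance gate (scalar)
    c_new = f * c + i * t_gate * d_gate * g             # spatio-temporally gated update
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
P = {k: 0.1 * rng.standard_normal((n_hid, n_in + n_hid)) for k in ("Wi", "Wf", "Wo", "Wg")}
P.update({k: np.zeros(n_hid) for k in ("bi", "bf", "bo", "bg")})
P.update(wt=-0.1, bt=0.0, wd=-0.1, bd=0.0)
h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = st_lstm_step(rng.standard_normal(n_in), h, c, dt=3.5, dd=1.2, P=P)
print(h.shape, float(h.mean()))
```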
f99a50ce62845c62d9fcdec277e0857350534cc9
Absorptive Frequency-Selective Transmission Structure With Square-Loop Hybrid Resonator
A novel design of an absorptive frequency-selective transmission structure (AFST) is proposed. This structure is based on the design of a frequency-dependent lossy layer with square-loop hybrid resonator (SLHR). The parallel resonance provided by the hybrid resonator is utilized to bypass the lossy path and improve the insertion loss. Meanwhile, the series resonance of the hybrid resonator is used for expanding the upper absorption bandwidth. Furthermore, the absorption for out-of-band frequencies is achieved by using four metallic strips with lumped resistors, which are connected with the SLHR. The quantity of lumped elements required in a unit cell can be reduced by at least 50% compared to previous structures. The design guidelines are explained with the aid of an equivalent circuit model. Both simulation and experiment results are presented to demonstrate the performance of our AFST. It is shown that an insertion loss of 0.29 dB at 6.1 GHz and a 112.4% 10 dB reflection reduction bandwidth are obtained under the normal incidence.
26f70336acf7247a35d3c0be6308fe29f25d2872
Implementation of AES-GCM encryption algorithm for high performance and low power architecture Using FPGA
Evaluation of the Advanced Encryption Standard (AES) algorithm in FPGA is proposed here. This evaluation is compared with other works to show its efficiency. Here we are concerned with two major purposes. The first is to define some of the terms and concepts behind basic cryptographic methods, and to offer a way to compare the myriad cryptographic schemes in use today. The second is to provide some real examples of cryptography in use today. The design uses an iterative looping approach with a block and key size of 128 bits and a lookup table implementation of the S-box. This gives a low-complexity architecture that easily achieves low latency as well as high throughput. Simulation and performance results are presented and compared with previously reported designs. Since its acceptance as the adopted symmetric-key algorithm, the Advanced Encryption Standard (AES) and its recently standardized authentication Galois/Counter Mode (GCM) have been utilized in various security-constrained applications. Many of the AES-GCM applications are power and resource constrained and require efficient hardware implementations. In this project, AES-GCM algorithms are evaluated and optimized to identify the high-performance and low-power architectures. The Advanced Encryption Standard (AES) is a specification for the encryption of electronic data. The Cipher Block Chaining (CBC) mode is a confidentiality mode whose encryption process features the combining (“chaining”) of the plaintext blocks with the previous ciphertext blocks. The CBC mode requires an IV to combine with the first plaintext block. The IV need not be secret, but it must be unpredictable. Also, the integrity of the IV should be protected. Galois/Counter Mode (GCM) is a block cipher mode of operation that uses universal hashing over a binary Galois field to provide authenticated encryption. Galois Hash is used for authentication, and the Advanced Encryption Standard (AES) block cipher is used for encryption in counter mode of operation. To obtain the least-complexity S-box, the formulations for the Galois Field (GF) sub-field inversions in GF(2^4) are optimized. By conducting exhaustive simulations of the input transitions, we analyze the synthesis of the AES S-boxes considering the switching activities, gate-level netlists, and parasitic information. Finally, the implementation of the high-performance GF(2^128) multiplier architectures for AES-GCM gives detailed information on its performance. An optimized coding for the implementation of the Advanced Encryption Standard-Galois Counter Mode has been developed. The speed of the algorithm implementation has been targeted and a code in Verilog HDL has been developed. This implementation is useful in wireless security applications such as military communication and mobile telephony, where there is a greater emphasis on the speed of communication.
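For reference, the AES-GCM mode discussed above can be exercised in software with the third-party Python `cryptography` package; this is only a functional reference point for the mode of operation, not the FPGA design evaluated in the paper.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

key = AESGCM.generate_key(bit_length=128)   # AES-128 key
nonce = os.urandom(12)                      # 96-bit IV: must be unique per message
aad = b"header-v1"                          # authenticated but not encrypted

aesgcm = AESGCM(key)
ciphertext = aesgcm.encrypt(nonce, b"secret payload", aad)   # ciphertext || 16-byte tag
plaintext = aesgcm.decrypt(nonce, ciphertext, aad)           # raises if tampered
print(plaintext)
```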
03f64a5989e4d2ecab989d9724ad4cc58f976daf
Multi-Document Summarization using Sentence-based Topic Models
Most of the existing multi-document summarization methods decompose the documents into sentences and work directly in the sentence space using a term-sentence matrix. However, the knowledge on the document side, i.e. the topics embedded in the documents, can help the context understanding and guide the sentence selection in the summarization procedure. In this paper, we propose a new Bayesian sentence-based topic model for summarization by making use of both the term-document and term-sentence associations. An efficient variational Bayesian algorithm is derived for model parameter estimation. Experimental results on benchmark data sets show the effectiveness of the proposed model for the multi-document summarization task.
9a1b3247fc7f0abf892a40884169e0ed10d3b684
Intrusion detection by machine learning: A review
The popularity of using the Internet brings with it the risk of network attacks. Intrusion detection is one major research problem in network security, whose aim is to identify unusual access or attacks to secure internal networks. In the literature, intrusion detection systems have been approached by various machine learning techniques. However, there is no review paper examining and understanding the current status of using machine learning techniques to solve the intrusion detection problems. This chapter reviews 55 related studies in the period between 2000 and 2007 focusing on developing single, hybrid, and ensemble classifiers. Related studies are compared by their classifier design, datasets used, and other experimental setups. Current achievements and limitations in developing intrusion detection systems by machine learning are presented and discussed. A number of future research directions are also provided.
a10d128fd95710308dfee83953c5b26293b9ede7
Combining OpenFlow and sFlow for an effective and scalable anomaly detection and mitigation mechanism on SDN environments
Software Defined Networks (SDNs) based on the OpenFlow (OF) protocol export control-plane programmability of switched substrates. As a result, rich functionality in traffic management, load balancing, routing, firewall configuration, etc. that may pertain to specific flows they control, may be easily developed. In this paper we extend these functionalities with an efficient and scalable mechanism for performing anomaly detection and mitigation in SDN architectures. Flow statistics may reveal anomalies triggered by large-scale malicious events (typically massive Distributed Denial of Service attacks) and subsequently assist networked resource owners/operators to raise mitigation policies against these threats. First, we demonstrate that OF statistics collection and processing overloads the centralized control plane, introducing scalability issues. Second, we propose a modular architecture for the separation of the data collection process from the SDN control plane with the employment of sFlow monitoring data. We then report experimental results that compare its performance against native OF approaches that use standard flow table statistics. Both alternatives are evaluated using an entropy-based method on high volume real network traffic data collected from a university campus network. The packet traces were fed to hardware and software OF devices in order to assess flow-based data gathering and related anomaly detection options. We subsequently present experimental results that demonstrate the effectiveness of the proposed sFlow-based mechanism compared to the native OF approach, in terms of overhead imposed on usage of system resources. Finally, we conclude by demonstrating that once a network anomaly is detected and identified, the OF protocol can effectively mitigate it via flow table modifications.
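The entropy-based detection idea can be sketched independently of the SDN plumbing: compute the entropy of a flow feature (here, destination addresses) per collection window and flag windows whose entropy collapses. The threshold and the traffic below are invented for illustration and do not come from the paper's campus traces.

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of a list of flow-feature values, e.g. the
    destination IPs seen in one sFlow/OpenFlow collection window."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Normal window: destinations spread over many hosts -> high entropy.
normal = [f"10.0.0.{i % 50}" for i in range(1000)]
# Attack-like window: traffic concentrated on one victim -> entropy collapses.
attack = ["10.0.0.7"] * 950 + [f"10.0.0.{i % 50}" for i in range(50)]

baseline = entropy(normal)
for label, window in [("normal", normal), ("suspect", attack)]:
    h = entropy(window)
    flagged = h < 0.5 * baseline          # hypothetical detection threshold
    print(f"{label}: H={h:.2f} bits, anomaly={flagged}")
```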
c84b10c01a84f26fe8a1c978c919fbe5a9f9a661
Software-Defined Networking: A Comprehensive Survey
The Internet has led to the creation of a digital society, where (almost) everything is connected and is accessible from anywhere. However, despite their widespread adoption, traditional IP networks are complex and very hard to manage. It is both difficult to configure the network according to predefined policies, and to reconfigure it to respond to faults, load, and changes. To make matters even more difficult, current networks are also vertically integrated: the control and data planes are bundled together. Software-defined networking (SDN) is an emerging paradigm that promises to change this state of affairs, by breaking vertical integration, separating the network’s control logic from the underlying routers and switches, promoting (logical) centralization of network control, and introducing the ability to program the network. The separation of concerns, introduced between the definition of network policies, their implementation in switching hardware, and the forwarding of traffic, is key to the desired flexibility: by breaking the network control problem into tractable pieces, SDN makes it easier to create and introduce new abstractions in networking, simplifying network management and facilitating network evolution. In this paper, we present a comprehensive survey on SDN. We start by introducing the motivation for SDN, explain its main concepts and how it differs from traditional networking, its roots, and the standardization activities regarding this novel paradigm. Next, we present the key building blocks of an SDN infrastructure using a bottom-up, layered approach. We provide an in-depth analysis of the hardware infrastructure, southbound and northbound application programming interfaces (APIs), network virtualization layers, network operating systems (SDN controllers), network programming languages, and network applications. We also look at cross-layer problems such as debugging and troubleshooting. In an effort to anticipate the future evolution of this new paradigm, we discuss the main ongoing research efforts and challenges of SDN. In particular, we address the design of switches and control platforms with a focus on aspects such as resiliency, scalability, performance, security, and dependability, as well as new opportunities for carrier transport networks and cloud providers. Last but not least, we analyze the position of SDN as a key enabler of a software-defined environment.
1821fbfc03a45af816a8d7aef50321654b0aeec0
Revisiting Traffic Anomaly Detection Using Software Defined Networking
Despite their exponential growth, home and small office/home office networks continue to be poorly managed. Consequently, security of hosts in most home networks is easily compromised and these hosts are in turn used for largescale malicious activities without the home users’ knowledge. We argue that the advent of Software Defined Networking (SDN) provides a unique opportunity to effectively detect and contain network security problems in home and home office networks. We show how four prominent traffic anomaly detection algorithms can be implemented in an SDN context using Openflow compliant switches and NOX as a controller. Our experiments indicate that these algorithms are significantly more accurate in identifying malicious activities in the home networks as compared to the ISP. Furthermore, the efficiency analysis of our SDN implementations on a programmable home network router indicates that the anomaly detectors can operate at line rates without introducing any performance penalties for the home network traffic.
3192a953370bc8bf4b906261e8e2596355d2b610
A clean slate 4D approach to network control and management
Today's data networks are surprisingly fragile and difficult to manage. We argue that the root of these problems lies in the complexity of the control and management planes--the software and protocols coordinating network elements--and particularly the way the decision logic and the distributed-systems issues are inexorably intertwined. We advocate a complete refactoring of the functionality and propose three key principles--network-level objectives, network-wide views, and direct control--that we believe should underlie a new architecture. Following these principles, we identify an extreme design point that we call "4D," after the architecture's four planes: decision, dissemination, discovery, and data. The 4D architecture completely separates an AS's decision logic from protocols that govern the interaction among network elements. The AS-level objectives are specified in the decision plane, and enforced through direct configuration of the state that drives how the data plane forwards packets. In the 4D architecture, the routers and switches simply forward packets at the behest of the decision plane, and collect measurement data to aid the decision plane in controlling the network. Although 4D would involve substantial changes to today's control and management planes, the format of data packets does not need to change; this eases the deployment path for the 4D architecture, while still enabling substantial innovation in network control and management. We hope that exploring an extreme design point will help focus the attention of the research and industrial communities on this crucially important and intellectually challenging area.
883e3a3950968ebf8d03d3281076671538660c7c
Sensing spatial distribution of urban land use by integrating points-of-interest and Google Word2Vec model
Urban land use information plays an essential role in a wide variety of urban planning and environmental monitoring processes. During the past few decades, with the rapid technological development of remote sensing (RS), geographic information systems (GIS) and geospatial big data, numerous methods have been developed to identify urban land use at a fine scale. Points-of-interest (POIs) have been widely used to extract information pertaining to urban land use types and functional zones. However, it is difficult to quantify the relationship between spatial distributions of POIs and regional land use types due to a lack of reliable models. Previous methods may ignore abundant spatial features that can be extracted from POIs. In this study, we establish an innovative framework that detects urban land use distributions at the scale of traffic analysis zones (TAZs) by integrating Baidu POIs and a Word2Vec model, the open-source deep-learning language model released by Google in 2013. First, data for the Pearl River Delta (PRD) are transformed into a TAZ-POI corpus using a greedy algorithm by considering the spatial distributions of TAZs and inner POIs. Then, high-dimensional characteristic vectors of POIs and TAZs are extracted using the Word2Vec model. Finally, to validate the reliability of the POI/TAZ vectors, we implement a K-Means-based clustering model to analyze correlations between the POI/TAZ vectors and deploy TAZ vectors to identify urban land use types using a random forest algorithm (RFA) model. Compared with some state-of-the-art probabilistic topic models (PTMs), the proposed method can efficiently obtain the highest accuracy (OA = 0.8728, kappa = 0.8399). Moreover, the results can be used to help urban planners monitor dynamic urban land use and evaluate the impact of urban planning schemes.
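As a rough illustration of the pipeline sketched in this abstract, the following hedged Python snippet strings together Word2Vec, K-Means, and a random forest; the toy `taz_corpus`, the 64-dimensional vectors, and the mean-pooling of POI vectors into a TAZ vector are assumptions made for the example, not the paper's exact settings.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

# One "sentence" per TAZ: the ordered list of POI category tokens inside it,
# produced (in the paper) by a greedy spatial ordering. Toy data here.
taz_corpus = [
    ["restaurant", "hotel", "bank", "restaurant"],
    ["school", "park", "residential", "clinic"],
]

w2v = Word2Vec(taz_corpus, vector_size=64, window=5, min_count=1, sg=1)

def taz_vector(tokens):
    """Represent a TAZ as the mean of its POI vectors (one simple choice)."""
    return np.mean([w2v.wv[t] for t in tokens], axis=0)

X = np.array([taz_vector(t) for t in taz_corpus])

clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)   # explore structure
# With ground-truth land-use labels `y`, a random forest closes the loop:
# clf = RandomForestClassifier(n_estimators=200).fit(X, y)
```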
b0f7423f93e7c6e506c115771ef82440077a732a
Full virtualization based ARINC 653 partitioning
As the number of electronic components of avionics systems is significantly increasing, it is desirable to run several avionics software on a single computing device. In such system, providing a seamless way to integrate separate applications on a computing device is a very critical issue as the Integrated Modular Avionics (IMA) concept addresses. In this context, the ARINC 653 standard defines resource partitioning of avionics application software. The virtualization technology has very high potential of providing an optimal implementation of the partition concept. In this paper, we study supports for full virtualization based ARINC 653 partitioning. The supports include extension of XML-based configuration file format and hierarchical scheduler for temporal partitioning. We show that our implementation can support well-known VMMs, such as VirtualBox and VMware and present basic performance numbers.
5fa463ad51c0fda19cf6a32d851a12eec5e872b1
Human Identification From Freestyle Walks Using Posture-Based Gait Feature
With the increase of terrorist threats around the world, human identification research has become a sought-after area of research. Unlike standard biometric recognition techniques, gait recognition is a non-intrusive technique. Both data collection and classification processes can be done without a subject’s cooperation. In this paper, we propose a new model-based gait recognition technique called posture-based gait recognition. It consists of two elements: posture-based features and posture-based classification. Posture-based features are composed of displacements of all joints between current and adjacent frames and center-of-body (CoB) relative coordinates of all joints, where the coordinates of each joint come from its relative position to four joints: hip-center, hip-left, hip-right, and spine, viewed from the front. The CoB relative coordinate system is a critical part in handling the different observation angle issue. In posture-based classification, posture-based gait features of all frames are considered, and the dominant subject becomes the classification result. The posture-based gait recognition technique outperforms existing techniques in both fixed-direction and freestyle walk scenarios, where turning around and changing directions are involved. This suggests that a set of postures and quick movements is sufficient to identify a person. The proposed technique also performs well under the gallery-size test and the cumulative match characteristic test, which implies that the posture-based gait recognition technique is not gallery-size sensitive and is a good potential tool for forensic and surveillance use.
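The feature construction described above (per-joint displacements plus CoB-relative coordinates) can be sketched in a few lines of numpy; the joint ordering and the stand-in random data below are assumptions, not the paper's skeleton format.

```python
import numpy as np

# Illustrative joint indices for a Kinect-style skeleton; the exact joint set
# and ordering are assumptions, not the paper's specification.
HIP_CENTER, HIP_LEFT, HIP_RIGHT, SPINE = 0, 1, 2, 3

def posture_features(prev_frame, curr_frame):
    """Build a posture feature vector from two consecutive skeleton frames.

    Each frame is an (n_joints, 3) array of joint coordinates. The feature
    concatenates (a) per-joint displacement between frames and (b) each
    joint's coordinates relative to the four reference joints.
    """
    displacement = (curr_frame - prev_frame).ravel()
    refs = curr_frame[[HIP_CENTER, HIP_LEFT, HIP_RIGHT, SPINE]]
    relative = np.concatenate([(curr_frame - r).ravel() for r in refs])
    return np.concatenate([displacement, relative])

# Example with random data standing in for real skeleton tracking output:
rng = np.random.default_rng(0)
frames = rng.normal(size=(10, 20, 3))          # 10 frames, 20 joints
features = np.array([posture_features(frames[i - 1], frames[i])
                     for i in range(1, len(frames))])
```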
602f775577a5458e8b6c5d5a3cdccc7bb183662c
Comparing comprehension measured by multiple-choice and open-ended questions.
This study compared the nature of text comprehension as measured by multiple-choice format and open-ended format questions. Participants read a short text while explaining preselected sentences. After reading the text, participants answered open-ended and multiple-choice versions of the same questions based on their memory of the text content. The results indicated that performance on open-ended questions was correlated with the quality of self-explanations, but performance on multiple-choice questions was correlated with the level of prior knowledge related to the text. These results suggest that open-ended and multiple-choice format questions measure different aspects of comprehension processes. The results are discussed in terms of dual process theories of text comprehension.
ebeca41ac60c2151137a45fcc5d1a70a419cad65
Current location-based next POI recommendation
The availability of large volumes of community-contributed location data enables many location-based services, and these services have attracted considerable interest from industry and academia. In this paper we propose a new recommender system that recommends a new POI for the next hours. First we find the users with similar check-in sequences and depict their check-in sequences as a directed graph, then find the user's current location. To recommend a new POI for the next hour we refer to the directed graph we have created. Our algorithm considers both the temporal factor, i.e., the recommendation time, and the spatial (distance) factor at the same time. We conduct an experiment on random data collected from Foursquare and Gowalla. Experimental results show that our proposed model outperforms state-of-the-art collaborative-filtering based recommender techniques.
08952d434a9b6f1dc9281f2693b2dd855edcda6b
SiRiUS: Securing Remote Untrusted Storage
This paper presents SiRiUS, a secure file system designed to be layered over insecure network and P2P file systems such as NFS, CIFS, OceanStore, and Yahoo! Briefcase. SiRiUS assumes the network storage is untrusted and provides its own read-write cryptographic access control for file level sharing. Key management and revocation is simple with minimal out-of-band communication. File system freshness guarantees are supported by SiRiUS using hash tree constructions. SiRiUS contains a novel method of performing file random access in a cryptographic file system without the use of a block server. Extensions to SiRiUS include large scale group sharing using the NNL key revocation construction. Our implementation of SiRiUS performs well relative to the underlying file system despite using cryptographic operations.
adeca3a75008d92cb52f5f2561dda7005a8814a4
Calibrated fuzzy AHP for current bank account selection
Fuzzy AHP is a hybrid method that combines Fuzzy Set Theory and AHP. It has been developed to take into account uncertainty and imprecision in the evaluations. Fuzzy Set Theory requires the definition of a membership function. At present, there are no indications of how these membership functions can be constructed. In this paper, a way to calibrate the membership functions with comparisons given by the decision-maker on alternatives with known measures is proposed. This new technique is illustrated in a study measuring the most important factors in selecting a student current account. 2012 Elsevier Ltd. All rights reserved.
539b15c0215582d12e2228d486374651c21ac75d
Lane-Change Fuzzy Control in Autonomous Vehicles for the Overtaking Maneuver
The automation of the overtaking maneuver is considered to be one of the toughest challenges in the development of autonomous vehicles. This operation involves two vehicles (the overtaking and the overtaken) cooperatively driving, as well as the surveillance of any other vehicles that are involved in the maneuver. This operation consists of two lane changes: one from the right to the left lane of the road, and the other to return to the right lane after passing. Lane-change maneuvers have been used to move into or out of a circulation lane or platoon; however, overtaking operations have not received much coverage in the literature. In this paper, we present an overtaking system for autonomous vehicles equipped with path-tracking and lane-change capabilities. The system uses fuzzy controllers that mimic human behavior and reactions during overtaking maneuvers. The system is based on the information that is supplied by a high-precision Global Positioning System and a wireless network environment. It is able to drive an automated vehicle and overtake a second vehicle that is driving in the same lane of the road.
a306754e556446a5199e258f464fd6e26be547fe
Safety and Efficacy of Selective Neurectomy of the Gastrocnemius Muscle for Calf Reduction in 300 Cases
Liposuction alone is not always sufficient to correct the shape of the lower leg, and muscle reduction may be necessary. To assess the outcomes of a new technique of selective neurectomy of the gastrocnemius muscle to correct calf hypertrophy. Between October 2007 and May 2010, 300 patients underwent neurectomy of the medial and lateral heads of the gastrocnemius muscle at the Department of Cosmetic and Plastic Surgery, the Second People’s Hospital of Guangdong Province (Guangzhou, China) to correct the shape of their lower legs. Follow-up data from these 300 patients were analyzed retrospectively. Cosmetic results were evaluated independently by the surgeon, the patient, and a third party. Preoperative and postoperative calf circumferences were compared. The Fugl-Meyer motor function assessment was evaluated 3 months after surgery. The average reduction in calf circumference was 3.2 ± 1.2 cm. The Fugl-Meyer scores were normal in all patients both before and 3 months after surgery. A normal calf shape was achieved in all patients. Six patients complained of fatigue while walking and four of scar pigmentation, but in all cases, this resolved within 6 months. Calf asymmetry was observed in only two patients. The present series suggests that neurectomy of the medial and lateral heads of the gastrocnemius muscle may be safe and effective for correcting the shape of the calves. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .
f31e0932a2f35a6d7feff20977ce08b5b5398c60
Structure of the tendon connective tissue.
Tendons consist of collagen (mostly type I collagen) and elastin embedded in a proteoglycan-water matrix with collagen accounting for 65-80% and elastin approximately 1-2% of the dry mass of the tendon. These elements are produced by tenoblasts and tenocytes, which are the elongated fibroblasts and fibrocytes that lie between the collagen fibers, and are organized in a complex hierarchical scheme to form the tendon proper. Soluble tropocollagen molecules form cross-links to create insoluble collagen molecules which then aggregate progressively into microfibrils and then into electronmicroscopically clearly visible units, the collagen fibrils. A bunch of collagen fibrils forms a collagen fiber, which is the basic unit of a tendon. A fine sheath of connective tissue called endotenon invests each collagen fiber and binds fibers together. A bunch of collagen fibers forms a primary fiber bundle, and a group of primary fiber bundles forms a secondary fiber bundle. A group of secondary fiber bundles, in turn, forms a tertiary bundle, and the tertiary bundles make up the tendon. The entire tendon is surrounded by a fine connective tissue sheath called epitenon. The three-dimensional ultrastructure of tendon fibers and fiber bundles is complex. Within one collagen fiber, the fibrils are oriented not only longitudinally but also transversely and horizontally. The longitudinal fibers do not run only parallel but also cross each other, forming spirals. Some of the individual fibrils and fibril groups form spiral-type plaits. The basic function of the tendon is to transmit the force created by the muscle to the bone, and, in this way, make joint movement possible. The complex macro- and microstructure of tendons and tendon fibers make this possible. During various phases of movements, the tendons are exposed not only to longitudinal but also to transversal and rotational forces. In addition, they must be prepared to withstand direct contusions and pressures. The above-described three-dimensional internal structure of the fibers forms a buffer medium against forces of various directions, thus preventing damage and disconnection of the fibers.
6939327c1732e027130f0706b6279f78b8ecd2b7
Flexible Container-Based Computing Platform on Cloud for Scientific Workflows
Cloud computing is expected to be a promising solution for scientific computing. In this paper, we propose a flexible container-based computing platform to run scientific workflows on cloud. We integrate Galaxy, a popular biology workflow system, with four well-known container cluster systems. Preliminary evaluation shows that container cluster systems introduce negligible performance overhead for data intensive scientific workflows; meanwhile, they are able to solve the tool installation problem, guarantee reproducibility and improve resource utilization. Moreover, we implement four ways of using Docker, the most popular container tool, for our platform. Docker in Docker and Sibling Docker, which run everything within containers, both help scientists easily deploy our platform on any cloud in a few minutes.
545dd72cd0357995144bb19bef132bcc67a52667
Voiced-Unvoiced Classification of Speech Using a Neural Network Trained with LPC Coefficients
Voiced-Unvoiced classification (V-UV) is a well understood but still not perfectly solved problem. It tackles the problem of determining whether a signal frame contains harmonic content or not. This paper presents a new approach to this problem using a conventional multi-layer perceptron neural network trained with linear predictive coding (LPC) coefficients. LPC is a method that results in a number of coefficients that can be transformed to the envelope of the spectrum of the input frame. As a spectrum is suitable for determining the harmonic content, so are the LPC-coefficients. The proposed neural network works reasonably well compared to other approaches and has been evaluated on a small dataset of 4 different speakers.
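A hedged sketch of this pipeline using common libraries (librosa for LPC, scikit-learn for the MLP); the frame length, LPC order of 12, hidden-layer sizes, and the placeholder corpus variables are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def lpc_features(y, sr, order=12, frame_len=1024, hop=512):
    """LPC coefficients per frame (the constant leading 1 is dropped)."""
    frames = librosa.util.frame(y, frame_length=frame_len, hop_length=hop).T
    return np.array([librosa.lpc(f, order=order)[1:] for f in frames])

# `wav_paths` and `frame_labels` (1 = voiced, 0 = unvoiced) are assumed to come
# from a labelled corpus; they are placeholders here.
# X = np.vstack([lpc_features(*librosa.load(p, sr=16000)) for p in wav_paths])
# y = np.concatenate(frame_labels)
# clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500).fit(X, y)
```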
89cbcc1e740a4591443ff4765a6ae8df0fdf5554
Piaget ’ s Constructivism , Papert ’ s Constructionism : What ’ s the difference ?
What is the difference between Piaget's constructivism and Papert’s “constructionism”? Beyond the mere play on the words, I think the distinction holds, and that integrating both views can enrich our understanding of how people learn and grow. Piaget’s constructivism offers a window into what children are interested in, and able to achieve, at different stages of their development. The theory describes how children’s ways of doing and thinking evolve over time, and under which circumstance children are more likely to let go of—or hold onto— their currently held views. Piaget suggests that children have very good reasons not to abandon their worldviews just because someone else, be it an expert, tells them they’re wrong. Papert’s constructionism, in contrast, focuses more on the art of learning, or ‘learning to learn’, and on the significance of making things in learning. Papert is interested in how learners engage in a conversation with [their own or other people’s] artifacts, and how these conversations boost self-directed learning, and ultimately facilitate the construction of new knowledge. He stresses the importance of tools, media, and context in human development. Integrating both perspectives illuminates the processes by which individuals come to make sense of their experience, gradually optimizing their interactions with the world
19c05a149bb20f27dd0eca0ec3ac847390b2d100
Microphone array processing for distant speech recognition: Towards real-world deployment
Distant speech recognition (DSR) holds out the promise of providing a natural human computer interface in that it enables verbal interactions with computers without the necessity of donning intrusive body- or head-mounted devices. Recognizing distant speech robustly, however, remains a challenge. This paper provides an overview of DSR systems based on microphone arrays. In particular, we present recent work on acoustic beamforming for DSR, along with experimental results verifying the effectiveness of the various algorithms described here; beginning from a word error rate (WER) of 14.3% with a single microphone of a 64-channel linear array, our state-of-the-art DSR system achieved a WER of 5.3%, which was comparable to that of 4.2% obtained with a lapel microphone. Furthermore, we report the results of speech recognition experiments on data captured with a popular device, the Kinect [1]. Even for speakers at a distance of four meters from the Kinect, our DSR system achieved acceptable recognition performance on a large vocabulary task, a WER of 24.1%, beginning from a WER of 42.5% with a single array channel.
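Acoustic beamforming is the core technique surveyed here; as a hedged illustration (not the specific beamformers evaluated in the paper), the following numpy sketch implements a basic delay-and-sum beamformer for a plane wave arriving from a known direction, applying fractional delays in the frequency domain.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, direction, sr, c=343.0):
    """Delay-and-sum beamformer using FFT phase shifts for fractional delays.

    signals:       (n_mics, n_samples) array of microphone signals
    mic_positions: (n_mics, 3) microphone coordinates in metres
    direction:     unit vector pointing from the array towards the source
    sr:            sampling rate in Hz; c is the speed of sound in m/s
    """
    n_mics, n = signals.shape
    # Per-mic compensation delays for a plane wave from `direction` (seconds).
    delays = mic_positions @ np.asarray(direction) / c
    delays -= delays.min()
    freqs = np.fft.rfftfreq(n, d=1.0 / sr)
    spectra = np.fft.rfft(signals, axis=1)
    steering = np.exp(-2j * np.pi * np.outer(delays, freqs))  # phase shifts
    aligned = np.fft.irfft(spectra * steering, n=n, axis=1)
    return aligned.mean(axis=0)
```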
142bd1d4e41e5e29bdd87e0d5a145f3c708a3f44
Ford Campus vision and lidar data set
This paper describes a data set collected by an autonomous ground vehicle testbed, based upon a modified Ford F-250 pickup truck. The vehicle is outfitted with a professional (Applanix POS-LV) and consumer (Xsens MTi-G) inertial measurement unit (IMU), a Velodyne 3D-lidar scanner, two push-broom forward looking Riegl lidars, and a Point Grey Ladybug3 omnidirectional camera system. Here we present the time-registered data from these sensors mounted on the vehicle, collected while driving the vehicle around the Ford Research campus and downtown Dearborn, Michigan during November-December 2009. The vehicle path trajectory in these data sets contains several large and small-scale loop closures, which should be useful for testing various state-of-the-art computer vision and simultaneous localization and mapping (SLAM) algorithms.
1de3c8ddf30b9d6389aebc3bfa8a02a169a7368b
Mining frequent closed graphs on evolving data streams
Graph mining is a challenging task by itself, and even more so when processing data streams which evolve in real-time. Data stream mining faces hard constraints regarding time and space for processing, and also needs to provide for concept drift detection. In this paper we present a framework for studying graph pattern mining on time-varying streams. Three new methods for mining frequent closed subgraphs are presented. All methods work on coresets of closed subgraphs, compressed representations of graph sets, and maintain these sets in a batch-incremental manner, but use different approaches to address potential concept drift. An evaluation study on datasets comprising up to four million graphs explores the strength and limitations of the proposed methods. To the best of our knowledge this is the first work on mining frequent closed subgraphs in non-stationary data streams.
31ea3186aa7072a9e25218efe229f5ee3cca3316
Reinforced Video Captioning with Entailment Rewards
Sequence-to-sequence models have shown promising improvements on the temporal task of video captioning, but they optimize word-level cross-entropy loss during training. First, using policy gradient and mixed-loss methods for reinforcement learning, we directly optimize sentence-level task-based metrics (as rewards), achieving significant improvements over the baseline, based on both automatic metrics and human evaluation on multiple datasets. Next, we propose a novel entailment-enhanced reward (CIDEnt) that corrects phrase-matching based metrics (such as CIDEr) to only allow for logically-implied partial matches and avoid contradictions, achieving further significant improvements over the CIDEr-reward model. Overall, our CIDEnt-reward model achieves the new state-of-the-art on the MSR-VTT dataset.
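One way to read the reward described above is as a phrase-matching score that is penalised whenever an entailment classifier rejects the candidate caption; the sketch below encodes that reading, with the threshold and penalty values being illustrative assumptions rather than the paper's tuned settings.

```python
def cident_reward(cider_score, entailment_prob, threshold=0.5, penalty=1.0):
    """Entailment-corrected reward in the spirit of the abstract above.

    If the entailment classifier judges the generated caption as unlikely to
    be implied by the ground-truth caption (probability below `threshold`),
    the phrase-matching reward is penalised.
    """
    if entailment_prob < threshold:
        return cider_score - penalty
    return cider_score

# Inside a policy-gradient loop, the sampled caption's reward would be
# r = cident_reward(cider(sample, refs), entailment(refs, sample)).
```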
4b944d518b88beeb9b2376975400cabd6e919957
SDN and Virtualization Solutions for the Internet of Things: A Survey
The imminent arrival of the Internet of Things (IoT), which consists of a vast number of devices with heterogeneous characteristics, means that future networks need a new architecture to accommodate the expected increase in data generation. Software defined networking (SDN) and network virtualization (NV) are two technologies that promise to cost-effectively provide the scale and versatility necessary for IoT services. In this paper, we survey the state of the art on the application of SDN and NV to IoT. To the best of our knowledge, we are the first to provide a comprehensive description of every possible IoT implementation aspect for the two technologies. We start by outlining the ways of combining SDN and NV. Subsequently, we present how the two technologies can be used in the mobile and cellular context, with emphasis on forthcoming 5G networks. Afterward, we move to the study of wireless sensor networks, arguably the current foremost example of an IoT network. Finally, we review some general SDN-NV-enabled IoT architectures, along with real-life deployments and use-cases. We conclude by giving directions for future research on this topic.
fa16642fe405382cbd407ce1bc22213561185aba
Non-Invasive Glucose Meter for Android-Based Devices
This study helps in monitoring the blood glucose level of a patient with the aid of an android device non-invasively. Diabetes is a metabolic disease characterized by high levels of sugar in the blood, and considered as the fastest growing long-term disease affecting millions of people globally. The study measures the blood glucose level using a sensor patch through diffused reflectance spectra on the inner side of the forearm. The Arduino microcontroller does the processing of the information from the sensor patch while the Bluetooth module wirelessly transmits to the android device the measured glucose level for storing, interpreting and displaying. Results showed that there is no significant difference between the values measured using the commercially available glucose meter and the created device. Based on the ISO 15197 standard, 39 of the 40 trials conducted, or 97.5%, fell within the acceptable range.
a360a526794df3aa8de96f83df171769a4022642
Joint Text Embedding for Personalized Content-based Recommendation
Learning a good representation of text is key to many recommendation applications. Examples include news recommendation where texts to be recommended are constantly published every day. However, most existing recommendation techniques, such as matrix factorization based methods, mainly rely on interaction histories to learn representations of items. While latent factors of items can be learned effectively from user interaction data, in many cases, such data is not available, especially for newly emerged items. In this work, we aim to address the problem of personalized recommendation for completely new items with text information available. We cast the problem as a personalized text ranking problem and propose a general framework that combines text embedding with personalized recommendation. Users and textual content are embedded into latent feature space. The text embedding function can be learned end-to-end by predicting user interactions with items. To alleviate sparsity in interaction data, and leverage large amounts of text data with little or no user interactions, we further propose a joint text embedding model that incorporates unsupervised text embedding with a combination module. Experimental results show that our model can significantly improve the effectiveness of recommendation systems on real-world datasets.
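A minimal sketch of the scoring idea described above, assuming a mean-of-word-vectors text encoder and randomly initialised placeholder embeddings (the paper learns the embeddings jointly from interaction data, which the comment at the end hints at):

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM, N_USERS = 5000, 64, 100

word_emb = rng.normal(scale=0.1, size=(VOCAB, DIM))   # placeholder word vectors
user_emb = rng.normal(scale=0.1, size=(N_USERS, DIM)) # one latent vector per user

def embed_text(token_ids):
    """One simple text-embedding function: average the word vectors."""
    return word_emb[token_ids].mean(axis=0)

def score(user_id, token_ids):
    """Predicted preference of a user for a brand-new item given only its text."""
    return float(user_emb[user_id] @ embed_text(token_ids))

# Example: score(user_id=3, token_ids=[10, 42, 7])
# Training would fit `word_emb`/`user_emb` end-to-end on observed interactions
# (e.g. with a ranking loss); the random values above are placeholders.
```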
1aa60b5ae893cd93a221bf71b6b264f5aa5ca6b8
Why Not?
As humans, we have expectations for the results of any action, e.g. we expect at least one student to be returned when we query a university database for student records. When these expectations are not met, traditional database users often explore datasets via a series of slightly altered SQL queries. Yet most database access is via limited interfaces that deprive end users of the ability to alter their query in any way to garner better understanding of the dataset and result set. Users are unable to question why a particular data item is Not in the result set of a given query. In this work, we develop a model for answers to WHY NOT? queries. We show through a user study the usefulness of our answers, and describe two algorithms for finding the manipulation that discarded the data item of interest. Moreover, we work through two different methods for tracing the discarded data item that can be used with either algorithm. Using our algorithms, it is feasible for users to find the manipulation that excluded the data item of interest, and can eliminate the need for exhausting debugging.
f39e21382458bf723e207d0ac649680f9b4dde4a
Recognition of Offline Handwritten Chinese Characters Using the Tesseract Open Source OCR Engine
Due to the complex structure and handwritten deformation, the offline handwritten Chinese characters recognition has been one of the most challenging problems. In this paper, an offline handwritten Chinese character recognition tool has been developed based on the Tesseract open source OCR engine. The tool mainly contributes on the following two points: first, a handwritten Chinese character features library is generated, which is independent of a specific user's writing style; second, by preprocessing the input image and adjusting the Tesseract engine, multiple candidate recognition results are output based on weight ranking. The recognition accuracy rate of this tool is above 88% for both known user test set and unknown user test set. It has shown that the Tesseract engine is feasible for offline handwritten Chinese character recognition to a certain degree.
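For readers who want to experiment with the engine themselves, a hedged usage sketch with pytesseract is shown below; the `chi_sim` language pack and the `--psm 10` single-character mode are assumptions for illustration, whereas the tool above relies on its own handwriting feature library and candidate re-ranking.

```python
from PIL import Image
import pytesseract

def recognize_character(image_path):
    """Run Tesseract on a single-character image after greyscale conversion."""
    img = Image.open(image_path).convert("L")
    # --psm 10 treats the image as a single character; `lang` must match an
    # installed traineddata file (stock, or custom-trained for handwriting).
    return pytesseract.image_to_string(img, lang="chi_sim", config="--psm 10")

# print(recognize_character("sample_char.png"))
```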
3cc0c9a9917f9ed032376fa467838e720701e783
Gal4 in the Drosophila female germline
The modular Gal4 system has proven to be an extremely useful tool for conditional gene expression in Drosophila. One limitation has been the inability of the system to work in the female germline. A modified Gal4 system that works throughout oogenesis is presented here. To achieve germline expression, it was critical to change the basal promoter and 3'-UTR in the Gal4-responsive expression vector (generating UASp). Basal promoters and heterologous 3'-UTRs are often considered neutral, but as shown here, can endow qualitative tissue-specificity to a chimeric transcript. The modified Gal4 system was used to investigate the role of the Drosophila FGF homologue branchless, ligand for the FGF receptor breathless, in border cell migration. FGF signaling guides tracheal cell migration in the embryo. However, misexpression of branchless in the ovary had no effect on border cell migration. Thus border cells and tracheal cells appear to be guided differently.
e79b34f6779095a73ba4604291d84bc26802b35e
Improving Relation Extraction by Pre-trained Language Representations
Current state-of-the-art relation extraction methods typically rely on a set of lexical, syntactic, and semantic features, explicitly computed in a pre-processing step. Training feature extraction models requires additional annotated language resources, which severely restricts the applicability and portability of relation extraction to novel languages. Similarly, pre-processing introduces an additional source of error. To address these limitations, we introduce TRE, a Transformer for Relation Extraction. Unlike previous relation extraction models, TRE uses pre-trained deep language representations instead of explicit linguistic features to inform the relation classification and combines it with the self-attentive Transformer architecture to effectively model long-range dependencies between entity mentions. TRE allows us to learn implicit linguistic features solely from plain text corpora by unsupervised pre-training, before fine-tuning the learned language representations on the relation extraction task. TRE obtains a new state-of-the-art result on the TACRED and SemEval 2010 Task 8 datasets, achieving a test F1 of 67.4 and 87.1, respectively. Furthermore, we observe a significant increase in sample efficiency. With only 20% of the training examples, TRE matches the performance of our baselines and our model trained from scratch on 100% of the TACRED dataset. We open-source our trained models, experiments, and source code.
bad43ffc1c7d07db5990f631334bfa3157a6b134
Plate-laminated corporate-feed slotted waveguide array antenna at 350-GHz band by silicon process
A corporate feed slotted waveguide array antenna with broadband characteristics in terms of gain in the 350 GHz band is achieved by measurement for the first time. The etching accuracy for thin laminated plates of the diffusion bonding process with conventional chemical etching is limited to ±20 μm. This limits the use of this process for antenna fabrication in the submillimeter wave band where the fabrication tolerances are very severe. To improve the etching accuracy of the thin laminated plates, a new fabrication process has been developed. Each silicon wafer is etched by DRIE (deep reactive ion etcher) and is plated by gold on the surface. This new fabrication process provides better fabrication tolerances of about ±5 μm using a wafer bond aligner. The thin laminated wafers are then bonded with the diffusion bonding process under high temperature and high pressure. To validate the proposed antenna concepts, an antenna prototype has been designed and fabricated in the 350 GHz band. The 3dB-down gain bandwidth is about 44.6 GHz with this silicon process, while it was about 15 GHz in measurement with the conventional process using metal plates.
0dacd4593ba6bce441bae37fc3ff7f3b70408ee1
Private Empirical Risk Minimization: Efficient Algorithms and Tight Error Bounds
Convex empirical risk minimization is a basic tool in machine learning and statistics. We provide new algorithms and matching lower bounds for differentially private convex empirical risk minimization assuming only that each data point's contribution to the loss function is Lipschitz and that the domain of optimization is bounded. We provide a separate set of algorithms and matching lower bounds for the setting in which the loss functions are known to also be strongly convex. Our algorithms run in polynomial time, and in some cases even match the optimal nonprivate running time (as measured by oracle complexity). We give separate algorithms (and lower bounds) for (ε, 0)- and (ε, δ)-differential privacy; perhaps surprisingly, the techniques used for designing optimal algorithms in the two cases are completely different. Our lower bounds apply even to very simple, smooth function families, such as linear and quadratic functions. This implies that algorithms from previous work can be used to obtain optimal error rates, under the additional assumption that the contributions of each data point to the loss function is smooth. We show that simple approaches to smoothing arbitrary loss functions (in order to apply previous techniques) do not yield optimal error rates. In particular, optimal algorithms were not previously known for problems such as training support vector machines and the high-dimensional median.
24d800e6681a129b7787cbb05d0e224acad70e8d
Exposure: A Passive DNS Analysis Service to Detect and Report Malicious Domains
A wide range of malicious activities rely on the domain name service (DNS) to manage their large, distributed networks of infected machines. As a consequence, the monitoring and analysis of DNS queries has recently been proposed as one of the most promising techniques to detect and blacklist domains involved in malicious activities (e.g., phishing, spam, botnets command-and-control, etc.). EXPOSURE is a system we designed to detect such domains in real time, by applying 15 unique features grouped in four categories. We conducted a controlled experiment with a large, real-world dataset consisting of billions of DNS requests. The extremely positive results obtained in the tests convinced us to implement our techniques and deploy it as a free, online service. In this article, we present the Exposure system and describe the results and lessons learned from 17 months of its operation. Over this amount of time, the service detected over 100K malicious domains. The statistics about the time of usage, number of queries, and target IP addresses of each domain are also published on a daily basis on the service Web page.
32334506f746e83367cecb91a0ab841e287cd958
Practical privacy: the SuLQ framework
We consider a statistical database in which a trusted administrator introduces noise to the query responses with the goal of maintaining privacy of individual database entries. In such a database, a query consists of a pair (S, f) where S is a set of rows in the database and f is a function mapping database rows to {0, 1}. The true answer is Σ_{i∈S} f(d_i), and a noisy version is released as the response to the query. Results of Dinur, Dwork, and Nissim show that a strong form of privacy can be maintained using a surprisingly small amount of noise -- much less than the sampling error -- provided the total number of queries is sublinear in the number of database rows. We call this query and (slightly) noisy reply the SuLQ (Sub-Linear Queries) primitive. The assumption of sublinearity becomes reasonable as databases grow increasingly large. We extend this work in two ways. First, we modify the privacy analysis to real-valued functions f and arbitrary row types, as a consequence greatly improving the bounds on noise required for privacy. Second, we examine the computational power of the SuLQ primitive. We show that it is very powerful indeed, in that slightly noisy versions of the following computations can be carried out with very few invocations of the primitive: principal component analysis, k-means clustering, the Perceptron Algorithm, the ID3 algorithm, and (apparently!) all algorithms that operate in the statistical query learning model [11].
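The primitive itself is small enough to sketch directly; the version below uses Laplace noise calibrated to the unit sensitivity of the subset sum (one common choice), whereas the original analysis uses a different noise distribution, so treat this as an illustration rather than the paper's construction.

```python
import numpy as np

def sulq_query(database, row_subset, f, epsilon, rng=None):
    """Noisy subset-sum query in the spirit of the SuLQ primitive.

    `database` is a list of rows, `row_subset` a set of row indices, and `f`
    maps a row to [0, 1]. Laplace noise with scale 1/epsilon matches the
    sensitivity-1 sum; the original SuLQ analysis uses binomial/Gaussian-style
    noise and bounds the total number of queries.
    """
    rng = rng or np.random.default_rng()
    true_answer = sum(f(database[i]) for i in row_subset)
    return true_answer + rng.laplace(scale=1.0 / epsilon)

# Example: how many rows in S satisfy a predicate, answered noisily.
# db = [{"age": 34}, {"age": 51}, {"age": 29}]
# sulq_query(db, {0, 1, 2}, lambda r: 1.0 if r["age"] > 30 else 0.0, epsilon=0.5)
```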
49934d08d42ed9e279a82cbad2086377443c8a75
Differentially Private Empirical Risk Minimization
Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data, such as medical or financial records, are analyzed. We provide general techniques to produce privacy-preserving approximations of classifiers learned via (regularized) empirical risk minimization (ERM). These algorithms are private under the ε-differential privacy definition due to Dwork et al. (2006). First we apply the output perturbation ideas of Dwork et al. (2006), to ERM classification. Then we propose a new method, objective perturbation, for privacy-preserving machine learning algorithm design. This method entails perturbing the objective function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy, and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms, thereby providing end-to-end privacy guarantees for the training process. We apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines. We obtain encouraging results from evaluating their performance on real demographic and benchmark data sets. Our results show that both theoretically and empirically, objective perturbation is superior to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between privacy and learning performance.
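As a hedged illustration of the output perturbation baseline discussed above (not the paper's preferred objective perturbation), the following sketch perturbs the weights of an L2-regularized logistic regression with noise scaled to the 2/(n·λ) sensitivity bound; feature-norm clipping and other details of the analysis are simplified here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def output_perturbed_logreg(X, y, epsilon, lam=1.0, rng=None):
    """Output perturbation for L2-regularized logistic regression (sketch).

    The L2 sensitivity of the regularized ERM minimizer is at most 2/(n*lam)
    when feature vectors have norm <= 1, so noise whose norm follows a
    Gamma(d, 2/(n*lam*epsilon)) distribution with uniform direction yields
    epsilon-differential privacy under those assumptions.
    """
    rng = rng or np.random.default_rng()
    n, d = X.shape
    # sklearn's C corresponds to 1/(n*lam) for the (1/n)*loss + (lam/2)||w||^2 objective.
    clf = LogisticRegression(C=1.0 / (n * lam), fit_intercept=False).fit(X, y)
    w = clf.coef_.ravel()
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    norm = rng.gamma(shape=d, scale=2.0 / (n * lam * epsilon))
    return w + norm * direction
```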
61efdc56bc6c034e9d13a0c99d0b651a78bfc596
Differentially Private Distributed Constrained Optimization
Many resource allocation problems can be formulated as an optimization problem whose constraints contain sensitive information about participating users. This paper concerns a class of resource allocation problems whose objective function depends on the aggregate allocation (i.e., the sum of individual allocations); in particular, we investigate distributed algorithmic solutions that preserve the privacy of participating users. Without privacy considerations, existing distributed algorithms normally consist of a central entity computing and broadcasting certain public coordination signals to participating users. However, the coordination signals often depend on user information, so that an adversary who has access to the coordination signals can potentially decode information on individual users and put user privacy at risk. We present a distributed optimization algorithm that preserves differential privacy, which is a strong notion that guarantees user privacy regardless of any auxiliary information an adversary may have. The algorithm achieves privacy by perturbing the public signals with additive noise, whose magnitude is determined by the sensitivity of the projection operation onto user-specified constraints. By viewing the differentially private algorithm as an implementation of stochastic gradient descent, we are able to derive a bound for the suboptimality of the algorithm. We illustrate the implementation of our algorithm via a case study of electric vehicle charging. Specifically, we derive the sensitivity and present numerical simulations for the algorithm. Through numerical simulations, we are able to investigate various aspects of the algorithm when being used in practice, including the choice of step size, number of iterations, and the trade-off between privacy level and suboptimality.
c7788c34ba1387f1e437a2f83e1931f0c64d8e4e
The role of transparency in recommender systems
Recommender Systems act as personalized decision guides, aiding users in decisions on matters related to personal taste. Most previous research on Recommender Systems has focused on the statistical accuracy of the algorithms driving the systems, with little emphasis on interface issues and the user's perspective. The goal of this research was to examine the role of transparency (user understanding of why a particular recommendation was made) in Recommender Systems. To explore this issue, we conducted a user study of five music Recommender Systems. Preliminary results indicate that users like and feel more confident about recommendations that they perceive as transparent.
7731c8a1c56fdfa149759a8bb7b81464da0b15c1
Recognizing Abnormal Heart Sounds Using Deep Learning
The work presented here applies deep learning to the task of automated cardiac auscultation, i.e. recognizing abnormalities in heart sounds. We describe an automated heart sound classification algorithm that combines the use of time-frequency heat map representations with a deep convolutional neural network (CNN). Given the cost-sensitive nature of misclassification, our CNN architecture is trained using a modified loss function that directly optimizes the trade-off between sensitivity and specificity. We evaluated our algorithm at the 2016 PhysioNet Computing in Cardiology challenge where the objective was to accurately classify normal and abnormal heart sounds from single, short, potentially noisy recordings. Our entry to the challenge achieved a final specificity of 0.95, sensitivity of 0.73 and overall score of 0.84. We achieved the greatest specificity score out of all challenge entries and, using just a single CNN, our algorithm differed in overall score by only 0.02 compared to the top place finisher, which used an ensemble approach.
17a00f26b68f40fb03e998a7eef40437dd40e561
The Tire as an Intelligent Sensor
Active safety systems are based upon the accurate and fast estimation of the value of important dynamical variables such as forces, load transfer, actual tire-road friction (kinetic friction) μk, and maximum tire-road friction available (potential friction) μp. Measuring these parameters directly from tires offers the potential for improving significantly the performance of active safety systems. We present a distributed architecture for a data-acquisition system that is based on a number of complex intelligent sensors inside the tire that form a wireless sensor network with coordination nodes placed on the body of the car. The design of this system has been extremely challenging due to the very limited available energy combined with strict application requirements for data rate, delay, size, weight, and reliability in a highly dynamical environment. Moreover, it required expertise in multiple engineering disciplines, including control-system design, signal processing, integrated-circuit design, communications, real-time software design, antenna design, energy scavenging, and system assembly.
190dcdb71a119ec830d6e7e6e01bb42c6c10c2f3
Surgical precision JIT compilers
Just-in-time (JIT) compilation of running programs provides more optimization opportunities than offline compilation. Modern JIT compilers, such as those in virtual machines like Oracle's HotSpot for Java or Google's V8 for JavaScript, rely on dynamic profiling as their key mechanism to guide optimizations. While these JIT compilers offer good average performance, their behavior is a black box and the achieved performance is highly unpredictable. In this paper, we propose to turn JIT compilation into a precision tool by adding two essential and generic metaprogramming facilities: First, allow programs to invoke JIT compilation explicitly. This enables controlled specialization of arbitrary code at run-time, in the style of partial evaluation. It also enables the JIT compiler to report warnings and errors to the program when it is unable to compile a code path in the demanded way. Second, allow the JIT compiler to call back into the program to perform compile-time computation. This lets the program itself define the translation strategy for certain constructs on the fly and gives rise to a powerful JIT macro facility that enables "smart" libraries to supply domain-specific compiler optimizations or safety checks. We present Lancet, a JIT compiler framework for Java bytecode that enables such a tight, two-way integration with the running program. Lancet itself was derived from a high-level Java bytecode interpreter: staging the interpreter using LMS (Lightweight Modular Staging) produced a simple bytecode compiler. Adding abstract interpretation turned the simple compiler into an optimizing compiler. This fact provides compelling evidence for the scalability of the staged-interpreter approach to compiler construction. In the case of Lancet, JIT macros also provide a natural interface to existing LMS-based toolchains such as the Delite parallelism and DSL framework, which can now serve as accelerator macros for arbitrary JVM bytecode.
f5888af5e5353eb74d37ec50e9840e58b1992953
An LDA-Based Approach to Scientific Paper Recommendation
Recommendation of scientific papers is a task aimed to support researchers in accessing relevant articles from a large pool of unseen articles. When writing a paper, a researcher focuses on the topics related to her/his scientific domain, by using a technical language. The core idea of this paper is to exploit the topics related to the researcher's scientific production (authored articles) to formally define her/his profile; in particular we propose to employ topic modeling to formally represent the user profile, and language modeling to formally represent each unseen paper. The recommendation technique we propose relies on the assessment of the closeness of the language used in the researcher's papers and the one employed in the unseen papers. The proposed approach exploits a reliable knowledge source for building the user profile, and it alleviates the cold-start problem, typical of collaborative filtering techniques. We also present a preliminary evaluation of our approach on the DBLP.
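A hedged sketch of the profile-versus-paper matching idea, using gensim's LDA and Hellinger distance between topic distributions; the tokenised corpora, the 20-topic setting, and the concatenation of the author's papers into a single profile document are assumptions made for illustration, not the paper's exact modeling choices.

```python
from gensim import corpora, models, matutils

def build_profile(author_docs, num_topics=20):
    """Fit an LDA model on the author's (tokenised) papers and build a profile."""
    dictionary = corpora.Dictionary(author_docs)
    bows = [dictionary.doc2bow(d) for d in author_docs]
    lda = models.LdaModel(bows, num_topics=num_topics, id2word=dictionary)
    profile_bow = dictionary.doc2bow([tok for d in author_docs for tok in d])
    profile_topics = lda.get_document_topics(profile_bow, minimum_probability=0.0)
    return lda, dictionary, profile_topics

def rank_candidates(lda, dictionary, profile_topics, candidate_docs):
    """Order unseen papers by closeness of their topic mix to the profile."""
    scores = []
    for doc in candidate_docs:
        topics = lda.get_document_topics(dictionary.doc2bow(doc),
                                         minimum_probability=0.0)
        scores.append(matutils.hellinger(profile_topics, topics))  # smaller = closer
    return sorted(range(len(candidate_docs)), key=lambda i: scores[i])
```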
1f8be49d63c694ec71c2310309cd02a2d8dd457f
Adaptive Laplace Mechanism: Differential Privacy Preservation in Deep Learning
In this paper, we focus on developing a novel mechanism to preserve differential privacy in deep neural networks, such that: (1) The privacy budget consumption is totally independent of the number of training steps; (2) It has the ability to adaptively inject noise into features based on the contribution of each to the output; and (3) It could be applied in a variety of different deep neural networks. To achieve this, we figure out a way to perturb affine transformations of neurons, and loss functions used in deep neural networks. In addition, our mechanism intentionally adds "more noise" into features which are "less relevant" to the model output, and vice-versa. Our theoretical analysis further derives the sensitivities and error bounds of our mechanism. Rigorous experiments conducted on MNIST and CIFAR-10 datasets show that our mechanism is highly effective and outperforms existing solutions.
31e9d9458471b4a0cfc6cf1de219b10af0f37239
Why do you play World of Warcraft? An in-depth exploration of self-reported motivations to play online and in-game behaviours in the virtual world of Azeroth
Massively multiplayer online role-playing games (MMORPGs) are video games in which players create an avatar that evolves and interacts with other avatars in a persistent virtual world. Motivations to play MMORPGs are heterogeneous (e.g. achievement, socialisation, immersion in virtual worlds). This study investigates in detail the relationships between self-reported motives and actual in-game behaviours. We recruited a sample of 690 World of Warcraft players (the most popular MMORPG) who agreed to have their avatar monitored for 8 months. Participants completed an initial online survey about their motives to play. Their actual in-game behaviours were measured through the game’s official database (the Armory website). Results showed specific associations between motives and in-game behaviours. Moreover, longitudinal analyses revealed that teamwork- and competition-oriented motives are the most accurate predictors of fast progression in the game. In addition, although specific associations exist between problematic use and certain motives (e.g. advancement, escapism), longitudinal analyses showed that high involvement in the game is not necessarily associated with a negative impact upon daily living. 2012 Elsevier Ltd. All rights reserved.
33127e014cf537192c33a5b0e4b62df2a7b1869f
Policy ratification
It is not sufficient to merely check the syntax of new policies before they are deployed in a system; policies need to be analyzed for their interactions with each other and with their local environment. That is, policies need to go through a ratification process. We believe policy ratification becomes an essential part of system management as the number of policies in the system increases and as the system administration becomes more decentralized. In this paper, we focus on the basic tasks involved in policy ratification. To a large degree, these basic tasks can be performed independent of policy model and language and require little domain-specific knowledge. We present algorithms from constraint, linear, and logic programming disciplines to help perform ratification tasks. We provide an algorithm to efficiently assign priorities to the policies based on relative policy preferences indicated by policy administrators. Finally, with an example, we show how these algorithms have been integrated with our policy system to provide feedback to a policy administrator regarding potential interactions of policies with each other and with their deployment environment.
c6b5c1cc565c878db50ad20aafd804284558ad02
Centrality in valued graphs : A measure of betweenness based on network flow
A new measure of centrality, C_F, is introduced. It is based on the concept of network flows. While conceptually similar to Freeman's original measure, C_B, the new measure differs from the original in two important ways. First, C_F is defined for both valued and non-valued graphs. This makes C_F applicable to a wider variety of network datasets. Second, the computation of C_F is not based on geodesic paths, as is C_B, but on all the independent paths between all pairs of points in the network.
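As a rough illustration of flow-based betweenness, the sketch below credits each node with the max-flow that disappears when the node is removed, computed with networkx; this vertex-deletion variant is only an approximation of the measure described above, which decomposes the flows themselves, and it is expensive on large graphs.

```python
import networkx as nx

def flow_betweenness(G, capacity="weight"):
    """Approximate flow-based betweenness for a capacitated directed graph."""
    score = {v: 0.0 for v in G}
    nodes = list(G)
    for s in nodes:
        for t in nodes:
            if s == t:
                continue
            base = nx.maximum_flow_value(G, s, t, capacity=capacity)
            for v in nodes:
                if v in (s, t):
                    continue
                H = G.copy()
                H.remove_node(v)
                reduced = (nx.maximum_flow_value(H, s, t, capacity=capacity)
                           if nx.has_path(H, s, t) else 0.0)
                score[v] += base - reduced  # flow lost when v is removed
    return score

# G = nx.DiGraph()
# G.add_edge("a", "b", weight=3); G.add_edge("b", "c", weight=2); G.add_edge("a", "c", weight=1)
# flow_betweenness(G)
```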
2ccca721c20ad1d8503ede36fe310626070de640
Distributed Energy Resources Topology Identification via Graphical Modeling
Distributed energy resources (DERs), such as photovoltaic, wind, and gas generators, are connected to the grid more than ever before, which introduces tremendous changes in the distribution grid. Due to these changes, it is important to understand where these DERs are connected in order to sustainably operate the distribution grid. But the exact distribution system topology is difficult to obtain due to frequent distribution grid reconfigurations and insufficient knowledge about new components. In this paper, we propose a methodology that utilizes new data from sensor-equipped DER devices to obtain the distribution grid topology. Specifically, a graphical model is presented to describe the probabilistic relationship among different voltage measurements. With power flow analysis, a mutual information-based identification algorithm is proposed to deal with tree and partially meshed networks. Simulation results show highly accurate connectivity identification in the IEEE standard distribution test systems and Electric Power Research Institute test systems.
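A hedged sketch of the core idea (mutual information between voltage measurements followed by a maximum-weight spanning tree), using the Gaussian closed form for mutual information; the algorithm described above additionally incorporates power flow analysis and handles partially meshed networks, which this toy version does not.

```python
import numpy as np
import networkx as nx

def topology_from_voltages(V):
    """Infer a radial (tree) topology from voltage-magnitude measurements.

    V is a (samples, buses) array of measurements from DER sensors. Pairwise
    mutual information is approximated via the Gaussian formula
    MI = -0.5 * log(1 - rho^2), and the maximum-weight spanning tree of the
    resulting MI graph is returned as the inferred connectivity.
    """
    corr = np.corrcoef(V, rowvar=False)
    n = corr.shape[0]
    G = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            rho2 = min(corr[i, j] ** 2, 0.999999)   # guard against rho = 1
            G.add_edge(i, j, mi=-0.5 * np.log(1.0 - rho2))
    return nx.maximum_spanning_tree(G, weight="mi")

# rng = np.random.default_rng(0)
# V = rng.normal(size=(500, 6))          # placeholder for real measurements
# T = topology_from_voltages(V); print(sorted(T.edges()))
```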
8eb3ebd0a1d8a26c7070543180d233f841b79850
Performance of Reliable Transport Protocol over IEEE 802.11 Wireless LAN: Analysis and Enhancement
IEEE 802.11 Medium Access Control (MAC) is proposed to support asynchronous and time bounded delivery of radio data packets in infrastructure and ad hoc networks. The basis of the IEEE 802.11 WLAN MAC protocol is the Distributed Coordination Function (DCF), which is a Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) scheme with binary slotted exponential back-off. Since IEEE 802.11 MAC has its own characteristics that are different from other wireless MAC protocols, the performance of reliable transport protocol over 802.11 needs further study. This paper proposes a scheme named DCF+, which is compatible with DCF, to enhance the performance of reliable transport protocol over WLAN. To analyze the performance of DCF and DCF+, this paper also introduces an analytical model to compute the saturated throughput of WLAN. Compared with other models, this model is shown to be able to predict the behaviors of 802.11 more accurately. Moreover, DCF+ is able to improve the performance of TCP over WLAN, which is verified by modeling and elaborate simulation results.
5574763d870bae0fd3fd6d3014297942a045f60a
Utilization of Data mining Approaches for Prediction of Life Threatening Diseases Survivability
Data mining now plays an important role in disease prediction in the health care industry. Health care organizations apply data mining techniques to uncover information hidden in clinical data sets, and many diagnostic studies have been carried out to predict diseases. Without deep medical knowledge and clinical experience, treatment can go wrong, and the time taken to recover depends on the severity of the patient's condition. To identify a disease, a patient may need to undergo many tests, not all of which are informative, and in the worst case the delay can lead to the patient's death. Several experiments have compared the performance of predictive data mining methods with the aim of indirectly reducing the number of tests a patient must undergo. This paper presents a survey on predicting the presence of life-threatening diseases and lists the classification algorithms that have been used, together with the number of attributes employed for prediction.
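As a concrete illustration of the kind of comparison such surveys report, the snippet below benchmarks a few standard classifiers on a public breast-cancer data set with scikit-learn. It is a generic illustration of diagnosis/survivability prediction, not a reproduction of any study covered by the survey.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
models = {
    "naive Bayes":  GaussianNB(),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "k-NN":         make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "SVM":          make_pipeline(StandardScaler(), SVC()),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)  # 10-fold cross-validation
    print(f"{name:14s} mean accuracy = {scores.mean():.3f}")
```

Cross-validated accuracy is only one criterion; for medical screening, sensitivity and the number of attributes (tests) required are usually at least as important.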
6273df9def7c011bc21cd42a4029d4b7c7c48c2e
A 45GHz Doherty power amplifier with 23% PAE and 18dBm output power, in 45nm SOI CMOS
A 45 GHz Doherty power amplifier is implemented in 45 nm SOI CMOS. Two-stack FET amplifiers are used as the main and auxiliary amplifiers, allowing a supply voltage of 2.5 V and high output power. The use of slow-wave coplanar waveguides (CPW) improves the PAE and gain by approximately 3% and 1 dB, respectively, and reduces the die area by 20%. This amplifier exhibits more than 18 dBm saturated output power, with a peak power gain of 7 dB. It occupies 0.64 mm2 while achieving a peak PAE of 23%; at 6 dB back-off the PAE is 17%.
1e396464e440e6032be3f035a9a6837c32c9d2c0
Review of Micro Thermoelectric Generator
Used for thermal energy harvesting, a thermoelectric generator (TEG) converts heat into electricity directly. Structurally, the main part of a TEG is the thermopile, which consists of thermocouples connected electrically in series and thermally in parallel. Benefiting from the massive progress achieved in microelectromechanical systems technology, the micro TEG (μ-TEG), with its advantages of small volume and high output voltage, has attracted attention over the past 20 years. This review gives a comprehensive survey of the development and current status of μ-TEGs. First, the principle of operation is introduced and some key parameters used for characterizing the performance of a μ-TEG are highlighted. Next, μ-TEGs are classified from the perspectives of structure, material, and fabrication technology. Then, almost all the relevant works are summarized for the convenience of comparison and reference; the summarized information includes the structure, material properties, fabrication technology, output performance, and so on. This will provide readers with an overall evaluation of different studies and guide them in choosing suitable μ-TEGs for their applications. In addition, the existing and potential applications of μ-TEGs are shown, especially applications in the Internet of Things. Finally, we summarize the challenges encountered in improving the output power of μ-TEGs and predict that more researchers will focus their efforts on flexible-structure μ-TEGs and on combining μ-TEGs with other energy harvesters. With the emergence of more low-power devices and the gradual improvement of the ZT value of thermoelectric materials, μ-TEGs are promising for applications in various fields.
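The key output relations behind such reviews are simple: with N thermocouples of per-couple Seebeck coefficient S under a temperature difference ΔT, the open-circuit voltage is N·S·ΔT, and the maximum power delivered to a matched load is that voltage squared over four times the internal resistance. A minimal numeric sketch follows; the device parameters are made-up, order-of-magnitude values.

```python
def teg_output(n_couples, seebeck, delta_t, r_internal):
    """Open-circuit voltage and matched-load power of a thermopile.

    seebeck: per-couple Seebeck coefficient [V/K]; r_internal: ohms."""
    v_oc = n_couples * seebeck * delta_t      # V_oc = N * S * dT
    p_max = v_oc ** 2 / (4 * r_internal)      # matched load: R_load = R_int
    return v_oc, p_max

# Hypothetical micro-TEG: 1000 couples, 200 uV/K each, 10 K across it, 500 ohm.
v, p = teg_output(1000, 200e-6, 10.0, 500.0)
print(f"V_oc = {v:.2f} V, P_max = {p * 1e3:.2f} mW")
```

The same relations explain why μ-TEGs trade output power for voltage: many small couples raise V_oc, but also raise the internal resistance.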
4c11a7b668dee651cc2d8eb2eaf8665449b1738f
Modern Release Engineering in a Nutshell -- Why Researchers Should Care
The release engineering process is the process that brings high quality code changes from a developer's workspace to the end user, encompassing code change integration, continuous integration, build system specifications, infrastructure-as-code, deployment and release. Recent practices of continuous delivery, which bring new content to the end user in days or hours rather than months or years, have generated a surge of industry-driven interest in the release engineering pipeline. This paper argues that the involvement of researchers is essential, by providing a brief introduction to the six major phases of the release engineering pipeline, a roadmap of future research, and a checklist of three major ways that the release engineering process of a system under study can invalidate the findings of software engineering studies. The main take-home message is that, while release engineering technology has flourished tremendously due to industry, empirical validation of best practices and the impact of the release engineering process on (amongst others) software quality is largely missing and provides major research opportunities.
9f6db3f5809a9d1b9f1c70d9d30382a0bd8be8d0
A Review on Performance Analysis of Cloud Computing Services for Scientific Computing
Cloud computing has emerged as an important commercial infrastructure that promises to reduce the need for organizations and institutes to maintain costly computing facilities. Through virtualization and time sharing of resources, clouds serve a large user base with very different needs from a single set of physical resources. Clouds therefore promise their owners the benefits of economies of scale and, at the same time, offer scientists an alternative to clusters, grids, and parallel production environments. However, current commercial clouds have been built to support web and small-database workloads, which are very different from typical scientific computing workloads. Furthermore, the use of virtualization and resource time sharing may introduce significant performance penalties for demanding scientific computing workloads. In this paper, we analyze the performance of cloud computing services for scientific computing workloads. We evaluate the presence in real scientific computing workloads of Many-Task Computing users, that is, of users who employ loosely coupled applications comprising many tasks to achieve their scientific goals. Our method is shown to yield comparable and even better results than more complex state-of-the-art techniques, while having the advantage of being suitable for real-time applications.
6fcccd6def46a4dd50f85df4d4c011bd9f1855af
Cedalion: a language for language oriented programming
Language Oriented Programming (LOP) is a paradigm that puts domain specific programming languages (DSLs) at the center of the software development process. Currently, there are three main approaches to LOP: (1) the use of internal DSLs, implemented as libraries in a given host language; (2) the use of external DSLs, implemented as interpreters or compilers in an external language; and (3) the use of language workbenches, which are integrated development environments (IDEs) for defining and using external DSLs. In this paper, we contribute: (4) a novel language-oriented approach to LOP for defining and using internal DSLs. While language workbenches adapt internal DSL features to overcome some of the limitations of external DSLs, our approach adapts language workbench features to overcome some of the limitations of internal DSLs. We introduce Cedalion, an LOP host language for internal DSLs, featuring static validation and projectional editing. To validate our approach we present a case study in which Cedalion was used by biologists in designing a DNA microarray for molecular biology research.
7cbbe0025b71a265c6bee195b5595cfad397a734
Health chair: implicitly sensing heart and respiratory rate
People interact with chairs frequently, making them a potential location to perform implicit health sensing that requires no additional effort by users. We surveyed 550 participants to understand how people sit in chairs and inform the design of a chair that detects heart and respiratory rate from the armrests and backrests of the chair respectively. In a laboratory study with 18 participants, we evaluated a range of common sitting positions to determine when heart rate and respiratory rate detection was possible (32% of the time for heart rate, 52% for respiratory rate) and evaluate the accuracy of the detected rate (83% for heart rate, 73% for respiratory rate). We discuss the challenges of moving this sensing to the wild by evaluating an in-situ study totaling 40 hours with 11 participants. We show that, as an implicit sensor, the chair can collect vital signs data from its occupant through natural interaction with the chair.
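To make the sensing idea concrete, a common signal-processing route is to band-limit a seat or backrest pressure (or ballistocardiographic) signal and take the dominant spectral peak in the physiological band as the rate estimate. The sketch below does this for a synthetic respiration-like signal; it is a generic illustration, not the paper's detection pipeline, and the band limits and sampling rate are assumptions.

```python
import numpy as np

def dominant_rate_bpm(signal, fs, low_hz, high_hz):
    """Estimate a periodic rate (breaths or beats per minute) as the
    strongest FFT peak of `signal` inside the band [low_hz, high_hz]."""
    x = signal - np.mean(signal)                  # remove the DC offset
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(x))
    band = (freqs >= low_hz) & (freqs <= high_hz)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic backrest signal: 0.25 Hz breathing (15 breaths/min) plus noise.
fs = 20.0
t = np.arange(0, 60, 1 / fs)
sig = (np.sin(2 * np.pi * 0.25 * t)
       + 0.3 * np.random.default_rng(1).standard_normal(t.size))
print(dominant_rate_bpm(sig, fs, low_hz=0.1, high_hz=0.5))  # ~15 breaths/min
```

The hard part in the wild, as the in-situ study suggests, is not the spectral estimate but deciding when the occupant's posture makes the signal usable at all.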
a00a757b26d5c4f53b628a9c565990cdd0e51876
The BURCHAK corpus: a Challenge Data Set for Interactive Learning of Visually Grounded Word Meanings
We motivate and describe a new freely available human-human dialogue data set for interactive learning of visually grounded word meanings through ostensive definition by a tutor to a learner. The data has been collected using a novel, character-by-character variant of the DiET chat tool (Healey et al., 2003; Mills and Healey, submitted) with a novel task, where a Learner needs to learn invented visual attribute words (such as “burchak” for square) from a tutor. As such, the text-based interactions closely resemble face-to-face conversation and thus contain many of the linguistic phenomena encountered in natural, spontaneous dialogue. These include self- and other-correction, mid-sentence continuations, interruptions, overlaps, fillers, and hedges. We also present a generic n-gram framework for building user (i.e. tutor) simulations from this type of incremental data, which is freely available to researchers. We show that the simulations produce outputs that are similar to the original data (e.g. 78% turn match similarity). Finally, we train and evaluate a Reinforcement Learning dialogue control agent for learning visually grounded word meanings, trained from the BURCHAK corpus. The learned policy shows comparable performance to a rule-based system built previously.
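The "generic n-gram framework for building user simulations" can be illustrated with a very small sketch: estimate n-gram statistics over tutor dialogue acts in context and sample the next act from them. The code below is such a toy bigram simulator over invented dialogue-act sequences, purely to convey the mechanics; it is not the released framework, and the act labels are hypothetical.

```python
import random
from collections import Counter, defaultdict

class BigramUserSimulator:
    """Toy user (tutor) simulator: P(next act | previous act) from data."""

    def __init__(self, dialogues):
        self.counts = defaultdict(Counter)
        for acts in dialogues:
            for prev, nxt in zip(["<start>"] + acts, acts + ["<end>"]):
                self.counts[prev][nxt] += 1

    def next_act(self, prev, rng=random):
        options = self.counts.get(prev)
        if not options:
            return "<end>"
        acts, weights = zip(*options.items())
        return rng.choices(acts, weights=weights)[0]

# Invented training dialogues of tutor acts (labels are hypothetical).
data = [
    ["describe-colour", "describe-shape", "confirm"],
    ["describe-shape", "correct-learner", "confirm"],
    ["describe-colour", "correct-learner", "describe-shape", "confirm"],
]
sim = BigramUserSimulator(data)
act, turn = "<start>", []
while (act := sim.next_act(act)) != "<end>":
    turn.append(act)
print(turn)
```

A simulator of this kind is what makes it feasible to train a reinforcement-learning dialogue policy without a human tutor in the loop for every episode.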
b49e31fe5948b3ca4552ac69dd7a735607467f1c
GUSS: Solving Collections of Data Related Models Within GAMS
In many applications, optimization of a collection of problems is required where each problem is structurally the same, but in which some or all of the data defining the instance is updated. Such models are easily specified within modern modeling systems, but have often been slow to solve due to the time needed to regenerate the instance, and the inability to use advance solution information (such as basis factorizations) from previous solves as the collection is processed. We describe a new language extension, GUSS, that gathers data from different sources/symbols to define the collection of models (called scenarios), updates a base model instance with this scenario data and solves the updated model instance and scatters the scenario results to symbols in the GAMS database. We demonstrate the utility of this approach in three applications, namely data envelopment analysis, cross validation and stochastic dual dynamic programming. The language extensions are available for general use in all versions of GAMS starting with release 23.7.
5914781bde18606e55e8f7683f55889df91576ec
30 + years of research and practice of outsourcing – Exploring the past and anticipating the future
Outsourcing is a phenomenon that as a practice originated in the 1950s, but it was not until the 1980s that the strategy became widely adopted in organizations. Since then, the strategy has evolved from a strictly cost-focused approach towards a more cooperative nature, in which cost is only one, and often secondary, decision-making criterion. In the development of the strategy, three broad and somewhat overlapping, yet distinct phases can be identified: the era of the Big Bang, the era of the Bandwagon, and the era of Barrierless Organizations. This paper illustrates that the evolution of the practice has caused several contradictions among researchers, as well as led to a situation in which the theoretical background of the phenomenon has recently become much richer. Through examining existing research, this paper identifies the development of outsourcing strategy from both a practical and a theoretical perspective, from its birth up to today. In addition, through providing insights from managers in the information technology industry, this paper aims at providing a glimpse of the future – that is – what may be the future directions and research issues in this complex phenomenon? © 2009 Elsevier Inc. All rights reserved.
423455ad8afb9b2534c0954a5e61c95bea611801
Virtualizing I/O Devices on VMware Workstation's Hosted Virtual Machine Monitor
Virtual machines were developed by IBM in the 1960s to provide concurrent, interactive access to a mainframe computer. Each virtual machine is a replica of the underlying physical machine and users are given the illusion of running directly on the physical machine. Virtual machines also provide benefits like isolation and resource sharing, and the ability to run multiple flavors and configurations of operating systems. VMware Workstation brings such mainframe-class virtual machine technology to PC-based desktop and workstation computers. This paper focuses on VMware Workstation’s approach to virtualizing I/O devices. PCs have a staggering variety of hardware, and are usually pre-installed with an operating system. Instead of replacing the pre-installed OS, VMware Workstation uses it to host a user-level application (VMApp) component, as well as to schedule a privileged virtual machine monitor (VMM) component. The VMM directly provides high-performance CPU virtualization while the VMApp uses the host OS to virtualize I/O devices and shield the VMM from the variety of devices. A crucial question is whether virtualizing devices via such a hosted architecture can meet the performance required of high-throughput, low-latency devices. To this end, this paper studies the virtualization and performance of an Ethernet adapter on VMware Workstation. Results indicate that with optimizations, VMware Workstation’s hosted virtualization architecture can match native I/O throughput on standard PCs. Although a straightforward hosted implementation is CPU-limited due to virtualization overhead on a 733 MHz Pentium III system on a 100 Mb/s Ethernet, a series of optimizations targeted at reducing CPU utilization allows the system to match native network throughput. Further optimizations are discussed both within and outside a hosted architecture.
c5788be735f3caadc7d0d3147aa52fd4a6036ec4
Detecting epistasis in human complex traits
Genome-wide association studies (GWASs) have become the focus of the statistical analysis of complex traits in humans, successfully shedding light on several aspects of genetic architecture and biological aetiology. Single-nucleotide polymorphisms (SNPs) are usually modelled as having additive, cumulative and independent effects on the phenotype. Although evidently a useful approach, it is often argued that this is not a realistic biological model and that epistasis (that is, the statistical interaction between SNPs) should be included. The purpose of this Review is to summarize recent directions in methodology for detecting epistasis and to discuss evidence of the role of epistasis in human complex trait variation. We also discuss the relevance of epistasis in the context of GWASs and potential hazards in the interpretation of statistical interaction terms.
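In practice, the simplest statistical-interaction test compares a logistic (or linear) model with additive SNP effects against one that adds a SNP×SNP product term. The sketch below runs that comparison on simulated genotypes with statsmodels; the effect sizes and sample are invented, and real GWAS-scale epistasis scans require far more efficient machinery than fitting one model per pair like this.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(42)
n = 5000
snp1 = rng.binomial(2, 0.3, n)   # additive genotype coding 0/1/2
snp2 = rng.binomial(2, 0.4, n)
# Simulated case/control status with a true interaction (epistatic) effect.
logit = -1.0 + 0.2 * snp1 + 0.2 * snp2 + 0.3 * snp1 * snp2
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_add = sm.add_constant(np.column_stack([snp1, snp2]))
X_int = sm.add_constant(np.column_stack([snp1, snp2, snp1 * snp2]))
fit_add = sm.Logit(y, X_add).fit(disp=0)
fit_int = sm.Logit(y, X_int).fit(disp=0)

# Likelihood-ratio test for the interaction term, 1 degree of freedom.
lr = 2 * (fit_int.llf - fit_add.llf)
print("interaction beta =", round(fit_int.params[-1], 3),
      "LRT p =", chi2.sf(lr, df=1))
```

The multiple-testing burden is the practical catch: with m SNPs there are roughly m^2/2 such pairwise tests, which drives much of the methodological work the review surveys.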
d3569f184b7083c0433bf00fa561736ae6f8d31e
Interactive Entity Resolution in Relational Data: A Visual Analytic Tool and Its Evaluation
Databases often contain uncertain and imprecise references to real-world entities. Entity resolution, the process of reconciling multiple references to underlying real-world entities, is an important data cleaning process required before accurate visualization or analysis of the data is possible. In many cases, in addition to noisy data describing entities, there is data describing the relationships among the entities. This relational data is important during the entity resolution process; it is useful both for the algorithms which determine likely database references to be resolved and for visual analytic tools which support the entity resolution process. In this paper, we introduce a novel user interface, D-Dupe, for interactive entity resolution in relational data. D-Dupe effectively combines relational entity resolution algorithms with a novel network visualization that enables users to make use of an entity's relational context for making resolution decisions. Since resolution decisions often are interdependent, D-Dupe facilitates understanding this complex process through animations which highlight combined inferences and a history mechanism which allows users to inspect chains of resolution decisions. An empirical study with 12 users confirmed the benefits of the relational context visualization on the performance of entity resolution tasks in relational data in terms of time as well as users' confidence and satisfaction.
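The relational intuition behind such tools, that two references are more likely to denote the same entity when their attribute values are similar and they share collaborators, can be sketched in a few lines. Below is a toy pairwise scoring rule combining string similarity with neighbour overlap; the weights, names, and threshold are made-up and it is not D-Dupe's algorithm.

```python
from difflib import SequenceMatcher

def match_score(name_a, name_b, coauthors_a, coauthors_b, alpha=0.7):
    """Blend attribute similarity with relational (shared-neighbour) evidence."""
    attr_sim = SequenceMatcher(None, name_a.lower(), name_b.lower()).ratio()
    union = coauthors_a | coauthors_b
    rel_sim = len(coauthors_a & coauthors_b) / len(union) if union else 0.0
    return alpha * attr_sim + (1 - alpha) * rel_sim

# Hypothetical author references with their co-author sets.
refs = {
    "J. Smith":   {"A. Jones", "L. Chen"},
    "John Smith": {"A. Jones", "L. Chen", "P. Patel"},
    "J. Smythe":  {"R. Gupta"},
}
names = list(refs)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        s = match_score(a, b, refs[a], refs[b])
        flag = "-> candidate match" if s > 0.6 else ""
        print(f"{a!r} vs {b!r}: score {s:.2f} {flag}")
```

The visual-analytic point of the paper is that a user, not a threshold, makes the final call; a score like this only ranks the candidates the interface should surface first.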
c630196c34533903b48e546897d46df27c844bc2
High-power-transfer-density capacitive wireless power transfer system for electric vehicle charging
This paper introduces a large air-gap capacitive wireless power transfer (WPT) system for electric vehicle charging that achieves a power transfer density exceeding the state-of-the-art by more than a factor of four. This high power transfer density is achieved by operating at a high switching frequency (6.78 MHz), combined with an innovative approach to designing matching networks that enable effective power transfer at this high frequency. In this approach, the matching networks are designed such that the parasitic capacitances present in a vehicle charging environment are absorbed and utilized as part of the wireless power transfer mechanism. A new modeling approach is developed to simplify the complex network of parasitic capacitances into equivalent capacitances that are directly utilized as the matching network capacitors. A systematic procedure to accurately measure these equivalent capacitances is also presented. A prototype capacitive WPT system with 150 cm2 coupling plates, operating at 6.78 MHz and incorporating matching networks designed using the proposed approach, is built and tested. The prototype system transfers 589 W of power across a 12-cm air gap, achieving a power transfer density of 19.6 kW/m2.
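As a sanity check on the headline figure, the quoted density appears to normalize the transferred power by the total area of the two 150 cm2 coupling-plate pairs rather than a single plate (this reading is our assumption):

```latex
\frac{P_\text{out}}{A_\text{plates}}
  = \frac{589\ \text{W}}{2 \times 150\ \text{cm}^2}
  = \frac{589\ \text{W}}{0.03\ \text{m}^2}
  \approx 19.6\ \text{kW/m}^2
```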
1750a3716a03aaacdfbb0e25214beaa5e1e2b6ee
Ontology Development 101 : A Guide to Creating Your First Ontology
Why develop an ontology? In recent years the development of ontologies—explicit formal specifications of the terms in the domain and relations among them (Gruber 1993)—has been moving from the realm of Artificial Intelligence laboratories to the desktops of domain experts. Ontologies have become common on the World-Wide Web. The ontologies on the Web range from large taxonomies categorizing Web sites (such as on Yahoo!) to categorizations of products for sale and their features (such as on Amazon.com). The WWW Consortium (W3C) is developing the Resource Description Framework (Brickley and Guha 1999), a language for encoding knowledge on Web pages to make it understandable to electronic agents searching for information. The Defense Advanced Research Projects Agency (DARPA), in conjunction with the W3C, is developing DARPA Agent Markup Language (DAML) by extending RDF with more expressive constructs aimed at facilitating agent interaction on the Web (Hendler and McGuinness 2000). Many disciplines now develop standardized ontologies that domain experts can use to share and annotate information in their fields. Medicine, for example, has produced large, standardized, structured vocabularies such as SNOMED (Price and Spackman 2000) and the semantic network of the Unified Medical Language System (Humphreys and Lindberg 1993). Broad general-purpose ontologies are emerging as well. For example, the United Nations Development Program and Dun & Bradstreet combined their efforts to develop the UNSPSC ontology which provides terminology for products and services (www.unspsc.org). An ontology defines a common vocabulary for researchers who need to share information in a domain. It includes machine-interpretable definitions of basic concepts in the domain and relations among them. Why would someone want to develop an ontology? Some of the reasons are:
7c459c36e19629ff0dfb4bd0e541cc5d2d3f03e0
Generic Taxonomy of Social Engineering Attack
Social engineering is a type of attack that allows unauthorized access to a system in order to achieve a specific objective. Commonly, the purpose is to obtain information for the social engineer. Some successful social engineering attacks obtain victims’ information via human-based retrieval, for example techniques such as dumpster diving or shoulder surfing to gain access to a password. Alternatively, victims’ information can be stolen using technical-based methods such as pop-up windows, email, or web sites that capture the password or other sensitive information. This research performed a preliminary analysis of social engineering attack taxonomy, with an emphasis on types of technical-based social engineering attacks. Results from the analysis serve as a guideline for proposing a new generic taxonomy of Social Engineering Attack (SEA).